The handover concept

UMTS Handovers
Handover Concept
In mobile communications, a handover refers to the transfer of an ongoing call or data session from one channel to another, both of which are connected to the core network. It enables cellular users to make and receive calls anywhere and at any time, so this process provides mobility, making it possible for users to roam seamlessly from one cell to another. A handover is performed when the quality of the link between the base station and a moving mobile station drops below a certain threshold.


In this process the existing link is torn down and replaced by a link to the cell to which the user is handed over; that cell is called the target cell. The network controller decides, from measurement reports about link quality, whether a handover to another cell is needed. A handover failure is the inability of the network to establish the new connection to the target cell; it occurs when the target cell has no free resources or when the quality of its radio link is below some threshold value.
A handover request competes for resources in the same way as a new call, so resource utilization must be optimized in order to keep both the call dropping and call blocking probabilities low. It is generally accepted, however, that forcibly dropping an ongoing call is worse than blocking a new call.
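As a rough illustration of the decision just described, the following Python sketch checks a measurement report against a quality threshold and then tests whether the target cell can accept the call; the threshold value, field names and function names are illustrative assumptions, not taken from any UMTS specification.

# Hypothetical sketch of the handover decision described above.
# Thresholds and names are illustrative, not taken from any standard.

QUALITY_THRESHOLD = -100.0  # dBm: below this, the serving link is considered too weak

def handover_needed(serving_quality_dbm):
    """The controller requests a handover when the measured link
    quality of the serving cell drops below the threshold."""
    return serving_quality_dbm < QUALITY_THRESHOLD

def handover_outcome(target_free_channels, target_quality_dbm):
    """A handover fails only if the target cell has no free resources
    or its radio link quality is itself below the threshold."""
    if target_free_channels == 0 or target_quality_dbm < QUALITY_THRESHOLD:
        return "handover failure"
    return "handover success"

if __name__ == "__main__":
    if handover_needed(serving_quality_dbm=-105.0):
        print(handover_outcome(target_free_channels=3, target_quality_dbm=-92.0))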
Requirements of handovers
The handover process is required when the following situations occur.

When the user equipment is moving very fast.
When the user’s equipment moves from one cell to another during an ongoing session.
When the user’s equipment experiences interference from a nearby cell.

These are the basic conditions from which the network decides that a handover is required.
Handovers Aim
The main aim of the handover process is to allow mobile users to roam freely from one mobile network to another, whether the networks are of the same or different types. Handovers are also required to achieve load balancing among the different cells, to maintain good radio quality on the link between mobile users and the serving base station, and to minimize the interference level.
UMTS Handovers
An effective handover process is necessary in a UMTS network: it ensures mobility, maintains ongoing sessions, and preserves quality of service. In addition, UMTS handovers provide the freedom to move within the same or a different network, balance load, and minimize interference by keeping good radio link connectivity to the base stations.
Handovers Types in UMTS
The following are the different handover types in UMTS.

Horizontal Handovers
Vertical Handovers
Soft Handovers
Hard Handovers
Softer Handovers
Intra System Handover
Inter System Handover

Horizontal Handovers
A horizontal handover is the transfer of an ongoing session from one cell to another cell that uses the same access technology. For example, if the user equipment has a radio link to a GSM network, a horizontal handover must be from GSM to GSM. Similarly, a handover between two UMTS networks is a horizontal handover.
Vertical Handovers
A vertical handover is the transfer of an ongoing session or call from one cell to another cell that uses a different access technology. For example, when a mobile user moves from a GSM-based network to a UMTS network, the access technology changes, so the handover in this case is a vertical handover.
Loose coupling and tight coupling are the two architectures used for vertical handovers between UMTS and WLAN.
Hard Handovers
In a hard handover, the old radio link between the user equipment and the radio network controller is released before the new radio link is established. The source connection is broken first and the target connection is made afterwards, so this type of handover is also called “break before make”. These handoffs are designed to be nearly instantaneous in order to minimize the interruption of the call. Network engineers regard a hard handover as a discrete event during the ongoing call [1].
In the GSM system each cell operates on its own set of frequencies, so this type of handoff is used there as well. When a mobile user enters a new cell that uses different frequencies, the original connection has to be broken and a new connection is made with the target base station. The decision uses a simple algorithm: a handover is triggered when the signal strength from the current base station falls below that of a nearby cell whose signal is stronger than the current cell’s. In UMTS, hard handovers are used to change the frequency band between the user equipment and the UTRAN. A UMTS operator can acquire additional spectrum to increase capacity, so several 5 MHz bands may be used by one operator, and hard handovers are then needed between them. Hard handovers are also used to change cells on the same frequency when micro diversity is not supported, for example when a user equipment that has been allocated a dedicated control channel moves to a new, nearby cell of the UMTS network; even when other handovers such as soft or softer handovers are possible, a hard handover remains an option and may be performed [2].
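The “simple algorithm” mentioned above might be sketched as follows; the hysteresis margin is an added assumption (a common refinement to avoid ping-ponging between cells) and the cell names are hypothetical.

# Illustrative hard handover decision: break the current link and switch
# when a neighbour becomes clearly stronger than the serving cell.
# The hysteresis margin (in dB) is an assumed refinement, not from the essay.

HYSTERESIS_DB = 3.0

def choose_target(serving_rssi, neighbours):
    """Return the id of the neighbour cell to hand over to, or None
    if the serving cell should be kept."""
    best_cell, best_rssi = max(neighbours.items(), key=lambda kv: kv[1])
    if best_rssi > serving_rssi + HYSTERESIS_DB:
        return best_cell  # break before make: release the old link, then set up the new one
    return None

print(choose_target(-95.0, {"cellA": -90.0, "cellB": -99.0}))  # prints: cellA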
Pros and Cons of Hard Handovers
The pros and cons of hard handovers are discussed in the section below.
Pros

Hard handovers are simple and economical because the cellular phone hardware does not need to be able to connect to two or more channels at the same time.
Only one channel is used at any given time, which keeps the procedure simple.

Cons

If the handover process is not executed successfully, the call may be terminated.

Soft Handovers
In this type of handover the user equipment communicates in parallel with sectors of different Node-Bs; links are added and deleted in such a way that the mobile equipment and the UTRAN always keep at least one link. This type uses the technique known as micro diversity, in which several radio links are active at the same time. The technique has several advantages, listed in the bullets below [3].

The near-far effect is reduced.
The connection is more resistant to shadowing.
It offers the chance to transmit data via other Node-Bs, and thus communication is maintained.

The CDMA property that all Node-Bs use the same frequencies gives soft handover its edge. The set of Node-Bs to which the user equipment is connected is called the user equipment’s active set [4].
These handovers are also called “make before break”, because the connection to the other Node-B is made first and the old connection is released only after the connection to the target has been established.
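A minimal sketch of “make before break” active set maintenance is given below; the add and drop windows and the Node-B names are assumed illustrative values, not actual 3GPP soft handover parameters.

# Illustrative soft handover active set update ("make before break"):
# links are added before weaker ones are removed, so at least one link
# to the UTRAN is kept at all times. Window values are assumptions.

ADD_WINDOW_DB = 4.0    # add cells whose pilot is within this margin of the best cell
DROP_WINDOW_DB = 6.0   # drop cells that fall further than this below the best cell

def update_active_set(active_set, measurements):
    """Return the new active set, given pilot strengths {cell_id: dB}."""
    best = max(measurements.values())
    # Make: first add every cell whose pilot is close enough to the best.
    new_set = set(active_set) | {c for c, p in measurements.items()
                                 if p >= best - ADD_WINDOW_DB}
    # Break: then drop cells that have become too weak.
    weak = {c for c in new_set if measurements.get(c, best) < best - DROP_WINDOW_DB}
    new_set -= weak
    # Never drop the last link: fall back to the strongest measured cell.
    return new_set or {max(measurements, key=measurements.get)}

print(update_active_set({"NodeB1"}, {"NodeB1": -8.0, "NodeB2": -6.0, "NodeB3": -15.0}))
# prints a set containing NodeB1 and NodeB2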
Pros and Cons of Soft Handovers
The pros and cons of soft handovers are explained in the following section.
Pros

A more sophisticated handover type, in which the call dropping probability is lower than with hard handovers.
The connection to the target cell is more reliable than the source connection to which the user equipment was initially attached.

Cons

More than one radio link is used, so more complex hardware is needed to handle the simultaneous connections.
More than one channel is used in parallel for a single call, so the handover process must be managed so that the dropping probability remains as low as possible.

Softer Handovers
A softer handover is a special type of soft handover in which the user equipment communicates in parallel with different sectors of the same Node-B [3]. The user equipment and the RNC communicate over two different air interface channels, so two different codes are required in the downlink so that the user equipment can distinguish the signals. Rake processing in the user equipment allows it to receive the two signals.
Inter System Handovers
Different systems use different radio access techniques: UMTS uses WCDMA, while GSM uses TDMA. Inter-system handovers are handovers that take place between cells that use different radio access techniques.
Intra System Handovers
These handovers occur within a single system [18]. They can be observed, for example, in a dual-mode FDD-TDD terminal, where the handover takes place between the FDD and TDD modes. There are two special types of intra-system handovers, explained in the following sections.
Intra Frequency Handovers
In a WCDMA system, if an intra-system handover occurs between cells that use the same carrier frequency, it is called an intra-frequency handover.
Inter Frequency Handovers
In a WCDMA system, if an intra-system handover occurs between cells that use different carrier frequencies, it is called an inter-frequency handover.
Strategies of Handovers
Handovers can be carried out with different strategies, each with its own pros and cons. The strategy adopted depends on the quality of service that users require at a specific time and on the cost to the network. The following are the main handover strategies.
Non-prioritized strategy

In this strategy, handover requests are treated in the same way as new call requests, and no channels are set aside for them.

Reserved channel strategy

In this type of handover strategy, some of the channels are reserved for arriving handover calls.
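The reserved channel strategy can be illustrated with a short sketch for a single cell, in which a few channels are held back as guard channels for handover calls; the channel counts are assumptions chosen only for the example.

# Illustrative reserved (guard) channel admission policy:
# handover calls may use any free channel, while new calls are blocked
# once only the reserved channels remain. Numbers are assumptions.

TOTAL_CHANNELS = 50
GUARD_CHANNELS = 5     # kept aside for incoming handovers

def admit(busy_channels, is_handover):
    free = TOTAL_CHANNELS - busy_channels
    if is_handover:
        return free > 0              # a handover can take any free channel
    return free > GUARD_CHANNELS     # a new call must leave the guard channels free

print(admit(busy_channels=46, is_handover=False))  # False: only guard channels left
print(admit(busy_channels=46, is_handover=True))   # True: the handover is still admitted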
RNC role in the Handovers Process
Soft handovers are straightforward if the Node-Bs taking part in the handover are controlled by the same RNC. They become more difficult if the Node-Bs are controlled by different RNCs. If a problem occurs in the radio access network, the core network should not need to be aware of it; this becomes especially important when the RNCs cannot communicate with each other directly over the Iur interface.
 

Reflection on the Concept of Knowledge

“That which is accepted as knowledge today is sometimes discarded tomorrow.” Consider knowledge issues raised by this statement in two areas of knowledge.

Plato once said, “Knowledge is a justified, true belief.” [1] It’s not just a systematic organization of facts, but what an individual deems true and invests faith in. When we talk about knowledge being “discarded,” does it mean that it’s nullified and no longer used? Or does it mean that it’s temporarily ignored due to differing opinions? In my opinion, knowledge can be debunked, as in discarded, or temporarily put on hold, much like theories. As the statement is further explored, another question arises as to who “accepts” knowledge and who “discards” it. I believe knowledge should always be backed by legitimate evidence.
In my study, I want to explore multiple perspectives – the various possibilities, ideas, and the holistic view through which our world ought to be explored – in order to understand what knowledge truly is and its significance in our lives. One begins to question the usefulness of knowledge if it will eventually become obsolete anyway. If knowledge can change so easily, do we have the right to question the validity of current theorems if they have only a temporary existence?


I believe that eventually it is up to the individual to accept knowledge as it is today. However, if one wants to question it, they have the right to do so because, if no one questioned information, newer information would never come into existence and the world would never progress. This does not mean that in our progress toward the future we can forget the past. In the modern world, two widely known areas of knowledge with numerous practical applications, the natural sciences and history, have undergone drastic changes that revolutionized each field. To further my study I will be using three different ways of knowing: reason, sense perception and emotion.
History as we know it is a record of our entire past: experiences, information and ideas. It shows us the way the world was, or what we thought it was, in previous generations. We can clearly see, through a panorama of perception, the radical change in knowledge, evident in modern humans’ different way of thinking from that of their ancestors. In the natural sciences, on the other hand, we see sweeping changes all over the globe occurring even as we speak. We are given new things to see, to explore and to question due to the rapid development of technology and scientific research. But how legitimate is this? Is it possible that some of the material we know today is perhaps less sensible than the ‘outdated’ discoveries, or the information that our ancestors perceived as right? If so, how do we bank on what is right and what is wrong, or how do we predict what could change and what could not?
A theory that has long been discarded is that of spontaneous generation[2]. This stated that living beings originated from inanimate, lifeless substances, such as rocks. Our ancestors developed this through viewing the growth of maggots from rotting meat. Although this concept seems ridiculous now, we must understand that this theory was believed by most scientists of the 19th century. In fact, it was considered a scientific fact. However, the theory of falsification[3], which basically tells us that there is an inherent possibility that a hypothesis or theory can be false, is an example of the instability of knowledge. This is where those who believe in wide-ranging perception come in. This is where perception kicks in as a key element of survival and of understanding knowledge. “Spontaneous generation” was countered by Louis Pasteur in 1859, who put it to the test[4]. He placed two pieces of meat in separate jars, one opened and the other closed. He observed maggots growing only in the one that was opened. Thus, he concluded that the origin of the maggots must lie outside: living organisms in the air. In truth it was flies that had laid their eggs in the meat to nourish their young. In an instant our view of the world and our perspective on the origins of life were debunked. Nevertheless, people began to believe just as completely in a whole new theory proposed by Pasteur[5]. On this basis, at this rate, if a major portion of an entire generation would believe in the same fact for years without doubt, then where does the fate of humankind lie?
I believe that I can find the right information using both intuition and reasoning. For example, when you look at all the historians that worked hard to define knowledge through their works or investigations, you see flaws in the knowledge that we had blindly believed for generations. Take the internet era’s historic event, the 9/11 attack on the World Trade Center: conspiracy theorists claimed the towers collapsed in 9 seconds, suggesting that the buildings had been rigged with explosives prior to the attack[6]. This theory was supported by Rosie O’Donnell, who stated that an investigation was a must. If this had never been questioned, an entire historic event would simply have been falsified in the records due to one person’s flawed research. Many people would have believed her account despite never even witnessing the actual footage of the buildings collapsing, which took more than 20 seconds. This defies the entire logic of the buildings falling at “free-fall” speed, shattering the entirely false conspiracy. Not only can such a theory affect the emotional stability of researchers, patriots and ordinary men and women, it can also create a sore patch in the minds of the victims’ families who actually underwent trauma through such events. Nevertheless, we now understand that the peculiar collapse of the buildings was due to the fact that they had been built with triangles around the sectors of the building because of their enormity. A majority of people, however, did not know the truth and based their views on limited knowledge of something never completely understood. Only when people started looking into the matter themselves was it debunked. If this same process were repeated throughout history, we could find many loopholes. Ultimately, it lies with the individual whether to accept or deny the knowledge granted. Perception is what drives this; people choose what they believe in.
Our reasoning cannot always be right, but we are rational beings, capable of making informed decisions with some prior knowledge. Some essential human-based facts will always persist, and the key to understanding these facts lies beyond simply accepting them. To truly understand a concept one must ask questions about that specific subject; one’s knowledge can then either be further strengthened or one’s entire perception could change. People unnecessarily take information sculpted by someone else’s research without doing any of their own, based on the idea that the researcher who took the time to do the investigation must be correct. It must be regarded as false until the one who receives the information actually looks into the matter and validates the knowledge.
In a world where information changes every day, some ideas persist and some simply vanish, making way for new ones. One such idea, long ingrained in the mind of humanity, was the concept of a static universe. This image of the universe persisted even into the twentieth century. In fact, one of the greatest intellectual minds, Albert Einstein, believed in this concept. When he created his theory of the universe, the general theory of relativity, in 1915, he added a seemingly arbitrary concept just to accommodate it. He introduced the idea of a cosmological constant, an all-pervading force that would prevent the universe from contracting under gravity and keep it static. Soon afterwards, however, Edwin Hubble observed a red shift in the nearby galaxies and an even larger shift in those further away. A red shift occurs when light emitted by a source that is moving away from the observer, a galaxy for example, becomes elongated. This phenomenon was observed on all sides of us, and it increases with distance, meaning that the universe is expanding in all directions. Einstein did not at first accept this knowledge and had unnecessarily complicated his theory by adding a constant that was no longer needed. The information that was proven true was not accepted, as previous knowledge was stuck in his mind, restricting his ability to formulate a realistic theory. Later, he understood the validity of the information and incorporated the idea of an expanding universe into his theory. Knowledge that had been needlessly discarded was eventually accepted as true.
But, the urge to question, the urge to want to know more will always be a crucial part of the human mind. This is what will lead us to want to change the knowledge we know today and enhance current knowledge. It doesn’t stop there though; perception is the key to becoming a knowledgeable thinker. If one thinks critically about all the minute and grand paradigms of the universe, the inventive scope for more knowledge could be limitless.
Bibliography
http://oregonstate.edu/instruct/phl201/modules/Philosophers/Protagoras/protagoras_plato_knowledge.htm
http://science.howstuffworks.com/innovation/scientific-experiments/scientific-method5.htm
http://en.wikipedia.org/wiki/Falsifiability
http://listverse.com/2009/01/19/10-debunked-scientific-beliefs-of-the-past/
http://www.pasteurbrewing.com/the-life-and-work-of-louis-pasteur/experiments/louis-pasteurs-experiment-to-refute-spontaneous-generation/204.html
http://www.debunking911.com/freefall.htm

[1] http://oregonstate.edu/instruct/phl201/modules/Philosophers/Protagoras/protagoras_plato_knowledge.htm
[2] http://science.howstuffworks.com/innovation/scientific-experiments/scientific-method5.htm
[3] http://en.wikipedia.org/wiki/Falsifiability
[4] http://listverse.com/2009/01/19/10-debunked-scientific-beliefs-of-the-past/
[5] http://www.pasteurbrewing.com/the-life-and-work-of-louis-pasteur/experiments/louis-pasteurs-experiment-to-refute-spontaneous-generation/204.html
[6] http://www.debunking911.com/freefall.htm
 

History And Fundamental Concept Of Acoustic Music Essay

Acoustics is the study of the physical characteristics of sound. It deals with things like the frequency, amplitude and complexity of sound waves and how sound waves interact with various environments. The word can also refer casually and generally to the overall quality of sound in a given place. Someone might say in a non-technical conversation: “I like to perform at Smith Hall; the acoustics are very bright.”


From the everyday sounds of speech, the hum of appliances, to the sounds caused by wind and water, we are immersed in an ocean of sound. Yet, what is sound, and how do we “hear” it? Why do two instruments playing the same note “sound” different? In this lab you will learn the basics of the answers to these questions. To answer the latter question, we will analyze sound as an audio engineer would, through a technique called harmonic analysis. Harmonic analysis allows sound to be understood from a quantitative perspective. We will also come to an understanding of why the way a computer analyses sound is similar to how our ears analyse sound.
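As a small taste of harmonic analysis, the sketch below (using Python and NumPy) builds a synthetic tone from a fundamental and two overtones and then uses a Fourier transform to recover which frequencies are present and how strong they are; the 220 Hz tone and its harmonic amplitudes are made-up example values, not measurements of a real instrument.

# Minimal harmonic analysis sketch: synthesise a tone with overtones,
# then use the FFT to see which frequencies it contains and how strong they are.
import numpy as np

fs = 8000                      # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of samples

# A synthetic "instrument": 220 Hz fundamental plus two weaker harmonics.
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Report the three strongest spectral peaks.
peaks = np.argsort(spectrum)[-3:]
for i in sorted(peaks):
    print(f"{freqs[i]:6.1f} Hz  relative amplitude {spectrum[i] / spectrum.max():.2f}")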
I will start this genre presentation by introducing the genre of acoustic music. It isn’t really a genre, as music played with acoustic instruments can sound very different, but I chose to call the post this, as acoustic music has many similarities. If you like these songs, you should really check out Bedtime Tunes, which is a site only with songs like these. So without further ado, here are 11 songs with acoustic guitars, pianos, strings and beautiful voices. First here is Antony Hegarty with his band Antony and the Johnsons. Antony Hegarty is a very special person, he is transgender, and his voice is absolutely amazing. Unfortunately I haven’t seen him live, but I’ve heard that almost all of the audience comes out of the concert crying.
Acoustics (from the Greek akoustikos, meaning “of or for hearing, ready to hear”) is the science that studies sound, in particular its production, transmission, and effects. Sound can often be considered as something pleasant; an example of this would be music. In that case a main application is room acoustics, since the purpose of room acoustical design and optimisation is to make a room sound as good as possible. But some noises can also be unpleasant and make people feel uncomfortable. In fact, noise reduction is a major challenge, particularly within the transportation industry, as people are becoming more and more demanding. Furthermore, ultrasound also has applications in detection, such as sonar systems or non-destructive material testing.
2. History of acoustics
When he first mentioned the “Acoustique Art” in his Advancement of Learning (1605), Francis Bacon (1561-1626) was drawing a distinction between the physical acoustics he expanded in the Sylva Sylvarum (1627) and the harmonics of the Pythagorean mathematical tradition. The Pythagorean tradition still survived in Bacon’s time in the works of such diverse people as Gioseffo Zarlino (1517-1590), René Descartes (1596-1650), and Johannes Kepler (1571-1630). In Bacon’s words: “The nature of sounds, in some sort, [hath been with some diligence inquired,] as far as concerneth music. But the nature of sounds in general hath been superficially observed. It is one of the subtlest pieces of nature”.
Bacon’s “Acoustique Art” was therefore concerned with the study of “immusical sounds” and with experiments in the “migration in sounds”, as well as with the harnessing of sounds in buildings (architectural acoustics) by their “enclosure” in artificial channels inside the walls or in the environment (hydraulic acoustics). The aim of Baconian acoustics was to catalog, quantify, and shape human space by means of sound. This stemmed from the echometria, an early modern tradition of literature on the echo, as studied by the mathematicians Giuseppe Biancani (1566-1624), Marin Mersenne (1588-1648), and Daniello Bartoli (1608-1685), in which the model of optics was applied in acoustics to the behaviour of sound. It was in a sense a historical antecedent to Isaac Newton’s (1642-1727) analogy between colours and musical tones in Opticks (1704). Athanasius Kircher’s (1601-1680) Phonurgia Nova of 1673 was the outcome of this tradition. Attacking British acoustics traditions, Kircher argued that the “origin of the Acoustical Art” lay in his own earlier experiments with sounding tubes at the Collegio Romano in 1649 and sketched the ideology of a Christian baroque science of acoustics designed to dominate the world by exploiting the “boundless powers of sound”.
17th-century empirical observations and mathematical explanations of the simultaneous vibrations of a string at different frequencies were important in the development of modern experimental acoustics. The earliest contribution in this branch of acoustics was made by Mersenne, who derived the mathematical law governing the physics of a vibrating string. Around 1673 Christiaan Huygens (1629-1695) estimated its absolute frequency, and in 1677 John Wallis (1616-1703) published a report of experiments on the overtones of a vibrating string. In 1692 Francis Roberts (1650-1718) followed with similar findings.
These achievements paved the way for the 18th-century acoustique of Joseph Sauveur (1653-1716) and for the work of Brook Taylor (1685-1731), Leonhard Euler (1707-1783), Jean Le Rond d’Alembert (1717-1783), Daniel Bernoulli (1700-1782), and Giordano Riccati (1709-1790), who all attempted to determine mathematically the fundamental tone and the overtones of a sonorous body. Modern experimental acoustics sought in nature, as a physical law of the sounding body, the perfect harmony that in the Pythagorean tradition sprang from the mind of the “geometrizing God.” Experimental epistemology in acoustics also influenced studies of the anatomy and physiology of hearing, especially the work of Joseph-Guichard Duverney (1648-1730) and Antonio Maria Valsalva (1666-1723), which in the 19th century gave rise to physiological and psychological acoustics.
3. Fundamental concepts of acoustics
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into acoustic energy, producing the acoustic wave. There is one fundamental equation that describes acoustic wave propagation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves. Acoustics looks first at the pressure levels and frequencies in the sound wave. Transduction processes are also of special importance.
4. Application of Acoustics
Acoustics is the science of sound and hearing. It treats the sonic qualities of rooms and buildings, and the transmission of sound by the voice, musical instruments or electrical means. Sound is caused by vibration, which is communicated by the sound source to the air as fluctuations in pressure and then to the listener’s eardrum. The faster the vibration (or the greater its ‘frequency’), the higher the pitch. The greater the amplitude of the vibration, the louder the sound. Most musical sounds consist not only of a regular vibration at one particular frequency but also of vibrations at various multiples of that frequency. The frequency of middle C is about 256 cycles per second (or Hertz, abbreviated Hz), but when one hears middle C there are components of the sound vibrating at 512 Hz, 768 Hz, etc. (see Harmonics). The presence and relative strength of these harmonics determine the quality of a sound. The difference in quality, for example, between a flute, an oboe and a clarinet playing the same note is that the flute’s tone is relatively ‘pure’ (i.e. has few and weak harmonics), the oboe is rich in higher harmonics and the clarinet has a preponderance of odd-numbered harmonics. Their different harmonic spectra are caused primarily by the way the sound vibration is actuated (by the blowing of air across an edge with the flute, by the oboe’s double reed and the clarinet’s single reed) and by the shape of the tube. Where the player’s lips are the vibrating agent, as with most brass instruments, the tube can be made to sound not its fundamental note but other harmonics by means of the player’s lip pressure.
A vibrating air column is only one of the standard ways of creating musical sound. The longer the column, the lower the pitch; the player can raise the pitch by uncovering holes in the tube. With the human voice, air is set in motion by means of the vocal cords, folds in the throat which convert the air stream from the lungs into sound; pitch is controlled by the size and shape of the cavities in the pharynx and mouth. For a string instrument, such as the violin, the guitar or the piano, the string is set in vibration by (respectively) bowing, plucking or striking; the tighter and thinner the string, the faster it will vibrate. By pressing the string against the fingerboard and thus making the operative string-length shorter, the player can raise the pitch. With a percussion instrument, such as the drum or the xylophone, a membrane or a piece of wood is set in vibration by striking; sometimes the vibration is regular and gives a definite pitch, but sometimes the pitch is indefinite.
In the recording of sound, the vibration patterns set up by the instrument or instruments to be recorded are encoded in analogue form (or, in recent recordings, digitally) as electrical impulses. This information can then be stored in mechanical or electronic form; it can then be decoded, amplified and conveyed to loudspeakers, which transmit the same vibration pattern to the air.
The study of the acoustics of buildings is immensely complicated because of the variety of ways in which sound is conveyed, reflected, diffused, absorbed, etc. The design of buildings for performances has to take account of such matters as the smooth and even representation of sound at all pitches in all parts of the building, the balance of clarity and blend, and the directions in which reflected sound may impinge upon the audience. The use of particular materials (especially wood and artificial acoustical substances) and the breaking-up of surfaces, to avoid certain types of reflection of sound, play a part in the design of concert halls, which however remains an uncertain art in which experimentation and ‘tuning’ (by shifting surfaces, by adding resonators, etc.) are often necessary. The term ‘acoustic’ is sometimes used, of a recording or an instrument, to mean ‘not electric’: an acoustic recording is one made before electric methods came into use, and an acoustic guitar is one not electrically amplified.
4.1 Theory of acoustics
The area of physics known as acoustics is devoted to the study of the production, transmission, and reception of sound. Thus, wherever sound is produced and transmitted, it will have an effect somewhere, even if there is no one present to hear it. The medium of sound transmission is an all-important, key factor. Among the areas addressed within the realm of acoustics are the production of sound by the human voice and various instruments, as well as the reception of sound waves by the human ear.
5. Working concept of acoustics
Sound waves are an example of a larger phenomenon known as wave motion, and wave motion is, in turn, a subset of harmonic motion-that is, repeated movement of a particle about a position of equilibrium, or balance. In the case of sound, the “particle” is not an item of matter, but of energy, and wave motion is a type of harmonic movement that carries energy from one place to another without actually moving any matter.
Particles in waves experience oscillation, harmonic motion in one or more dimensions. Oscillation itself involves little movement, though some particles do move short distances as they interact with other particles. Primarily, however, it involves only movement in place. The waves themselves, on the other hand, move across space, ending up in a position different from the one in which they started.
A transverse wave forms a regular up-and-down pattern in which the oscillation is perpendicular to the direction the wave is moving. This is a fairly easy type of wave to visualize: imagine a curve moving up and down along a straight line. Sound waves, on the other hand, are longitudinal waves, in which oscillation occurs in the same direction as the wave itself.
These oscillations are really just fluctuations in pressure. As a sound wave moves through a medium such as air, these changes in pressure cause the medium to experience alternating compression (an increase in density) and rarefaction (a decrease in density). This, in turn, produces vibrations in the human ear or in any other object that receives the sound waves.
5.1 Properties of Sound Waves
5.1.1 Cycle and Period
The term cycle has a definition that varies slightly, depending on whether the type of motion being discussed is oscillation, the movement of transverse waves, or the motion of a longitudinal sound wave. In the latter case, a cycle is defined as a single complete vibration.
A period (represented by the symbol T) is the amount of time required to complete one full cycle. The period of a sound wave can be mathematically related to several other aspects of wave motion, including wave speed, frequency, and wavelength.
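For example, the reciprocal relationship T = 1/f can be sketched directly; the 264 Hz and 20,000 Hz values below are the middle C and upper-hearing-limit figures quoted elsewhere in this essay.

# Period and frequency are reciprocals of each other: T = 1 / f.
def period_from_frequency(f_hz):
    """Time in seconds needed to complete one full cycle of a wave of frequency f_hz."""
    return 1.0 / f_hz

print(period_from_frequency(264.0))    # middle C: about 0.0038 s per cycle
print(period_from_frequency(20000.0))  # upper limit of hearing: 0.00005 s per cycle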
5.1.2 The Speed of Sound in Various Medium
People often refer to the “speed of sound” as though this were a fixed value like the speed of light, but, in fact, the speed of sound is a function of the medium through which it travels. What people ordinarily mean by the “speed of sound” is the speed of sound through air at a specific temperature. For sound travelling at sea level, the speed at 32°F (0°C) is 740 MPH (331 m/s), and at 68°F (20°C), it is 767 MPH (343 m/s).
In the essay on aerodynamics, the speed of sound for aircraft was given as about 660 MPH (295 m/s). This is much less than the figures given above for the speed of sound through air at sea level because, obviously, aircraft are not flying at sea level, but well above it, and the air through which they pass is well below freezing temperature.
The speed of sound through a gas is proportional to the square root of the pressure divided by the density. According to Gay-Lussac’s law, pressure is directly related to temperature, meaning that the lower the pressure, the lower the temperature-and vice versa. At high altitudes, the temperature is low, and, therefore, so is the pressure; and, due to the relatively small gravitational pull that Earth exerts on the air at that height, the density is also low. Hence, the speed of sound is also low.
It follows that the higher the pressure of the material, and the greater the density, the faster sound travels through it: thus sound travels faster through a liquid than through a gas. This might seem a bit surprising: at first glance, it would seem that sound travels fastest through air, but only because we are just more accustomed to hearing sounds that travel through that medium. The speed of sound in water varies from about 3,244 MPH (1,450 m/s) to about 3,355 MPH (1500 m/s). Sound travels even faster through a solid-typically about 11,185 MPH (5,000 m/s)-than it does through a liquid.
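For an ideal gas the proportionality stated above becomes v = sqrt(γP/ρ), where the adiabatic constant γ (about 1.4 for air) is an assumption added here to turn the proportionality into an equation; the sketch below reproduces the roughly 343 m/s figure quoted earlier for air at about 20°C, using standard textbook values for pressure and density.

# Speed of sound in a gas: proportional to sqrt(pressure / density).
# For an ideal gas the constant of proportionality is sqrt(gamma), where gamma
# is the adiabatic index (about 1.4 for air). The gamma value is an assumption
# added for this sketch; the essay only states the proportionality.
from math import sqrt

def speed_of_sound(pressure_pa, density_kg_m3, gamma=1.4):
    return sqrt(gamma * pressure_pa / density_kg_m3)

# Air at sea level and roughly 20 deg C (standard textbook values).
print(speed_of_sound(pressure_pa=101_325, density_kg_m3=1.204))  # about 343 m/s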
5.1.3 Frequency
Frequency (abbreviated f) is the number of waves passing through a given point during the interval of one second. It is measured in hertz (Hz), named after the nineteenth-century German physicist Heinrich Rudolf Hertz (1857-1894); one hertz is equal to one cycle of oscillation per second. Higher frequencies are expressed in terms of kilohertz (kHz; 10³ or 1,000 cycles per second) or megahertz (MHz; 10⁶ or 1 million cycles per second).
The human ear is capable of hearing sounds from 20 to approximately 20,000 Hz-a relatively small range for a mammal, considering that bats, whales, and dolphins can hear sounds at a frequency up to 150 kHz. Human speech is in the range of about 1 kHz, and the 88 keys on a piano vary in frequency from 27 Hz to 4,186 Hz. Each note has its own frequency, with middle C (the “white key” in the very middle of a piano keyboard) at 264 Hz. The quality of harmony or dissonance when two notes are played together is a function of the relationship between the frequencies of the two.
Frequencies below the range of human audibility are called infrasound, and those above it are referred to as ultrasound. There are a number of practical applications for ultrasonic technology in medicine, navigation, and other fields.
5.1.4 Wavelength
Wavelength (represented by the symbol λ, the Greek letter lambda) is the distance between a crest and the adjacent crest, or a trough and an adjacent trough, of a wave. The higher the frequency, the shorter the wavelength, and vice versa. Thus, a frequency of 20 Hz, at the bottom end of human audibility, has a very large wavelength: 56 ft. (17 m). The top end frequency of 20,000 Hz is only 0.67 inches (17 mm).
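The two wavelength figures above follow directly from λ = v/f with v taken as roughly 343 m/s, as this small sketch shows.

# Wavelength = speed of sound / frequency (lambda = v / f).
SPEED_OF_SOUND_AIR = 343.0  # m/s in air at about 20 deg C

def wavelength_m(frequency_hz):
    return SPEED_OF_SOUND_AIR / frequency_hz

print(wavelength_m(20))      # about 17.15 m  (bottom of human hearing)
print(wavelength_m(20_000))  # about 0.017 m, i.e. roughly 17 mm (top of human hearing)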
There is a special type of high-frequency sound wave beyond ultrasound: hyper sound, which has frequencies above 10⁷ MHz, or 10 trillion Hz. It is almost impossible for hyper sound waves to travel through all but the densest media, because their wavelengths are so short. In order to be transmitted properly, hyper sound requires an extremely tight molecular structure; otherwise, the wave would get lost between molecules.
Wavelengths of visible light, part of the electromagnetic spectrum, have a frequency much higher even than hyper sound waves: about 10⁹ MHz, 100 times greater than for hyper sound. This, in turn, means that these wavelengths are incredibly small, and this is why light waves can easily be blocked out by using one’s hand or a curtain.
The same does not hold for sound waves, because the wavelengths of sounds in the range of human audibility are comparable to the size of ordinary objects. To block out a sound wave, one needs something of much greater dimensions-width, height, and depth-than a mere cloth curtain. A thick concrete wall, for instance, may be enough to block out the waves. Better still would be the use of materials that absorb sound, such as cork, or even the use of machines that produce sound waves which destructively interfere with the offending sounds.
5.1.5 Amplitude and Intensity
Amplitude is critical to the understanding of sound, though it is mathematically independent from the parameters so far discussed. Defined as the maximum displacement of a vibrating material, amplitude is the “size” of a wave. The greater the amplitude, the greater the energy the wave contains: amplitude indicates intensity, commonly known as “volume,” which is the rate at which a wave moves energy per unit of a cross-sectional area.
Intensity can be measured in watts per square meter, or W/m². A sound wave of minimum intensity for human audibility would have a value of 10⁻¹², or 0.000000000001, W/m². As a basis of comparison, a person speaking in an ordinary tone of voice generates about 10⁻⁴, or 0.0001, watts. On the other hand, a sound with an intensity of 1 W/m² would be powerful enough to damage a person’s ears.
5.2 Real-Life Applications
5.2.1 Decibel Levels
For measuring the intensity of a sound as experienced by the human ear, we use a unit other than the watt per square meter, because ears do not respond to sounds in a linear, or straight-line, progression. If the intensity of a sound is doubled, a person perceives a greater intensity, but nothing approaching twice that of the original sound. Instead, a different system-known in mathematics as a logarithmic scale-is applied.
In measuring the effect of sound intensity on the human ear, a unit called the decibel (abbreviated dB) is used. A sound of minimal audibility (10⁻¹² W/m²) is assigned the value of 0 dB, and 10 dB is 10 times as great: 10⁻¹¹ W/m². But 20 dB is not 20 times as intense as 0 dB; it is 100 times as intense, or 10⁻¹⁰ W/m². Every increase of 10 dB thus indicates a tenfold increase in intensity. Therefore, 120 dB, the maximum decibel level that a human ear can endure without experiencing damage, is not 120 times as great as the minimal level for audibility, but 10¹² (1 trillion) times as great, equal to 1 W/m², referred to above as the highest safe intensity level.
Of course, sounds can be much louder than 120 dB: a rock band, for instance, can generate sounds of 125 dB, which is more than three times the intensity of the maximum safe level. A gunshot, firecracker, or a jet, if one is exposed to these sounds at a sufficiently close proximity, can be as high as 140 dB, or 100 times the intensity of the maximum safe level. Nor is 120 dB safe for prolonged periods: hearing experts indicate that regular and repeated exposure to even 85 dB (5 dB less than a lawn mower) can cause permanent damage to one’s hearing.
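The logarithmic relationship described here is dB = 10·log10(I/I₀) with I₀ = 10⁻¹² W/m² as the reference intensity; the short sketch below converts between intensity and decibel level and confirms that a 5 dB step corresponds to roughly a 3.16-fold change in intensity.

# Decibel scale: every 10 dB step is a tenfold increase in intensity.
from math import log10

I0 = 1e-12  # W/m^2, threshold of human audibility (0 dB)

def intensity_to_db(intensity_w_m2):
    return 10 * log10(intensity_w_m2 / I0)

def db_to_intensity(level_db):
    return I0 * 10 ** (level_db / 10)

print(intensity_to_db(1.0))                         # 120 dB: the highest safe intensity level
print(db_to_intensity(125) / db_to_intensity(120))  # about 3.16 times the intensity of 120 dB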
5.3 Production of Sound Waves
5.3.1 Musical Instruments
Sound waves are vibrations; thus, in order to produce sound, vibrations must be produced. For a stringed instrument, such as a guitar, harp, or piano, the strings must be set into vibration, either by the musician’s fingers or the mechanism that connects piano keys to the strings inside the case of the piano.
In woodwind instruments and horns, the musician causes vibrations by blowing into the mouthpiece. The exact process by which the vibrations emerge as sound differs between woodwind instruments, such as a clarinet or saxophone on the one hand, and brass instruments, such as a trumpet or trombone on the other. Then there is the drum or other percussion instrument, which produces vibrations, if not always musical notes.
5.3.2 Electronic Amplification
Sound is a form of energy: thus, when an automobile or other machine produces sound incidental to its operation, this actually represents energy that is lost. Energy itself is conserved, but not all of the energy put into the machine can ever be realized as useful energy; thus, the automobile loses some energy in the form of sound and heat.
The fact that sound is energy, however, also means that it can be converted to other forms of energy, and this is precisely what a microphone does: it receives sound waves and converts them to electrical energy. These electrical signals are transmitted to an amplifier, and next to a loudspeaker, which turns electrical energy back into sound energy-only now, the intensity of the sound is much greater.
Inside a loudspeaker is a diaphragm, a thin, flexible disk that vibrates with the intensity of the sound it produces. When it pushes outward, the diaphragm forces nearby air molecules closer together, creating a high-pressure region around the loudspeaker. (Remember, as stated earlier, that sound is a matter of fluctuations in pressure.) The diaphragm is then pushed backward in response, freeing up an area of space for the air molecules. These, then, rush toward the diaphragm, creating a low-pressure region behind the high-pressure one. The loudspeaker thus sends out alternating waves of high and low pressure, vibrations at the same frequency as the original sound.
5.3.3 The Human Voice
As impressive as the electronic means of sound production are (and of course the description just given is highly simplified), this technology pales in comparison to the greatest of all sound-producing mechanisms: the human voice. Speech itself is a highly complex physical process, much too involved to be discussed in any depth here. For our present purpose, it is important only to recognize that speech is essentially a matter of producing vibrations on the vocal cords, and then transmitting those vibrations.
Before a person speaks, the brain sends signals to the vocal cords, causing them to tighten. As speech begins, air is forced across the vocal cords, and this produces vibrations. The action of the vocal cords in producing these vibrations is, like everything about the miracle of speech, exceedingly involved: at any given moment as a person is talking, parts of the vocal cords are opened, and parts are closed.
The sound of a person’s voice is affected by a number of factors: the size and shape of the sinuses and other cavities in the head, the shape of the mouth, and the placement of the teeth and tongue. These factors influence the production of specific frequencies of sound, and result in differing vocal qualities. Again, the mechanisms of speech are highly complicated, involving action of the diaphragm (a partition of muscle and tissue between the chest and abdominal cavities), larynx, pharynx, glottis, hard and soft palates, and so on. But, it all begins with the production of vibrations.
6. Propagation: Does It Make a Sound
As stated in the introduction, acoustics is concerned with the production, transmission (sometimes called propagation), and reception of sound. Transmission has already been examined in terms of the speed at which sound travels through various media. One aspect of sound transmission needs to be reiterated, however: for sound to be propagated, there must be a medium.
There is an age-old “philosophical” question that goes something like this: If a tree falls in the woods and there is no one to hear it, does it make a sound? In fact, the question is not a matter of philosophy at all, but of physics, and the answer is, of course, “yes.” As the tree falls, it releases energy in a number of forms, and part of this energy is manifested as sound waves.
Consider, on the other hand, this rephrased version of the question: “If a tree falls in a vacuum-an area completely devoid of matter, including air-does it make a sound?” The answer is now a qualified “no”: certainly, there is a release of energy, as before, but the sound waves cannot be transmitted. Without air or any other matter to carry the waves, there is literally no sound.
Hence, there is a great deal of truth to the tagline associated with the 1979 science-fiction film Alien: “In space, no one can hear you scream.” Inside an astronaut’s suit, there is pressure and an oxygen supply; without either, the astronaut would perish quickly. The pressure and air inside the suit also allow the astronaut to hear sounds within the suit, including communications via microphone from other astronauts. But, if there were an explosion in the vacuum of deep space outside the spacecraft, no one inside would be able to hear it.
7. Reception of Sound
7.1 Recording
Earlier the structure of electronic amplification was described in very simple terms. Some of the same processes-specifically, the conversion of sound to electrical energy-are used in the recording of sound. In sound recording, when a sound wave is emitted, it causes vibrations in a diaphragm attached to an electrical condenser. This causes variations in the electrical current passed on by the condenser.
These electrical pulses are processed and ultimately passed on to an electromagnetic “recording head.” The magnetic field of the recording head extends over the section of tape being recorded: what began as loud sounds now produce strong magnetic fields, and soft sounds produce weak fields. Yet, just as electronic means of sound production and transmission are still not as impressive as the mechanisms of the human voice, so electronic sound reception and recording technology is a less magnificent device than the human ear.
8. How the Ear Hears
As almost everyone has noticed, a change in altitude (and, hence, of atmospheric pressure) leads to a strange “popping” sensation in the ears. Usually, this condition can be overcome by swallowing, or even better, by yawning. This opens the Eustachian tube, a passageway that maintains atmospheric pressure in the ear. Useful as it is, the Eustachian tube is just one of the human ear’s many parts.
The “funny” shape of the ear helps it to capture and amplify sound waves, which pass through the ear canal and cause the eardrum to vibrate. Though humans can hear sounds over a much wider range, the optimal range of audibility is from 3,000 to 4,000 Hz. This is because the structure of the ear canal is such that sounds in this frequency range produce magnified pressure fluctuations. Thanks to this, as well as other specific properties, the ear acts as an amplifier of sounds. Beyond the eardrum is the middle ear, an intricate sound-reception device containing some of the smallest bones in the human body, bones commonly known, because of their shapes, as the hammer, anvil, and stirrup. Vibrations pass from the hammer to the anvil to the stirrup, through the membrane that covers the oval window, and into the inner ear.
Filled with liquid, the inner ear contains the semi-circular canals responsible for providing a sense of balance or orientation: without these, a person literally “would not know which way is up.” Also, in the inner ear is the cochlea, an organ shaped like a snail. Waves of pressure from the fluids of the inner ear are passed through the cochlea to the auditory nerve, which then transmits these signals to the brain.
The basilar membrane of the cochlea is a particularly wondrous instrument, responsible in large part for the ability to discriminate between sounds of different frequencies and intensities. The surface of the membrane is covered with thousands of fibres, which are highly sensitive to disturbances, and it transmits information concerning these disturbances to the auditory nerve. The brain, in turn, forms a relation between the position of the nerve ending and the frequency of the sound. It also equates the degree of disturbance in the basilar membrane with the intensity of the sound: the greater the disturbance, the louder the sounds.
 

Lean Manufacturing: Concept Overview and Disadvantages

Introduction
“The most noteworthy evolution of lean accounting in recent years has been a sharpening focus on value. Lean has always been centered on creating value for customers and eliminating non-value adding waste” (Asefeso, p 9). Lean accounting has been steadily making it possible for manufacturers to explicitly measure value in financial terms and to focus improvement efforts on increasing value. With many manufacturers now implementing lean, it becomes essential to discover what part lean accounting has played in the changes made. This paper will give a brief background of lean manufacturing and a general overview of what lean accounting is. I will also explore some problems and disadvantages of lean accounting from various researched articles.
Background of Lean Manufacturing
Lean is a philosophy that grew out of the Toyota Production System (TPS). TPS was created by Sakichi Toyoda, Kiichiro Toyoda, and Taiichi Ohno. Much of TPS was also influenced by W. Edwards Deming’s statistical process control (SPC) and Henry Ford’s mass production lines. However, the Japanese were not impressed with Ford’s approach because it was filled with over-production, lots of inventory, and much waiting. Toyota identified these weaknesses in Ford’s production line and adapted it to create a more productive and reliable production line. TPS and lean also use just-in-time inventory, where only small amounts of inventory are ordered and very little inventory is left waiting in the production line. This is very different from Ford’s production line, which usually bought high volumes of materials and held high inventory levels to lower costs.


After TPS proved to be successful for Toyota, many companies adapted their production lines to incorporate lean principles. Lean management was first introduced in the United States in the early 1980s after a global study of the performance of automotive assembly plants. Essentially, the primary principle of lean is that it is a tool used in manufacturing to eliminate waste, improve quality, and reduce cost. Waste is eliminated by identifying non-value-added activity. The main objective is to supply perfect value to the customer through a perfect value product that has no waste. “Eliminating waste along entire value streams, instead of at isolated points, creates processes that need less human effort, less space, less capital, and less time to make products and services at far less costs and with much fewer defects, compared with traditional business systems” (“What is Lean?”).
Companies may face certain challenges when applying lean to their production lines. First, lean should be applied to companies that have production lines that are routine, predictable, stable, and can be flow charted. Second, lean implementation may take years and can be very costly in large companies. Depending on how integrated the systems and how disciplined the production line is, it is quite possible that a lean implementation may fail. “There are several key lean manufacturing principles that need to be understood in order to implement lean. Failure to understand and apply these principles will most likely result in failure or a lack of commitment from everyone” (“Key Lean Manufacturing”). These principles are as follows: “1. Elimination of waste; 2. Continuous improvement; 3. Respect for humanity; 4. Levelized production; 5. Just-in-time production; and 6. Quality built-in” (“Key Lean Manufacturing”).
Management may also be discouraged to adopt lean manufacturing right away because the lean implementation is a long term investment. Most CEOs make decisions that benefit the company in the short run, and may choose not to adopt lean because it may show unfavorable results on the financial statement during the early stages. Lean will cause a decrease in inventory levels, causing assets on the balance sheet to drop which is not always favorable. However, these short term negative results will eventually become long run gains as the company benefits from less inventory holding costs and improved processes.
Background of Lean Accounting
While most people associate lean with manufacturing processes, it is now taking on a very important role as companies adopt lean throughout their other departments. An example of a support function that uses the lean concept is the accounting field. Since accounting is a support department, it should apply lean principles after the manufacturing department has incorporated lean. Accounting’s main duty is to accurately measure and communicate financial activity, and adopting lean accounting after successfully implementing lean manufacturing allows for accurate measurement of the new production system.
“Lean accounting evolved from a concern that traditional accounting practices were inadequate and, in fact, a deterrent to the adoption of some of the necessary improvements to manufacturing operations. While manufacturing managers knew that investments in automation and the adoption of lean manufacturing practices were the right things to do, traditional accounting was often an obstacle to such improvements, yielding numbers that only supported investments when they could be justified by reductions in direct labor, with little benefit ascribed to any improvements to quality, flexibility or company throughput” (Asefeso, p 10).
Lean accounting is the cornerstone of a completely different model of manufacturing management. By itself, lean accounting has limited value, but as the financial basis for the application of logistics, superior management, factory operations, marketing, pricing, and other vital business functions, lean accounting is very powerful. “A core principle of lean accounting is that the value stream is the only appropriate cost collection entity within the organization, as opposed to traditional accounting’s use of cells, cost or profit centers or departments normally based on smaller, functional groupings of work activity” (Asefeso, p12). The main idea behind lean is minimizing waste, therefore creating more value for customers with fewer resources.
Problems and Disadvantages of Lean Accounting
Lean accounting may reduce the manufacturing process to a few numbers, but those numbers do not provide a lot of information. There are several flaws in the lean accounting approach. “Speed gives you an advantage over the competition. No matter if you are first in a market or deliver a product faster, it will improve your competitiveness and hence your revenue. However, it is nearly impossible to determine this advantage quantitatively. How much does it get you to be in the market seven days earlier? One big thing in lean manufacturing is to reduce fluctuations. The more even your system works, the more profitable you will be. However, it is difficult to measure these fluctuations, even more difficult to determine the impact of an improvement on fluctuations, and hence nearly impossible to calculate the monetary benefit of reducing fluctuations. Yet another thing in lean is customer satisfaction, often described as value to the customer. What is the monetary damage if a delivery is delayed, if a product breaks, if service is slow, or if your people are unfriendly? It is nearly impossible to know. Even more difficult to determine is how improvement measures will actually influence the above. How much does it cost you to provide a better service, how will this influence customer satisfaction, and what is your benefit from this?” (“The Problems of”). Using lean accounting can also lead to bad decisions, such as where to put money so that profits are maximized and where to take out the money that has been saved.
There are also several disadvantages of using lean accounting. “One disadvantage of lean accounting is that it requires a top-down, sometimes monumental cultural shift. Most manufacturing companies have cost accounting systems in place that measure production improvements in terms of short and medium-term cost reductions. However, lean accounting focuses on freeing up resources to increase the product or product line’s value to customers and make more money. Senior management must therefore change their thinking from one focused on the bottom line to one focused somewhere between revenues and profits. Without management’s full commitment, full implementation of an effective lean accounting system will stall” (Wright).
“Accounting systems traditionally generate internal reports that owners and management – both senior and departmental – review and discuss. Lean accounting aims to translate the information into numbers that task-based employees in various departments can use. These accounting systems focus on compiling cost-based data. Since lean accounting focuses on value creation, companies often need to completely overhaul their accounting systems, collection and measurement procedures, controls and software. Any system overhaul can be daunting, but the scope of an accounting system overhaul can be particularly exhaustive” (Wright).
“Lean accounting focuses on increasing revenues and profits by increasing the value of a company’s products and services. When lean accounting systems focus on value stream instead of cost, they may inadvertently omit costs or ignore issues related to specific costs. Until a company fully captures a product or product line’s value stream, accountants may not be able to appropriately price products or determine each product’s individual level of profitability” (Wright).
“Effective lean thinking and lean accounting require input and involvement by all employees. Many employees in a traditional manufacturing or distribution environment are reactive, following the orders given them. Companies must therefore invest in training, developing and empowering all their employees to help them become proactive. This can be expensive and time consuming” (Wright).
“Unless the accountants understand the way that lean works, in the worst case it seems to them that lean produces losses, not efficiencies. In a typical case, they cannot see the cost advantages. Those who were fighting to introduce lean into their companies reported over and over again that finding a way to reconcile accounting the way lean does it and standard cost accounting was proving to be much harder than it should be” (Woods).
“Lean practitioners think of accounting in cash terms. Lean is against creating data and reports for their own sake. That would be considered another form of waste. In general, lean advocates have a jaundiced view of enterprise software and any general-purpose automation tools. The lean approach measures how well your value stream is working” (Woods).
The difference between lean accounting and standard cost accounting can be explained in a simple weight loss analogy. “When dieting, standard cost accounting would advise you to weigh yourself once a week to see if you’re losing weight. Lean accounting would measure your calorie intake and your exercise and then attempt to adjust them until you achieve the desired outcome. While this analogy is oversimplified, it does get to the core difference between lean and standard cost accounting. Lean accounting attempts to find measures that predict success. Standard cost accounting measures results after the fact” (Woods).
“But even when the accounting types and the lean practitioners start to understand each other, problems remain. How can we reconcile the kind of data collection and accounting that lean demands and the standard cost accounting? Duplicated data collection and reporting is indeed a form of waste” (Woods).
Conclusion
“While lean accounting is still a work-in-process, there is now an agreed body of knowledge that is becoming the standard approach to accounting, control, and measurement. These principles, practices, and tools of lean accounting have been implemented in a wide range of companies at various stages on the journey to lean transformation. These methods can be readily adjusted to meet your company’s specific needs and they rigorously maintain adherence to GAAP and external reporting requirements and regulations. Lean accounting is itself lean, low-waste, and visual, and frees up finance and accounting people’s time so they can become actively involved in lean change instead of being merely “bean counters.” Companies using lean accounting have better information for decision-making, have simple and timely reports that are clearly understood by everyone in the company, they understand the true financial impact of lean changes, they focus the business around the value created for the customers, and lean accounting actively drives the lean transformation. This helps the company to grow, to add more value for the customers, and to increase cash flow and value for the stockholders and owners” (Maskell and Baggaley, p 43).
Works Cited
Asefeso, Ade. Lean Accounting, Second Edition. AA Global Sourcing Ltd, 2014. p 9, p10 and p12.
“Key Lean Manufacturing Principles”. www.lean-manufacturing-junction.com. Accessed February 25, 2017.
Maskell, Brian H. and Baggaley, Bruce L. “Lean Accounting: What’s It All About?”. Target Magazine. Association for Manufacturing Excellence, 2006. p 43. www.aicpa.org. Accessed February 25, 2017.
“The Problems of Cost Accounting with Lean”. www.allaboutlean.com. Accessed February 27, 2017.
“What is Lean?”. www.lean.org. Accessed February 25, 2017.
Woods, Dan. “Lean Accounting’s Fat Problem”. Published July 28, 2009. www.forbes.com. Accessed March 1, 2017.
Wright, Tiffany C. “The Disadvantages of Lean Accounting”. www.smallbusiness.chron.com. Accessed March 1, 2017.

The Concept of the Eco-city

The next wave in city planning is the “eco-city”, a response to the global climate change crisis. It is a relatively new concept, combining ideas from several disciplines such as urban design, urban planning, transportation, health, housing, energy, economic development, natural habitats, public participation, and social justice (Register 1994). In simple terms, an eco-city is a settlement that allows its citizens to live and work using minimal resources.

As cities continue to grow and populations increase rapidly, the need for sustainable forms of development becomes increasingly urgent. The search for appropriate solutions to create more sustainable cities has become a main concern of designers, policy makers and environmental groups. The location and type of buildings and infrastructure have direct impacts on a city's environment, economy and society. Because a city grows and alters over long periods of time, it is difficult to change once it has been inhabited and built. Designers therefore try to avoid these problems by proposing new, master-planned eco-cities. They argue that new eco-cities can fully integrate sustainable urban planning principles to create sustainable living environments, alongside the retrofitting of existing cities. The master-planned eco-city would be built using all the latest green technologies. There are, however, people who oppose the eco-city concept and call it a utopian city. Is the eco-city really feasible, or is it a utopian concept? To understand this fully, the origin of the eco-city concept will be analysed.
The eco-city concept originated in 1975 when Richard Register and a few friends founded Urban Ecology in Berkeley, California, as a non-profit organisation dedicated to rebuilding cities in balance with nature. According to Register (1994), the purpose of Urban Ecology was to build “slow streets” in Berkeley lined with trees, to promote solar greenhouses and energy ordinances, to establish good and efficient public transport, to promote pedestrianisation as an alternative to the automobile, and to hold regular conferences with different stakeholders.
It was not until the publication of Register’s visionary book Eco-city Berkeley in 1987, together with the organisation’s new journal The Urban Ecologist, that Urban Ecology gained momentum (Roseland, 2001). The organisation held the First International Eco-City Conference in Berkeley in 1990 and has held conferences regularly since then, inviting people from around the world to discuss urban problems and to submit proposals for designing cities on ecological principles.
In 1992, David Engwicht, an Australian community activist, published Towards an Eco-City, later reissued in North America as Reclaiming Our Cities and Towns (1993). In it he describes how city planners and engineers have virtually eliminated effective human exchange by building more roads, taking commerce out of cities into strip malls, gutting communities, and increasing traffic. For Engwicht, a city is an invention for maximizing exchange and minimizing travel (Engwicht, 1993). He advocates an eco-city in which exchanges of all sorts take place (goods, money, ideas, emotions, even genetic material) and where people move freely by foot, bicycle and mass transit, interacting without fear of traffic and pollution.
Until the 1960s, the use of fossil fuels, chemically intensive agriculture, deforestation and the depletion of marine resources were not widely regarded as dangers. In 1987, the World Commission on Environment and Development (the Brundtland Commission) released a report called “Our Common Future”, which raised widespread concern about the world's deepening environmental degradation (WCED 1987) and pushed sustainable development to the forefront. Various industries and sectors are now pursuing sustainable development, and sustainable planning has become a concern for planners, urban designers, the construction industry, development authorities and the population at large.
Register, Engwicht and Urban Ecology certainly deserve credit for popularizing the term “eco-city” in the last decade, but the eco-city concept is strongly influenced by other movements as well (Roseland, 2001). The mission of Urban Ecology is to create ecological cities based on the following 10 principles (Urban Ecology 1996):

Revise land-use priorities to create compact, diverse, green, safe, pleasant, and vital mixed-use communities near transit nodes and other transportation facilities.
Revise transportation priorities to favour foot, bicycle, cart, and transit over autos, and to emphasize “access by proximity.”
Restore damaged urban environments, especially creeks, shore lines, ridgelines, and wetlands.
Create decent, affordable, safe, convenient, and racially and economically mixed housing.
Nurture social justice and create improved opportunities for women, people of color, and the disabled.
Support local agriculture, urban greening projects, and community gardening.
Promote recycling, innovative appropriate technology, and resource conservation while reducing pollution and hazardous wastes.
Work with businesses to support ecologically sound economic activity while discouraging pollution, waste, and the use and production of hazardous materials.
Promote voluntary simplicity and discourage excessive consumption of material goods.
Increase awareness of the local environment and bioregion through activist and educational projects that increase public awareness of ecological sustainability issues.

For many years the practical application of these principles was not very encouraging, until literature promoting the ideas began to appear. The ideas appear under different terminology according to the orientation of the authors, who include designers, practitioners, visionaries and activists; the terminology ranges from neotraditional town planning, pedestrian pockets, reurbanization and post-industrial suburbs to sustainable cities, green cities and eco-communities.
Although the authors’ orientations show discernible differences in analysis, emphasis and strategy between the variations, as shown in Table 1, the “eco-city” theme can encompass any and all of them. The term eco-city can be applied to existing cities or to master-planned eco-cities, as Register affirms when he explains that “there are two ways to go about building eco-cities: changing existing towns or building new ones” (Register 1987).
Citizen organizations and municipal officials in cities and towns around the world have recently started experimenting with the eco-city concept to meet social and environmental challenges (Roseland 1997, 1998). There is an urgent realization that urban planning is a significant management tool for dealing with the sustainable urbanization challenges facing 21st-century cities. Many cities have applied eco-city planning concepts, although most only on a small scale. Chattanooga and the San Francisco Bay Area in the U.S., Ottawa, Hamilton-Wentworth, and Greater Toronto in Canada, and Curitiba in Brazil are some of the earliest cities where the concept has been successfully applied.
Curitiba, a Brazilian city, is regarded as one of the most sustainable cities in the world. It has received international recognition for its integrated transportation and land-use planning, and for its waste management programs. The city’s success is due to strong leadership: city officials focused on simple, flexible, and affordable solutions. Throughout the project, the government held regular meetings with citizens so that they were involved in the process (Rabinovitch 1996).
Emboldened by the success of such projects, designers and local governments are planning a massive overhaul of the traditional way of city planning, looking for ways to plan new cities that incorporate all of the above concepts.
China, one of the most populous countries in the world, faces massive environmental problems. It has emerged as a major industrial power, but at a great cost. The environmental degradation is so severe that it is a cause for concern within China and could have international repercussions, since pollution knows no boundaries. Sulphur dioxide and nitrogen oxides produced by China’s coal-fired power plants fall as acid rain on Seoul, South Korea, and Tokyo, and suspended particulates over Los Angeles originate in China, according to the Journal of Geophysical Research (Kahn and Yardley 2007).
In 2005 the Shanghai Industrial Investment Corporation (SIIC) hired Arup to design a city that would use only sustainable energy (solar panels, wind turbines and bio-fuels), be self-sufficient, and reduce energy consumption by 66% in comparison with Shanghai. The eco-city of Dongtan, to be located on the island of Chongming not far from Shanghai, was planned as one of the world's largest eco-cities, providing housing for 500,000 people from rural areas. Dongtan would cover about 8,800 hectares, roughly the size of Manhattan Island. Through a combination of behaviour change and energy efficiency, Dongtan was to have an ecological footprint of 2.2 ha per person, very close to the sustainability limit of 1.9 ha set forth by the World Wide Fund for Nature.
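Purely as an illustration of the figures quoted above (the 2.2 ha per person footprint target, the WWF's 1.9 ha limit and the 66% energy-use reduction relative to Shanghai), the short Python sketch below restates the arithmetic; the numbers are taken from the essay text, not from any external dataset.

# Illustrative arithmetic only; all figures are taken from the essay text above
# (Arup's Dongtan targets and the WWF sustainability threshold).
dongtan_footprint_ha = 2.2   # planned ecological footprint, hectares per person
wwf_limit_ha = 1.9           # WWF sustainability limit, hectares per person
energy_reduction = 0.66      # planned energy-use reduction relative to Shanghai

overshoot = (dongtan_footprint_ha - wwf_limit_ha) / wwf_limit_ha
remaining_energy = 1.0 - energy_reduction

print(f"Planned footprint exceeds the WWF limit by about {overshoot:.0%}")   # roughly 16%
print(f"Planned energy use is about {remaining_energy:.0%} of Shanghai's")   # 34%

Even under the plan's own figures, the per-person footprint would sit slightly above the WWF threshold, which is why the essay describes it as "very close to" rather than within the limit.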
China is also partnering with Singapore to build an eco-city in Tianjin based on the principle of three harmonies: people with people, people with environment, and people with economy (Quek 2008). The 30-square-kilometre site, deliberately chosen because it is wasteland in a water-scarce area, will be built over a period of 15 years at a cost of around 50 billion yuan (S$10 billion). Restoring the Jiyun River will be the top priority for the proposed new city of 350,000 people, and the proposals include renewable energy such as solar and wind power, rainwater harvesting, wastewater treatment and desalination of sea water.
The United Arab Emirates has planned to build what it calls the world’s most sustainable city, Masdar City, an initiative of the Abu Dhabi Future Energy Company. It is an ambitious project, costing $22 billion, to build a new zero-emissions city for 50,000 residents in Abu Dhabi. The project was launched in 2007 and is designed by the British firm Foster + Partners. The proposed city will have a new university, the headquarters of the Abu Dhabi Future Energy Company, special economic zones and an Innovation Center. According to the designers, Masdar is to be constructed in an energy-efficient way and will depend on a large photovoltaic power plant to meet its energy needs, planned for the second phase of the city's expansion. The city is car free, with a maximum walking distance of 200 m to the nearest transport link and amenities. The streets are compact to encourage walking and are complemented by a personal rapid transit system. Because of this compactness, the walkways and streets are shaded, creating a pedestrian-friendly environment. The city will have wind and photovoltaic farms, research fields and plantations, so that it is entirely self-sustaining. Masdar City will be built in seven phases, the first of which includes the Masdar Institute, set to be completed in 2010; the phases will be progressively built over the next decade, with the first phase reaching completion in 2013 (Foster + Partners).
The idea of a self-contained city without waste, landfill, cars or carbon emissions seems very desirable, but for some sceptics it is a utopian dream that will never materialise. Sceptics question whether designing an entirely new city incorporating all the eco-city concepts is possible, given the time and cost involved. The main weakness of the master-planned eco-city is the large input of energy required to construct an entire, functional city as one long continuous project. Sceptics are also concerned that it might simply be a strategy to deflect environmental criticism while countries like China and the UAE continue to grow along the same unsustainable path. However, countries like China and the UAE are in a position to fund such projects, and if they are successful they will create a precedent for other parts of the world as well.
Unfortunately, the Dongtan eco-city never materialised. Although the highest echelons of Chinese officialdom had shown keen interest in the project, the first phase of construction, which was to be ready for the Shanghai Expo 2010, never even started; in spite of being a government endeavour, Dongtan failed to materialise. The Mayor of Shanghai was sentenced to an 18-year jail term on charges of corruption and abuse of power in 2008 (Larson 2009). Sceptics say that policy makers in China misuse the term eco-city to deflect criticism of China’s poor environmental record without having any real commitment to the idea.
As for Masdar, work has already started on phase 1. However, sceptics are concerned that it might be just an isolated patch of green in the desert while the rest of the UAE proceeds along the same path, with an ecological footprint even bigger than that of the United States. They are also apprehensive about the very high embodied energy in the buildings and infrastructure. The heavy dependence on technology for personal rapid transit and infrastructure is another issue, since the technology for personal rapid transit is not fully developed and co-ordinating infrastructure among different agencies is difficult.
Whether to build a city from scratch, retrofit existing buildings or redevelop an existing city is one of the burning issues. Designing a new city from scratch permits a more comprehensive, whole-systems approach and more degrees of freedom than adapting an existing city (Fox 2008). On the other hand, the resources and energy needed for the new construction of a city are far greater than for redeveloping an existing one. Nevertheless, belief in and movement toward eco-cities has spread worldwide and has taken strong hold among planners. In spite of setbacks for some projects, the eco-city will be a main driving force for the cities of today and tomorrow. Eco-cities can be built on existing cities or as new master-planned eco-cities, and most proposed master-planned eco-cities are to be developed in several stages over the next fifteen to forty years.
Some of the relevant issues for the eco-city planning concept, whether developing a new city or adapting an existing one, are as follows:

The eco-city is based on a holistic approach. This integrated approach is hindered by fragmented administrative structures, political rivalries and a disregard for citizen expertise. In the Dongtan case, the surrounding inhabitants were not even consulted and were unaware of the project.
The eco-city concept is not always encouraged by policy makers and planners, who are suspicious of its intentions because it involves alternative ways of decision-making (e.g. community involvement), the implementation of new technologies (e.g. personal rapid transit for Masdar, or new forms of energy generation) and new organisational solutions (e.g. multiple use). The additional costs involved and the loss of influence are some of their main concerns.
The eco-city concept may fail due to a lack of political will and commitment on the part of those involved.
The initial investments are very high compared with the traditional approach to planning, which can scare off potential investors.

Nevertheless, for the successful implementation of an eco-city, commitment from the individuals and parties involved is paramount. Vision, ambition and thinking big over the long term are among the necessary requirements. There must also be a free flow of information and trust between policy makers and non-policy makers, the creation of a win-win situation for everyone, compromise where opinions differ, and unity among the alliance.
A series of challenges exists for developing cities in many parts of the world, particularly in developing countries where rapid economic development will put pressure on cities to accommodate rising populations and more infrastructure; it is there that the next megacities are emerging. Designers and public policy makers are committed to developing eco-cities and other types of sustainable communities in the face of climate change, environmental pollution, water shortages and energy demand. Today’s utopian vision can become tomorrow’s reality. Many sustainable cities emphasize compact land use, clean transport, waste management and renewable energy (wind turbines and solar energy).
Most eco-city plans are huge and need long-term investment. But should we turn away from the utopian visions they provoke? Planning completely new cities is expensive, and it is not possible to build only new cities; we can, however, strive to improve the abundance of already established cities and urban areas. In my opinion, we should embrace these visions and work towards improving existing cities, and perhaps the scale of new master-planned eco-city projects needs to be smaller so that construction is quicker and less costly. Someday the impressive catchphrases applied to cities, such as “carbon-neutral”, “zero-waste” and “car-free”, might be reality.
References

Daly,H. 1973. Toward a Steady-State Economy, Freeman, San Francisco (1973).
McDonnell, M.J., Hahs, A.K., Breuste, J.H. 2009. Ecology of Cities and Towns: A Comparative Approach. Cambridge University Press.
Rabinovitch, J. 1996. Integrated transportation and land use planning channel Curitiba’s growth. In World Resources Institute, United Nations Environment Program, United Nations Development Program, The World Bank, World Resources 1996-97: The Urban Environment. New York: Oxford University Press.
Roseland, M., 2001, The eco-city approach to sustainable development in urban areas. In: Devuyst D, Hens L, De Lannoy W (eds). How green is the city? Sustainability assessment and the management of urban environments. Columbia University Press, New York, pp 85-104.
Register, R. 1987. Eco-City Berkeley: Building Cities for a Healthy Future. Berkeley, CA: North Atlantic Books.
Register, R. 1994. Eco-cities: Rebuilding civilization, restoring nature. In D. Aberley, ed., Futures By Design: The Practice of Ecological Planning. Gabriola Island, B.C.: New Society Publishers.
Roseland, M. 1995. Sustainable communities: An examination of the literature.” In Sustainable Communities Resource Package. Toronto: Ontario Round Table on the Environment and the Economy.
Roseland, M. 1997. Dimensions of the eco-city. CITIES: The International Journal of Urban Policy and Planning 14,4: 197-202.
Roseland, M., ed. 1997. Eco-City Dimensions: Healthy Communities, Healthy Planet. Gabriola Island, BC: New Society Publishers.
Roseland, M. 1998. Toward Sustainable Communities, Resources
Roseland, M., “Sustainable Community Development: Integrating Environmental, Economic, and Social Objectives,” Progress in Planning, Volume 54 (2), October 2000, pp. 73-132.
Resilience Alliance (2007) A research prospectus for urban resilience. A resilience alliance initiative for transitioning urban systems towards sustainable futures. Available at http://www.resalliance.org/files/1172764197_urbanresilienceresearchprospectusv7feb07.pdf accessed on 29 March 2010
Kenworthy, J.R., The eco-city: ten key transport and planning dimensions for sustainable city development, Environment and Urbanization, Vol. 18, No. 1, 67-85 (2006)
World Commission on Environment and Development (WCED), 1987. Our Common Future. Oxford University Press, New York.
Dongtan, An Eco-City, edited by Zhao Yan, Herbert Girardet, et al. Published by Arup and SIIC in February 2006.
UN HABITAT, Planning Sustainable Cities: Policy directions. Global Report on Human Settlements 2009. Abridged edition. Gutenberg Press, Malta. Available from http://www.unhabitat.org/grhs/2009. Accessed on 2 march 2010
Kahn, J and Yardley, J. As China Roars, Pollution Reaches Deadly Extremes. The New York Times. August 26, 2007. Available on http://www.nytimes.com/2007/08/26/world/asia/26china.html Accessed on 27 march 2010
Dongtan: The world’s first large-scale eco-city? Available on http://sustainablecities.dk/en/city-projects/cases/dongtan-the-world-s-first-large-scale-eco-city
Quek, Tracy, S’pore, China break ground , straits times, China Correspondent. Sep 29, 2008. http://www.straitstimes.com/Breaking%2BNews/World/Story/STIStory_283867.html. Accessed on 27 march 2010
Larson, Christina. China’s Grand Plans for Eco-Cities Now Lie Abandoned. Yale e360. 06 Apr 2009. Available on http://e360.yale.edu/content/feature.msp?id=2138. Accessed on 28 march 2010
Fox, Jesse. “Ecocities of Tomorrow: Can Foster + Partners’ Masdar City in the U.A.E. be Truly sustainable?”. Treehugger. March 4, 2008. Available on http://www.treehugger.com/files/2008/03/masdar-roundtable.php. Accessed on 29 march 2010
Richard Register – Author, theorist, philosopher and 35 year veteran of the ecocity movement. Founder of Ecocity Builders and Urban Ecology, and author of Ecocities: Rebuilding Cities in Balance with Nature.

 

Is the Concept of Human Rights Philosophically Defensible?

In your view, is the concept of human rights philosophically defensible, or is it, for example, a purely political notion? Explain.

According to Biren, human rights are “the distinctive basic entitlement possessed by every individual against the state or other public authority by virtue of being a member of the human family irrespective of every other consideration” (259-260).

Eastwood posits that human rights cannot be separated from the concrete exercise of political power (91). Human rights exist relative to the state; they are rights that cannot be exercised in a vacuum but are most likely to flourish within a societal framework of democracy and economic parity, a distinctive characteristic rationalized in the declaration that “the recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world” (UNDHR Preamble).

The notion of human rights as a political concept is traceable to the early practice of societal adherence to natural or customary rights, during the phase when natural laws were the determinants of rights (Jowitt 187-194) and only a particular class of society held certain rights by virtue of position: the rights of a king differed from those of a follower, and the lower class did not have the same rights as the upper class (still a cultural practice in some parts of the world). The evolution of human rights spanned several decades, leading to autocracy in most nations, with the state bearing the burden of protecting human rights and ensuring that rights were upheld. The national sovereignty prevailing in this period gave nations the right to exist without interference from other nations and, by extension, provided an avenue for oppression and individual rights violations: for instance, the forced deportation and massacre of Armenians, Greeks and Assyrians by the government of Turkey; the United States sedition and espionage acts restricting the right to free speech and other rights of citizens; and the rape and massacre by Japanese soldiers of about 200,000-300,000 civilians and unarmed soldiers during their invasion of China (Human Rights for all Ages).

Although human rights existed, as summarised above, long before the United Nations charter asserted an international standard of human rights, the Universal Declaration of Human Rights was adopted in the aftermath of the appalling human rights violations recorded during World War 2. The document called for equality and self-determination, stating that “All human beings are born free and equal in dignity and rights” (UNDHR Article 1). The declaration outlined fundamental rights, rights which coincidentally are also the foundation of any democratic society, and was adopted by diplomatic representatives of countries, most of which were strong allies; Britain and France were at the time colonising other nations, with prevailing cases of human rights violations and oppression.

The declaration marked the end of absolute sovereignty, allowing nations to interfere in the politics of other nations under the guise of upholding or promoting human rights.

With sovereignty thus limited, and with it the restraint on inter-nation interference, wars are justified under the guise of addressing human rights violations, leading to further violations through the bombing of cities, the taking of prisoners of war, sieges, shooting and maiming, all in order to enforce human rights in conflict regions. Some of these acts promote hegemony; others are for political and economic gain. Consider, for instance, the Israeli-Palestinian conflict that began in the mid-20th century, referred to as the world’s “most intractable conflict”, with the ongoing Israeli occupation of the West Bank and the Gaza Strip (Wikipedia). Eastwood, in his review of Nicola Perugini and Neve Gordon’s The Human Right to Dominate, presents the authors’ stance that Israel has manipulatively positioned itself as a human rights victim, while Palestinian human rights advocacy against occupation has paradoxically helped to normalize domination. He further portrays the extent to which international humanitarian law has been contorted to provide justifications for the killing of Palestinian civilians, and the proposed plan to legitimize the further colonization of Palestinian land by framing settler activities as being in defense of the human rights of Israeli settlers (Eastwood 92).

Another example is the 1994 Rwandan genocide; the United Nations’ involvement, or lack thereof, remains questionable. France had a national interest at stake and, rather than making concerted humanitarian efforts to end the genocide, contributed actively to it. According to Rory Carroll, the United States’ failure to intervene can be attributed to the lack of economic gains available, because Rwanda had no minerals or strategic value (The Guardian).

Another instance involves the Sudanese government’s sponsorship of the genocide in Darfur, aided by China and Russia.

“Both China and Russia have worked to block many United Nations resolutions in attempts to appease the Sudanese government. From its seat on the United Nations Security Council, China has been Sudan’s chief diplomatic ally. China invests heavily in Sudanese oil. The country is China’s largest oversees oil provider. Sudan’s military is supplied by Chinese-made helicopters, tanks, fighter planes, bombers, rocket launch propelled grenades, and machine guns. For decades, Russia and China have maintained a strong economic and politically strategic partnership. The countries opposed UN peace keeping troops in the Sudan. Russia strongly supports Sudan’s territorial integrity and opposes the creation of an independent Darfuri state. Also, Russia is Sudan’s strongest investment partner and political ally in Europe. Russia considers Sudan as an important global ally in the African continent”. (Darfur Genocide)

Thirdly, the United States funded and supported armed groups in Afghanistan during the war with the Soviet Union; however, after the September 11 attacks, Afghanistan became the target of attacks by the United States, and the ‘war on terror’ was justified by condemning the human rights record of the Taliban government in Afghanistan. The war, which is presently deadlocked, is probably the longest conflict in American history, and thousands of U.S. soldiers remain within its borders.

The United States is seemingly fighting for the rights of humans in other nations but has numerous ongoing and unresolved human rights issues of its own: the Guantanamo detention centre, capital punishment still practised in some states, and regulatory actions by the Trump administration that negatively affect refugees and immigrants, the right to health, and the rights of persons with disabilities.

Privacy rights are also violated by the widespread media circulation of pictures and videos of victims of poverty and war without their consent, under the guise of alerting the world; even the United Nations world report contains such images.

In practice, the various charters and declarations on human rights are essentially policy acts imposed on the very people they are supposed to favour; the supposed beneficiaries are not involved in the process, while governments and public organisations, through the enforcement of rights, promote political power, and in this case political power that transcends borders (Gready 745).

Most human rights are born out of the need to protect a populace affected by poor political choices, domination and diplomatic compromise, which ultimately affect the equitable distribution of resources and, by extension, human dignity. How then can we justify the reality of human rights? How is it determined that we have a right to these rights when the government or state has not made the primary amenities available?

The populace cannot exercise their rights without the necessary social structures in place. The population living below the poverty level have the same rights as the ‘bourgeoisie’ but are unable to enjoy them fully because the state has failed to provide the objects of those rights. For instance, Article 25(1) states that “everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.” (UNDHR).

If the state does not provide, for instance, medical facilities (which is the reality in many third-world countries), then an individual cannot exercise this right; by extension, his right to life and to work is also affected, as infirmity becomes an impediment, ultimately leading to death.

According to Jowitt, there are instances of human rights being used to conceal politically motivated intentions aimed at favouring one group of people over others (186). For instance, between 1990 and 1996, political manoeuvring in Fiji resulted in the approval of an unfavourable constitutional review, a move the Fijian political parties opposed, culminating in a coup in May 2000. The coup was justified as necessary for the protection of the rights of the indigenous people (BBC News; Jowitt 186). The sponsor claimed to have “set foundations for change once and for all in the affairs of the country of Fiji as desired by the indigenous people…. Now, they will be able to achieve self-determination and control the future destiny in all matters pertaining to their livelihood.” (BBC News). The BBC reports that the instigator had links with nationalist groups which had been protesting against the Indian-dominated government. The salient point here is that the indigenous peoples’ rights were a mere front for the underlying political power plays.

Virtually every nation is experiencing human rights violations in one form or another, and there is no working framework in place to adequately address them.

What is evident from the above scenarios is that human rights are merely a political notion used by nations.

I share Gready’s view that human rights are of major utility to international politics and exist as guiding principles precisely because political systems are a major cause of human rights violations (745).

     WORKS CITED

BBC News. “Attempted coup in Fiji.” World: Asia-Pacific. news.bbc.co.uk/2/hi/asia-pacific/754653.stm. Accessed 11-02-2019

Biren, Roy. “In Defence of Human Rights.” Economic and Political Weekly, vol. 32, no. 6, 1997, pp. 259-260. www.jstor.org/stable/4405066. Accessed: 11-02-2019.

“Darfur Genocide.” world without GENOCIDE. worldwithoutgenocide.org/genocides-and-conflicts/darfur-genocide. Accessed 12-02-2019

Eastwood, James. “Review of the Right to Dominate by Nicola Perugini and Neve Gordon.” Journal of Palestine Studies, vol. XLVI, no. 2, 2017, pp. 91-104. www.academia.edu/32293895/Review_of_The_Human_Right_to_Dominate_by_Nicola_Perugini_and_Neve_Gordon. doi.org/10.1525/jps.2017.46.2.91. Accessed 12-02-2019

“Ending the Greatest Human Rights Tragedy on Earth!” Human Rights for all Ages, 2003. www.humanrightsforallages.org/hrtimeline.php. Accessed 12-02-2019

Gready, Paul. “The Politics of Human Rights.” Third World Quarterly, vol 24, no. 4, 2003, pp. 745-757. www.jstor.org/stable/4405066. Accessed: 9-02-2019

“Human Rights Watch.” World Report 2018. www.hrw.org/world-report/2018#. Accessed 5-02-2019

Jowitt, Anita, and Tess Newton Cain, editors. “The notion of Human Rights.” Passage of Change: Law, Society and Governance in the Pacific, ANU Press, Canberra, 2010, pp. 185–198. JSTOR, www.jstor.org/stable/j.ctt24h3jd.18.

Carroll, Rory. “US chose to ignore Rwandan Genocide.” The Guardian, 2004. www.theguardian.com/world/2004/mar/31/usa.rwanda. Accessed 5-02-2019

“Universal Declaration of Human Rights” www.un.org/en/universal-declaration-human-rights/. Accessed 12-01-2019

“Israeli-Palestinian Conflict.” Wikipedia, The Free Encyclopedia. en.wikipedia.org/wiki/Israeli–Palestinian_conflict. Accessed 12-02-2019

A Concept Analysis of Advanced Nursing Practice

Introduction
The idea of advanced practice in nursing presents a challenge to the general nurse in terms of exploring scope of practice and potential professional development (An Bord Altranais, 2000; Thompson and Watson, 2003). There appears to be a lack of clarity in defining the concept of advanced practice (Thompson and Watson, 2003), with terms such as specialist practice, consultant nursing roles and the like clouding the waters of the debate, suggesting a need to perhaps amalgamate and standardise roles (An Bord Altranais, 2000).

This author, as a Community General Nurse in Ireland, is aware of two advanced practice roles within her own practice area: one within the Accident and Emergency Department, an acute care facility, and one within Education, which straddles the academic/practice divide. However, the changing and developing role of the nurse and rapid changes towards higher levels of practice (NMC, 2002; Thompson and Watson, 2003; Lorentzon and Hooker, 2006) seem to suggest that advanced practice may be an integral part of career progression within nursing (An Bord Altranais, 2000), which leads to a need to clarify the concept and map its components and meanings. Concept analysis and conceptual clarification form an identifiable genre within the nursing literature (Paley, 1996). This essay will follow one model of concept analysis to map the concept and explore the implications for practice through an exemplar model case.
Concept Analysis
Concepts and theories within science are strongly linked (Paley, 1996), and both seem to be interdependent. Concept analysis enables the definition of a concept and allows the critical reader to differentiate between similar and dissimilar concepts (McKenna, 1997). Achieving conceptual clarity is an important task for both research and practice (Walker, 2006). A range of concept analyses have been used within the scientific and nursing literature. Morse (1995) suggests that techniques to map concepts should relate to the maturity of the concept concerned. In this case, advanced practice is an extant concept which demands clarification in relation to specific areas of nursing activity. Therefore there is a need to determine a means of concept delineation and clarification (Morse, 1995). There is also a need to identify an appropriate means of clarifying the concept, for example whether or not to utilise qualitative or quantitative methods (Morse et al, 1996).
In this instance, a qualitative approach based on Rodgers’ (1989; 1991; 1993) model of concept analysis will be utilised. This particular model has been chosen because of its firm grounding in the research traditions of sociology and nursing (McKenzie, 2000). The Rodgers approach has already been utilised to map evolving phenomena (Walker, 2006) and so is particularly applicable to a still-developing topic area. As Rodgers’ approach is an inductive, cyclical approach (Walker, 2006), it is a more creative endeavour suited to the generation of new ideas and definitions. A literature review will be carried out in a targeted manner, utilising a structured approach (see Table 1).
Table 1 Framework for concept analysis

Identify concept of interest
List published literature relevant to the topic and select papers to be included in the sample
Identify surrogate terms and relevant uses of the concept.
Identify and select appropriate sample for data collection.
Identify the attributes of the concept
Identify the references, antecedents and consequences of the concept.
Identify concepts that are related to the concept of interest
Identify a model case of the concept.

The Process of Analysis.
Concept of interest
McKenna (1997) suggests that when choosing a concept, it is best to select one that represents phenomena of interest to the researcher. McKenna and Cutcliffe (2005) also suggest that there should be some confusion or lack of consensus about the concept's meaning, but the scope should not be too broad. The concept of interest is advanced nursing practice in community general nursing, which is related to the author’s own area of practice and experience of practice delivery. This concept also meets McKenna’s (1997) stipulation that the concept should be abstract enough to retain its meaning when removed from specific situations. Therefore, the concept of advanced nursing practice is being analysed with reference to one specific area of practice but not limited by that practice.
Surrogate Terms

Higher Level of Practice
Specialist nursing practice
Role of the Specialist nurse and consultant nurse
Professional Development in Nursing
Community nursing practice

Sample
Please see Appendix for the audit trail of sample selection.
Attributes of the Concept
The concept of advanced practice is not a new one (Carroll, 2002); clinical nurse specialists have been cited since the 1940s (Carroll, 2002). It is a nursing concept (Carroll, 2002) despite being associated with advanced practices traditionally carried out by the medical profession (Mantzoukas and Watkinson, 2007). The literature agrees that the concept of advanced nursing practice lacks consensus on the core characteristics and roles of such a practitioner (Mantzoukas and Watkinson, 2007).
The concept is related to specialism (Mantzoukas and Watkinson, 2007), suggesting that the role emerges as a unique expression of need within a distinct area of practice (Gardner and Gardner, 2005). Hamric (1996) links advanced nursing practice to practical, theoretical and research-based interventions within a specific clinical area linked to the larger discipline of nursing. However, it can also be a more general theoretical construct of any form of nursing which progresses to an advanced level of practice (Mantzoukas and Watkinson, 2007). Evidence does seem to suggest similarities between specialist nurse and clinical nurse specialist roles, and between nurse practitioner and advanced nursing practice roles (Carnwell and Daly, 2003). Therefore it would appear that an eclectic set of role schema has emerged from the general stew of advances in nursing practice. Bryant-Lukosius et al (2004) further define the term advanced nursing practice as referring to the work, or to what nurses actually do in their roles, but also make reference to the multi-dimensional scope and mandate of the concept.
Specific attributes of the concept include the ability to discover, innovate and expand the nursing profession by employing multiple types of knowledge and skills, supported by research evidence and academic thinking processes (Mantzoukas and Watkinson, 2007). Other attributes are: the use of knowledge in practice; critical thinking and analytical skills; clinical judgement and decision-making skills; professional leadership and clinical inquiry; research skills; mentoring skills; and the ability to change practice (Mantzoukas and Watkinson, 2007). Furlong and Smith (2005), analysing the edicts of the National Council in Ireland, describe the core concepts of advanced nursing practice as: autonomy in clinical practice; clinical and professional leadership; and expert practitioner and researcher. All of these appear to relate meaningfully to nursing as a profession but do not address the application of the role to patient outcomes and clinical effectiveness. However, Benner et al (1999) relate critical thinking to active thinking in practice, the application here being evident. This would then relate to clinical judgement, but the question arises of the acceptability of nurses undertaking clinical decision-making in the current NHS climate.
References, antecedents and consequences of the concept
Antecedents or precursors to the concept include the notion of education and individual roles, the historical development of the profession (Carroll, 2002), and advanced roles as part of the development process of the nursing profession (Mantzoukas and Watkinson, 2007). In order for the advanced nursing role to exist, there must be an identified need for such a role in specific areas of nursing practice (Carroll, 2002; Mantzoukas and Watkinson, 2007). In particular, the need to perform specific nursing tasks, interventions and clinical monitoring for individual conditions may be viewed as an antecedent (Gardner et al, 2004). Specialist preparation and legislative/professional evolution are also antecedents (Mantzoukas and Watkinson, 2007). Education for advanced nursing practitioners is linked to research-derived curricula and learning defined by clinical practice (Gardner et al, 2004). However, education and specialist preparation of the advanced practitioner in nursing could also be viewed as a consequence, as specific programmes of education have had to be developed in response to the development of these nursing roles (Gardner et al, 2004).
Consequences include lack of role clarity (Carroll, 2002; Griffin and Melby, 2006) and the notion of the mini-doctor role, which leads to nursing practice being carried out within a medical model rather than the optimal holistic nursing model (Carroll, 2002). This would have an impact on nurses themselves and their professional self-concept, and on the client/patient, affecting the type and perhaps quality of their care. It might also lead to the erosion of general nursing roles in favour of specialisation, again following a medical model of professional development (Mantzoukas and Watkinson, 2007). However, other literature sees advanced nursing practitioners as being a result of recent health care policies, the role having developed to meet the complex demands of health care systems (Carnwell and Daly, 2003).
Another professional consequence of the concept is the need for regulation and supervision (NMC, 2002). Related to this is the development and evolution of professional nursing autonomy (Mantzoukas and Watkinson, 2007). The expansion of advanced roles can also be seen as a consequence of the concept, whereby established areas of advanced practice pave the way for its implementation in a range of specific clinical areas (Mantzoukas and Watkinson, 2007). This may be related to practice development ensuring that nursing remains responsive to the changing needs of patients and clients (Thompson and Watson, 2003). This relates to another consequence of advanced practice, ongoing change in clinical practice (Mantzoukas and Watkinson, 2007). However, it could be argued that practice development is an antecedent to the concept of advanced nursing practice as well, echoing the blurred nature of the concept from a range of perspectives. Autonomy could also be viewed as a consequence (Wade, 1999). The fact that advanced nursing practice is valued within the healthcare arena is also an important factor (Dunn, 1997; Griffin and Melby, 2006), and its most important consequence is improvement in patient outcomes and the associated improvements in healthcare and reduced demand on resources (Coster et al, 2006; Gardner and Gardner, 2005).
Concepts related to the main concept
One concept related to advanced nursing practice is fitness for practice (Thompson and Watson, 2003; NMC, 2002). Another is that of barriers and resistance to advanced practice, particularly in relation to the current NHS climate (Thompson and Watson, 2003). Systems and processes must be in place and be effective for advanced practice to establish itself and its efficacy (Gardner and Gardner, 2005). Policy background and political drive are also related to this particular practice development (Carnwell and Daly, 2003). The international or global scope of the concept is also evident from the literature sampled here (Bryant-Lukosius et al, 2004; Sutton and Smith, 1995). Nurse prescribing and authority in pharmacological intervention is another related concept (Lorentzon and Hooker, 2006).
Model Case.
Patient K, a 65-year-old woman, had been referred to the author (a community RGN) due to a recurrent, chronic leg ulcer on the left ankle. This ulcer had been treated for some years with topical preparations and dressings, including antibiotic treatment and a variety of therapeutic dressings, and the involvement of other professionals such as a dietician and physiotherapist had attempted to address potential underlying causes of failure to heal, such as lack of mobility and poor diet. However, after some deterioration in the condition of the ulcer, increased haemoserous loss and offensive odour, K attended the GP and was referred by the practice nurse to the wound specialist clinic at the local outpatient department.
The clinical wound specialist nurse reviewed K’s case, identified the ulcer as a venous ulcer and prescribed four-layer pressure bandaging to treat the wound, based on her own awareness of the research evidence that demonstrated the efficacy of this intervention. The four-layer bandaging technique improves venous return in the lower extremity by providing a gradient of pressure from the bottom of the lower limb towards the knee.
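The essay's sources do not give a formula for this pressure gradient; purely as an illustrative sketch, the short Python example below uses the modified Laplace relation often quoted in compression-therapy literature (pressure proportional to bandage tension and number of layers, and inversely proportional to limb circumference and bandage width, with 4630 as the commonly cited conversion constant). The tension, circumference and width values are hypothetical examples, not patient data from this case.

# Illustrative sketch only: this formula and these figures are not taken from
# the essay's sources. The modified Laplace relation often quoted for
# sub-bandage pressure is:
#   P (mmHg) ~= tension_kgf * layers * 4630 / (circumference_cm * width_cm)

def sub_bandage_pressure(tension_kgf, layers, circumference_cm, width_cm):
    """Approximate sub-bandage pressure in mmHg (modified Laplace relation)."""
    return tension_kgf * layers * 4630 / (circumference_cm * width_cm)

# Same tension, number of layers and bandage width applied along the leg,
# with hypothetical example circumferences for ankle and calf:
ankle_mmhg = sub_bandage_pressure(tension_kgf=0.5, layers=4, circumference_cm=23, width_cm=10)
calf_mmhg = sub_bandage_pressure(tension_kgf=0.5, layers=4, circumference_cm=36, width_cm=10)

print(f"Ankle: {ankle_mmhg:.0f} mmHg, calf: {calf_mmhg:.0f} mmHg")
# The narrower ankle receives higher pressure than the wider calf, producing
# the graduated compression that improves venous return towards the knee.

Under these assumed values the sketch gives roughly 40 mmHg at the ankle falling to the mid-20s at the calf, which illustrates why the same bandage applied at constant tension produces the graduated pressure described above.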
The specialist nurse engaged K in a degree of learning about her condition and its treatment, in order to ensure compliance. The four-layer bandages are left on for two to three days at a time, then removed to dress the ulcer, then replaced with clean four-layer bandages. They can be uncomfortable, and so patient compliance is important in the success of treatment. The specialist nurse spent time with the client, informed her of the rationale and evidence base, and then further contacted this author, her community general nurse, to ensure that those treating K were fully competent in the four-layer bandaging technique. She also advised K to return to her for regular review of her condition. Within 12 weeks the ulcer was healed, which greatly pleased K and allowed her discharge from nursing care.
This case demonstrates many of the features defined by the concept analysis of advanced nursing practice. The advanced practice developed out of a defined need for a specialist wound clinic staffed by specifically trained and experienced staff. The specialist nurse occupies a senior role with a large degree of autonomy. She has been educated in her specialism, utilises evidence-based practice, and engages in an educative role with clients and with non-specialist nurses, demonstrating the components of expert practice but also advancing the expertise of those around her (Benner, 1984).
Conclusion
Professionalizing forces in nursing, clinical need and extension and changes in primary health care appear to have combined to create new roles for nurses in the NHS (Lorentzon and Hooker, 2006). These roles appear to have functional bases defined by gaps within service provision and focus on client need. Therefore, given this concept analysis, it would appear that advanced nursing practice is a needs-driven development of specialist nursing management to provide optimum clinical outcomes for client and service provider. Such practice is evidence based and provided by a trained, competent clinician with the academic and experiential authority required to implement theory into practice, bridging the theory-practice gap through exemplary implementation of clinical judgement (Upton, 1996). It can also be viewed as a logical outcome of continuing professional development within nursing.
This author’s role within the community nursing team encompasses a range of nursing challenges, one of which has been described here. It is through liaison with such specialists that the community nurse can facilitate evidence-based practice and bridge the theory-practice divide which continues to challenge the achievement of best practice in every clinical situation. However, it is also evident that there is a need for further clarification and consensus around such roles and better awareness of the scope of advanced nursing practice both within individual specialisms and in the wider realm of NHS nursing care. This author can see that the role of the community general nurse itself could be further developed into an advanced nursing role, drawing on the successes of such roles in other areas, but this would need policy, systems and ideological change to achieve. Ultimately, if the results are demonstrable improvements in patient outcomes, it would be well worth the challenge.
References
An Bord Altranais (2007) http://www.nursingboard.ie. Accessed 13-4-07.
Benner, P., Hooper-Kyriakidis, P. & Stannard, D. (1999) Clinical Wisdom: Interventions in Critical Care WB Saunders: Philadelphia.
Benner, P. (1984) From Novice to Expert California: Addison-Wesley Publishing Company.
Bryant-Lukosius, D., DiCenso, A., Browne, G. & Pinelli, J. (2004). Advanced practice nursing roles: development, implementation and evaluation. Journal of Advanced Nursing 48 (5) 519-529.
Carnwell, R. & Daly, W.M. (2003) Advanced nursing practitioners in primary care settings: an exploration of the developing roles. Journal of Clinical Nursing 12 (5) 630-642.
Carroll, M. (2002) Advanced Nursing Practice. Nursing Standard 16 (29) 33-35.
Castledine, G. & McGee, P. (eds) (1998) Advanced and Specialist Nursing Practice Oxford: Blackwell Science.
Coster, S., Redfern, S., Wilson-Barnett, J. et al. (2006) Impact of the role of nurse, midwife and health visitor consultant. Journal of Advanced Nursing 55 (3) 352-363.
Cutcliffe, J.R. & McKenna, H.P. (2005) The Essential Concepts of Nursing Edinburgh: Churchill Livingstone.
Dunn, L. (1997). A literature review of advanced clinical nursing practice in the United States of America. Journal of Advanced Nursing 25 (4) 814-819.
Furlong, E. and Smith, R. (2005) Advanced nursing practice: policy, education and role development. Journal of Clinical Nursing 14 (9) 1059-1066.
Gardner, A. and Gardner, G. (2005) A trial of nurse practitioner scope of practice. Journal of Advanced Nursing 49 (2) 135-145.
Gardner, G., Gardner, A. & Proctor, M. (2004) Nurse practitioner education: a research-based curriculum structure. Journal of Advanced Nursing 47 (2) 143-152.
Griffin, M. & Melby, V. (2006) Developing an advanced nurse practitioner service in emergency care: attitudes of nurses and doctors. Journal of Advanced Nursing. 56 (3) 292-301.
Hamric, A.B. (1996) A definition of advanced nursing practice. In Hamric, A.B., Spross, J.A. and Hanson, C.M. (eds) Advanced Nursing Practice: An Integrated Approach Philadelphia: WB Saunders.
Lorentzon, M. & Hooker, J.C. (2006) Nurse Practitioners, practice nurses and nurse specialists: what’s in a name? Journal of Advanced Nursing.
Mantzoukas, S. & Watkinson, S. (2007). Review of advanced nursing practice: the international literature and developing the generic features. Journal of Clinical Nursing 16 (1) 28-37.
McKenna, H. (1997) Nursing Theories and Models London: Routledge.
McKenzie, N. (2000) Review of Concept Analysis. Graduate Research in Nursing www.graduateresearch.com Accessed 13-4-07.
Morse, J.M. (1995) Exploring the theoretical basis of nursing using advanced techniques of concept analysis. Advances in Nursing Science 17 (3) 31-46.
Morse, J.M., Hupcey, J.E., Mitcham, C. & Lenz, E.R. (1996) Concept analysis in nursing research: a critical appraisal. Scholarly Inquiry in Nursing Practice 10 (3) 253-277.
Nursing and Midwifery Council (2002) Higher Level Practice www.nmc-uk.org Accessed 13-4-07.
Paley, J. (1996) How not to clarify concepts in nursing Journal of Advanced Nursing 24 (3) 572-578.
Rodgers, B.L. (1989) Concepts, analysis and the development of nursing knowledge: the evolutionary cycle. Journal of Advanced Nursing. 14 330-335.
Rodgers, B.L. (1991) Using concept analysis to enhance clinical practice and research. Dimensions of Critical Care Nursing 10 28-34.
Rodgers, B.L. (1993) Concept analysis: An evolutionary view. In: Rodgers, B.L. & Knafl, K.A. (Eds.) Concept Development in Nursing: Foundations, Techniques and Applications Philadelphia: WB Saunders.
Sutton, F. & Smith, C. (1995) Advanced nursing practice: new ideas and new perspectives. Journal of Advanced Nursing 21 (6) 1037-1043.
Thompson, D. & Watson, R. (2003) Advanced nursing practice: what is it? International Journal of Nursing Practice 9 (3) 129-130.
Wade, G.H. (1999) Professional nurse autonomy: concept analysis and application to nursing education. Journal of Advanced Nursing 30 (2) 310-318.
Walker, W.M. (2006) Witnessed resuscitation: a concept analysis. International Journal of Nursing Studies 43 (3) 377-387.
Appendix
Audit Trail
The search engine/gateway British Nursing Index was accessed and searches were carried out utilising the following keywords with their associated hits:

Advanced Nursing Practice
Higher Level of Practice
Specialist nursing practice
Role of the Specialist nurse and consultant nurse
Professional Development in Nursing
Community nursing practice

The list of returned citations was further limited by defining parameters as follows:

Full text
English Language
Nursing
Peer-reviewed
Research
Original Articles

The express aim was to review 20% of the returned citations, leaving the author with a targeted sample of articles from a range of nursing journals including Journal of Advanced Nursing; Journal of Clinical Nursing; Advances in Nursing Science; International Journal of Nursing Practice; Dimensions of Critical Care Nursing; International Journal of Nursing Studies; Nursing Standard; Graduate Research in Nursing.
The focus of the concept analysis being Advanced Nursing Practice, only those articles which deal specifically with this concept were included in the sample.
 

The Concept of Equitable Globalisation

Rebecca Knighton
Globalisation Debates: The Concept of Equitable Globalisation and the Offshoring of Jobs
‘One of the fundamental questions of today’s world is undoubtedly the question of equitable globalisation.’ These were the words of Dr Janez Drnovšek (2004), then President of the Republic of Slovenia, in a speech addressing members of the Alliance of Liberals and Democrats for Europe. In order to realise the importance of that sentence, an understanding must be gained of what is meant by globalisation. A word that Godin (2006) described as a buzzword, globalisation is today used to define, justify and legitimise the interconnectedness of the world. Theodore Levitt and his 1983 article The Globalisation of Markets in the Harvard Business Review are accepted by many commentators as the origin of the mainstream use of the term (Mullen, 2006; Abdelal & Tedlow, 2006).

Equitable globalisation can be defined as an interconnected world in which progress made is fair and development is impartial. When comparing this ambition to how modern-day global relationships operate, it is clear that globalisation today does not possess these qualities. Joshi (2009) explores globalisation and describes it as the increasing economic interdependence of national economies across the world, attributing this interdependence to a rapid increase in the cross-border movements of ‘goods, service, technology, and capital’. While this idea does not contradict the pursuit of fairness, the real and tangible effects of global interconnectedness do not always embody fairness or impartiality.
The debate that will be the focus of this discussion, one that stems from the equality – or lack of equality – within global interconnectedness, is centred on offshoring. Offshoring is the process of moving parts of a business’s operations to a different country, either by subcontracting through a contractual agreement or by setting up further business premises in another country and carrying out tasks there. Mankiw (2004) describes the notion as the latest manifestation of the gains from trade ‘that economists have talked about at least since Adam Smith’; his opinion is that this so-called ‘phenomenon’ (Vedder, Guynes and Reilly, 2010) is simply the next step businesses can take to profit in many ways in a progressing business environment. The source of the debate leads on from the aforementioned pursuit of equitable globalisation and the contrasting opinions, and justifications of those opinions, between those who support or disparage offshoring. The debate itself can be separated between the country a business originates from and its chosen destination for offshoring; arguments from both locations identify reasons either for or against. Throughout the discussion, the economic, political, social and cultural elements of this aspect of globalisation will be examined.
The first area of the debate to be examined is the contrasting opinions about offshoring in the country of the business’s origin. For the purpose of this discussion, there will be a focus on the USA. A word that seems synonymous with these contrasting opinions is ‘protectionism’ – Mankiw and Swagel look into the term in their insightful 2006 article and conclude that in different arguments it comes with entirely different connotations. Members of the American public are looking for some security and consistency in their jobs and the services they receive; the term ‘protect’ is tantamount to this, and it is something they feel the US Government should prioritise. Brothers Ron and Anil Hira are prominent authors within this globalisation debate, and their book Outsourcing America (2005) represents the debate well. Their view is that American policy, representative of MEDCs around the world, is ‘naïve’, stating that ‘the formula of free, deregulated markets and faith in American superiority ignores how the international economy has slowly and gradually shifted in the last few decades’. Their point follows on from the book’s foreword by Lou Dobbs, in which the accusation is made that globalisation and its consequent offshoring have led, and continue to lead, to economic insecurity in direct contradiction of the American Dream.
This argument is somewhat fuelled by the media (Mankiw & Swagel, 2006; Amiti & Wei, 2005). Within the last decade, political events such as the publication of, and controversy surrounding, the CEA’s February 2004 Economic Report of the President – which mentioned offshoring – in the run-up to the 2004 election have coincided with impartial reports and media attention regarding job losses and economic slowdown. These overlapping events have led to offshoring becoming thought of as an explanation for a faltering labour market.
In addition to the argument of a loss of American jobs, an element of this debate concerns the quality of exported services. A customer survey by American Banker/Gallup (2004) found that, of the two thirds of respondents aware of offshore outsourcing, the vast majority (78%) held an unfavourable opinion. Exemplifying this point is the relocation, and consequent return, of a call centre for the computer technology firm Dell: customers complained that, upon its move to India, standards dropped and customer service quality was reduced, as discussed by Taylor and Bain (2004). Although this case is not alone in its controversy, many call centres have remained in India and other popular offshoring locations – part of the Asian information technology enabled services (ITES) industry estimated to be worth US$1.5–1.6 trillion in 2020 (NASSCOM 2009a).
To refer back to the aforementioned point of the varying connotations of protectionism, the opinions found in academic and particularly economic literature are that the notion carries negative implications. This academic literature forms part of the discourse that offshoring is a positive contribution to a country’s economy. In order to justify the concept of outsourcing, economists look to the theory that defines their subject area – a part of this theory is comparative advantage. This is the ability of one party to produce a good or provide a service at a lower marginal cost than its competitor (Baumol & Blinder, 2009) and can also be applied to whole countries. The comparative advantage that, for example, India can offer US companies for elements of their business that can be outsourced is the driver of offshoring. One view of this concept is that of Bhagwati (2008), who labelled the phenomenon ‘kaleidoscopic comparative advantage’ in recognition of its complexity. In direct contradiction to the so-called protectionists’ opinion of a negative effect on the economy, McKinsey Consulting (2003) calculate that overall net US income rises by about 12–14 cents for every dollar of outsourcing; this is due to the increased profits of companies being contributed to tax and being used to develop and grow the business – leading to more US employment – and to consumers paying lower prices for products and services that have been made cheaper by offshoring. A further point in the debate that this embodiment of globalisation is good for the economy is that these global economic developments could be likened to a third Industrial Revolution. Blinder (2006) explored this idea – he identified that such vast and unsettling adjustments are not unique today, as the same repercussions were felt during both the agricultural and the manufacturing industrial revolutions, but added that both of those economic changes are looked back upon as successful and relevant steps forward. The article goes on to address the opinion that the jobs at risk of being relocated are those that are typically lower paid, using the example of taxi drivers, aeroplane pilots, janitors and crane operators as ‘safe’ jobs, compared with accountants, computer programmers, radiologists and security guards as jobs that could potentially be outsourced. The range of jobs that are or are not at risk does not correspond to traditional distinctions between high-end and low-end work.
A further counter to the argument against the offshoring of American jobs is the challenge to the claim that it leads to a reduced quality of customer service. Blinder (2006) comments on the constant improvements in technology and global communication, and says that, because of these, there has been little or no degradation in quality. The education of the employees in foreign companies is discussed by Doyle (2012) – he uses the example of the recent vast improvements in English language education in India and puts forward the point that this in turn eliminates a potential language barrier that may have a supposed negative effect on the customer service provided by companies that outsource their call centres to countries that do not have English as a first language.
Having explored both views of offshoring in the country of the business’s origin, the next step in gaining an understanding of this globalisation debate is to consider the country hosting these outsourced jobs. Similarly to the previous arguments, using a case study will allow a more in-depth investigation into the opinions and justifications of this debate. India will be the focus – chosen due to its popularity amongst businesses as a destination for offshoring jobs. According to the Tholons 2013 report of the top worldwide outsourcing destinations, six Indian cities are within the ten most favourable, with Bangalore and Mumbai ranked first and second respectively.
India is regarded as the main vendor of offshored jobs, with some estimates that an additional 400 people a day are employed in jobs that have been offshored (Bergh et al, 2011). This contribution to the economy is the main positive in this globalisation debate in favour of outsourcing jobs to India; a contribution estimated by NASSCOM to be growing at 19% per year (NASSCOM, 2012). Bergh et al (2011) go on to discuss the impacts of this input into the Indian economy, such as the vast improvements made to infrastructure, which have in turn allowed further expansion and an increased quality of life.
A further part of the debate is the social side of this embodiment of globalisation: the impacts of increased employment. Despite criticism, which will be explored further in this discussion, there is evidence within academic literature and other publications that improvements are made to the quality of life of those employed by companies that have offshored their jobs to India. Ball et al (2005) explore this point; their findings indicate that those employed by subsidiaries of the original company that has outsourced the jobs benefit from working conditions better than if they were employed by companies based in India, as well as a better sense of job security. Another point is the claim that these companies recognise the nature of the work, identifying that by working and travelling home overnight employees would be increasingly vulnerable, and that by offering security and transport services care is taken of these employees (Messenger and Ghosheh, 2010).
Whilst this argument of the positive effects on the vendor’s economy and the satisfactory to good working conditions provided is legitimised by academic papers on the subject, the opposing opinions come from a strong standpoint and are well justified by both academic research and events in the media.
One underlying point of this discourse relates back to the idea of equitable globalisation and the impartiality of development – a concept which ties in with the opportunity to develop sustainably. A major criticism of the presence of outsourced jobs, and of their effects in India and other vendor nations, is the instability of and speed at which changes are being made. Whilst governments, such as India’s, have been recognised as paramount in facilitating an inflow of not only foreign capital but also knowledge and technology, Winters and Yusuf (2007) highlight the pressure felt by governments from internationally trading companies to aid their overseas operations – attributing this to the fast growth and lack of forward planning when implementing incentive schemes. This potential instability is worsened by claims that India may be losing its popularity amongst multi-national companies, leading to a slowdown in investment (Helyar, 2012; The Economist, 2013).
A second element of this discourse is explored by Messenger and Ghosheh (2010), and is based on the deep-rooted cultural differences between vendor countries, i.e. India, and the companies’ country of origin. This leads to difficulties in integration and to segregation between higher management and workers, which in turn can very negatively affect morale. A further point in the issue of cultural difference is the westernisation of the nation in which a company is operating – an example of this is demonstrated in a post-colonial investigation into recent changes in Indian culture by Ravishankar et al (2013), in which the education system in India is said to ‘mimic’ Western concepts and ignore local stakeholders. Whilst this change would not be considered a negative by all commentators, it exemplifies a potential loss of national identity, which has been explored in the wider sense of globalisation by Featherstone (2005).
A final point in the discussion of this debate is the working conditions of people employed in offshored jobs. Ghimire (no date) comments on the topic and highlights the following as issues within the sector: disturbed social and family life due to overbearing work commitments and a lack of flexibility by employers; detachment from local culture and lifestyle; and racist abuse from customers abroad. This list is extended by further contributions from Messenger and Ghosheh (2010), who explain that over 50% of their sample have suffered from work-related illnesses and conditions including back and neck pain, sleep problems and headaches; they also reveal that many regulations set out by India’s government are not adhered to or are interpreted in favour of employers: examples include the breaks required by law being made dependent on outputs and call levels (in call centres), or breaks cut short due to overloaded workloads and missed, sometimes considered unattainable, targets. Because of the secrecy and strict employee contracts maintained by companies with insufficient working conditions, data is not available across the board; information in the media gives an insight into how conditions may be worse than this, but cannot be relied on as a true and legitimate source in an academic discussion.
By investigating the debate through what can be identified as four separate discourses, a comprehensive understanding can be gained of the opinions, justifications and evidence of each opposing argument. The exploration of such a topical and global debate brings some difficulties – such as contradicting literature and misinterpretation of statistical evidence. The question of the practice of offshoring is a prominent debate within globalisation, and due to its so-called ‘kaleidoscopic’ complexity (Bhagwati, 2008) and multiple standpoints it demonstrates the complexity of the global interconnectedness of today’s world. When returning to the initial concept of equitable globalisation, this debate highlights how the pursuit of that ideal is somewhat unattainable; the impartiality of the concept is impossible to obtain due to the nature of the profit-driven forces that dominate global relations and drive globalisation itself.
References
Abdelal, R and Tedlow, R S (2003) Theodore Levitt’s ‘The Globalization of Markets’: An Evaluation after Two Decades. Harvard NOM Working Paper No. 03-20; Harvard Business School Working Paper No. 03-082. [Online] Last accessed 04/01/14 at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=383242
American Banker/Gallup (2004), What Americans think about overseas outsourcing? American Banker. 169 (192) 18
Amiti, M and Wei, Shang-Jin (2005) Fear of Service Outsourcing: Is it justified? Economic Policy. 20 (42) 308-347
Anon (2004) Economic Report of the President, 108th Congress, 2nd Session [Online] Last accessed 06/01/14 at http://www.gpo.gov/fdsys/pkg/ERP-2004/pdf/ERP-2004.pdf
Anon (2013) India No Longer Automatic Choice for Services and Back Office Work. The Economist. (Special Report).
Bain, P and Taylor, P (2004) Call Centre Offshoring to India: The Revenge of History? Labour and Industry: A Journal of the Social and Economic Relations of Work. 14 (3)
Baumol, W and Blinder, A (2009) Economics: Principles and Policy. Ohio: South Western Cengage Learning
Bergh, A, Israels, R, Mehta, S, Sheychenko, A (2011) A decade of offshore outsourcing to India: Define your strategy for the next decade. [Online] Last accessed 07/01/14 at http://www.quintgroup.com/content/library/A_Decade_of_Offshore_Outsourcing.pdf
Bhagwati, J (2008) The selfish hegemon must offer a New Deal on trade. [Online] Last accessed 06/01/14 at http://delong.typepad.com/egregious_moderation/2008/08/jagdish-bhagwat.html
Blinder, A (2006) Offshoring: The Next Industrial Revolution? Foreign Affairs. 85 (2) 113-128
Drnovšek, J (2004) Speech by President Drnovšek at the opening of the meeting of European Liberals in Ljubljana [Online] Last accessed 04/01/14 at http://www2.gov.si/up-rs/2002-2007/jd-ang.nsf/dokumentiweb/A28B9C6C3EC2ABFEC1256F95002CB360?OpenDocument
Farrell, D, Baily, M, Agrawal, V, Bansal, V, Beacom, T, Kaka, N, Kejriwal, M, Kumar, A, Palmade, V, Remes, J, Heinz, T (2003) Offshoring: Is it a Win–Win Game? McKinsey Global Institute
Featherstone, M (2005) Undoing Culture: Globalisation, Postmodernism and Identity. London: Sage Publications
Ghimire, B (no date) Social Impact of Outsourcing. Understanding Outsourcing. Professional Education, Testing and Certification Organization International [Online] Last accessed 07/01/14 at http://www.peoi.org/Courses/Coursesen/outsrc/outsrc6.html
Ghosheh, N and Messenger, J (Eds) (2010) Offshoring and Working Conditions in Remote Work
Godin, B (2006) The Knowledge-Based Economy: Conceptual Framework or Buzzword. The Journal of Technology Transfer. 31 (1) 17-30
Guynes, C, Reilly, R and Vedder, R (2010) Offshoring Limitations. Review of Business Information Systems. 14 (1)
Helyar, J (2012) Outsourcing: A Passage out of India. Bloomberg Business Week – Companies and Industry
Hira, A and Hira, R (2005) Outsourcing America: What’s behind our national crisis and how we can reclaim American jobs. New York: AMACON
Joshi, R M (2009) International Business. New Delhi and New York: Oxford University Press
Levitt, T (1983) Globalization of Markets. Harvard Business Review. May/June. 92-102
Mankiw, G and Swagel, P (2006) The Politics and Economics of Offshore Outsourcing. Journal of Monetary Economics. 53 (5).
Mullen, J (2006) An ‘Original Mind’ of Marketing Dies. Advertising Age. 77 (8)
NASSCOM. (2009) Gender inclusivity in India: Building an empowered organisation. [Online] Last accessed 06/01/14 at: http://www.nasscom.in
Tholons (2013) 2013 Top 100 Outsourcing Destinations: Rankings and Report Overview. p2
Winters, A and Yusuf, S (2007) Dancing with Giants: China, India and the Global Economy. Washington: World Bank Publications
 

Hobbes Concept of the State of Nature Analysis

Explain and assess Hobbes’ claim that the ‘state of nature’ would be a war in which ‘every man is enemy to every man’.
Hobbes’ concept of the state of nature, as proposed in Leviathan, was defined merely as a condition of war; without the creation of a civil society, he suggested, there would be a war where ‘every man is enemy to every man’. Hobbes’ assumption about human nature is based on the absence of a political society such as government, where no laws or rules are present. This condition creates a society filled with individuals living in constant fear and leads to perpetual war. In the first section of this essay I will explain the foundations around which Hobbes built his idea of the state of nature and whether there is any escape from it. I will then go on to evaluate whether this state of nature is only defined by savage behaviour and war, and how other philosophers such as Locke and Rousseau examined the state of nature and came to conclusions that contradict Hobbes’ original theory.

Hobbes stated that an individual’s natural condition is ultimately egoistic: with no concern for morality, each is driven by a powerful desire to amass great power. This instinctual drive cannot be restrained due to the lack of an overarching authority in society. Thus each human is continuously seeking to destroy the other in pursuit of reputation and self-preservation. This ultimately leads to life being ‘nasty, brutish and short’ (Hobbes, 1982). Hobbes believed that morality could not exist in such a state and that judgements about good and evil cannot exist until they are dictated by a higher authority present in society. Individuals naturally attempt to increase their sources of power as a means of future protection; this, combined with their need to acquire what they like, leads to continual competition with each other. However, we need to question whether this competition in the state of nature would eventually lead to war.
Another assumption Hobbes puts forward is that all men are equal by nature, meaning that each of them possesses equal abilities to accumulate power and to gain what their appetites desire. However, he recognises that there are limited resources available, which encourages competition, leading to each becoming enemies and supporting his claim that every man is enemy to every man. One might think that, in regarding men as equal, Hobbes would consider that we should respect each other and act with compassion, but Hobbes’ definition of equality relates to the idea that we all retain the same level of skill and strength and therefore all hold the same capacity to kill one another. It is a condition in which ‘every man has right to everything; even to another’s body’ (Hobbes, 1982). This concept was supported by Doyle, who identified that men were equal as they had similar passions and potentialities; they were mostly dominated by lusts and inner passions which were out of their control (Doyle 1927, pg. 353). He went further to state that the condition of life was one of never-ending war, as ‘justice and injustice have no place’.
Hobbes’ main idea of self-preservation in the state of nature can be exemplified by Plato’s story of the Ring of Gyges. Those in possession of this ring acquire the power of invisibility. With this ring, the individual either decides to act morally or immorally (Plato, 2007). Individuals in the state of nature would use this ring to satisfy their own personal desires. Whilst in possession of this ring they would be able to obtain everything they want, which would be very beneficial in terms of survival. Hobbes’ suggestion is that if individuals were presented with the ring they would not hesitate to use it, as it would provide protection and self-preservation, which is their main focus due to their egoistic makeup; this supports his idea of the condition of mankind.
There are therefore three key elements which characterise the state of nature: glory, competition, and diffidence. These are known as the causes of quarrel. We are primarily concerned with our own safety, and Hill (2006, pg. 134) reinforced Hobbes’ idea that uncertainty about the character and behaviour of others in society leads to mistrust, due to the lack of confidence in the motives of others, which turns them against each other. This consequently leads to the establishment of a sovereign to enforce authority over society. Hobbes’ definition of the state of war is characterised not by violence but by an individual’s constant readiness to fight. This state becomes so harsh that human beings naturally seek peace through reason, and the best way to this goal is to create the Leviathan through what is called the ‘social contract’, which entails having an ultimate sovereign as a legitimate source of power. The state will function due to an element of fear being present, which will ultimately protect and ensure that the contract is followed; people will have given up their rights and overall power to the government. As Alexandra (2001) stated, to escape from a state of nature it is necessary that the fundamental laws of nature are accepted as “public standards of behaviour” (pg. 3), and according to Hobbes this can only be achieved if all people agree to limit their rights and to act in accordance with them.
Even though Hobbes viewed this natural condition as a battle and struggle between men for the ultimate goal of survival, there are arguments against the idea that the state of nature is characterised by a chaotic world of continual fear between individuals. Locke’s interpretation of the state of nature was one of perfect freedom, where men inherently have a sense of morality which discourages them from engaging in acts of evil; we can thus resolve any conflicts. He depicted the idea of men not having any incentive to “destroy himself, or any creature in his possession”. It is reason that leads the way in preserving a peaceful life, and teaches us that harming one another is not a moral action (Locke, 2005). From Locke’s analysis of the state of nature, we can see that it contradicts Hobbes’ views on human nature. On one hand the individual is represented as good, with an innate moral instinct, while on the other he is a self-driven creature; we need to consider whether it is possible to live an acceptable life in the absence of government or sovereign rule.
Thomas (2009) concluded that men have always been under the influence of some degree of authority, and that even when there has been no control exercised by the state it has been god that has inspired them to act in a kindly manner with generosity. Men have the natural, habitual ability to live with other members of their society without becoming a “social animal”. He developed his ideas further and stated that even before the state emerged, fathers were seen as the dominant figure in households and ruled over their wives and children; families were seen as ‘a unit of social organisation’.
Doyle backed up the idea presented by Thomas by stating that human beings were predestined through god to perform acts of evil, so we need to question whether they could really be held responsible for their actions (1927, pg. 340). He did, however, go on to support Hobbes’ claim that men were dominated by their natural instincts and were free to act as they wished, which meant they only had the power to do evil. Nevertheless, we also need to consider that the performance of good deeds by man is seen as automatic (1927, pg. 342).
The main consideration Hobbes failed to examine when constructing his theory of the state of nature is that humans have social inclinations, which include affection, building relationships and friendship, and which make us rational, social beings. This social nature embedded in humans is one that drives them to cooperate. Merriam (1906) examined Hobbes’ writings and noticed how he failed to recognise the existence of social qualities in human nature. The fundamental laws of nature command all men to be peaceable but also to be compliant with each other; even if they entered a state of war, nature would command them to be socially minded and love one another, which would minimise any effects of war between men. This statement was contradicted by Haji (1991), who argued that individuals fail to realise the benefits that cooperation with others would bring in the long term and would rather opt for the short-term benefits of choosing not to cooperate and acting alone. This leads to a course of action where everyone in society decides not to cooperate rather than achieve anything through cooperation, which ultimately leads to a continual fight for self-preservation.
It is clear that both researchers have examined the notion that cooperation is an important aspect of human beings’ day-to-day lives; however, there will always be different circumstances in which individuals choose whether or not to cooperate. We can focus on the prisoner’s dilemma to look into this further. The prisoner’s dilemma is a game-theoretic model which gives each individual a choice and an overall outcome, and we can relate this to everyday life, where certain choices give us greater benefits. We may desire to choose the option that gives us the greatest satisfaction, or an equal option which benefits both parties. There are different people in society: some are more aggressive and self-motivated, while others are inclined towards social relations.
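To illustrate the structure of this dilemma, a standard textbook-style payoff table might read as follows; the figures are illustrative only and are not drawn from Alexandra, Haji or Kavka. Each pair of numbers gives the payoff to individual A and individual B respectively:

                 B cooperates    B defects
A cooperates     3, 3            0, 5
A defects        5, 0            1, 1

Whatever B does, A is better off defecting (5 rather than 3 if B cooperates; 1 rather than 0 if B defects), and the same holds for B, so both defect and each ends up with 1 rather than the 3 that mutual cooperation would have given. This mirrors Hobbes’ argument that, in the absence of a common power, individually rational choices lead everyone into the mutually worse condition of war.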
Nevertheless, it is important to realise that social behaviour involving cooperation can be adopted and learned in such a way that restrictions enforced by society are not necessary to control the behaviour of certain individuals; human behaviour can thus be shaped by education (Kavka, 1983). Kavka also goes on to say that Hobbes’ theory of the state of nature is narrow-minded because of his interpretation of what establishes a civil society and of what constitutes the state of nature. Hobbes’ predominant view is that only an absolute sovereign can be an authoritative common power; otherwise, he stated, in the absence of a common power people are in a state of war, which is not necessarily true.
We can therefore conclude that Hobbes’ claim that the state of nature is one of war is not entirely true, and that at no time has this state of nature existed; it was a hypothetical scenario formed by Hobbes based on the presumption of a state of anarchy. The state of nature was represented as a state of war on the assumption that society is suffering from a shortage of resources and competition over food supplies; however, this is not the case in real life and there is ‘room for all man’. Thomas (2009) states that a state of war will only arise when individuals are severely hindered in preserving their lives.
If we look at the current political situation, however, there is anarchy present among states. There is no overall world government which regulates power over all states. If we consider states separately, we can justify what Hobbes stated about the state of nature. It is evident that inter-state and intra-state war is still present today, and certain crimes are committed which go unpunished. The fear of war is always existent and states go to extreme lengths to dominate others; as well as this, there is still a certain degree of mistrust between people even when there is a common power. Thus Hobbes’ idea of a state of nature being one of war is supported to some extent, as there will always be some form of competition amongst people, but it does not necessarily have to be as brutish and vulgar as Hobbes described.
Bibliography
Alexandra, A. (1992). ‘Should Hobbes’ state of nature be represented as a prisoner’s dilemma?’. The Southern Journal of Philosophy. Vol 2. Melbourne: The University of Melbourne.
Alexander, J. (2001). ‘Group Dynamics in the State of Nature’ Erkenntnis. 55(2): pp.169-182
Doyle, P. (1927). “The contemporary background of Hobbes ‘state of nature’”. Economica. Vol 21. pp 336-355.
Haji, I. (1991). ‘Hampton on Hobbes on state of nature cooperation’. Philosophy and phenomenal research. 51(3): pp 589-601.
Hobbes, T (1982). Leviathan (Penguin Classics)
Hill, G. (2006). Rousseau’s Theory of Human Association: Transparent and Opaque Communities.
Kavka, G. (1983) ‘Hobbes War of All Against All’. Ethics. 93 (2):pp. 291-310
Locke, J. (2005). Two Treatises of Government. London.
Merriam, C. (1906). ‘Hobbes Doctrine of the State of Nature’. Proceedings of the American Political Science Association. Vol 3. pp. 151-157
Plato. (2007). The Republic (Penguin Classic) Oxford University Press.
Schochet, G. (1967). ‘Thomas Hobbes on the family and the state of nature’. Political science quarterly. 82(3): pp 427-445.
Thomas, J. (1929) ‘Some Contemporary Critics of Thomas Hobbes’. Economica. Vol 26. pp.185-191
 

Concept of Drawing as a Medium

This essay will address the subject of drawing. The main starting point will be the ideas of John Berger on drawing. These ideas can be summed up in three main concepts: drawing as observation, drawing as memory and drawing as expressing ideas. Although drawing from observation was of fundamental importance in the past, today we see more and more an engagement of drawing with memory and with the expression of ideas. This does not mean that people no longer deal with observational drawing; it means that its practice as it was in the past – drawing as a starting point or study for a final painting – has become obsolete. The introduction of photography and the end of the old art academies accelerated this change. This essay will deal mainly with drawing as memory and drawing as ideas. It will first look at artists who use drawing in a more conventional way for ideas and memory. Then it will move on to consider artists who challenge the medium itself (pencil, paper and so on) to push the idea of drawing and to express drawing as memory and drawing as observation. The essay will discuss the distinctions between painting, sculpture and performance as a way of discovering the possibilities of drawing, and also the new expectations of drawing as a medium.

The art practice of drawing in the late twentieth century achieved the status of art in its own right. The approach to drawing is also changing in ways that reflect trends within the art world at large. Many artworks associated with drawing challenge traditional boundaries among media. There is also a self-consciousness about the nature of art and what is involved in the creation of art. I see drawing now as being primarily at the end of its tradition; it can be argued that it no longer stands as drawing to represent. It seems that we are on the verge of another paradigm shift in drawing, one reflecting an altered view of its nature as a skilled activity; what we now perceive as drawing has been obliterated. By this I mean change: for instance, Rauschenberg’s Erased de Kooning is an example of change by removal of the drawing. Rauschenberg appeared to be going backwards in drawing traditions: the drawing was there and now it is not. This was a kind of rebellion against tradition; although the drawing may have gone backwards, drawing itself was to go forwards, i.e. into modernism. This process of change will be discussed.
Drawing is discovery
John Berger’s Berger on Drawing enabled me to begin researching and analysing in depth both the physical and the metaphysical act of drawing. What we draw is not only the subject observed but also what we already know about it; in fact, past experience of the subject affects the way we draw it. Berger further raises a point that will be discussed in this essay: the difference between the actions of drawing and painting. According to Berger, the audience can identify with the subject illustrated when confronted with a painting. I will attempt to establish a dialogue on the possibilities of drawing with reference to artistic process. Firstly I will analyse the work of Jackson Pollock as a link to contemporary practice. I have also found it important to research literary theory on the grounds of the process of mark making and the social connotations it has created. The reading of Berger on Drawing helped me to begin thinking about some key terms such as drawing as memory, drawing as a way to express or show forth ideas, and drawing as observation. It was useful to reflect on the idea of truth in drawing. How truthful can we be when we draw? Do we draw what we see or what we know? Can we overcome our set knowledge of things? I will try to find out more about these issues by studying the work of other artists.
Artists such as Jackson Pollock, Cy Twombly, Susan Collis, Louise Bourgeois and Yves Klein, who will be discussed in this essay, distinguish their mark making as somewhat unknown and less predictable. Their works would all be free of ‘external impurities’. This essay will examine where drawing stands to date in relation to the past.
It seems that drawing is everything: it is not just the movement of the pencil but the motif and the act of exploring possibilities within a concept. I want to suggest that without these familiarities, what is known as drawing could never have happened. Drawing is a continuous action, commonly known as the before of something; now drawing is the beginning and the end of concepts. Drawing, when perceived as truth or as good, is the act of line. This is the common factor that persuades all subject matter to fall into the same category as writing, in relation to ‘text and image’. Conversely, bad drawing is lining by means of lines, a fact lamentably patent in widely divergent things. This point permits me to repeat that drawing specifically means to visualise ideas by means of lines.
‘Drawing is discovery; drawing is a way of seeing what is hiding under the surface.’ If the artist observes what is in front of him and then “dissects it in his mind’s eye”, this demonstrates that the artist relies on memory and past observations to draw the subject before him rather than simply examining what lies before him. What we draw is not only the subject observed but must also be what we already know about it. It is the difference between the actions of drawing and painting that needs to be explored further: for instance, in abstract expressionism the line between subject and artist is subtle in distinction, whereas Yves Klein paints with a figure, which exposes the difference in this relation.
Drawing into painting, Chapter One (a discussion with chosen material) and chance
There are distinctions between drawing and painting, but they became irretrievably blurred when Jackson Pollock started to paint with ‘line’ in the late 1940s. Bernice Rose has stated in her writings on his work that, perhaps, it would be more precise to say that there is no real dividing line between painting and drawing in his work. Perhaps there is no divide between painting and drawing at all? The same mark, or as discussed the ‘line’, is only being made larger, and the feeling is now more intense. Pollock erased the distinctions then pertaining to drawing as a discipline. Referring to Cy Twombly: Cycles and Seasons, in the paragraph ‘coincidence’ Shiff talks about the pencil line as ‘something that is happening’; this means that the line is not there to describe or configure things with a narrative aim, i.e. the line is not meant to represent objects belonging to the world. With Pollock the line is not linked to the act of seeing; it is linked to the act of investigation and to drawing as idea and memory. As Pollock pushed the boundaries between drawing and painting, drawing becomes painting and vice versa; therefore drawing loses its dependence on painting. Twombly would repeat what Pollock had started. Their work suggests internal feelings and relates in a much deeper way than mere observation. From memory they would represent their emotions, going against the conventions of traditional drawing.
The line represents and describes feelings and emotions, which seem a constant flux of things that can happen at the same time. Using Twombly as a reference once again, what appears really interesting to me is the constant change in both of their works: lines are constantly erased, changed, redrawn and re-erased. Furthermore, it seems to me that the past and the present are in a constant dialogue. As Pollock pours the paint medium directly onto the canvas, the expression is different: the raw emotional expression allowed the drawing to become much more complex and indeed more energetic.
Now, the conventional sense in his paintings is that they generally neutralise the distinction between figure and ground, a factor closely allied with the theory of the all-over.
Because the image evinces no definite form but only a compact, restless texture that appears to continually advance and recede, allowing the eye no point of rest, it risks being seen as banal. It remained, however, for Pollock to move from this to a full recognition of the pictorial identity of drawing and painting. Furthermore, in his work the effort to bring painted effects into balance with those obtainable in drawings vanishes, as both materials of ink and oiled pigment operate from the same overall conception – the conception that would see the blank paper and the undrawn canvas as comparable visual fields.
The aura of drawing surrounding the act of painting almost denies any difference. Pollock’s earlier work contains numerous indicators of the great significance that Picasso’s work held for him. Looking again at drawing as memory or past knowledge, Picasso arouses interest with themes of sex, beauty and young women, but also with references to the old masters in his work. In the words of Jeffery Hoffeld, Picasso displays a “panorama of works from the history of his own art”. The title links in with Pollock’s idea of drawing as an element of memory, but also with drawing as past experience or past knowledge of a specific subject. Significance in style and development evokes the condition of drawing. The intensity of Pollock’s paintings clearly evolved through his ‘act of drawing’: drawing from an idea in his head creates this impulsive ‘drawing performance’, i.e. the body moves with every drip and every mark to be made, as though the artist walks with the drawing and becomes part of the work.
This drawing is enriched with energy and feeling that could be connected into painting almost immediately. The line scratches through the figures and the impulsive brushing-over of marks, and gradually discovers beneath the network of strokes a circular shadow that seems to hover in the pictorial space and yet create depth. ‘The beholder’ has a sense of a hallucination. Walter Benjamin has suggested that when a drawing entirely uses or covers its supporting ground it can no longer be called a drawing; this can be added to define the characteristics of the overworked nature of his work.
This definition seems to me unfair: it is the act of drawing that relates to figure and ground, and what becomes of the image is unknown. We can extend the list of artists who have used a similar approach to painting and drawing, likewise experimenting with dripping, to include Susan Collis, an artist who also experiments with accidental drips and attaches herself to the technique as such. Collis’ work can seem like careless splashes and stains upon the surface; however, on careful inspection these marks heighten the idea of misleading the viewer, as they are counterfeit marks playing with our reactions and our understanding of mark making. For instance, Susan Collis’ No. 2 (in series), 2004, in red glitter and self-adhesive vinyl, is an example of the process of replacing the original mark with her own.
‘To live is to leave traces’
This misleading conception that there may never have been an object or a sheet of paper there is an argument for the point that drawing has been extended: it has worked its way off the surface and onto a new one. Furthermore, Collis works with the marks left by things, the incidental and the transient, lending them permanence. No matter what point we might eventually select, the fundamental function of Collis’ work is to allow us to rethink past experiences in art history and the change within art concepts. Subject is defined by false conceptions, playing on the idea that what is and what was may not be reality. Referring back to what Berger has said about drawing from memory, this relates to the work of Susan Collis, as the traces are of objects that were once there; she has celebrated the idea of memory, drawing from her memory as a way to discover the past in relation to the present. If confronted by something that has no form, no language, or no place, a familiar analogue steps in; we use one thing to describe another. When the artist has no words to describe something, drawing can define these lost words, both the real and the unreal, in visual terms.
‘Gestures of freedom’
For instance, Cy Twombly pushes the limits of drawing and painting with words; it is very hard to classify his work either as painting or drawing. ‘Illustrious and unknown’ is what Degas aspired to be, and what Cy Twombly has become. The boundary between drawing and painting becomes blurred in his practice as an artist. Playing on the tension between drawing and painting, Twombly was able to question and redefine what drawing is or what it can be. For instance, this challenge to drawing can be seen in his experiments with drawing in the dark. In this way he denies the old principle of drawing, that is drawing from observation, since the act of looking is invalidated in darkness. This raises the question: how can the values of drawings be recognised as having reflected changes in the material conditions and technology of drawing? The condition of the materials being as much unknown in the dark as his marks makes for an exciting process of discovering his material. With these examples there is a change in how art can be made; the idea that art can be made of anything is firmly established, as these artists would work with a range of materials simultaneously.
The dictionary definition of drawing suggests that it is inextricably linked with line. It is clear that drawing and painting exist similarly in the same worlds; what I distinguish between them is the ordering of similar motifs. The artists discussed so far all relate to drawing as memory and drawing as ideas. After the breakdown of modernism, artists became less concerned with the specific properties of their chosen medium, instead selecting the medium for its compatibility with their particular thesis or proposition.
There is an order to maximise the formal potential of their chosen material. We only have to study the work of Marlene Dumas to gain an understanding of the relationship drawing has to painting. Drawing is a vital part of Marlene Dumas’ oeuvre; by drawing with the line tools of painting, her works on paper and her oil paintings echo each other. This is an opportunity to once again ‘blow up’ the image, as she has said. In this case drawing is a way of getting to understand the image: ‘I use second hand images and first hand emotions’. Her paintings differ from those of her contemporaries who, during the 1980s, revisited the figure in neo-expressionist work that favoured intoxicating colour. Dumas uses paint as a subversive, anti-conventional means of expression and the figure as a vehicle for achieving these ends. The image is created with the feeling of expressing ideas. She is an artist who works with memory and ideas to work out a dialogue between mark making and story telling. Her paintings become drawings and her drawings become paintings. These paintings make marks similar to a single ‘line’ in its association with drawing. It is not the drawing of an outline, as the single brushstroke acts as a drawing and a painting at the same time. The materials she uses – the paint, the lines, the ink, the drawing – open up new possibilities and meaning, thus demonstrating that the line is of importance in the relationship of both painting and drawing. The pre-knowledge of her feelings, memories, ideas and associations with the image reverts back to the impulsive line.
Her direct approach shows the power of the image, which is informed by the immediate gesture of the drawing. There is a tension seemingly created in the image: we can recognise what is depicted and yet we are not entirely sure about its meaning. As viewers we are compelled by the poetic nature of the movements throughout the image. Other important key terms for the possibilities of drawing are chance and the relationship between child-like and childish drawing. There is an element of chance and randomness in Dumas’ work; referring back to Twombly’s drawings, he too works with the same ideas of chance. As for the relationship between child-like and childish, Twombly’s drawings fall into the first category. In fact, as much as one tries to regain the innocent eye of the child, one will never succeed, because one is not a child anymore. In a sense this reminds me of Picasso’s mission in art as well, i.e. to regain playfulness in the act of drawing. In fact a child is able to create without the concerns and the clichés the adult artist is burdened with. It is in the coloured pencil drawings of Cy Twombly that the line wanders back and forth into the distance, charming the viewer as the marks turn discreet.
We can also see this parallel shift from drawing’s possibilities with materials into painting in the works of Louise Hopkins. Hopkins’ work hovers on the boundary between drawing and painting. She is an artist who describes herself as one who will ‘paint’ rather than as a specified ‘painter’. Hopkins’ delicate approach rejects the traditions of ‘picture making’. The result is certainly a drawing in purely technical terms, but at the same time it may represent the drastic function of line; her process of change and use of line is meticulous, one stroke at a time. She never starts with a blank canvas, avoiding that fear of being confronted with a problem before the image becomes part of the context. For example, in Untitled (the of the), 2002, Hopkins has taken a broadsheet newspaper and drawn over every single key word and image, leaving behind only the connecting words. The words then become isolated and are immediately transformed into a new context. This wonderful image has the feeling of a night sky with nothing but stars to connect with, yet it maintains its aim with undeniable pattern, rhythm and form.
Hopkins’ ground has been inked out, leaving behind its signifiers, the notes, which are still there, but the song has been interrupted by this ‘blocking out’ technique, which seems a repetitive process. ‘White black black white’ explains in its title the process of repetition; Hopkins repeats her actions on the surface, developing its contrasts and rhythm. This process has created a different kind of rhythm, played out by the white circles and lines framing the musical notation. This seemingly repetitive action is merely Hopkins’ aesthetic decision to highlight specific points within the page and thereby compose her personal and original tune. This method appears to be, as said, painting and drawing in reverse: the existing material and images are systematically covered up. What is interesting in this work is the idea of a memory; the surface is a memory, and is yet to vanish. The more ink that is added, the less of its original information she retains. Once again this process of change relates to drawing as ideas and drawing as memory; Hopkins’ time-consuming alteration of the image represents the processes within drawing. As discussed in relation to Hopkins’ work, there is congruence here with Robert Rauschenberg’s drawing ‘Erased de Kooning’, 1953. Here the drawing has been removed as part of the progression of drawing.
There is a clear conceptual starting point here with both artists, and once again it relates to memory. The initial image has been completely removed; however, its existence is still evident. Drawings are often made and then destroyed because of a perceived lack of success in the drawing; here that process of change is made explicit. The existence of the drawing has shifted from being obvious to being uncertain, and from the title of the work we cannot help but imagine what was once there. In relation to Hopkins's work, both artists are 'drawing' attention to what they have taken away, creating new possibilities for the stereotypical image. There is a significance in the movement from drawing to painting: in some sense the painting is once again the drawing. Although we would not otherwise understand that this was once a drawing, the title allows that understanding: text and image. Text and image represent a personal commentary on concerns that shape much recent art.
Chapter Two: Drawing into Sculpture, 'Memories'
'Drawing is analytical but it is also expressive in its own right. It has a duty to bear witness, not simply by making a representation of something, but by taking things apart and reassembling them in a way that makes new connections; it is entirely experimental.' - Antony Gormley
This chapter will discuss sculpture and drawing as a way to discover ideas. Joseph Beuys would have had 'false conceptions running through his mind if he hadn't made drawings'. Drawing in this case relates to drawing as the expression of ideas. Drawing, in these terms, would exist differently in real space than a sketched or painted one.
"With the situation of postmodernism, practice is not defined in relation to a given medium - sculpture - but rather in relation to the logical operations on a set of cultural terms, for which any medium - photography, books, lines on walls, mirrors, or sculpture itself - might be used. Thus the field provides both for an expanded but finite set of related positions for a given artist to occupy and explore."
The sculptural work is physically present and the space it exists in is identical with real space. Drawing and painting, by contrast, tell stories: stories from the artist and stories that we are allowed to fabricate. Whether or not reality and fiction can be so classified, drawing does extend into those dream dimensions that seem unattainable for sculpture. Drawing, as we know it, is not dedicated to any particular medium; after the breakdown of modernism it seemed that artists became less concerned with the properties of a specific medium. Indeed, artists would go against convention. As Stuart Morgan comments on Louise Bourgeois's work,
"For an artist with no fixed style or material or medium, only one rule seems to apply, and that is that there are no rules. No rules, at least, which cannot be broken."
Despite all gloomy prognoses of the end of freehand drawing, the strengths of drawing - being able to develop, test and vary an idea with the greatest possible freedom and with an individual touch - have yet to be surpassed. When I think of Auguste Rodin, this prognosis allows me to point out that Rodin falls into another category outlined by Berger, in this case drawing from observation rather than memory. Rodin's important synthesis defined the importance of the body in order to bring out purity. The artist's drawings fall under different categories: drawings as preparations for sculptures, drawings as observational exercises per se, and drawings from imagination. His approach to drawing as a sculptor, in his black drawings, is visible in his use of three-dimensionality achieved through chiaroscuro. It is interesting that Rodin, an artist from a traditional period within art, nevertheless felt it necessary to go back to observation as his drawings became unrecognisable to him: 'I realised my drawings were too divorced from reality; I started all over again, and worked from my life models.' To summarise, Rodin used drawing to work out his sculptures through observation; the artists that I will now discuss differ in terms of their practice, as it seems fair to say that drawing is now used as an excursion away from reality. This observation points to the way in which contemporary artists such as Rachel Whiteread have used drawing to form their compositions and as a tool for expressing the object or structure with all of its possibilities. These drawings would initially start as plans, and without these plans 'false conceptions' of the work would become apparent. Whiteread draws with a sculptor's mind, for she follows that sense peculiar to making wood or stone sculptures.
'The drawing is seen as a field co-extensive with real space, no longer subject to the illusion of an object marked off from the rest of the world.'
The space of illusionism can change and connect with the space of the world; in doing so it loses its objectivity and becomes more subjective, accessible only to the individual's raw perception. Furthermore, drawing dedicates itself to the space within. Drawing within the space was a crucial process for Whiteread, who would redraw the entire space to understand and refine her ideas. Consider, for instance, Floor Study, 1994, ink and correction fluid on paper, 46 x 34 cm.
This wonderful drawing evokes the movement and repetition that can be seen in the sculptures she makes. Related to such work is Louise Bourgeois, who uses her drawings as a way of sketching forth ideas. Her memories are her inspiration; she draws a sense of her childhood from memory, and this way of drawing is not systematically correct, as there is no end to the line. It was only a matter of time before drawing could be viewed as an opportunity to develop the traditions and hold to the conventions in only a symbolic sense. Artists would now explore the imagination as they moved without restraint between media. The Insomnia Drawings by Louise Bourgeois are a series of two hundred and twenty drawings that contain the major themes of her work. Very important in the Insomnia Drawings is the link between drawing and words. The artist expressed her ideas about childhood fears and memories through drawing; drawing became the channel through which to exorcise her fears. In my opinion her drawings are described mainly from a psychoanalytical point of view. Bronfen, in The Insomnia Drawings, suggests that the artist's drawings can be divided into two main categories: on the one hand abstract and geometric, on the other figurative and realistic. Marie-Laure Bernadac elucidates that 'the abstract drawings come from a deep need to achieve peace, rest and sleep, they relate to unconscious memories', whereas the realistic drawings 'represent the conquest of negative memory, the need to erase and get rid of...'. I found the distinctions that Marie-Laure draws between realistic and abstract drawings interesting; however, in my opinion the drawings described as 'realistic' could not be described as such in the conventional sense. I see them as more of a dreamt reality. In this sense Bourgeois's drawings are successful in expressing her ideas, for instance in the work 'femme maison', where the link between the female body and the house is expressed in a simple and effective way. If we look closely, as though we were discovering the secret poetry within Twombly's paintings, we see that Bourgeois uses words, which are generally used to express ideas; in her drawings words become drawings themselves. Furthermore, the use of words as an aesthetic element functions to challenge the separation of written word and visual language; in fact the artist expresses them as a whole. Words are also used to represent the banality of the everyday. In other words, every real artist, by means of lines, compels us to recognise what has been drawn; this is the spirit of the subject.

Close to Bourgeois's subject matter would undoubtedly be Tracey Emin, whose work also makes reference to the feminine and the sublime. Emin returns to drawing as the primary means of expressing her abject state of mind and body. Though she employs a vast array of media such as film, sculpture and performance, it is drawing that satisfies her confessional practice, a constant presence within her work. The 'line' takes control over the way she makes marks; with thread she can sew the line and engage with the same familiarities that the line has within drawing.
'The difference between a drawing and a picture is that in the latter the subject is worked out for us to look at; in the former I can imagine so many things which are only suggested.'
The possibilities of drawing fall back on its original traditions; there is a constant flux of ideas dealing with the process of change and randomness. Jan Albers, for instance, works in a constant hover between reality and phantasm, between figuration and abstraction. His interest in spiritualism and imagery reveals the intensity of his artistic research and practice.
In this example the exploding 'lines' of colour create a shield covering the figure, which defines the structure of the drawing as repetitive mark making with the pencil. Often his drawings become three-dimensional; they step out into our reality and are also part of Albers's reality. The radiating lines extend the drawings hung on the wall; his work explores the change in drawing and is a vivid example of what drawing can become, or what drawing has become. Drawing, to me, is far greater than something of a secondary nature; it is in fact primary, sometimes leading to the discovery of another.
Progression with their chosen materials
Joseph Beuys, for instance: his drawings can be compared to more recent works within contemporary art, such as those of Monika Grzymala. For both, drawing is an exercise far removed from perfection; it is often obvious where their drawings began and in what sequence the overlapping steps were executed. Furthermore, these artists both deal with time and energy. Beuys's drawings share a complexity of line, yet the basic materials used to create the line define a greater similarity. Their lines are erratic and confusing to look at; Beuys's drawings investigate his ideas, using his memories to make a mark. Grzymala works with tape as her tool to make a mark on a surface.
‘Line is a point taken for a walk’
There is a fearful energy to Monika Grzymala's drawing installations: layer upon layer of black lines scrawling up the gallery walls. They have a similar intensity to Beuys's spontaneous suggestions of form. Beuys's performative actions served to widen the possibilities of what could be considered art. I am defining the themes of change and progression; anything and everything has become possible.
Chapter Three: Drawing and Performance, 'The Body'
This chapter will discuss drawing in relation to performance within the conceptual art world. I will use artists such as Paul McCarthy, Rebecca Horn and Yves Klein as a way of comparing and evaluating the 'extreme' ways in which these artists created drawings, though not in the traditional sense. These artists went against their traditions and explored the possibilities of finding a new way of drawing: an idea of art that reinforces the connections between figure and ground and the physical relation the artists have with them. If we look at the work of Keith Haring it is easy to identify the fusion of postmodern theory, activist practice and the appropriation of the idea of site-specific drawing (performance). The growing eclecticism of styles in the 1980s gave artists the freedom to appropriate style and form from other disciplines such as architectural, fashion and scientific illustration, as well as popular culture. At this time particular artists began to champion drawing again; originally seen as eccentrics within art, they were gradually acknowledged as important individuals. It can be said that drawing, for these artists, could be the only method that allowed them to fully express their thoughts, ideas and emotions. Rebecca Horn, for instance, is a performance artist who creates site-specific installations, a sculptor who also makes films, and whose values of drawing derive from this process of experimentation. The following examinations will portray the artistic style and energy, motifs and aesthetic strategies which reflect the importance of drawing, and demonstrate why these drawings should be accorded far greater importance than they have been in her previous exhibitions and publications. Even in the momentum of drawing, Rebecca Horn fuses conceptual thinking with emotional and performative procedure. Her Pencil Mask from 1972, for instance, considers these aspects, offering a more empathetic demonstration of this approach.
Rebecca Horn challenges the drawing, and the making of the drawing proves a highly concentrated labour. The head mask consists of a lattice of crossing vertical and horizontal straps. Systematically, the actions are prepared to measure spontaneous expression. It can be