## The Life And Work Of Euclid

While studying geometry with Euclid, a youth inquired, after having learned the first proposition, “What do I get by learning these things?” Euclid called a slave and said, “Give him threepence, since he must make a gain out of what he learns.” [8]
Euclid, a Greek mathematician and teacher, changed the course of the world. His work affected not only the prominent scientists who came after him but also the lives of ordinary people, and it contributed to the rise of modern science in western civilization. What is perplexing is that despite his changing the course of the world, we know very little about him. Unlike some other well-known historical figures, Euclid’s influence did not spread simply by fame. Historians do not even know his exact date of birth. To this day, we do not know which continent he was born on, much less the city. Of the little we do know about Euclid, we know that he taught in Alexandria around 300 B.C. [9], and that he wrote, among approximately ten other books, arguably one of the greatest mathematical textbooks in history, The Elements.


The Elements is a geometry textbook that unified all of the previously known principles of geometry. It was unique in that its delivery of those principles was constructive: it explained mathematical principles from the ground up, adding onto what had already been established. Imagine trying to study science if one concept did not flow into the next and everything was garbled and out of order. The Elements solved this problem through careful organization and the logical delivery of its principles. The Elements was not a revolutionary observation or a new and exciting revelation, but rather a book of brilliant deductive reasoning, analysis, and organization. The Elements was explained so well that practically every geometry textbook preceding it was discarded, and as a result the term “Euclidean” was neither necessary nor used for over two thousand years, since there was no other known form of geometry [17].
Concerning Euclid’s deductive reasoning and analysis, his axiomatic systems are most prominent. His axiomatic systems are considered constructive [18]. This means that he never reached any conclusions or spoke about concepts that he had not yet explained to the reader. He arranged the geometric theorems so that they flowed logically from one to the next. [9] For example, he started with the simplest of concepts, such as describing a geometric point, and worked his way up to derived propositions. [16] More specifically, he took a small number of axioms (self-evident logical truths) and deduced many other theorems from them. He even filled in the blanks whenever necessary, supplying the steps missing from others’ processes and developing his own proofs [9]. For example, Euclid proved that there is no largest prime number: take any finite list of primes, multiply them all together, and add 1. The result is not divisible by any prime on the list, so either it is itself a new prime or it has a prime factor not on the list; either way, the list was incomplete. This is accepted as one of the classic proofs in mathematics because of how clear and concise it is. [5]
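Euclid’s reasoning about primes can be illustrated with a short Python sketch (an illustrative aside, not part of Euclid’s own text): take a finite list of primes, form the product plus one, and observe that its smallest prime factor cannot appear on the list.

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime


# Suppose, for contradiction, these were *all* the primes.
known_primes = [2, 3, 5, 7, 11, 13]

# Multiply them together and add 1.
candidate = 1
for p in known_primes:
    candidate *= p
candidate += 1  # 30030 + 1 = 30031

# Dividing candidate by any listed prime leaves remainder 1,
# so its smallest prime factor is a prime NOT on the list.
factor = smallest_prime_factor(candidate)
print(candidate, factor, factor in known_primes)  # 30031 59 False
```

Here 30031 = 59 × 509 is not itself prime, which is exactly why the proof must say “prime or has a new prime factor” rather than “is prime”.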
Euclid put a lot of effort into making it possible for common people, not just professional mathematicians, to understand geometry. How the natural flow and explanatory style of The Elements affected the world is self-evident. Since scientific concepts are easier to understand when they are communicated clearly and concisely and delivered in a logical order, Euclid’s book made it much easier for people to acquire a complete understanding of geometry. As newborns, often among the first things we play with are blocks of different geometric shapes, which helps us develop our minds both visually and mathematically. Euclidean shapes are quite literally everywhere in our society. Unlike calculus, where there is usually a fixed method for solving a given problem, in geometry the Euclidean axioms allow people to solve any one problem in several different ways, which also encourages the development of problem-solving skills.
One of the ways Euclidean geometry has been applied to, and influences, our day-to-day lives is through construction and architecture. Suppose, for example, somebody wants to build a wooden table. To check whether it is square, they could measure each corner of the table to see if it forms a 90-degree angle. With Euclidean geometry, however, they need only measure two of the corners: the properties within The Elements tell us that if two corners are square then the whole shape is square. This is probably obvious to a person of our modern day, but it was not at the time. Unless you are a mathematician, you may not even know to whom such properties can be attributed and may simply consider them common knowledge. Another, less obvious check is to measure the distances between the two pairs of diagonally opposite corners of the table. If the two diagonals are equal (and the opposite sides were cut to equal lengths), the corners must be square. This latter method is a common way for construction workers and home-improvement workers to check their work. There are countless examples like this of how common people can apply the principles of Euclidean geometry in their everyday lives. Euclid’s influence does not end there: Euclidean geometry can be found in modern-day computer graphics, where it is used in mesh generation. A mesh is essentially a combination of geometric polygons or polyhedrons that create the illusion of a curve. Although Euclidean geometry is widespread within western civilization, in some developing countries houses are constructed as lop-sided, indeterminate shapes, a real-life example of what our architecture might have looked like without Euclid’s influence. [4]
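The diagonal trick can be sketched numerically in Python (a modern illustration with assumed coordinates, not anything from The Elements): compare the two diagonals of a four-cornered top.

```python
import math


def diagonals_equal(corners, tol=1e-9):
    """corners: four (x, y) points in order around the table top.
    In a parallelogram, equal diagonals imply every corner is square.
    math.dist (Python 3.8+) returns the Euclidean distance."""
    a, b, c, d = corners
    return abs(math.dist(a, c) - math.dist(b, d)) <= tol


# A true rectangle: both diagonals are sqrt(20), so corners are square.
print(diagonals_equal([(0, 0), (4, 0), (4, 2), (0, 2)]))   # True
# A skewed parallelogram: diagonals differ, so corners are not square.
print(diagonals_equal([(0, 0), (4, 0), (5, 2), (1, 2)]))   # False
```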
It is fair to say that the study of Euclid’s book was one of the main contributing factors to the Scientific Revolution and, subsequently, the rise of science in Europe rather than in Asia. The Elements made the concept of one principle being built upon another glaringly obvious and, over the course of time, it came to be considered common knowledge in western civilization. Of course, scientists such as Newton, Copernicus, Kepler, and Galileo played significant roles as well [9], but as Sir Isaac Newton said, “If I have seen further it is by standing on the shoulders of giants” [21]. Euclid’s book provided not just a “shoulder” but an entire foundation built of giants’ shoulders that would otherwise have been scattered and disorganized. This solid base of knowledge allowed western civilization to reach new heights. For example, in Isaac Newton’s book, the Mathematical Principles of Natural Philosophy, many of the proofs are set in a “geometric form” similar to those found in The Elements. [12]
As with any great work of science, The Elements allows others to build upon it or advance into new areas of discovery. Some, such as Girolamo Saccheri, tried to disprove or find flaws in Euclid’s axioms. Saccheri was an Italian mathematician who in 1733 came close to discovering a form of non-Euclidean geometry. He studied for years to find a flaw in Euclid’s work and was supposedly on the verge of a breakthrough, but gave up before his work came to fruition. It wasn’t until over a hundred and fifty years later, in 1899, that a German mathematician by the name of David Hilbert published another set of geometric axioms that differed from Euclid’s. [13] Non-Euclidean geometry allows us to describe physical space in new ways. Following Hilbert came another German, Albert Einstein. Einstein recalled receiving two gifts that had a particular influence on him as a child: a magnetic compass and Euclid’s Elements. He referred to The Elements as the “holy little geometry book”. [3]
Another example of a great scientist influenced by Euclid is Galileo Galilei. In his old age Galileo told his biographers that while attending the University of Pisa he would drop in on the lectures on Euclid that Ostilio Ricci was giving to the court pages. These lectures were open only to members of the court, so he would try to stay quiet whenever he attended them. After a while his interest in Euclid got the better of him, and he approached Ricci to ask him questions about Euclid. Ricci noticed Galileo’s talent for mathematics and eventually became his teacher. Although Galileo was supposed to be at college to study medicine (Galen), he secretly studied mathematics (Euclid) instead. Galileo later used Euclid’s Book Five, Definition Five, to show how bodies of arbitrary weight have weights directly proportional to their volumes. [2] This is one of the best examples of how influential Euclid’s work was to anybody with a mind for mathematics, and of how he changed the course of history by capturing the interest of a man such as Galileo.
Euclid’s work also influenced philosophers such as Benedict Spinoza, a prominent philosopher of the 17th century. Spinoza wrote the ambitious philosophical book Ethics, in which he attempts to provide a coherent view of the universe and our place in it. To explain such concepts he used Euclid’s style of delivery, complete with axioms and propositions. Speaking of the system within his book and the style in which he chose to present it, Spinoza said that it was “demonstrated in geometrical order”. [23] Philosophical books were usually written quite differently; René Descartes’ Meditations, for instance, was written like a diary.
When it comes to mathematicians, I think every mathematician alive since the time of Euclid must have been influenced by his work in some form or another, but among the most prominent, Euclid specifically influenced the work of Bertrand Russell, Alfred North Whitehead, Blaise Pascal, Marin Mersenne, and Adrien-Marie Legendre. Interestingly enough, Bertrand Russell, an English 20th-century mathematician and logician, used Euclid’s work to push mathematics to the next level, explaining in his book An Essay on the Foundations of Geometry [11] how Euclidean geometry was being replaced by more advanced forms of geometry. Russell and Whitehead co-authored the epochal Principia Mathematica, in which they referenced Euclid a number of times as evidence in their work. Pascal, a 17th-century French mathematician, received a copy of Euclid’s Elements as a boy; before the age of 13 he had proven the 32nd proposition of Euclid and discovered a flaw in René Descartes’ geometry [25]. Mersenne, also a 17th-century French mathematician, used Euclid’s work on prime numbers to develop his own “forms”, making it easier to find large prime numbers. Prime numbers are important to modern-day society because they are used in cryptographic software security systems: large prime numbers can be built into coding schemes that are difficult to break. Legendre, a 19th-century French mathematician, wrote his most famous book, Eléments de Géométrie, based entirely on The Elements; in it he sought to simplify Euclid’s propositions even further. Eléments de Géométrie was used in elementary school classrooms for over 100 years. [13][24][6]
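Mersenne’s form can be sketched in a few lines of Python (an illustrative aside; the naive trial-division test here is only suitable for small numbers): candidates of the form 2^p − 1 with p prime are checked, and not all of them turn out to be prime.

```python
def is_prime(n):
    """Naive primality test by trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


# Mersenne's form: numbers 2**p - 1 with p prime. The form narrows
# the search for large primes, but does not guarantee primality:
# p = 11 gives 2047 = 23 * 89, which is composite.
for p in [2, 3, 5, 7, 11, 13]:
    m = 2**p - 1
    print(p, m, is_prime(m))
```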
Euclid also influenced politicians such as Abraham Lincoln. As a lawyer traveling on horseback, Lincoln would carry a copy of Euclid’s Elements in his saddlebag. According to his law partner, Lincoln would lie on the floor for hours at night studying the Elements by lamplight. [5] He was a great admirer of the logical delivery of information that The Elements contained and used Euclid’s systematic approach in many of his speeches. It is no coincidence that the phrase “dedicated to the proposition” bears such striking similarities to Euclid’s axioms. Lincoln, speaking of his study of Euclid, said,
“In the course of my law reading I constantly came upon the word ‘demonstrate’. I thought at first that I understood its meaning, but soon became satisfied that I did not. I said to myself, What do I do when I demonstrate more than when I reason or prove? How does demonstration differ from any other proof?
I consulted Webster’s Dictionary. They told of ‘certain proof,’ ‘proof beyond the possibility of doubt’; but I could form no idea of what sort of proof that was. I thought a great many things were proved beyond the possibility of doubt, without recourse to any such extraordinary process of reasoning as I understood demonstration to be. I consulted all the dictionaries and books of reference I could find, but with no better results. You might as well have defined blue to a blind man.
At last I said,- Lincoln, you never can make a lawyer if you do not understand what demonstrate means; and I left my situation in Springfield, went home to my father’s house, and stayed there till I could give any proposition in the six books of Euclid at sight. I then found out what demonstrate means, and went back to my law studies.” [1][5]
The astronomers Johannes Kepler and Nicolaus Copernicus were also influenced by Euclid’s work. In his approach to astronomy, Kepler depended almost entirely on Euclid. Kepler, much like Galileo, studied Euclid while attending a university (Tübingen). A devout Lutheran, Kepler considered Euclidean geometry to be the only geometry that could be applied to the heavens and refused to use any other form, regarding such forms as heretical. He developed a proof concerning planetary motion based entirely on propositions found in The Elements [22]. Copernicus used Euclid’s work on optics as evidence in his book On the Revolutions of the Celestial Spheres, which has been called the “starting point of modern astronomy and the defining epiphany that began the scientific revolution”.
None of these great men of science used Euclid’s work as evidence simply because he was well known or famous for doing something exciting and spectacular. It was the intellectual quality of Euclid’s work that made the difference. We do not know enough about Euclid to either love or hate him, and unless you happen to be a mathematician, his work may not seem awe-inspiring. Nevertheless, Euclid managed to affect some of the most important figures of the Scientific Revolution by setting the foundations necessary for the development of modern science.
Sources:
1. The Lincoln year book, written by Abraham Lincoln, 1809-1865, passage 32
2. Galileo at Work: His Scientific Biography, written by Stillman Drake, pages 2-3
3. Einstein as a Student, written by Dudley Herschbach, page 3
4. How To Use Euclidean Geometry, written by Henri Bauholz, http://www.ehow.com/how_4461018_use-euclidean-geometry.html
5. Euclid, Math Open Reference, http://www.mathopenref.com/euclid.html
6. Great Scientists: from Euclid to Stephen Hawking, written by John Farndon, 2007
7. A Chronicle of Mathematical People, written by Robert A. Nowlan
8. Geometry Quotes, History of Mathematics Archive, http://www-history.mcs.st-and.ac.uk/~john/MT4521/Lectures/Q1.html
9. The 100: A Ranking Of The Most Influential Persons In History, written by Michael H. Hart, 2000
10. Encyclopedia of World Biography. Euclid
11. The Teaching of Euclid, written by Bertrand Russell, http://www-history.mcs.st-and.ac.uk/Extras/Russell_Euclid.html
12. Isaac Newton, Wikipedia, http://en.wikipedia.org/wiki/Isaac_Newton
13. Mathematicians Are People, Too: Stories from the Lives of Great Mathematicians, written by Luetta Reimer & Wilbert Reimer, 1990
14. The Beginnings of Western Science: The European Scientific Tradition in Philosophical, Religious, and Institutional Context, Prehistory to A.D. 1450, written by David C. Lindberg, 2008
15. Mathematics: From the Birth of Numbers, written by Jan Gulberg, 1996
16. Euclid’s Elements, written by D.E. Joyce, http://aleph0.clarku.edu/~djoyce/java/elements/elements.html
17. Euclid, Wikipedia, http://en.wikipedia.org/wiki/Euclid
18. Axiomatic Systems for Geometry, written by George Francis, 2002
19. The Thirteen Books of the Elements, written by Euclid / Thomas L. Heath
21. Newton: Understanding the Cosmos, New Horizons, Letter from Isaac Newton to Robert Hooke, 1676, as transcribed by Jean-Pierre Maury, 1992
22. KEPLER’S PLANETARY LAWS, written by A. E. Davis, http://www-history.mcs.st-and.ac.uk/HistTopics/Keplers_laws.html
23. Spinoza and Jefferson, The Teaching Community, http://teachingcompany.12.forumer.com/viewtopic.php?t=2147
24. A History of Mathematics, written by Carl B. Boyer, 1985
25. The History of Computing Project, Blaise Pascal, http://www.thocp.net/biographies/pascal_blaise.html

## Content and Process Theories of Work Motivation

Work motivation theories can be broadly classified as content theories and process theories. The content theories are concerned with identifying the needs that people have, how those needs are prioritized, and the types of incentives that drive people to attain need fulfillment. Maslow’s hierarchy of needs theory, Frederick Herzberg’s two-factor theory and Alderfer’s ERG needs theory fall in this category. Although such a content approach has logic, is easy to understand, and can be readily translated into practice, the research evidence points out limitations: there is very little research support for these models’ theoretical basis and predictability. The trade-off for simplicity sacrifices true understanding of the complexity of work motivation. On the positive side, however, the content models have given emphasis to important content factors that were largely ignored by the human relationists. In addition, Alderfer’s ERG needs theory allows more flexibility, and Herzberg’s two-factor theory is useful as an explanation for job satisfaction and as a point of departure for job design.


The process theories are concerned with the cognitive antecedents that go into motivation and with the way they are related to one another. The theories given by Vroom and by Porter and Lawler, equity theory and attribution theory fall in this category. These theories provide a much sounder explanation of work motivation. The expectancy model of Vroom, and the extensions and refinements provided by Porter and Lawler, help explain the important cognitive variables and how they relate to one another in the process of work motivation. The Porter-Lawler model also gives specific attention to the important relationship between performance and satisfaction. A growing research literature is somewhat supportive of these expectancy models, but conceptual and methodological problems remain. Unlike the content models, the expectancy models are relatively complex and difficult to translate into actual practice, and they have so far failed to meet the goals of prediction and control.
Motivation Theory 1 – Adams’s Equity Theory of Work Motivation
The theory explains that a major input into job performance and satisfaction is the degree of equity or inequity that people perceive in work situations. Adams depicts a specific process of how this motivation occurs.
Inequity occurs when a person perceives that the ratio of his or her outcomes to inputs and the ratio of a relevant other’s outcomes to inputs are unequal.
Our Outcomes / Our Inputs = Other’s Outcomes / Other’s Inputs → Equity
Our Outcomes / Our Inputs > Other’s Outcomes / Other’s Inputs → Inequity (over-rewarded)
Our Outcomes / Our Inputs < Other’s Outcomes / Other’s Inputs → Inequity (under-rewarded)
Both the inputs and the outcomes of the person and of the other are based upon the person’s perceptions, which are affected by age, sex, education, social status, organizational position, qualifications, how hard the person works, and so on. Outcomes consist primarily of rewards such as pay, status, promotion, and intrinsic interest in the job. Equity sensitivity is the ratio based upon the person’s perception of what he or she is giving (inputs) and receiving (outcomes) versus the ratio of what the relevant other is giving and receiving. This cognition may or may not be the same as someone else’s observation of the ratios, or the same as the actual situation.
If the person’s perceived ratio is not equal to the other’s, he or she will strive to restore the ratio to equity. This striving to restore equity is used as the explanation of work motivation. The strength of this motivation is in direct proportion to the perceived inequity that exists.
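The ratio comparison at the heart of the theory can be sketched in Python (the numeric inputs below are hypothetical perceived values invented for illustration, not data from the theory):

```python
def equity_status(my_outcomes, my_inputs, other_outcomes, other_inputs):
    """Compare perceived outcome/input ratios. All four values are
    the person's perceptions (pay, status, effort, hours, ...),
    not objective measurements."""
    mine = my_outcomes / my_inputs
    other = other_outcomes / other_inputs
    if mine > other:
        return "inequity (over-rewarded)"
    if mine < other:
        return "inequity (under-rewarded)"
    return "equity"


# Equal ratios: no motivation to change anything.
print(equity_status(50, 10, 50, 10))  # equity
# A higher own ratio: perceived over-reward.
print(equity_status(80, 10, 50, 10))  # inequity (over-rewarded)
# A lower own ratio: perceived under-reward, the stronger motivator.
print(equity_status(30, 10, 50, 10))  # inequity (under-rewarded)
```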
Research suggests that individuals engage in illegal behaviors to maintain equity in relationships, either with their employing organization or with other people (Greenberg, 1990).
The theory was later expanded with the concept of “organizational justice”. Organizational justice reflects the extent to which people perceive that they are treated fairly at work. It identifies three components of justice: distributive (the perceived fairness of how resources and rewards are distributed), procedural (the perceived fairness of the process and procedures used to make allocation decisions) and interactional (the perceived fairness of the decision maker’s behavior in the process of decision-making) (Cropanzano, Rupp, Mohler and Schminke, 2001).
Critiques:
Equity theory is descriptive and it reflects much of our everyday experience. As a theory however equity is only partial in analysis and as a predictor. There are many societal and institutional variables (inequalities) that we all navigate. The theory ignores people’s natural resilience, their competitiveness, selflessness and selfishness, their ethical dilemmas in decision-making and their passions.
It does not adequately explain interactions in close relationships such as marriage or “emotional labor” – where we may provide care to others at a burdensome cost of declining personal well-being and self-denial. Norms of equity and reciprocity are often discounted in close and romantic friendships or where there are deep family bonds.
In the social exchanges of business, casual, or stranger relationships, there may be more of a dominant assumption that inputs are offered with the expectation of a like response. There is more of a formal contract of tangible and intangible reward: a promise unfulfilled, without proper reciprocity, incurs a debt of honor; a promise is broken. In our communities, obligations of reciprocal response operate. We are expected to apply the Golden Rule and to help where we can, as ably demonstrated by the Parable of the Good Samaritan.
Social exchange theory assumes rational, calculated action involving an expected pay-off. We do not always act rationally. Many will not be as selfish as rational action may suggest. Indeed our reward may be the inner glow of respecting oneself and living to one’s own values. Such altruism, albeit self-referential, does not sit easily under the assumptions of the “rational, economic-person” model.
Implications
It is necessary to pay attention to what employees perceive to be fair and equitable. For example, in my company one of my colleagues was assigned to a project that frequently required him to work outside business hours. For a month and a half he worked three days a week at the office and two days at home. This caused others to start working from home during business hours.
Allow employees to have a “voice” and an opportunity to appeal. Organizational changes, promoting cooperation, etc. can come easier with equitable outcomes.
Management’s failure to achieve equity can be costly for the organization. For example, one of my technical team members was not very competent: he took double the time the others needed to complete any given piece of work. Management failed to take any action; instead, the others were given more work. Eventually even the competent workers took it easy to restore equity, causing project delays.
Motivation Theory 2 – Vroom’s Expectancy Theory of Motivation:
Expectancy theory provides a framework for analyzing work motivation which is eminently practical. It provides a checklist of factors to be considered in any managerial situation, and it points to the links between the relevant factors and the direction these factors tend to follow in their interrelationships (Tony J. Watson, Routledge & Kegan Paul, 1986).
Expectancy theory holds that people are motivated to behave in ways that produce desired combinations of expected outcomes. It can be used to predict motivation and behavior in any situation in which a choice between two or more alternatives must be made (Kreitner R. & Kinicki A., McGraw-Hill, 7th Edition). Vroom gave the following equation of motivation:
Motivation (M) = Valence (V) x Expectancy (E)
Valence stands for the preference of an individual for a particular outcome. Thus, when an individual desires a particular outcome the value of V is positive. On the other hand when the individual does not desire a certain outcome, the value of V is negative.
The value of expectancy ranges between zero and one. When a certain event will definitely not occur the value of E is zero. On the other hand when the event is sure to occur the value of E is one.
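Vroom’s equation is simple enough to express directly in code (a minimal sketch; the numeric values below are hypothetical illustrations, not measurements from any study):

```python
def motivation(valence, expectancy):
    """Vroom's expectancy model: M = V * E.
    valence: how strongly the outcome is desired (negative if undesired).
    expectancy: perceived probability the outcome occurs, from 0 to 1."""
    if not 0.0 <= expectancy <= 1.0:
        raise ValueError("expectancy must lie between 0 and 1")
    return valence * expectancy


# An outcome the employee strongly desires (V = 1.0) but thinks
# unlikely (E = 0.2) yields less motivation than a moderately
# desired outcome (V = 0.5) seen as certain (E = 1.0).
print(motivation(1.0, 0.2))  # 0.2
print(motivation(0.5, 1.0))  # 0.5
```

The product form captures the theory’s key claim: motivation collapses to zero if either the outcome is not valued or it is seen as unattainable.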
Since its original conception, the expectancy theory model has been refined and extended many times. The best known refinement is the Porter-Lawler model. Although conventional wisdom argues that satisfaction leads to performance, Porter and Lawler argued the reverse: if rewards are adequate, high levels of performance may lead to satisfaction. In addition to the features included in the original expectancy model, the Porter-Lawler model includes abilities, traits, and role perceptions.
Critiques:
Vroom’s theory does not directly contribute to the techniques of motivating people, but it is of value in understanding organizational behavior. It clarifies the relation between individuals and organizational goals. The model is designed to help management understand and analyze employee motivation and identify some of the relevant variables. However, the theory falls short of providing specific solutions to motivational problems.
The theory also does not take into account individual differences based on individual perceptions, nor does it assume that most people have the same hierarchy of needs. It treats what particular employees are seeking in their work as a variable to be investigated. Thus the theory indicates only the conceptual determinants of motivation and how they are related.
Research studies have confirmed the association of both kinds of expectancies and valences with effort and performance. The motivated behavior of people arises from their valuing expected rewards, believing effort will lead to performance, and believing performance will result in desired rewards.
The expectancy theory explains motivation in the U.S. better than elsewhere and therefore may not be suitable for other regions.
Implications
This theory can be used by the managers to:

Determine the primary outcome each employee wants.
Decide what levels and kinds of performance are needed to meet organizational goals.
Make sure the desired levels of performance are possible.
Link desired outcomes and desired performance.
Analyze the situation for conflicting expectations.
Make sure the rewards are large enough.
Make sure the overall system is equitable for everyone.

Motivation Theory 3 – Maslow’s Theory of Hierarchy of Need:
Maslow believed that within every individual there exists a hierarchy of five needs, and that each level of need must be satisfied before the individual pursues the next higher level (Maslow, 1943). As an individual progresses through the various levels of needs, the preceding needs lose their motivational value.
The basic human needs placed by Maslow in order of importance, from the highest down to the most basic, can be summarized as below:

Self-actualization: the desire to become what one is capable of becoming.
Esteem: the needs to be held in esteem both by oneself and by others.
Social: the needs to belong and to be accepted by various groups.
Safety: the needs to be free of physical danger; the safety needs look to the future.
Physiological: the basic needs for sustaining human life itself, such as food, water, warmth, shelter, and sleep.

Maslow in his later work (Maslow, 1954) said:

Gratification of the self-actualization need causes an increase in its importance rather than a decrease.
Long deprivation of a given need results in fixation on that need.
Higher needs may emerge not after gratification, but rather by long deprivation, renunciation or suppression of lower needs.
Human behavior is multi-determined and multi-motivated.

Critiques:
Part of the appeal of Maslow’s theory is that it provides both a theory of human motives by classifying basic human needs in a hierarchy and the theory of human motivation that relates these needs to general behavior. Maslow’s major contribution lies in the hierarchical concept. He was the first to recognize that a need once satisfied is a spent force and ceases to be a motivator.
Maslow’s need hierarchy presents a paradox inasmuch as the theory is widely accepted, yet there is little research evidence available to support it.
It is said that beyond structuring needs in a certain fashion Maslow does not give concrete guidance to the managers as to how they should motivate their employees.
Implications:
The need hierarchy as postulated by Maslow does not appear in practice. It is likely that over-fulfillment of any one particular need may result in fixation on that need; in that case, even after the need is satisfied, a person may still engage in fulfilling it. Furthermore, in a normal human being, no need is ever satisfied entirely; an unsatisfied corner of every need remains, in spite of which the person seeks fulfillment of the higher need. A person may also move on to the next need even though the lower need is unfulfilled or only partly fulfilled.
Conclusion
No single motivation theory can suffice in today’s workplace. Each motivational theory has its pros and cons, and a theory that gets the highest performance from one employee may not do so for another.
The organization’s workplace has changed dramatically in the past decade. Companies are both downsizing and expanding (often at the same time, in different divisions or levels of the hierarchy). Work is being out-sourced to various regions and countries. The workforce is characterized by increased diversity with highly divergent needs and demands. Information technology has frequently changed both the manner and location of work activities. New organizational forms (such as e-commerce) are now common. Teams are redefining the notion of hierarchy, as well as traditional power distributions. The use of contingent workers is on the rise and globalization and the challenges of managing across borders are now the norm. These changes have had a profound influence on how companies attempt to attract, retain, and motivate their employees.
Yet we lack new models capable of guiding managers in this new era of work. As management scholar Peter Cappelli notes, “Most observers of the corporate world believe that the traditional relationship between employer and employee is gone, but there is little understanding of why it ended and even less about what is replacing that relationship” (Cappelli, 1999). I believe that the existing theories of work motivation and job performance are inadequate for the present era of such a diverse workforce, and that new theories of motivation, commensurate with this new era, are required.

## How Does A Blowpipe Work English Language Essay

CHAPTER 2
The blowpipe, sometimes also called a blowgun or blow tube, has a long but not necessarily well-documented history [6]. A blowpipe is a primitive weapon that has its origins in ancient history. The weapon is constructed from a narrow, hollow, lightweight tube. By blowing air into one end of the blowpipe, a small dart or other projectile is fired from the weapon at speeds on the order of a hundred metres per second. Although very simple, the blowpipe has been used for centuries as an effective hunting weapon around the world. The most common projectile fired by a blowpipe is the dart. Some hunters use a poisoned dart, since the blowpipe is not guaranteed to make a kill with one shot.
2.1.1 How does a Blowpipe Work
The hunter simply placed a dart into one end of the gun, placed his mouth over the opposite end, took aim, and blew [8]. A strong blow of air forced the dart through the tube, hopefully incapacitating a small bird or animal. The velocity of the dart depended upon the length of the tube and the shooter’s lung capacity.


Indigenous tribes in South America and parts of Asia were especially skilful in the use of the blowpipe. Blowpipes do not possess the killing power of a rifle, but their extremely sharp darts can easily pierce skin [8]. Thus, blowpipes were typically a tool for hunting small game. To take larger game, many of the tribes coated their darts with poisons. With this technique the blowpipe was effective against larger animals, and even against humans in warfare.
Tactically, the blowpipe offers a number of significant advantages over other weapons. One distinct advantage is its quietness [7]. With the exception of hand-thrown weapons, no other projectile weapon is as quiet as a blowpipe. From distances of more than a few meters, the blowpipe can hardly be heard at all.
Historical accounts tell of natives who could shoot hummingbirds in flight with their blowpipes, or kill a deer with a poisoned dart at 100 yards [7]. While these stories may be exaggerated, historians writing about witnessing native blowgunners in action invariably regard them with awe. One of the first considerations of any projectile weapon is the velocity with which it delivers its projectiles. Velocity not only affects the power with which a dart or pellet hits, it also determines the practical accuracy of the weapon. Chronograph tests have revealed that, when shot by average shooters, darts can easily approach or exceed velocities of 300 feet per second [7]. This of course varies with the length of the blowpipe, the weight of the dart, and the lung power of the shooter, but it represents a good average. At this velocity, a steel dart at short range will easily penetrate 3/8-inch plywood (see Figure 2.1).
Figure 2.1: A steel dart easily penetrates 3/8-inch plywood
(Source: Janich, Michael D., ‘Blowguns – The Breath of Death’)
2.2 Pneumatic System
2.2.1 Check valve
A check valve (sometimes called clack valve, non-return valve or one-way valve) is a mechanical device, a valve, which normally allows fluid (liquid or gas) to flow through it in only one direction [9].
Check valves are two-port valves, meaning they have two openings in the body, one for fluid to enter and the other for fluid to leave. There are various types of check valves used in a wide variety of applications. Check valves are often part of common household items. Although they are available in a wide range of sizes and costs, check valves generally are very small, simple, and/or cheap. Check valves work automatically and most are not controlled by a person or any external control; accordingly, most do not have any valve handle or stem. The bodies (external shells) of most check valves are made of plastic or metal.
An important concept in check valves is the cracking pressure: the minimum upstream pressure at which the valve will open. Typically a check valve is designed for, and can therefore be specified by, a specific cracking pressure.
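The cracking-pressure behaviour described above can be sketched as a simple predicate. This is a minimal illustration, not from the source; the function name and the example pressures are assumptions.

```python
# Minimal sketch: a check valve passes flow in one direction only, and only
# once the pressure differential across it exceeds the cracking pressure.

def check_valve_open(upstream_kpa, downstream_kpa, cracking_kpa):
    """Return True if the valve opens (forward differential > cracking pressure)."""
    return (upstream_kpa - downstream_kpa) > cracking_kpa

# A valve specified for a 7 kPa cracking pressure:
print(check_valve_open(110.0, 101.3, 7.0))  # differential 8.7 kPa > 7: opens
print(check_valve_open(101.3, 110.0, 7.0))  # reverse flow: stays closed
```

A real valve opens gradually around its cracking pressure; this on/off model only captures the specification threshold.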
Figure 2.2: Check valve. (A) A closed ball check valve; (B) an open ball check valve.
2.2.2 Regulator
The air enters the regulator from the reservoir, travels through the piston, and passes into the firing valve chamber. As the pressure increases, so does the force on the large end of the piston. As the force on the piston increases, the spring behind the piston begins to compress. This process continues until the shaft of the piston contacts the Teflon seat and shuts off the flow of air. When the shot is fired, the air pressure in the firing valve chamber drops and the spring lifts the piston off its seat, allowing high-pressure air to flow into the valve chamber once again, and the cycle is repeated.
Figure 2.3: Regulator
Source: (Amir,2009)
The pressure in the firing valve chamber is determined by the size of the piston head and the strength of the spring. These dimensions vary between manufacturers; among the airgun regulators available today, no two have exactly the same dimensions. There are, however, several regulators available that use this same basic design.
2.2.3 Pressure Reservoirs
The pressure reservoir is used to store the compressed air in a pneumatic blowpipe. It discharges into the firing chamber through a regulator. Air is compressed into the reservoir with a hand pump or a compressor. Figure 2.4 shows the pressure versus stroke in the reservoir.
Figure 2.4: Pressure versus Stroke in Pneumatic Cylinder
(Source: Amir,2009.)
2.2.4 Barrel
Barrels are basically of two types: smoothbore and rifled. Usually, a smoothbore barrel is used for fin-stabilized projectiles.
Figure 2.5: Rifled Versus Smoothbore Barrels
(Source: Amir, 2009)
The purpose of rifling is to stabilize the bullet and increase its accuracy. This is called spin stabilizing, and works because of gyroscopic forces acting on the spinning bullet during flight.
There are various ways to rifle a barrel. The old way was to cut the rifling one groove at a time on a rifling machine. A more modern method is to pull a gang of broaches through the barrel, which cuts all the grooves into the bore simultaneously. Another is to insert a very hard mandrel, which bears the reverse of the intended rifling pattern, into an oversize bore; the outside of the barrel is then “hammer forged” (beaten) to impress the rifling into the bore. A fourth method is to pull a very hard rifling “button” through the bore, turning it as it progresses, which irons (swages) the rifling into the barrel. All of these methods are entirely satisfactory if done properly.
Rifle barrels are usually made from steel alloys called ordnance steel, nickel steel, chrome-molybdenum steel, or stainless steel, depending upon the requirements of the cartridge for which they are chambered (Geoffrey Kolbe, September 1991). The higher the pressure and velocity of a cartridge (pressure and velocity usually go up together), the faster it will wear out a barrel.
The rate of twist, expressed as one turn in so many inches (i.e. 1 in 10″), is designed to stabilize the range of bullets normally used in a particular caliber. It takes less twist to stabilize a given bullet at high velocity than at low velocity. At the same velocity in the same caliber, longer (pointed) bullets require faster twist rates than shorter (round nose) bullets of the same weight and heavier bullets require faster twist rates than lighter bullets of the same shape. It is undesirable to spin a bullet a great deal faster than necessary, as this can degrade accuracy. A fast twist increases pressure and also the strain on the bullet jacket.
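The dependence of the required twist on bullet length can be illustrated with Greenhill’s classic rule of thumb, which the source text does not mention; it is supplied here only as an assumed, well-known approximation for lead-core bullets.

```python
# Hedged sketch (not from the source): Greenhill's rule of thumb estimates the
# rifling twist needed to stabilize a lead-core bullet:
#   T = c * d^2 / l
# with twist T, bullet diameter d, and bullet length l all in inches, and
# c = 150 (a constant of roughly 180 is often used at higher velocities).

def greenhill_twist(diameter_in, length_in, c=150.0):
    """Approximate twist rate in inches per turn."""
    return c * diameter_in ** 2 / length_in

# A .308-inch bullet 1.2 inches long needs roughly a 1-in-12" twist:
print(f"1 turn in {greenhill_twist(0.308, 1.2):.1f} inches")
```

Note how the formula reproduces the text’s claim: a longer bullet of the same diameter yields a smaller T, i.e. a faster required twist.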
2.3 Projectile
A projectile is a body which is propelled with some initial velocity and then allowed to be acted upon by the forces of gravity and possibly drag [10]. Figure 2.6 shows that the maximum upward distance h reached by the projectile is called the height, the horizontal distance travelled x the range (or sometimes distance), and the path of the object its trajectory [11]. If a body free-falls under gravity and is acted upon by the drag of air resistance, it reaches a maximum downward velocity known as the terminal velocity. The study of the motion of projectiles is called ballistics [11].
Figure 2.6: Motion of a Projectile
(Source: scienceworld.wolfram.com)
In ballistics, the easiest way to describe a trajectory is by its x- and z-components, with the z-component being affected by local gravity. Ignoring air resistance, for a particle fired from the origin at time t = 0 with initial velocity v₀ at an initial angle θ to the x-axis, the trajectory of the particle is described by

x = v₀ cos θ · t (1)

z = v₀ sin θ · t − ½ g t² (2)

where t is the elapsed time and g is the gravitational acceleration, and its velocity components are

vₓ = v₀ cos θ (3)

v_z = v₀ sin θ − g t (4)
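Equations (1)–(4) can be evaluated numerically. The sketch below assumes standard drag-free projectile motion from the origin; the example muzzle velocity is illustrative, not a value from the source.

```python
import math

# Drag-free projectile motion, equations (1)-(4): position and velocity of a
# particle fired from the origin with speed v0 at angle theta to the x-axis.

def trajectory(v0, theta_deg, t, g=9.81):
    """Return (x, z, vx, vz) at elapsed time t."""
    theta = math.radians(theta_deg)
    x = v0 * math.cos(theta) * t
    z = v0 * math.sin(theta) * t - 0.5 * g * t ** 2
    vx = v0 * math.cos(theta)
    vz = v0 * math.sin(theta) - g * t
    return x, z, vx, vz

# A dart leaving the tube at 90 m/s, fired at 45 degrees (ignoring drag):
v0, theta = 90.0, 45.0
t_flight = 2 * v0 * math.sin(math.radians(theta)) / 9.81  # time to return to z = 0
x, z, vx, vz = trajectory(v0, theta, t_flight)
print(f"range = {x:.1f} m")  # equals v0^2/g at 45 degrees
```

In reality drag dominates a light dart’s flight, so this vacuum range vastly overstates what a real blowpipe achieves; the equations describe only the gravity-limited ideal.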
2.3.1 Bullet
A cartridge is composed of a casing containing an explosive powder charge which, on firing, forces the end projectile element out at speeds of up to 1500 metres/second, depending upon the ammunition and the type of gun used [12]. The word “bullet” is sometimes used to refer to ammunition generally, or to a cartridge, which is a combination of the bullet, case/shell, powder, and primer.
2.3.2 Design
Generally, bullet shapes are a compromise between aerodynamics, interior ballistic necessities, and terminal ballistics requirements.
Table 2.1: Effect of bullet design on ballistics

Internal ballistics
- Bullets must first form a seal with the gun’s bore; if a strong seal is not achieved, gas from the propellant charge leaks past the bullet, reducing efficiency.
- The bullet must also engage the rifling without damaging the gun’s bore.
- Bullets must have a surface which will form this seal without causing excessive friction.
- Bullets must be produced to a high standard, as surface imperfections can affect firing accuracy.

External ballistics
- The primary factors affecting the aerodynamics of a bullet in flight are the bullet’s shape and the rotation imparted by the rifling of the gun barrel.
- Rotational forces stabilize the bullet gyroscopically as well as aerodynamically; any asymmetry in the bullet is largely cancelled as it spins.
- With smooth-bore firearms, a spherical shape was optimum because, no matter how it was oriented, it presented a uniform front; unstable bullets tumbled erratically and provided only moderate accuracy.
- Another method of stabilization is to place the center of mass of the bullet as far forward as is practical, as in a shuttlecock; this allows the bullet to fly front-forward by means of aerodynamics.

Terminal ballistics
- The outcome of the impact is determined by the composition and density of the target material, the angle of incidence, and the velocity and physical characteristics of the bullet itself.
- Bullets are generally designed to penetrate, deform, and/or break apart.
- For a given material and bullet, the strike velocity is the primary factor determining which outcome is achieved.
2.3.3 Common Bullet Types
2.3.3.1 Hollow Point Bullets
Figure 2.7: Cut-through of a hollow-point bullet.
(Source: www.Wikipedia.com)
Expansion, or hollow point, bullets are specialised bullets designed to deform upon impact because of a collapsible space within the projectile tip. The result is that a single projectile will inflict greater overall damage to a target, allowing an increased transfer of kinetic energy compared with a standard bullet. The “benefits” include a decreased risk of ricochet because the overall penetration distance is reduced. However, some of the older ammunition failed to expand on impact as a result of pieces of clothing obstructing the cavity.
Hollow-point bullets are characterized by a small hollow cavity in the nose. They are often used in hunting ammunition to provide a clean and humane kill, reducing crippling. Their use in law enforcement and personal defense ammunition is to enhance the stopping effect and reduce the danger of over-penetration.
Despite recent claims, hollow-point bullets are not specifically designed to cause more injury to victims. Rather, they are designed to transfer energy from the bullet to the target, maximizing the stopping effect and minimizing the unintended consequences if someone must regrettably fire at another human being in self-defense. When such a course of action becomes necessary, the private citizen, law enforcement agent or soldier needs to have the appropriate ammunition. It is simply not possible to design handgun ammunition that adequately does its job under appropriate circumstances and yet cannot be misused by violent felons to their own ends.
2.3.3.2 Full Metal Jacket
Figure 2.8: An example of FMJ bullets in their usual shapes: pointed (“spitzer”) for the 7.62x39mm rifle and round-nosed for the 7.62x25mm pistol cartridges.
(Source: www.Wikipedia.com)
Manufactured to military specifications, such bullets are generally not as accurate as civilian rifle ammunition but are optimized toward the reliable functioning of the firearms for which they are designed [13]. Therefore, they are loaded to moderate operating pressure and are usually equipped with a “full metal jacket” completely enclosing their lead core [13]. The unique requirements of military use, where the wounding of an enemy combatant may be a desirable goal, mandate that military handgun ammunition is, paradoxically, not the best choice for personal defense. This fact is illustrated by the almost universal use of semi-jacketed handgun ammunition by the federal and state agencies and police departments that use defensive handguns: full metal jacketed ammunition simply offers too much possibility of over-penetration of the target, resulting in the danger of injury to innocent bystanders beyond the intended target. Ricochets may also occur since, if such a bullet hits a hard surface, it does not immediately deform and break up as lead or semi-jacketed ammunition usually does.
2.4 Basic Ballistic Theory
Ballistics is the study of the firing, flight, and effects of projectiles [14]. It is the science of how a projectile shot from a weapon behaves [15]. The field of ballistics can be broadly classified into three major disciplines:
Internal ballistics concerns what happens between the cartridge being fired and the projectile leaving the muzzle.
External ballistics is concerned with the flight of the projectile from the muzzle to the target and what happens during the bullet’s flight.
Terminal ballistics describes what happens when the projectile strikes the target.
2.4.1 Internal Ballistics
Internal ballistics studies the events inside the weapon when the primer is detonated, igniting the propellant [16]. From the terminal ballistics aspect, it is relevant to know the internal ballistic factors affecting bullet velocity. Every powder type has its characteristic burning velocity. Burning is actually a controlled explosion, since no external oxygen is required [16].
From the terminal ballistic and tactical points of view, it is important that the powder and primer are as insensitive to external temperature as possible and thus contribute to consistent performance. The combination of the amount of powder, its burning velocity, its burning volume and the bullet’s resistance in the barrel gives a pressure curve depicting how fast and how high the pressure builds up and how fast it subsides. An ideal powder charge burns almost completely before the bullet exits the muzzle. Shortening the barrel will reduce the muzzle velocity V₀ when the same cartridge is used [16]; reducing the powder charge will naturally do so as well. Figure 2.9 shows an example of the relationship between pressure, barrel length and bullet velocity.
Figure 2.9: An example of the relationship between chamber pressure, barrel length and bullet velocity of a 5.56×45 mm cartridge calculated using Broemel QuickLoad software.
When the pressure increases sufficiently high, it pushes the bullet into the barrel. The forces involved cause radial expansion and torsional twist of the barrel as the bullet is forced into the helical rifling.
2.4.1.1 Basic Requirement for a High-speed Gun
The basic factor determining the speed of a projectile propelled from the rifle may be simply obtained by applying Newton’s force equation to the projectile. From the calculation, we can see how long the barrel needs to accelerate the projectile. For this calculation, below is the schematic of projectile during travel in the gun barrel.
Figure 2.10: Diagram of Barrel
(Source: Arnol E. Seigel “The Theory of High Speed Guns”)
The projectile mass is denoted by M, the length of the barrel by L, and the cross-sectional area of the barrel by A. The propellant pressure at the back end of the projectile is denoted by Pₚ. At any instant of time, Newton’s law applied to the projectile gives

M dv/dt = Pₚ A

where v is the projectile velocity. Writing dv/dt = v dv/dx and integrating over the barrel length, the equation becomes

½ M v² = A ∫₀ᴸ Pₚ dx

The average propelling pressure (averaged over the barrel length) is defined as

P̄ = (1/L) ∫₀ᴸ Pₚ dx

Then the projectile (muzzle) velocity becomes

v = √(2 P̄ A L / M)
Figure 2.11: Graph Pressure versus Length of Barrel
(Source: Arnol E. Seigel “The Theory of High Speed Guns”)
In Figure 2.11, the pressure behind the projectile in a conventional gun is plotted as a function of its travel. The rise in pressure from zero to the peak pressure Pₘ results from the burning of the propellant. The rapid pressure decrease thereafter results mainly from propellant inertia, as the propellant gas itself must accelerate to keep pushing the projectile. It is evident from the figure that the average pressure P̄ is considerably below the peak Pₘ for a conventional propellant.
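The muzzle-velocity relation v = √(2 P̄ A L / M), which follows from integrating Newton’s law over the barrel, can be evaluated directly. The bore size, barrel length, dart mass and average pressure below are illustrative assumptions, not values from the source.

```python
import math

# Sketch: muzzle velocity from the average propelling pressure P_avg, bore
# cross-sectional area A, barrel length L, and projectile mass M, via
#   v = sqrt(2 * P_avg * A * L / M)

def muzzle_velocity(p_avg, area, length, mass):
    """All quantities in SI units; returns velocity in m/s."""
    return math.sqrt(2 * p_avg * area * length / mass)

bore_d = 0.010                    # assumed 10 mm bore
A = math.pi * (bore_d / 2) ** 2   # cross-sectional area, m^2
# Assumed 5 bar average air pressure, 0.8 m barrel, 2 g dart:
v = muzzle_velocity(p_avg=5e5, area=A, length=0.8, mass=0.002)
print(f"muzzle velocity ~ {v:.0f} m/s")
```

Because P̄ is the spatially averaged pressure, the same formula applies whether the pressure curve is peaky (conventional propellant) or nearly flat (regulated air), which is why the average rather than the peak pressure governs muzzle velocity.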
2.4.2 External ballistics
Once the bullet exits the barrel, it decelerates under the effects of atmospheric drag [14]. This area of ballistics is known as external ballistics. Here the bullet is subjected to the force of the pressure of the atmosphere through which it flies, the force induced by its spin, and the force due to the acceleration of gravity [18].
Two factors determine the external ballistics of a projectile [17]:
Muzzle velocity – the velocity with which the bullet exits the barrel.
Ballistic coefficient – a measure of the projectile’s ability to retain velocity against air resistance.
2.4.2.1 Ballistic Coefficient
The rate at which velocity decays is a measure of penetration; it is often referred to by means of a quantity called the ballistic coefficient [19]. The ballistic coefficient is significant because it determines the rate at which the projectile slows down, and in conjunction with the muzzle velocity this decides the maximum range (at a given elevation) and the time of flight to any particular distance [20]. The time of flight in turn decides the amount by which the projectile drops downwards, as this happens at a constant rate due to gravity. This is incorporated into the ballistic coefficient, BC, as [19]:

BC = m / (i d²)

where m is the projectile mass, d its diameter, and i a dimensionless form factor. It follows that the deceleration of the projectile is given by [21]:

a = −G(v) / BC

where G(v) is the drag (retardation) function of the standard reference projectile.
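The two quantities discussed above can be sketched numerically. The sectional-density form BC = m/(i·d²) and a gravity-only drop estimate are assumptions of this sketch (the source does not give the formulas explicitly), and the example bullet is illustrative.

```python
# Hedged sketch: ballistic coefficient in the common sectional-density form
# BC = m / (i * d^2), with mass in pounds and diameter in inches, plus the
# constant-rate gravity drop over the time of flight mentioned in the text.

def ballistic_coefficient(mass_lb, diameter_in, form_factor):
    """Dimensionless-by-convention BC; higher BC means slower velocity decay."""
    return mass_lb / (form_factor * diameter_in ** 2)

def gravity_drop(t_flight, g=9.81):
    """Drop (metres) accumulated under gravity during the time of flight."""
    return 0.5 * g * t_flight ** 2

# Assumed example: a 150-grain (150/7000 lb) .308 bullet with form factor ~1.0
bc = ballistic_coefficient(150 / 7000, 0.308, 1.0)
print(f"BC ~ {bc:.3f}")
# Drop over an assumed 0.4 s flight:
print(f"drop ~ {gravity_drop(0.4):.2f} m")
```

This shows the coupling the text describes: a higher BC keeps velocity (and so shortens the time of flight to a given distance), and a shorter time of flight quadratically reduces the gravity drop.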
2.4.3 Terminal Ballistics
Once the target is penetrated, the study is categorized as terminal ballistics: the study of the penetration of a medium denser than air. In other words, it is the scientific study of injuries caused by projectiles and the behavior of these projectiles within human biological tissue [16]. This thesis project will focus solely on terminal ballistics. Several factors affect the terminal ballistic result, collectively termed injury assessment, which are:
Type of the bullet.
Stopping Power
Energy Transfer/ Kinetic Energy
Hydrostatic Shock/ Shock Wave
Taylor Knock Out Theory (KO)
Relative Stopping Power (RSP)
Stopping Power (StP)
2.4.3.1 Wound Ballistics
Wound ballistics is the area of terminal ballistics that studies the injury pattern of a particular bullet. The characteristics of a bullet wound include the depth of penetration, the permanent cavity diameter, the temporary cavity diameter, and bullet fragmentation. Wound ballistics analyzes the potential of a bullet to incapacitate and the underlying mechanisms.
2.4.3.2 Mechanic of Projectile Wounding
In order to predict the possibility of incapacitation with any firearm bullet, an understanding of the mechanics of wounding is required. There are four components of projectile wounding [22]. They are:
Penetration – The tissue through which the projectile passes, and which it disrupts or destroys.
Permanent Cavity – The volume of space once occupied by tissue that has been destroyed by the passage of the projectile. This is a function of penetration and the frontal area of the projectile. Quite simply, it is the hole left by the passage of the bullet.
Temporary Cavity – The expansion of the permanent cavity by stretching due to the transfer of kinetic energy during the projectile’s passage. A missile’s ability to produce a temporary cavity is considered an important component in wound production and degree of destruction [35]. Most researchers agree that the wounding effect of the cavitation phenomenon is only significant at velocities surpassing 300 metres per second [34]. When a missile enters the body, the kinetic energy imparted to the surrounding tissues forces them forward and radially, producing a temporary cavity or temporary displacement of tissues [33]. The temporary cavity may be considerably larger than the diameter of the bullet, and rarely lasts longer than a few milliseconds before collapsing into the permanent cavity or wound (bullet) track (Kirkpatrick, 1988).
Fragmentation – Projectile pieces or secondary fragments of bone which are impelled outward from the permanent cavity and may sever muscle tissue and blood vessels apart from the permanent cavity [23]. Fragmentation is not necessarily present in every projectile round; it may or may not occur, and can be considered a secondary effect [24].
Projectiles incapacitate by damaging or destroying the nervous system, or by causing lethal blood loss. To the extent that the wound components cause or increase the effects of these two mechanisms, the likelihood of incapacitation increases.
2.4.3.3 Energy Transfer
The energy transfer hypothesis states that the more energy that is transferred to the target, the greater the destructive potential. In terminal ballistics, energy is a function of mass and the square of velocity, as given by the kinetic energy equation. Bullet weight and velocity determine the kinetic energy possessed by a projectile, with velocity being the most critical component [31]. A variety of factors are responsible for the amount of kinetic energy lost in the body: “…amount of kinetic energy possessed by the bullet at the time of impact…” (Di Maio, 1985:46), mass, yaw (deviation of the missile from its flight path), caliber or size of bullet, shape, deformation, and density of the tissue being struck [32].
Doubling the mass doubles the kinetic energy, while doubling the velocity quadruples it.
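The mass and velocity scaling stated above follows directly from the kinetic energy equation E = ½ m v². A quick numerical check, with an assumed illustrative bullet:

```python
# Kinetic energy E = 0.5 * m * v^2: doubling the mass doubles E, while
# doubling the velocity quadruples E.

def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy in joules for mass in kg and velocity in m/s."""
    return 0.5 * mass_kg * velocity_ms ** 2

e = kinetic_energy(0.008, 300)          # assumed 8 g bullet at 300 m/s
print(kinetic_energy(0.016, 300) / e)   # double the mass     -> 2.0
print(kinetic_energy(0.008, 600) / e)   # double the velocity -> 4.0
```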
It is the aim of the shooter to deliver a sufficient amount of energy to the target through the projectile. Projectiles such as rifle bullets and high-velocity handgun bullets can over-penetrate; projectiles such as handgun bullets and shotgun pellets can under-penetrate. Projectiles that reach the target with too low a velocity may not penetrate at all. All of these conditions affect energy transfer.
Furthermore, over-penetration is one of the factors in stopping power with regard to energy, because a bullet that passes through the target does not transfer all of its energy to it. Although tissue damage is decreased by the loss of transferred energy on an over-penetrating shot, the resulting exit wound causes increased blood loss and therefore a decrease in the victim’s blood pressure.
The Swiss delegation to the Expert Meeting of the International Committee of the Red Cross presented a Draft Protocol on Small Calibre Weapon Systems (1994) (Prokosch,1995). Recognising that not only bullet expansion but also other factors cause tissue injury, it proposes a limit for the amount of kinetic energy that is released. It suggests prohibiting the use of ‘arms and ammunition with a calibre of less than 12.7 millimetres which from a firing distance of at least 25 meters release more than 20 joules of energy per centimetre during the first 15 centimetres of their trajectory within the human body’.
Under-penetration is also a factor in stopping power. Projectiles that do not transfer enough energy to the target may fail to create a fatal wound cavity. Vital organs may also not be reached, limiting the amount of tissue damage, blood loss, and/or loss of blood pressure.
2.4.3.4 Hydrostatic Shock
Hydrostatic shock describes the observation that a penetrating projectile can produce remote wounding and incapacitating effects in living targets, in addition to the local effects in tissue caused by direct impact, through a hydraulic effect in liquid-filled tissues.
The term can also be described as a ballistic pressure wave, which is the force per unit area created by a ballistic impact. Michael Courtney’s hypothesis states that bullets producing larger pressure waves incapacitate more rapidly than bullets producing smaller pressure waves.
The origin of the pressure wave is Newton’s third law. The bullet slows down in tissue due to the force the tissue applies to the bullet. By Newton’s third law, the bullet exerts an equal and opposite force on the tissue. When a force is applied to a fluid or a viscous-elastic material such as tissue or ballistic gelatine, a pressure wave radiates outward in all directions from the location where the force is applied [27].
The instantaneous magnitude of the force, F, between the bullet and the tissue is given by

F = dE/dx

where E is the instantaneous kinetic energy of the bullet, and x is the instantaneous penetration distance.
Courtney and Courtney believe that remote neural effects only begin to make significant contributions to rapid incapacitation for ballistic pressure wave levels above 3,400 kPa (corresponding to transferring roughly 410 J in 30 cm of penetration) and become easily observable above 6,900 kPa (roughly 810 J in 30 cm of penetration) [28]. Dave Ehrig expresses the view that hydrostatic shock depends on impact velocities above 340 metres per second.
2.4.3.5 Relative Stopping Power (RSP)
As the objective of armed forces is to stop the life-endangering activity of an offender quickly and effectively, the incapacitation or “stopping power” approach has a certain validity. General J. S. Hatcher presented the concept of stopping power in his book “Pistols and Revolvers and Their Use” in 1927, and later “relative stopping power” (RSP) (Hatcher 1935). According to Hatcher, the incapacitation potential of a projectile was proportional to its impact momentum times the bullet’s cross-sectional area [29]:

RSP ∝ m v A F_form

where m v is the bullet’s momentum, A is the cross-sectional area of the bullet, and F_form is the form factor. The U.S. Army expanded Hatcher’s theory by hypothesizing that incapacitation “stopping power” (StP) was a function of the kinetic energy deposited in 15 cm of gelatine tissue simulant (Sturdivan 1969, referred to in Bruchey and Frank 1983a). DiMaio expanded the theory on handgun effectiveness in 1974 (DiMaio 1974, referred to in Bruchey and Frank 1983a; Sellier and Kneubuehl 1994; Kneubuehl 1999).
Typical values of the form factor, F_form, are:
0.70   Fully Jacketed Pointed
0.90   Fully Jacketed Round Nose
1.05   Fully Jacketed Flat Point
1.10   Fully Jacketed Flat Point (Large flat)
1.10   Lead Flat Point (Large Flat)
1.35   Jacketed Softpoint (expanded)
1.10   Hollow Point (unexpanded)
1.35   Hollow Point (expanded)
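Hatcher’s proportionality and the form factors listed above can be sketched as follows. Hatcher’s scaling constants are omitted here, so only relative comparisons between bullets are meaningful; the function name and dictionary keys are illustrative.

```python
import math

# Sketch of Hatcher's relative stopping power: RSP is proportional to
# momentum (m * v) times cross-sectional area A times a form factor.
# Constants are dropped, so only ratios between bullets are meaningful.

FORM_FACTOR = {
    "fmj_pointed": 0.70,
    "fmj_round_nose": 0.90,
    "fmj_flat_point": 1.05,
    "fmj_flat_point_large": 1.10,
    "lead_flat_point_large": 1.10,
    "jsp_expanded": 1.35,
    "hp_unexpanded": 1.10,
    "hp_expanded": 1.35,
}

def rsp(mass, velocity, diameter, bullet_type):
    """Relative stopping power; any consistent unit system works."""
    area = math.pi * (diameter / 2) ** 2
    return mass * velocity * area * FORM_FACTOR[bullet_type]

# At equal mass, velocity and caliber, an expanded hollow point scores
# 1.35 / 0.90 = 1.5 times an FMJ round nose:
ratio = rsp(1, 1, 1, "hp_expanded") / rsp(1, 1, 1, "fmj_round_nose")
print(ratio)
```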
2.4.3.6 Taylor Knockout Theory (KO)
J. Taylor, a British big-game hunter, developed a “knockout value” (KO) in 1948 to describe the effectiveness of hunting ammunition [30].
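The source text does not reproduce Taylor’s formula, so the commonly cited form is assumed here for illustration: bullet weight in grains times velocity in feet per second times bullet diameter in inches, divided by 7000.

```python
# Hedged sketch of Taylor's knockout (KO) value, using the commonly cited
# formula (an assumption; the source does not state it):
#   KO = weight_grains * velocity_fps * diameter_inches / 7000

def taylor_ko(mass_grains, velocity_fps, diameter_in):
    """Taylor KO value; dividing by 7000 converts grains to pounds."""
    return mass_grains * velocity_fps * diameter_in / 7000.0

# An assumed 400-grain .458-inch bullet at 2150 ft/s:
print(round(taylor_ko(400, 2150, 0.458), 1))
```

Unlike kinetic-energy measures, the KO value scales linearly with velocity and includes caliber directly, reflecting Taylor’s emphasis on heavy, large-diameter bullets.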

## Academic and professional work experience goal statement

Valuable work experience has helped me to develop strong architectural expertise along with good leadership and communication skills. This has helped me to tackle complex issues in my field of work and gives me the confidence to pursue postgraduate studies. I am Sruthi Maria George, currently working as a Junior Architect at an architectural consultancy firm.


With a strong desire to create artistic, creative and imaginative building structures, I chose a Bachelor of Architecture for my undergraduate studies. After completing my bachelor’s, my next step was to get a job in a challenging and dynamic environment, and I found one as soon as I returned to my parents, who reside in the United Arab Emirates. Throughout my work tenure so far, I have been working on challenging design projects which emphasize creating healthy and eco-friendly buildings and environments. One of the major requirements to be fulfilled while planning and designing these projects was the attainment of a ‘PEARL 2’ rating, a class of sustainable approach known as ‘ESTIDAMA’, which consists of sustainable planning strategies. This field influenced me to learn more about environmental aspects in order to widen my knowledge and career skills.
With the rise of disasters occurring around the world, we as humans need to be aware of the factors that lead to these calamities. A few of the environmental issues identified are as follows:

Global warming – The climate change recently observed is the main result of global warming. The earth’s ozone layer is being depleted by the increased use of chlorofluorocarbons (CFCs) in refrigerants, propellants and solvents, while the emission of carbon dioxide from the burning of fossil fuels (petrol, diesel, kerosene) and other greenhouse gases warms the atmosphere. This degradation of the ozone layer weakens the filtration of the sun’s ultraviolet radiation, causing the earth’s temperature to rise and having negative impacts on the living beings on earth.
Energy exhaustion – Non-renewable sources such as fossil fuels, natural gas and coal are being depleted through tremendous use over the years, with adverse effects on air quality causing human and environmental problems. These need to be replaced with renewable sources of energy such as solar, wind, hydro, geothermal and biomass energy.
Landfill waste – With the increase in population and human activity, waste disposal is taking place at a higher rate. The adverse effects of landfill waste include pollution of the environment, the emission of methane, a greenhouse gas contributing to the depletion of the ozone layer, and other hazardous impacts on living things and the environment. This needs to be reduced by incorporating waste reduction and recycling strategies.
Threat to ecosystems and endangered species – Biodiversity enhances the productivity of ecosystems. A threat to biodiversity leads to the destruction of the ecosystem, affecting the ecological pyramid and the food web. A recent article that caught my eye described a killing game in Denmark in which local teens hunt dolphins and whales to show that they are adults; these creatures have been driven close to extinction as a result.
Deforestation – Due to the increase in human population and activity, the conversion of forests to non-forest areas for development purposes is on the rise. This contributes to the increase of carbon dioxide in the atmosphere, loss of biodiversity, and soil erosion leading to natural calamities.
Pollution – Air pollution results from the burning of fossil fuels by vehicles and industries, affecting the environment and human health. Contamination of water bodies by the disposal of wastes leads to degradation of the ecosystem and human health issues.

A master’s degree in environmental studies would help me become personally aware of the existing and future environmental issues arising in today’s world. It would help me to understand the issues from a broader point of view and to resolve them by creating environmentally friendly designs contributing to the wellness of the social economy and the benefit of future generations. The main environmental specialization that I would like to pursue is the study of sustainable development, as this combines social, economic and environmental aspects. As this field is going global and is one of the major considerations in today’s world, specializing in it would be of international standard and quality.
The University of Illinois is one of the few reputable institutions in the world to offer an innovative course combining environmental studies and sustainable development as a joint online program, without requiring me to compromise my current working status. Hence, I seek to pursue my higher education here.
If given the chance to pursue my postgraduate studies at the University of Illinois for a Master of Environmental Studies specializing in sustainable development and policy, I would prove myself an asset to the department and the university through my hard work and dedication.
Thanking you.
Sincerely,
Sruthi Maria George

## Politics Essays – Making Democracy Work

Making Democracy Work
A Review of Robert Putnam’s Making Democracy Work
Introduction:
Since its publication in 1993, Robert Putnam’s Making Democracy Work: Civic Traditions in Modern Italy has been hailed for changing the way academics and policy-makers approach the relationship between politics and society. Putnam accomplishes this feat not so much with his compelling arguments, but with the innovative methodology he employs.
Much attention has already been given to the way Putnam combines quantitative and qualitative data in his research; he amalgamates numerical data on Italian institutional performance and civic culture, with the path-dependent historical legacy that predates it. Similarly, much attention has also been focused on the introduction of social capital as a new variable worthy of social scientists’ consideration. Since these topics have already been exhausted in reviews as well as other literature connected to Putnam’s book, this essay will attempt to go a different route.

This essay will primarily argue that Putnam has successfully managed to combine both a structure- and agency-centered approach into a cohesive research design project. Firstly, the structural approach is inherent in Putnam’s study because he is attempting to analyze why Italian regions with the same political structure perform differently. Secondly, using network analysis, Putnam’s social capital and civic culture variables will be understood as being related to agency – and as affecting institutional performance. Finally, the overall strengths and weaknesses that arise from combining the two approaches in a research design project will be highlighted. Overall, despite several unavoidable limitations, in Making Democracy Work Putnam shows that a combined structuration approach can yield a fuller understanding of a particular issue – in this case, Italian institutional performance.
The Study and the Setting:
In 1970 the highly centralized Italian government set up identical regional governmental institutions in each of the country’s twenty regions. The experiment offered Robert Putnam and his colleagues a unique opportunity to analyze institutional performance over time, and what precisely makes government work in a setting where national factors and institutional design are held constant.
Despite the fact that all the Italian regions received identical institutions, the performance of these institutions varied widely across Italy. The discrepancy between the regions – particularly between the North and the South – led Putnam to believe that “social context and history profoundly condition the effectiveness of institutions” (Putnam, 182). Therefore, in the causal argument that Putnam puts forth to explain what affects institutional performance, institutions are framed as both an independent and a dependent variable. That is, even though institutions shape politics, institutions themselves are shaped by social context and history. For this reason, Putnam considers yet another independent variable in his complex causal relationship – civic culture.
Putnam’s Methodology:
Before analyzing how structure and agency unite, and the way in which civic culture is measured in Making Democracy Work, it is worthwhile to take a look at the broader – and overarching – methodological backdrop on the grounds of which Robert Putnam’s study takes root.
The setting for the study, as alluded to above, offered Robert Putnam and his colleagues the opportunity to embark on a twenty-year voyage of inquiry; their choice of vessel, a sub-national comparison. Certainly in the case of Italian institutional performance a sub-national paired comparison is sure to prove more illuminating than a cross-national comparison because one can hold the national context constant. That being said, it is necessary to note that when one considers cultural, historical, economic and/or socioeconomic conditions, there will invariably be cases where greater variation exists within countries than between them (Snyder, 96).
The experience of Italy provides a unique backdrop for Putnam to study institutional performance because many factors are held constant, relatively speaking. Aside from holding institutional design constant, Italy is a far less diverse country than, say, India or even Russia with regard to language, religion, ethnicity, class and caste. Though it might prove hard for Putnam’s methods to travel beyond a Western context and be directly applied, this should not be held against him or discredit his book by any means.
Just because the arguments might have difficulty traveling (and we should note that Putnam’s arguments in Making Democracy Work are the underpinnings of his second book, Bowling Alone: The Collapse and Revival of American Community) does not mean that they should be judged negatively. After all, this is the precise purpose of a sub-national paired comparison – to develop theories or generalizations that one is unable to make through cross-national paired comparisons because of all the intervening variables that cannot be held constant.
Furthermore, Making Democracy Work does not qualify merely as a sub-national paired comparison. Putnam tests his arguments against a broad spectrum. In so doing, he avoids the common problem of selection bias and – derivatively – of false dichotomies. Putnam does not pick and choose the regions he incorporates in his study. Making Democracy Work is extensive in that it includes and considers all of the regions in Italy equally, and measures them against the same criteria (where information permits).
In each region Putnam interprets quantitative data on institutional performance and then analyzes it alongside quantitative data regarding its civic culture. He then pushes the envelope by reaching far-beyond direct causal inference and into history. The historical qualitative data that Putnam accumulates, allows him, ostensibly, to isolate the main factor that leads to variance in institutional performance in Northern and Southern Italy – social capital.
Making Democracy Work benefits from diverse measurements – the indicators used are wide-ranging, innovative and impressive, and provide for a superior demonstration of Putnam’s arguments. In fact, it is the combination of both quantitative and qualitative data that earns Robert Putnam and Making Democracy Work recognition as simultaneously a large-N and a small-N sub-national comparison.
Structural Forces:
Having laid out the methodological framework that Putnam has developed, it is now possible to focus on the structuration approach that he incorporates. The explanation of institutional performance – the dependent variable – is contingent to a certain degree on a structural analysis.
While all the regions in Italy are constrained by the same national structural force – the highly centralized government – the regions are also constrained by their own historical legacies and the structures that have emerged from the past. In this sense, according to Putnam, the history of the North has cultivated an arena/structure much more conducive to proper institutional performance than has the South.
Putnam chooses twelve indicators as evidence of institutional performance, or “good government”. These indicators include: cabinet stability, budget promptness, statistical and information services, reform legislation, legislative innovation, day care centers, family clinics, industrial policy instruments, agricultural spending capacity, local health unit expenditures, housing and urban development, and bureaucratic responsiveness.
Far from agency-centered, the conditions of these indicators are all determined by the structure in which they are situated. Essentially, the greater the influence of the structure, the more predictable the political behaviour is likely to be. Following Putnam’s path-dependent argument that historical legacies shape the structural forces (which come to light from such indicators), it is important to then consider the nature of the historical legacies themselves. In Putnam’s view the historical legacies worth exploring are those of civic culture.

Analyzing the Effects of Agency:
The effects of agency on Italian institutional performance are not analyzed explicitly in Making Democracy Work. Putnam does not look at individual leaders, regional representatives, or even influential citizens in any of Italy’s diverse regions – either contemporarily or historically. However, implicit in his definition of civic culture as the “norms of reciprocity and networks of civic engagement” (Putnam, 167) is an understanding of agency nonetheless. If agency is based on the actions and decisions of a single person, it must also be based on the interactions and collective wills of many people.
A horizontal-network analysis is an ideal approach to take when trying to understand the effects of agency on regional patterns of behavior. From a nominalist point of view the researcher must use a conceptual framework to define the boundaries of the network – or who/what is and is not included in the research agenda.
For his part, Putnam proposes four indicators in which one can find evidence of a civic culture; these indicators include participation in voluntary associations, newspaper readership, referenda turnout, and personalized preference voting (or lack thereof). Even though groups like football clubs are internally heterogeneous and diverse, network analysis helps Putnam to disentangle the inherent complexity and to highlight the important aspects of functioning as a group.
To emphasize the point, the fact that Putnam also correlates these “objective” measures with more opinion-based survey indicators of civic culture shows that he is committed to incorporating the role of agency in his research design. Essentially, he moves from a nominalist to a more realist network analysis by focusing on individuals. More specifically, Putnam shows that network boundaries are established based on the subjective perspectives of the network actors themselves. For this reason, the data in his research is based to a large degree on surveys, questionnaires and interviews.
The difference between the North and the South of Italy therefore, can be expressed in the different types of networks they produce. Putnam considers all of the following: the different types of networks that exist, the organization of the networks, and the individuals within the networks. Relating to the different types of networks, Putnam notices that the density of networks in the North is much greater than in the South.
Not only do more social groups exist in the North, but membership in them is greater and the pattern of ties between the members is stronger. With regards to the networks’ organization, in the North there is a higher frequency of interaction, and a larger amount of emotional investment within the network. Lastly, as far as individuals are concerned, Putnam looks at subjective measures like trust, solidarity, personal closeness and ideological proximity to ultimately discern that in Northern Italy individuals are more likely to enter horizontal-networks and develop a more cohesive civic culture that fosters responsive government and higher institutional performance.
Strengths and Weaknesses of Structuration:
In a sense, Putnam has combined a structural and an agency approach into a single research design. The structuration approach has several strengths and weaknesses worth highlighting, particularly with reference to Making Democracy Work. Perhaps the major benefit of combining the analysis of structure and agency in the case of Italian institutional performance is that Putnam is able to recognize and demonstrate the interplay between the two.
Putnam shows how structures and agents are co-determining and mutually implicating. When assessing the causal relationship between civic culture and Italian institutional performance, the case is made that the two entities are defined by their internal relationship, such that each derives its meaning from the other and has no meaning or basis without it. People produce the structure, and the structure in turn reproduces the people. In other words, agents and structures are ontologically equal in Making Democracy Work.
Inherent in this methodological approach’s greatest strength is also its greatest weakness. One of the major problems with operationalizing the structuration approach is that it is often difficult to design a research strategy that can draw valid causal inferences. As in the case of Making Democracy Work, the difficulty in making inferences lies in determining whether something is a cause or an effect – there has to be a starting point for an analysis.
One inevitably has to choose a bottom-up or top-down approach, treating either agent or structure as ontologically primitive. Robert Putnam, by deeming them ontologically equal, has failed to choose a starting point for analysis. Instead of a parsimonious and simple linear causal relationship, Putnam points to vicious and virtuous circles that have led to contrasting, path-dependent social equilibria (Putnam, 180). Good or bad institutional performance will further continue a history of good or bad civic culture. Moreover, the correlation between civic associations and social capital that Putnam professes is also circular.
While to think purely in terms of linear causation is to do injustice to the overall interconnectedness of the variables, the danger of thinking in terms of equilibria is that one develops a ‘chicken or egg’ scenario. One begins to ask where in history it is right to draw the line when studying Italian civic culture.
Indeed, Putnam’s historical record has become the focus of considerable criticism from scholars. Sidney Tarrow, in “Making Social Science Work across Time and Space”, contends that social scientists go to history with a theory to prove, and do not objectively derive viable generalizations from history. History requires picking and choosing; one must even choose where in history to draw the line before beginning a study. However, if a line can always be drawn back farther one must ask whether cases can really be isolable and independent at all.
For example, can the case not be made that because the North of Italy colonized the South, that the problems of the South are really the problems of the North? Some critics say that it is unfair for Putnam to displace the problem of poor institutional performance on the South and not to consider the possibility of contamination.
However, Putnam can hardly be criticized for this – everything can be understood as ex post facto something else. Irrespective of whether Putnam is right or wrong on where in history he draws his line, Making Democracy Work should be hailed for its attempt to – regardless of its actual success at – combining quantitative and qualitative data, and structure and agency, in creating a complex causal relationship.
Conclusion:
In Making Democracy Work: Civic Traditions in Modern Italy, Robert Putnam has successfully managed to unite both a large-N and a small-N sub-national comparison into a single model of inquiry. Equally impressive, he has successfully managed to combine both a structure- and agency-centered approach into a cohesive research design project. Putnam uses a structural approach to analyze his dependent variable – political institutions – and an agency-centered approach to analyze an independent variable that has an effect on the development of political institutions and their efficacy – civic culture.
In so doing, Putnam manages to turn political institutions into an independent variable too, highlighting the interconnectedness of the two variables. Because of this interconnected, circular nature of Putnam’s argument, his study of Italian institutional performance, though both descriptive and predictive, lacks convincing prescriptive capabilities. Nevertheless, despite its prescriptive shortcomings, Putnam shows that a combined structuration approach can yield a fuller understanding of a particular issue – in this case, Italian institutional performance.
Works Cited
Putnam, Robert D. Making Democracy Work: Civic Traditions in Modern Italy (Princeton: Princeton University Press, 1993).
Snyder, Richard. “Scaling Down: The Subnational Comparative Method,” Studies in Comparative International Development 36:1 (Spring 2001), pp. 93-110.
Works Consulted
Bhattacharyya, Dwaipayan, et al. (eds.) Interrogating Social Capital: The Indian Experience (New Delhi: Sage Publications, 2004).
Furlong, Paul. “Review of Robert Putnam’s Making Democracy Work: Civic Traditions in Modern Italy,” International Affairs 70 (January 1994), p. 172.
Kwon, Hyeong-Ki. “Associations, Civic Norms, and Democracy: Revisiting the Italian Case,” Theory and Society 33 (2004), pp. 135-166.
Levi, Margaret. “Social and Unsocial Capital: A Review Essay of Robert Putnam’s Making Democracy Work,” Politics and Society 24 (March 1996), pp. 45-55.
Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community (New York: Simon and Schuster, 2000).
Sabetti, Filippo. “Path Dependency and Civic Culture: Some Lessons from Italy About Interpreting Social Experiments,” Politics and Society 24 (March 1996), pp. 19-44.
Tarrow, Sidney. “Making Social Science Work Across Space and Time: A Critical Reflection on Robert Putnam’s Making Democracy Work,” American Political Science Review 90 (June 1996), pp. 389-397.

## The Life And Work Of Confucius

Confucius (551 – 479 BCE) was a thinker, political figure, educator and founder of the Ru School of Chinese thought. Confucius was born at Shang-ping, in the state of Lu. His family name was Kong, but his disciples called him Kong-fu-tse (i.e., Kong the Master, or Teacher). His father passed away when he was only three years old, and Confucius’s mother, Yan-she, raised him. During his younger years Confucius showed a love of learning and an expression of awe for the ancient laws of his country.

Confucius was only nineteen years old when he married, but he divorced his wife after only four years of marriage so that he could have more time for his studies and the performance of his public duties. His mother passed away when he was twenty-three, an event that occasioned the first solemn and important act of Confucius as a moral reformer. The solemnity and splendor of the burial ceremony with which Confucius honored her remains struck his fellow citizens with astonishment. Confucius shut himself up in his home for three years of mourning for his mother, dedicating the whole time to philosophical study. He reflected deeply on the eternal laws of morality, traced them to their source, saturated his mind with a sense of the duties they impose instinctively on all men, and determined to make them the unalterable rule of all his actions. From that day forward his career was simply an illustration of his ethical system. He began to instruct his countrymen in the principles of morality, exhibiting in his own person all the virtues he instilled in others. His disciples gradually increased as the practical character of his philosophy became more apparent. Generally, Confucius’s disciples were not young and enthusiastic; he preferred middle-aged men who were sober, grave, respectable, and occupied public positions. This fact casts light on both the character and the design of his philosophy. It was moral, not religious, and aimed exclusively at fitting men to conduct themselves honorably and carefully in this life.
Confucius travelled through many states; in some of them he was welcomed, while in others he was not appreciated. His later trips were very unfavorable, with state after state refusing to be improved. In some instances Confucius was persecuted: he was once imprisoned and nearly starved. When he finally realized there was no hope of securing the favorable attention he desired from his countrymen while alive, he returned to his native state, spending his last years on the literary works by which, at least, his descendants might be instructed. Confucius died in 479 B.C., in his seventies. Immediately after his death he began to be regarded with respect, and his family was distinguished with various honors and privileges. People honored Confucius’ work by building temples in every city in China. Because Confucius’ teachings and philosophy were so advanced, they formed the basis of education in China for 2,000 years. This tradition is called Confucianism: the complex system of moral, social, political, and religious teaching built up by Confucius and the ancient Chinese traditions. Confucianism’s goal is to make a man not only virtuous but also a man of learning and of good manners; the perfect man must combine the qualities of saint, scholar, and gentleman. Confucianism is a religion whose worship is centered on offerings to the dead, and its notion of duty extends beyond the boundaries of morals to embrace the details of daily life.
The best source for understanding Confucius and his thought is the Analects. But the Analects is considered a problematic and controversial work, having been compiled in variant versions long after Confucius’s death by disciples, or the disciples of disciples. Some have argued that, because of the text’s inconsistencies and incompatibilities of thought, there is much in the Analects that is non-Confucian and should be discarded as a basis for understanding the thought of Confucius. Benjamin Schwartz cautions us against such radical measures.
While textual criticism based on rigorous philological and historic analysis is crucial, and while the later sections [of the Analects] do contain late materials, the type of textual criticism that is based on considerations of alleged logical inconsistencies and incompatibilities of thought must be viewed with great suspicion. . . . While none of us comes to such an enterprise without deep-laid assumptions about necessary logical relations and compatibilities, we should at least hold before ourselves the constant injunction to mistrust all our unexamined preconceptions on these matters when dealing with comparative thought. (The World of Thought in Ancient China, p. 61)
Confucius’ philosophy was predominately a moral and political one. It was founded on the belief that heaven and earth coexist in harmony and balanced strength while maintaining a perpetual dynamism. Human beings, he taught, are sustained by these conditions and must strive to emulate the cosmic model.
The Doctrine of the Mean is the elaboration of the way of harmony; it furnishes the details of the kind of life that, in its recognition of due degree, will be in accordance with the principle of equilibrium, the root of all things. These ideas of harmony, justice and balance in both the cosmos and the individual provided a focus for political theory and practice. (Collinson. Plant, Wilkinson, Fifty Eastern Thinkers)
Links / Confucius Philosophy, Confucianism Religion
http://www.friesian.com/confuci.htm – An analysis of the moral philosophy of K’ung-fu-tzu or Kongfuzi (Confucius).
http://pasture.ecn.purdue.edu/~agenhtml/agenmc/china/classlit.html – Art of China Homepage. Classic Chinese Literature, The Analects, Confucius Bibliography on Confucian Philosophy Da Xue (The Great Learning) Confucius Tao Te Ching / Lao Tzu The Art of War / Sun Tzu
http://plato.stanford.edu/entries/confucius/ – The life and work of the Chinese philosopher and educator; by Jeffrey Riegel.
http://www.wsu.edu:8080/~dee/CHPHIL/CONF.HTM – Discussion of Chinese philosophy and the life and thought of Confucius along its principal lines
http://www.heptune.com/confuciu.html – What Confucius Thought by Megaera Lorenz. A brief summary of the basic concepts behind one of the world’s oldest philosophies, Chinese Confucianism.

## Reflection on Engineering Work

Competency Element

A brief summary of how you have applied the element

Paragraph number in the career episode(s) where the element is addressed

PE1 KNOWLEDGE AND SKILL BASE

PE1.1 Comprehensive, theory-based understanding of the underpinning natural and physical sciences and the engineering fundamentals applicable to the engineering discipline

Theoretical knowledge gained from studying “Renewable Energy Resources”, “Mechanics of Materials” and “Heating Ventilation and Air Conditioning” was used in the projects.

CE 1.2, 2.1, 2.2

PE1.2 Conceptual understanding of the mathematics, numerical analysis, statistics and computer and information sciences which underpin the engineering discipline

I used different mathematical equations to design the Parabolic Trough.
Heating and cooling loads for the air conditioning were calculated using load calculation equations. CAMEL software was used to optimize the load and to analyze and compare it with the manual calculations.

CE 1.15, 1.16, 1.17, 1.18, 1.19, 3.8, 3.11, 3.12, 3.21

PE1.3 In-depth understanding of specialist bodies of knowledge within the engineering discipline

Knowledge gained in “Finite Element Methods” and the analysis software ANSYS helped me to analyze the drop table.

CE 2.1, 2.2, 2.10

PE1.4 Discernment of knowledge development and research directions within the engineering discipline

A sequential switch of energy resources from traditional fossil fuels to renewable energy resources seems imminent. The Parabolic Trough is the future of the energy sector in energy-deficient countries like Pakistan.

CE 1.1, 1.2,1.21

PE1.5 Knowledge of contextual factors impacting the engineering discipline

Being aware of the side effects some fossil fuels have on the environment helped us to use environmentally friendly solar power to generate electricity. It reduces the carbon footprint and hence guarantees a greener and healthier future.

CE 1.2, 1.21

PE1.6 Understanding of the scope, principles, norms, accountabilities and bounds of contemporary engineering practice in the specific discipline

As project leader, the responsibility lay on my shoulders to ensure the successful and timely completion of the project. For this I employed the Primavera and Microsoft Project software to finish the project within the given timeline.

CE 1.8, 2.7, 3.5, 3.7

PE2 ENGINEERING APPLICATION ABILITY

PE2.1 Application of established engineering methods to complex engineering problem solving

Working on a renewable energy project inspired students and industrialists to use this energy source to power their needs, and I visited them to help them design their projects.

CE 1.21

PE2.2 Fluent application of engineering techniques, tools and resources

I used a VRV system instead of central air conditioning as it is more energy efficient and gives more control.
I used CAMEL to analyze the manual load calculations and suggest changes in the structure of the building.
ANSYS was used to analyze the drop table for the drop test.

CE 2.2, 2.10, 3.4, 3.21, 3.22

PE2.3 Application of systematic engineering synthesis and design processes

In each project I followed the engineering design process, i.e., defined the problem, searched for solutions, then picked a solution and developed it (Solar Power Plant). At the end, I prepared a report for each project, including all experiments in systematic order.

CE 1.21

PE2.4 Application of systematic approaches to the conduct and management of engineering projects

I used my management skills and software, i.e., Primavera and Microsoft Project, to keep track of progress and finish the project within the given time.

CE 1.7, 1.8, 2.7, 3.5, 3.7

PE3 PROFESSIONAL AND PERSONAL ATTRIBUTES

PE3.1 Ethical conduct and professional accountability

Before the start of each project I made sure that my team followed the predefined guidelines to ensure professional and ethical conduct. Safety talks before every critical activity helped to achieve this goal.

CE 1.8, 1.9, 1.20, 2.14

PE3.2 Effective oral and written communication in professional and lay domains

I presented my Final Year Project (Solar Trough) in front of my project supervisor, the Chairman of the Mechanical Engineering department and an external examiner.

CE 1.21

PE3.3 Creative, innovative and proactive demeanour

I used economic considerations to select the Concentrated Solar Power technology, which needs small absorbing surfaces and large reflective surfaces; absorbing materials are more expensive than reflective surfaces.

CE 1.10

PE3.4 Professional use and management of information

I kept a record of all the meetings by writing minutes at the end of each meeting, and prepared the project reports using all the experimental and theoretical knowledge.

CE 1.5,1.8,1.9,2.7,3.5,3.7

PE3.5 Orderly management of self, and professional conduct

My leadership skills and professional attitude during my final year project helped me to be the leader in the next two projects as well. Leading project teams more than once groomed my leadership skills and helped to enhance my professional conduct.

CE 1.7, 2.1, 3.5

PE3.6 Effective team membership and team leadership

My leadership in the projects was effective enough to finish the projects well in time and in good team spirit. I inspired my team members to work through difficult situations and solve issues without being stressed out.

CE 2.11, 3.5

## Life And Work Of Abu Ali Ibn Sina

Ibn Sina, Abu Ali (Latin: Avicenna) (980-1037), was an encyclopedic scholar, physician and philosopher. He was born near Bukhara, in Afshana, on 16 August 980. Ibn Sina’s father, an official in Bukhara and a native of Balkh (once the capital of the Greco-Bactrian kingdom), gave his son a systematic education at home and awakened in him at an early age a desire for knowledge. Soon Abu Ali surpassed his teachers and began an independent study of physics, metaphysics and medicine, turning to the works of Euclid, Ptolemy and Aristotle. While Euclid and Ptolemy’s Almagest did not give the young Ibn Sina great difficulty, Aristotle’s Metaphysics demanded a great deal of effort from him. He took up its reading as many as forty times, yet could not comprehend the depth of its content until he came across, at a bookseller’s, a work by al-Farabi on metaphysics, a commentary on the works of Aristotle.

As a philosopher, Ibn Sina belonged to the school of falsafa, eastern Peripateticism. He did much to develop philosophical vocabulary in the Arabic and Persian languages. Defending and developing the philosophical system of Aristotle, Ibn Sina gave considerable attention in his writings to logic, the doctrine of causality, the first cause, matter and form, knowledge, categories, and the principles of the organization of thought and knowledge. Two approaches to the description of the world are constantly present in the teachings of Ibn Sina: the physical and the metaphysical. When he speaks as a “physicist”, he paints a picture of things in terms of movement, space, time and natural determinism, arranges things in order from simple to complex, from the inanimate to the living, and ends with the most complex organism, endowed with reason: man. In this picture the mind is regarded as closely connected with the body, with matter: “The soul arises when there is a corporeal substance suitable for the soul’s use” (Book of the Soul). This matter is the brain, whose various parts correspond to different mental processes. “The storehouse of the general sense is the power of representation, and it is located in the front of the brain. That is why, when this part is damaged, the sphere of representation is disturbed. The store that receives ideas is a power called memory, and it is located in the back of the brain. The middle part of the brain was created as the seat of the power of imagination.” Considering various mental states and events – sleep, dreams, the power of suggestion, predictions, prophecies, mysteries and miracles – Ibn Sina sought to “reveal the cause of all this, based on the laws of nature.”
The conception of a strictly ordered world, subject to the laws of determinism, is one of the central points of Avicenna's philosophy. Through a chain of causal dependencies, in which generating causes rise one above another and terminate in the first cause, which, being an active principle (will), releases its potentiality, the manifold created world comes into being, mediated by a series of stages. In addressing not only the reality of the world but also its independence from the Creator, Ibn Sina focused on the theme of the possible and the necessary. The basic idea of the Arabic Peripatetics is that the possibilities of the world are already contained in the One and therefore co-eternal Creator. While adhering to the Peripatetic tradition in his doctrine of causality, Ibn Sina renounced hard determinism: the existence of the possibly-existent is not necessary in itself, and it becomes necessary only through the will of the necessarily-existent first cause, which gives rise to the subsequent series of things and makes them necessary. The first principle alone is necessary in itself; everything else derives from it and is therefore only possible. But since the possible has a cause, it in turn becomes a necessity, and as such a necessary cause of the next generation. Thus the first cause is only the first impulse; thereafter the world of things is determined by causal dependence within itself.
Another important point of Ibn Sina's philosophy is his doctrine of the soul. While noting the indispensable bond of mind with bodily matter, Ibn Sina, in contrast to Aristotle, was also interested in mind as a special, incorporeal substance which, though existing in the body, differs from it and dominates it; it is not simply a form existing in a bodily substrate. It is not attached to the body but (in Peripatetic terminology) creates the human body as a creator; it is the cause of the body. The "potential" mind, through learning and the mastery of knowledge, becomes "actual". Reaching the highest step, grasping abstract forms and acquiring the power of the "active" intellect, it becomes "acquired". At this stage the work of the mind no longer depends on external impressions or even on the state of the body; for thinking about thinking, connection with the body, with matter, is rather a hindrance. Such a mind does not need to study intelligible objects: it comprehends them directly, intuitively. "In the acquired mind the human potency is likened to the first principles of all that exists" (On the Soul). Man is a free, sovereign being. His mind is not only a recipient of external impressions but also a purposive subject that projects ideas. The independence of the mind from the body Ibn Sina argued from its indivisibility, as well as from its capacity for work and even its strengthening as the body, the senses and so on weaken. A favourite argument in favour of the incorporeality of the mind is the introspective experience Ibn Sina describes, the image of the so-called "floating man": "If you imagine that your essence was created all at once with sound mind and perfect form, and suppose that its parts are hidden from view, do not touch one another, are separated from each other and hang for some time in the open air, then you will find that it forgets everything except the affirmation of its own individuality", which consists in the mind (Hints and Instructions).
In this experience the person is aware that "I am I, even if I do not know that I have an arm, a leg or any other organ", and "I would remain I even if they were not there" (On the Soul). As incorporeal, the soul is immortal; as dwelling within a body, it is individual, and moreover forever so (the concept of individual immortality). Accordingly, man's knowledge of himself (introspection) is irreducibly individual. Ibn Sina's understanding of mind and of the forms of knowledge was influenced by Sufism and by personal experience of the tariqah (the Sufi path to God). This is reflected in his properly "Sufi" works: the Treatise on Hayy ibn Yaqzan, the Epistle of the Birds, Salaman and Absal, and others.

## How Do Water Boilers Work Environmental Sciences Essay

A boiler is a closed vessel in which water or another liquid is heated in order to generate steam or vapor, which is then used for external processes. Water is a useful and cheap medium for transferring heat to a process. When water is boiled into steam, its volume increases about 1600 times, generating a force that is very explosive. The heating can be achieved by combustion of wood, natural gas, coal or oil; electric steam boilers, on the other hand, use electrical resistance to produce the required heat. The chemical energy from any of these fuel sources is converted into heat, which is then transferred to the water by radiation (the transfer of heat from a hot body to a cold body without a conveying medium), by conduction (the transfer of heat through actual physical contact) and by convection (the transfer of heat by a conveying medium such as air or water) (EuropeanCommission, 2006).
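The "about 1600 times" figure can be sanity-checked from standard steam-table values. The specific volumes below are textbook figures for liquid water and saturated steam at atmospheric pressure, not values taken from this essay:

```python
# Rough check of the ~1600x expansion claim using standard steam-table
# values at atmospheric pressure (assumed textbook figures):
# saturated liquid water at 100 C occupies ~0.001043 m^3/kg,
# saturated steam at the same conditions occupies ~1.673 m^3/kg.

v_water = 0.001043   # m^3/kg, saturated liquid at 100 C, 1 atm
v_steam = 1.673      # m^3/kg, saturated vapour at 100 C, 1 atm

expansion_ratio = v_steam / v_water
print(round(expansion_ratio))  # -> 1604, i.e. "about 1600 times"
```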


Since the development of the first boilers in the 18th century, boiler design has evolved to increase efficiency and reduce cost, as well as to pay more attention to air pollutant emissions such as carbon monoxide and hydrogen chloride. These emissions depend on the type of fuel used and on the load factor of the boiler. The power of a boiler is determined by the required steam mass flow rate, temperature and pressure. The amount of input fuel required depends on the fuel's energy content and on the overall energy efficiency. A boiler's performance is characterized by its steam pressure and temperature. Saturated steam is steam at the boiling temperature for a given pressure, and it is what most boilers produce and make use of. If more heat is supplied and the steam temperature rises above the saturation temperature for a given pressure, the steam becomes superheated steam. This kind of steam, though at a higher temperature, can decrease the efficiency of the steam generating plant. If still more heat is supplied to the superheated steam, it becomes supercritical steam, which can be used in power generation (USEPA, 2004).
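As a rough illustration of how the required steam flow, the fuel's energy content and the overall efficiency together determine the fuel input, the following sketch sizes a small oil-fired boiler. Every number in it is an assumed, illustrative value, not data from this essay:

```python
# Illustrative sizing sketch: fuel input = heat added to the water / efficiency.
# All figures are assumed round numbers for illustration.

steam_rate = 2.0            # kg/s, required steam mass flow (assumed)
h_steam = 2_776_000.0       # J/kg, enthalpy of saturated steam at ~10 bar (assumed)
h_feedwater = 420_000.0     # J/kg, enthalpy of feed water at ~100 C (assumed)
efficiency = 0.85           # overall boiler efficiency (assumed)
fuel_energy = 42_000_000.0  # J/kg, energy content of fuel oil (assumed)

heat_to_water = steam_rate * (h_steam - h_feedwater)  # W absorbed by the water
fuel_input = heat_to_water / efficiency               # W of fuel energy needed
fuel_mass_rate = fuel_input / fuel_energy             # kg/s of fuel burned

print(f"{fuel_mass_rate * 3600:.1f} kg of fuel per hour")  # -> 475.2 kg/h
```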
A closed boiler is one in which all the steam generated is returned to the vessel in the form of water and reused, while an open boiler is one that does not return water to the original vessel.
The boiler system comprises three major parts: the feed water system, the steam system and the fuel system. The feed water system supplies water to the boiler and regulates it to meet the system's demands. The steam system is responsible for collecting and controlling the steam produced in the boiler; it is regulated and checked using pressure gauges, and the steam is directed to the point of use through an efficient piping system. The fuel system includes all the equipment used to generate the required heat, which depends on the type of fuel used in the system (Hartford, 1911).
There are three basic types of boilers used for industrial purposes: the fire tube, the water tube and the fire box boiler. In fire tube boilers, hot gases pass through tubes which are surrounded by the water being heated. The tubes are arranged in banks so that the heat produced can pass through the vessel many times before escaping. Fire tube boilers are relatively small in size compared to the other types of boilers.
In water tube boilers, heat is made to pass through the tubes which contain the water. These tubes are then interconnected to a steam outlet for distribution to the plant system. These types of boilers are the most commonly used because they are larger in size and can therefore withstand greater pressures and temperatures, though their initial and maintenance costs are higher.
In a fire box boiler, the hot gases from the fire box which is the space where the fuel is burned are channeled into the tubes where they heat the water.
Water is supplied to the boiler from the boiler feed water plant, also known as the demineralizer plant. The demineralizer removes the salts present in the water by ion exchange, replacing the dissolved cations with hydrogen ions (the exchange resin being regenerated with sulphuric acid). This water should be free of any foreign material that could harm the boiler or decrease its performance. Such harmful substances include dissolved oxygen and the positively charged ions of calcium, aluminum, sodium and zinc, as well as negatively charged ions such as carbonates, bicarbonates, silica and fluorides, which can harm boiler efficiency. The removal of oxygen is usually done in the de-aerator, located after the ion exchanger.
The de-aeration of the condensate returning from the process ensures that the water is free of oxygen bubbles that may inhibit heat transfer. In de-aeration, the dissolved gases are removed by preheating the feed water before it is allowed to enter the boiler. The removal of these gases is very important to the boiler equipment longevity as well as safety of operation. De-aeration can be done by chemical de-aeration, mechanical de-aeration or both. The chemical treatment is used to remove harmful substances that could cause build up in the heat transfer equipment. The economizer is used to preheat the water entering the boiler. This helps reduce fuel cost making the boiler more efficient (Shields, 1961).
The water vessel in a boiler is connected to the heat source by metal rods which heat the water and convert it to steam. The steam is allowed to collect in the dome before exiting the boiler. The function of the dome is to force the steam to become highly condensed in order for it to exit the boiler with a large amount of pressure. A boiler also contains a drain which removes impurities from the water vessel and a chimney to allow heat to escape once it has passed the water vessel. It is vital for all boilers to have safety valves in order to allow excess steam to be released in order to prevent explosions.
The heart of a boiler is its pressure vessel, a closed container designed to hold gases or liquids under pressure. The pressure vessel is usually made of steel or wrought iron, and the pressure is obtained from the application of heat from a direct or indirect source. If not properly maintained, boilers can be a source of serious injury and can lead to huge losses in the form of property destruction. Thin or brittle metal making up parts of the boiler could rupture, or poorly welded seams could open up, leading to violent eruptions of the pressurized steam. Collapsed boiler tubes could also spray the hot steam they contain into the air, injuring anyone nearby (Reeves, 2001).
Even with the best pretreatment programs, boiler feed water often contains some degree of impurities, which accumulate in the boiler. The increasing accumulation of dissolved solids may lead to carry-over of boiler water into the steam, which may damage the piping system as well as the process equipment. Suspended solids can also form sludge, reducing the boiler's efficiency as well as its heat transfer capability.
In order to avoid these problems, water should often be discharged from the boiler in order to control the concentrations of the suspended and dissolved solids in the boiler. Discharging of the surface water is usually done in order to get rid of the dissolved solids while the discharging of bottom water is done in order to remove the sludge from the bottom of the boiler.
Boiler blow down, i.e. the discharge of water from the boiler, is a very important aspect of boiler maintenance. Lack of proper blow down can lead to increased fuel consumption, extra chemical treatment of the boiler and increased heat loss. Also, since the blow down water has the same temperature as the boiler water, it can be reused in the boiler operations once removed. Excessive blow down, however, wastes water, energy and treatment chemicals.
The two major types of boiler blow down are intermittent and continuous. Intermittent blow down is done by manually operating a valve fitted at the bottom of the boiler to remove the unwanted solids. It requires large short-term increases in the amount of feed water put into the boiler, and a substantial amount of heat energy is lost. Continuous blow down, by contrast, involves the steady, constant discharge of a small stream of concentrated boiler water, replaced by a steady, constant inflow of feed water.
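The required continuous blow down rate follows from a simple mass balance on dissolved solids: solids enter with the feed water and leave only with the blow down, since the steam carries virtually none. A minimal sketch, with all figures assumed for illustration:

```python
# Continuous blow down sizing from a solids mass balance.
# At steady state, solids in (feed) = solids out (blow down), so:
#   blowdown_rate = steam_rate * S_feed / (S_max - S_feed)
# All figures below are assumed for illustration.

steam_rate = 5000.0   # kg/h of steam produced (assumed)
s_feed = 150.0        # ppm total dissolved solids in the feed water (assumed)
s_max = 3000.0        # ppm maximum allowable TDS in the boiler (assumed)

blowdown_rate = steam_rate * s_feed / (s_max - s_feed)
print(f"{blowdown_rate:.0f} kg/h")  # -> 263 kg/h, about 5% of the steam output
```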
The various energy efficiency opportunities in a boiler system relate to combustion, heat transfer, water quality, avoidable losses and blow down. To maximize a boiler's efficiency, the stack temperature should be kept as low as possible. Nevertheless, it should not be so low that water vapor in the exhaust condenses on the stack walls. Automatic blow down controls that sense and respond to the boiler water's conductivity and pH should be installed in order to reduce uncontrolled continuous blow down. In oil- and coal-fired boilers, soot should be removed as it acts as an insulator against heat transfer.

## Therapeutic Work with Children

Play therapy (PT) has been defined as a method of establishing an “interpersonal process wherein trained play therapists use the therapeutic powers of play to help clients prevent or resolve psychosocial difficulties and achieve optimal growth and development” (Association for Play Therapy, 2014). It is one of a number of interventions used for children who suffer from emotional and behavioural disorders due to its responsiveness to their unique developmental needs (Bratton et al., 2005), such as developing self-awareness, self-monitoring and self-resilience (Paone & Douma, 2009). Numerous tools such as puppets, sand trays and role-play may be employed by therapists to take advantage of a child’s most natural form of expression in order to communicate with them effectively (Mullen et al., 2007). A rise in the popularity and widespread acceptance of PT over the past 70 years has come at a time when societal problems that directly impact children, such as fragmented families, substance abuse and media violence, have been on the rise (Bratton et al., 2005). However, within the scientific community PT has been less widely accepted due to a lack of sound empirical evidence to support its use (Campbell, 1992).


Accounts of psychotherapy involving children can be found as early as Sigmund Freud and his work with ‘Little Hans’ (Freud, 1909). However, it is generally acknowledged that the psychoanalysts Anna Freud (1928) and Melanie Klein (1932) pioneered PT as a psychotherapeutic modality (Knell, 1993). From this psychoanalytic perspective, the key principles underpinning PT are the exploration or analysis of transference and resistance by the client (Brandchaft, 2014). Transference refers to the process by which the client transfers emotions originally directed at the parents onto the therapist (Tishby & Wiseman, 2014), whereas resistance refers to the child repressing painful experiences into their unconscious in order to bear the stress they may be facing (Scaer, 2014). In this sense, play provides an avenue for decontextualization, allowing children to rid themselves of negative feelings associated with traumatic events (Goldstein, 1994). Importantly, Klein (1932) emphasised the role of play as a substitute for the free association used within adult psychoanalysis, and argued that a child’s actions during play could reveal underlying thoughts and feelings. The next major development in the field of PT came from the work of Axline (1947), who took the client-centered approach of Rogers (1951), which placed emphasis on an accepting and empathetic relationship between therapist and client, and applied it to children to develop non-directive PT. Axline centered this new school of PT on her belief that children have the capacity to resolve their own problems through play, given the right therapeutic environment (Rasmussen & Cunningham, 1995). These two theories, the psychoanalytic and the humanistic, differ in how the therapy is understood, how the case is conceptualised and what outcomes are expected to follow.
While they may be in direct contrast with one another, they both engage the same modality of play in order to help children deal with a range of mental health issues. The field of PT has grown dramatically since then as various theorists, academicians, and practitioners have developed specific PT approaches based on their theoretical views and personal experiences with children (Bratton et al., 2005). These approaches include Filial PT (Guerney, 1964), gestalt PT (Oaklander, 1994), Adlerian PT (Kottman, 1995), and ecosystemic PT (O’Connor, 2000), to name a few.

To date, there is a shortage of studies yielding statistically significant results on the efficacy of PT with children. Instead, there exists a wealth of descriptive and theoretical works with inadequate or flawed research design (Bratton et al., 2005). These works often rely on case studies, small samples and uncontrolled designs, while also failing to define clearly what constitutes PT and using inadequate or non-measurable determinants of treatment outcome (LeBlanc & Ritchie, 2001). In order to address this, a number of meta-analyses have been carried out (LeBlanc & Ritchie, 2001; Bratton et al., 2005; Jenson et al., 2017). The outcomes of these ambitious studies have caused considerable excitement in the field of PT, and are often cited as support for PT’s evidence base (Phillips, 2010). They offer some much-needed organisation of disparate research endeavours in the field, allowing a glimpse of the entire PT research field (Jenson et al., 2017).

When looking at PT as one therapeutic approach, in all of its forms, the results have been mixed, and have divided clinicians and researchers with regards to its effectiveness. There appears to be a discrepancy between the theoretical plausibility of PT’s use as the natural mode of expression for children, and its performance when assessed by researchers. In their seminal meta-analysis LeBlanc & Ritchie (2001) found an overall effect size (ES) of 0.66, and similarly Bratton et al. (2005), in their following meta-analysis, which included almost double the number of studies, found an overall ES of 0.80. These findings indicate that PT is considerably better than nothing, and that it is better to about the same degree as most other forms of child psychotherapy. However, these findings were in contrast to Jenson et al.’s (2017) later meta-analysis, which found a far more modest overall ES of 0.44, no longer comparable to the outcomes of other child-focussed treatments. Phillips (2010) argues that these differences are due to the coding categories used to evaluate the individual studies and the rigor to which they were upheld. While compiling individual studies and determining overall ESs may be beneficial in comparing therapeutic interventions and gaining a grand picture of the field, this may come at a cost to the smaller picture, such as the influence of different variables. As such, the different variables in the delivery of PT will be discussed in relation to their benefits and limitations.
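The effect sizes (ESs) quoted in meta-analyses of this kind are standardised mean differences of the Cohen's d type: the gap between treatment and control group means expressed in pooled standard deviation units. A minimal computation with made-up outcome scores (not data from any of the cited studies) shows how such a figure is produced:

```python
# Cohen's d from two groups of hypothetical outcome scores.
# The scores are invented for illustration only.
from statistics import mean, stdev

treatment = [34, 38, 41, 36, 40, 39, 37, 42]  # hypothetical post-therapy scores
control = [30, 33, 35, 31, 34, 32, 33, 36]    # hypothetical control scores

n1, n2 = len(treatment), len(control)
s1, s2 = stdev(treatment), stdev(control)     # sample standard deviations
# Pool the two variances, weighting by degrees of freedom.
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

d = (mean(treatment) - mean(control)) / pooled_sd
print(round(d, 2))  # -> 2.28 for these invented scores
```

An ES of 0.80 like Bratton et al.'s (2005) would mean the average treated child scored 0.8 pooled standard deviations better than the average untreated child.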

The first variable in the use of PT to be discussed is the treatment modality used: non-directive or directive. Theoretically, a non-directive approach has the advantage of allowing the child autonomy over their actions, as well as giving the treatment provider a window into their mind, free from external motivation or teaching (Swan & Ray, 2014). This may be particularly beneficial for children with non-pathological speech disorders, such as selective mutism (Moustakas, 1951). In these conditions, where an underlying anxiety disorder or social phobia may be present, a non-directive approach may relieve some of the pressure felt by the child to speak or act in a certain way (Wilson & Ryan, 2006). In this sense, non-directive PT may be beneficial as an anxiety reducing device (Mollamohammadi & Yazdkhasti, 2017). This is further supported by Bratton et al.’s (2005) meta-analysis, which found that while both treatment modalities can be considered effective, non-directive interventions had a greater ES (0.93) than directive interventions (0.71). However, these ESs may reflect the difference in the number of studies for each modality, with Bratton et al. (2005) including six times as many non-directive PT studies in their meta-analysis. Bratton et al. (2005) also acknowledged both the shortage of clearly articulated interventions and the mixing of approaches within the same treatment protocols. This further evidences the issues that still exist within PT research, even with the use of meta-analyses. While the available scientific evidence in favour of non-directive over directive PT is persuasive, directive PT has been shown to be effective with trauma victims (Ryan & Needham, 2001). In challenging cases such as these, therapists are able to prepare for the session, enabling the selection of specific activities appropriate to the goal of therapy (Tennessen & Strand, 1998).
Additionally, a directive approach allows the therapist to walk through the child’s emotions with them, which may be particularly valuable in the first few sessions (Andrews, 2010). However, offsetting these benefits is the risk that the therapist will externally influence the child and stall the development of their independence and self-coping mechanisms (Phillips, 2010). It is clear that both modalities have benefits if adopted appropriately according to the child’s condition; however, the results from meta-analyses may mask some of these more specific benefits, such as the impact of directive PT with trauma victims.

An equally significant variable in the effective delivery of PT is the treatment provider. LeBlanc & Ritchie (2001) revealed that PT delivered by a paraprofessional had a larger ES (1.05) than delivery by a mental health professional (0.72). Furthermore, Bratton et al. (2005) noted that the majority of paraprofessional studies involved parents (22 of 26 studies), and so when recalculating for parent-only filial studies, found an increased ES of 1.15. Use of paraprofessionals in therapy may both alleviate some of the stresses of a shortage of mental health professionals (Jenson et al., 2017), as well as benefit the child in the short and long term (Bratton, 2010). Filial therapy enables parents to connect with their child and gain a greater understanding of their feelings, motives and behaviours (Ginsburg, 2007), as well as arming them with techniques they can use to respond more effectively to their children at home (Paone & Douma, 2009). However, while there is consensus in the PT literature regarding the efficacy of paraprofessional delivery (Bratton et al., 2005), ESs may be inflated due to its reliance on self-reporting (Phillips, 2010). In all of the studies included in Bratton et al.’s (2005) meta-analysis in which parents provided treatment, parents were also a source of the outcome measure. Certainly, parents who are willing to invest themselves fully in their child’s therapy are likely to see greater benefits in their child than those who are ambivalent about the therapeutic process (Guerney, 1997). In addition, the results may reflect paraprofessionals being matched to children appropriate to their skill level, and professionals being assigned more difficult cases (Bratton et al., 2005). In light of this, the benefits and limitations of therapist-delivered and parent-delivered therapy need to be explored further.

Another significant advantage of PT is that it can be delivered effectively to individuals as well as to groups (Jenson et al., 2017). While having a significant monetary benefit (Ginott, 1994), group PT also has a number of other benefits. The group format allows children to build relationships beyond the therapist-child one, and it utilises play as the means through which children learn perspective taking, language skills, problem solving and an awareness of the needs of others (Davidson, 1998). Additionally, it may be used by therapists to provide valuable observation time in order to secure diagnostic formulations (Jenson et al., 2017). Conversely, group PT may be unsuccessful or even damaging for children with attention issues or social phobias (Casey & Berman, 1985). Perhaps in these cases, group PT would be better utilised as a tool for enhancing skills learnt in previous individual therapy (Phillips, 2010). In order for therapists to get the most beneficial results from group PT, careful selection of the group, so as to choose children who would benefit from this format and to match developmental ages, is of paramount importance (Ginott, 1994).

Another factor that may impact the benefits of PT as a therapeutic approach is its duration. LeBlanc & Ritchie (2001) looked closely at this variable and found that the optimum number of treatment sessions was between 30 and 35. Bratton et al. (2005) speculated that this finding was due to intensified problem behaviours at the onset of therapy. Yet, often due to financial and resource constraints, 30 sessions is not feasible (Cummings, 1977). Clearly, in these cases other therapeutic interventions would be more beneficial, exhibiting a considerable limitation of PT. However, Bratton et al. (2005) identified an intriguing subgroup of children who responded more quickly to PT intervention. The study noted an inverse relationship between children in the critical-incident category (i.e. hospitals, prisons) and the number of sessions. These results are promising and indicate that children in crisis may respond more readily to treatment at that time. Evidently, the benefits and indeed limitations of PT are subject to the right approach being taken with the right child.

The final variable, and the one that shows the most variance in the benefits and limitations of PT, is the characteristics of the child, namely their age, personality and target problem (Ray, 2008). PT is inherently developmentally sensitive and therefore can be used with children younger than those targeted by more traditional talk therapies (Bratton et al., 2005). In 1962, Piaget proposed a theory of cognitive development laid out in a series of stages. In the pre-operational stage (three to six years) symbolic play dominates, whereas in the later, concrete operational stage (seven years and older) socialising becomes a far more central component of play, and the use of games and rules features (Piaget & Inhelder, 1969). Dougherty & Ray (2007) investigated this further and concluded that PT was effective at both stages, given the right therapeutic environment. This was supported by LeBlanc and Ritchie (2001), who noted that neither age nor gender were significant predictors of treatment outcome. Clearly, PT is beneficial in that it is uniquely responsive to boys’ and girls’ developmental needs from 3 to 12 years old.

Another important child characteristic is their personality (LeBlanc & Ritchie, 1999). Some children, such as Dibs, the young boy discussed in Axline’s accounts, are charismatic and have protective factors present within themselves (Axline, 1964). Therefore, they are more predisposed to benefit from PT. However, children may be very resistant to therapy, feeling forced to be there against their will, which may lead to poorer outcomes (Fall et al., 1999). Additionally, if the child’s methods of coping with certain triggers or memories are held rigidly, PT may be limited in its ability to teach them more adaptive ones (Danger & Landreth, 2005). In these children, as well as ones with extremely severe target problems such as post-traumatic stress disorder (PTSD), a more targeted intervention, such as directive PT or another therapeutic intervention may be far more successful (Vickerman & Margolin, 2007). This is supported by a systematic review (Gillies, 2016) that found that in reducing PTSD symptoms in the short term, CBT was favoured over play therapy.

The efficacy of PT has been studied with a wide array of different conditions (Shirk & Karver, 2003). However, a few stand out in the literature, including the treatment of speech difficulties (Danger & Landreth, 2005), children facing medical procedures (Phillips, 2010) and children displaying aggressive behaviour (Schaefer & Mattei, 2005). Historical literature demonstrates a long-established link between the development of play behaviours and the acquisition of language (O’Brien et al. 1987). Some findings suggest that children with an impairment in the ability to play almost always have difficulty in learning to talk (Mundy et al., 1987). Martin (1981) goes further, stating that “play is an essential precursor to language”. Today, PT is recognised as useful in improving the receptive and expressive language skills of children with speech difficulties (Danger & Landreth, 2005). PT has also been shown to be useful in aiding the development of language for children with autism spectrum disorder (Hebert et al. 2014).

Clearly, the benefits of PT can extend to children with a variety of mental health needs. Another well-documented use of PT is for children facing medical procedures (Phillips, 2010). Many Italian hospital programs encourage the young patients to attend PT sessions in order to help them overcome the fear of invasive and painful medical procedures and to improve treatment compliance (Scarponi, 2016). PT is also an ideal opportunity for suffering children to act out characters, share experiences, and discuss the fears of being patients (Scarponi, 2016). However, the effectiveness of PT in treatment of aggressive behaviours is not so clear cut. Landreth (2002) argues that through the expression of aggressive feelings or behaviours in the playroom, but most importantly in the presence of an empathic and understanding adult, a child will learn to meet self-needs in a socially appropriate manner. Dollard et al. (1939) stated that “the occurrence of any act of aggression is assumed to reduce the instigation of aggression”. According to this supposition, one aggressive act can serve as a substitute for another in reducing the aggressive drive. In this sense aggressive urges tend to build up and the child needs to release them in a safe environment, such as a playroom. Without this release, the pressure will mount until the point at which the aggressive impulses will erupt in real life aggressive behaviour that could be harmful to both self and others (Schaefer & Mattei, 2005). In light of the above, play therapists often grant children the opportunity to play aggressively by supplying them with toys such as guns (Laue, 2015). However, citing social learning theory[1] as an explanation, critics of allowing aggressive behaviours in the playroom claim that freedom of aggressive play will reinforce, and therefore increase aggression in children (Drewes, 2008; Schaefer & Mattei, 2005). 
Some play therapists have reported the need to “tame” children by “blocking” aggressive behaviour (Crenshaw & Mordock, 2005).

Discussion of the efficacy of PT is challenging due to the combinatorial quality of its different modalities. It is most beneficial when the correct combination of variables is used to address the specific needs of the child (Shirk & Karver, 2003). Therefore, perhaps PT’s benefits lie in its versatility, and as such it is best used in association with other treatments. A number of studies point to the efficacy of play therapy in combination with cognitive behavioural therapy in the treatment of aggression (Badamian & Ebrahimi Moghaddam, 2017).

While adults use language to communicate with one another, children use play as their primary medium of expression (Trotter, Eshelman, & Landreth, 2003). Play allows them to express their feelings in a comfortable way, by bridging concrete experience with abstract thought (Kot, Landreth, & Giordano, 1998). However, while the theory behind PT seems secure, the research supporting its efficacy is not. The field is characterized by a disparate array of studies that often do not build incrementally or theoretically on previous work. Nevertheless, untangling the current research reveals a number of notable benefits, particularly PT’s success with language difficulties, its use before medical procedures, its effectiveness in group settings, and its capacity to be delivered by paraprofessionals. A number of limitations also emerge, specifically its use with more severe psychological trauma and its lengthy optimum duration.

REFERENCES:

A4pt.org. (2019) ‘Association for Play Therapy’ [online] Available at: https://www.a4pt.org/ [Accessed 12 May 2019].

Andrews, C. (2010) ‘Who directs the play and why? An exploratory study of directive versus nondirective play therapy’, Theses, Dissertations, and Projects, 1169.

Axline, V. (1964) ‘Dibs in search of self’, New York: Ballantine Books.

Badamian, R. and Ebrahimi Moghaddam, N. (2017) ‘The effectiveness of cognitive-behavioral play therapy on flexibility in aggressive children’, Journal of Fundamentals of Mental Health, 19, 133-137.

Bandura, A. (1978) ‘Social learning theory of aggression’, Journal of Communication, 28(3), 12-29.

Brandchaft, B.B., Atwood, G.E. and Stolorow, R.D. (2014) ‘Psychoanalytic treatment: An intersubjective approach’, Routledge.

Bratton, S. (2010) ‘Meeting the early mental health needs of children through school-based play therapy’, School-based play therapy, 17, 17-58.

Bratton, S.C., Ray, D., Rhine, T. and Jones, L. (2005) ‘The efficacy of play therapy with children: A meta-analytic review of treatment outcomes’, Professional psychology: Research and practice, 36(4), 376.

Brignell, A., Chenausky, K.V., Song, H., Zhu, J., Suo, C. and Morgan, A.T. (2018) ‘Communication interventions for autism spectrum disorder in minimally verbal children’, Cochrane Database of Systematic Reviews, (11).

Campbell, T.W. (1992) ‘Promoting play therapy: Marketing dream or empirical nightmare’, Issues in Child Abuse Accusations, 4(3), 111-117.

Casey, R.J. and Berman, J.S. (1985) ‘The outcome of psychotherapy with children’, Psychological Bulletin, 98(2), 388.

Crenshaw, D.A. and Mordock, J.B. (2005) ‘Understanding and treating the aggression of children: Fawns in gorilla suits’, Jason Aronson.

Cummings, N.A. (1977) ‘Prolonged (ideal) versus short-term (realistic) psychotherapy’, Professional Psychology, 8(4), 491.

Danger, S. and Landreth, G. (2005) ‘Child-centered group play therapy with children with speech difficulties’, International Journal of Play Therapy, 14(1), 81.

Dollard, J., Miller, N.E., Doob, L.W., Mowrer, O.H. and Sears, R.R. (1939) ‘Frustration and aggression’, New Haven, CT: Yale University Press.

Dougherty, J. and Ray, D. (2007) ‘Differential impact of play therapy on developmental levels of children’, International Journal of Play Therapy, 16(1), 2.

Drewes, A.A. (2008) ‘Bobo revisited: What the research says’, International Journal of Play Therapy, 17(1), 52.

Fall, M., Balvanz, J., Johnson, L. and Nelson, L. (1999) ‘A play therapy intervention and its relationship to self-efficacy and learning behaviors’, Professional School Counseling, 2(3), 194.

Freud, A. (1928) ‘On the Theory of Child Analysis’, The Psychoanalytic Review, 15, 85.

Freud, S. (1909) ‘Analysis of a phobia of a five-year old boy’, The Pelican Library, 8, 9-306.

Gillies, D., Maiocchi, L., Bhandari, A.P., Taylor, F., Gray, C. and O’Brien, L. (2016) ‘Psychological therapies for children and adolescents exposed to trauma’, Cochrane database of systematic reviews, (10).

Ginsburg, K.R. (2007) ‘The importance of play in promoting healthy child development and maintaining strong parent-child bonds’, Pediatrics, 119(1), 182-191.

Goldstein, R. (1994) ‘Toys, play, and child development’, Cambridge University Press.

Guerney Jr, B. (1964) ‘Filial therapy: Description and rationale’, Journal of Consulting Psychology, 28(4), 304.

Guerney, L. (1997) ‘Filial therapy’, Play therapy theory and practice: A comparative presentation, 150-160.

Hébert, M.L., Kehayia, E., Prelock, P., Wood-Dauphinee, S. and Snider, L. (2014) ‘Does occupational therapy play a role for communication in children with autism spectrum disorders?’, International journal of speech-language pathology, 16(6), 594-602.

Inhelder, B. and Piaget, J. (1969) ‘The psychology of the child’, New York: Basic Books.

Jensen, S.A., Biesen, J.N. and Graham, E.R. (2017) ‘A meta-analytic review of play therapy with emphasis on outcome measures’, Professional Psychology: Research and Practice, 48(5), 390.

Klein, M. (1975) ‘The psycho-analysis of children (1932)’, Trans. Alix Strachey. New York: Delacorte Books.

Knell, S.M. (1993) ‘Cognitive-behavioral play therapy’, Rowman & Littlefield.

Kot, S., Landreth, G.L. and Giordano, M. (1998) ‘Intensive child-centered play therapy with child witnesses of domestic violence’, International Journal of Play Therapy, 7(2), 17.

Kottman, T. (1995) ‘Parent in play: An Adlerian approach to play therapy’, American Counseling Association.

Landreth, G.L. (2002) ‘Therapeutic limit setting in the play therapy relationship’, Professional psychology: Research and practice, 33(6), 529.

Laue, C.E. (2015) ‘Toy guns in play therapy: Controversy and current practice’ [PhD Thesis], The Chicago School of Professional Psychology.

LeBlanc, M. and Ritchie, M. (1999) ‘Predictors of play therapy outcomes’, International Journal of Play Therapy, 8(2), 19.

LeBlanc, M. and Ritchie, M. (2001) ‘A meta-analysis of play therapy outcomes’, Counselling Psychology Quarterly, 14(2), 149-163.

Martin, J.A.M. (1981) ‘Voice, Speech and Language in the Child: Development and Disorder’, New York: Springer-Verlag.

Mollamohammadi, F. and Yazdkhasti, F. (2017) ‘Effect of play therapy on reduction of social anxiety and increasing social skills in preschool children in Omidiyeh’, International Journal of Educational and Psychological Researches, 3(2), 128.

Moustakas, C.E. (1951) ‘Situational play therapy with normal children’, Journal of consulting psychology, 15(3), 225.

Mullen, J.A., Luke, M. and Drewes, A.A. (2007) ‘Supervision can be playful, too: Play therapy techniques that enhance supervision’, International Journal of Play Therapy, 16(1), 69.

Mundy, P., Sigman, M., Ungerer, J. and Sherman, T. (1987) ‘Nonverbal communication and play correlates of language development in autistic children’, Journal of autism and developmental disorders, 17(3), 349-364.

O’Brien, M. and Nagle, K.J. (1987) ‘Parents’ speech to toddlers: The effect of play context’, Journal of Child Language, 14(2), 269-279.

O’Connor, K.J. (2000) ‘The play therapy primer (2nd edn)’, New York: John Wiley & Sons, Inc.

Oaklander, V. (1994) ‘Gestalt play therapy’, Handbook of play therapy: Advances and innovations, 143-156.

Paone, T.R. and Douma, K.B. (2009) ‘Child-centered play therapy with a seven-year-old boy diagnosed with intermittent explosive disorder’, International journal of play therapy, 18(1), 31.

Phillips, R.D. (2010) ‘How firm is our foundation? Current play therapy research’, International Journal of Play Therapy, 19(1), 13.

Piaget, J. (1962) ‘Play, dreams and imitation in childhood’, London: Routledge & Kegan Paul.

Rasmussen, L.A. and Cunningham, C. (1995) ‘Focused play therapy and non-directive play therapy’, Journal of Child Sexual Abuse, 4(1), 1-20.

Ray, D.C. (2008) ‘Impact of play therapy on parent–child relationship stress at a mental health training setting’, British Journal of Guidance & Counselling, 36(2), 165-187.

Rogers, C. R. (1951) ‘Client-centered therapy: Its current practice, implications, and theory’, Boston, MA, USA: Houghton Mifflin Company.

Ryan, V. and Needham, C. (2001) ‘Non-directive play therapy with children experiencing psychic trauma’, Clinical Child Psychology and Psychiatry, 6(3), 437-453.

Scaer, R. (2014) ‘The body bears the burden: Trauma, dissociation, and disease’, Routledge.

Scarponi, D. (2016) ‘Play therapy to control pain and suffering in paediatric oncology’, Frontiers in pediatrics, 4, 132.

Schaefer, C.E. and Mattei, D. (2005) ‘Catharsis: effectiveness in children’s aggression’, International Journal of Play Therapy, 14(2), 103-109.

Shirk, S.R. and Karver, M. (2003) ‘Prediction of treatment outcome from relationship variables in child and adolescent therapy: a meta-analytic review’, Journal of consulting and clinical psychology, 71(3), 452.

Swan, K.L. and Ray, D.C. (2014) ‘Effects of child‐centered play therapy on irritability and hyperactivity behaviors of children with intellectual disabilities’, The Journal of Humanistic Counseling, 53(2), 120-133.

Tennessen, J. and Strand, D. (1998) ‘A comparative analysis of directed sandplay therapy and principles of Ericksonian psychology’, The Arts in psychotherapy, 25(2), 109-114.

The British Association of Play Therapists. (2019) ‘History of Play Therapy – The British Association of Play Therapists’ [online] Available at: http://www.bapt.info/play-therapy/history-play-therapy/ [Accessed 12 May 2019].

Tishby, O. and Wiseman, H. (2014) ‘Types of countertransference dynamics: An exploration of their impact on the client-therapist relationship’, Psychotherapy research, 24(3), 360-375.

Trotter, K., Eshelman, D. and Landreth, G. (2003) ‘A place for Bobo in play therapy’, International Journal of Play Therapy, 12(1), 117.

Vickerman, K.A. and Margolin, G. (2007) ‘Posttraumatic stress in children and adolescents exposed to family violence: II. Treatment’, Professional Psychology: Research and Practice, 38(6), 620.

Wilson, K. and Ryan, V. (2006) ‘Play therapy: A non-directive approach for children and adolescents’, Elsevier Health Sciences.

[1] The theory that people learn from one another via observation, imitation, and modelling (Bandura, 1978).