## Increasing Time Efficiency of Insertion Sort

Increasing Time Efficiency of Insertion Sort for the Worst Case Scenario

Surabhi Patel, Moirangthem Dennis Singh

Abstract. Insertion sort gives a time complexity of O(n) in the best case. In the worst case, where the input is in descending order, the time complexity is O(n²). With arrays, shifting takes O(n²), while with linked lists, comparison takes O(n²). Here a new approach to the worst-case problem is proposed. We use an array as the data structure and trade space for time: we take 2n spaces, where n is the number of elements, and start the insertion from the (n-1)th location of the array. With the proposed technique the worst-case time complexity is O(n log n), compared with O(n²) for standard insertion sort.
Keywords. Insertion Sort, Time Complexity, Space Complexity

Introduction

Insertion sort is a simple sorting algorithm[1], a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. Every repetition of insertion sort removes an element from the input data, inserting it into the correct position in the already-sorted list, until no input elements remain.
The best case input is an array that is already sorted. In this case insertion sort has a linear running time which is O(n). During each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.
The worst case input is an array sorted in reverse order. In this case, every iteration of the inner loop scans and shifts the entire sorted subsection of the array before inserting the next element. For this case insertion sort has a quadratic running time of O(n²).
The average case also has a quadratic running time of O(n²).
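The best- and worst-case behaviour described above can be illustrated with a short sketch (Python is used here for illustration; the counters are ours, not part of the original paper):

```python
def insertion_sort(arr):
    """Standard insertion sort; returns (sorted list, comparisons, shifts)."""
    a = list(arr)
    comparisons = shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:              # scan the sorted prefix right to left
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]        # shift the larger element one slot right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return a, comparisons, shifts

# Best case, already sorted: one comparison per element and no shifts.
print(insertion_sort([10, 20, 30, 40, 50]))  # ([10, 20, 30, 40, 50], 4, 0)
# Worst case, reverse sorted: quadratic comparisons and shifts.
print(insertion_sort([50, 40, 30, 20, 10]))  # ([10, 20, 30, 40, 50], 10, 10)
```

For n = 5 the worst case performs 1 + 2 + 3 + 4 = 10 comparisons and 10 shifts, i.e. n(n-1)/2 of each.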

Literature Survey

In an insertion sort algorithm, two costs always contribute to the time complexity: shifting the elements and comparing the elements. The time complexity also depends on the data structure used while sorting. If we use an array, shifting takes O(n²) in the worst case, while with a linked list, searching takes more time, viz. O(n²).
We will take the following examples:
Sort 50, 40, 30, 20, 10 using arrays.

| Step | Array state (indices 0–4) | Shifting | Comparison |
|------|---------------------------|----------|------------|
| Insert 50 | 50 | 0 | 0 |
| Insert 40 | 40 50 | 1 | log 1 |
| Insert 30 | 30 40 50 | 2 | log 2 |
| Insert 20 | 20 30 40 50 | 3 | log 3 |
| Insert 10 | 10 20 30 40 50 | 4 | log 4 |
Time complexity of shifting: O(n²)
Time complexity of comparison: O(n log n)
Total time complexity: O(n²)
Since the already-sorted subsection of the array supports random access, binary search can be used for the comparisons, reducing the comparison cost to O(n log n); shifting, however, still takes O(n²), so the total time complexity remains O(n²).
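This asymmetry is easy to see in a sketch of binary insertion sort using Python's standard `bisect` module (illustrative only; the shift counter is ours): the position is found in O(log i) comparisons, but the block of larger elements must still be moved.

```python
import bisect

def binary_insertion_sort(arr):
    """Insertion sort on an array with binary search for the position.
    Comparisons drop to O(n log n); element shifting is still O(n^2)."""
    a = list(arr)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        pos = bisect.bisect_right(a, key, 0, i)   # O(log i) comparisons
        shifts += i - pos                         # O(i) moves in the worst case
        a[pos + 1:i + 1] = a[pos:i]               # shift the block right by one
        a[pos] = key
    return a, shifts

# Reverse-sorted input: every element shifts the whole sorted prefix.
print(binary_insertion_sort([50, 40, 30, 20, 10]))  # ([10, 20, 30, 40, 50], 10)
```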
To address the shifting cost, a linked list can be used, as illustrated in the following example.
Sort 50, 40, 30, 20, 10 using a linked list. In a linked list, insertion itself takes O(1), since a new element can be linked into its correct position without shifting.

| Step | List state | Comparison |
|------|------------|------------|
| Insert 50 | 50 | 0 |
| Insert 40 | 40 → 50 | 1 |
| Insert 30 | 30 → 40 → 50 | 2 |
| Insert 20 | 20 → 30 → 40 → 50 | 3 |
| Insert 10 | 10 → 20 → 30 → 40 → 50 | 4 |
Time complexity of shifting: O(1)
Time complexity of comparison: O(n²)
Total time complexity: O(n²)
Here binary search cannot be used for the comparisons, because a linked list offers no random access; comparison therefore takes O(n²) even though each insertion takes a constant amount of time.
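A minimal linked-list version (an illustrative sketch; the node layout and counter are ours) shows the mirror-image cost profile: each insertion is O(1) once the position is known, but the position must be found by a linear scan, so one of the two extreme orderings always costs O(n²) comparisons.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def list_insertion_sort(values):
    """Insertion sort into a singly linked list; returns (list, comparisons).
    Each insert is O(1), but finding the position is a linear scan."""
    head = None
    comparisons = 0
    for v in values:
        if head is None:
            head = Node(v)
            continue
        comparisons += 1
        if v <= head.value:
            head = Node(v, head)          # O(1) insert at the front
            continue
        cur = head
        while cur.next is not None:       # linear scan; no binary search possible
            comparisons += 1
            if v <= cur.next.value:
                break
            cur = cur.next
        cur.next = Node(v, cur.next)      # O(1) splice into place
    out = []
    while head:                           # flatten for display
        out.append(head.value)
        head = head.next
    return out, comparisons

print(list_insertion_sort([50, 40, 30, 20, 10]))  # ([10, 20, 30, 40, 50], 4)
print(list_insertion_sort([10, 20, 30, 40, 50]))  # ([10, 20, 30, 40, 50], 10)
```

With this scan direction the ascending input triggers the full n(n-1)/2 comparisons; reversing the scan direction simply swaps which ordering is expensive.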
As observed in the examples illustrated above, the time complexity is not reduced in either case. Hence we propose an improved insertion sort that takes additional space to sort the elements. As space complexity is generally considered less important than time complexity [2][3], we concentrate on the time taken rather than the space used.

Proposed Work

In the insertion sort technique proposed here, we take 2n spaces in an array data structure, where n is the total number of elements. Insertion of elements starts from the (n-1)th position of the array. The procedure of a standard insertion sort is then followed, with binary search used to find the position at which each element should be inserted. The following cases discuss the details of our work.
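The paper gives no code, so the following Python sketch is our reading of the proposed technique: a buffer of 2n slots, the first element written at index n-1, the sorted run growing at either end in O(1) when the new element is a new minimum or maximum, and binary search plus shifting otherwise. The names and the shift-counting convention (one write per front extension, matching the worked examples below) are assumptions.

```python
import bisect

def buffered_insertion_sort(arr):
    """Sketch of the proposed technique: a buffer of 2n slots, with the
    first element placed at index n-1 and the sorted run growing at both
    ends.  Insert positions are found by binary search."""
    if not arr:
        return [], 0
    n = len(arr)
    buf = [None] * (2 * n)
    lo, hi = n - 1, n              # the sorted run occupies buf[lo:hi]
    buf[lo] = arr[0]
    shifts = 0
    for key in arr[1:]:
        if key <= buf[lo]:         # new minimum: grow leftward, one write
            lo -= 1
            buf[lo] = key
            shifts += 1
        elif key >= buf[hi - 1]:   # new maximum: grow rightward, no shift
            buf[hi] = key
            hi += 1
        else:                      # general case: binary search, then shift
            pos = bisect.bisect_right(buf, key, lo, hi)
            buf[pos + 1:hi + 1] = buf[pos:hi]
            buf[pos] = key
            shifts += hi - pos
            hi += 1
    return buf[lo:hi], shifts

# Descending input, the worst case of standard insertion sort:
print(buffered_insertion_sort([50, 40, 30, 20, 10]))  # ([10, 20, 30, 40, 50], 4)
```

On descending input every element is a new minimum, so shifting is n-1 in total rather than n(n-1)/2, leaving the O(n log n) binary-search comparisons as the dominant cost.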

Case 1

Case 1 is the best case of standard insertion sort: input elements in ascending order, sorted using the proposed technique.
e.g. 10, 20, 30, 40, 50

| Step | Buffer of 2n = 10 slots (indices 0–9) | Shifting | Comparison |
|------|----------------------------------------|----------|------------|
| Insert 10 | · · · · 10 · · · · · | 0 | 0 |
| Insert 20 | · · · · 10 20 · · · · | 0 | 1 |
| Insert 30 | · · · · 10 20 30 · · · | 0 | 1 |
| Insert 40 | · · · · 10 20 30 40 · · | 0 | 1 |
| Insert 50 | · · · · 10 20 30 40 50 · | 0 | 1 |

Total shifting = 0, total comparison = n-1.
Therefore the time complexity is O(1) + O(n) = O(n).

Case 2:

Case 2 is the worst case of standard insertion sort: input elements in descending order, sorted using the proposed technique.
e.g. 50, 40, 30, 20, 10

| Step | Buffer of 2n = 10 slots (indices 0–9) | Shifting | Comparison |
|------|----------------------------------------|----------|------------|
| Insert 50 | · · · · 50 · · · · · | 0 | 0 |
| Insert 40 | · · · 40 50 · · · · · | 1 | log 1 |
| Insert 30 | · · 30 40 50 · · · · · | 1 | log 2 |
| Insert 20 | · 20 30 40 50 · · · · · | 1 | log 3 |
| Insert 10 | 10 20 30 40 50 · · · · · | 1 | log 4 |

Total shifting = n-1.
Total comparison = log 1 + log 2 + log 3 + log 4
= log(1·2·3·4)
= log((n-1)!)
≤ log((n-1)^(n-1))
= (n-1) log(n-1)
= n log(n-1) - log(n-1)
Therefore the time complexity is O(n) + O(n log n) = O(n log n).

Case 3:

For the average case of standard insertion sort, the input elements are in random order. We follow the same procedure, with comparison done via binary search, so comparison takes O(n log n). For shifting, the time taken tends toward O(n²) but does not always reach it: because there is free space at both ends, some shifting may be avoided, since elements can be inserted at the beginning as well as at the end.

Results

Now we compare the time complexities of the proposed sorting technique and standard insertion sort.

| Input Elements | Standard Insertion Sort | Proposed Sorting Technique |
|----------------|-------------------------|----------------------------|
| Best case (ascending order) | O(n) | O(n) |
| Worst case (descending order) | O(n²) | O(n log n) |
| Average case (random order) | O(n²) | Tends to O(n²) |

Conclusion

We decrease the time complexity of the worst case of the insertion sort algorithm by increasing the space complexity. Future work includes decreasing the time complexity of the average case, which is currently O(n²). The average case shows promising results: the time complexity may be reduced below O(n²) when the input is a mixture of increasing and decreasing runs.

Acknowledgement

We would like to thank Prof. Anirban Roy, Department of Basic Sciences, Christ University Faculty of Engineering, for helpful discussions and support.

REFERENCES

1. Insertion Sort, http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Insertion_sort.html
2. Michael A. Bender, "Insertion Sort is O(n log n)," Third International Conference on Fun With Algorithms (FUN), pp. 16–23, 2004.
3. H. W. Thimbleby, "Using Sentinels in Insert Sort," Software: Practice and Experience, 19(3), pp. 303–307, 1989.

## Bacterial Transformation Efficiency: E. coli with pGLO

Bacterial Transformation Efficiency in E. coli with pGLO Plasmids
By: Richard Stone
Introduction
“The conversion of one genotype into another by the introduction of exogenous DNA (that is, bits of DNA from an external source) is termed transformation. The transformation was discovered in Streptococcus pneumoniae in 1928 by Frederick Griffith; in 1944, Oswald T. Avery, Colin M. MacLeod, and Maclyn McCarty demonstrated that the “transforming principle” was DNA. Both results are milestones in the elucidation of the molecular nature of genes.” 1


Bacterial transformation is the process by which a bacterium absorbs and expresses foreign genetic information carried on plasmids. Plasmids are small circular molecules of DNA that hold a small number of genes. The plasmids used in this experiment carry the ampicillin resistance gene. Ampicillin (amp) is an antibiotic used to kill bacteria such as E. coli, the bacterium used in the experiment. E. coli (Escherichia coli) is a simple bacterium commonly found in our bodies and in everyday life, most commonly in the intestines of mammals. The Green Fluorescent Protein (GFP) gene, found in jellyfish, confers bioluminescent properties and makes cells glow under UV light. Knowing the location of the gene, scientists can cut the GFP gene out of the jellyfish DNA. They do this using restriction enzymes, which recognize and cut DNA at a specific sequence of nucleotides to isolate a specific gene. Once isolated, the gene can be glued into a plasmid that contains the amp resistance gene: the jellyfish DNA binds to the plasmid via the hydrogen bonds of its sticky ends, which are then sealed by DNA ligase. This creates pGLO, the plasmid used in this experiment for the transformation of the bacteria. Before transformation, the bacteria must be made competent to accept the pGLO; this is done by heat shocking the bacteria, which makes it easier for the pGLO to be incorporated. For the bacteria to fluoresce under UV light, they must be in the presence of arabinose sugar, which turns on the gene for the production of GFP.2 The amp resistance gene enables bacteria to survive in the presence of the antibiotic ampicillin. When a plasmid containing both the GFP gene and the amp gene (pGLO) is transferred into an E. coli bacterium, the transformed cells can be grown on a culture dish that contains ampicillin.
Only a small number of bacterial cells will be transformed, grow on the LB (lysogeny broth) + amp plates, and glow.3
The experiment demonstrates how bacteria are modified to express a specific gene through the process of bacterial transformation. Its purpose is to find the efficiency of bacterial transformation in E. coli by observing expression of the plasmids. This is calculated by counting the glowing colonies, i.e. the bacteria carrying GFP in the presence of arabinose sugar.
The results for each plate were hypothesized before the experiment. The LB plate with bacteria but no pGLO administered will grow a lawn of bacteria with no glowing properties. On the LB plate with ampicillin but bacteria without pGLO, nothing will survive and there will be no bacterial growth. On the LB plate with amp and bacteria with pGLO, only a very small percentage of the bacteria will survive the amp; these transformed bacteria will have bioluminescent properties. Finally, the LB plate with no amp but bacteria with pGLO will form a lawn of bacteria, and the transformed bacteria will glow as on the previous plate. Using in-class discussion and background knowledge, the efficiency of the bacterial transformation is hypothesized to be about 8×10^-4 %.4
Materials and Methods

E. coli bacteria cultures
100-1000 µl micropipette
0.5-10 µl micropipette
sterile tips
2 sterile 15-ml test tubes
500 µL of ice-cold 0.05 M CaCl2 (pH 6.1)
500 µL of lysogeny broth/agar
Bunsen burner
4 agar plates: 2 ampicillin+ and 2 ampicillin-
an incubator
a sterile inoculating loop
10 µL of pAMP solution
a timer
ice
tape
a water bath

1. Use a permanent marker to label one sterile 15-ml tube “+”, and another “-“.
2. Use a 100-1000 µl micropipette and sterile tip to add 250 µl of CaCl2 (calcium chloride) solution to each tube.
3. Place both tubes on ice.
4. Use a sterile inoculating loop to transfer a visible mass of E. coli from a starter plate to the + tube:
a. Sterilize loop in Bunsen burner flame until it glows red hot.
b. Carefully, stab loop into agar to cool.
c. Scrape up a visible mass of E. coli, but be careful not to transfer any agar. (Impurities in agar can inhibit transformation.)
d. Immerse loop tip in CaCl2 solution and vigorously tap against the wall of the tube to dislodge bacteria. Hold tube up to light to observe the bacteria drop off into the calcium chloride solution. Make sure cell mass is not left on a loop or on side of tube.
e. Sterilize loop before setting it on the lab bench.
5. Immediately suspend cells in the + tube by repeatedly pipetting in and out, using a 100-1000 µl micropipette with a fresh sterile tip.
a. Pipet carefully to avoid making bubbles in the suspension or splashing it far up the sides of the tube.
b. Hold tube up to light to check that the suspension is homogeneous. No visible clumps of cells should remain.
6. Return + tube to ice.
7. Transfer the second mass of cells to – tube as described in Step 4, and resuspend cells as described in Step 5.
8. Return – tube to ice. Both tubes should be on the ice.
9. Use a 0.5-10 µl micropipette to add 10 µl of 0.005 µg/µl pGFP solution directly into cell suspension in the + tube. Tap tube with a finger to mix. Avoid making bubbles in suspension or splashing suspension up to the sides of the tube. [DO NOT ADD pGFP TO THE “-” TUBE.]
10. Return + tube to ice. Incubate both tubes on ice for 15 minutes.
11. While cells are incubating, use a permanent marker to label two LB plates and two LB/amp plates with name and the date.
Label one LB/amp plate “+ GFP”. This is the experimental plate.
Label the other LB/amp plate “- GFP”. This is a negative control.
Label one LB plate “+ GFP”. This is a positive control.
Label the other LB plate “- GFP”. This is a negative control.
12. Following the 15-minute incubation on ice, heat shock the cells in both the + and – tubes. It is critical that cells receive a sharp and distinct shock:
a. Carry ice beaker to the water bath. Remove tubes from ice, and immediately immerse in 42°C water bath for 90 seconds.
b. Immediately return both tubes to ice, and let stand on ice for at least 1 additional minute.
13. Place + and – tubes in test tube rack at room temperature.
14. Use a 100-1000 µl micropipette with a fresh sterile tip to add 250 µl of sterile LB medium to each tube. Gently tap tubes to mix. This will allow the cells to recover from the heat shock.
15. Use the matrix below as a checklist as + and – cells are spread on each plate:
16. Use a 100-1000 µl micropipette with a fresh sterile tip to add 100 µl of cell suspension from the – tube onto the – LB plate and another 100 µl onto the – LB/amp plate.
17. Use a 100-1000 µl micropipette with a fresh sterile tip to add 100 µl of cell suspension from the + tube onto + LB plate and another 100 µl of cell suspension onto + LB/amp plate. [Do not let suspensions sit on plates too long before proceeding to Step 18.]
18. Use sterile glass beads to spread cells over the surface of each – plate:
a. Obtain four 1.5 ml tubes containing at least five sterilized glass beads.
b. Lift the lid of one – plate only enough to allow pouring the beads from one of the 1.5 ml tubes onto the surface of the agar. Replace the plate lid; do not set the lid down on the lab bench. Repeat for all plates.
c. Use the beads to spread bacteria evenly on the plates by moving the plates side to side several times. Do not move plates in a circular motion.
d. Rotate plates ¼ turn, and repeat the spreading motion. Repeat two more times. The object is to separate cells on the agar so that each gives rise to a distinct colony of clones.
19. Let plates sit for several minutes to allow the suspension to be absorbed into the agar. Then wrap the plates together with tape.
20. Place plates upside down in 37°C incubator, and incubate for 12-24 hours, or store at room temperature for approximately 48 hours.5
Results

| Plate | Transformed cells | Non-transformed cells |
|-------|-------------------|-----------------------|
| LB/amp | Bacterial growth in the form of green colonies | No growth on plate |
| LB | Growth spread across entire plate (bacterial lawn) | Growth spread across entire plate (bacterial lawn) |

Table 1. The E. coli bacterial plates after incubation.
Discussion
Before the experiment was conducted, the results for each plate were hypothesized. It was believed that the plate with only LB and no plasmid added would grow a lawn of bacteria; this was proven correct by the experiment. The plate with LB and ampicillin but no pGLO was predicted to have no growth, which was also proven correct. The plate with LB and ampicillin where the bacteria were administered pGLO was predicted to survive the amp, though not in very large quantities. Finally, the plate with only LB but with pGLO administered to the bacteria was hypothesized to glow, not necessarily strongly but at least a little. This differed from the results of the experiment, in which the bacteria did not show bioluminescent properties. This can occur for numerous reasons: too few bacteria were transformed, unsterile equipment, or improper heat shocking when making the bacteria competent. While all of these are possible explanations, the most probable cause is a lack of arabinose sugar, which is an important part of the expression of GFP (see introduction). If the plates lack arabinose sugar, the GFP proteins may not be expressed. This explains why the LB-only plate with pGLO did not produce glowing bacteria. It also raises the question of why the plate with LB, ampicillin, and the transformed bacteria did glow: why would it glow without arabinose sugar? Most likely the arabinose was present in that plate's LB but not in the others.3


The transformation efficiency was determined by counting the number of colonies on the + LB/amp plate. Any colony that shows light under UV must have accepted the plasmid and successfully expressed the GFP gene while surviving the LB/amp plate; each colony represents one transformed bacterium, so the count can be used to determine the efficiency. Transformation efficiency is expressed as the number of antibiotic-resistant colonies per µg of pGFP DNA. First, the mass of pGFP used is found from concentration × volume = mass: 0.005 µg/µl × 10 µl = 0.05 µg (Figure 1). Next, the fraction of that DNA actually spread on the + LB/amp plate is determined: 0.05 µg / 510 µl = 9.8×10^-5 µg/µl, which multiplied by the approximately 100 µl plated gives 9.8×10^-3 µg of DNA on the plate (Figure 1). Dividing the transformed colonies by the DNA on the plate gives 8.673 transformants per microgram (Figure 1). Finally, the transformation efficiency as a percentage is found: (total cells to start / total microliters) × 100 microliters gives the total number of cells on the plate, and (transformants / total cells) × 100 gives the percentage. This is calculated as (8.673 / 1,960,784,314) × 100, a transformation efficiency of 0.000004335 %, or 4.335×10^-6 % in scientific notation.
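The first two steps of this arithmetic can be checked directly (a sketch; the variable names are ours, and the 510 µl total assumes 250 µl CaCl2 + 250 µl LB + 10 µl plasmid solution, as in the procedure):

```python
# Worked check of the reported efficiency arithmetic.
concentration = 0.005   # µg/µl of pGFP stock
volume_added = 10       # µl of stock added to the + tube
total_mass = concentration * volume_added        # mass = concentration x volume

total_volume = 510      # µl in the + tube: 250 CaCl2 + 250 LB + 10 plasmid
plated_volume = 100     # µl spread on the + LB/amp plate
mass_on_plate = total_mass / total_volume * plated_volume

print(round(total_mass, 4))      # 0.05 µg of plasmid DNA in the tube
print(round(mass_on_plate, 4))   # 0.0098, i.e. ~9.8e-3 µg of DNA on the plate
```

Transformants per microgram would then be the colony count divided by `mass_on_plate`; the colony count itself is not restated in the text, so it is left out here.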
Before conducting the experiment, it was hypothesized that the transformation efficiency would be about 8×10^-4 %. The experiment found a transformation efficiency of 4.335×10^-6 %, or 8.673 transformants per microgram, significantly lower than hypothesized. The lower-than-expected efficiency shows the rarity of this specific form of genetic modification. The experiment tests how rare it is for the genetic modification to occur and demonstrates the results of the modification and its effect on an organism.
Citations

Griffiths, Anthony JF. “Bacterial Transformation.” An Introduction to Genetic Analysis. 7th Edition. U.S. National Library of Medicine, 01 Jan. 1970. Web. 31 Dec. 2016.
“Bacterial Transformation.” SpringerReference (n.d.): n. pag. Cold Spring Harbor Laboratory. Dolan DNA Learning Center. Web.
Reece, Jane B. Campbell Biology, Volume 1. Boston, MA: Pearson Learning Solutions, 2011. Print. Chapter 20.
Transfer, Genetics, And Information. BIOTECHNOLOGY: BACTERIAL TRANSFORMATION* (n.d.): n. pag. Web
“Lab Center – Bacterial Transformation.” Lab Center – Bacterial Transformation. N.p., n.d. Web. 03 Jan. 2017.
“Bacterial Transformation.” SpringerReference (n.d.): n. pag. Web.

## Improving Effectiveness and Efficiency of Sentiment Analysis

Modha Jalaj S.
Chapter – 1
1. Introduction:
Big Data has created a lot of buzz in the Information Technology world. Big Data comprises large amounts of data from various sources such as social media, news articles, blogs, the web, sensor data, and medical records.
Big Data includes structured, semi-structured, and unstructured data. All of these data are very useful for extracting important information for analytics.
1.1 Introduction of Big Data: [26]
Big Data differs from other data in five dimensions: volume, velocity, variety, value, and complexity. [26]

Volume: Machine-generated data arrives in very large volumes.

Velocity: Social media websites generate data continuously, and the rate at which data is acquired from them is increasing rapidly.

Variety: New types of data are generated whenever new sensors and new services appear.

Value: Even unstructured data carries valuable information, so extracting such information from large volumes of data is worthwhile.

Complexity: Connections and correlations of data describe the relationships among the data.

Big Data includes social media posts, product reviews, movie reviews, news articles, blogs, etc., so analyzing this kind of unstructured data is a challenging task.
This makes Big Data a trending research area in computer science, and sentiment analysis is one of the most important parts of this research area.
We have a large amount of data that expresses opinions about social issues, events, organizations, movies, and news; we consider this data for sentiment analysis to predict future trends and the effect of certain events on society.
We can also improve CRM strategy after analysing customer comments and reviews. This kind of analysis is an application of Big Data.
1.2 Introduction of Sentiment Analysis:
Big Data is a trending research area in computer science, and sentiment analysis is one of its most important parts. Big Data refers to the very large amounts of data easily found on the web, social media, remote sensing data, medical records, etc., in structured, semi-structured, or unstructured form, and we can use these data for sentiment analysis.


Sentiment analysis is about capturing the real voice of people towards specific products, services, organizations, movies, news, events, issues, and their attributes [1]. Sentiment analysis draws on branches of computer science such as Natural Language Processing, Machine Learning, Text Mining, and Information Theory and Coding. Using the approaches, methods, techniques, and models of these branches, we can categorize unstructured data (news articles, blogs, tweets, movie reviews, product reviews, etc.) as positive, negative, or neutral according to the sentiment expressed in it.

Figure 1.2.1: Sentiment Analysis
Sentiment analysis is done at three levels [1]:

Document Level
Sentence Level
Entity or Aspect Level.

Document level sentiment analysis is performed on the whole document, deciding whether the document as a whole expresses positive or negative sentiment. [1]
Entity or aspect level sentiment analysis performs finer-grained analysis; its goal is to find sentiment on entities and/or aspects of those entities.
For example, consider the statement "My HTC Wildfire S phone has good picture quality but it has low phone memory storage." The sentiment on the phone's picture quality is positive, but the sentiment on its memory storage is negative. From such analysis we can generate a summary of opinions about entities. Comparative statements also belong to entity or aspect level sentiment analysis but are handled with techniques of comparative sentiment analysis.
Sentence level sentiment analysis determines whether each sentence expresses a positive, negative, or neutral sentiment, and is closely related to subjectivity classification. Many statements about entities are factual in nature and yet still carry sentiment. Current sentiment analysis approaches capture the sentiment of subjective statements but neglect objective statements that carry sentiment [1].
For example: "I bought a Motorola phone two weeks ago. Everything was good initially. The voice was clear and the battery life was long, although it is a bit bulky. Then, it stopped working yesterday." [1] The first sentence expresses no opinion, as it simply states a fact. All the other sentences express either explicit or implicit sentiment. The last sentence, "Then, it stopped working yesterday", is an objective sentence, but current techniques cannot extract sentiment from it even though it carries negative, undesirable sentiment. We try to resolve this problematic situation with our approach. [1]
The proposed classification approach handles subjective as well as objective sentences and generates sentiment from both.
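A minimal lexicon-based sentence-level classifier (an illustrative sketch; the word lists are invented and this is not the approach proposed in this work) reproduces the gap described above:

```python
# Minimal lexicon-based sentence-level classifier.  The word lists are
# invented for illustration; this is NOT the approach proposed here.
POSITIVE = {"good", "clear", "long", "great"}
NEGATIVE = {"bad", "poor", "low", "bulky"}

def sentence_sentiment(sentence):
    words = {w.strip(".,").lower() for w in sentence.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

review = [
    "I bought a Motorola phone two weeks ago.",
    "Everything was good initially.",
    "The voice was clear and the battery life was long, although it is a bit bulky.",
    "Then, it stopped working yesterday.",
]
for s in review:
    print(sentence_sentiment(s), "-", s)
```

The last sentence is classified neutral even though it carries negative sentiment: exactly the objective-sentence gap that the proposed approach targets.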
1.3 Objectives:
The objective of this research work is to improve the effectiveness and efficiency of classification and of sentiment analysis, because such analysis plays a very important role in analytics applications.
Until now, sentiment analysis has focused on subjectivity, i.e. explicit opinion, to get an idea of people's views on a particular event, issue, or product. It does not consider objective statements, even though objective statements can carry sentiment, i.e. implicit opinion.
The main objective here is therefore to handle subjective as well as objective sentences and give better sentiment analysis results.
Classification of unstructured data and analysis of the classified data are the major objectives of this work.
A practical implementation will be carried out in the next phase.
1.4 Scope:
The scope of this dissertation is described below.

We consider implicit as well as explicit opinion, so sentiment analysis is expected to improve.
Analysis of unstructured data gives us important information about people's choices and views.
We propose an approach that can be applied to closed domains such as "Indian political news articles", "movie reviews", "stock market news", and "product reviews"; by considering implicit and explicit opinions we can generate a precise view of people, so industries can define their strategies.
Business and social intelligence applications use sentiment analysis, so with this approach they will be more efficient.

1.5 Applications:

There are many applications of sentiment analysis in use today to generate predictive analysis from unstructured data. Areas of application include:

Social and business intelligence applications
Product reviews, which help define marketing or production strategies
Movie review analysis
News analysis, e.g. analysing political news and people's comments to forecast election results
Predicting the effect of specific events or issues on people
Identifying a person's emotional state
Finding trends in the world
Comparative views of products, movies, and events
Improving predictive analysis of return-on-investment strategies
1.6 Challenges:
The following challenges exist in sentiment analysis:

Dealing with noisy text in sentiment analysis is difficult.
Creating a SentiWordNet for an open domain, i.e. a universal SentiWordNet, is a challenging task.
When a document discusses several entities, it is crucial to identify the text relevant to each entity. Current accuracy in identifying the relevant text is far from satisfactory. [5]
There is a need for better modelling of compositional sentiment. At the sentence level, this means more accurate calculation of the overall sentence sentiment from the sentiment-bearing words, the sentiment shifters, and the sentence structure. [5]
Some approaches exist for identifying sarcasm, but they are not yet integrated within autonomous sentiment analysis systems. [5]

## Improving the Efficiency of Semantic Based Search

An Effective Approach to Improve the Efficiency of Semantic Based Search

ABSTRACT: With the incredible growth in the size of data and the great growth in the number of web pages, traditional search engines are no longer suitable or adequate. The search engine is the most significant tool for finding information on the World Wide Web. The semantic search engine is a descendant of the traditional search engine designed to solve this problem. The Semantic Web is an extension of the existing web in which data is given well-defined meaning. Semantic web tools have a vital role in improving web search because they work to produce machine-readable data; semantic web technologies will not replace the traditional search engine.

Introduction

Keyword search engines do not always provide relevant results because they do not know the meaning of the words and expressions used in web pages. With the incredible growth in the size of data and the number of web pages, traditional search engines are no longer suitable or adequate. The search engine is an important tool for finding information on the World Wide Web. The Semantic Web is an extension of the existing web in which data is given well-defined meaning. Semantic web technologies have a vital role in improving web search because they produce machine-readable data; they will not replace the traditional search engine. For comparison, the keyword search engines Google and Yahoo and the semantic search engines Hakia, DuckDuckGo, and Bing were selected. When both kinds of engine were compared, the semantic engines' results were better than the keyword search engines'.
Some pages contain hundreds of words just to attract users: they show only the advertisement of the page rather than giving a relevant result. If a user enters a keyword, the search engine will suggest many pages according to previous user searches, but if the keyword is misspelled it will not show anything. This research work proposes a framework, called the enhanced skyline sweep algorithm, to resolve this problem: even if the particular keyword given by the user is wrong, the search engine will still give the user relevant results.
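One simple way to tolerate a misspelled keyword is approximate matching against the index, sketched here with Python's standard `difflib` (this is only an illustration with hypothetical data, not the enhanced skyline sweep algorithm itself):

```python
import difflib

# A toy inverted index mapping keywords to pages (hypothetical data).
index = {
    "semantic": ["page1", "page4"],
    "search": ["page2"],
    "ontology": ["page3"],
}

def lookup(term):
    """Exact match first; otherwise fall back to the closest indexed keyword."""
    if term in index:
        return index[term]
    close = difflib.get_close_matches(term, list(index), n=1, cutoff=0.6)
    return index[close[0]] if close else []

print(lookup("semantic"))   # exact hit: ['page1', 'page4']
print(lookup("semantik"))   # misspelled query still resolves: ['page1', 'page4']
```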
2. Semantic Web Search Engine
Semantic search greatly improves the accuracy of query results: the search engine provides exactly the content the user intends to find. There is no denying the power and reputation of the Google search engine, but a semantic search engine yields more relevant and intelligent results, because it can compare and extract data and return highly relevant answers to queries.
A. Approaches to Semantic Web
There are four methods for semantic search, and which is used depends on the semantic search engine. The first uses contextual analysis to help disambiguate queries; the second is reasoning; the third is natural language reasoning; and the fourth is ontology-based search.
3. Literature Survey
In [1], the researchers compare the performance of different keyword search techniques; the results fell below expectations. Run-time performance was poor, and the execution times of the various search techniques varied across evaluations.
In [2], the authors propose an effective approach to keyword queries over relational databases. Web keyword search techniques cannot be applied directly to databases, because data on the internet takes different forms: in databases, information is represented as tuples and relationships. The researchers propose a semantic graph model consisting of database metadata, database values, user terms and their semantic connections.
In [3], the systems produce answers quickly for many queries, but for many others they take a long time or fail after exhausting memory. The authors conclude that their approach succeeds in returning a combination of answers within a predictable amount of time.
In [4], the researchers investigate the problem that arises when a user's query against a SQL database matches too many tuples, the so-called many-answers problem. They propose a ranking approach for the answers to database queries.


In [5], the researchers address the problem of extracting the best answer trees from a data graph of graph-structured textual data. XML and HTML data can be represented as graphs with entities as nodes and relationships as edges. To achieve this flexibility, they create a novel search-frontier prioritization technique centered on spreading activation.
In [6], a new semantic search engine is proposed that answers intelligent queries more efficiently and accurately. It uses XML meta tags, both built-in and user-defined, to search for information. The proposed approach is shown to take less time to answer queries, and the use of W3C-compliant tools lets the system run on any platform.
In [7], the search performance of various search engines is evaluated by running each query on both keyword-based and semantic search engines. In that comparison, the semantic search engine's performance was the lower of the two.
In [8], a generic approach is presented for mapping queries in a user language into an expressive logical language, along with a particular instantiation of that approach which translates keyword queries into DL conjunctive queries using knowledge available in the knowledge base (KB).
In [9], semantic knowledge has repeatedly been employed to enforce relational database integrity. It also offers the opportunity to transform a query into a semantically equivalent but more efficient one. The paper describes a semantics-based transformation technique that uses constraints and semantic integrity to reduce the cost of query processing.
In [10], a survey of web search engines developed by different authors confirms that no search engine answers queries properly, seamlessly and with up-to-date results.
In [11], a survey of semantic search engines extracts the notable features of various engines and describes some of the better ones.
In [12], a survey covers the approaches and features of several semantic search engines, details the advantages and techniques of some of the best ones, and contrasts semantic search engines with traditional search.
In [13], the paper observes that retrieving relevant information with a search engine is difficult and that the semantic search engine plays a vital role in solving this problem. It surveys the generations of search engines, their advantages and features, and their role on the web.
In [14], traditional search engines are noted to miss relevant information because they do not understand meaning, whereas semantic search engines are meaning-based and can overcome this limitation. The paper gives a brief overview of traditional and keyword search engines.
In [15], the paper notes that although a number of techniques have been implemented and proposed, all lack a standard for system evaluation. It gives an empirical evaluation of the performance of relational keyword search systems, concluding that many existing search techniques perform poorly, and it explores the relationship between execution time and factors that varied in earlier evaluations.
4. Methodology

Fig 1: System Architecture
As Fig 1 shows, when the user submits a query, the semantic search engine extracts the relevant results and returns them to the user. If the query itself is wrong, no result would normally be shown, so the skyline sweep algorithm is used to return relevant results even for a wrong query by means of a key-combination process. Keyword search over relational data has been an active area of research throughout the past decade, yet despite the significant number of papers published in this area, no research prototype has transitioned from a proof-of-concept implementation into a deployed system. This lack of technology transfer, coupled with discrepancies among existing evaluations, indicates the need for a thorough, independent empirical evaluation of the proposed search techniques. Two data sets, IMDb and Wikipedia, contain the full text of articles, which emphasizes sophisticated result-ranking schemes. These data sets roughly span the range of sizes used in other evaluations, although both are subsets of the original databases; using a database subset likely overstates the efficiency and effectiveness of the evaluated search techniques.
A. User interface
To connect to the server, the user must supply a username and password. An existing user can log in directly; otherwise the user must first register details such as username, password and email ID with the server. The server creates an account for each user to track upload and download rates, with the name set as the user ID. Logging in is then used to enter a specific page.
Example: a node is created with a given name and port; the nodes are then created and displayed.
B. Admin module
The admin maintains the user information and uploads the files that users will search. Only after a file upload is complete can users search for the file they want. The admin can also review user activity: when a user searches for a file, the search and its timing are stored with the admin, so the admin can check which files have been uploaded and how they are used.
C. Query processing
Query processing passes whatever the user is searching for as a query. All files uploaded by the admin are stored in the database, and the user's search looks up where the requested keyword is available. If the requested file is in the database, it is returned to the user: given a keyword, all lines related to that keyword are displayed, and from those lines the user obtains the data needed. The search and execution details are stored in the database so that they can be viewed whenever required.
Example: the user searches for a query (keyword) in the database and gets the output related to that query.
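The query-processing step described above can be sketched as a simple keyword lookup over stored files. This is an illustrative sketch only: the file names, contents and the `search_lines` helper are all invented here, not taken from the paper's system.

```python
# Hypothetical sketch of query processing: uploaded files live in a small
# in-memory "database", and a keyword query returns every line that
# contains the keyword, together with the file it came from.

documents = {
    "notes.txt": ["semantic search improves precision",
                  "keyword search matches literal terms"],
    "intro.txt": ["the semantic web gives data a fixed meaning"],
}

def search_lines(keyword):
    """Return (filename, line) pairs whose line contains the keyword."""
    keyword = keyword.lower()
    return [(name, line)
            for name, lines in documents.items()
            for line in lines
            if keyword in line.lower()]

print(search_lines("semantic"))
```

A real system would of course query an actual database and log the search details, as the text describes, rather than scan an in-memory dictionary.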
D. Recommended module
The recommended module handles the case where the user types a keyword wrongly: the wrong word is automatically mapped to the correct keyword, and all correct words related to the wrong one are displayed. The skyline sweep algorithm is used to find the correct keyword automatically.
Example: the user gives a wrong query; the key-combination process produces the correct output.
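The paper does not specify the enhanced skyline sweep algorithm in enough detail to reproduce it, so the sketch below substitutes a plain Levenshtein edit-distance lookup as a stand-in for the keyword-correction idea: a misspelled keyword is mapped to the closest keyword in a known vocabulary. All names here are illustrative.

```python
# Stand-in for the keyword-correction step (NOT the skyline sweep
# algorithm itself): map a wrong keyword to the nearest known keyword
# by Levenshtein edit distance.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance, O(len(a)*len(b))."""
    dp = list(range(len(b) + 1))          # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i            # prev holds dp[i-1][j-1]
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def suggest(query, vocabulary):
    """Return the known keyword closest to the (possibly wrong) query."""
    return min(vocabulary, key=lambda w: edit_distance(query, w))

print(suggest("semntic", ["semantic", "keyword", "ontology"]))  # semantic
```

The standard library's `difflib.get_close_matches` would serve the same illustrative purpose; the hand-rolled version just makes the distance computation explicit.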
E. Top ranking
File rankings can be viewed in a chart. The files viewed most often by users are top-ranked and appear first, ahead of the results for the user's search keyword, so it is easy to see which files users view most. The ranking is displayed as a chart.
Example: when a user searches for a keyword that has already been viewed, that keyword is displayed first.
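The top-ranking idea, files ordered by how often users have viewed them, can be sketched with a simple view counter. The file names and counts below are invented for illustration; the paper does not give its ranking formula.

```python
# Minimal sketch of the top-ranking step: count views per file and list
# the most-viewed files first. Counter.most_common() returns entries in
# descending order of count.
from collections import Counter

views = Counter()
for searched_file in ["a.txt", "b.txt", "a.txt", "c.txt", "a.txt", "b.txt"]:
    views[searched_file] += 1

ranking = [name for name, _ in views.most_common()]
print(ranking)  # ['a.txt', 'b.txt', 'c.txt']  (3, 2 and 1 views)
```

Feeding these counts to any charting library would produce the ranking chart the text describes.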

The proposed framework offers the following advantages: it reduces time consumption during retrieval, it searches data efficiently across various search engines, and it is easy to execute in a realistic manner.

7. Conclusion and Future Work
Searching the internet today is a challenge, and it is estimated that approximately half of all complex questions go unanswered. Semantic search has the power to enhance traditional web search, though whether any search engine can meet all these conditions remains an open question. We proposed a framework using an enhanced skyline sweep algorithm to overcome this problem. Our evaluation favors a realistic query workload over a larger workload of queries that are unlikely to be representative. The experimental results do not reflect well on existing relational keyword search techniques: run-time performance is unacceptable for most of them, and memory consumption is excessive for many, calling into question the scalability and improvements claimed by previous evaluations. Future work will therefore focus on the run-time cost of searching data with upcoming technologies.
8. References
[1] J. Coffman and A. C. Weaver, "An Empirical Performance Evaluation of Relational Keyword Search Systems," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 1, January 2014; also Technical Report CS-2011-07, University of Virginia.
[2] Jarunee Saelee and Veera Boonjing, "A Metadata Search Approach to Keyword Query in Relational Databases," International Journal of Computer Applications, pp. 140-149, May 2013.
[3] A. Baid, I. Rae, J. Li, A. Doan, and J. Naughton, "Toward Scalable Keyword Search over Relational Data," University of Wisconsin, Madison.
[4] Surajit Chaudhuri and Gautam Das, "Probabilistic Ranking of Database Query Results," Microsoft Research, Redmond, WA, USA.
[5] V. Kacholia, S. Pandit, S. Chakrabarti, S. Sudarshan, R. Desai, and H. Karambelkar, "Bidirectional Expansion for Keyword Search on Graph Databases," Indian Institute of Technology, Bombay.
[6] Ritu Khatri, Kanwalvir Singh Dhindsa, and Vishal Khatri, "Investigation and Approach of New Analysis of Intelligent Semantic Web Search Engine," International Journal of Recent Technology and Engineering (IJRTE), ISSN 2277-3878, vol. 1, no. 1, April 2012.
[7] Duygu Tümer, Mohammad Ahmed Shah, and Yıltan Bitirim, "An Empirical Evaluation on Semantic Search Performance of Keyword-Based and Semantic Search Engines: Google, Yahoo, MSN and Hakia," Fourth International Conference on Internet Monitoring and Protection, 2009.
[8] Thanh Tran, Philipp Cimiano, Sebastian Rudolph, and Rudi Studer, "Ontology-based Interpretation of Keywords for Semantic Search," Institute AIFB, Universität Karlsruhe, Germany.
[9] W. David Haseman, Tung-Ching Lin, and Derek L. Nazareth, "An Intelligent Approach to Semantic Query Processing," University of Wisconsin-Milwaukee and National Sun Yat-Sen University, Taiwan.
[10] S. Latha Shanmuga Vadivu, M. Rajaram, and S. N. Sivanandam, "A Survey on Semantic Web Mining Based Web Search Engines," ARPN Journal of Engineering and Applied Sciences, vol. 6, no. 10, October 2011.
[11] Anusree Ramachandran and R. Sujatha, "Semantic Search Engine: A Survey," International Journal of Computer Technology and Applications, vol. 2, no. 6, pp. 1806-1811.
[12] G. Sudeepthi, G. Anuradha, and M. Surendra Prasad Babu, "A Survey on Semantic Web Search Engine," IJCSI International Journal of Computer Science Issues, vol. 9, issue 2, no. 1, March 2012.
[13] G. Madhu, A. Govardhan, and T. V. Rajinikanth, "Intelligent Semantic Web Search Engines: A Brief Survey," International Journal of Web & Semantic Technology (IJWesT), vol. 2, no. 1, January 2011.
[14] Junaidah Mohamed Kassim and Mahathir Rahmany, "Introduction to Semantic Search Engine," 2009 International Conference on Electrical Engineering and Informatics, 5-7 August 2009, Selangor, Malaysia.
[15] Joel Coffman and Alfred C. Weaver, "An Empirical Performance Evaluation of Relational Keyword Search Systems," Department of Computer Science, University of Virginia, Charlottesville, VA, USA.

## Efficiency of photovoltaic cells

This year’s Nobel Prize has been awarded to an American physicist and chemist whose work paved the way to building efficient, low-cost polymer photovoltaic cells.
Professor A. J. Heeger of the University of California at Santa Barbara, US, received the prestigious Nobel Prize for his research on polymer photovoltaic solar cells over the past two decades.
In 1995 Heeger published a paper (Science 270, 1789) proposing a new approach to fabricating photovoltaic devices, which led to the development of efficient solar cells. It enabled the fabrication of renewable, sustainable, recyclable and low-cost photovoltaic devices that convert light into electric current.
This approach has since been enhanced and is widely used in commercial applications to produce flexible organic solar cells. The increasing demand for energy has created a need for low-cost, eco-friendly energy sources, and solar power, a renewable source, is well suited to producing energy at low cost.
Breakthrough
The efficiency of photovoltaic cells depends on the energy conversion and charge collection of the device, which are high in inorganic photovoltaic devices; organic photovoltaic devices, however, have major advantages over inorganic ones: low-cost fabrication, mechanical flexibility and disposability. This has led many researchers to focus on polymer photovoltaic cells, and several approaches have been proposed, such as mono- and bilayered organic solar cells using photo-induced electron transfer in composites of conducting polymers as donors (D) and buckminsterfullerene and its derivatives as acceptors (A). However, the conversion efficiency is limited by the carrier collection efficiency at the D-A interface. A major breakthrough in the field of organic photovoltaic cells, proposed by Heeger, overcame this limitation of the bilayer heterojunction: by carefully controlling the morphology of the phase separation into an interpenetrating bicontinuous D-A network, a high interfacial area is achieved within the bulk material, yielding efficient photo-induced charge separation. This structure is known as the "bulk D-A heterojunction". Although a bulk heterojunction had previously been proposed by Hiramoto et al. [J. Appl. Phys. 72, 3781, 1992], fabricating solar cells with it was far more difficult than with Heeger's approach.


Heeger used a composite film of poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylene vinylene) (MEH-PPV) and soluble derivatives of buckminsterfullerene, namely [6,6]PCBM and [5,6]PCBM, to form a polymer blend. To overcome the limited solubility of C60, a series of soluble C60 derivatives was used; this concept enabled the realization of the new device. The bulk heterojunction structure uses metal electrode contacts (Ca or Al) of different work functions to optimize the efficiency of collecting holes from the donor phase and electrons from the acceptor phase.
Indium tin oxide (ITO) is used as the anode and Ca or Al as the cathode, which extract holes and electrons, respectively, from the polymer blend. The performance of bulk heterojunction photovoltaic cells depends on the phase separation in the polymer blend, so much of the research has concentrated on precisely controlling it. The film formation has to be very fast so that phase separation is arrested early, producing fine structures with domains smaller than the exciton diffusion length. This can be achieved [Adv. Mater. 12, 498, 2000] by spin coating onto a heated substrate so that the solvent evaporates faster.
Heeger achieved a quantum efficiency (the percentage of photons hitting the photoreactive surface that produce an electron-hole pair) of up to 2.9%, which was further enhanced by using different low-molecular-weight materials [Adv. Mater. 12, 1270, 2000]; quantum efficiency can be improved to as much as 10% with different materials.
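The quantum-efficiency figure follows directly from its definition: collected carriers divided by incident photons, expressed as a percentage. The sketch below uses made-up photon and electron counts purely to illustrate the arithmetic.

```python
# Worked example of the quantum-efficiency definition:
# QE = (electron-hole pairs collected) / (photons incident) * 100 %.
# The counts below are illustrative, not measured data.

def quantum_efficiency(electrons_collected, photons_incident):
    """Return external quantum efficiency as a percentage."""
    return 100.0 * electrons_collected / photons_incident

# e.g. 29 electron-hole pairs per 1000 incident photons gives 2.9 %,
# numerically matching the efficiency reported for Heeger's blend.
print(quantum_efficiency(29, 1000))  # 2.9
```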
Evolution
A wide range of research has been carried out based on the bulk heterojunction approach, resulting in efficient photovoltaic cells; however, organic solar cells degrade when exposed to ultraviolet light, which affects their lifetime, and their energy conversion efficiency is low compared with inorganic counterparts. Fig 1 shows the efficiencies achieved by different research groups over the last decade. The Solarmer company has achieved 7.9% efficiency, certified by the National Renewable Energy Laboratory (NREL); compared with Heeger's quantum efficiency of 2.9%, this indicates rapid development in the field over a decade. Solarmer produces commercial photovoltaic products using the bulk heterojunction approach.
Another company, Konarka, founded by Heeger, also manufactures plastic electronics and bulk heterojunction solar cells. Konarka offers products such as sensors, portable battery charging for PDAs, mobiles and other small devices, microelectronics, portable power, remote power, and building-integrated photovoltaics.
Plextronics is another company developing and selling pre-formulated inks, as well as the know-how to print them, which are extensively tested for outdoor lifetime; devices using these products have lifetimes of the order of years.
However, the efficiency of polymer photovoltaic cells is low compared with silicon-based photovoltaic devices. To compete with other available technologies, the efficiency of polymer photovoltaic cells should be increased to 15% with a lifetime of 15-20 years [Solar Energy, 2009, 1224]. Heeger made a significant contribution to the polymer solar cell field by proposing the bulk heterojunction approach, which has many potential applications in renewable energy.

## Investment and Efficiency of Solar and Wind Energy

Are renewable energies the best option to deal with the issue of massive fossil fuel usage?

The more a society evolves, the more energy it consumes. In the last five decades, the excessive use of fossil fuels, the weakening of the ozone layer and deforestation have caused an increase in the Earth’s temperature, generating great changes in the global climate. Society should start using more favourable energy sources. This essay will compare two types of renewable energy, solar and wind, which are endless and eco-friendly; topics such as investment, efficiency and location will be discussed. Additionally, some projects and research carried out around the world will be presented as solutions to this issue.


Non-renewable energy is seemingly the most used today. When sources such as coal, oil and natural gas are burned, carbon dioxide is released into the environment. At balanced levels, CO2 is a useful gas that keeps the planet's temperature in the condition known as the greenhouse effect, favouring the normal evolution of life (Morse, 2013, par. 2). However, with widespread industrialisation, excessive emissions have pushed these levels to unsuspected limits, endangering the whole world. Communities founded on the desire for economic growth and environmental sustainability therefore need to adopt new methods of power generation. Renewable energy is a valuable concept, defined as energy generated from infinite sources; it can come from nature, the sun and the wind, for example. It has many benefits: reduced carbon emissions, a secure energy supply and thus economic stability, and minimized environmental and human damage (Mason, 2016, p. 1).

People can start using the wind to generate energy, because it is unlimited and clean. However, wind is not constant, since the weather changes every day, so other power plants may still be needed. There are two types of turbine, with vertical or horizontal axes. Vertical-axis turbines are used for residential supply: they are only five metres high, affordable for homeowners and easy to maintain. Horizontal-axis turbines are eighty-metre towers with upper blades capturing the wind. Although the initial investment can be expensive, wind energy is feasible in the long term. An example of its efficiency is Samsø, Denmark, where 100% of the residents' electricity is generated from wind; "In 2015 this country broke its own world record by producing more than 40% of its national power from wind energy" (Mason, 2016, par. 12).
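A rough sense of why turbine size matters comes from the standard wind-power estimate, P = ½ρAv³Cp, where ρ is the air density, A the rotor swept area, v the wind speed and Cp the power coefficient (capped near 0.593 by the Betz limit). This formula and the numbers below are a textbook illustration, not figures from the essay.

```python
# Illustrative wind-power estimate: P = 0.5 * rho * A * v^3 * Cp.
# rho = 1.225 kg/m^3 (sea-level air), Cp = 0.4 (typical modern turbine).
import math

def wind_power_watts(rotor_diameter_m, wind_speed_ms, cp=0.4, rho=1.225):
    area = math.pi * (rotor_diameter_m / 2) ** 2  # rotor swept area, m^2
    return 0.5 * rho * area * wind_speed_ms ** 3 * cp

# An 80 m rotor at a 12 m/s wind speed yields roughly 2.1 MW:
print(round(wind_power_watts(80, 12) / 1e6, 2), "MW")
```

The cubic dependence on wind speed is why siting on open plains or offshore, where winds are stronger and steadier, matters so much.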

In the United States, large wind farms tend to be located in agricultural areas. A lease can be paid to landowners for the use of their land while they continue to work their farms, and some scientists suggest that wind turbines may even improve the flow of CO2 to surrounding crops (Morse & Turgeon, 2012, par. 5). An example of a large onshore wind farm is in Jaisalmer, India, which in April 2012 produced 1064 megawatts of electricity (Morse & Turgeon, 2012, par. 4). Even when the placement has been well studied, people complain that turbines are loud and unsightly in the landscape. An optimal installation would be on large open plains or at sea, far from cities; however, offshore locations represent a risk for ships during violent storms and imply further investment to transport the energy from the generation site to the places of consumption. Nevertheless, wind energy is very popular, and many facilities extend into the sea: Walney Wind is the largest offshore wind farm in the world, with 102 turbines in the Irish Sea generating 367 megawatts of energy (Morse & Turgeon, 2012, par. 5).

On the other hand, the sun is one of the best-known sources of renewable energy, and it is free. As stated by the National Renewable Energy Laboratory, "More energy from the sun falls on the earth in an hour than is used by everyone in the world in a year" (Shinn, 2018, par. 6). Solar energy produces no air pollution or greenhouse gases. It can be used for domestic purposes such as heating water directly, for crops, and as a source of light. Although its efficiency depends on the climate and output decreases on the cloudiest days, solar energy systems are noise-free, can keep working at all times, and are easy to maintain. In addition, for residential installations connected to the network, excess generated energy can be fed back into the grid for payment from power companies. According to Dr. Jennifer Baxter of the Institution of Mechanical Engineers, for industrial purposes this excess could be used to generate hydrogen, obtained by using electricity to split water. The hydrogen would function as energy storage, so the balance between electricity supply and demand could be maintained; it could also be used to recharge electric cars (Vaughan, 2018, par. 4).

Special siting should be considered to achieve good performance and further advantages. The Sahara research is an example of this: the project showed that installing large-scale wind and solar power could green the desert, improving agriculture and vegetation and consequently increasing livestock numbers, which would be very auspicious for the people living in the area, as stated by Dr. Safa Motesharrei (McGrath, 2018, par. 3). For excess generated energy, storage batteries avoid waste, but they are still expensive. Nonetheless, with constantly updated technology, these appliances will keep improving. Stanford researchers, for instance, manufactured a water-based battery, a manganese-hydrogen prototype that is small and cheap, can be charged up to 10,000 times and lasts more than ten years (Abate, 2018).

In conclusion, renewable energies often require a great initial investment, but it can be recovered as time progresses, with minimal maintenance costs. Both solar and wind energy can be used residentially, even with an economic benefit. Moreover, the positive consequences, such as the reduction of air pollution and climate change, generate a cleaner environment that benefits all living beings. Therefore, our priority as a society is to keep promoting the use of renewable energies all over the world.

REFERENCES:

Mason, M. (February 2016). Renewable Energy: All You Need to Know. Retrieved from: https://www.environmentalscience.org/renewable-energy

Vaughan, A. (May 9, 2018). Use Excess Wind and Solar Power to Produce Hydrogen. The Guardian. Retrieved from: https://www.theguardian.com/environment/2018/may/09/use-excess-wind-and-solar-power-to-produce-hydrogen-report

## Efficiency Reward Management in British Airways

Competition in the airline industry has gone global, and market and industry dynamics have forced companies to make concerted efforts to ensure that high-quality goods and services are offered at competitive prices. This has led British Airways to adopt and implement several tools and strategies geared towards attaining these goals. One strategy soundly embraced by British Airways is the effective and efficient management of the human resource department with regard to the selection, recruitment and satisfaction of employees, attained through an emphasis on a worksite wellness programme. These initiatives aim to enhance performance management within the company.


Company Overview
Stiff competition has pushed the airline industry to attain very high levels of service quality, and British Airways has not been left out of this push to see off competition and remain a top provider of airline services. Market expectation levels will rise as travellers' demands grow more complex. British Airways recognizes that its employees form its most prized assets, and it ties its capacity to improve performance to the effective and efficient management of its human resource department.
The pivotal challenge faced by the company is its inability to become a truly transnational airline. The recent economic crunch, political uncertainties in the Middle East and managerial problems have weighed negatively on its ability to improve its current performance. Despite these challenges, Yahoo Finance (2010) shows that the company's net profit improved from £72m in 2003 to £438m in 2007, and earnings per share increased from 6.7 pence to 37.2 pence over the same period.
Efficiency Reward Management in British Airways
Reward management
Chew and Teo (1991) state that “a reward system expresses what an organization values and is prepared to pay for; it is governed by the need to reward the right things in order to convey the right message about what is important in terms of expected behaviors and outcomes.” The importance of HRM has grown with time, and the need to properly manage people is becoming a central focus within organizations today. This has precipitated competition amongst organizations seeking to display the best skills in people management, and it has defined a new role for line managers, whose role has shifted from traditional supervision to more advanced people-resource management. To take efficient steps in recruitment and selection, employee relations, reward management, appraisal and performance reviews, line managers must have the support of HR specialists. This discussion illustrates the lengths to which British Airways has gone to achieve high performance through better reward management.
Reward systems within organizations are always based on one's value to the organization. “It is concerned with both financial and non-financial rewards and embraces the philosophies, strategies, policies, plans and processes used by organizations to develop and maintain reward systems.” Most organizations use the term “compensation” to mean “pay” or “remuneration”. A noted problem with the term is that it implies rewards exist only “for making amends for the distasteful fact people have to work to make a living”. In Chew and Teo’s (1991) proposition, “for most people work is, in the main, a source for disutility, and they therefore require payment to compensate them for the time they devoted towards it”. While this argument is true in its literal sense, it fails to provide a complete pay philosophy, because pay philosophy should take into consideration one's competence and contribution, not merely compensation for having worked. Appreciating that employee reward takes deep account of the organization's integrated policies and practices, rewards are best given according to the market worth of an employee; in addition, one's contribution, skills and competence should form the central measures on which reward systems are based. The reward scheme runs through the culture and philosophies of an organization and is developed within its framework with the aim of maintaining the best levels of pay, benefits, compensation and other forms of reward.
According to Carter (1988), “reward system consists of financial (fixed or variable pay) and employee benefits, which together comprises the total remuneration.” In addition, the reward system also encompasses non-financial components, including recognition, praise, achievement, responsibility and personal growth. The non-financial components also include performance management systems (Lafferty & McMillan, 1989). The combination of the two, financial and non-financial rewards, forms the total reward system. Deeper analysis of reward systems reveals five further components: processes, practices, structures, schemes and procedures (Heskett, Sasser and Hart, 1990).
The successful design, development and implementation of management decisions are complex and at times daunting tasks for many managers, especially when managing the most prized assets of an organization: its employees. Managers are routinely faced with problems that require the application of tools that ensure successful operations irrespective of the sector they manage, such as the identification of the objectives of the organization, alternative means of achieving the stated objectives, and the selection of the means that accomplish the objectives in the most efficient manner. The first step in the decision-making process entails the identification of the problem. The problem in dealing with employee rewards for the optimum benefit of the organization must enhance the ability of the organization to achieve its objectives effectively. Ideally, successful identification of the problem will encapsulate trying to delineate answers to questions such as what could be the causes of the problem, where it is happening, how it is happening, when it is happening, with whom it is happening, and why it is happening (MacNamara, 2008). In essence, this should be followed by an in-depth analysis of the complexity of the problem, verification of the understanding of the problem, prioritization, and understanding the role to be played towards its redress (Collins, 1987).
In recognizing the fact that an organization’s performance depends primarily on the quality of its management and employees, line managers appreciate the role of reward in improving the quality of management through generous rewards. British Airways knows that rewards alone cannot play the sole role in improving management quality; the process brings with it a number of other factors that must be in place for it to be fully realized. This is because “the culture, values, and management style of an organization, together with its performance management and employee development programs are equally important” (Bureau of Tourism Research, 1989). It is therefore true that reward management forms an integral part of quality management but cannot stand alone in an organization in ensuring quality management.
Reward management is one of the central management issues that British Airways’ top management has over the years managed excellently. Effective reward management not only motivates the employees but also reflects the harmonious management style the company applies to capture and succeed in the market. In addition, the recruitment and retention of the best talents take precedence in the business. According to Debrah (2005),
The reward or compensation people receive for their contribution to an organization includes monetary and non-monetary components. Remuneration does not simply compensate employees for their efforts – it also has an impact on the recruitment and retention of talented people.
In this regard, reward management within British Airways calls for brilliant strategies to ensure that it succeeds. Towards this, the company has employed a number of strategies to implement this program successfully. These strategies include controlling reward, monitoring and evaluating reward theories, managing the development of the reward system, and devolving responsibility for the reward system to line managers (Hollings, 1998).
Controlling reward
British Airways has a sound reward management control strategy. Control offers the opportunity to plan and execute reward in a more organized and logical manner that reflects the spirit and mission of the company. According to Gabriel (1988), employers and managers should pay attention to their employees, and special attention to the best employees. This is done to encourage good performers and to push them to greater heights. Positive recognition of people can ensure a positive and productive organization. The recognition of outstanding performance aims to create an understanding of what behaviors add significant value to the organization and to promote such behaviors. Awards, monetary and non-monetary, should be given based on the achievements and accomplishments of workers.
Effective reward management calls for effective and strategic management to ensure that the programs not only succeed but also offer a good platform for other companies to emulate. This is an entrenched culture within British Airways aimed at ensuring employee performance improvement. In controlling rewards, the organization benefits greatly from such an initiative. The benefits of reward control include offering the best opportunity for strategic planning, ensuring continuity of the reward system, and supporting effective evaluation of the reward scheme.
Monitoring and evaluating reward theories
The process of monitoring and evaluating reward theories demands good management practices from the line managers. In British Airways, this process is ideally inclusive of the major parties to the problem and involves holding a brainstorming session where the possible solutions to the problem are all presented and analyzed. Bowen (1986) has advised against passing judgment on the possible solutions at the earliest stage of evaluating rewards, so as to leave room for possible solutions and errors that could otherwise be omitted. The selection of the best alternative for resolving the problem is the next stage, and it is essentially where the possible solutions advanced are analyzed and dissected in detail. In selecting the best alternative, the line managers within British Airways take into consideration the approach that is likely to resolve the problem in the long run, the most realistic solutions, the resources available, time, and the risks associated with each alternative (McNamara, 2008).
Managing the development of the reward system
Initiating a reward program in most organizations has been easy, but managing and developing the rewards comes with many challenges. This is because reward systems must be well developed and enhanced to reduce employee conflict (Irwin, 2003). In British Airways, this involves assessing how the situation will be once the reward has been initiated and looking for possible weaknesses within the reward scheme. The process is well handled within British Airways by a pool of highly trained line managers. Essentially, this entails a careful consideration of the best way to implement the new reward policies and procedures; what resources are required in terms of people, facilities, finances and time; who will drive the process; and who will be responsible for the success of the plan. It is imperative that the action plan is communicated to all the stakeholders who will be affected by the new changes within and without the organization, to limit the possibility of conflict and take into consideration all the divergent views. Communication within British Airways values the culture and takes into consideration the major drive within the airline industry, which centrally aims at providing the most competitive work environment to the employees.
Devolution for line managers
The success of reward schemes and projects has to a large extent relied on the interest, support and commitment of senior management within British Airways. This ensures that everybody in the project team, and indeed the entire workforce, is focused and committed. Most reward schemes within organizations are sometimes conceived, funded and developed without appropriate senior management involvement or approval. Naila (2009) has, for example, noted that some projects go forward without the management clearly conceptualizing what the project entails. A distinction between mere approval and commitment should be clearly discerned so that projects run smoothly. According to Kerzner (2006), most projects fail when senior management lacks a clear understanding of the project’s perceived benefits, risks and difficulties. This is fundamental because management plays a central role in cost appropriations and budget allocations for project activities. This means that while the project’s approval may actually have been acquired, in the euphoria of getting the project approved some of the risks may be ignored or glossed over. Efficient project cost management, especially in the field of IT, should ensure that project approvals are not based on hype and unrealistic calculations but on a framework that encapsulates a realistic assessment of the projects. These remain the central themes within British Airways that define its culture and its reward schemes.
Interviews in selection and recruitment
The most frequently used selection method in most organizations and companies, with British Airways being no exception, is the interview. The company employs this selection process in selecting and recruiting personnel in the top management positions such as departmental managers. Interviews occur when a candidate responds to questions posed by a manager or some other organizational representative. In an interview, common areas in which questions are posed include education, experience and knowledge of job procedures, mental ability, personality, communication ability, social skills as well as the knowledge of current affairs.
The recruitment process within British Airways is a close-knit process that ensures only the best are recruited. This is illustrated below by Guemier and Lockwood (1989).
Quality Performance Measurement
The capacity to understand and measure the performance of organizational policies is crucial for the success of any business. These measures should include process performance and improvements that can be seen by customers. Performance measurement is important to ensure that customer service is delivered, to set individual, team and business objectives, to highlight problems and failures in the processes, to provide the needed stimulus for continuous growth, and to provide a benchmark for establishing comparisons.
To carry out quality performance measurement effectively, an organization must understand the components of quality costs. This is because the capacity to show that the quality system is effective, to find more efficient ways of working, and to get it right the first time are fundamental to the process. Performance measurement includes four quality costs: prevention costs, appraisal costs, internal failure costs and external failure costs.
Through the application of the EFQM model, which recognizes that there are many approaches to achieving sustainable excellence, British Airways has made extensive use of this non-prescriptive framework to analyze its quality performance measurements. The process is carried out with leadership at the forefront, while the enablers, people, policy and strategy, partnerships and resources, are subjected to a process. The results of the reward policy’s performance within British Airways are then measured by people results, customer results and society results, which together generate key performance results. The tool preferred for this process was the RADAR scoring matrix, which is capable of covering all aspects of results, approach, deployment, assessment and review. The five causes of poor quality include wrong application of measurement tools, poor combination of enablers for the process, poor leadership, inability to establish a measurement process, and failure to engage all employees in the process.
Conclusion
The world over, organizations and business enterprises are experiencing major economic crunches and environmental upheavals such as deregulated industrial regulation systems, globalization, competition and technological advancement. These economic, social and political circumstances have precipitated a complex and sophisticated set of overlapping and concurrent interventions that are radically changing existing structures, cultures and job requirements. In response to this dynamic and rapid change, managers need to approach selection and recruitment from a strategic perspective. Recruitment and selection strategies, processes and policies should be integrated within the company’s human resource department and the organization’s culture. These have been entrenched in the operational culture of British Airways.
In the airline industry, there is a need to streamline operations to embrace the dynamic changes in selection and recruitment. These changes include new strategies on selection and outsourcing. British Airways has been successful and continues to gain more ground in the world market due to its strategic planning and management. This paper has given a comprehensive and in-depth analysis of the role of the human resource department in selection and recruitment, with special reference to British Airways.

## Effect of Working Practices on Efficiency and Productivity

Abstract
Aim
The aim of this project is to identify why current working practices and procedures are affecting workshop efficiency (class contact time) and productivity (hands on time) during the daily running of an educational motor vehicle workshop.
Objective
The main objective of the report will be to make recommendations on work area design and workshop layout and the proposal of new working practices and procedures to help improve the efficiency and productivity within the motor vehicle workshop.
Chapter 1 – Introduction
Chapter 2 – Background
Clydebank College first opened as a technical college in 1965; its aim was to support the training needs of apprentices in the local manufacturing companies and the shipyards.
The economic activity in the area has changed over the years so the courses offered by the college have had to change to meet the local employment needs.
The original college was in a severe state of disrepair and as a result of this Clydebank College opened a brand new £34 million campus at Queens Quay on the riverside at Clydebank in the summer of 2007.
The college delivers education and training from its main campus in Clydebank, and from community outreach centres in Dumbarton and Faifley.
Most of the college’s learners come from areas of high unemployment, where there is low participation in further education and a lower-than-average proportion of school leavers progressing into higher education.
2.1 Existing Laboratory
The motor vehicle workshop at Clydebank College is a single room, open plan, workshop approximately 25 x 20 metres (500m²) in size. The workshop was designed to accommodate up to 6 classes of approximately 12 students and one lecturer per class.
2.1.1 Workshop Layout
The laboratory has work bays laid out for 23 motor vehicles; it also has to hold motorcycles, quads, buggies and associated workshop tools and equipment.
There are workbenches and lockers situated at various points around the workshop. Two communal sinks are plumbed in at one end, and a moveable rolling road is installed in the corner of the workshop. Cleaning equipment and large workshop tools are also stored in the main workshop area. All these facilities are shared between all motor vehicle classes.
Open plan design allows a work area to be easily changed into a different workspace with limited costs should the need arise. The workspace is more adaptable and with no internal walls etc. the initial build costs are much lower.
This open plan design of the motor vehicle workshop is a new concept for the college, and most of the policies and procedures in place have been brought over from the old campus. Whilst some of these policies and procedures do work, a number of issues have developed over the last year as a result of this change in workshop design.
2.2 Automotive Curriculum
The motor vehicle courses offered at Clydebank College are as follows:
* City & Guilds 3901
* City & Guilds 4101 (Level 1,2 & 3)
* HNC/D Automotive engineering
2.2.1 City & Guilds 3901
Aimed at students with no previous qualification or knowledge of the subject area, this course is suitable for the 14+ age range. The qualification is ideal for secondary school students or as a pre-entry level to the modern apprenticeship program; it focuses mainly on developing students’ practical skills, with some oral questioning to test underpinning knowledge.
2.2.2 City & Guilds 4101
Levels 1, 2 & 3 and the modern apprenticeship program are an introduction to the maintenance, repair and diagnosis of automotive vehicles. The course has routes for tyre fitting, general fitting, light vehicle, heavy vehicle and motorcycle maintenance.
The starting point for students with no prior experience of the subject area is Level 1, which is suitable for 14+ year olds. Level 2 recognises that the learner will now be in a position to carry out routine tasks with a lower level of supervision, and Level 3 focuses on developing students’ diagnostic techniques.
2.2.3 Higher National Certificate/Diploma
HNC/D automotive engineering is delivered over 2.5 days per week for 2 years. It focuses mainly on the theoretical side of automotive engineering but also has practically assessed diagnostic units.
2.3 Staffing
The delivery of the motor vehicle curriculum is carried out by 13 members of staff in total. The motor vehicle section consists of a curriculum leader and assistant curriculum leader, 7 full time lecturers, two part time lecturers, a store person and two technicians.
2.3.1 Course equipment requirements
The motor vehicle courses delivered at Clydebank College require various items of workshop equipment to facilitate the completion of practical assessments.
See appendix A for a list of the equipment holding for the motor vehicle workshop.
The majority of the workshop tools and equipment are centralised within the motor vehicle store and as such are not part of the problem that this report is trying to address.
Only the equipment stored within the main workshop area will be considered in this report.
2.3.2 Health & Safety
Health and safety policies and procedures will not be analysed in this report; any issues found in this area will be passed on to the college H&S officer for further investigation.
2.4 Literature Review
The Design Council (About: Workplace Design, no date) have identified that there are a number of key challenges faced in developing a more innovative workplace strategy through a change in workplace design.
The credibility of new ideas is almost always questioned because most people don’t like change, especially people who have been in an organisation for many years. People in this situation have become comfortable with what they know and often have a mentality of “what works now will always work”, “what’s the point?” or “if it ain’t broke, don’t fix it”.
Most people have little idea that the working environment affects our attitudes and performance. Strange and Banning pointed out that “although features of the physical environment lend themselves theoretically to all possibilities, the layout, location and arrangement of space and facilities render some behaviours much more likely, and thus more probable than others.”
“Educational institutes should learn to understand that spatial arrangements can support retention and improve student performances; they must also understand that good space is not a luxury but a key determinant of good learning environments.” (Oblinger, 2006)
Any proposals to change the spatial arrangements within an organisation should first be discussed with the current employees. Management should seriously consider ideas from staff on workplace remodelling before imposing decisions upon the workforce; it must be remembered that it is the employees who have to work in the changed environment every day of the week. It would also be wise to involve employees at various stages of the process to assist in making the changes work.

Keeping the facility or equipment in an operational condition can be difficult in a training facility due to an educational establishment’s varied hours and rates of occupancy, which can impact on the facility’s operations and maintenance schedules. A proactive facility management program should be employed to anticipate facility problems rather than reacting to them when they occur (WBDG, 2009). This will ensure optimal long- and short-term use of the facility and, if integrated early enough in the design process, can improve productivity and reduce operating costs (Manuele, Christensen, 1999).
Maintaining a training facility and its equipment in a clean and tidy condition will promote good engineering hygiene practices in its students. Strange and Banning highlighted ways in which the physical appearance of a campus conveys a non-verbal message; they cited research that links the physical appearance of a space to the motivation and task performance of those working in that space.
The Whole Building Design Guide (2009) points out that training facilities, courses and timetables vary frequently and that instructors have different and evolving training methods. Flexibility, therefore, should be a major consideration in any proposed spatial design change and is critical to the continuing success of an enduring training facility. WBDG (2009) also recommends strategies to assist in achieving an improved training facility, such as clustering instructional areas around shared support and resource spaces and using an appropriate combination of stand-alone moveable partitions between classrooms and shared spaces. Partitions that can be adjusted in height are a good idea, ensuring some visual contact can be kept with the rest of the activities going on around while a degree of privacy is maintained (Evans and Lovell, 1979).
Research into partitioning in the nursery school suggests that young children prefer social contexts rather than the privacy of small activity spaces; as they get older they retain this preference but also realise that they need more peace and quiet to think. It is also important to realise that partitioning can aid the control of children where their own ability to control themselves is limited, as with younger children or children with learning difficulties.
Workspaces should be arranged in line with the educational goals of the training facility but should also ensure moderate openness with acoustical privacy, allowing students to hear their instructors clearly with low ambient background noise and few distractions. This could be achieved with some form of room partitioning.
Hudson Valley Community College (2009) agreed that their proposed new automotive training facility would have mini-labs with lab space for three cars as well as two vehicle lifts and an area with workbenches and tool storage. This facility design, they believe, would improve the educational environment and enhance the students’ workforce readiness by having them work in a space similar to the one they will experience in the workplace. Klatte and others (1997) also emphasized a standardised, ergonomically designed workspace as the basis for an improvement in working conditions, and Govindaraju (2001) stated that ergonomic considerations improve human performance.
Kletz (1991) wrote that it is difficult for engineers to change human nature and, therefore, instead of trying to persuade people not to make mistakes, we should accept people as we find them and try to remove opportunities for error by changing the work situation, that is, equipment design or the method of working.
Like many other organisations, Cisco concluded that their workplace environment was at odds with the way they worked. They believed a flexible, collaborative workspace would improve employee satisfaction and increase productivity. Some solutions that were introduced were unassigned workspaces, small individual workstations, highly mobile furnishings and space dividers and lockers for personal items. (Cisco-Connected workspace enhances work experience)
Changes to spatial layouts can be costly, complex and highly disruptive when changing the physical layout or the fabric of the building. This level of cost is not relevant to all organisations or all proposed changes, and with some smart thinking, design ideas to improve efficiency can be implemented with a prudent level of expenditure.
Any changes made to a workplace should be measurable. Deciding on the evaluation criteria at an early stage will allow changes to be measured. Measurement criteria should be sensible and simple, such as staff absences, running costs, replacing damaged/lost equipment, the intensity of space occupancy or error reporting, staff and student morale.
Kuh et al. discovered that the physical environment is an important characteristic of institutions that do exceptionally well in engaging with their students, and that spatial arrangements support learner retention and are a key factor in a quality learning environment.
If a superior quality product or result is wanted then it must be designed into new systems and processes (Deming, 1986). Process improvement is a never ending cycle that requires continuous efforts to bring new ideas to improve performance.
Changes in customer needs, changes in technology and competitors speed up these efforts (Kumru, Kilicogullari, 2007).
Chapter 3 – Laboratory Issues
The motor vehicle workshop is an extremely difficult area to manage in its current form mainly due to its size, number of staff, the quantity of equipment and the number of activities undertaken within.
The assistant curriculum leader is responsible for managing the workshop in its entirety on a daily basis. The ACL must ensure that vehicles are not being damaged and that they are put back together fully following classroom activities; that shared resources are maintained in a serviceable condition and are returned to their correct locations. The ACL must also ensure that the workshop in general is kept in a clean and well maintained condition and is responsible for the health and safety of staff and students within.
All these tasks must be done whilst still being committed to a full teaching timetable that very rarely takes place in the workshop.
Workshop practical time is at a premium for students and is essential for completing a motor vehicle course successfully. Full-time students would expect to receive 9 hours of tuition per week in the classroom for technology theory and 9 hours per week of tuition in the vehicle workshop on practical tasks and assessment. A typical schools class would normally spend approximately 80 hours per week in the workshop and is assessed on practical competencies only.
Students in the motor vehicle workshop can and do spend a lot of time collecting hand tools, finding equipment, finding serviceable equipment, waiting for shared resources to become available, travelling through other classes to find shared resources, and rectifying unreported vehicle faults. A lot of time can also be spent standing around or misbehaving whilst a lecturer’s time is spent elsewhere remedying one or more of the above.
Student lab time is normally affected by one or more of the problems listed below.
3.1 Work areas
There are no designated classroom areas within the workshop; bay allocation is on a first-come, first-served basis, and lecturers must liaise with each other to obtain suitable class workspace.
Lecturers can also find it difficult to keep track of their students in such a busy environment with no defined classroom areas. This can lead to health & safety concerns and child protection issues, given the number of students under the age of 16 that attend classes within the motor vehicle engineering department.
Workshop cleanliness and general housekeeping tend to suffer in or around the common areas; currently there is no way of pinpointing who is responsible for the mess.
3.1.1 Mezzanine area
The workshop mezzanine area is currently a disorganised storage point for most of the shared workshop equipment; this equipment is getting damaged and is eating into valuable class space. Shelving has been ordered to alleviate some of the storage problems, although there is no lifting facility to move objects to the upper level of the mezzanine.
The mezzanine area is also used to store motorcycles, quads, off-road buggies etc. for other specialist classes within the curriculum area. These assets act as a distraction to most students and are sustaining damage when students ‘play’ on them.
3.2 Shared resources
Most of the shared workshop equipment does not have designated storage points and is currently stored at random around the vehicle workshop. Shared resources are not signed for and, when finished with, have no official storage area to be returned to; all this equipment is used on a first-come, first-served basis.
Staff and students requiring the use of shared workshop equipment usually have to travel through other classes to locate it, often causing a disturbance.
When two or more classes within the workshop are using shared equipment such as jacks, axle stands or cleaning equipment, there are not always enough units to go around. This can leave some classes in a position where they must wait idly for the equipment to become available.
Unproductive students can often misbehave or wander around the workshop through other classes, causing a distraction, trying to find equipment that is no longer being used or has not been returned to its original location.
Students also tend not to report shared resources when they become damaged or unserviceable, because it is too much of a hassle and they have no responsibility for them.
Presently there are four badly equipped tool chests for students and lecturers in the workshop to share. Tools regularly go missing from these toolboxes after being left lying around the various work areas, or become damaged without being replaced.
Workshop vehicle keys are issued from the main storeroom to students as and when they are required. These keys can mistakenly be taken home, and cars can be started unnecessarily, sometimes dangerously, as most of the motor vehicle students are not yet technically competent, nor do they hold a valid driving licence.
The result is damage to equipment, unproductive students, class disturbances and H&S issues.
3.3 Fault reporting
Vehicle faults, damaged equipment and work requests to the technicians are passed through a paper-based work request slip; only the technician and the lecturer requesting the work know that the job exists. There is no way of informing other lecturers that a job on a vehicle has not been completed in time other than by word of mouth. This can sometimes lead to a class having to put a vehicle back together before they start their own work, or a class expecting to start work on a vehicle only to find that the car has been broken and nobody knows about it.
There is also no system to inform other lecturers that a vehicle has been set up for an assessment, again, other than by word of mouth.
3.3.1 Welfare
Lockers are not issued permanently to motor vehicle students but are issued by lecturing staff at the start of each lesson and keys receipted at the end.
There are not always enough lockers for students when the workshop is busy. Presently locker keys are held by individual lecturing staff and not shared, so some lecturers have no access to lockers unless they borrow keys from colleagues.
3.3.2 Learner Retention and Pass Rates
The problems highlighted can and do affect the students' learning experience: they stretch workshop resources, reduce the students' practical time on vehicles and cut into the lecturers' contact time with the class. This affects learner retention and ultimately student pass rates.
Very little has been written on improving efficiency and productivity in an educational vehicle workshop.
* Work study
* Method study
* Motion study
* Motion economy
* Time study
* Work measurement
Why are the identified problems a problem?
Poor siting of shared resources, inability to find equipment, lack of fault reporting, etc. all lead to a reduction in efficiency and productivity.
What would stop the problems from being problems?
Having lecturers take responsibility for areas of the workshop.
Better siting of, and designated areas for, shared resources; more classroom resources or better siting of existing classroom equipment.
An effective fault reporting mechanism put in place.
Equipment in designated areas with workshop plan and equipment lists at each base to easily guide students to equipment location.
How are we going to implement or manage the change?
Break the workshop down into smaller workshop or classroom areas, equip each classroom individually and assign a lecturer or two to manage each classroom. Colour coded equipment within each classroom for ease of identification.
What has happened as a result of the changes?
All equipment within each classroom is sufficient to complete the tasks set within it. Equipment is placed back at its storage point at the end of each lesson. Faults are reported to lecturers as they happen and dealt with before serviceable classroom equipment is compromised.
Chapter 4 – Preferred Setup
It has been proven since the opening of the new college that a workshop of this size cannot be managed effectively without a full time workshop manager in place. This appointment will never happen in an educational institution so other forms of managing the work space must be found.
The workshop should be organised in such a way that it is self managing but it must also be able to be used as an efficient reporting mechanism for informing the assistant curriculum leader/curriculum leader of issues arising in the workshop to enable them to be acted upon.
Individual members of staff should have a clear understanding of what is expected of them and be accountable for their own and their students' actions.
The preferred arrangement in any motor vehicle workshop should see that it is adequately equipped and that the equipment is suitably positioned in such a way that it provides an efficient means of working.
Where similar workshop tasks are being performed the equipment and mechanisms for management should be identical so that all staff members are clear about what is expected and that there is no ambiguity or confusion when staff are timetabled to work in various areas of the workshop.
When part-time members of staff are employed there is only one system of work to learn; any advice or questions will be met with the same answer, as each permanent member of staff works to the same set of procedures.
4.1 Proposed Changes to the Laboratory
To rectify the problem of workspace allocation it is proposed that the interior of the workshop be split into 6 classroom areas excluding the mezzanine area.
The six workshop areas should be timetabled individually from the college central timetabling system. Timetabling each area separately will prevent the workshop from becoming overloaded and will ensure that each class has a designated work area for the duration of their allocated slot.
Splitting the laboratory from one large area into six smaller areas will ease the burden of its day-to-day management. One person will no longer be required to continually oversee the daily operation of the workshop; they will only need to be reported to. Each lecturer in the department, by being centrally allocated a work area, will be required to take ownership of it and will therefore be accountable for all that goes on within that area.
The six classroom areas should be partitioned by some form of barrier, i.e. moveable boards or screens. The barriers will provide a clear indication of classroom boundaries and assist with identifying class areas of responsibility.
The barriers will help prevent pupils from straying away from their work areas making it easier for lecturers to keep track of their students. The barriers should also assist in preventing students from disturbing other class lectures.
Dividing classrooms within the workshop will assist in the control of school aged pupils; closer supervision is required for these class groups due to their maturity levels and inability to relate to health and safety requirements.
Child protection concerns will also be easier to identify and manage.
Human traffic, within the motor vehicle laboratory, would be easier to direct onto designated walkways away from the work areas and vehicles further reducing the risk of injury, class disturbance and damage to vehicles and equipment.
Classroom barriers would also provide additional space for diagrams or posters and allow electronic lectures or demonstrations to be projected onto.
4.2 Classroom Work Areas
Timetabling classes to work areas within the laboratory will introduce a fairer system of workspace allocation. It will ensure that lecturers and students always have a space to work in and vehicles to work on. The system will make lecturers accountable for the space in which they are working and encourage them to ensure that students complete tasks fully and that tools and equipment are kept serviceable, or reported when faults develop. It will ensure that tools and equipment are put away in their designated areas after each class, reduce equipment losses, and improve the general housekeeping of the workshop.
Any issues arising in the workshop for a specific time period can be addressed by looking up the class and lecturer that were working in the area when the problems occurred.
4.3 Classroom Equipment
It is recommended that each classroom area within the workshop is issued with a selection of regularly used tools and equipment. This will increase the time available to students for working on vehicles by reducing the time that they spend looking for this type of equipment in the workshop.
It will also provide a means of conveniently being able to perform a daily stock check of equipment and will provide a mechanism for reporting on the condition of tools and equipment within each of the classes.
Below is a recommended list of equipment that should be issued to each classroom area within the workshop:
* 1x lecturer's locker for the secure storage of student folders, lesson notes, specialist, valuable or loaned equipment, etc.
* 12-16 lockers for students' personal effects
* 1x Workbench per vehicle bay
* 1x black drip tray for oil per work bay
* 2x 3 litre oil filling jugs
* 1x green drip tray for coolant/water per bay
* 1x vehicle jack per work bay
* 4x axle stands per work bay
* 1x wheel brace per work bay
* 1x watering can per class
* 1x wash bucket per bay
* 1x dust pan and brush per bay
* 2x mop and mop bucket per class
* 1x Bench vice per work bay
* 1x desk per classroom for diagnostic work: paperwork, laptop siting, projector, etc.
* 1x rubbish bin per class
* 1x shelving unit to store tools and equipment
* 1x fault report book
4.4 Technician work area
As part of the workshop's reorganisation, and to assist the technicians with fault rectification and preparation work, it is recommended that the motor vehicle technicians be given a vehicle bay as a designated work area. This work area should be situated in the corner of the workshop and allow easy access into the technicians' workroom. The designated bay will enable vehicles requiring work to be taken out of the classroom area and worked on without disruption to students, lecturers or the technicians. The bay should be screened off, preferably by welding screens, to prevent access by unauthorised personnel, to reduce disturbances to both classes and technicians, and to allow welding tasks etc. to be carried out at any time of the day.
The technicians' work bay should be equipped independently of the rest of the workshop with equipment such as:
* 1x jack
* 4x axle stands
* 1x complete tool kit in roller cabinet
* 1x complete set of air tools
* 1x set of power tools (grinder, drill, etc)
* MIG welder and associated equipment
* Oxy-Acetylene welding equipment
* 1x oil drip tray
* 1x coolant drip tray
* 1x metal bench with vice
* 1x watering can
* 1x rubbish bin
* 1x soft brush and dust pan
* 1x shelving unit to store tools and equipment
4.5 Identifying and Controlling Equipment
To help identify and control tools and equipment within the six workshop areas it is recommended that each classroom is designated a colour. All equipment that is issued to and contained within each of the classroom areas should be painted the colour that has been designated to that classroom for ease of identification.
All classroom equipment that is able to be shelved should be stored on a colour coded shelving unit. The shelving unit should be labelled with the equipment that is to be stored upon it and a laminated sheet attached as a guide for students as to where each item of equipment should be stored and its quantities.
Colour coding will assist both staff and students with daily equipment checks, locating equipment and will improve the reporting of equipment faults or losses.
Classroom equipment should only be used within its designated classroom area.
Student locker keys should be stored in the main store room in a colour coded container. This will ensure that all lecturers have the ability to issue a locker to each student in their class wherever they are working in the workshop.
Lecturers will collect keys from the main store at the start of the morning or afternoon period, when work bays are identified, and return them complete to the store at the end of each slot.
Locker keys will be issued to students in exchange for a valid student ID card.
Student ID cards will be returned to each student once the lecturer is satisfied that all tools signed out have been returned to the main store and that the locker has been emptied and its key returned. This will accurately identify students who have not returned tools or locker keys, and will also ensure that student ID cards are brought to college.
4.6 Mezzanine Area
The area below the mezzanine should be separated into designated work or storage areas to better utilise the workshop floor space.
The individual work areas should be separated by a barrier or partition wall of some kind to act as a clear boundary to make work space housekeeping easier to manage and as somewhere to place posters/instructions/diagrams etc.
Work areas should consist of a tyre fitting bay, a bench fitting area, a storage area for removed vehicle parts, a storage area for large shared resources and a recycling/waste area.
The tyre fitting bay should contain the workshop's tyre removal machine and wheel balancing equipment. Both items should be secured to the floor to prevent them from moving or tipping whilst students work on them, and should be permanently wired into the workshop electrical supply to reduce the risk of electrocution from contact with a 240 V mains supply.
This area should also be fitted with a dedicated tyre shelving unit to provide a storage solution for the tyre clutter that amasses regularly on the upper mezzanine area. Storing the tyres at ground level will eliminate the need to visit the upper mezzanine area, will allow the tyres to be better managed and reduce the risk of fire.
A dedicated bench fitting area will provide students with a place to take components stripped from vehicles to be examined or worked on. It will provide lecturers with a suitable space to teach and develop students' basic metal fitting skills prior to working on vehicles. The area should contain workbenches and vices sufficient for an entire class to work productively; a bench-mounted grinder should be located in this area along with a floor-mounted pillar drill and a floor-mounted hydraulic press. The pillar drill and hydraulic press should be secured to the floor to prevent them from moving while in use.

## Organic Solar Cells – History, Principles and Efficiency

Solar Cells
Solar cells are devices used for converting sunlight into electric current (electricity) or voltage. They are also called photovoltaic (PV) cells or devices, and the process of generating electricity from sunlight is called the photovoltaic effect. Solar energy conversion through the photovoltaic effect can be achieved with many materials of differing lifetimes. Over the years much research and development has been conducted in the area of solar energy (thin film applications) [1]-[3], but most of it has concerned inorganic solar cells, with conventional silicon-based solar cells dominating commercial solar energy production [4]-[5]. Silicon-based cells for thin film applications have enormous advantages: a good absorption rate of sunlight, a band gap suitable for photovoltaic applications, long lifetimes and improving efficiency. However, the process by which silicon-based cells generate voltage is tedious and, above all, very expensive for the commercial market. Research into alternatives to silicon has been ongoing for some time, covering other inorganic materials such as Copper Indium Gallium Selenide (Cu-In-Ga-Se) [6], Cadmium Sulfide (CdS) [7] and Lead Cadmium Sulfide (PbCdS) [8]. Some of these share the production problems of silicon and are similarly expensive; others contain dangerous elements which are not environmentally friendly (CdS, PbCdS, etc.). Another alternative to silicon-based cells for thin film photovoltaic applications could be organic solar cells (also known as plastic solar cells) [9], in which photocurrents are generated from organic materials. In this review, a brief history of organic solar cells is given, the basic principle of operation is outlined, and performance in terms of absorption rate, efficiency, stability and degradation is discussed, along with a comparison between organic and inorganic (silicon) solar cells.
Chapter 2
Organic Solar cells (Plastic Solar cells)
The infancy of organic solar cells began in the late 1950s [10]. At that time, photoconductivity in some organic semiconductors (anthracene, chlorophyll) was measured, with voltages of 1 V reported by several research groups [11]-[12]. They proposed that if a single-layer PV cell, consisting of an organic layer sandwiched between a low work function metal (aluminium, Al) and a conducting glass of high work function (indium tin oxide, ITO), is illuminated, photoconductivity will be observed. This interesting result, the low cost of these organic semiconductors, and the possibility of doping the materials to achieve still better results attracted many researchers to the field. The work done since has been unprecedented, as shown in figure 2.1 on the next page.


In the 1960s, semiconducting properties were observed in dyes, particularly methylene blue [13]. An efficiency of 10−5 % in sunlight conversion was reported in the early 1970s, improving to 1 % by the early 1980s [14]. This was achieved through an interesting phenomenon known as the heterojunction [15]: an interface between dissimilar semiconducting materials. Photovoltaic devices applied the heterojunction by tailoring donor-acceptor organic cells together. In recent years, photoconductivity has been measured in dyes, and dye solar cells have progressively been improved as laboratory cells [16]. Currently the power conversion efficiency of organic photovoltaics is over 9 % for single-junction devices [17] and over 12 % for multi-junction cells [18].
Materials for organic solar cells include dyes and polymers such as oligomers [19], dendrimers [20], liquid crystal materials [21] and self-assembled monolayers [22]. All of these need to be prepared carefully to obtain optimum efficiency and stability [23].
Figure 2.1: Number of publications plotted against year of publication. This shows the inception of organic solar cells and how much interest the field has generated among scientists and commercial entities over the years. The years before 1990 saw fewer publications (1960 to 1970: 10; 1980 to 1990: 29) compared to the years in the figure.
Principle of Operation
Organic solar cells are now operated in different ways depending on their usage. Like inorganic solar cells, organic solar cells convert sunlight into electricity with the aid of a semiconductor. The basic principle behind this operation is outlined below:
Most organic solar cells have a very thin material layer, either single- or multi-layer, in which there is strong absorption of light, sandwiched between two electrodes: an anode (A) and a cathode (C). The anode (usually indium tin oxide, ITO) is transparent and has a high work function. The cathode (aluminium) is opaque and has a low work function. The material layer is usually a photosensitive organic semiconductor. When light of appropriate energy (sunlight) is incident on it, an electron is excited from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO), leaving a hole in the HOMO. This leads to exciton formation: the creation of an electron-hole pair which is strongly bound together. While the electron remains in the LUMO, it loses energy through thermal relaxation. The electron-hole pair diffuses independently of the electric field and is separated (exciton dissociation) at the interface between the donor state (HOMO) and the acceptor state (LUMO). The electron is collected at one electrode (the cathode) and the hole at the other (the anode), thereby generating a photocurrent. If the electron and hole do not reach the interface after separation, their absorbed energy is dissipated and no photocurrent is generated. The principle is illustrated step by step in pictorial form below:
Figure 3.2: a) Light is incident on an electron (red). (b) Electron is excited from the HOMO to the LUMO creating a hole (black) at the HOMO. (c) Exciton formation of electronhole pair. (d) Diffusion of exciton independent of electric field. (e) Exciton dissociation. (f) Collection of charges.
Chapter 4
Performance
4.1 Absorption of light.
In organic solar cells, the thin organic semiconducting layer is responsible for light absorption. This layer has a valence band, which is dense with electrons, and a conduction band. These bands are separated by an energy gap. When the layer absorbs light, an excited state is created. The energy gap is the energy difference between the higher energy state (LUMO) and the lower energy state (HOMO). It is usually in the range of (1.0 - 4.0) eV [24] and is determined as:
Eg = ELUMO − EHOMO . (4.1)
Where Eg is the energy gap in electron volts (eV), ELUMO is the energy at LUMO (higher energy state) and EHOMO is the energy at HOMO (lower energy state).
The energy gap usually serves as an activation energy barrier. This acti­vation energy barrier needs to be overcome before an electron is excited from the lower energy state to the higher energy state. The excited electron has energy greater than or equal to this activation energy barrier. This energy is determined as:
Ephoton = h·c / λphoton ≥ Eg . (4.2)
Where Ephoton is the energy of the incident photon (light), h is Planck's constant (6.626 × 10−34 J s), c is the speed of light (2.997 × 108 m s−1) and λphoton is the wavelength of the photon (≈ 400 - 700 nm).
As the excited electron remains in the LUMO, a hole is created in the HOMO. The electron undergoes thermal relaxation while in the LUMO, and this results in a loss of energy. This energy loss is given by:
El = Eelectron − Eg . (4.3)
Where El is thermal energy loss of the electron, Eelectron is the energy of the electron at the LUMO and Eg is the energy gap.
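The three relations above can be checked numerically. The sketch below (Python) computes the energy gap of Eq. (4.1), the photon energy of Eq. (4.2) and the thermal loss of Eq. (4.3); the HOMO/LUMO values are assumed purely for illustration, not taken from any measured material.

```python
# Illustrative check of Eqs. (4.1)-(4.3); all energies in electron volts (eV).
# The HOMO/LUMO values below are assumed for illustration, not measured data.

H = 6.626e-34   # Planck's constant, J s
C = 2.997e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def band_gap(e_lumo, e_homo):
    """Eq. (4.1): energy gap between the LUMO and HOMO levels, in eV."""
    return e_lumo - e_homo

def photon_energy(wavelength_m):
    """Eq. (4.2): photon energy E = h*c/lambda, converted from J to eV."""
    return H * C / wavelength_m / EV

def thermal_loss(e_electron, e_gap):
    """Eq. (4.3): energy lost by the excited electron via thermal relaxation."""
    return e_electron - e_gap

eg = band_gap(e_lumo=-3.0, e_homo=-5.0)  # assumed 2.0 eV gap
e_ph = photon_energy(500e-9)             # green light, roughly 2.48 eV
print(f"gap = {eg:.2f} eV, photon = {e_ph:.2f} eV")
print("excitation possible:", e_ph >= eg)  # the condition in Eq. (4.2)
print(f"thermal loss = {thermal_loss(e_ph, eg):.2f} eV")
```

A 500 nm photon carries about 2.48 eV, which exceeds the assumed 2.0 eV gap, so excitation is possible and roughly 0.48 eV is lost to thermal relaxation.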
Figure 4.1: (a) Thin organic semiconductor layer (with both LUMO and HOMO) with energy gap (Eg). (b) Incident light of greater energy than the energy gap excites electron (red) from HOMO to LUMO. This creates a hole (black) at the HOMO (c) Energy lost by the electron through thermal relaxation.
In solar cell applications, long operational lifetimes are required. To achieve this, stability and degradation are among the key issues for real-world use. Over the years, the stability of organic solar cells has improved greatly in terms of power conversion [25]. This is clearly shown in the figure below:
Ideally, the advantages of organic solar cells (low cost materials, recyclability, easy large-scale production, flexibility and durability at low weight) would come with optimum stability. In practice these same advantages can compromise the stability of the cells. The active layer (the thin organic semiconducting layer), a core component of the cell, is prone to degradation. Degradation occurs during production (printing in bulk quantities and rolling the cells together introduces mechanical stresses which affect the morphology of the active layer) and also through weathering (UV light, oxygen, water). Extensive work on the photostability of organic solar cells (a large number of polymers) has been carried out by Manceau et al. [27].
Figure 4.2: Organic Photovoltaic (OPV) production with progression in years shown. The years before 2010 had lower production of OPVs (< 0.5 MW) [26].
Chapter 5
Comparison between organic solar cells and inorganic (silicon-based) solar cells
Organic and inorganic solar cells serve similar applications but have interesting differences in how they are made. Organic solar cells are cheap in terms of materials and production, and are recyclable; the cells are very thin and take little energy to make; they are flexible, durable and light; they are colourful; and they are easy to produce over large areas. However, they have low efficiency and short lifetimes compared to silicon-based solar cells. Inorganic solar cells are costly in terms of materials and production, and are not recyclable; much energy is needed to make thin-layer cells; they are rigid and not durable; they are dark grey materials with a dark blue to black coating; and their production is complicated and difficult to scale to large areas. However, they have a good light absorption rate, better efficiency and longer lifetimes.
Chapter 6
Conclusion
Organic solar cells can be an alternative to silicon-based solar cells, with many interesting applications. They can be fabricated into everyday materials and equipment with low cost technology. Efficiency and stability remain areas to be addressed in future in order to achieve good power conversion.

## The Arguments For And Against Market Efficiency Finance Essay

Financial markets are mechanisms (formal and informal) that allow people to buy and sell financial securities, commodities and other items of value at a price. For decades now, these markets have contributed positively to the development of national economies, but their continuing efficiency has been debated by scholars. One such review is Eugene Fama (1970), which supports the assertion that financial markets are “efficient” (that is, markets in which prices always fully reflect available information).


The Efficient Market Hypothesis (EMH) views the prices of securities in financial markets as fully reflecting all available information. This theory of efficient capital markets is supported by the academic field of finance. However, the validity of the hypothesis has been questioned by critics in recent years, and the EMH is one of the most hotly contested propositions in all of the social sciences. Even after several decades of research and literally thousands of published articles on the topic, economists have not yet reached a consensus about whether financial markets are efficient.
This essay comprises three sections. Section 2 is a review of market efficiency: a brief history of market efficiency, the various forms of market efficiency, and empirical tests for market efficiency are discussed, followed by criticisms of the EMH and behavioural finance. Section 3 concludes this work.
REVIEW OF MARKET EFFICIENCY
The concept of market efficiency is employed by finance and economics professionals. Fama (1970) provides a comprehensive review of the theory and evidence on market efficiency, proceeding from theory to empirical work. He noted that most of the empirical work preceded development of the theory.
2.1 Brief History of Market Efficiency
The Efficient Market Hypothesis (EMH) was first expressed by Louis Bachelier, a French mathematician, in his PhD thesis in 1900. “In his opening paragraph, Bachelier recognizes that past, present and even discounted future events are reflected in market price, but often show no apparent relation to price changes. This recognition of the informational efficiency of the market leads Bachelier to continue in his opening paragraph, that if market, in effect does not predict its fluctuations, it does assess them as being more or less likely, and this likelihood can be evaluated mathematically” (Dimson and Mussavian 1998, p.92). Further research by Cowles and Jones in the 1930s and 1940s on stock prices showed that investors were unable to outperform the market. Both studies followed the same principle of the random walk model. However, all of these earlier studies were largely ignored at the time.
The EMH was first given form by the works of two individuals in the 1960s: Eugene Fama and Paul Samuelson, who independently developed the same notion of market efficiency from different lines of research. Samuelson's (1965) contribution is summarized by the title of his article, “Proof that Properly Anticipated Prices Fluctuate Randomly”. The EMH was developed as an academic concept of study by Professor Eugene Fama at the University of Chicago Booth School of Business in the early 1960s. It was widely accepted until recent decades, when empirical analyses by scholars began consistently finding problems with the hypothesis. These anomalies will be addressed in section 2.4 of this work.
Forms of Market Efficiency
In 1970, Fama published a review of the theory and the evidence for the hypothesis. Included in his paper were the various forms of financial market efficiency: weak, semi-strong and strong forms. Empirical reviews were also carried out on the various forms of market efficiency.
Weak Form Efficiency
The weak form hypothesis holds that market prices fully reflect all information inferred from past price changes. Future stock prices cannot be predicted by analyzing prices from the past. This form of market efficiency opposes technical analysis, which involves studying past stock price data and searching for patterns such as trends and regular cycles. Future price movements follow a random walk and are determined entirely by information not already contained in the price series.
Semi-Strong Form Efficiency
In this form, scholars believe that market prices reflect not only information implied by historic changes but also other publicly available information relevant to a company’s security. It implies that price of securities rapidly adjust to publicly available information such that no excess returns can be earned by trading on that information. Semi-strong efficiency asserts that neither technical analysis nor fundamental analysis will be able to produce excess returns for an investor.
Strong Form Efficiency
Dimson and Mussavian (1998) view this form of market efficiency as asserting that information known to any participant is reflected in market prices. Market prices reflect all available information, including information available to company insiders, and no one can earn excess returns. If there are legal barriers to private information becoming public, as with insider trading laws, strong form market efficiency is impossible, except in cases where such laws are universally ignored.
Empirical Tests for Market Efficiency
Test of weak form efficiency
Tests for a random walk have been conducted as tests of weak form efficiency. As explained earlier, the idea of weak form efficiency is that the best forecast of a security's future price is its current price. Past price movements are not useful for predicting future prices.
The first statement and test of the Random Walk Hypothesis (RWH) was that of Bachelier, a French mathematician, in his 1900 PhD thesis, “The Theory of Speculation”. He recognized that past, present and future events are reflected in market prices, concluding that commodity prices fluctuate randomly. Cowles and Jones (1937) tested the RWH. In their study, they compared the frequency of “sequences” and “reversals” in past stock returns, where the former are pairs of consecutive returns with the same sign and the latter are pairs of consecutive returns with opposite signs. Their article suggested that professional investors were in general unable to outperform the market.
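The Cowles and Jones comparison can be sketched in a few lines of Python. Under a symmetric random walk, sequences and reversals should occur about equally often, so their ratio should be near one; the simulated returns below are an assumed stand-in for real price data.

```python
import random

def cowles_jones_ratio(returns):
    """Count 'sequences' (consecutive returns with the same sign) and
    'reversals' (consecutive returns with opposite signs), and return
    their ratio. Under a symmetric random walk the ratio is close to 1."""
    seq = rev = 0
    for prev, cur in zip(returns, returns[1:]):
        if prev * cur > 0:
            seq += 1
        elif prev * cur < 0:
            rev += 1  # pairs containing a zero return are ignored
    return seq / rev if rev else float("inf")

# Simulated i.i.d. returns stand in for real stock data (an assumption).
random.seed(0)
simulated = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(f"Cowles-Jones ratio on simulated returns: {cowles_jones_ratio(simulated):.3f}")
```

A ratio well above one would indicate persistent trends (sequences dominating), which is the kind of predictability the random walk model rules out.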
Kendall (1953) examined 22 UK stock and commodity price series using statistical analysis. He found that price changes from one period to the next, observed at fairly close intervals, were random; the resulting data behaved like wandering series. According to Dimson and Mussavian (1998), the near-zero serial correlation of price changes was an observation that appeared inconsistent with the views of economists. These empirical observations came to be labeled “the Random Walk Model”.
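Kendall's near-zero serial correlation can likewise be illustrated: for independent price changes, the lag-1 autocorrelation estimate should be close to zero. The series below is simulated and stands in for the UK stock and commodity data (an assumption for illustration).

```python
import random

def lag1_autocorrelation(xs):
    """Sample autocorrelation of a series at lag 1: the covariance of
    consecutive values divided by the series variance."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

# Simulated independent price changes stand in for real market data.
random.seed(1)
changes = [random.gauss(0.0, 1.0) for _ in range(50_000)]
print(f"lag-1 serial correlation: {lag1_autocorrelation(changes):.4f}")
```

For 50,000 independent observations the estimate is within sampling noise of zero, matching the "near-zero serial correlation" that Kendall reported for real price series.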
Osborne (1959) analyzed US stock prices data, applying methods of statistical mechanics to the stock market, with a detailed analysis of stock price fluctuation. His research showed that common stock prices have properties which are similar to the movement of molecules. His article indicated support for the RWH.
The random walk model emerged as a prominent theory in the mid-1960s. In 1964 Cootner published his papers on the topic, while Fama (1965b) published his dissertation arguing for the random walk hypothesis. Fama reviewed the existing literature on stock price behaviour, examined the distribution and serial dependence of stock market returns, and concluded that there is strong evidence in favour of the random walk model.
Test of Semi-Strong Efficiency
In testing for semi-strong market efficiency, the adjustments to previously unknown news must be of reasonable size and instantaneous. Consistent upward or downward adjustments after the initial change must be looked for; if such adjustments exist, they would suggest that investors had interpreted the information in a biased and hence inefficient manner.
Fama et al. (1969) tested the speed of adjustment of stock prices to new information. The study provided evidence on the reaction of share prices to stock split and earnings announcements. The market appears to anticipate the information, and most of the adjustment is completed before the event is revealed to the market. Once the news is released, the remaining price adjustment is rapid and accurate. The Fama et al. study concludes that “the evidence indicates that on the average the market judgments concerning the information implication of a split are fully reflected in the price at least by the end of the split month but most probably almost immediately after the announcement date” (p.20).
In Jensen (1969), a sample consisting of the portfolios of 115 open-end mutual funds was used to statistically test for evidence in support of a semi-strong efficient market. The rationale was to address the following questions: (1) Did the mutual funds on average provide investors with returns greater than, less than, or equal to the returns implied by their level of systematic risk and the capital asset pricing model? (2) Did the funds in general provide investors with efficient portfolios? The Jensen study concludes that current prices of securities completely capture the effects of all currently available information. Therefore, attempts by mutual fund providers to analyze past information more thoroughly have not resulted in increased returns.
However, on the contrary, a recent study by Asbell and Bacon (2010) tested the effects of announcing insider purchases on the stock price’s risk-adjusted rate of return for a randomly selected sample of 25 firms on November 26, 2008. These stocks were traded on the NYSE or NASDAQ. Statistical tests for significance were conducted, and the results show a slightly positive reaction prior to the announcement and a significant positive reaction after the announcement. Their findings fail to support efficient market theory at the semi-strong form level as documented by Fama (1970). “Specifically, for this study the announcement of insider purchases is viewed as a mixed signal, no significant insider trading before the purchase date, but a significant upwards trend after the purchase date. Investors appear to receive the insider purchase news as an opportunity to buy and gain in the future from their investments. Evidence here suggests no sign of insider trading prior to the gain in the announcement date. The market’s positive reaction to the announcement suggests that the company and the stockholders have nothing to fear, even though the results test the strength of market efficiency” (Asbell and Bacon 2010, p.180).
Test of Strong Form Efficiency
The principle in testing for strong form efficiency is that a market needs to exist where investors cannot consistently earn excess returns over a long period of time. Even if some managers are observed to consistently beat the market, it is believed that no refutation even of strong form efficiency follows: with hundreds of thousands of fund managers worldwide, even a normal distribution of returns (as efficiency predicts) should be expected to produce some “star” performers.
Maloney and Mulherin (2003) provide a test of strong form market efficiency based on how quickly and accurately the stock market processed the implications of the space shuttle crash of January 28, 1986. Although information about the Challenger crash was not available to the public until 11:47 am, there was considerable movement in the stocks of the four firms involved prior to this announcement. The study shows the speed and manner in which Morton Thiokol was distinguished from the other three firms as a possible cause of the crash. The existence of prior knowledge about the O-ring problem associated with the space shuttle programme suggested that investors who were aware of this private information facilitated the price discovery process on the day of the explosion. The price discovery process was not attributed to the informed traders, though some segment of the market quickly reacted to the news of the disaster.
Further evidence from the study revealed that there was no abnormal volume or stock price movements in Morton Thiokol on days of prior shuttle launches. Also, there was no abnormal short interest in Morton Thiokol on the days of previous launches, nor were there any short sales on the day of the explosion prior to the launch time. The Challenger case study shows that the information processed by the market participants is not simply some linear combination of private and public components but often complex and can produce complicated price patterns in which the relation between information arrival and price discovery is not always direct.
2.4 The Efficient Market Hypothesis and Its Critics
The efficient market hypothesis was widely accepted by academic financial economists decades ago. However, this theory has become less universally accepted and more debated by scholars in recent years. Some of the criticisms of market efficiency have centred on the following: the size effect, seasonal and day-of-the-week effects, excess volatility, short-term effects and long-run return reversal, and stock market crashes. These criticisms of the efficient market hypothesis, and the belief that stock market prices are partially predictable, will now be analyzed below.
The “size effect” is one anomaly found by critics. Empirical studies such as Banz (1981) and Reinganum (1981) showed that small-capitalization firms on the New York Stock Exchange (NYSE) earned higher average returns than predicted. There is a tendency for small company stocks to generate larger returns than those of larger company stocks over long periods of time. It is reasonable to suggest that one should rather be interested in the extent to which the higher returns of small companies represent a predictable pattern that allows investors to make excess profits. “If the beta measure of systematic risk from the Capital Asset Pricing Model is accepted as the correct risk measurement statistic, the size effect can be interpreted as indicating an anomaly and a market inefficiency, because using this measure, portfolios consisting of smaller stocks have excess risk-adjusted returns” (Malkiel 2003, p.17). The Fama and French (1992) study shows that the average relationship between beta and return during the 1963-1990 period was flat, which is not consistent with the upward-sloping relationship predicted by the CAPM. In one of their exhibits, within the size deciles, the relationship between beta and return continues to be flat, suggesting that size may be a far better proxy for risk than beta. Their findings should not be interpreted as indicating that markets are inefficient.
However, it seems that the small-firm anomaly has disappeared since the initial publication of the papers that discovered it. The risk premium for small-capitalization stocks has been much smaller (with practically no gain from holding smaller stocks) since 1983 than it was during the period 1926-1982.
The “Seasonal and Day-of-the-week pattern” is another anomaly raised by critics of the efficient market hypothesis. Some research has found January to be a very unusual month, as stock market returns are usually very high during the first two weeks of the year. This has been particularly evident for stocks of small companies, although the so-called “January effect” seems to have diminished in recent years for shares of large companies. There also appear to be a number of day-of-the-week effects, as the French (1980) study shows a significant Monday effect.
In line with the study by Malkiel (2003), the problem with these predictable patterns or anomalies (seasonal effects) is that they are not dependable from period to period. Moreover, the non-random effects (even if they were dependable) are very small relative to the transaction costs involved in trying to exploit them. Were investors to take advantage of the abnormal returns in January by buying stocks in December, the abnormal returns would be eliminated.
“Excess Volatility” is another anomaly considered here. Critics believe the stock market appears to display excessive volatility (that is, fluctuations in stock prices may be much greater than is warranted by fluctuations in their fundamental value). Criticizing the efficient market hypothesis on the basis of volatile asset prices, however, looks conceptually wrong. This argument supports the review by Szafarz (2010), which asserts that efficiency is about rationality and information, not about stability. The study shows that variance bounds and stability are not part of market efficiency and that, therefore, one should not reject market efficiency on the basis of excess volatility tests. Moreover, speculative bubbles are compatible with rational valuation and hence constitute a possible outcome of efficient market dynamics.
“Short-run Effects and Long-run Return Reversals” form another argument against market efficiency. Some reviews show that positive serial correlation exists when stock returns are measured over the short run (periods of days or weeks), but many other studies have shown evidence of negative serial correlation (return reversal) over longer horizons. Return reversal can also be termed mean reversion: stocks that have done poorly in the past are more likely to do well in the future because there will be a predictable positive change in the future price, suggesting that stock prices are not a random walk.
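The link between mean reversion and negative serial correlation can be illustrated with a toy simulation. The model below is an assumption made for illustration only (a price pulled toward a fixed fundamental value of 100 with an arbitrary reversion speed of 0.2), not a model from the studies discussed above:

```python
import random

random.seed(2)
# Assumed toy model: price mean-reverts toward a fixed fundamental value.
price, fundamental, prices = 100.0, 100.0, []
for _ in range(5_000):
    price += 0.2 * (fundamental - price) + random.gauss(0.0, 1.0)
    prices.append(price)

# First-order serial correlation of the simulated returns.
returns = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
mean = sum(returns) / len(returns)
cov = sum((returns[i] - mean) * (returns[i + 1] - mean)
          for i in range(len(returns) - 1))
var = sum((r - mean) ** 2 for r in returns)
print(cov / var)  # negative: poor past returns predict better future returns
```

The negative autocorrelation is exactly the return-reversal pattern described above: after a price falls below its fundamental value, the expected future change is positive.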
Despite all this, the finding of mean reversion is not uniform, as it is weaker in some periods than in others. The strongest empirical results are found in periods of the Great Depression. “There was a statistically strong pattern of return reversal, but not one that implied inefficiency in the market that would enable investors to make excess return” (Malkiel 2003, p.11). In line with this, one can believe that this forecastability is due to overreaction in stock market prices. Behavioural economists attribute the imperfections in financial markets to a combination of cognitive biases such as overreaction, overconfidence, representativeness bias, information bias and various other predictable human errors in reasoning and information processing. Of course, it is impossible to rule out the existence of behavioural or psychological influences on stock market pricing.
2.5 Behavioural Finance
A new breed of behavioural economists attributes the imperfections in financial markets to psychology and the behavioural elements of stock-price determination. This approach is a more promising alternative to the efficient market hypothesis. Behavioural finance applies concepts from other social sciences to understand the behaviour of stock prices. Psychologists believe that people are loss averse: they are more unhappy when they suffer losses than they are happy when they make equivalent gains. Also, people tend to be overconfident in their own judgment. “As a result, it is no surprise that investors tend to believe that they are smarter than other investors and so are willing to assume that the market typically does not get it right and therefore trade on their beliefs” (Mishkin & Eakins 2009, p.142). Overconfidence and social contagion can provide an explanation for speculative bubbles in stock markets.
However, in line with the defenders of efficient market hypothesis, one can say that behavioural finance strengthens the case for EMH in that it highlights biases in individuals and committees, not competitive markets. Behavioural psychologists, mutual fund managers and economists are all drawn from the human population and are therefore subject to the biases that behaviouralists showcase.
CONCLUSION
The concept of the EMH asserts that the current market price of a security instantly and fully reflects all available information. Investors cannot consistently achieve returns in excess of average market returns on a risk-adjusted basis, given the information publicly available at the time. The EMH has been applied extensively to theoretical models and empirical studies of financial securities prices, generating considerable controversy as well as fundamental insight into the price discovery process. Some of the arguments against the EMH involve size effects, seasonal effects, excess volatility, mean reversion and market overreaction. Some of these anomalies pertaining to market efficiency can be explained by the impact of transaction costs, that is, by the cost-benefit analysis made by those willing to incur the cost of acquiring valuable information in order to trade on it. There is also no clear evidence that these anomalies seriously challenge the EMH.
Psychologists and behavioural economists have recently argued that the EMH is based on counterfactual assumptions regarding human behaviour. One cannot rule out the existence of behavioural or psychological influences on stock market pricing. Behavioural finance should nonetheless be seen as a case that strengthens the EMH, as price signals in financial markets are far less subject to the individual biases highlighted by behavioural finance.