A Self-Adaptive FPGA Platform for Application-Level Network Security

A Research Report for the DSCI 60998 Capstone Project in Digital Sciences
Vamsi Krishna Chanupati, Ramya Ganguri
Kent State University, Fall Semester 2016

Wireless communication networks are vulnerable to attack, and the extent of these attacks is rising day by day. The proposed work surveys the growth of such attacks in everyday life and a counter-method to minimize their impact. Several studies show that new and more robust security methods need to be developed that address information safety, confidentiality, authentication, and non-repudiation in wireless sensor networks. The proposed study presents a self-adaptive FPGA platform for application-level network security, using an application-independent core that processes the IP, UDP, and TCP protocols as well as ARP and ICMP messages. The modified accelerated cipher design uses data-dependent transformations and can be applied in fast hardware, firmware, software, and WSN encryption systems. The approach presented shows that ciphers built this way are less likely to succumb to differential cryptanalysis than currently popular WSN ciphers such as DES and Camellia. This report reviews existing FPGA approaches to application-level network security and proposes a new FPGA-based algorithm.
Keywords: FPGA, WSN encryption, computer aided systems design.

The Purpose of the Study (Statement of the Problem)
With the growing dependence of business, government, and private users on the Internet, the demand for high-speed data transfer has surged. On a technical level, this has been met by improved transmission technologies: 10 Gb/s Ethernet is already in widespread practical use at the ISP and data-center levels, and standards for 40 Gb/s and 100 Gb/s speeds have already been formulated. The data volume transferred at these speeds presents a serious challenge to current security measures, especially when going beyond simple firewalls to payload inspection or even application-level protocols.
Wireless sensor networks are now expected to operate at these speeds, and it is very difficult for conventional programmable processors to keep up. A wireless sensor network (WSN) is a collection of spatially distributed, independent devices that gather data by measuring physical or environmental conditions. Among the conditions measured are temperature, pressure, humidity, sound, position, lighting, and usage data. These readings are passed through the network as data, classified and organized, and later delivered to the end user. WSNs are used for many applications such as power-grid control, industrial process monitoring and control, and human health monitoring.
Generally, these WSNs require a considerable amount of energy to operate, yet reducing the power needed by the system extends the lifespan of the sensor devices and leaves headroom for battery-powered applications. As an alternative, both software-programmed dedicated network processing units and hardware accelerators for these operations have been proposed. Using reconfigurable logic for the latter permits greater flexibility than hardwiring the functionality, while still allowing full-speed operation. This research gives a detailed description of the modern FPGA (Field Programmable Gate Array) and examines the security-level standards in existing FPGA algorithms.

The Research Objectives
The objectives of this research are the study of wireless networks, the analysis of their security issues, and the design of a new security algorithm.

The first step involves the study of existing techniques in wireless network security. A review of the existing literature reveals that although several researchers have proposed wireless sensor network security techniques, the existing models do not consider the use of Feistel ciphers. The second step is the design of the algorithm model: the model to be proposed uses a self-adaptive FPGA (Field Programmable Gate Array) for application-level network security.

A new FPGA-based algorithm is designed to reduce the extent of attacks on application-level network security. The work shows that new and more stable security algorithms need to be developed to provide information safety and confidentiality in networks, which is useful in minimizing attacks on application-level networks. There are several other indirect applications of the proposed model.
Literature Review
A survey on FPGAs for network security presented by Muhlbach (2010) describes an implementation of an intrusion detection system (IDS) on an FPGA. Various studies have analyzed string-matching circuits for IDS. One strategy generates a string-matching circuit that offers expandability of the processed data width and drastically reduced resource requirements; this circuit is used for packet filtering in an intrusion prevention system (IPS). A tool for automatically generating the Verilog HDL source code of the IDS circuit from a rule set was also developed. Using the FPGA and the IDS circuit generator, the framework can update the matching logic in response to new intrusions and attacks. The IDS circuit on an FPGA board has been evaluated, and its accuracy and throughput measured.

Several methods describe the use of a Simple Network Intrusion Detection System (SNIDS); a detailed explanation is given by Flynn, A. (2009) of this basic hardware network intrusion detection system targeting FPGA devices. SNIDS snoops the traffic on the bus connecting the processor to the Ethernet peripheral core and identifies the Ethernet frames that match a predefined set of patterns indicating malicious or prohibited content. SNIDS builds on a recently proposed architecture for high-throughput string matching. The method implements SNIDS using the Xilinx CAD (Computer Aided Design) tools and tests its operation on an FPGA device. Moreover, software tools enable automatic generation of a SNIDS core matching a predefined set of patterns.
The authors demonstrate the use of SNIDS within a practical FPGA system-on-chip connected to a small network.
Chan et al. showed that the PIKE schemes involve lower memory storage requirements than random key distribution while requiring comparable communication overheads.
PIKE is currently the only symmetric-key predistribution scheme that scales sub-linearly in both communication overhead per node and memory overhead per node while being resilient to an adversary capable of undetected node compromise. PIKE enjoys a uniform communication pattern for key establishment, which is difficult for an attacker to disrupt. The distributed nature of PIKE also presents no single point of failure to attack, providing resilience against targeted attacks.
There are certain challenges to overcome while designing an FPGA algorithm for application-level network security; a detailed explanation and analysis is given in (Koch Cho., 2007). The first and most difficult challenge is designing an FPGA-based algorithm for network security: the system that handles and analyzes such data must be extremely fast and compatible. The existing hardware can perform many operations on the data; however, special computing systems must be designed to process larger data volumes in shorter time. Another challenge in this area is securing data generated by multiple sources of different natures. The data needs to be pre-processed before being analyzed for pattern discovery, and the data generated is not necessarily complete because of the different usage patterns of each device. In addition, this capability is used to predict the events of a device and manage every other device and network connected to it for efficiency, performance, and reliability.

Processing capabilities in wireless network nodes are ordinarily based on Digital Signal Processors (DSPs) or programmable microcontrollers. However, the use of Field Programmable Gate Arrays (FPGAs) provides specific hardware technology that can also be reprogrammed, yielding a reconfigurable wireless network framework. Partial reconfiguration is the process of modifying only regions of the logic implemented in an FPGA; the corresponding circuit can thus be adapted to change its functionality and perform different tasks. This adaptability permits the implementation of complex applications through partial reconfigurability with low power consumption, an important consideration when FPGAs are applied in wireless network systems. Nowadays, wireless network systems are required to provide increasing accuracy, resolution, and precision while decreasing in size and consumption. Moreover, FPGAs and their partial reconfigurability allow wireless network systems to be furnished with additional properties such as high security, processing capabilities, interfaces, testing, and configuration.
The present capabilities of FPGA architectures permit not only the implementation of simple combinational and sequential circuits, but also the integration of high-level soft processors.
The use of integrated processors holds many notable advantages for the designer, including customization, obsolescence mitigation, component and cost reduction, and hardware acceleration. FPGA embedded processors use FPGA logic elements to build internal memory units, data and control buses, and internal and external peripheral and memory controllers. Both Xilinx and Altera provide FPGA devices that embed physical core processors built inside the FPGA chip; these are called "hard" processors. Such is the case for the PowerPC™ 405 inside Virtex-4 FPGA devices from Xilinx and the ARM922T™ inside Excalibur FPGA devices from Altera. Soft processors, by contrast, are microprocessors whose architecture is entirely built using a hardware description language (HDL). The proposed research uses an efficient self-adaptive FPGA platform for application-level network security.

Research Design
Description of the Research Design
Wireless communication is one of the latest and most revolutionary technologies of the last decade. It intends to connect every device on the planet wirelessly; this number could be billions or even trillions. These communication networks have higher transmission speeds and are capable of handling the entire load. The security of such a wireless communication network plays an important role in keeping it robust yet flexible.
Network security is a basic issue for the application of new technologies in every aspect of society and the economy. It is especially critical for e-commerce, where providing security for transactions is essential. The future threats to network security remain severe: per a Computer Security Institute (CSI) survey, companies reported average annual losses of $168,000 in 2006 and $350,424 in 2007, up sharply from the previous year (Hao Chen & Yu Chen, 2010).


This data reflects both the serious state of network security and individuals' stake in the issue. Targeted attacks have become a trend in network security: a targeted attack is malware aimed at a particular segment, and around 20% of the respondents of the CSI survey suffered this sort of attack, which is becoming more prominent than at any time in recent memory.

Among the notorious targeted attacks, the Denial-of-Service (DoS) attack is the most threatening to network security. Since 2000, DoS attacks have grown quickly and have been one of the major dangers to the availability and reliability of network-based services. Securing the network infrastructure has become a high priority because of its fundamental importance for data protection, e-commerce, and even national security (Hao Chen & Yu Chen, 2010). Data security principally concentrates on information, data protection, and encryption. The following are some of the requirements for a successful security application.

Real-Time Protection: It is essential for an effective security mechanism to process data at line speed at moderate cost. All traffic is subjected to examination in a timely manner, and alerts are generated precisely when unusual circumstances occur.
Flexible Updating: Constantly evolving malicious attacks require security solutions to be adaptable in order to remain effective. The update could be to the knowledge databases (signatures) that the security analysis relies on, a new detection solution, or even the framework itself. Updating an application will frequently be more practical than replacing it.
Well-Controlled Scalability: Scalability is another basic concern for practical deployment. Many reported approaches work well on a small-scale research network; however, their performance deteriorates quickly when deployed to practical-scale networks, such as campus-level networks or larger. The main reason for this is that framework complexity generally increases at a much greater rate than the network itself.

In contrast to software implementations, application-oriented and highly parallel design principles make hardware implementations superior in terms of performance. For example, for Transmission Control Protocol (TCP) stream reassembly and state tracking, an Application-Specific Integrated Circuit (ASIC) could analyze a single TCP stream at 3.2 Gbps (M. Necker, D. Contis 2002). An FPGA-based TCP processor created by the Open Network Laboratory (ONL) was capable of monitoring 8 million bidirectional TCP streams at the OC-48 (2.5 Gbps) data rate. ASIC-based devices not only have the advantage of high performance, achieved through circuit design dedicated to the task, but also the potential for low unit cost. However, the enormous non-recurring engineering investment is substantially amortized only when ASIC devices reach sufficiently high-volume production. Unfortunately, this may not be applicable to network security applications: constantly evolving standards and requirements make it unfeasible to manufacture ASIC-based network security applications at such high volume. In addition, custom ASICs offer practically zero reconfigurability, which could be another reason that ASICs have not been widely applied in the network security area.
Reconfigurability is a key prerequisite for the success of hardware-based network security applications, and the availability of reconfigurable hardware has enabled the design of such applications. A reconfigurable device can be considered a hybrid hardware/software platform, since reconfigurability is used to keep the system up to date. FPGAs are the most representative reconfigurable hardware devices. A Field-Programmable Gate Array (FPGA) is a kind of general-purpose, multi-level programmable logic device. At the physical level, logic blocks and programmable interconnections make up the main structure of an FPGA. A logic block usually contains a 4-input look-up table (LUT) and a flip-flop for basic logic operations, while programmable interconnections between blocks permit users to implement multi-level logic. At the design level, a logic circuit diagram or a high-level hardware description language (HDL) such as VHDL or Verilog is used for the programming that specifies how the chip should operate. In the electronics industry it is important to reach the market with new products in the shortest possible time and to reduce the financial risk of implementing new ideas. FPGAs were quickly adopted for the prototyping of new logic designs soon after they were invented in the mid-1980s, owing to their unique flexibility in hardware development. While the performance and size of early FPGAs limited their application, improvements in density and speed have narrowed the performance gap between FPGAs and ASICs, enabling FPGAs to serve not only as fast prototyping devices but also to become essential parts of embedded networks.
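To make the logic-block description above concrete, the following Python sketch (purely illustrative, not any vendor's API) models a 4-input LUT as a 16-entry truth table: the configuration bits select the output for each of the 2^4 input combinations, which is how an FPGA "programs" arbitrary 4-input logic functions.

```python
class Lut4:
    """Model of a 4-input look-up table: 16 configuration bits
    give the output for each of the 2^4 input combinations."""
    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.config = config_bits

    def eval(self, a, b, c, d):
        # The four inputs form an index into the truth table.
        index = (d << 3) | (c << 2) | (b << 1) | a
        return self.config[index]

# Program the LUT to implement a 4-input AND gate:
# only index 15 (all inputs high) outputs 1.
and4 = Lut4([0] * 15 + [1])
print(and4.eval(1, 1, 1, 1))  # 1
print(and4.eval(1, 0, 1, 1))  # 0
```

Reprogramming the same block is just a matter of loading different configuration bits, which is the essence of the reconfigurability discussed above.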

Description of the Subject Matter(and/or), Procedures, Tasks
Current FPGAs share the performance advantage of ASICs because they can execute parallel logic functions in hardware (Flynn, A., 2009). They also share a portion of the flexibility of embedded network processors in that they can be dynamically reconfigured. The architecture of the reconfigurable network platform is called NetStage/DPR. The application-independent core handles the IP, UDP, and TCP protocols as well as ARP and ICMP messages. It has a hierarchical design that permits the quick addition of new protocol modules at all layers of the networking stack.
From Figure 1, handlers are connected to the core using two shared buses with a throughput of 20 Gb/s each, one for the transmit and one for the receive side. Buffers decouple the different processing stages and limit the impact of a handler on the processing flow. The interface between the buffers and the actual handlers acts as a boundary for using dynamic partial reconfiguration to swap the handlers in and out as required.
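The decoupling role of the buffers can be sketched as a bounded queue between the shared bus and a handler. The class and capacity below are hypothetical illustrations, not taken from the platform itself; they only show the drop-when-full behavior described later for ingress buffers.

```python
from collections import deque

class HandlerBuffer:
    """Bounded ingress buffer decoupling the shared bus from a handler.
    If the buffer is full the packet is dropped; because handlers run
    at line rate, this should be rare in normal operation."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def push(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1
            return False
        self.queue.append(packet)
        return True

    def pop(self):
        # The handler drains packets at its own pace.
        return self.queue.popleft() if self.queue else None

buf = HandlerBuffer(capacity=2)
print(buf.push("p1"), buf.push("p2"), buf.push("p3"))  # True True False
print(buf.pop())  # p1
```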

All handlers have the same logical and physical interfaces to the core framework. The physical interface comprises the connection to the buffers and control signals such as clock and reset. The handlers communicate with the rest of the framework simply by sending and accepting messages (not necessarily corresponding to real network packets). These messages consist of an internal control header (containing, e.g., commands or state data) and, optionally, the payload of a network packet. In this form, the physical interface can stay identical across all handlers, which greatly simplifies DPR. For the same reason, handlers should also be stateless and use the Global State Memory service of the NetStage core instead (state data then simply becomes part of the messages).
This approach avoids the need to explicitly re-establish state when handlers are reconfigured.
Incoming packets must be routed to the appropriate handler. However, a given handler may be configured onto different parts of the FPGA. We therefore require a dynamic routing table that directs the message-encapsulated payloads to the suitable service module. Our routing table has the standard structure of matching the protocol, socket, and address/netmask data of an incoming packet to find the associated handler, and a single entry can cover an entire subnet. On the transmit side, handlers store outgoing messages in their egress buffers, from which they are picked up by the core for sending. This is done using a simple round-robin scheme, though more complex schemes could of course be added as required. If packets are bound for a handler with a full ingress buffer, they are discarded; however, since most of our current handlers can operate at least at line rate, this will not happen during standard operation. Packets for which a handler is available offline (not yet configured onto the device) are counted before being discarded, eventually triggering configuration of that handler onto the FPGA.
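The lookup just described can be sketched in software. The rules, addresses, and slot numbers below are hypothetical examples; the point is the matching structure: protocol, port, and an address/netmask (subnet) test selecting the responsible handler's slot.

```python
import ipaddress

class RoutingTable:
    """Illustrative model of the dynamic routing table: match an incoming
    packet's (protocol, destination address, port) against rules that may
    cover whole subnets, returning the slot of the responsible handler."""
    def __init__(self):
        self.rules = []  # (protocol, network, port, handler_slot)

    def add_rule(self, protocol, cidr, port, slot):
        self.rules.append((protocol, ipaddress.ip_network(cidr), port, slot))

    def lookup(self, protocol, addr, port):
        address = ipaddress.ip_address(addr)
        for rule_proto, network, rule_port, slot in self.rules:
            if rule_proto == protocol and rule_port == port and address in network:
                return slot
        return None  # no fitting handler: packet is discarded

table = RoutingTable()
table.add_rule("udp", "10.0.0.0/24", 53, slot=3)   # one entry covers a subnet
print(table.lookup("udp", "10.0.0.17", 53))  # 3
print(table.lookup("tcp", "10.0.0.17", 53))  # None
```

In hardware this match is performed by a CAM rather than a linear scan, but the rule structure is the same.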

This technique does not guarantee the delivery of all packets but represents a good trade-off between speed and complexity. In the case that no fitting handler exists, packets are discarded immediately.
From Figure 2, the system can operate autonomously from a host PC. A dedicated hardware unit is used as the controller instead of an embedded soft-core processor, since the latter would not be able to achieve the required high reconfiguration speeds. Because of the storage requirements, the handler bitstreams are stored in an external SDRAM memory and fed into the on-chip configuration access port (ICAP) using fast transfers. The baseline implementation requires separate bitstreams for each handler, corresponding to the physical locations of the partially reconfigurable regions. To this end, the SDRAM is organized in clusters, which hold the various versions of every handler, addressed by the handler ID and the target slot number. For a simpler implementation we set the cluster size to the typical size of a handler's bitstream. In a more refined implementation, we could use a single bitstream for each handler, which would then be relocated to the target slot at run time, with bitstream compression techniques used to further reduce its size.
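The cluster layout makes bitstream addressing trivial: the SDRAM start address of any partial bitstream follows directly from the handler ID and target slot number. The sketch below illustrates this; the cluster size and slot count are assumptions for the example, not figures from the source.

```python
# Hypothetical SDRAM bitstream layout: one fixed-size cluster per
# (handler, slot) pair, so the start address of any partial bitstream
# is computed directly from handler ID and target slot number.
NUM_SLOTS = 16
CLUSTER_BYTES = 64 * 1024  # assumed cluster size, for illustration only

def bitstream_address(handler_id, slot):
    cluster_index = handler_id * NUM_SLOTS + slot
    return cluster_index * CLUSTER_BYTES

print(hex(bitstream_address(0, 0)))  # 0x0
print(hex(bitstream_address(2, 5)))  # cluster 37 -> 0x250000
```

The cost of this simplicity is duplicated storage (one bitstream version per possible slot), which is exactly what the single relocatable bitstream mentioned above would avoid.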
A rule-based adaptation mechanism is implemented in the Adaptation Engine, which interprets packet statistics — specifically, packets received at the socket level within a time interval.
These statistics are kept only for packets for which a handler is actually available. The design aims for fast rule queries and statistics updates (a few cycles) even at high packet rates (10 Gb/s) and small packet sizes.
Since they depend on the same data structures, the Packet Forwarder and the Adaptation Engine are realized in a common hardware module. It contains the logic for tracking statistics, interpreting rules, and managing handler-slot assignments. Dual-ported Block RAMs are used to realize the 1024-entry Rule Table and the 512-entry Counter Table.

Hence, queries to determine the slot of the destination handler for an incoming packet can be performed in parallel with the rule-management and counter processes. For area efficiency, the CAM is shared between these functions. However, since the throughput of the framework is directly affected by packet-forwarding performance, the corresponding slot-routing queries always have priority when accessing the CAM. Since the CAM is used only briefly for each operation, it does not become a bottleneck. The Packet Forwarder logic places the destination handler slot for an incoming packet in the output queue. The forwarding lookup is pipelined: by beginning the process once the protocol, IP address, and port number have been received, the looked-up destination slot will generally be available when it is actually required (once the packet has passed through the entire core protocol processing). Since packets are neither reordered nor dropped before the handler stage, simple queues suffice for buffering lookup results here. Since not every incoming packet should be counted (e.g., TCP ACKs should be ignored), the Adaptation Engine uses a separate port to update the Counter Table only for specific packets. The Rule Management subsystem accepts commands from the management network interface through a separate FIFO, and has an internal FIFO that tracks available row addresses in the Rule Table.
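The statistics path of the Adaptation Engine can be sketched as follows. This is an assumption-laden software analogy (the class name, socket tuples, and filtering flag are invented for illustration): count packets per socket over an interval, skip packets that should not be counted such as bare TCP ACKs, and report the busiest sockets as candidates for being configured online.

```python
from collections import Counter

class AdaptationEngine:
    """Illustrative model of the per-socket statistics the Adaptation
    Engine keeps over a time interval."""
    def __init__(self):
        self.counters = Counter()

    def observe(self, socket, is_bare_ack=False):
        if is_bare_ack:
            return  # e.g., TCP ACKs are ignored, as in the hardware
        self.counters[socket] += 1

    def busiest(self, n=1):
        # Top-traffic sockets: candidates for bringing a handler online.
        return self.counters.most_common(n)

engine = AdaptationEngine()
engine.observe(("10.0.0.1", 80))
engine.observe(("10.0.0.1", 80))
engine.observe(("10.0.0.2", 25), is_bare_ack=True)
print(engine.busiest())  # [(('10.0.0.1', 80), 2)]
```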
From Figure 3, the FPGA regions for every slot have been sized at 1920 LUTs (roughly twice the average module size). All slots have equal area, which is reasonable because the module sizes are relatively close. This simplifies the adaptation process, since otherwise we would need to perform multiple scans when selecting online/offline candidates (one for each distinct slot-size class). We report the dynamic partial reconfiguration times and the resulting number of possible reconfigurations per second for the 100 MHz ICAP frequency we use. We show the times not only for the 1920-LUT slots we have used but also for both smaller and larger choices (the best size is application-dependent). In general, LUTs are not scarce when realizing larger slots; however, the limited number of available Block RAMs can constrain a design to fewer than 16 slots if a slot requires dedicated Block RAMs. Considering the complete adaptation operation, the time required is dominated by the actual reconfiguration time, as ICAP throughput is the limiting factor. All other processes are significantly faster. For instance, the procedure to scan all 512 Counter Table entries to locate the next candidates requires only around 3 µs at a 156.25 MHz clock speed, a negligible time relative to the reconfiguration time (Hori & Satoh, 2008).
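Since ICAP throughput dominates the adaptation time, a back-of-the-envelope model is simply bitstream size divided by ICAP bandwidth. The 100 MHz ICAP clock is from the text; the 32-bit port width and the 64 KiB bitstream size below are assumptions made for the sake of the example.

```python
# Rough model of partial reconfiguration time: ICAP throughput is the
# limiting factor, so time ≈ bitstream size / ICAP bandwidth.
ICAP_HZ = 100e6           # 100 MHz ICAP clock (from the text)
ICAP_BYTES_PER_CYCLE = 4  # assumed 32-bit ICAP port

def reconfig_time_seconds(bitstream_bytes):
    return bitstream_bytes / (ICAP_HZ * ICAP_BYTES_PER_CYCLE)

t = reconfig_time_seconds(64 * 1024)  # hypothetical 64 KiB partial bitstream
print(round(t * 1e6, 1), "us")            # 163.8 us
print(int(1 / t), "reconfigurations/s")   # 6103
```

Under these assumptions a handler swap costs on the order of hundreds of microseconds, which is consistent with the text's observation that the 3 µs candidate scan is negligible by comparison.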

Possible Errors and Their Solutions
The following are possible attacks on FPGAs: tampering threats such as destructive analysis, over- and under-voltage analysis, and timing analysis. Using destructive analysis, each layer of the device is captured to determine its functionality; this process requires expensive equipment and expertise. Timing analysis and over- and under-voltage analysis do not require expensive equipment, but are error-prone, so they are less frequently used to reverse-engineer complex FPGA designs. Also, timing on an FPGA is deterministic, so the time taken from input to output can be determined by passing a signal through a multiplexer.

A self-adaptive FPGA for application-level network security is essential for effective network security (Sascha & Andreas, 2014). Because the Packet Forwarder and Adaptation Engine depend on the same data structures, a common module contains the logic for tracking statistics, interpreting rules, and managing handler-slot assignments, with Block RAMs used to realize the Rule and Counter Tables. Without such adaptation, a static method has very low security, and its security standards can be easily cracked.
(Deng et al., 2006) created INSENS, a secure and intrusion-tolerant routing algorithm for application-level security in wireless sensor networks. Redundant multipath routing improves intrusion tolerance by bypassing malicious nodes, and INSENS operates effectively in the presence of intruders. To address resource constraints, computation on the network nodes is offloaded to resource-rich base stations (e.g., computing routing tables), while low-complexity security techniques are applied (e.g., symmetric-key cryptography and one-way hash functions). The extent of damage inflicted by intruders is further constrained by limiting flooding to the base station and by having the base station authenticate its packets using one-way sequence numbers.
(Kang et al., 2006) investigated the problem of resilient network routing. Even if location data is verified, nodes may still misbehave, for instance by sending an excessive number of packets or by dropping packets. To dynamically avoid untrusted paths and continue routing packets even in the presence of attacks, the proposed scheme uses rate control, packet scheduling, and probabilistic multipath routing combined with trust-based route selection. They examined the proposed approach in detail, outlining effective choices by considering possible attacks, and analyzed the performance of their robust network routing protocol in various situations.

Several algorithms have been proposed by researchers to improve the efficiency of application-level network security; every method has its own merits and demerits. A new method to improve algorithmic efficiency has been proposed in this research by examining all the previous algorithms. The proposed method is expected to be highly efficient relative to existing techniques. The new algorithm also draws on spacecraft network communication standards, upgrading data-transfer processing to higher performance speeds within the available standards.
This research is concept-based and discusses the feasibility of FPGAs in application-level wireless communication networks. The study reviews the existing literature thoroughly and proposes the use of the FPGA as the next step in application-level network security.
The model to be proposed uses a self-adaptive FPGA for application-level network security. A new FPGA-based algorithm is designed to reduce the extent of attacks on application-level network security. The work shows that new and more stable security algorithms need to be developed to provide information safety and confidentiality in networks, which is useful in minimizing attacks on application-level networks.
The applications of the proposed model are numerous: FPGAs support strong network security and are not specific to any field or application. The classifications of applications given here serve better understanding and are not strict research requirements. They benefit users by increasing the safety and security of data in wireless transmission. Performance in network security is assessed based on the extent of vulnerable attacks withstood. The proposed algorithm has not yet been tested; further research is required to implement it on a real-time platform.

Restatement of the Problem
With the growing dependence of business, government, and private users on the Internet, the demand for high-speed data transfer has surged, and the data volumes moved at modern line rates present a serious challenge to current security measures.

Cite This Work
To export a reference to this article please select a referencing stye below:

UKEssays. (November 2018). FPGA Stage for Application-Level Network Security. Retrieved from https://www.ukessays.com/essays/engineering/fpga-stage-applicationlevel-network-4378.php?vref=1
Copy to Clipboard
Reference Copied to Clipboard.

“FPGA Stage for Application-Level Network Security.” ukessays.com. 11 2018. UKEssays. 07 2024 .
Copy to Clipboard
Reference Copied to Clipboard.

“FPGA Stage for Application-Level Network Security.” UKEssays. ukessays.com, November 2018. Web. 18 July 2024. .
Copy to Clipboard
Reference Copied to Clipboard.

UKEssays. November 2018. FPGA Stage for Application-Level Network Security. [online]. Available from: https://www.ukessays.com/essays/engineering/fpga-stage-applicationlevel-network-4378.php?vref=1 [Accessed 18 July 2024].
Copy to Clipboard
Reference Copied to Clipboard.

UKEssays. FPGA Stage for Application-Level Network Security [Internet]. November 2018. [Accessed 18 July 2024]; Available from: https://www.ukessays.com/essays/engineering/fpga-stage-applicationlevel-network-4378.php?vref=1.
Copy to Clipboard
Reference Copied to Clipboard.

{{cite web|last=Answers |first=All |url=https://www.ukessays.com/essays/engineering/fpga-stage-applicationlevel-network-4378.php?vref=1 |title=FPGA Stage for Application-Level Network Security |publisher=UKEssays.com |date=November 2018 |accessdate=18 July 2024 |location=Nottingham, UK}}
Copy to Clipboard
Reference Copied to Clipboard.

All Answers ltd, ‘FPGA Stage for Application-Level Network Security’ (UKEssays.com, July 2024) accessed 18 July 2024
Copy to Clipboard
Reference Copied to Clipboard.

Related Services
View all

DMCA / Removal Request
If you are the original writer of this essay and no longer wish to have your work published on UKEssays.com then please:

Single Stage to Orbit (SSTO) Propulsion System

Several organisations world-wide are studying the technical and commercial feasibility of reusable SSTO launchers. This new class of vehicle appears to offer the tantalising prospect of greatly reduced recurring costs and increased reliability compared to existing expendable vehicles. However, achieving this breakthrough is a difficult task, since the attainment of orbital velocity in a re-entry capable single stage demands extraordinary propulsive performance.
Most studies to date have focused on high-pressure hydrogen/oxygen (H2/O2) rocket engines for the primary propulsion of such vehicles. However, it is the authors' opinion that, despite recent advances in materials technology, such an approach is not destined to succeed, due to the relatively low specific impulse of this type of propulsion. Airbreathing engines offer a possible route forward with their intrinsically higher specific impulse. However, their low thrust/weight ratio, limited Mach number range and high dynamic pressure trajectory have in the past cancelled any theoretical advantage.
Through a design review of the relevant characteristics of both rockets and airbreathing engines, this paper sets out the rationale for the selection of deeply precooled hybrid airbreathing rocket engines for the main propulsion system of SSTO launchers, as exemplified by the SKYLON vehicle [1].
2. Propulsion Candidates
This paper will only consider those engine types which would result in politically and environmentally acceptable vehicles. Therefore engines employing nuclear reactions (eg: onboard fission reactors or external nuclear pulse) and chemical engines with toxic exhausts (eg: fluorine/oxygen) will be excluded.
The candidate engines can be split into two broad groups, namely pure rockets and engines with an airbreathing component. Since none of the airbreathers are capable of accelerating an SSTO vehicle all the way to orbital velocity, a practical vehicle will always have an onboard rocket engine to complete the ascent. Therefore the use of airbreathing has always been proposed within the context of improving the specific impulse of pure rocket propulsion during the initial lower Mach portion of the trajectory.
Airbreathing engines have a much lower thrust/weight ratio than rocket engines (≈10%), which tends to offset the advantage of reduced fuel consumption. Therefore vehicles with airbreathing engines invariably have wings and employ a lifting trajectory in order to reduce the installed thrust requirement and hence the airbreathing engine mass penalty. The combination of wings and airbreathing engines then demands a low, flat trajectory (compared to a ballistic rocket trajectory) in order to maximise the installed performance (i.e. (thrust-drag)/fuel flow). This high dynamic pressure trajectory gives rise to one of the drawbacks of an airbreathing approach, since the airframe heating and loading are increased during the ascent, which ultimately reflects in increased structure mass. However, the absolute level of mass growth depends on the relative severity of the ascent as compared with re-entry, which in turn is mostly dependent on the type of airbreathing engine selected. An additional drawback to the low trajectory is increased drag losses, particularly since the vehicle loiters longer in the lower atmosphere due to the lower acceleration, offset to some extent by the much reduced gravity loss during the rocket-powered ascent.
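The installed-performance figure of merit above, (thrust - drag)/fuel flow, can be expressed as an effective specific impulse. The following is a minimal sketch; all thrust, drag and fuel-flow numbers are illustrative assumptions, not values from this paper:

```python
# Effective (installed) specific impulse: (thrust - drag) / (fuel flow * g0).
# All numbers are illustrative assumptions for a hydrogen-fuelled airbreather.

G0 = 9.80665  # standard gravity, m/s^2

def installed_isp(thrust_n, drag_n, fuel_flow_kg_s):
    """Specific impulse in seconds once airframe drag is charged to the engine."""
    return (thrust_n - drag_n) / (fuel_flow_kg_s * G0)

# A high uninstalled Isp is eroded on a high-drag, low trajectory:
uninstalled = installed_isp(400e3, 0.0, 12.0)    # drag ignored
installed = installed_isp(400e3, 150e3, 12.0)    # drag charged against thrust
```

The point of the metric is visible immediately: charging even a modest drag against the engine removes a large fraction of the nominal specific-impulse advantage.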
Importantly however, the addition of a set of wings brings more than just performance advantages to airbreathing vehicles. They also give considerably increased abort capability since a properly configured vehicle can remain in stable flight with up to half of its propulsion systems shutdown. Also during reentry the presence of wings reduces the ballistic coefficient thereby reducing the heating and hence thermal protection system mass, whilst simultaneously improving the vehicle lift/drag ratio permitting greater crossrange.
The suitability of the following engines to the SSTO launcher role will be discussed since these are representative of the main types presently under study within various organisations world-wide:

Liquid Hydrogen/Oxygen rockets
Ramjets and Scramjets
Turbojets/Turborockets and variants
Liquid Air Cycle Engines (LACE) and Air Collection Engines (ACE)
Precooled hybrid airbreathing rocket engines

3. Selection Criteria
The selection of an ‘optimum’ propulsion system involves an assessment of a number of interdependent factors which are listed below. The relative importance of these factors depends on the severity of the mission and the vehicle characteristics.

Engine performance

Useable Mach number and altitude range.
Installed specific impulse.
Installed thrust/weight.
Performance sensitivity to component level efficiencies.

Engine/Airframe integration

Effect on airframe layout (Cg/Cp pitch trim & structural efficiency).
Effect of required engine trajectory (Q and heating) on airframe technology/materials.

Technology level

Materials/structures/aerothermodynamic and manufacturing technology.

Development cost

Engine scale and technology level.
Complexity and power demand of ground test facilities.
Necessity of an X plane research project to precede the main development program.
4. Hydrogen/Oxygen Rocket Engines
Hydrogen/oxygen rocket engines achieve a very high thrust/weight ratio (60-80) but relatively low specific impulse (450-475 secs in vacuum) compared with conventional airbreathing engines. Due to the relatively large ∆V needed to reach low earth orbit (approx 9 km/s including gravity and drag losses) in relation to the engine exhaust velocity, SSTO rocket vehicles are characterised by very high mass ratios and low payload fractions.
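The mass-ratio consequence of the numbers above can be made concrete with the Tsiolkovsky rocket equation; this is a sketch rather than a vehicle sizing, and the 460 s specific impulse is simply the midpoint of the range quoted:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v_m_s, isp_s):
    """Tsiolkovsky rocket equation: m_initial / m_final = exp(dv / (Isp * g0))."""
    return math.exp(delta_v_m_s / (isp_s * G0))

# ~9 km/s to low Earth orbit (including gravity and drag losses), vacuum Isp 460 s:
r = mass_ratio(9000.0, 460.0)
propellant_fraction = 1.0 - 1.0 / r  # ~86% of liftoff mass is propellant,
                                     # leaving little for structure and payload
```

A mass ratio above 7 is what drives the very small payload fractions discussed later in this section.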
The H2/O2 propellant combination is invariably chosen for SSTO rockets due to its higher performance than other alternatives despite the structural penalties of employing a very low density cryogenic fuel. In order to maximise the specific impulse, high area ratio nozzles are required which inevitably leads to a high chamber pressure cycle in order to give a compact installation and reduce back pressure losses at low altitude. The need to minimise back pressure losses normally results in the selection of some form of altitude compensating nozzle since conventional bell nozzles have high divergence and overexpansion losses when running in a separated condition.
The high thrust/weight and low specific impulse of H2/O2 rocket engines favours vertical takeoff wingless vehicles since the wing mass and drag penalty of a lifting trajectory results in a smaller payload than a steep ballistic climb out of the atmosphere. The ascent trajectory is therefore extremely benign (in terms of dynamic pressure and heating) with vehicle material selection determined by re-entry. Relative to airbreathing vehicles a pure rocket vehicle has a higher density (gross take off weight/volume) due to the reduced hydrogen consumption which has a favourable effect on the tankage and thermal protection system mass.
In their favour rocket engines represent broadly known (current) technology, are ground testable in simple facilities, functional throughout the whole Mach number range and physically very compact resulting in good engine/airframe integration. Abort capability for an SSTO rocket vehicle would be achieved by arranging a high takeoff thrust/weight ratio (eg: 1.5) and a large number of engines (eg: 10) to permit shutdown of at least two whilst retaining overall vehicle control. From an operational standpoint SSTO rockets will be relatively noisy since the high takeoff mass and thrust/weight ratio results in an installed thrust level up to 10 times higher than a well designed airbreather.
Reentry should be relatively straightforward providing the vehicle reenters base first with active cooling of the engine nozzles and the vehicle base. However the maximum lift/drag ratio in this attitude is relatively low (approx 0.25) limiting the maximum achievable crossrange to around 250 km. Having reached a low altitude some of the main engines would be restarted to control the subsonic descent before finally effecting a tailfirst landing on legs. Low crossrange is not a particular problem providing the vehicle operator has adequate time to wait for the orbital plane to cross the landing site. However in the case of a military or commercial operator this could pose a serious operational restriction and is consequently considered to be an undesirable characteristic for a new launch vehicle.
In an attempt to increase the crossrange capability some designs attempt nosefirst re-entry of a blunt cone shaped vehicle or alternatively a blended wing/body configuration. This approach potentially increases the lift/drag ratio by reducing the fuselage wave drag and/or increasing the aerodynamic lift generation. However the drawback to this approach is that the nosefirst attitude is aerodynamically unstable since the aft mounted engine package pulls the empty center of gravity a considerable distance behind the hypersonic center of pressure. The resulting pitching moment is difficult to trim without adding nose ballast or large control surfaces projecting from the vehicle base. It is expected that the additional mass of these components is likely to erode the small payload capability of this engine/vehicle combination to the point where it is no longer feasible.
Recent advances in materials technology (eg: fibre reinforced plastics and ceramics) have made a big impact on the feasibility of these vehicles. However the payload fraction is still very small at around 1-2% for an Equatorial low Earth orbit falling to as low as 0.25% for a Polar orbit. The low payload fraction is generally perceived to be the main disadvantage of this engine/vehicle combination and has historically prevented the development of such vehicles, since it is felt that a small degree of optimism in the preliminary mass estimates may be concealing the fact that the ‘real’ payload fraction is negative.
One possible route forward to increasing the average specific impulse of rocket vehicles is to employ the atmosphere as both oxidiser and reaction mass for part of the ascent. This is an old idea dating back to the 1950s, revitalised by the emergence of the BAe/Rolls-Royce ‘HOTOL’ project in the 1980s [2]. The following sections will review the main airbreathing engine candidates and trace the design background of precooled hybrid airbreathing rockets.
5. Ramjet and Scramjet Engines
A ramjet engine is, from a thermodynamic viewpoint, a very simple device consisting of an intake, combustion and nozzle system in which the cycle pressure rise is achieved purely by ram compression. Consequently a separate propulsion system is needed to accelerate the vehicle to speeds at which the ramjet can take over (Mach 1-2). A conventional hydrogen-fuelled ramjet with a subsonic combustor is capable of operating up to around Mach 5-6, at which point the limiting effects of dissociation reduce the effective heat addition to the airflow, resulting in a rapid loss in nett thrust. The idea behind the scramjet engine is to avoid the dissociation limit by only partially slowing the airstream through the intake system (thereby reducing the static temperature rise) and hence permitting greater useful heat addition in the now supersonic combustor. By this means scramjet engines offer the tantalising prospect of achieving a high specific impulse up to very high Mach numbers. The consequent decrease in the rocket-powered ∆V would translate into a large saving in the mass of liquid oxygen required and hence possibly a reduction in launch mass.
Although the scramjet is theoretically capable of generating positive nett thrust to a significant fraction of orbital velocity it is unworkable at low supersonic speeds. Therefore it is generally proposed that the internal geometry be reconfigured to function as a conventional ramjet to Mach 5 followed by transition to scramjet mode. A further reduction of the useful speed range of the scramjet results from consideration of the nett vehicle specific impulse ((thrust-drag)/fuel flow) in scramjet mode as compared with rocket mode. This tradeoff shows that it is more effective to shut the scramjet down at Mach 12-15 and continue the remainder of the ascent on pure rocket power. Therefore a scramjet powered launcher would have four main propulsion modes: a low speed accelerator mode to ramjet followed by scramjet and finally rocket mode. The proposed low speed propulsor is often a ducted ejector rocket system employing the scramjet injector struts as both ejector nozzles to entrain air at low speeds and later as the rocket combustion chambers for the final ascent.
Whilst the scramjet engine is thermodynamically simple in conception, in engineering practice it is the most complex and technically demanding of all the engine concepts discussed in this paper. To make matters worse many studies including the recent ESA ‘Winged Launcher Concept’ study have failed to show a positive payload for a scramjet powered SSTO since the fundamental propulsive characteristics of scramjets are poorly suited to the launcher role. The low specific thrust and high specific impulse of scramjets tends to favour a cruise vehicle application flying at fixed Mach number over long distances, especially since this would enable the elimination of most of the variable geometry.
Scramjet engines have a relatively low specific thrust (nett thrust/airflow) due to the moderate combustor temperature rise and pressure ratio, and therefore a very large air mass flow is required to give adequate vehicle thrust/weight ratio. However at constant freestream dynamic head the captured air mass flow reduces for a given intake area as speed rises above Mach 1. Consequently the entire vehicle frontal area is needed to serve as an intake at scramjet speeds and similarly the exhaust flow has to be re-expanded back into the original streamtube in order to achieve a reasonable exhaust velocity. However employing the vehicle forebody and aftbody as part of the propulsion system has many disadvantages:

The forebody boundary layer (up to 40% of the intake flow) must be carried through the entire shock system with consequent likelihood of upsetting the intake flow stability. The conventional solution of bleeding the boundary layer off would be unacceptable due to the prohibitive momentum drag penalty.
The vehicle undersurface must be flat in order to provide a reasonably uniform flowfield for the engine installation. The flattened vehicle cross section is poorly suited to pressurised tankage and has a higher surface area/volume than a circular cross section with knock-on penalties in aeroshell, insulation and structure mass.
Since the engine and airframe are physically inseparable little freedom is available to the designer to control the vehicle pitch balance. The single sided intake and nozzle systems positioned underneath the vehicle generate both lift and pitching moments. Since it is necessary to optimise the intake and nozzle system geometry to maximise the engine performance it is extremely unlikely that the vehicle will be pitch balanced over the entire Mach number range. Further it is not clear whether adequate CG movement to trim the vehicle could be achieved by active propellant transfer.
Clustering the engines into a compact package underneath the vehicle results in a highly interdependent flowfield. An unexpected failure in one engine, with a consequent loss of internal flow, is likely to unstart the entire engine installation, precipitating a violent change in vehicle pitching moment.
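The capture behaviour noted earlier, that at constant freestream dynamic head the captured mass flow for a fixed intake area falls as speed rises, follows directly from m_dot = rho*V*A with q = (1/2)*rho*V^2 held fixed. A minimal sketch, with assumed dynamic pressure and area values:

```python
def capture_mass_flow(q_pa, velocity_m_s, area_m2):
    """Air mass flow through a fixed area at constant dynamic pressure q:
    m_dot = rho * V * A with rho = 2*q / V**2, hence m_dot = 2*q*A / V."""
    return 2.0 * q_pa * area_m2 / velocity_m_s

Q = 50e3   # dynamic pressure, Pa (assumed high-q ascent)
A = 10.0   # intake capture area, m^2 (assumed)

flow_lo = capture_mass_flow(Q, 600.0, A)    # roughly Mach 2
flow_hi = capture_mass_flow(Q, 3000.0, A)   # roughly Mach 10
# Five times the speed gives one fifth of the captured flow, which is why
# the entire vehicle frontal area is needed as an intake at scramjet speeds.
```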

In order to focus the intake shock system and generate the correct duct flow areas over the whole Mach range, variable geometry intake/combustor and nozzle surfaces are required. The large variation in flow passage shape forces the adoption of a rectangular engine cross section with flat moving ramps thereby incurring a severe penalty in the pressure vessel mass. Also to maximise the installed engine performance requires a high dynamic pressure trajectory which in combination with the high Mach number imposes severe heating rates on the airframe. Active cooling of significant portions of the airframe will be necessary with further penalties in mass and complexity.
Further drawbacks to the scramjet concept are evident in many areas. The nett thrust of a scramjet engine is very sensitive to the intake, combustion and nozzle efficiencies due to the exceptionally poor work ratio of the cycle. Since the exhaust velocity is only slightly greater than the incoming freestream velocity, a small reduction in pressure recovery or combustion efficiency is likely to convert a small nett thrust into a small nett drag. This situation might be tolerable if the theoretical methods (CFD codes) and engineering knowledge were on a very solid footing with ample correlation of theory with experiment. However the reality is that the component efficiencies are dependent on the detailed physics of poorly understood areas like flow turbulence, shock wave/boundary layer interactions and boundary layer transition. To exacerbate this deficiency in the underlying physics, existing ground test facilities are unable to replicate the flowfield at physically representative sizes, forcing the adoption of expensive flight research vehicles to acquire the necessary data.
Scramjet development could only proceed after a lengthy technology program and even then would probably be a risky and expensive project. In 1993 Reaction Engines estimated that a 130 tonne scramjet vehicle development program would cost $25B (at fixed prices) assuming that the program proceeded according to plan. This program would have included two X planes, one devoted to the subsonic handling and low supersonic regime and the other an air dropped scramjet research vehicle to explore the Mach 5-15 regime.
6. Turbojets, Turborockets and Variants
In this section are grouped those engines that employ turbocompressors to compress the airflow but without the aid of precoolers. The advantage of cycles that employ onboard work transfer to the airflow is that they are capable of operation from sea level static conditions. This has important performance advantages over engines employing solely ram compression and additionally enables a cheaper development program since the mechanical reliability can be acquired in relatively inexpensive open air ground test facilities.
6.1 Turbojets
Turbojets (Fig. 1) exhibit a very rapid thrust decay above about Mach 3 due to the effects of the rising compressor inlet temperature forcing a reduction in both flow and pressure ratio. Compressors must be operated within a stable part of their characteristic bounded by the surge and choke limits. In addition structural considerations impose an upper outlet temperature and spool speed limit. As inlet temperature rises (whilst operating at constant W√T/P and N/√T) the spool speed and/or outlet temperature limit is rapidly approached. Either way it is necessary to throttle the engine by moving down the running line, in the process reducing both flow and pressure ratio. The consequent reduction in nozzle pressure ratio and mass flow results in a rapid loss in nett thrust.
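The throttling mechanism described above can be sketched with the corrected spool speed N/√θ (θ = T/T_ref); the mechanical speed limit and ram inlet temperature used here are assumed values:

```python
import math

T_REF = 288.15  # ISA sea-level reference temperature, K

def corrected_speed(n_mech, t_inlet_k):
    """Aerodynamic (corrected) spool speed N / sqrt(theta), theta = T / T_ref.
    This, not the mechanical speed, sets the compressor operating point."""
    return n_mech / math.sqrt(t_inlet_k / T_REF)

N_LIMIT = 10_000.0  # mechanical spool speed limit, rpm (assumed)

sls = corrected_speed(N_LIMIT, 288.15)  # sea-level static: theta = 1
hot = corrected_speed(N_LIMIT, 700.0)   # ram temperature near Mach 3 (assumed)
# The corrected speed has fallen by roughly a third, forcing the engine down
# its running line with reduced flow and pressure ratio, and hence nett thrust.
```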
However at Mach 3 the vehicle has received an insufficient boost to make up for the mass penalty of the airbreathing engine. Therefore all these cycles tend to be proposed in conjunction with a subsonic combustion ramjet mode to higher Mach numbers. The turbojet would be isolated from the hot airflow in ramjet mode by blocker doors which allow the airstream to flow around the core engine with small pressure loss. The ramjet mode provides reasonable specific thrust to around Mach 6-7 at which point transition to rocket propulsion is effected.
Despite the ramjet extension to the Mach number range the performance of these systems is poor due mainly to their low thrust/weight ratio. An uninstalled turbojet has a thrust/weight ratio of around 10. However this falls to 5 or less when the intake and nozzle systems are added which compares badly with a H2/O2 rocket of 60+.
6.2 Turborocket
The turborocket (Fig. 2) cycles represent an attempt to improve on the low thrust/weight of the turbojet and to increase the useful Mach number range. The pure turborocket consists of a low pressure ratio fan driven by an entirely separate turbine employing H2/O2 combustion products. Due to the separate turbine working fluid the matching problems of the turbojet are eased since the compressor can in principle be operated anywhere on its characteristic. By manufacturing the compressor components in a suitable high temperature material (such as reinforced ceramic) it is possible to eliminate the ramjet bypass duct and operate the engine to Mach 5-6 whilst staying within outlet temperature and spool speed limits. In practice this involves operating at reduced nondimensional speed N/√T and hence pressure ratio. Consequently to avoid choking the compressor outlet guide vanes a low pressure ratio compressor is selected (often only 2 stages) which permits operation over a wider flow range. The turborocket is considerably lighter than a turbojet. However the low cycle pressure ratio reduces the specific thrust at low Mach numbers and in conjunction with the preburner liquid oxygen flow results in a poor specific impulse compared to the turbojet.
6.3 Expander Cycle Turborocket
This cycle is a variant of the turborocket whereby the turbine working fluid is replaced by high-pressure regeneratively heated hydrogen warmed in a heat exchanger located in the exhaust duct (Fig. 3). Due to heat exchanger metal temperature limitations the combustion process is normally split into two stages (upstream and downstream of the matrix) and the turbine entry temperature is quite low at around 950 K. This variant exhibits a moderate improvement in specific impulse compared with the pure turborocket due to the elimination of the liquid oxygen flow. However this is achieved at the expense of additional pressure loss in the air ducting and the mass penalty of the heat exchanger.

Fig. 1 Turbo-ramjet engine (with integrated rocket engine).
Fig. 2 Turborocket.
Fig. 3 Turbo-expander engine.
Unfortunately none of the above engines exhibit any performance improvement over a pure rocket approach to the SSTO launcher problem, despite the wide variations in core engine cycle and machinery. This is for the simple reason that the core engine masses are swamped by the much larger masses of the intake and nozzle systems which tend to outweigh the advantage of increased specific impulse.
Due to the relatively low pressure ratio ramjet modes of these engines, it is essential to provide an efficient high-pressure-recovery variable geometry intake and a variable geometry exhaust nozzle. The need for high pressure recovery forces the adoption of two-dimensional geometry for the intake system due to the requirement to focus multiple oblique shockwaves over a wide Mach number range. This results in a very serious mass penalty due to the inefficient pressure vessel cross section and the physically large and complicated moving ramp assembly with its high actuation loads. Similarly the exhaust nozzle geometry must be capable of a wide area ratio variation in order to cope with the widely differing flow conditions (W√T/P and pressure ratio) between transonic and high Mach number flight. A further complication emerges due to the requirement to integrate the rocket engine needed for the later ascent into the airbreathing engine nozzle. This avoids the prohibitive base drag penalty that would result from a separate ‘dead’ nozzle system as the vehicle attempted to accelerate through the transonic regime.
7. Liquid Air Cycle Engines (LACE) and Air Collection Engines (ACE)
Liquid Air Cycle Engines were first proposed by Marquardt in the early 1960s. The simple LACE engine exploits the low temperature and high specific heat of liquid hydrogen in order to liquefy the captured airstream in a specially designed condenser (Fig. 4). Following liquefaction the air is relatively easily pumped up to such high pressures that it can be fed into a conventional rocket combustion chamber. The main advantage of this approach is that the airbreathing and rocket propulsion systems can be combined, with only a single nozzle required for both modes. This results in a mass saving and a compact installation with efficient base area utilisation. Also the engine is in principle capable of operation from sea level static conditions up to perhaps Mach 6-7.

Fig. 4 Liquid Air Cycle Engine (LACE).

The main disadvantage of the LACE engine, however, is that the fuel consumption is very high (compared to other airbreathing engines), with a specific impulse of only about 800 secs. Condensing the airflow necessitates the removal of the latent heat of vaporisation under isothermal conditions. However the hydrogen coolant is in a supercritical state following compression in the turbopump and absorbs the heat load with an accompanying increase in temperature. Consequently a temperature ‘pinch point’ occurs within the condenser at around 80 K and can only be controlled by increasing the hydrogen flow to several times stoichiometric. The air pressure within the condenser affects the latent heat of vaporisation and the liquefaction temperature, and consequently has a strong effect on the fuel/air ratio. However at sea level static conditions of around 1 bar the minimum fuel/air ratio required is about 0.35 (i.e. 12 times greater than the stoichiometric ratio of 0.029), assuming that the hydrogen had been compressed to 200 bar. Increasing the air pressure or reducing the hydrogen pump delivery pressure (and temperature) could reduce the fuel/air ratio to perhaps 0.2, but nevertheless the fuel flow remains very high. At high Mach numbers the fuel flow may need to be increased further, due to heat exchanger metal temperature limitations (exacerbated by hydrogen embrittlement limiting the choice of tube materials). To reduce the fuel flow it is sometimes proposed to employ slush hydrogen and recirculate a portion of the coolant flow back into the tankage. However the handling of slush hydrogen poses difficult technical and operational problems.
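The coolant-flow arithmetic above can be checked in a couple of lines; only the 0.029 stoichiometric ratio and the 0.35 and 0.2 fuel/air ratios come from the text:

```python
STOICH_H2_AIR = 0.029  # stoichiometric hydrogen/air mass ratio (from the text)

def excess_fuel_factor(fuel_air_ratio):
    """Multiple of stoichiometric fuel flow forced by the condenser pinch point."""
    return fuel_air_ratio / STOICH_H2_AIR

sea_level = excess_fuel_factor(0.35)  # ~1 bar condenser, 200 bar H2: ~12x
improved = excess_fuel_factor(0.20)   # higher air pressure / colder H2: ~7x
```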
From a technology standpoint the main challenges of the simple LACE engine are the need to prevent clogging of the condenser by frozen carbon dioxide, argon and water vapour, the ability of the condenser to cope with a changing ‘g’ vector, and the design of a scavenge pump able to operate with a very low NPSH inlet. Nevertheless, performance studies of SSTOs equipped with LACE engines have shown no performance gains, due to the inadequate specific impulse in airbreathing mode despite the reasonable thrust/weight ratio and Mach number capability.
The Air Collection Engine (ACE) is a more complex variant of the LACE engine in which a liquid oxygen separator is incorporated after the air liquefier. The intention is to take off with the main liquid oxygen tanks empty and fill them during the airbreathing ascent, thereby possibly reducing the undercarriage mass and installed thrust level. The ACE principle is often proposed for parallel operation with a ramjet main propulsion system. In this variant the hydrogen fuel flow would condense a quantity of air from which the oxygen would be separated before entering the ramjet combustion chamber at a near-stoichiometric mixture ratio. The liquid nitrogen from the separator could perform various cooling duties before being fed back into the ramjet airflow to recover the momentum drag.
The oxygen separator would be a complex and heavy item since the physical properties of liquid oxygen and nitrogen are very similar. However, setting aside the engineering details, the basic thermodynamics of the ACE principle are wholly unsuited to an SSTO launcher. Since a fuel/air mixture ratio of approximately 0.2 is needed to liquefy the air, and since oxygen is 23.1% of the airflow, it is apparent that a roughly equal mass of hydrogen is required to liquefy a given mass of oxygen. Therefore there is no saving in the takeoff propellant loading, and in reality a severe structure mass penalty due to the increased fuselage volume needed to contain the low density liquid hydrogen.
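The "roughly equal mass of hydrogen" claim follows directly from the two numbers in the paragraph above:

```python
O2_MASS_FRACTION = 0.231  # oxygen mass fraction of air (from the text)
FUEL_AIR_RATIO = 0.2      # hydrogen needed to liquefy a unit mass of air

# Hydrogen spent as condenser coolant per kg of liquid oxygen collected:
h2_per_kg_lox = FUEL_AIR_RATIO / O2_MASS_FRACTION  # ~0.87 kg H2 per kg LOX
```

Since hydrogen is far less dense than the oxygen it replaces, collecting oxygen this way trades tank volume for no net propellant saving.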
8. Precooled Hybrid Airbreathing Rocket Engines
This last class of engines is specifically formulated for the SSTO propulsion role and combines some of the best features of the previous types whilst simultaneously overcoming their faults. The first engine of this type was the RB545 powerplant proposed for the BAe/Rolls-Royce HOTOL project [2].

Early Years Foundation Stage Guidelines

Hasana Khan
Explain the observation, assessment and planning cycle.
The EYFS requires practitioners to plan activities and play opportunities that will support children’s learning while covering the areas of learning within the EYFS. Practitioners must plan carefully so that individual children’s needs are met and the activities and play opportunities help children progress towards their early learning goals. Planning, observation and assessment all contribute to supporting the learning and development requirements of children. Observing individual children carefully can help to identify what their needs and interests are. To ensure that practitioners meet the needs of individual children, it is important that they follow the observation, assessment and planning cycle. Observation is when practitioners watch children to understand their interests, needs and learning styles. Observing children is a useful process as it provides information which practitioners can use to support the children when planning and preparing activities for them. Observations should be made in a range of contexts; for example, they should be done during independent play, during everyday routines and also when the child is engaged in play with others.


The EYFS (May 2008) states that “planning should be flexible enough to adapt to circumstances”. Observing children will also enable practitioners to understand what their current stage of development is. Without the process of observation, practitioners will not be able to fully support the children, as they will not have a clear idea of what the child’s needs and interests are. Practitioners must ensure that they gain parents’ permission before they carry out any observations on the children, because some parents may not want their child to be observed. During an observation practitioners need to look, listen and record what they see; they must not involve themselves in the observation, as doing so may affect what the child is doing.

An assessment is when practitioners analyse observations to see what they tell them about a child. Accurate assessments enable practitioners to make judgements which lead to action to support individual children. They help each child to develop and learn by ensuring that practitioners provide children with appropriate experiences and opportunities. Practitioners gather the information in their observations to identify aspects of the child’s learning and development. Doing this enables them to assess what a child’s needs and requirements are and how well the child can be supported.

The final part of the cycle is planning: this is when practitioners use the information that they have gathered to plan for the child. This could include planning experiences and opportunities that the child could benefit from, and also ensuring that the environment is suitable and the child has access to appropriate resources. The practitioner will also need to plan what their role will be in supporting children with their learning and development. Practitioners must ensure that they include each area of learning and development through planning, purposeful play and a mix of adult-led and child-initiated activity.
Practitioners must ensure that their planning reflects and supports children’s current interests, learning styles and the stage of development of each child. The planning process enables practitioners to contribute and understand the experiences that they have planned for the children. Practitioners can also ensure that parents and children have a voice in the planning process, for example children can share their feelings and activities that they want to take part in. Parents can also share their knowledge of their child and any additional support that they may require. Observation, assessment and planning all feed into one another and contribute to our knowledge about the child enabling the practitioners to fully support the needs, requirements, learning and development of each child.
Describe how to develop planning for individual children.
When working with children, practitioners will find that they are required to plan activities and experiences which support children's learning and development. The EYFS (May 2008) states that "good planning is the key to making children's learning effective, exciting, varied and progressive". Practitioners need to ensure that they plan activities which are linked with the different areas of learning within the EYFS, and that the activities they plan and prepare meet the individual needs and requirements of the children. When planning, practitioners need to bear in mind that whatever is planned is age- and stage-appropriate and suitable for the child to take part in. There are many different sources that an individual can use to help them when planning; for example, each child has their own interests and preferences, and may enjoy playing with some toys more than others. A practitioner can use a child's interest to make an activity more exciting and challenging for the child.
This will also enable the child to learn new things as well as taking part in something they enjoy. Regular observations and assessments support the practitioner when planning for a child, because a lot of information can be gathered when the practitioner is able to see for themselves what a child likes and dislikes doing. Observing the children helps the practitioner identify a child's needs, interests and any additional support they may require. The EYFS (May 2008) states that "planning should include all children, including those with additional needs". Practitioners must make full use of the observations gained in order to support the child and ensure that their needs are fully met. Within the setting, practitioners can work in partnership with parents and carers, as stated in the EYFS, so that they are also included in their child's learning and development. Parents and carers will be able to share information with practitioners about what the child is like at home and what interests and needs they may have, and can help with planning by identifying areas where a child may need support.
Sharing ideas with colleagues can be useful during planning, as one individual may have noticed something about the child that no one else has; this matters because a child may be closer to one member of staff than to another. There may be times when a practitioner is not always with the children, so it is important that information is shared to ensure that all members of staff are aware of the child's needs and interests. Some children may also be under the care of other professionals; this is useful because practitioners are then able to work alongside those professionals to share and learn new ideas on how the child can be fully supported in each setting. Practitioners must ensure that at all times their planning reflects the different needs and interests of the children, and that it provides opportunities for children to learn and gain new skills.
Differentiate between formative and summative assessment methods.
The EYFS (May 2008) states that practitioners should "make informed decisions about the child's progress and plan next steps to meet their development and learning needs". When working with children, practitioners will find that settings carry out progress reviews of children's development; these can be done every six months or on an annual basis. Practitioners are required to provide parents with a progress report about the child's learning and development. This gives parents an idea of what stage their child is at with their learning and development and whether they may require any additional support. Practitioners must meet the individual needs of all children by following the requirements of the EYFS, and it is important to deliver personalised learning, development and care to help children get the best possible start in life.

There are two formal assessments: the completion of the progress report at age two, and the completion of each child's learning and progress journey during their time at the nursery. To assess children effectively, practitioners must analyse and review the information they have about each child's learning and development, and then plan next steps to meet the individual needs of children. A formative assessment is when a practitioner keeps a record of the child's learning and development. The practitioner takes daily observations of a child, using notes and photo evidence, and keeps them in an individual record. The record is available for parents to view, enabling them to review their child's learning and development within the setting and to see what their child has achieved and what stage of development they are at. Practitioners must ensure that they regularly update children's records with the appropriate information.
Formative assessment: this is an assessment based on observations, photos, work from children or any information that a practitioner receives from the parents. It is an ongoing assessment, carried out on a regular basis through the observations that practitioners gather. Children are also required to have a progress check when they are aged two; this is a summary of the information that has been gained about the child. Practitioners compare children against the areas of learning to identify whether a child has achieved the learning goals for their age and stage of development. The progress checks are given to parents as a summary of the development stages a child has achieved, and also set out the targets and goals a child will have for the future and how they will be achieved.

Summative assessment: this assessment is a summary of the evidence that a practitioner gains through carrying out formative assessments. Summative assessments are used to review children's developmental progress over a period of time and to identify whether a child has achieved the targets and goals for their age and stage of development. It is a summary of all the formative assessments done over a longer period and makes a statement about a child's achievements. The EYFS Profile is the summative assessment used to review children's progress against the early learning goals.
Explain the two statutory assessments that must be carried out on all children.
The EYFS (May 2008) states that "all effective assessment involves analysing and reviewing what you know about each child's development and learning". When working with children, practitioners need to carry out two main assessments of the children in their care. The first is the EYFS progress check, which is done at age two. The second is the EYFS profile, which summarises and describes a child's achievements and is a record of their development; this profile covers the child up to the age of five, until the child leaves the nursery. These two statutory assessments check children's development against the seven areas of learning.

The EYFS progress check requires practitioners to make a summary of the child's development and achievements, and to state any targets or goals that need to be met for the child to make further progress. The progress checks show parents and practitioners any additional support that a child may require. Practitioners are required to review children's progress and ensure that parents receive a written record, enabling them to see what the child has achieved and what stage they are at with their learning and development. Key workers have the role of completing the progress checks for all of their key children. In some settings practitioners set up parents' evenings where they discuss the child's progress and hand out the progress checks; this also gives parents a chance to discuss their child's learning and development, share information or ask any questions. The progress checks are useful because they enable parents to see how they can support their child at home and to identify their needs and interests.
To complete the progress checks, practitioners should use the findings from the daily assessments and observations that they carry out on the child; this helps to give an overview of what a child can and cannot do in terms of their learning and development. The summary must include the information the practitioner has gathered about the child over the period the child has been at the nursery. The Early Years profile is an assessment of the child completed at the end of the foundation stage; practitioners must ensure that they complete a profile for every child at the end of the term. This assessment is normally completed by the reception class teachers and is assessed against the seventeen early learning goals, which can be found in the EYFS. The Early Years profile is completed using observations of the child that have been gathered on a regular basis, and consists of the targets the child has met or needs to meet during their time at the nursery. Practitioners must share the Early Years profile assessment with the parents, to support them in understanding their child's learning and development; parents will also be able to see what their child does within the setting and any progress they are making. To ensure that practitioners fully understand how to complete the Early Years profile, it is a requirement that appropriate training is given to those working within a childcare setting.

Areas in the Early Years Foundation Stage

Building positive relationships
Building a positive relationship with a young child is not always easy. Some children are really open and easy to get along with, but some are uncommunicative and would rather be alone and do their own thing than play and talk to others. Children's behaviour is also unpredictable; you just never know what is around the corner. That is why you cannot work in a childcare setting thinking you can act the same with every single person. It is not that easy, because personalities are different and every child is an individual who needs a different kind of care.
The Early Years Foundation Stage (EYFS) is the statutory framework that sets the standards that all early years providers must meet to ensure that children learn and develop well and are kept healthy and safe. These standards are:

The promotion of the welfare of children in the child care setting
Appropriately screened adults to work with the children
A suitable environment, equipment and premises
Correctly maintained documentation
The provision of an organisational structure in which they can learn and develop emotionally, socially, physically and intellectually

It is extremely important that adults working with children work to these standards, especially when they are OFSTED registered, because being registered with OFSTED and not working to these rules is against the law. Children learn best when they are healthy, safe and secure, when their individual needs are met and when they have positive relationships with the adults caring for them. With these rules, childcare professionals have learnt how to implement the six areas of learning and development, set out by the Early Years Foundation Stage, in the childcare setting.
The Early Years Foundation Stage consists of six areas of learning and development which are:

Personal, social and emotional development
Communication, language and literacy
Problem solving, reasoning and numeracy
Knowledge and understanding of the world
Physical development
Creative development

Most of the areas are covered simply by the children being in the childcare setting, surrounded by capable adults and of course other children whom they socialise and play with. Playing with other children brings out much of what is in these six areas, but on its own it is not enough. With manners, literacy, numeracy, and knowledge and understanding of the world, children do need a little help from adults, and that is why professional carers know all the different types of activities and exercises that are good for learning and development.
People working with children have been taught to observe every child, so they know what each individual needs in order to learn more quickly and develop further. It is always good to communicate with other carers and of course parents, so that the best decisions can be made.
Children's respect for other people develops from an early age; it depends on the environment they have been raised in and on what their parents have taught them. Children typically believe in the same things as the adults living at home, but that is not always a good thing. As time goes on there are many changes in life that older people do not accept, but we cannot teach our children to do the same. That is why it is important to teach children from a young age to respect and value individuality.


We often find that children who do not have any siblings are not really respectful to others, because they are used to getting everything they want and there is no one at home they have to share their things with. A lot of the responsibility for raising children lies with parents at home, but if the children attend a nursery or childcare setting, carers also have their own part in raising children to become responsible young adults. Attending a childcare setting regularly definitely helps children to understand other children's needs and how to communicate and play with others.
Professional carers have been taught how to work with young people. They know how to act in front of and with the children, so the children are not given a bad example to take from the adults. But communication is the key! Carers and parents definitely need to talk things through, because children get confused, and nothing good comes of it if the adults at home act very differently from the ones at the childcare setting.
It is also very easy to teach children through games. Games teach how to share things, how to communicate with others and, when difficulties appear, how to solve problems and arguments. Role play is particularly good for this. But it is important for adults to stay on the side and observe, so they can help the children understand the situation and find a solution together with those involved.
Showing children that everyone in this world is equal, no matter their culture, material status, skin colour or age, can keep them out of trouble as children and also in the future. It is important for them to tolerate and value individuality.
Keeping behaviour positive, and avoiding negative behaviour at home and in the childcare setting, is probably every parent's and carer's dream. But the thing adults usually struggle with is consistency.
From my own experience, when I moved in with my previous host family to look after the children, they had the same problem. It seems to be really difficult for parents to stick to what they have said. With two boys in the family to look after, I made it pretty clear from the start what they could and could not do. Of course they tried to push the boundaries, but I stuck to my word. The difference in how the children behaved with me and with the parents was huge. When I asked them to do something, they did it and I never had to ask twice, but when the parents asked, the children often did not even respond. It shows clearly what these parents have done wrong, and why they will not get enough respect from their own children.
The main thing for parents is to keep their promises and keep up the consistency, but children also need to know that every instance of bad behaviour has consequences. It is really important for adults to keep this up too; otherwise the child will get confused and there will be no result.
But recognising only bad behaviour and constantly telling children off is frustrating for both sides. That is why we also need to notice the good things children do. Nurseries and schools often use stickers as a reward for good behaviour; as children love stickers, it is a really good idea to use them in a childcare setting.
When the good things children do are seen and noticed, they want to get noticed even more, because they know it will bring more attention from adults and of course rewards. Parents and carers do need to be careful, though, because there are many cases of children exploiting this and blackmailing the adults to get what they want, simply by misbehaving when the parents will not buy something they wanted. But there should not be any problems when the rules are clear from an early age.
In every household and childcare setting, children and adults very often face conflicts. Conflict usually arises between children who are fighting over toys or attention, or appears simply because of a lack of social skills, hunger, tiredness or a lack of suitable role models. But sometimes we also see conflicts between children and adults, usually caused by a lack of attention, generational clashes, or a middle child being forgotten about. So what should we do to solve these problems? And at the end of the day, is conflict good or bad?
The KidsHealth website advises parents to give their children some more privacy and also to trust them a little more. This is obviously advice aimed more at parents with slightly older children who actually know what they are doing. The website also says to listen and, of course, to do more explaining. Children should be taught to ask adults to explain things through so that a conflict does not develop.
So, is conflict a good or a bad thing? Many theorists agree that it is a good thing and that it helps children to develop. Piaget believed that conflict in children was healthy and, if worked through, would help children to overcome their egocentric thought patterns. Erikson believed that to become a better person one must resolve the conflict in each stage of life, because life is full of conflicts. And Vygotsky saw conflict more as a learning process, believing that children will learn from it.
Many childcare facilities seem to think that conflict is part of human nature and that children need the skill to solve a problem without an adult's help. That is why they believe it is an adult's responsibility to give children conflicts to resolve, at first with adults, so that by the time they are grown up they can do it themselves and, through this, survive.
Keeping the perfect parent/carer-and-friend relationship at home or in the childcare setting can sometimes be really difficult. Children often do not take well to adults telling them what to do or how to behave with others, and when you are teaching them, children may not think you still want to be their friend. Trying to be a friend while at the same time staying professional, keeping the children's respect and keeping up the consistency is hard work. But when children and adults find the perfect balance, there will be real harmony in the whole household or childcare setting, which is a good influence on everyone.


Early Stage Of Dementia

Dementia is a common disease in the geriatric population but can also be seen at any stage of adulthood. In a study issued by European researchers, it is estimated that about 35 million people have dementia worldwide. It is called a syndrome because it involves a series of signs and symptoms; it is a non-specific clinical syndrome caused by a wide variety of diseases or injuries that affect the brain. Due to the alarming increase in the number of dementia cases among elderly people, there is a need for extensive research on appropriate care for elderly dementia patients. The nursing home is considered the embodiment of an institution meant to provide constant care.


In order to study whether the nursing home is the most appropriate care environment for an older person diagnosed with dementia, an extensive literature search was performed in accordance with Oxford Brookes style. Ten articles were obtained as a result of this search after applying inclusion and exclusion criteria arising from the needs of the study. The results are categorised into four main themes, which are as follows:
The facilities and care available at a nursing home. Is a multidisciplinary approach essential?
Care received in a nursing home vs. home care.
The impact of elderly people joining a nursing home at an early stage.
The importance of the nursing home for elderly patients in the early stage of dementia.
After applying the CASP tool to all the articles, careful analysis was done to develop the discussion. Based on the discussion, the nursing home is considered the most appropriate care environment for an elderly patient diagnosed with dementia. Recommendations are proposed on the basis of the conclusions, and the implications of my research for the future are mentioned.
Dementia is defined as a medical condition characterised by loss of cognitive ability, caused either by normal aging or by some kind of sudden impairment (Berrios, 1987). It is also described as a non-specific illness causing a set of symptoms affecting the memory, language, attention and problem-solving regions of the cognitive areas of the brain (Calleo and Stanley, 2008).
Dementia can be either static, caused by an injury to the brain affecting the cognitive area, or progressive (slowly or rapidly progressive), resulting in damage to the brain. Although the disease is seen most commonly in elderly patients, it occurs at every stage of adulthood (Berrios, 1987). During the initial stages of dementia, the higher mental functions are affected, leading to confusion and forgetfulness that gradually progress (Gleason, 2003). In aged people, the experience of dementia is worse due to pain and ill health. These symptoms lead to problems with ambulation, mood swings, depression, disturbances in sleep patterns, decreased appetite and slowness of activity (Gleason, 2003).
Caring for people suffering from dementia:
During the initial symptoms, the patient is taken care of by family members and relatives. They reassure the patient that the process is a normal part of aging, which makes their lifestyle a bit easier (Algase, 1996). A person suffering from dementia is moved to a nursing home due to unavoidable circumstances such as the absence of carers, a hectic life schedule, excessive progression of the disease or expensive treatment (Weinberger et al., 1993).
A nursing home is defined as a place of residence for patients needing continuous support. A nursing home is chosen in many circumstances, as mentioned by Weinberger and coworkers; according to them, the need for skilled nursing care, physical intervention and a close understanding of the patient play an important role. Along with continuous care, patients in countries like Ireland, the United Kingdom and Wales also receive assistance from physical, occupational and speech therapists, social workers, psychiatrists and psychologists to look after their necessities (Kristine et al., 2002). Emergency management is also provided as an essential part of treatment, which is an added advantage. The most important responsibility of the nursing home is elderly care, providing patients with all the basic services such as assisted living, day care and long-term care (Kristine et al., 2002).
The nursing home acts as a caring unit for many patients at various stages of dementia. The quality of nursing homes varies and is the most important variable in determining efficiency (Kristine et al., 2002). The qualifications, knowledge and responsibility of the carers or staff in charge, and the presence of physicians to monitor the health condition of the patient, play an important role (Kristine et al., 2002).
Although efficient care is expected, there are many disadvantages associated with nursing homes. The patient may initially show signs of disagreement about moving to a nursing home but may agree when the move is explained; the chances of forgetfulness around these issues are also high, which can cause agitation after joining (Algase, 1996). The patient's initial adjustment to the new atmosphere may create more confusion and deteriorate their health condition, which is a major drawback (Steele et al., 1990). The cost of the nursing home and nursing staff is unaffordable for many patients, which worsens the situation (Kristine et al., 2002). The extent of care and the support of family members play a vital part in influencing treatment. In particular situations, such as impaired mobility and disability, or in cases where older people are declared mentally and physically incompetent, care in the nursing home remains the best factor for increasing the longevity of patients (Steele et al., 1990).
The purpose of this paper is to review various national and international scientific journals and articles which address the appropriate care of elderly patients diagnosed with an early stage of dementia. The intention is to provide a suitable answer to the research question: "Is the nursing home an appropriate environment for an older adult diagnosed with early-stage dementia?" To answer this question, an extensive literature search and study was performed. The literature review covered numerous journals, policies and papers which examined the care provided to elderly patients in the early stage of dementia. The review includes a thorough analysis of elderly dementia patients, the forms of care available to them, and an examination of the most suitable care to improve their health condition. The present research evaluates the available data on the nursing home as an appropriate environment for caring for these patients. The review elaborates on the expectations of patients and relatives towards care, and on the attempts of health professionals to live up to those expectations. It also highlights the difference between care received at home and in a typical nursing home, and throws light on the advantages and disadvantages of care given in nursing homes; these factors are considered later to draw conclusions on the most appropriate environment for caring for elderly dementia patients.
The objective of this research paper is to investigate the literature on qualitative, quantitative and mixed experimental approaches to the proper care of elderly patients. These inferences form the basis for understanding whether the nursing home is the most appropriate place for caring for dementia patients.
In order to address the objectives of the study, an extensive literature exploration was performed. A literature review is a body of text that aims to review the critical points of current knowledge, including substantive findings as well as theoretical and methodological contributions to a particular topic (Aveyard, 2007). A literature review was considered the best research methodology because of the time constraint and the lack of ethical clearance to perform primary research. A literature review focuses on primary research done in various clinical circumstances; another advantage is the possibility of comparative study among various qualitative, quantitative and mixed primary research studies across the world (Aveyard, 2007).
During my study of the available literature, various situations experienced by dementia patients in nursing homes were studied, and ideas about the circumstances experienced by elderly patients in nursing homes were identified. Some of the papers focus on the style of practice in the nursing home, and some on the attitude of patients towards nursing care. Many controversies were identified, reflecting the different perspectives of the authors. Through these studies, an idea of the most suitable place of care for elderly patients at an early stage of dementia could be formed.
Research process:
The process of research involved organised and vigilant consideration of the literature suitable for my research work. The PICO model for formulating a question, as suggested by Johnson and Fineout (2005) and by Stone (2002, cited in Gerrish and Lacey, 2010), is shown in Appendix 1 of the present research work. The model gives a simpler representation of the present research work.
The four main terms used in my literature search were dementia, elderly patients, quality care and nursing home. Care was taken to use these terms in combination rather than singly, which would otherwise have widened the research area. The list of search terms and the keyword identification table, as suggested by Aveyard and Sharp (2009), are given in Appendices 2a and 2b respectively of the present investigation report.
When considering the term dementia, the early stage was emphasised in particular to refine the search. In addition to early stage, another term, elderly patients, was also used to avoid searching across all age groups. The term used for the search looked as presented below:
Dementia OR Alzheimer* OR memory loss AND early stage
The Boolean operator AND was used between these terms to ensure that the literature search included these words in combination. When the Boolean operator was excluded, the search returned articles covering primary research on dementia at all stages and among all age groups.
The other important search term concerned the age of the patient, which was mainly confined to elderly patients. The parameters used included the Boolean operator AND. The phrase was as follows:
Older person OR Elderly OR older adult
The other term used in conjunction in the search was quality care. As quality care is a broad term, the Boolean operator OR and the truncation symbol (star, "*") were used to enable a thorough search without excluding any important article or journal. The term entered was as presented below:
Quality care* OR appropriate care* OR concern OR caring* OR wellbeing OR well-being
The use of the Boolean operator and the truncation symbol ensured that important articles containing synonyms or differently presented words were not omitted.
The final term used in the literature search was "nursing home". Since my research focuses on the most appropriate care environment for elderly dementia patients at an early stage, this final term was used separately. The terms used in comparison were:
Nursing home and care home* or residence*
In this particular context, nursing home and care home are considered in comparison with the residence of the patient. To confine the results to a single term, the Boolean operator 'or' was used. The operator 'and' was used to retrieve results including both nursing homes and care homes. The truncation symbol, star, was used to include articles with words displayed in alternative formats.
The final research phrase for search looked as displayed below:
Dementia OR Alzheimer* OR memory loss and early stage
Older person OR Elderly OR older adult
Quality care * or appropriate care* or concern or caring* or wellbeing or well-being
Nursing home and care home* or residence*
The immediate course of action was to use these terms in appropriate databases. Guided by the Oxford Brookes university library manuals, I considered CINAHL, the British Nursing Index and MEDLINE the most relevant databases for the search. CINAHL deals mainly with nursing and health care in North America and Europe (Oxford Brookes University, 2009). The British Nursing Index includes journals and articles on care and community health pertaining to nursing and midwifery (Oxford Brookes University, 2009). MEDLINE (PubMed) is a collection of articles on medicine and nursing compiled by the National Library of Medicine, USA (Oxford Brookes University, 2009).
When the entire research phrase was posed in CINAHL, it retrieved 332 articles; the same phrase in MEDLINE retrieved 75 articles. The search was further refined using the date limiter 20000101-20101231, which retrieved 57 and 54 articles respectively. Inclusion and exclusion criteria were then applied; the time constraint in particular enabled selection of the most recent articles, from 2000 to the present. Another inclusion criterion was place, which restricted the results to investigations done in the UK. The inclusion and exclusion criteria yielded 55 and 29 articles in CINAHL and MEDLINE respectively (the database searches and hits are shown in the appendix, as suggested by Oxford Brookes University, 2011).
To identify the material best suited to carry my research forward, the four main principles suggested by Aveyard (2007) were used: electronic searching, searching reference lists, hand searching of relevant journals and contacting authors directly. Of these, the first three were used as analytical tools in deciding on the most suitable literature. While choosing primary research material, utmost care was taken to read through the abstract, findings and research methodology of each article; this formed the criterion for including or excluding it. For some of the searches, hand searching was also used to obtain useful statistical information on the UK (shown in appendix 4). Owing to the time constraint on the research work, contacting health care professionals and conducting interviews could not be performed.
As a result of this methodology, 10 articles were found relevant to the context under investigation. The findings of the papers were studied thoroughly in order to answer the research question. The nursing home was considered an ideal place to care for an elderly dementia patient in the initial stage of the disorder.
Critiquing my research methodology:
Several criticisms can be made of my research methodology. The main ones are the following:
Inability to access all the journals in the databases, as many required paid registration. Most of the websites holding excellent articles require payment. I managed to collect as many articles as I needed to answer the research question by logging in through Athens, but it was beyond my finances to pay for all the articles.
Lack of time to contact primary health care professionals to incorporate their views as a part of my research work.
Lack of time to go through all the publications of a journal which resulted in referring to recent publications.
Lack of information in titles, which would have enabled an appropriate decision about an article's content; this meant reading the abstract in order to decide whether to include an article.
Thus the major constraints of the present research were identified as cost and time. However, the freely available data obtained within the specified time were sufficient to draw conclusions addressing the research question.
For all 10 articles, the Critical Appraisal Skills Programme (CASP, 2006) tools were applied to draw out the most relevant themes. The main themes identified are:
The facilities and care available at a nursing home. Is a multidisciplinary approach essential?
Care received in nursing home vs. home care.
Impact of elderly people joining nursing home at an early stage.
Importance of nursing home in elderly patients in the early stage of dementia.
The findings of the literature were categorized into main themes which made it easier to draw conclusions. The section depicts the investigation done in the 10 articles grouped together in accordance with the theme.
Theme 1: The facilities and care available at a nursing home. Is a multidisciplinary approach essential?
Author, Year, Location
1. J. Cohen-Mansfield and A. Parpura-Gill (2008). International Journal of Geriatric Psychiatry.
Practice Style in the Nursing Home: Dimensions for Assessment and Quality Improvement
The investigation examined the operating style of the nursing home in terms of two main components: the staff component and the institutional component.
The four domains that served as tools to test staff conduct were knowledge, proficiency of practice style, flexibility, and individual care and communication.
The three domains used to test the conduct of the institution were support of staff, availability of resources and administration of policies.
As a result of the investigation, key features of institutional factors and staff were studied and monitored. Based upon the need and demand, the features requiring change and improvement were noted to ensure quality of care.
The investigation emphasises the practice styles of staff in a nursing home with respect to the care provided. The journal gives an insight into the varied styles of care within a nursing home that ensure quality of care, focusing on the knowledge, communication, flexibility and understanding of the staff in providing appropriate care to elderly persons.
It also notes the need to change the style of practice to cope with increasing demand.
2. E. Finnema et al., 2005. International Journal of Geriatric Psychiatry.
The effect of integrated emotion-oriented care versus usual care on elderly persons with dementia in the nursing home and on nursing assistants: a randomized clinical trial
The investigation is based on a randomised clinical trial of two groups of elderly dementia patients, measuring effects at baseline and again after a specific period of time. The study involved 146 elderly dementia patients and 99 nursing staff, and was performed in 16 psychogeriatric wards in 14 nursing homes in the Netherlands.
The primary research studied the difference between the usual care and Integrated emotion-oriented care. The nursing assistants were tested on the basis of care given.
Positive effects were reported in patients with mild to moderate dementia in terms of emotional balance and a positive self-image. Results also showed that training the nursing staff resulted in fewer stress reactions, increasing quality and patience.
The investigation revealed that emotion-oriented care improved performance in early-stage dementia patients compared with usual care. However, it did not show any notable increase in quality for people suffering from severe dementia. The study also notes the reduction of stress in well trained nursing staff.
3. D. Challis et al., 2000.
Journal article from Age and Ageing.
Dependency in older people recently admitted to care homes.
The investigation was based on a study of 308 elderly people aged over 65 admitted to care homes in north-west England. The study was conducted within two weeks of admission for people intending to continue treatment long term. The Barthel index and the Crichton Royal Behaviour Rating Scale were used to analyse dependency rates.
On the basis of these two scales, 50% of the population were measured in the low dependency range (13-20): 31% in the case of nursing homes and 71% in the case of residential care homes. On the whole, dementia patients are not assessed before admission to the nursing home.
The study revealed a lack of pre-admission assessment and diagnosis before patients joined. It throws light on the lack of communication of pre-assessment and diagnostic information about patients to the health care and nursing staff, and places high importance on effective targeting of institutional resources.
4. Leontjevas et al., 2009.
American Journal of Alzheimer’s Disease & Other Dementias
Apathy and Depressive Mood Symptoms in Early onset dementia.
As part of an epidemiological study, patients were examined for symptoms of apathy and depressive mood in early-onset dementia. Studies were performed in 63 nursing homes. The MADRS, NPI and MMSE scales were used to detect the rate of depression, and the MDS-RAI and GDS were used to detect the severity of dementia.
Studies revealed that depressive mood disorders and apathy are not observed severely in patients suffering from early-onset dementia. The results were recorded as 14% on the ADL, 13% on the GDS and 9% on the MMSE.
The investigation suggests that symptoms of depressive mood and apathy are seen more extensively in patients with early-onset dementia than in elderly patients, revealing the severity of aggression.
Theme 2: Quality of care received in nursing home vs. home care.
Author, Year, Location
1. Ehrlich et al., 2006.
Home health care management and practice
Caring for the Frail Elderly in the Home: A Multidisciplinary Approach
The study relies on the Short Portable Mental Status Questionnaire, proposed by Pfeiffer in 1975, to identify dementia in the geriatric population.
The test is confined to recall, short-term and long-term memory, and orientation.
Additionally, evidence-based practice is applied to screen patients for the disease.
The screening methodology enabled identification of patients suffering from dementia. An interdisciplinary approach is then applied to propose a model for caring for elderly people in the home environment.
The primary research article focussed on the major disorders affecting the elderly person, which form the basis for joining a nursing home for care. The interdisciplinary approach gives an idea of caring for an older patient, starting from the most pressing syndromes, to enable ease of treatment in the house without the intervention of a nursing home.
2. Milke et al., 2006. Journal of Applied Gerontology.
Meeting the Needs in Continuing Care of Facility-Based Residents Diagnosed With Dementia: Comparison of Ratings by Families, Direct Care Staff, and Other Staff
Data were collected by sampling in five different places, including Edmonton (Alberta, Canada), Pennsylvania and New York. A total of 184 elderly residents diagnosed with dementia and 197 nursing staff participated in the study. Suitably tailored questionnaires were distributed across the five sites among non-direct care staff, the family group, the direct care group and licensed practical nurses.
The results of the investigation provided a comparison of care between families, direct caregivers, and other staff and volunteers, and gave an idea of the extent of care residents need from each class. The results also allow comparison of care in a nursing home with care in the patient's own house.
The research work emphasises the individual care provided by various groups such as families, friends, licensed practical nurses, volunteers and other professional caregivers. It describes the relation of trust and cooperation between the patient and caregivers, and focuses on the negative aspects of care in the patient's house created by stress, pressure and miscommunication.
The article gives insight into the advantages of care provided by professional caregivers over family members in terms of knowledge, patience and quality.
Theme 3: Impact of elderly people joining nursing home at an early stage.
Author, Year, Location
Connor et al., 1991. Papers from the British Medical Journal.
Does early intervention reduce the number of elderly people with dementia admitted to institutions for long term care?
The investigation used seven general practice areas in Cambridge in the form of a controlled clinical trial. 2,885 subjects aged over 75 were involved; 159 were diagnosed with initial-stage dementia, of whom 86 required extra support and 73 had access to the usual services and acted as controls.
The research revealed no direct contribution of early intervention to long-term admission of patients to a nursing home. 9 of the 14 subjects who had been living at home without support joined a nursing home because of the extended facilities available.
The investigation focussed on screening procedures to identify the level of severity of the disorder in patients. In certain patients the severity is high, requiring immediate admission to a nursing home; in other cases support by family members is sufficient. The evidence also records that early intervention in dementia decreases the risk of the disease becoming severe.
Theme 4: Importance of nursing home in elderly patients in the early stage of dementia.
Author, Year, Location
1. Voyer et al., 2005. Clinical Effectiveness in Nursing.
Characteristics of institutionalized older patients with delirium newly admitted to an acute care hospital
2. Dettmore et al., 2009. Geriatric Nursing.
Aggression in Persons with Dementia: Use of Nursing Theory to Guide Clinical Practice
The first investigation involves a cross-sectional secondary analysis of old patients in nursing homes and other health care units. The Confusion Assessment Method was used to test patients for delirium upon admission.
The second research work uses the Need-driven Dementia-compromised Behavior (NDB) model to explain aggression in individuals undergoing constant care in a nursing home.
Of the 104 patients suffering from cognitive impairment, 68% were recorded as having delirium. The MMSE scale was used to screen patients for the presence of delirium. The major symptoms observed across patients were bowel incontinence and illness; the most uncommon symptom, occurring rarely, was hearing impairment.
A clinical management algorithm was framed in accordance with the NDB model to study the behaviour of aggressive patients and to frame a theory for caring for patients in aggressive moods and avoiding repetition of the syndrome.
The research emphasises the importance of nursing homes in providing care and offering screening for the disorder. The severity of cognitive impairment does not influence preventive nursing interventions: independent of the level of impairment, nursing care plays an important part in improving the quality of life of patients requiring close care.
Patients suffering from dementia undergo frequent episodes of aggressive behaviour, making care by professional caregivers difficult. The paper proposes a clinical management algorithm, based on the NDB model, to manage a patient's aggressive episodes.
3. Holliday-Welsch et al., 2009
Geriatric nursing
Massage in the Management of Agitation in Nursing Home Residents with Cognitive
The study was performed on subjects identified by the nursing staff as susceptible to agitation and aggressive mood. The susceptible patients were selected using the Minimum Data Set (MDS) report. Data were collected over 3 days as a baseline; the intervention then followed for another 6 days, continued by follow-up over the next few days.
It was observed that subjects' agitation was lower during the massage intervention than at baseline, and remained low at follow-up. Wandering, verbal agitation, physical agitation and care resistance were all shown to decrease with the massage intervention.
The aspects of agitation studied were wandering, verbal agitation, physical agitation, abusiveness, socially inappropriate agitation and disruptive aggressiveness; at each observation, agitation was scored five times. Massage is a non-pharmacological intervention for patients suffering from agitation, and could be used as an effective tool by nursing staff in delivering quality care.
All the themes identified in the research play a part in delivering a conclusion that answers my research question. The themes are arranged sequentially so as to conclude on the appropriate care for elderly dementia patients at an early stage of the disorder. Each of the 10 selected articles carries an important examination which forms the basis for future implications for nursing staff.
The first theme in my results is the facilities and care available in nursing homes using a multidisciplinary approach. This theme identifies the importance of the nursing home as an institutionalised care centre offering support and care to all kinds of dementia patients. The study by J. Cohen-Mansfield and A. Parpura-Gill (2008) describes the workings of nursing homes, including the care provided by the nursing staff. The paper focuses on improving these facilities so as to improve the style of the nursing home, which ultimately determines quality. The authors regard flexibility, knowledge and communication as essential factors influencing care by nursing professionals. Along with characteristics of the staff, they also identify certain institutional factors which serve as tools for improvement (Beck et al., 1999). The most important institutional factors include the timing of care, alternatives of care, and resident and family involvement (Porras, 1987; Kanter, 1993). The framework of a nursing home is changed regularly on the basis of organisational and staff needs to ensure a better system for staff, especially in the case of dementia, where a care specialist is needed to monitor a group of care providers (Noelker and Harel, 2001).
The investigation based on the study of E. Finnema et al. (2005) portrays the role of emotion-oriented care in patients suffering from mild to moderate dementia in nursing homes. The authors describe the role of emotion-oriented care in influencing adaptation and balance in dementia patients in the early stages (Finnema et al., 2000). General health was also shown to improve, especially in c

Active Vs Physiological Management of Third Stage of Labour

This essay is primarily concerned with the arguments that are currently active in relation to the benefits and disadvantages of having either an active or passive third stage of labour. We shall examine this issue from several angles including the currently accepted medical opinions as expressed in the peer reviewed press, the perspective of various opinions expressed by women in labour and the evidence base to support these opinions.


It is a generally accepted truism that if there is controversy surrounding a subject, then this implies that there is not a sufficiently strong evidence base to settle the argument one way or the other. (De Martino B et al. 2006). In the case of this particular subject, this is possibly not true, as the evidence base is quite robust (and we shall examine this in due course).
Midwifery deals with situations that are steeped in layers of strongly felt emotion, and this has a great tendency to colour rational argument. Blind belief in one area often appears to stem from total disbelief in another (Baines D. 2001) and in consideration of some of the literature in this area this would certainly appear to be true.
Let us try to examine the basic facts of the arguments together with the evidence base that supports them.
It is estimated that approximately 515,000 women worldwide currently die annually from problems directly related to pregnancy (extrapolated from Hill K et al. 2001). The largest single category of such deaths occurs within 4 hrs of delivery, most commonly from post partum haemorrhage and its complications (AbouZahr C 1998), the most common factor in such cases being uterine atony (Ripley D L 1999). Depending on the area of the world (which tends to determine the standard of care and resources available), post partum haemorrhage deaths constitute between 10 and 60% of all maternal deaths (AbouZahr C 1998). Statistically, the majority of such maternal deaths occur in the developing countries, where women may receive inappropriate, unskilled or inadequate care during labour or the post partum period (PATH 2001). In developed countries the vast majority of these deaths could be (and largely are) avoided with effective obstetric intervention (WHO 1994). One of the central arguments that we shall deploy in favour of the active management of the third stage of labour is that relying on the identification of risk factors for women at risk of haemorrhage does not appear to decrease the overall figures for post partum haemorrhage morbidity or mortality, as more than 70% of cases of post partum haemorrhage occur in women with no identifiable risk factors (Atkins S 1994).
Prendiville, in his recently published Cochrane review (Prendiville W J et al. 2000) states that:
where maternal mortality from haemorrhage is high, evidence-based practices that reduce haemorrhage incidence, such as active management of the third stage of labour, should always be followed
It is hard to rationally counter such an argument, particularly in view of the strength of the evidence base presented in the review, although we shall finish this essay with a discussion of a paper by Stevenson which attempts to provide a rational counter argument in this area.
It could be argued that the management of the third stage of labour, as far as formal teaching and published literature is concerned, is eclipsed by the other two stages (Baskett T F 1999). Cunningham agrees with this viewpoint with the observation that a current standard textbook of obstetrics (unnamed) devotes only 4 of its 1,500 pages to the third stage of labour but a huge amount more to the complications that can arise directly after the delivery of the baby (Cunningham, 2001). Donald makes the comment “This indeed is the unforgiving stage of labour, and in it there lurks more unheralded treachery than in both the other stages combined. The normal case can, within a minute, become abnormal and successful delivery can turn swiftly to disaster.” (Donald, 1979).
Chapter 1: Defining the third stage of labour
The definition of the third stage of labour varies between authorities in terms of wording, but in functional terms there is general agreement that it is the part of labour that starts directly after the birth of the baby and concludes with the successful delivery of the placenta and the foetal membranes.
Functionally, it is during the third stage of labour that the myometrium contracts dramatically; this causes the placenta to separate from the uterine wall and subsequently to be expelled from the uterine cavity. This stage can be managed actively or observed passively. Practically, it is the speed with which this stage is accomplished that effectively dictates the volume of blood eventually lost. It follows that if anything interferes with this process, the risk of increased blood loss grows. If the uterus becomes atonic, the placenta does not separate efficiently and the blood vessels that formerly supplied it are not actively constricted (Chamberlain G et al. 1999). We shall discuss this process in greater detail shortly.
Proponents of passive management of the third stage of labour rely on the normal physiological processes to shut down the bleeding from the placental site and to expel the placenta. Those who favour active management use three elements of management. One is the use of an ecbolic drug given in the minute after delivery of the baby and before the placenta is delivered. The second element is early clamping and cutting of the cord and the third is the use of controlled cord traction to facilitate the delivery of the placenta. We shall discuss each of these elements in greater detail in due course. The rationale behind active management of the third stage of labour is basically that by speeding up the natural delivery of the placenta, one can allow the uterus to contract more efficiently thereby reducing the total blood loss and minimising the risk of post partum haemorrhage. (O’Driscoll K 1994)
Optimal practice
Let us start our consideration of optimal practice with a critical analysis of the paper by Cherine (Cherine M et al. 2004) which takes a collective overview of the literature on the subject. The authors point to the fact that there have been a number of large scale randomised controlled studies which have compared the outcomes of labours which have been either actively or passively managed. One of the biggest difficulties that they experienced was the inconsistency of terminology on the subject, as a number of healthcare professionals had reported management as passive when there had been elements of active management such as controlled cord traction and early cord clamping.
As an overview, they were able to conclude that actively managed women had a lower prevalence of "post partum haemorrhage, a shorter third stage of labour, reduced post partum anaemia, less need for blood transfusion or therapeutic oxytocics" (Prendiville W J et al. 2001). Other findings derived from the paper include the observation that administration of oxytocin before delivery of the placenta (rather than afterwards) was shown to decrease the overall incidence of post partum haemorrhage, the overall amount of blood loss, the need for additional uterotonic drugs, and the need for blood transfusion, compared with deliveries of similar third-stage duration serving as controls. In addition, they noted no increased incidence of retained placenta (Elbourne D R et al. 2001). The evidence base for these findings is both robust and strong. On the face of it, there seems therefore little to recommend the adoption of passive management of the third stage of labour.
Earlier we noted the difficulties in defining active management of the third stage of labour. In considering any individual paper where interpretation of the figures is required, great care must therefore be taken in assessing exactly what is being measured and compared. Cherine points to the fact that some respondents categorised their management as "passive management of the third stage of labour" when in reality they had used some aspect of active management, even though they had not used ecbolic drugs (this was found to be the case in 19% of the deliveries considered). This point is worth considering further, as oxytocin was given to 98% of the 148 women in the trial who received an ecbolic. In terms of optimum management, 34% received the ecbolic at the appropriate time (specified in the management protocols as before the delivery of the placenta and within one minute of the delivery of the baby). For the remaining 66% it was given incorrectly, either after the delivery of the placenta or, in one case, later than one minute after the delivery of the baby.
Further analysis of the practices reported that where uterotonic drugs were given, cord traction was not done in 49%, and early cord clamping not done in 7% of the deliveries observed where the optimum active management of the third stage of labour protocols were not followed.
From an analytical point of view, we should cite the evidence base to suggest the degree to which these two practices are associated with morbidity.
Walter P et al. (1999) state that their analysis of their data shows early cord clamping and controlled cord traction to be associated with a shorter third stage and lower mean blood loss, whereas Mitchell G G et al. (2005) found them to be associated with a lower incidence of retained placenta.
Other considerations relating to the practice of early cord clamping are that it reduces the degree of mother-to-baby blood transfusion. It is clear that giving uterotonic drugs without early clamping will cause the myometrium to contract and physically squeeze the placenta, thereby accelerating both the speed and the total quantity of the transfusion. This has the effect of upsetting the physiological balance of the blood volume between baby and placenta, and can cause a number of undesirable effects in the baby, including an increased tendency to jaundice (Rogers J et al. 1998).
The major features that are commonly accepted as being characteristic of active management and passive management of the third stage of labour are set out below.
Physiological Versus Active Management

Uterotonic (ecbolic) drug: none, or given after the placenta is delivered (physiological); given with delivery of the anterior shoulder or of the baby (active).
Uterine fundus: assessment of size and tone (physiological); assessment of size and tone (active).
Cord traction: not used (physiological); application of controlled cord traction* when the uterus has contracted (active).
Cord clamping: performed early in active management.

(After Smith J R et al. 1999)
Physiology of the third stage
The physiology of the third stage can only be realistically considered in relation to some of the changes which occur in the preceding months of pregnancy. The first significant consideration is the change in haemodynamics as the pregnancy progresses: the maternal blood volume increases by a factor of about 50%, from about 4 litres to about 6 litres (AbouZahr C 1998).
This is due to a disproportionate increase in plasma volume over red cell volume, seen clinically as a physiological fall in both haemoglobin and haematocrit values. Supplemental iron can reduce this fall, particularly if the woman concerned has poor iron reserves or was anaemic before the pregnancy began. The evolutionary physiology behind this change is that the placenta (or more accurately the utero-placental unit) has low-resistance perfusion demands, which are better served by a high circulating blood volume; the extra volume also provides a buffer for the inevitable blood loss that occurs at the time of delivery (Dansereau J et al. 1999).
The high progesterone levels encountered in pregnancy are also relevant insofar as they tend to reduce general vascular tone, thereby increasing venous pooling. This in turn reduces venous return to the heart and would, if not compensated for by the increased blood volume, lead to hypotension, which would contribute to reduced levels of foetal oxygenation (Baskett T F 1999). Coincident and concurrent with these haemodynamic changes are a number of physiological changes in the coagulation system.
There is a sharp increase in the quantity of most of the clotting factors in the blood and a functional decrease in fibrinolytic activity (Carroli G et al. 2002). Platelet levels are observed to fall, which is thought to be due to a combination of haemodilution and a low-level increase in platelet utilisation; the overall functioning of the platelet system is rarely affected. All of these changes are mediated by the dramatic increase in the levels of circulating oestrogen. The relevance of these considerations is clear when we consider that one of the main hazards facing the mother during the third stage of labour is haemorrhage (Soltani H et al. 2005), to which the haemodynamic changes are central.
The other major factor in our considerations is the efficiency of the haemostasis produced by the uterine contraction in the third stage of labour. The prime agent in the immediate control of blood loss after separation of the placenta is uterine contraction, which can exert a physical pressure on the arterioles to reduce immediate blood loss. Clot formation and the resultant fibrin deposition, although they occur rapidly, only become functional after the coagulation cascade has been triggered and has progressed. Once operative, however, this secondary mechanism becomes dominant in securing haemostasis in the days following delivery. (Sleep, 1993).
The uterus both grows and enlarges as pregnancy progresses under the primary influence of oestrogen. The organ itself changes from a non-gravid weight of about 70 g and cavity volume of about 10 ml to a fully gravid weight of about 1.1 kg and a cavity capacity of about 5 litres. This growth, together with the subsequent growth of the feto-placental unit, is fed by the increased blood volume and blood flow through the uterus which, at term, is estimated to be about 500-800 ml/min, or approximately 10-15% of the total cardiac output
(Thilaganathan B et al. 1993). It can therefore be appreciated why haemorrhage is a significant potential danger in the third stage of labour with potentially 15% of the cardiac output being directed towards a raw placental bed.
The physiology of the third stage of labour also involves the mechanism of placental expulsion. After the baby has been delivered, the uterus continues to contract rhythmically and this reduction in size causes a shear line to form at the utero-placental junction. This is thought to be mainly a physical phenomenon, as the uterus is capable of contraction whereas the placenta (being devoid of muscular tissue) is not. We should note a characteristic of the myometrium which is unique in the animal kingdom: the ability of the myometrial fibres to maintain their shortened length after each contraction and then to contract further with subsequent contractions. This characteristic results in a progressive and (normally) fairly rapid reduction in the overall surface area of the placental site. (Sanborn B M et al. 1998)
In the words of Rogers (J et al. 1998), by this mechanism “the placenta is undermined, detached, and propelled into the lower uterine segment.”
Other physiological mechanisms also come into play in this stage of labour. Placental separation also occurs by virtue of the physical separation engendered by the formation of a sub-placental haematoma. This is brought about by the dual mechanisms of venous occlusion and vascular rupture of the arterioles and capillaries in the placental bed and is secondary to the uterine contractions (Sharma J B et al. 2005). The physiology of the normal control of this phenomenon is both unique and complex. The structure of the uterine side of the placental bed is a latticework of arterioles that spiral around and in between the meshwork of interlacing and interlocking myometrial fibrils. As the myometrial fibres progressively shorten, they effectively actively constrict the arterioles by kinking them. Baskett (T F 1999) refers to this action and structure as the “living ligatures” and “physiologic sutures” of the uterus.
These dramatic effects are triggered and mediated by a number of mechanisms. The actual definitive trigger for labour is still a matter of active debate, but we can observe that the myometrium becomes significantly more sensitive to oxytocin towards the end of the pregnancy and the amounts of oxytocin produced by the posterior pituitary gland increase dramatically just before the onset of labour. (Gülmezoglu A M et al. 2001)
It is known that the F-series (and some other) prostaglandins are equally active and may have a role to play in the genesis of labour. (Gulmezoglu A M et al. 2004)
From an interventional point of view, we note that a number of synthetic ergot alkaloids are also capable of causing sustained uterine contractions. (Elbourne D R et al. 2002)
Active Management: Criteria and Implications for Mother and Fetus
This essay is asking us to consider the essential differences between active management and passive management of the third stage of labour. In this segment we shall discuss the principles of active management and contrast them with the principles of passive management.
Those clinicians who practice the passive management of the third stage of labour put forward arguments that mothers have been giving birth without the assistance of the trained healthcare professionals for millennia and, to a degree, the human body is the product of evolutionary forces which have focussed upon the perpetuation of the species as their prime driving force. Whilst accepting that both of these concepts are manifestly true, such arguments do not take account of the “natural wastage” that drives such evolutionary adaptations. In human terms such “natural wastage” is simply not ethically or morally acceptable in modern society. (Sugarman J et al. 2001)
There may be some validity in the arguments that natural processes will achieve normal separation and delivery of the placenta and may lead to fewer complications and if the patient should suffer from post partum haemorrhage then there are techniques, medications and equipment that can be utilised to contain and control the clinical situation. Additional arguments are invoked that controlled cord traction can increase the risk of uterine inversion and ecbolic drugs can increase the risks of other complications such as retained placenta and difficulties in delivering an undiagnosed twin. (El-Refaey H et al. 2003)
The proponents of active management counter these arguments by suggesting that the use of ecbolic agents reduces the risk of post partum haemorrhage, produces faster separation of the placenta, and reduces maternal blood loss. Inversion of the uterus can be avoided by using only gentle controlled cord traction when the uterus is well contracted, together with the controlling of the uterus by the Brandt-Andrews manoeuvre.
The arguments relating to the undiagnosed second twin are losing ground as this eventuality is becoming progressively more rare. The advent of ultrasound, together with the advent of protocols which call for the mandatory examination of the uterus after the birth and before the administration of the ecbolic agent, effectively minimises this possibility. (Prendiville, 2002).
If we consider the works of Prendiville (referred to above), we note the meta-analyses done of the various trials comparing active management against passive management of the third stage of labour, and find that active management consistently leads to several benefits when compared to passive management, the most significant of which are set out below.
Benefits of Active Management Versus Physiological Management

Outcome                    Control Rate, %    Relative Risk    95% CI*    NNT†
PPH >500 mL                –                  –                –          –
PPH >1000 mL               –                  –                –          –
Blood transfusion          –                  –                –          –
Therapeutic uterotonics    –                  –                –          –

*95% confidence interval †Number needed to treat
(After Prendiville, 2002).
The statistics obtained make for interesting consideration. From these figures we can deduce that for every 12 patients receiving active management (rather than passive management) one post partum haemorrhage is avoided, and further extrapolation suggests that for every 67 patients managed actively one blood transfusion is avoided.
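The arithmetic behind these deductions is the standard "number needed to treat" (NNT) calculation: the NNT is the reciprocal of the absolute risk reduction (ARR) between the control and actively managed groups. A minimal sketch, using hypothetical event rates (the `nnt` function and the rates below are illustrative stand-ins, not figures drawn from Prendiville's meta-analysis):

```python
# Illustrative "number needed to treat" (NNT) arithmetic.
# NNT = 1 / ARR, where ARR is the absolute risk reduction
# (control event rate minus treated event rate).

def nnt(control_rate: float, treated_rate: float) -> float:
    """Return the number needed to treat for the given event rates."""
    arr = control_rate - treated_rate
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1.0 / arr

# Hypothetical rates: an ARR of ~8.3 percentage points gives an NNT
# of ~12 (one post partum haemorrhage avoided per 12 patients), and
# an ARR of ~1.5 points gives an NNT of ~67 (one transfusion avoided
# per 67 patients).
print(round(nnt(0.138, 0.055)))  # prints 12
print(round(nnt(0.023, 0.008)))  # prints 67
```

The point of the sketch is simply that modest-sounding absolute differences in event rates translate directly into the "one event avoided per N patients" figures quoted above.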
With regard to the assertions relating to problems with a retained placenta, there was no evidence to support them; indeed, the figures showed that there was no increase in the incidence of retained placenta. Equally, it was noted that the third stage of labour was significantly shorter in the actively managed group.
In terms of significance for the mother there were some negative findings in relation to active management, and these included a higher incidence of raised blood pressure post delivery (the criterion used being > 100 mm Hg). Higher incidences of reported nausea and vomiting were also found, although these were apparently related to the use of ergot ecbolics and not to oxytocin. This is possibly a reflection of the fact that ergot acts on all smooth muscle (including the gut) whereas the oxytocin derivatives act only on uterine muscle. (Dansereau, 1999).
None of the trials included in the meta-analysis reported any incidence of either uterine inversions or undiagnosed second twins. Critical analysis of these findings would have to consider that one would have to envisage truly enormous study cohorts in order to obtain statistical significance with these very rare events. (Concato, J et al. 2000)
With specific regard to the mother and baby we note some authors recommend the use of early suckling as nipple stimulation is thought to increase uterine contractions and thereby reduce the likelihood of post partum haemorrhage. Studies have shown that this does not appear to be the case (Bullough, 1989), although the authors suggest that it should still be recommended as it promotes both bonding and breastfeeding.
The most important element of active management of the third stage of labour is the administration of an ecbolic agent directly after the delivery of the anterior shoulder or within a minute of the complete delivery of the baby. The significance of the anterior shoulder delivery is that if the ecbolic is given prior to delivery of the anterior shoulder then there is a significantly increased risk of shoulder dystocia which, with a strongly contracting uterus, can be technically very difficult to reduce and will have significant detrimental effects on the baby by reducing its oxygen supply from the placenta still further. The fundal height should be assessed immediately after delivery to exclude the possibility of an undiagnosed second twin. (Sandler L C et al. 2000)
There are a number of different (but widely accepted) protocols for ecbolic administration. Commonly, 10 IU of oxytocin is given intramuscularly or occasionally a 5 IU IV bolus. Ergot compounds should be avoided in patients who have raised blood pressure, migraine and Raynaud’s phenomenon. (Pierre, 1992).
The issue of early clamping of the cord is complex and, of the three components of the active management of the third stage of labour this, arguably, gives rise to the least demonstrable benefits in terms of the evidence base in the literature.
We have already discussed the increased incidence of postnatal jaundice in the newborn infant if cord clamping is delayed, but this has to be offset against both the occasional need to invoke prompt resuscitation measures (i.e. cord around the neck) and the reduction in the incidence of childhood anaemia and higher iron stores (Gupta, 2002). In a very recent paper, Mercer also points to the lower rates of neonatal intraventricular haemorrhage, although it has to be said that the evidence base is less secure in this area. (Mercer J S et al. 2006)
Other foetal issues are seldom encountered in this regard, except for the comparatively rare occurrence when some form of dystocia occurs and the infant has to be manipulated and re-presented (viz. the Zavanelli procedure). If the cord has already been divided then this effectively deprives the infant of any possibility of placental support while the manoeuvre is being carried out, with consequences that clearly could be fatal. (Thornton J G et al. 1999)
In the recent past, the emergence of the practice of harvesting foetal stem cells from the cord blood may also have an influence on the timing of the clamping but this should not interfere with issues relating to the clinical management of the third stage. (Lavender T et al. 2006)
There are some references in the literature to the practice of allowing the placenta to exsanguinate after clamping of the distal portion, as some authorities suggest that this may aid in both separation (Soltani H et al. 2005) and delivery (Sharma J P et al. 2005) of the placenta. It has to be noted that such references are limited in their value to the evidence base and perhaps it would be wiser to consider this point unproven.
We have searched the literature for trials that consider the effect of controlled cord traction without the administration of ecbolic drugs. The only published trial on the issue suggested that controlled cord traction, when used alone to deliver the placenta, had no positive effect on the incidence of post partum haemorrhage (Jackson, 2001). The same author also considered the results of the administration of ecbolic agents directly after placental delivery and found that the results (in terms of post partum haemorrhage at least) were similar to those obtained with ecbolics given with the anterior shoulder delivery, although an earlier trial (Zamora, 1999) showed that active management (as above) did result in a statistically significant reduction in the incidence of post partum haemorrhage when compared to controlled cord traction and ecbolics at the time of placental delivery.
In this segment we should also consider the situation where the atonic uterus (in passive management of the third stage of labour) can result in the placenta becoming detached but remaining at the level of the internal os. This can be clinically manifest by a lengthening of the cord but no subsequent delivery of the placenta. In these circumstances the placental site can continue to bleed and the uterus can fill with blood, which distends the uterus and thereby increases the tendency for the placental site to bleed further. This clearly has very significant implications for the mother. (Neilson J et al. 2003)
There are other issues which impact on foetal and maternal wellbeing in this stage of the delivery, but these are generally not features of the active or passive management of the third stage of labour and therefore will not be considered further.
There are a number of other factors which can influence the progress of the third stage of labour and these can be iatrogenic. Concurrent administration of some drugs can affect the physiology of the body in such a way as to change the way it responds to normal physiological processes. On a first principles basis, one could suggest that, from what we have already discussed, any agent that causes relaxation of the myometrium or a reduction in uterine tone could potentially interfere with the efficient contraction of the uterine musculature in the third stage and thereby potentially increase the incidence of post partum haemorrhage.
Beta-agonists (the sympathomimetic group) work by relaxing smooth muscle via the beta-2 pathway. The commonest of these is salbutamol. When given in its usual form of an inhaler for asthma, the blood levels are very small indeed and therefore scarcely clinically significant but higher doses may well exert a negative effect in this respect. (Steer P et al. 1999)
The NSAIA group have two potential modes of action that can interfere with the third stage. Firstly they have an action on the platelet function and can impair the clotting process which potentially could interfere with the body’s ability to achieve haemostasis after placental delivery. (Li D-K et al. 2003)
Secondly, their main mode of therapeutic action is via the prostaglandin pathway (inhibitory action) and, as such, they are often used for the treatment of uterine cramping, dysmenorrhoea and post delivery afterpains. (Nielsen G L et al. 2001)
They achieve their effect by reducing the ability of the myometrium to contract and, as such, clearly are contraindicated when strong uterine contractions are required, both in the immediate post partum period and if any degree of post partum haemorrhage has occurred.
Other commonly used medications can also interfere with the ability of the myometrium to contract. The calcium antagonist group (e.g. nifedipine) are able to do this (Pittrof R et al. 1996) and should therefore be changed for an alternative medication if their cardiovascular effects need to be maintained. (Khan R K et al. 1998)
We should also note that some anaesthetic agents can inhibit myometrium contractility. Although they are usually of rapid onset of action, and therefore rapid elimination from the body, they may still be clinically significant if given at the time of childbirth for some form of operative vaginal delivery. (Gülmezoglu A et al. 2003)
Legal and Ethical Issues for the Midwife
Many of the legal and ethical issues in this area revolve around issues of consent, which we shall discuss in detail shortly, and competence.
Professional competence is an area which is difficult to define and which is evolving as the status of the midwife, together with the technical expectations placed upon her, increases with the advance of technology.

The global stage

Global Media: Foreign Media Organizations in China
In the globalization age, information flow, just like commodity flow and capital flow, increasingly takes place on the global stage. Global media and communication, although they certainly have their flaws, have become a prominent phenomenon today. This constitutes the international environment in which the current development of media in China takes place.

Globalization and Global Media
When communications satellites and computer networks took off in the early 1990s, the world found itself faced with a new generation of media technology which undermined not only geographical distances but also national borders. Fueled by a wave of communications policy deregulation, changes in the media industries soon led to the belief that the whole world was now linked by global media which transmit messages in split seconds to audiences everywhere, including those living in the most remote corners of the world. The era of “global” media was thus pronounced to have arrived.
In recent years people have come to witness interesting, albeit somewhat puzzling, developments in the world of media, specifically the transnationalization of national, or even local, media in many parts of the world. These developments have painted a media landscape that is quite different from what people used to be familiar with.
In the discourse of globalization, there does not seem to be a generally accepted definition of the term. On many different occasions the term has been defined as the free worldwide flow of production elements and resources, or as a borderless or stateless economy. It is also widely viewed as the cultural, political, and economic integration of the whole world. Even before the term “globalization” became a catchword in the academic and popular vocabulary in the 1990s, global operations and transnational corporations in many industries had long aroused academic attention. In addition, many scholars have long noticed the connection between the media and globalization. For instance, McLuhan, a media theorist, was claimed to have suggested their connection “by combining ‘the medium is the message’ with his ‘global village’” (Rantanen, 2004:1).
Globalization suggests simultaneously two views of culture. The first, taking a monoculturalist point of view, treats globalization as the “extension outward of a particular culture to its limits, the globe,” through a process of conquest, homogenization and unification brought about by the consumption of the same cultural and material products (Featherstone, 1995:6). The second one, adopting a multiculturalist stand, perceives globalization as the “compression of cultures” (Featherstone, 1995:6).
While the meaning of globalization remains ambiguous, “global media” and “media globalization” have quickly become clichés in media studies. Two questions can be raised about the use of such terms, however. First, what is meant by a globalized media industry, and secondly, can we assume that a genuine globalization of the industry has already taken place? More precisely, what is the direction of the changes that we can observe now: globalization, localization, or something else?
Too often when the term “global” is used in conjunction with the media, it refers primarily to the extent of coverage, with the popularity of satellite television and computer networks serving as evidence of the globalization of communications. However, the linkages brought about by the globalization process are largely confined to OECD and G7 member countries, which constitute one-third of the world population. And even when a medium, e.g. CNN, can put over 150 countries on its map, the rate of penetration and actual consumption can present rather a different picture. As Street (1997:77) has said, the fact that a product is available everywhere is no guarantee that it achieves the same level of popularity, let alone acquires the same significance, meaning or response (Featherstone, 1990:10). It is no secret that CNN’s audiences normally account for only a small fragment of a nation’s population.
But even with its conceptual flaws corrected, coverage is merely one of the important dimensions of the media industry. The meaning of a globalized industry would be seriously distorted if other dimensions were left out of the discussion. These dimensions, including the dynamics of the market, modes of production, the contents and messages transmitted, are closely related to the perception of the role and function of media in the globalization process, the direction of change in the industry, and ultimately, the cultural images presented by the theories of globalization. What roles and changes, then, should be expected to see in the media industries according to the monoculturalist point of view?
Media Development in China
Since the 1990s, with broadcasting and newspaper outlets already reaching large numbers, China has moved into a new stage of media development, prioritizing quality improvement, intensive management and operation rather than growth in numbers, and optimization of the industry structure. In the globalization context, with the goals of making its media more competitive and more effective in the mass media market and of strengthening the media industry, China has been pursuing a strategy of optimizing the media industry structure (Zhang, 2007:78). The country has closed down, combined, or transformed several media organizations that failed to satisfy the needs of market competition.
In recent years, China’s media development has also been mirrored in the adoption of the latest information technologies, most especially the Internet, by media organizations. In the late 1990s, media organizations in the country used computer technologies extensively. The fever in adopting Internet technologies was spurred primarily by factors such as the eagerness to embrace the worldwide trend towards building an information superhighway, the need to stay competitive with other media institutions, and the desire to grasp opportunities for a station’s or paper’s new development (Zhang, 2007:78).
The Chinese experience suggests a strong link between globalization and the enthusiasm of the media organizations to adopt Internet technologies. Starting from the year 2000, media in the country have maintained such desire to adopt state-of-the-art information and communications technologies. Along with the ever-increasing media websites, a new type of websites has emerged – sites jointly established and operated by several media institutions in a region (Zhang, 2007:78).
Presently, new media technologies are in the spotlight in the technological stage in China’s media industry. For example, CTP technology is widely used in the country’s newspaper industry. Digital TV and digital audio broadcasting have also emerged in China. Internet protocol TV (IPTV) is also one of the highlights in the current development of the Chinese TV industry. In the area of online outlets, news websites have become a recent type of media outlets in the country. News websites are composed of three levels: websites of large national media organizations, major provincial/municipal ones, and city-level ones (Zhang, 2007:79). Moreover, cross-media operations in mass communications constitute another important aspect of the development of media in China (Zhang, 2007:80).
Foreign Media
Since China entered the World Trade Organization (WTO) in 2001, the government has gradually opened its domestic media market. Because of the increasing degree of media openness, foreign media organizations have begun to enter the Chinese media market. Between October and December 2001, the government permitted three overseas TV channels to enter Guangdong province: Star TV, Phoenix Satellite Television, and CETV, which is owned by AOL Time Warner. This was the first time China allowed foreign channels to be carried on local cable and satellite systems.
Despite these limited entries, the event caused major ripples throughout the entire Chinese media industry. In addition, in October 2004, the State Administration of Radio, Film and Television (SARFT) enacted two regulation policies allowing foreign media into the country through more diverse formats (Chan and Ellis, 2005:1). This signalled a progressively more open media market for foreign capital in China, a significant trial for the entire country.
Due to the easing of regulation, foreign media organizations have started to swarm into China. In 2008, SARFT approved 33 foreign channels. Many broadcasting organizations had branches in Beijing, Guangdong, Shanghai, and Chongqing. These include Time-Warner, Sony, Disney, News Corp, and Viacom (China Business News, 2008:1). Shanghai, for example, houses competitive foreign media organizations like CNBC (US Cable Network), BBC (British Broadcasting Corporation), FBC (Italy Fact Based Communications), NHK (Japan Broadcasting Corp.), and SUNSET (France), with investment flowing from the United States, the United Kingdom, France, Switzerland, Germany, Japan, Singapore, and South Korea, among others.
According to neo-Marxists, who advocate a homogeneous world view, one of the major characteristics of globalization is that everyone has the feeling of being a member of one single society. The feeling, as described by Albrow (1990:8-10), is the sense of “the whole earth as the physical environment, where all are citizens, consumers and producers, possessed of a common sense interest in collective action to solve global problems.” The increasing interdependencies of nation-states have been cited as a major cause for nurturing such a feeling. Today the comforts and assurances of local communal experience are undermined by distant social forces.
Communications media, TV in particular, is an important factor in the compression of time and space. It constantly brings distant events and concerns to the homes and minds of people around the world as they happen. This constitutes an intrusion of distant events into everyday consciousness. However, this compression of time and space is not without its limits. As pointed out by Mittelman (1996:229), capital and technology flows must eventually “touch down” in distinct places. These places, in contrast to the global phenomenal world, are where everyone lives his or her local life.
To human beings, wanting a place where one feels a sense of belonging is natural. However, such a sense of place is cultural, as has been pointed out by Hall (1995:178). Despite the intrusion of distant social forces, feelings and perception of people about their environment remain closely associated with the memories and personal ties they have, together with the social, cultural, and even geographical and climatic setting of their environments. The emphasis on what is called a “local culture” is “the taken-for-granted, habitual and repetitive nature of the everyday culture of which individuals have a practical mastery” (Featherstone, 1995:92). This and the cultural forms, the common language, shared knowledge and experiences associated with a place, are the essence of the concept of local culture.
Global political and economic factors and media technologies serve to compress, but not eliminate, time and space. In addition, the sense of place, something associated with the essence of a local culture, has become a major determinant in the restructuring of the world communications industry. To suggest that media globalization is no more than a part of a process of domination by Western media – and ultimately of the Westernization of world cultures – conflicts with the advocacy of Asian values in Asia and is reductionist.
To modify the monoculturalist image of culture, Featherstone (1995:6) suggested that globalization may be better considered as a “form, a space or field, made possible through improved means of communication in which different cultures meet and clash,” or simply “a stage for global differences.” According to him, this conception points directly to the fragmented and de-centered aspects of the globalization of culture, and in the meantime suggests greater cultural exchange and complexity.
One may argue that a multiculturalist view of globalization does not advocate the localization of transnational media as the only venue for communication as a platform for cultures to meet and clash. But powerful as the idea may be, this view does not offer a clear picture, nor an indication, of how the structure of the world cultural industries has changed and will change; how different it is from what we used to have; and how the ideals of “meeting/clashing points” may be achieved and professed. According to cultural and media imperialism theories, the demise of local cultures and cultural industries was predictable as a consequence of the importation of television programs. By the 1990s, however, it had become evident that these theories suffered from a lack of evidence.
Destructive or Constructive?
Since many foreign media organizations have penetrated China, it can be argued that Western media products transmitted in the process will challenge or damage the local culture. However, the impacts of these organizations in the local media market in China appear to be constructive, not damaging, to local cultural heritage.
Foreign programs offer great opportunities for reflexive awareness. Audiences do not just receive meanings passively; they are critical and active during the reception process. Watching Hollywood movies and foreign TV shows does not mean the local audience are becoming American. Instead, in theory, Chinese viewers form a reflexive awareness (who am I? or, who am I not?). Also, many studies in China and the rest of East Asia have suggested that watching overseas TV shows engenders opposition to foreign culture and thus evokes a protective attitude toward the local culture. While the purity of cultural identity remains a much debated issue, there is no denying that Chinese audiences are also reflexively considering their own identities when faced with the increased importation of foreign cultural products. Regional or national consciousness, rather than a homogeneous global identity, strengthens as exposure to foreign cultures increases.
In spite of some visible evidence of cultural homogenization in everyday life, such as westernization, it seems that people have a stronger sense of membership in their own groups (Morris, 2002:278). In addition, according to Harvey (1989:306), “localism and nationalism have become stronger precisely because of the quest for the security that place always offers”. There is, in fact, little evidence of “cultural abrasion”; instead there is an increasing protective attitude and reflexive awareness within the receiving nations (Varan, 1998:58).
The entry of foreign media organizations into China appears to be constructive, not destructive, when one views localization as a form of cultural adaptation. Cultural adaptation, in the mass media context, refers to a compromise strategy, such as adding Chinese subtitles to overseas programs. It also refers to an active devotion to the local culture on the part of the transnational media. Foreign media organizations not only provide Chinese subtitles to achieve high ratings; they also do research and make compelling content for the local audience in China.
In order to produce programs that fit Chinese culture and do not offend local sensibilities, many foreign media organizations actively immerse themselves in local cultures. They also try to penetrate the market by employing local production teams, including producers, directors, and performers, as well as original scripts. The content is produced to satisfy local tastes, full of cultural elements and traditional background. Thus what transnational media organizations have brought to the country seems not to be the threat of foreign or western cultural products, but a significant amount of foreign investment used to produce local cultural programs.
From the cultural point of view, the programs owned by foreign media organizations are unlikely to damage Chinese native norms or values, because many shows are produced with the local audience in mind and embed a strong traditional cultural background. From the economic point of view, the significant amount of foreign capital brought by the large media organizations can help the local media market prosper. Local media companies, in effect, benefit from the competition by cooperating with foreign companies.
The localization process of the media in China not only induces the indigenized strategies of global companies; it also induces globalized reactions from local media industries. The strategy of localization cannot be understood simply as a unidirectional flow of global power onto the local media industry. Mutual influence suggests a complex reciprocal interaction between the global and the local, taking into consideration the reverse effects that the local brings to the global.
For example, AOL/Time Warner promised to air CCTV's 9th channel through its cable network in Houston, Los Angeles, and New York. The channel carries music, news, travel and leisure, and nature programming, as well as Mandarin education specifically designed to spread traditional Chinese culture. The aim of such a move appears to be to reeducate Americans and Chinese Americans and to change their attitude toward China (Rowe, 2001:1). This shows that Chinese TV officials have already realized the significance of exporting Chinese programs. By borrowing resources from foreign media organizations, China is able to send out locally produced products, ideologically shaping western attitudes toward China and its culture.
It is also noteworthy that many other broadcasters throughout Asia have already begun to target overseas markets. For example, TVB has been serving Chinese-speaking subscribers in North America. Likewise, MBC, a Korean broadcaster, has established a channel aimed at Koreans in the US. Zee TV, a South Asian broadcaster, has also penetrated the US and the UK (Chadha and Kavoori, 2000:415).
As media globalization intensifies, the reaction of receiving societies is no longer limited to cultural resistance by local audiences. It has expanded to active competition among media organizations as well as the exportation of cultural products beyond China. Such activity, from China and from developing countries elsewhere in Asia, is likely to increase over the next few decades. With stronger-than-ever economic development, the exportation of media content will continue to grow.
It can be argued that there is actually no absolutely weak or strong culture in the media globalization trend. Every culture changes over time; no culture is exempt from this fact. In the economic arena, the “Third World” or “developing countries” category has constantly faced challenges and has been forced to change. The same is true in the media and cultural arena. No one culture in the world will be the stronger or weaker culture forever.
The influx of foreign media organizations into China has not yet threatened the local culture as seriously as many observers have proclaimed. There is a conscious effort among transnational media organizations to adapt culturally in order to produce programs that cater to local tastes and embed traditional culture. The local Chinese audience has a strong reflexive awareness, making its members active rather than passive viewers, which safeguards local culture.
Similarly, the local media industry is not passively waiting for the challenge; rather, local media organizations actively compete with the media conglomerates, borrowing their resources to promote Chinese culture outside China and to educate people around the world about it. Cultural hybridization is expected to be promoted by the strong influence of local responses. Recent exporting activities in China and other Asian nations suggest a novel reciprocal interaction between the global and the local.
What the globalization of media brings to China, and to the less developed countries in Asia, is not only difficult challenges but also many benefits. While eastern and western cultures grow increasingly interconnected because of the media globalization trend, local cultures are also given the opportunity to keep their own characteristics and independence. Overall, the consequences of media globalization for China seem to be constructive rather than destructive from both the cultural and the economic points of view.

The Pre-Listening Stage in English Listening Teaching

Language can be recognized as a medium of communication, rather than a simple complex of sound, vocabulary and grammar. English language teaching (ELT), therefore, has long been conducted through reading and listening as receptive skills, and speaking and writing as productive skills in communication. Among all these skills, listening is an essential component of language competence, as it underpins the comprehension of spoken language.


During the process, listening input is usually accompanied by other sounds and sometimes by visual input (Lynch & Mendelsohn, 2002). In making sense of the listening content, the context in which the communication happens and the listeners' relevant prior knowledge are vital (ibid). However, as many linguists have noted, listening was long neglected until the early 1970s (Morley, 2001; Brown, 1987; Rivers, 1966); only since then has listening attracted more interest from linguists and researchers. Therefore, as it is far less studied than the other fundamental skills, listening needs more research and is worth emphasizing in ELT.
II. An Overview of a Listening Lesson
In contemporary English language teaching and research, listening is becoming more and more important. Some researchers advocate applying listening strategies in classroom teaching and guiding students in how to listen (Mendelsohn, 1994; Field, 1998). Various listening approaches have also been suggested and tried out. Harmer (1987) reviewed some basic principles of receptive skills and stated that learners read and listen to language with purpose, desire and expectations. He further pointed out that a lead-in stage can create expectations and arouse the students' interest in the listening content that follows. Field (1998) proposed a diagnostic approach which involves pre-listening, listening and post-listening in a listening class; he asserts that this approach can check and adjust students' listening skills through short micro-listening exercises. According to Hedge (2000), a listening class can be divided into three stages: the pre-listening stage, the while-listening stage and the post-listening stage.
1. Pre-listening Stage
It is commonly recognized that pre-listening is the preparation for the listening class. In this stage, teachers try to arouse learners' expectations of, and interest in, the text they are going to hear. They can also motivate learners by providing background knowledge of the text, organizing learners to discuss a picture or a topic related to the text, asking questions related to the text, and so on. In general, pre-listening serves as a warm-up: the main aim of this stage is to focus learners' attention on the following while-listening stage and to reduce the difficulty of the text. Its importance also lies in its connection to, and support of, many other aspects, which will be discussed later.
2. While-listening Stage
While-listening is the main procedure of listening input. In this stage, learners are given audio materials to listen to and may be asked to deal with questions on them, such as Yes/No questions, cloze exercises, True/False questions and so on. Usually learners need to answer the questions as they listen or take notes of the main points of the materials. Teachers, as guides during this process, control the speed of the materials and the starting and pausing of the playback, raise questions for discussion, and give necessary explanations to help the learners comprehend the materials. Depending on the learners' language level and the difficulty of the materials, teachers can decide how many times to present the listening materials. The purpose of while-listening is to provide learners with audio input accompanied by exercises and thereby develop their listening competence.
3. Post-listening Stage
Post-listening is also an important stage, as it reviews and checks the efficiency and results of the listening. During this stage, teachers are not only supposed to check the answers; they also need to help the learners consolidate their comprehension of the listening input. They can organize further discussion of the listening text, explain new terms and phrases, sum up the language rules that appeared, and design related exercises for the learners to strengthen their grasp of the knowledge. In addition, a dictation based on a summary of the text can check all the different language points and the learners' mastery of them. Through the first two stages, learners have received a great deal of comprehensible input; the purpose of post-listening is therefore to turn this input into intake. In other words, the post-listening stage can be considered the transformation of language knowledge into language competence in listening teaching.
III. The Essentiality of Pre-listening in a Listening Class
Pre-listening, as the first stage of listening teaching, has long been debated by linguists and teachers with respect to its content and role. For example, some researchers (Buck, 1991; Cohen, 1984) suggested arranging a question preview in the pre-listening stage, on the grounds that it may guide the students' attention in the right direction. Others (Ur, 1984; Weir, 1993), on the contrary, argued that a question preview may distract learners from attending to the actual input. Hence, it is worthwhile to clarify the status of pre-listening in the classroom teaching of listening.
Before analyzing the role of pre-listening in a listening class, it is useful first to review the difficulties in teaching listening, so that the role of the pre-listening stage can then be discussed in more depth.
1. The Difficulties in Teaching Listening
Compared with other language skills, such as reading and writing, listening has some specific features which can put pressure on learners and make it difficult to deal with. They can be summarized as follows (Lynch & Mendelsohn, 2002; Thomson, 2005):

High frequency in communication. According to the investigation by Rivers and Temperley (1978), listening accounts for approximately 45% of an individual's daily communication.
Passivity. Listening is often regarded as a totally passive action in communication, although it is now more accurately seen as an active process (Lynch & Mendelsohn, 2002).
Speed and unrepeatability. Unlike reading, listening normally requires information to be processed instantly and usually only once. It is not as flexible as reading, where readers can refer back to the contents as many times as they like.
Other widely recognized natural characteristics. The process of listening calls on many other aspects of language knowledge, such as phonetics, vocabulary and grammar.

Owing to the above features, teaching listening involves a number of difficulties. According to Cherry (1957), in second and foreign language listening most of the difficulties are caused by “uncertainty”, which can arise in the areas of speech sounds and patterns, language and syntax, recognition of content, and other environmental influences. These difficulties can manifest themselves in different ways in the classroom teaching of listening:

Learners can feel anxious about a long text because they lack time to process the information.
An unfamiliar context and background can intimidate learners and make them lose interest and patience.
Learners may be hindered by new vocabulary, phonetic phenomena and grammatical structures, which can reduce their comprehension of the text.
When given a long audio passage, learners can find it difficult to concentrate on the important information.
Other elements in the listening process can also confuse learners, such as different accents, background noise, assimilation, etc.

2. The Functions of Pre-listening in a Listening Class
As discussed above, pre-listening can be recognized as the preparation and warm-up stage of the whole listening process. As some researchers (Rees, 2002; Peachey, 2002) review, there are a number of aims and types of pre-listening tasks that enable learners to deal with the following listening text smoothly and strategically, such as generating interest, building up confidence and facilitating comprehension. The functions of pre-listening are discussed in detail below.
(1) Motivating learners
It is often said that “interest is the best teacher”. Arousing students' interest is one of the most important conditions for successful teaching: only when students are interested in the content can the efficiency of teaching and learning be guaranteed. Therefore, the first role of pre-listening is to motivate learners.
Underwood (1989) summarizes a variety of ways in which pre-listening work can be carried out in the classroom. Some of them are suitable for motivating students:

Giving background information about the text.
Organizing the students to discuss the topic or situation of the upcoming text.
Showing a picture related to the content of the text.

To make the listening task interesting, the teacher can also tell the beginning of the text and provide some questions as a guideline for the students to guess the ending, or pick out some keywords for brainstorming.
(2) Activating current world knowledge and acquiring new knowledge
The main purpose of listening teaching is to impart language knowledge and help learners become competent listeners. Designing activities that activate learners' world knowledge will help them perform better in the listening. Moreover, pre-listening can also serve to introduce new language knowledge. It is therefore necessary and meaningful to introduce or review language knowledge in the pre-listening session.
There are a number of ways to make this part meaningful; depending on the content of the text, the teacher can:

List the new vocabulary and make sure the students know the meaning and pronunciation of each item.
Introduce phonetic phenomena which could affect comprehension, such as linking and elision.
Review complex grammar rules and introduce new sentence patterns, if any.
Briefly introduce some discourse knowledge.

(3) Setting context and predicting content
Rees (2002) emphasized the importance of setting the context in the pre-listening session, because even in exams learners have the chance to get a general idea of the listening materials. Knowing the context greatly helps learners predict what they are going to hear and form expectations about it, which is an important listening strategy for their future study.
Listening is a difficult and complex part of language learning. In foreign language teaching especially, where there is no natural language environment for practising, listening competence is even harder to develop. Thus, before being presented with a “long and horrible” text, acquiring some listening techniques (for instance, concentrating on stressed words, predicting information, etc.) can help students deal with the task.
(4) Checking the listening task
Checking that learners fully understand the task is important in pre-listening. In this procedure, the teacher is recommended to set some tasks based on the content of the text. The teacher can also confirm the task directly with the students, in case a misunderstanding arises and demotivates them.
In a specific classroom activity, the task could be one or two simple questions relating to the conclusion or an important point of the text. For example, if the text is about competing for a job, the task could be “Who got the job in the end?”; if it is about a manufacturing process, the task could be “How many procedures are needed to make xxx?”.
IV. The Appropriate Length of Pre-listening
Having analyzed the role and functions of pre-listening, its essentiality is beyond doubt, and it may seem worthwhile to spend a great deal of time and energy on this stage. However, the main process of the listening class must flow smoothly, and it does not make sense to spend too much time on pre-listening. The fundamental aim of pre-listening is to prepare learners to perform better in while-listening.
In fact, the length of pre-listening is not fixed for every listening class. As Rees (ibid) argues, pre-listening should take a “fair proportion” of a lesson, but how long it takes usually depends on the teacher's aim and the learners' language level. Likewise, depending on the characteristics of the text (length, difficulty, genre, etc.) and the level of the learners (beginner, intermediate, advanced, etc.), the type and length of pre-listening can vary. For example, if the content of the text is easy to understand, teachers need not spend much time teaching basic language knowledge; if the students are advanced learners, it is unnecessary to spend much time on the pre-listening part, because they already have a sufficient language basis and may be confident about what they are going to hear. On the contrary, if the learners are beginners, the pre-listening part should be longer. In addition, a very short listening task can be prepared by simply presenting several sentences to clarify the situation or give the necessary information, in which case the pre-listening can be very brief. Pre-listening is therefore quite flexible, and its length can be based on the specific aim and situation.
Analysis of the role of pre-listening in a listening lesson and of its relationship with the other two stages shows that well-arranged pre-listening activities are essential for listening comprehension.
V. Conclusion
Listening is an essential competence in language teaching and learning. In view of the features of listening teaching and the role of the pre-listening stage, it is vital to design and arrange appropriate pre-listening activities in a listening lesson. A well-planned pre-listening activity can prepare the students to deal with the listening text smoothly. It also helps to build up students' confidence and motivate them to listen. During the pre-listening process, teachers can take the opportunity to introduce world knowledge and language knowledge related to the text. Moreover, it contributes to making the whole listening lesson more effective and efficient. However, even though pre-listening plays a significant role in the whole listening process, this does not mean that it should occupy too much classroom time. The length of the pre-listening part can be flexible in different circumstances.
Based on the analysis of the features and aims of listening teaching and of the roles of the pre-listening, while-listening and post-listening stages respectively, a successful listening class is recommended to include the following elements:

The audio materials are appropriate for the learners in length, speed and difficulty.
The students are well motivated before listening to the text.
The aim and form of the listening task are made clear to the students.
The length of each stage is well arranged, and the stages are closely connected with each other.

As the old saying goes, “Well begun is half done”. As the warm-up for the formal listening process, pre-listening should be well organized and emphasized so as to play its role of stimulating students' motivation and expectations for the text. Hence, more investigation should focus on designing and optimizing pre-listening activities in order to facilitate listening teaching in ELT.

Microgrid Energy Management Using a Two Stage Rolling Horizon Technique

Abstract—In this paper, a new Energy Management (EM) strategy is proposed which uses a two-stage rolling horizon (RH) technique to control a battery energy storage system (BESS). The objectives of the control are to increase the self-consumption of the renewable energy resources (RES) and to minimize the daily cost of the energy drawn from the main electrical grid. Mixed integer linear programming (MILP) optimization is used as part of the RH technique to obtain the optimal control settings for the BESS. Using the RH technique and processing the control signals with two different time periods yields improved BESS settings that can overcome the errors associated with load prediction and operational constraints. Simulation results demonstrate that the proposed strategy can benefit MG customers and satisfy different market conditions.

Keywords—Energy management system, Microgrid, Mixed integer linear programming, Rolling Horizon, Optimization.

I.     Introduction

The growth of renewable energy sources (RES) in the electricity grid together with the increasing use of electricity for transport and heating, ventilation and air-conditioning requires a new vision for future transmission and distribution grids. The Global Smart Grid Federation report claims that the existing power grid networks are not equipped enough to meet the demands of the 21st century [1].

The increasing complexity and variability of generation sources introduce a new type of electric grid, which requires more innovation to solve its challenges, manage its operation, and control its expansion.

The concept of the Micro-grid (MG) is a promising and welcome idea that introduces engineered solutions to the challenges facing the electricity grid in an efficient, reliable, and economic way. Micro-grid Energy Management (MGEM) is the most important topic regarding MGs, particularly as it has to address both technical and commercial challenges [2].

There is much research focusing on MGEM. In [3], the design and experimental validation of an adaptable MGEM is implemented in an online scheme. In this case, the author aims to minimize the operating costs and the disconnection of loads by proposing an architecture that allows the interaction of forecasting, measurement and optimization modules, in which a generic generation-side mathematical problem was modelled.

In [4], Mohsen et al. introduce two dispatch-optimizers for a centralized MGEM system as a universal tool. Scheduling the unit commitment and the economic dispatch of the MG units is achieved using an improved real-coded GA and an enhanced Mixed Integer Linear Programming (MILP) based method. The authors in [5] implement an economic dispatch strategy with MILP to determine the optimal operation for a hybrid energy system using steady-state models. The hybrid system is composed of biomass, biogas, photovoltaic panels, a diesel generator and a battery bank.

Daniel & Erlon [6] present a mathematical model for the EM problem of an MG by means of a MILP approach. The objective is to minimize the operating costs subject to economic and technical constraints over a planning horizon, by determining a generation policy and a controllable load demand policy. The results show the efficiency of the proposed approach in dealing with extreme situations involving deviations in the forecasted consumption or even unstable generation.

A novel MGEM system based on a rolling horizon (RH) strategy for a renewable-based MG is proposed in [7]. The proposed technique is implemented for an MG which consists of two wind turbines, photovoltaic panels, a diesel generator, and an energy storage system, in which a mixed integer optimization problem based on forecasting is solved at each decision step. Based on a demand-side management technique, the MGEM provides online set points for each generation unit and signals for consumers. The results show the economic benefit of the proposed strategy.

The authors in [8] focus on the development of optimization-based scheduling strategies for the coordination of MGs. Simultaneous management of energy demand and energy production is used within a reactive scheduling approach to solve the problem of uncertainty associated with generation and consumption.

Martin et al. [9] presented an EMS prototype for an isolated renewable-based MG which consists of two stages: a deterministic management model is formulated in the first stage and then integrated into an RH control strategy. The advantage of this proposal is that it considers the management of energy sources and also includes the possibility of flexible timing of energy consumption (demand management) by modelling controllable and uncontrollable loads.

The research presented in this paper focuses on using an energy management based on a two-stage rolling horizon strategy. The objectives are to increase the self-consumption of the renewable energy resources and to minimize the daily cost of the energy drawn by the MG from the main electrical grid, and thus achieve good economic performance for the MG customers and minimize the dependency of the MG on the main electrical grid. The effect of using different tariff schemes to suit different market conditions is also demonstrated as it has a direct effect on the optimal settings of the energy storage system control and on the overall economic results.

II.    Microgrid description

The proposed MG used in this paper consists of eight houses located in a UK based community, a Photovoltaic (PV) generation system and a Battery Energy Storage System (BESS). The MG is also connected to the main electric grid. The electrical load profile of the UK community is created using a model from the Centre for Renewable Energy Systems Technology (CREST) created by Richardson and Thompson [10]. The PV generation profiles used in this paper are obtained from data available at the PVOutput website [11] for the ETB 22kW station located at the University of Nottingham. The BESS used for this analysis has a rated capacity of 80kWh and a rated power of 15kW. The operation of the microgrid was simulated under various operating conditions using the Matlab/Simulink simulation environment.

III.   Proposed energy management strategy and the main implementation steps

The MGEM strategy proposed in this paper focuses mainly on increasing the self-consumption of the RES within an MG, minimizing the daily cost of the energy drawn by the MG from the main electrical grid (called the “Community Power Flow” (CPF) in this work), and reducing the dependency of the MG on the main electric grid. The controller generates a charge/discharge reference for the BESS, which directly controls the community power flow. This strategy uses a MILP optimization process as part of the RH technique to obtain the optimal control settings for the BESS (located in the MG) so as to minimize the daily cost of energy and maximize self-consumption. In the first stage, an optimization is performed for one day ahead to determine the reference values for the CPF to be drawn from the grid that minimize the daily cost of energy. This optimization uses predicted profiles for the load and the generation with a 15-minute sample period for one day ahead. The reference values obtained for the CPF also have a 15-minute sampling time.

The reference values for the CPF are then processed in a second control stage. During this stage, these values are used, together with predicted data for the load and generation with a sampling time of 1 minute and for only 15 minutes ahead, to determine the optimal settings for the BESS (i.e. one optimal setting every minute for the 15 minutes ahead). The optimal settings obtained for the BESS become the actual reference values for the community power flow, ensuring that the daily cost of the energy drawn from the main grid is minimized. This second stage helps to compensate for imperfect predictions from the first stage and for the constraints associated with the operating limits of the BESS.

The RH theory depends on repeating a defined process every fixed time interval and obtaining new results [7], [9]. The two stages described above are repeated every fixed time interval (15 minutes) where a new optimization is performed based on an updated forecast of generation and consumption for the next time interval and feedback of each device status at the end of the previous interval: a new, more accurate optimized setting for the BESS is therefore obtained. The updated forecasts for generation and consumption are obtained from a new prediction over the two stages. For the first stage, a prediction for one day ahead (with a sampling time of 15 minutes) starting from the end of the previous 15 minutes is made, and for the second stage a prediction for only 15 minutes ahead (with a sampling time of 1 minute) starting from the end of the previous 15 minutes is made. Using the RH technique helps in mitigating errors associated with the load prediction and the uncertainty associated with the control signal – e.g. tracking and updating the status of the units in the MG by repeating the optimization process several times. In addition, using the RH strategy, it is possible to move from performing optimization over one day only (without considering the past or the following day, which could affect the results) to a new process of working over a longer continuous period (several days, weeks or even months). The RH strategy enables the MGEM to take into account what happens the following day and the optimization process now covers more than one day. Through this process, earlier actions regarding charging and discharging of the BESS can be accounted for.
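The two-stage receding-horizon loop described above can be illustrated with a minimal sketch. The stage functions below are trivial placeholders standing in for the paper's MILP optimizations, and every name and number is hypothetical; the point is only the loop structure: re-optimize at each 15-minute interval, apply only that interval's settings, then roll forward.

```python
# Hedged sketch of a two-stage rolling-horizon loop (NOT the paper's MILP
# implementation). Stage 1 produces 15-min CPF references; stage 2 refines
# them into 1-min BESS settings; only the first interval is applied before
# the horizon rolls forward. All names and values are hypothetical.

def stage1_day_ahead(forecast_15min):
    """Stage 1 placeholder: 15-min CPF references for the remaining day
    (here we simply pass the forecast net load through as the reference)."""
    return list(forecast_15min)

def stage2_refine(cpf_ref, forecast_1min):
    """Stage 2 placeholder: 1-min BESS settings tracking the 15-min CPF
    reference (the BESS covers the forecast/reference mismatch)."""
    return [f - cpf_ref for f in forecast_1min]

def rolling_horizon(day_forecast_15min, minute_forecasts, steps):
    """Repeat both stages every 15-min interval, keeping only the settings
    for the interval just optimized (receding-horizon principle)."""
    applied = []
    for k in range(steps):
        cpf_refs = stage1_day_ahead(day_forecast_15min[k:])  # re-optimize
        bess = stage2_refine(cpf_refs[0], minute_forecasts[k])
        applied.extend(bess)  # apply only the first interval's settings
    return applied
```

In a real implementation each stage would solve a MILP over its horizon, but the loop that feeds updated forecasts and device states back into the next optimization is exactly this shape.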

The main implementation steps of the proposed strategy are summarized in the flowchart shown in Fig 1.

To implement this strategy, a cost function first has to be defined that minimizes the daily cost of the energy drawn from the main grid, together with all the constraints associated with it. In addition, the BESS should be modeled, including any constraints associated with its operation.

A.    Objective function formulation

The objective function is formulated to minimize the daily cost of the energy drawn from the main grid, C_daily, and to increase the self-consumption of the RES generated by the MG. This cost can be expressed in terms of payments and incomes. The payments include the cost of electricity purchased from the main electrical grid; the incomes consist of the revenue from electric energy sold to the main electrical grid, i.e. the excess energy produced by the MG PV generation after satisfying the MG consumption and charging the BESS.

The daily cost of the energy drawn from the main grid can be formulated as follows:

C_daily = C_buy_daily − C_sell_daily                (1)

C_daily: the daily cost of the energy drawn by the MG from the main electrical grid (£/day)
C_buy_daily: the daily cost of the electrical energy purchased from the main electrical grid (£/day)
C_sell_daily: the daily income from the electrical energy exported to the main electrical grid (£/day)

These terms can be described as follows:
C_buy_daily = ∆T × ∑_{t=t0}^{T} Tariff_buy(t) × max{ P_load(t) + P_losses(t) − P_BESS(t) − P_PV(t), 0 }        (2)

C_sell_daily = ∆T × ∑_{t=t0}^{T} Tariff_sell(t) × max{ P_PV(t) + P_BESS(t) − P_load(t) − P_losses(t), 0 }        (3)

t0: time of day start (12 am)
T: time of day end (after 24 hours)
∆T: sampling time; step size of each optimal solution (1 min.)
Tariff_buy(t): purchase electricity tariff from the main grid (£/kWh)
Tariff_sell(t): sale electricity tariff to the main electrical grid (£/kWh)
P_losses(t): electric power losses across the MG at time interval “t” (kW)
P_load(t): electrical load demand at time interval “t” (kW)
P_BESS(t): electric power produced by the BESS at time interval “t” (kW)
P_PV(t): electric power generated by the PV system at time interval “t” (kW)
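As an illustration, the daily-cost terms of Eqs. (2) and (3) can be evaluated numerically with a small helper; the function name, profile values and the worked example below are illustrative assumptions, not code from this work:

```python
# Daily purchase cost and sale income per Eqs. (2)-(3), with profiles given
# as equal-length lists sampled every dt_h hours. Sign convention as in the
# text: P_BESS > 0 while discharging, P_BESS < 0 while charging.

def daily_costs(tariff_buy, tariff_sell, p_load, p_losses, p_bess, p_pv, dt_h):
    c_buy = dt_h * sum(
        tb * max(pl + pls - pb - ppv, 0.0)
        for tb, pl, pls, pb, ppv in zip(tariff_buy, p_load, p_losses, p_bess, p_pv))
    c_sell = dt_h * sum(
        ts * max(ppv + pb - pl - pls, 0.0)
        for ts, pl, pls, pb, ppv in zip(tariff_sell, p_load, p_losses, p_bess, p_pv))
    return c_buy, c_sell, c_buy - c_sell   # the last term is C_daily, Eq. (1)

# One 15-minute interval importing a net 3 kW at 0.20 GBP/kWh:
print(daily_costs([0.20], [0.05], [4.0], [0.0], [0.0], [1.0], 0.25))
```

The `max{…, 0}` terms ensure that an interval contributes to either the purchase cost or the sale income, never both.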

B.    Power balance equation of the MG

The balance equation of the total active power in the MG is formulated as follows:
∑_{t=t0}^{T} { ±P_Grid(t) ± P_BESS(t) + P_PV(t) } = ∑_{t=t0}^{T} { P_load(t) + P_losses(t) }                (4)

P_Grid(t): the power drawn by the MG from the main electrical grid at time interval “t” (kW); +P means the MG imports power from the main grid, −P means the MG exports power to the main grid.
P_BESS(t): the electrical power produced by the BESS at time interval “t” (kW); +P means that the BESS discharges, −P means that the BESS charges.
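Under the signed conventions above (+ for grid import and BESS discharge), the balance of Eq. (4) can be checked interval by interval; the helper below is an illustrative sketch:

```python
# Residual of Eq. (4) at each interval:
# P_Grid + P_BESS + P_PV - (P_load + P_losses).
# A residual of zero means the active power in the MG is balanced.

def balance_residuals(p_grid, p_bess, p_pv, p_load, p_losses):
    return [g + b + v - (l + s)
            for g, b, v, l, s in zip(p_grid, p_bess, p_pv, p_load, p_losses)]

# 2 kW imported + 1 kW discharged + 1.5 kW PV = 4 kW load + 0.5 kW losses:
print(balance_residuals([2.0], [1.0], [1.5], [4.0], [0.5]))  # [0.0]
```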

C.   Battery energy storage system modeling and constraints

The constraints associated with the operation of the BESS are formulated as follows:

1)   BESS power output
−P_B,max ≤ P_BESS(t) ≤ P_B,max

P_B,max is the maximum power that can be produced by the BESS (kW); +P means the maximum discharge power, −P means the maximum charge power.

2)   BESS State of charge (SOC)
E(t) = E(t−1) − ∆T × P_disch(t)/η_d − ∆T × η_c × P_charg(t)        (6)
SOC_min ≤ SOC(t) ≤ SOC_max

E(t): stored energy in the BESS at time interval “t” (kWh)
E(t−1): stored energy in the BESS at time interval “t−1” (kWh)
P_disch(t): discharge power from the BESS at time interval “t” (kW)
P_charg(t): charge power into the BESS at time interval “t” (kW)
η_d, η_c: efficiency of discharging and charging respectively
E_B: battery capacity (kWh)
SOC_max, SOC_min: maximum and minimum state-of-charge limits of the BESS respectively
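A minimal sketch of the stored-energy update in Eq. (6) follows, keeping the sign convention used above in which P_charg is negative while charging (so the last term adds energy); the function and variable names are illustrative:

```python
# BESS stored-energy update per Eq. (6). With P_charg < 0 while charging,
# the term -dt_h * eta_c * p_charg increases the stored energy.

def bess_update(e_prev, p_disch, p_charg, dt_h, eta_d, eta_c):
    return e_prev - dt_h * p_disch / eta_d - dt_h * eta_c * p_charg

def soc_ok(e, e_cap, soc_min, soc_max):
    soc = e / e_cap                      # SOC as a fraction of capacity
    return soc_min <= soc <= soc_max

# Charge at 10 kW for 15 min with 90 % charging efficiency:
e = bess_update(40.0, 0.0, -10.0, 0.25, 0.95, 0.90)
print(e, soc_ok(e, 80.0, 0.20, 0.90))
```

Note that the discharge term divides by η_d (more energy leaves the battery than reaches the converter) while the charge term multiplies by η_c (less energy is stored than is supplied).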

3)   SOC variation

This constraint governs the transition between two states during two consecutive settings, and reflects the maximum ramp up/down rate for the BESS power.

∆SOC(t) ≤ ∆SOC_max

∆SOC(t) is the variation of the state of charge during charging/discharging periods and ∆SOC_max is the maximum acceptable variation of the state of charge for both charging and discharging periods.

4)   Power converter losses and efficiency

The power losses in the power converter, which is used for control of the BESS and for the grid interface, should be taken into consideration.

P_conv(t) = η_conv × P_disch(t) + P_charg(t)/η_conv − P_constant        (10)

P_conv(t): converter output power at time interval “t” (kW)
P_disch(t): discharge power from the battery at time interval “t” (kW)
P_charg(t): charge power into the battery at time interval “t” (kW)
η_conv: converter efficiency
P_constant: constant losses in the converter (kW)
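Eq. (10) translates directly into code; the sketch below uses illustrative names and values:

```python
# Converter output per Eq. (10): discharge power is scaled by the converter
# efficiency, charge power is divided by it, and constant losses are deducted.

def converter_output(p_disch, p_charg, eta_conv, p_constant):
    return eta_conv * p_disch + p_charg / eta_conv - p_constant

# Discharging 10 kW at 95 % efficiency with 0.33 kW constant losses:
print(converter_output(10.0, 0.0, 0.95, 0.33))  # approximately 9.17 kW
```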

D.   The power drawn from the main electrical grid

The power drawn by the MG from the main electrical grid is also subject to a number of constraints:

P_Grid_share,MIN ≤ P_Grid_share(t) ≤ P_Grid_share,MAX

P_Grid_share,MAX and P_Grid_share,MIN are the maximum and minimum power that can be drawn from the main electrical grid respectively (kW). This constraint is used to minimize the imported power from the main grid and increase self-consumption of the RES.
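Taken together, the BESS and grid-share constraints in sections C and D amount to simple box checks; the helper below is an illustrative sketch with hypothetical limit names:

```python
# True iff a candidate operating point respects the BESS power, SOC,
# SOC-variation and grid-share box constraints (limit names are illustrative).

def within_limits(p_bess, soc, d_soc, p_grid, lim):
    return (-lim["pb_max"] <= p_bess <= lim["pb_max"]          # BESS power
            and lim["soc_min"] <= soc <= lim["soc_max"]        # SOC limits
            and abs(d_soc) <= lim["d_soc_max"]                 # SOC variation
            and lim["grid_min"] <= p_grid <= lim["grid_max"])  # grid share

lim = {"pb_max": 12.0, "soc_min": 0.20, "soc_max": 0.90,
       "d_soc_max": 0.05, "grid_min": -12.0, "grid_max": 12.0}
print(within_limits(8.0, 0.55, 0.03, 5.0, lim))   # True
print(within_limits(15.0, 0.55, 0.03, 5.0, lim))  # False: exceeds BESS power limit
```

In the optimization these appear as linear inequality constraints; a check like this is only useful for validating a candidate solution after the fact.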

IV.   Mixed Integer Linear Programming optimization

MILP is a mathematical optimization method used to solve constrained optimization problems that contain a set of variables, an objective function and a set of constraints [12], [13]. The role of the optimization is to find the best solution for the objective function among the set of solutions that satisfy the constraints (constraints can be equations, inequalities or linear restrictions on the type of a variable).

The mathematical formulation of the MILP problem is expressed as follows:

minimize    cᵀx
subject to  Ax ≤ b,  x ∈ Zⁿ

where c and b are vectors and A is a matrix.

A solution that satisfies all constraints is called a feasible solution. Feasible solutions that achieve the best objective function value are called optimal solutions.

There are three main approaches to solving MILP problems, namely Branch and Bound, Cutting Plane and Feasibility Pump. MILP problems are generally solved using a branch-and-bound algorithm. A basic LP-based branch-and-bound algorithm (known as tree search) proceeds as follows. Start with the original mixed integer linear problem and remove all integrality restrictions; the resulting problem is called the “linear programming relaxation” of the original problem. The search tree is then built using three main steps. Branch: pick a variable and divide the problem into two sub-problems at this variable. Bound: solve the LP relaxation to determine the best possible objective value for the node. Prune: cut off a branch of the tree (i.e. the tree is not developed any further at this node) if the sub-problem is infeasible [14].
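The Branch / Bound / Prune steps can be illustrated on a tiny 0/1 integer program (a knapsack instance). The data are hypothetical, and the LP relaxation of this particular problem is solved greedily by value/weight ratio rather than with a general LP solver:

```python
# Illustrative branch-and-bound for a tiny 0/1 integer program
# (maximize value subject to one weight constraint), mirroring the
# Branch / Bound / Prune steps described above.

def lp_bound(values, weights, capacity, fixed):
    """Upper bound from the LP relaxation: fix decided variables, then
    fill greedily by value/weight ratio, allowing one fractional item."""
    total_v, cap, free = 0.0, capacity, []
    for i, x in enumerate(fixed):
        if x == 1:
            total_v += values[i]
            cap -= weights[i]
        elif x is None:
            free.append(i)
    if cap < 0:
        return float("-inf")                       # infeasible node
    free.sort(key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        if weights[i] <= cap:
            total_v += values[i]
            cap -= weights[i]
        else:
            total_v += values[i] * cap / weights[i]  # fractional item
            break
    return total_v

def branch_and_bound(values, weights, capacity):
    best = [float("-inf"), None]                   # incumbent value, solution

    def node(fixed):
        bound = lp_bound(values, weights, capacity, fixed)
        if bound <= best[0]:
            return                                 # Prune: cannot beat incumbent
        if None not in fixed:
            best[0], best[1] = bound, list(fixed)  # integral leaf: new incumbent
            return
        i = fixed.index(None)                      # Branch on first free variable
        for x in (1, 0):
            node(fixed[:i] + [x] + fixed[i + 1:])

    node([None] * len(values))
    return best[0], best[1]

print(branch_and_bound([10, 13, 7], [4, 6, 3], 9))  # (20.0, [0, 1, 1])
```

At each integral leaf the bound coincides with the exact objective value, so the incumbent is always a feasible solution; pruning by bound is what distinguishes this from brute-force enumeration.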

For example, to optimize the objective function formulated in (1): first, the problem is solved without any constraints and a list of initial variables and solutions is obtained. Second, the constraints are applied to the obtained solutions and the infeasible ones are rejected. Third, the variables which give a feasible solution are used to generate further variables, and the problem is solved again with those variables until the optimal solution is obtained.

V.    Simulation results and discussion

The following results are obtained from simulation of the MG defined in section II, using the parameters shown in Table 1.

Different tariff schemes are used in this research to represent various market conditions and to show the capability of the proposed strategy to find the best operation scenario under each of them. For purchasing electricity from the main grid, a time-of-use (TOU) tariff and a real-time pricing tariff are used [15]. For selling electricity to the main grid, a fixed tariff is used [16], [17]. The tariff schemes used in this research are shown in Fig. 2.






Table 1. Simulation parameters

First-stage sampling time: 15 min.
Constant converter losses (P_constant): 0.33 kW
Battery capacity: 80 kWh
SOC_min: 20 %
SOC_max: 90 %
P_Grid_share,MIN: −12 kW
P_Grid_share,MAX: 12 kW
P_B,min: −12 kW
P_B,max: 12 kW
∆P_B,ramp-up: 12 kW
∆P_B,ramp-down: 12 kW
RH update interval: 15 min.
P_losses: 2 % of average P_load

A.    EM results using MILP optimization

In this section, the first stage of the proposed strategy is demonstrated: an optimization is performed using predicted load and generation profiles with a 15-minute sample time for one day ahead. The reference values obtained for the community power flow (CPF) also have a 15-minute sample time for one day ahead. The MILP optimization results of the first stage are shown in Fig 3.

Fig.  3. Optimal settings for the BESS, reference values for the power drawn from the main electrical grid and the SOC of the BESS using MILP optimization with a sample time of 15 minutes

From Fig 3, it is observed that the optimization strategy succeeded in: 1) minimizing the electric energy purchased from the main grid at peak hours (from 4 pm to 8 pm, when the purchase tariff from the main electrical grid is high, as shown in Fig 3b); 2) maximizing the self-consumption of the generated PV energy to feed the load within the MG (no export is seen in Fig 3b); 3) maximizing the charging of the BESS at off-peak times (from 11 pm to 7 am, when the purchase tariff from the main electrical grid is low) and using this energy to feed the load during the rest of the day, as seen in Fig 3c. The optimization strategy takes the BESS modelling and constraints into consideration, and manages to keep the SOC of the BESS and all other associated constraints within limits (SOC between 20 and 90 %, maximum charging/discharging power 15 kW) as shown in Fig 3a and 3c.

B.    EM results using the proposed two stage RH strategy with TOU purchasing tariff scheme

In this section, the proposed two stage RH strategy is used to derive the optimal control signals for the MG. The load and PV generation profiles used in this part have a 1-minute sample time and cover two consecutive days.

The results obtained in Fig 4 show that the proposed strategy succeeded in determining the optimal settings for the BESS that minimize the daily cost of the energy drawn from the main grid. The optimal settings for the BESS have a 1-minute sample time. It can be seen from Fig 4a and 4d that more accurate predicted load and generation profiles, with a 1-minute sample time, are used in this stage to deliver the 1-minute sample time optimal settings for the BESS. These settings compensate for any change in the load and keep the actual CPF close to the reference throughout the day, as shown in Fig 4b.

Using the TOU tariff scheme, the BESS charges at off-peak times when the purchase tariff is low and uses this energy to feed the load at mid-peak and peak times. The results show that the RES are used to feed the load within the MG and the extra energy is saved in the BESS to be used later, as shown in Fig 4a between hours 10-12 and between hours 34-37. The extra energy that cannot be saved in the BESS due to the SOC limits is exported to the main electrical grid, as shown in Fig 4b between hours 37-38. The daily cost of the energy drawn from the main grid is reduced from £34 to £18.4 over the 2 days by using the BESS and the proposed strategy (i.e. a reduction of 45.9% per day).

C.   EM results using the proposed two stage RH strategy with real time purchasing tariff scheme

In this section, the system is simulated using the real-time purchasing tariff instead of the TOU tariff. The results in Fig 5a show that changing the tariff scheme affects the energy drawn by the MG from the main electrical grid. The operation scenario of the BESS and the SOC curve are also affected by the change of tariff scheme, as seen in Fig 5a and 5c. The new operation scenario of the BESS shown in Fig 5a is delivered to minimize the daily cost of the energy drawn from the main grid in this case.

The obtained results confirm the capability of the proposed strategy to deal with different tariff schemes, achieve good results and ensure economic operation of the system.

D.   EM results with and without using the second stage of the proposed RH strategy

Fig 6 shows the actual power drawn from the main electrical grid with and without the second stage of the proposed RH strategy, which compensates for load changes and delivers more optimal settings to the BESS. As seen in Fig 6, without the second processing stage the power generated from the RES is exported to the main electrical grid at inappropriate time periods (such as between hours 33 and 37), which reduces the self-consumption of the RES within the MG; the MG also imports energy from the main grid at peak time periods (between hours 40 and 44), which increases the daily cost of the energy drawn from the main grid.

The second processing stage succeeded in making the actual power drawn from the main grid follow the reference values obtained from the optimization, as shown in Fig 6, and also reduced the daily cost of the energy drawn from the main grid from £9.44/day (i.e. when using the first optimization stage only) to £9.19/day.

On the other hand, the constraints and limits associated with the operation of the BESS (such as the SOC limits or the maximum charge/discharge power) do not allow the optimal settings of the BESS to be executed at certain times; in such cases, the real power drawn from the main grid deviates from the reference values obtained from the optimization, as shown in Fig 6 (zoom). This happens only after all available solutions to overcome the problem have been used.

VI.   Conclusion

The new approach to MGEM introduced in this research increases the self-consumption of the renewable energy resources (RES) within the MG, minimizes the daily cost of the energy drawn from the main grid and reduces the dependency of the MG on the main grid.

Using MILP optimization, the BESS is used to ensure the MG community power flow follows the reference with minimum errors.

Using a second processing stage more accurate settings for the BESS can be derived at a faster sampling time and therefore a more accurate response to load changes can be achieved to keep the actual community power flow close to the reference values obtained from the first optimization process.

Repeating the first and the second processing stages every fixed time interval using a RH technique, enables errors associated with load prediction to be reduced. The proposed strategy also managed to maintain BESS behaviour within the constraints associated with the operation of the MG (power constraints and BESS constraints).

Simulating the system using different tariff schemes demonstrates the ability of the proposed strategy to deliver appropriate economic solutions under various market conditions.


This work is supported by the Egyptian Government, Ministry of Higher Education (Cultural Affairs and Missions Sector), and the British Council through the Newton-Mosharafa fund.


[1] G. S. G. Federation, “Global smart grid federation report,” Global Smart Grid Federation, pp. 15, 2012.

[2] A. Kowalczyk, A. Włodarczyk, and J. Tarnawski, “Microgrid energy management system.” pp. 157-162, 2016.

[3] A. C. Luna, L. Meng, N. L. Diaz, M. Graells, J. C. Vasquez, and J. M. Guerrero, “Online energy management systems for microgrids: experimental validation and assessment framework,” IEEE Transactions on Power Electronics, vol. 33, no. 3, pp. 2201-2215, 2018.

[4] M. Nemati, M. Braun, and S. Tenbohlen, “Optimization of unit commitment and economic dispatch in microgrids based on genetic algorithm and mixed integer linear programming,” Applied Energy, vol. 210, pp. 944-963, 2018.

[5] A. Gupta, R. Saini, and M. Sharma, “Steady-state modelling of hybrid energy system.” pp. 1-10, 2009.

[6] D. Tenfen, and E. C. Finardi, “A mixed integer linear programming model for the energy management problem of microgrids,” Electric Power Systems Research, vol. 122, pp. 19-28, 2015.

[7] R. Palma-Behnke, C. Benavides, F. Lanas, B. Severino, L. Reyes, J. Llanos, and D. Sáez, “A microgrid energy management system based on the rolling horizon strategy,” IEEE Transactions on Smart Grid, vol. 4, no. 2, pp. 996-1006, 2013.

[8] J. Silvente, G. M. Kopanos, E. N. Pistikopoulos, and A. Espuña, “A rolling horizon optimization framework for the simultaneous energy supply and demand planning in microgrids,” Applied Energy, vol. 155, pp. 485-501, 2015.

[9] M. P. Marietta, M. Graells, and J. M. Guerrero, “A rolling horizon rescheduling strategy for flexible energy in a microgrid.” pp. 1297-1303, 2014.

[10] I. Richardson, and M. Thomson, “Domestic electricity demand model-simulation example,” Data Sets and Software (CREST), 2010.

[11] ETB_UoN_Notts_UK, “Daily ETB_UoN_Notts_UK 22kW,” Available at : https://pvoutput.org, 2016.

[12] M. Jünger, T. M. Liebling, D. Naddef, G. L. Nemhauser, W. R. Pulleyblank, G. Reinelt, G. Rinaldi, and L. A. Wolsey, “50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art,” Springer Science & Business Media, 2009.

[13] J. P. Vielma, “Mixed integer linear programming formulation techniques,” SIAM Review, vol. 57, no. 1, pp. 3-57, 2015.

[14] J. C. Smith, and Z. C. Taskin, “A tutorial guide to mixed-integer programming models and solution techniques,” Optimization in Medicine and Biology, pp. 521-548, 2008.

[15] “Purchasing electricity tariffs in the UK,” Available at: https://www.greenenergyuk.com.

[16] “Selling electricity tariffs in UK,” Available at:  https://www.gov.uk/feed-in-tariffs.

[17] “Feed-in Tariff Generation & Export Payment Rate Table,” Available at :  https://www.ofgem.gov.uk/system/files/docs/2018/01/fit_tariff_table-_january_2018.

Principles of the Early Years Foundation Stage Framework

In this report I am going to review the Early Years Foundation Stage Framework (EYFS), looking at the principles, how they underpin our Early Years settings and how they are based on the theory of pioneers. I will then cover the value and importance of play and how this is a major part of children’s learning. I will outline how we got to where we are today with the EYFS Framework, including its importance and the impact it has had on today’s practitioners. At the end of the review I will look at how training and the continuing professional development of practitioners are essential.
Dictionary definition – ‘a truth or general law that is used as a basis for a theory or system of belief’ Oxford English Dictionary, third edition 2005
Early Years Foundation Stage principles:

A unique child – every child is a competent learner from birth who can be resilient, capable, confident and self-assured.
Positive Relationships – children learn to be strong and independent from a base of loving and secure relationships with parents and/or a key person.
Enabling Environments – the environment plays a key role in supporting and extending children’s development and learning.
Learning and Developing – children develop and learn in different ways and at different rates and all areas of Learning and Development are equally important and inter-connected.

Today’s children are the main priority in every Early Years practice. The Early Years Foundation Stage must be underpinned by principles supporting every area of a child’s development. They are all of equal importance and need to be in place when caring for children. Together they provide a stimulating and valuable practice, support delivery of the EYFS and put the legal requirements into perspective. They also support each child’s needs and interests, which means appropriate activities are delivered.
Key pioneers and theorists such as Montessori and Margaret MacMillan have been studying how children learn for over 200 years. Through studying and observing children they realised and established what was important for a child to develop and learn. Margaret MacMillan came to her theory after noticing the effect poverty was having on children. She became aware of the importance of exploring the natural world, being outside in open spaces and receiving regular meals, bath times and plenty of sleep. According to MacMillan, ‘in open-air nursery children had no examinations to sit, no formal structure to the day but had time to play, to run free in open spaces, feel the sun and the wind and explore the natural world’ (How Children Learn, pg 24). Key pioneers and theorists still influence our principles and teaching today, as we ensure that children’s learning is extended and that they have access throughout the day to both the indoor and outdoor areas, not just at set times. The outdoor area is now an extension of the classroom, bringing the indoor areas outdoors, including role play, writing, gardening and caring for livestock. Children from families on a low income are also offered free school meals to ensure the child receives a healthy balanced diet, and all children are given the time and space to rest throughout the day.
Value of play
‘Play is a powerful motivator, encouraging children to be creative and to develop their ideas, understanding and language. Through play, children explore, apply and test out what they know and can do’ (Rumbold Report, pg 7, 56).
All babies and children enjoy playing; it is an essential part of growing up and is needed for children to reach their full potential. It allows children to be ‘in charge’ of their own learning and is used every day; this allows us to see far more of their achievements than setting the scene for them would. Children are able to combine their play with learning in a safe environment, as C. Macintyre (intro VIII) states: ‘although the children might be seen to be “just playing” all the time they are learning, just as fast as they can’.
Play supports a child’s holistic development: ‘play underpins all development and learning for young children. Most children play spontaneously, although some may need adult support, and it is through play that they develop intellectually, creatively, physically, socially and emotionally.’
Children can learn everything through play and it is an effective way of learning, so it should be made fun and enjoyable for both the children and the parents. It is also important that children and practitioners understand they are allowed to play and that it is through play that they learn. When playing, children naturally develop their skills and act out and overcome any issues they have in their immediate world. It is also where children do their thinking and problem solving and use first-hand experiences, so it is important that practitioners and parents enter the children’s world and encourage their play. Playing can take place anywhere, not only in the classroom but in the outdoor area as well, and children need to be given time and space to play.


The journey of Early Years Foundation Stage curriculum
The journey to today’s EYFS curriculum started in 1990 with the Rumbold Report, ‘Starting with Quality’. It researched the quality of education for under-fives and how the process of a child’s learning is just as important as the outcome. The report states: ‘Children’s imagination can be nurtured by responding to their curiosity. With encouragement and stimulation, this curiosity will develop into a thirst for, and enjoyment of, learning’ (pg 7, 56). In 1996 Desirable Outcomes were introduced, consisting of six areas of learning: personal and social development, language and literacy, mathematics, knowledge and understanding of the world, physical development, and creative development. The Curriculum Guidance was then set up in 2000 for Foundation Stage children aged 3-5 years. It meant they had their own curriculum which supported their needs within the six areas of learning. Each area then had set goals which gave guidance and structure to their education; each child achieves these goals at their own rate, and they are the foundation of their learning. It was then recognised that children under 3 also needed guidance, so in 2003 Sure Start introduced a framework known as Birth to Three: Supporting Our Youngest Children. It takes a holistic approach, in little stepping stones, caring for children’s needs and routines. These are covered by four components: a strong child, a skilful communicator, a competent learner and a healthy child. Today every practice is required to follow the Early Years framework. It applies to and supports all children from birth to five and is separate from the National Curriculum. It focuses on the development, learning and care of the child.
The framework
The EYFS framework is one document with which all settings working with children have to comply. It includes both education and care and is supported by the four principles (appendix). For an effective setting it is important that the following key points are in place. This has had a huge impact on practitioners as it ensures every child’s development is being met and each child is seen as an individual.
Observing a child is an important part of the day-to-day role of a practitioner within an Early Years setting. By observing a child you are able to discover the child’s interests, likes and dislikes and behavioural patterns, assess the child’s stage of development and identify any patterns in the child’s learning. S. Isaacs (How Children Learn, pg 35) noted that observation ‘allowed adults to really get to know children, that their emotions were not hidden’. It can also highlight any concerns you may have and ensures that the child is seen as an individual with all their needs being met. Observing a child involves looking, listening and being actively involved.
Assessing a child is of equal importance to observing them, as you use the information from the observation to identify the child’s achievements and plan the child’s next steps in their development and learning. ‘Ongoing assessment is an integral part of the learning and development process’ (EYFS Statutory Framework, pg 16, 2.19). In my own setting we regularly observe children during play, as this is when we feel we gain most from observing them: they are more comfortable and demonstrate the skills that they have learnt. We then take the child’s observation and record their achievements in their individual profiles and learning journeys. From looking at their achievements we then plan their next steps. This process is a continuous cycle, as shown in the diagram.
There are three different types of planning, long-term, medium-term and short-term, all of which are important as they ensure all areas of a child’s development are evenly met. Planning also ensures all the principles are being underpinned within the setting and that the children have access to a wide range of areas including indoor, outdoor and quiet areas. It also enables areas of development to be linked together so the children develop a range of skills and learning. In my setting the children are very much involved with the planning as we are interested in what the children want to learn. We use short-term weekly plans (appendix) and review the activities each day to see how successful they have been and to extend the children’s learning. ‘Good planning is the key to making children’s learning effective’ (EYFS Framework principle, pg 12, 2.8).
Record keeping
Keeping a record of children’s development is extremely important as it monitors a child’s progress and achievements. It also highlights any patterns in a child’s development and is used as evidence to show parents, outside professionals and teachers. In my setting each child has their own Learning Journal, which they are involved in. It consists of the child’s profile and evidence of their development and learning, using photos, observations and their own work. This is shared between the child, their parents and the practitioners.
Relationships with parents and importance of reporting to them
Parents are a vital part of a child’s learning as they are their main educators. A good relationship between the parents and the setting helps to build a strong connection, which enables the parents to support their child and offer continuity of expectations, experiences and behaviour. ‘All families are important and should be welcomed and valued in all settings’ (principle: parents as partners, 2.2).
The parents also gain an understanding of the EYFS, and so understand how important it is for their child to play and how their role as a parent is needed for their child to develop.
Within my setting we support the parents by making sure they feel involved and appreciated. We have an open door policy which allows parents to come and talk to a member of staff when they feel they need to. We also offer parent consultations, workshops, helping hand events and inform them of any information through meetings, newsletters, telephone calls and home/school diaries.
Learning does not stop once you leave school; you continue to learn throughout your life and within your professional career. Today this is known as Lifelong Learning.
With frequent changes to the curriculum it is important that practitioners continue to keep up to date with training, as this helps them to develop their knowledge and improve their skills within their career. It also allows them to reflect on their own learning experiences and to notice their achievements.