Lundie Conservation Area Analysis

Introduction

The purpose of this document is to identify the character and appearance of the Lundie Conservation Area and to define its special qualities of architectural and historic interest. It seeks to establish whether the area merits Conservation Area status and the protection that such status brings. This information will be used to manage change in the Conservation Area so as to ensure its preservation or enhancement. The character analysis in this document, together with Angus Council’s Development Plan and Advice Notes relating to development in conservation areas, will inform the assessment of development proposals and other changes in terms of their impact on character or appearance, as required by the Planning (Listed Buildings and Conservation Areas) (Scotland) Act 1997. Major elements of the character and appearance of Lundie cannot be overlooked and need to be stated as guidelines for designers and developers to follow. Character appraisals also provide the opportunity to inform residents about the special needs and characteristics of the area and help developers identify and formulate development proposals.
1.1 Purpose of the Guidance
This appraisal will be a tool used to control and manage change, to highlight the special interest of the area and to keep abreast of changes within it. It serves as supplementary planning guidance for Angus Council. The design guidance established here will aid the assessment of development proposals.
1.2 Objectives of the Guidance
The character appraisal will:

Provide background information regarding the historical and architectural interest of Lundie, in particular the conservation area

Review the existing conservation area

Help local authorities to develop a management plan for the conservation area by analysing what is positive and negative, and by identifying opportunities for beneficial change or the need for additional protection and restraint.

1.3 Methodology
Visual analysis and art-historical analysis were used in this document to appraise the character of Lundie Conservation Area. Aesthetic, perceptive and phenomenological analysis formed the basis of the general visual analysis. The appraisal therefore relates judgements from visual analysis, based on what was experienced first-hand by moving through the conservation area, navigating from one place to another by identifying landmarks, and by considering the emotional and conceptual connections connoted by the ‘meaning’ and ‘structure’ of the place, to particular assessment criteria such as scenic beauty, what makes the place deserve conservation area status, architectural interest, archaeological interest and community historic preference. The main purpose of this kind of study is to identify, measure and evaluate the characteristics or qualities of Lundie Conservation Area. Art-historical analysis was also used to examine the historic and monumental interest of the area by recording its historical, archaeological and architectural character. Existing literature on Lundie Conservation Area was also consulted. While analysis of aesthetic qualities is very personal, depending heavily on an individual’s taste and socialising experiences, combining it with the other, more objective forms of analysis mentioned above helped to achieve a balanced character appraisal.
1.4 Location and Setting
Lundie is a parish and small hamlet in Angus, Scotland, 10 miles (16 km) northwest of Dundee, situated at the head of the Dighty valley in the Sidlaws, off the A923 Dundee to Coupar Angus road. In 1882-4, Francis Groome’s Ordnance Gazetteer of Scotland described Lundie as follows: “Lundie, a village and a parish of SW Forfarshire. The village stands 3 miles WSW of Auchterhouse station, 6 ESE of Coupar-Angus, and 9 NW by W of Dundee, under which it has a post office. The parish is bounded N by Newtyle, E by Auchterhouse, S by Fowlis-Easter in Perthshire, and W by Kettins. Its utmost length, from W by N to E by S, is 4 miles; its utmost breadth is 3 miles; and its area is 4296 ¼ acres, of which 1075/6 are water”.
1.5 Reason for Designation
This is an area of special architectural or historic interest, the character or appearance of which it is desirable to preserve or enhance. The Conservation Area consists of the whole village of Lundie, including:
The Manse,
Smithy Cottage,
Gamekeepers Cottage (The Edinburgh Gazette, 27 September 1991).
Lundie has significant architectural and historic interest. Lundie parish church, which was dedicated to St Lawrence, was once the property of the priory of St Andrews. Inside the church is a war memorial plaque commemorating parishioners who died in World War I. Preserving and enhancing these key features led to the designation of the whole village of Lundie as a Conservation Area in 1991.
1.6 Lundie Conservation Area and The Conservation Area Boundary
Lundie Conservation Area was designated on 8 July 1991 (The Edinburgh Gazette, 27 September 1991) and an Article 4 Direction was put in place on 16 September 1992 (The Edinburgh Gazette, 20 October 1992).
Using the Church as the pivot, the conservation area encompasses the main Lundie village, where most of the properties are. It starts from Smithy Cottage on the north-western side, goes around Sawmill Cottage to the north, then down to Kirkton Farm Cottage. It continues down past the Old School all the way to the Oaksydix building on the south-eastern side, then around Lundie Mill and up along the road to the Rowanholme building. From there it goes down along the road on the left towards the Manse building to the south, then up north to the Well, and extends to the right towards the Village Hall, past the Pump, back to Smithy Cottage.
1.7 Conservation Areas
There are more than 600 conservation areas in Scotland, of which 19 fall under Angus Council. Conservation areas are places, within or comprising an entire village, town or city, that contain areas of special historic or architectural character in need of protection or enhancement. They are designated by the planning authority as areas of special architectural or historic interest whose character or appearance it is desirable to preserve or enhance. These interests create the character of an area, and any new development should be carefully assessed to ensure that, if permitted, it will blend into the character of the area rather than blight it. Designating a conservation area should not be seen as prohibiting change, but as a means of carefully managing change so that the character and appearance of these areas are safeguarded and enhanced for the enjoyment and benefit of future generations. The public are consulted on any proposals to designate conservation areas or change their boundaries. Each conservation area is managed by the local authority within whose boundary it falls.
1.8 The Legal and Policy Framework
The identification of conservation areas can be traced back to the coming into force of the Civic Amenities Act 1967. The government of the time recognised how important it was to protect areas as a whole, rather than individual buildings, from indiscriminate development and the wide-scale demolition of buildings in areas selected for slum clearance. Therefore, while individual buildings of special or unique character may be important, what should be considered is the group value of the buildings in an area, their orientation, street design, public space and greenery, all of which contribute to the character and identity of a place. Considered carefully, these are the same factors that come together to form the character of a conservation area. The 1967 Act is now, in one way or another, replicated in the Town and Country Planning Act 1990 and the Planning (Listed Buildings and Conservation Areas) Act 1990. The latter empowers local planning authorities to review existing conservation areas within their jurisdiction, to designate new areas, and to produce character appraisals or place analyses and management plan proposals for the protection, preservation and enhancement of these conservation areas. Consent is required from the appropriate authority for any activity that will change or in any way affect the character of the area. The demolition of a building and/or the construction of a new building can significantly alter the character or appearance of a conservation area and erode the justification for its designation.
1.9 Conservation Areas in Angus
There are currently 19 Conservation Areas in Angus, of which Lundie Conservation Area is one, and six of them, including Lundie, have Article 4 Directions in place. Article 4 Directions are a further means of ensuring that these conservation areas maintain their character and uniqueness. Additional controls govern the way in which buildings can be altered, and planning permission is needed if such alterations are deemed to affect the character of the conservation area; trees in conservation areas are no exception. Angus Council is committed to preparing character appraisals for all the conservation areas under its control and also publishes guidance on matters affecting these conservation areas. A planning application which is seen to have the potential to disrupt the character of a Conservation Area must be advertised in the local press and a notice posted near the site. Angus Council must then allow a 21-day period for objections and comments before considering the application.
 

Extraction of Blue Ice Area in Antarctica

Chapter 3
METHODOLOGY
High resolution satellite data has made it possible to obtain promising results in feature extraction. High resolution WorldView-2 data is used here for mapping blue ice areas (BIAs) in the Antarctic region. WorldView-2 provides very high accuracy, agility, capacity and spectral diversity. Launched in October 2009, WorldView-2 was the first high-resolution 8-band multispectral commercial satellite. Operating at an altitude of 770 km, it provides 46 cm panchromatic resolution and 1.85 m multispectral resolution, has an average revisit time of 1.1 days and is capable of collecting up to 1 million square kilometres of 8-band imagery per day. Satellite images have been used to track seasonal and annual variations in BIA coverage over the past 30 years on the East Antarctic plateau. In recent studies, the distribution of BIAs has also been examined using SAR (synthetic aperture radar) images, in which blue ice can likewise be visually recognised. The backscatter amplitude of blue ice is lower than that of (white) snow because the ice surface is smoother; however, the distinction is not that conspicuous when a semi-automatic extraction approach is applied. Blue ice can be readily distinguished in the coherence map obtained from two SAR images, owing to the higher coherence of blue ice. Image texture information has also been found useful for distinguishing various types of blue ice, such as rough, smooth and flat blue ice. In this study, an atmospherically corrected (QUAC), pan-sharpened, calibrated WorldView-2 image is used for extracting blue ice areas in the Schirmacher Oasis in the Antarctic region. Extraction of blue ice areas in Antarctica deals with the total area of blue ice, excluding other (non-target) features appearing on or near it. Blue ice areas have specific qualities that make them of special interest for extraction, as they cover only about 1% of the Antarctic region. Many remote sensing approaches have been implemented to monitor and map Antarctic BIAs.
3.1 Methodology Protocol
The methodology protocol simplifies the extraction of blue ice areas. Because processing the whole image is time-consuming, and because Schirmacher Oasis, with an area of 34 km², ranks among the smallest Antarctic oases and is a typical polar desert, the image is divided into 12 test tiles covering different parts of the entire WorldView-2 scene so that results can be obtained more quickly. Atmospheric correction is performed with the QUAC (Quick Atmospheric Correction) method to obtain better results, and applying it to each tile added suitable outputs to the workflow. Calibrated data without atmospheric correction is also used. A multiband image combination was made from the atmospherically corrected data and the calibrated data of the study area.
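As a simple illustration of the tiling step, the following Python sketch splits a scene, loaded as a NumPy array, into a 3 x 4 grid of 12 tiles; the array shape and tile layout are assumptions for illustration, not the actual tile boundaries used in this study.

# Minimal sketch of splitting a large scene into smaller test tiles for faster
# processing, assuming the image is already loaded as a NumPy array of shape
# (bands, rows, cols). Tile counts and sizes here are illustrative only.
import numpy as np

def split_into_tiles(image, n_rows=3, n_cols=4):
    """Split a (bands, rows, cols) array into n_rows * n_cols tiles."""
    bands, rows, cols = image.shape
    row_edges = np.linspace(0, rows, n_rows + 1, dtype=int)
    col_edges = np.linspace(0, cols, n_cols + 1, dtype=int)
    tiles = []
    for i in range(n_rows):
        for j in range(n_cols):
            tile = image[:, row_edges[i]:row_edges[i + 1],
                            col_edges[j]:col_edges[j + 1]]
            tiles.append(tile)
    return tiles  # 12 tiles for the default 3 x 4 layout

# Example: a synthetic 8-band scene stands in for the calibrated WorldView-2 data.
scene = np.random.rand(8, 600, 800).astype(np.float32)
tiles = split_into_tiles(scene)
print(len(tiles), tiles[0].shape)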


Surface patterns of alternating snow and blue ice bands are generally found in East Antarctica, which makes it hard to extract BIAs cleanly. For the feature extraction process a region of interest (ROI) is considered in which blue ice is the target and white ice appearing on or near the blue ice is considered non-target. A methodology workflow was prepared in order to achieve results that compare well with previous studies. Extraction of blue ice is not an easy task, as dust and white snow appear on it as non-target features. Various semi-automatic extraction methods, such as TERCAT, the Target Detection Wizard, Mapping Methods, Spectral Matching and Object Based Image Analysis (OBIA), are used for extracting blue ice areas in Antarctica. The initial results obtained were good but not good enough to be retained as the best results. Many trials were carried out for extracting blue ice in Antarctica, and the best results were kept in the methodology workflow so that the results of every trial could be compared against them.
Both object-based and pixel-based classification are used in the workflow to get good results. Reference data (digitized data) for the blue ice areas was prepared from the high resolution WorldView-2 data, and the extracted blue ice areas were obtained from the semi-automatic extraction methods and OBIA. Within the extracted result, blue ice is considered the target and white snow appearing on it the non-target. By comparing the reference data and the extracted data, Bias, % Bias and RMSE values were calculated, and the averages of the Bias, % Bias and RMSE values were then estimated.
Bias = Ref A - Ext A
% Bias = ((Ref A - Ext A) / Ref A) × 100
RMSE = √( Σ (Ref A - Ext A)² / n )
Where,
Ref A is the Reference area, Ext A is the Extracted area, and
n = number of tiles processed.
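As a concrete illustration of these metrics, the following minimal Python sketch computes Bias, % Bias and RMSE from reference and extracted areas per tile; the area values are placeholders for illustration only, not results from this study.

# Minimal sketch of the accuracy metrics defined above, assuming the reference
# (digitized) and extracted blue ice areas for each tile are already known.
import math

ref_areas = [12.4, 8.1, 15.0]   # reference (digitized) BIA per tile, e.g. in km^2
ext_areas = [11.9, 8.6, 14.2]   # semi-automatically extracted BIA per tile

n = len(ref_areas)
biases = [r - e for r, e in zip(ref_areas, ext_areas)]
pct_biases = [100.0 * (r - e) / r for r, e in zip(ref_areas, ext_areas)]
rmse = math.sqrt(sum((r - e) ** 2 for r, e in zip(ref_areas, ext_areas)) / n)

print("Mean Bias:  ", sum(biases) / n)
print("Mean % Bias:", sum(pct_biases) / n)
print("RMSE:       ", rmse)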
3.2 Semi-automatic extraction methods
The semi-automatic feature extraction approach attempts to combine the strengths of the human perception system, which robustly detects the target feature, with computer-aided processing, which provides fast extraction of the target feature and exact shape representation. In a semi-automatic feature extraction strategy, the target feature is first detected by human vision and a few estimates, in the form of seed points or training samples of the target feature, are typically given. The target feature is then delineated automatically by computer-aided algorithms.
3.2.1 TERCAT approach (ENVI 5.1 Exelis Help) [33]
The Terrain Categorization (TERCAT) tool creates an output product in which pixels with similar spectral properties are clumped into categories. These categories may be either user-defined, or automatically generated by the classification algorithm. The TERCAT tool provides all of the standard ENVI classification algorithms, plus an additional algorithm called Winner Takes All. This is a voting method that classifies pixels based on the majority compiled from all of the other classification methods that were conducted. In this research, the sub approaches for TERCAT are Maximum Likelihood, Spectral Angle Mapper, Parallelepiped and Winner Takes All.
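To make the Winner Takes All idea concrete, the short Python sketch below takes a majority vote across several per-pixel classification maps for a binary target/non-target case; the small class maps are synthetic stand-ins, not ENVI outputs.

# Minimal sketch of a "Winner Takes All" style vote over several classification
# results for the same tile (0 = non-target, 1 = blue ice). With a binary
# target, the majority vote is simply "at least half the classifiers agree".
import numpy as np

max_likelihood = np.array([[1, 0], [1, 1]])   # hypothetical classifier outputs
spectral_angle = np.array([[1, 0], [0, 1]])
parallelepiped = np.array([[1, 1], [0, 1]])

stack = np.stack([max_likelihood, spectral_angle, parallelepiped])
winner_takes_all = (stack.sum(axis=0) > stack.shape[0] / 2).astype(int)
print(winner_takes_all)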
3.2.2 Target Detection approach (ENVI 5.1 Exelis Help) [33]
Target detection algorithms work on the principle of extracting target features based on the spectral characteristics of initial training spectral signatures of the target features, while suppressing the background using the spectral signatures of non-target features. If the user knows that the image contains at least one target of interest, the wizard can be used to find other targets like it in the same image. The workflow can also be accessed programmatically, so the user can customize options if needed. Target detection tools (ENVI 5.1) were executed to perform supervised image processing tasks in workflows (CEM, ACE, OSP, TCIMF, and MT-TCIMF) to extract blue ice areas (BIAs) as the target and white ice as the non-target.
3.2.3 Spectral Matching approach (ENVI 5.1 Exelis Help) [33]
Spectral matching approaches extract the target features described in multispectral imagery based on the target feature’s spectral characteristics. Spectral matching algorithms determine the spectral similarity between the input satellite imagery and reference spectra to form an output product in which pixels with similar spectral properties are clumped into target and non-target categories. Spectral Matching tools (ENVI 5.1) were executed to perform supervised image processing tasks in workflows (MF, SAM, MTMF and SAMBM) to extract blue ice areas (BIAs) as the target and white ice as the non-target.
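The sketch below illustrates the spectral-similarity idea behind a Spectral Angle Mapper (SAM) style match: the angle between a pixel spectrum and a reference target spectrum, with smaller angles meaning closer matches. The 8-band spectra and the threshold are hypothetical placeholders, not measured WorldView-2 values.

# Minimal sketch of a spectral-angle similarity test between a pixel spectrum
# and a blue ice reference spectrum; smaller angle = better match.
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra."""
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

blue_ice_ref = np.array([0.55, 0.52, 0.48, 0.40, 0.30, 0.22, 0.15, 0.10])  # 8 bands
pixel = np.array([0.53, 0.50, 0.47, 0.41, 0.31, 0.21, 0.16, 0.11])

angle = spectral_angle(pixel, blue_ice_ref)
is_target = angle < 0.1  # classify as blue ice if within an angle threshold
print(angle, is_target)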
3.2.4 Mapping Methods approach (ENVI 5.1 Exelis Help) [33]
Selected hyperspectral Mapping Methods describe advanced concepts and procedures for analyzing imaging spectrometer data or hyperspectral images. Spectral Information Divergence (SID) is a spectral classification method that uses a divergence measure to match pixels to reference spectra: the smaller the divergence, the more likely the pixels are similar. Pixels with a measurement greater than the specified maximum divergence threshold are not classified. Endmember spectra used by SID can come from ASCII files or spectral libraries, or can be extracted directly from an image (as ROI average spectra). Mapping Methods (ENVI 5.1) were executed to perform supervised image processing tasks in workflows [SID SV (0.05), SID SV (0.07), SID SV (0.1), SID MV (0.05) and SID MV (0.09)] to extract blue ice areas (BIAs) as the target and white ice as the non-target.
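A minimal sketch of the SID measure described above is given below, assuming each spectrum is normalized into a probability-like distribution and compared with a symmetric relative-entropy measure; the spectra and the divergence threshold are illustrative only.

# Minimal sketch of Spectral Information Divergence between a pixel spectrum
# and a reference spectrum: spectra are normalized to probability-like
# distributions and compared with a symmetric Kullback-Leibler measure.
import numpy as np

def sid(x, y, eps=1e-12):
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

blue_ice_ref = np.array([0.55, 0.52, 0.48, 0.40, 0.30, 0.22, 0.15, 0.10])
pixel = np.array([0.53, 0.50, 0.47, 0.41, 0.31, 0.21, 0.16, 0.11])

divergence = sid(pixel, blue_ice_ref)
classified_as_blue_ice = divergence < 0.05  # maximum divergence threshold
print(divergence, classified_as_blue_ice)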
3.2.5 Object Based Image Analysis (OBIA) approach (eCognition Developer Help) [34]
Object Based Image Analysis (OBIA) is an advanced method used to segment a pixel-based image into map objects that can then be classified as a whole. This kind of analysis is ideal for mapping with high-resolution imagery, where a single feature (such as a tree) might have several different shades of pixels. Example rule-sets for Trials 1, 2, 3 and 4 for extracting blue ice areas in this research are stated below:
For Trial 1:
02.063 50 [shape: 0.8, compactness: 0.6] Creating ‘level 1’
Export view to segmentation (no geo)
Unclassified with mean nir-1 >= 50 and mean nir-1 <= … : export view to assign class (no geo)
Blue ice with mean nir-1 >= 50 and mean nir-1 <= … : export view to merging (no geo)
For Trial 2:
02.063 60 [shape: 0.8, compactness: 0.6] Creating ‘level 1’
Export view to segmentation (no geo)
Unclassified with mean nir-1 >= 100 and mean nir-1 <= … : export view to assign class (no geo)
Blue ice with mean nir-1 >= 100 and mean nir-1 <= … : export view to merging (no geo)
For Trial 3:
02.063 70 [shape: 0.8, compactness: 0.6] Creating ‘level 1’
Export view to segmentation (no geo)
Unclassified with mean nir-1 >= 150 and mean nir-1 <= … : export view to assign class (no geo)
Blue ice with mean nir-1 >= 150 and mean nir-1 <= … : export view to merging (no geo)
For Trial 4:
02.063 80 [shape: 0.8, compactness: 0.6] Creating ‘level 1’
Export view to segmentation (no geo)
Unclassified with mean nir-1 >= 200 and mean nir-1 <= … : export view to assign class (no geo)
Blue ice with mean nir-1 >= 200 and mean nir-1 <= … : export view to merging (no geo)
The above rule-set is employed to extract blue ice areas, as well as non-target features, depending on their mean band values. OBIA is making considerable progress towards spatially explicit information extraction, such as is required for spatial planning as well as for many monitoring programmes.
The semi-automatic extraction strategies and OBIA used in this study to extract blue ice areas (BIAs) are based on different underlying principles. To compare these strategies objectively, we kept the input ROIs (regions of interest, or training samples) constant across all methods for each tile; ROIs differ between tiles because the areas differ. After classifying the image into the target spectra, i.e. blue ice areas, using the semi-automatic extraction methods and the OBIA approach, the 12 semi-automatically extracted tiles (for BIAs) were vectorized to calculate the area of each individual tile.
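As a simple illustration of how a classified tile can be turned into an area estimate, the sketch below counts target pixels and multiplies by the pixel footprint; it assumes the 1.85 m multispectral pixel size quoted earlier and uses a synthetic mask in place of a real classification result.

# Minimal sketch: per-tile blue ice area from a binary classification mask.
import numpy as np

pixel_size_m = 1.85                              # multispectral ground sample distance
pixel_area_km2 = (pixel_size_m ** 2) / 1e6

blue_ice_mask = np.random.rand(500, 500) > 0.6   # stand-in classification result
extracted_area_km2 = blue_ice_mask.sum() * pixel_area_km2
print(round(extracted_area_km2, 4), "km^2 of blue ice in this tile")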
 

Effect of Surface Area on Reaction Rate

Surface Area vs. Reaction Rate
How does the surface area of pure cane sugar cubes affect the rate of dissolution in water?
Chandler Hultine
 
Abstract
The purpose of this lab was to investigate how surface area affects the reaction rate of a substance in a solution. The lab was designed to find out how varying the surface area of pure cane sugar cubes would affect their rate of dissolution in water.
The investigation was undertaken using five different groups of sugar cubes, each group having a different surface area from the others. The cubes were submerged and stirred in water until they completely dissolved, and the time that it took each of them to dissolve was recorded. The longer the time it took for the cubes to dissolve, the slower the reaction rate, and vice versa.


The initial hypothesis, that if the surface area of the cube increases then the rate of dissolution of the cube in water will also increase, because more of the cube is exposed to the water and more collisions of particles can occur at a time, was accepted due to the positive correlation between surface area and dissolution rate. The more broken up a cube was, the faster it tended to dissolve in water, and vice versa, because the more broken-up cubes had more surface area. (Abstract Words: 212)
 
Introduction
The overall aim of this lab is to investigate how surface area is related to reaction rate in terms of the dissolution rate of a substance in a solution. This lab will be experimenting with sugar cubes of the same volume, but different surface areas to see how exactly surface area affects the rate of dissolution.
How does the surface area of pure cane sugar cubes affect the rate of dissolution in water? If the surface area of the cube increases, then the reaction rate of the dissolution of the cube in water will also increase because more of the cube will be exposed to the water which will allow for more collisions of particles to occur at a time.3,6
With most things in life, size is a very important factor that people consider in many choices they make, whether it be deciding between the newest smartphones or burning wood chips versus entire logs in a fire.1 Seeing how size affects something is key when taking an item/idea and making it more effective. The purpose of this experiment is to see how the amount of surface area of a substance is related to the reaction rate when said substance is placed into a solution.5 This investigation is to see how the reaction rate of a substance can be either increased or decreased when placed into a solution.
Investigation
For the investigation, a variety of sources relating to how surface area and dissolution/reaction rates are connected were consulted. The main sources include, but are not limited to:

Research on the topic done by NASA,
An excerpt from Ansel’s Pharmaceutical Dosage Forms and Drug Delivery Systems,
And experimental research from sciencebuddies.org titled Big Pieces or Small Pieces: Which React Faster?

These sources have provided a great amount of background information, especially the article by NASA involving an explanation on the correlation between surface areas and reaction rates.
Materials
In order to complete this experiment, the following materials were required:

25 Sugar cubes (any brand, just make sure all the same)
1 Timer
5 Beakers (250mL)
1 Pipet
1 Thermometer
1 Knife
1 Paper towel or piece of paper (cut sugar cubes on)
1 Hammer or weighted object (to crush one of the sugar cubes into a powder like state)
1 Pencil and paper (to record observations)
1 Stirring device of any kind (like a chopstick)

Constants
Water source, brand of beakers, size of beakers, amount of water, stirring device, type of sugar cube, temperature of water, temperature of surroundings, temperature of beakers, cuts in sugar cubes, pipets, timer, thermometer
Procedure

Divide the 25 sugar cubes into groups of five so that each group has five sugar cubes.
Leave the first group untouched. This will be the group that has the smallest surface area.
Take the second group of five sugar cubes and, using the knife, cut each cube in half.
Take the third group of sugar cubes and cut each cube into quarters (cut each one in half then cut the halves in half).
The fourth group will be cut into eighths.
The last group of sugar cubes will be completely ground up into a powder. This will be the group with the greatest surface area.
Once all the cubes are cut up and put into groups, fill up each of the 5 beakers with water to the 200mL mark. Use a pipet to make the measurement precise.
Wait 30 minutes after filling the beakers with water to ensure they are all room temperature.
Begin with the uncut sugar cube. With the timer and stirring device at hand, place the uncut cube into the water-filled beaker and begin the timer and stirring as soon as the sugar cube is placed in the water.
Stir the sugar cube in the water until it completely dissolves/disappears in the water.
Stop the timer as soon as the sugar cube completely dissolves.
Record the results on a pre-made data table.
Repeat steps 6 to 9 for all variants of the sugar cube for one group.
Repeat the entire experiment for all 5 groups of sugar cubes, making sure that one group is finished before moving on to another group. DO NOT finish dissolving all of the sugar cubes of one specific surface area size and then move on to another set of the same surface area sized cubes; make sure the experiment is carried out group by group. Treat each group with the five different surface area sized sugar cubes as an individual experiment. This way a total of 5 experiments will be carried out.

Data
Trial 1
Size of Sugar Cube      Time (seconds) for dissolution
Full                    412
Half                    217
Quarter                 123
Eighth                  82
Powder                  51

Trial 2
Size of Sugar Cube      Time (seconds) for dissolution
Full                    401
Half                    202
Quarter                 150
Eighth                  77
Powder                  58

Trial 3
Size of Sugar Cube      Time (seconds) for dissolution
Full                    426
Half                    236
Quarter                 120
Eighth                  68
Powder                  47

Trial 4
Size of Sugar Cube      Time (seconds) for dissolution
Full                    455
Half                    241
Quarter                 117
Eighth                  81
Powder                  55

Trial 5
Size of Sugar Cube      Time (seconds) for dissolution
Full                    423
Half                    221
Quarter                 136
Eighth                  71
Powder                  52

Mean time for full sugar cube: 423.4 seconds
Mean time for half sugar cube: 223.4 seconds
Mean time for quarter sugar cube: 129.2 seconds
Mean time for eighth sugar cube: 75.8 seconds
Mean time for powder sugar cube: 52.6 seconds
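The mean times above can be checked directly from the five trials; the short Python snippet below recomputes them from the recorded data.

# Recompute the mean dissolution times from the trial data tables above.
times = {
    "Full":    [412, 401, 426, 455, 423],
    "Half":    [217, 202, 236, 241, 221],
    "Quarter": [123, 150, 120, 117, 136],
    "Eighth":  [82, 77, 68, 81, 71],
    "Powder":  [51, 58, 47, 55, 52],
}

for size, values in times.items():
    mean_time = sum(values) / len(values)
    print(f"Mean time for {size.lower()} sugar cube: {mean_time:.1f} seconds")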
Results and Discussion
The results of this experiment show that a more broken-up sugar cube dissolved faster in water, while less broken-up sugar cubes took longer to dissolve. Since the purpose of this experiment was to find the relationship between surface area and reaction rate, the experiment was successful.
Trial 1 data shows the times nearly being cut in half as the sugar cube becomes more crushed up, except for the transition between the powder and sugar cube broken up into eighths.
Trial 2 data also shows the time between each tier of sugar cubes being split in half as the surface area increases. However, this is not true for the half-broken up and quarter-broken up sugar cubes. The time in seconds for dissolution rate for those two sugar cubes only had a difference of ~50 seconds, which is not even close to half. This makes me wonder what happened during that part of the lab, because the data does not follow the conventional trend like the rest of my experiment results. A possible source of error for this trial was that I did not collect all of the sugar from the sugar cube after it was cut. When all of the sugar is not completely collected, the data can become skewed because not all of the sugar cube is actually being dissolved in the solution.
Trials 3, 4, and 5 all show similar results: the times are very close to each other for each size of sugar cube that was dissolved, and they are also relatively close to the data from trial 1. This suggests that trial 2 was carried out with somewhat less precision.
What does all of this data mean? For a start, the data and experiment are relevant to any other experiment that tries to determine the relation between surface area and reaction rate. This is because whenever different rates of reaction are being tested, a change in the surface area of a reactant/variable will affect the rate of reaction, since altering the surface area also alters the frequency of particle collisions.1,3,7 For example, if the surface area of an object that is about to be placed in a solution is doubled, there will be twice as much area on the object for particles to potentially interact with, compared with the original object and its original surface area.3 This is true for all aspects of reaction rate; surface area plays a substantial role whenever reaction rate is tested.1,3
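As a rough illustration of this reasoning, the sketch below computes the total exposed surface area of an idealized 1 cm cube after successive full, perpendicular cuts (whole, halves, quarters, eighths); real sugar cubes are granular, so the actual areas will differ, but the trend of increasing area with more cuts is the same.

# Illustrative calculation: each full cut through a 1 cm cube exposes two new
# 1 cm x 1 cm faces, so total surface area grows with every cut.
def total_surface_area(n_cut_planes, side=1.0):
    """Total exposed surface area (cm^2) after n full, perpendicular cuts."""
    return 6 * side ** 2 + 2 * n_cut_planes * side ** 2

# 0 cuts = whole cube, 1 = halves, 2 = quarters, 3 = eighths
for label, cuts in [("whole", 0), ("halves", 1), ("quarters", 2), ("eighths", 3)]:
    print(f"{label}: {total_surface_area(cuts):.0f} cm^2 of exposed surface")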
Conclusion
Initial Hypothesis: If the surface area of the cube increases, then the reaction rate of the dissolution of the cube in water will also increase because more of the cube will be exposed to the water which will allow for more reaction between water and sugar cube to occur at one time.3,6
The data collected strongly supported the initial hypothesis. It is apparent from the data that the cubes that were more broken up, and therefore had more surface area, dissolved much faster than cubes that were less broken up and did not have as much surface area. The data shows that more surface area does mean a faster reaction rate, and vice versa.3 The powdered, completely crushed-up sugar cube had the quickest time for dissolution in water, on average 52.6 seconds, whereas the full, untouched sugar cube with the smallest surface area had the slowest time for dissolution, on average 423.4 seconds. Therefore, the hypothesis is accepted with the support of the data. The larger cubes that were not cut up took the longest to completely dissolve, whereas the finely crushed cubes dissolved quickest.5
The accuracy of this experiment could be slightly improved in the future by adopting a more consistent and reliable method of stirring the sugar cubes when they are placed in water. This would improve the accuracy of the time that each cube takes to completely dissolve in the water.
Bibliography
1. Reaction Rates. Publication. NASA, n.d. Web.
2. Allen, Loyd V., Nicholas G. Popovich, and Howard C. Ansel. Ansel’s Pharmaceutical Dosage Forms and Drug Delivery Systems. Philadelphia: Lippincott Williams & Wilkins, 2005. Print.
3. Clark, Jim. “The Effect of Surface Area on Rates of Reaction.” N.p., n.d. Web. 06 May 2013.
4. Bayer HealthCare, 2005. “Temperature and Rate of Reaction.” Bayer HealthCare, LLC [accessed May 8, 2007]. http://www.alka-seltzer.com/as/experiment/student_experiment1.htm
5. Olson, Andrew. “Big Pieces or Small Pieces: Which React Faster?” Science Buddies, n.d. Web. 06 May 2013.
6. Connors, Kenneth. Chemical Kinetics. VCH Publishers, 1990, pg. 14.
7. Isaacs, N.S. Physical Organic Chemistry, 2nd edition, Section 2.8.3. Addison Wesley Longman, Harlow, UK, 1995.
(Bibliography Words: 126)
 

Geographic Study of Mountain Area

CHAPTER – II
STUDY AREA PROFILE
2.0 General:
The study area (13,858.83 ha) is a mountain range between the River Pravara and River Mula basins. The range starts at the western border at Ghatghar village and ends at the eastern border at Washere village, in Akole tahsil, Ahmednagar district, Maharashtra state. The extent of the study area is 19° 35′ 06.86″ to 19° 30′ 13.08″ N latitude and 73° 37′ 00.03″ to 74° 04′ 24.65″ E longitude. It covers parts of Survey of India topographic sheet numbers 47 E/10, 11, 14, 15 and 47 I/2, 3. The depth and water-holding capacity of the soils vary even with slight changes in slope, which is one of the reasons for the variation in forest land. The slope of the area decreases from NW to SE, and the elevation varies from 560 m to 1646 m above mean sea level.


The study area lies in the Sahyadri mountain (Western Ghats) region of Maharashtra state. Geologically, the area is formed from basaltic lava. Basalt rock prevents percolation of rain and reservoir water into the underground zone. Because of this rock type the soil cover is very shallow at the top of the mountain and deepens in the foothill zones near the water reservoirs. Basic intrusives (dykes) are mainly found near this area. These are the principal reasons for the shallow soil cover. Very shallow loamy and shallow clayey soils are found on the moderate (1°-3°) and steeper (3°-6°) slopes. Soil moisture affects the amount of vegetation cover with respect to soil type and slope; therefore the north-west and south zones have the maximum vegetation cover compared with the rest of the study area. The area receives an annual rainfall of about 440.4 mm. The mean annual maximum and minimum temperatures are 39.8 °C and 8.7 °C respectively. Local tribal people engage in agricultural activities on land reclaimed from the forest area; forestry is the second occupation after agriculture.
2.1 Geology:
The study area is part of the Sahyadri Mountain Range (Western Ghats), also called the Deccan Trap, formed by basaltic rocks; amygdaloidal basalts form the bedrock.
The area has shallow soils such as loam and clay, further divided into subtypes based on depth and slope classes, overlying weathered and fractured rock that rests on hard massive basalt. The basalts are nearly horizontal, separated by thin layers of ancient soil and volcanic ash (red bole). The basalt flows are nearly flat-lying (the sequence has a regional southerly dip of 0.5-1°) and mainly belong to the Thakurvadi Formation (Fm) of the Kalsubai Subgroup (Khadri et al. 1988; Subbarao and Hooper 1988).
The lithology of the area indicates that around 77.17% of the area is covered by 12-14 compound pahoehoe flows and some aa flows (max. 206 m). Around 4.53% is covered by 2 compound pahoehoe flows (40-50 m), and megacryst compound pahoehoe basaltic flow M3 (50-60 m) covers up to 3.26%. The remaining 0.89% is covered by 5 aa and 1 compound pahoehoe basaltic lava flows (max. 160 m), 4-5 compound pahoehoe basaltic lava flows (max. 150 m), and basic sill/lava channels respectively. The regional stratigraphy of the Deccan basalts has been described by Beane et al. (1989), Khadri et al. (1988), and Subbarao and Hooper (1988). Structural indices indicate basic intrusives (dykes) in the north-west and south-east parts. One fault line crosses the middle part of the study area.
2.2 Relief:
The study area is situated in the middle of Akole tehsil. It has a horizontal shape and acts as a natural water divider. Relief shapes and fixes the surface geographical landforms. The altitude of the area varies from less than 640 m (minimum) to 1646 m (maximum). The formation of soil, natural vegetation cover and soil moisture conditions are strongly controlled by the relief. Contour lines demarcate the height of the study area above mean sea level. Kalsubai (1646 m), the highest peak of Maharashtra state, is located in Akole tehsil, and Harishchandragarh (1422 m), the second highest peak in the tehsil, is located in the south-west part of the study area. Relief decreases towards Washere village along this mountain range. The drainage network, whose flow depends on relief, is explained in the next section.
2.3 Slope:
The slope of the study area was calculated in degrees (0° to 90°) on the basis of contours and divided into 7 classes. Gentle slopes of up to 1° are where water is retained and collected in the dam. Soil depth, cover and type also depend on the nature of the slope. Hill tops and cliff sides have precipitous to very steep slopes (12° to 90°).
The moderate to steep (1° to 12°) foothill slope zone has the maximum forest cover, in the north-west and south-west directions. Material eroded from the hill tops concentrates on the foothill slopes and is favourable for soil formation; that is why soil moisture, soil depth and vegetation cover are greater in this zone than elsewhere. Soil types and their characteristics are elaborated in the next section.
2.4 Drainage:
The drainage network develops continually and is responsible for the creation of different landforms. Relief controls the drainage flow, and streams erode the land surface into different geographical landform features; relief and streams are strongly correlated. The study area contains the origin of the main river, the Pravara, which flows from north-west to north-east. The river has a main dam and minor dams; Bhandardara, the main dam on the Pravara, is an important land-cover feature in the study area, and this water body plays an important role in robust forest change analysis.
Soil moisture after the rainy season depends on the drainage network and water reservoirs. This makes a difference in the type of vegetation cover, from dense forest to open scrub land. The relation of drainage pattern to slope, and of slope to forest growth, is explained in detail in the next section.
2.5 Soil:
The growth and reproduction of forest cannot be understood without knowledge of the soil. Soil and vegetation have a complex interrelation because they develop together over a long period of time. The vegetation influences the chemical properties of the soil to a great extent. The selective absorption of nutrient elements by different tree species and their capacity to return them to the soil bring about changes in soil properties (Singh et al. 1986). Soil elements are among the most important biophysical factors. The concentration of elements in the soil is a good indicator of their availability to plants, and their presence in the soil gives good information towards the knowledge of nutrient cycling and the biochemical cycle in the soil-plant ecosystem (Pandit and Thampan 1988). Soil generation depends on geology, topography, time span, climatic conditions, and organic and inorganic factors, among others. Forests in general have a greater influence on soil conditions than most other plant ecosystem types, due to a well-developed ‘‘O’’ horizon, moderating temperature and humidity at the soil surface, input of litter with high lignin content, high total net primary production, and high water and nutrient demand (Binkley and Giardina 1998).
The study area is a hilly zone; the soil is very shallow on the hill tops, while excessively drained loamy soil (a rich soil consisting of a mixture of sand, clay and decaying organic material) is found on the steep slopes in the north-west direction. Shallow, well-drained clayey soil and slightly deep, excessively drained loamy soil are found over moderate and gentle slopes respectively. Clay soils are made up of very fine, microscopic particles that fit together tightly, resulting in tiny pore spaces between them. These tiny pore spaces allow water to move through them, but at a much slower pace than in sandy soils; clay soils therefore drain quite slowly and hold more water than sandy soils. Loam soils have a maximum water-holding capacity (MWHC) of approximately 0.18 inches of water per inch of soil depth, and clays hold up to 0.17 inches of water per inch of soil depth. However, soil types, soil elements and soil depth depend on the geology of the study area, as explained in the geology section.
2.6 Population and economic activities:
Most of the people living around this area belong to the tribal population. Primary economic activities include shifting cultivation and fishery, among others.
2.7 Spectral properties of plants in the forest:
Interaction of radiation with plant leaves is extremely complex. The general features of this interaction have been studied, but many spectral features are as yet unexplained. Gates et al. (1965) are considered pioneers in the study of the spectral characteristics of leaf reflection, transmission and absorption. The optical properties of plants have been further studied, to understand the mechanisms involved, by Gausman and Allen (1973), Woolley (1971) and Allen et al. (1970).
A synthesis of parameters such as the reflection of plant parts, the reflection of plant canopies, the nature and state of plant canopies, and the structure and texture of plant canopies is required to fully understand remote sensing data collected from spaceborne and aerial platforms. Such syntheses have been attempted for crop canopies through the development of models but have not yet been fully achieved. It is first necessary to discuss the electromagnetic spectrum and its interaction with vegetation canopies; the factors affecting the spectral reflectance of plant canopies, with their possible applications in remote sensing technology, are discussed subsequently.
Vegetation reflectance is influenced by the reflectance characteristics of individual plant organs, canopy organization and type, growth stage of the plants, and the structure and texture of the canopies. The synthesis of these four aspects provides the true reflectance characteristic. However, various authors have studied the effects of individual parameters without fully achieving models that determine vegetation reflectance characteristics.
2.7.1 Nature of the Plant:
Numerous measurements have been performed to evaluate the spectral response of various categories of plants with a spectrophotometer (Fig. ***).
For a plant in its normal state, i.e. typical and healthy, the spectral reflectance is specific to the group, the species and even the variety at a given stage of its phenological evolution. The general pattern of spectral reflectance of a healthy plant in the range from 0.4 to 2.6 µm is shown in figure ****.
The very abrupt increase in reflectance near 0.7 µm and the fairly abrupt decrease near 1.5 µm are present for all mature, healthy green leaves. Absorption is very high further into the far infrared, beyond 3.0 µm. Thus, the typical spectral curve of a plant is divided into three prominent zones correlated with morphological characteristics of the leaves (Gates, 1971).
2.7.2 Pigment Absorption Zone:
The important pigments, viz. chlorophyll, xanthophylls and carotenoids, absorb energy strongly in the ultraviolet, blue and red regions of the EMR. Reflectance and transmittance are weak, and the energy absorbed in this part of the spectrum is utilized for photosynthetic activity (Allen et al., 1970).
2.7.3 Multidioptric Reflectance Zone:
In this zone the reflectance is high, while the absorptance remains weak. All of the unabsorbed energy (30 to 70%, according to the type of plant) is transmitted. The reflectance is essentially due to the internal structure of the leaf, which the radiation is able to penetrate, and is of a physical more than a chemical nature. Apart from the contribution of the waxy cuticle, the magnitude of the reflectance depends primarily upon the amount of spongy mesophyll.
2.7.4 Hydric Zone:
The amount of water inside the leaf affects the pattern of spectral reflectance, with water-specific absorption bands at 1.45 µm, 1.95 µm and 2.6 µm. Liquid water in a leaf causes strong absorption throughout the middle infrared region. Beyond 2.5 µm the reflectance falls below 5% due to atmospheric absorption, and beyond 3 µm vegetation starts acting as a quasi-blackbody (Gates et al., 1965).
Numerous factors, either internal to the plant or external, arising from environmental conditions, influence its specific spectral reflectance. The above descriptions are true only for normal, mature and healthy vegetation. The factors which affect the spectral reflectance of leaves are leaf structure, maturity, pigmentation, sun exposure, phyllotaxis, pubescence, turgidity (water content), nutritional status and disease. The most important factors are pigmentation, nutritional status, leaf anatomy and water content. While sun exposure and phyllotaxy affect the canopy reflectance, phenological state and disease are linked to the primary factors affecting spectral reflectance (Woolley, 1971).
2.8 Spectral vegetation indices:
Radiant energy intercepted by a vegetative canopy is primarily scattered by leaves, either away from the leaf surface or into the leaf interior. The scattered radiation is reflected, transmitted or absorbed by the leaves. The partitioning of radiation into reflected, transmitted or absorbed energy depends on a number of factors, including leaf cellular structure (Gates et al., 1965; Knipling, 1970; Woolley, 1971), leaf pubescence and roughness (Gausman, 1977), leaf morphology and physiology (Gausman et al., 1969 a, b; Gausman and Allen, 1973; Gausman et al., 1971) and leaf surface characteristics (Breece and Holmes, 1971; Grant, 1985).
Leaves are not perfectly diffuse reflectors but have both diffuse and specular characteristics. Leaf transmittance tends to have a non-Lambertian distribution, while leaf reflectance is dependent on illumination and view angles. Knowledge of the interaction of solar radiation with individual leaves is necessary for several reasons, especially to interpret and process remotely sensed data. The typical reflectance and transmittance spectra of an individual plant leaf indicate three distinct wavelength regions of interaction: visible (0.4-0.7 µm), near infrared (NIR) (0.7-1.35 µm) and mid infrared (mid IR) (1.35-2.7 µm). Thus the typical spectral curve of a plant is divided into three prominent zones correlated with the morphological/anatomical/physiological characteristics of the leaves: the Pigment Absorption Zone, the Multi-Dioptric Reflectance Zone and the Hydric Zone.
The analysis of all remotely sensed data involves models of many processes wherein the EM radiation is transformed (the scene, atmosphere and sensor) and whereby inference is made about the scene from the image data. The most common strategy for relating remote sensing data to vegetation canopies has been via the correlation of vegetation indices with vegetation structure and functional variables. This simple empirical approach has yielded substantial understanding of the structure and dynamics of vegetation at all scales. These indices are capable of handling variation introduced in a scene due to atmosphere or sensor and vegetation background influence in low vegetation cover areas.
The capacity to assess and monitor the structure of terrestrial vegetation using spectral properties recorded by remote sensing is important because structure can be related to functioning, that is, to ecosystem processes that are ultimately aggregated up to the functioning of ecosystems at local, regional and global levels. The various spectral indices can be categorized into approximately five types: ratio indices, vegetation indices, orthogonal-based indices, perpendicular vegetation indices and the tasseled cap transformation.
Remote sensing of cropland, forest and grassland involves the measurement of the reflected energy of components in the presence of each other. The development and usefulness of vegetation indices depend upon the degree to which the spectral contribution of non-vegetation components can be isolated from the measured canopy response. Although vegetation indices have been widely recognized as valuable tools in the measurement and interpretation of vegetation condition, several limitations have also been identified, related to soil brightness effects and secondary soil spectral deviations. The use of site-specific soil lines reduces the soil background influence; in this context SAVI, GRABS and PVI hold greater promise in sparsely vegetated areas.
Vegetation indices are a simplified method of extracting information about vegetation parameters from multispectral data; however, their use in spectral modelling needs to be studied in the context of the spectral dynamics of earth surface components.
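As a brief illustration of two of the index families mentioned above, the sketch below computes a ratio-based index (NDVI) and a soil-adjusted index (SAVI, with adjustment factor L = 0.5) from red and near-infrared reflectance; the band values are synthetic placeholders rather than Landsat measurements.

# Minimal sketch of NDVI and SAVI computed per pixel from red and NIR reflectance.
import numpy as np

red = np.array([[0.08, 0.20], [0.05, 0.30]])   # red reflectance (placeholder)
nir = np.array([[0.45, 0.25], [0.50, 0.32]])   # near-infrared reflectance (placeholder)

ndvi = (nir - red) / (nir + red)

L = 0.5                                         # canopy background adjustment factor
savi = ((nir - red) / (nir + red + L)) * (1 + L)

print("NDVI:\n", np.round(ndvi, 3))
print("SAVI:\n", np.round(savi, 3))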
2.9 Resume:
Forest cover is an important natural resource for the environment and the socio-economy on the surface of the earth, and it can help bridge the gap between nature and human beings and reduce conflicts between them. Changes in forest land increase imbalance in the ecosystem and climatic conditions, and contribute to temperature change, land degradation, drought-prone zones, soil erosion and related man-made pressures. The tribes living in the mountain and foothill areas utilize forest material for their domestic needs. Therefore, the objectives of detecting and delineating the forest land using ordinary classification methods have been outlined in the present study, and the methodology has been outlined in this chapter. The Landsat-5 TM and Landsat-7 ETM+ datasets have been suggested as the source of information to achieve the objectives of the study. Basic knowledge regarding the spectral properties of the forest and physiographic elements, as well as spectral vegetation indices, has been presented in this second chapter to provide an information base for image analysis, classification and interpretation in the following chapters.
*********
 

Small local grocery store: disadvantages in metropolitan area

EST1 Task 1
Being a small, local grocery store chain in a major metropolitan area is daunting. National and regional chains regularly put small stores owned by local companies out of business. This pressure, along with the broader social responsibility taking hold among consumers, requires all companies to adjust their organizations from a solely profit-seeking motivation to one that is socially and ethically aware in its business outlook.


Company Q recently closed two stores in higher-crime-rate areas. Those closures were attributed to the consistently negative balance sheets of those stores. If these stores in higher-crime areas were not making a profit, what was the reason they were losing money? Taking a socially responsible approach to all of our store locations will mean understanding our customers. For example, if a store in a predominately Jewish neighborhood is selling non-kosher items, we could expect those items not to sell in the same volume as kosher items. Taking a Jewish-centric approach to a store’s marketing in a Jewish neighborhood makes good business and ethical sense. When we understand our customers and their communities, we understand that business flourishes where society thrives.
Company Q, after many years of customer requests, began to offer a limited selection of health-conscious and organic products in all of its stores. Offering organic and other health-conscious products in response to customer demand is a positive step in forming a social contract with our customers instead of merely offering them what we feel they need. Understanding our customers means providing them with what they want and what they need. Offering high-margin products to customers who have neither the financial ability to afford the higher costs associated with those products nor the desire to purchase them will not help Company Q’s bottom line.
Product choices must be targeted to the consumer. Ethnic foods must suit the neighborhood they are being offered in, and stores in culturally or racially specific communities must be stocked with products that meet the needs of those communities. It takes very little effort to understand our customers, but that little effort can be the difference between a store succeeding or failing. Ensuring that Company Q’s stores differentiate themselves in the marketplace will help give the company a competitive edge in these tough economic times.
Company Q’s current policy of disposing of day-old products is a perfect example of missing a great public relations and corporate social responsibility opportunity. When asked by the area’s food bank to donate product that would otherwise be thrown away, management declined, citing concerns about lost revenue through possible fraud and theft by employees rather than donating the food.
The first concern with this issue is understanding its costs and actual or perceived benefits. Company Q will write off any product that needs to be disposed of because it has exceeded its expiration date. The products are disposed of in a dumpster, and that is the end of the product’s usefulness in Company Q’s current viewpoint. The company, if paying by weight or volume, will also incur greater charges from the waste removal company for disposing of the unsellable product instead of donating it to the local food bank.
The second concern with not donating product that would otherwise be thrown away is employees’ attitudes. If we have communicated to our employees that we will not be socially conscious toward people in need in our community, what does that say to our employees, who are themselves part of the local community? In our digitally connected society it would be foolhardy not to expect a socially aware employee to film the disposal of food that we may not be able to sell but which could be given away and used by those in need. The potential negative feedback of such an event for a small chain like Company Q cannot be overstated.
The above concern dealt not just with the direct financial costs to our company but with the loss of social capital in our current position. Thankfully, Company Q does not need to expend much financial or employee effort to make a considerable difference in the respect our stores earn in their neighborhood communities. One delivery van can be used to pick up the product that would otherwise be thrown away at the end of the workday and transport it, driven by the store’s supervisor, to the local food bank. The food would be unloaded by food bank staff while the Company Q supervisor discusses with the food bank managers the impact those donations will have on the community. The marginal time spent loading and travelling to the local food bank is at worst a minor inconvenience for the store owner and a major public relations benefit not only for the local store but for Company Q in general. We could also expect a reduction in our waste removal costs, since less product will be thrown away.
“The point is to attract customers wanting to make a difference in society through their purchasing” (Bronn, 2001, p.2). The intrinsic and extrinsic benefits, not only for Company Q management but also for store owners and store employees alike, clearly prove the need for a socially conscious corporate attitude.
References
Bronn, P.S., & Vrioni, A.B. (2001). Corporate social responsibility and cause-related marketing: an overview. International Journal of Advertising, 2. Retrieved February 27, 2010, from http://www.basisboekmvo.nl/files/cause-related.pdf
 

Current Environmental Issues in the Greater Toronto Area

Introduction

In this research paper, I will be talking about some current environmental issues in the Greater Toronto Area (GTA). There are many current environmental issues in Toronto concerning different types of pollution caused by commercial and consumer activity in the city. There are many harmful effects of human activity, such as air pollution, water pollution, and other influences caused by urban infrastructure like highways and public transportation services. But in this paper, I am going to focus on these three current environmental issues in Toronto: the large amount of air pollution in our city, the extensive contamination in the waterways around the GTA, and the tremendous difficulties concerning waste management. I will first discuss the root cause for each issue. Then I will explain the effects of each issue, particularly those on health, the environment, and quality of life. I will conclude my paper by giving recommendations on how to solve each of the three primary environmental issues in Toronto.

Discussions and Analysis

The first environmental issue I will discuss is the large amount of air pollution in Toronto. In 2004, Toronto Public Health reported that there were around 1,700 premature deaths and 6,000 hospitalizations each year in Toronto because of air pollution. (Stephanie Gower, 2014) By 2014, air pollution still had a large impact on the health of people in Toronto, even though air quality had improved: there were around 1,300 premature deaths and 3,550 hospitalizations attributed to air pollution that year. (Stephanie Gower, 2014)


More than 50% of Toronto’s air pollution is attributed to sources in and around the city. The primary causes of air pollution are exhaust from motor vehicles and emissions from factories. Millions of cars, trucks, vans and buses use the city’s roadways each year, and the number of factories in the GTA is always increasing. On average, these sources of air pollution account for around 280 deaths and 1,090 hospitalizations in Toronto each year. (Stephanie Gower, 2014) These sources account for around 42% of premature deaths and 55% of hospitalizations caused by air pollution in Toronto. These figures show a decrease compared with 2007 estimates, when air pollution from vehicles gave rise to about 440 deaths and 1,700 hospitalizations. (Stephanie Gower, 2014) But air pollution still has a very large health impact.

There are two primary categories of effects of air pollution: effects on health and effects on the environment. Air pollution has very adverse effects on the health of children and adults. There are five key air pollutants that can harm humans: sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), fine particulate matter (PM2.5), and ozone (O3). (Health effects of air pollution, 2017) The elderly and young children are the most affected by air pollution. The five air pollutants affect breathing and lung condition, which can ultimately lead to illnesses such as asthma, allergies and chronic obstructive pulmonary disease (COPD), and to heart conditions such as angina, arrhythmia, heart attack, heart failure and hypertension. (Health effects of air pollution, 2017) There are also symptoms that might appear in a person’s behaviour because of air pollution, including tiredness, headaches, dizziness, coughing, sneezing, wheezing, difficulty breathing, mucous in the nose or throat, and dryness or irritation of the eyes, nose, throat and skin. (Health effects of air pollution, 2017) These harmful health effects cause many people in Toronto to lose time at work, be hospitalized, or even die. As for the negative environmental effects of air pollution, there are several, including acid rain, harm to wildlife, lower crop yields, forest damage and global climate change. (JR., 2018) Acid rain makes river and lake water unsuitable for some fish and other wildlife in Toronto. Toxic pollutants in the air have a severe impact on wildlife, causing animals to have many health problems and shortening the lifespan and worsening the quality of life of countless species. In terms of crop and forest damage, air pollution can damage crops and trees in many ways, lowering crop yields and worsening the quality of produce. As for global climate change, air pollution produces greenhouse gases (GHGs), which cause the world to become gradually warmer each year and drive major weather disasters with increasing regularity.

We know that transportation is a major contributor to air pollution, which has harmful impacts on health and the environment in Toronto. To reduce pollution and its harmful effects on people and the environment, there are certain things we can do as a society. We need to reduce vehicle usage, particularly the number of cars on the road, as this directly causes a high percentage of air pollution. We need people to drive less and to walk and cycle more, and we need everyone to use more public transit, such as the bus, streetcar and subway. We also need to reduce the pollution produced by factories. Companies should be taxed more heavily for their harmful emissions, even if this results in higher prices for consumers and less profit for businesses. We need to reduce our consumption, reuse more of the materials that we typically throw away, and recycle more as well. All of these actions will result in better air quality, a cleaner environment, and fewer illnesses due to pollution.

The second environmental issue that we are going to discuss is water pollution. Lakes, rivers and streams in the GTA are becoming over-polluted. When we think about water pollution in the GTA, we might think about smaller waterways such as the Don River, Humber River and Rouge River, but in this research paper I am going to focus on the sources of fecal pollution in Lake Ontario. Municipal wastewater is a major source of fecal pollution. (Thomas A. Edge, 2007) Even though we have made improvements to control pollution in Lake Ontario through improved sewage treatment plants, there is still room for improvement. With treatment plant effluents and combined sewer overflows, beach closures persist in a lot of communities around Lake Ontario. There are many sources of fecal pollution in Lake Ontario, such as droppings from birds, runoff from impervious surfaces, mats of Cladophora green algae, and foreshore sand. (Thomas A. Edge, 2007) According to a recent investigation, fecal droppings from birds are the major contributor to the elevated numbers of Escherichia coli in the beach water of Lake Ontario. This makes the water in Lake Ontario very dirty, which harms people and the environment.

The effects of fecal pollution on public health and the environment are large and harmful. Fecal pollution is extremely unsanitary and dangerous, as it contains pathogenic organisms that cause gastrointestinal infections following ingestion, or infections of the upper respiratory tract, ears, eyes, nasal cavity and skin. (Faecal pollution and water quality, 2016) Fecal pollution causes infections and illness in people, and it is difficult to detect through routine surveillance systems. According to research, a number of adverse health outcomes, including gastrointestinal and respiratory infections, were found to be related to fecal pollution in Lake Ontario. (Thomas A. Edge, 2007) This is a definite burden of disease on public health and a source of economic loss. Fecal pollution also has an extreme effect on the environment. It makes the water in Lake Ontario very dirty, making it difficult for species of fish and other aquatic creatures to survive or breed. And if they cannot survive in the water, people cannot catch and eat them, which results in a large loss to the economy and to public utility. Also, if the water in Lake Ontario is dirty, this has a large effect on public health because the water is not safe to drink.

We know that fecal pollution has a large effect on Lake Ontario. Right now there is no simple way to prevent this pollution entirely, because the birds that produce the enormous amounts of fecal droppings cannot be eliminated. But we can use antimicrobial resistance analysis, a technique that helps identify the sources of contamination so that water quality can be managed. We also need a better understanding of the sand–water interface on beaches to inform sand-grooming practices, and a well-planned beach management system to protect public health around Lake Ontario. (Thomas A. Edge, 2007)

The last environmental issue that I will discuss is waste management. In 2006, 27,249,178 tonnes of waste was dumped in Canada; 10,437,780 tonnes of that was dumped in the Province of Ontario, and 1,218,540 tonnes was dumped in the Greater Toronto Area (GTA). Around 38.3% of Canadian waste was dumped in Ontario, and around 11.7% of Ontario’s waste was dumped in the GTA. (Shamsul, 2010) In the same year, 7,749,030 tonnes of waste was diverted in Canada, 2,396,856 tonnes in Ontario, and 913,930 tonnes in the GTA. (Shamsul, 2010) In 2008, 1,067,054 tonnes of waste was dumped in the GTA and 1,078,261 tonnes was diverted. Compared with the 2006 figures, disposal of waste in the GTA decreased by 12.43% and diversion increased by 18% by 2008. (Shamsul, 2010) But waste management is still a big problem requiring a lot of improvement in the GTA, especially given continuing population growth. The major waste management issue in the GTA is landfills. Dumping too much garbage worsens land pollution and leads to an uncontrolled build-up of all sorts of solid waste. (What are Landfills?, 2016) There are multiple effects of poor waste management. The first is on air quality: waste affects the air quality around landfills because of the toxic fumes produced, including greenhouse gases (GHGs) (Eugene A. Mohareb, 2011), which are very bad for the environment. The second is groundwater pollution: harmful run-off from liquid that leaches out of landfills contaminates groundwater, and it is hard to prevent this deterioration. This has severely adverse effects on the environment because animals and plants drink and absorb this poison. The third harmful effect of landfill waste is on public health: people who live near landfill areas face risks of health problems including birth defects, low birth weight and particular cancers; other undesirable impacts are sleepiness, nausea, headaches and lassitude. The last unwanted effect of landfill waste is soil and land pollution: landfill waste can directly render soil and land unusable, destroying the ground area as toxic chemicals spread over it. Over time the soil is irreparably damaged, reducing soil fertility and greatly harming plant life.

In order to address the landfill waste issue in the GTA, there are multiple ways to improve the situation. The first solution is source reduction, which is the most effective way to minimize the waste sent to landfill. (Shamsul, 2010) It reduces the volume and toxicity of generated waste, lowering GHG pollution, saves transportation costs and extends the life of landfills. For example, instead of using plastic shopping bags, we can encourage people to use cotton or other reusable bags, and people can buy fewer unnecessary consumables and use them as efficiently as possible to reduce waste. The second solution is for people and companies to reuse and recycle much more material than at present. This is an efficient way to reduce the millions of tonnes of waste that is constantly dumped in landfills. The three Rs help to extend the life of landfills and reduce GHG pollution, and they also save limited and costly resources, which will steadily improve our environment. The last solution is well-designed, integrated waste management, as this can directly decrease the impacts of landfill on soil, air and water. If landfills are well designed and operated, then we will have a cleaner environment that is harmed less by pollution.

Conclusion

In conclusion, there are still a lot of environmental issues in the GTA, and they are bound to persist for decades to come. After researching the pressing environmental issues of the GTA, I recognize that we can do a lot more to reduce the harmful pollution that results from human activity. People and businesses should face larger fines and penalties for excessive pollution, dumping or failing to recycle. If people use more mass transportation, like buses and subways, there will be far less pollution in our valuable air, and if factories reuse resources to produce goods in a more effective and efficient way, there will be less air pollution as well. Also, if people recycle more, we can reduce our ever-growing waste management problems. In the future, I will try to do my part by using more mass transportation and by recycling my refuse more diligently and effectively. I must do these things; otherwise, when I tell others to do the same, I will be nothing more than a bystander and a hypocrite.

References

Thomas A. Edge, S. H. (2007, May 17). Multiple lines of evidence to identify the sources of fecal pollution at a freshwater beach in Hamilton Harbour, Lake Ontario.

What are Landfills? (2016). Retrieved from https://www.conserve-energy-future.com/causes-effects-solutions-of-landfills.php

Eugene A. Mohareb, H. L. (2011, May). Greenhouse gas emissions from waste management: Assessment of quantification methods.

Faecal pollution and water quality. (2016). Retrieved from http://www.who.int/water_sanitation_health/bathing/srwe1-chap4.pdf

Health effects of air pollution. (2017, November 16). Retrieved November 11, 2018, from https://www.canada.ca/en/health-canada/services/air-quality/health-effects-indoor-air-pollution.html

JR., R. K. (2018). Health & environmental effects of air pollution.

Shamsul, A. (2010). A study on potential for sustainable waste management in the Greater Toronto Area.

Stephanie Gower, R. M. (2014, April). Path to healthier air: Toronto air pollution burden of illness update.

 

Calculation of Body Surface Area (BSA) for Blood Volume

CHAPTER 25
Calculation of Body Surface Area, Circulating Blood Volume, Requirement of Blood Products
Namita Mishra, Sudha Rawat, Vishva Nath Sharma
BODY SURFACE AREA (BSA)
Body surface area (BSA) is the area of the external surface of the body, expressed in square meters (m2). In physiology and medicine, the body surface area is the measured or calculated surface area of the human body. It is used to calculate metabolic, electrolyte and nutritional requirements, drug dosages, and expected pulmonary function measurements. BSA is a measurement used in many medical tasks. For many clinical purposes BSA is a better indicator of metabolic mass than body weight because it is less affected by abnormal adipose mass. Nevertheless, there have been several important critiques of the use of BSA in determining the dosage of medications with a narrow therapeutic index, like many chemotherapy medications.
USES OF THE BSA

Renal clearance is usually divided by the BSA to gain an appreciation of the true required glomerular filtration rate (GFR).
To obtain a better approximation of the required cardiac output, for example in children, the cardiac index is used:

Cardiac index = Cardiac output / BSA

Chemotherapy is often dosed according to the patient’s BSA.
Glucocorticoid dosing is also expressed in terms of BSA for calculating maintenance doses or to compare high dose use with maintenance requirement.

CALCULATION OF BSA
It is difficult to measure the surface area of the human body directly, so various formulas have been published to estimate the BSA without direct measurement.

The most widely used is the Du Bois formula:

BSA (m2) = 0.007184 × W^0.425 × H^0.725

A commonly used and simpler one is the Mosteller formula:

BSA (m2) = √([H × W] / 3600)

Where
H = height (cm)
W = weight (kg)
For example: Patient’s weight = 65 kg
Patient’s height = 165 cm
BSA = √([65 × 165] / 3600)
BSA = 1.72 m2
Recently, a weight-based formula was validated in the pediatric age group that does not include a square root, making it easier to use: BSA (m2) = (4W + 7) / (90 + W), where W is the weight in kilograms.
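The three formulas above are straightforward to automate. The following Python sketch (illustrative only; the function names and the use of Python are my own and not part of this chapter) implements the Du Bois, Mosteller and weight-based pediatric formulas and reproduces the worked example above.

```python
import math

def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Du Bois formula: BSA (m2) = 0.007184 x W^0.425 x H^0.725."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def bsa_mosteller(weight_kg: float, height_cm: float) -> float:
    """Mosteller formula: BSA (m2) = sqrt(H x W / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def bsa_pediatric(weight_kg: float) -> float:
    """Weight-based pediatric formula: BSA (m2) = (4W + 7) / (90 + W)."""
    return (4 * weight_kg + 7) / (90 + weight_kg)

# Worked example from the text: 65 kg, 165 cm.
print(round(bsa_mosteller(65, 165), 2))  # ~1.73 (the chapter rounds this to 1.72)
print(round(bsa_du_bois(65, 165), 2))    # ~1.72
print(round(bsa_pediatric(5), 2))        # ~0.28 for a 5 kg infant
```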
AVERAGE VALUES
Average BSA for various weights:

WEIGHT (kg)    BSA (m2)
1.5 – 4        0.13 – 0.26
4.1 – 9        0.26 – 0.48
9.1 – 14       0.48 – 0.56
14.1 – 20      0.56 – 0.71
20.1 – 26      0.71 – 0.84
26.1 – 34      0.84 – 1.0
34.1 – 50      1.0 – 1.4
50.1 – 66      1.4 – 1.63
Over 66.1      Over 1.63

EFFECTIVE CIRCULATING VOLUME
Blood volume is the volume of blood (both red blood cells and plasma) in the circulatory system of an individual. A typical adult has a blood volume of approximately 4.7 to 5 liters, with females generally having less blood volume than males. Blood volume (BV) can be calculated given the hematocrit (HCT; the fraction of blood that is red blood cells) and the plasma volume (PV):
BV = PV/ (1-HCT)
Diagnostic technologies are commercially available to measure human blood volume. A radionuclide-based test, the BVA-100 (Blood Volume Analysis), provides a measure of red blood cell and plasma volumes with 98% accuracy.
BLOOD VOLUME ESTIMATION

WEIGHT (kg)      BLOOD VOLUME (ml/kg)
Newborn to 10    85
11 to 20         80
21 to 30         75
31 to 40         70
Above 40         65
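As a convenience for the calculations later in this chapter, the weight bands above and the formula BV = PV / (1 − HCT) can be expressed as a short Python sketch (illustrative only; the exact band boundaries at 10, 20, 30 and 40 kg are an assumption about how the table is applied).

```python
def blood_volume_factor_ml_per_kg(weight_kg: float) -> int:
    # Blood volume factor (ml/kg) from the weight bands in the table above;
    # the boundary handling (<= 10, <= 20, ...) is an assumption.
    if weight_kg <= 10:
        return 85
    if weight_kg <= 20:
        return 80
    if weight_kg <= 30:
        return 75
    if weight_kg <= 40:
        return 70
    return 65

def estimated_blood_volume_ml(weight_kg: float) -> float:
    """Patient blood volume = body weight x blood volume factor."""
    return weight_kg * blood_volume_factor_ml_per_kg(weight_kg)

def blood_volume_from_plasma(plasma_volume_ml: float, hct_fraction: float) -> float:
    """BV = PV / (1 - HCT), with the hematocrit given as a fraction (e.g. 0.45)."""
    return plasma_volume_ml / (1 - hct_fraction)

print(estimated_blood_volume_ml(5))  # 425 ml, the value used in the worked examples below
```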

CIRCULATING VOLUME OF THE CPB CIRCUIT
PRIMING VOLUME: The minimum amount of fluid (hemic or non-hemic) used to de-air the complete cardiopulmonary bypass (CPB) circuit is called the priming volume, or the circulating volume of the CPB circuit. Priming of the CPB circuit is an important task for the perfusionist. Generally, the main objectives of priming are:

To de-air the CPB circuit
To check for any leaks in the circuit
To check for any mistakes in the assembly of the circuit
To provide the extra volume required to prime the CPB circuit, as the patient’s blood volume is not sufficient to prime the circuit on its own
To achieve sufficient hemodilution.

It is standard practice to use a non-blood CPB prime because of the benefits of hemodilution and concerns about blood-borne diseases. The total priming volume is determined by the hardware selected for the circuit to be employed. The following tables show the volume required to de-air various oxygenators, arterial filters and tubing.
CPB CIRCUIT AND TOTAL PRIMING VOLUME FOR VARIOUS WEIGHT GROUPS

Weight Group (kg)   Boot Size (inches)   Venous Line Size (inches)   Arterial Line Size (inches)   Total Priming Volume (ml)
0 – 4               1/4                  1/4                         1/4                           450
4.1 – 8             3/8                  1/4                         1/4                           600
8 – 12              3/8                  3/8                         1/4                           800
12.1 – 25           3/8                  3/8                         3/8                           1100
>25                 1/2                  1/2                         3/8                           1800

TUBING SIZE WITH VOLUME (ml/foot)

SIZE (inch)    VOLUME (ml/foot)
3/32”          1.8
1/8”           2.5
3/16”          5
1/4”           9.65
3/8”           21.7
1/2”           38.6

SPECIFIC CONSIDERATIONS:
In cases where the patient is deeply cyanotic, the size of the oxygenator and the tubing size are selected keeping in mind the requirement for a higher degree of hemodilution and higher arterial blood flows because of the presence of major aorto-pulmonary collateral arteries (MAPCAs). MAPCAs arise from the aorta or its large branches and supply blood to the pulmonary arteries because of blockage of the main pulmonary arteries. These MAPCAs ‘steal’ part of the cardiac output of the aorta, which results in reduced systemic perfusion, and thus increased pump flows are required during CPB in cyanosed patients with MAPCAs to compensate for this ‘stolen’ cardiac output.
CALCULATION OF BLOOD AND BLOOD PRODUCT REQUIREMENT
The hematocrit (HCT), also known as packed cell volume (PCV) or erythrocyte volume fraction (EVF), is the volume percentage (%) of red blood cells in blood. It is normally about 45% for men and 40% for women. It is considered an integral part of a person’s complete blood count, along with hemoglobin concentration, white blood cell count and platelet count. Hemoglobin concentration is reduced as a normal consequence of CPB with hemodilution. Thus the hematocrit that will result from the hemodilution caused by the priming volume of the CPB circuit should be determined. Several calculations are required to assess hemodilution and blood product requirements. To determine the effects of hemodilution, the volume concentration formula is used:
C1 X Pt BV = C2 X TVon CPB
Where Pt BV = patient’s blood volume ( patient’s body weight X blood volume factor)
TVon CPB = total volume on CPB (total priming volume + patient’s total blood volume)
C1 = Pre bypass hematocrit of the patient (%)
C2 = calculated hemodilutional hematocrit (%)
A decision must be made initially regarding the desired hematocrit during cardiopulmonary bypass. Based on the results of the randomized clinical study from Children’s Hospital, Boston, it seems reasonable to consider a hematocrit of 25% to be the minimal acceptable hematocrit for any cardiopulmonary bypass condition. When the desired hematocrit has been selected, the amount of bank blood that must be added to the prime should be calculated.
Prime RBC vol = {[C3]x[Pt BV + PV]} – {Pt RBC vol}
Where
Prime RBC vol = volume of blood required in prime
C3 = desired HCT on bypass
Pt BV = patient’s blood volume ( patient’s body weight X blood volume factor)
PV = total priming volume of the CPB circuit to be used
Pt RBC vol = patient’s blood volume X patient’s pre bypass hematocrit
For example:
Patient’s weight = 5 Kg
Pre bypass hematocrit (C1) = 40%
Patients blood volume (Pt BV) = 5 X 85 = 425 ml (85 is blood volume factor for 5 Kg)
PV (total priming volume of the CPB circuit to be used) = 600ml
TVOn CPB = (600 + 425) = 1025ml
Calculated hemodilutional HCT (%) (C2) = C1 × Pt BV / TVon CPB
= 40 × 425 / 1025
= 16.6 %
16.6 % is the hematocrit on bypass. If a particular hematocrit is desired, the amount of packed RBCs needed to achieve it for the same patient can be calculated as follows:
C3 (desired HCT) = 30 %
Pt BV = 425 ml
PV = 600 ml
TV On CPB = (Pt BV + PV) = (425 + 600) = 1025 ml
Pt RBC vol = 425 X 0.40 = 170
Prime RBC vol = {[C3] × [Pt BV + PV]} − {Pt RBC vol} = {[0.30] × [1025]} − {170} = 137.5
Volume of RBCs needed in the prime = 137.5 ml
The hematocrit of packed RBCs is about 70%, thus 137.5 / 0.70 = 196 ml
196 ml of packed RBCs are needed to achieve a hematocrit of 30%.
Thus, 196 ml of the clear prime fluid is removed from the priming volume to account for the added packed RBCs. Therefore the calculation of priming volume now has 196 ml of packed RBCs and 404 ml of prime (crystalloid or colloid).
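The hemodilution and prime red-cell calculations above can be captured in a small Python sketch (an illustration of the chapter's formulas, not a clinical tool; the 70% packed-cell hematocrit default is taken from the text and the function names are my own).

```python
def hemodilutional_hct(pre_hct: float, patient_bv_ml: float, prime_ml: float) -> float:
    """C2 = (C1 x Pt BV) / (Pt BV + priming volume); hematocrits as fractions."""
    return pre_hct * patient_bv_ml / (patient_bv_ml + prime_ml)

def prime_rbc_volume_ml(desired_hct: float, patient_bv_ml: float,
                        prime_ml: float, pre_hct: float) -> float:
    """Prime RBC vol = C3 x (Pt BV + PV) - (Pt BV x C1)."""
    return desired_hct * (patient_bv_ml + prime_ml) - patient_bv_ml * pre_hct

def packed_rbc_ml_to_add(prime_rbc_ml: float, packed_cell_hct: float = 0.70) -> float:
    """Convert the required red-cell volume into ml of packed RBCs (HCT about 70%)."""
    return prime_rbc_ml / packed_cell_hct

# Worked example from the text: 5 kg patient, BV 425 ml, prime 600 ml, pre-bypass HCT 40%.
print(round(hemodilutional_hct(0.40, 425, 600) * 100, 1))  # ~16.6 % on bypass
rbc_ml = prime_rbc_volume_ml(0.30, 425, 600, 0.40)         # 137.5 ml of red cells
print(round(packed_rbc_ml_to_add(rbc_ml)))                 # ~196 ml of packed RBCs
```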
In some cyanotic cases where the patient’s pre-bypass hematocrit is high, the blood is diluted to obtain an optimal hematocrit during cardiopulmonary bypass in order to decrease the viscosity of the blood, improve tissue perfusion and prevent hemolysis. The effect of the priming fluid added to dilute the blood can be calculated as:
TVon CPB X C4 = TVon CPB 1 X C5
WHERE
TVon CPB = total volume on CPB (total priming volume + patient’s total blood volume) = 1025ml
C4 = Hematocrit (of cyanotic patient) on bypass = 0.60
TVon CPB1 = total volume on CPB after adding 500 ml of priming fluid to the CPB circuit.
TVon CPB1 = (1025 + 500) = 1525 ml
C5 = the new (affected) Hematocrit
Thus
C5 = (1025 X 0.60) / 1525 = 0.40
40 % is the new hematocrit achieved after adding 500 ml of priming fluid.
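The same dilution relationship can be checked numerically; the sketch below (illustrative only, with a name of my own choosing) reproduces the cyanotic-patient example just given.

```python
def diluted_hct(total_volume_on_cpb_ml: float, hct_on_bypass: float,
                added_prime_ml: float) -> float:
    """C5 = (TV on CPB x C4) / (TV on CPB + added priming fluid)."""
    return total_volume_on_cpb_ml * hct_on_bypass / (total_volume_on_cpb_ml + added_prime_ml)

# Example from the text: 1025 ml on CPB at HCT 0.60, diluted with 500 ml of priming fluid.
print(round(diluted_hct(1025, 0.60, 500), 2))  # ~0.40, i.e. 40 %
```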
FIBRINOGEN
A critical consideration is plasma fibrinogen dilution. Normal plasma fibrinogen levels are 150–400 mg/dL. The infant or pediatric patient’s relatively low blood volume, combined with the priming requirements of the extracorporeal circulation (ECC) circuit, causes the fibrinogen concentration to be adversely diluted. During CPB, it is desirable to maintain the plasma fibrinogen concentration above 100 mg/dL in order to prevent impairment of post-CPB hemostasis.
Consider, for example, a 5 kg patient with a blood volume of 425 ml (5 × 85), a pre-bypass hematocrit of 55%, a hematocrit on CPB of 25%, a priming volume of 800 ml for the circuit to be used, and a fibrinogen level of 275 mg/dl. To calculate the effect of priming, the patient’s plasma volume is first calculated using the following formula:
BV = PV/ (1-HCT)
PV = (1-HCT) X BV
Thus PV = (1-0.55) X 425 = 191ml
PV = 191ml
Patient’s fibrinogen = 1.91 dl × 275 mg/dl = 525 mg
Diluted plasma volume on CPB = (425 + 800) × (1.00 − 0.25) = 919 ml = 9.19 dl
If the goal is 100 mg/dl, then 9.19 dl × 100 mg/dl = 919 mg of fibrinogen are needed.
Amount of fibrinogen to be added = 919 – 525 = 394mg.
394 mg of fibrinogen must be added to the prime to achieve a goal of 100 mg per dl. FFP usually contains 200 mg of fibrinogen per dl.
Thus ml of FFP needed = (394 / 200) × 100 = 197 ml.
For the calculation of the priming volume, 197 ml of the prime fluid (crystalloid or colloid) is therefore replaced by FFP, and the clear prime volume becomes 603 ml.
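The fibrinogen calculation chains several of the steps above, so a short sketch may help make the arithmetic explicit (illustrative only; it follows the chapter's assumptions of a 100 mg/dl goal and roughly 200 mg of fibrinogen per dl of FFP, and because it uses unrounded intermediates it returns about 196 ml where the text's rounded figures give 197 ml).

```python
def ffp_ml_needed(weight_kg: float, bv_factor_ml_per_kg: float, pre_hct: float,
                  hct_on_cpb: float, prime_ml: float, fibrinogen_mg_per_dl: float,
                  goal_mg_per_dl: float = 100.0, ffp_fib_mg_per_dl: float = 200.0) -> float:
    """Estimate the ml of FFP to add to the prime to reach the fibrinogen goal on CPB."""
    patient_bv_ml = weight_kg * bv_factor_ml_per_kg               # e.g. 5 x 85 = 425 ml
    plasma_ml = (1 - pre_hct) * patient_bv_ml                     # PV = (1 - HCT) x BV
    patient_fib_mg = (plasma_ml / 100.0) * fibrinogen_mg_per_dl   # fibrinogen the patient contributes
    diluted_plasma_dl = (patient_bv_ml + prime_ml) * (1 - hct_on_cpb) / 100.0
    required_mg = diluted_plasma_dl * goal_mg_per_dl              # mg needed at the goal concentration
    deficit_mg = max(0.0, required_mg - patient_fib_mg)
    return deficit_mg / ffp_fib_mg_per_dl * 100.0                 # FFP carries ~200 mg per 100 ml

# Worked example from the text: 5 kg, 85 ml/kg, HCT 55% pre-bypass, 25% on CPB, 800 ml prime, 275 mg/dl.
print(round(ffp_ml_needed(5, 85, 0.55, 0.25, 800, 275)))  # ~196 ml of FFP
```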
Suggested reading

Wang J, Hihara E. A unified formula for calculating body surface area of humans and animal. Eur J Appl Physiol. 2004;92:13–17.
Dill DB, Costill DL. Calculation of percentage changes in volumes of blood, plasma, and red cells in dehydration. J Appl Physiol. 1974;37(2):247–248.
Tarazi RC. Pulmonary blood volume. Eur Heart J. 1985;6 Suppl C:43.
Tarazi RC. Blood volume. Eur Heart J. 1985;6 Suppl C:41–42.

 

Effect of Surface Area on the Rate of the Reaction Between Calcium Carbonate and Hydrochloric Acid

1.0  Title

 

Investigating the effect of surface area on the rate of the reaction between calcium carbonate and hydrochloric acid in an experiment.

 

2.0  Research Question

 

How does surface area affect the rate of the reaction between calcium carbonate (in powdered and chip form) and hydrochloric acid?

3.0  Rationale

 

This experiment will focus on how surface area affects the rate of the reaction between calcium carbonate and hydrochloric acid.

 

The rate of a chemical reaction depends on a variety of factors. These include temperature, concentration, surface area and the presence of a catalyst.

 

Each reaction proceeds at its own speed; some reactions are naturally faster or slower than others, and almost any reaction can be modified in several ways. This experiment focuses on surface area.


The effect of these factors on reaction rate can be explained by collision theory. Collision theory states that the rate of a chemical reaction is proportional to the number of collisions between reactant molecules. The more often reactant molecules collide, the more often they react with one another, and the faster the reaction rate. (BBC, 2014)

By raising the temperature, the reaction rate increases; by lowering it, the reaction rate decreases. This is because at higher temperatures the motions of the reactant particles are more energetic, so more particles have enough energy to collide successfully and form products, while lowering the temperature has the opposite effect. Increasing the concentration of reactants increases the number of reacting particles in a given volume: cramming more particles into a fixed volume increases the collision frequency, so the reaction rate increases. The reaction rate can also be increased by adding a catalyst. A catalyst is a substance that increases the rate of a reaction without being used up itself in the reaction. It reduces the amount of energy that the reactant particles need in order to collide successfully and be converted into products; because they need less energy, the reactants are converted into products faster and the reaction rate increases. (ChemistryLibreTexts, 2017) A catalyst is not always the best way to increase the rate of a reaction. The reaction rate can also be increased by increasing the surface area: to increase the surface area, the material is crushed into a powder, while leaving the material in larger lumps gives less surface area.

The purpose of conducting this experiment was to determine the effect of surface area on the rate of reaction between calcium carbonate and hydrochloric acid. The prediction was that the reaction rate would differ between the two forms because of their different surface areas. It is well established that the larger the surface area, the greater the reaction rate; this is because more particles of the CaCO3 are exposed to the dilute hydrochloric acid, so there are more frequent successful collisions, increasing the rate of reaction. Decreasing the surface area would have the opposite effect.

The reaction in this experiment involved the following substances: calcium carbonate chips (CaCO3), calcium carbonate powder (CaCO3), hydrochloric acid (HCl), the salt calcium chloride (CaCl2), carbon dioxide (CO2) and water (H2O).

Equation 1:  CaCO3 + 2HCl → CaCl2 + CO2 + H2O

3.1 Aim

To investigate the effect of surface area on the rate of the reaction between calcium carbonate chips, calcium carbonate powder and hydrochloric acid, and to determine whether a larger surface area gives a faster reaction rate than a smaller surface area.

 

3.2  Variables

Independent variable: The independent variable in this experiment was the surface area of the calcium carbonate. This was changed by using calcium carbonate in powdered form (larger surface area) and in chip form (smaller surface area).

Dependent variable: The dependent variable in this experiment was the time taken (sec) for the calcium carbonate to react with the hydrochloric acid.

Controlled variables: The variables that were controlled in this experiment were the mass of calcium carbonate (5 g) and the volume (50 ml) and concentration (1 M) of the hydrochloric acid. These were controlled by using the same amounts of calcium carbonate and hydrochloric acid throughout the experiment.

3.3 Hypothesis

 

It is predicted that when the surface area is increased the reaction rate between calcium carbonate and hydrochloric acid will speed up, and that if the surface area is decreased the reaction rate will slow down, because with a larger surface area there are more exposed particles available to react than with a smaller surface area.

4.0 Equipment list

 

In order to complete this experiment, the following materials were required:

Hydrochloric acid, 1 M (50 ml)

Calcium carbonate (marble) chips (5 g)

Calcium carbonate powder (5 g)

Stopwatch

Conical Flask

Scales

Spatula

Measuring cylinder

Beaker (250 ml)

Gloves

Apron

Goggles

Thermometer

Weight boat

 

 

4.1 Methodology

 

 

 

The equipment was set up as shown in the diagram above

1. Place a weight boat on the scales and tare it. Weigh 5 g of CaCO3 chips and remove the weight boat from the scales.

2. Place a 250 ml beaker on the scales. Using the measuring cylinder, measure 50 ml of 1 M HCl and pour it into the beaker. Zero the scales. (When adding hydrochloric acid to the beaker, keep your eyes level with the graduation mark to make an accurate measurement.)

3. Place the thermometer into the beaker and tare the scales.

4. Pour the 5 g of CaCO3 chips into the beaker and immediately start the timer. (Add the calcium carbonate to the beaker with care.)

5. Every 20 seconds, record the mass and temperature in a table until 5 minutes have elapsed.

6. Repeat steps 1–5 using the CaCO3 chips (replicate trials).

7. Repeat steps 1–5 using the CaCO3 powder.

8. Record, tabulate and graph the data.

9. When the experiment is completely finished, tip all chemicals into the sink. Clean and pack away all of the equipment that has been used. Wipe the table, ensuring all chemicals have been removed from it.

4.2 Original experiment

 

The original experiment used calcium carbonate chips, calcium carbonate powder and hydrochloric acid to determine how surface area affects the rate of the reaction. The rate of the reaction was measured using the scales (mass loss) and the time was recorded using a stopwatch.

 

4.3 Modifications

 

To ensure relevant and accurate data, the original experiment was refined by:

Doing several repeats in order to get an average which would make the data collected more reliable.

Washing all of the equipment after each trial, to ensure that the solution left over will not affect reactants for the next test. 

Shortening the experimental time to 5 minutes and recording the results every 20 seconds.

4.4 Management of Risks

 

Table 1: Risk assessment and management implemented for the surface area experiment

 

Hazard: Glass breakage
Risk: Cuts due to incorrect handling of glassware.
Management: Handle glassware with care; always hold it firmly and never with wet or slippery hands.

Hazard: Chemical spillage
Risk: Causes serious eye and skin irritation.
Management: Appropriate personal protective clothing must be worn at all times in laboratories.

Hazard: Eye injuries
Risk: A minor eye injury could cause redness and irritation; a more serious injury from chemical exposure could cause permanent blindness.
Management: Approved safety goggles must be worn when handling potentially harmful chemicals.

Hazard: Heavy masses (scales)
Risk: Being dropped onto toes or fingers.
Management: Take care when handling masses; carry with two hands.

 

5.0 Results

 

5.1 Raw data

 

Table 2: Raw data obtained from the experiment – mass loss (g) in the reaction between CaCO3 and HCl, recorded against time (s). (The tabulated readings are not reproduced here.)

Qualitative results:

There was a large difference in how vigorously the reactions occurred for the two surface areas. The powdered calcium carbonate reacted the most vigorously and a large number of bubbles were observed during the reaction, while the chips reacted the least vigorously and produced far fewer gas bubbles.

 

Figure 1: Mass loss in the reaction between CaCO3 and HCl

 

5.2 Processing of Data

 

The mass loss for the calcium carbonate chips and powder was calculated and placed into Table 2. The mass loss was plotted against the 20-second time intervals to show the decrease in mass over time for each form of calcium carbonate. As expected, the average reaction rate for the powder over the first couple of minutes was significantly higher; the chips showed the same general relationship, but at a slightly lower rate than the powder.
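As an illustration of how the average rate can be obtained from the tabulated readings, the short Python sketch below computes mass lost per second. The mass values are made up purely for demonstration, since the raw readings are not reproduced here; the real values would come from Table 2.

```python
# Hypothetical mass readings (g) every 20 s; real values would come from Table 2.
times_s = [0, 20, 40, 60, 80, 100]
mass_powder_g = [55.00, 54.62, 54.40, 54.28, 54.21, 54.18]
mass_chips_g = [55.00, 54.90, 54.81, 54.73, 54.66, 54.60]

def average_rate_g_per_s(times, masses):
    """Average rate of reaction = total mass lost / total time elapsed."""
    return (masses[0] - masses[-1]) / (times[-1] - times[0])

print(f"powder: {average_rate_g_per_s(times_s, mass_powder_g):.4f} g/s")
print(f"chips:  {average_rate_g_per_s(times_s, mass_chips_g):.4f} g/s")
```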

 

6.0  Discussion

 

6.1  Interpretation of results

 

The results show that increasing the surface area increases the reaction rate (Figure 1). According to collision theory, this occurs because crushing the material into a powder exposes more particles to the acid, increasing the frequency of successful collisions; when the material is left in larger lumps, less surface area is exposed and the reaction is slower.

6.2 Evaluation of methodology

 

Limitation: Measurement device limitations (scales)
Consequence on reliability and validity: The scales were very sensitive, so any draught or movement affected the readings, meaning the instrument itself provided inaccurate values. The digital scales only measured to three decimal places, which is a potential limitation because no more precise measurement was available.
Suggested improvements and/or extensions: Use different digital scales that provide more accurate readings. During the experiment, make sure all fans are turned off and keep movement to a minimum.

Limitation: Observational limitation
Consequence on reliability and validity: The observer may have incorrectly read, measured or recorded the results.
Suggested improvements and/or extensions: Carefully consider and specify the conditions that could affect the measurement to minimise systematic errors. Take more data; some of the recorded results were outliers and should either have been excluded from the calculation or repeated to obtain values consistent with the others.

 

Conclusion:

The graphs together show the trend that as the surface area increases, the rate of reaction increases. This observation is supported by collision theory, which states that the rate of a chemical reaction is proportional to the number of collisions between reactant molecules.

The results of the experiment supported the hypothesis.

7.0 Reference list:

BBC. (2017). GCSE Bitesize: Effect of surface area. http://www.bbc.co.uk/schools/gcsebitesize/science/add_ocr_gateway/chemical_economics/reaction3rev1.shtml

Blauch, D. N. (2012). Chemical kinetics: Reaction rates. Viewed 4 June 2012. http://www.chemguide.co.uk/physical/basicratesmenu.html

Chemguide. (2017). The effect of surface area on rates of reaction. http://www.chemguide.co.uk/physical/basicrates/surfacearea.html

Chemistry LibreTexts. (2017). Collision theory. https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Kinetics/Modeling_Reaction_Kinetics/Collision_Theory

 

Wireless Body Area Network Technology

INTRODUCTION
A Body Area Network is defined by IEEE 802.15 as “a standard for communication in or near the human body that can serve a variety of applications like medical testing, electronics and private entertainment, optimized for low power devices and operation” [1]. In more common terms, a Body Area Network is a system of devices in close contact with a person’s body that cooperate for the benefit of the user.
A Wireless Body Area Network consists of small, intelligent devices implanted in or attached to the body that are capable of establishing a wireless communication link. These devices provide continuous health monitoring and give real-time feedback to the user or medical personnel. The measurements can be recorded and used over a long period of time.
There are two types of devices that can be used: sensors and actuators. The sensors, internal or external, are placed on or in the body to measure parameters of the human body, for example measuring body temperature or the heartbeat, or recording ECG readings. The actuators can take specific actions according to the data received from the sensors; for example, an actuator equipped with a built-in reservoir can deliver the correct dose of insulin to a diabetic patient based on the glucose level measurements.


In a body area network for medical purposes, a number of sensors are placed on the patient’s body. These sensors collect data from the body and send it to the main sensor. The main sensor analyzes the data and takes a specific action; it sometimes works with an actuator that carries out the required action. For example, the sensors collect data from the body of a diabetic patient and send it to the main sensor. The main sensor analyzes the data and, if the glucose level has dropped, it can inject insulin into the patient’s body to keep the patient stable until proper medical aid arrives.
IEEE 802.15.4 is a standard for low-rate (LR) WPANs. An LR-WPAN allows wireless connectivity in applications with limited power, low cost, simple communication and relaxed throughput requirements. [4] Ease of installation, extremely low cost, reliable data transfer, short-range operation and a reasonable battery life are the main objectives.
Different topologies are used in communication systems for different purposes, according to need. The widely used topologies are star, mesh, cluster, ring and bus. On the basis of average jitter, throughput, end-to-end delay, traffic bits sent and traffic bits received, we can determine which topology is best for our system. Different routing protocols come with different topologies; the routing protocols used with the ZigBee protocol include AODV, DYMO and DSR. In wireless communication there is no fixed or dedicated route assigned between two nodes. Whenever nodes want to communicate with each other or with any other node, they request a route from the system, and the routing protocol, according to its properties, finds the best route for communication. That route should be as short as possible so that there is minimal delay.
Body area networks are used very widely in today’s high-tech world, mainly for body sensor detection, health monitoring and providing assistance to differently abled persons. Below are some of the advantages of a Body Area Network:

Quick transmission time
Reliability
Good quality of service
Different data rates can be used
Compatibility
Low power required (as work on battery)
Security (because of encryption)
Portable

Many kinds of routing protocols and topologies are available for communication. There are routing protocols such as AODV, DYMO, Bellman-Ford and LANMAR, but which protocol is suitable depends on the type of requirement and demand. So, in this project we will try to find out which routing protocol is better for our system with a suitable topology such as star or mesh. In this project we have ZigBee-based wireless sensors for monitoring. ZigBee builds on IEEE 802.15.4, which defines the lower layers (the physical and MAC layers), and it is a suitable choice for monitoring medical sensors. Every node will sense data from the body, and the collected data will be sent to the main node. We will design and simulate these systems in QualNet and then compare them on the basis of throughput, average jitter, average end-to-end delay and so on. The performance of each topology will be compared with every routing protocol.
TECHNOLOGY TO BE USED
BODY AREA NETWORK
Introduction
With the advent of new, high-tech environments, there is a need for small, low-power, lightweight, portable devices with sensors. These devices can be used at low data rates for improving speed and accuracy. A number of these devices can be placed on the body to form body sensor networks for applications such as health monitoring. A body area network consists of small, portable devices that can be easily placed on one’s body and can establish a wireless network link. These devices take data continuously for health monitoring and provide real-time readings to medical examiners. These readings can be recorded and used for a long time.
A body area network generally consists of actuators and sensors, which can be placed on or inside the body. The sensors are used to collect data, for example measuring heartbeats, taking ECG readings or measuring body temperature. The actuators take the required actions on the basis of the data they receive from the sensors or from users. Sometimes these actuators have built-in pumps or reservoirs that keep checking the dose of insulin and can inject it into the body if needed, which is helpful for diabetic patients. Communication with another person or user can be carried out through portable wireless devices such as a smartphone or PDA.
The body area network works on the principle that data is received through implanted devices and transmitted to external devices. The sensors placed in or outside the body interact with one another and with the actuators, and the actuators take action according to the surrounding conditions. All the sensors send their data to the main sensor, which collects the data from each sensor, fuses it and sends it to the relevant person via the internet. Generally, a body area network comprises small sensors and devices, so an ad hoc network is the best-suited choice for this kind of network. IMEC (the Interuniversity Microelectronics Centre) is working on the principle of bringing the hospital to the patient. This gives the patient the freedom of not having to go to hospital on a regular basis for check-ups and readings; the devices themselves take the readings and pass them to the concerned doctor, and according to the readings they can also take the required action in case of emergency, without the need for any medical personnel.
Architecture
A body area network has a network created in or around the human body. The architecture of the body area network is as shown below.

Figure 2.1 Architecture of body area network

Figure 2.2 Core of body area network
The proposed architecture of body area network as shown in figure 2.1 consists of following elements :

Sensors: These are used to collect data continuously from different parts of the body and transmit it to the main sensor.
Main sensor: The main sensor collects the data from the other sensors and fuses it together, then supplies this data to the coordinator.
Coordinator: The coordinator analyzes the data and takes suitable action if required; otherwise it sends the data to the PDA being used by the user.
PDA or smart phone: These devices receive data in the required form from the sensors and transmit it over the network to a laptop or desktop, where it is recorded for future use.

The core of body area network as shown in figure 2.2 consists of several body sensor units (BSU) and one body control unit (BCU).
Applications
1. Medical Applications
With the advent of new technology and fast processing, there was a need for speed, comfort and convenience in the field of health monitoring too. With the help of a body area network, it became possible and easy to monitor the health of a patient remotely.
2. Sports Applications
In sport, it can monitor the health of athletes and give an accurate and clear picture to their coaches so that they can identify weaknesses and strengths. It can be used to measure many factors during competitions such as races. This kind of observation can be done anywhere, with no need to go to a laboratory and run on treadmills every time readings are taken.
3. Entertainment Applications
Body area network can be used for entertainment also. It can be used for gaming, multimedia applications, 3D video and video buffering etc.
Issues involved
1. Sensors: What type of sensors should be used? The types of sensors to be used depend on the requirement and purpose.
2. Source of power: These devices are to be used continuously for a long time, so the power source should be continuous and strong.
3. Communication range: The range of the system should be sufficient to reach the nearest source of help, and the link should not drop even when the user is far away.
4. Size and weight: The sensors should be small enough to be placed on the body easily and should weigh as little as possible; because a number of sensors are worn on the body, it should not be difficult for the person to carry them.
5. Mounting of sensors: The sensors should be placed at the correct points on the body so that they take readings correctly; if they are mounted incorrectly, the system may not obtain the required readings. If a sensor has to count heartbeats, for example, it should be placed near the heart.
6. Robustness: The probability of wrong readings must be very low, because incorrect readings can cause serious problems.
7. Synchronization: The sensors should be synchronized with each other and with the main sensor, and should work in real time.
8. Cost: The cost of the system should be low so that more people can use it and it can be mass-produced.
ZIGBEE PROTOCOL
Introduction
SIMULATION AND RESULTS
Simulation is the main process for finding out the performance of a proposed system. It tells us the ability and efficiency of the system when it is used under different system, surrounding and environmental conditions, how the system will really work in a real environment, and what factors should be taken care of while designing and using it. So, instead of building a physical prototype first, the system is designed virtually and simulated in software.
In this project we work with the IEEE 802.15.4 ZigBee protocol for a body sensor network. We have used two topologies, star and mesh, and we have used the software QualNet 5.0 to simulate both scenarios with different routing protocols, namely AODV and DYMO.
QualNet 5.0 is a product of Scalable Network Technologies and is good software for designing and simulating wired and wireless networks such as Wi-Fi, WiMAX and GSM. A number of protocols are available for simulating different types of systems, including the 802.15.4 protocol for ZigBee, which can be used for designing body area network prototypes. QualNet was chosen because of its accuracy and its graphical user interface. Using QualNet, we designed star and mesh topologies containing a PAN coordinator, routers and a number of sensor nodes, and then developed them for the different routing protocols AODV and DYMO.
After developing them, we tested and compared them for throughput, end-to-end delay, average jitter and so on, so that we can find the better-performing routing protocol for each topology. The simulation results below are presented per metric for each topology, showing the performance of the different routing protocols.
THROUGHPUT:
Any routing protocol in any network can send only a limited amount of data over a route at a time, so a large message has to be divided into a number of packets that are transferred over the route to the destination, each with a size appropriate for the route. When these packets are sent over the network, some of them can be corrupted by noise, lost or discarded, so not all of the sent packets are received by the receiver. Throughput is the rate of successful packet transfer, measured in bits per second (bps).
Below are the simulation results for the throughput of the star and mesh topologies:
The first result compares the throughput of the star topology for the AODV and DYMO routing protocols at different nodes. It can be seen that the throughput is the same for both.
The second result compares the mesh topology for the AODV and DYMO routing protocols. It shows that the throughput for DYMO is much lower than that for AODV, so it can be concluded that AODV is better than DYMO for the mesh topology.
AVERAGE JITTER:
When a number of packets are transmitted over a network then there can be some delay (latency) over the network due to which the receiver will receive packet after the expected time. The variability in time can be observed for various networks. This variability in latency is jitter. A network which has no latency or constant latency has no jitter.
The next result compares the average jitter of the star topology for the AODV and DYMO routing protocols. The average jitter for AODV is larger than for DYMO, so it can be concluded that DYMO is better than AODV for the star topology because it has smaller delays for packet transmission; it can also be concluded that there will be fewer collisions with DYMO because it takes less time for transmission.
The final result compares the mesh topology for the AODV and DYMO routing protocols in terms of average jitter. DYMO has less jitter than AODV, which means DYMO is better because it has lower latency, and again fewer collisions can be expected because it takes less time for transmission. The same was true for the star topology, so it can be concluded that DYMO is the better protocol in terms of jitter.
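QualNet reports these metrics directly, but as a rough illustration of what they measure, the sketch below (with made-up packet timestamps, not simulation output) computes throughput and average jitter from a simple packet log.

```python
# Illustrative packet log: (send_time_s, receive_time_s or None if the packet was lost).
packets = [(0.00, 0.012), (0.10, 0.115), (0.20, None), (0.30, 0.318), (0.40, 0.409)]
packet_size_bits = 1024  # assumed fixed packet size

received = [(s, r) for s, r in packets if r is not None]
duration_s = max(r for _, r in received) - min(s for s, _ in packets)

# Throughput: successfully delivered bits per unit time (bps).
throughput_bps = len(received) * packet_size_bits / duration_s

# Average jitter: mean variation in one-way delay between consecutive received packets.
delays = [r - s for s, r in received]
jitter_s = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"throughput = {throughput_bps:.0f} bps, average jitter = {jitter_s * 1000:.2f} ms")
```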
 

Designing a Play-based Curriculum for a Specific Learning Area

Learning and Pedagogy in the Early Years: Play-based curriculum and assessment

Case Study – Designing a play-based curriculum for a specific learning area and context in the early years, aged 3-8.

Introduction

This assessment details a play-based teaching plan for a unit called ‘Let’s Build It!’ in a Kindergarten class. The underpinning theoretical approaches taken in the plan are socio-cultural theory and inquiry-based learning.

Research and Analysis of Theories/Perspectives

As educators in the 21st century, there is a need more than ever to offer a diverse range of learning experiences to the students represented in our classrooms. The impact on students who feel disengaged from their learning, their teacher, the curriculum and their school environment is profound. The early years of education contribute significantly to a child’s ongoing learning success. Ultimately, there needs to be a greater emphasis on a more collective approach to student success which embodies quality teaching, high levels of engagement between students and teachers, and rich, meaningful learning experiences.


Through the lens of a socio-cultural perspective, the relationship between play, learning and development is multidirectional, with each significantly impacting the others. This perspective explores the importance of social interactions for learning and, consequently, development. “Learning in a sociocultural perspective is thought to occur through interactions, negotiation and collaboration” (Scott & Palincsar, 2013, p. 5). In this perspective, children are active agents who have a voice in their own learning, and with the assistance of adults and their peers, experiences are scaffolded for their learning. Bredekamp and Copple (as cited in Edwards, 2003) explain this relationship in more depth, stating that:

development and learning are dynamic processes requiring that adults understand the continuum [of development], observe children closely to match curriculum and teaching to children’s emerging competencies, needs and interests, and then help them move forward by targeting educational experiences to the edge of children’s changing capacities so as to challenge, but not frustrate them (p. 260).

In this perspective, play has a significant role and connects learning and development. Play fuels a child’s imagination, provides deep insights into a child’s thinking and understanding, and offers an understanding of how they make meaning. Through their play-based experiences, children subconsciously engage in learning and develop skills like communication, intrapersonal and interpersonal competencies, creativity and problem solving (Skolverket, as cited in Fleer, 2017, p. 194). The process of playing allows children to have autonomy in their own learning and development as they exercise control within their environments.

This teaching plan will also draw on the perspective offered by inquiry-based learning. “Inquiry based learning is a constructivist approach where the overall goal is for students to make meaning” (Noack, 2014, p. 1). This style of learning evolves from students’ questions, and the inquiry is guided with minimal teacher assistance. For the inquiry process to be successful, the questions that guide the inquiry need to be modelled and appropriate so that students can access information to help guide their learning. In this way, students are active in their own construction of knowledge and understanding. Students are highly motivated in this collaborative process as they have a sense of autonomy over their learning, and their understanding evolves from other students and their own discoveries. Central to this approach is that students come to learning experiences with a “genuine sense of curiosity, wonder and questions” (Noack, 2014, p. 1), which allows educators to shape learning experiences around their students and in turn provide a rich and contextual curriculum. This is reinforced by Van Oers (2012), who states, “for children, personal sense is the starting point for all curriculum investigations in the classroom…children’s agency in co-structuring learning is vitally important” (p. 98). There are some clear parallels between socio-cultural theory and inquiry-based learning, as both emphasise relationships and co-constructing meaning. Scott and Palincsar (2013) link the two approaches by stating, “teachers and students are coinquirers, with teachers mediating among students’ personal meanings. These meanings emerging from the collective thinking and talk of the students, and the culturally established meanings of the wider society” (p. 5). Both approaches engage in learning that promotes essential skills that students require to thrive as 21st century learners.

The Play-Based Teaching Plan

Topic: This unit is called ‘Let’s Build It!’ The play-based teaching plan is an integrated plan that links Literacy and Science with a specific focus on inquiry-based learning. The overarching focus areas for the plan are cooperation and communication. These capabilities are vital traits of successful, confident learners and additionally begin to build a culture of collaboration between students.

Target age group: This teaching plan is set in the context of a Kindergarten class in Term One. The class consists of 24 students: 15 boys and 9 girls.

Objectives: These objectives are a combination of knowledge and observable skills.

Students can communicate their ideas, thoughts and feelings with their peers and teacher.

Students can work with others for a common goal.

Students select, sort and describe materials based on their properties.

Context: Students have engaged in free play experiences as a way of developing relationships and providing insights into students’ abilities. Through observations, a few insights emerged. Firstly, the boys and girls engaged in play separately, with the boys drawn more towards problem-solving, construction and physical activities. The girls, on the other hand, freely engaged in transforming themselves with wigs and dress-up clothes and took on roles in the kitchen corner. The girls continuously used language to shift between their play roles and incorporated some common ideas from fairy tales into their play. Fleer (2017) states that “there is evidence that in the early years of school, children who are engaged in play activities demonstrate important concepts in action, which teachers can document, analyse in order to make judgements about children’s learning” (p. 238). The initial free play experiences have been pivotal in providing direction for the future learning of this Kindergarten class.

Curriculum Connections: Australian Curriculum (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2015)

Science Understanding and Inquiry Skills

Objects are made of materials that have observable properties (ACARA, 2015, ACSSU003)

Communicating – Share observations and ideas (ACARA, 2015, ACSIS012)

English – Literature

Identify some features of texts including events and characters and retell events from a text (ACARA, 2015, ACELT1578)

Creating Literature – Innovate on familiar texts through play (ACARA, 2015, ACELT1831)

My Time, Our Place (Department of Education, Employment and Workplace Relations [DEEWR], 2011)

Outcome 4: Children are confident and involved learners (DEEWR, 2011, p. 34)

Children develop dispositions such as curiosity, cooperation, confidence, creativity, commitment, enthusiasm, persistence, imagination and reflexivity.

Children use a range of skills and processes such as problem solving, inquiry, experimentation, hypothesizing, researching and investigating.

Outcome 5: Children are effective communicators (DEEWR, 2011, p. 39)

Children collaborate with others, express ideas and make meaning from a wide range of media and communication technologies.

Teaching and Learning Activities

 

Lesson 1 – What things could be used to make a house to protect the pigs?

Read two versions of the story of the Three Little Pigs. Use a story map to model retelling the story using some language structures from the text, such as ‘this little pig’ and ‘he huffed, and he puffed, and he blew the house down.’ In small groups of four, students use dramatic play to retell the story. Students are given access to a ‘concept box’ or ‘prop box’ with a range of materials that may assist in bringing the story to life. “Concept boxes were introduced as part of an assessment for students…the aim was to enable students to produce a collection of resources that would contextualise specific concepts for children through promoting play experiences” (Brock, as cited in Fleer, 2017, p. 241). Each group gets an opportunity to retell the story. After the groups have presented, pose the following problem to students to guide the inquiry process: “What could the little pigs have used other than bricks, sticks and straw?” Record student responses and begin discussing what buildings we see in our neighbourhood and what they are built from.

Resources: Story map scaffolding sheet (Appendix One), big book (The Three Little Pigs), iPad, YouTube clip, prop boxes (sticks, twigs, straw, Lego bricks, gloves, fabric, hats)

Differentiation – Students are placed in mixed ability groups with a combination of girls and boys. Prop boxes assist students who find it challenging to retell the story. The props act as visual reminders of parts of the story and assist students in getting into character.

Formative assessment – 1. Teacher observations during group work (focusing on communication between students and their capacity to cooperate with each other).

2. During the play experience of retelling the story, note students’ ability to use the concept boxes to demonstrate their learning.

Teaching and Learning Activities

Lesson 2 and 3 – Inquiry Question – What would happen if we built with other materials?

Revisit student responses from the previous lesson. Show images of a range of buildings around the world (see Resources) and discuss the materials used and the differences between these buildings and those we see in our neighbourhood. After viewing all the images, students work in small groups of four and explore one of the buildings in further detail. Students use the following materials (bamboo, mud, ice, fabric, sticks and stones) to make a wall (of a building) to test how well the material sticks together and how strong it is. In their building groups, students predict what they think might happen when they try to build a wall with the materials. These predictions are recorded on an iPad using the voice recorder or video. Students then begin constructing their wall with their chosen material. Once it is constructed, students record through drawings what happens when the wall is blown (by breath) and sprayed with water.

Resources: Images of buildings from all over the world (Compassion International, 2018), materials for building (bamboo, mud, ice, fabric, sticks and stones), water spray bottle, adhesive materials (glue, Blu-Tack, sticky tape, playdough)

Differentiation – Students are placed in mixed ability groups with a combination of girls and boys.

Formative Assessment – 1. Teacher observations during group work (focusing on communication between students and their capacity to cooperate with each other). 2. Students’ ability to communicate their findings through the video recordings.

Teaching and Learning Activities

Lesson 4 – Inquiry Question – What should we build first?

Invite a builder (who is a parent of one of the students) into the classroom to facilitate a building workshop with the class. Discuss the materials used in construction and show the process of how a house is built. Using photos and videos, students learn about the sequence of building a house. Each student then follows the steps outlined by the builder and builds a small house using geometric connecting shapes. Possible questions: Would it be a good idea to place the roof on first? Would a curved house be a good idea? Record student responses for use in the next lesson.

Resources: Pictures of materials, blueprint, pictures of steps taken when building, geometric connecting shapes, butcher’s paper

Differentiation – Students each use visuals to assist in construction. Extension activity – students experiment with alternative shapes and evaluate their effectiveness when building.

Formative Assessment – 1. Teacher observations of discoveries made from the hands-on experience. 2. Students’ ability to follow instructions in making their house. 3. Students’ ability to communicate the solution to the question raised.

Teaching and Learning Activities

Lesson 5 – Inquiry Question – Which materials would be strong enough to keep the big bad wolf away?

In groups students are given the following task.

Imagine you are one of the little pigs. Your job is to build a structure that will be strong enough to stay up when blown by the big bad hairdryer. You are to work in building teams to make something from the materials you have been learning about. Students explore the materials available to them in the ‘Construction’ concept box. They then design a blueprint (terminology introduced by our expert builder) and label their structure. Finally, in their building teams, they begin the building process.

Resources: Blueprints (A3 paper)

Prop Box – Construction (vests, rope, sticky tape, recycled materials, fabric, wool, old household items, paper, building materials, stones, Lego bricks, adhesives, scissors)

Differentiation – Students are placed in mixed ability groups with a combination of girls and boys. Prop boxes assist students in their design process and help them find creative solutions by imagining the possibilities an item has.

Assessment – 1. Teacher observations during group work (focusing on communication between students and their capacity to cooperate with each other). 2. Students’ ability to communicate their findings through the video recordings. 3. Students’ ability to use prior knowledge to inform their design.

Teaching and Learning Activities

Lesson 6 – Inquiry Question – How could we improve? What advice could we give to other builders?

Parents are invited to this celebration to view students’ structures and the testing.

Once all structures are built, celebrate students’ achievements by allowing each group to present their structure and give some information about the materials used, the shape they chose and their design. Retell the story of the three little pigs to place the ‘test’ in context. Begin testing the strength of the structures by using the big bad wolf hairdryer on maximum speed. During testing, building groups use an iPad to record the results of the experiment and reflect on how they might change their structure next time to improve it. Using a discussion circle, discuss the process of working with other people and how students found the task. Individually, the teacher asks students the following questions: What were the two strongest and weakest materials in your build? Why were they strong or weak? Students then evaluate the experience using the evaluation sheet (Appendix Two).

Resources: Talking piece (to guide discussion), iPads, evaluation sheet (Appendix Two), hairdryer.

Differentiation – Students present their knowledge and reflections through various methods (group, individual and video recording).

Assessment – Discussion circles allow students to share and communicate their ideas and findings and to offer solutions for how they could improve.

Summative assessment – Students’ knowledge about materials and their ability to communicate their understanding.

Evaluation Questions:

Reflecting and evaluating are fundamental parts of teaching. “Reflection builds insight, inspires teachers to explore new ways to improve learning and relationships, and provide starting points for making decisions about curriculum” (Queensland Studies Authority, 2010, p. 15). These questions address the goals established at the beginning of the unit and the practical application of the teaching and learning activities throughout the sequence of lessons. The questions also critique aspects of the work in order to identify ways to improve in the future.

Were students engaging with each other more so than they were at the beginning of the unit?

How have students demonstrated their ability to work with other students? Can this be measured?

How did my observations inform my planning and teaching? Were these an effective assessment tool?

If students were disengaged, were there obvious skills that needed to be built before engaging in this activity?

Do students’ video recordings (compiled throughout the plan) demonstrate their understanding? Could these be improved or completed in a different way?

 

Critical Reflection

The place of play in an outcome-driven curriculum can be confronting and challenging for some educators. One reason may be that teachers do not have a deep understanding of the significance of play and find it challenging to incorporate into their intentional day-to-day teaching. “The goal is to make learning an integral part of the play structure itself, rather than something separate and compartmentalized, as it often is in school” (Hakkarainen, 2006, p. 208). In planning this sequence of learning to take place in Term One, I attempted to merge contextual play experiences with appropriate content.

A question I kept reflecting on was whether the teaching plan is too prescriptive. Through inquiry-based learning we know that students have autonomy in directing how they learn. This led me to question whether it is more effective to plan weeks at a time or to design plans week-to-week. The reality of this is challenging but not impossible. Learning and teaching in this way authentically allows students to follow their own interests. Van Oers (2008) introduces the idea of ‘degree of freedom’, which explains that “when a child is free to follow their interests in the context of learning something new, then their motive for learning is supported. They have agency and can pursue in depth something about which they are wondering” (as cited in Fleer, 2017, p. 102). Finding the balance of play in the curriculum is a process, and one which deters some educators. However, those who promote play as part of learning capitalise on an opportunity to motivate, engage, challenge and impact the whole child.

Another consideration was the place of assessment. A perceived challenge is that “play does not necessarily leave a tangible or visible product”, which, when assessing students’ learning, “makes it difficult to judge its cognitive worth” (Hakkarainen, 2006, p. 215). In response, I attempted to reframe my understanding of play so that it supports the learning but is not the only or final product used to demonstrate learning. Play was the main catalyst in developing the skills of communication and cooperation. I felt that many of the strategies I used assessed for play. “Assessment for play focuses on the conditions created by teachers for supporting play practices that lead to both the development of play complexity and the generation of learning through play” (Fleer, 2017, p. 246). The sequence used formative assessment of play in each lesson and then culminated in a task which integrated all the knowledge and understanding gained throughout the lessons. In this way the summative assessment was supported by previous play experiences and was contextual, meaningful and engaging.

The implication this has had for my personal philosophy is a further acknowledgement of the potential that students bring to their learning. As a Kindergarten educator I have often fallen into the trap of neglecting student voice when planning lessons and have focused more on teaching a curriculum than on bringing it to life. This has reinforced that educators need to think of themselves as facilitators who, with their students, transform the curriculum. A second implication is the importance of having students work collaboratively with their peers. In a socio-cultural perspective, the process of learning arises from children working together, and the power of this collaboration stretches students academically, socially and emotionally. This reinforces that successful learners are creative, confident and possess the ability to solve problems and take risks, both individually and when working with others.

Reference List

Compassion International. (2018). 25 different types of houses from around the world [Photographs]. Retrieved from https://www.compassionuk.org/blogs/25-different-types-of-houses-from-around-the-world/

Department of Education, Employment and Workplace Relations [DEEWR]. (2009). Belonging, being and becoming: The early years learning framework for Australia. Canberra, ACT: Commonwealth of Australia.

Department of Education, Employment and Workplace Relations [DEEWR]. (2011). My time, our place: The framework for school age care. Canberra, ACT: Commonwealth of Australia.

Edutopia. (2015, December 16). Inquiry-based learning: From teacher-guided to student-driven [Video file]. Retrieved from https://www.youtube.com/watch?time_continue=212&v=mAYh4nWUkU0

Edwards, S. (2003). New directions: Charting the paths for the role of sociocultural theory in early childhood education and curriculum. Contemporary Issues in Early Childhood, 4(3), 251-266. doi:10.1.1.1021.2439

Fleer, M. (2017). Play in the early years (2nd ed.). Melbourne, Australia: Cambridge University Press.

Hakkarainen, P. (2006). Learning and development in play. In J. Einarsdottir & J. T. Wagner (Eds.), Nordic childhoods and early education: Philosophy, research, policy and practice in Denmark, Finland, Iceland, Norway and Sweden (pp. 183-222). Charlotte, NC: Information Age.

Ministerial Council on Education, Employment, Training and Youth Affairs. (2008). Melbourne declaration on educational goals for young Australians. Retrieved from http://www.curriculum.edu.au/verve/_resources/National_Declaration_on_the_Educational_Goals_for_Young_Australians.pdf

Noack, M. (2014). Approaches to learning: Inquiry based learning [PDF]. Retrieved from https://www.australiancurriculum.edu.au/media/1360/lutheran-education-queensland-inquiry-based-learning.pdf

Nolan, A., & Raban, B. (2015). Theories into practice: Understanding and rethinking our work with young children. Albert Park, Australia: Teaching Solutions.

PSC Alliance. (2012). Effective curriculum planning documentation methods in education and care services. Retrieved from https://www.ecrh.edu.au/docs/default-source/resources/ipsp/effective-curriculum-planning-and-documentation-methods-in-education-and-care-services.pdf?sfvrsn=8

Queensland Studies Authority. (2010). Queensland kindergarten learning guidelines. South Brisbane, QLD: The State of Queensland.

Australian Curriculum, Assessment and Reporting Authority [ACARA]. (2015). Australian Curriculum: F-10 curriculum. Retrieved from https://www.australiancurriculum.edu.au/f-10-curriculum/

Van Oers, B. (2012). Developmental education: Foundations of a play-based curriculum. In B. van Oers (Ed.), Developmental education for young children: Concept, practice and implementation (pp. 13-26). Dordrecht, The Netherlands: Springer.

Appendix One

Story Map – Three Little Pigs (teacher summarises four main parts of the text to assist students in orally retelling the story)

Mother asks the pigs to move out

Each pig builds a different house

Wolf blows each house; two fall down

Wolf climbs down the chimney of the brick house, gets burnt and runs away

Appendix Two

Student evaluation

________________’ s Evaluation

My model (draw):

Planning – Did the blueprint match the model?

Materials – Were the materials strong enough?

Team Work – Did I work well with my team?

Testing – Was the model successful?

My favourite part of my model was...