Analyzing Zimbardo’s Experiment

The Zimbardo prison experiment (reported in Haney, Banks, & Zimbardo, 1973) was conducted to analyze what influences individuals to change their behavior: dispositional factors or situational ones. The research explicitly asserts that Philip Zimbardo was interested in seeing how situations, such as social environments, dictate how individuals act. Zimbardo's prison experiment took an experimental perspective in social psychology. Even though this experiment is well known, it has ethical and methodological problems that are important to consider when conducting social psychology research. There are many ways this experiment could have prevented problems by taking different aspects into consideration.

The Zimbardo prison experiment took place at Stanford University in 1971, after Professor Zimbardo placed an ad recruiting male participants for a study. After narrowing the pool down to 21 participants and randomly assigning them to the roles of guards and prisoners, the experiment began. The participants signed a contract that informed them of some of what to expect and noted that some of their rights would be revoked over the course of the study. The guards went through an orientation before the study on what they were expected to do throughout the experiment, leaving them to think the prisoners were the ones being studied. The experiment was meant to be as realistic as possible; therefore, a prison-like environment was constructed in the basement of the university, where participants wore uniforms and performed their roles realistically from the beginning. From day two through day six, there were prisoner rebellions, mental breakdowns, hunger strikes, privilege systems, and guard aggression that continued to escalate. The experiment's goal was to see how an individual's behavior and emotions are influenced by the social environment they are placed in and the roles they undertake. The researchers did not explicitly inform the participants of what was being studied. However, Zimbardo was aware of what he was researching and notified the rest of the researchers of his explicit question.

This experiment took on a critical perspective in social psychology research by examining how individuals are influenced by, and interconnected with, the social world around them. The research was qualitative, observing the behaviors of the participants within their social environment. However, the sample size was quite small, with only 21 participants, and no variables were clearly identified. Through the experiment, we gain a deeper understanding of how the participants' experience suits the context, including how the individuals within it are unable to see it from an outside perspective. The experiment displayed an intersubjective representation of how individuals collectively create an understanding of the world they are inside. Through this, we witness how situational attribution occurs: our behaviors, morals, and emotions are intertwined with the situations and environments we are placed within. The experiment also highlights how individuals internalize the roles they are placed in by conforming and adjusting their behaviors to them. Cognitive dissonance is also present in the experiment, leading one to analyze how behavior is influenced by it: the guards displayed cognitive dissonance by justifying their cruel actions and blaming the prisoners while exerting more power over them. The impact of deindividuation also shows in the experiment: as the prisoners began to lose their identity, they became more prone to accepting mistreatment, while the guards became more violent as the prisoners became identifiable only by their numbers.

The Zimbardo experiment is a very well-known study because of how it analyzes a particular situation and the participants' actions. However, there is a lot of controversy over how this experiment was performed, and many problems have been identified in the research. To start, an ethical issue is that the creator of the experiment, Zimbardo, decided to include himself in it. This opened the door to biased actions occurring within the experiment, since Zimbardo, the main individual analyzing it, became a non-natural observer. He became so involved with the experiment's situation that he lost sight of the outside perspective, and he did not become aware when unethical behaviors were occurring within the experiment. Especially since “Zimbardo himself took responsibility for creating norms which encouraged tyranny, [limiting] insight into the way in which tyranny might emerge as part of a social process” (Haslam & Reicher, 2003, p. 24). The experiment also lacked variables, making it hard to analyze the qualitative information it produced; it did not present operational definitions, what was being measured, or controls. Even though the experiment did try to make the setting as realistic as possible, a methodological issue is that “ethical, legal, and practical considerations set limits upon the degree to which the situation could approach” (Haney et al., 1973, p. 11) a realistic prison environment. The study is hard to generalize due to the sample size and the fact that the participants were all males of the same age, race, and education level. The experiment is also hard to replicate due to the methodological issue that, “although instructions about how to behave in the roles of guards or prisoners were not explicitly defined, demand characteristics in the experiment obviously exerted some directing influence” (Haney et al., 1973, p. 11). The participants' actions could have been guided by how they thought the researcher wanted them to behave in the experiment. For example, “on that day the prisoners staged a rebellion, ripping off their numbers, refusing to obey commands, and mocking the guards. Zimbardo asked the guards to take steps to control the situation, and they did exactly that” (Sunstein, 2007). Selection bias could also have occurred, since Zimbardo selected participants based on certain characteristics rather than randomly. Lastly, another ethical issue that calls the study's credibility into question is that “the study was never reported in a mainstream social psychology journal” (Haslam & Reicher, 2003, p. 22), and it is controversial to rely on the information Zimbardo has presented about it on his website.

If I were to construct an alternative social psychological research project to answer the same questions identified in the original study, I would construct one similar to Zimbardo's while avoiding many of its issues. To answer the question of how an individual is influenced by their social context, I would construct an observational study. My study would take a critical perspective and exercise a qualitative approach. My hypothesis would be that individuals change and demonstrate social priming depending on their social context and influences. We would observe the qualitative behaviors of the control group, the real jails, and of the experimental jails. My independent variable would be the type of jail setting in which participants are placed, while my dependent variable would be the participants' observed behaviors. I would use macro-discourse analysis to analyze the qualitative information on how particular functions display the deployment of power. Every day, each participant would be evaluated through observation on their levels of power, conformity, and submission behaviors by a psychologist. I would not threaten internal validity by guiding participants' behaviors or telling them how they are supposed to act. They would only be told the experiment's sample size of 10 individuals as guards and 10 as prisoners, and that they will be in the jail setting for 15 days. My experiment would take place in two different jails, where individuals of different ages, education levels, and backgrounds would be selected to participate. I would also observe two different jails in their normal setting, with everyday guards and prisoners, in order to carry out naturalistic observation. I would not construct a jail for my participants, since “bias could have been minimized [in Zimbardo's experiment] by using multiple small jails across the country to lessen the impact of Zimbardo's own preconceptions” (Meyers, 2008). Before the study, I would have psychological tests performed by other doctors to determine that the participants are all in a good state of mind and health. After random assignment, the participants would only be given their costume, placed in the setting, and observed. The guards would have access to the outside after their 10-hour shifts but would follow a strict schedule, while the prisoners would have to remain in the jail and follow the schedule the guards set upon them 24/7. I would not participate in any role in the experiment, so that no methodological biases are created. At the end of the experiment, I would compare the actions of the guards and prisoners, and also compare the actions of the participants with those of the real individuals in their environments. Finally, at the end of the experiment I would make sure to produce a responsible, honest, and valuable publication of the results.

Zimbardo's prison experiment is a famous study, conducted to evaluate an individual's behavior as a function of their social context. He selected the type of experiment most suitable for researching his question; however, there are many experimental issues involved in it. Zimbardo's experiment has many associated ethical and methodological problems, including his taking a role in the experiment and telling his participants what was expected of them. If I were to research the same question Zimbardo did, of how an individual is impacted by their social context, I would also perform an observational study. However, I would do everything in my power to make sure no experimental issues or biases arise. Lastly, I would have concrete variables and evaluations that would enable me to determine my findings, which would allow me to publish my research properly.

References

Haney, C., Banks, C., & Zimbardo, P. (1973). A study of prisoners and guards in a simulated prison. Naval Research Reviews. Retrieved from https://westga.view.usg.edu/d2l/le/content/1651103/viewContent/27712490/View

Haslam, S., & Reicher, S. (2003). Beyond Stanford: Questioning a role-based explanation of tyranny. Dialogue, 18, 22-25. Retrieved from https://westga.view.usg.edu/d2l/le/content/1651103/viewContent/27712491/View

Meyers, M. R. (2008). The Lucifer Effect. Magill’s Literary Annual 2008, 1-3. Retrieved from http://articles.westga.edu:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=lfh&AN=103331MLA200811070300305261&site=eds-live&scope=site

Sunstein, C. R. (2007). The Thin Line. New Republic, 236(16), 51–55. Retrieved from http://articles.westga.edu:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=25049439&site=eds-live&scope=site

 

Bistable Flip-Flop Experiment

Objectives:

To study the properties and performance of cross-coupled inverting logic gates.
To set up the gates in order to gain practical experience and, at the same time, an understanding of the bistable flip-flop.

In modern applications, including large-scale digital circuits, these discrete circuits have mostly been replaced by more straightforward and efficient designs. Although they have been superseded, they still have an important range of uses, and it is necessary to understand their characteristics. This experiment shows clearly that digital circuits are still made from analogue parts: they have analogue behaviour in their currents, voltages, and time-varying transitions.
Materials and Equipment:

Built-in socket connector bread board
A selection of IC devices
Jumper wires and connector leads
Digital multimeter with test probes

Theory:
Flip-Flop
A standard bistable circuit is made from a simple combination of NAND gates or NOR gates, producing the required sequential circuit.
Common Sequential Logic circuits:

Clock Driven- Synchronized to a clock signal.
Event Driven- Asynchronous. Changing state when an external event happens.
Pulse Driven- Combination of Synchronous and Asynchronous.

SR NAND Flip-Flop
This circuit has two inputs and two outputs. The R and S inputs represent Reset and Set; Q and Q̄ are the outputs of the circuit. First, the Set and Reset inputs are connected to a pair of cross-coupled 2-input 7400 NAND gates to form an SR bistable, so that feedback occurs from each output to one of the other gate's inputs.
RST Flip-Flop
The device is connected and synchronized to a clock signal. The outputs are only triggered when the Set (S), Reset (R), and Trigger (T) inputs are at logic 1 level; they will not trigger while the Trigger input is at logic 0 level.
NAND gate

The M74HC00 is a high-speed CMOS quad 2-input NAND gate, fabricated in silicon-gate C²MOS technology.
Its internal circuit is built up of 3 stages, including a buffered output, which gives high noise immunity and a stable output.

Task Discussion:
Investigation of a Bistable Flip-Flop
Theoretical Details:
The resulting circuit has two stable states when direct feedback cross-coupling is implemented between inverting NAND logic gates. Either state of the bistable can be chosen by applying the correct input condition.
The R and S inputs represent Reset and Set; Q and Q̄ are the outputs of the circuit. In normal operation, both NAND inputs must normally be held at logic 1 level, and the Q and Q̄ outputs are then complementary.
To switch between the two possible states, the R input is temporarily changed to logic 0 level, which forces the Q̄ output to logic 1 level. At the same time, that logic 1 output is fed back to the second input of the S gate, whose other input is at logic 1 level, so the Q output becomes logic 0 level and remains there once R returns to logic 1.
Driving both the R and S inputs to logic 0 level at the same time is forbidden. In this state, both the Q and Q̄ outputs are forced to logic 1 level, which overrides the feedback action, and the final state of the latch when the inputs are released cannot be determined ahead of time.
One practical disadvantage of the RS flip-flop comes from the fact that the outputs can change state whenever the logic level of either or both inputs changes; operation is asynchronous.
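To make the feedback behaviour concrete, the following minimal Python sketch (an illustration added for this report, not part of the lab procedure; the function names are our own) settles a cross-coupled NAND pair iteratively and prints the resulting states:

def nand(a, b):
    # 2-input NAND: output is 0 only when both inputs are 1
    return 0 if (a and b) else 1

def sr_nand_latch(s, r, q=1, q_bar=0):
    # Settle the cross-coupled pair from the previous state (q, q_bar).
    # Inputs are active-low: holding S = R = 1 keeps the stored state.
    for _ in range(4):  # a few passes let the feedback settle
        q_new = nand(s, q_bar)
        q_bar_new = nand(r, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = 1, 0
for s, r in [(0, 1), (1, 1), (1, 0), (1, 1), (0, 0)]:
    q, qb = sr_nand_latch(s, r, q, qb)
    print("S=%d R=%d -> Q=%d Q'=%d" % (s, r, q, qb))

The forbidden input S = R = 0 drives both outputs to 1, matching the behaviour described above.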
Modifying the Bistable Flip-Flop: Creating an RST Flip-Flop
Theoretical Details:
Its operation is similar to that of the RS NAND flip-flop with the R and S inputs at logic 1 level, but a third input (Trigger) has been added. The Q and Q̄ outputs can only change state while the Trigger input is at logic 1 level; if the Trigger input is at logic 0, the R and S inputs have no effect on the outputs.
For a valid operation, the R or S input must be taken to logic 1 level, and the Trigger input must be taken to logic 1 level and then back to logic 0 level. Finally, the selected input must be returned to logic 0 level.
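Extending the sketch above (reusing the same assumed nand and sr_nand_latch helpers), the Trigger input can be modelled as two extra input NAND gates that block the active-high S and R inputs while T = 0:

def rst_flip_flop(s, r, t, q=1, q_bar=0):
    # Input NANDs invert the active-high S/R and gate them with the Trigger:
    # while t = 0 both gated inputs sit at logic 1, so the latch holds state.
    s_gated = nand(s, t)
    r_gated = nand(r, t)
    return sr_nand_latch(s_gated, r_gated, q, q_bar)

q, qb = rst_flip_flop(s=0, r=1, t=1)                   # reset: Q = 0
q, qb = rst_flip_flop(s=1, r=0, t=0, q=q, q_bar=qb)    # T = 0: inputs blocked, Q stays 0
q, qb = rst_flip_flop(s=1, r=0, t=1, q=q, q_bar=qb)    # set: Q = 1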
Investigation of a NAND gate
Theoretical Details:
The NAND gate is a digital gate, but it receives analogue voltages and currents at its inputs. When connected to a variable voltage supply, these may take any value in a real circuit. For instance, when an input changes, the output takes a non-zero time to respond, so the voltages will not be exactly 5 V or 0 V at all times.
Objective:
To relate the transitions and voltage levels of the output of the NAND gate to the states of the inputs.
Procedure:

The circuit shown in Figure 2.7 is constructed and an external variable voltage from a power supply is used. R1 can take any value from 1 kΩ to 10 kΩ.
A fixed digital voltage (0 or 5 volts) is applied to one terminal of a NAND gate. A variable voltage is applied to another terminal.
First, the input voltage Vin is varied up to a maximum of +5 V and Vout is plotted against Vin. From this, the logic 1 output voltage (V1) and the logic 0 output voltage (V0) are determined.
It is noted that the output remains unchanged over wide ranges of input voltage.
A rough initial experiment is performed to establish the overall behaviour.
Further readings are then taken.

Conclusion:
All of the objectives were achieved. In this experiment we came to understand the theory of the bistable flip-flop, the standard SR NAND flip-flop, and the RST flip-flop. The properties and performance of cross-coupled inverting logic gates were studied, and practical experience was gained in constructing the gate circuits.
In conclusion, in normal operation of the SR NAND flip-flop, both NAND inputs must normally be held at logic 1 level, and the logic levels of the Q and Q̄ outputs are then complementary.
Driving both the R and S inputs to logic 0 level at the same time is forbidden: both the Q and Q̄ outputs are forced to logic 1 level, overriding the feedback action, so the final state of the latch cannot be determined ahead of time.
For the operation of the RST flip-flop, the Q and Q̄ outputs can only change state while the Trigger input is at logic 1 level; if the Trigger input is at logic 0, the R and S inputs have no effect on the outputs. Hence, for a valid operation the R or S input must be at logic 1 level, and the Trigger input must be taken to logic 1 level and then back to logic 0 level. Finally, the selected input must be returned to logic 0 level.

 

Sordaria Fimicola: Meiotic Divisions Experiment

Abstract
The purpose of this investigation is to determine the frequency of meiotic divisions analyzed from hybrid crossings between different strains of the fungus Sordaria fimicola. The experiment was conducted to demonstrate hybrid crossings with MI and MII patterns of ascospores within the asci. Over the course of seven days, the sample of Sordaria was incubated and allowed to fuse under laboratory conditions. In the outer areas of the blocks of agar, hyphal growth from the mutant tan strain (t-g+) and wild-type black strain (t+g+) was visible along the “X-shaped” region and the outer rim of the Petri dish.

By counting the non-hybrid and hybrid MI and MII asci, the observation of ascospores within the asci displayed the one possible pattern of MI and the four possible patterns of MII. The first part of the laboratory experiment formed a hypothesis predicting that 8 ascospores would result from two stages of meiosis and one stage of mitosis. After calculating the frequency of crossing over, the observed map distance of the tan gene from the centromere was 32 map units, significantly different from the expected 26 map units under the null hypothesis.
Introduction
Many research investigations utilize the common fungus Sordaria fimicola as a primary and reliable model organism for demonstrating genetics, due to its simple structure and life cycle. Mapping the distance between the tan gene (t-g+) and the centromere requires careful preparation of a fused sample of Sordaria already containing hybrid and non-hybrid arrangements in the ascus. By counting the hybrid MI (non-crossover) asci and MII (crossover) asci, the percentage of asci showing crossover, and from it the frequency of crossing over, can be calculated. With an understanding of the frequency of crossover, biological principles such as adaptation, mutation, and recombination are expressed fully within the experiment. The null hypothesis states that there will not be a considerable difference between the expected 26 map units and the observed map distance from the collected class data (Helms, Kosinski, Cummings, 350). The collective effort of each bench to correctly count its assigned asci will certainly affect the calculated frequency and the rejection or acceptance of the null hypothesis.
Biological evolution closely relates to the process of Sordaria crossovers, and Mendel's law of independent assortment is directly validated through the life cycle of the fungus. As a member of Ascomycota, Sordaria fimicola practices “strict sexual reproduction” and provides one of the easiest visualizations of meiosis I, meiosis II, and mitotic division found in the ascus (Volk). Some of the characteristics that make observation easy lie in the structure of Sordaria fimicola. The elongated nature of the ascus prevents the overlapping of ascospores; therefore, in carefully ruptured perithecia, the tan and black spores remain lined up in the order produced by meiosis, making it relatively easy to count MI and MII patterns efficiently. With its phenotype essentially equivalent to its genotype, due to the absence of another dominant allele, the physical traits examined directly reflect the genetic makeup of Sordaria (Helms, Kosinski, Cummings, 334).
During hybrid crossovers in prophase I, a tetrad forms four haploid nuclei, each of which then forms two haploid nuclei, leading to a total of eight ascospores in a single ascus. Generally, Sordaria is a common fungus for genetics research for various reasons centered on the ease of demonstrating meiosis, observing its structure, and following its life cycle. Growth of the Sordaria fungus is a significant factor and dependent variable throughout the study. The fungus grows on decomposing vegetation, which makes nutrients available for absorption and increases hyphal growth and extension (“Meiosis and Recombination in Sordaria Fimicola”). The results of this study could contribute to a broader knowledge of mutation, biodiversity, and segregation. Further applications investigating meiotic and mitotic crossovers and map distances may soon propose new interpretations of Mendel's laws.
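To illustrate the ordered patterns described above, the following short Python sketch (added for this report, not part of the lab) classifies an ordered ascus as MI or MII from its spore colours; the example strings are the standard 4:4, 2:2:2:2, and 2:4:2 arrangements:

def classify(ascus):
    # 'MI' when the colours segregate 4:4 (no crossover between the
    # spore-colour gene and the centromere), otherwise 'MII'.
    top, bottom = ascus[:4], ascus[4:]
    if len(set(top)) == 1 and len(set(bottom)) == 1:
        return "MI"
    return "MII"

# '+' = wild-type black spore, 't' = mutant tan spore
for a in ["++++tttt", "tttt++++", "++tt++tt", "tt++tt++", "++tttt++", "tt++++tt"]:
    print(a, classify(a))   # first two are MI; the rest are the four MII patterns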
Materials and Methods
During week one of the experiment, wild-type black (+) and mutant tan (t) cultures of Sordaria fimicola were obtained and, using aseptic technique, placed in a sterile Petri dish divided into four subsections labeled for the two spore colors. After a metal spatula was disinfected in 95% ethanol, it was heated using a Bunsen burner and cooled for 10 to 15 seconds.
While carefully lifting the lid of the Petri dish slightly to prevent contamination, a block of agar was removed and transferred face down to allow mycelium linkage and crossing on the agar. After re-flaming the spatula and repeating proper aseptic technique, the process was repeated with the wild-type (+) black strain and two mutant (t) tan strains positioned on the marks of the Petri dish indicating the labeled plus (+) sign. After all the necessary blocks of agar had been placed in the proper sections of the Petri dish, the plates were incubated at 22 to 24°C in the dark for 7 days.
During week two, a plate of Sordaria fimicola containing the fusion of black and tan strains was obtained for the analysis of hybrids and non-hybrids within the 8 produced ascospores. Using a toothpick, the surface of the plate along the “X-shaped” area was scraped gently to collect a sample of perithecia. A slide was prepared by placing a drop of water over the collected perithecia and securing it with a coverslip. Before the slide was examined under the 10x objective of the microscope, it was first pressed gently with a pencil eraser or an equivalent pressure point, rupturing the perithecia without destroying the structure of the asci. Using the microscope, slides were examined to locate hybrid and non-hybrid asci. Class data on the numbers of MI and MII asci, total asci, percentage of crossover, and frequency were calculated, and a chi-square test was performed as necessary (Helms, Kosinski, Cummings 336-350).
 
Discussion
Based on the individual bench results, the number of total MI and MII asci counted depended on the number of asci assigned per person. For example, since there were only two bench members at Bench B and each bench member in the class was assigned to find and count 5 hybrid crossovers, there was a total of 10 MI and MII asci for Bench B, as shown on the table. According to the biology lab manual, 26 map units is the published map distance of the tan spore gene from the centromere (Helms 350).
The level of frequency is closely related to how “loosely” or “tightly” genes are linked on the chromosome. For this experiment, the deviations between the frequencies of the individual benches do not seem drastic, although the results from Bench F show a slight over-count of total asci, resulting in the highest frequency of 34.6, well over the expected 26 map units. Analyzing the class data as a whole, with 276 total MI and MII asci counted, the percentage of asci showing crossover was 64%, giving a map distance of 32 map units.
In order to judge whether there is a significant difference between the 32 map units observed and the 26 map units expected, we perform a chi-square calculation. With χ² equaling 16.291, my conclusion is that the class data demonstrate a much higher frequency than expected. The degrees of freedom (df) for the experiment were 1, from n − 1 with the 2 attributes MI and MII. Since χ² far exceeded the critical value, the probability value (p) was less than 0.05, so we rejected the null hypothesis and accepted the alternative hypothesis asserting that our observed frequency of 32 map units is significantly different from the expected 26 map units provided by published results. Possible sources of error can be closely examined from the bench data results. Besides the over-count of MI and MII asci mentioned earlier, which produced inconsistent figures, another source of miscalculation may have come from counting hybrid crossovers that had an abnormal 3-1-2 or 2-3-1 arrangement. Many times students were obliged to prepare a new slide of perithecia because their slide either did not have enough hybrids or the fragile perithecia had been ruptured incorrectly, which proved very time consuming. Overall, the conducted lab was precise in calculating the frequency.
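The class calculation can be reproduced with a short Python sketch. The 276-ascus total, the 64% crossover figure, and the 26-map-unit expectation come from the report; the exact 177/99 split between MII and MI asci is inferred from those figures, so treat the counts as illustrative:

total_asci = 276
mII = 177                  # crossover (second-division segregation) asci, inferred
mI = total_asci - mII

map_units = 100 * mII / total_asci / 2      # % crossover asci divided by 2 -> ~32

# Chi-square against the published 26-map-unit distance (predicts 52% MII asci)
expected_mII = 0.52 * total_asci
expected_mI = total_asci - expected_mII
chi_sq = ((mII - expected_mII) ** 2 / expected_mII
          + (mI - expected_mI) ** 2 / expected_mI)
print("map distance = %.1f, chi-square = %.2f" % (map_units, chi_sq))
# ~16.27, matching the report's 16.291 up to rounding of the inferred counts;
# with df = 1, anything above the 3.84 critical value gives p < 0.05.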
Sordaria fimicola investigations have multiple purposes and applications. If conducted correctly, the fungus demonstrates an accurate arrangement of spores resulting from the meiotic and mitotic divisions. A very similar laboratory experiment, “Meiosis and Recombination in Sordaria Fimicola,” shared common procedures with this lab, including crossing a wild-type and a mutant gene, growing the hyphae on rotting vegetation, and calculating genetic map distances. The calculation of map units is consistent across most Sordaria fimicola studies because the frequency of crossing over is always divided by 2 (only half of the ascospores in a crossover ascus are recombinant), as shown in most investigations. The ease of growing the fungus on agar in Petri dishes and crossing a wild-type and a mutant gene increases recombination of genetic material, leading to an increased range of genotypes and paving a way toward future advances in biological understanding.
References
Helms, Doris R., Carl W. Helms, Robert J. Kosinski, and John R. Cummings. Biology in the Laboratory, Third Edition: BIOL 1161 & BIOL 1162: Introduction to Biological Sciences Laboratory, University of Houston. New York: W.H. Freeman and Company, 1998. 334-352. Print.
“Meiosis and Recombination in Sordaria Fimicola.” N. pag. Web. 8 Mar. 2010.
Volk, Tom. “Sordaria Fimicola, a Fungus Used in Genetics.” N. pag. Web. 6 Mar. 2010.
 

Fraunhofer Diffraction Experiment

INTRODUCTION
Diffraction is one of the most important topics in optics. It refers to a phenomenon which occurs when a wave encounters an obstacle or slit in its path: the wave bends around the edges or corners of the obstacle or aperture, into the region of the geometrical shadow of the obstacle. The Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens. In contrast, the diffraction pattern created near the object, in the near-field region, is given by the Fresnel diffraction equation.

If the shadow of an object cast on a screen by a small source of light is examined, it is found that the boundary of the shadow is not sharp. The light is not propagated strictly in straight lines, and peculiar patterns are produced near the edges of the shadow which depend on the size and shape of the object. This breaking up of the light as it passes the object is known as diffraction, and the patterns observed are called diffraction patterns. The phenomenon arises from the wave nature of light. Apertures and objects produce a similar effect.
In Fraunhofer diffraction, a parallel beam of light passes the diffracting object in question and the effects are observed in the focal plane of a lens placed behind it.
In the diagram in FIG 1, AB represents a slit of width $d$ whose length is perpendicular to the plane of the paper, through which a parallel beam of light passes from left to right. By Huygens's principle, each point in the slit must be considered as a source of secondary wavelets that spread out in all directions. The wavelets travelling straight forward along AC, BD, and so on will arrive at the lens in phase and will produce strong constructive interference at point O. Secondary wavelets spreading out in a direction such as AE, BF, and so on will arrive at the lens with a phase difference between successive wavelets, and the effect at P will depend on whether this phase difference causes destructive interference or not.
It will be noticed that there will always be a bright fringe at the centre of the diffraction pattern. The separation of the diffraction bands increases as the width of the slit is reduced; with a wide slit the bands are so close together that they are not readily noticeable. The separation also depends on the wavelength of light, being greater for longer wavelengths.
In the case of the slit shown in the diagram, the first dark line at P is in a direction $\theta$ such that BG is one wavelength, $\lambda$. If $d$ is the width of the slit, then $\theta = \lambda/d$. This assumes the angle is small enough that $\sin\theta \approx \theta$.
EXPERIMENTS
In these experiments a low-power (0.5 mW) helium-neon laser is used as the source of light. The light produced by the laser is coherent and parallel, but for these experiments the beam's diameter is far too small. To get around this problem, a beam expander arrangement is set up in front of the laser source to expand the beam to a larger width before it hits the object being examined.
From FIG 2 it can be seen that the biconcave lens A causes the beam to diverge and appear to emerge from the point X in the focal plane of lens A. If a second lens B with focal length $f_B$ is placed a distance $f_B$ from X as shown, the output laser light will be parallel again, but with a larger width.
The output of this beam is used to examine Fraunhofer diffraction patterns produced under various circumstances, viewing the resulting patterns on a white screen or with the use of a photodetector to detect beam intensity at varying locations.
A good deal of time is spent aligning the laser to be as close to the centre of the lenses as possible, and careful note is taken of where each lens stand is positioned; this helps with consistency between different days and in case the apparatus is disturbed. The distance from the object being examined to the photodetector was kept constant at $(0.53\pm 0.01)\,m$ throughout all the experiments carried out.
SINGLE SLIT
The first object to be examined is the simple single slit. By setting up a variable slit in the object path, the slit width can be adjusted, allowing the effect of slit width on the pattern and its intensity to be investigated. The intensity distribution on the screen is given by the equation $I(\theta) = I_0\,(\sin\beta/\beta)^2$, where $\beta = (\pi a/\lambda)\sin\theta$, $a$ is the slit width, and $I_0$ is the intensity at the centre of the pattern.
The laser beam from the beam expander passes through the single slit and then through another lens to focus on a detector screen. By placing a white sheet of paper on this screen, the maxima can easily be seen by eye, allowing simple marks to be placed where they occur. These marks can then be measured with a set of digital callipers, which have a measurement uncertainty of $\pm$0.02 mm for measurements less than 100 mm and $\pm$0.02 mm for those less than 200 mm [digitalcalipers]. It is seen that for a variable single slit, the separation of the diffraction bands increases as the width of the slit is reduced; with a wide slit the bands are so close together that they are not readily noticeable. This is as expected from the theory.
Using a single non-variable slit as the object, the slit width can be calculated. This is done by taking measurements of the maxima from the central maximum and plotting them against their order. This relation is given by Young's equation, $y_m = m\lambda D/a$, where $y_m$ is the distance from the central maximum of the m'th order fringe, $\lambda$ is the wavelength of the laser light used, $D$ is the distance from the object to the screen, and $a$ is the slit width.

Find Out How UKEssays.com Can Help You!
Our academic experts are ready and waiting to assist with any writing project you may have. From simple essay plans, through to full dissertations, you can guarantee we have a service perfectly matched to your needs.
View our services

Plotting the values of $y_m$ against the corresponding order $m$, the gradient of the resulting line of best fit is the value of $\lambda D/a$; with the known constants, the value of $a$ can be determined. This calculation is easily done in MATLAB, which gives a more accurate result than hand-drawing a graph; using the function nlinfit, the error in the line of best fit can be obtained and thus the uncertainty in the measurement of the slit width. Each value of $y_m$ is measured multiple times to reduce reading uncertainty, and the marking of maxima on the paper is also repeated to further reduce reading uncertainty.
From the measurements taken, the calculated value for the slit width was found to be $(7.31\pm 0.39)\cdot10^{-5}\,m$, which agrees with typical values for a single slit, which are of the order of tens of micrometres.
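As a cross-check of the fitting step, an equivalent least-squares fit can be done in a few lines of Python (an illustration added here; the fringe positions below are invented values of roughly the right size, not the recorded data):

import numpy as np

wavelength = 632.8e-9   # He-Ne laser wavelength (m)
D = 0.53                # object-to-screen distance (m)

m = np.array([1.0, 2.0, 3.0, 4.0])                   # fringe order
y_m = np.array([4.6e-3, 9.2e-3, 13.8e-3, 18.3e-3])   # fringe positions (m), illustrative

# Fit y_m = (lambda * D / a) * m as a straight line through the origin.
slope = np.linalg.lstsq(m[:, None], y_m, rcond=None)[0][0]
a = wavelength * D / slope
print("slit width a = %.2e m" % a)   # ~7.3e-5 m for these numbers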
At this point it was found that the photodetector was not functioning properly: when trying to measure intensity, the measured value was negative. It was also not possible to see the second and third maxima; only the central maximum could be clearly detected. Many attempts were made to correct this. Realignment of the laser had very little effect, and keeping the room constantly dark to eliminate background light was also tried, but again there was no improvement in the readings. It was decided to stop taking intensity measurements for the remaining experiments.
MULTIPLE SLITS
An arrangement consisting of many parallel slits of the same width, separated by equal distances, is known as a diffraction grating. When the spacing between the lines is of the order of the wavelength of light, a noticeable deviation of the light is produced.
The intensity expression for one single slit can be generalised to N slits; the distribution for N slits is given by $I(\theta) = I_0\,(\sin\beta/\beta)^2\,(\sin(N\gamma)/\sin\gamma)^2$, where $\gamma = (\pi s/\lambda)\sin\theta$ and $s$ is the slit spacing.
The $\sin^2\beta/\beta^2$ term describes the diffraction from each individual slit, while the $\sin^2(N\gamma)/\sin^2\gamma$ term describes the interference between the N slits. This gives principal maxima where $\gamma = m\pi$ and minima where $\sin(N\gamma) = 0$ but $\sin\gamma \neq 0$.
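The two factors can be evaluated numerically. The short Python sketch below is an illustration added for this report: the slit width, spacing, and slit count are assumed placeholder values, not the gratings actually used.

import numpy as np

wavelength = 632.8e-9   # m
a = 2.4e-5              # slit width (m), illustrative
s = 7.0e-5              # slit spacing (m), illustrative
N = 6                   # number of slits

theta = np.linspace(-0.03, 0.03, 2001)
beta = np.pi * a * np.sin(theta) / wavelength
gamma = np.pi * s * np.sin(theta) / wavelength

envelope = np.sinc(beta / np.pi) ** 2            # np.sinc(x) = sin(pi*x)/(pi*x)
sin_g = np.sin(gamma)
safe = np.where(np.abs(sin_g) < 1e-9, 1.0, sin_g)
ratio = np.where(np.abs(sin_g) < 1e-9, N, np.sin(N * gamma) / safe)
intensity = envelope * ratio**2 / N**2           # normalised so the centre peak is 1

print("peak intensity:", intensity.max())        # ~1.0 at theta = 0

Increasing N in this sketch sharpens the principal maxima and suppresses the secondary ones, which is the behaviour observed on the screen below.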
Each diffraction grating was placed in the source holder one by one and the resulting diffraction patterns on the detector screen were observed. It was found that the secondary maxima became weaker as the number of slits on the source was increased, while the central maxima became sharper. The grating with 6 slits gave the sharpest central image, while the one with only 2 slits gave the weakest.
ONE AND TWO DIMENSIONAL
One-dimensional gratings can now be used to examine the difference in slit width and the resulting differences in the diffraction patterns observed; for this part there were three unknown one-dimensional gratings to be examined. The gratings were loaded one by one, and by marking the central maximum and the other maxima observed on the screen, the distances could be measured, allowing the slit width to be calculated. It was observed that the different gratings gave a different spread of maxima on the screen. For each one-dimensional grating the measurements were repeated 3 times. The same method was used to calculate the slit distance as in the single-slit experiment. The grating widths were found to be $(6.90\pm 0.51)\cdot10^{-5}\,m$, $(2.37\pm 0.46)\cdot10^{-5}\,m$ and $(1.49\pm 0.14)\cdot10^{-5}\,m$. All these values lie within the expected range for a slit to diffract light.
To measure the output of the two-dimensional grating, we can model it as two one-dimensional problems. Measuring the maxima in one direction and then again in the other direction, the two results can be compared and should be of similar value if the grating is equally spaced in both directions. The results were found to be $(5.84\pm 2.62)\cdot10^{-5}\,m$ and $(5.24\pm 2.62)\cdot10^{-5}\,m$.
CONCLUSION
All parts of the experiments were carried out effectively, and for all parts data was collected and analysed. For a single slit of unknown width, the calculated value was found to be $(7.31\pm 0.39)\cdot10^{-5}\,m$, which is in the right order of magnitude for a single slit that diffracts light. Observing multiple slits showed that the secondary maxima became weaker as the number of slits on the source was increased, while the central maxima became sharper. Finally, one- and two-dimensional gratings were analysed to calculate the wire separation. For the one-dimensional samples the separation widths were $(6.90\pm 0.51)\cdot10^{-5}\,m$, $(2.37\pm 0.46)\cdot10^{-5}\,m$ and $(1.49\pm 0.14)\cdot10^{-5}\,m$, and for the two-dimensional grating the widths in the two directions were $(5.84\pm 2.62)\cdot10^{-5}\,m$ and $(5.24\pm 2.62)\cdot10^{-5}\,m$.
Unfortunately, the photodetector did not work correctly: the values obtained from one measurement did not match values obtained later or on different days. Attempts were made to improve the readings, including keeping the room constantly pitch black and realigning the optics, and it was eventually decided to stop taking detector measurements.

Determination of Unknown Salt Experiment

Materials:
The materials required to complete the lab included goggles/eyewear, which help prevent the harmful substances we worked with from damaging the eyes. Three Styrofoam cups were used to make an insulator and create the calorimeter. A thermometer was required to determine the temperature of both the water and the dissolved unknown salt. A weighing boat was needed to hold the 3.0 grams of salt, and a scoopula and a scale helped us measure the exact amount of unknown salt needed. Water was required to dissolve the salt in and to measure the temperature of, and a 100 mL graduated cylinder was used to measure the quantity of water accurately. The unknown salt itself was provided by our teacher, which allowed us to complete the experiment. Other materials needed to complete the lab included paper towels.
Procedure:
In order to determine what our unknown salt is, we needed to make a guideline of the steps required. The procedure of our lab is:

Gather all equipment/materials to start the procedure.
Weigh the weighing boat and record the weight.
Place 3.0 grams of unknown salt 7 in the weighing boat.
Take two of the three foam cups and place them within each other to create an insulator, preventing heat from escaping or cold air from entering.
Take the 100 mL graduated cylinder and measure 25.0 mL of water.
Pour the 25 mL of water into the two foam cups.
Cut the third Styrofoam cup to fit the top of the first two cups.
Make a hole and place the thermometer in the calorimeter.
Read the temperature of the water and record it.
Remove the thermometer and add the 3.0 grams of unknown salt into the calorimeter.
Let the salt dissolve and determine the temperature by placing the thermometer through the top of the third cup.
Before measuring the temperature, swirl the cup to ensure the unknown salt has reacted/dissolved completely.
Determine the temperature and record the results.
Dispose of the waste, clean the equipment, and restart for the remaining two trials.

Observations and Results:
Before beginning the calculations for the lab, we need to determine which equations we will have to use.
Equations:

∆T= T2 – T1

The equation above is the change in temperature, represented by delta (∆), which is the second temperature recorded minus the first temperature recorded (T2 − T1).

Q=mc∆T

The equation above allows us to determine q, the quantity of heat transferred, which equals the mass (m) multiplied by the specific heat capacity (c) and by the change in temperature (∆T = T2 − T1).
m∆H = −q
The equation above allows us to solve for the ∆H of the system. Once we have determined the quantity of heat transferred using q = mc∆T, we can determine ∆H by either replacing q with mc∆T or substituting the calculated value of q into the equation.
Average enthalpy = (Trial 1 + Trial 2 + Trial 3) / 3

The equation above gives us the average enthalpy over the number of trials conducted by our group: we add up the enthalpies of all the trials and divide by 3 to give the average.
Percent error = (Theoretical value − Actual value) / Theoretical value × 100%

This equation allows us to determine the percentage error of our results. After calculating our enthalpy, we take the theoretical value, found on page 347, table 1 in our textbook, subtract the actual value from it, divide the difference by the theoretical value, and multiply by 100%.
With all the data recorded from the experiment, we were able to form a chart for all three trials and mathematically determine what the unknown salt was.
Weighing boat = 1.81 grams
Temperature of water and unknown salt obtained from three trials

Temperature     Trial 1                    Trial 2                   Trial 3
T1              22 °C                      12 °C                     13 °C
T2              14 °C                      5 °C                      6 °C
∆T (T2 − T1)    14 − 22 = −8.0 °C          5 − 12 = −7.0 °C          6 − 13 = −7.0 °C
Table 1: Temperature results and change in temperature of the water over three trials. From the chart above, we can see that we completed three trials to determine the exact value for the unknown salt and its identity. In addition, we recorded the temperature of the water before the salt was added (T1) and after the salt was added (T2). From that point we calculated the change in temperature for each trial with the equation ∆T = T2 − T1.
Heat and enthalpy of unknown salt for three trials

Trial       Heat (Q = mc∆T)       Enthalpy (m∆H = −q)
Trial 1     Q = −836 J            ∆H = 0.279 kJ/g
Trial 2     Q = −0.7315 kJ        ∆H = 0.2438 kJ/g
Trial 3     Q = −0.7315 kJ        ∆H = 0.2438 kJ/g

Table 2: Enthalpy and heat of solution of unknown salt 7 for three trials. The chart above shows the heat transferred and the enthalpy of the unknown salt from the three tests conducted. We determined the heat using the equation q = mc∆T and the enthalpy using m∆H = −q. The calculations used to determine these results are shown below:
Calculations:
Note: 1 mL of water is 1 gram (m = dV; mass = density (1.00 g/mL) × volume (mL)).

Trial 1:
Q = mc∆T = (25 g)(4.18 J/g·°C)(−8 °C) = −836 J = −0.836 kJ
∆H = −q/m = 0.836 kJ / 3.00 g = 0.279 kJ/g

Trial 2:
Q = mc∆T = (25 g)(4.18 J/g·°C)(−7 °C) = −731.5 J = −0.7315 kJ
∆H = −q/m = 0.7315 kJ / 3.00 g = 0.2438 kJ/g

Trial 3:
Q = mc∆T = (25 g)(4.18 J/g·°C)(−7 °C) = −731.5 J = −0.7315 kJ
∆H = −q/m = 0.7315 kJ / 3.00 g = 0.2438 kJ/g

Average enthalpy:
Average enthalpy = (Trial 1 + Trial 2 + Trial 3) / 3
Average enthalpy = (0.279 kJ/g + 0.2438 kJ/g + 0.2438 kJ/g) / 3
Average enthalpy = 0.256 kJ/g
After determining our average enthalpy, we can determine which compound the salt is. Turning to page 347, table 1 of our textbook, we are given a list of compounds. The compound with the enthalpy nearest ours is ammonium chloride: ammonium chloride has an enthalpy of solution of 0.277 kJ/g and we obtained 0.256 kJ/g. On this basis we concluded that our compound was in fact ammonium chloride.
Percentage error:
Percent error = (Theoretical value − Actual value) / Theoretical value × 100%
Percent error = (0.277 kJ/g − 0.256 kJ/g) / 0.277 kJ/g × 100%
Percent error = 7.58%
Therefore, the percentage error of our results was 7.58%.
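The arithmetic above can be checked with a short Python sketch (added as an illustration; it treats the solution as 25 g of water with c = 4.18 J/g·°C, the same assumption the calculations make):

water_mass = 25.0    # g (25 mL of water at 1.00 g/mL)
salt_mass = 3.0      # g of unknown salt 7
c = 4.18             # specific heat capacity of water, J/(g*degC)

delta_T = [-8.0, -7.0, -7.0]   # T2 - T1 for the three trials (degC)

enthalpies = []
for dT in delta_T:
    q = water_mass * c * dT            # heat change of the water (J)
    dH = -q / 1000.0 / salt_mass       # enthalpy of solution, kJ per gram of salt
    enthalpies.append(dH)

average = sum(enthalpies) / len(enthalpies)
theoretical = 0.277                    # kJ/g for ammonium chloride (textbook value)
percent_error = (theoretical - average) / theoretical * 100

print("average enthalpy = %.3f kJ/g" % average)   # ~0.255 (0.256 with rounding)
print("percent error = %.2f%%" % percent_error)   # ~7.8% here; the report's 7.58%
                                                  # comes from rounding the trial
                                                  # enthalpies before averaging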
Discussion:
Throughout the cold pack experiment, not all of our results were accurate, because we encountered some errors while completing the lab. One error that had an impact on our final results was the way our calorimeter was constructed. Throughout the experiment we assumed the calorimeter created an isolated, insulated system, but in truth it did not. As we put the water into the calorimeter, heat may have been transferred between the Styrofoam cups and the solution, in our case unknown salt 7. This possible heat transfer was not taken into account, and it could have caused an increase or a decrease in the temperature of our solution. Since the reaction was endothermic, the solution absorbed heat from the cups as well as from the reaction, which would change the measured temperatures. This error had a medium impact on our final results, because it affected not only our solution but also the measurements we took. Possible ways to prevent this error include taking into account that the temperature may drift because the system is not truly isolated, or using different materials that would insulate the solution better.

Find Out How UKEssays.com Can Help You!
Our academic experts are ready and waiting to assist with any writing project you may have. From simple essay plans, through to full dissertations, you can guarantee we have a service perfectly matched to your needs.
View our services

Our second source of error was in measuring the water and reading its temperature accurately. When taking the temperature of both the water and the solution, there could have been a difference between what we saw and what we wrote. Furthermore, since we do not know whether the thermometer was actually immersed in the water, it may not have been touching it, which would give wrong results. For example, when taking the temperature of the water, the thermometer could have read 22 °C and we could have read it as 23 °C or 24 °C. This error had a medium effect on our data; because of it, our enthalpy was not as accurate and did not exactly match the textbook values. When measuring the amount of water to dispense into the calorimeter, we needed to use a graduated cylinder for accurate measurements. The cylinder was meant to tell us whether we had exactly 25 mL of water, and reading it imprecisely may have given us either less or more than 25 mL. The impact of this source of error on our final result was medium: even though it did affect our final results, it did not affect them by much, since the difference between the expected value and ours had a small margin.
The last source of error that we had not accounted for during the lab may also have affected our final results: our unknown salt 7 was exposed to air for a period of time. Because the salt was exposed to air, some of it may have reacted with the atmosphere. Due to this, our mass measurement could have been off by roughly 0.10 g when massing the 3.0 grams, giving, for example, 2.90 grams. This may not affect the results by a lot, but there would still be some effect. Another way our results could have been affected is that, if some of the salt had already reacted, the temperature we measured could have been higher or lower than expected; for example, without exposure we might have measured 18 °C, but because of the exposure we measured 20 °C. The effect this had on our results was medium: if some of the unknown salt reacted, it would have been in such a small quantity that it would not have a large effect. Possible solutions to stop this problem from occurring include keeping the salt in an isolated container, or adding slightly more of the unknown salt to the water to compensate for the portion that had reacted.
In the midst of completing the lab, we also stumbled upon a mistake in determining the unknown salt. This mistake had an impact on the final answer and was not taken into account as something that might affect our final solution: the unknown salt, when added to the water, may not have dissolved completely. If the entire product did not dissolve, the temperature measured may have been affected; in fact, if part of the salt did not dissolve or take part in the reaction, the temperature we took could have been closer to only the water's temperature. This source of error had a large effect on our result because we had no way of determining whether the salt had dissolved fully without tampering with the solution. Furthermore, because of this error we may have recorded the wrong temperature of the solution, which in turn gave us an incorrect enthalpy and affected the final results. To prevent this source of error from occurring again, I could stir the unknown salt while it is in the water so that it dissolves properly; another method is to swirl the calorimeter to dissolve the salt, holding it from the top to prevent heat transfer between my hand and the water.
 
 
Diagram 1: The diagram shows the calorimeter being constructed, with the final result on the right. I would hold the middle of the calorimeter and swirl it around to better dissolve the unknown salt. From “DoChem 095 Heat of Solution of Magnesium.” N.p., n.d. Web. 10 Apr. 2014.
Conclusion:
In conclusion, this experiment allowed us, the students, to apply theories learned in class to real-life applications, or to applications we will soon encounter. The lab better prepared us for what may be expected in the future and allowed us to identify different factors that affected our results in more than one way. The cold pack experiment conducted by my group and I led us to face errors such as measurement errors, errors involving the calorimeter, and errors involving our unknown salt. These errors were recorded and explained to help us prevent them from occurring again. By following the correct procedure and having the correct materials, we were able to determine the final enthalpy, which allowed us to identify our unknown salt as ammonium chloride.
Bibliography:
“DoChem 095 Heat of Solution of Magnesium.” DoChem 095 Heat of Solution of Magnesium. N.p., n.d. Web. 10 Apr. 2014.
Brain, Marshall, and Sara Elliot. “How Refrigerators Work.” HowStuffWorks. N.p., n.d. Web. 13 Apr. 2014.
Kessel, Hans Van. “The Bohr Atomic Theory.” Nelson Chemistry 12. Toronto: Thomson Nelson, 2003. 174-76. Print.
 

Experiment on Distillation Principles

Abstract

The general objective of this experiment is to investigate and understand distillation principles, the parameters affecting the operation of distillation columns, and how to determine optimal operating conditions. To achieve this, two experiments were carried out: experiment one investigated the relationship between the column pressure drop and the boil-up rate, and experiment two determined the composition of a mixture of dichloromethane and trichloroethylene. The data obtained in experiments one and two were used to determine the overall column efficiency.

In order to investigate the pressure drop of the column, the power was set to 0.65 kW, 0.75 kW, 0.85 kW, and lastly 0.95 kW. For each power input, a sample was collected for 10 seconds, and the procedure was repeated for each increment. Afterward, a graph of pressure drop against boil-up rate (log/log) was plotted to determine the relationship between the two parameters. It was observed that the pressure drop and the boil-up rate increased as the power input increased, and that the degree of foaming increased from gentle at 0.65 kW to violent over the whole tray at 0.95 kW. The samples collected were tested for their refractive index.

The overall efficiency of the column at a power of 0.65 kW is 42.5%. However, this may not be the most optimal condition, since it was not possible to test the other three power inputs due to some systematic and technical errors.

 

Table of Contents

Nomenclature
Introduction
Objectives
Methodology
Results
Discussion of Results
Conclusions
References

Nomenclature

n = number of theoretical plates

XA = mole fraction of the more volatile component

XB = mole fraction of the least volatile component

αav = average relative volatility

Subscripts D, B (or d, b) indicate distillate and bottoms respectively

∆H̅vap = average latent heat of vaporisation of DCM-TCE mixture (J/mol)

R = 8.314 J/mol*K

Tb(TCE) = boiling point of trichloroethylene (K)  

Tb(DCM) = boiling point of dichloromethane (K)

Introduction

Distillation is defined as a process in which a liquid or vapour mixture of two or more components is separated into its component fractions, with a desired purity, by the input and removal of heat.

Distillation is one of the most common processes for separating liquid mixtures and can be carried out in a continuous or batch system. The basic theory behind it is very simple: a mixture of two or more components with different boiling points is separated through partial vaporisation of the liquid mixture or partial condensation of the gas mixture. As the mixture is fed to the column, some fractions vaporise and move up the column. The vapour components condense and leave the column at different levels, as the temperature is lower at the top of the tower. For a binary mixture, the more volatile component leaves at the top of the tower, and the less volatile component leaves at the bottom as a liquid.

Objectives

The objectives of this experiment are:

To investigate the pressure drop of the distillation column for four boil-up rates, and to observe the degree of foaming at each power input

To use a refractometer to determine the overhead and bottom mixture compositions

To determine the overall column efficiency

Literature Review

Distillation columns usually consist of a vertical tower containing a series of plates. As liquid runs down the tower, vapour rises towards the top. To understand the working principle, consider what happens when a liquid is heated. At its boiling point, the molecules of the liquid possess enough kinetic energy to escape into the vapour phase (evaporation), and if the temperature decreases, some molecules in the vapour phase return to the liquid phase (condensation). The same applies to the mixture of dichloromethane and trichloroethylene used in this experiment. As heat is applied to the column, the more volatile component (in this case dichloromethane, which has the lower boiling point) begins to vaporize, carrying with it some trichloroethylene. The vapour mixture is then condensed and evaporated again, giving a higher mole fraction of the less volatile component in the liquid phase and a higher mole fraction of the more volatile component in the vapour phase.

For this experiment, the column was set to operate at total reflux, which means that all the overhead product is condensed and fed back into the top of the tower and allowed to flow to the bottom of the column; i.e., no top product is withdrawn while the column is operational.

The total pressure drop across each tray is the sum of that caused by the restriction of the holes in the sieve tray, and that caused by passing through the liquid (foam) on top of the tray.

As the velocity of the vapours passing up the column increases, so does the overall pressure drop. The velocity can be varied through the boil-up rate, which is controlled by changing the power input to the reboiler. Under conditions where only the vapour phase is present, the trays act as an orifice, and the velocity is directly proportional to the square root of the pressure drop. However, this relationship does not become visible until the head of liquid has been overcome and foaming is taking place. In a graph of log pressure drop vs. log boil-up rate, at low boil-up rates the pressure drop remains almost constant until foaming occurs, after which the pressure drop would be expected to rise sharply for unit increases in boil-up rate.
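As a quick check of this expected behaviour, the measured points from Table 2 below can be fitted on log-log axes; for orifice-like flow (pressure drop proportional to the square of the velocity) the slope would approach 2. A minimal Python sketch:

```python
import numpy as np

# Boil-up rates (L/s) and pressure drops (mm H2O) from Table 2 below
boil_up = np.array([5.20, 5.30, 5.40, 6.20])
pressure_drop = np.array([104.0, 108.0, 121.0, 122.0])

# Slope of log(dP) vs log(boil-up); ~2 would indicate dP proportional to u^2
slope, _ = np.polyfit(np.log10(boil_up), np.log10(pressure_drop), 1)
print(slope)
```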

Key Definitions

Column efficiency: The overall efficiency is defined as the ratio of the number of theoretical trays to the actual number of trays in the column.

Foaming: In a distillation column, foaming is the expansion of liquid due to the passage of vapour or gas. Although it provides good liquid-vapour interfacial contact, excessive foaming may lead to liquid build-up on the column trays.

Boil-up rate: Also called the distillation load; the rate at which the mixture is vaporised in the column.

McCabe-Thiele diagram: A diagram in which y is plotted as a function of x along the column, providing an insightful graphical solution for the combined components. It is mainly used to determine the minimum number of theoretical plates required in a distillation column for the separation of binary mixtures.

 

The Fenske method: Similar in purpose to the McCabe-Thiele diagram, it is a method used to determine the minimum number of theoretical plates required in a distillation column for the separation of binary mixtures. It uses equation (4).

Methodology

 

APPARATUS

       Figure 1: Distillation Column Apparatus

Distillation Column

Condenser

Electromagnet (reflux control)

Reboiler

Cooler

Bottom

Distillate

Feed

 

Distillation column

Dichloromethane 4.15L

Trichloroethylene 5.85L

Automatic digital refractometer

Distilled water

Measuring cylinder

Conical flask

Dropper

Manometer

Stopwatch

Procedure for Experiment A: Variation of column pressure drop

1. Ensure all the valves on the equipment are closed and then open valve 10 (V10).

2. Switch on the reboiler heater power at the console and adjust the power to the heater until a reading of 0.65 kW is obtained on the digital wattmeter. The water in the reboiler begins to heat up; this is observed by selecting T9 (the reboiler temperature) on the process temperature digital display.

3. Allow the temperature to stabilize for 5-10 minutes.

4. Open V6 and V7 and measure the pressure difference on the manometer, then close V6 and V7.

5. Volume collection: open V3 so that all condensate is delivered into a measuring cylinder for 10 seconds.

6. Take a few drops of the sample and check its refractive index using the refractometer.

7. Repeat steps 1 to 6 for power inputs of 0.75, 0.85 and 0.95 kW.

Procedure for Experiment B:  Determining the Mixture Composition

1. Using the refractometer, measure the refractive index (R.I.) of pure dichloromethane and pure trichloroethylene.

2. Measure the refractive index of small quantities of 25, 50, 75 and 100 mol% dichloromethane/trichloroethylene mixtures. The calculated volumes of the constituents are shown in the results section.

Procedure for Experiment C:  Overall Column Efficiency

Note: the overall efficiency was determined using the data from parts A and B.

Results

Experiment A:

Table 1. Measured and calculated parameters

Power (kW) | Overhead RI | Bottom RI | Pressure drop (mm H2O) | Average column T (°C) | Degree of foaming on trays
0.65 | 1.4235 | 1.4490 | 104 | 41.5 | Gentle, localized
0.75 | 1.4220 | 1.4510 | 108 | 41.5 | Gentle, localized
0.85 | 1.4123 | 1.4520 | 121 | 41.5 | Violent, localized
0.95 | 1.4080 | 1.4500 | 123 | 40.7 | Violent, over whole tray

Table 2. Measured and calculated parameters

Power (kW) | Collection time (s) | Boil-up rate (L/s) | Pressure drop (mm H2O) | Refractive index | Degree of foaming on trays
0.65 | 10 | 5.20 | 104 | 1.4490 | Gentle, localized
0.75 | 10 | 5.30 | 108 | 1.4510 | Gentle, localized
0.85 | 10 | 5.40 | 121 | 1.4123 | Violent, localized
0.95 | 10 | 6.20 | 122 | 1.4082 | Violent, over whole tray

Figure 2. Relationship between pressure drop and boil-up rate

 

Experiment B:

Table 3. Recorded refractive index of dichloromethane at different concentrations

Sample | Dichloromethane concentration (mol %) | Refractive index
A | 0 | 1.4343
B | 25 | 1.4410
C | 50 | 1.4600
D | 75 | 1.4700
E | 100 | 1.4755

Figure 3: Refractive index vs mole fraction of dichloromethane

The compositions of the mixture were determined using the calibration equation obtained from Figure 3: y = −0.013x² − 0.0315x + 1.4768, where y represents the refractive index and x the molar composition of dichloromethane in the mixture. The value of x was found by substituting the refractive index measured for a given power input and solving the resulting quadratic equation.
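This root-finding step can be scripted; the minimal sketch below uses numpy.roots with the calibration coefficients above. A root outside [0, 1] flags a calibration or measurement problem of the kind the report notes for the higher power inputs.

```python
import numpy as np

def composition_roots(ri, a=-0.013, b=-0.0315, c=1.4768):
    """Roots of a*x^2 + b*x + (c - ri) = 0, where x is the DCM mole fraction.

    Physically meaningful solutions must lie in [0, 1]; anything else
    indicates experimental or calibration error.
    """
    return np.roots([a, b, c - ri]).real

print(composition_roots(1.4235))   # overhead refractive index at 0.65 kW
```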

For the first power input (0.65kW)

Overhead RI = 1.4235, so y = 1.4235 and the equation becomes:

1.4235 = −0.013x² − 0.0315x + 1.4768

0.013x² + 0.0315x − 0.0533 = 0

Solving the quadratic equation gives the values of x:

x1 = 0.73         x2 = −0.138

Bottom RI = 1.4490, so y = 1.4490 and the equation becomes:

1.4490 = −0.013x² − 0.0315x + 1.4768

0.013x² + 0.0315x − 0.0278 = 0

Solving the quadratic equation gives the values of x:

x1 = 0.41         x2 = −0.76

This means that the composition of dichloromethane in the overhead product is 0.73 and in the bottom product 0.41; the corresponding trichloroethylene compositions are 1 − 0.73 = 0.27 and 1 − 0.41 = 0.59.

For the second power input (0.75 kW):

Overhead RI = 1.4220, so y = 1.4220 and the equation becomes:

1.4220 = −0.013x² − 0.0315x + 1.4768

0.013x² + 0.0315x − 0.0548 = 0

Solving the quadratic equation gives the values of x:

x1 = 1.23         x2 = −0.65

Applying the same procedure to the bottom RI gives:

x1 = 1.6          x2 = −0.85

This implies that errors affected the experiment, because the mole fraction of a component cannot be greater than one. The same problem was identified at the next two power increments (0.85 and 0.95 kW) for both the overhead and bottom refractive indices.

 

 

Experiment C:

The overall column efficiency is defined as the ratio of the number of theoretical plates to the actual number of plates present in the distillation column, as previously stated. The actual number of plates in the column is 8.

Fenske's method was used to determine the total number of theoretical plates of the distillation column; equation (4) below is the Fenske equation used:

n + 1 = log[(xA/xB)d · (xB/xA)b] / log(αav)                (4)

αav = √(αd · αb)

αav = exp[(ΔH̄vap/R) · (1/Tb(TCE) − 1/Tb(DCM))]

E = (number of theoretical plates / number of actual plates) × 100%

Tb(DCM) = 312.9 K

Tb(TCE) = 360.5 K

ΔH̄vap = 27.9 kJ/mol

Power input of 0.65 kW

By calculating the value of α and then substituting all the known parameters into equation (4), it was found that the number of theoretical plates for a power input of 0.65 kW is 3.4.

E = (3.4 / 8) × 100% = 42.5%

Using the McCabe-Thiele diagram for the distillation of the binary mixture, as shown in Figure 4 below, the number of theoretical stages was found to be 4. This yields an overall efficiency of 50%.

Figure 4. McCabe Thiele diagram for dichloromethane/trichloroethylene binary mixture
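For reference, the Fenske route above can be scripted. The Python sketch below is a generic implementation of equation (4) and the efficiency definition, assuming the temperature terms in the relative-volatility estimate are ordered so that αav > 1. It is illustrative only: with the 0.65 kW compositions it does not reproduce the reported 3.4 plates, consistent with the measurement problems already noted in the composition data.

```python
import math

R = 8.314          # J/(mol*K)
dH_vap = 27.9e3    # J/mol, average latent heat of the DCM-TCE mixture
Tb_DCM = 312.9     # K, boiling point of dichloromethane
Tb_TCE = 360.5     # K, boiling point of trichloroethylene

# Average relative volatility; terms ordered so that alpha_av > 1 (assumption)
alpha_av = math.exp((dH_vap / R) * (1.0 / Tb_DCM - 1.0 / Tb_TCE))

def fenske_plates(xA_d, xA_b, alpha):
    """Fenske equation (4): returns n, the number of theoretical plates."""
    ratio = (xA_d / (1.0 - xA_d)) * ((1.0 - xA_b) / xA_b)
    return math.log10(ratio) / math.log10(alpha) - 1.0

n = fenske_plates(xA_d=0.73, xA_b=0.41, alpha=alpha_av)  # 0.65 kW compositions
efficiency = n / 8 * 100                                 # 8 actual plates
print(alpha_av, n, efficiency)
```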

The column efficiencies for power inputs of 0.75, 0.85, and 0.95 kW could not be calculated because the calculated molar compositions of dichloromethane in the mixture were greater than one, which implies that technical errors were made while performing the experiment.

Discussion of Results

At the beginning of the experiment, the power was first set to 0.5 kW, and at that power no pressure drop occurred; a pressure drop only began to appear at powers above 0.6 kW. This is because the energy being supplied was not high enough to boil up the solution. When the power was set to a higher value, the boil-up rate became higher and a pressure drop developed once the boil-up rate reached the required level. As shown in Tables 1 and 2, the boil-up rate (column load) and the pressure drop increased as the input power increased, except at the last power input of 0.95 kW, where the recorded pressure drop fell below the expected trend. The reason for this might be technical errors made while performing the experiment, as it was quite challenging to record the pressure drop. This observation departs from the theory, which says that the pressure drop is proportional to the column load.


Conclusions

From the results, it is noticeable that there is a steady increase in pressure drop as the boil-up rate increases. It is also noticeable that the pressure drop in the column is a function of the input power supplied: the pressure drop increased as more power was supplied to the column. This is likely caused by energy losses through increased thermal radiation and by frictional forces as the vapour velocity through the column increases.

There are some significant errors that might have affected the results. The values for the pressure drop and the refractive index are not fully accurate, since the readings were taken by eye. As the experiments were conducted only once, without repetition for each power input, it is probable that some of the results are not highly accurate.


Argon Cluster and Graphene Collision Simulation Experiment

Formation of Nanopore in a Suspended Graphene Sheet with Argon Cluster Bombardment: A Molecular Dynamics Simulation study
Abstract: The formation of a nanopore in a suspended graphene sheet under an argon cluster beam was simulated using the molecular dynamics (MD) method. The Lennard-Jones (LJ) two-body potential and the Tersoff-Brenner empirical potential energy function were applied in the MD simulations for the different interactions between particles. The simulation results demonstrate that the incident energy and the cluster size play a crucial role in the collisions. Results for Ar55-graphene collisions show that the Ar55 cluster bounces back when the incident energy is less than 11 eV/atom, and the argon cluster penetrates when the incident energy is greater than 14 eV/atom. Two threshold incident energies, i.e. the threshold incident energy of defect formation in graphene and the threshold energy of penetration of the argon cluster, were observed in the simulations. The threshold energies were found to have a relatively weak negative power-law dependence on the cluster size. The number of sputtered carbon atoms was obtained as a function of the kinetic energy of the cluster.
Keywords: Nanopore, Suspended graphene sheet, Argon cluster, Molecular dynamics simulation

Introduction

The carbon atoms in graphene condense in a honeycomb lattice due to sp2-hybridized carbon bonding in two dimensions [1]. Graphene has unique mechanical [2], thermal [3-4], electronic [5], optical [6], and transport properties [7], which lead to huge potential applications in nanoelectronics and energy science [8]. One of the key obstacles to pristine graphene in nanoelectronics is the absence of a band gap [9-10]. Theoretical studies have shown that chemical doping of graphene with foreign atoms can modulate its electronic band structure, lead to a metal-to-semiconductor transition, and break the polarized transport degeneracy [11-12]. Computational studies have also demonstrated that some vacancies of carbon atoms within the graphene plane can induce a band-gap opening and Fermi level shifting [13-14]. Graphene nanopores have potential applications in various technologies, such as DNA sequencing, gas separation, and single-molecule analysis [15-16]. Generating sub-nanometer pores with precisely controlled sizes is the key difficulty in the design of a graphene nanopore device. Several methods have been employed to punch nanopores in graphene sheets, including electron beams from a transmission electron microscope (TEM) and heavy ion irradiation.
Using the electron beam technique, Fischbein et al. [17] drilled nanopores with widths of several nanometers and demonstrated that porous graphene is very stable; however, this method cannot be widely used because of its low efficiency and high cost. Russo et al. [18] used an energetic ion exposure technique to create nanopores with radii as small as 3 Å. S. Zhao et al. [19] indicated that energetic cluster irradiation is more effective in generating nanopores in graphene, because a cluster's much larger kinetic energy can be transferred to the target atoms. Recent experimental work has further confirmed that cluster irradiation is a feasible and promising way of generating nanopores [20]. Numerical simulations have demonstrated that, by choosing a suitable cluster species and controlling its energy, nanopores of desired sizes and qualities can be fabricated in a graphene sheet [19].
A useful tool for studying how different cluster-graphene interaction conditions influence nanopore formation is numerical simulation using molecular dynamics (MD) [21]. The results may be useful in explaining experimental results and in predicting optimal conditions for desirable graphene nanopores.
In this paper, MD simulations were performed for the collisions between an argon cluster and graphene. The phenomena of argon cluster–graphene collisions and mechanism of the atomic nanopore formation in graphene were investigated. Effects of cluster size on the threshold incident energy of defect formation in graphene were also discussed.

Molecular Dynamics Method

MD simulations were performed for the collisions between an argon cluster and graphene. For the present simulations we used the LAMMPS code (Large-scale Atomic/Molecular Massively Parallel Simulator), developed at Sandia National Laboratories [22]. The length (along the X axis) of the graphene layer was 11 nm, its width (along the Y axis) was 10 nm, and the sheet contained 3936 atoms. Periodic boundary conditions were applied in both lateral directions. The Tersoff-Brenner empirical potential energy function (PEF) was used to model the covalent bonding between carbon atoms in the graphene layer [23-24]. The initial configuration was fully relaxed before the collision simulations, and the target temperature was maintained at 300 K. During the collision phase, a thermostat was applied to the borders of the graphene. The Ar nanocluster was prepared by cutting a sphere from an FCC bulk crystal, with no initial thermal motion. The Ar cluster was initially located above the centre of the graphene at a distance large enough that there was no interaction between the Ar and graphene atoms. Then a negative translational velocity component, Vz, was assigned to each atom of the cluster. The incident angle of the argon cluster to the graphene normal was zero. The Lennard-Jones (LJ) two-body potential was employed for the Ar-Ar and Ar-C interactions. The form of the LJ potential was:

V(rij) = 4εij[(σij/rij)^12 − (σij/rij)^6]                (1)

In the LJ potential, σ is the distance at which the potential is zero and ε is the depth of the potential well. The cross constants were obtained from the mixing rules σij = (σi + σj)/2 and εij = (εi·εj)^1/2. The parameters for ε and σ used in the present simulations are shown in Table 1 [25]. The positions of the atoms were updated by the velocity Verlet algorithm with a time step of less than Δt = 0.5 fs. To reduce the calculation time, a cut-off length was introduced: the van der Waals interaction of Ar-Ar and Ar-C atoms at distances of 11 Å or more was neglected.
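To illustrate the mixing rules, the sketch below recombines per-species parameters into the Ar-C pair values; the carbon σ and ε used here are assumptions chosen to be consistent with Table 1, not values quoted in this paper.

```python
import math

# Per-species Lennard-Jones parameters; Ar-Ar from Table 1, carbon values assumed
sigma = {"Ar": 3.4, "C": 3.37}           # Angstrom ("C" is an assumed value)
epsilon = {"Ar": 0.0104, "C": 0.0024}    # eV ("C" is an assumed value)

def lorentz_berthelot(i, j):
    """Mixing rules: sigma_ij = (sigma_i + sigma_j)/2, eps_ij = sqrt(eps_i*eps_j)."""
    return 0.5 * (sigma[i] + sigma[j]), math.sqrt(epsilon[i] * epsilon[j])

def lj_energy(r, s, e):
    """LJ pair energy, equation (1), truncated at the 11 Angstrom cut-off."""
    if r >= 11.0:
        return 0.0
    x = (s / r) ** 6
    return 4.0 * e * (x * x - x)

s_arc, e_arc = lorentz_berthelot("Ar", "C")
print(s_arc, e_arc)               # ~3.385 Angstrom and ~0.005 eV, matching Table 1
print(lj_energy(3.8, s_arc, e_arc))
```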

Results

The effect of incident energy was studied over the range 1-120 eV/atom, chosen to demonstrate two distinctive phenomena: (i) the argon atoms are simply reflected, or (ii) some argon atoms penetrate through the graphene. Fig. 1 shows the probabilities of reflection and penetration of the Ar55 cluster.
Fig. 2 shows snapshots of the deformation of the graphene sheet due to the collision with an Ar55 cluster for incident energies below 11 eV/atom. During the collision, the graphene bent in a circular region around the collision point and a transverse deflection wave was observed. After the collision, the argon cluster burst into fragments.


Fig. 3 shows the final atomic configurations resulting from the incidence of an Ar55 cluster with energies of 10 and 11 eV/atom. There were two possibilities for the structure of the graphene sheet after the collision: (i) the graphene was rippled and no damaged region was formed, which was observed for incident energies below 11 eV/atom (Fig. 3(a)); and (ii) the collision caused a defect in the graphene (Fig. 3(b)).
Fig. 4 shows that there were two possibilities for the structure of the graphene sheet after collision with an Ar55 cluster at incident energies greater than 11 eV/atom: (i) the argon cluster penetrated the graphene sheet without sputtering carbon atoms (Fig. 4(a)), and (ii) the argon cluster penetrated the graphene sheet with sputtered carbon atoms (Fig. 4(b)). When the incident energy of the argon cluster was 11 eV/atom, atomic-scale defects such as Stone-Wales defects were formed in the graphene sheet (Fig. 3(b)). With increasing incident energy, these atomic defects began to connect, and finally a nanopore with carbon chains on the pore edge was created in the graphene. Such atomic carbon chains with unsaturated bonds provide a route for the chemical functionalization of graphene nanopores in order to improve their separation ability and detection. For example, membranes of packed multilayered graphene oxide sheets are significantly permeable to water yet impermeable to He, N2, Ar, and H2 [26].
Accordingly, it was necessary to introduce the threshold incident energy of defect formation (Ed) in graphene and the threshold energy (Ep) of penetration of the argon cluster through graphene. Fig. 6 shows the size dependence of each threshold incident energy. Both Ed and Ep can be written as simple power-law equations:

Ed(N) = Ed(1)·N^(−α),    Ep(N) = Ep(1)·N^(−β)                (2)

In Eq. (2), Ed(1) and Ep(1) indicate the threshold energies for a single argon atom, and N is the cluster size. The power indices on N, α and β, express the degree of the non-linear effect.
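A power law of this form is fitted as a straight line in log-log coordinates. The sketch below demonstrates the procedure with placeholder (N, Ed) pairs standing in for values read off Fig. 6; they are not data from this work.

```python
import numpy as np

# Placeholder cluster sizes and defect-formation thresholds (eV/atom)
N = np.array([1.0, 13.0, 19.0, 55.0])
Ed = np.array([30.0, 18.0, 16.0, 11.0])

# Fit log(Ed) = log(Ed(1)) - alpha*log(N)
slope, intercept = np.polyfit(np.log(N), np.log(Ed), 1)
alpha, Ed1 = -slope, np.exp(intercept)
print(f"Ed(N) ~ {Ed1:.1f} * N^(-{alpha:.2f})")
```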

Fig. 5 shows the final atomic configurations resulting from Ar55 incidence at higher total energies. By further increasing the energy, the carbon chains became shorter and the pore edge became smoother.
We calculated the number of sputtered carbon atoms as a function of the total incident energy, because the number of sputtered carbon atoms corresponds to the area of the nanopore in the graphene. Fig. 7 shows the number of sputtered carbon atoms as a function of the total cluster energy for Ar19 and Ar55 cluster collisions. In both cases, the number of sputtered carbon atoms increased as the total energy increased, in agreement with a previous study [27]. The number of sputtered carbon atoms can be approximated by a constant value for incident energies larger than 10 keV. At the same total cluster energy, collisions with larger clusters led to a higher number of sputtered carbon atoms.

Conclusions

The phenomena of argon cluster-graphene collisions and the mechanism of atomic nanopore formation in a suspended graphene sheet were investigated using the molecular dynamics method. A summary of the obtained results is as follows:

Threshold incident energies were introduced for defect formation (Ed) in graphene and for penetration (Ep) of the argon cluster through graphene.
Simulation results for the argon cluster-graphene collisions showed that the argon cluster bounced back when the incident energy was less than Ed and penetrated when the incident energy was greater than Ep.
Suspended carbon chains could be formed at the edge of the nanopore by adjusting the incident energy; with increasing energy, the carbon chains became shorter and the pore edge became smoother.
Ed and Ep were found to have a relatively weak negative power-law dependence on the cluster size.
Cluster collisions at larger cluster sizes led to a higher number of sputtered carbon atoms when all clusters had the same total energy.

References
[1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, A. A. Firsov, Science 306 (2004) 666.
[2] T. Lenosky, X. Gonze, M. Teter, V. Elser, Nature 355 (1992) 333.
[3] J. N. Hu, X. L. Ruan, Y. P. Chen, Nano Lett. 9 (2009) 2730.
[4] S. Ghosh, I. Calizo, D. Teweldebrhan, E. P. Pokatilov, D. L. Nika, A. A. Balandin, W. Bao, F. Miao, C. N. Lau, Appl. Phys. Lett. 92 (2008) 151911.
[5] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, A. K. Geim, Rev. Mod. Phys. 81 (2009) 109.
[6] D. S. L. Abergel, A. Russell, V. I. Fal'ko, Appl. Phys. Lett. 91 (2007) 063125.
[7] A. Cresti, N. Nemec, B. Biel, G. Niebler, F. Triozon, G. Cuniberti, S. Roche, Nano Research 1 (2008) 361.
[8] A. K. Geim, Science 324 (2009) 1530.
[9] A. Du, Z. Zhu, S. C. Smith, J. Am. Chem. Soc. 132 (2010) 2876.
[10] R. Balog, B. Jørgensen, L. Nilsson, M. Andersen, E. Rienks, M. Bianchi, M. Fanetti, E. Lægsgaard, A. Baraldi, S. Lizzit, Z. Sljivancanin, F. Besenbacher, B. Hammer, T. G. Pedersen, P. Hofmann, L. Hornekær, Nat. Mater. 9 (2010) 315.
[11] T. B. Martins, R. H. Miwa, A. J. R. da Silva, A. Fazzio, Phys. Rev. Lett. 98 (2007) 196803.
[12] Y. M. Lin, C. Dimitrakopoulos, K. A. Jenkins, D. B. Farmer, H. Y. Chiu, A. Grill, P. Avouris, Science 327 (2010) 662.
[13] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, A. K. Geim, Rev. Mod. Phys. 81 (2009) 109.
[14] D. J. Appelhans, Z. Lin, M. T. Lusk, Phys. Rev. B 82 (2010) 073410.
[15] G. F. Schneider, Nano Lett. 10 (2010) 3163.
[16] P. Russo, A. Hu, G. Compagnini, Nano-Micro Lett. 5 (2013) 260.
[17] M. D. Fischbein, M. Drndic, Appl. Phys. Lett. 93 (2008) 113107.
[18] C. J. Russo, J. A. Golovchenko, Proc. Natl. Acad. Sci. USA 109 (2012) 5953.
[19] S. J. Zhao, J. M. Xue, L. Liang, Y. G. Wang, S. Yan, J. Phys. Chem. C 116 (2012) 11776.
[20] Y. C. Cheng, H. T. Wang, Z. Y. Zhu, Y. H. Zhu, Y. Han, X. X. Zhang, U. Schwingenschlögl, Phys. Rev. B 85 (2012) 073406.
[21] H. Araghi, Z. Zabihi, Nucl. Instrum. Methods B 298 (2013) 12.
[22] S. J. Plimpton, J. Comput. Phys. 117 (1995) 1.
[23] D. W. Brenner, Phys. Rev. B 42 (1990) 9458.
[24] D. W. Brenner, O. A. Shenderova, J. A. Harrison, S. J. Stuart, B. Ni, S. B. Sinnott, J. Phys. Condens. Matter 14 (2002) 783.
[25] Y. Yamaguchi, J. Gspann, Eur. Phys. J. D 16 (2001) 103.
[26] R. R. Nair, H. A. Wu, P. N. Jayaram, I. V. Grigorieva, A. K. Geim, Science 335 (2012) 442.
[27] N. Inui, K. Mochiji, K. Moritani, N. Nakashima, Appl. Phys. A: Mater. Sci. Process. 98 (2010) 787.
Fig. 1. Incident energy dependence of the reflection and penetration probabilities
Fig. 2. Snapshots of Ar55 cluster collision on a graphene sheet: (a) t = 0 ps, (b) t = 1 ps, (c) t = 6 ps
Fig. 3. Final atomic configurations in the X-Y plane when the collision energy is: (a) 10 eV/atom, (b) 11 eV/atom
Fig. 4. Final atomic configurations when the incident energy is: (a) 14 eV/atom, (b) 15 eV/atom
Fig. 5. Final atomic configurations in the X-Y plane when the incident energy is: (a) 1 keV, (b) 10 keV, (c) 20 keV
Fig. 6. (a) Cluster size dependence of the threshold incident energy of defect formation in graphene; (b) cluster size dependence of the threshold energy of penetration of the argon cluster
Fig. 7. Dependence of the number of sputtered atoms on the kinetic energy of the cluster
Table 1. Lennard-Jones potential parameters

Pair | σ (Å) | ε (eV)
Ar-Ar | 3.4 | 0.0104
Ar-C | 3.385 | 0.005

 

Cloud Point Extraction Experiment

Bromothymol blue (also known as bromothymol sulfone phthalein, BTB) (Figure 2.1.1) is a pH indicator (yellow at pH 6.0 and blue at pH 7.6). Its chemical name is 4,4′-(1,1-dioxido-3H-2,1-benzoxathiole-3,3-diyl)bis(2-bromo-6-isopropyl-3-methylphenol) (The Merck Index, 13th edition, 2007) [1]. The pKa of BTB is 7.1. This dye is the most appropriate pH indicator dye for physiological tissue and is also used in investigations of lipid-protein interactions (Puschett and Rao 1991; Gorbenko 1998; Sotomayor et al. 1998) [2,3,4]. It is widely applied in biomedical, biological, and chemical engineering applications (Schegg and Baldini 1986; Ibarra and Olivares-Perez 2002) [5,6]. BTB in its protonated or deprotonated form is yellow or blue, respectively, while its solution is bluish green when neutral. It is sometimes used to define cell walls or nuclei under the microscope. BTB is mostly used for evaluating and estimating the pH of pools and fish tanks and for determining the presence of carbonic acid in a liquid. There are several treatment procedures for removing dyes from waste streams, including adsorption (Nandi, Goswami, and Purkait 2009) [8], coagulation-flocculation, oxidation-ozonation, reverse osmosis, membrane filtration, biological degradation, and electrochemical processes (Shen et al. 2001; Kim et al. 2004; Chatterjee, Lee, and Woo 2010) [9,10,11].

2.1.2 EXPERIMENTAL
2.1.2.1 Materials:
All the solutions were prepared with double-distilled water.
2.1.2.1.1 Triton X-100 (0.1 M): Triton X-100 was purchased from Qualigens (analytical grade). The TX-100 was cleared of any low-boiling impurities by exposure to vacuum for 3 h at 70 °C, following the procedure given by Kumar and Balasubrahmanium [19]. 31.4 g of TX-100 liquid is dissolved in a 500 ml volumetric flask and made up to the mark to obtain a 0.1 mol/dm3 solution. The critical micellar concentration and cloud point of TX-100 are 2.8×10⁻⁴ mol/dm3 [20] and 65 °C [21], respectively.


2.1.2.1.2 Bromothymol Blue (BTB): 1.0 g of BTB dye (Merck, India) was dissolved in 5.0 ml of ethanol (99.8%) and then diluted with double-distilled water in a 1000 ml volumetric flask up to the mark to obtain a concentration of 1000 mg/dm3 (Babak Samiey, Kamal Alizadeh et al. 2004) [22]. To avoid fading, the stock solution was wrapped in black paper. The working solutions of BTB were prepared by appropriate dilution of the stock solution immediately prior to use.
2.1.2.1.3 Acetic acid (0.5 M): 28.5 ml of glacial acetic acid (A.R. grade, Qualigens) was diluted with distilled water in a 1000 ml volumetric flask to give a 0.5 M acetic acid solution. The solution obtained was diluted to the required concentration and standardized against standard NaOH solution as per the literature procedure (Vogel et al. 1989) [23].
2.1.2.1.4 Sodium acetate (0.5 M): 13.6 g of sodium acetate trihydrate (CH3COONa·3H2O, analytical grade, Qualigens) is dissolved in a 100 ml volumetric flask and made up to the mark (Vogel et al. 1978) [24].
2.1.2.1.5 NaCl (0.1 M): 2.922 g of pure dry sodium chloride (analytical grade, Qualigens) is weighed out and dissolved in a 500 ml volumetric flask to give a 0.1 M NaCl solution (Vogel et al. 1989) [25].
2.1.2.1.6 Na2SO4 (0.5 M): 16.1 g of sodium sulphate decahydrate (Na2SO4·10H2O, A.R. grade, Merck India) is dissolved in a 100 ml volumetric flask and made up to the mark to give a 0.5 M Na2SO4 solution (Vogel et al. 1989) [26].
2.1.2.1.7 KH2PO4 (1.0 M): 34.02 g of KH2PO4 (analytical grade, Qualigens) is dissolved in a 250 ml volumetric flask and made up to the mark (Vogel et al. 1978) [27].
2.1.2.1.8 Na2HPO4 (1.0 M): A.R. grade disodium hydrogen phosphate (Na2HPO4·2H2O) is taken in a porcelain crucible and heated until no more water is liberated. Then 17.8 g of the cooled residue is taken in a 100 ml volumetric flask and made up to the mark to give a 1.0 M Na2HPO4 solution (Vogel et al. 1978) [28]. The reagent is prepared freshly each time.
2.1.2.1.9 Buffer solution of pH 4.0 (±0.05): 5 ml of 4 M sodium acetate (A.R. grade, Qualigens) and 20 ml of 4 M acetic acid (A.R. grade, Qualigens) are mixed in a 100 ml volumetric flask and made up to the mark; the resultant pH is 4.0 (±0.05) (Vogel et al. 1989) [29].
2.1.2.1.10 Buffer solution of pH 5.0 (±0.05): 17.5 ml of 4 M sodium acetate (A.R. grade, Qualigens) and 10 ml of 4 M acetic acid (A.R. grade, Qualigens) are mixed in a 100 ml volumetric flask and made up to the mark; the resultant pH is 5.0 (±0.05) (Vogel et al. 1989) [30].
2.1.2.1.11 Buffer solution of pH 6.0 (±0.05): 13.2 ml of 1 M KH2PO4 (A.R. grade, Qualigens) and 86.8 ml of 1 M Na2HPO4 (A.R. grade, Qualigens) are mixed in a 100 ml volumetric flask; the resultant pH is 6.0 (±0.05) (Vogel et al. 1989) [31].
2.1.2.1.12 Buffer solution of pH 7.0 (±0.05): 61.5 ml of 1 M KH2PO4 (A.R. grade, Qualigens) and 38.5 ml of 1 M Na2HPO4 (A.R. grade, Qualigens) are mixed in a 100 ml volumetric flask; the resultant pH is 7.0 (±0.05) (Vogel et al. 1989) [31].
2.1.2.1.13 Buffer solution of pH 8.0 (±0.05): 94.0 ml of 1 M KH2PO4 (A.R. grade, Qualigens) and 6.0 ml of 1 M Na2HPO4 (A.R. grade, Qualigens) are mixed in a 100 ml volumetric flask; the resultant pH is 8.0 (±0.05) (Vogel et al. 1989) [31].
2.1.2.1.14 Buffer solution of pH 9.2 (±0.05): 1.905 g of Na2B4O7·10H2O (A.R. grade, Qualigens) is dissolved in a 100 ml volumetric flask and made up to the mark to obtain a 0.05 M borax solution. The resultant pH of the solution is 9.2 (±0.05) (Vogel et al. 1989) [32].
2.1.2.2 Methodology for cloud point extraction:
2.1.2.2.1 Procedure:
The cloud point temperature was determined by the literature method reported by Carvalho et al. [33], based on visual observation of the separation of phases in the micellar solution. The solution was heated gradually in a water bath until turbidity appeared. To verify the results, the opposite process was carried out by cooling gradually with constant stirring, and the cloud point was taken as the temperature at which the solution became clear. The reported value is the average of these two determinations; in most cases the two temperatures were identical within ±0.5 °C.
The cloud point extraction experiment was conducted using a 10 ml centrifuge tube with a screw cap containing different concentrations of Triton X-100 and BTB, sonicated for 2 minutes for proper mixing. The solution is heated to 80 °C in a thermostatic temperature bath for 20 min. The turbid solution is then centrifuged at 3500 rpm for 5 min and cooled in an ice bath for 2 minutes in order to separate the phases. The two phases are separated and the volumes of the surfactant-rich phase (coacervate phase) and the dilute phase are measured. The average of three determinations is reported in all cases. The concentration of dye in both phases was measured using a PerkinElmer Lambda 25 UV-visible spectrophotometer. In order to determine the influence of the reagents added to the surfactant phase, cloud point determinations were performed with additions of buffer, dye and inorganic salts. The procedure for the determination of the critical temperature was the same as above, but using only a fixed surfactant concentration. The phase diagram for Triton X-100 was obtained by measuring the cloud point temperature of aqueous surfactant solutions at different concentrations.
2.1.2.2.2 Spectra and calibrated graph
The concentration of the dye was determined by UV-visible spectrophotometer (PerkinElmer Lambda 25). Pure BTB was first calibrated separately at different concentrations in terms of absorbance units, recorded at a wavelength of 430 nm, at which maximum absorption takes place (Figures 2.1.2, 2.1.3). No significant change in absorbance was observed even in the presence of TX-100; therefore, all absorbance measurements were performed at this wavelength.
Figure 2.1.2 Spectra of BTB dye

Figure 2.1.3 Calibration curve of BTB dye
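A calibration of this kind reduces to a least-squares line of absorbance against concentration at 430 nm; the Python sketch below illustrates the idea with invented data points, not the measured calibration values.

```python
import numpy as np

# Invented calibration points at 430 nm: BTB concentration (mg/dm3) vs absorbance
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
absorbance = np.array([0.11, 0.22, 0.34, 0.44, 0.56])

# Beer-Lambert behaviour: absorbance is linear in concentration
slope, intercept = np.polyfit(conc, absorbance, 1)

def concentration(a):
    """Invert the calibration line: concentration from measured absorbance."""
    return (a - intercept) / slope

print(concentration(0.30))
```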

2.1.2.2.3 Determination of phase volume ratio, fractional coacervate phase volume and pre-concentration factor
The volumes of the surfactant-rich and aqueous phases obtained after phase separation were determined using calibrated centrifuge tubes in order to calculate the pre-concentration factor. Surfactant solutions containing typical amounts of BTB were extracted using the CPE procedure, followed by measurement of the respective phase volumes. The results reported are the average of three determinations.
The phase volume ratio is defined as the ratio of the volume of the surfactant-rich phase to that of the aqueous phase:

Rv = Vs / Vw                (2.1.1)

where Rv is the phase volume ratio and Vs and Vw are the volumes of the surfactant-rich phase and the aqueous phase, respectively.
The pre-concentration factor fc is defined as the ratio of the volume of the bulk solution before phase separation (Vt) to that of the surfactant-rich phase after phase separation (Vs):

fc = Vt / Vs                (2.1.2)

The fractional coacervate phase volume as a function of the feed surfactant concentration is calculated using the relationship:

—— (2.1.3)

where Fc is the fractional coacervate volume and Cs is the molar concentration of the feed surfactant solution; for a fixed feed dye concentration, the parameters a and b vary linearly with temperature. The value of Fc lies between 0.04 and 0.23 for the various operating conditions.
The surfactant partition coefficient m is defined as the ratio of the surfactant concentration in the coacervate phase to that in the dilute phase:

m = Cs,coacervate / Cs,dilute                (2.1.4)

The efficiency of extraction is defined as:

—- (2.1.5)
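The two volume-based quantities, (2.1.1) and (2.1.2), are simple ratios; a minimal sketch with assumed phase volumes from a 10 ml centrifuge tube:

```python
def phase_volume_ratio(v_surfactant_rich, v_aqueous):
    """Rv = Vs / Vw, equation (2.1.1)."""
    return v_surfactant_rich / v_aqueous

def preconcentration_factor(v_bulk, v_surfactant_rich):
    """fc = Vt / Vs, equation (2.1.2)."""
    return v_bulk / v_surfactant_rich

# Assumed volumes (ml) for illustration
print(phase_volume_ratio(0.5, 9.5))        # ~0.053
print(preconcentration_factor(10.0, 0.5))  # 20, within the 20-60 range reported later
```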
2.1.4 Discussion:
This section is divided into four parts. In the first part, the factors influencing the extraction efficiency (e.g., concentrations of non-ionic surfactant, dye and salt, and the temperature and pH of the solution) and the fractional coacervate phase volume are discussed. The nature of the solubilization isotherm at different temperatures is presented in the second part. In the third and fourth parts, the thermodynamic parameters and a calculation procedure for determining the surfactant requirement for dye removal to a desired level are briefly discussed.
2.1.4.1 Factors influencing efficiency:
For ionizable solutes, the charge of the solute can greatly influence its extent of binding to a micellar assembly [34]. The ionic form of a molecule normally does not interact with and bind to the micellar aggregate as strongly as its neutral form does. Thus, adjustment of the solution pH for maximum extractability is of special importance when controlling experimental variables in CPE.
With increasing pH, the extraction efficiency increases up to pH 8.0 and then decreases. This is in accordance with the decrease in cloud point up to pH 8.0 and a sudden increase at pH 9.2. Further, the pKa of BTB is 7.1. In the absence of any buffer, the pH of the dye solution is 7.0, and there is no change in pH even after the extraction process is complete. Hence, all the parameters were optimized at this fixed pH of the medium. No significant increase in efficiency is observed with increasing [dye], since the cloud point is not much altered by increasing the dye concentration.
The extraction efficiency of the dye increases with surfactant concentration. The concentration of micelles increases with surfactant concentration, resulting in more solubilisation of dye in the micelles. The surfactant concentration in the dilute phase remains roughly constant (approximately equal to the CMC), while the surfactant concentration, along with the solubilised dye, in the coacervate (micellar) phase increases to maintain the material balance [42-46]. The extraction of the dye with TX-100 solution is due to hydrophobic interaction between BTB and the hydrophobic micelles in solution. However, with increasing TX-100 concentration the analytical signal becomes weaker, owing to the increase in the final volume of the surfactant-rich phase, which causes the pre-concentration factor (phase volume ratio) to decrease [35]. In view of these observations, 0.04 mol/dm3 Triton X-100 was used throughout.
It has been shown that the presence of electrolytes can change the cloud point in different ways [36]. A salting-out electrolyte such as NaCl decreases the cloud point temperature. Such salts promote the dehydration of the ethoxy groups on the outer surface of the micelles, enhancing the micellar concentration, leading to solubilisation of more dye and a more efficient extraction [37], and they reduce the time required for phase separation. A lower salt concentration gives a smaller pre-concentration factor, owing to the larger surfactant-rich phase volume at lower salt concentrations [38]. As shown in the figure, the ability of salts to enhance the extraction efficiency of the dye followed the order Na2SO4 > NaCl.
Temperature has a pronounced effect on the extraction of the solute: (i) at high temperature, the CMC of the non-ionic surfactant decreases; and (ii) the non-ionic surfactant becomes more hydrophobic due to dehydration of the ether oxygens [39], increasing the micellar concentration and the solubilization.
A general pre-concentration factor of 20-60 was obtained with this CPE method, and similar pre-concentration has been reported for other analytes [40]. Typical pre-concentration factors reported in the literature [41] varied from 10 to 100. The CPE method gives a better pre-concentration factor than conventional solvent extraction methods. In general, high pre-concentration factors in CPE can be achieved using small amounts of surfactants that have a large capacity to accommodate dye molecules. The hydrated nature and relative polarity of the micelles, on the other hand, limit the extraction of dye into the surfactant-rich phase.
From the viewpoint of concentrating analytes present in aqueous solutions, a larger pre-concentration factor, i.e. a smaller surfactant-rich phase volume, is desired. A lower surfactant concentration gives a higher pre-concentration factor; however, sampling and accurate analysis become very difficult with a very small volume of the surfactant-rich phase. Conversely, an excessive amount of added salting-out electrolyte can give a higher pre-concentration factor, but it is likely to form a very viscous liquid-crystalline phase instead of a fluid liquid phase, making it difficult to separate the surfactant-rich phase. Therefore, optimization of the pre-concentration factor is critical for a feasible CPE technique. Hence, a surfactant concentration of 0.04 mol/dm3 was chosen for the CPE experiments in this research.
2.1.4.2 Solubilization isotherm:
The solubilization isotherm, relating moles of solute solubilized per mole of surfactant to the equilibrium dye concentration [50], is presented in Figure 2.1.8.
The isotherm can be expressed by a Langmuir-type expression:

qe = m·n·Ce / (1 + n·Ce)                (2.1.6)

where both m and n are functions of temperature.
Figure 2.1.8 Solubilisation curve of BTB dye

Assuming homogeneous monolayer adsorption, the linearized Langmuir sorption model of equation (2.1.6) can be written as:

1/qe = 1/m + (1/(m·n))·(1/Ce)                (2.1.7)
The plot of 1/qe vs. 1/Ce over the entire dye concentration range was linear, with a correlation coefficient of 0.983, as shown in Figure 2.1.9. Thus, the solubilization of the dye obeys the Langmuir adsorption model. The Langmuir parameters m and n, calculated from the intercept and slope of the linear plot of 1/qe vs. 1/Ce, were found to be 4.29×10⁻³ mol/mol and 2.04×10⁴ dm3/mol, respectively.
Figure 2.1.9 Langmuir isotherm of BTB dye
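Given the linearized form (2.1.7), the Langmuir constants follow from an ordinary least-squares fit in the reciprocal variables. The sketch below uses invented (Ce, qe) points, generated to be consistent with the reported m and n, purely for illustration:

```python
import numpy as np

# Invented equilibrium data: Ce in mol/dm3, qe in mol dye per mol surfactant
Ce = np.array([1.1e-5, 1.9e-5, 2.2e-5, 3.2e-5, 4.1e-5])
qe = np.array([7.9e-4, 1.2e-3, 1.3e-3, 1.7e-3, 1.95e-3])

# Linearized Langmuir (2.1.7): 1/qe = 1/m + (1/(m*n)) * (1/Ce)
slope, intercept = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
m = 1.0 / intercept      # monolayer capacity, mol/mol
n = intercept / slope    # affinity constant, dm3/mol
print(m, n)              # should land near the reported 4.29e-3 and 2.04e4
```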
2.1.4.3 Thermodynamic parameters: The overall thermodynamic parameters ΔG0, ΔS0 and ΔH0 were calculated using equations (2.1.8) and (2.1.9) [48,49]:

log(qe/Ce) = ΔS0/(2.303·R) − ΔH0/(2.303·R·T)                (2.1.8)

ΔG0 = ΔH0 − T·ΔS0                (2.1.9)

where T is the temperature in K and qe/Ce is called the solubilization affinity. ΔS0 and ΔH0 are obtained from the intercept and slope of a linear plot of log(qe/Ce) versus 1/T, Eq. (2.1.8). Once these two parameters are obtained, ΔG0 is determined from Eq. (2.1.9); the results are presented in Table 2.1.6. The plot of log(qe/Ce) versus 1/T is shown in Figure 2.1.10.
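With (2.1.8) and (2.1.9) in the form above, ΔS0 and ΔH0 come from a straight-line fit of log(qe/Ce) against 1/T, and ΔG0 follows. The sketch below shows the procedure with placeholder affinities, not the measured values:

```python
import numpy as np

R = 8.314e-3  # kJ/(mol*K)

# Placeholder solubilization affinities qe/Ce at the three working temperatures
T = np.array([333.0, 343.0, 353.0])       # K
affinity = np.array([45.0, 90.0, 170.0])  # illustrative values only

# Eq. (2.1.8): log10(qe/Ce) = dS/(2.303*R) - (dH/(2.303*R)) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log10(affinity), 1)
dH = -2.303 * R * slope       # kJ/mol
dS = 2.303 * R * intercept    # kJ/(mol*K)
dG = dH - T * dS              # Eq. (2.1.9), kJ/mol at each temperature
print(dH, dS, dG)
```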

Table 2.1.6: Thermodynamic parameters
Temp = 80 ± 0.1 °C; [BTB]initial = 12.80×10⁻⁵ mol/dm3; [TX-100] = 4.0×10⁻² mol/dm3

pH (±0.05) | −ΔG (kJ/mol) at 353 K | −ΔG (kJ/mol) at 343 K | −ΔG (kJ/mol) at 333 K | ΔS (kJ/mol/K) | ΔH (kJ/mol)
6.0 | 19.57 | 16.93 | 14.28 | 0.27 | 73.92
7.0 | 22.11 | 18.75 | 15.38 | 0.34 | 96.76
8.0 | 21.10 | 18.63 | 16.16 | 0.25 | 66.06

Figure 2.1.10 log (qe/Ce) versus (1/T)

2.1.4.4 Design of experiment: The amount of surfactant required for removing the dye to a desired level can be evaluated from the residual dye present in the dilute phase of the solution after cloud point extraction [45].
qe is the number of moles of dye solubilized per mole of non-ionic surfactant:

qe = A / x                (2.1.10)

The moles of dye solubilized, A, can be obtained from the mass balance equation:

A = C0·V0 − Ce·Vd                (2.1.11)

—— (2.1.12)

where x is the moles of TX-100 used, V0 and Vd are the volumes of the feed solution and of the dilute phase after CPE, C0 and Ce are the concentrations of the BTB dye before and after CPE respectively, and Cs is the concentration of surfactant in the feed. From equations (2.1.10), (2.1.11) and (2.1.12) we can write:

—— (2.1.13)

—— (2.1.14)

Now, introducing the definition of the fractional coacervate volume into the above equation, we get:

—— (2.1.15)

—— (2.1.16)

where a and b are the temperature-dependent parameters of relationship (2.1.3). Substituting the above into the Langmuir isotherm, we get:

—— (2.1.17)

Substituting this back and rearranging, we get:

—— (2.1.18)

From the above equation, the required surfactant concentration (Cs) can be obtained knowing the Langmuir constants m and n, the operating-temperature constants a and b, and Ce, the concentration of dye in the dilute phase after cloud point extraction.
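Although the intermediate equations are not reproduced above, the end-to-end calculation they describe can be sketched directly from the Langmuir parameters and the mass balance, under the simplifying assumption that the dilute-phase volume is taken equal to the feed volume (i.e. neglecting the coacervate volume). With the reported m and n, this reproduces the required TX-100 concentrations in Table 2.1.8 to within rounding; for the first row it gives about 2.64×10⁻² mol/dm3.

```python
def required_surfactant(c0, ce, m=4.29e-3, n=2.04e4):
    """Feed TX-100 concentration (mol/dm3) needed to reach a residual
    dye concentration ce, using the Langmuir parameters m and n.

    Simplifying assumption: dilute-phase volume ~ feed volume, so the
    mass balance is written per dm3 of feed.
    """
    qe = m * n * ce / (1.0 + n * ce)   # mol dye per mol surfactant at ce
    solubilized = c0 - ce              # mol dye to hold in the micelles per dm3
    return solubilized / qe

# First row of Table 2.1.8: c0 = 3.20e-5, ce = 1.11e-5 mol/dm3
print(required_surfactant(3.20e-5, 1.11e-5))   # ~2.64e-2 mol/dm3
```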
By using the above equation experiments which are conducted were compared for surfactant used and required are shown in Table 2.1.8.

Table 2.1.8: Comparison data of required and used TX-100 at 80 °C

10⁵ [BTB]initial (mol/dm3) | 10⁵ [BTB]dilute (mol/dm3) | 10² [TX-100]used (mol/dm3) | 10² [TX-100]required (mol/dm3)
3.20 | 1.11 | 4.00 | 2.64
6.40 | 1.87 | 4.00 | 3.82
8.00 | 2.22 | 4.00 | 4.32
9.60 | 3.19 | 4.00 | 3.79
12.80 | 4.09 | 4.00 | 4.46
16.00 | 6.72 | 4.00 | 3.74
8.00 | 3.60 | 3.00 | 2.42
8.00 | 1.73 | 4.50 | 4.32
8.00 | 1.18 | 5.00 | 5.60

 

Pipe Surge and Water Hammer Experiment

The work undertaken consisted of two separate experiments: pipe surge and water hammer. Both phenomena are caused by a reduction in the flow rate within a pipe, and they represent two alternative dissipations of the kinetic energy of the fluid into another form of energy: pressure in the case of the water hammer, and potential energy in the case of the surge shaft.


The surge shaft is a device used to avoid the pressure surges that accompany the water hammer effect, by allowing the fluid to rise up a shaft near the valve and thus absorbing the pressure exerted by the fluid on the valve and the pipe. The aim of these two experiments was to compare the results with the theory derived from Newton's second law of motion.
Introduction
Pipe Surge
Water pipelines and distribution systems are subjected to surges almost daily, which over time can cause damage to equipment and the pipeline itself. Surges are caused by sudden changes in flow velocity that result from common causes such as rapid valve closure, pump starts and stops, and improper filling practices. Pipelines often see their first surge during filling when the air being expelled from a pipeline rapidly escapes through a manual vent or a throttled valve followed by the water. Being many times denser than air, water follows the air to the outlet at a high velocity, but its velocity is restricted by the outlet thereby causing a surge. It is imperative that the filling flow rate be carefully controlled and the air vented through properly sized automatic air valves. Similarly, line valves must be closed and opened slowly to prevent rapid changes in flow rate. The operation of pumps and sudden stoppage of pumps due to power failures probably have the most frequent impact on the system and the greatest potential to cause significant surges.
If the pumping system is not controlled or protected, contamination and damage to equipment and the pipeline itself can be serious. The effects of surges can be as minor as loosening of pipe joints to as severe as damage to pumps, valves, and concrete structures. Damaged pipe joints and vacuum conditions can cause contamination to the system from ground water and backflow situations. Uncontrolled surges can be catastrophic as well. Line breaks can cause flooding and line shifting can cause damage to supports and even concrete piers and vaults. Losses can be in the millions of dollars so it is essential that surges be understood and controlled with the proper equipment.
Water Hammer
Water hammer is the formation of pressure waves as the result of a sudden change in liquid velocity in a piping system. Water hammer usually occurs when a fluid flow starts or stops quickly, or is forced to make a rapid change of direction. Quick closing of valves and stoppage of pumps can create water hammer. Valve closure in 1.5 s or less, depending on the valve size and system conditions, causes an abrupt stoppage of the flow. Since liquid is practically incompressible, any energy applied to it is transmitted almost instantly. The pressure waves created by rapid valve closure can reach five times the system's working pressure. If not accounted for, this pressure pulse travels at the speed of sound in the liquid, which can exceed 1200 m/s, and can burst the pipeline and pump casing as well as fracture the pipe fittings. For this reason, it is essential to understand under what conditions these pressure waves are produced, and to reduce the pressure rise as much as possible in a piping system.
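For instantaneous closure, the size of this pressure pulse is commonly estimated with the Joukowsky relation Δp = ρcΔv (a standard result, not derived in this report); the flow-velocity change below is an assumed illustrative value, while the wave speed is the 960 m/s quoted later for this water/pipe system.

```python
rho = 1000.0   # kg/m^3, density of water
c = 960.0      # m/s, pressure wave speed in the water/pipe system (quoted later)
dv = 1.5       # m/s, assumed change in flow velocity at valve closure

dp = rho * c * dv          # Joukowsky pressure rise, Pa
print(dp / 1e5, "bar")     # ~14.4 bar
```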
Risk assessment
In experimental work there are always some risks to everyone in the lab, hence the health and safety briefing before commencing the labs. This makes people aware of the potential risks and of the appropriate steps to reduce the likelihood of accidents. It is therefore crucial to follow the advice of the supervising staff at all times and to use the protective equipment provided.
There are different hazards in the lab, and identifying them is important:
Other people are doing other experiments in the lab at the same time; be aware of the worst that could happen with them.
Know where the closest fire exit is, and the shortest route out of the building.
Make sure there are no wires on the floor, in case people trip over them.
Make sure that all the equipment to be used is safe.
Connect the equipment correctly to prevent short circuits.
Make sure that a load is not too heavy to lift.
When loading the equipment, be careful that it does not fall onto anyone's feet.
Be aware of anything that could get caught in the equipment.
When leaving the lab, make sure everything is placed back in its original place and all equipment is switched off.
There are ways to prevent accidents:
Make sure you know the risks of the experiment.
Ask others to help with the set-up if you are not sure what the equipment does.
Do not leave anything unattended.
Do not lift anything heavy alone or without mechanical assistance.
Wear PPE.
Methodology
Pipe Surge
The equipment is set up as shown in Figure 4-1, where the head loss can be measured. The static head (hs) is recorded from the level in the surge shaft when there is no flow; this is the datum level throughout the experiment. The gate valve and supply control valve are then adjusted so that there is a steady flow of water into the sump tank, and the new reading in the surge shaft is the velocity head (hv). The gate valve is then closed; once the oscillations have stopped, the lever-operated gate valve is opened and the water level should drop back to the same velocity head value.
The values of hs and hv are used to calculate the head loss due to friction, hf = hs − hv. The flow rate is found by closing the dump tank and measuring the quantity of water collected in 60 seconds. More readings should be taken for better accuracy. The flow rate should not be changed for the rest of the experiment.
The maximum and minimum surge heights of the oscillations, and the times between them, are measured after the gate valve is quickly closed. The same procedure is then repeated, but measuring the time taken between the surges passing the datum point.
Water Hammer:
Following Appendix 8-1, set the equipment up with the water hammer flow control valve fully open and the surge shaft valve fully closed; the volumetric flow rate is then measured (using the same procedure as for pipe surge) and the flow velocity calculated. The fast-acting valve is then released to stop the flow of water almost instantaneously, causing a pressure pulse to travel up and down the pipe. This counts as an instantaneous closure, meaning closure in less than 2L/c, i.e. the valve is closed before the reflected wave reaches the valve again, which gives the same pressure rise as a truly instantaneous closure. These pulses are captured on the oscilloscope, where the average amplitude, time base and duration of the pulse are recorded. The time lag between the two pressure transducers is also recorded.
For the second half of this experiment, the oscilloscope time base setting is increased to 25 ms/div. Once it is set up, the same procedure is repeated as before: the fast-acting valve is released, and the average amplitude and duration of the pulse are recorded for the traces on the oscilloscope.
 
 
Discussion
 
When comparing the values obtained experimentally with the values predicted from the equations, tabulated in Table 6-1, it can be seen that the predicted flow rate and period of oscillation are both quite similar to their experimental values. The slight difference in flow rates is partly due to the fact that the equation needed to find the flow rate contains two unknown values, Q and hf. The equation used was:
The experimental value of the frictional head loss was used so that the predicted flow rate could be calculated. The experimental value of Q was then used to calculate the theoretical value of the frictional head loss by substituting it into the equation.
However, this value would have accumulated more errors, and would therefore lie further from the experimental value.
From Figure 6-1 a time period of about 8 seconds can be observed, whereas the predicted value is 7.5705 seconds. The discrepancy between the two numbers is most likely the result of human error: when timing the points of maximum and minimum surge, and when the surge crosses the datum, there is a delay between one person calling to stop the timer and the other person actually pressing the button. This time delay could easily explain the half-second difference between the two values.
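For reference, the predicted period follows from Newton's second law applied to the water column in the pipe; for a frictionless system the standard result is T = 2π√(L·As/(g·Ap)). The pipe and shaft dimensions below are assumed for illustration and are not the rig's actual dimensions.

```python
import math

g = 9.81          # m/s^2
L = 3.0           # m, pipe length to the surge shaft (assumed)
d_pipe = 0.025    # m, pipe bore (assumed)
d_shaft = 0.044   # m, surge shaft bore (assumed)

A_pipe = math.pi * d_pipe ** 2 / 4
A_shaft = math.pi * d_shaft ** 2 / 4

# Frictionless surge-shaft oscillation period
T = 2 * math.pi * math.sqrt(L * A_shaft / (g * A_pipe))
print(T)   # seconds
```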
When comparing the experimental and predicted values of the maximum surge height, the first predicted value is hugely different from the actual value achieved. The reason is that the equation gives the maximum surge from the static head assuming there are no losses due to friction; the equation therefore needs to be adjusted to take the effects of friction into consideration.
This acts as a correction factor. It is needed because of the initial head loss due to friction, which is the difference between the static head and the velocity head; the velocity head is much lower than the static head, so the initial maximum amplitude must be reduced accordingly.
Throughout, the effects of friction are important because we are dealing with a small-bore system, whereas real surge shafts have diameters measured in metres. The effects of friction can be assumed negligible only if the initial head at the valve can be assumed the same as at the reservoir. In this flow, however, the frictional losses are relatively large, as can be seen from the large difference between the static head and the velocity head. This is partly due to the small diameter of the pipe: friction acts at the walls, and if the pipe diameter is small, the region in which the fluid is unaffected by friction is smaller. To take the effects of friction into account, the equation for the maximum amplitude must start from the velocity head, so that the head loss due to friction is also included.
Water Hammer
 
Observing the single pressure wave in Figure 5-1, it varies slightly from the symmetrical smooth square wave shown in the Fluid Mechanics Lab Manual. The pulse shown on the oscilloscope was an unsymmetrical, rough rectangle. This irregularity is the result of not all the kinetic energy being transferred into potential energy (the pressure pulse); the remaining energy is lost in the form of heat, sound and strain. The strain loss arises where the compression of the water tries to expand the pipe, i.e. at constant volume the cross-sectional area changes. This assumption follows from the irregular trace: when deriving the equations, it was assumed that the kinetic energy lost equals the energy gained as the pressure pulse, which does not take into consideration energy losses such as heat, noise and deformation.
In another part of the experiment, the pressure transducer was set up halfway along the pipe, i.e. 1.5 metres from the valve; this meant there was a time lag between the first wave and the second wave, giving the opportunity to measure the speed of sound in water. First the time lag was read off in divisions; with the time axis of the oscilloscope set to 2.5 milliseconds per division, the time lag is 1.5 milliseconds. The time lag should be roughly a quarter of the time period, i.e. 1.5625 milliseconds, which is very close to the experimentally obtained value, suggesting only a slight error, not so significant that the value cannot be used to work out c. As a result, the measured time lag can be used in the equation.
An experimental value of 960 m/s was obtained for the speed of sound in the water/pipe system. The time taken for a single pressure pulse to travel a complete circuit of the pipe, in this case 6 metres, was calculated as 4.523 milliseconds, compared with 6.25 milliseconds measured from the sketch. The difference between these two values could be due to the number of divisions not being read accurately enough, and also to the choice of where the period was measured from; both could have brought the measured result closer to the calculated one. However, the discrepancy might also be due to the pulse travelling further than assumed. The calculations assume the pulse travels only the length of the pipe, but it might travel some distance into the header tank instead of being reflected back at the entrance. This would account for the measured time period being longer, since the pulse could be travelling further than the assumed 6 metres.
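As a quick cross-check of these timings, using only the figures quoted above (note that 6 metres at 960 m/s gives exactly 6.25 ms, so the 4.523 ms prediction evidently derives from a higher, presumably theoretical, wave speed of roughly 1330 m/s):

c = 960.0           # experimental speed of sound in the water/pipe system, m/s
lag_distance = 1.5  # transducer distance from the valve, m
circuit = 6.0       # complete circuit of the pipe, m

print(f"expected time lag   = {1e3 * lag_distance / c:.4f} ms")  # 1.5625 ms
print(f"circuit travel time = {1e3 * circuit / c:.2f} ms")       # 6.25 ms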
Looking at Table 6-2 for the water hammer experiment, the predicted and experimental values for the speed of sound in water, the peak pressure and the duration of the first pulse can be compared. There is little difference between the experimental and predicted values for the speed of sound in the water/pipe system, indicating that the experiment went well and that the calculations, and therefore the equations used, are correct. However, there is a significant difference in both the peak pressure and the duration of the pulse. It is quite likely that the duration of the pulse was measured inaccurately, since only a rough value was determined for the number of divisions the period occupied; the same applies to the amplitude of the pulse. Furthermore, when calculating the experimental velocity of sound in water, the time lag was used as the time in the equation, and the time lag was itself measured by reading the number of divisions it occupied, so it too was open to human error.
Several reflected pressure waves can be observed in Figure 5-1. When the pulse is reflected as a low-pressure wave, it drops below the original starting point. The pressure actually reaches the vapour pressure of water, so the water boils and evaporates, creating bubbles; this creates a vacuum which slows the pulse down. The energy released by the boiling water soon dissipates, and once there are no longer enough bubbles to slow the pulse, a second pulse starts and the whole process repeats itself. Because the pulse is slowed in the pressure trough by the vacuum and the bubbles, the pulses are not symmetrical.
Studying Figure 5-1 more closely, a small spike can be observed on the second pulse wave, halfway between the first and second pulses. This could be due to a number of reasons, but the most likely is that it is the pulse reflected back from the rear of the header tank. Ideally the experiment would be set up so that the header tank represents a large enough change in volume and pressure compared with the pipe that it acts as a discontinuity and reflects the pulse back immediately. In this case, however, some of the pulse could be being reflected from the back wall of the header tank. This would also explain the difference between some of the experimental and predicted results for the speed of sound in water, as the assumed distance travelled by the pulse may be slightly shorter than the distance travelled in reality, giving different values when calculating C. The amplitudes of the pulse waves are not symmetrical partly because of the vaporisation of the water and partly because of friction: as the flow slows, the frictional head loss also reduces, so the head at the valve rises towards the equilibrium position of the static head, which is why the amplitudes can be observed converging towards the static equilibrium.
Conclusions
In conclusion, the theoretical and experimental results were close to each other. The slight discrepancies are most likely due to human error, for example not recording the times accurately, and to the effects of friction, which need to be taken into consideration. If the experiment were repeated with these sources of error reduced, the results would be more accurate and therefore more reliable.
References
Fluid Mechanics Laboratory Manual.
Level 1 and 2 notes on unsteady flow.
Douglas, J.F., Gasiorek, J.M. and Swaffield, J.A., Fluid Mechanics, 4th ed., Prentice Hall, 2001 (ISBN 0582414768).
Massey, B., Mechanics of Fluids, 8th ed., Taylor & Francis, 2006 (ISBN 0-415-36206).
http://www.valmatic.com/pdfs/SurgeControlPumpingSystems.pdf
http://ksbpak.com/pdfs/waterhammer.pdf
 

Measurement of Anti-proliferative Activity Experiment

Human cancer cell lines A549 (lung carcinoma), MCF-7 (breast adenocarcinoma), DU 145 (prostate carcinoma), DLD-1 (colorectal adenocarcinoma) and FaDu (squamous cell carcinoma of the pharynx) were obtained from the American Type Culture Collection (ATCC), USA. The cells were cultured in DMEM supplemented with 10% FBS and antibiotic combinations in a humidified atmosphere of 5% CO2 at 37 °C.
A colorimetric sulforhodamine B (SRB) assay was used to measure anti-proliferative activity as described previously (Adaramoye et al., 2011; Fricker and Buckley, 1995; Keepers et al., 1991; Skehan et al., 1990). It is the second most widely used technique for such testing and is often the preferred one. The assay depends on the uptake of the negatively charged pink aminoxanthene dye sulforhodamine B (SRB) by basic amino acids in the cells: the greater the number of cells and the more dye taken up, the more intense the colour and the higher the absorbance of the dye released after fixing, when the cells are lysed (Skehan et al., 1990). The SRB assay is sensitive, simple, reproducible and more rapid than formazan-based assays; it gives better linearity and a good signal-to-noise ratio, and it has a stable end-point that does not require a time-sensitive measurement, as the MTT or XTT assays do (Fricker and Buckley, 1995; Keepers et al., 1991).

Ten thousand cells were seeded into each well of a 96-well plate, grown overnight and exposed to the test samples at a concentration of 100 µg/mL for 48 h. The cells were then fixed with ice-cold trichloroacetic acid (50% w/v, 50 µL/well), stained with SRB (0.4% w/v in 1% acetic acid, 50 µL/well), washed and air-dried. Bound dye was dissolved in 150 µL of 10 mM Tris base, and absorbance was read at 510 nm (Epoch Microplate Reader, Biotek, USA).
Anti-proliferative activity of test samples was calculated as:
% inhibition in cell growth = [1 − (absorbance of compound-treated cells / absorbance of untreated cells)] × 100.
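For illustration, a minimal sketch of this calculation with hypothetical absorbance readings:

# Minimal sketch of the anti-proliferative calculation above.
# The absorbance values (510 nm) are hypothetical examples.
def percent_inhibition(abs_treated: float, abs_untreated: float) -> float:
    """% inhibition = [1 - (A_treated / A_untreated)] * 100."""
    return (1.0 - abs_treated / abs_untreated) * 100.0

# Example: treated wells read 0.42, untreated controls read 1.05.
print(f"{percent_inhibition(0.42, 1.05):.1f} % inhibition")  # -> 60.0 %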

Principal component analysis

PCA was carried out on the contents of the eighteen bioactive compounds in the fruits and leaves of the five Cassia species, using STATISTICA 7.0 software. When the content of an investigated compound was below the quantitation limit, or the compound was not detected in a sample, its value was set to zero.
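The original analysis was performed in STATISTICA 7.0; purely as an illustration of the same steps (zero-filling below-LOQ values, then PCA), here is a sketch in Python with scikit-learn and hypothetical data:

# Illustrative only: rows = samples (fruits/leaves of five Cassia species),
# columns = contents of the eighteen compounds. The data are random
# placeholders, with values below a notional LOQ set to zero as above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
contents = rng.random((10, 18))      # placeholder 10 samples x 18 compounds
contents[contents < 0.05] = 0.0      # below-LOQ / not-detected treated as zero

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(contents))
print(scores[:3])                    # first three samples in PC1/PC2 space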

Results and discussion

Optimization of chromatographic and MS/MS conditions

Complete separation of closely eluting analytes is not strictly required for MS/MS detection. In this study, however, chrysophanic acid and emodin share the same product ion, while catechin and epicatechin share the same precursor and product ions. The mobile phase was therefore optimized by testing different solvent compositions and adjusting the gradient elution so that all the compounds were separated. Acetonitrile possesses stronger elution ability than methanol, which shortens the elution time, and was therefore selected for this method. On the basis of the polarity of the anthraquinones, phenolics, flavonoids and terpenoids in the extracts of the Cassia samples, an Acquity UPLC BEH C18 column (2.1 mm × 50 mm, 1.7 µm; Waters, Milford, MA) was selected for the separation; compared with the other columns tested, it was better suited to the acidic mobile phase and gave a smoother baseline. Compared with acetic acid, formic acid was found to be more effective for ionization of the compounds detected in negative ESI mode. Different concentrations of formic acid (0.05%, 0.1% and 0.2%) were therefore investigated, and 0.1% was finally selected for the analysis. The optimized gradient elution, with 0.1% formic acid in water and acetonitrile at a flow rate of 0.4 mL/min and a column temperature of 30 °C, separated the 18 compounds in a chromatographic run time of less than 8 min.
All the compound-dependent MS parameters (precursor ion, product ion, declustering potential (DP) and collision energy (CE)) were carefully optimized for each targeted compound in negative ESI mode by flow injection analysis (FIA). The chemical structures of the 18 components were characterized from their retention behaviour and MS information, such as the quasimolecular ions [M-H]− and the fragment ions [M-H-COO]−, [M-H-COO-CH3]− and [M-CO-H2O]−, by comparison with the related standards and the literature (Pandey et al., 2014; Wei et al., 2013; Xia et al., 2011; Yu et al., 2009). The MRM parameters DP, EP, CE and CXP were optimized to achieve the most abundant, specific and stable MRM transition for each compound, as shown in Table 1. MRM extracted-ion chromatograms of the analytes are shown in Fig. 1.

Analytical Method Validation

The proposed UPLC-MRM method for quantitative analysis was validated according to the guidelines of the International Conference on Harmonisation (ICH Q2(R1)) in terms of linearity, LODs and LOQs, precision, solution stability and recovery.

Linearity, LOD and LOQ

The internal standard method was employed to calculate the contents of the eighteen analytes in the Cassia species. The stock solution was diluted with methanol to different working concentrations for the construction of calibration curves. Linearity was assessed by plotting the analyte-to-IS peak-area ratios against the nominal concentrations, and the calibration curves were constructed by least-squares linear regression with a 1/x² weighting factor. The calibration model for all curves was y = ax + b, where y is the peak-area ratio (analyte/IS), x the concentration of the analyte, a the slope of the curve and b the intercept. The LODs and LOQs were determined using signal-to-noise (S/N) ratios of 3 and 10, respectively, as the criteria. The results are listed in Table 1. All the calibration curves showed good linearity, with correlation coefficients (r²) from 0.9990 to 0.9999 within the test ranges. The LODs varied from 0.02 to 1.34 ng/mL and the LOQs from 0.06 to 3.88 ng/mL, much lower than those obtained with previous HPLC methods (Chewchinda et al., 2012; Chewchinda et al., 2013; Chewchinda et al., 2014; Ni et al., 2009; Prakash et al., 2007).
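A minimal sketch of this weighted regression, with hypothetical concentration/response pairs; the 1/x² weights and the y = ax + b model follow the description above:

import numpy as np

# Hypothetical calibration data: nominal concentrations (ng/mL) and
# analyte-to-IS peak-area ratios. The values are illustrative only.
x = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
y = np.array([0.021, 0.098, 0.205, 1.010, 2.030, 10.10])

w = 1.0 / x ** 2                                 # 1/x^2 weighting factor

# Weighted least squares for y = a*x + b.
W = np.sum(w)
xw, yw = np.sum(w * x) / W, np.sum(w * y) / W    # weighted means
a = np.sum(w * (x - xw) * (y - yw)) / np.sum(w * (x - xw) ** 2)
b = yw - a * xw

# Weighted coefficient of determination (r^2).
y_hat = a * x + b
r2 = 1.0 - np.sum(w * (y - y_hat) ** 2) / np.sum(w * (y - yw) ** 2)
print(f"slope a = {a:.5f}, intercept b = {b:.5f}, r^2 = {r2:.4f}")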

Precision, Stability and Recovery

To determine the precision of the developed method, intra-day and inter-day variations were evaluated by determining the eighteen analytes in six replicates on a single day and by duplicating the experiments over three successive days. The overall intra-day and inter-day precisions were not more than 3.37%. The stability of sample solutions stored at room temperature was evaluated by replicate injections at 0, 2, 4, 8, 12 and 24 h; the RSDs for the stability of the eighteen analytes were ≤ 3.19%. A recovery test was applied to evaluate the accuracy of the method: analytical standards at three concentration levels (high, middle and low) were added to the samples, with three replicates at each level. The percentage recovery was calculated as (detected amount − original amount) × 100% / added amount. The developed method showed good accuracy, with overall recoveries in the range 97.75-105.09% (RSD ≤ 2.42%) for all analytes (Table 1).
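A minimal sketch of this spike-recovery calculation, with hypothetical amounts:

# Spike recovery as defined above. All amounts are hypothetical and in
# the same units (e.g. ng).
def percent_recovery(detected: float, original: float, added: float) -> float:
    """Recovery (%) = (detected - original) * 100 / added."""
    return (detected - original) * 100.0 / added

# Example: a sample originally containing 40 ng, spiked with 50 ng,
# in which 89.5 ng is detected after extraction and analysis.
print(f"{percent_recovery(89.5, 40.0, 50.0):.2f} %")   # -> 99.00 %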