Testing of Aggregates Analysis

Numerous tests have been developed to measure the “toughness and abrasion resistance” and the “durability and soundness” of aggregates. This report analyses aggregate testing using three main tests that characterize the degradation of aggregates, so that the best-performing material can be selected for construction, asphalt, concrete, or other applications. Aggregates must be abrasion resistant and weather durable to provide a good base for road pavements. A detailed description of each test is provided together with its results, which are then examined to determine which of the three tests assesses the durability and abrasion resistance of the rock most reliably. Based on the laboratory results and the literature reviewed, Los Angeles Abrasion results are used more than the others, although most DOTs and construction companies run all three tests before using a material. The Soundness test has poor repeatability, so it is often not considered a primary test.
The quality of aggregates in construction materials such as asphalt concrete is determined by various tests, of which the Los Angeles, Micro-Deval, and Soundness tests are the most widely used in the construction industry. The Los Angeles Abrasion and Micro-Deval tests spin the aggregates in a closed vessel in a medium of air or water. The vessel is loaded with contact charges (steel spheres) and rotated for a specified time.
The particle degradation produced by mechanical tests can be classified into two classes: fragmentation and wearing. A sample that ends up with a wide range of grain sizes (e.g. 1250 gm of …) indicates fragmentation and has a well-graded distribution curve, whereas a sample with a narrow range of grain sizes (e.g. 5000 gm of …) indicates wearing, with a poorly graded distribution curve.
The Soundness test assesses the durability of aggregates using a sodium sulphate or magnesium sulphate solution. Samples of different grain sizes are washed, dried, immersed in the salt solution for 16 hours, and then oven dried. This cycle is repeated for seven days, after which the sample is weighed to determine the loss of material. This test generally receives a poor rating for its inconsistent repeatability and correlations.
Standard Testing methods

Los Angeles Abrasion Test AASHTO T96 ( ASTM C131)
Micro Deval Test AASHTO T327 (ASTM D6928)
Sodium and Magnesium Sulfate Soundness AASHTO T 104 (ASTM C88)

As per ASTM (American Society for Testing and Materials), the following pass-fail criteria were used:

LA abrasion: Passed if loss ≤ 40%
Micro Deval: Passed if loss ≤ 18%
Sodium Sulphate Soundness: Passed if ≤12%

If the aggregates meet the above criteria, they are considered durable.
The Los Angeles Abrasion test measures the degradation of aggregates by subjecting them to impact, abrasion, grinding, and constant wear inside a rotating steel drum. The drum spins for a specified time with a specified number of steel balls of specified weight to abrade the aggregates; the number of steel balls and the amount of aggregate placed in the drum depend on the grading of the test sample. A shelf inside the drum carries the aggregates and the steel balls upward, creating a grinding effect, and then drops them to the other side, creating a crushing effect. This cycle repeats, and after a certain number of revolutions the sample is sieved to determine the amount retained after degradation and the percentage loss.

Los Angeles machine with a wall thickness of at least 12 mm, an inside diameter of 711±5 mm, and a length of 508±5 mm. The rotating drum should be closed at both ends and set to a rotating speed of 30 to 33 rpm. (ASTM C131)
A 1.7 mm (No. 12) sieve.
An accurate scale with an error of no more than 0.1% of the test load.
The charges (steel balls). The number of steel balls used depends on the gradation of the sample to be tested. Each ball should have a diameter between 46.038 mm and 47.625 mm and a mass between 400 g and 440 g. A constant weight check should always be performed on the charges, because this test is very aggressive and can cause the charges themselves to lose weight.

Table 1: Mass of Steel Balls for LA Abrasion Test (per ASTM C131)

Grading    Number of Spheres    Mass of the charges, g
A          12                   5000 ± 25
B          11                   4584 ± 25
C          8                    3330 ± 20
D          6                    2500 ± 15

Table 2: Grading of the Test Sample for LA Abrasion Test (mass of aggregates per grading, per ASTM C131)

Sieve Sizes (Square Opening)                       Mass of the aggregates, g
Through Screen       Retained on         Grading A    Grading B    Grading C    Grading D
37.5 mm (1 1/2 in.)  25.0 mm (1 in.)     1250 ± 25    –            –            –
25.0 mm (1 in.)      19.0 mm (3/4 in.)   1250 ± 25    –            –            –
19.0 mm (3/4 in.)    12.5 mm (1/2 in.)   1250 ± 10    2500 ± 10    –            –
12.5 mm (1/2 in.)    9.5 mm (3/8 in.)    1250 ± 10    2500 ± 10    –            –
9.5 mm (3/8 in.)     6.3 mm (1/4 in.)    –            –            2500 ± 10    –
6.3 mm (1/4 in.)     4.75 mm (No. 4)     –            –            2500 ± 10    –
4.75 mm (No. 4)      2.36 mm (No. 8)     –            –            –            5000 ± 10
Total, g                                 5000 ± 10    5000 ± 10    5000 ± 10    5000 ± 10

Select the appropriate grading according to the amount of aggregate available for the test. It is recommended to go from a higher to a lower grading to obtain accurate results.
Wash and oven dry the sample at 110±5°C (230°F) to constant mass, then separate it into individual sizes at their respective weights.
Place the aggregates and the steel charges in the rotating drum. Close the Los Angeles Abrasion machine tightly and let it run for 500 revolutions at 30 to 33 rpm.
After the drum stops, take all the material out and remove the steel balls. Sieve the crushed aggregate on a 1.7 mm (No. 12) sieve.
Weigh the sample retained on the sieve and calculate the percentage loss.
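The percentage-loss calculation in the last step can be written as a short sketch; the function name and example masses below are illustrative only, not values from ASTM C131:

```python
def percent_loss(initial_mass_g: float, retained_mass_g: float) -> float:
    """Percent loss = (original mass - mass retained on the 1.7 mm sieve)
    divided by the original mass, times 100."""
    return (initial_mass_g - retained_mass_g) / initial_mass_g * 100.0

# Illustrative example: a 5000 g charge with 3950 g retained on the No. 12 sieve
loss = percent_loss(5000.0, 3950.0)
print(round(loss, 1))  # 21.0
```

The same formula is used for the Micro-Deval calculation sheet later in this report, with the initial mass taken as the 1500 g representative sample.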

Micro-Deval
The Micro-Deval test measures the resistance of aggregates to abrasion and their durability under the grinding action of steel balls (ASTM D7428). The medium used is water at room temperature. The sample and charges (steel balls) are placed in the Micro-Deval tank, which is then filled with water, and the apparatus is rotated so that the aggregates undergo grinding and abrasion. This test is aimed mainly at aggregates that degrade more in the presence of water than in air, and it also gives a measure of how soft or “shaley” the sample is. Materials with a high percentage loss degrade more during mixing or handling in industry. (ASTM D7428)

Micro-Deval abrasion tank with a volume of 5.03 L, an external diameter of 202 mm, and an internal height between 170 mm and 177 mm. The stainless-steel tank comes with a rubber seal to make it water-tight. The inner and outer surfaces of the tank should be smooth and ridge-free. (ASTM D6928)
Micro-Deval abrasion machine: a rolling machine with an adjustable speed that rolls the tank at 100±5 rpm.
Steel charges of diameter 9.5±0.5 mm are required. The total mass of steel balls needed is 5000±5 g.
Sieves of 5 mm and 1.25 mm are also required.
An accurate scale with no more than 0.1% error of the test load.

Table 3: Mass of Aggregates for Micro-Deval Test

Passing    Retained on    Mass
20 mm      16 mm          375 g
16 mm      14 mm          375 g
14 mm      10 mm          750 g
Total                     1500 g

Take a washed and oven-dried sample so that it has lost any dust. Prepare a representative sample of 1500±5 g and put it in the Micro-Deval tank.
Add 5000±5 g of steel charges and 2.0±0.05 L of tap water to the Micro-Deval tank. Let this sit for 1 hour.
After the sample has soaked, tighten the tank and place it on the Micro-Deval rolling machine to roll for 2 hours ± 1 minute.
After the machine stops rolling, pour the sample onto a stack of 5 mm and 1.25 mm sieves. Wash any material remaining in the tank onto the sieves.
Oven dry the sample at 110±5°C and then weigh it. Calculate the percentage loss using the calculation sheet.

The Soundness test is a crucial test in the paving industry, especially for major highways, bridges, and dams. It measures the amount of degradation caused by weathering and freeze-thaw cycles; aggregates that pass this test are durable enough for use and do not cause premature distress in pavements (http://www.pavementinteractive.org/article/durability-and-soundness/). The aggregates are kept in a sodium sulphate or magnesium sulphate bath. The solution is saturated and causes salt crystals to form on the aggregates. The test is usually carried out over seven days of successive wet-dry cycles. When the sample is submerged in the salt bath, salt crystals form in the minute pores of the aggregate and create internal forces that eventually crack it, giving a replicated demonstration of how the material will behave in its natural environment. Because this test has very poor repeatability, it is generally not used as the primary measure of aggregate degradation.

Sieves of different sizes: 5⁄16 in., 3⁄8 in., No. 50, 1⁄2 in., 5⁄8 in., No. 30, 3⁄4 in., 1 in., No. 16, No. 8, No. 4.
Metal baskets made of wire mesh or stainless steel that allow the aggregates to contact the solution freely and permit free drainage of lost material.
A temperature regulator to keep the sulphate bath constant at the specified temperature.
Balances with an accuracy of 0.1% are required for this test.
Hydrometers are also needed to measure the specific gravity to within ±0.001.


Prepare a sodium sulphate solution with a specific gravity between 1.154 and 1.171.


Sieve Size (retained on)               Mass of the Sample
2 in. (50 mm) and 1.5 in. (37.5 mm)    5000 g combined
1.0 in. and 0.75 in.                   1500 g combined
0.5 in. and 0.375 in.                  1000 g combined

Prepare the sample as per the table displayed above. The sample should be washed and dried at 110±5°C.
Mix the material retained on the 2 inch and 1.5 inch sieves and place the 5000 g sample in a container.
Mix the material retained on the 1 inch and 0.75 inch sieves and place the 1500 g sample in a separate container. Mark the container by cutting a groove or symbol into it so it does not get mixed up with the other containers; this also makes the containers easier to identify when changing cycles.
Mix the material retained on the 0.5 inch and 0.375 inch sieves and put the 1000 g sample together in a container.
Once the test samples are ready, immerse them in the prepared solution for 16 to 18 hours and then let them drain for 15 minutes. Oven dry the samples at 110±5°C for 4 hours and let them cool to 20°C to 25°C. Immerse them in the solution again and repeat this cycle 5 times.
After the 5 cycles are complete, wash the aggregates thoroughly to remove all salt from the surface, then oven dry them at 110±5°C.

Table 4: Sieve Sizes to be used to Measure Loss

Aggregate Size             Sieve Used
>1.5 inch                  1.25 inch
1.5 to 0.75 inch           5/8 inch
0.75 inch to 0.375 inch    5/16 inch
0.375 inch to No. 4

Use the table above to select the respective sieve for the aggregates used in the test. Utmost care must be taken to sieve the sample from each container separately.
Weigh the sample retained on each sieve and record it. The difference between the mass of the aggregates before and after the experiment gives the amount lost to disintegration of the sample.
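The loss computation above can be sketched as follows. ASTM C88 reports an overall loss obtained by weighting the loss of each size fraction by that fraction's share of the original grading; the function name and the example numbers below are illustrative only:

```python
def weighted_soundness_loss(fractions):
    """fractions: list of (percent_of_original_grading, percent_loss_of_fraction)
    pairs. The overall loss is each fraction's loss weighted by that fraction's
    share of the original grading (the ASTM C88 weighted-average approach)."""
    return sum(share * loss for share, loss in fractions) / 100.0

# Illustrative numbers only: three size fractions making up 100% of the grading
fractions = [(40.0, 6.0), (35.0, 10.0), (25.0, 4.0)]
print(weighted_soundness_loss(fractions))  # 6.9
```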

To compare the results of these three tests, a study of 20 laboratory results was examined. The tests were performed for the Montana Department of Transportation (MDT), either in the Montana State University soil laboratory or in the MDT Helena materials laboratory, on samples obtained by MDT personnel from various pits and quarries across Montana. (Western Transportation Institute) To obtain a good relation between the tests, the Micro-Deval test was repeated 5 times and the L.A. Abrasion test at least 3 times on the same samples, which also provided a good study of the repeatability of the tests.
No repeat testing was done for the Soundness test, since it has very poor repeatability.
To analyse repeatability, the repeated Micro-Deval and Los Angeles Abrasion tests were run on the same sample, and the coefficient of variation (COV) was calculated to examine the variation in results. No COV was calculated for the Sodium Sulphate Soundness test because only one result per sample was provided by MDT.
The coefficient of variation is a standardized measure calculated by dividing the standard deviation of a set of results by their mean and multiplying by one hundred to get a percentage. Comparing this value lets us judge the repeatability of a test: the lower the COV, the less variable the test and the better its repeatability. The COV for the L.A. Abrasion test came out to 6.5%, with a standard deviation of 1.5 percentage points of loss; the COV for the Micro-Deval test came out to 6.6%, with a standard deviation of 0.7 percentage points of loss. Since both coefficients of variation are less than 10%, both tests are considered to have good repeatability. The fact that there is no significant difference between the two COVs (6.6% and 6.5% respectively) is further evidence that the tests are similarly repeatable.
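The COV calculation described above can be sketched in a few lines; the repeat values below are hypothetical, not MDT data:

```python
import statistics

def coefficient_of_variation(results):
    """COV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(results) / statistics.mean(results) * 100.0

# Hypothetical repeat percent-loss results for one aggregate sample
la_repeats = [22.1, 23.8, 24.9]
print(round(coefficient_of_variation(la_repeats), 1))  # 6.0
```

Lower values mean the repeated runs agree more closely; under 10% is treated as good repeatability in this report.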
As the comparative bar graph below shows, the majority of the coefficients of variation fall between 5% and 15%. The COV for lab number 861553 jumped to 26.9% because the result itself was very small, an average loss of only 2.1%; a small absolute change in a small result produces a large COV.

Figure 1: Graphical Representation of Coefficient of Variation for L.A
Abrasion and Micro-Deval Tests.
As per the ASTM specifications, aggregates are classified as “durable” if the loss percentage is less than the cut-off percentage and “non-durable” if it is more. The cut-off percentages used for the L.A. Abrasion, Micro-Deval, and Sodium Sulphate Soundness tests are as follows:

LA abrasion: Passed if loss ≤ 40%
Micro Deval: Passed if loss ≤ 18%
Sodium Sulphate Soundness: Passed if ≤12%

To allow a direct comparison between the Micro-Deval, L.A. Abrasion, and Sodium Sulphate Soundness tests, a normalized value is calculated for each test: the average loss percentage divided by the cut-off for that test. (MDT paper)

The pass/fail boundary for the normalized value is 1.0. If the calculated normalized value is greater than 1.0, the test did not pass and the aggregates tested are not durable; if it is less than 1.0, the test passed and the aggregates are durable.
To draw a direct comparison between two tests, a two-dimensional scatter plot is drawn with four quadrants.

The top-right (North-East) quadrant depicts the area where both tests failed and the aggregates are not durable.
The top-left (North-West) quadrant depicts the area where the test plotted on the X-axis passed but the one on the Y-axis failed.
The bottom-right (South-East) quadrant depicts the area where the test plotted on the Y-axis passed but the one on the X-axis failed.
The bottom-left (South-West) quadrant depicts the region where both tests passed and the aggregates are durable.

Data points in the top-right (NE) and bottom-left (SW) quadrants indicate that the tests are consistent: the aggregates were either durable (pass/pass) or not durable (fail/fail) according to both tests. Data points in the top-left (NW) and bottom-right (SE) quadrants indicate inconsistency and a lack of coherence, since one test would classify the aggregates as durable while the other would classify the same material as non-durable.
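The normalization and quadrant classification described above can be sketched as follows; the function names are illustrative, but the cut-offs are the ones quoted in this report:

```python
# Cut-off percent loss for each test, as quoted in this report
CUTOFFS = {"la_abrasion": 40.0, "micro_deval": 18.0, "soundness": 12.0}

def normalized(loss_pct, test):
    """Normalized value = average percent loss / cut-off; <= 1.0 means durable."""
    return loss_pct / CUTOFFS[test]

def quadrant(x_norm, y_norm):
    """Classify a sample on the comparison plot (1.0 is the pass/fail boundary)."""
    x_pass, y_pass = x_norm <= 1.0, y_norm <= 1.0
    if x_pass and y_pass:
        return "SW: both pass (durable)"
    if not x_pass and not y_pass:
        return "NE: both fail (non-durable)"
    return "NW: x passes, y fails" if x_pass else "SE: y passes, x fails"

# Illustrative sample: 22% L.A. Abrasion loss vs 20% Micro-Deval loss
x = normalized(22.0, "la_abrasion")   # 0.55 -> pass
y = normalized(20.0, "micro_deval")   # about 1.11 -> fail
print(quadrant(x, y))  # NW: x passes, y fails
```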

L.A abrasion vs Micro-Deval.

Figure 2. Graphical Representation of Comparison between Normalized Loss of L.A Abrasion and Micro-Deval Tests.
The plot above compares the Micro-Deval and L.A. Abrasion results for the 20 samples tested by the Montana Department of Transportation. The blue dotted line running at 45° through the centre of the graph indicates symmetry and a perfect correlation; data points close to it indicate a good correlation between the tests. Only one result (5% of the materials tested) failed both tests and was considered “non-durable”. Five results (25% of the samples) fell into the inconsistent category and lacked coherence: they passed the L.A. Abrasion test and were “durable” by that measure, but were “non-durable” according to the Micro-Deval test. The relation between the two tests is nonetheless quite strong, as 70% of the samples (14 out of 20) passed both tests and fall in the bottom-left quadrant, indicating that these samples were durable according to both tests.

Sodium Sulphate Soundness Test Vs L.A Abrasion Test.

Figure 3. Graphical Representation of Comparison between Normalized Loss of Sodium Sulphate Soundness and L.A. Abrasion Tests.

Sodium Sulphate Soundness Test Vs Micro-Deval Test.

Figure 4. Graphical Representation of Comparison between Normalized Loss of Sodium Sulphate Soundness and Micro-Deval Tests.
…aggregates typically encountered on Montana highway projects, and to determine whether the Micro-Deval test provides better, timelier, and more repeatable information about the quality of an aggregate than the Sodium Sulfate test. The laboratory testing program was structured to examine how well three aggregate durability test methods correlate for a sampling of Montana soils. Aggregate durability tests were conducted on 32 different soils using the Micro-Deval, L.A. Abrasion, and Sodium Sulfate tests. Multiple Micro-Deval and L.A. Abrasion tests were conducted on some of the soil samples to investigate the same-lab repeatability of the test methods. The methods differ in their treatment of the aggregate during testing; consequently, each method produces a unique value of percent loss, which is used to distinguish between durable and non-durable aggregate. For the purposes of this study, the following percent loss pass-fail standards were used for each test:

Micro-Deval: passing (durable) if % loss ≤ 18%;
L.A. Abrasion: passing (durable) if % loss ≤ 40%; and
Sodium Sulfate: passing (durable) if % loss ≤ 12%.

Because of the differences in percent loss criteria for each method, results from the suite of laboratory tests were normalized to facilitate direct comparisons between the three methods. Normalized results were obtained by taking the average percent loss for a particular soil and dividing it by the cutoff for that test. Table 16 summarizes the comparisons between each test using data collected during this study. Based on the metrics identified in the table, the Micro-Deval and Sodium Sulfate tests had the best correlation, while the Micro-Deval/L.A. Abrasion and L.A. Abrasion/Sodium Sulfate correlations were significant, but not as strong.

Table 16. Summary Comparison of Test Methods

Test Methods           R²      Pass/Fail Agreement (%)    Inconsistent Durability Determination* (%)
M-D versus NaSO4       0.72    92.9                       7.1
M-D versus L.A.        0.46    85.2                       14.8
L.A. versus NaSO4      0.28    84.0                       16.0
Perfect Correlation    1.0     100.0                      0.0

*Note: Column 4 refers to the percentage of samples that passed one of the tests but failed the other. This inconsistency is identified as a data point that plots in one of the cross-hatched zones identified in Figures 3, 4, and 5.

The percentages of inconsistent durability determinations (pass or fail inconsistencies) listed in column 4 of Table 16 are indicative of a discontinuity between tests and are probably the most important metric for the comparison study: one test characterized the material as durable aggregate, while the other test characterized the same material as non-durable aggregate. Qualitatively, the authors believe that an excellent correlation between two test methods is obtained when the percentage of inconsistent results is less than about 5%; values between 5 and 10% signify a good correlation; values between 10 and 20% signify a fair to poor correlation; and values above 20% signify a poor or unreliable correlation between tests.

Multiple tests conducted on samples obtained from the same sources indicate similar values of same-lab repeatability for both the Micro-Deval and L.A. Abrasion tests. The coefficients of variation for the multiple tests were less than 10% for both methods. Considering the natural variability that occurs within an aggregate source, the measured variations were low, indicating good repeatability of the test methods. This conclusion has also been supported by others (Jayawickrama et al., 2006; Tarefder et al., 2003; and Hunt, 2001). Repeatability of the Sodium Sulfate test was not examined in this study. The relationship between Micro-Deval test results and field performance was also not examined in this study; however, evaluations by Fowler et al. (2006), Rangaraju et al. (2005), Tarefder et al. (2003), and Wu et al. (1998) indicate that Micro-Deval test results relate well with field performance. An excellent correlation between rutting performance and Micro-Deval test results was observed by White et al. (2006).

Tracks Covering in Penetration Testing

Er. Ramesh Narwal
Er. Gaurav Gupta

After completing the attack, covering tracks is the next step in penetration testing. In this step the tester returns to each exploited system to erase tracks and clean up all footprints left behind. Covering tracks is important because leftover tracks give clues to a forensic analyst or an Intrusion Detection System (IDS). Sometimes it is difficult to hide all tracks, but an attacker can manipulate the system to confuse the examiner and make it almost impossible to identify the extent of the attack. In this research paper we describe the methods used in covering tracks and their future scope.
Keywords: Exploit, Payload, Vulnerability Assessment, Penetration Testing, Track Covering
Penetration testing, also known as pentesting, is nowadays an important security testing method for organisations. Its main objective is to identify security threats in networks, systems, servers, and applications. Penetration testing consists of various phases, which we discuss in the overview below. After gaining administrative access to a system or server, the attacker's first task is to cover their tracks to prevent detection of their current and past presence in the system. An attacker or intruder may also try to remove evidence of their identity or activities on the system to prevent the authorities from tracing their identity or location. To do this, an attacker usually erases all error messages, alerts, or security events that have been logged.
Overview of Penetration Testing
Penetration testing is used to validate the effectiveness of an organisation's security protections and controls. It reduces an organisation's expenditure on IT security by identifying and remediating vulnerabilities and loopholes, and it provides preventive steps against upcoming exploitation. The phases of penetration testing are:

Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Covering Tracks

Pre-engagement Interactions
Planning is the first step of pre-engagement. During this phase the scope, goals, and terms of the penetration test are finalised with the client, along with the targets and methods of the planned attacks.
Intelligence Gathering
This is the most important phase: if we miss something here, we might miss an entire avenue of attack. All information regarding the target is gathered from social media networks, Google hacking, and other methods. The primary goal during this phase is to gain accurate information about the target without revealing our presence, to learn how the organisation operates, and to determine the best entry point.
Threat Modeling
The information acquired in the intelligence gathering phase is used here to identify existing vulnerabilities on the target system. In threat modelling we determine the most effective attack methods, the type of information we need, and how an attack could be implemented against the organisation.
Vulnerability Analysis
A vulnerability is a loophole or weakness in a system, network, or product that can be used to compromise it. After identifying the most effective attack method, we consider how we can access the target. During this phase we combine the information acquired in previous phases and use it to determine the most effective attack. Port and vulnerability scans are also performed in this phase.


Exploitation
An exploit is code that allows an attacker to take advantage of a flaw or vulnerability in a system, application, or service. We should run an exploit only when we are confident it will succeed; unforeseen protective measures on the target might inhibit a particular exploit. Before triggering a vulnerability we must make sure that the system is vulnerable.
Our exploit must clean up properly after executing on the compromised system and must not leave the system in an unstable state. The figure below shows a system shutdown prompt on a compromised Windows machine caused by an exploit that did not clean up properly after execution.

After successful exploitation, the compromised system is under the attacker's control. The attacker or penetration tester often needs to alter the compromised or breached system to attain privilege escalation.
Post Exploitation
The payload is the code that actually executes on the compromised system after exploitation. The post exploitation phase begins after one or more systems have been compromised. In this phase the penetration tester identifies critical infrastructure, targets specific systems, and targets the information and data of most value, which must therefore be secured. While attacking systems during post exploitation, we should take time to understand what each system does and its different user roles. Testers and attackers generally spend time in a compromised system to understand what information it holds and how they can benefit from it.
After gaining access to one system, an attacker can reach other systems on the same network by using the compromised machine as a staging point; this method is known as pivoting. Sometimes attackers create a backdoor on the compromised system to regain access in the future.
Covering Tracks
In the previous phases the penetration tester or attacker often made significant changes to the compromised systems in order to exploit them or to gain administrative rights. In this final stage of the penetration test, the attacker clears all the changes made to the compromised systems and returns all compromised hosts to the precise configurations they had before the penetration test was conducted.
All of the information generated during penetration testing, such as vulnerability reports, diagrams, and exploitation results, must be deleted after handover to the client. If any information is not deleted, the client must be informed and it must be mentioned in the technical report produced after the test.
Reporting is the last phase of a penetration test, in which the tester organises the available data and result sets into a report and presents it to the client. This report is highly confidential: it contains all the results of the penetration tests, such as the list of vulnerabilities in the organisation's systems, networks, or products, and recommendations for solving these security problems, which helps the organisation stop future attacks.
How to cover tracks
To compromise a system successfully, an attacker needs to be stealthy and avoid detection by security systems such as firewalls and intrusion detection systems (IDS). System administrators and other security personnel use similar techniques to identify malicious activity, so it is very important for an attacker to remain undetected; an administrator can examine processes and log files to check for malicious activity. A penetration tester faces various challenges after successfully compromising a target system. Below we describe the problems faced by a penetration tester when covering tracks.
Manipulating Log Files Data
To manipulate log file data, an attacker must have good knowledge of commonly used operating systems and must be aware of two types of log files: system-generated and application-generated.
A penetration tester or attacker has two options when manipulating log data: delete the entire log, or modify its contents. Deleting the entire log ensures the individual entries cannot be recovered, but the drawback is that the deletion itself is easy to detect.
The second option is to manipulate individual entries within the log files so that the system administrator does not notice the attacker's presence. However, if the attacker removes too much information, the resulting gaps between log entries also become noticeable.
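A minimal illustration of the second option, selective modification, is sketched below. The log lines and the marker address are hypothetical; on a real engagement this would require root privileges and explicit written authorization:

```python
# Illustrative sketch only: filter entries containing a marker (e.g. a tester's
# IP address) out of a text log, instead of deleting the whole file.
def filter_log(lines, marker):
    """Return the log lines that do NOT contain the marker string."""
    return [line for line in lines if marker not in line]

log = [
    "Jan 01 10:00:01 sshd: accepted password for admin from 10.0.0.5",
    "Jan 01 10:02:17 sshd: accepted password for tester from 192.0.2.10",
    "Jan 01 10:05:43 cron: job finished",
]
cleaned = filter_log(log, "192.0.2.10")
print(len(cleaned))  # 2
```

Removing only the matching lines leaves the surrounding entries intact, which is exactly why large removals create the noticeable gaps described above.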
Log Files Management in Various System
The main purposes of log files in operating systems are to check the health and state of the system, to detect malicious activity, and to analyse the system when something goes wrong (system troubleshooting). Below we show the locations of log files in the commonly used operating systems: Windows, Linux/Unix, and Mac.
In Windows, log files are stored in the Event Viewer, which is easy to find: simply search for "Event Viewer" and run it. The Event Viewer looks like the figure given below, where we can see all system and application log files.

Figure : Log Files Management in Windows
In nearly all Linux and Unix operating systems, log files are stored in the /var/log directory. Many system log files are hidden; to see the complete list from a shell, simply type the command ls -l /var/log/. The figure below shows the log files on a BackTrack Linux system.

Figure : Log Files Management in Linux/Unix
To access log files on Mac OS X, open Finder, select "Go to Folder" in the Go menu, type /Library/Logs, and press Enter. You will see a screen like the one in the figure below, containing all the log files.

Figure: Log Files Management in Mac OS X
To manipulate log file data, an attacker must have root privileges.
Challenges in Manipulation of Log Files
If the system administrator configures the system to transfer all log files to a remote server from time to time, an attacker or penetration tester can only stop the log transfer process; beyond that, they have no other way to cover those tracks.
Hiding Files
Various Tools for Covering Tracks
There are many ways to compromise a system, but after compromising it the attacker must cover their tracks, because every activity the attacker performs is recorded by the system. Each system has its own way of recording the activity that occurs on it, so every attacker must cover the tracks the system records so that no one can identify them.

Sample Speech Against Animal Testing

Good morning, ladies and gentlemen, it is great to be here with you all on this marvellous morning. I am here to convince all of you to oppose, stop and disengage from the cruel, detrimental and unnecessary practice of animal testing.
Do you know that the lipstick, the eyeshadow and the mascara we use to make ourselves look more attractive have poisoned hundreds of thousands of innocent animals?
Do you know that the hairspray, the hair gel and the perfume we use to make ourselves look smarter have blinded hundreds of thousands of innocent animals?
Do you know that even the toothpaste, the shampoo and the soap we use everyday have killed hundreds of thousands of innocent animals?
If your answer is ‘No’, now is the time for all of us to know it. Animal testing is not only research to find cures for human diseases; it is also experimentation to establish the safety of various products such as daily necessities, cosmetics and medicines.
To produce a safe product for us, numerous animals have died in laboratories. To ensure our health, numerous animals have been tortured in laboratories. To let us stay away from diseases, numerous animals have gone through unbearable aches and pains in laboratories.
An overview of animal testing by People for the Ethical Treatment of Animals has judged us guilty of killing nearly 100 million animals in research laboratories every year. Each year, nearly 100 million animals are burned, poisoned and starved. Each year, nearly 100 million animals are dosed with poisonous substances, driven insane and deliberately infected with diseases such as cancer, diabetes and AIDS. Each year, nearly 100 million animals have their eyes removed, their brains damaged and their bones broken. Each year, nearly 100 million animals are brutally abused, mercilessly tortured and defencelessly killed for human benefit. Do they deserve such cruel and brutal treatment?
They died for genetics research, for biomedical research, for xenotransplantation, for physiological research, for medical research, for drug testing and for toxicology tests.
Perhaps you may say these tests and research efforts are for a good cause, but is a cause really good when numerous innocent animals must be caged, tortured and sacrificed to achieve it?
Perhaps you may say these tests and research efforts are good for your safety, but is the chemical reaction on an animal the same as the one on a human being?
Perhaps you may say these tests and research efforts are good for your health, but can they reliably predict effects in humans? Are there really no side effects on human beings?
Scientists and researchers claim that they need unlimited access to animals for experiments in order to find cures for human diseases. Yet animal testing has actually endangered the lives of human beings, as results from animal testing cannot simply be applied to humans. According to PETA’s fact sheet, “In many cases, animal studies do not just hurt animals and waste money, they kill people too. Some drugs were all tested on animals and judged safe but had devastating consequences for the humans who used them.” Have we ever asked ourselves why this happens? The answer is very simple: animals and humans are completely different from each other. As Dr. Arie Brecher said, “No animal species can serve as an experimental model for man.”

Scientists should ask themselves: do dogs have the same DNA as us? Do cats have the same genetic characteristics as us? Do rabbits or rats have the same body cells as us? It is absolutely ironic that scientists answer ‘No’ to these questions while still using human benefit as an unacceptable and unconvincing excuse to perform animal experimentation.
Thus, should we still keep our faith in scientists’ and researchers’ ability to find a cure via animal testing? Should we still believe in those products which have made millions of rabbits blind? Should we still depend on such an inaccurate form of experimentation to cure our diseases? For me, the answer to these questions is ‘No’. It should also be your answer, the answer of our humane society, the answer of our country, the answer of all five continents and the answer of the entire world.
We have no right to use animals as subjects for any research or experimentation, just as we do not have the right to experiment on humans without their consent. We should respect the rights of all species just as we respect the rights of all people. We should pitch in with the work against animal testing and stand up for animal rights, for the animals tortured and crying out behind laboratory doors, just as we stand up for our own rights. As Sri Aurobindo said, “Life is life – whether in a cat, or dog or man. There is no difference there between a cat or a man. The idea of difference is a human conception for man’s own advantage.”
Any of us who donates to a medical charity is actually helping to fund research involving animal testing. We fund the expenses of cages, of feed and of experimental materials. We fund the provision and purchase of animals as experimental subjects. We fund the blinding, scalding and poisoning of animals.
Animals are just like our family, friends and companions. Is it right for us to provide money that causes our family, friends and companions to be subjected to medical research?
Animals are just like us: they are creatures created by God. Just like us, they have feelings. Just like us, they are able to feel pain, hunger and thirst. Just like us, they will grieve over loved ones they have lost.
We should try to imagine how the animals feel. Imagine if we were massacred by wild and ferocious animals and nobody tried to save us. Imagine if we were living inside a small cage, waiting to die in vain. Imagine if we had no control over our own lives and no freedom. Imagine if we were forced to be injected with drugs or toxic substances when we had never even done anything.
With the modern technology we have today, animal testing is an unreliable, unscientific and unnecessary form of experimentation. Nowadays we have plenty of alternatives with a much higher rate of success than animal testing. Instead of animal testing, we can use human cell culture systems; instead of animal testing, we can use computer mathematical models; instead of animal testing, we can use artificial human skin and eyes that mimic the body’s natural properties.
I believe that with the changes in technology these days, we are able to find more ways and methods that scientists and researchers can do research without involving any cruelties and causing any harm to any creatures.
Now, let us stop buying and using the products tested on animals.
Now, let us save the ship of animal rights that has sunk to the bottom of the sea of humans’ ignorance, rudeness and curiosity.
Now, let us dig up the roots of cruelty and start sowing the seeds of humanism all over the world.
Now, let us start it today.
Thank you very much.

Reliability of Reflotron in Testing of Total Cholesterol

Reliability of Reflotron in Testing of Total Cholesterol and Urea in a Non-centralized Medical Setting
Point-of-care testing (PoCT) has been defined as “those analytical patient-testing activities provided within the institution, but performed outside the physical facilities of the clinical laboratories” (1). There has been growing interest in point-of-care testing because of its advantages over standard laboratory procedures: it provides timely information to medical teams, facilitates rational, time-critical decisions, and has been demonstrated to improve patient outcomes in critical care settings (2). At least a dozen portable cholesterol- and urea-testing instruments have been designed for use in community and office settings. These instruments have made mass screening for these risk factors feasible and are now in widespread use for this purpose (3).
Dyslipidemia, including both hypercholesterolemia and hypertriglyceridemia, represents a significant risk factor for the development of peripheral artery disease and negative health outcomes (4, 5).
High blood cholesterol increases the risk of heart disease, is a major modifiable risk factor, and contributes to the leading cause of death in the USA (6,7).
Chronic kidney disease (CKD) is now recognized as a major world-wide health problem (8). A method for the estimation of the urea in blood coming from individual organs and for clinical purposes must be efficient when only small quantities of blood can be obtained (9).
Aim of work:
In Arar city, many non-centralized medical settings use the Reflotron for medical analysis and disease diagnosis. The purpose of this study was to assess the validity of the Reflotron in testing total cholesterol and urea for screening and diagnosis in Arar city.
A cross-sectional study was held in Arar city in the period from 1 November 2013 to 10 November 2013. Thirty blood samples were taken and measured with the Reflotron apparatus, and the results were rechecked with the Dimension RXI MAX apparatus to compare the results of the two methods.
Approximately 20ml of blood was collected from each participant, after fasting for 12h, using standardized venipuncture techniques in the antecubital vein in the bend of the elbow. In order to overcome technician error, two drops of blood (30μl) were collected immediately from the previously drawn venous sample by drawing blood into the capillary tube from the opening in the top of the venous tube before centrifuging the venous sample, rather than ‘sticking’ the finger.
Statistical analysis was done with SPSS 20 and suitable statistical methods were used.
Results:
Table (1): Comparison between readings of the Reflotron and the Dimension RXI MAX:

Test                          Dimension RXI MAX (mean ± SD)    Reflotron (mean ± SD)
Urea (mg/dl)                  65.22 ± 46.3                     63.73 ± 41.1
Total cholesterol (mg/dl)     150.04 ± 38.9                    167.7 ± 40.3
Table 1 shows that the mean urea was 65.22±46.3 with the Dimension RXI MAX apparatus and 63.73±41.1 with the Reflotron. For total cholesterol, the means with the Dimension RXI MAX and the Reflotron were 150.04±38.9 and 167.7±40.3, respectively. The difference between the readings of the two apparatuses was not statistically significant for either urea or cholesterol.

Table (2): Mean percent of change between the Reflotron and the Dimension RXI MAX in urea and cholesterol

Test                          Mean percent of change
Total cholesterol (mg/dl)     12.5%
Urea (mg/dl)                  -0.4%
Table 2 shows that the mean percent of change between the Reflotron and the Dimension RXI MAX was -0.4% for urea and 12.5% for cholesterol.
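The statistic above can be reproduced as the per-sample percent change of the point-of-care reading relative to the reference reading, averaged over the paired samples. This is a minimal sketch; the paired values below are illustrative, not the study's raw data.

```python
# Minimal sketch of the "mean percent of change" statistic: the percent
# difference of a point-of-care reading relative to the reference reading,
# averaged over paired samples. Sample values are illustrative only.
from statistics import mean

def mean_percent_change(reference, test):
    """Mean of per-sample percent change of `test` relative to `reference`."""
    return mean((t - r) / r * 100 for r, t in zip(reference, test))

reference = [150.0, 160.0, 140.0]     # e.g. reference-laboratory readings
poct      = [168.75, 180.0, 157.5]    # e.g. point-of-care readings

print(round(mean_percent_change(reference, poct), 1))  # prints 12.5
```

A positive mean percent of change indicates that the point-of-care device reads higher than the reference on average, a negative one that it reads lower.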
The Reflotron has been marketed aggressively for use in community screening programs. The marketing has focused heavily on the instrument’s relatively low cost, ease of operation, and accuracy. This strategy has resulted in the widespread use of this instrument in blood cholesterol screenings. The Reflotron has been studied previously using various settings, sample sizes, and methodologies (10).
This study compared the same blood sample using dry chemistry by the portable analyzer Reflotron plus and wet chemistry by Dimension RXI MAX apparatus.
The MultiCare systems are pocket-sized reflectance photometers in which the intensity of the colour developed by a chromogen reaction is proportional to the concentration of cholesterol or urea in the blood. The results of the MultiCare method compared with the reference method demonstrated good agreement between the two methods; the difference between the readings of the two apparatuses was not statistically significant for either urea or cholesterol, with a mean difference of 12.5% and -0.4% for cholesterol and urea, respectively. The availability of POCT lipid monitors has increased in recent years, and any POCT device must be validated for bias and imprecision to ensure that appropriate medical decisions and population screenings are made (11-17). The National Cholesterol Education Program (NCEP) in the United States recommends bias goals of 3% and 5% for cholesterol and triglycerides, respectively.
Conclusion: The portable Reflotron analyzer produced clinically relevant overestimations of total cholesterol values in comparison with the Dimension RXI MAX, whereas the urea values were satisfactory. Consequently, lipid values obtained using the Reflotron may be useful for screening, but the Reflotron should not be used as a diagnostic tool for lipids. Urea values are useful for both screening and diagnosis of kidney diseases.

U.S. Department of Health and Human Services, National institutes of Health. Point-of-Care Diagnostic Testing Fact Sheet. Jul 2007.
Birkhahn RH, Haines E, Wen W, Reddy L, Briggs WM, Datillo PA (2011). Estimating the clinical impact of bringing a multimarker cardiac panel to the bedside in the ED. Am J Emerg Med, 29(3):304-8.
Havas, Stephen; Bishop, Robert; et al. Performance of the Reflotron in Massachusetts’ model system for blood cholesterol screening program. American Journal of Public Health, Mar 1992;82(3). ProQuest Central.
Davis, C.L., Harmon, W.E., Himmelfarb, J., Hostetter, T., Powe, N., Smedberg, P., Szczech, L.A. and Aronson, P.S. 2008: World Kidney Day 2008: think globally, speak locally. Journal of the American Society of Nephrology 19, 413–16.
Sullivan DR. Screening for cardiovascular disease with cholesterol. Int J Clin Chem 2002;315:49–60.
State-specific cholesterol screening trends-United States, 1991–1999. MMWR Morb Mortal Wkly Rep 2000;284: p. 1374–5.
Cheng AY, Leiter LA. (2006). Implications of recent clinical trials for the National Cholesterol Education Program Adult Treatment Panel III guidelines. Curr Opin Cardiol 21(4):400–404.
Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (2001).Executive summary of the Third Report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III). JAMA 285:2486–2497.
Volles DF, McKenney JM, Miller WG, Ruffen D, Zhang D. Ana- lytic and clinical performance of two compact cholesterol-testing devices. Pharmacotherapy 1998;18:184–92.
Havas S, Bishop R, Koumjian L, Reisman J, Wozenski S. Performance of the Reflotron in Massachusetts’ model system for blood cholesterol measurement. Am J Public Health 1992;82:458–61.

Shephard MD, Mazzachi BC, Shephard AK. Comparative perfor- mance of two point-of-care analysers for lipid testing. Clin Lab 2007;53:561–6.
Stein JH, Carlsson CM, Papcke-Benson K, Einerson JA, McBride PE, Wiebe DA. Inaccuracy of lipid measurements with the portable Cholestech L.D.X analyzer in patients with hypercho- lesterolemia. Clin Chem 2002;48:284–90.
du Plessis M, Ubbink JB, Vermaak WJ. Analytical quality of near- patient blood cholesterol and glucose determinations. Clin Chem 2000;46:1085–90.
Gottschling HD, Reuter W, Ronquist G, Steinmetz A, Hattemer A. Multicentre evaluation of a non-wipe system for the rapid determination of total cholesterol in capillary blood, Accutrend Cholesterol on Accutrend GC. Eur J Clin Chem Clin Biochem 1995;33:373–81.
Laboratory Standardization Panel of the National Cholesterol Education Program. Current status of blood cholesterol measurement in clinical laboratories in the United States: a report from the Laboratory Standardization Panel of the National Cholesterol Education Program. Clin Chem 1988;34:193–201.
Carey M, Markham C, Gaffney P, Boran C, Maher V. Validation of a point of care lipid analyser using a hospital based reference laboratory. Ir J Med Sci 2006;175:30–5.
Luley C, Ronquist G, Reuter W, et al. Point-of-care testing of triglycerides: evaluation of the Accutrend triglycerides system. Clin Chem 2000;46:287–91.


Data Flow testing using Genetic Algorithms



Testing plays a key role in the software development life cycle. There are two broad types of testing: white box testing and black box testing. Data flow testing is a white box technique that examines the flow of control and the flow of data through the software under test. Evolutionary approaches to testing select or generate test data using optimized search techniques. This work applies a genetic algorithm to the automatic generation of test paths for data flow testing. The algorithm generates a random initial population of test paths and then, based on the selected coverage criteria, generates new paths by applying genetic operators. A fitness function evaluates the fitness of each chromosome (test path) against the selected data flow criteria. Crossover and mutation of chromosomes from the selected test data generate new paths, from which paths with good fitness values are retained. This approach gives better results compared to random testing.

1. Introduction

Software testing is the process of executing a program with the intent of finding errors [1]. It is one of the most important phases of the software development life cycle. There are different kinds of software testing, including white box testing and black box testing. White box testing verifies the internal logic of the program, while black box testing deals with verification of the software's functionality. Software testing can be performed at various levels, including unit testing, integration testing and system testing. Unit testing tests single units of software, such as classes and methods. Integration testing checks the interfaces used for communication between different components and the messages passing through them. System testing verifies the overall functionality of the system. White box testing is divided into two major categories: control flow testing and data flow testing. Control flow testing uses the control flow graph of the program [2] [3]. In a control flow graph, a node corresponds to a code segment; nodes are labelled using letters or numbers. An edge corresponds to the flow of control between code segments; edges are represented as arrows. A control flow graph has an entry point and an exit point. Control flow testing criteria include statement coverage, decision coverage, condition coverage and so on. Data flow testing is a white box technique used to verify the flow of data through the program: it checks the definition and use of program variables in a precise manner [2] [3]. Data flow testing is divided into static and dynamic data flow testing. Static data flow testing does not involve executing the program.
In static data flow testing, only a static examination is performed by analysing the source code. Dynamic data flow testing actually executes the program to detect data anomalies, and different coverage criteria are defined for it so that different paths through the program are executed [2] [3]. Automation is an important aspect of software testing; without automated tools it is difficult to test thoroughly. In this paper, we have implemented an automated tool for the generation of test paths for data flow testing. The application of evolutionary algorithms [4] to software testing for test data generation is known as evolutionary testing. Our tool uses a genetic algorithm for the generation of test paths in data flow testing at the unit level, driven by data flow coverage criteria. The proposed research tool, called ETODF, is the continuation of our previous work [6] on data flow testing using evolutionary approaches and has been implemented in Java for validation. In experiments with this tool, our approach produced much better results compared to random testing. The rest of the paper is organized as follows: Section 2 surveys the different techniques proposed for software testing using evolutionary approaches, Section 3 describes the proposed tool for data flow testing using evolutionary approaches, and Section 4 presents the experiments and testing results in comparison with random testing. Section 5 concludes the paper and outlines future work.
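The definition/use information that data flow testing tracks can be made concrete with a tiny example. The function below is annotated with the definition (def), predicate use (p-use) and computation use (c-use) points of one variable; the node numbering is hypothetical and only illustrative.

```python
# Illustrative only: a small function annotated with the definition (def)
# and use points of variable `x` -- the kind of information data flow
# testing tracks. Node numbers are hypothetical.

def classify(a):
    x = abs(a)          # node 1: def of x
    if x > 10:          # node 2: p-use of x (in a predicate)
        return x * 2    # node 3: c-use of x (in a computation)
    return x            # node 4: c-use of x

# def-use pairs for x: (1, 2), (1, 3), (1, 4).
# Covering them all requires at least one input on each branch:
print(classify(-12), classify(3))  # prints "24 3"
```

An All-defs criterion would require each definition to reach at least one use; All-uses would require every def-use pair above to be exercised.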

2. Related Work

Tonella [7] used evolutionary testing for testing classes at the unit level and for the generation of test cases. Ant colony optimization was applied by McMinn and Holcombe [8] to address the state problem in object-oriented programs; the chaining approach was also extended to handle state problems in object-oriented programs by McMinn and Holcombe [8]. Watkins [9] performed different experiments comparing the fitness functions of various researchers; according to them, the branch predicate and inverse path probability approaches were the best fitness functions compared to other approaches. Wegener et al. [10] [11] developed an automated testing framework using evolutionary approaches for the testing of embedded systems.

Baresel et al. [12] proposed several modifications to fitness function design to improve evolutionary structural testing. A well-constructed fitness function can increase the chance of finding a solution, achieve better coverage of the software under test, guide the search more effectively, and thereby reach improvements in fewer iterations.

McMinn [13] gives a comprehensive survey of evolutionary testing approaches and discusses their applications. The author examines the different ways in which evolutionary testing has been applied and also gives future directions in each individual area. Evolutionary approaches are mostly applied to software testing for automated test data generation [14] [15] [16] [17] [18] [19]. Cheon et al. [20] defined a fitness function for object-oriented programs. Dharsana et al. [21] generated test cases for Java-based programs and also optimized the test cases using a genetic algorithm. Jones et al. [22] performed automatic structural testing using a genetic algorithm. Bilal and Nadeem [23] proposed a state-based fitness function for object-oriented programs using a genetic algorithm. S. A. Khan and Nadeem [24, 25] proposed approaches for test data generation at the integration level using evolutionary approaches.

There are two motivating factors behind this research tool. First, there is little or no research work on dynamic data flow testing using evolutionary approaches. Second, there is no automated tool for data flow testing that applies evolutionary approaches to dynamic data flow testing.

3. Using Genetic Algorithm

The proposed approach [6] has been implemented in a prototype tool (evolutionary testing of data flow) in Java. The prototype uses a genetic algorithm for test path generation with single-point crossover and mutation. The high-level architecture of the tool is shown in Figure 1. The tool has the following components:

Graph Parser

Semantic Analyzer and Preserver

Test Path Generator

Sorter and Validator

Fitness Calculator

The tool takes a data flow graph as input and stores the graph after analysing it and applying semantics to each node. The input data flow graph is given as an adjacency list, which represents all the edges or arcs of a graph as a list. The parser parses the graph and stores it in a parser object: the parser object reads the graph information from the input file in adjacency list form and contains a graph object that keeps the whole graph in memory as an adjacency list for further processing by the subsequent components.
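The parsing step can be sketched in a few lines. This is an illustrative sketch only (the paper's tool is in Java); the input format `node: successors...` and the function name are assumptions, not the tool's actual file format.

```python
# Minimal sketch of the Graph Parser step: the data flow graph arrives as
# an adjacency list in text form, and the parser stores it as a dict
# mapping each node to its successors. Format and names are hypothetical.

def parse_adjacency_list(text):
    """Parse lines of the form 'node: succ1 succ2 ...' into a graph dict."""
    graph = {}
    for line in text.strip().splitlines():
        node, _, succs = line.partition(":")
        graph[node.strip()] = succs.split()
    return graph

raw = """
1: 2
2: 3 4
3: 5
4: 5
5:
"""

graph = parse_adjacency_list(raw)
print(graph["2"])  # prints ['3', '4'], the successors of node 2
```

The per-node def/use annotations added by the semantic analyzer would then be attached to the keys of this dict.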

The semantic analyzer and preserver is the most essential component of the tool. It analyses the semantics associated with each node and stores them with each node of the graph. The semantic analyzer and preserver reads the data flow information from the input file and associates it with each node of the graph, using the parser object that stores the actual graph. After semantic analysis, every node holds all the information related to the definition and use of each variable, so the nodes can be used to evaluate the data flow coverage criteria when applied along a path.

Fig. 1. High-Level Architecture of the Genetic Algorithm Tool

The test path generator is an important component of the tool: it initially produces test paths randomly, and from the second iteration onwards it uses the genetic algorithm for test path generation, applying one-point crossover and mutation to generate the new population. The test paths are ordered by the sorter component, and then every path is validated against the semantics of the graph for validity; in this step, the invalid paths are removed from the set of test paths.
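The one-point crossover and mutation operators described above can be sketched on paths represented as node sequences. This is a hedged sketch, not the tool's implementation: the graph, the random-walk mutation strategy, and all names are assumptions, and (as in the tool) recombined paths would still need validation against the graph.

```python
# Illustrative sketch of one-point crossover and mutation on test paths
# (node sequences). The graph and operator details are hypothetical; in
# the described tool, offspring are re-validated by the Sorter/Validator.
import random

def one_point_crossover(p1, p2, rng):
    """Swap the tails of two paths at a single random cut point."""
    cut = rng.randrange(1, min(len(p1), len(p2)))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(path, graph, rng):
    """Keep a random prefix, then regrow the tail as a random walk."""
    i = rng.randrange(len(path))
    new = path[:i + 1]
    while graph[new[-1]]:                 # walk until a node with no successor
        new.append(rng.choice(graph[new[-1]]))
    return new

graph = {"1": ["2"], "2": ["3", "4"], "3": ["5"], "4": ["5"], "5": []}
rng = random.Random(0)                    # seeded for reproducibility

c1, c2 = one_point_crossover(["1", "2", "3", "5"], ["1", "2", "4", "5"], rng)
m = mutate(["1", "2", "3", "5"], graph, rng)
print(c1, c2)
print(m)
```

Note that crossover can produce paths with non-existent edges, which is exactly why the validation step follows it, while this mutation always yields a graph-valid path because it only follows real successor edges.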

The valid paths are passed to the fitness calculator, which computes the fitness of every path using the data flow coverage criteria. The fitness calculator takes the valid paths from the validator component and the data flow coverage criteria from the user as input, and computes the fitness of each path according to those criteria. After the evaluation, the paths that satisfy the coverage criteria are added to the global list of paths, while those that do not remain in the population and are used in recombination and mutation in the next iteration.
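One simple way to realize such a fitness calculation, sketched here under the assumption that the criterion is a set of def-use pairs (the actual tool's scoring is not specified in this much detail), is the fraction of required pairs a path covers in order:

```python
# Illustrative fitness calculation: a path's fitness is the fraction of
# required (def_node, use_node) pairs it covers, with the definition
# appearing before the use. Pair and node numbering are hypothetical.

def fitness(path, du_pairs):
    """Fraction of def-use pairs covered in order by `path`."""
    covered = 0
    for d, u in du_pairs:
        if d in path and u in path and path.index(d) < path.index(u):
            covered += 1
    return covered / len(du_pairs)

du_pairs = [("1", "2"), ("1", "3"), ("1", "4")]
print(fitness(["1", "2", "3"], du_pairs))  # covers (1,2) and (1,3): 2/3
```

A path with fitness 1.0 satisfies the criterion and would move to the global list; lower-fitness paths stay in the population for recombination.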

4. Genetic Algorithm implementation

The major packages implemented in the tool are shown in the package diagram in Figure 2. The GraphParser class reads the input graph and stores it in a parser object. The NodeInformation class reads the data flow information and holds the data flow information of each node. The DataflowCriteria class defines the testing criteria for dynamic data flow testing. The FitnessCalculator class computes the fitness of each chromosome and also the fitness of the population. The central server class is the entry point of the tool; it is responsible for generating the GUI and is the main controller class that calls the other classes. The remaining classes are helper classes for the various functionalities of the tool.

5. Case Study

Experimental results have been obtained using the proposed technique by applying the tool to procedural code at the unit level. We used the All-defs, All-uses, All-c-uses and All-p-uses criteria for our experiments and compared the results with random testing. We used the following code for our trials. In tests with this model, our approach produced much better results compared to random testing.

Table 1. Source Code used for Evaluation of the Genetic Algorithm

In this example, different variables are defined and used. We applied the genetic algorithm to these variables and compared the results with random testing. Sample paths for All-defs, All-c-uses and All-p-uses for the variables are given below in Table 2. The data flow graph for the source code is shown in Figure 2.

Table 2. Sample Paths for Variables ‘tv’ and ‘ti’

Fig. 2. Data Flow Graph

Figure 3 shows the values generated by the genetic algorithm.

Fig. 3. The Generated Values using the Genetic Algorithm

We performed experiments with our proposed tool for the data flow testing criteria All-defs, All-C-uses, All-P-uses and so on. We ran one hundred iterations with the genetic algorithm to generate test paths that satisfy the data flow testing criteria selected by the user in the tool's UI. For the generation of five test cases for All-defs, random testing took forty-three iterations while ETODF took only twelve. Similarly, for ten test cases, random testing took eighty-seven iterations while ETODF took only twenty-one for the All-defs criterion. For the generation of five test cases for All-C-uses, random testing took fifty-six iterations while ETODF took only eleven. Similarly, for ten test cases, random testing took one hundred iterations without producing one hundred per cent results, while the genetic algorithm took only twenty-seven iterations for the All-C-uses criterion. The results for All-P-uses/Some-C-uses and All-P-uses are summarized in Table 3. From the test results, we conclude that our proposed tool gives much better results compared to random testing, and we argue that our approach will perform even better for large and complex programs.

Table 3 summarizes the results obtained from the different approaches: our proposed genetic algorithm and random testing.

Table 3. Comparison of Random Testing and Genetic Algorithm Values

From Table 3, it can be concluded that only the genetic algorithm produces one hundred per cent results in all cases and for all testing criteria, while random testing performs better only on small sets of requirements and does not produce one hundred per cent results where the requirements are higher in terms of numbers of test cases (paths). Based on these outcomes, we argue that the approach will perform even better for large and complex programs.

6. Experimental Measurements

I have cross-verified the efficiency of the proposed values against random test values on two measures:

Number of Iterations


Fig. 4. Genetic Algorithm and Random Iterations

The number of iterations indicates how many cycles are required for the generation of test paths according to the coverage criteria. According to the results obtained from the experiments, the genetic algorithm outperforms random testing with much better results, as shown in Figure 4. The difference is larger when the number of paths is greater.

Fig. 5. Genetic Algorithm and Random Coverage

Coverage indicates how much of the required coverage is achieved by the generated test paths, according to the coverage criterion.

Coverage = (Generated Paths/Required Paths) *100
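Computing this metric is straightforward. In the hedged sketch below, only generated paths that actually match a requirement are counted, and the path tuples are invented example values:

```python
def coverage(generated_paths, required_paths):
    """Coverage = (covered required paths / required paths) * 100."""
    covered = set(generated_paths) & set(required_paths)
    return len(covered) / len(required_paths) * 100

# Two of four required paths generated gives 50.0% coverage:
print(coverage([(1, 2), (2, 3)], [(1, 2), (2, 3), (3, 4), (4, 5)]))  # 50.0
```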

According to the experimental results, the Genetic Algorithm outperforms random testing and produces much better optimised results, as shown in Figure 5. The Genetic Algorithm achieved 100% results in all areas of testing. The difference, when cross-verified, is greatest when the largest numbers of paths are required.

7. Conclusion and Future Work

This thesis presents the use of a genetic algorithm for the automatic generation of test paths using data flow analysis. The genetic algorithm was implemented in Java. In experiments with this algorithm, my approach produced much more optimised results than random test values. I will extend this type of testing to the other levels of testing, i.e. unit testing, integration testing and system testing. Currently I have compared the output of experimental testing against random testing values only. In future, I will also carry out wider investigations on large case studies for the verification and validation of this proposed approach using all data flow coverage criteria.

8. References

[1] Boris Beizer, “Software Testing Techniques”, International Thomson Computer Press, 1990.

[2] Lee Copeland, “A Practitioner’s Guide to Software Test Design”, STQE Publishing, 2004.

[3] Khan, M. “Different Approaches to White Box Testing Technique for Finding Errors”, International Journal of Software Engineering and Its Applications, Vol.5, No.3, July, (2011).

[4] Rao, V. and Madiraju, S. “Genetic Algorithms and Programming -An Evolutionary Methodology”, International Journal of Hybrid Information Technology, Vol.3, No.4, October, (2010).

[5] Srivastava, P. and Kim, T., "Application of Genetic Algorithm in Software Testing", International Journal of Software Engineering and Its Applications, Vol.3, No.4, October, (2009).

[6] Khan, S. and Nadeem, A., “Applying Evolutionary Approaches to Data Flow Testing at Unit Level”. Software Engineering, Business Continuity, and Education Communications in Computer and Information Science, 2011.

[7] Tonella, P., (2004 July) “Evolutionary Testing of Classes”, In Proceedings of the ACM SIGSOFT International Symposium of Software Testing and Analysis, Boston, MA, pp. 119-128.

[8] McMinn, P., Holcombe, M., (2003 July) “The state problem for evolutionary testing.” In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Lecture Notes in Computer Science vol. 2724, pages 2488-2497, Chicago, USA. Springer-Verlag.

[9] Watkins, A., (1995 July) “The automatic generation of test data using genetic algorithms.” In Proceedings of the Fourth Software Quality Conference, pages 300-309. ACM, 1995.

[10] Wegener, J., Baresel, A., Sthamer, H., (2001) “Evolutionary test environment for automatic structural testing.” Information and Software Technology Special Issue on Software Engineering using Metaheuristic Innovative Algorithms, 43 pp.841-854.

[11] Wegener, J., Buhr, K., Pohlheim, H., (2002 July) “Automatic test data generation for structural testing of embedded software systems by evolutionary testing”, In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), pages 1233-1240, New York, USA. Morgan Kaufmann.

[12] Baresel, A., Sthamer, H., Schmidt, M., (2002 July) “Fitness Function Design to improve Evolutionary Structural Testing”, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 02), New York (NY), USA.

[13] McMinn, P., (2004) “Search-based Software Test Data Generation: A Survey”, Journal of Software Testing, Verifications, and Reliability, vol. 14, no. 2, pp. 105-156, June.

[14] McGraw, G., Michael, C., Schatz, M., (2001) “Generating software test data by evolution.” IEEE Transactions on Software Engineering, 27(12): 1085-1110.

[15] Pargas, R., Harrold, M., Peck, R., (1999) “Test-data generation using genetic algorithms. Software Testing”, Verification and Reliability, 9(4):263-282.

[16] Roper, M., (1997 May) “Computer aided software testing using genetic algorithms.” In 10th International Software Quality Week, San Francisco, USA.

[17] Tracey, N., Clark, J., Mander, K., McDermid, J., (2000) “Automated test-data generation for exception conditions”, Software: Practice and Experience, pp. 61-79, January.

[18] Sthamer, H., (1996) “The automatic generation of software test data using genetic algorithms”, PhD Thesis, University of Glamorgan, Pontypridd, Wales, Great Britain.

[19] Seesing, A., Gross, H., (2006) “A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software”, International Transactions on Systems Science and Applications, vol. 1, no. 2, pp. 127-134.

[20] Cheon, Y., Kim, M., (2006 July) “A specification-based fitness function for evolutionary testing of object-oriented programs”, Proceedings of the 8th annual conference on Genetic and evolutionary computation, Washington, USA.

[21] Dharsana, C.S.S., Askarunisha, A., (2007 December) “Java based Test Case Generation and Optimization Using Evolutionary Testing”. International Conference on Computational Intelligence and Multimedia Applications, Sivakasi, India.

[22] Jones, B., Sthamer, H., Eyres, D., (1996) “Automatic structural testing using genetic algorithms”, Software Engineering Journal, vol. 11, no. 5, pp. 299-306.

[23] Bilal, M., Nadeem, A., (2009 April) “A State based Fitness Function for Evolutionary Testing of Object-Oriented Programs”. Studies in Computational Intelligence, 2009, Volume 253/2009, 83-94, DOI: 10.1007/978-3-642-05441-9. Software Engineering Research, Management and Applications 2009.

[24] Khan, S.A., Nadeem, A., “Automated Test Data Generation for Coupling Based Integration Testing of Object Oriented Programs Using Evolutionary Approaches”, Proceedings of the 2013 10th International Conference on Information Technology: New Generations (ITNG 2013), Pages 369-374, Las Vegas, Nevada, USA.

[25] Khan, S.A., Nadeem, A., “Automated Test Data Generation for Coupling Based Integration Testing of Object Oriented Programs Using Particle Swarm Optimization (PSO)”, Proceedings of the Seventh International Conference on Genetic and Evolutionary Computing, ICGEC 2013, August 25-27, 2013, Prague, Czech Republic.

Charpy Impact Testing


Charpy impact testing is designed to measure the energy absorbed by a material under a sudden impact load, which allows us to measure the material’s resistance to failure. The amount of energy a material can absorb, or “impact energy”, allows us to determine the ductile-to-brittle transition temperature (DBTT) as well as the ductility of the material itself. Usually, the more ductile a material, the more energy it will absorb due to its tendency to resist fracturing. We used the Charpy test because it is not only cost effective but also straightforward and simple to use. The test determines impact energy by staging a swinging pendulum at a certain height, releasing it to strike a sample, and measuring the energy absorbed by comparing the height the pendulum rises after impact to the height from which it was dropped. Our pendulum had a scale that gave the resultant energy on a scale of 150 kJ. We tested various materials with different compositions in various thermal conditions to see the effect of temperature on ductility and brittleness.
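The energy bookkeeping behind the test can be sketched in a few lines: the absorbed energy is the potential energy the pendulum loses between its release height and the height it swings to after impact. The mass and heights below are invented example values, not the settings of our apparatus.

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg, drop_height_m, rise_height_m):
    """Energy absorbed by the specimen, in joules: E = m * g * (h_drop - h_rise)."""
    return mass_kg * G * (drop_height_m - rise_height_m)

# A 20 kg hammer released from 1.5 m that rises back to 0.5 m after fracture
# has delivered 20 * 9.81 * 1.0 = 196.2 J to the specimen.
```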


Table 1: Estimated DBTT (°C)

Material               Estimated DBTT (°C)
1018 Normalized
1045 Normalized        No Transition
1095 Normalized
1045 Cold-Finished
304 Stainless Steel    No Transition
6061 Aluminum          No Transition
PVC                    No Transition

Table 2 Summary of Temperature, Impact Energy, and Fracture Results

Figure 1: Impact energy for 1045 N and 1045 CF vs Temperature


Image 1: 1045 CW at 250°C with Camera
Image 2: 1045 CW at 250°C with 20x Keyence


Image 3: 1045 Normalized at 250°C with Camera
Image 4: 1045 Normalized at 250°C with 20x Keyence

Figure 2: Impact energy for PVC, 6061 Al, and 304 SS vs Temperature


Figure 1 above shows the difference in impact energy of 1045 normalized steel versus 1045 cold-worked steel at different temperatures. We can clearly see that the 1045 normalized had higher impact strength than the 1045 cold-worked. From Images 1 and 2 we see that the fracture surface of 1045 CW at 250°C is more ductile than that of the 1045 N. When we look at the raw data, however, we see that the normalized sample was able to absorb more energy, showing that normalization of the material increased its strength. From Images 1 and 3 we can see that another result of that normalization was an increase in brittleness.


DBTT is the temperature at which the material changes from brittle to ductile failure [1]. The 1095 normalized sample was ductile until it reached -60°C, as we can see in Table 2. The reading for 1095 at 250°C is very low at only 2 kJ and does not follow the impact-energy trend for this material, so I believe this to be an error made when loading the sample into the impact machine. When comparing the 1045 to the 1018 sample, we can see that the 1018 had higher strength and did not fail from 22°C to 100°C, while the 1045 did, even though both are plain carbon steels. We did not have a 1018 sample at 250°C, so there is a gap in the data trend at that point. The 1045 cold-worked steel became ductile at a much higher temperature than the 1018, and when we look at the compositions of 1045 and 1018 we see that 1045 has a higher carbon content. This shows that the higher the carbon content, the more brittle the sample [2]. When comparing the other plain carbon samples, we see a trend of the impact energy decreasing as the carbon percentage increased.
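One common way to put a number on the DBTT is to interpolate the temperature at which the impact energy crosses the midpoint between the lower (brittle) and upper (ductile) energy shelves. The sketch below illustrates that midpoint method; the data points are invented for illustration and are not our lab measurements.

```python
def estimate_dbtt(temps_c, energies_j):
    """Interpolate the temperature where energy crosses the shelf midpoint."""
    mid = (min(energies_j) + max(energies_j)) / 2
    pairs = sorted(zip(temps_c, energies_j))            # assumes a rising trend
    for (t1, e1), (t2, e2) in zip(pairs, pairs[1:]):
        if e1 <= mid <= e2:                             # crossing interval found
            return t1 + (mid - e1) * (t2 - t1) / (e2 - e1)
    return None                                         # no transition observed

temps = [-60, -20, 22, 100]      # degC, example values
energies = [5, 15, 95, 105]      # J, example values
```

The `None` case mirrors the “No Transition” entries in Table 1, where the material stayed on one shelf across the whole tested range.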

Figure 2 shows the effect of temperature on the impact energy of PVC, 6061 Al, and 304 SS. When viewing these results, we have to keep in mind that PVC could not be tested at 250°C or 100°C due to its low melting temperature. From this graph we can clearly see that 304 stainless steel had the best impact strength. From Table 2, the impact energies and observed fracture surfaces for each material at its tested temperatures clearly show that 304 stainless steel absorbed the most impact energy at all temperatures.

When looking at Figure 2, we see the different impact energies of polyvinyl chloride (PVC), 6061 aluminum alloy, and 304 stainless steel. The PVC was very brittle, especially when tested below room temperature, where the sample shattered rather than having a smooth brittle failure like the metals. It also had the lowest impact energy of any material tested. This explains why we use steel alloys and not PVC in heavy industrial applications: PVC’s strength is very low and it would likely fail if excessive force or weight were applied. The 304 stainless steel had the highest impact energy of all the tested materials; its composition shows less than 0.12% carbon together with a substantial chromium content. The low carbon percentage, together with the alloying additions, explains why the stainless steel’s ductility is so high and why it absorbed the most energy from the force applied by the pendulum.


The Charpy test has allowed us not only to see the impact energy of various materials but also to understand how composition, temperature, normalization, and cold working affect the strength and ductility of a material. As carbon content increases, we observed that impact energy decreases. The addition of more ductile metals into carbon-based steel alloys allows for an extremely ductile material which can absorb a large amount of energy. With temperature, we saw that for most materials an increase in temperature increased impact energy and ductility, while a decrease led to a sharp decline in impact energy and a large decrease in ductility, resulting in brittle failure.



Askeland, Donald R., Pradeep P. Fulay, and Wendelin Wright. The Science and Engineering of Materials. 7th ed. Cengage Learning, 2016. Print.

Gannon, Robert. “What Really Sank the Titanic?” Popular Science 246, no. 2 (February 1995): pp 49-55.

Standard Test Methods for Notched Bar Impact Testing of Metallic Materials, E 23-07, 2007 Annual Book of ASTM Standards, American Society for Testing and Materials.

William F. Smith, Structure and Properties of Engineering Alloys, Second Edition, McGraw-Hill, 1993.

Planning, Development and Testing Internetwork Design


Table of Contents

Aims of the Project

Problem Analysis

Requirements & Solutions

WAN requirements:

General LAN requirements:

Non- Functional Requirements

Functional Requirements


Technical Information Existing & Recommendations

Key Factors

Resources & Materials

Information Sources


Routing Protocols

Open Shortest Path First (OSPF)

Enhanced interior gateway routing protocol (EIGRP)








Switches Port Density

IP Scheme







Physical Security

WPA2 for Wireless

Authentication and Encryption


USER Groups



Layer 3 Addressing Scheme

Switching VLANS

Test Plan

Provisional Design Topologies

Glasgow Floor Plan

Cardiff Floor plan

Birmingham Office Floor plan



This project will analyse, investigate, develop and test a new internetwork for Lanburgh between their Glasgow, Birmingham and Cardiff offices. The project will make recommendations on a new IT infrastructure that will make the current structure more efficient, reliable, secure and scalable for the future. Regular meetings will be held with the Managing Director to ensure the aims of the project keep in line with the objectives of the company.

Internetworking Design Basics

This report will outline the process of the planning, development and testing of the proposed internetwork design between Lanburgh’s Glasgow, Cardiff and Birmingham’s Offices with proposed upgrades.

Designing an internetwork can be a challenging task. An internetwork that consists of only 50 meshed routing nodes can pose complex problems that lead to unpredictable results. Attempting to optimize internetworks that feature thousands of nodes can pose even more complex problems.

This report provides an overview of planning and design guidelines. The report will be divided into three main areas

Determining Requirements 

Identifying and Selecting Capabilities 

Choosing Reliability, Efficiency, Scalability & Security Options

WAN requirements:

Appropriate routing equipment at each company site to interconnect branches: Cisco 1290 routers

Application of purchased IP address block: layer 3 subnetted addressing scheme

Use of static / dynamic routing

Appropriate redundancy: HSRP

Method of secure data transfer between Cardiff and Glasgow: VPN tunnelling

Dedicated 1 GB Cardiff/Birmingham connection: static route

General LAN requirements:

Logically layered converged switched network with appropriate management and redundancy facilities: HSRP (Hot Standby Router Protocol)

Suitable, efficient RFC 1918 IPv4 address scheme to support users with appropriate growth accommodated: IPv4 subnetted address scheme

Efficient allocation of IP configuration: IPv4 address scheme

Capability for network devices to be securely managed and configuration to be backed up: Cisco server

Ensure end device security: anti-virus software & upgrade to Windows 10

Physical security: locked cupboards; off-site backup to cloud

Glasgow LAN requirements:

On-site hosting of the company email and web servers: Cisco server

Address translation mechanism for internal hosts accessing services outside of the network: Cisco switches

Cardiff LAN requirements:

Capability for employees to connect wirelessly to the company LAN as required: Cisco wireless routers

Appropriate fault tolerance on network devices: HSRP

Birmingham LAN requirements:

Appropriate security to filter traffic allowing only students access to the email server on the Glasgow campus: extended ACL

Appropriate security to filter traffic allowing only teaching staff access to the web server (intranet) on the Glasgow campus: extended ACL

Implement IPv6 on 2 sample clients in an isolated test LAN ensuring layer 3 connectivity with the IPv6 network egress point: IPv6 address scheme

Routers, switches and other internetworking devices must reflect the goals of the organizations in which they operate. For this purpose, all devices will come from Cisco. Cisco has a proven track record of reliability and efficiency and offers a lot of support and training for their devices.

Two goals drive networking design and implementation:

Application availability: applications must be easily and readily available to the end users for a network to perform reliably and efficiently.

Costs: budgets play a big part in designing a good network.

Non-functional requirements describe how the system works, while functional requirements describe what the system should do.

Functional Requirements

Business Rules

Transaction corrections, adjustments and cancellations

Administrative functions

Authorization levels

Certification Requirements

Legal or Regulatory Requirements

Non-Functional Requirements

Performance – for example Response Time, Throughput, Utilization, Static Volumetric









Data Integrity



These constraints include money, labour, technology, space, and time. Economic constraints play a major role in any network design.


Figure 1 General Network Design Process

Below is a network design process that investigates and analyses requirements, produces a plan, and then tests the plan until all requirements are met.

Assessing User Requirements

Users primarily want their applications to be available with a quick response time and to be reliable. Response time is the time between a user asking the device to perform a function and the function being completed.

Lanburgh’s user requirements will be assessed in several ways.

User community profiles outlining what different user groups require: this is the first step in determining internetwork requirements. Faculty staff will require more restricted access than students, and finance will require more detailed information. Proper steps will be taken to ensure the confidentiality of each of these needs in a number of ways.

Assessing Costs

A list of costs associated with internetworks include

Router hardware and software costs: these can be expensive to buy or upgrade but are among the most important parts of the network system.

Performance trade-off costs: selecting what equipment you really need and can afford.

Installation costs: this can be one of the largest and most expensive jobs; it includes labour charges for installation.

Expansion costs: scalability; if it will save money in the future, it can be recommended to install better equipment now.

Support costs: certain equipment, like servers, can be difficult to manage without the proper expertise or support.

Cost of downtime: how long your company can be out of commission for repairs, installations and upgrades, or if poor equipment fails.

Figure 2 is a provisional list of costs. The latest software has been proposed to increase security and efficiency. Some backup services will be moved to cloud storage for backup purposes; this can be divided into separate cloud storage allocations for staff and students. Hubs will be replaced with switches, as these are far more efficient and secure. Two 24-port switches will be used instead of a 48-port switch, as this will be more reliable in case one goes down. On-site storage and an extra server will also be used as an extra backup. All cabling will be upgraded to 100 Mb to increase speed and scalability for future devices.

Figure 2 Costs of Materials



Item                                                                    Cost £
HPE ProLiant DL380 Gen9 Xeon E5-2620V4 2.1GHz 16GB RAM 2U Rack Server
Server License
Windows 10 Volume License
Switches 2960 24-port
Synology DS418 DiskStation 4-Bay 16TB Network Attached NAS
100 TB Cloud
Office 365 Volume License                                               750 per month
One Drive cloud storage 100TB                                           100 per month
Dedicated Line                                                          30 per month
1 GB Secure Line                                                        30 per month
100 Mb cable








Existing provision per site (Glasgow | Cardiff | Birmingham):

Server software: Windows server, upgraded to Windows 2016 | Windows server, Windows 2016 backup

Client OS software: upgraded to Windows 10 | upgraded to Windows 10 | upgraded to Windows 10

Client application software: MS Office 365, HR software package | MS Office 365, Finance software package | MS Office 365, Marketing software package

Internet connection: asymmetric up to 100 Mbps | asymmetric up to 100 Mbps | asymmetric up to 100 Mbps

Public IPv4 addresses (simulated): Router – assigned by ISP, Server – | Router – assigned by ISP | Router – assigned by ISP

LAN IPv4 ranges: /8 | /8 | /8

IPv6: not currently used | not currently used | to be tested by two users

Switches: 3x 24-port managed switch, 2x | 3x 24-port managed switch | 2x 24-port managed switch

Provided by ISP | provided by ISP | provided by ISP

Printers: 12 mono LaserJet, 2 colour inkjet | 10 mono LaserJet, 1 colour LaserJet | 1 mono LaserJet, 1 colour inkjet

Host security: native security that comes with the end station OS (applies to client and server OS) | native security that comes with the end station OS | native security that comes with the end station OS

Network security: native security that comes with the ISP router firewall | native security that comes with the router firewall | native security that comes with the router firewall

Backup: 5TB off-site NAS device, 100TB cloud storage | 5TB off-site NAS device | 5TB off-site NAS device

A key factor in this project is the £150,000 budget that has become available for Lanburgh to upgrade its IT system.

1. Understand your network goals

 2. Create a budget and acquire components.

3. Training, security, and scalability.

4. IT maintenance.

Items required for this project are as follows and will be assessed in the Analysis section.

Software programs: Microsoft Word, Visio, Packet Tracer, Microsoft Project & licences



Computer with Internet connection

Network Engineers

Dedicated Leased Line from ISP


Managing Director


Faculty Staff

Web Searches



Networking Books

Project Brief

Routing Protocols



Variable-Length Subnet Masking (VLSM)

This is subnetting with subnets of different sizes; it allows more subnets to be created without wasting large numbers of addresses.
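As a hedged illustration of how VLSM carves one block into right-sized subnets, here is a Python stdlib sketch (the block and host counts are example values, not Lanburgh's purchased range, and error handling for an exhausted block is omitted):

```python
import ipaddress

def vlsm(block, host_counts):
    """Allocate one subnet per host count, largest first (classic VLSM order)."""
    free = [ipaddress.ip_network(block)]
    allocated = []
    for hosts in sorted(host_counts, reverse=True):
        # Smallest prefix whose subnet holds the hosts plus network/broadcast.
        prefix = 32 - (hosts + 1).bit_length()
        net = free.pop(0)
        while net.prefixlen < prefix:
            net, spare = net.subnets()      # halve; keep the spare half for later
            free.insert(0, spare)
        allocated.append(net)
    return allocated

# 50-, 20- and 2-host subnets carved from one /24 come out as
# 192.168.1.0/26, 192.168.1.64/27 and 192.168.1.96/30.
```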


Route summarisation will be used to reduce the size of the routing tables. It achieves this by consolidating multiple routes into a single route.
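The stdlib can demonstrate the consolidation directly. In this hedged sketch, four contiguous example /24s (not the company's actual routes) collapse into a single /22:

```python
import ipaddress

# Four adjacent routing-table entries...
routes = [ipaddress.ip_network(f'10.1.{i}.0/24') for i in range(4)]

# ...become a single summary route covering the same address space.
summary = list(ipaddress.collapse_addresses(routes))
```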



Network Address Translation (NAT): a static NAT will be used for students in Birmingham to access the email server in Glasgow.

NAT lets the router translate private IP addresses into public IP addresses.

There will be a static route for students to the email server



There will be a port address translation on the public-facing router to HTTP port 80 and HTTPS port 443 for the staff to access the internet.


Virtual Trunking Protocol (VTP): this allows you to set one switch as the VTP server and configure the others as clients, saving the time of configuring each switch individually.


Virtual local area network (VLAN)

This allows groups of hosts to be created inside a local area network as if they were on separate networks.


The Spanning Tree Protocol (STP)

This protocol prevents data from circulating in loops, which can slow the network down or bring it to a standstill.



Dynamic Host Configuration Protocol (DHCP) is a client/server protocol.

It automatically assigns IP addresses to clients. Separate DHCP servers will be used for students and staff.
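A toy lease pool (illustrative only; this is not the Cisco DHCP service, and the subnets are example values) shows why separate staff and student servers keep the two groups in disjoint address ranges:

```python
import ipaddress

class LeasePool:
    """Minimal DHCP-style allocator: one pool per VLAN subnet."""
    def __init__(self, network):
        self._free = list(ipaddress.ip_network(network).hosts())
        self._leases = {}

    def request(self, mac):
        # Re-issue the existing lease for a known MAC, else hand out the next IP.
        if mac not in self._leases:
            self._leases[mac] = self._free.pop(0)
        return self._leases[mac]

staff = LeasePool('10.0.10.0/24')     # example staff VLAN subnet
students = LeasePool('10.0.20.0/24')  # example student VLAN subnet
```

The same client asking twice gets the same address back, mirroring a lease renewal; clients on the other VLAN can never draw from this range.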


Switches Port Density

Port density is the number of ports in a network device or the number of ports in a backbone.


IP Scheme

The IP scheme must scale from IPv4 to IPv6.

Most IP addresses are still IPv4. IPv6 was created to provide more IP addresses, as over time more and more end devices have come to require them. Using IPv6 will increase the scalability of the network.
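The scale difference is easy to check with the stdlib (a simple sketch, nothing site-specific):

```python
import ipaddress

ipv4_total = ipaddress.ip_network('0.0.0.0/0').num_addresses   # 2**32 addresses
ipv6_total = ipaddress.ip_network('::/0').num_addresses        # 2**128 addresses
# IPv6 offers 2**96 times more address space than the whole of IPv4.
```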



Hot Standby Router Protocol (HSRP) is a Cisco redundancy protocol for establishing a fault-tolerant default gateway. If the main router goes down, the standby router will step in.


A Redundant Array of Independent Disks (RAID) is an arrangement of multiple disk drives set up to act as a single disk drive. There will be 5 TB of off-site storage.



An uninterruptible power supply (UPS), which can come in the form of a battery, will protect against power failures and power surges.


A firewall protects the network from harm by creating a barrier between trusted internal networks and untrusted external networks.

Physical Security

This is protection of personnel, software and hardware and the physical harm that can be caused from fire, flooding, theft and vandalism. The servers and storage devices will be kept in a locked room.


WPA2 for Wireless

Wi-Fi Protected Access 2 (WPA2) is considered the most secure encryption for wireless.


Authentication and Encryption

Encryption turns readable data into data that looks illegible, using secrets that can transform it back into meaningful data at the other end. Authentication allows only people with permission to access the network. This will be used on all routers and switches. SSH will be used for remote management rather than Telnet, as it uses encryption rather than clear text.

Clients will be required to sign in with a username and password.



Extended Access Control List (ACL)

These are filters that allow a network administrator to control the flow of routing updates and filter traffic for extra security. One will be created for students to access the Glasgow email server and one will be created for staff to access the internet.


USER Groups

These are security groups that permit or deny user access to certain data. The Windows Server operating system allows the administrator to control user groups centrally, which makes the process much more efficient. User groups will be created for staff, students and management.



Intrusion Detection (IDS) and Prevention (IPS) Systems: these monitor the network and act, if necessary, on any unauthorised entry. They will not be used at present as they would severely impact system performance.

Research will be carried out using:

Online and offline sources for up-to-date networking materials and costs

Vendor product manuals

Looking at project brief

Contacting Cisco

Questionnaires from faculty staff

Interviews from students

Reviewing Existing documentation

Project brief

Gantt Chart

Visio Diagrams

Packet Tracer Topology


Face to Face interviews

Telephone calls


Staff questionnaires



Number of users and projected 5-year growth % (per site):

Teaching Staff

IT Dept.

Teaching Staff

Teaching Staff

2 Sample users

Layer 3 Addressing Scheme:

Public Facing IP

IP Range

Glasgow IP Range

Cardiff IP Range

Birmingham IP Range

Private Addressing Scheme

IP Range

Net Mask

Switching VLANS

VLAN 10 Staff

VLAN 20 Students

VLAN 22 Unused Ports

VLAN 30 Management

VLAN 99 Native

Test Plan

Test Name

Test Type

Date of Test


Expected Result

Actual result

Outcome and action Required




Use the show ip route command on the Birmingham router

The router will show as running RIP version 2; directly connected routes will be shown as well as RIP-learned routes

Router shows connected through RIP

No action required

Dedicated 1GB Line

Static and Default routes



Use the show ip route command on the Cardiff router

S* will show for the static route between Cardiff and Birmingham

S* shows / no action required




Use show ip route on the Glasgow main router

Classless addresses will show

Classless IP addresses show / no action required

NAT Static Route



Use the test student client in Birmingham's web browser to reach Cardiff's public-facing address

The page should show up in Cisco Packet Tracer

Use the show ip nat translations and show ip nat statistics commands

Action needed / apply to the Glasgow branch email server

Port Address Translation



Use the staff test client to enter Birmingham's public IP address

The page should show in Cisco Packet Tracer

Use the show ip nat translations and show ip nat statistics commands

Action needed / apply to the Glasgow branch web server




Use the show vtp status command on the Glasgow root switch






Use the show vlan command on the Glasgow root switch

Hosts can ping all hosts on their VLAN

Inter-VLAN Networking



Ping any staff client from any student client

Staff Test 1 from Student Test 1

Pings successful / no action required




Use the show spanning-tree command

The show spanning-tree output will confirm the new root bridge is configured

New root bridge is selected / no action required




Student Test 1 and Staff Test 1 clients request IP addresses

Each requests a new IP address from its respective DHCP server

Requests successful / no action required

Switch Port Sticky



Only the first MAC address on port 24 will be allowed

Plug another client into port 24 on the Glasgow root bridge; the port will be blocked

Port successfully blocked / no action required

IP Scheme IPv4



Use the ping command from Staff Test 1 in Glasgow to Student Test 2 in Cardiff and Student Test 3 in Birmingham

Clients will reply

Pings successful / no action required

IP Scheme IPv6



Use the ping command from PC2 to PC1 in the IPv6 test area

PC1 will reply to the ping

Ping successful / no action required




Use Shutdown command on main router so standby router becomes active

Ping ISP Router which




Shut down power to the server in Glasgow

Batteries will continue the power supply until main power is restored










Connect a rogue device.

SSID is changed from the default.

WPA2-PSK authentication required with password

Device won't connect

Rogue devices won't connect / no action required




show crypto ipsec sa

switchport port-security



show interfaces switchport

switchport port-security aging time 120



show interfaces switchport

no cdp enable



show interfaces switchport

spanning-tree portfast



show interfaces switchport

spanning-tree bpduguard enable



show interfaces switchport

storm-control broadcast level 75.5



show interfaces switchport

switchport mode trunk

switchport nonegotiate



show interfaces switchport

No action needed

EtherChannel LACP – Link Aggregation Control Protocol

Issue the show run command

Interface Port-channel should show

Interface Port-channel 1 shows / no action required

Provisional Design Topologies

Refer to Visio file


3.1 Outline of the assignment

You should produce an outline of the assignment and to what extent the solution met the original requirements of the assignment brief as noted below. (4 marks)

You should give a statement regarding the extent to which each of these objectives has been achieved. If an objective has not been achieved, or has only been partially achieved, you should give an explanation.

In this assignment we were asked to upgrade the company's existing network infrastructure. Almost all the requirements have been met, except the following:

HSRP is partially working; there is a backup router, but it was misconfigured after the network was streamlined. Time constraints have prevented it from being reconfigured.

The VPN is also only partially configured: a serial cable was removed for efficiency and the wrong IP route was put on the Cardiff and Glasgow routers. Time constraints have prevented it from being reconfigured.

Extended ACLs have still to be implemented and tested; time constraints have meant these have not been implemented yet.

There were problems implementing NAT in Glasgow. My plan was to put a static NAT to the student email server and a PAT to ports 80 and 443 to the Glasgow web server. I had put a server on DHCP for quickness and configured PAT on it; when I started the network back up it was misconfigured. This was rectified with a static IP, but it became misconfigured again when I put a static NAT on the same serial cable port, so I put a static NAT in Cardiff and a PAT in Birmingham for testing.

Vlans are all configured correctly with sub interfaces for inter vlan networking. This caused a few configuration problems as the number of gateways increased.

I would have used a Zone based firewall as it is easier to configure this than a ASA firewall, it is also a lot more cost effective as you don’t need to buy extra equipment.

IPV^ has been properly configures and through time I would role this out thought the full network.

There is redundancy on every site with no single point of failure; there are multiple switches and multiple routers, all giving at least two possible routes.

3.2 Strengths and weaknesses

You should give an assessment of the strengths and weaknesses of the outputs of the practical assignment. (4 marks)

The network I have planned and built is strong on security and efficiency. I have used RIPv2 as it is one of the easiest protocols to implement and maintain. Over time I would recommend moving to some of the other protocols as the network grows and the management staff get to know it.

Inter-VLAN routing works on every site. I have good switch security, with a good-practice demonstration on one switch and one router for testing; this would be applied to every switch and router after the testing stage.

There is a DHCP server properly configured on each site, one for the staff and one for the students; each DHCP server is on the same VLAN as the clients it serves. Staff and students also have their own printers on their own VLANs for added privacy and security.

Each site's offline storage is at another site in the network: Glasgow's is in Cardiff, Cardiff's is in Glasgow, and Birmingham's is in Glasgow.

All gigabit ports are used as trunk ports, as they have lower path costs and higher speeds.


3.3 Recommendations

You should make recommendations for any future development of the solution and give your reasons for these recommendations. (4 marks)

In the future I would use other routing protocols, which can be more efficient though harder to configure. I would make every trunk route an EtherChannel linking the two gigabit ports, which doubles the bandwidth. As the company grows, I would use more EtherChannels to create more bandwidth.

I would have a booklet printed for all the users, advising on safe practice and strong passwords, as a lot of damage to networks can come from the inside as well as the outside, such as a person bringing in their own devices which contain viruses.

I would also create a VPN between Glasgow and Birmingham for added security.

I would take the printers off DHCP and put them on static IPs, rather than having them request an address every time they are switched on. They were configured for DHCP to save time.

There is a TFTP server in Glasgow which backs up the running configs; this will be rolled out to every site in future.


3.4 Modifications

You should give a summary of any modifications to the project plan, solution design and/or implementation that were made during the project, including reference to any unforeseen events and how they were handled. (4 marks)

OSPF and EIGRP were initially going to be used, but RIPv2 was quicker and easier to configure.

More switch security has been implemented as it is easy to configure, i.e. switchport security and broadcast storm control. Switches come with preconfigured settings which can be easily manipulated by attackers and leave the system vulnerable.

One unforeseen event was configuring without a single default gateway when using sub-interfaces. After research I understood that a sub-interface can be configured just like a physical interface.

I had more routers than I needed, so after removing one my HSRP and VPN configurations became misconfigured; these still need to be troubleshot.

3.5 Knowledge and skills

You should identify any knowledge and skills which have been gained or developed while carrying out the project assignment and how the actions/ process of carrying out the project could have been improved. (4 marks)

I have gained skills in using the command line interface and its commands; using some of them often has made configuring routers and switches a lot easier.

I now have a better understanding of what the commands are and why they are used.

I have found a lot more ways to configure switches for extra security, starting with some basic configurations such as closing all ports until they are used, removing them from VLAN 1, and enabling broadcast storm control.

I have learned a lot more about configuring routers and setting them up from scratch with passwords, no ip domain-lookup and password encryption, and I understand a lot more clearly why they work.

I have learned a lot more about IPv6 and how its addresses are made up, like a postcode (the network prefix) combined with the machine's MAC address (the interface identifier).
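To make that address structure concrete, here is a minimal sketch (not part of the coursework; the MAC address is an arbitrary example) of how a modified EUI-64 interface identifier is derived from a 48-bit MAC: flip the universal/local bit of the first octet and insert FF:FE in the middle.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
    groups = ["%02x%02x" % (full[i], full[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:1A:2B:3C:4D:5E"))  # 021a:2bff:fe3c:4d5e
```

Combined with a 64-bit network prefix (the "postcode"), this interface identifier completes a full 128-bit IPv6 address.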

I've learned a lot more about encryption, with things like SSH, which encrypts traffic for remote access where Telnet sends clear text, making it a lot more secure.

I've learned how to configure servers for DHCP, web, email, TFTP and FTP, and how to back up their settings.

I have learned a lot about standard and extended ACLs and how to configure and verify them, allowing filtering of different networks or clients.

I have learned a lot more about what ports are used for and have most of the well-known ones memorised whereas before they were just jumbled numbers.

I could have improved my network by doing a lot more research first; some commands I learned would have saved a lot of time. I could also have had HSRP and the VPN better configured instead of deleting a router and leaving older configurations.

Identifying the root bridge earlier would have allowed me to reduce the costs of the network by running the gigabit ports straight to the router, with redundancy on Fast Ethernet.




Mock Circulation Loop for Biomedical Device Testing

As biomedical devices become ever more high-tech, research into them is spreading wider across various engineering disciplines. To allow the advancement of technologies in this area to continue, it is necessary that companies and research organisations have access to their own low cost, pulsatile Mock Circulation Loop (MCL) for developmental testing. A Mock Circulation Loop simulates the pressure-flow response of the human circulatory system for different physiological states. This study proposes a low cost MCL, designed from “off the shelf” commodity components.


A Mock Circulation Loop simulates the pressure-flow response of the human circulatory system, while also being able to replicate this system for different physiological states. Previous authors have suggested that an effective MCL should have at least three benchmark states: a healthy person in sleep, at rest and in mild physical activity. MCLs are used to investigate the effectiveness of biomedical devices across a wide range of applications, but are predominantly used to test artificial heart valves, vascular prostheses and stents. They have also been found to be a very useful educational tool, as they have the advantage over various other educational media of providing the student with a visual platform. Being able to observe the phenomena in operation makes it easier to interpret many different physiological aspects [1].
MCLs that closely mimic human parameters are an extremely important tool nowadays in bringing devices to the biomedical market from a cost, but also an ethical perspective. Unnecessary in vivo animal trials can be alleviated by carrying out comprehensive testing of devices in an in vitro environment under various physiological conditions. This means only the best and most promising designs will go forward for animal trials, thus reducing needless in vivo trial and error design which can be extremely costly and unethical [2].
Recent design and research into MCLs has come predominantly from those interested in producing accurate in vitro testing systems for aiding the development of ventricular assist devices [3-6]. However, there is little or no reason to believe that the systems designed for these purposes could not be equally applied, to beneficial effect, in the development of new, innovative devices such as biosensors or BioMEMS (biomedical micro-electro-mechanical devices). The origins of these components are usually not in the well-established areas of biomedical research and design, but in historically new, up-and-coming areas. Without the know-how or experience in place, it is inevitable that the demand for easily attainable, low cost “out of body” testing equipment will grow as this new area of biomedical device design continues to expand. It is hoped that this thesis can go some way towards bridging this knowledge gap.
This thesis presents a research project for the design and construction of a mock circulation loop in a low cost manner, employing the best aspects of currently available MCLs and, where possible, using off the shelf components. The mock loop will consist of two major elements: (1) a passively filling, pulsatile artificial left ventricle; and (2) air/water vessels to simulate the venous and arterial compliances. These elements will be coupled together using appropriate tubing.
The performance level of the completed MCL will be made accessible to engineers in the form of: (1) End systolic pressure-volume relationship plots, (2) Ventricular pressure-time relationship plots, (3) Systemic & Pulmonic Pressure Distribution versus time plots. These can then be compared against equivalent data available for the human physiology and also other current MCLs. The key contributions expected from this work are: (1) that W.I.T. will have a fully functioning MCL test rig for “in-house” development of microelectronic biomedical device components, (2) the MCL will present further opportunities within W.I.T. for projects in the area of cardiovascular research (3) the complete design and construction data will be available to any institution or company wishing to build their own low cost MCL testing rig.
Existing MCL Designs
Mock circulation loops for the in vitro replication of the human cardiovascular system have been developed since the 1970s [7-8]. The more modern versions vary quite a lot in how they attempt to replicate these human parameters, from the pumping systems they use, to how they achieve compliance. Some of the methods used for imitating the contraction of the heart are shown in Figures 1-3. These are; 1) diaphragm pumps controlled by pneumatic compression and vacuum [2, 4], 2) motor controlled piston pump [1], and 3) pneumatic supply directly into artificial ventricle [9-10]. In terms of producing a low cost model then it would appear the third method would be the most beneficial, while also providing an accurate model based on its demonstration to closely replicate the key elements of any MCL.
The disadvantage of using such a system is the reduced ability to closely replicate the time-dependent flow waveform supplied by the ventricle, in both physiological and pathological conditions. It has been reported that hydraulic volumetric pumping systems inherently provide a finer control of the flow waveform [2]. However, improved control of ventricle contractility, in order to improve waveform replication, can be achieved through the use of an electro-pneumatic regulator [10]. The downside to this solution is that these regulators are very costly.
Cardiovascular Physiology
The cardiac cycle, the systemic compliance, and the systemic vascular resistance are three very important parameters to understand and replicate, if an accurate mock circulation loop is to be designed and built. These three areas are described herein.
Cardiac Cycle
The atria and ventricles contract in sequence, resulting in a cycle of pressure and volume changes. The cardiac cycle has four phases, with a combined time of 0.8 to 0.9s at 70 beats per minute. The four phases and their durations are outlined briefly below.
Ventricular Filling, Duration 0.5s
Ventricular diastole lasts for nearly two-thirds of the cycle at rest, providing adequate time for refilling the chamber. There is an initial phase of rapid filling, lasting about 0.15s, as shown by the cardio-meter volume trace in Figure 4. As the ventricle reaches its natural volume, the rate of filling slows down and further filling requires distension of the ventricle by the pressure of the venous blood; ventricular pressure now begins to rise. In the final third of the filling phase, the atria contract and force some additional blood into the ventricle. The volume of blood in the ventricle at the end of the filling phase is called the end-diastolic volume (EDV) and is typically around 120ml in an adult human. The corresponding end-diastolic pressure (EDP) is a few mmHg. EDP is a little higher on the left side of the heart than on the right, because the left ventricle wall is thicker and therefore needs a higher pressure to distend it.
Isovolumetric Contraction, Duration 0.05s
As atrial systole begins to wane, ventricular systole commences. It lasts 0.35s and is divided into a brief isovolumetric phase and a longer ejection phase. As soon as ventricular pressure rises fractionally above atrial pressure, the atrioventricular valves are forced shut by the reversed pressure gradient. The ventricle is now a closed chamber and the growing wall tension causes a steep rise in the pressure of the trapped blood; indeed the maximum rate of rise of pressure (dP/dt)max, is frequently used as an index of cardiac contractility.
Ejection, Duration 0.3s
When ventricular pressure exceeds arterial pressure, the outflow valves are forced open and ejection begins. Three quarters of the stroke volume is ejected in the first half of the ejection phase, and at first blood is ejected faster than it can escape out of the arterial tree. As a result, much of it has to be accommodated by distension of the large elastic arteries, and this drives arterial pressure up to its maximum or ‘systolic’ level. As systole weakens and the rate of ejection slows down, the rate at which blood flows away through the arterial system begins to exceed the ejection rate, so pressure begins to fall. As the ventricle begins to relax, ventricular pressure falls below arterial pressure by 2-3mmHg (see stippled zone in Figure 4) but the outward momentum of the blood prevents immediate valve closure. The reversed pressure gradient, however, progressively decelerates the outflow, as shown in the bottom trace of Figure 4, until finally a brief backflow closes the outflow valve. Backflow is normally less than 5% of stroke volume, but is greatly increased if the aortic valve is leaky. It must be emphasised that the ventricle does not empty completely but only by about two-thirds. The average ejection fraction in man is 0.67, corresponding to a stroke volume of 70-80ml in adults. The residual end-systolic volume of about 50ml acts as a reserve which can be utilised to increase stroke volume in exercise.
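The ejection fraction arithmetic above can be checked with a short sketch, using the stroke volume and end-diastolic volume figures quoted in the passage:

```python
def ejection_fraction(stroke_volume_ml: float, edv_ml: float) -> float:
    """EF = stroke volume / end-diastolic volume."""
    return stroke_volume_ml / edv_ml

# Values from the text: stroke volume ~80 ml, EDV ~120 ml.
print(round(ejection_fraction(80, 120), 2))  # 0.67
```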
Isovolumetric Relaxation, Duration 0.08s
With closure of the aortic and pulmonary valves, each ventricle once again becomes a closed chamber. Ventricle pressure falls very rapidly owing to mechanical recoil of collagen fibres within the myocardium, which were tensed and deformed by the contracting myocytes. When ventricular pressure has fallen just below atrial pressure, the atrioventricular valves open. Blood then floods into the atria, which have been refilling during ventricular systole [11].
Compliance
Compliance is related to the ability of a vessel to distend when encountering a change in blood volume [12]. It is defined as the change in volume for a given change in pressure and can be described by Equation 1 as follows:

C = ΔV / ΔP (Eq. 1)

The distension of the elastic arteries raises the blood pressure, and the amount by which pressure rises depends partly on the distensibility of the arterial system [11]. The compliance of the veins is approximately 24 times greater than that of the arteries, which gives the veins the ability to hold large amounts of blood in comparison to arteries. Ventricular compliance influences the ventricle’s pressure-volume curve: if the compliance of the ventricle is decreased, the end diastolic pressure increases for any given end diastolic volume [10]. The EDPVR provides a boundary on which the PV loop falls at the end of the cardiac cycle [13].
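As a sketch of Equation 1, the numbers below are purely illustrative (a hypothetical 50 ml volume change and an assumed arterial pressure rise); only the ~24x venous-to-arterial compliance ratio comes from the text:

```python
def compliance(delta_volume_ml: float, delta_pressure_mmhg: float) -> float:
    """Compliance C = dV / dP, in ml per mmHg (Eq. 1)."""
    return delta_volume_ml / delta_pressure_mmhg

# Illustrative: a 50 ml volume change raising arterial pressure by 40 mmHg.
arterial = compliance(50, 40)   # 1.25 ml/mmHg
venous = 24 * arterial          # veins are ~24x more compliant than arteries
print(arterial, venous)
```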
Systemic Vascular Resistance
The systemic vascular resistance (SVR) or total peripheral resistance (TPR) is the ratio between the mean pressure drop across the arterial system [which is equal to the mean aortic pressure (MAP) minus the central venous pressure (CVP)] and mean flow into the arterial system [which is equal to the cardiac output (CO)]. Unlike aortic pressure by itself, this measure is independent of the functioning of the ventricle. Therefore, it is an index which describes arterial properties. According to its mathematical definition, it can only be used to relate mean flows and pressures through the arterial system.
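The SVR definition can be sketched as follows; the MAP, CVP and CO values are illustrative textbook-style numbers, not measurements from this study:

```python
def svr(map_mmhg: float, cvp_mmhg: float, co_l_per_min: float) -> float:
    """Systemic vascular resistance = (MAP - CVP) / CO, in mmHg*min/L."""
    return (map_mmhg - cvp_mmhg) / co_l_per_min

# Illustrative resting values: MAP 93 mmHg, CVP 3 mmHg, CO 5 L/min.
print(svr(93, 3, 5))  # 18.0
```

Because only mean pressures and mean flow appear in the ratio, the result characterises the arterial system independently of ventricular function, as the text notes.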
Design of MCL System
The most recent circulation loop designs are complete mock circulation systems, i.e. they include both the systemic and pulmonary circuits in their design. This is not desirable in a low cost mock circulation model, as it doubles the cost without necessarily adding equivalent value; in terms of testing microelectronic biomedical devices, a simple pulsatile single loop is sufficient. The mock circulation loop in this study will be a single loop, replicating the parameters of the systemic system.
Mock Circulation
The rig is based predominantly on one half of the Timms [9] mock circulation rig and consists of three main systems (Figure 5). These are 1) compressed air supply, 2) mock circulation loop, and 3) data acquisition system.
Air Supply
The air supply system is responsible for heart contraction, contractility, heart rate, and systolic time. The system is made up of the following components in series; air compressor (24litre Hobby), precision pressure regulator (SMC NIR201), electro pneumatic regulator (SMC ITV2030-31F2BL3-Q), and 3/2 solenoid valve (SMC EVT307-5DZ-02F-Q).
Air is supplied via the air compressor at 7 bar pressure to the precision regulator, where the pressure is reduced to a desired lower level, sufficient to pump the required amount of fluid contained in the mock ventricle tube. Contraction of the mock ventricle is initiated when the solenoid is in the open position and air is able to flow through it into the top of the ventricle chamber. The contractility can be varied by the electro-pneumatic regulator, which increases or decreases the amount of air supplied. When the solenoid changes to the closed position, air is exhausted out of the ventricle chamber, mimicking diastole. The rate at which this happens can be varied by changing the exit port size. The periods of systole and diastole for a given cycle can be varied by changing the time the solenoid spends in the open and closed positions.
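Since systole corresponds to the solenoid's open period and diastole to its closed period, the timing for a given heart rate and systolic ratio can be sketched as below (a simplified illustration, not the actual controller used on the rig):

```python
def solenoid_timing(heart_rate_bpm: float, systolic_ratio: float):
    """Return (open_s, closed_s): solenoid open = systole, closed = diastole."""
    period = 60.0 / heart_rate_bpm          # one cardiac cycle, in seconds
    open_s = systolic_ratio * period        # systolic (pressurised) time
    closed_s = (1.0 - systolic_ratio) * period  # diastolic (exhaust) time
    return open_s, closed_s

# e.g. 70 bpm with a 40% systolic ratio
open_s, closed_s = solenoid_timing(70, 0.4)
print(round(open_s, 3), round(closed_s, 3))
```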
Mock Circulation Loop
The hydraulic circulation loop itself consists of atrium, ventricle, and systemic and coronary vasculature components (Figure 6). The atrium is open to atmosphere and is passively filling. It is constructed from 40mm diameter pvc pipe. The ventricle is downstream of this and is constructed similarly, except that it is capped. The capped ventricle is tapped with the pneumatic line from the air supply system. For the heart valves, a 40mm brass check valve (Cimberio C80-40) is used as the mitral valve and a 32mm brass check valve (Cimberio C80-32) as the aortic valve. The cross-sectional opening areas of these valves are similar to that of the human heart valves. The swing gate flow resistance on these brass valves is satisfactorily low, yet also prevents fluid backflow during cardiac pumping.
Vasculature parameters of compliance and resistance were replicated through the use of windkessel chambers and a pinch valve. Compliance is varied by altering the vertical position of the test plug. In doing so, the amount of air contained above the fluid is altered, and in turn the compliance level is varied.
Resistance is increased by tightening of the pinch valve, which increases pipe occlusion. Inherent resistance values are calculated by taking the required pressure drop across a component and dividing it by the maximum flow rate through the component (Eq. 1). Max flow rate is calculated by first using Bernoulli’s equation (Eq. 2) to determine maximum fluid velocity (v2), assuming initial and final water heights are equal (z1=z2) and initial velocity (v1) is zero. Multiplying final velocity (v2) by each component’s maximum pipe cross sectional area (A) reveals maximum flow rate (Eq. 3) [12].
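The resistance calculation just described can be sketched as follows (Bernoulli with z1 = z2 and v1 = 0, then Q = A*v2 and R = dP/Q); the pipe diameter and pressure drop used here are illustrative, not the rig's measured values:

```python
import math

RHO = 1000.0  # density of water, kg/m^3

def max_flow_rate(delta_p_pa: float, area_m2: float) -> float:
    """Q_max = A * v2, where v2 = sqrt(2*dP/rho) from Bernoulli (z1=z2, v1=0)."""
    v2 = math.sqrt(2.0 * delta_p_pa / RHO)
    return area_m2 * v2

def inherent_resistance(delta_p_pa: float, area_m2: float) -> float:
    """Inherent resistance R = dP / Q_max, in Pa*s/m^3."""
    return delta_p_pa / max_flow_rate(delta_p_pa, area_m2)

# Illustrative component: 40 mm bore, 10 mmHg pressure drop.
area = math.pi * (0.040 / 2.0) ** 2
dp = 10 * 133.322  # mmHg -> Pa
print(inherent_resistance(dp, area))
```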
Data Acquisition System
Flow rate in the systemic path will be measured by an electromagnetic flow meter (Omega FMG 3002-PP), as represented in Figure 6. Pressures at three locations in the circulatory loop, as identified by the number 1 in Figure 6, will be measured by pressure sensors (WIKA Pressure Transmitter A-10). All measured signals are taken to an I/O connector block (NI SCC-68, National Instruments Inc.), sent to a data acquisition board (NI-6040e, National Instruments Inc.) and processed in LabVIEW on a desktop PC.
Test Protocol
There will initially be two different activity levels simulated with the mock circulation system. These will be a healthy person at rest, and in exercise (equivalent to ascending stairs). It is envisaged that further conditions of activity and pathology will be simulated, should the primary two activity levels run successfully. The parameters for the aforementioned states are set out by Liu [14] in Table 1.
Experimental Procedure
National Instruments’ LabVIEW will be used to control heart rate and ventricle pressure, while also recording, in real time, the feedback pressures and flow rates from the system.
Compliance values as described earlier, are achieved by filling the compliance chambers with a set amount of water and adjusting the test plugs to the required height. Once in the correct position, the bleed valve can then be closed, thereby trapping a volume of air above the water level. The bleed valve and pressure sensor are fitted into two small holes drilled into the top of the plug. With the compliance chambers now set, a small volume of water is added to the ‘open to atmosphere’ atrial chamber. By adding water, the compliance pressures can be finely tuned to suit the desired physiological state. The air compressor is then charged until its 24litre reservoir has been filled. The output regulator for the compressor is tuned, followed by the ventricle input precision regulator. Heart beat, which is controlled by the 3/2 solenoid, is instigated in accordance with Table 1 (40% Systolic) using the LabVIEW controller. A manual tuning clamp is used to adjust the vascular resistance.
Results Expected
The results of simulated tests for the various physiological conditions will be presented in order to compare them with natural cardiovascular hemodynamics. There will be an individual graphical plot for each condition. Each plot should contain at least four continuous cardiac cycles, so that the repeatability of each physiological state can be verified. The graphical plots will have time as the X-axis, and from this, ventricle pressure, atrial pressure and flow rate over time shall be presented. Results obtained for all scenarios will be tabulated into one table, similar to Table 2 [14], for easy comparison.
The test results for simulated healthy-at-rest conditions for two very similar mock circulation loops [9, 14] are displayed in Figures 7 and 8. For a normal, healthy individual, the heart rate and systolic ratio set by these authors are 60bpm and 40% respectively. Values obtained for peak left ventricular pressure are both in the region of 120mmHg, and ventricle end diastolic pressure is in the 5-10mmHg region. Cardiac cycle times are very close, at 0.8-0.9 seconds.
Intense pressure fluctuations in the atrial and ventricle chambers have been observed in the results of both authors. This has been attributed to the rigid nature of the valves, causing a water-hammer effect during brisk closing.
No results are provided by either author in relation to whether their systems display the ‘Frank Starling Effect.’ However, Timms [9] states that this effect has been visually observed through the clear PVC ventricle chamber as changes in fluid level prior to systole.
The results for the above two similar mock circulation loops demonstrate that they can successfully replicate the conditions of the human physiological state of a healthy person at rest. They also demonstrated success for other conditions, the results of which have not been presented here. Of particular interest was the success of the compressed air system used by Timms [9] to produce the contracting ventricle. This method is directly similar to that proposed for the MCL in this study.
Liu [14] has demonstrated that LabVIEW, as will be employed in this project, can be used to promising effect in the control and recording of various elements contained in the mock circulation loop.
Proposed methods for the reduction of the intense pressure fluctuations due to water-hammer include suppressing them by imposing a digital filter on the recorded pressure data, or introducing an accumulator near the valves to physically reduce these transients [9].
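As an illustration of the first option, a plain moving average is one very simple form of digital filter that would blunt such spikes; the trace values below are invented, and the cited authors do not specify a particular filter design:

```python
def moving_average(samples, window=5):
    """Moving-average filter: each output is the mean of the last `window` samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# Invented pressure trace (mmHg) with a water-hammer spike at valve closure.
trace = [80, 81, 80, 140, 80, 79, 80]
print(moving_average(trace, window=3))
```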
It is a desire within this study to record ventricle volume data, which has not been attempted in the MCLs of Liu or Timms [9, 14]. By recording the ventricle volume, it is possible to demonstrate the ‘Frank Starling Effect’ during the operation of the MCL. It is envisioned that this can be achieved through the use of a capacitance coil placed within the ventricle chamber.
The system proposed in this study very closely matches those of Timms and Liu [9, 14], except that it is expected to be produced at a lower cost, due to the commodity components it will use and because it will only simulate the systemic loop, which is deemed sufficient for the development of microelectronic biomedical device components. The successful results obtained by the aforementioned MCLs, and the similarity of the proposed design to them, suggest that an equally positive replication of the human cardiovascular physiology can be achieved.
The author would like to acknowledge the financial support provided by the Department of Mechanical Engineering, Waterford Institute of Technology, together with the invaluable guidance and technical support afforded by Dr. Austin Coffey and Mr. Philip Walsh.
Key Words: Cardiovascular system, pulsatile mock circulation loop, MCL, PV loops, biomedical device testing

Research Development: Automated Web Application Testing

Abstract: A software development process is incomplete without testing, which is why every organization spends considerable time and effort on it. New verticals are emerging with various revenue models for web applications, making testing a crucial part of web development. At the same time, new security threats plaguing web applications on a daily basis make testing a must for the overall development process. The dynamic and interactive nature of web applications makes it hard to apply traditional testing techniques and tools, as they are not sufficient for this purpose. The faster release cycles of web applications also make their testing very challenging. Testing is a prime component in producing quality applications in the domain of software engineering. This research focuses on what automated web application testing is, and on current web application testing techniques.


Introduction: High-Level Overview of Automated Testing: Software testing is a huge domain, but it can be broadly categorized into two distinct areas: manual testing and automated testing. Automated testing is a development process that involves tools to execute predefined tests against software in response to an event. It involves operations that are repetitive in both their nature and outcome: a predefined series of tasks (tests) is performed against a set of preconditions and postconditions, based upon a triggering event. Automated software testing is the application and implementation of software technology throughout the entire software testing lifecycle, with the goal of improving the efficiency and effectiveness of the product.

It is well known that a manual testing approach is not always effective in finding certain classes of defects; low-level interface regression testing, for example, can be challenging as well as time-consuming, but test automation offers the possibility of performing these types of testing effectively. Automated testing allows tests to be conducted in an asynchronous and autonomous manner. Once automated tests have been developed, they can be run quickly and repeatedly, which is a cost-effective method for regression testing of a software product that has a long maintenance life. The complex, evolving and rapidly updated nature of web applications makes their testing challenging as well as critical, because the traditional approach to testing does not address distinctive features such as the ample use of events, rich user interfaces and the incorporation of server-side scripting. The speed and reliability that come with automated testing have made it a mandatory practice in the software development process. To keep the development process agile and lean, automated testing is on the rise, with an incredible jump in test automation across all industries over the past two years. Authors have suggested various approaches to automated web application testing.

Web Automation Testing Framework - The Evolution: Test automation has evolved over time: earlier it was just debugging, and with the arrival of more complex systems the idea of software testing came into being. The reason to develop an automated web application testing framework is to share and access information efficiently and without limits. With an automation framework, testing tools used separately and in combination can solve various challenges in software testing.

The test automation framework described by Angmo & Sharma integrates the web testing tools Selenium and JMeter, and provides various types of testing for web applications. Using this framework efficiently improves the extensibility and reusability of automated tests, and also improves coding productivity as well as product quality. A load testing automation framework for web applications is based on a usage model and workload to simulate users’ behaviour, helping to generate realistic load for load testing.

Web-Testing Approaches: Long-known web testing techniques have diffused into the current web testing scenario. Web applications mostly have human users, which makes quality the prime concern. The challenges in testing increase because of the distributed and heterogeneous nature of web applications, and as web applications support a vast range of important activities, such as information sharing, medical systems and scientific activities, it becomes crucial to test all entities of a web application. The usual way to access the components of a web application is the navigation mechanism, implemented by hyperlinks, which is why it is important to ensure that no unreachable components or broken links are included in the web application. A variety of web application testing approaches have been proposed which satisfy two criteria for web application testing: the page coverage criterion and the hyperlink coverage criterion.
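A toy sketch of checking the hyperlink coverage criterion, using only the standard library; the page content, the set of known pages and the helper names are all hypothetical:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets so hyperlink coverage can be checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def broken_links(page_html, known_pages):
    """Return hyperlinks that do not resolve to a known page."""
    parser = LinkExtractor()
    parser.feed(page_html)
    return [href for href in parser.links if href not in known_pages]

html = '<a href="home.html">Home</a> <a href="missing.html">Gone</a>'
print(broken_links(html, {"home.html", "about.html"}))  # ['missing.html']
```

Running such a check over every page, and confirming every known page is reachable from the home page, covers both the hyperlink and page coverage criteria described above.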

Model-Based Testing of Web Applications: Different testing approaches have led to the development of various testing models. The Web Test Model is object-oriented: entities are objects whose structure and dynamic behaviour are described. One testing approach generates test cases using mutation analysis, focusing mainly on validating the reliability of data interactions among the XML-based components of web applications. Another model, based on web framesets and browser interaction, focuses on modelling web navigation and on generating and executing test cases by formalising the navigation model. Traditional white-box testing is based entirely on the internal structure of the system. To apply white-box testing we follow two approaches. The first considers the level of abstraction of the code, for example a proposed UML model of a web application for high-level abstraction; that model is based entirely on static HTML links, so dynamic aspects of the software cannot be incorporated. The second uses navigation among pages: the navigation model is a graph in which each web page is a node and each link an edge. Black-box testing, on the other hand, does not depend on code structure or implementation. The combinatorial model represents how the web application behaves and generates test cases from that representation; in another approach, test cases are created from collected user interactions with the web application.
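The navigation model described above (pages as nodes, links as edges) makes the unreachable-component check a simple graph traversal. A minimal sketch, with a hypothetical site map as input:

```python
from collections import deque

def unreachable_pages(nav_graph: dict[str, list[str]], start: str) -> set[str]:
    """Breadth-first search over the navigation graph to find pages
    that no sequence of clicks can reach from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in nav_graph.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return set(nav_graph) - seen

# Hypothetical site map: "orphan.html" links out but has no inbound link.
site = {
    "index.html": ["about.html", "contact.html"],
    "about.html": ["index.html"],
    "contact.html": [],
    "orphan.html": ["index.html"],
}
print(unreachable_pages(site, "index.html"))  # {'orphan.html'}
```

The same graph doubles as a test model: any path from the start node is a candidate navigation sequence to execute as a test case.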

There are major non-equivalence issues between traditional software testing and web-application testing. Web applications are maintained at a faster rate than other software systems, and because of their huge user populations, high server performance is a must. We also cannot skip dealing with concurrent transactions, and web content rendering capability must be examined when a large user population accesses the application at the same time.

Then comes architecture, the main difference between traditional and web-application testing: a web application has a multi-tier architecture, so it becomes hard to locate an error, which could be in any layer; this motivates end-to-end testing techniques that exercise the overall behaviour of the application. We can still list some web-specific faults: authentication problems, incorrect multilanguage support, cross-browser portability, wrong session management and more. The main concern researchers discuss is that the heterogeneous nature of web applications (different programming languages and various technologies such as Ruby on Rails, Ajax and Flash) makes testing difficult, since generating a test environment for this type of application is not easy. Web applications are also prone to faults because of emerging technologies, asynchronous communication, stateful clients and DOM manipulation, so it becomes crucial to consider aspects such as state navigation, asynchronous behaviour, delta server messages, transition navigation and stateful behaviour.

(Arora & Sinha, 2012) State of the art of web application testing: Among the tools for web-application testing, the main focus is on protocol performance, load testing and validating HTML. Testing tools in this dimension are able to generate test cases automatically. The research illustrates that the gap can be bridged by using conventional techniques on the server side and testing at various levels on the client side; Selenium, for example, provides DOM-based testing with capture of user sessions. The state-based analysis approach fails on the dynamic behaviour of recent and modern web applications, which is where invariant-based automatic testing comes in. Although these two well-known techniques have been discussed, issues of scalability and of how to capture user sessions remain. The overall point is that testing depends on implementation technologies, so it needs to be more adaptive to the heterogeneity and dynamic nature of web applications.


Web Application Testing Methodology: Researchers have proposed and presented different testing techniques for web applications (Lakshmi & Mallika, 2017): structural testing performs data-flow analysis on web applications; statistical testing generates input sequences to test interactions with the application; mutation testing provides an effective coverage criterion; combinatorial interaction testing generates test cases using a unique input-space matrix. The main idea of search-based software engineering testing is to provide branch coverage of web applications. GUI interaction testing checks correctness by observing the state of GUI widgets. Cross-browser compatibility testing checks the deployment of web applications across different browsers. Browser fuzzing by a scheduled-mutation approach validates the browser using static and dynamic means. The invariant-based technique designs a state-flow graph with all possible user-interaction sequences. In model-based testing, the web application is reduced to a state-transition graph and the navigation through its links can be tested.
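Combinatorial interaction testing, mentioned above, derives test cases from an input-space matrix. A minimal sketch with a hypothetical login-form matrix; for simplicity it enumerates the full Cartesian product, whereas real CIT tools usually reduce this to pairwise coverage:

```python
from itertools import product

def combinatorial_tests(factors: dict[str, list[str]]) -> list[dict[str, str]]:
    """One test case per combination of factor levels (exhaustive product)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Hypothetical input-space matrix for a login form.
factors = {
    "browser": ["chrome", "firefox"],
    "username": ["valid", "empty"],
    "password": ["valid", "wrong"],
}
cases = combinatorial_tests(factors)
print(len(cases))  # 2 * 2 * 2 = 8 test cases
```

Each resulting dictionary is one concrete test case, ready to be fed to a test executor.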


Automated Functional Testing Based on the Navigation of Web Applications: The complex nature of web applications makes their testing intricate and time-consuming, and test automation becomes crucial to avoid scenarios of poorly performed testing. In very simple terms, functional requirements are the actions an application is expected to perform, so evaluating the correct navigation of a web application assesses the specified functional requirements. Implementation within a framework makes automated testing more effective: abstract concepts, procedures and the environment define the testing framework in which automated tests are designed and implemented. The first generation follows a linear approach to automated testing; the second generation has two frameworks, data-driven (with test data typically stored in a database) and functional decomposition (producing modular components); the third generation includes keyword-driven and model-based frameworks.
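The data-driven style of the second generation can be sketched in a few lines: the test logic is written once and a table of data rows drives the runs. Everything below is illustrative (a stand-in system under test and inline rows, where a real framework would load the rows from a database or spreadsheet):

```python
def login(username: str, password: str) -> bool:
    """Stand-in for the system under test (hypothetical credentials)."""
    return username == "alice" and password == "s3cret"

test_rows = [
    # (username, password, expected result)
    ("alice", "s3cret", True),
    ("alice", "wrong",  False),
    ("",      "s3cret", False),
]

# The same logic runs once per data row.
results = [login(u, p) == expected for u, p, expected in test_rows]
print(all(results))  # True: every data row passed
```

Adding a new scenario then means adding a row, not writing a new test, which is the reusability benefit the framework generations aim at.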


Findings: A new benchmark is needed to indicate the health of a web application. There are clearly not many tools available that test the non-functional attributes (reliability, trustworthiness, fault tolerance) of web applications. A model needs to be constructed for structural testing, which is normally carried out manually. Recent web-application testing tools are able to automate test-case generation, execution and the evaluation of test results. Well-known techniques such as state-based testing and invariant-based testing have been applied successfully to various case studies, but problems remain, mainly related to scalability: how to capture user-session data, how to avoid the state-explosion problem, and how to reduce state spaces. Among these various approaches to web testing, none is superior to the others, because the deployment environment and other factors affect the results. The findings state that generating a test environment for web applications built with the latest web technologies is the prime need in today's testing dimension.


Conclusions: Test automation can bring many benefits, such as reusability, reliability, simultaneity and continuity, and it allows better applications to be built with less effort. Maintaining quality of execution across web applications is key to the customer experience, and that is what testing means. Defects need to be detected early in the development lifecycle, and test automation is the best answer to this; a framework using the proposed strategies for automated testing of web applications should be developed. (García) The significant conclusion is that further research effort should be spent on defining and assessing the effectiveness of testing models, methods, techniques and tools that combine conventional testing approaches with new and specific ones.

Also, as web applications today move towards cloud-based services, an important next step is to explore automated web-application testing in that dimension. Understanding the dynamic and asynchronous nature of web applications will help in developing automated testing for them. There is still a lot of scope to explore the horizons of automated web-application testing.




Lakshmi, D., & Mallika, S. (2017, August). A Review on Web Application Testing and its Current Research Directions. Retrieved from https://www.researchgate.net/publication/320248662_A_Review_on_Web_Application_Testing_and_its_Current_Research_Directions

(2011, June 9). Retrieved from https://pdfs.semanticscholar.org/3e1c/c8242065c7899139db4d4287e12cbe88c8d2.pdf

An Automated Web Application Testing System. (2014, August). Retrieved from https://pdfs.semanticscholar.org/961c/5dfd7c3c87583298d6eca1ac3284494a15ef.pdf

Angmo, R., & Sharma, M. (n.d.). International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS). Retrieved from https://pdfs.semanticscholar.org/0343/26b357620d6de180105dce67e957e2333875.pdf

Arora, A., & Sinha, M. (2012, February). Retrieved from https://pdfs.semanticscholar.org/2811/5c63c146e4b77fa690e5a55175a36dacc634.pdf

García, B. (n.d.). Retrieved from https://arxiv.org/pdf/1108.2357.pdf












Testing of a Beta-Type Stirling Engine Powered by Solar Power

Literature Review Report

This project was carried out by a previous student. Its aim is to improve the existing design of the Stirling engine by enhancing its performance and investigating any modifications needed to achieve this. It was stated that energy consumption has increased rapidly, by up to 52% over the past 30 years, with the industrialisation of developing countries; the reasons behind this are population growth and the rising standard of living. Fossil fuels not only harm the environment but are also a limited resource (DECC2013). Renewable energy, which causes less pollution, has therefore been in high demand. It has been predicted that growth in demand for fossil fuels will stop by 2020 and that solar energy alone will be expected to produce 29% (Johnston2017). A solar Stirling engine is therefore one of the solutions for increasing the supply of energy and helping people in developing countries who face shortages.


The first Stirling engine was patented by Robert Stirling, a Scottish minister, in 1816 (Urieli1984). The engine operates on the Stirling cycle, a thermodynamic cycle consisting of two constant-temperature (isothermal) and two constant-volume (isochoric) processes. The engine consists of heated and cooled cylinder spaces, a piston, a displacer (which accompanies the movement of the gas in the cylinder and is made with a smaller diameter than the cylinder so that the gas can travel between the cylinder wall and the displacer) and a regenerator (a wire mesh that retains heat to improve the efficiency of the engine). The Stirling engine is an external-combustion engine, so renewable heat sources such as solar power and biomass can be used. It has further advantages: quiet operation, consistent power output and high thermal efficiency, since with ideal regeneration the cycle can in principle approach the Carnot limit. Ambient air, helium and hydrogen can be used as the working fluid (Singh2018).
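The cycle just described can be quantified: with perfect regeneration, the net work comes entirely from the two isothermal strokes and the efficiency equals the Carnot value. A minimal sketch, using illustrative gas quantities and volumes rather than the project's engine data:

```python
from math import log

R = 8.314  # universal gas constant, J/(mol K)

def stirling_cycle(n_mol, t_hot, t_cold, v_min, v_max):
    """Ideal Stirling cycle with perfect regeneration.
    The two isochoric strokes exchange no work and their heat is recycled
    by the regenerator, so only the isothermal strokes matter here."""
    ratio = log(v_max / v_min)
    q_in = n_mol * R * t_hot * ratio    # heat absorbed, isothermal expansion
    q_out = n_mol * R * t_cold * ratio  # heat rejected, isothermal compression
    w_net = q_in - q_out
    efficiency = w_net / q_in           # reduces to 1 - t_cold/t_hot
    return w_net, efficiency

# Illustrative values: 0.01 mol of gas, 823 K / 300 K, 2:1 volume ratio.
w, eta = stirling_cycle(0.01, 823.0, 300.0, 1e-4, 2e-4)
print(round(eta, 3))  # 0.635, the Carnot limit 1 - 300/823
```

Real engines fall well short of this figure because of imperfect regeneration, flow losses and friction, which is exactly what the experimental papers reviewed below measure.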

The Stirling engine comes in three configurations: alpha, beta and gamma. The alpha engine has two cylinders, with a piston for each of the hot and cold spaces, and a high power-to-volume ratio; however, it suffers from problems such as gas leakage in the hot cylinder owing to the durability of the sealing. The beta Stirling engine has a single cylinder containing both the piston and the displacer, which, compared with alpha, avoids the gas leakage. Gamma is like beta, except that the piston is placed in a separate cylinder connected to the displacer cylinder, with the gas moving between them as a single body; the attachment of the cylinders is mechanically simpler and is used in multi-cylinder Stirling engines. Of these configurations, gamma is found to have the highest theoretical efficiency (Stirling2012).

The objectives of the project:

Understand the operation of the Stirling engine.

Study the relevant thermodynamic theories.

Design and build the prototype.

Test the engine and collect data.

Investigate methods to improve the performance of the Stirling engine.

2.1    Thermal performance of a Stirling engine powered by a solar simulator

In this paper, the performance of a beta Stirling engine is investigated using halogen lamps as the heat source to replicate solar energy. Two lamp powers were used, 400 W and 1000 W, with helium as the working fluid. In the experiment the charge pressure was varied from 1 bar to 5 bar. The hot-end temperature was measured with an infrared thermometer and recorded as 623 K for 400 W and 823 K for 1000 W, while the cold end of the cylinder was kept at a constant 300 K. Nodal analysis was used to obtain theoretical results for the heat-transfer coefficient; however, assuming the coefficient beforehand is difficult, which was one reason for using lamps, as they give more stable working conditions and heat distribution than a solar test. The nodal-analysis results were compared with the experimental results.

The displacer of the beta engine was altered to capture the reflected rays of the sun. The component is divided into two sections: the upper section, made from aluminium, functions as the heater where the solar energy is absorbed, and the bottom section acts as the cooler and is made of ASTM steel. The engine was charged to a pressure of 5.5 bar to check for any leakage.

(Figure 1: Beta Stirling engine)

In the discussion it was found that high torque was obtained at low engine speeds, because reduced flow losses allowed the heat exchangers to maintain a better thermodynamic cycle. To obtain high torque at high speed, some component factors would need to be altered, such as increasing the inner surface area to give more volume for expansion and improving the rate of heat transfer. As the engine speed increases the power increases, but beyond a certain point the power drops rapidly owing to insufficient time for heat to transfer to the fluid; mechanical losses such as friction also increase with engine speed. There is therefore an ideal speed at which the Stirling engine reaches its maximum efficiency. Turning to pressure, the power increases with pressure but starts to decrease beyond a certain point, which can be attributed to the limited space in the engine and the sealing problems that higher pressure causes (Costea1999). Increasing the temperature increases the thermal efficiency: at 5 bar the thermal efficiency was 9.26% for 400 W and 12.85% for 1000 W. In conclusion, the maximum experimental and theoretical thermal efficiencies were 12.717% at 405 rpm and 25.38% at 237 rpm respectively. The difference is explained by the low speed of the engine; as the speed increases the difference decreases, since, as mentioned earlier, higher speed reduces the rate of heat transfer. The experiment did not use a dish to test the Stirling engine; using a parabolic dish would make the results more applicable to the dish solar Stirling engine, since heat losses from the dish and the absorber would then be involved (Aksoy2015).
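To put the reported efficiencies in context, the Carnot limit for the quoted hot- and cold-end temperatures can be computed directly. A sketch using only the temperatures reported above:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on thermal efficiency for an engine between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# Hot-end temperatures reported for the 400 W and 1000 W lamps; 300 K cold end.
for t_hot in (623.0, 823.0):
    print(round(carnot_efficiency(t_hot, 300.0), 3))  # 0.518, then 0.635
# The measured 9.26% and 12.85% sit well below these limits,
# reflecting the flow, friction and heat-transfer losses discussed above.
```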

2.2    Optimization of a Solar-Powered Stirling Engine with Finite-Time Thermodynamics

To investigate the overall thermal efficiency of the solar dish Stirling engine, a mathematical model based on finite-time thermodynamics is used (ANDERSON1999). The model includes finite-rate heat transfer, regenerative heat losses, conductive thermal bridging losses and a finite regeneration process time. It investigates the effect of the absorber temperature and the concentration ratio on the thermal efficiency, the heat transfer between the absorber and the working fluid, and the convective heat transfer between the heat sink and the working fluid.

The system consists of a dish (a concentrator plus a thermal absorber), with the Stirling engine located at the focal point of the dish. The dish tracks the sun and reflects the solar energy onto the absorber, where the heat is transferred to the Stirling engine, forming the solar-powered Stirling engine.

(Figure 2: Solar dish receiver)

In the analysis it was found that the thermal efficiency decreases swiftly as the absorber temperature rises (and the absorber temperature increases with the concentration ratio), and that the thermal efficiency is limited by the optical efficiency of the concentrator. The optimal absorber temperature was found to lie between 1100 K and 1300 K; there the maximum thermal efficiency of the Stirling engine, about 34%, is obtained, which was stated to be comparable with the corresponding Carnot efficiency of about 50%. Increasing the regenerator effectiveness increased the thermal efficiency proportionally, and reducing the leakage coefficient increased it as well. The paper provides theoretical guidance for designing the Stirling engine and for methods of increasing its performance (Yaqi2011).
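The trade-off behind the optimum above (radiative losses grow steeply with absorber temperature while the Carnot factor improves) can be sketched with a simple first-order collector-engine model. All parameter values below are illustrative assumptions, not the paper's data:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def system_efficiency(t_abs, t_cold=300.0, t_amb=300.0,
                      eta_optical=0.85, emissivity=0.9,
                      concentration=1300.0, insolation=1000.0):
    """Collector efficiency (optics minus re-radiation) times the Carnot factor.
    A common first-order model for dish-Stirling systems; every parameter
    value here is an assumed, illustrative figure."""
    radiative_loss = (emissivity * SIGMA * (t_abs**4 - t_amb**4)
                      / (concentration * insolation))
    collector = eta_optical - radiative_loss
    carnot = 1.0 - t_cold / t_abs
    return collector * carnot

# Scan absorber temperatures to locate the optimum of the combined system.
temps = range(600, 1800, 10)
t_opt = max(temps, key=system_efficiency)
print(t_opt)  # lands within the 1100-1300 K band reported by the paper
```

The exact optimum shifts with the assumed optics and concentration ratio, but the shape of the trade-off (a single interior maximum) matches the paper's conclusion.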

2.3    Beta-type Stirling engine operating at atmospheric pressure

The performance of a beta Stirling engine was tested at atmospheric pressure with air as the working fluid, using an electric heater as the heat source. The engine was designed with the piston and displacer at an angle of 90 degrees to each other on the flywheel. The piston was made of cast iron containing graphite to minimise friction and achieve high impact resistance, while the cylinder and the displacer were made of ASTM steel. The cold end of the cylinder was cooled by a water jacket, maintaining a temperature of 30 °C.

The engine was run under different operating conditions during development. Tests were conducted at atmospheric pressure, starting from a hot-end temperature of 800 °C and increasing in 100 °C increments up to 1000 °C after each measurement. The engine speed and torque were measured with a dynamometer and a digital tachometer. From the results it was established that the maximum engine power was 5.98 W at 208 rpm at 1000 °C. Torque, which together with speed determines the output power, was also investigated; beyond a certain speed, however, a decrease in torque and power was observed, which can be explained by insufficient heat transfer (Cinar2005).
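The relation between the dynamometer and tachometer readings and the quoted power is P = τω. A small sketch recovering the torque implied by the reported 5.98 W at 208 rpm:

```python
from math import pi

def shaft_power(torque_nm: float, rpm: float) -> float:
    """P = torque * angular speed, with rpm converted to rad/s."""
    return torque_nm * rpm * 2.0 * pi / 60.0

def torque_from_power(power_w: float, rpm: float) -> float:
    """Inverse relation: the torque a dynamometer would read at this power."""
    return power_w / (rpm * 2.0 * pi / 60.0)

tau = torque_from_power(5.98, 208.0)      # torque implied by the reported peak
print(round(tau, 3))                      # about 0.275 N*m
print(round(shaft_power(tau, 208.0), 2))  # recovers the 5.98 W figure
```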

The purpose of this project is to improve and test the solar dish Stirling engine. One reason for choosing it is personal, relating to the country of birth (Sri Lanka): people in rural areas struggle from a lack of energy, for example children have no electricity to study at night, and people cannot pump water from wells without using buckets or an expensive generator. The other reason is a strong passion for thermodynamics.

The Stirling engine configuration used in the project will be beta. Considering the time factor, a beta-type Stirling engine is a convenient design and will cost the least compared with the other configurations, owing to its single piston and cylinder. Furthermore, beta avoids the sealing problems of alpha, and (Aksoy2015) concluded that the heat performance of the beta Stirling engine was better than that of the other configurations except gamma (Abuelyamen2018) (Singh2018).

The project will include mathematical calculations, using MATLAB or Engineering Equation Solver (EES), to obtain the theoretical efficiency of the Stirling engine and compare it with the tested data, in order to understand the errors in the experiment. This can help improve the design for future implementations.

Further research will be carried out into methods of calculating the parameters affecting the efficiency of the system (the dish solar Stirling engine).

The materials used will include aluminium, steel and nylon; research will be carried out to minimise cost and improve the selection of materials.

For the design and optimisation, SolidWorks or AutoCAD will be used.

The main risk in the project arises during testing of the engine; for safety, PPE will be worn.

Abuelyamen, A. and Ben-Mansour, R. (2018) ‘Energy efficiency comparison of Stirling engine types (α, β, and γ) using detailed CFD modeling’, International Journal of Thermal Sciences. Elsevier, 132(June), pp. 411–423. doi: 10.1016/j.ijthermalsci.2018.06.026.

Aksoy, F. et al. (2015) ‘Thermal performance of a stirling engine powered by a solar simulator’, Applied Thermal Engineering, 86, pp. 161–167. doi: 10.1016/j.applthermaleng.2015.04.047.

ANDERSON, B. (1999) ‘Minimizing losses – Tools of finite-time thermodynamics’, Thermodynamic optimization of complex energy systems, 69, pp. 411–420.

Cheng, C. H., Yang, H. S. and Keong, L. (2013) ‘Theoretical and experimental study of a 300-W beta-type Stirling engine’, Energy. Elsevier Ltd, 59, pp. 590–599. doi: 10.1016/j.energy.2013.06.060.

Cinar, C. et al. (2005) ‘Beta-type Stirling engine operating at atmospheric pressure’, Applied Energy, 81(4), pp. 351–357. doi: 10.1016/j.apenergy.2004.08.004.

Costea, M., Petrescu, S. and Harman, C. (1999) ‘Effect of irreversibilities on solar Stirling engine cycle performance’, Energy Conversion and Management, 40(15), pp. 1723–1731. doi: 10.1016/S0196-8904(99)00065-5.

Department of Energy and Climate Change (DECC) (2013) 'Energy Consumption in the UK (2013)', pp. 1–9.

Ian Johnston (2017) Global fossil fuel demand set to fall from 2020, three centuries after the dawn of the Industrial Revolution | The Independent. Available at: https://www.independent.co.uk/environment/coal-oil-demand-renewable-energy-solar-panels-electric-vehicles-investors-a7557756.html (Accessed: 28 October 2018).

Singh, U. R. and Kumar, A. (2018) ‘Review on solar Stirling engine: Development and performance’, Thermal Science and Engineering Progress. Elsevier, 8(July), pp. 244–256. doi: 10.1016/j.tsep.2018.08.016.

Stirling, R. (2012) ‘Stirling engine : Wikis’, pp. 1–21.

Urieli, I. and Berchowitz, D. M. (1984) Stirling Cycle Engine Analysis. Edited by Professor P. H. Lipaman. Adam Hilger Ltd.

Yaqi, L., Yaling, H. and Weiwei, W. (2011) ‘Optimization of solar-powered Stirling heat engine with finite-time thermodynamics’, Renewable Energy. Elsevier Ltd, 36(1), pp. 421–427. doi: 10.1016/j.renene.2010.06.037.