Blue Ocean Strategy Simulation Analysis

The chart below shows the characteristics of Resheps' blue ocean product, the 'Blue box', highlighted in yellow. The product differed considerably from the existing products in the market, but because of concerns about the team budget the changes were kept austere to save on project costs, and only one path (path 3) was targeted. The price of the product was fixed much higher than that of the 'Red box' because the Blue box was superior to it in a number of ways; since some of the features the Blue box carried over from the red round products were very similar to the Shiny station and Purple player levels, the price was felt to be justifiable.


Year 2012
The above charts clearly show that our targeting strategy was working: the preference we received in the 36+ age group was the highest, and this was the group primarily targeted by path 3, which we had chosen when launching the product. Based on the feedback received, however, the concerns were the following:
The Product could have received support from a wider majority had we chosen to add a few more features to the product.
The price of the product might be too high even for the features that had been provided.
Some of the features were below or above the expectations of the market and hence needed adjustment in their levels.
Based on this feedback we decided to change the product specifications as shown below:
The controller sophistication feature was reduced
Audio sophistication was increased
The rechargeable batteries feature was reduced
The ability to control gaming habits was increased
Exer-gaming was introduced
The features were reduced or removed to keep the overall cost of production in control.
The greatest challenge that the blue ocean strategy simulation offered was that the simulation did not provide any intelligence as to what exact level the consumer wanted for a particular feature; it had to be derived from the analysis of the visual exploration brief.
Result 2012
The analysis of our team in the second round was correct, but we again found certain discrepancies:
The pricing was still found to be higher than expected for the Blue box
The features which were to be reduced were not reduced to the appropriate levels
The production plan was found to be lower than demand
Year 2013
Keeping this in mind, we considered our product largely successful, so to expand the market we tried a strategy a little different from the one recommended by the feedback messages:
Instead of just decreasing prices, we decided to increase some features and reduce prices only slightly.
This was done to get the maximum possible margin from the market by getting more consumers to buy.
Prices were reduced only a little because of concerns regarding EBIT.
The production level was increased to 1,000 units, expecting an increase in sales due to the enhanced product features.
Result 2013
The product features were accepted by the market but sales slackened; this may have been due to the following:
The market did not need the feature that we added to the product
New products were introduced by the competitors
The chart below shows a comparative analysis of the competitors and our blue ocean strategy, through product features and consumer preferences.
From the product specifications we can clearly conclude that the Blue box was the most superior product, but consumers showed a very high preference for the 'Blue pack', owing to its lower price. This was also the major feedback.
Based on this, in the next round more features were added to attract more consumers and to remain differentiated from the competitors. Even as Blue pack further reduced its prices, we could not lower ours, because our company was facing profitability problems and we wanted to keep losses to a minimum.
Learning from Blue ocean strategy simulation
To attract non-customers it is most important to give them a price discount.
The price discount will not be successful unless the product is radically different from the category, as non-customers are those who have not been satisfied by the category as a whole.
To reduce prices it is important to eliminate all irrelevant features.
To reduce prices it is also important to reduce all the unimportant features.
Implementation of blue ocean strategy in Mobile advertising industry
The mobile advertising industry is still very nascent in India, but there are already challenges around bombarding users with advertising messages. The biggest challenge advertisers face is that the average mobile user receives so many messages every day that it is difficult to stand out.
Features currently Available
The services that a mobile advertising agency in India provides are the following:
SMS blasts to user databases collected on the basis of profession and education
SMS blasts to opt-in user databases
MMS blasts
Banner ads on mobile WAP sites
Click-to-call advertisements
Bluetooth-based advertising
The biggest share of these is taken by SMS blasts to databases collected on the basis of education and occupation.
A blue ocean initiative in such a scenario would be:
Description of Introduced feature
Location based kiosks
Bluetooth-based advertising has still not been adopted in India, as individual brands are very cautious about the costs involved, but a viable model would be for a mobile agency to set up kiosks at locations with large footfalls, such as malls.
The booth will have a physical presence to attract people to it
Once mobile users come close to the kiosk, they can be requested to switch on Bluetooth to receive attractive discounts and applications
This model can then be sold to advertisers as a way to start a conversation with their customers
The process
The advantages of this model would be:
Individual retailers who currently do not use mobile as a marketing medium will start doing so, thus creating the blue ocean.
The ads would be the most recent conversation with the consumer
The cost would be low, as the advertiser will pay only for the number of applications/discounts disbursed
The clutter/competition will be irrelevant, as consumers will opt in to receive these messages.
The kiosk can also be used as an out-of-home medium
Assignment 2
Blue ocean strategy for Sports academy
The sports academies that exist today demand extraordinary commitment from their students: at a very young age the students are required to make a high level of commitment to sports. The concept I am proposing involves setting up an academy that has all the facilities any academy of international standard provides to its students, and that also aids the child in his or her education.
This blue ocean will sit somewhere between the operations of a typical sports academy and a typical K-12 educational establishment. A similar example is Cirque du Soleil, cited in the Blue Ocean Strategy textbook: Cirque du Soleil created a blue ocean by incorporating the features of a theatre performance into its circus performance.
The K-12 education sector has seen tremendous growth in the past decade, with established players in the market such as Educomp maturing and traditional educational establishments such as Manipal and DPS entering. The K-12 education model comprises all establishments that have a uniform model across multiple branches and necessarily conduct classes from kindergarten to 12th standard.
The establishments currently operating in the K-12 education sector offer the following characteristics:
Standardised tuitions, based on well defined curriculum
Digitised classrooms
Transport from and to homes
Science and mathematics labs
Canteen or other food service
Sports infrastructure
Facilities for hobby development
Regular feedback sessions with the parents
Mentor mentee program
Personality development program
Communication skills development programs
Career counselling
Preparation for competitive exams
A sports academy provides the following facilities to its students
Specialised infrastructure for various sports
Professional coaching for all the relevant sports
Mentor mentee program
Guidance on career in sports
Platform and certification to start competing in events
Medical facilities
Residential campuses
Fitness training
This establishment can be called a talent development academy. It is based on the fact that, gradually, even in India the focus of parents (who are the decision makers in K-12 education) is shifting towards the overall development of the child rather than just the educational degree; yet children often end up not taking up sports as a career because of the immense risk involved. The academy will reduce this risk by also providing its students with educational facilities on the campus itself.
This may be achieved by collaborating with some of the operators in the education sector to provide the sports academy students with adequate classroom facilities on or near campus.
The activities of the academy and attached educational establishment will be coordinated to allow students to manage the two adequately
The academy will coordinate with the school in the following manner
The student will be required to meet the minimum criteria for classroom education
The school will accommodate the requirements of the sports training schedule
A personal database will be maintained to monitor both the academic and the sports performance of each student
Academics will be handled only during a stipulated time frame
Classes will be scheduled according to the extra-curricular activity that the group of children has decided to pursue
The focus of the academy will be to develop highly professional sports persons from among its students
The result
Non-users: those students who had talent but could not pursue their interest due to the inability to handle both education and sports will become users.

Simulation of Fog Computing for Internet of Things (IoT) Networking



ABSTRACT

Regardless of the expanding utilization of cloud computing, some issues remain unsolved because of inherent limitations of cloud computing, such as unreliable latency and the lack of mobility support and location-awareness. By providing elastic resources and services to end users at the edge of the network, Fog computing can address these issues; Cloud computing, by contrast, is more about providing resources distributed in the core network. This project demonstrates the idea and simulation of Fog computing using Cisco Packet Tracer (networking perspective) and Amazon AWS (cloud platform). Cisco Packet Tracer is a network simulation tool, and Amazon AWS is a cloud computing platform that can simulate Internet of Things (IoT) nodes connected through a core network to the fog network. The size and computing speed of the edge network can be optimized.

                                  TABLE OF CONTENTS

Abstract

1. Introduction

2. Architecture & Implementation

2.1. Cisco Packet Tracer

2.2. Amazon Web Service

2.3. Simulation in AWS Platform

2.4. Fault Tolerance Environment

3. Results

4. Conclusion

5. Future Scope

6. References

1. INTRODUCTION

Fog computing is a distributed computing paradigm that acts as an intermediate layer between Cloud data centers and IoT devices and sensors. It offers compute, networking and storage facilities so that Cloud-based services can be extended nearer to IoT devices and sensors. The concept of Fog computing was first introduced by Cisco in 2012 to address the challenges faced by IoT applications in conventional Cloud computing. IoT devices and sensors are highly distributed at the edge of the network and have real-time, latency-sensitive service requirements. Cloud data centers are geographically centralized and often fail to deal with the storage and processing demands of billions of geo-distributed IoT devices and sensors. Hence, congested networks, high latency in service delivery and poor Quality of Service (QoS) are experienced.

                                       Fig 1: Fog Computing Environment

Typically, a Fog computing environment is composed of conventional networking components such as routers, switches, set-top boxes, proxy servers and Base Stations (BS), placed in close proximity to IoT devices and sensors. These components are furnished with various computing, storage and networking capabilities and can support the execution of service applications. Consequently, the networking components enable Fog computing to create large geographical distributions of Cloud-based services. In addition, Fog computing facilitates location awareness, mobility support, real-time interactions, scalability and interoperability. Fog computing can therefore perform efficiently in terms of service latency, power consumption, network traffic, capital and operational expenses, and content distribution. In this sense, Fog computing better meets the requirements of IoT applications than an extensive reliance on Cloud computing alone.


With the advent of Cloud computing, computation technology has entered a new era. Many computation service providers, including Google, Amazon, IBM and Microsoft, are currently nurturing this popular computing paradigm as a utility. They have enabled cloud-based services such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) to handle various enterprise issues simultaneously. However, most Cloud data centers are centrally located and situated far from end users. Consequently, real-time and latency-sensitive computation service requests that must be served by distant Cloud data centers regularly suffer large round-trip delays, network congestion and service quality degradation. To resolve these issues beyond centralized Cloud computing, a new concept named “Edge computing” has recently been proposed.

The fundamental idea of Edge computing is to bring computation facilities closer to the source of the data, enabling data processing at the edge of the network. An edge network essentially comprises end devices such as mobile phones and smart objects, and edge devices such as border routers, set-top boxes, bridges, base stations and edge servers. These components can be outfitted with the capabilities needed to support edge computation. As a localized computing paradigm, Edge computing gives faster responses to computational service requests and most often prevents bulk raw data from being sent towards the core network.

                                                      Fig 2: Working of Fog

Fog computing can also empower Edge computation. However, besides the edge network, Fog computing can be extended to the core network as well: both edge and core networking components can be utilized as computational infrastructure in Fog computing. Consequently, multi-level application deployment and mitigation of the service demand of a huge number of IoT devices and sensors can easily be achieved through Fog computing. In addition, Fog computing components at the edge network can be placed nearer to IoT devices and sensors than cloudlets and cellular edge servers. As IoT devices and sensors are densely distributed and require real-time responses to service requests, this approach enables IoT data to be stored and processed within the vicinity of the IoT devices and sensors. Fog computing can also extend cloud-based services such as IaaS and PaaS. Due to these features, Fog computing is considered more suitable and well-structured for IoT than other related computing paradigms.

2. Architecture & Implementation:

2.1.  Cisco Packet Tracer:

Cisco Packet Tracer is a network simulation tool designed by Cisco Systems that allows users to create network topologies and study different network behaviors. It allows users to simulate the configuration of Cisco routers and switches using a simulated command-line interface.

                                                               Fig: 3 Fog Computing Architecture in Cisco Packet Tracer

In Cisco Packet Tracer we created a network topology with the cloud server as the topmost layer; the Fog server sits in the middle of the topology, and the fog servers are connected to the end devices through the switch and routers. We used the generic switch and routers in our topology. An IP address was assigned to each of the routers, the end devices and the servers, and static routing was configured on router 0 and router 1. When we ping from the host PC (end device) to the cloud server, it takes an average of 9 ms; when we ping from the same host to the Fog server, it takes an average of 5 ms. From this comparison we can conclude that there is less latency to the Fog server than to the cloud server.
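The latency comparison can be illustrated with a small sketch (this is a toy model, not Packet Tracer output): it simply scales the two average round-trip times observed above, 9 ms to the cloud server and 5 ms to the Fog server, over a batch of sequential requests. The constant and function names are our own.

```python
# Toy model of the observed round-trip latencies (not Packet Tracer output).
CLOUD_RTT_MS = 9   # average ping from host PC to the cloud server
FOG_RTT_MS = 5     # average ping from host PC to the Fog server

def total_latency_ms(requests: int, rtt_ms: float) -> float:
    """Total round-trip time for a batch of sequential requests."""
    return requests * rtt_ms

requests = 100
cloud = total_latency_ms(requests, CLOUD_RTT_MS)
fog = total_latency_ms(requests, FOG_RTT_MS)
print(f"cloud: {cloud} ms, fog: {fog} ms, saved: {cloud - fog} ms")
```

Over 100 sequential requests the 4 ms per-request difference compounds to 400 ms, which is why placing the server at the fog layer matters for latency-sensitive IoT traffic.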

2.2.            Amazon Web Service:

Amazon Web Services (AWS) is a computing service that provides cloud computing infrastructure. Amazon Elastic Compute Cloud (EC2) allows us to launch a variety of cloud instances and gives deep system-level control of computing resources while running in the Amazon environment. EC2 reduces the time required to boot a new server instance, allowing capacity to be scaled quickly as computing requirements change, and lets users build and configure instances with their desired operating system.

2.3.            Simulation in Amazon Web Service Platform:

Testing without Fog Node:

1)     We set up an EC2 instance in AWS which acts as a web server. Figure 4 shows the deployed EC2 instance.


                                           Fig:4 Deployed EC-2 instance acting as a web server

2)     Figure 5 shows us the Linux web server page


                          Fig:5 Amazon Linux AMI page from EC2 instance serving as a web server

Testing with Fog Nodes:

CloudFront is an AWS service whose role is to distribute static and dynamic web pages to customers. CloudFront sends the data from a global network of data centers: when a customer requests data through CloudFront, the request is routed to the server location with the lowest latency, so that data is delivered with the best performance.

When the data is already in the edge location with the lowest latency, CloudFront delivers it immediately.

When the data is not in that edge location, CloudFront retrieves it from an HTTP server, or any other point that has been defined and identified as the source for the data.
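The edge-cache behaviour described above can be sketched as a minimal cache in front of an origin. This is an illustrative model, not the CloudFront API; the `EdgeCache` class and the `origin_fetch` callback are assumed names.

```python
# Minimal sketch of an edge cache: serve from the edge on a hit,
# fetch from the origin (and store at the edge) on a miss.
class EdgeCache:
    def __init__(self, origin_fetch):
        self._cache = {}
        self._origin_fetch = origin_fetch  # called only on a cache miss

    def get(self, key):
        if key in self._cache:             # hit: served immediately from the edge
            return self._cache[key], "hit"
        value = self._origin_fetch(key)    # miss: retrieve from the origin server
        self._cache[key] = value           # keep a copy for later requests
        return value, "miss"

origin = {"/index.html": "<html>home</html>"}
edge = EdgeCache(lambda k: origin[k])
print(edge.get("/index.html"))  # first request: a miss, fetched from the origin
print(edge.get("/index.html"))  # second request: a hit, served from the edge
```

The second request never touches the origin, which is the mechanism behind the latency reduction measured later in the results section.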

                             Fig:6 Content Delivery Network (CDN) Architecture


Figure 7 shows the CDN distribution created for this environment.

                                                 Fig:7 Creation of CDN

                            Fig:8 Webpage accessed via CloudFront

2.4.            Fault Tolerance Environment:

To implement fault tolerance and high-performance capability in our environment, we used the autoscaling and load balancing services in AWS. The architecture is explained below.

                 Figure 9: Architecture in AWS for fault tolerance in the AWS environment  

Elastic Load Balancing (ELB) distributes incoming HTTP traffic across many destinations, such as EC2 instances. ELB handles traffic within a single Availability Zone or across many Availability Zones. ELB offers three types of load balancers, which provide the high availability, automatic scaling and robust security necessary to make applications fault tolerant.

ELB provides load balancing across multiple domains. The Classic Load Balancer is used for applications that are built on the EC2-Classic network.

Amazon's Auto Scaling service helps ensure that the right number of Amazon EC2 instances is available to handle the traffic for the user's application. Collections of EC2 instances are called Auto Scaling groups. A minimum number of instances can be set for each Auto Scaling group, and Auto Scaling ensures the group never goes below this limit. Similarly, the user can set a maximum number of instances for each group, and Auto Scaling does not let it go above this limit. Instances can also be created, deleted, increased or decreased on user demand.
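The min/max clamping rule just described can be sketched as follows. The CPU thresholds and group sizes here are illustrative assumptions, not AWS defaults, and the function is our own simplified model of a scaling policy.

```python
# Sketch of an autoscaling decision: scale out on high CPU, scale in on low
# CPU, and always clamp the result to the group's configured min/max size.
def desired_instances(current: int, cpu_pct: float,
                      min_size: int = 2, max_size: int = 6,
                      scale_out_at: float = 70.0,
                      scale_in_at: float = 30.0) -> int:
    if cpu_pct > scale_out_at:
        current += 1        # spike in traffic: add an instance
    elif cpu_pct < scale_in_at:
        current -= 1        # idle: remove an instance
    # The group never goes below min_size or above max_size.
    return max(min_size, min(max_size, current))

print(desired_instances(2, 85.0))  # 3 - scales out under load
print(desired_instances(2, 10.0))  # 2 - clamped at the group minimum
```

This clamping is also what restores the group after a failure: a terminated instance drops the count below the minimum, so a replacement is launched.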

2.4.1.      Implementation of the fault tolerant architecture

[1]   A Classic Load Balancer is created using AWS, allowing HTTP traffic to flow through the network

                              Fig: 10 Load balancer Creation

[2]   An Auto Scaling group is created using the AWS service, setting up 2 instances; the other configurations are done in the environment.

                                        Fig: 11 Autoscaling group is created

[3]   The 2 running instances spawned by the auto scaling group

                       Fig: 12 Two running instances

[4]   The Apache web server running on both instances.

                                         Fig:13 Apache Web Server

[5]   One instance is terminated to simulate an environment in which a fault occurs in our web server.


                                                Fig:14 Terminating one of the EC2 Instance

[6]   A new EC2 instance is created because of the Auto Scaling group we configured before. Thus, if any server goes down for any reason, the Auto Scaling group helps ensure the right number of EC2 instances is present to handle the load in the environment.

                              Fig: 15 Two EC2 Instances are running


3. Results

3.1.  Cisco Packet Tracer

The ping and traceroute results below illustrate the latency comparison described in the architecture section: pinging from the host PC (end device) to the cloud server takes an average of 9 ms, while pinging from the same host to the Fog server takes an average of 5 ms, confirming that latency to the Fog server is lower.


                                  Fig: 16 Pinging from Host to Cloud Server


                                     Fig:17 Pinging from Host to Fog Server


                             Fig: 18 Traceroute to Cloud Server


                                          Fig:19 Traceroute to Fog Server

3.2.  Amazon Web Service Platform:

In AWS we created a static website and accessed it from India, without the fog nodes, to observe the latency; the website was accessed from India using a VPN, and the maximum latency was observed from there. When we accessed the website from India we observed an average latency of 571 ms. When AWS CloudFront was configured for the same environment and the same website was accessed from India, we could see the difference between the two latencies: the latency was reduced to 88 ms. Thus we can conclude that, by using a CDN, low-latency access to websites can be achieved from any location in the world.


                                     Fig:20 Ping from India before CDN deployment


                                    Fig:21 Ping from India after CDN Deployment

In the fault-tolerant environment, it is observed from the figures that as soon as one of the instances goes over the set CPU threshold value or terminates, a new instance automatically spawns within a couple of minutes. Thus, the autoscaling service monitors the environment to make sure the system is running at the desired performance level. When there is a spike in network traffic, autoscaling automatically increases the number of instances, so the load on the system is reduced.

                                       Fig: 22 Network in before instance fails

                                                  Fig: 23 Network out before instance fails

                          Fig: 24 CPU utilization before instance fails

               Fig: 25 Network packets in before instance fails

                          Fig: 26 Network packets out before instance fails

                   Fig: 27 Network in after 1st instance fails (blue line indicates the new instance)

                     Fig: 28 Network out after 1st instance fails (blue line indicates the new instance)

Fig: 29 Network packets in after 1st instance fails (blue line indicates the new instance)

Fig: 30 Network packets out after 1st instance fails (blue line indicates the new instance)

Fig: 31 CPU utilization after 1st instance fails (blue line indicates the new instance)

The following charts show information about the devices from which CloudFront received requests for the selected distribution. The Devices charts are available only for web distributions that had activity during the specified period and that have not been deleted.

This chart shows the percentage of requests that CloudFront received from the most popular types of devices.




                                                         Fig:32 Types of Devices

The chart shows the following parameters as a percentage of all viewer requests for the created CloudFront distribution:

Hits: viewer requests for which the object is served from a CloudFront edge cache.

Misses: viewer requests for which the object is not currently in a cache, so CloudFront must get the object from the origin.

Errors: viewer requests that resulted in an error, so CloudFront did not serve the object.
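How such a chart's percentages are derived from raw request counts can be sketched in a few lines; the counts used here are made-up illustrative numbers, not figures from our distribution.

```python
# Sketch: hit/miss/error percentages from raw request counts.
def cache_stats(hits: int, misses: int, errors: int) -> dict:
    total = hits + misses + errors
    # Each category as a percentage of all viewer requests, rounded to 0.1%.
    return {name: round(100 * count / total, 1)
            for name, count in (("hits", hits),
                                ("misses", misses),
                                ("errors", errors))}

print(cache_stats(820, 150, 30))  # {'hits': 82.0, 'misses': 15.0, 'errors': 3.0}
```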

                                   Fig:33 Cache Results

4.      Conclusion

Fog computing is emerging as an attractive solution to the problem of data processing in the Internet of Things. Rather than outsourcing all operations to the cloud, it also utilizes devices at the edge of the network that have more processing power than the end devices and are closer to the sensors, thus reducing latency and network congestion. Fog computing takes advantage of both edge and cloud computing: while it benefits from edge devices' proximity to the endpoints, it also leverages the on-demand scalability of cloud resources.

5. Future Scope

5.1. Security Aspects


In Fog computing, authentication plays an important role in the security aspects [9]. One line of work identifies the main security issue of fog computing as authentication at the different levels of fog nodes, since traditional PKI-based authentication is inefficient and scales poorly. A cheap, secure and user-friendly solution to the authentication problem in local ad-hoc wireless networks has been proposed, relying on physical contact for pre-authentication in a location-limited channel [11]. Similarly, NFC can be used to simplify the authentication procedure in the case of cloudlets [12]. As biometric authentication develops in mobile and cloud computing (for example, fingerprint authentication, face authentication, and touch-based or keystroke-based authentication), it will be useful to apply biometric-based authentication in Fog computing.

5.2.      Privacy

Privacy is another vital part of security in Fog computing. There is always a risk of a leak of important information when communication between the clients and the cloud is continuous. Data privacy is therefore of the highest significance, because the fog nodes are close to end users and carry important data. A few algorithms currently in use address the issue of data privacy to some degree: techniques such as homomorphic encryption can be used to permit privacy-preserving aggregation at the local gateways without decryption [17], and differential privacy [18] can be used to guarantee that the privacy of an arbitrary single entry in the data set is not exposed in the case of statistical queries.
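As an illustration of the differential-privacy idea mentioned above, the following sketch implements the standard Laplace mechanism: noise with scale sensitivity/ε is added to a count query, so no single entry in the data set dominates the result. The function names and parameter values are our own illustrative choices.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Inverse-CDF sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    # Small epsilon guards the log against the u = -0.5 edge case.
    return -scale * sign * math.log(1.0 - 2.0 * abs(u) + 1e-12)

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(7)
print(private_count(42, epsilon=0.5))  # a noisy value in the vicinity of 42
```

A smaller ε means more noise and stronger privacy; a larger ε means a more accurate but less private answer.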


6. References

[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Workshop on Mobile Cloud Computing. ACM, 2012

[2] S. Yi, Z. Hao, Z. Qin and Q. Li, “Fog Computing: Platform and Applications,” 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb)(HOTWEB), Washington DC, DC, USA, 2015, pp. 73-78. doi:10.1109/HotWeb.2015.22



[6] F. Bonomi et al., “Fog Computing: A Platform for Internet of Things and Analytics,” Big Data and Internet of Things: A Roadmap for Smart Environments, N. Bessis and C. Dobre, eds., Springer, 2014, pp. 169–186

[7] Y. Cao et al., “FAST: A Fog Computing Assisted Distributed Analytics System to Monitor Fall for Stroke Mitigation,” Proc. 10th IEEE Int’l Conf. Networking, Architecture and Storage (NAS 15), 2015, pp. 2–11

[8] V. Stantchev et al., “Smart Items, Fog and Cloud Computing as Enablers of Servitization in Healthcare,” J. Sensors & Transducers, vol. 185, no. 2, 2015, pp. 121–128

[9] H. Gupta et al., iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in Internet of Things, Edge and Fog Computing Environments, tech. report CLOUDS-TR-2016-2, Cloud Computing and Distributed Systems Laboratory, Univ. of Melbourne, 2016

[10] I. Stojmenovic and S. Wen, “The Fog Computing Paradigm: Scenarios and Security Issues,” Proc. 2014 Federated Conf. Comp. Sci. and Info. Sys. (FedCSIS 14), 2014, pp. 1–8

[11] Balfanz, D., Smetters, D.K., Stewart, P., Wong, H.C.: Talking to strangers: Authentication in ad-hoc wireless networks. In: NDSS (2002)

[12] Bouzefrane, S., Mostefa, A.F.B., Houacine, F., Cagnon, H.: Cloudlets authentication in nfc-based mobile computing. In: MobileCloud. IEEE (2014)

[13] Shin, S., Gu, G.: Cloudwatcher: Network security monitoring using openflow in dynamic cloud networks. In: ICNP. IEEE (2012)

[14] McKeown, N., et al.: Openflow: enabling innovation in campus networks. ACM SIGCOMM CCR 38 (2008)

[15] Klaedtke, F., Karame, G.O., Bifulco, R., Cui, H.: Access control for sdn controllers. In: HotSDN. vol. 14 (2014)

[16] Yap, K.K., et al.: Separating authentication, access and accounting: A case study with openwifi. Open Networking Foundation, Tech. Rep (2011)

[17] Lu, R., et al.: Eppa: An efficient and privacy-preserving aggregation scheme for secure smart grid communications. TPDS 23 (2012)

[18] Dwork, C.: Differential privacy. In: Encyclopedia of Cryptography and Security. Springer (2011)

Comparing Binomial Tree, Monte Carlo Simulation and Finite Difference Methods

In recent years, numerical methods for valuing options, such as binomial tree models, Monte Carlo simulation and finite difference methods, have been used for a wide range of financial purposes. This paper illustrates and compares the three numerical methods. On the one hand, it provides a general description of each method separately, covering its definition, merits, drawbacks and determinants. On the other hand, it makes a concrete comparison of the three numerical methods in valuing options. Overall, the three numerical methods have proven to be valuable and efficient methods for valuing options.


In recent years, option valuation methods have become very important in the theory of finance and increasingly widespread in practice. The main approaches to option price valuation include binomial tree models, Monte Carlo simulation and finite difference methods. Binomial models were proposed by Cox, Ross and Rubinstein (1979). Boyle (1977) first discussed Monte Carlo simulation, and it was later used by Johnson and Shanno (1985) and Hull and White (1987) to value options when volatility follows a stochastic process. Finite difference methods were discussed by Schwartz (1977), Brennan and Schwartz (1979) and Courtadon (1982) (Hull and White, 1988). This essay aims to compare and contrast the three numerical methods mentioned above. All of them pursue the twin objectives of calculation accuracy and speed, and for any given method greater accuracy can generally be achieved only at the cost of more computation (Hull and White, 1988). The essay first provides a general description of binomial trees, Monte Carlo simulation and finite difference methods, identifying the benefits and drawbacks of each, and then contrasts their performance in valuing both American and European options.
Binomial tree models
Hull and White (1988) provide a general description of binomial trees, concluding that the "binomial model is a particular case of a more general set of multivariate multinomial models". All multivariate multinomial models are characterised as lattice approaches, such as the binomial and trinomial lattice models (Hull and White, 1988). The binomial tree is a valuation approach that divides the life of the option into a large number of small time intervals of length Δt. The method assumes that over each interval the asset price moves from its initial value S to one of two new values, Su for an upward movement and Sd for a downward movement. The probability of an upward movement is denoted p, the probability of a downward movement is 1 − p, and the parameters u, d and p are used to value the option (Hull, 2008).
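The parameters described above (u, d, p and Δt) can be sketched in a minimal Cox-Ross-Rubinstein tree. This is an illustrative sketch, not code from any of the cited papers, and the parameter values used below are purely made up for demonstration:

```python
import math

def crr_binomial_call(S0, K, T, r, sigma, n, american=False):
    """Price a call on a Cox-Ross-Rubinstein binomial tree with n steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # upward move factor
    d = 1 / u                             # downward move factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral probability of an up move
    disc = math.exp(-r * dt)              # one-period discount factor

    # option values at maturity (node j has had j up moves)
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]

    # roll backwards through the tree, discounting expected values
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:  # allow early exercise at each node
                cont = max(cont, S0 * u**j * d**(i - j) - K)
            values[j] = cont
    return values[0]

print(crr_binomial_call(100, 100, 1.0, 0.05, 0.2, 500))
```

With 500 steps the result is close to the Black-Scholes value for the same inputs, illustrating the asymptotically exact approximation mentioned below.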
The binomial model is built on option replication. In a binomial tree, the payoff of an option can be reproduced by trading a portfolio consisting of the stock and the risk-free asset. Other lattice approaches, including the trinomial tree model, do not admit option replication (Figlewski & Gao, 1999). However, the fair value of the option can still be derived under the basic assumption of option pricing, namely that the world is risk-neutral (Hull, 2008).
In this case, the fair value can be obtained simply by computing the expected payoff under the risk-neutral distribution and discounting at the risk-free interest rate (Hull, 2008). In a risk-neutral world, any approximation procedure that is based on a probability distribution approximating the risk-neutral distribution, and that converges to it in the limit, can be used to value option prices properly. It is therefore legitimate to use the trinomial tree model, or an even more complex structure, without losing the ability to calculate unique option payoffs (Figlewski & Gao, 1999).
Also worth mentioning is that binomial trees can accommodate known payouts such as dividends (Hull and White, 1988). Dividend policy can be modelled on the principle that the stock maintains a constant yield, denoted δ, on each ex-dividend date (Cox et al., 1979).
Essentially, binomial and trinomial models are powerful, intuitive methods for valuing both American and European options. Moreover, they provide asymptotically exact approximations under the Black-Scholes assumptions (Figlewski & Gao, 1999).
Considering the efficiency and accuracy of this method, the binomial tree is most efficient and accurate when a small number of options without dividends must be valued. It is less efficient, however, where the effects of cash dividends must be analysed: a fixed dividend yield generates an improper hedge ratio, even though the fixed-yield assumption is otherwise an efficient and accurate approximation. Furthermore, binomial tree models are less efficient for American options than for European options, and less efficient and accurate than finite difference methods for valuing multiple options, because each valuation has a conditional starting point (Geske & Shastri, 1985).
Monte Carlo simulation
Monte Carlo simulation is a useful numerical method for many purposes in finance, such as security valuation. For option valuation, Monte Carlo simulation works under the risk-neutral measure (Hull, 2008). For example, a call option is a security whose expected payoff may depend on more than one underlying security. The value of a derivative security can be obtained by discounting its expected payoff in the risk-neutral world at the riskless rate (Boyle et al., 1997).
Boyle et al. (1997) describe the approach in three steps: first, simulate sample paths of the underlying state variables (e.g. underlying asset prices and interest rates) over the relevant time horizon, according to the risk-neutral measure; second, evaluate the discounted cash flows of the security on each sample path, as determined by the structure of the security in question; third, average the discounted cash flows over the sample paths.
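The three steps above can be sketched for a single-asset European call, assuming the underlying follows geometric Brownian motion under the risk-neutral measure. This is a minimal illustrative sketch, and all parameter values in the usage line are made up:

```python
import math, random

def mc_european_call(S0, K, T, r, sigma, n_paths, seed=0):
    """Monte Carlo price of a European call under risk-neutral GBM."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        # Step 1: simulate the terminal asset price under the risk-neutral measure
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        # Step 2: evaluate the discounted payoff on this sample path
        total += disc * max(ST - K, 0.0)
    # Step 3: average the discounted cash flows over all sample paths
    return total / n_paths

print(mc_european_call(100, 100, 1.0, 0.05, 0.2, 200_000, seed=1))
```

Because the payoff here depends only on the terminal price, one draw per path suffices; path-dependent securities would simulate the full trajectory in step 1.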
High-dimensional integrals increasingly need to be evaluated in derivative-security pricing, and Monte Carlo simulation is widely used in option valuation precisely because of this rise in dimensionality (Ibanez & Zapatero, 2004). For the integral of a function f(x) over the d-dimensional unit hypercube, the simple Monte Carlo estimate of the integral is the average value of f over n random points drawn from the hypercube; as n tends to infinity, this estimate converges to the true value of the integral. A distinct advantage of the method over other numerical approaches is that its error convergence rate is independent of the dimension. The only restriction, a relatively mild one, is that f must be square integrable (Boyle et al., 1997).
Monte Carlo simulation is simple and flexible. It can easily be modified to accommodate different processes governing stock returns, and compared with other methods it has distinct merits in specific circumstances. Essentially, Monte Carlo simulation can be used whenever the final stock value is determined by a process generating future stock price movements. This process is created on a computer to generate a series of stock price trajectories, from which the option value is obtained. In addition, the standard deviation of the estimate can be computed simultaneously to gauge the accuracy of the results (Boyle, 1977).
There are, however, some disadvantages of this method, and in recent years new techniques have been developed to overcome them. One key drawback is that the method requires many computations and cannot easily handle situations where there are early exercise opportunities (Hull, 2008). Variance reduction techniques, including the control variate approach and the antithetic variate method, are used to address the first problem. Furthermore, deterministic sequences, also known as low-discrepancy or quasi-random sequences, are used to accelerate the valuation of multi-dimensional integrals (Boyle et al., 1997).
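One of these variance reduction techniques, the antithetic variate method, is easy to sketch: each standard normal draw z is paired with its mirror image −z, so the two resulting payoffs are negatively correlated and the averaged estimator has lower variance than one using 2n independent draws. The sketch and parameter values below are illustrative, not from the cited papers:

```python
import math, random

def mc_call_antithetic(S0, K, T, r, sigma, n_pairs, seed=0):
    """European call via Monte Carlo with antithetic variates."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):  # evaluate the payoff on the antithetic pair
            ST = S0 * math.exp(drift + vol * zz)
            total += disc * max(ST - K, 0.0)
    return total / (2 * n_pairs)

print(mc_call_antithetic(100, 100, 1.0, 0.05, 0.2, 100_000, seed=1))
```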
Quasi-Monte Carlo methods have been suggested as a complement to Monte Carlo simulation. They use deterministic rather than random sequences, chosen so as to obtain convergence with known error bounds (Joy et al., 1996).
Until recently, Monte Carlo simulation had not been used for American options. The key problem is that the payoff depends on several sources of uncertainty and the optimal exercise frontier of an American option is unknown in advance (Barraquand & Martineau, 1995).
Finite difference methods
Hull (2008) provides a general description of finite difference methods, concluding that "finite difference methods value a derivative by solving the differential equation that the derivative satisfies". Finite difference methods fall into two classes, implicit and explicit. The implicit method relates the value of the option at time t + Δt to three alternative values at time t, while the explicit method relates the value of the option at time t to three alternative values at time t + Δt (Hull & White, 1990).
The explicit finite difference method is equivalent to a trinomial lattice approach. Comparing the two finite difference methods, a distinct advantage of the explicit method is that it requires fewer boundary conditions than the implicit one. To implement the implicit method for a derivative on an asset with price S, boundary conditions must be specified for the derivative security at both the minimum and the maximum price. By contrast, the explicit method, viewed as a trinomial lattice approach, does not need these additional boundary conditions (Hull & White, 1990).
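A minimal sketch of the explicit method for a European put on the Black-Scholes grid follows, using the standard textbook coefficients. All parameter values are illustrative, and note that the scheme is only stable when σ²j²Δt < 1 at every grid index j:

```python
import math

def explicit_fd_put(S0, K, T, r, sigma, S_max, M, N):
    """European put via the explicit finite difference method:
    M asset-price steps up to S_max, N time steps, rolling backwards
    from the terminal payoff. Requires sigma^2 * j^2 * dt < 1 for all j."""
    dS = S_max / M
    dt = T / N
    disc = 1.0 / (1.0 + r * dt)  # one-step discount built into the coefficients

    # terminal condition: put payoff at maturity
    V = [max(K - j * dS, 0.0) for j in range(M + 1)]

    for i in range(1, N + 1):
        tau = i * dt                     # time remaining to maturity
        new = [0.0] * (M + 1)
        new[0] = K * math.exp(-r * tau)  # boundary at S = 0
        new[M] = 0.0                     # boundary for very large S
        for j in range(1, M):
            a = disc * 0.5 * dt * (sigma**2 * j**2 - r * j)
            b = disc * (1.0 - sigma**2 * j**2 * dt)
            c = disc * 0.5 * dt * (sigma**2 * j**2 + r * j)
            new[j] = a * V[j - 1] + b * V[j] + c * V[j + 1]
        V = new

    return V[round(S0 / dS)]             # read off the grid node closest to S0

print(explicit_fd_put(50, 50, 1.0, 0.05, 0.2, 100, 50, 200))
```

Each grid value at time t is a weighted combination of three values at time t + Δt, which is exactly the trinomial lattice interpretation mentioned above.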
Partial differential equation problems come in two types. The first, known as boundary value problems, require a full set of boundary conditions to be specified; the second, known as initial value problems, require only part of the boundary to be specified. Most option valuation problems are initial value problems, and for these the explicit finite difference method is the most appropriate, because the extra boundary conditions needed by the implicit method introduce errors (Hull & White, 1990).
Furthermore, with respect to efficiency and accuracy, the explicit finite difference method with a logarithmic transformation is more efficient than the implicit method, because it does not require the solution of a system of simultaneous equations (Geske & Shastri, 1985).
In addition, in terms of the associated jump process, the simple explicit difference approximation corresponds to a three-point jump process, while the more complex implicit difference approximation corresponds to a generalised jump process in which the value of the derivative security may jump to any of an infinite set of future values, not just three points (Brennan & Schwartz, 1978).
Finite difference methods can be used in the same situations as binomial tree approaches: they can handle both American and European options, but cannot easily be used when the payoff of an option depends on the past history of the state variable. Finite difference methods can also be applied when there are several state variables (Hull, 2008). The binomial tree method, however, is more intuitive and more easily implemented, so financial economists tend to use it when only a small number of option values are needed, whereas finite difference methods are more frequently used, and more efficient, when a large number of option values must be computed (Geske & Shastri, 1985).
The comparison between the three methods
Overall, comparing the three numerical methods for valuing options, Monte Carlo simulation should be seen as a complement to binomial tree models and finite difference methods, a role driven by the increasing complexity of financial instruments (Boyle, 1977). Binomial and finite difference methods suit low-dimensional problems with standard dynamics, while Monte Carlo simulation is the proper method for high-dimensional problems and stochastic parameters (Ibanez & Zapatero, 2004).
Binomial tree models and finite difference methods are classified as backward methods and can easily handle early exercise opportunities. Monte Carlo simulation, by contrast, is a forward-looking method and sits uneasily with backward induction (Ibanez & Zapatero, 2004).
Of the two similar methods, the finite difference approach is equivalent to a trinomial lattice method. Both are useful for American and European options, and both tend not to be used where the option's payoff depends on the past history of the state variables. There are nevertheless differences between them: binomial tree methods suit the calculation of a small number of option values, while finite difference methods are more efficient and accurate when a large number of option values is required. In addition, binomial tree models are more intuitive and more readily implemented than finite difference methods.
Monte Carlo simulation is a powerful and flexible method for valuing various options. In principle, Monte Carlo simulation evaluates a multi-dimensional integral, which is its chief attraction compared with other numerical methods: it can solve high-dimensional problems. Its drawbacks should not be neglected: it requires many computations and cannot easily handle situations with early exercise opportunities. Building on traditional Monte Carlo simulation, quasi-Monte Carlo methods were developed to improve efficiency; their basic idea is to use deterministic rather than random sequences.
However, Monte Carlo simulation has historically not been used for valuing American options, because the optimal exercise frontier is unknown. One way to value American options is to combine Monte Carlo simulation with dynamic programming (Ibanez & Zapatero, 2004).
To sum up, with the growing complexity of numerical computation, numerical methods are widely used to value derivative securities. This paper has provided a general description of, and a specific comparison between, the three numerical methods mentioned above. Binomial tree models, a lattice approach, are a powerful and intuitive tool for valuing both American and European options, with or without dividends. When there are a small number of option values, the binomial method is more efficient and accurate; it is inefficient, however, where the effects of cash dividends must be analysed.
The finite difference method can be seen as a trinomial lattice approach. It suits problems of low dimension and has been regarded as an efficient and accurate way to value American and European options. Compared with binomial tree models, finite difference methods are more efficient and accurate when practitioners compute a large number of option values.
Monte Carlo simulation can be seen as a complement to the two methods mentioned above. It handles high-dimensional problems, whereas the other two are used for low-dimensional ones. Its flaws are that it is time-consuming and cannot readily handle situations with early exercise opportunities. In this regard, quasi-Monte Carlo methods, based on traditional Monte Carlo simulation, use deterministic sequences, known as quasi-random sequences, which make it possible to obtain convergence with known error bounds.
Barraquand, J. & Martineau, D. (1995) "Numerical Valuation of High Dimensional Multivariate American Securities," The Journal of Financial and Quantitative Analysis, Vol. 30, No. 3, pp. 383-405
Boyle, P.P. (1977) "Options: A Monte Carlo Approach," Journal of Financial Economics, Vol. 4, pp. 323-338
Boyle, P., Broadie, M. and Glasserman, P. (1997) "Monte Carlo Methods for Security Pricing," Journal of Economic Dynamics and Control, Vol. 21, Issues 8-9, pp. 1267-1321
Brennan, M.J. & Schwartz, E.S. (1978) "Finite Difference Methods and Jump Processes Arising in the Pricing of Contingent Claims: A Synthesis," The Journal of Financial and Quantitative Analysis, Vol. 13, No. 3, pp. 461-474
Cox, J.C., Ross, S.A. and Rubinstein, M. (1979) "Option Pricing: A Simplified Approach," Journal of Financial Economics, Vol. 7, Issue 3, pp. 229-263
Figlewski, S. & Gao, B. (1999) "The Adaptive Mesh Model: A New Approach to Efficient Option Pricing," Journal of Financial Economics, Vol. 53, Issue 3, pp. 313-351
Geske, R. & Shastri, K. (1985) "Valuation by Approximation: A Comparison of Alternative Option Valuation Techniques," The Journal of Financial and Quantitative Analysis, Vol. 20, No. 1, pp. 45-71
Hull, J. (2008) Options, Futures, and Other Derivatives, 7th edition, Upper Saddle River: Pearson Prentice Hall
Hull, J. & White, A. (1988) "The Use of the Control Variate Technique in Option Pricing," Journal of Financial and Quantitative Analysis, Vol. 23, Issue 3, pp. 237-251
Hull, J. & White, A. (1990) "Valuing Derivative Securities Using the Explicit Finite Difference Method," Journal of Financial and Quantitative Analysis, Vol. 25, No. 1, pp. 87-100
Ibanez, A. & Zapatero, F. (2004) "Monte Carlo Valuation of American Options through Computation of the Optimal Exercise Frontier," Journal of Financial and Quantitative Analysis, Vol. 39, No. 2, pp. 253-275
Joy, C., Boyle, P.P. and Tan, K.S. (1996) "Quasi-Monte Carlo Methods in Numerical Finance," Management Science, Vol. 42, No. 6, pp. 926-938

Are We Living in a Computer Simulation?

To start, imagine that either distant-future humans or an advanced alien race could build, program and run a supercomputer capable of simulating a whole separate universe. If so, who is to say that our descendants will not try to do exactly that one day? And, more importantly, how would we know that we are not currently living in such a simulation?


This thesis is important because it makes people question the very foundations of what they know to be true. It makes them consider how disconnected they may be from the real world, which lends the subject an almost morbid fascination. This paper aims to offer the reader perspectives and ideas they may not previously have considered. It is also a good research question because it is one the writer has been interested in for some time.

The methodology used to explore this question is mainly research: recording key points and examining them in essay form. Philosophical ideas are also explored, with multiple perspectives and arguments raised. Because of the partly scientific roots of the question, I will refrain from expressing my own opinion as far as possible; because of its philosophical side, the reader will be placed inside some of these perspectives so that the point behind them can be better understood.

The hypothesis for the research is that, of course, the chances are we are not living in a computer simulation, but the points explored may suggest that it is possible. Because a justified conclusion requires examining many philosophical and scientific points first, a more specific hypothesis is hard to state in advance.


The simulation argument proposed by Nick Bostrom [1] holds that at least one of three propositions must be true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilisation is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. All three presuppose that a civilisation could, in principle, reach a posthuman stage.

Nick Bostrom's Three Arguments

Posthuman (or post-human) is a concept that originates in, and tends to stay within, science fiction. It describes a state in which a being, human or otherwise, transcends being human. This could mean humans who have physically evolved past any state resembling today's humans; they might, for instance, have cured ageing, allowing them to live for hundreds or even thousands of years. It could also mean beings so intelligent that they could create a highly sophisticated and advanced computer simulation, or answer seemingly impossible questions tens of thousands of years beyond today's humans.

Of course, Nick Bostrom's argument does not state that we are living in a simulation; it presents three options. The first is that humans become extinct before reaching a posthuman stage. Considering human nature, this is arguably the most likely of the three. Indeed, mathematician Dr Fergus Simpson [2] estimates a 0.2% chance of a "global catastrophe" occurring in any given year of the 21st century.

The second option proposes that any posthuman civilisation would be extremely unlikely to run a significant number of simulations of its evolutionary history (or variations of it). This is the second most likely option, given how specific the simulation argument is: there is always a large chance that, as the option states, the posthumans could simply "not do it", to put it plainly (I go into more detail later in the paper). This option also raises the interesting point that any posthuman civilisation could be the one to create, or in this case not create, the simulation. If the nature of the simulation is to simulate a whole universe, then we could be inside it as programs whether our human descendants created it or some alien civilisation did.

The third option makes the bold claim that we are indeed living in a computer simulation. To explain why this follows logically, consider the probability. Suppose a posthuman stage is reached and its inhabitants do decide to simulate their ancestors (us); then who is to say that we, or they, are the biological originals? If a simulation of this magnitude is feasible and has happened once, it could happen again, perhaps many times, and if these simulations were spawned inside other simulations there could be an enormous number of them. The chance of us being the original is then very slim, and it is safe to assume (if the first two propositions do not hold) that we are indeed living in a simulation. I explore this point further later in the paper.

The Assumption of Substrate-Independence

If we want a computer simulation to model a human mind in perfect detail, so that each person in the simulation is truly their own cognitive being, we need a substrate-independent mind: the idea that minds are not limited to brain cells, but can be transferred to, or created in, other media, in our case a computer program. This is not strictly necessary, of course; the posthumans could instead program a neural network to act just like a human.

Substrate independence is believed to be possible because, on this view, human consciousness is nothing more than electrical signals passing around the brain; the sense of awareness that makes us feel special is argued to be merely a side effect of intelligence. We could call it a survival technique: when we are threatened or on the verge of death, that last burst of energy and strength can be brought on by our will to save ourselves. There are arguments that human consciousness is spiritual, almost supernatural, but the computer simulation argument cannot get anywhere on that assumption, so we will accept that it is only a matter of time until a human mind can be programmed in code and run on computer hardware.

Very small factors at the microscopic level that affect a neuron can be ignored; for example, nerve growth factors and negligible chemical releases in the synapse would not affect the signals sent between neurons. Although it would lack authenticity, the posthumans could program a person to react to certain stimuli in a way identical to real humans, preserving the integrity and relatability of the simulation. The remaining challenge in creating a digital mind lies in programming the syntax of consciousness, which we still do not fully understand; we can only assume that our increasing intelligence will eventually shed some light on the mind.

The processing power and other computational needs


Our technological advancements in computing have been, and continue to be, rapid. There was a time when the only computer game was Pong, consisting of a few 2D shapes interacting with each other. Only 47 years later, our games are set in vast 3D worlds at resolutions of up to 4K; we can play with over a hundred other players simultaneously around the world, and we have recently developed virtual reality headsets that provide a comparable level of detail and immersion. If progress continues at this rate, the difference between the programmed world and reality will become increasingly small and eventually non-existent.


Our proposed posthuman computer simulation does not need to process every single quantum particle in its universe, nor the universe's full extent. If the intention of the simulation were, for example, to study 21st-century human anatomy or behaviour, why would it need to compute the temperature of a planet outside our observable universe? In short, the computer would not need to process information that cannot affect the outcome of the study. An interesting example of this kind of simulation comes from the American sitcom Rick and Morty [3], season 1, episode 4, "M. Night Shaym-Aliens!". In this episode the protagonist Morty realises he is living in a computer simulation run by aliens, and removes himself by literally jumping out of it: the simulation is rendered in the physical world around him, and when the computer is hacked by Morty's grandfather Rick, leaving it unable to create more of his world, he runs out of it as one would run off a stopping treadmill. Another interesting computing variable explored by the show is that when the computer had to simulate a much larger area, the detail and fidelity of the simulation decreased; the artificial intelligence robots posing as humans started to act less like humans and more like cattle. Rick explains that this is all due to a lack of processing power.

Obviously we cannot yet create a conscious mind in a computer; we lack the processing power and software required. But at the current rate of technological advancement, we will eventually reach the point where a human mind can be implemented as a computer program. For the simulation argument it simply does not matter whether reaching that goal takes 50 years or 500 million years from now; in either case the simulation can still be created, just a little later.

We currently have quite a good idea of how much processing power is required to emulate a human brain. In an article in H+ Magazine [4], S. Orca states that the human cortex has about 22 billion neurons and 220 trillion synapses, and that while a supercomputer capable of running a software simulation of the whole brain does not yet exist, researchers estimate it would require a machine with a computational capacity of at least 36.8 petaflops (a petaflop is a thousand trillion floating point operations per second). That may sound like a lot of processing power, but the world's most powerful supercomputer, "Summit" [5] in the United States, peaks at a staggering 200 petaflops. Our current computers therefore already have the raw power to emulate a human mind.
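The arithmetic behind this comparison is a one-liner, using the figures as quoted above:

```python
# Figures as quoted in the text (S. Orca / H+ Magazine, and Summit's peak).
brain_flops = 36.8e15    # estimated requirement for a human-brain emulation
summit_flops = 200e15    # Summit's quoted peak throughput

headroom = summit_flops / brain_flops
print(f"Summit exceeds the brain-emulation estimate roughly {headroom:.1f}x over")
```

So by these figures a single such machine has about five times the estimated requirement, for one brain at least.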

The human mind also stores memories: information learnt and put away for future use. Our computers do that as well, in their memory. The brain's storage requirement was calculated in the same article by S. Orca to be around 3.2 petabytes. Providing for this is easy: rack servers that store up to 5 petabytes of data can be bought from tech outlets, and larger, more efficient storage technologies are being developed with each passing decade.

Of course, in a computer simulation you do not just have your own mind, with free will and awareness, but a whole world full of intelligent beings with the same attributes. Unfortunately, it is futile to try to calculate exactly how much processing power is needed to emulate the rest of the universe; as noted before, the scale of the desired simulation determines the processing required. It might be necessary to simulate only the Earth, with all other objects in the sky rendered as mere "lights"; if any of them needed to be interacted with, say an asteroid or the Moon landing, the posthumans could tweak the simulation slightly to keep the humans convinced, for instance by adding only the section of the Moon that was walked on, for just that time. If the posthumans wanted to emulate whole star systems and galaxies, we can only extrapolate from how far our computers have already developed and conclude that we will eventually reach those milestones too.

Core reasoning


Having given some (admittedly not uncontroversial) reasoning for a computer simulation of this nature being possible, we must address why our descendants would most likely not be the posthumans who create the simulation, but rather why we (assuming Nick Bostrom's first two propositions are not fulfilled) are almost certainly living in one. If the programs inside the simulation themselves reach a posthuman stage, there is a good chance that they will create a very similar computer simulation of their own. This can create a loop of an enormous number of simulations nested inside each other, each believing itself to be the original biological civilisation. If you were to spread all these simulations out (there could be a near-infinite number) and throw a dart at one, what are the chances of hitting the first one, or of the creators of that simulation being the first posthumans to do so? On this probability alone, the chance that we are the biological originals is vanishingly small, and we would in fact be living in a computer simulation.
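The dart-throwing argument reduces to simple counting; the following hypothetical sketch (the function and the numbers are mine, not the source's) shows how fast the odds of being the original collapse as the number of nested simulations grows:

```python
def prob_original(n_simulations):
    """Chance that a randomly chosen civilisation, out of one biological
    original plus n_simulations simulated copies, is the original."""
    return 1.0 / (n_simulations + 1)

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} simulations -> P(original) = {prob_original(n):.2e}")
```

With a million simulations the probability of being the original is already one in a million and one, which is the quantitative core of Bostrom's third proposition.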


So, imagine that in your world you come across a brick wall. As an ordinary human you can assume it is hard, heavy, has a pretty pattern, and is real. But if you zoomed in to the atomic level you would see that it is made of atoms which, relative to their size, are quite spread out and detached from each other. Their molecular bonds and the way their individual particles interact govern how we interact with the wall, but we do not see or think about this every time we pass one. On an atomic level, then, the brick wall might not seem hard, heavy or real (though still quite pretty, I'm sure). Now imagine you are playing a video game and come across a brick wall; the fundamental building block of this object is code. Neither code nor particles can be seen by the naked eye, so who is to say the real wall is not made of code, with the posthumans having installed a subroutine that shows you the particles you should be seeing every time you look into an electron microscope? I know this is quite a bold example, but my point is that the posthumans could be similar to game developers in their concern for immersion and for keeping the simulation a secret.

To make the simulation easier to run and to achieve stability, it could employ a kind of technological solipsism. Solipsism is the view or theory that the self is all that can be known to exist. Imagine putting on a virtual-reality headset and playing a game. To make the game less straining on the graphics processing unit, the space behind you is not rendered while you are looking away; it is only loaded when you turn to look at it. The posthumans could take the same shortcut by loading only the parts of the simulation you interact with. All the personal belongings in your closed wardrobe may not exist right now, popping into existence only when you are about to open the door. Just as in the VR game, this would reduce the workload on the simulation’s computer.
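
As a toy sketch of this load-on-demand idea (purely illustrative, borrowing the game-engine analogy; no claim about how any real simulation or engine works):

```python
class LazyWorld:
    """Regions are generated only the first time an observer looks at them."""

    def __init__(self, generator):
        self._generate = generator   # how to create a region's contents
        self._loaded = {}            # regions that exist "right now"

    def observe(self, region):
        # The wardrobe's contents pop into existence only at this moment.
        if region not in self._loaded:
            self._loaded[region] = self._generate(region)
        return self._loaded[region]

    def exists(self, region):
        return region in self._loaded

world = LazyWorld(lambda region: f"contents of {region}")
assert not world.exists("wardrobe")    # nothing rendered yet
world.observe("wardrobe")              # the observer opens the door
assert world.exists("wardrobe")        # now it exists
```

The computer only ever pays for what is being watched, which is exactly the economy the essay describes.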

The second of Nick Bostrom’s three points states that “any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)”. This may sound quite straightforward: the posthumans simply might not want to. But there is more behind this point when you consider how intelligent these people would be. To build a computer architecture that can support this magnitude of processing, the posthuman race would have to be like gods compared to us. Imagine the most intelligent ant in the world walking into a movie theatre and seeing a collection of humans sitting in rows of seats, eating popcorn and watching flashing lights in the shape of large humans on a big wall. It would seem nonsensical to the ant, and no matter how hard you tried to explain the experience, it would never understand. That gives you a rough idea of our hopes of understanding anything a posthuman would do. As the “Kurzgesagt [6]” YouTube channel puts it, “it is quite arrogant to assume these gods would create this computer simulation on something as insignificantly dumb as us?”

What reason could the posthumans have for creating a simulation of this magnitude? One theory, also brought to light by Nick Bostrom [1], is that we are living in an ancestral simulation, similar in intention to a history channel’s re-enactment of a battle. But it is real: we have, or at least possess the illusion of, free will. The posthumans may have wanted to see our first trip to Mars, or simply what humanity looked like as it aged over the millions of years before them. I admit they must want immense detail, and it seems like a great deal of effort, but they are the posthumans, and I’m sure that if they did it, it was for a good reason.

Now it could be that our world, which feels so real and so perfectly normal that we surely cannot be in a simulation, turns out to be completely artificial. The posthumans’ reality could have entirely different laws of physics or fundamental variables that they changed or left out of our simulated reality. Consider “Plato’s Allegory of the Cave [7]”, in which prisoners are chained in a cave, forced to look in only one direction at a wall. A fire behind the prisoners shines on puppets, also behind them, that cast shadows on the wall they all face. To these prisoners, the cave around them and the shadow figures are their world; all sounds emanating from outside the cave and every flickering light seem to be created by the shadow figures. If these poor prisoners were to escape, they would not know what to do with themselves in the outside world; it would be like stepping into another dimension. We can apply this to our own experience: everything we know, we know to be right only because it is what we have always known, or because we have never known better. So the posthumans can manipulate their simulation’s programs (us) to accept whatever they want as true reality. This could mean that all the evidence for our reality being the original biological one was planted by the posthumans as insurance.


Quantum Monte Carlo effect


Students at the University of Oxford have run an experiment to create a computer simulation of the quantum Monte Carlo (QMC) method [8]. In short, QMC is used to study phenomena in physical systems that exhibit strong magnetic fields and very low temperatures, which manifest as an energy current running across the temperature gradient. They had to use random sampling to analyse many-body quantum problems whose equations cannot be solved directly. They discovered that the complexity of the simulation increases exponentially as more particles are analysed. The more information the computer has to handle, the more processing power it needs, which is self-evident for most computer programs, but here the growth is exponential: for every particle added to the analysis, the processing power must double. The researchers calculated that merely storing the information about a couple of hundred electrons would require a computer memory physically built from more atoms than exist in the observable universe.
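
The exponential scaling can be illustrated with a back-of-the-envelope sketch (my own arithmetic, not figures from the study; the 10^80 atom count and 16 bytes per amplitude are common order-of-magnitude assumptions):

```python
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # common order-of-magnitude estimate
BYTES_PER_AMPLITUDE = 16                  # one double-precision complex number

def state_vector_bytes(n_particles: int) -> int:
    """Memory (bytes) to store all 2**n complex amplitudes of an n-particle state."""
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

# Adding one particle doubles the requirement:
assert state_vector_bytes(11) == 2 * state_vector_bytes(10)

# A few hundred particles already outstrip the universe's atom budget,
# even if every atom could store a whole byte:
assert state_vector_bytes(300) > ATOMS_IN_OBSERVABLE_UNIVERSE
```

This is why brute-force storage of a full many-body quantum state breaks down so quickly, whatever the hardware.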

The QMC result is a strong piece of evidence that we are not living in a computer simulation, and it cannot be ignored. But that does not stop us quoting Zohar Ringel, an author of the QMC paper: “Who knows what the computing capabilities are of whatever simulates us?” And I must link back to what I said about the posthumans controlling our reality and making their own laws of physics and computing for us to live with. Admittedly, this gives the philosophy behind my argument an arrogant stance, in that it truly does not matter what evidence is provided. So it can understandably be dismissed as less a useful argument and more a trump card.


The probability of any race reaching a posthuman stage is extremely slim; the probability of a posthuman race running an ancestral computer simulation is also extremely slim; yet if by some miracle these conditions are fulfilled, then the probability that people with our kind of experiences are living in a simulation is very high. A computer of a magnitude that could simulate a mind is most likely possible, and everything around us could be manipulated to seem real while being just a program in its natural form. We know, at least with our current understanding of physics, that the entire observable universe down to its quantum level cannot be processed by any machine; we know this from the QMC study and the general nature of quantum entities. So it comes down to this: either we as a species overcome a very low probability of excellence, or we must face a very high probability of gloominess.

External links and sources:

[1] –

[2] –                      fergus-simpson-doomsday-argument-a7426451.html

[3] – 

[4] – 

[5] –

[6] –

[7] –

[8](picture)  – 

[9] –  

By Henry Mills

Simulation of Extrusion Replacement with Wire+arc Based Additive Manufacturing





Table of Contents


1.1 Background

1.2 Scope


2.1 Additive Manufacturing

2.1.1 Additive Manufacturing in Metals

2.2 Wire Arc Additive Manufacturing

2.2.1 Classification of WAAM Process

2.2.2 Robotic WAAM System

2.2.3 Common Defects in WAAM Fabricated Components

2.2.4 Methods for Quality Improvement in WAAM Process

2.3 Numerical Simulation in Additive Manufacturing

2.3.1 Modelling with Finite Elements


3.1 Research Gaps

3.2 Research Questions



Figure 1. Additive Manufacturing Process.  (Gibson et al., 2010)

Figure 2. Working Principle of Additive Manufacturing. (Kruth et al., 1998)

Figure 3. Classification of AM for Metals. (Ding et al., 2015)

Figure 4. WAAM system design concepts, University of Wollongong.

Figure 5. Defects in material relation in WAAM process. (Wu et al., 2018)

Figure 6. Bead on plate model’s temperature field a) Goldak b) Proposed model (Montevecchi et al., 2016).

Figure 7. Geometry of a WAAM multilayer wall sample (Ding et al., 2014)

Figure 8. Mesh of steady state thermal model. (Ding et al., 2014)

Figure 9.  Distortion comparison between the efficient “engineering” and the transient thermomechanical models (Ding et al., 2014)



Equation 1. Base material power density function

Equation 2. Filler material power density function


Table 1. The various wire feed processes (Karunakaran et al., 2010)

Table 2. Comparison of various WAAM techniques (Wu et al., 2018)

1.1 Background

Additive manufacturing (AM) is revolutionising the way engineers work. AM, also known as 3D printing, manufactures components using advanced engineering design tools (Wong and Hernandez, 2012). Present-day industries have an ever-increasing demand for sustainable, low-cost and environmentally friendly manufacturing processes, in contrast to conventional processes, which often require a large amount of machining (Guo and Leu, 2013).

Thus, AM has become an important and revolutionary industrial process for manufacturing intricate metal workpieces. Beads of weld are layered one upon the other to build up the 3D element (Lockett et al., 2017).

Additive manufacturing has the potential both to replace standard production processes and to complement traditional ones. AM is therefore receiving increasing attention and effort around the world, reflecting the tremendous interest in evaluating its potential as a useful and possibly disruptive technology (Kruth et al., 1998). Exploring the potential of AM can also lead to innovations in lightweight structures.

Wohlers (2010) reported that AM technology has grown rapidly over the past twenty years, gaining increasing demand for its products and services, which in turn has driven the growth of the wire arc based additive manufacturing process in recent years. Thus, there has been growing research interest in the metal additive manufacturing field.

 1.2 Scope

This thesis aims to investigate an improved method of additive manufacturing, using the wire-arc based technique for structural components, with the help of computer simulation. It also includes conducting parametric studies and developing simple design implications, followed by building models, modifying them and interpreting the results.

Material properties established elsewhere will be used to develop a materially and geometrically nonlinear finite element model based on continuum models, i.e. accounting for:

•          Large displacements

•          Material plasticity

This chapter reviews the main fields related to the topic of the thesis. It begins with the principal concepts of additive manufacturing and the application of this technology to metals; the wire and arc additive manufacturing process and its simulation are then reviewed.

 2.1 Additive Manufacturing

Additive manufacturing (AM), is defined by ASTM as the “process of joining materials to make objects, usually layer by layer, from 3D CAD data”. (F2792-10, 2018)

AM is used for a broad set of final parts, including models for design verification, better understanding of concepts, and arrangements with properties relevant to industrial applications (Guo and Leu, 2013). The process developed from the rapid prototyping concept, in order to rapidly build parts and components designed by engineers and test their performance.

Compared with conventional processes, AM has the advantage of being able to produce components with high geometric complexity. The materials produced have been significantly improved, offering better performance at lighter weight, leading to lower fuel consumption and reduced cost (Guo and Leu, 2013). Because AM handles high geometric complexity so effectively, it reduces the number of components required to complete an assembly, thereby eliminating the need for joining and forming processes.

Guo and Leu (2013) proposed that, given the required material properties, AM can produce prototype parts for manufacturing the final products and for assessing different quantities of the respective product. Currently, the direct fabrication of functional end-use products has become the main trend of AM technology.

The following are the steps in the AM process (Gibson, Rosen and Stucker, 2010):

Figure 1. Additive Manufacturing Process.  (Gibson et al., 2010)

The first step involves converting the CAD file to a stereolithography (STL) file. The drawing in the CAD file is divided and sliced so that information can be printed from the respective layers (Wong and Hernandez, 2012).

The CAD model is obtained by making a 3D scan, redesigning a product, or downloading a 3D model. The CAD model is then converted to an STL/AMF file, which is used for 3D printing, rapid prototyping and computer-aided manufacture. Next, the CAD drawing needs to be sliced, which can be done with slicing software; slicing is performed so that the 3D printer gets the required information from each layer of the CAD drawing (Kruth, Leu and Nakagawa, 1998).

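
The slicing step can be sketched in a few lines (a minimal illustration of the idea only; a real slicer also intersects each z-plane with the mesh to produce 2D contours and toolpaths):

```python
def slice_heights(z_min: float, z_max: float, layer_height: float):
    """Return the z-plane of every layer the printer will deposit, bottom to top."""
    heights = []
    z = z_min + layer_height
    while z <= z_max + 1e-9:          # small tolerance for float rounding
        heights.append(round(z, 6))
        z += layer_height
    return heights

# A 10 mm tall part printed with 2 mm layers needs 5 passes:
assert slice_heights(0.0, 10.0, 2.0) == [2.0, 4.0, 6.0, 8.0, 10.0]
```

Each returned height corresponds to one layer of information handed to the printer.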

The general working principle of AM is schematically represented in figure 2:

Figure 2. Working Principle of Additive Manufacturing. (Kruth et al., 1998)

2.1.1 Additive Manufacturing in Metals

AM has many advantages, such as a wide range of deposition rates (low to high) and near-net shape, thereby minimising material loss, reducing manufacturing cost and conventional machining time, and giving better structural integrity. It can replace the manufacture of components made of expensive materials requiring complex machining, so that the expected waste is minimised (Mehnen et al., 2010).

When a casting step is involved in the layer-by-layer process, it is designated a pattern-based, or indirect, process; otherwise it is a direct process, which approaches rapid tooling: patterns are not manufactured, and additive processes are instead used so that tools can be produced directly.

Manufacturing systems are divided into three broad categories: (Frazier, 2014)

(i)                  powder bed systems

(ii)               powder feed systems, and

(iii)             wire feed systems

Ding et al. (2015) put forward that the powder-feed process is beneficial for manufacturing small components with high geometric accuracy, while wire-feed systems offer a cleaner approach compared with the powder-feed approach, which poses hazards to the operators of the AM process.

Further, the classification is based on energy sources (Ding et al. 2015):

(i)                  electron beam

(ii)               arc welding

(iii)             laser based

Figure 3 represents the classification of Additive Manufacturing for Metals.

Figure 3. Classification of AM for Metals. (Ding et al., 2015)

Table 1 presents the various existing wire feed process technologies (Karunakaran et al., 2010).

Table 1. The various wire feed processes. (Karunakaran et al., 2010)

Laser based:

•          Direct metal deposition (DMD)

•          Directed light fabrication (DLF)

•          Laser additive manufacturing (LAM)

•          Laser based direct metal deposition (LBDMD)

•          Rapid direct metal deposition

Electron beam:

•          Electron beam free forming

Arc welding:

•          Hybrid layered manufacturing

•          Hybrid plasma deposition and milling (HPDM)

•          Shape deposition manufacturing


2.2 Wire Arc Additive Manufacturing

Wire arc based additive manufacturing has attracted engineers in the industrial manufacturing sector in recent years. This is due to the feasibility of manufacturing large-scale metal components at high deposition rates, with reduced equipment cost, high material utilisation and environmentally friendly properties (Ding et al., 2015).

Wu et al. (2018) stated that compelling progress has been made in the WAAM process, along with improvements in the microstructure and mechanical properties of fabricated components. With this tremendous growth, an increasing range of materials and applications has become associated with the process.

Baker (1925) first proposed the principle behind the wire and arc additive manufacturing (WAAM) process: the combination of an electric arc as a heat source with wire feeding to create a 3D component.

In contrast to traditional subtractive manufacturing, WAAM is a promising candidate for extensively used materials such as nickel, titanium, aluminium and steel, because of its ability to reduce overall production time by 40-60% and post-machining time by 15-20%. It is also preferred over traditional subtractive manufacturing for fabricating metal components with complex geometry that are large and expensive.

2.2.1 Classification of WAAM Process

The three types of wire arc based additive manufacturing, classified by the nature of the heat source, are:

Gas Metal Arc Welding (GMAW)-based. (Ding et al., 2011)

Gas Tungsten Arc Welding (GTAW)-based. (Dickens et al., 1992)

Plasma Arc -Welding (PAW)-based. (Spencer et al.,1998)

GMAW-based WAAM has a deposition rate 2-3 times higher than the GTAW-based or PAW-based methods. However, because the electric current acts directly on the feed wire, GMAW-based WAAM generates more weld fume and is less stable than the other types. The chosen WAAM technique determines the production rate and the conditions under which a part is manufactured (Wu et al., 2018).

The comparison of the various WAAM techniques is shown in Table 2 (Wu et al., 2018):

Table 2. Comparison of various WAAM techniques. (Wu et al., 2018)

GTAW-based:

•          Non-consumable electrode; separate wire feed process

•          Rate of deposition: 1-2 kg/hour

•          Wire and torch rotation are needed

GMAW-based:

•          Consumable wire electrode

•          Rate of deposition: 3-4 kg/hour

•          Poor arc stability, spatter

Cold metal transfer (CMT):

•          Reciprocating consumable wire electrode

•          Rate of deposition: 2-3 kg/hour

•          Zero spatter, low heat input, high process tolerance

Tandem GMAW:

•          Two consumable wire electrodes

•          Typical deposition rate: 6-8 kg/hour

•          Easy mixing to control composition for long-range-ordered alloy manufacturing

PAW-based:

•          Non-consumable electrode; separate wire feed process

•          Rate of deposition: 2-4 kg/hour

•          Wire and torch rotation are needed


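To see what these deposition rates mean in practice, here is a hypothetical build-time comparison (midpoint rates assumed from the ranges above; real rates depend on material and process settings):

```python
# Assumed midpoints of the deposition-rate ranges quoted by Wu et al. (2018).
RATES_KG_PER_HOUR = {
    "GTAW-based": 1.5,
    "GMAW-based": 3.5,
    "CMT": 2.5,
    "Tandem GMAW": 7.0,
    "PAW-based": 3.0,
}

def deposition_hours(part_mass_kg: float, technique: str) -> float:
    """Ideal arc-on time to deposit the part's mass (ignores dwell and cooling)."""
    return part_mass_kg / RATES_KG_PER_HOUR[technique]

# A hypothetical 14 kg near-net-shape preform:
assert deposition_hours(14, "Tandem GMAW") == 2.0
assert deposition_hours(14, "GTAW-based") > 9
```

The spread illustrates why technique selection drives the production rate, as noted above.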

2.2.2 Robotic WAAM System

Wu et al. (2018) state that most WAAM systems use an articulated industrial robot as the motion mechanism.

There are two designs available:

The first design provides shielding for inert gas with the help of an enclosed chamber.

The second design uses a robot positioned on a linear rail with a local gas shielding mechanism, either already available or purpose-designed, to increase the overall work envelope. This capability allows very large metal structures to be built.

Figure 4 shows an example of this design of WAAM system, used for the research and development at the University of Wollongong (UOW).

Figure 4. WAAM system design concepts, University of Wollongong.


There are three steps in manufacturing a part with WAAM:

(i)                  Process planning

(ii)               Deposition

(iii)             Post processing


Fabrication with high geometric accuracy can be achieved by developing the desired robot motions and welding parameters through 3D slicing of the CAD model (done so that the 3D drawing translates into something the system can execute) and effective programming software (Ding et al., 2015a, 2015b, 2016).

3D slicing and programming software is used to reduce potential process faults or defects, based on the welding deposition model used to fabricate the components. Defects can be avoided through the automated path planning and process optimisation that such software provides (Ding et al., 2015).

To improve the material deposition efficiency (defined as the ratio of the real area of the geometry to the deposited area), Ding et al. (2015) created an algorithm based on the medial axis transformation, which can increase the material deposition efficiency by a factor of 2.4.
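
As a simple illustration of this metric (the areas below are hypothetical numbers, not values from Ding et al.):

```python
def deposition_efficiency(geometry_area: float, deposited_area: float) -> float:
    """Ratio of the part's real cross-sectional area to the area actually deposited."""
    if deposited_area <= 0:
        raise ValueError("deposited area must be positive")
    return geometry_area / deposited_area

# A path that over-deposits badly, versus the reported 2.4x improvement:
baseline = deposition_efficiency(80.0, 200.0)   # 0.4
improved = 2.4 * baseline                        # -> 0.96
assert abs(improved - 0.96) < 1e-9
```

A higher value means less excess material to machine away afterwards.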

Sensing of welding signals, metal transfer behaviour (Geng et al., 2017), deposited bead geometry and interpass temperature in WAAM systems can help to manufacture better-quality products and support in-process monitoring. In a preliminary study, Zhang et al. (2016) developed a dedicated control technology for WAAM that included processing the CAD model into vector-based programming, a path-planning strategy and control of the deposition process parameters. Special parameter control was applied at the start and end of each pass, where a higher and a lower bead height, respectively, had been identified.


2.2.3 Common Defects in WAAM Fabricated Components

Wu et al. (2018) stated that, even though the mechanical properties of WAAM parts compare well with those of traditional methods, there remain a few additive manufacturing defects that must be considered for the various fields of application.

Owing to intense process conditions, some manufactured parts are exposed to defects such as high residual stress, porosity, cracking and delamination. These defects must be avoided, as they can trigger other failure modes such as high-temperature fatigue.

The following can cause defects in WAAM (Wu et al., 2017):

(i)                  Accumulation of heat can lead to thermal deformation.

(ii)               The programming strategy is poor.

(iii)             The parameters are not set up to acceptable standards, which causes instability in the weld pool dynamics.

(iv)              Machine malfunction and environmental significance (e.g. gas contamination)

Figure 5 illustrates the defects associated with different materials:

Figure 5. Defects and material relation in WAAM process. (Wu et al., 2018)

The figure shows that titanium alloys suffer severe oxidation, followed by residual stress and deformation; porosity is greatest for aluminium alloys, followed by cracking; steel shows severe deformation and cracking along with poor surface roughness; nickel alloys show severe cracking and oxidation; and large deformation, residual stresses and cracks typically occur in bimetals.


2.2.4 Methods for Quality Improvement in WAAM Process

Generally, post-processing treatment of WAAM parts is required to improve material properties, eliminate deformation and residual stress, and reduce porosity. Most of the problems that affect deposition quality can be eliminated by suitable post-processing. Numerous post-processing treatments aimed at improving WAAM quality have been reported recently (Wu et al., 2018).


To accomplish quality improvement, it is crucial to have a comprehensive understanding of the different materials, the ideal process set-up, the component parameters and the post-processing methods. Three fundamental perspectives are considered: feedstock optimisation, the manufacturing process and post-process treatment. To guarantee consistent quality and reduce defects, the material should be deposited with a sensibly controlled WAAM welding process.

As WAAM matures as a commercial manufacturing process, the development of an economically accessible WAAM framework for metal parts is an interdisciplinary challenge, integrating welding process development, materials engineering, thermo-mechanical engineering, and mechatronics and control system design.


2.3 Numerical Simulation in Additive Manufacturing

2.3.1 Modelling with Finite Elements

In the past few years, many studies have been published on the direct modelling of the physical phenomena involved in additive manufacturing. These theories are interrelated. In powder projection, the final geometry of parts depends mainly on the following:

The evolution of the welding local geometry during the manufacturing;

The displacements and the inherent strains induced by the manufacturing.

The local weld geometry depends on the dimensions of the molten pool that the laser forms on the substrate. Displacements and residual strains depend mainly on the temperature gradient and the thermo-mechanical properties of the material used.

Residual stresses and distortion of the manufactured element are among the main defects of WAAM. Finite element (FE) modelling can therefore be used to improve quality and enhance the process, since large-scale simulations are not very efficient with traditional models.

Montevecchi et al. (2016) presented research on simulating the WAAM process using a modified Goldak model with an original definition of the heat flow.

The standard Goldak model does not account for the actual power distribution between the filler and base metal. In GMAW, arc power is transferred to the molten pool in two ways: direct transfer from the electric arc to the base metal, and filler-metal melting energy transferred by means of the bead enthalpy. This power split between filler and base metal cannot be represented with the original Goldak method.

The main idea is to assign different power to the filler and base material. The base material receives its share of the power through the Goldak Gaussian distribution, while the remaining power is distributed uniformly over the filler material. This reproduces the steep temperature gradient and supplies exactly the heat required by the filler material, as proposed for the Goldak heat source (Goldak et al., 1986).

The proposed power density functions are presented in Equations 1 and 2, with the following parameters:



Qw and Qb are the total power delivered to the filler and base metal, respectively.
= Ellipsoidal distribution factor

b = Ellipsoid y semi axis

c = Ellipsoid z semi axis (front)
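
Since the proposed functions build on the Goldak double-ellipsoid distribution, its standard form is worth recalling; the expression below is the well-known front-quadrant Goldak power density, reproduced for orientation only and not copied from Montevecchi et al.:

```latex
% Front quadrant of the standard Goldak double-ellipsoid heat source.
% a = x semi-axis, b = y semi-axis, c_f = front z semi-axis,
% f_f = front power fraction, Q = total delivered power.
q_f(x, y, z) = \frac{6\sqrt{3}\, f_f\, Q}{\pi\sqrt{\pi}\, a\, b\, c_f}
  \exp\!\left( -\frac{3x^2}{a^2} - \frac{3y^2}{b^2} - \frac{3z^2}{c_f^2} \right)
```

The proposed model replaces part of this distribution so that the filler material receives its own share of the power.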

Figure 6. Bead on plate model’s temperature field a) Goldak b) Proposed model (Montevecchi et al., 2016).

After validation, the proposed model produced results very close to the experimental ones, giving higher accuracy than traditional methods. The WAAM process is therefore accurately simulated by the authors’ proposed approach.

Ding et al. (2013) examined the stress evolution during the heating cycles of the WAAM process using a thermomechanical FE model. From the thermal cycles of the WAAM process, the residual stress at the peak temperature can be determined. An FE model was then developed on this basis (Teng et al., 2003; Song et al., 2005).

Finite element (FE) models are commonly used to help understand and optimise the process; however, conventional transient models are not efficient for simulating a large-scale WAAM process. In that work, the stress evolution during the thermal cycles of the WAAM process was investigated with the help of a transient thermomechanical FE model.

A detailed mechanical model was developed, inspired by Camilleri’s research on different welding processes (Camilleri et al., 2004, 2007). That theory explains that the nodal response of the plastic flow determines the thermal load to be applied at each node.

The dimensions of the model are shown in Figure 7.

Figure 7. Geometry of a WAAM multilayer wall sample (Ding et al., 2014)

The GMAW process is considered to form a four-layer wall, 5 mm wide and 2 mm high per layer, deposited along the centreline of the base plate. A Fronius cold metal transfer (CMT) machine was taken as the power source, providing a high deposition rate and low heat input.

The finite element model was built in ABAQUS®. The temperature distribution was first calculated using transient thermal simulations, and the mechanical analysis was carried out afterwards.

A high-density uniform mesh is usually not required in the steady-state thermal model, unlike in the transient thermal model. Small elements were used in the heating area to capture the higher temperature gradient, as shown in Figure 8, while a coarse mesh was used where the temperature gradient was much lower. Forced convection/diffusion brick elements (DCC3D8) were utilised in this model.

Figure 8. Mesh of steady state thermal model. (Ding et al., 2014) 

Research showed that the residual stress at a point can be determined from the maximum temperature experienced by the material during the WAAM process. Based on this theory, a mechanical model with simplified properties was developed. The simplified mechanical model was given the same properties as the transient model, i.e. the same boundary conditions and mechanical material properties.
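
As a loose illustration of the peak-temperature idea (a toy elastic-perfectly-plastic estimate under assumed steel-like constants; this is my sketch, not Ding et al.’s actual model):

```python
E = 200e9        # Young's modulus, Pa (steel-like, assumed)
ALPHA = 1.2e-5   # thermal expansion coefficient, 1/K (assumed)
YIELD = 350e6    # yield stress, Pa (assumed)
T_REF = 20.0     # ambient reference temperature, degrees C

def residual_stress_estimate(peak_temp_c: float) -> float:
    """Per-node estimate: elastic thermal stress, capped at the yield stress."""
    thermal = E * ALPHA * max(peak_temp_c - T_REF, 0.0)
    return min(thermal, YIELD)

# Nodes that got hot enough simply end up at yield; cooler nodes scale linearly:
assert residual_stress_estimate(1000.0) == YIELD
assert residual_stress_estimate(120.0) == E * ALPHA * 100.0
```

The appeal of such a mapping is that only the peak temperature per node must be stored, instead of the full transient history.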

Figure 9.  Distortion comparison between the efficient “engineering” and the transient thermomechanical models. (Ding et al., 2014)

It was expected that the distortion would increase significantly as more layers were added to the component, as represented in Figure 9. However, it was found that increasing the number of layers reduced the model’s predicted distortion, because the rigidity of the component increased.

Ding et al., (2014) concluded that “The distortion and residual stress predictions from the efficient “engineering” FE model were validated against a conventional transient model and experimental result. It has been proved that the new model can provide accurate residual stress and distortion predictions as a full transient solution. Therefore, this model could be utilised in real engineering applications to help optimise the process by providing thermomechanical predictions of large-scale WAAM components efficiently.”

A comprehensive review of wire arc additive manufacturing has been presented, giving insight into the mechanical properties of components, their microstructure, the defects in the WAAM process and the post-processing treatments used to remedy them (Wu et al., 2018).

When wire arc additive manufacturing is compared with powder-based additive manufacturing for a given layer thickness, components take much longer to manufacture with the powder-based method, and the resulting structural properties fall short of expectations.

By contrast, the properties exhibited by the wire-based method are favourable for structures, such as good ductility, with reduced lead time (the time between the start and completion of a production process). It uses an electric arc as the heat source and wire rather than powder. Deposition rates in WAAM range from low to high. There is also a considerable reduction in material use, as it produces near-net shapes, which in turn reduces machining time. WAAM is capable of manufacturing components of low to medium complexity with good structural integrity (Hoefer et al., 2018).

Since WAAM provides high flexibility, its unique approach offers many opportunities for manufacturing large, light, tailored structures. Experimentally tested new geometries have been produced that provide materials with properties suitable for ready-to-use parts. The thermo-mechanical behaviour of WAAM can be understood using finite element models. The potential of AM can be analysed by combining virtual product design, CFD, FEM and other numerical analysis tools with a proper understanding of optimised design principles.

3.1 Research Gaps

Wire arc based additive manufacturing is not yet commercially mature, even though there has been considerable advancement over the past few years. A key reason is that CAD model inputs cannot be set effectively due to the lack of an automated process planner.

As a result, process parameters cannot practically be selected by hand and must instead be chosen by an automated process because of the complexity of the geometry. Hence, the next step is to establish automated CAD-driven process-planning software to make WAAM a true end-to-end system. Commercially available additive manufacturing software exists for powder-based processes and polymer materials, but it cannot be applied to WAAM systems.

Process planning has the following challenges:

Bead Modelling


Path Planning

Manual pre- and post-processing still introduce a considerable lag, despite the improved accuracy and speed of 3D printers.

The production chain in AM can be lengthy and involve expensive pre- and post-processing steps, e.g. setting up the model, recycling material and removing supports.

3.2 Research Questions

How can the possible impact of AM on a product be assessed in terms of technical performance and economic considerations?

How effective is AM as compared to the other (traditional) manufacturing methods?

How to identify the best building strategies using simulations and systematically designed experiments?

What are the different approaches for improving the integration of each process step in WAAM?

How to analyse the effect of the deposition rate for parts with very complex geometries?

ASTM. (2010). ASTM F2792–10: Standard terminology for additive manufacturing technologies.

Ding, D., Pan, Z., Cuiuri, D. & Li, H. (2015a). A multi-bead overlapping model for robotic wire and arc additive manufacturing (WAAM). Robotics and Computer Integrated Manufacturing, 31, 101-110.

Ding, D., Pan, Z., Cuiuri, D. & Li, H. (2015c). A practical path planning methodology for wire and arc additive manufacturing of thin-walled structures. Robotics and Computer Integrated Manufacturing, 34, 8-19.

Ding, D., Pan, Z., Cuiuri, D., Li, H. & Larkin, N. (2016). Adaptive path planning for wire-feed additive manufacturing using medial axis transformation. Journal of Cleaner Production, 133, 942-952.

Ding, J., Colegrove, P., Mehnen, J., Williams, S., Wang, F. & Almeida, P. S. (2014a). A computationally efficient finite element model of wire and arc additive manufacture. The International Journal of Advanced Manufacturing Technology, 70(1), 227-236.

Frazier, W. E. (2014). Metal Additive Manufacturing: A Review. Journal of Materials Engineering and Performance, 23(6), 1917-1928.

Geng, H., Li, J., Xiong, J., Lin, X. & Zhang, F. (2017). Optimization of wire feed for GTAW based additive manufacturing. Journal of Materials Processing Tech, 243, 40-47.

Gibson, I., Rosen, D. & Stucker, B. (2015). Additive manufacturing technologies: 3D printing, rapid prototyping, and direct digital manufacturing (Second ed.). New York: Springer.

Guo, N. & Leu, M. C. (2013). Additive manufacturing: technology, applications and research needs. Frontiers of Mechanical Engineering, 8(3), 215-243.

Karunakaran, K. P., Suryakumar, S., Pushpa, V. & Akula, S. (2010a). Low cost integration of additive and subtractive processes for hybrid layered manufacturing. Robotics and Computer Integrated Manufacturing, 26(5), 490-499.

Kruth, J. P., Leu, M. C. & Nakagawa, T. (1998). Progress in Additive Manufacturing and Rapid Prototyping. CIRP Annals – Manufacturing Technology, 47(2), 525-540.

Lockett, H., Ding, J., Williams, S. & Martina, F. (2017). Design for Wire + Arc Additive Manufacture: design rules and build orientation selection. Journal of Engineering Design, 28(7-9), 568-598.

Montevecchi, F., Venturini, G., Scippa, A. & Campatelli, G. (2016). Finite Element Modelling of Wire-arc-additive-manufacturing Process. Procedia CIRP, 55, 109-114.

Song, Y.-A., Park, S. & Chae, S.-W. (2005). 3D welding and milling: part II—optimization of the 3D welding process using an experimental design approach. International Journal of Machine Tools and Manufacture, 45(9), 1063-1069.

Teng, T.-L., Chang, P.-H. & Tseng, W.-C. (2003). Effect of welding sequences on residual stresses. Computers and Structures, 81(5), 273-286.

Wong, K. V. & Hernandez, A. (2012). A Review of Additive Manufacturing. ISRN Mechanical Engineering, 2012.

Xiong, J., Yin, Z. & Zhang, W. (2016). Closed-loop control of variable layer width for thin-walled parts in wire and arc additive manufacturing. Journal of Materials Processing Tech, 233, 100-106.

Argon Cluster and Graphene Collision Simulation Experiment

Formation of Nanopore in a Suspended Graphene Sheet with Argon Cluster Bombardment: A Molecular Dynamics Simulation study
Abstract: The formation of a nanopore in a suspended graphene sheet under an argon gas cluster beam was simulated using the molecular dynamics (MD) method. The Lennard-Jones (LJ) two-body potential and the Tersoff–Brenner empirical potential energy function were applied in the MD simulations for the different interactions between particles. The simulation results demonstrate that the incident energy and cluster size play a crucial role in the collisions. Results for Ar55–graphene collisions show that the Ar55 cluster bounces back when the incident energy is less than 11 eV/atom, and penetrates when the incident energy is greater than 14 eV/atom. Two threshold incident energies, i.e. the threshold incident energy of defect formation in graphene and the threshold energy for penetration of the argon cluster, were observed in the simulations. The threshold energies were found to have a relatively weak negative power-law dependence on the cluster size. The number of sputtered carbon atoms is obtained as a function of the kinetic energy of the cluster.
Keywords: Nanopore, Suspended graphene sheet, Argon cluster, Molecular dynamics simulation


The carbon atoms in graphene condense in a honeycomb lattice due to sp2-hybridized carbon bonding in two dimensions [1]. Graphene has unique mechanical [2], thermal [3-4], electronic [5], optical [6], and transport properties [7], which lead to huge potential applications in nanoelectronics and energy science [8]. One of the key obstacles of pristine graphene in nanoelectronics is the absence of a band gap [9-10]. Theoretical studies have shown that chemical doping of graphene with foreign atoms can modulate its electronic band structure, leading to a metal-to-semiconductor transition and breaking the polarized transport degeneracy [11-12]. Computational studies have also demonstrated that vacancies of carbon atoms within the graphene plane can induce a band-gap opening and Fermi-level shifting [13-14]. Graphene nanopores have potential applications in various technologies, such as DNA sequencing, gas separation, and single-molecule analysis [15-16]. Generating sub-nanometer pores with precisely controlled sizes is the key difficulty in the design of a graphene nanopore device. Several methods have been employed to punch nanopores in graphene sheets, including electron-beam drilling in a transmission electron microscope (TEM) and heavy-ion irradiation.
Using the electron-beam technique, Fischbein et al. [17] drilled nanopores with widths of several nanometers and demonstrated that porous graphene is very stable; however, this method cannot be widely used because of its low efficiency and high cost. Russo et al. [18] used an energetic-ion-exposure technique to create nanopores with radii as small as 3 Å. S. Zhao et al. [19] indicated that energetic cluster irradiation is more effective at generating nanopores in graphene, because much larger kinetic energy can be transferred to the target atoms. Recent experimental works have further confirmed that cluster irradiation is a feasible and promising way to generate nanopores [20]. Numerical simulations have demonstrated that, by choosing a suitable cluster species and controlling its energy, nanopores of desired size and quality can be fabricated in a graphene sheet [19].
Numerical simulation using molecular dynamics (MD) is a useful tool for studying how different cluster–graphene interaction conditions influence nanopore formation [21]. The results may be useful in explaining experimental results and predicting optimal conditions for desirable graphene nanopores.
In this paper, MD simulations were performed for the collisions between an argon cluster and graphene. The phenomena of argon cluster–graphene collisions and mechanism of the atomic nanopore formation in graphene were investigated. Effects of cluster size on the threshold incident energy of defect formation in graphene were also discussed.

Molecular Dynamics Method

MD simulations were performed for the collisions between an argon cluster and graphene. For the present simulations we used the LAMMPS code (Large-scale Atomic/Molecular Massively Parallel Simulator), developed at Sandia National Laboratories [22]. The length (along the X axis) of the graphene layer was 11 nm, its width (along the Y axis) was 10 nm, and each layer contained 3936 atoms. Periodic boundary conditions were applied in both lateral directions. In the simulation, the Tersoff–Brenner empirical potential energy function (PEF) was used to model the covalent bonding between carbon atoms in the graphene layer [23-24]. The initial configuration was fully relaxed before the collision simulations and the target temperature was maintained at 300 K. During the collision phase, a thermostat was applied to the borders of the graphene. The Ar nanocluster was prepared by cutting a sphere from an FCC bulk crystal, with no initial thermal motion. The Ar cluster was initially located above the center of the graphene at a sufficiently large distance that there was no interaction between the Ar and graphene atoms. Then a negative translational velocity component, Vz, was assigned to each atom of the cluster. The incident angle of the argon cluster to the graphene normal was zero. The Lennard-Jones (LJ) two-body potential was employed to model the Ar–Ar and Ar–C interactions. The LJ potential has the form:

V(r) = 4ε [(σ/r)^12 − (σ/r)^6]          (1)

In the LJ potential, σ is the distance at which the potential is zero and ε is the depth of the potential well. The cross-interaction constants were obtained from the mixing rules σij = (σi + σj)/2 and εij = (εiεj)^1/2. The ε and σ parameters used in the present simulation are shown in Table 1 [25]. Atom positions were updated using the velocity Verlet algorithm with a time step of 0.5 fs. To reduce the calculation time, a cut-off length was introduced: the van der Waals interaction between Ar–Ar and Ar–C atoms at distances of 11 Å or more was neglected.
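As a sketch, the 12-6 LJ form with these mixing rules and cutoff might be evaluated as follows. The parameter values below are illustrative placeholders, not the Table 1 values (which are not reproduced in this copy):

```python
import math

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j):
    """Mixing rules used above: arithmetic mean for sigma, geometric mean for epsilon."""
    return (sigma_i + sigma_j) / 2.0, math.sqrt(eps_i * eps_j)

def lj_potential(r, sigma, eps, r_cut=11.0):
    """12-6 Lennard-Jones potential, truncated at r_cut (distances in angstroms)."""
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Illustrative parameters (angstroms, eV) -- placeholders, not Table 1:
sigma_ar, eps_ar = 3.40, 0.0104   # Ar-Ar
sigma_c,  eps_c  = 3.40, 0.0028   # C-C
s_arc, e_arc = lorentz_berthelot(sigma_ar, sigma_c, eps_ar, eps_c)

# Sanity checks: V(sigma) = 0, and the well depth at r = 2^(1/6) sigma is -epsilon
assert abs(lj_potential(s_arc, s_arc, e_arc)) < 1e-12
r_min = 2 ** (1 / 6) * s_arc
assert abs(lj_potential(r_min, s_arc, e_arc) + e_arc) < 1e-9
```

The geometric mean for ε and arithmetic mean for σ reproduce the stated mixing rules; beyond the 11 Å cutoff the interaction is simply zeroed, as in the simulation.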


Incident energies ranging from 1 to 120 eV/atom were chosen to demonstrate two distinct phenomena: (i) the argon atoms were simply reflected, and (ii) some argon atoms penetrated through the graphene. Fig. 1 shows the probabilities of reflection and penetration of the Ar55 cluster.
Fig. 2 shows snapshots of the deformation of the graphene sheet due to the collision with an Ar55 cluster for incident energies of less than 11 eV/atom. During the collision, the graphene bent in a circular region around the collision point and a transverse deflection wave was observed. After the collision, the argon cluster burst into fragments.


Fig. 3 shows the final atomic configurations resulting from the incidence of an Ar55 cluster with energies of 10 and 11 eV/atom. There were two possibilities for the structure of the graphene sheet after the collision: (i) the graphene rippled and no damaged region was formed, observed for incident energies of less than 11 eV/atom (Fig. 3(a)); and (ii) the collision caused defects in the graphene (Fig. 3(b)).
Fig. 4 shows that there were two possibilities for the structure of the graphene sheet after collision with an Ar55 cluster at incident energies greater than 11 eV/atom: (i) the argon cluster penetrated the graphene sheet without sputtering carbon atoms (Fig. 4(a)), and (ii) it penetrated with sputtered carbon atoms (Fig. 4(b)). When the incident energy of the argon cluster was 11 eV/atom, atomic-scale defects such as Stone–Wales defects formed in the graphene sheet (Fig. 3(b)). As the incident energy increased, these atomic defects began to connect and finally a nanopore with carbon chains on the pore edge was created in the graphene. The atomic carbon chains with unsaturated bonds thus provide a route for chemical functionalisation of graphene nanopores to improve their separation ability and detection. For example, membranes of packed multilayered graphene oxide sheets were found to be significantly permeable to water yet impermeable to He, N2, Ar, and H2 [26].
Accordingly, it was necessary to introduce the threshold incident energy of defect formation (Ed) in graphene and the threshold energy (Ep) for penetration of the argon cluster through graphene. Fig. 5 shows the size dependence of each threshold incident energy. Both Ed and Ep can be written as simple power-law equations:

Ed(N) = Ed(1) N^−α,   Ep(N) = Ep(1) N^−β          (2)

In Eq. (2), Ed(1) and Ep(1) denote the threshold energies for a single argon atom, and N is the cluster size. The power indices on N, α and β, express the degree of the non-linear effect.
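The power-law form of Eq. (2) can be fitted to simulated thresholds by linear least squares in log-log space, since log E(N) = log E(1) − α log N. A minimal sketch, using synthetic (not the paper's) data points:

```python
import math

def fit_power_law(sizes, energies):
    """Least-squares fit of E(N) = E1 * N**(-alpha) in log-log space.
    Returns (E1, alpha)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(e) for e in energies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # alpha is minus the log-log slope

# Hypothetical threshold data (cluster size, threshold energy in eV/atom),
# generated to obey E(N) = 30 * N^-0.25 exactly:
sizes = [1, 19, 55, 147]
energies = [30.0 * n ** -0.25 for n in sizes]
E1, alpha = fit_power_law(sizes, energies)
assert abs(E1 - 30.0) < 1e-6
assert abs(alpha - 0.25) < 1e-6
```

A small α (or β) recovered from such a fit corresponds to the "relatively weak" size dependence reported in the abstract.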


Fig. 6 shows the final atomic configurations resulting from the incidence of an Ar55 cluster with energies of 14 and 15 eV/atom. With further increases in energy, the carbon chains became shorter and the pore edge became smoother.
We calculated the number of sputtered carbon atoms as a function of total incident energy, because this number corresponds to the area of the nanopore in the graphene. Fig. 7 shows the number of sputtered carbon atoms as a function of total cluster energy for Ar19 and Ar55 cluster collisions. In both cases, the number of sputtered carbon atoms increased with total energy, in agreement with a previous study [27]. The number of sputtered carbon atoms approaches a constant value for incident energies larger than 10 keV. For the same total cluster energy, collisions with larger clusters sputtered more carbon atoms.


The phenomena of argon cluster–graphene collisions and the mechanism of atomic nanopore formation in a suspended graphene sheet were investigated using the molecular dynamics method. The results are summarised as follows:

The threshold incident energy for defect formation (Ed) in graphene and the threshold energy (Ep) for penetration of the argon cluster were introduced.
Simulation results for the argon cluster–graphene collisions showed that the argon cluster bounced back when the incident energy was less than Ed and penetrated through the graphene when the incident energy was greater than Ep.
Suspended carbon chains could be formed at the edge of the nanopore by adjusting the incident energy; with increasing energy, the carbon chains became shorter and the pore edge became smoother.
Ed and Ep were found to have relatively weak negative power law dependence on cluster size.
For the same total cluster energy, collisions with larger clusters led to a higher number of sputtered carbon atoms.

[1] K. S. Novoselov,A. K. Geim, S. V. Morozov,D. Jiang,Y. Zhang,S. V. Dubonos,I. V. Grigorieva,A. A. Firsov , Science. 306 ( 2004) 666.
[2] T. Lenosky, X. Gonze, M. Teter, V. Elser, Nature.355 (1992) 333.
[3] J.N. Hu, X.L. Ruan, Y.P. Chen, Nano Lett. 9 (7) (2009) 2730.
[4] S. Ghosh, I. Calizo, D. Teweldebrhan, E.P. Pokatilov, D.L. Nika, A.A. Balandin, W. Bao, F. Miao, C.N. Lau, Appl. Phys. Lett. 92 (15) (2008) 151911-1.
[5] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys.81 ( 2009) 109.
[6] D. S. L. Abergel,A. Russell,V. I. Fal’ko, Appl. Phys. Lett. 91 (2007) 063125.
[7] A. Cresti, N. Nemec, B. Biel, G. Niebler, F. Triozon, G. Cuniberti, S. Roche, Nano Research. 1 (2008) 361.
[8] A. K. Geim, Science. 324 (2009) 1530
[9] A. Du, Z. Zhu, S. C. Smith, J. Am. Chem. Soc. 132(9) (2010) 2876.
[10] R. Balog, B. Jørgensen, L. Nilsson, M. Andersen, E. Rienks, M. Bianchi, M. Fanetti, E. Lægsgaard, A. Baraldi, S. Lizzit, Z. Sljivancanin, F. Besenbacher, B. Hammer, T. G. Pedersen, P. Hofmann, L. Hornekær, Nat. Mater. 9 (2010) 315.
[11] T. B. Martins, R. H. Miwa, A. J. R. da Silva, A. Fazzio, Phys. Rev. Lett. 98 (2007) 19680.
[12] Y. M. Lin, C. Dimitrakopoulos, K. A. Jenkins, D. B. Farmer, H. Y. Chiu, A. Grill and P. Avouris, Science. 327 ( 2010) 662.
[13] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, A. K. Geim, Rev. Mod. Phys. 81 (2009) 109.
[14] D. J. Appelhans, Z. Lin, M. T. Lusk, Phys. Rev. B. 82 (2010) 073410.
[15] G. F. Schneider, Nano Lett. 10(8) (2010) 3163.
[16] P. Russo, A. Hu, G. Compagnini, Nano-Micro Lett. 5(4) (2013) 260.
[17] M. D. Fischbein, M. Drndic, Appl. Phys. Lett.93 ( 2008) 113107.
[18] C. J. Russo, J. A. Golovchenko, Proc. Natl. Acad. Sci. USA. 109(16) (2012) 5953.
[19] S. J. Zhao, J. M. Xue, L. Liang, Y. G. Wang, S. Yan, J. Phys. Chem. C 116(21) (2012) 11776.
[20] Y. C. Cheng, H. T. Wang, Z. Y. Zhu, Y. H. Zhu, Y. Han, X. X. Zhang, U. Schwingenschlögl, Phys. Rev. B. 85 ( 2012) 073406.
[21] H. Araghi, Z. Zabihi, Nucl. Inst. Methods B 298 (2013) 12.
[22] S.J. Plimpton, Journal of Computational Physics 117 (1995) 1.
[23] D.W. Brenner, Phys. Rev. B .42 (1990) 9458.
[24] D. W. Brenner, O. A. Shenderova, J. A. Harrison, S. J. Stuart, B. Ni, S. B. Sinnott, J. Phys. Condens. Mater. 14 (2002) 783
[25] Y. Yamaguchi, J. Gspann, Eur. Phys. J. D. 16 (2001) 103
[26] R. R. Nair, H. A. Wu, P. N. Jayaram, I. V. Grigorieva, A. K. Geim , Science. 335 ( 2012) 442.
[27] N. Inui, K. Mochiji, K. Moritani, N. Nakashima, Appl. Phys. A: Mater. Sci. Process. 98 (2010) 787.
Fig. 1. Incident energy dependence of the reflection and penetration probabilities
Fig. 2. Snapshots of Ar55 cluster collisions on the graphene sheet: (a) t = 0 ps, (b) t = 1 ps, (c) t = 6 ps
Fig. 3. Final atomic configurations in the X–Y plane when the collision energy is: (a) 10 eV/atom and (b) 11 eV/atom
Fig. 4. Final atomic configurations when the incident energy is: (a) 14 eV/atom and (b) 15 eV/atom
Fig. 5. Final atomic configurations in the X–Y plane when the incident energy is: (a) 1 keV, (b) 10 keV, (c) 20 keV
Fig. 6. (a) Cluster size dependence of the threshold incident energy of defect formation in graphene; (b) cluster size dependence of the threshold energy of penetration of the argon cluster
Fig. 7. Dependence of the number of sputtered atoms on the kinetic energy of the cluster
Table 1. Lennard–Jones potential parameters
Common Bus System Simulation

In this project we simulate a 16-bit common bus. Before discussing the common bus, consider what a bus itself is: a bus is a set of parallel lines on which information (data, addresses, instructions and other information) passes within the internal architecture of a computer. Information travels on buses as a series of pulses, each pulse representing a one bit or a zero bit. Buses come in various sizes, such as 4, 8, 12, 16, 24, 32, 64, 80, 96 and 128 bits.
The size of a bus determines how many bits it can carry in parallel. The speed of the bus is how fast it moves data along the path, usually measured in megahertz (MHz), i.e. millions of cycles per second.
The amount of data a bus carries per second is called its capacity. Buses may be internal or external: a bus inside a processor is called an internal bus, and one outside the processor is called an external bus.
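As a rough illustration, peak capacity follows directly from bus width and clock rate; the figures below are hypothetical, not taken from any particular machine:

```python
def bus_bandwidth_bytes_per_s(width_bits, clock_hz, transfers_per_cycle=1):
    """Peak capacity = (bits per transfer / 8) * transfers per second."""
    return (width_bits / 8) * clock_hz * transfers_per_cycle

# A hypothetical 16-bit bus clocked at 8 MHz, one transfer per cycle:
bw = bus_bandwidth_bytes_per_s(16, 8_000_000)
assert bw == 16_000_000  # 2 bytes per transfer * 8 million transfers/s = 16 MB/s
```

Real buses rarely reach this peak because of arbitration and wait states, but the width-times-speed product sets the upper bound.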
A bus master is a combination of circuits, control microchips, and internal software that controls the movement of information between the major components inside the computer.
A processor bus is a bus inside the processor. Some processor designs simplify the internal structure by having one or two processor buses. In a single processor bus system, all information is carried around inside the processor on one processor bus. In a dual processor bus system, there is a source bus dedicated to moving source data and a destination bus dedicated to moving results. An alternative approach is to have a lot of small buses that connect various units inside the processor. While this design is more complex, it also has the potential of being faster, especially if there are multiple units within the processor that can perform work simultaneously (a form of parallel processing).
A system bus connects the main processor with its primary support components, in particular connecting the processor to its memory. Depending on the computer, a system bus may also have other major components connected.
A data bus carries data. Most processors have internal data buses that carry information inside the processor and external data buses that carry information back and forth between the processor and memory.
An address bus carries address information. In most processors, memory is connected to the processor with separate address and data buses. The processor places the requested address in memory on the address bus for memory or the memory controller (if there is more than one chip or bank of memory, there will be a memory controller that controls the banks of memory for the processor). If the processor is writing data to memory, then it will assert a write signal and place the data on the data bus for transfer to memory. If the processor is reading data from memory, then it will assert a read signal and wait for data from memory to arrive on the data bus.
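The read/write handshake just described can be sketched as a toy model; the class, signal behaviour and sizes below are illustrative only, not any real chipset:

```python
class MemoryBus:
    """Toy model of the address/data-bus handshake: the processor drives the
    address bus, asserts READ or WRITE, and data moves on the data bus."""

    def __init__(self, size_words=4096):
        self.memory = [0] * size_words  # one 16-bit word per address

    def write(self, address, data):
        # Address on the address bus, WRITE asserted, data on the 16-bit data bus.
        self.memory[address] = data & 0xFFFF

    def read(self, address):
        # Address on the address bus, READ asserted; memory returns the word.
        return self.memory[address]

bus = MemoryBus()
bus.write(0x0A0, 0x1234)
assert bus.read(0x0A0) == 0x1234
```

The masking to 0xFFFF mimics a 16-bit data bus: any bits beyond the bus width simply cannot be transferred.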


In some small processors the data bus and address bus will be combined into a single bus. This is called multiplexing. Special signals indicate whether the multiplexed bus is being used for data or address. This is at least twice as slow as separate buses, but greatly reduces the complexity and cost of support circuits, an important factor in the earliest days of computers, in the early days of microprocessors, and for small embedded processors (such as in a microwave oven, where speed is unimportant, but cost is a major factor).
An instruction bus is a specialized data bus for fetching instructions from memory. The very first computers had separate storage areas for data and programs (instructions). John von Neumann introduced the von Neumann architecture, which combined both data and instructions into a single memory, simplifying computer architecture. The difference between data and instructions became a matter of interpretation. In the 1970s, some processors implemented hardware systems for dynamically mapping which parts of memory were for code (instructions) and which parts were for data, along with hardware to ensure that data was never interpreted as code and that code was never interpreted as data. This isolation of code and data helped prevent crashes or other problems from "runaway code" that started wiping out other programs by incorrectly writing data over code (either from the same program or, worse, from some other user's software). As a more recent innovation, supercomputers and other powerful processors added separate buses for fetching data and instructions. This speeds up the processor by allowing it to fetch the next instruction (or group of instructions) at the same time that it is reading or writing data from the current or preceding instruction.
A memory bus is a bus that connects a processor to memory or connects a processor to a memory controller or connects a memory controller to a memory bank or memory chip.
A cache bus is a bus that connects a processor to its internal (L1 or Level 1) or external (L2 or Level 2) memory cache or caches.
An I/O bus (for input/output) is a bus that connects a processor to its support devices (such as internal hard drives, external media, expansion slots, or peripheral ports). Typically the connection is to controllers rather than directly to devices.
A graphics bus is a bus that connects a processor to a graphics controller or graphics port.
A local bus is a bus for items closely connected to the processor that can run at or near the same speed as the processor itself.
ACCUMULATOR: The accumulator is a processor register in the common bus system; it is the processing unit that performs data manipulations.
It works together with two other units:
ADDER AND LOGIC UNIT: Performs additions and other operations, then stores the result in the accumulator.
E REGISTER: Holds the carry from additions and other operations performed in the adder and logic unit.
DATA REGISTER: When an instruction is fetched from memory, it is necessary to have the data on which the instruction is to be executed. The data register supplies that data to the instruction.
TEMPORARY REGISTER: While executing an instruction, situations arise where a register is needed to save an intermediate result. The temporary register holds such data temporarily until it is fetched later.
INSTRUCTION REGISTER: Holds the instruction that is to be executed.
ADDRESS REGISTER: AR contains the address of the operands needed to execute the instruction, for example AR(0–11).
PROGRAM COUNTER: A counter on the common bus that indicates which instruction will be executed next; it contains the address of the next instruction and is incremented as
PC ← PC + 1
INPUT REGISTER: Contains the data entered by the user.
OUTPUT REGISTER: Holds the data to be presented as output.
WORKING OF THE PROJECT: This project contains an additional display of data, designed with the help of graphics functions. It is not part of the bus logic itself, but introduces the project. The main code runs when any key on the keyboard is pressed. It takes three control signals, s0, s1 and s2; together these three bits form the binary code of the register to be activated, which then drives the bus and executes the instruction. To display the activated register, a filled circle is drawn in the corresponding box.
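The three-bit select decoding can be sketched as follows. The register-to-code mapping below follows the classic Mano-style common-bus organisation and is an assumption; the report does not state its own encoding:

```python
# Registers on the common bus, indexed by the select code s2 s1 s0.
# Code 0 means no register drives the bus (assumed mapping, Mano-style).
REGISTERS = ["NONE", "AR", "PC", "DR", "AC", "IR", "TR", "MEMORY"]

def selected_register(s2, s1, s0):
    """Decode the three select bits into the register driving the bus."""
    code = (s2 << 2) | (s1 << 1) | s0
    return REGISTERS[code]

assert selected_register(0, 0, 1) == "AR"      # code 1
assert selected_register(1, 1, 1) == "MEMORY"  # code 7
```

Three select lines give 2^3 = 8 codes, which is exactly why s0, s1 and s2 together suffice to pick one of up to eight bus sources.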

Simulation of Single Cylinder SI/HCCI Internal Combustion Engine using AVL BOOST

Technology has made it possible to design and simulate various engineering models. Based on various formulas and conditions, software can predict with great accuracy how a design will perform. One such tool is AVL BOOST, a fully integrated, advanced-level tool for running simulations on a virtual engine. It simulates the engine cycle and gas exchange for an entire engine model. It has not only reduced the time and cost involved in designing engine models but also given the user the flexibility to optimise a design by changing parameters or inputs without laborious calculations. In short, AVL BOOST is a reliable and efficient tool that gives the designer enough confidence to proceed to a prototype, drastically reducing the chance of failure [9].


1.1. Aims and Objectives:

 The aim of this report is to build a model of single cylinder SI and HCCI engine in BOOST using the geometries and valve timings provided and to use experimental data to determine combustion profile for the models. The models are calibrated based on in-cylinder pressure trace, IMEP and maximum pressure. The results are then evaluated using experimental data. Engine parameters are changed to make it more efficient and cleaner and an exhaust after treatment is incorporated.

2.1. Spark Ignition (SI) Engine:

Combustion is initiated by a spark in SI engines, and a nearly stoichiometric air/fuel ratio is used so that the spark ignites the mixture reliably and the flame propagates well [14]. High lift was used in SI mode, i.e. cases 4–7, and the fuel used was gasoline, which has a stoichiometric air/fuel ratio of 14.7. The actual air/fuel ratios were calculated from the given values of lambda in each case, and the IVO and EVO for each case were input using the given data.

2.2. Homogeneous Charge Compression Ignition (HCCI) Engine:

Auto-ignition takes place in HCCI due to compression; the charge is premixed, and the combustion is lean. The engine is un-throttled, which reduces throttling losses; combustion temperatures are low, which reduces NOx; and the rapid combustion makes it comparable with the Otto cycle [7]. Hence the throttle and spark timing were removed for the HCCI cases, i.e. cases 1–3, and low lift was used. The air/fuel ratios were calculated using stoichiometric values, and the IVO and EVO were calculated from the given data.

2.3. Input Data:

2.3.1. Technical Specification:

Table 1: Basic Geometry







Con-rod Length


Compression Ratio


2.3.2. Calculation of Required Input Data:

 The stroke (swept) volume was calculated using Eq. (1):

Vd = (πB²/4) × L = 565630 mm³          [4] (1)

The clearance volume was calculated as:

Vc = Vd/(CR − 1) = 565630 mm³/(11.5 − 1) = 53869 mm³ [4]

, a = 44.45 mm (3)
[4] (4)
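Under the assumption that the clearance volume follows Vc = Vd/(CR − 1), the two geometry calculations above can be checked numerically; bore and stroke values are not restated here, so only the quoted swept volume is used:

```python
import math

def swept_volume_mm3(bore_mm, stroke_mm):
    """Vd = pi * B^2 / 4 * L, per Eq. (1)."""
    return math.pi * bore_mm ** 2 / 4 * stroke_mm

def clearance_volume_mm3(vd_mm3, compression_ratio):
    """Vc = Vd / (CR - 1), the clearance-volume relation assumed above."""
    return vd_mm3 / (compression_ratio - 1)

# Using the report's swept volume and a compression ratio of 11.5:
vc = clearance_volume_mm3(565630, 11.5)
assert abs(vc - 53869) < 1   # matches the 53869 mm^3 quoted above
```

565630/10.5 ≈ 53869.5 mm³, so the quoted clearance volume is consistent with the stated compression ratio.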

By plotting the logarithm of the fired cylinder pressure against that of the instantaneous volume, and taking the average of the two slopes, the polytropic constant was calculated for all seven cases. Graphs for HCCI engine case 1 and SI engine case 4 are shown in Figure 1 and Figure 2.

Figure 1: Log P vs. log V for case 1 (HCCI)
Figure 2: Log P vs. log V for case 4 (SI)
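This slope-based extraction of the polytropic index can be sketched as follows; the data below are synthetic points obeying pV^n = const, since the experimental traces are not reproduced here:

```python
import math

def polytropic_index(pressures, volumes):
    """Least-squares slope of log(p) vs log(V); for p*V^n = const the slope
    is -n, so the function returns its negative."""
    xs = [math.log(v) for v in volumes]
    ys = [math.log(p) for p in pressures]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic compression data obeying p * V^1.32 = const:
vols = [500.0, 400.0, 300.0, 200.0]
press = [1e5 * (500.0 / v) ** 1.32 for v in vols]
assert abs(polytropic_index(press, vols) - 1.32) < 1e-6
```

In practice the compression and expansion strokes give two slightly different slopes, which is why the report averages the two.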
dQ/dθ = [γ/(γ − 1)] p (dV/dθ) + [1/(γ − 1)] V (dp/dθ)

γ was used in the above equation to calculate the heat release rate. Using the polytropic constant calculated for each of the seven cases, the heat release rate was calculated for each case and the normalised heat release was input into AVL BOOST.
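The heat-release evaluation can be sketched numerically. The central-difference scheme and the synthetic adiabatic data below are illustrative assumptions; for a truly adiabatic (no-heat-release) process the formula should return approximately zero, which makes a convenient check:

```python
def heat_release_rate(theta, p, v, gamma):
    """dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta,
    with the derivatives approximated by central differences over crank angle."""
    dq = []
    for i in range(1, len(theta) - 1):
        dth = theta[i + 1] - theta[i - 1]
        dv = (v[i + 1] - v[i - 1]) / dth
        dp = (p[i + 1] - p[i - 1]) / dth
        dq.append(gamma / (gamma - 1) * p[i] * dv + 1 / (gamma - 1) * v[i] * dp)
    return dq

# Synthetic check: for adiabatic compression p*V^gamma = const, dQ/dtheta ~ 0
gamma = 1.35
vols = [1.0 - 0.001 * i for i in range(100)]
press = [vols[0] ** gamma / v ** gamma for v in vols]
theta = list(range(100))
q = heat_release_rate(theta, press, vols, gamma)
assert all(abs(x) < 1e-4 for x in q)
```

With real fired-pressure data the same routine yields a non-zero dQ/dθ curve, which is then normalised before being entered into BOOST.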

Figure 3: AVL BOOST Setup

The model was set up in BOOST using the given data, the engine geometry and the intake and exhaust geometry. The cylinder piston area, cylinder head surface area and liner area were calculated using AVL BOOST HELP, and the intake and exhaust valve coefficients for both SI and HCCI were calculated using BOOST HELP and the literature.

2.4. Model calibration and validation

2.4.1 SI calibration and validation

To calibrate SI case 4, the throttle angle was changed until the IMEP, Pmax and the crank angle of peak pressure (Pcad) matched the experimental data. A pressure vs. CAD plot was generated in BOOST and exported to Excel, and a plot was also created from the provided experimental data. A comparison showed that the pressure profiles matched closely: the crank angle at which the pressure peak occurred, the magnitude of the peak and the IMEP were the same for both.

Figure 4: Pressure vs. Crank angle Curve Case 4 (Experimental Data)

To validate the SI model, the remaining SI cases with their corresponding heat release and air/fuel ratios were fed to the model and only the throttle angle was changed. Comparison of the generated data with the experimental data showed that the in-cylinder pressure trace, IMEP, Pmax and peak-pressure crank angle all matched, validating the model.

SI engines are calibrated via the throttle valve, as it controls the amount of air, and hence fuel, entering the system while maintaining a constant air/fuel ratio (usually stoichiometric), thus controlling engine power. Opening the throttle further admits more air, which in turn allows more fuel into the system to maintain stoichiometric conditions, and vice versa [22].

2.4.2 HCCI model calibration

To calibrate the HCCI engine, the throttle was removed and low valve lift was used. The low lift helps trap more residual gases to facilitate combustion and to extend the operating range of the engine [17]. The air/fuel ratio and normalized heat release graph for case 1 were input into BOOST and the cylinder wall temperatures were changed until the correct IMEP, Pmax and Pcad were achieved. The wall temperature of
was used for case 1. Kezhuo Wang (2018), in his CFD simulation of an HCCI engine, studied the influence of cylinder wall temperature on engine performance. In his investigation he varied the cylinder wall temperature from about
and found that decreasing the cylinder wall temperature decreases the maximum temperature in the engine cylinder. At temperatures lower than
the engine misfired, since the heat release rate was quite low, which suggested lowering the cylinder wall temperature to a minimum of
to avoid misfire, reduce emissions and increase efficiency for HCCI [20].

The pressure vs. CAD curve was plotted in BOOST and exported to Excel to compare the values. The graph below shows that the Pmax and Pcad for case 1 match the experimental data, hence calibrating the model.

Figure 5: Pressure vs Crank angle Curve Case 1 (Experimental Data)

To validate the model, the data for the remaining HCCI cases were input into BOOST and only the wall temperatures were changed to obtain the correct values, hence validating the model.

HCCI engines are calibrated on wall temperatures because HCCI combustion depends on chemical kinetics, which is influenced by wall temperatures [21]. The wall temperatures affect the charge near the wall and hence affect the combustion duration, ignition rate and heat lost to the walls [5]. If the wall temperature is lowered, the peak value of the heat release rate is significantly decreased; on the other hand, increasing the temperature delays the rate of pressure rise during combustion [12]. The heat transfer inside the cylinder is affected by the temperature of its walls, which in turn affects the air/fuel mixture that enters it, thus affecting the combustion process. Decreasing the wall temperature delays the ignition timing, thus extending the duration of ignition [20].

3. Results and Optimization of Engine:

3.1. Results:

Table 2: Experimental data vs AVL Boost Data

The Pmax, Pmax (CAD) and IMEP from the BOOST data match the experimental data for both the SI and HCCI cases with an error of less than 10%, thus validating both models.
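The validation criterion can be expressed as a simple percent-error check; the numbers below are placeholders for illustration, not the project's measured values:

```python
def percent_error(simulated, experimental):
    """Relative error of a BOOST prediction against the measured value."""
    return abs(simulated - experimental) / abs(experimental) * 100.0

# Hypothetical values for illustration only (not the project's actual data):
cases = {
    "IMEP [bar]": (7.9, 8.1),
    "Pmax [bar]": (43.0, 45.0),
    "Pmax CAD [deg]": (14.0, 15.0),
}
for name, (sim, exp) in cases.items():
    err = percent_error(sim, exp)
    status = "OK" if err < 10.0 else "FAIL"
    print(f"{name}: {err:.1f}% {status}")
```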

It was noted that as the throttle angle was reduced in the SI engine (part-load operation), the pumping losses increased, as could be seen in the P-V graphs plotted in Excel. This is because, as the throttle restricted the amount of air entering the engine, the volumetric efficiency reduced: the intake air pressure dropped below atmospheric pressure, so the piston had to work against this pressure difference in order to draw more air in.
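The growth of pumping losses with throttling can be illustrated with an idealized gas-exchange loop (all pressures and volumes below are made-up illustrative values):

```python
# Idealized gas-exchange loop for a throttled SI engine: the charge is drawn
# in below atmospheric pressure and expelled against exhaust back-pressure,
# so the loop work is negative (a loss). Numbers are illustrative only.
V_swept = 4.5e-4                # swept volume [m^3]
p_int   = 0.6e5                 # throttled intake pressure [Pa]
p_exh   = 1.05e5                # exhaust back-pressure [Pa]

w_pump = (p_int - p_exh) * V_swept   # net pumping work per cycle [J]
print(f"pumping loss per cycle: {w_pump:.2f} J")  # negative => work lost
```

Closing the throttle further lowers p_int, widening the pressure difference and increasing the magnitude of the loss, consistent with the trend seen in the exported P-V plots.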

On the other hand, it was noted that the pumping losses in the HCCI engine are significantly lower than in the SI engine. This is mainly attributed to the lean combustion used in HCCI engines: since the proportion of air is greater in lean combustion, the pressure in the intake manifold is higher, allowing more air into the cylinder and thus reducing pumping losses [7]. According to the results, the greater the air/fuel ratio, the higher the volumetric efficiency.

3.2 Optimization of Engine:

3.2.1 SI Engine Optimization:

Several factors were changed for optimizing the performance of SI engine for case 4. The compression ratio, engine speed, exhaust runner lengths and IVO were varied to get optimum results.

By increasing the compression ratio to 15:1, the BMEP, brake power, torque and thermal efficiency increased while the BSFC, NOx and CO2 reduced. The improvement is attributed to better fuel evaporation and mixing at high compression ratios [15].

The speed was increased to 2500 rpm. The positive valve overlap was also increased by opening the intake valve early. This led to a reduction in NOx and HC emissions, because early intake opening leads to backflow of residual gases into the intake port, which are recirculated into the cylinder on the next cycle, leading to their combustion. As engine speed was increased, the high overlap was beneficial as it increased volumetric efficiency due to the ram effect [6].

Changes to the engine geometry were made by reducing the exhaust pipe length to 44 mm, which led to better efficiency, torque and power. This is due to wave tuning: when the pressure wave in the exhaust manifold is tuned correctly, it returns to the cylinder before the valve closes, producing a negative pressure at the valve opening. This phenomenon, called scavenging, pushes more of the residual exhaust gas out, thus improving efficiency [3][13].

The combined effect of all the parameters can be seen in the table below.

Table 3: SI engine performance before and after optimization
The results show an increase in volumetric efficiency, brake power and BTE. The BSFC decreases as a result of the increase in brake power, which also increases the BTE. NOx and CO2 are reduced.

3.2.2 HCCI Engine Optimization:

Several parameters were changed for the optimization of HCCI case 1. The efficiency of an HCCI engine can be increased by optimizing NVO to trap hot residual gases (internal EGR) in the cylinder [23]. By closing the exhaust valve early, the hot gases can be used to facilitate the auto-ignition process and reduce combustion timing [24]. The hot residual gases heat the fresh incoming charge, raising its temperature and facilitating combustion. Hence EVC and IVO were optimized to obtain higher volumetric efficiency, higher brake power, lower BSFC and reduced NOx.

The combustion in an HCCI engine depends on the mixture chemistry in the cylinder. By reducing the engine speed, the pre-combustion reactions during the compression stroke are improved due to the longer residence time; combustion therefore occurs earlier, improving power and efficiency [16]. Breathing characteristics are also better at reduced rpm, which can be attributed to lower flow friction and enhanced wave dynamics. Hence the speed was reduced from 1500 rpm to 1200 rpm, which led to improved volumetric efficiency, thermal efficiency and brake power. The BSFC was also reduced.

Compression ratio has a strong effect on ignition timing and charge temperature in HCCI engines [10]. The compression ratio was also increased to 13.5:1.

The table shows the combined effect of reducing rpm to 1200rpm, increasing the compression ratio to 13.5:1 and optimizing negative overlap.

Table 4: HCCI engine performance before and after optimization
The results show an increase in volumetric efficiency, brake thermal efficiency and brake power. The BSFC reduces as a result of the increase in brake power, which also increases the BTE. However, there is also an increase in CO levels, showing that there is a trade-off between optimum performance and reduced emissions.

4. Model/Software Limitations:

The model has limitations and cannot incorporate certain elements that are present in the operation of real engines, two of which are turbulence and heat losses. Since the Reynolds number inside an operating internal combustion engine is very high, turbulence develops. Various other complex motions, such as swirling and tumbling flows, are produced after the introduction of the air/fuel mixture. As a result of turbulence, these complex motions and their interaction with the valve motion, heat transfer inside the engine becomes unsteady and undergoes local changes. The Reynolds number increases with piston speed, so turbulence increases, which influences the heat transfer inside the engine [12]. The effect of turbulence in a real-life 3-D engine is quite different, and all the heat losses of a real engine cannot be incorporated in the model. Turbulence affects the flame speed by assisting mixing, thus accelerating chemical reactions in the SI engine [8], whereas in the HCCI engine turbulence affects the rate of heat release [19]. Assuming a streamline flow of gases ignores the changes in fuel reactivity caused by the actual non-streamline flow. The assumption that the process is isentropic does not account for losses due to friction, noise or other heat transfer losses [1]. The software is limited to the inputs and the design provided by the user.

5. Exhaust After treatment:

NOx concentration in the exhaust gas depends on the peak cyclic temperature and the amount of oxygen available inside the combustion chamber. Hence, in order to reduce NOx in the exhaust, one can either reduce the peak temperature or reduce the available oxygen in the combustion chamber. This can be done by diluting the fuel/air mixture through the addition of non-combustible substances before it enters the engine cylinder. Water injection, catalytic converters and exhaust gas recirculation are among the techniques used for this purpose. Water injection worsens specific fuel consumption, so this method cannot be used beyond a certain limit [2]. A catalytic converter, on the other hand, reduces NOx emissions by changing the chemical properties of the exhaust gases; most of the emissions are eventually converted into carbon dioxide and water vapor [2]. Exhaust gas recirculation is of particular interest since it is effective in reducing harmful gases for both SI and HCCI engines: 10-30% of the engine exhaust gas is recirculated and sent back to the engine inlet manifold. Since the fresh air at the inlet is mixed with exhaust gas, the oxygen concentration is reduced and the maximum burning temperature is simultaneously lowered, thus reducing NOx [2]. The method is efficient to the extent that NOx emissions can be reduced by 25.4% to 89.6% [11].
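The oxygen-dilution effect of EGR can be sketched with a simple mixing estimate (assuming, for illustration, fully burnt oxygen-free exhaust):

```python
# Rough dilution estimate: recirculating a fraction `egr` of (mostly inert)
# exhaust into the intake lowers the oxygen concentration of the charge.
O2_AIR = 0.21          # mole fraction of O2 in fresh air
O2_EGR = 0.00          # simplification: fully burnt, oxygen-free exhaust

def intake_o2_fraction(egr):
    """O2 mole fraction of the intake charge at a given EGR fraction."""
    return (1.0 - egr) * O2_AIR + egr * O2_EGR

for egr in (0.0, 0.10, 0.30):   # 10-30 % EGR range quoted in the text
    print(f"EGR {egr:4.0%} -> intake O2 {intake_o2_fraction(egr):.3f}")
```

Less oxygen and the added thermal mass of the inert gas both lower the peak burning temperature, which is the mechanism behind the NOx reduction described above.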

There are certain challenges related to the use of EGR, the major one being that it decreases engine performance. Various researchers have proposed different approaches to overcome this shortcoming, such as EGR hydrogen reforming and treating the stream before it enters the inlet manifold. For HCCI engines, Lü and coworkers (2005) proposed cooling the EGR in order to prolong the combustion time. The most commonly used method of recovering the lost performance is turbocharging, which keeps the process from reaching self-ignition levels. Increasing the effective compression ratio through turbocharging increases NOx formation, as the peak temperature is raised, but at the same time the addition of inert gases through EGR reduces it. Thus, to reach the best results, one must optimize the amount of recycled gas and the compression ratio [18].

6. Conclusion:

Both models were run successfully and were calibrated and validated against the given experimental data by making specific adjustments during the initial modelling and simulation. The intent of improving emissions and optimizing the model is evident throughout the project. AVL BOOST proved an efficient tool, simulating the models with good accuracy, bearing in mind the various limitations of the software.

The software would have been more user friendly if it had presented recommendations or suggestions, especially during model development and calibration. A more accurate outcome would have been achieved if phenomena such as turbulence and heat-loss values could have been incorporated, making the models closer to real life. Even though 1-D software such as AVL BOOST offers benefits such as simple and fast calculations, developing a working model requires good in-depth knowledge of the input parameters.

7. Project Management:

In order to achieve the given goals, the project was divided into various segments as shown in the Gantt chart in the end. Since it involved modelling, simulation, calibration and validation there were different steps in the process which needed revisiting. One of the most crucial activities was literature review for model development which was efficiently distributed between the team and led to positive discussions till an approach was developed at each stage of the project. Efficient time management and segregation of tasks ensured successful and on time achievement of the stated milestones.



[1]    Aceves, S. M., Flowers, D. L., Martinez-Frias, J., Espinosa-Loza, F., Christensen, M., & Johansson, B. (2005). Analysis of the Effect of Geometry Generated Turbulence on HCCI Combustion by Multi-Zone Modeling. Rio de Janeiro: SAE.

[2]    Amritkar, A. B., & Badge, N. (2016). Effect of Exhaust Gas Recirculation (EGR) in Internal Combustion Engine. International Research Journal of Engineering and Technology, 1180-1185.

[3]    Aradhye, O., & Bari, S. (2017). Continuously Varying Exhaust Pipe Length and Diameter to Improve the Performance of a Naturally Aspirated SI Engine. ASME International, 8.

[4]    Heywood, J. B. (2018). Internal Combustion Engine Fundamentals. Ohio: McGraw-Hill Book Company.

[5]    Chang, J., Filipi, Z., Assanis, D., Kuo, T.-W., Najt, P., & Rask, R. (2005). Characterizing the thermal sensitivity of a gasoline homogeneous charge compression ignition engine with measurements of instantaneous wall temperature and heat flux. International Journal of Engine Research, 289-310.

[6]    Choi, K., Lee, H., Hwang, I. G., Myung, C.-L., & Park, S. (2008). Effects of various intake valve timings and spark timings on combustion, cyclic THC and NOX emissions during cold start phase with idle operation in CVVT engine. Journal of Mechanical Science and Technology, 2254-2262.

[7]    Dahl, D. (2012). Gasoline Engine HCCI combustion extending the High Load Limit. Goteborg: Chalmers University of Technology.

[8]    Hynes, J. (1986). Turbulence effects on combustion in spark ignition engines. Leeds: University of Leeds.

[9]    AVL List GmbH. (2018). AVL BOOST™ Combustion and Emissions. Retrieved 24 November 2018, from

[10] Najafabadi, M. I., & Aziz, N. A. (2013). Homogeneous Charge Compression Ignition Combustion: Challenges and Proposed Solutions. Journal of Combustion, 14.

[11] Onawale O, T. (2017). Effect of Exhaust Gas Recirculation on Performance of Petrol Engine. Journal of Engineering and Technology, 14-17.

[12] Park, H. J. (2009). Development of an In-cylinder Heat Transfer Model with Variable Density Effects on Thermal Boundary Layers. Michigan: The university of the Michigan.

[13] Sawant, P., Warstler, M., & Bari, S. (2018). Exhaust Tuning of an Internal Combustion Engine by the Combined Effects of Variable Exhaust Pipe Diameter and an Exhaust Valve Timing System. MDPI, 1-16.

[14] Stone, R. (1992). Introduction to Internal Combustion Engines. Middlesex: Macmillan.

[15] T., A., C. O, F., & G. Y. , P. (2012). Influence of compression ratio on the performance characteristics of a spark ignition engine. Advances in Applied Science Research, 1915-1922.

[16] Thring, R. H. (1989). Homogeneous-Charge Compression-Ignition(HCCI) Engines. SAE International, 12.

[17] Uyumaz, A., & ÇINAR, C. (2016). Understanding the Effects of Residual Gas Trapping on Combustion Characteristics, Engine Performance and Operating Range in a HCCI Engine. International Journal of Advances in Science Engineering and Technology, 6-12.

[18] Vianna, J., Reis, A., Oliveira, A., & Fraga, A. (2005). Reduction of Pollutants Emissions on SI Engines – Accomplishments With Efficiency Increase. ABCM , 217-222.

[19] Vressner, A., Hultqvist, A., & Johansson, B. (2007). Study on Combustion Chamber Geometry Effects in an HCCI Engine using High-Speed Cycle-Resolved Chemiluminescence Imaging. SAE International.

[20] Wang, K. (2018). HCCI engine CFD simulations: Influence of intake temperature, cylinder wall temperature and the equivalence ratio on ignition timing. The Ohio State University.

[21] Wilhelmsson, C., Vressner, A., Tunestål, P., Johansson, B., Särner, G., & Aldén, M. (2005). Combustion Chamber Wall Temperature Measurement and Modelling during Transient HCCI Operation. SAE Technical Paper Series, 13.

[22] Xu, C. C., & Cho, M. H. (2017). The study of an Air intake on the Throttle of the Engine by CFD in Spark Ignition Engine. International Journal of Applied Engineering Research, 5263-5266.

[23] Yang, J., Culp, T., & Kenney, T. (2002). Development of a Gasoline Engine System Using HCCI Technology - The Concept and the Test Results. SAE International, 16.

[24] Zhao, F., & Asmus, T. W. (2003). Chapter 4 : HCCI Control and Operating Range Extension. In F. Zhao, T. W. Asmus, D. N. Assanis, J. E. Dec, J. A. Eng, & P. M. Najt, Homogeneous Charge Compression Ignition (HCCI) Engines. SAE.



Training Simulation for Cyber Security Novice Analysts based on Cognitive Analysis of Cyber Security Experts


In a digitized world, cyber security is becoming a growing concern for society, with attacks on systems now more frequent and complex than ever. This makes it extremely hard for a trainee cyber security analyst to acquire an expert-level skill set in the domain, creating a need for better training of cyber defense analysts. A major part of a cyber security analyst's job is to identify false alarms correctly. This paper presents a cognitive task analysis approach to address this need for a better training model focused on false alarm detection. The primary objective is to capture and characterize the performance of a cyber security expert tackling complex threats and to incorporate it into the training model in order to provide effective training for cyber situation awareness. For the training to be effective, it is crucial to design realistic training scenarios. Using the cognitive task analysis technique, this paper focuses mainly on an improved training model for the accurate identification of false alarms, helping trainees to think and act like experts. To tackle the information overload faced by cyber analysts, it proposes attack-specific checklist items. During training, cyber analysts can adjust their own checklist items and set thresholds so that cyber attacks can be detected more quickly. Since the time required for cyber analysts to recognize, analyze and identify a threat as a false alarm is critical, we evaluate the performance of cyber analysts against an ideal timeline based on their response time.

Keywords: Cyber Attacks, Situation Awareness, Training for Cyber Security Experts

Training Simulation for Cyber Security Professionals

Cyber security is a large-scale societal problem. The threat to organizations and governments has continued to grow as we become increasingly dependent on information technology; meanwhile, the entities behind cyber attacks grow in sophistication. Low and slow attacks, also called advanced persistent threats, are a new category of cyber security threat designed to exist undetected over an extended period of time and disrupt the processes of an organization. In response, the role of the cyber security professional has developed as a specialized subset within information technology careers. Cyber security professionals are individuals who are responsible for ensuring the ongoing security of their organization’s computer network. Recent high-profile cases of network intrusions underscore the vulnerabilities in current information technology in banking, healthcare, retail, and in the government.


In general, cyber security professionals “protect, monitor, analyze, detect and respond to unauthorized activity,” a task called computer network defense (CND). Because of the large and growing volume of network activity, unaided performance of this task is impossible in large organizations. To reduce the human information processing requirements, automated tools are used. One example is an intrusion detection system (IDS), which examines server log files to find patterns associated with anomalies. When such a pattern is found, cyber security professionals can be alerted to investigate. However, IDSs are limited in their sophistication and reliability; this has been true of most forms of automation for CND. Because of this, CND is a joint human-machine collaborative task in which people depend on automated tools to perform their jobs but must remain “in the loop” as an information processor and decision maker. Consequently, the cyber security professional is a critical line of defense in CND. Effective human decision making is a determinant of successful cyber security. Hence there is a need for training of cyber security analysts. It has been established that situation awareness (SA), a cognitive state resulting from a process of situation assessment, is a predictor of human performance across domains, and research has established its role in CND, where it is called cyber SA. In other words, cyber SA, as goal-relevant knowledge held during task performance, predicts threat response by describing whether cyber security professionals have adequate awareness of relevant elements in the task environment.

In cyber situational awareness, cyber analysts have to collect data and seek cues that form attack tracks, find the impact of attack tracks, and anticipate the moves (actions, targets, timing) of attackers. Due to the enormous size and complexity of networks, cyber analysts face extraordinary cognitive challenges. First, the environment from which a cyber analyst has to perceive salient cues is vastly larger and more difficult to comprehend. Second, the speed at which cyberspace changes is much faster, with new offensive technologies constantly being developed. Third, the cyber analyst only sees the information that his or her (software) sensors are capable of detecting in a form that can be rendered on a monitor screen. Furthermore, cyber analysts are given large amounts of information (such as various IDS and audit logs) to look through, and CSA demands that various pieces of information be connected in both space and time. This connection necessitates team collaboration among cyber analysts working at different levels and on different parts of the system. As cyber attacks become more frequent and more complex, the need for more effective training of cyber analysts, and of their collaborative efforts to protect critical assets and ensure system security, is also elevated.

Cognitive Task Analysis (CTA) is the process of extracting the knowledge and thought processes of cyber security experts and making use of this information to develop training scenarios (Huang, Shen, Doshi, Thomas & Duong, 2015). The outcome of CTA is the performance, equipment, conceptual and procedural knowledge used by experts as they perform a task. Training techniques for cyber security decision making will be developed. Informed by knowledge of mental models and their impact on SA, the research will lead to new training techniques that result in the transfer of the skills and knowledge identified in this research as critical to effective cyber security decision making. Measurement of mental models provides a way to evaluate structural knowledge and supports training and evaluation development; mental models that have been empirically developed from high-performing experts can be used for evaluation in a variety of ways. Evaluating mental models can be used as a selection tool or a way to identify targets for training. To assess the mental models that support cyber SA, it is important that the measurement is well suited to the mental model being assessed; because experts may hold multiple mental models, it is likely that several assessment techniques will be needed to assess all relevant mental models in CND. This training will be targeted to two user populations: early-career professionals, with the goal of improving human performance in the industry, and students, with the goal of increasing participation in and preparation for cyber security careers.

Training materials are developed to teach novices how to perform like experts. In this paper, we present cyber analyst training based on a CTA approach to gain insight into the cognitive workflow of cyber analysts. We then assess cyber analysts' performance by comparing their response time in detecting cyber attacks with an estimated ideal detection time. Use of this assessment across diverse populations will demonstrate how cyber structural knowledge changes as a function of expertise. This research will identify patterns of gaps in structural knowledge within each population. It is expected that the most accurate and richest mental models will be held by cyber security professionals with the most industry experience. Even if a different pattern is discovered, it will describe differences in expertise across populations. Ultimately, training needs for CND will be identified. This paper restricts the scope of response time to the time taken by an analyst to conclude whether a threat is real or a false positive.
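A minimal sketch of such a response-time evaluation might look as follows; the data fields and the scoring rule are our own assumptions for illustration, not a metric defined in the cited work:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    alert_id: str
    is_real_threat: bool      # ground truth from the scenario script
    analyst_said_real: bool   # trainee's conclusion
    response_s: float         # time taken to reach the conclusion [s]
    ideal_s: float            # expected time for an expert [s]

def evaluate(verdicts):
    """Return (accuracy, mean slowdown vs. the expert baseline)."""
    correct = [v for v in verdicts if v.analyst_said_real == v.is_real_threat]
    accuracy = len(correct) / len(verdicts)
    # Ratio > 1 means slower than the expert baseline on correct calls.
    slowdown = sum(v.response_s / v.ideal_s for v in correct) / max(len(correct), 1)
    return accuracy, slowdown
```

Accuracy captures false-alarm identification, while the slowdown ratio captures how far the trainee's timeline deviates from the ideal one.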

Literature Review

CyberCog (Rajivan, 2011) is used to understand and measure individual and team situational awareness and to evaluate algorithms. CyberCog is a synthetic task environment intended to improve cyber situation awareness through visualization. It provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of cyber analysts. CyberCog produces performance measures and interaction logs for measuring individual and team performance, and it has been used to evaluate team-based situation awareness. CyberCog uses a collection of known cyber incidents and analysis data to build a synthetic task environment. Alerts and cues are produced by replaying real-world analyst knowledge. From the mix of alerts and cues, trainees react to identify threats (and vulnerabilities) individually or as a team. The identification of attacks is based on knowledge of the attack alert patterns.

Intended for a better understanding of the human role in a cyber-analysis task, idsNETS (Giacobe, McNeese, Mancuso & Minotra, 2013), built upon the NeoCITIES Experimental Task Simulator (NETS), is a human-in-the-loop simulator for intrusion detection analysis. Like CyberCog, NETS is a synthetic task environment. Realistic scenarios are compressed and written into scaled-world definitions, and the simulation engine is capable of interpreting the scaled-world definitions into a simulated environment, running the simulation, and responding to user interaction. In (Giacobe, McNeese, Mancuso & Minotra, 2013), several human-subject experiments were performed using the NETS simulation engine to explore human cognition in simulated cyber-security environments. The investigation showed that teams with more similar skill sets exhibited more cohesive cooperation through frequent communication and information sharing.

The primary difference between CyberCog/idsNETS and the LVC (Live Virtual Constructive) system (Varshney, Pickett, & Bagrodia, 2011) is that while CyberCog and idsNETS are synthetic task environments, the LVC framework is a real system/emulator. A synthetic task environment may rely on previous incidents to generate the sequence of alerts and cues corresponding to those incidents, whereas the LVC framework is able to simulate previous incidents as well as generate new simulated or emulated incidents on the fly (Huang, Shen, Doshi, Thomas & Duong, 2015). The LVC framework supports a hybrid network of real and virtual machines, so attacks can be launched from a real or a virtual host, targeting a real or a virtual host. Figure 2 outlines the use cases of the LVC framework that combine physical machines and a virtual network environment to perform cyber attacks and defense.

The Rationale and Objectives of the Study

The research objective of this proposal is to identify cognitive outcomes associated with successful threat response in computer network defense (CND) and leverage those outcomes to improve training for cyber security professionals. The role of cyber security professionals, who are responsible for ensuring the continued security of their organization's network, has developed as a specialist subset of information technology careers. Broadly, cyber security professionals investigate network activity to find, identify, and respond to anomalies. CND is a joint human-machine collaborative task in which people depend on automated tools to perform their jobs but must remain “in the loop” as information processors and decision makers. Consequently, CND is dependent on human decision making. Situation awareness (SA) and mental models are cognitive outcomes that predict human performance.

The research objectives of this proposal are to identify cognitive outcomes, including mental models and situation awareness, that predict successful threat response in CND and to create training to facilitate these outcomes. This proposal will address this objective through a research approach that bridges human factors psychology and cyber security. A further objective is to improve the user experience of a training simulation model for a novice cyber security analyst, teaching them how to think and act like an expert using a characterization of the cognitive analysis of a cyber security expert.

Research results will increase access to cyber security careers through the development of training for cyber security professionals and aspiring cyber security professionals, especially members of under-represented groups, as part of the educational objectives of this research. The recipients of this training include high school students. In addition, a new course will take an interdisciplinary approach to human decision making in CND and expose computer science and psychology students to the role of decision making in CND.

Despite the presence of an interdisciplinary Human Factors M.S. program accredited by the Human Factors and Ergonomics Society, students in traditional computer science paths receive limited exposure to human-centered approaches to technology problems, especially those incorporating the science of decision making. Simultaneously, students in research psychology programs receive limited exposure to engineering applications of psychology. This new course will address this need. The course will be targeted to students majoring in computer science, psychology, and interdisciplinary human factors graduate programs.


The intellectual merits of this proposal include new knowledge in the science of training. The research will generate knowledge about the predictors of SA and performance in dynamic environments. The broader impacts of this project address the great need for the development of a cyber security workforce. Training in cyber security decision making will make CND careers accessible to people beyond traditional computer science career paths. Further, the training developed through this research is potentially transformative in that it will improve human decision making in CND, leading to better threat response and improved cyber security. Threat response training that improves decision-making skills in CND, instead of training responses to individual threats, will provide a strategic advantage against cyber adversaries as they continue to grow in sophistication and new threats emerge.

The Methods and Procedure

We propose realistic training scenarios for the training and evaluation of cyber situation awareness that allow cyber analysts to experience cyber attacks and learn how to detect ongoing attacks. Cyber security lessons designed to involve analysts in learning must be carefully planned: we study how, when, where, and why analysts perform a cyber defense task, and this knowledge informs the design of training scenarios in which analysts determine whether attacks are real or false positives. Without such support, analysts would soon be overwhelmed by the enormous volume of data and be forced to ignore potentially important evidence, introducing errors into the detection procedure. To address the heavy cognitive demand faced by cyber analysts, we identify and design the items on the cyber attack watch list. Cyber analysts can tailor their own watch-list items and triggering thresholds in order to detect cyber attacks faster. Through collaboration with our industry partner Cisco Systems, Inc., a provider of network solutions, cyber security professionals will be recruited as evaluators of candidate training products; in doing so, these professionals will benefit from state-of-the-art training in cyber security decision making. From this collaboration, a training workshop will be developed for early-career cyber security professionals. This workshop will introduce learners to the determinants of quality decision making in their careers, leverage the research to support development of cyber security decision-making skills, and provide learners with methods of evaluating cyber security decision making.
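The watch-list tailoring described above can be sketched as follows; the item names and thresholds are hypothetical illustrations, not drawn from the proposal:

```python
# Illustrative sketch of a tailorable watch list: each item pairs a
# monitored metric with an analyst-set trigger threshold.
# All names and threshold values below are hypothetical.

watch_list = {
    "failed_logins_per_min": 20,
    "outbound_mbps": 50,
    "new_admin_accounts": 1,
}

def triggered(observations: dict) -> list:
    """Return the watch-list items whose observed value meets or
    exceeds the analyst's threshold."""
    return [item for item, threshold in watch_list.items()
            if observations.get(item, 0) >= threshold]

alerts = triggered({"failed_logins_per_min": 35, "outbound_mbps": 12})
print(alerts)  # ['failed_logins_per_min']
```

Raising or lowering a threshold is the analyst's tuning knob: a lower value detects attacks faster at the cost of more false alarms.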

Based on the design steps, the training workflow is shown in Figure 1, which contains the following steps:

Step 1 : The instructor presents the cyber security training scenario, including an instruction sheet describing the objective of the study and the expected time to identify the attack.

Step 2 : The simulated attacks and log data are shown on the analyst side. After analyzing these data, the cyber analyst should react to the cyber events and identify whether each is an attack or a false alarm.

Step 4 : During training, the training system determines whether the cyber analyst's response actions meet the expected time listed in the instruction sheet.

Step 5 : The analyst's recorded response time is compared with the expected time and a score is computed. This score report is provided to the analyst for the next round.

Step 6 : Based on their score report, cyber analysts are asked to adjust their watch-list items to improve their analysis capability.
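The scoring steps above can be sketched minimally as follows, assuming a simple time-ratio score; the function names, weights, and cap are illustrative, not part of the actual training system:

```python
# Minimal sketch of the scoring logic: a wrong call scores 0, and a
# correct call scores higher the more the analyst beats the expected
# time. Weights (50 points, 2x cap) are assumptions for illustration.

def score_response(response_time_s: float, expected_time_s: float,
                   correct: bool) -> float:
    """Score one detection event."""
    if not correct:
        return 0.0
    # Cap the speed bonus so very fast guesses don't dominate.
    speed_ratio = min(expected_time_s / max(response_time_s, 1e-6), 2.0)
    return round(50.0 * speed_ratio, 1)

def score_round(events):
    """Aggregate (response_time, expected_time, correct) events into
    the score report fed back to the analyst."""
    scores = [score_response(r, e, c) for r, e, c in events]
    return {"per_event": scores, "total": sum(scores)}

report = score_round([(30.0, 60.0, True),    # fast, correct detection
                      (90.0, 60.0, True),    # slow but correct
                      (10.0, 60.0, False)])  # false alarm
print(report["total"])  # 133.3
```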

Based on the customized learning scenario, cyber analysts will learn the knowledge necessary to monitor network conditions and identify ongoing attacks. After cyber security training, cyber analysts can do the following with regard to a certain number of known attacks: list the relevant parameters for monitoring and know their characteristics in normal and abnormal operation; recognize network attack symptoms, in particular isolating common network characteristics under attack and distinguishing the specific characteristics of each attack (Huang, Shen, Doshi, Thomas & Duong, 2015); given a set of current conditions (monitored parameters), analyze which type of attack is occurring and how it started; and demonstrate proper remedial action procedures, including the selection of countermeasures and where to apply them in the network.


Tyworth, M., Giacobe, N. A., Mancuso, V., & Dancy, C. (2012). The distributed nature of cyber situation awareness. 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support. doi:10.1109/cogsima.2012.6188375

Giacobe, N. A., McNeese, M. D., Mancuso, V. F., & Minotra, D. (2013). Capturing human cognition in cyber-security simulations with NETS. 2013 IEEE International Conference on Intelligence and Security Informatics. doi:10.1109/isi.2013.6578844

Mahoney, S., Roth, E., Steinke, K., Pfautz, J., Wu, C., & Farry, M. (2010). A cognitive task analysis for cyber situational awareness. PsycEXTRA Dataset. doi:10.1037/e578652012-003

McNeese, M. (2000). Situation Awareness Analysis and Measurement. doi:10.1201/b12461

Varshney, M., Pickett, K., & Bagrodia, R. (2011). A Live-Virtual-Constructive (LVC) framework for cyber operations test, evaluation and training. 2011 – MILCOM 2011 Military Communications Conference. doi:10.1109/milcom.2011.6127499

Huang, Z., Shen, C., Doshi, S., Thomas, N., & Duong, H. (2015). Cognitive task analysis based training for cyber situation awareness. Information Security Education Across the Curriculum, IFIP Advances in Information and Communication Technology, 27-40. doi:10.1007/978-3-319-18500-2_3

D’Amico, A., Whitley, K., Tesone, D., O’Brien, B., & Roth, E. (2005). Achieving cyber defense situational awareness: A cognitive task analysis of information assurance analysts. PsycEXTRA Dataset. doi:10.1037/e577392012-004

Rajivan, P. (2011). CyberCog: A synthetic task environment for measuring cyber situation awareness. Master's thesis, Arizona State University.

Tables and Figures


Figure 1. Workflow for training system

Figure 2. Usage example of the Live-Virtual-Constructive (LVC) framework, adapted from the Military Communications Conference paper.

Network Simulation With OPNET Modeler

The routing protocol is key to the quality of a modern communication network. EIGRP, OSPF, and RIP are the dynamic routing protocols used in practical networks to propagate network topology information to neighboring routers. A large number of static and dynamic routing protocols are available, but the choice of the right routing protocol depends on many parameters, the critical ones being network convergence time, scalability, memory and CPU requirements, security, and bandwidth requirements.
This assignment uses the OPNET simulation tool to analyze the performance of RIP and EIGRP, two protocols commonly used in IP networks.
Initially, we have the following network.

Examining the network, we see that the red lines indicate a data rate of 44.736 Mbps between network components; only the connection between the London office and the Portsmouth office has a data rate of 64 kbps.
The traffic flow between the London office and Bristol_corporate is an IP traffic flow with the following characteristics:

RIP Protocol Over Network:
Routing Information Protocol (RIP) is a distance vector dynamic routing protocol that employs the hop count as a routing metric. RIP is implemented on top of the User Datagram Protocol (UDP) as its transport protocol. It is assigned the reserved port number 520. RIP prevents routing loops by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of permitted hops is 15. Hence a hop count of 16 is considered an infinite distance. This hop number limits the size of networks that RIP may support. RIP selects paths that have the smallest hop counts. However, the path may be the slowest in the network. RIP is simple and efficient in small networks.
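RIP's hop-count path selection can be sketched on the assignment topology. The link list below is read from the figure; note that the hop metric ignores the 64 kbps vs 44.736 Mbps data rates entirely:

```python
# Sketch of RIP-style distance-vector route selection using hop counts
# only, with the assignment's topology. RIP treats 16 hops as infinity.

INFINITY = 16

links = [("London", "Portsmouth"), ("Portsmouth", "Bristol"),
         ("London", "Oxford"), ("Oxford", "Birmingham"),
         ("Birmingham", "Bristol")]

def rip_hops(src):
    """Bellman-Ford relaxation over hop counts, the fixed point a
    distance-vector protocol converges to after exchanging updates."""
    nodes = {n for link in links for n in link}
    dist = {n: INFINITY for n in nodes}
    dist[src] = 0
    for _ in range(len(nodes) - 1):          # repeated update rounds
        for a, b in links:
            for u, v in ((a, b), (b, a)):
                if dist[u] + 1 < dist[v] and dist[u] + 1 < INFINITY:
                    dist[v] = dist[u] + 1
    return dist

print(rip_hops("London")["Bristol"])  # 2: via Portsmouth, not 3 via Oxford
```

Because the two-hop Portsmouth route beats the three-hop Oxford route, RIP sends the traffic over the slow 64 kbps link.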
First, we run the RIP routing protocol in the network for a simulation period of 600 seconds, collecting the following statistics:

Path Selection
Time Taken for routing convergence
Protocol Overhead

Path Selection
For path selection we get the following result with the RIP protocol.

The IP traffic flow is from London to Bristol_corporate. Despite the much lower data rate of the London-Portsmouth path compared to the London-Oxford path, RIP sends the traffic over the low-data-rate path through Portsmouth, because that path has the fewest hops. The graph displays the data throughput for the London-Portsmouth and Portsmouth-Bristol links.
Time Taken for routing convergence
RIP, as a distance vector routing protocol, announces its routes in an unsynchronized and unacknowledged manner, which can lead to convergence problems. The graph shows the time taken for RIP routing convergence. The convergence time is high, 6.975 s, which means the routers are finding it difficult to exchange state information.

Protocol Overhead

RIP is a distance-vector protocol: it selects the best routing path based on a distance metric (the distance) and an interface (the vector). RIP evaluates the best path based on distance, which can be measured in hops or a combination of metrics calculated to represent a distance value. In this exercise RIP selects the London-Portsmouth link, where maximum utilisation occurs.
The utilisation and convergence data suggest there is queuing and blocking on the link: the utilisation of the London-Portsmouth link is high, 84.629%, suggesting the link is over-utilised.

In the point-to-point queuing graph, the London-Portsmouth link shows an average queuing delay of 3.6032 s, indicating traffic blocking or queuing on the link.
The London-Portsmouth link uses a DS0 (blue) cable with a data rate of 64 kbps, compared to the other links in the network, which use DS3 cables (red) with a data rate of 44.736 Mbps; the combination of the over-utilisation of the London-Portsmouth link and the low-data-rate DS0 cable has caused the traffic queuing and blocking.
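As a rough illustration of why high utilisation on a 64 kbps link produces multi-second delays, an M/M/1 queuing approximation (with an assumed 1000-byte average packet, not a value taken from the simulation) shows delay growing sharply as utilisation approaches 100%:

```python
# M/M/1 approximation of mean per-packet delay on the DS0 link.
# The 1000-byte average packet size is an assumption for illustration.

LINK_BPS = 64_000        # DS0 data rate
PKT_BITS = 8 * 1000      # assumed average packet size in bits

def mm1_delay(utilisation: float) -> float:
    """Mean time in an M/M/1 queue+server, W = 1 / (mu - lambda).
    Valid only for utilisation < 1."""
    mu = LINK_BPS / PKT_BITS      # packets/s the link can serve
    lam = utilisation * mu        # offered load in packets/s
    return 1.0 / (mu - lam)

for u in (0.5, 0.846, 0.95):
    print(f"utilisation {u:.0%}: mean delay {mm1_delay(u):.2f} s")
```

The exact numbers differ from OPNET's measured 3.6032 s (real traffic is not Poisson), but the trend is the same: as utilisation climbs toward 100%, delay grows without bound.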
————————————-Exercise 2————————————-
EIGRP Protocol Over Network:
Enhanced Interior Gateway Routing Protocol (EIGRP) is a Cisco proprietary routing protocol. It is based on a route calculation algorithm called the Diffusing Update Algorithm (DUAL). It has features of both distance vector and link state protocols. EIGRP metrics are based on reliability, MTU, delay, load, and bandwidth; delay and bandwidth are the basic parameters for calculating the metric.
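A sketch of EIGRP's default composite metric (K1 = K3 = 1, all other K-values 0) shows why an all-DS3 path wins despite having more hops; the delay values below are illustrative, not taken from the model:

```python
# EIGRP default composite metric:
#   metric = 256 * (10^7 / min_bandwidth_kbps + total_delay_us / 10)
# Bandwidth uses the slowest link on the path; delay is cumulative.
# The per-hop delay of 20,000 us is an assumed illustrative value.

def eigrp_metric(min_bw_kbps: int, total_delay_us: int) -> int:
    return int(256 * (10**7 // min_bw_kbps + total_delay_us / 10))

# Two-hop path via Portsmouth: bottleneck is the 64 kbps DS0 link.
via_portsmouth = eigrp_metric(64, total_delay_us=2 * 20_000)
# Three-hop path via Oxford/Birmingham: all DS3 links at 44,736 kbps.
via_oxford = eigrp_metric(44_736, total_delay_us=3 * 20_000)

print(via_portsmouth > via_oxford)  # True: EIGRP avoids the slow link
```

The 64 kbps bottleneck dominates the bandwidth term (10^7/64 = 156,250 vs 10^7/44,736 = 223), so the extra hop delay on the Oxford path is negligible by comparison.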

First, we run the EIGRP routing protocol in the network for a simulation period of 600 seconds, collecting the following statistics:

Path Selection
Time Taken for routing convergence
Protocol Overhead

Path Selection
For path selection we get the following result with the EIGRP protocol.

The IP traffic flow is from London to Bristol_corporate, but in contrast with RIP, which selected the low-data-rate path, EIGRP selects the path London-Oxford, Oxford-Birmingham, Birmingham-Bristol to carry the traffic flow.
Time Taken for routing convergence
EIGRP is more efficient than RIP: the graphs show a very fast convergence duration of 0.0074427 s, compared to 6.975 s for RIP in the same scenario.

Protocol Overhead

Compared to RIP, no over-utilisation occurs with EIGRP. The utilisation graphs above clearly show that utilisation is distributed evenly over the path: London-Oxford 5.5606%, Oxford-Birmingham 5.5783%, and Birmingham-Bristol 5.5662%.
EIGRP performs better in terms of network convergence, routing traffic, and Ethernet delay.
EIGRP has the characteristics of both distance vector and link state protocols, with improved network convergence, reduced routing protocol traffic, and lower CPU and RAM utilisation compared to RIP.
EIGRP makes very low use of network resources during normal operation, since only hello packets are transmitted. When a routing table changes, its convergence time is short and bandwidth utilisation is reduced.
————————————-Exercise 3————————————-
We introduced a link failure between Bristol_corporate and the Portsmouth office at 100 seconds, with recovery at 200 seconds, and ran the RIP and EIGRP protocols over the network.
Following are our observations, with a side-by-side comparison of RIP and EIGRP.



With the RIP protocol, the link failure at 100 s prevented traffic from flowing; when the link recovered at 200 s, a large amount of traffic was bottlenecked on the link, causing the utilisation of the London-Portsmouth link to increase suddenly. It can also be observed that during the failure, RIP began to reroute the traffic over the London-Oxford, Oxford-Birmingham, and Birmingham-Bristol links before the link recovered; the graph shows this small utilisation on those links.
With the EIGRP protocol, the link failure did not affect utilisation because the failed link was not used in the routing path. EIGRP did not use the Portsmouth-Bristol link in its path selection, so network performance was barely affected by the failure and the utilisation values do not change.

With RIP, the convergence duration becomes much higher than in the scenario before the failure: it was 6.975 s before and is 19.409 s now. This is because the routers must update their routing tables when the failure occurs and again when the link recovers, which takes more time.
In contrast, with the EIGRP protocol the convergence duration becomes 0.012273 s, much less than RIP, because EIGRP updates only the routes affected by the link failure rather than the whole network. EIGRP therefore provides a much more efficient and faster way to achieve convergence.

Time Delay of Protocol
More IP packets are dropped with RIP than with EIGRP because the failed link lies on the path that RIP follows; in contrast, fewer IP packets are dropped with EIGRP because its path does not include the failed link.

————————————-Exercise 4————————————-
Consider the given network merging with another network; the picture below shows the merged network.

The IP traffic flows send traffic from the London office to three destinations: the North Wales plant, the Birmingham plant, and the Oxford office. We defined the IP traffic according to the given table.
A new DS1 link (black line in the picture) is introduced, connecting the North Wales plant to the London office via the new Manchester office.
We run RIP as the routing protocol, which gives the following observations:


The graph clearly shows that utilisation is high for the London office-Manchester office and Manchester office-North Wales links.
Both are approximately 97% utilised, which is over-utilisation and causes serious problems for the network.
For the London-Oxford office and Oxford-Birmingham plant links, the utilisation is only about 13% and 6%, because those links use the high-data-rate DS3 cable, whereas the over-utilised links use the lower-data-rate DS1 cable.
The DS1 cable has a data rate of 1.5 Mbps, whereas DS3 has 44.736 Mbps.
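A back-of-the-envelope utilisation check illustrates the gap between the two cable types; the ~1.5 Mbps offered load is an assumption chosen to match the reported ~97% figure, not a value read from the traffic table:

```python
# Utilisation = offered load / link capacity.
# The 1.5 Mbps offered load is an illustrative assumption;
# 1.544 Mbps is the standard DS1 line rate.

RATES_BPS = {"DS1": 1_544_000, "DS3": 44_736_000}

def utilisation(offered_bps: float, link: str) -> float:
    return offered_bps / RATES_BPS[link]

# The same ~1.5 Mbps flow crossing a DS1 link vs a DS3 link:
print(f"DS1: {utilisation(1_500_000, 'DS1'):.0%}")   # ~97%
print(f"DS3: {utilisation(1_500_000, 'DS3'):.0%}")   # ~3%
```

The same flow that saturates the DS1 link would barely register on a DS3 link, which is why the DS3 paths in the network show low utilisation.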
From this observation, one possible solution is to use the EIGRP protocol, as EIGRP can solve the over-utilisation problem in our network.
Let us run the EIGRP protocol and compare its results with RIP.



The EIGRP protocol solves the over-utilisation problem we faced with the RIP protocol; the resulting graphs and comparison show this clearly, with utilisation distributed evenly over the path selected by EIGRP.
CONCLUSIONS: EIGRP performs better than RIP in terms of network convergence activity and routing protocol traffic. EIGRP has the characteristics of both distance vector and link state protocols, and offers improved network convergence, reduced routing protocol traffic, and lower CPU and RAM utilisation compared to RIP.
Xu, D., & Trajković, L. Performance analysis of RIP, EIGRP, and OSPF using OPNET.
Thorenoor, S. G. Dynamic routing protocol implementation decision between EIGRP, OSPF and RIP based on technical background using OPNET Modeler.