## History and Applications of Matrices

Matrices have many applications today and are very useful to us. Physics makes use of matrices in various domains, for example in geometrical optics and matrix mechanics; the latter led to studying in more detail matrices with an infinite number of rows and columns. Graph theory uses matrices to keep track of distances between pairs of vertices in a graph. Computer graphics uses matrices to project 3-dimensional space onto a 2-dimensional screen.
An example application: encrypting a message
A message is converted into numeric form according to some scheme. The easiest scheme is to let space=0, A=1, B=2, …, Y=25, and Z=26. For example, the message “Red Rum” would become 18, 5, 4, 0, 18, 21, 13.
This data is placed into matrix form. The size of the matrix depends on the size of the encryption key. Let's say that our encryption matrix (encoding matrix) is a 2×2 matrix. Since there are seven pieces of data, we place them into a 4×2 matrix and fill the last spot with a space (0) to make the matrix complete. Let's call the original, unencrypted data matrix A.

There is an invertible matrix which is called the encryption matrix or the encoding matrix. We'll call it matrix B. Since this matrix needs to be invertible, it must be square.
Its entries could really be anything; the choice is up to the person doing the encrypting. I'll use this matrix.

The unencrypted data is then multiplied by our encoding matrix. The result of this multiplication is the matrix containing the encrypted data. We’ll call it matrix X.

The message that you would pass on to the other person is the stream of numbers 67, -21, 16, -8, 51, 27, 52, -26.
Decryption Process
1. Place the encrypted stream of numbers that represents the encrypted message into a matrix.
2. Multiply by the decoding matrix. The decoding matrix is the inverse of the encoding matrix.
3. Convert the resulting matrix back into a stream of numbers.
4. Convert the numbers into the text of the original message.
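The whole encode/decode round trip above can be sketched in a few lines. The 2×2 key is not printed in the text; the one below is reconstructed so that it reproduces the quoted encrypted stream 67, -21, 16, -8, 51, 27, 52, -26:

```python
# A sketch of the scheme above. The key is reconstructed from the quoted
# encrypted stream; any invertible 2x2 matrix would work the same way.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # space=0, A=1, ..., Z=26

def to_numbers(msg):
    return [ALPHABET.index(ch) for ch in msg.upper()]

def encode(nums, key):
    (a, b), (c, d) = key
    if len(nums) % 2:                       # pad with a space (0) to fill the matrix
        nums = nums + [0]
    out = []
    for x, y in zip(nums[::2], nums[1::2]): # each row (x, y) of the data matrix
        out += [x * a + y * c, x * b + y * d]
    return out

def decode(nums, key):
    (a, b), (c, d) = key
    det = a * d - b * c                     # the key must be invertible (det != 0)
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [round(v) for v in encode(nums, inv)]   # multiply by the inverse key

key = [[4, -2], [-1, 3]]
cipher = encode(to_numbers("Red Rum"), key)
print(cipher)                               # [67, -21, 16, -8, 51, 27, 52, -26]
print(decode(cipher, key))                  # [18, 5, 4, 0, 18, 21, 13, 0]
```

Note how decoding simply reuses the multiplication routine with the inverse of the key, exactly as in the decryption steps above.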
DETERMINANTS
The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses.
For a fixed nonnegative integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this unique function exists when R is the field of real or complex numbers.
For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix
Example. Evaluate
Let us transform this matrix into a triangular one through elementary operations. We will keep the first row and add to the second the first multiplied by a suitable factor. We get
Using Property 2, we get
Therefore, we have
which one may check easily.
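The triangularization procedure described above translates directly into code: add multiples of rows to create zeros below the diagonal, track row swaps, then multiply the diagonal entries.

```python
# Determinant by reduction to upper-triangular form, as in the example.
def det(m):
    m = [row[:] for row in m]               # work on a copy
    n = len(m)
    sign = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            return 0.0                      # an all-zero pivot column: singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                    # a row swap flips the sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    for i in range(n):
        sign *= m[i][i]                     # product of the diagonal entries
    return sign

print(det([[1, 2], [3, 4]]))                # -2.0 (up to rounding)
```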
EIGENVALUES AND EIGENVECTORS
In mathematics, eigenvalue, eigenvector, and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word “eigen” for “innate”, “idiosyncratic”, “own”. Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics.


In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude, and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector.
These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that Ax = λx.
The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x.
Eigenvalues and Eigenvectors: An Introduction
The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential matrix). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on “eigenvalues” and “eigenvectors”-their applications and their computations. Before we give the formal definition, let us introduce these concepts on an example.
Example.
Consider the matrix
Consider the three column matrices
We have
In other words, we have
Next consider the matrix P for which the columns are C1, C2, and C3, i.e.,
We have det(P) = 84. So this matrix is invertible. Easy calculations give
Next we evaluate the matrix P⁻¹AP. We leave the details to the reader to check that we have
In other words, we have
Using the matrix multiplication, we obtain
which implies that A is similar to a diagonal matrix. In particular, we have
for every positive integer n. Note that it is almost impossible to find A⁷⁵ directly from the original form of A.
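The matrices of the worked example are not reproduced in the text, so the following sketch repeats the same computation on an illustrative matrix A = [[2, 1], [1, 2]], whose eigenvalues 1 and 3 and eigenvectors (1, -1) and (1, 1) are easy to verify by hand:

```python
# Diagonalize A with the matrix P of eigenvectors, then use the diagonal
# form to compute a high power of A cheaply.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 1], [1, 2]]
P = [[1, 1], [-1, 1]]                       # columns are the eigenvectors
P_inv = [[0.5, -0.5], [0.5, 0.5]]
D = matmul(matmul(P_inv, A), P)             # P^-1 A P is diagonal
print(D)                                    # [[1.0, 0.0], [0.0, 3.0]]

# A^n = P D^n P^-1: only the diagonal entries need to be raised to the power.
n = 5
Dn = [[D[0][0] ** n, 0], [0, D[1][1] ** n]]
An = matmul(matmul(P, Dn), P_inv)
print(An)                                   # [[122.0, 121.0], [121.0, 122.0]]
```

This is exactly why diagonalization makes powers such as A⁷⁵ tractable: only two scalar powers are needed instead of 74 matrix multiplications.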
This example is so rich in consequences that many questions impose themselves in a natural way. For example, given a square matrix A, how do we find column matrices which behave like the ones above? In other words, how do we find the column matrices which will help build the invertible matrix P such that P⁻¹AP is a diagonal matrix?
From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.
Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that AC = λC.
If such a number λ exists, it is called an eigenvalue of A. The vector C is called an eigenvector associated with the eigenvalue λ.
Remark. The eigenvector C must be non-zero, since we have A0 = λ0 = 0 for any number λ, so the zero vector would satisfy the defining equation trivially.
Example. Consider the matrix
We have seen that
where
So C1 is an eigenvector of A associated with the eigenvalue 0, C2 is an eigenvector of A associated with the eigenvalue -4, while C3 is an eigenvector of A associated with the eigenvalue 3.
It may be interesting to know whether we found all the eigenvalues of A in the above example. Below we discuss this question as well as how to find the eigenvalues of a square matrix.
PROOFS OF PROPERTIES OF EIGENVALUES
PROPERTY 1
The inverse of a matrix A exists if and only if zero is not an eigenvalue of A.
Suppose A is a square matrix. Then A is singular if and only if λ = 0 is an eigenvalue of A.
Proof We have the following equivalences:
A is singular
⇔ there exists x ≠ 0 with Ax = 0
⇔ there exists x ≠ 0 with Ax = 0·x
⇔ λ = 0 is an eigenvalue of A
Since a singular matrix has 0 as an eigenvalue, and the inverse of a singular matrix does not exist, it follows that for a matrix to be invertible all of its eigenvalues must be non-zero.
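Property 1 is easy to check numerically on small hypothetical examples, since the eigenvalues of a 2×2 matrix are the roots of the characteristic polynomial λ² − tr(A)λ + det(A):

```python
# A singular 2x2 matrix has 0 among its eigenvalues; an invertible one does not.
import math

def eigenvalues_2x2(m):
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)     # assumes real eigenvalues
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

print(eigenvalues_2x2([[1, 2], [2, 4]]))    # [0.0, 5.0]: singular, 0 appears
print(eigenvalues_2x2([[2, 1], [1, 2]]))    # [1.0, 3.0]: invertible, no 0
```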
PROPERTY-2
Eigenvalues of a real matrix are real or occur in complex-conjugate pairs
Suppose A is a square matrix with real entries and x is an eigenvector of A for the eigenvalue λ. Then the entrywise conjugate $\bar{x}$ is an eigenvector of A for the eigenvalue $\bar{\lambda}$. □
Proof.
$A\bar{x} = \bar{A}\,\bar{x} = \overline{Ax} = \overline{\lambda x} = \bar{\lambda}\,\bar{x}$,
using that A has real entries (so $\bar{A} = A$) and that x is an eigenvector of A.
Lemma. Suppose A is an m×n matrix and B is an n×p matrix. Then $\overline{AB} = \bar{A}\,\bar{B}$. □
Proof. To obtain this matrix equality, we work entry by entry. For 1 ≤ i ≤ m and 1 ≤ j ≤ p,
$[\overline{AB}]_{ij} = \overline{[AB]_{ij}} = \overline{\textstyle\sum_{k=1}^{n} A_{ik}B_{kj}} = \textstyle\sum_{k=1}^{n} \overline{A_{ik}B_{kj}} = \textstyle\sum_{k=1}^{n} \overline{A_{ik}}\,\overline{B_{kj}} = [\bar{A}\,\bar{B}]_{ij}.$
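Property 2 can also be illustrated numerically on a hypothetical real matrix with no real eigenvalues; the rotation matrix below is an illustrative choice, not one from the text:

```python
# A real 2x2 matrix whose characteristic polynomial has complex roots:
# the two eigenvalues are conjugates of each other.
import cmath

A = [[0, -1], [1, 0]]                       # a 90-degree rotation matrix
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
print(lam1, lam2)                           # -1j 1j
print(lam1 == lam2.conjugate())             # True
```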
APPLICATION OF EIGENVALUES IN FACIAL RECOGNITION
How does it work?
The task of facial recognition is to discriminate input signals (image data) into several classes (persons). The input signals are highly noisy (e.g. the noise is caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns which occur in any input signal. Such patterns, which can be observed in all signals, could be, in the domain of facial recognition, the presence of certain objects (eyes, nose, mouth) in any face as well as the relative distances between these objects. These characteristic features are called eigenfaces in the facial recognition domain (or principal components generally). They can be extracted from the original image data by means of a mathematical tool called Principal Component Analysis (PCA).
By means of PCA one can transform each original image of the training set into a corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces. Remember that eigenfaces are nothing less than characteristic features of the faces. Therefore one could say that the original face image can be reconstructed from eigenfaces if one adds up all the eigenfaces (features) in the right proportion. Each eigenface represents only certain features of the face, which may or may not be present in the original image. If the feature is present in the original image to a higher degree, the share of the corresponding eigenface in the "sum" of the eigenfaces should be greater. If, on the contrary, the particular feature is not (or almost not) present in the original image, then the corresponding eigenface should contribute a smaller (or no) part to the sum of eigenfaces. So, in order to reconstruct the original image from the eigenfaces, one has to build a kind of weighted sum of all eigenfaces: the reconstructed original image is equal to a sum of all eigenfaces, with each eigenface having a certain weight. This weight specifies to what degree the specific feature (eigenface) is present in the original image.
If one uses all the eigenfaces extracted from original images, one can reconstruct the original images from the eigenfaces exactly. But one can also use only a part of the eigenfaces. Then the reconstructed image is an approximation of the original image. However, one can ensure that losses due to omitting some of the eigenfaces can be minimized. This happens by choosing only the most important features (eigenfaces). Omission of eigenfaces is necessary due to scarcity of computational resources.
How does this relate to facial recognition? The clue is that it is possible not only to reconstruct the face from the eigenfaces given a set of weights, but also to go the opposite way: to extract the weights from the eigenfaces and the face to be recognized. These weights tell nothing less than the amount by which the face in question differs from the "typical" faces represented by the eigenfaces. Therefore, using these weights one can determine two important things:
- Whether the image in question is a face at all. If the weights of the image differ too much from the weights of known face images (i.e. images we know for sure are faces), the image probably is not a face.
- Similar faces (images) possess similar features (eigenfaces) to similar degrees (weights). If one extracts weights from all the images available, the images can be grouped into clusters: all images having similar weights are likely to be similar faces.
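The weighted-sum idea can be sketched in just two dimensions, where each "image" is a pair of numbers, the principal component of the covariance matrix plays the role of an eigenface, and each point's projection onto it plays the role of a weight. The data set below is invented for illustration:

```python
# A two-dimensional toy version of the eigenface computation.
import math

points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
          (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
centered = [(x - mx, y - my) for x, y in points]

# 2x2 covariance matrix of the centered data
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

# Largest eigenvalue of [[cxx, cxy], [cxy, cyy]] by the quadratic formula,
# then a unit eigenvector for it: the first principal component.
lam = (cxx + cyy + math.sqrt((cxx - cyy) ** 2 + 4 * cxy * cxy)) / 2
vx, vy = cxy, lam - cxx                     # satisfies (C - lam*I) v = 0
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm

# Keeping only each point's weight reconstructs an approximation of it.
weights = [x * vx + y * vy for x, y in centered]
approx = [(mx + w * vx, my + w * vy) for w in weights]
print(vx, vy)                               # roughly 0.68 0.74
print(approx[0])                            # close to the first point (2.5, 2.4)
```

Real eigenfaces work the same way, except that each "point" is a whole image flattened into a long vector and several principal components (weights) are kept instead of one.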

## Examining Matrices Of Relation

The history of matrices goes back to ancient times, although the term "matrix" was not applied until 1850.
Matrix is the Latin word for womb, and it keeps that sense in English; it can also mean a place in which something is formed or produced.
The term was introduced by James Joseph Sylvester, who had a brief career at the University of Virginia that came to an abrupt end after an enraged Sylvester hit a newspaper-reading student with a sword stick and fled the country, believing he had killed the student!
An important Chinese text from between 300 BC and AD 200, Nine Chapters on the Mathematical Art (Chiu Chang Suan Shu), gives the use of matrix methods to solve simultaneous equations. This is the origin of matrices.
The concept of a determinant first appears in the treatise's seventh chapter, "Too much and not enough," nearly two millennia before the Japanese mathematician Seki Kowa in 1683, and his German contemporary Gottfried Leibniz (who is also credited with the invention of differential calculus, separately from but simultaneously with Isaac Newton), discovered it and used it widely.


Chapter eight, "Methods of rectangular arrays," describes the use of a counting board that is mathematically identical to the modern matrix method for solving simultaneous equations, now called Gaussian elimination after Carl Friedrich Gauss (1777-1855). Matrices were important in ancient China, and today they are used not only to solve simultaneous equations but also for designing computer-game graphics, describing the quantum mechanics of atomic structure, analysing relationships, and even plotting complicated dance steps!
Background of Matrices
Scientists are confronted with ever more and larger collections of numerical data, measurements of one form or another gathered from their labs. Once the data have been collected and recorded, they must be analysed and interpreted. Here matrix algebra is useful both in simplifying and promoting the development of many analysis methods and in organizing the computer techniques that execute those methods and present their results.
Definition
An m × n matrix is a rectangular array of numbers having m rows and n columns. The numbers comprising the array are called the elements of the matrix. The numbers m and n are called the dimensions of the matrix. The set of all m × n matrices is denoted by R^(m × n).
We shall ordinarily denote a matrix by an upper-case Latin or Greek letter; whenever possible, an element of a matrix will be denoted by the corresponding lower-case Greek letter with two subscripts, the first specifying the row that contains the element and the second the column.
Thus the 3 x 3 matrix has the form:
A3x3 =
(α11 α12 α13)
(α21 α22 α23)
(α31 α32 α33)
A matrix A with r rows and c columns has order r x c (read as "r by c"), written Ar x c.
A 4 x 3 matrix has the same form, with four rows and three columns.
In some applications, notably those involving partitioned matrices, considerable notational simplification can be achieved by permitting matrices with one or both of their dimensions zero. Such matrices are said to be void.
Row and column matrix
The n x 1 matrix A has the form
Such a matrix is called a column vector: it has a single column and looks exactly like a member of R^n. We shall not distinguish between n x 1 matrices and n-vectors; they will be denoted by upper- or lower-case Latin letters as convenience dictates.
Example: the 1 x n matrix R' has the form
R' = (ρ11, ρ12, …, ρ1n),
for instance R' = (5, 6, 7, …, n).
Such a matrix is called a row vector.
A well-organized notation is to denote matrices by upper-case letters and their elements by the lower-case counterparts with appropriate subscripts. Vectors are denoted by lower-case letters, often from the end of the alphabet, with a prime superscript distinguishing a row vector from a column vector. Thus A is a column vector and R' is a row vector, and λ is used for a scalar, where a scalar represents a single number such as 2 or -4.
Equal matrices
For two matrices to be equal, every single element in the first matrix must be equal to the corresponding element in the other matrix.
So these two matrices are equal:
But these two are not:
Of course this means that if two matrices are equal, then they must have the same numbers of rows and columns as each other. So a 3×3 matrix could never be equal to a 2×4 matrix, for instance.
Also remember that each element must be equal to that element in the other matrix, so it’s no good if all the values are there but in different places:
Combining the ideas of subtraction and equality leads to the definition of the zero matrix: when A = B, then aij = bij, and so
A − B = {aij − bij} = {0} = 0,
the matrix whose entries are all zero.
Square Matrix
A square matrix is a matrix which has the same number of rows and columns. An m x n matrix A is said to be a square matrix if m = n
Example: number of rows = number of columns.
In the sequel the dimensions and properties of a matrix will often be determined by context. As an example of this, the statement that A is of order n carries the implication that A is square, provided no ambiguity arises.
An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. A square matrix A is called invertible or non-singular if there exists a matrix B such that
AB = I
This is equivalent to BA = I. Moreover, if B exists, it is unique and is called the inverse matrix of A, denoted A⁻¹.
The entries Aii form the main diagonal of a matrix. The trace tr(A) of a square matrix A is the sum of its diagonal entries. While, as mentioned above, matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors:
tr(AB) = tr(BA).
Also, the trace of a matrix is equal to that of its transpose, i.e. tr(A) = tr(Aᵀ).
If all entries outside the main diagonal are zero, A is called a diagonal matrix. If all entries above (respectively below) the main diagonal are zero, A is called a lower triangular matrix (respectively an upper triangular matrix). For example, if n = 3, they look like
(Diagonal), (lower) and (upper triangular matrix).
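The trace identities above are easy to check numerically; the 2×2 matrices below are arbitrary examples:

```python
# tr(AB) = tr(BA) even though AB != BA for this pair of matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(matmul(A, B) == matmul(B, A))         # False: multiplication not commutative
print(tr(matmul(A, B)), tr(matmul(B, A)))   # 55 55
```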
Properties of Square Matrix

- Any two square matrices of the same order can be added.
- Any two square matrices of the same order can be multiplied.
- A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = In.
Examples for Square Matrix

For example: A = is a square matrix of order 3 × 3.
Relations of matrices
If R is a relation from X to Y and x1, . . . , xm is an ordering of the elements of X and y1, . . . , yn is an ordering of the elements of Y , the matrix A of R is obtained by defining Aij = 1 if xi R yj and 0 otherwise. Note that the matrix of R depends on the orderings of X and Y.
Example: The matrix of the relation
R = {(1, a), (3, c), (5, d), (1, b)}
From X = {1, 2, 3, 4, 5} to Y = {a, b, c, d, e} relative to the orderings 1, 2, 3, 4, 5 and a, b, c, d, e is
Example: We see from the matrix in the first example that the elements (1, a), (3, c), (5, d), (1, b) are in the relation because those entries in the matrix are 1. We also see that the domain is {1, 3, 5} because those rows contain at least one 1, and the range is {a, b, c, d} because those columns contain at least one 1.
Symmetric and anti-symmetric
Let R be a relation on a set X, let x1, . . . , xn be an ordering of X, and let A be the matrix of R where the ordering x1, . . . , xn is used for both the rows and columns. Then R is reflexive if and only if the main diagonal of A consists of all 1's (i.e., Aii = 1 for all i). R is symmetric if and only if A is symmetric (i.e., Aij = Aji for all i and j). R is anti-symmetric if and only if for all i ≠ j, Aij and Aji are not both equal to 1. R is transitive if and only if whenever (A²)ij is nonzero, Aij is also nonzero.
Example:
The matrix of the relation R = {(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3), (4, 3)} on {1, 2, 3, 4} relative to the ordering 1, 2, 3, 4 is A =
We see that R is not reflexive because A's main diagonal contains a 0. R is not symmetric because A is not symmetric; for example, A12 = 1, but A21 = 0. R is anti-symmetric because for all i ≠ j, Aij and Aji are not both equal to 1.
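The four tests above read off directly from the 0/1 matrix of a relation; this sketch re-checks the example relation R from the text:

```python
# Reflexive, symmetric, anti-symmetric and transitive tests on a 0/1 matrix.
def is_reflexive(A):
    return all(A[i][i] == 1 for i in range(len(A)))

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_antisymmetric(A):
    n = len(A)
    return all(not (A[i][j] and A[j][i])
               for i in range(n) for j in range(n) if i != j)

def is_transitive(A):
    n = len(A)
    A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return all(A[i][j] != 0 for i in range(n) for j in range(n)
               if A2[i][j] != 0)

# Matrix of R = {(1,1),(1,2),(1,3),(2,2),(2,3),(3,3),(4,3)} on {1,2,3,4}
A = [[1, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 1, 0]]
print(is_reflexive(A), is_symmetric(A), is_antisymmetric(A), is_transitive(A))
# False False True True, matching the discussion above
```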
Reflexive Matrices
In functional analysis, a reflexive operator is an operator that has enough invariant subspaces to characterize it; nest algebras, which in finite dimensions are the matrices of a given size whose nonzero entries lie in an upper-triangular pattern, are examples of reflexive algebras of matrices. For relations, the matrices that obey the reflexive rule are also called reflexive matrices: a relation is reflexive if and only if it contains (x, x) for all x in the base set.
This 2 × 2 matrix is NOT a reflexive matrix
The matrix of the relation which is reflexive is
R = {(a,a), (b,b), (c,c), (d,d), (b,c), (c,b)} on {a, b, c, d}, relative to the ordering a, b, c, d, is
Or
In general, a matrix is reflexive if and only if its relation contains (x, x) for all x in the base set, i.e. every diagonal entry is 1.
Transitive Matrices
When we talk about transitive matrices, we have to compare the matrix A with A². The relation is transitive exactly when every nonzero entry of A² corresponds to a nonzero entry of A: whenever (A²)ij is nonzero, Aij must be nonzero as well.
For example, consider a transitive matrix A:
Then A² is
Now we can check that wherever the entry (A²)ij is nonzero, the entry Aij is nonzero as well.
Another example:
Conclusion
In conclusion, the matrices discussed above are useful and powerful tools in mathematical analysis and in organizing data. Beyond solving simultaneous equations, matrices appear in programming, where the arrays we use to store data are themselves matrices. Matrices play a very important role in computer science and applied mathematics. If we can manage matrices well, much of computer science becomes easier, but matrices are not easy to master: these few pages of discussion cover only a minor part of the subject. Through this mini project we have learned more about matrices, and learning everything about how they are used in computer science would, I personally think, be difficult, as it can become very complicated.

## Application of Matrices in Real-Life

Matrices are used much more in daily life than most people would think. In fact they are in front of us every day: on the way to work, at the university and even at home.
Graphic software such as Adobe Photoshop on your personal computer uses matrices to process linear transformations to render images. A square matrix can represent a linear transformation of a geometric object.
For example, in the Cartesian X-Y plane, a matrix can reflect an object in the vertical Y axis. In a video game, this would render the mirror image of an assassin reflected in a pond of blood. If the video game has curved reflecting surfaces, such as a shiny metal shield, the matrix would be more complicated, stretching or shrinking the reflection.
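The reflection in the vertical Y axis is the standard matrix [[-1, 0], [0, 1]]: it negates the x-coordinate of every point and leaves y unchanged.

```python
# Apply a 2x2 transformation matrix M to a point p = (x, y).
def transform(M, p):
    (a, b), (c, d) = M
    x, y = p
    return (a * x + b * y, c * x + d * y)

reflect_y = [[-1, 0], [0, 1]]
print(transform(reflect_y, (3, 5)))         # (-3, 5): mirrored across the Y axis
```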
In physics-related applications, matrices are used in the study of electrical circuits, quantum mechanics and optics. Engineers use matrices to model physical systems and perform the accurate calculations needed for complex mechanics to work. Electronic networks, airplanes and spacecraft, and chemical engineering all require perfectly calibrated computations obtained from matrix transformations. In hospitals, medical imaging such as CAT scans and MRIs relies on matrices to operate.
In programming, which is taught at the university, matrices and inverse matrices are used for coding and encrypting messages. A message is represented as a sequence of numbers in binary format for communication, and coding theory is used to encode and decode it.
In robotics and automation, matrices are the basic components for robot movements. The inputs for controlling robots are obtained from calculations based on matrices, and these yield very accurate movements.
Many IT companies also use matrices as data structures to track user information, perform search queries, and manage databases. In the world of information security, many systems are designed to work with matrices. Matrices are used in the compression of electronic information, for example in the storage of biometric data in the new Identity Card in Mauritius.
In geology, matrices are used for conducting seismic surveys. They are used for plotting graphs and statistics and for scientific studies and research in many different fields. Matrices are also used to represent real-world data, such as the population of people or infant mortality rates, and they are among the best methods for representing survey results. In economics, very large matrices are used for optimization problems, for example making the best use of assets, whether labour or capital, in manufacturing a product and in managing very large supply chains.
Application of Statistics in real-life problems.
Statistics can be defined as a branch of mathematical analysis that involves collecting and analyzing data and then summarizing the data in numerical form for a given set of factual data or real-world observations.
In our daily life we collect information that helps us resolve questions about the world in which we live; that is statistics.
One main example is the weather forecast. The charts and information that you see on television are obtained using statistics that compare past weather conditions with current weather to predict future weather.
Whenever there is an election, such as the one coming in a few days in Mauritius, the press consults statistical surveys of the population when trying to predict the winner. Candidates use statistics to know, for example, that 20,000 of the voters will be between the ages of 18 and 22, meaning this will be their first election, and thus focus their campaign more on benefits for these young adults. Statistics play a part in what your elected government will consist of.


In industries and businesses it is crucial to be fast and accurate in decision making. They use statistics to know what customers want and therefore know what to produce and sell and in what quantities. Statistics helps to plan production according to the taste of the customers, the quality of the products or availability of materials. Good decisions can be made about the location of business, marketing of the products, financial resources etc…
Statistics are also used in agriculture to know what amount of crops is grown this year in comparison to previous years or what has been the demand for a certain crop during the past 5 years or quality and size of vegetables grown due to use of different fertilizers.
Last Friday was the results day for the CPE exams in Mauritius, and statistics were used to compare the pass rates of girls and boys and to see how the overall pass rate has evolved over the past years. These statistics help the government determine whether the education system in the country needs to be modified or completely re-implemented.
In medical studies, scientists must show a statistically valid rate of efficacy before any drug can be prescribed in hospitals and pharmacies. Statistics are behind every medical study you hear about. For example, in an ongoing case, the Ebola virus, statistics are used to determine the number of infected persons in different countries, and these data help warn neighbouring countries about the risks they are exposed to.
Application of Regression in real-life problems.
Correlation and regression are widely used methods to investigate the relationships between quantitative variables. A correlation looks at the validity of the relationship between variables, while regression helps to determine the nature of the relationship, or how it behaves. This allows predictions to be made. These methods are very useful, but easily misused.
Regressions can be used in business to evaluate trends and make estimates. For example, if a company's sales have increased steadily every month for the past years, performing a linear regression on the sales data, with monthly sales on the y-axis and time on the x-axis, would produce a line that illustrates the upward trend in sales. After obtaining the trend line, the company could use the slope of the line to forecast sales in future months.
A company can use linear regression to determine the best sale price for a certain product. This can be done by plotting a graph of price against quantity bought. The resulting line would show how customers reduce their consumption of the product as the price increases, which could help in setting the prices of future products.
Linear regression can also be used in assessing risk. For example, a health insurance company might plot the number of claims per customer against age and deduce from the graph that older customers tend to make more health insurance claims. The results of such an analysis might lead to important business decisions made to account for risk.
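A trend line like the sales example above comes from the closed-form least-squares formulas; the monthly figures below are invented for illustration:

```python
# Fit y = slope*x + intercept by least squares and extrapolate.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx           # (slope, intercept)

months = [1, 2, 3, 4, 5, 6]
sales = [10, 12, 15, 15, 18, 20]            # illustrative monthly sales
m, b = linear_fit(months, sales)
print(round(m, 3), round(b, 3))             # 1.943 8.2
print(round(m * 12 + b, 1))                 # 31.5: extrapolated month-12 sales
```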
Application of Correlation in real-life problems.
For example, a researcher might suggest that taller people have higher self-esteem. After analyzing his data and obtaining an r-value of .08, he abandons his hypothesis because the two variables do not appear to be strongly related at all.
Another area where correlation is used is in the study of intelligence where research has been carried out to test the strength of the relationship between the I.Q. levels of identical and non-identical twins.
In medical studies, correlation is used widely; one example is a study testing whether glucose level is related to a person's age.
Correlation is mostly used in research studies. In schools, for example, correlation could be used to study how a student with many absences shows a decrease in grades, or how completing more years of education raises earning potential.
In sports, correlation is used broadly by coaches to develop workout routines. Some common correlations are: the more time a person spends running on a treadmill, the more calories he will burn; and the more you exercise your core muscles, the more stable your body gets.
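All of these examples rest on Pearson's correlation coefficient r; the treadmill-style data below are invented for illustration:

```python
# Pearson's r: covariance divided by the product of standard deviations.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

minutes = [10, 20, 30, 40, 50]              # time on the treadmill
calories = [80, 165, 245, 310, 400]         # calories burned
print(round(pearson_r(minutes, calories), 3))   # 0.999: strong positive link
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0, as in the self-esteem example above, indicates essentially none.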