MAE 316 Spring 2021 Homework #1
1. Find governing equations and second-order accurate stencils for the following sets of fluxes
   and source terms:
   • q(x) = −κ(x + 2) dT/dx ;  f(x) = x²T
   • q(x) = −κ sin(2x) dT/dx ;  f(x) = ln x − 3
   • q(x) = −κ ln(2x) dT/dx ;  f(x) = 3T + 2x
2. Find the error (in terms of ∆x) in the center-difference approximation for u′(x), where
   u(x) = x⁴ − 3x + 3.
3. Use Gaussian elimination to determine whether or not the following matrix is singular:

       [  a      b      c ]
       [ 2a      d      0 ]
       [ 3a   2d − b   −c ]

   If the matrix is singular, find the nullspace.
4. (a) The lower-triangular matrix

           [  1    0   0 ]
       L = [ l21   1   0 ]
           [  0   l32  1 ]

   can be factored into two matrices, such that L = L1 L2. Find L1 and L2. Hint: Both L1
   and L2 are lower triangular, with ones on the diagonal and only a single nonzero off-diagonal
   entry.
   (b) Find the inverse L⁻¹ = L2⁻¹ L1⁻¹ by finding the inverses of L1 and L2. Hint: the inverses
   are also lower triangular with ones on the diagonal and only a single nonzero off-diagonal
   entry – use trial-and-error or Gaussian elimination column-wise on the problem L1 L1⁻¹ = I.
   (c) For the matrix

           [ a11  a12   0  ]
       A = [ a21  a22  a23 ]
           [  0   a32  a33 ]

   calculate L1⁻¹ A; then, assuming that l21 = a21/a11, calculate L2⁻¹ L1⁻¹ A.
5. (a) Show that A^k = S Λ^k S⁻¹ for the case where k is a positive integer.
   (b) For the matrices

            [ 0.6  0.4 ]              [ 0.6  0.9 ]
       A1 = [ 0.4  0.6 ]   and   A2 = [ 0.1  0.6 ]

   determine whether A^k → 0 as k → ∞.
6. A linear dynamic system is one that can be described by a set of linear, ordinary differential
   equations of the following type:

       du/dt = Au

   (a) Show that a solution to the equation above is u(t) = x e^(λt), where x and λ are an
   eigenvector and corresponding eigenvalue, respectively, of the matrix A.
   (b) Given

           [ 3  −4 ]
       A = [ 3   2 ]

   along with the initial condition

              [ 1 ]
       u(0) = [ 1 ]

   use MATLAB to find the complete solution

       u(t) = c1 x1 e^(λ1 t) + c2 x2 e^(λ2 t)

   including the constants c1 and c2.
   (c) Plot the solutions u1(t).
7. Use MATLAB to find the collapse mechanisms for the truss pictured. Assume both angles
   are 60°.
Gaussian elimination is the most common method of solving the system Ax = b. The
basic idea should be familiar to anyone who has solved a system of equations, like this one:

    3x + 2y = 9                                                    (1)
     x −  y = 3                                                    (2)

We can solve this by multiplying (2) by 2 and adding it to (1), thus eliminating the
second unknown:

    5x = 15 ⇒ x = 3                                                (3)

We then back-substitute the value for x into either equation:

    3 − y = 3 ⇒ y = 0                                              (4)
Gaussian elimination is simply a way to standardize this process. The idea is that we
eliminate – or render zero – all entries below the main diagonal, leaving us with an upper
triangular matrix. This structure ensures that the equation corresponding to the last row has
only one unknown. That equation is easily solved, and the result can be used in the next
equation up, which has two unknowns, and so on. The formal procedure for the elimination
step is the following (a short code sketch appears after the list):
• Working from the left, eliminate the sub-diagonal entries one column at a time.
• For each column, multiply the row containing the main diagonal entry (which is also
called the pivot) by a factor that will render it equal to the entry to be eliminated.
Label and retain that multiplier with the indices of the eliminated entry.
• Subtract the modified pivot row from the row containing the entry to be eliminated,
  replacing that entire row with the result.
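Collected into code, the procedure above is only a few lines. Here is a minimal MATLAB
sketch, assuming no row exchanges are ever needed; the function name forward_eliminate is
just a label for illustration:

    function [U, L] = forward_eliminate(K)
    % Minimal forward elimination without row exchanges (assumes nonzero pivots).
    % U is the resulting upper-triangular (row-echelon) matrix; L stores the
    % multipliers below a unit diagonal, so that L*U = K.
        N = size(K, 1);
        U = K;
        L = eye(N);
        for j = 1:N-1                            % work left to right, one column at a time
            for i = j+1:N                        % entries below the pivot U(j,j)
                L(i,j) = U(i,j) / U(j,j);        % the multiplier, retained as l_ij
                U(i,:) = U(i,:) - L(i,j)*U(j,:); % subtract the scaled pivot row
            end
        end
    end

The stored multipliers are exactly the ℓ entries that reappear in the LU decomposition
discussed below.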
As an example, take the linear system

    [  2  −1   0 ] [u1]   [f1]
    [ −1   2  −1 ] [u2] = [f2]                                     (5)
    [  0  −1   2 ] [u3]   [f3]

or Ku = f. The elimination procedure starts with the leftmost column; the first entry to
be eliminated is that at row 2, column 1. The multiplier ℓ21 = −1/2, or the entry K21
divided by the pivot. The modified first row becomes

    [ −1  1/2   0 ] [u1]   [ −f1/2 ]
    [ −1   2   −1 ] [u2] = [   f2  ]                               (6)
    [  0  −1    2 ] [u3]   [   f3  ]

Subtracting row 1 from row 2, and replacing row 2 with the result, yields

    [ 2  −1    0 ] [u1]   [     f1     ]
    [ 0  3/2  −1 ] [u2] = [ f2 + f1/2  ]                           (7)
    [ 0  −1    2 ] [u3]   [     f3     ]
Column 1 is done; moving to column 2, the entry to be eliminated is K32, and the multiplier
is ℓ32 = −2/3:

    [ 2  −1    0  ] [u1]   [       f1       ]
    [ 0  −1   2/3 ] [u2] = [ −2f2/3 − f1/3  ]                      (8)
    [ 0  −1    2  ] [u3]   [       f3       ]

yielding after subtraction

    [ 2  −1    0  ] [u1]   [        f1         ]
    [ 0  3/2  −1  ] [u2] = [     f2 + f1/2     ]                   (9)
    [ 0   0   4/3 ] [u3]   [ f3 + 2f2/3 + f1/3 ]
The upper-triangular matrix appearing in (9) is called the row-echelon form of the
matrix and is ready for back-substitution. Choosing a value of the right-hand side vector

    f = [ 4  0  0 ]ᵀ                                               (10)

yields the row-echelon system

    [ 2  −1    0  ] [u1]   [  4  ]
    [ 0  3/2  −1  ] [u2] = [  2  ]                                 (11)
    [ 0   0   4/3 ] [u3]   [ 4/3 ]
The last equation yields u3 = 1. Substituting that result into the second row yields

    3u2/2 − 1 = 2 ⇒ u2 = 2                                         (12)

then taking that result and substituting above in row 1:

    2u1 − 2 = 4 ⇒ u1 = 3                                           (13)

That is,

    [u1]   [3]
    [u2] = [2]                                                     (14)
    [u3]   [1]
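As a quick check, MATLAB's backslash operator carries out elimination and
back-substitution in one step:

    K = [2 -1 0; -1 2 -1; 0 -1 2];
    f = [4; 0; 0];
    u = K \ f          % returns [3; 2; 1], matching (14)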
Matrix decomposition is a factoring of a matrix into two or more matrices. Gaussian
elimination creates a matrix decomposition called an LU decomposition. The U in LU
is the upper-triangular row-echelon matrix of (11). The L is a lower-triangular matrix
with all ones on the main diagonal and the multipliers ℓ generated during elimination:

        [  1    0   0 ]   [   1     0   0 ]
    L = [ ℓ21   1   0 ] = [ −1/2    1   0 ]                        (15)
        [  0   ℓ32  1 ]   [   0   −2/3  1 ]
Carrying out a matrix multiplication we find

         [   1     0   0 ] [ 2  −1    0  ]   [  2  −1   0 ]
    LU = [ −1/2    1   0 ] [ 0  3/2  −1  ] = [ −1   2  −1 ] = K    (16)
         [   0   −2/3  1 ] [ 0   0   4/3 ]   [  0  −1   2 ]
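MATLAB's built-in lu function produces the same factors. In general it also returns a
row-permutation matrix P; for this K no row exchanges are needed, so P is the identity:

    K = [2 -1 0; -1 2 -1; 0 -1 2];
    [L, U, P] = lu(K);   % P*K = L*U
    L                    % [1 0 0; -1/2 1 0; 0 -2/3 1], the multipliers of (15)
    U                    % [2 -1 0; 0 3/2 -1; 0 0 4/3], the row-echelon form of (11)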
For a symmetric matrix like K, it's possible to further factorize the row-echelon form as

                [ 2   0    0  ] [ 1  −1/2    0  ]
    U = DLᵀ =   [ 0  3/2   0  ] [ 0    1   −2/3 ]                  (17)
                [ 0   0   4/3 ] [ 0    0     1  ]

where D is a diagonal matrix containing the pivots. This decomposition can also be written

    K = AAᵀ                                                        (18)

where A = L√D. Equation (18) is called a Cholesky decomposition.
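MATLAB computes the factor A = L√D directly with chol; the 'lower' option requests the
lower-triangular factor:

    K = [2 -1 0; -1 2 -1; 0 -1 2];
    A = chol(K, 'lower');   % lower-triangular factor: L times the square roots of the pivots
    A * A'                  % reproduces K, i.e. equation (18)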
What's the point of all this decomposing? For an answer to that question, let's count
operations needed to perform elimination on a generic N × N matrix:

        [ k11   k12   . . .   k1N ]
        [ k21   k22   . . .   k2N ]
    K = [  .     .      .      .  ]                                (19)
        [  .     .      .      .  ]
        [ kN1   kN2   . . .   kNN ]
To eliminate the first column, we have to find the multiplier (one operation, or flop for the
computer) and multiply it by all N members of the first row. Then we have to subtract it
from the target row, which is another N operations, for a total of 2N. Since N is a much
bigger number than 2 (usually something like 10⁶ for the biggest engineering problems), we
can say that this is on the order of N operations, to eliminate one entry in the first column.
This has to be repeated N − 1 times, yielding something on the order of N² operations to
eliminate a single column. Do this for N columns and the total is O(N³) for the elimination
process.
Now let's count the operations required for back-substitution:

        [ p11   p12   . . .   p1N ]
        [  0    p22   . . .   p2N ]
    U = [  .     .      .      .  ]                                (20)
        [  .     .      .      .  ]
        [  0     0    . . .   pNN ]

(Here we're using the notation p for the elements of the row-echelon matrix U in order to
avoid confusion with the elements of the unknown vector u.) To solve the equation
represented by the last row of the matrix requires only one flop, but up toward the middle
we'll be substituting in something on the order of N previously solved-for variables. In other
words, solving each equation in back-substitution is an order-N operation that we do on the
order of N times, yielding O(N²) for the process.
Back-substitution is thus an entire factor of N less time-consuming than elimination.
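The gap is easy to see empirically. In the rough MATLAB sketch below, the matrix size and
the random test matrix are arbitrary choices; the point is simply that the second timing comes
out far smaller than the first:

    N = 2000;
    K = randn(N) + N*eye(N);                 % a well-conditioned test matrix
    f = randn(N, 1);
    tic; [L, U, P] = lu(K);    tElim = toc;  % elimination: O(N^3) work
    tic; u = U \ (L \ (P*f));  tSub  = toc;  % two triangular solves: O(N^2) work
    fprintf('elimination %.3f s, substitution %.4f s\n', tElim, tSub);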
Now consider the set of linear problems

    K ui = fi                                                      (21)

which requires a solution for a series of right-hand side vectors but retains the same matrix K.
This is a common situation encountered for time-dependent problems, for example. Instead
of performing elimination for each different right-hand side vector fi, we can perform an LU
decomposition at the outset:

    LU ui = fi                                                     (22)

Now we make the substitution

    U ui = ci ⇒ L ci = fi                                          (23)

The system above on the right can be solved by substitution from the top down: L is
lower-triangular, which means its topmost row is an equation with one unknown, the second
row has two, and so on. Once that system has been solved for ci, the result can be used to
solve the system on the left for ui, using back-substitution. The expensive elimination step
need only be performed once.
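A minimal MATLAB sketch of this idea, with a few made-up right-hand sides for
illustration (lu may also return a row-permutation matrix P, which is the identity for this K):

    K = [2 -1 0; -1 2 -1; 0 -1 2];
    F = [4 1 0; 0 2 0; 0 3 1];      % each column is a different right-hand side f_i
    [L, U, P] = lu(K);              % the expensive factorization, done once
    for i = 1:size(F, 2)
        c = L \ (P * F(:, i));      % solve L*c_i = P*f_i, working from the top down
        u = U \ c;                  % back-substitution: U*u_i = c_i
        disp(u')
    end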
Singular matrices are those whose associated linear systems are lacking information.
That deficiency comes out in the elimination process. Take the matrix

        [  2  −1  −1 ]
    C = [ −1   2  −1 ]                                             (24)
        [ −1  −1   2 ]

Eliminating the element at row 2, col 1 yields

    [  2   −1    −1  ]
    [  0   3/2  −3/2 ]                                             (25)
    [ −1   −1     2  ]
Getting rid now of row 3, col 1:

    [ 2   −1    −1  ]
    [ 0   3/2  −3/2 ]                                              (26)
    [ 0  −3/2   3/2 ]

Now moving on to row 3, col 2:

    [ 2   −1    −1  ]
    [ 0   3/2  −3/2 ]                                              (27)
    [ 0    0     0  ]
we find that we’re stuck. We can’t do back-substitution with just any right-hand side,
because the coefficient of the third unknown (the third pivot) is zero. What this means is
that our system either has no solution – if the right-hand side vector has something other
than a zero in position 3 – or an infinite number of them, if that position holds a zero as
well. This matrix is called singular.
It has a few other names. It is not invertible, because it doesn't have an inverse: if C⁻¹
existed, then we could uniquely compute u = C⁻¹f, and as we have seen, there is no such
unique solution for the system Cu = f. It's also called rank-deficient: a linear
system with as many independent equations as unknowns is said to be full rank, and the
matrix C does not have as many independent equations as unknowns. You can see this if
you add the first row to the second and multiply by −1: the result is the third row. The rows
of the matrix – and the corresponding equations – are not linearly independent.
Linear independence of a set of vectors means that no linear combination (multiplying
the vectors by any constant and adding them together) of any subset of the vectors can
produce any of the other vectors in the set. That is, if we have a set of vectors

    { xi , ∀ i ∈ {1 . . . n} }                                     (28)

then any linear combination of all of the vectors aside from one we'll label xj will never be
able to reproduce xj:

    Σ_{i≠j} ai xi ≠ xj                                             (29)

where the constants {ai} are arbitrary. A familiar example of a linearly independent set is
the set of vectors

    [ 1 ]   [ 0 ]   [ 0 ]
    [ 0 ] , [ 1 ] , [ 0 ]                                          (30)
    [ 0 ]   [ 0 ]   [ 1 ]
These vectors form a basis for the vector space that defines a three-dimensional Cartesian
coordinate system: any 3D vector can be expressed as a linear combination of those three
vectors. The rows (or columns) of the matrix C, by contrast, are not linearly independent, as
noted above, since

      [  2 ]   [ −1 ]   [ −1 ]
    − [ −1 ] − [  2 ] = [ −1 ]                                     (31)
      [ −1 ]   [ −1 ]   [  2 ]
Nullspaces are features of linear systems like those represented by C, where there are not
as many equations with unique information as there are unknowns. A nullspace is a vector
space composed of vectors x that are solutions to the equation

         [ 0 ]
    Ax = [ 0 ]                                                     (32)
         [ 0 ]

and furthermore are not simply vectors full of zeros. If the matrix A is singular, there will
be at least one non-trivial (in other words, non-zero) vector that solves the equation. In the
above example:

    [ 2  −1    −1  ] [x1]   [ 0 ]
    [ 0  3/2  −3/2 ] [x2] = [ 0 ]                                  (33)
    [ 0   0     0  ] [x3]   [ 0 ]

We can get to a solution by simply choosing any value we like for x3, then continuing with
back-substitution as usual. Settling on x3 = 1 yields

        [ 1 ]
    x = [ 1 ]                                                      (34)
        [ 1 ]
However, we could have chosen any value for x3 – if we choose a constant a then the solution
is

        [ a ]     [ 1 ]
    x = [ a ] = a [ 1 ]                                            (35)
        [ a ]     [ 1 ]

Both vectors qualify as nullspace vectors of the matrix C.
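MATLAB's null function returns an orthonormal basis for the nullspace; for C that is a
single column proportional to the vector above:

    C = [2 -1 -1; -1 2 -1; -1 -1 2];
    null(C)     % one column, a unit-length multiple of [1; 1; 1]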
Consider now the following matrix

        [ 1  1  1 ]
    D = [ 1  1  1 ]                                                (36)
        [ 1  1  1 ]

which is clearly singular; its row-echelon form is

    [ 1  1  1 ]
    [ 0  0  0 ]                                                    (37)
    [ 0  0  0 ]
There is only one independent equation in this system: two of the pivots are zero. This
matrix will have two nullspace vectors (one for every zero pivot). To find them we choose
two linearly independent pairs (x2, x3) and then solve for x1 for each case. An easy choice is

    [ x2 ]   { [ 1 ]   [ 0 ] }
    [ x3 ] = { [ 0 ] , [ 1 ] }                                     (38)

which yields the nullspace

    { [ −1 ]   [ −1 ] }
    { [  1 ] , [  0 ] }                                            (39)
    { [  0 ]   [  1 ] }
Either of these vectors solves the equation

         [ 0 ]
    Dx = [ 0 ]                                                     (40)
         [ 0 ]

and any linear combination of them does, too – we can see that if we call the first vector in
the space x1 and the second x2:

                                            [ 0 ]      [ 0 ]   [ 0 ]
    D(a1 x1 + a2 x2) = a1 Dx1 + a2 Dx2 = a1 [ 0 ] + a2 [ 0 ] = [ 0 ]    (41)
                                            [ 0 ]      [ 0 ]   [ 0 ]
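Again, null confirms this numerically; it returns an orthonormal pair of vectors spanning
the same two-dimensional nullspace rather than the particular pair chosen in (39):

    D = ones(3);
    Z = null(D);   % two orthonormal columns spanning the nullspace of D
    D * Z          % numerically zero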
Eigenvalue problems first arose in mathematics in the study of differential equations. A
good example of an eigenvalue problem is the wave equation:

    ∂²u/∂t² = c² ∂²u/∂x²                                           (1)

where u is the displacement of a vibrating string, for example. To solve this we can use
separation of variables:

    u(x, t) = F(x)G(t)                                             (2)

    ⇒ F(x)G̈(t) = c² F″(x)G(t)                                      (3)

    G̈(t)/G(t) = c² F″(x)/F(x) = k                                  (4)

where k is a constant. This leads to a set of two differential equations:

    G̈(t) = k G(t)                                                  (5)

    F″(x) = (k/c²) F(x)                                            (6)
In each of these equations, applying an operator (a second derivative) to a function results
in an expression equal to the original function times a constant. Functions that satisfy this
expression are called eigenfunctions of the operator, and the corresponding constants are
called eigenvalues. In the case of the wave equation, the eigenfunctions are sinusoids of the
type

    G(t) = sin nt                                                  (7)

where n is an integer. The complete set of eigenfunctions and eigenvalues for an operator (or
eigenvectors and eigenvalues of a matrix) is called the spectrum because of this application
to the wave equation.
If we were to discretize the differential equations (using finite differences, for instance),
the set of continuous differential equations would be replaced by a set of algebraic equations,
taking the form

    Ku = λu                                                        (8)

where u is an eigenvector of the matrix K, and λ is its corresponding eigenvalue. Equation
(8) is the definition of an eigenvector and eigenvalue. There will be as many pairs of
eigenvectors and eigenvalues as there are dimensions of the square matrix K.
There are many methods of finding the eigenvectors and eigenvalues of a matrix; the
method we will use involves the characteristic equation. This arises from manipulating (8):

    Ku − λu = 0                                                    (9)

    ⇒ (K − λI)u = 0                                                (10)

What (10) implies is that the matrix K − λI is a singular matrix, and the vector u is its
nullspace vector.
Singular matrices have determinants that are equal to zero. We can use this fact to
develop a polynomial equation in λ, the roots of which are the eigenvalues of K. Take for
example the matrix

    K = [  2  −1 ]                                                 (11)
        [ −1   2 ]

such that

    K − λI = [ 2 − λ    −1   ]                                     (12)
             [  −1    2 − λ  ]

Taking the determinant and setting it equal to zero yields

    (2 − λ)² − 1 = λ² − 4λ + 3 = (λ − 3)(λ − 1) = 0                (13)

    ⇒ λ = {3, 1}                                                   (14)
We find the eigenvector for each of the eigenvalues by substituting them into the matrix (12)
and finding the nullspace vector of the resulting singular matrix:

    λ1 = 3 ⇒ [ −1  −1 ] u1 = [ 0 ]  ⇒  u1 = [  1 ]                 (15)
             [ −1  −1 ]      [ 0 ]          [ −1 ]

    λ2 = 1 ⇒ [  1  −1 ] u2 = [ 0 ]  ⇒  u2 = [ 1 ]                  (16)
             [ −1   1 ]      [ 0 ]          [ 1 ]

Because any multiple of the vectors written above is also a nullspace vector, the same holds
true of eigenvectors: any multiple of either u1 or u2 is also an eigenvector of K.
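The built-in eig function reproduces this result; the ordering, sign, and scaling of the
eigenvectors it returns may differ from the hand calculation:

    K = [2 -1; -1 2];
    [S, Lambda] = eig(K)
    % diag(Lambda) holds the eigenvalues 1 and 3; the columns of S are
    % unit-length multiples of [1; 1] and [1; -1].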
If we follow the same procedure for the circulant matrix

    [  2  −1  −1 ]
    [ −1   2  −1 ]                                                 (17)
    [ −1  −1   2 ]

the characteristic equation for which is

    (2 − λ)³ − 3(2 − λ) − 2 = −(λ³ − 6λ² + 9λ) = −λ(λ − 3)² = 0    (18)

    ⇒ λ = {0, 3, 3}                                                (19)

If you recall, the circulant matrix is singular already, with one zero pivot. As such, one of
the eigenvalues is also zero, and the corresponding eigenvector is the nullspace vector of the
original matrix:

         [ 1 ]
    u1 = [ 1 ]                                                     (20)
         [ 1 ]
The other two eigenvalues are repeats: this is called degeneracy (no idea why it's called that),
and it does not mean that the corresponding eigenvectors are equivalent. If we subtract 3
from each entry on the main diagonal we get

    [ −1  −1  −1 ]
    [ −1  −1  −1 ]                                                 (21)
    [ −1  −1  −1 ]

which, as we saw before, is a singular matrix with two linearly independent nullspace vectors:

         [  1 ]        [  0 ]
    u2 = [  0 ] ; u3 = [  1 ]                                      (22)
         [ −1 ]        [ −1 ]
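Checking the circulant matrix the same way, eig reports the repeated eigenvalue, and the
two eigenvectors it pairs with λ = 3 span the same plane as u2 and u3 even though they are
not the same two vectors:

    C = [2 -1 -1; -1 2 -1; -1 -1 2];
    [S, Lambda] = eig(C)
    % diag(Lambda) is approximately [0 3 3]; the column paired with the zero
    % eigenvalue is a multiple of [1; 1; 1], and the other two columns form an
    % orthonormal basis for the plane spanned by u2 and u3.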
Eigendecomposition is the decomposition of a matrix into its component parts, which
consist of its eigenvectors and eigenvalues. Consider first the matrix

        [ |    |    |          |  ]
    S = [ u1   u2   u3  · · ·  uN ]                                (23)
        [ |    |    |          |  ]

The matrix S is an eigenvector matrix, the columns of which are the eigenvectors of some
matrix K. If we left-multiply S by K, we get

         [  |     |     |            |  ]   [  |      |      |             |    ]
    KS = [ Ku1   Ku2   Ku3  · · ·   KuN ] = [ λ1u1   λ2u2   λ3u3  · · ·  λN uN  ] = SΛ    (24)
         [  |     |     |            |  ]   [  |      |      |             |    ]
where Λ is a diagonal matrix containing the eigenvalues on the diagonal. Re-arranging (24)
we get

    K = SΛS⁻¹                                                      (25)

which is the eigendecomposition of K. Alternatively,

    S⁻¹KS = Λ                                                      (26)

which is called the diagonalization of K.
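In MATLAB the decomposition and the diagonalization can be verified in two lines, using
the 2 × 2 matrix from above:

    K = [2 -1; -1 2];
    [S, Lambda] = eig(K);
    S * Lambda / S     % reconstructs K:  S*Lambda*inv(S), equation (25)
    S \ K * S          % recovers Lambda: inv(S)*K*S, equation (26)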
Maybe this is nifty, but what's the point? Diagonalization of a matrix could make solving
the system easy, but in order to do it we have to (1) produce the eigenvectors and eigenvalues
and (2) invert the matrix S. While there's no getting around step 1, step 2 becomes easy –
trivial, even – when K is symmetric, as it often is for engineering problems. This is because
the eigenvectors of a symmetric matrix are orthonormal – that is,

    uiᵀ uj = { 0,  i ≠ j                                           (27)
             { 1,  i = j

This is what it means for two vectors to be orthogonal: their dot product is zero. In an
orthonormal set, all vectors in the set are orthogonal to all other vectors, and the vectors are
normalized such that the length of each vector is 1.
The proof of the orthonormality of the eigenvectors of a symmetric matrix is straightforward.
Consider two arbitrary eigenvectors of a symmetric matrix A:

    A ui = λi ui                                                   (28)

    A uj = λj uj                                                   (29)

Now left-multiply each equation by the transpose of the vector appearing in the other
equation:

    ujᵀ A ui = λi ujᵀ ui                                           (30)

    uiᵀ A uj = λj uiᵀ uj                                           (31)

Now transpose the bottom equation on both sides:

    ujᵀ A ui = λi ujᵀ ui                                           (32)

    ujᵀ Aᵀ ui = λj ujᵀ ui                                          (33)

which because of the symmetry of A is

    ujᵀ A ui = λi ujᵀ ui                                           (34)

    ujᵀ A ui = λj ujᵀ ui                                           (35)

Now subtract the bottom equation from the top:

    0 = (λi − λj) ujᵀ ui                                           (36)

which does it: if i ≠ j (and the eigenvalues are distinct), then λi − λj ≠ 0 and so ujᵀ ui = 0.
If i = j, then the eigenvalues are the same and the dot product will be something other than
zero – we make sure it is 1 by normalizing the eigenvectors, which we are free to do.
This has remarkable implications for the eigenvector matrix S of a symmetric matrix:

          [ −  u1  − ]
          [ −  u2  − ] [ |    |    |          |  ]
    SᵀS = [ −  u3  − ] [ u1   u2   u3  · · ·  uN ]                 (37)
          [     ⋮     ] [ |    |    |          |  ]
          [ −  uN  − ]

It's clear from the above that every element of the product matrix is a dot product between
eigenvectors – as such they will all be zero except when the two eigenvectors are the same,
in which case they will be 1. The product matrix is thus the identity matrix, which means
that the transpose of S is its inverse.
This means finding S⁻¹ becomes easy. This is such a big deal that the eigenvector matrix
for a symmetric matrix is usually called something else – Q:

    A = QΛQᵀ                                                       (38)
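A quick numerical illustration, using an arbitrary symmetric matrix:

    A = [2 -1 0; -1 2 -1; 0 -1 2];   % a symmetric matrix
    [Q, Lambda] = eig(A);
    Q' * Q              % the identity matrix: the eigenvectors are orthonormal
    Q * Lambda * Q'     % reproduces A, as in (38)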
Linear dynamic systems are important for control applications, such as those in air and
spacecraft navigation and robotics. A typical notation for a dynamic system is

    u̇(t) = Ku(t)                                                   (39)

where u(t) is a vector of time-dependent states and K is a matrix of coefficients of those
states in the system. Given a vector of initial conditions u(0), the system can be solved
using an eigenvector approach. Consider the solution

    u(t) = x e^(λt)                                                (40)

Substituting (40) into (39) gives

    λ x e^(λt) = K x e^(λt)                                        (41)

    ⇒ Kx = λx                                                      (42)

meaning that the solutions to the system can be found by finding all eigenvalue / eigenvector
pairs of the matrix K. The general solution will be the sum of all such pairs, multiplied by
constants:

            N
    u(t) =  Σ  ci xi e^(λi t)                                      (43)
           i=1

The constants can be found by bringing in the initial conditions:

            N
    u(0) =  Σ  ci xi = Sc                                          (44)
           i=1

where S is the eigenvector matrix for K and c is a vector of coefficients.
Consider for example the system

    K = [  2  −1 ]                                                 (45)
        [ −1   2 ]

along with the initial conditions

    u(0) = [ 7 ]                                                   (46)
           [ 3 ]

The eigenvalues and eigenvectors of K were found above. Plugging them into the general
solution we get

    u(t) = c1 [  1 ] e^(3t) + c2 [ 1 ] e^(t)                       (47)
              [ −1 ]             [ 1 ]

Bringing in the initial conditions we get

    [ 7 ] = c1 [  1 ] + c2 [ 1 ] = [  1  1 ] [ c1 ]                (48)
    [ 3 ]      [ −1 ]      [ 1 ]   [ −1  1 ] [ c2 ]

Solving the system yields

    c = [ 2 ]                                                      (49)
        [ 5 ]
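The same example can be reproduced in MATLAB. Because eig normalizes its eigenvectors,
the constants it produces differ from (49) by scale factors, but the resulting u(t) is the same;
the evaluation time t below is an arbitrary choice:

    K  = [2 -1; -1 2];
    u0 = [7; 3];
    [S, Lambda] = eig(K);             % eigenvectors in S, eigenvalues on diag(Lambda)
    c = S \ u0;                       % solve S*c = u(0) for the constants
    t = 1.5;                          % evaluate the solution at an arbitrary time
    u_t = S * (c .* exp(diag(Lambda) * t))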