PowerPoint14_Eigen


GG313 Lecture 15
Eigenvectors and Eigenvalues
10/11/05

Before continuing: we covered the dot product of two
vectors as a special case of matrix multiplication. What's
the cross product?
If you recall your physics, the cross product of two
vectors is a vector in the direction perpendicular to the
plane containing the two vectors being multiplied. The
magnitude of the result is the product of the input vector
lengths and the sine of the angle between them. Classically:
C = A × B,   ||C|| = ||A|| ||B|| sin(θ)
where θ is the angle between A and B, and ||A|| = Sqrt(Ax² + Ay² + Az²). In matrix notation:
                | Ax  Ay  Az |
C = A × B = det | Bx  By  Bz |
                | jx  jy  jz |
where ji is the unit vector in the i direction. In
electronics, the magnetic field, the electric current,
and the force on a wire are related by the equation F = I B sin(θ),
or in vector form F = I × B, where F is the force on the wire
(per unit length), I is the electric current, and B is the magnetic field. In matrix form:
              | Ix  Iy  Iz |
F = I × B = det | Bx  By  Bz |
              | jx  jy  jz |
This equation describes the operation of electric motors,
electric generators, mass spectrometers, and the earth’s
magnetic field.
If the current in a wire is described by the vector I = [2 0 0]
and the magnetic field by B = [0 4 0], what is the strength
and direction of the force exerted on the wire?
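One way to check your answer in MATLAB (cross is the built-in vector cross product; the vectors are the ones from the exercise):

% force (per unit length) on the wire: F = I x B
I = [2 0 0];       % current vector
B = [0 4 0];       % magnetic field vector
F = cross(I, B)    % = [0 0 8]: the force points in the +z direction
norm(F)            % its strength, = 8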
Eigenvalues and eigenvectors have their origins in
physics, in particular in problems where motion is involved,
although their uses extend from solutions to stress and
strain problems to differential equations and quantum
mechanics.
Recall from last class that we used matrices to deform a
body - the concept of STRAIN. Eigenvectors are vectors
that point in directions where there is no rotation.
Eigenvalues give the factor by which an eigenvector is stretched
or shrunk relative to its original length. **** SHOW shearstraineigen.m
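shearstraineigen.m itself is not reproduced in this transcript; the following is a minimal sketch of the same idea, using the strain matrix from the exercise later in the lecture:

% apply a strain matrix to unit vectors in many directions
A = [1.05 .05; .05 1];            % strain matrix from the exercise below
theta = 0:pi/12:2*pi;
V0 = [cos(theta); sin(theta)];    % original unit vectors
V1 = A*V0;                        % deformed vectors (rotated AND stretched, in general)
plot(V0(1,:), V0(2,:), 'o', V1(1,:), V1(2,:), 'x'), axis equal
[vec, val] = eig(A);              % eigenvector directions are not rotated:
A*vec(:,1) - val(1,1)*vec(:,1)    % ~[0; 0]
A*vec(:,2) - val(2,2)*vec(:,2)    % ~[0; 0]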
The basic equation in eigenvalue problems is:
Ax = λx        (E.01)
In words, this deceptively simple equation says that for the
square matrix A there is a vector x such that multiplying x by A
gives the same result as multiplying x by a SCALAR, λ. Multiplying
the vector x by a scalar constant is the same as stretching or
shrinking its coordinates by a constant factor.
In Ax = λx, the vector x is called an eigenvector and the scalar λ is
called an eigenvalue.
Do all matrices have real eigenvalues?
No, the matrix must be square and the determinant of A - λI
must equal zero. This is easy to show:
Ax - λx = 0, so (A - λI)x = 0        (E.02)
For a nonzero x, this can only be true if
det(A - λI) = |A - λI| = 0        (E.03)
Are eigenvectors unique?

No: if x is an eigenvector, then μx is also an eigenvector
(for any nonzero scalar μ) and λ is still its eigenvalue:
A(μx) = μAx = μλx = λ(μx)        (E.04)
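A quick MATLAB check of E.04, using the A from the example further below (eig scales its eigenvectors to unit length, but any nonzero multiple works just as well):

A = [1 2; 2 4];
[V, D] = eig(A);
x = V(:,2);   lambda = D(2,2);    % one eigenvector/eigenvalue pair
mu = 3;                           % any nonzero scalar
A*(mu*x) - lambda*(mu*x)          % ~[0; 0]: mu*x is still an eigenvector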
How do you calculate eigenvectors and eigenvalues?
Expand equation (E.03), det(A - λI) = |A - λI| = 0, for a 2x2
matrix:

A - λI = [a11 a12 ; a21 a22] - λ[1 0 ; 0 1] = [a11-λ  a12 ; a21  a22-λ]

det(A - λI) = (a11 - λ)(a22 - λ) - a12a21 = 0

0 = a11a22 - a12a21 - (a11 + a22)λ + λ²        (E.05)
For a 2-dimensional problem such as this, the equation
above is a simple quadratic equation with two solutions for
λ. In fact, there is generally one eigenvalue for each
dimension, but some may be zero, and some complex.
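As an aside on the complex case: a pure 2-D rotation leaves no direction unrotated, so its eigenvalues come out as a complex-conjugate pair (this rotation matrix is an illustration, not from the slides):

R = [0 -1; 1 0];    % rotation by 90 degrees
eig(R)              % returns 0+1i and 0-1i: complex eigenvalues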

The solution to E.05 is:

0 = a11a22 - a12a21 - (a11 + a22)λ + λ²        (E.06)

λ = [(a11 + a22) ± Sqrt((a11 + a22)² - 4(a11a22 - a12a21))] / 2        (E.07)
This “characteristic equation” does not involve x, and the
resulting values of λ can be used to solve for x.
Consider the following example:
1 2
A  

2 4
Eqn. E.07 isn't needed here because a11a22 - a12a21 = 1·4 - 2·2 = 0,
so we use E.06:

0 = a11a22 - a12a21 - (a11 + a22)λ + λ²
0 = 1·4 - 2·2 - (1 + 4)λ + λ²
(1 + 4)λ = λ²
We see that one solution to this equation is λ = 0, and
dividing both sides of the above equation by λ yields λ = 5.
Thus we have our two eigenvalues, and the eigenvector
for the first eigenvalue, λ = 0, comes from:
Ax = λx,   (A - λI)x = 0

([1 2 ; 2 4] - 0·[1 0 ; 0 1]) [x ; y] = [1 2 ; 2 4][x ; y] = 0,  so  1x + 2y = 0 and 2x + 4y = 0

These equations are multiples of x = -2y, so the smallest
whole-number values that fit are x = 2, y = -1.

For the other eigenvalue, λ = 5:

([1 2 ; 2 4] - [5 0 ; 0 5]) [x ; y] = [-4 2 ; 2 -1][x ; y] = 0

-4x + 2y = 0 and 2x - y = 0, so x = 1, y = 2.
This example is rather special: the inverse of A does not exist,
the two rows of A are dependent, and thus one of the
eigenvalues is zero. (Zero is a legitimate eigenvalue!)
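The hand calculation can be checked with MATLAB's eig; it returns unit-length eigenvectors, so they appear as scaled versions of [2; -1] and [1; 2]:

A = [1 2; 2 4];
[V, D] = eig(A)           % D holds the eigenvalues 0 and 5 on its diagonal
% each column of V, rescaled so its first entry is 1, matches a hand result:
V(:,1)/V(1,1)             % a multiple of one hand-computed eigenvector
V(:,2)/V(1,2)             % a multiple of the other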
EXAMPLE: A more common case is A = [1.05 .05 ; .05 1],
used in the strain exercise. Find the eigenvectors and
eigenvalues for this A, and then calculate [V,D] = eig(A)
(a MATLAB sketch follows the procedure below).
The procedure is:
1) Compute the determinant of A - λI.
2) Find the roots of the polynomial given by |A - λI| = 0.
3) Solve the system of equations (A - λI)x = 0.
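A sketch of these three steps in MATLAB for the A above (poly(A) returns the coefficients of the characteristic polynomial; the eigenvector step uses the 2x2 shortcut x = [a12 ; λ - a11], which follows from the first row of (A - λI)x = 0):

A = [1.05 .05; .05 1];
p = poly(A)                          % steps 1-2: det(A - lambda*I) = lambda^2 - 2.05*lambda + 1.0475
lambda = roots(p)                    % the two eigenvalues
% step 3: from the first row, (a11 - lambda)*x + a12*y = 0, so [a12; lambda - a11] works
x1 = [A(1,2); lambda(1) - A(1,1)];   x1 = x1/norm(x1)
x2 = [A(1,2); lambda(2) - A(1,1)];   x2 = x2/norm(x2)
[V, D] = eig(A)                      % compare with MATLAB's answer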
Eigenvectors and eigenvalues are used in structural
geology to determine the directions of principal strain; the
directions where angles are not changing. In seismology,
these are the directions of least compression (tension), the
compression axis, and the intermediate axis (in three
dimensions).
Some facts:
• The product of the eigenvalues = det(A)
• The sum of the eigenvalues = trace(A)
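Both facts are easy to verify in MATLAB for the A used above:

A = [1.05 .05; .05 1];
lambda = eig(A);
[prod(lambda)  det(A)]      % the product of the eigenvalues equals det(A)
[sum(lambda)   trace(A)]    % the sum of the eigenvalues equals trace(A)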
The x,y values of A can be thought of as representing
points on an ellipse centered at 0,0. The eigenvectors then
point in the directions of the major and minor axes of the
ellipse, and the eigenvalues are the lengths of these axes
measured from 0,0.
One particularly useful application of eigenvectors is for
correlation. Recall that we can define the correlations of
two functions x and y as a correlation matrix:
 1 .75
A  

.75 1 
The correlation coefficient of x and y is 0.75. We can show
this as a sort of error ellipse using the eigenvalues and
eigenvectors:
>> A=[1 .75;.75 1]
>> [V,D]=eig(A)
V =
   -0.7071    0.7071
    0.7071    0.7071
D =
    0.25         0
    0         1.75
We can interpret this correlation as an ellipse whose
major-axis length is one eigenvalue and whose minor-axis
length is the other.
No correlation yields a circle, and perfect correlation yields
a line.
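A minimal sketch of how such an ellipse can be drawn in MATLAB from V and D above, following the lecture's convention that the eigenvalues give the axis lengths (the plotting details are illustrative, not from the original slides):

A = [1 .75; .75 1];
[V, D] = eig(A);
t = linspace(0, 2*pi, 200);
% scale a unit circle by the eigenvalues along the eigenvector directions
xy = V * [D(1,1)*cos(t); D(2,2)*sin(t)];
plot(xy(1,:), xy(2,:)), axis equal
xlabel('x'), ylabel('y'), title('correlation ellipse for r = 0.75')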
What good are such things?
Consider the matrix:
.8 .3
A  

.2 .7
What is A^100?
We can get A^100 by multiplying matrices many, many times:


A^2 = [.70 .45 ; .30 .55],   A^3 = [.65 .525 ; .35 .475],   ...,   A^100 ≈ [.600 .600 ; .400 .400]
Or we could obtain A^100 very quickly using the eigenvalues
and eigenvectors of A.

For now, I’ll just tell you that there are two eigenvectors
for A:
.6
.8
x1    and Ax1  
.4
.7
1 
.8
x 2    and Ax 2  
1
.7
.3.6
  x1 (1 = 1)
.2.4
.31  .5 
    (2 = 0.5)
.21 .5
The eigenvectors are x1=[.6 ; .4] and x2=[1 ; -1], and the
eigenvalues are 1=1 and 2=0.5.
Note that if we multiply x1 by A, we get x1. If we multiply x1
by A again, we STILL get x1. Thus x1 doesn't change as we
multiply it by A^n.
Don't believe it? Open Matlab and try it.
What about x2? When we multiply x2 by A, we get x2/2,
and if we multiply x2 by A^2, we get x2/4. These factors
get very small very fast.
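In MATLAB (as suggested above):

A  = [.8 .3; .2 .7];
x1 = [.6; .4];   x2 = [1; -1];
A*x1             % still [.6; .4]
A*A*x1           % still [.6; .4]
A*x2             % [.5; -.5]   = x2/2
A*A*x2           % [.25; -.25] = x2/4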
Note that when A is squared, the eigenvectors stay the
same but the eigenvalues are squared!
Back to our original problem: for A^100 the eigenvectors
will be the same, and the eigenvalues are λ1 = 1 and
λ2 = (0.5)^100, which is effectively zero.
Each eigenvector is multiplied by its eigenvalue whenever
A is applied, so applying A 100 times multiplies each
eigenvector by its eigenvalue raised to the 100th power.
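A minimal MATLAB sketch of the shortcut: write A = V*D*inv(V) from eig, so A^100 = V*D^100*inv(V), where raising the diagonal matrix D to the 100th power just raises each eigenvalue to the 100th power:

A = [.8 .3; .2 .7];
[V, D] = eig(A);
A100 = V * D^100 / V     % eigen-decomposition shortcut (D^100 is cheap: D is diagonal)
A^100                    % brute-force matrix power, for comparison: ~[.6 .6; .4 .4]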