Linear Algebra
Chapter 5
Eigenvalues and Eigenvectors
大葉大學 資訊工程系
黃鈴玲
5.1 Eigenvalues and Eigenvectors
Definition
Let A be an n × n matrix. A scalar λ is called an eigenvalue (特徵值, 固有值) of A if there exists a nonzero vector x in Rⁿ such that
Ax = λx.
The vector x is called an eigenvector corresponding to λ.
Figure 6.1
Computation of Eigenvalues and Eigenvectors
Let A be an n × n matrix with eigenvalue λ and corresponding eigenvector x. Thus Ax = λx. This equation may be written
Ax – λx = 0
giving
(A – λIn)x = 0
Solving the equation |A – λIn| = 0 for λ leads to all the eigenvalues of A.
On expanding the determinant |A – λIn|, we get a polynomial in λ.
This polynomial is called the characteristic polynomial of A.
The equation |A – λIn| = 0 is called the characteristic equation of A.
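As a quick numerical cross-check (a sketch, not part of the original slides, assuming Python with NumPy is available), the code below compares the roots of the characteristic polynomial with the eigenvalues computed directly; the matrix is the one used in Example 1 below.

import numpy as np

# Matrix from Example 1 below (illustrative choice).
A = np.array([[-4.0, -6.0],
              [ 3.0,  5.0]])

# Coefficients of the characteristic polynomial |A - lambda*I|, highest power first.
coeffs = np.poly(A)
print(np.sort(np.roots(coeffs)))        # roots of the characteristic equation: [-1.  2.]
print(np.sort(np.linalg.eigvals(A)))    # eigenvalues computed directly:        [-1.  2.]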
Example 1
Find the eigenvalues and eigenvectors of the matrix
  4  6
A
5 
3
Solution Let us first derive the characteristic polynomial of A.
We get
  4  6
1 0  4    6 
A  I 2  
 




3
5
0
1
3
5





 

A  I 2  (4   )(5   )  18  2    2
We now solve the characteristic equation of A.
2    2  0  (  2)(  1)  0    2 or  1
The eigenvalues of A are 2 and –1.
The corresponding eigenvectors are found by using these values of λ in the equation (A – λI2)x = 0. There are many eigenvectors corresponding to each eigenvalue.
=2
We solve the equation (A – 2I2)x = 0 for x. The matrix
(A – 2I2) is obtained by subtracting 2 from the diagonal
elements of A. We get
 6  6  x1 
0


3

3   x2 

This leads to the system of equations
–6x1 – 6x2 = 0
3x1 + 3x2 = 0
giving x1 = –x2. The solutions to this system of equations are
x1 = –r, x2 = r, where r is a scalar. Thus the eigenvectors of A
corresponding to λ = 2 are nonzero vectors of the form
 1
r 
 1
 = –1
We solve the equation (A + 1I2)x = 0 for x. The matrix
(A + 1I2) is obtained by adding 1 to the diagonal elements of
A. We get
 3  6  x1 
0


3

6   x2 

This leads to the system of equations
–3x1 – 6x2 = 0
3x1 + 6x2 = 0
Thus x1 = –2x2. The solutions to this system of equations are
x1 = –2s and x2 = s, where s is a scalar. Thus the eigenvectors
of A corresponding to λ = –1 are nonzero vectors of the form
  2
s 
隨堂作業:9(a)
1
 
先不求eigenspaces Ch5_6
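A small NumPy check (a sketch, not part of the slides) confirming that the eigenvectors found in Example 1 really satisfy Ax = λx:

import numpy as np

A = np.array([[-4.0, -6.0],
              [ 3.0,  5.0]])
v1 = np.array([-1.0, 1.0])   # eigenvector for lambda = 2  (r = 1)
v2 = np.array([-2.0, 1.0])   # eigenvector for lambda = -1 (s = 1)

print(A @ v1, 2 * v1)        # [-2.  2.] [-2.  2.]  -> A v1 = 2 v1
print(A @ v2, -1 * v2)       # [ 2. -1.] [ 2. -1.]  -> A v2 = -1 v2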
Theorem 5.1
Let A be an n × n matrix and λ an eigenvalue of A. The set of all eigenvectors corresponding to λ, together with the zero vector, is a subspace of Rⁿ. This subspace is called the eigenspace of λ.
Proof
Let x1 and x2 be two vectors in the eigenspace of λ and let c be a scalar. Then Ax1 = λx1 and Ax2 = λx2. Hence,
Ax1 + Ax2 = λx1 + λx2
A(x1 + x2) = λ(x1 + x2)
Thus x1 + x2 is a vector in the eigenspace of λ. The set is closed under addition.
Further, since Ax1 = λx1,
cAx1 = cλx1
A(cx1) = λ(cx1)
Therefore cx1 is a vector in the eigenspace of λ. The set is closed under scalar multiplication.
Thus the set is a subspace of Rⁿ.
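The closure properties in Theorem 5.1 can also be checked numerically. The sketch below (not from the slides; it borrows the matrix of Example 2 below, whose eigenvalue λ = 1 has a two-dimensional eigenspace) verifies that a sum and a scalar multiple of eigenvectors for λ = 1 stay in that eigenspace.

import numpy as np

A = np.array([[5.0, 4.0, 2.0],
              [4.0, 5.0, 2.0],
              [2.0, 2.0, 2.0]])

x1 = np.array([-1.0, 1.0, 0.0])   # an eigenvector of A for lambda = 1
x2 = np.array([-1.0, 0.0, 2.0])   # another eigenvector for lambda = 1

for x in (x1 + x2, 3.0 * x1):
    print(np.allclose(A @ x, 1.0 * x))   # True: the vector still satisfies Ax = 1*x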
Example 2
Find the eigenvalues and eigenvectors of the matrix
 5 4 2
A  4 5 2
2 2 2


Solution  The matrix A – λI3 is obtained by subtracting λ from the diagonal elements of A. Thus
A - \lambda I_3 = \begin{bmatrix} 5-\lambda & 4 & 2 \\ 4 & 5-\lambda & 2 \\ 2 & 2 & 2-\lambda \end{bmatrix}
The characteristic polynomial of A is |A – λI3|. Using row and column operations to simplify determinants, we get
|A - \lambda I_3| = \begin{vmatrix} 5-\lambda & 4 & 2 \\ 4 & 5-\lambda & 2 \\ 2 & 2 & 2-\lambda \end{vmatrix} = \begin{vmatrix} 1-\lambda & \lambda-1 & 0 \\ 4 & 5-\lambda & 2 \\ 2 & 2 & 2-\lambda \end{vmatrix} = \begin{vmatrix} 1-\lambda & 0 & 0 \\ 4 & 9-\lambda & 2 \\ 2 & 4 & 2-\lambda \end{vmatrix}
= (1-\lambda)[(9-\lambda)(2-\lambda) - 8] = (1-\lambda)[\lambda^2 - 11\lambda + 10]
= (1-\lambda)(\lambda-10)(\lambda-1) = -(\lambda-10)(\lambda-1)^2
We now solve the characteristic equation of A:
-(\lambda-10)(\lambda-1)^2 = 0 \;\Rightarrow\; \lambda = 10 \text{ or } \lambda = 1
The eigenvalues of A are 10 and 1.
The corresponding eigenvectors are found by using these values of λ in the equation (A – λI3)x = 0.
 = 10
We get
(A – 10I3)x = 0
\begin{bmatrix} -5 & 4 & 2 \\ 4 & -5 & 2 \\ 2 & 2 & -8 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
The solutions to this system of equations are x1 = 2r, x2 = 2r, and x3 = r, where r is a scalar. Thus the eigenspace of λ = 10 is the one-dimensional space of vectors of the form
2
r 2
 
1 
=1
Let λ = 1 in (A – λI3)x = 0. We get
(A – 1I3)x = 0
\begin{bmatrix} 4 & 4 & 2 \\ 4 & 4 & 2 \\ 2 & 2 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
The solution to this system of equations can be shown to be
x1 = – s – t, x2 = s, and x3 = 2t, where s and t are scalars.
Thus the eigenspace of λ = 1 is the space of vectors of the form
\begin{bmatrix} -s-t \\ s \\ 2t \end{bmatrix}
Separating the parameters s and t, we can write
\begin{bmatrix} -s-t \\ s \\ 2t \end{bmatrix} = s\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t\begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix}
Thus the eigenspace of λ = 1 is a two-dimensional subspace of R³ with basis
\left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix} \right\}
If an eigenvalue occurs as a k-times repeated root of the characteristic equation, we say that it is of multiplicity k. Thus λ = 10 has multiplicity 1, while λ = 1 has multiplicity 2 in this example.
In-class exercise: 10
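A NumPy cross-check (a sketch, not part of the slides) of Example 2: the eigenvalues are 10 and 1, and the eigenspace of λ = 1 is two-dimensional.

import numpy as np

A = np.array([[5.0, 4.0, 2.0],
              [4.0, 5.0, 2.0],
              [2.0, 2.0, 2.0]])

print(np.round(np.linalg.eigvals(A), 6))         # 10 appears once, 1 appears twice

# Dimension of the eigenspace of lambda = 1: the nullity of A - I.
print(3 - np.linalg.matrix_rank(A - np.eye(3)))  # 2, matching the multiplicity of lambda = 1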
Example 3
Let A be an n × n matrix with eigenvalues λ1, …, λn and corresponding eigenvectors X1, …, Xn. Prove that if c ≠ 0, then the eigenvalues of cA are cλ1, …, cλn, with corresponding eigenvectors X1, …, Xn.
In-class exercise: 28
Solution
Let i be one of eigenvalues of A with corresponding
eigenvectors Xi. Then AXi = iXi. Multiply both sides of this
equation by c to get
cAXi = ciXi
Thus ci is an eigenvalues of cA with corresponding eigenvector
Xi.
Further, since cA is n  n matrix, the characteristic polynomial of
A is of degree n. The characteristic equation has n roots,
implying that cA has n eigenvalues. The eigenvalues of cA are
therefore c1, …, cn with corresponding eigenvectors X1, …,Ch5_14
Xn.
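A one-line NumPy check (a sketch, not part of the slides) of the result in Example 3, using the matrix of Example 1 and the illustrative choice c = 3:

import numpy as np

A = np.array([[-4.0, -6.0],
              [ 3.0,  5.0]])
c = 3.0

print(np.sort(np.linalg.eigvals(c * A)))   # [-3.  6.]
print(np.sort(c * np.linalg.eigvals(A)))   # [-3.  6.]  (c times the eigenvalues of A)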
Homework
Exercise 5.1:
9, 10, 13, 15, 24, 26, 32
Ex24: Prove that if A is a diagonal matrix, then its eigenvalues are
the diagonal elements.
Ex26: Prove that A and Aᵗ have the same eigenvalues.
Ex32: Prove that the constant term of the characteristic polynomial
of a matrix A is |A|.
5.3 Diagonalization of Matrices
Definition
Let A and B be square matrices of the same size. B is said to be
similar to A if there exists an invertible matrix C such that
B = C⁻¹AC. The transformation of the matrix A into the matrix B
in this manner is called a similarity transformation.
Example 1
Consider the following matrices A and C. C is invertible. Use the
similarity transformation C⁻¹AC to transform A into a matrix B.
7  10
2 5
A
C


3  4 
1 3
Solution
1
2 5 7  10 2 5

1
B  C AC 
1 3 3  4  1 3
3  5 7  10 2 5

 1 2  3  4  1 3
6

 1
2


0
 10 2 5
2  1 3
0
1
In-class exercise: 1(b)
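The same similarity transformation can be reproduced with NumPy (a sketch, not part of the slides):

import numpy as np

A = np.array([[7.0, -10.0],
              [3.0,  -4.0]])
C = np.array([[2.0, 5.0],
              [1.0, 3.0]])

B = np.linalg.inv(C) @ A @ C   # similarity transformation B = C^{-1} A C
print(np.round(B))             # [[2. 0.] [0. 1.]]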
Theorem 5.3
Similar matrices have the same eigenvalues.
Proof
Let A and B be similar matrices. Hence there exists a matrix C such that B = C⁻¹AC. The characteristic polynomial of B is |B – λIn|. Substituting for B and using the multiplicative properties of determinants, we get
|B - \lambda I| = |C^{-1}AC - \lambda I| = |C^{-1}(A - \lambda I)C|
= |C^{-1}|\,|A - \lambda I|\,|C| = |A - \lambda I|\,|C^{-1}|\,|C|
= |A - \lambda I|\,|C^{-1}C| = |A - \lambda I|\,|I|
= |A - \lambda I|
The characteristic polynomials of A and B are identical. This
means that their eigenvalues are the same.
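Theorem 5.3 can be illustrated numerically. In the sketch below (not from the slides), B = C⁻¹AC is formed with the matrices of Example 1 above, and both A and B are found to have the eigenvalues 1 and 2.

import numpy as np

A = np.array([[7.0, -10.0],
              [3.0,  -4.0]])
C = np.array([[2.0, 5.0],
              [1.0, 3.0]])
B = np.linalg.inv(C) @ A @ C

print(np.sort(np.linalg.eigvals(A)))   # [1. 2.]
print(np.sort(np.linalg.eigvals(B)))   # [1. 2.]  (similar matrices share eigenvalues)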
Definition
A square matrix A is said to be diagonalizable if there exists a
matrix C such that D = C⁻¹AC is a diagonal matrix.
Theorem 5.4
Let A be an n × n matrix.
(a) If A has n linearly independent eigenvectors, it is
diagonalizable. The matrix C whose columns consist of n
linearly independent eigenvectors can be used in a similarity
transformation C⁻¹AC to give a diagonal matrix D. The
diagonal elements of D will be the eigenvalues of A.
(b) If A is diagonalizable, then it has n linearly independent
eigenvectors
Proof
(a) Let A have eigenvalues λ1, …, λn, with corresponding linearly independent eigenvectors v1, …, vn. Let C be the matrix having v1, …, vn as column vectors:
C = [v1 … vn]
Since Av1 = λ1v1, …, Avn = λnvn, matrix multiplication in terms of columns gives
AC = A[v_1 \;\cdots\; v_n] = [Av_1 \;\cdots\; Av_n] = [\lambda_1 v_1 \;\cdots\; \lambda_n v_n]
= [v_1 \;\cdots\; v_n]\begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix} = C\begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}
Since the columns of C are linearly independent, C is nonsingular.
Thus
C^{-1}AC = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}
Therefore, if an n × n matrix A has n linearly independent eigenvectors, these eigenvectors can be used as the columns of a matrix C that diagonalizes A. The diagonal matrix has the eigenvalues of A as diagonal elements.
(b) The converse is proved by retracing the above steps. Commence with the assumption that C is a matrix [v1 … vn] that diagonalizes A. Thus, there exist scalars λ1, …, λn such that
C^{-1}AC = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}
Retracing the above steps, we arrive at the conclusion that
Av1 = λ1v1, …, Avn = λnvn
The vectors v1, …, vn are eigenvectors of A. Since C is nonsingular, these vectors (the column vectors of C) are linearly independent. Thus if an n × n matrix A is diagonalizable, it has n linearly independent eigenvectors.
Example 2
(a) Show that the following matrix A is diagonalizable.
(b) Find a diagonal matrix D that is similar to A.
(c) Determine the similarity transformation that diagonalizes A.
  4  6
A

3
5


Solution
(a) The eigenvalues and corresponding eigenvectors of this matrix were found in Example 1 of Section 5.1. They are
 1
1  2, v1  r  
 1
  2
2  1, and v 2  s  
 1
Since A, a 2 × 2 matrix, has two linearly independent
eigenvectors, it is diagonalizable.
(b) A is similar to the diagonal matrix D, which has diagonal elements λ1 = 2 and λ2 = –1. Thus
  4  6
2 0 
A
is similar to D  


3
5
0

1




(c) Select two convenient linearly independent eigenvectors, say
 1
  2
v1    and v 2   
 1
 1
Let these vectors be the column vectors of the diagonalizing
matrix C.
  1  2
C

1
1


We get
1
  1  2   4  6   1  2
1
C AC  
1   3
5   1
1 
1
2    4  6   1  2  2
1







5  1
1  0
 1  1  3
隨堂作業:3(a)
0
D

 1
Ch5_24
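A NumPy sketch (not part of the slides) of the diagonalization in Example 2: the columns of C are the chosen eigenvectors, and C⁻¹AC comes out as the diagonal matrix D.

import numpy as np

A = np.array([[-4.0, -6.0],
              [ 3.0,  5.0]])
C = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])   # columns: the eigenvectors chosen in part (c)

D = np.linalg.inv(C) @ A @ C
print(np.round(D))              # [[ 2.  0.] [ 0. -1.]]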
Note
If A is similar to a diagonal matrix D under the transformation C⁻¹AC, then it can be shown that Aᵏ = CDᵏC⁻¹.
This result can be used to compute Aᵏ. Let us derive this result and then apply it.
D^k = (C^{-1}AC)^k = (C^{-1}AC)(C^{-1}AC)\cdots(C^{-1}AC) \quad (k \text{ times}) = C^{-1}A^kC
This leads to
A^k = CD^kC^{-1}
Example 3
Compute A⁹ for the following matrix A.
A = \begin{bmatrix} -4 & -6 \\ 3 & 5 \end{bmatrix}
Solution
A is the matrix of the previous example. Use the values of C and
D from that example. We get
D^9 = \begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}^9 = \begin{bmatrix} 2^9 & 0 \\ 0 & (-1)^9 \end{bmatrix} = \begin{bmatrix} 512 & 0 \\ 0 & -1 \end{bmatrix}

A^9 = CD^9C^{-1} = \begin{bmatrix} -1 & -2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 512 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} -514 & -1026 \\ 513 & 1025 \end{bmatrix}

In-class exercise: 9(a)
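The computation of A⁹ can be checked with NumPy (a sketch, not part of the slides), comparing CD⁹C⁻¹ with direct matrix powering.

import numpy as np

A = np.array([[-4.0, -6.0],
              [ 3.0,  5.0]])
C = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])
D = np.diag([2.0, -1.0])

via_diag = C @ np.linalg.matrix_power(D, 9) @ np.linalg.inv(C)
print(np.round(via_diag))                 # [[ -514. -1026.] [  513.  1025.]]
print(np.linalg.matrix_power(A, 9))       # the same matrix, computed directly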
Example 4
Show that the following matrix A is not diagonalizable.
A = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}
Solution
A - \lambda I_2 = \begin{bmatrix} 5-\lambda & -3 \\ 3 & -1-\lambda \end{bmatrix}
The characteristic equation is
|A - \lambda I_2| = 0 \;\Rightarrow\; (5-\lambda)(-1-\lambda) + 9 = 0
\lambda^2 - 4\lambda + 4 = 0 \;\Rightarrow\; (\lambda-2)(\lambda-2) = 0
There is a single eigenvalue, λ = 2. We find the corresponding eigenvectors. (A – 2I2)x = 0 gives
\begin{bmatrix} 3 & -3 \\ 3 & -3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad\text{i.e. } 3x_1 - 3x_2 = 0.
Thus x1 = r, x2 = r. The eigenvectors are nonzero vectors of the form
r \begin{bmatrix} 1 \\ 1 \end{bmatrix}
The eigenspace is a one-dimensional space. A is a 2 × 2 matrix, but it does not have two linearly independent eigenvectors. Thus A is not diagonalizable.

In-class exercise: 3(c)
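A NumPy check (a sketch, not part of the slides) that the matrix of Example 4 is defective: its characteristic polynomial is (λ − 2)², yet the eigenspace of λ = 2 is only one-dimensional.

import numpy as np

A = np.array([[5.0, -3.0],
              [3.0, -1.0]])

print(np.poly(A))                                      # [ 1. -4.  4.], i.e. (lambda - 2)^2

# Dimension of the eigenspace of lambda = 2: the nullity of A - 2I.
print(2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2)))  # 1, so no basis of eigenvectors exists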
Theorem 5.5
Let A be an n × n symmetric matrix.
(a) All the eigenvalues of A are real numbers.
(b) The dimension of an eigenspace of A is the multiplicity of the eigenvalue as a root of the characteristic equation.
(c) The eigenspaces of A are orthogonal.
(d) A has n linearly independent eigenvectors.
Orthogonal Diagonalization
Definition
A square matrix A is said to be orthogonally diagonalizable if there exists an orthogonal matrix C such that D = C⁻¹AC is a diagonal matrix.
Theorem 5.6
Let A be a square matrix. A is orthogonally diagonalizable if and
only if it is a symmetric matrix.
Example 5
Orthogonally diagonalize the following symmetric matrix A.
 1  2
A

 2 1 
Solution
The eigenvalues and corresponding eigenspaces of this matrix
are
 1

  1

1  1, V1  s   | s  R; 2  3, V2  r   | r  R
 1

  1

  1 0
Since A is symmetric, it can be diagonalized to give D  

0
3


Let us determine the transformation. The eigenspaces V1 and V2
are orthogonal. Use a unit vector in each eigenspace as columns
of an orthogonal matrix C. We get
 12  12 
C1
1 
 2
2 
The orthogonal transformation that leads to D is
C^{t}AC = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 3 \end{bmatrix}

In-class exercise: 6(a)
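NumPy's eigh routine produces exactly this kind of orthogonal diagonalization for a symmetric matrix; the sketch below (not part of the slides) applies it to the matrix of Example 5.

import numpy as np

A = np.array([[ 1.0, -2.0],
              [-2.0,  1.0]])

eigvals, C = np.linalg.eigh(A)   # for symmetric A the columns of C are orthonormal eigenvectors
print(eigvals)                   # [-1.  3.]
print(np.round(C.T @ C))         # identity matrix: C is orthogonal
print(np.round(C.T @ A @ C))     # [[-1.  0.] [ 0.  3.]] = D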
Homework
Exercise 5.3:
1, 2, 6, 9