Chapter 11 The Eigenvalue Problem
11.1 Introduction
Consider the problem

$$Ax = \lambda x \tag{1}$$

where $A$ is a given $n \times n$ matrix, $x$ is an unknown $n \times 1$ vector, and $\lambda$ is an unknown scalar. If we re-express Eq. (1) as $Ax = \lambda I x$ (where $I$ is the $n \times n$ identity matrix), then

$$(A - \lambda I)x = 0 \tag{2}$$

Eq. (2) is consistent because it necessarily admits the "trivial" solution $x = 0$. However, our interest in Eq. (2) shall be in the search for nontrivial solutions, and we anticipate that whether or not nontrivial solutions exist will depend upon the value of $\lambda$.
Thus, the problem of interest is as follows: given the $n \times n$ matrix $A$, find the value(s) of $\lambda$ (if any) such that Eq. (2) admits nontrivial solutions, and find those nontrivial solutions. This problem is called the eigenvalue problem. The $\lambda$'s that lead to nontrivial solutions for $x$ are called the eigenvalues, and the corresponding nontrivial solutions for $x$ are called the eigenvectors.
11.2 Solution procedure and applications
11.2.1 Solution and applications
The eigenvalue problem

$$(A - \lambda I)x = 0 \tag{1}$$

has only the trivial solution $x = 0$ if $\det(A - \lambda I) \neq 0$, and nontrivial solutions (in addition to the trivial solution) if and only if

$$\det(A - \lambda I) = 0 \tag{2}$$
The latter is not a vector or matrix equation; it is an algebraic equation in $\lambda$, known as the characteristic equation corresponding to the matrix $A$, and its left-hand side is an $n$th-degree polynomial known as the characteristic polynomial.
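As a concrete cross-check (an illustration added here, not part of the original notes), the characteristic polynomial and its roots can be computed with numpy; the matrix used is the one from Example 1 below. Note that numpy builds $\det(\lambda I - A)$ rather than $\det(A - \lambda I)$; the two differ only by a sign and have the same roots.

```python
import numpy as np

# Matrix of Example 1 below, used here purely for illustration.
A = np.array([[2.0, 2.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 2.0, 2.0]])

# Coefficients of det(lambda*I - A), highest power first:
# lambda^3 - 7*lambda^2 + 11*lambda - 5.
coeffs = np.poly(A)
print(coeffs)            # approx [ 1. -7. 11. -5.]

# The eigenvalues are the roots of the characteristic equation.
print(np.roots(coeffs))  # approx [5., 1., 1.]
```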
Suppose Eq. (2) has been solved for the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ ($1 \le k \le n$). Next, set $\lambda = \lambda_1$ in Eq. (1). Since $\det(A - \lambda_1 I) = 0$, it is guaranteed that $(A - \lambda_1 I)x = 0$ will have nontrivial solutions. We can find those solutions by Gauss elimination, and we designate them $e_1$, where the letter $e$ is for eigenvector. The solution space of such $e_1$'s is called the eigenspace corresponding to the eigenvalue $\lambda_1$.
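The Gauss-elimination step can be mirrored in code. Below is a minimal sketch assuming sympy (not used in the original notes), whose nullspace() routine row-reduces $(A - \lambda_1 I)$ exactly as described above and returns a basis for the eigenspace; the matrix is again the one from Example 1 below.

```python
from sympy import Matrix, eye

A = Matrix([[2, 2, 1],
            [1, 3, 1],
            [1, 2, 2]])      # matrix of Example 1 below

lam = 5                      # a root of the characteristic equation
# nullspace() carries out the Gauss elimination on (A - lam*I)x = 0
# and returns a basis for the solution space, i.e. the eigenspace.
print((A - lam * eye(3)).nullspace())   # [Matrix([[1], [1], [1]])]
```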
Example 1

Determine all eigenvalues and eigenspaces of

$$A = \begin{bmatrix} 2 & 2 & 1 \\ 1 & 3 & 1 \\ 1 & 2 & 2 \end{bmatrix} \tag{3}$$
$$\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 2 & 1 \\ 1 & 3-\lambda & 1 \\ 1 & 2 & 2-\lambda \end{vmatrix} = -\lambda^3 + 7\lambda^2 - 11\lambda + 5 = -(\lambda - 5)(\lambda - 1)^2 = 0 \tag{4}$$

so the eigenvalues of $A$ are $\lambda_1 = 5$ and $\lambda_2 = 1$, with $\lambda_2 = 1$ called a repeated eigenvalue. Next, find the eigenspaces:
$\lambda_1 = 5$: Then $(A - \lambda_1 I)x = 0$ becomes

$$\begin{bmatrix} 2-5 & 2 & 1 \\ 1 & 3-5 & 1 \\ 1 & 2 & 2-5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -3 & 2 & 1 \\ 1 & -2 & 1 \\ 1 & 2 & -3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

which Gauss elimination reduces to

$$\begin{bmatrix} -3 & 2 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{5}$$

The solution is $x_3 = \alpha$ (arbitrary), $x_2 = \alpha$, $x_1 = \alpha$; using $e$ in place of $x$,

$$e = \begin{bmatrix} \alpha \\ \alpha \\ \alpha \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \tag{6}$$

Thus, the eigenspace corresponding to $\lambda_1 = 5$ is

$$\operatorname{span}\left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}$$
$\lambda_2 = 1$: Then $(A - \lambda_2 I)x = 0$ becomes

$$\begin{bmatrix} 2-1 & 2 & 1 \\ 1 & 3-1 & 1 \\ 1 & 2 & 2-1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 1 \\ 1 & 2 & 1 \\ 1 & 2 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

which reduces to

$$\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{7}$$

The solution is $x_3 = \beta$ (arbitrary), $x_2 = \gamma$ (arbitrary), $x_1 = -\beta - 2\gamma$, so

$$e = \begin{bmatrix} -\beta - 2\gamma \\ \gamma \\ \beta \end{bmatrix} = \beta \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} + \gamma \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} \tag{8}$$

Thus, the eigenspace corresponding to $\lambda_2 = 1$ is

$$\operatorname{span}\left\{ \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} \right\}$$
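As a cross-check on Example 1 (again a sympy sketch, not part of the original notes), eigenvects() returns each eigenvalue with its multiplicity and an eigenspace basis, confirming the two-dimensional eigenspace of the repeated eigenvalue $\lambda = 1$:

```python
from sympy import Matrix

A = Matrix([[2, 2, 1],
            [1, 3, 1],
            [1, 2, 2]])

# Each tuple is (eigenvalue, algebraic multiplicity, eigenspace basis).
for lam, mult, basis in A.eigenvects():
    print(lam, mult, [list(v) for v in basis])
# lam = 1: multiplicity 2, basis vectors [-2, 1, 0] and [-1, 0, 1]
# lam = 5: multiplicity 1, basis vector  [1, 1, 1]
```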
Example 3

Solve the coupled differential equations

$$\begin{aligned} x' &= x + 4y \\ y' &= x + y \end{aligned} \tag{15}$$

Seek $x$, $y$ in the form

$$x(t) = q_1 e^{rt}, \quad y(t) = q_2 e^{rt} \tag{16}$$

where $q_1$, $q_2$, and $r$ are constants that are to be determined. Putting Eq. (16) into (15) gives

$$\begin{aligned} r q_1 e^{rt} &= q_1 e^{rt} + 4 q_2 e^{rt} \\ r q_2 e^{rt} &= q_1 e^{rt} + q_2 e^{rt} \end{aligned}$$

Cancelling the common factor $e^{rt}$ and expressing the result in matrix form gives

$$\begin{bmatrix} 1 & 4 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} q_1 \\ q_2 \end{bmatrix} = r \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}$$

which is an eigenvalue problem, with $\lambda = r$. Proceeding as above, we obtain these eigenvalues and eigenspaces:

$$\lambda_1 = 3,\ \ e_1 = \alpha \begin{bmatrix} 2 \\ 1 \end{bmatrix}; \qquad \lambda_2 = -1,\ \ e_2 = \beta \begin{bmatrix} -2 \\ 1 \end{bmatrix} \tag{19}$$

Each eigenpair gives a solution, with $\mathbf{x}(t) = (x(t), y(t))^T$:

$$\mathbf{x}(t) = \alpha \begin{bmatrix} 2 \\ 1 \end{bmatrix} e^{3t}; \qquad \mathbf{x}(t) = \beta \begin{bmatrix} -2 \\ 1 \end{bmatrix} e^{-t} \tag{20}$$

Since the system is linear and homogeneous, the linear combination

$$\mathbf{x}(t) = \alpha \begin{bmatrix} 2 \\ 1 \end{bmatrix} e^{3t} + \beta \begin{bmatrix} -2 \\ 1 \end{bmatrix} e^{-t} \tag{21}$$

is also a solution, or, in scalar form,

$$\begin{aligned} x(t) &= 2\alpha e^{3t} - 2\beta e^{-t} \\ y(t) &= \alpha e^{3t} + \beta e^{-t} \end{aligned} \tag{22}$$
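As a sanity check on Example 3 (a sketch assuming numpy and scipy, which the notes do not use), we can compare the eigenvector-based solution (21) with the matrix exponential $x(t) = e^{At}x(0)$ for one arbitrarily chosen initial condition:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])

x0 = np.array([0.0, 1.0])     # hypothetical initial condition
t = 0.5

# Solve x(0) = c1*(2,1) + c2*(-2,1) for the constants in Eq. (21).
E = np.array([[2.0, -2.0],
              [1.0,  1.0]])   # eigenvectors as columns
c = np.linalg.solve(E, x0)
x_analytic = c[0] * np.array([2.0, 1.0]) * np.exp(3 * t) \
           + c[1] * np.array([-2.0, 1.0]) * np.exp(-t)

# Reference solution: x(t) = expm(A*t) @ x(0).
x_numeric = expm(A * t) @ x0
print(np.allclose(x_analytic, x_numeric))  # True
```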
11.3 Symmetric Matrices
11.3.1 The eigenvalue problem Ax = λx
Theorem 11.3.1 Real eigenvalues
If $A$ is symmetric ($A^T = A$), then all of its eigenvalues are real.
Theorem 11.3.2 Dimension of eigenspace
If an eigenvalue $\lambda$ of a symmetric matrix $A$ is of multiplicity $k$, then the eigenspace corresponding to $\lambda$ is of dimension $k$.
Theorem 11.3.3 Orthogonality of eigenvectors
If A is symmetric, the eigenvectors corresponding to distinct
eigenvalues are orthogonal.
Let $e_j$ and $e_k$ be eigenvectors corresponding to distinct eigenvalues $\lambda_j$ and $\lambda_k$, respectively. Thus,

$$A e_j = \lambda_j e_j \quad\text{and}\quad A e_k = \lambda_k e_k \tag{1a, 1b}$$

Recall $x \cdot y = x^T y$ and $(AB)^T = B^T A^T$ for any matrices $A$ and $B$ that are conformable for multiplication. Then, if we dot $e_k$ into each side of (1a) and dot each side of (1b) into $e_j$, we obtain

$$\begin{array}{l|l}
e_k \cdot (A e_j) = e_k \cdot (\lambda_j e_j) & (A e_k) \cdot e_j = (\lambda_k e_k) \cdot e_j \\
e_k^T A e_j = \lambda_j e_k^T e_j & (A e_k)^T e_j = \lambda_k e_k^T e_j \\
& e_k^T A^T e_j = \lambda_k e_k^T e_j
\end{array} \tag{2}$$

But $A^T = A$ by assumption, so if we subtract the bottom equations on the left and right of the vertical divider, we obtain

$$0 = (\lambda_j - \lambda_k) e_k^T e_j \tag{3}$$

Since $\lambda_j$ and $\lambda_k$ were assumed to be distinct, it follows from (3) that $e_k^T e_j = 0$. Thus, $e_k \cdot e_j = 0$, as claimed.
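Theorems 11.3.1 and 11.3.3 are easy to check numerically. A minimal sketch, assuming numpy and an arbitrarily chosen symmetric matrix: eigh, numpy's routine for symmetric (Hermitian) matrices, returns real eigenvalues and orthonormal eigenvectors.

```python
import numpy as np

# An arbitrarily chosen symmetric matrix (illustrative, not from the notes).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# Eigenvalues are real; eigenvectors are the columns of E.
vals, E = np.linalg.eigh(A)

print(vals)                              # all real
print(np.allclose(E.T @ E, np.eye(3)))   # True: columns are orthonormal
```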
Theorem 11.3.4 Orthogonal basis
If an n x n matrix A is symmetric, then its eigenvectors
provide an orthogonal basis for n-space.
Example 2 Free vibration of a two-mass system
Consider the system of two masses subjected to forces
f1(t) and f2(t) and restrained laterally by springs and
supported vertically by a frictionless table as shown in Fig. 1.
$$\begin{aligned} m_1 x_1'' + (k_1 + k_{12}) x_1 - k_{12} x_2 &= f_1(t) \\ m_2 x_2'' - k_{12} x_1 + (k_2 + k_{12}) x_2 &= f_2(t) \end{aligned} \tag{11}$$
Let $m_1 = m_2 = k_1 = k_{12} = k_2 = 1$ and consider free vibration, such that $f_1(t) = f_2(t) = 0$. Then Eq. (11) becomes

$$\begin{aligned} x_1'' + 2x_1 - x_2 &= 0 \\ x_2'' - x_1 + 2x_2 &= 0 \end{aligned} \tag{12}$$

Seek

$$x_1(t) = q_1 e^{\lambda t}, \quad x_2(t) = q_2 e^{\lambda t} \tag{13}$$

On physical grounds, however, we expect the solution to be a vibration, so it is more sensible to seek

$$x_1(t) = q_1 \sin(\omega t + \phi), \quad x_2(t) = q_2 \sin(\omega t + \phi) \tag{14}$$

Putting Eq. (14) into (12) gives

$$\begin{aligned} -\omega^2 q_1 + 2q_1 - q_2 &= 0 \\ -\omega^2 q_2 - q_1 + 2q_2 &= 0 \end{aligned}$$
or, equivalently,

$$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} q_1 \\ q_2 \end{bmatrix} = \omega^2 \begin{bmatrix} q_1 \\ q_2 \end{bmatrix} \tag{15}$$

which is a matrix eigenvalue problem

$$A\mathbf{q} = \lambda \mathbf{q} \tag{16}$$

with $\lambda = \omega^2$ as the eigenvalue. Solving for the eigenvalues and eigenspaces, we have

$$\lambda_1 = 1,\ \ e_1 = \alpha \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \qquad \lambda_2 = 3,\ \ e_2 = \beta \begin{bmatrix} 1 \\ -1 \end{bmatrix} \tag{17}$$
Each eigenpair gives us a solution of the form (14). The first gives $\omega = \sqrt{\lambda_1} = 1$, and

$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 1 \end{bmatrix} \sin(t + \phi_1) \tag{18}$$
The second gives $\omega = \sqrt{\lambda_2} = \sqrt{3}$, and

$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \beta \begin{bmatrix} 1 \\ -1 \end{bmatrix} \sin(\sqrt{3}\,t + \phi_2) \tag{19}$$

where $\alpha$, $\beta$, $\phi_1$, and $\phi_2$ are arbitrary constants, and each of (18) and (19) satisfies Eq. (12). Since Eq. (12) is linear and homogeneous, it follows that the linear combination

$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 1 \end{bmatrix} \sin(t + \phi_1) + \beta \begin{bmatrix} 1 \\ -1 \end{bmatrix} \sin(\sqrt{3}\,t + \phi_2) \tag{20}$$

is also a solution.
Returning to scalar form, we have

$$\begin{aligned} x_1(t) &= \alpha \sin(t + \phi_1) + \beta \sin(\sqrt{3}\,t + \phi_2) \\ x_2(t) &= \alpha \sin(t + \phi_1) - \beta \sin(\sqrt{3}\,t + \phi_2) \end{aligned} \tag{21}$$

Each eigenpair defines a vibration "mode": the eigenvalue gives the vibrational frequency ($\omega = \lambda^{1/2}$) and the eigenvector gives the mode shape or configuration. The frequencies are called the eigenfrequencies, or natural frequencies. The first term in Eq. (20) is called the low mode because it occurs at the lower of the two natural frequencies, and the second term is called the high mode.
Imposing initial conditions selects the modes present in the motion:

Case 1: $x_1(0) = 0$, $x_2(0) = 0$, $x_1'(0) = 1$, $x_2'(0) = 1$ gives $\beta = 0$, $\alpha = 1$, $\phi_1 = 0$ (pure low mode).
Case 2: $x_1(0) = 1$, $x_2(0) = -1$, $x_1'(0) = 0$, $x_2'(0) = 0$ gives $\alpha = 0$, $\beta = 1$, $\phi_2 = \pi/2$ (pure high mode).
Case 3: $x_1(0) = 1$, $x_2(0) = 0$, $x_1'(0) = 0$, $x_2'(0) = 1$ gives a motion containing both modes.

[Figure: displacement histories for Case 1 ($\beta = 0$), Case 2 ($\alpha = 0$), and Case 3 (mixed).]
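The eigenfrequencies and mode shapes of Example 2 can be reproduced numerically; a minimal sketch assuming numpy (eigenvectors are determined only up to a scalar, so the signs and scaling may differ from Eq. (17)):

```python
import numpy as np

# Coefficient matrix of Eq. (15) for m1 = m2 = k1 = k12 = k2 = 1.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

lam, modes = np.linalg.eigh(A)   # A is symmetric, so use eigh
omega = np.sqrt(lam)             # natural frequencies, omega = sqrt(lambda)

print(omega)   # approx [1.    1.732] -> low mode and high mode
print(modes)   # columns approx (1, 1)/sqrt(2) and (1, -1)/sqrt(2), up to sign
```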
11.4 Diagonalization
Finding the solution of $Ax = c$, where $A$ is an $n \times n$ matrix, is tedious if $n$ is large; computing $A^m$ is tedious if $m$ is large; and finding the solution of

$$x'(t) = A x(t) \tag{1}$$

is generally tedious. But each of these tasks is simple if $A$ is diagonal. The idea, then, is to find a constant matrix $Q$ that converts the variables $x_1, \ldots, x_n$ to new variables $\tilde{x}_1, \ldots, \tilde{x}_n$ according to

$$x = Q\tilde{x}, \qquad x' = Q\tilde{x}' \tag{2}$$

that is, written out,

$$\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} q_{11} & \cdots & q_{1n} \\ \vdots & & \vdots \\ q_{n1} & \cdots & q_{nn} \end{bmatrix}\begin{bmatrix} \tilde{x}_1 \\ \vdots \\ \tilde{x}_n \end{bmatrix} \tag{3}$$
Substituting Eq. (2) into Eq. (1) gives

$$Q\tilde{x}' = AQ\tilde{x} \tag{4}$$

Choose $Q$ to be invertible. Then multiplying (4) by $Q^{-1}$ gives $Q^{-1}Q\tilde{x}' = Q^{-1}AQ\tilde{x}$, or

$$\tilde{x}' = Q^{-1}AQ\tilde{x} \tag{5}$$

Given a matrix $A$, the idea is to find a $Q$ such that

$$Q^{-1}AQ = D \tag{6}$$

is diagonal, because then the differential equations within Eq. (5) are uncoupled. If such a $Q$ exists, we say that $A$ is diagonalizable and that $Q$ diagonalizes $A$. Two questions arise: (1) Given $A$, does such a $Q$ exist? (2) If so, how do we find it?
Theorem 11.4.1 Diagonalization
Let $A$ be $n \times n$.
1. $A$ is diagonalizable if and only if it has $n$ LI (linearly independent) eigenvectors.
2. If $A$ has $n$ LI eigenvectors $e_1, \ldots, e_n$ and we make these the columns of $Q$, so that $Q = [e_1, \ldots, e_n]$, then $Q^{-1}AQ = D$ is diagonal, and the $j$th diagonal element of $D$ is the $j$th eigenvalue of $A$.
Proof: (1) First we prove that if $A$ is diagonalizable, then it has $n$ LI eigenvectors. If $A$ is diagonalizable, then there is an invertible matrix $Q$ such that

$$Q^{-1}AQ = D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{bmatrix} \tag{7}$$
Pre-multiplying both sides of Eq. (7) by $Q$ gives $AQ = QD$, that is,

$$AQ = QD = \begin{bmatrix} q_{11} & \cdots & q_{1n} \\ \vdots & & \vdots \\ q_{n1} & \cdots & q_{nn} \end{bmatrix}\begin{bmatrix} d_1 & & 0 \\ & \ddots & \\ 0 & & d_n \end{bmatrix} = [d_1 \mathbf{q}_1, \ldots, d_n \mathbf{q}_n] \tag{8}$$

where the vector $\mathbf{q}_j$ denotes the $j$th column of $Q$. Alternatively,

$$AQ = A[\mathbf{q}_1, \mathbf{q}_2, \ldots, \mathbf{q}_n] = [A\mathbf{q}_1, A\mathbf{q}_2, \ldots, A\mathbf{q}_n] \tag{9}$$

Comparing (8) and (9), we have

$$A\mathbf{q}_1 = d_1\mathbf{q}_1, \ \ldots, \ A\mathbf{q}_n = d_n\mathbf{q}_n \tag{10}$$

so each column of $Q$ is an eigenvector of $A$; since $Q$ is invertible, these $n$ columns are LI.
(2) Next we prove that if $A$ has $n$ LI eigenvectors, then it is diagonalizable. Let the columns of $Q$ be the eigenvectors of $A$, so that $Q = [e_1, \ldots, e_n]$. Then

$$AQ = [Ae_1, \ldots, Ae_n] = [\lambda_1 e_1, \ldots, \lambda_n e_n] = \begin{bmatrix} e_{11} & \cdots & e_{1n} \\ \vdots & & \vdots \\ e_{n1} & \cdots & e_{nn} \end{bmatrix}\begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix} = QD \tag{11}$$

$Q$ is invertible since its columns are LI; thus $Q^{-1}AQ = Q^{-1}QD = D$.
Theorem 11.4.2 Distinct eigenvalues, LI eigenvectors
If an $n \times n$ matrix $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$, then the corresponding eigenvectors $e_1, \ldots, e_n$ are LI.
Theorem 11.4.3 Diagonalizability
If an $n \times n$ matrix has $n$ distinct eigenvalues, then it is diagonalizable.
Theorem 11.4.4 Symmetric matrices
Every symmetric matrix is diagonalizable.
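Tying the theorems back to the motivation at the start of this section (an illustrative numpy sketch, not from the notes): the matrix of Example 1 in Section 11.2 has the repeated eigenvalue 1 yet still three LI eigenvectors, so by Theorem 11.4.1 it is diagonalizable, and the diagonalization makes powers of $A$ cheap.

```python
import numpy as np

# Matrix of Example 1, Section 11.2; eigenvalues 5, 1, 1.
A = np.array([[2.0, 2.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 2.0, 2.0]])

# Columns of Q are the eigenvectors found in Example 1.
Q = np.array([[1.0, -1.0, -2.0],
              [1.0,  0.0,  1.0],
              [1.0,  1.0,  0.0]])

D = np.linalg.inv(Q) @ A @ Q
print(np.round(D, 10))          # diag(5, 1, 1), as Theorem 11.4.1 predicts

# Powers of A become cheap: A^m = Q D^m Q^{-1}, and D^m is elementwise.
m = 20
Am = Q @ np.diag(np.diag(D) ** m) @ np.linalg.inv(Q)
print(np.allclose(Am, np.linalg.matrix_power(A, m)))  # True
```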
Problems for Chapter 11

Exercise 11.2: 1(a), (b); 3(a), (e), (i), (l); 5(a), (c), (d); 6(b); 11; 16(c); 18(b)
Exercise 11.3: 1(a), (d), (g), (i); 10
Exercise 11.4: 1(a), (c), (g), (j); 3(c); 4