Transcript A+B

Chapter 10 Matrices and Linear Equations
10.1 Introduction
Just as functions act upon numbers, we shall see that
matrices act upon vectors and are mappings from one
vector space to another.
10.2 Matrices and Matrix Algebra
A matrix is a rectangular array of quantities that are called
the elements of the matrix. Let us consider the elements to
be real numbers. Matrix A may be expressed as
    [ a11 a12 ... a1n ]
A = [ a21 a22 ... a2n ]                                  (1)
    [  ⋮             ⋮ ]
    [ am1 am2 ... amn ]
A horizontal line of elements is called a row, and a vertical
line is called a column. The first subscript on aij is the row index,
and the second subscript is the column index. The matrix can be
expressed in different forms.
    [ a  b ]        [ a11 a12 ]
A = [ c  d ]   or   [ a21 a22 ]                          (2)

A = {aij}                                                (3)
aij in Eq. (3) is called the ij element, where i = 1, …, m and j = 1, …, n.
Two matrices are said to be equal if they are of the
same form and if their corresponding elements are equal.
 
 
Matrix addition. If A = {aij} and B = {bij} are any two
matrices of the same form, say m × n, then their sum A + B is

A + B = {aij + bij}                                      (4)

and is itself an m × n matrix.
 
Scalar multiplication. If A = {aij} is any m × n matrix
and c is any scalar, their product is defined as

cA = {c aij}.                                            (7)
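As a quick illustration (not part of the text), Eqs. (4) and (7) can be sketched in plain Python with matrices as lists of lists; the helper names `mat_add` and `scalar_mul` are our own:

```python
def mat_add(A, B):
    """Entrywise sum {aij + bij} of two m x n matrices, Eq. (4)."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scalar_mul(c, A):
    """Scalar multiple {c * aij}, Eq. (7)."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```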
THEOREM 10.2.1 Properties of Matrix Addition and Scalar
Multiplication
If A, B, and C are m × n matrices, O is an m × n zero
matrix, and α, β are any scalars, then

A + B = B + A,                       (commutativity)    (9a)
(A + B) + C = A + (B + C),           (associativity)    (9b)
A + O = A,                                              (9c)
A + (-A) = O,                                           (9d)
α(βA) = (αβ)A,                       (associativity)    (9e)
(α + β)A = αA + βA,                  (distributivity)   (9f)
α(A + B) = αA + αB,                  (distributivity)   (9g)
1A = A,                                                 (9h)
0A = O,                                                 (9i)
αO = O.                                                 (9j)
The definitions of addition and scalar multiplication above
are identical to those introduced in Sec. 9.4 for n-tuple vectors.
We may refer to the matrices

                                [ a11 ]
A = [ a11, …, a1n ]   and   A = [  ⋮  ]                  (10)
                                [ an1 ]

as n-dimensional row and column vectors, respectively.
Cayley product. If A = {aij} is any m × n matrix and B = {bij} is
any n × p matrix, then the product AB is defined as

AB = { Σ aik bkj }   (sum over k = 1, …, n)              (11)

If we denote AB = C = {cij}, then

cij = Σ aik bkj   (k = 1, …, n).                         (12)

If the number of columns of A is equal to the number
of rows of B, then A and B are said to be conformable for
multiplication; if not, the product AB is not defined.
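The sum in Eq. (12) translates directly into code. A minimal sketch in plain Python (the name `mat_mul` is our own; in practice a library such as NumPy would be used):

```python
def mat_mul(A, B):
    """Cayley product: cij = sum over k of aik * bkj, Eq. (12)."""
    n = len(B)                     # rows of B
    assert len(A[0]) == n          # conformability: columns of A = rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

# a 2 x 3 matrix times a 3 x 2 matrix gives a 2 x 2 matrix
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(mat_mul(A, B))  # [[58, 64], [139, 154]]
```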
It is extremely important to see that matrix multiplication is
not, in general, commutative; that is,

AB ≠ BA,                                                 (15)

except in exceptional cases.
The system of m linear algebraic equations
a11 x1 + a12 x2 + ... + a1n xn = c1,
a21 x1 + a22 x2 + ... + a2n xn = c2,
  ⋮
am1 x1 + am2 x2 + ... + amn xn = cm                      (17)
in the n unknowns x1, …, xn is equivalent to the single
compact matrix equation
Ax = c,
(18)
    [ a11 a12 ... a1n ]        [ x1 ]          [ c1 ]
A = [ a21 a22 ... a2n ] ,  x = [ x2 ] ,  and c = [ c2 ]  (19)
    [  ⋮              ]        [ ⋮  ]          [ ⋮  ]
    [ am1 am2 ... amn ]        [ xn ]          [ cm ]
A is called the coefficient matrix.
 
Any n × n matrix A = {aij} is said to be square, and of
order n; the elements a11, a22, …, ann are said to lie on the main
diagonal of A.
If A is square and p is any positive integer, we define

A^p = A A ... A   (p factors)                            (21)

The familiar laws of exponents,

A^p A^q = A^(p+q),   (A^p)^q = A^(pq)                    (22)

hold for any positive integers p and q.
If, in particular, the only nonzero elements of a square
matrix lie on the main diagonal, A is said to be a diagonal
matrix. For example,
    [ d11  0  ...  0  ]
D = [  0  d22 ...  0  ] ,                                (23)
    [  ⋮              ]
    [  0   0  ... dnn ]
If, furthermore, d11 = d22 = ... = dnn = 1, then D is called the
identity matrix I. Thus,

    [ 1 0 ... 0 ]
I = [ 0 1 ... 0 ] = {δij},                               (25)
    [ ⋮         ]
    [ 0 0 ... 1 ]

where δij is the Kronecker delta symbol, defined as

δij = 1 if i = j,   δij = 0 if i ≠ j.                    (26)
The key property of the identity matrix is that, if A is any
square matrix,

IA = AI = A.                                             (27)
If A is any n × n matrix, we define

A^0 = I,                                                 (28)

where I is an n × n identity matrix.
THEOREM 10.2.2 "Exceptional" Properties of Matrix
Multiplication
(i) AB ≠ BA in general.
(ii) Even if A ≠ 0, AB = AC does not imply that B = C.
(iii) AB = 0 does not imply that A = 0 and/or B = 0.
(iv) A^2 = I does not imply that A = I or A = -I.
THEOREM 10.2.3 “Ordinary” Properties of Matrix
Multiplication
If α, β are scalars, and the matrices A, B, C are suitably
conformable, then
(αA)B = A(αB) = α(AB),               (associativity)    (30a)
A(BC) = (AB)C,                       (associativity)    (30b)
(A + B)C = AC + BC,                  (distributivity)   (30c)
C(A + B) = CA + CB,                  (distributivity)   (30d)
A(αB + βC) = αAB + βAC.              (linearity)        (30e)
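The "exceptional" properties of Theorem 10.2.2 are easy to exhibit numerically. A small sketch (the example matrices are our own choice), reusing a matrix-product helper:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Nonzero A and B whose product is the zero matrix (property (iii)),
# and which also fail to commute (property (i)).
A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
print(mat_mul(A, B))  # [[0, 0], [0, 0]]  -> AB = 0 although A, B != 0
print(mat_mul(B, A))  # [[0, 1], [0, 0]]  -> BA != AB
```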
Partitioning. The idea of partitioning is that any matrix A
(larger than 1 × 1) may be partitioned into a number
of smaller matrices called blocks by vertical lines that extend
from bottom to top, and horizontal lines that extend from left
to right.
2 0  3  2 0  3 
5 2 7  5 2 7   11 12 

    21  22  ,


1 3
0  1 3
0 

 
  31 32 
(31)
0 4 6  0 4 6 
EXAMPLE 8. If

    [  2 4 | 1 ]
A = [ -1 3 | 0 ]  =  [ A11 A12 ]                         (37)
    [----------]     [ A21 A22 ]
    [  5 4 | 6 ]

and

    [ 0  1 |  3 ]
B = [ 2 -4 | -1 ]  =  [ B11 B12 ]                        (38)
    [-----------]     [ B21 B22 ]
    [ 5  8 |  2 ]

then, provided the matrices are partitioned so that each block
product Aik Bkj is defined, AB can be computed blockwise:

AB = [ A11 B11 + A12 B21    A11 B12 + A12 B22 ]
     [ A21 B11 + A22 B21    A21 B12 + A22 B22 ] .
10.3 The Transpose Matrix
 
Given any m × n matrix A = {aij}, we define the transpose
of A, denoted as AT and read as "A-transpose", as

              [ a11 a21 ... am1 ]
AT = {aji} =  [ a12 a22 ... am2 ] ,                      (1)
              [  ⋮              ]
              [ a1n a2n ... amn ]

that is,

(AT)ij = aji.                                            (2)
Theorem 10.3.1 Properties of the Transpose
(AT)T = A,                                               (3a)
(A + B)T = AT + BT,                                      (3b)
(αA)T = α AT,                                            (3c)
(AB)T = BT AT.                                           (3d)
Proof of (3d):
Let AB ≡ C = {cij}, so that

cij = Σ aik bkj   (k = 1, …, n).                         (4)

Then

(cij)T = cji = Σ ajk bki = Σ bki ajk = Σ (bT)ik (aT)kj,

so

CT = (AB)T = BT AT.                                      (5)

Similarly, (ABC)T = CT BT AT, (ABCD)T = DT CT BT AT,
and so on. Let

    [ x1 ]            [ y1 ]
x = [ ⋮  ]   and  y = [ ⋮  ]                             (7)
    [ xn ]            [ yn ]
Then the standard dot product

x · y = x1 y1 + x2 y2 + ... + xn yn = Σ xj yj   (j = 1, …, n)

becomes, in matrix language,

x · y = xT y                                             (8)

If

AT = A                                                   (9)

we say that A is symmetric, and if

AT = -A,                                                 (10)

we say that it is skew-symmetric (or antisymmetric).
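A quick numerical check of property (3d) and of the symmetry definitions, sketched in plain Python (helper names are our own):

```python
def transpose(A):
    # (AT)ij = aji, Eq. (2) of Sec. 10.3
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# (AB)T = BT AT, property (3d)
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))

def is_symmetric(M):
    return M == transpose(M)

assert is_symmetric([[1, 2], [2, 5]])       # AT = A, Eq. (9)
assert not is_symmetric([[0, 1], [-1, 0]])  # skew-symmetric, Eq. (10)
```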
10.4 Determinants
We denote the determinant of an n × n matrix A = {aij} as

        | a11 a12 ... a1n |
det A = | a21 a22 ... a2n |                              (1)
        |  ⋮              |
        | an1 an2 ... ann |

The determinant of an n × n matrix A = {aij} is defined by
the cofactor expansion

det A = Σ ajk Ajk,                                       (2)

where the summation is carried out on j for any fixed value of
k (1 ≤ k ≤ n) or on k for any fixed value of j (1 ≤ j ≤ n). Ajk is
called the cofactor of the ajk element and is defined as

Ajk = (-1)^(j+k) Mjk,                                    (3)

where Mjk is called the minor of ajk, namely, the determinant
of the (n-1) × (n-1) matrix that survives when the row and the
column containing ajk are struck out.
Example 1 Find the determinant of the matrix

    [ 0 2 -1 ]
A = [ 4 3  5 ]
    [ 2 0 -4 ]

Expanding along the first row:

det A = a11 (-1)^(1+1) M11 + a12 (-1)^(1+2) M12 + a13 (-1)^(1+3) M13
      = a11 M11 - a12 M12 + a13 M13.

Expanding along the second row:

det A = a21 (-1)^(2+1) M21 + a22 (-1)^(2+2) M22 + a23 (-1)^(2+3) M23
      = -a21 M21 + a22 M22 - a23 M23.

Expanding along the third column:

det A = a13 (-1)^(1+3) M13 + a23 (-1)^(2+3) M23 + a33 (-1)^(3+3) M33
      = a13 M13 - a23 M23 + a33 M33

      = (-1) | 4 3 |  -  (5) | 0 2 |  +  (-4) | 0 2 |
             | 2 0 |         | 2 0 |          | 4 3 |

      = (-1)(-6) - (5)(-4) + (-4)(-8) = 58.
 
A square matrix A = {aij} is upper triangular if aij = 0 for all
j < i and lower triangular if aij = 0 for all j > i. If a matrix is
upper triangular or lower triangular, it is said to be triangular.
[ a11 a12 ... a1n ]          [ a11  0  ...  0  ]
[  0  a22 ... a2n ]   and    [ a21 a22 ...  0  ] ,
[  ⋮              ]          [  ⋮               ]
[  0   0  ... ann ]          [ an1 an2 ... ann ]
(upper triangular)           (lower triangular)
Properties of Determinants
D1. If any row (or column) of A is modified by adding α
times the corresponding elements of another row (or
column) to it, yielding a new matrix B, det B = det A.
Symbolically: rj → rj+αrk
D2. If any two rows (or columns) of A are interchanged,
yielding a new matrix B, then det B = -det A.
Symbolically: rj ↔ rk
D3. If A is triangular, then det A is simply the product of
the diagonal elements, det A = a11a22…ann
D4. If all the elements of any row or column are zero, then
det A = 0.
D5. If any two rows or columns are proportional to each
other, then det A = 0.
D6. If any row (column) is a linear combination of other
rows (columns), then det A = 0.
D7. If all the elements of any row or column are scaled by
α, yielding a new matrix B, then det B = αdet A.
D8. det(αA) = α^n det A.
D9. If any one row (or column) a of A is separated as
a = b + c, then det A|a = det A|b + det A|c, where A|a
denotes A with a as the row (column) in question.
D10. The determinant of A and its transpose are equal,

det(AT) = det A                                          (11)

D11. In general,

det(A + B) ≠ det A + det B,
det(αA + βB) ≠ α det A + β det B.

D12. The determinant of a product equals the product of
the determinants,

det(AB) = (det A)(det B)                                 (12)
10.5 Rank; Application to Linear Dependence and to
Existence and Uniqueness for Ax = c
DEFINITION 10.5.1 Rank
A matrix A, not necessarily square, is of rank r, written r(A) = r, if it
contains at least one r × r submatrix with nonzero determinant
but no square submatrix larger than r × r with nonzero
determinant. A matrix is of rank 0 if it is a zero matrix.
EXAMPLE 1. Let

    [ 2 -1 1 3 ]
A = [ 0  3 5 0 ]                                         (1)
    [ 1  4 6 9 ]
 
We may regard the rows of an m × n matrix A = {aij} as
n-dimensional vectors, which we call the row vectors of A
and which we denote as r1, …, rm. Similarly, the columns are
m-dimensional vectors, which we call the column vectors of
A and which we denote as c1, …, cn. Further, we define the
vector spaces span{r1, …, rm} and span {c1, …, cn} as the row
and column spaces of A, respectively.
The elementary row operations:
1. Addition of a multiple of one row to another
Symbolically: rj → rj + αrk
2. Multiplication of a row by a nonzero constant
Symbolically: rj → αrj
3. Interchange of two rows
Symbolically: rj ↔ rk
THEOREM 10.5.1 Elementary Row Operations and Rank
Row equivalent matrices have the same rank. That is,
elementary row operations do not alter the rank of a matrix.
A  --(elementary row operations)-->  B   implies   r(A) = r(B)
THEOREM 10.5.2 Rank and Linear Dependence
For any matrix A, the number of LI (linearly independent) row vectors is equal to
the number of LI column vectors and these, in turn, equal
the rank of A.
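Rank can be computed with the elementary row operations above: reduce the matrix and count the nonzero rows. A sketch using exact `Fraction` arithmetic (the helper is our own):

```python
from fractions import Fraction

def rank(A):
    """Rank via Gaussian elimination (elementary row operations 1-3)."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        # find a pivot at or below row r in column c
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]            # operation 3: interchange
        for i in range(len(M)):
            if i != r and M[i][c] != 0:        # operation 1: add a multiple
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

print(rank([[1, 2, 3], [4, 5, 6], [5, 7, 9]]))  # 2, since row3 = row1 + row2
```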
EXAMPLE 5. Application to Stoichiometry.

CO + (1/2)O2 → CO2,                                      (4a)
H2 + (1/2)O2 → H2O,                                      (4b)
CH4 + (3/2)O2 → CO + 2H2O,                               (4c)
CH4 + 2O2 → CO2 + 2H2O,                                  (4d)

or, with all species moved to one side,

CO + (1/2)O2 - CO2 = 0,
H2 + (1/2)O2 - H2O = 0,
CH4 + (3/2)O2 - CO - 2H2O = 0,                           (5)
CH4 + 2O2 - CO2 - 2H2O = 0.
23
The coefficients give the matrix

      CO   O2   CO2   H2   H2O   CH4
    [  1   1/2   -1    0    0     0  ]
A = [  0   1/2    0    1   -1     0  ]                   (6)
    [ -1   3/2    0    0   -2     1  ]
    [  0    2    -1    0   -2     1  ]

Row reduction gives

    [  1   1/2   -1    0    0     0  ]
    [  0    1     0    2   -2     0  ]                   (7)
    [  0    0     1    4   -2    -1  ]
    [  0    0     0    0    0     0  ]

so r(A) = 3, and the four reactions in (5) reduce to the three
independent reactions

CO + (1/2)O2 - CO2 = 0,
O2 + 2H2 - 2H2O = 0,                                     (8)
CO2 + 4H2 - 2H2O - CH4 = 0.
Example 6 Solve the system

x1 - x2 + x3 + 3x4 + 2x6 = 4,
x1 + 3x3 + 3x4 - x5 + 6x6 = 3,
2x1 - x2 + 2x3 + x4 - x5 + 7x6 = 9,
x1 + 5x3 + 8x4 - x5 + 7x6 = 1.

The augmented matrix A|c reduces by elementary row operations as

       [ 1 -1 1 3  0 2 | 4 ]    [ 1 -1 1  3  0 2 |  4 ]
A|c  = [ 1  0 3 3 -1 6 | 3 ] →  [ 0  1 2  0 -1 4 | -1 ]
       [ 2 -1 2 1 -1 7 | 9 ]    [ 0  1 0 -5 -1 3 |  1 ]
       [ 1  0 5 8 -1 7 | 1 ]    [ 0  1 4  5 -1 5 | -3 ]

     [ 1 0 0 -9/2 -1 9/2 |  6 ]
  →  [ 0 1 0 -5   -1  3   |  1 ]
     [ 0 0 1 5/2   0 1/2  | -1 ]
     [ 0 0 0  0    0  0   |  0 ]

Thus r(A|c) = r(A) = 3 and n = 6, so there is a three-parameter
family of solutions. Setting x6 = α1, x5 = α2, and x4 = α3 gives

x1 = 6 - (9/2)α1 + α2 + (9/2)α3,
x2 = 1 - 3α1 + α2 + 5α3,
x3 = -1 - (1/2)α1 - (5/2)α3,

or, in vector form,

    [  6 ]      [ -9/2 ]      [ 1 ]      [  9/2 ]
    [  1 ]      [ -3   ]      [ 1 ]      [  5   ]
x = [ -1 ] + α1 [ -1/2 ] + α2 [ 0 ] + α3 [ -5/2 ]
    [  0 ]      [  0   ]      [ 0 ]      [  1   ]
    [  0 ]      [  0   ]      [ 1 ]      [  0   ]
    [  0 ]      [  1   ]      [ 0 ]      [  0   ]

  = x0 + α1x1 + α2x2 + α3x3.

x0 is a particular solution of Ax = c, and
x1, x2, x3 are homogeneous solutions.
That is,

A(x0 + α1x1 + α2x2 + α3x3) = c.
Suppose that a system Ax = c, where A is m x n, has a
p-parameter family of solutions

x = x0 + α1x1 + ... + αpxp

Then x0 is necessarily a particular solution, and x1, …, xp
are necessarily LI homogeneous solutions. We call
span{x1,…,xp} the solution space of the homogeneous
equation Ax = 0, or the null space of A. The dimension of
that null space is called the nullity of A.
Theorem 10.5.3 Existence and Uniqueness for Ax = c
Consider the linear system Ax = c, where A is m x n. There is
1. No solution if and only if r(A|c) ≠ r(A).
2. A unique solution if and only if r(A|c) = r(A) = n.
3. An (n-r)-parameter family of solutions if and only if
r(A|c) = r(A) ≡ r is less than n.
Theorem 10.5.4 Homogeneous Case Where A is m x n
If A is m x n, then Ax = 0
1. Is consistent.
2. Admits the trivial solution x = 0.
3. Admits the unique solution x = 0 if and only if r(A) = n.
4. Admits an (n-r)-parameter family of nontrivial solutions,
in addition to the trivial solution, if and only if r(A) ≡ r < n.
Theorem 10.5.5 Homogeneous Case Where A is n x n
If A is n x n, then Ax = 0 admits nontrivial solutions,
besides the trivial solution x = 0, if and only if det A = 0.
Example 7 Dimensional Analysis
Consider a rectangular flat plate in steady motion through
undisturbed air as shown in Fig. 1. The object is to conduct an
experimental determination of the lift force l generated on the
airfoil, that is, to experimentally determine the functional
dependence of l on the various relevant quantities. A
list of the relevant variables is given in Table 1.
The Buckingham Pi theorem states that, given a relation
among n parameters of the form

g(q1, q2, q3, …, qn) = 0,

the n parameters may be grouped into n - m independent
dimensionless ratios, or Π parameters, expressed in a
functional form by

G(Π1, Π2, …, Π(n-m)) = 0

or

Π1 = G1(Π2, Π3, …, Π(n-m)).

The number m is usually, but not always, equal to the
minimum number of independent dimensions required to
specify the dimensions of all the parameters
q1, q2, … and qn.
Next, we seek all possible dimensionless products of the
form
A^a B^b α^c V^d V0^e ρ^f μ^g l^h.

That is, we seek the exponents a, …, h such that

(L)^a (L)^b (M^0 L^0 T^0)^c (L T^-1)^d (L T^-1)^e (M L^-3)^f (M L^-1 T^-1)^g (M L T^-2)^h
= M^0 L^0 T^0.

Equating exponents of L, T, M on both sides, we see that
a, …, h must satisfy the homogeneous linear system

L:  a + b + d + e - 3f - g + h = 0,
T:  -d - e - g - 2h = 0,                                 (26)
M:  f + g + h = 0.
Solving Eq. (26) by Gauss elimination gives the five-parameter
family of solutions
[ a ]      [ -2 ]      [  1 ]      [  0 ]      [ 0 ]      [ -1 ]
[ b ]      [  0 ]      [  0 ]      [  0 ]      [ 0 ]      [  1 ]
[ c ]      [  0 ]      [  0 ]      [  0 ]      [ 1 ]      [  0 ]
[ d ] = α1 [ -2 ] + α2 [  1 ] + α3 [ -1 ] + α4 [ 0 ] + α5 [  0 ]   (27)
[ e ]      [  0 ]      [  0 ]      [  1 ]      [ 0 ]      [  0 ]
[ f ]      [ -1 ]      [  1 ]      [  0 ]      [ 0 ]      [  0 ]
[ g ]      [  0 ]      [ -1 ]      [  0 ]      [ 0 ]      [  0 ]
[ h ]      [  1 ]      [  0 ]      [  0 ]      [ 0 ]      [  0 ]

where α1, …, α5 are arbitrary constants. With α1 = 1
and α2 = ... = α5 = 0, Eq. (27) gives a = d = -2, b = c = e = g = 0,
f = -1, h = 1, and hence the nondimensional parameter becomes

A^-2 B^0 α^0 V^-2 V0^0 ρ^-1 μ^0 l^1,

that is, l/(ρ V^2 A^2).
Setting α2 = 1 and the other αj's = 0 gives the Reynolds number,

Re = ρAV/μ.

Setting α3 = -1 and the other αj's = 0 gives the Mach number,

M = V/V0.

Setting α4 = 1 and the other αj's = 0 gives the incident angle, α.

Setting α5 = 1 and the other αj's = 0 gives the aspect ratio, AR = B/A.

Therefore, we can conclude that

l/(ρ V^2 A^2) = f(Re, M, α, AR).                         (28)
10.6 Inverse matrix, Cramer’s rule, factorization
10.6.1 Inverse matrix
For a system of linear algebraic equations expressed as
Ax = c
(1)
Let us try to find a matrix A^-1 having the property that A^-1 A = I.
Then

A^-1 A x = A^-1 c

becomes

I x = A^-1 c,

and since I x = x, we have the solution

x = A^-1 c.

We call A^-1 the inverse of A, or "A-inverse."
                 [ A11 A21 ... An1 ]
A^-1 = (1/det A) [ A12 A22 ... An2 ]                     (16)
                 [  ⋮              ]
                 [ A1n A2n ... Ann ]
The matrix in (16) is called the adjoint of A and is denoted
as adjA, so
A^-1 = (1/det A) adj A                                   (17)
If det A ≠ 0, then A^-1 exists; in this case we say that A is
invertible. If det A = 0, then A^-1 does not exist, and we say
that A is singular.
Example 2 Determine the inverse of

    [ 3 2 -1 ]
A = [ 0 1  4 ]
    [ 1 5 -2 ]

(Practice on your own; the computation follows.)
3 2 -1
A  0 1 4 , det A  57  0
1 5 -2 
T
 1 4 0 4 0 1 


5
-2
1
-2
1
5
T


-22
4
-1


-22 -1 9 
 2 -1 3 -1 3 2 
   -1 -5 -13   4 -5 -12 
adjA  



 5 -2 1 -2 1 5 
 9 -12 3 
 -1 -13 3 


3 2 
 2 -1 - 3 -1
 1 4
0 4
0 1 
-22 -1 9 
1
1 
-1
A =
adjA    4 -5 -12 
detA
57
 -1 -13 3 
Theorem 10.6.1 Inverse matrix
Let A be n x n. If det A ≠ 0, then there exists a unique
matrix A^-1 such that

A^-1 A = A A^-1 = I                                      (27)

A is then said to be invertible, and its inverse is given by
Eq. (17). If det A = 0, then a matrix A^-1 does not exist, and
A is said to be singular.
Theorem 10.6.2 Solution of Ax = c
Let A be n x n with det A ≠ 0. Then Ax = c admits the
unique solution x = A^-1 c.
Properties of inverses
I1. If A and B are of the same order, and invertible,
then AB is too, and (AB)-1 = B-1A-1
(28)
I2. If A is invertible, then

(A^T)^-1 = (A^-1)^T                                      (29)

and

det(A^-1) = 1/det A.                                     (30)
I3. If A is invertible, then (A^-1)^-1 = A and (A^m)^n = A^(mn) for
any integers m and n (positive, negative, and zero).
I4. If A is invertible, then AB = AC implies that B = C,
BA = CA implies B = C, AB = 0 implies that B = 0,
and BA = 0 implies that B = 0.
10.6.3 Cramer’s rule
We have seen that if A is n x n and detA≠ 0, then
Ax = c has the unique solution
x = A^-1 c                                               (38)
Eq. (38) can be expressed as

[ x1 ]   [ α11 α12 ... α1n ] [ c1 ]   [ Σj α1j cj ]
[ ⋮  ] = [  ⋮               ] [ ⋮  ] = [    ⋮     ]      (39)
[ xn ]   [ αn1 αn2 ... αnn ] [ cn ]   [ Σj αnj cj ]

where we write A^-1 = {αij}. Equating the ith component on
the left with the ith component on the right, we have the
scalar statement

xi = Σj αij cj                                           (40)

for any desired i (1 ≤ i ≤ n). Or, recalling Eq. (17),

xi = Σj (Aji / det A) cj = (Σj Aji cj) / det A.          (41)
Theorem 10.6.3 Cramer’s Rule
If Ax = c where A is invertible, then each component xi of x
may be computed as the ratio of two determinants; the
denominator is detA, and the numerator is also the determinant
of the A matrix but with the ith column replaced by c.
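Cramer's rule as stated translates directly into code. A sketch with exact fractions (the helpers are our own); it reproduces Example 3 below:

```python
from fractions import Fraction

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k]
               * det([row[:k] + row[k + 1:] for row in A[1:]])
               for k in range(n))

def cramer(A, c):
    """Solve Ax = c: x_i = det(A with column i replaced by c) / det A."""
    d = det(A)
    return [Fraction(det([row[:i] + [c[j]] + row[i + 1:]
                          for j, row in enumerate(A)]), d)
            for i in range(len(A))]

x = cramer([[1, 3, 0], [-2, 3, 1], [0, 1, 1]], [5, 1, -2])
print(x)  # [Fraction(1, 8), Fraction(13, 8), Fraction(-29, 8)]
```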
Example 3 Solve the system

[  1 3 0 ] [ x1 ]   [  5 ]
[ -2 3 1 ] [ x2 ] = [  1 ]                               (42)
[  0 1 1 ] [ x3 ]   [ -2 ]

By Cramer's rule, with det[1 3 0; -2 3 1; 0 1 1] = 8,

x1 = det[ 5 3 0;  1 3 1; -2 1 1] / 8 = 1/8,
x2 = det[ 1 5 0; -2 1 1;  0 -2 1] / 8 = 13/8,            (43)
x3 = det[ 1 3 5; -2 3 1;  0 1 -2] / 8 = -29/8.
10.6.4 Evaluation of A^-1 by elementary row operations
If we solve a system Ax = c of n equations in n unknowns,
or equivalently Ax = Ic, by Gauss-Jordan reduction, the result
is of the form x = A^-1 c, or equivalently I x = A^-1 c.

    [  1 3 0 ]
A = [ -2 3 1 ]
    [  0 1 1 ]

      [  1 3 0 | 1 0 0 ]    [ 1 3 0 | 1 0 0 ]    [ 1 0 0 |  1/4 -3/8  3/8 ]
A|I = [ -2 3 1 | 0 1 0 ] →  [ 0 9 1 | 2 1 0 ] →  [ 0 1 0 |  1/4  1/8 -1/8 ]
      [  0 1 1 | 0 0 1 ]    [ 0 1 1 | 0 0 1 ]    [ 0 0 1 | -1/4 -1/8  9/8 ]

       [  1/4 -3/8  3/8 ]
A^-1 = [  1/4  1/8 -1/8 ]
       [ -1/4 -1/8  9/8 ]
10.6.5 LU-factorization
LU-factorization is an alternative method of solution that
is based upon the factorization of an n x n matrix A as a
lower triangular matrix L times an upper triangular matrix U:

         [ l11  0   0  ] [ u11 u12 u13 ]
A = LU = [ l21 l22  0  ] [  0  u22 u23 ]
         [ l31 l32 l33 ] [  0   0  u33 ]
If we carry out the multiplication on the right and equate
the nine elements of LU to the corresponding elements of A
we obtain nine equations in the 12 unknown lij’s and uij’s.
Since we have more unknowns than equations, there is some
flexibility in implementing the idea. According to Doolittle’s
Method we can set each lii = 1 in L and solve uniquely for
remaining lij’s and uij’s.
With L and U determined, we then solve Ax = LUx = c by
setting Ux = y so that L(Ux) = c breaks into the two problems
Ly = c
Ux = y
(49a)
(49b)
each of which is simple because L and U are triangular.
We solve Eq. (49a) for y, put that y into Eq. (49b), and then
solve Eq. (49b) for x.
 2 -3 3  x1  -2
Example 5 Solve 
by the Doolittle





6 -8 7   x2  = -3
-2 6 -1  x3   3 
LU-factorization method.
 2 -3 3 1 0 0  u11 u12 u13 



6 -8 7  = 

  21 1 0   0 u22 u23 
-2 6 -1  31 32 1   0 0 u 33 
[  2 -3  3 ]   [ u11        u12                 u13                      ]
[  6 -8  7 ] = [ l21 u11    l21 u12 + u22       l21 u13 + u23            ]
[ -2  6 -1 ]   [ l31 u11    l31 u12 + l32 u22   l31 u13 + l32 u23 + u33 ]

In turn:

u11 = 2,   u12 = -3,   u13 = 3,
l21 = 6/u11 = 3,
u22 = -8 - l21 u12 = -8 - (3)(-3) = 1,
u23 = 7 - l21 u13 = 7 - (3)(3) = -2,
l31 = -2/u11 = -1,
l32 = (6 - l31 u12)/u22 = (6 - (-1)(-3))/1 = 3,
u33 = -1 - l31 u13 - l32 u23 = -1 - (-1)(3) - (3)(-2) = 8.
Then Eq. (49a) becomes

[  1 0 0 ] [ y1 ]   [ -2 ]
[  3 1 0 ] [ y2 ] = [ -3 ]
[ -1 3 1 ] [ y3 ]   [  3 ]

which gives y = [-2, 3, -8]T. Finally, Eq. (49b) becomes

[ 2 -3  3 ] [ x1 ]   [ -2 ]
[ 0  1 -2 ] [ x2 ] = [  3 ]
[ 0  0  8 ] [ x3 ]   [ -8 ]

which gives the final solution x = [2, 1, -1]T.
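Doolittle's scheme, followed by the forward and back substitutions of Eqs. (49a) and (49b), can be sketched as follows (helper names are ours; exact fractions are used for clarity, and no pivoting is done):

```python
from fractions import Fraction

def lu_doolittle(A):
    """Doolittle factorization A = LU with l_ii = 1 (no pivoting)."""
    n = len(A)
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    U = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = Fraction(A[i][j]) - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (Fraction(A[j][i])
                       - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, c):
    n = len(c)
    y = []                             # forward substitution: Ly = c, Eq. (49a)
    for i in range(n):
        y.append(Fraction(c[i]) - sum(L[i][k] * y[k] for k in range(i)))
    x = [Fraction(0)] * n              # back substitution: Ux = y, Eq. (49b)
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_doolittle([[2, -3, 3], [6, -8, 7], [-2, 6, -1]])
print(solve_lu(L, U, [-2, -3, 3]))  # x = [2, 1, -1], as in Example 5
```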
10.7 Change of Basis
Let B= {e1,…,en} be a given basis for the vector space V
under consideration so that any vector x in V can be expanded
as
(1)
x = x1e1 +…+ xnen
If we switch to some other basis B’= {e’1,…,e’n}, then we may
expand the same vector x as
x = x’1e’1 +…+ x’ne’n
(2)
We may expand each of the ej's in terms of B':

e1 = q11 e'1 + ... + qn1 e'n
 ⋮                                                       (3)
en = q1n e'1 + ... + qnn e'n
Putting Eq. (3) into (1) gives

x = x1(q11 e'1 + ... + qn1 e'n) + ... + xn(q1n e'1 + ... + qnn e'n)
  = (x1 q11 + ... + xn q1n) e'1 + ... + (x1 qn1 + ... + xn qnn) e'n   (4)

A comparison of Eqs. (2) and (4) gives the desired relations

x'1 = q11 x1 + ... + q1n xn
 ⋮                                                       (5)
x'n = qn1 x1 + ... + qnn xn
or, in matrix notation,

[x]B' = Q [x]B                                           (6)

where

    [ q11 ... q1n ]
Q = [  ⋮          ]                                      (7)
    [ qn1 ... qnn ]

and

       [ x1 ]            [ x'1 ]
[x]B = [ ⋮  ] ,  [x]B' = [  ⋮  ]                         (8)
       [ xn ]            [ x'n ]
We call [x]B the coordinate vector of the vector x with
respect to the ordered basis B, and similarly for [x]B’, and
we call Q the coordinate transformation matrix from B to B’.
In the remainder of this section we assume that both bases,
B and B’, are ON. Thus, let us rewrite Eq. (3) as
ê1 = q11 ê'1 + ... + qn1 ê'n
 ⋮                                                       (9)
ên = q1n ê'1 + ... + qnn ê'n

If we dot ê'1 into both sides of the first equation in Eq. (9),
we obtain q11 = ê'1 · ê1. Dotting ê'2 gives q21 = ê'2 · ê1. Dotting
ê'n gives qn1 = ê'n · ê1. The result is the formula

qij = ê'i · êj                                           (10)
which tells us how to compute the transformation matrix Q.
Q has two key properties. The first is Q^-1 = Q^T:

        [ q11 q21 ... qn1 ] [ q11 q12 ... q1n ]   [ ê1·ê1 ... ê1·ên ]
Q^T Q = [ q12 q22 ... qn2 ] [ q21 q22 ... q2n ] = [   ⋮             ] = I   (11)
        [  ⋮              ] [  ⋮              ]   [ ên·ê1 ... ên·ên ]
        [ q1n q2n ... qnn ] [ qn1 qn2 ... qnn ]

so that

Q^-1 = Q^T                                               (12)

The second is det Q = ±1:
Q^T Q = I implies that det(Q^T Q) = det I = 1. But det(Q^T Q) =
(det Q^T)(det Q) = (det Q)(det Q) = (det Q)^2. Hence det Q must
be +1 or -1.
Example 1
Consider the vector space R2, with the
ON bases B = {ê1, ê2} and B' = {ê'1, ê'2}, where
B' is obtained from B by a counterclockwise rotation through an angle θ.
From the figure, we have

q11 = ê'1 · ê1 = (1)(1) cos θ = cos θ,
q12 = ê'1 · ê2 = (1)(1) cos(π/2 - θ) = sin θ,
q21 = ê'2 · ê1 = (1)(1) cos(π/2 + θ) = -sin θ,
q22 = ê'2 · ê2 = (1)(1) cos θ = cos θ,

so that the coordinate transformation matrix is

Q = [  cos θ   sin θ ]
    [ -sin θ   cos θ ]

Hence

[ x'1 ]   [  cos θ   sin θ ] [ x1 ]
[ x'2 ] = [ -sin θ   cos θ ] [ x2 ]