Transcript section 1.5-1.7

1.5 Elementary Matrices and a Method for Finding A⁻¹
An elementary row operation on a matrix A is any one of the following three
types of operations:
• Interchange of two rows of A.
• Replacement of a row r of A by c r for some number c ≠ 0.
• Replacement of a row r1 of A by the sum r1 + c r2 of that row and a
multiple of another row r2 of A.
An n×n elementary matrix is a matrix produced by applying exactly
one elementary row operation to In.
Examples:

  [1 4 0]   [1 0 0]   [1 0 0]
  [0 1 0]   [0 2 0]   [0 0 1]
  [0 0 1]   [0 0 1]   [0 1 0]

(from I3: add 4 times row 2 to row 1; multiply row 2 by 2; interchange rows 2 and 3)
When a matrix A is multiplied on the left by an elementary matrix E, the effect is
to perform an elementary row operation on A.
Theorem (Row Operations by Matrix Multiplication)
Suppose that E is an m×m elementary matrix produced by applying a
particular elementary row operation to Im, and that A is an m×n matrix.
Then EA is the matrix that results from applying that same elementary row
operation to A.
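As a sketch of this theorem, the check below compares EA with the row operation applied directly to A. The matrix values are made up for illustration and are not from the text.

```python
# Sketch: EA equals the result of applying E's row operation directly to A.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# E: I3 after the row operation R3 -> R3 + 2*R1
E = [[1, 0, 0],
     [0, 1, 0],
     [2, 0, 1]]

A = [[1, 0, 2, 3],
     [2, -1, 3, 6],
     [1, 4, 4, 0]]

# The same row operation applied directly to A:
direct = [row[:] for row in A]
direct[2] = [direct[2][j] + 2 * direct[0][j] for j in range(4)]

assert matmul(E, A) == direct
```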
Theorem
Every elementary matrix is invertible, and the inverse is also an elementary matrix.
Remark:
The above theorem is primarily of theoretical interest. Computationally, it is
preferable to perform row operations directly rather than multiplying on the
left by an elementary matrix.
Theorem (Equivalent Statements)
If A is an n×n matrix, then the following statements are equivalent, that is, all
true or all false.
• A is invertible.
• Ax = 0 has only the trivial solution.
• The reduced row-echelon form of A is In.
• A is expressible as a product of elementary matrices.
A Method for Inverting Matrices
By the previous theorem, if A is invertible, then the reduced row-echelon form of A
is In. That is, we can find elementary matrices E1, E2, …, Ek such that

  Ek…E2E1A = In.

Multiplying on the right by A⁻¹ yields

  Ek…E2E1In = A⁻¹.

That is,

  A⁻¹ = Ek…E2E1In.

To find the inverse of an invertible matrix A, we must find a sequence of
elementary row operations that reduces A to the identity and then perform
this same sequence of operations on In to obtain A⁻¹.
Using Row Operations to Find A⁻¹

Example: Find the inverse of

  A = [1 2 3]
      [2 5 3]
      [1 0 8]

Solution:
• We adjoin the identity matrix to the right side of A, thereby producing a
matrix of the form [A | I].
• We apply row operations to this matrix until the left side is reduced to I;
these operations convert the right side to A⁻¹, so that the final matrix has
the form [I | A⁻¹].

  [1 2 3 | 1 0 0]    row operations    [1 0 0 | -40  16   9]
  [2 5 3 | 0 1 0]   --------------->   [0 1 0 |  13  -5  -3]
  [1 0 8 | 0 0 1]        (rref)        [0 0 1 |   5  -2  -1]

Thus

  A⁻¹ = [-40  16   9]
        [ 13  -5  -3]
        [  5  -2  -1]
If an n×n matrix A is not invertible, then it cannot be reduced to In by elementary
row operations; i.e., once a row of zeros appears on the left side, the computation
can be stopped.
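The method above can be sketched as a short Gauss-Jordan routine that reduces [A | I] and stops if no pivot can be found. It is a minimal sketch, not the book's algorithm; the first matrix is from the worked example, and the second takes the entries of the non-invertible example that follows, with signs assumed.

```python
# Minimal Gauss-Jordan sketch: adjoin I to A, reduce [A | I] to [I | A^-1],
# and stop if a pivot cannot be found. Exact arithmetic via Fraction.
from fractions import Fraction

def invert(A):
    n = len(A)
    # M = [A | I]
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        # Find a nonzero pivot at or below row c; none means A is singular.
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            raise ValueError("matrix is not invertible")
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]       # scale pivot row to 1
        for r in range(n):
            if r != c:                           # clear the rest of column c
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]                # right half is A^-1

inv = [[int(x) for x in row]
       for row in invert([[1, 2, 3], [2, 5, 3], [1, 0, 8]])]
assert inv == [[-40, 16, 9], [13, -5, -3], [5, -2, -1]]

# A singular matrix stops mid-reduction (signs assumed for illustration).
try:
    invert([[1, 6, 4], [2, 4, -1], [-1, 2, 5]])
    raised = False
except ValueError:
    raised = True
assert raised
```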
Example:

  A = [ 1  6  4]
      [ 2  4 -1]
      [-1  2  5]
1.6 Further Results on Systems of Equations and Invertibility
Theorem 1.6.1
Every system of linear equations has either no solutions, exactly one
solution, or infinitely many solutions.
Theorem 1.6.2
If A is an invertible n×n matrix, then for each n×1 matrix b, the system of
equations Ax = b has exactly one solution, namely x = A⁻¹b.
Remark: This method is less efficient computationally than Gaussian elimination,
but it is important in the analysis of equations involving matrices.
Example: Solve the system by using A⁻¹:

  x1 + 2x2 + 3x3 = 5
  2x1 + 5x2 + 3x3 = 3
  x1        + 8x3 = 17
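Since the coefficient matrix here is the matrix A inverted in section 1.5, the solution x = A⁻¹b can be checked with a short sketch; the entries of A⁻¹ are taken from that worked example.

```python
# x = A^-1 b for the system above; A^-1 is the inverse found in 1.5.
A_inv = [[-40, 16, 9],
         [13, -5, -3],
         [5, -2, -1]]
b = [5, 3, 17]

x = [sum(A_inv[i][j] * b[j] for j in range(3)) for i in range(3)]
assert x == [1, -1, 2]  # x1 = 1, x2 = -1, x3 = 2
```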
Linear Systems with a Common Coefficient Matrix
To solve a sequence of linear systems Ax = b1, Ax = b2, …, Ax = bk with a
common coefficient matrix A:
• If A is invertible, then the solutions are x1 = A⁻¹b1, x2 = A⁻¹b2, …, xk = A⁻¹bk.
• A more efficient method is to form the matrix [A | b1 | b2 | … | bk] and
reduce it to reduced row-echelon form; this solves all k systems at once by
Gauss-Jordan elimination (here A need not be invertible).
Example: Solve the systems

  (a) x1 + 2x2 + 3x3 = 4        (b) x1 + 2x2 + 3x3 = 1
      2x1 + 5x2 + 3x3 = 5           2x1 + 5x2 + 3x3 = 6
      x1        + 8x3 = 9           x1        + 8x3 = -6

Solution:
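The augmented-matrix method above can be sketched by adjoining both right-hand sides at once and reducing; the values below are from the two systems (a) and (b), with the sign of the last entry of (b) assumed as shown.

```python
# Solve both systems at once by reducing [A | b_a | b_b].
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row-echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue                  # no pivot in this column
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# [A | b_a | b_b]
M = rref([[1, 2, 3, 4, 1],
          [2, 5, 3, 5, 6],
          [1, 0, 8, 9, -6]])
sol_a = [int(row[3]) for row in M]    # solution of (a)
sol_b = [int(row[4]) for row in M]    # solution of (b)
assert sol_a == [1, 0, 1]
assert sol_b == [2, 1, -1]
```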
Theorem 1.6.3
Let A be a square matrix.
(a) If B is a square matrix satisfying BA = I, then B = A⁻¹.
(b) If B is a square matrix satisfying AB = I, then B = A⁻¹.
Theorem 1.6.5
Let A and B be square matrices of the same size. If AB is invertible, then
A and B must also be invertible.
Theorem 1.6.4 (Equivalent Statements)
If A is an n×n matrix, then the following statements are equivalent
• A is invertible
• Ax = 0 has only the trivial solution
• The reduced row-echelon form of A is In
• A is expressible as a product of elementary matrices
• Ax = b is consistent for every n×1 matrix b
• Ax = b has exactly one solution for every n×1 matrix b
A Fundamental Problem: Let A be a fixed m×n matrix. Find all m×1 matrices b
such that the system of equations Ax = b is consistent.
If A is an invertible n×n matrix, then for every n×1 matrix b, the linear system
Ax = b has the unique solution x = A⁻¹b.
If A is not square, or if A is square but not invertible, then Theorem 1.6.2 does not
apply. In these cases the matrix b must satisfy certain conditions in order for Ax = b
to be consistent.
Determine Consistency by Elimination
Example: What conditions must b1, b2, and b3 satisfy in order for the system of
equations

  x1 +  x2 + 2x3 = b1
  x1       +  x3 = b2
  2x1 + x2 + 3x3 = b3

to be consistent?
Solution: Row-reducing the augmented matrix produces a final row of the form
[0 0 0 | b3 − b2 − b1], so the system is consistent if and only if b3 = b1 + b2.

Example: What conditions must b1, b2, and b3 satisfy in order for the system of
equations

  x1 + 2x2 + 3x3 = b1
  2x1 + 5x2 + 3x3 = b2
  x1        + 8x3 = b3

to be consistent?
Solution: Here the coefficient matrix is invertible, so the system is consistent
for every choice of b1, b2, and b3.
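The consistency condition in the first example can be recovered mechanically by carrying the right-hand side symbolically: start each row's b-part as a unit coordinate vector and let the row operations act on those vectors. A sketch of that bookkeeping, with the coefficient matrix from the first example:

```python
# Track b symbolically: row i's right-hand side starts as the unit
# vector e_i, and every row operation acts on these coefficient rows.
from fractions import Fraction

A = [[1, 1, 2], [1, 0, 1], [2, 1, 3]]   # coefficients of the first example
n = len(A)
M = [[Fraction(A[i][j]) for j in range(n)] +
     [Fraction(int(i == j)) for j in range(n)] for i in range(n)]

r = 0
for c in range(n):
    p = next((i for i in range(r, n) if M[i][c] != 0), None)
    if p is None:
        continue
    M[r], M[p] = M[p], M[r]
    M[r] = [x / M[r][c] for x in M[r]]
    for i in range(n):
        if i != r:
            f = M[i][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
    r += 1

# A zero row on the left forces its b-combination to vanish.
conditions = [row[n:] for row in M if all(x == 0 for x in row[:n])]
assert conditions == [[-1, -1, 1]]   # -b1 - b2 + b3 = 0, i.e. b3 = b1 + b2
```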
Section 1.7 Diagonal, Triangular, and Symmetric Matrices
A square matrix in which all the entries off the main diagonal are zero is called
a diagonal matrix.
For example, a general n×n diagonal matrix has the form

  D = [d1   0  ...   0]
      [ 0  d2  ...   0]
      [ .   .        .]
      [ 0   0  ...  dn]        (1)
A diagonal matrix is invertible if and only if all its diagonal entries are nonzero;
in this case the inverse of (1) is

  D⁻¹ = [1/d1    0   ...    0 ]
        [  0   1/d2  ...    0 ]
        [  .     .          . ]
        [  0     0   ...  1/dn]
Diagonal Matrices
Powers of diagonal matrices are easy to compute: if D is the diagonal matrix (1)
and k is a positive integer, then

  D^k = [d1^k    0   ...    0 ]
        [  0   d2^k  ...    0 ]
        [  .     .          . ]
        [  0     0   ...  dn^k]
In words, to multiply a matrix A on the left by a diagonal matrix D, one can
multiply successive rows of A by the successive diagonal entries of D, and
to multiply A on the right by D, one can multiply successive columns of A by
the successive diagonal entries of D.
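The row- and column-scaling rules above can be checked with a small sketch; the matrix values below are illustrative and not from the text.

```python
# Left-multiplication by D scales rows; right-multiplication scales columns.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

D = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]

A = [[1, 4], [2, 0], [7, 1]]
assert matmul(D, A) == [[2, 8], [6, 0], [35, 5]]       # row i scaled by d_i

B = [[1, 2, 3], [4, 5, 6]]
assert matmul(B, D) == [[2, 6, 15], [8, 15, 30]]       # column j scaled by d_j

assert matmul(D, D) == [[4, 0, 0], [0, 9, 0], [0, 0, 25]]  # D^2 = diag(d_i^2)
```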
Triangular Matrices
A square matrix in which all the entries above the main diagonal are zero is
called lower triangular, and a square matrix in which all the entries below the
main diagonal are zero is called upper triangular. A matrix that is either
upper triangular or lower triangular is called triangular.
Theorem 1.7.1
a) The transpose of a lower triangular matrix is upper triangular, and the
transpose of an upper triangular matrix is lower triangular.
b) The product of lower triangular matrices is lower triangular, and the product
of upper triangular matrices is upper triangular.
c) A triangular matrix is invertible if and only if its diagonal entries are all
nonzero.
d) The inverse of an invertible lower triangular matrix is lower triangular, and
the inverse of an invertible upper triangular matrix is upper triangular.
Symmetric matrices
A square matrix A is called symmetric if A = Aᵀ.
A matrix A = [aij] is symmetric if and only if aij = aji for all values of i and j.
Theorem 1.7.2
If A and B are symmetric matrices with the same size, and if k is any scalar,
then:
a) Aᵀ is symmetric.
b) A+B and A-B are symmetric.
c) kA is symmetric.
Note: In general, the product of symmetric matrices is not symmetric.
If A and B are matrices such that AB=BA, then we say A and B commute.
The product of two symmetric matrices is symmetric if and only if the matrices
commute.
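Both halves of this statement can be checked on small examples; the matrices below are made-up values for illustration.

```python
# The product of symmetric matrices need not be symmetric; it is
# symmetric exactly when the factors commute.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 2], [2, 0]]      # symmetric
B = [[0, 1], [1, 3]]      # symmetric
AB = matmul(A, B)
assert AB != transpose(AB)              # AB is not symmetric here...
assert AB == transpose(matmul(B, A))    # ...since (AB)^T = B^T A^T = BA

# Two diagonal (hence symmetric) matrices commute, so their product is symmetric.
D1 = [[2, 0], [0, 3]]
D2 = [[5, 0], [0, 7]]
assert matmul(D1, D2) == matmul(D2, D1)
assert matmul(D1, D2) == transpose(matmul(D1, D2))
```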
Theorems
Theorem 1.7.3
If A is an invertible symmetric matrix, then A⁻¹ is symmetric.
Theorem 1.7.4
If A is an invertible matrix, then AAᵀ and AᵀA are also invertible.