Ch 6 PPT (V1)


Ch 6 Vector Spaces
Vector Space Axioms
• X, Y, Z elements of 𝒱 and α, β elements of ℱ
• Definition of vector addition
• Definition of multiplication of a scalar and a vector
• These definitions satisfy the 10 axioms (pg. 155)
• Addition
– Uniqueness, closure
– Commutativity
– Associativity
– Identity
– Inverse
• Scalar Multiplication
– Uniqueness, closure
– Associativity
– Right Distributivity
– Left Distributivity
– Unit scalar mult.
Subspaces
• A set  is a subspace of a vector space 
if
– Every element of  is in , and
–  is a vector space.
• A line through the origin is a subspace of
2-D Euclidean space
• A plane that contains the origin is a
subspace of 3-D Euclidean space
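The closure requirements for a subspace can be checked numerically; a minimal sketch using NumPy (not part of the slides), with the direction vector `d` chosen purely for illustration:

```python
import numpy as np

# Line through the origin in R^2: all scalar multiples of a direction vector d.
d = np.array([2.0, 1.0])

def on_line(v, d=d, tol=1e-12):
    """v lies on the line spanned by d iff the 2-D cross product is ~0."""
    return abs(v[0] * d[1] - v[1] * d[0]) < tol

u = 3.0 * d       # a point on the line
w = -1.5 * d      # another point on the line

# Closure under addition and scalar multiplication (subspace requirements):
print(on_line(u + w))        # True
print(on_line(4.2 * u))      # True
# The zero vector is on the line as well:
print(on_line(np.zeros(2)))  # True
```

A line that misses the origin fails the last check, which is why only lines through the origin are subspaces.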
Linear Independence of Vectors
• The vectors A1, A2, … An over a field ℱ are
linearly independent if
– for any set {k1, k2, … kn} of elements of ℱ for
which
A1k1 + A2k2 + ⋯ + Ankn = 0,
every ki must be zero.
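Linear independence of a set of column vectors can be tested with a rank computation; a sketch using NumPy (the matrices here are illustrative, not from the slides):

```python
import numpy as np

# Stack the vectors A1, A2, ... as columns of A. They are linearly
# independent exactly when the only solution of A @ k = 0 is k = 0,
# i.e. when rank(A) equals the number of columns.
A_indep = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])   # two independent columns in R^3
A_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0]])     # second column = 2 * first column

print(np.linalg.matrix_rank(A_indep) == A_indep.shape[1])  # True
print(np.linalg.matrix_rank(A_dep) == A_dep.shape[1])      # False
```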
Linear Combinations
• Consider a set of vectors, and a set of
scalars
• A linear combination of the vectors is
A1k1 + A2k2 + ⋯ + Ankn
or
k1A1 + k2A2 + ⋯ + knAn
Linear Dependence and Rank
• Multiplication of a matrix by a column vector
produces a linear combination of the columns
• Multiplication of a matrix by a matrix can be
viewed as multiplication of left matrix by each
column of right matrix.
• Full column rank implies columns are linearly
independent
• Rank-deficient matrices yield infinitely many
solutions of AX = 0 (a nontrivial null space)
Range or Image
• View a matrix as a collection of column vectors.
• Consider the set of vectors formed by an
arbitrary linear combination of the column
vectors (result of multiplying a matrix by an
arbitrary column vector)
• The range space or the image of a matrix is
the set of vectors generated by multiplying the
matrix by an arbitrary column vector:
• ℛ(A) = {AX : X an arbitrary column vector}
Basis
• If the columns of A are linearly independent,
they are called a basis for ℛ(A).
• If A has full column rank, n, then every vector X in ℛ(A)
is a unique linear combination of the basis
vectors in A, i.e. X = AK has a unique solution
for K.
• The entries in K are the coordinates of X wrt
the basis A.
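Recovering the coordinates K of a vector X with respect to the basis in A amounts to solving X = AK; a sketch using NumPy's least-squares solver (which returns the exact, unique K when A has full column rank and X lies in ℛ(A)). The matrix and coordinates below are illustrative:

```python
import numpy as np

# Columns of A are independent, so they form a basis for R(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
K_true = np.array([2.0, -1.0])
X = A @ K_true          # X lies in R(A) by construction

# Solve X = A K for the (unique) coordinates of X wrt the basis A.
K, *_ = np.linalg.lstsq(A, X, rcond=None)
print(np.allclose(K, K_true))   # True
```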
Dimension
• The dimension of a vector space is the
number of (linearly independent) vectors
in a basis for the space.
Standard Basis
• The column vectors of the identity matrix form the
standard basis: [e1 e2 … en] = In
• Remember i, j, k from physics
• AX = B has a solution only if each and every
column of B is in ℛ(A), i.e.
– ℛ(B) is a subset of ℛ(A)
• This can be tested by
– constructing the matrix [A B];
– computing an upper-row compression (Sec. 5.7)
and noting that ℛ(B) is a subset of ℛ(A) if and only if
P2B = 0
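An equivalent numerical test, in place of the row-compression construction on the slide, is to compare ranks: ℛ(B) ⊆ ℛ(A) iff rank([A B]) = rank(A). A sketch with illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B_in = A @ np.array([[1.0], [2.0]])       # built from A's columns, so in R(A)
B_out = np.array([[1.0], [0.0], [0.0]])   # not in R(A)

def solvable(A, B):
    """AX = B has a solution iff appending B does not raise the rank."""
    return np.linalg.matrix_rank(np.hstack([A, B])) == np.linalg.matrix_rank(A)

print(solvable(A, B_in))    # True
print(solvable(A, B_out))   # False
```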
Null Space or Kernel
• The null space, or kernel, of a matrix is the set
𝒩(A) = {X : AX = 0}
• Set of solutions of the homogeneous equation
AX = 0
– Contains nonzero vectors only if rank(A) < cols(A)
• If X is a solution of AX = B, then so is X′ = X + H for
any H in 𝒩(A)
– Hence the solution is unique only if A has full column
rank
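The non-uniqueness claim can be demonstrated directly: take a rank-deficient A, a null-space vector H, and check that X + H solves the same system as X. The numbers below are illustrative:

```python
import numpy as np

# Rank-deficient A: second column is twice the first, so N(A) is nontrivial.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
H = np.array([2.0, -1.0])        # A @ H = 0, so H is in N(A)
print(np.allclose(A @ H, 0))     # True

# If X solves AX = B, then so does X + H for any H in the null space.
X = np.array([1.0, 1.0])
B = A @ X
print(np.allclose(A @ (X + H), B))   # True
```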
Basis for (A) and (A)
Orthogonal Basis
Change of Basis
Similarity Transformation