Transcript
Computer Graphics
Recitation 5
Motivation – Shape Matching
What is the best transformation that aligns the dinosaur with the dog?
2
Motivation – Shape Matching
Regard the shapes as sets of points and try to “match” these sets
using a linear transformation
The above is not a good match….
3
Motivation – Shape Matching
Regard the shapes as sets of points and try to “match” these sets
using a linear transformation
To find the best transformation we need to know SVD…
4
Finding underlying dimensionality
[Figure: 2D point set plotted against the x and y axes]
Given a set of points, find the best line that
approximates it – reduce the dimension of the data set…
5
Finding underlying dimensionality
[Figure: the same point set with new axes x' and y' aligned with the data]
Given a set of points, find the best line that
approximates it – reduce the dimension of the data set…
6
Finding underlying dimensionality
[Figure: the same point set with the best-fit axes x' and y']
When we learn PCA (Principal Component Analysis),
we’ll know how to find these axes that minimize
the sum of squared distances
7
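As a preview of that idea, here is a minimal NumPy sketch (the 2D point data below is made up for illustration, not from the slides) that finds such a best-fit axis as the top right singular vector of the centered points:

import numpy as np

# Toy 2D point set, roughly spread along one direction (made-up data).
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])

centroid = points.mean(axis=0)        # move the origin to the data centroid
centered = points - centroid

# The right singular vectors of the centered data are the new axes x', y';
# the first one minimizes the sum of squared distances to the points.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
best_axis = vt[0]
print("best-fit axis direction:", best_axis)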
PCA and SVD
PCA and SVD are important tools not only in
graphics but also in statistics, computer vision
and more.
To learn about them, we first need to get familiar
with eigenvalue decomposition.
So, today we'll start with a refresher of linear
algebra basics.
8
The plan today
Basic linear algebra review.
Eigenvalues and eigenvectors.
9
Vector space
Informal definition:
  V is a non-empty set of vectors
  v, w ∈ V  ⇒  v + w ∈ V            (closed under addition)
  v ∈ V, α is a scalar  ⇒  αv ∈ V    (closed under multiplication by a scalar)
Formal definition includes axioms about associativity and
distributivity of the + and · operators.
10
Subspace - example
Let l be a 2D line through the origin.
L = {p – O | p ∈ l} is a linear subspace of R².
[Figure: line l through the origin O in the xy-plane]
11
Subspace - example
Let Π be a plane through the origin in 3D.
V = {p – O | p ∈ Π} is a linear subspace of R³.
[Figure: plane Π through the origin O in xyz-space]
12
Linear independence
The vectors {v1, v2, …, vk} are a linearly
independent set if:
α1v1 + α2v2 + … + αkvk = 0  ⇒  αi = 0 for all i
It means that none of the vectors can be
obtained as a linear combination of the others.
13
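In practice one can test linear independence numerically; here is a minimal NumPy sketch (the vectors are arbitrary examples): the set is independent exactly when the matrix with these vectors as columns has full column rank.

import numpy as np

# Stack candidate vectors as columns (arbitrary example vectors).
V = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [2.0, 2.0, 6.0]])   # third = 2*first + 2*second

# Linearly independent  <=>  rank equals the number of vectors.
rank = np.linalg.matrix_rank(V)
print("rank =", rank, "independent:", rank == V.shape[1])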
Linear independence - example
Parallel vectors are always dependent:
  v = 2.4w  ⇒  v + (−2.4)w = 0
[Figure: parallel vectors v and w]
Orthogonal vectors are always independent.
14
Basis
{v1, v2, …, vn} are linearly independent
{v1, v2, …, vn} span the whole vector space V:
V = {α1v1 + α2v2 + … + αnvn | αi is a scalar}
Any vector in V is a unique linear combination of
the basis.
The number of basis vectors is called the
dimension of V.
15
Basis - example
The standard basis of R³ – three unit orthogonal
vectors x, y, z (sometimes called i, j, k or e1, e2, e3).
[Figure: the standard basis vectors x, y, z]
16
Matrix representation
Let {v1, v2, …, vn} be a basis of V
Every v ∈ V has a unique representation
  v = α1v1 + α2v2 + … + αnvn
Denote v by the column vector:
  v = (α1, …, αn)ᵀ
The basis vectors are therefore denoted:
  v1 = (1, 0, …, 0)ᵀ,  v2 = (0, 1, 0, …, 0)ᵀ,  …,  vn = (0, …, 0, 1)ᵀ
17
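A small NumPy sketch of this coordinate representation (the basis and the vector below are arbitrary examples): the coefficients α are found by solving a linear system whose columns are the basis vectors.

import numpy as np

# Basis vectors of R^3 written as columns (arbitrary example basis).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]]).T    # columns are v1, v2, v3

v = np.array([2.0, 3.0, 4.0])

# Solve B @ alpha = v for the coordinates alpha = (alpha_1, ..., alpha_n).
alpha = np.linalg.solve(B, v)
print("coordinates of v in the basis:", alpha)
print("reconstruction:", B @ alpha)   # equals v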
Linear operators
A: V → W
is called a linear operator if:
  A(v + w) = A(v) + A(w)
  A(αv) = αA(v)
In particular, A(0) = 0
Linear operators we know:
  Scaling
  Rotation, reflection
Translation is not linear – it moves the origin
18
Linear operators - illustration
Rotation is a linear operator:
[Figure: vectors v, w and v + w, and the rotated vector R(v + w)]
19
Linear operators - illustration
Rotation is a linear operator:
[Figure: the same vectors, now also showing R(w)]
20
Linear operators - illustration
Rotation is a linear operator:
[Figure: the same vectors, now also showing R(v) and R(w)]
21
Linear operators - illustration
Rotation is a linear operator:
[Figure: R(v) + R(w) coincides with R(v + w)]
R(v+w) = R(v) + R(w)
22
Matrix representation of linear operators
Look at A(v1),…, A(vn) where {v1, v2, …, vn} is a basis.
For all other vectors: v = α1v1 + α2v2 + … + αnvn
  A(v) = α1A(v1) + α2A(v2) + … + αnA(vn)
So, knowing what A does to the basis is enough.
The matrix representing A is:
  M_A = [ A(v1) | A(v2) | … | A(vn) ]    (the images of the basis vectors as columns)
23
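A minimal NumPy illustration of this construction, using 2D rotation as the linear operator (the angle is chosen arbitrarily): the matrix is assembled column by column from the images of the standard basis vectors.

import numpy as np

theta = np.pi / 6   # arbitrary rotation angle

def rotate(v, theta=theta):
    # Apply the rotation operator to a 2D vector (without using a matrix).
    x, y = v
    return np.array([x * np.cos(theta) - y * np.sin(theta),
                     x * np.sin(theta) + y * np.cos(theta)])

# Columns of M_A are the images of the basis vectors e1, e2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([rotate(e1), rotate(e2)])

v = np.array([2.0, 1.0])
print(np.allclose(M @ v, rotate(v)))   # True: M reproduces the operator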
Matrix representation of linear operators
  [ A(v1) | A(v2) | … | A(vn) ] (1, 0, …, 0)ᵀ = A(v1)

  [ A(v1) | A(v2) | … | A(vn) ] (0, 1, 0, …, 0)ᵀ = A(v2)
24
Matrix operations
Addition, subtraction, scalar multiplication –
simple…
Matrix by column vector:
  [ a11 … a1n ] [ b1 ]     [ Σi a1i·bi ]
  [  ⋮      ⋮ ] [  ⋮ ]  =  [     ⋮     ]
  [ am1 … amn ] [ bn ]     [ Σi ami·bi ]
25
Matrix operations
Transposition: make the rows columns
  [ a11 … a1n ]ᵀ     [ a11 … am1 ]
  [  ⋮      ⋮ ]   =   [  ⋮      ⋮ ]
  [ am1 … amn ]      [ a1n … amn ]
(AB)ᵀ = BᵀAᵀ
26
Matrix operations
The inner product can be written in matrix form:
  ⟨v, w⟩ = vᵀw = wᵀv
27
Matrix properties
Matrix A (n×n) is non-singular if ∃B such that AB = BA = I.
B = A⁻¹ is called the inverse of A.
A is non-singular  ⇔  det A ≠ 0
If A is non-singular then the equation Ax = b has
a unique solution for each b.
A is non-singular  ⇔  the rows of A are a linearly
independent set of vectors.
28
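A small NumPy check of these statements (the matrix and right-hand side are arbitrary examples): a nonzero determinant goes together with an invertible matrix and a unique solution of Ax = b.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # arbitrary non-singular example
b = np.array([1.0, 2.0])

print("det A =", np.linalg.det(A))         # nonzero => non-singular
x = np.linalg.solve(A, b)                  # the unique solution of Ax = b
print("x =", x, "check:", np.allclose(A @ x, b))
print("A @ A^-1 = I:", np.allclose(A @ np.linalg.inv(A), np.eye(2)))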
Vector product
v × w is a new vector:
  ||v × w|| = ||v|| ||w|| |sin ∠(v, w)|
  ⟨v × w, v⟩ = 0,  ⟨v × w, w⟩ = 0
  v, w, v × w is a right-hand system
[Figure: v, w and v × w]
In matrix form in 3D:
  v = (v1, v2, v3)ᵀ,  w = (w1, w2, w3)ᵀ

              [ î   ĵ   k̂  ]
  v × w = det [ v1  v2  v3 ]  =  (v2w3 − v3w2,  w1v3 − w3v1,  v1w2 − v2w1)
              [ w1  w2  w3 ]
29
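A quick NumPy confirmation of these properties (the vectors are arbitrary examples):

import numpy as np

v = np.array([1.0, 2.0, 0.5])   # arbitrary example vectors
w = np.array([0.0, 1.0, 3.0])

c = np.cross(v, w)   # (v2*w3 - v3*w2, v3*w1 - v1*w3, v1*w2 - v2*w1)
print("v x w =", c)
print("orthogonal to v and w:", np.isclose(c @ v, 0), np.isclose(c @ w, 0))

# ||v x w|| = ||v|| ||w|| |sin(angle between v and w)|
sin_angle = np.linalg.norm(c) / (np.linalg.norm(v) * np.linalg.norm(w))
print("|sin(v,w)| =", sin_angle)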
Orthogonal matrices
Matrix A (n×n) is orthogonal if A⁻¹ = Aᵀ.
It follows that AAᵀ = AᵀA = I.
The rows of A are orthonormal vectors!
Proof: let v1, …, vn denote the rows of A; the (i, j) entry of AAᵀ is ⟨vi, vj⟩, so

  I = AAᵀ  ⇒  ⟨vi, vj⟩ = δij

  ⟨vi, vi⟩ = 1  ⇒  ||vi|| = 1;    ⟨vi, vj⟩ = 0 for i ≠ j

(The same argument applied to AᵀA = I shows that the columns are orthonormal as well.)
30
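A minimal NumPy check of the orthonormal-rows property, using a 2D rotation matrix as the example (the angle is arbitrary):

import numpy as np

theta = 0.7   # arbitrary angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print("A A^T = I:", np.allclose(A @ A.T, np.eye(2)))
print("A^T A = I:", np.allclose(A.T @ A, np.eye(2)))
print("row norms:", np.linalg.norm(A, axis=1))          # both 1
print("rows orthogonal:", np.isclose(A[0] @ A[1], 0.0))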
Orthogonal operators
A is an orthogonal matrix  ⇔  A represents a linear
operator that preserves the inner product (i.e.,
preserves lengths and angles):
  ⟨Av, Aw⟩ = (Av)ᵀ(Aw) = vᵀAᵀAw = vᵀIw = vᵀw = ⟨v, w⟩
Therefore, ||Av|| = ||v|| and ∠(Av, Aw) = ∠(v, w).
31
Orthogonal operators - example
Rotation by angle θ around the z-axis in R³:

  R = [ cos θ   −sin θ   0 ]
      [ sin θ    cos θ   0 ]
      [   0        0     1 ]

In fact, any orthogonal 3×3 matrix represents a
rotation around some axis and/or a reflection:
  det A = +1   rotation only
  det A = −1   rotation with a reflection
32
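A short NumPy sketch of this determinant test (the angle is arbitrary): the rotation has det +1, and composing it with a reflection flips the sign.

import numpy as np

theta = 0.4   # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

reflect_z = np.diag([1.0, 1.0, -1.0])   # reflection through the xy-plane

print("det R            =", round(np.linalg.det(R), 6))              # +1: rotation only
print("det (reflect @ R) =", round(np.linalg.det(reflect_z @ R), 6)) # -1: with reflection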
Eigenvectors and eigenvalues
Let A be a square n×n matrix.
v is an eigenvector of A if:
  Av = λv   (λ is a scalar)
  v ≠ 0
The scalar λ is called an eigenvalue.
Av = λv  ⇒  A(αv) = λ(αv)  ⇒  αv is also an eigenvector
Av = λv, Aw = λw  ⇒  A(v + w) = λ(v + w)
Therefore, eigenvectors of the same λ form a linear
subspace.
33
Eigenvectors and eigenvalues
An eigenvector spans an axis that is invariant to A –
it remains the same under the transformation.
Example – the axis of rotation:
[Figure: the axis of rotation through O is an eigenvector of the rotation transformation]
34
Finding eigenvalues
For which λ is there a non-zero solution to Ax = λx?
Ax = λx  ⇔  Ax − λx = 0  ⇔  Ax − λIx = 0  ⇔  (A − λI)x = 0
So, a non-trivial solution exists  ⇔  det(A − λI) = 0
Δ_A(λ) = det(A − λI) is a polynomial of degree n.
It is called the characteristic polynomial of A.
The roots of Δ_A are the eigenvalues of A.
Therefore, there always exist (complex) eigenvalues.
If n is odd, there is at least one real
eigenvalue.
35
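Numerically one rarely forms the characteristic polynomial explicitly; here is a minimal NumPy sketch (the matrix is an arbitrary example) that writes it out for the 2×2 case and compares its roots with the eigenvalues computed directly:

import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # arbitrary example: rotation by 90 degrees in 2D

# For a 2x2 matrix, det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A).
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print("roots of the characteristic polynomial:", np.roots(coeffs))

# Direct computation (what one uses in practice):
print("eigenvalues:", np.linalg.eigvals(A))   # +i and -i: complex, as the slide notes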
Example of computing Δ_A

  A = [ 1   0  −2 ]
      [ 3   0   3 ]
      [ 1  −1   4 ]

  Δ_A(λ) = det [ 1−λ    0    −2  ]
               [  3    −λ     3  ]
               [  1    −1   4−λ  ]

         = (1−λ)(−λ(4−λ) + 3) + 2(3−λ) = (1−λ)²(3−λ) + 2(3−λ)
         = (3−λ)(λ² − 2λ + 3)

λ² − 2λ + 3 cannot be factorized over R.
Over C:  λ² − 2λ + 3 = (λ − 1 − i√2)(λ − 1 + i√2)
36
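Assuming the matrix A written above, a one-line NumPy check of the resulting eigenvalues (3 and 1 ± i√2):

import numpy as np

A = np.array([[1.0,  0.0, -2.0],
              [3.0,  0.0,  3.0],
              [1.0, -1.0,  4.0]])

print(np.linalg.eigvals(A))   # approximately 3, 1 + 1.414i, 1 - 1.414i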
Computing eigenvectors
Solve the equation (A − λI)x = 0
We’ll get a subspace of solutions.
37
Spectra and diagonalization
The set of all the eigenvalues of A is called
the spectrum of A.
A is diagonalizable if A has n independent
eigenvectors. Then:
  Av1 = λ1v1,  Av2 = λ2v2,  …,  Avn = λnvn

  A [ v1 | v2 | … | vn ]  =  [ v1 | v2 | … | vn ] · diag(λ1, λ2, …, λn)

  AV = VD
38
Spectra and diagonalization
Therefore, A = VDV⁻¹, where D is diagonal.
A represents a scaling along the eigenvector
axes!
  Av1 = λ1v1,  Av2 = λ2v2,  …,  Avn = λnvn

  A = [ v1 | v2 | … | vn ] · diag(λ1, λ2, …, λn) · [ v1 | v2 | … | vn ]⁻¹

  A = VDV⁻¹
39
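A minimal NumPy sketch of this factorization (the matrix is an arbitrary diagonalizable example):

import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # arbitrary diagonalizable example

lam, V = np.linalg.eig(A)    # columns of V are eigenvectors, lam the eigenvalues
D = np.diag(lam)

print("A V = V D:", np.allclose(A @ V, V @ D))
print("A = V D V^-1:", np.allclose(A, V @ D @ np.linalg.inv(V)))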
Spectra and diagonalization
A is called normal if AAᵀ = AᵀA.
Important example: symmetric matrices A = Aᵀ.
It can be proved that normal n×n matrices have
exactly n linearly independent eigenvectors
(over C).
If A is symmetric, all eigenvalues of A are real,
and it has n independent real orthonormal
eigenvectors.
40
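For the symmetric case, a small NumPy check (the matrix is an arbitrary symmetric example): eigh returns real eigenvalues and orthonormal eigenvectors.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # arbitrary symmetric example (A = A^T)

lam, V = np.linalg.eigh(A)        # specialized routine for symmetric matrices

print("eigenvalues (real):", lam)
print("V orthogonal:", np.allclose(V.T @ V, np.eye(3)))
print("A = V D V^T:", np.allclose(A, V @ np.diag(lam) @ V.T))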
Spectra and diagonalization
[Figure: A decomposed as a rotation/reflection onto an orthonormal basis, a scaling D, and a rotation/reflection onto another orthonormal basis]
41
Why SVD…
A diagonalizable matrix is essentially a scaling.
Most matrices are not diagonalizable – they do
other things along with scaling (such as rotation)
So, to understand how general matrices behave,
eigenvalues alone are not enough.
SVD tells us how general linear transformations
behave, and other things…
46
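As a teaser for next time, a minimal NumPy sketch (the matrix is an arbitrary example): the SVD factors any matrix into orthogonal matrices and a scaling, even when it is not diagonalizable.

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # arbitrary example: a shear, not diagonalizable

U, s, Vt = np.linalg.svd(A)    # A = U @ diag(s) @ Vt

print("singular values:", s)
print("U, Vt orthogonal:", np.allclose(U @ U.T, np.eye(2)),
      np.allclose(Vt @ Vt.T, np.eye(2)))
print("A reconstructed:", np.allclose(A, U @ np.diag(s) @ Vt))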
See you next time