
MRA Basic Concepts
Jyun-Ming Chen
Spring 2001
Introduction
• MRA (multiresolution analysis)
– Construct a hierarchy of approximations to functions in various subspaces of a linear vector space
• First explained in a finite-dimensional linear vector space $V_N$
• There exist $N$ linearly independent and orthogonal basis vectors $a_1, a_2, \ldots, a_N$
• Any vector in $V_N$ can be expressed as a unique linear combination of these basis vectors:
$x = \alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_N a_N$
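A minimal numerical sketch of this expansion (the basis vectors and $x$ below are illustrative choices, not from the slides): for an orthogonal basis, the coefficients are $\alpha_i = \langle x, a_i \rangle / \langle a_i, a_i \rangle$.

```python
import numpy as np

# Expand a vector x in an orthogonal (not necessarily unit-length) basis of R^3.
a = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])    # rows a_1, a_2, a_3 (illustrative)
x = np.array([3.0, 1.0, 4.0])

# alpha_i = <x, a_i> / <a_i, a_i>
alphas = a @ x / np.einsum('ij,ij->i', a, a)

# Reconstruct x from x = sum_i alpha_i a_i; the expansion is unique.
assert np.allclose(x, alphas @ a)
print(alphas)                        # coefficients alpha_1, alpha_2, alpha_3
```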
Simple Illustration in Number Representation
• Space $V_n$: the set of numbers represented using $n$ decimal digits
• $\pi \in \mathbb{R}$; $\pi = 3.1415926\ldots$
– $V_5$: represent numbers using $10^1, 10^0, 10^{-1}, 10^{-2}, 10^{-3}$; in $V_5$, $\pi = 3.142$
– $V_4$: represent numbers using $10^1, 10^0, 10^{-1}, 10^{-2}$; in $V_4$, $\pi = 3.14$
– $V_1$: represent numbers using $10^1$; in $V_1$, $\pi = 0$
• Properties:
– $\cdots \subset V_{j-2} \subset V_{j-1} \subset V_j \subset \cdots \subset \mathbb{R}$
– $\lim_{n \to \infty} V_n = \mathbb{R}$
– $\bigcap_i V_i = \{0\}$
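One way to read this analogy in code (a sketch; the rounding convention is an assumption chosen to match the slide's $\pi = 3.142$ in $V_5$):

```python
import math

# "Project" pi into V_n by keeping digits down to the 10^(2-n) place:
# V_5 keeps 10^1 ... 10^-3, V_4 keeps 10^1 ... 10^-2, V_1 keeps only 10^1.
def project(x, n):
    step = 10.0 ** (2 - n)           # finest retained digit place
    return round(x / step) * step    # round to that resolution (assumption)

for n in (5, 4, 1):
    print(f"V_{n}: {project(math.pi, n)}")
# V_5: 3.142, V_4: 3.14, V_1: 0.0 -- finer spaces approximate pi better
```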
Nested Vector Spaces
• $V_{N-1}$: subspace of a lower dimension obtained by taking only $N-1$ of the $N$ basis vectors, say $a_1, a_2, \ldots, a_{N-1}$
• Continuing, $V_{N-2}, V_{N-3}, \ldots, V_1$
• Hence,
$V_1 \subset V_2 \subset \cdots \subset V_N$
Approximate a Vector in Subspaces
• Let $\hat{x}_{N-1}$ be the orthogonal projection of $x$ onto the subspace $V_{N-1}$
• Best approximation: minimizes the discrepancy
$e_{N-1} = x - \hat{x}_{N-1}$
[Figure: a vector projected onto the subspace spanned by $a_1, a_2$, with the error vector $e$ along $a_3$]
Orthogonal Projection and Least Square Error
The error vector $Ax - b$ must be perpendicular to the column space.
$E = Ax - b; \quad E^2 = (Ax - b)^T (Ax - b)$
Least square solution: $\dfrac{dE^2}{dx} = 0$, i.e.
$2 A^T (Ax - b) = 0$
For $Ay$, any vector in the column space:
$(Ay)^T (Ax - b) = y^T A^T (Ax - b) = 0$
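A quick numerical check of this argument ($A$ and $b$ are arbitrary illustrative data, not from the slides):

```python
import numpy as np

# Solve the normal equations A^T A x = A^T b and verify that the residual
# Ax - b is orthogonal to the column space of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))      # 6 equations, 3 unknowns (overdetermined)
b = rng.standard_normal(6)

x = np.linalg.solve(A.T @ A, A.T @ b)   # least-squares solution

residual = A @ x - b
# A^T (Ax - b) = 0: the residual is perpendicular to every column of A,
# hence to any vector Ay in the column space.
print(A.T @ residual)                    # ~ [0, 0, 0] up to round-off
```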
For Orthonormal Basis (N=3)
[Figure: two panels; $x$ projected onto span$\{a_1, a_2\}$ with error $e_2$, then $\hat{x}_2$ projected onto span$\{a_1\}$ with error $e_1$]
$\hat{x}_2 = \langle x, a_1 \rangle a_1 + \langle x, a_2 \rangle a_2$
$\hat{x}_1 = \langle \hat{x}_2, a_1 \rangle a_1 = \langle x, a_1 \rangle a_1$
$e_2 = x - \hat{x}_2$
$e_1 = \hat{x}_2 - \hat{x}_1$
$x = e_2 + e_1 + \hat{x}_1$
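A sketch verifying these relations numerically, using a randomly generated orthonormal basis of $\mathbb{R}^3$ (an illustrative choice, not from the slides):

```python
import numpy as np

# Two-step orthogonal projection for an orthonormal basis of R^3.
q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
a1, a2, a3 = q.T                     # orthonormal basis vectors

x = np.array([2.0, -1.0, 0.5])

x2 = (x @ a1) * a1 + (x @ a2) * a2   # projection onto span{a1, a2}
x1 = (x2 @ a1) * a1                  # projection onto span{a1}

e2 = x - x2                          # detail lost going from V3 to V2
e1 = x2 - x1                         # detail lost going from V2 to V1

assert np.allclose(x, e2 + e1 + x1)  # x = e2 + e1 + x1
print(e2 @ e1, e2 @ x1, e1 @ x1)     # ~ 0, 0, 0: mutually orthogonal
```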
Interpretations
• Approximating vectors:
– Sequence of orthogonal projection vectors of $x$ in the subspaces
– Finest approximation at $V_{N-1}$
– Coarsest approximation at $V_1$
• Error (detail) vector in $V_N$
– orthogonal to $V_{N-1}$
• $W_{N-1}$: the orthogonal complement to $V_{N-1}$ in $V_N$
– dimensionality of 1
• Similarly, $W_{N-2}$: the orthogonal complement to $V_{N-2}$ in $V_{N-1}$
Interpretations (cont)
• Every vector in $V_N$ can be written in the form below (the sum of one vector apiece from the $N$ subspaces $W_{N-1}, W_{N-2}, \ldots, W_1, V_1$):
$x = e_{N-1} + e_{N-2} + \cdots + e_1 + \hat{x}_1$
• The vectors $e_{N-1}, e_{N-2}, \ldots, e_1, \hat{x}_1$ form an orthogonal set
• $V_N$ is the direct sum of these subspaces:
$V_N = W_{N-1} \oplus W_{N-2} \oplus \cdots \oplus W_1 \oplus V_1$
• Orthogonality of subspaces:
$W_k \perp V_k, \quad k = 1, \ldots, N-1$
Also $W_k \perp W_j, \quad j = 1, \ldots, k-1$
[Figure: nested sets $V_1 \subset V_2 \subset V_3$, with $W_1$ the complement of $V_1$ in $V_2$ and $W_2$ the complement of $V_2$ in $V_3$]
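The same peeling works for general $N$; a sketch of the direct-sum decomposition (the basis and vector are again arbitrary illustrative data):

```python
import numpy as np

# Peel x into details e_{N-1}, ..., e_1 plus the coarsest approximation
# x_1: one component per W_k, and one in V_1.
N = 5
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((N, N)))
x = np.random.default_rng(3).standard_normal(N)

approx = [x]                          # x itself lives in V_N
for k in range(N - 1, 0, -1):         # project onto V_k = span{a_1 .. a_k}
    P = Q[:, :k] @ Q[:, :k].T
    approx.append(P @ x)

details = [a - b for a, b in zip(approx, approx[1:])]  # e_{N-1}, ..., e_1
x1 = approx[-1]                       # coarsest approximation, in V_1

assert np.allclose(x, sum(details) + x1)   # direct-sum reconstruction
parts = details + [x1]
G = np.array([[u @ v for v in parts] for u in parts])
print(np.round(G - np.diag(np.diag(G)), 10))   # off-diagonals ~ 0
```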
From Vector Space to Function Space
Example of an MRA
• Let $f(t)$ be a continuous, real-valued, finite-energy signal
• Approximate $f(t)$ as follows:
$f_0(t) = \int_{l}^{l+1} f(\tau)\,d\tau, \quad l \le t < l+1$
$f_{-1}(t) = \frac{1}{2}\int_{2l}^{2l+2} f(\tau)\,d\tau, \quad 2l \le t < 2l+2$
$f_1(t) = \frac{1}{1/2}\int_{l/2}^{l/2+1/2} f(\tau)\,d\tau, \quad l/2 \le t < l/2+1/2$
• That is, each $f_k$ replaces $f$ by its average over intervals of width $2^{-k}$
MRA Example (cont)
• $V_0$: linear vector space formed by the set of functions that are piecewise constant over unit intervals
• Nested subspaces:
$\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots$
$f(t) = f_\infty(t) = \lim_{k \to \infty} f_k(t)$
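A discrete sketch of this example: approximate $f_k$ by block-averaging a densely sampled signal over intervals of width $2^{-k}$ (the test signal and sampling rate are illustrative assumptions):

```python
import numpy as np

fs = 64                              # samples per unit interval
t = np.arange(0, 4, 1 / fs)          # t in [0, 4)
f = np.sin(2 * np.pi * t) * np.exp(-t)

def f_k(f, fs, k):
    """Replace f by its average over each interval of width 2**(-k)."""
    block = int(fs * 2.0 ** (-k))    # samples per averaging interval
    means = f.reshape(-1, block).mean(axis=1)
    return np.repeat(means, block)   # piecewise-constant approximation

for k in (-1, 0, 1, 2):
    err = np.mean((f - f_k(f, fs, k)) ** 2)
    print(f"k={k:+d}: mean-square error {err:.5f}")   # shrinks as k grows
```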
Approximating Function by Orthogonal Projection
• Assume $u$ is not a member of the space $V$ spanned by $\{f_k\}$, a set of orthonormal basis functions
• We wish to find an approximation of $u$ in $V$:
$u_p = \sum_k \langle u, f_k \rangle f_k$
• Remarks:
– The approximation error $u - u_p$ is orthogonal to the space $V$:
$\langle u - u_p, f_k \rangle = 0 \quad \forall k$
– The mean square error of such an approximation is minimum
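A numerical sketch of this projection (the basis here is a set of orthonormalized sampled monomials, an illustrative choice, not from the slides):

```python
import numpy as np

# Project u onto the span of an orthonormal set {f_k} and check that the
# error u - u_p is orthogonal to every f_k.
t = np.linspace(-1, 1, 200)
F, _ = np.linalg.qr(np.vstack([t ** k for k in range(4)]).T)  # columns f_k

u = np.exp(t)                        # a function outside span{f_k}

coeffs = F.T @ u                     # <u, f_k> (discrete inner products)
u_p = F @ coeffs                     # u_p = sum_k <u, f_k> f_k

print(F.T @ (u - u_p))               # ~ 0 for every k: error is orthogonal to V
```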
Formal Definition of an MRA
An MRA consists of nested linear vector spaces such that
$\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots$
• There exists a function $\phi(t)$ (called the scaling function) such that $\{\phi(t-k) : k \text{ an integer}\}$ is a basis for $V_0$
• If $\phi(t) \in V_k$ then $\phi(2t) \in V_{k+1}$, and vice versa
• $\lim_{j \to \infty} V_j = L^2(\mathbb{R})$; $\bigcap_j V_j = \{0\}$
• Remarks:
– Does not require $\phi(t)$ and its integer translates to be orthogonal (in general)
– No mention of wavelets
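The Haar scaling function (the box on $[0,1)$) is a standard example satisfying these axioms; a sketch checking its refinement relation and, additionally, the orthonormality of its integer translates:

```python
import numpy as np

def phi(t):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    return ((0 <= t) & (t < 1)).astype(float)

t = np.linspace(-1, 2, 1200, endpoint=False)

# Refinement relation phi(t) = phi(2t) + phi(2t - 1): it places V_0 in V_1.
assert np.allclose(phi(t), phi(2 * t) + phi(2 * t - 1))

# Integer translates phi(t - k) are orthonormal (extra to the definition,
# which does not require orthogonality in general).
dt = t[1] - t[0]
for k in (0, 1, 2):
    print(k, np.sum(phi(t) * phi(t - k)) * dt)   # ~1 for k=0, ~0 otherwise
```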