Unscented Kalman Filter for Robot Mapping
Kalman Filter-based System
• [Arras et al. 98]:
• Laser range-finder and vision
• High precision (<1cm accuracy)
[Courtesy of Kai Arras]
KF, EKF and UKF
[Figure: Taylor approximation (EKF) vs. unscented transform]
Unscented Transform
• Remember: the Unscented Kalman Filter is based on SIGMA POINTS.
Unscented Transform
[Figure: nonlinear transformation of a Gaussian]
Unscented Transform
• Note: we still compute a Gaussian, but only after the nonlinear transformation.
Unscented Transform Overview
• Avoids linearizing around the mean, as the EKF does; instead, it uses sigma points.
Sigma Points
Sigma Points Properties
• Normalization of weights: $\sum_i w^{[i]} = 1$
• The mean is the weighted sum of the sigma points: $\mu = \sum_i w^{[i]}\, \mathcal{X}^{[i]}$
• The covariance matrix is the weighted sum of the outer products: $\Sigma = \sum_i w^{[i]}\, (\mathcal{X}^{[i]} - \mu)(\mathcal{X}^{[i]} - \mu)^T$
Sigma Points
• We need to calculate the matrix square root of $\Sigma$.
How to Calculate the Matrix Square Root?
• A first method is based on a matrix S with $S\,S = \Sigma$: from the eigendecomposition $\Sigma = V D V^T$, set $S = V D^{1/2} V^T$. S is the matrix square root of $\Sigma$.
Cholesky Matrix Square Root
• A second method is based on a matrix L from the Cholesky decomposition: $\Sigma = L L^T$. L is another valid matrix square root of $\Sigma$.
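Both square roots are easy to compute numerically. Below is a minimal NumPy sketch (the example matrix `Sigma` is an arbitrary assumption for illustration):

```python
# Two ways to compute a matrix square root of a covariance matrix Sigma.
import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])   # any symmetric positive-definite matrix

# Method 1: symmetric square root S via eigendecomposition, with S @ S = Sigma.
eigvals, eigvecs = np.linalg.eigh(Sigma)
S = eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

# Method 2: Cholesky factor L (lower triangular), with L @ L.T = Sigma.
L = np.linalg.cholesky(Sigma)

assert np.allclose(S @ S, Sigma)
assert np.allclose(L @ L.T, Sigma)
```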
Sigma Points and Eigenvectors
• Properties of sigma points: they can, but do not have to, lie on the main axes (eigenvectors) of $\Sigma$; this depends on the chosen matrix square root.
Example of Sigma Points, $\Sigma$, and the Square Root of $\Sigma$
Sigma Point Weights
• n is the dimension of the state.
• λ is a scaling parameter.
Unscented Transform: Math Details

Sigma points:
$$\mathcal{X}^{[0]} = \mu$$
$$\mathcal{X}^{[i]} = \mu + \left(\sqrt{(n+\lambda)\,\Sigma}\right)_i \qquad \text{for } i = 1,\dots,n$$
$$\mathcal{X}^{[i]} = \mu - \left(\sqrt{(n+\lambda)\,\Sigma}\right)_{i-n} \qquad \text{for } i = n+1,\dots,2n$$

Weights:
$$w_m^{[0]} = \frac{\lambda}{n+\lambda} \qquad\qquad w_c^{[0]} = \frac{\lambda}{n+\lambda} + (1 - \alpha^2 + \beta)$$
$$w_m^{[i]} = w_c^{[i]} = \frac{1}{2(n+\lambda)} \qquad \text{for } i = 1,\dots,2n$$

Pass the sigma points through the nonlinear function:
$$\mathcal{Y}^{[i]} = g(\mathcal{X}^{[i]})$$

Recover the mean and covariance:
$$\mu' = \sum_{i=0}^{2n} w_m^{[i]}\, \mathcal{Y}^{[i]}$$
$$\Sigma' = \sum_{i=0}^{2n} w_c^{[i]}\, (\mathcal{Y}^{[i]} - \mu')(\mathcal{Y}^{[i]} - \mu')^T$$
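These equations map almost line by line to code. The following is a minimal NumPy sketch of the unscented transform; the function name, the default parameter values, and the polar-to-Cartesian example are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def unscented_transform(mu, Sigma, g, alpha=1.0, beta=2.0, kappa=1.0):
    n = mu.shape[0]
    lam = alpha**2 * (n + kappa) - n

    # Sigma points: chi_0 = mu, chi_i = mu +/- i-th column of sqrt((n+lam)*Sigma).
    root = np.linalg.cholesky((n + lam) * Sigma)
    chi = np.vstack([mu, mu + root.T, mu - root.T])    # shape (2n+1, n)

    # Weights for the mean (w_m) and the covariance (w_c), as defined above.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)

    # Pass the sigma points through the nonlinear function g.
    Y = np.array([g(c) for c in chi])

    # Recover the mean and covariance of the transformed distribution.
    mu_p = wm @ Y
    d = Y - mu_p
    Sigma_p = (wc[:, None] * d).T @ d
    return mu_p, Sigma_p

# Example: push a Gaussian in polar coordinates through a polar-to-Cartesian map.
g = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
mu_p, Sigma_p = unscented_transform(np.array([1.0, 0.5]), np.diag([0.01, 0.04]), g)
```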
Examples of various nonlinear functions g(x, y) [figure]
Unscented Transform Summary
Uses Sigma Points and Weights
Parameters of the Unscented Transform
1. The unscented transform has free parameters; there is no unique solution. This gives us freedom for design, learning, etc.
2. The unscented transform uses several scaling coefficients: α, β, κ, and λ, with $\lambda = \alpha^2 (n+\kappa) - n$.
Examples of Using Various Values of the Parameter α
• Changing α changes the distance of the sigma points from the center, but their locations remain symmetric.
Examples of Using the Parameter κ
• Changing κ changes the distance of the sigma points from the center; the locations remain symmetric but can move outside the Gaussian ellipse.
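Both effects follow from the spread factor $\gamma = \sqrt{n+\lambda}$ that scales the matrix square root. A small sketch to see how α and κ change it (the parameter values are arbitrary examples):

```python
import numpy as np

n = 2  # state dimension
for alpha, kappa in [(1.0, 0.0), (0.5, 0.0), (1.0, 2.0), (0.25, 3.0)]:
    lam = alpha**2 * (n + kappa) - n
    gamma = np.sqrt(n + lam)  # distance factor of the sigma points from the mean
    print(f"alpha={alpha}, kappa={kappa} -> gamma={gamma:.3f}")
```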
Recover the Gaussian: Calculate Its Mean and Covariance
• New mean: $\mu' = \sum_{i=0}^{2n} w_m^{[i]}\, \mathcal{Y}^{[i]}$
• New covariance matrix: $\Sigma' = \sum_{i=0}^{2n} w_c^{[i]}\, (\mathcal{Y}^{[i]} - \mu')(\mathcal{Y}^{[i]} - \mu')^T$
• The result is a Gaussian with a new shape.
Examples of Recovering Gaussians [figure]
Example of Recovering the Gaussian
[Figure: sigma points, the nonlinear function, and the nonlinearly transformed sigma points]
1. Pass the sigma points through the nonlinear function.
2. Recover a Gaussian from the transformed sigma points. Note that the Gaussian of p(y) has a different mean than the mean recovered by the UKF (compare with the old Gaussian).
Linearization via Unscented Transform
• EKF: the peak of the EKF Gaussian is lower than the peak of the Gaussian of p(y).
• UKF: the peak of the UKF Gaussian is higher than the peak of the Gaussian of p(y).
• This slide shows a case where the UKF is better than the EKF.
UKF Sigma-Point Estimate (2)
• A case of larger standard deviation: here the improvement of the UKF over the EKF is even more dramatic.
UKF Sigma-Point Estimate (3)
1. For small standard deviation, the UKF and the EKF work similarly.
2. They are also similar to the KF.
UKF versus EKF
• Compare the UKF and the EKF: look at the values of the mean.
• A case of large covariance.
UKF versus EKF (Small Covariance)
UKF versus EKF – Banana Shape
• Noisy polar-coordinate measurements: compare the true covariance with the EKF-calculated covariance [figure].
UKF versus EKF (from E. A. Wan and R. van der Merwe)
• Observe the correct rotations of the ellipses.
UKF Algorithm
EKF Algorithm
EKF to UKF – Prediction
• The UKF is a modified EKF.
UKF Algorithm – Prediction
• From one of the previous slides:
EKF to UKF – Correction
UKF Algorithm – Correction (1)
UKF Algorithm – Correction (2)
From EKF to UKF – Computing the Covariance
Example of UKF for Localization

Complete UKF algorithm for localization:

UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Prediction:
$$M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix} \qquad \text{(motion noise)}$$
$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix} \qquad \text{(measurement noise)}$$
$$\mu_{t-1}^a = \begin{pmatrix} \mu_{t-1} \\ \mathbf{0} \\ \mathbf{0} \end{pmatrix} \qquad \text{(augmented state mean)}$$
$$\Sigma_{t-1}^a = \begin{pmatrix} \Sigma_{t-1} & 0 & 0 \\ 0 & M_t & 0 \\ 0 & 0 & Q_t \end{pmatrix} \qquad \text{(augmented covariance)}$$
$$\mathcal{X}_{t-1}^a = \begin{pmatrix} \mu_{t-1}^a & \mu_{t-1}^a + \gamma\sqrt{\Sigma_{t-1}^a} & \mu_{t-1}^a - \gamma\sqrt{\Sigma_{t-1}^a} \end{pmatrix} \qquad \text{(sigma points)}$$
$$\bar{\mathcal{X}}_t^x = g(u_t + \mathcal{X}_t^u,\; \mathcal{X}_{t-1}^x) \qquad \text{(prediction of sigma points)}$$
$$\bar{\mu}_t = \sum_{i=0}^{2L} w_m^{[i]}\, \bar{\mathcal{X}}_{i,t}^x \qquad \text{(predicted mean)}$$
$$\bar{\Sigma}_t = \sum_{i=0}^{2L} w_c^{[i]}\, (\bar{\mathcal{X}}_{i,t}^x - \bar{\mu}_t)(\bar{\mathcal{X}}_{i,t}^x - \bar{\mu}_t)^T \qquad \text{(predicted covariance)}$$
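For concreteness, here is a minimal NumPy sketch of this prediction step. It is a simplified variant: the state is augmented with the motion noise $M_t$ only (the slide also appends the measurement noise $Q_t$ for later use in the correction step), and the function names and default parameter values are illustrative assumptions:

```python
import numpy as np

def ukf_predict(mu, Sigma, u, M, g, alpha=1.0, beta=2.0, kappa=1.0):
    # Augmented state: pose mean/covariance stacked with the motion noise M_t.
    n, nu = mu.shape[0], M.shape[0]
    L = n + nu
    mu_a = np.concatenate([mu, np.zeros(nu)])
    Sigma_a = np.zeros((L, L))
    Sigma_a[:n, :n] = Sigma
    Sigma_a[n:, n:] = M

    lam = alpha**2 * (L + kappa) - L
    root = np.linalg.cholesky((L + lam) * Sigma_a)
    chi_a = np.vstack([mu_a, mu_a + root.T, mu_a - root.T])   # 2L+1 sigma points

    wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    wc = wm.copy()
    wm[0] = lam / (L + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)

    # Propagate each pose sigma point through the motion model g with its
    # perturbed control u_t + chi^u (heading angles would need wrapping here).
    chi_x = np.array([g(u + c[n:], c[:n]) for c in chi_a])

    mu_bar = wm @ chi_x                                       # predicted mean
    d = chi_x - mu_bar
    Sigma_bar = (wc[:, None] * d).T @ d                       # predicted covariance
    return chi_x, wm, wc, mu_bar, Sigma_bar
```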
Complete UKF algorithm for localization (continued):

UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Correction:
$$\bar{\mathcal{Z}}_t = h(\bar{\mathcal{X}}_t^x) + \mathcal{X}_t^z \qquad \text{(measurement sigma points)}$$
$$\hat{z}_t = \sum_{i=0}^{2L} w_m^{[i]}\, \bar{\mathcal{Z}}_{i,t} \qquad \text{(predicted measurement mean)}$$
$$S_t = \sum_{i=0}^{2L} w_c^{[i]}\, (\bar{\mathcal{Z}}_{i,t} - \hat{z}_t)(\bar{\mathcal{Z}}_{i,t} - \hat{z}_t)^T \qquad \text{(predicted measurement covariance)}$$
$$\bar{\Sigma}_t^{x,z} = \sum_{i=0}^{2L} w_c^{[i]}\, (\bar{\mathcal{X}}_{i,t}^x - \bar{\mu}_t)(\bar{\mathcal{Z}}_{i,t} - \hat{z}_t)^T \qquad \text{(cross-covariance)}$$
$$K_t = \bar{\Sigma}_t^{x,z}\, S_t^{-1} \qquad \text{(Kalman gain)}$$
$$\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t) \qquad \text{(updated mean)}$$
$$\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T \qquad \text{(updated covariance)}$$
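A matching sketch of the correction step, reusing the sigma points and weights returned by the prediction sketch above. As a simplification, the measurement noise $Q_t$ is added to $S_t$ directly instead of entering through the $\mathcal{X}_t^z$ sigma-point components:

```python
import numpy as np

def ukf_correct(chi_x, wm, wc, mu_bar, Sigma_bar, z, h, Q):
    # Measurement sigma points; Q is added below instead of via chi^z components.
    Z = np.array([h(c) for c in chi_x])
    z_hat = wm @ Z                                   # predicted measurement mean

    dz = Z - z_hat
    S = (wc[:, None] * dz).T @ dz + Q                # predicted measurement cov.
    dx = chi_x - mu_bar
    Sigma_xz = (wc[:, None] * dx).T @ dz             # cross-covariance

    # In practice the bearing part of z - z_hat should be wrapped to [-pi, pi].
    K = Sigma_xz @ np.linalg.inv(S)                  # Kalman gain
    mu = mu_bar + K @ (z - z_hat)                    # updated mean
    Sigma = Sigma_bar - K @ S @ K.T                  # updated covariance
    return mu, Sigma
```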
Let us compare to the EKF for the same problem.

EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Correction:
$$\hat{z}_t = h(\bar{\mu}_t, m) = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar{\mu}_{t,y},\; m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix} \qquad \text{(predicted measurement mean)}$$
$$H_t = \frac{\partial h(\bar{\mu}_t, m)}{\partial x_t} = \begin{pmatrix} -\dfrac{m_x - \bar{\mu}_{t,x}}{\bar{r}_t} & -\dfrac{m_y - \bar{\mu}_{t,y}}{\bar{r}_t} & 0 \\ \dfrac{m_y - \bar{\mu}_{t,y}}{\bar{r}_t^2} & -\dfrac{m_x - \bar{\mu}_{t,x}}{\bar{r}_t^2} & -1 \end{pmatrix} \qquad \text{(Jacobian of h w.r.t. the location, with } \bar{r}_t \text{ the predicted range)}$$
$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$$
$$S_t = H_t \bar{\Sigma}_t H_t^T + Q_t \qquad \text{(predicted measurement covariance)}$$
$$K_t = \bar{\Sigma}_t H_t^T S_t^{-1} \qquad \text{(Kalman gain)}$$
$$\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t) \qquad \text{(updated mean)}$$
$$\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t \qquad \text{(updated covariance)}$$
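The only nontrivial parts of the EKF version are the measurement model and its Jacobian; the rest is plain matrix algebra. A minimal NumPy sketch of those two (the function name is an illustrative assumption):

```python
import numpy as np

def h_and_jacobian(mu_bar, m):
    # Range-bearing measurement model for a landmark at m = (m_x, m_y).
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)

    z_hat = np.array([r, np.arctan2(dy, dx) - mu_bar[2]])

    # Jacobian of h w.r.t. the pose (x, y, theta); compare the EKF lines above.
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    return z_hat, H
```

With `z_hat` and `H` in hand, the remaining EKF lines are direct translations, e.g. `S = H @ Sigma_bar @ H.T + Q` and `K = Sigma_bar @ H.T @ np.linalg.inv(S)`.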
Graphical Illustration: UKF Prediction Step [figure]
UKF Observation Prediction Step [figure]
UKF Correction Step [figure]
EKF Correction Step – let us compare to the EKF [figure]
Estimation Sequence
[Figure: EKF, UKF, and PF (particle filter) estimates]
Estimation Sequence (another example)
[Figure: EKF vs. UKF estimates]
Prediction Quality
[Figure: EKF vs. UKF] The prediction quality is better for the UKF.
UT/UKF Summary
UKF versus EKF:
1. The UKF gives the same results as the EKF for linear models.
2. The UKF is a better approximation than the EKF for nonlinear models.
3. The differences are often somewhat small.
4. There is no need to calculate Jacobians for the UKF.
5. The UKF is in general in the same complexity class as the EKF.
6. In practice, the UKF is slightly slower than the EKF.
7. The UKF still requires Gaussian distributions.
Final UKF Conclusions
• Highly efficient: the same complexity class as the EKF, a constant factor slower in typical practical applications.
• Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (the EKF only in the first term).
• Derivative-free: no Jacobians needed.
• Still not optimal!
Localization with Multi-Hypothesis Tracking
Localization With Multi-Hypothesis
Tracking
• Belief is represented by multiple hypotheses
• Each hypothesis is tracked by a Kalman filter
• Additional problems:
• Data association: Which observation corresponds to which
hypothesis?
• Hypothesis management: When to add / delete
hypotheses?
• Huge body of literature on target tracking, motion
correspondence etc.
Multi-Hypothesis Tracking: Implemented System (1)
• Hypotheses are extracted from LRF (laser range-finder) scans.
• Each hypothesis has a probability of being the correct one: $H_i = \{\hat{x}_i, \Sigma_i, P(H_i)\}$
• The hypothesis probability is computed using Bayes' rule: $P(H_i \mid s) = \dfrac{P(s \mid H_i)\, P(H_i)}{P(s)}$
• Hypotheses with low probability are deleted.
• New candidates are extracted from LRF scans: $C_j = \{z_j, R_j\}$
[Jensfelt et al. ’00]
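As a rough illustration of the Bayes-rule update and the pruning rule, here is a minimal NumPy sketch; the vectorized representation and the pruning threshold are assumptions for illustration:

```python
import numpy as np

def update_hypotheses(priors, likelihoods, prune_below=1e-3):
    # Bayes rule: P(H_i | s) is proportional to P(s | H_i) * P(H_i);
    # the normalizer is P(s) = sum_i P(s | H_i) P(H_i).
    posterior = likelihoods * priors
    posterior /= posterior.sum()

    keep = posterior > prune_below      # delete low-probability hypotheses
    posterior = posterior[keep]
    posterior /= posterior.sum()        # renormalize the survivors
    return posterior, keep
```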
MHT: Implemented System (2)
Courtesy of P. Jensfelt and S. Kristensen

MHT: Implemented System (3) – Example Run
[Figure: map and trajectory; number of hypotheses vs. time; P(H_best)]
Courtesy of P. Jensfelt and S. Kristensen