State estimation and Kalman filtering
State Estimation and Kalman Filtering
Sensors and Uncertainty
Real-world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts)
Use sensor models to estimate ground truth and unobserved variables, and to make forecasts
How does this relate to motion?
A robot must:
Know where it is (localization)
Model its environment to avoid collisions (mapping, model building)
Move in order to get a better view of an object (view planning, inspection)
Predict how other agents move (tracking)
Intelligent Robot Architecture
[Pipeline: percept (raw sensor data) => sensing algorithms => models => planning => action]
Sensing algorithms include: state estimation, mapping, tracking, calibration, object recognition, …
State Estimation Framework
Sensor model (aka observation model): state xt => observation zt
Observations may be partial, noisy
Dynamics model (aka transition model): state xt => next state xt+1
Transition may be noisy, control-dependent
Probabilistic state estimation: given observations, estimate a probability distribution over the state
Markov Chain
Given an initial distribution and the dynamics model, can predict how the state evolves (as a probability distribution) over time
[Diagram: X0 → X1 → X2 → X3]
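A minimal sketch of this prediction step on a discrete state space; the 3-state transition matrix below is made up for illustration, not from the slides:

```python
import numpy as np

# T[i, j] = P(X_{t+1} = i | X_t = j); each column sums to 1 (illustrative values).
T = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.3],
              [0.0, 0.1, 0.7]])
p = np.array([1.0, 0.0, 0.0])    # initial distribution P(X0): start state known

for t in range(1, 4):            # distributions over X1, X2, X3
    p = T @ p                    # push the distribution through the dynamics model
    print(f"P(X{t}) =", p)
```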
Hidden Markov Model
Use observations to get a better idea of where the robot is at time t
[Diagram: hidden state variables X0 → X1 → X2 → X3, with observed variables z1, z2, z3 attached to X1, X2, X3]
Predict – observe – predict – observe…
General Bayesian Filtering
Compute P(xt|zt), given zt and a prior on P(xt-1)
Bayes rule:
P(xt|zt) = a P(zt|xt) P(xt)
where
P(xt) = integral [ P(xt|xt-1) P(xt-1) dxt-1 ]
1/a = integral [ P(zt|xt) P(xt) dxt ]
Nasty, hairy integrals
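One brute-force way to sidestep the integrals is to discretize the state space and replace them with sums. A sketch of one filter cycle under that assumption (the grid discretization and function names are my own):

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One predict-update cycle of the general Bayes filter on a grid.

    prior:      P(x_{t-1} | z_{1:t-1}) as a vector over grid cells
    transition: transition[i, j] = P(x_t = i | x_{t-1} = j)
    likelihood: likelihood[i] = P(z_t | x_t = i) for the new observation z_t
    """
    predicted = transition @ prior        # P(x_t) = sum_j P(x_t|x_{t-1}=j) P(x_{t-1}=j)
    posterior = likelihood * predicted    # numerator of Bayes rule: P(z_t|x_t) P(x_t)
    return posterior / posterior.sum()    # dividing by the sum implements 1/a
```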
Key Representational Issue
How to represent and perform calculations on probability distributions?
Kalman Filtering
In a nutshell:
Efficient filtering in continuous state spaces
Gaussian transition and observation models
Ubiquitous for tracking with noisy sensors, e.g. radar, GPS, cameras
Kalman Filtering
Closed-form solution for the HMM
Assuming a linear Gaussian process:
Transition and observation functions are a linear transformation + multivariate Gaussian noise
y = A x + e
e ~ N(m, S)
Linear Gaussian Transition Model for a Moving 1D Point
Consider position and velocity xt, vt
Time step h
Without noise:
xt+1 = xt + h vt
vt+1 = vt
With Gaussian noise of std s1:
P(xt+1|xt) ∝ exp(-(xt+1 - (xt + h vt))^2 / (2 s1^2))
i.e. xt+1 ~ N(xt + h vt, s1^2)
Linear Gaussian Transition Model
If the prior on position is Gaussian, then the distribution after one transition is also Gaussian:
N(m, s^2) → N(m + v h, s^2 + s1^2)
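In code, the 1D predict step is one line for the mean and one for the variance. A sketch with my own names; note it is the variances, not the standard deviations, that add:

```python
def predict_1d(m, s, v, h, s1):
    """Prior N(m, s^2) on position, known velocity v, time step h, noise std s1."""
    m_new = m + v * h                 # mean shifts by the deterministic motion v*h
    var_new = s**2 + s1**2            # variances add under independent Gaussian noise
    return m_new, var_new**0.5
```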
Linear Gaussian Observation Model
Position observation zt
Gaussian noise of std s2:
zt ~ N(xt, s2^2)
Linear Gaussian Observation Model
If the prior on position is Gaussian, then the posterior is also Gaussian:
[Figure: position prior, observation probability, and posterior probability curves]
m ← (s^2 z + s2^2 m) / (s^2 + s2^2)
s^2 ← s^2 s2^2 / (s^2 + s2^2)
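The corresponding 1D update step just evaluates the two formulas above (again a sketch with my own names):

```python
def update_1d(m, s, z, s2):
    """Prior N(m, s^2) on position, observation z with noise std s2."""
    var, var_z = s**2, s2**2
    m_new = (var * z + var_z * m) / (var + var_z)   # precision-weighted average
    var_new = var * var_z / (var + var_z)           # posterior variance shrinks
    return m_new, var_new**0.5
```

Note the posterior mean lies between the prior mean and the observation, weighted by their precisions, and the posterior variance is smaller than both.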
Kalman Filter
xt ~ N(mx, Sx)
Dynamics model: xt+1 = F xt + g + v, with dynamics noise v ~ N(0, Sv)
Observation model: zt+1 = H xt+1 + w, with observation noise w ~ N(0, Sw)
Two Steps
Maintain mt, St: the Gaussian distribution over the state at time t
Predict: compute the distribution of xt+1 using the dynamics model alone
Update: compute xt+1 | zt+1 with Bayes rule
Multivariate Computations
Linear transformations of Gaussians:
If x ~ N(m, S) and y = A x + b,
then y ~ N(A m + b, A S A^T)
Consequence: if x ~ N(mx, Sx) and y ~ N(my, Sy) are independent and z = x + y,
then z ~ N(mx + my, Sx + Sy)
Conditional of a Gaussian:
If [x1; x2] ~ N([m1; m2], [S11, S12; S21, S22]),
then on observing x2 = z, we have
x1 ~ N(m1 + S12 S22^-1 (z - m2), S11 - S12 S22^-1 S21)
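Both rules are short enough to state directly as code (a sketch using numpy conventions; the function names are my own):

```python
import numpy as np

def transform_gaussian(m, S, A, b):
    """If x ~ N(m, S) and y = A x + b, then y ~ N(A m + b, A S A^T)."""
    return A @ m + b, A @ S @ A.T

def condition_gaussian(m1, m2, S11, S12, S22, z):
    """Given [x1; x2] ~ N([m1; m2], [S11, S12; S12^T, S22]) and x2 = z,
    return the mean and covariance of x1 | x2 = z."""
    K = S12 @ np.linalg.inv(S22)              # gain mapping the surprise (z - m2) into x1
    return m1 + K @ (z - m2), S11 - K @ S12.T  # S21 = S12^T for a valid covariance
```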
Predict Step
Linear transformation rule:
xt+1 ~ N(F mt + g, F St F^T + Sv)
Let these be m' and S'
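A direct transcription of the predict step (sketch; my own names):

```python
import numpy as np

def kf_predict(m, S, F, g, Sv):
    """Push N(m, S) through xt+1 = F xt + g + v, v ~ N(0, Sv)."""
    return F @ m + g, F @ S @ F.T + Sv    # these are m' and S'
```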
Update step
Given m' and S', and a new observation z,
compute the final distribution mt+1 and St+1 using the conditional distribution formulas
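Applying the conditioning rule with cross-covariance B = S' H^T and C = H S' H^T + Sw (derived on the next slide) gives the update step; a sketch with my own function names:

```python
import numpy as np

def kf_update(m_pred, S_pred, z, H, Sw):
    """Condition the predicted Gaussian N(m', S') on the observation z."""
    C = H @ S_pred @ H.T + Sw                 # innovation covariance
    K = S_pred @ H.T @ np.linalg.inv(C)       # Kalman gain S' H^T C^-1
    m_new = m_pred + K @ (z - H @ m_pred)     # correct the mean by the innovation
    S_new = S_pred - K @ H @ S_pred           # conditioning shrinks the covariance
    return m_new, S_new
```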
Deriving the Update Rule
(1) Unknowns a, B, C: write the joint distribution as
[xt; zt] ~ N([m'; a], [S', B; B^T, C])
(2) Assumption: xt ~ N(m', S')
(3) Assumption: zt | xt ~ N(H xt, Sw)
(4) Conditioning (1): zt | xt ~ N(a + B^T S'^-1 (xt - m'), C - B^T S'^-1 B)
(5) Set mean (4) = (3): H xt = a + B^T S'^-1 (xt - m')  =>  a = H m', B^T = H S'
(6) Set cov. (4) = (3): C - B^T S'^-1 B = Sw  =>  C = H S' H^T + Sw
(7) Conditioning (1): xt | zt ~ N(m' + B C^-1 (zt - a), S' - B C^-1 B^T)
(8,9) Kalman filter:
mt = m' + S' H^T C^-1 (zt - H m')
St = S' - S' H^T C^-1 H S'
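Putting steps (8,9) together with the predict step, here is a self-contained end-to-end sketch for the 1D moving point from the earlier slides; all numeric values are fabricated for illustration:

```python
import numpy as np

h, s1, s2 = 0.1, 0.05, 0.3                     # time step and noise stds (made up)
F = np.array([[1.0, h], [0.0, 1.0]])           # position-velocity dynamics
g = np.zeros(2)
Sv = np.diag([s1**2, s1**2])                   # dynamics noise covariance
H = np.array([[1.0, 0.0]])                     # we observe position only
Sw = np.array([[s2**2]])                       # observation noise covariance

m, S = np.zeros(2), np.eye(2)                  # initial Gaussian belief
for z in [0.11, 0.25, 0.32]:                   # fabricated position readings
    m, S = F @ m + g, F @ S @ F.T + Sv         # predict: m', S'
    C = H @ S @ H.T + Sw                       # step (6): C = H S' H^T + Sw
    K = S @ H.T @ np.linalg.inv(C)             # gain S' H^T C^-1
    m = m + K @ (np.array([z]) - H @ m)        # step (8)
    S = S - K @ H @ S                          # step (9)
print("filtered [position, velocity]:", m)
```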
Filtering Variants
Extended Kalman Filter (EKF)
Unscented Kalman Filter (UKF)
Particle Filtering
Extended/Unscented Kalman Filter
Dynamics / sensing model is nonlinear:
xt+1 = f(xt) + v
zt+1 = h(xt+1) + w
Gaussian noise, Gaussian state estimate as before
Strategy: propagate means as usual
What about covariances & conditioning?
Extended Kalman Filter
Approach: linearize about the current estimate
F = ∂f/∂x (xt), H = ∂h/∂x (xt+1)   (Jacobian = matrix of partial derivatives)
Use the Jacobians to propagate covariance matrices and perform conditioning
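A minimal EKF step under these assumptions (a sketch; fjac and hjac are my own names for functions returning the Jacobians of f and h):

```python
import numpy as np

def ekf_step(m, S, z, f, fjac, h, hjac, Sv, Sw):
    # Predict: the mean goes through the nonlinear f, the covariance through F.
    F = fjac(m)
    m_pred, S_pred = f(m), F @ S @ F.T + Sv
    # Update: linearize h at the predicted mean, then condition as in the linear case.
    H = hjac(m_pred)
    C = H @ S_pred @ H.T + Sw
    K = S_pred @ H.T @ np.linalg.inv(C)
    return m_pred + K @ (z - h(m_pred)), S_pred - K @ H @ S_pred
```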
Unscented Kalman Filter
Approach: propagate a set of points with the same mean/covariance as the input, then reconstruct a Gaussian from the propagated points
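One common sigma-point scheme, sketched below; the exact weights and spread parameter vary across formulations, so treat the kappa default as my own choice:

```python
import numpy as np

def unscented_transform(m, S, f, kappa=1.0):
    """Propagate N(m, S) through a nonlinear f via sigma points, refit a Gaussian."""
    n = len(m)
    L = np.linalg.cholesky((n + kappa) * S)          # matrix square root of scaled cov
    sigma = np.vstack([m, m + L.T, m - L.T])         # 2n+1 points with mean m, cov S
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))  # weights summing to 1
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])              # push each point through f
    m_y = w @ y                                      # reconstructed mean
    d = y - m_y
    return m_y, (w[:, None] * d).T @ d               # reconstructed covariance
```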
Next Class
Particle Filter:
Non-Gaussian distributions
Nonlinear observation/transition function
Readings:
Rekleitis (2004)
P.R.M. Chapter 9
Midterm Project Report Schedule (tentative)
Tuesday
Changsi and Yang: Indoor person following
Roland: Person tracking
Jiaan and Yubin: Indoor mapping
You-wei: Autonomous driving
Thursday
Adrija: Dynamic collision checking with point clouds
Damien: Netlogo
Santhosh and Yohanand: Robot chess
Jingru and Yajia: Ball collector with robot arm
Ye: UAV simulation