Introduction to particle filter
Presented by:
Amir Monemian
Advisor:
Dr Shiry
Particle filters are a great way to track the state of a dynamic system.
Given a Bayesian model, you can use particle filters to track your belief state.
The main reason is that for a lot of large or high-dimensional problems, particle filters are tractable whereas Kalman filters are not: particle filters let you use the full, complex model, but just find an approximate solution instead.
Bayes Filtering
Bayes filtering is the term used for the method of using a predict/update cycle to estimate the state of a dynamical system from sensor measurements.
p(x_t | d_0…t) is the probability of x_t given all the data we've seen so far (our belief).
p(z_t | x_t) is the perceptual model.
p(x_t | x_t-1, u_t-1) is the action model.
x is the state variable; x_t is the state variable at time t.
u is the input to your system.
z is the observations made by the sensors.
d just refers to the inputs and observations together.
What we are given is the inputs, the observations, the perceptual model, which is the probability that you'll see a particular observation given that you're in some state at time t, and the action model, which is the probability that you'll end up in state x_t at time t, assuming that you started in state x_t-1 at time t-1 and applied input u_t-1 to your system.
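As a purely illustrative sketch (not from the slides), the two models for a hypothetical 1D robot with Gaussian motion and sensor noise might look like this in Python; the landmark position, noise levels, and function names are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D robot: the state x_t is a position on a line.

def action_model_sample(x_prev, u_prev, motion_noise=0.1):
    """Sample x_t ~ p(x_t | x_t-1, u_t-1): move by u_prev plus Gaussian noise."""
    return x_prev + u_prev + rng.normal(0.0, motion_noise)

def perceptual_model(z, x, sensor_noise=0.2):
    """Evaluate p(z_t | x_t): likelihood of a range reading z given state x."""
    landmark = 5.0              # assumed landmark position
    expected_z = landmark - x   # expected range measurement from state x
    return np.exp(-0.5 * ((z - expected_z) / sensor_noise) ** 2) / (
        sensor_noise * np.sqrt(2.0 * np.pi)
    )
```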
The basic idea of particle filters is that any pdf can
be represented as a set of samples (particles).
The density of your samples in one area of the state space represents the probability of that region.
This method can represent any arbitrary
distribution, making it good for non-Gaussian,
multi-modal pdfs.
The key idea is that you find an approximate representation of a complex model (any arbitrary pdf) rather than an exact representation of a simplified model (a Gaussian).
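To make the sample-based representation concrete, here is a small sketch (with made-up mixture parameters) showing how the density of particles in a region stands in for the probability mass of a non-Gaussian, bimodal pdf:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw particles from a bimodal (non-Gaussian) pdf: a mixture of two Gaussians.
n = 1000
from_mode_a = rng.random(n) < 0.3                   # 30% from mode A, 70% from mode B
particles = np.where(from_mode_a,
                     rng.normal(-2.0, 0.5, n),      # mode A
                     rng.normal(+3.0, 1.0, n))      # mode B

# The particle density approximates the pdf: count particles per region.
hist, edges = np.histogram(particles, bins=20, density=True)
print(hist)   # higher values where the pdf has more mass
```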
How do you sample from your posterior?
You had some belief from the last time step that you know how to update with your motion model; this gives the prior belief q(x).
Sample from q(x), and then for each sample that you made, weight it using what we will call an 'importance weight', based on the observations made.
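A minimal importance-weighting sketch, assuming for illustration that the sensor measures the state directly with Gaussian noise (the proposal, observation, and noise values are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from the prior / proposal q(x), here an arbitrary Gaussian guess.
particles = rng.normal(0.0, 2.0, size=500)

# Weight each sample by how well it explains the observation z.
z = 1.5
sensor_noise = 0.5
weights = np.exp(-0.5 * ((z - particles) / sensor_noise) ** 2)
weights /= weights.sum()      # normalize so the weights form a distribution

# The weighted particles now approximate the posterior p(x | z).
posterior_mean = np.sum(weights * particles)
```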
To start the algorithm, we need the initial belief state, p(x_0). This is just our initial guess of the
pdf. For robot localization, if we have no idea,
we can just scatter particles all over the map.
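For example, a uniform initialization over a hypothetical rectangular map might look like this (the map size and particle count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# No prior knowledge: scatter N particles uniformly over a hypothetical 2D map.
N = 1000
map_width, map_height = 10.0, 8.0
particles = np.column_stack([
    rng.uniform(0.0, map_width, N),    # x positions
    rng.uniform(0.0, map_height, N),   # y positions
])
weights = np.full(N, 1.0 / N)          # uniform initial weights
```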
Then loop with three phases: prediction, update, and resample.
These phases effectively compute the recursive Bayes filter equation, p(x_t | d_0…t) ∝ p(z_t | x_t) ∫ p(x_t | x_t-1, u_t-1) p(x_t-1 | d_0…t-1) dx_t-1, calculating it from right to left.
In the prediction step, we take each particle
and add a random sample from the motion
model.
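In code, the prediction step is just one motion-model sample per particle; this sketch assumes a simple additive 1D motion model with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, u, motion_noise=0.1):
    """Prediction step: push every particle through the motion model.

    Each particle moves by the commanded input u plus a fresh random
    draw from the motion noise, i.e. a sample from p(x_t | x_t-1, u_t-1).
    """
    return particles + u + rng.normal(0.0, motion_noise, size=particles.shape)
```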
In the update step, each particle is assigned a weight equal to the probability of observing the sensor measurements from that particle's state.
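A sketch of the update step under the same assumed Gaussian sensor model; the likelihood inside is the part you would replace with your actual perceptual model p(z_t | x_t):

```python
import numpy as np

def update(particles, z, sensor_noise=0.2):
    """Update step: weight each particle by p(z_t | x_t).

    Here the sensor is assumed to measure the state directly with
    Gaussian noise; any likelihood function could be substituted.
    """
    weights = np.exp(-0.5 * ((z - particles) / sensor_noise) ** 2)
    weights += 1e-300                  # avoid dividing by zero if all weights vanish
    return weights / weights.sum()
```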
In the resample step, a new set of particles is chosen so that each particle survives in proportion to its weight.
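Finally, a sketch of the resample step using simple multinomial resampling (other schemes such as systematic resampling also work), followed by how the three sketched steps would chain into one loop iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Resample step: draw a new particle set, with replacement,
    so each particle survives in proportion to its weight."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# One full predict/update/resample cycle, using the sketches above:
# particles = predict(particles, u)
# weights   = update(particles, z)
# particles = resample(particles, weights)
```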