Advanced Mobile Robotics
Probabilistic Robotics
Dr. Jizhong Xiao
Department of Electrical Engineering
City College of New York
[email protected]
Robot Navigation
Fundamental problems in providing a mobile robot with autonomous capabilities:
• Where am I going?
Mission Planning
• What’s the best way there?
Path Planning
• Where have I been? How to create an environmental map with imperfect sensors?
Mapping
• Where am I? How can a robot tell where it is on a map?
Localization
• What if you’re lost and don’t have a map?
Robot SLAM
Representation of the Environment
• Environment Representation
– Continuous Metric: x, y, θ
– Discrete Metric: metric grid
– Discrete Topological: topological grid
Localization, Where am I?
[Figure: general schematic for mobile robot localization — a position prediction from encoder data (e.g. odometry) is matched against a map database; matched observations from perception (raw sensor data or extracted features) feed the position update (estimation), which yields the robot’s position.]
• Odometry, Dead Reckoning
• Localization based on external sensors, beacons or landmarks
• Probabilistic Map Based Localization
Localization Methods
• Mathematical Background: Bayes Filters
• Markov Localization:
– Central idea: represent the robot’s belief by a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves
– Markov Assumption: past and future data are independent if one knows the current state
• Kalman Filtering
– Central idea: pose the localization problem as a sensor fusion problem
– Assumption: Gaussian distribution function
• Particle Filtering
– Central idea: sample-based, nonparametric filter
– Monte-Carlo method
• SLAM (simultaneous localization and mapping)
• Multi-robot localization
Markov Localization
• Applying probability theory to robot localization
• Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
• This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).
• During each update, the probability for each state (element) of the entire space is updated.
Markov Localization Example
• Assume the robot position is one-dimensional.
The robot is placed somewhere in the environment, but it is not told its location.
The robot queries its sensors and finds out it is next to a door.
Markov Localization Example
The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother.
The robot queries its sensors, and again it finds itself next to a door.
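To make these updates concrete, here is a minimal histogram-filter sketch of the one-dimensional door example in Python. The corridor length, door positions, and sensor/motion noise values are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Hypothetical 10-cell corridor with doors at cells 1, 4, and 7 (an assumption).
doors = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])
belief = np.full(10, 0.1)          # uniform prior: the robot is not told its location

def sense(belief, z_door, p_hit=0.6, p_miss=0.2):
    """Measurement update: weight each cell by the sensor likelihood P(z|x), then normalize."""
    likelihood = np.where(doors == z_door, p_hit, p_miss)
    belief = likelihood * belief
    return belief / belief.sum()

def move(belief, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Motion update: convolve the belief with a noisy one-step motion model (cyclic corridor)."""
    return (p_exact * np.roll(belief, 1) +
            p_under * belief +
            p_over * np.roll(belief, 2))

belief = sense(belief, z_door=1)   # robot senses a door
belief = move(belief)              # robot moves one cell forward; belief gets smoother
belief = sense(belief, z_door=1)   # senses a door again; belief sharpens near doors
print(belief.round(3))
```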
Probabilistic Robotics
• Falls in between model-based and behavior-based techniques
– There are models and sensor measurements, but they are assumed to be incomplete and insufficient for control
– Statistics provides the mathematical glue to integrate models and sensor measurements
• Basic Mathematics
– Probabilities
– Bayes rule
– Bayes filters
• The next slides are provided by the authors of the book “Probabilistic Robotics”; you can download them from the website:
http://www.probabilistic-robotics.org/
Probabilistic Robotics
Mathematical Background
Probabilities
Bayes rule
Bayes filters
Probabilistic Robotics
Key idea: explicit representation of uncertainty using the calculus of probability theory
– Perception = state estimation
– Action = utility optimization
Axioms of Probability Theory
Pr(A) denotes probability that proposition A is true.
• 0 ≤ Pr(A) ≤ 1
• Pr(True) = 1, Pr(False) = 0
• Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
A Closer Look at Axiom 3
Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
[Venn diagram: sets A and B overlap inside the universe “True”; adding Pr(A) and Pr(B) counts the intersection A ∧ B twice, so it is subtracted once.]
Using the Axioms
Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A)
Pr(True) = Pr(A) + Pr(¬A) − Pr(False)
1 = Pr(A) + Pr(¬A) − 0
Pr(¬A) = 1 − Pr(A)
Discrete Random Variables
• X denotes a random variable.
• X can take on a countable number of values in {x1, x2, …, xn}.
• P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.
• P(·) is called the probability mass function.
• E.g. P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩
Continuous Random Variables
• X takes on values in the continuum.
• p(X=x), or p(x), is a probability density function.
Pr(x ∈ (a, b)) = ∫ₐᵇ p(x) dx
• E.g. [Figure: an example probability density p(x) plotted over x]
Joint and Conditional Probability
• P(X=x and Y=y) = P(x,y)
• If X and Y are independent then
P(x,y) = P(x) P(y)
• P(x | y) is the probability of x given y
P(x | y) = P(x,y) / P(y)
P(x,y) = P(x | y) P(y)
• If X and Y are independent then
P(x | y) = P(x)
Law of Total Probability, Marginals
Discrete case:
Σx P(x) = 1
P(x) = Σy P(x, y)
P(x) = Σy P(x | y) P(y)
Continuous case:
∫ p(x) dx = 1
p(x) = ∫ p(x, y) dy
p(x) = ∫ p(x | y) p(y) dy
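As a quick numeric check of the discrete marginal, a sketch in Python; the sensor likelihoods (0.6, 0.3) and the uniform prior are borrowed from the door example that appears later in these slides:

```python
# Marginal P(z) = sum_x P(z|x) P(x), with x ranging over {open, not_open}
p_z_given_x = {"open": 0.6, "not_open": 0.3}   # sensor model P(z|x)
p_x = {"open": 0.5, "not_open": 0.5}           # prior P(x)

p_z = sum(p_z_given_x[x] * p_x[x] for x in p_x)
print(p_z)  # 0.45 -- this is the evidence term that normalizes Bayes rule
```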
Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence
If y is a new sensor reading:
– p(x): prior probability distribution
– p(x | y): posterior probability distribution
– p(y | x): generative model, characteristics of the sensor
– p(y): does not depend on x
Normalization
P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)
η = P(y)⁻¹ = 1 / Σx P(y | x) P(x)
Algorithm:
∀x: aux(x|y) = P(y | x) P(x)
η = 1 / Σx aux(x|y)
∀x: P(x | y) = η · aux(x|y)
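A minimal Python sketch of this two-pass normalization algorithm (the binary door state and the 0.6/0.3 sensor model are taken from the example below; the function name is ours):

```python
def bayes_update(prior, likelihood):
    """Posterior P(x|y) = eta * P(y|x) * P(x), computed exactly as in the algorithm above."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux(x|y) = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                       # eta = 1 / sum_x aux(x|y)
    return {x: eta * a for x, a in aux.items()}         # P(x|y) = eta * aux(x|y)

posterior = bayes_update(prior={"open": 0.5, "not_open": 0.5},
                         likelihood={"open": 0.6, "not_open": 0.3})
print(posterior)  # {'open': 0.667, 'not_open': 0.333}, approximately
```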
Bayes Rule
with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Conditioning
• Total probability:
P(x) = ∫ P(x, z) dz
P(x) = ∫ P(x | z) P(z) dz
P(x | y) = ∫ P(x | y, z) P(z | y) dz
Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
is equivalent to
P(x | z) = P(x | z, y)
and
P(y | z) = P(y | z, x)
Simple Example of State Estimation
• Suppose a robot obtains measurement z
• What is P(open|z)?
Causal vs. Diagnostic Reasoning
• P(open|z) is diagnostic.
• P(z|open) is causal.
• Often causal knowledge is easier to obtain (just count frequencies!).
• Bayes rule allows us to use causal knowledge:
P(open | z) = P(z | open) P(open) / P(z)
Example
• P(z | open) = 0.6, P(z | ¬open) = 0.3
• P(open) = P(¬open) = 0.5
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
= (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67
• z raises the probability that the door is open.
Combining Evidence
• Suppose our robot obtains another
observation z2.
• How can we integrate this new
information?
• More generally, how can we estimate P(x | z1, …, zn)?
Recursive Bayesian Updating
P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
Markov assumption: zn is independent of z1, …, zn−1 if we know x. Then:
P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
= η P(zn | x) P(x | z1, …, zn−1)
= η1…n [∏i=1…n P(zi | x)] P(x)
Example: Second Measurement
• P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6
• P(open | z1) = 2/3
P(open | z2, z1) = P(z2 | open) P(open | z1) / (P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1))
= (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8 = 0.625
• z2 lowers the probability that the door is open.
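A short sketch of this recursive update in Python, chaining the two door measurements from the slides:

```python
def recursive_update(belief_open, p_z_open, p_z_not_open):
    """One step of recursive Bayesian updating for the binary door state."""
    numerator = p_z_open * belief_open
    evidence = p_z_open * belief_open + p_z_not_open * (1.0 - belief_open)
    return numerator / evidence

b = 0.5                                  # uniform prior P(open)
b = recursive_update(b, 0.6, 0.3)        # after z1: 2/3 -- z1 raises P(open)
b = recursive_update(b, 0.5, 0.6)        # after z2: 0.625 -- z2 lowers P(open)
print(b)
```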
Actions
• Often the world is dynamic, since
– actions carried out by the robot,
– actions carried out by other agents,
– or simply time passing by
change the world.
• How can we incorporate such actions?
Typical Actions
• The robot turns its wheels to move
• The robot uses its manipulator to grasp an
object
• Plants grow over time…
• Actions are never carried out with absolute
certainty.
• In contrast to measurements, actions generally
increase the uncertainty.
Modeling Actions
• To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf
P(x | u, x′)
• This term specifies the probability that executing u changes the state from x′ to x.
Integrating the Outcome of Actions
Continuous case:
P(x | u) = ∫ P(x | u, x′) P(x′) dx′
Discrete case:
P(x | u) = Σx′ P(x | u, x′) P(x′)
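In code, the discrete case is a weighted sum over the predecessor states x′; a minimal sketch (the dictionary-based state space is our assumption):

```python
def predict(belief, transition):
    """Prediction step: P(x|u) = sum_{x'} P(x|u,x') * P(x'),
    with transition[x][x_prev] = P(x|u,x_prev)."""
    return {x: sum(transition[x][x_prev] * belief[x_prev] for x_prev in belief)
            for x in belief}
```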
Example: Closing the door
State Transitions
P(x | u, x′) for u = “close door”:
[State diagram: from “open”, the action leads to “closed” with probability 0.9 and stays “open” with probability 0.1; from “closed”, it stays “closed” with probability 1 and goes to “open” with probability 0.]
If the door is open, the action “close door” succeeds in 90% of all cases.
Current belief (from the two measurements above): P(open) = 5/8, P(closed) = 3/8.
Example: The Resulting Belief
P(closed | u) = Σx′ P(closed | u, x′) P(x′)
= P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
= (9/10)(5/8) + (1)(3/8) = 15/16
P(open | u) = Σx′ P(open | u, x′) P(x′)
= P(open | u, open) P(open) + P(open | u, closed) P(closed)
= (1/10)(5/8) + (0)(3/8) = 1/16
= 1 − P(closed | u)
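Plugging the “close door” transition model and the prior belief into the predict sketch above reproduces these numbers:

```python
# P(x | u, x') for u = "close door", indexed as transition[x][x_prev]
transition = {"closed": {"open": 0.9, "closed": 1.0},
              "open":   {"open": 0.1, "closed": 0.0}}
belief = {"open": 5 / 8, "closed": 3 / 8}

predicted = predict(belief, transition)   # predict() is the sketch shown earlier
print(predicted)  # {'open': 0.0625, 'closed': 0.9375}, i.e. 1/16 and 15/16
```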
Bayes Filters: Framework
• Given:
– Stream of observations z and action data u:
dt = {u1, z1, …, ut, zt}
– Sensor model P(z|x).
– Action model P(x|u,x’).
– Prior probability of the system state P(x).
• Wanted:
– Estimate of the state X of a dynamical system.
– The posterior of the state is also called Belief:
Bel(xt) = P(xt | u1, z1, …, ut, zt)
Markov Assumption
• Measurement probability: p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)
• State transition probability: p(xt | x1:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
Markov Assumption:
– past and future data are independent if one knows the current state
Underlying Assumptions
• Static world, Independent noise
• Perfect model, no approximation errors
Bayes Filters
z = observation
u = action
x = state
Bel(xt) = P(xt | u1, z1, …, ut, zt)
(Bayes) = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)
(Markov) = η P(zt | xt) P(xt | u1, z1, …, ut)
(Total prob.) = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
(Markov) = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
(Markov) = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, zt−1) dxt−1
= η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
Bayes Filter Algorithm
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel′(x) = P(z | x) Bel(x)
6.       η = η + Bel′(x)
7.     For all x do
8.       Bel′(x) = η⁻¹ Bel′(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel′(x) = ∫ P(x | u, x′) Bel(x′) dx′
12.  Return Bel′(x)
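For a finite state space, the algorithm transcribes almost line for line into Python; a sketch (the dictionary-based beliefs and the model indexing are our assumptions):

```python
def bayes_filter(belief, d, sensor_model=None, action_model=None):
    """One Bayes filter step over a discrete state space.

    belief:       dict mapping state -> probability, i.e. Bel(x)
    d:            ("z", observation) for perceptual data, ("u", action) for action data
    sensor_model: P(z|x), indexed as sensor_model[z][x]
    action_model: P(x|u,x'), indexed as action_model[u][x][x_prev]
    """
    kind, value = d
    if kind == "z":                                    # perceptual data item z
        new_belief = {x: sensor_model[value][x] * belief[x] for x in belief}
        eta = 1.0 / sum(new_belief.values())           # normalizer (lines 5-8)
        return {x: eta * p for x, p in new_belief.items()}
    else:                                              # action data item u (lines 9-11)
        return {x: sum(action_model[value][x][xp] * belief[xp] for xp in belief)
                for x in belief}
```

Feeding it the alternating observation/action stream dt reproduces the door example above.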
Bayes Filters are Family!
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
Summary
• Bayes rule allows us to compute
probabilities that are hard to assess
otherwise.
• Under the Markov assumption, recursive
Bayesian updating can be used to
efficiently combine evidence.
• Bayes filters are a probabilistic tool for
estimating the state of dynamic systems.
Thank You