Beliefs in Probabilistic Robotics


Part 3 of 3: Beliefs in Probabilistic Robotics
References and Sources of Figures
• Part 1: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd ed., Prentice Hall, Chapter 13
• Part 2: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd ed., Prentice Hall, Chapter 14
• Part 3: Sebastian Thrun, Wolfram Burgard, and Dieter Fox, Probabilistic Robotics, MIT Press, Chapter 2
Revisit the Mobile Robot Example
Scenario
• a mobile robot uses its camera to detect the state of the door (open or closed)
• the camera is noisy:
– if the door is in fact open:
• the probability of detecting it open is 0.6
– if the door is in fact closed:
• the probability of detecting it closed is 0.8
Scenario
• the robot can use its manipulator to push open the door
– if the door is in fact closed:
• the probability of the robot opening it is 0.8
Scenario
• At time t0, the probability of the door being open is 0.5
• Suppose at t1 the robot takes no control action but it senses an open door; what is the probability that the door is open?
Scenario
• Using the Bayes filter, we will see that:
– at time t1, the probability that the door is open is 0.75 after taking a measurement (a quick check of this value follows below)
– at time t2, the probability that the door is open is ~0.983 after the robot pushes open the door and takes another measurement
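As a quick check of the t1 value: the sensor model above gives P(sense open | open) = 0.6 and P(sense open | closed) = 1 − 0.8 = 0.2, and the prior is P(open) = 0.5, so Bayes' rule gives

```latex
\[
P(\text{open} \mid \text{sense open})
  = \frac{0.6 \times 0.5}{0.6 \times 0.5 + 0.2 \times 0.5}
  = \frac{0.3}{0.4}
  = 0.75
\]
```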
Belief Distribution
bel(xt) = P(xt | z1:t, u1:t)
the probability distribution over the state xt at time t, conditioned on all past measurements z1:t and all past controls u1:t
Belief Distribution
bel̄(xt) = P(xt | z1:t-1, u1:t)
the probability distribution over the state xt at time t, conditioned on all past measurements z1:t-1 (i.e. before incorporating zt) and all past controls u1:t; the bar over bel distinguishes this quantity from the posterior belief bel(xt)
It is often referred to as the prediction in the context of probabilistic filtering, because it predicts the state xt at time t based on the previous state posterior, before incorporating the measurement zt at time t
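Written out as in the cited Thrun et al. chapter, the prediction integrates the motion model p(xt | ut, xt-1) over the previous posterior; for a discrete state such as the door (open/closed), the integral becomes a sum over the two states:

```latex
\[
\overline{bel}(x_t) = \int p(x_t \mid u_t,\, x_{t-1})\, bel(x_{t-1})\, \mathrm{d}x_{t-1}
\]
```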
Algorithm of Bayes Filter
Bayes_filter(bel(xt-1), ut, zt):
  for all xt do
    Predict x after exerting u:
      bel̄(xt) = ∫ p(xt | ut, xt-1) bel(xt-1) dxt-1
    Update belief of x after making a measurement:
      bel(xt) = η p(zt | xt) bel̄(xt)
  endfor
  return bel(xt)
(η is a normalizing constant that makes the updated belief sum or integrate to 1)
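A minimal runnable sketch of this loop for the door example, assuming a discrete two-state world. Only the probabilities come from the scenario slides; the names (STATES, SENSOR, MOTION, bayes_filter) are illustrative, and an open door is assumed to stay open when pushed:

```python
# Discrete Bayes filter sketch for the two-state door example.
STATES = ("open", "closed")

# Sensor model p(z | x): numbers from the scenario slides.
SENSOR = {
    "open":   {"sense_open": 0.6, "sense_closed": 0.4},
    "closed": {"sense_open": 0.2, "sense_closed": 0.8},
}

# Motion model p(x_t | u_t, x_{t-1}); an open door is assumed to stay open.
MOTION = {
    "do_nothing": {"open":   {"open": 1.0, "closed": 0.0},
                   "closed": {"open": 0.0, "closed": 1.0}},
    "push":       {"open":   {"open": 1.0, "closed": 0.0},
                   "closed": {"open": 0.8, "closed": 0.2}},
}

def bayes_filter(bel, u, z):
    # Predict: bel_bar(x_t) = sum over x_{t-1} of p(x_t | u_t, x_{t-1}) * bel(x_{t-1})
    bel_bar = {x: sum(MOTION[u][x_prev][x] * bel[x_prev] for x_prev in STATES)
               for x in STATES}
    # Update: bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t)
    unnormalized = {x: SENSOR[x][z] * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

bel = {"open": 0.5, "closed": 0.5}                    # belief at t0
bel = bayes_filter(bel, "do_nothing", "sense_open")   # t1: P(open) = 0.75
bel = bayes_filter(bel, "push", "sense_open")         # t2: P(open) ≈ 0.983
print(bel)
```

Running it reproduces the 0.75 and ~0.983 figures quoted earlier.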
Example: A Mobile Robot Estimating the State of a Door
[Figures omitted: belief histograms at t0, with the noisy sensor, and with the uncertainty from the manipulator]
As you see, the degree of belief changes (is updated) over time as actuations are performed and measurements are made.
Summary
• Reviewed probability theory, Bayes' rule, and the product rule
• How random variables and their causal relations are represented in DAGs, i.e. Bayesian Networks (BN)
• Dynamic Bayesian Networks (DBN): adding the temporal aspect to Bayesian Networks
• How a DBN can be used to characterize the evolution of states (xt), controls (ut), and measurements (zt) in robotics
• Through the DBN, discussed the two overarching steps in localization filters: predicting and updating beliefs
• Demonstrated how these two steps work in the Bayes filter algorithm
• Explained where Bayes' rule is used in the Bayes filter