Transcript 25-Concepts

AI Robotics
Concepts
Chapter 25
Content
• Tasks
• Effectors
• Sensors
• Agent Architectures
• Actions in continuous space
Robot
• Definition from the “Robot Institute of America”
(fits a conveyor belt)
A robot is a programmable, multifunction manipulator
designed to move material, parts, tools, or specialized
devices through variable programmed motions for the
performance of a variety of tasks.
• Russell & Norvig definition (excludes softbots)
A robot is an active, artificial agent whose
environment is the physical world.
What is 'Artificial Intelligence'?
• What is 'Artificial Intelligence' (Jim Hendler, CNN, 1999)?
– What computers cannot do yet
• E.g., NLP started as an AI field. Once successful (an
objective mathematical model was found), it became a field of its own.
• Getting out of AI (conventional wisdom):
– Artificial Neural Networks are the 2nd best solution to any problem
– Genetic Algorithms are the 3rd best solution to any problem
– The (non-AI) mathematical solution, once found, is the best
solution to any problem.
• AI is about generalization to unseen and unexpected
situations. Once the situation is explicitly taken into
account, we can no longer speak of AI.
The real world as an environment
• Inaccessible (sensors do not tell all)
– need to maintain a model of the world
• Nondeterministic (wheels slip, parts break)
– need to deal with uncertainty
• Nonepisodic (effects of an action change over
time)
– need to handle sequential decision problems
• Dynamic
– need to know when to plan and when to reflex
• Continuous
– cannot enumerate possible actions
Tasks
• Manufacturing and material handling
– not autonomous: manufacturing, handling
– simple machines are the best solution
• need accuracy, power, shapes put in standard cradles
• Gofer robots (with autonomy)
– Mobile robots (mobots): couriers, under-water spies
• Hazardous environments
– toxic or deep sea environment
• Telepresence and Virtual Reality
– teleoperated robots for bomb scares in New York
• Augmentation of human abilities
– artificial fingers, eyes, exoskeletons
Components
• Rigid Links
• Rigid Joints
• End Effectors (screwdrivers)
– connected to “actuators” (convert electricity to
movement)
• an actuator = a degree of freedom
• uses: locomotion, manipulation, processing
DOF
• Holonomic Robots (same # effective and
controllable DOF)
– a car has 3 effective but 2 controllable DOF
• robotic arms are holonomic
• mobile robots are not (more complex mechanics)
Stanford manipulator 6DOF
5 revolute + 1 prismatic joint
non-holonomic 4-wheeled vehicle
Distance sensors
laser scanner & range scan
• Sensors
– force, torque, tactile, range (sonar), image,
odometry & shaft decoders (proprioceptive
sensor), inertial, GPS
Locomotion
• Types:
– statically stable (wheels, caterpillar tracks, slow legs)
– dynamically stable (legs, hopping)
• less practical for most robots (expensive)
Examples: Marc Raibert’s 4-legged “Big Dog”; a DFKI Bremen robot
Perception
• Dynamic Bayesian network
– update the belief state
– the filtering equations combine:
• the transition model / motion model
• the sensor model
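The filtering update above can be sketched with a toy discrete Bayes filter; the corridor map, door sensor, and all probabilities below are assumptions for illustration, not from the slides:

```python
# Discrete Bayes filter on a 1-D corridor (toy example): a noisy
# "door detector" sensor and a noisy "move right" action.

def normalize(belief):
    s = sum(belief)
    return [p / s for p in belief]

def predict(belief, p_move=0.8):
    """Transition/motion model: move one cell right with prob. p_move."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[(i + 1) % n] += p * p_move   # intended motion (wrap-around corridor)
        new[i] += p * (1 - p_move)       # wheel slip: stay put
    return new

def update(belief, doors, z, p_hit=0.9):
    """Sensor model: weight each cell by how well it explains reading z."""
    likes = [p_hit if doors[i] == z else 1 - p_hit for i in range(len(belief))]
    return normalize([b * l for b, l in zip(belief, likes)])

doors = [1, 0, 0, 1, 0]               # map: which cells have a door
belief = [0.2] * 5                    # uniform prior
belief = update(belief, doors, 1)     # sensed a door
belief = predict(belief)              # moved right
belief = update(belief, doors, 0)     # sensed no door
```

Because two map locations explain the readings equally well, the posterior here stays bimodal, the same symmetry effect noted in the MCL run later in the chapter.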
Localization
• Problems
– Tracking
– Global localization
– Kidnapping problem
• Techniques
– Probabilistic Prediction
– Landmarks Models or Range Scan Models
Localization
• Monte Carlo Localization (MCL, particle
filtering)
– starts with a uniform distribution of particles
– computes/updates their probabilities (weights)
– generates another set of samples based on the new
distribution (resampling)
• Kalman filters
– need local linearization, e.g. using Taylor expansion
– Uncertainty grows until landmarks are spotted
– uncertainty about landmark identity => data association problem
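A minimal sketch of the MCL loop (motion update, sensor weighting, resampling); the 1-D corridor, wall position, and noise values are assumptions for illustration:

```python
import math
import random

def mcl_step(particles, motion, measured_range, wall_at=10.0, sigma=0.5):
    # 1. Motion update: move each particle, adding slip noise.
    particles = [p + motion + random.gauss(0, 0.1) for p in particles]
    # 2. Sensor update: Gaussian likelihood of the range reading to the wall.
    weights = [math.exp(-(((wall_at - p) - measured_range) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resampling: draw a new particle set from the weighted distribution.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]  # global uncertainty
true_pos = 3.0
for _ in range(3):
    true_pos += 1.0                        # the robot drives right
    z = 10.0 - true_pos                    # range reading to the wall at x = 10
    particles = mcl_step(particles, motion=1.0, measured_range=z)
# The particle cloud should now concentrate near the true position (x = 6).
```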
MCL Run
initial global uncertainty
bimodal uncertainty (symmetry)
unimodal uncertainty
Mapping
• Simultaneous localization and mapping
SLAM
– similar to the localization (filtering) problem
• has the map in the posterior
• Often uses Kalman filters + Landmarks
Usual Mapping
• Other approaches are less probabilistic
and more ad-hoc, and they represent a
trend!
Planning to Move
• Two types of motion:
– point-to-point (free motion)
– compliant motion (touching a wall, a screw)
• Two representations of planning problems
– configuration space
– workspace
• Planning algorithms
– cell decomposition
– skeletonization.
Configuration Space
• The Workspace is the 3D world around.
– typically a position is needed for each joint.
• The configuration space is the n-dimensional space of
possible robot configurations.
– sometimes of smaller dimension than the workspace, as it
integrates linkage constraints.
• occupied space vs. free space
• Sometimes both spaces are required:
– Direct kinematics (configuration → workspace)
– Inverse kinematics (workspace → configuration)
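For a planar two-link arm, direct kinematics is a short closed-form computation; the link lengths and angles below are illustrative assumptions:

```python
import math

def direct_kinematics(q1, q2, l1=1.0, l2=1.0):
    """Map a configuration (joint angles q1, q2) to the workspace
    position (x, y) of the end effector."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Arm fully stretched along the x-axis:
x, y = direct_kinematics(0.0, 0.0)           # -> (2.0, 0.0)
# Elbow bent 90 degrees:
x, y = direct_kinematics(0.0, math.pi / 2)   # -> approximately (1.0, 1.0)
```

Inverse kinematics inverts this map and is harder: a reachable workspace point generally has two configurations (“elbow up” and “elbow down”), and some points have none.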
Workspace vs Configuration Space
Workspace of a robot with 2 DOF
box with flat hanging obstacle
Configuration space of same robot
White regions are free of collisions
The dot is for the current configuration
Robot configurations in workspace and configuration space.
Cell decomposition methods
• Continuous space is hard to work with
• Discretization uses cell decomposition
– Grid
• Easy to scan with dynamic programming
• Difficult to handle “gray” (partially occupied) boxes →
– incompleteness
– unsoundness
• May need further subdivision → exponential in dimensions
• Difficult to prove that a box is fully free
– Quad-trees
– Exact cell decomposition (complex shapes)
– Potential Field → optimization via value iteration
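A grid decomposition with unit step costs can be scanned by dynamic programming, here a breadth-first wavefront expanding from the goal; the 3x3 map is an assumed toy example:

```python
from collections import deque

def grid_values(grid, goal):
    """Value of each free cell = number of steps to the goal (wavefront DP)."""
    rows, cols = len(grid), len(grid[0])
    value = {goal: 0}
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in value):
                value[(nr, nc)] = value[(r, c)] + 1
                frontier.append((nr, nc))
    return value

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = occupied cell
        [0, 0, 0]]
value = grid_values(grid, goal=(2, 0))
# value[(0, 0)] == 6: the path must detour around the occupied row.
```

Greedy descent on `value` from any start cell yields a shortest grid path; this is the discrete analogue of the value function over the configuration space.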
Value function and path
for discrete cell approximation
of the configuration space
Same path in workspace coordinates
Robot bends elbow to avoid collision.
Potential
A repelling potential field pushes
robot away from obstacles
Path found by minimizing path length
and the potential
Skeletonization methods
• Skeleton – a lower dimensional representation
of the configuration space
– Voronoi graph (points equally distant from two
obstacles)
– Probabilistic roadmap
Voronoi
The Voronoi graph is the set of points
equidistant to two or more obstacles
in configuration space
Probabilistic roadmap, 400 randomly
chosen points in free space
Skeletonization methods
• Skeleton – a lower dimensional representation
of the configuration space
– Voronoi graph (points equally distant from two
obstacles)
• Reduces the dimension of the path planning problem.
• Difficult to compute in the configuration space.
• Difficult for many dimensions.
• Leads to large/inefficient detours.
– Probabilistic roadmap
• Random generation of many points in the configuration
space (discarding occupied ones)
• Lines between close points, if they are in free space.
– Distribution of points may be based on need and promise
– Scales best
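A probabilistic roadmap in a 2-D configuration space can be sketched in a few lines; the circular obstacle, sample count, and connection radius below are assumptions for illustration:

```python
import math
import random

def collides(p, obstacles):
    """Point-in-circle test; circles stand in for configuration-space obstacles."""
    return any(math.dist(p, center) < r for center, r in obstacles)

def segment_free(a, b, obstacles, steps=20):
    """Check the straight segment a-b by sampling points along it."""
    return all(not collides((a[0] + (b[0] - a[0]) * t / steps,
                             a[1] + (b[1] - a[1]) * t / steps), obstacles)
               for t in range(steps + 1))

def build_prm(n, obstacles, radius=0.3):
    random.seed(1)
    nodes = []
    while len(nodes) < n:                  # sample configurations at random,
        p = (random.random(), random.random())
        if not collides(p, obstacles):     # discarding occupied ones
            nodes.append(p)
    # Connect close pairs whose straight-line segment stays in free space.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(nodes[i], nodes[j]) < radius
             and segment_free(nodes[i], nodes[j], obstacles)]
    return nodes, edges

obstacles = [((0.5, 0.5), 0.2)]            # one circular obstacle
nodes, edges = build_prm(100, obstacles)
```

Path planning then reduces to graph search over `edges`, after linking the start and goal configurations into the roadmap the same way.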
Planning Uncertain Movements
• Errors
– from the stochastic model
– from the approximation algorithms (e.g., particle filtering)
• Common simplification:
– Assume the “most likely state”
– “Works” when uncertainty is limited
• Solution
– (with observable states) Markov Decision Processes
• Returns optimal policy (action in each state)
– Called “Navigation function”
– (with partially observable states) POMDPs
• Creates policies based on distributions of state probabilities
– Not practical
Robust Methods
• Assumes a bounded amount of uncertainty
– an interval of possibilities
– works if the plan succeeds for every possibility in that interval
– “conformant planning” is state-independent planning (Chap.
12)
• Fine Motion Planning (FMP)
– assumes motion in close proximity to static objects
– based on sensor feedback, used where the required motion is
too fine for open-loop control
• uses “guarded motions” (the guard is a predicate stating when to end
the motion) combined with “compliant motions” (motions that slide
along a surface if needed)
– constructing an FMP uses: the configuration space, the cone of
uncertainty in motion directions, and the sensors
• Generates a set of steps guaranteeing success in the worst case
• May not be the most efficient.
Fine Motion Planning
2-dimensional environment, velocity uncertainty
cone and envelope of possible robot movements.
Fine Motion Plan
First motion. No matter the error,
we know that the final
configuration will be left of the hole
2nd motion command: even with
error we end up inside the hole
Moving
• Computing forces and inertia
– Dynamic state = position + velocities
– Relation kinematics-velocity via differential eq.
• Controller: keeps the robot on track
– reference controllers → keep the robot on the
reference path
– optimizing a cost function →
• optimal controllers (e.g., policies of
Markov decision processes, MDPs)
Control
a) proportional control with gain factor 1.
b) proportional control with gain 0.1
c) proportional derivative control: gain 0.3 for proportional and 0.8 for derivative
Robot tries to follow the path shown in gray
Controllers
• Control that is proportional to
displacement: P-controllers
– at=KP(y(t)-xt)
• y(t) desired location
• KP – gain parameter
• Assume small perturbations
– Stable controller: bounded error y(t)-xt.
• E.g. P-controllers
– Strictly Stable controller: can return to
reference path
P(I)D-Controllers
• Proportional Derivative controllers:
– at=KP(y(t)-xt)+KD*d(y(t)-xt)/dt
• Proportional Integrative Derivative
Controllers:
– at=KP(y(t)-xt)+KI*∫(y(t)-xt)dt+KD*d(y(t)-xt)/dt
• Corrects systematic errors
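The PID formula above in discrete time, driving a toy 1-D point mass; the gains, time step, and plant are illustrative assumptions:

```python
class PID:
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def control(self, reference, state):
        """a_t = KP*e + KI*sum(e)*dt + KD*(e - e_prev)/dt, with e = y(t) - x_t."""
        error = reference - state
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: the control signal is an acceleration on a point mass.
pid = PID(kp=0.3, ki=0.05, kd=0.8)
x, v = 0.0, 0.0
for _ in range(300):
    a = pid.control(reference=1.0, state=x)
    v += a * pid.dt
    x += v * pid.dt
# x should have settled close to the reference value 1.0
```

The derivative term damps the oscillation a pure P-controller would show, and the integral term removes the steady-state error; the gains 0.3 (proportional) and 0.8 (derivative) mirror those in the control figure above.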
Potential for Control
The robot ascends a potential field composed of repelling
forces asserted by the obstacles and an attracting force toward the goal configuration
successful path
Has many local minima.
Does not depend on velocities
local optimum
Reactive Control
• Reflex agent design = reactive control
– for cases with intractably many DOFs or too few sensors
Genghis, a hexapod robot
– simple rules: 3 legs on the ground at a time, a simple per-leg control
• Emergent behavior (no explicit environment model)
An augmented finite state machine (AFSM) for the control of a single leg. If the leg is stuck,
it is moved increasingly higher
Augmented Finite State Machines
• Finite States augmented with timers
• Timers enable state changes after
preprogrammed periods of time
• AFSM lets behaviors override each other:
– a suppression signal overrides the normal
input signal
– an inhibition signal causes the output to be
completely inhibited
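A minimal AFSM sketch for one leg, combining timer-driven transitions with a suppression input that overrides the normal signal; the states, period, and stuck flag are illustrative assumptions:

```python
class LegAFSM:
    """Finite state machine for one leg, augmented with a timer."""
    def __init__(self, period):
        self.state = "down"
        self.period = period     # timer: swing the leg every `period` ticks
        self.clock = 0

    def step(self, stuck=False, suppress=None):
        # A suppression signal overrides the normal input signal.
        signal = stuck if suppress is None else suppress
        self.clock += 1
        if self.state == "down" and (signal or self.clock >= self.period):
            self.state, self.clock = "up", 0   # lift: timer expired or leg stuck
        elif self.state == "up":
            self.state = "down"                # put the leg back down next tick
        return self.state

leg = LegAFSM(period=3)
trace = [leg.step() for _ in range(6)]
# trace == ["down", "down", "up", "down", "down", "up"]: a periodic gait;
# passing stuck=True lifts the leg immediately instead of waiting.
```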
Robotic Software Architectures
• Software Architecture = methodology for
structuring algorithms
– Combining reactive with deliberative control
• Reactive ctrl is sensor driven, for low level decisions
• Leads to hybrid architectures.
• Reactive architectures
– Subsumption architecture (1986)
• Each layer’s goal subsumes that of underlying layers.
– bottom-up design
– avoid objects → wander → explore
• Composable augmented finite state machines (AFSM, see
hexapod).
– Assumes good sensors; focuses on one task; complex
Three-layer architecture
• Most popular hybrid architecture
– Reactive layer (low-level control, milliseconds)
– Executive layer:
• selects which reactive control to invoke
• follows waypoints proposed by the deliberative layer
– Deliberative layer (planning, minutes per cycle)
• Other possible layers
– User Interface Layer
– Inter-robot interface
Summary
• Chapters to read
– 11.4 GraphPlan
– 12. Planning and Acting in Real World
– 14. Bayesian Nets
– 15. (Filtering, HMM, Kalman, DBN)
– 17 (Decision) 17.1 to 17.4