
Advanced Graphics
Computer Animation
Autonomous Agents
Spring 2002
Professor Brogan
Quick Reminder
• Assignment 1 “take-away message”
– It’s hard to build intuitive interfaces
• Adding knots to a spline (beginning, middle, end)
– Graphics makes it easy to add feedback that lets
the user decide how to accomplish tasks
• Highlight potential objects of an action before execution
and change their graphical state once again when an
action is initiated
Autonomous Agents
• Particles in simulation are dumb
• Make them smart
– Let them add forces
• Virtual jet packs
– Let them select destinations
– Let them select neighbors
• Give them artificial intelligence
History of AI / Autonomous
Agents – CliffsNotes Version
• 1950s – Newell, Simon, and McCarthy
– General Problem Solver (GPS)
• Use means-ends analysis
• Subdivide problem
• Use transformation functions (actions) to
subdivide and solve subtasks
– Useful for well-defined tasks
• Theorem proving, word problems, chess
• Iteration and exhaustive search used
History of AI
• 1960s – ELIZA, chess, natural language
processing, neural networks (birth and
death), numerical integration
• 1970-80s – Expert systems, fuzzy logic,
mobile robots, backpropagation networks
– Use “massive” storage capabilities of computers to
store thousands of “rules”
– Continuous (not discrete) inputs
– Multi-layer neural networks
History of AI
• 1990s – Major advances in all areas of AI
– machine learning
– intelligent tutoring
– case-based reasoning
– multi-agent planning
– scheduling
– data mining
– natural language understanding and translation
– vision
– games
So Many Choices
• Important factors:
f(state, actions) = state_new
– # of inputs (state) and outputs (actions)
• Previous states don’t matter (Markov)
• Actions are orthogonal
– Continuous versus discrete variables
– Differentiability of f( )
– Model of f( )
– Costs of actions
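A minimal sketch of the f(state, actions) = state_new framing, in Python; the names State and step are illustrative placeholders, not course code:

    from dataclasses import dataclass

    # f(state, action) -> state_new: the whole interface an agent needs.
    @dataclass
    class State:
        position: tuple[float, float]
        velocity: tuple[float, float]

    def step(state: State, action: tuple[float, float], dt: float = 0.1) -> State:
        """One application of f: the action is an acceleration here, and
        previous states don't matter (Markov), only the current one."""
        ax, ay = action
        vx = state.velocity[0] + ax * dt
        vy = state.velocity[1] + ay * dt
        px = state.position[0] + vx * dt
        py = state.position[1] + vy * dt
        return State(position=(px, py), velocity=(vx, vy))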
Example: Path Planning
• State
– Position, velocity, obstacle positions,
goal (hunger, mood, motivations), what you’ve tried
• Actions
– Movement (joint torques), get in car, eat,
think, select new goal
Path Planning
• Do previous states matter?
– Going in circles
• Are actions orthogonal?
– Satisfy multiple goals with one action
• Continuous versus discrete?
• Differentiability of f( )
– If f(state, action1) = state_new_1, does
f(state, 1.1 * action1) = 1.1 * state_new_1?
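One way to make the differentiability question concrete: check numerically whether scaling the action scales the resulting state change. A sketch reusing the State/step toy from above (an assumption, not the course's f):

    # Does f(s, 1.1 * a) move the state 1.1 times as far as f(s, a)?
    # True for the linear toy dynamics above; false once friction,
    # contacts, or obstacle interactions enter f.
    s = State(position=(0.0, 0.0), velocity=(0.0, 0.0))
    a = (1.0, 0.0)
    d1 = step(s, a).position[0]                          # displacement under a
    d2 = step(s, (1.1 * a[0], 1.1 * a[1])).position[0]   # under 1.1 * a
    print(abs(d2 - 1.1 * d1) < 1e-9)                     # True: linear in a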
Path Planning
• Model of f( )
– Do you know the result of f(s, a) before you
execute it?
– Compare path planning in a dark, unknown
room to path planning in a room with a
map
• Costs of actions
– If many actions take state → state_new
• How do you pick just one?
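When several actions reach the same new state, the cost term breaks the tie. A minimal sketch; the cost function here (sum of control magnitudes) is an illustrative assumption:

    # Among equivalent actions, pick the cheapest under some metric
    # (fuel, torque magnitude, time).
    def cheapest(actions, cost):
        return min(actions, key=cost)

    actions = [(2.0, 0.0), (1.0, 0.0), (1.5, 0.5)]
    print(cheapest(actions, cost=lambda a: abs(a[0]) + abs(a[1])))  # (1.0, 0.0)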
Let’s Keep it Simple
• Make particles that can intelligently
navigate through a room with obstacles
• Each particle has a jet pack
– Jet pack can swivel (yaw torque)
– Jet pack can propel forward (forward thrust)
• Previous states don’t matter
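A minimal sketch of this jet-pack particle in Python, assuming unit mass and simplifying the yaw torque to a direct yaw rate; names and step sizes are illustrative:

    import math

    class JetPackParticle:
        """State is position, heading, and forward speed; the two
        controls are a swivel (yaw) and a forward thrust."""
        def __init__(self):
            self.x, self.y = 0.0, 0.0
            self.heading = 0.0        # radians
            self.speed = 0.0

        def apply(self, yaw_rate, thrust, dt=0.1):
            # One step of f(state, action): swivel, then thrust forward.
            self.heading += yaw_rate * dt
            self.speed += thrust * dt             # f = ma with m = 1
            self.x += math.cos(self.heading) * self.speed * dt
            self.y += math.sin(self.heading) * self.speed * dt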
Particle Navigation
• State = position, velocity, obstacle positions
• Action = sequence of n torques and forces
• Solve for action s.t. f(s, a) = goal position
– Minimize sum of torques/forces (absolute value)
– We have a model: f=ma
– Previous states don’t matter
• We don’t care how we got to where we are now
• Tough problem
– Lots of torques and forces to compute
– Obstacle positions could move and force us to recompute
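Written out, this slide's problem is a constrained minimization; the notation below follows the slide's f, s, and a, with a_t = (tau_t, F_t) for the torque and force at step t (the symbols are assumed, not from the slides):

    \min_{a_1, \dots, a_n} \; \sum_{t=1}^{n} \big( |\tau_t| + |F_t| \big)
    \quad \text{subject to} \quad f(s_0, a_1, \dots, a_n) = \text{goal position}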
Simplify Particle Navigation
• State = position, velocity, obstacles
• Action = torque, force
• f(s, a) = new position, velocity
– Find action s.t. position is closer to goal position
• Smaller action space
• Local search – could get caught in local min
(box canyon)
• Adapts to moving obstacles
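A sketch of this greedy, one-step search: simulate each candidate action and keep the one whose predicted state is closest to the goal. It reuses the JetPackParticle sketch above; the candidate set and distance metric are assumptions:

    import copy
    import math

    def greedy_action(particle, goal, candidates):
        """Local search: pick the (yaw_rate, thrust) whose one-step
        prediction lands closest to the goal. As noted above, this can
        stall in a local minimum such as a box canyon."""
        def dist_after(action):
            trial = copy.deepcopy(particle)    # simulate, don't commit
            trial.apply(*action)
            return math.hypot(goal[0] - trial.x, goal[1] - trial.y)
        return min(candidates, key=dist_after)

    candidates = [(yaw, thrust) for yaw in (-0.5, 0.0, 0.5)
                                for thrust in (0.0, 1.0)]
    p = JetPackParticle()
    print(greedy_action(p, (5.0, 0.0), candidates))   # -> (0.0, 1.0)

Because only the next step is planned, a moving obstacle just changes the next evaluation; no stored plan has to be thrown away.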
Multiple Particle Path Planning
• Flocking behavior
– Select an action for each particle in flock
• Avoid collisions with each other
• Avoid collisions with environment
• Don’t stray from flock
– Craig Reynolds: Flocks, Herds, and
Schools: A Distributed Behavioral Model,
SIGGRAPH ’87
Flocking
• Action choices
                One Agent                        All Agents
One Action      Quick, but suboptimal            Slower but better coordination
All Actions     Slower and replanning required   Slowest but complete and optimal
Models to the Rescue
• Do you expect your neighbor to behave
a certain way?
– You have a model of its actions
– You can act independently, yet coordinate
The Three Rules of Flocking
• Go the same speed as neighbors
(velocity matching)
– Minimizes chance of collision
• Move away from neighbors that are too
close
• Move towards neighbors that are too far
away
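The three rules translate directly into vector arithmetic over a boid's neighbors (Reynolds' paper calls them alignment, separation, and cohesion). A sketch with illustrative radii and gains; the boid representation is an assumption:

    import math

    def flocking_velocity(boid, neighbors, too_close=1.0, gain=0.1):
        """Return the boid's new velocity after applying the three rules.
        Boids are dicts with keys x, y, vx, vy."""
        vx, vy = boid['vx'], boid['vy']
        if not neighbors:
            return vx, vy
        n = len(neighbors)
        # Rule 1: velocity matching: steer toward the mean neighbor velocity.
        vx += gain * (sum(b['vx'] for b in neighbors) / n - vx)
        vy += gain * (sum(b['vy'] for b in neighbors) / n - vy)
        for b in neighbors:
            dx, dy = b['x'] - boid['x'], b['y'] - boid['y']
            d = math.hypot(dx, dy) or 1e-9
            if d < too_close:
                # Rule 2: move away from neighbors that are too close.
                vx -= gain * dx / d
                vy -= gain * dy / d
            else:
                # Rule 3: move toward neighbors that are too far away.
                vx += gain * dx / d
                vy += gain * dy / d
        return vx, vy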
Emergent Behaviors
• Combination of three flocking rules
results in emergence of fluid group
movements
• Emergent behavior
– Behaviors that aren’t explicitly programmed
into individual agent rules
• Ants, bees, schooling fish
Local Perception
• Success of flocking depends on local
perception (usually considered a
weakness)
– Border conditions (like cloth)
– Flock splitting
Ethological Motivation
• ethology: the scientific and objective
study of animal behavior especially
under natural conditions
• Perception (find neighbors) and action
• Fish data
Combining the Three Rules
• Averaging the desired actions of the three rules can be bad
– Turn left + Turn right = go straight…
• Force is allocated to rules according to
priority
– Collision avoidance gets all the force it needs first
– Then velocity matching
– Then flock centering
• Starvation is possible
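A sketch of prioritized allocation: each rule requests an acceleration, and higher-priority rules consume a fixed budget first. The budget and request format are assumptions:

    import math

    def allocate(requests, budget):
        """requests: list of (ax, ay) vectors in priority order, e.g.
        collision avoidance, then velocity matching, then centering."""
        total_x = total_y = 0.0
        for ax, ay in requests:
            mag = math.hypot(ax, ay)
            if mag < 1e-9:
                continue
            take = min(mag, budget)       # grant only what fits the budget
            total_x += ax * take / mag    # scale the request down if needed
            total_y += ay * take / mag
            budget -= take
            if budget <= 0:
                break                     # lower-priority rules starve here
        return total_x, total_y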
Action Selection
• Potential Fields – Collision Avoidance
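Potential fields are a standard way to get collision avoidance: the goal attracts, obstacles repel within an influence radius, and the agent follows the summed force. A hedged sketch; the gains and falloff are illustrative assumptions:

    import math

    def potential_field_force(pos, goal, obstacles,
                              k_att=1.0, k_rep=1.0, influence=2.0):
        # Attractive pull toward the goal.
        fx = k_att * (goal[0] - pos[0])
        fy = k_att * (goal[1] - pos[1])
        # Repulsive push from each obstacle inside its influence radius.
        for ox, oy in obstacles:
            dx, dy = pos[0] - ox, pos[1] - oy
            d = math.hypot(dx, dy) or 1e-9
            if d < influence:
                push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
                fx += push * dx / d
                fy += push * dy / d
        return fx, fy

The same box-canyon caveat applies: summed fields can cancel into a local minimum.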
Scaling Particles to Other
Systems
• Silas T. Dog, Bruce
Blumberg, MIT AI Lab
• Many more behaviors
and actions
– Internal state
– Multiple goals
– Many ways to move
Layering Control
• Perceive world
– Is there food here?
• Strategize goal(s)
– Get food
• Compute a sequence of actions that will
accomplish goal(s)
– Must walk around obstacle
• Convert each action into motor control
– Run, gallop, trot around obstacle
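The four layers compose into a sense-think-act loop. A minimal sketch in which every function body is a placeholder assumption standing in for a real module:

    def perceive(world):                 # layer 1: is there food here?
        return {'food_visible': 'food' in world}

    def strategize(percepts):            # layer 2: pick a goal
        return 'get_food' if percepts['food_visible'] else 'explore'

    def plan(goal):                      # layer 3: actions achieving the goal
        if goal == 'get_food':
            return ['walk_around_obstacle', 'approach', 'eat']
        return ['wander']

    def motor_control(action):           # layer 4: map action to movement
        print(f'executing {action} (run, gallop, or trot)')

    for action in plan(strategize(perceive({'food'}))):
        motor_control(action)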
Multiple Goals
• Must assign a priority to goals
– Can’t eat and sleep at the same time
• Can’t dither back and forth between goals
– Don’t start eating until finished sleeping
• Don’t let goals wither on priority queue
– Beware of starvation (literally)
• Unrelated goals can be overlapped
– Eating while resting is possible
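A sketch of goal arbitration combining the ideas above: selection by urgency, a commitment bonus (hysteresis) so the agent doesn't dither, and aging so waiting goals don't starve. The numbers are illustrative assumptions:

    def select_goal(goals, current, commit_bonus=0.2, aging=0.01):
        """goals: dict of name -> urgency. The current goal keeps a
        bonus; every other goal slowly gains urgency while it waits."""
        for name in goals:
            if name != current:
                goals[name] += aging   # waiting goals can't starve forever
        def score(item):
            name, urgency = item
            return urgency + (commit_bonus if name == current else 0.0)
        return max(goals.items(), key=score)[0]

    goals = {'eat': 0.5, 'sleep': 0.6}
    print(select_goal(goals, current='sleep'))   # stays 'sleep' for now

Overlapping unrelated goals (eating while resting) would be a separate compatibility check rather than part of this arbitration.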