Transcript: Grzeszczuk

Automated Learning of Muscle-Actuated Locomotion Through
Control Abstraction
Radek Grzeszczuk and Demetri Terzopoulos
Presented by Johann Hukari
• Animating animals is hard
• Muscle interactions are complex, even in
“lower” life forms
• People notice when animals walk like
they have a stick up their ass
• Is it possible for a physics-based
simulation to control itself?
• Real animals are efficient locomotors
• Locomotion is a learned skill
• Behaviors are constructed from lower-level
behaviors and combined into more complex
actions
• This paper explores extremely flexible
bodies: Things that slither or swim
• Bodies constructed from spring-mass
systems with many DOFs (sketched below)
• Providing animators with controls for
every muscle (even simplified versions)
is needlessly complex.
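
A minimal sketch of this kind of spring-mass body, assuming a 2D point-mass model in which some springs act as "muscles" whose rest length is shortened by a scalar activation; the constants, the drag term, and the explicit Euler integrator are illustrative stand-ins, not the paper's simulator:

    import numpy as np

    def step(pos, vel, masses, springs, activations, dt=0.01, k=200.0, damping=2.0):
        """pos, vel: (N, 2) arrays; springs: list of (i, j, rest_len, is_muscle);
        activations: per-spring scalars in [0, 1] (ignored for non-muscle springs)."""
        forces = np.zeros_like(pos)
        for s, (i, j, rest, is_muscle) in enumerate(springs):
            if is_muscle:                            # a muscle contracts by shortening its rest length
                rest *= 1.0 - 0.5 * activations[s]
            d = pos[j] - pos[i]
            length = np.linalg.norm(d) + 1e-9
            f = k * (length - rest) * d / length     # Hooke's law along the spring
            forces[i] += f
            forces[j] -= f
        forces -= damping * vel                      # crude viscous damping standing in for water drag
        acc = forces / masses[:, None]
        vel = vel + dt * acc                         # explicit Euler step, for brevity only
        pos = pos + dt * vel
        return pos, vel

A low-level controller then amounts to choosing each muscle's activation over time.
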
• Instead of direct control, let the animal control
itself
• Generate and test: did this change result in
better motion? (sketched below)
• Low-level motions are combined to generate
more complex motion.
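
The generate-and-test idea above can be sketched as a perturb-and-keep search over a controller's parameters; simulate, objective, and the Gaussian perturbation scheme here are hypothetical placeholders, and the paper's actual optimizer may differ:

    import numpy as np

    def learn_controller(simulate, objective, n_params, iters=1000, sigma=0.1, seed=0):
        """Perturb controller parameters, simulate, and keep only improvements."""
        rng = np.random.default_rng(seed)
        params = rng.normal(0.0, sigma, n_params)                  # initial guess
        best_score = objective(simulate(params))
        for _ in range(iters):
            candidate = params + rng.normal(0.0, sigma, n_params)  # generate
            score = objective(simulate(candidate))                 # test: did motion improve?
            if score > best_score:
                params, best_score = candidate, score              # keep the better controller
        return params, best_score

Here simulate would run the spring-mass body under the candidate parameters, and objective would score the resulting motion, e.g. distance travelled.
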
• Learning low-level control can be
lengthy, but is fairly simple
• “Muscles” are controlled according to a
scalar activation function (sketched below)
• “Animals” are allowed time to learn
simple locomotion and turns of various
radii
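
One way to picture a scalar activation function, assuming purely for illustration that each muscle's activation is a clamped sum of a few sinusoidal terms; the paper's actual controller representation may differ:

    import numpy as np

    def activation(t, params):
        """params: list of (amplitude, frequency, phase) triples for one muscle."""
        a = 0.5                                       # resting activation level
        for amp, freq, phase in params:
            a += amp * np.sin(2.0 * np.pi * freq * t + phase)
        return float(np.clip(a, 0.0, 1.0))            # activations stay in [0, 1]

    # Example: one slow, periodic beat driving a tail muscle.
    tail = [(0.4, 1.5, 0.0)]
    samples = [activation(t, tail) for t in np.linspace(0.0, 1.0, 5)]

Learning simple locomotion and turns of various radii then means optimizing such per-muscle parameters with a search like the generate-and-test loop above.
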
• These actions are chained together with
approximately 5% overlap, which is blended
to disguise discontinuities
• When given goals, “animals” choose the
locally optimal next action: a “greedy”
strategy (sketched below)
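
A sketch of the two ideas above, greedy selection of the next learned action and blending across the seam between actions; the function names, signatures, and per-frame activation lists are hypothetical:

    def pick_greedy(state, goal, controllers, simulate, distance):
        """Pick the learned controller whose simulated end state lies closest to the goal."""
        return min(controllers, key=lambda c: distance(simulate(state, c), goal))

    def blend_activations(prev, nxt, overlap=0.05):
        """Cross-fade the last ~5% of one action's activations into the next action's."""
        n = max(1, int(overlap * min(len(prev), len(nxt))))
        seam = []
        for i in range(n):
            w = (i + 1) / (n + 1)                     # ramps from near 0 to near 1 across the seam
            seam.append((1.0 - w) * prev[-n + i] + w * nxt[i])
        return prev[:-n] + seam + nxt[n:]
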
• Stupid pet tricks
• Dolphin can learn “Sea World” style tricks by
combining swimming and turning
• Dolphin tail modeled by turning previously
generated shark’s tail sideways.
• Greedily sequencing controllers, one after
another, is simplistic
• Other method: learn complex behaviors by
combining (rather than sequencing) low-level
controllers (sketched below)
• Doesn’t use intermediate solutions.
• Complex combinatorial problem
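
A hedged illustration of combining rather than sequencing, assuming (hypothetically) that combination means a weighted superposition of low-level activation functions; the brute-force subset search is only meant to show why this is combinatorial, not how the paper addresses it:

    from itertools import combinations

    def combine(controllers, weights):
        """Return an activation function that is a clamped weighted sum of the inputs."""
        def combined(t, muscle):
            a = sum(w * c(t, muscle) for c, w in zip(controllers, weights))
            return min(1.0, max(0.0, a))              # keep the result a valid activation
        return combined

    def best_combination(controllers, evaluate, k=2):
        """Brute-force search over all k-subsets: the cost grows combinatorially."""
        best, best_score = None, float("-inf")
        for subset in combinations(controllers, k):
            candidate = combine(list(subset), [1.0 / k] * k)
            score = evaluate(candidate)               # e.g. simulate and score the motion
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score
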