Animation
CS 551 / 651
NeuroAnimator
Written homework (due next week)
Apply spacetime constraints to the following system:
• A 1-D particle falling under gravity
• It has a jet pack that can apply an arbitrary force up or down
• Newtonian physics
• Initialized at t = 0 with v = 0 and position p, p > 0
• Must end at t = 5 with v = 0 and p = 0
• Time steps are of size 1
• Use spacetime constraints to find the forces to apply such that the constraints are satisfied (a worked sketch follows below)
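One way to set this up, sketched under assumptions the slide leaves open (unit mass, g = 10, explicit Euler integration with dt = 1, and "find the forces" read as minimizing the sum of squared jet-pack forces):

import numpy as np

# Spacetime-constraints sketch for the 1-D jet-pack particle. Assumptions
# (mine, not the slide's): unit mass, g = 10, explicit Euler with dt = 1,
# and objective = minimize the sum of squared forces.
m, g, p0, steps = 1.0, 10.0, 2.0, 5

# With p_{i+1} = p_i + v_i and v_{i+1} = v_i + (f_i/m - g), the end-state
# constraints are linear in the unknown forces f_0..f_4:
#   v_5 = sum_i (f_i/m - g)               = 0
#   p_5 = p0 + sum_j (4 - j)(f_j/m - g)   = 0
A = np.array([[1.0] * steps,
              [float(steps - 1 - j) for j in range(steps)]])
b = np.array([steps * m * g,
              m * (g * steps * (steps - 1) / 2 - p0)])

# The minimum-norm solution of this underdetermined system is exactly the
# constrained minimizer of sum f_i^2.
f, *_ = np.linalg.lstsq(A, b, rcond=None)

# Verify the boundary constraints by forward simulation.
p, v = p0, 0.0
for fi in f:
    p, v = p + v, v + (fi / m - g)
print("forces:", np.round(f, 3), "| final p, v:", round(p, 6), round(v, 6))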
Motivation
Physical Simulation
• Produces very realistic output
• Highly automated
• Computationally expensive
• Difficult to control (authoring controllers is hard)
Motivation
What is it about simulations?
• (Differential) equations of motion are specified
– The equations may be brittle
• Small changes in state at time t may result in large changes at time t+1
• Integration is required, and error is possible
• Time steps may need to be small (see the sketch below)
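A tiny illustration of the brittleness (my example, not from the slides): explicit Euler on a stiff equation is stable only when the time step is small enough.

# Explicit Euler on the stiff ODE x' = -k x is only stable for dt < 2/k,
# even though the true solution x(t) = x0 * exp(-k t) decays smoothly.
def euler(k, dt, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)   # one explicit Euler step of x' = -k x
    return x

k = 100.0
print("dt = 0.005:", euler(k, 0.005, 100))  # stable: decays toward 0
print("dt = 0.030:", euler(k, 0.030, 100))  # unstable: |x| explodes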
Emulation
What if we could replace the “simulation”?
• By replacing the equations of motion?
• By replacing the numerical integration?
• Both?
Emulation
Replacing the simulation with a neural network
• What is gained?
• What is lost?
Reminder about neural networks
They are universal function approximators
• What function are we approximating?
– The equations of motion
• What does it look like?
– f(s_t) = s_{t+1}
• Can neural networks learn this?
Artificial Neurons
“learning”
• Transfer functions, weights, topologies
Artificial Neurons
“learning”
• Training requires data
– To avoid underfitting, use enough neurons
– To avoid overfitting, use 8-10 times as many training examples as there are weights in the network
• Backprop to tune the weights
The emulation
A simulation → a neural network
• s_{t+Δt} = Sim(s_t, u_t, f_t)
• s_{t+Δt} = N_Φ(s_t, u_t, f_t)
– The timestep of the ANN is (much) larger (see the sketch below)
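A minimal sketch of the idea, not the NeuroAnimator implementation: train a small network to reproduce 50 simulation steps of a toy damped spring in one "super step". The spring model, the sampling ranges, and the use of scikit-learn's MLPRegressor are all my assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Emulation sketch: learn one network "super step" that reproduces 50
# explicit-Euler steps of a damped spring, i.e. s_{t+Dt} = N(s_t), Dt = 50*dt.
def sim_step(s, dt=0.001, k=10.0, c=0.5):
    p, v = s
    return np.array([p + dt * v, v + dt * (-k * p - c * v)])

def sim_50(s):                           # the mapping the network emulates
    for _ in range(50):
        s = sim_step(s)
    return s

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))   # sampled start states (p, v)
Y = np.array([sim_50(s) for s in X])     # ground-truth super-step results

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, Y)

# Roll forward: one network evaluation replaces 50 simulation steps.
s_sim = s_net = np.array([0.8, 0.0])
for _ in range(20):
    s_sim = sim_50(s_sim)
    s_net = net.predict(s_net.reshape(1, -1))[0]
print("sim:", np.round(s_sim, 3), "| emulator:", np.round(s_net, 3))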
Some initial concerns
A basic ANN needs some changes in setup
• The mapping from s_t to s_{t+Δt} is nonlinear and has a large range
– Position and velocity are unbounded (can vary over ±∞)
– A sigmoid can capture nonlinearities, but its range is limited
• Could use many sigmoids, shifted and scaled to cover the space
Some initial concerns
Changing the basic ANN setup
• Learn Δs_t instead of s_{t+1} (see the sketch below)
• s_{t+Δt} = s_t + Δs_t
• Some variables are invariant to world position
– Normalize those variables to always be in local coordinates
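A delta-learning sketch on the same toy damped spring as above: train on state changes, which stay small and centered, and add them back outside the network.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train on deltas rather than absolute next states: s_{t+Dt} = s_t + ds.
def sim_50(s, dt=0.001, k=10.0, c=0.5):
    for _ in range(50):
        s = np.array([s[0] + dt * s[1], s[1] + dt * (-k * s[0] - c * s[1])])
    return s

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))
Y_delta = np.array([sim_50(s) - s for s in X])     # targets are the deltas

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, Y_delta)

s = np.array([0.8, 0.0])
s_next = s + net.predict(s.reshape(1, -1))[0]      # reconstruct s_{t+Dt}
print("one super step:", np.round(s_next, 4))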
Some initial concerns
Changing the basic ANN setup
• Normalize (see the sketch below)
– Variables with larger magnitudes have a bigger impact on ANN output even though they may have smaller impacts on simulation behavior
– Transform all variables to zero mean and SD = 1
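A minimal standardization sketch (toy numbers of my choosing):

import numpy as np

# Bring every variable to zero mean and unit SD so large-magnitude variables
# (e.g. positions in millimeters) don't dominate the training error over
# small ones (e.g. angles in radians).
X = np.array([[100.0, 0.01],
              [200.0, 0.02],
              [300.0, 0.03]])

mu, sd = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sd        # feed this to the network
X_back = X_norm * sd + mu     # invert the transform on network outputs
print(X_norm)
print(X_back)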
Some initial concerns
Learning with monolithic networks is tough
• Should the neurons affecting the neck be influenced by the lower leg?
• Can we partition the variables a priori and learn multiple ANNs? (see the sketch below)
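A structural sketch of an a-priori partition (the toy dynamics and the split are mine, not the paper's): one small network per group of state variables instead of a single monolithic one.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train one small network per a-priori group of state variables.
rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(2000, 6))       # full state: 6 variables
S_next = 0.9 * np.roll(S, 1, axis=1)         # stand-in one-step dynamics

groups = [slice(0, 3), slice(3, 6)]          # a-priori partition of outputs
nets = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=1500, random_state=0)
        .fit(S, S_next[:, g]) for g in groups]

s = rng.uniform(-1, 1, size=(1, 6))
pred = np.concatenate([n.predict(s) for n in nets], axis=None)
print(np.round(pred, 3))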
Some initial concerns
Sampling simulation states
• Is uniform sampling across all inputs desirable?
– The state space is too complex to cover anyway
– Focus on what is typically encountered
– Grab data points in situ while the simulation is running (see the sketch below)
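A sketch of in-situ data collection, reusing the toy damped spring from the earlier emulator example: log (s_t, s_{t+Δt}) pairs while a typical run unfolds, so training data concentrates on the states the emulator will actually see.

import numpy as np

def sim_step(s, dt=0.001, k=10.0, c=0.5):
    return np.array([s[0] + dt * s[1], s[1] + dt * (-k * s[0] - c * s[1])])

pairs, s = [], np.array([0.8, 0.0])
for step in range(50_000):
    if step % 50 == 0:
        s_start = s.copy()       # start of a 50-step super step
    s = sim_step(s)
    if step % 50 == 49:
        pairs.append((s_start, s.copy()))
print(len(pairs), "training pairs from one trajectory")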
Training
Examples later will demonstrate the ability to learn the dynamics
Control
It is hard to control physical simulations
• Typically trial-and-error
– Markov Chain Monte Carlo (MCMC) by Chenney
– Sims’ creatures
– Simulated annealing
• Having a gradient is useful
– The Jacobian in IK made that problem feasible
Control
ANNs are differentiable
• Differentiate through the emulator to find the best control decisions for the simulation (see the sketch below)
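Because the emulator is an analytic, differentiable function, an objective can be backpropagated through a rollout and the controls improved by gradient descent. A minimal sketch, assuming a hand-made one-dimensional "emulator" with made-up fixed weights standing in for a trained network:

import numpy as np

# Stand-in emulator with made-up fixed weights a, b (not a trained network):
# one super step is s' = s + tanh(a*s + b*u).
a, b, T, target = 0.5, 1.0, 5, 2.0

def emu(s, u):
    return s + np.tanh(a * s + b * u)

u = np.zeros(T)                      # control sequence to optimize
for it in range(500):
    # Forward pass: roll the emulator out from s_0 = 0.
    s = [0.0]
    for t in range(T):
        s.append(emu(s[t], u[t]))
    # Backward pass: backpropagate loss = (s_T - target)^2 through time.
    g = 2.0 * (s[T] - target)        # d loss / d s_T
    grad_u = np.zeros(T)
    for t in reversed(range(T)):
        h = 1.0 - np.tanh(a * s[t] + b * u[t]) ** 2   # tanh' at this step
        grad_u[t] = g * h * b        # d loss / d u_t
        g = g * (1.0 + h * a)        # d loss / d s_t
    u -= 0.05 * grad_u               # gradient-descent update on controls

print("controls:", np.round(u, 3), "| final state:", round(s[T], 4))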
Performance
• Neural network: O(n²), n = number of state parameters
• Simulation: O(n³)
• Δt for the ANN is 50 times larger than the simulation’s
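Putting the slide's numbers together (a rough back-of-the-envelope estimate, not a figure from the slides): each emulator evaluation costs about a factor of n less than a simulation step (n³/n² = n) and replaces roughly 50 of them, so for a model with n = 100 state parameters the emulation would run on the order of 50 × 100 = 5,000 times faster.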
Error
Takeaway
• ANNs can learn simulation behaviors
– Normalization, hierarchical networks, and localization were all required
– Training was limited to the space of typical behaviors
• Control is easier to learn with ANNs because they are analytically differentiable