Transcript MAS

Documenting Motion Sequences with Personalized Annotation System
IEEE Multimedia 2006
Kanav Kahol, Priyamvada Tripathi, and Sethuraman Panchanathan
Arizona State University
Presenter: Yu-song Syu
2006/08/22
Outline
- Gestures & complex motion sequences
- Steps of annotation
  - Modeling gestures anatomically
  - Motion capture
  - Gesture segmentation
  - Gesture recognition
  - Movement annotation
- Results
- Future work
Gestures
- A sequence of poses
- Modeled by state transitions
- Each state corresponds to a pose in the sequence
[Diagram: state transitions from the start pose to the end pose]
When it becomes complex…
- In dance, a large vocabulary of gestures is used
- A scalable gesture segmentation / recognition methodology is needed
- HMMs are used here
HMM – Hidden Markov Model
- We have:
  - Possible symbols
  - Possible states
  - Probabilities of transitions between states
  - Probabilities of symbols in each state
- The symbol series is given (observed)
- The state series is hidden
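To make "symbols observed, states hidden" concrete, here is a minimal sketch of a discrete HMM with Viterbi decoding. The states, symbols, and probabilities are invented for illustration and are not the paper's model.

```python
# Minimal toy HMM: the symbol series is observed, the state series is hidden.
import numpy as np

states  = ["pose_A", "pose_B"]            # hypothetical hidden states
symbols = ["low_motion", "high_motion"]   # hypothetical observed symbols

start = np.array([0.6, 0.4])              # P(first state)
trans = np.array([[0.7, 0.3],             # P(next state | current state)
                  [0.4, 0.6]])
emit  = np.array([[0.9, 0.1],             # P(symbol | state)
                  [0.2, 0.8]])

def viterbi(obs):
    """Recover the most likely hidden state sequence for an observed symbol sequence."""
    n, m = len(obs), len(states)
    delta = np.zeros((n, m))               # best path probability ending in each state
    back  = np.zeros((n, m), dtype=int)    # backpointers
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, n):
        for j in range(m):
            scores = delta[t - 1] * trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores.max() * emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.insert(0, back[t, path[0]])
    return [states[i] for i in path]

obs = [symbols.index(s) for s in ["low_motion", "low_motion", "high_motion"]]
print(viterbi(obs))   # ['pose_A', 'pose_A', 'pose_B']
```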
Modeling gestures anatomically
- Model the anatomy with 23 segments & 14 joints
- A parent segment inherits the characteristics of its children
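A minimal sketch of such a segment hierarchy, assuming invented segment names and motion vectors (not the paper's 23-segment model): a parent's activity is derived from its children, here by vector addition.

```python
# Toy segment hierarchy: parents aggregate the motion of their children.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Segment:
    name: str
    velocity: np.ndarray                      # this segment's own motion vector
    children: list = field(default_factory=list)

    def aggregate_velocity(self):
        """Parent velocity = own vector plus vector sum of all descendants."""
        v = self.velocity.copy()
        for child in self.children:
            v += child.aggregate_velocity()
        return v

hand    = Segment("hand",      np.array([0.1, 0.0, 0.2]))
forearm = Segment("forearm",   np.array([0.0, 0.1, 0.1]), children=[hand])
arm     = Segment("upper_arm", np.array([0.0, 0.0, 0.1]), children=[forearm])
print(arm.aggregate_velocity())   # [0.1 0.1 0.4]
```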
Modeling gestures anatomically
- Two adjacent segments can be perceived as one unit when (see the sketch below):
  - They have similar motion vectors
  - The angle of the joint between them doesn't change over a time period
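A sketch of that coalescing test, assuming hypothetical thresholds (cosine similarity and a maximum angle change) that are not given in the slides:

```python
# Two adjacent segments are treated as one unit when their motion vectors are
# similar and the joint angle between them stays stable over a time window.
import numpy as np

def similar_motion(v1, v2, cos_thresh=0.95):
    """Cosine similarity between the two segments' motion vectors."""
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return denom > 0 and np.dot(v1, v2) / denom >= cos_thresh

def stable_joint(angles, max_change_deg=2.0):
    """Joint angle changes less than a threshold over the time window."""
    return (max(angles) - min(angles)) <= max_change_deg

def can_coalesce(v1, v2, joint_angles):
    return similar_motion(v1, v2) and stable_joint(joint_angles)

# Example: forearm and hand moving almost identically with a nearly fixed wrist angle.
print(can_coalesce(np.array([0.2, 0.1, 0.0]),
                   np.array([0.21, 0.1, 0.0]),
                   [87.9, 88.0, 88.3, 88.1]))   # True
```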
Dynamic body hierarchy
[Figure: segments behaving as one unit are shown in the same color]
Motion capture
- 7 choreographers
- Each creates 3 short dance sequences
- Every sequence is repeated 3 times
- Choreographers write down:
  - An original score for every dance sequence
  - A detailed score for every gesture
  - Score: a verbal description of the movement
Motion capture
Gesture segmentation
- For every body segment:
  - Derive the spatial orientation, velocity, and acceleration
  - Compute the activity (see the sketch after this list):
    - SegmentForce = SegmentMass × SegmentAcceleration
    - SegmentMomentum = SegmentMass × SegmentVelocity
    - SegmentKE = SegmentMass × SegmentVelocity²
- Dynamic hierarchy:
  - Derive parent activities by vector addition over their child segments
- Gesture boundary determination
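A sketch of the per-segment activity measures, with invented masses and vectors; the kinetic energy is written here with the standard ½·m·v² form, whereas the slide omits the ½ factor.

```python
# Per-segment activity measures and aggregation up the dynamic hierarchy.
import numpy as np

def segment_activity(mass, velocity, acceleration):
    """Return (force, momentum, kinetic energy) for one body segment."""
    force    = mass * acceleration                      # vector
    momentum = mass * velocity                          # vector
    ke       = 0.5 * mass * np.dot(velocity, velocity)  # scalar
    return force, momentum, ke

def parent_activity(child_activities):
    """Derive a parent segment's activity by vector addition over its children."""
    forces, momenta, kes = zip(*child_activities)
    return np.sum(forces, axis=0), np.sum(momenta, axis=0), sum(kes)

hand    = segment_activity(0.5, np.array([0.2, 0.0, 0.1]), np.array([0.0, 0.1, 0.0]))
forearm = segment_activity(1.2, np.array([0.1, 0.0, 0.1]), np.array([0.0, 0.2, 0.0]))
print(parent_activity([hand, forearm]))
```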
Gesture segmentation
- Gesture boundary determination
  - Find local minima and encode them as binary triples
    - When force reaches a local minimum, mark "1"
    - Likewise for momentum and KE
    - e.g. (1,0,0), (0,1,1), …
  - Not every local minimum is a gesture boundary
  - 22 real-world physical configurations in which adjacent body segments could coalesce
  - The 23 triples and the 22-element configuration vector are used to train a classifier that decides whether a local minimum is a gesture boundary (see the sketch below)
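A sketch of the boundary-candidate encoding, with invented signals. At each frame a binary triple marks whether force, momentum, and KE are at a local minimum; the triples (one per segment) plus the coalescing configuration vector would then feed the classifier.

```python
# Encode local minima of force / momentum / KE as per-frame binary triples.
import numpy as np

def local_minima(signal):
    """Boolean mask: True where a sample is smaller than both neighbours."""
    s = np.asarray(signal)
    mask = np.zeros(len(s), dtype=bool)
    mask[1:-1] = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])
    return mask

def boundary_triples(force, momentum, ke):
    """Per-frame binary triples, e.g. (1, 0, 0) when only force hits a minimum."""
    f, m, k = local_minima(force), local_minima(momentum), local_minima(ke)
    return [tuple(int(x) for x in frame) for frame in zip(f, m, k)]

force    = [3.0, 1.0, 2.0, 2.5, 2.2]
momentum = [2.0, 1.8, 0.9, 1.7, 2.0]
ke       = [1.0, 0.4, 0.3, 0.8, 1.1]
print(boundary_triples(force, momentum, ke))
# [(0, 0, 0), (1, 0, 0), (0, 1, 1), (0, 0, 0), (0, 0, 0)]
```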
Gesture recognition
- Find minima of the total force of the segments
- Find stabilization of joints
  - The change of a joint angle doesn't exceed a threshold during a time period
- segmentHMM with 23 states
- jointHMM with 14 states
- cHMM couples the above-mentioned HMMs
cHMM
- Θ_{c'c}: coupling weight from the jointHMM to the segmentHMM
- d(t, i): distance between segment t and joint i
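A schematic of the coupling idea only, with invented sizes and random weights; this is not the paper's exact cHMM update. The belief over segmentHMM states is re-weighted by the jointHMM belief through coupling weights Θ scaled by a segment-joint distance term d(t, i).

```python
# Schematic coupled-HMM step: mix the jointHMM belief into the segmentHMM belief.
import numpy as np

n_segment_states, n_joint_states = 23, 14

theta = np.random.rand(n_joint_states, n_segment_states)   # coupling weight joint -> segment
d     = np.random.rand(n_segment_states, n_joint_states)   # distance between segment t and joint i

def coupled_segment_belief(segment_belief, joint_belief):
    """Re-weight the segmentHMM belief by distance-scaled coupling from the jointHMM."""
    influence = joint_belief @ (theta / (1.0 + d.T))        # shape: (n_segment_states,)
    coupled = segment_belief * influence
    return coupled / coupled.sum()                          # renormalise to a distribution

seg = np.full(n_segment_states, 1.0 / n_segment_states)
jnt = np.full(n_joint_states, 1.0 / n_joint_states)
print(coupled_segment_belief(seg, jnt).shape)               # (23,)
```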
Movement annotation
- Movement annotation can be useful when teaching dance
- Use the Anvil annotation software during training
  - http://www.dfki.de/~kipp/anvil
  - Choreographers can use it to add/modify annotations and adjust gesture boundaries (see the sketch below)
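A sketch of exporting recognized gestures as an XML annotation file for the tool. The element and attribute names here are illustrative only, not the actual Anvil file format, and the gestures are invented.

```python
# Write hypothetical gesture boundaries to an XML file for later annotation/editing.
import xml.etree.ElementTree as ET

gestures = [                                   # hypothetical recognized gestures (seconds)
    {"label": "arm_raise", "start": 0.0, "end": 2.4},
    {"label": "turn",      "start": 2.4, "end": 5.1},
]

root  = ET.Element("annotation", attrib={"performer": "choreographer_1"})
track = ET.SubElement(root, "track", attrib={"name": "gestures"})
for g in gestures:
    ET.SubElement(track, "element",
                  attrib={"label": g["label"],
                          "start": str(g["start"]),
                          "end":   str(g["end"])})
ET.ElementTree(root).write("dance_annotation.xml", xml_declaration=True, encoding="utf-8")
```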
Anvil
Motion annotation results
- The proposed system is simple to use
  - XML-based output and the Anvil interface
- Manual annotation of a 4-5 minute dance takes about 60 minutes
  - This system takes only about 1 minute
- A 6-9 percent improvement in accuracy
Future work
- Extend this system to annotate generic human movements
  - e.g. walking, running, and washing utensils
- Develop a common motion language with this kind of software