Transcript Slide 1

Adaptive Fragments-Based Tracking of
Non-Rigid Objects Using Level Sets
PRAKASH CHOCKALINGAM, NALIN PRADEEP, AND STAN BIRCHFIELD
CLEMSON UNIVERSITY, CLEMSON, SC, USA
ABSTRACT
We present an approach to visual tracking based on dividing a target into multiple
regions, or fragments. The target is represented by a Gaussian mixture model in a
joint feature-spatial space, with each ellipsoid corresponding to a different
fragment. The fragments are automatically adapted to the image data, being
selected by an efficient region-growing procedure and updated according to a
weighted average of the past and present image statistics. Modeling of target and
background is performed in a Chan-Vese manner, using the framework of level
sets to preserve accurate boundaries of the target. The extracted target
boundaries are used to learn the dynamic shape of the target over time, enabling
tracking to continue under total occlusion. Experimental results on a number of
challenging sequences demonstrate the effectiveness of the technique.
TRACKING FRAMEWORK
[Figure: the target divided into individual fragments.]

Bayesian Formulation:
The probability of the contour \Gamma_t at time t, given the previous contours
\Gamma_{0:t-1} and all the measurements I_{0:t}, is formulated using Bayes' rule:

    p(\Gamma_t \mid I_{0:t}, \Gamma_{0:t-1}) \propto p(I_t^{+} \mid \Gamma_t) \, p(I_t^{-} \mid \Gamma_t) \, p(\Gamma_t \mid \Gamma_{0:t-1}),

where the first factor models the target (the pixels inside the contour), the
second factor models the background (the pixels outside the contour), and the
third factor is a shape prior on the contour given the previous contours.
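To make the factorization concrete, the sketch below scores a candidate contour
in the log domain. It is a minimal illustration: the per-pixel log-likelihood
maps, the shape log-prior, and the boolean inside-contour mask are hypothetical
inputs assumed to come from the fragment models and the learned shape model
described below.

    import numpy as np

    def contour_log_posterior(fg_loglik, bg_loglik, shape_logprior, inside):
        # fg_loglik, bg_loglik: H x W per-pixel log-likelihoods under the target
        # and background models; inside: H x W boolean mask of pixels enclosed
        # by the candidate contour Gamma_t.
        target_term = fg_loglik[inside].sum()        # log p(I_t^+ | Gamma_t)
        background_term = bg_loglik[~inside].sum()   # log p(I_t^- | Gamma_t)
        return target_term + background_term + shape_logprior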
Fragment Modeling:
Assuming conditional independence among the pixels, the joint
probability of the pixels in a region is given by:
    p(I_t^{\ell} \mid \Gamma_t) = \prod_{y \in R^{\ell}} p(y \mid \Gamma_t^{\ell}), \qquad \ell \in \{+, -\},

where y is the feature vector of a pixel containing its spatial
coordinates and color measurements. The likelihood of the individual
pixel is given by the Gaussian mixture model (GMM):

    p(y \mid \Gamma_t^{\ell}) = \sum_{j=1}^{k^{*}} \pi_j \, p(y \mid \Gamma_t^{\ell}, j),

    p(y \mid \Gamma_t^{\ell}, j) \propto \exp\{-\tfrac{1}{2} (y - \mu_j^{*})^{T} (\Sigma_j^{*})^{-1} (y - \mu_j^{*})\},

where \pi_j = p(j \mid \Gamma_t^{\ell}) is the probability that the pixel was drawn
from the jth fragment, k^{*} is the number of fragments in the target or
background, and \mu_j^{*} and \Sigma_j^{*} are the mean and covariance of the jth
fragment.
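As an illustration, a per-pixel GMM log-likelihood of this form can be evaluated
as in the minimal sketch below; the feature layout (x, y, R, G, B) and the
explicit Gaussian normalization are assumptions of the example rather than
details from the poster.

    import numpy as np

    def gmm_loglik(y, weights, means, covs):
        # y: joint feature vector of one pixel, e.g. (x, y, R, G, B).
        # weights, means, covs: one mixture component per fragment.
        terms = []
        for w, mu, cov in zip(weights, means, covs):
            d = y - mu
            _, logdet = np.linalg.slogdet(cov)
            quad = d @ np.linalg.solve(cov, d)
            terms.append(np.log(w) - 0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad))
        return np.logaddexp.reduce(terms)  # log sum_j pi_j N(y; mu_j, Sigma_j)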
SEGMENTATION
Region growing algorithm
• repeatedly accumulates pixels within t standard deviations of the Gaussian
model of the fragment;
• automatically computes the number of fragments (a sketch of the procedure
appears below).
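The following minimal sketch shows one way such a pass could be organized; the
4-connected growth order, the Mahalanobis threshold, and the refitting of the
fragment Gaussian at every step are assumptions of the example, and the
procedure in the paper is more efficient.

    import numpy as np
    from collections import deque

    def grow_fragments(features, mask, thresh=2.0):
        # features: H x W x D joint feature vectors (e.g. x, y, R, G, B);
        # mask: pixels to partition (target or background);
        # thresh: allowed distance from the fragment Gaussian, in std. deviations.
        H, W, D = features.shape
        labels = np.zeros((H, W), dtype=int)
        for sy, sx in zip(*np.nonzero(mask)):
            if labels[sy, sx]:
                continue
            labels[sy, sx] = labels.max() + 1                 # start a new fragment
            members, queue = [features[sy, sx]], deque([(sy, sx)])
            while queue:
                cy, cx = queue.popleft()
                pts = np.array(members)
                mean = pts.mean(axis=0)
                cov = np.cov(pts.T) + 1e-3 * np.eye(D) if len(pts) > 1 else np.eye(D)
                inv = np.linalg.inv(cov)
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not labels[ny, nx]:
                        d = features[ny, nx] - mean
                        if np.sqrt(d @ inv @ d) < thresh:     # within thresh std. deviations
                            labels[ny, nx] = labels[sy, sx]
                            members.append(features[ny, nx])
                            queue.append((ny, nx))
        return labels  # the number of fragments is labels.max()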
Results of the algorithm on various sequences
[Figure: for each sequence, the image, foreground fragments, foreground
ellipsoids, and background fragments.]
p ( y | t )    j p ( y | t , j ),
CONCLUSION
j 1
1
p ( y | t , j )   exp{ ( y   j * )T ( j * ) 1 ( y   j * )}
2


p
(
j
|

)
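A minimal sketch, assuming the strength is taken as the difference of the two
per-pixel log-likelihoods (the exact definition in the paper may differ):

    def strength_image(fg_loglik, bg_loglik):
        # fg_loglik, bg_loglik: H x W log-likelihood maps from the two GMMs.
        # Positive values favor the target, negative values the background.
        return fg_loglik - bg_loglik

This map is what gets embedded into the level set framework described next.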
Level Set Formulation:
The energy functional over the implicit function combines a region term driven
by the strength image with a regularization term that penalizes the length of
the curve, in the manner of Chan-Vese. The solution iterates, evolving the
implicit function to reduce this energy; the target boundary is recovered as
the zero level set.
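The sketch below shows one much-simplified evolution of this kind, in which the
strength image acts as the region force and a curvature term stands in for the
curve-length penalty; the weights, time step, and update rule are assumptions of
the example, not the paper's actual iteration.

    import numpy as np

    def evolve_level_set(phi, strength, mu=0.2, dt=0.5, iters=200, eps=1e-8):
        # phi: implicit function (positive inside the target); strength: strength image.
        for _ in range(iters):
            gy, gx = np.gradient(phi)
            norm = np.sqrt(gx ** 2 + gy ** 2) + eps
            curvature = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
            phi = phi + dt * (strength + mu * curvature)  # region force + length penalty
        return phi  # the extracted contour is the zero level set of phi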
Occlusion:
• is detected by the rate of decrease in the object size over the past few
frames;
• is handled by searching the learned shape database for the contour that most
closely matches the one just prior to occlusion, using the Hausdorff distance
(see the sketch below); the hallucinated contours are marked in the result
figures.
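For reference, the symmetric Hausdorff distance between two contours can be
computed with a brute-force sketch like the one below; representing each contour
as an N x 2 array of boundary points is an assumption of the example.

    import numpy as np

    def hausdorff(a, b):
        # a, b: contours as arrays of boundary points, shapes (N, 2) and (M, 2).
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
        return max(d.min(axis=1).max(), d.min(axis=0).max())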
FRAGMENT UPDATE
• The spatial parameters of the fragment are updated by averaging
the motion vectors obtained for feature points in a fragment using a
Joint Lucas-Kanade approach.
• The appearance parameters are updated using a weighted average of the past
and present image statistics (see the sketch below).
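A minimal sketch of both updates, assuming the joint feature vector stores the
spatial coordinates in its first two entries and that the blending weight alpha
is a fixed constant (both are assumptions of the example, not values from the
paper):

    import numpy as np

    def update_fragment(mean, cov, flow_vectors, new_mean, new_cov, alpha=0.9):
        # flow_vectors: motion vectors of the fragment's tracked feature points
        # (obtained with a Joint Lucas-Kanade tracker in the paper).
        mean = mean.copy()
        mean[:2] += np.asarray(flow_vectors).mean(axis=0)          # spatial: average motion
        mean[2:] = alpha * mean[2:] + (1 - alpha) * new_mean[2:]   # appearance: weighted average
        cov = alpha * cov + (1 - alpha) * new_cov                  # blend covariances as well
        return mean, cov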
EXPERIMENTAL RESULTS
[Figure: tracking results on several challenging sequences, comparing the
proposed approach (ours) against a single Gaussian model and a linear
classifier.]

CONCLUSION
➢ The non-rigid tracking algorithm is based upon modeling the foreground and
background regions with a mixture of Gaussians.
➢ A simple and efficient region-growing procedure initializes the models.
➢ The strength image computed using the GMM is embedded into a level set
framework to extract contours.
➢ Joint feature tracking and model updating are both incorporated to improve
performance.