
Motion Analysis Using Optical Flow
CIS750 Presentation
Student: Wan Wang
Prof: Longin Jan Latecki
Spring 2003
CIS Dept, Temple University
Contents

• Brief discussion of motion analysis
• Introduction to optical flow
• Application: detecting and tracking people in complex scenes using optical flow
Part 1: Motion Analysis

• The usual input to a motion analysis system is a temporal image sequence
• Motion analysis is often connected with real-time analysis

Three main groups of motion analysis problems:

• Motion detection:
  - registers any detected motion
  - single static camera
• Moving object detection and location:
  - moving object detection only: motion-based segmentation methods
  - detection of a moving object, detection of the trajectory of its motion, and prediction of its future trajectory: image object-matching techniques are often used to solve these tasks (direct matching of image data, matching of object features, matching of specific representative object points such as corners, or representing moving objects as graphs and matching these graphs); another useful method is optical flow
• Derivation of 3D object properties: from a set of 2D projections acquired at different time instants of the object motion
Part 2: Optical Flow

• Optical flow reflects the image changes due to motion during a time interval dt, which is short enough to guarantee small inter-frame motion changes
• The immediate objective is to determine a velocity field: a 2D representation of a (generally) 3D motion is called a motion field (velocity field), in which each point is assigned a velocity vector corresponding to the motion direction, velocity, and distance from an observer at the appropriate image location
• Based on 2 assumptions:
  - The observed brightness of any object point is constant over time
  - Nearby points in the image plane move in a similar manner (the velocity smoothness constraint)
Optical flow
E.g.: http://www.ai.mit.edu/people/lpk/mars/temizer_2001/Optical_Flow/index.html
Computation Rationale
• Suppose we have a continuous image whose intensity is given by f(x, y, t), where the intensity is now a function of time t as well as of x and y.
• If the point (x, y) moves to the point (x + dx, y + dy) at time t + dt, the following equation holds:

      f(x, y, t) = f(x + dx, y + dy, t + dt)                      (1)

• The Taylor expansion of the right side of equation (1) is

      f(x + dx, y + dy, t + dt) = f(x, y, t) + f_x dx + f_y dy + f_t dt + e,

  where f_x(x, y, t), f_y(x, y, t), f_t(x, y, t) denote the partial derivatives of f, and e is the higher-order term of the Taylor series.
Computation Rationale
Assuming that e is negligible, we obtain the next equation:

      f_x dx + f_y dy + f_t dt = 0

That means

      -f_t = f_x (dx/dt) + f_y (dy/dt) = f_x u + f_y v,

where (u, v) = (dx/dt, dy/dt) is the optical flow vector to be determined.
Computation Method
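The constraint -f_t = f_x u + f_y v is one equation in two unknowns per pixel, so an extra assumption is needed to recover the flow. Below is a minimal sketch of one standard approach, assuming the flow is constant over a small window and solving the stacked constraints by least squares (Lucas-Kanade style); NumPy, the finite-difference derivatives, and the window size are illustrative choices, not necessarily the method shown in the original slides:

```python
import numpy as np

def optical_flow_lk(f1, f2, win=5):
    """Least-squares flow: solve f_x*u + f_y*v = -f_t over a window."""
    fx = np.gradient(f1, axis=1)   # spatial derivatives (finite differences)
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1                   # temporal derivative between two frames
    h, w = f1.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Stack one constraint equation per window pixel into A [u v]^T = b.
            A = np.stack([fx[y-r:y+r+1, x-r:x+r+1].ravel(),
                          fy[y-r:y+r+1, x-r:x+r+1].ravel()], axis=1)
            b = -ft[y-r:y+r+1, x-r:x+r+1].ravel()
            sol = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares (u, v)
            u[y, x], v[y, x] = sol
    return u, v
```

Alternatively, the velocity smoothness assumption from the previous slides leads to an iterative scheme (Horn-Schunck), which yields denser, smoother fields at the cost of iteration.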
Optical flow in motion analysis
Motion, as it appears in dynamic images, is usually some combination of 4 basic elements:
(a) Translation at constant distance from the observer
    --- parallel motion vectors
(b) Translation in depth relative to the observer
    --- vectors having a common focus of expansion (FOE)
(c) Rotation at constant distance, about the view axis
    --- concentric motion vectors
(d) Rotation of a planar object perpendicular to the view axis
    --- vectors starting from straight line segments
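Purely as an illustration (not from the slides), a short NumPy snippet generating synthetic flow fields with the characteristic shapes of elements (a)-(c); the grid size and speeds are arbitrary:

```python
import numpy as np

# Synthetic flow fields on a 5 x 5 grid of image points (illustrative).
ys, xs = np.mgrid[-2:3, -2:3].astype(float)

u_a, v_a = np.ones_like(xs), np.zeros_like(ys)  # (a) parallel vectors
u_b, v_b = xs, ys     # (b) vectors radiating from the FOE at the origin
u_c, v_c = -ys, xs    # (c) concentric vectors around the view axis
```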
Optical flow in motion analysis
• Mutual velocity of an observer and an object:
  Let the mutual velocity be (u, v, w) in the directions x, y, z (z represents depth). If (x0, y0, z0) is the position at time t0 = 0, then the position of the same point at time t is

      (x(t), y(t), z(t)) = (x0 + u t, y0 + v t, z0 + w t)

• FOE (focus of expansion) determination: under perspective projection (unit focal length), the image of the point is ((x0 + u t)/(z0 + w t), (y0 + v t)/(z0 + w t)), and its trajectory lies on a ray through the FOE (u/w, v/w)
• Distance (depth) determination
• Collision prediction
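A minimal sketch of these quantities, assuming unit focal length and constant mutual velocity; the function names and the z = 0 collision test are illustrative assumptions:

```python
import numpy as np

def position(p0, vel, t):
    """World position at time t: (x0 + u*t, y0 + v*t, z0 + w*t)."""
    return np.asarray(p0, float) + t * np.asarray(vel, float)

def project(p):
    """Perspective projection with unit focal length."""
    x, y, z = p
    return np.array([x / z, y / z])

def foe(vel):
    """Focus of expansion (u/w, v/w): the image point through which
    the projected trajectory's ray passes."""
    u, v, w = vel
    return np.array([u / w, v / w])

def time_to_observer_plane(p0, vel):
    """Time at which the point crosses the observer plane z = 0;
    the basis of a simple collision prediction."""
    return -p0[2] / vel[2]
```

For example, with p0 = (0, 0, 10) and vel = (1, 0, -2) (an approaching point), project(position(p0, vel, t)) streams away from foe(vel) = (-0.5, 0) in the classic looming pattern, and the point crosses the observer plane at t = 5.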
Part 3
Experiment: detecting and tracking people in complex scenes using optical flow
(by Saitama University)
Demand
• Automatic visual surveillance systems are in strong demand for various applications. Several systems are commercially available, most of which are based on subtraction between consecutive frames or between a current image and a stored background image. They can work as expected if environmental conditions do not change, such as indoors.
• However, they cannot work outdoors, where there are various disturbances such as changes of lighting and movements of background objects.
First step: compute the optical flow

• By applying two different spatial filters g, h to the input image, the following two constraint equations are derived:

      g_x u + g_y v + g_t = 0
      h_x u + h_y v + h_t = 0

• The two orientation-selective spatial Gaussian filters g, h are applied to the original image f(x, y, t): one is sensitive to vertical edges, the other to horizontal edges.
• (u, v) denotes an optical flow vector; the subscripts denote partial differentiation of the filtered images.
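Since the two filtered constraints form a 2 x 2 linear system in (u, v) at each pixel, the flow can be solved in closed form by Cramer's rule. A minimal NumPy sketch under that reading (the array names and the degeneracy threshold eps are assumptions, not from the paper):

```python
import numpy as np

def flow_from_filter_pair(gx, gy, gt, hx, hy, ht, eps=1e-6):
    """Solve, at every pixel,
         gx*u + gy*v + gt = 0
         hx*u + hy*v + ht = 0
       for the flow (u, v) by Cramer's rule; the inputs are the partial
       derivatives of the two filtered image sequences (same shape)."""
    det = gx * hy - gy * hx
    ok = np.abs(det) > eps            # mask out degenerate pixels
    safe = np.where(ok, det, 1.0)     # avoid division by zero
    u = np.where(ok, (gy * ht - hy * gt) / safe, 0.0)
    v = np.where(ok, (hx * gt - gx * ht) / safe, 0.0)
    return u, v
```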
Second step: Region Segmentation

• We segment the flow image into uniform flow regions in a split-and-merge fashion. First, we divide the image into 16 (4 x 4) regions, calculating the mean flow vector in each region. If a region has any outlier subregions whose flow vectors differ from the mean, the region is further split into 4 (2 x 2) regions. If a region has no outlier subregion, that is, the region has a uniform flow, it is not split. The above process is repeated for each region until the regions become too small to be split.
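A rough sketch of the split step, assuming the flow is given as per-pixel arrays u, v; the per-pixel deviation test, tol, and min_size are illustrative simplifications of the paper's outlier-subregion test:

```python
import numpy as np

def split_uniform(u, v, x0, y0, w, h, tol=0.5, min_size=8, out=None):
    """Recursively split the region at (x0, y0) of size (w, h) until
    its flow is near-uniform or it is too small to split further."""
    if out is None:
        out = []
    ru = u[y0:y0+h, x0:x0+w]
    rv = v[y0:y0+h, x0:x0+w]
    mu, mv = ru.mean(), rv.mean()            # mean flow vector of the region
    dev = np.hypot(ru - mu, rv - mv).max()   # worst deviation from the mean
    if dev <= tol or w <= min_size or h <= min_size:
        out.append((x0, y0, w, h, (mu, mv)))  # uniform (or minimal) region
    else:
        for dy in (0, h // 2):               # split into 2 x 2 subregions
            for dx in (0, w // 2):
                split_uniform(u, v, x0 + dx, y0 + dy,
                              w // 2, h // 2, tol, min_size, out)
    return out

def segment_flow(u, v, tol=0.5):
    """Start from 16 (4 x 4) regions, as in the paper's first division."""
    h, w = u.shape
    regions = []
    for i in range(4):
        for j in range(4):
            split_uniform(u, v, j * (w // 4), i * (h // 4),
                          w // 4, h // 4, tol, out=regions)
    return regions
```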
Third step: Predicted Path Voting

• We prepare a four-dimensional voting space (x, y, t, direction). For each uniform flow region detected in the previous process, we predict the path of the region over a certain future time interval. Fig. shows the predicted path (only x-y-t are shown). We assume that the region continues to move in the direction of its mean flow vector (u, v) at its speed. We approximate each region by an ellipse whose center coincides with the region centroid. Every point inside the ellipse is given a weight according to a two-dimensional Gaussian, as shown in Fig. 3(a). This weight is voted at the predicted position (x, y) at time t in the direction of (u, v).
• The voted result is compared with a threshold. If there is any region whose number of votes is over the threshold, the region is detected as a target.
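A possible reading of the voting step as code, assuming a dense accumulator space[t, y, x, d] whose fourth axis discretizes flow direction into D bins; the ellipse handling, Gaussian width, and prediction horizon are all illustrative assumptions:

```python
import numpy as np

def vote_region(space, cx, cy, u, v, a, b, horizon=10, sigma=2.0):
    """Vote one uniform-flow region's predicted path into the 4-D
    accumulator space[t, y, x, d] (d = discretized flow direction)."""
    T, H, W, D = space.shape
    angle = np.arctan2(v, u) % (2 * np.pi)
    d = int(angle / (2 * np.pi) * D)            # direction bin
    yy, xx = np.mgrid[-int(b):int(b)+1, -int(a):int(a)+1].astype(float)
    inside = (xx / a) ** 2 + (yy / b) ** 2 <= 1.0   # elliptical support
    weight = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) * inside
    for t in range(min(horizon, T)):
        # The region is assumed to keep moving with its mean flow (u, v).
        px, py = int(round(cx + u * t)), int(round(cy + v * t))
        for i in range(weight.shape[0]):
            for j in range(weight.shape[1]):
                x, y = px + j - int(a), py + i - int(b)
                if 0 <= x < W and 0 <= y < H and weight[i, j] > 0:
                    space[t, y, x, d] += weight[i, j]
```

A region is then reported as a target wherever the accumulated votes exceed the threshold, e.g. peaks found with np.argwhere(space > threshold).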
Reference

• Sonka, Hlavac, Boyle: Image Processing, Analysis, and Machine Vision
• Detecting and tracking people in complex scenes:
  http://www-cv.mech.eng.osaka-u.ac.jp/research/tracking_group/iketani/research_e/node1.html