Behavior Analysis
Midterm Report
Lipov Irina
Ravid Dan
Kotek Tommer
Main Goal

Analyzing people’s behavior in an office environment, using streams from 3 video cameras and objects identified by a tracker as input.
Main Goal (cont.)

Initialization: mapping the background objects (such as a computer, a phone, etc.)
For each object, decide whether it is a person
For each person, decide whether they are using a background object
Current State

Manual Mapping (for one camera)
Skin Detection
Face Detection
Naïve Behavior Analysis (for one frame)
Manual Mapping

The function objects performs the manual mapping:
Displays the input background image.
For each object, the user selects a polygon using the mouse and names it.
The polygon represents the object and/or its relevant surroundings.
Outputs: a list of masks (one per object), a list of object names, and the number of objects.
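The report does not show the mapping code itself, so here is a minimal NumPy sketch of the one piece that is purely computational: turning a user-drawn polygon into a binary object mask. The function name polygon_mask and the sample "phone" polygon are illustrative assumptions, not the project's actual code; mouse interaction is omitted.

```python
import numpy as np

def polygon_mask(vertices, shape):
    """Rasterize a polygon given as (x, y) vertices into a boolean
    mask of the given (height, width) shape, by even-odd ray casting."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinates
    inside = np.zeros(shape, dtype=bool)
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        if y0 == y1:
            continue                     # horizontal edge: never crosses a row test
        # Edges that cross the horizontal scanline of each pixel row
        crosses = (y0 > ys) != (y1 > ys)
        # x coordinate where the edge crosses that row
        x_cross = x0 + (ys - y0) * (x1 - x0) / (y1 - y0)
        inside ^= crosses & (xs < x_cross)
    return inside

# Hypothetical object: a rectangle standing in for a "phone" region.
phone_mask = polygon_mask([(2, 2), (7, 2), (7, 5), (2, 5)], (10, 10))
```

In the real tool the vertices would come from mouse clicks over the displayed background image; each named object would get one such mask in the output list.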
Manual Mapping (cont.)

(Figure: the mapped objects overlaid on the background image michael001.jpg)
Skin Detection

Builds a histogram of the colors in a file
containing skin samples.
(Figures: skin samples; skin color histogram)
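The histogram step can be sketched in a few lines of NumPy. The bin count BINS and the toy sample pixels below are assumptions for illustration; the project's actual bin layout and skin-sample file are not given in the report.

```python
import numpy as np

BINS = 32  # hypothetical number of bins per color channel

def skin_histogram(samples):
    """Build a normalized RGB color histogram from an (N, 3) uint8
    array of skin-sample pixels."""
    hist, _ = np.histogramdd(
        samples, bins=(BINS, BINS, BINS), range=((0, 256),) * 3)
    return hist / hist.sum()   # counts -> probabilities

# Toy skin samples: a few reddish pixels standing in for the sample file.
samples = np.array([[200, 120, 100], [210, 130, 110], [205, 125, 105]],
                   dtype=np.uint8)
hist = skin_histogram(samples)
```

Normalizing the counts lets the histogram be read directly as a per-color skin probability in the later lookup step.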
Skin Detection (cont.)

The RemoveBG function removes the background by subtraction.
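A minimal sketch of background subtraction, assuming a simple per-pixel absolute difference against a static background image; the threshold value 30 and the helper name remove_background are illustrative, not the project's RemoveBG implementation.

```python
import numpy as np

def remove_background(frame, background, thresh=30):
    """Keep only pixels whose absolute difference from the background
    image exceeds `thresh` in at least one channel (assumed threshold)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    moving = diff.max(axis=-1) > thresh        # per-pixel foreground test
    return frame * moving[..., None]           # zero out background pixels

background = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = background.copy()
frame[1, 1] = (200, 150, 120)                  # a "new" foreground pixel
fg = remove_background(frame, background)
```

Casting to int16 before subtracting avoids the wrap-around that unsigned uint8 arithmetic would produce.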
Skin Detection (cont.)

For each pixel in the subtracted image, the probability of it being skin is computed according to the histogram, and a grayscale probability image is generated.
Skin Detection (cont.)

The image is thresholded to create a skin
mask.
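The lookup-and-threshold steps above can be sketched as follows. The bin count, the toy one-hot histogram, and the 0.5 threshold are assumptions; the report does not state the actual values.

```python
import numpy as np

BINS = 32  # assumed bins per channel (must match the histogram)

def skin_probability(image, hist):
    """Look each pixel of an (H, W, 3) uint8 image up in the skin-color
    histogram, producing a grayscale probability image."""
    idx = (image // (256 // BINS)).astype(int)          # per-channel bin index
    return hist[idx[..., 0], idx[..., 1], idx[..., 2]]  # histogram lookup

# Toy histogram assigning probability 1 to a single "skin" color.
hist = np.zeros((BINS, BINS, BINS))
hist[25, 15, 12] = 1.0            # the bin of RGB (200, 120, 100)

image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = (200, 120, 100)     # one skin-colored pixel
prob = skin_probability(image, hist)
skin_mask = prob > 0.5            # threshold into a binary skin mask
```

The vectorized fancy-indexing lookup does the whole image in one step, which is also the kind of change that could address the run-time concern noted later.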
Face Detection

Assumption: a connected component in the skin mask image that represents a face must contain at least one hole (the eyes, nostrils, eyebrows, mouth, etc. do not have skin color).
Small holes are removed by blurring the image.
All connected components containing a hole are stored in a list of masks.
Each such mask represents a single face.
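The hole criterion can be sketched in pure NumPy: flood-fill the background from the image border, call any unreached background pixel a hole, and keep the skin components adjacent to a hole. The function names and the 4-connectivity choice are assumptions; the project likely uses library routines instead.

```python
import numpy as np
from collections import deque

def flood(mask, seeds):
    """4-connected flood fill over the True pixels of `mask` from `seeds`."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    q = deque()
    for s in seeds:
        if mask[s] and not seen[s]:
            seen[s] = True
            q.append(s)
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                q.append((ny, nx))
    return seen

def face_components(skin_mask):
    """Return the connected skin components that contain at least one hole."""
    h, w = skin_mask.shape
    border = ([(0, x) for x in range(w)] + [(h - 1, x) for x in range(w)] +
              [(y, 0) for y in range(h)] + [(y, w - 1) for y in range(h)])
    outside = flood(~skin_mask, border)        # background reachable from border
    holes = ~skin_mask & ~outside              # enclosed background pixels
    faces, labeled = [], np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if skin_mask[y, x] and not labeled[y, x]:
                comp = flood(skin_mask, [(y, x)])
                labeled |= comp
                grown = comp.copy()            # dilate by one pixel
                grown[1:] |= comp[:-1]; grown[:-1] |= comp[1:]
                grown[:, 1:] |= comp[:, :-1]; grown[:, :-1] |= comp[:, 1:]
                if (grown & holes).any():      # component touches a hole
                    faces.append(comp)
    return faces

skin = np.zeros((7, 12), dtype=bool)
skin[1:6, 1:6] = True
skin[2:5, 2:5] = False     # a ring with a 3x3 hole: passes the face criterion
skin[1:4, 8:11] = True     # a solid blob with no hole: rejected
faces = face_components(skin)
```

On the toy mask, only the ring component is kept, matching the stated assumption that faces enclose non-skin holes.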
Face Detection (cont.)
Naïve Behavior Analysis

For each person (from the person list) and each background object (from the object list), check whether their masks intersect. If they do, the person is using the object.
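The intersection test is a one-liner over the boolean masks; the sample "phone" and "computer" masks below are hypothetical stand-ins for the mapped objects.

```python
import numpy as np

def is_using(person_mask, object_mask):
    """Naïve rule: the person is using the object iff their masks intersect."""
    return bool(np.logical_and(person_mask, object_mask).any())

person = np.zeros((5, 5), dtype=bool)
person[1:3, 1:3] = True
phone = np.zeros((5, 5), dtype=bool)
phone[2:4, 2:4] = True            # overlaps the person at (2, 2)
computer = np.zeros((5, 5), dtype=bool)
computer[4, :] = True             # no overlap with the person
```

Running the rule over every (person, object) pair gives the per-frame usage report described above.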
What’s next

Improvements:
Mapping – generalize to 3 cameras (generate 3 lists of corresponding masks)
Skin detection – improve run times
Face detection – find a more accurate criterion
What’s really next

Decide whether a tracker object is a person, based on the images from all cameras, by relating each face to the relevant object
Analyze the person’s actions
Choose the analysis perspective, i.e., whether to run state machines describing each person’s behavior or each object’s use
Write and implement the state machines as chosen