Transcript Document

Face Recognition From Video
Part (II)
Advisor: Wei-Yang Lin
Presenter: C.J. Yang & S.C. Liang
Outline
 Method (I): A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method [1]
 Method (II): An Automatic Face Detection and Recognition System for Video Streams [4]
 Conclusion
A Real-Time Face Recognition
Approach from Video Sequence
using Skin Color Model and
Eigenface Method
Islam, M.W.; Monwar, M.M.; Paul, P.P.; Rezaei, S.;
IEEE Canadian Conference on Electrical and Computer
Engineering, May 2006, pp. 2181-2185
Introduction
 Real-time face recognition
 Face detection
   Others: most use intensity values
   Proposed: use skin color
     The majority of acquired images are colored
     Skin color features should be an important source of information for discriminating faces from the background
 Face recognition
   Most approaches ignore the question of which features are important for classification and which are not
   Proposed: use the eigenface approach
     Principal component analysis (PCA) of the facial images leaves only those features that are critical for face recognition
     Advantages: speed, simplicity, learning capability, robustness to small changes in the face image
Method (I)
 Real-time image acquisition
   Using MATLAB Image Acquisition Toolbox 1.1
 Face detection
 Face recognition
 Video sequences
 Results
Face Detection
- Skin Color Model
 Adaptable to people of different skin colors and to different lighting conditions
 Skin colors of different people are very close; they differ mainly in intensity
Face Detection
- Skin Color Model (cont.)
Selected skin-color region
Cluster in color space
[2]
[2] R.S. Feris, T. E. de Campos, and R. M. C. Junior, "Detection and tracking of facial features in video
sequences," proceedings of the Mexican International Conference on Artificial Intelligence. Advances in
Artificial Intelligence, pp. 127 - 135, 2000.
Face Detection
- Skin Color Model (cont.)
 Chromatic colors are defined by a normalization process:
     r = R / (R + G + B),  g = G / (R + G + B)
 Skin colors cluster in chromatic (r, g) space
 Gaussian model N(m, C):
     m = E{x}, where x = (r, g)^T
     C = E{(x − m)(x − m)^T} = [σ_rr σ_rg; σ_gr σ_gg]
Face Detection
- Skin Color Model (cont.)
 Obtain the likelihood of skin for any pixel of an image with the Gaussian-fitted skin color model
 Transform the color image into a grayscale likelihood image
 Apply a threshold value to show the skin regions
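The likelihood-map steps above can be sketched in NumPy. This is a minimal sketch: the mean, covariance, and threshold values below are illustrative placeholders, not values from the paper (which estimates them from training pixels).

```python
import numpy as np

# Hypothetical mean and covariance of the fitted skin-color Gaussian N(m, C)
# in (r, g) chromatic space -- illustrative values, not the paper's.
m = np.array([0.45, 0.30])
C = np.array([[0.0020, -0.0005],
              [-0.0005, 0.0015]])
C_inv = np.linalg.inv(C)

def skin_likelihood(image_rgb):
    """Map an RGB image (H, W, 3) to a grayscale skin-likelihood image."""
    rgb = np.asarray(image_rgb, dtype=float)
    total = rgb.sum(axis=2) + 1e-9           # R + G + B (avoid divide-by-zero)
    r = rgb[..., 0] / total                  # chromatic r = R / (R + G + B)
    g = rgb[..., 1] / total                  # chromatic g = G / (R + G + B)
    x = np.stack([r - m[0], g - m[1]], axis=-1)
    # Mahalanobis term (x - m)^T C^{-1} (x - m), evaluated per pixel
    d2 = np.einsum('...i,ij,...j->...', x, C_inv, x)
    return np.exp(-0.5 * d2)                 # likelihood in [0, 1]

def skin_mask(image_rgb, threshold=0.5):
    """Binarize the likelihood image to show skin regions."""
    return skin_likelihood(image_rgb) > threshold
```

A pixel whose chromaticity falls near the Gaussian mean gets a likelihood near 1; pixels far from the skin cluster fall off exponentially.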
Face Detection
- Skin Region Segmentation
 Segmentation and approximate face location detection process
   Grayscale image; skin-color ranges: r = 0.41~0.50, g = 0.21~0.30
Face Detection
- Skin Region Segmentation (cont.)
Median filter
Face Detection
- Face Detection
 Approximate face locations are detected using a proper height-width proportion of a general face
 Rough face locations are verified by an eye template-matching scheme
Face Recognition
- Defining Eigenfaces
 Main idea of the PCA method
   Find the vectors which best account for the distribution of face images within the entire image space
   These vectors are the eigenvectors of the covariance matrix of the original face images
   Being face-like in appearance, they are called Eigenfaces
   The vectors define the subspace of face images, the face space
Face Recognition
- Defining Eigenfaces (cont.)
 Calculate the Eigenfaces from the training set
   Keep only the M Eigenfaces which correspond to the highest Eigenvalues; these M Eigenfaces span the face space
 Calculate the corresponding location in the M-dimensional weight space for each known individual
 Calculate a set of weights based on a new face image and the M Eigenfaces
Face Recognition
- Defining Eigenfaces (cont.)
 Determine whether the image is a face
 If it is a face, classify the weight pattern as either a known person or an unknown person
[3]
[3] M. A. Turk, and A. P. Pentland, "Face recognition using Eigenfaces," proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.
Face Recognition
- Calculating Eigenfaces
Steps
 Obtain a training set S of M face images (each N by N):
     S = {Γ_1, Γ_2, Γ_3, ..., Γ_M}
 Obtain the mean image:
     Ψ = (1/M) Σ_{n=1}^{M} Γ_n
 Find the difference of each image from the mean:
     Φ_i = Γ_i − Ψ
 Calculate the covariance matrix C:
     C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = A A^T, where A = [Φ_1, Φ_2, ..., Φ_M]
Face Recognition
- Calculating Eigenfaces (cont.)
 Finding the eigenvectors of C directly is a huge computational task. Solution:
   Find the eigenvectors v_k of the much smaller matrix A^T A first:
       A^T A v_k = μ_k v_k
   Multiply by A; the products A v_k are eigenvectors of A A^T:
       A A^T (A v_k) = μ_k (A v_k)
   Gain the Eigenfaces:
       u_l = Σ_{k=1}^{M} v_{lk} Φ_k,  l = 1, 2, ..., M
 Find the eigenvalues of C:
     λ_k = (1/M) Σ_{n=1}^{M} (u_k^T Φ_n)^2
 The M Eigenvectors are sorted in order of descending Eigenvalues and chosen to represent the Eigenspace.
Face Recognition
- Recognition Using Eigenfaces
 Project each of the training images into the Eigenspace
   This gives a vector of weights representing the contribution of each Eigenface:
       w_k = u_k^T (Γ − Ψ),  Ω^T = [w_1, w_2, ..., w_M]
 When a new face image is encountered, project it into the Eigenspace
 Measure the Euclidean distance between weight vectors:
     ε² = ||Ω_a − Ω_b||²
 An acceptance or rejection is determined by applying a threshold
Method (I)
- Result
Method (I)
- Conclusion
 In this face recognition approach,
   a skin color modeling approach is used for face detection
   the Eigenface algorithm is used for face recognition
An Automatic Face Detection and
Recognition System for Video
Streams
A. Pnevmatikakis and L. Polymenakos
2nd Joint Workshop on Multimodal
Interaction and Related Machine
Learning Algorithms (MLMI), 2005
[4]
Introduction
 The authors present the AIT-FACER algorithm
 The system is intended for meeting rooms
   where background and illumination are fairly constant
 As participants enter the meeting room, the system is expected to identify and recognize all of them in a natural and unobtrusive way
   i.e., participants do not need to enter one-by-one and then pose still in front of a camera for the system to work
AIT-FACER System
 Four modules
   Face Detector
   Eye Locator
   Frontal Face Verifier
   Face Recognizer, along with performance metrics
 The goal of the first three modules
   Detect possible face segments in video frames
   Normalize them (in terms of shift, scale and rotation)
   Assign to them a confidence level describing how frontal they are
   Finally, feed them to the face recognizer
AIT-FACER System (cont.)
 Pipeline: detect possible face segments → normalize face segments → decide if the face is frontal or not
   Normalization alleviates the effect of lighting variations and shadows
   The frontal/non-frontal decision uses DFFS (Distance-From-Face-Space) to tell frontal faces and profile faces apart
Foreground Estimation
 Algorithm
   Subtract the empty room image
     The empty room image is utilized as the background
     Sum the RGB channels and binarize the result
   In order to produce solid foreground segments
     A median filtering operation is performed on 8x8 pixel blocks
   Color normalization
     Used to minimize the effects of shadows at the frame level
     The brightness of the foreground segment is set at 95%
       Gamma correction is the preferred and visibly better way, but a faster solution is needed for a real-time system
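The background-subtraction and block-median steps can be sketched as below. This is an approximation: the binarization threshold is a placeholder, and the block-median step here means a tile becomes foreground when the majority of its pixels are foreground, which may differ in detail from the paper's filter.

```python
import numpy as np

def foreground_mask(frame, empty_room, threshold=30, block=8):
    """Foreground estimation sketch: subtract the empty-room background,
    sum the RGB channels, binarize, then median-filter on block x block
    tiles to produce solid foreground segments."""
    diff = np.abs(frame.astype(int) - empty_room.astype(int)).sum(axis=2)
    mask = (diff > threshold).astype(np.uint8)
    # Block-wise median: a tile is foreground iff most of its pixels are
    H, W = mask.shape
    H2, W2 = H - H % block, W - W % block      # crop to whole tiles
    tiles = mask[:H2, :W2].reshape(H2 // block, block, W2 // block, block)
    tile_median = np.median(tiles, axis=(1, 3))
    # Expand each tile's median back to pixel resolution
    solid = np.kron(tile_median, np.ones((block, block)))
    return solid > 0.5
```

Isolated noisy pixels are removed because they cannot dominate an 8x8 tile, while solid changed regions survive.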
Skin Likelihood Segmentation
 Color model
   Based on the skin color and non-skin color histograms [7]
   Log-likelihood:
       L(r,g,b) = log( (s[rgb] / Ts) / (n[rgb] / Tn) )
     s[rgb] is the pixel count contained in bin rgb of the skin histogram
     n[rgb] is the equivalent count from the non-skin histogram
     Ts and Tn are the total counts contained in the skin and non-skin histograms, respectively
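The histogram-based log-likelihood can be sketched as follows. The bin count and the epsilon guard against empty bins are my own choices; the histograms themselves would come from skin/non-skin training data as in Jones and Rehg [7].

```python
import numpy as np

def log_likelihood_map(frame, s_hist, n_hist, bins=32):
    """Per-pixel skin log-likelihood L(r,g,b) = log((s[rgb]/Ts)/(n[rgb]/Tn)).

    s_hist, n_hist: (bins, bins, bins) RGB histograms of skin and
    non-skin training pixels; frame: uint8 image of shape (H, W, 3).
    """
    Ts, Tn = s_hist.sum(), n_hist.sum()
    eps = 1e-9                                   # avoid log(0) on empty bins
    idx = (frame.astype(int) * bins) // 256      # quantize each channel
    s = s_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    n = n_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    return np.log((s / Ts + eps) / (n / Tn + eps))
```

Positive values mark pixels more likely to be skin than non-skin; negative values mark the opposite.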
Skin Likelihood Segmentation
(cont.)
 Algorithm
   Obtain the likelihood map
   The likelihood map L(r,g,b) is binarized
     Pixels take the value 1 (skin color) if L(r,g,b) > -0.75
     The rest of the pixels take the value 0
   The different segments become connected in the skin map
     By using 8-way connectivity
   The bounding boxes of the segments are identified, and boxes with small area (<0.2% of the frame area) are discarded
     Because their resolution is too low for recognition
   Choose segments with face-like elliptical aspect ratios
     The eigenvalues resulting from PCA are used to estimate the elliptical aspect ratio of the region
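The steps above can be sketched with SciPy's connected-component labeling. The aspect-ratio bounds are illustrative placeholders (the paper does not state its accepted range here); the -0.75 threshold and 0.2% area cutoff are from the slides.

```python
import numpy as np
from scipy import ndimage

def face_candidate_boxes(L_map, frame_area_frac=0.002, thresh=-0.75):
    """Binarize the likelihood map, connect segments with 8-way
    connectivity, discard small boxes, and keep segments whose pixel
    distribution is roughly face-ellipse shaped."""
    skin = L_map > thresh                       # 1 = skin color, 0 = rest
    # np.ones((3, 3)) as structure gives 8-way connectivity
    labels, n = ndimage.label(skin, structure=np.ones((3, 3)))
    min_area = frame_area_frac * L_map.size
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w < min_area:                    # too small to recognize
            continue
        ys, xs = np.nonzero(labels[sl])
        # Elliptical aspect ratio from PCA of the pixel coordinates:
        # square root of the ratio of the covariance eigenvalues
        cov = np.cov(np.stack([ys, xs]))
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
        aspect = np.sqrt(ev[0] / max(ev[1], 1e-9))
        if 1.0 <= aspect <= 2.5:                # face-like ellipse (assumed range)
            boxes.append(sl)
    return boxes
```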
Eye Detector
 Idea
   If we can identify the eyes and their locations reliably, we can perform the necessary normalizations in terms of shift, scale and rotation
 Two stages
   First, the eye zone (the eyes and the bridge-of-the-nose area) is detected in the face candidate segments
   Second, the eyes are detected within the identified eye zone
Frontal Face Verification
 Problem
   Skin segmentation heuristics define many areas that are not frontal faces
   Further, the eye detector always defines two dark spots as eyes, even when the segment is not a frontal face
 Solution
   The first stage uses DFFS to compute the distance from a frontal face prototype
     Segments with smaller DFFS values are considered frontal faces with larger confidence
   A two-class LDA classifier is trained to discriminate frontal from non-frontal head views
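The slides do not give the DFFS formula, but a standard formulation is the reconstruction error of a segment projected onto an eigenface subspace; the sketch below assumes that formulation, with a mean face psi and an orthonormal eigenface matrix U as in Method (I).

```python
import numpy as np

def dffs(segment, psi, U):
    """Distance-From-Face-Space: norm of the component of (segment - psi)
    lying outside the face space spanned by the columns of U.
    Smaller DFFS -> higher confidence the segment is a frontal face."""
    phi = segment - psi
    proj = U @ (U.T @ phi)           # component inside the face space
    return float(np.linalg.norm(phi - proj))
```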
Frontal Face Verification (cont.)
 Figure: the 100 normalized segments in ascending DFFS order
Face Recognition

All normalized segments are finally
processed by an LDA classifier and an
identity tag is attached to each one
Result
Video-Based Face Recognition
Evaluation in the CHIL Project –
Run 1
Ekenel, H.K.; Pnevmatikakis, A.;
IEEE on Proceedings of the 7th International
Conference on Automatic Face and Gesture
Recognition (FGR’06), 2006
[5]
Smart-Room
Face Image
[6]
Reference
[1] Islam, M.W.; Monwar, M.M.; Paul, P.P.; Rezaei, S.; "A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method," IEEE Canadian Conference on Electrical and Computer Engineering, May 2006, pp. 2181-2185.
[2] R.S. Feris, T. E. de Campos, and R. M. C. Junior, "Detection and tracking of facial features in video sequences," Proceedings of the Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence, pp. 127-135, 2000.
[3] M. A. Turk and A. P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.
[4] A. Pnevmatikakis and L. Polymenakos, "An Automatic Face Detection and Recognition System for Video Streams," 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI), 2005.
[5] Ekenel, H.K.; Pnevmatikakis, A.; "Video-Based Face Recognition Evaluation in the CHIL Project – Run 1," Proceedings of the 7th IEEE International Conference on Automatic Face and Gesture Recognition (FGR'06), 2006.
[6] CHIL, http://chil.server.de/servlet/is/2764/
[7] M. Jones and J. Rehg, "Statistical color models with application to skin detection," Computer Vision and Pattern Recognition, pp. 274-280, 1999.