13. Multimedia computing - Department of Computer Science


Multimedia Content Analysis on Clusters and Grids
Frank J. Seinstra
([email protected])
Vrije Universiteit
Faculty of Sciences
Amsterdam
Overview (1)
• Part 1: What is Multimedia Content Analysis (MMCA)?
• Part 2: Why parallel computing in MMCA - and how?
• Part 3: Software Platform: Parallel-Horus
• Part 4: Example – Parallel Image Processing on Clusters
Overview (2)
• Part 5: Grids and their specific problems
• Part 6: Towards a Software Platform for MMCA on Grids
• Part 7: Large-scale MMCA applications on Grids
• Part 8: Future research directions (new projects?)
Introduction: A Few Realistic Problem Scenarios…
A Real Problem…
• News Broadcast – September 21, 2005:
• Police Investigation: over 80,000 CCTV recordings (analyzed by hand)
• First match found only 2.5 months after the attacks
• So: automatic analysis?
Another Real Problem…
• Web Video Search:
• Search based on video content
– User annotations are known to be notoriously bad (e.g. YouTube)
[Screenshot: video search results for “Hillary Clinton candidate”]
Are these problems realistic?
• Beeld&Geluid (Dutch Institute for Sound and Vision, Hilversum):
– Interactive access to Dutch national TV history
• NFI (Dutch Forensics Institute, Den Haag):
– Surveillance Camera Analysis / Crime Scene Reconstruction
Part 1: What is Multimedia Content Analysis (MMCA)?
Multimedia
• Multimedia = Text + Sound + Image + Video + ….
• Video = image + image + image + ….
– In many (not all) multimedia applications:
• calculations are executed on each separate video frame independently
• So: we focus on Image Processing (+ Computer Vision)
What is a Digital Image?
• An image is a continuous function that has been discretized in spatial coordinates, brightness and color frequencies
• Most often: 2-D with ‘pixels’ as scalar or vector value
• However:
– Image dimensionality can range from 1-D to n-D
• Example (medical imaging): 5-D = x, y, z, time, emission wavelength
– Pixel dimensionality can range from 1-D to n-D
• Generally: 1D = binary/grayscale; 3D = color (e.g. RGB)
• n-D = hyper-spectral (e.g. remote sensing by satellites; every 10-20nm)
Complete A-Z Multimedia Applications
[Diagram: full application pipeline, from A to Z, built on Impala / (Parallel-)Horus]
• Low level operations (In: image):
– Image ---> (sub-) Image
– Image ---> Scalar / Vector Value
– Image ---> Array of S/V Values
• Intermediate level operations:
– Feature Vector(s) ---> Feature Vector
– Feature Vector(s) ---> Similarity Matrix
– Feature Vector(s) ---> Clustering (K-Means, …)
– Feature Vector(s) ---> Classification (SVM, …)
• High level operations (Out: ‘meaning’ful result):
– ---> Ranked set of images
– ---> Recognized object
– ......
MMCA: software library design issues
• Two major implementation problems:
• 1. Nr. of commonly applied operations is very large
• 2. Nr. of data types is large
– combinatorial explosion caused many software projects to be discontinued
• Existing Library (Horus):
• Sets of operations with similar behavior & data accesses implemented as single generic algorithm
• So, we have a small number of algorithmic patterns
• Each to be used by instantiating it with the proper parameters, incl. the operation to be applied to the individual pixels
Low Level Image Processing ‘Patterns’ (1)
• Unary Pixel Operation (example: absolute value)
• Binary Pixel Operation (example: addition)
• N-ary Pixel Operation…
• Template / Kernel / Filter / Neighborhood Operation (example: Gauss filter)
Low Level Image Processing ‘Patterns’ (2)
• Reduction Operation (example: sum)
• N-Reduction Operation (example: histogram)
• Geometric Transformation, using a transformation matrix M (example: rotation)
Example Application: Template Matching
Template
Input Image
Result Image
Example Application: Template Matching
for all images {
  inputIm = readFile ( … );
  unaryPixOpI ( sqrdInIm, inputIm, "set" );
  binaryPixOpI ( sqrdInIm, inputIm, "mul" );
  for all symbol images {
    symbol = readFile ( … );
    weight = readFile ( … );
    unaryPixOpI ( filtIm1, sqrdInIm, "set" );
    unaryPixOpI ( filtIm2, inputIm, "set" );
    genNeighborhoodOp ( filtIm1, borderMirror, weight, "mul", "sum" );
    binaryPixOpI ( symbol, weight, "mul" );
    genNeighborhoodOp ( filtIm2, borderMirror, symbol, "mul", "sum" );
    binaryPixOpI ( filtIm1, filtIm2, "sub" );
    binaryPixOpI ( maxIm, filtIm1, "max" );
  }
  writeFile ( …, maxIm, … );
}
See: http://www.science.uva.nl/~fjseins/ParHorusCode/
Part 2: Why Parallel Computing in MMCA (and how)?
The ‘Need for Speed’ in MMCA Research & Applications
• Growing interest in international ‘benchmark evaluations’
– Task: find ‘semantic concepts’ automatically
• PASCAL VOC Challenge (10,000+ images)
• NIST TRECVID (200+ hours of video)
• A problem of scale:
– At least 30-50 hours of processing time per hour of video
• Beeld&Geluid: 20,000 hours of TV broadcasts per year
• NASA: over 1 TB of hyper-spectral image data per day
• London Underground: over 120,000 years of processing…!!!
High-Performance Computing
• Solution:
– Parallel & distributed computing at a very large scale
[Hardware options: General Purpose CPUs, GPUs, Accelerators, Clusters, Grids]
• Question:
– What type of high-performance hardware is most suitable?
• Our initial choice:
– Clusters of general purpose CPUs (e.g. the DAS cluster)
– For many pragmatic reasons…
But… how to let ‘non-experts’ program clusters easily & efficiently?
• Parallelization tools:
– Compilers
– Languages
– Parallelization Libraries
• General purpose & domain specific
[Diagram: tools ordered by programming effort vs. efficiency]
• Message Passing Libraries (e.g., MPI, PVM)
• Shared Memory Specifications (e.g., OpenMP)
• Parallel Languages (e.g., Occam, Orca)
• Extended High Level Languages (e.g., HPF)
• Automatic Parallelizing Compilers
• User Transparent Parallelization Tools
• Parallel Image Processing Languages (e.g., Apply, IAL)
• Parallel Image Processing Libraries
Existing Parallel Image Processing Libraries
• Suffer from many problems [Bräunl et al., 2001]:
– No ‘familiar’ programming model:
• Identifying parallelism is still the responsibility of the programmer (e.g. data partitioning [Taniguchi97], loop parallelism [Niculescu02, Olk95])
– Reduced maintainability / portability:
• Multiple implementations for each operation [Jamieson94]
• Restricted to a particular machine [Moore97, Webb93]
– Non-optimal efficiency of parallel execution:
• Ignore machine characteristics for optimization [Juhasz98, Lee97]
• Ignore optimization across library calls [all]
Our Approach
• Sustainable library-based software architecture for user-transparent parallel image processing (2003)
– (1) Sustainability:
• Maintainability, extensibility, portability (i.e. from the existing sequential ‘Horus’ library)
• Applicability to commodity clusters
– (2) User transparency:
• Strictly sequential API (identical to Horus)
• Intra-operation efficiency & inter-operation efficiency
Part 3: Software Platform: Parallel-Horus
What Type(s) of Parallelism to Support?
• Data parallelism:
– “exploitation of concurrency that derives from the application of the same operation to multiple elements of a data structure” [Foster, 1995]
• Task parallelism:
– “a model of parallel computing in which many different operations may be executed concurrently” [Wilson, 1995]
Why Data Parallelism (only)?
for all images {
  inputIm = readFile ( … );
  unaryPixOpI ( sqrdInIm, inputIm, "set" );
  binaryPixOpI ( sqrdInIm, inputIm, "mul" );
  for all symbol images {
    symbol = readFile ( … );
    weight = readFile ( … );
    unaryPixOpI ( filtIm1, sqrdInIm, "set" );
    unaryPixOpI ( filtIm2, inputIm, "set" );
    genNeighborhoodOp ( filtIm1, borderMirror, weight, "mul", "sum" );
    binaryPixOpI ( symbol, weight, "mul" );
    genNeighborhoodOp ( filtIm2, borderMirror, symbol, "mul", "sum" );
    binaryPixOpI ( filtIm1, filtIm2, "sub" );
    binaryPixOpI ( maxIm, filtIm1, "max" );
  }
  writeFile ( …, maxIm, … );
}
• Natural approach for low level image processing
• Scalability (in general: #pixels >> #different tasks)
• Load balancing is easy
• Finding independent tasks automatically is hard
• In other words: it’s just the best starting point… (but not necessarily optimal at all times)
Many Low Level Imaging Algorithms are Embarrassingly Parallel
• On 2 CPUs:
Parallel Operation on Image {
  Scatter Image                           (1)
  Sequential Operation on Partial Image   (2)
  Gather Result Data                      (3)
}
• Works (with minor issues) for unary, binary, n-ary operations & (n-) reduction operations
Other Imaging Algorithms Are Only Marginally More Complex (1)
• On 2 CPUs (without scatter / gather):
Parallel Filter Operation on Image {
  Scatter Image                            (1)
  Allocate Scratch                         (2)
  Copy Image into Scratch                  (3)
  Handle / Communicate Borders             (4)
  Sequential Filter Operation on Scratch   (5)
  Gather Image                             (6)
}
• Also possible: ‘overlapping’ scatter
• But not very useful in iterative filtering
Other Imaging Algorithms Are Only Marginally More Complex (2)
• On 2 CPUs (rotation; without broadcast / gather):
Parallel Geometric Transformation on Image {
  Broadcast Image                          (1)
  Create Partial Image                     (2)
  Sequential Transform on Partial Image    (3)
  Gather Result Image                      (4)
}
• Potentially faster implementations exist for special cases
More Challenging: Separable Recursive Filtering (2 x 1-D)
[Figure: a 2-D Template / Kernel / Filter / Neighborhood Operation (example: Gauss filter), versus a 1-D filter in the x-direction followed by a 1-D filter in the y-direction]
• Separable filters (1 x 2-D becomes 2 x 1-D): drastically reduces sequential computation time
• Recursive filtering: the result of each filter step (a pixel value) is stored back into the input image
• So: a recursive filter re-uses (part of) its output as input
Parallel Recursive Filtering: Solution 1
(SCATTER) -> (FILTER X-dir) -> (TRANSPOSE) -> (FILTER Y-dir) -> (GATHER)
• Drawback: the transpose operation is very expensive (especially when the number of CPUs is large)
Parallel Recursive Filtering: Solution 2
[Figure: three partitionings of the image over processors P0, P1, P2]
• Loop-carried dependence at the final stage (sub-image level):
• minimal communication overhead
• full serialization
• Loop-carried dependence at the innermost stage (pixel-column level):
• high communication overhead
• fine-grained wave-front parallelism
• Tiled loop-carried dependence at an intermediate stage (image-tile level):
• moderate communication overhead
• coarse-grained wave-front parallelism
Parallel Recursive Filtering: Wave-front Parallelism
[Figure: wave-front execution over Processors 0-3]
• Drawback:
– partial serialization
– non-optimal use of available CPUs
Parallel Recursive Filtering: Solution 3
• Multipartitioning:
[Figure: skewed cyclic block partitioning over Processors 0-3]
– Skewed cyclic block partitioning
– Each CPU owns at least one tile in each of the distributed dimensions
– All neighboring tiles in a particular direction are owned by the same CPU
Multipartitioning
• Full Parallelism:
– First in one direction…
– And then in the other…
– Border exchange at the end of each sweep
– Communication at the end of a sweep is always with the same node
Parallel-Horus: Parallelizable Patterns
[Architecture: a strictly Sequential API on top of Horus; the Parallelizable Patterns are implemented by Parallel Extensions on top of MPI]
• Minimal intrusion
– Much of the original sequential Horus library has been left intact
– Parallelization is localized in the code
• Easy to implement extensions
Parallel-Horus: Pattern Implementations (old vs. new)

Old (sequential):

template<class …, class …, class …>
inline DstArrayT*
CxPatUnaryPixOp(… dst, … src, … upo)
{
    if (dst == 0)
        dst = CxArrayClone<DstArrayT>(src);
    CxFuncUpoDispatch(dst, src, upo);
    return dst;
}

New (with run-time state transitions):

template<class …, class …, class …>
inline DstArrayT*
CxPatUnaryPixOp(… dst, … src, … upo)
{
    if (dst == 0)
        dst = CxArrayClone<DstArrayT>(src);
    if (!PxRunParallel()) {
        // run sequential
        CxFuncUpoDispatch(dst, src, upo);
    } else {
        // run parallel
        PxArrayPreStateTransition(src, …, …);
        PxArrayPreStateTransition(dst, …, …);
        CxFuncUpoDispatch(dst, src, upo);
        PxArrayPostStateTransition(dst);
    }
    return dst;
}
Parallel-Horus: Inter-operation Optimization
• Lazy Parallelization: avoid communication
[Figure: “Don’t do this” vs. “Do this” operation schedules]
Parallel-Horus: Distributed Image Data Structures
• Distributed image data structure abstraction
[Figure: one global structure on CPU 0 (host); local structures on CPUs 1-3]
– 3-tuple: < state of global, state of local, distribution type >
– state of global = { none, created, valid, invalid }
– state of local = { none, valid, invalid }
– distribution type = { none, partial, full, not-reduced }
– 9 combinations are ‘legal’ states (e.g.: < valid, valid, partial > )
Lazy Parallelization: Finite State Machine
• Communication operations serve as state transition functions between distributed data structure states
• State transitions are performed only when absolutely necessary
• State transition functions allow correct conversion of legal sequential code to legal parallel code at all times
• Nice features:
• Requires no a priori knowledge of loops and branches
• Can be done on the fly at run-time (with no measurable overhead)
Part 4: Example – Parallel Image Processing on Clusters
Application: Detection of Curvilinear Structures
• Apply an anisotropic Gaussian filter bank to the input image
• Maximum response when the filter is tuned to the line direction
• Here: 3 different implementations
• fixed filters applied to a rotating image
• rotating filters applied to a fixed input image
– separable (UV)
– non-separable (2D)
• Depending on the parameter space:
• few minutes - several hours
Sequential = Parallel
for all orientations theta {
  geometricOp ( inputIm, &rotatIm, -theta, LINEAR, 0, p, "rotate" );
  for all smoothing scales sy {
    for all differentiation scales sx {
      genConvolution ( filtIm1, mirrorBorder, "gauss", sx, sy, 2, 0 );
      genConvolution ( filtIm2, mirrorBorder, "gauss", sx, sy, 0, 0 );
      binaryPixOpI ( filtIm1, filtIm2, "negdiv" );
      binaryPixOpC ( filtIm1, sx*sy, "mul" );
      binaryPixOpI ( contrIm, filtIm1, "max" );
    }
  }
  geometricOp ( contrIm, &backIm, theta, LINEAR, 0, p, "rotate" );
  binaryPixOpI ( resltIm, backIm, "max" );
}
IMPLEMENTATION 1
for all orientations theta {
  for all smoothing scales sy {
    for all differentiation scales sx {
      genConvolution ( filtIm1, mirrorBorder, "func", sx, sy, 2, 0 );
      genConvolution ( filtIm2, mirrorBorder, "func", sx, sy, 0, 0 );
      binaryPixOpI ( filtIm1, filtIm2, "negdiv" );
      binaryPixOpC ( filtIm1, sx*sy, "mul" );
      binaryPixOpI ( resltIm, filtIm1, "max" );
    }
  }
}
IMPLEMENTATIONS 2 and 3
Measurements on DAS-1 (Vrije Universiteit)
[Charts: scaled speedup (values up to ~104 at 120 CPUs) and execution times (from ~2086 s down to ~4.8 s) for Conv2D, ConvUV and ConvRot, versus Nr. CPUs]
• 512x512 image
• 36 orientations
• 8 anisotropic filters
• So: part of the efficiency of parallel execution always remains in the hands of the application programmer!
Measurements on DAS-2 (Vrije Universiteit)
[Chart: speedup of Conv2D and ConvUV versus #Nodes, compared to linear]

Execution times (s), with lazy parallelization (LazyPar) on / off:

#Nodes | Conv2D (on) | ConvUV (on) | Conv2D (off) | ConvUV (off)
     1 |     425.115 |     185.889 |      425.115 |      185.889
     2 |     213.358 |      93.824 |      237.450 |      124.169
     4 |     107.470 |      47.462 |      133.273 |       79.847
     8 |      54.025 |      23.765 |       82.781 |       60.158
    16 |      27.527 |      11.927 |       55.399 |       47.407
    24 |      18.464 |       8.016 |       48.022 |       45.724
    32 |      13.939 |       6.035 |       42.730 |       43.050
    48 |       9.576 |       4.149 |       38.164 |       40.944
    64 |       7.318 |       3.325 |       36.851 |       41.265

• 512x512 image
• 36 orientations
• 8 anisotropic filters
• So: lazy parallelization (or: optimization across library calls) is very important for high efficiency!
Part 5: Grids and Their Specific Problems
The Grid
• The “Promise of the Grid”:
– 1997 and beyond: efficient and transparent (i.e. easy-to-use) wall-socket computing over a distributed set of resources
• Compare: the electrical power grid
Grid Problems (1)
• Getting an account on remote compute clusters is hard!
• Find the right person to contact…
• Hope he/she does not completely ignore your request…
• Provide proof of (a.o.) relevance, ethics, ‘trusted’ nationality…
• Fill in and sign NDAs, Foreign National Information sheets, official usage documents, etc…
• Wait for the account to be created, and the username to be sent to you…
• Hope to obtain an initial password as well…
• Getting access to an existing international Grid testbed is easier
• But only marginally so…
Grid Problems (2)
• Getting your C++/MPI code to compile and run is hard!
• Copying your code to the remote cluster (‘scp’ often not allowed)…
• Setting up your environment & finding the right MPI compiler (mpicc, mpiCC, … ???)…
• Making the necessary include libraries available…
• Finding the correct way to use the cluster reservation system (if there is any)…
• Finding the correct way to start your program (mpiexec, mpirun, … and on which nodes ???)…
• Getting your compute nodes to communicate with other machines (generally not allowed)…
• So:
• Nothing is standardized yet (not even Globus)
• A working application in one Grid domain will generally fail in all others
Grid Problems (3)
• Keeping an application running (efficiently) is hard!
– Grids are inherently dynamic:
• Networks and CPUs are shared with others, causing fluctuations in resource availability
– Grids are inherently faulty:
• compute nodes & clusters may crash at any time
– Grids are inherently heterogeneous:
• optimization for run-time execution efficiency is by-and-large unknown territory
• So:
• An application that runs (efficiently) at one moment should be expected to fail a moment later
Realizing the ‘Promise of the Grid’ for MMCA
• Grid programming and execution is hard!
• Even for experts in the field
• A set of fundamental methodologies is required
• Each solving part of the Grid’s complexities
• For most of these methodologies, solutions exist today:
• JavaGAT / SAGA, Ibis / Satin, SmartSockets, Parallel-Horus / SuperServers
Part 6: Towards a Software Platform for MMCA on Grids
Wide-area Multimedia Services
[Diagram: Parallel-Horus clients connected to Parallel-Horus servers on multiple clusters]
• Make Parallel-Horus servers available on a set of clusters world-wide
• Each server runs in data parallel manner
• Parallel-Horus clients can upload data and execution requests
• Requests executed fully asynchronously
• Ultimate goal:
• Transparent task parallel execution of data parallel services
Current Problems
[Diagram: Parallel-Horus clients and servers on multiple clusters]
• Execution on each cluster ‘by hand’ -> use JavaGAT
• Unstable / faulty communication -> use Ibis
• Connectivity problems -> use SmartSockets
• No platform-independence -> use Java
Towards ‘SuperServers’
[Diagram: Parallel-Horus clients and servers on multiple clusters]
• Transparent submission of server programs
• Transparent server availability notification
• Dynamic uploading of client data and codes
• All in Java:
– Compile once, run everywhere
Part 7: Large-Scale MMCA Applications on Grids
Example 1: TRECVID Competition (1)
• International standard benchmark for content-based video retrieval
• Goal: find ‘semantic concepts’ in 200 hours of news broadcasts (CNN, ABC, …)
• Strong international competitors, a.o.: IBM Research, Carnegie Mellon University
• Sequential approach: over 1 year of processing
Example 1: TRECVID Competition (2)
• Results (2004):
– Parallel-Horus + DAS-2 (200 Pentium III Nodes)
– Pure processing time: < 60 hours
• Advantages:
– Algorithm design entirely sequential & speedups ‘for free’
• Design: more thorough comparison of algorithmic approaches
• Development: more accurate parameter tuning
• Processing: more in-depth scene analysis
Example 2: Color-Based Object Recognition (1)
[Figure: input image + retina overlay = a 444-valued ‘feature vector’]
• Our Solution:
• Place a ‘retina’ over the input image - each of 37 ‘retinal areas’ serves as a ‘receptive field’
• For each receptive field:
– Obtain a set of local histograms, invariant to shading / lighting
– Estimate Weibull parameters ß and γ for each histogram
• Hence: scene description by a set of 37x4x3 = 444 parameters
Example 2: Color-Based Object Recognition (2)
• Learning phase:
– The set of 444 parameters is stored in a database
– So: learning from one example, under 1 visual setting
• Recognition phase:
– Validation by showing objects under at least 50 different conditions:
• Lighting direction
• Lighting color
• Viewing position
[Result: “a hedgehog”]
Example 2: Color-Based Object Recognition (3)
• Amsterdam Library of Object Images
– 1000 objects
• In laboratory setting:
– 300 objects correctly recognized under all (!) visual conditions
– 700 remaining objects ‘missed’ under extreme conditions only
Color-Based Object Recognition by a Grid-Connected Robot Dog
Results on DAS-2 (Vrije Universiteit)
[Charts: client-side speedup versus Nr. of CPUs, compared to linear — single cluster (up to 64 CPUs) and four clusters (up to 96 CPUs)]
• Recognition on a single machine: +/- 30 seconds
• Using multiple clusters: up to 10 frames per second
• Insightful: even ‘distant’ clusters are effective
Part 8: Future Research Directions (new projects?)
Potential Future Research Projects (1)
• Applicability of graphics processors (GPUs) and other accelerators
– NVIDIA, CELL Broadband Engine, FPGAs:
can we make these ‘easily’ programmable?
• Parallel-Horus in Java & Ibis
– Getting rid of C++ / MPI:
can we make this efficient?
• Applicability of the Satin programming model in MMCA
– Fault-tolerance, malleability, migration:
is this at all possible / efficient?
Potential Future Research Projects (2)
• Large-scale distributed data-management for MMCA
– Get the data where the computing is:
can we adhere to strict time-constraints?
• New MMCA algorithms
– Time-dependent, intermediate level:
how to integrate these into Parallel-Horus?
• Applications
– Many, on any type of HPC hardware:
maybe you have a nice application yourself?
THE END
Appendix: Intermediate Level MMCA Algorithms
Intermediate Level Algorithms
• Feature Vector
• A labeled sequence of (scalar) values
• Each (scalar) value represents an image-data-related property
• Label: from user annotation or from automatic clustering
• Example:
[Figure: a histogram, e.g. 1 2 3 4 5 6 6 6 6 5 4 3 3, labeled “FIRE”]
– A histogram can be approximated by a mathematical function, e.g. a Weibull distribution, with only 2 parameters ‘ß’ and ‘γ’
– So: <FIRE, ß=0.93, γ=0.13>
Annotation (low level)
• Annotation for low level ‘visual words’
[Figure: image regions labeled Sky, Sky, Sky, USA Flag, Road]
– Define N low level visual concepts
– Assign concepts to image regions
– For each region, calculate a feature vector:
• <SKY, ß=0.93, γ=0.13, … >
• <SKY, ß=0.91, γ=0.15, … >
• <SKY, ß=0.97, γ=0.12, … >
• <ROAD, ß=0.89, γ=0.09, … >
• <USA FLAG, ß=0.99, γ=0.14, … >
• N human-defined ‘visual words’, each having multiple descriptions
Alternative: Clustering
• Example:
– Split image in X regions, and obtain feature vector for each
– All feature vectors have position in high-dimensional space
– Clustering algorithm applied to obtain N clusters
– => N non-human ‘visual words’, each with multiple descriptions
Feature Vectors for Full Images
(1) Partition the image in regions
(2) Compute the similarity between each image region and each low level visual word (e.g. Sky, Grass, Road)
(3) Count the number of region-matches with each visual word
e.g.: 3 x ‘Sky’; 7 x ‘Grass’; 4 x ‘Road’; …
=> This defines an accumulated feature vector for a full image
Annotation (high level)
• Annotation for high level ‘semantic concepts’
– Define M high level visual concepts, e.g.:
• ‘sports event’
• ‘outdoors’
• ‘airplane’
• ‘president Bush’
• ‘traffic situation’
• ‘human interaction’, …
– For all images in a known (training) set, assign all appropriate high level concepts
‘Recognition’ by Classification
• The new, accumulated feature vectors again define positions in a high-dimensional space
• Classification defines a separation boundary in that space, given the known high-level concepts
[Figure: feature space separated into ‘sports event’ and NOT ‘sports event’]
• ‘Recognition’ of a new image:
– position the new accumulated feature vector
– see on which side of the boundary it is
– the distance to the boundary defines a probability (so we can provide ranked results)
Ok, this is easy… or…?
• Results of FabChannel data analysis
– Live concert videos
– Allows for ‘browsing’ / user interaction
< a little demo (maybe?) >