Multidimensional Data Visualization


Transcript: Multidimensional Data Visualization

07.25.2001
Keyblock Approach: Metadata Generation and Retrieval of Geographic Imagery
Aidong Zhang
Associate Professor
Director, Multimedia and Database Laboratory
Computer Science and Engineering
University at Buffalo
Introduction
Observations:
USGS, NIMA, and NASA archive large repositories of remote-sensing data.
New Issues: problem of resource selection. Given
a query, where should a user start a search?
Our Approach:
Design a metaserver on top of various visual
databases.
Given a query, the metaserver first produces a ranking of the database sites and then distributes the queries to the selected databases.
Distributed System Architecture

[Architecture diagram: client browsers connect to the metaserver (our focus), which consists of a metasearch agent, a metadatabase, and a query manager; the metaserver dispatches queries to GIS databases at remote sites.]
Step 1

[Diagram: users at client browsers (client applications for visual display) send a query to the metaserver/DB, which returns a ranked DB list (1. GIS-SANF Server/DB, 2. GIS-1999 Server/DB, ..., 7. GIS-FLOR Server/DB); matching images are then retrieved from the selected local servers/DBs, e.g., GIS1998, GIS1999, GIS2000, GISWNY, GIS-SANF, GIS-FLOR, and GIS-FLOR2.]
Step 2

[Diagram: the metadatabase maintains a global view of the data sources, namely feature classes (texture, color, shape) and templates, linked to the database sites DB1, DB2, ..., DBn.]
Generating Templates
Images are clustered and the centroids of the
clusters are chosen as templates.
[Example templates: environment, residential, water, grass, agriculture.]
Metadatabase
 Templates of local databases are collected in the
metadatabase to represent the content of the databases
 Statistical data:
 We can measure the similarity of images in the databases to
the templates.
 Using these similarity measurements, statistical data are
computed that capture the likelihood of a database containing
data that are relevant to a template.
 The relevant databases for a given query can be selected by
determining the similarity of the query with metadatabase
templates and ranking the database sites based on the visual
relationships recorded between the databases and templates.
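The ranking step can be sketched in code. The following is a minimal illustration, not the actual metaserver implementation, assuming the metadatabase stores one feature vector per template and a statistics matrix giving, for each database and template, the likelihood that the database contains relevant data; the cosine similarity and all names here are illustrative assumptions.

```python
import numpy as np

def rank_databases(query_feat, template_feats, db_template_stats):
    """Rank database sites for one query.

    query_feat:        feature vector of the query image
    template_feats:    (num_templates, dim) matrix of metadatabase template features
    db_template_stats: (num_databases, num_templates) likelihoods that a database
                       contains data relevant to each template
    """
    # Similarity of the query to every metadatabase template (cosine, an assumed choice).
    sims = template_feats @ query_feat / (
        np.linalg.norm(template_feats, axis=1) * np.linalg.norm(query_feat) + 1e-12)
    # Score each database by its template statistics weighted by the query-template similarity.
    scores = db_template_stats @ sims
    return np.argsort(-scores)  # database sites, most relevant first
```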
Content-based Image
Retrieval (CBIR)
Allows retrieval based on various image contents such as color, texture, shape, etc.
Visual queries are submitted to the image database to find similar images.
Feature extraction is the basis of CBIR
Famous systems include QBIC,
VisualSeek, PhotoBook, etc.
Evaluation Measures
 Effectiveness of CBIR
$\text{precision} = \dfrac{|\text{set\_of\_retrieved} \cap \text{set\_of\_relevant}|}{|\text{set\_of\_retrieved}|}$

$\text{recall} = \dfrac{|\text{set\_of\_retrieved} \cap \text{set\_of\_relevant}|}{|\text{set\_of\_relevant}|}$

[Diagram: Venn diagram of the set of retrieved images and the set of relevant images.]
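As a small worked example, the two measures can be computed directly from the retrieved and relevant sets; the image ids below are made up for illustration.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of one query, given retrieved and relevant image ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 6 of the 10 returned images belong to the query's group of 8 relevant images.
p, r = precision_recall(range(10), [0, 2, 3, 5, 7, 9, 20, 21])
print(p, r)  # 0.6 0.75
```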
Multi-scale Feature
Representation
Multi-resolution wavelet representation of image:
[Figure: the original image and its wavelet representations at scale 1, scale 2, and scale 3.]
Keyblock Approach
Generalizing text retrieval techniques to image
retrieval
Text IR: use keywords to index and retrieve
What are the “keywords” of an image?
Region segments of images
Features of images
Objects of images
How to generate “keywords” of images?
Keyblocks: select centroids of clusters
Keyblock Generation
[Flow diagram: a training set is sampled from the image database and divided into training blocks; feature-based clustering (GLA, PNNA, etc.) of the training blocks produces the codebook; database images are encoded with the codebook and represented by features (BM, VM, HM, etc.); a query image goes through the same image encoding, and query and retrieval are performed by content-based image retrieval over these representations.]
Keyblock Generation
Various clustering algorithms can
be used.
On original space
partition/segment the images into smaller blocks,
and then select a subset of representative blocks.
On feature space
extract low-level feature vectors, such as color,
texture, and shape, from image segments/blocks,
and then select a subset of representative feature
vectors.
Unsupervised Keyblock Selection
Step 1: Initialization
Step 2: Clustering/Partition
Step 3: Recalculating Centroids
Step 4: Substituting Centroids and Reiterating
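A minimal k-means-style sketch of these four steps (the slides name GLA and PNNA as possible clustering algorithms; this follows the generic Lloyd iteration). The array of training-block feature vectors and the codebook size are assumed inputs, not values from the talk.

```python
import numpy as np

def generate_keyblocks(blocks, n_keyblocks, n_iters=20, seed=0):
    """blocks: (num_blocks, dim) array of training-block features."""
    rng = np.random.default_rng(seed)
    # Step 1: Initialization -- pick n_keyblocks training blocks as starting centroids.
    centroids = blocks[rng.choice(len(blocks), n_keyblocks, replace=False)]
    for _ in range(n_iters):
        # Step 2: Clustering/Partition -- assign each block to its nearest centroid.
        dists = np.linalg.norm(blocks[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: Recalculating Centroids of the new partition.
        new_centroids = np.array([
            blocks[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(n_keyblocks)])
        # Step 4: Substituting Centroids and Reiterating until they stop moving.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids  # the codebook of keyblocks
```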
Knowledge-based Keyblock Generation
[Flow diagram: Stage I: keyblock generation is run separately on the training images of each class (forest, water), producing a forest codebook and a water codebook; Stage II: the codebooks are merged; Stage III: LVQ-based fine tuning.]
Image Encoding
For each image in the database, decompose it
into blocks.
Then, for each block, find the closest entry in
the codebook and store the index
correspondingly.
Now each image is a matrix of indices, which can be regarded as 1-dimensional in scan order. This is very similar to a text document, which is treated as a linear list of keywords in text-based IR.
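A minimal sketch of this encoding step, assuming grayscale images and a codebook whose rows are flattened keyblocks; the block size is an illustrative parameter, not a value from the talk.

```python
import numpy as np

def encode_image(image, codebook, block=4):
    """image: 2-D array; codebook: (N, block*block) array of flattened keyblocks."""
    h, w = image.shape[0] // block, image.shape[1] // block
    indices = np.empty((h, w), dtype=np.int32)
    for i in range(h):
        for j in range(w):
            patch = image[i*block:(i+1)*block, j*block:(j+1)*block].ravel()
            # Table lookup: index of the closest keyblock in the codebook.
            indices[i, j] = np.linalg.norm(codebook - patch, axis=1).argmin()
    return indices  # matrix of keyblock indices; .ravel() gives the 1-D scan order
```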
Codebook (a list of keyblocks)

[Figure: the original image is segmented into blocks; block encoding is a table lookup against the codebook entries (0, 1, 2, ..., 21, ...), which turns the segmented image into an encoded image of keyblock indices (e.g., rows such as 18 16 16 16 / 15 15 18 18 / 16 16 18 18 / 19 19 19 16); image decoding maps the indices back to keyblocks to give the reconstructed image.]
[Figure: a raw image and the reconstructed images with different codebooks.]
Image Feature
Representation and Retrieval
Main components:
$D = \{I_1, \ldots, I_j, \ldots, I_M\}$ : the list of encoded images.
$C = \{c_1, \ldots, c_i, \ldots, c_N\}$ : the list of keyblocks.
$(f, s)$ : the CBIR model, where $f$ is the feature extraction mapping which generates the feature vector for each image, and $s$ is the similarity measure between feature vectors, used to generate the ranking in the retrieval stage.
$Q$ : the set of visual queries.
Single-block Models
Boolean Model and Vector Model are
widely used in IR
adopt keywords to index and retrieve documents;
assume that both documents in the database and
queries can be described by a set of mutually
independent keywords.
 Similar image feature representation
models can be designed
use keyblocks instead of keywords for images;
individual keyblock's appearance in images is
important information.
Boolean Model
BM considers whether or not a keyblock appears.
$w_{ij} = 1$ if $f_{ij} \ge T$, and 0 otherwise,
where $f_{ij}$ is the frequency of keyblock $c_i$ appearing in image $I_j$, and $T$ is a threshold.
The feature vectors of $I_j$ and $q$ can be considered as strings of length $N$ where the $i$-th bit indicates whether or not $c_i$ appears.
$S_{BM}(q, I_j) = n_{11} \cdot w_{11} + n_{00} \cdot w_{00}$
$n_{11}$ is the number of bits at which both $I_j$ and $q$ are 1;
$n_{00}$ is the number of bits at which both $I_j$ and $q$ are 0;
$w_{11}$ and $w_{00}$ are the weights assigned to $n_{11}$ and $n_{00}$, respectively.
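A minimal sketch of the Boolean model, taking keyblock frequency vectors of the query and an image as input; the threshold T and the weights w11 and w00 are assumed example values.

```python
import numpy as np

def bm_similarity(f_query, f_image, T=1, w11=1.0, w00=0.1):
    """Boolean-model similarity between a query and an image, from keyblock frequencies."""
    q = np.asarray(f_query) >= T   # bit vector of the query
    d = np.asarray(f_image) >= T   # bit vector of the image
    n11 = np.sum(q & d)            # keyblocks appearing in both
    n00 = np.sum(~q & ~d)          # keyblocks appearing in neither
    return n11 * w11 + n00 * w00
```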
Vector Model
Normalized frequency: $f^*_{ij} = \dfrac{f_{ij}}{\max_{1 \le l \le N} f_{lj}}$
Inverse image frequency for keyblock $c_i$: $idf_i = \log(M / M_i)$, where $M_i$ is the number of images containing $c_i$.
Keyblock weights: $w_{ij} = f^*_{ij} \cdot idf_i$
The similarity measure is the normalized inner product of $I_j$ and $q$:
$S_{VM}(q, I_j) = \dfrac{\sum_{i=1}^{N} w_{ij} \cdot w_i(q)}{\sqrt{\sum_{i=1}^{N} w_{ij}^2} \cdot \sqrt{\sum_{i=1}^{N} w_i(q)^2}}$
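A minimal sketch of the vector model: tf-idf style keyblock weights plus the normalized inner product. The frequency matrix layout (one row per encoded image) is an assumption made for the example.

```python
import numpy as np

def vm_weights(freqs):
    """freqs: (M, N) matrix of keyblock frequencies, one row per encoded image."""
    M, N = freqs.shape
    f_star = freqs / np.maximum(freqs.max(axis=1, keepdims=True), 1)  # normalized frequency
    M_i = np.count_nonzero(freqs, axis=0)                             # images containing keyblock c_i
    idf = np.log(M / np.maximum(M_i, 1))                              # inverse image frequency
    return f_star * idf

def vm_similarity(w_query, w_image):
    """Normalized inner product of the two weight vectors."""
    denom = np.linalg.norm(w_query) * np.linalg.norm(w_image)
    return float(w_query @ w_image) / denom if denom else 0.0
```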
Histogram Model
HM can be regarded as a special case of VM where $w_{ij} = f_{ij}$.
The feature vectors of $I_j$ and $q$ are the keyblock histograms.
Similarity measure: $S_{HM}(q, I_j) = \dfrac{1}{1 + dis(q, I_j)}$
where $dis(q, I_j) = \sum_{i=1}^{N} \dfrac{|w_{ij} - w_i(q)|}{1 + w_{ij} + w_i(q)}$
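A minimal sketch of the histogram model with w_ij = f_ij, using the distance and similarity defined above.

```python
import numpy as np

def hm_similarity(hist_query, hist_image):
    """Histogram-model similarity between two keyblock histograms."""
    q = np.asarray(hist_query, dtype=float)
    d = np.asarray(hist_image, dtype=float)
    dis = np.sum(np.abs(d - q) / (1.0 + d + q))   # dis(q, I_j)
    return 1.0 / (1.0 + dis)                      # S_HM(q, I_j)
```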
N-block Models
The single-block models focus only on an individual keyblock's appearance; the correlation among keyblocks is not taken into account.
We propose N-block Models
the correlation of image blocks is the focus.
the probabilities of a subset of keyblocks
distributed according to certain spatial
configurations are used as feature vectors.
Bi-block Spatial Configurations

[Diagram: the four bi-block configurations of a keyblock pair (c_{k-1}, c_k): horizontal, vertical, main diagonal, and minor diagonal.]
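A minimal sketch of a bi-block feature built from these configurations: for each of the four directions, count how often each keyblock index pair (c_{k-1}, c_k) co-occurs in the encoded index matrix, then normalize the counts into probabilities. The encoded matrix and codebook size N are assumed inputs.

```python
import numpy as np

def biblock_feature(indices, N):
    """indices: 2-D matrix of keyblock indices from image encoding; N: codebook size."""
    offsets = {"horizontal": (0, 1), "vertical": (1, 0),
               "diagonal_main": (1, 1), "diagonal_minor": (1, -1)}
    counts = np.zeros((len(offsets), N, N))
    h, w = indices.shape
    for c, (di, dj) in enumerate(offsets.values()):
        for i in range(h):
            for j in range(w):
                ii, jj = i + di, j + dj
                if 0 <= ii < h and 0 <= jj < w:
                    counts[c, indices[i, j], indices[ii, jj]] += 1
    total = counts.sum()
    return (counts / total).ravel() if total else counts.ravel()  # co-occurrence probabilities
```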
Tri-block Spatial Configurations

[Diagram: the horizontal, vertical, main-diagonal, and minor-diagonal configurations of a keyblock triple (c_{k-2}, c_{k-1}, c_k).]
Tri-block Spatial Configurations (continued)

[Diagram: triangular configurations 1-4 of a keyblock triple (c_{k-2}, c_{k-1}, c_k).]
Multi-modal Image
Retrieval
 The above models capture different image content under
various contexts.
 The single-block models only consider single keyblock's occurrence;
 The n-block models consider multiple keyblocks' co-occurrence.
If keyblocks of different sizes are used, image content at different granularities will be captured.
 Since each individual model can't satisfy all requirements of
image content extraction and retrieval, it is necessary to
combine them to improve the retrieval performance.
 Feature combination
 Result fusion
Keyblock-keyblock Correlation Matrix

Keyblock-keyblock correlation matrix $(K_{i,l})_{N \times N}$:
The rows and columns are associated with the keyblocks in the codebook $C$ ($|C| = N$).
Each entry $K_{i,l}$ is a normalized correlation factor between keyblocks $c_i$ and $c_l$:
$K_{i,l} = \dfrac{n_{i,l}}{n_i + n_l - n_{i,l}}$
where $n_i$ is the number of images which contain $c_i$, $n_l$ is the number of images which contain $c_l$, and $n_{i,l}$ is the number of images which contain both $c_i$ and $c_l$.
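A minimal sketch of computing this matrix from an assumed image-keyblock occurrence matrix (occurrence[j, i] is true when image I_j contains keyblock c_i).

```python
import numpy as np

def correlation_matrix(occurrence):
    """occurrence: (M, N) boolean matrix of which keyblocks appear in which images."""
    occ = occurrence.astype(float)
    n_i = occ.sum(axis=0)              # n_i: number of images containing c_i
    n_il = occ.T @ occ                 # n_{i,l}: images containing both c_i and c_l
    denom = n_i[:, None] + n_i[None, :] - n_il
    return np.divide(n_il, denom, out=np.zeros_like(n_il), where=denom > 0)
```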
keyblock-keyblock
correlation matrix
 We can use the keyblock-keyblock correlation
matrix to redefine the feature vector of the
histogram model
$w_{i,j} = f_{i,j} + f^*_{i,j}$, where $f^*_{i,j} = \sum_{c_l \in I_j,\; K_{i,l} \ge \delta} f_{l,j} \cdot K_{i,l}$
$f_{i,j}$ is the frequency of keyblock $c_i$ appearing in image $I_j$, and $f^*_{i,j}$ is the correlation weight calculated by combining the frequencies of $c_i$'s correlated keyblocks with their correlation factors.
$\delta$ is a threshold (usually $0.3 \le \delta \le 0.5$) to cut off the effects of the less correlated keyblocks.
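A minimal sketch of the redefined weights. Whether the sum over correlated keyblocks excludes c_i itself is not spelled out on the slide, so excluding it here is an assumption, as is the example value of delta (taken from the slide's 0.3 to 0.5 range). The condition that c_l appears in I_j is handled implicitly, since f_{l,j} is zero for absent keyblocks.

```python
import numpy as np

def correlated_weights(freqs_j, K, delta=0.4):
    """freqs_j: keyblock frequency vector of image I_j; K: (N, N) correlation matrix."""
    K_cut = np.where(K >= delta, K, 0.0)   # drop keyblocks correlated below the threshold delta
    np.fill_diagonal(K_cut, 0.0)           # assumption: sum only over the other keyblocks c_l != c_i
    f_star = K_cut @ freqs_j               # f*_{i,j} = sum of f_{l,j} * K_{i,l}
    return freqs_j + f_star                # w_{i,j} = f_{i,j} + f*_{i,j}
```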
Region-based Image
Retrieval
Keyblocks
 can be any image feature segments such as pixels, blocks and
regions, etc.
Regions
Are better "keywords" because they usually carry more semantic meaning and are closer to the objects.
Image segmentation is still a difficult problem. Segmentation algorithms inevitably make some mistakes, e.g., oversegmentation.
 How to effectively and efficiently extract region features?
 How to retrieve images based on region features and
corresponding region spatial constraints?
Region-based Image
Retrieval
Images are segmented into several regions;
Visual features are extracted for each
region;
The image content is represented by the set
of region features;
At the query time, the query image is
segmented into several regions. Then the
features of one or more regions are matched
against region features which represent
images in the database.
Integrate Regions into
Keyblock Framework
The keyblock framework is quite extensible: substitute blocks with regions throughout the framework.
Segmentation: Expectation-Maximization (EM)
proposed in the Blobworld system
iteratively models the joint distribution of color and texture with a
mixture of Gaussians
Region features
Color feature: color histogram of the pixels in the region, based on the original keyblock representation (1x1, 128);
Texture feature: the mean texture contrast and anisotropy of the
pixels in the region;
Normalized area feature: the number of pixels of a region divided
by the image size.
Integrate Regions into Keyblock Framework

Shape features: X-axis and Y-axis profiles (10-dimensional feature vector)
(1) Find the minimum bounding box B of the region;
(2) Equally subdivide B along both the X and Y axes into 5 intervals;
(3) For each cell (u,v) obtained from the above subdivision, calculate the percentage p(u,v) of the region that cell (u,v) contains;
(4) Define the profile of the region along the X-axis as a 5-element array x with the i-th element $x(i) = \sum_{v=1}^{5} p(i,v)$;
(5) Similarly define the profile of the region along the Y-axis as a 5-element array y with the j-th element $y(j) = \sum_{u=1}^{5} p(u,j)$.
Feature Combination Model

In the phase of feature extraction, for each image, combine feature vectors generated by different models into one comprehensive feature vector.
Feature vectors:
Model $\alpha$: $(\alpha_1, \ldots, \alpha_i, \ldots, \alpha_N)$
Model $\beta$: $(\beta_1, \ldots, \beta_i, \ldots, \beta_N)$
Combination model $\gamma$: $(\gamma_1, \ldots, \gamma_i, \ldots, \gamma_N)$
where $\gamma_i = (\alpha_i, \beta_i)$ or $\gamma_i = \alpha_i \cdot w_\alpha + \beta_i \cdot w_\beta$
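A minimal sketch of the two combination options above: concatenating the per-model feature vectors, or merging them with weights w_alpha and w_beta (assumed to be chosen externally).

```python
import numpy as np

def combine_features(feat_a, feat_b, w_a=0.5, w_b=0.5, concatenate=True):
    """Combine the feature vectors of two models into one comprehensive vector."""
    feat_a = np.asarray(feat_a, dtype=float)
    feat_b = np.asarray(feat_b, dtype=float)
    if concatenate:
        return np.concatenate([feat_a, feat_b])   # gamma_i = (alpha_i, beta_i)
    return feat_a * w_a + feat_b * w_b            # gamma_i = alpha_i*w_alpha + beta_i*w_beta
```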
Result Fusion Model
 In the phase of retrieval, for each image, combine
retrieval results under different models.
 <image, similarity> lists
Model $\alpha$: $\{\langle I_1, S^{\alpha}_1 \rangle, \ldots, \langle I_j, S^{\alpha}_j \rangle, \ldots, \langle I_M, S^{\alpha}_M \rangle\}$
Model $\beta$: $\{\langle I_1, S^{\beta}_1 \rangle, \ldots, \langle I_j, S^{\beta}_j \rangle, \ldots, \langle I_M, S^{\beta}_M \rangle\}$
Combination model $\gamma$: $\{\langle I_1, S^{\gamma}_1 \rangle, \ldots, \langle I_j, S^{\gamma}_j \rangle, \ldots, \langle I_M, S^{\gamma}_M \rangle\}$
where $S^{\gamma}_j = S^{\alpha}_j \cdot w_\alpha + S^{\beta}_j \cdot w_\beta$
Experiments on Test
Databases
CDB (web color images)
500 images, 41 groups, each group 10 or 20 images
41 training images are randomly selected
query set: whole database
color feature techniques: histogram and color coherence vector (CCV)
average precision and recall from 1 to 40 returned images are calculated.
TDB (Brodatz texture images)
2240 images, 112 groups, each group 20 images
112 training images are randomly selected
query set: whole database
texture feature techniques: Haar and Daubechies wavelets
average precision and recall from 1 to 40 returned images are calculated.
Experiments: comparison
with traditional techniques
Performance of N-block
Models
All three n-block models achieve higher performance than the traditional techniques, with the bi-block and uni-block models performing better on these two datasets.
Experiments on COREL
31646 color images
size 120x80 or 80x120
939 training images are randomly selected to get
keyblocks
query set
6895 query images, which are categorized into 82 groups.
average precision and recall from 1 to 100
returned images are calculated.
Experiments on COREL: Performance Comparison

The keyblock approach outperforms the traditional techniques.
Experiments on GEO
Database GEO
Airphoto images of the Buffalo area provided by NCGIA
at Buffalo
405 images
46 training images are used to get keyblocks
Query set
33 query images, which are 32 x 32 sub-images chosen from the images in the database by GIS experts from NCGIA at Buffalo.
These query images are divided into 5 categories:
agriculture, grass, forest, residential area, and water.
Experiments on GEO: comparison with wavelet transforms
An Example Query
Experiments for Region-based Image Retrieval
Data set with 1004 images (14 categories)
 Group A : images with distinctive objects. (have better segmentation
results)
 Group B : images without distinctive objects.
Currently the segmentation results are not
satisfactory due to the limitation of the algorithm
as well as the intrinsic difficulties of image
segmentation on natural images.
Since the segmentation result is critical, we expect that the query results for Group A would be better than those for Group B.
Region-based Image Retrieval
Conclusion
 We established a framework for browsing and
navigating geographic images
 We use effective metadata representation and
management for integration of multiple data
sources and provide efficient access to the data
sources.
We developed a wavelet-based approach and a keyblock-based approach to generalize text-based IR techniques to geographic image retrieval.
 Many remaining research issues.