Object-oriented classification
Lecture 11
Source: http://usda-ars.nmsu.edu/PDF%20files/laliberteAerialPhotos.pdf
Why?

Per-pixel classification

• Based only on pixel values (spectral values)
• Ignores spatial autocorrelation
• One-to-many problem (one pixel value can be similar to many classes)
• Salt-and-pepper effect

A crucial drawback of these per-pixel classification methods is that while the information content of the imagery increases with spatial resolution, the accuracy of land-use classification may decrease. This is due to the increase in within-class variability inherent in more detailed, higher spatial resolution data.
Object-oriented classification

• Uses spatial autocorrelation (to grow homogeneous regions, or regions with specified amounts of heterogeneity)
• Uses not only pixel values but also spatial measurements that characterize the shape of the region
• Divides the image into segments or regions based on spectral and shape similarity or dissimilarity, i.e., moves from the image pixel level to the image object level
• Once training objects are selected, several methods can be used to assign all objects to classes, such as nearest neighbor, membership functions (fuzzy classification logic), or knowledge-based approaches
• The classification process is fast because objects, not individual pixels, are assigned to specific classes
• Primarily used for high spatial resolution image classification
1. Image segmentation

• Image segmentation is the partitioning of an image into constituent parts using image attributes such as pixel intensity, spectral values, and/or textural properties. Image segmentation produces an image representation in terms of edges and regions of various shapes and interrelationships.
• Segmentation algorithms are based on region growing/merging, simulated annealing, boundary detection, probability-based image segmentation, the fractal net evolution approach (FNEA), and more.
• In region growing/merging, neighboring pixels or small segments that have similar spectral properties are assumed to belong to the same larger segment and are therefore merged.
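The region growing/merging idea above can be sketched in a few lines. This is an illustrative toy, not the algorithm used by any particular software: starting from a seed pixel, 4-connected neighbors are merged whenever their value is within a tolerance of the growing region's mean. The image values and tolerance are made up for the example.

```python
# Minimal region-growing sketch (illustrative only): merge 4-connected
# neighbors whose values are within `tol` of the region's running mean.
from collections import deque

def grow_region(image, seed, tol=10.0):
    """Return the set of (row, col) pixels merged into the seed's region."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = float(image[seed[0]][seed[1]])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= tol:  # spectrally similar
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy single-band image: low values on the left, high values on the right.
img = [
    [10, 11, 50, 52],
    [12, 10, 51, 53],
    [11, 12, 49, 50],
]
left = grow_region(img, (0, 0), tol=5)
print(sorted(left))  # the six low-value pixels on the left
```

Growing from the top-left seed collects only the spectrally similar left half of the image; the high-value right half would seed a separate object.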
Software
http://www.ecognition.com/products
Criteria for segmentation

• The scale parameter is an abstract value that determines the maximum possible change of heterogeneity caused by fusing several objects.
  • The scale parameter is indirectly related to the size of the created objects. The heterogeneity allowed at a given scale parameter depends linearly on object size, so homogeneous areas result in larger objects and heterogeneous areas result in smaller objects.
  • A small scale number results in small objects; a larger scale number results in larger objects. This is the basis of multiresolution image segmentation.
• Color is the pixel value.
• Shape includes compactness and smoothness, two geometric features that can be used as "evidence."
  • Smoothness describes the similarity between the image object's borders and a perfect square.
  • Compactness describes the "closeness" of pixels clustered in an object by comparing it to a circle.
Pixel neighborhood function
One criterion used to segment a remotely sensed image into image objects is a pixel neighborhood function, which compares an image object being grown with adjacent pixels. The information is used to determine whether an adjacent pixel should be merged with the existing image object or become part of a new image object. a) If a plane 4-neighborhood function is selected, two image objects are created when the pixels under investigation are not connected at their plane borders. b) Pixels and objects are defined as neighbors in a diagonal 8-neighborhood if they are connected at a plane border or at a corner point.
• Diagonal neighborhood mode should only be used if the structures of interest are of a scale comparable to the pixel size, e.g., road extraction from coarse-resolution imagery.
• In all other cases, plane neighborhood mode is the appropriate choice.
• The neighborhood mode should be decided before the first segmentation.
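The difference between the two neighborhood modes can be demonstrated with a small connected-components sketch (a toy illustration, not any particular package's implementation): two pixels touching only at a corner form one object under the diagonal (8-)neighborhood but two objects under the plane (4-)neighborhood.

```python
# Sketch of 4- vs 8-neighborhood connectivity: counts connected components
# of foreground (1) pixels in a binary mask.
def count_objects(mask, diagonal=False):
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # plane (4) neighbors
    if diagonal:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # add corners (8)
    rows, cols = len(mask), len(mask[0])
    seen, count = set(), 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                count += 1
                stack = [(r, c)]
                seen.add((r, c))
                while stack:  # flood fill one component
                    cr, cc = stack.pop()
                    for dr, dc in offsets:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
    return count

mask = [[1, 0],
        [0, 1]]  # two pixels that share only a corner
print(count_objects(mask, diagonal=False))  # 2 objects (plane mode)
print(count_objects(mask, diagonal=True))   # 1 object (diagonal mode)
```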
Color and shape
These two criteria are used to create image objects (patches) of
relatively homogeneous pixels in the remote sensing dataset
using the general segmentation function (Sf):
Sf = wcolor · hcolor + (1 − wcolor) · hshape

where the user-defined weight for spectral color versus shape is 0 < wcolor < 1.
If the user wants to place greater emphasis on the spectral
(color) characteristics in the creation of homogeneous objects
(patches) in the dataset, then wcolor is weighted more heavily
(e.g., wcolor = 0.8). Conversely, if the spatial characteristics of
the dataset are believed to be more important in the creation of
the homogeneous patches, then shape should be weighted more
heavily.
Spectral (i.e., color) heterogeneity (h) of an image object is
computed as the sum of the standard deviations of spectral
values of each layer (sk) (i.e., band) multiplied by the
weights for each layer (wk):
h = Σk=1…m wk · sk

Usually equal weights are used for all bands, unless a certain band is known to be particularly important.
So the color criterion is computed as the weighted mean of all changes in standard deviation for each band k of the m bands of the remote sensing dataset. The standard deviations sk are weighted by the object sizes nob (i.e., the number of pixels) (Definiens, 2003):

hcolor = Σk=1…m wk · [nmg · sk(mg) − (nob1 · sk(ob1) + nob2 · sk(ob2))]

where mg means merge (nmg is the total number of pixels in objects 1 and 2 combined).
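The color-heterogeneity change for a candidate merge can be computed directly from the formula above. The helper name and the toy pixel values are invented for the sketch; per-band standard deviations use the population standard deviation, as implied by the definition of sk.

```python
# Sketch of the color-heterogeneity change for a candidate merge:
# hcolor = sum_k wk * (nmg*sk(mg) - (nob1*sk(ob1) + nob2*sk(ob2)))
from statistics import pstdev

def h_color(obj1, obj2, weights):
    """obj1/obj2: lists of per-band pixel-value lists, one entry per band k."""
    n1, n2 = len(obj1[0]), len(obj2[0])
    total = 0.0
    for k, w in enumerate(weights):
        merged = obj1[k] + obj2[k]
        s_mg, s1, s2 = pstdev(merged), pstdev(obj1[k]), pstdev(obj2[k])
        total += w * ((n1 + n2) * s_mg - (n1 * s1 + n2 * s2))
    return total

# Two single-band toy objects: merging spectrally similar objects adds
# little heterogeneity; merging dissimilar ones adds much more.
similar = h_color([[10, 11, 12]], [[11, 12, 13]], weights=[1.0])
dissimilar = h_color([[10, 11, 12]], [[90, 91, 92]], weights=[1.0])
print(similar < dissimilar)  # True
```

A segmentation would allow the first merge and reject the second once the heterogeneity change exceeds the scale parameter.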
Compactness:

cpt = l / √n

Smoothness:

smooth = l / b

where n is the number of pixels in the object, l is the perimeter, and b is the shortest possible border length of a box bounding the object.

The compactness weight makes it possible to separate objects that have quite different shapes but not necessarily much color contrast, such as clearcuts vs. bare patches within forested areas.
hcpt = nmg · lmg/√nmg − (nob1 · lob1/√nob1 + nob2 · lob2/√nob2)

hsmooth = nmg · lmg/bmg − (nob1 · lob1/bob1 + nob2 · lob2/bob2)

hshape = wcpt · hcpt + (1 − wcpt) · hsmooth
Classification based on image segmentation logic takes into account spatial and spectral characteristics (Jensen, 2005).
2. Classification

• Classification of image objects
• Based on fuzzy systems
• Nearest-neighbor
• Membership functions are used to determine whether an object belongs to a class. These membership functions are based on fuzzy logic: an object has a degree of membership in a class in the range 0 to 1, where 0 means the object absolutely DOES NOT belong to the class and 1 means it absolutely DOES belong to the class.
Nearest Neighbor

• Based on sample objects within a defined feature space, the distance to each feature space or to each sample object is calculated for each image object.
• This allows a very simple, rapid, yet powerful classification in which individual image objects are marked as typical representatives of a class (= training areas), and the rest of the scene is then classified accordingly ("click and classify"). Digitization of training areas is therefore no longer necessary.
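The nearest-neighbor idea above reduces to a distance lookup in feature space. The feature vectors (e.g., a vegetation index and a texture measure) and class names below are invented for illustration; a real object would carry many more features.

```python
# "Click and classify" sketch: each image object is a feature vector, and an
# unlabeled object gets the class of the nearest sample object (Euclidean
# distance in feature space).
from math import dist

samples = {  # hypothetical training objects: feature vector -> class
    (0.12, 30.0): "shrub",
    (0.15, 28.0): "shrub",
    (0.45, 5.0): "grass",
    (0.50, 6.0): "grass",
}

def classify(obj_features):
    """Return the nearest sample object's feature vector."""
    return min(samples, key=lambda s: dist(s, obj_features))

unknown = (0.47, 5.5)
print(samples[classify(unknown)])  # grass
```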
3. An example

[Map: study area showing the Jornada Experimental Range (JER) and the Chihuahuan Desert Rangeland Research Center (CDRRC) near Las Cruces, New Mexico, along I-25 and Hwy 70 in the northern Chihuahuan Desert]
Laliberte et al. 2004
Remote Sensing of Env.
Chihuahuan Desert Rangeland Research Center (CDRRC)
• Northern part of the Chihuahuan Desert
• Semidesert grassland
• Increase in shrubs, decrease in grasslands
• Honey mesquite (Prosopis glandulosa) is the main increaser
• 150 ha pasture
Workflow in eCognition

Image object hierarchy:
• Input images (pixel level) → multiresolution segmentation → Level 1, Level 2, Level 3 (with feedback)
• Classification: training samples with standard nearest neighbor (Level 2); membership functions with fuzzy logic (Level 1); feedback between levels
• Classification-based segmentation → final merged classification → creation of class hierarchy
Membership functions

• Classification using only 1 membership function:
  1) Mean value of objects (similar to thresholding)
  Dark background is misclassified as shrub.
• Classification using 3 membership functions:
  1) Mean value of objects
  2) Mean difference to neighbors
  3) Mean difference to super-object
  Shrubs can be differentiated on dark as well as light backgrounds.
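A membership function of the kind listed above can be as simple as a ramp that maps an object's mean value to a degree of membership in [0, 1]. The thresholds and the fuzzy-AND combination below are illustrative assumptions, not values from the lecture's example.

```python
# Sketch of a fuzzy membership function: a linear ramp mapping an object's
# mean value to a membership degree (0 = definitely not the class,
# 1 = definitely the class). Thresholds are hypothetical.
def membership(mean_value, low=50, high=100):
    """0 below `low`, 1 above `high`, linear in between."""
    if mean_value <= low:
        return 0.0
    if mean_value >= high:
        return 1.0
    return (mean_value - low) / (high - low)

# Combining several membership functions (e.g., mean value and mean
# difference to neighbors) with a fuzzy AND = minimum of the degrees.
degrees = [membership(75), membership(90, low=0, high=120)]
print(min(degrees))
```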
Image object hierarchy with 3 segmentation levels

• Original image: Quickbird panchromatic
• Level 1: scale 10
• Level 2: scale 100
• Level 3: scale 300
Level 1 classification: shrubs
Level 2 classification
[Fig. 6: Shrub/grass dynamics, percent cover vs. year (1940–2000): shrub cover increases while grass cover declines]
Conclusions

From 1937 to 2003:
• Shrub cover increased from 0.9% to 13.1%
• Grass cover decreased from 18.5% to 1.9%
• Vegetation dynamics were related to precipitation patterns (the 1951–1956 drought) and historical grazing pressures
• Image analysis underestimated shrub and grass cover
• 87% of shrubs >2 m² were detected
4. Combining images and other datasets for classification in eCognition

• For example, in urban areas, combining spectral imagery with elevation data (DEM) provides significant elevation information that can be used to outline objects' shapes.
Source: http://www.definiens-imaging.com/documents/an/tsukuba.pdf
Roof surface materials
Source: http://www.definiens-imaging.com/documents/publications/lemp-urs2005.pdf
Incorrectly classified: 4.6% (red)