Clustering (Segmentation)
(based on the slides by Han)
Unsupervised Learning:
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Summary
What is Cluster Analysis?
Cluster: a collection of data objects
Similar to one another within the same cluster
Dissimilar to the objects in other clusters
Cluster analysis
Grouping a set of data objects into clusters
Clustering is unsupervised classification: no
predefined classes
Typical applications
As a stand-alone tool to get insight into data distribution
As a preprocessing step for other algorithms
General Applications of Clustering
Pattern Recognition
Spatial Data Analysis
create thematic maps in GIS by clustering feature spaces
detect spatial clusters and explain them in spatial data mining
Image Processing
Economic Science (especially market research)
WWW
Document classification
Cluster Weblog data to discover groups of similar access
patterns
Examples of Clustering Applications
Marketing: Help marketers discover distinct groups in their
customer bases, and then use this knowledge to develop
targeted marketing programs
Land use: Identification of areas of similar land use in an
earth observation database
Insurance: Identifying groups of motor insurance policy
holders with a high average claim cost
City-planning: Identifying groups of houses according to
their house type, value, and geographical location
Earthquake studies: Observed earthquake epicenters
should be clustered along continent faults
What Is Good Clustering?
A good clustering method will produce high quality
clusters with
high intra-class similarity
low inter-class similarity
The quality of a clustering result depends on both the
similarity measure used by the method and its
implementation.
The quality of a clustering method is also measured by its
ability to discover some or all of the hidden patterns.
Requirements of Clustering
in Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to
determine input parameters
Able to deal with noise and outliers
Insensitive to order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability
Data Structures
Data matrix (two modes):
$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
Dissimilarity matrix (one mode):
$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
Measure the Quality of Clustering
Dissimilarity/Similarity metric: Similarity is expressed in
terms of a distance function d(i, j), which is typically a metric
There is a separate “quality” function that measures the
“goodness” of a cluster.
The definitions of distance functions are usually very
different for interval-scaled, boolean, categorical, ordinal and
ratio variables.
Weights should be associated with different variables based
on applications and data semantics.
It is hard to define “similar enough” or “good enough”
the answer is typically highly subjective.
Types of data in clustering analysis
Interval-scaled variables:
Binary variables:
Nominal, ordinal, and ratio variables:
Variables of mixed types:
Interval-valued variables
Standardize data
Calculate the mean absolute deviation:
$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$
where
$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$
Calculate the standardized measurement (z-score):
$$z_{if} = \frac{x_{if} - m_f}{s_f}$$
Using mean absolute deviation is more robust
than using standard deviation
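As a minimal illustration (not from the original slides), this standardization can be sketched in Python with NumPy; the helper name `standardize` is hypothetical:

```python
import numpy as np

def standardize(x):
    """Standardize one interval-scaled variable (1-D array) using the
    mean absolute deviation s_f instead of the standard deviation."""
    m_f = x.mean()                    # mean m_f of the variable
    s_f = np.mean(np.abs(x - m_f))    # mean absolute deviation s_f
    return (x - m_f) / s_f            # z-score z_if = (x_if - m_f) / s_f

# Example: a single income-like attribute with one extreme value
z = standardize(np.array([28.0, 31.5, 30.2, 95.0]))
```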
Similarity and Dissimilarity Between Objects
Distances are normally used to measure the
similarity or dissimilarity between two data objects
Some popular ones include the Minkowski distance:
$$d(i, j) = \sqrt[q]{\,|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q\,}$$
where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are
two p-dimensional data objects, and q is a positive integer
If q = 1, d is the Manhattan distance:
$$d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$$
Similarity and Dissimilarity Between Objects (Cont.)
If q = 2, d is the Euclidean distance:
$$d(i, j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$$
Properties:
d(i, j) ≥ 0
d(i, i) = 0
d(i, j) = d(j, i)
d(i, j) ≤ d(i, k) + d(k, j)
Also, one can use weighted distance, parametric Pearson
product-moment correlation, or other dissimilarity measures
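A small hedged sketch of the Minkowski family in Python (NumPy assumed; the function name `minkowski` is illustrative). Setting q = 1 gives the Manhattan distance and q = 2 the Euclidean distance:

```python
import numpy as np

def minkowski(i, j, q=2):
    """Minkowski distance between two p-dimensional data objects i and j."""
    i, j = np.asarray(i, float), np.asarray(j, float)
    return np.sum(np.abs(i - j) ** q) ** (1.0 / q)

a, b = [1, 2, 3], [4, 0, 3]
print(minkowski(a, b, q=1))   # Manhattan distance: 5.0
print(minkowski(a, b, q=2))   # Euclidean distance: ~3.606
```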
Partitioning Algorithms: Basic Concept
Partitioning method: Construct a partition of a
database D of n objects into a set of k clusters
Given a k, find a partition of k clusters that
optimizes the chosen partitioning criterion
Global optimal: exhaustively enumerate all partitions
Heuristic methods: k-means and k-medoids algorithms
k-means (MacQueen’67): Each cluster is represented
by the center of the cluster
k-medoids or PAM (Partitioning Around Medoids)
(Kaufman & Rousseeuw’87): Each cluster is
represented by one of the objects in the cluster
The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four steps:
1. Partition objects into k nonempty subsets
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster)
3. Assign each object to the cluster with the nearest seed point
4. Go back to Step 2; stop when there are no more new assignments
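These four steps translate into a short sketch like the following (a naive Python/NumPy illustration, not a reference implementation; it assumes Euclidean distance and does not guard against empty clusters):

```python
import numpy as np

def k_means(X, k, n_iter=100, seed=0):
    """Naive k-means: X is an (n, p) array of objects, k the number of clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]        # step 1: initial seeds
    for _ in range(n_iter):
        # step 3: assign each object to the cluster with the nearest seed point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 2: recompute centroids as the mean point of each cluster
        new_centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
        if np.allclose(new_centroids, centroids):               # step 4: stop when unchanged
            break
        centroids = new_centroids
    return labels, centroids
```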
The K-Means Clustering Method
Example (K = 2)
[Figure: arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign objects and update the means again, until assignments stop changing.]
Comments on the K-Means Method
Strength: Relatively efficient: O(tkn), where n is # objects, k is #
clusters, and t is # iterations. Normally, k, t << n.
Comparing: PAM: O(k(n-k)²), CLARA: O(ks² + k(n-k))
Comment: Often terminates at a local optimum. The global
optimum may be found using techniques such as deterministic
annealing and genetic algorithms
Weakness
Applicable only when the mean is defined; what about categorical data?
Need to specify k, the number of clusters, in advance
Unable to handle noisy data and outliers
Not suitable for discovering clusters with non-convex shapes
Variations of the K-Means Method
A few variants of k-means differ in
Selection of the initial k means
Dissimilarity calculations
Strategies to calculate cluster means
Handling categorical data: k-modes (Huang’98)
Replacing means of clusters with modes
Using new dissimilarity measures to deal with categorical objects
Using a frequency-based method to update modes of clusters
A mixture of categorical and numerical data: k-prototype method
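As a rough illustration of the k-modes ingredients above, a simple matching dissimilarity for categorical objects and a frequency-based mode update might look like this (helper names are hypothetical):

```python
from collections import Counter

def matching_dissimilarity(x, y):
    """Number of attributes on which two categorical objects disagree."""
    return sum(a != b for a, b in zip(x, y))

def cluster_mode(objects):
    """Mode of a cluster: the most frequent category for each attribute."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*objects))

# Example
cluster = [("red", "S"), ("red", "M"), ("blue", "M")]
print(cluster_mode(cluster))                                        # ('red', 'M')
print(matching_dissimilarity(("red", "L"), cluster_mode(cluster)))  # 1
```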
What is the problem with the k-Means method?
The k-means algorithm is sensitive to outliers, since an object with
an extremely large value may substantially distort the distribution of the data.
K-medoids: Instead of taking the mean value of the objects in a cluster
as a reference point, a medoid can be used, which is the most centrally
located object in a cluster.
The K-Medoids Clustering Method
Find representative objects, called medoids, in
clusters
PAM (Partitioning Around Medoids, 1987)
starts from an initial set of medoids and iteratively replaces
one of the medoids by one of the non-medoids if it
improves the total distance of the resulting clustering
PAM works effectively for small data sets, but does not
scale well for large data sets
Typical k-medoids algorithm (PAM)
[Figure, K = 2: arbitrarily choose k objects as initial medoids (total cost = 20); assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping (total cost = 26); swap O and O_random if the quality is improved; loop until no change.]
PAM (Partitioning Around Medoids) (1987)
Use real objects to represent the clusters:
1. Select k representative objects arbitrarily
2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih
3. For each pair of i and h:
• If TC_ih < 0, i is replaced by h
• Then assign each non-selected object to the most similar representative object
4. Repeat steps 2-3 until there is no change
Cost function for k-medoids
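A hedged Python sketch of the quantities PAM optimizes: the total cost of a medoid configuration (sum of distances from each object to its nearest medoid) and the swapping cost TC_ih. Names are illustrative and Euclidean distance is assumed:

```python
import numpy as np

def total_cost(X, medoid_idx):
    """Sum of distances from every object to its nearest medoid."""
    d = np.linalg.norm(X[:, None, :] - X[medoid_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def swap_cost(X, medoids, i, h):
    """TC_ih: change in total cost when medoid i is replaced by non-medoid h."""
    candidate = [h if m == i else m for m in medoids]
    return total_cost(X, candidate) - total_cost(X, medoids)

# PAM accepts the swap whenever TC_ih < 0, i.e. the clustering improves.
```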
PAM Clustering
[Figure: the four cases for the reassignment cost C_jih of a non-selected object j when medoid i is considered for a swap with non-medoid h (t denotes another current medoid):
C_jih = 0
C_jih = d(j, h) - d(j, i)
C_jih = d(j, t) - d(j, i)
C_jih = d(j, h) - d(j, t)]
What is the problem with PAM?
PAM is more robust than k-means in the presence of noise and
outliers, because a medoid is less influenced by outliers or other
extreme values than a mean
PAM works efficiently for small data sets but does not scale well
for large data sets: O(k(n-k)²) for each iteration, where n is # of
data points and k is # of clusters
Summary
Cluster analysis groups objects based on their similarity
and has wide applications
Measure of similarity can be computed for various types
of data
Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
There are still lots of research issues on cluster analysis,
such as constraint-based clustering
Other Classification Methods
k-nearest neighbor classifier
case-based reasoning
Genetic algorithm
Rough set approach
Fuzzy set approaches
Instance-Based Methods
Instance-based learning:
Store training examples and delay the processing
(“lazy evaluation”) until a new instance must be
classified
Typical approaches
k-nearest neighbor approach
Instances represented as points in a Euclidean
space.
Locally weighted regression
Constructs local approximation
Case-based reasoning
Uses symbolic representations and knowledge-based inference
The k-Nearest Neighbor Algorithm
All instances correspond to points in the
n-dimensional space.
The nearest neighbors are defined in
terms of Euclidean distance.
The target function could be discrete- or
real- valued.
The k-Nearest Neighbor Algorithm
For discrete-valued target functions, k-NN returns the most
common value among the k training examples nearest to x_q.
Voronoi diagram: the decision surface induced by 1-NN for a
typical set of training examples.
[Figure: query point x_q surrounded by + and − training examples, with the Voronoi cells induced by 1-NN.]
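A minimal majority-vote k-NN classifier for discrete-valued targets, sketched in Python (Euclidean distance, NumPy assumed; names are illustrative):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_q, k=3):
    """Return the most common class among the k training examples nearest to x_q."""
    dists = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x_q, float), axis=1)
    nearest = np.argsort(dists)[:k]                      # indices of the k nearest neighbors
    return Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0]

# Example
X = [[0, 0], [1, 0], [5, 5], [6, 5]]
y = ["-", "-", "+", "+"]
print(knn_classify(X, y, [0.5, 0.2], k=3))   # "-"
```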
Discussion (1)
The k-NN algorithm for continuous-valued target
functions
Calculate the mean values of the k nearest neighbors
Distance-weighted nearest neighbor algorithm
Weight the contribution of each of the k neighbors
according to their distance to the query point x_q, giving
greater weight to closer neighbors:
$$w \equiv \frac{1}{d(x_q, x_i)^2}$$
Similarly, for real-valued target functions
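A sketch of the distance-weighted variant for a real-valued target, using the weight w = 1/d(x_q, x_i)² above (names are illustrative; the small epsilon guarding against zero distance is an added assumption):

```python
import numpy as np

def knn_regress_weighted(X_train, y_train, x_q, k=3, eps=1e-12):
    """Distance-weighted k-NN prediction for a real-valued target."""
    X, y = np.asarray(X_train, float), np.asarray(y_train, float)
    dists = np.linalg.norm(X - np.asarray(x_q, float), axis=1)
    nearest = np.argsort(dists)[:k]
    w = 1.0 / (dists[nearest] ** 2 + eps)      # closer neighbors get larger weights
    return np.sum(w * y[nearest]) / np.sum(w)  # weighted average of neighbor targets
```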
Discussion (2)
Robust to noisy data by averaging k-nearest
neighbors
Curse of dimensionality: distance between
neighbors could be dominated by irrelevant
attributes.
To overcome it, stretch the axes or eliminate the least
relevant attributes.
Bibliography
Data Mining: Concepts and Techniques, J. Han & M. Kamber,
Morgan Kaufmann, 2001 (Sect. 7.7.1 and Chap. 8)