Transcript: Lecture 3

CSci 8980: Data Mining (Fall 2002)
Vipin Kumar
Army High Performance Computing Research Center
Department of Computer Science
University of Minnesota
http://www.cs.umn.edu/~kumar
Sampling

Sampling is the main technique employed for data selection.
– It is often used for both the preliminary investigation of
the data and the final data analysis.

Statisticians sample because obtaining the entire set of data
of interest is too expensive or time consuming.

Sampling is used in data mining because it is too expensive
or time consuming to process all the data.
Sampling …

The key principle for effective sampling is the
following:
– Using a sample will work almost as well as using
the entire data set, if the sample is
representative.
– A sample is representative if it has
approximately the same property (of interest) as
the original set of data.
Types of Sampling

Simple Random Sampling
– There is an equal probability of selecting any particular
item.

Sampling without replacement
– As each item is selected, it is removed from the
population.

Sampling with replacement
– Objects are not removed from the population as they are
selected for the sample.

In sampling with replacement, the same object can be picked
more than once.
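A minimal sketch (my own, not from the lecture) of the two schemes, assuming numpy is available:

    import numpy as np

    rng = np.random.default_rng(0)
    population = np.arange(1000)          # stand-in for 1000 data objects

    # Sampling without replacement: each object can be selected at most once.
    sample_wo = rng.choice(population, size=100, replace=False)

    # Sampling with replacement: the same object can be picked more than once.
    sample_w = rng.choice(population, size=100, replace=True)

    print(len(set(sample_wo)))   # always 100
    print(len(set(sample_w)))    # usually fewer than 100, because of duplicates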
Sample Size
[Figure: the same data set shown with 8000 points, 2000 points, and 500 points.]
Sample Size

What sample size is necessary to get at least one
object from each of 10 groups?
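A rough way to answer this empirically (my own sketch; assumes 10 equally likely groups and uses numpy):

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_all_groups(sample_size, n_groups=10, trials=10000):
        """Estimate P(a random sample contains at least one object from every group)."""
        hits = 0
        for _ in range(trials):
            groups = rng.integers(0, n_groups, size=sample_size)
            if len(np.unique(groups)) == n_groups:
                hits += 1
        return hits / trials

    for size in (10, 20, 40, 60):
        print(size, prob_all_groups(size))

The probability climbs toward 1 only once the sample size is well past the number of groups.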
Discretization

Some techniques don’t use class labels.
[Figure: the same data discretized by equal interval width, equal frequency, and K-means.]
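A small sketch (my own; assumes numpy and 1-D data) of the first two unsupervised schemes above:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)                     # some 1-D data
    k = 4                                         # number of intervals

    # Equal interval width: k bins of equal width between min and max.
    width_edges = np.linspace(x.min(), x.max(), k + 1)
    width_bins = np.digitize(x, width_edges[1:-1])

    # Equal frequency: k bins, each holding roughly the same number of points.
    freq_edges = np.quantile(x, np.linspace(0, 1, k + 1))
    freq_bins = np.digitize(x, freq_edges[1:-1])

    print(np.bincount(width_bins))                # bin counts differ
    print(np.bincount(freq_bins))                 # bin counts are roughly equal

For K-means discretization, the bin boundaries would instead come from 1-D cluster centers.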
Discretization

Some techniques use class labels.

Entropy based approach
[Figure: entropy-based discretization with 3 categories for both x and y, and with 5 categories for both x and y.]
Aggregation

Combine two or more attributes (or objects) into a single attribute (or object).

More stable behavior – aggregated data tends to have less variability.
[Figure: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation.]
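A toy illustration (my own sketch with synthetic numbers, not the precipitation data behind the figure) of why aggregated values are more stable:

    import numpy as np

    rng = np.random.default_rng(0)

    # 30 years x 12 months of synthetic monthly precipitation values.
    monthly = rng.gamma(shape=2.0, scale=30.0, size=(30, 12))

    # Aggregate: average precipitation per year.
    yearly = monthly.mean(axis=1)

    print(monthly.std())   # variability of the monthly values
    print(yearly.std())    # noticeably smaller: averaging 12 months smooths the variation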
Dimensionality Reduction

Principal Components Analysis

Singular Value Decomposition

Curse of Dimensionality
Feature Subset Selection

Redundant features
– duplicate much or all of the information
contained in one or more other attributes, e.g.,
the purchase price of a product and the amount
of sales tax paid contain much the same
information.

Irrelevant features
– contain no information that is useful for the
data mining task at hand, e.g., students' ID
numbers should be irrelevant to the task of
predicting students' grade point averages.
Mapping Data to a New Space

Fourier transform

Wavelet transform
[Figure: two sine waves, the two sine waves plus noise, and their frequency-domain representation.]
Classification: Outline

Decision Tree Classifiers
– What are Decision Trees
– Tree Induction (ID3, C4.5, CART)
– Tree Pruning

Other Classifiers
– Memory Based
– Neural Net
– Bayesian
Classification: Definition

Given a collection of records (training set):
– Each record contains a set of attributes; one of the
attributes is the class.

Find a model for the class attribute as a function
of the values of the other attributes.

Goal: previously unseen records should be
assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the
model. Usually, the given data set is divided into
training and test sets, with the training set used to build
the model and the test set used to validate it.
Classification Example
Training Set:

  Tid  Refund  Marital Status  Taxable Income  Cheat
   1   Yes     Single          125K            No
   2   No      Married         100K            No
   3   No      Single          70K             No
   4   Yes     Married         120K            No
   5   No      Divorced        95K             Yes
   6   No      Married         60K             No
   7   Yes     Divorced        220K            No
   8   No      Single          85K             Yes
   9   No      Married         75K             No
  10   No      Single          90K             Yes

Test Set:

  Refund  Marital Status  Taxable Income  Cheat
  No      Single          75K             ?
  Yes     Married         50K             ?
  No      Married         150K            ?
  Yes     Divorced        90K             ?
  No      Single          40K             ?
  No      Married         80K             ?

Training Set → Learn Classifier → Model; the model is then applied to the Test Set.
Classification Techniques
 Decision Tree based Methods
 Rule-based Methods
 Memory based reasoning
 Neural Networks
 Genetic Algorithms
 Naïve Bayes and Bayesian Belief Networks
 Support Vector Machines
Decision Tree Based Classification

Decision tree models are well suited for data
mining:
– Inexpensive to construct
– Easy to interpret
– Easy to integrate with database systems
– Comparable or better accuracy in many
applications
Example Decision Tree
(The training data is the same 10-record Refund / Marital Status / Taxable Income / Cheat table shown earlier; the internal nodes of the tree are the splitting attributes.)

  Refund?
    Yes → NO
    No  → MarSt?
            Single, Divorced → TaxInc?
                                 < 80K → NO
                                 > 80K → YES
            Married          → NO

The splitting attribute at a node is determined based on the Gini index.
Another Example of Decision Tree
(Same training data as the previous slide.)

  MarSt?
    Married          → NO
    Single, Divorced → Refund?
                         Yes → NO
                         No  → TaxInc?
                                 < 80K → NO
                                 > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Algorithms

Many Algorithms:
– Hunt’s Algorithm (one of the earliest).
– CART
– ID3, C4.5
– SLIQ, SPRINT

General Structure:
– Tree Induction
– Tree Pruning
Hunt’s Method
An Example:
Attributes: Refund (Yes, No), Marital Status (Single, Married,
Divorced), Taxable Income (Continuous)
Class: Cheat, Don’t Cheat
[Figure: Hunt's method grows the tree in steps on this data —
  Step 1: split on Refund (Yes → Don't Cheat; No → Don't Cheat, still impure).
  Step 2: the No branch is split on Marital Status (Single, Divorced → Cheat, still impure; Married → Don't Cheat).
  Step 3: the Single/Divorced branch is split on Taxable Income (< 80K → Don't Cheat; >= 80K → Cheat).]
Tree Induction

Greedy strategy.
– Choose to split records based on an attribute
that optimizes the splitting criterion.

 Two phases at each node:
– Split determining phase:
   How to split a given attribute?
   Which attribute to split on? Use the splitting criterion.
– Splitting phase:
   Split the records into children.
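A bare-bones sketch of this greedy loop (my own simplification, not the lecture's algorithm): records are numeric feature vectors, splits are of the form feature < threshold, and node impurity is measured with the Gini index introduced later.

    from collections import Counter

    def gini(labels):
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def best_split(rows, labels):
        best = None                                   # (weighted gini, feature, threshold)
        for f in range(len(rows[0])):
            for t in sorted(set(r[f] for r in rows)):
                left = [y for r, y in zip(rows, labels) if r[f] < t]
                right = [y for r, y in zip(rows, labels) if r[f] >= t]
                if not left or not right:
                    continue
                score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
                if best is None or score < best[0]:
                    best = (score, f, t)
        return best

    def grow(rows, labels):
        split = best_split(rows, labels)
        if len(set(labels)) == 1 or split is None:
            return Counter(labels).most_common(1)[0][0]   # leaf: majority class
        _, f, t = split
        left = [(r, y) for r, y in zip(rows, labels) if r[f] < t]
        right = [(r, y) for r, y in zip(rows, labels) if r[f] >= t]
        return (f, t,
                grow([r for r, _ in left], [y for _, y in left]),
                grow([r for r, _ in right], [y for _, y in right]))

    # Tiny example: taxable income (in K) vs. cheat label.
    tree = grow([[125], [100], [70], [120], [95]], ["No", "No", "No", "No", "Yes"])
    print(tree)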
Splitting Based on Nominal Attributes

 Multi-way split: use as many partitions as distinct values; each partition has a subset of values signifying it.
     CarType → Family | Sports | Luxury

 Binary split: divides values into two subsets; need to find the optimal partitioning.
     CarType → {Sports, Luxury} | {Family}    OR    CarType → {Family, Luxury} | {Sports}
Splitting Based on Ordinal Attributes

 Multi-way split: use as many partitions as distinct values; each partition has a subset of values signifying it.
     Size → Small | Medium | Large

 Binary split: divides values into two subsets; need to find the optimal partitioning.
     Size → {Small, Medium} | {Large}    OR    Size → {Medium, Large} | {Small}

 What about this split?  Size → {Small, Large} | {Medium}
Splitting Based on Continuous Attributes

 Different ways of handling:
– Discretization to form an ordinal categorical attribute
    Static – discretize once at the beginning
    Dynamic – ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering.
– Binary decision: (A < v) or (A ≥ v)
    Consider all possible splits and find the best cut.
    Can be more compute intensive.
Splitting Criterion

Gini Index

Entropy and Information Gain

Misclassification error
Splitting Criterion: GINI

Gini Index for a given node t:

    GINI(t) = 1 - \sum_j [p(j|t)]^2

(NOTE: p(j|t) is the relative frequency of class j at node t.)
– Measures impurity of a node.
 Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying least interesting information.
 Minimum (0.0) when all records belong to one class, implying most interesting information.

  C1   0          C1   1          C1   2          C1   3
  C2   6          C2   5          C2   4          C2   3
  Gini = 0.000    Gini = 0.278    Gini = 0.444    Gini = 0.500
Examples for computing GINI
    GINI(t) = 1 - \sum_j [p(j|t)]^2

C1 = 0, C2 = 6:   P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                  Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

C1 = 1, C2 = 5:   P(C1) = 1/6,  P(C2) = 5/6
                  Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

C1 = 2, C2 = 4:   P(C1) = 2/6,  P(C2) = 4/6
                  Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
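A one-function sketch (my own) matching these worked examples:

    def gini_node(counts):
        """Gini index of a node, given its class counts, e.g. gini_node([2, 4])."""
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    print(gini_node([0, 6]))   # 0.0
    print(gini_node([1, 5]))   # 0.2777...
    print(gini_node([2, 4]))   # 0.4444...
    print(gini_node([3, 3]))   # 0.5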
Splitting Based on GINI

 Used in CART, SLIQ, SPRINT.
 Splitting Criterion: minimize the Gini Index of the split.
 When a node p is split into k partitions (children), the quality of the split is computed as

    GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n} GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.
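A short sketch (my own, reusing the gini_node helper above) of the weighted split quality:

    def gini_split(children):
        """Weighted Gini of a split; children is a list of class-count lists."""
        n = sum(sum(c) for c in children)
        return sum(sum(c) / n * gini_node(c) for c in children)

    print(gini_split([[0, 6], [6, 0]]))   # 0.0  (pure children)
    print(gini_split([[5, 1], [1, 5]]))   # 0.2777...
    print(gini_split([[4, 3], [2, 3]]))   # ~0.486  (the binary-attribute example on the next slide)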
Binary Attributes: Computing GINI Index

 Splits into two partitions.
 Effect of weighting partitions: larger and purer partitions are sought.

Parent: C1 = 6, C2 = 6, Gini = 0.500.  Split on B? (Yes → Node N1, No → Node N2):

        N1  N2           N1  N2           N1  N2           N1  N2
  C1     0   6     C1     5   1     C1     4   2     C1     3   3
  C2     6   0     C2     1   5     C2     3   3     C2     3   3
  Gini = 0.000     Gini = 0.278     Gini = 0.486     Gini = 0.500
Categorical Attributes: Computing Gini Index

 For each distinct value, gather counts for each class in the dataset.
 Use the count matrix to make decisions.

Multi-way split:
                 CarType
           Family  Sports  Luxury
   C1         1       2       1
   C2         4       1       1
   Gini = 0.393

Two-way split (find the best partition of values):
                 CarType                               CarType
       {Sports, Luxury}   {Family}           {Family, Luxury}   {Sports}
   C1          3              1          C1          2              2
   C2          2              4          C2          5              1
   Gini = 0.400                          Gini = 0.419
Continuous Attributes: Computing Gini Index

 Use binary decisions based on one value.
 Several choices for the splitting value:
  – Number of possible splitting values = number of distinct values.
 Each splitting value v has a count matrix associated with it:
  – Class counts in each of the partitions, A < v and A ≥ v.
 Simple method to choose the best v:
  – For each v, scan the database to gather the count matrix and compute its Gini index.
  – Computationally inefficient! Repetition of work.
Continuous Attributes: Computing Gini Index...

For efficient computation: for each attribute,
– Sort the attribute on values
– Linearly scan these values, each time updating the count matrix
and computing gini index
– Choose the split position that has the least gini index
  Cheat                 No    No    No    Yes   Yes   Yes   No    No    No    No
  Taxable Income
  (sorted values)       60    70    75    85    90    95    100   120   125   220

  Split positions    55    65    72    80    87    92    97    110   122   172   230
  Yes  <= | >        0|3   0|3   0|3   0|3   1|2   2|1   3|0   3|0   3|0   3|0   3|0
  No   <= | >        0|7   1|6   2|5   3|4   3|4   3|4   3|4   4|3   5|2   6|1   7|0
  Gini               0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420

The best split position is 97, with Gini = 0.300.
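A sketch of the sort-then-scan idea (my own, not the lecture's implementation):

    def best_continuous_split(values, labels):
        """Sort once, then scan candidate cut points while updating class counts."""
        classes = sorted(set(labels))
        pairs = sorted(zip(values, labels))
        below = {c: 0 for c in classes}
        above = {c: labels.count(c) for c in classes}
        n = len(labels)

        def gini(counts):
            total = sum(counts.values())
            return 1.0 - sum((c / total) ** 2 for c in counts.values())

        best = None                                   # (weighted gini, cut point)
        for i in range(n - 1):
            v, y = pairs[i]
            below[y] += 1
            above[y] -= 1
            if pairs[i + 1][0] == v:                  # only cut between distinct values
                continue
            cut = (v + pairs[i + 1][0]) / 2
            w = (i + 1) / n * gini(below) + (n - i - 1) / n * gini(above)
            if best is None or w < best[0]:
                best = (w, cut)
        return best

    income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
    cheat  = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]
    print(best_continuous_split(income, cheat))       # Gini 0.3 at a cut of 97.5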
Alternative Splitting Criteria based on INFO

Entropy at a given node t:

    Entropy(t) = -\sum_j p(j|t) \log p(j|t)

(NOTE: p(j|t) is the relative frequency of class j at node t.)
– Measures homogeneity of a node.
 Maximum (\log n_c) when records are equally distributed among all classes, implying least information.
 Minimum (0.0) when all records belong to one class, implying most information.
– Entropy-based computations are similar to the GINI index computations.
Examples for computing Entropy
    Entropy(t) = -\sum_j p(j|t) \log_2 p(j|t)

C1 = 0, C2 = 6:   P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                  Entropy = -0 \log 0 - 1 \log 1 = -0 - 0 = 0

C1 = 1, C2 = 5:   P(C1) = 1/6,  P(C2) = 5/6
                  Entropy = -(1/6) \log_2 (1/6) - (5/6) \log_2 (5/6) = 0.65

C1 = 2, C2 = 4:   P(C1) = 2/6,  P(C2) = 4/6
                  Entropy = -(2/6) \log_2 (2/6) - (4/6) \log_2 (4/6) = 0.92
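The same kind of helper (my own sketch) for entropy from class counts:

    from math import log2

    def entropy_node(counts):
        """Entropy of a node, given its class counts, e.g. entropy_node([1, 5])."""
        n = sum(counts)
        return -sum((c / n) * log2(c / n) for c in counts if c > 0)

    print(entropy_node([0, 6]))   # 0.0
    print(entropy_node([1, 5]))   # ~0.65
    print(entropy_node([2, 4]))   # ~0.92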
Splitting Based on INFO...

Information Gain:

    GAIN_{split} = Entropy(p) - \sum_{i=1}^{k} \frac{n_i}{n} Entropy(i)

  Parent node p is split into k partitions; n_i is the number of records in partition i.
– Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
– Used in ID3 and C4.5.
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
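A sketch (my own, reusing entropy_node above) of the gain computation:

    def information_gain(parent_counts, children_counts):
        """GAIN_split = Entropy(parent) - weighted entropy of the children."""
        n = sum(parent_counts)
        weighted = sum(sum(c) / n * entropy_node(c) for c in children_counts)
        return entropy_node(parent_counts) - weighted

    # Splitting a node with class counts (3, 7) into two children:
    print(information_gain([3, 7], [[3, 3], [0, 4]]))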
Splitting Based on INFO...

Gain Ratio:

    GainRATIO_{split} = \frac{GAIN_{split}}{SplitINFO},   where   SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n} \log \frac{n_i}{n}

  Parent node p is split into k partitions; n_i is the number of records in partition i.
– Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
– Used in C4.5.
– Designed to overcome the disadvantage of Information Gain.
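And the corresponding adjustment (my own sketch, again assuming the helpers above):

    from math import log2

    def gain_ratio(parent_counts, children_counts):
        """GainRATIO_split = GAIN_split / SplitINFO (children assumed non-empty)."""
        n = sum(parent_counts)
        split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children_counts)
        return information_gain(parent_counts, children_counts) / split_info

    # Same split as before; the gain is now normalized by how the records were partitioned.
    print(gain_ratio([3, 7], [[3, 3], [0, 4]]))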
Splitting Criteria based on Classification Error

Classification error at a node t:

    Error(t) = 1 - \max_i P(i|t)

 Measures the misclassification error made by a node.
 Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying least interesting information.
 Minimum (0.0) when all records belong to one class, implying most interesting information.
Examples for Computing Error
    Error(t) = 1 - \max_i P(i|t)

C1 = 0, C2 = 6:   P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                  Error = 1 - max(0, 1) = 1 - 1 = 0

C1 = 1, C2 = 5:   P(C1) = 1/6,  P(C2) = 5/6
                  Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6

C1 = 2, C2 = 4:   P(C1) = 2/6,  P(C2) = 4/6
                  Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
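A matching sketch (my own) for the classification error of a node:

    def error_node(counts):
        """Classification error of a node, given its class counts."""
        n = sum(counts)
        return 1.0 - max(counts) / n

    print(error_node([0, 6]))   # 0.0
    print(error_node([1, 5]))   # 1/6
    print(error_node([2, 4]))   # 1/3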
Comparison among Splitting Criteria
For a 2-class problem:
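For reference (my own addition, derived from the definitions above): if p is the fraction of records in one of the two classes at a node, the three criteria reduce to

    Gini(p)    = 2 p (1 - p)
    Entropy(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
    Error(p)   = 1 - \max(p, 1 - p)

All three are 0 at p = 0 or p = 1 and largest at p = 1/2.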
C4.5
 Simple depth-first construction.
 Sorts continuous attributes at each node.
 Needs entire data to fit in memory.
 Unsuitable for large datasets.
  – Needs out-of-core sorting.

 Classification accuracy shown to improve when entire datasets are used!
Decision Tree for Boolean Function
Truth table:

   A  B  C  |  A and B and C
   0  0  0  |  0
   0  0  1  |  0
   0  1  0  |  0
   0  1  1  |  0
   1  0  0  |  0
   1  0  1  |  0
   1  1  0  |  0
   1  1  1  |  1

[Figure: a decision tree for this table that tests A at the root, B below each branch of A, and C below each branch of B; every leaf is 0 except the one reached by A = 1, B = 1, C = 1.]
Decision Tree for Boolean Function…
Can simplify the tree (same truth table as above):

  A?
    0 → 0
    1 → B?
          0 → 0
          1 → C?
                0 → 0
                1 → 1