CISC 4631 Data Mining


CISC 4631
Data Mining
Lecture 04: Decision Trees
These slides are based on the slides by
• Tan, Steinbach and Kumar (textbook authors)
• Eamonn Keogh (UC Riverside)
• Raymond Mooney (UT Austin)
1
Classification: Definition
• Given a collection of records (training set)
– Each record contains a set of attributes; one of the
attributes is the class.
• Find a model for the class attribute as a function of
the values of the other attributes.
• Goal: previously unseen records should be
assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the
model. Usually, the given data set is divided into training
and test sets, with the training set used to build the model
and the test set used to validate it.
2
Illustrating Classification Task
Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Induction: a learning algorithm is applied to the training set to learn a model.

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Deduction: the learned model is applied to the test set to predict the class of each record.
3
Classification Techniques
• Decision Tree based Methods
• Rule-based Methods
• Memory based reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines
4
Example of a Decision Tree
Training Data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund?
  Yes: NO
  No:  MarSt?
    Single, Divorced: TaxInc?
      < 80K: NO
      > 80K: YES
    Married: NO
5
Another Example of Decision Tree
Training Data (same records as before):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: another decision tree that fits the same data

MarSt?
  Married: NO
  Single, Divorced: Refund?
    Yes: NO
    No:  TaxInc?
      < 80K: NO
      > 80K: YES

There could be more than one tree that fits
the same data!
6
Decision Tree Classification Task
Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Induction: a tree induction algorithm learns a model (a decision tree) from the training set.

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Deduction: the decision tree is applied to the test set to predict each record's class.
7
Apply Model to Test Data
Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree and follow the branch matching the record's
attribute value at each decision node:

Refund?
  Yes: NO
  No:  MarSt?
    Single, Divorced: TaxInc?
      < 80K: NO
      > 80K: YES
    Married: NO
8
Apply Model to Test Data

(Slides 9–12 repeat the same tree and test record, highlighting in turn the nodes visited
on the path Refund = No, then MarSt = Married.)
Apply Model to Test Data
Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Following the path Refund = No, then MarSt = Married, the record reaches a leaf labeled NO.

Assign Cheat to “No”
13
Decision Tree Terminology
14
Decision Tree Classification Task
(This slide repeats the classification pipeline from slide 7: a tree induction algorithm
learns a decision tree from the training set, and the tree is then applied to the test set.)
15
Decision Tree Induction
• Many Algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
• John Ross Quinlan is a computer science researcher in data
mining and decision theory. He has contributed extensively to the
development of decision tree algorithms, including inventing the
canonical C4.5 and ID3 algorithms.
16
Decision Tree Classifier
[Scatter plot: insects plotted by Abdomen Length (x-axis) vs. Antenna Length (y-axis),
partitioned by the tests below.]

Abdomen Length > 7.1?
  yes: Katydid
  no:  Antenna Length > 6.0?
    yes: Katydid
    no:  Grasshopper
17
Decision trees predate computers. A classical insect identification key is itself a decision tree:

Antennae shorter than body?
  Yes: Grasshopper
  No:  3 Tarsi?
    Yes: Cricket
    No:  Foretibia has ears?
      Yes: Katydids
      No:  Camel Cricket
18
Definition
• A decision tree is a classifier in the form of a tree structure
– Decision node: specifies a test on a single attribute
– Leaf node: indicates the value of the target attribute
– Arc/edge: one outcome of the split on an attribute
– Path: a conjunction of tests leading to the final decision
• Decision trees classify instances or examples by starting at the root of the
tree and moving down it until a leaf node is reached.
19
Decision Tree Classification
• Decision tree generation consists of two phases
– Tree construction
• At start, all the training examples are at the root
• Partition examples recursively based on selected attributes
– Tree pruning
• Identify and remove branches that reflect noise or outliers
• Use of decision tree: Classifying an unknown sample
– Test the attribute values of the sample against the decision tree
20
Decision Tree Representation
• Each internal node tests an attribute
• Each branch corresponds to attribute value
• Each leaf node assigns a classification
outlook?
  sunny: humidity?
    high: no
    normal: yes
  overcast: yes
  rain: wind?
    strong: no
    weak: yes
21
How do we construct the
decision tree?
• Basic algorithm (a greedy algorithm)
– Tree is constructed in a top-down recursive divide-and-conquer manner
– At start, all the training examples are at the root
– Attributes are categorical (if continuous-valued, they can be discretized in
advance)
– Examples are partitioned recursively based on selected attributes.
– Test attributes are selected on the basis of a heuristic or statistical measure
(e.g., information gain)
• Conditions for stopping partitioning
– All samples for a given node belong to the same class
– There are no remaining attributes for further partitioning – majority voting is
employed for classifying the leaf
– There are no samples left
22
Top-Down Decision Tree Induction
• Main loop:
1. A ← the “best” decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant of the node
4. Sort training examples to the leaf nodes
5. If training examples are perfectly classified, then STOP;
   else iterate over the new leaf nodes
23
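The main loop above maps directly onto a short recursive procedure. Below is a minimal
sketch in Python (not from the slides) that builds a tree over categorical attributes stored
as dictionaries, scoring the “best” attribute by information gain as discussed later in the
lecture; the function and variable names (build_tree, majority_class, etc.) are illustrative.

```python
# Minimal sketch of greedy top-down decision tree induction (assumes categorical attributes).
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Expected reduction in entropy from splitting on attr.
    total, n = entropy(labels), len(rows)
    for value in set(r[attr] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        total -= len(subset) / n * entropy(subset)
    return total

def majority_class(labels):
    return Counter(labels).most_common(1)[0][0]

def build_tree(rows, labels, attributes):
    if len(set(labels)) == 1:          # all samples in one class -> leaf
        return labels[0]
    if not attributes:                 # no attributes left -> majority vote
        return majority_class(labels)
    best = max(attributes, key=lambda a: info_gain(rows, labels, a))
    node = {best: {}}
    for value in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == value]
        node[best][value] = build_tree([rows[i] for i in idx],
                                       [labels[i] for i in idx],
                                       [a for a in attributes if a != best])
    return node
```

The returned tree is a nested dictionary whose inner keys are attribute values and whose
leaves are class labels, which is enough to trace the "sort examples to leaf nodes" step by hand.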
Tree Induction
• Greedy strategy.
– Split the records based on an attribute test that optimizes
certain criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
24
How To Split Records
• Random Split
– The tree can grow huge
– These trees are hard to understand.
– Larger trees are typically less accurate than smaller trees.
• Principled Criterion
– Selection of an attribute to test at each node - choosing the most useful attribute
for classifying examples.
– How?
– Information gain
• measures how well a given attribute separates the training examples
according to their target classification
• This measure is used to select among the candidate attributes at each step
while growing the tree
25
Tree Induction
• Greedy strategy:
– Split the records based on an attribute test that optimizes
certain criterion:
– Hunt’s algorithm: recursively partition training records into
successively purer subsets. How do we measure
purity/impurity?
• Entropy and information gain (covered in the lecture slides)
• Gini (covered in the textbook)
• Classification error
26
How to determine the Best Split
Before Splitting: 10 records of class 0,
10 records of class 1
Three candidate test conditions:

Own Car?
  Yes: C0: 6, C1: 4
  No:  C0: 4, C1: 6

Car Type?
  Family: C0: 1, C1: 3
  Sports: C0: 8, C1: 0
  Luxury: C0: 1, C1: 7

Student ID?
  c1:  C0: 1, C1: 0
  ...
  c10: C0: 1, C1: 0
  c11: C0: 0, C1: 1
  ...
  c20: C0: 0, C1: 1
Which test condition is the best?
Why is student id a bad feature to use?
27
How to determine the Best Split
• Greedy approach:
– Nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:
C0: 5, C1: 5  →  non-homogeneous, high degree of impurity
C0: 9, C1: 1  →  homogeneous, low degree of impurity
28
Picking a Good Split Feature
• Goal is to have the resulting tree be as small as possible, per Occam’s
razor.
• Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard
optimization problem.
• Top-down divide-and-conquer method does a greedy search for a simple
tree but does not guarantee to find the smallest.
– General lesson in Machine Learning and Data Mining: “Greed is good.”
• Want to pick a feature that creates subsets of examples that are relatively
“pure” in a single class so they are “closer” to being leaf nodes.
• There are a variety of heuristics for picking a good test, a popular one is
based on information gain that originated with the ID3 system of Quinlan
(1979).
R. Mooney, UT Austin
29
Information Theory
• Think of playing "20 questions": I am thinking of an integer between 1 and
1,000 -- what is it? What is the first question you would ask?
• What question will you ask?
• Why?
• Entropy measures how much more information you need before you can
identify the integer.
• Initially, there are 1000 possible values, which we assume are equally
likely.
• What is the maximum number of questions you need to ask?
30
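For the 1-to-1,000 guessing game, each yes/no question can at best halve the set of remaining
possibilities, so ceil(log2(1000)) = 10 questions always suffice. A quick check (illustrative,
not from the slides):

```python
import math

# Each binary question at best halves the remaining candidates,
# so ceil(log2(N)) questions suffice to pin down one of N values.
n_candidates = 1000
print(math.ceil(math.log2(n_candidates)))  # 10, since 2**10 = 1024 >= 1000
```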
Entropy
• Entropy (disorder, impurity) of a set of examples, S, relative to a binary
classification is:

    Entropy(S) = - p1 log2(p1) - p0 log2(p0)

where p1 is the fraction of positive examples in S and p0 is the fraction of
negatives.
• If all examples are in one category, entropy is zero (we define 0 log(0) = 0).
• If examples are equally mixed (p1 = p0 = 0.5), entropy is a maximum of 1.
• Entropy can be viewed as the number of bits required on average to encode
the class of an example in S where data compression (e.g. Huffman coding) is
used to give shorter codes to more likely cases.
• For multi-class problems with c categories, entropy generalizes to:

    Entropy(S) = - Σ_{i=1..c} pi log2(pi)
R. Mooney, UT Austin
31
Entropy Plot for Binary
Classification
• The entropy is 0 if the outcome is certain.
• The entropy is maximum if we have no knowledge of the system
(i.e., every outcome is equally likely).

[Plot: entropy of a 2-class problem as a function of the proportion of
one of the two classes.]
32
Information Gain
• Information gain is the expected reduction in entropy caused by partitioning the
examples according to a given attribute A.
• Equivalently, it is the number of bits saved when encoding the target value of an
arbitrary member of S, by knowing the value of attribute A.
33
Information Gain in Decision
Tree Induction
• Assume that using attribute A, the current set is partitioned into some
number of child sets
• The encoding information that would be gained by branching on A:

    Gain(A) = E(current set) - Σ (weighted) E(child sets)

where each child's entropy is weighted by the fraction of records that fall into it.

Note: entropy is at its minimum (zero) when the collection of objects is completely
uniform, i.e., all objects belong to the same class.
34
Examples for Computing Entropy
    Entropy(t) = - Σ_j p(j | t) log2 p(j | t)

NOTE: p(j | t) is computed as the relative frequency of class j at node t

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Entropy = – 0 log2 0 – 1 log2 1 = – 0 – 0 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Entropy = – (1/6) log2(1/6) – (5/6) log2(5/6) = 0.65

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Entropy = – (2/6) log2(2/6) – (4/6) log2(4/6) = 0.92

C1: 3, C2: 3
P(C1) = 3/6 = 1/2, P(C2) = 3/6 = 1/2
Entropy = – (1/2) log2(1/2) – (1/2) log2(1/2) = -(1/2)(-1) – (1/2)(-1) = 1/2 + 1/2 = 1
35
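As a sanity check, the four node entropies above can be reproduced with a few lines of
Python (illustrative, not part of the lecture materials):

```python
import math

def entropy(counts):
    """Entropy of a node given its class counts, with 0*log2(0) treated as 0."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

for counts in [(0, 6), (1, 5), (2, 4), (3, 3)]:
    print(counts, round(entropy(counts), 2))
# (0, 6) 0.0   (1, 5) 0.65   (2, 4) 0.92   (3, 3) 1.0
```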
How to Calculate log2x
• Many calculators only have a button for log10x and
logex (note log typically means log10)
• You can calculate the log for any base b as follows:
– logb(x) = logk(x) / logk(b)
– Thus log2(x) = log10(x) / log10(2)
– Since log10(2) = .301, just calculate the log base 10 and
divide by .301 to get log base 2.
– You can use this for HW if needed
36
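For example, the change-of-base trick looks like this in Python (math.log2 is also available
directly if your environment has it):

```python
import math

x = 6.0
print(math.log10(x) / math.log10(2))  # log base 2 via change of base: ~2.585
print(math.log2(x))                   # same value using the built-in log2
```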
Splitting Based on INFO...
• Information Gain:

    GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)

Parent node p is split into k partitions;
n_i is the number of records in partition i, out of n in total.

– Measures the reduction in entropy achieved because of the
split. Choose the split that achieves the most reduction
(maximizes GAIN).
– Used in ID3 and C4.5
– Disadvantage: tends to prefer splits that result in a large
number of partitions, each being small but pure.
Continuous Attribute?
(more on it later)
• Each non-leaf node is a test, its edges partitioning the attribute values into
subsets (easy for a discrete attribute).
• For a continuous attribute
– Partition the continuous values of attribute A into a discrete set of
intervals
– Or create a new boolean attribute A_c by looking for a threshold c:

    A_c = true   if A ≥ c
    A_c = false  otherwise

How to choose c?
38
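One common way to choose c is to sort the records by A, consider a candidate threshold at the
midpoint between each pair of adjacent distinct values, and keep the one with the highest
information gain. A minimal sketch (the helper and function names are illustrative, not from
the lecture materials):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Pick threshold c for the test (value >= c) that maximizes information gain."""
    parent = entropy(labels)
    pairs = sorted(zip(values, labels))
    best_gain, best_c = -1.0, None
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                                # no class boundary possible here
        c = (pairs[i][0] + pairs[i - 1][0]) / 2     # midpoint candidate threshold
        left = [lab for v, lab in pairs if v < c]
        right = [lab for v, lab in pairs if v >= c]
        gain = parent - (len(left) / len(pairs)) * entropy(left) \
                      - (len(right) / len(pairs)) * entropy(right)
        if gain > best_gain:
            best_gain, best_c = gain, c
    return best_c, best_gain
```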
Person   Hair Length  Weight  Age  Class
Homer    0"           250     36   M
Marge    10"          150     34   F
Bart     2"           90      10   M
Lisa     6"           78      8    F
Maggie   4"           20      1    F
Abe      1"           170     70   M
Selma    8"           160     41   F
Otto     10"          180     38   M
Krusty   6"           200     45   M

Comic    8"           290     38   ?
39
    Entropy(S) = - (p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))

Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911

Let us try splitting on Hair Length:

Hair Length <= 5?
  yes (4 of 9): Entropy = 0.8113
  no  (5 of 9): Entropy = 0.9710

    Gain(A) = E(current set) - Σ (weighted) E(child sets)

Gain(Hair Length <= 5) = 0.9911 - (4/9 * 0.8113 + 5/9 * 0.9710) = 0.0911
40
Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911 (as before)

Let us try splitting on Weight:

Weight <= 160?
  yes (5 of 9): Entropy = 0.7219
  no  (4 of 9): Entropy = 0

Gain(Weight <= 160) = 0.9911 - (5/9 * 0.7219 + 4/9 * 0) = 0.5900
41
Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911 (as before)

Let us try splitting on Age:

Age <= 40?
  yes (6 of 9): Entropy = 1
  no  (3 of 9): Entropy = 0.9183

Gain(Age <= 40) = 0.9911 - (6/9 * 1 + 3/9 * 0.9183) = 0.0183
42
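The three gains above follow directly from the class counts in each branch. A small
illustrative script reproduces them (the variable names are mine, not from the slides):

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def gain(split_counts, parent_counts):
    """Information gain of a split, given per-child and parent (F, M) class counts."""
    n = sum(parent_counts)
    children = sum(sum(c) / n * entropy(c) for c in split_counts)
    return entropy(parent_counts) - children

parent = (4, 5)  # 4 F, 5 M
print(f"{gain([(1, 3), (3, 2)], parent):.4f}")  # Hair Length <= 5: 0.0911
print(f"{gain([(4, 1), (0, 4)], parent):.4f}")  # Weight <= 160:    0.5900
print(f"{gain([(3, 3), (1, 2)], parent):.4f}")  # Age <= 40:        0.0183
```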
Of the 3 features we had, Weight was best.
But while people who weigh over 160 are
perfectly classified (as males), the under 160
people are not perfectly classified… So we
simply recurse!
Weight <= 160?
  yes: (recurse on this subset)
  no:  Male

This time we find that we can split on
Hair length, and we are done!

Hair Length <= 2?
  yes: Male
  no:  Female
43
We don’t need to keep the data around, just the
test conditions.
Weight <= 160?
  yes: Hair Length <= 2?
    yes: Male
    no:  Female
  no:  Male

How would these people be classified?
44
It is trivial to convert Decision Trees to rules…
Weight <= 160?
  yes: Hair Length <= 2?
    yes: Male
    no:  Female
  no:  Male
Rules to Classify Males/Females
If Weight greater than 160, classify as Male
Elseif Hair Length less than or equal to 2, classify as Male
Else classify as Female
45
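Written as code, the rule set is just a chain of conditionals. An illustrative Python version
(the function name is mine, not from the slides):

```python
def classify(weight, hair_length):
    """Rule form of the tree learned for the Simpsons example."""
    if weight > 160:
        return "Male"
    elif hair_length <= 2:
        return "Male"
    else:
        return "Female"

print(classify(weight=290, hair_length=8))  # the unlabeled "Comic" record -> Male
```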
Once we have learned the decision tree, we don’t even need a computer!
This decision tree is attached to a medical machine, and is designed to help
nurses make decisions about what type of doctor to call.
Decision tree for a typical shared-care setting applying the system for the
diagnosis of prostatic obstructions.
46
The worked examples we have seen were
performed on small datasets. However with
small datasets there is a great danger of
overfitting the data…
When you have few datapoints, there are
many possible splitting rules that perfectly
classify the data, but will not generalize to
future datasets.
Wears green?
  Yes: Female
  No:  Male
For example, the rule “Wears green?” perfectly classifies the data, so does
“Mothers name is Jacqueline?”, so does “Has blue shoes”…
47
How to Find the Best Split: GINI
Before splitting: the parent node has class counts C0: N00, C1: N01 and impurity M0.

Candidate split on A?
  Yes: node N1 with counts C0: N10, C1: N11 and impurity M1
  No:  node N2 with counts C0: N20, C1: N21 and impurity M2
  Weighted impurity of the children: M12

Candidate split on B?
  Yes: node N3 with counts C0: N30, C1: N31 and impurity M3
  No:  node N4 with counts C0: N40, C1: N41 and impurity M4
  Weighted impurity of the children: M34

Gain = M0 - M12 vs. M0 - M34: choose the attribute that gives the larger gain.
48
Measure of Impurity: GINI (at node t)
• Gini Index for a given node t with classes j
    GINI(t) = 1 - Σ_j [p(j | t)]^2

NOTE: p(j | t) is computed as the relative frequency of class j at node t

• Example: two classes C1 & C2, and node t has 5 C1
and 5 C2 examples. Compute Gini(t):
– 1 – [p(C1|t)^2 + p(C2|t)^2] = 1 – [(5/10)^2 + (5/10)^2]
– = 1 – [1/4 + 1/4] = 1/2
– Do you think this Gini value indicates a good split or a bad
split? Is it an extreme value?
49
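A Gini helper is just as short as the entropy one. The illustrative sketch below also
reproduces the node Gini values used on the next two slides:

```python
def gini(counts):
    """Gini index of a node given its class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

for counts in [(0, 6), (1, 5), (2, 4), (3, 3), (5, 5)]:
    print(counts, round(gini(counts), 3))
# (0, 6) 0.0   (1, 5) 0.278   (2, 4) 0.444   (3, 3) 0.5   (5, 5) 0.5
```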
More on Gini
• Worst Gini corresponds to probabilities of 1/nc, where nc is
the number of classes.
– For 2-class problems the worst Gini will be ½
• How do we get the best Gini? Come up with an example for
node t with 10 examples for classes C1 and C2
– 10 C1 and 0 C2
– Now what is the Gini?
• 1 – [(10/10)^2 + (0/10)^2] = 1 – [1 + 0] = 0
– So 0 is the best Gini
• So for 2-class problems:
– Gini varies from 0 (best) to ½ (worst).
50
Some More Examples
• Below we see the Gini values for 4 nodes with
different distributions. They are ordered from best to
worst. See next slide for details
– Note that thus far we are only computing GINI for one
node. We need to compute it for a split and then compute
the change in Gini from the parent node.
C1: 0, C2: 6   Gini = 0.000
C1: 1, C2: 5   Gini = 0.278
C1: 2, C2: 4   Gini = 0.444
C1: 3, C2: 3   Gini = 0.500
51
Examples for computing GINI
    GINI(t) = 1 - Σ_j [p(j | t)]^2

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Gini = 1 – P(C1)^2 – P(C2)^2 = 1 – 0 – 1 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Gini = 1 – (1/6)^2 – (5/6)^2 = 0.278

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Gini = 1 – (2/6)^2 – (4/6)^2 = 0.444
Splitting Criteria based on
Classification Error
• Classification error at a node t:

    Error(t) = 1 - max_i P(i | t)
• Measures misclassification error made by a node.
• Maximum (1 - 1/nc) when records are equally distributed among all
classes, implying least interesting information
• Minimum (0.0) when all records belong to one class, implying most
interesting information
53
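As with entropy and Gini, classification error is a one-liner over the class counts. An
illustrative helper reproducing the values worked out on the next slide:

```python
def classification_error(counts):
    """Misclassification error of a node given its class counts."""
    n = sum(counts)
    return 1.0 - max(counts) / n

for counts in [(0, 6), (1, 5), (2, 4)]:
    print(counts, round(classification_error(counts), 3))
# (0, 6) 0.0   (1, 5) 0.167   (2, 4) 0.333
```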
Examples for Computing Error
    Error(t) = 1 - max_i P(i | t)

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Error = 1 – max(0, 1) = 1 – 1 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3
54
Comparison among Splitting Criteria
For a 2-class problem:

[Plot: entropy, Gini index, and misclassification error as functions of the fraction of
records in one class; all three are 0 for a pure node and maximal at a 50/50 split.]
55
Discussion
• Error rate is often the metric used to evaluate a
classifier (but not always)
– So it seems reasonable to use error rate to determine the
best split
– That is, why not just use a splitting metric that matches the
ultimate evaluation metric?
– But this is wrong!
• The reason is related to the fact that decision trees use a greedy
strategy, so we need to use a splitting metric that leads to globally
better results
• The other metrics will empirically outperform error rate, although
there is no proof for this.
56
DTs in practice...
• Growing to purity is bad (overfitting)

[Scatter plot: x1 = petal length vs. x2 = sepal width, showing the decision regions of a
tree grown on the data.]
57
DTs in practice...
• Growing to purity is bad (overfitting)
– Terminate growth early
– Grow to purity, then prune back
59
DTs in practice...
• Growing to purity is bad (overfitting)

[Figure: the same petal length vs. sepal width plot; a split producing a leaf that is not
statistically supportable is removed and its leaves are merged.]
60
Avoid Overfitting in Classification
(more on overfitting later)
• The generated tree may overfit the training data
– Too many branches, some may reflect anomalies due to noise or outliers
– The result is poor accuracy for unseen samples
• Two approaches to avoid overfitting
– Prepruning: Halt tree construction early—do not split a node if this would
result in the goodness measure falling below a threshold
• Difficult to choose an appropriate threshold
– Postpruning: Remove branches from a “fully grown” tree—get a sequence of
progressively pruned trees
• Use a set of data different from the training data to decide which is the
“best pruned tree”
61
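Reduced-error post-pruning is one simple way to realize the second approach: grow the full
tree, then repeatedly replace a subtree with a leaf whenever that does not reduce accuracy on
a held-out validation set. The rough sketch below reuses the dict-based tree representation
from the earlier induction sketch; as a simplification it labels the replacement leaf with the
majority class of the validation records reaching that node (names are illustrative only):

```python
from collections import Counter

def predict(tree, record):
    """tree is either a class label (leaf) or {attribute: {value: subtree}}."""
    while isinstance(tree, dict):
        attr = next(iter(tree))
        tree = tree[attr].get(record[attr])
    return tree

def prune(tree, val_records, val_labels):
    """Replace a subtree by a leaf when that does not hurt validation accuracy."""
    if not isinstance(tree, dict) or not val_labels:
        return tree
    attr = next(iter(tree))
    for value in tree[attr]:
        idx = [i for i, r in enumerate(val_records) if r[attr] == value]
        tree[attr][value] = prune(tree[attr][value],
                                  [val_records[i] for i in idx],
                                  [val_labels[i] for i in idx])
    majority = Counter(val_labels).most_common(1)[0][0]
    subtree_correct = sum(predict(tree, r) == y
                          for r, y in zip(val_records, val_labels))
    leaf_correct = sum(y == majority for y in val_labels)
    if leaf_correct >= subtree_correct:
        return majority          # the pruned-back leaf
    return tree
```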
Tree Induction
• Greedy strategy.
– Split the records based on an attribute test that optimizes
certain criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
62
How to Specify Test Condition?
• Depends on attribute types
– Nominal
– Ordinal
– Continuous
• Depends on number of ways to split
– 2-way split
– Multi-way split
63
Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as distinct values.

CarType?
  Family | Sports | Luxury

• Binary split: divides values into two subsets;
need to find the optimal partitioning.

CarType?
  {Sports, Luxury} | {Family}

OR

CarType?
  {Family, Luxury} | {Sports}
64
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as distinct values.

Size?
  Small | Medium | Large

• Binary split: divides values into two subsets;
need to find the optimal partitioning.

Size?
  {Small, Medium} | {Large}

OR

Size?
  {Medium, Large} | {Small}

• What about this split?

Size?
  {Small, Large} | {Medium}
65
Splitting Based on Continuous
Attributes
• Different ways of handling
– Discretization to form an ordinal categorical attribute
• Static – discretize once at the beginning
• Dynamic – ranges can be found by equal interval
bucketing, equal frequency bucketing
(percentiles), or clustering.
– Binary decision: (A < v) or (A ≥ v)
• consider all possible splits and find the best cut
• can be more compute intensive
66
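For example, equal-interval and equal-frequency bucketing of a continuous attribute each take
only a few lines; the snippet below is illustrative and not taken from the lecture materials:

```python
import numpy as np

income = np.array([125, 100, 70, 120, 95, 60, 220, 85, 75, 90])  # taxable income in K

# Equal-interval (equal-width) bucketing into 4 bins
width_edges = np.linspace(income.min(), income.max(), 5)
width_bins = np.digitize(income, width_edges[1:-1])

# Equal-frequency bucketing using quartile boundaries
freq_edges = np.quantile(income, [0.25, 0.5, 0.75])
freq_bins = np.digitize(income, freq_edges)

print(width_bins)  # bin index 0-3 for each record
print(freq_bins)
```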
Splitting Based on Continuous
Attributes
(i) Binary split:

Taxable Income > 80K?
  Yes | No

(ii) Multi-way split:

Taxable Income?
  < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K
67
Data Fragmentation
• Number of instances gets smaller as you traverse
down the tree
• Number of instances at the leaf nodes could be too
small to make any statistically significant decision
68
Search Strategy
• Finding an optimal decision tree is NP-hard
• The algorithm presented so far uses a greedy, top-down, recursive partitioning
strategy to induce a reasonable solution
69
Expressiveness
• Decision trees provide an expressive representation for learning
discrete-valued functions
– But they do not generalize well to certain types of Boolean
functions
• Example: parity function:
– Class = 1 if there is an even number of Boolean attributes with truth
value = True
– Class = 0 if there is an odd number of Boolean attributes with truth
value = True
• For accurate modeling, must have a complete tree
• Not expressive enough for modeling continuous variables
– Particularly when test condition involves only a single
attribute at-a-time
70
Decision Boundary
[Figure: labeled points in the unit square together with the regions produced by the tree
below; every leaf region is pure.]

x < 0.43?
  Yes: y < 0.33?  (each branch leads to a pure leaf)
  No:  y < 0.47?  (each branch leads to a pure leaf)
• Border line between two neighboring regions of different classes is known as
decision boundary
• Decision boundary is parallel to axes because test condition involves a single
attribute at-a-time
71
Oblique Decision Trees
x + y < 1?
  Yes: Class = +
  No:  the other class
• Test condition may involve multiple attributes
• More expressive representation
• Finding optimal test condition is computationally expensive
72
Vertical/Horizontal Boundaries
500 circular and 500
triangular data points.

Circular points:
0.5 ≤ sqrt(x1^2 + x2^2) ≤ 1

Triangular points:
sqrt(x1^2 + x2^2) < 0.5 or
sqrt(x1^2 + x2^2) > 1
73
Tree Replication
[Figure: a decision tree rooted at P in which the same subtree, a test on Q followed by a
test on S, appears in two different branches.]
• Same subtree appears in multiple branches
74
Model Evaluation
• Metrics for Performance Evaluation
– How to evaluate the performance of a model?
• Methods for Performance Evaluation
– How to obtain reliable estimates?
75
Which of the “Problems” can be solved by a Decision
Tree?
1) Deep Bushy Tree
2) Useless
3) Deep Bushy Tree

The Decision Tree has a hard time with correlated attributes.

[Figure: the three example datasets plotted on their original axes.]
76
Advantages/Disadvantages of Decision Trees
• Advantages:
– Easy to understand (Doctors love them!)
– Easy to generate rules
• Disadvantages:
– May suffer from overfitting.
– Classifies by rectangular partitioning (so does not
handle correlated features very well).
– Can be quite large – pruning is necessary.
– Does not handle streaming data easily
77