Transcript of PowerPoint slides
Business Systems Intelligence: 5. Classification 1
Dr. Brian Mac Namee (www.comp.dit.ie/bmacnamee)
Acknowledgments
These notes are based (heavily) on those provided by the authors to accompany “Data Mining: Concepts & Techniques” by Jiawei Han and Micheline Kamber
Some slides are also based on trainer’s kits provided by SAS
More information about the book is available at: www-sal.cs.uiuc.edu/~hanj/bk2/
And information on SAS is available at: www.sas.com
Classification & Prediction
Today we will look at:
– What are classification & prediction?
– Issues regarding classification and prediction
– Classification techniques:
• Case based reasoning (k-nearest neighbour algorithm)
• Decision tree induction
• Bayesian classification
• Neural networks
• Support vector machines (SVM)
• Classification based on association rule mining concepts
• Other classification methods
– Prediction
– Classification accuracy
Classification & Prediction
Classification:
– Predicts categorical class labels
– Constructs a model based on the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data
Prediction:
– Models continuous-valued functions, i.e.,
predicts unknown or missing values
Typical Applications
– Credit approval
– Target marketing
– Medical diagnosis
– Treatment effectiveness analysis
Classification: A Two-Step Process
1) Model construction:
– Each tuple/sample is assumed to belong to a
predefined class, as determined by the class
label attribute
– The set of tuples used for model construction is
the training set
– The output of this step is a model that can be used for classification
Classification: A Two-Step Process (cont…)
2) Model usage:
– Estimate accuracy of the model
• Each member of an independent test set is classified using the model built
• The known label of each test sample is compared with the class predicted by the model
• Accuracy rate is the percentage of test set samples
that are correctly classified by the model
– If the accuracy is acceptable, the model is used
to classify data tuples whose class labels are not
known
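To make the two steps concrete, here is a minimal Python sketch (my own illustration using scikit-learn; the slides themselves do not prescribe a tool), assuming a feature matrix X, a vector of class labels y, and some unlabelled tuples are already prepared:

# Step 1: construct a model from labelled training data.
# Step 2: estimate its accuracy on an independent test set, then use it on unseen tuples.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def build_and_use(X, y, unseen_tuples):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = DecisionTreeClassifier().fit(X_train, y_train)        # model construction
    accuracy = accuracy_score(y_test, model.predict(X_test))      # accuracy on the test set
    return accuracy, model.predict(unseen_tuples)                 # model usage on unseen data

If the estimated accuracy is acceptable, the predictions for the unseen tuples are the classifier's output; otherwise the model would be revisited.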
Classification: Model Construction
Training Set -> Classification Algorithm -> Classification Model

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Classification Model:
IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’
Classification: Using The Model In Prediction
Testing Set -> Classifier <- Unseen Data

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen data: (Jeff, Professor, 4) -> Tenured? Yes
Supervised Vs. Unsupervised Learning
Supervised learning (classification)
– Supervision: The training data (observations,
measurements, etc.) are accompanied by labels
indicating the class of the observations
– New data is classified based on the training set
Unsupervised learning (clustering)
– The class labels of the training data are unknown
– Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Issues Regarding Classification & Prediction: Data Preparation
Data cleaning
– Preprocess data in order to reduce noise and
handle missing values
Relevance analysis (feature selection)
– Remove the irrelevant or redundant attributes
Data transformation
– Generalize and/or normalize data
Issues Regarding Classification & Prediction: Evaluating Classification Methods
Predictive accuracy
Speed and scalability
– Time to construct the model
– Time to use the model
Robustness
– Handling noise and missing values
Scalability
– Efficiency in disk-resident databases
Interpretability
– Understanding and insight provided by the
model
Classification Techniques: Case Based Reasoning (The k-Nearest Neighbor Algorithm)
Case based reasoning is a classification
technique which uses prior examples (cases)
to determine the classification of unknown
cases
The k-nearest neighbour (k-NN) algorithm is
the simplest form of case based reasoning
The k-Nearest Neighbour Algorithm
All instances correspond to points in n-D space
The nearest neighbours are defined in terms of
Euclidean distance (or other appropriate
measure)
The target value can be discrete or real-valued
For discrete targets, k-NN returns the most
common value among the k training examples
nearest to the query
For real-valued targets, k-NN returns a
combination (e.g. average) of the nearest
neighbours’ target values
Nearest Neighbour Example
Wave Size (ft)   Wave Period (secs)   Good Surf?
6                15                   Yes
1                6                    No
5                11                   Yes
7                10                   Yes
6                11                   Yes
2                1                    No
3                4                    No
6                12                   Yes
4                2                    No
10               10                   ?  (query case)
Nearest Neighbour Example
When a new case is to be classified:
– Calculate the distance from the new case to all training cases
– Put the new case in the same class as its nearest neighbour
[Scatter plot: training cases plotted by Wave Size (f1) against Wave Period (f2), with the query case marked “?”]
k-Nearest Neighbour Example
What about when it’s too close to call? Use the k-nearest neighbour technique:
– Determine the k nearest neighbours to the query case
– Put the new case into the same class as the majority of its nearest neighbours
[Scatter plot: Wave Size (f1) against Wave Period (f2), contrasting a decision based on 2 neighbours vs. 1 neighbour for the query case “?”]
Nearest Neighbour Distance Measures
Any kind of measurement can be used to
calculate the distance between cases
The measurement most suitable will depend on
the type of features in the problem
Euclidean distance is the most used technique
d = \sqrt{\sum_{i=1}^{n} (t_i - q_i)^2}

where n is the number of features, t_i is the i-th feature of the training case and q_i is the i-th feature of the query case
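To make this concrete, here is a small Python sketch of my own (not from the slides) that applies k-NN with Euclidean distance to the surf data from the earlier example; the value of k and the query case are free choices:

import math
from collections import Counter

# Surf data from the earlier example: (wave size ft, wave period secs, good surf?)
training_cases = [
    (6, 15, "Yes"), (1, 6, "No"), (5, 11, "Yes"), (7, 10, "Yes"), (6, 11, "Yes"),
    (2, 1, "No"), (3, 4, "No"), (6, 12, "Yes"), (4, 2, "No"),
]

def euclidean(t, q):
    # d = square root of the sum over all n features of (t_i - q_i)^2
    return math.sqrt(sum((ti - qi) ** 2 for ti, qi in zip(t, q)))

def knn_classify(query, cases, k=3):
    # find the k training cases nearest to the query ...
    nearest = sorted(cases, key=lambda c: euclidean(c[:-1], query))[:k]
    # ... and take a majority vote over their class labels
    return Counter(c[-1] for c in nearest).most_common(1)[0][0]

print(knn_classify((10, 10), training_cases, k=3))

With k = 3 the nearest training cases to the query (10, 10) are (7, 10), (6, 11) and (6, 12), all labelled Yes, so the query is classified as good surf.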
Summary Of Nearest Neighbour Classification
Strengths
– No training involved – lazy learning
– New data can be added on the fly
– Some explanation capabilities
– Robust to noisy data by averaging k-nearest
neighbors
Weaknesses
– Not the most powerful classification technique
– Slow classification
– Curse of dimensionality
One of the easiest machine learning
classification techniques to understand
Case-Based Reasoning
Uses lazy evaluation and analysis of similar
instances
However, instances are not necessarily “points
in a Euclidean space”
Methodology
– Instances represented by rich symbolic
descriptions
– Multiple retrieved cases may be combined
– Tight coupling between case retrieval,
knowledge-based reasoning, and problem
solving
Lots of active research issues
Classification Techniques: Decision Tree Induction
Decision trees are the most widely
used classification technique in data
mining today
Formulate problems into a tree
composed of decision nodes (or
branch nodes) and classification
nodes (or leaf nodes)
Problem is solved by navigating
down the tree until we reach an
appropriate leaf node
The tricky bit is building the most
efficient and powerful tree
J. Ross Quinlan is a
famed researcher in
data mining and
decision theory. He has
done pioneering work
in the area of decision
trees, including
inventing the ID3 and
C4.5 algorithms.
Training Dataset
Age      Income   Student  CreditRating  BuysComputer
<=30     high     no       fair          no
<=30     high     no       excellent     no
31-40    high     no       fair          yes
>40      medium   no       fair          yes
>40      low      yes      fair          yes
>40      low      yes      excellent     no
31-40    low      yes      excellent     yes
<=30     medium   no       fair          no
<=30     low      yes      fair          yes
>40      medium   yes      fair          yes
<=30     medium   yes      excellent     yes
31-40    medium   no       excellent     yes
31-40    high     yes      fair          yes
>40      medium   no       excellent     no
Resultant Decision Tree
Age?
  <=30  -> Student?
             no  -> No
             yes -> Yes
  31-40 -> Yes
  >40   -> Credit Rating?
             excellent -> No
             fair      -> Yes
Algorithm For Decision Tree Induction
Basic algorithm (a greedy algorithm)
– Tree is constructed in a top-down recursive
divide-and-conquer manner
– At the start, all the training examples are at the
root
– Attributes are categorical (if continuous-valued,
they are discretized in advance)
– Examples are partitioned recursively based on
selected attributes
– Test attributes are selected on the basis of a
heuristic or statistical measure (e.g. information
gain)
Algorithm For Decision Tree Induction
Conditions for stopping partitioning
– All samples for a given node belong to the same
class
– There are no remaining attributes for further
partitioning – majority voting is employed for
classifying the leaf
– There are no samples left
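A compact Python sketch of this greedy, top-down procedure (an ID3-style illustration of my own, assuming categorical attributes and examples stored as dictionaries with a "class" key):

import math
from collections import Counter

def entropy(examples):
    # information required to classify an example drawn from this set
    counts = Counter(e["class"] for e in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def partition(examples, attr):
    # group the examples by their value of attribute attr
    parts = {}
    for e in examples:
        parts.setdefault(e[attr], []).append(e)
    return parts

def build_tree(examples, attributes):
    classes = {e["class"] for e in examples}
    if len(classes) == 1:                       # all samples belong to the same class
        return classes.pop()
    if not attributes:                          # no attributes left: majority voting
        return Counter(e["class"] for e in examples).most_common(1)[0][0]

    def remaining_info(attr):                   # expected information after splitting on attr
        return sum(len(p) / len(examples) * entropy(p)
                   for p in partition(examples, attr).values())

    best = min(attributes, key=remaining_info)  # lowest remaining info = highest information gain
    return {best: {value: build_tree(subset, [a for a in attributes if a != best])
                   for value, subset in partition(examples, best).items()}}

Run on the BuysComputer training set shown earlier (attributes Age, Income, Student, CreditRating), this selects Age at the root, which matches the information-gain calculation on the following slides.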
Attribute Selection Measure: Information Gain (ID3/C4.5)
The attribute selection measure used in ID3, based on Claude Shannon's work on information theory
If our data is split into classes according to fractions {p1, p2, …, pm} then the entropy, the information required to classify an arbitrary tuple, is:

E(p_1, p_2, \dots, p_m) = -\sum_{i=1}^{m} p_i \log_2 p_i
Attribute Selection Measure: Information Gain (ID3/C4.5) (cont…)
The information measure is essentially the same as entropy
At the root node the information is as follows:

info[9,5] = E(9/14, 5/14) = -(9/14) \log_2 (9/14) - (5/14) \log_2 (5/14) = 0.94
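A quick check of this figure in Python (a sketch of the formula above, using a helper name of my own):

import math

def info(*counts):
    # E(p1, ..., pm) = -sum_i p_i log2 p_i, with p_i = count_i / total (0 log 0 taken as 0)
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

print(round(info(9, 5), 3))   # 0.94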
Attribute Selection Measure: Information Gain (ID3/C4.5) (cont…)
To measure the information at a particular
attribute we measure info for the various splits
of that attribute
Attribute Selection Measure: Information Gain (ID3/C4.5) (cont…)
At the age attribute the information is as follows:

info([2,3], [4,0], [3,2]) = (5/14) info[2,3] + (4/14) info[4,0] + (5/14) info[3,2]
  = (5/14) [-(2/5) \log_2 (2/5) - (3/5) \log_2 (3/5)]
  + (4/14) [-(4/4) \log_2 (4/4) - (0/4) \log_2 (0/4)]
  + (5/14) [-(3/5) \log_2 (3/5) - (2/5) \log_2 (2/5)]
  = 0.694
Attribute Selection Measure: Information Gain (ID3/C4.5) (cont…)
In order to determine which attributes we
should use at each node we measure the
information gained in moving from one node
to another and choose the one that gives us
the most information
Attribute Selection By Information Gain Example
Class P: BuysComputer = “yes”
Class N: BuysComputer = “no”
– I(p, n) = I(9, 5) =0.940
Compute the entropy for age:
(Training dataset: the same 14 tuples shown on the earlier Training Dataset slide.)
Age     pi  ni  I(pi, ni)
<=30    2   3   0.971
31-40   4   0   0
>40     3   2   0.971
Attribute Selection By Information Gain Computation
E(age) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694

(5/14) I(2,3) means “age <=30” has 5 out of 14 samples, with 2 yes and 3 no. Hence:

Gain(age) = I(p, n) - E(age) = 0.246

Similarly:
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
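The same calculation in Python (a sketch of my own; the (yes, no) counts per age value come straight from the table above):

import math

def info(*counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

splits = {"<=30": (2, 3), "31-40": (4, 0), ">40": (3, 2)}    # (yes, no) counts for each age value
n = sum(sum(c) for c in splits.values())                     # 14 training tuples in total

e_age = sum(sum(c) / n * info(*c) for c in splits.values())  # E(age): expected info after the split
gain_age = info(9, 5) - e_age                                # Gain(age) = I(p, n) - E(age)
print(round(e_age, 3), round(gain_age, 3))

This prints 0.694 and 0.247; the slide's 0.246 is what you get by subtracting the already-rounded values 0.940 - 0.694.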
Other Attribute Selection Measures
Gini index (CART, IBM IntelligentMiner)
– All attributes are assumed continuous-valued
– Assume there exist several possible split values
for each attribute
– May need other tools, such as clustering, to get
the possible split values
– Can be modified for categorical attributes
Extracting Classification Rules From Trees
Represent knowledge in the form of IF-THEN rules
One rule is created for each path from root to leaf
Each attribute-value pair along a path forms a
conjunction
The leaf node holds the class prediction
Rules are easier for humans to understand
IF Age = “<=30” AND Student = “no” THEN BuysComputer = “no”
IF Age = “<=30” AND Student = “yes” THEN BuysComputer = “yes”
IF Age = “31…40” THEN BuysComputer = “yes”
IF Age = “>40” AND CreditRating = “excellent” THEN BuysComputer = “no”
IF Age = “>40” AND CreditRating = “fair” THEN BuysComputer = “yes”
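If the tree has been fitted with a library such as scikit-learn (my choice for illustration; the slides are tool-agnostic), this nested IF-THEN view can be printed directly. The integer encoding below is hypothetical, used only to make the snippet self-contained:

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical encoded tuples: [age_group, student], with age_group 0 = "<=30", 1 = "31-40", 2 = ">40"
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]]
y = ["no", "yes", "yes", "yes", "no", "no"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age_group", "student"]))   # one indented test per tree level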
Overfitting
[Figure: a model’s fit on the Training Set compared with its performance on the Test Set]
Overfitting (cont…)
[Figure: a model’s fit on the Training Set compared with its performance on the Test Set]
Avoiding Overfitting In Classification
An induced tree may overfit the training data
– Too many branches, some may reflect anomalies due to
noise or outliers
– Poor accuracy for unseen samples
Two approaches to avoiding overfitting
– Prepruning: Halt tree construction early
• Do not split a node if this would result in a measure of the usefulness of the tree falling below a threshold
• Difficult to choose an appropriate threshold
– Postpruning: Remove branches from a “fully grown” tree
to give a sequence of progressively pruned trees
• Use a set of data different from the training data to decide which
is the “best pruned tree”
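One concrete way to realise postpruning (the slides do not prescribe a method; this sketch of mine uses scikit-learn's cost-complexity pruning, with a held-out validation set choosing the "best pruned tree"):

from sklearn.tree import DecisionTreeClassifier

def best_pruned_tree(X_train, y_train, X_val, y_val):
    # increasing ccp_alpha values yield a sequence of progressively pruned trees
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
    candidates = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
                  for a in path.ccp_alphas]
    # keep the tree that does best on data not used for training
    return max(candidates, key=lambda t: t.score(X_val, y_val))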
Approaches To Determine The Final Tree Size
Separate training (2/3) and testing (1/3) sets
Use cross validation, e.g., 10-fold cross
validation
Use all the data for training
– But apply a statistical test (e.g., chi-square) to
estimate whether expanding or pruning a node
may improve the entire distribution
Use minimum description length (MDL)
principle
– Halting growth of the tree when the encoding is
minimized
Enhancements To Basic Decision Tree Induction
Allow for continuous-valued attributes
– Dynamically define new discrete-valued
attributes that partition the continuous attribute
value into a discrete set of intervals
Handle missing attribute values
– Assign the most common value of the attribute
– Assign probability to each of the possible values
Attribute construction
– Create new attributes based on existing ones
that are sparsely represented
– This reduces fragmentation, repetition, and
replication
Classification In Large Databases
Classification - a classical problem extensively
studied by statisticians and machine learning
researchers
Scalability: Classifying data sets with millions of
examples and hundreds of attributes with reasonable
speed
Why decision tree induction in data mining?
– Relatively faster learning speed (than other classification
methods)
– Convertible to simple and easy to understand
classification rules
– Can use SQL queries for accessing databases
– Comparable classification accuracy with other methods
Data Cube-Based Decision-Tree Induction
Integration of generalization with decision-tree
induction
Classification at primitive concept levels
– E.g., precise temperature, humidity, outlook, etc.
– Low-level concepts, scattered classes, bushy
classification-trees
– Semantic interpretation problems
Cube-based multi-level classification
– Relevance analysis at multi-levels
– Information-gain analysis with dimension + level
Decision Tree In SAS
Bayesian Classification: Why?
Probabilistic learning:
– Calculate explicit probabilities for a hypothesis
– Among the most practical approaches to certain types of
learning problems
Incremental:
– Each training example can incrementally increase/
decrease the probability that a hypothesis is correct
– Prior knowledge can be combined with observed data
Probabilistic prediction:
– Predict multiple hypotheses, weighted by their
probabilities
Standard:
– Bayesian methods can provide a standard of optimal
decision making against which other methods can be
measured
Bayesian Theorem: Basics
Let X be a data sample whose class label is unknown
Let H be a hypothesis that X belongs to class C
For classification problems, determine P(H|X): the
probability that the hypothesis holds given the
observed data sample X
– P(H): prior probability of hypothesis H (i.e. the initial
probability before we observe any data, reflects the
background knowledge)
– P(X): probability that sample data is observed
– P(X|H): probability of observing the sample X, given that
the hypothesis holds
Bayesian Theorem
Given training data X, the posterior probability of a hypothesis H, P(H|X), follows from Bayes' theorem:

P(H|X) = \frac{P(X|H) P(H)}{P(X)}

Informally, this can be written as:
posterior = (likelihood * prior) / evidence

The MAP (maximum a posteriori) hypothesis is:

h_{MAP} = \arg\max_{h \in H} P(h|D) = \arg\max_{h \in H} P(D|h) P(h)
Practical difficulty: require initial knowledge of many
probabilities, significant computational cost
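A tiny numeric illustration of the theorem (the numbers are hypothetical, chosen only to show the arithmetic):

# Hypothetical example: H = "customer buys a computer", X = "customer is a student"
p_h = 0.5            # prior P(H)
p_x_given_h = 0.6    # likelihood P(X|H)
p_x = 0.4            # evidence P(X)

p_h_given_x = p_x_given_h * p_h / p_x    # posterior = (likelihood * prior) / evidence
print(round(p_h_given_x, 2))             # 0.75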
Naïve Bayes Classifier
A simplified assumption: attributes are conditionally independent given the class:

P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i)

For example, the probability of observing two attribute values x1 and x2 together, given that the class is C, is the product of the probabilities of each value taken separately given that class: P([x1, x2] | C) = P(x1 | C) * P(x2 | C)
No dependence relation between attributes is assumed
This greatly reduces the computation cost: only the class distributions need to be counted
Once the probability P(X|Ci) is known, assign X to the class with maximum P(X|Ci) * P(Ci)
Training dataset
Class:
C1: buys_computer = ‘yes’
C2: buys_computer = ‘no’
Data sample:
X = (age <= 30, Income = medium, Student = yes, Credit_rating = Fair)

(Training dataset: the same 14 tuples shown on the earlier Training Dataset slide.)
Naïve Bayesian Classifier: Example
Compute P(X|Ci) for each class
P(age=“<30” | buys_computer=“yes”) = 2/9=0.222
P(age=“<30” | buys_computer=“no”) = 3/5 =0.6
P(income=“medium” | buys_computer=“yes”)= 4/9 =0.444
P(income=“medium” | buys_computer=“no”) = 2/5 = 0.4
P(student=“yes” | buys_computer=“yes)= 6/9 =0.667
P(student=“yes” | buys_computer=“no”)= 1/5=0.2
P(credit_rating=“fair” | buys_computer=“yes”)=6/9=0.667
P(credit_rating=“fair” | buys_computer=“no”)=2/5=0.4
X=(age<=30 ,income =medium, student=yes,credit_rating=fair)
P(X|Ci): P(X|buys_computer=“yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
P(X|buys_computer=“no”)= 0.6 x 0.4 x 0.2 x 0.4 =0.019
P(X|Ci)*P(Ci ) : P(X|buys_computer=“yes”) * P(buys_computer=“yes”)=0.028
P(X|buys_computer=“no”) * P(buys_computer=“no”)=0.007
X belongs to class “buys_computer=yes”
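The same arithmetic as a quick Python check (all values copied from the slide above):

p_x_given_yes = (2/9) * (4/9) * (6/9) * (6/9)   # P(X | buys_computer = "yes"), approx. 0.044
p_x_given_no = (3/5) * (2/5) * (1/5) * (2/5)    # P(X | buys_computer = "no"), approx. 0.019
p_yes, p_no = 9/14, 5/14                        # class priors from the 14 training tuples

print(round(p_x_given_yes * p_yes, 3))          # 0.028 -> X is assigned to buys_computer = "yes"
print(round(p_x_given_no * p_no, 3))            # 0.007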
Naïve Bayesian Classifier: Comments
Advantages :
– Easy to implement
– Good results obtained in most of the cases
Disadvantages
– Assumption: class conditional independence, therefore loss of accuracy
– Practically, dependencies exist among variables
– E.g., hospital patient data: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
– Dependencies among these cannot be modeled by the Naïve Bayesian Classifier
How to deal with these dependencies?
– Bayesian Belief Networks
Bayesian Networks
A Bayesian belief network allows a subset of the variables to be conditionally independent
A graphical model of causal relationships
– Represents dependency among the variables
– Gives a specification of joint probability
distribution
[Example network with nodes X, Y, Z and P]
• Nodes: random variables
• Links: dependency
• X,Y are the parents of Z, and Y is the
parent of P
• No dependency between Z and P
• Has no loops or cycles
Bayesian Belief Network: An Example
[Network: FamilyHistory and Smoker are the parents of LungCancer; the other nodes are Emphysema, PositiveXRay and Dyspnea]

The conditional probability table for the variable LungCancer shows the conditional probability for each possible combination of its parents:

        (FH, S)   (FH, ~S)   (~FH, S)   (~FH, ~S)
LC      0.8       0.5        0.7        0.1
~LC     0.2       0.5        0.3        0.9

P(z_1, \dots, z_n) = \prod_{i=1}^{n} P(z_i | Parents(Z_i))
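A small sketch of how the LungCancer CPT supplies one factor P(z_i | Parents(Z_i)) of that product (the dictionary layout is my own; the probabilities are those in the table):

# P(LungCancer = yes | FamilyHistory, Smoker), as given in the CPT above
CPT_LUNG_CANCER = {
    (True, True): 0.8,    # (FH, S)
    (True, False): 0.5,   # (FH, ~S)
    (False, True): 0.7,   # (~FH, S)
    (False, False): 0.1,  # (~FH, ~S)
}

def p_lung_cancer(lc, fh, s):
    # one conditional-probability factor in P(z1, ..., zn) = product of P(zi | Parents(Zi))
    p_yes = CPT_LUNG_CANCER[(fh, s)]
    return p_yes if lc else 1.0 - p_yes

print(p_lung_cancer(True, fh=True, s=False))    # 0.5, i.e. P(LC | FH, ~S)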
Learning Bayesian Networks
Several cases
– Given both the network structure and all variables
observable: learn only the CPTs
– Network structure known, some hidden variables:
method of gradient descent, analogous to neural network
learning
– Network structure unknown, all variables observable:
search through the model space to reconstruct graph
topology
– Unknown structure, all hidden variables: no good
algorithms known for this purpose
D. Heckerman, Bayesian networks for data mining
Lazy Vs. Eager Learning
Lazy learning:
– Case based reasoning
Eager learning:
– Decision-tree and Bayesian classification
Key differences:
– A lazy method may consider the query instance when deciding how to generalize beyond the training data D
– An eager method cannot, since it has already committed to a global approximation before seeing the query
Lazy Vs. Eager Learning
Efficiency:
– Lazy methods spend less time training but more time predicting
Accuracy:
– Lazy method effectively uses a richer hypothesis
space since it uses many local linear functions to
form its implicit global approximation to the
target function
– Eager learners must commit to a single
hypothesis that covers the entire instance space
– Easier for lazy learners to cope with concept
drift
Summary
Classification is an extensively studied problem
Classification is probably one of the most widely used
data mining techniques with a lot of extensions
Classification techniques can be categorized as either
lazy or eager
Scalability is still an important issue for database
applications: thus combining classification with
database techniques should be a promising topic
Research directions: classification of non-relational data (e.g., text, spatial and multimedia data) and classification of skewed data sets
Questions?