Machine Learning - Naive Bayes Classifier


Naïve Bayes Classifier
Ke Chen
http://intranet.cs.man.ac.uk/mlo/comp20411/
Modified and extended by Longin Jan Latecki
[email protected]
Probability Basics
• Prior, conditional and joint probability
– Prior probability: P(X)
– Conditional probability: P(X1|X2), P(X2|X1)
– Joint probability: X = (X1, X2), P(X) = P(X1, X2)
– Relationship: P(X1, X2) = P(X2|X1)P(X1) = P(X1|X2)P(X2)
– Independence: P(X2|X1) = P(X2), P(X1|X2) = P(X1), P(X1, X2) = P(X1)P(X2)
• Bayes rule
P(C|X) = P(X|C)P(C) / P(X)
Posterior = (Likelihood × Prior) / Evidence
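
To make the rule concrete, here is a tiny Python check with made-up numbers (a hypothetical one-word spam test, not something from the slides):

# Hypothetical numbers: P(C=spam) = 0.4, P(X|spam) = 0.7, P(X|ham) = 0.1,
# where X is the event that a particular word appears in an email.
p_c = {"spam": 0.4, "ham": 0.6}          # priors P(C)
p_x_given_c = {"spam": 0.7, "ham": 0.1}  # likelihoods P(X|C)

# Evidence P(X) = sum over classes of P(X|C)P(C)
p_x = sum(p_x_given_c[c] * p_c[c] for c in p_c)

# Posterior P(C|X) = P(X|C)P(C) / P(X)
posterior = {c: p_x_given_c[c] * p_c[c] / p_x for c in p_c}
print(posterior)  # {'spam': 0.8235..., 'ham': 0.1764...}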
Probabilistic Classification
• Establishing a probabilistic model for classification
– Discriminative model: P(C|X), C = c1, ..., cL, X = (X1, ..., Xn)
– Generative model: P(X|C), C = c1, ..., cL, X = (X1, ..., Xn)
• MAP classification rule
– MAP: Maximum A Posteriori
– Assign x to c* if P(C = c*|X = x) > P(C = c|X = x) for all c ≠ c*, c = c1, ..., cL
• Generative classification with the MAP rule
– Apply the Bayes rule to convert: P(C|X) = P(X|C)P(C) / P(X) ∝ P(X|C)P(C)
Naïve Bayes
• Bayes classification
P(C|X) ∝ P(X|C)P(C) = P(X1, ..., Xn|C)P(C)
Difficulty: learning the joint probability P(X1, ..., Xn|C)
• Naïve Bayes classification
– Assumption: all input attributes are conditionally independent given the class
P(X1, X2, ..., Xn|C) = P(X1|X2, ..., Xn, C) P(X2, ..., Xn|C)
                     = P(X1|C) P(X2, ..., Xn|C)
                     = P(X1|C) P(X2|C) ... P(Xn|C)
– MAP classification rule: assign x = (x1, ..., xn) to c* if
[P(x1|c*) ... P(xn|c*)] P(c*) > [P(x1|c) ... P(xn|c)] P(c) for all c ≠ c*, c = c1, ..., cL
Naïve Bayes
• Naïve Bayes Algorithm (for discrete input attributes)
– Learning Phase: given a training set S,
  For each target value ci (ci = c1, ..., cL):
    P̂(C = ci) := estimate P(C = ci) with examples in S;
  For every attribute value ajk of each attribute xj (j = 1, ..., n; k = 1, ..., Nj):
    P̂(Xj = ajk|C = ci) := estimate P(Xj = ajk|C = ci) with examples in S;
  Output: conditional probability tables; for xj, Nj × L elements
– Test Phase: given an unknown instance X' = (a1, ..., an),
  look up the tables to assign the label c* to X' if
  [P̂(a1|c*) ... P̂(an|c*)] P̂(c*) > [P̂(a1|c) ... P̂(an|c)] P̂(c) for all c ≠ c*, c = c1, ..., cL
Example
• Example: Play Tennis
Learning Phase
P(Outlook=o|Play=b)
  Outlook    Play=Yes   Play=No
  Sunny      2/9        3/5
  Overcast   4/9        0/5
  Rain       3/9        2/5

P(Temperature=t|Play=b)
  Temperature   Play=Yes   Play=No
  Hot           2/9        2/5
  Mild          4/9        2/5
  Cool          3/9        1/5

P(Humidity=h|Play=b)
  Humidity   Play=Yes   Play=No
  High       3/9        4/5
  Normal     6/9        1/5

P(Wind=w|Play=b)
  Wind     Play=Yes   Play=No
  Strong   3/9        3/5
  Weak     6/9        2/5

P(Play=Yes) = 9/14
P(Play=No) = 5/14
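
For reference, these tables can be reproduced by feeding the 14-example Play Tennis data set (the standard textbook example that these counts come from) to the hypothetical train_nb sketch shown after the algorithm slide:

play_tennis = [
    (("Sunny", "Hot", "High", "Weak"), "No"),
    (("Sunny", "Hot", "High", "Strong"), "No"),
    (("Overcast", "Hot", "High", "Weak"), "Yes"),
    (("Rain", "Mild", "High", "Weak"), "Yes"),
    (("Rain", "Cool", "Normal", "Weak"), "Yes"),
    (("Rain", "Cool", "Normal", "Strong"), "No"),
    (("Overcast", "Cool", "Normal", "Strong"), "Yes"),
    (("Sunny", "Mild", "High", "Weak"), "No"),
    (("Sunny", "Cool", "Normal", "Weak"), "Yes"),
    (("Rain", "Mild", "Normal", "Weak"), "Yes"),
    (("Sunny", "Mild", "Normal", "Strong"), "Yes"),
    (("Overcast", "Mild", "High", "Strong"), "Yes"),
    (("Overcast", "Hot", "Normal", "Weak"), "Yes"),
    (("Rain", "Mild", "High", "Strong"), "No"),
]
prior, cond = train_nb(play_tennis)
print(prior["Yes"])               # 9/14 ≈ 0.643
print(cond[(0, "Sunny", "Yes")])  # 2/9 ≈ 0.222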
Example
• Test Phase
– Given a new instance x', compare P(Play=Yes|x') with P(Play=No|x'), where
x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong)
– Look up the tables:
P(Outlook=Sunny|Play=Yes) = 2/9
P(Outlook=Sunny|Play=No) = 3/5
P(Temperature=Cool|Play=Yes) = 3/9
P(Temperature=Cool|Play=No) = 1/5
P(Humidity=High|Play=Yes) = 3/9
P(Humidity=High|Play=No) = 4/5
P(Wind=Strong|Play=Yes) = 3/9
P(Wind=Strong|Play=No) = 3/5
P(Play=Yes) = 9/14
P(Play=No) = 5/14
– MAP rule
P(Play=Yes|x') ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)] P(Play=Yes) = 0.0053
P(Play=No|x') ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)] P(Play=No) = 0.0206
Since P(Play=Yes|x') < P(Play=No|x'), we label x' as "No".
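
A quick arithmetic check of the two scores above, written out in Python:

# Unnormalized MAP scores for x' = (Sunny, Cool, High, Strong)
score_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)
score_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)
print(round(score_yes, 4), round(score_no, 4))  # 0.0053 0.0206 -> predict "No"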
Relevant Issues
• Violation of the Independence Assumption
– For many real-world tasks, P(X1, ..., Xn|C) ≠ P(X1|C) ... P(Xn|C)
– Nevertheless, naïve Bayes works surprisingly well anyway!
• Zero Conditional Probability Problem
– If no training example contains the attribute value Xj = ajk, then P̂(Xj = ajk|C = ci) = 0
– In this circumstance, P̂(x1|ci) ... P̂(ajk|ci) ... P̂(xn|ci) = 0 during the test phase
– Remedy: estimate the conditional probabilities with Laplace smoothing:
  P̂(Xj = ajk|C = ci) = (nc + m·p) / (n + m)
  nc: number of training examples for which Xj = ajk and C = ci
  n: number of training examples for which C = ci
  p: prior estimate (usually p = 1/t for t possible values of Xj)
  m: weight given to the prior (number of "virtual" examples, m ≥ 1)
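
A minimal Python sketch of this smoothed estimate (the function name and its arguments are my own):

def smoothed_cond_prob(nc, n, t, m=1):
    """Laplace-smoothed estimate P̂(Xj = ajk | C = ci) = (nc + m*p) / (n + m), with p = 1/t."""
    p = 1.0 / t  # uniform prior over the t possible values of attribute Xj
    return (nc + m * p) / (n + m)

# Example: Outlook=Overcast given Play=No has nc = 0, n = 5, and t = 3 Outlook values
print(smoothed_cond_prob(0, 5, 3))       # 0.0556 instead of 0, with m = 1
print(smoothed_cond_prob(0, 5, 3, m=0))  # 0.0, i.e., no smoothing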
Relevant Issues
• Continuous-valued Input Attributes
– An attribute can take uncountably many values
– Conditional probability modeled with the normal distribution:
  P̂(Xj|C = ci) = 1/(√(2π) σji) · exp( −(Xj − μji)² / (2σji²) )
  μji: mean (average) of the attribute values Xj of examples for which C = ci
  σji: standard deviation of the attribute values Xj of examples for which C = ci
– Learning Phase: for X = (X1, ..., Xn), C = c1, ..., cL
  Output: n × L normal distributions and P(C = ci), i = 1, ..., L
– Test Phase: for X' = (X1', ..., Xn')
  • Calculate the conditional probabilities with all the normal distributions
  • Apply the MAP rule to make a decision
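
A minimal Python sketch of the per-class normal-density estimate described above (the names are my own; a real implementation would also guard against zero variance):

import math

def fit_gaussian(values):
    """Estimate (mean, standard deviation) of one attribute over the examples of one class."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return mu, sigma

def normal_pdf(x, mu, sigma):
    """P̂(Xj = x | C = ci) under the fitted normal distribution."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# e.g. temperatures of the "Yes" examples for one continuous attribute (made-up values)
mu, sigma = fit_gaussian([21.0, 23.5, 19.8, 22.1])
print(normal_pdf(22.0, mu, sigma))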
Conclusions
• Naïve Bayes is based on the independence assumption
– Training is very easy and fast; it only requires considering each attribute in each class separately
– Testing is straightforward; it only requires looking up tables or calculating conditional probabilities with normal distributions
• A popular generative model
– Performance is competitive with many state-of-the-art classifiers, even when the independence assumption is violated
– Many successful applications, e.g., spam mail filtering
– Apart from classification, naïve Bayes can do more…
Homework
1. Compute P(Play=Yes|x’) and P(Play=No|x’) with m=0 and with m=1 for
x’=(Outlook=Overcast, Temperature=Cool, Humidity=High, Wind=Strong)
Does the result change?
2. Your training data contains 100 emails with the following statistics:
• 60 of those 100 emails (60%) are spam
  – 48 of those 60 spam emails (80%) contain the word "buy"
  – 42 of those 60 spam emails (70%) contain the word "win"
• 40 of those 100 emails (40%) are not spam
  – 4 of those 40 non-spam emails (10%) contain the word "buy"
  – 6 of those 40 non-spam emails (15%) contain the word "win"
A new email has been received and it has the words "buy" and "win".
Classify it and send it either to the inbox or to the spam folder. For this you need to compute
P(spam=1 | buy=1, win=1) and P(spam=0 | buy=1, win=1),
where we interpret spam, buy, and win as binary random variables such that
spam=1 means that the email is a spam, spam=0 means that it is not a spam,
buy=1 means that the word “buy” is present in the email, and similarly for win=1.
You need to write the formulas you are using. (Here m=0.)