Computing and the Humanities


CS60057
Speech & Natural Language Processing
Autumn 2007
Lecture 5
2 August 2007
WORDS
The Building Blocks of Language

Language can be divided up into pieces of varying sizes,
ranging from morphemes to paragraphs.

Words -- the most fundamental level for NLP.
Tokens, Types and Texts
This process of segmenting a string of characters into words is known as
tokenization.
>>> sentence = "This is the time -- and this is the record of the time."
>>> words = sentence.split()
>>> len(words)
13
Compile a list of the unique vocabulary items in a string by using set() to
eliminate duplicates
>>> len(set(words))
10
A word token is an individual occurrence of a word in a concrete context.
A word type is what we're talking about when we say that the three occurrences
of the in sentence are "the same word."
>>> set(words)
set(['and', 'this', 'record', 'This', 'of', 'is', '--', 'time.', 'time', 'the'])
Extracting text from files
>>> f = open('corpus.txt', 'rU')
>>> f.read()
'Hello World!\nThis is a test file.\n'
We can also read a file one line at a time using the for loop construct:
>>> f = open('corpus.txt', 'rU')
>>> for line in f:
...     print line[:-1]
Hello World!
This is a test file.
Here we use the slice [:-1] to remove the newline character at the end
of the input line.
Extracting text from the Web
>>> from urllib import urlopen
>>> page = urlopen("http://news.bbc.co.uk/").read()
>>> print page[:60]
<!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN"
Web pages are usually in HTML format. To extract the text, we need to
strip out the HTML markup, i.e. remove all material enclosed in
angle brackets. Let's digress briefly to consider how to carry out this
task using regular expressions. Our first attempt might look as
follows:
>>> import re
>>> line = '<title>BBC NEWS | News Front Page</title>'
>>> new = re.sub(r'<.*>', '', line)
>>> new
''
What has happened here?
1. The wildcard '.' matches any character other than '\n', so it will match '>'
and '<'.
2. The '*' operator is "greedy", it matches as many characters as it can. In the
above example, '.*' will return not the shortest match, namely 'title', but the
longest match, 'title>BBC NEWS | News Front Page</title'. To get the
shortest match we have to use the '*?' operator. We will also normalise
whitespace, replacing any sequence of one or more spaces, tabs or
newlines (these are all matched by '\s+') with a single space character:
>>> page = re.sub('<.*?>', '', page)
>>> page = re.sub('\s+', ' ', page)
>>> print page[:60]
BBC NEWS | News Front Page News Sport Weather World Service
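To tie these steps together, here is a minimal sketch (in modern Python 3, whereas the sessions above use the older Python 2 urllib; the URL is just the one from the example) that fetches a page, strips the markup, normalises whitespace, and counts tokens and types. A real system would use an HTML parser rather than regular expressions, but this mirrors the approach on the slide.

import re
from urllib.request import urlopen

# Fetch the raw HTML; Python 3's urlopen returns bytes, so decode to a string.
html = urlopen("http://news.bbc.co.uk/").read().decode('utf-8', errors='ignore')

# Strip markup with the non-greedy pattern, then normalise whitespace.
text = re.sub(r'<.*?>', '', html)
text = re.sub(r'\s+', ' ', text)

# Tokenize naively on whitespace and count tokens and types.
tokens = text.split()
print(len(tokens), 'tokens,', len(set(tokens)), 'types')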
Extracting text from NLTK Corpora
NLTK is distributed with several corpora and corpus samples and
many are supported by the corpus package.
>>> corpus.gutenberg.items
['austen-emma', 'austen-persuasion', 'austen-sense', 'bible-kjv', 'blake-poems', 'blake-songs', 'chesterton-ball', 'chesterton-brown',
'chesterton-thursday', 'milton-paradise', 'shakespeare-caesar',
'shakespeare-hamlet', 'shakespeare-macbeth', 'whitman-leaves']
Next we iterate over the text content to find the number of word tokens:
>>> count = 0
>>> for word in corpus.gutenberg.read('whitman-leaves'):
...     count += 1
>>> print count
154873

Brown Corpus
The Brown Corpus was the first million-word, part-of-speech tagged
electronic corpus of English, created in 1961 at Brown University.
Each of the sections a through r represents a different genre.
>>> corpus.brown.items
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'j', 'k', 'l', 'm', 'n', 'p', 'r']
>>> corpus.brown.documents['a']
'press: reportage'
We can extract individual sentences (as lists of words) from the corpus. Here we will
specify section a, and indicate that only words (and not part-of-speech tags) should be
produced.
>>> a = corpus.brown.tokenized('a')
>>> a[0]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an',
'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election',
'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took',
'place', '.']

Corpus Linguistics





1. Text-corpora: Brown corpus. One million words, tagged,
representative of American English.
2. Text-corpora: Project Gutenberg. 17,000 uncopyrighted literary
texts (Tom Sawyer, etc.)
3. Text-corpora: OMIM: Comprehensive list of medical conditions.
4. Word frequencies.
5. Zipf's First Law.
What’s a word?


I have a can opener; but I can’t open these cans.
how many words?

Word form
 inflected form as it appears in the text
 can and cans ... different word forms

Lemma
 a set of lexical forms having the same stem, same POS and same meaning
 can and cans … same lemma

Word token:
 an occurrence of a word
 I have a can opener; but I can’t open these cans. 11 word tokens (not counting
punctuation)

Word type:
 a different realization of a word
 I have a can opener; but I can’t open these cans. 10 word types
(not counting
punctuation)
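A quick way to check these counts (a small sketch, not from the slides; it ignores punctuation, as the counts above do):

import re

sentence = "I have a can opener; but I can't open these cans."

# Keep only word characters and apostrophes, then count tokens and types.
tokens = re.findall(r"[A-Za-z']+", sentence)
print(tokens)              # the 11 word tokens
print(len(tokens))         # 11
print(len(set(tokens)))    # 10 word types ('I' occurs twice)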
Another example

Mark Twain’s Tom Sawyer
 71,370 word tokens
 8,018 word types
 tokens/type ratio = 8.9 (indication of text complexity)

Complete Shakespeare work
 884,647 word tokens
 29,066 word types
 tokens/type ratio = 30.4
Some Useful Empirical Observations





A small number of events occur with high frequency
A large number of events occur with low frequency
You can quickly collect statistics on the high frequency events
You might have to wait an arbitrarily long time to get valid
statistics on low frequency events
Some of the zeroes in the table are really zeros. But others are
simply low frequency events you haven't seen yet. How to
address?
Common words in Tom Sawyer
but words in NL have an uneven distribution…
Text properties (formalized)
Sample word frequency data
Frequency of frequencies
most words are rare
 3993 (50%) word types appear only once
 they are called hapax legomena ("read only once")
but common words are very common
 100 words account for 51% of all tokens (of all text)
Zipf’s Law
1. Count the frequency of each word type in a large corpus
2. List the word types in order of their frequency
Let:
 f = frequency of a word type
 r = its rank in the list
Zipf’s Law says: f ∝ 1/r
In other words:
 there exists a constant k such that: f × r = k
 The 50th most common word should occur with 3 times the frequency of the 150th most common word.
Zipf’s Law
If the probability of the word of rank r is pr and N is the total number of word
occurrences:

 pr = f/N ≈ A/r   for a corpus-independent constant A ≈ 0.1
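One way to check the law empirically is the following sketch (the filename corpus.txt and the whitespace tokenization are only illustrative):

from collections import Counter

# Count word frequencies in a plain-text corpus (the filename is just an example).
with open('corpus.txt') as f:
    words = f.read().lower().split()
freqs = Counter(words)
N = len(words)

# Sort by frequency and inspect f*r and the estimated constant A = f*r/N.
ranked = sorted(freqs.values(), reverse=True)
for r in (1, 10, 100, 1000):
    if r <= len(ranked):
        f_r = ranked[r - 1]
        print(r, f_r, 'f*r =', f_r * r, 'A =', f_r * r / N)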
Zipf curve
Predicting Occurrence Frequencies




By Zipf, a word appearing n times has rank rn = AN/n
If several words may occur n times, assume rank rn applies to the last of these.
Therefore, rn words occur n or more times and rn+1 words occur n+1 or more times.
So, the number of words appearing exactly n times is:
 In = rn − rn+1 = AN/n − AN/(n+1) = AN/(n(n+1))
The fraction of words with frequency n is In/D = 1/(n(n+1)), where D = AN is the total
number of word types.
The fraction of words appearing only once is therefore 1/2.
Explanations for Zipf’s Law
Zipf’s explanation was his “principle of least effort”:
a balance between the speaker’s desire for a small vocabulary and the hearer’s desire
for a large one.
Zipf’s First Law








1. f ∝ 1/r,
f = word-frequency,
r = word-frequency rank,
m = number of meanings per word.
2. There exists a k such that f × r = k.
3. Alternatively, log f = log k - log r.
4. Holds for English literature, the Johns Hopkins Autopsy Resource, German,
and Chinese.
5. Most famous of Zipf’s Laws.
Zipf’s Second Law



1. Meanings, m ∝ √f
2. There exists a k such that k × f = m².
3. Corollary: m ∝ 1/√r
Zipf’s Third Law



1. Frequency ∝ 1/wordlength:
2. There exists a k such that f × wordlength = k.
3. Many other minor laws stated.
Zipf’s Law Impact on Language Analysis
 Good News: Stopwords will account for a large fraction of text, so eliminating them greatly reduces the size of the vocabulary in a text
 Bad News: For most words, gathering sufficient data for meaningful statistical analysis (e.g. correlation analysis for query expansion) is difficult since they are extremely rare.
Vocabulary Growth
How does the size of the overall vocabulary (number of unique words) grow with the
size of the corpus?
 This determines how the size of the inverted index will
scale with the size of the corpus.
 Vocabulary not really upper-bounded due to proper
names, typos, etc.
Heaps’ Law
If V is the size of the vocabulary and n is the length of the corpus in words:

 V = K n^β, with constants K and 0 < β < 1

Typical constants:
 K ≈ 10-100
 β ≈ 0.4-0.6 (approx. square-root)
Heaps’ Law Data
Word counts are interesting...




As an indication of a text’s style
As an indication of a text’s author
But, because most words appear very infrequently,
 it is hard to predict much about the behavior of words
(if they do not occur often in a corpus)
--> Zipf’s Law
Zipf’s Law on Tom Sawyer

 k ≈ 8000-9000
 except for:
  the 3 most frequent words
  words of frequency ≈ 100
Plot of Zipf’s Law
On chap. 1-3 of Tom Sawyer (≠ numbers from p. 25&26)
f×r = k
[Plot: Freq (0-350) vs. Rank (0-2000), the hyperbolic curve f × r = k]
Plot of Zipf’s Law (con’t)
On chap. 1-3 of Tom Sawyer
f×r = k ==> log(f×r) = log(k) ==> log(f)+log(r) = log(k)
[Plot: log(freq) (0-6) vs. log(rank) (0-8), illustrating log(f) + log(r) = log(k)]
Zipf’s Law, so what?

There are:
 A few very common words
 A medium number of medium frequency words
 A large number of infrequent words
Principle of Least Effort: tradeoff between speaker’s and hearer’s effort
 Speaker communicates with a small vocabulary of common words (less effort)
 Hearer disambiguates messages through a large vocabulary of rare words (less effort)
Significance of Zipf’s Law for us:
 For most words, our data about their use will be very sparse
 Only for a few words will we have a lot of examples
N-Grams and Corpus Linguistics
N-grams & Language Modeling
A bad language model
What’s a Language Model

A Language model is a probability distribution over word
sequences

P(“And nothing but the truth”) ≈ 0.001

P(“And nuts sing on the roof”) ≈ 0
What’s a language model for?

Speech recognition
Handwriting recognition
Spelling correction
Optical character recognition
Machine translation

(and anyone doing statistical modeling)




Next Word Prediction

From a NY Times story...





Stocks ...
Stocks plunged this ….
Stocks plunged this morning, despite a cut in interest rates
Stocks plunged this morning, despite a cut in interest rates by
the Federal Reserve, as Wall ...
Stocks plunged this morning, despite a cut in interest rates by
the Federal Reserve, as Wall Street began


Stocks plunged this morning, despite a cut in interest rates
by the Federal Reserve, as Wall Street began trading for the
first time since last …
Stocks plunged this morning, despite a cut in interest rates
by the Federal Reserve, as Wall Street began trading for the
first time since last Tuesday's terrorist attacks.
Human Word Prediction


Clearly, at least some of us have the ability to predict future words in
an utterance.
How?
 Domain knowledge
 Syntactic knowledge
 Lexical knowledge
Claim


A useful part of the knowledge needed to allow Word Prediction can
be captured using simple statistical techniques
In particular, we'll rely on the notion of the probability of a sequence
(a phrase, a sentence)
Applications

Why do we want to predict a word, given some
preceding words?
 Rank the likelihood of sequences containing various
alternative hypotheses, e.g. for ASR
Theatre owners say popcorn/unicorn sales have
doubled...
 Assess the likelihood/goodness of a sentence, e.g. for
text generation or machine translation
The doctor recommended a cat scan.
El doctor recommendó una exploración del gato.
Simple N-Grams

Assume a language has V word types in its lexicon, how likely is
word x to follow word y?
 Simplest model of word probability: 1/V
 Alternative 1: estimate likelihood of x occurring in new text based
on its general frequency of occurrence estimated from a corpus
(unigram probability)
popcorn is more likely to occur than unicorn

Alternative 2: condition the likelihood of x occurring in the context
of previous words (bigrams, trigrams,…)
mythical unicorn is more likely than mythical popcorn
N-grams

A simple model of language

Computes a probability for observed input.

Probability is the likelihood of the observation being generated by
the same source as the training data

Such a model is often called a language model
Computing the Probability of a Word Sequence

P(w1, …, wn) =
P(w1).P(w2|w1).P(w3|w1,w2). … P(wn|w1, …,wn-1)
P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)

The longer the sequence, the less likely we are to find it in a training
corpus
P(Most biologists and folklore specialists believe that in fact the
mythical unicorn horns derived from the narwhal)

Solution: approximate using n-grams
Bigram Model

Approximate P(wn | w1, …, wn-1) by P(wn | wn-1):
P(unicorn|the mythical) by P(unicorn|mythical)
Markov assumption: the probability of a word depends only on the probability of a
limited history
Generalization: the probability of a word depends only on the probability of the n
previous words



trigrams, 4-grams, …
the higher n is, the more data needed to train
backoff models
Using N-Grams

For N-gram models:
 P(wn | w1, …, wn-1) ≈ P(wn | wn-N+1, …, wn-1)
 P(wn-1, wn) = P(wn | wn-1) P(wn-1)
By the Chain Rule we can decompose a joint probability, e.g. P(w1,w2,w3):
 P(w1, w2, …, wn) = P(w1 | w2, w3, …, wn) P(w2 | w3, …, wn) … P(wn-1 | wn) P(wn)
For bigrams then, the probability of a sequence is just the product of the conditional
probabilities of its bigrams:
 P(w1, …, wn) ≈ ∏ k=1..n P(wk | wk-1)
 P(the, mythical, unicorn) = P(unicorn | mythical) P(mythical | the) P(the | <start>)
The n-gram Approximation
Assume each word depends only on the previous (n-1) words (n words
total)
For example for trigrams (3-grams):
P(“the” | “… whole truth and nothing but”) ≈ P(“the” | “nothing but”)
P(“truth” | “… whole truth and nothing but the”) ≈ P(“truth” | “but the”)
n-grams, continued

How do we find probabilities?

Get real text, and start counting!
P(“the” | “nothing but”) ≈ C(“nothing but the”) / C(“nothing but”)
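A sketch of that counting over a toy token list (real counts would of course come from a large corpus):

from collections import Counter

# Toy corpus; in practice these counts come from a large collection of real text.
corpus = "the whole truth and nothing but the truth so help me".split()

trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
bigrams = Counter(zip(corpus, corpus[1:]))

# P("the" | "nothing but") ~ C("nothing but the") / C("nothing but")
p = trigrams[('nothing', 'but', 'the')] / bigrams[('nothing', 'but')]
print(p)   # 1.0 in this tiny corpus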

Unigram probabilities (1-gram)
 http://www.wordcount.org/main.php
 Most likely to transition to “the”, least likely to transition
to “conquistador”.

Bigram probabilities (2-gram)
 Given “the” as the last word, more likely to go to
“conquistador” than to “the” again.
N-grams for Language Generation

C. E. Shannon, ``A mathematical theory of communication,'' Bell System Technical Journal, vol. 27, pp.
379-423 and 623-656, July and October, 1948.
Unigram:
5. …Here words are chosen independently but with their appropriate frequencies.
REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT
NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO
FURNISHES THE LINE MESSAGE HAD BE THESE.
Bigram:
6. Second-order word approximation. The word transition probabilities are correct but no
further structure is included.
THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE
CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE
LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN
UNEXPECTED.
N-Gram Models of Language



Use the previous N-1 words in a sequence to predict the
next word
Language Model (LM)
 unigrams, bigrams, trigrams,…
How do we train these models?
 Very large corpora
Counting Words in Corpora

What is a word?
 e.g., are cat and cats the same word?
 September and Sept?
 zero and oh?
 Is _ a word? * ? ‘(‘ ?
 How many words are there in don’t ? Gonna ?
 In Japanese and Chinese text -- how do we identify a
word?
Terminology






Sentence: unit of written language
Utterance: unit of spoken language
Word Form: the inflected form that appears in the corpus
Lemma: an abstract form, shared by word forms having the
same stem, part of speech, and word sense
Types: number of distinct words in a corpus (vocabulary size)
Tokens: total number of words
Corpora

Corpora are online collections of text and speech
 Brown Corpus
 Wall Street Journal
 AP news
 Hansards
 DARPA/NIST text/speech corpora (Call Home, ATIS,
switchboard, Broadcast News, TDT, Communicator)
 TRAINS, Radio News
Simple N-Grams

Assume a language has V word types in its lexicon, how likely is
word x to follow word y?
 Simplest model of word probability: 1/V
 Alternative 1: estimate likelihood of x occurring in new text based
on its general frequency of occurrence estimated from a corpus
(unigram probability)
popcorn is more likely to occur than unicorn

Alternative 2: condition the likelihood of x occurring in the context
of previous words (bigrams, trigrams,…)
mythical unicorn is more likely than mythical popcorn
Computing the Probability of a Word
Sequence


Compute the product of component conditional probabilities?
 P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the
mythical)
The longer the sequence, the less likely we are to find it in a training
corpus
P(Most biologists and folklore specialists believe that in fact the
mythical unicorn horns derived from the narwhal)

Solution: approximate using n-grams
Bigram Model

Approximate P(wn | w1, …, wn-1) by P(wn | wn-1):
P(unicorn|the mythical) by P(unicorn|mythical)
Markov assumption: the probability of a word depends only on the probability of a
limited history
Generalization: the probability of a word depends only on the probability of the n
previous words



trigrams, 4-grams, …
the higher n is, the more data needed to train
backoff models
Using N-Grams

n1)
For P
N-gram
(wn |wmodels
P(wn |wnn1N 1)
1


 P(wn-1,wn) = P(wn | wn-1) P(wn-1)

By the Chain Rule we can decompose a joint
probability, e.g. P(w1,w2,w3)
P(w1,w2, ...,wn) = P(w1|w2,w3,...,wn) P(w2|w3, ...,wn) … P(wn1|wn) P(wn)
For bigrams then, the probability of a sequence is just the product
of the conditional probabilities of its bigrams
n
P(the,mythical,unicorn)
=P
P(unicorn|mythical)
P(w1n )  
(wk | wk 1)
k 1
P(mythical|the) P(the|<start>)
Training and Testing


N-Gram probabilities come from a training corpus
 overly narrow corpus: probabilities don't generalize
 overly general corpus: probabilities don't reflect task or domain
A separate test corpus is used to evaluate the model, typically using
standard metrics
 held out test set; development test set
 cross validation
 results tested for statistical significance
A Simple Example

P(I want to eat Chinese food) =
P(I | <start>) P(want | I) P(to | want) P(eat | to)
P(Chinese | eat) P(food | Chinese)
A Bigram Grammar Fragment from BERP
Eat on        .16     Eat Thai       .03
Eat some      .06     Eat breakfast  .03
Eat lunch     .06     Eat in         .02
Eat dinner    .05     Eat Chinese    .02
Eat at        .04     Eat Mexican    .02
Eat a         .04     Eat tomorrow   .01
Eat Indian    .04     Eat dessert    .007
Eat today     .03     Eat British    .001
<start> I     .25     Want some           .04
<start> I’d   .06     Want Thai           .01
<start> Tell  .04     To eat              .26
<start> I’m   .02     To have             .14
I want        .32     To spend            .09
I would       .29     To be               .02
I don’t       .08     British food        .60
I have        .04     British restaurant  .15
Want to       .65     British cuisine     .01
Want a        .05     British lunch       .01




P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want)
P(eat|to) P(British|eat) P(food|British) = .25*.32*.65*.26*.001*.60
≈ .0000081
vs. P(I want to eat Chinese food) ≈ .00015
Probabilities seem to capture ``syntactic'' facts, ``world
knowledge''
 eat is often followed by an NP
 British food is not too popular
N-gram models can be trained by counting and normalization
BERP Bigram Counts
           I     Want   To    Eat   Chinese  Food   Lunch
I          8     1087   0     13    0        0      0
Want       3     0      786   0     6        8      6
To         3     0      10    860   3        0      12
Eat        0     0      2     0     19       2      52
Chinese    2     0      0     0     0        120    1
Food       19    0      17    0     0        0      0
Lunch      4     0      0     0     0        1      0
BERP Bigram Probabilities
Normalization: divide each row's counts by the appropriate unigram counts for wn-1

Unigram counts:
 I: 3437   Want: 1215   To: 3256   Eat: 938   Chinese: 213   Food: 1506   Lunch: 459

Computing the bigram probability of I I:
 C(I, I) / C(all I)
 p(I | I) = 8 / 3437 = .0023

Maximum Likelihood Estimation (MLE): relative frequency, e.g. freq(w1, w2) / freq(w1)
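The same MLE computation as a small sketch, using a few of the BERP counts shown above:

# A few of the BERP counts from the tables above.
bigram_count = {('I', 'I'): 8, ('I', 'want'): 1087, ('want', 'to'): 786}
unigram_count = {'I': 3437, 'want': 1215}

def mle_bigram(w1, w2):
    # MLE: relative frequency freq(w1, w2) / freq(w1)
    return bigram_count.get((w1, w2), 0) / unigram_count[w1]

print(round(mle_bigram('I', 'I'), 4))      # 0.0023
print(round(mle_bigram('I', 'want'), 2))   # 0.32
print(round(mle_bigram('want', 'to'), 2))  # 0.65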
What do we learn about the language?


What's being captured with ...
 P(want | I) = .32
 P(to | want) = .65
 P(eat | to) = .26
 P(food | Chinese) = .56
 P(lunch | eat) = .055
What about...
 P(I | I) = .0023
 P(I | want) = .0025
 P(I | food) = .013



P(I | I) = .0023 I I I I want
P(I | want) = .0025 I want I want
P(I | food) = .013 the kind of food I want is ...
Approximating Shakespeare



As we increase the value of N, the accuracy of the n-gram model
increases, since choice of next word becomes increasingly
constrained
Generating sentences with random unigrams...
 Every enter now severally so, let
 Hill he late speaks; or! a more to leg less first you enter
With bigrams...
 What means, sir. I confess she? then all sorts, he is trim,
captain.
 Why dost stand forth thy canopy, forsooth; he is this palpable hit
the King Henry.
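A sketch of this kind of generation from bigram counts; the token list here is only a placeholder, whereas the examples on the slide are generated from the full Shakespeare corpus:

import random
from collections import defaultdict

# Placeholder token list; for the Shakespeare experiment this would be the whole corpus.
tokens = "sweet prince falstaff shall die sweet prince will you not tell me who i am".split()

# Collect, for each word, the list of words that follow it.
followers = defaultdict(list)
for w1, w2 in zip(tokens, tokens[1:]):
    followers[w1].append(w2)

def generate(start, length=8):
    """Generate by repeatedly sampling a successor of the current word."""
    out = [start]
    for _ in range(length):
        nxt = followers.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return ' '.join(out)

print(generate('sweet'))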


Trigrams
 Sweet prince, Falstaff shall die.
 This shall forbid it should be branded, if renown
made it empty.
Quadrigrams
 What! I will go seek the traitor Gloucester.
 Will you not tell me who I am?



There are 884,647 tokens, with 29,066 word form types, in
about a one million word Shakespeare corpus
Shakespeare produced 300,000 bigram types out of 844 million
possible bigrams: so, 99.96% of the possible bigrams were
never seen (have zero entries in the table)
Quadrigrams worse: What's coming out looks like
Shakespeare because it is Shakespeare
N-Gram Training Sensitivity


If we repeated the Shakespeare experiment but trained our n-grams
on a Wall Street Journal corpus, what would we get?
This has major implications for corpus selection or design
Some Useful Empirical Observations





A small number of events occur with high frequency
A large number of events occur with low frequency
You can quickly collect statistics on the high frequency events
You might have to wait an arbitrarily long time to get valid statistics on
low frequency events
Some of the zeroes in the table are really zeros. But others are simply
low frequency events you haven't seen yet. How to address?
Smoothing Techniques



Every n-gram training matrix is sparse, even for very large
corpora (Zipf’s law)
Solution: estimate the likelihood of unseen n-grams
Problems: how do you adjust the rest of the corpus to
accommodate these ‘phantom’ n-grams?
Add-one Smoothing

For unigrams:
 Add 1 to every word (type) count
 Normalize by N (tokens) / (N (tokens) + V (types))
 Smoothed count (adjusted for additions to N) is:
  ci* = (ci + 1) × N / (N + V)
 Normalize by N to get the new unigram probability:
  pi* = (ci + 1) / (N + V)
For bigrams:
 Add 1 to every bigram count: c(wn-1 wn) + 1
 Increment the unigram count by the vocabulary size: c(wn-1) + V
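A sketch of the bigram version using Counter objects; the toy token list stands in for a real training corpus:

from collections import Counter

# Toy training data; real counts come from a corpus.
tokens = "i want to eat chinese food i want to eat thai food".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
V = len(unigrams)                     # vocabulary size (word types)

def addone_bigram(w1, w2):
    # Add-one (Laplace) smoothing: (c(w1 w2) + 1) / (c(w1) + V)
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

print(addone_bigram('eat', 'chinese'))   # seen bigram
print(addone_bigram('eat', 'british'))   # unseen bigram still gets probability > 0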


Discount: ratio of new counts to old (e.g. add-one smoothing
changes the BERP bigram (to|want) from 786 to 331 (dc=.42)
and p(to|want) from .65 to .28)
But this changes counts drastically:

too much weight given to unseen ngrams

in practice, unsmoothed bigrams often work better!
Witten-Bell Discounting

A zero ngram is just an ngram you haven’t seen yet…but every
ngram in the corpus was unseen once…so...
 How many times did we see an ngram for the first time? Once
for each ngram type (T)
 Est. total probability of unseen bigrams as T / (N + T)
 View the training corpus as a series of events, one for each token (N) and one for each new type (T)


We can divide the probability mass equally among unseen
bigrams….or we can condition the probability of an unseen
bigram on the first word of the bigram
Discount values for Witten-Bell are much more reasonable than
Add-One
Good-Turing Discounting

Re-estimate amount of probability mass for zero (or low count) ngrams
by looking at ngrams with higher counts

Estimate:
 c* = (c + 1) × Nc+1 / Nc
E.g. N0's adjusted count is a function of the count of ngrams that occur once, N1

Assumes:


word bigrams follow a binomial distribution
We know number of unseen bigrams (VxV-seen)
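A sketch of the re-estimation from a frequency-of-frequencies table; the Nc values below are illustrative, and real systems also smooth the Nc counts and apply the formula only to low c:

# Nc = number of n-gram types seen exactly c times (illustrative values).
N_c = {0: 1000000, 1: 50000, 2: 20000, 3: 10000, 4: 6000}

def good_turing(c):
    # Adjusted count: c* = (c + 1) * N_{c+1} / N_c
    return (c + 1) * N_c[c + 1] / N_c[c]

for c in range(4):
    print(c, '->', round(good_turing(c), 3))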
Backoff methods (e.g. Katz ‘87)

For e.g. a trigram model
 Compute unigram, bigram and trigram probabilities
 In use:


Where the trigram is unavailable, back off to the bigram if available,
otherwise to the unigram probability
E.g. “an omnivorous unicorn”
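A much-simplified sketch of the idea; it only falls back between levels and leaves out Katz's discounting and the alpha weights that make the real model a proper probability distribution:

def backoff_prob(w1, w2, w3, tri, bi, uni):
    """Use the trigram estimate if seen, else the bigram, else the unigram."""
    if tri.get((w1, w2, w3), 0) > 0:
        return tri[(w1, w2, w3)]
    if bi.get((w2, w3), 0) > 0:
        return bi[(w2, w3)]
    return uni.get(w3, 0.0)

# E.g. "an omnivorous unicorn": no trigram or bigram counts, so back off to P(unicorn).
tri, bi, uni = {}, {}, {'unicorn': 1e-6}
print(backoff_prob('an', 'omnivorous', 'unicorn', tri, bi, uni))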
Summary


N-gram probabilities can be used to estimate the likelihood
 of a word occurring in a context (the previous N-1 words)
 of a sentence occurring at all
Smoothing techniques deal with problems of unseen words and n-grams in a corpus