
Issues in Computational Linguistics:
Ambiguity Management
Dick Crouch and Tracy King
The next two days

Today
– Mostly computational
– Generalizing charts for packing ambiguity

Tomorrow
– Mostly linguistic
– An introduction to glue semantics
Several Types of Ambiguity

Phonetic:
– “I scream” or “ice cream”

Tokenization:
– “I like Jan.” --- |Jan|. or |Jan.|. (abbrev January)

Morphological:
– “walks” --- noun or verb
– “untieable knot” --- un(tieable) or (untie)able

Lexical:
– “bank” --- river bank or financial institution

Syntactic:
– “The turkeys are ready to eat” --- fattened or hungry

Semantic:
– “Two boys ate fifteen pizzas” --- 15 each or 15 total

Pragmatic:
– “Sue won. Ed gave her a good luck charm” --- cause or result
Some more ambiguities


Women like chocolate more than men
Visiting relatives can be boring
– Visiting relatives is boring
– Visiting relatives are boring



Pets must be carried on escalator
Clothes must be worn in public
Warning: keep all body parts out of window
PP Attachment
A classic example of syntactic ambiguity


PP adjuncts can attach to VPs and NPs
Strings of PPs in the VP are ambiguous
– I see the girl with the telescope.
I see [the girl with the telescope].
I see [the girl] [with the telescope].

Ambiguities proliferate exponentially
– I see the girl with the telescope in the park
I see [the girl with (the telescope in the park)]
I see [the (girl with the telescope) in the park]
I see the girl [with the (telescope in the park)]
I see the girl [with the telescope] [in the park]
I see [the girl with the telescope] [in the park]
– The syntax has no way to determine the attachment, even if
humans can.
Coverage entails Ambiguity
I fell in the park.          (PP attaches to the VP)
+
I know the girl in the park.          (PP attaches to the NP)
⇒ I see the girl in the park.          (now ambiguous: both attachments are licensed)
Ambiguity: bug or feature?


Bug in computer programming languages
Feature in natural language
– People good at resolving ambiguity in context
– Ambiguity consequently often unperceived
» “Cable breakage damages the printer”
» “Readjust paper holding clip”
Even though thousand-fold ambiguities are common
– Ambiguity promotes conciseness

Computers can’t resolve ambiguity like humans
Ambiguity can be explosive
If alternatives multiply within or across components…
[Figure: processing pipeline Tokenize → Morphology → Syntax → Semantics → Discourse, with alternatives multiplying at each stage]
Computational consequences of ambiguity

Serious problem for computational systems
– Broad coverage, hand written grammars frequently produce
thousands of analyses, sometimes millions
– Machine learned grammars easily produce hundreds of
thousands of analyses if allowed to parse to completion

Three approaches to ambiguity management:
– Pruning: block unlikely analysis paths early
– Procrastination: do not expand analysis paths that will lead
to ambiguity explosion until something else requires them
» Also known as underspecification
– Packing: compact representation and computation of all
possible analyses
The Problem with Pruning:
premature disambiguation

The conventional approach: Use heuristics to prune as soon as possible

Strong constraints may reject the so-far-best (= only) option
[Figure: processing pipeline Tokenize → Morphology → Syntax → Semantics → Discourse, with statistics pruning (X) alternatives at each stage: fast computation, wrong result]
The problem with procrastination
passing the buck

Chunk parsing as an example:
– Collect noun groups, verb groups, PP groups
– Leave it to later processing to figure out the
correct way of putting these together
– Not all combinations are grammatically acceptable

Later processing must either
– Call parser to check grammatical constraints
– Have its own model of grammatical constraints
– In the best case, solve a set of constraints the
partial parser includes with its output
The Problem with Packing


There may just be too many analyses to pack
efficiently
A major problem for relatively unconstrained
machine induced grammars
– Grammars overgenerate massively
– Statistics used to prune out unlikely sub-analyses

Less of a problem for carefully hand-coded
broad coverage grammars
Outline of rest of talk

Packing
– Charts for context free grammars
– Generalizing free choice packing

Statistical pruning
– PCFGs
– Viterbi and inside-outside algorithms
– Lexicalized PCFGs

Packing and Pruning
– Maximum entropy disambiguation and avoiding
premature choice
Packing

Explosion of ambiguity results from a small number of
sub-analyses combining in different ways to produce
a large number of total analyses (e.g. PP attachment)

Compute and represent each sub-analysis just once
Compute a factored representation of how these sub-analyses combine


Charts for context free grammars are an example of
(free choice) packing
– Compute and represent 2^N analyses in time and space N^3
Charts
Chart for "saw the girl in the park with the telescope" (each edge lists its alternative daughter pairs):

  V:1    saw
  NP:2   the girl
  P:3    in
  NP:4   the park
  P:5    with
  NP:6   the telescope
  PP:7   <P:3, NP:4>
  PP:8   <P:5, NP:6>
  NP:9   <NP:4, PP:8>
  PP:10  <P:3, NP:9>
  NP:11  <NP:2, PP:7>
  NP:12  <NP:11, PP:8> or <NP:2, PP:10>
  VP:13  <V:1, NP:2>
  VP:14  <V:1, NP:11> or <VP:13, PP:7>
  VP:15  <VP:14, PP:8> or <VP:13, PP:10> or <V:1, NP:12>
Context Free Charts:
Free choice structures

5 analyses in 15 components
– Each contains ~11 constituents
– Can count analyses without
unpacking chart

Exploits context-freeness to
factor combinations:
– V:1 combines with NP:12
regardless of internal structure
– Chart packs all ambiguities of
NP:12 together with no sacrifice
of correctness
– VP:15 can freely choose any analysis of NP:12

But what happens if things are
not context free?
– Can we preserve free choice?
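To make the free-choice property concrete, here is a minimal Python sketch (illustrative data structures, not XLE code) of the chart above: each packed edge lists its alternative daughter pairs, and the number of analyses is computed bottom-up without ever unpacking the chart.

from functools import lru_cache

# A free-choice packed chart (hypothetical encoding): lexical edges map to
# words, phrasal edges map to lists of alternative daughter pairs.
chart = {
    "V:1": "saw", "NP:2": "the girl", "P:3": "in",
    "NP:4": "the park", "P:5": "with", "NP:6": "the telescope",
    "PP:7":  [("P:3", "NP:4")],
    "PP:8":  [("P:5", "NP:6")],
    "NP:9":  [("NP:4", "PP:8")],
    "PP:10": [("P:3", "NP:9")],
    "NP:11": [("NP:2", "PP:7")],
    "NP:12": [("NP:11", "PP:8"), ("NP:2", "PP:10")],
    "VP:13": [("V:1", "NP:2")],
    "VP:14": [("V:1", "NP:11"), ("VP:13", "PP:7")],
    "VP:15": [("VP:14", "PP:8"), ("VP:13", "PP:10"), ("V:1", "NP:12")],
}

@lru_cache(maxsize=None)
def count(edge):
    """Analyses packed into an edge, computed without unpacking."""
    entry = chart[edge]
    if isinstance(entry, str):      # lexical edge: exactly one analysis
        return 1
    # Sum over the alternatives; within one alternative the daughters
    # combine freely, so their counts multiply.
    return sum(count(left) * count(right) for left, right in entry)

print(count("VP:15"))   # 5 analyses, visiting each edge only once

The same traversal with probabilities or scores in place of counts is what the Viterbi and inside computations later in the talk perform.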
Generalizing Free Choice Packing
The sheep saw the fish.
How many sheep?
How many fish?
Options multiplied out:
– The sheep-sg saw the fish-sg.
– The sheep-pl saw the fish-sg.
– The sheep-sg saw the fish-pl.
– The sheep-pl saw the fish-pl.

Options packed:
The sheep {sg | pl} saw the fish {sg | pl}

In principle, a verb might require agreement of Subject and Object: have to check it out.
But English doesn’t do that: any combination of choices is OK
Dependent choices
Das Mädchen {nom | acc} sah die Katze {nom | acc}
(‘The girl saw the cat’: both German NPs are ambiguous between nominative and accusative)

Again, packing avoids duplication … but it’s wrong:
it doesn’t encode all dependencies; choices are not free.
– Das Mädchen-nom sah die Katze-nom --- bad
– Das Mädchen-nom sah die Katze-acc --- The girl saw the cat
– Das Mädchen-acc sah die Katze-nom --- The cat saw the girl
– Das Mädchen-acc sah die Katze-acc --- bad
Solution: Label dependent choices
Das Mädchen {p:nom | ¬p:acc} sah die Katze {q:nom | ¬q:acc}
– Das Mädchen-nom sah die Katze-nom --- bad
– Das Mädchen-nom sah die Katze-acc --- The girl saw the cat
– Das Mädchen-acc sah die Katze-nom --- The cat saw the girl
– Das Mädchen-acc sah die Katze-acc --- bad

φ = (p ∧ ¬q) ∨ (¬p ∧ q)

• Label each choice with distinct Boolean variables p, q, etc.
• Record acceptable combinations as a Boolean expression φ
• Each analysis corresponds to a satisfying truth-value assignment
(a line from φ’s truth table that assigns it “true”)
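The labeling idea can be sketched in a few lines of Python (variable names illustrative, not XLE code): each two-way choice gets a Boolean variable, the acceptability condition is the formula φ above, and each satisfying assignment is one reading.

from itertools import product

# p: case of "das Mädchen" (True = nom, False = acc)
# q: case of "die Katze"   (True = nom, False = acc)
def phi(p, q):
    """Acceptable combinations: exactly one of the two NPs is nominative."""
    return (p and not q) or (not p and q)

for p, q in product([True, False], repeat=2):
    if phi(p, q):
        print("Mädchen=nom, Katze=acc" if p else "Mädchen=acc, Katze=nom")
# Two satisfying assignments = two readings:
#   "The girl saw the cat" and "The cat saw the girl"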
Simplifying Truth Tables
Before simplification:
Das Mädchen {p:nom | ¬p:acc} sah die Katze {q:nom | ¬q:acc}
φ = (p ∧ ¬q) ∨ (¬p ∧ q)

  p  q | φ
  1  1 | 0
  1  0 | 1
  0  1 | 1
  0  0 | 0

After simplification (q = ¬p):
Das Mädchen {p:nom | ¬p:acc} sah die Katze {¬p:nom | p:acc}
φ = (p ∨ ¬p)

  p | φ
  1 | 1
  0 | 1

Freely choose any line from the truth table
(Almost) Free choice for PP attachment
Let PPs freely attach to all NPs and VPs
and then rule out crossing attachments

[Figure: a VP containing V, an object NP, and three PPs, with the possible attachment sites of the PPs labelled A1–A4, B1–B3 and C1–C2]

Independent choices:
A1 xor A2 xor A3 xor A4 = 1
B1 xor B2 xor B3 = 1
C1 xor C2 = 1

NoGoods (crossing attachments):
(A2 & B1), (A3 & B1), (A3 & B2), (A2 & C1), (B2 & C1)

xor : exclusive or
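As a sketch of how the packed space is defined (the A/B/C labels follow the slide; which attachment site each label names is as drawn in the figure), the analyses are the product of the three independent choice groups minus the combinations hit by a nogood:

from itertools import product

# One attachment choice per group (exactly one member of each group holds).
groups = [("A1", "A2", "A3", "A4"), ("B1", "B2", "B3"), ("C1", "C2")]

# NoGoods from the slide: crossing-attachment combinations to rule out.
nogoods = [{"A2", "B1"}, {"A3", "B1"}, {"A3", "B2"},
           {"A2", "C1"}, {"B2", "C1"}]

valid = [combo for combo in product(*groups)
         if not any(ng <= set(combo) for ng in nogoods)]

print(len(valid))   # 14 of the 4 x 3 x 2 = 24 combinations survive,
                    # the number of non-crossing attachments for three PPs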
Eliminating NoGoods / Simplifying Truth Tables

Starting from:
– A set of N independent m-way choices between boolean variables
– And a set of nogoods constraining combinations of these variables

This defines:
– A truth table for a nogood function on the boolean variables
– Where disallowed combinations of variables correspond to lines in
the truth table assigning false to the nogood function

We can:
– Produce a corresponding free choice truth table
– Where no combinations of variables are disallowed
Example of NoGood Elimination
[Figure: a packed NP in which one PP attachment is a two-way choice B1/B2 and another is a two-way choice C1/C2]

Choices:
B1 xor B2 = 1
C1 xor C2 = 1
NoGood: (B1 & C1)

Truth table for the nogood function ¬(B1 & C1):

  B1 B2 C1 C2 | ¬(B1 & C1)
   1  0  1  0 |     0
   1  0  0  1 |     1
   0  1  1  0 |     1
   0  1  0  1 |     1

Simplifies to

[Figure: the same packed NP with the choices relabelled b1/b2 and c1, and one attachment labelled (c2 ∨ b1)]

Choices:
b1 xor b2 = 1
c1 xor c2 = b2

  b1 b2 c1 c2 | nogood function
   1  0  0  0 |       1
   0  1  1  0 |       1
   0  1  0  1 |       1

No combination is disallowed: freely choose any line of the truth table.
The Free Choice Gamble

Eliminating nogoods increases number of boolean
choice variables.
– The more interactions between choices, the greater the
additional number of variables

Worst case, where everything interacts:
– As many choice variables as there are readings
– Packing blows up, and becomes exponential

Best case, no interactions
– N completely independent choices represent 2^N readings

Language interactions mostly limited & local
– Tends towards the best case
– Free choice packing pays off for linguistic analysis
Advantages of Free Choice Packing

Avoids procrastination
– Nogoods are constraints that parser sends to other components to
check when using packed structures
– Eliminating nogoods: other components don’t do parser’s work

Independence between choices:
Allows processing relying on independence assumptions
– Counting number of readings
» Apparently trivial but of crucial importance, since statistical modelling
requires the ability to count
– Hence, statistical processing

A general mechanism extending beyond parsing
Parsing in the XLE
Extending free choice packing

C-structure parsing produces a context free chart
– Already a free choice packed structure

F-structure constraints applied to the c-structure chart
– Rules out certain c-structures through unification failures (i.e.
contradictions in a logic of equality)
– Need to impose additional no-goods on chart

How do we get the f-structure logic of equality to
– Operate within the free choice structure of the chart
– Impose additional no-goods on the structure?

That is, we know how to pack context free grammars,
but how do we pack a logic of equality?
How to Pack a Logic
Goal: Ambiguity-enable components to defer choosing, unpacking

Let φ, ψ be formulas from the logic you want to pack

1. New choice: φ ∨ ψ iff (p → φ) ∧ (¬p → ψ)
(p a new Boolean variable)
Proof: (If) If φ is true, let p be true, in which case (p → φ) ∧ (¬p → ψ) is true
(and similarly with p false if ψ is true).
(Only if) If p is true, then φ is true, in which case φ ∨ ψ is true
(and similarly via ψ if p is false).

2. Ambiguity-enabled inference (by trivial logic):
If φ ∧ ψ ⊢ χ is a rule of inference,
then (C1 → φ) ∧ (C2 → ψ) ⊢ (C1 ∧ C2 → χ) is also valid
E.g. Substitution of equals for equals:
x=y ∧ φ ⊢ φ[x/y] is a rule of inference
Therefore: (C1 → x=y) ∧ (C2 → φ) ⊢ (C1 ∧ C2 → φ[x/y])

3. Nogoods: if φ ⊢ ⊥ and C1 → φ, then nogood(C1)
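Rule 1 can be checked mechanically on a toy case; the sketch below (plain propositional formulas, nothing XLE-specific) confirms that φ ∨ ψ and the contexted form (p → φ) ∧ (¬p → ψ) are both satisfiable for a sample φ and ψ.

from itertools import product

def phi(a): return a          # sample formula phi = a
def psi(a): return not a      # sample formula psi = not a

def satisfiable(f, nvars):
    """True if some truth-value assignment makes f true."""
    return any(f(*vals) for vals in product([True, False], repeat=nvars))

def disjunction(a):
    return phi(a) or psi(a)                     # phi or psi

def contexted(a, p):
    # (p -> phi) and (not p -> psi), written with material implication
    return ((not p) or phi(a)) and (p or psi(a))

print(satisfiable(disjunction, 1), satisfiable(contexted, 2))   # True True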
Packing Equalities in F-structure
[Figure: packed c-structure for "Visiting relatives is boring". Under choice A1 the subject NP is analysed as V(visiting) NP(relatives), a gerund, with NUM=sg; under choice A2 it is analysed as Adj(visiting) N(relatives) with NUM=pl. The verb "is" contributes SUBJ NUM=sg.]

Contexted facts and inferences:
A1 → SUBJ NUM=sg
A2 → SUBJ NUM=pl
1 → SUBJ NUM=sg          (from "is")
1 & A1 → sg=sg            (consistent)
1 & A2 → sg=pl            (contradiction)
⇒ nogood(A2)
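A toy Python rendering of the same derivation (data structures invented for illustration, not XLE internals): contexted facts are (context, attribute, value) triples, comparing two facts conjoins their contexts, and a clash yields a nogood.

from itertools import combinations

# Contexted facts; the context "1" means "in all analyses".
facts = [
    ("A1", "SUBJ NUM", "sg"),   # gerund-VP analysis of "visiting relatives"
    ("A2", "SUBJ NUM", "pl"),   # adjective-noun analysis
    ("1",  "SUBJ NUM", "sg"),   # contributed by "is"
]

# A1 and A2 are alternative analyses of the same constituent, so their
# conjunction is already unsatisfiable and need not be recorded.
exclusive = {frozenset({"A1", "A2"})}

def conjoin(c1, c2):
    """Conjoin two contexts, treating "1" (true) as the identity."""
    if c1 == "1": return c2
    if c2 == "1": return c1
    return f"{c1} & {c2}"

nogoods = set()
for (c1, attr1, v1), (c2, attr2, v2) in combinations(facts, 2):
    if frozenset({c1, c2}) in exclusive:
        continue
    if attr1 == attr2 and v1 != v2:
        # C1 -> x=v1 and C2 -> x=v2 give C1 & C2 -> v1=v2, a contradiction,
        # so the conjoined context becomes a nogood (rules 2 and 3 above).
        nogoods.add(conjoin(c1, c2))

print(nogoods)   # {'A2'}: only the singular (gerund) analysis survives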
Ambiguity Enabled Language Processing

Extend free choice packing to other components in
language processing, e.g
–
–
–
–



Structure transfer for machine translation
Generation
Semantic interpretation
Reasoning (?)
A single, uniform mechanism for managing ambiguity
within and across module boundaries
Allows deferment of disambiguation choices across
module boundaries
Efficient, provided choices are relatively independent
Pruning: Statistical Parsing

Probabilistic Context Free Grammars (PCFGs)
Viterbi parsing of PCFGs
Inside-Outside training of PCFGs
Lexicalized PCFGs

Disambiguation of packed structures in XLE
– Maximum entropy models
PCFGs

Context free grammar with probabilities assigned to rules
– Probabilities of all rules rewriting a non-terminal N sum to 1

Probability of a tree
– Product of probabilities of all the rules used to derive the tree

Probability of a sentence
– Sum probabilities of all trees assigned to sentence by grammar

Independence assumption:
All rules are statistically independent
– Choice of rule used to analyze a subconstituent does not
influence the choice of rule used to embed it in a larger constituent
– Empirically implausible assumption (hence lexicalized PCFGs)
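The definitions above (tree probability as a product of rule probabilities, sentence probability as a sum over trees) can be written down directly; the following sketch uses a made-up grammar fragment with illustrative probabilities, showing only the rules needed for one tree.

import math

# Toy PCFG fragment: (LHS, RHS) -> probability.  The probabilities of all
# rules rewriting the same non-terminal are meant to sum to 1.
rule_prob = {
    ("S",  ("NP", "VP")):   1.0,
    ("VP", ("V", "NP")):    0.7,
    ("VP", ("V",)):         0.3,
    ("NP", ("N",)):         0.6,
    ("NP", ("Det", "N")):   0.4,
    ("N",  ("relatives",)): 0.1,
    ("V",  ("visit",)):     0.2,
}

# A tree is (category, [daughters]); a bare string is a word.
tree = ("S", [("NP", [("N", ["relatives"])]),
              ("VP", [("V", ["visit"]),
                      ("NP", [("N", ["relatives"])])])])

def tree_prob(t):
    """Probability of a tree: product of the probabilities of its rules."""
    if isinstance(t, str):          # a word: no rule applied
        return 1.0
    cat, daughters = t
    rhs = tuple(d if isinstance(d, str) else d[0] for d in daughters)
    return rule_prob[(cat, rhs)] * math.prod(tree_prob(d) for d in daughters)

print(tree_prob(tree))   # 1.0 * 0.6*0.1 * 0.7*0.2 * 0.6*0.1 = 0.000504

The probability of the sentence would then sum tree_prob over every tree the grammar assigns to it.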
Viterbi Parsing of PCFGs


How do you find the most probable parse of a
sentence without inspecting all of its parses?
Observation:
– most probable parse of a constituent is the most
probable combination of
» a rule
» the most probable subconstituents matching the rule

Viterbi parsing (dynamic programming)
– For each word span i-j and non-terminal N, record
only the most probable parse of i-j as N
Example of Viterbi Parsing
(charts with probabilities)
Rules:
N → Adj N     0.1
VP → V NP     1.0
NP → N        0.9
NP → VP       0.1

Lexicon:
N → relatives    0.1 …
Adj → old        0.1
Adj → visiting   0.01 …
V → visiting     0.1 …

Chart for "visiting old relatives" (best probability per category and span):

  1-1  visiting         V 0.1,  Adj 0.01
  2-2  old              Adj 0.1
  3-3  relatives        N 0.1,  NP 0.09
  2-3  old relatives    N 0.001,  NP 0.0009
  1-3  whole string     N 1×10^-6,  VP 9×10^-5,
                        NP←N 9×10^-7,  NP←VP 9×10^-6
                        (only the best NP, NP←VP at 9×10^-6, is kept)
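The chart above can be reproduced with a short CKY-style Viterbi sketch (grammar and probabilities as reconstructed above; a toy illustration, not the lecturers' code): each cell keeps only the most probable analysis of each category for its span.

from collections import defaultdict

# Grammar reconstructed from the slide.
binary  = {("N", "Adj", "N"): 0.1, ("VP", "V", "NP"): 1.0}
unary   = {("NP", "N"): 0.9, ("NP", "VP"): 0.1}
lexicon = {("N", "relatives"): 0.1, ("Adj", "old"): 0.1,
           ("Adj", "visiting"): 0.01, ("V", "visiting"): 0.1}

words = ["visiting", "old", "relatives"]
n = len(words)
best = defaultdict(dict)    # best[(i, j)][cat] = best probability for span i..j

def add(cell, cat, prob):
    """Record an analysis, keeping only the most probable one per category."""
    if prob > cell.get(cat, 0.0):
        cell[cat] = prob
        return True
    return False

def apply_unary(cell):
    """Close a cell under the unary rules NP -> N and NP -> VP."""
    changed = True
    while changed:
        changed = False
        for (parent, child), p in unary.items():
            if child in cell:
                changed |= add(cell, parent, p * cell[child])

for i, w in enumerate(words):                       # lexical cells
    for (cat, word), p in lexicon.items():
        if word == w:
            add(best[(i, i + 1)], cat, p)
    apply_unary(best[(i, i + 1)])

for length in range(2, n + 1):                      # larger spans, smallest first
    for i in range(n - length + 1):
        j = i + length
        for m in range(i + 1, j):                   # binary rules over each split
            for (parent, lcat, rcat), p in binary.items():
                if lcat in best[(i, m)] and rcat in best[(m, j)]:
                    add(best[(i, j)], parent,
                        p * best[(i, m)][lcat] * best[(m, j)][rcat])
        apply_unary(best[(i, j)])

print(best[(0, n)])   # NP comes out at about 9e-06, via the VP analysis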
Training: Expectation Maximization

Start with set of rules & initial estimate of rule probabilities
– e.g. rule counts from annotated corpus, or random assignment
– For PCFGs, results extremely sensitive to initial estimates

Adjust rule probabilities to maximize probability of
sentences in (unannotated) corpus
– Parse corpus with existing rules and calculate sentence
probabilities
– Reassign rule probabilities based on rule counts weighted by the
probabilities of the sentences in which those rules occurred
– Iterate until probabilities settle/converge

Requires efficient way of calculating rule probabilities
weighted by sentence probability
– Inside-Outside algorithm
Inside and Outside Probabilities

Inside probability:
IN_N(k,l) = P(w_k..l | N spans k..l)
Probability that the string of words w_k..w_l expands N

Outside probability:
OUT_N(k,l) = P(w_1..k-1, N spans k..l, w_l+1..n)
Probability of having an expansion of N between the surrounding sequences of words

[Figure: a parse tree for w_1..w_n with a node N spanning w_k..w_l; IN covers the material inside N's span, OUT the material outside it]
Calculating Inside & Outside Probabilities

Compute inside probabilities bottom up
– Sum inside probabilities of all chart edges
dominated by N-rule

Compute outside probabilities top down
– From outside probabilities of dominating
constituents in chart

Can show that the weighted rule count is

C(N → A B) = (1 / P(w_1..n)) · Σ_{k ≤ m < l} OUT_N(k,l) · P(N → A B) · IN_A(k,m) · IN_B(m+1,l)
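For the example chart above, the inside pass is the same traversal as the Viterbi sketch with max replaced by sum. A minimal numeric check, using values taken from that chart:

# Inside probability of NP over the whole string "visiting old relatives":
p_np_to_n, p_np_to_vp = 0.9, 0.1     # rule probabilities
in_n, in_vp = 1e-6, 9e-5             # inside probabilities of N and VP (span 1-3)

in_np = p_np_to_n * in_n + p_np_to_vp * in_vp
print(in_np)   # 9.9e-06: the sum over both NP analyses,
               # where Viterbi kept only the best one (9e-06)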
Lexicalized PCFGs

Probabilities of rules conditioned on the lexical heads of their daughters/RHS
Conceptually, each non-terminal N becomes a
complex <N, LexicalHead> category
– NP(man)  Det(the) N(man)
– done more efficiently in practice

Major problems with data sparseness
– Unlikely to see NP(zygote)  Det(the) N(zygote) in training
– Various smoothing techniques to assign reasonable nonzero probabilities to examples unseen in training
Packing & Pruning in XLE

XLE produces (too) many candidates
– All valid (with respect to grammar and OT marks)
– Not all equally likely
– Some applications require a single best guess

Grammar writer can’t specify correct choices
– Many implicit properties of words and structures with unclear
significance

Appeal to probability model to choose best parse
– Assume: previous experience is a good guide for future decisions
– Collect corpus of training sentences, build probability model that
optimizes for previous good results
– Apply model to choose best analysis of new sentences
Issues

What kind of probability model?
What kind of training data?
Efficiency of training, efficiency of disambiguation?
Benefit vs. random choice of parse
Probability model

Conventional models: stochastic branching process
– Probabilistic Context-Free grammars

Sequence of decisions, each independent of previous
decisions, each choice having a certain probability
– PCFG: Choose from alternative expansions of a given category


Probability of an analysis = product of choice probabilities
Efficient algorithms
– Training: inside/outside
– Disambiguation: Viterbi

Abney 1997 and others: Not appropriate for LFG,
HPSG…
– Choices are not independent: Information from different CFG
branches interacts through f-structure
– Probability models are biased (don’t make right choices on
training set)
Exponential models are appropriate
(aka Maximum Entropy or Log-linear models)



Assign probabilities to representations, not to
choices in a derivation
No independence assumption
Arithmetic combined with human insight
– Human:
» Define properties of representations that may be relevant
» Based on any computable configuration of features, trees
– Arithmetic:
» Train to figure out the weight of each property
What does this mean??




Define yes-no / 1-0 features, f, that seem important
Training determines weights on these features, λ, to
reflect their actual importance
Select parse x: count occurrences of features (0,1)
and multiply by corresponding weights, λ.f(x)
Convert weighted feature counts to probabilities
P(x) = e^(λ·f(x)) / Σ_X e^(λ·f(X))
– numerator e^(λ·f(x)): un-normalized probability
– denominator Σ_X e^(λ·f(X)): normalizing factor
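A sketch of the arithmetic with made-up features and weights (purely illustrative, not a trained XLE model):

import math

# Hypothetical 0/1 features and their trained weights (lambda).
weights = {"pp_attaches_to_verb": 0.8,
           "subj_is_gerund": -0.3,
           "coord_parallel": 1.2}

# Feature vectors f(x) for three candidate parses of one sentence (made up).
parses = {
    "x1": {"pp_attaches_to_verb": 1, "subj_is_gerund": 0, "coord_parallel": 1},
    "x2": {"pp_attaches_to_verb": 0, "subj_is_gerund": 1, "coord_parallel": 1},
    "x3": {"pp_attaches_to_verb": 1, "subj_is_gerund": 1, "coord_parallel": 0},
}

def score(f):
    """Un-normalized log score: lambda . f(x)."""
    return sum(weights[name] * value for name, value in f.items())

z = sum(math.exp(score(f)) for f in parses.values())           # normalizing factor
probs = {x: math.exp(score(f)) / z for x, f in parses.items()}

print(max(probs, key=probs.get), probs)   # x1 wins under these weights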
Entropy and Exponentials

Sparse and interdependent training data
– Not enough parse/feature combinations to uniquely determine
feature weights consistent with training data
– Features are not independent

Entropy: average number of bits required to
communicate a result, under an optimal coding scheme
– Optimal coding: two most probable results encoded in 1 bit, next 4
in 2 bits, etc…
– Dependencies between features: if f1 makes f2 more likely, can
reduce size of encoding of f2 when code for f1 is present

Maximize entropy (subject to fitting the training data) to smooth dependent
feature weights

Entropy defined as (hence the exponential)
H(p) = -Σ_x p(x) · log p(x)
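For concreteness, the formula evaluated on a made-up distribution (log base 2, so the result is in bits):

import math

p = [0.5, 0.25, 0.125, 0.125]                 # toy distribution
h = -sum(px * math.log2(px) for px in p)      # H(p) = -sum_x p(x) log p(x)
print(h)                                      # 1.75 bits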
And what does that mean??


Variant of an Expectation-Maximization (EM)
algorithm (conjugate-gradient minimization)
used to compute weight distribution with
maximum entropy
Inside-outside algorithm used to calculate all
inside and outside probabilities as determined
by feature weights
Efficiency

Property counts
– Associated with Boolean tree of XLE contexts (a1, b2)
– Shared among many parses

Training
– Inside/outside algorithm of PCFG, but applied to Boolean
tree, not parse tree
– Fast algorithm for choosing best properties
– Can train on sentences with relatively low ambiguity
– 5 hours to train over WSJ (given file of parses)

Disambiguation
– Viterbi algorithm applied to Boolean tree
– 5% of parse time to disambiguate
– 30% gain in F-score
Conclusions

Free choice packing
– Extending charts beyond parsing context free grammars

Efficient if limited interaction between alternatives
– Generally true for natural language processing
– Though sometimes hits bad cases

Uniform ambiguity management architecture across
all language processing modules
– Not forced to make premature disambiguation choices at
module boundaries

Statistical disambiguation of packed representations