Parsers and Grammars
Colin Phillips
Outline
• The Standard History of Psycholinguistics
• Parsing and rewrite rules
• Initial optimism
• Disappointment and the DTC
• Emergence of independent psycholinguistics
• Reevaluating relations between competence and performance systems
Standard View
[Diagram: arithmetic. 324 + 697 = ? and 217 x 32 = ? are each solved by a specialized algorithm; behind both lies something deeper, knowledge of arithmetic.]
Standard View
[Diagram: language. Speaking and understanding are each handled by a specialized algorithm; behind both lies grammatical knowledge, competence, a recursive characterization of well-formed expressions. The competence system is precise but ill-adapted to real-time operation; the specialized algorithms are well-adapted to real-time operation but maybe inaccurate.]
Grammatical Knowledge
• How is grammatical knowledge accessed in syntactic
computation for...
(a) grammaticality judgment
(b) understanding
(c) speaking
• Almost no proposals under standard view
• This presents a serious obstacle to unification at the level
of syntactic computation
Townsend & Bever (2001, ch. 2)
• “Linguists made a firm point of insisting that, at
most, a grammar was a model of competence - that
is, what the speaker knows. This was contrasted
with effects of performance, actual systems of
language behaviors such as speaking and
understanding. Part of the motive for this
distinction was the observation that sentences can
be intuitively ‘grammatical’ while being difficult
to understand, and conversely.”
Townsend & Bever (2001, ch. 2)
• “…Despite this distinction the syntactic model had
great appeal as a model of the processes we carry
out when we talk and listen. It was tempting to
postulate that the theory of what we know is a
theory of what we do, thus answering two
questions simultaneously.
1. What do we know when we know a language?
2. What do we do when we use what we know?
Townsend & Bever (2001, ch. 2)
• “…It was assumed that this knowledge is linked to
behavior in such a way that every syntactic
operation corresponds to a psychological process.
The hypothesis linking language behavior and
knowledge was that they are identical.
Miller (1962)
1. Mary hit Mark.             K(ernel)
2. Mary did not hit Mark.     N
3. Mark was hit by Mary.      P
4. Did Mary hit Mark?         Q
5. Mark was not hit by Mary.  NP
6. Didn’t Mary hit Mark?      NQ
7. Was Mark hit by Mary?      PQ
8. Wasn’t Mark hit by Mary?   PNQ
Miller (1962)
[Figure: the Transformational Cube, with the eight sentence types at its corners]
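The cube's geometry can be made concrete with a small sketch (my own notation, not Miller's): each sentence type is the set of transformations (N, P, Q) applied to the kernel, and distance on the cube is the number of transformations on which two types differ.

```python
# Each sentence type = the set of transformations applied to the kernel K:
# N(egation), P(assive), Q(uestion). Corners of the cube are subsets of {N, P, Q}.
SENTENCE_TYPES = {
    "Mary hit Mark.": frozenset(),            # K (kernel)
    "Mary did not hit Mark.": frozenset("N"),
    "Mark was hit by Mary.": frozenset("P"),
    "Did Mary hit Mark?": frozenset("Q"),
    "Mark was not hit by Mary.": frozenset("NP"),
    "Didn't Mary hit Mark?": frozenset("NQ"),
    "Was Mark hit by Mary?": frozenset("PQ"),
    "Wasn't Mark hit by Mary?": frozenset("PNQ"),
}

def cube_distance(a, b):
    """Number of cube edges between two sentence types:
    the symmetric difference of their transformation sets."""
    return len(SENTENCE_TYPES[a] ^ SENTENCE_TYPES[b])

# The DTC-style prediction: matching time grows with this distance.
print(cube_distance("Mary hit Mark.", "Mark was not hit by Mary."))  # 2 (add P, add N)
```

The Miller & McKean (1964) result quoted below is exactly the claim that response times track this distance measure.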
Townsend & Bever (2001, ch. 2)
• “The initial results were breathtaking. The amount
of time it takes to produce a sentence, given
another variant of it, is a function of the distance
between them on the sentence cube. (Miller &
McKean 1964).”
“…It is hard to convey how exciting these
developments were. It appeared that there was to
be a continuing direct connection between
linguistic and psychological research. […] The
golden age had arrived.”
Townsend & Bever (2001, ch. 2)
• “Alas, it soon became clear that either the linking
hypothesis was wrong, or the grammar was
wrong, or both.”
Townsend & Bever (2001, ch. 2)
• “The moral of this experience is clear. Cognitive
science made progress by separating the question
of what people understand and say from how they
understand and say it. The straightforward attempt
to use the grammatical model directly as a
processing model failed. The question of what
humans know about language is not only distinct
from how children learn it, it is distinct from how
adults use it.”
A Simple Derivation
(starting axiom: S)
1. S  -> NP VP
2. VP -> V NP
3. NP -> D N
4. N  -> Bill
5. V  -> hit
6. D  -> the
7. N  -> ball
[Tree built top-down, rule by rule: [S [NP Bill] [VP [V hit] [NP [D the] [N ball]]]]]
Reverse the derivation...
A Simple Derivation
[Tree built bottom-up, word by word: Bill is scanned and reduced to NP; hit to V; the to D; ball to N; then D N reduce to NP, V NP to VP, and NP VP to S.]
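The reversed derivation can be sketched as a toy shift-reduce recognizer (illustrative only, not the deck's own system; the subject NP is simplified here to rewrite directly as Bill):

```python
# Rewrite rules, slightly adapted from the slide: running them "backwards"
# means shifting each word onto a stack and reducing whenever the top of the
# stack matches the right-hand side of a rule.
RULES = [
    ("S",  ("NP", "VP")),
    ("VP", ("V", "NP")),
    ("NP", ("D", "N")),
    ("NP", ("Bill",)),   # simplification: the subject NP rewrites directly
    ("V",  ("hit",)),
    ("D",  ("the",)),
    ("N",  ("ball",)),
]

def reduce_parse(words):
    """Greedy shift-reduce: shift one word, then reduce until nothing matches."""
    stack = []
    for w in words:
        stack.append(w)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES:
                n = len(rhs)
                if tuple(stack[-n:]) == rhs:
                    stack[-n:] = [lhs]   # replace RHS on the stack with LHS
                    changed = True
    return stack

print(reduce_parse("Bill hit the ball".split()))  # ['S']
```

Even this toy version shows the indeterminacy worry: a greedy reducer only succeeds here because the grammar is tiny and unambiguous.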
Transformations
wh-movement
  X - wh-NP - Y
  1     2     3   -->   2 - 1 - 0 - 3
Transformations
VP-ellipsis
  X - VP1 - Y - VP2 - Z
  1    2     3    4    5   -->   1 - 2 - 3 - 0 - 5
  condition: VP1 = VP2
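The structural-description / structural-change notation can be read off mechanically; a small sketch (my own labels, not part of the deck) for wh-movement, which analyzes a string as X - wh-NP - Y and rewrites it as 2 - 1 - 0 - 3:

```python
# Apply the wh-movement structural change: front the wh-NP and leave a
# null (index 0) in its original position.
def wh_movement(terms):
    """terms: list of (label, text) factors. Returns the transformed sequence."""
    for i, (label, text) in enumerate(terms):
        if label == "wh-NP":
            moved = terms[i]
            gap = ("0", "")              # the null left behind
            return [moved] + terms[:i] + [gap] + terms[i + 1:]
    return terms                         # no wh-NP: transformation inapplicable

sentence = [("X", "you think Englishmen cook"), ("wh-NP", "what"), ("Y", "")]
print(wh_movement(sentence))
```

The parsing problem discussed next is this process run in reverse: the hearer sees the output and must posit the null without direct evidence for its position.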
Difficulties
• How to build structure incrementally in right-branching structures
• How to recognize the output of transformations that create nulls
Summary
• Running the grammar ‘backwards’ is not so
straightforward - problems of indeterminacy and
incrementality
• Disappointment in empirical tests of Derivational
Theory of Complexity
• Unable to account for processing of local
ambiguities
Standard View
[Diagram: language. Speaking and understanding are each handled by a specialized algorithm; behind both lies grammatical knowledge, competence, a recursive characterization of well-formed expressions.]
Grammatical Knowledge
• How is grammatical knowledge accessed in syntactic
computation for...
(a) grammaticality judgment
(b) understanding
(c) speaking
• Almost no proposals under standard view
• This presents a serious obstacle to unification at the level
of syntactic computation
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Grammar as Parser - Problems
• Incremental structure building with PS Rules
(e.g. S -> NP VP)
– delay
– prediction/guessing
• Indeterminacy (how to recover nulls created by transformations)
Grammar as Parser - Solutions
• Lexicalized grammars make incremental structure-building much easier (available in HPSG, minimalism, LFG, Categorial Grammar, etc. etc.)
PS rules: VP -> V PP, PP -> P NP
[Tree: [VP [V sat] [PP [P on] [NP the rug]]]]
Grammar as Parser - Solutions
• Lexicalized grammars make incremental structure-building much easier (available in HPSG, minimalism, LFG, Categorial Grammar, etc. etc.)
Lexical entries: sit, comp: __ P; on, comp: __ N
[Tree: [VP [V sat] [PP [P on] [NP the rug]]]]
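Why lexical entries help incrementality can be shown with a small sketch (toy lexicon and category labels of my own, simplified from the slide): because each word announces the complement it expects, the parser can attach the next word the moment it arrives, with no waiting for a full phrase-structure rule.

```python
# Toy lexicalized entries: each word lists its category and the category of
# the complement it expects (None = no complement).
LEXICON = {
    "sat":     {"cat": "P",  "comp": None},   # placeholder, overwritten below
}
LEXICON = {
    "sat":     {"cat": "V",  "comp": "P"},
    "on":      {"cat": "P",  "comp": "NP"},
    "the rug": {"cat": "NP", "comp": None},
}

def attach_incrementally(words):
    """Attach each incoming word as the complement of the most recent
    still-unsatisfied head; report the attachment made at each step."""
    log, open_heads = [], []
    for w in words:
        node = dict(word=w, **LEXICON[w])
        if open_heads and open_heads[-1]["comp"] == node["cat"]:
            head = open_heads.pop()
            log.append(f"{w} attaches as {node['cat']} complement of {head['word']}")
        else:
            log.append(f"{w} starts a new phrase")
        if node["comp"] is not None:
            open_heads.append(node)   # this word now awaits its own complement
    return log

for line in attach_incrementally(["sat", "on", "the rug"]):
    print(line)
```

Every attachment here happens as soon as the word is heard, which is the point of the slide: the combinatory information lives in the words, not in delayed top-down rules.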
Grammar as Parser - Solutions
• Problem of seeking nulls in movement structures
Transformations
wh-movement
  X - wh-NP - Y
  1     2     3   -->   2 - 1 - 0 - 3
Transformations
VP-ellipsis
  X - VP1 - Y - VP2 - Z
  1    2     3    4    5   -->   1 - 2 - 3 - 0 - 5
  condition: VP1 = VP2
Grammar as Parser - Solutions
• Problem of seeking nulls in movement structures
• …becomes problem of seeking licensing features
for displaced phrases, e.g. for wh-phrase, seek
Case assigner and thematic role assigner.
• Requirement to find licensing features is a basic
component of all syntactic composition
Incremental Structure Building
• An investigation of the grammatical consequences of
incremental, left-to-right structure building
Incremental Structure Building
[Animation: abstract nodes A, B, C, D, E are added one at a time, left to right. At each step the most recent pair forms a constituent, which is then destroyed by the addition of new material.]
Incremental Structure Building
[Animation: the cat … sat … on … the rug. [sat on] is a temporary constituent, which is destroyed as soon as the NP [the rug] is added.]
Incremental Structure Building
Conflicting Constituency Tests
Verb + Preposition sequences can undergo coordination…
(1) The cat sat on and slept under the rug.
…but cannot undergo pseudogapping (Baltin & Postal, 1996)
(2) *The cat sat on the rug and the dog did the chair.
Incremental Structure Building
[Animation: The cat sat on and slept under … Coordination applies early, before the V+P constituent is destroyed.]
Incremental Structure Building
[Animation: The cat sat on the rug and the dog did … Pseudogapping applies too late, after the V+P constituent is destroyed.]
Incremental Structure Building
• Constituency Problem
Different diagnostics of constituency frequently yield
conflicting results
• Incrementality Hypothesis
(a) Structures are assembled strictly incrementally
(b) Syntactic processes see a ‘snapshot’ of a derivation - they
target constituents that are present when the process applies
(c) Conflicts reflect the simple fact that different processes
have different linear properties
• Applied to interactions among binding, movement, ellipsis,
prosodic phrasing, clitic placement, islands, etc. (Phillips
1996, in press; Richards 1999, 2000; Guimaraes 1999; etc.)
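The 'snapshot' idea can be made concrete with hand-built trees (illustrative nesting, following the slide's example, not output of any parser): listing the constituents present at each point in the left-to-right derivation shows [sat on] available early and gone later.

```python
# Trees as nested lists; a constituent is the word-string of any subtree.
def constituents(tree):
    """Return the set of word strings spanned by every subtree."""
    out = []
    def walk(node):
        if isinstance(node, str):
            out.append(node)
            return [node]
        words = [w for child in node for w in walk(child)]
        out.append(" ".join(words))
        return words
    walk(tree)
    return set(out)

snapshot_early = [["the", "cat"], ["sat", "on"]]                    # after "...sat on"
snapshot_final = [["the", "cat"], ["sat", ["on", ["the", "rug"]]]]  # after "the rug"

# Coordination applies at the early snapshot, where [sat on] exists;
# pseudogapping applies at the final snapshot, where it no longer does.
print("sat on" in constituents(snapshot_early))   # True
print("sat on" in constituents(snapshot_final))   # False
```

Different processes thus "see" different snapshots, which is all the Incrementality Hypothesis needs to reconcile the conflicting constituency tests.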
Interim Conclusion
• Grammatical derivations look strikingly like the
incremental derivations of a parsing system
• But we want to be explicit about this, so...
Computational Modeling
(Schneider 1999; Schneider & Phillips, 1999)
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Townsend & Bever (2001, ch. 2)
• “Linguists made a firm point of insisting that, at
most, a grammar was a model of competence - that
is, what the speaker knows. This was contrasted
with effects of performance, actual systems of
language behaviors such as speaking and
understanding. Part of the motive for this
distinction was the observation that sentences can
be intuitively ‘grammatical’ while being difficult
to understand, and conversely.”
Grammaticality ≠ Parsability
• “It is straightforward enough to show that sentence parsing
and grammaticality judgments are different. There are
sentences which are easy to parse but ungrammatical (e.g.
that-trace effects), and there are sentences which are
extremely difficult to parse, but which may be judged
grammatical given appropriate time for reflection (e.g.
multiply center embedded sentences). This classic
argument shows that parsing and grammar are not
identical, but it tells us very little about just how much they
have in common.”
(Phillips, 1995)
Grammaticality ≠ Parsability
• Grammatical sentences that are hard to parse
– The cat the dog the rat bit chased fled
– John gave the man the dog bit a sandwich
• Ungrammatical sentences that are understandable
– Who do you think that left?
– The children is happy
– The millionaire donated the museum a painting
Grammaticality ≠ Parsability
• Grammatical sentences that are hard to parse
– The cat the dog the rat bit chased fled
– John gave the man the dog bit a sandwich
• Can arise independent of grammar
– Resource (memory) limitations
– Incorrect choices in ambiguity
(Preliminary)
• Incomplete structural dependencies have a
cost (that’s what yields center embedding)
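A crude sketch of why incomplete dependencies punish center embedding (a drastic simplification of Gibson-style metrics, with a toy word list of my own): count how many noun-verb dependencies are open at once.

```python
# Toy metric: each subject noun opens a noun-verb dependency that stays
# incomplete until a verb arrives; the peak count is the memory load.
NOUNS = {"cat", "dog", "rat"}
VERBS = {"bit", "chased", "fled"}

def max_open_dependencies(words):
    """Peak number of simultaneously open noun-verb dependencies."""
    open_deps = peak = 0
    for w in words:
        if w in NOUNS:
            open_deps += 1
            peak = max(peak, open_deps)
        elif w in VERBS:
            open_deps -= 1
    return peak

center_embedded = "the cat the dog the rat bit chased fled".split()
right_branching = "the rat bit the cat that chased the dog that fled".split()
print(max_open_dependencies(center_embedded))   # 3: all three nouns pending at once
print(max_open_dependencies(right_branching))   # 1: each dependency closes quickly
```

The same word counts yield very different peaks, which is the intuition behind the RC SC / SC RC contrast that follows.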
A Contrast
(Gibson 1998)
• Relative Clause within a Sentential Complement (RC SC):
The fact [CP that the employee [RC who the manager hired] stole
office supplies] worried the executive.
• Sentential Complement within a Relative Clause (SC RC):
#The executive [RC who the fact [CP that the employee stole office
supplies] worried] hired the manager.
RC SC is easier to process than SC RC
A Contrast
(Gibson 1998)
• Relative Clause within a Sentential Complement (RC SC):
[SC that the employee [RC who the manager hired] stole
• Sentential Complement within a Relative Clause (SC RC):
[RC who the fact [SC that the employee stole office supplies] worried]
RC SC is easier to process than SC RC
Contrast is motivated by off-line complexity ratings
Grammaticality ≠ Parsability
• Ungrammatical sentences that are understandable
– Who do you think that left?
– The children is happy
– The millionaire donated the museum a painting
• System can represent illegal combinations (e.g.
categories are appropriate, but feature values are
inappropriate)
• Fact that understandable errors are (i) diagnosable,
(ii) nearly grammatical, should not be overlooked
Grammaticality ≠ Parsability
• Are the parser’s operations fully grammatically
accurate?
Standard View
[Diagram: language. Speaking and understanding are each handled by a specialized algorithm, well-adapted to real-time operation but maybe inaccurate; behind both lies grammatical knowledge, competence, a recursive characterization of well-formed expressions.]
Grammatical Accuracy in Parsing
• The grammar looks rather like a parser
• BUT, does the parser look like a grammar?
i.e., are the parser’s operations fully grammatically
accurate at every step
… even in situations where such accuracy appears quite
difficult to achieve
(Phillips & Wong 2000)
Self-Paced Reading
(e.g. Phillips & Wong 2000)
-- --- ------- ------- ---- --- ----.
We --- ------- ------- ---- --- ----.
-- can ------- ------- ---- --- ----.
-- --- measure ------- ---- --- ----.
-- --- ------- reading ---- --- ----.
-- --- ------- ------- time --- ----.
-- --- ------- ------- ---- per ----.
-- --- ------- ------- ---- --- word.
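The moving-window display in the mock-up above is easy to reproduce (a sketch of the paradigm's presentation format, not the experiment software):

```python
# Moving-window self-paced reading: one word is revealed per button press,
# all other words are masked by dashes of matching length.
def moving_window(sentence):
    """Yield one display frame per word, in reading order."""
    words = sentence.rstrip(".").split()
    masked = ["-" * len(w) for w in words]
    for i, w in enumerate(words):
        frame = masked[:i] + [w] + masked[i + 1:]
        yield " ".join(frame) + "."

for frame in moving_window("We can measure reading time per word."):
    print(frame)
```

The time between successive button presses is the per-word reading time that the later plots report as residual reading time.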
Grammatical Accuracy in Parsing
Wh-Questions
Englishmen cook wonderful dinners.
Englishmen cook what  -->  What do Englishmen cook gap
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
Long-distance Wh-Questions
Few people think that anybody realizes that Englishmen cook wonderful dinners.
Few people think that anybody realizes that Englishmen cook what
-->  What do few people think that anybody realizes that Englishmen cook gap
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
‘Parasitic Gaps’
The plan to remove the equipment ultimately destroyed the building.
[Subject NP: the plan to remove the equipment. Direct object NP of the embedded clause: the equipment. Direct object NP of the main clause: the building.]
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
‘Parasitic Gaps’
What did the plan to remove the equipment ultimately destroy gap   [gap in main-clause object position]
What did the plan to remove gap ultimately destroy the building   [gap inside the subject NP]
Island Constraint
A wh-phrase cannot be moved out of a subject.
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
‘Parasitic Gaps’
What did the plan to remove gap ultimately destroy gap   [Parasitic Gap]
Generalization: the good gap ‘rescues’ the bad gap
Infinitive: What did the plan to remove gap ultimately destroy gap
Finite: What did the plan that removed gap ultimately destroy gap
Revised Generalization (informal)
Only mildly bad gaps can be rescued by good gaps.
Grammaticality Ratings
[Figure: grammaticality ratings from 50 subjects]
Grammatical Accuracy in Parsing
A ‘Look-Ahead’ Problem
Infinitive: What did the plan to remove gap ultimately destroy gap
The good gap rescues the bad gap, BUT the bad gap appears before the good gap … a look-ahead problem.
Question: When the parser reaches the embedded verb, does it construct a dependency - even though the gap would be a ‘bad’ gap?
Infinitive: What did the plan to remove gap … ultimately destroy (RISKY)
Finite: What did the plan that removed gap … ultimately destroy (RECKLESS)
Grammatical Accuracy in Parsing
Question
What do speakers do when they get to the verb embedded
inside the subject NP?
(i) RISKY: create a gap in infinitival clauses only - violates
a constraint, but may be rescued
(ii) RECKLESS: create a gap in all clause types - violates a
constraint; cannot be rescued
(iii) CONSERVATIVE: do not create a gap
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
Materials
a. … what … infinitival verb ...     [infinitive, gap ok]     (a gap here is RISKY)
b. … whether … infinitival verb ...  [infinitive, no gap]
c. … what … finite verb ...          [finite, gap not ok]     (a gap here is RECKLESS)
d. … whether … finite verb ...       [finite, no gap]
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
Materials
a. The outspoken environmentalist worked to investigate what the
local campaign to preserve the important habitats had actually
harmed in the area that the birds once used as a place for resting
while flying south. [infinitive, gap]
b. …whether the local campaign to preserve… [infinitive, no gap]
c. …what the local campaign that preserved… [finite, gap]
d. …whether the local campaign that preserved … [finite, no gap]
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
[Figure: residual reading time (ms) by region, infinitive conditions (gap vs. no gap), critical verb marked. Infinitive: What did the plan to remove … ultimately destroy (RISKY)]
[Figure: residual reading time (ms) by region, finite conditions (gap vs. no gap), critical verb marked. Finite: What did the plan that removed … ultimately destroy (RECKLESS)]
(Phillips & Wong 2000)
Grammatical Accuracy in Parsing
Conclusion
• Structure-building is extremely grammatically accurate,
even when the word-order of a language is not cooperative
• Constraints on movement are violated in exactly the
environments where the grammar allows the violation to be
forgiven (may help to explain discrepancies in past studies)
• Such accuracy is required if grammatical computation is to
be understood as real-time on-line computation
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Derivational Theory of Complexity
• ‘The psychological plausibility of a
transformational model of the language user
would be strengthened, of course, if it could be
shown that our performance on tasks requiring an
appreciation of the structure of transformed
sentences is some function of the nature, number
and complexity of the grammatical
transformations involved.’ (Miller & Chomsky
1963: p. 481)
Miller (1962)
1. Mary hit Mark.             K(ernel)
2. Mary did not hit Mark.     N
3. Mark was hit by Mary.      P
4. Did Mary hit Mark?         Q
5. Mark was not hit by Mary.  NP
6. Didn’t Mary hit Mark?      NQ
7. Was Mark hit by Mary?      PQ
8. Wasn’t Mark hit by Mary?   PNQ
Miller (1962)
[Figure: the Transformational Cube, with the eight sentence types at its corners]
Derivational Theory of Complexity
• Miller & McKean (1964): matching sentences with the same meaning or ‘kernel’
• Joe warned the old woman. / The old woman was warned by Joe.     (K - P)   1.65s
• Joe warned the old woman. / Joe didn’t warn the old woman.       (K - N)   1.40s
• Joe warned the old woman. / The old woman wasn’t warned by Joe.  (K - PN)  3.12s
McMahon (1963)
a. i. seven precedes thirteen             K (true)
   ii. thirteen precedes seven            K (false)
b. i. thirteen is preceded by seven       P (true)
   ii. seven is preceded by thirteen      P (false)
c. i. thirteen does not precede seven     N (true)
   ii. seven does not precede thirteen    N (false)
d. i. seven is not preceded by thirteen   PN (true)
   ii. thirteen is not preceded by seven  PN (false)
Easy Transformations
• Passive
– The first shot the tired soldier the mosquito bit fired missed.
– The first shot fired by the tired soldier bitten by the mosquito
missed.
• Heavy NP Shift
– I gave a complete set of the annotated works of H.H. Munro to
Felix.
– I gave to Felix a complete set of the annotated works of H.H.
Munro.
• Full Passives
– Fido was kissed (by Tom).
• Adjectives
– The {red house/house which is red} is on fire.
Failure of DTC?
• Any DTC-like prediction is contingent on a
particular theory of grammar, which may be
wrong
• It’s not surprising that transformations are not the only contributor to perceptual complexity
– memory demands (which may increase or decrease complexity)
– ambiguity, where the grammar does not help
– difficulty of access
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Garden Paths & Temporary
Ambiguity
– The horse raced past the barn fell.
– Weapons test scores a hit.
– John gave the man the dog bit a sandwich.
• Grammar can account for the existence of global
ambiguities (e.g. ‘Visiting relatives can be
boring’), but not local ambiguities … since the
grammar does not typically assemble structure
incrementally
Garden Paths & Temporary
Ambiguity
• Ambiguity originally studied as test of solution to
the incrementality problem
• Heuristics & Strategies (e.g. Bever, 1970)
– NP V => subject verb
– V NP => verb object
– V NP NP => verb object object
• Garden paths used as evidence for effects of
heuristics
Garden Paths & Temporary
Ambiguity
• Heuristics & Strategies
– NP V => subject verb
The horse raced past the barn fell
– V NP => verb object
The student knew the answer was wrong
– V NP NP => verb object object
John gave the man the dog bit a sandwich
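Bever-style strategies can be sketched as template matching over category sequences (toy POS tags of my own, illustrative only); the garden path falls out when a template consumes material that belongs to another analysis.

```python
# Perceptual strategies as templates over POS-tag sequences, matched
# greedily left to right. Longer templates are tried first.
HEURISTICS = [
    (("V", "NP", "NP"), "verb object object"),
    (("NP", "V"), "subject verb"),
    (("V", "NP"), "verb object"),
]

def apply_heuristics(tags):
    """Greedily map a POS-tag sequence onto grammatical relations."""
    out, i = [], 0
    while i < len(tags):
        for pattern, relation in HEURISTICS:
            if tuple(tags[i:i + len(pattern)]) == pattern:
                out.append(relation)
                i += len(pattern)
                break
        else:
            i += 1   # no template fits; skip this tag
    return out

# "The horse raced past the barn fell", crudely tagged NP V PP V:
print(apply_heuristics(["NP", "V", "PP", "V"]))  # ['subject verb'] - the final verb is stranded
```

The strategy commits "the horse raced" to a subject-verb analysis, leaving the final verb unattached, which is exactly the garden-path effect the heuristic account was built to explain.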
Ambiguity Resolution
• Observation: ‘heuristics’ miss a generalization
about how ambiguities are preferentially resolved
• Kimball (1973): Seven principles of surface
structure parsing (e.g. Right Association)
• Frazier (1978), Fodor & Frazier (1978): Minimal
Attachment, Late Closure
• Various others, much controversy...
Ambiguity Resolution
• Assumptions
– grammatical parses are accessed (unclear how)
– simplest analysis of ambiguity chosen (uncontroversial)
– structural complexity affects simplicity (partly
controversial)
– structural complexity determines simplicity (most
controversial)
Ambiguity Resolution
• Relevance to architecture of language
– Comprehension-specific heuristics which compensate
for inadequacy of grammar imply independent system
– Comprehension-specific notions of structural
complexity compatible with independent system
• If grammar says nothing about ambiguity, and
structural complexity is irrelevant to ambiguity
resolution, as some argue, then ambiguity is
irrelevant to question of parser-grammar relations.
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Parsing ≠ Production
• Parsing generates meaning from form
• Production generates form from meaning
• Different ‘bottlenecks’ in the two areas
– garden paths in comprehension
– word-category constraint in production errors
– etc., etc.
• Lexical access: speaking and recognizing words differ,
but do we assume that this reflects different systems?
• Contemporary production theories are now incremental
structure-building systems, more similar to comprehension
models
Arguments for Architecture
1. Available grammars don’t make good parsing devices
2. Grammaticality ≠ Parsability
3. Failure of DTC
4. Evidence for parser-specific structure
5. Parsing/production have distinct properties
6. Possibility of independent damage to parsing/production
7. Competence/performance distinction is necessary, right?
Competence & Performance
• Different kinds of formal systems: ‘Competence systems’
and ‘Performance systems’
• The difference between what a system can generate given
unbounded resources, and what it can generate given
bounded resources
• The difference between a cognitive system and its behavior
Competence & Performance
(1) It’s impossible to deny the distinction between cognitive
states and actions, the distinction between knowledge and
its deployment.
(2) How to distinguish ungrammatical-but-comprehensible
examples (e.g. John speaks fluently English) from hard-to-parse examples.
(3) How to distinguish garden-path sentences (e.g. The horse
raced past the barn fell) from ungrammatical sentences.
(4) How to distinguish complexity overload sentences (e.g.
The cat the dog the rat chased saw fled) from
ungrammatical sentences.
Competence & Performance
“It is straightforward enough to show that sentence parsing
and grammaticality judgments are different. There are
sentences which are easy to parse but ungrammatical (e.g.
that-trace effects), and there are sentences which are
extremely difficult to parse, but which may be judged
grammatical given appropriate time for reflection (e.g.
multiply center embedded sentences). This classic argument
shows that parsing and grammar are not identical, but it tells
us very little about just how much they have in common.”
(Phillips, 1995)
This argument is spurious!
Summary
• Motivation for combining learning theories with
theories of adult knowledge is well-understood;
much more evidence needed.
• Theories of comprehension and production long
thought to be independent of ‘competence’
models. In fact, combination of these is quite
feasible; if true, possible to investigate linguistic
knowledge in real time.