Concepts, Words, and Concepts
Lexicalizing and Combining
Paul M. Pietroski
University of Maryland
Dept. of Linguistics, Dept. of Philosophy
Talk: One Slide Version
• Lexical Meanings are what
Composition Operations need them to be
– LMs are combinable via COs
– LMs exhibit types that COs can operate on
• if Composition Operations are rabidly conjunctive,
as in neo-Davidsonian semantic theories, then
lexicalization has to be creative
in ways that are otherwise unexpected
• familiar (and otherwise puzzling) facts suggest
that lexicalization is creative in these ways
• so perhaps lexicalization is at the heart of what is
uniquely human about our semantic capacities
Large Background Question
What makes humans linguistically special?
(i) Lexicalization: capacity to acquire words
(ii) Combination: capacity to combine words
(iii) Lexicalization and Combination
(iv) Something else entirely:
e.g., distinctive representations
that are simply paired with signals
Outline
• background assumptions: Chomskyan
• specific proposal (and caveats): neo-medieval
• Fregean reminder: an invented language
can be used to analyze (“recarve”) prior
thoughts in cognitively useful ways
• suggestion: acquiring a natural human language
is cognitively useful, though in different ways
• evidence that in lexicalization, prior concepts
are creatively linked to monadic analogs
(as required by neo-Davidsonian composition)
Composition Constrains Lexicalization
• Semantic Composition (in Natural Human Languages)
each expression-meaning is determined somehow
by the constituent words and their arrangement
• Immediate Questions
what do the words contribute?
what does their arrangement contribute?
• Cognitive Science Project à la Marr
describe the/a mapping from arrangements of words
to the corresponding expression-meanings;
and say how this mapping is computed in terms of
representations and operations that expressions can invoke
• Semantic Composition: Corollary
word meanings are combinable via the (implemented/invokable)
operations that correspond to arrangements of words
in a naturally acquirable human language
Big Background Assumptions
• natural human languages are I-Languages in Chomsky’s sense
• a lexical meaning (I-meaning) can be described as an
instruction to fetch a concept of some sort
• a phrasal meaning can be described as an instruction to
combine concepts in a certain way
• ‘I’ is for ‘Intensional’ (Procedural, Algorithmic):
expressions as pairs of instructions,
in contrast with ‘Extensional’ (sets of I/O pairs)
• ‘I’ is also for ‘Implemented’ and ‘Invokable’
(in accord with certain constraints)
SEM: fetch and combine mental representations
?PHON: make and combine articulatory gestures?
• this leaves room for Externalism about concepts/truth
Composition Constrains Lexicalization
• Semantic Composition
lexical meanings are combinable via I-Operations
--implemented and invoked by I-Languages
--on display in Logical Forms determined by LFs
Fido chase Felix
via saturation:
CHASE(FIDO, FELIX)   or   CHASE( , FIDO, FELIX)
via conjunction:
AGENT( , FIDO) & CHASE( ) & PATIENT( , FELIX)
via closure:
CHASE( ) &
for some _: AGENT( , _) & FIDO(_) &
for some _: PATIENT( , _) & FELIX(_)
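A minimal executable gloss of the contrast on this slide, in Haskell (my addition; the toy domain, the single event index, and all extensions are hypothetical): the saturation route applies a dyadic concept to Fido and Felix, while the conjunction-plus-closure route conjoins a monadic event concept with thematic conjuncts and then existentially closes the event variable.

-- Toy sketch (hypothetical domain): two routes to "Fido chase Felix".
type Entity = String
type Event  = Int

events :: [Event]
events = [1]                          -- one chase event, by stipulation

chaseE :: Event -> Bool               -- monadic CHASE(_) over events
chaseE e = e == 1

agent, patient :: Event -> Entity -> Bool
agent   e x = (e, x) == (1, "Fido")   -- AGENT(_, FIDO)
patient e y = (e, y) == (1, "Felix")  -- PATIENT(_, FELIX)

-- saturation route: a dyadic CHASE(x, y), here defined via the event analysis
chaseXY :: Entity -> Entity -> Bool
chaseXY x y = any (\e -> chaseE e && agent e x && patient e y) events

-- conjunction + closure route:
-- for some e: CHASE(e) & AGENT(e, Fido) & PATIENT(e, Felix)
fidoChaseFelix :: Bool
fidoChaseFelix = any (\e -> chaseE e && agent e "Fido" && patient e "Felix") events

main :: IO ()
main = print (chaseXY "Fido" "Felix", fidoChaseFelix)   -- (True,True)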
Composition Constrains Lexicalization
• Semantic Composition
lexical meanings are combinable via I-Operations
--implemented and invoked by I-Languages
--on display in Logical Forms determined by LFs
• Lexicalization Conforms to Composition
if I-operations only operate on meanings of certain types,
then lexical meanings are meanings of those types,
even if the concepts (representations) lexicalized are not
• Lexicalization Provides what Composition Needs
if lexical meanings are instructions to
fetch concepts that can be combined via I-operations,
then lexicalization may have to be a little creative
Foreshadowing: Two Kinds of Creativity
• Introduce a language that invokes general and powerful operations
(like Function-Application /λ-abstraction)
– lexicalize many concepts “directly”
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), λy.λx.KICK(x,y)>
or indirectly, by “shifting up”
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), λΨ.λΦ.Φ{λx.Ψ[λy.KICK(x,y)]}>
– reanalyze many Subject-Predicate thoughts in polyadic terms
Number(3) ≡df ANCESTRAL[Predecessor(x, y)]<0, 0’’’>
• Introduce a language that invokes simple and restrictive operations
(like Predicate-Conjunction/Monadicization)
– when lexicalizing nonmonadic concepts, make monadic analogs
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), KICK(e)>
KICK(x,y) ≡df for some e, KICK(e,x,y)
KICK(e,x,y) ≡df AGENT(e, x) & KICK(e) & PATIENT(e, y)
– use this language to construct “neo-medieval” thoughts with many
monadic constituents and just a few dyadic/thematic constituents
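For concreteness, a typed sketch of the two lexical formats (my addition; the names and the toy extension are hypothetical): the “direct” and “shifted up” entries keep the dyadic KICK(x,y), while the monadic analog is just a predicate of events.

-- Toy sketch of the two lexical formats for 'kick' (hypothetical extension).
type E  = String        -- entities
type T  = Bool          -- truth values
type Ev = Int           -- events

kickXY :: E -> E -> T                      -- prelexical dyadic KICK(x, y)
kickXY x y = (x, y) == ("Brutus", "Caesar")

kickDirect :: E -> E -> T                  -- <PF('kick'), λy.λx.KICK(x,y)>
kickDirect y x = kickXY x y

-- <PF('kick'), λΨ.λΦ.Φ{λx.Ψ[λy.KICK(x,y)]}>: takes two quantifier arguments
kickShifted :: ((E -> T) -> T) -> ((E -> T) -> T) -> T
kickShifted psi phi = phi (\x -> psi (\y -> kickXY x y))

kickE :: Ev -> T                           -- monadic analog <PF('kick'), KICK(e)>
kickE e = e == 1

main :: IO ()
main = print (kickShifted (\p -> p "Caesar") (\p -> p "Brutus"))   -- True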
Compositionality is an Explanandum
• we can invent languages in which for any expressions E1 and E2
Meaning(E1^E2) = SATURATE[Meaning(E1), Meaning(E2)]
Meaning (E1^E2) = CONJOIN[Meaning(E1), Meaning(E2)]
Meaning (E1^E2) = DISJOIN[Meaning(E1), Meaning(E2)]
Meaning (E1^E2) = …E1…E2…if…, and otherwise__E1__E2__
• perhaps in human languages, E1 and E2 differ in meaning only if
(a) they differ with regard to the arrangement of their atomic parts,
or (b) at least one atomic part of E1 differs in meaning
from the corresponding atomic part of E2
• but a language could meet this weak condition so long as each mode
of combining expressions indicates some operation on meanings,
even if none of the operations are naturally computable for humans,
and none of the atomic meanings are results of human lexicalization
• If compositionality is a “supervenience” thesis (see Szabo), we want to
know which I-operations realize compositionality in human languages
Natural Composition is Constrained
• Natural Semantic Composition is Implemented
– in kids
– with innate circuitry that had to be evolved
• Natural Semantic Composition is Systematic
– colorful human words combine easily
– even if animal concepts are less promiscuous
• Natural Semantic Composition is Fast
– novel expressions are often understood “on line”
– as if words were associated with concepts that
can be systematically combined, on demand,
via simple operations implemented with innate circuitry
Foreshadowing: Two Kinds of Composition
• Function-Application / Saturation
– allows for “direct” lexicalization of many concepts
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), λy.λx.KICK(x,y)>
– is it implemented/invokable as a recursive I-operation?
• Predicate-Conjunction / Modification
– requires “reformatting” for any prelexical nonmonadic concepts
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), KICK(e)>
– is it implemented/invokable as a recursive I-operation?
Specific Proposal
• a verb meaning is an instruction to fetch a monadic
concept of “things” (events, states, processes, …) that
can have “participants” (agents, patients, instruments…)
• Instance of a more general claim:
a lexical meaning is an instruction to fetch
a monadic concept of (i) things with participants, or
(ii) participants of such things
• Consequence of (neo-Davidsonian) composition:
a phrasal meaning is an instruction to conjoin monadic
concepts corresponding to the constituents
[chaseV [a [brown ratN]]]
CHASE(_) & PATIENT(_, ∃: BROWN[_] & RAT[_])
Specific Proposal
• [chaseV [a [brown ratN]]]
CHASE(_) & PATIENT(_, ∃: BROWN[_] & RAT[_])
• chaseV CHASE(_) & TENSABLE(_)
• [cutV [to [the chaseN]]]
• chaseN CHASE(_) & INDEXABLE(_)
• kickV CaesarN
• [a [swift kickN]]
• kick   KICK(_)
• V   TENSABLE(_)
• N   INDEXABLE(_)
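A runnable rendering of the proposed conjunctive meaning (my addition; the entity names and extensions, including a hypothetical rat called “Remy”, are stipulated): the object phrase contributes a thematic conjunct whose second slot is existentially restricted by the conjoined nominal concepts.

-- Toy sketch: CHASE(_) & PATIENT(_, ∃: BROWN[_] & RAT[_])  (hypothetical model)
type Entity = String
type Event  = Int

entities :: [Entity]
entities = ["Remy", "Felix"]

brown, rat :: Entity -> Bool
brown x = x == "Remy"
rat   x = x == "Remy"

chase :: Event -> Bool
chase e = e == 1

patient :: Event -> Entity -> Bool
patient e x = (e, x) == (1, "Remy")

chaseABrownRat :: Event -> Bool
chaseABrownRat e = chase e && any (\x -> patient e x && brown x && rat x) entities

main :: IO ()
main = print (chaseABrownRat 1)      -- True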
Closely Related Questions
• Which concepts can be fetched with words?
Which adicities are exhibited by the “fetchable” concepts?
– Singular, Valence +1
– Monadic, Valence -1
– Dyadic, Valence -2
– Triadic, Valence -3
– ???
• Which operations get invoked to combine them?
– Saturation of an n-adic Concept (Adicity Reduction)
– Conjunction of Monadic Concepts: -1 & -1 → -1
– ???
Another Question
• Even supposing that this is a coherent and
defensible conception of adult competence…
• What does a lexicalizer do if
(i) composition principles imply that chaseV is
an instruction to fetch a monadic concept,
but (ii) the only good candidate concept for
lexicalization is CHASE(x, y)
or some other polyadic concept
Another Question
What does a lexicalizer do if
(i) composition principles imply that kickV is
an instruction to fetch a monadic concept,
but (ii) the only good candidate concept for
lexicalization is KICK(x, y)
or some other polyadic concept
(of type <e, <e, t>> or higher)
Another Question
What does a lexicalizer do if
(i) composition principles imply that giveV is
an instruction to fetch a monadic concept,
but (ii) the only good candidate concept for
lexicalization is GIVE(x, y, z)
or some other polyadic concept
(of type <e, <e, <e, t>>> or higher)
Another Question
What does a lexicalizer do if
(i) composition principles imply that CaesarN is
an instruction to fetch a monadic concept,
but (ii) the only good candidate concept for
lexicalization is a singular concept
(a mental label, of type <e>) like CAESAR
A Possible Mind
KICK(x,y)   a prelexical concept
KICK(x,y) ≡df for some e, KICK(e,x,y)
AGENT(e,x), PATIENT(e,y)   generic “action” concepts
KICK(e,x,y) ≡df AGENT(e, x) & KICK(e) & PATIENT(e, y)
c, PF:caesar   mental labels for a person and a sound
Called(c, PF:caesar)   a thought about what the person is called
Called(y, PF:caesar) ≡df CAESARED(y)
‘kick’ is used to fetch the (invented) concept KICK(e)
‘caesar’ is used to fetch the (invented) concept CAESARED(y)
‘that Caesar’ is used to construct the complex concept
CONTEXTUALLY-INDICATED(y) & CAESARED(y)
‘kick that Caesar’ is used to construct the complex concept
KICK(e) & for some y, PATIENT(e, y) &
CONTEXTUALLY-INDICATED(y) & CAESARED(y)
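A small executable version of this possible mind (my addition; all extensions are stipulated): CAESARED(_) and KICK(e) are the invented monadic analogs, and ‘kick that Caesar’ is an instruction to conjoin them with thematic and indexical conjuncts, existentially closing the participant.

-- Toy model of 'kick that Caesar' as a conjunction of monadic concepts.
type Entity = String
type Event  = Int

entities :: [Entity]
entities = ["Brutus", "Caesar"]

kickE :: Event -> Bool                    -- invented KICK(e)
kickE e = e == 1

caesared :: Entity -> Bool                -- CAESARED(y), via Called(y, PF:caesar)
caesared y = y == "Caesar"

contextuallyIndicated :: Entity -> Bool   -- supplied by context, by stipulation
contextuallyIndicated y = y == "Caesar"

patient :: Event -> Entity -> Bool
patient e y = (e, y) == (1, "Caesar")

-- KICK(e) & for some y: PATIENT(e, y) & CONTEXTUALLY-INDICATED(y) & CAESARED(y)
kickThatCaesar :: Event -> Bool
kickThatCaesar e =
  kickE e && any (\y -> patient e y && contextuallyIndicated y && caesared y) entities

main :: IO ()
main = print (kickThatCaesar 1)          -- True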
First-Pass Hypothesis
about Human Languages/Children
chaseV fetches CHASE(_)
eatV fetches EAT(_)
donateV fetches DONATE(_)
giveV fetches GIVE(_)
rainV fetches RAIN(_)
surroundV fetches SURROUND(_)
CaesarN fetches CAESARED(_)
even if the concept lexicalized is not monadic
because (i) semantic composition principles
dictate that (open class) lexical items
are instructions to fetch monadic concepts,
and (ii) lexicalizers can and do invent monadic analogs
of any nonmonadic concepts they lexicalize
Alternative Hypotheses (for comparison)
chaseV fetches CHASE(x, y), the concept lexicalized
CaesarN fetches CAESAR, the concept lexicalized
[chaseV CaesarN] constructs CHASE(x, CAESAR)
CaesarN fetches λX.X(CAESAR)
chaseV fetches λΨ.λΦ.Φ{λx.Ψ[λy.CHASE(x,y)]}
[chaseV CaesarN] constructs λΦ.Φ{λx.CHASE(x,CAESAR)}
everyD fetches λY.λX.INCLUDES({x:X(x)}, {x:Y(x)})
dogN fetches λx.Dog(x)
[everyD dogN] constructs λX.INCLUDES({x:X(x)}, {x:Dog(x)})
[chaseV [everyD dogN]] constructs
λΦ.Φ{λx.for every dog y, CHASE(x, y)}
because (i) combining lexical items is an instruction to saturate,
and (ii) lexicalizers can and do reanalyze chaseV and CaesarN
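For comparison, a typed sketch of this saturation-based alternative (my addition; the domain and extensions are hypothetical): everyD denotes a relation between sets, dogN a predicate, and the shifted chaseV composes with the object quantifier by function application.

-- Toy sketch of the generalized-quantifier (saturation) alternative.
type E = String
type T = Bool

domain :: [E]
domain = ["Fido", "Rex", "Felix"]

dog :: E -> T
dog x = x `elem` ["Fido", "Rex"]

chaseXY :: E -> E -> T                    -- CHASE(x, y): x chases y
chaseXY x y = x == "Felix" && dog y

everyD :: (E -> T) -> (E -> T) -> T       -- λY.λX.INCLUDES({x:X(x)}, {x:Y(x)})
everyD restrictor scope = all (\x -> not (restrictor x) || scope x) domain

everyDog :: (E -> T) -> T                 -- [everyD dogN], type <<e,t>,t>
everyDog = everyD dog

chaseShifted :: ((E -> T) -> T) -> ((E -> T) -> T) -> T
chaseShifted psi phi = phi (\x -> psi (\y -> chaseXY x y))

-- [chaseV [everyD dogN]], saturated by a lifted subject 'Felix'
felixChasesEveryDog :: T
felixChasesEveryDog = chaseShifted everyDog (\p -> p "Felix")

main :: IO ()
main = print felixChasesEveryDog          -- True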
First-Pass Hypothesis
about Human Languages/Children
chaseV fetches CHASE(_)
CaesarN fetches CAESARED(_)
even if the concept lexicalized is not monadic
because (i) semantic composition principles
dictate that (open class) lexical items
are instructions to fetch monadic concepts,
and (ii) lexicalizers can and do invent monadic analogs
of any nonmonadic concepts they lexicalize
Caveat: Polysemy
To a first approximation, book fetches BOOK(_)
To a second approximation, book fetches one of
-abstractBOOK(_), +abstractBOOK(_)
To a third approximation, book fetches one of
+/-abstract1BOOK(_), +/-abstract2BOOK(_), …
To a fourth approximation, bookN fetches one of
…BOOK(_) and conjoins it with INDEXABLE(_)
Caveat: Subcategorization
• Not saying that a verb meaning is merely an instruction to fetch a
(tense-friendly) monadic concept of things that can have participants
• Distinguish:
Semantic Composition Adicity Number (SCAN)
(instructions to fetch) singular concepts   +1   <e>
(instructions to fetch) monadic concepts    -1   <e, t>
(instructions to fetch) dyadic concepts     -2   <e, <e, t>>
…
Property of Smallest Sentential Entourage (POSSE)
zero (indexable) terms, one term, two terms, …
• Hypothesis is that
– the SCAN of every verb/noun/adjective/adverb is -1
– but POSSE facts vary:
zero, one, two, …
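One way to picture the SCAN/POSSE distinction (my addition; the record type and the particular POSSE values are illustrative assumptions): SCAN is a fact about the semantic type of the fetched concept, POSSE a separately represented demand on a verb’s smallest sentential entourage.

-- Hypothetical lexical-entry record separating SCAN from POSSE.
data Entry = Entry
  { form  :: String      -- PF of the item
  , scan  :: Int         -- Semantic Composition Adicity Number of the fetched concept
  , posse :: Int         -- Property of Smallest Sentential Entourage
  } deriving Show

-- On the hypothesis at issue, every open-class item has SCAN -1,
-- while POSSE varies; the values below are illustrative guesses only.
kickEntry, putEntry, rainEntry :: Entry
kickEntry = Entry { form = "kick", scan = -1, posse = 1 }   -- 'The baby kicked'
putEntry  = Entry { form = "put",  scan = -1, posse = 3 }   -- *'Brutus put the ball'
rainEntry = Entry { form = "rain", scan = -1, posse = 0 }   -- 'It rained'

main :: IO ()
main = mapM_ print [kickEntry, putEntry, rainEntry]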
Caveats
• POSSE facts may reflect, among other things (e.g. statistical experience),
adicities of concepts lexicalized with verbs, as opposed to
adicities of concepts fetched with verbs
the verb putV may have a (lexically represented) POSSE of three
in part because putV lexicalizes PUT(x, y, z)
• Not saying that every concept lexicalized is monadic
arriveV may lexicalize ARRIVE(x)
eatV may lexicalize EAT(x, y) and/or EAT(x)
chaseV may lexicalize CHASE(x, y)
giveV may lexicalize GIVE(x, y, z)
sellV may lexicalize SELL(x, y, z, w)
rainV may lexicalize RAIN(X)
surroundV may lexicalize god knows what
CaesarN may (initially) lexicalize JULIUS
Terminology
• I-Operations: composition operations that
are invoked by I-languages
• I-Concepts: concepts that are
combinable via I-operations
Human infants may have, and adults may retain,
many concepts that are not I-concepts
Humans may acquire many I-concepts
by lexicalizing prior concepts
that we share with other animals
prelexical concepts
+ words →
prelexical concepts and I-concepts
Historical Remark
When Frege invented the modern logic
that semanticists now take as given,
his aim was to recast the Dedekind-Peano axioms
(which were formulated with “subject-predicate”
sentences, like ‘Every number has a successor’)
in a new format,
by using a new language that allowed for
“fruitful definitions” and “transparent derivations”
Frege’s invented language (Begriffsschrift)
was a tool for abstracting formally new concepts,
not just a tool for signifying existing concepts
But Frege…
• wanted a fully general Logic
for (Ideal Scientific) Polyadic Thought
• treated monadicity as a special case of relationality:
relations objects bear to truth values
• often recast predicates like ‘number’ in higher-order
relational terms, as in ‘thing to which zero bears the
(identity-or-)ancestral-of-the-predecessor-relation relation’
Number(x) iff {ANCESTRAL[Predecessor(y, z)]}<0, x>
• allowed for abstraction by having composition signify
function-application, without constraints on atomic types;
so his Begriffsschrift respects only a weak
(and arguably unhuman) compositionality constraint
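A small executable rendering of the ancestral construction (my addition; the bounded domain and the iteration strategy are simplifications): the weak ancestral of a relation R holds of <a, b> when b is reachable from a by zero or more R-steps, so Number(x) can be recast as the ancestral of Predecessor holding of <0, x>.

-- Toy rendering of Frege's ancestral over a finite domain (simplified).
domain :: [Integer]
domain = [0 .. 20]

predecessorOf :: Integer -> Integer -> Bool   -- Predecessor(y, z): y immediately precedes z
predecessorOf y z = z == y + 1

-- weak (identity-or-)ancestral of a relation, by bounded iteration
ancestral :: (Integer -> Integer -> Bool) -> Integer -> Integer -> Bool
ancestral r a b = go (length domain) a
  where
    go 0 x = x == b
    go n x = x == b || any (go (n - 1)) [y | y <- domain, r x y]

-- Number(x) iff {ANCESTRAL[Predecessor(y, z)]}<0, x>
number :: Integer -> Bool
number = ancestral predecessorOf 0

main :: IO ()
main = print (number 3, number 17)        -- (True,True) within the toy domain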
By Contrast, my suggestion is that…
• I-Languages let us create concepts that formally efface
adicity distinctions already exhibited by the concepts we
lexicalize (and presumably share with other animals)
• the payoff lies in creating I-concepts that can be
combined quickly, via dumb but implementable
operations (like monadic concept-conjunction)
• FREGEAN ABSTRACTION:
use powerful operations to extract logically interesting
polyadic concepts from subject-predicate thoughts
• NEO-DAVIDSONIAN ABSTRACTION:
use simple operations to extract logically boring
monadic concepts from diverse animal thoughts
Related Topic (for another day):
Number Neutrality as Effacing Conceptual Distinctions
I chaseV those who chaseV me
They gaveV them the vase I gaveV you
Italy surroundsV any pope whose priests surround its towns
chaseV fetches CHASE(_)
giveV fetches GIVE(_)
surroundV fetches SURROUND(_)
fetchable concepts may be number-neutral
even if lexicalized concepts aren’t
because (i) semantic composition principles
may dictate that (open class) lexical items
are instructions to fetch #-neutral concepts,
and (ii) lexicalizers can/do invent #-neutral analogs of
any essentially numbered concepts they lexicalize
Lexicalization as Concept-Abstraction
[diagram: before lexicalization, a Concept of adicity n;
after, the Concept of adicity n is linked via a Perceptible Signal
to a Concept of adicity k]
Lexicalization as Monadic-Concept-Abstraction
KICK(_, _) → KICK(event, _, _) → KICK(event)
[diagram: before lexicalization, a Concept of adicity n;
after, the Concept of adicity n is linked via a Perceptible Signal
to a Concept of adicity -1]
Two Kinds of Facts to Accommodate
• Flexibilities
– Brutus kicked Caesar
– Caesar was kicked
– The baby kicked
– I get a kick out of you
– Brutus kicked Caesar the ball
• Inflexibilities
– Brutus put the ball on the table
– *Brutus put the ball
– *Brutus put on the table
Two Pictures of Lexicalization
(before) Concept of adicity n
Picture 1: Concept of adicity n + Perceptible Signal → Word: adicity (SCAN) n
further “flexibility” facts (as for ‘kick’)
Picture 2: Concept of adicity n → Concept of adicity -1 + Perceptible Signal → Word: adicity -1
further “posse” facts (as for ‘put’)
“Negative” Facts to Accommodate
Striking absence of certain (open-class) lexical meanings
that would be permitted
if I-Languages permit nonmonadic semantic types
<e,<e,<e,<e, t>>>> (instructions to fetch) tetradic concepts
<e,<e,<e, t>>> (instructions to fetch) triadic concepts
<e,<e, t>> (instructions to fetch) dyadic concepts
<e> (instructions to fetch) singular concepts
<<e, t>, <<e, t>, t>> (instructions to fetch)
second-order dyadic concepts
“Negative” Facts to Accommodate
Brutus sald a car Caesar a dollar
x sold y to z
(in exchange) for w
sald   SOLD(x, w, z, y)
[sald [a car]]   SOLD(x, w, z, a car)
[[sald [a car]] Caesar]   SOLD(x, w, Caesar, a car)
[[[sald [a car]] Caesar] a dollar]   SOLD(x, $, Caesar, a car)
_________________________________________________
Brutus tweens Caesar Antony
tweens   BETWEEN(x, z, y)
[tweens Caesar] BETWEEN(x, z, Caesar)
[[tweens Caesar] Antony] BETWEEN(x, Antony, Caesar)
“Negative” Facts to Accommodate
Alexander jimmed the lock a knife
jimmed   JIMMIED(x, z, y)
[jimmed [the lock]]   JIMMIED(x, z, the lock)
[[jimmed [the lock]] [a knife]]   JIMMIED(x, a knife, the lock)
_________________________________________________
Brutus froms Rome
froms   COMES-FROM(x, y)
[froms Rome] COMES-FROM(x, Rome)
“Negative” Facts to Accommodate
Brutus talls Caesar
talls   IS-TALLER-THAN(x, y)
[talls Caesar] IS-TALLER-THAN(x, Caesar)
_________________________________________________
*Julius Caesar
Julius JULIUS
Caesar CAESAR
*<e>^<e>
Recall: Two Kinds of Creativity
• Introduce a language that invokes general and powerful operations
(like Function-Application /λ-abstraction)
– lexicalize many concepts “directly”
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), λy.λx.KICK(x,y)>
– reanalyze many Subject-Predicate thoughts in polyadic terms
Number(3) ≡df ANCESTRAL[Predecessor(x, y)]<0, 0’’’>
• Introduce a language that invokes simple and restrictive operations
(like Predicate-Conjunction/Monadicization)
– when lexicalizing nonmonadic concepts, make monadic analogs
KICK(x,y) + PF(‘kick’) → <PF(‘kick’), KICK(e)>
– use this language to construct “neo-medieval” thoughts with many
monadic constituents and just a few dyadic/thematic constituents
Quantifiers also Present Negative Facts
‘Every boy who arrived’ can’t mean that every boy arrived
Every   INCLUDES(X, Y)
boy   {y: boy[y]}
[Every boy]   INCLUDES(X, {y: boy[y]})
who arrived   {x: arrived[x]}
[Every boy] [who arrived]   INCLUDES({x: arrived[x]}, {y: boy[y]})
________________________________________________
But why not, if Every is of type <<e,t>, <<e,t>, t>>, and
boy is of type <e, t>, and who arrived is of type <e,t>?
Maybe Every is not of type <<e,t>, <<e,t>, t>>.
Maybe the concept INCLUDES(X, Y) is lexicalized differently.
(see Events & Semantic Architecture for monadic analysis)
Quantifiers also Present Negative Facts
Equi   ONE-TO-ONE(X, Y)
[Equi boy]   ONE-TO-ONE(X, {y: boy[y]})
arrived   {x: arrived[x]}
[Equi boy] arrived   ONE-TO-ONE({x: arrived[x]}, {y: boy[y]})
no determiner fetches a “nonconservative”
second-order dyadic concept,
just as no verb fetches a “UTAH-violating”
first-order dyadic concept
quase   λy.λx.CHASE(y, x)
[quase Felix]   λx.CHASE(Felix, x)
Fido [quase Felix]   CHASE(Felix, Fido)
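A toy check of the conservativity point (my addition; the domain and the determiner meanings are stipulated): a determiner meaning D over restrictor A and scope B is conservative iff D(A, B) = D(A, A ∩ B); ‘every’ passes, while the hypothetical ‘Equi’ (a cardinality comparison) fails.

-- Toy conservativity check for determiner meanings (hypothetical domain).
import Data.List (intersect, subsequences)

type E = String

domain :: [E]
domain = ["a", "b", "c", "d"]

type Det = [E] -> [E] -> Bool      -- restrictor set, scope set

everyDet :: Det                    -- INCLUDES: every restrictor-thing is a scope-thing
everyDet restrictor scope = all (`elem` scope) restrictor

equiDet :: Det                     -- ONE-TO-ONE: equinumerous restrictor and scope
equiDet restrictor scope = length restrictor == length scope

conservative :: Det -> Bool        -- D(A, B) = D(A, A ∩ B) for all A, B
conservative d =
  and [ d a b == d a (a `intersect` b)
      | a <- subsequences domain, b <- subsequences domain ]

main :: IO ()
main = print (conservative everyDet, conservative equiDet)   -- (True,False)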
REMEMBER…
• There is little to no evidence of any
(open class) lexical items fetching
supradyadic I-concepts
(… SCAN -5, SCAN -4, or even SCAN -3)
• Brutus gave Caesar the ball
• Brutus kicked Caesar the ball
• Brutus gave/kicked the ball to Caesar
• Various (e.g., Larsonian) analyses of
ditransitive constructions, without
ditransitive verbs
REMEMBER…
• Not even English provides good evidence for
(open class) lexical nouns of type <e>.
On the contrary, it seems that singular concepts
are not lexicalized “straight” with simple tags of type <e>
• Every Tyler I saw was a philosopher
Every philosopher I saw was a Tyler
There were three Tylers at the party
That Tyler stayed late, and so did this one
Philosophers have wheels, and Tylers have stripes
The Tylers are coming to dinner
At noon, I saw Tyler Burge
I saw Tyler at noon
I saw Burge at noon
But…
If the basic mode of semantic composition is
conjunction of monadic concepts,
then we can start to explain the absence
of lexical meanings like…
SOLD(x, w, z, y)
BETWEEN(x, z, y)
JIMMIED(x, z, y)
COMES-FROM(x, y)
IS-TALLER-THAN(x, y)
TYLER
Alternative Hypotheses (for comparison)
chaseV fetches CHASE(x, y), the concept lexicalized
CaesarN fetches CAESAR, the concept lexicalized
[chaseV CaesarN] constructs CHASE(x, CAESAR)
CaesarN fetches λX.X(CAESAR)
chaseV fetches λΨ.λΦ.Φ{λx.Ψ[λy.CHASE(x,y)]}
[chaseV CaesarN] constructs λΦ.Φ{λx.CHASE(x,CAESAR)}
If we don’t lexicalize “straight,”
and if we reformat “monadically” as opposed to other ways,
how come?
General Acquisition Question
For any n…
if I-Languages permit lexical items of adicity n,
but we don’t lexicalize concepts of adicity n
straightforwardly with words of adicity (SCAN) n,
then theorists need to ask: How come?
Possible Answer:
I-languages don’t permit lexical items of adicity n
Specific Acquisition Questions
For each n such that n ≠ -1
if I-Languages permit lexical items of adicity n,
but we don’t lexicalize concepts of adicity n
straightforwardly with words of adicity (SCAN) n,
then theorists need to ask: How come?
Possible Answer:
I-languages don’t permit lexical items of any
adicity (SCAN) other than -1
Related Questions
Why do I-Languages have functional vocabulary?
prepositions, little ‘v’, …
And why are certain grammatical relations like
“dedicated prepositions” that invoke certain
thematic relations?
Possible Answer:
I-languages don’t permit (open class)
lexical items of any adicity (SCAN) other than -1
And relational concepts cannot be monadicized
without some “functional residue”
Recall Caveat: Subcategorization
• Not saying that a verb meaning is merely an instruction to fetch a
(tense-friendly) monadic concept of things that can have participants
• Distinguish:
Semantic Composition Adicity Number (SCAN)
(instructions to fetch) singular concepts   +1   <e>
(instructions to fetch) monadic concepts    -1   <e, t>
(instructions to fetch) dyadic concepts     -2   <e, <e, t>>
…
Property of Smallest Sentential Entourage (POSSE)
zero (indexable) terms, one term, two terms, …
• Hypothesis is that
– the SCAN of every verb/noun/adjective/adverb is -1
– but POSSE facts vary:
zero, one, two, …
Recall: Facts to Accommodate
• Flexibilities
– Brutus kicked Caesar
– Caesar was kicked
– The baby kicked
– I get a kick out of you
– Brutus kicked Caesar the ball
(suggests that ‘kick’ fetches a monadic concept
to which thematic conjuncts can be added)
• Inflexibilities
– Brutus put the ball on the table
– *Brutus put the ball
– *Brutus put on the table
(compatible with ‘put’ fetching a monadic concept
to which thematic conjuncts must be added)
Two Pictures of Lexicalization
(before) Concept of adicity n
Picture 1: Concept of adicity n + Perceptible Signal → Word: adicity n
further “flexibility” facts (as for ‘kick’)
Picture 2: Concept of adicity n → Concept of adicity -1 + Perceptible Signal → Word: adicity -1
further “posse” facts (as for ‘put’)
What Makes us Humans
Special Linguistically?
• Lexicalization and Combination
• Lexicalization, but not Combination
in which case, I-operations are implemented by our cousins
So maybe I-operations are simple and ancient, and
lexicalization lets us employ I-operations in new ways
• Combination, but not Lexicalization
in which case, our cousins can lexicalize like us
[diagram:
Infant: Modules (Vision, Audition, …) + Human Language Faculty in its Initial State + concepts
→ Experience and Growth, + Lexicalization →
Child: Modules (Vision, Audition, …) + Human Language Faculty in a Mature State
(LEXICON, COMBINATORICS, I-CONCEPTS) + concepts]
Talk: One Slide Version
• Lexical Meanings are what
Composition Operations need them to be
– LMs are combinable via COs
– LMs exhibit types that COs can operate on
• if Composition Operations are rabidly conjunctive,
as in neo-Davidsonian semantic theories, then
lexicalization has to be creative,
in ways that are otherwise unexpected
• familiar (and otherwise puzzling) facts suggest
that lexicalization is creative in these ways
• so perhaps lexicalization is at the heart of what is
uniquely human about our semantic capacities
THANKS
Ancient Question
prelexical concepts and I-Operations
+ words →
prelexical concepts and I-Operations and I-Concepts
EXTRA SLIDES
Marr on my Mind
• compatible with a multi-stage conception of how
semantic properties are determined/computed
• “elementary” semantic composition may deliver
only “primal sketches” of thoughts
[diagram: a word-string and a homophonous word-string* each receive SEMs;
SEM: Sketch-1α → Sketch-2α → Sketch-3α, and for the homophone
SEM: Sketch-1β, …, Sketch-1ω]
Question
What does a lexicalizer do if
(i) composition principles imply that
brown rat is an instruction
to conjoin monadic concepts,
but (ii) rat lexicalizes RAT(x),
a (categorial) concept of certain animals,
while brown lexicalizes
BROWN(s) or BROWN(s, x)
a (perhaps relational) concept
of certain surfaces
DM-ish Description
• chase√ + nullN → chaseN
theD + chaseN → [theD chaseN]D
alternatively…
• theD + chase√ → [theD chase√]D
[theD chase√]D = [theD chaseN]D
DM-ish Description
• chase√ + nullN → chaseN
theD + chaseN → [theD chaseN]D
• Caesar√ + nullN → CaesarN
that-1D + CaesarN → [that-1D CaesarN]D
• chase√ + nullV → chaseV
chaseV + [theD manN]D → [chaseV [theD manN]D]V
Combination Decomposed
theD + manN → [theD manN]D
chaseV + [theD manN]D → [chaseV [theD manN]D]V
CONCATENATE(theD, manN) → theD^manN
LABEL(theD^manN) → [theD manN]D
CONCATENATE(chaseV, [theD manN]D) → chaseV^[theD manN]D
LABEL(chaseV^[theD manN]D) → [chaseV [theD manN]D]V
Combination Decomposed
chase√ + nullN → chaseN
CONCATENATE(chase√, nullN) → chase√^nullN
LABEL(chase√^nullN) → [chase√ nullN]N
abbreviation: [chase√ nullN]N = chaseN
chase√ + nullV → chaseV
CONCATENATE(chase√, nullV) → chase√^nullV
LABEL(chase√^nullV) → [chase√ nullV]V
abbreviation: [chase√ nullV]V = chaseV
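A small data-structure sketch of this decomposition (my addition; the types and the labeling convention are assumptions, since the slides give only the operation names): CONCATENATE pairs two expressions, and LABEL projects a category onto the concatenation.

-- Toy syntax: combination as CONCATENATE followed by LABEL.
data Cat = N | V | D | Root deriving (Show, Eq)

data Expr
  = Leaf Cat String           -- a lexical item with its category
  | Concat Expr Expr          -- unlabeled concatenation: x^y
  | Labeled Cat Expr Expr     -- labeled combination: [x y]Cat
  deriving Show

concatenate :: Expr -> Expr -> Expr
concatenate = Concat

label :: Cat -> Expr -> Expr            -- project a category onto a concatenation
label c (Concat x y) = Labeled c x y
label _ e            = e                -- leaves and labeled nodes stay as they are

-- theD + manN → [theD manN]D;  chaseV + [theD manN]D → [chaseV [theD manN]D]V
theManDP, chaseTheManVP :: Expr
theManDP      = label D (concatenate (Leaf D "the") (Leaf N "man"))
chaseTheManVP = label V (concatenate (Leaf V "chase") theManDP)

main :: IO ()
main = print chaseTheManVP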
Meanings?
chase√   CHASE(_)
nullN   INDEXABLE(_)
nullV   TENSABLE(_)
(each an instruction to FETCH a monadic concept like CHASE(_))
[chase√ nullN]N   CHASE(_) • INDEXABLE(_)
[chase√ nullV]V   CHASE(_) • TENSABLE(_)
(each an instruction to FETCH and CONJOIN two monadic concepts,
forming a concept like CHASE(_) • TENSABLE(_))
Caesar√   CALLED(_, PF:Caesar)
[Caesar√ nullN]N   CALLED(_, PF:Caesar) • INDEXABLE(_)
that-1D   INDEXED(_, 1)
[that-1D [Caesar√ nullN]N]D   INDEXED(_, 1) • CALLED(_, PF:Caesar) • INDEXABLE(_)
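A minimal interpreter for these instructions (my addition; the concept type and extensions are stipulated): a lexical node is an instruction to FETCH a monadic concept, and a branching node an instruction to FETCH and CONJOIN the concepts its daughters deliver.

-- Toy interpreter: FETCH monadic concepts at leaves, CONJOIN them at branches.
type Thing   = Int                  -- events, individuals, ... (toy)
type Monadic = Thing -> Bool

data SEM = Fetch Monadic | Conjoin SEM SEM

execute :: SEM -> Monadic
execute (Fetch p)     = p
execute (Conjoin l r) = \x -> execute l x && execute r x

chaseC, tensable, indexable :: Monadic   -- hypothetical fetched concepts
chaseC    x = x == 1
tensable  _ = True
indexable _ = True

-- [chase√ nullV]V: CHASE(_) • TENSABLE(_)
chaseV :: Monadic
chaseV = execute (Conjoin (Fetch chaseC) (Fetch tensable))

main :: IO ()
main = print (map chaseV [0, 1, 2])      -- [False,True,False]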
Terminology
Meanings: compositional properties of
expressions of naturally acquired languages
Concepts: composable mental representations
Contents: mind&language-independent aspects
of the world that we think/talk about
Meanings (or if you prefer, “SEMs”) are
(i) properties of naturally acquired expressions
(ii) compositional in some specific way(s)
that theorists have to discover
Assumptions
• Theorists can focus on I-Languages in Chomsky’s sense
– Implemented Intensions, not Extensions (sets) of expressions
– procedures (algorithms) for generating expressions
– “steady states” of the human cognitive system that supports our
natural acquisition and use of expression-generating procedures
– think of expressions as pairs of instructions (PHON, SEM) to/from
“articulatory/perceptual” and “conceptual/intentional” systems
• The following (Fregean) idea is at least coherent
– the atomic meaningful expressions of an acquired language
can do more than merely associate prior concepts with signals
– acquiring words may be a process in which
formally new concepts are abstracted from prior concepts
CHASE(x, y) + chaseV → CHASE(e, x, y) → CHASE(e)
Bloom: “How Children Learn the Meanings of Words”
• word meanings are, at least mainly, concepts
that kids have prior to lexicalization
• learning word meanings is, at least mainly,
a process of figuring out which existing concepts
are paired with which word-sized signals
• in this process, kids draw on many capacities, including
those that support recognition of syntactic cues and speaker
intentions, but not capacities specific to learning word meanings
Lidz, Gleitman, and Gleitman
“Clearly, the number of noun phrases required for
the grammaticality of a verb in a sentence is a
function of the number of participants logically
implied by the verb meaning. It takes only one to
sneeze, and therefore sneeze is intransitive, but it
takes two for a kicking act (kicker and kickee),
and hence kick is transitive.
Of course there are quirks and provisos to these
systematic form-to-meaning-correspondences…”
Terminology
• Language: anything that associates signals of some kind
with interpretations of some kind
• (Human) I-Language: ‘I’ for ‘Intensional’, ‘Implemented’
a state of the language faculty that implements a child-acquirable
algorithm for associating signals with concepts in a human way
• (Human) I-Operation: operation invoked by I-languages
with regard to semantics, an invokable operation that permits
combination of “fetchable/constructable” concepts
• (Human) I-Concepts: concepts combinable via I-operations
Human infants may have, and adults may retain,
many concepts that are not I-concepts
Humans may acquire many I-concepts by lexicalizing
prior concepts that we share with other animals
• Brain as Computer metaphor:
for any algorithm/program executed, what computational
operations are invoked (and how are they implemented)?
• if conjunction (as opposed to saturation)
is the basic I-operation for semantic composition,
that will have consequences for lexicalization