Transcript Week 10
Philosophy 4610
Philosophy of Mind
Week 9: AI in the Real World
The Chinese Room
► In the Chinese Room, there is a rule book for manipulating symbols and an operator who does not understand any Chinese.
► The Room produces perfectly good Chinese answers and could pass a Turing Test conducted in Chinese.
► But nothing in the room actually understands Chinese.
The Chinese Room
► According to Searle, in the Chinese Room there is intelligent-seeming behavior but no actual intelligence or understanding. There is syntax (rules for the manipulation of meaningless signs), but the semantics, or meaning, of the signs is missing. This shows, Searle argues, that rule-governed behavior is not enough to give real understanding or thinking. (A toy version of such a rule book is sketched below.)
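To make the picture concrete, here is a minimal sketch in Python of a purely syntactic “rule book.” It is not from Searle; the squiggle/squoggle entries are invented placeholders standing in for Chinese sentences. The point is that the program matches the shapes of symbols and copies out replies without any access to what the symbols mean.

    # A toy "rule book": pure symbol manipulation, no semantics.
    # The entries are invented placeholders for Chinese sentences.
    RULE_BOOK = {
        "squiggle-squiggle": "squoggle-squoggle",
        "squoggle-squoggle": "squiggle-squiggle",
    }

    def operator(symbols: str) -> str:
        """Match the input's shape and copy out the listed reply.
        Nothing here represents what any symbol means."""
        return RULE_BOOK.get(symbols, "squoggle-squoggle")  # default reply

    print(operator("squiggle-squiggle"))  # -> squoggle-squoggle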
The Chinese Room: The “Systems” Reply
► Even if there is no single element in the Chinese Room that understands Chinese, perhaps the understanding of Chinese really is in the whole system itself.
► What are the criteria for “really understanding,” as opposed to just seeming to understand? What role (if any) does experience, consciousness, or self-awareness play? How might we test for these qualities?
The Loebner Prize
► Every year, philanthropist Hugh Loebner sponsors a “real-life” Turing Test.
► He offers $100,000 to any computer program that can successfully convince a panel of judges that it is “more human” than at least one human subject.
► Every year, $2,000 is offered for the program judged “most human.”
The Loebner Prize
► Which of the transcripts seemed “most human”? Which did not seem “human” at all? Why?
The Loebner Prize: Things to Look For
► Ambiguity. Many words in English have multiple meanings. For example: “He put a check on the board” (here ‘check’ can mean either a monetary instrument or a mark).
► ‘Canned’ responses. Many of the responses that a computer might give seem “automatic” or inappropriate to the situation. (How can you tell? A sketch of why such replies feel canned follows below.)
► Jokes and puns. It is difficult for computers to understand jokes or puns that depend on the difference between literal and metaphorical meaning. (Why?)
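As an illustration, here is a minimal keyword-matching chatbot sketch in Python, loosely in the style of Weizenbaum’s ELIZA; the keyword table is invented for this example. Because the program keys only on the surface shape of words, it gives the same stock reply to any sentence containing the keyword, which is what makes responses feel canned and makes ambiguous words like ‘check’ trip it up.

    import re

    # Invented keyword -> stock-reply table (ELIZA-style, illustrative only).
    CANNED = [
        (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
        (re.compile(r"\bcheck\b", re.I), "Money matters can be stressful."),
    ]

    def reply(utterance: str) -> str:
        """Return the first stock reply whose keyword appears in the input."""
        for pattern, stock in CANNED:
            if pattern.search(utterance):
                return stock
        return "Please go on."  # fallback when no keyword matches

    # "He put a check on the board" is about a mark, not money,
    # but the matcher sees only the word shape and picks the money sense.
    print(reply("He put a check on the board"))  # -> Money matters can be stressful.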
Dreyfus and What Computers Can’t Do
► Like his colleague Searle, Dreyfus thinks that it will be much harder than many have assumed to build a real thinking machine.
► He argues that it is much more difficult than it seems to “program in” the ordinary, practical intelligence of a kind we exhibit constantly in everyday life.
Artificial Intelligence: Two Approaches
► The “frame” approach (Minsky): To get a computer to exhibit actual intelligence, we just have to program it with an appreciation of the “frame” or context of ordinary human situations.
► The “script” approach (Schank): To get a computer to exhibit intelligence, we just need to represent a “script” or plan for handling ordinary situations (sitting in a chair, ordering at a restaurant, cooking an egg, etc.). A rough sketch of both representations follows below.
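One way to see the contrast is as two data structures. The Python sketch below is a loose illustration, not Minsky’s or Schank’s actual formalism: a frame is a bundle of slots with default values for a situation type, while a script is an ordered sequence of expected steps. All names and field values are invented for the example.

    # Loose, invented illustration of the two representations.

    # A Minsky-style "frame": slots with default values for a situation type.
    restaurant_frame = {
        "type": "restaurant",
        "contains": ["tables", "menu", "server"],
        "default_goal": "eat a meal",
        "payment_expected": True,
    }

    # A Schank-style "script": the expected sequence of events.
    restaurant_script = [
        "enter", "wait to be seated", "read menu",
        "order", "eat", "ask for the check", "pay", "leave",
    ]

    def next_step(script, done):
        """Return the next expected action, given the steps completed so far."""
        return script[len(done)] if len(done) < len(script) else None

    print(next_step(restaurant_script, ["enter", "wait to be seated"]))  # -> read menu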
Scripts and Frames: Trying it Out
► Let’s try to “program” an AI system to handle some ordinary tasks.
► We’re allowed to specify any RULE that we want, provided that the rules are well-defined in terms of the information available to the system. (One hypothetical attempt is sketched below.)
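Here is one hypothetical first attempt at such rules in Python, for the task of deciding whether an object can be sat on. Every feature name and threshold is invented; the instructive part is that each clause is a patch for some counterexample, and Dreyfus’s point is that the exceptions never run out.

    def can_sit(obj: dict) -> bool:
        """A first attempt at a 'sit on it' rule over the object's features.
        Each clause patches one counterexample, and more always remain
        (beanbag chairs flunk the height test, a hot car hood passes it...).
        """
        if not obj.get("supports_weight"):
            return False
        if not 0.3 < obj.get("seat_height_m", 0.0) < 0.7:  # beanbags fail here
            return False
        if obj.get("surface_temp") == "hot":               # stovetop exception
            return False
        if obj.get("moving"):                              # conveyor-belt exception
            return False
        return True

    print(can_sit({"supports_weight": True, "seat_height_m": 0.45}))  # -> True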
Dreyfus: How Do You Sit in a Chair?
► “Anyone in our culture understands such things as how to sit on kitchen chairs, swivel chairs, folding chairs, and in arm chairs, rocking chairs, deck chairs, barbers’ chairs, sedan chairs, dentists’ chairs, basket chairs, reclining chairs, wheel chairs, sling chairs, and beanbag chairs – as well as how to get off/out of them again. …” (p. 163)
Dreyfus: The Assumption of Traditional AI
► There is a great deal of knowledge that we rely on every day and use in a wide variety of situations that is not explicit.
► Traditional AI research assumes that this knowledge is all representable – that it can be programmed into a computer by inputting a finite set of rules.
► But Dreyfus argues that there is no reason to think that this knowledge must be representable in this way.
Real-World AI: Summary
► Classical AI research, following Turing, assumes that it’s possible to get a computer to be intelligent by programming it with some finite set of rules.
► But passing a Turing Test – or even being able to function in everyday situations – requires a vast amount of knowledge that is not generally explicit.
► Is it possible to represent this knowledge at all? If it is not representable, then how do we acquire it? Might an artificial system or robot be able to acquire it as we do, even if it cannot be ‘programmed in’ explicitly?