The Chinese Room Argument
Doing Philosophy
• Philosophical theories are not primarily about facts. Therefore, there is no right or wrong.
• Philosophical arguments are well-argued opinions.
• A philosophy course such as this concerns both facts and opinions, e.g. What is functionalism (fact)? What is the problem of multiple realization (fact)? Is functionalism a good theory of the mind (opinion)? Is materialism a better theory than dualism (opinion)?
Doing Philosophy in this Course
• Ask questions
– in class
– on the course blog
• Think for yourself.
• Justify your opinions with good logical arguments; you can also appeal to scientific evidence and personal experience.
Tutorials
There are four tutorial groups:
All groups meet in the Philosophy Department, Room MB 305
Group 1: Thurs. 2:00
Sept. 20, Oct. 4, Nov. 1, Nov. 15
Group 2: Tues. 2:00
Sept. 25, Oct. 23, Nov. 6, Nov. 20
Group 3: Tues. 3:00
Sept. 25, Oct. 23, Nov. 6, Nov. 20
Group 4: Tues. 1:00
Oct. 2, Oct. 30, Nov. 13, Nov. 27
Please sign up today in the break. Otherwise, send me an email.
Functionalism
Things are defined by their functions.
Two ways to define function:
1) Function = inputs and outputs (machine functionalism)
e.g. a mathematical function such as +, -, x, /
2 x 3 = 6: when the inputs are 2 and 3, the output is 6
Multiple realizability: the same function can be realized in different materials or through different processes
Functionalism defined as inputs and outputs continued
e.g. beliefs, desires
“I am thirsty” (i.e. I desire water) is defined in terms of inputs and outputs. When there are inputs x and y, there is output z:
Inputs:
(x) Water is available
(y) There is no reason not to drink the water
Output:
(z) I drink water
2) Function = use (teleological functionalism)
Function is defined by what something does.
e.g. a heart pumps blood.
e.g. a belief plays a role in reasoning: as a premise in a practical syllogism
Premise 1: I believe x is water
Premise 2: I desire water
Premise 3: There is no reason not to drink x
Conclusion: I drink x
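To see the input-output picture at work, here is a minimal sketch (my illustration, not from the lecture) that encodes the thirst example as a pure input-output rule:

```python
def thirsty_agent(water_available: bool, no_reason_not_to_drink: bool) -> str:
    # "I am thirsty" (the desire for water) defined purely by its
    # functional role: given inputs x and y, produce output z.
    if water_available and no_reason_not_to_drink:  # inputs (x) and (y)
        return "drink water"                        # output (z)
    return "do nothing"

# Running the practical syllogism as a function call:
print(thirsty_agent(True, True))  # -> drink water
```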
Whether you interpret function as an input-output relation (machine functionalism) or as use (teleological functionalism), mental states such as thirst are multiply realizable.
A computer can conduct multiplication.
An alien can have thirst, pain, etc.
A computer can have thirst, pain, etc.
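“A computer can conduct multiplication” is the easiest case to check. A minimal sketch (my example) of two different realizations of the same input-output function, which machine functionalism counts as one and the same function:

```python
def multiply_hardware(a: int, b: int) -> int:
    return a * b  # realized by the CPU's built-in multiplier

def multiply_by_addition(a: int, b: int) -> int:
    # The same function realized through a different process:
    # repeated addition instead of a hardware multiply.
    total = 0
    for _ in range(abs(b)):
        total += a
    return total if b >= 0 else -total

# Identical inputs always yield identical outputs, so by the
# input-output definition these are the very same function.
assert multiply_hardware(2, 3) == multiply_by_addition(2, 3) == 6
```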
Functional definition of mind
• If x acts like a mind, it is a mind.
• If, given the same inputs as a mind, x produces similar outputs, then x is a mind.
• If a computer can converse (take part in
linguistic input and output exchanges/play the
role of an intelligent conversational partner) just
like a person, the computer is as intelligent as a
person. It has a mind.
The Chinese Room Argument
Background
Thought Experiments
• Instead of scientific experiments, philosophers
have thought experiments
• Thought experiments are conducted in the
imagination
• They test concepts for consistency and contradiction, often using intuitions to make judgments.
The Turing Test
In 1950, the mathematician Alan Turing wanted to provide a practical test to answer “Can a machine think?”
His solution -- the Turing Test:
If a machine can conduct a conversation so well that
people cannot tell whether they are talking with a person
or with a computer, then the computer can think. It
passes the Turing Test.
In other words, he proposed a functional solution to the question “Can a machine think?”
There are many modern attempts to produce computer
programs that pass the Turing Test.
In fact, in 1991 Dr. Hugh Loebner started the annual
Loebner Prize competition, with prize money offered to
the author of the computer program that performs the
best on a Turing Test.
The winner of the Loebner Prize in 2004 was a program called ALICE.
You can try her (and other talkbots) out on this website:
http://cogsci.ucsd.edu/~asaygin/tt/ttest.html#talktothem
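Entrants in such competitions work largely by pattern matching on the user's words. A toy sketch in that spirit (my illustration; ALICE itself uses thousands of hand-written rules):

```python
import re

# Canned pattern-response rules: fluent-looking output, but nothing
# anywhere in the program that knows what the words mean.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you a (computer|machine)\b", re.I),
     "Would it matter to you if I were a {0}?"),
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How are you today?"),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(reply("I feel thirsty"))  # -> Why do you feel thirsty?
```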
Searle’s Chinese Room Argument
John Searle
• Famous philosopher at the
University of California, Berkeley
• Best known for his work in philosophy of language, philosophy of mind and consciousness studies
• Wrote “Minds, Brains and
Programs” in 1980, which
described the “Chinese Room
Argument”
Searle’s Chinese Room Argument
• The Chinese Room argument is one kind of objection to
functionalism, specifically to the Turing Test
• Also an attack on “strong AI”
• Searle makes a distinction between strong AI and weak AI
• Strong AI: “the appropriately programmed computer
really is a mind, in the sense that computers, given the
right programs can be literally said to understand”
• Weak AI: Computers can simulate thinking and help us to
learn about how humans think
• Searle objects only to strong AI.
The Chinese Room
Searle cannot understand any Chinese.
He is in a room with input and output windows, and a list of
rules about manipulating Chinese characters.
The characters are all “squiggles and squoggles” to him.
Chinese scripts and questions come in from the input
window.
Following the rules, he manipulates the characters and
produces a reply, which he pushes through the output
window.
The Chinese answers that Searle produces are very good.
In fact, so good, no one can tell that he is not a native
Chinese speaker!
Searle’s Chinese Room passes the Turing Test. In other
words, it functions like an intelligent person.
Searle has performed only symbol manipulation, with no understanding, yet he passes the Turing Test.
Therefore, passing the Turing Test does not ensure
understanding.
In other words, although Searle’s Chinese Room functions
like a mind, it is not a mind, and therefore functionalism is
wrong.
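At its crudest, the room is nothing but a symbol-to-symbol lookup. A deliberately simple sketch (my illustration; the rulebook entries are invented):

```python
# The rulebook maps input squiggles to output squoggles. No step
# anywhere requires knowing what the characters mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会！",    # "Do you speak Chinese?" -> "Of course!"
}

def chinese_room(input_symbols: str) -> str:
    # Searle's job: match the incoming shapes, copy out the reply.
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你好吗？"))  # fluent output, zero understanding
```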
Syntax vs. semantics
Searle argued that computers can never understand
because computer programs are purely syntactical with
no semantics.
Syntax: the rules for symbol manipulation, e.g. grammar
Semantics: understanding what the symbols (e.g. words)
mean
Syntax without semantics: The bliggedly blogs browl
aborigously.
Semantics without syntax: Milk want now me.
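Syntax without semantics can even be produced mechanically. A toy sketch (my example) of a tiny grammar that generates well-formed nonsense like the sentence above:

```python
import random

# Every output is syntactically well-formed English; the invented
# words guarantee that none of it means anything.
GRAMMAR = {
    "S":   [["DET", "ADJ", "N", "V", "ADV"]],
    "DET": [["the"]],
    "ADJ": [["bliggedly"], ["cromulent"]],
    "N":   [["blogs"], ["florps"]],
    "V":   [["browl"], ["quange"]],
    "ADV": [["aborigously"], ["greebly"]],
}

def generate(symbol: str = "S") -> str:
    if symbol not in GRAMMAR:  # terminal: an actual (nonsense) word
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in production)

print(generate())  # e.g. "the bliggedly blogs browl aborigously"
```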
• Searle concludes that symbol manipulation
alone can never produce understanding.
• Computer programming is only symbol
manipulation.
• Computer programming can never produce
understanding.
• Strong AI is false and functionalism is wrong.
What could produce real understanding?
Searle: “it is a biological phenomenon” and “only
something with the same causal powers as
brains can have [understanding]”.
Objections
The Systems Reply
Searle is part of a larger system. Searle doesn’t understand
Chinese, but the whole system (Searle + room + rules)
does understand Chinese.
The knowledge of Chinese is in the rules contained in the
room.
The ability to implement that knowledge is in Searle.
The whole system understands Chinese.
Searle’s Response to the Systems Reply
1) It’s absurd to say that the room and the rules can provide understanding.
2) What if I memorized all the rules and internalized the whole system? Then there would just be me, and I still wouldn’t understand Chinese.
Counter-response to Searle’s response
If Searle could internalize the rules, part of his brain would understand Chinese. Searle’s brain would house two personalities: English-speaking Searle and the Chinese-speaking system.
The Robot Reply
What if the whole system was put inside a robot? Then the system would interact with the world. That would create understanding.
Searle inside the robot
Searle’s response to the Robot Reply
1) The robot reply admits that there is more to
understanding than mere symbol manipulation.
2) The robot reply still doesn’t work. Imagine that I am in
the head of the robot. I have no contact with the
perceptions or actions of the robot. I still only manipulate
symbols. I still have no understanding.
Counter-response to Searle’s response
Combine the robot reply with the systems reply. The robot as
a whole understands Chinese, even though Searle
doesn’t.
The Complexity Reply
• Really a type of systems reply.
• Searle’s thought experiment is deceptive. A room, a man
with no understanding of Chinese and “a few slips of
paper” can pass for a native Chinese speaker.
• It would be incredibly difficult to simulate a Chinese
speaker’s conversation. You need to program in
knowledge of the world, an individual personality with
simulated life history to draw on, and the ability to be
creative and flexible in conversation. Basically you need to
be able to simulate the complexity of an adult human brain,
which is composed of billions of neurons and trillions of
connections between neurons.
Complexity changes everything.
Our intuitions about what a complex
system can do are highly unreliable.
Tiny ants with tiny brains can
produce complex ant colonies.
Computers that at the most basic level are just binary
switches that flip from 1 to 0 can play chess and beat the
world’s best human player.
If you didn’t know it could be done, you would not believe it.
Maybe symbol manipulation of sufficient complexity can
create semantics, i.e. can produce understanding.
Conclusion
1) The Turing Test:
Searle is probably right about the Turing Test.
Simulating a human-like conversation probably does
not guarantee real human-like understanding.
Certainly, it appears that simulating conversation to
some degree does not require a similar degree of
understanding. Programs like ALICE presumably
have no understanding at all.
2) Functionalism
Functionalists can respond that the functionalist identification of the room/computer with a mind is carried out at the wrong level.
The computer as a whole is a thinking machine, like a brain
is a thinking machine. But the computer’s mental states
may not be equivalent to the brain’s mental states.
If the computer is organized as a really long list of
questions with canned answers, the computer does not
have mental states such as belief or desire.
But if the computer is organized like a human mind, with
concepts, complex organization and homuncular
modules, the computer can have beliefs, desires, etc.
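A schematic sketch of the two organizations just contrasted (my illustration; the names and states are invented):

```python
# Organization 1: a long list of canned question-answer pairs.
# Nothing here is a belief or a desire; it is pure lookup.
CANNED = {"Are you thirsty?": "Yes, very!"}

def canned_answer(question: str) -> str:
    return CANNED.get(question, "I don't know.")

# Organization 2: answers produced from standing inner states
# (beliefs, desires), the way a mind is organized.
class Agent:
    def __init__(self) -> None:
        self.beliefs = {"water_is_near": True}
        self.desires = {"water": True}

    def answer(self, question: str) -> str:
        if question == "Are you thirsty?":
            # The reply is driven by an inner state, not a stored string.
            return "Yes, very!" if self.desires["water"] else "No, thanks."
        return "I don't know."

# Same outward behavior, very different internal organization:
print(canned_answer("Are you thirsty?"))   # -> Yes, very!
print(Agent().answer("Are you thirsty?"))  # -> Yes, very!
```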
3) Strong AI:
Could an appropriately programmed computer have real
understanding? Too early to say. I am not convinced by
Searle’s argument that it is impossible.
The right kind of programming with the right sort of
complexity may yield true understanding.
e.g. homuncular modularity, mixing of levels, self-updating
4) Syntax vs. Semantics
• How can semantics (meaning) come out of symbol
manipulation? How can 1s and 0s result in real meaning?
It’s mysterious. But then how can the firing of neurons
result in real meaning? Also mysterious.
• One possible reply: meaning is use (Wittgenstein). Semantics is syntax put to use in the world.
5) Qualia
Qualia = raw feels = phenomenal experience = what it is like to be something
Can a computer have qualia? Again, it is hard to
understand how silicon and metal can have feelings. But
it is no easier to understand how meat can have feelings.
If a computer could talk intelligently and convincingly
about its feelings, we would probably ascribe feelings to
it. But would we be right?
6) Searle’s claim: understanding can only occur in biological systems with the same causal properties as the brain:
There is no basis for this hypothesis. It is unclear what special causal properties the brain is meant to have. I doubt that Searle is right about this.
Readings for next week
• Sterelny, Kim, The Representational Theory of Mind,
Section 1.3, pgs. 11-17
(on reserve in Philosophy Dept.)
• Sterelny, Kim, The Representational Theory of Mind,
Section 3.1-3.4, pgs. 42-49 (on reserve in Philosophy
Dept.)
More optional readings
On the Chinese Room:
• Searle, John R. (1990), “Is the Brain's Mind a Computer Program?” in Scientific American, 262, pgs. 20-25 (in main library)
• Churchland, Paul, and Patricia Smith Churchland (1990), “Could a machine think?” in Scientific American, 262, pgs. 26-31 (in main library)
On modularity of mind:
• Fodor, Jerry A. (1983), The Modularity of Mind, pgs. 1-21 at: http://ruccs.rutgers.edu/forums/seminar3_spring05/Fodor_1983.pdf
• Pinker, Steven (1999), “How the Mind Works”, William James Book Prize Lecture at: www3.hku.hk/philodep/joelau/wiki/pmwiki.php?n=Main.Pinker-HowTheMindWorks