Artificial Intelligence
I believe that in about fifty years' time it will be
possible to programme computers, with a storage
capacity of about 10⁹ [bits], to make them play the
imitation game so well that an average interrogator
will not have more than 70 per cent chance of
making the right identification after five minutes of
questioning… [So] at the end of the century the use
of words and general educated opinion will have
altered so much that one will be able to speak of
machines thinking without expecting to be
contradicted.
-Turing, “Computing Machinery and Intelligence”
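(Two arithmetic notes on the prediction: 10⁹ bits is roughly 125 megabytes of storage, and the 70 per cent figure means the interrogator does only modestly better than the 50 per cent that blind guessing would give.) The prediction also fixes a testable protocol, which the following minimal Python sketch makes explicit; the interrogator here is a hypothetical stand-in function, not anything from Turing's paper.

    import random

    FIVE_MINUTES = 300    # seconds of questioning per session
    PASS_BOUND = 0.70     # Turing: at most a 70 per cent chance of a right guess

    def run_imitation_game(interrogate, sessions=1000):
        """Run repeated sessions; interrogate(time_limit) returns True
        when the interrogator correctly picks out the machine."""
        correct = sum(interrogate(FIVE_MINUTES) for _ in range(sessions))
        rate = correct / sessions
        # The machine plays "so well" if identification stays at or
        # below Turing's bound.
        return rate, rate <= PASS_BOUND

    # Hypothetical stand-in: an interrogator reduced to guessing is
    # right about half the time, well under the 70 per cent bound.
    chance_interrogator = lambda time_limit: random.random() < 0.5

    rate, passed = run_imitation_game(chance_interrogator)
    print(f"identification rate {rate:.2f}; machine passes: {passed}")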
Suppose also that after a while I get so good
at following the instructions for manipulating
the Chinese symbols and the programmers
get so good at writing the programs that
from the external point of view -- that is, from
the point of view of somebody outside the
room in which I am locked -- my answers to
the questions are absolutely indistinguishable
from those of native Chinese speakers.
Nobody just looking at my answers can tell
that I don't speak a word of Chinese.
It seems to me quite obvious in the example
that I do not understand a word of the Chinese
stories. I have inputs and outputs that are
indistinguishable from those of the native
Chinese speaker, and I can have any formal
program you like, but I still understand nothing.
For the same reasons, [a] computer
understands nothing of any stories, whether in
Chinese, English, or whatever, since in the
Chinese case the computer is me, and in cases
where the computer is not me, the computer
has nothing more than I have in the case where
I understand nothing.
-Searle, “Minds, Brains, and Programs”
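Searle's room is formal symbol manipulation and nothing else: shapes in, shapes out, by rule. A minimal sketch of that architecture, with a hypothetical toy rulebook standing in for the instruction books of the thought experiment:

    # Hypothetical toy rulebook: input symbol strings are matched by
    # shape and mapped to output symbol strings. The operator needs
    # no idea what either side means.
    RULEBOOK = {
        "你好吗": "我很好",        # "How are you?" -> "I am fine"
        "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
    }

    def operator(symbols: str) -> str:
        """Follow the English instructions: find the matching squiggle,
        copy out the corresponding squoggle. No translation occurs."""
        return RULEBOOK.get(symbols, "请再说一遍")  # "Please repeat that"

    # The answers can look fluent while nothing in the room understands.
    print(operator("你好吗"))   # -> 我很好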
Suppose we design a program that… simulates the actual
sequence of neuron firings at the synapses of the brain of a
native Chinese speaker when he understands stories in
Chinese and gives answers to them. The machine takes in
Chinese stories and questions about them as input, it
simulates the formal structure of actual Chinese brains in
processing these stories, and it gives out Chinese answers
as outputs. We can even imagine that the machine
operates, not with a single serial program, but with a whole
set of programs operating in parallel, in the manner that
actual human brains presumably operate when they
process natural language. Now surely in such a case we
would have to say that the machine understood the
stories; and if we refuse to say that, wouldn't we also have
to deny that native Chinese speakers understood the
stories? At the level of the synapses, what would or could
be different about the program of the computer and the
program of the Chinese brain?"
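The brain simulator reply, in other words, swaps conversation-level rules for a program whose states track neuron firings, updated in parallel. A toy sketch of that kind of simulation follows; the network size, weights, and threshold are illustrative placeholders, not anything specified in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000                    # toy network; a human brain has ~10^11 neurons
    weights = rng.normal(0.0, 0.1, (N, N))   # synaptic strengths (placeholder)
    THRESHOLD = 1.0

    def step(potentials, spikes):
        """One parallel update: every neuron integrates its synaptic
        input at once, fires on crossing threshold, then resets."""
        potentials = 0.9 * potentials + weights @ spikes   # leak + input
        fired = potentials >= THRESHOLD
        potentials[fired] = 0.0                            # reset after firing
        return potentials, fired.astype(float)

    potentials = np.zeros(N)
    spikes = (rng.random(N) < 0.05).astype(float)          # seed activity
    for _ in range(100):
        potentials, spikes = step(potentials, spikes)
    print(f"{int(spikes.sum())} neurons firing after 100 steps")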
It is an odd reply for any partisan of artificial
intelligence (or functionalism, etc.) to make: I
thought the whole idea of strong AI is that we
don't need to know how the brain works to
know how the mind works. The basic hypothesis,
or so I had supposed, was that there is a level of
mental operations consisting of computational
processes over formal elements that constitute
the essence of the mental and can be realized
in all sorts of different brain processes, in the
same way that any computer program can be
realized in different computer hardwares: on the
assumptions of strong AI, the mind is to the brain
as the program is to the hardware, and thus we
can understand the mind without doing
neurophysiology.
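The strong-AI thesis Searle is glossing is multiple realizability: one formal program, many possible substrates, identical behavior. A sketch of that claim under toy assumptions: the same transition table "run" by two different mechanisms.

    # One formal program: a transition table for a toy state machine.
    PROGRAM = {("idle", "ping"): ("busy", "pong"),
               ("busy", "ping"): ("idle", "ack")}

    def run_on_substrate_a(state, inputs):
        # "Hardware" A realizes the program by dictionary lookup.
        out = []
        for symbol in inputs:
            state, emitted = PROGRAM[(state, symbol)]
            out.append(emitted)
        return out

    def run_on_substrate_b(state, inputs):
        # "Hardware" B realizes the very same program as branching code.
        out = []
        for symbol in inputs:
            if state == "idle" and symbol == "ping":
                state, emitted = "busy", "pong"
            else:
                state, emitted = "idle", "ack"
            out.append(emitted)
        return out

    # Identical behavior on both substrates. For strong AI that is the
    # whole story; Searle's argument is that it cannot be.
    assert (run_on_substrate_a("idle", ["ping", "ping"])
            == run_on_substrate_b("idle", ["ping", "ping"]))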
Suppose we put a computer inside a robot, and
this computer would not just take in formal
symbols as input and give out formal symbols as
output, but rather would actually operate the
robot in such a way that the robot does
something very much like perceiving, walking,
moving about, hammering nails, eating, drinking
-- anything you like. The robot would, for
example, have a television camera attached to
it that enabled it to 'see,' it would have arms
and legs that enabled it to 'act,' and all of this
would be controlled by its computer 'brain.'
Such a robot would… have genuine
understanding and other mental states.
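Architecturally, the robot reply wraps the same symbol-processing computer in a sense-act loop. A minimal sketch, with hypothetical sensor and actuator stubs:

    def camera():
        """Hypothetical sensor stub: emits a symbolic percept."""
        return "nail_ahead"

    def motors(command):
        """Hypothetical actuator stub: carries out a symbolic command."""
        print(f"executing: {command}")

    # The controller remains pure symbol manipulation: percept symbols
    # in, action symbols out, by a fixed table.
    POLICY = {"nail_ahead": "swing_hammer", "clear": "walk_forward"}

    def control_loop(steps=3):
        for _ in range(steps):
            percept = camera()                      # the robot "sees"
            action = POLICY.get(percept, "wait")    # formal rule lookup
            motors(action)                          # the robot "acts"

    control_loop()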
[T]he addition of such "perceptual" and "motor"
capacities adds nothing by way of
understanding… To see this, notice that the
same thought experiment applies to the robot
case. Suppose that instead of the computer
inside the robot, you put me inside the room
and, as in the original Chinese case, you give
me more Chinese symbols with more instructions
in English… Suppose, unknown to me, some of
the Chinese symbols that come to me come
from a television camera attached to the robot
and other Chinese symbols that I am giving out
serve to make the motors inside the robot move
the robot's legs or arms… I [still] don't know
what's going on.
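Searle's rejoinder has a structural reading: the operator's function is exactly the same; only what feeds its inputs and consumes its outputs has changed. A sketch (all names hypothetical):

    def operator(symbols):
        """The unchanged rule-follower from the original room: shape
        in, shape out, with no knowledge of what the shapes denote."""
        rules = {"percept_A": "motor_cmd_1", "question_B": "answer_B"}
        return rules.get(symbols, "default")

    # Original room: symbols arrive from, and return to, a questioner.
    answer = operator("question_B")

    # Robot case: unknown to the operator, this input symbol came from
    # the television camera, and the output will drive the robot's
    # motors. The function itself is untouched, which is Searle's point.
    motor_command = operator("percept_A")
    print(answer, motor_command)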
While it is true that the individual person
who is locked in the room does not
understand the story, the fact is that he is
merely part of a whole system, and the
system does understand the story. The
person has a large ledger in front of him in
which are written the rules, he has a lot of
scratch paper and pencils for doing
calculations, he has 'data banks' of sets of
Chinese symbols. Now, understanding is not
being ascribed to the mere individual;
rather it is being ascribed to this whole
system of which he is a part.
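On this reply the unit of ascription is the composite: person plus ledger plus scratch paper plus data banks. A sketch of that composite, with hypothetical contents:

    class ChineseRoomSystem:
        """The systems reply's candidate understander: the ensemble,
        of which the rule-following person is only one part."""

        def __init__(self):
            self.ledger = {"你好": "你好"}   # the written rules ("hello" -> "hello")
            self.scratch_paper = []          # workspace for calculations
            self.data_banks = {"你好"}       # stocks of Chinese symbols

        def person(self, symbols):
            # The individual: blindly applies whatever the ledger says.
            self.scratch_paper.append(symbols)
            return self.ledger.get(symbols, "?")

        def respond(self, symbols):
            # On this reply, understanding is ascribed here, to the
            # whole system's behavior, not to person() taken alone.
            return self.person(symbols)

    system = ChineseRoomSystem()
    print(system.respond("你好"))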
[T]he systems theory… seems to me so
implausible to start with. The idea is that
while a person doesn't understand Chinese,
somehow the conjunction of that person
and bits of paper might understand
Chinese. It is not easy for me to imagine
how someone who was not in the grip of
an ideology would find the idea at all
plausible.