A New Artificial Intelligence 4
Kevin Warwick
Philosophy of AI
• The philosophy behind AI has played a critical role in the subject’s development
• What does it mean for a machine to think?
• Can a machine be conscious?
• Can a machine fool you, in conversation, into thinking it is actually human?
• Is this important?
• In looking into the minds of machines we must ask fundamental questions about ourselves
Human-centric
• The philosophical study of artificial intelligence has been dogged by the desire to regard human intelligence as something special
• To show how computers can’t do some of the things that human brains do
• Therefore computers are somehow inferior
• This is understandable; after all, we are human and it is easy to fall into the trap of thinking that the human way of doing things is the best way
Objective Study
• It is difficult to be objective about something when you are immersed in it on a daily basis
• Ask any academic researcher whose research program is the most important – they will tell you it is theirs
• So we face a problem with human intelligence
• To get around this, external assessment is required
• We need knowledgeable sources who will give us an unbiased view
You are an Alien!
• We must compare in a scientific way – perhaps some aspects are more important to one person than they are to another
• To study the philosophy of artificial intelligence we need to start by carrying out an independent assessment of intelligence
• You need to forget that you are human and look at human intelligence from the outside
• You are an alien with no bias towards humans, and you must assess the intelligence of the entities that you observe on earth
Starting Point
• Let’s look at some of the misconceptions
and biases that can occur
• With artificial intelligence we are not
necessarily trying to copy (simulate) the
workings of the human brain
• This said, one interesting question is: could we copy/simulate the human brain with an artificially intelligent brain?
Randomness
• We all fall into simple traps when studying AI and the human brain
• Consider random behavior
• It might be said that computers think in a mechanistic, programmed way whereas humans can think randomly
• This is incorrect – all human thoughts arise from and in our brains, and are based on the genetic make-up of our brains and what we have learnt in our lives
• Whilst an act may appear random to an outside observer, this is simply because they do not understand the reasoning behind it. Anything you do or say will have been based on the signals in your brain
• As a test – do something random, say something at random – whatever your response, you will have made a conscious decision
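This point – that behavior which looks random to an observer can be fully determined by an underlying mechanism – is easy to demonstrate with a computer. A minimal sketch in Python: a pseudo-random number generator produces output that appears unpredictable, yet is completely fixed by its starting seed.

```python
import random

# Two generators started from the same seed. Their output "looks"
# random to an observer who does not know the mechanism, yet it is
# entirely determined by the seed.
gen_a = random.Random(42)
gen_b = random.Random(42)

sequence_a = [gen_a.randint(0, 99) for _ in range(5)]
sequence_b = [gen_b.randint(0, 99) for _ in range(5)]

# Identical seeds give identical "random" sequences: the apparent
# randomness is only the observer's ignorance of the process.
print(sequence_a == sequence_b)  # prints True
```

The analogy is loose, of course – a brain is not a seeded algorithm – but it illustrates how "appears random" and "is random" come apart.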
Penrose’s Pitfall I
• Roger Penrose said: “there is a great deal of randomness in the (human) brain’s wiring”
• This is simply not true
• A human brain is a complex network of highly connected brain cells; the connections have been made through biological growth, directed by our genetic make-up and learning
• Because something is complex and difficult to understand does not mean it is random
• If you do not understand what is going on, a telephone exchange can appear complex – but it does not act randomly, otherwise we would never be able to make a telephone call – we could be connected with absolutely anyone (at random)
Penrose’s Pitfall II
• Do you agree with these Penrose statements?
• “Genuine intelligence requires that genuine understanding must be present” – “intelligence requires understanding”
• “Actual understanding could not be achieved by any computer” – computers will never be able to understand
• “Computers will always remain subservient to us (humans), no matter how far they advance”
• This is all pure Hollywood, fantasy land!
Let’s be serious!
• When one cow moos to another they presumably have some concept of what is being uttered; they often seem to respond
• One cow appears to understand another cow. But do we humans understand them? Can we communicate with them?
• From this, Penrose’s logic follows: humans do not ‘genuinely understand’ cows etc., therefore we are not as intelligent as them
• As a result we will always be subservient to them – cows will rule the earth!!
• The argument is just plain silly – so, in the same way, is Penrose’s argument for computers always being subservient to humans
To the point
• Computers may well understand things in a different way to humans; animals understand things in a different way to humans; some humans probably understand some things in different ways to other humans
• This doesn’t make one intelligent and another not. It means that one is intelligent in a different way to another. It’s all subjective
• As for computers always being subservient – that is pure fiction. It may make someone feel nice to say it, but there is no logic to it at all. It’s a cocoa statement!
• When the Aztecs and the Native Americans were defeated by Europeans, the ‘better’, more intelligent culture lay with the home teams. The invaders brought technology that the home teams didn’t understand. Because something is not intelligent in the same way as we are does not mean it will always be subservient to us!
Weak AI
• The possibility that machines can act intelligently as a human does, or act as if they were as intelligent as a human, is referred to as Weak AI
• This stems from Minsky’s definition of AI: machines do things that appear to be intelligent acts
• This concept is, though, not accepted by some
Argument from disability
• Computers can now do many things better than humans do – things we feel require understanding, such as playing chess and mathematics
• The “argument from disability”, as Turing called it, is that some will say “a machine can never ….” Examples given by Turing are: “be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, etc”
• There is no reason that a computer could not do any of these
• Whether a computer does them in the same way as a human, and whether it ‘understands’, are quite different questions
• We can’t know whether another human ‘understands’ or ‘feels’ things in the same way that we do. The person may say, and think, that they understand – but do they? How can we be sure?
Strong AI
• The possibility that a machine can actually
think in the same way as a human, as
opposed simply to appearing to simulate
human thinking, is referred to as Strong
AI
• This would mean that it would be possible
to build a computer that completely
replicated the functioning of the human
brain in every aspect.
Brain in a Vat Experiment
• When you are born your brain is removed and placed in a vat
• It is kept alive and fed with suitable nutrients to allow it to grow and develop connections
• Signals are sent to the brain to feed it with a purely fictional world, and motor signals from the brain are sent to that world such that your brain is able to modify it and move around in it
• The world appears to be real
• In theory your brain, in this state, could have the same sort of feelings and emotions as a brain which has developed in a body in the usual way
Qualia
• If the two brains have been able to develop in identical ways then it all rests on the nature of the fictional world
• If it were absolutely identical to the real world then the brains would have no way to tell the difference, and they must have developed in the same way
• In practice, simulations are not the same as the real thing and there would be small discrepancies – referred to as qualia, intrinsic experiences
• A supporter of strong AI would feel that the differences are so slight as not to matter, but an opponent would feel that the differences are absolutely critical
Materialists v Spiritualists
• Some approach the subject from a
materialist viewpoint, assuming that there
are no spiritual aspects involved, there is no
such thing as the immortal soul, and that
“brains cause minds”.
• Some feel that no matter what physical
elements are involved, where the (human)
brain is concerned, there is something else
that cannot be measured and it is this that
is the important thing.
God
• From a scientific basis the “brains cause
minds” argument is the more obvious.
• It is also pointless to argue against
someone who says that no matter what
we experience or measure, there is
something else – possibly Godlike – at
work and it overrides all else.
Free will
• How can a mind, restricted by physical
constructs, achieve freedom of choice?
• A materialistic argument to this concludes
that free will is merely the decisions taken
by an individual – these are based on their
genetic make up, their experience and the
sensed environment.
• There is no mystical element at work!
Consciousness & a shoe
• Consciousness – hmmm. Consider the statements:
• What does it feel like to smell a rose?
• How can a computer possibly feel such a thing?
• Why does it feel like something to have brain states whereas it does not feel like anything to be a shoe?
• A conclusion is drawn (Searle) that a shoe cannot be conscious – therefore a computer cannot be conscious!
• These issues and questions are encountered regularly in texts on artificial intelligence
• Please be aware of them, but use what you have – your intelligence – to “think” about the arguments made
Human bias
• We know what it is like to be ourselves. We do not know what it is like to be a bat, a computer, another human, a cabbage or a shoe
• We should not presume we know what someone or something else is thinking
• The (previous) argument employs the human sense of smell
• A shoe does not appear to have a sense of smell, human or otherwise
• A human is compared with a shoe – the assumption is that a computer is similar to a shoe, and conclusions drawn regarding the consciousness of a shoe therefore also apply to a computer
• I have not yet witnessed a shoe that is similar to a computer
• Comparing a human with a shoe is akin to comparing a computer with a cabbage – can the cabbage deal with mathematics, can it communicate in English, can it control a jet aircraft?
• The same logic used in the consciousness argument for humans and shoes means that if a cabbage can’t do these things then neither can a human. Clearly these are ridiculous comparisons, but so too is comparing a human with a shoe
Strong & weak AI
• The possibility that a machine can act as if it were as intelligent as a human is referred to as Weak AI
• The possibility that a machine can think in exactly the same way as a human is referred to as Strong AI
• Both positions suffer from a human-centric comparison
• The starting point is that there is only one “okay” intelligence – human intelligence – to which all other forms of intelligence (including that of aliens, if they exist!) can only aspire
Modern AI
• We need an up to date viewpoint that is
representative of the computers, machines,
robots of today and encapsulates the
different forms of intelligence witnessed in
life
• A modern, open view of consciousness, understanding, self-awareness and free will is required for us to really come to terms with artificial intelligence
Alien landing
• Assume that an alien being lands on earth, having travelled billions of miles from its planet in order to do so
• It must have intellectual properties way beyond those of humans, as humans have not yet figured out how to travel so far and stay alive
• If the alien is of a completely different form to humans – maybe it is a machine – then would we say that it is not aware of itself because it is not like me, a human?
• Would we say it is not conscious because it does not think in exactly the same way as me?
• It’s doubtful that the alien would bother about our thoughts
• Yet the alien may well not meet our definition of Weak AI, never mind Strong AI – it’s not human!
What is needed?
• We need a viewpoint that is less anthropomorphic than classical AI
• We need to include distributed information processing, autonomy, embeddedness, sensory-motor coupling with the environment, forms of social interaction and more
• Humans exhibit such features, but so too do other animals and machines
• We need to incorporate psychological and cognitive characteristics, e.g. memory systems, without which it is unlikely that truly intelligent behavior can be observed
• We need to be open to the fact that any behavior that can be characterized in this way is truly intelligent, regardless of the nature of the being that generated it
Rational AI
• Rational AI means that any artifact fulfilling such a definition can act intelligently and think in its own right, in its own way
• Whether this is similar to the intelligence, thought, consciousness or self-awareness of a human is not important
• Weak AI and Strong AI still have meaning with regard to human intelligence
• Other creatures conforming to rational artificial intelligence are intelligent and think in their own way, dependent on their particular senses and how their brains are structured
• Artificial intelligence, in silicon or carbon forms, takes its place as one version of intelligence, different in appearance and characteristics from human and animal intelligence
• As humans are intelligent in different ways from each other, so artificial intelligence is diverse in terms of the different machines that are apparent
Comments
• Classical AI is human-centric
• Classical AI philosophy assumes human intelligence is superior to all else
• Strong & weak AI based on comparison
with human intelligence
• Need to respect all types of intelligence –
including machine intelligence
• Rational intelligence
Next
• Philosophy of AI II
Contact Information
• Web site: www.kevinwarwick.com
• Email: [email protected]
• Tel: (44)-1189-318210
• Fax: (44)-1189-318220
• Professor Kevin Warwick, Department of
Cybernetics, University of Reading,
Whiteknights, Reading, RG6 6AY,UK