Strong AI – can machines really think?


Ajay Garg – 05005004
Satadru Biswas – 05005021
Veeranna – 05005023
Praveen Lakhotia – 05D05010
Arun Karthikeyan – 05D05020





Turing Test – Satadru
Weak AI – Arun
Strong AI – Veeranna
AI Complete – Ajay
Ethics of AI – Praveen



Whenever we start something new, we begin by debating the need for it and its feasibility.
The same thing happened when AI was born in the early 1950s.
Philosophers debated very fundamental and important questions such as “Can machines think?” and “Is AI possible?”


Turing then rephrased the question “Can machines think?” into a test, which became famous as the Turing Test.
Several variants have been developed over the years.


Turing described a simple party game involving three players: player A is a man, player B is a woman, and player C is an interrogator.
The setup is such that player C cannot see either A or B and can communicate with them only through written messages.


By asking questions of player A and player B,
player C tries to determine which of the two is
the man, and which of the two is the woman
A's role is to trick the interrogator into
making the wrong decision, while player B
attempts to assist the interrogator


Turing proposed that player A be replaced
with a computer
The success of the computer is determined
by comparing the outcome of the game when
player A is a computer against the outcome
when player A is a man
Or, to put it in Turing’s words:
“[If] the interrogator decides wrongly as often when the game is played [with the computer] as he does when the game is played between a man and a woman, then it can be argued that the computer is intelligent.”
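As an aside (not from the presentation), the standard version of the test can be pictured as a simple loop. The sketch below is only an illustration; the interrogator object with `ask` and `guess` methods is an assumed interface, not a real API.

```python
import random

def run_turing_test(interrogator, human, machine, num_rounds=10):
    """Hedged sketch: interrogator is assumed to expose ask(label, transcript) and
    guess(transcripts); human and machine map the conversation so far to a reply."""
    hidden = {"A": human, "B": machine}
    if random.random() < 0.5:                 # randomly hide which label is the machine
        hidden = {"A": machine, "B": human}
    transcripts = {"A": [], "B": []}
    for _ in range(num_rounds):
        for label, respond in hidden.items():
            question = interrogator.ask(label, transcripts[label])
            reply = respond(transcripts[label] + [question])
            transcripts[label] += [question, reply]
    guess = interrogator.guess(transcripts)   # label the interrogator believes is the machine
    return hidden[guess] is machine           # True iff the machine was identified
```

Turing’s criterion, in these terms, is that the guess should be wrong about as often with the machine in role A as with a man in role A.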




As with the Original Imitation Game Test, the
role of player A is performed by a computer
The difference is that now the role of player B
is to be performed by a man, rather than by a
woman
In this version both player A (the computer)
and player B are trying to trick the
interrogator into making an incorrect
decision
A man can fail the OIG Test, but it is argued
that this is a virtue of a test of intelligence if
failure indicates a lack of resourcefulness
It is argued that the OIG Test requires the
resourcefulness associated with intelligence
and not merely "simulation of human
conversational behavior"
The power of the Turing test derives from the
fact that it is possible to talk about anything
Turing wrote "the question and answer
method seems to be suitable for introducing
almost any one of the fields of human
endeavor that we wish to include."
In order to pass a well-designed Turing test,
the machine would have to use natural
language, to reason, to have knowledge and
to learn
The Turing test only tests whether the subject resembles a human being. It will fail to test for intelligence under two circumstances:
1. It tests for many behaviors that we may not consider intelligent, such as the susceptibility to insults or the temptation to lie.
2. It fails to capture the general properties of intelligence, such as the ability to solve difficult problems or come up with original insights.
CAN MACHINES ACT INTELLIGENTLY?
- ARUN



The assertion that machines could possibly act intelligently is called the “weak AI” hypothesis by philosophers.
Can machines act intelligently?
Can machines think?



Objections raised against the weak AI hypothesis:
The argument from disability
The mathematical objection
The argument from informality


“A machine can never do X”
X according to Turing: being kind, learning
from experience, doing something new,
differentiating between right and wrong.



Some of these have been achieved over the years, e.g., machines today do learn from experience.
Fact: automated programs are used to grade GMAT essay questions.
Maybe over the years machines will be able to do the rest of the “X”s as well.



Machines are formal systems limited by the incompleteness theorem; e.g., they cannot establish the truth of their own Gödel sentence.
Humans have no such limitation.
Hence, “humans are superior to machines.”




Problems with the claim:
Gödel’s theorem applies only to formal systems powerful enough to do arithmetic.
It applies to Turing machines, not to actual computers: Turing machines have infinite memory, whereas computers do not.



There are sentences whose truth a given agent cannot establish; e.g., “Lucas cannot consistently assert that this sentence is true.”
Even if computers have limitations on what they can prove, there is no evidence that humans could prove those results either.
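As a hedged aside (ours, not the presenters’), the self-reference at the heart of this objection has a well-known computational cousin, the halting problem. The decider `halts` below is hypothetical and cannot actually exist; the sketch only illustrates why.

```python
def make_counterexample(halts):
    """Given a claimed halting decider halts(program, argument), build a program it misjudges."""
    def g(x):
        if halts(g, x):      # the decider predicts that g halts on x ...
            while True:      # ... so g loops forever, falsifying the prediction
                pass
        return 0             # the decider predicts that g loops, so g halts immediately
    return g
```

Whatever `halts` answers about `g`, it is wrong, just as a consistent formal system cannot prove its own Gödel sentence; and, as noted above, this limitation gives no reason to think humans escape an analogous one.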
Human behavior is far too complex to be captured by any simple set of rules.
Computers can do no more than follow a set of rules.
So they cannot generate behavior as intelligent as that of humans.
The inability to capture everything in a set of logical rules is called the “qualification problem” in AI.




Claim: no one has any idea how to incorporate background knowledge into the learning process.
This claim has been proved wrong: learning algorithms today do use background knowledge.



Claim: learning requires prior identification of relevant inputs and correct outputs.
This claim has been proved wrong: unsupervised learning is accomplished today (a sketch follows).
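Purely as an illustration of that reply (not part of the presentation), here is plain k-means clustering in NumPy: it groups unlabeled points without ever being given “correct outputs”. The toy data at the end is invented.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate assigning points to centers and re-estimating centers."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center (no labels are ever supplied)
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # move each center to the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

data = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]]
labels, centers = kmeans(data, k=2)
print(labels, centers)   # the two obvious clusters are recovered without supervision
```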


The brain can direct its sensors to seek relevant information and process it according to the current situation.
Research is being done in this area, and partial success has been achieved.
A machine can be said to possess Strong AI if it could do whatever the human brain can do, in every possible way; it should possess the causal powers of the brain.
It would have consciousness and self-awareness, understand, feel emotions, dream, think, etc.
Most AI researchers do not care about the Strong AI hypothesis.
Passing the Turing test does not imply actual thinking; the machine might merely be simulating thinking.

Jefferson’s Lister Oration for 1949, “Not until a machine
can write a sonnet or compose a concerto because of
thoughts and emotions felt, and not by the chance fall of
symbols, could we agree that machine equals brain – that
is not only write it but know that it had written it. No
mechanism could feel (and not merely artificially signal, an
easy contrivance) pleasure at its successes, grief when its
valves fuse, be warmed by flattery, be made miserable by
its mistakes, be charmed by sex, be angry or depressed
when it cannot get what it wants”.
Should have consciousness.
Phenomenology: the machine has to actually feel emotions.
Intentionality: whether its beliefs, desires and intentions are “of” or “about” something in the real world.



There is no direct evidence of other people’s mental states; let us accept that everyone thinks.
Artificial urea is urea, artificial insemination is insemination, an artificial simulation of a chess game is a chess game, and an artificial simulation of addition is addition; but an artificial Mona Lisa is not the Mona Lisa, an artificial simulation of a storm is not a storm, and artificial scotch is not scotch.
So is an artificial mind a mind?
It depends on the definition of mental states:
The theory of functionalism.
The theory of biological naturalism.


Functionalism: a mental state (a belief, a desire, being in pain) is a condition defined by the causal role it plays between inputs and outputs.
[Figure: a two-state automaton. On input ‘1’ in state S1 it moves to S2 and emits “Odd”; on input ‘1’ in state S2 it moves back to S1 and emits “Even”.]
What is S1?
Being in S1 = being an x such that ∃P ∃Q [if x is in P and gets a ‘1’ input, then it goes into Q and emits “Odd”; if x is in Q and gets a ‘1’ input, it goes into P and emits “Even”; & x is in P]. (Note: read ∃P as “there is a property P”.)
Functional State Identity Theory (FSIT) would identify pain (or, more naturally, the property of having a pain or being in pain) with such a second-order relational property.
Being in pain = being an x such that ∃P ∃Q [sitting on a tack causes P & P causes both Q and emitting “ouch” & x is in P].
The nature of a mental state is just like the nature of an automaton state.
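A tiny runnable rendering of that automaton (our sketch, not from the slides) makes the functionalist point concrete: nothing about S1 or S2 matters except the causal role each plays between inputs and outputs.

```python
class ParityAutomaton:
    """The two-state machine from the example above."""
    def __init__(self):
        self.state = "S1"              # an even number of '1' inputs seen so far

    def step(self, symbol):
        if symbol == "1" and self.state == "S1":
            self.state = "S2"
            return "Odd"
        if symbol == "1" and self.state == "S2":
            self.state = "S1"
            return "Even"
        return None                    # other inputs leave the state unchanged

m = ParityAutomaton()
print([m.step("1") for _ in range(4)])   # ['Odd', 'Even', 'Odd', 'Even']
```

On the FSIT view, “being in pain” is identified with a role of exactly this kind, only with tack-sittings and “ouch” in place of ‘1’s and “Odd”.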


A mental state is a result of neural activity. John Searle (1980):
1) All mental phenomena, from pain, tickles, and itches to the most abstruse thoughts, are caused by lower-level neurobiological processes in the brain.
2) Mental phenomena are higher-level features of the brain.
Brains, and only brains, can cause consciousness.
Consciousness is ontologically subjective in the sense that it exists only when experienced by a human or animal subject.









Dualist theory – the soul is different from the body (René Descartes); the “ghost in the machine”.
Mind architecture.
Monist theory – mind and body are the same; the only thing proven to exist is matter.
Searle – “brains cause minds”.
Free will – something materialists have to deal with.



Hilary Putnam first presented the argument that
we cannot be brains in a vat.
A term refers to an object only if there is an
appropriate causal connection between that
term and the object. (CC)
1) Assume we are brains in a vat.
2) If we are brains in a vat, then “brain” does not refer to brain, and “vat” does not refer to vat (via CC).
3) If “brain in a vat” does not refer to brains in a vat, then “we are brains in a vat” is false.





4) Thus, if we are brains in a vat, then the sentence “We are brains in a vat” is false (from 1, 2, 3).
Is the mental state “I need a pizza” the same in both worlds?
Wide content – the content as ascribed from outside, by an omniscient observer.
Narrow content – the content within the subject’s own world.
Qualia – the difference between human beings and zombies.
The Matrix (1999), the Wachowski brothers.





The brain replacement experiment: replace each neuron with an electronic device, slowly, one by one.
What happens to consciousness?
Functionalist – consciousness remains.
Biological naturalist – consciousness vanishes.
Brain–computer interfaces (BCI).



Searle’s Chinese Room argument is directed against strong AI. Searle’s axioms:
1) Minds have mental contents; specifically, they have semantic contents.
2) Computer programs are entirely defined by their formal, or syntactic, structure.
3) Syntax by itself is not sufficient for semantics (against functionalism).
4) Brains cause minds.
From these axioms Searle concludes that programs by themselves are neither constitutive of nor sufficient for minds.
-AJAY GARG
The most difficult problems in AI are informally known as AI-complete, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem: making computers as intelligent as people.
The term was coined by Fanya Montalvo by analogy with NP-complete in complexity theory.


To call a problem AI-complete reflects an
attitude that it won't be solved by a simple
algorithm.


The AI subarea of natural language is essentially the overlap of AI and computational linguistics.
The goal of the area is to form a computational understanding of how people learn and use their native languages.


Consider a straightforward, limited and specific task: machine translation.
To translate accurately, a machine must be able to understand the text.



It must be able to follow the author’s argument, so it must have some ability to reason.
It must have extensive world knowledge so that it knows what is being discussed.
E.g., “We gave the monkeys the bananas because they were hungry” versus “We gave the monkeys the bananas because they were over-ripe”: in the first sentence “they” refers to the monkeys, in the second to the bananas, and only world knowledge can tell which (a toy sketch of this follows).
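Purely as an illustration (ours; the mini “knowledge base” and helper function are invented), a few lines of Python show why resolving “they” needs facts about the world rather than grammar alone:

```python
# Which plural noun can each predicate sensibly apply to? (hypothetical world knowledge)
KNOWLEDGE = {
    ("monkeys", "hungry"): True,
    ("bananas", "hungry"): False,
    ("monkeys", "over-ripe"): False,
    ("bananas", "over-ripe"): True,
}

def resolve_they(candidates, predicate):
    """Return the unique candidate the predicate plausibly applies to, else None."""
    plausible = [c for c in candidates if KNOWLEDGE.get((c, predicate), False)]
    return plausible[0] if len(plausible) == 1 else None

print(resolve_they(["monkeys", "bananas"], "hungry"))     # -> 'monkeys'
print(resolve_they(["monkeys", "bananas"], "over-ripe"))  # -> 'bananas'
```

Without the knowledge table, both sentences are syntactically identical and the pronoun stays ambiguous, which is exactly the translator’s problem.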



It must also model the author’s goals, intentions, and emotional states to accurately reproduce them in the new language.
E.g., “I never said she stole my money” (stress on “I”) – someone else said it, but I didn’t.
E.g., “I never said she stole my money” (stress on “my”) – I said she stole someone else’s money.


In short, the machine is required to have a wide variety of human intellectual skills, so this problem is believed to be AI-complete.


Vision is interpreting visual images that fall
on the human retina or the camera lens.
The actual scene being looked at could be
2-dimensional such as a printed page of
text or 3-dimensional such as the world
about us.


The classical problem in computer vision is that of determining whether or not the image data contains some specific object, feature, or activity.
This task can normally be solved robustly and without effort by a human (a naive machine version is sketched below).
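As a hedged aside (not from the slides), the crudest machine formulation of “does this image contain this object?” is exhaustive template matching; real detectors are far more sophisticated, but the sketch shows how literal the machine’s version of the question is.

```python
import numpy as np

def contains_template(image, template, threshold=1e-6):
    """Slide the template over every window of the image and report a near-exact match."""
    image = np.asarray(image, dtype=float)
    template = np.asarray(template, dtype=float)
    ih, iw = image.shape
    th, tw = template.shape
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw]
            if ((window - template) ** 2).sum() <= threshold:
                return True          # found a window that matches the template
    return False
```

A human recognizes the object effortlessly under changes of lighting, pose, and occlusion, none of which this brute-force comparison survives.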

A computer must be able to relate the different objects in the scene, so it too must have extensive world knowledge.

PRAVEEN LAKHOTIA





Is it worth asking the question – “Can there be an ethical AI?”
Machines have intruded into our lives.
Ex: ATMs.
Ex: Autopilot systems in aeroplanes.
They set aside the need for us to do these things ourselves.


How much control does this machine intrusion have over us?
One consequence is the diminishing role of humans in decision making.

What has this relinquishing of control to AIs got to do with them being ethical?
We must establish that the agent can and will carry out our wishes.
We hold them responsible for the actions that they carry out as part of that control.

Also, if it is believed that AIs can think, then why not believe that they can be ethical?
Another reason to care whether AIs can be ethical is the effect they would have in changing society if they were able to be ethical.
One effect might be that the incorporation of machine agents into human practices will accelerate and deepen as artefacts simulate basic social capacities: dependence upon them will grow.
The attribution of human-like agency to artefacts will change the image of both machines and human beings.


Given the destructiveness of contemporary society, an
examination of the additional influence that an ethical AI
would have in the technologizing of human social relations is
timely.



So what should we do?
We need to control the way machines can act.
We need some kind of laws which the machines will abide by in all circumstances.

Isaac Asimov proposed the Three Laws of Robotics to govern artificially intelligent systems (a toy rendering of them as an ordered rule check follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
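A toy rendering (ours; the field names are invented) of the Three Laws as a strictly ordered filter on candidate actions. Note how everything hinges on the robot correctly predicting consequences, which already hints at the criticism below.

```python
def permitted(action):
    """action is a dict of (assumed) boolean predictions about the action's consequences."""
    # First Law: no harm to humans, by action or by inaction.
    if action["harms_human"] or action["inaction_allows_harm"]:
        return False
    # Second Law: obey human orders unless obeying would break the First Law.
    if action["disobeys_order"] and not action["order_conflicts_first_law"]:
        return False
    # Third Law: preserve yourself unless self-preservation would break a higher law.
    if action["endangers_self"] and not action["self_preservation_conflicts_higher_law"]:
        return False
    return True

# Refusing an order that would harm a human is permitted:
print(permitted({"harms_human": False, "inaction_allows_harm": False,
                 "disobeys_order": True, "order_conflicts_first_law": True,
                 "endangers_self": False, "self_preservation_conflicts_higher_law": False}))
```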


A reading of his work concludes that no set of
fixed laws can sufficiently match the possible
behavior of AI agents and human society.
A criticism of Asimov's robot laws is that the
installation of unalterable laws into a sentient
consciousness would be a limitation of free
will and therefore unethical.


Fiction – I, Robot, Aliens.
Designing autonomous systems.


The replies to the objections raised against weak AI show the progress of AI rather than its impossibility.
Searle, however, claims that machines cannot genuinely think or have minds, even if they behave intelligently.




Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, Second Edition, 2005.
Searle, J. R. Minds, Brains, and Programs. Behavioral and Brain Sciences, 1980.
Searle, J. R. Minds, Brains and Science. Harvard University Press, Cambridge, 1984.
Searle, J. R. Is the Brain’s Mind a Computer Program? Scientific American, 1990.




Turing, A. Computing Machinery and Intelligence. 1950.
http://en.wikipedia.org/wiki/AI-complete
Stanford Encyclopedia of Philosophy.
Richard Lucas. An Outline for Determining the Ethics of AI.