Possibility of True Artificial Intelligence
Weak AI: Can machines act intelligently?
Artificial intelligence pursued within the cult of computationalism stands
not even a ghost of a chance of producing durable results … it is time to
divert the efforts of AI researchers - and the considerable monies made
available for their support - into avenues other than the computational
approach. (Sayre, Three more flaws in the computational model. Paper presented at the
APA (Central Division) Annual Conference, 1993)
Whether AI is impossible depends on how it is defined:
1. AI is the quest for the best agent program on a given architecture.
Clearly possible.
2. AI makes machines think.
An under-specified problem.
Turing Test: an experimental answer.
Disability Objection
Turing lists: Machines can never …
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of
humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries
and cream, make someone fall in love with it, learn from experience, use
words properly, be the subject of its own thought, have as much diversity
of behavior as man, do something really new.
How has this argument held up historically?
Computers can play chess, check spelling, steer cars and helicopters,
diagnose diseases, …
Computers have made small, but significant discoveries in mathematics,
chemistry, mineralogy, biology, computer science, …
Algorithms perform at human or better level in some areas involving
human judgement:
Paul Meehl, 1955: statistical learning algorithms outperform experts when
predicting the success of students in a training program or the recidivism of criminals.
The GMAT has graded essays automatically since 1999.
Mathematical objection
Certain mathematical questions are in principle
unanswerable by a particular formal system.
Gödel (1931), Turing (1936)
A formal axiomatic system F powerful enough to do arithmetic
allows the construction of a Gödel sentence G(F).
G(F) is a sentence of F, but cannot be proved within F.
If F is consistent, then G(F) is true.
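The construction can be stated via the diagonal lemma; the following is a sketch using the standard provability-predicate notation, not a full derivation:

```latex
% Diagonal lemma: a sufficiently strong F yields a sentence G(F)
% that "says of itself" that it is not provable in F:
\[
  F \vdash G(F) \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\bigl(\ulcorner G(F) \urcorner\bigr)
\]
% If F is consistent, G(F) cannot be proved in F; but then what G(F)
% asserts (its own unprovability) holds, so G(F) is true yet unprovable in F.
```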
Lucas (1961): Machines are formal systems, hence they cannot
derive their own Gödel sentences, while humans
have no such limitation.
Penrose (1989, 1994): Humans are different because their
brains operate by quantum gravity
Counterarguments to the Lucasian position
Gödel’s incompleteness theorem does not apply:
Gödel’s incompleteness theorem only applies to formal
systems that are powerful enough to do arithmetic, such
as Turing machines.
Computers are not Turing machines; they are finite.
Hence they are describable in a (very large) system of
propositional logic, where Gödel’s incompleteness theorem
does not apply.
There is no problem in the fact that some intelligent agents
cannot establish the truth of a sentence that other agents
can.
“J. R. Lucas cannot consistently assert that this sentence is
true”
If Lucas asserted this sentence, he would be contradicting
himself; so he cannot consistently assert it, and the sentence
must therefore be true.
Why should we (who can assert this sentence) think less of Lucas
because he cannot assert this sentence?
Humans cannot add 100 billion 100-digit numbers in their
lifetime, but computers can.
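As a concrete aside on this arithmetic gap: exact addition of 100-digit numbers is trivial for a computer, because Python integers have arbitrary precision. A minimal sketch, summing only 1000 such numbers generated at random purely for illustration:

```python
import random

# Python ints have arbitrary precision, so adding huge numbers is
# exact; the sample values below are generated for illustration only.
random.seed(0)
numbers = [random.randrange(10**99, 10**100) for _ in range(1000)]

total = sum(numbers)    # exact sum, no overflow or rounding
# The sum of 1000 100-digit numbers is at least 1000 * 10**99 = 10**102
# and below 1000 * 10**100 = 10**103, so it always has exactly 103 digits.
print(len(str(total)))  # -> 103
```

Summing 100 billion such numbers would take longer, but only linearly so; the point is that the operation is exact and routine for a machine.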
Humankind did very well without mathematics for millennia, so
intelligence should not be made dependent on mathematical
ability.
Computers have limitations, but so do humans
Humans are famously inconsistent
Argument from informality of behavior (Dreyfus
1972, 1986, 1992)
Claim:
“Human behavior is far too complex to be captured by any
simple set of rules.”
“Computers cannot do more than follow a set of rules,
therefore they cannot behave intelligently.”
Known as the Qualification Problem in AI.
The claim applies to “Good Old Fashioned AI” (GOFAI).
Dreyfus’s critique morphed into a proposal for doing AI, together
with a list of “insurmountable” problems.
All of these problems were subsequently addressed by AI research.
Strong AI: Can Machines Really Think?
“Not until a machine could write a sonnet or compose a concerto because
of thoughts and emotions felt, and not by the chance fall of symbols, could
we agree that machine equals brain - that is, not only write it but know
that it had written it” (Geoffrey Jefferson, 1949, quoted by A. Turing)
This is the Argument from Consciousness, but it relates to:
Phenomenology: the study of direct experience.
Intentionality: whether the machine’s purported beliefs, desires, and other
representations are actually “about” something in the real
world.
Turing proposes “a polite convention” that
everyone thinks
We are interested in creating programs that
behave intelligently, not whether someone else
pronounces them to be real or simulated.
We avoid the question: “When are artifacts
considered real?”
Is an artificial Picasso painting a Picasso painting?
Are artificial sweeteners sweeteners?
Distinction seems to depend on intuition
Searle: “No one supposes that a computer simulation of a
storm will leave us all wet … Why on earth would anyone in
his right mind suppose a computer simulation of mental
processes actually had mental processes?” (1980)
Functionalism: a mental state is any intermediate causal condition between input
and output.
Biological Naturalism: mental states are high-level emergent features that are caused by
low-level neurological processes in the neurons, and it is the
properties of the neurons that matter.
The two positions give different answers to the challenge by Searle.
Mind-Body Problem:
How are mental states and processes related to bodily
states and processes?
Raises the further questions of
Cartesian dualism
Monism or materialism (Searle: “Brains cause minds”.)
Consciousness, Understanding, Self-Awareness
Free will
Brain in a vat experiment
Brain prosthesis experiment: Replace brain parts over time
with silicon and see what happens to self-consciousness
Chinese Room
Searle’s argument, transformed into the following axioms:
1. Computer programs are formal, syntactic entities.
2. Minds have mental contents, or semantics.
3. Syntax by itself is not sufficient for semantics.
4. Brains cause minds.
Conclusion from 1, 2, 3: programs are not
sufficient for minds.
The Chinese Room argument argues for axiom 3.
Social Consequences of Successful AI
Berleur and Brunnstein: Ethics of Computing, 2001
People might lose their jobs because of automation.
People might have too much or not enough leisure time.
People might lose their sense of being unique.
People might lose some of their privacy rights.
The use of AI systems might result in a loss of accountability.
The success of AI might mean the end of the human race
Arthur C. Clarke, 1968
People in 2001 might be “faced with a future of
utter boredom, where the main problem in life is
deciding which of several hundred TV channels to
select”