Artificial Intelligence
Do we stand in the way?
Brandon Bushong
April 21, 2006
Outline

Goal for A.I.
Defining Intelligence
The Turing Test
Moral Considerations
Potential for A.I.
If you’re going to dream, dream big


Goal of A.I.

Basic, yet optimistic criteria for an android:

Union of linguistics and everyday knowledge
Ability to interpret demeanor and speech
Nonverbal communication
http://colegroup.com/images/0407-Android.jpg
Neuroscience Says to Keep Dreaming

Complexity of the brain

Neural networks (a toy sketch follows this slide)

The human brain: roughly 100 billion neurons vs. the 30 neurons researchers have tried to model
Modeling those 30 neurons has taken 30+ years and 15 research teams; the timescale for 100 billion is an open question
http://www.alanturing.net/turing_archive/graphics/realneurons.gif
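To give a feel for what a single unit of a neural network involves, here is a minimal sketch of one artificial neuron. Python is an illustrative choice, and every weight and input is a made-up value; this is a toy, not a model of a biological neuron.

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed by a sigmoid activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values only: three inputs feeding a single unit.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))

Scaling anything like this to 100 billion interconnected, biologically faithful units is the gap the slide is pointing at.
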
Defining Intelligence

Requirements for thinking

What is the unquestionable definition of intelligence?

A human body?
Objections (Psychological & Philosophical)?

Operational definition

The Turing test: an imitation game
The Turing Test

Test fundamentals (sketched after this slide)

Three participants
Segregation
Foundation for artificial sentience

Key assumption

Humans think
http://www.alanturing.net/turing_archive/pages/Reference%20Articles/TheTuringTest.html
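As a concrete picture of the setup, here is a toy sketch of the imitation game: three participants, with the two respondents segregated behind anonymous channels. The question and canned answers are invented for illustration; Turing specifies the protocol, not any code.

import random

def imitation_game(human_answer, machine_answer, questions):
    # Three participants: an interrogator plus two hidden respondents.
    # Segregation: the respondents sit behind anonymous channels A and B,
    # shuffled so the interrogator cannot tell which is which.
    respondents = [human_answer, machine_answer]
    random.shuffle(respondents)
    channels = dict(zip("AB", respondents))
    for question in questions:
        print("Q:", question)
        for label, answer in channels.items():
            print("  " + label + ":", answer(question))
    # If the interrogator cannot reliably identify the machine, it passes.

# Invented stand-in answers, purely for illustration.
imitation_game(
    lambda q: "Hard to say; it reminds me of my grandmother's kitchen.",
    lambda q: "It is difficult to put a smell into words.",
    ["What does fresh bread smell like?"],
)
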
Moral Considerations

Autonomy requires ethical and moral action

Due to interaction with humans

What is an unyielding definition of morality?

Objections (Psychological & Philosophical)?

Operational definition

The Moral Turing test

Restricts conversation to morality
http://www.marxists.org/reference/archive/hegel/triads/morality.htm
The Requirements for an Autonomous Moral Agent

Conversing is not enough

Understanding circumstances is essential
Knowledge of the inner status of ethical beings, the communal procedure of creating accountability attributions, and customary morality
Distinguishing between data and information
Data vs. Information

Computers process data (see the sketch after this slide)

Surface-level form of information

Understanding a situation

Requires information processing
Relating to the data being processed

Example: Impending implosion of the Earth
How do you know 1 + 1 = 2?
How do you know when you are in love?
http://www.thefeltsource.com/My-First-Numbers-Large.jpg
http://www.restposten.de/fotos/1136289355AUT44.jpg
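A small sketch of the distinction, with an invented sensor reading attached to the slide's implosion example: the arithmetic is pure data processing, while the datum only becomes information once a situation supplies context.

# Data: the machine applies rules to symbols it does not understand.
print(1 + 1)  # prints 2, yet nothing here "knows" what two of anything is

# Information: a datum gains meaning only within a situation.
datum = "rising ground tremors"                 # invented sensor reading
situation = "impending implosion of the Earth"  # the slide's example
print("Datum:", datum)
print("Read against the situation (" + situation + "), the datum warrants action.")
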
Processing Information

To be able to process information, a computer would need to understand the information’s context

Context affects interpretation

Farmer vs. sandcastle builder (see the sketch after this slide)

Computers are at a disadvantage

Finite amount of storage
Necessitates more than a pre-set procedure

Agree or disagree?

Requires adaptation
Developed by a physical presence in the world

Artificial beings are incapable of passing the Moral Turing test
http://facweb.cs.depaul.edu/yele/Course/IS421/S4/open%20road%20context%20solution.gif
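A minimal sketch of the farmer vs. sandcastle builder point; the agents, the "rain" datum, and the reactions are all invented for illustration. Identical data yields opposite interpretations once context enters.

# One datum, two contexts, opposite interpretations.
def interpret(datum, context):
    # An invented lookup table standing in for world-grounded understanding.
    reactions = {
        ("rain", "farmer"): "good news: the crops get watered",
        ("rain", "sandcastle builder"): "bad news: the castle washes away",
    }
    return reactions.get((datum, context), "no interpretation available")

print(interpret("rain", "farmer"))
print(interpret("rain", "sandcastle builder"))
print(interpret("snow", "farmer"))  # outside the pre-set table: no answer

The lookup table is exactly the pre-set procedure the slide warns about: every datum-context pairing must be anticipated in advance, whereas a being developed by a physical presence in the world adapts to pairings it never stored.
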
Why an Artificial Being Cannot Pass the Turing Test

The test does not actually measure intelligence

Examines human intelligence, as shaped by the environment

The use of subcognitive questions (illustrated after this slide)

Probes a machine for the accumulation of human experiences

Use of the senses and processing the data obtained with one’s senses

Ex: Smells, tastes, etc.

Ability to explain a decision based on the use of the senses
http://www.supereggplant.com/archives/sugar%20cookies.JPG
http://www.corbinstreehouse.com/blog/wp-content/uploads/dadsyard_flowers_DSCF0027.JPG
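In the spirit of French's subcognitive probes, here is a sketch of how such questions could expose a machine that lacks a human lifetime of sensory experience. Every question, rating, and number below is invented for illustration and is not taken from French (2000).

# Scoring a machine's answers to subcognitive probes against pooled
# human ratings. All values are invented.
human_ratings = {
    "pleasantness of the smell of fresh sugar cookies (1-10)": 9,
    "how good 'Flugly' sounds as a teddy bear's name (1-10)": 7,
    "comfort of walking barefoot on warm dry sand (1-10)": 8,
}
machine_answers = {
    "pleasantness of the smell of fresh sugar cookies (1-10)": 5,
    "how good 'Flugly' sounds as a teddy bear's name (1-10)": 2,
    "comfort of walking barefoot on warm dry sand (1-10)": 5,
}

# Without a human lifetime of senses, the machine drifts from the norm.
for question, human in human_ratings.items():
    machine = machine_answers[question]
    print(question, "- human:", human, "machine:", machine,
          "gap:", abs(human - machine))
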
Why an Artificial Being Cannot Pass the Turing Test

To pass the Turing test, an artificial being must live as a human

To be intelligent, according to the Turing test, a machine must be human
http://www.xensory.com/blogs/robotsnext/repliee.jpg
In Summary

When humans act as the definition of intelligence, there is no room for other sentient beings.
http://www.wbru.com/albums/warpedtour/crowd.sized.jpg
References

Bernstein, J. (2001). A.I. The New Yorker, 295-300.
Brackenbury, I., & Ravin, Y. (2002). Machine intelligence and the Turing test. IBM Systems Journal, 41, 524-529.
French, R. M. (2000). Peeking behind the screen: The unsuspected power of the standard Turing test. Journal of Experimental & Theoretical Artificial Intelligence, 12, 331-340.
Jin, Z., & Bell, D. A. (2003). An experiment for showing some kind of artificial understanding. Expert Systems, 20, 100-107.
Proudfoot, D. (2004). The implications of an externalist theory of rule-following behavior for robot cognition. Minds and Machines, 14, 283-308.
Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14, 67-83.
Zimmer, C. (2001). Alternative life styles. Natural History, 110, 42-45.