
Acting Humanly: The Turing Test (1950), "Computing Machinery and Intelligence":
Can machines think? Or can machines behave intelligently? An operational test for intelligent behavior: the Imitation Game.
Predicted that by the year 2000, a machine would have a 30% chance of fooling a lay person for 5 minutes.
Anticipated all major arguments against AI raised over the following 50 years.
Suggested major components of AI: knowledge, reasoning, language understanding, learning.
Problem: the Turing test is not reproducible, constructive, or amenable to mathematical analysis. Intelligence is not determinable
by surface behavior alone. The test is not sufficient, since the behaviors under adjudication are too limited. As a sufficient
condition for intelligence, the test is so difficult as to be uninteresting.
Consciousness - The Chinese Room Experiment: does running the right program generate consciousness?
The setup:
• A human who understands only English
• A rule book, written in English
• Stacks of paper - some blank, some with indecipherable symbols on them
• A small opening to the outside world
• Pieces of paper with symbols on them are passed in through the opening
• The human follows the instructions in the rule book
• Eventually the human hands a piece of paper with symbols on it back through the opening
The argument:
1. Certain kinds of objects are incapable of conscious understanding.
2. The human, paper, and rule book are objects of this kind.
3. If each object is incapable, the whole they make up is incapable.
4. Therefore there is no conscious understanding in the room.
The Brain Prosthesis Experiment
Replace the neurons in your brain one at a time with artificial neurons that *exactly*
replicate the behavior of the original neurons (then reverse the process).
By definition, the subject's external behavior must remain unchanged.
What happens? We have two choices; either:
1. The causal mechanisms involved in consciousness in the electronic brain are still functioning, and it is therefore conscious, or
2. Conscious mental events in the normal brain have no effect on behavior.
If the neuron-by-neuron replacement is conscious, then replacing the brain with an entire circuit/lookup table
that maps inputs to outputs *must* also be conscious.
Potted history of AI
1943     McCulloch & Pitts: Boolean circuit model of the brain
1950     Turing's "Computing Machinery and Intelligence"
1952-69  "Look, Ma, no hands!"
1950s    Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
1956     Dartmouth meeting: "Artificial Intelligence" adopted
1965     Robinson's complete algorithm for logical reasoning
1966-74  AI discovers computational complexity; neural network research almost disappears
1969-79  Early development of knowledge-based systems
1980-88  Expert systems industry booms
1988-93  Expert systems industry busts: "AI Winter"
1985-95  Neural networks return to popularity
1988-    Resurgence of probabilistic and decision-theoretic methods; rapid increase in technical depth of mainstream AI; "Nouvelle AI": ALife, GAs, soft computing
Agents
• Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors
• Current hot buzz-word
• Looks like the basic computational box
Abstractly, an agent is a function from percept histories to actions:
f : P* → A
For any given class of environments and tasks, we seek the agent (or class of agents) with the best possible performance (a rational agent).
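To make the f : P* → A view concrete, here is a minimal Python sketch (the percept names, actions, and table contents are illustrative assumptions, not from the slides): an agent is just a callable from the percept history seen so far to an action, shown here in table-driven form.

```python
from typing import Callable, Sequence

Percept = str   # placeholder percept type (assumption for the example)
Action = str    # placeholder action type (assumption for the example)

# Abstractly, an agent is a function from percept histories to actions: f : P* -> A
Agent = Callable[[Sequence[Percept]], Action]

def table_driven_agent(table: dict, default: Action) -> Agent:
    """Build an agent that looks up the entire percept history in a table."""
    def act(percepts: Sequence[Percept]) -> Action:
        return table.get(tuple(percepts), default)
    return act

# Hypothetical vacuum-world-style example.
agent = table_driven_agent({("dirty",): "suck", ("clean",): "move"}, default="wait")
print(agent(["dirty"]))            # -> suck
print(agent(["clean", "clean"]))   # -> wait (no table entry for that history)
```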
Intelligent Agents
• An intelligent (rational) agent seeks to maximize its performance measure for any given sequence of percepts
• Look-up table? (see the size estimate after this list)
• The text uses the intelligent-agent approach to bring all aspects of AI into one framework
• What should an intelligent agent have?
An intelligent agent should be able to represent knowledge, infer, plan, reason under uncertainty, learn, perceive, communicate, etc.
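A quick back-of-the-envelope estimate (the numbers are illustrative assumptions) shows why a literal look-up table is a non-starter: it needs one entry for every possible percept sequence the agent might see.

```python
# Entries needed for a look-up table over percept sequences of length up to T,
# with |P| possible percepts per time step: the sum over t of |P|**t.
def table_size(num_percepts: int, horizon: int) -> int:
    return sum(num_percepts ** t for t in range(1, horizon + 1))

# Illustrative assumption: 10 distinct percepts, a 20-step lifetime.
print(table_size(10, 20))   # about 1.1e20 entries -- far too many to store
```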
What is rational for an agent? It depends on four things (a schematic sketch follows the list):
1. The performance measure
2. What it has perceived
3. Its current store of knowledge
4. The actions the agent can perform
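Read together, these four ingredients suggest the following schematic sketch of rational choice (an illustration, not a definition from the text; the scoring model and percepts are hypothetical): pick the available action that maximizes expected performance, given what has been perceived and what the agent already knows.

```python
def rational_action(actions, percepts, knowledge, expected_performance):
    """Choose the available action with the highest expected performance,
    given the percept history and the agent's built-in knowledge."""
    return max(actions, key=lambda a: expected_performance(a, percepts, knowledge))

# Hypothetical scoring model: sucking scores highest when the latest percept is "dirty".
def score(action, percepts, knowledge):
    if percepts and percepts[-1] == "dirty":
        return 1.0 if action == "suck" else 0.0
    return 1.0 if action == "move" else 0.0

print(rational_action(["suck", "move", "wait"], ["dirty"], {}, score))  # -> suck
```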
Agent Types
• Reflex agents - actions based only on the current percept (no state memory), via condition-action rules (see the sketch after this list)
• Agents with memory - keep track of internal state, past actions (or their effects), and the dynamically changing environment
• Goal-based agents - actions driven by an overall goal; easy if a single step suffices, while multi-action sequences (subgoals) are often supported by search and planning mechanisms
• Utility-based agents - choose the best actions when there are:
  - multiple ways to reach goals
  - conflicting goals
  - actions with uncertainty - which approach gives the best chance of fulfilling the goals?
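As a rough illustration of the first two agent types (the rule table, percept format, and action names are assumptions, not from the text): a reflex agent reacts to the current percept alone, while an agent with memory carries internal state across percepts.

```python
# Simple reflex agent: condition-action rules applied to the current percept only.
REFLEX_RULES = {"dirty": "suck", "clean": "move"}   # hypothetical rule table

def reflex_agent(percept):
    return REFLEX_RULES.get(percept, "wait")

# Agent with memory: internal state summarizes what has been perceived so far.
class MemoryAgent:
    def __init__(self):
        self.visited = set()          # internal state built from past percepts

    def act(self, percept):
        location, status = percept    # assumed percept format: (location, status)
        self.visited.add(location)    # remember where we have been
        return "suck" if status == "dirty" else "move"

print(reflex_agent("dirty"))          # -> suck
agent = MemoryAgent()
print(agent.act(("A", "clean")))      # -> move
```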
Automated taxi driver:
Percepts?
Actions?
Goals?
Environment?
Internet shopping agent
Percepts?
Actions?
Goals?
Environment?
Environment Issues (a small profile sketch follows the list)
• Accessibility - can the agent detect all relevant percepts?
• Determinism - is the next state completely determined by the current state plus the agent's action? (If the environment is inaccessible, it may appear non-deterministic regardless.)
• Episodic - is the environment neatly divided into independent episodes?
• Static vs. dynamic - does the environment remain static in between agent actions?
• Discrete vs. continuous - are there limited, distinct percept and action possibilities?
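One way to use these distinctions is to record a small profile per task (a sketch under assumed classifications; the taxi entry follows the usual reading of the taxi-driver example above):

```python
from dataclasses import dataclass

# A small record of the environment properties listed above, so tasks can be
# compared along the same axes (field values below are illustrative).
@dataclass
class EnvironmentProfile:
    accessible: bool      # can the agent detect all relevant percepts?
    deterministic: bool   # does current state + action fix the next state?
    episodic: bool        # are episodes independent of one another?
    static: bool          # does the world hold still between agent actions?
    discrete: bool        # finitely many distinct percepts and actions?

# Illustrative classification of the taxi-driving task: it is none of these
# things, which is a large part of what makes it hard.
taxi_driving = EnvironmentProfile(accessible=False, deterministic=False,
                                  episodic=False, static=False, discrete=False)
print(taxi_driving)
```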