Transcript Slides
AI History, Philosophical Foundations
Part 2
Some highlights from early history of AI
• Gödel’s theorem: 1930
• Turing machines: 1936
• McCulloch and Pitts neurons: 1943 (introduced the field of "neural networks")
• Von Neumann self-reproducing automaton: 1940s
• Dartmouth workshop: 1956 (2006 is 50th anniversary)
• 1950s:
– Lisp (McCarthy)
– Logic Theorist (Newell and Simon)
– General Problem Solver (Newell and Simon)
– Learning checkers player (Samuel)
– Geometry theorem prover (Gelernter)
– Perceptrons (Rosenblatt)
• Minsky and Papert Perceptrons book: 1969
"In from three to eight years, we'll have a machine with
the general intelligence of an average human being.“
Marvin Minsky to Life magazine, 1970
Means-Ends Analysis
("General Problem Solver")
1. Compare the current state with the goal state; find a difference between them.
2. Find in memory an operator that experience has taught reduces differences of this kind.
3. Apply the operator to change the state.
4. If state = goal state, return "success", else go to 1.
(See transparencies)
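As a rough illustration of this loop, here is a minimal Python sketch of means-ends analysis. It assumes states are sets of facts and that each operator simply adds and removes facts; the names used here (Operator, means_ends, the "drive-to-work" example) are illustrative and not part of GPS itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    removes: frozenset   # facts the operator deletes from the state
    adds: frozenset      # facts the operator adds to the state

def apply_op(op, state):
    return (state - op.removes) | op.adds

def means_ends(state, goal, operators, max_steps=100):
    for _ in range(max_steps):
        # 1. Compare the current state with the goal state; find a difference.
        difference = goal - state
        if not difference:
            return "success"                 # 4. the state matches the goal
        # 2. Find in memory an operator known to reduce this kind of difference.
        relevant = [op for op in operators if op.adds & difference]
        if not relevant:
            return "failure"                 # no operator reduces the difference
        # 3. Apply the operator to change the state, then go back to 1.
        state = apply_op(relevant[0], state)
    return "failure"

# Tiny usage example with made-up facts and a single operator.
drive = Operator("drive-to-work", frozenset({"at-home"}), frozenset({"at-work"}))
print(means_ends(frozenset({"at-home"}), frozenset({"at-work"}), [drive]))   # success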
Newell and Simon: Physical Symbol System hypothesis:
“A physical symbol system has the necessary and sufficient means for general intelligent action.”
Quote from Searle:
“I thought the whole idea of strong AI is that we don’t
need to know how the brain operates to know how the
mind works. The basic hypothesis, or so I had supposed,
was that there is a level of mental operations consisting of
computational processes over formal elements that
constitute the essence of the mental and can be realized in
all sorts of different brain processes, in the same way that
any computer program can be realized in different
computer hardware: on the assumptions of strong AI, the
mind is to the brain as the program is to the hardware, and
thus we can understand the mind without doing
neurophysiology.”
Blocks World
(used by SHRDLU)
The dialog that was used as a SHRDLU demo:
Person: PICK UP A BIG RED BLOCK.
Computer: OK. (does it)
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU
MEAN.
Person: FIND A BLOCK WHICH IS TALLER THAN THE
ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK
WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK. (does it)
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
Person: WHAT IS THE PYRAMID SUPPORTED BY?
Computer: THE BOX.
Person: HOW MANY BLOCKS ARE NOT IN THE BOX?
Computer: FOUR OF THEM.
Person: IS AT LEAST ONE OF THEM NARROWER THAN
THE ONE WHICH I TOLD YOU TO PICK UP?
Computer: YES, THE RED CUBE.
Person: IS IT SUPPORTED?
Computer: YES, BY THE TABLE.
Person: CAN THE TABLE PICK UP BLOCKS?
Computer: NO.
Person: CAN A PYRAMID BE SUPPORTED BY A BLOCK?
Computer: YES.
Person: CAN A PYRAMID SUPPORT A PYRAMID?
Computer: I DON'T KNOW.
Person: STACK UP TWO PYRAMIDS.
Computer: I CAN'T.
Person: WILL YOU PLEASE STACK UP BOTH OF THE
RED BLOCKS AND EITHER A GREEN CUBE OR A
PYRAMID?
Computer: OK. (does it)
Person: WHICH CUBE IS SITTING ON THE TABLE?
Computer: THE LARGE GREEN ONE WHICH SUPPORTS
THE RED PYRAMID.
Person: IS THERE A LARGE BLOCK BEHIND A
PYRAMID?
Computer: YES, THREE OF THEM: A LARGE RED ONE,
A LARGE GREEN CUBE, AND THE BLUE ONE.
Person: PUT A SMALL ONE ONTO THE GREEN CUBE
WHICH SUPPORTS A PYRAMID.
Computer: OK. (does it)
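SHRDLU answered these questions by consulting an internal model of the scene. As a loose illustration only (SHRDLU's actual representation, built in Micro-Planner, was far richer than this), a toy world model can store a "supports" relation and answer a couple of the questions above; the object names and helper functions here are made up.

# Toy blocks-world model: supporter -> objects resting directly on it.
supports = {
    "table": ["big-red-block", "red-cube", "large-green-cube"],
    "box": ["blue-pyramid", "blue-block"],
    "large-green-cube": ["red-pyramid"],
}

def contents(container):
    """Answer questions like 'WHAT DOES THE BOX CONTAIN?'."""
    return supports.get(container, [])

def supporter_of(obj):
    """Answer questions like 'WHAT IS THE PYRAMID SUPPORTED BY?'."""
    for base, objects in supports.items():
        if obj in objects:
            return base
    return None

print(contents("box"))                # ['blue-pyramid', 'blue-block']
print(supporter_of("blue-pyramid"))   # 'box'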
Blocks world
(Winston)
• Input was a line drawing of a blocks-world structure.
• The program produced a semantic-net representation of the structural description of the object.
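One way to store such a semantic net is as labeled edges (triples) between object nodes. The sketch below is only meant to convey the flavor of a structural description; the relation and object names are invented and this is not Winston's actual representation.

# Structural description as (object, relation, object) triples: a toy semantic net.
triples = [
    ("brick-1", "is-a", "brick"),
    ("brick-2", "is-a", "brick"),
    ("wedge-1", "is-a", "wedge"),
    ("wedge-1", "supported-by", "brick-1"),
    ("wedge-1", "supported-by", "brick-2"),
    ("brick-1", "left-of", "brick-2"),
]

def neighbors(node, relation):
    # Follow one kind of labeled edge out of a node.
    return [obj for (subj, rel, obj) in triples if subj == node and rel == relation]

print(neighbors("wedge-1", "supported-by"))   # ['brick-1', 'brick-2']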
(Figures: a biological neuron and an artificial neuron)
A two-layer neural network
• Output layer (activation represents the classification)
• Hidden layer (the "internal representation")
• Input layer (activations represent the feature vector for one training example)
• Weighted connections join the layers
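A minimal NumPy sketch of the forward pass through such a network follows. The layer sizes, sigmoid activations, and random weights are assumptions for illustration, not details taken from the slide.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 4, 3, 2      # illustrative layer sizes

W1 = rng.normal(size=(n_hidden, n_inputs))   # input -> hidden weights
W2 = rng.normal(size=(n_outputs, n_hidden))  # hidden -> output weights

x = np.array([0.2, 0.7, 0.1, 0.9])           # feature vector for one training example
hidden = sigmoid(W1 @ x)                     # the "internal representation"
output = sigmoid(W2 @ hidden)                # one activation per class
print("predicted class:", int(np.argmax(output)))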
ALVINN
(Pomerleau, 1993)
• ALVINN learns to drive an autonomous vehicle at normal speeds on
public highways (!)
• Input: 30 x 32 grid of pixel intensities from camera
• Output: each output unit corresponds to a particular steering direction; the most highly activated one gives the direction to steer.
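To make the output encoding concrete, here is a rough sketch of how a steering direction could be read off an ALVINN-style network. Only the 30 x 32 input and the "most highly activated output unit wins" rule come from the description above; the hidden-layer size, weights, and angle grid are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
image = rng.random((30, 32))                     # stand-in for camera pixel intensities

n_hidden, n_outputs = 5, 30                      # illustrative sizes
angles = np.linspace(-30.0, 30.0, n_outputs)     # hypothetical steering angles (degrees)

W_hidden = rng.normal(size=(n_hidden, 30 * 32))  # input retina -> hidden weights
W_out = rng.normal(size=(n_outputs, n_hidden))   # hidden -> steering-unit weights

hidden = np.tanh(W_hidden @ image.ravel())
output = np.tanh(W_out @ hidden)                 # one activation per steering direction
steering = angles[int(np.argmax(output))]        # most highly activated unit wins
print(f"steer {steering:.1f} degrees")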