3 approaches to AI


Com1005: Machines and Intelligence
Amanda Sharkey
3 approaches to AI
– Symbolic AI, or Traditional AI, or GOFAI (Good Old-Fashioned AI)
– Connectionism, or Neural Computing
– New AI (nouvelle AI), Adaptive Behaviour, or Embodied AI
• Changing emphasis ….
• GOFAI – emphasis on human intelligence and cognitive reasoning.
– Emphasis on representing and reasoning with knowledge.
• Turn of the millennium – interest in a wider range of biological intelligence.
– E.g. the self-organisation of ant colonies.
• New emphasis on interaction between
– Brain
– Body
– World
• Rodney Brooks, MIT Artificial Intelligence Lab
• Brooks, R.A. (1990) Elephants don't play chess. In P. Maes (Ed.), Designing Autonomous Agents. Cambridge, MA: MIT Press.
• Brooks, R.A. (1991) Intelligence without Reason. In Proceedings of the 12th International Joint Conference on Artificial Intelligence. Morgan Kaufmann.
• Brooks, R.A. (1991) Intelligence without Representation. Artificial Intelligence, 47, 139-159.
• Elephants don’t play chess – but still intelligent
• Nouvelle AI
– based on the physical grounding hypothesis.
• An intelligent system needs to have its representations grounded in the physical world.
• Needs sensors and actuators connected to the world - not
typed input and output.
• The world is its own best model – it contains every detail
– “the trick is to sense it appropriately and often enough”
• Robots operate in world, using “highly reactive
architectures, with no reasoning systems, no
manipulable representations, no symbols, and
totally decentralised computation” (Brooks,
1991).
• “I wish to build completely autonomous mobile
agents that co-exist in the world with humans,
and are seen by those humans as intelligent
beings in their own right. I will call such agents
Creatures” (Brooks, 1991)
• A Creature must cope appropriately and in a timely
fashion with changes in its dynamic environment
• A Creature should be robust with respect to its
environment:
– minor changes in the properties of the world should not affect its behaviour
• A Creature should be able to maintain multiple goals, adapt to its surroundings, and capitalise on fortuitous circumstances
• A Creature should do something in the world; it should
have some purpose in being.
• Set of principles (Brooks, 1991)
– The goal is to study complete integrated intelligent
autonomous agents
– The agents should be embodied as mobile robots, situated in the unmodified worlds found around the laboratory (embodiment)
– The robots should operate under different environmental conditions, e.g. different lighting conditions (situatedness)
– The robots should operate on timescales commensurate with the timescales used by humans (situatedness)
• Traditional AI
– Concentrates on aspects of a problem that can be solved symbolically.
– Assumes perception and recognition have already occurred.
– E.g. knowledge of a chair:
• (CAN (SIT-ON PERSON CHAIR)), (CAN (STAND-ON PERSON CHAIR))
– This representation could be used to solve the problem of a hungry person in a room with bananas just out of reach (see the sketch below).
– But how will the chair be recognised?
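As a rough illustration of this symbolic style (a minimal sketch invented for this transcript; the facts, actions, and toy planner are not from Brooks or the lecture), knowledge like the chair representation above can be written down as symbolic assertions and searched over to solve the bananas problem:

# A minimal GOFAI-style sketch: hand-coded symbolic facts plus a tiny
# depth-limited search over actions. All names here are illustrative assumptions.
FACTS = {
    ("CAN", "SIT-ON", "PERSON", "CHAIR"),
    ("CAN", "STAND-ON", "PERSON", "CHAIR"),
    ("AT", "CHAIR", "CORNER"),
    ("AT", "BANANAS", "CEILING"),
}

# Each action: (name, preconditions, effects) over symbolic assertions.
ACTIONS = [
    ("push-chair-under-bananas",
     {("AT", "CHAIR", "CORNER")},
     {("AT", "CHAIR", "UNDER-BANANAS")}),
    ("stand-on-chair",
     {("CAN", "STAND-ON", "PERSON", "CHAIR"), ("AT", "CHAIR", "UNDER-BANANAS")},
     {("ON", "PERSON", "CHAIR")}),
    ("grab-bananas",
     {("ON", "PERSON", "CHAIR")},
     {("HAS", "PERSON", "BANANAS")}),
]

def plan(state, goal, actions, depth=5):
    """Depth-limited forward search over symbolic states."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, pre, eff in actions:
        if pre <= state and not eff <= state:   # applicable and not redundant
            rest = plan(state | eff, goal, actions, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

print(plan(frozenset(FACTS), {("HAS", "PERSON", "BANANAS")}, ACTIONS))
# -> ['push-chair-under-bananas', 'stand-on-chair', 'grab-bananas']

Note that everything the planner "knows" was typed in by a programmer; as the following slides argue, recognising the chair from sensor data is the part this style of AI leaves unsolved.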
• Early robots developed by Brooks et al. at MIT
• Allen
• Can approach goal, while avoiding obstacles –
without plan or map of environment
– Distance sensors and 3 layers of control
– Layer 1: avoid static and dynamic objects – repulsed
through distance sensors
– Layer 2: randomly wander about
– Layer 3: Head towards distant places
• Subsumption Architecture
– Tight connection of perception to action
– Simple modules connected in layers
– Starting from the lowest level: e.g. obstacle avoidance
– Next level: e.g. goal finding
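A minimal sketch of how such a layered, subsumption-style controller might look in code, using the three layers described for Allen above (the sensor interface, thresholds, and the simple priority rule are assumptions invented for this transcript, not Brooks's actual implementation):

# Subsumption-style control sketch for an Allen-like robot.
# Each layer maps sensor readings to a motor command (or None, meaning
# "no opinion"), and the arbitration step picks one command per cycle.
import random

def avoid_obstacles(sonar):
    """Layer 1: turn away if any distance reading is too close."""
    if min(sonar) < 0.3:            # metres; arbitrary threshold
        return ("turn", 90)         # repulsed via the distance sensors
    return None

def wander(sonar):
    """Layer 2: occasionally pick a random new heading."""
    if random.random() < 0.1:
        return ("turn", random.randint(-180, 180))
    return None

def head_to_distant_place(sonar, goal_bearing):
    """Layer 3: steer towards a distant place."""
    return ("turn", goal_bearing)

def control_step(sonar, goal_bearing):
    # One simple arbitration rule: obstacle avoidance overrides wandering,
    # which overrides goal seeking, so the robot stays safe without any
    # plan or map of the environment.
    for command in (avoid_obstacles(sonar),
                    wander(sonar),
                    head_to_distant_place(sonar, goal_bearing)):
        if command is not None:
            return command

print(control_step(sonar=[1.2, 0.2, 2.0], goal_bearing=45))  # -> ('turn', 90)

Each layer is a complete, simple behaviour connecting sensing directly to action; nothing in the loop builds or consults a model of the world.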
• Herbert
• Coke-can collecting robot
• Laser-based table-like object finder – drives robot to table
• Laser-based coke-can-like object finder – finds coke can
• If robot stationary, arm control reaches out for coke can
• Hand – has grasp reflex when something breaks infrared beam between fingers.
• Arm locates soda can, hand positioned near can, hand grasps
can.
• No planning, or communication between modules – reactive
approach.
• No planning
• No representation of the environment
• Herbert could respond quickly to changed
circumstances
• E.g. new obstacle, or object approaching on a
collision course.
• Place a coke can in front of Herbert and it will pick it up. It has no expectations about where coke cans will be found (see the sketch below).
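A sketch of how Herbert's module-by-module, reactive organisation could be expressed in code (a toy rendering with invented sensor names and conditions; the real Herbert used dedicated hardware modules rather than a shared program like this):

# Reactive sketch in the spirit of Herbert: every module looks only at
# the world (via sensors) and acts; modules do not plan and do not talk
# to one another. Sensor names and conditions are illustrative assumptions.

def drive_module(sensors, motors):
    """Drive towards a table-like object found by the laser."""
    if sensors["table_like_object_ahead"] and not sensors["robot_stationary"]:
        motors["wheels"] = "forward"

def arm_module(sensors, motors):
    """Reach out for a coke-can-like object, but only when the robot is still."""
    if sensors["robot_stationary"] and sensors["can_like_object_seen"]:
        motors["arm"] = "extend"

def hand_module(sensors, motors):
    """Grasp reflex: close when something breaks the infrared beam."""
    if sensors["ir_beam_broken"]:
        motors["hand"] = "close"

MODULES = [drive_module, arm_module, hand_module]

def control_cycle(sensors):
    motors = {}
    for module in MODULES:          # no shared plan, no messages between modules
        module(sensors, motors)
    return motors

print(control_cycle({"table_like_object_ahead": False,
                     "robot_stationary": True,
                     "can_like_object_seen": True,
                     "ir_beam_broken": True}))
# -> {'arm': 'extend', 'hand': 'close'}

Each module watches the world and acts; none of them plans ahead or sends messages to the others, which is why a coke can placed anywhere in front of the robot triggers the same behaviour.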
• Key topics of Embodied AI
– Embodiment
– Situatedness
– Intelligence
– Emergence
• Embodiment
– According to Brooks (1991), embodiment is critical
• 1. Only an embodied agent is validated as one that can
deal with the real world.
• 2. Only through a physical grounding can any internal
symbolic system be given meaning.
• Situatedness
– A situated automaton has sensors connected to the environment and outputs connected to effectors
– Traditional AI – working in a symbolic, abstracted domain, with no real connection to the external world; dealing with a model domain.
– Situated agent – getting information from its sensors, and responding in a timely fashion
– No intervening human
– “the world is its own best model”
• Can have embodiment without situatedness
• E.g. a remote controlled car
• Or a robot that carries out a predefined plan of
action.
• Intelligence
– Brooks: “intelligence is determined by the dynamics
of interaction with the world”
– Our reasoning and language abilities are comparatively recent developments
– Simple behaviours, perception and mobility, took
much longer to evolve
– Look at simple animals
– Look at dynamics of interaction of robot with its
environment.
• Emergence
– “intelligence is in the eye of the observer” (Brooks)
– Intelligence emerges from interaction of
components of system.
– Behaviour-based approach – intelligence emerges
from interaction of simple modules
• E.g. obstacle avoidance, goal finding, wall following
modules.
• The individual components are simple, but the resulting combined behaviour appears intelligent (see the toy example below).
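To make the emergence point concrete, here is a toy example (invented for this transcript, not from Brooks): two modules, one that turns right when something blocks the way and one that otherwise drives forward, know nothing about edges or routes, yet the path they produce skirts the obstacles it meets and can look purposeful to an observer.

# Toy emergence sketch (illustrative assumptions throughout): neither module
# mentions walls, edges, or routes, but their interaction with the world
# produces a path that appears to avoid obstacles deliberately.

WORLD = [
    "..........",
    "..####....",
    "..####....",
    "..........",
]
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]       # east, south, west, north

def blocked(pos, d):
    r, c = pos[0] + DIRS[d][0], pos[1] + DIRS[d][1]
    return not (0 <= r < len(WORLD) and 0 <= c < len(WORLD[0])) or WORLD[r][c] == "#"

def avoid(pos, d):
    """Module 1: if something is directly ahead, turn right; otherwise no opinion."""
    return (d + 1) % 4 if blocked(pos, d) else None

def go_forward(pos, d):
    """Module 2: keep the current heading."""
    return d

pos, d = (1, 0), 0                               # start at the left edge, heading east
path = [pos]
for _ in range(12):
    turn = avoid(pos, d)
    d = turn if turn is not None else go_forward(pos, d)
    if not blocked(pos, d):
        pos = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
    path.append(pos)
print(path)
# The printed route turns at the obstacle block and at the edges of the world;
# to an observer it can look like deliberate navigation, though neither module
# encodes anything of the kind.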
• Main ideas of Brooks’ Behaviour-based robotics
– No central model of world
– No central locus of control
– No separation into perceptual system, central
system, and actuation system
– Layers, or behaviours run in parallel
– Behavioural competence built up by adding
behavioural modules
• Criticisms of nouvelle AI approach?
• Will the approach scale up?
– Can subsumption-like approaches scale up to
arbitrarily complex systems?
• Need for representations?
– We can get apparently sophisticated behaviour from simple reactions to the environment
– But we soon begin to need things like a map of the environment (e.g. to bring the coke can back to a bin)
• Does embodied AI solve all problems?
– Symbol grounding?
– Need for human intervention and programming?
Does embodiment solve the symbol grounding problem?
• Searle (using Chinese room) argued that
computers just manipulate symbols, without real
understanding of what those symbols refer to.
• Symbol grounding problem – how to relate
symbols to the real world?
• Rodney Brooks: claim that only through a
physical grounding can any internal symbolic
system be given meaning.
• Emphasis on connecting robots to the world,
using sensors.
• Robot says “pig” in response to a real pig detected in the world (see the toy sketch below).
• http://www.mind.ilstu.edu/curriculum
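As a toy illustration of that idea (entirely invented here; the features, threshold, and "pig detector" rule are assumptions), the symbol is produced from a sensed situation rather than typed in:

# Toy grounding sketch: the symbol "pig" comes from a sensor reading of the
# world, not from typed input. Feature values and the classifier are invented.

def detect_pig(camera_features):
    """Crude 'pig detector': a fixed rule over invented image features."""
    return camera_features.get("pinkness", 0.0) > 0.7 and \
           camera_features.get("snout_like_blob", False)

def name_what_is_seen(camera_features):
    # The symbol is tied to a sensed situation in the world ...
    if detect_pig(camera_features):
        return "pig"
    return "unknown"

print(name_what_is_seen({"pinkness": 0.9, "snout_like_blob": True}))  # -> pig
# ... but, as the next point asks, the robot is still only mapping numeric
# input to a token; whether this amounts to really seeing a pig is exactly
# the point in dispute.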
• Does connecting a robot to the world with sensors and effectors solve the symbol-grounding problem?
• No – still dealing with binary input from the world
– Not really seeing…
– Human involvement?
– Human researcher deciding on
• Which modules to add
• What the environment and task should be
• See the Lovelace objection (from the Turing test discussion): “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform” (Lady Lovelace memoir, 1842)