Review, reactive behaviour and embodiment
Adaptive Robotics
COM2110
Autumn Semester 2008
Lecturer: Amanda Sharkey
Robots in the news
Bath University: a robot that jumps like a
grasshopper and rolls like a ball
Created by PhD student Rhodri Armour
Can roll in any direction, and can jump over
obstacles
Small motors build up energy in the springy
spherical exoskeleton by compressing it.
Avoids the problems of both legged and wheeled robots
Behaviour-based robotics versus GOFAI
Control mechanisms:
Subsumption architecture
Artificial Neural Nets (ANNs)
learning rules and limitations
Genetic algorithms
Biological inspiration
Forms of learning in biological
organisms
Organisation of biological systems
Biological modelling
Examples
Early robots, humanoid robots
Examples of research papers
Applications
Last week – AI, Magic and Deception
a) Provide a brief account of the following terms, and
their relevance to behaviour-based robotics:
(i) embodiment (10%)
(ii) reactivity (10%)
(iii) stigmergy (10%)
b) Consider, with reference to the notion of autopoiesis,
whether or not strong embodiment is possible. (30%)
c) Identify and discuss what you see as the main
strengths and weaknesses of the new approach to
Artificial Intelligence. (40%)
Classical AI (GOFAI)
E.g. chess players, expert systems, or traditional planning systems like STRIPS or GPS (General Problem Solver)
Emphasis on manipulation of symbols,
planning and reasoning
Centralised systems
Sequential
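To make this concrete, here is a minimal Python sketch of the state-and-operator representation that STRIPS-style planners manipulate. The predicates and the PULLOUT action are invented for illustration (they echo the frame-problem example later in this lecture); real STRIPS uses its own representation language.

```python
# Minimal sketch of a STRIPS-style symbolic state: the world is a set of
# facts, and an action is a set of preconditions plus add/delete lists.
# Predicates and the PULLOUT action are hypothetical, for illustration only.

state = {"INSIDE(R1,ROOM)", "INSIDE(WAGON,ROOM)", "ON(BATTERY,WAGON)"}

def apply_action(state, preconditions, add_list, delete_list):
    """Apply an action if its preconditions hold; return the new state."""
    if not preconditions <= state:
        raise ValueError("preconditions not satisfied")
    return (state - delete_list) | add_list

# PULLOUT(WAGON,ROOM): the robot pulls the wagon (and itself) out of the room.
new_state = apply_action(
    state,
    preconditions={"INSIDE(R1,ROOM)", "INSIDE(WAGON,ROOM)"},
    add_list={"OUTSIDE(R1,ROOM)", "OUTSIDE(WAGON,ROOM)"},
    delete_list={"INSIDE(R1,ROOM)", "INSIDE(WAGON,ROOM)"},
)
print(new_state)  # ON(BATTERY,WAGON) persists: nothing deleted it
```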
Cognitivism
Cognition is the manipulation of abstract
representations by explicit formal rules
Knowledge of the world as sentence-like descriptions using symbols
Symbol – stands for objects and concepts
E.g. the CYC project for creating a commonsense reasoner
Problems with Classical systems
Lack of robustness
Lack of generalisation
May not perform well in noisy conditions, or when
some components break down
May not perform well in novel situations
Real-time processing
Likely to be slower, since processing is centralised
Further problems with Classical AI
Little consideration of interaction between
agent and the real world
Frame problem
How to model change
Symbol grounding
How to link the symbols being manipulated
with the real world
Frame problem
Daniel Dennett (1987)
Robot with propositional representations
E.g. INSIDE(R1,ROOM) ON(BATTERY,WAGON)
Spare battery in room with time bomb
R1 plans to pull the wagon (and the battery) out of the room. But the bomb is also on the wagon.
Its successor, R1D1, considers the implications of its actions – but is still deciding whether removing the wagon would change the colour of the walls when the bomb explodes.
Back to the drawing board. “We must teach it the difference between relevant implications and irrelevant implications.” So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in R2D1. They found it sitting outside the room.
“Do something!” they yelled. “I am,” it retorted. “I’m busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and…”
The bomb went off
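As a toy illustration of why R1D1 drowns in implications (the facts and the pairwise “does-not-affect” scheme below are invented, not from Dennett), even a modest world forces a naive deducer to check a combinatorially growing list of irrelevant consequences:

```python
from itertools import combinations

# Toy illustration of the frame problem: a naive deducer checks every
# implication of an action, relevant or not. The facts and the pairwise
# "does-not-affect" scheme are hypothetical, purely to show the blow-up.

facts = [f"FACT{i}" for i in range(20)]   # 20 innocuous facts about the room

# Does pulling out the wagon affect the wall colour? The battery? Each pair?
implications = [f"PULLOUT does not affect ({a}, {b})"
                for a, b in combinations(facts, 2)]

print(len(implications))  # 190 checks for just 20 facts: O(n^2) pairs,
# before considering higher-order interactions -- meanwhile the bomb ticks
```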
Symbol grounding problem and the Chinese Room
Gedanken (thought) experiment
Imagine a person in a room, who has a set of rule books. Sets of
symbols are passed in to them, and they can process them, using
the rule books, and send symbols out
The symbols going in are Chinese questions
The symbols going out are Chinese answers
The room seems to understand Chinese
But the person in the room does not understand Chinese
Similarly, a question answering computer program does not
understand language
Computers don’t understand – they just manipulate symbols that
are meaningless to them.
Related papers
Harnad, S. (1990) The symbol grounding problem. Physica D, 42, 335-346.
Searle, J.R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.
Behaviour-based AI
AKA – embodied cognitive science, new AI, new
wave AI
Brooks (1986): subsumption architecture
Emphasis on intelligence emerging from the interaction of the organism with the environment, and close coupling between sensors and motors
Brooks (1991): “Intelligence without Representation”, and behaviour-based robotics
Key concepts in Embodied Cognitive Science
Embodiment
Situatedness
Emphasis on interaction with the environment
Biological inspiration
Stigmergy
Emergence
Reactive behaviour
Decentralisation
Reactive robotics
Grey Walter’s electronic tortoises
Taxis and tropism
Phototropism, phototaxis
Phonotaxis
Coastal sea slug
Geotaxis, negative and positive phototaxis
Biology
Biological modelling
Cricket phonotaxis
Cataglyphis desert ant
Task allocation
Understanding by building: synthetic modelling
Biological inspiration
Sorting (Holland and Melhuish)
Stigmergy
Emergence
Minimal representation
Biological inspiration
Swarm robotics and swarm intelligence
Keep it simple: Minimal representation and
reactive systems
Innate knowledge
Fixed action patterns
Learning and evolution
Classical conditioning
Operant conditioning
Neural nets
Genetic Algorithms
Mechanisms
Subsumption architecture
Braitenberg vehicles
McCulloch and Pitts neurons
Neural Nets and learning algorithms
Main characteristics
Strengths and limitations
Hebbian learning
Delta rule
Generalised delta rule
Genetic Algorithms
Evolving neural nets
Strengths and Limitations of reactive systems
A reactive system is one “where sensors and motors are
directly linked and which always react to the same
sensory state with the same motor action” (Nolfi and
Floreano, 2000)
E.g. Grey Walter’s electronic tortoises
E.g. a reactive robot with a Braitenberg controller: a simple neural network, e.g. fully connected perceptrons without internal layers or any form of internal organisation.
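As a concrete sketch (the crossed wiring and the gain value are illustrative, in the spirit of a Braitenberg vehicle), such a controller is just a fixed mapping from sensors to motors:

```python
# Sketch of a purely reactive, Braitenberg-style controller: two light
# sensors wired directly to two motors with fixed weights. The same sensory
# state always yields the same motor action -- there is no internal state.
# The crossed wiring and the gain value are illustrative.

def reactive_controller(left_light, right_light):
    """Crossed excitatory connections: steer towards the stronger light."""
    w = 0.8                         # fixed gain; no hidden layers, no learning
    left_motor = w * right_light    # right sensor drives left motor
    right_motor = w * left_light    # left sensor drives right motor
    return left_motor, right_motor

# Light on the right: left motor spins faster, robot turns towards the light.
print(reactive_controller(left_light=0.2, right_light=0.9))
```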
Did Brooks et al reject the idea of internal
representation?
Criticisms
Criticisms of the approach: see Anderson (2003)
“No complex intelligent creature can get by without representations…” Kirsh (1991)
Criticisms
Ford et al (1994): concerned that “The situationalists are
attacking the very idea of knowledge representation –
the notion that cognitive agents think about their
environments, in large part, by manipulating internal
representations of the worlds they inhabit”
Criticisms
Vera and Simon (1993) argue that proponents of
situated action are not saying something
different to proponents of physical symbol
systems
Situated action proponents claim (according to Vera and Simon):
No internal representations
Direct access to affordances of the environment
No use of symbols
Criticisms
But Vera and Simon point out that minimal
representations are used
E.g. Pengi, where even the notion of “the bee that is chasing me now” is still a symbol.
“If the SA approach is suggesting simply that there is
more to understanding behaviour than describing
internally generated, symbolic, goal-directed planning,
then the symbolic approach has never disagreed.”(Vera
and Simon, 1993)
Pengi explanation
Agre and Chapman (1987)
Pengi is a simulated agent that plays the
video game Pengo
Plays without planning or using
representations
E.g. escaping from “the bee that is
chasing me now”, down “the corridor I’m
running down”
Also, limitations to reactive systems
Embodied and situated systems can sometimes
solve quite complicated tasks without internal
representations.
E.g. sensory-motor coordination (exploiting agent-environment interaction) can solve tasks, as the agent can select favourable sensory patterns through motor actions.
But there are limits.
Examples of problems that can be solved
through sensory-motor coordination
Perceptual aliasing
Sensory ambiguity
Clearly simple behaviours such as obstacle avoidance can be accomplished without internal representation.
Perceptual aliasing: two or more objects
generate the same sensory pattern, but require
different responses.
E.g. Khepera robot in environment with 2
objects
one with a black top, to be avoided
one with a white top, to be approached
Khepera: 8 infrared proximity sensors and a linear
camera with a view angle of 30 degrees.
If the robot approaches an object which is not in the view angle of the camera, it will receive an ambiguous sensory pattern.
Solution: turn towards the object, so it is in the view angle of the camera and the sensory pattern is disambiguated.
An example of active perception.
Similar behaviour is found in the fruit fly Drosophila, which moves to shift the perceived image to a certain location of its visual field.
But there are limits to this strategy – it will only be effective when the robot can find at least one sensory state not affected by the aliasing problem.
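A minimal sketch of this active-perception strategy (the sensor model, view-angle handling, and turn step are invented; a real Khepera would read its IR sensors and camera driver):

```python
# Sketch of active perception: if the object is outside the camera's
# 30-degree view angle, the sensory pattern is ambiguous, so the robot turns
# towards the object until the camera can disambiguate it. The perceive()
# model and the turn step are invented for illustration; true_top stands in
# for the object's actual appearance, which a real robot would not know.

def perceive(bearing_deg, true_top):
    """Camera resolves the top colour only within +/-15 degrees of centre."""
    if abs(bearing_deg) > 15:
        return "ambiguous"          # proximity sensors alone cannot tell
    return true_top

def classify_object(bearing_deg, true_top, turn_step=10):
    percept = perceive(bearing_deg, true_top)
    while percept == "ambiguous":
        # motor action chosen to improve the next sensory state
        bearing_deg -= turn_step if bearing_deg > 0 else -turn_step
        percept = perceive(bearing_deg, true_top)
    return percept

print(classify_object(bearing_deg=60, true_top="black-top"))  # -> black-top
```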
Example of active restructuring: Scheier et al (1998).
A Khepera should approach large, and avoid small, cylindrical objects in a walled area.
The robot receives information from 6 frontal proximity sensors.
A neural network was trained to discriminate between patterns corresponding to cylindrical objects and walls, or between different sizes of cylindrical object.
Poor performance on the large/small cylinder discrimination.
The problem is that sensory patterns belonging to different categories overlap.
“Put differently, the distance in sensor space for data originating from one and the same object can be large, while the distance between two objects from different categories can be small” (Scheier et al, 1998)
But Scheier et al (1998) used artificial evolution to select the weights for the robot’s neural controllers.
Near-optimal performance after 40 generations.
The fittest individuals moved in the environment until they perceived an object (large or small).
Then they circled the object.
The circling behaviour resulted in different sensory patterns for different sizes of object.
I.e. sensory-motor coordination allowed the robots to obtain sensory patterns that could be easily discriminated.
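A minimal sketch of this kind of artificial evolution (the fitness function below is a stand-in that just rewards proximity to an arbitrary target weight vector, so the code runs end to end; Scheier et al scored controllers on the actual approach/avoid task):

```python
import random

# Minimal generational GA evolving the weight vector of a robot's neural
# controller. evaluate() is a stand-in fitness function; in Scheier et al
# (1998) fitness came from performance on the real discrimination task.

N_WEIGHTS, POP, GENERATIONS = 12, 50, 40
TARGET = [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]

def evaluate(weights):
    """Stand-in fitness: negative distance to an arbitrary target vector."""
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.1):
    """Gaussian mutation of every weight."""
    return [w + random.gauss(0, sigma) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP)]

for generation in range(GENERATIONS):
    population.sort(key=evaluate, reverse=True)  # fittest first
    elite = population[: POP // 5]               # keep the best 20%
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP - len(elite))]

print("best fitness after", GENERATIONS, "generations:",
      round(evaluate(population[0]), 4))
```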
So, some difficult tasks can be solved by
exploiting environmental constraints through
sensory-motor coordination and active
perception.
But not all.
An alternative to simple reactive behaviour – robots that can exploit an internal dynamical state,
by using neural networks with recurrent connections
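A minimal sketch of such a controller (an Elman-style recurrent layer; the sizes and random weights are illustrative, standing in for a trained network): feeding the hidden activations back as context gives the robot an internal state, so the same sensory input can produce different motor outputs depending on history.

```python
import math, random

# Minimal Elman-style recurrent controller: hidden activations are fed back
# as context at the next time step, giving the network internal state.
# Layer sizes and the random weights are illustrative, not a trained network.

N_IN, N_HID, N_OUT = 6, 4, 2   # e.g. 6 proximity sensors, 2 motor outputs

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(N_HID, N_IN)    # sensors -> hidden
W_ctx = rand_matrix(N_HID, N_HID)  # previous hidden (context) -> hidden
W_out = rand_matrix(N_OUT, N_HID)  # hidden -> motors

def step(sensors, context):
    hidden = [math.tanh(sum(W_in[j][i] * sensors[i] for i in range(N_IN)) +
                        sum(W_ctx[j][k] * context[k] for k in range(N_HID)))
              for j in range(N_HID)]
    motors = [math.tanh(sum(W_out[m][j] * hidden[j] for j in range(N_HID)))
              for m in range(N_OUT)]
    return motors, hidden          # hidden becomes the next step's context

context = [0.0] * N_HID
same_input = [0.5] * N_IN
for t in range(3):                 # identical input, yet outputs change:
    motors, context = step(same_input, context)
    print(t, [round(m, 3) for m in motors])
```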
Summary to date
Classical AI vs Behaviour-based AI
What is reactive behaviour?
What can it accomplish?
Simple tasks, e.g. obstacle avoidance
Some harder tasks by exploiting the environment and
active perception
Limits – examples of tasks that require some internal state.
Tasks that require reasoning:
- activities that involve predicting the behaviour of other
agents
- activities which require responses to actions in the future, e.g. avoiding future dangers
- activities that require understanding from an objective perspective, e.g. following advice, or a new recipe
- problem solving, e.g. how many sheets of paper are needed to wrap a package
- creative activities, e.g. language use, musical performance.
Mataric (2001) identifies behaviour-based systems as an alternative to reactive systems.
She identifies strengths and weaknesses of
reactive systems.
Strengths: real-time responsiveness, scalability,
robustness
Weaknesses: lack of state, inability to look into past
or future.
Mataric (2001): characterisation of types of control
Reactive control: don’t think, react
Deliberative control: think hard, then act.
Hybrid control: think and act independently in
parallel
Behaviour-based control: think the way you
act.
Behaviour-based control
Behaviours added incrementally – simplest
first.
Behavioural modules can use internal
representations when necessary
But no centralised knowledge
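A sketch of this organisation (behaviour names, sensor fields, and the fixed-priority arbitration are illustrative, in the spirit of subsumption, where higher behaviours subsume lower ones; see the arbitrate() loop):

```python
# Sketch of behaviour-based control: independent behavioural modules run over
# the same sensor readings, and a fixed-priority arbiter lets a higher
# behaviour subsume the ones below it. Names and commands are illustrative.

def avoid_obstacles(sensors):      # simplest behaviour, added first
    if sensors["proximity"] > 0.8:
        return "turn_away"
    return None                    # no opinion: defer to lower layers

def go_to_light(sensors):          # added later, layered on top of wandering
    if sensors["light"] > 0.3:
        return "steer_to_light"
    return None

def wander(sensors):               # default behaviour, always has an output
    return "move_forward"

BEHAVIOURS = [avoid_obstacles, go_to_light, wander]  # highest priority first

def arbitrate(sensors):
    """Return the command of the highest-priority behaviour with an opinion."""
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:
            return command

print(arbitrate({"proximity": 0.9, "light": 0.5}))  # -> turn_away
print(arbitrate({"proximity": 0.1, "light": 0.5}))  # -> steer_to_light
```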
Related idea: action-oriented representations (Clark, 1997)
Use of minimal representations – e.g. when looking for a coffee cup, you search for a yellow object.
Partial models of the world which include only those aspects that are necessary to allow agents to achieve their goals (Brooks, 1991).
But if the same body of information were needed in several activities, it might be more economical to deploy a more action-neutral encoding.
E.g. if knowledge about an object’s location is to be used for many different purposes, it might be better to generate a single action-independent inner map that could be accessed by multiple routines.
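A small sketch of the contrast (the map structure and the two routines are hypothetical): one action-independent map of object locations, consulted by several different routines, rather than a separate action-oriented encoding per activity.

```python
# Sketch of an action-neutral encoding: a single shared map of object
# locations that multiple routines consult. The map contents and the two
# routines are hypothetical, purely to illustrate the design choice.

world_map = {"coffee_cup": (2.0, 3.5), "charger": (0.0, 0.0)}

def route_to(obj):
    """Navigation routine: plans against the shared map."""
    return f"plan path to {world_map[obj]}"

def within_reach(obj, arm_pos, reach=1.0):
    """Manipulation routine: reads the same representation."""
    x, y = world_map[obj]
    ax, ay = arm_pos
    return (x - ax) ** 2 + (y - ay) ** 2 <= reach ** 2

# Updating the map once ("the cup moved") keeps every activity consistent.
print(route_to("coffee_cup"))
print(within_reach("coffee_cup", arm_pos=(2.5, 3.0)))  # -> True
```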
Reactive issue
Change from traditional AI: different emphasis on
importance of mental representations.
Embodied and situated approach: minimal internal
representations best viewed as partial models of the
world which include only those aspects that are
necessary to allow agents to achieve their goals
(Brooks, 1991)
Embodiment – robots deal with real
objects in the real world, not symbols
Does that mean they can really be said to
be intelligent and capable of thought?
Discuss…
Embodiment
Key concept in embodied and situated AI
Idea that robots are physically embodied and
can act on the world
Does the use of embodied robots make
Strong AI possible?
Weak AI: the computer is a valuable tool for the study of mind – i.e. hypotheses can be formulated and tested rigorously
Strong AI: an appropriately programmed computer really is a mind, can be said to understand, and has cognitive states.
Strong AI: “the implemented program, by itself, is
constitutive of having a mind. The implemented
program, by itself, guarantees mental life” Searle (1997)
Problems: how can symbols have
meaning? (Searle and Chinese room)
Two possible solutions
Symbol grounding (not covered here)
Situated and embodied cognition
Situated and Embodied cognition
Exemplified by Rodney Brooks
Approach emphasises construction of physical
robots embedded in and interacting with the
environment
No central controller
Subsumption architecture
No symbols to ground
Intelligence is found in interaction of robot with its
environment.
Is strong embodiment possible?
Sharkey and Ziemke (1998) – only weak
embodiment
Robots are allopoietic, not autopoietic
machines.
Autopoiesis
Etymologically, it means ‘self-making’
Abstract description of self organising systems.
Maturana and Varela (1972) Autopoiesis and
Cognition: the realisation of the living.
An autopoietic system is defined in terms of its
organisation:
“a network of processes of production (transformation and destruction) of components that produces the components which (i) through their interactions and transformation continuously regenerate the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete entity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network.” (Maturana and Varela, 1980)
For life, embodiment is required.
“Autopoiesis in the physical space [is] a
necessary and sufficient condition for a
system to be a living one.” (Maturana and
Varela, 1980)
Living organisms – organised as a unitary whole
Basic phenomenon: self organisation of a single
cell.
Core of autopoiesis: the self-production of the
organism’s boundary as a unitary system
“a living system is an autopoietic machine whose
function it is to create and maintain the unity that
distinguishes it from the medium in which it exists”
(Sharkey and Ziemke, 2000)
Maturana and Varela:
Machines made by humans are allopoietic.
The components are produced by other
processes that are independent of the
organisation of the machine.
Robots – no multicellular solidarity (or living cells). Sensors, controllers and actuators are not integrated into the body.
Cell: first living system which determines its own
boundaries (cell membrane)
Emphasis on self-constructed, self-maintaining
bodily boundary.
An autopoietic machine such as a living system
is a special type of homeostatic machine for
which the fundamental variable to be maintained
is its own organisation.
Imagine a colony of self-maintaining robots
Assembler robots to build robots out of parts
Transplant robots able to replace damaged parts
Tinker robots able to manage some types of damage
Some artificial evolution, improving their design.
But still engineering – the constituents of the
system will not have been autonomously
generated, but manufactured.
Is strong embodiment possible?
A robot is not a living system, and not
autopoietic, so it does not experience the world
– there is no “self” there to do the experiencing.
Its ‘experience’ is no different from that of an electronic tape measure
(No phenomenal embodiment)
Also there is an important difference between
animals and robots.
Animals have coevolved with their environments
In a robot, the intimate relationship between the
body of a living organism and its environment is
missing.
(no mechanistic embodiment)
Weak embodiment is possible.
Possible to model mechanistic theories of animal
behaviour
Possible to use robots to study how artificial agents
can enact their own environmental embedding.
I.e. we can study an allopoietic machine as if it were autopoietic, and this can yield scientific insights.
Summary
Classical AI vs Embodied Cognitive Science
What is a reactive robot?
Newer approach emphasises connection to the
environment and embodiment
Limitations of reactive systems
Minimal representations and interaction with
environment
Can a robot really be embodied?
Does embodiment solve problems of Strong AI?
Not living, and not autopoietic
Has not evolved together with environment
Where will adaptive robotics go next?
Beyond stigmergy – local communication
Epigenetic systems – development of knowledge and
representation as a result of interaction with the
world
Language evolution
Social robotics – capitalising on our
anthropomorphism.