Transcript MIF24-2014-1-enx

Introduction to Developmental Learning
11 March 2014
[email protected]
http://www.oliviergeorgeon.com
Old dream of AI
Instead of trying to produce a program to simulate the adult mind, why
not rather try to produce one which simulates the child's? If this were
then subjected to an appropriate course of education one would obtain
the adult brain.
Presumably, the child brain is something like a notebook […]. Rather
little mechanism, and lots of blank sheets. […]. Our hope is that there is
so little mechanism in the child brain that something like it can be easily
programmed. The amount of work in the education we can assume, as
a first approximation, to be much the same as for the human child.
Computing Machinery and Intelligence
(Alan Turing, 1950, Mind, a philosophy journal).
Is it even possible?
No?
• Spiritualist vision of consciousness (it would require a soul).
• Causal openness of physical reality (quantum theory).
• Too complex.
• …
Yes?
• Materialist theory of consciousness
– (Julien Offray de La Mettrie, 1709-1751).
• Consciousness as a computational process
– (Chalmers 1994) http://consc.net/papers/computation.html
Outline
• Example
– Demo of developmental learning.
• Theoretical bases
– Pose the problem.
– The question of self-programming.
• Exercise
– Implement your self-programming agent.
Example 1
6 experiments, 2 results, 10 interactions (value):
i1 (5), i2 (-10), i3 (-3), i4 (-3), i5 (-1), i6 (-1), i7 (-1), i8 (-1), i9 (-1), i10 (-1)

The coupling agent/environment offers hierarchical sequential regularities of interactions, for example:
• After i8, i8 can often be enacted again.
• After i7, attempting i1 or i2 results more likely in i1 than in i2.
• After i8, the sequence i9, i3, i1 can often be enacted.
• After i9, i3, i1, i8, the sequence i4, i7, i1 can often be enacted.
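For illustration only (not from the course material), a minimal Java sketch of how an agent might tally such first-order regularities: count how often each interaction follows each other interaction, then predict the most frequent successor. All identifiers are hypothetical.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: tally first-order sequential regularities,
// e.g. "after i8, i8 can often be enacted again".
public class RegularityTracker {
    // counts.get(prev).get(next) = number of times 'next' followed 'prev'
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    public void record(String prev, String next) {
        counts.computeIfAbsent(prev, k -> new HashMap<>())
              .merge(next, 1, Integer::sum);
    }

    // Most frequent successor of 'prev' observed so far, or null if none.
    public String predictNext(String prev) {
        Map<String, Integer> successors = counts.get(prev);
        if (successors == null) return null;
        return successors.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}

Hierarchical regularities such as "after i9, i3, i1" would require keys over sequences rather than single interactions; the same counting idea applies.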
Example 1: a simple robot
• Move forward (5) or bump (-10).
• Turn left / right (-3).
• Touch (feel) right / front / left (-1).
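These interactions and their innate values could be written down as a simple value table; a hypothetical Java encoding (identifiers invented for illustration):

import java.util.Map;

// Hypothetical encoding of the robot's primitive interactions and values.
public class RobotValues {
    static final Map<String, Integer> value = Map.of(
            "moveForward", 5,
            "bump", -10,
            "turnLeft", -3, "turnRight", -3,
            "feelRight", -1, "feelFront", -1, "feelLeft", -1);
}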
Theoretical bases
• Philosophy of mind.
• Epistemology (theory of knowledge).
• Developmental psychology.
• Biology (autopoiesis, enaction).
• Neurosciences.
Philosophy : is it possible?
• John Locke (1632 – 1704)
– « Tabula Rasa »
• La Mettrie (1709-1751).
– « Matter can think »
• David Chalmers
– A Computational Foundation for the
Study of Cognition (1994)
• Daniel Dennett
– Consciousness explained (1991)
– Free will, individual choice, self-motivation, determinism…
Key philosophical ideas for DAI
• Cognition as computation in the broad sense.
– Causal structure.
– Example: a neural net with chemistry (neurotransmitters, hormones, etc.).
• Determinism does not contradict free will.
– Do not mistake determinism for predictability.
– Hervé Zwirn (Les systèmes complexes, 2006)
Epistemology (what can I know?)
• Concept of ontology: the study of the nature of being
– Aristotle (384 – 322 BC).
– Onto: « being », logos: « discourse ».
– Discourse on the properties and categories of being.
• Reality as such is unknowable
– Immanuel Kant (1724 – 1804)
Key epistemological ideas for DAI
• Implement the learning mechanism with no ontological assumptions.
– Agnostic agents (Georgeon 2012).
– The agent will never know its environment as we see it.
• But with interactional assumptions:
– Predefine the possibilities of interaction between the agent and its environment.
– Let the agent construct its own ontology of the environment through its experience of interaction.
Developmental psychology (how can I know?)
• Developmental learning
– Jean Piaget (1896 – 1980)
– Teleology / motivational principles: “the individual self-finalizes recursively”.
– Do not separate perception and action a priori: notion of the sensorimotor scheme.
• Constructivist epistemology
– Jean-Louis Le Moigne (1931 – )
– Ernst von Glasersfeld.
• Knowledge is an adaptation in the functional sense.
Indicative developmental stages
• Month 4: “Bayesian prediction”.
• Month 5: Models of hand movement.
• Month 6: Object and face recognition.
• Month 7: Persistence of objects.
• Month 8: Dynamic models of objects.
• Month 9: Tool use (bring a cup to the mouth).
• Month 10: Gesture imitation, crawling.
• Month 11: Walking with the help of an adult.
• Month 15: Walking alone.
Key psychological ideas for DAI
• Think in terms of « interactions » rather than separating perception and action a priori.
• Focus on an intermediary level of intelligence:
– Semantic cognition (Manzotti & Chella 2012)

High level: reasoning and language.
Intermediary level: semantic cognition.
Low level: stimulus-response adaptation.
Biology (why know?)
• Autopoiesis
– auto: self, poiesis: creation
– Maturana (1972)
– Structural coupling agent/environment.
– Relational domain (the space of possibilities of interaction).
• Homeostasis
– Internal state regulation.
– Self-motivation.
• Theory of enaction
– Self-creation through interaction with the environment.
– Enactive Artificial Intelligence (Froese and Ziemke 2009).
Key ideas from biology for DAI
• Constitutive autonomy is necessary for sense-making.
– Evolution of the possibilities of interaction during the system’s life.
– Individuation.
• Design systems capable of programming themselves (sketched below).
– The data that is learned is not merely parameter values but executable data.
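As a hypothetical sketch of what "executable data" could mean in practice (names invented, not from the course): a learned sequence of interactions can be stored as a composite that is itself enactable, so the agent's knowledge literally runs.

import java.util.List;

// Hypothetical sketch: knowledge as executable data.
interface Enactable {
    boolean enact(); // true if the enaction succeeded as intended
}

// A learned sequence is itself enactable: re-enacting it replays its steps.
class CompositeInteraction implements Enactable {
    private final List<Enactable> steps;
    CompositeInteraction(List<Enactable> steps) { this.steps = steps; }
    public boolean enact() {
        for (Enactable step : steps)
            if (!step.enact()) return false; // a step failed: the composite is interrupted
        return true;
    }
}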
Neurosciences
Many levels of analysis.
A lot of plasticity AND a lot of pre-wiring.
Neuroscience
• Connectome of C. elegans: 302 neurons.
– The connectome is entirely inborn rather than acquired through experience.
Human connectome
http://www.humanconnectomeproject.org
Neurosciences
Examples of mammalian brains
• No qualitative rupture: human cognitive functions (e.g., language, reasoning) rely on brain structures that also exist in other mammalian brains. (This does not mean there are no innate differences!)
• The brain serves to organize behaviors in time and space.
Key neuroscience ideas for DAI
• Renounce the hope that it will be simple.
• Maybe begin at an intermediary level and go down if
it does not work?
• Biology can be a source of inspiration
– Biologically Inspired Cognitive Architectures.
• Importance of the capacity to internally simulate
courses of behaviors.
Key ideas of the key ideas
• The objective is to learn (discover, organize, and exploit) regularities of interaction in time and space, in order to satisfy innate criteria (survival, curiosity, etc.).
• Without pre-encoded ontological knowledge.
• Which allows a kind of constitutive autonomy (self-programming).
Teaser for next course
Exercise
Exercise
• Two possible experiences: E = {e1, e2}
• Two possible results: R = {r1, r2}
• Four possible interactions: E × R = {i11, i12, i21, i22}
• Two environments:
– env1: e1 -> r1, e2 -> r2 (i12 and i21 are never enacted)
– env2: e1 -> r2, e2 -> r1 (i11 and i22 are never enacted)
• Three motivational systems:
– mot1: v(i11) = v(i12) = 1, v(i21) = v(i22) = -1
– mot2: v(i11) = v(i12) = -1, v(i21) = v(i22) = 1
– mot3: v(i11) = v(i21) = 1, v(i12) = v(i22) = -1
• Implement an agent that learns to enact positive interactions without knowing a priori its motivational system (mot1, mot2, or mot3) or its environment (env1 or env2). A data-only encoding of these definitions is sketched below.
• Write a behavioral-analysis report based on the activity traces.
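These definitions can be written down as plain data before any agent logic exists; a hypothetical Java encoding (labels follow the slide):

import java.util.Map;

// Hypothetical data-only encoding of the exercise definitions.
public class ExerciseData {
    // Environments: which result each experience produces.
    static final Map<String, String> env1 = Map.of("e1", "r1", "e2", "r2");
    static final Map<String, String> env2 = Map.of("e1", "r2", "e2", "r1");
    // Motivational systems: the value of each interaction i11..i22.
    static final Map<String, Integer> mot1 = Map.of("i11", 1, "i12", 1, "i21", -1, "i22", -1);
    static final Map<String, Integer> mot2 = Map.of("i11", -1, "i12", -1, "i21", 1, "i22", 1);
    static final Map<String, Integer> mot3 = Map.of("i11", 1, "i21", 1, "i12", -1, "i22", -1);
}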
No hard-coded knowledge of the environment
(Counter-example: the agent below cheats by branching on which environment and motivation it is coupled with.)

class Agent {
    …
    public Experience chooseExperience() {
        // Forbidden: the agent must not know env or mot a priori.
        if ((env == env1 && mot == mot1) || (env == env2 && mot == mot2))
            return e1;
        else
            return e2;
    }
}
Implementation

public static Experience e1 = new Experience();
public static Experience e2 = new Experience();
public static Result r1 = new Result();
public static Result r2 = new Result();
public static Interaction i11 = new Interaction(e1, r1, 1); // etc.

public static void main(String[] args) {
    Agent agent = new Agent();
    Environnement env = new Env1(); // or new Env2();
    Result r = null;
    for (int i = 0; i < 10; i++) {
        Experience e = agent.chooseExperience(r);
        r = env.giveResult(e);
        System.out.println(e + " " + r); // trace the enacted interaction and its value
    }
}

class Agent {
    public Experience chooseExperience(Result r) { … }
}
class Environnement {
    public Result giveResult(Experience e) { … }
}
class Env1 extends Environnement { … }
class Env2 extends Environnement { … }
class Experience { … }
class Result { … }
class Interaction { // (experience, result, value)
    public int getValue() { … }
}
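One possible fleshing-out of this skeleton, offered as a sketch rather than the official solution (Env1/Env2 are collapsed into a lambda for brevity, and mot1 enters only through the interactions' values; the Agent never branches on which env or mot is in use):

import java.util.List;

public class Main {
    static class Experience { final String label; Experience(String l) { label = l; } }
    static class Result { final String label; Result(String l) { label = l; } }
    static class Interaction {
        final Experience experience; final Result result; final int value;
        Interaction(Experience e, Result r, int v) { experience = e; result = r; value = v; }
        int getValue() { return value; }
    }
    interface Environnement { Result giveResult(Experience e); }

    static final Experience e1 = new Experience("e1"), e2 = new Experience("e2");
    static final Result r1 = new Result("r1"), r2 = new Result("r2");

    // mot1: interactions formed from e1 are pleasant, those from e2 unpleasant.
    static final List<Interaction> interactions = List.of(
            new Interaction(e1, r1, 1), new Interaction(e1, r2, 1),
            new Interaction(e2, r1, -1), new Interaction(e2, r2, -1));

    // The value of interaction (e, r), read from the agent's innate motivation.
    static int valueOf(Experience e, Result r) {
        for (Interaction i : interactions)
            if (i.experience == e && i.result == r) return i.getValue();
        return 0;
    }

    static class Agent {
        Experience previous = null;
        Experience chooseExperience(Result r) {
            if (previous != null && valueOf(previous, r) > 0)
                return previous;                    // last interaction was positive: repeat it
            previous = (previous == e1) ? e2 : e1;  // otherwise try the other experience
            return previous;
        }
    }

    public static void main(String[] args) {
        Agent agent = new Agent();
        Environnement env = e -> (e == e1 ? r1 : r2); // env1; swap the mapping for env2
        Result r = null;
        for (int step = 0; step < 10; step++) {      // print the activity trace
            Experience e = agent.chooseExperience(r);
            r = env.giveResult(e);
            System.out.println(step + ": " + e.label + r.label + " value=" + valueOf(e, r));
        }
    }
}

Under each environment/motivation pairing this agent switches experience at most once before settling on a positive interaction; the printed activity trace makes that behavior visible for the report.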