Motivated Learning
Mental Development and
Representation Building through
Motivated Learning
Janusz A. Starzyk, Ohio University, USA,
Pawel Raif, Silesian University of Technology, Poland,
Ah-Hwee Tan, Nanyang Technological University, Singapore
2010 International Joint Conference on Neural Networks, Barcelona
Outline
• Embodied Intelligence (EI)
• Embodiment of Mind
• Computational Approaches to
Machine Learning
• How to Motivate a Machine
• Motivated Learning (ML)
• Building representation through
motivated learning
– ML agent in "Normal" vs. "Graded"
Environment
– ML agent vs. RL agent in "Graded"
Environment
• Future work
Traditional AI
• Abstract intelligence
– attempt to simulate
“highest” human faculties:
• language, discursive reason,
mathematics, abstract
problem solving
• Environment model
– a condition for solving problems
in an abstract way
– "brain in a vat"
Embodied Intelligence
• Embodiment
– knowledge is implicit in the fact
that we have a body
• embodiment supports brain
development
• Intelligence develops through
interaction with environment
– Situated in environment
– Environment is its best model
Embodied Intelligence
Definition
Embodied Intelligence (EI) is a
mechanism that learns how to
minimize hostility of its
environment
• Mechanism: biological, mechanical or virtual agent
with embodied sensors and actuators
• EI acts on the environment and perceives its actions
• Environment hostility is persistent and stimulates EI to act
• Hostility: direct aggression, pain, scarce resources, etc.
• EI learns, so it must have an associative self-organizing memory
• Knowledge is acquired by EI
Intelligence
An intelligent agent learns how
to survive in a hostile environment.
Embodiment of a Mind
Embodiment is the part of the environment under
control of the mind.
It contains the intelligence core and sensory-motor
interfaces to interact with the environment.
It is necessary for the development of intelligence.
It is not necessarily constant.
[Diagram: the intelligence core inside the embodiment communicates with the environment through sensor and actuator channels.]
Embodiment of Mind
Changes in embodiment modify the
brain's self-determination.
The brain learns its own body's
dynamics.
Self-awareness is a result of
identification with one's own embodiment.
Embodiment can be extended by
using tools and machines.
Successful operation is a function
of correct perception of the
environment and of one's own embodiment.
Computational Approaches to
Machine Learning
Machine learning approaches: supervised, unsupervised, and reinforcement learning.
These approaches face problems with complex environments and a lack of motivation.
This leads to Motivated Learning: its definition and the need for benchmarks.
How to Motivate a Machine ?
A fundamental question is how to
motivate an agent to do anything,
and in particular, to enhance its
own complexity.
What drives an agent to explore
the environment, build
representations, and learn
effective actions?
What makes it a successful learner
in changing environments?
How to Motivate a Machine ?
Although artificial curiosity helps to
explore the environment, it leads to
learning without a specific purpose.
We suggest that the hostility of the environment, required
for EI, is the most effective motivational factor.
Both are needed - hostility of the environment and
intelligence that learns how to reduce the pain.
Figure credit: englishteachermexico.wordpress.com
Motivated Learning
Definition*: Motivated learning (ML) is pain-based
motivation, goal creation, and
learning in an embodied agent.
It uses externally defined pain signals.
The machine is rewarded for minimizing the
primitive pain signals.
The machine creates abstract goals based on the
primitive pain signals.
It receives internal rewards for satisfying its
abstract goals.
ML applies to EI working in a hostile
environment.
*J. A. Starzyk, Motivation in Embodied Intelligence, Frontiers in Robotics,
Automation and Control, I-Tech Education and Publishing, Oct. 2008, pp. 83-110.
Neural self-organizing structures in ML
Goal creation scheme:
– a primitive pain is directly sensed
– an abstract pain is introduced by solving a lower-level pain
– a thresholded curiosity-based pain
Motivations and selection of a goal:
– a WTA competition selects the motivation
– another WTA selects its implementation
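The two-stage selection described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes pain signals are stored as a simple list and learned action values as a per-goal list, with a winner-take-all (WTA) function that just picks the strongest signal.

```python
# Hypothetical sketch of the goal-selection scheme: one WTA competition over
# pain signals selects the dominant motivation, a second WTA over learned
# action values selects how to implement the chosen goal.

def wta(values):
    """Winner-take-all: return the index of the strongest signal."""
    return max(range(len(values)), key=lambda i: values[i])

def select_goal(pains, action_values):
    """pains: current pain levels (primitive and abstract).
    action_values[g][a]: learned value of action a for goal g."""
    goal = wta(pains)                  # first WTA: dominant motivation
    action = wta(action_values[goal])  # second WTA: best implementation
    return goal, action

pains = [0.2, 0.9, 0.4]  # the second pain (e.g. low food supplies) dominates
action_values = [[0.1, 0.5], [0.8, 0.3], [0.2, 0.2]]
print(select_goal(pains, action_values))  # -> (1, 0)
```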
Building
representation
through motivated
learning
Experiments…
Base Task Specification
• Environment
The environment consists of six different categories of resources.
Five of them have limited availability.
One, the most abstract resource, is inexhaustible.
Resource hierarchy, from the least abstract to the most abstract:
Food, Grocery, Bank, Office, School, Sandbox.
Base Experiment - Task Specification
The agent uses resources by performing proper actions. There are
36 possible actions, but only six of them are meaningful, and in a
given situation (the environment's and agent's state) there is usually
one best action to perform.
The problem is to determine which action should be performed
to renew, in time, the most needed resource.
Meaningful sensory-motor pairs and their effect on the
environment:
Id | SENSORY | MOTOR    | INCREASES         | DECREASES         | PAIR Id
0  | Food    | Eat      | Sugar level       | Food supplies     | 0
1  | Grocery | Buy      | Food supplies     | Money at hand     | 7
2  | Bank    | Withdraw | Money at hand     | Spending limits   | 14
3  | Office  | Work     | Spending limits   | Job opportunities | 21
4  | School  | Study    | Job opportunities | Mental state      | 28
5  | Sandbox | Play     | Mental state      | -                 | 36
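The table above can be encoded directly as a transition rule: each meaningful (sensory, motor) pair increases one resource and depletes the one below it in the hierarchy. The resource and action names come from the slides; the dictionary structure and the unit update amount are illustrative assumptions.

```python
# Illustrative encoding of the sensory-motor table: each meaningful pair
# increases one resource and decreases another; the most abstract resource
# (Mental state via Play) depletes nothing.

PAIRS = {
    ("Food", "Eat"):      ("Sugar level",       "Food supplies"),
    ("Grocery", "Buy"):   ("Food supplies",     "Money at hand"),
    ("Bank", "Withdraw"): ("Money at hand",     "Spending limits"),
    ("Office", "Work"):   ("Spending limits",   "Job opportunities"),
    ("School", "Study"):  ("Job opportunities", "Mental state"),
    ("Sandbox", "Play"):  ("Mental state",      None),  # inexhaustible
}

def act(state, sensory, motor, amount=1.0):
    """Apply a sensory-motor pair; meaningless pairs leave the state unchanged."""
    effect = PAIRS.get((sensory, motor))
    if effect is None:
        return state  # one of the 30 meaningless action combinations
    inc, dec = effect
    state = dict(state)
    state[inc] = state.get(inc, 0.0) + amount
    if dec is not None:
        state[dec] = state.get(dec, 0.0) - amount
    return state

s = {"Sugar level": 0.0, "Food supplies": 2.0}
s = act(s, "Food", "Eat")
print(s["Sugar level"], s["Food supplies"])  # 1.0 1.0
```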
How to simulate complexity and
hostility of environment
1. Complexity
Different resources are available in the
environment.
The agent should learn the dependencies between
resources and its actions in order to operate properly.
2. Hostility
A function describes the probability of
finding each resource (Food, Grocery, Bank, Office,
School, Sandbox) in the environment.
[Figure: resource-finding probability curves in a mild environment (1) and a harsh environment (2).]
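One way such a hostility function might be simulated is by letting the probability of finding a resource decay as it is consumed, with the decay rate distinguishing a mild from a harsh environment. The exponential form and the specific rates below are assumptions for illustration; the slides only state that such a probability function exists.

```python
# Hedged sketch: resource availability decays with use; a harsh environment
# decays faster than a mild one. Decay rates are illustrative assumptions.
import math

def find_probability(times_used, decay):
    """Probability of finding a resource after it has been used times_used times."""
    return math.exp(-decay * times_used)

MILD, HARSH = 0.05, 0.5
for n in (0, 5, 10):
    print(n, round(find_probability(n, MILD), 3), round(find_probability(n, HARSH), 3))
```

Under this kind of schedule, an agent that only exploits the least abstract resource quickly exhausts it in the harsh setting, which is what motivates learning the higher-level dependencies.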
Base Experiment Results
The RL agent (left side) can learn
dependencies between only a few
basic resources.
In contrast, the ML agent is able to learn
dependencies between all
resources.
In a harsh environment
the ML agent is able to control its
environment (and limit its
'primitive pain'), but
the RL agent cannot.
[Figure: primitive pain over time for the RL agent (1) and the ML agent (2).]
ML agent in "Normal" vs. "Graded" Environment
Two kinds of environments: "normal" (1) and "graded" (2).
The "graded" environment corresponds to gradual development and
representation building.
Simulations were run in four environments with
6, 10, 14, and 18 different hierarchy levels,
each level representing a different resource.
[Figure: resource availability over time in the normal (1) and graded (2) environments.]
ML agent in "Normal" vs. "Graded" Environment
The ML agent learns more
effectively in the "graded"
environments with
gradually increasing
complexity.
In a complex
environment this
difference becomes
more significant.
"Gradual" learning is
beneficial to mental
development.
ML agent vs. RL agent in "Graded" Environment
The second group of experiments
compares the effectiveness of the ML-
and RL-based agents.
In this simulation we used
"graded" environments with gradually
increasing complexity.
We simulated environments with
6, 10, 14, and 18 levels of hierarchy.
[Figure: resource availability over time in the graded environment.]
ML agent vs. RL agent in "Graded" Environment
6 levels of hierarchy:
Initially the ML agent experiences a
primitive pain signal Pp similar to the RL agent's.
The ML agent converges quickly to a stable
performance.
10 levels of hierarchy:
Initially the RL agent experiences a lower
primitive pain signal Pp than the ML agent.
The RL agent's pain increases as the
environment becomes more hostile.
ML agent vs. RL agent in "Graded" Environment
14 levels of hierarchy:
The ML agent keeps learning, while
the RL agent exploits its early knowledge.
In effect, RL does not learn all the
dependencies in time to survive.
18 levels of hierarchy:
Results similar to those for 10 and 14 levels.
Future work
[Diagrams: a standard RL loop (state, action, reward) alongside the proposed extension, in which a goal-creation (GC) module supplies goals (motivations) to the RL loop.]
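The proposed extension of RL with a goal-creation module could be sketched as follows. This is a speculative illustration of the architecture named in the slides, not the authors' design: the class names, the pain threshold, and the way goals map to actions are all assumptions.

```python
# Hypothetical sketch of RL extended with goal creation (GC): persistent pain
# signals spawn abstract goals (motivations), and the dominant motivation
# determines which action the agent takes.

class GoalCreation:
    def __init__(self, threshold=0.8):
        self.goals = []            # abstract goals created so far
        self.threshold = threshold

    def update(self, pains):
        """Create a goal for any pain that stays above threshold,
        and return the dominant motivation (strongest pain)."""
        for name, level in pains.items():
            if level > self.threshold and name not in self.goals:
                self.goals.append(name)
        return max(pains, key=pains.get)

class MLAgent:
    def __init__(self, actions):
        self.gc = GoalCreation()
        self.actions = actions     # motivation -> action (illustrative mapping)

    def step(self, state, pains):
        motivation = self.gc.update(pains)
        return self.actions.get(motivation, "explore")

agent = MLAgent({"hunger": "Eat", "low supplies": "Buy"})
print(agent.step(state=None, pains={"hunger": 0.9, "low supplies": 0.3}))  # Eat
```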
References:
• Starzyk J.A., Raif P., Tan A.-H., Motivated Learning as an Extension of
Reinforcement Learning, Fourth International Conference on Cognitive
Systems, CogSys 2010, ETH Zurich, January 2010.
• Starzyk J.A., Raif P., Motivated Learning Based on Goal Creation in Cognitive
Systems, Thirteenth International Conference on Cognitive and Neural Systems,
Boston University, May 2009.
• Starzyk J.A., Motivation in Embodied Intelligence, Frontiers in
Robotics, Automation and Control, I-Tech Education and Publishing, Oct. 2008,
pp. 83-110.
Questions?