
PSY105 Neural Networks 5/5
5. “Function – Computation – Mechanism”
Lecture 1 recap
• We can describe patterns at one level of
description that emerge due to rules followed
at a lower level of description.
• Neural network modellers hope that we can
understand behaviour by creating models of
networks of artificial neurons.
Lecture 2 recap
• Simple model neurons
– Transmit a signal of 0 or 1 (or anything in between)
– Receive information from other neurons
– Weight this information
• Can be used to perform any computation
Lecture 3 recap
• Classical conditioning is a simple form of
learning which can be understood as an
increase in the weight (‘associative strength’)
between two stimuli (one of which is
associated with an ‘unconditioned response’)
Lecture 4 recap
• A Hebb Rule for weight change between two
neurons is:
– Δ weight = activity 1 x activity 2 x learning rate constant
• To use this rule to associate two stimuli that are separated in time, we need the neuron activity associated with each stimulus to persist in time.
– This can be implemented as an ‘eligibility trace’ (sketched below)
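A minimal sketch of this rule and trace in Python (illustrative only, not the course's own code; the function names, decay constant and learning rate are assumptions):

def decay_trace(trace, activity, decay=0.9):
    # The trace jumps up to the current stimulus activity and then decays each
    # time step, so a stimulus leaves a residue that persists after it ends.
    return max(activity, trace * decay)

def hebb_delta(trace_pre, activity_post, learning_rate=0.1):
    # Delta weight = activity 1 x activity 2 x learning rate constant, with the
    # presynaptic activity replaced by its eligibility trace.
    return trace_pre * activity_post * learning_rate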
• I present you with a robot which uses a simple
neural network to acquire classically
conditioned responses. It can, for example,
learn to associate a warning stimulus with an
upcoming wall and hence turn around before
it reaches the wall. Describe an experiment
which you would do to test the details of how
the robot learns. Say what you would do, and
what aspect(s) of the robot's learning the
results would inform you of, and why.
The problem of continuous time
[Figure: time courses of Stimulus 1 and Stimulus 2, and the eligibility traces each leaves behind]
Consequences of this implementation
• Intensity of CS stimulus
• Duration of CS stimulus
• Intensity of UCS stimulus
• Duration of UCS stimulus
• Separation in time of CS and UCS
• The order in which the CS and UCS occur
– (cf. Rescorla-Wagner discrete time model; the timing and ordering effects are sketched below)
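To illustrate the separation-in-time and ordering effects listed above, here is a sketch of a single conditioning trial under a trace-plus-Hebb rule; the number of time steps, the decay rate and the learning rate are invented for illustration:

def trial_weight_change(cs_onset, ucs_onset, n_steps=20, decay=0.9, learning_rate=0.1):
    trace = 0.0      # eligibility trace left by the CS
    delta_w = 0.0    # accumulated change in the CS -> response weight
    for t in range(n_steps):
        cs = 1.0 if t == cs_onset else 0.0
        ucs = 1.0 if t == ucs_onset else 0.0
        trace = max(cs, trace * decay)            # trace persists and decays
        delta_w += trace * ucs * learning_rate    # Hebb rule applied to the trace
    return delta_w

print(trial_weight_change(cs_onset=2, ucs_onset=4))    # short CS-UCS gap: larger change
print(trial_weight_change(cs_onset=2, ucs_onset=12))   # longer gap: smaller change
print(trial_weight_change(cs_onset=12, ucs_onset=2))   # UCS before CS: no change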
Association
We have designed an information
processing system that learns
associations
Sutton, R. S., & Barto, A. G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel & J. Moore (Eds.), Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp. 497-537). MIT Press.
[Figures: association strength (y-axis) plotted against trials (x-axis); pairing is then stopped, and without continued pairing the association declines -> extinction]
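One simple way to reproduce this acquisition-then-extinction pattern in simulation is a Rescorla-Wagner-style prediction-error update (cf. the discrete-time model cited above). This is a sketch under assumed trial counts and learning rate, not necessarily the rule the course model uses:

def acquisition_then_extinction(n_paired=30, n_unpaired=30, learning_rate=0.2):
    v = 0.0            # associative strength of the CS
    curve = []
    for trial in range(n_paired + n_unpaired):
        ucs = 1.0 if trial < n_paired else 0.0   # pairing stops after n_paired trials
        v += learning_rate * (ucs - v)           # learn from the prediction error
        curve.append(v)
    return curve

curve = acquisition_then_extinction()
# Association rises towards 1 while the CS and UCS are paired, then decays
# back towards 0 once pairing stops: extinction.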
Analysis of information processing
systems
• Function (‘computational level’)
• Computation (‘algorithmic level’)
• Mechanism (‘implementational level’)
Marr, D. (1982). Vision. San Francisco: W. H. Freeman.
Marrian analysis of classical conditioning
• Function: learn to predict events based on past
experience
• Computation: Stimuli evoke ‘eligibility traces’.
A Hebb Rule governs changes in weights [plus other assumptions, which are always needed when you try to make a computational recipe]
• Mechanism: At least one response neuron, one
unconditioned stimulus neuron and one neuron
for each conditioned stimulus
Kim, J. J., & Thompson, R. F. (1997). Cerebellar circuits and synaptic mechanisms involved in classical eyeblink conditioning. Trends in Neurosciences, 20(4), 177-181.
Marrian analysis: a simple example
• Function
• Computation
• Mechanism
Theory <-> Experiments
Synthesis <-> Analysis
Our classical conditioning networks
[Network diagram: stimulus units CS1, CS2 and UCS, an S-S link, an internal representation of the conditioned stimulus, and response units]
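A minimal sketch of a network with these parts, in Python; the class, its attribute names and the constants are assumptions made for illustration, not the course's implementation:

class ConditioningNetwork:
    def __init__(self, n_cs=2, decay=0.9, learning_rate=0.1):
        self.weights = [0.0] * n_cs     # S-S link strength for each CS
        self.traces = [0.0] * n_cs      # eligibility trace for each CS
        self.decay = decay
        self.learning_rate = learning_rate

    def step(self, cs_activities, ucs_activity):
        # Update each CS trace, then apply the Hebb rule against the UCS activity.
        for i, cs in enumerate(cs_activities):
            self.traces[i] = max(cs, self.traces[i] * self.decay)
            self.weights[i] += self.traces[i] * ucs_activity * self.learning_rate
        # Internal representation: UCS input plus what the CS traces now contribute.
        internal = ucs_activity + sum(w * t for w, t in zip(self.weights, self.traces))
        return internal     # drives the response

net = ConditioningNetwork()
response = net.step(cs_activities=[1.0, 0.0], ucs_activity=1.0)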
The lab: using Marrian analysis to make predictions
Function
• What is the purpose of learning for an animal?
– Does our model behave in a sensible (‘adaptive’)
way when it follows our rule?
– Is the rule sufficient to explain animal learning?
• Test: think of a way you would want the model/robot to behave, and test whether it does
Computation
• Intensity of CS stimulus
• Duration of CS stimulus
• Intensity of UCS stimulus
• Duration of UCS stimulus
• Separation in time of CS and UCS
• The order in which the CS and UCS occur
– (cf. Rescorla-Wagner discrete time model)
• The learning rate
• The rate of decay of the trace
• The frequency of pairing
– (varying these in simulation is sketched below)
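As a sketch of how these computational parameters could be manipulated in simulation; all the specific values below are illustrative assumptions, not the lab's actual settings:

def learn_weight(n_pairings, gap=2, decay=0.9, learning_rate=0.1, n_steps=20):
    # Repeat CS -> UCS pairings with a fixed gap; return the final CS weight.
    weight = 0.0
    for _ in range(n_pairings):
        trace = 0.0
        for t in range(n_steps):
            cs = 1.0 if t == 0 else 0.0
            ucs = 1.0 if t == gap else 0.0
            trace = max(cs, trace * decay)
            weight += trace * ucs * learning_rate
    return weight

for rate in (0.05, 0.1, 0.2):            # manipulate the learning rate
    print(rate, learn_weight(n_pairings=10, learning_rate=rate))
for decay in (0.5, 0.9):                 # manipulate the rate of decay of the trace
    print(decay, learn_weight(n_pairings=10, decay=decay))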
Mechanism
[Network diagram, as above: stimulus units CS1, CS2 and UCS, an S-S link, and response units]
• What is your prediction? What will you do to
the rule or the environment?
• How will you know if it has been confirmed or
falsified?