Synaptic Dynamics: Unsupervised Learning

Synaptic Dynamics: Unsupervised Learning
Part I
Xiao Bing
[Diagram: Input → processing unit → Output]
Outline
• Learning
• Supervised Learning and Unsupervised Learning
• Supervised Learning and Unsupervised Learning in neural networks
• Four Unsupervised Learning Laws
Learning
• Encoding:
A system learns a pattern if the system encodes the pattern in its structure.
• Change:
A system learns, adapts, or “self-organizes” when sample data changes system parameters.
• Quantization:
A system learns only a small proportion of all patterns in the sampled pattern environment, so quantization is necessary.
Encoding
• A system has learned a stimulus-response pair (x_i, y_i) if it responds with y_i when presented with the stimulus x_i:
  x_i → S → y_i
• If (x_i, y_i) is a sample from the function f : R^n → R^p, the system has learned f if it responds with y_i = f(x_i) for all x_i.
Encoding
• A system has partially learned, or approximated, the function f if it responds with y′ close to y = f(x) whenever the stimulus x′ is close to x:
  x′ (close to x) → S → y′ (close to y)
Change
• We have learned calculus if our calculus-exam behavior has changed from failing to passing.
• A system learns when pattern stimulation changes a memory medium and leaves it changed for some comparatively long stretch of time.
Change
Please note:
• We identify learning with change in a synapse, not in a neuron.
Quantization
Pattern space → (sampling) → Sampled pattern space → (quantizing) → Quantized pattern space
• A uniform sampling probability provides an information-theoretic criterion for an optimal quantization.
Quantization
1.Learning replaces old stored patterns
with new patterns and forms
“internal representations” or
prototypes of sampled patterns.
2.Learned prototypes define quantized
patterns.
Quantization
• In neural network models, prototype patterns are represented as vectors of real numbers, so learning corresponds to “adaptive vector quantization” (AVQ).
Quantization
Process of learning:
• Quantize the pattern space R^n into k regions of quantization, or decision classes.
• Learned prototype vectors define synaptic points m_i.
• The system learns if and only if some point m_i moves in the pattern space R^n, as sketched in the example below.
(See also Figure 4.1, page 113.)
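To make the AVQ picture concrete, here is a minimal winner-take-all sketch in Python. It is not from the slides: the function name train_avq, the learning rate, the epoch count, and the random initialization are illustrative assumptions. Prototype vectors stand in for the synaptic points m_i, and each sample moves only the nearest prototype, so learning happens exactly when some m_i moves in pattern space.

```python
import numpy as np

def train_avq(samples, k, lr=0.05, epochs=10, seed=0):
    """Minimal adaptive vector quantization sketch.

    Prototype vectors play the role of synaptic points m_i: each
    training sample moves only the closest ("winning") prototype, so
    the system learns if and only if some m_i moves in pattern space.
    """
    rng = np.random.default_rng(seed)
    # Start the k prototypes at randomly chosen training samples.
    protos = samples[rng.choice(len(samples), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in samples:
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            protos[winner] += lr * (x - protos[winner])   # move winner toward x
    return protos

# Usage: quantize 500 two-dimensional patterns into k = 4 decision classes.
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))
print(train_avq(data, k=4))
```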
Outline
• Learning
• Supervised Learning and Unsupervised Learning
• Supervised Learning and Unsupervised Learning in neural networks
• Four Unsupervised Learning Laws
Supervised Learning and Unsupervised Learning
• Criterion: whether the learning algorithm uses pattern-class information.

Supervised learning:
• Depends on the class membership of each training sample.
• More computational complexity.
• More accuracy: class membership allows algorithms to detect pattern misclassification and so reinforce the learning process.

Unsupervised learning:
• Uses unlabelled pattern samples.
• Less computational complexity.
• Less accuracy, but practical in many high-speed real-time environments.
Outline
• Learning
• Supervised Learning and Unsupervised Learning
• Supervised Learning and Unsupervised Learning in neural networks
• Four Unsupervised Learning Laws
Supervised Learning and Unsupervised Learning in neural networks
• Besides the differences presented before, there are further differences between supervised and unsupervised learning in neural networks.

Supervised learning:
• Refers to estimated gradient descent in the space of all possible synaptic-value combinations.
• Uses class-membership information to define a numerical error signal or error vector that guides the estimated gradient descent.

Unsupervised learning:
• Refers to how biological synapses modify their parameters with physically local information about neuronal signals.
• The synapses do not use the class membership of training samples.
Unsupervised Learning in neural networks
• Local information is information physically available to the synapse.
• The differential equations that define unsupervised learning laws describe how synapses evolve with local information, as sketched below.
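As a sketch of what such a differential equation looks like, the block below assumes the generic form commonly used for unsupervised learning laws; the symbol f_ij is a placeholder for a law-specific function, not notation taken from the slides.

```latex
% Generic unsupervised learning law (a sketch): the synapse m_ij evolves
% using only locally available quantities -- its own value and the
% presynaptic and postsynaptic neuronal signals S_i(x_i) and S_j(y_j).
\[
  \dot{m}_{ij} \;=\; f_{ij}\bigl(m_{ij},\, S_i(x_i),\, S_j(y_j),\, \dots\bigr)
\]
% No global error term or class-membership information appears on the
% right-hand side.
```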
Unsupervised Learning in neural networks
• Local information includes:
  synaptic properties or neuronal signal properties;
  information about structural and chemical alterations in neurons and synapses;
  ……
• A synapse has access to this information only briefly.
Unsupervised Learning in neural networks
Functions of local information:
• Allowing asynchronous synapses to learn in real time.
• Shrinking the function space of feasible unsupervised learning laws.
Outline
• Learning
• Supervised Learning and Unsupervised Learning
• Supervised Learning and Unsupervised Learning in neural networks
• Four Unsupervised Learning Laws
Four Unsupervised Learning Laws
• Signal Hebbian
• Competitive
• Differential Hebbian
• Differential competitive
Four Unsupervised Learning Laws
[Diagram: the synapse m_ij lies between the axon of presynaptic neuron i in the input neuron field and a dendrite of postsynaptic neuron j in the output neuron field.]
Signal Hebbian
• Correlates local neuronal signals.
• If neuron i and neuron j are activated synchronously, the synapse is strengthened; otherwise, the synapse is weakened.
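A minimal discrete-time sketch of this law in Python, assuming the common form ṁ_ij = -m_ij + S_i(x_i)S_j(y_j) with bounded signal functions; the function names, learning rate, and sigmoidal signals are illustrative assumptions, not from the slides.

```python
import numpy as np

def signal_hebb_step(M, S_x, S_y, lr=0.1):
    """One discrete signal-Hebbian update (sketch).

    M[i, j] is the synapse from input neuron i to output neuron j.
    The correlation S_x[i] * S_y[j] strengthens a synapse whose two
    neurons are active together; the -M term lets it decay otherwise.
    """
    return M + lr * (-M + np.outer(S_x, S_y))

# Usage with sigmoidal signal functions (an assumption for illustration).
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

M = np.zeros((3, 2))
M = signal_hebb_step(M, sigmoid(np.array([2.0, -1.0, 0.5])),
                     sigmoid(np.array([1.5, -2.0])))
print(M)
```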
Competitive
• Modulates the signal-synaptic difference with the zero-one competitive signal (the signal of neuron j).
• A synapse learns only if its postsynaptic neuron wins.
• Postsynaptic neurons code for presynaptic signal patterns.
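A minimal discrete-time sketch of competitive learning in Python, assuming the form ṁ_ij = S_j(y_j)[S_i(x_i) - m_ij] with a winner-take-all zero-one postsynaptic signal; the names and the learning rate are illustrative assumptions.

```python
import numpy as np

def competitive_step(M, S_x, y, lr=0.1):
    """One discrete competitive-learning update (sketch).

    win[j] is the zero-one competitive signal of output neuron j; it
    gates the signal-synaptic difference, so only the column of the
    winning neuron moves, and it moves toward the presynaptic signal
    pattern S_x.
    """
    win = np.zeros(M.shape[1])
    win[np.argmax(y)] = 1.0                 # winner-take-all output signal
    return M + lr * win * (S_x[:, None] - M)

# Usage: only the winning (second) column moves toward S_x.
M = np.full((3, 2), 0.5)
print(competitive_step(M, S_x=np.array([1.0, 0.0, 1.0]), y=np.array([0.2, 0.9])))
```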
Differential Hebbian
• Correlates signal velocities as well as neuronal signals.
• A signal velocity is obtained by differentiating the neuronal signal with respect to time.
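A minimal discrete-time sketch of differential Hebbian learning in Python, assuming the form ṁ_ij = -m_ij + S_iS_j + Ṡ_iṠ_j and approximating the signal velocities with finite differences; the names and constants are illustrative assumptions.

```python
import numpy as np

def diff_hebb_step(M, S_x, S_x_prev, S_y, S_y_prev, lr=0.1):
    """One discrete differential-Hebbian update (sketch).

    Correlates the signal velocities (finite-difference approximations
    of the time derivatives) as well as the neuronal signals themselves.
    """
    dS_x = S_x - S_x_prev                     # presynaptic signal velocity
    dS_y = S_y - S_y_prev                     # postsynaptic signal velocity
    return M + lr * (-M + np.outer(S_x, S_y) + np.outer(dS_x, dS_y))

# Usage: synapses change most when both signals are active and changing.
M = np.full((3, 2), 0.2)
M = diff_hebb_step(M,
                   S_x=np.array([0.9, 0.1, 0.5]), S_x_prev=np.array([0.4, 0.1, 0.5]),
                   S_y=np.array([0.8, 0.2]),      S_y_prev=np.array([0.3, 0.2]))
print(M)
```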
Differential competitive
• Combines competitive and differential Hebbian learning.
• A synapse learns only if the postsynaptic signal changes.
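A minimal discrete-time sketch of differential competitive learning in Python, assuming the form ṁ_ij = Ṡ_j(y_j)[S_i(x_i) - m_ij], i.e. the postsynaptic signal velocity gates the competitive update ("learn only if change"); the names and constants are illustrative assumptions.

```python
import numpy as np

def diff_competitive_step(M, S_x, S_y, S_y_prev, lr=0.1):
    """One discrete differential-competitive update (sketch).

    The postsynaptic signal velocity (a finite-difference approximation)
    replaces the zero-one win signal of competitive learning, so a
    synapse learns only if its output neuron's signal changes.
    """
    dS_y = S_y - S_y_prev                    # "learn only if change"
    return M + lr * dS_y * (S_x[:, None] - M)

# Usage: columns whose output signal did not change stay put.
M = np.full((3, 2), 0.5)
print(diff_competitive_step(M,
                            S_x=np.array([1.0, 0.0, 1.0]),
                            S_y=np.array([0.9, 0.2]),
                            S_y_prev=np.array([0.3, 0.2])))
```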
See also
• Simple competitive learning applet for neural networks:
  http://www.psychology.mcmaster.ca/4i03/demos/competitive1-demo.html
• Kohonen SOM applet:
  http://www.psychology.mcmaster.ca/4i03/demos/competitive-demo.html
We now welcome Wang Xiumei and Wang Ying to introduce the four unsupervised learning laws in detail.