Transcript sheets DA 7

Chapter 7
Network models
• Firing-rate model of a neuron as a simplification for network analysis
• Neural coordinate transformation as an example of a feed-forward neural network
• Symmetric recurrent neural networks
– Selective amplification, winner-take-all behaviour
– Input integration
– Receptive field properties of V1 simple cells
– Gain modulation to encode multiple parameters (gaze and retinal location)
– Sustained activity for short-term memory
– Associative memory
• Excitatory–inhibitory network
– Stability analysis and bifurcation
– Olfactory bulb
Network models
(2k − k²/n)/2 ≈ k links per neuron; (n, k) → (n/2, k)
Firing rate description
Synaptic current
Firing rate
Feedforward and recurrent networks
Dale’s law
Continuously labeled networks
Neural coordinate transformation
Reaching for viewed objects requires transformation from
retinal coordinates to body-centered coordinates.
A, B: With the target at the same position relative to the body, the image on the retina changes when the gaze changes.
C: g is the gaze angle of the eyes relative to the head; s is the location of the object's image on the retina.
Neural coordinate transformation
• Visual neurons have receptive fields 'tied' to the retina.
• Left: Motor neurons respond to visual stimuli independently of gaze direction. The stimulus is an object approaching from different directions s + g; three different gaze directions are shown (monkey premotor cortex).
Neural coordinate transformation
• Middle: When the head is turned but fixation is kept the same (g = −15 degrees), the motor neuron's tuning curve shifts by +15 degrees; the representation is relative to the head.
Neural coordinate transformation
• Possible basis for a model is provided by neurons in area 7a (posterior parietal cortex), whose retinal receptive fields are gain-modulated by gaze direction.
• Left: average firing-rate tuning curves for the same retinal stimulus at different gaze directions. Right: the mathematical model is the product of a Gaussian in s − ξ (ξ = 20°) and a sigmoid in g − γ (γ = 20°).
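A minimal Python sketch of such a gain-modulated response, assuming the product form just described; the widths, offsets and maximal rate are illustrative choices, not fitted values:

import numpy as np

def gain_modulated_rate(s, g, xi=20.0, gamma=20.0, sigma=18.0, g_width=8.0, r_max=50.0):
    # Gaussian retinal receptive field centred at xi (degrees), multiplicatively
    # gain-modulated by a sigmoid of gaze direction g relative to gamma (degrees).
    # sigma, g_width and r_max are illustrative parameters.
    retinal = np.exp(-0.5 * ((s - xi) / sigma) ** 2)
    gain = 1.0 / (1.0 + np.exp(-(g - gamma) / g_width))
    return r_max * retinal * gain

# Same retinal stimulus locations, different gaze directions: the tuning curve keeps
# its shape and preferred retinal location, only its amplitude is scaled.
s = np.linspace(-40, 80, 7)
for g in (-20.0, 0.0, 20.0):
    print(f"g = {g:+.0f} deg:", np.round(gain_modulated_rate(s, g), 1))

Changing g rescales the whole curve (gain modulation) rather than shifting the receptive field, which is the property the model below exploits.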
Neural coordinate transformation
• Right: results from the model with w(ξ, γ) = w(ξ + γ), for gaze directions 0°, 10° and −20° (solid, heavy dashed, light dashed) and the stimulus at 0°. The shift of the peak in s is equivalent to invariance with respect to s + g.
• Gain-modulated neurons provide a general mechanism for combining input signals.
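A sketch of this combination mechanism with illustrative numbers: a downstream unit sums the responses of hypothetical gain-modulated basis neurons with weights that depend only on ξ + γ, so its output depends (approximately) only on s + g, the body-centred direction:

import numpy as np

def basis_response(s, g, xi, gamma, sigma=18.0, g_width=8.0):
    # Gain-modulated basis neuron: Gaussian in s - xi times sigmoid in g - gamma.
    return np.exp(-0.5 * ((s - xi) / sigma) ** 2) / (1.0 + np.exp(-(g - gamma) / g_width))

# Population of basis neurons tiling retinal centres xi and gaze offsets gamma.
xis = np.linspace(-90, 90, 37)
gammas = np.linspace(-90, 90, 37)
XI, GA = np.meshgrid(xis, gammas, indexing="ij")

def readout(s, g, preferred_body_angle=0.0, w_width=20.0):
    # Weights w(xi + gamma): here a Gaussian around the preferred body-centred angle.
    w = np.exp(-0.5 * ((XI + GA - preferred_body_angle) / w_width) ** 2)
    return float(np.sum(w * basis_response(s, g, XI, GA)))

# Shifting the gaze by dg and the retinal location by -dg leaves s + g, and hence
# the read-out, almost unchanged: the invariance with respect to s + g noted above.
for s, g in [(0.0, 0.0), (-10.0, 10.0), (20.0, -20.0)]:
    print(f"s = {s:+5.1f}, g = {g:+5.1f}, s + g = {s + g:+.1f}:  readout = {readout(s, g):.2f}")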
Recurrent networks
Neural integration
• Networks in the brain stem of vertebrates responsible for maintaining eye position appear to act as integrators. Eye position changes in response to bursts of ocular motor neurons in the brain stem; other brainstem neurons integrate these burst signals, and their activity is approximately proportional to horizontal eye position.
• It is not well understood how the brain solves the 'fine-tuning problem' of keeping one of the eigenvalues exactly at 1.
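A one-unit sketch of the integrator idea and of the fine-tuning problem, assuming a single linear rate unit whose self-feedback strength lam plays the role of the eigenvalue; burst timing and parameter values are illustrative:

import numpy as np

def run_integrator(lam, tau=0.010, dt=5e-4, T=2.0):
    # Single linear rate unit: tau dv/dt = -v + lam * v + h(t).
    # With lam = 1 (eigenvalue exactly 1) the unit integrates its input;
    # with lam < 1 the stored value leaks with time constant tau / (1 - lam).
    steps = int(T / dt)
    v = 0.0
    trace = np.empty(steps)
    for i in range(steps):
        t = i * dt
        h = 1.0 if 0.10 <= t < 0.15 else 0.0   # brief input burst, e.g. from ocular motor burst neurons
        v += dt / tau * (-v + lam * v + h)
        trace[i] = v
    return trace

for lam in (1.0, 0.95):
    trace = run_integrator(lam)
    print(f"lam = {lam}: v at t = 0.2 s -> {trace[400]:.2f},  v at t = 2 s -> {trace[-1]:.2f}")

With lam = 1 the accumulated value (the eye-position signal) is held indefinitely; detuning the feedback by only 5% makes it decay within a fraction of a second, which is the fine-tuning problem mentioned above.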
Continuous linear network
• A: h(θ) = cos(θ) + noise; C: its Fourier components h_m.
• B: the network activity v(θ) for λ = 0.9.
• D: Fourier components v_m: v_±1 = 10 h_±1, while v_m = h_m otherwise.
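A small sketch of this selective amplification, discretizing the ring into N units with cosine recurrent connectivity of strength λ = 0.9 (N and the noise level are illustrative): only the first Fourier mode of the input is amplified, by 1/(1 − λ) = 10, matching the factor noted for panel D:

import numpy as np

rng = np.random.default_rng(1)
N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

lam = 0.9
# Recurrent matrix whose only non-zero eigenvalues (lam) belong to the first
# cosine/sine modes of the ring.
M = (2.0 * lam / N) * np.cos(theta[:, None] - theta[None, :])

# Input: first cosine mode plus broadband noise.
h = np.cos(theta) + 0.3 * rng.standard_normal(N)

# Steady state of tau dv/dt = -v + h + M v, i.e. v = (I - M)^{-1} h.
v = np.linalg.solve(np.eye(N) - M, h)

def fourier_amp(x, m):
    # Amplitude of the m-th Fourier component of x(theta).
    return abs((2.0 / N) * np.sum(x * np.exp(-1j * m * theta)))

for m in (1, 2, 5):
    print(f"mode {m}: |h_m| = {fourier_amp(h, m):.3f}, |v_m| = {fourier_amp(v, m):.3f}, "
          f"ratio = {fourier_amp(v, m) / fourier_amp(h, m):.1f}")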
Non-linear network
Orientation tuning in simple cells
• Recall that orientation-selective cells in V1 could be explained by input from a proper constellation of center-surround LGN cells.
• However, this ignores the lateral connectivity in V1, which is more prominent than the feed-forward connectivity.
• Same as the previous model, with h(θ) = A(1 − ε + ε cos(2θ)) and global lateral inhibition.
• Lateral connectivity yields sharpened orientation selectivity. Varying A (illumination contrast) scales the activity without broadening the tuning, as is observed experimentally.
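A sketch of this non-linear (rectified) recurrent ring model; the recurrent profile −λ0 + λ1 cos(2(θ − θ')) and the values λ0 = 7.3, λ1 = 11, ε = 0.1 and the contrasts A are assumptions for illustration, not necessarily those used for the figures:

import numpy as np

def simulate(A, eps=0.1, lam0=7.3, lam1=11.0, N=100, tau=0.005, dt=1e-4, T=0.3):
    # Rectified recurrent ring: tau dv/dt = -v + [h + M v]_+, with global
    # inhibition (-lam0) and orientation-tuned excitation (lam1 cos 2(theta - theta')).
    theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)
    h = A * (1.0 - eps + eps * np.cos(2.0 * theta))       # weakly tuned input, contrast A
    M = (-lam0 + lam1 * np.cos(2.0 * (theta[:, None] - theta[None, :]))) / N
    v = np.zeros(N)
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + np.maximum(h + M @ v, 0.0))
    return theta, v

def half_width_deg(theta, r):
    # Half-width at half-maximum of a tuning curve peaked at theta = 0.
    above = theta[r >= 0.5 * r.max()]
    return np.degrees(above.max() - above.min()) / 2.0

# Varying the input strength A scales the output without broadening the tuning.
for A in (10.0, 20.0, 40.0):
    theta, v = simulate(A)
    print(f"A = {A:4.0f}: output half-width ~ {half_width_deg(theta, v):4.1f} deg, peak rate ~ {v.max():6.1f}")

The weak 10% modulation of the input is sharpened into a much narrower tuning curve whose height, but not width, follows A.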
Winner-take-all
• When two stimuli are presented to the non-linear recurrent network, the strongest input determines the response (network details as in the previous slides).
Gain modulation
• Adding a constant to the input yields a gain modulation of the recurrent activity. This mechanism may explain the joint encoding of stimulus position in retinal coordinates (s) and gaze direction (g) encountered earlier in parietal cortical neurons.
Sustained activity
• After a stimulus (A) has produced a stationary response in the recurrent network (B), the activity may be sustained (D) by a constant input alone (C).
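A sketch of such sustained activity in the same rectified ring model as above (same assumed parameters): a tuned input sets up a bump of activity, and after the tuning is removed a constant input alone keeps the bump, and hence the remembered orientation, alive:

import numpy as np

N, tau, dt = 100, 0.005, 1e-4
lam0, lam1, A, eps = 7.3, 11.0, 20.0, 0.1                 # illustrative parameters
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)
M = (-lam0 + lam1 * np.cos(2.0 * (theta[:, None] - theta[None, :]))) / N

def run(v, h, T):
    # tau dv/dt = -v + [h + M v]_+
    for _ in range(int(T / dt)):
        v = v + dt / tau * (-v + np.maximum(h + M @ v, 0.0))
    return v

stimulus_at = np.pi / 6                                   # remembered orientation: ~30 deg
h_tuned = A * (1.0 - eps + eps * np.cos(2.0 * (theta - stimulus_at)))
h_flat = A * (1.0 - eps) * np.ones(N)                     # constant (untuned) input

v = run(np.zeros(N), h_tuned, T=0.3)                      # stimulus period
print("stimulus period: peak at", round(float(np.degrees(theta[np.argmax(v)])), 1),
      "deg, rate", round(float(v.max()), 1))
v = run(v, h_flat, T=0.5)                                 # delay period, tuning removed
print("delay period:    peak at", round(float(np.degrees(theta[np.argmax(v)])), 1),
      "deg, rate", round(float(v.max()), 1))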
Associative memory
• Sustained activity in a recurrent network is called working or short-term memory.
• Long-term memory is thought to reside in synapses that are adapted to
incorporate a number of sustained activity patterns as fixed points.
• When the network is activated with an approximation of one of the stored patterns, it recalls that pattern as its fixed point.
– Basin of attraction
– Spurious memories
– Capacity proportional to N
• Associative memory is like completing a familiar telephone number
from a few digits. It is very different from computer memory.
• Area CA3 of hippocampus and part of prefrontal cortex
Associative memory
• 4 patterns stored in a network of N = 50 neurons. Two patterns are random and two are as shown.
• A) Typical neural activity.
• B, C) Depending on the initial state, one of the patterns is recalled as a fixed point.
• Memory degrades with the number of stored patterns.
• Better learning rules exist.
• Capacity ~ N / (a log(1/a)), where a is the sparseness of the stored patterns.
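A minimal Hopfield-style sketch of such an associative memory, assuming binary ±1 units, a simple Hebbian weight rule and asynchronous updates; this is the generic construction (with all patterns random here), not necessarily the exact learning rule behind the figure:

import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 4                                   # network size and pattern count, as in the figure
patterns = rng.choice([-1, 1], size=(P, N))    # stored patterns (all random, for simplicity)

# Hebbian weight matrix that makes the patterns (approximate) fixed points.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    # Asynchronous threshold updates; with symmetric W these settle into a fixed point.
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Cue the network with a degraded version of pattern 0 (20% of units flipped)...
cue = patterns[0].copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1

# ...and it falls back into the corresponding fixed point (overlap close to 1).
recalled = recall(cue)
print("overlap with each stored pattern:", np.round(patterns @ recalled / N, 2))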
Excitatory-Inhibitory networks
• MEE = 1.25, MIE = 1, MII = 0, MEI = −1, γE = −10 Hz, γI = 10 Hz, τE = 10 ms, and variable τI.
• A) Phase plane with the nullclines, the fixed point, and the directions of flow.
Excitatory-Inhibitory networks
• B) Real and imaginary parts of the eigenvalues of the stability matrix versus τI. The fixed point is stable up to τI = 40 ms and unstable for τI > 40 ms.
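A small sketch of this stability analysis, assuming the fixed point lies in the linear (above-threshold) regime of both rectified rate equations, so the stability matrix is simply (M − I) with each row divided by its time constant:

import numpy as np

MEE, MEI, MIE, MII = 1.25, -1.0, 1.0, 0.0
tauE = 0.010                                   # 10 ms

def stability_eigenvalues(tauI):
    # Jacobian of tauE dvE/dt = -vE + MEE vE + MEI vI - gammaE,
    #             tauI dvI/dt = -vI + MIE vE + MII vI - gammaI
    # around the above-threshold fixed point.
    J = np.array([[(MEE - 1.0) / tauE, MEI / tauE],
                  [MIE / tauI, (MII - 1.0) / tauI]])
    return np.linalg.eigvals(J)

for tauI_ms in (30, 40, 50):
    ev = stability_eigenvalues(tauI_ms / 1000.0)
    lead = ev[np.argmax(ev.real)]
    print(f"tauI = {tauI_ms} ms: Re = {lead.real:6.1f} 1/s, Im = {abs(lead.imag):5.1f} 1/s "
          f"(~{abs(lead.imag) / (2 * np.pi):.1f} Hz)")

The real part crosses zero at τI = 40 ms, reproducing the stability boundary quoted above, with an imaginary part corresponding to oscillations of a few hertz.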
Excitatory-Inhibitory networks
• Network oscillations are damped, relaxing to the stable fixed point, for τI = 30 ms.
Excitatory-Inhibitory networks
• For τI = 50 ms the oscillations grow; the fixed point is unstable. The dynamics settle into a stable limit cycle, due to the rectification at vE = 0.
• Such a transition, in which the real part of the leading pair of eigenvalues changes sign while their imaginary part is non-zero, induces oscillations at a finite frequency (6 Hz in this case) and is called a Hopf bifurcation.
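A sketch of the same two-population model integrated with simple Euler steps (initial rates and integration settings are illustrative): for τI = 30 ms the excitatory rate settles at the fixed point, while for τI = 50 ms it ends up on the rectification-bounded limit cycle:

import numpy as np

MEE, MEI, MIE, MII = 1.25, -1.0, 1.0, 0.0
gammaE, gammaI = -10.0, 10.0                   # Hz
tauE = 0.010

def simulate_vE(tauI, dt=1e-4, T=3.0):
    # Euler integration of the rectified E-I rate equations.
    vE, vI = 30.0, 20.0                        # start a little off the fixed point
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        dvE = (-vE + max(MEE * vE + MEI * vI - gammaE, 0.0)) / tauE
        dvI = (-vI + max(MIE * vE + MII * vI - gammaI, 0.0)) / tauI
        vE, vI = vE + dt * dvE, vI + dt * dvI
        trace[i] = vE
    return trace

for tauI_ms in (30, 50):
    vE = simulate_vE(tauI_ms / 1000.0)
    late = vE[-10000:]                         # last second of the run
    print(f"tauI = {tauI_ms} ms: late vE range = {late.min():6.1f} .. {late.max():6.1f} Hz")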
Olfactory bulb
• Olfaction (smell) is accompanied by oscillatory network activity.
• A) During sniffs the activity of the network increases and starts to
oscillate.
• B) Network model (equations 7.12–7.13) with MEE = MII = 0. hE is the external input, which varies with time; hI is a positive top-down input from cortex.
Olfactory bulb
• A) Activation functions F assumed in the model.
• B) hE changes the stability of the fixed point at low network activity. The largest real part of the eigenvalues crosses 1 around t = 100 ms, inducing 40 Hz oscillations; the oscillations stop around t = 300 ms.
Olfactory bulb
The role of hE is twofold:
– It destabilizes the fixed point of the network, inducing network oscillations.
– Its particular pattern of input across neurons yields different activity patterns for different odors.