Lecture notes


Network of Neurons
Computational Neuroscience 03
Lecture 6
Connecting neurons in networks
Last week showed how to model synapses in HH and integrate-and-fire models:

$$\tau_m \frac{dV}{dt} = E_L - V - r_m \bar{g}_s P_s P_{\mathrm{rel}} (V - E_s) + R_m I_e$$
Can add them together to form networks of neurons
Use cable theory:
$$R_L = \frac{r_L \,\Delta x}{\pi a^2}$$
And multicompartmental modelling to model propagation of signals
between neurons
$$c_m \frac{dV_\mu}{dt} = -i_m + \frac{I_e^\mu}{A_\mu} + g_{\mu,\mu+1}\left(V_{\mu+1} - V_\mu\right) + g_{\mu,\mu-1}\left(V_{\mu-1} - V_\mu\right)$$
However, this soon leads to very complex models that are very computationally intensive.
Massive amounts of numerical integration are needed (which can lead to accumulation of truncation errors).
Need to model neuronal dynamics on the millisecond scale while network dynamics can be several orders of magnitude slower.
Need to make a simplification …
Firing Rate Models
Since the rate of spiking indicates synaptic activity, use the firing
rate as the information in the network
However, APs are all-or-nothing and spike timing is stochastic: with identical input to the identical neuron, spike patterns are similar but not identical, so the timing of a single spike is not meaningful on its own.
To extract useful information, we have to average
- over a group of neurons in a local circuit where each neuron codes the same information
- over a time window
to obtain the firing rate r
[Figure: spikes from a local circuit counted over a 1 s time window, giving r = (spike count)/(time window) = 6 Hz]
Hence we have the firing rate of a group of neurons.
So we can have a network of these local groups: a unit receives input rates r_1, ..., r_n through synaptic strengths w_1, ..., w_n and produces the output rate

$$v = F\left(\sum_j w_j r_j\right)$$
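A minimal sketch of such a rate unit (my own illustration; the weights, rates and threshold-linear activation below are made-up values, not from the lecture):

```python
import numpy as np

def rate_unit(w, r, F=lambda x: np.maximum(x, 0.0)):
    # Output rate of a single unit: v = F(w . r).
    # F defaults to half-wave rectification (threshold-linear, threshold 0).
    return F(np.dot(w, r))

w = np.array([0.4, -0.2, 0.1])   # synaptic strengths (negative = inhibitory)
r = np.array([5.0, 10.0, 2.0])   # presynaptic firing rates (Hz)
print(rate_unit(w, r))           # F(0.4*5 - 0.2*10 + 0.1*2) = 0.2
```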
Advantages
Much simpler modelling, e.g. millisecond time scales are not needed
Can do analytic calculations of some aspects of network dynamics
Spiking models have many free parameters, which can be difficult to set (cf Steve Dunn)
Since AP model responds deterministically to injected current, spike
sequences can only be predicted accurately if all inputs are known.
This is unlikely
Although cortical neurons have many connections, the probability of 2 randomly chosen neurons being connected is low. Either need many neurons to replicate network connectivity, or need to average over a more densely connected group. How to average spikes? Typically an 'average' spike => all neurons in the unit spike synchronously => large-scale synchronisation not seen in the (healthy) brain
Disadvantages
Can’t deal with issues of spike timing or spike correlations
Restricted to cases where neuronal firing is uncorrelated, with little synchronous firing (correlations arise e.g. where the presynaptic inputs to a large fraction of neurons are themselves correlated), and where precise patterns of spike timing are unimportant.
Where these conditions hold, spiking and rate models produce similar results.
However, both styles of model are clearly needed
The model
1. Work out how the total synaptic input depends on the firing rates of the presynaptic afferents
2. Model how the firing rate of the postsynaptic neuron depends on this input
The relation in step 2 is generally determined by injecting current into the soma of a neuron and measuring the response. Therefore, define the total synaptic input to be the total current delivered to the soma due to presynaptic APs, denoted by I_s.
Then work out the postsynaptic rate v from I_s using:
v = F(I_s)
F is the activation function. Sometimes the sigmoid is used (useful if derivatives are needed in analysis). Often a threshold-linear function is used, $F(I_s) = [I_s - \theta]_+$ (linear above threshold, with F = 0 for I_s < θ). For θ = 0 this is known as half-wave rectification
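For illustration (a sketch, not from the lecture), the two activation functions mentioned above can be written as:

```python
import numpy as np

def sigmoid(I, gain=1.0, I_half=0.0):
    # Smooth, saturating activation; convenient when derivatives are needed
    return 1.0 / (1.0 + np.exp(-gain * (I - I_half)))

def threshold_linear(I, theta=0.0):
    # F(I) = [I - theta]_+ : zero below threshold, linear above.
    # theta = 0 gives half-wave rectification.
    return np.maximum(I - theta, 0.0)

I = np.linspace(-2.0, 5.0, 8)
print(threshold_linear(I, theta=1.0))
print(sigmoid(I))
```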
Firing rate models with current
dynamics
Although I_s is determined by injection of constant current, we can assume that the same response holds when I_s is time dependent, i.e.
v = F(I_s(t))
Thus the dynamics come from the synaptic input. This presynaptic input is effectively filtered by the dynamics of current propagation from synapse to soma. Therefore use:
$$\tau_s \frac{dI_s}{dt} = -I_s + \sum_{i=1}^{N} w_i r_i = -I_s + \mathbf{w}\cdot\mathbf{r}$$
Time constant τ_s: if the neuron is electrotonically compact, this is roughly the same as the decay time of the synaptic conductance, and is typically small (milliseconds).
Effect of τ_s
Visualise the effect of τ_s as follows. Imagine I starts at some value I_0 and time is sliced into discrete pieces Δt. At the n'th time step we have:
$$I(n\Delta t) = I_n = I_{n-1} + \Delta t \,\frac{dI}{dt}$$
Taking w·r = 0 we have:
$$I_1 = I_0\left(1 - \frac{\Delta t}{\tau_s}\right)$$
$$I_2 = I_1\left(1 - \frac{\Delta t}{\tau_s}\right) = I_0\left(1 - \frac{\Delta t}{\tau_s}\right)^2$$
$$I_n = I_0\left(1 - \frac{\Delta t}{\tau_s}\right)^n$$
i.e. exponential decay.
Alternatively, if w·r is not 0:
$$I_n = \frac{\Delta t}{\tau_s}\,\mathbf{w}\cdot\mathbf{r}_n + \left(1 - \frac{\Delta t}{\tau_s}\right) I_{n-1}$$
$$= \frac{\Delta t}{\tau_s}\,\mathbf{w}\cdot\mathbf{r}_n + \frac{\Delta t}{\tau_s}\left(1 - \frac{\Delta t}{\tau_s}\right)\mathbf{w}\cdot\mathbf{r}_{n-1} + \left(1 - \frac{\Delta t}{\tau_s}\right)^2 I_{n-2}$$
$$= \frac{\Delta t}{\tau_s}\,\mathbf{w}\cdot\mathbf{r}_n + \frac{\Delta t}{\tau_s}\left(1 - \frac{\Delta t}{\tau_s}\right)\mathbf{w}\cdot\mathbf{r}_{n-1} + \frac{\Delta t}{\tau_s}\left(1 - \frac{\Delta t}{\tau_s}\right)^2\mathbf{w}\cdot\mathbf{r}_{n-2} + \ldots$$
I.e. I_n retains some memory of the activity at the previous time step (which itself retained some memory of the time step before, and so on): a kind of running time average.
How much is retained, i.e. for how long we average, depends on τ_s, since it governs how quickly things change: if τ_s is small (of order Δt) almost nothing is retained; if it is large, a lot is retained.
[Figures: responses for τ_s = 0.1, 1 and 4. A larger τ_s delays the response to the input (the response also depends on the starting value); in each case the input is low-pass filtered to a degree set by the time constant.]
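A minimal simulation sketch of this filtering effect (my own illustration; the drive signal and time-constant values below are invented, not those in the lecture's figures):

```python
import numpy as np

def filtered_input(drive, tau_s, dt=0.001):
    # Euler integration of  tau_s dI/dt = -I + drive(t),
    # where `drive` is the total presynaptic input w.r at each time step.
    I = np.zeros(len(drive))
    for n in range(1, len(drive)):
        I[n] = I[n-1] + dt / tau_s * (-I[n-1] + drive[n-1])
    return I

t = np.arange(0.0, 2.0, 0.001)
drive = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)    # square-wave drive
for tau_s in (0.01, 0.1, 0.4):                            # seconds
    I = filtered_input(drive, tau_s)
    print(f"tau_s = {tau_s:4.2f}  ->  peak response {I.max():.2f}")
# Larger tau_s gives a slower, more heavily smoothed (low-pass filtered) response.
```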
Alternatively, since the postsynaptic rate is driven by changes in membrane potential, we can add in the effects of membrane capacitance/resistance. This also effectively acts as a low-pass filter, giving:
$$\tau_r \frac{dv}{dt} = -v + F(I_s(t))$$
If r << s then v = F(IS(t)) pretty quickly so 2nd model reduces
to first. Alternatively if s << r (more usual) we get:
$$\tau_r \frac{dv}{dt} = -v + F(\mathbf{w}\cdot\mathbf{r})$$
Cf leaky integrator, continuous time recurrent nets
Models with only one set of dynamics work well for above-threshold inputs, where the order of the low-pass filtering and the thresholding is irrelevant, but when the signal spends time below threshold these dynamics become important and both levels are needed
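To make the last point concrete, here is a rough sketch (my own illustration with arbitrary parameter values) comparing the single-dynamics model τ_r dv/dt = −v + F(w·r) with a two-level model that also filters the synaptic current; the two diverge mainly while the filtered current is still below threshold:

```python
import numpy as np

def F(I, theta=0.5):
    return np.maximum(I - theta, 0.0)        # threshold-linear activation

dt, tau_s, tau_r = 0.001, 0.05, 0.02         # seconds (arbitrary values)
t = np.arange(0.0, 0.6, dt)
drive = np.where((t > 0.1) & (t < 0.4), 1.0, 0.0)   # step of presynaptic drive w.r

v_single = np.zeros_like(t)   # tau_r dv/dt = -v + F(w.r): threshold, then filter
I_s = np.zeros_like(t)        # tau_s dI/dt = -I + w.r
v_two = np.zeros_like(t)      # tau_r dv/dt = -v + F(I_s): filter, then threshold
for n in range(1, len(t)):
    v_single[n] = v_single[n-1] + dt/tau_r * (-v_single[n-1] + F(drive[n-1]))
    I_s[n]      = I_s[n-1]      + dt/tau_s * (-I_s[n-1] + drive[n-1])
    v_two[n]    = v_two[n-1]    + dt/tau_r * (-v_two[n-1] + F(I_s[n-1]))

# The two responses differ mainly while I_s is still below threshold.
print("peak rates: single-level %.3f, two-level %.3f" % (v_single.max(), v_two.max()))
```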
Feedforward and Recurrent networks
For a network, replace the weight vector by a matrix. Also often replace the feedforward input with a vector, h = W r:

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + F(W\mathbf{r} + M\mathbf{r}) = -\mathbf{v} + F(\mathbf{h} + M\mathbf{r})$$
Dale's law states that a neuron cannot both inhibit and excite other neurons, so all the weights leaving a given neuron must have the same sign, i.e. M_aa' (the weight from a' to a) must be positive for all a, or negative for all a.
This means that, except for special cases, M cannot be symmetric: if a' inhibits a then, unless a also inhibits a', M_aa' has a different sign to M_a'a
However, analysis of these systems is much easier when M is symmetric. This corresponds to making the inhibitory dynamics instantaneous.
Such systems are studied for their analytical properties, but excitatory-inhibitory networks
$$\tau_E \frac{dv_E}{dt} = -v_E + F_E(h_E + M_{EE} r_E + M_{EI} r_I)$$
$$\tau_I \frac{dv_I}{dt} = -v_I + F_I(h_I + M_{IE} r_E + M_{II} r_I)$$
have much richer dynamics exhibiting eg oscillatory behaviour
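As a rough illustration (not from the lecture; the coupling constants, inputs and time constants below are invented), a two-population excitatory-inhibitory rate model integrated with Euler steps can show such oscillations:

```python
import numpy as np

def F(x):
    return np.maximum(x, 0.0)        # threshold-linear activation

# Invented couplings: E excites both populations, I inhibits E
M_EE, M_EI, M_IE, M_II = 1.25, -1.0, 1.0, 0.0
h_E, h_I = 0.5, 0.0
tau_E, tau_I = 0.010, 0.050          # slower inhibition favours oscillation
dt, T = 0.0005, 1.0

v_E, v_I, trace = 0.1, 0.0, []
for _ in range(int(T / dt)):
    dv_E = (-v_E + F(h_E + M_EE * v_E + M_EI * v_I)) / tau_E
    dv_I = (-v_I + F(h_I + M_IE * v_E + M_II * v_I)) / tau_I
    v_E, v_I = v_E + dt * dv_E, v_I + dt * dv_I
    trace.append(v_E)

# The excitatory rate keeps oscillating rather than settling to a fixed point
last = trace[-400:]   # final 0.2 s
print("v_E over the last 0.2 s: min %.2f, max %.2f" % (min(last), max(last)))
```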
Continuous model
Often identify each neuron in a network by a parameter describing an aspect of its selectivity, e.g. for neurons in the primary visual cortex one can use their preferred orientation (i.e. the angle of line they respond to most strongly).
Then look at firing rates as functions of this parameter: v(θ), r(θ)
In large networks there will be a large range of parameter values. Assume that their density is uniform and equal to ρ and coverage is dense. Replacing the weight matrices by functions W(θ,θ') and M(θ,θ'), which describe the weights from a presynaptic neuron with preferred angle θ' to a postsynaptic neuron with preferred angle θ, we get:
$$\tau_r \frac{dv(\theta)}{dt} = -v(\theta) + F\left( \rho \int_{-\pi}^{\pi} \left[ W(\theta,\theta')\,r(\theta') + M(\theta,\theta')\,r(\theta') \right] d\theta' \right)$$
Pure feedforward nets can do many things, e.g. they can be shown to be able to perform coordinate transformations (hand to body, for reaching).
To do this they must exhibit gaze-dependent gain modulation: the peak firing rate is not shifted by a change in gaze location, but its amplitude is increased.
Recurrent networks can also do this, but have much more complex dynamics than feedforward nets and are also more difficult to analyse
Much analysis focuses on looking at the eigenvectors of the matrix M
Can show for instance that networks can exhibit selective
amplification if there is one dominant eigenvector (cf PCA)
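A sketch of selective amplification in a linear rate network (my own illustration; the recurrent matrix is constructed by hand to have a single dominant eigenvector):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Symmetric recurrent matrix with one dominant eigenvector e1 (eigenvalue 0.9);
# all other eigenvalues are 0 in this toy construction.
e1 = rng.standard_normal(N)
e1 /= np.linalg.norm(e1)
M = 0.9 * np.outer(e1, e1)

# Steady state of  tau_r dv/dt = -v + h + M v  (linear F)  is  v = (I - M)^(-1) h
h = rng.standard_normal(N)                       # arbitrary feedforward input
v = np.linalg.solve(np.eye(N) - M, h)

h_par, v_par = np.dot(h, e1), np.dot(v, e1)
h_perp, v_perp = h - h_par * e1, v - v_par * e1
print("gain along e1:        ", v_par / h_par)                                   # ~10
print("gain orthogonal to e1:", np.linalg.norm(v_perp) / np.linalg.norm(h_perp)) # ~1
```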
Or, if one eigenvalue is exactly equal to 1 and the others are < 1, can get integration of inputs and therefore persistent activity, as activity does not stop when the input stops
While synaptic modification rules can be used to establish such precise tuning, it is not clear how this is done in biological systems
Also, recurrent networks can exhibit stereotypical patterns of activity largely determined by the recurrent interactions; these can be independent of the feedforward input, so the network can show sustained activity
Therefore recurrent connections can act as a form of memory
Such memory is called working or short-term memory (seconds to hours)
To establish long-term memories, the idea is that the memory is encoded in the synaptic weights.
Weights are set when memory is stored.
When a feedforward input arrives that is similar to (or an incomplete version of) the one that created the memory, persistent activity signals memory recall
Associative memory: the recurrent weights are set so that the network has several fixed points which are identical to the patterns of activity representing the stored memories. Each fixed point has a basin of attraction representing the set of inputs which will result in the net ending up at that fixed point. When presented with an input, the network effectively pattern-matches the input to the stored patterns
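A toy sketch of this idea, in the style of a Hopfield network (offered as an illustration rather than as the lecture's own model; the pattern count, network size and corruption level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))       # stored memories (+/-1 units)

# Hebbian recurrent weights: each stored pattern becomes (approximately) a fixed point
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    # Iterate the network from a cue; it settles into the nearest stored pattern
    v = cue.copy()
    for _ in range(steps):
        v = np.sign(W @ v)
        v[v == 0] = 1
    return v

# Corrupt 15% of one stored pattern and check that the network restores it
cue = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1
recalled = recall(cue)
print("fraction of units matching the stored pattern:", np.mean(recalled == patterns[0]))
```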
Can thus examine capacity of networks to remember patterns by
analysing stability properties of matrix encoded by synaptic weights
Interplay of excitatory and inhibitory connections can be shown to
give rise to oscillations in networks
Analysis of the full network is now problematic, so use homogeneous excitatory and inhibitory populations of neurons (effectively 2 neuron groups) and perform a phase-plane analysis.
Can show that non-linearity of activation function allows for stable
limit cycles
Can also look at stochastic networks where input current is interpreted
as a probability of firing: Boltzmann machines. Now need statistical
analysis of network properties
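As a final sketch (my own minimal illustration of the stochastic-unit idea, not code from the lecture), a binary unit whose total input sets its probability of firing, as in a Boltzmann machine, can be updated as follows:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_update(v, W, b, T=1.0):
    # One asynchronous update: a randomly chosen binary (0/1) unit fires
    # with probability sigmoid(total input / T), as in a Boltzmann machine.
    i = rng.integers(len(v))
    I = W[i] @ v + b[i]
    v[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-I / T)) else 0
    return v

# Tiny network with hypothetical symmetric weights and biases
N = 8
W = rng.standard_normal((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
b = rng.standard_normal(N)
v = rng.integers(0, 2, size=N)
for _ in range(1000):
    v = stochastic_update(v, W, b)
print("sample state after 1000 updates:", v)
```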