Computing with spikes

Romain Brette
Ecole Normale Supérieure
Spiking neuron models
Input: N spike trains
Output: 1 spike train
What transformation?
Synaptic integration
[Figure: postsynaptic potentials (PSPs) are integrated over time; when the membrane potential reaches the spike threshold, an action potential or « spike » is produced.]
« Integrate-and-fire » model: spikes are produced when the membrane potential exceeds a threshold.
The membrane

Lipid bilayer (= capacitance) with
pores (channels = proteins)
[Figure: the ~2 nm lipid bilayer separates the outside (Na+, Cl-) from the inside (K+).]
specific capacitance ≈ 1 μF/cm²
total capacitance = specific capacitance × area
The resting potential

At rest, the neuron is polarized: Vm ≈ -70 mV
Membrane potential: Vm = Vin - Vout
The membrane is semi-permeable
Mostly permeable to K+
A potential difference appears and opposes diffusion
[Figure: K+ diffusing from the intracellular to the extracellular side.]
Electrodiffusion
Membrane permeable to K+ only:
• diffusion creates an electrical field
• the electrical field opposes diffusion
• equilibrium potential, « Nernst potential », or reversal potential:
E = (RT/zF) · ln(Cout/Cin)
F ≈ 96 000 C·mol⁻¹ (Faraday constant), R = 8.314 J·K⁻¹·mol⁻¹ (gas constant), z = charge of the ion
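As a quick numerical check of the Nernst formula, here is a minimal Python sketch (not from the slides); the K+ concentrations and temperature are typical textbook values:

import numpy as np

# Nernst potential: E = (R*T)/(z*F) * ln(C_out / C_in)
R = 8.314      # gas constant, J/(K.mol)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # temperature in K (about 37 degrees C)

def nernst(c_out, c_in, z=1):
    return R * T / (z * F) * np.log(c_out / c_in)

# Typical K+ concentrations: ~5 mM outside, ~140 mM inside
print(f"E_K = {1e3 * nernst(5.0, 140.0):.1f} mV")   # about -89 mV, close to the resting potential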
The equivalent electrical circuit
[Figure: equivalent electrical circuit: the lipid bilayer acts as a capacitance, in parallel with the membrane resistance R (leak conductance gL = 1/R) in series with a battery at the leak (resting) potential EL.]
Linear approximation of the leak current: I = gL(Vm - EL)
EL ≈ -70 mV: the membrane is « polarized » (Vin < Vout)
The membrane equation
[Figure: a current Iinj injected into the membrane's RC circuit (Vm across the membrane).]
C dVm/dt = -(Vm - EL)/R + Iinj
Equivalently, with τ = RC:
τ dVm/dt = EL - Vm + R·Iinj
τ = RC is the membrane time constant (typically 3-100 ms)
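A minimal Python sketch of how this membrane equation can be integrated numerically (forward Euler); the parameter values are illustrative, not taken from the slides:

import numpy as np

# tau * dVm/dt = EL - Vm + R * I_inj
tau, EL = 20e-3, -70e-3      # time constant (s), resting potential (V)
R, I_inj = 100e6, 0.1e-9     # membrane resistance (ohm), injected current (A)

dt = 0.1e-3
Vm = EL
for step in range(int(0.2 / dt)):            # 200 ms of simulation
    Vm += dt * (EL - Vm + R * I_inj) / tau   # forward Euler step

# Vm relaxes exponentially towards EL + R*I_inj with time constant tau
print(f"Vm after 200 ms: {1e3 * Vm:.1f} mV (expected {1e3 * (EL + R * I_inj):.1f} mV)")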
Synaptic currents
[Figure: a synapse delivers a synaptic current Is(t) to the postsynaptic neuron.]
τ dVm/dt = EL - Vm + R·Is(t)
Postsynaptic potentials (PSPs)
[Figure: presynaptic neuron recorded with an extracellular electrode, postsynaptic neuron with an intracellular electrode (cat motoneuron); each presynaptic spike produces a postsynaptic potential.]
Idealized synapse



The synaptic channel opens for a short duration; the total charge is Q = ∫ Is(t) dt.
Idealization: Is(t) = Q·δ(t) (Dirac function)
τ dVm/dt = EL - Vm + R·Q·δ(t)
Solution: Vm(t) = EL + (RQ/τ)·e^(-t/τ)
Spike-based notation:
τ dVm/dt = EL - Vm, and Vm → Vm + RQ/τ at t = 0
w = RQ/τ is the « synaptic weight »
Temporal and spatial integration

Response to a set of spikes {ti^j} (i = synapse, j = spike number)?
Linearity (superposition principle):
Vm(t) = Σi,j PSPi(t - ti^j)
Synaptic integration and spike threshold
[Figure: PSPs summate until the membrane potential reaches the spike threshold, producing an action potential.]
« Integrate-and-fire »:
τ dVm/dt = EL - Vm + R·I
If V = Vt (threshold), then the neuron spikes and V → Vr (reset).
The integrate-and-fire model
Differential formulation:
τ dVm/dt = EL - Vm, and Vm → Vm + wi on a spike at synapse i
Integral formulation:
Vm(t) = EL + Σi,j PSPi(t - ti^j)
If V = Vt (threshold), then the neuron spikes and V → Vr (reset).
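A minimal Python sketch of this integrate-and-fire model with instantaneous (« delta ») synapses; the Poisson input and all parameter values are illustrative assumptions, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
tau, EL, Vt, Vr = 20e-3, -70e-3, -50e-3, -60e-3
w = 2e-3             # synaptic weight (V), i.e. RQ/tau
rate_in = 500.0      # total rate of input spikes (Hz), e.g. many presynaptic neurons
dt, T = 0.1e-3, 1.0

V, out_spikes = EL, []
for step in range(int(T / dt)):
    V += dt * (EL - V) / tau            # leaky integration
    if rng.random() < rate_in * dt:     # an input spike arrives in this time step
        V += w                          # Vm -> Vm + w
    if V >= Vt:                         # threshold reached
        out_spikes.append(step * dt)    # record the output spike
        V = Vr                          # reset
print(f"output rate: {len(out_spikes) / T:.1f} Hz")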
Is this a sensible description
of neurons?
A phenomenological approach
Fitting spiking models to electrophysiological recordings
[Figure: injected current, recorded trace, and model trace.]
Rossant et al. (Front. Neuroinform. 2010): an IF model with adaptive threshold, fitted with Brian + GPU.
Results: regular spiking cell
Winner of the INCF competition: 76% (a variation of the adaptive-threshold IF model).
Rossant et al. (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics.
Good news

Adaptive integrate-and-fire models are
excellent phenomenological models!
(response to somatic injection)
Advanced concepts

Synaptic channels are also described by electrodiffusion:
Is = gs(t)·(Es - Vm)
where gs(t) is the synaptic channel conductance (it rises when channels open after a presynaptic spike and decays as they close) and Es is the synaptic reversal potential.
Neurons are not isopotential; along a dendrite the membrane potential obeys the « cable equation »:
λ²·∂²V/∂x² = τ·∂V/∂t + V
Simulating spiking models with Brian
Goodman, D. and R. Brette (2009). The Brian simulator. Front Neurosci. doi:10.3389/neuro.01.026.2009.
briansimulator.org

from brian import *
eqs='''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P=NeuronGroup(4000,model=eqs,
              threshold='v>-50*mV',reset='v=-60*mV')
P.v=-60*mV+10*mV*rand(len(P))
Pe=P.subgroup(3200)
Pi=P.subgroup(800)
Ce=Connection(Pe,P,'ge',weight=1.62*mV,sparseness=0.02)
Ci=Connection(Pi,P,'gi',weight=-9*mV,sparseness=0.02)
M=SpikeMonitor(P)
run(1*second)
raster_plot(M)
show()
Computing with spikes I:
rate-based theories
Statistics of spike trains
tn+1 - tn = « interspike interval » (ISI)
Spike train:
• a sequence of spike times (tk)
• a signal S(t) = Σk δ(t - tk)
Rate:
• number of spikes / time (= lim k/tk)
• average of S: r = <S(t)> = lim T→+∞ (1/T) ∫0^T S(t) dt
(up to a few hundred Hz)
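These definitions translate directly into code; a small Python sketch (the Poisson spike train is just a stand-in for recorded spike times):

import numpy as np

rng = np.random.default_rng(1)

# Surrogate spike train: Poisson process with rate 20 Hz observed for 100 s
rate, T = 20.0, 100.0
spike_times = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

isi = np.diff(spike_times)          # interspike intervals t_{n+1} - t_n
r = len(spike_times) / T            # rate = number of spikes / time
print(f"mean ISI = {1e3 * isi.mean():.1f} ms, rate = {r:.1f} Hz, 1/mean ISI = {1 / isi.mean():.1f} Hz")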
Rate-based descriptions
Inputs with rates F1, F2, ..., Fn; Is(t) = total synaptic current
τ dVm/dt = EL - Vm + R·Is(t)
Output rate F.
Can we express F as a function of F1, F2, ..., Fn?
Approach #1: the « perfect integrator »

Neglect the leak current in
τ dVm/dt = EL - Vm + R·I(t)
More precisely: replace Vm by its typical value (Vt + Vr)/2:
τ dVm/dt = EL - (Vt + Vr)/2 + R·I(t)
The perfect integrator
dV/dt = I*(t), and if V = Vt: V → Vr
with the rescaled current
I*(t) = [EL - (Vt + Vr)/2 + R·I(t)] / [τ·(Vt - Vr)]
Normalization: Vt = 1 and Vr = 0, i.e. the change of variable V* = (V - Vr)/(Vt - Vr), and the same rescaling for I.
The perfect integrator
dV/dt = I(t)
Integration: V(t) = V(0) + ∫0^t I(s) ds
Firing rate: F = <I> if <I> > 0, and F = 0 Hz otherwise.
Brette, R. (2004). Dynamics of one-dimensional spiking neuron models. J Math Biol.
[Figure: example response to the input I(t) = 1/2 + sin(2πft).]
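A quick Python check of F = <I> for the (normalized) perfect integrator, using a sinusoidal input with positive mean; the input signal and time step are illustrative choices:

import numpy as np

# Perfect integrator: dV/dt = I(t), threshold 1, reset 0
f_in = 5.0                                        # input modulation frequency (Hz)
I = lambda t: 0.5 + np.sin(2 * np.pi * f_in * t)  # mean <I> = 0.5

dt, T = 1e-4, 100.0
V, n_spikes = 0.0, 0
for step in range(int(T / dt)):
    V += dt * I(step * dt)       # integrate the input
    if V >= 1.0:                 # threshold
        n_spikes += 1
        V = 0.0                  # reset (overshoot is negligible for small dt)
print(f"rate: {n_spikes / T:.3f} Hz, <I> = 0.5")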
The perfect integrator with synaptic inputs
I(t) = b + Σk Σj Jk(t - tk^j)  (superposition principle)
k indexes synapses, j indexes spikes; tk^j = timing of spike j at synapse k;
b = constant (from the change of variables); Jk = postsynaptic current (= 0 for t < 0).
The perfect integrator with synaptic inputs

Input firing rates F1, F2, ..., Fn. Let wk = ∫ Jk be the « synaptic weight » of synapse k.
The output firing rate is
F = [Σk wk·Fk + b]+   ([x]+ = max(x, 0))
Proof: first prove that <Σj J(t - tj)> = F·∫J, where F is the rate of the events (tj).
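A Python sketch checking this rate formula on the normalized perfect integrator, assuming Poisson input spike trains and delta-shaped postsynaptic currents; the weights, rates and drive are arbitrary illustrative values:

import numpy as np

rng = np.random.default_rng(2)

# Normalized perfect integrator: dV/dt = I(t), threshold 1, reset 0
b = 5.0                               # constant drive (1/s)
w = np.array([0.02, 0.05, 0.10])      # synaptic weights w_k (integral of each PSC)
F_in = np.array([50.0, 20.0, 10.0])   # input rates F_k (Hz)

dt, T = 1e-4, 100.0
V, n_out = 0.0, 0
for _ in range(int(T / dt)):
    V += dt * b                                # constant part of the input
    V += w[rng.random(3) < F_in * dt].sum()    # delta-shaped PSCs from Poisson inputs
    if V >= 1.0:
        n_out += 1
        V = 0.0                                # reset (the small overshoot is discarded)

# simulated rate is close to (slightly below) the predicted sum(w_k F_k) + b
print(f"simulated: {n_out / T:.2f} Hz, predicted: {w @ F_in + b:.2f} Hz")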
Approach #2: mean field


τ dVm/dt = EL - Vm + R·I(t), with I(t) = Σk Σj Jk(t - tk^j)
Mean-field approximation: replace I(t) by its mean
<I> = Σk wk·Fk, where wk = ∫ Jk
and use the current-frequency function F = f(I):
F = 1 / [τ·log((EL + R·I - Vr)/(EL + R·I - Vt))], where I = Σk wk·Fk
In vector notation: F = g(w·F).
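A Python sketch of this self-consistency, using the current-frequency function above for a single population whose recurrent input is collapsed into one coefficient; the network parameters are illustrative assumptions:

import numpy as np

# LIF current-frequency function F = f(I) for a constant current I
tau, EL, Vt, Vr, R = 20e-3, -70e-3, -50e-3, -60e-3, 100e6

def f(I):
    Vss = EL + R * I                  # steady-state potential the current would produce
    if Vss <= Vt:
        return 0.0                    # subthreshold: no spiking
    return 1.0 / (tau * np.log((Vss - Vr) / (Vss - Vt)))

# Mean-field self-consistency F = f(I_ext + J*F), solved by fixed-point iteration
I_ext = 0.25e-9     # external drive (A)
J = 0.5e-12         # recurrent input per output Hz (A per Hz), i.e. w.F collapsed to J*F

F = 0.0
for _ in range(100):
    F = f(I_ext + J * F)
print(f"self-consistent rate: {F:.1f} Hz")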
Mean field vs. perfect integrator

τ dVm/dt = EL - Vm + R·I(t)
• Perfect integrator: neglect the leak → F = [w·F + b]+
• Mean field: neglect the variations of I(t), i.e. τ dVm/dt = EL - Vm + R·<I> → F = g(w·F)
Approach #3: Poisson inputs


What if <I> < Ithreshold? (balanced regime)
Assumption: the input spike trains are independent Poisson processes with rates F1, F2, ..., Fn, arriving at identical synapses of weight w.
If the Ti are independent Poisson processes with rates Fi, then their union ∪Ti is a Poisson process with rate ΣFi.
Conclusion: Fout = f(Σi Fi).
Summary:


• Neglect the leak (perfect integrator): F = [w·F + b]+, where wk is the integral of the synaptic current.
• Mean field: F = f(w·F), where f is the current-frequency function.
• Independent inputs (+ Poisson, + one synapse type): F = f(w·F), where f is an undetermined function.
• Variation (diffusion approximation, transmission probability): F = f(Σi wi·Fi, Σi wi²·Fi).
Otherwise there might not be a univocal input-output rate relationship.
Computing with spikes II:
asynchronous theories
The precision of spike timing
The same time-varying current is injected 25 times: spike timing is reproducible even after 1 s.
Mainen & Sejnowski (1995) (cortical neuron in vitro, somatic injection); the IF model shows the same behavior.
The neural « code »
[Figure: the same spike trains read out by count, time, or rank.]
Code:
• spike count (rate code)
• spike timing (temporal code)
• spike rank (rank order code)
Thorpe et al (2001), Spike-based strategies for rapid processing. Neural networks.
Decoding rank order

How to distinguish between the orders AB and BA?
Solution: excitation and inhibition.
[Figure: each detector receives an excitatory PSP (+) from one input and an inhibitory PSP (-) from the other, so its response depends on the order of the input spikes.]
Prey localization by the sand scorpion
Inhibition of the opposite neuron → more spikes near the source (polar representation of firing rates).
Conversion: rank order code → rate code.
Stürtzl et al. (2000). Theory of arachnid prey localization. PRL.
Predictive coding
[Figure: input signal, output spike trains, and linear read-out Σi,j Ki(t - ti^j).]
Each output neuron spikes so as to reduce an error defined on the read-out.
References:
Boerlin & Denève (2011). Spike-Based Population Coding and Working Memory. PLoS Comp Biol
S Denève (2008). Bayesian spiking neurons I: inference. Neural Computation
S Denève (2008). Bayesian spiking neurons II: learning. Neural Computation
Computing with spikes III:
synchrony-based computation
Reliability of spike timing
Spike timing is reproducible in vitro for time-varying inputs (Z. Mainen, T. Sejnowski, Science 1995), and spiking models behave the same way:
Brette, R. and E. Guigon (2003). Reliability of spike timing is a general property of spiking model neurons. Neural Comput
Brette (2012). Computing with neural synchrony. PLoS Comp Biol
Selective synchronization
Consequence: similar neurons synchronize to similar time-varying inputs.
What impact on postsynaptic neurons?
Coincidence detection: principle

τ dV/dt = EL - V, and V → V + w on each input spike
Two inputs with the same rate F, separated by a delay d; threshold Vt.
[Figure: with a short delay the two PSPs reach the threshold and the neuron spikes; with a long delay there is no spike.]
Spike if
d ≤ -τ·log((Vt - EL)/w - 1)
(assuming w ≤ Vt - EL and 2w ≥ Vt - EL).
Coincidence detection in noisy neurons
What about realistic situations? Balanced regime: the membrane potential distribution peaks below threshold.
Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs. J Neurosci
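A small Python check of the coincidence window derived above, for two delta-synapse inputs of weight w separated by a delay d; the parameter values are illustrative:

import numpy as np

tau, EL, Vt = 20e-3, -70e-3, -50e-3
w = 12e-3     # PSP size (V): subthreshold alone (w < Vt-EL), suprathreshold in pairs (2w >= Vt-EL)

# Predicted maximum delay for a spike
d_max = -tau * np.log((Vt - EL) / w - 1)
print(f"predicted window: {1e3 * d_max:.2f} ms")

def spikes(d, dt=1e-6):
    """Two inputs separated by d: does the membrane potential reach threshold?"""
    V, t = EL + w, 0.0                    # first input at t = 0
    while t < d:
        V += dt * (EL - V) / tau          # leaky decay between the two inputs
        t += dt
    return V + w >= Vt                    # second input at t = d

for d in (0.5 * d_max, 1.5 * d_max):
    print(f"d = {1e3 * d:.2f} ms -> spike: {spikes(d)}")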
Synchrony-based computation
[Figure: inputs A, B, C, D converge on coincidence detectors; one detector's response signals similarities between A and C, another shows no response.]
Synchrony receptive fields
[Figure: neurons A and B receive a stimulus S through transformations NA and NB; a downstream neuron detects when their outputs coincide (otherwise, no response).]
« Synchrony receptive field » = {S | NA(S) = NB(S)}
What does it represent?
Synchrony receptive fields: examples
Synchrony when:
S(t-dR-δR)=S(t-dL-δL)
dR-dL = δR +δL
Independent of source signal
Synchrony receptive fields: examples
Synchrony when:
S(t-δA)=S(t-δB)
Periodic sound with period δA-δB
Synchrony receptive fields signal some regularity or « structure » in the sensory signals
Structure in stimuli
A structured stimulus: S = M(X), where X is an object in the environment and S the sensory signal; the transformation M introduces structure in the sensory signals.
Examples:
• Binaural hearing: source sound S → M → (FL*S, FR*S); M is location-specific.
• Pitch: stimulus → M → sound in phase space; M is pitch-specific.
Structure and synchrony
dR-dL = δR +δL
Brette (2012). Computing with neural synchrony. PLoS Comp Biol
Non-trivial example:
binaural sound localization
FR,FL = location-dependent acoustical filters
(HRTFs/HRIRs)
[Figure: interaural delay and interaural intensity differences at low and high frequencies.]
Sound propagation is more complex than pure delays!
Binaural synchrony receptive fields
FR,FL = HRTFs/HRIRs (location-dependent)
NA, NB = neural filters
(e.g. basilar membrane filtering)
input to neuron A: NA*FR*S (convolution)
input to neuron B: NB*FL*S
Synchrony when: NA*FR = NB*FL
Independent of source signal S
SRF(A,B) = set of filter pairs (FL,FR)
= set of source locations
= spatial receptive field
Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol
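A toy Python illustration of the condition NA*FR = NB*FL, with random FIR filters standing in for real HRTFs and cochlear filters; the « matched » pair is constructed so that the two convolved inputs coincide for any source signal S:

import numpy as np

rng = np.random.default_rng(3)

S = rng.standard_normal(10000)    # arbitrary source signal
NA = rng.standard_normal(32)      # neural (cochlear) filter of neuron A
NB = rng.standard_normal(32)      # neural (cochlear) filter of neuron B

# Matched location: choose FR, FL such that NA*FR = NB*FL
FR, FL = NB, NA                   # then NA*FR = NA*NB = NB*FL (convolution commutes)
xA = np.convolve(np.convolve(S, FR), NA)    # input to neuron A: NA*FR*S
xB = np.convolve(np.convolve(S, FL), NB)    # input to neuron B: NB*FL*S
print("matched location, max |xA - xB| :", np.max(np.abs(xA - xB)))     # ~0: synchrony

# A different (mismatched) location
FR2 = rng.standard_normal(32)
xA2 = np.convolve(np.convolve(S, FR2), NA)
print("other location,   max |xA - xB| :", np.max(np.abs(xA2 - xB)))    # large: no synchrony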
Decoding synchrony structure
[Figure: basilar membrane → cochlear nucleus → MSO.]
Each source location is represented by a specific assembly of binaural neurons, i.e. the neurons whose inputs contain that location in their SRF.
Proof of principle
Sounds: noise, musical instruments, voice (VCV).
Acoustical filtering: measured human HRTFs (FL, FR).
Peripheral filtering: gammatone filterbank (γi); spiking: noisy IF models; coincidence detection: noisy IF models.
Additional transformations: all HRTFs; band-pass filtered HRTFs.
Location = assembly of coincidence detectors (one per channel) whose inputs' synchrony RFs contain that location.
Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol 6(11): e1000993. doi:10.1371/journal.pcbi.1000993.
Results
[Figure: activation of all assemblies as a function of preferred location; spatial receptive fields; estimation error in azimuth and elevation; categorization performance.]
Spike-based learning
Hebb’s rule
« When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. » (D. Hebb, 1949)
Neuron A and neuron B are active: wAB increases.
Physiologically: « synaptic plasticity », i.e. the PSP size is increased (or: the transmission probability is increased).
Synaptic plasticity at spike level
(STDP = Spike-Timing-Dependent Plasticity)
[Figure: weight change as a function of the timing of the postsynaptic action potential relative to the presynaptic spike (Dan & Poo, 2006).]
• pre → post: potentiation
• post → pre: depression
• causal rule
• favors synchronous inputs
Phenomenological model
τpre·dApre/dt = -Apre
τpost·dApost/dt = -Apost
Presynaptic spike: Apre → Apre + ΔApre, w → w + Apost
Postsynaptic spike: Apost → Apost + ΔApost, w → w + Apre
Equivalently, for pairs of spikes:
Δw = Σi,j f(ti^post - tj^pre)
f(s) = Apre·e^(-s/τpre) if s > 0
f(s) = Apost·e^(s/τpost) if s < 0
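A minimal Python sketch of this trace-based STDP rule for one synapse, applied to given lists of pre- and postsynaptic spike times; the time constants, increments and initial weight are illustrative, and the depression increment is taken negative:

import numpy as np

tau_pre, tau_post = 20e-3, 20e-3
dA_pre, dA_post = 0.01, -0.0105      # potentiation / depression increments
w = 0.5

pre_spikes = [10e-3, 50e-3, 120e-3]
post_spikes = [15e-3, 100e-3]
events = sorted([(t, 'pre') for t in pre_spikes] + [(t, 'post') for t in post_spikes])

A_pre = A_post = 0.0
t_last = 0.0
for t, kind in events:
    # both traces decay exponentially between events
    A_pre *= np.exp(-(t - t_last) / tau_pre)
    A_post *= np.exp(-(t - t_last) / tau_post)
    t_last = t
    if kind == 'pre':
        A_pre += dA_pre
        w += A_post      # post -> pre pairings depress (A_post < 0)
    else:
        A_post += dA_post
        w += A_pre       # pre -> post pairings potentiate
print(f"final weight: {w:.4f}")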
Learning a synchrony code
Example: a synchrony-based code for duration.
Neurons with rebound spiking: the spike latency after inhibition depends on the inhibition's duration, so neurons A and B fire in synchrony for a duration of 500 ms (their synchrony receptive field).
A postsynaptic neuron fires preferentially at duration 500 ms.
Brette (2012). Computing with neural synchrony. PLoS Comp Biol
Homeostasis



A coincidence detector must fire only to coincidences, i.e. rarely.
Homeostatic mechanism: enforce a target firing rate F.
Example, synaptic scaling:
w → (1-a)·w when the neuron spikes
dw/dt = b·w otherwise
Average weight change: dw = -a·w·F·dt + b·w·dt
Equilibrium: F = b/a
Brette (2012). Computing with neural synchrony. PLoS Comp Biol
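A toy Python sketch of this homeostatic rule; for illustration the output rate is assumed to be some increasing function of the weight (the rate(w) function below is hypothetical), and the rate indeed settles at the target b/a:

a, b = 0.01, 0.1      # scaling parameters: target rate = b/a = 10 Hz

def rate(w):
    # hypothetical monotonic relationship between synaptic weight and output rate (Hz)
    return 50.0 * w / (w + 1.0)

w, dt = 2.0, 1e-2
for _ in range(200000):                  # 2000 s of simulated time
    w += (-a * rate(w) + b) * w * dt     # average weight change: -a*w*F*dt + b*w*dt
print(f"weight {w:.3f}, rate {rate(w):.2f} Hz (target {b / a:.1f} Hz)")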
Learning the synchrony code for duration
STDP + synaptic scaling (only potentiation):
• potentiation of coincident inputs
• duration tuning in postsynaptic neurons
[Figure: presynaptic and postsynaptic spike trains; duration tuning curves.]
Brette (2012). Computing with neural synchrony. PLoS Comp Biol
Publications on synchrony-based computing
• Reliability of spike timing in models: Brette, R. and E. Guigon (2003). Reliability of spike timing is a general property of spiking model neurons. Neural Comput 15(2): 279-308.
• Coincidence detection: Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs. J Neurosci 31(47): 17193-17206.
• Computing with synchrony: Brette R (2012). Computing with neural synchrony. PLoS Comp Biol.
• Sound localization with binaural synchrony: Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol 6(11): e1000993. doi:10.1371/journal.pcbi.1000993.
• Simulation: Goodman, D. and R. Brette (2009). The Brian simulator. Front Neurosci. doi:10.3389/neuro.01.026.2009.
Invariant structure in perception (psychology): James J Gibson (1979), The ecological approach to visual perception. Boston: Houghton Mifflin.
[email protected]
http://audition.ens.fr/brette/