Computing with synchrony


Computing with neural synchrony:
an ecological approach to neural computation
Romain Brette
Ecole Normale Supérieure, Paris
[email protected]
Perception as pattern recognition
The main function of sensory systems is often described as pattern recognition.
[Diagram: sensory input (« image ») → neural representation (« features ») → pattern recognition, matched against memory → perception]
This is the main paradigm in standard neural network theory (e.g. perceptron).
One major issue of perceptual systems is invariance: the same object can correspond to
many different patterns. How do you build a system that is invariant to these changes?
Marr (1982). Vision. Freeman & Co Ltd
The ecological approach
“Ask not what’s inside your head but what your head is inside of”
James J Gibson
Main point: sensory signals come from real things in the world, and therefore are tightly
constrained by the laws of physics. Perceiving is about grasping sensory laws, the “invariant
structure” of sensory signals (or sensorimotor laws).
Example: pitch. One acoustical wave elicits a pitch percept (a musical note). Another wave elicits the same pitch (the same note), although the pattern is different. However, the structure is the same (= periodicity).
James J Gibson (1979). The ecological approach to visual perception.
JK O’Regan & A. Noë. A sensorimotor account of vision and visual consciousness.
Pattern vs. structure
What is this signal? A signal S(t) that repeats with period T.

Pattern recognition:
« I don't know! » ... « This is an A4 » → label this pattern as A4.
New instance: « I don't know! » ... « This is another instance of an A4 » → label this pattern as A4.
Invariance is learned with many examples.

Structure identification:
« I don't know! But I notice that S(t+T) = S(t) » ... « This is an A4 » → label this structure as A4.
New instance: « I notice that S(t+T) = S(t) » → « It's an A4! »
Invariance is intrinsic: one example is enough.
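As an illustration, here is a minimal NumPy sketch of structure identification (the signal, sampling rate and tolerance are my own assumptions, not from the talk): from a single example, it tests the law S(t+T) = S(t) and recovers the pitch.

import numpy as np

# Hypothetical example: a periodic signal (A4 = 440 Hz) sampled at 44.1 kHz
fs = 44100.0
t = np.arange(0, 0.1, 1 / fs)
s = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

def find_period(signal, fs, tol=1e-3):
    # Return the smallest lag T (in seconds) satisfying signal(t + T) = signal(t)
    power = np.mean(signal ** 2)
    for lag in range(1, len(signal) // 2):
        if np.mean((signal[lag:] - signal[:-lag]) ** 2) < tol * power:
            return lag / fs
    return None

T = find_period(s, fs)
print("Detected period: %.4f s -> pitch about %.0f Hz" % (T, 1 / T))

A different waveform with the same period satisfies the same law, so no further examples are needed for the invariance.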
Olfaction
(This is just meant as a pedagogical example)
Receptor signals: Ci(t) = ai × [O](t) for receptors i = 1, 2, 3, 4, where ai is the affinity of receptor i to the odor and the concentration [O](t) fluctuates (air turbulence). Odor identity is defined by the receptor affinities.
Sensory laws or « invariant structure »:
C1(t) / C2(t) = a1/a2 (constant ratio between signals)
a2 × C1(t) = a1 × C2(t) (two views of the signals are identical)
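A minimal numerical sketch of this invariant structure (the fluctuating concentration below is made up for illustration): however the concentration varies, the ratio of receptor signals stays equal to the ratio of affinities.

import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 0.8, 0.3                                  # receptor affinities (odor identity)
conc = np.abs(np.cumsum(rng.normal(0, 1, 1000)))   # fluctuating concentration [O](t)

C1 = a1 * conc                                     # receptor signals
C2 = a2 * conc

mask = conc > 0
print(np.allclose(C1[mask] / C2[mask], a1 / a2))   # True: constant ratio a1/a2
print(np.allclose(a2 * C1, a1 * C2))               # True: two identical "views"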
NEURAL SYNCHRONY AND
SENSORY STRUCTURE
Encoding signals into spikes
Z. Mainen, T. Sejnowski, Science (1995)
Spike timing is reproducible in vitro for time-varying inputs; an integrate-and-fire model shows the same behavior.
Encoding signals into spikes
Integrate-and-fire model, main points:
1) Temporal precision is determined by
intrinsic noise, rather than signal
fluctuation timescale.
2) Neurons are precise in the
fluctuation-driven regime (mean
input below threshold)
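A minimal sketch of this point (parameters are illustrative, not those of the original experiments or models): a leaky integrate-and-fire neuron receives the same frozen fluctuating input on every trial plus small independent intrinsic noise, with the mean input below threshold; spike times then repeat closely across trials.

import numpy as np

rng = np.random.default_rng(1)
dt, tau, n = 0.1, 10.0, 10000                     # ms; 1 s of simulation
# Frozen fluctuating input with mean below threshold (fluctuation-driven regime)
frozen = 0.8 + 1.5 * np.convolve(rng.normal(0, 1, n), np.ones(50) / 50, "same")

def run_trial(noise_sd=0.01):
    v, spikes = 0.0, []
    for i in range(n):
        # Leaky integration of the frozen input plus intrinsic noise
        v += dt / tau * (frozen[i] - v) + noise_sd * np.sqrt(dt) * rng.normal()
        if v > 1.0:                               # threshold crossed: spike and reset
            spikes.append(i * dt)
            v = 0.0
    return spikes

for trial in range(5):
    print(["%.1f" % s for s in run_trial()])      # spike times line up across trials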
The synchrony receptive field
Two neurons, A and B.
« Synchrony receptive field » = set of stimuli S making A and B fire synchronously
= {S | NA(S) = NB(S)}
= a law followed by sensory signals S, or « invariant structure »
Structure and synchrony
Synchrony patterns reflect invariant sensory structure
[Diagram: a source of variation X (e.g. pitch) passes through an invariant transformation to produce structured sensory signals S.]
Computing with synchrony
Synchrony receptive field: SRF(A,B) = {S | NA(S) = NB(S)}.
A neuron C responds to coincidences between A and B, i.e. when S is in SRF(A,B); otherwise there is no response.
Condition for coincidence detection: the fluctuation-driven regime.
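A minimal sketch of the readout (spike times and the coincidence window are made up): the downstream neuron C simply counts near-coincidences between the spike trains of A and B, so it responds only when the stimulus lies in SRF(A,B).

import numpy as np

def coincidence_count(spikes_a, spikes_b, window=1.0):
    # Number of spikes of A falling within `window` (ms) of some spike of B
    a = np.asarray(spikes_a, float)
    b = np.sort(np.asarray(spikes_b, float))
    idx = np.clip(np.searchsorted(b, a), 1, len(b) - 1)
    nearest = np.minimum(np.abs(a - b[idx]), np.abs(a - b[idx - 1]))
    return int(np.sum(nearest <= window))

# Stimulus inside SRF(A,B): A and B are driven identically -> synchronous spikes
print(coincidence_count([10, 35, 62, 90], [10.2, 35.1, 61.8, 90.3]))   # 4
# Stimulus outside SRF(A,B): unrelated spike times -> C stays silent
print(coincidence_count([10, 35, 62, 90], [5, 48, 71, 83]))            # 0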
THE OLFACTORY EXAMPLE
Olfactory receptors
Receptor neurons with different sensitivities s receive the receptor signals Ci(t) = ai × [O](t) (the concentration [O](t) fluctuates with air turbulence), so each neuron is driven by s × a × [O](t):
neuron A receives sA × a1 × [O](t)
neuron B receives sB × a1 × [O](t)
neuron C receives sC × a4 × [O](t)
A and C synchronize for some odor (sA × a1 = sC × a4).
B and C synchronize for another odor (sB × a1 = sC × a4).
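A minimal spiking sketch of these synchrony conditions (all parameters are invented): receptor neurons are integrate-and-fire units driven by s × a × [O](t), so two of them produce the same spike train exactly when their products s × a match, whatever the concentration does.

import numpy as np

rng = np.random.default_rng(2)
dt, tau = 0.1, 10.0                                      # ms
conc = np.abs(np.cumsum(rng.normal(0, 0.05, 20000)))     # fluctuating [O](t)

def receptor_spikes(gain):                               # gain = s * a
    v, spikes = 0.0, []
    for i, c in enumerate(conc):
        v += dt / tau * (gain * c - v)                   # noiseless IF for simplicity
        if v > 1.0:
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

A = receptor_spikes(0.6)        # neuron A: sA * a1 = 0.6
C = receptor_spikes(0.6)        # neuron C: sC * a4 = 0.6 -> same drive as A
B = receptor_spikes(0.9)        # neuron B: sB * a1 = 0.9 -> different drive

print(np.array_equal(A, C))     # True: A and C are synchronous for this odor
print(len(A), len(B))           # B fires differently: not synchronous with A
# With intrinsic noise the match would be near-synchrony rather than identity.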
Spiking neuron model
Neurons with different receptor types (a) and sensitivities (s); in the figure, color = a × s for odor A.
One neural assembly for odor A: receptors with the same color (= synchronous for odor A) project to the same neuron in the assembly.
Spiking neuron model
In the figure, color = a × s for odor B: same neurons, different « synchrony groups ».
Another neural assembly for odor B: receptors with the same color (= synchronous for odor B) project to the same neuron in the assembly.
Spiking neuron model
[Figure: responses as a function of odor concentration.]
Neurons wired to detect synchrony patterns produced by odor A fire.
Spiking neuron model
[Figure: responses as a function of odor concentration.]
Neurons wired to detect synchrony patterns produced by odor B fire.
Spiking neuron model
[Figure: responses at higher odor concentration.]
Receptor neurons start saturating. Neurons from group B still fire; neurons from group A don't.
Spiking neuron model
[Figure: responses with a distracting odor added.]
A distracting odor reduces firing in group A, but doesn't increase firing in group B.
LEARNING SENSORY STRUCTURE
The problem
1) We want the network to learn the correct mapping
i.e., connecting neurons that are simultaneously active for
a specific structure to a postsynaptic neuron
sounds like STDP!
2) We want neurons to respond only to coincidences
Coincidence detection and
homeostasis
• A coincidence detector must only fire to coincidences,
i.e., rarely
• Homeostatic mechanism: enforce a target firing rate F
• Example, synaptic scaling:
w → (1 − a) × w when the neuron spikes
dw/dt = b × w otherwise
Average weight change: dw = −a × w × F × dt + b × w × dt
Equilibrium: F = b/a
• This homeostatic mechanism does not change relative weights
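A minimal numerical check of this equilibrium (the constants a, b, rates and duration are arbitrary): driving the two update rules with a fixed Poisson firing rate F makes all weights grow when F < b/a and shrink when F > b/a, while their ratios stay unchanged.

import numpy as np

def scaled_weights(w0, F, a=0.02, b=0.1, T=20.0, dt=0.001):
    # w -> (1 - a) * w at each spike (Poisson, rate F); dw/dt = b * w otherwise
    rng = np.random.default_rng(3)
    w = np.array(w0, float)
    for _ in range(int(T / dt)):
        if rng.random() < F * dt:
            w *= 1 - a
        else:
            w += b * w * dt
    return w

w0 = [1.0, 2.0]                       # two synapses with ratio 1:2
print(scaled_weights(w0, F=2.0))      # F < b/a = 5 Hz: both weights grow
print(scaled_weights(w0, F=10.0))     # F > b/a = 5 Hz: both weights shrink
# In both cases the ratio stays 1:2: relative weights are not changed.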
Learning structure with STDP
STDP + synaptic scaling (potentiation only).
[Diagram: several presynaptic spike trains converging on one postsynaptic neuron.]
Potentiation of coincident inputs.
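A minimal sketch of the idea (kernel, constants and spike times are my own; the scaling step is reduced to a simple normalization): potentiation-only STDP strengthens inputs whose spikes precede postsynaptic spikes, so coincident inputs that drive the neuron end up with the largest weights.

import numpy as np

def stdp_potentiate(w, pre_trains, post_times, A_plus=0.05, tau_plus=10.0):
    # Potentiate synapse i for every pre spike shortly before a post spike (times in ms)
    for i, pre_times in enumerate(pre_trains):
        for t_pre in pre_times:
            for t_post in post_times:
                if t_post > t_pre:
                    w[i] += A_plus * np.exp(-(t_post - t_pre) / tau_plus)
    return w

w = np.ones(3)
pre = [[10.0, 30.0], [10.2, 29.8], [18.0, 42.0]]   # inputs 0 and 1 are coincident
post = [11.0, 31.0]                                # postsynaptic spikes they evoke
w = stdp_potentiate(w, pre, post)
w *= 3.0 / w.sum()                                 # crude stand-in for synaptic scaling
print(w)              # inputs 0 and 1 (coincident) end up stronger than input 2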
Learning to identify odors
Random presentation of odors A and B, fixed concentration
At the end: neurons are tuned to either A or B
Learning to identify odors
Concentration-invariant responses:
[Figure: responses across concentrations.]
At the end: neurons are tuned to either A or B.
BINAURAL HEARING
Binaural hearing without diffraction
The source signal S(t) reaches the right ear as S(t − dR) and the left ear as S(t − dL). With neural conduction delays δR and δL, the two inputs to a binaural neuron are synchronous when:
S(t − dR − δR) = S(t − dL − δL) (invariant structure)
i.e. dR − dL = δL − δR
This is independent of the source signal. It is essentially the Jeffress model of sound localization.
But this is simplistic!
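A minimal sketch of this invariance under the pure-delay assumption (delays in samples, circular shifts, and the signal are all made up): the two inputs to the binaural neuron are identical whenever the neural delays compensate the acoustical ones, for any source signal.

import numpy as np

rng = np.random.default_rng(4)
s = rng.normal(size=5000)                 # arbitrary source signal S(t)

def input_to_neuron(d, delta):
    # Acoustic delay d plus neural delay delta (in samples; circular for simplicity)
    return np.roll(s, d + delta)

dR, dL = 7, 12                            # acoustical delays to the two ears
# Coincidence iff dR + deltaR == dL + deltaL, i.e. dR - dL = deltaL - deltaR:
print(np.array_equal(input_to_neuron(dR, 9), input_to_neuron(dL, 4)))   # True
print(np.array_equal(input_to_neuron(dR, 9), input_to_neuron(dL, 6)))   # False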
Binaural hearing in real life
FR, FL = location-dependent acoustical filters (HRTFs/HRIRs).
[Figure: interaural delay as a function of frequency, from low to high frequencies.]
Sound propagation is more complex than pure delays!
Binaural hearing in real life
[Figure: measured ITDs (ms) as a function of frequency, for source locations from front to back.]
Binaural structure and synchrony
receptive fields
FR,FL = HRTFs/HRIRs (location-dependent)
NA, NB = neural filters
(e.g. basilar membrane filtering)
input to neuron A: NA*FR*S (convolution)
input to neuron B: NB*FL*S
Synchrony when: NA*FR = NB*FL
SRF(A,B) = set of filter pairs (FL,FR)
= set of source locations
= spatial receptive field
Independent of source signal S
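A minimal sketch of this condition with toy FIR filters standing in for the HRIRs and cochlear filters (all coefficients invented): the inputs to A and B are identical whenever the composite filters NA∗FR and NB∗FL are equal, whatever the source signal is.

import numpy as np

rng = np.random.default_rng(5)
S = rng.normal(size=2000)                       # arbitrary source signal

FR = np.array([1.0, 0.5, 0.2])                  # toy acoustical filters (HRIR stand-ins)
FL = np.array([0.8, 0.3, 0.1])
NA = np.array([0.4, 0.1])                       # toy neural filter of neuron A

# Neuron B matches this location if its filter NB satisfies NB * FL = NA * FR;
# here we use the matched composite filter directly instead of deconvolving.
matched = np.convolve(NA, FR)                   # composite filter NA * FR (= NB * FL)
mismatched = np.convolve(np.array([0.5, 0.2]), FL)   # some other neural filter NB'

input_A = np.convolve(np.convolve(NA, FR), S)   # NA * FR * S
print(np.allclose(input_A, np.convolve(matched, S)))      # True, for any S
print(np.allclose(input_A, np.convolve(mismatched, S)))   # False: no synchrony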
The hypothesis
Each binaural neuron encodes an element of binaural structure
[Diagram: the source S is filtered into FR∗S and FL∗S at the two ears, then into NA∗FR∗S and NB∗FL∗S at the inputs of the binaural neuron.]
Proof of concept
Sounds S: noise, musical instruments, voice (VCV).
Acoustical filtering FR, FL: measured human HRTFs.
Cochlear filtering: gammatone filterbank γi (plus additional filters).
Spiking and coincidence detection: noisy integrate-and-fire models.
[Figure: activation of all assemblies as a function of preferred location.]
Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol 6(11): e1000993.
doi:10.1371/journal.pcbi.1000993.
Best phase of a neuron vs. frequency
= Interaural phase difference vs. frequency
for preferred source location
Experimental prediction:
[Figure: best phase and CP as a function of input frequency (Hz); predictions from HRTFs compared with recorded cells (cat IC).]
MORE EXAMPLES
Visual edges
For a pair of LGN neurons A and B: SRF(A,B) = translation-invariant images.
In LGN: correlation is tuned to orientation!
Stanley et al. (2012). Visual Orientation and Directional Selectivity through Thalamic Synchrony. J Neurosci
Visual edges
OK but what’s the difference with the standard V1 model?
Pattern recognition
« It looks like a Gabor wavelet »
Structure detection
« It is translation-invariant »
Cross-correlation vs. autocorrelation
Barlow & Berry (2010). Cross- and auto-correlation in early vision. Proc Royal Soc B.
Impact sounds
• Decay time indicates material (metal/wood)
• Resonant frequencies indicate shape
• Amplitudes of the modes are linked to impact properties
Take home messages
• Synchrony reflects sensory structure
• STDP learns structure
• Computing with neural synchrony is detecting
structure or « laws » in sensory signals
≠ pattern recognition
• Invariance is not an issue anymore because
structure is an invariant
Thank you
Victor Benichoux: HRTF measurements and analysis, analysis of in vivo recordings, binaural model
Dan Goodman: binaural model (« proof of principle »), Brian simulator
Bertrand Fontaine: auditory models (« Brian Hears »)
Jonathan Laudanski, Marc Rébillat: pitch model, binaural model, analysis of HRTFs
Cyrille Rossant: coincidence sensitivity of neurons, model fitting
Boris Gourévitch: impact of reflections on binaural cues
Anna Magnusson (Stockholm): in vitro electrophysiology
Philip Joris (Leuven): in vivo electrophysiology (cats)
Jose Peña (NY): in vivo electrophysiology (owls)
And: Makoto Otani (BEM simulations), Renaud Keriven (3D models)
Publications on synchrony-based computing
• Reliability of spike timing in models: Brette R and Guigon E (2003). Reliability of spike timing is a general property of spiking model neurons. Neural Comput 15(2): 279-308.
• Coincidence detection: Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs. J Neurosci 31(47): 17193-17206.
• Computing with synchrony: Brette R (2012). Computing with neural synchrony. PLoS Comput Biol.
• Sound localization with binaural synchrony: Goodman DF and Brette R (2010). Spike-timing-based computation in sound localization. PLoS Comput Biol 6(11): e1000993. doi:10.1371/journal.pcbi.1000993.
• Simulation: Goodman D and Brette R (2009). The Brian simulator. Front Neurosci. doi:10.3389/neuro.01.026.2009.
• Invariant structure in perception (psychology): Gibson JJ (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
[email protected]
http://audition.ens.fr/brette/
A FEW WORDS ABOUT
COINCIDENCE DETECTION
An example
4000 independent excitatory Poisson inputs + 1000 inhibitory inputs (rates of 2 Hz and 8 Hz), plus synchrony events involving 10 random synapses at a rate of 40 Hz.
[Figure: membrane potential trace (scale bars: 20 mV, 100 ms) with the synchrony events marked.]
Pairwise correlation: 0.0002 (probably not experimentally detectable!)
Why tiny correlations may have large
postsynaptic effects
• In a balanced regime, the output rate depends on both the mean and the variance of the input.
• Consider N random variables Xn with identical distributions and correlation c = cov(Xi, Xj) / var(X).
• Let S = X1 + … + XN. What is the variance of S?
var(S) = N × var(X) if c = 0
var(S) = N × var(X) + Σ(i≠j) cov(Xi, Xj) otherwise
var(S) = (N + N(N−1)c) × var(X)
→ correlations can be neglected only if c << 1/N.
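A quick Monte Carlo check of the last formula (N, c and the shared-component construction are just for illustration):

import numpy as np

rng = np.random.default_rng(6)
N, c, trials = 1000, 0.005, 5000

# X_n with unit variance and pairwise correlation c, built from a shared component
shared = rng.normal(size=(trials, 1))
private = rng.normal(size=(trials, N))
X = np.sqrt(c) * shared + np.sqrt(1 - c) * private

S = X.sum(axis=1)
print(S.var())                          # empirical variance of the sum (about 6000)
print(N + N * (N - 1) * c)              # predicted (N + N(N-1)c) var(X) = 5995
# Here c = 0.005 is small, but not small compared to 1/N = 0.001,
# so the correlation term dominates the variance of the summed input.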
Asynchronous spikes vs.
coincident spikes
Compare 2 asynchronous input spikes with 2 coincident spikes in a noisy neuron.
[Figure: membrane potential Vm and PSTH; the average extra number of output spikes is measured in each case.]
Coincidence sensitivity S = the difference in extra output spikes between the two cases.
A simple approach
[Figure: coincidence sensitivity S as a function of the number p of input spikes (threshold indicated); theory vs. simulations, for instantaneous and for exponential synaptic currents.]
Neurons are sensitive to coincidences
in the balanced regime only
[Figure: extra output spikes produced by coincident input spikes (weights w and 2w) on top of background noise, in the balanced regime (= fluctuation-driven) and in the oscillator regime (= mean-driven).]
Quantitative results: 2 spikes
[Figure: extra spikes as a function of synaptic weight.]
Quantitative results: p spikes
[Figure: extra spikes as a function of p.]
The effect depends on p × w rather than p.
Distributed synchrony
Synchrony events involving p random synapses (w = 0.5 mV).
[Figure: extra output spikes per synchrony event; theory vs. simulation.]
Summary
• In the balanced regime, neurons are extremely sensitive to input correlations.
• Correlations have negligible effects only if they are small compared to 1/N (N synapses): var(S) = (N + N(N−1)c) × var(X).
• In fact, correlations that are undetectable in paired recordings can have a tremendous postsynaptic effect.
• We have a simple method to calculate the
(stochastic) effect of coincidences