Transcript 20050927150015301-149354

Gain control in insect olfaction
for efficient odor recognition
Ramón Huerta
Institute for Nonlinear Science
UCSD
The goal
What are time and dynamics buying us for
pattern-recognition purposes?
One way to tackle it
1. Start from the basics of pattern recognition:
organization, connectivity, etc.
2. See when dynamics (time) is required.
How does an engineer address a pattern recognition problem?
1. Feature extraction. For example: edges, shapes, textures, etc.
2. Machine learning. For example: ANN, RBF, SVM, Fisher, etc.
What is easy? What is difficult?
1. Feature extraction: very difficult (the cooking phase)
2. Machine learning: very easy (the automatic phase)
How insects appear to do it
[Diagram: Antenna → Antennal Lobe (AL): feature extraction → Mushroom body (MB): machine-learning stage → Mushroom body lobes: location of learning. High divergence-convergence ratios from layer to layer.]
Bad news
The feature extraction stage is mostly genetically prewired
Good news
The machine learning section seems to be “plastic”
Spatio-temporal coding occurs in the Antennal Lobe (AL), the feature-extraction stage. There is no evidence of time playing a role in the Mushroom body (MB), the machine-learning stage.
[Diagram repeated: Antenna → AL → MB → Mushroom body lobes.]
The basic question
Can we implement a learning machine with
• fan-in, fan-out connectivities,
• the proportion of neurons,
• local synaptic plasticity,
• and inhibition?
Huerta et al., Neural Computation 16(8):1601-1640 (2004).
Marr, D. (1969). A theory of cerebellar cortex. J. Physiol. 202:437-470.
Marr, D. (1970). A theory for cerebral neocortex. Proceedings of the Royal Society of London B 176:161-234.
Marr, D. (1971). Simple memory: a theory for archicortex. Phil. Trans. Royal Soc. London 262:23-81.
Willshaw, D., Buneman, O. P., & Longuet-Higgins, H. C. (1969). Non-holographic associative memory. Nature 222:960.
Stage I: Transformation into a large display. Stage II: Learning “perception” of odors.
[Diagram: AL projection neurons (PNs, ~800) → CALYX, the display layer: intrinsic Kenyon cells (iKC, ~50,000), k-winner-take-all, no learning required → MB lobes, the decision layer: extrinsic Kenyon cells (eKC, ~100?), learning required.]
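The two-stage architecture just described can be sketched in a few lines: a fixed random fan-out from the PNs to a large iKC display layer with k-winner-take-all, followed by a single eKC readout trained with a local perceptron-style rule. This is a minimal sketch, not the talk's model: the layer sizes are scaled down from the Drosophila numbers, and the connection probability, learning rate, noise level and odor patterns are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down layer sizes (the talk's Drosophila numbers: PNs ~800, iKC ~50,000)
N_PN, N_iKC = 80, 5000
K_ACTIVE = 50  # k-winner-take-all keeps ~1% of iKCs active (sparse code)

# Stage I: fixed random PN -> iKC fan-out; no learning required
W_display = (rng.random((N_iKC, N_PN)) < 0.1).astype(float)

def ikc_code(pn_rates):
    """Binary sparse iKC code: only the K most strongly driven cells fire."""
    drive = W_display @ pn_rates
    code = np.zeros(N_iKC)
    code[np.argsort(drive)[-K_ACTIVE:]] = 1.0
    return code

# Stage II: plastic iKC -> eKC synapses trained with a local perceptron rule
w = np.zeros(N_iKC)

def train_step(code, label, lr=0.1):
    """Only synapses from currently active iKCs change (local plasticity)."""
    global w
    if (1.0 if w @ code > 0 else -1.0) != label:
        w = w + lr * label * code

odor_a, odor_b = rng.random(N_PN), rng.random(N_PN)
for _ in range(20):  # noisy presentations of the two odor classes
    train_step(ikc_code(odor_a + 0.05 * rng.standard_normal(N_PN)), +1)
    train_step(ikc_code(odor_b + 0.05 * rng.standard_normal(N_PN)), -1)

print(w @ ikc_code(odor_a) > 0, w @ ikc_code(odor_b) > 0)
```

The display stage needs no supervision at all; all learning is confined to the iKC→eKC synapses, matching the split between the genetically prewired AL/calyx and the plastic MB lobes.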
[Diagram: odors of Class 1 and Class 2, given as binary vectors in AL coordinates (x0, ..., x3), are mapped to KC coordinates (y0, ..., y3). An MB lobe neuron makes the decision: the connection weights w from the KCs to the MB lobes define a separating hyperplane.]
Odor classification
[Diagram: odors 1, 2, 3, 4, ..., N grouped into Class 1 and Class 2.]
[Plots: probability of discrimination vs. # of active KCs (sparse code), and capacity for discrimination (total # of odors) vs. # of active KCs.]
We look for the maximum number of odors that can be discriminated for different numbers of active KCs, nKC.
Note: we use Drosophila numbers.
The existence of a sparse code (~1% activity) has been shown in both the locust (Laurent) and the honeybee (Menzel). Discrimination works well only in narrow ranges of sparse activity.
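One reason sparseness helps can be checked with a quick Monte Carlo: the expected overlap between the codes of two unrelated odors grows linearly with the number of active KCs. The population size and trial count below are illustrative, scaled-down assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_KC = 2000      # scaled-down KC population (assumed size)
TRIALS = 400

def mean_overlap(k):
    """Expected fraction of active cells shared by two random k-active codes."""
    total = 0.0
    for _ in range(TRIALS):
        a = set(rng.choice(N_KC, size=k, replace=False).tolist())
        b = set(rng.choice(N_KC, size=k, replace=False).tolist())
        total += len(a & b) / k
    return total / TRIALS

for k in (20, 200, 1000):  # ~1%, 10%, 50% activity
    print(k, round(mean_overlap(k), 3))  # overlap grows like k / N_KC
```

Denser codes are therefore more confusable, while codes with very few active cells carry little information, which is consistent with capacity peaking in a narrow sparse-activity range.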
Without GAIN CONTROL
There can be major FAILURE
GAIN CONTROL
[Diagram repeated: Antenna → Antennal Lobe (AL): feature extraction → Mushroom body (MB): machine-learning stage → Mushroom body lobes. Gain control happens in the AL, but nobody knows why.]
Evidence for gain control in the AL
• These neurons can fire at up to 100 Hz.
• The baseline firing rate is 3-4 Hz.
Data from Mark Stopfer, Vivek Jayaraman and Gilles Laurent.
Honeybee: Galizia’s group.
• There seem to be local GABA circuits in the MBs.
• Locust and honeybee circuits are different: the honeybee has about 10 times more inhibitory neurons than the locust.
Let’s concentrate on the locust problem:
How do we design the AL circuit such that it has gain
control?
$$\tau_E \frac{df_i^E}{dt} = F_E\!\left(\sum_{j=1}^{N_E} w_{ij}^{EE} f_j^E - \sum_{j=1}^{N_I} w_{ij}^{EI} f_j^I + \theta_E + I\right) - f_i^E$$

$$\tau_I \frac{df_i^I}{dt} = F_I\!\left(\sum_{j=1}^{N_E} w_{ij}^{IE} f_j^E - \sum_{j=1}^{N_I} w_{ij}^{II} f_j^I + \theta_I + I\right) - f_i^I$$
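Rate equations of this form can be integrated directly by forward Euler. The sketch below is a scaled-down stand-in for the 400-neuron simulations mentioned later in the talk: the threshold-linear gain, connection probability, synaptic strengths, thresholds and inputs are all illustrative assumptions, not the talk's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

N_E, N_I = 60, 20                     # scaled-down AL (the talk simulates 400 neurons)
p = 0.5                               # connection probability; p_EE = 0 as in the talk
g_IE, g_EI, g_II = 0.05, 0.1, 0.05    # illustrative synaptic strengths
theta_E = theta_I = -1.0              # thresholds
tau = 10.0                            # relaxation time constant (ms)
F = lambda u: np.maximum(u, 0.0)      # threshold-linear gain for the sketch

W_IE = g_IE * (rng.random((N_I, N_E)) < p)   # E -> I
W_EI = g_EI * (rng.random((N_E, N_I)) < p)   # I -> E
W_II = g_II * (rng.random((N_I, N_I)) < p)   # I -> I

I_E = 5.0 * (rng.random(N_E) < 0.3)   # odor input hits a fraction of each population
I_I = 5.0 * (rng.random(N_I) < 0.3)

fE, fI = np.zeros(N_E), np.zeros(N_I)
dt = 0.1
for _ in range(10000):                # 1 s of simulated time
    fE += dt / tau * (F(-W_EI @ fI + theta_E + I_E) - fE)   # no E -> E term (p_EE = 0)
    fI += dt / tau * (F(W_IE @ fE - W_II @ fI + theta_I + I_I) - fI)

print(round(float(fE.mean()), 3), round(float(fI.mean()), 3))
```

Because rates relax toward the output of the gain function, all activities stay non-negative and bounded by the strongest net drive.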
Mean field of 4 populations of neurons:
$$S_E = \{\, i \mid \text{they receive input} \,\}, \quad |S_E| = \lambda N_E, \qquad S_I = \{\, i \mid \text{they receive input} \,\}, \quad |S_I| = \lambda N_I$$

$$x_1(t) = \frac{1}{|S_E|}\sum_{i \in S_E} f_i^E(t), \qquad x_2(t) = \frac{1}{|\bar S_E|}\sum_{i \notin S_E} f_i^E(t)$$

$$y_1(t) = \frac{1}{|S_I|}\sum_{i \in S_I} f_i^I(t), \qquad y_2(t) = \frac{1}{|\bar S_I|}\sum_{i \notin S_I} f_i^I(t)$$
We apply the mean field: averaging the excitatory equation over the population $S_E$ and replacing the random sums by their expectations,

$$\frac{1}{|S_E|}\sum_{i \in S_E} F_E\!\left(\sum_{j=1}^{N_E} w_{ij}^{EE} f_j^E - \sum_{j=1}^{N_I} w_{ij}^{EI} f_j^I + \theta_E + I\right) \approx F_E\!\left(N_E\, p_{EE}\, g_{EE}\, \bar x - N_I\, p_{EI}\, g_{EI}\, \bar y + \theta_E + I\right)$$

with $\bar x = \lambda x_1 + (1-\lambda) x_2$ and $\bar y = \lambda y_1 + (1-\lambda) y_2$, and similarly for the other three populations.
Define a new set of variables
$$X = N_E\, p_{IE}\, g_{IE} \left[\lambda x_1 + (1-\lambda) x_2\right], \qquad Y = N_I\, p_{EI}\, g_{EI} \left[\lambda y_1 + (1-\lambda) y_2\right]$$
to obtain the mean-field equations
$$\dot X = N_E\, p_{IE}\, g_{IE} \left[\lambda F_E(-Y + I + \theta_E) + (1-\lambda) F_E(-Y + \theta_E)\right] - X$$
$$\dot Y = N_I\, p_{EI}\, g_{EI} \left[\lambda F_I\!\left(X - \tfrac{p_{II}\, g_{II}}{p_{EI}\, g_{EI}} Y + I + \theta_I\right) + (1-\lambda) F_I\!\left(X - \tfrac{p_{II}\, g_{II}}{p_{EI}\, g_{EI}} Y + \theta_I\right)\right] - Y$$
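A mean-field system of this kind is a planar ODE that can be integrated to its fixed point. In the sketch below, the prefactors N_E p_IE g_IE and N_I p_EI g_EI are absorbed into illustrative constants A and B, r stands for the inhibitory ratio p_II g_II / (p_EI g_EI), and both gain functions are taken threshold-linear; all numerical values are assumptions, not the talk's parameters.

```python
relu = lambda u: max(u, 0.0)   # threshold-linear F_E = F_I for this sketch

A = 2.0    # stands in for N_E * p_IE * g_IE (illustrative)
B = 2.0    # stands in for N_I * p_EI * g_EI (illustrative)
r = 0.5    # stands in for p_II * g_II / (p_EI * g_EI)
lam = 0.1  # fraction of cells receiving odor input
theta_E = theta_I = 0.0
I = 1.0    # odor intensity

X = Y = 0.0
dt = 0.01
for _ in range(5000):  # forward Euler to the steady state
    dX = A * (lam * relu(-Y + I + theta_E)
              + (1 - lam) * relu(-Y + theta_E)) - X
    dY = B * (lam * relu(X - r * Y + I + theta_I)
              + (1 - lam) * relu(X - r * Y + theta_I)) - Y
    X, Y = X + dt * dX, Y + dt * dY

print(round(X, 3), round(Y, 3))
```

With these values the trajectory settles at X = 0.15, Y = 0.25, which solves the two fixed-point equations exactly; varying r changes how strongly the excitatory fixed point responds to the odor intensity I.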
where we use $p_{EE} = 0$.

We look for the condition such that $x^*(I, \lambda) = \text{const}$. The condition is
$$p_{II}\, g_{II}\, D_{Y_1^*, Y_2^*} F_I = p_{EI}\, g_{EI}\, D_{X_1^*, X_2^*} F_E$$
with
$$D_{u_1, u_2} F \equiv \lambda \left.\frac{dF}{du}\right|_{u_1} + (1-\lambda) \left.\frac{dF}{du}\right|_{u_2}$$
and the fixed-point arguments
$$X_1^* = -Y + \theta_E + I, \qquad X_2^* = -Y + \theta_E,$$
$$Y_1^* = X - \tfrac{p_{II}\, g_{II}}{p_{EI}\, g_{EI}} Y + \theta_I + I, \qquad Y_2^* = X - \tfrac{p_{II}\, g_{II}}{p_{EI}\, g_{EI}} Y + \theta_I$$
The gain control depends only on the inhibitory connections
This works if $F_E$ and $F_I$ are linear. BUT!
SIMULATIONS: 400 neurons.
The excitatory neurons are neither at high spiking frequencies nor silent: $X_2^* = -Y + \theta_E \approx 0$, so their rates are low (3-4 Hz) but not zero. So we take
$$F_E(X) = \frac{a X}{2}\left[1 + \operatorname{erf}\!\left(\frac{X}{\sigma\sqrt{2}}\right)\right] + \frac{c\, \sigma}{\sqrt{2\pi}}\, e^{-X^2 / 2\sigma^2}$$
The gain-control condition from the mean field can then be estimated as $g_{II}\, p_{II} = g_{EI}\, p_{EI} / 2$.
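The factor 2 in this estimate comes from the slope of the reconstructed F_E at the low-rate operating point X ≈ 0 being a/2, while the linear F_I has slope a. This can be checked numerically; the values of a, c and σ below are illustrative assumptions.

```python
from math import erf, exp, sqrt, pi

a, c, sigma = 1.0, 1.0, 1.0   # illustrative gain, amplitude and noise width

def F_E(X):
    """Gaussian-smoothed threshold-linear gain, as reconstructed above."""
    return (a * X / 2) * (1 + erf(X / (sigma * sqrt(2)))) \
        + (c * sigma / sqrt(2 * pi)) * exp(-X**2 / (2 * sigma**2))

# Strongly inhibited cells are silent; strongly driven cells are linear with slope a
print(round(F_E(-5.0), 3))        # ~0
print(round(F_E(5.0) / 5.0, 3))   # ~a

# The slope at X = 0 is a/2 for any c (the Gaussian term has zero slope there),
# which is where the factor 2 in g_II p_II = g_EI p_EI / 2 comes from
h = 1e-5
print(round((F_E(h) - F_E(-h)) / (2 * h), 3))  # a/2
```

So an excitatory population sitting at 3-4 Hz is on the half-gain part of the curve, while the inhibitory population operates on the fully linear part.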
A few conclusions:
• Gain control can be implemented in the AL network.
• It can be controlled by the inhibitory connectivity; the rest of the parameters are free.
Things to do:
I do not know whether the AL representation is the same under different odor intensities.
Thanks to
• Marta Garcia-Sanchez
• Loig Vaugier
• Thomas Nowotny
• Misha Rabinovich
• Vivek Jayaraman
• Ofer Mazor
• Gilles Laurent