
Brain Machine Interfaces: Modeling Strategies for
Neural Signal Processing
Jose C. Principe, Ph.D.
Distinguished Professor ECE, BME
Computational NeuroEngineering Laboratory
Electrical and Computer Engineering Department
University of Florida
www.cnel.ufl.edu
[email protected]
Brain Machine Interfaces (BMI)
A man-made device that either substitutes a
sensory input to the brain, repairs functional
communication between brain regions, or
translates intention of movement.
Types of BMIs
Sensory (Input BMI): Providing sensory input to form percepts when
natural systems are damaged.
Ex: Visual, Auditory Prosthesis
Motor (Output BMI): Converting motor intent to a command output
(physical device, damaged limbs)
Ex: Prosthetic Arm Control
Cognitive BMI: Interpret internal neuronal state to deliver feedback to
the neural population.
Ex: Epilepsy, DBS Prosthesis
Computational Neuroscience and Technology developments are
playing a larger role in the development of each of these
areas.
General Architecture
BCI (BMI) bypasses the brain’s normal pathways of peripheral nerves (and muscles)
J.R. Wolpaw et al. 2002
The Fundamental Concept
[Diagram: the brain's INTENT is decoded by the machine into ACTION, and a STIMULUS is coded by the machine into a PERCEPT; a neural interface and a physical interface connect brain and machine.]
Need to understand how brain processes information.
Coding: the stimulus is given, the neural response is to be inferred.
Decoding: the neural response is given, the stimulus is to be inferred.
Levels of Abstraction for Neurotechnology
The brain is an extremely complex system:
10^12 neurons
10^15 synapses
Specific interconnectivity
Tapping into the Nervous System
The choice and availability of brain signals and
recording methods can greatly influence the ultimate
performance of the BMI.
The level of BMI performance may be attributed to
selection of electrode technology, choice of model, and
methods for extracting rate, frequency, or timing codes.
[Figure: spatial scales of recording, from coarse (mm) to fine. Source: http://ida.first.fhg.de/projects/bci/bbci_official/]
Choice of Scale for Neuroprosthetics
Recording site                               Bandwidth (approximate)   Localization
Scalp electrodes                             0 ~ 80 Hz                 Volume conduction
Cortical surface electrocorticogram (ECoG)   0 ~ 500 Hz                Cortical surface
Implanted electrodes                         0 ~ 7 kHz                 Single neuron
[Figure: spatial resolution of recordings (Moran).]
Florida Multiscale Signal Acquisition
Develop an experimental paradigm with a nested hierarchy
for studying neural population dynamics.
Least invasive: EEG and ECoG (NRG IRB approval for human studies).
Highest resolution: microelectrodes (NRG IACUC approval for animal studies).
Common BMI-BCI Methods
BMIs --- Invasive, work with intention of movement
• Spike trains, field potentials, ECoG
• Very specific, potentially better performance
BCIs --- Noninvasive, subjects must learn how to control their
brain activity
• EEG
• Very small bandwidth
Computational NeuroScience
Integration of probabilistic models of information processing with
the neurophysiological reality of brain anatomy, physiology and
purpose.
Need to abstract the details of the “wetware” and ask what is the
purpose of the function. Then quantify it in mathematical terms.
Difficult but very promising. One issue is that biological evolution is
a legacy system!
BMI research is an example of a computational neuroscience
approach.
How to put it together?
NeoCortical Brain Areas Related to Movement
Posterior Parietal (PP) –
Visual to motor
transformation
Premotor (PM) and Dorsal
Premotor (PMD) Planning and guidance
(visual inputs)
Primary Motor (M1) –
Initiates muscle contraction
Ensemble Correlations – Local in Time – are Averaged with
Global Models
Computational Models of Neural Intent
Two different levels of neurophysiology realism
Black Box models – no realism, a functional relation between
input and desired response
Generative Models – minimal realism, state-space models
using neuroscience elements
Signal Processing Approaches with Black
Box Modeling
Accessing two types of signals (cortical activity and behavior) leads us to a
general class of I/O models.
Data for these models are rate codes obtained by binning spikes in 100
msec windows (see the sketch after this list).
Optimal FIR Filter – linear, feedforward
TDNN – nonlinear, feedforward
Multiple FIR filters – mixture of experts
RMLP – nonlinear, dynamic
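Below is a minimal sketch (not from the original slides) of how spike times might be binned into 100 ms rate codes and time-embedded into the input vectors that the black-box models above consume; the function names and array shapes are illustrative assumptions.

import numpy as np

def bin_spikes(spike_times_per_neuron, bin_width=0.1, duration=None):
    """spike_times_per_neuron: list of arrays of spike times (s), one per neuron.
    Returns an (n_bins, n_neurons) matrix of spike counts per 100 ms bin."""
    if duration is None:
        duration = max(times.max() for times in spike_times_per_neuron if len(times))
    edges = np.arange(0.0, duration + bin_width, bin_width)
    return np.column_stack([np.histogram(times, bins=edges)[0]
                            for times in spike_times_per_neuron])

def embed(counts, L=10):
    """Stack the current and L-1 previous bins into one input row per time instance,
    grouped by neuron with the most recent lag first (matching x(n) in the next slide)."""
    n_bins, n_neurons = counts.shape
    rows = [counts[n - L + 1:n + 1][::-1].T.reshape(-1) for n in range(L - 1, n_bins)]
    return np.asarray(rows)   # shape: (n_bins - L + 1, L * n_neurons)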
Linear Model (Wiener-Hopf solution)
Consider a set of spike counts from M neurons and a hand position vector d ∈ R^C (C
is the output dimension, C = 2 or 3). The spike count of each neuron is
embedded by an L-tap discrete time-delay line. The input vector for the
linear model at time instance n is then composed as
x(n) = [x1(n), x1(n-1), …, x1(n-L+1), x2(n), …, xM(n-L+1)]^T, x ∈ R^(LM),
where xi(n-j) denotes the spike count of neuron i at time instance n-j.
A linear model estimating hand position at time instance n from the embedded spike
counts can be described as
y^c(n) = Σ_{i=1}^{M} Σ_{j=0}^{L-1} x_i(n-j) w^c_{ji} + b^c
where y^c is the c-coordinate of the hand position estimated by the model, w^c_{ji} is a
weight on the connection from x_i(n-j) to y^c, and b^c is a bias for the c-coordinate.
Linear Model (Wiener-Hopf solution)
In matrix form we can rewrite the previous equation as
y = W^T x
where y is a C-dimensional output vector and W is a weight matrix of
dimension (LM+1) × C. Each column of W consists of
[w^c_{10}, w^c_{11}, w^c_{12}, …, w^c_{1,L-1}, w^c_{20}, w^c_{21}, …, w^c_{M0}, …, w^c_{M,L-1}, b^c]^T.
[Diagram: each neuronal bin count x1(n) … xM(n) feeds an L-tap delay line (z^-1 elements); the delayed samples are weighted and summed to form the outputs yx(n), yy(n), yz(n).]
Linear Model (Wiener-Hopf solution)
For the MIMO case, the weight matrix in the Wiener filter system is estimated
by
W_Wiener = R^{-1} P
R is the correlation matrix of the neural spike inputs, with dimension (LM) × (LM):
R = [ r11  r12  …  r1M
      r21  r22  …  r2M
      …    …    …   …
      rM1  rM2  …  rMM ]
P is the (LM) × C cross-correlation matrix between the neuronal bin counts and
hand position:
P = [ p11  …  p1C
      p21  …  p2C
      …    …   …
      pM1  …  pMC ]
where rij is the L × L cross-correlation matrix between neurons i and j (i ≠ j),
rii is the L × L autocorrelation matrix of neuron i, and pic is the cross-correlation
vector between neuron i and the c-coordinate of hand position. The estimated weights
W_Wiener are optimal under the assumption that the error is drawn from a white
Gaussian distribution and the data are stationary.
Linear Model (Wiener-Hopf solution)
The predictor W_Wiener minimizes the mean square error (MSE) cost function
J = E[|e|^2],  e = d − y
Each sub-block matrix rij can be further decomposed as
rij = [ rij(0)    rij(1)    …  rij(L−1)
        rij(−1)   rij(0)    …  rij(L−2)
        …         …         …  …
        rij(1−L)  rij(2−L)  …  rij(0) ]
where rij(τ) represents the correlation between neurons i and j at time lag τ.
Assuming that the random process xi(k) is ergodic for all i, we can use the
time-average operator to estimate the correlation function. In this case, the
estimate of the correlation between two neurons, rij(m−k), is obtained by
rij(m−k) = E[xi(m) xj(k)] ≈ (1/(N−1)) Σ_{n=1}^{N} xi(n+m) xj(n+k)
Linear Model (Wiener-Hopf solution)
The cross-correlation vector pic can be decomposed and estimated in the same way,
substituting xj by the desired signal cj:
pij(m−k) = E[xi(m) cj(k)] ≈ (1/(N−1)) Σ_{n=1}^{N} xi(n+m) cj(n+k)
From these equations it can be seen that rij(m−k) is equal to rji(k−m). Since these two
correlation estimates are positioned on opposite sides of the diagonal entries of R,
the equality leads to a symmetric R.
The symmetric matrix R can then be inverted effectively using the Cholesky
factorization. This factorization reduces the computational complexity for the inverse of
R from O(N^3) using Gaussian elimination to O(N^2), where N is the number of
parameters.
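As a rough illustration (not the authors' code), the closed-form solution above can be computed directly from the embedded data matrix; the small ridge term and all names are assumptions.

import numpy as np

def wiener_weights(X, D, ridge=1e-3):
    """X: (n_samples, L*M) embedded spike counts; D: (n_samples, C) hand positions.
    Returns W of shape (L*M + 1, C), including a bias row."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # append a bias column
    R = Xb.T @ Xb / Xb.shape[0]                      # input correlation matrix
    P = Xb.T @ D / Xb.shape[0]                       # input/desired cross-correlation
    # R is symmetric positive (semi)definite, so a Cholesky-based solve applies
    W = np.linalg.solve(R + ridge * np.eye(R.shape[0]), P)
    return W

# Predicted hand position: y = Xb @ W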
Optimal Linear Model
Normalized LMS with weight decay is a simple starting point:
w(n+1) = w(n) + (η / (γ + ||x(n)||^2)) e(n) x(n)
Four multiplies, one divide and two adds per weight update.
Ten-tap embedding with 105 neurons: a 1-D topology contains 1,050
parameters (3,150 for 3-D).
Alternatively, the Wiener solution
w = (R + λI)^{-1} p
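A small sketch of the normalized LMS update with weight decay for one output coordinate; the step size, regularizer, and decay factor are illustrative assumptions rather than the values used in the slides.

import numpy as np

def nlms_weight_decay(X, d, eta=0.1, gamma=1e-4, decay=1e-5):
    """X: (n_samples, p) embedded inputs; d: (n_samples,) one hand coordinate."""
    w = np.zeros(X.shape[1])
    for x_n, d_n in zip(X, d):
        e_n = d_n - w @ x_n                                   # instantaneous error
        w = (1.0 - decay) * w + eta / (x_n @ x_n + gamma) * e_n * x_n
    return w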
Time-Delay Neural Network (TDNN)
The first layer is a bank of linear
filters followed by a nonlinearity.
The number of delays spans 1 second.
y(n) = Σ w2 f(Σ w1 x(n))
Trained with backpropagation.
The topology contains a ten-tap
embedding and five hidden
PEs – 5,255 weights (1-D).
Multiple Switching Local Models
Multiple adaptive filters that compete to win the modeling of a signal
segment.
Structure is trained all together with normalized LMS/weight decay
Needs to be adapted for input-output modeling.
We selected 10 FIR experts of order 10 (105 input channels)
Recurrent Multilayer Perceptron (RMLP) –
Nonlinear “Black Box”
Spatially recurrent dynamical
systems
Memory is created by feeding
back the states of the hidden
PEs.
Feedback allows for continuous
representations on multiple
timescales.
If unfolded into a TDNN it can be
shown to be a universal mapper
in Rn
Trained with backpropagation
through time
y1(t) = f(W1 x(t) + Wf y1(t−1) + b1)
y2(t) = W2 y1(t) + b2
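A minimal forward-pass sketch of the RMLP equations above, assuming tanh hidden units and column-vector shapes; training by backpropagation through time is not shown.

import numpy as np

def rmlp_forward(X, W1, Wf, b1, W2, b2):
    """X: (T, n_inputs); returns (T, n_outputs) RMLP outputs."""
    y1 = np.zeros_like(b1, dtype=float)        # hidden state fed back through Wf
    outputs = []
    for x_t in X:
        y1 = np.tanh(W1 @ x_t + Wf @ y1 + b1)  # first (recurrent, nonlinear) layer
        outputs.append(W2 @ y1 + b2)           # second (linear) output layer
    return np.array(outputs)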
Motor Tasks Performed
Task 1 and Task 2
[Figure: hand trajectories for the two motor tasks.]
Data
• 2 Owl monkeys – Belle, Carmen
• 2 Rhesus monkeys – Aurora, Ivy
• 54-192 sorted cells
• Cortices sampled: PP, M1, PMd, S1, SMA
• Neuronal activity rate and behavior are time-synchronized and downsampled to 10 Hz
Model Building Techniques
Train the adaptive system with neuronal firing rates
(100 msec) as the input and hand position as the
desired signal.
Training - 20,000 samples (~33 minutes of neuronal
firing)
Freeze weights and present novel neuronal data.
Testing - 3,000 samples – (5 minutes of neuronal
firing)
Results (Belle)
Based on 5 minutes of test data, computed over 4 sec windows (training on 30 minutes).

Model         Signal-to-error ratio (dB)    Correlation coefficient
              average      max              average      max
LMS           0.8706       7.5097           0.6373       0.9528
Kalman        0.8987       8.8942           0.6137       0.9442
TDNN          1.1270       3.6090           0.4723       0.8525
Local Linear  1.4489       23.0830          0.7443       0.9748
RNN           1.6101       32.3934          0.6483       0.9852
Physiologic Interpretation
When the fitting error is above chance, a sensitivity analysis can
be performed by computing the Jacobian of the output vector
with respect to each neuronal input i
This calculation indicates which inputs (neurons) are most
important for modulating the output/trajectory of the model.
Computing Sensitivities Through the
Models
Identify the neurons that affect the output the most.
Feedforward Linear Eq.
y (t )  Wx(t )
Feedforward RMLP Eqs.
y1 (t )  f (W1x(t )  Wf y1 (t 1)  b1 )
y 2 (t )  W2 y1 (t )  b2
General form of Linear
Sensitivity
y (t )
W
x(t )
General form of RMLP
Sensitivity
 

y 2 (t )
T
 W2 Dt   WTf Dt i W1T
x(t  )
 i 1

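The sketch below (assumed, written with a column-vector convention so the transposes in the formula are absorbed) computes the RMLP output sensitivity to an input τ steps in the past, with tanh hidden units whose derivatives supply the diagonal matrices D.

import numpy as np

def rmlp_sensitivity(X, W1, Wf, b1, W2, t, tau):
    """Jacobian of y2(t) with respect to x(t - tau) for the RMLP defined above."""
    y1 = np.zeros_like(b1, dtype=float)
    D = []                                     # diagonal derivative matrices D_t
    for x_t in X[:t + 1]:
        y1 = np.tanh(W1 @ x_t + Wf @ y1 + b1)
        D.append(np.diag(1.0 - y1 ** 2))       # derivative of tanh at each step
    J = W2 @ D[t]
    for i in range(1, tau + 1):                # propagate back through the feedback loop
        J = J @ Wf @ D[t - i]
    return J @ W1                              # rows: outputs, columns: inputs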
Data Analysis : The Effect of Sensitive Neurons on Performance
[Figure (Primate 1, Session 1): reconstructed test trajectories using the 10 highest-sensitivity neurons, 84 intermediate-sensitivity neurons, the 10 lowest-sensitivity neurons, and all neurons, together with the ranked sensitivity per neuron and the cumulative probability of the 3D error radius (mm) over the movements (hits) of the test trajectory.]
The decay trend of the ranked sensitivities appears in all animals and behavioral paradigms.
Directional Tuning vs. Sensitivity of
ranked cells
Tuning
Sensitivity
Significance: Sensitivity analysis through trained models automatically
delivers deeply tuned cells that span the space.
Reaching Movement Segmentation
[Figure: X, Y, Z hand trajectories of a reach segmented into rest-to-food, food-to-mouth, and mouth-to-rest movements.]
How does each cortical area contribute to the reconstruction of this movement?
Cortical Contributions Belle Day 2
Area 1 = PP, Area 2 = M1, Area 3 = PMd, Area 4 = M1 (right).
[Figure: reconstructed trajectories for every combination of cortical areas – single areas (1, 2, 3, 4), pairs (12, 13, 14, 23, 24, 34), triples (123, 124, 134, 234), and all areas (1234).]
Train 15 separate RMLPs with every combination of cortical input.
Is there enough information in spike
trains for modeling movement?
Analysis is based on the time embedded model
Correlation with desired is based on a linear filter output for
each neuron
Utilize a non-stationary tracking algorithm
Parameters are updated by LMS
Build a spatial filter
Adaptive in real time
Sparse structure based on regularization enables channel
selection
Architecture
[Diagram: each neuronal input x1(n) … xM(n) passes through its own L-tap FIR filter (taps w_i1 … w_iL, adapted by LMS) to produce y1(n) … yM(n); the filter outputs are combined by spatial coefficients c1 … cM, adapted by on-line LAR, to form the estimate d̂(n).]
(Kim et al., MLSP, 2004)
Training Algorithms
The tap weights for every time lag are updated by LMS:
w_ij(n+1) = w_ij(n) + 2η e(n) x_ij(n)
Then the spatial filter coefficients are obtained by an on-line version of
least angle regression (LAR) (Efron et al., 2004):
start with all coefficients at zero and residual r = y; find the input x_j with the
largest correlation |x_j^T r|; increase its coefficient until another input x_k
satisfies |x_k^T r| = |x_j^T r|; then adjust both coefficients along their joint
least-squares direction, adding further inputs as their correlations catch up.
Application to BMI Data – Tracking
Performance
Application to BMI Data – Neuronal
Subset Selection
[Figure: hand trajectory (z) and the neuronal channel indices selected by the algorithm during the early and late parts of the record.]
Generative Models for BMIs
Use partial information about the physiological system, normally
in the form of states.
They can be either applied to binned data or to spike trains
directly.
Here we will only cover the spike train implementations.
Difficulty of spike train analysis:
Spike trains are point processes, i.e., all the information is contained
in the timing of events, not in the amplitude of the signals!
Goal
Build an adaptive signal processing framework for
BMI decoding in the spike domain.
Features of Spike domain analysis
Binning window size is not a concern
Preserve the randomness of the neuron behavior.
Provide more understanding of neuron physiology (tuning) and
interactions at the cell assembly level
Infer kinematics online
Deal with nonstationarity
More computation with millisecond time resolution
Recursive Bayesian Approach
[Diagram: state-space formulation. Time-series (system) model: X_t = F_t(X_{t−1}, v_{t−1}); continuous observation model: Z_t = H_t(X_t, n_t). Prediction and updating stages yield P(state | observation).]
Recursive Bayesian approach
State space representation:
x_{t+1} = f(x_t) + v_t
z_t = h_t(u_t, x_t) + n_t
First equation (system model) defines a first order Markov process.
Second equation (observation model) defines the likelihood of the
observations p(zt|xt) . The problem is completely defined by the
prior distribution p(x0).
Although the posterior distribution p(x0:t|u1:t,z1:t) constitutes the
complete solution, the filtering density p(xt|u1:t, z1:t) is normally
used for on-line problems.
The general solution methodology is to integrate over the unknown
variables (marginalization).
Recursive Bayesian approach
There are two stages to update the filtering density:
Prediction (Chapman-Kolmogorov):
p(x_t | u_{1:t−1}, z_{1:t−1}) = ∫ p(x_t | x_{t−1}) p(x_{t−1} | u_{1:t−1}, z_{1:t−1}) dx_{t−1}
The system model p(x_t | x_{t−1}) propagates the posterior density into the future.
Update:
p(x_t | u_{1:t}, z_{1:t}) = p(z_t | x_t, u_t) p(x_t | u_{1:t−1}, z_{1:t−1}) / p(z_t | u_t, z_{1:t−1})
Bayes' rule is used to update the filtering density. The following equations
are needed in the solution:
p(x_t | x_{t−1}) = ∫ p(x_t | v_{t−1}, x_{t−1}) p(v_{t−1} | x_{t−1}) dv_{t−1} = ∫ δ(x_t − f(x_{t−1}) − v_{t−1}) p(v_{t−1}) dv_{t−1}
p(z_t | x_t, u_t) = ∫ δ(z_t − h(u_t, x_t) − n_t) p(n_t) dn_t
p(z_t | z_{1:t−1}, u_{1:t}) = ∫ p(z_t | x_t, u_t) p(x_t | z_{1:t−1}, u_{1:t−1}) dx_t
Kalman filter for BMI decoding
For Gaussian noises and linear prediction and observation models, there
is an analytic solution called the Kalman Filter.
[Diagram: the kinematic state evolves through a linear prediction model; a linear neuron tuning function maps the state to the continuous observation (firing rate); with Gaussian noise, the posterior P(state | observation) is updated in closed form.]
[Wu et al. 2006]
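A generic linear-Gaussian Kalman filter sketch for this setting (assumed matrices, not Wu et al.'s exact implementation): the state is the kinematics and the observation is the vector of binned firing rates.

import numpy as np

def kalman_decode(Z, A, H, Q, R, x0, P0):
    """Z: (T, n_neurons) binned firing rates; returns (T, n_states) state estimates."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in Z:
        # Prediction step (linear kinematic model)
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step (linear tuning model, Gaussian noise)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(P.shape[0]) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)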
Particle Filter for BMI decoding
In general the integrals need to be approximated by sums using
Monte Carlo integration with a set of samples drawn from the
posterior distribution of the model parameters.
[Diagram: the kinematic state evolves through a linear prediction model; an exponential neuron tuning function maps the state to the continuous observation (firing rate); the non-Gaussian posterior P(state | observation) is updated by Monte Carlo (particle) methods.]
[Brockwell et al. 2004]
State estimation framework for BMI decoding in
spike domain
[Figure: kinematic state (velocity), neural tuning function, and the multi-spike-train observation plotted against time.]
Decoding model:
x_k = F_k(x_{k−1}, v_{k−1})   (kinematic dynamic model)
z_k = H_k(x_k, n_k)   (tuning function)
Key Idea: work with the probability of spike firing which is a
continuous random variable
Adaptive algorithm for point processes
[Diagram: the kinematic state evolves through a linear prediction model; a nonlinear neuron tuning function with a Poisson model generates the point-process observation (spike train); a Gaussian approximation of P(state | observation) is updated at each step.]
[Brown et al. 2001]
Monte Carlo Sequential estimation for point process
[Diagram: the kinematic state evolves through a nonlinear prediction model; a nonlinear neuron tuning function generates the point-process observation (spike train); the non-Gaussian posterior P(state | observation) is estimated sequentially as a full PDF.]
[Wang et al. 2006]
Monte Carlo sequential estimation framework for
BMI decoding in spike domain
STEP 1. Preprocessing
1. Generate spike trains from the stored spike times with a 10 ms interval (99.62% binary train).
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector x to reconstruct:
X = [position velocity acceleration]'
(more information; the instantaneous state avoids error accumulation; less computation)
STEP 2- Neural tuning analysis
Kinematics are encoded (tuning) into the neural spike trains.
Metric: tuning depth – how differently does a neuron fire across
directions? D = (max − min)/std of the firing rate.
Preferred direction: circular mean, arg(Σ_N r_N e^{iθ_N}).
[Polar plot: an example of a tuned neuron – neuron 72, tuning depth 1.]
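A small sketch (with an assumed discretization of direction into bins) of the tuning-depth metric D = (max − min)/std of the direction-conditioned firing rate.

import numpy as np

def tuning_depth(spike_counts, angles, n_bins=16):
    """spike_counts: (T,) counts for one neuron; angles: (T,) movement direction (rad)."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(angles, edges) - 1
    # mean firing rate per direction bin (skip empty bins)
    rates = np.array([spike_counts[idx == b].mean()
                      for b in range(n_bins) if np.any(idx == b)])
    return (rates.max() - rates.min()) / spike_counts.std()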
Step 2- Information Theoretic Metric of Tuning
Mutual information between the neural spikes and the movement direction angle:
I(spike; angle) = Σ_angle p(angle) Σ_{spike∈{0,1}} p(spike | angle) log2( p(spike | angle) / p(spike) )
with p(spike = 1 | angle) = p(spike = 1, angle) / p(angle).
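A sketch of the information-theoretic tuning metric above (assuming the direction angle is discretized into bins): the mutual information between a binary spike event and the binned movement direction.

import numpy as np

def spike_direction_mi(spikes, angles, n_bins=16):
    """spikes: (T,) 0/1 spike indicators; angles: (T,) movement direction (rad)."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(angles, edges) - 1
    p_spike = np.array([1.0 - spikes.mean(), spikes.mean()])   # p(spike=0), p(spike=1)
    mi = 0.0
    for b in range(n_bins):
        mask = idx == b
        if not np.any(mask):
            continue
        p_angle = mask.mean()
        p1 = spikes[mask].mean()                               # p(spike=1 | angle bin)
        for s, p_s_given_a in ((0, 1.0 - p1), (1, p1)):
            if p_s_given_a > 0 and p_spike[s] > 0:
                mi += p_angle * p_s_given_a * np.log2(p_s_given_a / p_spike[s])
    return mi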
Step 2- Information theoretic Tuning depths for 3
kinds of kinematics (log axis)
Step 2- Tuning Function Estimation
Neural firing model: velocity → linear filter k → nonlinearity f → Poisson model → spikes
λ_t = f(k · v_t)
spike_t ~ Poisson(λ_t)
Assumption: the generation of the spikes depends only on the kinematic
vector we choose.
Step 2- Linear Filter Estimation
Spike-Triggered Average (STA), with a geometric interpretation:
k = (E[v v^T] + εI)^{-1} E_{v|spike}[v]
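A minimal sketch (assumed regularization ε) of the spike-triggered-average estimate of the linear filter k from the formula above.

import numpy as np

def sta_filter(V, spikes, eps=1e-3):
    """V: (T, dim) kinematic vectors; spikes: (T,) 0/1 spike indicators."""
    C = V.T @ V / V.shape[0]                   # E[v v^T]
    sta = V[spikes > 0].mean(axis=0)           # E[v | spike]
    return np.linalg.solve(C + eps * np.eye(C.shape[0]), sta)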
[Scatter plot: neuron 72 – kinematic vectors (Vp) and spike-triggered kinematic vectors (VpS) projected onto the first two principal components.]
Step 2- Nonlinear f estimation
Step 2- Diversity of neural nonlinear properties
Ref: Paradoxical cold
[Hensel et al. 1959]
Step 2- Estimated firing probability and
generated spikes
Step 3: Sequential Estimation Algorithm for
Point Process Filtering
Consider the neuron as an inhomogeneous Poisson point process with conditional intensity
λ(t_k) = exp(θ_k · v_k)
Observing N(t) spikes in an interval, the conditional intensity of the spike model is
λ(t | x(t), θ(t), H(t)) = lim_{Δt→0} Pr(N(t+Δt) − N(t) = 1 | x(t), θ(t), H(t)) / Δt
The probability of observing an event in Δt is
P(ΔN_k | x_k, H_k) = (λ(t_k | x_k, H_k) Δt)^{ΔN_k} exp(−λ(t_k | x_k, H_k) Δt)
and the one-step prediction density (Chapman-Kolmogorov) is
p(x_k | H_k) = ∫ p(x_k | x_{k−1}, H_k) p(x_{k−1} | ΔN_{k−1}, H_{k−1}) dx_{k−1}
The posterior of the state vector, given an observation ΔN_k, is
p(x_k | ΔN_k, H_k) = P(ΔN_k | x_k, H_k) p(x_k | H_k) / p(ΔN_k | H_k)
Step 3: Sequential Estimation Algorithm for
Point Process Filtering
Monte Carlo methods are used to estimate the integrals. Let {x_{0:k}^i, w_k^i}_{i=1}^{N_S}
represent a random measure on the posterior density, and let q(x_{0:k} | ΔN_{1:k}) be the
proposal density. The posterior density can then be approximated by
p(x_{0:t} | ΔN_{1:t}) ≈ Σ_{i=1}^{N_S} w_t^i k(x_{0:t} − x_{0:t}^i, σ)
Generating samples from q(x_{0:k} | ΔN_{1:k}) and using the principle of importance
sampling, the weights satisfy
w_k^i ∝ p(x_{0:k}^i | ΔN_{1:k}) / q(x_{0:k}^i | ΔN_{1:k}),
w_k^i = w_{k−1}^i · p(ΔN_k | x_k^i) p(x_k^i | x_{k−1}^i) / q(x_k^i | x_{k−1}^i, ΔN_k)
By MLE we can find the maximum, or use direct estimation with kernels
of the mean and variance:
x̃_k = Σ_{i=1}^{N_S} p(ΔN_k | x_k^i) x_k^i
V_k = Σ_{i=1}^{N_S} p(ΔN_k | x_k^i) (σ + (x_k^i − x̃_k)(x_k^i − x̃_k)^T)
[Figure: posterior density of velocity at time index 45.092 s, with the desired velocity and the estimates from sequential estimation (collapse to expectation), sequential estimation (MLE), and adaptive filtering.]
Step 3: Causality concerns
[Figure: velocity and spike traces versus time, and the kinematic lag under examination.]
Mutual information between the spike train and the lagged kinematics:
I(spike; KX)(lag) = Σ_X p(KX(lag)) Σ_{spike∈{0,1}} p(spike | KX(lag)) log2( p(spike | KX(lag)) / p(spike) )
Step 3: Information Estimated Delays
[Figure 3-14: Mutual information I(spk; KX) as a function of time delay (0–500 ms) for 5 neurons (80, 72, 99, 108, 77).]
For 185 neurons, the average delay is 220.108 ms.
Step 4:
Monte Carlo sequential kinematics estimation
Neural tuning function: λ_t^i = f(k · X_t^i)
Kinematic state prediction: X_t^i = F_t X_{t−1}^i + v_{t−1}^i
Observation: spike trains N_t^(j) (non-Gaussian)
Posterior approximation: p(x_{0:t} | N_{1:t}^(j)) ≈ Σ_{i=1}^{N} w_t^i k(x_{0:t} − x_{0:t}^i)
Filtering density: p(x_k | N_{1:k}) ≈ Σ_{i=1}^{N} W_k^i k(x_k − x_k^i)
Weight update: w_t^i = w_{t−1}^i p(N_t^(j) | λ_t^i)
Reconstruct the kinematics from neuron spike
trains
[Figure: desired versus reconstructed kinematics on the test segment. Correlation coefficients per panel – Px: cc(expectation) = 0.80243, cc(MLE) = 0.67264; Py: cc(expectation) = 0.97445, cc(MLE) = 0.95376; Vx: cc(expectation) = 0.81539, cc(MLE) = 0.8151; Vy: cc(expectation) = 0.91319, cc(MLE) = 0.91162; Ax: cc(expectation) = 0.015071, cc(MLE) = 0.040027; Ay: cc(expectation) = 0.7002, cc(MLE) = 0.69188.]
Results comparison
Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions

CC            Position            Velocity            Acceleration
              x        y          x        y          x        y
Expectation   0.8161   0.8730     0.7856   0.8133     0.5066   0.4851
MLE           0.7750   0.8512     0.7707   0.7901     0.4795   0.4775

Table 3-3 Correlation Coefficient Evaluated by the Sliding Window

CC            Position                         Velocity                         Acceleration
              x               y                x               y                x               y
Expectation   0.8401±0.0738   0.8945±0.0477    0.7944±0.0578   0.8142±0.0658    0.5256±0.0658   0.4460±0.1495
MLE           0.7984±0.0963   0.8721±0.0675    0.7805±0.0491   0.7918±0.0710    0.4950±0.0430   0.4471±0.1399

[Sanchez, 2004]
Conclusion
Our results and those from other laboratories show it is possible to
extract intent of movement for trajectories from multielectrode array
data.
The current results are very promising, but the experimental setups have limited
difficulty, and the performance seems to have reached a ceiling at an
uncomfortable CC < 0.9.
Recently, spike-based methods have been developed in the hope of
improving performance, but the difficulties in these models are many.
Experimental paradigms to move the field from the present level need
to address issues of:
Training (no desired response in paraplegic)
How to cope with coarse sampling of the neural population
How to include more neurophysiology knowledge in the design