Ghent University
On Implementing
Reservoir Computing
Benjamin Schrauwen
Electronics and Information Systems Department
Ghent University – Belgium
December 9, 2006 – NIPS 2006
Outline
• Introduction
• Software: Reservoir Computing Toolbox
• Hardware: Digital spiking neurons
• Future hardware
• Conclusions
Introduction
• LSM, ESN, BPDC, SDN, … are all the same concept, just with different node types and topologies: Reservoir Computing
• How to evaluate RC performance across node types?
• Open-source MATLAB toolbox for reservoir computing research
• A box of tools + examples + a large-scale explorer
• Because all techniques fit in a single flow, we can focus on the specific influence of:
  • Topology
  • Node type
  • Reservoir adaptation
Reservoir Computing Toolbox
• Generic way to construct topologies and weight scaling
• Various node types supported: linear, TLG, tanh, fermi, spiking (LIF, synapse models, dynamic synapses)
• Event-based simulator for spiking neurons: ESSpiNN
• Supports batching for large datasets
• Currently focused on off-line training (on-line training under construction)
• Resampling and post-processing pipeline
• Linear, ridge-regression, and non-linear readouts (a minimal sketch of this flow follows after this list)
• Cross-validation, grid search
• Reservoir adaptation
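As a minimal, hedged illustration of the flow above (generic reservoir-computing code, not the actual RCT API), the sketch below builds a tanh reservoir, drives it with an input signal u, and trains an off-line ridge-regression readout against a target y; u and y are assumed to be given 1 x T row vectors.

% Minimal illustrative sketch (not the actual RCT API):
% tanh reservoir driven by input u, ridge-regression readout trained on y.
N = 200;                                  % reservoir size
Win = 0.1 * randn(N, 1);                  % fixed random input weights
W = randn(N, N);
W = 0.9 * W / max(abs(eig(W)));           % scale to spectral radius 0.9
T = numel(u);
X = zeros(N, T);                          % collected reservoir states
x = zeros(N, 1);
for t = 1:T
    x = tanh(W * x + Win * u(t));         % reservoir update
    X(:, t) = x;
end
lambda = 1e-6;                            % regularization parameter
Wout = (y * X') / (X * X' + lambda * eye(N));   % ridge-regression readout
yhat = Wout * X;                          % readout output

Scaling W to a spectral radius below one is the usual way to keep the reservoir in a useful dynamic regime; the toolbox's generic weight-scaling step plays the same role.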
The RC Toolbox
[Diagram: toolbox flow: input data generation → topology → adaptation → simulation (ESSpiNN / CSIM) → readout pipeline → cross-validation / grid search]
The RC Toolbox: topology
[Diagram: topology construction: connection structure → rewiring → assign weights → scaling]
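A small illustrative sketch of what these four stages amount to (generic code, not the actual RCT API): build a sparse connection structure, rewire a fraction of the connections to random new positions, assign random weights, and scale them to a chosen spectral radius.

% Illustrative topology construction (not the actual RCT API)
N = 100;                                   % number of nodes
p = 0.05;                                  % connection probability
C = rand(N, N) < p;                        % connection structure (sparse mask)
rw = 0.1;                                  % fraction of connections to rewire
move = find(C & (rand(N, N) < rw));        % pick connections to rewire
C(move) = false;
C(randi(numel(C), numel(move), 1)) = true; % reconnect them at random positions
W = C .* randn(N, N);                      % assign random weights
W = 0.9 * W / max(abs(eig(W)));            % scale to the desired spectral radius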
The RC Toolbox: readout
[Diagram: readout pipeline: spatial non-linearity → filtering/mean → spatial/temporal non-linearity → scoring]
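A hedged sketch of how these stages could look for a classification task (all names are assumptions, not the actual RCT API): X holds the N x T reservoir states of one sample, Wout is a trained linear readout with one row per class, and label is the true class index.

% Illustrative post-processing of one sample (assumed names, not the RCT API)
Z = Wout * X;                    % linear readout applied at every time step (C x T)
z = mean(Z, 2);                  % filtering/mean: pool the per-time-step outputs
[~, predicted] = max(z);         % winner-take-all non-linearity over the classes
correct = (predicted == label);  % scoring: compare with the true class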
The RC Toolbox
http://www.elis.UGent.be/rct
Hardware
• Hardware advantages of RC:
  • Sparse/local connectivity is good
  • Random weights are allowed
  • (Mild) node and network chaos can be taken advantage of
  • Weights are fixed or can only change locally with reservoir adaptation
• Various HW implementations possible:
  • Spiking / analog / non-linear
  • Digital / aVLSI / …
Digital spiking neurons
• SNN: mathematically a more complex model than an ANN
• But: better suited to hardware implementation
• No weight multiplications: table look-up
• Filtering can be implemented using shifts and adds (see the sketch below)
• Interconnections are only a single bit, and communication is sparse
• Asynchronous communication is easily implementable
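A minimal sketch of the shift-and-add filtering idea, under assumed names (I is an integer input train and threshold an arbitrary example value, not the actual hardware design): decaying a register by a factor of 1 - 2^-k needs only a right shift and a subtraction, no multiplier.

% Illustrative shift-and-add leaky integration (assumed names and values)
k = 4;                           % shift amount, decay factor 1 - 1/16
threshold = 1000;                % firing threshold (arbitrary example value)
T = numel(I);                    % I: non-negative integer input per time step
spikes = false(1, T);
v = int32(0);                    % membrane register
for t = 1:T
    v = v - bitshift(v, -k) + int32(I(t));  % leaky integration via shift and add
    if v >= threshold
        spikes(t) = true;        % emit a spike (single-bit event)
        v = int32(0);            % reset
    end
end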
Digital spiking neurons
• Hardware can take advantage of parallelism
• But there is an area-speed trade-off: the implementation does not have to be faster than the application requires
• For this trade-off, different implementations with different area-speed profiles are needed
• Possible parallelisms:
  • Network parallelism
  • Neuron/synapse parallelism
  • Arithmetic parallelism
• We implemented:
  • SPPA: network parallel, neuron serial, arithmetic parallel
  • PPSA: network parallel, neuron parallel, arithmetic serial
  • SPSA: network serial or parallel, neuron serial, arithmetic serial
Digital spiking neurons: PPSA
Digital spiking neurons: SPPA
Digital spiking neurons: SPSA
Results
[Figure: memory usage, LUTs, and clock cycles as a function of the number of inputs per neuron, compared for the SPPA, PPSA, and SPSA implementations]
Area-speed trade-off for speech task
• Speech task in hardware
• LSM with 200 neurons
• 12 kHz processing speed
• Real-time requirement

             LUTs     memory    Real-time
SPPA         13812    900 kbit  347
PPSA         13426    58 kbit   205
SPSA 10PE    488      144 kbit  2.2
SPSA 5PE     489      144 kbit  1.1
SPSA 1PE     489      144 kbit  0.23
Digital spiking neurons and RCT
• Topology can be exported from RCT to different HW models
• Exploration in SW → export to HW for deployment
• Basic HW simulation model in RCT
Intermezzo: some science
• Most valuable resource in hardware: long connections
• Impact for RC: the readout is the hardest part
• Solution: only do a partial readout
• What is the performance penalty of this?
Least-squares readout (Moore–Penrose pseudo-inverse):
$Ax \approx b$
$x = \arg\min_x \|Ax - b\|^2$
$x = (A^T A)^{-1} A^T b$
Ridge regression (Tikhonov regularization):
$Ax \approx b$
$x = \arg\min_x \|Ax - b\|^2 + \lambda^2 \|x\|^2$
$x = (A^T A + \lambda^2 I)^{-1} A^T b$
Effective number of parameters: $\sum_i \frac{\sigma_i^2}{\sigma_i^2 + \lambda^2}$
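These formulas translate directly into a few lines of MATLAB. The hedged sketch below also includes the partial-readout idea from above by regressing on a random subset of the reservoir states; X (N x T states) and b (1 x T target) are assumed to be given, and the subset size of 50 is an arbitrary example.

% Illustrative ridge regression on a partial readout (assumed X, b, N)
keep = randperm(N);
keep = keep(1:50);                        % partial readout: 50 of the N neurons
A = X(keep, :)';                          % T x 50 regression matrix
lambda = 1e-2;                            % regularization parameter
x = (A' * A + lambda^2 * eye(50)) \ (A' * b');   % ridge-regression solution
s = svd(A);                               % singular values of A
eff = sum(s.^2 ./ (s.^2 + lambda^2));     % effective number of parameters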
[Figure: word error rate and memory capacity as a function of effective readout / reservoir size, compared with the no-pruning baseline]
Future: parallel event based
• Network communication needs to be minimized
• Best for networks with many local and few global connections
• High speed-up possible due to:
  – Event-based operation
  – Parallelism
  – Hardware implementation
Future: CNN
• Cellular Neural/Non-linear Network (CNN) as reservoir
• Outlook:
  • Very fast, analog non-linear network with only nearest-neighbor connections (128x128)
  • Analog computer: an instruction flow is possible that implements the reservoir and a fully parallel read-out
  • Intrinsically random connections: corrections are needed for deterministic computations on the CNN
  • Parallel image input via a CCD layer
• With Samuel Xavier de Souza and Johan Suykens from KULeuven
• On the ACE16k_v2 chip from AnaFocus
Future: photonic
"Photonics is the science and technology of generating, controlling, and detecting photons, particularly in the visible light and near infra-red spectrum" (Wikipedia.org)
• Currently mainly focused on communication
• Long-standing photonicist dream: photonic computing
• Problems:
  • Feature size is at least on the order of the wavelength (~1 μm)
  • Implementing memory is complex
  • Changing light with light is only possible through a medium: slow
  • Lasers are intrinsically non-linear/chaotic
  • Problems with fabrication variances
Future: photonic
• Possible implementation of a reservoir: photonic crystal
• Semi-crystal fabricated on silicon to affect the path of light
• Creates a stop band where light of a given bandwidth cannot exist
• Light can be bent in any direction
• A single crystal 'flaw' can act as a laser
Future: photonic
• Idea: use photonics to implement a reservoir
• Why:
  • Nodes (lasers) are intrinsically non-linear/chaotic
  • Possibly very fast (ps timescale)
  • Fully parallel readout and linear regression are trivial
  • Random (but fixed) process variation is allowed/desired
• Research project recently started together with Roel Baets and Peter Bienstman from the photonics lab at Ghent University
Future: photonic
[Figure: proposed setup with a LASER and an LCD]
Future: photonic
• Possible applications:
  • Fully optical signal reconstruction in optical communication
  • Optical image processing
  • Very high-speed signal processing
• Questions/problems:
  • Harness laser chaos or use it to our advantage
  • Information in light is carried by multiple physical properties: energy, polarisation, EM field, …
Conclusions
• The reservoir computing concept is very well suited for hardware implementation
• … or rather: much hardware is very well suited to be used as a reservoir