A Dynamic Analog Concurrently-Processed Adaptive Chip
Malcolm Stagg
Grade 11
Purpose:
To design a reconfigurable analog neural network on a chip
A chip that can learn over time
For any application where neural networks are used
Improvements over previous designs:
High density
High routability
On-chip learning
Multiple learning algorithms
Neuron and synapse circuits are designed for TSMC 0.35µm CMOS (Complementary Metal Oxide Semiconductor) implementation
Dense layout of transistors for analog arithmetic circuits
High Density
Neurons and synapses are as small as possible without compromising performance
Supports both Backpropagation and Hebbian learning on each cell
Neural networks:
Have been used for years in many applications, from business and finance to science and engineering
Ideally implemented as arrays of electronic cells
Many learning algorithms have been designed
The first electronic neural network was built in 1951 by Marvin Minsky (called "SNARC")
Carver Mead pioneered chip implementation of neural cells, designing a retina-like neural circuit
Intel created a popular analog neural network chip in 1989 (called "ETANN")
Backpropagation:
Feedback-based learning algorithm (supervised learning)
Requires calculation of the correct output for an input pattern
Will learn to reproduce the correct output over time
Hebbian:
Non-feedback-based learning algorithm (unsupervised learning)
Calculates synapse weights based on correlation between input patterns
Will learn to identify and classify input patterns
Hopfield:
Advanced memory recall algorithm, using Hebbian learning
Neuron Block Diagram
[Figure: neuron block diagram showing the forward path (in → out) and the backpropagation/Hebbian feedback paths (BPin, BPout, Hebb, +/− BP signals)]
Neuron Cell Diagram
The neuron design:
Supports Backpropagation and Hebbian learning
Uses an SRAM cell to enable/disable algorithm-dependent circuits
Uses a tanh sigmoid circuit for forward propagation (range: [-1, 1])
Derivative as sech² for backward propagation (range: [0, 1])
Synapse Block Diagram
[Figure: synapse block diagram showing the weighted forward path (in → out), the weight-update block with its parameters, and the backpropagation/Hebbian paths (BPin, BPout, Hebb, BP signals)]
Synapse Cell Diagram
The synapse design:
Supports Backpropagation and Hebbian learning
Uses an SRAM cell to enable/disable algorithm-dependent circuits
Uses the Gilbert multiplier cell for forward and backward propagation
All inputs are ensured to be symmetrical (c+x, c-x)
Differential pair-based circuits are used
Input range is kept small for good linearity
High routability
Multiple routing pathways between neurons and synapses
Using programmable wires and connections
Enables use of advanced learning algorithms to evolve/change the layout of the network over time
This allows the network to become denser and more efficient, while maintaining a low synapse count
Programmable Wire
[Figure: an SRAM cell driving an analog switch]
Programmable Connection
[Figure: an SRAM cell driving an analog switch]
Routing Cell Diagram
[Figure: routing cell connecting one neuron to three synapses]
Circuit Design
Created and modified analog and mixed-signal circuits for each learning/propagation task
Most circuits are implemented using a small number of MOSFET transistors
Transistors are relatively large (5µm length) and interdigitated for good matching
Simulations have been completed for all cells
CMOS layout has been completed for some of the cells
Current Summation
Tanh and Sech² (Sigmoid and Derivative)
Differential Pairs (for multiplier input)
Gilbert Multiplier Cell
Layout:
Completed for tanh/sech²
First a stick diagram was planned
Initial layout planned in Microsoft Visio
Drawn in Cadence Virtuoso using unmatched transistors
Drawn in Cadence Virtuoso using matched (interdigitated) transistors
Planned for the other cells, but not yet complete
Initial Virtuoso Layout of tanh (no matching)
Virtuoso Layout of tanh with interdigitated matching
Initial Visio layout of tanh (no matching)
Simulation Graphs
Tanh Sigmoid
Sigmoid:
Input Common Mode Range: 0.5V – 1.5V
Good accuracy compared to the ideal tanh function
Simulation Graphs
Sech² (Sigmoid Derivative)
Sigmoid Derivative:
Input Common Mode Range: 0.5V – 1.5V
Good accuracy compared to the ideal sech² function
Simulation Graphs
Multiplier
Multiplier:
Input Common Mode Range: 2V – 3V
Symmetrical inputs around the center point
Good linearity for input swing of ±150mV
Performance is reduced as the input range is extended
A pre-processing differential pair is required
Results:
A neural network is simulated in C++
OCR problem with 5 training sets of randomly ordered numerical characters
10x10 pixels
5% noise
Position offset of ±1 pixel horizontally and vertically
Comparing:
100%-connected network
20%-connected network
Simulation of a dynamically re-routed network by removing 10% of the synapses over 1000 cycles
Results
0 to 1000 training cycles
[Chart: "Performance of Neural Learning Methods", MSE (0–100) vs. training cycle (0–1000), each averaged over 10 trials; series: fully-connected, dynamically re-routed, and 20% partially-connected networks]
Results
200 to 1000 training cycles
[Chart: "Performance of Neural Learning Methods", MSE (0–15) vs. cycle (200–1000), each averaged over 10 trials; series: fully-connected, dynamically re-routed, and 20% partially-connected networks]
[Chart: "Performance of Neural Learning Methods, Linear Prediction", MSE (0–15) vs. cycle (200–1000); same three series]
Results
800 to 1000 training cycles
[Chart: "Performance of Neural Learning Methods", MSE (0–2) vs. cycle (800–1000), each averaged over 10 trials; series: fully-connected, dynamically re-routed, and 20% partially-connected networks]
[Chart: "Performance of Neural Learning Methods, Linear Prediction", MSE (0–2) vs. cycle (800–1000); same three series]
Conclusions:
The dynamic routing algorithm is the ideal method of optimizing performance in a hardware neural network
Reducing the number of synapses increases density
The cell circuits perform very well with system-imposed restrictions on input ranges
Restrictions increase accuracy and linearity
The arrangement of routing cells between neurons and synapses allows the routing algorithm to perform very efficiently
Special Thanks To:
Victoria Stagg
My mother, for all of her help and support
Andrew Stagg
My brother, for all of his help and support
Dr. Jim Haslett, University of Calgary
For use of the ATIPS laboratory to run Cadence Virtuoso
John Carney, Cadence Design
For the demo license of Cadence OrCAD Layout
Dr. Vance Tyree, MOSIS Corporation
For agreeing to accept a fabrication proposal under the MOSIS education program
David Wells, Auton Engineering, Ltd.
For printing my trifold