Lecture 14 - School of Computing
Bioinspired Computing
Lecture 14
Alternative Neural Networks
Netta Cohen
Last time
Attractor neural nets:
• Biologically inspired associative memories
• Unsupervised learning
• Working examples and applications
• Pros, Cons & open questions

Today
Other Neural Nets:
• moves away from biorealistic model
• SOM (Competitive) Nets
• Neuroscience applications
• GasNets
• Robotic control
Spatial Codes
Natural neural nets often code similar things close together.
The auditory and visual cortex provide examples.
[Figure: neural material mapped by frequency sensitivity, from low to high frequency (auditory cortex), and by orientation sensitivity, from 0° to 359° (visual cortex).]
Another example: touch receptors in the human
body. "Almost every region of the body is
represented by a corresponding region in both
the primary motor cortex and the somatic
sensory cortex" (Geschwind 1979:106). "The
finger tips of humans have the highest density of
receptors: about 2500 per square cm!" (Kandel
and Jessell 1991:374). This representation is
often dubbed the homunculus (or "little man" in the
brain).
Picture from http://www.dubinweb.com/brain/3.html
Kohonen Nets
In a Kohonen net, a number of input neurons feed a
single lattice of neurons: the input nodes are fully
connected to the lattice, and the output pattern is
produced across the lattice surface.
Large volumes of data are compressed using spatial/
topological relationships within the training set. Thus
the lattice becomes an efficient distributed
representation of the input.
Kohonen Nets
also known as self-organising maps (SOMs)
Important features:
• Self-organisation of a distributed representation of inputs.
• This is a form of unsupervised learning: no target outputs or teacher signal are required.
• The underlying learning principle: competition among
nodes, known as “winner takes all”. Only winners get to
“learn” and losers decay. The competition is enforced by the
network architecture: each node has a self-excitatory
connection and inhibits all its neighbours (a small code sketch of this connectivity follows the list).
• Spatial patterns are formed by imposing the learning rule
throughout the local neighbourhood of the winner.
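The centre-surround connectivity just described can be made concrete with a minimal sketch. Everything below is illustrative: the particular excitation and inhibition strengths are assumptions, not values from the lecture.

```python
# Minimal sketch of centre-surround lateral connectivity:
# each node excites itself and inhibits its lattice neighbours.
import numpy as np

def lateral_weights(n_nodes, self_excitation=1.0, inhibition=0.2):
    w = -inhibition * np.ones((n_nodes, n_nodes))  # every node inhibits the others
    np.fill_diagonal(w, self_excitation)           # ...and excites itself
    return w
```

Iterating node activities under these lateral weights (with activities clipped at zero and a small step size) tends to leave only the most strongly driven node active, which is the “winner takes all” behaviour the slide describes.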
Training Self-Organising Maps
A simple training algorithm might look like this (a code sketch follows the list):
1. Randomly initialise the network input weights
2. Normalise all inputs so they are size-independent
3. Define a local neighbourhood and a learning rate
4. For each item in the training set
• Find the lattice node most excited by the input
• Alter the input weights for this node and those
nearby so that they more closely resemble the
input vector; i.e., at each node, the input weight
update rule is Δw = r(x − w), where x is the input vector and r the learning rate
5. Reduce the learning rate & the neighbourhood size
6. Go to step 4 (another pass through the training set)
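A minimal NumPy sketch of this loop is given below. It is illustrative only: the lattice shape, the Gaussian neighbourhood function and the decay schedules are assumptions rather than the lecture's choices.

```python
import numpy as np

def train_som(data, rows=10, cols=10, passes=20, r0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # 1. Randomly initialise the input weights (one weight vector per lattice node).
    weights = rng.random((rows, cols, dim))
    # 2. Normalise all inputs so they are size-independent.
    data = data / np.linalg.norm(data, axis=1, keepdims=True)
    # Lattice coordinates, used to define each node's local neighbourhood.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for p in range(passes):
        # 3./5. The learning rate and neighbourhood size shrink between passes.
        r = r0 * (1 - p / passes)
        sigma = max(sigma0 * (1 - p / passes), 0.5)
        # 4. For each item in the training set...
        for x in rng.permutation(data):
            # ...find the lattice node most excited by the input (the winner)...
            winner = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                                      (rows, cols))
            # ...and pull the winner and its lattice neighbours towards the input:
            # delta_w = r * h * (x - w), with h falling off with lattice distance.
            lattice_dist2 = np.sum((grid - np.array(winner)) ** 2, axis=2)
            h = np.exp(-lattice_dist2 / (2 * sigma ** 2))[..., None]
            weights += r * h * (x - weights)
    return weights
```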
Training Self-Organising Maps (cont)
Gradually the net self-organises into a map of the
inputs, clustering the input data by recruiting areas of
the net for related inputs or features in the inputs.
The size of the neighbourhood roughly corresponds to
the resolution of the mapped features.
How Does It Work?
Imagine a 2D training set with clusters of data points.
[Figure: red and blue clusters of data points plotted against horizontal and vertical axes.]
The nodes in the lattice are initially randomly sensitive.
Gradually, they will “migrate” towards the input data.
Nodes that are neighbours in the lattice will tend to become
sensitive to similar inputs.
Effective resource allocation: dense parts of the input space
recruit more nodes than sparse areas.
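As a hypothetical usage of the train_som sketch above, two clusters of different density can be generated and mapped; the cluster positions and parameters here are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dense = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(400, 2))    # a dense cluster
sparse = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(50, 2))  # a sparse cluster
data = np.vstack([dense, sparse])

weights = train_som(data, rows=8, cols=8, passes=30)
# Neighbouring lattice nodes end up with similar weight vectors, and far more
# nodes settle near the dense cluster than near the sparse one.
```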
Another example: The travelling salesman problem
Applet from http://www.patol.com/java/TSP/index.html
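The travelling salesman example can be sketched in the same spirit: a 1D ring of SOM nodes is pulled towards the cities, and reading the cities off in ring order yields a tour. This is a generic illustration of the well-known SOM/elastic-ring heuristic, not the applet's actual code; the parameters and the ring size are assumptions.

```python
import numpy as np

def som_tsp(cities, nodes_per_city=3, passes=200, r0=0.8, seed=0):
    """cities: array of shape (n_cities, 2). Returns a visiting order (city indices)."""
    rng = np.random.default_rng(seed)
    n = len(cities) * nodes_per_city
    sigma0 = n / 10
    ring = rng.random((n, 2))                 # nodes arranged on a 1D ring
    for p in range(passes):
        r = r0 * (1 - p / passes) + 0.01
        sigma = max(sigma0 * (1 - p / passes), 1.0)
        for city in rng.permutation(cities):
            winner = np.argmin(np.linalg.norm(ring - city, axis=1))
            # circular (ring) distance from the winner to every node
            d = np.abs(np.arange(n) - winner)
            d = np.minimum(d, n - d)
            h = np.exp(-d ** 2 / (2 * sigma ** 2))[:, None]
            ring += r * h * (city - ring)     # pull the winner and ring neighbours to the city
    # a tour visits cities in the order their nearest ring nodes appear
    return np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])
```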
How does the brain perform
classification?
One area of the cortex (the inferior temporal cortex or IT)
has been linked with two important functions:
• object recognition
• object classification
These tasks seem to be shape/colour specific but
independent of object size, position, relative motion or
speed, brightness or texture.
Indeed, category-specific impairments have been linked
to IT injuries.
How does the brain perform
classification (cont)?
Questions:
How do IT neurons encode objects/categories? e.g.,
• local versus distributed representations/coding
• temporal versus rate coding at the neuronal level
Can we recruit ANNs to answer such questions?
Can ANNs perform classification as well given similar data?
Recently, Elizabeth Thomas and colleagues performed
experiments on the activity of IT neurons during an
exercise of image classification in monkeys and used a
Kohonen net to analyse the data.
The experiment
Monkeys were trained to distinguish between a training set of
pictures of trees and various other objects. The monkeys were
considered trained when they reached a 95% success rate.
Trained monkeys were now shown new images of trees and other
objects. As they classified the objects, the activity in IT neurons in
their brains was recorded. All in all 226 neurons were recorded on
various occasions and over many different images.
The data collected was the mean firing rate of each neuron in
response to each image. 25% of neurons responded only to one
category, but 75% were not category specific. All neurons were
image-specific.
Problem: Not all neurons were recorded for all images, and
no image was tested across all neurons.
In fact, when a Table of neuronal responses for each image was
created, it was more than 80% empty.
E. Thomas et al, J. Cog. Neurosci. (2001)
Experimental Results
Question: Given the partial data, is there sufficient
information to classify images as trees or non-trees?
Answer: A 2-node Kohonen net trained on the Table of
neuronal responses was able to classify new images with an
84% success rate.
Question: Are categories encoded by category-specific
neurons?
Answer: When the responses of category-specific neurons were
deleted from the Table, the success rate of the Kohonen net
degraded, but only minimally. A control set with random data
deletions yielded similar results. Conclusion: category-specific neurons are not important for categorisation!
E. Thomas et al, J. Cog. Neurosci. (2001)
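Purely to make the setup concrete, here is a hedged sketch of how a 2-node Kohonen layer could be trained on rows of such a sparse table (images × neurons) and used to classify new images. The handling of the missing entries, the normalisation and the node-labelling step are all assumptions made for illustration, not details from Thomas et al.

```python
import numpy as np

def two_node_kohonen(table, labels, passes=50, r0=0.3, seed=0):
    """table: (n_images, n_neurons) mean firing rates with NaNs for missing entries.
    labels: 0/1 category (e.g. tree / non-tree) for each image."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    x = np.nan_to_num(table, nan=0.0)                # assumed treatment of the empty entries
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    w = rng.random((2, x.shape[1]))                  # one weight vector per output node
    for p in range(passes):
        r = r0 * (1 - p / passes)
        for i in rng.permutation(len(x)):
            winner = np.argmin(np.linalg.norm(w - x[i], axis=1))
            w[winner] += r * (x[i] - w[winner])      # winner takes all: only the winner learns
    # Label each node by the majority category of the images it wins...
    wins = np.argmin(np.linalg.norm(x[:, None, :] - w[None], axis=2), axis=1)
    node_label = [int(round(labels[wins == k].mean())) if np.any(wins == k) else 0
                  for k in range(2)]
    # ...then classify every image by the label of its winning node.
    predictions = np.array([node_label[k] for k in wins])
    return w, predictions
```

Comparing such predictions against the true labels on held-out images is the kind of test that gave the 84% success rate quoted above; inspecting the learned weight vectors is what points to the neurons that matter most.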
Experimental Results (cont.)
Question: Which neurons are important, if any?
Answer: An examination of the weights that contribute most
to the output of the Kohonen net revealed that a small
subset of neurons (<50), which are not category-specific yet
respond with different intensities to different categories, is
crucial for correct classification.
Conclusions: The IT employs a distributed representation to
encode categories of different images. The redundancy in
this encoding allows for graceful degradation so that even
with 80% of data missing and many neurons deleted,
sufficient information is present for classification purposes.
The fact that only rate information was used suggests that
temporal information is less important here.
E. Thomas et al, J. Cog. Neurosci. (2001)
Space in Neural Nets
Kohonen nets teach us an important lesson about the ability
of neurons to encode information, not only in weights, but
also in spatial organisation. What are the consequences for
network dynamics? Can these principles be extended beyond
simple centre-surround constraints of self-excitation and
neighbour inhibition?
Once again, insight may be gained by returning to the
biological domain and asking how space affects brain activity.
While always aware of the immense richness of neuronal
behaviour, we have, until today, considered them to be
minimal processors communicating via well-defined circuits.
What have we neglected?
We have turned our networks into abstract, computing tools,
disconnected from the real world in which problems are defined.
We have also robbed the networks of enormous freedom by
restricting the encoding of information to a series of weights.
Neurotransmitters in the brain
• many neurotransmitters do not just excite or inhibit
• neurons release gases such as nitric oxide (NO)
• the behaviour of these diffusing gaseous modulators is
very different from that of standard neurotransmitters…
Unlike standard neurotransmitters, which are unable to
travel far from their point of origin, NO is a small gas
molecule that is free to diffuse slowly away from its origin,
unhindered by intervening cellular structures.
NO secreted by a neuron affects all neurons within range –
regardless of circuitry. Such influences go beyond
excitation or inhibition. NO has the potential to modulate
many aspects of the neuron’s behaviour.
GasNets
Researchers at Sussex’s Centre for Computational
Neuroscience and Robotics have been developing an ANN
architecture inspired by these findings which they call GasNets.
Their model is a generalisation of the dynamic recurrent neural
nets. Neurons are organised on a 2D grid, with all-to-all synaptic
connections. Active neurons can also secrete gas.
The concentration of gas at the location of a neuron modulates
its sigmoid activity function, either increasing or decreasing the
steepness of the curve (and its ability to secrete gas itself).
All GasNet figures courtesy of Phil Husbands, Mick O’Shea, Tom Smith, & Nick Jakobi
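A loose sketch of a gas-modulated update in this spirit is shown below. It is not the published GasNet model: the grid geometry, the diffusion rule, the decay constant and the mapping from gas concentration to sigmoid gain are all assumptions made for illustration.

```python
import numpy as np

def gasnet_step(activations, weights, gas, positions, emitters, inputs,
                dt=1.0, diffusion_radius=2.0, gas_decay=0.95):
    """One update of a small gas-modulated recurrent net (illustrative only).

    activations : (N,) current neuron outputs
    weights     : (N, N) synaptic weights (all-to-all)
    gas         : (N,) gas concentration sampled at each neuron's location
    positions   : (N, 2) neuron coordinates on the 2D grid
    emitters    : (N,) bool, which neurons can secrete gas when active
    inputs      : (N,) external input to each neuron
    """
    # Gas concentration at a neuron modulates the steepness (gain) of its sigmoid.
    gain = 1.0 + 4.0 * gas                        # assumed mapping from gas to gain
    net = weights @ activations + inputs
    new_act = 1.0 / (1.0 + np.exp(-gain * net))   # gas-modulated sigmoid
    # Active emitters add gas at all neurons within range, regardless of circuitry;
    # existing gas slowly decays.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    within = (dist < diffusion_radius).astype(float)
    emission = within @ (emitters * new_act)
    new_gas = gas_decay * gas + dt * 0.1 * emission
    return new_act, new_gas
```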
A Control Task…
A robot lives in a walled arena. Its task is to approach a
white triangle painted on the wall and avoid a white
rectangle, using only very crude visual input (typically a
handful of pixels from a camera mounted on the robot).
[Figure: the robot, showing its camera, motor, drive wheels, bumpers and caster.]
Performing this shape discrimination under noisy lighting
conditions is a non-trivial task, especially given the
limited visual input available to the controller.
Non-Gaseous Solutions
The same researchers had previously evolved more
standard dynamical neural nets to solve this task:
These controllers took ~6000 generations to discover.
What kind of GasNet controllers would evolve? Would
they exhibit advantages over other kinds of ANN?
all figures courtesy of Sussex CCNR
GasNet Controllers
Two kinds of successful GasNet controller were
evolved, each taking ~1000 generations to discover.
Here’s one:
The GasNet Solutions
Both GasNets perform robustly despite the noisy lighting
conditions and “outrageously low bandwidth” vision.
Far (ballistic): contrast between two offset visual inputs is used to detect the triangle edge.
Near (closed-loop): a “scanning behaviour” continually modulates the approach.
The evolved visual morphology always played a crucial
role. Active visual strategies solved the task, rather
than central reasoning.
Why Does Gas Make It Better?
While still an open question, several possibilities include:
• Gas diffuses widely, allowing large parts of the network to
be inhibited or excited simultaneously.
• Gas concentration varies much more slowly than the flow of
‘electrical’ activation around the synaptic connections.
• There may be useful interactions between the slow gas
and fast activation dynamics.
A combination of these ideas may explain why solutions
appear easier to evolve from GasNets than from non-gas
dynamical ANNs.
Research into these possibilities is currently ongoing by
Chris Buckley in the Biosystems group of the School of
Computing.
From Biology to ANNs & Back
Neuroscience and studies of animal behaviour have led
to new ideas for artificial learning, communication,
cooperation & competition. Simplistic cartoon models of
these mechanisms can lead to new paradigms and
impressive technologies.
• Dynamic Neural Nets are helping us understand real-time
adaptation and problem-solving under changing conditions.
• Hopfield nets shed new light on mechanisms of
association and the benefits of unsupervised learning.
• Thomas’ work helps unravel coding structures in the cortex.
• Husbands et al.’s GasNets are helping neuroscientists to
understand the behaviour of NO and other local influences
in real nervous systems, and are also providing improved robot control.
Next time…
• Guest lecture series on Genetic Evolution and Genetic Programming.
Reading
• Elizabeth Thomas et al (2001) “Encoding of categories by noncategory-specific neurons in the inferior temporal cortex”, J. Cog. Neurosci. 13: 190-200.
• Phil Husbands, Tom Smith, Nick Jakobi & Michael O’Shea (1998). “Better
living through chemistry: Evolving GasNets for robot control”, Connection
Science, 10:185-210.
• Ezequiel Di Paolo (2003). “Organismically-inspired robotics: Homeostatic
adaptation and natural teleology beyond the closed sensorimotor loop”, in: K.
Murase & T. Asakura (Eds) Dynamical Systems Approach to Embodiment and
Sociality, Advanced Knowledge International, Adelaide, pp. 19-42.
• Ezequiel Di Paolo (2000) “Homeostatic adaptation to inversion of the visual
field and other sensorimotor disruptions”, SAB2000, MIT Press.