
Neural Networks
Chapter 9
Universiteit Leiden
Unsupervised Competitive Learning
• Competitive learning
• Winner-take-all units
• Cluster/categorize input data
• Compression through vector quantization
• Feature mapping
Unsupervised Competitive Learning
[Figure: example input points assigned to clusters, labelled 1–5]
Unsupervised Competitive Learning
[Figure: competitive network; n-dimensional input layer, output units, one winner]
Simple Competitive Learning
• Winner:
$h_i = \sum_j w_{ij}\,\xi_j = \mathbf{w}_i \cdot \boldsymbol{\xi}$
$\mathbf{w}_{i^*} \cdot \boldsymbol{\xi} \;\ge\; \mathbf{w}_i \cdot \boldsymbol{\xi} \quad \forall i$
• In biological networks: lateral inhibition
• In ANNs: search for the maximum.
Simple Competitive Learning
• Equivalent if the w's and ξ's are normalized to unit length: the winner is the unit closest to ξ:
$|\mathbf{w}_{i^*} - \boldsymbol{\xi}| \;\le\; |\mathbf{w}_i - \boldsymbol{\xi}| \quad \forall i$
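The equivalence follows from a one-line expansion (a standard identity, not spelled out on the slide): for unit-length vectors,

$$|\mathbf{w}_i - \boldsymbol{\xi}|^2 = |\mathbf{w}_i|^2 - 2\,\mathbf{w}_i \cdot \boldsymbol{\xi} + |\boldsymbol{\xi}|^2 = 2 - 2\,\mathbf{w}_i \cdot \boldsymbol{\xi},$$

so minimizing the distance to $\boldsymbol{\xi}$ is the same as maximizing the activation $h_i = \mathbf{w}_i \cdot \boldsymbol{\xi}$.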
Simple Competitive Learning
• Update weights for the winning neuron only:
$\Delta w_{i^*j} = \eta\,(\xi_j - w_{i^*j})$
• (Standard competitive learning rule.)
• Moves $\mathbf{w}_{i^*}$ towards $\boldsymbol{\xi}$.
Simple Competitive Learning
• Update rule for all neurons:
$\Delta w_{ij} = \eta\,O_i\,(\xi_j - w_{ij})$
where $O_{i^*} = 1$ and $O_i = 0$ for $i \ne i^*$.
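A minimal NumPy sketch of one such step (variable names and the learning rate are my own, not from the slides): the winner is found by the largest dot product, and only its weight row moves towards the input.

```python
import numpy as np

def competitive_step(W, xi, eta=0.1):
    """One step of simple competitive learning.

    W   : (n_units, n_inputs) weight matrix, rows normalized to unit length
    xi  : (n_inputs,) input pattern, normalized to unit length
    eta : learning rate
    """
    h = W @ xi                               # h_i = w_i . xi for every unit
    winner = np.argmax(h)                    # i*: the unit with the largest activation
    W[winner] += eta * (xi - W[winner])      # move w_{i*} towards xi
    W[winner] /= np.linalg.norm(W[winner])   # re-normalize to stay on the unit sphere
    return winner
```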
Simple Competitive Learning
[insert Figure 9.2.]
NB: if the weights and inputs are normalized, everything lies on the unit sphere.
Simple Competitive Learning
• Dead-unit problem: solutions
– Initialize weights to samples from the input
– Leaky learning: also update the weights of the losers, but with a smaller η (see the sketch after this list)
– Arrange neurons in a geometrical way: also update neighbors
– Turn on input patterns gradually
– Conscience mechanism: make it easier for frequent losers to win
– Add noise to input patterns
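A minimal sketch of the leaky-learning variant (my own illustration; the two rates eta_win and eta_lose are assumed, with eta_lose much smaller):

```python
import numpy as np

def leaky_competitive_step(W, xi, eta_win=0.1, eta_lose=0.001):
    """Leaky learning: losers also drift towards the input, but much more
    slowly, so no unit can starve ("die") far away from all the data."""
    winner = np.argmax(W @ xi)
    rates = np.full(len(W), eta_lose)   # losers get a small learning rate
    rates[winner] = eta_win             # the winner gets the full rate
    W += rates[:, None] * (xi - W)      # every unit moves at its own rate
    return winner
```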
Graph Bipartitioning
• Patterns: edges = dipole stimuli
• Edges sharing a node lie close together, and hence tend to end up in the same cluster.
• Two output units.
Vector Quantization
• Classes are represented by prototype vectors
• For storage and transmission of speech and
image data.
• Voronoi tessellation
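A minimal sketch of how vector quantization compresses data (function and variable names are my own): each input vector is stored as the index of its nearest prototype, and decoding replaces the index by the prototype, i.e. by the centre of its Voronoi cell.

```python
import numpy as np

def vq_encode(X, prototypes):
    """Map each row of X to the index of its nearest prototype (its Voronoi cell)."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return np.argmin(d, axis=1)        # one small integer per input vector

def vq_decode(codes, prototypes):
    """Reconstruct each vector as the prototype of its cell."""
    return prototypes[codes]
```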
Vector Quantization
Learning Vector Quantization
• Labelled sample data
• Multiple prototypes per class
• Update rule depends on the current classification: if the winner's class is incorrect, then move the prototype away from the input vector!
$\Delta w_{i^*j} = \begin{cases} +\eta\,(\xi_j - w_{i^*j}) & \text{if the class is correct} \\ -\eta\,(\xi_j - w_{i^*j}) & \text{if the class is incorrect} \end{cases}$
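A minimal sketch of this rule (LVQ1-style; array names and the label convention are my own assumptions):

```python
import numpy as np

def lvq_step(W, proto_labels, xi, xi_label, eta=0.05):
    """One LVQ update: attract the winning prototype if its label matches
    the sample's label, repel it otherwise."""
    winner = np.argmin(np.linalg.norm(W - xi, axis=1))   # nearest prototype
    sign = 1.0 if proto_labels[winner] == xi_label else -1.0
    W[winner] += sign * eta * (xi - W[winner])
    return winner
```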
Learning Vector Quantization
Feature Mapping
• Geometrical arrangement of output units
• Nearby outputs correspond to nearby input
patterns
• Feature Map
• Topology preserving map
Self Organizing Map
• Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
• Move the weight vector w of the winning neuron towards the input i
[Figure: the weight vector w rotates towards the input i; shown before and after learning]
Self Organizing Map
• Impose a topological order onto the
competitive neurons (e.g., rectangular
map)
• Let neighbors of the winner share the
“prize” (The “postcode lottery”
principle)
• After learning, neurons with similar
weights tend to cluster on the map
Self Organizing Map
Example for two-dimensional input.
Self Organizing Map
Update neighboring weights.
Self Organizing Map
• Input: uniformly randomly distributed points
• Output: map of 20×20 neurons
• Training
– Starting with a large learning rate and neighborhood size, both are gradually decreased to facilitate convergence
Self Organizing Map
Nonlinear mappings are
possible!
Self Organizing Map
A very nonlinear mapping…
Self Organizing Map
Feature Mapping
• Retinotopic Map: spatial organization of the
neuronal responses to visual stimuli.
• Somatosensory Map: (The somatosensory system is a diverse sensory system comprising the receptors and processing centres that produce sensory modalities such as touch, temperature, proprioception (body position), and nociception (pain).)
• Tonotopic Map: (Tonotopy (from Greek tono- and topos =
place) refers to the spatial arrangement of where sounds of different
frequency are processed in the brain. Tones close to each other in
terms of frequency are represented in topologically neighbouring
regions in the brain.)
Feature Mapping
Kohonen’s Algorithm
$\Delta w_{ij} = \eta\,\Lambda(i, i^*)\,(\xi_j - w_{ij})$
$\Lambda(i, i^*) = \exp\!\left(-\,|\mathbf{r}_i - \mathbf{r}_{i^*}|^2 / 2\sigma^2\right)$
where $\mathbf{r}_i$ is the position of neuron $i$ on the map.
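A minimal sketch of Kohonen's rule with this Gaussian neighborhood, including the decaying η and σ mentioned earlier (grid shape, decay schedules, and names are my own assumptions):

```python
import numpy as np

def train_som(X, grid=(20, 20), epochs=50, eta0=0.5, sigma0=10.0):
    """Train a SOM on the rows of X with Kohonen's rule:
    delta w_i = eta * Lambda(i, i*) * (xi - w_i)."""
    rng = np.random.default_rng(0)
    n_units = grid[0] * grid[1]
    W = rng.random((n_units, X.shape[1]))
    # r_i: position of each neuron on the rectangular map
    r = np.array([(a, b) for a in range(grid[0]) for b in range(grid[1])], float)
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)              # learning rate decays ...
        sigma = sigma0 * (1 - t / epochs) + 1e-2   # ... and so does the neighborhood
        for xi in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(W - xi, axis=1))
            dist2 = np.sum((r - r[winner]) ** 2, axis=1)  # |r_i - r_i*|^2 on the map
            Lam = np.exp(-dist2 / (2 * sigma ** 2))       # Gaussian neighborhood
            W += eta * Lam[:, None] * (xi - W)
    return W
```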
Travelling Salesman Problem
$\Delta \mathbf{w}_i = \eta\left(\sum_\mu \Lambda^\mu(i)\,(\boldsymbol{\xi}^\mu - \mathbf{w}_i) + \gamma\,(\mathbf{w}_{i+1} - 2\mathbf{w}_i + \mathbf{w}_{i-1})\right)$
$\Lambda^\mu(i) = \frac{\exp(-|\boldsymbol{\xi}^\mu - \mathbf{w}_i|^2 / 2\sigma^2)}{\sum_j \exp(-|\boldsymbol{\xi}^\mu - \mathbf{w}_j|^2 / 2\sigma^2)}$
(The units $\mathbf{w}_i$ form a ring, the $\boldsymbol{\xi}^\mu$ are the city positions, and the second term pulls neighbouring units on the ring together: the elastic net.)
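A minimal sketch of one elastic-net update for the TSP (ring indexing via np.roll; parameter values are my own assumptions):

```python
import numpy as np

def elastic_net_step(W, cities, eta=0.2, gamma=1.0, sigma=0.1):
    """One update of a ring of units W (n_units, 2) towards the city positions.
    Each city pulls the ring units near it; the (w_{i+1} - 2 w_i + w_{i-1})
    term is an elastic force keeping the ring short and smooth."""
    # Lambda^mu(i): normalized Gaussian attention of city mu on ring unit i
    diff = cities[:, None, :] - W[None, :, :]          # (n_cities, n_units, 2)
    d2 = np.sum(diff ** 2, axis=2)                     # (n_cities, n_units)
    Lam = np.exp(-d2 / (2 * sigma ** 2))
    Lam /= Lam.sum(axis=1, keepdims=True)
    pull = np.einsum('mi,mid->id', Lam, diff)          # sum_mu Lambda^mu(i)(xi^mu - w_i)
    elastic = np.roll(W, -1, axis=0) - 2 * W + np.roll(W, 1, axis=0)
    W += eta * (pull + gamma * elastic)
    return W
```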
Hybrid Learning Schemes
[Figure: two-layer network; the first layer is trained unsupervised, the second layer supervised]
Counterpropagation
• First layer uses standard competitive learning
• Second (output) layer is trained using the delta rule:
$\Delta w_{ij} = \eta\,(\zeta_i - O_i)\,V_j$
• Since the competitive layer's output $V$ is one-hot, $O_i = w_{ij}$ for the winning $j$, and the rule reduces to:
$\Delta w_{ij} = \eta\,(\zeta_i - w_{ij})\,V_j$
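A minimal sketch of one counterpropagation training step under these two rules (one-hot competitive first layer; names and rates are my own assumptions):

```python
import numpy as np

def counterprop_step(W1, W2, xi, target, eta1=0.1, eta2=0.1):
    """First layer: competitive learning; second layer: delta rule
    on the winner's outgoing weights only."""
    j = np.argmin(np.linalg.norm(W1 - xi, axis=1))  # competitive winner
    W1[j] += eta1 * (xi - W1[j])                    # unsupervised update
    V = np.zeros(len(W1))                           # one-hot hidden activity
    V[j] = 1.0
    O = W2 @ V                                      # network output = W2[:, j]
    W2 += eta2 * np.outer(target - O, V)            # delta rule; only column j changes
    return O
```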
Radial Basis Functions
• First layer with normalized Gaussian activation functions:
$g_j(\boldsymbol{\xi}) = \frac{\exp(-|\boldsymbol{\xi} - \mathbf{w}_j|^2 / 2\sigma_j^2)}{\sum_k \exp(-|\boldsymbol{\xi} - \mathbf{w}_k|^2 / 2\sigma_k^2)}$
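A minimal sketch of this normalized Gaussian layer (the centres w_j and widths sigma_j are assumed given, e.g. found by competitive learning):

```python
import numpy as np

def rbf_layer(xi, centers, sigmas):
    """Normalized Gaussian activations g_j(xi); they are positive and sum
    to 1, so each input is softly assigned to the nearby centres."""
    d2 = np.sum((centers - xi) ** 2, axis=1)   # |xi - w_j|^2 for every centre
    a = np.exp(-d2 / (2 * sigmas ** 2))
    return a / a.sum()
```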