
Neural Networks
Chapter 9
Joost N. Kok
Universiteit Leiden
Unsupervised Competitive Learning
• Competitive learning
• Winner-take-all units
• Cluster/Categorize input data
• Feature mapping
Unsupervised Competitive Learning
[Figure: input patterns grouped into numbered clusters]
Unsupervised Competitive Learning
[Figure: n-dimensional input layer feeding an output layer; the winner unit is highlighted]
Simple Competitive Learning
• Winner: $h_i = \sum_j w_{ij}\,\xi_j = \mathbf{w}_i \cdot \boldsymbol{\xi}$, and unit $i^*$ wins if $\mathbf{w}_{i^*} \cdot \boldsymbol{\xi} \ge \mathbf{w}_i \cdot \boldsymbol{\xi}$ for all $i$
• Lateral inhibition
Simple Competitive Learning
• Update weights for winning neuron:
$\Delta w_{i^*j} = \eta\,\xi_j$
$\Delta w_{i^*j} = \eta\left(\dfrac{\xi_j}{\sum_k \xi_k} - w_{i^*j}\right)$
$\Delta w_{i^*j} = \eta\,(\xi_j - w_{i^*j})$
Simple Competitive Learning
• Update rule for all neurons (see the sketch below):
$\Delta w_{ij} = \eta\,O_i\,(\xi_j - w_{ij})$, with $O_{i^*} = 1$ and $O_i = 0$ if $i \ne i^*$
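Below is a minimal NumPy sketch of this winner-take-all loop, assuming the input is an (N, d) array; the function name, default parameters, and sample-based initialization are illustrative choices, not part of the slides. The winner is picked as the closest prototype, which coincides with the largest $h_i = \mathbf{w}_i \cdot \boldsymbol{\xi}$ when the vectors are normalized.

```python
import numpy as np

def competitive_learning(patterns, n_units, eta=0.1, epochs=50, seed=0):
    """Simple competitive learning: only the winning unit moves its
    weight vector towards the current input pattern."""
    rng = np.random.default_rng(seed)
    # Initialize weights to randomly chosen input samples (a dead-unit remedy)
    idx = rng.choice(len(patterns), size=n_units, replace=False)
    w = patterns[idx].astype(float).copy()
    for _ in range(epochs):
        for xi in rng.permutation(patterns):
            # Closest prototype wins (equals largest w_i . xi for normalized vectors)
            winner = np.argmin(np.linalg.norm(w - xi, axis=1))
            w[winner] += eta * (xi - w[winner])   # standard rule: Δw = η (ξ − w)
    return w
```

For instance, `competitive_learning(np.random.rand(500, 2), n_units=4)` should return four prototype vectors spread over the occupied part of the unit square.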
Graph Bipartitioning
• Patterns: edges = dipole stimuli
• Two output units
Simple Competitive Learning
• Dead Unit Problem Solutions
– Initialize weights to samples from the input
– Leaky learning: also update the weights of the losers (but with a smaller η); see the sketch below
– Arrange neurons in a geometrical way: also update neighbors
– Turn on input patterns gradually
– Conscience mechanism
– Add noise to input patterns
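A sketch of the leaky-learning remedy from the list above; the helper name and the two rates `eta_win` and `eta_lose` are illustrative choices, the slides only require the losers' rate to be much smaller:

```python
import numpy as np

def leaky_competitive_step(w, xi, eta_win=0.1, eta_lose=0.001):
    """One leaky-learning step: the winner moves with rate eta_win, all
    losers also move, but with the much smaller rate eta_lose, so no unit
    can stay 'dead' forever."""
    winner = np.argmin(np.linalg.norm(w - xi, axis=1))
    rates = np.full(len(w), eta_lose)
    rates[winner] = eta_win
    return w + rates[:, None] * (xi - w)
```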
Vector Quantization
• Classes are represented by prototype vectors
• Voronoi tessellation
Learning Vector Quantization
• Labelled sample data
• Update rule depends on the current classification (see the sketch below):
$\Delta w_{i^*j} = \begin{cases} +\eta\,(\xi_j - w_{i^*j}) & \text{if class is correct} \\ -\eta\,(\xi_j - w_{i^*j}) & \text{if class is incorrect} \end{cases}$
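A possible LVQ1-style implementation of this signed update; the prototype storage and parameter values are assumptions of the sketch:

```python
import numpy as np

def lvq1_step(w, labels, xi, xi_label, eta=0.05):
    """One LVQ1 step: move the winning prototype towards the sample if the
    class matches, away from it otherwise (the signed rule on the slide)."""
    winner = np.argmin(np.linalg.norm(w - xi, axis=1))  # nearest prototype
    sign = 1.0 if labels[winner] == xi_label else -1.0
    w[winner] += sign * eta * (xi - w[winner])
    return w
```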
Adaptive Resonance Theory
• Stability-Plasticity Dilemma
• Supply of neurons, only use them if needed
• Notion of “sufficiently similar”
Adaptive Resonance Theory
• Start with all weights = 1
• Enable all output units
• Find winner among enabled units: maximize $\dfrac{\mathbf{w}_i \cdot \boldsymbol{\xi}}{\varepsilon + \sum_j w_{ij}}$
• Test match: $\dfrac{\mathbf{w}_{i^*} \cdot \boldsymbol{\xi}}{\sum_j \xi_j} \ge \rho$
• Update weights (see the sketch below): $w_{i^*j} := w_{i^*j}\,\xi_j$
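The steps above can be sketched roughly as follows for binary patterns; the choice parameter `eps`, the vigilance `rho`, and the fast-learning (component-wise AND) update are standard ART1 conventions assumed here, not values fixed by the slides:

```python
import numpy as np

def art1_present(w, xi, rho=0.7, eps=0.5):
    """Present one binary pattern xi to an ART1-style network.
    w holds one binary prototype per output unit (started at all ones).
    Returns the index of the resonating unit, or None if every unit fails
    the vigilance test (a fresh unit would then be recruited)."""
    enabled = list(range(len(w)))
    while enabled:
        # Choice: prefer prototypes with large overlap relative to their size
        scores = [(w[i] @ xi) / (eps + w[i].sum()) for i in enabled]
        i_star = enabled[int(np.argmax(scores))]
        # Vigilance (match) test: is the overlap a large enough fraction of xi?
        if (w[i_star] @ xi) / xi.sum() >= rho:
            w[i_star] = w[i_star] * xi        # fast learning: component-wise AND
            return i_star
        enabled.remove(i_star)                # mismatch: disable and try next
    return None
```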
Feature Mapping
• Geometrical arrangement of output units
• Nearby outputs correspond to nearby input
patterns
• Feature Map
• Topology preserving map
Self Organizing Map
• Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
• Move the weight vector w of the winning neuron towards the input i
[Figure: the weight vector w before and after learning, rotated towards the input i]
Self Organizing Map
• Impose a topological order onto the
competitive neurons (e.g., rectangular
map)
• Let neighbors of the winner share the
“prize” (The “postcode lottery”
principle)
• After learning, neurons with similar
weights tend to cluster on the map
Self Organizing Map
[Image-only slides]
• Input: uniformly randomly distributed
points
• Output: Map of 202 neurons
• Training
– Starting with a large learning rate and
neighborhood size, both are gradually
decreased to facilitate convergence
Self Organizing Map
[Image-only slides]
Feature Mapping
• Retinotopic Map
• Somatosensory Map
• Tonotopic Map
Feature Mapping
[Image-only slides]
Kohonen’s Algorithm
$\Delta w_{ij} = \eta\,\Lambda(i, i^*)\,(\xi_j - w_{ij})$
$\Lambda(i, i^*) = \exp\!\left(-\,|\mathbf{r}_i - \mathbf{r}_{i^*}|^2 / 2\sigma^2\right)$
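A compact sketch of Kohonen's rule on a rectangular map, with a Gaussian neighbourhood Λ and linearly decaying η and σ; the decay schedules and default values are assumptions of the sketch:

```python
import numpy as np

def train_som(patterns, grid=(20, 20), eta0=0.5, sigma0=5.0, epochs=100, seed=0):
    """Kohonen's algorithm on a rectangular grid: the winner and its grid
    neighbours (Gaussian neighbourhood) move towards each input; learning
    rate and neighbourhood width shrink over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    pos = np.array([(r, c) for r in range(rows) for c in range(cols)], float)  # r_i
    w = rng.random((rows * cols, patterns.shape[1]))
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)             # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # decaying neighbourhood width
        for xi in rng.permutation(patterns):
            winner = np.argmin(np.linalg.norm(w - xi, axis=1))
            d2 = np.sum((pos - pos[winner]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # Λ(i, i*)
            w += eta * h[:, None] * (xi - w)      # Δw_ij = η Λ(i, i*)(ξ_j − w_ij)
    return w
```

Called as `train_som(np.random.rand(1000, 2))`, this reproduces the uniform-square experiment described above: a 20 × 20 grid of weight vectors that gradually unfolds over the unit square.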
Travelling Salesman Problem
$\Delta \mathbf{w}_i = \eta\left(\Phi(i, \boldsymbol{\xi})\,(\boldsymbol{\xi} - \mathbf{w}_i) + \gamma\,(\mathbf{w}_{i+1} - 2\mathbf{w}_i + \mathbf{w}_{i-1})\right)$
$\Phi(i, \boldsymbol{\xi}) = \dfrac{\exp\!\left(-\,|\boldsymbol{\xi} - \mathbf{w}_i|^2 / 2\sigma^2\right)}{\sum_j \exp\!\left(-\,|\boldsymbol{\xi} - \mathbf{w}_j|^2 / 2\sigma^2\right)}$
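One update step of the ring of nodes for this TSP formulation might look as follows; `gamma` and `sigma` are free parameters here, and the ring indices wrap around via `np.roll`:

```python
import numpy as np

def tsp_som_step(w, xi, eta=0.2, gamma=0.05, sigma=0.1):
    """Update the ring of node positions w for one presented city xi:
    each node is pulled towards xi in proportion to Phi(i, xi), plus an
    elastic term that keeps neighbours on the ring close together."""
    d2 = np.sum((xi - w) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    phi /= phi.sum()                              # Φ(i, ξ), normalized over nodes
    elastic = np.roll(w, -1, axis=0) - 2 * w + np.roll(w, 1, axis=0)  # w_{i+1} − 2w_i + w_{i−1}
    return w + eta * (phi[:, None] * (xi - w) + gamma * elastic)
```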
Hybrid Learning Schemes
[Figure: two-layer network, an unsupervised first layer followed by a supervised second layer]
Counterpropagation
• First layer uses standard competitive learning
• Second (output) layer is trained using the delta rule (see the sketch below):
$\Delta w_{ij} = \eta\,(\zeta_i - O_i)\,V_j$
$\Delta w_{ij} = \eta\,(\zeta_i - w_{ij})\,V_j$
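A rough one-step sketch, assuming a winner-take-all hidden layer so that V_j = 1 only for the winning unit; the two slide rules then coincide on that unit's outgoing weights:

```python
import numpy as np

def counterprop_step(w_in, w_out, xi, zeta, eta1=0.1, eta2=0.05):
    """One counterpropagation step: competitive learning in the first layer,
    then the delta-style rule for the outgoing weights of the winning hidden
    unit (V is the winner-take-all hidden activity)."""
    winner = np.argmin(np.linalg.norm(w_in - xi, axis=1))
    w_in[winner] += eta1 * (xi - w_in[winner])            # unsupervised layer
    w_out[:, winner] += eta2 * (zeta - w_out[:, winner])  # Δw_ij = η (ζ_i − w_ij) V_j
    return w_in, w_out
```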
Radial Basis Functions
• First layer with normalized Gaussian activation functions (see the sketch below):
$g_j(\boldsymbol{\xi}) = \dfrac{\exp\!\left(-\,|\boldsymbol{\xi} - \mathbf{w}_j|^2 / 2\sigma_j^2\right)}{\sum_k \exp\!\left(-\,|\boldsymbol{\xi} - \mathbf{w}_k|^2 / 2\sigma_k^2\right)}$
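A small sketch of such a normalized Gaussian first layer; the `centers` and `sigmas` arguments are assumptions of the sketch and would typically come from an unsupervised stage such as the competitive learning above:

```python
import numpy as np

def rbf_layer(xi, centers, sigmas):
    """Normalized Gaussian activations g_j(ξ): each unit responds to the
    distance between the input and its centre, and the responses are
    normalized so they sum to one."""
    d2 = np.sum((xi - centers) ** 2, axis=1)
    g = np.exp(-d2 / (2 * sigmas ** 2))
    return g / g.sum()
```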