Transcript PPT

3: An Illustrative Example
Apple/Banana Sorter
Prototype Vectors

Measurement vector:

    p = [shape; texture; weight]

Prototype banana:

    p1 = [-1; 1; -1]

Prototype apple:

    p2 = [1; 1; -1]

Shape:   {1 : round ;  -1 : elliptical}
Texture: {1 : smooth ; -1 : rough}
Weight:  {1 : > 1 lb. ; -1 : < 1 lb.}
Perceptron
Two-Input Case

    w1,1 = 1,  w1,2 = 2

    a = hardlims(n) = hardlims([1 2]p + (-2))

Decision Boundary:

    Wp + b = 0
    [1 2]p - 2 = 0
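As a quick check of the numbers above, here is a minimal sketch of this two-input perceptron in Python (the hardlims helper and the two test points are my own, chosen to land on either side of the boundary):

```python
import numpy as np

def hardlims(n):
    # Symmetric hard limiter: +1 if n >= 0, otherwise -1
    return 1 if n >= 0 else -1

# Two-input case from the slide: W = [1 2], b = -2
W = np.array([1.0, 2.0])
b = -2.0

# The boundary is [1 2]p - 2 = 0; points on opposite sides give opposite outputs
print(hardlims(W @ np.array([2.0, 2.0]) + b))  # n = 4  -> output  1
print(hardlims(W @ np.array([0.0, 0.0]) + b))  # n = -2 -> output -1
```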
Apple/Banana Example

    a = hardlims([w1,1 w1,2 w1,3][p1; p2; p3] + b)

The decision boundary should separate the prototype vectors. Here we choose the boundary p1 = 0.

The weight vector should be orthogonal to the decision boundary, and should point in the direction of the vector which should produce an output of 1. The bias determines the position of the boundary:

    [-1 0 0][p1; p2; p3] + 0 = 0
Testing the Network

Banana:

    a = hardlims([-1 0 0][-1; 1; -1] + 0) = 1    (banana)

Apple:

    a = hardlims([-1 0 0][1; 1; -1] + 0) = -1    (apple)

"Rough" Banana:

    a = hardlims([-1 0 0][-1; -1; -1] + 0) = 1    (banana)
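The three test cases can be reproduced with a short Python sketch (the hardlims helper and the variable names are mine):

```python
import numpy as np

def hardlims(n):
    # Symmetric hard limiter: +1 if n >= 0, otherwise -1
    return 1 if n >= 0 else -1

# Perceptron designed above: W = [-1 0 0], b = 0
W = np.array([-1.0, 0.0, 0.0])
b = 0.0

banana       = np.array([-1.0,  1.0, -1.0])
apple        = np.array([ 1.0,  1.0, -1.0])
rough_banana = np.array([-1.0, -1.0, -1.0])  # rough texture flips element 2

for name, p in [("banana", banana), ("apple", apple), ("rough banana", rough_banana)]:
    print(name, "->", hardlims(W @ p + b))  # 1 = banana, -1 = apple
```

Since only the first element (shape) carries a nonzero weight, the texture noise in the "rough" banana does not change the classification.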
Hamming Network
Properties of Hamming Network

• It was designed for binary pattern recognition problems
• Inputs are either 1 or -1
• It has a feedforward layer and a recurrent layer
• The number of neurons in the first layer is the same as in the second layer
• It decides which prototype vector is closest to the input
• The feedforward layer computes the inner product between each of the prototype patterns and the input pattern
• The rows of W in the feedforward layer are set to the prototype patterns
• The feedforward layer uses a linear transfer function (purelin)
• Each element of the bias vector equals the number of elements in the input vector, R. This ensures that the input to the recurrent layer is always positive
• There is one neuron in the recurrent layer for each prototype pattern
• The recurrent layer is known as the "competitive layer"
• The inputs to this layer are the outputs of the feedforward layer
• When the recurrent layer converges, there will be one neuron with nonzero output, which determines the prototype closest to the input vector
Feedforward Layer
For Banana/Apple Recognition: S = 2

    W1 = [p1^T; p2^T] = [-1 1 -1; 1 1 -1]

    b1 = [R; R] = [3; 3]

    a1 = W1 p + b1 = [p1^T p + 3; p2^T p + 3]
Recurrent Layer
    W2 = [1 -ε; -ε 1],  where ε < 1/(S - 1)
    (S is the number of neurons in the recurrent layer)

    a2(t+1) = poslin(W2 a2(t))
            = poslin([a2_1(t) - ε a2_2(t); a2_2(t) - ε a2_1(t)])
Hamming Operation: First Layer

Input ("rough" banana):

    p = [-1; -1; -1]

    a1 = W1 p + b1 = [-1 1 -1; 1 1 -1][-1; -1; -1] + [3; 3] = [1 + 3; -1 + 3] = [4; 2]
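A sketch of this first-layer computation in NumPy (the variable names are mine; purelin simply passes the net input through):

```python
import numpy as np

# Feedforward (matching) layer: rows of W1 are the prototype patterns
W1 = np.array([[-1.0, 1.0, -1.0],   # p1 (banana)
               [ 1.0, 1.0, -1.0]])  # p2 (apple)
b1 = np.array([3.0, 3.0])           # each bias equals R, the input dimension

p = np.array([-1.0, -1.0, -1.0])    # the "rough" banana

a1 = W1 @ p + b1                    # purelin: output equals net input
print(a1)                           # [4. 2.] -- the banana prototype is closer
```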
Hamming Operation: Second Layer

With ε = 0.5:

    a2(1) = poslin(W2 a2(0)) = poslin([1 -0.5; -0.5 1][4; 2]) = poslin([3; 0]) = [3; 0]

    a2(2) = poslin(W2 a2(1)) = poslin([1 -0.5; -0.5 1][3; 0]) = poslin([3; -1.5]) = [3; 0]

The first neuron wins, so the input is classified as a banana.
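The competition can be sketched as follows (ε = 0.5 as above; the poslin helper and the loop structure are mine):

```python
import numpy as np

def poslin(n):
    # Positive linear transfer function: max(0, n)
    return np.maximum(0.0, n)

eps = 0.5                             # must satisfy eps < 1/(S-1); here S = 2
W2 = np.array([[1.0, -eps],
               [-eps, 1.0]])

a2 = np.array([4.0, 2.0])             # first-layer output for the rough banana
while True:
    a_next = poslin(W2 @ a2)
    if np.array_equal(a_next, a2):    # stop once the layer has converged
        break
    a2 = a_next

print(a2)                             # [3. 0.] -- neuron 1 (banana) wins
```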
Hopfield Network
Properties of Hopfield Network

• The neurons are initialized with the input vector
• The network iterates until the output converges
• The network has converged when one of the prototypes appears at the output; that is, if the network works correctly, it produces one of the prototype patterns
• The satlins transfer function is used
• Computing W in general is complex, but for now we choose it so that the element in which the prototypes differ is strengthened, while the elements in which they are similar are weakened
Apple/Banana Problem
    W = [1.2 0 0; 0 0.2 0; 0 0 0.2],   b = [0; 0.9; -0.9]

    a1(t+1) = satlins(1.2 a1(t))
    a2(t+1) = satlins(0.2 a2(t) + 0.9)
    a3(t+1) = satlins(0.2 a3(t) - 0.9)

Test ("rough" banana):

    a(0) = [-1; -1; -1]
    a(1) = [-1; 0.7; -1]
    a(2) = [-1; 1; -1]
    a(3) = [-1; 1; -1]    (banana)
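The Hopfield iteration can be reproduced directly (the satlins helper is mine; W and b are taken from the slide):

```python
import numpy as np

def satlins(n):
    # Symmetric saturating linear transfer function: clip to [-1, 1]
    return np.clip(n, -1.0, 1.0)

W = np.diag([1.2, 0.2, 0.2])
b = np.array([0.0, 0.9, -0.9])

a = np.array([-1.0, -1.0, -1.0])      # initialize with the "rough" banana
for t in range(3):
    a = satlins(W @ a + b)
    print("a(%d) =" % (t + 1), a)
# converges to [-1. 1. -1.], the banana prototype
```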
Summary
• Perceptron
– Feedforward Network
– Linear Decision Boundary
– One Neuron for Each Decision
• Hamming Network
– Competitive Network
– First Layer – Pattern Matching (Inner Product)
– Second Layer – Competition (Winner-Take-All)
– # Neurons = # Prototype Patterns
• Hopfield Network
– Dynamic Associative Memory Network
– Network Output Converges to a Prototype Pattern
– # Neurons = # Elements in each Prototype Pattern