Fuzzy Hopfield Tank Presentation - California State University

Neural Networks for Optimization
Bill Wolfe
California State University Channel Islands
Neural Models
• Simple processing units
• Lots of them
• Highly interconnected
• Exchange excitatory and inhibitory signals
• Variety of connection architectures/strengths
• “Learning”: changes in connection strengths
• “Knowledge”: connection architecture
• No central processor: distributed processing
Simple Neural Model
• a_i : activation
• e_i : external input
• w_ij : connection strength
Assume w_ij = w_ji (“symmetric” network): W = (w_ij) is a symmetric matrix.
Net Input
net_i = Σ_j w_ij a_j + e_i

net = W a + e
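A minimal numpy sketch of this net-input computation; the 3-neuron weights, activations, and external inputs are made-up values for illustration.

```python
import numpy as np

# Minimal sketch of the net-input computation: net = W a + e.
# The 3-neuron weights, activations, and inputs are made up for illustration.
W = np.array([[0.0, -1.0, 0.5],
              [-1.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # symmetric: W[i, j] == W[j, i]
a = np.array([0.2, 0.7, 0.5])     # current activations
e = np.array([0.1, 0.1, 0.0])     # external inputs

net = W @ a + e                   # net_i = sum_j w_ij * a_j + e_i
print(net)
```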
Dynamics
• Basic idea:
net_i > 0 ⇒ a_i increases
net_i < 0 ⇒ a_i decreases

da_i/dt = net_i, i.e. da/dt = net
Energy
E = -(1/2) a^T W a - e^T a

-∇E = -(∂E/∂a_1, ..., ∂E/∂a_n)
    = (Σ_j w_1j a_j + e_1, ..., Σ_j w_nj a_j + e_n)
    = (net_1, ..., net_n)
    = net
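A quick numerical check, under an arbitrary made-up symmetric W and random a, e, that the gradient of this energy really is -net:

```python
import numpy as np

# Sketch: the energy E(a) = -1/2 a^T W a - e^T a has gradient -net,
# so following net = W a + e moves the state downhill in E.
# W, a, e are arbitrary small examples (W made symmetric on purpose).
rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2              # enforce w_ij = w_ji
np.fill_diagonal(W, 0.0)
e = rng.normal(size=n)
a = rng.uniform(0, 1, size=n)

def energy(a):
    return -0.5 * a @ W @ a - e @ a

net = W @ a + e

# central-difference numerical gradient of E for comparison
eps = 1e-6
num_grad = np.array([(energy(a + eps * np.eye(n)[i]) - energy(a - eps * np.eye(n)[i])) / (2 * eps)
                     for i in range(n)])
print(np.allclose(num_grad, -net, atol=1e-5))   # True: grad E = -net
```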
Lower Energy
• da/dt = net = -∇E ⇒ the dynamics seek lower energy
[Figure: energy surface over activation a, with net pointing downhill]
Problem: Divergence
[Figure: an unbounded energy surface; the activation can run off to infinity along directions of ever-decreasing energy]
A Fix: Saturation
0 ≤ a_i ≤ 1

da_i/dt = net_i (1 - a_i)(a_i)

• The net_i factor is the “lower energy” term: the state keeps moving downhill in E.
• The (1 - a_i)(a_i) factor is the “corner-seeking” term: it keeps the activation vector inside the hypercube boundaries and encourages convergence to the corners.

[Figure: energy along a single activation a restricted to 0 ≤ a ≤ 1]
Summary: The Neural Model
• a_i : activation, 0 ≤ a_i ≤ 1
• e_i : external input
• w_ij : connection strength, symmetric (w_ij = w_ji), W = (w_ij)

net_i = Σ_j w_ij a_j + e_i

da_i/dt = net_i (1 - a_i)(a_i)
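A minimal sketch pulling the summary together: simple Euler integration of the saturating dynamics. The weights, inputs, initial state, step size, and step count are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Sketch of the full update rule from the summary slide:
#   net_i = sum_j w_ij a_j + e_i,   da_i/dt = net_i * (1 - a_i) * a_i
# integrated with simple Euler steps.
def run_network(W, e, a0, dt=0.01, steps=5000):
    a = a0.copy()
    for _ in range(steps):
        net = W @ a + e
        a += dt * net * (1.0 - a) * a     # saturating, corner-seeking dynamics
        a = np.clip(a, 0.0, 1.0)          # guard against numerical overshoot
    return a

# tiny two-neuron mutual-inhibition example (made-up values):
# the neuron with the larger initial activation wins
W = np.array([[0.0, -1.0], [-1.0, 0.0]])
e = np.array([0.5, 0.5])
a0 = np.array([0.6, 0.4])
print(run_network(W, e, a0))   # roughly [1, 0]
```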
Example: Inhibitory Networks
• Completely inhibitory
– w_ij = -1 for all i ≠ j
– k-winner
• Inhibitory Grid
– neighborhood inhibition
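A sketch of the completely inhibitory k-winner case using the dynamics above. The external input e = k - 1/2 follows the “perfect k-winner performance” slide in the extra slides; the network size, initial activations, and integration settings are made-up choices.

```python
import numpy as np

# Sketch of a completely inhibitory k-winner network: w_ij = -1 for i != j,
# 0 on the diagonal.  With e = k - 1/2 the corner-seeking dynamics push the
# k neurons with the largest initial activations toward 1 and the rest toward 0.
n, k = 6, 2
W = -np.ones((n, n)) + np.eye(n)       # 0 diagonal, -1 everywhere else
e = np.full(n, k - 0.5)

rng = np.random.default_rng(1)
a = rng.uniform(0.3, 0.7, size=n)      # random initial activations

dt = 0.01
for _ in range(20000):
    net = W @ a + e
    a = np.clip(a + dt * net * (1 - a) * a, 0.0, 1.0)

print(np.round(a, 2))                  # roughly k activations near 1, the rest near 0
```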
Traveling Salesman Problem
• Classic combinatorial optimization problem
• Find the shortest “tour” through n cities
• n!/(2n) = (n-1)!/2 distinct tours (checked by brute force below)
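The tour count is easy to sanity-check by brute force on a tiny instance. In the sketch below the five city coordinates are made up; fixing the starting city and discarding mirror-image tours leaves (5-1)!/2 = 12 distinct tours.

```python
import itertools, math

# Brute-force check of the tour count and the shortest tour for a tiny,
# made-up 5-city instance.  Fixing the first city and dropping mirror images
# removes rotations and reflections, leaving n!/(2n) = (n-1)!/2 distinct tours.
cities = {'A': (0, 0), 'B': (3, 1), 'C': (4, 4), 'D': (1, 5), 'E': (-1, 2)}

def tour_length(order):
    pts = [cities[c] for c in order]
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

names = sorted(cities)
first, rest = names[0], names[1:]
tours = [(first,) + p for p in itertools.permutations(rest)]
tours = [t for t in tours if t[1] < t[-1]]          # drop mirror images

print(len(tours), "distinct tours")                  # (5-1)!/2 = 12
best = min(tours, key=tour_length)
print("shortest:", "".join(best), round(tour_length(best), 2))
```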
[Figure: two different tours through the same five cities A–E, e.g. ABCED and ABECD]
[Figure: TSP solution for 15,000 cities in Germany]
TSP: 50 City Example
[Figure: tours from several heuristics on a 50-city instance: Random, Nearest-City, 2-OPT, Centroid, Monotonic]
“An Effective Heuristic for the Traveling Salesman Problem,” S. Lin and B. W. Kernighan, Operations Research, 1973.
http://www.jstor.org/view/0030364x/ap010105/01a00060/0
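Of the heuristics in that comparison, nearest-city is the one the talk later relates to the network's behavior. A minimal standalone sketch, with made-up coordinates:

```python
import math

# Minimal sketch of the nearest-city (greedy) heuristic: repeatedly hop to
# the closest unvisited city.  The coordinates are made up for illustration.
cities = {'A': (0, 0), 'B': (3, 1), 'C': (4, 4), 'D': (1, 5), 'E': (-1, 2)}

def nearest_city_tour(start='A'):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(here, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_city_tour()
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("".join(tour), round(length, 2))
```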
Neural Network Approach
[Figure: a grid of neurons with rows = cities (A–D) and columns = time stops (1–4); each cell is one neuron]
Tours – Permutation Matrices
[Figure: a 0/1 permutation matrix on the grid, e.g. the tour CDBA]
Permutation matrices correspond to the “feasible” states.
Not Allowed
[Figure: a state with more than one active neuron in a row or column]
• Only one city per time stop
• Only one time stop per city
⇒ Inhibitory rows and columns
[Figure: inhibitory connections within each row and each column of the grid]
Distance Connections:
Inhibit the neighboring cities in proportion to their distances.
[Figure: city C at a given time stop is inhibited by cities A, B, D at the adjacent time stops with strengths -d_AC, -d_BC, -d_DC]
putting it all together:
[Figure: the full connection scheme on the grid: inhibitory rows and columns plus distance-proportional inhibition (-d_AC, -d_BC, -d_DC) between adjacent time stops]
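A sketch of one way the pieces above could be wired into a single symmetric weight matrix over the grid of neurons (rows = cities, columns = time stops). The constraint-penalty strength A, the flattening convention, and the wrap-around notion of adjacent time stops are my own illustrative choices, not values from the talk.

```python
import numpy as np

# Sketch: build a symmetric weight matrix for the TSP network.
# Neuron (city i, time t) is flattened to index i*n + t.
# - row/column inhibition enforces one city per time stop and one stop per city
# - distance terms inhibit city pairs placed at adjacent time stops
# The penalty strength A is an illustrative assumption.
def tsp_weights(dist, A=2.0):
    n = dist.shape[0]
    W = np.zeros((n * n, n * n))
    idx = lambda city, t: city * n + t
    for i in range(n):
        for t in range(n):
            for j in range(n):
                for s in range(n):
                    if (i, t) == (j, s):
                        continue
                    w = 0.0
                    if i == j or t == s:                       # same row or column
                        w -= A
                    if i != j and (s - t) % n in (1, n - 1):   # adjacent time stops
                        w -= dist[i, j]
                    W[idx(i, t), idx(j, s)] = w
    return W

# tiny made-up 4-city distance matrix (symmetric, zero diagonal)
D = np.array([[0, 2, 3, 1],
              [2, 0, 4, 2],
              [3, 4, 0, 3],
              [1, 2, 3, 0]], dtype=float)
W = tsp_weights(D)
print(W.shape, np.allclose(W, W.T))   # (16, 16) True
```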
Research Questions
• Which architecture is best?
• Does the network produce:
– feasible solutions?
– high quality solutions?
– optimal solutions?
• How do the initial activations affect network performance?
• Is the network similar to “nearest city” or any other traditional heuristic?
• How does the particular city configuration affect network performance?
• Is there any way to understand the nonlinear dynamics?
typical state of the network before convergence
[Figure: activation levels on a 7 × 7 grid of neurons (cities A–G, time stops 1–7) before the network has converged]
“Fuzzy Readout”
[Figure: the partially converged activations are read out as a tour]
⇒ GAECBFD
Initial Phase
[Figures: fuzzy tour and neural activations]
Monotonic Phase
[Figures: fuzzy tour and neural activations]
Nearest-City Phase
[Figures: fuzzy tour and neural activations]
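One plausible way to implement the “fuzzy readout” used above to turn a partially converged activation matrix into a tour: walk the time stops and, at each one, take the highest-activation city not yet used, so the result is always a valid permutation. This is an interpretation of the slides, not necessarily the exact procedure from the talk; the activation matrix below is random placeholder data.

```python
import numpy as np

# Interpretation of the "fuzzy readout": for each time stop (column), pick the
# highest-activation city (row) not already placed in the tour, so the readout
# is always a valid permutation.  The activations here are random placeholders.
cities = list("ABCDEFG")
rng = np.random.default_rng(3)
act = rng.uniform(0, 1, size=(7, 7))   # rows = cities, columns = time stops

tour, used = [], set()
for t in range(act.shape[1]):
    order = np.argsort(-act[:, t])                 # cities by descending activation
    city = next(i for i in order if i not in used)
    used.add(city)
    tour.append(cities[city])

print("".join(tour))   # prints some 7-city tour (the slide's example readout was GAECBFD)
```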
Fuzzy Tour Lengths
[Plot: fuzzy tour length vs. iteration]
Average Results for n = 10 to n = 70 cities (50 random runs per n)
[Plot: average tour length vs. number of cities]
DEMO 2
Applet by Darrell Long
http://hawk.cs.csuci.edu/william.wolfe/TSP001/TSP1.html
Conclusions
• Neurons stimulate intriguing computational models.
• The models are complex, nonlinear, and difficult to analyze.
• The interaction of many simple processing units is difficult to visualize.
• The neural model for the TSP mimics some of the properties of the nearest-city heuristic.
• Much work remains to be done to understand these models.
EXTRA SLIDES
Brain
• Approximately 10^10 neurons
• Neurons are relatively simple
• Approximately 10^4 fan out
• No central processor
• Neurons communicate via excitatory and inhibitory signals
• Learning is associated with modifications of connection strengths between neurons
Fuzzy Tour Lengths
[Plot: fuzzy tour length vs. iteration]
Average Results for n = 10 to n = 70 cities (50 random runs per n)
[Plot: average tour length vs. number of cities]
 0  1  1  1


  1 0  1  1
W 
 1  1 0  1


 1 1 1 0 


[Figure: phase plane (a1, a2) for the two-neuron inhibitory network, showing the eigenvectors of W]
v = (1, 1), λ = -1
v = (1, -1), λ = +1
 0  1

W  
 1 0 
with external input e = 1/2
a2
1
a1
1
Perfect K-winner Performance: e = k - 1/2
[Figure: a set of mutually inhibitory neurons, each receiving the same external input e]
[Plot: final vs. initial activations for e = 1/2 (k = 1)]
[Plot: final vs. initial activations for e = 1 + 1/2 (k = 2)]