On the capacity of unsupervised recursive neural networks for symbol processing
Prof. Dr. Barbara Hammer
Computational Intelligence Group
Institute of Computer Science
Clausthal University of Technology
Nicolas Neubauer, M.Sc.
Neural Information Processing Group
Department of Electrical Engineering &
Computer Science
Technische Universität Berlin
29. 8. 2006
Overview
Introduction
GSOMSD models and capacities
Main part
Implementing deterministic push-down automata in RecSOM
Conclusion
Unsupervised neural networks
Clustering algorithms
– Each neuron/prototype i has a weight vector wi
– Inputs x are mapped to a winning neuron i such that
d(x,wi) is minimal
– Training: Adapt weights to minimize some error function
Self-Organizing Maps (SOMs): Neurons arranged on lattice
– During training, adapt neighbours also
– After training, similar inputs are mapped to neighbouring neurons
– variants without fixed grid (e.g. neural gas)
Defined for finite-dimensional input vectors in Rn
– Question:
How to adapt the algorithms for inputs of non-fixed length,
like time series?
– E.g. time windowing, statistical analysis, ...
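As a quick, concrete reference for the winner search and neighbourhood update described above, here is a minimal NumPy sketch; the variable names, the Gaussian neighbourhood, and the learning-rate values are my choices, not taken from the talk.

```python
import numpy as np

def winner(x, W):
    """Index of the prototype w_i minimizing d(x, w_i) (squared Euclidean here)."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))

def som_step(x, W, grid, lr=0.1, sigma=1.0):
    """One SOM update: move the winner and its lattice neighbours towards x."""
    i = winner(x, W)
    # neighbourhood strength decays with lattice distance to the winner
    h = np.exp(-((grid - grid[i]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return W + lr * h[:, None] * (x - W)

# toy usage: 9 prototypes on a 3x3 lattice, 2-dimensional inputs
grid = np.array([(r, c) for r in range(3) for c in range(3)], dtype=float)
W = np.random.rand(9, 2)
for x in np.random.rand(100, 2):
    W = som_step(x, W, grid)
```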
Recursive processing of time series
• Process each input of time series separately
• Along with a representation C of the map’s response to the previous
input (the context)
– function rep: RN → Rr (N = number of neurons)
– Ct = rep(d1(t-1), ..., dN(t-1))
• Neurons respond not only to input, but to context also
– Neurons require additional context weights c
– distance of neuron i at timestep t:
di(t) = αd(xt,wi) + βdr(Ct,ci)
[Figure: recursive processing of a time series; inputs x1, x2, ..., xt are processed one at a time, and rep turns each step's activations into the next context (Cinit, C2, ..., Ct).]
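A minimal sketch of this recursion, assuming L1 distances and leaving the context function rep as a parameter; the concrete choices for rep are exactly what distinguishes the models on the following slides, and all function names here are mine.

```python
import numpy as np

def gsomsd_step(x_t, C_t, W, Cw, rep, alpha=1.0, beta=0.25, d=None, dr=None):
    """One recursive step: per neuron i, blend the input distance d(x_t, w_i)
    with the context distance d_r(C_t, c_i), then compress the whole
    activation profile into the next context via rep."""
    d = d or (lambda a, b: np.abs(a - b).sum(axis=-1))    # L1 distance by default
    dr = dr or (lambda a, b: np.abs(a - b).sum(axis=-1))
    dist = alpha * d(x_t, W) + beta * dr(C_t, Cw)         # d_i(t) for all neurons i
    return dist, rep(dist)                                # activations and context C_{t+1}

def process_sequence(xs, W, Cw, rep, C_init):
    """Feed a time series through the map one input at a time, threading the context."""
    C, winners = C_init, []
    for x in xs:
        dist, C = gsomsd_step(x, C, W, Cw, rep)
        winners.append(int(np.argmin(dist)))
    return winners
```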
Generalized SOM for Structured Data (GSOMSD)
[Hammer, Micheli, Sperduti, Strickert, 04]
Unsupervised recursive algorithms: Instances of GSOMSD
Varying in
– context function rep, distances d, dr
– and lattice (metric, hyperbolic, neural gas, ...)
Example
– Recursive SOM (RecSOM)
context stores all neurons’ activations
r = N, rep(d1,...,dN) = (exp(-d1), ..., exp(-dN))
each neuron needs N context weights! (memory ~ N^2)
– other models store
• properties of winning neuron
• previous activations only for single neurons
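For RecSOM in particular, rep keeps the whole activation profile, so r = N and every neuron needs N context weights; a short sketch under the same assumptions as the previous one (names are mine).

```python
import numpy as np

def recsom_rep(dist):
    """RecSOM context: the transformed activation exp(-d_i) of every neuron."""
    return np.exp(-dist)

# N neurons with n-dimensional inputs:
# N*n input weights, but N*N context weights -> memory grows like N^2
N, n = 50, 3
W  = np.random.rand(N, n)    # input weights w_i
Cw = np.random.rand(N, N)    # context weights c_i (one N-vector per neuron)

# toy usage, reusing process_sequence from the previous sketch:
# winners = process_sequence(np.random.rand(20, n), W, Cw, recsom_rep, C_init=np.zeros(N))
```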
Computational capacity
0111…1 != 1111…1
• Ability to keep state information:
Equivalence to Finite State Automata (FSA) / regular languages
• Decaying context will eventually „forget“ leading 0
([(…)]) != ([(…)]]
• Ability to keep stack information:
Equivalence to Pushdown Automata (PDA) / context-free languages
• Finite context cannot store potentially infinite stack
• … Ability to store at least two binary stacks:
Turing Machine Equivalence
connecting context models to the Chomsky hierarchy
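A toy illustration (not any specific model from the table below) of why a decaying context falls short of FSA power: with a contractive context update, the trace of the leading symbol shrinks geometrically and eventually drops below any fixed resolution, so 0111…1 and 1111…1 become indistinguishable.

```python
# Illustration: with a leaky/contractive context C_t = lam * C_{t-1} + x_t (lam < 1),
# the influence of the first symbol decays as lam**t.
lam = 0.5

def context(bits, lam=lam):
    C = 0.0
    for b in bits:
        C = lam * C + b
    return C

for T in (5, 20, 60):
    a = context([0] + [1] * (T - 1))   # 0111...1
    b = context([1] * T)               # 1111...1
    print(T, abs(a - b))               # difference = lam**(T-1), vanishing for large T
```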
Why capacity matters
• Explore dynamics of algorithms in detail
• Distinguish power of different models
– different contexts within GSOMSD
e.g., justify huge memory costs of RecSOM compared to other models
– to other approaches
e.g., supervised recurrent networks
• Supervised recurrent networks: Turing machine equivalence
– in exponential time
for sigmoidal activation functions
[Kilian/Siegelmann ’96]
– in polynomial time
for semilinear activation functions
[Siegelmann/Sontag ’95]
Various recursive models and their capacity
• TKM [Chappell/Taylor,93]: context = neuron itself, encoding = input space, lattice = all, capacity < FSA
• RSOM [Koskela/Varsta/Heikkonen,98]: context = neuron itself, encoding = input space, lattice = all, capacity < FSA
• MSOM [Hammer/Strickert,04]: context = winner content, encoding = input space, lattice = all, capacity = FSA
• SOMSD [Hagenbuchner/Sperduti/Tsoi,03]: context = winner index, encoding = index space, lattice = SOM/HSOM, capacity = FSA
• RecSOM [Voegtlin,02]: context = exp(all activations), encoding = activation space, lattice = all, capacity ≥ PDA*
[Strickert/Hammer,05]
* for WTA semilinear context
Overview
Introduction
GSOMSD models and capacities
Main part
Implementing deterministic push-down automata in RecSOM
Conclusion
Goal: Build a Deterministic PDA
Using
• the GSOMSD recursive equation di(t) = αd(xt,wi) + βdr(Ct,ci)
• L1 distances for d, dr (i.e., d(a,b) = |a-b|)
• parameters
– α=1
– β=¼
• modified RecSOM context:
– instead of the original exp(-di): max(1-di, 0)
• similar overall shape
• easier to handle analytically
– additionally, winner-takes-all
• required at one place in the proof...
• makes life easier overall, however
[Figure: semilinear vs. exponential activation function; max(1-x, 0) and exp(-x) plotted for x from 0 to 1.4.]
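A sketch of this modified context, assuming the semilinear activation max(1-d, 0) followed by winner-takes-all, so that only the best-matching neuron keeps a nonzero entry; the helper names are mine.

```python
import numpy as np

def semilinear(dist):
    """Semilinear activation max(1 - d, 0): similar shape to exp(-d), but piecewise linear."""
    return np.maximum(1.0 - dist, 0.0)

def wta_semilinear_rep(dist):
    """Modified RecSOM context: semilinear activation, then winner-takes-all --
    only the best-matching neuron keeps its activation, all others are set to 0."""
    act = semilinear(dist)
    out = np.zeros_like(act)
    i = int(np.argmin(dist))
    out[i] = act[i]
    return out

# compare the two activation profiles shown on the slide
xs = np.linspace(0.0, 1.4, 8)
print(np.round(np.exp(-xs), 3))
print(np.round(semilinear(xs), 3))
```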
Three levels of abstraction
1. Layers
temporal („vertical“) grouping → feed-forward architecture
~ weight space!
(not lattice)
2. Operators
functional grouping → defined operations
3. Phases
combining operators → automaton simulations
First level of abstraction: Feed-forward
• One-dimensional input weights w
• Encoding function enc: Σ → Nl
– input symbol σi → series of inputs (i, ε, 2ε, 3ε, …, (l-1)ε)
with ε >> max(i) = |Σ|
• Each neuron is active for at most one component of enc
– resulting in l layers of neurons
• In layer l, we know that only neurons from layer l-1 have been
active, i.e. are represented >0 in context
– pass on activity from layer to layer
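A sketch of enc under these assumptions; the concrete value of ε and the number of layers l are placeholders.

```python
def enc(i, l, eps):
    """Expand input symbol sigma_i into l scalar inputs: (i, eps, 2*eps, ..., (l-1)*eps).
    With eps >> |Sigma|, each neuron can be tuned to exactly one component,
    which slices the network into l feed-forward layers."""
    return [float(i)] + [k * eps for k in range(1, l)]

# e.g. an alphabet of size 2 and a 4-layer construction, eps chosen >> |Sigma| = 2
print(enc(0, l=4, eps=100.0))   # [0.0, 100.0, 200.0, 300.0]
print(enc(1, l=4, eps=100.0))   # [1.0, 100.0, 200.0, 300.0]
```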
Feed-forward: Generic Architecture
• 2 states – S={a,b}
• 2 inputs – Σ={σ0,σ1}
Simulation of a state transition function δ: S × Σ → S
(weights written as w|c = input weight | context weight)
• Input layer: one neuron per (input, state) pair: σ0,a / σ1,a / σ0,b / σ1,b
– weights e.g. 0|(…,1,0), 1|(…,1,0), 0|(…,0,1), 1|(…,0,1)
– encoding input × state:
get the input via the input weight,
get the previous state via the context weight
• "Hidden" layers: auxiliary neurons (aux1, aux2, ...) with weights ε|(1,1,0,0), ε|(0,0,1,1), ..., (l-1)ε|(…)
– arbitrary intermediate computations, passing activity on from layer to layer
• Output layer: one neuron per state, input weight (l-1)ε
– network represents state a ⇔ neuron a active, C = (0,0,...,1,0)
– network represents state b ⇔ neuron b active, C = (0,0,...,0,1)
[Figure: sample architecture for S = {a, b}, Σ = {σ0, σ1}; raw symbols 0, 1 enter via enc.]
Second level of abstraction: Operators
• We might be finished:
in fact, for the FSA case, we are
• However, what about the stack?
– it looks like γ0 γ1 γ1 ...
– how to store potentially infinite symbol sequences?
• General idea:
1. Encode the stack in the winner neuron's activation
2. Then build 'operators' to read, modify, or copy the stack
by changing the winner's activation
Encoding the stack in neurons' activations
To save a sequence of stack symbols within the map,
• turn γ0 γ1 γ1 into the binary sequence α = 011
• map binary sequences to numbers in [0,1) via f4:
– f4(□) = 0
– f4(0) = ¼
– f4(1) = ¾
– f4(01) = ¼ + 3/16
– f4(011) = ¼ + 3/16 + 3/64
[Figure: number line from 0 to 1 (ticks at 0, 0.125, 0.25, ..., 1) showing where f4 places these sequences.]
• stack operations directly on the code s = f4(α) (sketched in code below):
– push(s, γ1) = ¼s + ¾
– pop(s, γ0) = (s - ¼)·4
• Encode stack in activation: a = 1 - ¼s
– push(a, γ1) = 13/16 - 1/16·s
– pop(a, γ0) = 5/4 - s
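A sketch of the fractional encoding f4 and of the push/pop formulas, both on the code s and on the activation a = 1 - ¼·s; the function names are mine, and I assume the first digit of α is the top of the stack, which is consistent with the values on the slide.

```python
def f4(alpha):
    """Encode a binary stack word (top symbol first) as a number in [0, 1):
    digit 0 contributes 1/4, digit 1 contributes 3/4, scaled by 4**-position."""
    return sum((0.25 if b == 0 else 0.75) * 0.25 ** k for k, b in enumerate(alpha))

def push_s(s, gamma):        # push symbol gamma (0 or 1) onto the encoded stack s
    return 0.25 * s + (0.25 if gamma == 0 else 0.75)

def pop_s(s, gamma):         # pop symbol gamma off the top of the encoded stack s
    return (s - (0.25 if gamma == 0 else 0.75)) * 4.0

# the same operations expressed on the activation a = 1 - s/4
def act(s):     return 1.0 - 0.25 * s
def push_a(s):  return 13.0 / 16.0 - s / 16.0     # push(a, gamma_1) from the slide
def pop_a(s):   return 5.0 / 4.0 - s              # pop(a, gamma_0)  from the slide

s = f4([0, 1, 1])                                 # stack gamma_0 gamma_1 gamma_1
assert abs(s - (0.25 + 3 / 16 + 3 / 64)) < 1e-12
assert abs(push_s(pop_s(s, 0), 0) - s) < 1e-12    # pop then re-push restores the code
assert abs(act(push_s(s, 1)) - push_a(s)) < 1e-12 # activation-space push matches
assert abs(act(pop_s(s, 0)) - pop_a(s)) < 1e-12   # activation-space pop matches
```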
Operators
• COPY
– copy activation into next layer
• TOP
– identify top stack symbol
• OR
– get activation of active neurons (if any)
• PUSH
– modify activation for push
– push(a, γ1) = 13/16 - 1/16·s
• POP
– modify activation for pop
– pop(a, γ0) = 5/4 - s
[Figure: activation scale from 0 to 1 on which the operators act.]
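The stack-manipulating operators, sketched as plain functions on the winner's activation a, assuming a = 1 - ¼·s as above; reading the top symbol by thresholding is my choice of implementation, not necessarily the construction's.

```python
def copy_op(a):
    """COPY: pass the activation unchanged into the next layer."""
    return a

def top_op(a):
    """TOP: read the top stack symbol from a = 1 - s/4.
    s in [1/4, 1/2) -> top gamma_0,  s in [3/4, 1) -> top gamma_1,  s = 0 -> empty stack."""
    s = 4.0 * (1.0 - a)
    if s == 0.0:
        return None                      # empty stack
    return 0 if s < 0.5 else 1

def push_op(a, gamma):
    """PUSH: modify the activation so that the encoded stack gains symbol gamma on top."""
    s = 4.0 * (1.0 - a)
    return 1.0 - 0.25 * (0.25 * s + (0.25 if gamma == 0 else 0.75))

def pop_op(a):
    """POP: modify the activation so that the encoded stack loses its top symbol."""
    s = 4.0 * (1.0 - a)
    top = 0 if s < 0.5 else 1
    return 1.0 - 0.25 * ((s - (0.25 if top == 0 else 0.75)) * 4.0)

a = 1.0                                  # empty stack
a = push_op(a, 0); a = push_op(a, 1)     # stack is now gamma_1 on top of gamma_0
assert top_op(a) == 1
a = pop_op(a)
assert top_op(a) == 0
```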
Third abstraction: Phases
Sets used in the construction (generic element / examples):
• S – states; generic element s; examples: a, s(uccess), f(ail)
• Σ – input alphabet; generic element σ; examples: ( ) [ ] ε
• Γ – stack alphabet; generic element γ; examples: ( [ □
• U – stack actions; examples: push(, push[, pop(, pop), do nothing
Phases (task; input → output; required operators):
• Finalize – collect all results leading to the same state; U×S → S; OR, COPY
• Execute – manipulate the stack where needed; U×S → U×S; PUSH,COPY / POP,COPY
• Merge – collect all states resulting in a common stack & state; S×Σ×Γ → U×S; OR, COPY
• Separate – read top stack symbol; S×Σ → S×Σ×Γ; TOP
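To make concrete what the four phases together have to compute, here is a plain Python sketch of the deterministic PDA for well-nested brackets over ( ) [ ], operating on the f4-encoded stack from before. This is ordinary sequential code, not the network construction itself, and acceptance by empty stack in state a is my convention.

```python
# Deterministic PDA for well-nested brackets, stack kept as the code s = f4(alpha).
OPEN = {'(': 0, '[': 1}          # stack symbols gamma_0, gamma_1
CLOSE = {')': 0, ']': 1}

def step(state, s, sym):
    """One transition, loosely mirroring the phases: read the top symbol (Separate),
    pick the stack action and successor state (Merge), apply it (Execute)."""
    if state == 'f':                         # fail is absorbing
        return 'f', s
    if sym in OPEN:                          # push the matching stack symbol
        return 'a', 0.25 * s + (0.25 if OPEN[sym] == 0 else 0.75)
    top = None if s == 0.0 else (0 if s < 0.5 else 1)
    if top is None or top != CLOSE[sym]:     # mismatch or empty stack -> fail
        return 'f', s
    return 'a', (s - (0.25 if top == 0 else 0.75)) * 4.0    # pop

def accepts(word):
    state, s = 'a', 0.0                      # start in state a with an empty stack
    for sym in word:
        state, s = step(state, s, sym)
    return state == 'a' and s == 0.0         # Finalize: success iff stack empty

assert accepts('([])()')
assert not accepts('([)]')
assert not accepts('(((')
```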
The final architecture
[Figure: the complete network architecture, assembling the four phases layer by layer.]
Overview
Introduction
GSOMSD models and capacities
Main part
Implementing deterministic push-down automata in RecSOM
Conclusion
Conclusions
• RecSOM: stronger computational capacity than SOMSD/MSOM
• Does this mean it's worth the cost?
– The simulations are not learnable with Hebbian learning:
practical relevance questionable
– Moreover, the elaborate (and costly) context rather hinders the simulations:
too much context results in a lot of noise
– Maybe better: simpler models, slightly enhanced,
for example MSOM or SOMSD with a context variable
indicating the last winner's activation
• Turing machines also possible?
– Storing two stacks in a real number is possible
– Reconstructing two stacks from a real number is hard,
particularly when using only differences
– One may have to give up constant-size simulations
– A different representation of the stacks may be required
Thanks
Aux slide: PDA definition
Aux slide: PDA definition suitable for map construction