Multi-Level Neurons
L. Manevitz 2000.
More Expressive Networks
Expand Networks:
• Simplest: Forward Composition of Networks
• Expressibility: Full
• Training (Learning): Problematic
XOR Representation
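The transcript drops the slide's diagram; below is a minimal sketch of one standard two-level representation of XOR with threshold units (the weights and thresholds here are illustrative choices, not read off the slide):

def step(s, threshold):
    """Threshold (McCulloch-Pitts) neuron: fires iff the input reaches the threshold."""
    return 1 if s >= threshold else 0

def xor(x1, x2):
    # Hidden level: an OR unit and an AND unit over the same inputs.
    h_or  = step(x1 + x2, 1)    # fires if at least one input is on
    h_and = step(x1 + x2, 2)    # fires only if both inputs are on
    # Output level: "OR but not AND" recovers XOR.
    return step(h_or - h_and, 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0

No single-level (perceptron) network can do this, which is why the composed network is needed.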
Problem
• How to award “blame” to assess appropriate modification of weights?
• Perceptron Approach: Unclear
• Adaline Approach: Gradient of Errors
– Problem: Not differentiable!
– Solution: Change Neuron to Sigmoid, etc.
Set Up Notation
[Diagram: a two-level feed-forward network. The input $x_j$ feeds the first level through weights $w_{ji}$; the second level feeds the output through weights $v_{ik}$. Hidden unit $i$ computes $F(\sum_j w_{ji} x_j)$.]
What is the Difficulty?
• Easy to run “forward” (see the sketch below); the difficulty is assigning blame backward to the interior weights.
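A minimal sketch of the “easy” forward direction, in the notation of the slide above (NumPy; $F$ is taken to be the sigmoid, and the sizes and random weights are placeholders, not values from the slides):

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector x_j
W = rng.standard_normal((3, 4))   # first-level weights w_ji (row i holds the weights into hidden unit i)
V = rng.standard_normal((2, 3))   # second-level weights v_ik (row k holds the weights into output unit k)

h = sigmoid(W @ x)   # hidden unit i computes F(sum_j w_ji x_j)
y = sigmoid(V @ h)   # output unit k computes F(sum_i v_ik h_i)
print(y)

Running this is a handful of matrix-vector products; nothing in it tells us how to change W once y turns out wrong.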
Reason for Explosion of Interest
• Two coincident effects (around 1985–87):
– (Re-)discovery of mathematical tools and algorithms for handling large networks
– Availability (hurray for Intel and company!) of sufficient computing power to make experiments practical.
Some Properties of NNs
• Universal: Can represent and accomplish any task.
• Uniform: “Programming” is changing weights
• Automatic: Algorithms for Automatic Programming; Learning
New Neuron
• Replace the Step-function with a differentiable f(x).
• Most natural: f(x) approximates the Step-function; e.g. Sigmoid or Hyperbolic Tangent
• Note: Derivatives computed for future use
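For reference (the slide defers these to later use), the standard choices and their derivatives:

$\sigma(x) = \frac{1}{1+e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,(1-\sigma(x))$

$\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$

Both derivatives are cheap to evaluate from the neuron's own output, which is exactly what back-propagation exploits.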
Replacement of Threshold Neurons with Sigmoid or Differentiable Neurons
[Figure: the threshold (step) activation alongside a sigmoid activation that smoothly approximates it.]
Universality
• McCulloch-Pitts: Adaptive Logic Gates; can represent any logic function
• Cybenko: Any continuous function can be approximated arbitrarily well by a three-level NN.
Perceptron
[Figure: a single trained neuron with weights $w_i$ on inputs $x_i$; it fires when $\sum_i w_i x_i$ exceeds a threshold, signaling “The letter A is in the receptive field.”]
• Pattern Identification
• (Note: Neuron is trained)
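A minimal sketch of the decision rule in the figure (the weights, threshold, and three-pixel “receptive field” are illustrative placeholders, not values from the slides):

import numpy as np

def perceptron_fires(x, w, threshold):
    """Fire (output 1) iff the weighted sum of the inputs reaches the threshold."""
    return int(np.dot(w, x) >= threshold)

# Hypothetical receptive field and a hand-picked weight vector.
x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.2, 0.7])
print(perceptron_fires(x, w, threshold=1.0))  # 1 -> "A in receptive field"

Training the perceptron means adjusting w (and the threshold) from labeled examples rather than picking them by hand as above.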
Feed Forward Network
[Figure: the same “A-detector”, now as a multi-level network; each level applies weights $w_i$ to its inputs $x_i$ and thresholds the weighted sums $\sum_i w_i x_i$ before the final output.]
Neural Networks (NN)
• What is it?
– A biologically inspired model, which tries to simulate the human nervous system
– Consists of elements (neurons) and connections between them (weights)
– Can be trained to perform complex functions (e.g. classifications) by adjusting the values of the weights.
Neural Networks (NN)
• How does it work?
– The input signal is multiplied by the weights, summed together, and then processed by the neuron
– The NN weights are updated through a training scheme (e.g. the Back-Propagation algorithm)
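A one-neuron illustration of the first point (sigmoid activation assumed; the numbers are made up):

import math

x = [0.5, -1.0, 0.25]   # input signal
w = [0.8, 0.1, -0.4]    # weights

s = sum(wi * xi for wi, xi in zip(w, x))   # multiplied by the weights and summed
y = 1.0 / (1.0 + math.exp(-s))             # processed by the (sigmoid) neuron
print(y)                                   # about 0.55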
Feed-Forward Networks
Step 1: Initialize Weights
Step 2: Feed the Input Signal forward
Step 3: Compute the Error Signal (difference between the NN output and the desired output)
Step 4: Feed the Error Signal backward and update the weights (in order to minimize the error)
Train the net over an input set until convergence occurs (a sketch of this loop follows below).
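A compact sketch of the four steps for a tiny two-level network trained on XOR (NumPy; the layer sizes, learning rate, seed, and iteration count are arbitrary choices, not from the slides):

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# XOR training set: inputs and desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)

# Step 1: initialize weights (and biases) with small random values.
W, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> hidden
V, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output

eta = 0.5
for epoch in range(20000):          # train over the input set until convergence
    # Step 2: feed the input signal forward.
    H = sigmoid(X @ W + b1)
    Y = sigmoid(H @ V + b2)

    # Step 3: compute the error signal (NN output minus desired output).
    E = Y - T

    # Step 4: feed the error signal backward and update the weights.
    dY = E * Y * (1 - Y)            # output delta, using sigmoid' = y(1 - y)
    dH = (dY @ V.T) * H * (1 - H)   # hidden delta, blame passed back through V
    V -= eta * H.T @ dY;  b2 -= eta * dY.sum(axis=0)
    W -= eta * X.T @ dH;  b1 -= eta * dH.sum(axis=0)

print(np.round(Y, 2))               # typically approaches [[0], [1], [1], [0]]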
Derivation of Back Prop
$\frac{\partial E_p}{\partial w_{ij}} = \frac{\partial E_p}{\partial \mathrm{net}_j} \cdot \frac{\partial \mathrm{net}_j}{\partial w_{ij}}$
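The chain rule then unwinds in the standard textbook way (this continuation is not transcribed from the slides): since $\mathrm{net}_j = \sum_i w_{ij}\, o_i$, we have $\frac{\partial \mathrm{net}_j}{\partial w_{ij}} = o_i$, and defining $\delta_j = -\frac{\partial E_p}{\partial \mathrm{net}_j}$ yields the weight update $\Delta w_{ij} = \eta\, \delta_j\, o_i$, with $\delta_j$ computed directly at output units and passed backward through the weights at interior units.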