Transcript Document

Šarūnas Stanskis




1943: Warren McCulloch and Walter Pitts introduced
models of neurological networks, recreated threshold
switches based on neurons, and showed that even
simple networks of this kind are able to calculate
nearly any logic or arithmetic function.
1949: Donald O. Hebb formulated the classical Hebbian
rule, which in its more generalized form represents
the basis of nearly all neural learning procedures.
1951: For his dissertation, Marvin Minsky developed
the neurocomputer Snark, which was already
capable of adjusting its weights automatically.
…
Biological neural network
Artificial neural network
Biological neuron
Artificial neuron
Artificial neuron model: inputs x1, …, xn with their
corresponding weights w1, …, wn feed a summation and
transfer function, y1 = f(∑ xi · wi), i = 1…n,
which produces the single output y1.
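As a minimal sketch of this computation (illustrative only, not code from the presentation), the short Python snippet below implements a single artificial neuron; the logistic sigmoid is assumed here as the transfer function f, and the input and weight values are made up.

import math

def sigmoid(z):
    # Logistic sigmoid transfer function: squashes z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, f=sigmoid):
    # y = f(sum over i of xi * wi)
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return f(weighted_sum)

# Hypothetical example: three inputs with their corresponding weights.
x = [0.5, -1.0, 2.0]
w = [0.4, 0.3, -0.1]
print(artificial_neuron(x, w))  # single output y1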
Perceptron model decision boundary
Activation (transfer) functions:
Linear
Logistic sigmoid
Threshold
Tangent sigmoid
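These activation functions are what make a neuron's output non-linear; without them, stacked layers collapse into a single linear mapping. The snippet below is only an illustrative sketch of the four functions listed above, with the threshold value and linear slope chosen arbitrarily.

import math

def linear(z, slope=1.0):
    # Linear (identity-like) activation; slope of 1 is an assumption.
    return slope * z

def logistic_sigmoid(z):
    # Smooth squashing of z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def threshold(z, theta=0.0):
    # Hard step: outputs 1 when the weighted sum reaches the threshold theta.
    return 1.0 if z >= theta else 0.0

def tangent_sigmoid(z):
    # Hyperbolic tangent: squashes z into (-1, 1).
    return math.tanh(z)

for z in (-2.0, 0.0, 2.0):
    print(z, linear(z), logistic_sigmoid(z), threshold(z), tangent_sigmoid(z))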
Feedforward ANNs include:
•Single layer perceptron
•Multilayer perceptron
•Radial basis function network
Multilayer feedforward ANN example
Recurrent ANN example
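A multilayer feedforward network like the example above simply chains the single-neuron computation layer by layer, with no connections running backwards. The sketch below shows only the forward pass of a hypothetical 3-4-2 network; the layer sizes, random initialization and sigmoid activations are all assumptions made for illustration.

import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # One fully connected layer: each neuron computes f(sum_i xi*wi + b).
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    # Small random weights and biases (illustrative initialization only).
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

# Hypothetical 3-4-2 network: 3 inputs, 4 hidden neurons, 2 outputs.
hidden_w, hidden_b = make_layer(3, 4)
output_w, output_b = make_layer(4, 2)

x = [0.2, -0.5, 0.9]
hidden = layer_forward(x, hidden_w, hidden_b)
y = layer_forward(hidden, output_w, output_b)
print(y)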


The purpose of ANN learning is to minimize the
error between the network's response and the
desired output.
ANN learning algorithms can be divided into
three categories:



Supervised Learning with a Teacher;
Supervised Learning with Reinforcement;
Unsupervised Learning.

The network is provided with a set of inputs
and the appropriate outputs for those inputs.

The network is provided with an evaluation of
its output given the input and alters the
weights to try to increase the reinforcement it
receives.

The network receives no external feedback but
has an internal criterion that it tries to fulfil
given the inputs that it faces.
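To make the stated purpose of learning concrete (minimizing the error between the network's response and the desired output), here is a minimal numeric sketch using mean squared error; the choice of error measure and all values are assumptions, not taken from the slides.

def mean_squared_error(outputs, targets):
    # Average squared difference between network response and desired output.
    return sum((y - t) ** 2 for y, t in zip(outputs, targets)) / len(outputs)

# Hypothetical network response vs. the desired (teacher-provided) output.
network_output = [0.8, 0.2, 0.1]
desired_output = [1.0, 0.0, 0.0]
print(mean_squared_error(network_output, desired_output))  # 0.03, to be minimized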
Backward propagation of errors





First apply the inputs to the network and work
out the output.
Next work out the errors for the neurons in the
output layer.
Change the output layer weights.
Calculate the errors for the hidden layer
neurons by using the following layer's neuron
errors.
Change the hidden layer weights.
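The five steps above can be assembled into a compact backpropagation sketch. This is a generic textbook-style version under assumed choices (sigmoid activations, squared error, online gradient descent, bias handled as an extra constant input), not the exact procedure from the presentation; the 2-3-1 network and XOR training data are made up for illustration.

import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    # Each row of `weights` is one neuron; its last weight is a bias
    # applied to a constant extra input of 1.0.
    x = inputs + [1.0]
    return [sigmoid(sum(xi * wi for xi, wi in zip(x, ws))) for ws in weights]

def forward(x, w_hidden, w_output):
    # Step 1: apply the inputs to the network and work out the output.
    hidden = layer(x, w_hidden)
    return hidden, layer(hidden, w_output)

def backprop_step(x, target, w_hidden, w_output, lr=0.5):
    hidden, output = forward(x, w_hidden, w_output)

    # Step 2: errors (deltas) for the output layer neurons.
    out_delta = [(t - o) * o * (1.0 - o) for o, t in zip(output, target)]

    # Step 4: errors for the hidden layer neurons, computed from the
    # following layer's deltas and weights (before any weights change).
    hid_delta = [h * (1.0 - h) * sum(d * w_output[k][j] for k, d in enumerate(out_delta))
                 for j, h in enumerate(hidden)]

    # Steps 3 and 5: change the output layer and hidden layer weights.
    for k, d in enumerate(out_delta):
        for j, h in enumerate(hidden + [1.0]):
            w_output[k][j] += lr * d * h
    for j, d in enumerate(hid_delta):
        for i, xi in enumerate(x + [1.0]):
            w_hidden[j][i] += lr * d * xi

def init(n_in, n_out):
    # Small random weights; one extra column per neuron for the bias.
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_out)]

# Hypothetical 2-3-1 network trained on XOR, purely for illustration.
w_hidden, w_output = init(2, 3), init(3, 1)
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]

for _ in range(10000):
    for x, t in data:
        backprop_step(x, t, w_hidden, w_output)

for x, t in data:
    print(x, t, forward(x, w_hidden, w_output)[1])

Note that in this sketch both error vectors are computed before any weights are changed, which matches the usual formulation even though the slide interleaves the weight changes with the error calculations.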


Finding a local minimum instead of the global minimum.
Neural network overfitting to the training data.
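As a toy illustration of the first problem, plain gradient descent on a simple non-convex function settles in different minima depending on where it starts; the function and starting points below are invented purely for demonstration.

def f(x):
    # A non-convex curve with a local minimum near x ≈ -1.1
    # and a lower, global minimum near x ≈ 1.3.
    return x**4 - 3 * x**2 - x

def df(x):
    # Derivative of f, used as the gradient.
    return 4 * x**3 - 6 * x - 1

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Starting on the left gets stuck in the local minimum,
# starting on the right finds the global one.
for start in (-2.0, 2.0):
    x = gradient_descent(start)
    print(f"start={start:+.1f} -> x={x:+.3f}, f(x)={f(x):+.3f}")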

ANN applications can be divided into two main
tasks:
Classification

Regression

More detailed examples:
•Image and signal analysis
•Character recognition
•Sales forecasting
•Industrial process control
•Customer research
•Data validation
•Risk management
•Target marketing
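Going back to the classification/regression split above: the difference is mainly in how the network's output is interpreted and trained. The tiny sketch below contrasts the two (made-up input and weights, logistic sigmoid assumed for the classifier); it is not meant as a complete model of either task.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(x, weights):
    # The same weighted-sum core underlies both tasks.
    return sum(xi * wi for xi, wi in zip(x, weights))

x = [0.7, 1.2]   # made-up input
w = [0.9, -0.4]  # made-up, already "trained" weights

# Regression: the output is read directly as a continuous value.
predicted_value = net(x, w)

# Classification: the output is squashed and thresholded into a class label.
probability = sigmoid(net(x, w))
predicted_class = 1 if probability >= 0.5 else 0

print(predicted_value, probability, predicted_class)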





What are two different categories of ANN
architectures?
Name at least two activation functions, and
why are they needed?
What are the ANN learning categories?
What problems can arise during ANN
learning?
Name some ANN applications.