
Artificial Neural Networks
Group Member:
Aqsa Ijaz
Sehrish Iqbal
Zunaira Munir
What is an ANN?
The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as a human-brain-like system consisting of a large number of interconnected processing units.
About Human Brain
• The human brain is composed of about 100 billion nerve cells called neurons. They are connected to thousands of other cells by axons. Inputs from the sensory organs are accepted; these inputs create electric impulses, which quickly travel through the neural network.
• A neuron can then send the message on to other neurons to handle the issue, or not send it forward.
About Human Brain…
• Each neuron can connect with up to 200,000 other neurons. Neurons enable us to remember, recall, think, and apply previous experiences to our every action.
• The power (outcome) of the human mind comes from these networks of neurons and from learning.
Artificial Neural Networks
There are two Artificial Neural Network topologies
• FeedForward
• Feedback.
FeedForward ANN
The information flow is unidirectional. A unit sends information to another unit from which it does not receive any information. There are no feedback loops. Feedforward ANNs are used in pattern generation, recognition, and classification. They have fixed inputs and outputs.
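To make the flow concrete, here is a minimal Python sketch (our own illustration, not from the slides) of a feedforward pass through a tiny 2-input, 2-hidden-unit, 1-output network with made-up weights; information moves strictly forward, with no feedback loops.

    import numpy as np

    def step(v):
        # simple threshold activation
        return (v > 0).astype(float)

    # made-up weights, for illustration only
    W_hidden = np.array([[0.5, -0.6],    # weights into hidden unit 1
                         [0.3,  0.8]])   # weights into hidden unit 2
    W_output = np.array([1.0, -1.0])     # weights into the single output unit

    def feedforward(x):
        # information flows one way: input -> hidden -> output
        hidden = step(W_hidden @ x)
        return step(W_output @ hidden)

    print(feedforward(np.array([1.0, 0.0])))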
FeedBack ANN
Here, feedback loops are allowed. Feedback ANNs are used in content-addressable memories.
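As a hedged illustration of a content-addressable memory (a Hopfield-style network is our assumption here, not something the slides specify), the sketch below stores one ±1 pattern and recovers it from a corrupted copy by feeding the output back as the next input.

    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1])      # the stored pattern
    W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
    np.fill_diagonal(W, 0)                         # no self-connections

    state = np.array([1, 1, 1, -1, 1, -1])         # corrupted copy (second bit flipped)
    for _ in range(5):                             # feedback loop: output becomes the next input
        state = np.where(W @ state >= 0, 1, -1)

    print(state)                                   # settles back to the stored pattern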
Working of ANNs
Applications of Neural Networks
They can perform tasks that are easy for a human but difficult for a machine:
• Speech − speech recognition, speech classification, text-to-speech conversion.
• Telecommunications − image and data compression, automated information services, real-time spoken language translation.
• Software − pattern recognition in facial recognition, optical character recognition, etc.
• Industrial − manufacturing process control, product design and analysis.
Main Properties of an ANN
1. Parallelism
2. Learning
3. Storing
4. Recalling
5. Decision making
Main Properties of ANNs
Parallelism:
The capability of processing information through many simple units working in parallel.
Learning:
The capability of learning from examples and experience, with or without a teacher.
• Learn from experience
• Learn from samples
Storing:
The capability of storing its learnt knowledge.
Main Properties of ANNs…
Recalling:
The capability of recalling its learnt knowledge.
Decision making:
The capability of making particular decisions based upon
the acquired knowledge.
Learning paradigms
There are three major learning paradigms, each corresponding to a particular
abstract learning task. These are:
Supervised Learning
Unsupervised Learning
Reinforcement Learning
Supervised Learning:
Learn by examples what a face is in terms of structure, color, etc., so that after several iterations the network learns to define a face.
It involves a teacher that is more knowledgeable than the ANN itself. For example, the teacher feeds in some example data about which the teacher already knows the answers.
• The ANN comes up with guesses while recognizing. Then the teacher provides the ANN with the answers. The network then compares its guesses with the teacher’s “correct” answers and makes adjustments according to the errors.
Supervised Learning
(Diagram) The input and the training info (the desired/target outputs) are fed into the supervised learning system, which produces an output.
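A minimal Python sketch of this guess-compare-adjust cycle, using a made-up one-input data set (the "teacher" labels anything above 5 as +1); the concrete perceptron version appears later in the training section.

    # the teacher's example data with known answers (made up for illustration)
    examples = [(12.0, 1), (4.0, -1), (9.0, 1), (2.0, -1)]

    weight, bias, rate = 0.0, 0.0, 0.01            # rate is a small step size (our addition)

    for _ in range(20):                            # several iterations over the examples
        for x, answer in examples:                 # the teacher already knows the answers
            guess = 1 if weight * x + bias > 0 else -1
            error = answer - guess                 # compare the guess with the correct answer
            weight += rate * error * x             # adjust according to the error
            bias += rate * error

    print(weight, bias)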
Unsupervised Learning
It is required when there is no example data set with known answers.
Unsupervised Learning Application
Reinforcement Learning
This strategy is built on observation. The ANN makes a decision by observing its environment. Reinforcement learning allows the machine or software agent to learn its behaviour based on feedback from the environment.
Reinforcement Learning
(Diagram) The input and the training info, given as evaluations (rewards/penalties), are fed into the RL system, which produces an output.
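A minimal sketch (our own illustration, not from the slides) of learning from reward/penalty feedback: an agent with two possible actions tries them against a made-up environment and gradually prefers the action that is rewarded more often.

    import random

    def environment(action):
        # made-up environment: action 1 is rewarded more often than action 0
        return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

    value = [0.0, 0.0]                  # the agent's running estimate of each action's reward
    counts = [0, 0]

    for step in range(1000):
        # mostly exploit the best-looking action, occasionally explore
        action = random.randrange(2) if random.random() < 0.1 else value.index(max(value))
        reward = environment(action)    # feedback (reward/penalty) from the environment
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]   # incremental average

    print(value)                        # the estimate for action 1 should end up near 0.8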
Supervised Learning vs. Reinforcement Learning
1.3 Basics of a Neuron
Topology of a Neuron
Neuron:
A neuron (a perceptron) is a basic processing unit that performs a small part of the overall computational task of a neural network.
ANN
• ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain.
• The neurons are connected by links and interact with each other. The nodes can take input data and perform simple operations on the data. The result of these operations is passed to other neurons. The output at each node is called its activation or node value.
• Each link is associated with a weight. ANNs are capable of learning, which takes place by altering the weight values. The following illustration shows a simple ANN.
Modeling Artificial Neurons
Example
Topology of a Neuron…
Basic model of a neuron (Figure 1.8): inputs x0, x1, …, xn from the input layer are each multiplied by a connection weight w0, w1, …, wn and fed to a single output neuron. The neuron computes the weighted sum
v = Σ (i = 0 to n) wi · xi
and produces the output φ(v).
Figure 1.8 Basic model of a neuron.
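A minimal Python sketch of the model in Figure 1.8, assuming a sign function for the activation φ; the inputs and weights are made up for illustration.

    import numpy as np

    def phi(v):
        # activation function: a simple sign/threshold (our assumption for phi)
        return 1 if v >= 0 else -1

    x = np.array([1.0, 0.5, -2.0])     # inputs x0, x1, ..., xn (made up)
    w = np.array([0.4, -0.3, 0.1])     # connection weights w0, w1, ..., wn (made up)

    v = np.dot(w, x)                   # the adder: v = sum over i of wi * xi
    output = phi(v)                    # the neuron's output phi(v)
    print(v, output)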
Topology of a Neuron…
There are four components of a neuron:
• Connections
• Memory buffers (registers)
• An adder (a computing unit)
• An activation function
Components of a neuron…
• Connections are directed links (shown by arrows) through which the neuron receives inputs from other neurons.
• Each input is scaled (scaled up or scaled down) by multiplying it with a number called a weight (the connection weight).
• The value of a weight indicates the level/strength/degree of influence or importance that the input is to be given by the neuron.
• Weights are computed through a process called training.
Components of a neuron…
• An adder
For computing the weighted sum of the inputs (also known as the net input of the activation function).
• An activation function
For transforming the output of the adder. The value resulting from this operation is termed the output of the neuron.
The Perceptron
• Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical
Laboratory, a perceptron is the simplest neural network possible: a
computational model of a single neuron. A perceptron consists of one
or more inputs, a processor, and a single output.
Continued
• A perceptron follows the “feed-forward” model, meaning inputs are
sent into the neuron, are processed, and result in an output. In the
diagram above, this means the network (one neuron) reads from left to
right: inputs come in, output goes out.
Continued
• Step 1: Receive inputs.
• Say we have a perceptron with two inputs—let’s call them x1
and x2.
• Input 0: x1 = 12
Input 1: x2 = 4
Continued
• Step 2: Weight inputs.
• Each input that is sent into the neuron must first be
weighted, i.e. multiplied by some value (often a number
between -1 and 1). When creating a perceptron, we’ll
typically begin by assigning random weights. Here, let’s
give the inputs the following weights:
• Weight 0: 0.5
Weight 1: -1
Continued
• We take each input and multiply it by its
weight.
• Input 0 * Weight 0 ⇒ 12 * 0.5 = 6
• Input 1 * Weight 1 ⇒ 4 * -1 = -4
Continued
• Step 3: Sum inputs.
• The weighted inputs are then summed.
• Sum = 6 + -4 = 2
Continued
• Output = sign(sum) ⇒ sign(2) ⇒ +1
The Perceptron Algorithm:
• For every input, multiply that input by its
weight.
• Sum all of the weighted inputs.
• Compute the output of the perceptron based on
that sum passed through an activation function
(the sign of the sum).
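A minimal Python sketch of the algorithm above, reusing the worked numbers from the example (inputs 12 and 4 with weights 0.5 and -1); it is an illustration, not a reference implementation.

    def sign(value):
        # activation function: +1 for positive sums, -1 otherwise
        return 1 if value > 0 else -1

    def perceptron_output(inputs, weights):
        # multiply each input by its weight, sum, then pass the sum through the activation
        total = sum(x * w for x, w in zip(inputs, weights))
        return sign(total)

    print(perceptron_output([12, 4], [0.5, -1]))   # 12*0.5 + 4*(-1) = 2  ->  +1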
Training phase
• To train a neural network to answer correctly,
we’re going to employ the method of supervised
learning.
• With this method, the network is provided with
inputs for which there is a known answer. This way
the network can find out if it has made a correct
guess. If it’s incorrect, the network can learn from
its mistake and adjust its weights. The process is as
follows:
Steps
• Provide the perceptron with inputs for which there
is a known answer.
• Ask the perceptron to guess an answer.
• Compute the error. (Did it get the answer right or
wrong?)
• Adjust all the weights according to the error.
• Return to Step 1 and repeat!
Continued
• The perceptron’s error can be defined as the
difference between the desired answer and its
guess.
• ERROR = DESIRED OUTPUT - GUESS OUTPUT
Continued
• The error is the determining factor in how the perceptron’s
weights should be adjusted. For any given weight, what we are
looking to calculate is the change in weight, often called Δ
weight (or “delta” weight, delta being the Greek letter Δ).
• NEW WEIGHT = WEIGHT + ΔWEIGHT
Δ weight is calculated as the error multiplied by the input.
• ΔWEIGHT = ERROR * INPUT
Therefore:
• NEW WEIGHT = WEIGHT + ERROR * INPUT
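Putting the training steps and the weight-update rule together, here is a minimal Python sketch on a made-up data set whose known answer is +1 when x1 > x2 and -1 otherwise; it is a sketch of the procedure described above, not a definitive implementation.

    import random

    def sign(value):
        return 1 if value > 0 else -1

    def guess(inputs, weights):
        # weight the inputs, sum them, and pass the sum through the activation
        return sign(sum(x * w for x, w in zip(inputs, weights)))

    # training data with known answers: +1 when x1 > x2, else -1 (made up)
    data = []
    for _ in range(100):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        data.append((x, 1 if x[0] > x[1] else -1))

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]   # start with random weights

    for _ in range(100):                                       # return to step 1 and repeat
        for inputs, desired in data:
            error = desired - guess(inputs, weights)           # ERROR = DESIRED - GUESS
            weights = [w + error * x                           # NEW WEIGHT = WEIGHT + ERROR * INPUT
                       for w, x in zip(weights, inputs)]

    print(weights)   # after training, the weights should point roughly along the (+1, -1) direction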