Traffic Sign Recognition Using Artificial Neural Network



Traffic Sign Recognition Using Artificial Neural Network
Radi Bekker
101100
Motivation for ANN
• Von Neumann machines are based on serial processing: one processing unit performing many operations per second.
• Neural networks are based on the parallel architecture of animal brains: slow, parallel, and complex, which makes them good at pattern matching.
• Pattern matching can solve many problems for which no algorithm exists, or for which the algorithms are very complicated.
The human brain
• Consists of about 10^11 neurons.
• Neurons are connected by around 10^15 connections.
• Neurons send impulses to each other through these connections, and these impulses make the brain work.
• Dendrites: responsible for input.
• Axon: responsible for output.
[Diagram of a biological neuron: cell body, nucleus, dendrites, axon, and synapse.]
Artificial neural network (ANN)
• The network is constructed from layers of artificial neurons.
• There are an input layer, an output layer, and any number of hidden (internal) layers.
• Each neuron in one layer is connected to every neuron in the next layer.
Artificial Neuron
• Many inputs, like dendrites.
• One output, like an axon.
• Each neuron receives a signal from the neurons in the previous layer.
• The weighted inputs are summed and passed through a limiting function which scales the output to a fixed range of values.
• The output of the limiter is then broadcast to all of the neurons in the next layer (see the sketch below).
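A minimal sketch of this computation in Python with NumPy, assuming a sigmoid as the limiting function; the input, weight, and bias values are illustrative only and are not taken from the original project:

```python
import numpy as np

def sigmoid(x):
    # Limiting function: scales the weighted sum into the fixed range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    # Sum the weighted inputs (one weight per incoming connection),
    # then pass the sum through the limiting function.
    return sigmoid(np.dot(weights, inputs) + bias)

# Illustrative values only: a neuron with three inputs
x = np.array([0.2, 0.8, 0.5])
w = np.array([0.4, -0.6, 0.9])
print(neuron_output(x, w, bias=0.1))
```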
Training- Back Propagation-1
• The most common learning algorithm is called Back Propagation (BP).
• A BP network learns by example: we must provide a learning set that consists of some input examples and the known-correct output for each case.
• This method adjusts the weights between the neurons to solve a particular problem.
• The BP learning process works in small iterative steps: one of the example cases is applied to the network, and the network produces some output based on the current state of its synaptic weights.
• This output is compared to the known-good output, and a mean-squared error signal is calculated.
Training- Back Propagation-2
• The error value is then propagated backwards through the network, and small changes are made to the weights in each layer.
• The whole process is repeated for each of the example cases, then back to the first case again, and so on.
• The cycle is repeated until the overall error value drops below some pre-determined threshold.
• At this point we say that the network has learned the problem "well enough" (a sketch of this loop follows below).
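A compact sketch of the back-propagation loop described above, for a network with a single hidden layer and sigmoid activations. For brevity it updates the weights on the whole training set at once, whereas the slides describe applying one example case at a time; the hidden-layer size, learning rate, and error threshold are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, Y, hidden=10, lr=0.5, threshold=0.01, max_cycles=2000):
    # X: (examples, inputs), Y: (examples, known-correct outputs)
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))   # input -> hidden
    W2 = rng.normal(scale=0.1, size=(hidden, Y.shape[1]))   # hidden -> output
    for cycle in range(max_cycles):
        # Forward pass: output given the current state of the weights
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        # Mean-squared error between network output and known-good output
        err = np.mean((Y - O) ** 2)
        if err < threshold:
            break  # the network has learned the problem "well enough"
        # Backward pass: propagate the error and make small weight changes
        dO = (O - Y) * O * (1 - O)
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH
    return W1, W2
```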
My Network
• Input layer: 10,000 neurons.
• Hidden layers: 3 hidden layers with 10 neurons each.
• Output layer: 16 neurons for 16 traffic signs.
• Training: the network was trained for 2,000 cycles (see the sketch below).
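This topology can be written down directly as a list of layer sizes. The sketch below is an illustrative reconstruction in Python/NumPy of the 10,000-10-10-10-16 network, not the original implementation:

```python
import numpy as np

# Layer sizes from the slide: 10,000 inputs, three hidden layers of 10 neurons,
# and 16 outputs (one per traffic sign class).
LAYER_SIZES = [10_000, 10, 10, 10, 16]

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(x, weights):
    # Every neuron in one layer is connected to every neuron in the next layer.
    for W in weights:
        x = 1.0 / (1.0 + np.exp(-(x @ W)))   # sigmoid limiter at each layer
    return x  # 16 values, one per sign class
```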
Image Filtering
• Resizing the image to 100x100 pixels.
• Converting the image to black and white.
• Rescaling the image matrix to values between 0 and 1.
• Constructing a 10,000-element vector from the columns of the image matrix (see the sketch below).
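One possible version of this preprocessing pipeline, using Pillow and NumPy; the original work may have used different tools, and the grayscale conversion here stands in for the slide's "black and white" step, so treat this as an assumption-laden sketch:

```python
import numpy as np
from PIL import Image

def filter_image(path):
    # Resize to 100x100 and convert to black and white (grayscale)
    img = Image.open(path).convert("L").resize((100, 100))
    # Rescale pixel values to the range [0, 1]
    mat = np.asarray(img, dtype=np.float64) / 255.0
    # Stack the columns of the 100x100 matrix into a 10,000-element vector
    return mat.flatten(order="F")
```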
Results
• Good results for trained images.
• Bad results for real-world photographs.
• When the network was constructed to identify only 5 signs, better results were achieved.
• Contrast and brightness adjustments in some cases contributed to correct sign recognition.
Conclusions
• ANNs are good for small problems and small networks.
• ANNs are bad for big networks.
• A bigger network needs more training time.
• It is hard to find good network configurations.
• ANNs are a good method for solving hard computational problems.
• More research on the human brain could help in constructing better ANNs.