Lecture notes - University of Sussex
Introduction to Neural Networks
Andy Philippides
Centre for Computational Neuroscience and Robotics (CCNR)
School of Cognitive and Computing Sciences / School of Biological Sciences
[email protected]
Spring 2003
Lectures -- 2 per week

Time           Day   Place
12:30 - 1:20   Mon   Arun 401
11:30 - 12:20  Wed   Arun 401
Seminar -- 1 per week

Group     Time          Day   Place
Group 1   3.00 - 3.50   Mon   Pev1 2D4
Group 2   4.00 - 4.50   Mon   Pev1 2D4
Group 3   2.00 - 2.50   Fri   Arun 404B
Group 4   3.00 - 3.50   Fri   Arun 404B
Office hour: Friday 12.30 - 1.30, BIOLS room 3D10
Lectures will be available online soon
Today’s Topics:
Course summary
Components of an artificial neural network
A little bit of maths
Single artificial neuron
Course Summary
The course will introduce the theory of several variants of artificial neural networks (ANNs) and discuss how they are used and trained in practice
Ideas will be illustrated using the example of ANNs used for function approximation, a very common use of ANNs which also shows the major concepts nicely. Idea:

Data -> Pre-processing -> Neural net model + training method -> Post-processing -> Function approximation
[Will not specifically be using NNs as brain models (Computational
Neuroscience)]
Topics covered
1. Introduction to neural networks
2. Basic concepts for network training
3. Single layer perceptron
4. Probability density estimation
5+6. Multilayer perceptron
7+8. Radial Basis Function networks
9+10. Support Vector machines
11+12. Pre-processing + Competitive Learning
13+14. Mixtures of Experts/Committee machines
15+16. Neural networks for robot control
Assessment
3rd years: All coursework
Masters students: 50% coursework, 50% exam
(start of next term)
Coursework is two programming projects: the first, worth 20% of the coursework mark (details next week), is due in week 6; the second, worth 80%, is due in week 10.
Coursework is dealt with in seminars: some are theoretical, some are practical Matlab sessions (programs can be in any language, but Matlab is useful for its in-built functions)
This week’s seminar: light maths revision
Course Texts
1. Haykin S (1999). Neural Networks. Prentice Hall International. Excellent but quite heavily mathematical.
2. Bishop C (1995). Neural Networks for Pattern Recognition. Oxford: Clarendon Press. Good but a bit statistical, not enough dynamical theory.
3. Duda RO, Hart PE and Stork DG (2001). Pattern Classification. John Wiley.
4. Hertz J, Krogh A and Palmer RG. Introduction to the Theory of Neural Computation. Nice, but somewhat out of date.
5. Ripley BD (1996). Pattern Recognition and Neural Networks. Cambridge University Press. ISBN 0 521 46086 7.
6. Mueller B and Reinhardt J (1991). Neural Networks: An Introduction. Springer-Verlag, Berlin.
As it's quite a mathematical subject, it is good to find the book that best suits your level.
Also, for algorithms and mathematical detail, see Numerical Recipes (Press et al.) and the appendices of Duda, Hart and Stork, and of Bishop.
Uses of NNs
Neural networks are used for:

Applications: character recognition, optimization, financial prediction, automatic driving, ...
Science: neuroscience, mathematics/statistics, physics, computer science, psychology, ...
What are biological NNs?
• UNITs: nerve cells called neurons; there are many different types and they are extremely complex
• around 10^11 neurons in the brain (depending on counting technique), each with about 10^3 connections
• INTERACTIONs: the signal is conveyed by action potentials; interactions can be chemical (release or reception of neurotransmitters) or electrical, at the synapse
• STRUCTUREs: feedforward, feedback, self-activating and recurrent
“The nerve fibre is clearly a signalling mechanism of limited scope.
It can only transmit a succession of brief explosive waves, and the
message can only be varied by changes in the frequency and in the
total number of these waves. … But this limitation is really a small
matter, for in the body the nervous units do not act in isolation as
they do in our experiments. A sensory stimulus will usually affect a
number of receptor organs, and its result will depend on the
composite message in many nerve fibres.” Lord Adrian, Nobel
Acceptance Speech, 1932.
We now know it’s not quite that simple
• Single neurons are highly complex
electrochemical devices
• Synaptically connected networks are only
part of the story
• Many forms of interneuron communication
now known – acting over many different
spatial and temporal scales
The complexity of a neuronal system can be partly seen from a picture in a book on computational neuroscience, edited by Jianfeng, that I am writing a chapter for.
How do we go from real neurons to artificial ones?
[Diagram: a real neuron, showing its inputs, the axon hillock, and its output]
Single neuron activity
• Membrane potential is the voltage difference between a neuron
and its surroundings (0 mV)
[Figure: membrane potentials of several cells, measured relative to the surroundings at 0 mV]
Single neuron activity
• If you measure the membrane potential of a neuron and plot it on the screen, it looks like:
[Figure: membrane potential trace showing a spike]
Single neuron activity
• A spike is generated when the membrane potential is greater than its threshold
Abstraction
• So we can forget all sub-threshold activity and concentrate on spikes (action potentials), which are the signals sent to other neurons
Spikes
• Only spikes are important, since they are the signals other neurons receive
• Neurons communicate with spikes
• Information is coded by spikes
So if we can manage to measure the spiking times, we can decipher how the brain works ...
Again, it's not quite that simple
• Spiking times in the cortex are random: with identical input to the identical neuron, spike patterns are similar, but not identical
[Figure: recording from a real neuron, showing the membrane potential]
A single spike time is meaningless. To extract useful information, we have to average
• over a group of neurons in a local circuit, where each neuron codes the same information
• over a time window
to obtain the firing rate r:

r = (number of spikes from the local circuit) / (time window)

e.g. 6 spikes from the local circuit in a time window of 1 sec gives r = 6 Hz.
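A minimal Python sketch of this averaging (the spike times are made-up values chosen to reproduce the 6 Hz example; Matlab or any other language would do equally well):

```python
import numpy as np

# Made-up spike times (in seconds) from two neurons in the same
# local circuit, both coding the same information.
spike_times = [
    np.array([0.05, 0.21, 0.40, 0.77]),  # neuron 1: 4 spikes
    np.array([0.10, 0.33]),              # neuron 2: 2 spikes
]

window = 1.0  # time window in seconds

# Count every spike from the local circuit inside the window and
# divide by the window length, as in the 6 Hz example above.
total_spikes = sum(int(np.sum(t < window)) for t in spike_times)
r = total_spikes / window
print(f"r = {r:.0f} Hz")  # -> r = 6 Hz
```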
Hence we have the firing rate of a group of neurons, and we can have a network of these local groups:

[Diagram: input local circuits with firing rates r_1, ..., r_n connected to an output local circuit by synaptic strengths w_1, ..., w_n]

r_i is the firing rate of the i-th input local circuit. The neurons at the output local circuit receive signals in the form

\sum_{i=1}^{N} w_i r_i

The output firing rate R of the output local circuit is then given by

R = f( \sum_{i=1}^{N} w_i r_i )

where f is the activation function, generally a sigmoidal function of some sort, and w_i is the weight (synaptic strength) measuring the strength of the interaction between neurons.
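As a minimal sketch, this input-output rule can be written directly in Python; the firing rates and weights below are made-up values, and the logistic sigmoid stands in for the sigmoidal f:

```python
import math

def sigmoid(a):
    """Logistic sigmoid, one common choice of activation function f."""
    return 1.0 / (1.0 + math.exp(-a))

# Made-up firing rates r_i of N = 3 input local circuits
r = [0.5, 1.2, 0.8]
# Made-up synaptic strengths (weights) w_i
w = [0.4, -0.3, 0.9]

# Output firing rate: R = f(sum_i w_i * r_i)
R = sigmoid(sum(wi * ri for wi, ri in zip(w, r)))
print(f"R = {R:.3f}")
```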
Artificial Neural Networks
[Diagram: abstraction from single neurons (which send out spikes) to local circuits (averaged to get firing rates), which become the units of an artificial neural network]
Artificial Neural Networks (ANNs)
A network with interactions, an attempt to mimic the brain
• UNITs: artificial neurons (linear or nonlinear input-output units); small numbers, typically fewer than a few hundred
• INTERACTIONs: encoded by weights, which measure how strongly one neuron affects others
• STRUCTUREs: can be feedforward, feedback or
recurrent
It is still far too naïve as a brain model and as an information-processing device, and the development of the field relies on all of us
Four-layer networks
[Diagram: a four-layer network with inputs x_1, x_2, ..., x_n (e.g. visual input) feeding through hidden layers to the output (e.g. motor output)]
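As a rough sketch, each layer of such a network just applies the R = f( \sum_i w_i r_i ) rule from the previous section to the outputs of the layer before it; the layer sizes and random weights below are made-up for illustration only:

```python
import numpy as np

def forward(x, weight_mats):
    """Pass input x through successive layers of sigmoid units,
    each layer computing f(W x), as in R = f(sum_i w_i r_i)."""
    for W in weight_mats:
        x = 1.0 / (1.0 + np.exp(-(W @ x)))
    return x

rng = np.random.default_rng(0)
sizes = [3, 5, 4, 1]  # made-up layer sizes: input, two hidden, output
weight_mats = [rng.standard_normal((m, n))
               for n, m in zip(sizes[:-1], sizes[1:])]

x = np.array([0.2, -0.1, 0.7])   # stand-in for the "visual input"
print(forward(x, weight_mats))   # stand-in for the "motor output"
```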
The general artificial neuron model has five components, shown
in the following list. (The subscript i indicates the i-th input
or weight.)
1. A set of inputs, x_i.
2. A set of weights, w_i.
3. A bias, u.
4. An activation function, f.
5. Neuron output, y.
Thus the key to understanding ANNs is to
understand/generate the local input-output relationship
y_i = f( \sum_{j=1}^{m} w_{ij} x_j + b_i )

where b_i is the bias of the i-th neuron (the u in the list above).
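A minimal sketch of this general neuron model in Python/NumPy; the input, weight, and bias values are made-up, the logistic sigmoid again stands in for f, and the bias is added to the weighted sum (one common convention):

```python
import numpy as np

def neuron_output(x, w, b):
    """Single artificial neuron: y = f(sum_j w_j * x_j + b),
    with the logistic sigmoid as the activation function f."""
    a = np.dot(w, x) + b            # weighted sum of inputs plus bias
    return 1.0 / (1.0 + np.exp(-a))

x = np.array([0.2, -0.5, 1.0])      # inputs x_j (made-up)
w = np.array([0.7, 0.1, -0.4])      # weights w_j (made-up)
b = 0.05                            # bias
print(f"y = {neuron_output(x, w, b):.3f}")
```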