J. McLean Sloughter
What I Did On My Summer Vacation: Undergraduate Research Internships, Neural Networks, & Airport Security
“Soon after the electrical current became known many attempts were made by
the older physiologists to explain nervous impulses in terms of electricity. The
analogy between the nerves of the body and a system of telephone or telegraph
wires was too striking to be overlooked.”
(from Studies in Advanced Physiology, Louis J. Rettger, A.M., 1898, p. 443)
How the Brain Works
An Extremely Over-Simplified
Explanation
The brain is made up of interconnected
neurons
Neurons are binary – either fire or don’t fire
As a neuron receives signals from other neurons, it will start firing if the total signal reaches some threshold
How the Brain Works
Just like that, except way more
complicated
Actually a lot more neurons involved
Frequency of firing is also important
But let’s ignore those details for now…
Putting a philosophy degree to work
History – 1940s
Warren McCulloch, a psychologist and
philosopher, postulated that thought is
discrete
Suggested a “psychon” – the smallest unit of
thought
Thought that an individual neuron firing or not
firing might be a psychon
Recommended developing a “calculus of
ideas” to describe neural activity
Philosophy + Math = Fame
History – 1940s
McCulloch teamed up with Walter Pitts, a
math prodigy
Together they published “A Logical Calculus
of the Ideas Immanent in Nervous Activity”
This paper introduced the idea of a “nervous
network,” the first artificial neural model of
cognition
Enter von Neumann
History – 1940s
Von Neumann became an early proponent of
their work
However, he criticized it as being overly
simplistic
Based on some of von Neumann’s
suggestions, McCulloch & Pitts proposed a
system using a large number of neurons
This allows for robustness – an ability, for
example, to recognize a slightly deformed
square as still being essentially a square
Best Mathematician Name Ever
History – 1940s
Norbert Wiener (“The Father of Cybernetics”)
proposed a more involved system
Weighted inputs – one neuron can be more
influential than another
Memory = learning weights
Did not propose how this learning takes
place, dismissed that as a problem for
engineers to deal with
In which not a whole lot happened
History – 1950s
Marvin Minsky introduced a
system based on behavioural
conditioning
Neurons had probabilities of
sending signals
When they produced the correct
output, probabilities were
increased
When they produced the wrong
output, probabilities were
decreased
And nobody really seemed to care
(they were all busy becoming
computer programmers)
Perceptrons
History – 1960s
In 1960, Rosenblatt published a proof of the
capabilities of what he named the
“perceptron”
The perceptron acted much like the nervous
network, but with weighted signals
The major advance was a learning algorithm
Rosenblatt was able to prove that, using his
learning algorithm, any possible configuration
of the perceptron could be learned, given the
proper training data
Perceptron function
History – 1960s
Consider a simple case where nodes A and B are each sending signals to node C
Node C has some threshold, T, that the total incoming signal needs to reach for it to be activated
A, B, and C are all binary – 0 or 1
W1 and W2 are the weights between A and C
and B and C
Then, if A*W1 + B*W2 > T, C = 1
Otherwise, C = 0
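A minimal Python sketch of this decision rule; the names follow the slide, but the example weight and threshold values are made up for illustration:

```python
# Threshold decision rule from the slide: C = 1 if A*W1 + B*W2 > T, else 0.
def perceptron_output(A, B, W1, W2, T):
    return 1 if A * W1 + B * W2 > T else 0

# Illustrative values: with W1 = 0.6, W2 = 0.6, T = 1.0,
# node C fires only when both A and B fire (a logical AND).
for A in (0, 1):
    for B in (0, 1):
        print(A, B, perceptron_output(A, B, W1=0.6, W2=0.6, T=1.0))
```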
Perceptron learning
History – 1960s
Initialize weights randomly
Set threshold to some arbitrary value (why does it not
matter what value the threshold is set to?)
Randomly select one set of inputs
Find the result based on current weights
Subtract result from desired result = error term
Look at each initial node individually
Multiply input value by error term by “learning
coefficient” (between 0 and 1, controls amount of
change you’ll allow at each iteration)
Add result to weight previously associated with that
node to get a new weight
Pick a new set of inputs, repeat until convergence
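The steps above might look like the following rough Python sketch; the function name, the fixed iteration cap standing in for "repeat until convergence," and the default parameter values are my own assumptions, not part of the original slides:

```python
import random

def train_perceptron(data, n_inputs, learning_rate=0.1, threshold=0.5, max_iters=10000):
    """data: list of (inputs, desired) pairs; inputs is a sequence of 0/1, desired is 0 or 1."""
    # Initialize weights randomly; the threshold is fixed at an arbitrary value (per the slide).
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    for _ in range(max_iters):                            # stand-in for "repeat until convergence"
        inputs, desired = random.choice(data)             # randomly select one set of inputs
        total = sum(w * x for w, x in zip(weights, inputs))
        result = 1 if total > threshold else 0            # result under current weights
        error = desired - result                          # desired result minus actual result
        # Adjust each weight: input value * error term * learning coefficient,
        # added to the weight previously associated with that node.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    return weights
```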
Adaline
History – 1960s
Widrow and Hoff created a system called Adaline –
“Adaptive linear element”
Very similar to perceptrons (though with a slightly
different learning algorithm)
Major changes were the use of -1 instead of 0 for no
signal, and a “bias” term – a node that always fires
These were significant because they had no basis in
neurophysiology, and were added purely because
they could improve performance
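One way to picture the bias term, as a sketch under my own assumptions rather than Widrow and Hoff's exact formulation: the always-firing node is just an extra input fixed at 1, so its weight behaves like an adjustable threshold.

```python
# Inputs use -1 for "no signal" and +1 for "signal"; an extra input fixed at 1
# plays the role of the bias node, so its weight acts as a learned threshold.
def adaline_like_output(inputs, weights):
    total = sum(w * x for w, x in zip(list(inputs) + [1], weights))  # last weight = bias
    return 1 if total > 0 else -1
```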
The Wrath of Minsky
History – 1960s
In 1969, Minsky again entered the world of
neural networks, this time co-authoring the
book “Perceptrons” with Seymour Papert
Xor
History – 1960s
Minsky and Papert showed, among other critiques of
perceptrons, that they weren’t capable of learning an
exclusive OR (can you see why?)
An exclusive OR could be made by combining multiple other networks – have A and B feed into both an OR and a NAND, and then AND the results (see the sketch below)
But learning rules only worked with a single-layer network – Minsky and Papert suggested researching whether learning rules could be developed for multi-layered networks
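A sketch of that work-around, using the same kind of threshold unit as before; the particular weights and thresholds here are just one choice that happens to realize OR, NAND, and AND:

```python
def unit(inputs, weights, threshold):
    # Single threshold unit: fires (1) if the weighted sum exceeds the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def xor(a, b):
    or_out = unit([a, b], [1, 1], 0.5)            # OR: fires if either input fires
    nand_out = unit([a, b], [-1, -1], -1.5)       # NAND: fires unless both inputs fire
    return unit([or_out, nand_out], [1, 1], 1.5)  # AND the two results

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # prints 0, 1, 1, 0 -- exclusive OR
```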
The Problem
History – 1960s
Minsky & Papert put their critique of
perceptrons at the front of the book
They put their suggestions for research into
multi-layered perceptrons at the back of the
book, after a few hundred pages of rather
dense math
People didn’t seem to read that far
Research on perceptrons died
History – 1970s
Nothing important happened
The Multi-Layer Perceptron
History – 1980s
Rumelhart, Hinton, and Williams created a
learning algorithm for multi-layer perceptrons
The algorithm requires differentiable functions, so the hard threshold had to be replaced by a sigmoid function
MLP function
History – 1980s
Net input to a node: I_i = Σ_{j=1..n} w_ij * x_j
Output from a node: f(I) = 1 / (1 + e^(-I))
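A single node's computation, as a small Python sketch (the function and variable names are mine):

```python
import math

def node_output(inputs, weights):
    I = sum(w * x for w, x in zip(weights, inputs))  # net input: sum of w_ij * x_j
    return 1.0 / (1.0 + math.exp(-I))                # sigmoid output f(I)
```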
MLP learning
History – 1980s
Change weight as follows: Δw_ij = b * E * f(I)
where b is the learning coefficient, and E is the error term:
For output nodes: E_output = y_desired − y_actual
For middle nodes: E_i,middle = (df(I_i)/dI) * Σ_{j=1..n} w_ij * E_j,output
where df(I)/dI = f(I) * (1 − f(I))
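A sketch of these update rules for a network with one hidden ("middle") layer; the network structure, the names, and the reading of f(I) in the weight-change rule as the output of the sending node are my assumptions for illustration:

```python
import math

def f(I):
    return 1.0 / (1.0 + math.exp(-I))  # sigmoid

def train_step(x, y_desired, w_hidden, w_output, b=0.5):
    """One update for a one-hidden-layer MLP using the slide's error terms.
    w_hidden[i][j]: weight from input j to middle node i.
    w_output[k][i]: weight from middle node i to output node k."""
    # Forward pass
    I_hidden = [sum(w * xj for w, xj in zip(row, x)) for row in w_hidden]
    h = [f(I) for I in I_hidden]                                 # middle-layer outputs
    I_output = [sum(w * hi for w, hi in zip(row, h)) for row in w_output]
    y = [f(I) for I in I_output]                                 # network outputs

    # Error terms: E_output = y_desired - y_actual
    E_out = [yd - ya for yd, ya in zip(y_desired, y)]
    # E_middle,i = f'(I_i) * sum_k w_ki * E_out_k, with f'(I) = f(I)(1 - f(I))
    E_mid = [h[i] * (1 - h[i]) * sum(w_output[k][i] * E_out[k] for k in range(len(E_out)))
             for i in range(len(h))]

    # Weight changes: delta_w = b * (error term of receiving node) * (output of sending node)
    for k in range(len(w_output)):
        for i in range(len(h)):
            w_output[k][i] += b * E_out[k] * h[i]
    for i in range(len(w_hidden)):
        for j in range(len(x)):
            w_hidden[i][j] += b * E_mid[i] * x[j]
```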
The Problem
Airport Security
Metal detectors only detect things that are,
well, metal (and even then only sometimes)
Lots of bad things aren’t metal – plastic
explosives, ceramic guns, plastic flare guns
An x-ray could potentially see these objects,
but submitting people to x-rays every time
they fly isn’t an especially good idea
The Solution
Airport Security
Scientists at Pacific Northwest National
Laboratory developed a millimeter wave
camera
Millimeter waves are not harmful like x-rays
They can penetrate clothing, but are reflected
by skin
Plastics and ceramics show up with a
distinctive speckled pattern, as they only
partially reflect the waves
The New Problem Caused by the
Solution
Airport Security
Scientists at a
government lab just
made a camera that
can take pictures of
you through your
clothes
Implementing this in
airports would have
every passenger go
through a virtual
strip-search
The Solution to the Problem Caused
by the Solution to the Other Problem
Airport Security
Rather than have a human operator look at the
pictures, we can have a computer look at them for us
The computer can identify suspicious areas and
provide a non-naughty picture to the security officer
In Practice
Airport Security
This technology is now in
use by SafeView, a
company spun off from
this project
It is being used in
airports, government
buildings, border
crossings, and other
locations around the
world
Student Research Opportunities
Research Internship
I was involved in this project while a student intern at Pacific
Northwest National Lab
Information about PNNL’s student internship programs can be
found online at http://science-ed.pnl.gov/students/
One of my summers on this project, I applied through the
Department of Energy’s internship program, which includes
opportunities at a number of other national labs
Information on DOE internship programs is available at
http://www.scied.science.doe.gov/scied/erulf/about.html