
Neural Network and
Earthquake Prediction
Professor Sin-Min Lee
What is Data Mining?
• Process of automatically finding relationships and patterns, and extracting meaning from enormous amounts of data.
• Also called “knowledge discovery”
Objective
• Extracting hidden, or not easily recognizable, knowledge from large data… Know the past
• Predicting what is likely to happen if a particular type of event occurs… Predict the future
Application
• Marketing example
– Sending direct mail to randomly chosen
people
– A database of recipients’ attribute data (e.g. gender, marital status, # of children) is available
– How can this company increase the response
rate of direct mail?
Application (Cont’d)
• Figure out the patterns and relationships of attributes that those who responded have in common
• Helps decide what kind of group of people the company should target
• Data mining helps analyze large amounts of data and make decisions… but how exactly does it work?
• One commonly used method is the decision tree
Decision Tree
• One of many methods to perform data
mining - particularly classification
• Divides the dataset into multiple groups by
evaluating attributes
• A decision tree can be explained as a series of nested if-then-else statements.
Decision Tree (Cont’d)
• Each non-leaf node has an associated predicate that tests an attribute of the data
• A leaf node represents a class, or category
• To classify a data record, start from the root node and traverse down the tree by testing predicates and taking branches (see the sketch below)
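A minimal sketch of such nested if-then-else classification, applied to the earlier direct-mail example (the attributes, branch tests, and classes here are hypothetical illustrations, not taken from the slides):

    def classify(recipient):
        # Each non-leaf node tests one attribute; each leaf is a class.
        if recipient["married"]:
            if recipient["children"] > 2:
                return "likely to respond"
            return "unlikely to respond"
        if recipient["gender"] == "F":
            return "likely to respond"
        return "unlikely to respond"

    # Tracing the path taken (married -> children > 2) explains the result.
    print(classify({"married": True, "children": 3, "gender": "F"}))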
Example of Decision Tree
Advantages of Decision Tree
• Simple to understand and interpret
• Requires little data preparation
• Able to handle nominal and categorical data
• Performs well with large data in a short time
• Conditions are easily explained by boolean logic
Advantages of Decision Tree
• Easy to visualize the process of
classification
– Can easily tell why a data record is classified in a particular category – just trace the path to the leaf and it explains the reason
• Simple, fast processing
– Once the tree is made, just traverse down the
tree to classify the data
Decision Tree is for…
• Classifying datasets in which
– The predicates return discrete values
– No attribute has the same value for all records
CMT catalog: Shallow earthquakes, 1976-2005
INDIAN PLATE MOVES NORTH
COLLIDING WITH EURASIA
Gordon & Stein, 1992
COMPLEX
PLATE
BOUNDARY
ZONE IN
SOUTHEAST
ASIA
Northward motion of
India deforms all of
the region
Many small plates
(microplates) and
blocks
Molnar & Tapponnier, 1977
India subducts
beneath Burma
microplate
at about 50
mm/yr
Earthquakes
occur at plate
interface along
the Sumatra arc
(Sunda trench)
(Source: NOAA)
IN THE DEEP OCEAN a tsunami has long wavelength, travels fast, and has small amplitude – it doesn’t affect ships
AS IT APPROACHES SHORE, it slows. Since energy is conserved, the amplitude builds up – very damaging
TSUNAMI WARNING
Deep ocean buoys can measure
wave heights, verify tsunami and
reduce false alarms
Because seismic waves travel much
faster (km/s) than tsunamis, rapid
analysis of seismograms can identify
earthquakes likely to cause major
tsunamis and predict when waves will
arrive
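A rough numerical sketch of both points, using the standard shallow-water wave speed v = sqrt(g·d); the depths, the 1000 km distance, and the 8 km/s seismic speed are illustrative assumptions:

    import math

    g = 9.8  # gravitational acceleration, m/s^2

    def tsunami_speed(depth_m):
        # Shallow-water approximation: v = sqrt(g * d)
        return math.sqrt(g * depth_m)

    print(tsunami_speed(4000))  # deep ocean (~4 km): ~198 m/s (~713 km/h)
    print(tsunami_speed(10))    # near shore (~10 m): ~10 m/s, so the wave piles up

    # Warning window for a coast an assumed 1000 km from the source:
    distance_km = 1000.0
    print(distance_km / 8.0)                          # seismic waves: ~125 s
    print(distance_km / (tsunami_speed(4000) / 1000)) # tsunami: ~5050 s (~84 min)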
HOWEVER, IT IS HARD TO PREDICT EARTHQUAKES
Recurrence is highly variable
Sieh et al., 1989
Extend earthquake history with geologic records – paleoseismology
M>7: mean recurrence 132 yr, σ = 105 yr
Estimated probability in 30 yrs: 7–51% (a sketch of this kind of calculation follows below)
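As a hedged sketch of how such a 30-year probability can be derived: assuming, purely for illustration, a Gaussian recurrence model with the mean and standard deviation quoted above, the conditional probability of an event in the next 30 years given the time already elapsed is:

    import math

    def normal_cdf(x, mu, sigma):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def prob_in_window(elapsed, window=30.0, mu=132.0, sigma=105.0):
        # P(event in [elapsed, elapsed + window] | no event by `elapsed`)
        f_now = normal_cdf(elapsed, mu, sigma)
        f_later = normal_cdf(elapsed + window, mu, sigma)
        return (f_later - f_now) / (1.0 - f_now)

    print(prob_in_window(100.0))  # ~0.18 if 100 yr have already elapsed

Different recurrence models and elapsed times give different numbers, which is one reason published estimates span ranges as wide as 7–51%.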
EARTHQUAKE RECURRENCE
AT SUBDUCTION ZONES IS
COMPLICATED
In many subduction zones, thrust
earthquakes have patterns in
space and time. Large
earthquakes occurred in the
Nankai trough area of Japan
approximately every 125 years
since 1498 with similar fault areas
In some cases entire region
seems to have slipped at once; in
others slip was divided into
several events over a few years.
Repeatability suggests that a
segment that has not slipped for
some time is a gap due for an
earthquake, but it’s hard to use
this concept well because of
variability
GAP?
NOTHING YET
Ando, 1975
1985 MEXICO EARTHQUAKE
• SEPTEMBER 19,
1985
• M8.1
• A SUBDUCTION
ZONE QUAKE
• ALTHOUGH
LARGER THAN
USUAL, THE
EARTHQUAKE WAS
NOT A “SURPRISE”
• A GOOD, MODERN
BUILDING CODE
HAD BEEN
ADOPTED AND
IMPLEMENTED
1985 MEXICO EARTHQUAKE
• EPICENTER LOCATED 240 KM FROM MEXICO CITY
• 400 BUILDINGS COLLAPSED IN OLD LAKE BED ZONE OF MEXICO CITY
• SOIL-STRUCTURE RESONANCE IN OLD LAKE BED ZONE WAS A MAJOR FACTOR
1985 MEXICO EARTHQUAKE:
ESSENTIAL STRUCTURES-SCHOOLS
1985 MEXICO EARTHQUAKE:
STEEL FRAME BUILDING
1985 MEXICO EARTHQUAKE:
POUNDING
1985 MEXICO EARTHQUAKE:
NUEVA LEON APARTMENT
BUILDINGS
1985 MEXICO EARTHQUAKE:
SEARCH AND RESCUE
• Definition
• Characteristics
• Project: California Earthquake Prediction (CAEP)
Neural Networks
• AIMA – Chapter 19
• Fundamentals of Neural Networks: Architectures, Algorithms and Applications. L. Fausett, 1994
• An Introduction to Neural Networks (2nd Ed). Morton, I.M., 1995
Neural Networks
• McCulloch & Pitts (1943) are generally
recognised as the designers of the first
neural network
• Many of their ideas still used today (e.g.
many simple units combine to give
increased computational power and the
idea of a threshold)
Neural Networks
• Hebb (1949) developed the first
learning rule (on the premise that if two
neurons were active at the same time
the strength between them should be
increased)
Neural Networks
• During the 50’s and 60’s many
researchers worked on the perceptron
amidst great excitement.
• 1969 saw the death of neural network
research for about 15 years – Minsky &
Papert
• Only in the mid 80’s (Parker and LeCun) was interest revived (in fact Werbos had discovered the algorithm in 1974)
Neural Networks
• We are born with about 100 billion
neurons
• A neuron may connect to as many as
100,000 other neurons
Neural Networks
• Signals “move” between neurons via electrochemical processes
• The synapses release a chemical
transmitter – the sum of which can
cause a threshold to be reached –
causing the neuron to “fire”
• Synapses can be inhibitory or
excitatory
The First Neural Networks
McCulloch and Pitts produced the first
neural network in 1943
Many of the principles can still be seen
in neural networks of today
What is a neural network?
• Def 1: Imitate the brain, and surpass it, to manage both pattern-processing problems and symbolic problems.
• Example: learning and self-organization
What is a neural network? (cont.)
Def 2: Complex-valued neural networks are networks that deal with complex-valued information by using complex-valued parameters and variables.
Example:
1. Good dish: color, smell, taste
2. Prediction: seismic history, ground water, abnormal behavior, nearby seismic activities
What is a neural network? (cont.)
Def 3: brain → artificial brain → artificial intelligence → neural network
Example: information processing in the real world should be flexible enough to deal with an unexpectedly (geologic features) and dynamically (fore/main/after-shocks) changing environment.
A new sort of computer
• What are (everyday) computer systems good
at... and not so good at?
Good at
– Rule-based systems: doing what the programmer wants them to do
Not so good at
– Dealing with noisy data
– Dealing with unknown environment data
– Massive parallelism
– Fault tolerance
– Adapting to circumstances
Neural networks to the rescue
• Neural network: information processing
paradigm inspired by biological nervous
systems, such as our brain
• Structure: large number of highly
interconnected processing elements
(neurons) working together
• Like people, they learn from experience
(by example)
Neural networks to the rescue
• Neural networks are configured for a
specific application, such as pattern
recognition or data classification, through
a learning process
• In a biological system, learning involves
adjustments to the synaptic connections
between neurons
→ the same holds for artificial neural networks (ANNs)
Where can neural network systems help?
• when we can't formulate an algorithmic
solution.
• when we can get lots of examples of the
behavior we require.
‘learning from experience’
• when we need to pick out the structure
from existing data.
Inspiration from Neurobiology
• A neuron: many-inputs /
one-output unit
• output can be excited or
not excited
• incoming signals from
other neurons determine
if the neuron shall excite
("fire")
• Output subject to
attenuation in the
synapses, which are
junction parts of the
neuron
Synapse concept
• The synapse resistance to the incoming signal can
be changed during a "learning" process [1949]
Hebb’s Rule:
If an input of a neuron is repeatedly and persistently
causing the neuron to fire, a metabolic change
happens in the synapse of that particular input to
reduce its resistance
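A minimal sketch of a Hebbian weight update under this rule (the learning rate and the inputs are illustrative assumptions): a weight grows exactly when its input and the neuron’s output are active together, i.e. that synapse’s resistance drops.

    def hebbian_update(weights, inputs, output, rate=0.1):
        # Hebb's rule: strengthen a weight when its input and the
        # neuron's output are active at the same time.
        return [w + rate * x * output for w, x in zip(weights, inputs)]

    w = [0.0, 0.0]
    w = hebbian_update(w, inputs=[1, 0], output=1)
    print(w)  # [0.1, 0.0] - only the active input's weight grew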
Mathematical representation
The neuron calculates a weighted sum of inputs and
compares it to a threshold. If the sum is higher than
the threshold, the output is set to 1, otherwise to -1.
(Figure: the threshold step function provides the non-linearity)
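A minimal sketch of this threshold unit (the weights and threshold values are illustrative):

    def threshold_neuron(inputs, weights, threshold):
        # Weighted sum of inputs compared to a threshold: output 1 or -1.
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s > threshold else -1

    print(threshold_neuron([1, 0, 1], [0.5, 0.2, 0.4], 0.8))  # 0.9 > 0.8 -> 1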
A simple perceptron
• It’s a single-unit network
• Change each weight by an amount proportional to the difference between the desired output and the actual output:
ΔWi = η · (D − Y) · Ii
where η is the learning rate, D the desired output, Y the actual output, and Ii the input.
Perceptron Learning Rule
Example: A simple single unit
adaptive network
• The network has 2 inputs, and one output. All are binary. The output is
– 1 if W0I0 + W1I1 + Wb > 0
– 0 if W0I0 + W1I1 + Wb ≤ 0
• We want it to learn simple OR: output a 1 if either I0 or I1 is 1 (a training sketch follows below).
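A hedged training sketch of this unit under the ΔWi = η·(D − Y)·Ii rule above; the learning rate, initial weights, and epoch count are illustrative choices:

    # Truth table for OR: ((I0, I1), desired output)
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w0, w1, wb = 0.0, 0.0, 0.0  # wb is the bias weight (its input is fixed at 1)
    eta = 0.1

    for epoch in range(20):
        for (i0, i1), desired in examples:
            y = 1 if w0 * i0 + w1 * i1 + wb > 0 else 0
            # Perceptron rule: dW_i = eta * (D - Y) * I_i
            w0 += eta * (desired - y) * i0
            w1 += eta * (desired - y) * i1
            wb += eta * (desired - y) * 1

    for (i0, i1), _ in examples:
        print((i0, i1), 1 if w0 * i0 + w1 * i1 + wb > 0 else 0)  # reproduces OR

Because OR is linearly separable, the rule converges; XOR is not, which is what Minsky & Papert highlighted in 1969.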
Learning
• From experience: examples / training data
• Strength of connection between the
neurons is stored as a weight-value for the
specific connection
• Learning the solution to a problem =
changing the connection weights
Operation mode
• Fix weights (unless in online learning)
• Network simulation = input signals flow
through network to outputs
• Output is often a binary decision
• Inherently parallel
• Simple operations and threshold: fast decisions and real-time response (see the sketch below)
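A minimal sketch of operation mode: with the weights frozen, classification is a single forward pass of weighted sums and thresholds (the two-layer wiring and all numbers below are illustrative assumptions):

    def layer(inputs, weight_rows, thresholds):
        # One layer of threshold units: weighted sum, then binary decision.
        return [1 if sum(w * x for w, x in zip(row, inputs)) > t else 0
                for row, t in zip(weight_rows, thresholds)]

    hidden = layer([1, 0, 1], [[0.4, 0.1, 0.5], [0.2, 0.9, 0.1]], [0.5, 0.5])
    output = layer(hidden, [[0.7, 0.6]], [0.5])
    print(output)  # [1] - a binary decision from one pass through the net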
Characteristics
• Distributedness and parallelism
• Locality
• Weighted sum and activation function with
nonlinearity
• Plasticity
• Generalization
Characteristics (cont.)
• 1. Distributedness and parallelism: information is processed distributedly and in parallel, and trouble in a single neuron does not have a fatal impact on brain function.
• CAEP: if one or some of the related areas has recently become inactive, it does not affect the movement of the area as a whole. (normal fault, strike-slip fault, thrust fault)
Characteristics (cont.)
• 2. Locality: information transferred by a neuron is limited to its nearby neurons.
• CAEP: short-term earthquake prediction is highly influenced by the local geologic features.
Characteristics (cont.)
• 3. Weighted sum and activation function with nonlinearity: each input signal is weighted at the synaptic connection by a connection weight.
• CAEP: nearby locations are weighted through each activation function.
Characteristics (cont.)
• 4. Plasticity: connection weights change according to the information fed to the neuron and its internal state. This plasticity of the connection weights leads to learning and self-organization, and provides adaptability to a continuously varying environment.
• CAEP: calculate the stress at the focused point according to the seismic wave history in the surrounding area
Characteristics (cont.)
• 5. Generalization: a neural network constructs its own view of the world by inferring an optimal action on the basis of previously learned events, by interpolation and extrapolation.
• CAEP: get a view of one area from past experience by pattern representation and prediction.
Basic Function of CAEP
• Neuron: a list of locations along the San Andreas Fault, and two of the associated faults – Hayward and Calaveras.
Basic Function of CAEP (cont.)
• Neuron’s parameters: magnitude, date, latitude,
longitude, depth, location, ground water,
observation, etc.