Metrics, Bayes, And BOGSAT: How To Assess And Revise Earthquake Hazard Maps
Seth Stein1, Edward Brooks1, Bruce D. Spencer2
1 Department of Earth & Planetary Sciences and Institute for Policy Research, Northwestern University, Evanston, Illinois, USA
2 Department of Statistics and Institute for Policy Research, Northwestern University, Evanston, Illinois, USA
Hazard map questions
• Why do maps sometimes do poorly?
• How can we measure their performance?
• When and how should we update them?
• How can we quantify their uncertainties?
• How good do they have to be to be useful?
• How can we improve their forecasts?
• How do we make sensible policy given their limitations?
Geller (2011) argued that “all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas,” so a map showing uniform hazard would be preferable to the existing maps.
To test this idea, we need to measure map performance.
How good a baseball player was Babe Ruth? The answer depends on the metric used. In many seasons Ruth led the league both in home runs and in strikeouts. By one metric he did very well, and by another, very poorly.
Lessons from meteorology
• Weather forecasts are routinely evaluated to assess how well their predictions matched what actually occurred: "it is difficult to establish well-defined goals for any project designed to enhance forecasting performance without an unambiguous definition of what constitutes a good forecast" (Murphy, 1993).
• Information about how a forecast performs is crucial in determining how best to use it. The better a weather forecast has worked to date, the more we factor it into our daily plans.
Choosing appropriate metrics is crucial in assessing the performance of forecasts.
Silver (2012) shows that TV weather forecasts have a "wet bias": they predict more rain than actually occurs, probably because forecasters feel that customers accept unexpectedly sunny weather but are annoyed by unexpected rain.
From users’ perspective,
what specifically should hazard maps
seek to accomplish?
How can we measure
how well they do it?
How to measure map performance?
Implicit probabilistic map criterion: after an appropriate time, the predicted shaking should have been exceeded at only a fraction p of sites.
Define the fractional site exceedance metric
M0(f, p) = |f – p|
where f is the fraction of sites at which the predicted shaking was exceeded. An ideal map has M0 = 0.
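As a minimal sketch (not from the slides, using hypothetical site values), M0 can be computed by counting the sites where observed shaking exceeded the map's predicted value:

```python
# Minimal sketch of the fractional site exceedance metric M0(f, p) = |f - p|.
# "predicted" and "observed" are hypothetical peak-shaking values at each site;
# p is the fraction of sites at which the map expects exceedance over the window.
import numpy as np

def fractional_site_exceedance(predicted, observed, p):
    """Return (f, M0), where f is the fraction of sites whose observed
    shaking exceeded the map's predicted shaking."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    f = float(np.mean(observed > predicted))
    return f, abs(f - p)

# Hypothetical example: a map designed so shaking should be exceeded at 10% of sites
predicted = [0.3, 0.4, 0.2, 0.5, 0.3]   # predicted peak ground acceleration (g)
observed  = [0.1, 0.5, 0.1, 0.2, 0.1]   # observed peak ground acceleration (g)
f, M0 = fractional_site_exceedance(predicted, observed, p=0.10)
print(f"f = {f:.2f}, M0 = {M0:.2f}")    # f = 0.20, M0 = 0.10
```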
Fractional site exceedance is a useful metric but only tells part of the story.
Both maps are successful (M0 = 0), but one exposed some sites to much greater shaking than predicted. This situation could reflect faults that had larger earthquakes than assumed.
Fractional site exceedance is a useful metric but only tells part of the story.
All these maps are successful (M0 = 0), but one significantly overpredicted shaking, which could arise from overestimating the magnitude of the largest earthquakes.
A map can be nominally successful by the fractional site exceedance metric (p = 0.1, f = 0.1, M0 = 0), but significantly underpredict shaking at many sites and overpredict it at others. Or it can be nominally unsuccessful by this metric (p = 0.1, f = 0.2, M0 = 0.1), but better predict shaking at most sites.
Other metrics can provide additional information beyond the fractional site exceedance M0.
Squared misfit to the data
M1(s, x) = Σi (xi - si)² / N
measures how well the predicted shaking si compares to the highest observed shaking xi over the N sites.
From a purely seismological view, M1 tells us more than M0 about how well a map performed.
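A minimal sketch of M1, assuming s and x are arrays of predicted and highest-observed shaking at the same N sites:

```python
# Minimal sketch of the squared misfit M1(s, x) = sum_i (x_i - s_i)^2 / N,
# where s_i is the map's predicted shaking and x_i the highest observed
# shaking at site i.
import numpy as np

def squared_misfit(s, x):
    s, x = np.asarray(s, dtype=float), np.asarray(x, dtype=float)
    return float(np.mean((x - s) ** 2))
```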
Other metrics can provide additional information beyond the fractional site exceedance M0.
Because underprediction does potentially more harm than overprediction, we could weight underprediction more heavily.
Asymmetric squared misfit
M2(s, x) = Σi wi (xi - si)² / N
with wi = a for (xi - si) > 0 and wi = b for (xi - si) ≤ 0.
This is more useful for hazard mitigation than M1.
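A minimal sketch of M2; the weights a and b below are hypothetical choices, with a > b so that underprediction is penalized more heavily:

```python
# Minimal sketch of the asymmetric squared misfit
# M2(s, x) = sum_i w_i (x_i - s_i)^2 / N, with w_i = a where the map
# underpredicted (x_i - s_i > 0) and w_i = b otherwise.
import numpy as np

def asymmetric_squared_misfit(s, x, a=2.0, b=1.0):
    s, x = np.asarray(s, dtype=float), np.asarray(x, dtype=float)
    w = np.where(x - s > 0, a, b)        # heavier weight where shaking was underpredicted
    return float(np.mean(w * (x - s) ** 2))
```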
Other metrics can
provide additional
information beyond
the fractional site
exceedance M0
Shaking-weighted
asymmetric squared misfit
We could use larger
weights for areas predicted
to be the most hazardous,
so the map is judged most
on how it does there.
Other metrics can
provide additional
information beyond
the fractional site
exceedance M0
Exposure-weighted
asymmetric squared misfit
We could use larger
weights for areas with the
largest exposure of people
or property, so the map is
judged most on how it
does there.
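Both the shaking-weighted and exposure-weighted variants can be sketched by adding a per-site weight gi to the asymmetric misfit; the normalization and default weights below are illustrative assumptions, not the authors' specific formulation:

```python
# Minimal sketch of a shaking- or exposure-weighted asymmetric misfit: each
# site carries a weight g_i reflecting either its predicted hazard or its
# exposure of people and property, so errors at those sites count most.
import numpy as np

def weighted_asymmetric_misfit(s, x, site_weights, a=2.0, b=1.0):
    s, x = np.asarray(s, dtype=float), np.asarray(x, dtype=float)
    g = np.asarray(site_weights, dtype=float)
    g = g / g.sum()                      # normalize the per-site weights
    w = np.where(x - s > 0, a, b)        # asymmetric under/overprediction weights
    return float(np.sum(g * w * (x - s) ** 2))
```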
Although no single
metric fully
characterizes map
performance, using
several metrics can
provide valuable
insight for assessing
and improving
hazard maps
[Figure: probabilistic and deterministic hazard maps compared with shaking reported from 217 BC to 2002 AD (Nekrasova et al., 2014). For the probabilistic map, return period T = 2475 yr and the expected exceedance fraction is p = 59%; the observed fractions for the two maps are only f = 0.25% and f = 1.6%.]
Misfit reflects problems in the data (most likely), the maps, or both.
• The probabilistic map with 2% probability of exceedance in 50 years (i.e., ground shaking expected to be exceeded on average once in 2475 years) significantly overestimates the shaking reported over a comparable time span (~2200 years).
• The deterministic map, which is not associated with a specific time span, also overestimates reported ground shaking.
The historical catalog is thought to be incomplete (Stucchi et al., 2004) and may underreport the largest shaking due to space-time sampling bias.
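As a quick check of these numbers (a worked example assuming Poisson occurrence), the 2%-in-50-years level corresponds to the quoted ~2475-year return period, and over a ~2200-year catalog the predicted shaking should have been exceeded at roughly 59% of sites:

```python
# Worked example (assumed Poisson occurrence): relate "2% probability of
# exceedance in 50 yr" to a return period, and compute the exceedance
# fraction expected over the ~2200-yr historical window.
import math

T = -50.0 / math.log(1 - 0.02)          # return period, ~2475 yr
p_hist = 1 - math.exp(-2200.0 / T)      # expected exceedance fraction, ~0.59
print(f"T = {T:.0f} yr, expected fraction over 2200 yr = {p_hist:.0%}")
```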
Options after an earthquake produces shaking larger than anticipated:
Either regard the high shaking as a low-probability event allowed by the map,
Or, as is usually done, accept that the high shaking was not simply a low-probability event and revise the map.
No formal or objective criteria are used to decide whether and how to change the map.
This is done via BOGSAT (“Bunch Of Guys Sitting Around Table”).
Challenge: a new map that better describes the past may or may not better predict the future.
Deciding whether to remake a map is like deciding, after a coin has come up heads a number of times, whether to continue assuming that the coin is fair and the run is a low-probability event, or to change to a model in which the coin is assumed to be biased.
Changing the model to match the past may describe the future worse.
Bayes’ Rule: how much to change depends on one’s confidence in the prior model.
Revised (posterior) probability model ∝ Likelihood of observations given the prior model × Prior probability model
If you were confident that the coin was fair, you would probably not change your model. If you were given the coin at a magic show, your confidence would be lower and you would be more likely to change your model.
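For the coin example, a minimal sketch (assumed, not from the slides) using a Beta prior on the probability of heads shows how prior confidence controls the size of the update:

```python
# Minimal sketch of Bayes' rule for the coin example. P(heads) has a
# Beta(alpha, beta) prior with mean 0.5; a larger alpha + beta means
# higher confidence that the coin is fair, so the posterior moves less.
def posterior_mean_heads(alpha, beta, heads, flips):
    """Posterior mean of P(heads) after observing `heads` in `flips` tosses."""
    return (alpha + heads) / (alpha + beta + flips)

# Hypothetical run: 8 heads in 8 flips
print(posterior_mean_heads(alpha=100, beta=100, heads=8, flips=8))  # high confidence: ~0.52
print(posterior_mean_heads(alpha=2,   beta=2,   heads=8, flips=8))  # low confidence:  ~0.83
```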
Assume Poisson earthquake recurrence with rate
λ = 1/T = 1/50 = 0.02 per year.
This estimate is assumed (the prior) to have mean μ and standard deviation σ.
If an earthquake occurs after only 1 year:
High confidence in the prior leads to a small change; low confidence leads to a large change.
The updated forecast, described by the
posterior mean, increasingly differs
from the initial forecast (prior mean)
when the uncertainty in the prior
distribution is larger. The less
confidence we have in the prior model,
the more a new datum can change it.
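A minimal sketch of this update, assuming a conjugate Gamma prior on λ chosen to match the stated mean μ and standard deviation σ (an illustrative assumption, not necessarily the authors' exact procedure):

```python
# Minimal sketch of the Bayesian update of a Poisson rate lambda.
# The prior is a Gamma distribution with the given mean mu and standard
# deviation sigma; observing n events in t years gives a Gamma posterior
# with shape a + n and rate b + t, whose mean is returned here.
def posterior_rate(mu, sigma, n_events, t_years):
    a = (mu / sigma) ** 2                    # Gamma shape from prior mean/std
    b = mu / sigma ** 2                      # Gamma rate
    return (a + n_events) / (b + t_years)    # posterior mean of lambda

mu = 0.02                                                        # prior mean rate, 1/50 per year
print(posterior_rate(mu, sigma=0.005, n_events=1, t_years=1))    # high confidence: ~0.021 (small change)
print(posterior_rate(mu, sigma=0.020, n_events=1, t_years=1))    # low confidence:  ~0.039 (large change)
```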
Current status
• We need agreed ways of assessing how well hazard maps performed, and thus whether one map performed better than another.
• Although no single metric alone fully characterizes map behavior, using several metrics can provide useful insight for assessing map performance and thus uncertainty.
• Deciding when and how to revise hazard maps should combine BOGSAT (subjective judgment given limited information) and Bayes (ideas about parameter uncertainty).
• Uncertainty estimates are crucial for deciding how much confidence to place in maps when making very expensive policy decisions.
Forecast uncertainty for policy decisions
[Figure: examples of forecasts with stated uncertainty used in policy decisions: Australia, U.S. Social Security, and IPCC global warming projections.]
Challenge
U.S. meteorologists (Hirschberg et al., 2011)
have adopted a goal of “routinely providing
the nation with comprehensive, skillful,
reliable, sharp, and useful information about
the uncertainty of hydrometeorological
forecasts.”
Although seismologists have a tougher
challenge and a longer way to go, we should
try to do the same for earthquake hazards.