Ensemble Prediction in the U.S.


Ensembles and Probabilistic Prediction
Uncertainty in Forecasting
• All of the model forecasts I have talked about
reflect a deterministic approach.
• This means that we do the best job we can for a
single forecast and do not consider uncertainties in
the model, initial conditions, or the very nature of
the atmosphere. These uncertainties are often very
significant.
• Traditionally, this has been the way forecasting
has been done, but that is changing now.
A Fundamental Issue
• The work of Lorenz (1963, 1965,
1968) demonstrated that the
atmosphere is a chaotic system, in
which small differences in the
initialization, well within
observational error, can have large
impacts on the forecasts, particularly
for longer forecasts.
• In a series of experiments, Lorenz found that
small errors in initial conditions can grow until
all deterministic forecast skill is lost at about
two weeks.
Butterfly Effect: a small change
at one place in a complex system
can have large effects elsewhere
Uncertainty Extends Beyond
Initial Conditions
• There is also uncertainty in our model physics.
• And further uncertainty produced by our
numerical methods (e.g., finite differencing
truncation error, etc.).
Probabilistic NWP
• To deal with forecast uncertainty, Epstein (1969)
suggested stochastic-dynamic forecasting, in which
forecast errors are explicitly considered during
model integration.
• Essentially, uncertainty estimates were added to
each term in the primitive equations.
• This stochastic method was not computationally
practical, since it added many additional terms.
Probabilistic-Ensemble NWP
• Another approach, ensemble prediction, was
proposed by Leith (1974), who suggested that
prediction centers run a collection (ensemble) of
forecasts, each starting from a different initial state.
• The variations in the resulting forecasts could be
used to estimate the uncertainty of the prediction.
• But even the ensemble approach was not possible at
that time due to limited computer resources.
• Became practical in the late 1980s as computer
power increased.
Ensemble Prediction
• Can use ensembles to estimate the probability that
some weather feature will occur.
• The ensemble mean is more accurate on average than
any individual ensemble member.
• Forecast skill of the ensemble mean is related to the
spread of the ensemble:
– When the ensemble forecasts are similar, ensemble
mean skill is higher.
– When the forecasts differ greatly, ensemble mean
forecast skill is lower.
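As a minimal illustration of these ideas (a sketch only: the 21-member ensemble values and the 50 kt event threshold are hypothetical, not from the slides), the ensemble mean, spread, and an event probability can be computed directly from the member forecasts:

```python
import numpy as np

# Hypothetical ensemble: 21 members' wind-speed forecasts (kt) at one point.
rng = np.random.default_rng(42)
members = rng.normal(loc=45.0, scale=8.0, size=21)

ens_mean = members.mean()           # ensemble mean forecast
ens_spread = members.std(ddof=1)    # spread: std. dev. across members
p_event = (members > 50.0).mean()   # fraction of members exceeding 50 kt

print(f"mean={ens_mean:.1f} kt, spread={ens_spread:.1f} kt, "
      f"P(wind > 50 kt)={p_event:.2f}")
```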
[Figure: ensemble forecasts at 12 h, 24 h, and 36 h lead times]
A critical issue is the development of
ensemble systems that provide probabilistic
guidance that is both reliable and sharp.
Elements of a Good Probability Forecast
• Reliability (also known as calibration)
– A probability forecast p ought to verify with relative
frequency p.
– Forecasts from climatology are reliable (by definition), so
calibration alone is not enough.
We are trying to predict a
probability density function
(PDF).
[Figure: example PDF of a forecast quantity]
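A minimal sketch of a reliability check, using hypothetical forecast and outcome arrays: bin the forecast probabilities, then compare each bin's average forecast probability with the observed relative frequency of the event.

```python
import numpy as np

def reliability_table(p_forecast, occurred, n_bins=10):
    """Compare binned forecast probabilities with observed frequencies."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_forecast, bins) - 1, 0, n_bins - 1)
    rows = []
    for k in range(n_bins):
        in_bin = idx == k
        if in_bin.any():
            # Reliable forecasts: mean forecast prob ~= observed frequency.
            rows.append((p_forecast[in_bin].mean(),
                         occurred[in_bin].mean(), in_bin.sum()))
    return rows

# Hypothetical verification data: 1000 forecasts and binary outcomes,
# perfectly reliable by construction.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 1000)
obs = rng.uniform(0, 1, 1000) < p
for fmean, ofreq, n in reliability_table(p, obs):
    print(f"forecast {fmean:.2f}  observed {ofreq:.2f}  (n={n})")
```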
Elements of a Good Probability Forecast
• Sharpness (a.k.a. resolution)
– The variance or confidence interval of the
predicted distribution should be as small as
possible.
[Figure: a sharp and a less sharp probability density
function (PDF) for some forecast quantity]
PDFs are created by fitting
Gaussian or other curves to the
ensemble members.
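A minimal sketch of that fitting step (assuming scipy is available; the member values here are hypothetical): fit a Gaussian to the ensemble and read an exceedance probability off the fitted PDF.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical ensemble member forecasts (kt).
members = np.array([52.1, 55.3, 48.9, 60.2, 57.5, 53.8, 49.4, 58.0])

mu, sigma = norm.fit(members)              # fit a Gaussian to the members
p_gt_56 = 1.0 - norm.cdf(56.0, mu, sigma)  # P(value > 56) from the fitted PDF

print(f"fitted mean={mu:.1f}, sd={sigma:.1f}, P(> 56 kt)={p_gt_56:.2f}")
```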
More ensemble members are generally
better
• Can better explore uncertainty in initial
conditions
• Can better explore uncertainty in model
physics and numerics
Ensembles can be calibrated
Variety of Ways to View
Ensembles and Their Output
The Thanksgiving Forecast 2001
42h forecast (valid Thu 10AM), SLP and winds
[Figure: verification panel plus 13 ensemble members:
1: cent, 2: eta, 3: ukmo, 4: tcwb, 5: ngps, 6: cmcg,
7: avn, 8: eta*, 9: ukmo*, 10: tcwb*, 11: ngps*,
12: cmcg*, 13: avn*]
– Reveals high uncertainty in storm track and intensity
– Indicates low probability of a Puget Sound wind event
[Figure: box-and-whisker display of ensemble output]
[Figure: NAEFS ensemble products]
Early Forecasting Started
Probabilistically
• Early forecasters, faced with large gaps in their nascent science,
understood the uncertain nature of the weather prediction
process and were comfortable with a probabilistic approach to
forecasting.
• Cleveland Abbe, who organized the first forecast group in the
United States as part of the U.S. Signal Corps, did not use the
term “forecast” for his first prediction in 1871, but rather used
the term “probabilities,” resulting in him being known as “Old
Probabilities” or “Old Probs” to the public.
• A few years later, the term “indications” was substituted for
probabilities, and by 1889 the term “forecasts” received official
sanction (Murphy 1997).
“Ol’ Probs”
• Cleveland Abbe (“Ol’ Probabilities”), who led the
establishment of a weather forecasting division within the
U.S. Army Signal Corps, produced the first known
communication of a weather probability to users and the
public.
Professor Cleveland Abbe, who issued the first public
“Weather Synopsis and Probabilities” on February 19,
1871
History of Probabilistic
Prediction
• The first operational probabilistic forecasts
in the United States were produced in 1965.
These forecasts, for the probability of
precipitation, were produced by human
weather forecasters and thus were subjective
predictions. The first objective probabilistic
forecasts were produced as part of the
Model Output Statistics (MOS) system that
began in 1969.
Ensemble Prediction
• Ensemble prediction began at NCEP in the early 1990s.
ECMWF rapidly joined the club.
• During the past few decades, the size and sophistication of the
NCEP and ECMWF ensemble systems have grown
considerably, with the medium-range, global ensemble
system becoming an integral tool for many forecasters.
• Also during this period, NCEP has constructed a higher
resolution, short-range ensemble system (SREF) that uses
breeding to create initial condition variations.
Up-to-date listing:
http://www.meted.ucar.edu/nwp/pcu2/ens_matrix/index.htm
Major Global Ensembles
• NCEP GEFS (Global Ensemble Forecasting
System): GFS, 21 members every 6 hr,
T254 (roughly 50 km resolution), 64 levels
(http://www.esrl.noaa.gov/psd/map/images/ens/ens.html)
• Canadian CEFS: GEM model, 21
members, 100 km grid spacing, 0 and 12Z
• ECMWF: 51 members, 62 levels, 0 and
12Z, T399 (roughly 27 km)
http://www.ecmwf.int/products/forecasts/d/charts/medium/eps/
Major International
Global/Continental Ensemble
Systems
• North American Ensemble Forecast
System (NAEFS): combines the Canadian
and U.S. global ensembles:
http://www.meteo.gc.ca/ensemble/naefs/EPSgrams_e.html
NCEP Short-Range Ensembles (SREF)
• Resolution of 16 km
• Out to 87 h twice a day (09 and 21 UTC
initialization)
• Uses both initial condition uncertainty
(breeding) and physics uncertainty.
• Uses the NMM, NMM-B, and WRF-ARW
models (21 total members)
• http://www.emc.ncep.noaa.gov/SREF/
• http://www.emc.ncep.noaa.gov/mmb/SREF/FCST/COM_US/web_js/html/mean_surface_prs.html
• http://www.spc.noaa.gov/exper/sref/fplumes/
Lessons of the NE Snowstorm:
http://cliffmass.blogspot.com/2015/01/forecast-lessons-from-northeast.html
SREF
NARRE (N. American Rapid
Refresh Ensemble)
British Met Office MOGREPS
• 24 members, 18 km
Ensemble Post-Processing
• Ensemble output can be post-processed to produce
better probabilistic predictions:
– Weight better-performing ensemble members more heavily
– Correct biases
– Improve the width of the probabilistic distributions
(PDFs)
BMA (Bayesian Model Averaging) is
One Example
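BMA forms the predictive PDF as a weighted mixture of bias-corrected member distributions. Below is a heavily simplified sketch of that idea; the member forecasts, weights w, bias correction (a, b), and spread sigma are all made-up illustration values, since real BMA estimates these parameters from training data.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical member forecasts (kt) and BMA parameters.
f = np.array([48.0, 55.0, 52.0])   # raw member forecasts
w = np.array([0.5, 0.3, 0.2])      # member weights (sum to 1)
a, b = 1.0, 0.95                   # shared linear bias correction
sigma = 4.0                        # per-member predictive spread

y = np.linspace(30, 75, 500)
# BMA predictive PDF: weighted mixture of bias-corrected member Gaussians.
pdf = sum(wk * norm.pdf(y, a + b * fk, sigma) for wk, fk in zip(w, f))

print(f"BMA predictive mean ~= {np.trapz(y * pdf, y):.1f} kt")
```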
There is a whole theory on using
probabilistic information for
economic savings.
C = cost of protection
L = loss if the bad event occurs
Expected cost if you protect = C; expected cost if you
do not protect = p × L, where p is the probability of
occurrence. Decision theory therefore says you should
protect when p × L > C, i.e., when the probability of
occurrence is greater than C/L (see the sketch below).
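That rule is one line of code; a minimal sketch using the numbers from the example that follows (C = $150K, L = $1M):

```python
def should_protect(p_event, cost=150_000, loss=1_000_000):
    """Protect whenever the expected loss p*L exceeds the protection cost C."""
    return p_event > cost / loss  # i.e., p > C/L = 0.15

print(should_protect(0.42))  # True: protect
print(should_protect(0.13))  # False: do not protect
```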
Decision Theory Example
Critical event: sfc winds > 50 kt
Cost (of protecting): $150K
Loss (if damage): $1M

                    Observed? YES       Observed? NO
Forecast? YES       Hit: $150K          False alarm: $150K
Forecast? NO        Miss: $1000K        Correct rejection: $0K
Cost ($K) by Threshold for Protective Action

                                 Deterministic  Probabilistic       Probabilistic cost ($K) by threshold
Case  Forecast (kt)  Obs (kt)    cost ($K)      forecast        0%     20%     40%     60%     80%    100%
 1        65            54           150            42%         150     150     150    1000    1000    1000
 2        58            63           150            71%         150     150     150     150    1000    1000
 3        73            57           150            95%         150     150     150     150     150    1000
 4        55            37           150            13%         150       0       0       0       0       0
 5        39            31             0             3%         150       0       0       0       0       0
 6        31            55          1000            36%         150     150    1000    1000    1000    1000
 7        62            71           150            85%         150     150     150     150     150    1000
 8        53            42           150            22%         150     150       0       0       0       0
 9        21            27             0            51%         150     150     150       0       0       0
10        52            39           150            77%         150     150     150     150       0       0
Total cost:                       $2,050                     $1,500  $1,200  $1,900  $2,600  $3,300  $5,000
Optimal threshold = C/L = 150/1000 = 15%
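The table's totals can be reproduced with a short script. The sketch below encodes the ten cases as (observed event, forecast probability) pairs and applies the protect-if-the-forecast-probability-reaches-the-threshold rule, with a hit or false alarm costing C = $150K and a miss costing L = $1000K.

```python
# (observed_event, forecast_probability) for the table's ten cases;
# the event is observed winds > 50 kt.
cases = [(True, 0.42), (True, 0.71), (True, 0.95), (False, 0.13), (False, 0.03),
         (True, 0.36), (True, 0.85), (False, 0.22), (False, 0.51), (False, 0.77)]
C, L = 150, 1000  # cost of protecting, loss if unprotected ($K)

for thr in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    # Protect (pay C) when p reaches the threshold; otherwise pay L on a miss.
    total = sum(C if p >= thr else (L if event else 0) for event, p in cases)
    print(f"threshold {thr:.0%}: total cost ${total}K")
```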
The Most Difficult Part:
Communication of Uncertainty
Deterministic Nature?
• People seem to prefer deterministic
products: “tell me exactly what is going to
happen”
• People complain that they find probabilistic
information confusing. Many don't understand
POP (probability of precipitation).
• The media and internet are not moving forward
very quickly on this.
The commercial sector is no better.
A great deal of research and
development is required to find
effective approaches for
communicating probabilistic
forecasts that do not
overwhelm people and that allow
them to get value out of the forecasts.