452Ensemble2009 - Atmospheric Sciences
Probabilistic Prediction
Uncertainty in Forecasting
• All of the model forecasts I have talked about
reflect a deterministic approach.
• This means that we do the best job we can for a
single forecast and do not consider uncertainties in
the model, initial conditions, or the very nature of
the atmosphere. These uncertainties are often very
significant.
• Traditionally, this has been the way forecasting
has been done, but that is changing now.
A More Fundamental Issue
• The work of Lorenz (1963, 1965, 1968)
demonstrated that the atmosphere is a
chaotic system, in which small
differences in the initialization…well
within observational error… can have
large impacts on the forecasts,
particularly for longer forecasts.
• Similarly, uncertainty in model physics
can result in large forecast
differences… and errors.
• Not unlike a pinball game….
• Often referred to as the “butterfly
effect”
Probabilistic-Ensemble NWP
• One approach would be to add uncertainty terms to
all terms in the primitive equations. Not practical.
• Another: Instead of running one forecast, run a
collection (ensemble) of forecasts, each starting
from a different initial state or with different
physics. Became practical in the late 1980s as
computer power increased.
Ensemble Prediction
• The variations in the resulting forecasts could be used to
estimate the uncertainty of the prediction. Can use ensembles to
provide a new generation of products that give the probabilities
that some weather feature will occur (see the sketch after this list).
• Can predict forecast skill or forecast reliability!
• It appears that when forecasts are similar, forecast skill is
higher.
• When forecasts differ greatly, forecast skill is less.
• The ensemble mean is usually more accurate on average than
any individual ensemble member.
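As a concrete illustration of such products, here is a minimal Python sketch, using made-up member values rather than any operational system's output, that turns a small ensemble into a mean, a spread, and an exceedance probability.

```python
import numpy as np

# Hypothetical 24-h wind forecasts (kt) from a 10-member ensemble at one
# location; the values are made up for illustration only.
members = np.array([38.0, 45.0, 52.0, 41.0, 48.0, 55.0, 39.0, 50.0, 44.0, 47.0])

ensemble_mean = members.mean()          # usually beats any single member on average
ensemble_spread = members.std(ddof=1)   # large spread -> low confidence in the forecast

# Probability-style product: fraction of members exceeding a threshold.
threshold_kt = 50.0
prob_exceed = (members > threshold_kt).mean()

print(f"Ensemble mean:   {ensemble_mean:.1f} kt")
print(f"Ensemble spread: {ensemble_spread:.1f} kt")
print(f"P(wind > {threshold_kt:.0f} kt) = {prob_exceed:.0%}")
```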
Probabilistic Prediction
• A critical issue will be the development of
mesoscale ensemble systems that provide
probabilistic guidance that is both reliable and
sharp.
Elements of a Good Probability Forecast
• Reliability (a.k.a. calibration)
– A probability forecast of p ought to verify with relative
frequency p (see the check sketched below).
– Forecasts from climatology are reliable (by definition),
so calibration alone is not enough.
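A minimal sketch of how calibration can be checked, using synthetic forecast/outcome pairs: bin the forecast probabilities and compare each bin's mean forecast probability with the observed relative frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forecast probabilities and binary outcomes; the outcomes are
# drawn so that the forecasts are reliable by construction.
n = 5000
p_forecast = rng.uniform(0.0, 1.0, size=n)
occurred = rng.uniform(0.0, 1.0, size=n) < p_forecast

# Calibration check: within each probability bin, the observed relative
# frequency should roughly match the mean forecast probability.
edges = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (p_forecast >= lo) & (p_forecast < hi)
    if in_bin.any():
        print(f"forecasts {lo:.1f}-{hi:.1f}: mean p = {p_forecast[in_bin].mean():.2f}, "
              f"observed frequency = {occurred[in_bin].mean():.2f}")
```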
Elements of a Good Probability Forecast
• Sharpness (a.k.a. resolution)
– The variance, or confidence interval, of the
predicted distribution should be as small as
possible (see the sketch below).
[Figure: probability density functions (PDFs) for some forecast quantity, one sharp and one less sharp]
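A short sketch, again with made-up numbers, of what sharpness means quantitatively: two predictive distributions with the same mean but different spread, summarized by their standard deviations and 90% intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical predictive distributions for the same quantity: same mean,
# different spread. Sharpness prefers the narrower one.
sharp = rng.normal(loc=10.0, scale=1.0, size=10_000)
less_sharp = rng.normal(loc=10.0, scale=4.0, size=10_000)

for name, sample in [("sharp", sharp), ("less sharp", less_sharp)]:
    lo, hi = np.percentile(sample, [5, 95])
    print(f"{name:10s}: std = {sample.std():.1f}, 90% interval = [{lo:.1f}, {hi:.1f}]")
```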
Early Forecasting Started
Probabilistically
• Early forecasters, faced with large gaps in their nascent science,
understood the uncertain nature of the weather prediction
process and were comfortable with a probabilistic approach to
forecasting.
• Cleveland Abbe, who organized the first forecast group in the
United States as part of the U.S. Signal Corps, did not use the
term “forecast” for his first prediction in 1871, but rather used
the term “probabilities,” resulting in him being known as “Old
Probabilities” or “Old Probs” to the public.
• A few years later, the term ‘‘indications’’ was substituted for
probabilities and by 1889 the term ‘‘forecasts’’ received official
sanction (Murphy 1997).
“Ol’ Probs”
• Cleveland Abbe (“Ol’ Probabilities”), who led the establishment of a
weather forecasting division within the U.S. Army Signal Corps, produced
the first known communication of a weather probability to users and the
public.
Professor Cleveland Abbe issued the first public “Weather Synopsis and
Probabilities” on February 19, 1871.
History of Probabilistic
Prediction
• The first operational probabilistic forecasts
in the United States were produced in 1965.
These forecasts, for the probability of
precipitation, were produced by human
weather forecasters and thus were subjective
predictions. The first objective probabilistic
forecasts were produced as part of the
Model Output Statistics (MOS) system that
began in 1969.
Ensemble Prediction
• Ensemble prediction began at NCEP in the early 1990s.
ECMWF rapidly joined the club.
• During the past decades the size and sophistication of the
NCEP and ECMWF ensemble systems have grown
considerably, with the medium-range, global ensemble
system becoming an integral tool for many forecasters.
• Also during this period, NCEP has constructed a higher
resolution, short-range ensemble system (SREF) that uses
breeding to create initial condition variations.
NCEP Global Ensemble System
• Begun in 1993 with the MRF (now GFS)
• First tried “lagged” ensembles as the basis, using runs from
various initialization times verifying at the same time.
• For the last ten years the “breeding” method has been used to find
perturbations to the initial conditions of each ensemble member.
• Breeding adds random perturbations to an initial state, lets them
grow, reduces their amplitude back down to a small level, lets them
grow again, and so on (see the sketch after this list).
• Gives an idea of what types of perturbations are growing rapidly
in the period BEFORE the forecast.
• Does not include physics uncertainty.
• Coarse spatial resolution, so it captures only synoptic-scale features.
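A toy sketch of the breeding idea, using the Lorenz (1963) equations as a stand-in for a forecast model; this is illustrative only, not NCEP's operational implementation.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system (our toy 'model')."""
    dxdt = np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])
    return x + dt * dxdt

def integrate(x, n_steps):
    for _ in range(n_steps):
        x = lorenz63_step(x)
    return x

rng = np.random.default_rng(42)
amplitude = 0.01                      # fixed size of the bred perturbation
control = np.array([1.0, 1.0, 20.0])  # control initial state
pert = rng.standard_normal(3)
bred = control + amplitude * pert / np.linalg.norm(pert)

# Breeding cycle: run the control and perturbed states forward, take their
# difference, rescale it back to the small fixed amplitude, and repeat.
# The surviving perturbation points toward rapidly growing directions.
for cycle in range(20):
    control_fcst = integrate(control, 100)
    bred_fcst = integrate(bred, 100)
    diff = bred_fcst - control_fcst
    growth = np.linalg.norm(diff) / amplitude
    diff *= amplitude / np.linalg.norm(diff)   # rescale, keeping the direction
    control = control_fcst
    bred = control + diff
    print(f"cycle {cycle:2d}: growth over the cycle = {growth:.2f}x")
```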
NCEP Global Ensemble
At 00Z:
• T254L64 (high-resolution) control out to 7 days, after which
this run gets truncated and is run out to 16 days at a T170L42
resolution
• T62 control that is started with a truncated T170 analysis
• 10 perturbed forecasts each run at T62 horizontal resolution.
The perturbations are from five independent breeding cycles.
At 12Z:
• T254L64 control out to 3 days that gets truncated and run at
T170L42 resolution out to 16 days
• Two pairs of perturbed forecasts based on two independent
breeding cycles (four perturbed integrations out to 16 days).
The Thanksgiving Forecast 2001
42-h forecast (valid Thu 10 AM): SLP and winds
[Figure: 13-panel comparison of ensemble members (cent, eta, ukmo, tcwb, ngps, cmcg, avn, and the perturbed members eta*, ukmo*, tcwb*, ngps*, cmcg*, avn*) alongside the verification]
- Reveals high uncertainty in storm track and intensity
- Indicates low probability of a Puget Sound wind event
NCEP Short-Range Ensembles
(SREF)
• Resolution of 32 km
• Out to 87 h twice a day (09 and 21 UTC
initialization)
• Uses both initial condition uncertainty
(breeding) and physics uncertainty.
• Uses the Eta and Regional Spectral Models
and recently the WRF model (21 total
members)
SREF Current System
Model     Res (km)  Levels  Members    Cloud Physics  Convection
RSM-SAS   45        28      Ctl, n, p  GFS physics    Simple Arakawa-Schubert
RSM-RAS   45        28      n, p       GFS physics    Relaxed Arakawa-Schubert
Eta-BMJ   32        60      Ctl, n, p  Op Ferrier     Betts-Miller-Janjic
Eta-SAT   32        60      n, p       Op Ferrier     BMJ with moist profile
Eta-KF    32        60      Ctl, n, p  Op Ferrier     Kain-Fritsch
Eta-KFD   32        60      n, p       Op Ferrier     Kain-Fritsch with enhanced detrainment
PLUS
* NMM-WRF control and 1 perturbed pair
* ARW-WRF control and 1 perturbed pair
There is a whole theory on using probabilistic information for
economic savings.
C = cost of protection
L = loss if the bad event occurs
Decision theory says you should protect if the probability of
occurrence is greater than C/L (a sketch reproducing this
calculation follows the cost table below).
Critical event: sfc winds > 50 kt
Cost (of protecting): $150K
Loss (if damage occurs): $1M
Decision Theory Example

                    Observed? YES      Observed? NO
Forecast? YES       Hit: $150K         False alarm: $150K
Forecast? NO        Miss: $1000K       Correct rejection: $0K
        Deterministic   Observation   Deterministic   Probabilistic   Cost ($K) by Threshold for Protective Action
Case    Forecast (kt)   (kt)          Cost ($K)       Forecast        0%      20%     40%     60%     80%     100%
 1      65              54            150             42%             150     150     150     1000    1000    1000
 2      58              63            150             71%             150     150     150     150     1000    1000
 3      73              57            150             95%             150     150     150     150     150     1000
 4      55              37            150             13%             150     0       0       0       0       0
 5      39              31            0               3%              150     0       0       0       0       0
 6      31              55            1000            36%             150     150     1000    1000    1000    1000
 7      62              71            150             85%             150     150     150     150     150     1000
 8      53              42            150             22%             150     150     0       0       0       0
 9      21              27            0               51%             150     150     150     0       0       0
10      52              39            150             77%             150     150     150     150     0       0
Total Cost ($K):                      2,050                           1,500   1,200   1,900   2,600   3,300   5,000
Optimal threshold = C/L = $150K / $1,000K = 15%
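A small sketch that reproduces the cost-by-threshold totals in the table above: it protects whenever the forecast probability reaches the threshold and charges $150K for protection or $1,000K for an unprotected event.

```python
# Reproduces the cost-by-threshold totals in the table above (all costs in $K).
# Each case: (probabilistic forecast of sfc winds > 50 kt, event actually observed?).
cases = [
    (0.42, True), (0.71, True), (0.95, True), (0.13, False), (0.03, False),
    (0.36, True), (0.85, True), (0.22, False), (0.51, False), (0.77, False),
]

COST, LOSS = 150, 1000   # cost of protecting; loss if an unprotected event occurs

def total_cost(threshold):
    """Total cost when we protect whenever the forecast probability >= threshold."""
    total = 0
    for prob, event in cases:
        if prob >= threshold:
            total += COST      # protected: pay the protection cost
        elif event:
            total += LOSS      # unprotected and the event occurred: take the loss
    return total

for thr in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"threshold {thr:.0%}: total cost = ${total_cost(thr):,}K")

print(f"decision-theory threshold C/L = {COST / LOSS:.0%}")
```

The minimum total cost ($1,200K) falls at the 20% column, consistent with the theoretical C/L threshold of 15%.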
The Most Difficult Part:
Communication of Uncertainty
Deterministic Nature?
• People seem to prefer deterministic
products: “tell me exactly what is going to
happen”
• People complain they find probabilistic
information confusing. Many don’t
understand POP.
• Media and internet not moving forward
very quickly on this.
Icons are not effective in providing probabilities
Even worse, they use the same icons for “likely rain” and “rain”
as they do for “chance rain.” Also, “likely rain” was used for
70% on this page, while “chance rain” was used for 70% in the
example on the previous page.
And a “slight” chance of freezing
drizzle reminds one of a trip to
Antarctica
The commercial sector is no better.
A great deal of research and development is required to find
effective approaches for communicating probabilistic forecasts
that do not overwhelm people and that allow them to get value
from the forecasts.