Z. Toth, Yucheng Song, S. Majumdar, I. Szunyogh, C. Bishop


THE WINTER STORM RECONNAISSANCE PROGRAM OF THE
US NATIONAL WEATHER SERVICE
Zoltan Toth
GSD/ESRL/OAR/NOAA
Formerly at EMC/NCEP/NWS/NOAA
Acknowledgements:
Yucheng Song – Plurality at EMC
Sharan Majumdar – U. Miami
Istvan Szunyogh – Texas A&M U.
Craig Bishop - NRL
Rolf Langland - NRL
THORPEX Symposium, Sept 14-18 2009, Monterey, CA
OUTLINE / SUMMARY
• History
– Outgrowth of FASTEX & NORPEX research
– Operationally implemented at NWS in 2001
• Contributions / documentation
– Community effort
– Refereed and other publications; rich information on the web
• Highlights
– Operational procedures for case selection, ETKF sensitivity calculations
– Positive results consistent from year to year
• Open questions
– Does operational targeting have economic benefits?
– Can similar or better results be achieved with cheaper obs. systems?
– What are the limitations of current techniques?
HISTORY OF WSR
• Sensitivity calculation method
– Ensemble Transform (ET) method developed around 1996
• Field tests
– FASTEX – 1997, Atlantic
• Impact from sensitive areas compared with that from non-sensitive areas ("null" cases)
– NORPEX – 1998, Pacific
• Comparison with adjoint methods
– CALJET, PACJET, WC-TOST, ATReC, AMMA, T-PARC
• WSR
– 1999 - First test in research environment
– 2000 - Pre-implementation test
– 2001 - Full operational implementation
CONTRIBUTIONS
• Craig Bishop (NASA, PSU, NRL)
  – ET & ETKF method development
• Sharan Majumdar (PSU, U. Miami)
  – ETKF method development and implementation
• Rolf Langland (NRL), Kerry Emanuel (MIT)
  – Field testing and comparisons in FASTEX, NORPEX, T-PARC
• Istvan Szunyogh (UCAR scientist at NCEP, U. MD, Texas A&M U.)
  – Operational implementation, impact analysis, dynamics of data impact
• Yucheng Song (Plurality at EMC/NCEP/NWS/NOAA)
  – Updates, maintenance, coordination
• Observations
  – NOAA Aircraft Operations Center (G-IV)
  – US Air Force Reserve (C-130s)
• Operations
  – Case selection by NWS forecasters (NCEP/HPC, Regions)
  – Decision making by Senior Duty Meteorologists (SDM)
DOCUMENTATION
• Papers (refereed and non-refereed)
  – Methods
    • ET: Bishop & Toth
    • ETKF: Bishop et al., Majumdar et al.
  – Field tests
    • Langland et al. (FASTEX)
    • Langland et al. (NORPEX)
    • Szunyogh et al. (FASTEX)
    • Szunyogh et al. (NORPEX)
    • Song et al. (T-PARC; in preparation)
  – Operational implementation
    • Toth et al. (2 papers)
  – WSR results
    • Szunyogh et al.
    • Toth et al. (in preparation)
• Web
  – Details on procedures
  – Detailed documentation for each case in WSR 1999-2009 (11 years, ~200+ cases)
    • Identification of threatening high-impact forecast events
    • Sensitivity calculation results
    • Flight requests
    • Data impact analysis
OPERATIONAL PROCEDURES
• Case selection
  – Forecaster input: time and location of the high-impact event
    • Based on perceived threat and forecast uncertainty
  – The SDM compiles a daily prioritized list of cases for which targeted data may be collected
• Ensemble-based sensitivity calculations (a minimal sketch follows this list)
  – Forward assessment
    • Predict the impact of targeted data from predesigned flight tracks
  – Backward sensitivity
    • Statistical analysis of forward results for selected verification cases
• Decision process
  – The SDM evaluates the sensitivity results
    • Considers predicted impact, priority of cases, and available resources
  – Predesigned flight track number, or a no-flight decision, for the next day
  – Outlook for flight / no flight for the day after next
• Observations
  – Dropsondes from manned aircraft flying over predesigned tracks
    • Aircraft based in Alaska (Anchorage) and/or Hawaii (Honolulu)
  – Real-time QC & transmission to NWP centers via GTS
• NWP
  – Assimilate all adaptively taken data along with the regular data
  – Operational forecasts benefit from the targeted data
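The forward assessment can be illustrated with a minimal sketch. This is not the operational NCEP code: the function name, the ensemble scaling convention, and the scalar trace summary are illustrative assumptions. What it implements is the Kalman-filter variance-reduction identity projected onto the verification region, which is the core of the ETKF impact prediction.

```python
import numpy as np

def etkf_predicted_impact(Zv, Zo, R):
    """Sketch of an ETKF forward assessment: predicted reduction in forecast
    error variance over a verification region if observations were taken
    along one candidate flight track.

    Zv : (nv, k) ensemble perturbations at verification time/region,
         scaled by 1/sqrt(k-1) so that Zv @ Zv.T estimates the covariance
    Zo : (no, k) ensemble perturbations mapped to the candidate observation
         locations/variables at observation time, same scaling
    R  : (no, no) observation error covariance
    """
    # Innovation covariance in observation space: H Pf H^T + R
    C = Zo @ Zo.T + R
    # Cross covariance between the verification-region state and the obs
    B = Zv @ Zo.T
    # Kalman-filter identity: the "signal" covariance, i.e. the predicted
    # reduction of forecast error covariance in the verification region
    S = B @ np.linalg.solve(C, B.T)
    # Scalar summary used to rank predesigned flight tracks
    return np.trace(S)
```

In operations, the analogous quantity is evaluated for each predesigned track, and the ranked predicted impacts inform the SDM's flight decision.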
HIGHLIGHTS
• Case selection
  – No systematic evaluation available
  – Some errors in the position / timing of threatening events at the 4-6 day forecast range
    • Affects stringent verification results
  – Need for objective case-selection guidance based on ensembles
• Sensitivity calculations
  – Predicted and observed impact from targeted data compared in a statistical sense
  – Sensitivity related to the dynamics of the flow
    • Variations on daily and longer time scales (regime dependency)
• Decision process
  – Subjective, due to limitations in the sensitivity methods
    • Spurious correlations due to small sample size
• Observations
  – Aircraft dedicated to the operational observing program are used
  – Are there lower-cost alternatives?
    • Thorough processing of satellite data
    • UAVs?
• NWP forecast improvement (a sketch of the per-case comparison follows this list)
  – Compare data assimilation / forecast results with / without the targeted data
    • Cycled comparison for cumulative impact
    • One-at-a-time comparison for better tracking of impact dynamics in individual cases
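Per case, such a data-denial comparison reduces to differencing the errors of two parallel runs. A minimal sketch, where the function names and the cosine-latitude weighting are illustrative assumptions:

```python
import numpy as np

def rms_error(forecast, analysis, weights):
    """Area-weighted RMS error of a gridded forecast against the verifying
    analysis; weights are typically proportional to cos(latitude)."""
    return np.sqrt(np.sum(weights * (forecast - analysis) ** 2)
                   / np.sum(weights))

def case_impact(fcst_with, fcst_without, analysis, weights):
    """One WSR case in a data-denial experiment: a negative value means
    the targeted data reduced the forecast error."""
    return (rms_error(fcst_with, analysis, weights)
            - rms_error(fcst_without, analysis, weights))
```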
[Figure: schematic relating predicted data impact, observed data impact, and forecast improvement / degradation]

WHY TARGETING MAY WORK
Impact of data removal over the Pacific (Kelly et al. 2007)
Figure 1. Winter Pacific forecasts: verification of mean 500 hPa geopotential RMSE up to day 10 for SEAOUT (grey dotted) and SEAIN (black). Both experiments are verified against the ECMWF operational analysis. Verification regions: (a) North Pacific, (b) North America, (c) North Atlantic, and (d) the European region.
FORECAST EVALUATION RESULTS
Based on 10 years of experience (1999-2008)
• Error reduced in ~70% of targeted forecasts
– Verified against observations at preselected time / region
• Wind & temperature profiles, surface pressure
• 10-20% rms error reduction in preselected regions
– Verified against analysis fields
• 12-hour gain in predictability
– 48-hr forecast with targeted data is as skillful as a 36-hr forecast without it (see the sketch below)
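The "gain in predictability" can be made concrete by inverting the control run's error-growth curve: find the lead time at which the control reaches the targeted run's error level. A sketch with hypothetical numbers (the curves themselves are not from the talk):

```python
import numpy as np

def lead_time_gain(leads, err_control, err_targeted):
    """For each lead time, the gain is that lead minus the (interpolated)
    lead at which the control forecast reaches the same error level."""
    leads = np.asarray(leads, dtype=float)
    # Error grows monotonically with lead time, so the control curve can
    # be inverted by interpolating lead as a function of error.
    equiv = np.interp(err_targeted, err_control, leads)
    return leads - equiv

# Hypothetical RMS error curves: the 48-h targeted error (2.2) equals the
# 36-h control error, i.e. a 12-hour gain at that lead time.
leads = [12, 24, 36, 48]
ctl   = [1.0, 1.6, 2.2, 2.8]
tgt   = [0.9, 1.4, 2.0, 2.2]
print(lead_time_gain(leads, ctl, tgt))   # gain at 48 h is 12.0
```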
WSR summary statistics for 2004-07 (counts summed by year: 2004+2005+2006+2007)

Variable           # cases improved    # cases neutral   # cases degraded
Surface pressure   21+20+13+25 = 79    0+1+0+0 = 1       14+9+14+12 = 49
Temperature        24+22+17+24 = 87    1+1+0+0 = 2       10+7+10+13 = 40
Vector wind        23+19+21+27 = 90    1+0+0+0 = 1       11+11+6+10 = 38
Humidity           22+19+13+24 = 78    0+0+0+0 = 0       13+11+14+13 = 51
All cases          25+22+19+26 = 92    0+1+0+0 = 1       10+7+8+11 = 36

Of the 129 cases overall: 92 (71.3%) improved, 1 neutral, 36 (27.9%) degraded.
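The overall percentages follow directly from the per-year counts in the table:

```python
# Per-year (2004, 2005, 2006, 2007) overall case counts from the table
improved = [25, 22, 19, 26]   # 92
neutral  = [0, 1, 0, 0]       # 1
degraded = [10, 7, 8, 11]     # 36

total = sum(improved) + sum(neutral) + sum(degraded)    # 129 cases
print(f"improved: {sum(improved) / total:.1%}")         # 71.3%
print(f"degraded: {sum(degraded) / total:.1%}")         # 27.9%
```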
[Figure: wind vector error, 2007, without (left) vs. with (right) targeted data]

Valentine's Day Storm, 2007
[Figure: surface pressure from the analysis (hPa; solid contours); forecast improvement (hPa; red); forecast degradation (hPa; blue); the targeted high-impact weather area is marked by a circle]
• Weather event with a large societal impact
• Each GFS run verified against its own analysis (60-hr forecast)
• Impact on surface pressure verification
• RMS error improvement: 19.7% (2.48 hPa vs. 2.97 hPa)
Average surface pressure forecast error reduction from WSR 2000
[Figure] The average surface pressure forecast error reduction for Alaska (55°–70°N, 165°–140°W), the west coast (25°–50°N, 125°–100°W), the east coast (100°–75°W), and the lower 48 states of the United States (125°–75°W). Positive values show forecast improvement; negative values show forecast degradation. (From Szunyogh et al. 2002)
Forecast Verification for Wind (2007)
[Figure: RMS error reduction vs. forecast lead time]
• 10-20% RMS error reduction in winds
• Close to a 12-hour gain in predictability

Forecast Verification for Temperature (2007)
[Figure: RMS error reduction vs. forecast lead time]
• 10-20% RMS error reduction
• Close to a 12-hour gain in predictability
CONCLUSIONS
• High-impact cases can be identified in advance using ensemble methods
• Data impact can be predicted in a statistical sense using the ET / ETKF methods
  – Optimal observing locations / times for high-impact cases can be identified
• It is possible to operationally conduct a targeted observational program
• Open questions remain
OPEN QUESTIONS
• Does operational targeting have economic benefits?
  – A cost-benefit analysis needs to be done for different regions (SERA research)
    • Are there differences between the Pacific (North America) & the Atlantic (Europe)?
• Can similar or better results be achieved with cheaper observing systems?
  – Observing systems of opportunity
    • Targeted processing of satellite data
    • AMDAR
  – UAVs?
• Sensitivity to data assimilation techniques
  – Advanced DA methods extract more information from any data
    • Better analysis without targeted data
    • Larger impact from targeted data (relative to the improved analysis with standard data)?
• What are the limitations of current techniques?
  – What can be said beyond the linear regime?
    • Is a larger ensemble needed for that?
  – Can we quantify the expected forecast improvement (not only the impact)?
    • Distinction between predicting impact vs. predicting positive impact
  – Effect of sub-grid scales ignored so far
    • Does the ensemble display more orderly dynamics than reality?
      – Overly confident signal propagation predictions?
DISCUSSION POINTS
How can the large apparent differences between various studies regarding the effectiveness of targeted observations be explained?
• Case selection is important
  – Only every ~3rd day is there a "good" case
  – Targeting is not a cure for all diseases
    • If all cases are averaged, the signal is diluted to ~1/3 of its value
• Measure impact over the target area
  – The effect is expected in a specific area
    • If measured over a much larger area, the signal is diluted by another factor of ~1/3
• The two factors above (1/3 × 1/3 ≈ 1/10) may explain a 10-fold difference in quantitative assessments of the utility of targeting observations
• Not all cases are expected to yield positive results
  – An artifact of the statistical nature of DA methods
    • Some negative impact should be expected
  – Current DA methods lead to forecast improvements in 70-75% of cases
• Geographical differences
  – Potentially larger impact over the larger Pacific vs. the smaller Atlantic basin?
BACKGROUND
Example: Impact of WSRP targeted dropsondes (NOAA-WSRP, 191 profiles), 1 Jan – 28 Feb 2006, 00 UTC analyses
[Figure: dropsonde impacts binned as beneficial (-0.01 to -0.1 J kg-1), small impact (-0.01 to 0.01 J kg-1), and non-beneficial (0.01 to 0.1 J kg-1)]
The average dropsonde observation impact is beneficial and ~2-3x greater than the average radiosonde impact.
Composite summary maps
[Map: composite dropsonde release points and verification regions]
• 139.6W, 59.8N, 36 hrs (7 cases): 1422 km
• 122W, 37.5N, 49.5 hrs (8 cases): 2034 km
• 92W, 38.6N, 60 hrs (5 cases): 4064 km
• 80W, 38.6N, 63.5 hrs (8 cases): 5143 km
North Pacific observation impact sum (NAVDAS), 1-31 Jan 2007 (00 UTC analyses)
[Bar chart: change in the 24-h moist total energy error norm (J kg-1), 0 to -8 (error reduction), by observation type: Satwind, AMSU-A, SSMI/PRH, Raob, Dropsonde, Aircraft, Land Sfc, Scatwind, Windsat, Modis, Ship Sfc, SSMI/Wnd]
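The impact measure on this and the following slide is a moist total-energy norm of the forecast error change. The exact weights and reference values vary between studies; a sketch of one common (Ehrendorfer-style) definition, with constants chosen here for illustration rather than taken from NAVDAS:

```python
# Physical constants (SI); T_R and P_R are conventional reference values.
CP  = 1005.7     # specific heat of dry air at const. pressure, J kg-1 K-1
RA  = 287.04     # dry-air gas constant, J kg-1 K-1
LV  = 2.5104e6   # latent heat of vaporization, J kg-1
T_R = 280.0      # reference temperature, K
P_R = 1.0e5      # reference surface pressure, Pa

def moist_total_energy(du, dv, dT, dq, dps, wq=1.0):
    """Moist total-energy norm (J kg-1) of an error: du, dv in m/s, dT in K,
    dq in kg/kg, surface-pressure error dps in Pa; wq = 0 gives the dry norm."""
    return (0.5 * (du**2 + dv**2)                      # kinetic term
            + 0.5 * (CP / T_R) * dT**2                 # thermal term
            + 0.5 * RA * T_R * (dps / P_R)**2          # surface-pressure term
            + 0.5 * wq * (LV**2 / (CP * T_R)) * dq**2) # moisture term
```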
North Pacific forecast error reduction per observation, 1-31 Jan 2007 (00 UTC analyses)
[Bar chart: change in the 24-h moist total energy error norm per observation (J kg-1, x 1.0e5), 0 to -10, for the same observation types plus Ship Obs]
Targeted dropsondes: high impact per observation, but low total impact.
ETKF predicted signal propagation
[Plot: distance (km, 0-6000) vs. forecast hours (0-80), with the composite points at 36, 49.5, 60, and 63.5 hours]
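The four composite points (forecast hour, distance) from the "Composite summary maps" slide imply a mean downstream propagation speed, which a simple least-squares fit recovers. The talk itself does not quote a speed; this is only a rough estimate from the plotted points:

```python
import numpy as np

# (forecast hour, downstream distance in km) of the composite signal
# maxima, read off the "Composite summary maps" slide.
hours = np.array([36.0, 49.5, 60.0, 63.5])
dist  = np.array([1422.0, 2034.0, 4064.0, 5143.0])

# Least-squares line: distance ~ speed * hour + offset
speed, offset = np.polyfit(hours, dist, 1)
print(f"mean signal speed ~ {speed:.0f} km/h ({speed / 3.6:.0f} m/s)")
# ~132 km/h, i.e. roughly 37 m/s
```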
Precipitation verification
• Precipitation verification is still in a testing stage due to the lack of station observation data in some regions.

ETS                           5 mm    10 mm
CTL                           16.35   18.56
OPR                           16.50   20.44
Positive vs. negative cases   4:1     3:1
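ETS here is the equitable threat score at the 5 mm and 10 mm precipitation thresholds. A minimal sketch of the standard definition (illustrative only, not the operational verification code):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2 contingency
    table of forecast vs. observed precipitation above a threshold."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by chance, given the forecast and observed frequencies
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```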