Advanced Targeting and Observation Selection 6.2 New Start 75


A Pacific Predictability Experiment: Targeted Observing Issues and Strategies
1
Rolf Langland
Pacific Predictability Meeting
Seattle, WA
June 6, 2005
Eight years since FASTEX - first targeting field program
2
FASTEX Targeting Flight – Meteo France / NCAR / NRL / NOAA
Goose Bay, Canada – 22 Feb 1997
IOP-18
Previous Targeting Field Programs
3
Winter storm targeting
• North Atlantic (FASTEX-1997, NA-TReC-2003)
• North Pacific (NORPEX-1998, WSR-1999-2005)
Hurricane / tropical cyclone targeting
• North Atlantic (NOAA-HRD, 2000-2005)
• Western Pacific (DOTSTAR, 2003-2005)
Participants: Meteo France, ECMWF, UKMO, NRL, NCEP, NCAR, NOAA-AOC, NOAA-HRD,
USAF Hurricane Hunters, NASA, CIMSS, MIT, Univ. of Miami, Penn State Univ., others
Targeting Results
4
Forecast impact of targeted data (adding 10–50 dropsondes at single assimilation times):
• Targeted data improves the average skill of short-range forecasts* by ~10–20% over localized verification regions; maximum improvements reach up to 50% forecast-error reduction in localized areas
• In all analysis/forecast systems*, and for all targeting methodologies, ~20–30% of forecast cases are neutral or degraded by the addition of targeted data
• The per-observation impact of targeted dropsonde data is large, but the total impact is generally limited by the relatively small amount of targeted data
* Results based on published forecast-impact studies performed at NCEP, ECMWF, Meteo France, UKMO, and NRL
Targeting Impact on Forecast Error (regional verification area)
5
[Figure: average reduction in 2-day forecast error (percent, 0–100) vs. total number of satellite or in-situ data assimilated per forecast case (log scale, 1 to 1,000,000), for NORPEX-98, NA-TReC-03, and NOAA-WSR-04; the upper limit suggested by predictability studies is indicated.]
How to increase the beneficial impact of targeted observing?
6
ECMWF – need to observe a much larger part of the SV-targeting subspace
NRL – use a higher density of satellite data in target regions, observe more frequently, and observe a larger region (requires satellite-data targeting)
NCEP – ??
UKMO – ??
Targeting a major winter storm forecast failure
7
[Figure: sensitivity of 72-h forecast error to the 300-mb u-wind, with observation targets and the forecast verification area marked. Langland et al. (MWR, 2002).]
Pacific origins of the 2000 E. Coast blizzard
8
[Figure: 250-mb daily-mean geopotential height, 21–26 Jan 2000. Figure by Mel Shapiro.]
Objectives for future targeting programs
9
Goal 1: Increase the average beneficial impact of targeted data in deterministic and ensemble forecasts.
Goal 2: Increase the percentage of forecasts that are improved by targeted data.
• More data in the target subspace (fully observe the subspace and provide near-continuous observations)
• Improve targeting techniques
• Improve data assimilation procedures
Pacific predictability questions
10
1. Are the analyses over the Pacific getting better?
2. How much of the uncertainty or error in current analyses over the Pacific will be reduced by the hyperspectral (and other) satellite observations anticipated over the next five to ten years? How can maximum benefit for NWP be extracted from this vast amount of satellite data?
- Vertical resolution of satellite data vs. that of the model background
- Bias correction?
- Observations in sensitive cloudy regions?
NAVDAS Observation Count – 12 May 2005
11
Number of observations within 5° x 5° lat-lon boxes; all observation types at 00, 06, 12, and 18 UTC. Includes AMSU-A, scatterometer, MODIS, geosat winds, SSMI, raobs, land, ship, and aircraft data. Does not include HIRS, AIRS, GPS, or ozone. [Map with the region of maximum sensitivity marked.]
Targeting Strategies
12
How much benefit can we obtain by “tuning” the network of existing regular satellite and in-situ observations in a targeted sense?
- Targeted satellite data thinning
- Targeted satellite channel selection
- On-request feature-track wind data
- Increased observations from commercial aircraft
- On-request radiosondes at non-standard times
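The first of these options, targeted thinning, can be sketched in a few lines. This is a minimal illustration only: the function name, the `keep_every` parameter, and the box-based selection are assumptions for the sketch, not an operational NAVDAS procedure. The idea is to retain full observation density inside the target region while thinning to a reduced density elsewhere:

```python
import numpy as np

def targeted_thinning(lats, lons, target_box, keep_every=10):
    """Keep all observations inside the target box; keep every Nth elsewhere.

    lats, lons : observation locations (degrees)
    target_box : (lat_min, lat_max, lon_min, lon_max)
    Returns a boolean mask selecting the observations to assimilate.
    """
    lat_min, lat_max, lon_min, lon_max = target_box
    in_target = ((lats >= lat_min) & (lats <= lat_max) &
                 (lons >= lon_min) & (lons <= lon_max))
    idx = np.arange(len(lats))
    # Outside the target, keep only every Nth observation
    thinned_outside = ~in_target & (idx % keep_every == 0)
    return in_target | thinned_outside

# Toy example: ten obs along 30N at 10-degree longitude spacing
lats = np.full(10, 30.0)
lons = np.arange(10) * 10.0           # 0, 10, ..., 90 degrees
mask = targeted_thinning(lats, lons,
                         target_box=(25.0, 35.0, 40.0, 60.0),
                         keep_every=2)
```

In the toy example, the three observations inside the 40–60° longitude box are all kept, while the seven outside are thinned to every second one.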
What major scientific and technical objectives can be
addressed by a Pacific predictability experiment?
13
1. Use field program data set to improve impact of
satellite data for NWP (mid-latitude and tropical)
• observation and background error
• bias correction – calibration and validation
• data thinning – channel selection
• on-request targeted satellite data
2. Test viability of new in-situ observing systems for
targeting – driftsonde, aerosonde, rocketsonde, smart
balloon, etc.
14
[Ten-item list not recoverable from the transcript; one fragment reads “of THORPEX”.]
Targeting Strategy
15
[Flowchart: satellite observations and in-situ observations pass through data selection and thinning procedures (rejected data discarded) into the data assimilation system and forecast model; targeting guidance feeds back into the data selection and observing decisions.]
Forecast and Analysis Procedure
16
Observations (y) and the background (xb) enter the data assimilation system to produce the analysis (xa); the forecast model then produces the forecast (xf).

Adjoint of Forecast and Analysis Procedure
What is the impact of the observations on measures of forecast error (J)? The gradient of the cost function with respect to the forecast, ∂J/∂xf, is propagated backward by the adjoint of the forecast model (the transpose of the tangent propagator) to give the analysis sensitivity ∂J/∂xa; the adjoint of the data assimilation system then gives the background sensitivity ∂J/∂xb and the observation sensitivity ∂J/∂y. The observation impact is <y − H(xb)> · (∂J/∂y).
“New” vs. Old Targeting Approach
17
Issue                          | New Targeting                             | Old Targeting
Number of obs in target region | ~10,000 or more obs in target area        | 10–50 dropsonde profiles
Type of obs                    | Satellite and some in-situ                | Mostly in-situ
Frequency of obs               | At least every 6 hours, or continuous     | Once, at target time
Sampling approach              | Sample larger area of target subspace     | Dropsondes in localized region
Forecast impact                | More reliable and larger forecast impacts | Mixed impact, many null cases
Large Impact of Observations in Cloudy Regions
18
[Figure: average observation impact magnitude (0 to 0.005 J kg-1) vs. cloud cover (0–90%) for SATWIND, RAOB, and ATOVS observations.]
Observation impact (average magnitude per observation, in J kg-1) as a function of model-diagnosed cloud cover. The “impact” in this figure includes both improvements and degradations of 72-h global forecast error. Based on results from 29 June – 28 July 2002.
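The cloud-cover stratification in the figure above can be sketched as a simple binning of per-observation impacts. This is a minimal illustration under stated assumptions: `impact_by_cloud_cover` and its toy inputs are inventions for the sketch, not the NRL diagnostic code.

```python
import numpy as np

def impact_by_cloud_cover(cloud_frac, impacts, bin_width=10):
    """Mean |impact| per observation within cloud-cover bins (0-100 %).

    cloud_frac : model-diagnosed cloud cover (%) collocated with each ob
    impacts    : signed per-observation forecast impacts (J/kg)
    Returns one average magnitude per bin (NaN where a bin is empty).
    """
    edges = np.arange(0, 101, bin_width)
    which = np.digitize(cloud_frac, edges) - 1  # bin index per observation
    return np.array([
        np.abs(impacts[which == i]).mean() if np.any(which == i) else np.nan
        for i in range(len(edges) - 1)
    ])

# Toy data: two obs in the 0-10 % bin, one in the 10-20 % bin
cloud = np.array([5.0, 15.0, 5.0])
imp = np.array([0.002, -0.004, 0.004])
binned = impact_by_cloud_cover(cloud, imp)
```

Averaging the magnitude (rather than the signed value) matches the figure's convention of counting both improvements and degradations.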
19–21
[No transcribed content.]
High Forecast Impact
22–25
[Example figures.]
Med-Low Forecast Impact
26–28
[Example figures.]
FIGURE IN EARLY VERSION OF THORPEX PLAN (April 2000)
29
Example of driftsonde sounding coverage at one assimilation time after five days of deployment from launch sites along the Asian Pacific rim. Initial launch time: 00 UTC 06 Feb 1999; 13 launch sites; coverage at 00 UTC 11 Feb 1999; drift level: 100 mb; launch interval: 12 h; dropsonde interval: 6 h.
Targeting Impact – Percent of Improved Forecasts
30
[Figure: percent of 2-day forecasts improved (40–100%) vs. total number of satellite or in-situ data assimilated per forecast case (log scale, 1 to 1,000,000), for NORPEX-98, NA-TReC-03, and NOAA-WSR-04.]
PROPAGATION OF PACIFIC TARGETING SIGNAL – KINETIC ENERGY
From 00 UTC 20 Jan 2005 (+7 days)
31
[Map: the targeting signal propagates downstream from the Pacific toward the U.S., Europe, and China. From S. Majumdar.]
Extended-duration targeting – flow regime 1
32
Research Tasks
33
– OSEs (real data): test procedures for targeted satellite data thinning and channel selection
– OSSEs (synthetic data): test impact of future satellite and in-situ observing systems
– Evaluate impact of targeted feature-track geosat wind data and other targeted satellite data
– Examine 3D-Var and 4D-Var deterministic forecasts, TIGGE, various metrics, and various forecast verification areas
– Perform operational tests of driftsonde, aerosonde, rocketsonde, smart balloon, etc. for potential field program applications
Predictability Questions
34
- Where are the most critical analysis errors or uncertainties over the
Pacific? How well are cloudy regions analyzed?
- Is there a benefit from using higher horizontal or vertical resolution of
satellite data in target areas?
- What is the realistic upper-limit of forecast improvement that can be
expected from targeted observing in various situations?
- What is the potential benefit from observing larger sections of the
targeting subspace, instead of attempting to survey the smaller-scale
areas of maximum sensitivity, which have been the primary focus of
previous field programs? How can this be accomplished?
Interpretation of previous targeting results
35
• Targeted observing has the potential for significant
improvement to deterministic and ensemble forecasting
• Previous targeting field programs have achieved only a
small fraction of this potential – intermittent small sets
of data (10-50 dropsondes) have modest beneficial
impact
• New and next-generation satellite data are the primary
resource that can advance the impact of targeting
• In-situ targeted observations provide value in certain
situations where satellite observations are insufficient
(including cloudy areas)
Observation Impact
during THORPEX NA-TReC
36
1 Nov – 31 Dec 2003, global domain

Observation Type | δe48-42 (J kg-1) | % of total | # obs      | δe48-42 per ob (10^-5 J kg-1)
AMSU-A           | -88.68           | 47.8%      | 4,461,709  | -2.0
Geosat winds     | -32.44           | 17.4%      | 2,958,608  | -1.1
Aircraft         | -29.24           | 15.8%      | 2,511,540  | -1.2
Land-surface     | -14.20           | 7.7%       | 696,140    | -2.0
Ship-surface     | -11.20           | 6.0%       | 214,143    | -5.2
Rawinsondes      | -7.44            | 4.0%       | 362,489    | -2.1
TC Synth         | -1.74            | 0.9%       | 11,152     | -15.6
Dropsondes       | -0.67            | 0.4%       | 13,418     | -5.0
Total            | -185.61          | 100%       | 11,229,199 | -1.7

Does not include moisture observations or rapid-scan satellite wind data
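The per-observation column in the table above follows directly from dividing each total impact by its observation count. A quick check for a few rows (values copied from the table):

```python
# Recompute the per-observation impact column (units: 10^-5 J/kg per ob)
# from the total impact (J/kg) and observation count in the table above.
totals = {
    "AMSU-A":       (-88.68, 4_461_709),
    "Geosat winds": (-32.44, 2_958_608),
    "Dropsondes":   (-0.67,     13_418),
}
per_ob = {name: round(1e5 * de / n, 1) for name, (de, n) in totals.items()}
# per_ob == {"AMSU-A": -2.0, "Geosat winds": -1.1, "Dropsondes": -5.0}
```

The contrast this arithmetic exposes is the talk's central point: dropsondes have roughly 2.5 times the per-observation impact of AMSU-A, but with only 13,418 of them they contribute under 1% of the total error reduction.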