Report - COSMO model


Federal Department of Home Affairs FDHA
Federal Office of Meteorology and Climatology MeteoSwiss
WG4 Activities
Priority project «Advanced interpretation and verification of very high resolution models»
Pierre Eckert
MeteoSwiss, Geneva
WG4 coordinator
Gust diagnostics
Jan-Peter Schulz
Deutscher Wetterdienst
WG4 reporting
[email protected]
Diagnosing turbulent gusts

In the COSMO model the maximum gusts at 10 m above the ground are estimated from the absolute speed of the near-surface mean wind V_m and its standard deviation σ:

V_turb = V_m + α σ
V_turb = V_m + α · 2.4 u*          (following Panofsky and Dutton, 1984)
V_turb = V_m + α · 2.4 √C_D · V_m
V_turb = (1 + α · 2.4 √C_D) · V_m

α = 3 : tuning parameter
u* : friction velocity
C_D : drag coefficient for momentum
Verification

Period: 10-25 Jan. 2007
[Scatter plots: Mean Gust (observed) [m/s] and Mean Gust/Mean Wind (observed) versus Mean Wind [m/s]; × old formulation, + new formulation]
Gust diagnostics

Recommendation
WG4 recommends that the formulation of wind gusts in the COSMO reference version is adapted so that the gusts are reduced.
Could be affected by the choice of the vertical discretisation.
→ Poster
Thunderstorm Prediction with Boosting:
Verification and Implementation of a
new Base Classifier
André Walser (MeteoSwiss)
Martin Kohli (ETH Zürich, Semester Thesis)
Output of the learning process
• M base classifiers
• threshold classifiers
AdaBoost Algorithm

Input
• weighted learning samples
• number of base classifiers M

Iteration (m = 1, …, M)
1. determine the base classifier G_m
2. calculate its error and weight, and the sample weights w
3. increase the weights of falsely classified samples

Classifier: weighted majority vote of the base classifiers, G(x) = sign(Σ_m α_m G_m(x))
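The three iteration steps can be sketched with one-dimensional threshold (stump) base classifiers, matching the "threshold classifier" mentioned above. This is a generic AdaBoost illustration, not the MeteoSwiss implementation; all names are invented for this sketch.

```python
import math

def train_adaboost(samples, labels, M):
    """AdaBoost with 1-d threshold (stump) base classifiers.
    samples: list of floats, labels: list of +1/-1, M: number of
    base classifiers.  Returns a list of (alpha, threshold, sign)."""
    n = len(samples)
    w = [1.0 / n] * n                              # weighted learning samples
    model = []
    for _ in range(M):
        # 1. determine the base classifier G: best threshold/sign stump
        best = None
        for thr in samples:
            for sgn in (1, -1):
                err = sum(wi for xi, yi, wi in zip(samples, labels, w)
                          if sgn * (1 if xi > thr else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sgn)
        err, thr, sgn = best
        err = max(err, 1e-10)                      # avoid log(0)
        # 2. calculate the classifier weight alpha from the weighted error
        alpha = 0.5 * math.log((1.0 - err) / err)
        model.append((alpha, thr, sgn))
        # 3. increase the weights of falsely classified samples
        for i, (xi, yi) in enumerate(zip(samples, labels)):
            pred = sgn * (1 if xi > thr else -1)
            w[i] *= math.exp(-alpha * yi * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def classify(model, x):
    """Weighted majority vote: G(x) = sign(sum_m alpha_m G_m(x))."""
    total = sum(a * s * (1 if x > t else -1) for a, t, s in model)
    return 1 if total >= 0 else -1
```

In the thunderstorm application the inputs would be model predictors rather than a single float, but the weighting and voting logic is the same.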
C_TSTORM MAPS
[Probability maps of C_TSTORM at 17, 18 and 19 UTC]

[Verification for July 2006 (~7% events), compared with a random forecast]
The COSMO-LEPS system:
getting close to the 5-year milestone
Andrea Montani, Chiara Marsigli and Tiziana Paccagnella
ARPA-SIM
Hydrometeorological service of Emilia-Romagna, Italy
IX General COSMO meeting
Athens, 18-21 September 2007
The new COSMO-LEPS suite @ ECMWF (since February 2006)

[Schema: cluster analysis and RM identification on the older and younger ECMWF EPS runs, over forecast days d-1 to d+5, using 4 variables (Z, U, V, Q) at 3 levels (500, 700, 850 hPa) and 2 time steps, over the European / COSMO-LEPS clustering area with complete linkage; COSMO-LEPS integration domain shown]

• suite running as a "time-critical application" managed by ARPA-SIM;
• Δx ~ 10 km; 40 ML;
• COSMO-LM 3.20 since Nov 06;
• fc length: 132 h;
• computer time (4.3 million BU for 2007) provided by the COSMO partners which are ECMWF member states.
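The complete-linkage step of the cluster analysis can be illustrated on toy data. This is a generic sketch of the clustering criterion only; the operational suite clusters EPS members on Z, U, V, Q fields, not 2-d points, and the function names are invented here.

```python
def complete_linkage(points, k):
    """Agglomerative clustering with complete linkage: repeatedly merge
    the two clusters whose *maximum* pairwise member distance is the
    smallest, until k clusters remain.  points: list of (x, y) tuples."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: cluster distance is the largest
                # distance between any pair of members
                d = max(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Once the clusters are found, one representative member (RM) per cluster drives a COSMO-LEPS integration, which is how 16 members are selected from the much larger EPS.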
Dissemination
• probabilistic products
• deterministic products (individual COSMO-LEPS runs)
• derived probability products (EM, ES)
• meteograms over station points

Products are delivered at about 01 UTC to the COSMO weather services, to Hungary (case studies) and to the MAP D-PHASE and COPS communities (field campaigns).
Time series of Brier Skill Score

• BSS is written as 1 - BS/BS_ref, with the sample climate as the reference system. A forecast system is useful if BSS > 0.
• BS measures the mean squared difference between forecast and observation in probability space.
• Equivalent to the MSE for a deterministic forecast.

[Time series of BSS, fc step 30-42 h; Jun 04: 5 → 10 members; Feb 06: 10 → 16 members, 32 ML → 40 ML]
• improvement of performance detectable for all thresholds along the years;
• still problems with high thresholds, but a good trend in 2007.
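The two definitions above translate directly into code. This is a minimal sketch with toy probabilities and 0/1 outcomes; the operational verification is of course computed over many stations and dates.

```python
def brier_score(probs, outcomes):
    """BS: mean squared difference between forecast probability and
    the 0/1 observed outcome."""
    n = len(probs)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / n

def brier_skill_score(probs, outcomes):
    """BSS = 1 - BS/BS_ref, with the sample climatology as reference;
    the forecast system is useful if BSS > 0."""
    clim = sum(outcomes) / len(outcomes)       # sample climate frequency
    bs_ref = brier_score([clim] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref
```

A perfect probabilistic forecast gives BSS = 1, forecasting the climatological frequency gives BSS = 0, and anything worse than climatology goes negative.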
Main results

• The COSMO-LEPS system has run on a daily basis since November 2002 (6 "failures" in almost 5 years of activity) and has become a "member-state time-critical application" at ECMWF (ECMWF operators involved in the suite monitoring).
• COSMO-LEPS products are used in EC projects (e.g. PREVIEW), field campaigns (e.g. COPS, MAP D-PHASE) and met-ops rooms across the COSMO community.
• Time series scores cannot easily disentangle improvements related to COSMO-LEPS itself from those due to better boundaries from the ECMWF EPS.
• Nevertheless, positive trends can be identified:
  • increase in ROC area scores and reduction in outlier percentages;
  • positive impact of increasing the population from 5 to 10 members (June 2004);
  • although some deficiencies in the skill of the system were identified after the system upgrades of February 2006 (from 10 to 16 members; from 32 to 40 model levels + EPS upgrade!!!), scores are encouraging throughout 2007.

2 more features:
• marked semi-diurnal cycle in COSMO-LEPS scores (better skill for "night-time" forecasts);
• better scores over the Alpine area than over the full domain (to be confirmed).
Improving COSMO-LEPS forecasts of
extreme events with reforecasts
F. Fundel, A. Walser, M. Liniger, C. Appenzeller
→ Poster
Why can reforecasts help to improve
meteorological warnings?
[Schema: model vs. observed climatological distributions, 25 Jun ± 14 d]
Improving CLEPS forecasts | COSMO GM | [email protected]
Spatial variation of model bias

[Maps: difference of the CDFs of observations and COSMO-LEPS, 24 h total precipitation, 10/2003-12/2006]
The model is too wet, worst in southern Switzerland.
COSMO-LEPS Model Climatology

Setup
• Reforecasts over a period of 30 years (1971-2000)
• Deterministic run of COSMO-LEPS (1 member), convection scheme: Tiedtke
• ERA-40 reanalysis as initial/boundary conditions
• 42 h lead time, 12:00 initial time
• Calculated on hpce at ECMWF
• Archived in MARS at ECMWF (surface: 30 parameters; 4 pressure levels: 8 parameters; 3 h step)
• Post-processing at CSCS
Calibrating an EPS

[Schema: ensemble forecast ranked against the model climate (×)]
New index
Probability of Return Period exceedance PRP
• Dependent on the climatology used to calculate
return levels/periods
• Here, a monthly subset of the climatology is used
(e.g. only data from September 1971-2000)
• PRP1 = Event that happens once per September
• PRP100 = Event that happens in one out of 100 Septembers
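An empirical version of the index can be sketched as follows. This is illustrative only: the return level is taken from a ranked monthly climatology, the function names are invented for this sketch, and the study also derives return levels from fitted distributions (next slides).

```python
def return_level(climate_sample, n_years, period):
    """Value exceeded on average once every `period` Septembers in an
    n_years-long (re)forecast climatology of September values,
    estimated by ranking: keep the k-th largest value, where k is the
    expected number of exceedances in n_years."""
    k = max(1, round(n_years / period))
    return sorted(climate_sample, reverse=True)[k - 1]

def prp(ensemble, level):
    """Probability of Return Period exceedance: fraction of ensemble
    members forecasting a value above the return level."""
    return sum(1 for m in ensemble if m > level) / len(ensemble)
```

Because the level is defined relative to the model's own climatology, the resulting probability is implicitly bias-corrected: a wet model gets a wet return level, and PRP measures rarity rather than raw amount.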
Probability of Return Period exceedance

[Maps: COSMO-PRP1/2 (twice per September), COSMO-PRP1 (once per September), COSMO-PRP2 (once in 2 Septembers), COSMO-PRP6 (once in 6 Septembers)]
PRP-based warngrams

[Warngrams for events occurring twice per September (15.8 mm/24h), once per September (21 mm/24h), once in 3 Septembers (26.3 mm/24h) and once in 6 Septembers (34.8 mm/24h)]
PRP with Extreme Value Analysis

The underlying distribution function of extreme values y = x - u above a threshold u is the Generalized Pareto Distribution (GPD), a special case of the GEV:

H(y) = 1 - (1 + ξ y / σ)^(-1/ξ),   σ = scale, ξ = shape

(C. Frei, Introduction to EVA)
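The GPD gives closed-form exceedance probabilities and return levels. The sketch below assumes a peaks-over-threshold setup with a known rate of threshold exceedances per season; the function names and parameter values are illustrative, not from the study.

```python
import math

def gpd_cdf(y, scale, shape):
    """GPD for excesses y = x - u above a threshold u:
       H(y) = 1 - (1 + shape*y/scale)^(-1/shape),
    with the shape -> 0 limit H(y) = 1 - exp(-y/scale)."""
    if abs(shape) < 1e-9:
        return 1.0 - math.exp(-y / scale)
    return 1.0 - (1.0 + shape * y / scale) ** (-1.0 / shape)

def gpd_return_level(period, rate, scale, shape, threshold):
    """Level exceeded on average once every `period` seasons, when
    `rate` exceedances of `threshold` occur per season.  Obtained by
    inverting H at exceedance probability 1/(period*rate)."""
    m = period * rate
    if abs(shape) < 1e-9:
        return threshold + scale * math.log(m)
    return threshold + scale / shape * (m ** shape - 1.0)
```

The fitted GPD lets PRP be evaluated for return periods far beyond the 30-year climatology (e.g. PRP60 below), which a purely empirical ranking cannot do.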
PRP with Extreme Value Analysis

[Maps: COSMO-PRP12 (GPD) and COSMO-PRP60 (GPD)]
Priority project «Verification of very high resolution models»

Slides from
• Felix Ament → Poster
• Ulrich Damrath
• Carlo Cacciamani
• Pirmin Kaufmann → Poster
Motivation for new scores

Which rain forecast would you rather use?
[Observed 24 h rain near Sydney, 21 Mar 2004, compared with a mesoscale model (5 km, RMS = 13.0) and a global model (100 km, RMS = 4.6)]
Fine scale verification: Fuzzy Methods

"… do not evaluate a point by point match!"

General Recipe
• (choose a threshold to define event and non-event)
• define scales of interest
• consider statistics at these scales for verification

[Schema: forecast and observation event grids compared via box statistics over increasing spatial scale and intensity]
→ score depends on spatial scale and intensity
A Fuzzy Verification Toolbox

Fuzzy method | Decision model for a useful forecast
Upscaling (Zepeda-Arce et al. 2000; Weygandt et al. 2004) | resembles obs when averaged to coarser scales
Anywhere in window (Damrath 2004), 50% coverage | predicts event over minimum fraction of region
Fuzzy logic (Damrath 2004), joint probability (Ebert 2002) | more correct than incorrect
Multi-event contingency table (Atger 2001) | predicts at least one event close to observed event
Intensity-scale (Casati et al. 2004) | lower error than random arrangement of obs
Fractions skill score (Roberts and Lean 2005) | similar frequency of forecast and observed events
Practically perfect hindcast (Brooks et al. 1998) | resembles forecast based on perfect knowledge of observations
Pragmatic (Theis et al. 2005) | can distinguish events and non-events
CSRR (Germann and Zawadzki 2004) | high probability of matching observed value
Area-related RMSE (Rezacova et al. 2005) | similar intensity distribution as observed

Ebert, E.E., 2007: Fuzzy verification of high resolution gridded forecasts: A review and proposed framework. Meteorol. Appls., submitted.
Toolbox available at http://www.bom.gov.au/bmrc/wefor/staff/eee/fuzzy_verification.zip
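As one concrete toolbox entry, the fractions skill score of Roberts and Lean (2005) can be sketched as below. This is a pure-Python illustration on list-of-list grids, not the toolbox code; the neighbourhood handling at the domain edge is a simplification.

```python
def neighborhood_fractions(binary_field, n):
    """Fraction of 'event' points inside an n x n window around each
    grid point (windows are clipped at the domain edge)."""
    rows, cols = len(binary_field), len(binary_field[0])
    half = n // 2
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            cnt = tot = 0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    if 0 <= i + di < rows and 0 <= j + dj < cols:
                        cnt += binary_field[i + di][j + dj]
                        tot += 1
            row.append(cnt / tot)
        out.append(row)
    return out

def fss(forecast, observed, threshold, n):
    """Fractions skill score:
       FSS = 1 - sum((Pf - Po)^2) / (sum(Pf^2) + sum(Po^2))."""
    fb = [[1 if v >= threshold else 0 for v in row] for row in forecast]
    ob = [[1 if v >= threshold else 0 for v in row] for row in observed]
    pf = neighborhood_fractions(fb, n)
    po = neighborhood_fractions(ob, n)
    num = sum((f - o) ** 2 for rf, ro in zip(pf, po) for f, o in zip(rf, ro))
    den = (sum(f ** 2 for r in pf for f in r) +
           sum(o ** 2 for r in po for o in r))
    return 1.0 - num / den if den else 1.0
```

A slightly displaced rain feature scores 0 at grid scale (n = 1) but recovers to 1 once the window is wide enough to contain both features, which is exactly the scale dependence the recipe above asks for.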
A Fuzzy Verification testbed

[Flow chart: virtual truth (radar data, model data, synthetic field) → perturbation generator → realizations of virtual erroneous model forecasts → fuzzy verification toolbox → analyzer → realizations of verification results; example matrices of resulting scores between 0.30 and 1.00]

Assessment of
• sensitivity (mean)
• [reliability (STD)]

Two ingredients:
1. Reference fields: hourly radar-derived rain fields, August 2005 flood event, 19 time stamps (Frei et al., 2005)
2. Perturbations: → next slide
Perturbations

Perturbation | Type of forecast error | Algorithm
PERFECT | no error, perfect forecast! | -
XSHIFT | horizontal translation | horizontal translation (10 grid points)
BROWNIAN | no small scale skill | random exchange of neighboring points (Brownian motion)
LS_NOISE | wrong large scale forcing | multiplication with a disturbance factor generated by large scale 2d Gaussian kernels
SMOOTH | high horizontal diffusion (or coarse scale model) | moving window arithmetic average
DRIZZLE | overestimation of low intensity precipitation | moving window filter setting each point below the average to the mean value
Perfect forecast
• All scores should equal 1!
• But, in fact, 5 out of 12 do not!
Expected response to perturbations

[Matrix of expected sensitivities per perturbation (XSHIFT, BROWNIAN, LS_NOISE, SMOOTH, DRIZZLE) as a function of spatial scale (fine to coarse) and intensity (low to high); sensitivity expected (= 0.0) or not expected (= 1.0)]

Summary in terms of contrast:
Contrast := mean(scores where no sensitivity is expected) - mean(scores where sensitivity is expected)
Summary: real cases

[Bar chart: contrast (-0.1 to 0.7) of each fuzzy method (Upscaling, Anywhere in Window, 50% coverage, Fuzzy Logic, Joint Prob., Multi-event cont. tab., Fraction Skill Score, Intensity Scale, Pragmatic Appr., Practically Perf. Hindcast, CSRR, Area-related RMSE) for the perturbations BROWNIAN, SMOOTH, LS_NOISE, DRIZZLE, XSHIFT; "leaking" scores marked]

• Leaking scores show an overall poor performance
• "Intensity scale" and "Practically Perfect Hindcast" perform well in general, but …
• Many scores have problems detecting large scale noise (LS_NOISE); "Upscaling" and "50% coverage" are beneficial in this respect
August 2005 flood event

Precipitation sum 18.8.-23.8.2005:
[Maps with area means of 73.1 mm, 62.8 mm, 106.2 mm and 43.2 mm]
(Hourly radar data calibrated using rain gauges (Frei et al., 2005))
Fuzzy Verification of August 2005 flood

Based on 3-hourly accumulations during the August 2005 flood period (18.8.-23.8.2005)
[Score matrices for COSMO-2 and COSMO-7 as a function of scale (7 km gridpoints) and intensity threshold (mm/3h), colour-coded from good to bad]
Fuzzy Verification of August 2005 flood

Difference of Fuzzy Scores
[Matrix of score differences vs. scale (7 km gridpoints) and intensity threshold (mm/3h): COSMO-2 better / neutral / COSMO-7 better]
D-PHASE: August 2007

[Intensity Scale score (preliminary), 3 h accumulation, for COSMO-7, COSMO-2, COSMO-EU and COSMO-DE]
"Fuzzy"-type verification for 12 h forecasts (vv = 06 till vv = 18) starting at 00 UTC, August 2007 (fraction skill score)

U. Damrath, COSMO GM, Athens 2007
First simple approach: averaging QPF in boxes of different size (what is the best size?) or over alert warning areas (Emilia-Romagna)

Sensitivity to box size and precipitation threshold
[Threat score for 24 h accumulated precipitation (0.2-0.7) vs. box size (0.4° × 0.4°, 0.5° × 0.5°) for thresholds of 1, 5, 10 and 20 mm/24h]
The positive impact of a larger box is more visible at higher precipitation thresholds.
Sensitivity to box size and precipitation threshold

[POD (QPF: +24 h) for boxes of 0.3°, 0.4° and 0.5° at thresholds of 1, 5, 10, 20 and 50 mm/24h]
Best result: box = 0.5 deg? (7 × 7 grid points …)
Sensitivity to box size and precipitation threshold

[TS (QPF: +24 h) for boxes of 0.3°, 0.4° and 0.5° at thresholds of 1, 5, 10, 20 and 50 mm/24h]
Best result: box = 0.5 deg? (7 × 7 grid points …)
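The box-averaging approach and the two scores can be sketched as follows. This is an illustrative stand-in (list-of-list grids, non-overlapping boxes), not the operational verification code.

```python
def box_average(field, n):
    """Average precipitation over non-overlapping n x n boxes
    (any partial boxes at the edge are dropped)."""
    rows, cols = len(field), len(field[0])
    out = []
    for i in range(0, rows - rows % n, n):
        out.append([sum(field[i + di][j + dj]
                        for di in range(n) for dj in range(n)) / (n * n)
                    for j in range(0, cols - cols % n, n)])
    return out

def pod_ts(forecast, observed, threshold):
    """POD = hits / (hits + misses);
    TS (threat score) = hits / (hits + misses + false alarms)."""
    hits = misses = false_alarms = 0
    for rf, ro in zip(forecast, observed):
        for f, o in zip(rf, ro):
            fe, oe = f >= threshold, o >= threshold
            hits += fe and oe
            misses += (not fe) and oe
            false_alarms += fe and (not oe)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    n = hits + misses + false_alarms
    ts = hits / n if n else float("nan")
    return pod, ts
```

A rain feature displaced by one grid point is a double penalty (one miss plus one false alarm) at grid scale, but becomes a hit once both points fall in the same box, which is why larger boxes help most at high thresholds.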
Some preliminary conclusions

QPF spatial averaging over boxes or alert areas produces a more usable QPF field for applications; space-time localisation errors are minimised.
Boxes or alert areas with a size of 5-6 times the grid resolution give the best results.
The positive impact of a larger box is more visible at higher precipitation thresholds.
The gain of high-resolution LAMs with respect to GCMs is greater for high thresholds and for precipitation maxima.
Results also improve with increased time averaging (problems with a 6-hour accumulation period, much better with a 24-hour accumulation period!).
1999-10-25 (Case L)
Temporal radius
[Maps: obs and forecasts with temporal radius rt = 1, 3, 6]
COSMO General Meeting 2007, Athens, Greece
[email protected] (presented by [email protected])
1999-10-25 (Case L)
Spatial radius
[Maps: obs and forecasts with spatial radius rxy = 5, 10, 15]
Italian COSMO model implementations: cross-verification

COSMO-ME (operational)
• 2 runs per day, starting at 00 and 12 UTC
• Domain size: 641 × 401; grid spacing 0.0625° (7 km); 40 layers / top ~22 km
• Time step: 40 s; forecast range: 72 hrs
• Lateral boundary conditions: IFS, updated every 3 hrs
• Initial state: interpolated 3D-PSAS; initialization: DFI
• External analysis: T, u, v, pseudo-RH, SP
• Special features: filtered topography
• Hardware: IBM P690 (ECMWF), 180 processors

COSMO-I7
• Forecast length: +72 hours
• Horizontal resolution: about 7 km; 40 vertical levels
• 3-hourly boundary conditions from the IFS/ECMWF forecast
• Initial conditions through a continuous assimilation cycle based on nudging

COSMO-IT (operational)
• Runge-Kutta scheme (Forstner and Doms, 2004); only "shallow convection" parametrization
• Domain size: 542 × 604; grid spacing 0.025° (2.8 km); 50 layers / top ~22 km
• Time step: 25 s; forecast range: 36 hrs; initial time of model run: 00 UTC
• Lateral boundary conditions: COSMO-MED, updated every 1 hr
• Initial state: nudging; initialization: none; external analysis: none
• Special features: filtered topography
• Hardware: IBM P690 (ECMWF), 354 processors

COSMO general meeting – Athens 18-21 September 2007 – WG5
Comparison between COSMO-ME and COSMO-IT (with upscaling)
Verification of very high resolution models (precipitation)

«Optimal» scale:
• 0.5°: 50 km
• 5 × grid (7 km): 35 km
• 30 × 2.2 km: 70 km

Some signals that the 2 km models are better than the 7 km models.
I would like to generate smoothed products.
Material starts to be collected: MAP D-PHASE, 2 km models.
Work has to continue.
Exchange of experience with other consortia.
Verification of COSMO-LEPS and
coupling with a hydrologic model
André Walser 1) and Simon Jaun 2)
1) MeteoSwiss
2) Institute for Atmospheric and Climate Science, ETH
Data flow for MAP D-PHASE
Main partner WSL: Swiss Federal Institute for Forest, Snow and Landscape Research
Comparison of different models
• August 2007 event, Linth at Mollis, initial time 2007-08-07