Method for Object-based Diagnostic Evaluation


Assessing added value of
high resolution forecasts
Emiel van der Plas
Maurice Schmeits, Kees Kok
KNMI, The Netherlands
Introduction
Question: do high resolution (convection resolving) models perform
better than the previous generation models?
T2M, wind, precipitation!
KNMI:
Harmonie (2.5 km) > Hirlam (11, 22 km)?
Harmonie > ECMWF (deterministic run: T1279)?
Verification of high resolution NWP forecasts is challenging
Precipitation: highly localised
Radar/station data: double penalty!
If there is extra skill, how to demonstrate objectively?
In this talk: Fuzzy methods and Model Output Statistics
2/15
Set-up
Harmonie (‘ECJAN’):
2.5 km, 300x300 domain, AROME physics, 3DVAR
ECMWF boundaries
Run with 800x800 points and Hirlam boundaries: insufficient
archive available…
Hirlam (D11):
22 km, 136 x 226, 3DVAR
ECMWF Operational (T1279)
±16 km, global, 3DVAR
• Period: 1st February 2012 - 31st May 2012
All output resampled to Harmonie grid
(nearest neighbour)
Radar: Dutch precipitation radar composite
1 km
3/15
Example of Direct Model Output
E.g. frontal precipitation, 7th March 2012
[Figure: precipitation fields from ECJAN (Harmonie), D11 (Hirlam), ECMWF and the radar composite]
4/15
Neo-classical verification: fuzzy methods
• MET: suite of verification tools by NCAR (WRF)
Grid-based scores, with respect to gridded radar observations:
GSS, 25x25, > 2 mm/3h
FSS, 3x3, > 1 mm/3h
Fractions Skill Score (Roberts & Lean, 2008)
Hanssen-Kuiper discriminant, Gilbert Skill Score (ETS), …
Object based scores (not in this paper)
5/15
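The Fractions Skill Score above compares the fractional coverage of threshold exceedances in neighbourhoods around each grid point, which avoids the double penalty of point-wise scores. A minimal numpy sketch (function names and test fields are illustrative, not MET's implementation):

```python
import numpy as np

def fractions(exceed, n):
    """Fraction of points above the threshold in the n x n neighbourhood
    of each grid point (zero padding at the domain edges)."""
    h, w = exceed.shape
    p = np.pad(exceed.astype(float), n // 2)
    # box sums from the 2-D cumulative sum (integral image)
    s = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    return (s[n:n + h, n:n + w] - s[:h, n:n + w]
            - s[n:n + h, :w] + s[:h, :w]) / n ** 2

def fss(fcst, obs, thr, n):
    """Fractions Skill Score (Roberts & Lean, 2008) for neighbourhood
    size n and precipitation threshold thr."""
    pf, po = fractions(fcst > thr, n), fractions(obs > thr, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

A perfect forecast scores 1; a forecast that displaces a feature by one grid cell scores 0 at grid scale (n = 1) but recovers skill as the neighbourhood grows.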
MOS: what is relevant in DMO?
• How would a trained meteorologist look at direct model output?
6/15
Predictors
[Slides 7-9: figure-only slides illustrating candidate predictors in the direct model output]
Model Output Statistics: predictive potential
Construct a set of predictors (per model, station, starting and lead time):
For now: use precipitation only
Use various ‘areas of influence’: 25,50,75,100 km
DMO, coverage, max(DMO) within area,
distance to forecasted precipitation, … , threshold!
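The neighbourhood predictors listed above can be sketched for a single station in a few lines of numpy (the function name, grid spacing and wet threshold are illustrative assumptions, not the study's exact definitions):

```python
import numpy as np

def station_predictors(field, i, j, radius_km=25.0, dx_km=2.5, thr=0.3):
    """Candidate precipitation predictors for a station at grid point
    (i, j): DMO value, coverage and maximum within the area of
    influence, and distance to the nearest forecast precipitation."""
    r = int(round(radius_km / dx_km))
    win = field[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
    dmo = field[i, j]                 # direct model output at the station
    coverage = np.mean(win > thr)     # wet fraction within the area
    local_max = win.max()             # max(DMO) within the area
    wi, wj = np.nonzero(field > thr)  # distance to forecast precipitation
    dist_km = np.hypot(wi - i, wj - j).min() * dx_km if wi.size else np.inf
    return dmo, coverage, local_max, dist_km
```

A dry station point next to a forecast shower then gets dmo = 0 but a small dist_km and a nonzero coverage, which is exactly the displacement information a forecaster reads off the map.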
Apply (extended) logistic regression [Wilks 2009]
Use the transformed threshold sqrt(q) as a predictor:
this yields a complete predictive distribution (Wilks, 2009)
Forward stepwise selection, backward deletion
using R: stepPlr (Mee Young Park and Trevor Hastie, 2008)
Verify probabilities based on coefficients of selected predictors in
terms of reliability diagrams, Brier Skill Score
10/15
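The Brier Skill Score used to verify these probabilities measures squared-error skill relative to a reference forecast, here taken as the sample climatology (a minimal sketch; names are illustrative):

```python
import numpy as np

def brier_skill_score(p, y):
    """Brier Skill Score of probability forecasts p for binary outcomes y,
    relative to the sample climatology (base rate)."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    bs = np.mean((p - y) ** 2)              # Brier score of the forecast
    bs_clim = np.mean((y.mean() - y) ** 2)  # reference: constant base rate
    return 1.0 - bs / bs_clim
```

A perfect forecast gives BSS = 1; always forecasting the base rate gives BSS = 0; negative values mean less skill than climatology.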
Results: example poor skill
[Figure: verification of probability forecasts at 00UTC+003 for Harmonie, ECMWF and D11]
11/15
Results: example good skill
[Figure: verification of probability forecasts at 00UTC+012 for Harmonie, ECMWF and D11]
12/15
Outlook
• No conclusive results
• Grid-based, “fuzzy” methods suggest reasonable skill for high
resolution NWP model (Harmonie)
• MOS: mixed bag
Frontal systems (FMAM) well captured by hydrostatic models
• To do:
Larger dataset
Training data, independent data
Convective season: more cases, higher thresholds
Include Harmonie run on large domain
…
13/15
Extended Logistic Regression (ELR)
Binary predictand y_i (here: precip > q)
Probability: logistic:
p_i = exp(f_i) / (1 + exp(f_i)),  with f_i = b_0 + b_1 sqrt(q) + Σ_j b_j x_ij
Joint likelihood:
L = Π_i p_i^y_i (1 - p_i)^(1 - y_i)
L2 penalisation: minimise
-log L + (λ/2) ||b||²
(using R: stepPlr by Mee Young Park and Trevor Hastie, 2008)
Use the transformed threshold sqrt(q) as a predictor:
this yields a complete predictive distribution (Wilks, 2009)
Few cases, many potential predictors: pool stations, max 5 terms
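The penalised ELR fit above can be sketched with a ridge-penalised Newton iteration, a minimal stand-in for stepPlr; the synthetic data, function names and penalty weight are illustrative assumptions:

```python
import numpy as np

def fit_elr(x, q, y, lam=1.0, iters=30):
    """Fit P(precip > q) = 1 / (1 + exp(-f)), f = b0 + b1*sqrt(q) + b2*x,
    by minimising -log L + (lam/2)*||b||^2 (Newton / ridge IRLS)."""
    X = np.column_stack([np.ones_like(q), np.sqrt(q), x])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p) - lam * b
        hess = (X * (p * (1 - p))[:, None]).T @ X + lam * np.eye(len(b))
        b += np.linalg.solve(hess, grad)
    return b

def prob_exceed(b, x, q):
    """Predictive exceedance probability for any threshold q."""
    f = b[0] + b[1] * np.sqrt(q) + b[2] * x
    return 1.0 / (1.0 + np.exp(-f))

# pooled training set: one row per (case, threshold), as when pooling
# stations and thresholds with few cases
rng = np.random.default_rng(1)
dmo = rng.uniform(0, 10, 400)                    # synthetic forecast precip
amount = np.maximum(0.0, dmo + rng.normal(0, 2, 400))
thrs = np.array([0.3, 1.0, 3.0, 5.0])
x = np.repeat(dmo, len(thrs))
q = np.tile(thrs, len(dmo))
y = (np.repeat(amount, len(thrs)) > q).astype(float)
b = fit_elr(x, q, y)
```

Because sqrt(q) enters as a predictor, one coefficient vector gives mutually consistent probabilities for every threshold, i.e. a complete distribution function.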
14/15
Period
1st February 2012 - 31st May 2012
The archive available for Harmonie was the limiting factor
Mostly frontal precipitation
[Figure: accumulated precipitation over the period from ECJAN, D11, ECMWF and the radar composite]
15/15
Period: base rate (HSS, HK, FBIAS)
[Figure: base rate and categorical scores (HSS, HK, FBIAS) over the verification period]
Verification: classical, Fraction Skill Score
Classical or categorical verification, e.g.:
Hanssen-Kuiper discriminant (aka True Skill Statistic, Peirce Skill Score):
HK = (a d - b c) / ((a + c)(b + d))
Fraction Skill Score (Roberts & Lean, 2008)
Straightforward interpretation, but: double penalty for the point-wise categorical scores
17/15
CTS:
                Observed
                yes    no
Forecast  yes |  a      b
          no  |  c      d
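The categorical scores follow directly from the contingency-table entries a (hits), b (false alarms), c (misses) and d (correct negatives); a minimal sketch (function name illustrative):

```python
def categorical_scores(a, b, c, d):
    """Scores from a 2x2 contingency table: a hits, b false alarms,
    c misses, d correct negatives."""
    n = a + b + c + d
    hk = a / (a + c) - b / (b + d)           # Hanssen-Kuiper discriminant
    a_ref = (a + b) * (a + c) / n            # hits expected by chance
    gss = (a - a_ref) / (a + b + c - a_ref)  # Gilbert Skill Score (ETS)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    fbias = (a + b) / (a + c)                # frequency bias
    return hk, gss, hss, fbias
```

HK rewards discrimination (hit rate minus false alarm rate), GSS discounts chance hits, and FBIAS > 1 flags over-forecasting of the event.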
Verification: MODE (object based), wavelets
MET provides access to MODE analysis:
“Method for Object-based Diagnostic Evaluation”
[Figure: forecast (FC) and observed (OBS) precipitation fields]
Forecast and observation fields: convolved, thresholded, …
18/15
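The convolve-and-threshold step above can be sketched in plain numpy, with box smoothing and 4-connected flood-fill labelling; the radius, threshold and function names are illustrative, not MET's settings:

```python
import numpy as np

def identify_objects(field, radius=1, thr=1.0):
    """MODE-style object identification: smooth the field with a box
    filter, threshold it, and label 4-connected regions."""
    h, w = field.shape
    k = 2 * radius + 1
    pad = np.pad(field.astype(float), radius)
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(k) for j in range(k)) / k ** 2
    mask = smooth > thr
    labels, n_obj = np.zeros((h, w), dtype=int), 0
    for i, j in zip(*np.nonzero(mask)):  # flood-fill labelling
        if labels[i, j]:
            continue
        n_obj += 1
        stack = [(i, j)]
        labels[i, j] = n_obj
        while stack:
            a, b = stack.pop()
            for x, y in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                if 0 <= x < h and 0 <= y < w and mask[x, y] and not labels[x, y]:
                    labels[x, y] = n_obj
                    stack.append((x, y))
    return labels, n_obj

def object_attributes(labels, n_obj):
    """Per-object area and centre of mass (row, col)."""
    attrs = []
    for m in range(1, n_obj + 1):
        pts = np.argwhere(labels == m)
        attrs.append((len(pts), tuple(pts.mean(axis=0))))
    return attrs
```

Applied to both the forecast and the observed field, the resulting object attributes (area, centre of mass, …) are what MODE matches and compares across the two fields.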
Verification: MODE (object based), wavelets
MET provides access to MODE analysis:
“Method for Object-based Diagnostic Evaluation”
Object attributes: centre of mass, area, angle, convex hull, …
… merged, matched and compared.
[Figure: matched forecast (FC) and observed (OBS) objects]
19/15