
TAIWIN
model verification task
Jamie Wolff
Team members: Tressa Fowler, John Halley Gotway, Michelle
Harrold, Tracy Hertneky, Kyoko Ikeda, Scott Landolt
Collaborations with: Greg Thompson and Mei Xu
FAA Icing Weather Tools Review
13 July 2016
Model verification task
 Task: Examine microphysics forecasts and NWP model
forecast performance for TAIWIN-relevant fields
 Conduct comprehensive verification to establish a
baseline for how well current operational models perform;
monitor model performance in future years to quantify the
impact of developments
 Deliverable: Internal FAA report
 Status:
  Model datasets
  Observation datasets
  Verification approaches
  Preliminary results
Model datasets
 Operational HRRR (3 km)
  Hybrid level data (50 levels)
   Mixing ratio (at each level): cloud water, cloud ice, rain, snow, graupel
 Operational NAMnest (4 km)
  Isobaric level data (Grid 227, 5-km Lambert conformal)
   42 isobaric levels from 10 to 1000 hPa
   Mixing ratio (at each level): cloud water, cloud ice, rain, snow
 Pulled for 1 Jan – 31 March 2016
 Regridded each model to a common 3-km domain for verification purposes (see the sketch below)
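In practice this regridding step is typically done with a tool such as MET's regrid_data_plane; purely as an illustration, a minimal nearest-neighbor regrid onto the common 3-km grid might look like the following Python sketch (the function and argument names are hypothetical):

```python
import numpy as np
from scipy.interpolate import griddata

def regrid_nearest(src_lat, src_lon, src_field, dst_lat, dst_lon):
    """Nearest-neighbor regrid of one 2-D model field onto the common
    3-km verification grid. Treats lat/lon as planar coordinates,
    which is adequate for a nearest-neighbor lookup at these grid
    spacings; budget/conservative methods would be preferable for
    accumulated precipitation."""
    pts = np.column_stack([src_lat.ravel(), src_lon.ravel()])
    dst = np.column_stack([dst_lat.ravel(), dst_lon.ravel()])
    out = griddata(pts, src_field.ravel(), dst, method="nearest")
    return out.reshape(dst_lat.shape)
```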
 Model output examined
  Mixing ratios: rain and snow
  Categorical (sfc) precipitation type: rain, snow, ice pellets, freezing rain
 Mixing ratios (lowest level)
  HRRR
  NAM (no graupel)
 Categorical ptype
  HRRR (not mutually exclusive except RA vs. FZRA)
  NAM (ensemble technique where the dominant category is declared the ptype; a sketch follows)
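The slide does not show the NAM's dominant-category logic itself; a minimal Python sketch of a dominant-category decision of this kind (the votes structure and function name are hypothetical, not the operational NAM code) could look like:

```python
import numpy as np

def dominant_ptype(votes):
    """Declare the dominant precipitation-type category at each point.

    votes: dict mapping category name ("RAIN", "SNOW", "FRZR", "ICEP")
    to a 2-D array counting how many algorithm members flagged that
    category at each grid point."""
    cat_list = list(votes)
    stack = np.stack([votes[c] for c in cat_list])  # (ncat, ny, nx)
    cats = np.array(cat_list)
    labels = cats[np.argmax(stack, axis=0)]         # most-voted category
    labels[stack.sum(axis=0) == 0] = "NONE"         # no votes at all
    return labels
```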
Verification datasets
 Point observations:
  METARs
   Any report of weather types: RA, DZ, FZRA, FZDZ, SN, GR, GS, PE, PL, SG (purposely ignoring VC and UP reports)
   ASOS A/B stations are used to identify reliable observations of non-occurrence of precipitation (no weather type report – set to NONE)
  mPING
   Any report of data types: rain (3), freezing rain (4), drizzle (5), freezing drizzle (6), ice pellets-sleet (7), snow and/or graupel (8), mixed rain and snow (9), mixed ice pellets and snow (10), mixed rain and ice pellets (11), graupel (12), mixed freezing rain and ice pellets (48), none (2)
  Precipitation type reports categorized and used in MET: M*_RAIN, M*_SNOW, M*_FRZR, M*_ICEP, M*_NONE (see the sketch after this list)
 Gridded observations:
  Multi-Radar/Multi-Sensor (MRMS) – CONUS, ~1-km resolution
   Hourly QPE (gauge-corrected radar estimates)
   Automated surface precipitation classification (ptype): seven classes categorized into rain, snow, or none
  Regridded to the same 3-km domain as the model output
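A hypothetical sketch of the report-to-category step. The mPING codes are those listed above and the five target categories follow the M*_RAIN / M*_SNOW / M*_FRZR / M*_ICEP / M*_NONE convention; how the mixed categories were actually assigned is not stated in the source, so those entries are assumptions.

```python
MPING_TO_CAT = {
    2: "NONE",   # none
    3: "RAIN",   # rain
    4: "FRZR",   # freezing rain
    5: "RAIN",   # drizzle (assumed grouped with rain)
    6: "FRZR",   # freezing drizzle (assumed grouped with freezing rain)
    7: "ICEP",   # ice pellets/sleet
    8: "SNOW",   # snow and/or graupel
    9: "SNOW",   # mixed rain and snow (assumed)
    10: "SNOW",  # mixed ice pellets and snow (assumed)
    11: "ICEP",  # mixed rain and ice pellets (assumed)
    12: "SNOW",  # graupel (assumed)
    48: "FRZR",  # mixed freezing rain and ice pellets (assumed)
}

def categorize_mping(code):
    """Map an mPING report code to a MET verification category;
    unknown codes are dropped (returned as None)."""
    return MPING_TO_CAT.get(code)
```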
[Figure: Histograms of METAR observation counts by valid hour (UTC), 0–23: 422,585 rain reports, 404,195 snow reports, 7,345 FRZR reports, 952 ICEP reports, and 8,006,846 NONE reports.]
Verification approaches
 Grid-to-grid comparisons
  Model accumulated precipitation vs. MRMS QPE
  Model mixing ratios (RWMR[T>0]/SNMR) vs. MRMS ptype (rain/snow/none); a sketch of this mapping follows the list
  Model categorical precipitation type vs. MRMS ptype (rain/snow/none)
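A minimal sketch of deriving a precipitation type from the lowest-level mixing ratios with the temperature screens noted on this slide; the mixing-ratio threshold and the precedence when both species are present are assumptions for illustration, not from the source.

```python
import numpy as np

def mixratio_ptype(rwmr, snmr, t_sfc, thresh=1e-6):
    """Assign RAIN/FRZR/SNOW/NONE from lowest-level mixing ratios.

    rwmr, snmr: rain/snow mixing ratios (kg/kg);
    t_sfc: near-surface temperature (deg C)."""
    ptype = np.full(rwmr.shape, "NONE", dtype="<U4")
    ptype[snmr > thresh] = "SNOW"
    ptype[(rwmr > thresh) & (t_sfc > 0.0)] = "RAIN"  # rain screened to T > 0
    ptype[(rwmr > thresh) & (t_sfc < 0.0)] = "FRZR"  # freezing rain, T < 0
    return ptype
```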
 Grid-to-point comparisons
  Model mixing ratios vs. METAR/mPING ptype (rain[T>0]/frzr[T<0]/snow/none)
  Model categorical precipitation type vs. METAR/mPING ptype (rain[T>0]/frzr[T<0]/snow/icep/none)
  Looked at neighborhood widths of 1 (nearest), 2, 3, 4, 5, 6 (see the sketch after this list)
   Categorical: fraction of points in the n x n box that are matches
   Mixing ratios: maximum forecast value of points in the n x n box
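The two neighborhood operations reduce to standard moving-window filters. MET computes these internally, so the following Python sketch is illustrative only (a width of 1 reduces to the nearest point):

```python
from scipy.ndimage import maximum_filter, uniform_filter

def nbrhd_max(field, width):
    """Maximum forecast value in the width x width box around each
    point (used here for the mixing-ratio comparisons)."""
    return maximum_filter(field, size=width, mode="nearest")

def nbrhd_match_fraction(fcst_cat, obs_cat, width):
    """Fraction of points in the width x width box where the forecast
    category matches the observed category (categorical ptype)."""
    match = (fcst_cat == obs_cat).astype(float)
    return uniform_filter(match, size=width, mode="nearest")
```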
 Statistics computed (a worked sketch follows)
  Probability of Detection (POD) of yes and no
  False Alarm Ratio (FAR)
  Gilbert Skill Score (GSS)
  Frequency Bias (Fbias)
  Performance Diagrams (PODy, Success Ratio, Bias, Critical Success Index)
 Processing done, plots created – analysis underway!
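All of the listed scores derive from the 2x2 contingency table of hits, misses, false alarms, and correct negatives. A minimal Python sketch using the standard formulas (assumes nonzero denominators):

```python
def contingency_stats(h, m, f, c):
    """Scores from the 2x2 contingency table: h = hits, m = misses,
    f = false alarms, c = correct negatives."""
    total = h + m + f + c
    pod_y = h / (h + m)                 # POD of yes events
    pod_n = c / (c + f)                 # POD of no events
    far = f / (h + f)                   # False Alarm Ratio
    fbias = (h + f) / (h + m)           # Frequency Bias
    csi = h / (h + m + f)               # Critical Success Index
    h_rand = (h + m) * (h + f) / total  # hits expected by chance
    gss = (h - h_rand) / (h + m + f - h_rand)  # Gilbert Skill Score
    return {"PODy": pod_y, "PODn": pod_n, "FAR": far, "FBIAS": fbias,
            "CSI": csi, "GSS": gss, "SR": 1.0 - far}  # SR: Success Ratio
```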
Sample ptype results: Probability of Detection (PODy)
 Model ptype vs. METAR; panels for CRAIN, CSNOW, CFRZR, CICEP
 opHRRR and opNAMnest (cf. Benjamin et al. 2016 WAF)
 4x4 window (~12 km), 95% CIs, CONUS, JFM 2016
Sample ptype results: False Alarm Ratio (FAR)
 Model ptype vs. METAR; panels for CRAIN, CSNOW, CFRZR, CICEP
 opHRRR and opNAMnest (cf. Benjamin et al. 2016 WAF)
 4x4 window (~12 km), 95% CIs, CONUS, JFM 2016
Sample ptype results: Frequency Bias
 [Figure annotations: perfect score; overforecast; underforecast]
Future work
 Monitor operational model performance to quantify the
impact of developments
 Utilize output from the Freezing Drizzle algorithm to
further assess surface weather type in the models using
these enhanced observations
 Explore methods and observations to verify conditions aloft
 Assist with evaluating newly developed techniques within
the TAIWIN modeling task area (e.g., HRRR-TLE, aerosol-aware scheme, cloud underproduction)
 Apply spatial and object-based verification techniques to
acquire advanced diagnostic information that helps identify
strengths and weaknesses of the models