Ensemble versus Deterministic Performance at the Kilometer Scale

M. P. Mittermaier and G. Csima

Numerical Modelling, Weather Science, Met Office, Exeter, United Kingdom

Abstract

What is the benefit of a near-convection-resolving ensemble over a near-convection-resolving deterministic forecast? In this paper, a way in which ensemble and deterministic numerical weather prediction (NWP) systems can be compared is demonstrated using a probabilistic verification framework. Three years’ worth of raw forecasts from the Met Office Unified Model (UM) 12-member 2.2-km Met Office Global and Regional Ensemble Prediction System (MOGREPS-UK) ensemble and 1.5-km Met Office U.K. variable resolution (UKV) deterministic configuration were compared, utilizing a range of forecast neighborhood sizes centered on surface synoptic observing site locations. Six surface variables were evaluated: temperature, 10-m wind speed, visibility, cloud-base height, total cloud amount, and hourly precipitation. Deterministic forecasts benefit more from the application of neighborhoods, though ensemble forecast skill can also be improved. This confirms that while neighborhoods can enhance skill by sampling more of the forecast, a single deterministic model state in time cannot provide the variability, especially at the kilometer scale, where rapid error growth acts to limit local predictability. Ensembles are able to account for the uncertainty at larger, synoptic scales. The results also show that the rate of decrease in skill with lead time is greater for the deterministic UKV. MOGREPS-UK retains higher skill for longer. The concept of a skill differential is introduced to find the smallest neighborhood size at which the deterministic and ensemble scores are comparable. This was found to be 3 × 3 (6.6 km) for MOGREPS-UK and 11 × 11 (16.5 km) for UKV. Comparable scores are between 2% and 40% higher for MOGREPS-UK, depending on the variable. Naively, this would also suggest that an extra 10 km in spatial accuracy is gained by using a kilometer-scale ensemble.
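The neighborhood comparison described above can be sketched in a few lines. This is not the paper's actual implementation; it is a minimal illustration, assuming a simple clipped square window, of how a deterministic field can be converted into neighborhood exceedance probabilities and scored probabilistically (here with the Brier score). The function names and edge handling are illustrative assumptions. Note that the quoted neighborhood widths follow directly from grid length times window size: 3 × 2.2 km = 6.6 km for MOGREPS-UK and 11 × 1.5 km = 16.5 km for UKV.

```python
import numpy as np

def neighborhood_prob(field, threshold, n):
    """Fraction of grid points exceeding `threshold` within an n-by-n
    window centred on each point (window clipped at the domain edges)."""
    exceed = (field > threshold).astype(float)
    half = n // 2
    rows, cols = field.shape
    probs = np.empty_like(exceed)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            probs[i, j] = exceed[r0:r1, c0:c1].mean()
    return probs

def brier_score(probs, obs_binary):
    """Mean squared difference between forecast probabilities and
    binary (0/1) observed outcomes."""
    return np.mean((probs - obs_binary) ** 2)

# Deterministic forecasts yield 0/1 exceedances at each point; the
# neighborhood fraction turns them into pseudo-probabilities, which
# can then be scored against observations on the same footing as
# ensemble-derived probabilities.
field = np.array([[0.0, 5.0],
                  [5.0, 5.0]])          # toy deterministic field
probs = neighborhood_prob(field, 1.0, 3)
obs = np.ones((2, 2))                   # event observed everywhere
bs = brier_score(probs, obs)
```

In practice the window would be centred on each observing-site location rather than every grid point, but the scoring step is the same: once both systems produce probabilities, a single probabilistic score applies to both.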

For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Marion Mittermaier, marion.mittermaier@metoffice.gov.uk
