Large-Sample Application of Radar Reflectivity Object-Based Verification to Evaluate HRRR Warm-Season Forecasts

Jeffrey D. Duda, Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder, Boulder, Colorado, and NOAA/Global Systems Laboratory, Boulder, Colorado

and
David D. Turner, NOAA/Global Systems Laboratory, Boulder, Colorado


Abstract

The Method for Object-based Diagnostic Evaluation (MODE) is used to perform an object-based verification of approximately 1400 forecasts of composite reflectivity from the operational HRRR during April–September 2019. In this study, MODE is configured to prioritize deep, moist convective storm cells typical of those that produce severe weather across the central and eastern United States during the warm season. In particular, attributes related to distance and size are given the greatest weights when computing interest in MODE. HRRR tends to overforecast all objects, but it substantially overforecasts both small objects at low-reflectivity thresholds and large objects at high-reflectivity thresholds. HRRR tends either to underforecast objects in the southern and central plains or to have a correct frequency bias there, whereas it overforecasts objects across the southern and eastern United States. Attribute comparisons reveal the inability of the HRRR to fully resolve convective-scale features, as well as the impacts of data assimilation and loss of skill during the initial forecast hours. Scalar metrics are defined and computed from the MODE output, relying chiefly on the interest value. The object-based threat score (OTS), in particular, characterizes HRRR forecast performance similarly to the Heidke skill score, but with differing magnitudes, suggesting value in adopting an object-based approach to forecast verification. The typical distance between matched object centroids is also analyzed and shows gradual degradation with increasing forecast length.
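To make the mechanics concrete, the following sketch shows how a MODE-style total interest can be formed as a weighted combination of per-attribute interest values, and how an interest-based object-based threat score (OTS) might then be computed from matched object pairs. The attribute names, weight values, and OTS form here are illustrative assumptions (the OTS follows the interest-weighted-area form used in earlier object-based verification studies), not the exact configuration used in this paper; the abstract states only that distance- and size-related attributes received the greatest weights.

```python
# Hypothetical sketch of MODE-style matching metrics.
# Attribute names and weight values are illustrative assumptions,
# chosen to reflect the stated emphasis on distance and size.
WEIGHTS = {
    "centroid_dist": 4.0,   # distance-related: weighted heavily
    "boundary_dist": 4.0,   # distance-related: weighted heavily
    "area_ratio": 3.0,      # size-related: weighted heavily
    "angle_diff": 1.0,      # orientation: weighted lightly
}

def total_interest(attr_interests, weights=WEIGHTS):
    """Combine per-attribute interest values (each in [0, 1]) into a
    single total interest via a weighted average, as MODE does."""
    num = sum(weights[k] * attr_interests[k] for k in weights)
    return num / sum(weights.values())

def object_threat_score(matched_pairs, total_fcst_area, total_obs_area):
    """Interest-weighted area overlap in the spirit of the OTS: each
    matched pair (interest, fcst_area, obs_area) contributes its
    interest times the combined area of its two objects, normalized
    by the total area of all forecast and observed objects."""
    matched = sum(i * (af + ao) for i, af, ao in matched_pairs)
    return matched / (total_fcst_area + total_obs_area)
```

In this form, a perfect forecast (every object matched with interest 1) yields OTS = 1, while a forecast with no matched objects yields OTS = 0, which is why the OTS behaves qualitatively like a gridpoint skill score while operating on whole storm objects.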

Significance Statement

Improving weather forecast models requires determining where a model performs well and where it does not. Gridpoint-based methods for assessing model forecasts have known shortfalls when applied to high-resolution models that can forecast individual thunderstorms. We present an object-based verification procedure that identifies meteorological features such as thunderstorms rather than comparing forecasts with the verifying truth gridpoint by gridpoint. This article presents some of the information gained from this assessment and illustrates how object-based verification adds information beyond gridpoint-based assessment.

For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Jeffrey D. Duda, jeffduda319@gmail.com
