Search Results

You are looking at 1–5 of 5 items for

  • Author or Editor: Jamie K. Wolff
  • All content
Jamie K. Wolff, Brad S. Ferrier, and Clifford F. Mass
Full access
Jamie K. Wolff, Michelle Harrold, Tressa Fowler, John Halley Gotway, Louisa Nance, and Barbara G. Brown

Abstract

While traditional verification methods are commonly used to assess numerical model quantitative precipitation forecasts (QPFs) using a grid-to-grid approach, they generally offer little diagnostic information or reasoning behind the computed statistic. On the other hand, advanced spatial verification techniques, such as neighborhood and object-based methods, can provide more meaningful insight into differences between forecast and observed features in terms of skill with spatial scale, coverage area, displacement, orientation, and intensity. To demonstrate the utility of applying advanced verification techniques to mid- and coarse-resolution models, the Developmental Testbed Center (DTC) applied several traditional metrics and spatial verification techniques to QPFs provided by the Global Forecast System (GFS) and operational North American Mesoscale Model (NAM). Along with frequency bias and Gilbert skill score (GSS) adjusted for bias, both the fractions skill score (FSS) and Method for Object-Based Diagnostic Evaluation (MODE) were utilized for this study with careful consideration given to how these methods were applied and how the results were interpreted. By illustrating the types of forecast attributes appropriate to assess with the spatial verification techniques, this paper provides examples of how to obtain advanced diagnostic information to help identify what aspects of the forecast are or are not performing well.
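
For readers unfamiliar with the metrics named above, a minimal sketch of the underlying formulas follows (Python/NumPy). It computes the textbook frequency bias, the standard (non-bias-adjusted) Gilbert skill score, and the FSS from neighborhood fractions; the thresholds, neighborhood sizes, and function names are illustrative and this is not the DTC's MET implementation.

import numpy as np
from scipy.ndimage import uniform_filter

def contingency(fcst, obs, thresh):
    """2x2 contingency counts for exceedance of a precipitation threshold."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    correct_negatives = np.sum(~f & ~o)
    return hits, false_alarms, misses, correct_negatives

def frequency_bias(hits, false_alarms, misses):
    """Ratio of forecast to observed event frequency (1 = unbiased)."""
    return (hits + false_alarms) / (hits + misses)

def gilbert_skill_score(hits, false_alarms, misses, correct_negatives):
    """Standard GSS (equitable threat score); the bias adjustment used in the
    study is not reproduced here."""
    n = hits + false_alarms + misses + correct_negatives
    hits_chance = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_chance) / (hits + false_alarms + misses - hits_chance)

def fractions_skill_score(fcst, obs, thresh, neighborhood):
    """FSS: compare neighborhood fractional coverages of threshold exceedance."""
    pf = uniform_filter((fcst >= thresh).astype(float), size=neighborhood)
    po = uniform_filter((obs >= thresh).astype(float), size=neighborhood)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref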

Full access
Sarah M. Griffin, Jason A. Otkin, Christopher M. Rozoff, Justin M. Sieglaff, Lee M. Cronce, Curtis R. Alexander, Tara L. Jensen, and Jamie K. Wolff

Abstract

In this study, object-based verification using the method for object-based diagnostic evaluation (MODE) is used to assess the accuracy of cloud-cover forecasts from the experimental High-Resolution Rapid Refresh (HRRRx) model during the warm and cool seasons. This is accomplished by comparing cloud objects identified by MODE in observed and simulated Geostationary Operational Environmental Satellite 10.7-μm brightness temperatures for August 2015 and January 2016. The analysis revealed that more cloud objects and a more pronounced diurnal cycle occurred during August, with larger object sizes observed in January because of the prevalence of synoptic-scale cloud features. With the exception of the 0-h analyses, the forecasts contained fewer cloud objects than were observed. HRRRx forecast accuracy is assessed using two methods: traditional verification, which compares the locations of grid points identified as observation and forecast objects, and the MODE composite score, an area-weighted calculation using the object-pair interest values computed by MODE. The 1-h forecasts for both August and January were the most accurate for their respective months. Inspection of the individual MODE attribute interest scores showed that, even though displacement errors between the forecast and observation objects increased between the 0-h analyses and 1-h forecasts, the forecasts were more accurate than the analyses because the sizes of the largest cloud objects more closely matched the observations. The 1-h forecasts from August were found to be more accurate than those during January because the spatial displacement between the cloud objects was smaller and the forecast objects better represented the size of the observation objects.
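
The MODE composite score is described above only as an area-weighted calculation over object-pair interest values; a hedged sketch of one such weighting (each pair's interest weighted by the combined area of its forecast and observation objects) is given below. The function and its exact weighting are assumptions for illustration, not the formulation used in the study.

# Hedged sketch of an area-weighted aggregation of MODE object-pair interest
# values, in the spirit of the composite score described above. Each pair's
# interest is weighted by the combined area of the paired objects (assumed form).
def composite_score(pairs):
    """pairs: iterable of (interest, fcst_area, obs_area) for matched object pairs."""
    weighted = sum(interest * (fa + oa) for interest, fa, oa in pairs)
    total_area = sum(fa + oa for _, fa, oa in pairs)
    return weighted / total_area if total_area > 0 else float("nan")

# Example: three matched cloud-object pairs (interest, areas in grid squares)
print(composite_score([(0.92, 1200, 1100), (0.75, 300, 420), (0.60, 80, 65)]))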

Full access
Jamie K. Wolff, Michelle Harrold, Tracy Hertneky, Eric Aligo, Jacob R. Carley, Brad Ferrier, Geoff DiMego, Louisa Nance, and Ying-Hwa Kuo

Abstract

A wide range of numerical weather prediction (NWP) innovations are under development in the research community that have the potential to positively impact operational models. The Developmental Testbed Center (DTC) helps facilitate the transition of these innovations from research to operations (R2O). With the large number of innovations available in the research community, it is critical to clearly define a testing protocol to streamline the R2O process. The DTC has defined such a process that relies on shared responsibilities of the researchers, the DTC, and operational centers to test promising new NWP advancements. As part of the first stage of this process, the DTC instituted the mesoscale model evaluation testbed (MMET), which established a common testing framework to assist the research community in demonstrating the merits of developments. The ability to compare performance across innovations for critical cases provides a mechanism for selecting the most promising capabilities for further testing. If the researcher demonstrates improved results using MMET, then the innovation may be considered for the second stage of comprehensive testing and evaluation (T&E) prior to entering the final stage of preimplementation T&E.

MMET provides initialization and observation datasets for several case studies and multiday periods. In addition, the DTC provides baseline results for select operational configurations that use the Advanced Research version of the Weather Research and Forecasting (WRF) Model (ARW) or the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling System Nonhydrostatic Multiscale Model on the B grid (NEMS-NMMB). These baselines can be used for testing sensitivities to different model versions or configurations in order to improve forecast performance.

Full access
Burkely T. Gallo, Jamie K. Wolff, Adam J. Clark, Israel Jirak, Lindsay R. Blank, Brett Roberts, Yunheng Wang, Chunxi Zhang, Ming Xue, Tim Supinie, Lucas Harris, Linjiong Zhou, and Curtis Alexander

Abstract

Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that give the best area under the receiver operating characteristic (ROC) curve result in very high probabilities, while optimizing the parameters based on the Brier score reliability component results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
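
As a hedged illustration of the surrogate severe idea described above (threshold the hourly-maximum UH, then smooth the exceedance field into probabilities), a short sketch follows; the threshold, smoothing length scale, and choice of a Gaussian kernel are placeholders rather than the values tuned in the experiment.

# Illustrative sketch of building a "surrogate severe" probability field from
# updraft helicity (UH): exceedances of a UH threshold are smoothed with a
# Gaussian kernel to yield probabilities. The threshold and sigma below are
# placeholders, not the parameters optimized in the study.
import numpy as np
from scipy.ndimage import gaussian_filter

def surrogate_severe(uh_max, uh_thresh=75.0, sigma_km=120.0, dx_km=3.0):
    """uh_max: 2D array of hourly-maximum UH (m^2 s^-2); dx_km: grid spacing."""
    exceed = (uh_max >= uh_thresh).astype(float)
    return gaussian_filter(exceed, sigma=sigma_km / dx_km)  # probabilities in [0, 1]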

Restricted access