Search Results

Showing 1–7 of 7 items for Author or Editor: Michelle Harrold
William A. Gallus Jr. and Michelle A. Harrold

Abstract

A severe derecho impacted the Midwestern United States on 10 August 2020, causing over $12 billion (U.S. dollars) in damage and producing peak winds estimated at 63 m s⁻¹, with the worst impacts in Iowa. The event was not forecast well by operational forecasters, nor even by operational and quasi-operational convection-allowing models. In the present study, nine simulations are performed using the limited-area model version of the Finite-Volume Cubed-Sphere model (FV3-LAM) with three horizontal grid spacings and two physics suites. In addition, when a prototype of the Rapid Refresh Forecast System (RRFS) physics suite is used, sensitivity tests are performed to examine the impact of using the Grell–Freitas (GF) convective scheme. Several unusual results are obtained. With both the RRFS (not using GF) and Global Forecast System (GFS) physics suites, simulations using relatively coarse 13- and 25-km horizontal grid spacing do a much better job of showing an organized convective system in Iowa during the daylight hours of 10 August than the 3-km grid spacing runs. In addition, the 25-km RRFS run becomes much worse when the GF convective scheme is used. The 3-km RRFS run that does not use the GF scheme develops spurious nocturnal convection the night before the derecho, removing instability and preventing the derecho from being simulated at all. When GF is used, the spurious storms are removed and an excellent forecast is obtained, with an intense bowing echo, an exceptionally strong cold pool, and roughly 50 m s⁻¹ surface wind gusts.

Restricted access
Jamie K. Wolff, Kathryn R. Fossell, Michelle Harrold, Michael Kavulich Jr., and John Halley Gotway
Full access
Jamie K. Wolff, Michelle Harrold, Tressa Fowler, John Halley Gotway, Louisa Nance, and Barbara G. Brown

Abstract

While traditional verification methods are commonly used to assess numerical model quantitative precipitation forecasts (QPFs) using a grid-to-grid approach, they generally offer little diagnostic information or reasoning behind the computed statistic. On the other hand, advanced spatial verification techniques, such as neighborhood and object-based methods, can provide more meaningful insight into differences between forecast and observed features in terms of skill with spatial scale, coverage area, displacement, orientation, and intensity. To demonstrate the utility of applying advanced verification techniques to mid- and coarse-resolution models, the Developmental Testbed Center (DTC) applied several traditional metrics and spatial verification techniques to QPFs provided by the Global Forecast System (GFS) and operational North American Mesoscale Model (NAM). Along with frequency bias and Gilbert skill score (GSS) adjusted for bias, both the fractions skill score (FSS) and Method for Object-Based Diagnostic Evaluation (MODE) were utilized for this study with careful consideration given to how these methods were applied and how the results were interpreted. By illustrating the types of forecast attributes appropriate to assess with the spatial verification techniques, this paper provides examples of how to obtain advanced diagnostic information to help identify what aspects of the forecast are or are not performing well.
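The scores named above follow standard definitions. As a rough illustration (a minimal sketch, not MET's implementation; the function names, thresholds, and neighborhood size are invented for this example), frequency bias and the Gilbert skill score can be computed from a 2×2 contingency table, and the fractions skill score from neighborhood fractions of threshold exceedance:

```python
import numpy as np

def contingency_scores(fcst, obs, thresh):
    """Frequency bias and Gilbert skill score (GSS/ETS) from a 2x2 table."""
    f = fcst >= thresh
    o = obs >= thresh
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    n = f.size
    bias = (hits + false_alarms) / (hits + misses)
    hits_random = (hits + misses) * (hits + false_alarms) / n
    gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return bias, gss

def fss(fcst, obs, thresh, n=3):
    """Fractions skill score with an n x n neighborhood (n odd)."""
    def fractions(field):
        exceed = (field >= thresh).astype(float)
        h = n // 2
        padded = np.pad(exceed, h, mode="constant")
        out = np.empty_like(exceed)
        for i in range(exceed.shape[0]):
            for j in range(exceed.shape[1]):
                out[i, j] = padded[i:i + n, j:j + n].mean()
        return out
    pf, po = fractions(fcst), fractions(obs)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref
```

A perfect forecast gives FSS = 1; as the neighborhood size `n` grows, FSS typically rises toward an asymptote set by the frequency bias, which is one way the method reveals skill as a function of spatial scale.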

Full access
William A. Gallus Jr., Jamie Wolff, John Halley Gotway, Michelle Harrold, Lindsay Blank, and Jeff Beck

Abstract

A well-known problem in high-resolution ensembles has been a lack of sufficient spread among members. Modelers often have used mixed physics to increase spread, but this can introduce problems, including computational expense, clustering of members, and members that are not all equally skillful. Thus, a detailed examination of the impacts of using mixed physics is important. The present study uses two years of Community Leveraged Unified Ensemble (CLUE) output to isolate the impact of mixed physics in 36-h forecasts made using a convection-permitting ensemble with 3-km horizontal grid spacing. One 10-member subset of the CLUE used only perturbed initial conditions (ICs) and lateral boundary conditions (LBCs), while another 10-member ensemble used the same mixed ICs and LBCs but also introduced mixed physics. The cases examined occurred during NOAA’s Hazardous Weather Testbed Spring Forecast Experiments in 2016 and 2017. Traditional gridpoint metrics applied to each member and to the ensemble as a whole, along with object-based verification statistics for all members, were computed for composite reflectivity and 1- and 3-h accumulated precipitation using the Model Evaluation Tools (MET) software package. It is found that mixed physics increases variability substantially among the ensemble members, more so for reflectivity than for precipitation, such that the envelope of members is more likely to encompass the observations. However, the increased variability is mostly due to substantial high biases in members using one microphysical scheme and low biases in members using the others. Overall ensemble skill is not substantially different from that of the ensemble using a single physics package.
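The spread, skill, and envelope diagnostics discussed above can be sketched with standard formulas. This is an illustrative example, not the study's verification code; the function names and array shapes are assumptions:

```python
import numpy as np

def ensemble_stats(members, obs):
    """RMSE of each member, RMSE of the ensemble mean, and mean ensemble spread.

    members: array of shape (n_members, n_points); obs: array of shape (n_points,).
    """
    member_rmse = np.sqrt(np.mean((members - obs) ** 2, axis=1))
    ens_mean = members.mean(axis=0)
    mean_rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    # Spread: sample standard deviation across members, averaged over points.
    spread = np.mean(members.std(axis=0, ddof=1))
    return member_rmse, mean_rmse, spread

def envelope_coverage(members, obs):
    """Fraction of points where the observation falls inside the ensemble envelope."""
    inside = (obs >= members.min(axis=0)) & (obs <= members.max(axis=0))
    return inside.mean()
```

In a statistically consistent ensemble, the mean spread is comparable to the RMSE of the ensemble mean; mixed physics can widen the envelope (raising coverage) while individual members carry offsetting biases, as the abstract describes.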

Full access
Isidora Jankov, Jeffrey Beck, Jamie Wolff, Michelle Harrold, Joseph B. Olson, Tatiana Smirnova, Curtis Alexander, and Judith Berner

Abstract

A stochastically perturbed parameterization (SPP) approach was developed within the High-Resolution Rapid Refresh (HRRR) convection-allowing ensemble; it spatially and temporally perturbs parameters and variables in the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary layer (PBL) scheme and introduces initialization perturbations to soil moisture in the Rapid Update Cycle (RUC) land surface model. This work is a follow-up to an earlier study performed using a Rapid Refresh (RAP)-based ensemble. In the present study, the SPP approach was used to target the performance of precipitation and low-level variables (e.g., 2-m temperature and dewpoint, and 10-m wind). The stochastic kinetic energy backscatter (SKEB) scheme and the stochastically perturbed parameterization tendencies (SPPT) scheme were combined with SPP applied to the PBL scheme to target upper-level variable performance (e.g., improved skill and reliability). The three stochastic experiments (SPP applied to the PBL scheme only; SPP applied to the PBL scheme combined with SKEB and SPPT; and stochastically perturbed soil moisture initial conditions) were compared to a mixed-physics ensemble. The results showed a positive impact of the soil moisture initial-condition perturbations on precipitation forecasts; however, they also increased 2-m dewpoint RMSE. The experiment with perturbed parameters within the PBL scheme improved low-level wind forecasts for some verification metrics. The experiment combining the three stochastic approaches exhibited improved RMSE and spread for upper-level variables. Our study demonstrates that, by using the SPP approach, forecasts of specific variables can be improved. The results also show that a single-physics ensemble with stochastic methods is potentially an attractive alternative to multiphysics for convection-allowing ensembles.
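The general SPP idea can be illustrated with a toy sketch (this is not the HRRR implementation; the box-filter smoothing, AR(1) coefficients, and amplitude here are invented for illustration): a spatially correlated random pattern is evolved in time with first-order autoregressive memory and applied multiplicatively to a parameter field, keeping the perturbed parameter positive.

```python
import numpy as np

def correlated_pattern(shape, length, rng):
    """White noise smoothed with a periodic box filter to a rough decorrelation length."""
    field = rng.standard_normal(shape)
    h = length // 2
    padded = np.pad(field, h, mode="wrap")
    out = np.empty(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            out[i, j] = padded[i:i + length, j:j + length].mean()
    return out / out.std()  # rescale to unit variance

def spp_perturb(param, n_steps, *, sigma=0.3, tau=6.0, length=9, seed=0):
    """Yield multiplicative SPP-style perturbations of a 2-D parameter field.

    The pattern evolves as an AR(1) process in time (decorrelation time tau,
    in model steps); exp() keeps the perturbed parameter positive.
    """
    rng = np.random.default_rng(seed)
    alpha = np.exp(-1.0 / tau)
    psi = correlated_pattern(param.shape, length, rng)
    for _ in range(n_steps):
        psi = alpha * psi + np.sqrt(1 - alpha ** 2) * correlated_pattern(param.shape, length, rng)
        yield param * np.exp(sigma * psi)
```

The multiplicative, lognormal form is a common design choice for physically bounded parameters (e.g., mixing lengths), since additive Gaussian noise could drive them negative.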

Full access
Jamie K. Wolff, Michelle Harrold, Tracy Hertneky, Eric Aligo, Jacob R. Carley, Brad Ferrier, Geoff DiMego, Louisa Nance, and Ying-Hwa Kuo

Abstract

A wide range of numerical weather prediction (NWP) innovations are under development in the research community that have the potential to positively impact operational models. The Developmental Testbed Center (DTC) helps facilitate the transition of these innovations from research to operations (R2O). With the large number of innovations available in the research community, it is critical to clearly define a testing protocol to streamline the R2O process. The DTC has defined such a process that relies on shared responsibilities of the researchers, the DTC, and operational centers to test promising new NWP advancements. As part of the first stage of this process, the DTC instituted the mesoscale model evaluation testbed (MMET), which established a common testing framework to assist the research community in demonstrating the merits of developments. The ability to compare performance across innovations for critical cases provides a mechanism for selecting the most promising capabilities for further testing. If the researcher demonstrates improved results using MMET, then the innovation may be considered for the second stage of comprehensive testing and evaluation (T&E) prior to entering the final stage of preimplementation T&E.

MMET provides initialization and observation datasets for several case studies and multiday periods. In addition, the DTC provides baseline results for select operational configurations that use the Advanced Research version of the Weather Research and Forecasting (WRF) Model (ARW) or the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling System Nonhydrostatic Multiscale Model on the B grid (NEMS-NMMB). These baselines can be used for testing sensitivities to different model versions or configurations in order to improve forecast performance.

Full access
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Abstract

Forecast verification and evaluation is a critical aspect of forecast development and improvement, day-to-day forecasting, and the interpretation and application of forecasts. In recent decades, the verification field has rapidly matured, and many new approaches have been developed. However, until recently, a stable set of modern tools to undertake this important component of forecasting has not been available. The Model Evaluation Tools (MET) package was conceived and implemented to fill this gap. MET (https://dtcenter.org/community-code/model-evaluation-tools-met) was developed by the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Air Force (USAF) and is supported via the Developmental Testbed Center (DTC) and collaborations with operational and research organizations. MET incorporates traditional verification methods as well as modern verification capabilities developed over the last two decades. MET stands apart from other verification packages through its inclusion of innovative spatial methods, statistical inference tools, and a wide range of approaches that address the needs of individual users, coupled with strong community engagement and support. In addition, MET is freely available, ensuring that consistent, scientifically meaningful verification methods can be applied by researchers and operational forecasting practitioners alike. This article describes MET and its expansion into an umbrella package (METplus) that includes a database and display system and Python wrappers to facilitate wide use of MET. Examples of MET applications illustrate some of the many ways the package can be used to evaluate forecasts in a meaningful way.

Full access