Search Results

You are looking at 1 - 7 of 7 items for Author or Editor: Montgomery L. Flora
Corey K. Potvin and Montgomery L. Flora

Abstract

The Warn-on-Forecast (WoF) program aims to deploy real-time, convection-allowing, ensemble data assimilation and prediction systems to improve short-term forecasts of tornadoes, flooding, lightning, damaging wind, and large hail. Until convection-resolving (horizontal grid spacing Δx < 100 m) systems become available, however, resolution errors will limit the accuracy of ensemble model output. Improved understanding of grid spacing dependence of simulated convection is therefore needed to properly calibrate and interpret ensemble output, and to optimize trade-offs between model resolution and other computationally constrained parameters like ensemble size and forecast lead time.

Toward this end, the authors examine grid spacing sensitivities of simulated supercells over Δx of 333 m–4 km. Storm environment and physics parameterization are varied among the simulations. The results suggest that 4-km grid spacing is too coarse to reliably simulate supercells, occasionally leading to premature storm demise, whereas 3-km simulations more often capture operationally important features, including low-level rotation tracks. Further decreasing Δx to 1 km enables useful forecasts of rapid changes in low-level rotation intensity, though significant errors remain (e.g., in timing).

Grid spacing dependencies vary substantially among the experiments, suggesting that accurate calibration of ensemble output requires better understanding of how storm characteristics, environment, and parameterization schemes modulate grid spacing sensitivity. Much of the sensitivity arises from poorly resolving small-scale processes that impact larger (well resolved) scales. Repeating some of the 333-m simulations with coarsened initial conditions reveals that supercell forecasts can substantially benefit from reduced grid spacing even when limited observational density precludes finescale initialization.

Montgomery L. Flora, Corey K. Potvin, and Louis J. Wicker

Abstract

As convection-allowing ensembles are routinely used to forecast the evolution of severe thunderstorms, developing an understanding of storm-scale predictability is critical. Using a full-physics numerical weather prediction (NWP) framework, the sensitivity of ensemble forecasts of supercells to initial condition (IC) uncertainty is investigated using a perfect model assumption. Three cases are used from the real-time NSSL Experimental Warn-on-Forecast System for Ensembles (NEWS-e) from the 2016 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. The forecast sensitivity to IC uncertainty is assessed by repeating the simulations with the initial ensemble perturbations reduced to 50% and 25% of their original magnitudes. The object-oriented analysis focuses on significant supercell features, including the mid- and low-level mesocyclone, and rainfall. For a comprehensive analysis, supercell location and amplitude predictability of the aforementioned features are evaluated separately.

For all examined features and cases, forecast spread is greatly reduced by halving the IC spread. Reducing the IC spread further, from 50% to 25% of the original magnitude, still substantially reduces the forecast spread in two of the three cases. The practical predictability limit (PPL), or the lead time beyond which the forecast spread exceeds some prechosen threshold, is case and feature dependent. Comparison with past studies reveals that the practical predictability of supercells is substantially improved by initializing forecasts once storms are well established in the ensemble analysis.
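The practical predictability limit defined above reduces to a simple rule: given a curve of ensemble forecast spread versus lead time, the PPL is the first lead time at which the spread exceeds a prechosen threshold. A minimal sketch follows; the spread curve, units, and threshold are synthetic illustrations, not values from the study.

```python
import numpy as np

def practical_predictability_limit(spread, lead_times, threshold):
    """Return the first lead time at which ensemble spread exceeds the
    prechosen threshold, or None if the spread never exceeds it."""
    exceed = np.nonzero(np.asarray(spread) > threshold)[0]
    return lead_times[exceed[0]] if exceed.size else None

# Synthetic example: spread (arbitrary units) growing with lead time (min).
lead_times = np.arange(0, 95, 5)
spread = 0.5 * np.exp(lead_times / 30.0)   # idealized exponential error growth
print(practical_predictability_limit(spread, lead_times, threshold=5.0))  # 70
```

Because the PPL depends on the chosen threshold, feature, and case, the same spread curve can yield very different limits for, say, mesocyclone location versus rainfall amplitude.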

Corey K. Potvin, Elisa M. Murillo, Montgomery L. Flora, and Dustan M. Wheatley

Abstract

Observational and model resolution limitations currently preclude analysis of the smallest scales important to numerical prediction of convective storms. These missing scales can be recovered if the forecast model is integrated on a sufficiently fine grid, but not before errors are introduced that subsequently grow in scale and magnitude. This study is the first to systematically evaluate the impact of these initial-condition (IC) resolution errors on high-resolution forecasts of organized convection. This is done by comparing high-resolution supercell simulations generated using identical model settings but successively coarsened ICs. Consistent with the Warn-on-Forecast paradigm, the simulations are initialized with ongoing storms and integrated for 2 h. Both idealized and full-physics experiments are performed in order to examine how more realistic model settings modulate the error evolution.

In all experiments, scales removed from the IC (wavelengths < 2, 4, 8, or 16 km) regenerate within 10–20 min of model integration. While the forecast errors arising from the initial absence of these scales become quantitatively large in many instances, the qualitative storm evolution is relatively insensitive to the IC resolution. It therefore appears that adopting much finer forecast (e.g., 250 m) than analysis (e.g., 3 km) grids for data assimilation and prediction would improve supercell forecasts given limited computational resources. This motivates continued development of mixed-resolution systems. The relative insensitivity to IC resolution further suggests that convective forecasting can be more readily advanced by improving model physics and numerics and expanding extrastorm observational coverage than by increasing intrastorm observational density.
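A generic way to remove scales below a cutoff wavelength from a gridded field, as in the coarsened-IC experiments described above, is a spectral low-pass filter. The sketch below zeroes all Fourier modes with wavelengths shorter than a chosen cutoff; the grid spacing, field, and cutoff are illustrative and not the study's actual configuration.

```python
import numpy as np

def remove_small_scales(field, dx, min_wavelength):
    """Zero all Fourier modes of a 2D field whose total wavelength is
    shorter than min_wavelength (same length units as dx)."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx)                    # cycles per unit length
    ky = np.fft.fftfreq(ny, d=dx)
    kmag = np.hypot(kx[np.newaxis, :], ky[:, np.newaxis])
    spectrum = np.fft.fft2(field)
    spectrum[kmag > 1.0 / min_wavelength] = 0.0      # wavelength = 1 / |k|
    return np.fft.ifft2(spectrum).real

# Example: remove wavelengths < 4 km from a random field on a 250-m grid.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128))                  # e.g., vertical velocity
w_filtered = remove_small_scales(w, dx=0.25, min_wavelength=4.0)
```

Because the mask is symmetric in wavenumber, the filtered field remains real valued, and the retained (well resolved) scales are untouched.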

Montgomery L. Flora, Corey K. Potvin, Patrick S. Skinner, Shawn Handler, and Amy McGovern

Abstract

A primary goal of the National Oceanic and Atmospheric Administration Warn-on-Forecast (WoF) project is to provide rapidly updating probabilistic guidance to human forecasters for short-term (e.g., 0–3 h) severe weather forecasts. Postprocessing is required to maximize the usefulness of probabilistic guidance from an ensemble of convection-allowing model forecasts. Machine learning (ML) models have become popular methods for postprocessing severe weather guidance since they can leverage numerous variables to discover useful patterns in complex datasets. In this study, we develop and evaluate a series of ML models to produce calibrated, probabilistic severe weather guidance from WoF System (WoFS) output. Our dataset includes WoFS ensemble forecasts available every 5 min out to 150 min of lead time from the 2017–19 NOAA Hazardous Weather Testbed Spring Forecasting Experiments (81 dates). Using a novel ensemble storm-track identification method, we extracted three sets of predictors from the WoFS forecasts: intrastorm state variables, near-storm environment variables, and morphological attributes of the ensemble storm tracks. We then trained random forests, gradient-boosted trees, and logistic regression algorithms to predict which WoFS 30-min ensemble storm tracks will overlap a tornado, severe hail, and/or severe wind report. To provide rigorous baselines against which to evaluate the skill of the ML models, we extracted the ensemble probabilities of hazard-relevant WoFS variables exceeding tuned thresholds from each ensemble storm track. The three ML algorithms discriminated well for all three hazards and produced more reliable probabilities than the baseline predictions. Overall, the results suggest that ML-based postprocessing of dynamical ensemble output can improve short-term, storm-scale severe weather probabilistic guidance.
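The baseline predictions described above, ensemble probabilities of a hazard-relevant variable exceeding a tuned threshold, reduce to a simple fraction over members within each storm track. A minimal sketch follows; the member values and threshold are hypothetical, not taken from the WoFS dataset.

```python
import numpy as np

def ensemble_exceedance_probability(member_maxima, threshold):
    """Baseline guidance: the fraction of ensemble members whose
    track-maximum hazard variable exceeds a tuned threshold."""
    return float(np.mean(np.asarray(member_maxima) > threshold))

# Hypothetical 18-member track maxima of 2-5-km updraft helicity (m^2 s^-2).
uh_maxima = [35, 60, 12, 80, 55, 20, 47, 90, 33, 70,
             15, 52, 41, 66, 28, 75, 58, 44]
print(ensemble_exceedance_probability(uh_maxima, threshold=50.0))  # 0.5
```

The ML models go beyond this baseline by combining many such predictors (intrastorm, environmental, and morphological) rather than thresholding a single variable.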

Montgomery L. Flora, Patrick S. Skinner, Corey K. Potvin, Anthony E. Reinhart, Thomas A. Jones, Nusrat Yussouf, and Kent H. Knopfmeier

Abstract

An object-based verification method for short-term, storm-scale probabilistic forecasts was developed and applied to mesocyclone guidance produced by the experimental Warn-on-Forecast System (WoFS) in 63 cases from 2017 to 2018. The probabilistic mesocyclone guidance was generated by calculating gridscale ensemble probabilities from WoFS forecasts of updraft helicity (UH) in the 2–5 km (midlevel) and 0–2 km (low-level) above ground level (AGL) layers, aggregated over 60-min periods. The resulting ensemble probability swaths are associated with individual thunderstorms and treated as objects, each assigned a single, representative probability value. Conceptually, a mesocyclone probability object is a region bounded by the ensemble forecast envelope of a given thunderstorm's mesocyclone track over 1 h. The mesocyclone probability objects were matched against rotation track objects in Multi-Radar Multi-Sensor data using the total interest score, with the maximum allowed displacement varied among 0, 9, 15, and 30 km. Forecast accuracy and reliability were assessed for four forecast lead time periods: 0–60, 30–90, 60–120, and 90–150 min. In the 0–60-min forecast period, the low-level UH probabilistic forecasts had a probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI) of 0.46, 0.45, and 0.31, respectively, at a probability threshold of 22.2% (the threshold of maximum CSI). In the 90–150-min forecast period, the POD and CSI dropped to 0.39 and 0.27, while the FAR remained relatively unchanged. Forecast probabilities > 60% overpredicted the likelihood of observed mesocyclones in the 0–60-min period; however, reliability improved when larger maximum displacements were allowed for object matching and at longer lead times.
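The POD, FAR, and CSI scores quoted above follow directly from the object-matching contingency counts. The sketch below shows the standard definitions; the counts are illustrative only, since the abstract reports the scores rather than the underlying totals.

```python
def pod_far_csi(hits, false_alarms, misses):
    """Probability of detection (POD), false alarm ratio (FAR), and
    critical success index (CSI) from object-matching contingency counts."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + false_alarms + misses)
    return pod, far, csi

# Illustrative counts only; the abstract reports the scores, not the counts.
pod, far, csi = pod_far_csi(hits=46, false_alarms=38, misses=54)
print(round(pod, 2), round(far, 2), round(csi, 2))  # 0.46 0.45 0.33
```

Note that CSI penalizes both false alarms and misses, which is why it can drop with lead time even when FAR holds steady.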

Corey K. Potvin, Patrick S. Skinner, Kimberly A. Hoogewind, Michael C. Coniglio, Jeremy A. Gibbs, Adam J. Clark, Montgomery L. Flora, Anthony E. Reinhart, Jacob R. Carley, and Elizabeth N. Smith

Abstract

The NOAA Warn-on-Forecast System (WoFS) is an experimental rapidly updating convection-allowing ensemble designed to provide probabilistic operational guidance on high-impact thunderstorm hazards. The current WoFS uses physics diversity to help maintain ensemble spread. We assess the systematic impacts of the three WoFS planetary boundary layer (PBL) schemes (YSU, MYJ, and MYNN) using novel, object-based methods tailored to thunderstorms. Very short forecast lead times of 0–3 h are examined, which limits phase errors and thereby facilitates comparisons of observed and model storms that occurred in the same area at the same time. This framework enables assessment of systematic PBL scheme impacts on storms and storm environments. Forecasts using all three PBL schemes exhibit overly narrow ranges of surface temperature, dewpoint, and wind speed. The surface biases do not generally decrease at later forecast initialization times, indicating that systematic PBL scheme errors are not well mitigated by data assimilation. The YSU scheme exhibits the least bias of the three in surface temperature and moisture and in many sounding-derived convective variables. Interscheme environmental differences are similar both near and far from storms and qualitatively resemble the differences analyzed in previous studies. The YSU environments exhibit stronger mixing, as expected of nonlocal PBL schemes; are slightly less favorable for storm intensification; and produce correspondingly weaker storms than the MYJ and MYNN environments. On the other hand, systematic interscheme differences in storm morphology and storm location forecast skill are negligible. Overall, the results suggest that calibrating forecasts to correct for systematic differences between PBL schemes may modestly improve WoFS and other convection-allowing ensemble guidance at short lead times.

Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Brett Roberts, Andrew R. Dean, Kent H. Knopfmeier, Louis J. Wicker, Makenzie Krocak, Patrick S. Skinner, Pamela L. Heinselman, Katie A. Wilson, Jake Vancil, Kimberly A. Hoogewind, Nathan A. Dahl, Gerald J. Creager, Thomas A. Jones, Jidong Gao, Yunheng Wang, Eric D. Loken, Montgomery Flora, Christopher A. Kerr, Nusrat Yussouf, Scott R. Dembek, William Miller, Joshua Martin, Jorge Guerra, Brian Matilla, David Jahn, David Harrison, David Imy, and Michael C. Coniglio