Search Results

You are looking at 31–40 of 43 items for:

  • Author or Editor: Adam J. Clark
  • Weather and Forecasting
Burkely T. Gallo, Adam J. Clark, Bryan T. Smith, Richard L. Thompson, Israel Jirak, and Scott R. Dembek

Abstract

Attempts at probabilistic tornado forecasting using convection-allowing models (CAMs) have thus far used CAM attribute [e.g., hourly maximum 2–5-km updraft helicity (UH)] thresholds, treating them as binary events—either a grid point exceeds a given threshold or it does not. This study approaches these attributes probabilistically, using empirical observations of storm environment attributes and the subsequent climatological tornado occurrence frequency to assign a probability that a point will be within 40 km of a tornado, given the model-derived storm environment attributes. Combining empirical frequencies and forecast attributes produces better forecasts than solely using mid- or low-level UH, even if the UH is filtered using environmental parameter thresholds. Empirical tornado frequencies were derived using severe right-moving supercellular storms associated with a local storm report (LSR) of a tornado, severe wind, or severe hail for a given significant tornado parameter (STP) value from Storm Prediction Center (SPC) mesoanalysis grids in 2014–15. The NSSL–WRF ensemble produced the forecast STP values and simulated right-moving supercells, which were identified using a UH exceedance threshold. Model-derived probabilities are verified using tornado segment data from just right-moving supercells and from all tornadoes, as are the SPC-issued 0600 UTC tornado probabilities from the initial day 1 forecast valid 1200–1159 UTC the following day. The STP-based probabilistic forecasts perform comparably to SPC tornado probability forecasts in many skill metrics (e.g., reliability) and thus could be used as first-guess forecasts. Comparison with prior methodologies shows that probabilistic environmental information improves CAM-based tornado forecasts.
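The probability-assignment step described above, looking up the climatological tornado frequency for a point's forecast STP wherever simulated UH flags a right-moving supercell, can be sketched as follows. All bin edges, frequencies, and the UH threshold below are illustrative placeholders, not the study's derived values:

```python
import numpy as np

# Hypothetical STP bin edges and climatological tornado frequencies
# (illustrative only; the study derives these from 2014-15 SPC
# mesoanalysis grids and local storm reports).
STP_BIN_EDGES = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 10.0])
TORNADO_FREQ = np.array([0.05, 0.12, 0.20, 0.30, 0.40])  # P(tornado within 40 km | RM supercell, STP bin)

def tornado_probability(stp_values, uh_values, uh_threshold=75.0):
    """Assign a climatological tornado probability to each grid point
    where hourly-max updraft helicity exceeds the supercell threshold."""
    stp = np.asarray(stp_values, dtype=float)
    uh = np.asarray(uh_values, dtype=float)
    # Bin each point's forecast STP into the empirical distribution.
    idx = np.clip(np.digitize(stp, STP_BIN_EDGES) - 1, 0, len(TORNADO_FREQ) - 1)
    probs = TORNADO_FREQ[idx]
    # Only points identified as right-moving supercells via UH exceedance
    # receive a nonzero probability.
    return np.where(uh >= uh_threshold, probs, 0.0)
```

A point with STP of 0.5 under a UH-flagged storm would draw the lowest-bin frequency, while the same STP without a UH exceedance would receive zero.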

Full access
Burkely T. Gallo, Adam J. Clark, Bryan T. Smith, Richard L. Thompson, Israel Jirak, and Scott R. Dembek

Abstract

Probabilistic ensemble-derived tornado forecasts generated from convection-allowing models often use hourly maximum updraft helicity (UH) alone or in combination with environmental parameters as a proxy for right-moving (RM) supercells. However, when UH occurrence is a condition for tornado probability generation, false alarm areas can occur from UH swaths associated with nocturnal mesoscale convective systems, which climatologically produce fewer tornadoes than RM supercells. This study incorporates UH timing information with the forecast near-storm significant tornado parameter (STP) to calibrate the forecast tornado probability. To generate the probabilistic forecasts, three sets of observed climatological tornado frequencies given an RM supercell and STP value are incorporated with the model output, two of which use UH timing information. One method uses the observed climatological tornado frequency for a given 3-h window to generate the probabilities. Another normalizes the observed climatological tornado frequency by the number of hail, wind, and tornado reports observed in that 3-h window compared to the maximum number of reports in any 3-h window. The final method is independent of when UH occurs and uses the observed climatological tornado frequency encompassing all hours. The normalized probabilities reduce the false alarm area compared to the other methods but have a smaller area under the ROC curve and require a much higher percentile of the STP distribution to be used in probability generation to become reliable. Case studies demonstrate that the normalized probabilities highlight the most likely area for evening RM supercellular tornadoes, decreasing the nocturnal false alarm by assuming a linear convective mode.
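The normalization step described above, scaling a 3-h window's climatological tornado frequency by that window's report count relative to the most report-active window, can be sketched as follows. The window labels and report counts are hypothetical placeholders, not the study's observed counts:

```python
import numpy as np

# Illustrative 3-h windows (UTC) with hypothetical counts of observed
# hail, wind, and tornado reports; the study tabulates real counts.
WINDOW_REPORTS = {"18-21": 40, "21-00": 100, "00-03": 60, "03-06": 15}

def normalized_window_probability(base_freq, window):
    """Scale a window's climatological tornado frequency by its share of
    severe reports relative to the most report-active 3-h window."""
    max_reports = max(WINDOW_REPORTS.values())
    return base_freq * (WINDOW_REPORTS[window] / max_reports)
```

Under these placeholder counts, the same base frequency would be left unchanged in the most active evening window but reduced substantially overnight, which is the mechanism the abstract credits for shrinking nocturnal false alarm area.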

Full access
Burkely T. Gallo, Katie A. Wilson, Jessica Choate, Kent Knopfmeier, Patrick Skinner, Brett Roberts, Pamela Heinselman, Israel Jirak, and Adam J. Clark

Abstract

During the 2019 Spring Forecasting Experiment in NOAA’s Hazardous Weather Testbed, two NWS forecasters issued experimental probabilistic forecasts of hail, tornadoes, and severe convective wind using NSSL’s Warn-on-Forecast System (WoFS). The aim was to explore forecast skill in the time frame between severe convective watches and severe convective warnings during the peak of the spring convective season. Hourly forecasts issued during 2100–0000 UTC, valid from 0100 to 0200 UTC demonstrate how forecasts change with decreasing lead time. Across all 13 cases in this study, the descriptive outlook statistics (e.g., mean outlook area, number of contours) change slightly and the measures of outlook skill (e.g., fractions skill score, reliability) improve incrementally with decreasing lead time. WoFS updraft helicity (UH) probabilities also improve slightly and less consistently with decreasing lead time, though both the WoFS and the forecasters generated skillful forecasts throughout. Larger skill differences with lead time emerge on a case-by-case basis, illustrating cases where forecasters consistently improved upon WoFS guidance, cases where the guidance and the forecasters recognized small-scale features as lead time decreased, and cases where the forecasters issued small areas of high probabilities using guidance and observations. While forecasts generally “honed in” on the reports with slightly smaller contours and higher probabilities, increased confidence could include higher certainty that severe weather would not occur (e.g., lower probabilities). Long-range (1–5 h) WoFS UH probabilities were skillful, and where the guidance erred, forecasters could adjust for those errors and increase their forecasts’ skill as lead time decreased.
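The fractions skill score used to evaluate these outlooks can be sketched with the standard neighborhood formulation (Roberts and Lean); the threshold and neighborhood size below are illustrative, not the experiment's actual settings:

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of event gridpoints in an n x n window around each point
    (zero padding outside the domain)."""
    pad = n // 2
    padded = np.pad(binary, pad)
    out = np.empty_like(binary, dtype=float)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(forecast, observed, threshold, n):
    """Fractions skill score: 1 minus the MSE of neighborhood fractions,
    normalized by a no-skill reference."""
    pf = neighborhood_fraction((np.asarray(forecast) >= threshold).astype(float), n)
    po = neighborhood_fraction((np.asarray(observed) >= threshold).astype(float), n)
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan
```

A perfect match scores 1, and slightly displaced features still earn partial credit, which is why neighborhood scores suit CAM-scale forecasts.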

Significance Statement

Forecasts are often assumed to improve as an event approaches and uncertainties resolve. This work examines the evolution of experimental forecasts valid over one hour with decreasing lead time issued using the Warn-on-Forecast System (WoFS). Because of its rapidly updating ensemble data assimilation, WoFS can help forecasters understand how thunderstorm hazards may evolve in the next 0–6 h. We found slight improvements in forecast and WoFS performance as a function of lead time over the full experiment; the first forecasts issued and the initial WoFS guidance performed well at long lead times, and good performance continued as the event approached. However, individual cases varied and forecasters frequently combined raw model output with observed mesoscale features to provide skillful small-scale forecasts.

Full access
Brett Roberts, Burkely T. Gallo, Israel L. Jirak, Adam J. Clark, David C. Dowell, Xuguang Wang, and Yongming Wang

Abstract

The High Resolution Ensemble Forecast v2.1 (HREFv2.1), an operational convection-allowing model (CAM) ensemble, is an “ensemble of opportunity” wherein forecasts from several independently designed deterministic CAMs are aggregated and postprocessed together. Multiple dimensions of diversity in the HREFv2.1 ensemble membership contribute to ensemble spread, including model core, physics parameterization schemes, initial conditions (ICs), and time lagging. In this study, HREFv2.1 forecasts are compared against the High Resolution Rapid Refresh Ensemble (HRRRE) and the Multiscale data Assimilation and Predictability (MAP) ensemble, two experimental CAM ensembles that ran during the 5-week Spring Forecasting Experiment (SFE) in spring 2018. The HRRRE and MAP are formally designed ensembles with spread achieved primarily through perturbed ICs. Verification in this study focuses on composite radar reflectivity and updraft helicity to assess ensemble performance in forecasting convective storms. The HREFv2.1 shows the highest overall skill for these forecasts, matching subjective real-time impressions from SFE participants. Analysis of the skill and variance of ensemble member forecasts suggests that the HREFv2.1 exhibits greater spread and more effectively samples model uncertainty than the HRRRE or MAP. These results imply that to optimize skill in forecasting convective storms at 1–2-day lead times, future CAM ensembles should employ either diverse membership designs or sophisticated perturbation schemes capable of representing model uncertainty with comparable efficacy.

Free access
Rebecca D. Adams-Selin, Christina Kalb, Tara Jensen, John Henderson, Tim Supinie, Lucas Harris, Yunheng Wang, Burkely T. Gallo, and Adam J. Clark

Abstract

Hail forecasts produced by the CAM-HAILCAST pseudo-Lagrangian hail size forecasting model were evaluated during the 2019, 2020, and 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFEs). As part of this evaluation, HWT SFE participants were polled about their definition of a “good” hail forecast. Participants were presented with two different verification methods conducted over three different spatiotemporal scales, and were then asked to subjectively evaluate the hail forecast as well as the different verification methods themselves. Results recommended use of multiple verification methods tailored to the type of forecast expected by the end-user interpreting and applying the forecast. The hail forecasts evaluated during this period included an implementation of CAM-HAILCAST in the Limited Area Model of the Unified Forecast System with the Finite Volume 3 (FV3) dynamical core. Evaluation of FV3-HAILCAST over both 1- and 24-h periods found continued improvement from 2019 to 2021. The improvement was largely a result of wide intervariability among FV3 ensemble members with different microphysics parameterizations in 2019 lessening significantly during 2020 and 2021. Overprediction throughout the diurnal cycle also lessened by 2021. A combination of both upscaling neighborhood verification and an object-based technique that only retained matched convective objects was necessary to understand the improvement, agreeing with the HWT SFE participants’ recommendations for multiple verification methods.

Significance Statement

“Good” forecasts of hail can be determined in multiple ways and must depend on both the performance of the guidance and the perspective of the end-user. This work looks at different verification strategies to capture the performance of the CAM-HAILCAST hail forecasting model across three years of the Spring Forecasting Experiment (SFE) in different parent models. Verification strategies were informed by SFE participant input via a survey. Skill variability among models decreased in SFE 2021 relative to prior SFEs. The FV3 model in 2021, compared to 2019, provided improved forecasts of both convective distribution and 38-mm (1.5 in.) hail size, as well as less overforecasting of convection from 1900 to 2300 UTC.

Free access
Patrick T. Marsh, John S. Kain, Valliappa Lakshmanan, Adam J. Clark, Nathan M. Hitchens, and Jill Hardy

Abstract

Convection-allowing models offer forecasters unique insight into convective hazards relative to numerical models using parameterized convection. However, methods to best characterize the uncertainty of guidance derived from convection-allowing models are still unrefined. This paper proposes a method of deriving calibrated probabilistic forecasts of rare events from deterministic forecasts by fitting a parametric kernel density function to the model’s historical spatial error characteristics. This kernel density function is then applied to individual forecast fields to produce probabilistic forecasts.
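The kernel-application step can be sketched as below: a parametric 2D Gaussian kernel is convolved with the deterministic binary event field so each forecast event spreads into a local probability footprint. In the paper the kernel parameters are fit to the model's historical spatial error characteristics; the radius and sigma here are illustrative placeholders:

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Parametric 2D Gaussian kernel, normalized to sum to one."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def probability_from_deterministic(event_grid, radius=5, sigma=2.0):
    """Convolve a binary event field with the kernel; each deterministic
    event point becomes a smooth probability footprint. Radius and sigma
    stand in for parameters fit to historical spatial errors."""
    grid = np.asarray(event_grid, dtype=float)
    kernel = gaussian_kernel(radius, sigma)
    padded = np.pad(grid, radius)  # zero padding outside the domain
    out = np.zeros_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.sum(window * kernel)
    return out
```

Because the kernel sums to one, a single isolated event contributes unit total probability mass, peaked at the forecast location and decaying with distance.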

Full access
Adam J. Clark, Jidong Gao, Patrick T. Marsh, Travis Smith, John S. Kain, James Correia Jr., Ming Xue, and Fanyou Kong

Abstract

Examining forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA/Hazardous Weather Testbed Spring Forecasting Experiment, recent research diagnosed a strong relationship between the cumulative pathlengths of simulated rotating storms (measured using a three-dimensional object identification algorithm applied to forecast updraft helicity) and the cumulative pathlengths of tornadoes. This paper updates those results by including data from the 2011 SSEF system, and illustrates forecast examples from three major 2011 tornado outbreaks—16 and 27 April, and 24 May—as well as two forecast failure cases from June 2010. Finally, analysis updraft helicity (UH) from 27 April 2011 is computed using a three-dimensional variational data assimilation system to obtain 1.25-km grid-spacing analyses at 5-min intervals and compared to forecast UH from individual SSEF members.

Full access
Burkely T. Gallo, Adam J. Clark, Israel Jirak, David Imy, Brett Roberts, Jacob Vancil, Kent Knopfmeier, and Patrick Burke

Abstract

During the 2021 Spring Forecasting Experiment (SFE), the usefulness of the experimental Warn-on-Forecast System (WoFS) ensemble guidance was tested with the issuance of short-term probabilistic hazard forecasts. One group of participants used the WoFS guidance, while another group did not. Individual forecasts issued by two NWS participants in each group were evaluated alongside a consensus forecast from the remaining participants. Participant forecasts of tornadoes, hail, and wind at lead times of ∼2–3 h and valid 2200–2300 UTC, 2300–0000 UTC, and 0000–0100 UTC were evaluated subjectively during the SFE by participants the day after issuance, and objectively after the SFE concluded. These forecasts exist between the watch and the warning time frame, where WoFS is anticipated to be particularly impactful.

The hourly probabilistic forecasts were skillful according to objective metrics such as the fractions skill score. While the tornado forecasts were more reliable than the forecasts for the other hazards, no single hazard scored highest across all metrics. WoFS availability improved the hourly probabilistic forecasts as measured by subjective ratings and by several objective metrics, including increased probability of detection (POD) and decreased false alarm ratio (FAR) at high probability thresholds. Generally, expert forecasts performed better than consensus forecasts, though the expert forecasts tended to overforecast. Finally, this work explored the appropriate construction of the practically perfect fields used during subjective verification, which participants frequently found to be too small and precise. Using a Gaussian smoother with σ = 70 km is recommended to create hourly practically perfect fields in future experiments.
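The recommended practically perfect construction can be sketched as follows: gridded storm reports are smoothed with a Gaussian of σ = 70 km, converted to grid points. The 3-km grid spacing default is an assumption for illustration, and SciPy's `gaussian_filter` stands in for whatever smoother an experiment actually uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def practically_perfect(report_grid, grid_spacing_km=3.0, sigma_km=70.0):
    """Hourly practically perfect field: Gaussian-smooth the gridded
    local storm reports with sigma = 70 km, per the recommendation
    above. The 3-km grid spacing is an illustrative assumption."""
    sigma_gridpoints = sigma_km / grid_spacing_km
    return gaussian_filter(np.asarray(report_grid, dtype=float),
                           sigma=sigma_gridpoints)
```

The resulting field peaks at report locations and decays smoothly outward, giving the forecaster-issued probability contours a fairer, less pinpoint target than the raw reports.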

Restricted access
Brett Roberts, Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Caroline Bain, David L. A. Flack, James Warner, Craig S. Schwartz, and Larissa J. Reames

Abstract

As part of NOAA’s Hazardous Weather Testbed Spring Forecasting Experiment (SFE) in 2020, an international collaboration yielded a set of real-time convection-allowing model (CAM) forecasts over the contiguous United States in which the model configurations and initial/boundary conditions were varied in a controlled manner. Three model configurations were employed, among which the Finite Volume Cubed-Sphere (FV3), Unified Model (UM), and Advanced Research version of the Weather Research and Forecasting (WRF-ARW) Model dynamical cores were represented. Two runs were produced for each configuration: one driven by NOAA’s Global Forecast System for initial and boundary conditions, and the other driven by the Met Office’s operational global UM. For 32 cases during SFE2020, these runs were initialized at 0000 UTC and integrated for 36 h. Objective verification of model fields relevant to convective forecasting illuminates differences in the influence of configuration versus driving model pertinent to the ongoing problem of optimizing spread and skill in CAM ensembles. The UM and WRF configurations tend to outperform FV3 for forecasts of precipitation, thermodynamics, and simulated radar reflectivity; using a driving model with the native CAM core also tends to produce better skill in aggregate. Reflectivity and thermodynamic forecasts were found to cluster more by configuration than by driving model at lead times greater than 18 h. The two UM configuration experiments had notably similar solutions that, despite competitive aggregate skill, had large errors in the diurnal convective cycle.

Free access
Burkely T. Gallo, Jamie K. Wolff, Adam J. Clark, Israel Jirak, Lindsay R. Blank, Brett Roberts, Yunheng Wang, Chunxi Zhang, Ming Xue, Tim Supinie, Lucas Harris, Linjiong Zhou, and Curtis Alexander

Abstract

Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that maximize the area under the receiver operating characteristic (ROC) curve are very high, while optimizing the parameters based on the reliability component of the Brier score results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
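The surrogate severe construction with its two tunable parameters can be sketched as below: binarize the hourly-max UH field at a threshold, then apply Gaussian smoothing. The default threshold and smoothing level are placeholders, not the optimized settings discussed above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surrogate_severe(uh_max, uh_threshold=75.0, sigma=1.5):
    """Surrogate severe probability field: binarize hourly-max UH at the
    tunable threshold, then apply the tunable Gaussian smoothing. Both
    defaults are illustrative placeholders."""
    exceed = (np.asarray(uh_max, dtype=float) >= uh_threshold).astype(float)
    return gaussian_filter(exceed, sigma=sigma)
```

Raising the threshold or sigma trades coverage for sharpness, which is why optimizing for ROC area and optimizing for Brier-score reliability land on very different parameter pairs.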

Full access