Search Results

Showing 51–60 of 67 items for Author or Editor: Adam J. Clark
Brett Roberts, Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Caroline Bain, David L. A. Flack, James Warner, Craig S. Schwartz, and Larissa J. Reames

Abstract

As part of NOAA’s Hazardous Weather Testbed Spring Forecasting Experiment (SFE) in 2020, an international collaboration yielded a set of real-time convection-allowing model (CAM) forecasts over the contiguous United States in which the model configurations and initial/boundary conditions were varied in a controlled manner. Three model configurations were employed, among which the Finite-Volume Cubed-Sphere (FV3), Unified Model (UM), and Advanced Research version of the Weather Research and Forecasting (WRF-ARW) Model dynamical cores were represented. Two runs were produced for each configuration: one driven by NOAA’s Global Forecast System for initial and boundary conditions, and the other driven by the Met Office’s operational global UM. For 32 cases during SFE2020, these runs were initialized at 0000 UTC and integrated for 36 h. Objective verification of model fields relevant to convective forecasting illuminates differences in the influence of configuration versus driving model, pertinent to the ongoing problem of optimizing spread and skill in CAM ensembles. The UM and WRF configurations tend to outperform FV3 for forecasts of precipitation, thermodynamics, and simulated radar reflectivity; using a driving model with the native CAM core also tends to produce better skill in aggregate. Reflectivity and thermodynamic forecasts were found to cluster more by configuration than by driving model at lead times greater than 18 h. The two UM configuration experiments had notably similar solutions that, despite competitive aggregate skill, had large errors in the diurnal convective cycle.
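
The clustering result reported above can be quantified with a simple diagnostic: compute pairwise RMSE between forecast fields from the six runs (three configurations, each driven by two global models) and inspect which runs sit closest to one another. The sketch below is illustrative only, not the authors' verification code, and the run names are hypothetical placeholders.

    import itertools
    import numpy as np

    def pairwise_rmse(runs):
        """runs: dict mapping run name -> 2D forecast field on a common grid.
        Returns RMSE for every pair of runs; smaller values indicate
        solutions that cluster together."""
        rmse = {}
        for (a, fld_a), (b, fld_b) in itertools.combinations(runs.items(), 2):
            rmse[(a, b)] = float(np.sqrt(np.mean((fld_a - fld_b) ** 2)))
        return rmse

    # Hypothetical usage with six 36-h forecasts on a common grid:
    # runs = {"FV3-GFS": f1, "FV3-UM": f2, "UM-GFS": f3, ...}
    # print(pairwise_rmse(runs))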

Restricted access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen
Full access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen

Abstract

Evaluation of numerical weather prediction (NWP) is critical for both forecasters and researchers. Through such evaluation, forecasters can understand the strengths and weaknesses of NWP guidance, and researchers can work to improve NWP models. However, evaluating high-resolution convection-allowing models (CAMs) requires unique verification metrics tailored to high-resolution output, particularly when considering extreme events. Metrics used and fields evaluated often differ between verification studies, hindering the effort to broadly compare CAMs. The purpose of this article is to summarize the development and initial testing of a CAM-based scorecard, which is intended for broad use across research and operational communities and is similar to scorecards currently available within the enhanced Model Evaluation Tools (METplus) package for evaluating coarser models. Scorecards visualize many verification metrics and attributes simultaneously, providing a broad overview of model performance. A preliminary CAM scorecard was developed and tested during the 2018 Spring Forecasting Experiment using METplus, focused on metrics and attributes relevant to severe convective forecasting. The scorecard compared attributes specific to convection-allowing scales such as reflectivity and surrogate severe fields, using metrics such as the critical success index (CSI) and fractions skill score (FSS). While this preliminary scorecard focuses on attributes relevant to severe convective storms, the scorecard framework allows for the inclusion of further metrics relevant to other applications. Development of a CAM scorecard allows for evidence-based decision-making regarding future operational CAM systems as the National Weather Service transitions to a Unified Forecast System as part of the Next-Generation Global Prediction System initiative.
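
For reference, the two metrics named above have standard forms. With a = hits, b = false alarms, and c = misses from the contingency table, and with P_{f,i} and P_{o,i} the forecast and observed neighborhood fractions at each of N points, they can be written as

    \mathrm{CSI} = \frac{a}{a + b + c}, \qquad
    \mathrm{FSS} = 1 - \frac{\frac{1}{N}\sum_{i=1}^{N}\left(P_{f,i} - P_{o,i}\right)^{2}}{\frac{1}{N}\left[\sum_{i=1}^{N}P_{f,i}^{2} + \sum_{i=1}^{N}P_{o,i}^{2}\right]}

These are the conventional definitions, given here for orientation rather than as the scorecard's exact implementation.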

Full access
Corey K. Potvin, Patrick S. Skinner, Kimberly A. Hoogewind, Michael C. Coniglio, Jeremy A. Gibbs, Adam J. Clark, Montgomery L. Flora, Anthony E. Reinhart, Jacob R. Carley, and Elizabeth N. Smith

Abstract

The NOAA Warn-on-Forecast System (WoFS) is an experimental rapidly updating convection-allowing ensemble designed to provide probabilistic operational guidance on high-impact thunderstorm hazards. The current WoFS uses physics diversity to help maintain ensemble spread. We assess the systematic impacts of the three WoFS planetary boundary layer (PBL) schemes—YSU, MYJ, and MYNN—using novel, object-based methods tailored to thunderstorms. Very short forecast lead times of 0–3 h are examined, which limits phase errors and thereby facilitates comparisons of observed and model storms that occurred in the same area at the same time. This evaluation framework facilitates assessment of systematic PBL scheme impacts on storms and storm environments. Forecasts using all three PBL schemes exhibit overly narrow ranges of surface temperature, dewpoint, and wind speed. The surface biases do not generally decrease at later forecast initialization times, indicating that systematic PBL scheme errors are not well mitigated by data assimilation. The YSU scheme exhibits the least bias of the three in surface temperature and moisture and in many sounding-derived convective variables. Interscheme environmental differences are similar both near and far from storms and qualitatively resemble the differences analyzed in previous studies. The YSU environments exhibit stronger mixing, as expected of nonlocal PBL schemes; are slightly less favorable for storm intensification; and produce correspondingly weaker storms than the MYJ and MYNN environments. On the other hand, systematic interscheme differences in storm morphology and storm location forecast skill are negligible. Overall, the results suggest that calibrating forecasts to correct for systematic differences between PBL schemes may modestly improve WoFS and other convection-allowing ensemble guidance at short lead times.
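
A minimal sketch of the object-identification step described above, in Python: threshold a composite reflectivity field, label contiguous exceedance regions as storm objects, and discard very small objects. The 40-dBZ threshold and 9-gridpoint minimum area are illustrative assumptions, not the study's actual criteria.

    import numpy as np
    from scipy.ndimage import label

    def storm_objects(refl_dbz, threshold=40.0, min_gridpoints=9):
        """refl_dbz: 2D composite reflectivity field (dBZ).
        Returns a labeled 2D array in which each storm object has a unique id."""
        mask = refl_dbz >= threshold
        labeled, nobj = label(mask)  # contiguous exceedance regions
        for obj_id in range(1, nobj + 1):
            if np.count_nonzero(labeled == obj_id) < min_gridpoints:
                labeled[labeled == obj_id] = 0  # drop objects below minimum size
        return labeled

Matching model objects to observed objects that occur in roughly the same place and time, as the short 0–3-h lead times permit here, can then proceed by comparing object centroids and attributes.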

Free access
Burkely T. Gallo, Jamie K. Wolff, Adam J. Clark, Israel Jirak, Lindsay R. Blank, Brett Roberts, Yunheng Wang, Chunxi Zhang, Ming Xue, Tim Supinie, Lucas Harris, Linjiong Zhou, and Curtis Alexander

Abstract

Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs; including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that give the best area under the receiver operating characteristic curve result in very high probabilities, while optimizing the parameters based on the Brier score reliability component results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
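
The surrogate severe construction summarized above can be sketched compactly: binarize the run-maximum UH field at the chosen threshold, then apply a Gaussian smoother to turn exceedances into probabilities. This is a minimal sketch under assumed parameter values; the threshold and smoothing width below are placeholders for the two tunable parameters, and the usual regridding onto a coarser verification grid is omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def surrogate_severe(uh_max, uh_threshold=75.0, sigma=40.0):
        """uh_max: 2D field of run-maximum updraft helicity (m^2 s^-2).
        Returns a 2D field of surrogate severe probabilities."""
        exceed = (uh_max >= uh_threshold).astype(float)  # binary exceedance grid
        return gaussian_filter(exceed, sigma=sigma)      # smoothing yields values in [0, 1]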

Full access
John S. Kain, Steve Willington, Adam J. Clark, Steven J. Weiss, Mark Weeks, Israel L. Jirak, Michael C. Coniglio, Nigel M. Roberts, Christopher D. Karstens, Jonathan M. Wilkinson, Kent H. Knopfmeier, Humphrey W. Lean, Laura Ellam, Kirsty Hanley, Rachel North, and Dan Suri

Abstract

In recent years, a growing partnership has emerged between the Met Office and the designated U.S. national centers for expertise in severe weather research and forecasting, that is, the National Oceanic and Atmospheric Administration (NOAA) National Severe Storms Laboratory (NSSL) and the NOAA Storm Prediction Center (SPC). The driving force behind this partnership is a compelling set of mutual interests related to predicting and understanding high-impact weather and using high-resolution numerical weather prediction models as foundational tools to explore these interests.

The forum for this collaborative activity is the NOAA Hazardous Weather Testbed, where annual Spring Forecasting Experiments (SFEs) are conducted by NSSL and SPC. For the last decade, NSSL and SPC have used these experiments to find ways that high-resolution models can help achieve greater success in the prediction of tornadoes, large hail, and damaging winds. Beginning in 2012, the Met Office became a contributing partner in annual SFEs, bringing complementary expertise in the use of convection-allowing models, derived in their case from a parallel decadelong effort to use these models to advance prediction of flash floods associated with heavy thunderstorms.

The collaboration between NSSL, SPC, and the Met Office has been enthusiastic and productive, driven by strong mutual interests at a grassroots level and generous institutional support from the parent government agencies. In this article, a historical background is provided, motivations for collaborative activities are emphasized, and preliminary results are highlighted.

Full access
Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Brett Roberts, Kent H. Knopfmeier, Robert A. Clark, Jake Vancil, Andrew R. Dean, Kimberly A. Hoogewind, Pamela L. Heinselman, Nathan A. Dahl, Makenzie J. Krocak, Jessica J. Choate, Katie A. Wilson, Patrick S. Skinner, Thomas A. Jones, Yunheng Wang, Gerald J. Creager, Larissa J. Reames, Louis J. Wicker, Scott R. Dembek, and Steven J. Weiss
Free access
Burkely T. Gallo, Adam J. Clark, Israel Jirak, John S. Kain, Steven J. Weiss, Michael Coniglio, Kent Knopfmeier, James Correia Jr., Christopher J. Melick, Christopher D. Karstens, Eswar Iyer, Andrew R. Dean, Ming Xue, Fanyou Kong, Youngsun Jung, Feifei Shen, Kevin W. Thomas, Keith Brewster, Derek Stratman, Gregory W. Carbin, William Line, Rebecca Adams-Selin, and Steve Willington

Abstract

Led by NOAA’s Storm Prediction Center and National Severe Storms Laboratory, annual Spring Forecasting Experiments (SFEs) in the Hazardous Weather Testbed test and evaluate cutting-edge technologies and concepts for improving severe weather prediction through intensive real-time forecasting and evaluation activities. Experimental forecast guidance is provided through collaborations with several U.S. government and academic institutions, as well as the Met Office. The purpose of this article is to summarize activities, insights, and preliminary findings from recent SFEs, emphasizing SFE 2015. Several innovative aspects of recent experiments are discussed, including the 1) use of convection-allowing model (CAM) ensembles with advanced ensemble data assimilation, 2) generation of severe weather outlooks valid at time periods shorter than those issued operationally (e.g., 1–4 h), 3) use of CAMs to issue outlooks beyond the day 1 period, 4) increased interaction through software allowing participants to create individual severe weather outlooks, and 5) tests of newly developed storm-attribute-based diagnostics for predicting tornadoes and hail size. Additionally, plans for future experiments will be discussed, including the creation of a Community Leveraged Unified Ensemble (CLUE) system, which will test various strategies for CAM ensemble design using carefully designed sets of ensemble members contributed by different agencies to drive evidence-based decision-making for near-future operational systems.

Full access
John S. Kain, Michael C. Coniglio, James Correia, Adam J. Clark, Patrick T. Marsh, Conrad L. Ziegler, Valliappa Lakshmanan, Stuart D. Miller Jr., Scott R. Dembek, Steven J. Weiss, Fanyou Kong, Ming Xue, Ryan A. Sobash, Andrew R. Dean, Israel L. Jirak, and Christopher J. Melick

The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with ~4-km grid spacing represent many aspects of the formation and development of deep convective clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convective cells, by itself, provides inadequate guidance on the disruptive potential of deep convective activity. Thus, future work on the CI forecasting problem should be couched in terms of convective-event prediction rather than detection and prediction of individual convective cells.
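
One common automated approach consistent with this discussion flags CI at the first time simulated composite reflectivity exceeds a threshold at a grid point; 35 dBZ is a widely used convention. The sketch below illustrates that heuristic under stated assumptions and is not the experiment's actual detection algorithm.

    import numpy as np

    def ci_onset_times(refl_hourly, threshold=35.0):
        """refl_hourly: array of shape (ntimes, ny, nx) of composite reflectivity (dBZ).
        Returns an (ny, nx) array of first-exceedance time indices (-1 = no CI)."""
        exceed = refl_hourly >= threshold
        onset = np.full(refl_hourly.shape[1:], -1, dtype=int)
        for t in range(exceed.shape[0]):
            new_ci = exceed[t] & (onset == -1)  # points exceeding for the first time
            onset[new_ci] = t
        return onset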

Full access
Corey K. Potvin, Jacob R. Carley, Adam J. Clark, Louis J. Wicker, Patrick S. Skinner, Anthony E. Reinhart, Burkely T. Gallo, John S. Kain, Glen S. Romine, Eric A. Aligo, Keith A. Brewster, David C. Dowell, Lucas M. Harris, Israel L. Jirak, Fanyou Kong, Timothy A. Supinie, Kevin W. Thomas, Xuguang Wang, Yongming Wang, and Ming Xue

Abstract

The 2016–18 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFEs) featured the Community Leveraged Unified Ensemble (CLUE), a coordinated convection-allowing model (CAM) ensemble framework designed to provide empirical guidance for development of operational CAM systems. The 2017 CLUE included 81 members that all used 3-km horizontal grid spacing over the contiguous United States (CONUS), enabling direct comparison of forecasts generated using different dynamical cores, physics schemes, and initialization procedures. This study uses forecasts from several of the 2017 CLUE members and one operational model to evaluate and compare CAM representation and next-day prediction of thunderstorms. The analysis utilizes existing techniques and novel, object-based techniques that distill important information about modeled and observed storms from many cases. The National Severe Storms Laboratory Multi-Radar Multi-Sensor product suite is used to verify model forecasts and climatologies of observed variables. Unobserved model fields are also examined to further illuminate important intermodel differences in storms and near-storm environments. No single model performed better than the others in all respects. However, there were many systematic intermodel and intercore differences in specific forecast metrics and model fields. Some of these differences can be confidently attributed to particular differences in model design. Model intercomparison studies similar to the one presented here are important to better understand the impacts of model and ensemble configurations on storm forecasts and to help optimize future operational CAM systems.
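
The distillation step mentioned above can be illustrated with a short attribute-extraction sketch: given a labeled storm-object field (e.g., from connected-component labeling of reflectivity exceedances), record per-object attributes such as areal coverage and maximum reflectivity, which can then be aggregated into modeled and MRMS-observed climatologies. The 9 km^2 gridpoint area corresponds to the 3-km spacing noted above; the attribute choices are illustrative assumptions, not the study's exact set.

    import numpy as np

    def object_attributes(refl_dbz, labeled, gridpoint_area_km2=9.0):
        """labeled: 2D integer array of storm-object ids (0 = background).
        Returns {id: (area_km2, max_dbz)} for each storm object."""
        attrs = {}
        for obj_id in np.unique(labeled):
            if obj_id == 0:
                continue  # skip background
            mask = labeled == obj_id
            attrs[int(obj_id)] = (float(mask.sum() * gridpoint_area_km2),
                                  float(refl_dbz[mask].max()))
        return attrs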

Full access