Search Results

You are looking at 1–6 of 6 items for:

  • Author or Editor: Adam J. Clark
  • Bulletin of the American Meteorological Society
  • Refine by Access: All Content
Brett Roberts, Israel L. Jirak, Adam J. Clark, Steven J. Weiss, and John S. Kain

Abstract

Since the early 2000s, growing computing resources for numerical weather prediction (NWP) and scientific advances have enabled the development and testing of experimental, real-time deterministic convection-allowing models (CAMs). By the late 2000s, continued advancements spurred the development of CAM ensemble forecast systems, through which a broad range of successful forecasting applications has been demonstrated. This work has prepared the National Weather Service (NWS) for practical usage of the High Resolution Ensemble Forecast (HREF) system, which was implemented operationally in November 2017. Historically, methods for postprocessing and visualizing products from regional and global ensemble prediction systems (e.g., ensemble means and spaghetti plots) have been applied to fields that provide information on mesoscale to synoptic-scale processes. However, much of the value of CAMs is derived from the explicit simulation of deep convection and associated storm-attribute fields like updraft helicity and simulated reflectivity. Thus, fully exploiting CAM ensembles for forecasting applications has required the development of fundamentally new data extraction, postprocessing, and visualization strategies. In the process, the immense data volume inherent to these systems has demanded new approaches that balance diverse factors like forecaster interpretation and computational expense. In this article, we review the current state of postprocessing and visualization for CAM ensembles, with a particular focus on forecast applications for severe convective hazards that have been evaluated within NOAA’s Hazardous Weather Testbed. The HREF web viewer implemented at the NWS Storm Prediction Center (SPC) is presented as a prototype for deploying these techniques in real time on a flexible and widely accessible platform.
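
As an illustration of the kind of storm-attribute postprocessing discussed above, the sketch below computes a neighborhood exceedance probability from an ensemble of hourly-maximum updraft helicity fields. It is a minimal Python sketch under assumed inputs (synthetic member arrays, a placeholder 75 m² s⁻² threshold, and an arbitrary neighborhood radius); it is not the SPC/HREF implementation.

```python
# Illustrative sketch (not the SPC/HREF code): neighborhood exceedance
# probability for an ensemble of 2-D updraft-helicity fields. Assumes each
# member's hourly-max updraft helicity is already loaded as a NumPy array;
# the threshold and radius are placeholder choices for the example.
import numpy as np
from scipy.ndimage import maximum_filter

def neighborhood_exceedance_prob(members, threshold=75.0, radius_gridpts=12):
    """members: iterable of 2-D arrays (ny, nx), one per ensemble member.
    Returns the fraction of members with any value >= threshold within
    `radius_gridpts` grid points of each location."""
    size = 2 * radius_gridpts + 1
    hits = []
    for field in members:
        # Take the local maximum over the neighborhood, then flag locations
        # where that maximum meets or exceeds the threshold.
        local_max = maximum_filter(field, size=size, mode="nearest")
        hits.append(local_max >= threshold)
    return np.mean(hits, axis=0)  # ensemble probability in [0, 1]

# Example with synthetic data standing in for real member output:
rng = np.random.default_rng(0)
fake_members = [rng.gamma(2.0, 15.0, size=(200, 200)) for _ in range(10)]
prob = neighborhood_exceedance_prob(fake_members)
```

Products of this kind condense storm-scale output from many members into a single probabilistic field, which is far smaller than the raw member data and easier for forecasters to interpret.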

Full access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen
Full access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen

Abstract

Evaluation of numerical weather prediction (NWP) is critical for both forecasters and researchers. Through such evaluation, forecasters can understand the strengths and weaknesses of NWP guidance, and researchers can work to improve NWP models. However, evaluating high-resolution convection-allowing models (CAMs) requires unique verification metrics tailored to high-resolution output, particularly when considering extreme events. Metrics used and fields evaluated often differ between verification studies, hindering the effort to broadly compare CAMs. The purpose of this article is to summarize the development and initial testing of a CAM-based scorecard, which is intended for broad use across research and operational communities and is similar to scorecards currently available within the enhanced Model Evaluation Tools package (METplus) for evaluating coarser models. Scorecards visualize many verification metrics and attributes simultaneously, providing a broad overview of model performance. A preliminary CAM scorecard was developed and tested during the 2018 Spring Forecasting Experiment using METplus, with a focus on metrics and attributes relevant to severe convective forecasting. The scorecard compared attributes specific to convection-allowing scales such as reflectivity and surrogate severe fields, using metrics like the critical success index (CSI) and fractions skill score (FSS). While this preliminary scorecard focuses on attributes relevant to severe convective storms, the scorecard framework allows for the inclusion of further metrics relevant to other applications. Development of a CAM scorecard allows for evidence-based decision-making regarding future operational CAM systems as the National Weather Service transitions to the Unified Forecast System as part of the Next-Generation Global Prediction System initiative.
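
For readers unfamiliar with the metrics named above, the sketch below gives illustrative definitions of CSI and FSS on gridded binary exceedance fields. The variable names, the uniform-filter neighborhood, and the 9-point window are assumptions made for this example; METplus computes these scores internally, and this is not its implementation.

```python
# Illustrative definitions of the critical success index (CSI) and fractions
# skill score (FSS) on gridded binary event fields; names, window size, and
# boundary handling are assumptions for the sketch, not the METplus code.
import numpy as np
from scipy.ndimage import uniform_filter

def csi(forecast_event, observed_event):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hits = np.sum(forecast_event & observed_event)
    misses = np.sum(~forecast_event & observed_event)
    false_alarms = np.sum(forecast_event & ~observed_event)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

def fss(forecast_event, observed_event, window=9):
    """Fractions skill score over a square neighborhood of `window` points."""
    # Neighborhood fractions of forecast and observed exceedances.
    pf = uniform_filter(forecast_event.astype(float), size=window, mode="constant")
    po = uniform_filter(observed_event.astype(float), size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref else np.nan
```

For example, the binary inputs could be built by thresholding forecast and observed composite reflectivity at a chosen value (say, 40 dBZ) before passing them to these functions.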

Full access
John S. Kain, Steve Willington, Adam J. Clark, Steven J. Weiss, Mark Weeks, Israel L. Jirak, Michael C. Coniglio, Nigel M. Roberts, Christopher D. Karstens, Jonathan M. Wilkinson, Kent H. Knopfmeier, Humphrey W. Lean, Laura Ellam, Kirsty Hanley, Rachel North, and Dan Suri

Abstract

In recent years, a growing partnership has emerged between the Met Office and the designated U.S. national centers for expertise in severe weather research and forecasting, that is, the National Oceanic and Atmospheric Administration (NOAA) National Severe Storms Laboratory (NSSL) and the NOAA Storm Prediction Center (SPC). The driving force behind this partnership is a compelling set of mutual interests related to predicting and understanding high-impact weather and using high-resolution numerical weather prediction models as foundational tools to explore these interests.

The forum for this collaborative activity is the NOAA Hazardous Weather Testbed, where annual Spring Forecasting Experiments (SFEs) are conducted by NSSL and SPC. For the last decade, NSSL and SPC have used these experiments to find ways that high-resolution models can help achieve greater success in the prediction of tornadoes, large hail, and damaging winds. Beginning in 2012, the Met Office became a contributing partner in annual SFEs, bringing complementary expertise in the use of convection-allowing models, derived in their case from a parallel decadelong effort to use these models to advance prediction of flash floods associated with heavy thunderstorms.

The collaboration between NSSL, SPC, and the Met Office has been enthusiastic and productive, driven by strong mutual interests at a grassroots level and generous institutional support from the parent government agencies. In this article, a historical background is provided, motivations for collaborative activities are emphasized, and preliminary results are highlighted.

Full access
John S. Kain, Michael C. Coniglio, James Correia, Adam J. Clark, Patrick T. Marsh, Conrad L. Ziegler, Valliappa Lakshmanan, Stuart D. Miller Jr., Scott R. Dembek, Steven J. Weiss, Fanyou Kong, Ming Xue, Ryan A. Sobash, Andrew R. Dean, Israel L. Jirak, and Christopher J. Melick

The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with ~4-km grid spacing represent many aspects of the formation and development of deep convective clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convective cells, by itself, provides inadequate guidance on the disruptive potential of deep convective activity. Thus, future work on the CI forecasting problem should be couched in terms of convective-event prediction rather than detection and prediction of individual convective cells.
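
As a rough illustration of the kind of automated CI identification described above, the sketch below flags the first forecast time at which each grid point’s simulated composite reflectivity reaches a threshold. The 35-dBZ value, the function name, and the array layout are assumptions made for this example rather than the specific algorithm used in the experiment.

```python
# Illustrative CI-detection sketch: flag grid points where simulated composite
# reflectivity first meets or exceeds a threshold during the forecast. The
# 35-dBZ threshold and the (time, y, x) array layout are assumptions for the
# example, not necessarily those used in the 2011 experiment.
import numpy as np

def first_exceedance_time(reflectivity, threshold_dbz=35.0):
    """reflectivity: array of shape (ntimes, ny, nx) of composite reflectivity
    (dBZ) at successive output times. Returns an (ny, nx) array holding the
    first time index at which the threshold is met, or -1 where it never is."""
    exceeds = reflectivity >= threshold_dbz      # boolean (ntimes, ny, nx)
    first_idx = np.argmax(exceeds, axis=0)       # index of first True in time
    never = ~exceeds.any(axis=0)                 # points that never exceed
    first_idx = first_idx.astype(int)
    first_idx[never] = -1                        # mark non-initiating points
    return first_idx
```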

Full access
Adam J. Clark, Israel L. Jirak, Scott R. Dembek, Gerry J. Creager, Fanyou Kong, Kevin W. Thomas, Kent H. Knopfmeier, Burkely T. Gallo, Christopher J. Melick, Ming Xue, Keith A. Brewster, Youngsun Jung, Aaron Kennedy, Xiquan Dong, Joshua Markel, Matthew Gilmore, Glen S. Romine, Kathryn R. Fossell, Ryan A. Sobash, Jacob R. Carley, Brad S. Ferrier, Matthew Pyle, Curtis R. Alexander, Steven J. Weiss, John S. Kain, Louis J. Wicker, Gregory Thompson, Rebecca D. Adams-Selin, and David A. Imy

Abstract

One primary goal of annual Spring Forecasting Experiments (SFEs), which are coorganized by the National Oceanic and Atmospheric Administration’s (NOAA) National Severe Storms Laboratory and Storm Prediction Center and conducted in NOAA’s Hazardous Weather Testbed, is documenting performance characteristics of experimental, convection-allowing modeling systems (CAMs). Since 2007, the number of CAMs (including CAM ensembles) examined in the SFEs has increased dramatically, peaking at six different CAM ensembles in 2015. Meanwhile, major advances have been made in creating, importing, processing, and verifying these large and complex datasets, as well as in developing tools for analyzing and visualizing them. However, progress toward identifying optimal CAM ensemble configurations has been inhibited because the different CAM systems have been independently designed, making it difficult to attribute differences in performance characteristics to specific aspects of their configurations. Thus, for the 2016 SFE, a much more coordinated effort was made among many collaborators, who agreed on a set of model specifications (e.g., model version, grid spacing, domain size, and physics) so that the simulations contributed by each collaborator could be combined to form one large, carefully designed ensemble known as the Community Leveraged Unified Ensemble (CLUE). The 2016 CLUE was composed of 65 members contributed by five research institutions and represents an unprecedented effort to enable an evidence-driven decision process to help guide NOAA’s operational modeling efforts. Eight unique experiments were designed within the CLUE framework to examine issues directly relevant to the design of NOAA’s future operational CAM-based ensembles. This article highlights the CLUE design and presents results from one of the experiments, which examined the impact of single-core versus multicore CAM ensemble configurations.

Full access