Search Results

1–6 of 6 items for Author or Editor: Christopher J. Melick
Douglas E. Tilly, Anthony R. Lupo, Christopher J. Melick, and Patrick S. Market

Abstract

The Zwack–Okossi vorticity tendency equation was used to calculate 500-hPa height tendencies in two intensifying Southern Hemisphere blocking events. The National Centers for Environmental Prediction–National Center for Atmospheric Research gridded reanalyses were used to make each of these calculations. The block intensification period for each event was associated with a deepening surface cyclone during a 48-h period beginning at 1200 UTC 28 July and 1200 UTC 8 August 1986, respectively. These results demonstrate that the diabatic heating forces height rises through the sensible and latent heating terms in these two Southern Hemisphere blocking events. The sensible heating was the larger contributor, second only to (about the same as) the vorticity advection term in the first (second) event. The vorticity advection term has been shown by several studies to be associated with block intensification.

Full access
Rebecca D. Adams-Selin, Adam J. Clark, Christopher J. Melick, Scott R. Dembek, Israel L. Jirak, and Conrad L. Ziegler

Abstract

Four different versions of the HAILCAST hail model have been tested as part of the 2014–16 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments. HAILCAST was run as part of the National Severe Storms Laboratory (NSSL) WRF Ensemble during 2014–16 and the Community Leveraged Unified Ensemble (CLUE) in 2016. Objective verification against the Multi-Radar Multi-Sensor maximum expected size of hail (MRMS MESH) product was conducted using both object-based and neighborhood grid-based verification. Subjective verification and feedback were provided by HWT participants. Hourly maximum storm surrogate fields at a variety of thresholds and Storm Prediction Center (SPC) convective outlooks were also evaluated for comparison. HAILCAST was found to improve with each version due to feedback from the 2014–16 HWTs. The 2016 version of HAILCAST matched or exceeded the skill of the tested storm surrogates across a variety of thresholds. The post-2016 version of HAILCAST was found to improve 50-mm hail forecasts through object-based verification, but 25-mm hail forecasting ability declined as measured through neighborhood grid-based verification. The skill of the storm surrogate fields varied widely as the threshold values used to determine hail size were varied. HAILCAST was found not to require such tuning, as it produced consistent results even when used across different model configurations and horizontal grid spacings. Additionally, different storm surrogate fields performed at varying levels of skill when forecasting 25- versus 50-mm hail, hinting at the different convective modes typically associated with small versus large hail. HAILCAST matched the results of the best-performing storm surrogate field relatively consistently across multiple hail size thresholds.
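The neighborhood grid-based verification mentioned in this abstract can be sketched as follows. This is an illustrative example only, not the experiment's actual verification code: the function names, the Chebyshev-distance neighborhood, the 25-mm threshold, and the radius are placeholder assumptions.

```python
# Illustrative sketch (not the HWT's actual code): neighborhood
# grid-based verification of a hail-size forecast against an observed
# field such as MRMS MESH. The fields, threshold, and radius below are
# placeholder assumptions.
import numpy as np

def neighborhood_max(field, radius):
    """Maximum of `field` within a Chebyshev `radius` of each grid point."""
    padded = np.pad(field, radius, mode="constant", constant_values=-np.inf)
    n, m = field.shape
    windows = [padded[i:i + n, j:j + m]
               for i in range(2 * radius + 1)
               for j in range(2 * radius + 1)]
    return np.max(np.stack(windows), axis=0)

def neighborhood_csi(forecast_mm, observed_mm, threshold_mm=25.0, radius=1):
    """Critical success index after relaxing both fields to their
    neighborhood maxima, so a near-miss within `radius` grid points
    counts as a hit instead of a miss/false-alarm pair."""
    f = neighborhood_max(forecast_mm, radius) >= threshold_mm
    o = neighborhood_max(observed_mm, radius) >= threshold_mm
    hits = int(np.sum(f & o))
    misses = int(np.sum(~f & o))
    false_alarms = int(np.sum(f & ~o))
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

# A forecast hail swath displaced one grid point from the observed swath
# still scores hits once the neighborhood relaxation is applied.
forecast = np.zeros((10, 10)); forecast[3, 3] = 30.0
observed = np.zeros((10, 10)); observed[4, 4] = 30.0
print(round(neighborhood_csi(forecast, observed), 3))  # → 0.286
```

Point-to-point matching would score this displaced swath as one miss plus one false alarm (CSI = 0); the neighborhood relaxation is what lets small spatial errors at convection-allowing grid spacings count as useful forecasts.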

Full access
Burkely T. Gallo, Adam J. Clark, Israel Jirak, John S. Kain, Steven J. Weiss, Michael Coniglio, Kent Knopfmeier, James Correia Jr., Christopher J. Melick, Christopher D. Karstens, Eswar Iyer, Andrew R. Dean, Ming Xue, Fanyou Kong, Youngsun Jung, Feifei Shen, Kevin W. Thomas, Keith Brewster, Derek Stratman, Gregory W. Carbin, William Line, Rebecca Adams-Selin, and Steve Willington

Abstract

Led by NOAA’s Storm Prediction Center and National Severe Storms Laboratory, annual spring forecasting experiments (SFEs) in the Hazardous Weather Testbed test and evaluate cutting-edge technologies and concepts for improving severe weather prediction through intensive real-time forecasting and evaluation activities. Experimental forecast guidance is provided through collaborations with several U.S. government and academic institutions, as well as the Met Office. The purpose of this article is to summarize activities, insights, and preliminary findings from recent SFEs, emphasizing SFE 2015. Several innovative aspects of recent experiments are discussed, including the 1) use of convection-allowing model (CAM) ensembles with advanced ensemble data assimilation, 2) generation of severe weather outlooks valid at time periods shorter than those issued operationally (e.g., 1–4 h), 3) use of CAMs to issue outlooks beyond the day 1 period, 4) increased interaction through software allowing participants to create individual severe weather outlooks, and 5) tests of newly developed storm-attribute-based diagnostics for predicting tornadoes and hail size. Additionally, plans for future experiments are discussed, including the creation of a Community Leveraged Unified Ensemble (CLUE) system, which will test various strategies for CAM ensemble design using carefully designed sets of ensemble members contributed by different agencies to drive evidence-based decision-making for near-future operational systems.

Full access
Adam J. Clark, Steven J. Weiss, John S. Kain, Israel L. Jirak, Michael Coniglio, Christopher J. Melick, Christopher Siewert, Ryan A. Sobash, Patrick T. Marsh, Andrew R. Dean, Ming Xue, Fanyou Kong, Kevin W. Thomas, Yunheng Wang, Keith Brewster, Jidong Gao, Xuguang Wang, Jun Du, David R. Novak, Faye E. Barthold, Michael J. Bodner, Jason J. Levit, C. Bruce Entwistle, Tara L. Jensen, and James Correia Jr.

The NOAA Hazardous Weather Testbed (HWT) conducts annual spring forecasting experiments organized by the Storm Prediction Center and National Severe Storms Laboratory to test and evaluate emerging scientific concepts and technologies for improved analysis and prediction of hazardous mesoscale weather. A primary goal is to accelerate the transfer of promising new scientific concepts and tools from research to operations through the use of intensive real-time experimental forecasting and evaluation activities conducted during the spring and early summer convective storm period. The 2010 NOAA/HWT Spring Forecasting Experiment (SE2010), conducted 17 May through 18 June, had a broad focus, with emphases on heavy rainfall and aviation weather, through collaboration with the Hydrometeorological Prediction Center (HPC) and the Aviation Weather Center (AWC), respectively. In addition, using the computing resources of the National Institute for Computational Sciences at the University of Tennessee, the Center for Analysis and Prediction of Storms at the University of Oklahoma provided unprecedented real-time conterminous United States (CONUS) forecasts from a multimodel Storm-Scale Ensemble Forecast (SSEF) system with 4-km grid spacing and 26 members and from a 1-km grid spacing configuration of the Weather Research and Forecasting model. Several other organizations provided additional experimental high-resolution model output. This article summarizes the activities, insights, and preliminary findings from SE2010, emphasizing the use of the SSEF system and the successful collaboration with the HPC and AWC.

A supplement to this article is available online (DOI:10.1175/BAMS-D-11-00040.2)

Full access
John S. Kain, Michael C. Coniglio, James Correia, Adam J. Clark, Patrick T. Marsh, Conrad L. Ziegler, Valliappa Lakshmanan, Stuart D. Miller Jr., Scott R. Dembek, Steven J. Weiss, Fanyou Kong, Ming Xue, Ryan A. Sobash, Andrew R. Dean, Israel L. Jirak, and Christopher J. Melick

The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with grid spacing of ~4 km represent many aspects of the formation and development of deep convective clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convective cells, by itself, provides inadequate guidance on the disruptive potential of deep convective activity. Thus, future work on the CI forecasting problem should be couched in terms of convective-event prediction rather than detection and prediction of individual convective cells.

Full access
Adam J. Clark, Israel L. Jirak, Scott R. Dembek, Gerry J. Creager, Fanyou Kong, Kevin W. Thomas, Kent H. Knopfmeier, Burkely T. Gallo, Christopher J. Melick, Ming Xue, Keith A. Brewster, Youngsun Jung, Aaron Kennedy, Xiquan Dong, Joshua Markel, Matthew Gilmore, Glen S. Romine, Kathryn R. Fossell, Ryan A. Sobash, Jacob R. Carley, Brad S. Ferrier, Matthew Pyle, Curtis R. Alexander, Steven J. Weiss, John S. Kain, Louis J. Wicker, Gregory Thompson, Rebecca D. Adams-Selin, and David A. Imy

Abstract

One primary goal of the annual Spring Forecasting Experiments (SFEs), which are co-organized by NOAA’s National Severe Storms Laboratory and Storm Prediction Center and conducted in the NOAA Hazardous Weather Testbed, is documenting the performance characteristics of experimental convection-allowing modeling systems (CAMs). Since 2007, the number of CAMs (including CAM ensembles) examined in the SFEs has increased dramatically, peaking at six different CAM ensembles in 2015. Meanwhile, major advances have been made in generating, importing, processing, and verifying these large and complex datasets, and in developing tools to analyze and visualize them. However, progress toward identifying optimal CAM ensemble configurations has been inhibited because the different CAM systems were designed independently, making it difficult to attribute differences in performance to specific aspects of their configurations. Thus, for the 2016 SFE, a much more coordinated effort was made among many collaborators by agreeing on a set of model specifications (e.g., model version, grid spacing, domain size, and physics) so that the simulations contributed by each collaborator could be combined into one large, carefully designed ensemble known as the Community Leveraged Unified Ensemble (CLUE). The 2016 CLUE comprised 65 members contributed by five research institutions and represents an unprecedented effort to enable an evidence-driven decision process to help guide NOAA’s operational modeling efforts. Eight experiments were designed within the CLUE framework to examine issues directly relevant to the design of NOAA’s future operational CAM-based ensembles. This article highlights the CLUE design and presents results from one of the experiments, which examines the impact of single-core versus multicore CAM ensemble configurations.

Open access