Search Results

Items 31–40 of 43 for:

  • Author or Editor: Michael C. Coniglio
  • Access: Content accessible to me
Ryan A. Sobash, John S. Kain, David R. Bright, Andrew R. Dean, Michael C. Coniglio, and Steven J. Weiss

Abstract

With the advent of convection-allowing NWP models (CAMs) comes the potential for new forms of forecast guidance. While CAMs lack the required resolution to simulate many severe phenomena associated with convection (e.g., large hail, downburst winds, and tornadoes), they can still provide unique guidance for the occurrence of these phenomena if “extreme” patterns of behavior in simulated storms are strongly correlated with observed severe phenomena. This concept is explored using output from a series of CAM forecasts generated on a daily basis during the spring of 2008. This output is mined for the presence of extreme values of updraft helicity (UH), a diagnostic field used to identify supercellular storms. Extreme values of the UH field are flagged as simulated “surrogate” severe weather reports and the spatial correspondence between these surrogate reports and actual observed severe reports is determined. In addition, probabilistic forecasts [surrogate severe probabilistic forecasts (SSPFs)] are created from each field’s simulated surrogate severe reports using a Gaussian smoother. The simulated surrogate reports are capable of reproducing the seasonal climatology observed within the field of actual reports. The SSPFs created from the surrogates are verified using ROC curves and reliability diagrams and the sensitivity of the verification metrics to the smoothing parameter in the Gaussian distribution is tested. The SSPFs produce reliable forecast probabilities with minimal calibration. These results demonstrate that a relatively straightforward postprocessing procedure, which focuses on the characteristics of explicitly predicted convective entities, can provide reliable severe weather forecast guidance. It is anticipated that this technique will be even more valuable when implemented within a convection-allowing ensemble forecasting system.
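The postprocessing chain described above (threshold a simulated UH field, treat exceedances as surrogate reports, smooth them into probabilities) can be sketched in a few lines. The synthetic grid, the 75 m² s⁻² threshold, and the smoothing length below are illustrative assumptions, not the study's calibrated settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical hourly-max updraft helicity (UH) field on a model grid;
# the random values merely stand in for CAM output.
rng = np.random.default_rng(0)
uh = rng.gamma(shape=2.0, scale=10.0, size=(100, 100))  # m^2 s^-2

UH_THRESH = 75.0  # illustrative exceedance threshold for a "surrogate report"
surrogate = (uh >= UH_THRESH).astype(float)  # binary surrogate-report grid

# Smooth the binary grid with a Gaussian kernel to obtain a surrogate severe
# probabilistic forecast (SSPF); sigma is the smoothing length in grid points.
sspf = np.clip(gaussian_filter(surrogate, sigma=5.0), 0.0, 1.0)
```

Varying sigma trades sharpness against reliability, which is the sensitivity to the smoothing parameter that the abstract describes testing.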

Full access
Matthew A. Campbell, Ariel E. Cohen, Michael C. Coniglio, Andrew R. Dean, Stephen F. Corfidi, Sarah J. Corfidi, and Corey M. Mead

Abstract

The goal of this study is to document how the convective structure and motion, relative to the mean wind, of long-track severe-wind-producing MCSs differ from those of short-track severe-wind-producing MCSs. An ancillary goal is to determine if these differences are large enough that some criterion for MCS motion relative to the mean wind could be used in future definitions of “derechos.” Results confirm past investigations that well-organized MCSs, including those that produce derechos, tend to move faster than the mean wind, exhibiting a significantly larger degree of propagation (component of MCS motion in addition to the component contributed by the mean flow). Furthermore, well-organized systems that produce shorter-track swaths of damaging winds likewise tend to move faster than the mean wind with a significant propagation component along the mean wind. Therefore, propagation in the direction of the mean wind is not necessarily a characteristic that can be used to distinguish derechos from nonderechos. However, there is some indication that long-track damaging wind events that occur without large-scale or persistent bow echoes and mesoscale convective vortices (MCVs) require a strong propagation component along the mean wind direction to become long lived. Overall, however, there does not appear to be enough separation in the motion characteristics among the MCS types to warrant the inclusion of a mean-wind criterion into the definition of a derecho at this time.
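The propagation component discussed above is simply the vector difference between system motion and the mean wind. A toy decomposition, with wind and motion vectors invented purely for illustration:

```python
import numpy as np

# Illustrative (u, v) vectors in m/s; not taken from the study's cases.
mean_wind = np.array([15.0, 5.0])   # cloud-layer mean wind
mcs_motion = np.array([22.0, 2.0])  # observed MCS motion

# Propagation: MCS motion minus the mean-wind (advective) component.
propagation = mcs_motion - mean_wind
unit_mean = mean_wind / np.linalg.norm(mean_wind)
along_mean = float(propagation @ unit_mean)  # propagation along the mean wind

# along_mean > 0 means the system outruns the mean wind along its direction.
```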

Full access
Craig S. Schwartz, John S. Kain, Steven J. Weiss, Ming Xue, David R. Bright, Fanyou Kong, Kevin W. Thomas, Jason J. Levit, and Michael C. Coniglio

Abstract

During the 2007 NOAA Hazardous Weather Testbed (HWT) Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced convection-allowing forecasts from a single deterministic 2-km model and a 10-member 4-km-resolution ensemble. In this study, the 2-km deterministic output was compared with forecasts from the 4-km ensemble control member. Other than the difference in horizontal resolution, the two sets of forecasts featured identical Advanced Research Weather Research and Forecasting model (ARW-WRF) configurations, including vertical resolution, forecast domain, initial and lateral boundary conditions, and physical parameterizations. Therefore, forecast disparities were attributed solely to differences in horizontal grid spacing. This study is a follow-up to similar work that was based on results from the 2005 Spring Experiment. Unlike the 2005 experiment, however, model configurations were more rigorously controlled in the present study, providing a more robust dataset and a cleaner isolation of the dependence on horizontal resolution. Additionally, in this study, the 2- and 4-km outputs were compared with 12-km forecasts from the North American Mesoscale (NAM) model. Model forecasts were analyzed using objective verification of mean hourly precipitation and visual comparison of individual events, primarily during the 21- to 33-h forecast period to examine the utility of the models as next-day guidance. On average, both the 2- and 4-km model forecasts showed substantial improvement over the 12-km NAM. However, although the 2-km forecasts produced more-detailed structures on the smallest resolvable scales, the patterns of convective initiation, evolution, and organization were remarkably similar to the 4-km output. Moreover, on average, metrics such as equitable threat score, frequency bias, and fractions skill score revealed no statistical improvement of the 2-km forecasts compared to the 4-km forecasts. 
These results, based on the 2007 dataset, corroborate previous findings, suggesting that decreasing horizontal grid spacing from 4 to 2 km provides little added value as next-day guidance for severe convective storm and heavy rain forecasters in the United States.

Full access
Stanley B. Trier, Glen S. Romine, David A. Ahijevych, Robert J. Trapp, Russ S. Schumacher, Michael C. Coniglio, and David J. Stensrud

Abstract

In this study, the authors examine initiation of severe convection along a daytime surface dryline in a 10-member ensemble of convection-permitting simulations. Results indicate that the minimum buoyancy B_min of PBL air parcels must be small (B_min > −0.5°C) for successful deep convection initiation (CI) to occur along the dryline. Comparing different ensemble members reveals that CAPE magnitudes (allowing for entrainment) and the width of the zone of negligible B_min extending eastward from the dryline act together to influence CI. Since PBL updrafts that initiate along the dryline move rapidly northeast in the vertically sheared flow as they grow into the free troposphere, a wider zone of negligible B_min helps ensure adequate time for incipient storms to mature, which, itself, is hastened by larger CAPE.

Local B_min budget calculations and trajectory analysis are used to quantify physical processes responsible for the reduction of negative buoyancy prior to CI. Here, the grid-resolved forcing and forcing from temperature and moisture tendencies in the PBL scheme (arising from surface fluxes) contribute about equally in ensemble composites. However, greater spatial variability in grid-resolved forcing focuses the location of the greatest net forcing along the dryline. The grid-resolved forcing is influenced by a thermally direct vertical circulation, where time-averaged ascent at the east edge of the dryline results in locally deeper moisture and cooler conditions near the PBL top. Horizontal temperature advection spreads the cooler air eastward above higher equivalent potential temperature air at source levels of convecting air parcels, resulting in a wider zone of negligible B_min that facilitates sustained CI.

Full access
Craig S. Schwartz, John S. Kain, Steven J. Weiss, Ming Xue, David R. Bright, Fanyou Kong, Kevin W. Thomas, Jason J. Levit, Michael C. Coniglio, and Matthew S. Wandishin

Abstract

During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced a daily 10-member 4-km horizontal resolution ensemble forecast covering approximately three-fourths of the continental United States. Each member used the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model core, was initialized at 2100 UTC, ran for 33 h, and resolved convection explicitly. Different initial condition (IC), lateral boundary condition (LBC), and physics perturbations were introduced in 4 of the 10 ensemble members, while the remaining 6 members used identical ICs and LBCs, differing only in terms of microphysics (MP) and planetary boundary layer (PBL) parameterizations. This study focuses on precipitation forecasts from the ensemble.

The ensemble forecasts reveal WRF-ARW sensitivity to MP and PBL schemes. For example, over the 7-week experiment, the Mellor–Yamada–Janjić PBL and Ferrier MP parameterizations were associated with relatively high precipitation totals, while members configured with the Thompson MP or Yonsei University PBL scheme produced comparatively less precipitation. Additionally, different approaches for generating probabilistic ensemble guidance are explored. Specifically, a “neighborhood” approach is described and shown to considerably enhance the skill of probabilistic forecasts for precipitation when combined with a traditional technique of producing ensemble probability fields.
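A common way to realize the “neighborhood” idea described above, sketched here under invented thresholds and sizes rather than the experiment's actual settings, is to credit a member with an event at a grid point if it exceeds the threshold anywhere in a surrounding box, then average across members:

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Synthetic ensemble precipitation fields (member, y, x); values, threshold,
# and neighborhood radius are all placeholders for the sketch.
rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 2.0, size=(10, 60, 60))  # hourly precipitation (mm)
THRESH = 10.0                                    # exceedance threshold (mm)
RADIUS = 5                                       # neighborhood half-width (grid points)

# Traditional point probability: fraction of members exceeding the threshold.
point_prob = (precip >= THRESH).mean(axis=0)

# Neighborhood probability: a member scores a hit at a point if it exceeds
# the threshold anywhere in the (2*RADIUS+1)^2 box around that point.
exceed = (precip >= THRESH).astype(np.uint8)
hits = maximum_filter(exceed, size=(1, 2 * RADIUS + 1, 2 * RADIUS + 1))
neigh_prob = hits.mean(axis=0)
```

By construction the neighborhood probabilities are never smaller than the point probabilities, which smooths the guidance and accounts for spatial uncertainty in convective placement.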

Full access
Adam J. Clark, John S. Kain, David J. Stensrud, Ming Xue, Fanyou Kong, Michael C. Coniglio, Kevin W. Thomas, Yunheng Wang, Keith Brewster, Jidong Gao, Xuguang Wang, Steven J. Weiss, and Jun Du

Abstract

Probabilistic quantitative precipitation forecasts (PQPFs) from the storm-scale ensemble forecast system run by the Center for Analysis and Prediction of Storms during the spring of 2009 are evaluated using area under the relative operating characteristic curve (ROC area). ROC area, which measures discriminating ability, is examined for ensemble size n from 1 to 17 members and for spatial scales ranging from 4 to 200 km.

As expected, incremental gains in skill decrease with increasing n. Significance tests comparing ROC areas for each n to those of the full 17-member ensemble revealed that more members are required to reach statistically indistinguishable PQPF skill relative to the full ensemble as forecast lead time increases and spatial scale decreases. These results appear to reflect the broadening of the forecast probability distribution function (PDF) of future atmospheric states associated with decreasing spatial scale and increasing forecast lead time. They also illustrate that efficient allocation of computing resources for convection-allowing ensembles requires careful consideration of spatial scale and forecast length desired.
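ROC area, the verification metric used above, can be computed from probability-observation pairs with a simple threshold sweep over POD and POFD. The synthetic “forecasts” and “observations” below are stand-ins chosen only to make the sketch runnable; nothing here reproduces the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.integers(0, 2, size=5000)           # 1 = event observed, 0 = not
prob = np.where(obs == 1,
                rng.uniform(0.3, 1.0, 5000),  # forecasts lean high on events
                rng.uniform(0.0, 0.7, 5000))  # and low on non-events

def roc_area(prob, obs):
    """Trapezoidal area under the POD-vs-POFD (ROC) curve."""
    thresholds = np.r_[np.linspace(0.0, 1.0, 101), 1.01]  # last bin empties the curve
    pod = np.array([(prob[obs == 1] >= t).mean() for t in thresholds])
    pofd = np.array([(prob[obs == 0] >= t).mean() for t in thresholds])
    x, y = pofd[::-1], pod[::-1]              # order by increasing POFD
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) * 0.5))

auc = roc_area(prob, obs)  # well above the 0.5 no-skill value for this synthetic data
```

Repeating the calculation with probabilities pooled from subsets of 1 to n members is how the ensemble-size dependence described above would be measured.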

Full access
John S. Kain, Ming Xue, Michael C. Coniglio, Steven J. Weiss, Fanyou Kong, Tara L. Jensen, Barbara G. Brown, Jidong Gao, Keith Brewster, Kevin W. Thomas, Yunheng Wang, Craig S. Schwartz, and Jason J. Levit

Abstract

The impacts of assimilating radar data and other mesoscale observations in real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3–6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.

Full access
Corey K. Potvin, Patrick S. Skinner, Kimberly A. Hoogewind, Michael C. Coniglio, Jeremy A. Gibbs, Adam J. Clark, Montgomery L. Flora, Anthony E. Reinhart, Jacob R. Carley, and Elizabeth N. Smith

Abstract

The NOAA Warn-on-Forecast System (WoFS) is an experimental rapidly updating convection-allowing ensemble designed to provide probabilistic operational guidance on high-impact thunderstorm hazards. The current WoFS uses physics diversity to help maintain ensemble spread. We assess the systematic impacts of the three WoFS PBL schemes—YSU, MYJ, and MYNN—using novel, object-based methods tailored to thunderstorms. Very short forecast lead times of 0–3 h are examined, which limits phase errors and thereby facilitates comparisons of observed and model storms that occurred in the same area at the same time. This evaluation framework facilitates assessment of systematic PBL scheme impacts on storms and storm environments. Forecasts using all three PBL schemes exhibit overly narrow ranges of surface temperature, dewpoint, and wind speed. The surface biases do not generally decrease at later forecast initialization times, indicating that systematic PBL scheme errors are not well mitigated by data assimilation. The YSU scheme exhibits the least bias of the three in surface temperature and moisture and in many sounding-derived convective variables. Interscheme environmental differences are similar both near and far from storms and qualitatively resemble the differences analyzed in previous studies. The YSU environments exhibit stronger mixing, as expected of nonlocal PBL schemes; are slightly less favorable for storm intensification; and produce correspondingly weaker storms than the MYJ and MYNN environments. On the other hand, systematic interscheme differences in storm morphology and storm location forecast skill are negligible. Overall, the results suggest that calibrating forecasts to correct for systematic differences between PBL schemes may modestly improve WoFS and other convection-allowing ensemble guidance at short lead times.

Free access
John S. Kain, Steve Willington, Adam J. Clark, Steven J. Weiss, Mark Weeks, Israel L. Jirak, Michael C. Coniglio, Nigel M. Roberts, Christopher D. Karstens, Jonathan M. Wilkinson, Kent H. Knopfmeier, Humphrey W. Lean, Laura Ellam, Kirsty Hanley, Rachel North, and Dan Suri

Abstract

In recent years, a growing partnership has emerged between the Met Office and the designated U.S. national centers for expertise in severe weather research and forecasting, that is, the National Oceanic and Atmospheric Administration (NOAA) National Severe Storms Laboratory (NSSL) and the NOAA Storm Prediction Center (SPC). The driving force behind this partnership is a compelling set of mutual interests related to predicting and understanding high-impact weather and using high-resolution numerical weather prediction models as foundational tools to explore these interests.

The forum for this collaborative activity is the NOAA Hazardous Weather Testbed, where annual Spring Forecasting Experiments (SFEs) are conducted by NSSL and SPC. For the last decade, NSSL and SPC have used these experiments to find ways that high-resolution models can help achieve greater success in the prediction of tornadoes, large hail, and damaging winds. Beginning in 2012, the Met Office became a contributing partner in annual SFEs, bringing complementary expertise in the use of convection-allowing models, derived in their case from a parallel decadelong effort to use these models to advance prediction of flash floods associated with heavy thunderstorms.

The collaboration between NSSL, SPC, and the Met Office has been enthusiastic and productive, driven by strong mutual interests at a grassroots level and generous institutional support from the parent government agencies. In this article, a historical background is provided, motivations for collaborative activities are emphasized, and preliminary results are highlighted.

Full access
John S. Kain, Michael C. Coniglio, James Correia, Adam J. Clark, Patrick T. Marsh, Conrad L. Ziegler, Valliappa Lakshmanan, Stuart D. Miller Jr., Scott R. Dembek, Steven J. Weiss, Fanyou Kong, Ming Xue, Ryan A. Sobash, Andrew R. Dean, Israel L. Jirak, and Christopher J. Melick

The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with grid spacing ~4 km represent many aspects of the formation and development of deep convective clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convective cells, by itself, provides inadequate guidance for the disruptive potential of deep convective activity. Thus, future work on the CI forecasting problem should be couched in terms of convection-event prediction rather than detection and prediction of individual convective cells.

Full access