Search Results

Showing 1–10 of 11 items for Author or Editor: Jason J. Levit
Victor Homar, David J. Stensrud, Jason J. Levit, and David R. Bright

Abstract

During the spring of 2003, the Storm Prediction Center, in partnership with the National Severe Storms Laboratory, conducted an experiment to explore the value of having operational severe weather forecasters involved in the generation of a short-range ensemble forecasting system. The idea was to create a customized ensemble to provide guidance on the severe weather threat over the following 48 h. The forecaster was asked to highlight structures of interest in the control run and, using an adjoint model, a set of perturbations was obtained and used to generate a 32-member fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5) ensemble. The performance of this experimental ensemble is objectively evaluated and compared with other available forecasts (both deterministic and ensemble) using real-time severe weather reports and precipitation in the central and eastern parts of the continental United States. The experimental ensemble outperforms the operational forecasts considered in the study for episodes with moderate-to-high probability of severe weather occurrence and those with moderate probability of heavy precipitation. On the other hand, the experimental ensemble forecasts of low-probability severe weather and low precipitation amounts have less skill than the operational models, arguably due to the lack of global dispersion in a system designed to target the spread over specific areas of concern for severe weather. Results from an additional test ensemble constructed by combining automatic and manually perturbed members show the best results for numerical forecasts of severe weather for all probability values. While the value of human contribution in the numerical forecast is demonstrated, further research is needed to determine how to better use the skill and experience of the forecaster in the construction of short-range ensembles.

Full access
Daniel Harris, Efi Foufoula-Georgiou, Kelvin K. Droegemeier, and Jason J. Levit

Abstract

Small-scale (less than ∼15 km) precipitation variability significantly affects the hydrologic response of a basin and the accurate estimation of water and energy fluxes through coupled land–atmosphere modeling schemes. It also affects the radiative transfer through precipitating clouds and thus rainfall estimation from microwave sensors. Because both land–atmosphere and cloud–radiation interactions are nonlinear and occur over a broad range of scales (from a few centimeters to several kilometers), it is important that, over these scales, cloud-resolving numerical models realistically reproduce the observed precipitation variability. This issue is examined herein by using a suite of multiscale statistical methods to compare the scale dependence of precipitation variability of a numerically simulated convective storm with that observed by radar. In particular, Fourier spectrum, structure function, and moment-scale analyses are used to show that, although the variability of modeled precipitation agrees with that observed for scales larger than approximately 5 times the model resolution, the model shows a falloff in variability at smaller scales. Thus, depending upon the smallest scale at which variability is considered to be important for a specific application, one has to resort either to very high resolution model runs (resolutions 5 times higher than the scale of interest) or to stochastic methods that can introduce the missing small-scale variability. The latter involve upscaling the model output to a scale approximately 5 times the model resolution and then stochastically downscaling it to smaller scales. The results of multiscale analyses, such as those presented herein, are key to the implementation of such stochastic downscaling methodologies.
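The spectral comparison described above can be illustrated with a radially averaged Fourier power spectrum of a gridded precipitation field: a smoothed field shows the same kind of high-wavenumber variability falloff the authors report for model output relative to radar. This is a generic sketch (function name and binning choices are illustrative, not the paper's code):

```python
import numpy as np

def radial_power_spectrum(field, nbins=20):
    """Radially averaged 2-D Fourier power spectrum of a gridded field."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(ny))   # cycles per grid interval
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    kmag = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    bins = np.linspace(0.0, kmag.max(), nbins + 1)
    which = np.digitize(kmag.ravel(), bins) - 1
    # Mean spectral power in each radial wavenumber bin.
    spectrum = np.array([power.ravel()[which == i].mean()
                         if np.any(which == i) else 0.0
                         for i in range(nbins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, spectrum
```

Comparing the high-wavenumber tail of such spectra for observed and simulated fields is one way to locate the scale at which modeled variability begins to fall off.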

Full access
Stephen F. Corfidi, Steven J. Weiss, John S. Kain, Sarah J. Corfidi, Robert M. Rabin, and Jason J. Levit

Abstract

The Super Outbreak of tornadoes over the central and eastern United States on 3–4 April 1974 remains the most outstanding severe convective weather episode on record in the continental United States. The outbreak far surpassed previous and succeeding events in severity, longevity, and extent. In this paper, surface, upper-air, radar, and satellite data are used to provide an updated synoptic and subsynoptic overview of the event. Emphasis is placed on identifying the major factors that contributed to the development of the three main convective bands associated with the outbreak, and on identifying the conditions that may have contributed to the outstanding number of intense and long-lasting tornadoes. Selected output from a 29-km, 50-layer version of the Eta forecast model, a version similar to that available operationally in the mid-1990s, also is presented to help depict the evolution of thermodynamic stability during the event.

Full access
John S. Kain, Scott R. Dembek, Steven J. Weiss, Jonathan L. Case, Jason J. Levit, and Ryan A. Sobash

Abstract

A new strategy for generating and presenting model diagnostic fields from convection-allowing forecast models is introduced. The fields are produced by computing temporal-maximum values for selected diagnostics at each horizontal grid point between scheduled output times. The two-dimensional arrays containing these maximum values are saved at the scheduled output times. The additional fields have minimal impacts on the size of the output files and the computation of most diagnostic quantities can be done very efficiently during integration of the Weather Research and Forecasting Model. Results show that these unique output fields facilitate the examination of features associated with convective storms, which can change dramatically within typical output intervals of 1–3 h.
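The temporal-maximum strategy can be sketched as a running elementwise maximum that is updated at every model time step and reset at each scheduled output time. A minimal illustration (names and structure are hypothetical, not the WRF implementation itself):

```python
import numpy as np

def temporal_max_fields(step_fields, steps_per_output):
    """Accumulate a temporal maximum of a diagnostic at each grid point.

    step_fields: iterable of 2-D arrays, one per model time step
                 (e.g. updraft speed or simulated reflectivity).
    steps_per_output: number of time steps between scheduled outputs.
    Yields the 2-D temporal-maximum field at each output time.
    """
    running_max = None
    for i, field in enumerate(step_fields, start=1):
        # Update the elementwise running maximum at every step.
        running_max = (field.copy() if running_max is None
                       else np.maximum(running_max, field))
        if i % steps_per_output == 0:
            yield running_max      # saved alongside the instantaneous fields
            running_max = None     # reset for the next output interval
```

Because only one extra 2-D array per diagnostic is carried between output times, the impact on output file size and integration cost is small, which matches the efficiency argument in the abstract.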

Full access
Craig S. Schwartz, John S. Kain, Steven J. Weiss, Ming Xue, David R. Bright, Fanyou Kong, Kevin W. Thomas, Jason J. Levit, and Michael C. Coniglio

Abstract

During the 2007 NOAA Hazardous Weather Testbed (HWT) Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced convection-allowing forecasts from a single deterministic 2-km model and a 10-member 4-km-resolution ensemble. In this study, the 2-km deterministic output was compared with forecasts from the 4-km ensemble control member. Other than the difference in horizontal resolution, the two sets of forecasts featured identical Advanced Research Weather Research and Forecasting model (ARW-WRF) configurations, including vertical resolution, forecast domain, initial and lateral boundary conditions, and physical parameterizations. Therefore, forecast disparities were attributed solely to differences in horizontal grid spacing. This study is a follow-up to similar work that was based on results from the 2005 Spring Experiment. Unlike the 2005 experiment, however, model configurations were more rigorously controlled in the present study, providing a more robust dataset and a cleaner isolation of the dependence on horizontal resolution. Additionally, in this study, the 2- and 4-km outputs were compared with 12-km forecasts from the North American Mesoscale (NAM) model. Model forecasts were analyzed using objective verification of mean hourly precipitation and visual comparison of individual events, primarily during the 21- to 33-h forecast period to examine the utility of the models as next-day guidance. On average, both the 2- and 4-km model forecasts showed substantial improvement over the 12-km NAM. However, although the 2-km forecasts produced more-detailed structures on the smallest resolvable scales, the patterns of convective initiation, evolution, and organization were remarkably similar to the 4-km output. Moreover, on average, metrics such as equitable threat score, frequency bias, and fractions skill score revealed no statistical improvement of the 2-km forecasts compared to the 4-km forecasts. 
These results, based on the 2007 dataset, corroborate previous findings, suggesting that decreasing horizontal grid spacing from 4 to 2 km provides little added value as next-day guidance for severe convective storm and heavy rain forecasters in the United States.
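Two of the metrics named above, equitable threat score and frequency bias, come from a standard 2 × 2 contingency table for a yes/no event. The sketch below illustrates the textbook definitions only; it is not the study's verification code:

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Equitable threat score (ETS) and frequency bias for a yes/no event."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = f.size
    # Hits expected by chance, given the forecast and observed frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias
```

A perfect categorical forecast gives ETS = 1 and bias = 1; "no statistical improvement" in the abstract means the 2-km and 4-km values of such scores were statistically indistinguishable.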

Full access
Craig S. Schwartz, John S. Kain, Steven J. Weiss, Ming Xue, David R. Bright, Fanyou Kong, Kevin W. Thomas, Jason J. Levit, Michael C. Coniglio, and Matthew S. Wandishin

Abstract

During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced a daily 10-member 4-km horizontal resolution ensemble forecast covering approximately three-fourths of the continental United States. Each member used the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model core, which was initialized at 2100 UTC, ran for 33 h, and resolved convection explicitly. Different initial condition (IC), lateral boundary condition (LBC), and physics perturbations were introduced in 4 of the 10 ensemble members, while the remaining 6 members used identical ICs and LBCs, differing only in terms of microphysics (MP) and planetary boundary layer (PBL) parameterizations. This study focuses on precipitation forecasts from the ensemble.

The ensemble forecasts reveal WRF-ARW sensitivity to MP and PBL schemes. For example, over the 7-week experiment, the Mellor–Yamada–Janjić PBL and Ferrier MP parameterizations were associated with relatively high precipitation totals, while members configured with the Thompson MP or Yonsei University PBL scheme produced comparatively less precipitation. Additionally, different approaches for generating probabilistic ensemble guidance are explored. Specifically, a “neighborhood” approach is described and shown to considerably enhance the skill of probabilistic forecasts for precipitation when combined with a traditional technique of producing ensemble probability fields.
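The "neighborhood" idea can be sketched as averaging the pointwise ensemble probability over a window around each grid point, so that a near-miss in space still contributes to the probability. This is one simple square-window variant for illustration; the experiment's exact formulation may differ:

```python
import numpy as np

def neighborhood_probability(members, threshold, radius):
    """Neighborhood ensemble probability of exceeding a threshold.

    members: 3-D array (n_members, ny, nx) of a forecast field.
    radius:  neighborhood half-width in grid points (square window).
    Returns a 2-D probability field in [0, 1].
    """
    exceed = (members >= threshold).astype(float)
    # Traditional point probability: fraction of members exceeding the threshold.
    point_prob = exceed.mean(axis=0)
    ny, nx = point_prob.shape
    out = np.zeros_like(point_prob)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
            i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
            # Neighborhood probability: average over the window.
            out[j, i] = point_prob[j0:j1, i0:i1].mean()
    return out
```

Smoothing the probability field this way typically improves skill scores for precipitation because it relaxes the demand that convection be placed exactly on the verifying grid point.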

Full access
John S. Kain, Steven J. Weiss, David R. Bright, Michael E. Baldwin, Jason J. Levit, Gregory W. Carbin, Craig S. Schwartz, Morris L. Weisman, Kelvin K. Droegemeier, Daniel B. Weber, and Kevin W. Thomas

Abstract

During the 2005 NOAA Hazardous Weather Testbed Spring Experiment two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels while the second used 2-km grid spacing and 51 vertical levels.

Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18–30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts.

In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill—the scale of mesoscale convective features—the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.

Full access
John S. Kain, Ming Xue, Michael C. Coniglio, Steven J. Weiss, Fanyou Kong, Tara L. Jensen, Barbara G. Brown, Jidong Gao, Keith Brewster, Kevin W. Thomas, Yunheng Wang, Craig S. Schwartz, and Jason J. Levit

Abstract

The impacts of assimilating radar data and other mesoscale observations on real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3–6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.

Full access
Adam J. Clark, Steven J. Weiss, John S. Kain, Israel L. Jirak, Michael Coniglio, Christopher J. Melick, Christopher Siewert, Ryan A. Sobash, Patrick T. Marsh, Andrew R. Dean, Ming Xue, Fanyou Kong, Kevin W. Thomas, Yunheng Wang, Keith Brewster, Jidong Gao, Xuguang Wang, Jun Du, David R. Novak, Faye E. Barthold, Michael J. Bodner, Jason J. Levit, C. Bruce Entwistle, Tara L. Jensen, and James Correia Jr.

The NOAA Hazardous Weather Testbed (HWT) conducts annual spring forecasting experiments organized by the Storm Prediction Center and National Severe Storms Laboratory to test and evaluate emerging scientific concepts and technologies for improved analysis and prediction of hazardous mesoscale weather. A primary goal is to accelerate the transfer of promising new scientific concepts and tools from research to operations through the use of intensive real-time experimental forecasting and evaluation activities conducted during the spring and early summer convective storm period. The 2010 NOAA/HWT Spring Forecasting Experiment (SE2010), conducted 17 May through 18 June, had a broad focus, with emphases on heavy rainfall and aviation weather, through collaboration with the Hydrometeorological Prediction Center (HPC) and the Aviation Weather Center (AWC), respectively. In addition, using the computing resources of the National Institute for Computational Sciences at the University of Tennessee, the Center for Analysis and Prediction of Storms at the University of Oklahoma provided unprecedented real-time conterminous United States (CONUS) forecasts from a multimodel Storm-Scale Ensemble Forecast (SSEF) system with 4-km grid spacing and 26 members and from a 1-km grid spacing configuration of the Weather Research and Forecasting model. Several other organizations provided additional experimental high-resolution model output. This article summarizes the activities, insights, and preliminary findings from SE2010, emphasizing the use of the SSEF system and the successful collaboration with the HPC and AWC.

A supplement to this article is available online (DOI:10.1175/BAMS-D-11-00040.2)

Full access
Kevin E. Kelleher, Kelvin K. Droegemeier, Jason J. Levit, Carl Sinclair, David E. Jahn, Scott D. Hill, Lora Mueller, Grant Qualley, Tim D. Crum, Steven D. Smith, Stephen A. Del Greco, S. Lakshmivarahan, Linda Miller, Mohan Ramamurthy, Ben Domenico, and David W. Fulker

The NOAA National Weather Service (NWS) announced at the annual meeting of the American Meteorological Society in February 2003 its intent to create an Internet-based pseudo-operational system for delivering Weather Surveillance Radar-1988 Doppler (WSR-88D) Level II data. In April 2004, the NWS deployed the Next-Generation Weather Radar (NEXRAD) level II central collection functionality and set up a framework for distributing these data. The NWS action was the direct result of a successful joint government, university, and private sector development and test effort called the Collaborative Radar Acquisition Field Test (CRAFT) project. Project CRAFT was a multi-institutional effort among the Center for Analysis and Prediction of Storms, the University Corporation for Atmospheric Research, the University of Washington, and three NOAA organizations: the National Severe Storms Laboratory, the WSR-88D Radar Operations Center (ROC), and the National Climatic Data Center. The principal goal of CRAFT was to demonstrate the real-time compression and Internet-based transmission of level II data from all WSR-88D radars, with the vision of an affordable nationwide operational implementation. The initial test bed of six radars located in and around Oklahoma grew to include 64 WSR-88Ds nationwide before being adopted by the NWS for national implementation. A description of the technical aspects of the award-winning Project CRAFT is given, including data transmission, reliability, latency, compression, archival, data mining, and newly developed visualization and retrieval tools. In addition, challenges encountered in transferring this research project into operations are discussed, along with examples of uses of the data.
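The compression-and-transmission idea can be sketched as chunked lossless compression of a radar volume before streaming, so that pieces can be sent as they are produced. The chunk size and helper names below are hypothetical illustrations, not the actual CRAFT protocol:

```python
import bz2

def compress_volume(raw_bytes, chunk_size=262144):
    """Losslessly compress a radar volume in fixed-size chunks for streaming."""
    chunks = []
    for i in range(0, len(raw_bytes), chunk_size):
        # Each chunk is compressed independently so it can be sent immediately.
        chunks.append(bz2.compress(raw_bytes[i:i + chunk_size]))
    return chunks

def decompress_volume(chunks):
    """Reassemble the original byte stream on the receiving end."""
    return b"".join(bz2.decompress(c) for c in chunks)
```

Independent per-chunk compression trades a little compression ratio for lower latency, which matters when volumes must reach users in near real time.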

Full access