Search Results

You are looking at items 21–29 of 29 for

  • Author or Editor: Michael E. Baldwin
Logan C. Dawson, Glen S. Romine, Robert J. Trapp, and Michael E. Baldwin

Abstract

This study quantified the utility of radar-derived rotation track data for verifying supercell thunderstorm forecasts. The forecasts were generated using a convection-permitting model ensemble, and supercell occurrence was diagnosed via updraft helicity and low-level vertical vorticity. Forecasts of four severe convective weather events were considered. Probability fields were computed from the model data, and forecast skill was quantified using rotation track data, storm report data, and a neighborhood-based verification approach. The ability to adjust the rotation track threshold for verification purposes was shown to be an advantage of the rotation track data over the storm reports, because the reports are inherently binary observations whereas the rotation tracks are based on values of Doppler velocity shear. These results encourage further work on incorporating observed rotation track data into the forecasting and verification of severe weather events.
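
To make the neighborhood-based probability idea concrete, here is a minimal, hypothetical Python sketch: an ensemble probability of supercell occurrence built from updraft-helicity exceedances with a neighborhood maximum, scored against a binarized rotation-track field with a Brier score. The thresholds, neighborhood size, and function names are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def neighborhood_probability(uh, uh_thresh=75.0, radius_pts=10):
    """uh: (n_members, ny, nx) updraft helicity (m^2 s^-2); threshold is illustrative."""
    exceed = (uh >= uh_thresh).astype(float)      # per-member binary supercell proxy
    # spread each member's exceedances over a square neighborhood (neighborhood maximum)
    footprint = np.ones((1, 2 * radius_pts + 1, 2 * radius_pts + 1))
    smoothed = maximum_filter(exceed, footprint=footprint)
    return smoothed.mean(axis=0)                  # ensemble fraction = probability

def brier_score(prob, rotation_track, rot_thresh=0.005):
    """rotation_track: observed azimuthal-shear field; the threshold can be varied,
    which is the flexibility the abstract highlights."""
    obs = (rotation_track >= rot_thresh).astype(float)
    return float(np.mean((prob - obs) ** 2))
```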

Full access
Nathan M. Hitchens, Robert J. Trapp, Michael E. Baldwin, and Alexander Gluhovsky

Abstract

This research establishes a methodology to quantify the characteristics of convective cloud systems that produce subdiurnal extreme precipitation. Subdiurnal extreme precipitation events are identified by examining hourly precipitation data from 48 rain gauges in the midwestern United States during the period 1956–2005. Time series of precipitation accumulations for 6-h periods are fitted to the generalized Pareto distribution to determine the 10-yr return levels for the stations. An extreme precipitation event is one in which precipitation exceeds the 10-yr return level over a 6-h period. Return levels in the Midwest vary between 54 and 93 mm for 6-h events. Most of the precipitation contributing to these events falls within 1–2 h. Characteristics of the precipitating systems responsible for the extremes are derived from the National Centers for Environmental Prediction stage II and stage IV multisensor precipitation data. The precipitating systems are treated as objects that are identified using an automated procedure. Characteristics considered include object size and the precipitation mean, variance, and maximum within each object. For example, object sizes vary between 96 and 34 480 km², suggesting that a wide variety of convective precipitating systems can produce subdiurnal extreme precipitation.
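
As a rough illustration of the peaks-over-threshold step described above, the following hypothetical Python sketch fits a generalized Pareto distribution to 6-h accumulations above a high threshold and inverts it for a 10-yr return level. The threshold quantile, variable names, and exceedance-rate bookkeeping are assumptions for illustration, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.stats import genpareto

def return_level_10yr(accum_6h_mm, threshold_quantile=0.99, periods_per_year=1460):
    """accum_6h_mm: 1-D array of 6-h accumulations for one gauge (mm).
    periods_per_year: non-overlapping 6-h periods in a year (4 * 365)."""
    u = np.quantile(accum_6h_mm, threshold_quantile)
    excesses = accum_6h_mm[accum_6h_mm > u] - u
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    # threshold exceedance rate, in events per year
    n_years = len(accum_6h_mm) / periods_per_year
    lam = len(excesses) / n_years
    # a 10-yr event exceeds the threshold excess quantile with probability 1/(10*lam)
    p = 1.0 / (10.0 * lam)
    return u + genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)
```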

Full access
Matthew S. Wandishin, Michael E. Baldwin, Steven L. Mullen, and John V. Cortinas Jr.

Abstract

Short-range ensemble forecasting is extended to a critical winter weather problem: forecasting precipitation type. Forecast soundings from the operational NCEP Short-Range Ensemble Forecast system are combined with five precipitation-type algorithms to produce probabilistic forecasts from January through March 2002. Thus the ensemble combines model diversity, initial condition diversity, and postprocessing algorithm diversity. All verification numbers are conditioned on both the ensemble and observations recording some form of precipitation. This separates the forecast of type from the yes–no precipitation forecast.

The ensemble is very skillful in forecasting rain and snow but it is only moderately skillful for freezing rain and unskillful for ice pellets. However, even for the unskillful forecasts the ensemble shows some ability to discriminate between the different precipitation types and thus provides some positive value to forecast users. Algorithm diversity is shown to be as important as initial condition diversity in terms of forecast quality, although neither has as big an impact as model diversity. The algorithms have their individual strengths and weaknesses, but no algorithm is clearly better or worse than the others overall.
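
A minimal sketch of how member-times-algorithm diversity might be turned into precipitation-type probabilities, assuming placeholder algorithm callables rather than the five published algorithms: each (member, algorithm) pair casts one vote, and probabilities are relative frequencies over the full set of votes.

```python
from collections import Counter

PTYPES = ("rain", "snow", "freezing_rain", "ice_pellets")

def ptype_probabilities(soundings, algorithms):
    """soundings: iterable of model soundings (one per ensemble member);
    algorithms: iterable of callables mapping a sounding to one of PTYPES.
    Both inputs are hypothetical stand-ins for the SREF members and algorithms."""
    votes = Counter(alg(snd) for snd in soundings for alg in algorithms)
    total = sum(votes.values())
    return {ptype: votes[ptype] / total for ptype in PTYPES}
```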

Full access
John S. Kain, Michael E. Baldwin, Paul R. Janish, Steven J. Weiss, Michael P. Kay, and Gregory W. Carbin

Abstract

Systematic subjective verification of precipitation forecasts from two numerical models is presented and discussed. The subjective verification effort was carried out as part of the 2001 Spring Program, a seven-week collaborative experiment conducted at the NOAA/National Severe Storms Laboratory (NSSL) and the NWS/Storm Prediction Center, with participation from the NCEP/Environmental Modeling Center, the NOAA/Forecast Systems Laboratory, the Norman, Oklahoma, National Weather Service Forecast Office, and Iowa State University. This paper focuses on a comparison of the operational Eta Model and an experimental version of this model run at NSSL; results are limited to precipitation forecasts, although other models and model output fields were verified and evaluated during the program.

By comparing forecaster confidence in model solutions to next-day assessments of model performance, this study yields unique information about the utility of models for human forecasters. It is shown that, when averaged over many forecasts, subjective verification ratings of model performance were consistent with preevent confidence levels. In particular, models that earned higher average confidence ratings were also assigned higher average subjective verification scores. However, confidence and verification scores for individual forecasts were very poorly correlated, that is, forecast teams showed little skill in assessing how “good” individual model forecasts would be. Furthermore, the teams were unable to choose reliably which model, or which initialization of the same model, would produce the “best” forecast for a given period.

The subjective verification methodology used in the 2001 Spring Program is presented as a prototype for more refined and focused subjective verification efforts in the future. The results demonstrate that this approach can provide valuable insight into how forecasters use numerical models. It has great potential as a complement to objective verification scores and can have a significant positive impact on model development strategies.
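
For readers who want to reproduce the kind of confidence-versus-verification comparison described above, a hypothetical sketch: mean pre-event confidence, mean next-day subjective score, and a rank correlation for the per-forecast relationship. The rating arrays and the choice of Spearman correlation are illustrative assumptions, not the program's scoring procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def confidence_vs_verification(confidence, verification):
    """confidence, verification: 1-D arrays of per-forecast ratings on a common scale."""
    confidence = np.asarray(confidence, dtype=float)
    verification = np.asarray(verification, dtype=float)
    rho, pval = spearmanr(confidence, verification)   # per-forecast association
    return {
        "mean_confidence": confidence.mean(),
        "mean_verification": verification.mean(),
        "spearman_rho": rho,
        "p_value": pval,
    }
```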

Full access
Robert J. Trapp, David J. Stensrud, Michael C. Coniglio, Russ S. Schumacher, Michael E. Baldwin, Sean Waugh, and Don T. Conlee

Abstract

The Mesoscale Predictability Experiment (MPEX) was a field campaign conducted 15 May through 15 June 2013 within the Great Plains region of the United States. One of the research foci of MPEX concerned the upscaling effects of deep convective storms on their environment, and how these feed back to the convective-scale dynamics and predictability. Balloon-borne GPS radiosondes, or “upsondes,” were used to sample such environmental feedbacks. Two of the upsonde teams employed dual-frequency sounding systems that allowed for upsonde observations at intervals as fast as 15 min. Because these dual-frequency systems also had the capacity for full mobility during sonde reception, highly adaptive and rapid storm-relative sampling of the convectively modified environment was possible. This article documents the mobile sounding capabilities and unique sampling strategies employed during MPEX.

Full access
John S. Kain, Paul R. Janish, Steven J. Weiss, Michael E. Baldwin, Russell S. Schneider, and Harold E. Brooks

Collaborative activities between operational forecasters and meteorological research scientists have the potential to provide significant benefits to both groups and to society as a whole, yet such collaboration is rare. An exception to this state of affairs is occurring at the National Severe Storms Laboratory (NSSL) and Storm Prediction Center (SPC). Since the SPC moved from Kansas City to the NSSL facility in Norman, Oklahoma, in 1997, collaborative efforts between researchers and forecasters at this facility have begun to flourish. This article presents a historical background for this interaction and discusses some of the factors that have helped this collaboration gain momentum. It focuses on the 2001 Spring Program, a collaborative effort centered on experimental forecasting techniques and numerical model evaluation, as a prototype for organized interactions between researchers and forecasters. In addition, the many tangible and intangible benefits of this unusual working relationship are discussed.

Full access
Jacob R. Carley, Benjamin R. J. Schwedler, Michael E. Baldwin, Robert J. Trapp, John Kwiatkowski, Jeffrey Logsdon, and Steven J. Weiss

Abstract

A feature-specific forecasting method for high-impact weather events that takes advantage of high-resolution numerical weather prediction models and spatial forecast verification methodology is proposed. An application of this method to the prediction of a severe convective storm event is given.
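
As an illustration of the object- or feature-based ingredient common to spatial verification methods, here is a hypothetical sketch that thresholds a forecast field and labels contiguous features, recording size, maximum, and centroid for each. The threshold, minimum size, and choice of field are assumptions for illustration, not the authors' specific algorithm.

```python
import numpy as np
from scipy import ndimage

def identify_objects(field, threshold=25.0, min_size_pts=10):
    """field: 2-D forecast array (e.g., simulated reflectivity in dBZ)."""
    labeled, n = ndimage.label(field >= threshold)    # contiguous exceedance regions
    objects = []
    for i in range(1, n + 1):
        mask = labeled == i
        if mask.sum() >= min_size_pts:                # discard tiny features
            objects.append({
                "size_pts": int(mask.sum()),
                "max_value": float(field[mask].max()),
                "centroid": ndimage.center_of_mass(mask),
            })
    return objects
```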

Full access
John S. Kain, Steven J. Weiss, David R. Bright, Michael E. Baldwin, Jason J. Levit, Gregory W. Carbin, Craig S. Schwartz, Morris L. Weisman, Kelvin K. Droegemeier, Daniel B. Weber, and Kevin W. Thomas

Abstract

During the 2005 NOAA Hazardous Weather Testbed Spring Experiment two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels while the second used 2-km grid spacing and 51 vertical levels.

Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18–30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts.

In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill—the scale of mesoscale convective features—the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.
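
One of the objective metrics mentioned above is the mean diurnal cycle of the forecast precipitation. A hypothetical sketch of that calculation follows, with illustrative array shapes and no claim to match the study's exact processing; comparing the resulting curves for the 2-km and 4-km runs (and for stage IV observations) highlights timing and amplitude differences.

```python
import numpy as np

def mean_diurnal_cycle(precip, valid_hours):
    """precip: (n_forecasts, n_times, ny, nx) hourly accumulations (mm);
    valid_hours: (n_forecasts, n_times) valid hour of day (UTC) for each time."""
    precip = np.asarray(precip)
    valid_hours = np.asarray(valid_hours)
    domain_mean = precip.mean(axis=(2, 3))            # average over the grid
    cycle = np.zeros(24)
    for hour in range(24):
        cycle[hour] = domain_mean[valid_hours == hour].mean()
    return cycle                                      # mm per hour vs. hour of day
```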

Full access
David J. Stensrud, Nusrat Yussouf, Michael E. Baldwin, Jeffery T. McQueen, Jun Du, Binbin Zhou, Brad Ferrier, Geoffrey Manikin, F. Martin Ralph, James M. Wilczak, Allen B. White, Irina Djalalova, Jian-Wen Bao, Robert J. Zamora, Stanley G. Benjamin, Patricia A. Miller, Tracy Lorraine Smith, Tanya Smirnova, and Michael F. Barth

The New England High-Resolution Temperature Program seeks to improve the accuracy of summertime 2-m temperature and dewpoint temperature forecasts in the New England region through a collaborative effort between the research and operational components of the National Oceanic and Atmospheric Administration (NOAA). The four main components of this program are 1) improved surface and boundary layer observations for model initialization, 2) special observations for the assessment and improvement of model physical process parameterization schemes, 3) the use of model forecast ensemble data to improve upon the operational forecasts for near-surface variables, and 4) the transfer of knowledge gained to commercial weather services and end users. Since 2002 this program has enhanced surface temperature observations by adding 70 new automated Cooperative Observer Program (COOP) sites, identified and collected data from over 1000 non-NOAA mesonet sites, and deployed boundary layer profilers and other special instrumentation throughout the New England region to better observe the surface energy budget. Comparisons of these special datasets with numerical model forecasts indicate that near-surface temperature errors are strongly correlated with errors in the model-predicted radiation fields. The attenuation of solar radiation by aerosols is one potential source of the model radiation bias. However, even with these model errors, bias-corrected ensemble forecasts are more accurate than the operational model output statistics (MOS) forecasts for 2-m temperature and dewpoint temperature, while also providing reliable forecast probabilities. Discussions with commercial weather vendors and end users have emphasized the potential economic value of these probabilistic ensemble-generated forecasts.
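
A minimal sketch of the bias-corrected-ensemble idea, assuming a simple exponentially weighted running-mean bias per member and station; the weight, the update rule, and the probability calculation are illustrative assumptions, not the program's operational configuration.

```python
import numpy as np

def update_bias(prev_bias, forecast, observation, weight=0.1):
    """Exponentially weighted running-mean 2-m temperature bias for one member/station."""
    return (1.0 - weight) * prev_bias + weight * (forecast - observation)

def bias_corrected_probability(member_forecasts, member_biases, threshold):
    """Probability that the bias-corrected 2-m temperature exceeds a threshold (deg C)."""
    corrected = np.asarray(member_forecasts) - np.asarray(member_biases)
    return float((corrected > threshold).mean())
```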

Full access