Browse

Showing items 121–130 of 2,813 in Weather and Forecasting.
Kelsey C. Britt, Patrick S. Skinner, Pamela L. Heinselman, and Kent H. Knopfmeier

Abstract

Cyclic mesocyclogenesis is the process by which a supercell produces multiple mesocyclones with similar life cycles. The frequency of cyclic mesocyclogenesis has been linked to tornado potential, with higher frequencies decreasing the potential for tornadogenesis. Thus, the ability to predict the presence and frequency of cycling in supercells may help forecasters assess tornado potential. However, idealized simulations of cyclic mesocyclogenesis have found it to be highly sensitive to environmental and computational parameters, so whether convection-allowing models can resolve and predict cycling has yet to be determined. This study tests the capability of a storm-scale ensemble prediction system to resolve the cycling process and predict its frequency. Forecasts for three cyclic supercells occurring in May 2017 are generated by NSSL’s Warn-on-Forecast System (WoFS) using 3- and 1-km grid spacing. Rare cases of cyclic-like processes were identified at 3 km, but cycling occurred more frequently at 1 km. WoFS predicted variations in cycling frequency among the storms that were similar to the observed variations. Object-based identification of mesocyclones was used to extract environmental parameters from a storm-relative inflow sector for each mesocyclone. Lower magnitudes of 0–1-km storm-relative helicity and significant tornado parameter are present for the two more frequently cycling supercells, and higher values are present for the case with the fewest cycles. These results provide initial evidence that high-resolution ensemble forecasts can potentially provide useful guidance on the likelihood and frequency of cycling in supercells.
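
For readers unfamiliar with the diagnostic, 0–1-km storm-relative helicity is the layer integral of the storm-relative wind dotted with the horizontal vorticity. A minimal Python sketch of the standard discrete (layer cross product) form follows; it assumes a profile sorted upward from the surface and a supplied storm motion, and is not the WoFS object-extraction code.

    import numpy as np

    def srh_0_1km(z, u, v, cx, cy):
        """0-1-km storm-relative helicity (m^2 s^-2).
        z: height AGL (m), ascending; u, v: wind components (m/s);
        (cx, cy): storm motion (m/s). Illustrative helper only."""
        mask = z <= 1000.0
        us, vs = u[mask] - cx, v[mask] - cy   # storm-relative wind
        # Layer-by-layer cross-product form of the helicity integral;
        # positive for winds veering with height relative to the storm.
        return float(np.sum(us[1:] * vs[:-1] - us[:-1] * vs[1:]))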

Restricted access
Irina V. Djalalova, Laura Bianco, Elena Akish, James M. Wilczak, Joseph B. Olson, Jaymes S. Kenyon, Larry K. Berg, Aditya Choukulkar, Richard Coulter, Harinda J. S. Fernando, Eric Grimit, Raghavendra Krishnamurthy, Julie K. Lundquist, Paytsar Muradyan, David D. Turner, and Sonia Wharton

Abstract

The second Wind Forecast Improvement Project (WFIP2) was a multiagency field campaign held in the Columbia Gorge area (October 2015–March 2017). The main goal of the project was to understand and improve the forecast skill of numerical weather prediction (NWP) models in complex terrain, which is particularly beneficial for the wind energy industry. This region is well known for its excellent wind resource. One of the biggest challenges for wind power production is the accurate forecasting of wind ramp events (large changes in generated power over short periods of time). Poorly forecast ramps require large and sudden adjustments in conventional power generation, ultimately increasing the cost of power. A Ramp Tool and Metric (RT&M), designed to explicitly measure the skill of NWP models at forecasting wind ramp events, was developed during the first WFIP experiment, held in the U.S. Great Plains (September 2011–August 2012). Here we apply the RT&M to 80-m (turbine hub height) wind speeds measured by 19 sodars and three lidars, and to forecasts from the 3-km High-Resolution Rapid Refresh (HRRR) and the 750-m High-Resolution Rapid Refresh Nest (HRRRNEST) models. The diurnal and seasonal distributions of ramp events are analyzed, revealing noticeable diurnal variability in spring and summer but less in fall and especially winter; winter also has fewer ramps than the other seasons. Finally, the skill of the models at forecasting ramp events, including the impact of modifications to their physical parameterizations, is investigated.
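
As a rough illustration of what a "ramp event" means operationally (the RT&M itself scores ramps over many window lengths and amplitude thresholds), a toy detector might flag any change in capacity-normalized power exceeding a fixed fraction within a fixed window. The function and parameter values below are hypothetical, not the RT&M definition.

    import numpy as np

    def find_ramps(power, dt_min=15, window_min=60, frac=0.5):
        """power: capacity-normalized array sampled every dt_min minutes.
        Returns window-start indices of up-ramps and down-ramps."""
        w = window_min // dt_min              # samples per window
        dp = power[w:] - power[:-w]           # change across each window
        return np.where(dp >= frac)[0], np.where(dp <= -frac)[0]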

Restricted access
Eder P. Vendrasco, Luiz A. T. Machado, Bruno Z. Ribeiro, Edmilson D. Freitas, Rute C. Ferreira, and Renato G. Negri

Abstract

This research explores the benefits of radar data assimilation for short-range weather forecasts in southeastern Brazil using the Weather Research and Forecasting (WRF) Model’s three-dimensional variational data assimilation (3DVAR) system. Different data assimilation options are explored, including the cycling frequency, the number of outer loops, and the use of null-echo assimilation. Initially, four microphysics parameterizations are evaluated (Thompson, Morrison, WSM6, and WDM6). The Thompson parameterization produces the best results, while the other parameterizations generally overestimate the precipitation forecast, especially WDM6. Additionally, the Thompson scheme tends to overestimate snow, while the Morrison scheme overestimates graupel. Regarding the data assimilation options, the results deteriorate and more spurious convection occurs when a higher cycling frequency is used (i.e., 30 min instead of 60 min). The use of two outer loops produces worse precipitation forecasts than the use of one outer loop, and null-echo assimilation is shown to be an effective way to suppress spurious convection. However, in some cases, null-echo assimilation also removes convective clouds that are not observed by the radar and/or are not yet producing rain but have the potential to grow into intense convective clouds with heavy rainfall. Finally, a convective cloud mask was implemented using ancillary satellite data to prevent null-echo assimilation from removing potential convective clouds. The mask was shown to be beneficial in some circumstances, but it needs to be evaluated in more cases to support a robust conclusion regarding its use.
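
The convective-mask idea can be sketched as a simple conditional: assimilate a null echo only where the radar sees no precipitation and ancillary satellite imagery shows no deep cloud. The sketch below is illustrative only; the threshold values and names are assumptions, not the authors' implementation.

    import numpy as np

    def null_echo_ok(refl_dbz, ir_bt_k, refl_max=5.0, bt_min=235.0):
        """True where a null echo may safely be assimilated."""
        no_echo = refl_dbz < refl_max         # radar: no precipitation echo
        no_deep_cloud = ir_bt_k > bt_min      # satellite: no cold cloud tops
        return no_echo & no_deep_cloud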

Restricted access
Barry H. Lynn, Seth Cohen, Leonard Druyan, Adam S. Phillips, Dennis Shea, Haim-Zvi Krugliak, and Alexander P. Khain

Abstract

A large set of deterministic and ensemble forecasts was produced to identify the optimal grid spacing for forecasting U.S. East Coast snowstorms. WRF forecasts were produced on cloud-allowing (~1-km grid spacing) and convection-allowing (3–4 km) grids and compared against forecasts with parameterized convection (>~10 km). Performance diagrams were used to evaluate 19 deterministic forecasts from the winter of 2013–14. Ensemble forecasts of five disruptive snowstorms spanning the years 2015–18 were evaluated using various probabilistic verification methods. While deterministic forecasts on cloud-allowing grids were not better than convection-allowing forecasts, both had lower bias and higher success ratios than forecasts with parameterized convection. All forecasts were underdispersive; nevertheless, forecasts on the higher-resolution grids were more reliable than those with parameterized convection. Forecasts on the cloud-allowing grid were best able to discriminate between areas that received heavy snow and those that did not, while forecasts with parameterized convection were least able to do so. We therefore recommend using convection-allowing grids, and cloud-allowing grids where computationally feasible, when predicting East Coast winter storms.
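
The quantities plotted on a performance diagram (Roebber 2009) all derive from the 2x2 contingency table of forecast/observed events; a minimal sketch, not the authors' verification code:

    def performance_stats(hits, misses, false_alarms):
        """POD, success ratio, frequency bias, and CSI from event counts."""
        pod = hits / (hits + misses)                    # probability of detection
        sr = hits / (hits + false_alarms)               # success ratio = 1 - FAR
        bias = (hits + false_alarms) / (hits + misses)  # frequency bias
        csi = hits / (hits + misses + false_alarms)     # critical success index
        return pod, sr, bias, csi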

Restricted access
Brett Roberts, Burkely T. Gallo, Israel L. Jirak, Adam J. Clark, David C. Dowell, Xuguang Wang, and Yongming Wang

Abstract

The High Resolution Ensemble Forecast v2.1 (HREFv2.1), an operational convection-allowing model (CAM) ensemble, is an “ensemble of opportunity” wherein forecasts from several independently designed deterministic CAMs are aggregated and postprocessed together. Multiple dimensions of diversity in the HREFv2.1 ensemble membership contribute to ensemble spread, including model core, physics parameterization schemes, initial conditions (ICs), and time lagging. In this study, HREFv2.1 forecasts are compared against the High Resolution Rapid Refresh Ensemble (HRRRE) and the Multiscale data Assimilation and Predictability (MAP) ensemble, two experimental CAM ensembles that ran during the 5-week Spring Forecasting Experiment (SFE) in spring 2018. The HRRRE and MAP are formally designed ensembles with spread achieved primarily through perturbed ICs. Verification in this study focuses on composite radar reflectivity and updraft helicity to assess ensemble performance in forecasting convective storms. The HREFv2.1 shows the highest overall skill for these forecasts, matching subjective real-time impressions from SFE participants. Analysis of the skill and variance of ensemble member forecasts suggests that the HREFv2.1 exhibits greater spread and more effectively samples model uncertainty than the HRRRE or MAP. These results imply that to optimize skill in forecasting convective storms at 1–2-day lead times, future CAM ensembles should employ either diverse membership designs or sophisticated perturbation schemes capable of representing model uncertainty with comparable efficacy.
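
Updraft helicity, one of the two verification fields here, is commonly computed as the vertical integral of updraft speed times vertical vorticity over 2–5 km AGL. A generic column-wise sketch follows (not the diagnostic code used by these ensembles):

    import numpy as np

    def updraft_helicity(z, w, zeta, z_bot=2000.0, z_top=5000.0):
        """z: height AGL (m); w: vertical velocity (m/s); zeta: vertical
        vorticity (1/s). Returns UH (m^2 s^-2) for one model column."""
        lay = (z >= z_bot) & (z <= z_top)
        return float(np.trapz(w[lay] * zeta[lay], z[lay]))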

Restricted access
Robert G. Fovell and Alex Gallagher

Abstract

While numerical weather prediction models have made considerable progress in forecast skill, less attention has been paid to the planetary boundary layer. This study leverages High-Resolution Rapid Refresh (HRRR) forecasts on native levels, 1-s radiosonde data, and (primarily airport) surface observations across the conterminous United States. We construct temporally and spatially averaged composites of wind speed and potential temperature in the lowest 1 km for selected months to identify systematic errors in both forecasts and observations in this critical layer. We find near-surface temperature and wind speed predictions to be skillful, although wind speed biases were negatively correlated with observed speed, and temperature biases revealed a robust relationship with station elevation. Above ≈250 m above ground level (below which radiosonde wind data were apparently contaminated by processing), biases were small for wind speed and potential temperature at the analysis time (which incorporates sonde data) but became substantial by the 24-h forecast. Wind biases were positive through the layer at both 0000 and 1200 UTC, and morning potential temperature profiles were marked by excessively steep lapse rates that persisted across seasons and were (again) exaggerated at higher-elevation sites. While the sources of these systematic errors are not fully understood, this analysis highlights areas for potential model improvement and the need for a continued and accessible archive of the data that make analyses like this possible.
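
The composites described here reduce, in essence, to averaging forecast-minus-observed differences over many stations and times at each height. A trivial sketch with assumed array shapes, not the authors' processing code:

    import numpy as np

    def mean_bias_profile(fcst, obs):
        """fcst, obs: (n_samples, n_levels) arrays on common height levels.
        Returns the mean forecast-minus-observed profile, ignoring gaps."""
        return np.nanmean(fcst - obs, axis=0)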

Restricted access
Benjamin C. Trabing and Michael M. Bell

Abstract

The characteristics of official National Hurricane Center (NHC) intensity forecast errors are examined for the North Atlantic and east Pacific basins from 1989 to 2018. It is shown how rapid intensification (RI) and rapid weakening (RW) influence yearly NHC forecast errors for forecasts between 12 and 48 h in length. In addition to lying at the tails of the intensity change distribution, RI and RW lie at the tails of the forecast error distribution. Yearly mean absolute forecast errors are positively correlated with the yearly number of RI/RW occurrences, explaining roughly 20% of the variance in the Atlantic and 30% in the east Pacific. The higher occurrence of RI events in the east Pacific contributes to larger intensity forecast errors overall but also to a better probability of detection and success ratio. Statistically significant improvements to 24-h RI forecast biases have been made in the east Pacific and to 24-h RW biases in the Atlantic. Over-ocean 24-h RW events cause larger mean errors in the east Pacific that have not improved with time. Environmental predictors from the Statistical Hurricane Intensity Prediction Scheme (SHIPS) are used to diagnose which conditions lead to the largest RI and RW forecast errors on average. The forecast error distributions widen for both RI and RW when tropical systems experience low vertical wind shear, warm sea surface temperatures, and moderate low-level relative humidity. Consistent with the existing literature, the forecast error distributions suggest that improvements to our observational capabilities, understanding, and prediction of inner-core processes are paramount to both RI and RW prediction.
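
As a sketch of how RI/RW events are typically flagged from an intensity time series (using the conventional 30-kt-in-24-h threshold for RI; the paper's exact definitions may differ), with hypothetical names:

    import numpy as np

    def flag_ri_rw(vmax_kt, dt_h=6, window_h=24, thresh_kt=30):
        """vmax_kt: best-track intensity array sampled every dt_h hours.
        Returns boolean RI and RW flags at each window-start time."""
        w = window_h // dt_h
        dv = vmax_kt[w:] - vmax_kt[:-w]       # intensity change over the window
        return dv >= thresh_kt, dv <= -thresh_kt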

Restricted access
Steven M. Martinaitis, Benjamin Albright, Jonathan J. Gourley, Sarah Perfater, Tiffany Meyer, Zachary L. Flamig, Robert A. Clark, Humberto Vergara, and Mark Klein

Abstract

The flash flood event of 23 June 2016 devastated portions of West Virginia and west-central Virginia, resulting in 23 fatalities and 5 new record river crests. The flash flooding was part of a multiday event that was classified as a billion-dollar disaster. The 23 June 2016 event occurred during real-time operations by two Hydrometeorology Testbed (HMT) experiments. The Flash Flood and Intense Rainfall (FFaIR) experiment focused on the 6–24-h forecast through the utilization of experimental high-resolution deterministic and ensemble numerical weather prediction and hydrologic model guidance. The HMT Multi-Radar Multi-Sensor Hydro (HMT-Hydro) experiment concentrated on the 0–6-h time frame for the prediction and warning of flash floods primarily through the experimental Flooded Locations and Simulated Hydrographs product suite. This study describes the various model guidance, applications, and evaluations from both testbed experiments during the 23 June 2016 flash flood event. Various model outputs provided a significant precipitation signal that increased the confidence of FFaIR experiment participants to issue a high risk for flash flooding for the region between 1800 UTC 23 June and 0000 UTC 24 June. Experimental flash flood warnings issued during the HMT-Hydro experiment for this event improved the probability of detection and resulted in a 63.8% increase in lead time to 84.2 min. Isolated flash floods in Kentucky demonstrated the potential to reduce the warned area. Participants characterized how different model guidance and analysis products influenced the decision-making process and how the experimental products can help shape future national and local flash flood operations.
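
Warning lead time here is simply the elapsed time from warning issuance to event onset, averaged over verified events. A minimal sketch with assumed datetime inputs, not the HMT verification code:

    import numpy as np

    def mean_lead_time_min(issue_times, onset_times):
        """Paired warning-issue and flash-flood-onset times (datetime64)."""
        lead = np.asarray(onset_times) - np.asarray(issue_times)
        return float(np.mean(lead / np.timedelta64(1, "m")))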

Restricted access
Jeffrey C. Snyder, Howard B. Bluestein, Zachary B. Wienhoff, Charles M. Kuster, and Dylan W. Reif

Abstract

Tornadic supercells moved across parts of Oklahoma on the afternoon and evening of 9 May 2016. One such supercell, while producing a long-lived tornado, was observed by nearby WSR-88D radars to contain a strong anticyclonic velocity couplet on the lowest elevation angle. This couplet was located in a very atypical position relative to the ongoing cyclonic tornado and to the supercell’s updraft. A storm survey team identified damage near where this couplet occurred, and, in the absence of evidence to the contrary, the damage was initially thought to have been produced by an anticyclonic tornado. However, no such tornado was seen in near-ground, high-resolution data from a much closer, rapid-scan, mobile radar. Rather, an elongated velocity couplet was observed only at higher elevation angles, at altitudes similar to those at which the WSR-88D radars observed the strong couplet. This paper examines observations from two WSR-88D radars and a mobile radar, from which it is argued that the anticyclonic couplet, and a similar one ~10 min later, were actually quasi-horizontal vortices centered ~1–1.5 km AGL. The benefits of having data from a radar much closer to the convective storm being sampled (e.g., better spatial resolution and near-ground data coverage) and providing more rapid volume updates are readily apparent. An analysis of these additional radar data provides strong, but not irrefutable, evidence that the anticyclonic tornado that might be inferred from WSR-88D data did not exist; consequently, after discussions with the National Weather Service, it was not included in Storm Data.
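
Why the WSR-88Ds sampled the couplet only well above the ground follows from beam geometry: under the standard 4/3-earth-radius propagation model, even the 0.5° lowest tilt is roughly 1 km AGL by ~80-km range. A generic sketch of that textbook formula (not code from the study):

    import numpy as np

    def beam_height_m(range_m, elev_deg, earth_radius_m=6.371e6):
        """Beam-center height above the radar, 4/3-earth-radius model."""
        ke_re = (4.0 / 3.0) * earth_radius_m  # effective earth radius
        th = np.deg2rad(elev_deg)
        return np.sqrt(range_m**2 + ke_re**2
                       + 2.0 * range_m * ke_re * np.sin(th)) - ke_re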

Free access
Dylan Steinkruger, Paul Markowski, and George Young

Abstract

The utility of employing artificial intelligence (AI) to issue tornado warnings is explored using an ensemble of 128 idealized simulations. Over 700 tornadoes develop within the ensemble of simulations, varying in duration, path length, and associated storm mode. Machine-learning models are trained to forecast the temporal and spatial probabilities of tornado formation for a specific lead time, and these probabilities are used to produce tornado warning decisions for each grid point and lead time. An optimization function is defined such that warning thresholds are modified to optimize the performance of the AI system on a specified metric (e.g., increased lead time or minimized false alarms). Using genetic algorithms, multiple AI systems are developed with different optimization functions, each yielding unique warning output depending on the desired attributes encoded in that function. The effects of the different optimization functions on warning performance are explored. Overall, performance is encouraging and suggests that automated tornado warning guidance is worth exploring with real-time data.
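
The threshold-tuning loop can be caricatured in a few lines: evolve a population of candidate warning thresholds, score each against the chosen metric, and keep the fittest. The sketch below assumes a user-supplied score function (e.g., CSI of the resulting warnings) and is far simpler than the study's multi-parameter optimization.

    import numpy as np
    rng = np.random.default_rng(0)

    def evolve_threshold(score, gens=50, pop=20, sigma=0.05):
        """score(thresh) -> fitness of warnings issued at that threshold."""
        thr = rng.uniform(0, 1, pop)                      # initial population
        for _ in range(gens):
            fit = np.array([score(t) for t in thr])
            parents = thr[np.argsort(fit)[-pop // 2:]]    # keep fittest half
            kids = parents + rng.normal(0, sigma, parents.size)  # mutate
            thr = np.clip(np.concatenate([parents, kids]), 0, 1)
        return thr[np.argmax([score(t) for t in thr])]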

Restricted access