Browse

Showing items 91–100 of 2,787 for the journal Weather and Forecasting.
George P. Pacey, David M. Schultz, and Luis Garcia-Carreras

Abstract

The frequency of European convective windstorms, the environments in which they form, and their convective organizational modes remain largely unknown. A climatology is produced using 10 233 severe convective-wind reports from the European Severe Weather Database from 2009 to 2018. Severe convective-wind days increased from 50 days yr⁻¹ in 2009 to 117 days yr⁻¹ in 2018, largely because of an increase in reporting. The highest frequency of reports occurred across central Europe, particularly Poland. Reporting was most frequent in summer, when a severe convective windstorm occurred every other day on average. The preconvective environment was assessed using 361 proximity soundings from 45 stations from 2006 to 2018, and a clustering technique was used to distinguish different environments from nine variables. Two environments for severe convective storms emerged: Type 1, generally low-shear–high-CAPE (convective available potential energy), occurring mostly in the warm season, and Type 2, generally high-shear–low-CAPE, occurring mostly in the cold season. Because convective mode often relates to the type of weather hazard, convective organizational mode was studied for 185 windstorms that occurred from 2013 to 2018. In Type-1 environments, the most frequent convective mode was cells, accounting for 58.5% of events, followed by linear modes (29%) and the nonlinear noncellular mode (12.5%). In Type-2 environments, the most frequent convective mode was linear modes (55%), followed by cells (36%) and the nonlinear noncellular mode (9%). Only 10% of windstorms were associated with bow echoes, a much lower percentage than in other studies, suggesting that forecasters should not necessarily wait to see a bow echo before issuing a warning for strong winds.
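
To make the clustering step concrete, the sketch below groups soundings into two environment types with k-means on standardized variables; the synthetic CAPE and shear values, and the choice of k-means itself, are illustrative assumptions rather than the study's actual nine variables or method.

```python
# Minimal sketch of clustering proximity-sounding environments into two types.
# Variable choices and the use of k-means are assumptions for illustration only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_soundings = 361

# Placeholder sounding-derived predictors (CAPE in J kg^-1, deep-layer shear in m s^-1).
X = np.column_stack([
    rng.gamma(2.0, 400.0, n_soundings),   # CAPE
    rng.gamma(2.0, 7.0, n_soundings),     # 0-6-km shear
])

X_std = StandardScaler().fit_transform(X)                    # common scale for all variables
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

for k in range(2):
    cape, shear = X[labels == k].mean(axis=0)
    print(f"Cluster {k}: mean CAPE ~ {cape:.0f} J/kg, mean shear ~ {shear:.1f} m/s")
```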

Open access
Bryan T. Smith, Richard L. Thompson, Douglas A. Speheger, Andrew R. Dean, Christopher D. Karstens, and Alexandra K. Anderson-Frey

Abstract

The Storm Prediction Center (SPC) has developed a database of damage-surveyed tornadoes in the contiguous United States (2009–17) that relates environmental and radar-derived storm attributes to damage ratings that change during a tornado life cycle. Damage indicators (DIs) and the associated wind speed estimates from tornado damage surveys compiled in the Damage Assessment Toolkit (DAT) dataset were linked to the nearest manual calculations of 0.5° tilt angle maximum rotational velocity (Vrot) from single-site WSR-88D data. For each radar scan, the maximum wind speed from the highest-rated DI, Vrot, and the significant tornado parameter (STP) from the SPC hourly objective mesoscale analysis archive were recorded and analyzed. Results from examining Vrot and STP data indicate an increasing conditional probability for higher-rated DIs (i.e., EF-scale wind speed estimates) as both STP and Vrot increase. This work suggests that tornadic wind speed exceedance probabilities can be estimated in real time, on a scan-by-scan basis, via Vrot and STP for ongoing tornadoes.
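
A minimal sketch of how such conditional exceedance probabilities could be estimated by binning Vrot and STP is given below; the synthetic data, bin edges, and the 111-mph threshold are assumptions for illustration only, not the study's procedure.

```python
# Sketch of estimating conditional exceedance probabilities from Vrot and STP bins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "vrot_kt": rng.uniform(20, 110, n),      # rotational velocity per scan (kt)
    "stp": rng.gamma(1.5, 2.0, n),           # significant tornado parameter
})
# Synthetic DI-estimated wind speed that loosely increases with Vrot and STP.
df["di_wind_mph"] = 60 + 0.8 * df["vrot_kt"] + 3.0 * df["stp"] + rng.normal(0, 15, n)

# Conditional probability of exceeding an EF2-level wind estimate (111 mph, assumed)
# within joint Vrot/STP bins.
df["vrot_bin"] = pd.cut(df["vrot_kt"], bins=[20, 40, 60, 80, 110])
df["stp_bin"] = pd.cut(df["stp"], bins=[0, 2, 6, 30])
prob = (df.groupby(["vrot_bin", "stp_bin"], observed=True)["di_wind_mph"]
          .apply(lambda w: (w >= 111).mean()))
print(prob)
```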

Restricted access
Bryan T. Smith, Richard L. Thompson, Douglas A. Speheger, Andrew R. Dean, Christopher D. Karstens, and Alexandra K. Anderson-Frey

Abstract

A sample of damage-surveyed tornadoes in the contiguous United States (2009–17), containing specific wind speed estimates from damage indicators (DIs) within the Damage Assessment Toolkit dataset, was linked to radar-observed circulations using the nearest WSR-88D data in Part I of this work. The maximum wind speed associated with the highest-rated DI for each radar scan, the corresponding 0.5° tilt angle rotational velocity (Vrot), the significant tornado parameter (STP), and the National Weather Service (NWS) convective impact-based warning (IBW) type are analyzed herein for the sample of cases in Part I and an independent case sample from parts of 2019–20. As Vrot and STP both increase, peak DI-estimated wind speeds and IBW warning type also tend to increase. Different combinations of Vrot, STP, and population density, related to ranges of peak DI wind speed, exhibited a strong ability to discriminate across the tornado damage intensity spectrum. Furthermore, longer durations of high Vrot (i.e., ≥70 kt) in significant tornado environments (i.e., STP ≥ 6) correspond to increasing chances that DIs will reveal the occurrence of an intense tornado (i.e., EF3+). These findings were corroborated via the independent sample from parts of 2019–20 and can be applied in a real-time operational setting to assist in determining a potential range of wind speeds. This work provides evidence-based support for creating an objective and consistent real-time framework for assessing and differentiating tornadoes across the tornado intensity spectrum.
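
The duration criterion can be illustrated with a short sketch that counts consecutive radar scans meeting the Vrot and STP thresholds quoted above; the per-scan values and flagging logic are placeholders, not the study's method.

```python
# Minimal sketch of flagging sustained high rotational velocity in a
# significant-tornado environment (Vrot >= 70 kt and STP >= 6, as in the abstract).
import numpy as np

vrot_kt = np.array([45, 62, 71, 78, 83, 75, 69, 55])   # per-scan Vrot (kt), placeholder
stp     = np.array([5,  6,  7,  7,  8,  7,  6,  5])    # analyzed STP per scan, placeholder

strong = (vrot_kt >= 70) & (stp >= 6)

# Length of the longest run of consecutive "strong" scans.
longest = run = 0
for flag in strong:
    run = run + 1 if flag else 0
    longest = max(longest, run)

print(f"Longest run of scans with Vrot >= 70 kt and STP >= 6: {longest}")
```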

Restricted access
Yiwen Mao and Asgeir Sorteberg

Abstract

A binary classification model is trained by random forest using data from 41 stations in Norway to predict the occurrence of precipitation in a given hour. The predictors consist of results from radar nowcasts and numerical weather predictions. The results demonstrate that the random forest model can improve upon the precipitation predictions from both the radar nowcasts and the numerical weather predictions. This study examines whether certain factors related to model training influence the predictive skill of the random forest method. The results indicate that enforcing a balanced prediction by resampling the training datasets or lowering the threshold probability for classification cannot improve the predictive skill of the random forest model. The study reveals that the predictive skill of the random forest model shows seasonality but is only weakly influenced by the geographic diversity of the training dataset. Finally, the study shows that the most important predictor is the precipitation prediction from the radar nowcasts, followed by the precipitation prediction from the numerical weather predictions. Although meteorological variables other than precipitation are weaker predictors, the results suggest that they can help to reduce the false alarm ratio and to increase the success ratio of the precipitation prediction.
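
A minimal sketch of this kind of random forest precipitation classifier, using scikit-learn on synthetic predictors, is shown below; the predictor set, synthetic data, and model settings are assumptions and do not reproduce the study's configuration.

```python
# Sketch of a binary rain/no-rain classifier with radar-nowcast and NWP predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 10_000
radar_nowcast = rng.gamma(1.2, 0.8, n)   # radar-nowcast precipitation (mm), synthetic
nwp_precip    = rng.gamma(1.2, 0.8, n)   # NWP precipitation (mm), synthetic
humidity      = rng.uniform(40, 100, n)  # an auxiliary meteorological predictor, synthetic

# Synthetic truth: rain is more likely when either precipitation predictor is large.
p_rain = 1 / (1 + np.exp(-(radar_nowcast + nwp_precip - 2.0)))
y = rng.random(n) < p_rain

X = np.column_stack([radar_nowcast, nwp_precip, humidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Test accuracy:", clf.score(X_te, y_te))
print("Feature importances:", dict(zip(["radar", "nwp", "humidity"],
                                       clf.feature_importances_.round(3))))
```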

Restricted access
Kelsey C. Britt, Patrick S. Skinner, Pamela L. Heinselman, and Kent H. Knopfmeier

Abstract

Cyclic mesocyclogenesis is the process by which a supercell produces multiple mesocyclones with similar life cycles. The frequency of cyclic mesocyclogenesis has been linked to tornado potential, with higher frequencies decreasing the potential for tornadogenesis. Thus, the ability to predict the presence and frequency of cycling in supercells may be beneficial to forecasters for assessing tornado potential. However, idealized simulations of cyclic mesocyclogenesis have found it to be highly sensitive to environmental and computational parameters, so whether convection-allowing models can resolve and predict cycling has yet to be determined. This study tests the capability of a storm-scale ensemble prediction system to resolve the cycling process and predict its frequency. Forecasts for three cyclic supercells occurring in May 2017 are generated by NSSL’s Warn-on-Forecast System (WoFS) using 3- and 1-km grid spacing. Rare cases of cyclic-like processes were identified at 3 km, but cycling occurred more frequently at 1 km. WoFS predicted variations in cycling frequency among the storms that were similar to the observed variations. Object-based identification of mesocyclones was used to extract environmental parameters from a storm-relative inflow sector for each mesocyclone. Lower magnitudes of 0–1-km storm-relative helicity and significant tornado parameter are present for the two more frequently cycling supercells, and higher values are present for the case with the fewest cycles. These results provide initial evidence that high-resolution ensemble forecasts can potentially provide useful guidance on the likelihood and cycling frequency of cyclic supercells.
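
Object-based identification of rotation features can be sketched with connected-component labeling of a thresholded gridded field, as below; the field, threshold, and grid are assumptions, not the WoFS object-identification settings.

```python
# Illustrative object identification on a gridded rotation proxy using scipy.ndimage.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
field = rng.normal(0, 10, (100, 100))          # stand-in for a 2-5-km updraft helicity field
field[40:45, 60:66] += 120                     # embed one "mesocyclone-like" object

mask = field >= 60                             # exceedance threshold (assumed)
labels, n_objects = ndimage.label(mask)        # connected-component objects
centroids = ndimage.center_of_mass(mask, labels, list(range(1, n_objects + 1)))

print(f"{n_objects} object(s) found; centroids (j, i): {centroids}")
```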

Restricted access
Irina V. Djalalova, Laura Bianco, Elena Akish, James M. Wilczak, Joseph B. Olson, Jaymes S. Kenyon, Larry K. Berg, Aditya Choukulkar, Richard Coulter, Harinda J. S. Fernando, Eric Grimit, Raghavendra Krishnamurthy, Julie K. Lundquist, Paytsar Muradyan, David D. Turner, and Sonia Wharton

Abstract

The second Wind Forecast Improvement Project (WFIP2) was a multiagency field campaign held in the Columbia Gorge area from October 2015 to March 2017. The main goal of the project was to understand and improve the forecast skill of numerical weather prediction (NWP) models in complex terrain, which is particularly beneficial for the wind energy industry. This region is well known for its excellent wind resource. One of the biggest challenges for wind power production is the accurate forecasting of wind ramp events (large changes in generated power over short periods of time). Poor forecasting of ramps requires large and sudden adjustments in conventional power generation, ultimately increasing the cost of power. A Ramp Tool and Metric (RT&M) was developed during the first WFIP experiment, held in the U.S. Great Plains from September 2011 to August 2012, and was designed to explicitly measure the skill of NWP models at forecasting wind ramp events. Here we apply the RT&M to 80-m (turbine hub height) wind speeds measured by 19 sodars and three lidars, and to forecasts from the High-Resolution Rapid Refresh (HRRR) model, with 3-km horizontal grid spacing, and the High-Resolution Rapid Refresh Nest (HRRRNEST), with 750-m horizontal grid spacing. The diurnal and seasonal distributions of ramp events are analyzed, revealing noticeable diurnal variability in spring and summer but less in fall and especially winter; winter also has fewer ramps than the other seasons. Finally, the skill of the models at forecasting ramp events, including the impact of modifications to the model physical parameterizations, is investigated.
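
A minimal sketch of flagging ramp events as large changes in hub-height wind speed over a short window is given below; the 2-h window, 4 m s⁻¹ threshold, and synthetic time series are assumptions, not the RT&M definition.

```python
# Sketch of detecting wind ramp events in an 80-m wind speed time series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
times = pd.date_range("2016-06-01", periods=48, freq="15min")
wind_80m = pd.Series(8 + np.cumsum(rng.normal(0, 0.6, len(times))), index=times)

steps_per_window = 8            # 2 hours at 15-min resolution (assumed window)
threshold = 4.0                 # change in m/s counted as a ramp (assumed)

delta = wind_80m.diff(steps_per_window)          # change over the trailing 2 hours
ramps = delta[delta.abs() >= threshold]          # up- and down-ramps exceeding the threshold
print(ramps)
```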

Restricted access
Eder P. Vendrasco, Luiz A. T. Machado, Bruno Z. Ribeiro, Edmilson D. Freitas, Rute C. Ferreira, and Renato G. Negri

Abstract

This research explores the benefits of radar data assimilation for short-range weather forecasts in southeastern Brazil using the Weather Research and Forecasting (WRF) Model’s three-dimensional variational data assimilation (3DVAR) system. Different data assimilation options are explored, including the cycling frequency, the number of outer loops, and the use of null-echo assimilation. Initially, four microphysics parameterizations are evaluated (Thompson, Morrison, WSM6, and WDM6). The Thompson parameterization produces the best results, while the other parameterizations generally overestimate the precipitation forecast, especially WDM6. Additionally, the Thompson scheme tends to overestimate snow, while the Morrison scheme overestimates graupel. Regarding the data assimilation options, the results deteriorate and more spurious convection occurs when a higher cycling frequency is used (i.e., 30 min instead of 60 min). The use of two outer loops produces worse precipitation forecasts than the use of one outer loop, and null-echo assimilation is shown to be an effective way to suppress spurious convection. However, in some cases, null-echo assimilation also removes convective clouds that are not observed by the radar and/or are not yet producing rain but have the potential to grow into intense convective clouds with heavy rainfall. Finally, a convective cloud mask was implemented using ancillary satellite data to prevent null-echo assimilation from removing potential convective clouds. The mask was demonstrated to be beneficial in some circumstances, but it needs to be evaluated carefully in more cases to reach a more robust conclusion regarding its use.
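
The idea of screening null-echo points with a satellite cloud mask can be sketched as below; the brightness temperature threshold, reflectivity cutoff, and synthetic fields are assumptions and do not represent the WRF 3DVAR implementation.

```python
# Sketch of keeping null-echo (no-precipitation) points for assimilation only
# where a satellite-based cloud mask shows no cold cloud tops.
import numpy as np

rng = np.random.default_rng(5)
reflectivity = rng.uniform(-10, 55, (50, 50))    # radar composite reflectivity (dBZ), synthetic
tb_ir = rng.uniform(200, 290, (50, 50))          # satellite IR brightness temperature (K), synthetic

null_echo = reflectivity < 5.0                    # candidate "no precipitation" points (assumed cutoff)
cold_cloud = tb_ir < 235.0                        # possible developing convection (assumed threshold)

# Retain null-echo assimilation only away from cold cloud, so potential
# convective clouds are not suppressed.
assimilate_null = null_echo & ~cold_cloud
print("Null-echo points retained:", int(assimilate_null.sum()), "of", int(null_echo.sum()))
```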

Restricted access
Barry H. Lynn, Seth Cohen, Leonard Druyan, Adam S. Phillips, Dennis Shea, Haim-Zvi Krugliak, and Alexander P. Khain

Abstract

A large set of deterministic and ensemble forecasts was produced to identify the optimal grid spacing for forecasting U.S. East Coast snowstorms. WRF forecasts were produced on cloud-allowing (~1-km grid spacing) and convection-allowing (3–4-km) grids and compared against forecasts with parameterized convection (≳10-km grid spacing). Performance diagrams were used to evaluate 19 deterministic forecasts from the winter of 2013–14. Ensemble forecasts of five disruptive snowstorms spanning the years 2015–18 were evaluated using various probabilistic verification methods. While deterministic forecasts on cloud-allowing grids were not better than convection-allowing forecasts, both had lower bias and higher success ratios than forecasts with parameterized convection. All forecasts were underdispersive; nevertheless, forecasts on the higher-resolution grids were more reliable than those with parameterized convection. Forecasts on the cloud-allowing grid were best able to discriminate between areas that received heavy snow and those that did not, while the forecasts with parameterized convection were least able to do so. It is recommended to use convection-resolving grids and, if computationally possible, cloud-allowing forecast grids when predicting East Coast winter storms.
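
The quantities plotted on a performance diagram can be computed from a 2 × 2 contingency table, as in the sketch below; the counts are placeholders.

```python
# Ingredients of a performance diagram: POD, success ratio, frequency bias, CSI.
hits, misses, false_alarms = 120, 40, 60   # hypothetical forecast/observation counts

pod = hits / (hits + misses)                       # probability of detection
success_ratio = hits / (hits + false_alarms)       # 1 - false alarm ratio
bias = (hits + false_alarms) / (hits + misses)     # frequency bias
csi = hits / (hits + misses + false_alarms)        # critical success index

print(f"POD={pod:.2f}  SR={success_ratio:.2f}  Bias={bias:.2f}  CSI={csi:.2f}")
```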

Restricted access
Brett Roberts, Burkely T. Gallo, Israel L. Jirak, Adam J. Clark, David C. Dowell, Xuguang Wang, and Yongming Wang

Abstract

The High Resolution Ensemble Forecast v2.1 (HREFv2.1), an operational convection-allowing model (CAM) ensemble, is an “ensemble of opportunity” wherein forecasts from several independently designed deterministic CAMs are aggregated and postprocessed together. Multiple dimensions of diversity in the HREFv2.1 ensemble membership contribute to ensemble spread, including model core, physics parameterization schemes, initial conditions (ICs), and time lagging. In this study, HREFv2.1 forecasts are compared against the High Resolution Rapid Refresh Ensemble (HRRRE) and the Multiscale data Assimilation and Predictability (MAP) ensemble, two experimental CAM ensembles that ran during the 5-week Spring Forecasting Experiment (SFE) in spring 2018. The HRRRE and MAP are formally designed ensembles with spread achieved primarily through perturbed ICs. Verification in this study focuses on composite radar reflectivity and updraft helicity to assess ensemble performance in forecasting convective storms. The HREFv2.1 shows the highest overall skill for these forecasts, matching subjective real-time impressions from SFE participants. Analysis of the skill and variance of ensemble member forecasts suggests that the HREFv2.1 exhibits greater spread and more effectively samples model uncertainty than the HRRRE or MAP. These results imply that to optimize skill in forecasting convective storms at 1–2-day lead times, future CAM ensembles should employ either diverse membership designs or sophisticated perturbation schemes capable of representing model uncertainty with comparable efficacy.
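
One common way to quantify how well an ensemble samples uncertainty is to compare ensemble spread with ensemble-mean error, as in the sketch below; the synthetic field and member perturbations are assumptions, not HREFv2.1, HRRRE, or MAP output.

```python
# Sketch comparing ensemble spread with ensemble-mean RMSE for a gridded field.
import numpy as np

rng = np.random.default_rng(6)
truth = rng.normal(30, 10, (120, 120))                      # "observed" field (e.g., dBZ), synthetic
members = truth + rng.normal(0, 6, (10, 120, 120))          # 10 perturbed ensemble members, synthetic

ens_mean = members.mean(axis=0)
spread = members.std(axis=0, ddof=1).mean()                  # mean ensemble spread
rmse = np.sqrt(((ens_mean - truth) ** 2).mean())             # ensemble-mean RMSE

# For a well-dispersive ensemble, spread and ensemble-mean RMSE are comparable.
print(f"spread ~ {spread:.2f}, RMSE ~ {rmse:.2f}")
```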

Restricted access
Robert G. Fovell and Alex Gallagher

Abstract

While numerical weather prediction models have made considerable progress in forecast skill, less attention has been paid to the planetary boundary layer. This study leverages High-Resolution Rapid Refresh (HRRR) forecasts on native levels, 1-s radiosonde data, and (primarily airport) surface observations across the conterminous United States. We construct temporally and spatially averaged composites of wind speed and potential temperature in the lowest 1 km for selected months to identify systematic errors in both forecasts and observations in this critical layer. We find near-surface temperature and wind speed predictions to be skillful, although wind biases were negatively correlated with observed speed, and temperature biases revealed a robust relationship with station elevation. Above ≈250 m above ground level, below which radiosonde wind data were apparently contaminated by processing, biases were small for wind speed and potential temperature at the analysis time (which incorporates sonde data) but became substantial by the 24-h forecast. Wind biases were positive through the layer for both 0000 and 1200 UTC, and morning potential temperature profiles were marked by excessively steep lapse rates that persisted across seasons and were (again) exaggerated at higher-elevation sites. While the source or cause of these systematic errors is not fully understood, this analysis highlights areas for potential model improvement and the need for a continued and accessible archive of the data that make analyses like this possible.
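
A minimal sketch of building a composite bias profile (forecast minus observed) in the lowest 1 km is given below; the heights, sample sizes, and data are placeholder assumptions, not the HRRR or radiosonde datasets used in the study.

```python
# Sketch of a composite wind speed bias profile in the lowest 1 km.
import numpy as np

rng = np.random.default_rng(7)
n_soundings, n_levels = 200, 40
z = np.linspace(10, 1000, n_levels)                       # height AGL (m), placeholder levels

obs = 5 + 3 * np.log1p(z / 100) + rng.normal(0, 1.0, (n_soundings, n_levels))   # synthetic observed winds
fcst = obs + 0.5 + rng.normal(0, 0.5, (n_soundings, n_levels))                   # synthetic forecasts, small positive bias

bias_profile = (fcst - obs).mean(axis=0)                  # composite bias at each level
for zi, bi in zip(z[::10], bias_profile[::10]):
    print(f"z = {zi:6.0f} m AGL: bias = {bi:+.2f} m/s")
```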

Restricted access