Search Results

Showing items 31–40 of 40 for:

  • Author or Editor: Harold E. Brooks
  • Journal: Weather and Forecasting
Robert J. Trapp, Sarah A. Tessendorf, Elaine Savageau Godfrey, and Harold E. Brooks

Abstract

The primary objective of this study was to estimate the percentage of U.S. tornadoes that are spawned annually by squall lines and bow echoes, or quasi-linear convective systems (QLCSs). This was achieved by examining radar reflectivity images for every tornado event recorded during 1998–2000 in the contiguous United States. Based on these images, the type of storm associated with each tornado was classified as cell, QLCS, or other.

Of the 3828 tornadoes in the database, 79% were produced by cells, 18% were produced by QLCSs, and the remaining 3% were produced by other storm types, primarily rainbands of landfalling tropical cyclones. Geographically, these percentages, as well as those based on tornado days, exhibited wide variations. For example, 50% of the tornado days in Indiana were associated with QLCSs.

In an examination of other tornado attributes, statistically more weak (F1) and fewer strong (F2–F3) tornadoes were associated with QLCSs than with cells. QLCS tornadoes were more probable during the winter months than were cell tornadoes. Finally, QLCS tornadoes displayed a comparatively higher and statistically significant tendency to occur during the late-night/early-morning hours. Further analysis revealed a disproportionate decrease in F0–F1 events during this time of day, which led the authors to propose that many weak QLCS tornadoes (perhaps as many as 12% of the total) were not reported.
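As a hedged illustration (not the study's own code), the cell/QLCS/other breakdown amounts to a simple tally over per-tornado storm-type labels; `storm_type_percentages` is a hypothetical helper:

```python
from collections import Counter

def storm_type_percentages(storm_types):
    """Percentage of tornadoes by parent storm type (cell, QLCS,
    other), as in the 79/18/3 breakdown reported above."""
    counts = Counter(storm_types)
    total = len(storm_types)
    return {kind: 100.0 * n / total for kind, n in counts.items()}
```

The same tally applied to tornado *days* rather than individual tornadoes yields the geographic statistics quoted (e.g., the Indiana QLCS figure).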

Eric C. Ware, David M. Schultz, Harold E. Brooks, Paul J. Roebber, and Sara L. Bruening

Abstract

Accurately forecasting snowfall is a challenge. In particular, one poorly understood component of snowfall forecasting is determining the snow ratio. The snow ratio is the ratio of snowfall to liquid equivalent and is inversely proportional to the snow density. In a previous paper, an artificial neural network was developed to predict snow ratios probabilistically in three classes: heavy (1:1 < ratio < 9:1), average (9:1 ≤ ratio ≤ 15:1), and light (ratio > 15:1). A Web-based application for the probabilistic prediction of snow ratio in these three classes, based on operational forecast model soundings and the neural network, is now available. The goal of this paper is to explore the statistical characteristics of the snow ratio to determine how temperature, liquid equivalent, and wind speed can be used to provide additional guidance (quantitative, wherever possible) for forecasting snowfall, especially for extreme values of snow ratio. Snow ratio tends to increase as the low-level (surface to roughly 850 mb) temperature decreases. For example, mean low-level temperatures greater than −2.7°C rarely (less than 5% of the time) produce snow ratios greater than 25:1, whereas mean low-level temperatures less than −10.1°C rarely produce snow ratios less than 10:1. Snow ratio tends to increase strongly as the liquid equivalent decreases, leading to a nomogram for probabilistically forecasting snowfall, given a forecast value of liquid equivalent. For example, liquid equivalent amounts of 2.8–4.1 mm (0.11–0.16 in.) rarely produce snow ratios less than 14:1, and liquid equivalent amounts greater than 11.2 mm (0.44 in.) rarely produce snow ratios greater than 26:1. The surface wind speed plays a minor role, with snow ratio decreasing as wind speed increases. Although previous research has shown that simple relationships to determine the snow ratio are difficult to obtain, this note helps to clarify some situations where such relationships are possible.
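The three snow ratio classes lend themselves to a short sketch; the function names here are illustrative, not part of the Web application described:

```python
def snow_ratio(snowfall_mm, liquid_equiv_mm):
    """Snow ratio: snowfall depth divided by melted liquid equivalent.
    A 12:1 ratio means 12 mm of snow per 1 mm of liquid."""
    return snowfall_mm / liquid_equiv_mm

def classify_snow_ratio(ratio):
    """Bucket a snow ratio into the three density classes used in the
    note: 'heavy' (dense) snow has a low ratio, 'light' (fluffy) snow
    a high one."""
    if ratio < 9.0:
        return "heavy"    # 1:1 < ratio < 9:1
    elif ratio <= 15.0:
        return "average"  # 9:1 <= ratio <= 15:1
    else:
        return "light"    # ratio > 15:1
```

Note that "heavy" denotes dense, low-ratio snow, which is why it sits at the low end of the ratio scale.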

Stephanie M. Verbout, Harold E. Brooks, Lance M. Leslie, and David M. Schultz

Abstract

Over the last 50 yr, the number of tornadoes reported in the United States has doubled from about 600 per year in the 1950s to around 1200 in the 2000s. This doubling is likely not related to meteorological causes alone. To account for this increase, a simple least squares linear regression was fitted to the annual number of tornado reports. A “big tornado day” is a single day on which numerous tornadoes and/or many tornadoes exceeding a specified intensity threshold are reported anywhere in the country. Because a big tornado day is defined without considering the spatial distribution of the tornadoes, it differs from previous definitions of outbreaks. To address the increase in the number of reports, the number of reports on a given day is compared to the expected number of reports in that year based on the linear regression. In addition, the record of F1 and greater tornadoes was used in defining a big tornado day because the F1 and greater series is more stationary over time than the F2 and greater series. Thresholds were applied to the data to determine the number and intensities of the tornadoes needed for a day to qualify as a big tornado day. Possible threshold values included fractions of the annual expected value from the linear regression and fixed numbers for the intensity criterion. Threshold values of 1.5% of the expected annual total number of tornadoes and/or at least 8 F1 and greater tornadoes identified about 18.1 big tornado days per year. Higher thresholds, such as 2.5% and/or at least 15 F1 and greater tornadoes, showed similar characteristics yet identified approximately 6.2 big tornado days per year. Finally, probability distribution curves generated using kernel density estimation revealed that big tornado days were more likely to occur slightly earlier in the year, and had a narrower distribution, than any given tornado day.
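A minimal sketch of the detrend-and-threshold logic described above, assuming annual counts and per-day tallies are already in hand (the 1.5% and 8 F1+ values come from the abstract; everything else is illustrative):

```python
def linear_trend(years, counts):
    """Ordinary least squares fit of annual tornado report counts
    against year, returning (slope, intercept)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(counts) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, counts))
    slope = sxy / sxx
    return slope, my - slope * mx

def is_big_tornado_day(n_reports, n_f1_plus, expected_annual,
                       frac=0.015, f1_min=8):
    """One version of the thresholds: the day's reports reach a
    fraction of the year's expected (trend-based) total, and/or the
    day has enough F1-and-greater tornadoes."""
    return n_reports >= frac * expected_annual or n_f1_plus >= f1_min
```

Evaluating the trend at a given year supplies `expected_annual`, so the report threshold automatically rises along with the secular increase in reports.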

Michael C. Coniglio, Harold E. Brooks, Steven J. Weiss, and Stephen F. Corfidi

Abstract

The problem of forecasting the maintenance of mesoscale convective systems (MCSs) is investigated through an examination of observed proximity soundings. Furthermore, environmental variables that are statistically different between mature and weakening MCSs are input into a logistic regression procedure to develop probabilistic guidance on MCS maintenance, focusing on warm-season quasi-linear systems that persist for several hours. Between the mature and weakening MCSs, shear vector magnitudes over very deep layers are the best discriminators among hundreds of kinematic and thermodynamic variables. An analysis of the shear profiles reveals that the shear component perpendicular to MCS motion (usually parallel to the leading line) accounts for much of this difference in low levels and the shear component parallel to MCS motion accounts for much of this difference in mid- to upper levels. The lapse rates over a significant portion of the convective cloud layer, the convective available potential energy, and the deep-layer mean wind speed are also very good discriminators and collectively provide a high level of discrimination between the mature and weakening soundings as revealed by linear discriminant analysis. Probabilistic equations developed from these variables used with short-term numerical model output show utility in forecasting the transition of an MCS with a solid line of 50+ dBZ echoes to a more disorganized system with unsteady changes in structure and propagation. This study shows that empirical forecast tools based on environmental relationships still have the potential to provide forecasters with improved information on the qualitative characteristics of MCS structure and longevity. This is especially important since the current and near-term value added by explicit numerical forecasts of convection is still uncertain.
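The probabilistic guidance described is a logistic regression; a generic sketch with placeholder coefficients follows (the paper's fitted coefficients and predictor set are not reproduced here):

```python
import math

def mcs_maintenance_probability(predictors, weights, intercept):
    """Logistic regression of the kind described: environmental
    predictors (e.g., deep-layer shear magnitude, cloud-layer lapse
    rate, CAPE, mean wind speed) mapped to a probability that an MCS
    remains mature. Weights here are placeholders, not the paper's."""
    z = intercept + sum(w * x for w, x in zip(weights, predictors))
    return 1.0 / (1.0 + math.exp(-z))
```

With all predictors at zero (i.e., at the fit's reference point) the probability is 0.5; positive weighted departures push it toward maintenance, negative ones toward weakening.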

Corey K. Potvin, Chris Broyles, Patrick S. Skinner, Harold E. Brooks, and Erik Rasmussen

Abstract

The Storm Prediction Center (SPC) tornado database, generated from NCEI’s Storm Data publication, is indispensable for assessing U.S. tornado risk and investigating tornado–climate connections. Maximizing the value of this database, however, requires accounting for systematically lower reported tornado counts in rural areas owing to a lack of observers. This study uses Bayesian hierarchical modeling to estimate tornado reporting rates and expected tornado counts over the central United States during 1975–2016. Our method addresses a serious solution nonuniqueness issue that may have affected previous studies. The adopted model explains 73% (>90%) of the variance in reported counts at scales of 50 km (>100 km). Population density explains more of the variance in reported tornado counts than other examined geographical covariates, including distance from nearest city, terrain ruggedness index, and road density. The model estimates that approximately 45% of tornadoes within the analysis domain were reported. The estimated tornado reporting rate decreases sharply away from population centers; for example, while >90% of tornadoes that occur within 5 km of a city with population > 100 000 are reported, this rate decreases to <70% at distances of 20–25 km. The method is directly extendable to other events subject to underreporting (e.g., severe hail and wind) and could be used to improve climate studies and tornado and other hazard models for forecasters, planners, and insurance/reinsurance companies, as well as for the development and verification of storm-scale prediction systems.
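The underreporting idea behind such models can be sketched as a binomial thinning: observed counts are true occurrences kept with a covariate-dependent reporting probability. The link function and coefficients below are illustrative assumptions, not the study's posterior estimates:

```python
import math

def reporting_rate(pop_density, b0=-1.0, b1=0.8):
    """Illustrative logistic link from a geographic covariate (here
    log10 population density, persons per km^2) to the probability
    that a tornado is reported. b0 and b1 are made-up values."""
    z = b0 + b1 * math.log10(pop_density + 1.0)
    return 1.0 / (1.0 + math.exp(-z))

def expected_true_count(reported, rate):
    """If reports are a binomial thinning of true occurrences with
    probability `rate`, the expected true count inflates the reported
    count by 1/rate (e.g., 45 reports at a 45% rate imply ~100)."""
    return reported / rate
```

A full hierarchical model would place priors on the link coefficients and share information across grid cells, which is what resolves the nonuniqueness between "few tornadoes, well reported" and "many tornadoes, poorly reported."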

Bryan T. Smith, Richard L. Thompson, Jeremy S. Grams, Chris Broyles, and Harold E. Brooks

Abstract

Radar-based convective modes were assigned to a sample of tornadoes and significant severe thunderstorms reported in the contiguous United States (CONUS) during 2003–11. The significant hail (≥2-in. diameter), significant wind (≥65-kt thunderstorm gusts), and tornadoes were filtered by the maximum event magnitude per hour on a 40-km Rapid Update Cycle model horizontal grid. The filtering process produced 22 901 tornado and significant severe thunderstorm events, representing 78.5% of all such reports in the CONUS during the sample period. The convective mode scheme presented herein begins with three radar-based storm categories: 1) discrete cells, 2) clusters of cells, and 3) quasi-linear convective systems (QLCSs). Volumetric radar data were examined for right-moving supercell (RM) and left-moving supercell characteristics within the three radar reflectivity designations. Additional categories included storms with marginal supercell characteristics and linear hybrids with a mix of supercell and QLCS structures. Smoothed kernel density estimates of events per decade revealed clear geographic and seasonal patterns of convective modes with tornadoes. Discrete and cluster RMs were the favored convective modes for southern Great Plains tornadoes during the spring, while the Deep South displayed the greatest variability in tornadic convective modes in the fall, winter, and spring. The Ohio Valley favored a higher frequency of QLCS tornadoes and a lower frequency of RM tornadoes compared with the Deep South and the Great Plains. Tornadoes with nonsupercellular/non-QLCS storms were more common across Florida and the high plains in the summer. Significant hail events were dominated by Great Plains supercells, while variations in convective modes were largest for significant wind events.
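The smoothed kernel density estimates mentioned above reduce, in one dimension, to a standard Gaussian KDE; this sketch is illustrative only (the study's maps are two-dimensional and scaled to events per decade):

```python
import math

def gaussian_kde_1d(samples, x, bandwidth):
    """Gaussian kernel density estimate at point x from event
    locations `samples`, smoothed with the given bandwidth."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)
```

The bandwidth controls the trade-off between geographic detail and smoothness; the same kernel applied on a latitude–longitude grid produces the event-frequency maps described.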

John L. Cintineo, Travis M. Smith, Valliappa Lakshmanan, Harold E. Brooks, and Kiel L. Ortega

Abstract

The threat of damaging hail from severe thunderstorms affects many communities and industries on a yearly basis, with annual economic losses in excess of $1 billion (U.S. dollars). Past hail climatology has typically relied on the National Oceanic and Atmospheric Administration/National Climatic Data Center’s (NOAA/NCDC) Storm Data publication, which has numerous reporting biases and nonmeteorological artifacts. This research seeks to quantify the spatial and temporal characteristics of contiguous United States (CONUS) hail fall, derived from multiradar multisensor (MRMS) algorithms for several years during the Next-Generation Weather Radar (NEXRAD) era, leveraging the Multiyear Reanalysis of Remotely Sensed Storms (MYRORSS) dataset at NOAA’s National Severe Storms Laboratory (NSSL). The primary MRMS product used in this study is the maximum expected size of hail (MESH). The preliminary climatology includes 42 months of quality controlled and reprocessed MESH grids, which spans the warm seasons for four years (2007–10), covering 98% of all Storm Data hail reports during that time. The dataset has 0.01° latitude × 0.01° longitude × 31 vertical levels spatial resolution, and 5-min temporal resolution. Radar-based and reports-based methods of hail climatology are compared. MRMS MESH demonstrates superior coverage and resolution over Storm Data hail reports, and is largely unbiased. The results reveal a broad maximum of annual hail fall in the Great Plains and a diminished secondary maximum in the Southeast United States. Potential explanations for the differences in the two methods of hail climatology are also discussed.
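A hedged sketch of the kind of coverage statistic quoted above (the 98% figure), assuming reports and radar (MESH) detections have already been mapped to common grid-cell/time keys; the key format is a made-up example:

```python
def report_coverage(report_cells, mesh_cells):
    """Fraction of hail-report grid cells that also carry a radar
    (MESH) detection. Cells are hashable keys, e.g. hypothetical
    (lat_index, lon_index, hour) tuples on a common grid."""
    mesh = set(mesh_cells)
    hits = sum(1 for cell in report_cells if cell in mesh)
    return hits / len(report_cells)
```

Comparing the same grid in the other direction (MESH cells with no report) is what exposes the rural reporting gaps that make the radar-based climatology less biased than Storm Data.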

Elizabeth E. Ebert, Laurence J. Wilson, Barbara G. Brown, Pertti Nurmi, Harold E. Brooks, John Bally, and Matthias Jaeneke

Abstract

The verification phase of the World Weather Research Programme (WWRP) Sydney 2000 Forecast Demonstration Project (FDP) was intended to measure the skill of the participating nowcast algorithms in predicting the location of convection, rainfall rate and occurrence, wind speed and direction, severe thunderstorm wind gusts, and hail location and size. An additional question of interest was whether forecasters could improve the quality of the nowcasts compared to the FDP products alone.

The nowcasts were verified using a variety of statistical techniques. Observational data came from radar reflectivity and rainfall analyses, a network of rain gauges, and human (spotter) observations. The verification results showed that the cell tracking algorithms predicted the location of the strongest cells with a mean error of about 15–30 km for a 1-h forecast, and were usually more accurate than an extrapolation (Lagrangian persistence) forecast. Mean location errors for the area tracking schemes were on the order of 20 km.

Almost all of the algorithms successfully predicted the frequency of rain throughout the forecast period, although most underestimated the frequency of high rain rates. The skill in predicting rain occurrence decreased very quickly into the forecast period. In particular, the algorithms could not predict the precise location of heavy rain beyond the first 10–20 min. Using radar analyses as verification, the algorithms' spatial forecasts were consistently more skillful than simple persistence. However, when verified against rain gauge observations at point locations, the algorithms had difficulty beating persistence, mainly due to differences in spatial and temporal resolution.

Only one algorithm attempted to forecast gust fronts. The results for a limited sample showed a mean absolute error of 7 km h⁻¹ and a mean bias of 3 km h⁻¹ in the speed of the gust fronts during the FDP. The errors in sea-breeze front forecasts were half as large, with essentially no bias. Verification of the hail associated with the 3 November tornadic storm showed that the two algorithms that estimated hail size and occurrence successfully diagnosed the onset and cessation of the hail to within 30 min of the reported sightings. The time evolution of hail size was reasonably well captured by the algorithms, and the predicted mean and maximum hail diameters were consistent with the observations.
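The mean absolute error and mean bias quoted for the gust-front speeds are standard verification quantities; a minimal sketch over paired forecast/observed values:

```python
def mae_and_bias(forecast, observed):
    """Mean absolute error and mean bias of paired forecasts and
    observations (e.g., gust-front speeds in km/h). MAE measures
    typical error size; bias measures systematic over/underforecast."""
    errors = [f - o for f, o in zip(forecast, observed)]
    mae = sum(abs(e) for e in errors) / len(errors)
    bias = sum(errors) / len(errors)
    return mae, bias
```

A positive bias with a larger MAE, as reported for the gust fronts, indicates the fronts were forecast somewhat too fast on average, with additional random error on top.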

The Thunderstorm Interactive Forecast System (TIFS) allowed forecasters to modify the output of the cell tracking nowcasts, primarily using it to remove cells that were insignificant or diagnosed with incorrect motion. This manual filtering resulted in markedly reduced mean cell position errors when compared to the unfiltered forecasts. However, when forecasters attempted to adjust the storm tracks for a small number of well-defined intense cells, the position errors increased slightly, suggesting that in such cases the objective guidance is probably the best estimate of storm motion.

Ariel E. Cohen, Steven M. Cavallo, Michael C. Coniglio, Harold E. Brooks, and Israel L. Jirak

Abstract

Southeast U.S. cold season severe weather events can be difficult to predict because of the marginality of the supporting thermodynamic instability in this regime. The sensitivity of this environment to prognoses of instability encourages additional research on ways in which mesoscale models represent turbulent processes within the lower atmosphere that directly influence thermodynamic profiles and forecasts of instability. This work summarizes characteristics of the southeast U.S. cold season severe weather environment and the planetary boundary layer (PBL) parameterization schemes used in mesoscale modeling, and proceeds with a focused investigation of the performance of nine different representations of the PBL in this environment by comparing simulated thermodynamic and kinematic profiles to observationally influenced ones. It is demonstrated that the Asymmetric Convective Model, version 2 (ACM2), scheme, which simultaneously represents both nonlocal and local mixing, has the lowest overall errors for the southeast U.S. cold season tornado regime. For storm-relative helicity, strictly nonlocal schemes provide the largest overall differences from the observationally influenced datasets (an underforecast). Meanwhile, strictly local schemes yield the most extreme mean differences from these datasets (also an underforecast) for the low-level lapse rate and the depth of the PBL. A hybrid local–nonlocal scheme is found to mitigate these extremes. These findings are traced to a tendency for local schemes to incompletely mix the PBL and for nonlocal schemes to overmix it, whereas hybrid schemes represent more intermediate mixing in a regime where vertical shear enhances mixing and limited instability suppresses it.
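Ranking PBL schemes against observationally influenced profiles comes down to bias and root-mean-square error at matched vertical levels; a minimal illustrative sketch (not the study's verification code):

```python
def profile_mean_error(simulated, observed):
    """Bias and RMSE between a simulated and an observed vertical
    profile (e.g., temperature in degrees C) sampled at the same
    levels. Bias exposes systematic under/overmixing; RMSE exposes
    overall misfit."""
    diffs = [s - o for s, o in zip(simulated, observed)]
    bias = sum(diffs) / len(diffs)
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return bias, rmse
```

Applied per scheme and per variable (lapse rate, storm-relative helicity, PBL depth), statistics like these support the local-versus-nonlocal-versus-hybrid comparison summarized above.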

Josué U. Chamberlain, Matthew D. Flournoy, Makenzie J. Krocak, Harold E. Brooks, and Alexandra K. Anderson-Frey

Abstract

The National Weather Service (NWS) plays a critical role in alerting the public when dangerous weather occurs. Tornado warnings are one of the most publicly visible products the NWS issues given the large societal impacts tornadoes can have. Understanding the performance of these warnings is crucial for providing adequate warning during tornadic events and improving overall warning performance. This study aims to understand warning performance during the lifetimes of individual storms (specifically in terms of probability of detection and lead time). For example, does probability of detection vary based on whether the tornado was the first produced by the storm or the last? We use tornado outbreak data from 2008 to 2014, archived NEXRAD radar data, and the NWS verification database to associate each tornado report with a storm object. This approach allows for an analysis of warning performance based on the chronological order of tornado occurrence within each storm. Results show that probability of detection and lead time increase with later tornadoes in the storm; the first tornado of each storm is less likely to be warned and on average has less lead time. Probability of detection also decreases overnight, especially for first tornadoes and storms that only produce one tornado. These results are important for understanding how tornado warning performance varies during individual storm life cycles and how upstream forecast products (e.g., Storm Prediction Center tornado watches, mesoscale discussions, etc.) may increase warning confidence for the first tornado produced by each storm.
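A simplified sketch of per-tornado probability of detection and lead time (NWS verification rules are more involved; this sketch counts a tornado as warned only if a warning was issued at or before touchdown):

```python
from datetime import datetime, timedelta

def pod_and_mean_lead_time(tornadoes):
    """Probability of detection (fraction of tornadoes warned in
    advance) and mean lead time in minutes for the warned events.
    Each tornado is (touchdown_time, warning_time_or_None)."""
    warned = [(t, w) for t, w in tornadoes if w is not None and w <= t]
    pod = len(warned) / len(tornadoes)
    leads = [(t - w).total_seconds() / 60.0 for t, w in warned]
    mean_lead = sum(leads) / len(leads) if leads else 0.0
    return pod, mean_lead
```

Grouping tornadoes by parent storm and computing these statistics separately for the first, second, and later tornadoes reproduces the kind of order-dependent comparison the study describes.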

Significance Statement

In this study, we focus on better understanding real-time tornado warning performance on a storm-by-storm basis. This approach allows us to examine how warning performance can change based on the order of each tornado within its parent storm. Using tornado reports, warning products, and radar data during tornado outbreaks from 2008 to 2014, we find that probability of detection and lead time increase with later tornadoes produced by the same storm. In other words, for storms that produce multiple tornadoes, the first tornado is generally the least likely to be warned in advance; when it is warned in advance, its lead time is generally shorter than that of subsequent tornadoes. These findings provide important new analyses of tornado warning performance, particularly for the first tornado of each storm, and will help inform strategies for improving warning performance.
