Search Results

Showing items 11–20 of 78 for Author or Editor: Harold E. Brooks
Roger Edwards, Harold E. Brooks, and Hannah Cohn

Abstract

U.S. tornado records form the basis for a variety of meteorological, climatological, and disaster-risk analyses, but how reliable are they in light of changing rating standards, such as the 2007 transition from the Fujita (F) to the enhanced Fujita (EF) damage scale? To what extent are recorded tornado metrics subject to influences that may be nonmeteorological in nature? While addressing these questions with utmost thoroughness is too large a task for any one study, and may not be possible given the many variables and uncertainties involved, some variables that are recorded in large samples are ripe for new examination. We assess basic tornado-path characteristics—damage rating, length, width, and occurrence time, as well as some combined and derived measures—for a 24-yr period with a constant path-width recording standard that also coincides with National Weather Service modernization and the WSR-88D deployment era. The middle of that period (in both time and approximate tornado counts) crosses the official switch from F to EF. At least minor shifts in all assessed path variables are associated directly with that change, contrary to the intent of EF implementation. Major and essentially stepwise expansion of tornadic path widths occurred immediately upon EF adoption, and widths have expanded still further within the EF era. We also document lesser increases in path lengths and in tornadoes rated at least EF1 relative to EF0. These apparently secular changes in the tornado data can affect research dependent on bulk tornado-path characteristics and damage-assessment results.

Full access
Makenzie J. Krocak and Harold E. Brooks

Abstract

While many studies have looked at the quality of forecast products, few have attempted to understand the relationship between them. We begin to consider whether such an influence exists by analyzing storm-based tornado warning product metrics with respect to whether the warnings occurred within a severe weather watch and, if so, what type of watch. The probability of detection, false alarm ratio, and lead time all show a general improvement with increasing watch severity. In fact, the probability of detection increased more with watch-type severity than it did over the entire period of analysis. The false alarm ratio decreased as watch type increased in severity, but with a much smaller magnitude than the difference in probability of detection. Lead time also improved with an increase in watch-type severity. Warnings outside of any watch had a mean lead time of 5.5 min, while those inside a particularly dangerous situation tornado watch had a mean lead time of 15.1 min. These results indicate that the existence and type of severe weather watch may have an influence on the quality of tornado warnings. However, it is impossible to separate the influence of weather watches from possible differences in warning strategy or in environmental characteristics that make it more or less challenging to warn for tornadoes. Future studies should attempt to disentangle these numerous influences to assess how much influence intermediate products have on downstream products.
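The metrics compared across watch types have standard definitions; a minimal sketch with hypothetical warning counts and lead times (not the study's data):

```python
def pod(hits, misses):
    """Probability of detection: warned tornadoes / all tornadoes."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: unverified warnings / all warnings."""
    return false_alarms / (hits + false_alarms)

def mean_lead_time(lead_times_min):
    """Mean lead time over tornadoes warned in advance (lead time > 0 min)."""
    warned = [t for t in lead_times_min if t > 0]
    return sum(warned) / len(warned)

# Hypothetical counts for warnings issued inside one watch type
print(pod(hits=85, misses=15))              # 0.85
print(round(far(hits=85, false_alarms=110), 3))
print(mean_lead_time([18, 12, 0, 20, 10]))  # 15.0
```

Comparing these quantities between the "no watch" and "particularly dangerous situation" subsets is the kind of stratification the abstract describes.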

Full access
Nathan M. Hitchens and Harold E. Brooks

Abstract

The Storm Prediction Center issues four categorical convective outlooks with lead times as long as 48 h (the day 3 outlook, issued at 1200 UTC) and as short as 6 h (the day 1 outlook issued at 0600 UTC). Additionally, four outlooks issued during the 24-h target period (which begins at 1200 UTC on day 1) serve as updates to the last outlook issued prior to the target period. These outlooks, issued daily, are evaluated over a relatively long period of record, 1999–2011, using standard verification measures to assess accuracy; practically perfect forecasts are used to assess skill. Results show a continual increase in the skill of all outlooks during the study period, as well as increases in the frequency at which these outlooks are skillful on an annual basis.

Full access
Makenzie J. Krocak and Harold E. Brooks

Abstract

One of the challenges of providing probabilistic information on a multitude of spatiotemporal scales is ensuring that the information is both accurate and useful to decision-makers. Focusing on larger spatiotemporal scales (i.e., from convective outlook to weather watch scales), historical severe weather reports are analyzed to begin to understand the spatiotemporal scales within which hazardous weather events are contained. Reports from the Storm Prediction Center (SPC) report archive are placed onto grids of differing spatial scales and then split into 24-h convective outlook days (1200–1200 UTC). These grids are then analyzed temporally to assess over what fraction of the day a single location would generally experience severe weather events. Different combinations of temporal and spatial scales are tested to determine how the reference class (i.e., the choice of which scales to use) alters the probabilities of severe weather events. Results indicate that at any given point in the United States on any given day, more than 95% of the daily reports within 40 km of the point occur in a 4-h period. Therefore, the SPC 24-h convective outlook probabilities can be interpreted as 4-h convective outlook probabilities without a significant change in meaning. Additionally, probabilities and threat periods are analyzed at each location for different times of year. These results indicate little variability in the duration of severe weather events, which allows for a consistent definition of an “event” for all locations in the continental United States.
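The temporal-containment question posed above can be sketched as follows, assuming hypothetical report times at a single grid point (the function name and values are illustrative, not from the report archive):

```python
def fraction_in_best_window(report_hours, window_h=4.0):
    """Largest fraction of one day's reports covered by any single
    window of length window_h. Times are hours after 1200 UTC (0-24);
    windows that wrap past the end of the day are not considered."""
    if not report_hours:
        return 0.0
    times = sorted(report_hours)
    best = 0
    for start in times:  # an optimal window can begin at some report time
        covered = sum(1 for t in times if start <= t <= start + window_h)
        best = max(best, covered)
    return best / len(times)

# Hypothetical afternoon-peaked reports, plus one late outlier
reports = [6.1, 6.5, 7.0, 7.2, 8.3, 8.9, 9.4, 21.5]
print(fraction_in_best_window(reports))  # 0.875 (7 of 8 reports in one 4-h window)
```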

Free access
David J. Stensrud and Harold E. Brooks
Full access
Patrick T. Marsh and Harold E. Brooks

No abstract available.

Full access
Nathan M. Hitchens and Harold E. Brooks

Abstract

The Storm Prediction Center has issued daily convective outlooks since the mid-1950s. This paper represents an initial effort to examine the quality of these forecasts. Convective outlooks are plotted on a latitude–longitude grid with 80-km grid spacing and evaluated using storm reports to calculate verification measures including the probability of detection, frequency of hits, and critical success index. Results show distinct improvements in forecast performance over the duration of the study period, some of which can be attributed to apparent changes in forecasting philosophies.
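The three verification measures named here come from the standard 2×2 contingency table of forecast versus observed grid cells; a minimal sketch with hypothetical cell counts (not the study's values):

```python
def pod(hits, misses):
    # Probability of detection: observed severe cells that were forecast
    return hits / (hits + misses)

def foh(hits, false_alarms):
    # Frequency of hits: forecast cells verified by a report (equals 1 - FAR)
    return hits / (hits + false_alarms)

def csi(hits, misses, false_alarms):
    # Critical success index: hits relative to all cells forecast or observed
    return hits / (hits + misses + false_alarms)

# Hypothetical daily counts on an 80-km verification grid
h, m, fa = 30, 20, 50
print(pod(h, m))      # 0.6
print(foh(h, fa))     # 0.375
print(csi(h, m, fa))  # 0.3
```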

Full access
Mateusz Taszarek, Harold E. Brooks, and Bartosz Czernecki

Abstract

Observed proximity soundings from Europe are used to highlight how well environmental parameters discriminate among different kinds of severe thunderstorm hazards. In addition, the skill of parameters in predicting lightning and waterspouts is tested. The research area covers central and western European countries during the years 2009–15. In total, 45 677 soundings are analyzed, including 169 associated with extremely severe thunderstorms, 1754 with severe thunderstorms, 8361 with nonsevere thunderstorms, and 35 393 cases with nonzero convective available potential energy (CAPE) that had no thunderstorms. Results indicate that the occurrence of lightning is mainly a function of CAPE and is more likely when the temperature of the equilibrium level drops below −10°C. The probability of large hail is maximized with high values of boundary layer moisture, steep mid- and low-level lapse rates, and a high lifting condensation level. The size of hail depends mainly on the deep-layer shear (DLS) in a moderate- to high-CAPE environment. The likelihood of tornadoes increases along with increasing CAPE, DLS, and 0–1-km storm-relative helicity. Severe wind events are most common with high vertical wind shear and steep low-level lapse rates. The probability of waterspouts is maximized with weak vertical wind shear and steep low-level lapse rates. Wind shear in the 0–3-km layer is best at distinguishing between severe and extremely severe thunderstorms producing tornadoes and convective wind gusts. The parameter WMAXSHEAR, the product of WMAX (the square root of 2 times CAPE) and DLS, turned out to be the best at distinguishing between nonsevere and severe thunderstorms and at assessing the severity of convective phenomena.
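The composite parameter named in the final sentence has a simple closed form; a sketch with illustrative sounding values (the example CAPE and shear magnitudes are assumptions, not cases from the dataset):

```python
import math

def wmax(cape_j_per_kg):
    # Parcel-theory maximum updraft speed (m/s): sqrt(2 * CAPE)
    return math.sqrt(2.0 * cape_j_per_kg)

def wmaxshear(cape_j_per_kg, dls_m_per_s):
    # WMAXSHEAR (m^2/s^2): WMAX multiplied by the deep-layer shear (DLS)
    return wmax(cape_j_per_kg) * dls_m_per_s

# Illustrative sounding: CAPE = 1800 J/kg, DLS = 25 m/s
print(wmax(1800))          # 60.0
print(wmaxshear(1800, 25)) # 1500.0
```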

Full access
Nathan M. Hitchens and Harold E. Brooks

Abstract

Among the Storm Prediction Center’s (SPC) probabilistic convective outlook products are forecasts specifically targeted at significant severe weather: tornadoes that produce EF2 or greater damage, wind gusts of at least 75 mi h⁻¹, and hail with diameters of 2 in. or greater. During the period of 2005–15, for outlooks issued beginning on day 3 and through the final update to the day 1 forecast, the accuracy and skill of these significant severe outlooks are evaluated. To achieve this, criteria for the identification of significant severe weather events were developed, with a focus on determining days for which outlooks were not issued, but should have been based on the goals of the product. Results show that significant tornadoes and hail are generally well identified by outlooks, but significant wind events are underforecast. There exist differences between verification measures when calculating them based on 1) only those days for which outlooks were issued and 2) days with outlooks or missed events; specifically, there were improvements in the frequency of daily skillful forecasts when disregarding missed events. With the greatest number of missed events associated with significant wind events, forecasts for this hazard are identified as an area of future focus for the SPC.

Full access
Harold E. Brooks and James Correia Jr.

Abstract

Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance took place in 2012, when the default warning duration decreased and an apparently increased emphasis was placed on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decreased in 2012.

Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
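Signal detection theory summarizes system quality with a discrimination index that is, to first order, independent of the issuing threshold; a minimal standard-library sketch (the hit and false-alarm rates below are hypothetical, not the paper's values):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Discrimination index d': separation of the signal and noise
    distributions in standard-deviation units, assuming equal-variance
    Gaussian distributions. Rates must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Two hypothetical operating points of the same warning system: a stricter
# threshold lowers both the hit rate and the false-alarm rate, yet the
# underlying discrimination d' stays roughly the same.
print(round(d_prime(0.80, 0.30), 2))
print(round(d_prime(0.65, 0.15), 2))
```

This separation of quality from threshold is what lets the trade-off between false alarms and missed detections be read as a change in priorities rather than a change in skill.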

Full access