Search Results

You are looking at 21–30 of 78 items for Author or Editor: Harold E. Brooks.
Harold E. Brooks and Charles A. Doswell III

Abstract

Historical records of damage from major tornadoes in the United States are compiled and adjusted for inflation and wealth. Such adjustments provide a more reliable method to compare losses over time in the context of significant societal change. From 1890 to 1999, the costliest tornado on record, adjusted for inflation alone, is the 3 May 1999 Oklahoma City tornado, with an adjusted $963 million in damage (constant 1997 dollars). Including an adjustment for growth in wealth, on the other hand, clearly shows the 27 May 1896 Saint Louis–East Saint Louis tornado to be the costliest on record. An extremely conservative adjustment for the 1896 tornado gives a value of $2.2 billion. A more realistic adjustment yields a figure of $2.9 billion. A comparison of the ratio of deaths to wealth-adjusted damage shows a clear break in 1953, at the beginning of the watch/warning/awareness program of the National Weather Service.
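The two-step adjustment described above can be sketched as simple scaling: first by a price index for inflation, then by a wealth index. The index values and the 1896 damage figure below are illustrative placeholders, not the paper's actual deflators.

```python
# Sketch of the damage-adjustment idea: scale nominal damage by
# inflation alone, then additionally by growth in national wealth.
# All index values here are made up for illustration.

def adjust_damage(nominal, cpi_then, cpi_now, wealth_then=None, wealth_now=None):
    """Return damage in current dollars; include wealth growth if given."""
    adjusted = nominal * (cpi_now / cpi_then)          # inflation adjustment
    if wealth_then is not None and wealth_now is not None:
        adjusted *= wealth_now / wealth_then           # wealth adjustment
    return adjusted

# Hypothetical example: $13 million of 1896 damage, placeholder indices.
inflation_only = adjust_damage(13e6, cpi_then=8.0, cpi_now=160.0)
print(f"inflation-adjusted: ${inflation_only / 1e6:.0f} million")
```

The wealth factor is what separates the paper's two rankings: inflation alone favors recent events, while the extra wealth scaling lets century-old losses like Saint Louis 1896 dominate.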

Robert J. Trapp and Harold E. Brooks

Abstract

In the United States, tornado activity of a given year is usually assessed in terms of the total number of human-reported tornadoes. Such assessments fail to account for the seldom-acknowledged fact that an active (or inactive) tornado year for the United States does not necessarily equate with activity (or inactivity) everywhere in the country. The authors illustrate this by comparing the geospatial tornado distributions from 1987, 2004, and 2011. Quantified in terms of the frequency of daily tornado occurrence (or “tornado days”), the high activity in the South Atlantic and upper Midwest regions was a major contributor to the record-setting number of tornadoes in 2004. The high activity in 2011 arose from significant tornado occurrences in the Southeast and lower Midwest. The authors also show that the uniqueness of the activity during these years can be determined by modeling the local statistical behavior of tornado days by a gamma distribution.
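Modeling the local statistical behavior of tornado days with a gamma distribution can be sketched with a method-of-moments fit; the paper's exact fitting procedure may differ, and the counts below are invented.

```python
# Minimal sketch: fit a gamma distribution to annual counts of local
# tornado days via the method of moments (shape k, scale theta).

from statistics import mean, pvariance

def gamma_moments(counts):
    """Method-of-moments gamma parameters from a list of counts."""
    m, v = mean(counts), pvariance(counts)
    k = m * m / v        # shape
    theta = v / m        # scale (the implied mean is k * theta)
    return k, theta

# Hypothetical tornado-day counts for one grid point over ten years.
days = [3, 5, 2, 8, 4, 6, 3, 7, 5, 4]
k, theta = gamma_moments(days)
print(f"shape={k:.2f} scale={theta:.2f} implied mean={k*theta:.2f}")
```

With local fits like this in hand, an observed year's tornado-day count can be placed on the fitted distribution to judge how unusual that year was at each location.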

Makenzie J. Krocak and Harold E. Brooks

Abstract

While there has been an abundance of research dedicated to the seasonal climatology of severe weather, very little has been done to study hazardous weather probabilities on smaller scales. To this end, local hourly climatological estimates of tornadic event probabilities were developed using storm reports from NOAA’s Storm Prediction Center. These estimates begin the process of analyzing tornado frequencies on a subdaily scale.

Characteristics of the local tornado climatology are investigated, including how the diurnal cycle varies in space and time. Hourly tornado probabilities are peaked for both the annual and diurnal cycles in the plains, whereas the southeast United States has a more variable pattern. Areas that have similar total tornado threats but differ in the distribution of that threat are highlighted. Additionally, areas that have most of the tornado threat concentrated in small time frames both annually and diurnally are compared to areas that have a low-level threat at all times. These differences create challenges related to staffing requirements and background understanding of the tornado threat unique to each region.

This work is part of a larger effort to provide background information for probabilistic forecasts of hazardous weather that are meaningful over broad time and space scales, with a focus on scales broader than the typical time and space scales of the events of interest (including current products on the “watch” scale). A large challenge remains to continue describing probabilities as the time and space scales of the forecast become comparable to the scale of the event.
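The kind of hourly climatological estimate described above can be sketched as the fraction of days with at least one report in each local hour. The report tuples below are made up, and the real estimates also involve smoothing in space and time.

```python
# Sketch: hour -> fraction of days with at least one tornado report
# in that hour, from (day index, local hour) report tuples.

from collections import defaultdict

def hourly_probabilities(reports, n_days):
    """Map each hour to the fraction of days with a report in that hour."""
    days_by_hour = defaultdict(set)
    for day, hour in reports:
        days_by_hour[hour].add(day)    # count each day at most once per hour
    return {h: len(d) / n_days for h, d in days_by_hour.items()}

# Hypothetical reports over a ten-day record.
reports = [(0, 18), (0, 19), (1, 18), (2, 21), (3, 18)]
print(hourly_probabilities(reports, n_days=10))
```

Comparing these hour-by-hour curves between regions is what exposes the peaked plains cycle versus the flatter, more variable southeastern pattern noted above.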

Jeffrey P. Craven, Ryan E. Jewell, and Harold E. Brooks

Abstract

Approximately 400 Automated Surface Observing System (ASOS) observations of convective cloud-base heights at 2300 UTC were collected from April through August of 2001. These observations were compared with lifting condensation level (LCL) heights above ground level determined by 0000 UTC rawinsonde soundings from collocated upper-air sites. The LCL heights were calculated using both surface-based parcels (SBLCL) and mean-layer parcels (MLLCL—using the mean temperature and dewpoint in the lowest 100 hPa). The results show that the mean error for the MLLCL heights was substantially less than for SBLCL heights, with SBLCL heights consistently lower than observed cloud bases. These findings suggest that the mean-layer parcel is likely more representative of the actual parcel associated with convective cloud development, which has implications for calculations of thermodynamic parameters such as convective available potential energy (CAPE) and convective inhibition. In addition, the median value of surface-based CAPE (SBCAPE) was more than 2 times that of the mean-layer CAPE (MLCAPE). Thus, caution is advised when considering surface-based thermodynamic indices, despite the assumed presence of a well-mixed afternoon boundary layer.
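The surface-based versus mean-layer parcel contrast can be sketched with the simple Espy approximation, roughly 125 m of LCL height per degree of dewpoint depression. The study used full parcel ascent rather than this shortcut, and the sounding values below are invented.

```python
# Sketch: SBLCL from the surface level alone vs. MLLCL from the mean
# temperature and dewpoint of the lowest layers, using the Espy
# approximation z_LCL ~ 125 m per degree C of (T - Td).

def lcl_height(temp_c, dewpoint_c):
    """Approximate LCL height (m AGL) for a parcel with the given T, Td."""
    return 125.0 * (temp_c - dewpoint_c)

# Hypothetical lowest-100-hPa levels: (temperature, dewpoint) in deg C.
# Moisture is concentrated near the surface, as is typical.
levels = [(30.0, 22.0), (28.0, 19.0), (26.0, 17.0), (24.0, 14.0)]

surface_t, surface_td = levels[0]
sblcl = lcl_height(surface_t, surface_td)

mean_t = sum(t for t, _ in levels) / len(levels)
mean_td = sum(td for _, td in levels) / len(levels)
mllcl = lcl_height(mean_t, mean_td)

print(f"SBLCL ~ {sblcl:.0f} m, MLLCL ~ {mllcl:.0f} m")
```

Because the surface parcel is the moistest, its LCL sits lowest, mirroring the study's finding that SBLCL heights ran consistently below the observed cloud bases.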

Tony Hall, Harold E. Brooks, and Charles A. Doswell III

Abstract

A neural network, using input from the Eta Model and upper air soundings, has been developed for the probability of precipitation (PoP) and quantitative precipitation forecast (QPF) for the Dallas–Fort Worth, Texas, area. Forecasts from two years were verified against a network of 36 rain gauges. The resulting forecasts were remarkably sharp, with over 70% of the PoP forecasts being less than 5% or greater than 95%. Of the 436 days with forecasts of less than 5% PoP, no rain occurred on 435 days. On the 111 days with forecasts of greater than 95% PoP, rain always occurred. The linear correlation between the forecast and observed precipitation amount was 0.95. Equitable threat scores for threshold precipitation amounts from 0.05 in. (∼1 mm) to 1 in. (∼25 mm) are 0.63 or higher, with maximum values over 0.86. Combining the PoP and QPF products indicates that for very high PoPs, the correlation between the QPF and observations is higher than for lower PoPs. In addition, 61 of the 70 observed rains of at least 0.5 in. (12.7 mm) are associated with PoPs greater than 85%. As a result, the system indicates a potential for more accurate precipitation forecasting.
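The equitable threat score used to verify the threshold precipitation amounts is the threat score adjusted for hits expected by chance. A minimal sketch, with invented contingency-table counts:

```python
# Sketch of the equitable threat score (ETS) from a 2x2 contingency
# table; counts below are made up, not the Dallas-Fort Worth results.

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS: threat score with random-chance hits removed."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Hypothetical counts for one precipitation threshold.
print(f"ETS = {equitable_threat_score(50, 10, 5, 435):.2f}")
```

A perfect forecast set scores 1, random forecasts score near 0, so the 0.63–0.86 values quoted above indicate substantial skill across the thresholds.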

Nathan M. Hitchens, Harold E. Brooks, and Russ S. Schumacher

Abstract

The climatology of heavy rain events from hourly precipitation observations by Brooks and Stensrud is revisited in this study using two high-resolution precipitation datasets that incorporate both gauge observations and radar estimates. Analyses show a seasonal cycle of heavy rain events originating along the Gulf Coast and expanding across the eastern two-thirds of the United States by the summer, comparing well to previous findings. The frequency of extreme events is estimated, and may provide improvements over prior results due to both the increased spatial resolution of these data and improved techniques used in the estimation. The diurnal cycle of heavy rainfall is also examined, showing distinct differences in the strength of the cycle between seasons.

Nathan M. Hitchens, Harold E. Brooks, and Michael P. Kay

Abstract

A method for determining baselines of skill for the purpose of the verification of rare-event forecasts is described and examples are presented to illustrate the sensitivity to parameter choices. These “practically perfect” forecasts are designed to resemble a forecast that is consistent with that which a forecaster would make given perfect knowledge of the events beforehand. The Storm Prediction Center’s convective outlook slight risk areas are evaluated over the period from 1973 to 2011 using practically perfect forecasts to define the maximum values of the critical success index that a forecaster could reasonably achieve given the constraints of the forecast, as well as the minimum values of the critical success index that are considered the baseline for skillful forecasts. Based on these upper and lower bounds, the relative skill of convective outlook areas shows little to no skill until the mid-1990s, after which this value increases steadily. The annual frequency of skillful daily forecasts continues to increase from the beginning of the period of study, and the annual cycle shows maxima of the frequency of skillful daily forecasts occurring in May and June.
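The critical success index that anchors the skill bounds above is a simple hit-rate against the union of forecast and observed areas. A toy sketch over paired grid-point booleans (not actual SPC outlook data):

```python
# Sketch of the critical success index (CSI), also called the threat
# score: hits / (hits + misses + false alarms).

def critical_success_index(forecast, observed):
    """CSI over paired boolean forecast/observed values."""
    hits = sum(f and o for f, o in zip(forecast, observed))
    misses = sum(o and not f for f, o in zip(forecast, observed))
    false_alarms = sum(f and not o for f, o in zip(forecast, observed))
    return hits / (hits + misses + false_alarms)

# Toy grid: did the slight-risk area cover each point, and was there an event?
forecast = [True, True, False, True, False, False]
observed = [True, False, False, True, True, False]
print(f"CSI = {critical_success_index(forecast, observed):.2f}")
```

In the paper's framework, an outlook's CSI is then judged against the practically perfect upper bound and the skill baseline lower bound computed for that day.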

Harold E. Brooks, Charles A. Doswell III, and Michael P. Kay

Abstract

An estimate is made of the probability of an occurrence of a tornado day near any location in the contiguous 48 states for any time during the year. Gaussian smoothers in space and time have been applied to the observed record of tornado days from 1980 to 1999 to produce daily maps and annual cycles at any point on an 80 km × 80 km grid. Many aspects of this climatological estimate have been identified in previous work, but the method allows one to consider the record in several new ways. The two regions of maximum tornado days in the United States are northeastern Colorado and peninsular Florida, but there is a large region between the Appalachian and Rocky Mountains that has at least 1 day on which a tornado touches down on the grid. The annual cycle of tornado days is of particular interest. The southeastern United States, outside of Florida, faces its maximum threat in April. Farther west and north, the threat is later in the year, with the northern United States and New England facing its maximum threat in July. In addition, the repeatability of the annual cycle is much greater in the plains than farther east. By combining the region of greatest threat with the region of highest repeatability of the season, an objective definition of Tornado Alley as a region that extends from the southern Texas Panhandle through Nebraska and northeastward into eastern North Dakota and Minnesota can be provided.
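The space-time smoothing idea can be sketched in one dimension: a Gaussian-weighted average of a raw daily tornado-day record around the annual cycle. The kernel width and toy record below are invented; the study smoothed in both space (an 80 km x 80 km grid) and time over 1980–99.

```python
# Sketch: smooth a cyclic daily tornado-day series with a normalized
# Gaussian kernel to estimate a daily probability of a tornado day.

import math

def gaussian_smooth(values, sigma):
    """Gaussian-smooth a cyclic series; weights are normalized per point."""
    n = len(values)
    smoothed = []
    for i in range(n):
        # Cyclic distance so the end of the record wraps to the start.
        weights = [math.exp(-0.5 * (min(abs(i - j), n - abs(i - j)) / sigma) ** 2)
                   for j in range(n)]
        total = sum(weights)
        smoothed.append(sum(w * v for w, v in zip(weights, values)) / total)
    return smoothed

# Toy record: 1 on days a tornado touched down at a grid point, else 0.
raw = [0] * 20
raw[5] = raw[6] = raw[12] = 1
smooth = gaussian_smooth(raw, sigma=2.0)
print(f"peak smoothed frequency ~ {max(smooth):.2f}")
```

Because the normalized kernel redistributes rather than creates events, the total number of tornado days is preserved while the spiky raw record becomes a smooth annual cycle.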

Harold E. Brooks, Charles A. Doswell III, and Robert A. Maddox

Abstract

In the near future, the technological capability will be available to use mesoscale and cloud-scale numerical models for forecasting convective weather in operational meteorology. We address some of the issues concerning effective utilization of this capability. The challenges that must be overcome are formidable. We argue that explicit prediction on the cloud scale, even if these challenges can be met, does not obviate the need for human interpretation of the forecasts. In the case that humans remain directly involved in the forecasting process, another set of issues is concerned with the constraints imposed by human involvement. As an alternative to direct explicit prediction of convective events by computers, we propose that mesoscale models be used to produce initial conditions for cloud-scale models. Cloud-scale models then can be run in a Monte Carlo–like mode, in order to provide an estimate of the probable types of convective weather for a forecast period. In our proposal, human forecasters fill the critical role as an interface between various stages of the forecasting and warning process. In particular, they are essential in providing input to the numerical models from the observational data and in interpreting the model output. This interpretative step is important both in helping the forecaster anticipate and interpret new observations and in providing information to the public.
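The proposed Monte Carlo–like mode can be illustrated with a toy ensemble: run a model many times from perturbed initial conditions and tally outcome frequencies. The "model" here is a placeholder classification rule standing in for a cloud-scale simulation, and all thresholds are invented.

```python
# Toy illustration of a Monte Carlo-like ensemble: perturb the initial
# conditions, run a stub "model", and report outcome probabilities.

import random

def stub_cloud_model(cape, shear):
    """Placeholder rule standing in for a cloud-scale model run."""
    if cape > 2000 and shear > 20:
        return "supercell"
    if cape > 1000:
        return "multicell"
    return "no storm"

def monte_carlo_outlook(base_cape, base_shear, n_runs=1000, seed=42):
    """Outcome frequencies from perturbed-initial-condition runs."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_runs):
        cape = base_cape + rng.gauss(0, 500)    # perturbed initial state
        shear = base_shear + rng.gauss(0, 5)
        outcome = stub_cloud_model(cape, shear)
        counts[outcome] = counts.get(outcome, 0) + 1
    return {k: v / n_runs for k, v in counts.items()}

print(monte_carlo_outlook(2200, 22))
```

The resulting frequency table is the kind of probabilistic guidance the proposal envisions a human forecaster interpreting, rather than a single deterministic cloud-scale run.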

Harold E. Brooks, Arthur Witt, and Michael D. Eilts

The question of who is the “best” forecaster in a particular media market is one that the public frequently asks. The authors have collected approximately one year's forecasts from the National Weather Service and major media presentations for Oklahoma City. Diagnostic verification procedures indicate that the question of best does not have a clear answer. All of the forecast sources have strengths and weaknesses, and it is possible that a user could take information from a variety of sources to come up with a forecast that has more value than any one individual source provides. The analysis provides numerous examples of the utility of a distributions-oriented approach to verification while also providing insight into the problems the public faces in evaluating the array of forecasts presented to them.
