Search Results
Abstract
The 3 May 1999 Oklahoma City tornado was the deadliest in the United States in over 20 years, with 36 direct fatalities. To understand how this event fits into the historical context, the record of tornado deaths in the United States has been examined. Almost 20 000 deaths have been reported in association with more than 3600 tornadoes in the United States since 1680. A cursory examination of the record shows a break in 1875. Prior to then, it is likely that many killer tornadoes failed to be reported. When the death toll is normalized by population, a near-constant rate of death is apparent until about 1925, when a sharp fall begins. The rate was about 1.8 people per million population in 1925 and was less than 0.12 people per million by 2000. The decrease in fatalities has resulted from two primary causes: a decrease in the number of killer tornadoes and a decrease in the number of fatalities in the most deadly tornadoes. Current death rates for mobile home residents, however, are still nearly what the overall national rate was prior to 1925 and are about 20 times the rate of site-built home residents. The increase in the fraction of the U.S. population living in mobile homes has important implications for future reductions in the death toll.
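The normalization by population mentioned above is a simple rate calculation; a minimal sketch in Python follows, in which the death tolls are hypothetical round numbers rather than figures from the study, although the census populations are approximately correct:

def deaths_per_million(deaths, population):
    # Convert an annual death toll to a rate per million residents.
    return deaths / population * 1.0e6

# Hypothetical tolls against approximate U.S. census populations:
print(deaths_per_million(200, 115_000_000))  # ~1.7 per million (circa 1925 population)
print(deaths_per_million(30, 281_000_000))   # ~0.11 per million (circa 2000 population)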
Abstract
After the tornadoes of 3 May 1999, the Federal Emergency Management Agency formed a Building Performance Assessment Team (BPAT) to examine the main tornado paths during the outbreak and to make recommendations based on the damage they saw. This is the first time a tornado disaster has been subjected to a BPAT investigation. Some aspects of the BPAT final report are reviewed and considered in the context of tornado preparedness in Kansas and Oklahoma. Although the preparedness efforts of many public and private institutions apparently played a large role in reducing casualties from the storm, a number of building deficiencies were found during the BPAT's evaluation. Especially in public facilities, there are several aspects of tornado preparedness that could be improved. Moreover, there is clear evidence that a nonnegligible fraction of the damage associated with these storms could have been mitigated with some relatively simple and inexpensive construction enhancements. Widespread implementation of these enhancements would reduce projectile loading and its associated threats to both life and property.
Abstract
The ability to discriminate between tornadic and nontornadic thunderstorms is investigated using a mesoscale model. Nine severe weather events are simulated: four events are tornadic supercell thunderstorm outbreaks that occur in conjunction with strong large-scale forcing for upward motion, three events are bow-echo outbreaks that also occur in conjunction with strong large-scale forcing for upward motion, and two are isolated tornadic supercell thunderstorms that occur under much weaker large-scale forcing. Examination of the mesoscale model simulations suggests that it is possible to discriminate between tornadic and nontornadic thunderstorms by using the locations of model-produced convective activity and values of convective available potential energy to highlight regions of likely thunderstorm development, and then using the values of storm-relative environmental helicity (SREH) and bulk Richardson number shear (BRNSHR) to indicate whether or not tornadic supercell thunderstorms are likely. Values of SREH greater than 100 m² s⁻² indicate a likelihood that any storms that develop will have a midlevel mesocyclone, values of BRNSHR between 40 and 100 m² s⁻² suggest that low-level mesocyclogenesis is likely, and values of BRNSHR less than 40 m² s⁻² suggest that the thunderstorms will be dominated by outflow. By combining the storm characteristics suggested by these parameters, it is possible to use mesoscale model output to infer the dominant mode of severe convection.
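The SREH and BRNSHR thresholds quoted above amount to a simple decision rule. A minimal sketch in Python follows; the function name and output wording are illustrative choices rather than anything taken from the paper, and the rule covers only the ranges the abstract mentions:

def storm_mode_guidance(sreh, brnshr):
    # Both inputs in m^2 s^-2, per the thresholds quoted in the abstract.
    midlevel = "midlevel mesocyclone likely" if sreh > 100 else "midlevel mesocyclone less likely"
    if brnshr < 40:
        low_level = "outflow-dominated storms likely"
    elif brnshr <= 100:
        low_level = "low-level mesocyclogenesis likely"
    else:
        low_level = "no guidance stated in the abstract for BRNSHR > 100"
    return midlevel, low_level

print(storm_mode_guidance(sreh=250, brnshr=60))
# ('midlevel mesocyclone likely', 'low-level mesocyclogenesis likely')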
Abstract
An approach to forecasting the potential for flash flood-producing storms is developed, using the notion of basic ingredients. Heavy precipitation is the result of sustained high rainfall rates. In turn, high rainfall rates involve the rapid ascent of air containing substantial water vapor and also depend on the precipitation efficiency. The duration of an event is associated with its speed of movement and the size of the system causing the event along the direction of system movement.
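Written symbolically, these ingredients combine multiplicatively; the schematic relations below use notation chosen here for illustration rather than symbols defined in the abstract:

P \approx \bar{R}\,D, \qquad R \propto E\,w\,q, \qquad D \approx L_s / C_s

where P is the total precipitation at a point, \bar{R} the average rainfall rate, D the duration, E the precipitation efficiency, w the ascent rate, q the water vapor content of the ascending air, L_s the extent of the rain-producing system along its direction of motion, and C_s its speed of movement.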
This leads naturally to a consideration of the meteorological processes by which these basic ingredients are brought together. A description of those processes and of the types of heavy precipitation-producing storms suggests some of the variety of ways in which heavy precipitation occurs. Since the right mixture of these ingredients can be found in a wide variety of synoptic and mesoscale situations, it is necessary to know which of the ingredients is critical in any given case. By knowing which of the ingredients is most important in any given case, forecasters can concentrate on recognition of the developing heavy precipitation potential as meteorological processes operate. This also helps with the recognition of heavy rain events as they occur, a challenging problem if the potential for such events has not been anticipated.
Three brief case examples are presented to illustrate the procedure as it might be applied in operations. The cases are geographically diverse and even illustrate how a nonconvective heavy precipitation event fits within this methodology. The concept of ingredients-based forecasting is discussed as it might apply to a broader spectrum of forecast events than just flash flood forecasting.
Abstract
A neural network, using input from the Eta Model and upper air soundings, has been developed for the probability of precipitation (PoP) and quantitative precipitation forecast (QPF) for the Dallas–Fort Worth, Texas, area. Forecasts from two years were verified against a network of 36 rain gauges. The resulting forecasts were remarkably sharp, with over 70% of the PoP forecasts being less than 5% or greater than 95%. Of the 436 days with forecasts of less than 5% PoP, no rain occurred on 435 days. On the 111 days with forecasts of greater than 95% PoP, rain always occurred. The linear correlation between the forecast and observed precipitation amount was 0.95. Equitable threat scores for threshold precipitation amounts from 0.05 in. (∼1 mm) to 1 in. (∼25 mm) are 0.63 or higher, with maximum values over 0.86. Combining the PoP and QPF products indicates that for very high PoPs, the correlation between the QPF and observations is higher than for lower PoPs. In addition, 61 of the 70 observed rains of at least 0.5 in. (12.7 mm) are associated with PoPs greater than 85%. As a result, the system indicates a potential for more accurate precipitation forecasting.
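For reference, the equitable threat score reported above is computed from a 2 × 2 contingency table of forecast versus observed exceedances of a precipitation threshold; a minimal sketch follows, in which the formula is the standard one and the function and variable names are mine:

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    # Number of hits expected by chance, given the observed and forecast frequencies.
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)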
Abstract
The history of storm spotting and public awareness of the tornado threat is reviewed. It is shown that a downward trend in fatalities apparently began after the famous “Tri-State” tornado of 1925. Storm spotting began during World War II as an effort to protect the nation’s military installations but became a public service with the resumption of public tornado forecasting, pioneered in 1948 by the Air Force’s Fawbush and Miller and begun in the public sector in 1952. The current spotter program, known generally as SKYWARN, is a civilian-based volunteer organization. Responsibility for spotter training has rested with the national forecasting services (originally the Weather Bureau and now the National Weather Service). That training has evolved with (a) the proliferation of film and (recently) video footage of severe storms; (b) growth in the scientific knowledge about tornadoes and tornadic storms, as well as a better understanding of how tornadoes produce damage; and (c) the inception and growth of scientific and hobbyist storm chasing.
The concept of an integrated warning system is presented in detail, and considered in light of past and present accomplishments and what needs to be done in the future to maintain the downward trend in fatalities. As the integrated warning system has evolved over its history, it has become clear that volunteer spotters and the public forecasting services need to be closely tied. Further, public information dissemination is a major factor in an integrated warning service; warnings and forecasts that do not reach the users and produce appropriate responses are not very valuable, even if they are accurate and timely. The history of the integration has been somewhat checkered, but compelling evidence of the overall efficacy of the watch–warning program can be found in the maintenance of the downward trend in annual fatalities that began in 1925.
Abstract
An estimate is made of the probability of an occurrence of a tornado day near any location in the contiguous 48 states for any time during the year. Gaussian smoothers in space and time have been applied to the observed record of tornado days from 1980 to 1999 to produce daily maps and annual cycles at any point on an 80 km × 80 km grid. Many aspects of this climatological estimate have been identified in previous work, but the method allows one to consider the record in several new ways. The two regions of maximum tornado days in the United States are northeastern Colorado and peninsular Florida, but there is a large region between the Appalachian and Rocky Mountains that has at least 1 day on which a tornado touches down on the grid. The annual cycle of tornado days is of particular interest. The southeastern United States, outside of Florida, faces its maximum threat in April. Farther west and north, the threat is later in the year, with the northern United States and New England facing its maximum threat in July. In addition, the repeatability of the annual cycle is much greater in the plains than farther east. By combining the region of greatest threat with the region of highest repeatability of the season, an objective definition of Tornado Alley as a region that extends from the southern Texas Panhandle through Nebraska and northeastward into eastern North Dakota and Minnesota can be provided.
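A minimal sketch of the kind of space–time Gaussian smoothing described above, applied to gridded tornado-day counts, is given below; the grid dimensions, smoothing scales, and 20-year normalization are illustrative assumptions rather than the exact values used in the study:

import numpy as np
from scipy.ndimage import gaussian_filter

# counts[d, j, i]: number of years (out of 20) with a tornado day in grid box (j, i) on day-of-year d
counts = np.zeros((365, 60, 90))
counts[123, 25, 40] = 3.0  # hypothetical: 3 of 20 years had a tornado day here on day 124

raw_prob = counts / 20.0
# Smooth in time (first axis, wrapped to respect the annual cycle) and in space.
smoothed_prob = gaussian_filter(raw_prob, sigma=(15.0, 1.5, 1.5),
                                mode=("wrap", "nearest", "nearest"))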
Abstract
The probability of nontornadic severe weather event reports near any location in the United States for any day of the year has been estimated. Gaussian smoothers in space and time have been applied to the observed record of severe thunderstorm occurrence from 1980 to 1994 to produce daily maps and annual cycles at any point. Many aspects of this climatology have been identified in previous work, but the method allows for the consideration of the record in several new ways. A review of the raw data, broken down in various ways, reveals numerous nonmeteorological artifacts, predominantly associated with marginal nontornadic severe thunderstorm events, including an enormous growth in the number of severe weather reports since the mid-1950s. Much of this growth may be associated with a drive to improve warning verification scores. The smoothed spatial and temporal distributions of the probability of nontornadic severe thunderstorm events are presented in several ways. The distribution of significant nontornadic severe thunderstorm reports (wind speeds ≥ 65 kt and/or hailstone diameters ≥ 2 in.) is consistent with the hypothesis that supercells are responsible for the majority of such reports.
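The “significant” threshold quoted in the last sentence is a simple magnitude filter on individual reports; a minimal sketch follows, with argument names that are mine rather than from the paper:

def is_significant_nontornadic(wind_kt=None, hail_diameter_in=None):
    # Significant if wind speed >= 65 kt and/or hail diameter >= 2 in.
    strong_wind = wind_kt is not None and wind_kt >= 65.0
    large_hail = hail_diameter_in is not None and hail_diameter_in >= 2.0
    return strong_wind or large_hail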
Abstract
A method for determining baselines of skill for the verification of rare-event forecasts is described, and examples are presented to illustrate the sensitivity to parameter choices. These “practically perfect” forecasts are designed to resemble the forecast a forecaster would make given perfect knowledge of the events beforehand. The Storm Prediction Center’s convective outlook slight risk areas are evaluated over the period from 1973 to 2011 using practically perfect forecasts to define the maximum values of the critical success index that a forecaster could reasonably achieve given the constraints of the forecast, as well as the minimum values of the critical success index that are considered the baseline for skillful forecasts. Based on these upper and lower bounds, the convective outlook areas show little to no skill until the mid-1990s, after which skill increases steadily. The annual frequency of skillful daily forecasts increases from the beginning of the period of study, and the annual cycle shows maxima in the frequency of skillful daily forecasts in May and June.
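The critical success index used for these bounds has a standard contingency-table form, and a practically perfect field is commonly built by smoothing the observed events; the sketch below shows both, with the Gaussian-smoothing construction and all parameter values being assumptions on my part rather than details stated in the abstract:

import numpy as np
from scipy.ndimage import gaussian_filter

def critical_success_index(hits, misses, false_alarms):
    # CSI = hits / (hits + misses + false alarms); correct negatives do not enter.
    return hits / (hits + misses + false_alarms)

def practically_perfect(observed_events, sigma=1.5):
    # Smooth a binary grid of observed severe-report locations into a probability-like
    # field approximating the forecast one would draw with perfect foreknowledge.
    return gaussian_filter(observed_events.astype(float), sigma=sigma)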
Abstract
An experiment using a three-dimensional cloud-scale numerical model in an operational forecasting environment was carried out in the spring of 1991. It involved meteorologists generating forecast environmental conditions associated with anticipated strong convection. Those conditions were then used to initialize the cloud model, which was subsequently run to produce qualitative forecasts of storm type. Verification was done on both the sounding forecast and the numerical model portions of the experiment. Of the 12 experiment days, the numerical model generated six good forecasts, two of which involved significant tornadic storms. More importantly, while demonstrating the potential for cloud-scale modeling in an operational environment, the experiment highlights some of the obstacles in the path of such an implementation.