Search Results
Showing 31–40 of 62 items for Author or Editor: Harold E. Brooks (Articles, all content).
Abstract
In the hazards literature, a near-miss is defined as an event that had a nontrivial probability of causing loss of life or property but did not due to chance. Frequent near-misses can desensitize the public to tornado risk and reduce responses to warnings. Violent tornadoes rarely hit densely populated areas, but when they do they can cause substantial loss of life. It is unknown how frequently violent tornadoes narrowly miss a populated area. To address this question, this study looks at the spatial distribution of possible exposures of people to violent tornadoes in the United States. We collected and replicated tornado footprints for all reported U.S. violent tornadoes between 1995 and 2016, across a uniform circular grid, with a radius of 40 km and a resolution of 0.5 km, surrounding the centroid of the original footprint. We then estimated the number of people exposed to each tornado footprint using proportional allocation. We found that violent tornadoes tended to touch down in less populated areas with only 33.1% potentially impacting 5000 persons or more. Hits and near-misses were most common in the Southern Plains and Southeast United States with the highest risk in central Oklahoma and northern Alabama. Knowledge about the location of frequent near-misses can help emergency managers and risk communicators target communities that might be more vulnerable, due to an underestimation of tornado risk, for educational campaigns. By increasing educational efforts in these high-risk areas, it might be possible to improve local knowledge and reduce casualties when violent tornadoes do hit.
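To make the proportional-allocation step described above concrete, the sketch below estimates exposed population by overlaying a tornado footprint polygon on census-block polygons and assigning each block's population in proportion to the overlapped area. It is a minimal illustration only: the geometries, coordinates, and population counts are hypothetical placeholders, not the study's gridded data.

```python
# Minimal sketch of proportional allocation of population to a tornado
# footprint. Assumes shapely is available; the geometries and populations
# below are hypothetical placeholders, not the study's data.
from shapely.geometry import Polygon, box

def exposed_population(footprint, blocks):
    """Sum block populations weighted by the fraction of each block's
    area that intersects the tornado footprint."""
    total = 0.0
    for geom, pop in blocks:
        overlap = footprint.intersection(geom).area
        if overlap > 0.0:
            total += pop * overlap / geom.area
    return total

# Hypothetical footprint: a long, narrow damage path (coordinates in km).
footprint = Polygon([(0.0, 0.0), (30.0, 0.0), (30.0, 1.0), (0.0, 1.0)])

# Hypothetical census blocks: (geometry, population).
blocks = [
    (box(5.0, -2.0, 10.0, 3.0), 4000),    # partially overlapped by the path
    (box(28.0, 0.0, 35.0, 5.0), 12000),   # clipped by the end of the path
    (box(50.0, 50.0, 55.0, 55.0), 8000),  # far from the footprint
]

print(f"Estimated exposed population: {exposed_population(footprint, blocks):.0f}")
```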
Abstract
Differences in the environments associated with tornadic and nontornadic mesocyclones are investigated using proximity soundings. Questions about the definition of proximity are raised. As the environments of severe storms are observed with high spatial and temporal resolution, the operational meaning of proximity becomes less clear. Thus, the exploration of the proximity dataset is subject to certain caveats, which are presented in some detail.
Results from this relatively small proximity dataset support a recently developed conceptual model of the development and maintenance of low-level mesocyclones within supercells. Three regimes of low-level mesocyclonic behavior are predicted by the conceptual model: (i) low-level mesocyclones are slow to develop, if at all, (ii) low-level mesocyclones form quickly but are short lived, and (iii) low-level mesocyclones develop slowly but have the potential to persist for hours. The model suggests that a balance is needed between the midtropospheric storm-relative winds, storm-relative environmental helicity, and low-level absolute humidity to develop long-lived tornadic mesocyclones. In the absence of that balance, such storms should be rare. The failure of earlier forecast efforts to discriminate between tornadic and nontornadic severe storms is discussed in the context of a physical understanding of supercell tornadogenesis. Finally, it is shown that attempts to gather large datasets of proximity soundings associated with rare weather events are likely to take many years.
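Because the conceptual model above hinges in part on storm-relative environmental helicity, the following short sketch shows the standard layer-by-layer way 0–3-km SRH is computed from discrete sounding winds. The wind profile and storm motion are made-up values for illustration, not soundings from the proximity dataset.

```python
import numpy as np

def storm_relative_helicity(u, v, z, cu, cv, depth=3000.0):
    """Discrete storm-relative helicity (m^2 s^-2) over 0-depth from a sounding.

    u, v : wind components (m/s) at heights z (m AGL), ordered bottom-up.
    cu, cv : storm-motion components (m/s).
    Uses the standard layer sum
        SRH = sum[(u_{k+1}-cu)(v_k-cv) - (u_k-cu)(v_{k+1}-cv)].
    """
    u, v, z = map(np.asarray, (u, v, z))
    mask = z <= depth
    uu, vv = u[mask] - cu, v[mask] - cv
    return float(np.sum(uu[1:] * vv[:-1] - uu[:-1] * vv[1:]))

# Hypothetical sounding (heights in m AGL, winds in m/s) and storm motion.
z = [0, 500, 1000, 1500, 2000, 2500, 3000]
u = [5.0, 10.0, 14.0, 17.0, 19.0, 21.0, 23.0]
v = [8.0, 10.0, 9.0, 7.0, 5.0, 4.0, 3.0]
print(f"0-3-km SRH: {storm_relative_helicity(u, v, z, cu=12.0, cv=4.0):.1f} m^2/s^2")
```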
Abstract
ECMWF provides the ensemble-based extreme forecast index (EFI) and shift of tails (SOT) products to facilitate forecasting severe weather in the medium range. Exploiting the ingredients-based method of forecasting deep moist convection, two parameters, convective available potential energy (CAPE) and a composite CAPE–shear parameter, have been recently added to the EFI/SOT, targeting severe convective weather. Verification results based on the area under the relative operating characteristic curve (ROCA) show high skill of both EFIs at discriminating between severe and nonsevere convection in the medium range over Europe and the United States. In the first 7 days of the forecast ROCA values show significant skill, staying well above the no-skill threshold of 0.5. Two case studies are presented to give some practical considerations and discuss certain limitations of the EFI/SOT forecasts and how they could be overcome. In particular, both convective EFI/SOT products are good at providing guidance for where and when severe convection is possible if there is sufficient lift for convective initiation. Probability of precipitation is suggested as a suitable ensemble product for assessing whether convection is likely to be initiated. The model climate should also be considered when determining whether severe convection is possible; EFI and SOT values are related to the climatological frequency of occurrence of deep, moist convection over a given place and time of year.
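To make the ROCA verification measure above concrete, here is a self-contained sketch that computes the area under the relative operating characteristic curve for probabilistic forecasts against binary observations. The forecast and observation arrays are synthetic examples, not EFI/SOT output.

```python
import numpy as np

def roc_area(prob, obs):
    """Area under the ROC curve: hit rate vs false-alarm rate as the
    probability threshold is varied. prob in [0, 1], obs in {0, 1}."""
    prob, obs = np.asarray(prob, float), np.asarray(obs, int)
    thresholds = np.unique(np.concatenate(([0.0], prob, [1.0 + 1e-9])))
    pod, pofd = [], []
    for t in thresholds:
        fcst_yes = prob >= t
        hits = np.sum(fcst_yes & (obs == 1))
        misses = np.sum(~fcst_yes & (obs == 1))
        false_alarms = np.sum(fcst_yes & (obs == 0))
        corr_nulls = np.sum(~fcst_yes & (obs == 0))
        pod.append(hits / max(hits + misses, 1))
        pofd.append(false_alarms / max(false_alarms + corr_nulls, 1))
    # Sort by increasing false-alarm rate and integrate with the trapezoid rule.
    order = np.argsort(pofd)
    return float(np.trapz(np.array(pod)[order], np.array(pofd)[order]))

# Synthetic example: probabilities tend to be higher on event days.
rng = np.random.default_rng(0)
obs = rng.integers(0, 2, size=500)
prob = np.clip(0.3 * obs + rng.normal(0.35, 0.2, size=500), 0, 1)
print(f"ROC area: {roc_area(prob, obs):.2f}  (no skill = 0.5)")
```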
Abstract
Increasing tornado warning skill in terms of the probability of detection and false-alarm ratio remains an important operational goal. Although many studies have examined tornado warning performance in a broad sense, less focus has been placed on warning performance within subdaily convective events. In this study, we use the NWS tornado verification database to examine tornado warning performance by order-of-tornado within each convective day. We combine this database with tornado reports to relate warning performance to environmental characteristics. On convective days with multiple tornadoes, the first tornado is warned significantly less often than the middle and last tornadoes. More favorable kinematic environmental characteristics, like increasing 0–1-km shear and storm-relative helicity, are associated with better warning performance related to the first tornado of the convective day. Thermodynamic and composite parameters are less correlated with warning performance. During tornadic events, over one-half of false alarms occur after the last tornado of the day decays, and false alarms are 2 times as likely to be issued during this time as before the first tornado forms. These results indicate that forecasters may be better “primed” (or more prepared) to issue warnings on middle and last tornadoes of the day and must overcome a higher threshold to warn on the first tornado of the day. To overcome this challenge, using kinematic environmental characteristics and intermediate products on the watch-to-warning scale may help.
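The warning-performance measures discussed above reduce to simple contingency-table ratios. The sketch below computes the probability of detection and the false-alarm ratio for each order-of-tornado category; the counts are hypothetical placeholders, not NWS verification statistics.

```python
# Probability of detection (POD) and false-alarm ratio (FAR) from a
# warning-verification contingency table, grouped by order-of-tornado
# within the convective day. Counts are hypothetical placeholders.

def pod(hits, misses):
    return hits / (hits + misses)

def far(hits, false_alarms):
    return false_alarms / (hits + false_alarms)

# (warned tornadoes, unwarned tornadoes, false-alarm warnings) per category
counts = {
    "first tornado of day": (60, 40, 55),
    "middle tornadoes":     (160, 40, 70),
    "last tornado of day":  (80, 20, 90),
}

for label, (hits, misses, false_alarms) in counts.items():
    print(f"{label:22s}  POD = {pod(hits, misses):.2f}   "
          f"FAR = {far(hits, false_alarms):.2f}")
```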
Abstract
The climatology of heavy rain events from hourly precipitation observations by Brooks and Stensrud is revisited in this study using two high-resolution precipitation datasets that incorporate both gauge observations and radar estimates. Analyses show a seasonal cycle of heavy rain events originating along the Gulf Coast and expanding across the eastern two-thirds of the United States by the summer, comparing well to previous findings. The frequency of extreme events is estimated, and may provide improvements over prior results due to both the increased spatial resolution of these data and improved techniques used in the estimation. The diurnal cycle of heavy rainfall is also examined, showing distinct differences in the strength of the cycle between seasons.
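As a concrete illustration of the kind of diurnal-cycle analysis described above, the sketch below counts hours in which hourly precipitation exceeds a heavy-rain threshold, binned by hour of day. The precipitation series and threshold are synthetic placeholders, not either of the study's high-resolution datasets.

```python
import numpy as np

# Count exceedances of a heavy-rain threshold by hour of day from an
# hourly precipitation series. Data below are synthetic placeholders.
rng = np.random.default_rng(1)
n_hours = 24 * 365
precip = rng.gamma(shape=0.3, scale=2.0, size=n_hours)  # mm per hour, synthetic
hour_of_day = np.arange(n_hours) % 24

threshold = 10.0  # mm/h, illustrative "heavy rain" threshold
exceed = precip >= threshold

diurnal_counts = np.array([np.sum(exceed & (hour_of_day == h)) for h in range(24)])
for h, c in enumerate(diurnal_counts):
    print(f"{h:02d} UTC: {c} exceedance hours")
```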
Abstract
Numerical forecasts from a pilot program on short-range ensemble forecasting at the National Centers for Environmental Prediction are examined. The ensemble consists of 10 forecasts made using the 80-km Eta Model and 5 forecasts from the regional spectral model. Results indicate that the accuracy of the ensemble mean is comparable to that from the 29-km Meso Eta Model for both mandatory level data and the 36-h forecast cyclone position. Calculations of spread indicate that at 36 and 48 h the spread from initial conditions created using the breeding of growing modes technique is larger than the spread from initial conditions created using different analyses. However, the accuracy of the forecast cyclone position from these two initialization techniques is nearly identical. Results further indicate that using two different numerical models assists in increasing the ensemble spread significantly.
There is little correlation between the spread in the ensemble members and the accuracy of the ensemble mean for the prediction of cyclone location. Since information on forecast uncertainty is needed in many applications, and is one of the reasons to use an ensemble approach, the lack of a correlation between spread and forecast uncertainty presents a challenge to the production of short-range ensemble forecasts.
Even though the ensemble dispersion is not found to be an indication of forecast uncertainty, significant spread can occur within the forecasts over a relatively short time period. Examples are shown to illustrate how small uncertainties in the model initial conditions can lead to large differences in numerical forecasts from an identical numerical model.
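The spread–skill relationship discussed above can be quantified case by case: compute the ensemble spread (standard deviation across members) and the error of the ensemble mean for each forecast, then correlate the two. The sketch below does exactly that with synthetic ensemble values, not SREF forecasts.

```python
import numpy as np

# Spread-skill sketch: per-case ensemble spread vs absolute error of the
# ensemble mean, then their correlation. Values are synthetic.
rng = np.random.default_rng(2)
n_cases, n_members = 200, 15

truth = rng.normal(0.0, 1.0, size=n_cases)
# Members scatter around the truth with case-dependent spread.
case_spread = rng.uniform(0.5, 2.0, size=n_cases)
members = truth[:, None] + rng.normal(0.0, case_spread[:, None],
                                      size=(n_cases, n_members))

spread = members.std(axis=1)
error = np.abs(members.mean(axis=1) - truth)

corr = np.corrcoef(spread, error)[0, 1]
print(f"Spread-error correlation over {n_cases} cases: {corr:.2f}")
```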
Abstract
Forecasts from the National Centers for Environmental Prediction’s experimental short-range ensemble system are examined and compared with a single run from a higher-resolution model using similar computational resources. The ensemble consists of five members from the Regional Spectral Model and 10 members from the 80-km Eta Model, with both in-house analyses and bred perturbations used as initial conditions. This configuration allows for a comparison of the two models and the two perturbation strategies, as well as a preliminary investigation of the relative merits of mixed-model, mixed-perturbation ensemble systems. The ensemble is also used to estimate the short-range predictability limits of forecasts of precipitation and fields relevant to the forecast of precipitation.
Whereas error growth curves for the ensemble and its subgroups are in relative agreement with previous work for large-scale fields such as 500-mb heights, little or no error growth is found for fields of mesoscale interest, such as convective indices and precipitation. The difference in growth rates among the ensemble subgroups illustrates the role of both initial perturbation strategy and model formulation in creating ensemble dispersion. However, increased spread per se is not necessarily beneficial, as is indicated by the fact that the ensemble subgroup with the greatest spread is less skillful than the subgroup with the least spread.
Further examination into the skill of the ensemble system for forecasts of precipitation shows the advantage gained from a mixed-model strategy, such that even the inclusion of the less skillful Regional Spectral Model members improves ensemble performance. For some aspects of forecast performance, even ensemble configurations with as few as five members are shown to significantly outperform the 29-km Meso-Eta Model.
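One common way to score ensemble precipitation forecasts of the kind evaluated above is the Brier score applied to event probabilities derived from the member relative frequency. The sketch below shows that calculation with synthetic member forecasts and observations; it is illustrative only and is not the verification performed in the paper.

```python
import numpy as np

def brier_score(prob, obs):
    """Mean squared difference between forecast probability and the
    binary observation (0 = no event, 1 = event)."""
    prob, obs = np.asarray(prob, float), np.asarray(obs, float)
    return float(np.mean((prob - obs) ** 2))

# Synthetic 15-member precipitation forecasts (mm) for 300 cases.
rng = np.random.default_rng(3)
members = rng.gamma(shape=0.5, scale=4.0, size=(300, 15))
obs_precip = rng.gamma(shape=0.5, scale=4.0, size=300)

threshold = 5.0  # mm; event = precipitation at or above threshold
prob = np.mean(members >= threshold, axis=1)  # ensemble relative frequency
obs = (obs_precip >= threshold).astype(int)

print(f"Brier score at {threshold} mm: {brier_score(prob, obs):.3f}")
```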
Abstract
In this study we investigate convective environments and their corresponding climatological features over Europe and the United States. For this purpose, National Lightning Detection Network (NLDN) and Arrival Time Difference long-range lightning detection network (ATDnet) data, ERA5 hybrid-sigma levels, and severe weather reports from the European Severe Weather Database (ESWD) and Storm Prediction Center (SPC) Storm Data were combined on a common grid of 0.25° and 1-h steps over the period 1979–2018. The severity of convective hazards increases with increasing instability and wind shear (WMAXSHEAR), but climatological aspects of these features differ over both domains. Environments over the United States are characterized by higher moisture, CAPE, CIN, wind shear, and midtropospheric lapse rates. Conversely, 0–3-km CAPE and low-level lapse rates are higher over Europe. From the climatological perspective severe thunderstorm environments (hours) are around 3–4 times more frequent over the United States with peaks across the Great Plains, Midwest, and Southeast. Over Europe severe environments are the most common over the south with local maxima in northern Italy. Despite having lower CAPE (tail distribution of 3000–4000 J kg−1 compared to 6000–8000 J kg−1 over the United States), thunderstorms over Europe have a higher probability for convective initiation given a favorable environment. Conversely, the lowest probability for initiation is observed over the Great Plains, but, once a thunderstorm develops, the probability that it will become severe is much higher compared to Europe. Prime conditions for severe thunderstorms over the United States are between April and June, typically from 1200 to 2200 central standard time (CST), while across Europe favorable environments are observed from June to August, usually between 1400 and 2100 UTC.
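The composite WMAXSHEAR parameter mentioned above is commonly taken as the product of the thermodynamic speed limit, sqrt(2·CAPE), and the 0–6-km bulk wind shear; the short sketch below computes it under that assumption for made-up environments (the definition and values here are illustrative, not taken from the paper's dataset).

```python
import numpy as np

def wmaxshear(cape, bulk_shear_0_6km):
    """WMAXSHEAR (m^2 s^-2), assumed here to be sqrt(2 * CAPE) times the
    0-6-km bulk wind shear."""
    return np.sqrt(2.0 * np.asarray(cape, float)) * np.asarray(bulk_shear_0_6km, float)

# Hypothetical environments (CAPE in J/kg, bulk shear in m/s).
cape = np.array([500.0, 1500.0, 3000.0])
shear = np.array([10.0, 18.0, 25.0])
for c, s, w in zip(cape, shear, wmaxshear(cape, shear)):
    print(f"CAPE = {c:6.0f} J/kg, shear = {s:4.1f} m/s -> WMAXSHEAR = {w:7.1f} m^2/s^2")
```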
Abstract
Many tornadoes are unreported because of lack of observers or are underrated in intensity, width, or track length because of lack of damage indicators. These reporting biases substantially degrade estimates of tornado frequency and thereby undermine important endeavors such as studies of climate impacts on tornadoes and cost–benefit analyses of tornado damage mitigation. Building on previous studies, we use a Bayesian hierarchical modeling framework to estimate and correct for tornado reporting biases over the central United States during 1975–2018. The reporting biases are treated as a univariate function of population density. We assess how these biases vary with tornado intensity, width, and track length and over the analysis period. We find that the frequencies of tornadoes of all kinds, but especially stronger or wider tornadoes, have been substantially underestimated. Most strikingly, the Bayesian model estimates that there have been approximately 3 times as many tornadoes capable of (E)F2+ damage as have been recorded as (E)F2+ [(E)F indicates a rating on the (enhanced) Fujita scale]. The model estimates that total tornado frequency changed little over the analysis period. Statistically significant trends in frequency are found for tornadoes within certain ranges of intensity, pathlength, and width, but it is unclear what proportion of these trends arise from changes in damage survey practices. Simple analyses of the tornado database corroborate many of the inferences from the Bayesian model.
Significance Statement
Prior studies have shown that the probabilities of a tornado being reported and of its intensity, track length, and width being accurately estimated are strongly correlated with the local population density. We have developed a sophisticated statistical model that accounts for these population-dependent tornado reporting biases to improve estimates of tornado frequency in the central United States. The bias-corrected tornado frequency estimates differ markedly from the official tornado climatology and have important implications for tornado risk assessment, damage mitigation, and studies of climate change impacts on tornado activity.
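The study treats reporting bias as a univariate function of population density. Purely for illustration, and not necessarily the functional form the authors use, the sketch below shows one simple saturating reporting-probability curve of that kind and uses it to inflate raw report counts; the scale parameter, densities, and counts are all hypothetical.

```python
import numpy as np

# Illustrative (hypothetical) reporting-probability curve: the chance that
# a tornado is reported rises with local population density and saturates
# near 1. This is NOT necessarily the form used in the paper; it only
# illustrates correcting raw counts for population-dependent reporting.
def report_probability(pop_density, d0=25.0):
    """pop_density in persons per km^2; d0 is a hypothetical e-folding scale."""
    return 1.0 - np.exp(-np.asarray(pop_density, float) / d0)

def corrected_count(raw_count, pop_density):
    """Scale an observed count upward by the inverse reporting probability."""
    return raw_count / report_probability(pop_density)

densities = np.array([2.0, 10.0, 50.0, 200.0])  # persons per km^2
raw_counts = np.array([3, 8, 20, 30])           # hypothetical report counts
for d, n in zip(densities, raw_counts):
    print(f"density {d:6.1f}/km^2: p_report = {report_probability(d):.2f}, "
          f"corrected count ~ {corrected_count(n, d):.1f}")
```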