Search Results
Showing 21–30 of 40 items for:
- Author or Editor: Harold E. Brooks
- Journal: Weather and Forecasting
Abstract
The history of storm spotting and public awareness of the tornado threat is reviewed. It is shown that a downward trend in fatalities apparently began after the famous “Tri-State” tornado of 1925. Storm spotting began in World War II as an effort to protect the nation’s military installations but became a public service with the resumption of public tornado forecasting, pioneered in 1948 by the Air Force’s Fawbush and Miller and begun in the public sector in 1952. The current spotter program, known generally as SKYWARN, is a civilian-based volunteer organization. Responsibility for spotter training has rested with the national forecasting services (originally the Weather Bureau and now the National Weather Service). That training has evolved with (a) the widespread proliferation of film and, more recently, video footage of severe storms; (b) growth in scientific knowledge about tornadoes and tornadic storms, as well as a better understanding of how tornadoes produce damage; and (c) the inception and growth of scientific and hobbyist storm chasing.
The concept of an integrated warning system is presented in detail, and considered in light of past and present accomplishments and what needs to be done in the future to maintain the downward trend in fatalities. As the integrated warning system has evolved over its history, it has become clear that volunteer spotters and the public forecasting services need to be closely tied. Further, public information dissemination is a major factor in an integrated warning service; warnings and forecasts that do not reach the users and produce appropriate responses are not very valuable, even if they are accurate and timely. The history of the integration has been somewhat checkered, but compelling evidence of the overall efficacy of the watch–warning program can be found in the maintenance of the downward trend in annual fatalities that began in 1925.
Abstract
An estimate is made of the probability of an occurrence of a tornado day near any location in the contiguous 48 states for any time during the year. Gaussian smoothers in space and time have been applied to the observed record of tornado days from 1980 to 1999 to produce daily maps and annual cycles at any point on an 80 km × 80 km grid. Many aspects of this climatological estimate have been identified in previous work, but the method allows one to consider the record in several new ways. The two regions of maximum tornado days in the United States are northeastern Colorado and peninsular Florida, but there is a large region between the Appalachian and Rocky Mountains that has at least 1 day on which a tornado touches down on the grid. The annual cycle of tornado days is of particular interest. The southeastern United States, outside of Florida, faces its maximum threat in April. Farther west and north, the threat is later in the year, with the northern United States and New England facing its maximum threat in July. In addition, the repeatability of the annual cycle is much greater in the plains than farther east. By combining the region of greatest threat with the region of highest repeatability of the season, an objective definition of Tornado Alley as a region that extends from the southern Texas Panhandle through Nebraska and northeastward into eastern North Dakota and Minnesota can be provided.
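The Gaussian space–time smoothing described above can be illustrated with a minimal one-dimensional sketch: a binary tornado-day record at a single grid point is smoothed in time with a periodic Gaussian kernel to estimate an annual cycle of tornado-day probability. The bandwidth, toy data, and 365-day periodic treatment are illustrative assumptions, not the paper's actual parameters or dataset.

```python
import numpy as np

def smooth_annual_cycle(tornado_days, sigma_days=15.0):
    """Estimate the daily tornado-day probability at one grid point by
    applying a circular Gaussian smoother in time to a binary record.

    tornado_days : (n_years, 365) binary array; 1 marks a tornado day.
    sigma_days   : temporal bandwidth (illustrative value).
    """
    # Mean raw frequency for each calendar day across all years.
    raw = tornado_days.mean(axis=0)               # shape (365,)

    # Gaussian weights over day-of-year offsets, wrapped so the
    # smoother treats the annual cycle as periodic.
    offsets = np.arange(365)
    lag = np.minimum(offsets, 365 - offsets)      # circular distance
    weights = np.exp(-0.5 * (lag / sigma_days) ** 2)
    weights /= weights.sum()

    # Circular convolution of the raw frequencies with the kernel.
    return np.real(np.fft.ifft(np.fft.fft(raw) * np.fft.fft(weights)))

# Toy record: 20 years, tornado days clustered near day 150 (late May).
rng = np.random.default_rng(0)
record = np.zeros((20, 365))
for y in range(20):
    days = rng.normal(150, 20, size=3).astype(int) % 365
    record[y, days] = 1
p = smooth_annual_cycle(record)
print(p[150], p[350])   # peak-season vs off-season probability
```

Because the kernel weights sum to one, smoothing redistributes but does not create probability: the smoothed annual cycle integrates to the same total as the raw record.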
Abstract
The probability of nontornadic severe weather event reports near any location in the United States for any day of the year has been estimated. Gaussian smoothers in space and time have been applied to the observed record of severe thunderstorm occurrence from 1980 to 1994 to produce daily maps and annual cycles at any point. Many aspects of this climatology have been identified in previous work, but the method allows for the consideration of the record in several new ways. A review of the raw data, broken down in various ways, reveals numerous nonmeteorological artifacts, predominantly associated with marginal nontornadic severe thunderstorm events, including an enormous growth in the number of severe weather reports since the mid-1950s. Much of this growth may be associated with a drive to improve warning verification scores. The smoothed spatial and temporal distributions of the probability of nontornadic severe thunderstorm events are presented in several ways. The distribution of significant nontornadic severe thunderstorm reports (wind speeds ≥ 65 kt and/or hailstone diameters ≥ 2 in.) is consistent with the hypothesis that supercells are responsible for the majority of such reports.
Abstract
A method for determining baselines of skill for the purpose of the verification of rare-event forecasts is described and examples are presented to illustrate the sensitivity to parameter choices. These “practically perfect” forecasts are designed to resemble the forecast a forecaster would make given perfect knowledge of the events beforehand. The Storm Prediction Center’s convective outlook slight risk areas are evaluated over the period from 1973 to 2011 using practically perfect forecasts to define the maximum values of the critical success index that a forecaster could reasonably achieve given the constraints of the forecast, as well as the minimum values of the critical success index that are considered the baseline for skillful forecasts. Based on these upper and lower bounds, the relative skill of convective outlook areas shows little to no skill until the mid-1990s, after which this value increases steadily. The annual frequency of skillful daily forecasts continues to increase from the beginning of the period of study, and the annual cycle shows maxima of the frequency of skillful daily forecasts occurring in May and June.
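A minimal sketch of the "practically perfect" idea, under assumed parameters: Gaussian-smooth the observed event grid into probabilities, threshold them into a forecast area, and score that area with the critical success index (CSI = hits / (hits + misses + false alarms)). The kernel width, threshold, and toy grid below are illustrative choices, not the values used in the study.

```python
import numpy as np

def critical_success_index(forecast, observed):
    """CSI = hits / (hits + misses + false alarms) for binary grids."""
    hits = np.sum(forecast & observed)
    misses = np.sum(~forecast & observed)
    false_alarms = np.sum(forecast & ~observed)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

def practically_perfect(observed, sigma=1.5, threshold=0.05):
    """Build a 'practically perfect' forecast area by Gaussian-smoothing
    the observed event grid and thresholding the result.  sigma and
    threshold are illustrative; their choice controls the skill baseline."""
    ny, nx = observed.shape
    y, x = np.mgrid[-3:4, -3:4]                   # 7x7 Gaussian kernel
    kernel = np.exp(-0.5 * (x**2 + y**2) / sigma**2)
    kernel /= kernel.sum()
    # Direct 2-D convolution (same size, zero-padded edges).
    padded = np.pad(observed.astype(float), 3)
    prob = np.zeros_like(observed, dtype=float)
    for i in range(ny):
        for j in range(nx):
            prob[i, j] = np.sum(padded[i:i + 7, j:j + 7] * kernel)
    return prob >= threshold

# Toy example: a cluster of event reports on a 20x20 grid.
obs = np.zeros((20, 20), dtype=bool)
obs[8:11, 8:11] = True
pp = practically_perfect(obs)
csi = critical_success_index(pp, obs)
print(csi)   # upper bound on achievable CSI for this event pattern
```

Even with perfect hindsight the CSI falls below 1, because the smoothed area necessarily extends beyond the reports; that value serves as the "practically perfect" upper bound against which real outlooks can be compared.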
Abstract
An experiment using a three-dimensional cloud-scale numerical model in an operational forecasting environment was carried out in the spring of 1991. It involved meteorologists generating forecast environmental conditions associated with anticipated strong convection. Those conditions then were used to initialize the cloud model, which was run subsequently to forecast qualitative descriptions of storm type. Verification was done on both the sounding forecast and numerical model portions of the experiment. Of the 12 experiment days, the numerical model generated six good forecasts, two of which involved significant tornadic storms. More importantly, while demonstrating the potential for cloud-scale modeling in an operational environment, the experiment highlights some of the obstacles in the path of such an implementation.
Abstract
Differences in the environments associated with tornadic and nontornadic mesocyclones are investigated using proximity soundings. Questions about the definition of proximity are raised. As the environments of severe storms are observed with higher spatial and temporal resolution, the operational meaning of proximity becomes less clear. Thus, the exploration of the proximity dataset is subject to certain caveats, which are presented in some detail.
Results from this relatively small proximity dataset support a recently developed conceptual model of the development and maintenance of low-level mesocyclones within supercells. Three regimes of low-level mesocyclonic behavior are predicted by the conceptual model: (i) low-level mesocyclones are slow to develop, if at all, (ii) low-level mesocyclones form quickly but are short lived, and (iii) low-level mesocyclones develop slowly but have the potential to persist for hours. The model suggests that a balance is needed between the midtropospheric storm-relative winds, storm-relative environmental helicity, and low-level absolute humidity to develop long-lived tornadic mesocyclones. In the absence of that balance, such storms should be rare. The failure of earlier forecast efforts to discriminate between tornadic and nontornadic severe storms is discussed in the context of a physical understanding of supercell tornadogenesis. Finally, it is shown that attempts to gather large datasets of proximity soundings associated with rare weather events are likely to take many years.
Abstract
ECMWF provides the ensemble-based extreme forecast index (EFI) and shift of tails (SOT) products to facilitate forecasting severe weather in the medium range. Exploiting the ingredients-based method of forecasting deep moist convection, two parameters, convective available potential energy (CAPE) and a composite CAPE–shear parameter, have been recently added to the EFI/SOT, targeting severe convective weather. Verification results based on the area under the relative operating characteristic curve (ROCA) show high skill of both EFIs at discriminating between severe and nonsevere convection in the medium range over Europe and the United States. In the first 7 days of the forecast ROCA values show significant skill, staying well above the no-skill threshold of 0.5. Two case studies are presented to give some practical considerations and discuss certain limitations of the EFI/SOT forecasts and how they could be overcome. In particular, both convective EFI/SOT products are good at providing guidance for where and when severe convection is possible if there is sufficient lift for convective initiation. Probability of precipitation is suggested as a suitable ensemble product for assessing whether convection is likely to be initiated. The model climate should also be considered when determining whether severe convection is possible; EFI and SOT values are related to the climatological frequency of occurrence of deep, moist convection over a given place and time of year.
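The ROC-area statistic used in this verification can be computed with the rank-based (Mann–Whitney) identity: ROCA is the probability that a randomly chosen event case received a higher forecast value than a randomly chosen non-event case, with ties counted half. The sketch below is a generic implementation of the statistic with made-up numbers, not ECMWF's verification code.

```python
import numpy as np

def roc_area(probs, outcomes):
    """Area under the ROC curve via the Mann-Whitney identity:
    ROCA = P(forecast value for a random event case exceeds that for
    a random non-event case), ties counted half.  0.5 = no skill."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    pos = probs[outcomes]       # forecast values on event cases
    neg = probs[~outcomes]      # forecast values on non-event cases
    if len(pos) == 0 or len(neg) == 0:
        return np.nan
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy check: a forecast that mostly ranks events above non-events
# scores well above the 0.5 no-skill baseline.
probs = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
outcomes = [1, 1, 0, 1, 0, 0]
print(roc_area(probs, outcomes))   # 8/9
```

A constant forecast scores exactly 0.5 under this identity, which is why 0.5 is the natural no-skill threshold quoted in the abstract.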
Abstract
Increasing tornado warning skill in terms of the probability of detection and false-alarm ratio remains an important operational goal. Although many studies have examined tornado warning performance in a broad sense, less focus has been placed on warning performance within subdaily convective events. In this study, we use the NWS tornado verification database to examine tornado warning performance by order-of-tornado within each convective day. We combine this database with tornado reports to relate warning performance to environmental characteristics. On convective days with multiple tornadoes, the first tornado is warned significantly less often than the middle and last tornadoes. More favorable kinematic environmental characteristics, like increasing 0–1-km shear and storm-relative helicity, are associated with better warning performance related to the first tornado of the convective day. Thermodynamic and composite parameters are less correlated with warning performance. During tornadic events, over one-half of false alarms occur after the last tornado of the day decays, and false alarms are 2 times as likely to be issued during this time as before the first tornado forms. These results indicate that forecasters may be better “primed” (or more prepared) to issue warnings on middle and last tornadoes of the day and must overcome a higher threshold to warn on the first tornado of the day. To overcome this challenge, using kinematic environmental characteristics and intermediate products on the watch-to-warning scale may help.
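The probability of detection and false-alarm ratio discussed above follow directly from 2×2 contingency-table counts of warned versus unwarned events. A minimal sketch with made-up counts (not values from the paper's verification database):

```python
def warning_scores(hits, misses, false_alarms):
    """Probability of detection (POD) and false-alarm ratio (FAR) from
    contingency-table counts: hits = warned events, misses = unwarned
    events, false_alarms = warnings with no event."""
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Illustrative counts only.
pod, far = warning_scores(hits=30, misses=10, false_alarms=45)
print(pod, far)   # 0.75 0.6
```

Note that FAR is conditioned on warnings issued, not on events, which is why false alarms issued after the day's last tornado decays degrade FAR without affecting POD.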
Abstract
The representation of turbulent mixing within the lower troposphere is needed to accurately portray the vertical thermodynamic and kinematic profiles of the atmosphere in mesoscale model forecasts. For mesoscale models, turbulence is mostly a subgrid-scale process, but its presence in the planetary boundary layer (PBL) can directly modulate a simulation’s depiction of mass fields relevant to forecast problems. The primary goal of this work is to review the various parameterization schemes that the Weather Research and Forecasting (WRF) Model employs to depict turbulent mixing (PBL schemes), followed by an application to a severe weather environment. Each scheme represents mixing on a local and/or nonlocal basis. Local schemes consider only immediately adjacent vertical levels in the model, whereas nonlocal schemes can consider a deeper layer covering multiple levels in representing the effects of vertical mixing through the PBL. As an application, a pair of cold-season severe weather events that occurred in the southeastern United States is examined. These cases highlight the ambiguities of classically defined PBL schemes in a cold-season severe weather environment, though characteristics of the PBL schemes are still apparent. Low-level lapse rates are typically steeper, and storm-relative helicity slightly smaller, for nonlocal schemes than for local schemes. Nonlocal mixing is necessary to more accurately forecast the lower-tropospheric lapse rates within the warm sector of these events. While all schemes overestimate mixed-layer convective available potential energy (MLCAPE), nonlocal schemes do so more strongly than local schemes.
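The local/nonlocal distinction can be sketched in a toy one-dimensional column: a local scheme mixes heat only down the gradient between adjacent levels, while a countergradient term (a crude stand-in for nonlocal transport) carries heat through a layer even where the local gradient vanishes. All parameter values below are illustrative and not tied to any specific WRF PBL scheme.

```python
import numpy as np

def mix_column(theta, K, dz, dt, gamma=None):
    """One explicit time step of vertical mixing of potential temperature
    in a 1-D column, flux form: F = -K * (dtheta/dz - gamma) at interfaces.

    gamma = None gives a purely local K-diffusion scheme, in which each
    level exchanges heat only with its immediate neighbors; a nonzero
    countergradient profile gamma crudely mimics nonlocal schemes, which
    transport heat through a deep layer even where the local gradient is
    zero.  Boundary levels are held fixed for simplicity.
    """
    if gamma is None:
        gamma = np.zeros(len(theta) - 1)
    flux = -K * (np.diff(theta) / dz - gamma)     # interface fluxes
    tend = np.zeros_like(theta)
    tend[1:-1] = -np.diff(flux) / dz              # flux divergence, interior
    return theta + dt * tend

# Local mixing erodes a warm anomaly by exchange with adjacent levels.
theta = np.array([300.0, 300.0, 302.0, 300.0, 300.0, 300.0])
local = mix_column(theta, K=5.0, dz=100.0, dt=60.0)

# A countergradient term deposits heat aloft even in a neutral layer,
# where a purely local scheme would produce no tendency at all.
neutral = np.full(6, 300.0)
gamma = np.array([0.01, 0.01, 0.01, 0.0, 0.0])
deep = mix_column(neutral, K=5.0, dz=100.0, dt=60.0, gamma=gamma)
```

The second call warms the level where the countergradient flux converges despite a zero local gradient, which is the essential behavior that lets nonlocal schemes mix a deep convective boundary layer.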