Search Results
Showing items 11–20 of 40 for:
- Author or Editor: Harold E. Brooks
- Journal: Weather and Forecasting
Abstract
The authors have carried out verification of 590 12–24-h high-temperature forecasts from numerical guidance products and human forecasters for Oklahoma City, Oklahoma, using both a measures-oriented verification scheme and a distributions-oriented scheme. The latter captures the richness associated with the relationship of forecasts and observations, providing insight into strengths and weaknesses of the forecasting systems, and showing areas in which improvement in accuracy can be obtained.
The analysis of this single forecast element at one lead time shows the amount of information available from a distributions-oriented verification scheme. In order to obtain a complete picture of the overall state of forecasting, it would be necessary to verify all elements at all lead times. The authors urge the development of such a national verification scheme as soon as possible, since without it, it will be impossible to monitor changes in the quality of forecasts and forecasting systems in the future.
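As a concrete illustration of what a distributions-oriented scheme works with, the sketch below (not the authors' code; the data are hypothetical) tabulates the joint relative frequency of forecast and observed temperatures and then recovers a conventional summary measure from that joint distribution.

```python
# Minimal sketch of a distributions-oriented view of verification: tabulate the
# joint relative frequency of (forecast, observed) high temperatures, from
# which measures-oriented scores can afterward be derived.
from collections import Counter

def joint_distribution(forecasts, observations):
    """Relative frequency of each (forecast, observed) temperature pair."""
    pairs = Counter(zip(forecasts, observations))
    n = len(forecasts)
    return {pair: count / n for pair, count in pairs.items()}

def mean_absolute_error(joint):
    """A measures-oriented score recovered from the joint distribution."""
    return sum(p * abs(f - x) for (f, x), p in joint.items())

# Hypothetical example: three forecast/observation pairs (deg F).
joint = joint_distribution([72, 68, 75], [70, 68, 77])
print(mean_absolute_error(joint))  # about 1.33 deg
```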
Abstract
Historical records of damage from major tornadoes in the United States are compiled and adjusted for inflation and wealth. Such adjustments provide a more reliable method to compare losses over time in the context of significant societal change. From 1890 to 1999, the costliest tornado on record, adjusted for inflation, is the 3 May 1999 Oklahoma City tornado, with an adjusted $963 million in damage (constant 1997 dollars). Including an adjustment for growth in wealth, on the other hand, clearly shows the 27 May 1896 Saint Louis–East Saint Louis tornado to be the costliest on record. An extremely conservative adjustment for the 1896 tornado gives a value of $2.2 billion. A more realistic adjustment yields a figure of $2.9 billion. A comparison of the ratio of deaths to wealth-adjusted damage shows a clear break in 1953, at the beginning of the watch/warning/awareness program of the National Weather Service.
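A minimal sketch of the kind of adjustment the abstract describes, assuming a simple multiplicative correction for a price index (inflation) and for growth in wealth; the factors and numbers below are illustrative and are not the paper's data or its exact method.

```python
# Illustrative adjustment of a nominal historical loss: scale by a price index
# ratio to correct for inflation, and additionally by a wealth ratio to account
# for growth in what was available to be damaged.
def adjust_damage(nominal_loss, price_index_event, price_index_ref,
                  wealth_event, wealth_ref):
    inflation_factor = price_index_ref / price_index_event
    wealth_factor = wealth_ref / wealth_event  # growth in wealth since the event
    return nominal_loss * inflation_factor * wealth_factor

# Hypothetical example: a $10 million loss in a year when the price index was
# 10 (vs. 160 in the reference year) and wealth 1/50th of the reference year's.
print(adjust_damage(10e6, price_index_event=10, price_index_ref=160,
                    wealth_event=1.0, wealth_ref=50.0))  # 8.0e9, i.e. $8 billion
```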
Abstract
The authors discuss the relationship between budget-cutting exercises and knowledge of the value of weather services. The complex interaction between quality (accuracy) and value of weather forecasts prevents theoretical approaches from contributing much to the discussion, except perhaps to indicate some of the sources for its complexity. The absence of comprehensive theoretical answers indicates the importance of empirical determinations of forecast value; as it stands, the United States is poorly equipped to make intelligent decisions in the current and future budget situations. To obtain credible empirical answers, forecasters will need to develop closer working relationships with their users than ever before, seeking specific information regarding economic value of forecasts. Some suggestions for developing plausible value estimates are offered, based largely on limited studies already in the literature. Efforts to create closer ties between forecasters and users can yield diverse benefits, including the desired credible estimates of the value of forecasts, as well as estimates of the sensitivity of that value to changes in accuracy of the forecasts. The authors argue for the development of an infrastructure to make these empirical value estimates, as a critical need within weather forecasting agencies, public and private, in view of continuing budget pressures.
Abstract
The 3 May 1999 Oklahoma City tornado was the deadliest in the United States in over 20 years, with 36 direct fatalities. To understand how this event fits into the historical context, the record of tornado deaths in the United States has been examined. Almost 20 000 deaths have been reported in association with more than 3600 tornadoes in the United States since 1680. A cursory examination of the record shows a break in 1875. Prior to then, it is likely that many killer tornadoes failed to be reported. When the death toll is normalized by population, a near-constant rate of death is apparent until about 1925, when a sharp fall begins. The rate was about 1.8 people per million population in 1925 and was less than 0.12 people per million by 2000. The decrease in fatalities has resulted from two primary causes: a decrease in the number of killer tornadoes and a decrease in the number of fatalities in the most deadly tornadoes. Current death rates for mobile home residents, however, are still nearly what the overall national rate was prior to 1925 and are about 20 times the rate of site-built home residents. The increase in the fraction of the U.S. population living in mobile homes has important implications for future reductions in the death toll.
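The population normalization mentioned above is simple arithmetic; the sketch below uses the 1925 rate quoted in the abstract with an illustrative population figure (the death count is back-calculated for illustration only, not taken from the paper).

```python
# Deaths per million residents in a given year.
def deaths_per_million(deaths, population):
    return deaths / (population / 1e6)

# Illustrative arithmetic: a rate of 1.8 per million with a population of
# roughly 115 million corresponds to about 207 tornado deaths in that year.
print(deaths_per_million(207, 115e6))  # 1.8
```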
Abstract
After the tornadoes of 3 May 1999, the Federal Emergency Management Agency formed a Building Performance Assessment Team (BPAT) to examine the main tornado paths during the outbreak and to make recommendations based on the damage they saw. This is the first time a tornado disaster has been subjected to a BPAT investigation. Some aspects of the BPAT final report are reviewed and considered in the context of tornado preparedness in Kansas and Oklahoma. Although the preparedness efforts of many public and private institutions apparently played a large role in reducing casualties from the storm, a number of building deficiencies were found during the BPAT's evaluation. Especially in public facilities, there are several aspects of tornado preparedness that could be improved. Moreover, there is clear evidence that a nonnegligible fraction of the damage associated with these storms could have been mitigated with some relatively simple and inexpensive construction enhancements. Widespread implementation of these enhancements would reduce projectile loading and its associated threats to both life and property.
Abstract
Approximately 400 Automated Surface Observing System (ASOS) observations of convective cloud-base heights at 2300 UTC were collected from April through August of 2001. These observations were compared with lifting condensation level (LCL) heights above ground level determined by 0000 UTC rawinsonde soundings from collocated upper-air sites. The LCL heights were calculated using both surface-based parcels (SBLCL) and mean-layer parcels (MLLCL, using the mean temperature and dewpoint in the lowest 100 hPa). The results show that the mean error for the MLLCL heights was substantially less than for SBLCL heights, with SBLCL heights consistently lower than observed cloud bases. These findings suggest that the mean-layer parcel is likely more representative of the actual parcel associated with convective cloud development, which has implications for calculations of thermodynamic parameters such as convective available potential energy (CAPE) and convective inhibition. In addition, the median value of surface-based CAPE (SBCAPE) was more than 2 times that of the mean-layer CAPE (MLCAPE). Thus, caution is advised when considering surface-based thermodynamic indices, despite the assumed presence of a well-mixed afternoon boundary layer.
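To illustrate why the parcel choice matters for the LCL, the sketch below uses Espy's rough approximation (about 125 m of LCL height per degree Celsius of dewpoint depression) with hypothetical sounding values; it is not the study's calculation method.

```python
# Rough LCL height from the dewpoint depression of the chosen parcel.
def lcl_height_m(temp_c, dewpoint_c):
    """Approximate LCL height (m AGL), Espy's ~125 m per degC of depression."""
    return 125.0 * (temp_c - dewpoint_c)

# Hypothetical afternoon sounding: a moist surface layer gives the surface
# parcel a smaller dewpoint depression, hence a lower LCL, than the
# lowest-100-hPa mean parcel.
surface_t, surface_td = 32.0, 21.0   # surface-based parcel
mean_t, mean_td = 30.0, 18.0         # mean-layer parcel
print(lcl_height_m(surface_t, surface_td))  # 1375 m (SBLCL)
print(lcl_height_m(mean_t, mean_td))        # 1500 m (MLLCL)
```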
Abstract
In the near future, the technological capability will be available to use mesoscale and cloud-scale numerical models for forecasting convective weather in operational meteorology. We address some of the issues concerning effective utilization of this capability. The challenges that must be overcome are formidable. We argue that explicit prediction on the cloud scale, even if these challenges can be met, does not obviate the need for human interpretation of the forecasts. In the case that humans remain directly involved in the forecasting process, another set of issues is concerned with the constraints imposed by human involvement. As an alternative to direct explicit prediction of convective events by computers, we propose that mesoscale models be used to produce initial conditions for cloud-scale models. Cloud-scale models then can be run in a Monte Carlo–like mode, in order to provide an estimate of the probable types of convective weather for a forecast period. In our proposal, human forecasters fill the critical role as an interface between various stages of the forecasting and warning process. In particular, they are essential in providing input to the numerical models from the observational data and in interpreting the model output. This interpretative step is important both in helping the forecaster anticipate and interpret new observations and in providing information to the public.
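A conceptual sketch of the proposed Monte Carlo-like workflow follows; the "cloud-scale model" here is a toy stand-in and the perturbation ranges and thresholds are invented purely to show how ensemble members would be tallied into probabilities of convective type.

```python
# Toy Monte Carlo-like ensemble: perturb initial conditions supplied by a
# mesoscale model, run a (stand-in) cloud-scale model for each member, and
# tally how often each convective mode occurs.
import random
from collections import Counter

def run_cloud_scale_member(base_cape, base_shear, rng):
    """Stand-in for one cloud-scale model run from perturbed initial conditions."""
    cape = base_cape * rng.uniform(0.8, 1.2)
    shear = base_shear * rng.uniform(0.8, 1.2)
    if cape < 500.0:
        return "no deep convection"
    return "supercell" if shear > 20.0 else "multicell"

def ensemble_outlook(base_cape, base_shear, n_members=100, seed=0):
    rng = random.Random(seed)
    counts = Counter(run_cloud_scale_member(base_cape, base_shear, rng)
                     for _ in range(n_members))
    return {mode: n / n_members for mode, n in counts.items()}

print(ensemble_outlook(base_cape=1500.0, base_shear=22.0))
```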
Abstract
The ability to discriminate between tornadic and nontornadic thunderstorms is investigated using a mesoscale model. Nine severe weather events are simulated: four events are tornadic supercell thunderstorm outbreaks that occur in conjunction with strong large-scale forcing for upward motion, three events are bow-echo outbreaks that also occur in conjunction with strong large-scale forcing for upward motion, and two are isolated tornadic supercell thunderstorms that occur under much weaker large-scale forcing. Examination of the mesoscale model simulations suggests that it is possible to discriminate between tornadic and nontornadic thunderstorms by using the locations of model-produced convective activity and values of convective available potential energy to highlight regions of likely thunderstorm development, and then using the values of storm-relative environmental helicity (SREH) and bulk Richardson number shear (BRNSHR) to indicate whether or not tornadic supercell thunderstorms are likely. Values of SREH greater than 100 m² s⁻² indicate a likelihood that any storms that develop will have a midlevel mesocyclone, values of BRNSHR between 40 and 100 m² s⁻² suggest that low-level mesocyclogenesis is likely, and values of BRNSHR less than 40 m² s⁻² suggest that the thunderstorms will be dominated by outflow. By combining the storm characteristics suggested by these parameters, it is possible to use mesoscale model output to infer the dominant mode of severe convection.
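One way the combined criteria might be encoded is sketched below; the thresholds are taken from the abstract and would be applied only after CAPE and model-produced convective activity have flagged a region as favorable. The handling of BRNSHR above 100 m² s⁻², which the abstract does not address, is an assumption.

```python
# Decision logic following the thresholds in the abstract.
# Units: SREH and BRN shear in m^2 s^-2.
def likely_convective_mode(sreh, brn_shear):
    if sreh <= 100.0:
        return "midlevel mesocyclone unlikely"
    if 40.0 <= brn_shear <= 100.0:
        return "tornadic supercell (low-level mesocyclogenesis likely)"
    if brn_shear < 40.0:
        return "outflow-dominated storms"
    # Not covered by the abstract; treated here as mesocyclone possible but
    # low-level mesocyclogenesis less likely.
    return "supercell with midlevel mesocyclone; low-level mesocyclogenesis less likely"

print(likely_convective_mode(sreh=250.0, brn_shear=60.0))
```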
Abstract
An approach to forecasting the potential for flash flood-producing storms is developed, using the notion of basic ingredients. Heavy precipitation is the result of sustained high rainfall rates. In turn, high rainfall rates involve the rapid ascent of air containing substantial water vapor and also depend on the precipitation efficiency. The duration of an event is associated with its speed of movement and the size of the system causing the event along the direction of system movement.
This leads naturally to a consideration of the meteorological processes by which these basic ingredients are brought together. A description of those processes and of the types of heavy precipitation-producing storms suggests some of the variety of ways in which heavy precipitation occurs. Since the right mixture of these ingredients can be found in a wide variety of synoptic and mesoscale situations, it is necessary to know which of the ingredients is critical in any given case. By knowing which of the ingredients is most important in any given case, forecasters can concentrate on recognition of the developing heavy precipitation potential as meteorological processes operate. This also helps with the recognition of heavy rain events as they occur, a challenging problem if the potential for such events has not been anticipated.
Three brief case examples are presented to illustrate the procedure as it might be applied in operations. The cases are geographically diverse and even illustrate how a nonconvective heavy precipitation event fits within this methodology. The concept of ingredients-based forecasting is discussed as it might apply to a broader spectrum of forecast events than just flash flood forecasting.
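The ingredient relation described in the first paragraph of this abstract can be stated compactly; the notation below is a common restatement rather than a quotation of the paper:

\[
P = \bar{R}\,D, \qquad R \propto E\,w\,q,
\]

where \(P\) is the total precipitation at a point, \(\bar{R}\) the average rainfall rate, \(D\) the duration of the event, \(E\) the precipitation efficiency, \(w\) the ascent rate, and \(q\) the water vapor content of the rising air.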
Abstract
A neural network, using input from the Eta Model and upper-air soundings, has been developed for the probability of precipitation (PoP) and quantitative precipitation forecast (QPF) for the Dallas–Fort Worth, Texas, area. Forecasts from two years were verified against a network of 36 rain gauges. The resulting forecasts were remarkably sharp, with over 70% of the PoP forecasts being less than 5% or greater than 95%. Of the 436 days with forecasts of less than 5% PoP, no rain occurred on 435 days. On the 111 days with forecasts of greater than 95% PoP, rain always occurred. The linear correlation between the forecast and observed precipitation amount was 0.95. Equitable threat scores for threshold precipitation amounts from 0.05 in. (∼1 mm) to 1 in. (∼25 mm) are 0.63 or higher, with maximum values over 0.86. Combining the PoP and QPF products indicates that for very high PoPs, the correlation between the QPF and observations is higher than for lower PoPs. In addition, 61 of the 70 observed rains of at least 0.5 in. (12.7 mm) are associated with PoPs greater than 85%. As a result, the system indicates a potential for more accurate precipitation forecasting.
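The equitable threat score referenced above can be computed from a 2 x 2 contingency table at each precipitation threshold; the counts in the sketch below are hypothetical and are not the study's verification data.

```python
# Equitable threat score (Gilbert skill score) from a 2x2 contingency table:
# hits corrected for the number expected by chance.
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Hypothetical counts for one rainfall threshold.
print(round(equitable_threat_score(hits=60, misses=8, false_alarms=6,
                                   correct_negatives=426), 3))  # 0.785
```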