Search Results
Items 51–60 of 62 for:
- Author or Editor: Harold E. Brooks
- Article
Collaborative activities between operational forecasters and meteorological research scientists have the potential to provide significant benefits to both groups and to society as a whole, yet such collaboration is rare. An exception to this state of affairs is occurring at the National Severe Storms Laboratory (NSSL) and the Storm Prediction Center (SPC). Since the SPC moved from Kansas City to the NSSL facility in Norman, Oklahoma, in 1997, collaborative efforts between researchers and forecasters at this facility have begun to flourish. This article presents a historical background for this interaction and discusses some of the factors that have helped this collaboration gain momentum. It focuses on the 2001 Spring Program, a collaborative effort centered on experimental forecasting techniques and numerical model evaluation, as a prototype for organized interactions between researchers and forecasters. In addition, the many tangible and intangible benefits of this unusual working relationship are discussed.
Abstract
The verification phase of the World Weather Research Programme (WWRP) Sydney 2000 Forecast Demonstration Project (FDP) was intended to measure the skill of the participating nowcast algorithms in predicting the location of convection, rainfall rate and occurrence, wind speed and direction, severe thunderstorm wind gusts, and hail location and size. An additional question of interest was whether forecasters could improve the quality of the nowcasts compared to the FDP products alone.
The nowcasts were verified using a variety of statistical techniques. Observational data came from radar reflectivity and rainfall analyses, a network of rain gauges, and human (spotter) observations. The verification results showed that the cell tracking algorithms predicted the location of the strongest cells with a mean error of about 15–30 km for a 1-h forecast, and were usually more accurate than an extrapolation (Lagrangian persistence) forecast. Mean location errors for the area tracking schemes were on the order of 20 km.
Almost all of the algorithms successfully predicted the frequency of rain throughout the forecast period, although most underestimated the frequency of high rain rates. The skill in predicting rain occurrence decreased very quickly into the forecast period. In particular, the algorithms could not predict the precise location of heavy rain beyond the first 10–20 min. Using radar analyses as verification, the algorithms' spatial forecasts were consistently more skillful than simple persistence. However, when verified against rain gauge observations at point locations, the algorithms had difficulty beating persistence, mainly due to differences in spatial and temporal resolution.
Only one algorithm attempted to forecast gust fronts. The results for a limited sample showed a mean absolute error of 7 km h⁻¹ and a mean bias of 3 km h⁻¹ in the speed of the gust fronts during the FDP. The errors in sea-breeze front forecasts were half as large, with essentially no bias. Verification of the hail associated with the 3 November tornadic storm showed that the two algorithms that estimated hail size and occurrence successfully diagnosed the onset and cessation of the hail to within 30 min of the reported sightings. The time evolution of hail size was reasonably well captured by the algorithms, and the predicted mean and maximum hail diameters were consistent with the observations.
The Thunderstorm Interactive Forecast System (TIFS) allowed forecasters to modify the output of the cell tracking nowcasts, primarily using it to remove cells that were insignificant or diagnosed with incorrect motion. This manual filtering resulted in markedly reduced mean cell position errors when compared to the unfiltered forecasts. However, when forecasters attempted to adjust the storm tracks for a small number of well-defined intense cells, the position errors increased slightly, suggesting that in such cases the objective guidance is probably the best estimate of storm motion.
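To make the persistence comparison above concrete, the following is a minimal Python sketch of scoring a 1-h cell-tracking nowcast against a Lagrangian persistence forecast by mean great-circle position error. This is not the FDP verification code; the function names, cell positions, and nowcast values are hypothetical.

```python
# Sketch: mean 1-h cell position error for an algorithm nowcast vs. Lagrangian persistence.
import math

def great_circle_km(p1, p2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def persistence_forecast(pos_t0, pos_tm1, lead_steps=1):
    """Extrapolate the most recent observed motion vector (Lagrangian persistence)."""
    dlat = pos_t0[0] - pos_tm1[0]
    dlon = pos_t0[1] - pos_tm1[1]
    return (pos_t0[0] + lead_steps * dlat, pos_t0[1] + lead_steps * dlon)

# Hypothetical verification records for three tracked cells:
# (position one step ago, position at issue time, observed position at t+1 h, algorithm nowcast for t+1 h).
cells = [
    ((-33.90, 150.60), (-33.85, 150.75), (-33.80, 150.95), (-33.82, 150.90)),
    ((-34.10, 150.40), (-34.05, 150.55), (-33.98, 150.80), (-34.00, 150.70)),
    ((-33.70, 150.80), (-33.66, 150.92), (-33.60, 151.10), (-33.63, 151.05)),
]

algo_err, pers_err = [], []
for pos_tm1, pos_t0, obs_t1, nowcast_t1 in cells:
    algo_err.append(great_circle_km(nowcast_t1, obs_t1))
    pers_err.append(great_circle_km(persistence_forecast(pos_t0, pos_tm1), obs_t1))

print(f"mean 1-h position error, algorithm:   {sum(algo_err) / len(algo_err):.1f} km")
print(f"mean 1-h position error, persistence: {sum(pers_err) / len(pers_err):.1f} km")
```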
Abstract
Southeast U.S. cold season severe weather events can be difficult to predict because of the marginality of the supporting thermodynamic instability in this regime. The sensitivity of this environment to prognoses of instability encourages additional research on ways in which mesoscale models represent turbulent processes within the lower atmosphere that directly influence thermodynamic profiles and forecasts of instability. This work summarizes characteristics of the southeast U.S. cold season severe weather environment and planetary boundary layer (PBL) parameterization schemes used in mesoscale modeling and proceeds with a focused investigation of the performance of nine different representations of the PBL in this environment by comparing simulated thermodynamic and kinematic profiles to observationally influenced ones. It is demonstrated that simultaneous representation of both nonlocal and local mixing in the Asymmetric Convective Model, version 2 (ACM2), scheme has the lowest overall errors for the southeast U.S. cold season tornado regime. For storm-relative helicity, strictly nonlocal schemes provide the largest overall differences from observationally influenced datasets (underforecast). Meanwhile, strictly local schemes yield the most extreme mean differences from these observationally influenced datasets (underforecast) for the low-level lapse rate and depth of the PBL. A hybrid local–nonlocal scheme is found to mitigate these mean difference extremes. These findings are traced to a tendency for local schemes to incompletely mix the PBL while nonlocal schemes overmix the PBL, whereas the hybrid schemes represent more intermediate mixing in a regime where vertical shear enhances mixing and limited instability suppresses mixing.
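As an illustration of one of the kinematic quantities compared above, here is a minimal Python sketch of a discrete storm-relative helicity (SRH) calculation from a wind profile. This is not the evaluation code used in the study; the profile, storm motion, and 0–1-km layer depth below are hypothetical.

```python
# Sketch: discrete storm-relative helicity over a fixed layer from a wind profile.
def storm_relative_helicity(heights_m, u_ms, v_ms, storm_u, storm_v, depth_m=1000.0):
    """Approximate SRH (m^2 s^-2) from the lowest level up to depth_m."""
    # Storm-relative winds, restricted to levels within the layer.
    sr = [(u - storm_u, v - storm_v)
          for z, u, v in zip(heights_m, u_ms, v_ms) if z <= depth_m]
    srh = 0.0
    for (u0, v0), (u1, v1) in zip(sr[:-1], sr[1:]):
        srh += u1 * v0 - u0 * v1   # cross-product form of each layer's contribution
    return srh

# Hypothetical low-level profile (heights in m AGL, winds in m s^-1) and storm motion.
z = [10, 250, 500, 750, 1000]
u = [5.0, 9.0, 12.0, 14.0, 16.0]
v = [3.0, 7.0, 9.0, 10.0, 10.5]
print(f"0-1-km SRH: {storm_relative_helicity(z, u, v, storm_u=10.0, storm_v=2.0):.0f} m^2 s^-2")
```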
Abstract
The National Weather Service plays a critical role in alerting the public when dangerous weather occurs. Tornado warnings are one of the most publicly visible products the NWS issues, given the large societal impacts tornadoes can have. Understanding the performance of these warnings is crucial for providing adequate warning during tornadic events and improving overall warning performance. This study aims to understand warning performance during the lifetimes of individual storms (specifically in terms of probability of detection and lead time). For example, does probability of detection vary depending on whether a tornado was the first or the last produced by its parent storm? We use tornado outbreak data from 2008 to 2014, archived NEXRAD radar data, and the NWS verification database to associate each tornado report with a storm object. This approach allows for an analysis of warning performance based on the chronological order of tornado occurrence within each storm. Results show that the probability of detection and lead time increase with later tornadoes in the storm; the first tornadoes of each storm are less likely to be warned and on average have less lead time. Probability of detection also decreases overnight, especially for first tornadoes and for storms that produce only one tornado. These results are important for understanding how tornado warning performance varies during individual storm life cycles and how upstream forecast products (e.g., Storm Prediction Center tornado watches and mesoscale discussions) may increase warning confidence for the first tornado produced by each storm.
Significance Statement
In this study, we focus on better understanding real-time tornado warning performance on a storm-by-storm basis. This approach allows us to examine how warning performance can change based on the order of each tornado within its parent storm. Using tornado reports, warning products, and radar data during tornado outbreaks from 2008 to 2014, we find that probability of detection and lead time increase with later tornadoes produced by the same storm. In other words, for storms that produce multiple tornadoes, the first tornado is generally the least likely to be warned in advance; when it is warned in advance, it generally has less lead time than subsequent tornadoes. These findings provide important new analyses of tornado warning performance, particularly for the first tornado of each storm, and will help inform strategies for improving warning performance.
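The storm-by-storm bookkeeping described above can be illustrated with a short Python sketch: reports are grouped by parent storm, ordered in time, and probability of detection (POD) and mean lead time are then computed by each tornado's order within its storm. The record format and sample values are hypothetical, not the NWS verification database schema.

```python
# Sketch: POD and mean lead time stratified by a tornado's order within its parent storm.
from collections import defaultdict
from datetime import datetime, timedelta

# Each record: (storm_id, tornado start time, issuance time of a warning valid for the
# report, or None if the tornado was unwarned). Values are illustrative only.
reports = [
    ("storm_A", datetime(2011, 4, 27, 19, 5),  None),                            # first tornado, missed
    ("storm_A", datetime(2011, 4, 27, 20, 10), datetime(2011, 4, 27, 19, 50)),
    ("storm_A", datetime(2011, 4, 27, 21, 0),  datetime(2011, 4, 27, 20, 35)),
    ("storm_B", datetime(2011, 4, 27, 22, 15), datetime(2011, 4, 27, 22, 5)),
]

# Group reports by parent storm, in chronological order.
per_storm = defaultdict(list)
for storm_id, start, warned in sorted(reports, key=lambda r: (r[0], r[1])):
    per_storm[storm_id].append((start, warned))

# Collect lead times (None = missed) keyed by the tornado's order within its storm.
by_order = defaultdict(list)
for storm_id, events in per_storm.items():
    for order, (start, warned) in enumerate(events, start=1):
        lead = (start - warned) / timedelta(minutes=1) if warned and warned <= start else None
        by_order[order].append(lead)

for order, leads in sorted(by_order.items()):
    hits = [l for l in leads if l is not None]
    pod = len(hits) / len(leads)
    mean_lead = sum(hits) / len(hits) if hits else float("nan")
    print(f"tornado #{order} in storm: POD = {pod:.2f}, mean lead time = {mean_lead:.0f} min")
```

With this toy sample, the first tornadoes have the lowest POD and shortest mean lead time, mirroring the qualitative result reported in the abstract.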
Abstract
As lightning-detection records lengthen and the efficiency of severe weather reporting increases, more accurate climatologies of convective hazards can be constructed. In this study we aggregate flashes from the National Lightning Detection Network (NLDN) and the Arrival Time Difference long-range lightning detection network (ATDnet) with severe weather reports from the European Severe Weather Database (ESWD) and Storm Prediction Center (SPC) Storm Data on a common grid of 0.25° and 1-h steps. Each year approximately 75–200 thunderstorm hours occur over the southwestern, central, and eastern United States, with a peak over Florida (200–250 h). The activity over the majority of Europe ranges from 15 to 100 h, with peaks over Italy and the mountains (Pyrenees, Alps, Carpathians, Dinaric Alps; 100–150 h). The highest convective activity over continental Europe occurs during summer, and over the Mediterranean during autumn. The United States peak for tornado and large hail reports is in spring, preceding the maximum of lightning and severe wind reports by 1–2 months. Convective hazards typically occur in the late afternoon, with the exception of the Midwest and Great Plains, where mesoscale convective systems shift the peak lightning threat to the night. The severe wind threat is delayed by 1–2 h compared to hail and tornadoes. The fraction of nocturnal lightning over land ranges from 15% to 30%, with the lowest values observed over Florida and the mountains (~10%). Wintertime lightning is associated with the highest fraction of severe weather. Compared to Europe, extreme events are considerably more frequent over the United States, with maximum activity over the Great Plains. However, the threat over Europe should not be underestimated, as severe weather outbreaks with damaging winds, very large hail, and significant tornadoes occasionally occur over densely populated areas.
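A minimal sketch of the gridding step described above, assuming a simple flash-count threshold to define a "thunderstorm hour" on a 0.25°, 1-h grid. The flash records and the two-flash threshold are hypothetical and do not reproduce the NLDN/ATDnet processing used in the study.

```python
# Sketch: counting "thunderstorm hours" per 0.25-degree cell from point lightning flashes.
from collections import defaultdict
from datetime import datetime

CELL_DEG = 0.25          # grid spacing in degrees
FLASH_THRESHOLD = 2      # hypothetical: flashes per cell-hour needed to count a thunderstorm hour

def cell_hour_key(lat, lon, time):
    """Map a flash to its (lat bin, lon bin, hour) key on the 0.25-degree, 1-h grid."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG),
            time.replace(minute=0, second=0, microsecond=0))

# Hypothetical flash records: (latitude, longitude, time).
flashes = [
    (35.10, -97.40, datetime(2018, 5, 20, 21, 5)),
    (35.12, -97.38, datetime(2018, 5, 20, 21, 40)),
    (35.11, -97.41, datetime(2018, 5, 20, 22, 15)),
    (48.20,  16.40, datetime(2018, 7, 3, 15, 30)),
    (48.21,  16.42, datetime(2018, 7, 3, 15, 45)),
]

# Count flashes per grid cell and hour.
flash_counts = defaultdict(int)
for lat, lon, t in flashes:
    flash_counts[cell_hour_key(lat, lon, t)] += 1

# Thunderstorm hours per grid cell: number of hours meeting the flash threshold.
thunderstorm_hours = defaultdict(int)
for (lat_bin, lon_bin, hour), n in flash_counts.items():
    if n >= FLASH_THRESHOLD:
        thunderstorm_hours[(lat_bin, lon_bin)] += 1

for (lat_bin, lon_bin), hours in sorted(thunderstorm_hours.items()):
    print(f"cell ({lat_bin * CELL_DEG:.2f}, {lon_bin * CELL_DEG:.2f}): {hours} thunderstorm hour(s)")
```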
In May 2003 there was a very destructive extended outbreak of tornadoes across the central and eastern United States. More than a dozen tornadoes struck each day from 3 May to 11 May 2003. This outbreak caused 41 fatalities, 642 injuries, and approximately $829 million in property damage. The outbreak set a record for the most tornadoes ever reported in a week (334 from 4 to 10 May), and strong tornadoes (F2 or greater) occurred on nine consecutive days. Fortunately, despite this being one of the largest extended outbreaks of tornadoes on record, it did not cause as many fatalities as the few comparable past outbreaks, due in large measure to the warning efforts of National Weather Service, television, and private-company forecasters and to the smaller number of violent (F4–F5) tornadoes. This event was also relatively predictable; the onset of the outbreak was forecast skillfully many days in advance.
An unusually persistent upper-level trough over the Intermountain West and sustained low-level southerly winds through the southern Great Plains produced the extended period of tornado-favorable conditions. Three other extended outbreaks in the past 88 years were statistically comparable to this one, and two short-duration events (Palm Sunday 1965 and the 1974 Superoutbreak) were comparable in the overall number of strong tornadoes. An analysis of tornado statistics and environmental conditions indicates that extended outbreaks of this character occur roughly every 10 to 100 years.
Abstract
The European Severe Storms Laboratory (ESSL) was founded in 2006 to advance the science and forecasting of severe convective storms in Europe. ESSL was a grassroots effort of individual scientists from various European countries. The purpose of this article is to describe the 10-yr history of ESSL and present a sampling of its successful activities. Specifically, ESSL developed and manages the only multinational database of severe weather reports in Europe: the European Severe Weather Database (ESWD). Despite efforts to eliminate biases, the ESWD still suffers from spatial inhomogeneities in data collection, which motivates ESSL’s research into modeling climatologies by combining ESWD data with reanalysis data. ESSL also established its ESSL Testbed to evaluate developmental forecast products and to provide training to forecasters. The testbed is organized in close collaboration with several of Europe’s national weather services. In addition, ESSL serves a central role among the European scientific and forecast communities for convective storms, specifically through its training activities and the series of European Conferences on Severe Storms. Finally, ESSL conducts wind and tornado damage assessments, highlighted by its recent survey of a violent tornado in northern Italy.
Accurate quantitative precipitation estimates (QPE) and very short-term quantitative precipitation forecasts (VSTQPF) are critical to accurate monitoring and prediction of water-related hazards and water resources. While tremendous progress has been made in the last quarter century in many areas of QPE and VSTQPF, significant gaps remain in both the knowledge and the capabilities needed to produce accurate high-resolution precipitation estimates at the national scale for a wide spectrum of users. Toward this goal, a national next-generation QPE and VSTQPF (Q2) workshop was held in Norman, Oklahoma, on 28–30 June 2005. Scientists, operational forecasters, water managers, and stakeholders from the public and private sectors, including academia, presented and discussed a broad range of precipitation and forecasting topics and issues, and developed a list of science focus areas. To meet the nation's needs for precipitation information effectively, the authors herein propose a community-wide integrated approach for precipitation information that fully capitalizes on recent advances in science and technology and leverages the wide range of expertise and experience that exists in the research and operational communities. The concepts and recommendations from the workshop form the Q2 science plan and a suggested path to operations. Implementation of these concepts is expected to improve river forecasts and flood and flash flood watches and warnings, and to enhance various hydrologic and hydrometeorological services for a wide range of users and customers. In support of this initiative, the National Mosaic and Q2 (NMQ) system is being developed at the National Severe Storms Laboratory to serve as a community test bed for QPE and VSTQPF research and to facilitate the transition of research applications to operations. The NMQ system provides a real-time, around-the-clock data infusion and applications development and evaluation environment, and thus offers a community-wide platform for developing and testing advances in the focus areas.