Search Results
You are looking at 61–70 of 78 items for Author or Editor: Harold E. Brooks
Abstract
Accurately forecasting snowfall is a challenge. In particular, one poorly understood component of snowfall forecasting is determining the snow ratio. The snow ratio is the ratio of snowfall to liquid equivalent and is inversely proportional to the snow density. In a previous paper, an artificial neural network was developed to predict snow ratios probabilistically in three classes: heavy (1:1 < ratio < 9:1), average (9:1 ≤ ratio ≤ 15:1), and light (ratio > 15:1). A Web-based application for the probabilistic prediction of snow ratio in these three classes, based on operational forecast model soundings and the neural network, is now available. The goal of this paper is to explore the statistical characteristics of the snow ratio to determine how temperature, liquid equivalent, and wind speed can be used to provide additional guidance (quantitative, wherever possible) for forecasting snowfall, especially for extreme values of snow ratio. Snow ratio tends to increase as the low-level (surface to roughly 850 mb) temperature decreases. For example, mean low-level temperatures greater than −2.7°C rarely (less than 5% of the time) produce snow ratios greater than 25:1, whereas mean low-level temperatures less than −10.1°C rarely produce snow ratios less than 10:1. Snow ratio tends to increase strongly as the liquid equivalent decreases, leading to a nomogram for probabilistic forecasts of snowfall, given a forecast value of liquid equivalent. For example, liquid equivalent amounts of 2.8–4.1 mm (0.11–0.16 in.) rarely produce snow ratios less than 14:1, and liquid equivalent amounts greater than 11.2 mm (0.44 in.) rarely produce snow ratios greater than 26:1. The surface wind speed plays a minor role, with snow ratio decreasing as wind speed increases. Although previous research has shown that simple relationships for determining the snow ratio are difficult to obtain, this note helps to clarify some situations in which such relationships are possible.
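As a rough illustration of the class boundaries and the 5% "rarely" thresholds quoted above, here is a minimal Python sketch. The class limits and threshold values come directly from the abstract; the functions themselves are illustrative only and are not the authors' neural network or nomogram.

```python
# Minimal sketch of the three-class snow-ratio scheme described above.
# Class boundaries (9:1, 15:1) and the "rarely" (<5% of cases) thresholds
# are taken from the abstract; everything else is illustrative.

def classify_snow_ratio(ratio):
    """Assign a snow ratio (snowfall / liquid equivalent) to a class."""
    if ratio < 9.0:
        return "heavy"    # dense snow, 1:1 < ratio < 9:1
    elif ratio <= 15.0:
        return "average"  # 9:1 <= ratio <= 15:1
    else:
        return "light"    # fluffy snow, ratio > 15:1

def snow_ratio_hints(mean_low_level_temp_c, liquid_equiv_mm):
    """Qualitative guidance from the abstract's quoted thresholds."""
    hints = []
    if mean_low_level_temp_c > -2.7:
        hints.append("ratios > 25:1 are rare (<5% of cases)")
    if mean_low_level_temp_c < -10.1:
        hints.append("ratios < 10:1 are rare")
    if 2.8 <= liquid_equiv_mm <= 4.1:
        hints.append("ratios < 14:1 are rare")
    if liquid_equiv_mm > 11.2:
        hints.append("ratios > 26:1 are rare")
    return hints

print(classify_snow_ratio(12.0))     # -> "average"
print(snow_ratio_hints(-12.0, 3.5))  # cold, light liquid equivalent
```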
Abstract
The threat of damaging hail from severe thunderstorms affects many communities and industries on a yearly basis, with annual economic losses in excess of $1 billion (U.S. dollars). Past hail climatologies have typically relied on the National Oceanic and Atmospheric Administration/National Climatic Data Center’s (NOAA/NCDC) Storm Data publication, which has numerous reporting biases and nonmeteorological artifacts. This research seeks to quantify the spatial and temporal characteristics of contiguous United States (CONUS) hail fall, derived from multiradar multisensor (MRMS) algorithms for several years during the Next-Generation Weather Radar (NEXRAD) era, leveraging the Multiyear Reanalysis of Remotely Sensed Storms (MYRORSS) dataset at NOAA’s National Severe Storms Laboratory (NSSL). The primary MRMS product used in this study is the maximum expected size of hail (MESH). The preliminary climatology includes 42 months of quality-controlled and reprocessed MESH grids, spanning the warm seasons of four years (2007–10) and covering 98% of all Storm Data hail reports during that time. The dataset has a spatial resolution of 0.01° latitude × 0.01° longitude × 31 vertical levels and a temporal resolution of 5 min. Radar-based and reports-based methods of hail climatology are compared. MRMS MESH demonstrates superior coverage and resolution over Storm Data hail reports, and is largely unbiased. The results reveal a broad maximum of annual hail fall in the Great Plains and a diminished secondary maximum in the southeastern United States. Potential explanations for the differences between the two methods of hail climatology are also discussed.
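A hedged sketch of the aggregation step that such a radar-based climatology implies: counting, per grid cell, the days on which the daily maximum MESH meets a severe criterion. The 25.4-mm (1 in.) threshold, array shapes, and synthetic data are illustrative assumptions; the MYRORSS processing chain described in the paper is far more elaborate.

```python
import numpy as np

# Illustrative aggregation of gridded daily-max MESH into hail-day counts.
# The 25.4-mm severe criterion is an assumption for this sketch, not
# necessarily the threshold used in the study.
SEVERE_MESH_MM = 25.4

def hail_day_counts(daily_max_mesh):
    """daily_max_mesh: (n_days, n_lat, n_lon) array of daily max MESH (mm).
    Returns the per-cell number of days with severe MESH."""
    return (daily_max_mesh >= SEVERE_MESH_MM).sum(axis=0)

# Fake demo grids (gamma-distributed MESH values), purely for illustration:
rng = np.random.default_rng(0)
demo = rng.gamma(shape=2.0, scale=8.0, size=(365, 50, 50))
print(hail_day_counts(demo).max())
```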
Abstract
In this study we investigate convective environments and their corresponding climatological features over Europe and the United States. For this purpose, National Lightning Detection Network (NLDN) and Arrival Time Difference long-range lightning detection network (ATDnet) data, ERA5 hybrid-sigma levels, and severe weather reports from the European Severe Weather Database (ESWD) and Storm Prediction Center (SPC) Storm Data were combined on a common grid of 0.25° and 1-h steps over the period 1979–2018. The severity of convective hazards increases with increasing instability and wind shear (WMAXSHEAR), but climatological aspects of these features differ between the two domains. Environments over the United States are characterized by higher moisture, CAPE, CIN, wind shear, and midtropospheric lapse rates. Conversely, 0–3-km CAPE and low-level lapse rates are higher over Europe. From the climatological perspective, severe thunderstorm environments (hours) are around 3–4 times more frequent over the United States, with peaks across the Great Plains, Midwest, and Southeast. Over Europe, severe environments are most common over the south, with local maxima in northern Italy. Despite having lower CAPE (tail distribution of 3000–4000 J kg−1, compared to 6000–8000 J kg−1 over the United States), thunderstorms over Europe have a higher probability of convective initiation given a favorable environment. Conversely, the lowest probability of initiation is observed over the Great Plains, but, once a thunderstorm develops, the probability that it will become severe is much higher than over Europe. Prime conditions for severe thunderstorms over the United States occur between April and June, typically from 1200 to 2200 central standard time (CST), while across Europe favorable environments are observed from June to August, usually between 1400 and 2100 UTC.
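The abstract uses WMAXSHEAR without defining it; in the severe-storms literature it is commonly computed as the theoretical maximum updraft speed, sqrt(2 × CAPE), multiplied by the 0–6-km bulk wind shear. A minimal sketch under that convention, with the exact CAPE flavor (most-unstable vs. mixed-layer) treated as an assumption:

```python
import math

# Hedged sketch of the WMAXSHEAR composite parameter: the parcel-theory
# maximum updraft speed sqrt(2 * CAPE) times the 0-6-km bulk shear.
# The CAPE flavor used in the study is not specified here; assume one.

def wmaxshear(cape_j_per_kg, bulk_shear_0_6km_ms):
    """WMAXSHEAR (m^2 s^-2): updraft potential times deep-layer shear."""
    wmax = math.sqrt(2.0 * cape_j_per_kg)  # max updraft speed (m s^-1)
    return wmax * bulk_shear_0_6km_ms

# A high-end Great Plains environment vs. a strong European one:
print(wmaxshear(6000.0, 25.0))  # ~2739 m^2 s^-2
print(wmaxshear(3000.0, 20.0))  # ~1549 m^2 s^-2
```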
Abstract
The National Weather Service plays a critical role in alerting the public when dangerous weather occurs. Tornado warnings are one of the most publicly visible products the NWS issues, given the large societal impacts tornadoes can have. Understanding the performance of these warnings is crucial for providing adequate warning during tornadic events and improving overall warning performance. This study aims to understand warning performance during the lifetimes of individual storms (specifically in terms of probability of detection and lead time). For example, does probability of detection vary based on whether the tornado was the first produced by the storm or the last? We use tornado outbreak data from 2008 to 2014, archived NEXRAD radar data, and the NWS verification database to associate each tornado report with a storm object. This approach allows for an analysis of warning performance based on the chronological order of tornado occurrence within each storm. Results show that probability of detection and lead time increase with later tornadoes in the storm; the first tornadoes of each storm are less likely to be warned and on average have less lead time. Probability of detection also decreases overnight, especially for first tornadoes and storms that produce only one tornado. These results are important for understanding how tornado warning performance varies during individual storm life cycles and how upstream forecast products (e.g., Storm Prediction Center tornado watches, mesoscale discussions, etc.) may increase warning confidence for the first tornado produced by each storm.
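A minimal sketch of the two verification measures the study centers on. Here a tornado counts as "warned" if a warning was issued at or before its start time, and lead time is tornado start minus warning issuance; the actual matching of reports to warnings and storm objects in the paper is far more involved.

```python
from datetime import datetime, timedelta

# Illustrative probability of detection (POD) and mean lead time.
# events: list of (tornado_start, warning_issued_or_None) pairs.

def pod_and_mean_lead(events):
    warned = [(t, w) for t, w in events if w is not None and w <= t]
    pod = len(warned) / len(events) if events else float("nan")
    leads = [(t - w).total_seconds() / 60.0 for t, w in warned]
    mean_lead = sum(leads) / len(leads) if leads else float("nan")
    return pod, mean_lead

t0 = datetime(2014, 4, 27, 21, 0)
events = [
    (t0, None),                                                # first tornado, missed
    (t0 + timedelta(minutes=40), t0 + timedelta(minutes=25)),  # 15-min lead
    (t0 + timedelta(minutes=90), t0 + timedelta(minutes=70)),  # 20-min lead
]
print(pod_and_mean_lead(events))  # -> (0.666..., 17.5)
```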
Collaborative activities between operational forecasters and meteorological research scientists have the potential to provide significant benefits to both groups and to society as a whole, yet such collaboration is rare. An exception to this state of affairs is occurring at the National Severe Storms Laboratory (NSSL) and Storm Prediction Center (SPC). Since the SPC moved from Kansas City to the NSSL facility in Norman, Oklahoma, in 1997, collaborative efforts between researchers and forecasters at this facility have begun to flourish. This article presents a historical background for this interaction and discusses some of the factors that have helped this collaboration gain momentum. It focuses on the 2001 Spring Program, a collaborative effort focusing on experimental forecasting techniques and numerical model evaluation, as a prototype for organized interactions between researchers and forecasters. In addition, the many tangible and intangible benefits of this unusual working relationship are discussed.
Abstract
The verification phase of the World Weather Research Programme (WWRP) Sydney 2000 Forecast Demonstration Project (FDP) was intended to measure the skill of the participating nowcast algorithms in predicting the location of convection, rainfall rate and occurrence, wind speed and direction, severe thunderstorm wind gusts, and hail location and size. An additional question of interest was whether forecasters could improve the quality of the nowcasts compared to the FDP products alone.
The nowcasts were verified using a variety of statistical techniques. Observational data came from radar reflectivity and rainfall analyses, a network of rain gauges, and human (spotter) observations. The verification results showed that the cell tracking algorithms predicted the location of the strongest cells with a mean error of about 15–30 km for a 1-h forecast, and were usually more accurate than an extrapolation (Lagrangian persistence) forecast. Mean location errors for the area tracking schemes were on the order of 20 km.
Almost all of the algorithms successfully predicted the frequency of rain throughout the forecast period, although most underestimated the frequency of high rain rates. The skill in predicting rain occurrence decreased very quickly into the forecast period. In particular, the algorithms could not predict the precise location of heavy rain beyond the first 10–20 min. Using radar analyses as verification, the algorithms' spatial forecasts were consistently more skillful than simple persistence. However, when verified against rain gauge observations at point locations, the algorithms had difficulty beating persistence, mainly due to differences in spatial and temporal resolution.
Only one algorithm attempted to forecast gust fronts. The results for a limited sample showed a mean absolute error of 7 km h−1 and mean bias of 3 km h−1 in the speed of the gust fronts during the FDP. The errors in sea-breeze front forecasts were half as large, with essentially no bias. Verification of the hail associated with the 3 November tornadic storm showed that the two algorithms that estimated hail size and occurrence successfully diagnosed the onset and cessation of the hail to within 30 min of the reported sightings. The time evolution of hail size was reasonably well captured by the algorithms, and the predicted mean and maximum hail diameters were consistent with the observations.
The Thunderstorm Interactive Forecast System (TIFS) allowed forecasters to modify the output of the cell tracking nowcasts, primarily using it to remove cells that were insignificant or diagnosed with incorrect motion. This manual filtering resulted in markedly reduced mean cell position errors when compared to the unfiltered forecasts. However, when forecasters attempted to adjust the storm tracks for a small number of well-defined intense cells, the position errors increased slightly, suggesting that in such cases the objective guidance is probably the best estimate of storm motion.
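A small sketch of the two error measures quoted above for the gust-front speed forecasts: mean absolute error and mean (signed) bias. The numbers below are made-up illustrations, not FDP data.

```python
# Mean absolute error (MAE) and mean signed bias of speed forecasts.

def mae_and_bias(forecast_speeds, observed_speeds):
    errors = [f - o for f, o in zip(forecast_speeds, observed_speeds)]
    mae = sum(abs(e) for e in errors) / len(errors)
    bias = sum(errors) / len(errors)
    return mae, bias

fcst = [35.0, 42.0, 28.0, 50.0]  # forecast gust-front speeds (km/h)
obs  = [30.0, 40.0, 35.0, 44.0]  # observed speeds (km/h)
print(mae_and_bias(fcst, obs))   # -> (5.0, 1.5)
```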
Despite the meteorological community's long-term interest in weather-society interactions, efforts to understand socioeconomic aspects of weather prediction and to incorporate this knowledge into the weather prediction system have yet to reach critical mass. This article aims to reinvigorate interest in societal and economic research and applications (SERA) activities within the meteorological and social science communities by exploring key SERA issues and proposing SERA priorities for the next decade.
The priorities were developed by the authors, building on previous work, with input from a diverse group of social scientists and meteorologists who participated in a SERA workshop in August 2006. The workshop was organized to provide input to the North American regional component of THORPEX: A Global Atmospheric Research Programme, but the priorities identified are broadly applicable to all weather forecast research and applications.
To motivate and frame SERA activities, we first discuss the concept of high-impact weather forecasts and the chain from forecast creation to value realization. Next, we present five interconnected SERA priority themes—use of forecast information in decision making, communication of forecast uncertainty, user-relevant verification, economic value of forecasts, and decision support—and propose research integrated across the themes.
SERA activities can significantly improve understanding of weather-society interactions to the benefit of the meteorological community and society. However, reaching this potential will require dedicated effort to bring together and maintain a sustainable interdisciplinary community.
Abstract
As lightning-detection records lengthen and the efficiency of severe weather reporting increases, more accurate climatologies of convective hazards can be constructed. In this study we aggregate flashes from the National Lightning Detection Network (NLDN) and Arrival Time Difference long-range lightning detection network (ATDnet) with severe weather reports from the European Severe Weather Database (ESWD) and Storm Prediction Center (SPC) Storm Data on a common grid of 0.25° and 1-h steps. Each year approximately 75–200 thunderstorm hours occur over the southwestern, central, and eastern United States, with a peak over Florida (200–250 h). The activity over the majority of Europe ranges from 15 to 100 h, with peaks over Italy and the mountains (Pyrenees, Alps, Carpathians, Dinaric Alps; 100–150 h). The highest convective activity over continental Europe occurs during summer, and over the Mediterranean during autumn. The United States peak for tornado and large hail reports is in spring, preceding the maximum of lightning and severe wind reports by 1–2 months. Convective hazards typically occur in the late afternoon, with the exception of the Midwest and Great Plains, where mesoscale convective systems shift the peak lightning threat to the night. The severe wind threat is delayed by 1–2 h compared to hail and tornadoes. The fraction of nocturnal lightning over land ranges from 15% to 30%, with the lowest values observed over Florida and the mountains (~10%). Wintertime lightning is associated with the highest fraction of severe weather. Compared to Europe, extreme events are considerably more frequent over the United States, with maximum activity over the Great Plains. However, the threat over Europe should not be underestimated, as severe weather outbreaks with damaging winds, very large hail, and significant tornadoes occasionally occur over densely populated areas.
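An illustrative sketch of the gridding step described above: binning lightning flashes onto a 0.25°, 1-h grid and counting "thunderstorm hours" (grid hours with at least some minimum flash count). The grid window and the two-flash minimum are assumptions for this sketch; the paper's exact criteria may differ.

```python
import numpy as np

# Bin flashes onto a 0.25-degree, 1-hour grid and count thunderstorm
# hours per cell. Grid origin/extent and min_flashes are illustrative.
LON0, LAT0 = -130.0, 20.0   # assumed grid origin (CONUS-like window)
NLON, NLAT, DX = 240, 120, 0.25

def thunderstorm_hours(lons, lats, hours, min_flashes=2, n_hours=24):
    """Count, per grid cell, the hours with >= min_flashes flashes."""
    counts = np.zeros((n_hours, NLAT, NLON), dtype=int)
    i = ((np.asarray(lats) - LAT0) / DX).astype(int)
    j = ((np.asarray(lons) - LON0) / DX).astype(int)
    np.add.at(counts, (np.asarray(hours), i, j), 1)  # accumulate flashes
    return (counts >= min_flashes).sum(axis=0)       # hours per cell

# Three flashes in one cell within the same hour -> one thunderstorm hour:
print(thunderstorm_hours([-97.40, -97.45, -97.41],
                         [35.20, 35.21, 35.22],
                         [18, 18, 18]).max())  # -> 1
```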