Search Results
1–9 of 9 items for:
- Author or Editor: M. E. Brooks
- Weather and Forecasting
Abstract
The Storm Prediction Center issues four categorical convective outlooks with lead times ranging from as long as 48 h (the day 3 outlook, issued at 1200 UTC) to as short as 6 h (the day 1 outlook, issued at 0600 UTC). Additionally, four outlooks issued during the 24-h target period (which begins at 1200 UTC on day 1) serve as updates to the last outlook issued prior to the target period. These outlooks, issued daily, are evaluated over a relatively long period of record, 1999–2011, using standard verification measures to assess accuracy; practically perfect forecasts are used to assess skill. Results show a continual increase in the skill of all outlooks during the study period, as well as an increase in the frequency with which these outlooks are skillful on an annual basis.
Abstract
The Storm Prediction Center has issued daily convective outlooks since the mid-1950s. This paper represents an initial effort to examine the quality of these forecasts. Convective outlooks are plotted on a latitude–longitude grid with 80-km grid spacing and evaluated using storm reports to calculate verification measures including the probability of detection, frequency of hits, and critical success index. Results show distinct improvements in forecast performance over the duration of the study period, some of which can be attributed to apparent changes in forecasting philosophies.
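To make the verification measures named above concrete, the sketch below computes the probability of detection, frequency of hits, and critical success index from binary forecast and report masks on a grid. The grid dimensions and random masks are placeholders for illustration only, not data or code from the study.

```python
import numpy as np

def contingency_scores(forecast, observed):
    """Compute POD, FOH, and CSI from boolean forecast/observed grids.

    forecast, observed: boolean arrays on the same (e.g., 80-km) grid,
    True where severe weather was forecast / reported.
    """
    hits = np.sum(forecast & observed)            # forecast yes, observed yes
    false_alarms = np.sum(forecast & ~observed)   # forecast yes, observed no
    misses = np.sum(~forecast & observed)         # forecast no, observed yes

    pod = hits / (hits + misses) if (hits + misses) else np.nan
    foh = hits / (hits + false_alarms) if (hits + false_alarms) else np.nan
    csi = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else np.nan
    return pod, foh, csi

# Example with hypothetical forecast and report masks
rng = np.random.default_rng(0)
fcst = rng.random((65, 93)) > 0.90   # placeholder outlook-area mask
obs = rng.random((65, 93)) > 0.95    # placeholder report mask
print(contingency_scores(fcst, obs))
```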
Abstract
Among the Storm Prediction Center’s (SPC) probabilistic convective outlook products are forecasts specifically targeted at significant severe weather: tornadoes that produce EF2 or greater damage, wind gusts of at least 75 mi h−1, and hail with diameters of 2 in. or greater. The accuracy and skill of these significant severe outlooks are evaluated for 2005–15, for outlooks issued from day 3 through the final update to the day 1 forecast. To achieve this, criteria for identifying significant severe weather events were developed, with a focus on determining days for which outlooks were not issued but should have been, based on the goals of the product. Results show that significant tornadoes and hail are generally well identified by outlooks, but significant wind events are underforecast. Verification measures differ when calculated using 1) only those days for which outlooks were issued and 2) days with either outlooks or missed events; specifically, the frequency of daily skillful forecasts improves when missed events are disregarded. Because the greatest number of missed events is associated with significant wind, forecasts for this hazard are identified as an area of future focus for the SPC.
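A minimal sketch of how the significant severe criteria above could be applied to individual reports follows. The report fields and dictionary structure are hypothetical and are not the SPC or Storm Data formats.

```python
def is_significant_severe(report):
    """Return the hazard label if a report meets the 'significant' criteria
    described above (EF2+ tornado, wind gust >= 75 mi/h, hail >= 2 in.),
    else None. The dictionary keys used here are hypothetical.
    """
    if report.get("hazard") == "tornado" and report.get("ef_rating", -1) >= 2:
        return "sig_tornado"
    if report.get("hazard") == "wind" and report.get("gust_mph", 0.0) >= 75.0:
        return "sig_wind"
    if report.get("hazard") == "hail" and report.get("diameter_in", 0.0) >= 2.0:
        return "sig_hail"
    return None

# Example usage with hypothetical reports
reports = [
    {"hazard": "tornado", "ef_rating": 3},
    {"hazard": "wind", "gust_mph": 70.0},
    {"hazard": "hail", "diameter_in": 2.25},
]
print([is_significant_severe(r) for r in reports])  # ['sig_tornado', None, 'sig_hail']
```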
Abstract
A method for determining baselines of skill for the verification of rare-event forecasts is described, and examples are presented to illustrate its sensitivity to parameter choices. These “practically perfect” forecasts are designed to resemble the forecast a forecaster would make given perfect knowledge of the events beforehand. The Storm Prediction Center’s convective outlook slight risk areas are evaluated over the period from 1973 to 2011, using practically perfect forecasts to define both the maximum values of the critical success index that a forecaster could reasonably achieve given the constraints of the forecast and the minimum values of the critical success index considered the baseline for skillful forecasts. Based on these upper and lower bounds, convective outlook areas show little to no skill until the mid-1990s, after which skill increases steadily. The annual frequency of skillful daily forecasts increases throughout the period of study, and the annual cycle shows maxima in the frequency of skillful daily forecasts in May and June.
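The sketch below illustrates one common way a practically perfect field can be constructed: smooth the gridded reports with a 2D Gaussian kernel and compare the resulting area against the reports using the critical success index. The kernel width, probability threshold, and grid used here are placeholders and are not necessarily the values used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def practically_perfect(report_grid, sigma_grid_lengths=1.5):
    """Smooth a binary report grid with a 2D Gaussian kernel to obtain a
    'practically perfect' probability field. Sigma is in grid lengths
    (e.g., multiples of 80 km); the value here is a placeholder.
    """
    return gaussian_filter(report_grid.astype(float), sigma=sigma_grid_lengths)

def csi(forecast_area, event_area):
    """Critical success index between two boolean grids."""
    hits = np.sum(forecast_area & event_area)
    misses = np.sum(~forecast_area & event_area)
    false_alarms = np.sum(forecast_area & ~event_area)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

# Hypothetical example: CSI of a practically perfect area against the reports
reports = np.zeros((65, 93), dtype=bool)
reports[30:33, 40:50] = True                  # placeholder report cluster
pp_prob = practically_perfect(reports)
pp_area = pp_prob >= 0.05                     # placeholder probability threshold
print(csi(pp_area, reports))
```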
Abstract
Over the last 50 yr, the number of tornadoes reported in the United States has doubled from about 600 per year in the 1950s to around 1200 in the 2000s. This doubling is likely not related to meteorological causes alone. To account for this increase, a simple least squares linear regression was fitted to the annual number of tornado reports. A “big tornado day” is a single day on which numerous tornadoes and/or many tornadoes exceeding a specified intensity threshold were reported anywhere in the country. Because a big tornado day is defined without considering the spatial distribution of the tornadoes, it differs from previous definitions of outbreaks. To address the increase in reporting, the number of reports on a given day is compared to the expected annual number of reports from the linear regression. In addition, the record of F1 and greater tornadoes was used in determining a big tornado day because that series is more stationary over time than the F2 and greater series. Thresholds were applied to the data to determine the number and intensities of tornadoes needed for a day to qualify as a big tornado day. Candidate threshold values included fractions of the expected annual total from the linear regression and fixed numbers for the intensity criterion. Thresholds of 1.5% of the expected annual total number of tornadoes and/or at least 8 F1 and greater tornadoes identified about 18.1 big tornado days per year. Higher thresholds, such as 2.5% and/or at least 15 F1 and greater tornadoes, showed similar characteristics yet identified approximately 6.2 big tornado days per year. Finally, probability distribution curves generated using kernel density estimation revealed that big tornado days were more likely to occur slightly earlier in the year, and with a narrower distribution, than any given tornado day.
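A brief sketch of the thresholding logic described above: fit a least squares trend to annual report counts and flag a day as a big tornado day when it reaches a fraction of that year's expected annual total and/or a fixed number of F1 and greater tornadoes. The annual counts below are synthetic placeholders, not the actual report record.

```python
import numpy as np

# Synthetic annual tornado report counts (the real series comes from the report database)
years = np.arange(1954, 2004)
counts = 600 + 12.0 * (years - 1954) + np.random.default_rng(1).normal(0, 80, years.size)

# Least squares linear trend in annual report counts
slope, intercept = np.polyfit(years, counts, 1)

def is_big_tornado_day(n_reports_day, n_f1_plus_day, year,
                       frac_threshold=0.015, f1_threshold=8):
    """A day qualifies if its report count reaches a fraction of that year's
    expected annual total and/or it has at least f1_threshold F1+ tornadoes.
    The 1.5% / 8 F1+ values follow the thresholds described above.
    """
    expected_annual = intercept + slope * year
    return (n_reports_day >= frac_threshold * expected_annual) or (n_f1_plus_day >= f1_threshold)

print(is_big_tornado_day(n_reports_day=20, n_f1_plus_day=9, year=1999))
```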
Abstract
The representation of turbulent mixing within the lower troposphere is needed to accurately portray the vertical thermodynamic and kinematic profiles of the atmosphere in mesoscale model forecasts. For mesoscale models, turbulence is mostly a subgrid-scale process, but its presence in the planetary boundary layer (PBL) can directly modulate a simulation’s depiction of mass fields relevant to forecast problems. The primary goal of this work is to review the various parameterization schemes that the Weather Research and Forecasting Model employs to depict turbulent mixing (PBL schemes); the review is followed by an application to a severe weather environment. Each scheme represents mixing on a local and/or nonlocal basis. Local schemes consider only immediately adjacent vertical levels in the model, whereas nonlocal schemes can consider a deeper layer covering multiple levels in representing the effects of vertical mixing through the PBL. As an application, a pair of cold season severe weather events that occurred in the southeastern United States is examined. These cases highlight the ambiguities of classically defined PBL schemes in a cold season severe weather environment, though characteristics of the PBL schemes are still apparent. Low-level lapse rates are typically steeper, and storm-relative helicity slightly smaller, for nonlocal schemes than for local schemes. Nonlocal mixing is necessary to more accurately forecast the lower-tropospheric lapse rates within the warm sector of these events. While all schemes overestimate mixed-layer convective available potential energy (MLCAPE), nonlocal schemes overestimate MLCAPE more strongly than do local schemes.
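As a rough illustration of the local versus nonlocal distinction, the toy column below mixes potential temperature with a downgradient (local) flux and, optionally, a countergradient (nonlocal-style) term. It is not any particular WRF PBL scheme; the eddy-diffusivity profile, countergradient constant, and time step are illustrative only.

```python
import numpy as np

def mix_column(theta, K, dz, dt, gamma=0.0):
    """One explicit time step of 1D vertical mixing of potential temperature.

    Local-style mixing: flux = -K * dtheta/dz (gamma = 0).
    Nonlocal-style mixing: add a countergradient term, flux = -K * (dtheta/dz - gamma),
    so heat can be transported upward even where the local gradient alone would not do so.
    Toy illustration only, not a WRF scheme.
    """
    dtheta_dz = np.gradient(theta, dz)
    flux = -K * (dtheta_dz - gamma)
    return theta - dt * np.gradient(flux, dz)

dz = 50.0
z = np.arange(0.0, 2000.0, dz)                                # height (m)
h = 1500.0                                                    # illustrative PBL depth (m)
K = 80.0 * (z / h) * np.maximum(1.0 - z / h, 0.0) ** 2        # simple eddy-diffusivity profile
theta0 = 300.0 + 0.004 * z                                    # weakly stable initial profile (K)

local_step = mix_column(theta0, K, dz, dt=60.0)                   # local mixing only
nonlocal_step = mix_column(theta0, K, dz, dt=60.0, gamma=0.003)   # with countergradient term
print(np.round(local_step[:5], 3), np.round(nonlocal_step[:5], 3))
```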
Abstract
Accurately forecasting snowfall is a challenge. In particular, one poorly understood component of snowfall forecasting is determining the snow ratio. The snow ratio is the ratio of snowfall to liquid equivalent and is inversely proportional to the snow density. In a previous paper, an artificial neural network was developed to predict snow ratios probabilistically in three classes: heavy (1:1 < ratio < 9:1), average (9:1 ≤ ratio ≤ 15:1), and light (ratio > 15:1). A Web-based application for the probabilistic prediction of snow ratio in these three classes, based on operational forecast model soundings and the neural network, is now available. The goal of this paper is to explore the statistical characteristics of the snow ratio to determine how temperature, liquid equivalent, and wind speed can be used to provide additional guidance (quantitative, wherever possible) for forecasting snowfall, especially for extreme values of snow ratio. Snow ratio tends to increase as the low-level (surface to roughly 850 mb) temperature decreases. For example, mean low-level temperatures greater than −2.7°C rarely (less than 5% of the time) produce snow ratios greater than 25:1, whereas mean low-level temperatures less than −10.1°C rarely produce snow ratios less than 10:1. Snow ratio tends to increase strongly as the liquid equivalent decreases, leading to a nomogram for the probabilistic forecasting of snowfall, given a forecast value of liquid equivalent. For example, liquid equivalent amounts of 2.8–4.1 mm (0.11–0.16 in.) rarely produce snow ratios less than 14:1, and liquid equivalent amounts greater than 11.2 mm (0.44 in.) rarely produce snow ratios greater than 26:1. The surface wind speed plays a minor role, with snow ratio decreasing as wind speed increases. Although previous research has shown that simple relationships for determining the snow ratio are difficult to obtain, this note helps to clarify some situations where such relationships are possible.
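For reference, the snow ratio definition and the three classes used above translate directly into a few lines of code. The class boundaries follow the abstract; the example values are hypothetical.

```python
def snow_ratio(snowfall_mm, liquid_equivalent_mm):
    """Snow ratio = snowfall depth / liquid equivalent (dimensionless).
    Example: 10 mm of liquid that produces 120 mm of snow gives a 12:1 ratio."""
    return snowfall_mm / liquid_equivalent_mm

def classify_ratio(ratio):
    """Classes used above: heavy (< 9:1), average (9:1 to 15:1), light (> 15:1).
    'Heavy' refers to dense, low-ratio snow; 'light' to fluffy, high-ratio snow."""
    if ratio < 9.0:
        return "heavy"
    if ratio <= 15.0:
        return "average"
    return "light"

r = snow_ratio(120.0, 10.0)          # hypothetical event
print(r, classify_ratio(r))          # 12.0 'average'
```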
Abstract
The threat of damaging hail from severe thunderstorms affects many communities and industries on a yearly basis, with annual economic losses in excess of $1 billion (U.S. dollars). Past hail climatologies have typically relied on the National Oceanic and Atmospheric Administration/National Climatic Data Center’s (NOAA/NCDC) Storm Data publication, which has numerous reporting biases and nonmeteorological artifacts. This research seeks to quantify the spatial and temporal characteristics of contiguous United States (CONUS) hail fall, derived from multiradar multisensor (MRMS) algorithms for several years during the Next-Generation Weather Radar (NEXRAD) era, leveraging the Multiyear Reanalysis of Remotely Sensed Storms (MYRORSS) dataset at NOAA’s National Severe Storms Laboratory (NSSL). The primary MRMS product used in this study is the maximum expected size of hail (MESH). The preliminary climatology includes 42 months of quality-controlled and reprocessed MESH grids, which span the warm seasons of four years (2007–10) and cover 98% of all Storm Data hail reports during that time. The dataset has a spatial resolution of 0.01° latitude × 0.01° longitude × 31 vertical levels and a temporal resolution of 5 min. Radar-based and reports-based methods of hail climatology are compared. MRMS MESH demonstrates superior coverage and resolution over Storm Data hail reports, and is largely unbiased. The results reveal a broad maximum of annual hail fall in the Great Plains and a diminished secondary maximum in the Southeast United States. Potential explanations for the differences between the two methods of hail climatology are also discussed.
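A simplified sketch of how gridded MESH can be aggregated into a hail frequency map: count, at each grid point, the days whose maximum MESH meets a size threshold. The 25.4-mm (1 in.) threshold, the tiny grid, and the synthetic values below are placeholders, not the study's configuration.

```python
import numpy as np

def hail_day_frequency(daily_max_mesh_mm, threshold_mm=25.4):
    """Count, at each grid point, the number of days whose maximum MESH meets
    or exceeds a size threshold (25.4 mm = 1 in. used here as a placeholder).

    daily_max_mesh_mm: array of shape (n_days, ny, nx) of daily maximum MESH.
    Returns an (ny, nx) count of hail days.
    """
    return np.sum(daily_max_mesh_mm >= threshold_mm, axis=0)

# Hypothetical example: one warm season of daily maximum MESH on a tiny grid
rng = np.random.default_rng(2)
mesh = rng.gamma(shape=1.5, scale=8.0, size=(122, 10, 10))
print(hail_day_frequency(mesh))
```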
Abstract
Southeast U.S. cold season severe weather events can be difficult to predict because of the marginality of the supporting thermodynamic instability in this regime. The sensitivity of this environment to prognoses of instability encourages additional research on how mesoscale models represent the turbulent processes within the lower atmosphere that directly influence thermodynamic profiles and forecasts of instability. This work summarizes the characteristics of the southeast U.S. cold season severe weather environment and of the planetary boundary layer (PBL) parameterization schemes used in mesoscale modeling, and it proceeds with a focused investigation of the performance of nine different representations of the PBL in this environment by comparing simulated thermodynamic and kinematic profiles to observationally influenced ones. It is demonstrated that the simultaneous representation of both nonlocal and local mixing in the Asymmetric Convective Model, version 2 (ACM2), scheme has the lowest overall errors for the southeast U.S. cold season tornado regime. For storm-relative helicity, strictly nonlocal schemes show the largest overall differences from the observationally influenced datasets (underforecast). Meanwhile, strictly local schemes yield the most extreme mean differences from these datasets (underforecast) for the low-level lapse rate and the depth of the PBL. A hybrid local–nonlocal scheme is found to mitigate these extremes. These findings are traced to a tendency for local schemes to incompletely mix the PBL while nonlocal schemes overmix the PBL, whereas hybrid schemes represent more intermediate mixing in a regime where vertical shear enhances mixing and limited instability suppresses it.