Search Results
You are looking at 21–30 of 49 items for
- Author or Editor: M. E. Brooks
- Refine by Access: All Content
Abstract
Accurately forecasting snowfall is a challenge. In particular, one poorly understood component of snowfall forecasting is determining the snow ratio. The snow ratio is the ratio of snowfall to liquid equivalent and is inversely proportional to the snow density. In a previous paper, an artificial neural network was developed to predict snow ratios probabilistically in three classes: heavy (1:1 < ratio < 9:1), average (9:1 ≤ ratio ≤ 15:1), and light (ratio > 15:1). A Web-based application for the probabilistic prediction of snow ratio in these three classes, based on operational forecast model soundings and the neural network, is now available. The goal of this paper is to explore the statistical characteristics of the snow ratio to determine how temperature, liquid equivalent, and wind speed can be used to provide additional guidance (quantitative, wherever possible) for forecasting snowfall, especially for extreme values of snow ratio. Snow ratio tends to increase as the low-level (surface to roughly 850 mb) temperature decreases. For example, mean low-level temperatures greater than −2.7°C rarely (less than 5% of the time) produce snow ratios greater than 25:1, whereas mean low-level temperatures less than −10.1°C rarely produce snow ratios less than 10:1. Snow ratio tends to increase strongly as the liquid equivalent decreases, leading to a nomogram for probabilistically forecasting snowfall, given a forecasted value of liquid equivalent. For example, liquid equivalent amounts of 2.8–4.1 mm (0.11–0.16 in.) rarely produce snow ratios less than 14:1, and liquid equivalent amounts greater than 11.2 mm (0.44 in.) rarely produce snow ratios greater than 26:1. The surface wind speed plays a minor role, with snow ratio decreasing as wind speed increases. Although previous research has shown that simple relationships for determining the snow ratio are difficult to obtain, this note helps to clarify some situations in which such relationships are possible.
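The three snow-ratio classes above have explicit thresholds, so they can be sketched directly. This is an illustrative helper using only the class boundaries quoted in the abstract; it is not the paper's neural-network classifier, and the function names are ours.

```python
# Classes from the abstract: "heavy" means dense snow (low ratio),
# "light" means fluffy snow (high ratio).
def snow_ratio_class(ratio: float) -> str:
    """Classify a snow ratio (snowfall depth / liquid equivalent)."""
    if ratio < 9.0:
        return "heavy"    # 1:1 < ratio < 9:1
    if ratio <= 15.0:
        return "average"  # 9:1 <= ratio <= 15:1
    return "light"        # ratio > 15:1

def snowfall_from_liquid(liquid_mm: float, ratio: float) -> float:
    """Snowfall depth (mm) implied by a liquid equivalent and snow ratio."""
    return liquid_mm * ratio
```

For example, a 10-mm liquid equivalent at a 12:1 ratio implies 120 mm of snow, and 12:1 falls in the "average" class.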
Abstract
The threat of damaging hail from severe thunderstorms affects many communities and industries on a yearly basis, with annual economic losses in excess of $1 billion (U.S. dollars). Past hail climatology has typically relied on the National Oceanic and Atmospheric Administration/National Climatic Data Center’s (NOAA/NCDC) Storm Data publication, which has numerous reporting biases and nonmeteorological artifacts. This research seeks to quantify the spatial and temporal characteristics of contiguous United States (CONUS) hail fall, derived from multiradar multisensor (MRMS) algorithms for several years during the Next-Generation Weather Radar (NEXRAD) era, leveraging the Multiyear Reanalysis of Remotely Sensed Storms (MYRORSS) dataset at NOAA’s National Severe Storms Laboratory (NSSL). The primary MRMS product used in this study is the maximum expected size of hail (MESH). The preliminary climatology includes 42 months of quality-controlled and reprocessed MESH grids, which span the warm seasons of four years (2007–10), covering 98% of all Storm Data hail reports during that time. The dataset has a spatial resolution of 0.01° latitude × 0.01° longitude × 31 vertical levels, and a 5-min temporal resolution. Radar-based and reports-based methods of hail climatology are compared. MRMS MESH demonstrates superior coverage and resolution over Storm Data hail reports, and is largely unbiased. The results reveal a broad maximum of annual hail fall in the Great Plains and a diminished secondary maximum in the Southeast United States. Potential explanations for the differences in the two methods of hail climatology are also discussed.
Abstract
The reduction of systematic errors is a continuing challenge for model development. Feedbacks and compensating errors in climate models often make finding the source of a systematic error difficult. In this paper, it is shown how model development can benefit from the use of the same model across a range of temporal and spatial scales. Two particular systematic errors are examined: tropical circulation and precipitation distribution, and summer land surface temperature and moisture biases over Northern Hemisphere continental regions. Each of these errors affects the model performance on time scales ranging from a few days to several decades. In both cases, the characteristics of the long-time-scale errors are found to develop during the first few days of simulation, before any large-scale feedbacks have taken place. The ability to compare the model diagnostics from the first few days of a forecast, initialized from a realistic atmospheric state, directly with observations has allowed deficiencies in the physical parameterizations to be identified that, when corrected, lead to improvements across the full range of time scales. This study highlights the benefits of a seamless prediction system across a wide range of time scales.
In the fall of 1992 a lightning direction finder network was deployed in the western Pacific Ocean in the area of Papua New Guinea. Direction finders were installed on Kapingamarangi Atoll and near the towns of Rabaul and Kavieng, Papua New Guinea. The instruments were modified to detect cloud-to-ground lightning out to a distance of 900 km. Data were collected from cloud-to-ground lightning flashes for the period 26 November 1992–15 January 1994. The analyses are presented for the period 1 January 1993–31 December 1993. In addition, a waveform recorder was located at Kavieng to record both cloud-to-ground lightning and intracloud lightning in order to provide an estimate of the complete lightning activity. The data from these instruments are to be analyzed in conjunction with the data from ship and airborne radars, in-cloud microphysics, and electrical measurements from both the ER-2 and DC-8. The waveform instrumentation operated from approximately mid-January through February 1993. Over 150 000 waveforms were recorded.
During the year January–December 1993, the cloud-to-ground lightning location network recorded 857 000 first strokes, of which 5.6% were of positive polarity. During the same period, 437 000 subsequent strokes were recorded. The peak annual flash density was measured to be 2.0 flashes km⁻², centered on the western coastline of the island of New Britain, just southwest of Rabaul. The annual peak lightning flash density over the Intensive Flux Array of the Tropical Oceans Global Atmosphere Coupled Ocean–Atmosphere Response Experiment was 0.1 flashes km⁻², more than an order of magnitude less than that measured near land. The diurnal lightning frequency peaked at 1600 UTC (0200 LT), perhaps coinciding with the nighttime land-breeze convergence along the coast of New Britain. Median monthly negative peak currents are in the 20–30-kA range, with first-stroke peak currents typically exceeding subsequent peak currents. Median monthly positive peak currents are typically 30 kA, with one month (June) having a value of 60 kA.
Positive polar conductivity was measured by an ER-2 flight from 40°N geomagnetic latitude to 28°S geomagnetic latitude. The measurements show that the air conductivity is about a factor of 0.6 lower in the Tropics than in the midlatitudes. Consequently, a tropical storm will produce higher field values aloft for the same rate of electrical current generation. An ER-2 overflight of tropical cyclone Oliver on 7 February 1993 measured electric fields and 85-GHz brightness temperatures. The measurements reveal electrification in the eye wall cloud region with ice, but no lightning was observed.
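The "consequently" step above follows from the quasi-steady form of Ohm's law, E = J/σ: for a fixed current density, the field scales inversely with conductivity. A minimal arithmetic sketch, where the numerical values of J and the midlatitude conductivity are illustrative placeholders rather than flight measurements:

```python
# Ohm's law for a quasi-steady current through air: E = J / sigma.
# If tropical conductivity is 0.6 times the midlatitude value, the field
# aloft is larger by 1 / 0.6 ~ 1.7 for the same current generation rate.
J = 2.0e-12            # current density, A m^-2 (illustrative placeholder)
sigma_mid = 3.0e-14    # midlatitude conductivity, S m^-1 (illustrative placeholder)
sigma_trop = 0.6 * sigma_mid  # factor of 0.6 from the measurements above

E_mid = J / sigma_mid
E_trop = J / sigma_trop

# The enhancement factor is independent of the placeholder values.
enhancement = E_trop / E_mid  # ~1.67
```

The ratio depends only on the conductivity factor, which is why the placeholder magnitudes do not affect the conclusion.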
A new millimeter-wave cloud radar (MMCR) has been designed to provide detailed, long-term observations of nonprecipitating and weakly precipitating clouds at Cloud and Radiation Testbed (CART) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) program. Scientific requirements included excellent sensitivity and vertical resolution to detect weak and thin multiple layers of ice and liquid water clouds over the sites and long-term, unattended operations in remote locales. In response to these requirements, the innovative radar design features a vertically pointing, single-polarization, Doppler system operating at 35 GHz (Ka band). It uses a low-peak-power transmitter for long-term reliability and high-gain antenna and pulse-compressed waveforms to maximize sensitivity and resolution. The radar uses the same kind of signal processor as that used in commercial wind profilers. The first MMCR began operations at the CART in northern Oklahoma in late 1996 and has operated continuously there for thousands of hours. It routinely provides remarkably detailed images of the ever-changing cloud structure and kinematics over this densely instrumented site. Examples of the data are presented. The radar measurements will greatly improve quantitative documentation of cloud conditions over the CART sites and will bolster ARM research to understand how clouds impact climate through their effects on radiative transfer. Millimeter-wave radars such as the MMCR also have potential applications in the fields of aviation weather, weather modification, and basic cloud physics research.
Abstract
The first phase of an atmospheric tracer experiment program, designated Project Sagebrush, was conducted at the Idaho National Laboratory in October 2013. The purpose was to reevaluate the results of classical field experiments in short-range plume dispersion (e.g., Project Prairie Grass) using the newer technologies that are available for measuring both turbulence levels and tracer concentrations. All releases were conducted during the daytime, with atmospheric conditions ranging from neutral to unstable. The key finding was that the values of the horizontal plume spread parameter σy tended to be larger, by up to a factor of ~2, than those measured in many previous field studies. The discrepancies tended to increase with downwind distance. The values of the ratio σy/σθ, where σθ is the standard deviation of the horizontal wind direction, also trended near the upper limit of, or above, the range of values determined in earlier studies. There was also evidence to suggest that σy began to be independent of σθ for σθ greater than 18°. It was also found that the commonly accepted range of values for σθ in different stability conditions might be limiting, at best, and might possibly be unrealistically low, especially at night in low wind speeds. The results raise questions about the commonly accepted magnitudes of σy derived from older studies. These values are used in the parameterization and validation of both older stability-class dispersion models and newer models that are based on Taylor’s equation and modern PBL theory.
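The σy/σθ ratio discussed above can be made concrete with the classical short-range relation σy ≈ x·σθ (with σθ in radians), which is one common small-angle approximation in dispersion theory; whether this exact form matches the project's analysis is an assumption here, and the values below are illustrative, not Project Sagebrush data.

```python
import math

def sigma_theta_deg(wind_dirs_deg):
    """Standard deviation of horizontal wind direction, in degrees
    (assumes the directions do not wrap through 0/360)."""
    n = len(wind_dirs_deg)
    mean = sum(wind_dirs_deg) / n
    var = sum((d - mean) ** 2 for d in wind_dirs_deg) / n
    return math.sqrt(var)

def sigma_y_small_angle(x_m, s_theta_deg):
    """Short-range estimate sigma_y ~ x * sigma_theta(radians)."""
    return x_m * math.radians(s_theta_deg)
```

For instance, σθ = 10° at a 400-m downwind distance implies σy ≈ 70 m under this approximation; the project's finding that measured σy ran high relative to σθ is a statement about departures from relations of this kind.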
In May 2003 there was a very destructive extended outbreak of tornadoes across the central and eastern United States. More than a dozen tornadoes struck each day from 3 May to 11 May 2003. This outbreak caused 41 fatalities, 642 injuries, and approximately $829 million of property damage. The outbreak set a record for most tornadoes ever reported in a week (334 between 4–10 May), and strong tornadoes (F2 or greater) occurred in an unbroken sequence of nine straight days. Fortunately, despite this being one of the largest extended outbreaks of tornadoes on record, it did not cause as many fatalities as the few comparable past outbreaks, due in large measure to the warning efforts of National Weather Service, television, and private-company forecasters and the smaller number of violent (F4–F5) tornadoes. This event was also relatively predictable; the onset of the outbreak was forecast skillfully many days in advance.
An unusually persistent upper-level trough in the intermountain west and sustained low-level southerly winds through the southern Great Plains produced the extended period of tornado-favorable conditions. Three other extended outbreaks in the past 88 years were statistically comparable to this outbreak, and two short-duration events (Palm Sunday 1965 and the 1974 Superoutbreak) were comparable in the overall number of strong tornadoes. An analysis of tornado statistics and environmental conditions indicates that extended outbreaks of this character occur roughly every 10 to 100 years.
Abstract
Concurrent wavefield and turbulent flux measurements acquired during the Southern Ocean (SO) Gas Exchange (GasEx) and the High Wind Speed Gas Exchange Study (HiWinGS) projects permit evaluation of the dependence of the whitecap coverage W on wind speed, wave age, wave steepness, mean square slope, and wind-wave and breaking Reynolds numbers. The W was determined from over 600 high-frequency visible imagery recordings of 20 min each. Wave statistics were computed from in situ and remotely sensed data as well as from a WAVEWATCH III hindcast. The first shipborne estimates of W under sustained 10-m neutral wind speeds U10N of 25 m s⁻¹ were obtained during HiWinGS. These measurements suggest that W levels off at high wind speed, not exceeding 10% when averaged over 20 min. Combining wind speed and wave height in the form of the wind-wave Reynolds number resulted in closely agreeing models for both datasets, individually and combined. These are also in good agreement with two previous studies. When W is expressed in terms of wavefield statistics only or wave age, larger scatter is observed and/or there is little agreement between SO GasEx, HiWinGS, and previously published data. The wind speed–only parameterizations deduced from the SO GasEx and HiWinGS datasets agree closely and capture more of the observed W variability than the Reynolds number parameterizations. However, these wind speed–only models do not agree as well with previous studies as the wind-wave Reynolds number parameterizations do.
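The abstract does not define the wind-wave Reynolds number it uses; one common form in the whitecap literature is R_Hw = u*·H_s/ν_w (friction velocity times significant wave height over the kinematic viscosity of water). Treating that definition, and the power-law coefficients below, as assumptions rather than the study's fitted values, the idea can be sketched as:

```python
def wind_wave_reynolds(u_star, h_s, nu_w=1.0e-6):
    """Assumed form R_Hw = u* * H_s / nu_w.
    u_star: friction velocity (m/s); h_s: significant wave height (m);
    nu_w: kinematic viscosity of seawater (~1e-6 m^2/s)."""
    return u_star * h_s / nu_w

def whitecap_fraction(reynolds, a=1.0e-9, b=1.0):
    """Illustrative power-law parameterization W = a * R^b.
    The coefficients a, b are placeholders, not the paper's fit."""
    return a * reynolds ** b
```

Combining u* and H_s into one number is the appeal of this approach: it folds wind forcing and wavefield state into a single predictor, which is consistent with the abstract's finding that Reynolds number models for the two datasets agreed closely.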