Search Results
You are looking at 1 - 8 of 8 items for
- Author or Editor: Charles G. Wade
Abstract
A program is described which has been used to verify the quality of surface mesonet data collected during the Cooperative Convective Precipitation Experiment (CCOPE). The CCOPE mesonet consisted of 123 automated stations from two mesonet systems and was operational for an 81-day period from May to August 1981. Parameters examined include pressure, temperature, humidity, wind direction, and wind speed. The pressure data were examined by systematically comparing hourly values from each station with the pressure observed at a nearby flight service station. Errors due to thermal effects and sensors drifting out of calibration were uncovered and corrected to approximately 1 mb in absolute accuracy. Temperature, humidity, and wind data were examined using an objective intercomparison procedure based on the Barnes objective analysis technique. The paper describes the procedure and shows how it was used to uncover relative errors in temperature and humidity, as well as exposure and vane alignment errors in wind.
In order to study the differences between the two mesonet systems used in CCOPE, one station from each system was collocated throughout the experiment. Results are presented which show observed temperature and wind speed differences between the two systems.
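The temperature, humidity, and wind checks above rest on Barnes-style distance weighting: each station is compared against a Gaussian-weighted estimate built from its neighbors, and persistent residuals flag instrument problems rather than real gradients. The sketch below is a minimal Python illustration of that idea, not the CCOPE program; the smoothing parameter kappa, the flagging tolerance, and the station coordinates are assumptions chosen for illustration.

```python
import numpy as np

def barnes_estimate(x0, y0, xs, ys, vals, kappa=400.0):
    """Gaussian distance-weighted (Barnes-type) estimate at (x0, y0)
    from neighboring stations at (xs, ys) with observations `vals`.
    kappa (km^2) sets the weight falloff; an assumed value here."""
    r2 = (xs - x0) ** 2 + (ys - y0) ** 2   # squared distances, km^2
    w = np.exp(-r2 / kappa)                # Gaussian weights
    return np.sum(w * vals) / np.sum(w)

def intercompare(xs, ys, vals, tol=1.5):
    """Flag stations whose observation departs from the estimate built
    from all other stations by more than `tol` (e.g., deg C).
    xs, ys, vals are NumPy arrays over the network."""
    flags = []
    for i in range(len(vals)):
        others = np.arange(len(vals)) != i  # leave station i out
        est = barnes_estimate(xs[i], ys[i],
                              xs[others], ys[others], vals[others])
        if abs(vals[i] - est) > tol:
            flags.append((i, vals[i] - est))  # (station, residual)
    return flags
```

Residuals that stay one-sided hour after hour point to calibration drift or exposure problems; residuals that come and go track real mesoscale variability.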
Abstract
This paper explores the low-humidity problem that has plagued the radiosonde hygristor for nearly 30 years and that makes the hygristor appear to become insensitive at relative humidities (RH) below about 20%. The problem led the National Weather Service (NWS) in 1973 to begin a practice of terminating the radiosonde humidity measurement at 20% RH and to begin reporting any humidities evaluated at less than 20% using a 30°C dewpoint depression in the coded radiosonde message. This practice remains in effect today and has resulted in a permanent, 20-year gap in the radiosonde humidity archive for the United States.
This study examines a number of factors that can potentially affect the accuracy of NWS radiosonde low-humidity data, including 1) characteristics of the hygristor response curve, 2) characteristics of the NWS analog radiosonde and methods used to transmit and record the radiosonde signal, and 3) reduction techniques used to convert the radiosonde signal into RH. It is shown that the primary factor that biases the NWS low-humidity data, and that makes the hygristor appear to lose its sensitivity below 20% RH, is an error in the placement of the 10% and 15% RH lines on the humidity evaluator used to convert the radiosonde signal into RH. This error has propagated forward to the present because of an NWS requirement that current humidity reduction algorithms match the evaluator within 1% RH. The paper reviews the past work of Brousaides, who first described the error in the evaluator, and presents results of recent tests conducted by the manufacturer of the hygristor that corroborate Brousaides's earlier work. The humidity reduction algorithm currently used by the NWS is described, and it is shown that by changing the coefficients used to derive the low-humidity data, the sensitivity of the hygristor can be restored. An example of data correction using soundings collected in a dry-microburst environment is presented. The paper discusses the decreased resolution inherent in the radiosonde recorder trace in the low-humidity region, but shows that current automated sounding systems have eliminated this device as a recording medium. Limitations in the accuracy of the low-humidity measurement stemming from uncertainties in the sensor's lock-in resistance are also discussed. The paper recommends a change in the low-humidity reduction algorithm used by the NWS, and a reversal of the 20-year-old practice of truncating the humidity report at 20% RH.
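The reduction step at issue maps the hygristor's measured resistance ratio to RH through fitted coefficients, and the recommended fix amounts to replacing the coefficients that govern the low end of the curve while abandoning the truncation practice. The Python sketch below shows only the shape of such a correction; the polynomial form and every coefficient are placeholders, not the NWS algorithm or its actual fit.

```python
import numpy as np

# Placeholder coefficient sets (highest degree first, as np.polyval
# expects) mapping log10(resistance ratio) to RH (%). Neither set is
# the real NWS fit; they exist only to show where a legacy and a
# corrected reduction would diverge at the low end.
COEFFS_LEGACY    = np.array([4.0, -18.0, 22.0])   # hypothetical
COEFFS_CORRECTED = np.array([7.5, -24.0, 20.0])   # hypothetical

def reduce_rh(resistance_ratio, coeffs):
    """Evaluate RH (%) from the sensor resistance ratio with a simple
    polynomial in log10 of the ratio. Illustrative form only."""
    rh = np.polyval(coeffs, np.log10(resistance_ratio))
    return float(np.clip(rh, 0.0, 100.0))

def coded_rh(evaluated_rh, truncate_below=20.0):
    """Mimic the 1973 reporting practice described above: any RH that
    evaluates below 20% is withheld and coded as a fixed 30 C dewpoint
    depression, which is the source of the archive gap."""
    if evaluated_rh < truncate_below:
        return None  # coded as 30 C dewpoint depression instead
    return evaluated_rh
```

Restoring sensitivity then has two independent parts: evaluate the low end with corrected coefficients, and stop withholding the result.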
Abstract
National Weather Service Automated Surface Observing System (ASOS) stations do not currently report drizzle because the precipitation identification sensor, called the light-emitting diode weather identifier (LEDWI), is thought to be unable to detect particles smaller than about 1 mm in diameter. An analysis of the LEDWI 1-min channel data has revealed, however, that the signal levels in these data are sufficiently strong when drizzle occurs that they can be used to detect drizzle and distinguish it from light rain or snow. In particular, it is shown that there is important information in the LEDWI particle channel that has not been previously used for precipitation identification. A drizzle detection algorithm based on these data is developed and presented in the paper. Since noise in the LEDWI channels can sometimes obscure the drizzle signal, a technique is proposed that uses data from other ASOS sensors to identify nondrizzle periods and eliminate them from consideration in the drizzle algorithm. These sensors include the ASOS ceilometer, the temperature and dewpoint sensors, and the visibility sensor. Data from freezing rain and freezing drizzle events are used to illustrate how the algorithm can differentiate between these precipitation types. A comparison is made among the results obtained using the algorithm presented here, those obtained from the Ramsay freezing drizzle algorithm, and the precipitation type recorded by the ASOS observer. The paper shows that by using data from the LEDWI particle channel, in combination with data from other ASOS sensors, drizzle can be detected with the current suite of ASOS instrumentation.
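The two-stage logic described above (rule out nondrizzle periods with independent sensors, then threshold the particle-channel signal) can be sketched compactly. The thresholds below are illustrative assumptions, not the values derived in the paper.

```python
def likely_drizzle(particle_signal, ceiling_ft, temp_c, dewpoint_c,
                   visibility_mi, signal_min=0.5, ceiling_max_ft=3000,
                   spread_max_c=2.0, vis_max_mi=7.0):
    """Two-stage drizzle check in the spirit of the algorithm above.
    Stage 1 uses other ASOS sensors to exclude nondrizzle periods;
    stage 2 thresholds the LEDWI particle-channel signal. Every
    threshold here is an assumed, illustrative value."""
    # Stage 1: drizzle favors low ceilings, a small dewpoint
    # depression, and reduced visibility.
    if ceiling_ft > ceiling_max_ft:
        return False
    if (temp_c - dewpoint_c) > spread_max_c:
        return False
    if visibility_mi > vis_max_mi:
        return False
    # Stage 2: particle-channel signal strong enough to indicate
    # precipitation despite sub-millimeter particle sizes.
    return particle_signal >= signal_min
```

Gating first keeps channel noise from masquerading as drizzle during periods when the environment plainly cannot support it.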
Abstract
A detailed description is given of the morphology and evolution of a moderate hailstorm in terms primarily of quantitative S-band reflectivity factor measurements. During the early phase of the storm's life its movement was strongly influenced by the propagation of new cells on its right flank in a manner typical of “organized” multicell storms. During its later phase, when it was being intensively observed by research aircraft and Doppler radar, the new cells tended to form on the front flank of the storm in a manner similar to that analysed for the multicellular Raymer storm that has been discussed extensively in the literature. In the present case, however, emphasis is given not to the discrete nature of the cellular propagation, but rather to the quasi-steady overall structure that is comparable in certain ways to previous descriptions of supercell storms, with transient weak-echo vaults and a pronounced forward overhang in the echo structure. The present storm was smaller and less intense than the archetypal supercell, and inferred pulsations in updraft intensity indicate a degree of unsteadiness not generally acknowledged for supercell storms. It is suggested that the present regime represents the extension of a steady airflow pattern to environmental conditions with higher instability or weaker shear.
A recent examination of Denver National Weather Service radiosonde data has revealed an error in the procedure used to establish the surface baseline pressure for Denver soundings obtained between 14 April 1983 and 2 March 1988. As a result of this error the baroswitch was improperly set on each sounding, resulting in geopotential heights that average from 16 to 30 m too low. This article alerts users of the Denver data to the existence and nature of this problem and shows the effect that such subtle bias errors in radiosonde height data can have on derived quantities such as geostrophic vorticity.
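To see why height biases of this size matter for derived fields, note that geostrophic vorticity on a pressure surface is proportional to the Laplacian of geopotential height, so a localized bias at one station maps directly into spurious vorticity. A back-of-envelope illustration with a five-point finite difference, assuming a 400-km spacing between sounding sites and f = 10^-4 s^-1 (values chosen for illustration, not taken from the article):

```latex
% Geostrophic vorticity from geopotential height Z on a pressure
% surface, with a five-point finite-difference Laplacian:
\[
  \zeta_g = \frac{g}{f}\,\nabla^2 Z
  \approx \frac{g}{f}\,\frac{Z_E + Z_W + Z_N + Z_S - 4Z_0}{d^2}.
\]
% If the central (Denver) height Z_0 is 25 m too low while its
% neighbors are unbiased, then with d = 400 km:
\[
  \Delta\zeta_g \approx
  \frac{9.8\ \mathrm{m\,s^{-2}}}{10^{-4}\ \mathrm{s^{-1}}}\cdot
  \frac{4 \times 25\ \mathrm{m}}{\left(4\times 10^{5}\ \mathrm{m}\right)^2}
  \approx 6\times 10^{-5}\ \mathrm{s^{-1}},
\]
% a spurious contribution comparable to or larger than typical
% synoptic-scale relative vorticity.
```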
Abstract
The initiation of thunderstorms is examined through a combined observational and modeling case study. The study is based on Doppler radar, aircraft, mesonet, balloon sounding, profiler, and photographic data from the Convection Initiation and Downburst Experiment (CINDE) conducted near Denver, Colorado. The study examines the initiation of a line of thunderstorms that developed along a preexisting, quasi-stationary boundary-layer convergence line on 17 July 1987. The storms were triggered at the intersection of the convergence line with horizontal rolls where enhanced updrafts were present. The primary effect of the convergence line was to deepen the moist layer locally and provide a region potentially favorable to deep convection. The critical factor governing the time of storm development was apparently related to the attainment of a balance between horizontal vorticity in the opposing flows on either side of the convergence line. The effect was to cause the updrafts in the convergence line to become more erect and the convergence zone deeper, as discussed theoretically by Rotunno et al. Modeling results for this case also indicated that storm initiation was very sensitive to the depth of the convergence-line circulation. Storm initiation also frequently coincided with the location of misocyclones along the convergence line. Model results suggested this was because both events were caused by strong updrafts. The misocyclones resulted from stretching of existing vorticity associated with the convergence line. They tended to form where a convective roll intersected the convergence line, leading to a local maximum in convergence and vertical motion. Some misocyclones suddenly deepened and strengthened when they became collocated with the deep, intense updraft of a convective storm. The updraft was responsible for advection and stretching of the vertical component of vorticity, leading in the most intense cases to the development of nonsupercell tornadoes, as discussed previously by Wakimoto and Wilson.
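The vorticity balance invoked here is of the kind analyzed by Rotunno et al. for two-dimensional convergence lines: updrafts stand most erect when the horizontal vorticity generated baroclinically on the cool side of the line matches that imported by the opposing low-level shear. In its simplest form (a sketch of the idea, not this paper's calculation):

```latex
% Balance condition after Rotunno, Klemp, and Weisman: updrafts at
% the line are deepest and most erect when
\[
  \frac{c}{\Delta u} \approx 1,
  \qquad
  c^{2} = 2\int_{0}^{H} (-B)\,\mathrm{d}z,
\]
% where B is the (negative) buoyancy of the cool air over depth H,
% c the propagation speed that buoyancy deficit supports, and
% \Delta u the opposing low-level wind shear.
```

When c/Δu drifts away from unity the circulation tilts the updrafts, which is consistent with the sensitivity to convergence-line depth reported above.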
Abstract
An analysis of the seeding operations during the National Hail Research Experiment 1972–74 randomized seeding program is carried out for the purpose of critiquing the seeding procedures and establishing the actual rates at which seeding material was dispensed as opposed to the prescribed rates. The seeding coverage, a parameter defined in the paper, is found to be only about 50% on the average. The reasons for the low seeding coverage are discussed in terms of seeding logistics and storm evolution, and three case studies are presented to illustrate the problems that can arise. Some results on the rate at which storm cells can develop and on the duration of convective activity over a fixed target area are presented. It is concluded that seeding convective clouds using aircraft flying near cloud base is more difficult than is widely acknowledged.
Since the seeding operations were more thorough on some days than on others, one might reasonably expect that seeding effects, if they exist, would be more marked on the days with the higher coverage. Post hoc analyses that stratify the surface hail and rain data according to seeding coverage are presented. The results do not allow one to reject the hypothesis that seeding had no effect on surface precipitation.
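Operationally, a post hoc stratification of this kind splits the seed/no-seed comparison by coverage class and tests each stratum separately. The Python sketch below illustrates the procedure with a nonparametric rank-sum test; the field names, the 50% coverage split, and the per-day rainfall measure are assumptions for illustration, not the NHRE analysis itself.

```python
from scipy.stats import mannwhitneyu

def stratified_seeding_test(days, coverage_split=0.5):
    """Compare precipitation on seeded vs. unseeded days within high-
    and low-coverage strata. `days` is a list of dicts with keys
    'seeded' (bool), 'coverage' (0 to 1), and 'rainfall' (a
    hypothetical per-day measure). Illustrative only."""
    results = {}
    for name, keep in [
        ("high coverage", lambda d: d["coverage"] >= coverage_split),
        ("low coverage",  lambda d: d["coverage"] < coverage_split),
    ]:
        seeded = [d["rainfall"] for d in days if keep(d) and d["seeded"]]
        unseeded = [d["rainfall"] for d in days
                    if keep(d) and not d["seeded"]]
        if seeded and unseeded:
            # Two-sample rank-sum test: did seeding shift rainfall?
            _, p = mannwhitneyu(seeded, unseeded,
                                alternative="two-sided")
            results[name] = p
    return results  # large p-values: cannot reject the null of no effect
```

If seeding worked, the high-coverage stratum should show the stronger signal; the paper reports that neither stratum supports rejecting the null.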