Search Results
Showing 1–10 of 14 items for Author or Editor: P. Hamill (all content).
Abstract
We compare a series of 85 dustsonde measurements and 84 lidar measurements made in midlatitude North America during 1974–80. This period includes two major volcanic increases (Fuego in 1974 and St. Helens in 1980), as well as an unusually clean, or background, period in 1978–79. An optical modeling technique is used to relate the dustsonde-number data to the lidar-backscatter data. The model includes a range of refractive indices and of size distribution functional forms, to show its sensitivity to these factors. Moreover, two parameters of each size distribution function are adjustable, so that each distribution can be matched to any two-channel dustsonde measurement.
We show how the mean particle radius for backscatter, rB, changes in response to size distribution changes revealed by the dustsonde channel ratio, N(r>0.15)/N(r>0.25). (N(r>x) is the number of particles with radius larger than x microns.) In early 1975, just after the Fuego injection, N(r>0.15)/N(r>0.25) was ∼3, and the corresponding rB was ∼0.5 μm; by early 1980, when N(r>0.15)/N(r>0.25) had increased to eight or larger, rB had correspondingly decreased to ∼0.25 μm. Throughout the 1975–76 Fuego decay, rB always exceeded 0.3 μm; thus, lidar backscatter was influenced primarily by particles larger than those that contribute most to N(r>0.15) and N(r>0.25). This is in accord with the shorter lidar background-corrected 1/e decay time: 7.4 months, versus 10.4 and 7.9 months for N(r>0.15) and N(r>0.25), respectively.
The modeling technique is used to derive a time series of dustsonde-inferred peak backscatter mixing ratio, which agrees very well with the lidar-measured series. The best overall agreement for 1974–80 is achieved with a mixture of refractive indices corresponding to aqueous sulfuric acid at about 210 K with an acid-weight fraction between 0.6 and 0.85.
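The background-corrected 1/e decay times quoted above (7.4, 10.4, and 7.9 months) come from fitting an exponential decay to the post-eruption excess over the background level. A minimal sketch of that kind of fit, using synthetic data rather than the paper's measurements:

```python
import numpy as np

# Hypothetical sketch: estimating a background-corrected 1/e decay time
# from a time series of aerosol mixing ratio, as for the Fuego decay.
# The data below are synthetic; names and values are illustrative only.

def decay_time_months(t_months, mixing_ratio, background):
    """Fit ln(x - background) = ln(x0) - t/tau and return tau (months)."""
    excess = np.asarray(mixing_ratio) - background   # subtract background level
    slope, _ = np.polyfit(t_months, np.log(excess), 1)  # linear fit in log space
    return -1.0 / slope

# Synthetic series with a true 1/e decay time of 7.4 months
t = np.arange(0, 24, 1.0)
x = 0.2 + 5.0 * np.exp(-t / 7.4)
tau = decay_time_months(t, x, background=0.2)
```

With noise-free synthetic data the fit recovers the true decay time exactly; real dustsonde or lidar series would of course scatter around the fitted line.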
Abstract
The Air Force Global Weather Central (AFGWC) Real-Time Nephanalysis (RTNEPH) is an automated cloud model that produces a 48-km gridded analysis of cloud amount, cloud type, and cloud height. Its primary input is imagery from polar-orbiting satellites.
Six main programs make up the RTNEPH. These are the satellite data mapper, the surface temperature analysis and forecast model, the satellite data processor, the conventional data processor, the merge processor, and the bogus processor. The satellite data mapper remaps incoming polar-orbiter imagery to a polar-stereographic database. The surface temperature model produces an analysis and forecast of shelter and skin temperatures for comparison to satellite-measured infrared (IR) brightness temperatures. The satellite data processor reads in the new satellite data and produces a satellite-derived cloud analysis. The conventional data processor retrieves and reformats cloud information from airport observations. The merge processor combines the satellite- and conventional-derived cloud analyses into a final nephanalysis. Finally, the bogus processor allows forecasters to manually correct the nephanalysis where appropriate.
The RTNEPH has been extensively redesigned, primarily to improve analyses of total and layered cloud amounts generated from IR data. Recent enhancements include the use of regression equations to calculate atmospheric water vapor attenuation, an improved definition of surface temperatures used to calculate cloud/no-cloud thresholds for IR data, and the use of Special Sensor Microwave/Imager (SSM/I) data to further improve the calculation of infrared cloud/no-cloud thresholds. Planned enhancements include the processing of geostationary satellite data, more sophisticated processing of visible data, and a higher-resolution satellite database for the archiving and processing of multispectral satellite data.
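The IR cloud/no-cloud decision described above amounts to comparing a pixel's brightness temperature against the expected clear-scene surface temperature. A minimal sketch of that comparison; the threshold value and function name are illustrative assumptions, not the RTNEPH's actual parameters:

```python
# Sketch of an IR cloud/no-cloud test: a pixel is flagged cloudy when its
# brightness temperature is colder than the predicted surface skin
# temperature by more than a threshold offset. threshold_k is a made-up
# illustrative value, not an operational RTNEPH constant.

def is_cloudy(brightness_temp_k, skin_temp_k, threshold_k=4.0):
    """Flag a satellite IR pixel as cloudy if it is colder than the
    expected clear-scene (skin) temperature by more than threshold_k."""
    return (skin_temp_k - brightness_temp_k) > threshold_k

pixels = [288.0, 285.0, 262.0, 240.0]   # IR brightness temperatures (K)
flags = [is_cloudy(bt, skin_temp_k=290.0) for bt in pixels]
```

The enhancements listed above (water vapor attenuation regression, SSM/I data) refine how the skin temperature and threshold are set, not the basic comparison itself.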
Abstract
Probabilistic fire-weather forecasts provide pertinent information to assess fire behavior and danger of current or potential fires. Operational fire-weather guidance is provided for lead times shorter than seven days, with most products only providing day 1–3 outlooks. Extended-range forecasts can aid in decisions regarding placement of in- and out-of-state resources, prescribed burns, and overall preparedness levels. We demonstrate how ensemble model output statistics and ensemble copula coupling (ECC) postprocessing methods can be used to provide locally calibrated and spatially coherent probabilistic forecasts of the hot–dry–windy index (and its components). The univariate postprocessing fits the truncated normal distribution to data transformed with a flexible selection of power exponents. Forecast scenarios are generated via the ECC-Q variation, which maintains their spatial and temporal coherence by reordering samples from the univariate distributions according to ranks of the raw ensemble. A total of 20 years of ECMWF reforecasts and ERA-Interim reanalysis data over the continental United States (CONUS) are used. Skill of the forecasts is quantified with the continuous ranked probability score using benchmarks of raw and climatological forecasts. Results show postprocessing is beneficial during all seasons over CONUS out to two weeks. Forecast skill relative to climatological forecasts depends on the atmospheric variable, season, location, and lead time, where winter (summer) generally provides the most (least) skill at the longest lead times. Additional improvements of forecast skill can be achieved by aggregating forecast days. Illustrations of these postprocessed forecasts are explored for a past fire event.
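The ECC-Q reordering step described above can be sketched in a few lines: sorted samples (equidistant quantiles) from the calibrated univariate distribution are assigned to ensemble members according to the rank order of the raw ensemble, so the raw ensemble's dependence structure carries over. All numbers below are illustrative:

```python
import numpy as np

# Sketch of ECC-Q reordering: the calibrated quantiles inherit the raw
# ensemble's rank structure, preserving spatial/temporal coherence when
# the same reordering is applied at every grid point and lead time.

def ecc_q_reorder(raw_ensemble, calibrated_quantiles):
    """Assign sorted calibrated quantiles according to raw-ensemble ranks."""
    raw = np.asarray(raw_ensemble)
    q = np.sort(np.asarray(calibrated_quantiles))
    ranks = raw.argsort().argsort()   # 0-based rank of each raw member
    return q[ranks]                   # member i receives the quantile that
                                      # matches raw member i's rank

raw = [3.1, 0.4, 2.2, 5.0]            # raw ensemble (arbitrary units)
cal = [10.0, 20.0, 30.0, 40.0]        # calibrated quantile samples
members = ecc_q_reorder(raw, cal)     # → ordered like the raw ensemble
```

Here the second raw member is the smallest, so it receives the smallest calibrated quantile, and so on; only the marginal values change, not the rank pattern.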
Abstract
Sightings of polar stratospheric clouds (PSC's) by the SAM II satellite system during the northern and southern winters of 1979 are reported. PSC's were observed in the Arctic stratosphere at altitudes between about 17 and 25 km during January 1979, with a single sighting in November 1978, and in the Antarctic stratosphere from June to October 1979 at altitudes from the tropopause up to about 23 km. The measured extinction coefficients at 1 μm wavelength were as much as two orders of magnitude greater than that of the background stratospheric aerosol, with peak extinctions up to 10⁻² km⁻¹. The PSC's were observed when stratospheric temperatures were very low, with a high probability of observation when temperatures were colder than 190 K and a low probability when temperatures were warmer than 198 K. In the Antarctic, clouds were observed in more than 90% of the events in which the minimum temperature was 185 K or less, and were observed in fewer than 10% of the occasions when the temperature was greater than 196 K.
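The temperature dependence reported above is essentially a sighting frequency tabulated in temperature bins. A minimal sketch of that tabulation, using synthetic events rather than the SAM II record:

```python
# Illustrative sketch: fraction of satellite events with a PSC sighting
# within a minimum-temperature bin. The event list is synthetic and only
# mimics the qualitative pattern described in the abstract.

def sighting_fraction(events, t_lo, t_hi):
    """Fraction of events with a PSC for minimum temperatures in [t_lo, t_hi)."""
    in_bin = [saw for t, saw in events if t_lo <= t < t_hi]
    return sum(in_bin) / len(in_bin) if in_bin else None

# (minimum temperature in K, PSC observed?) - synthetic examples
events = [(183, True), (184, True), (185, True), (190, True),
          (192, False), (197, False), (199, False), (200, False)]
cold = sighting_fraction(events, 180, 186)   # expect a high fraction
warm = sighting_fraction(events, 196, 201)   # expect a low fraction
```

With the real SAM II events, the cold bin (≤185 K) gives more than 90% and the warm bin (>196 K) fewer than 10%, as stated above.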
Abstract
We have developed a time-dependent one-dimensional model of the stratospheric sulfate aerosol layer. In constructing the model, we have incorporated a wide range of basic physical and chemical processes in order to avoid predetermining or biasing the model predictions. The simulation, which extends from the surface to an altitude of 58 km, includes the troposphere as a source of gases and condensation nuclei and as a sink for aerosol droplets; however, tropospheric aerosol physics and chemistry are not fully analyzed in the present model. The size distribution of aerosol particles is resolved into 25 discrete size categories covering a range of particle radii from 0.01 to 2.56 µm, with particle volume doubling between categories. In the model, sulfur gases reaching the stratosphere are oxidized by a series of photochemical reactions into sulfuric acid vapor. At certain heights this results in a supersaturated H2SO4–H2O gas mixture with the consequent deposition of aqueous sulfuric acid solution on the surfaces of condensation nuclei. The newly formed droplets grow by heteromolecular heterogeneous condensation of acid and water vapors; the droplets also undergo Brownian coagulation, settle under the influence of gravity and diffuse in the vertical direction. Below the tropopause, particles are washed from the air by rainfall. Most of these aspects of aerosol physics are treated in detail, as is the atmospheric chemistry of sulfur compounds. In addition, the model predicts the quantity of solid (or dissolved) core material within the aerosol droplets. Depending on the local physical environment, aerosol droplets may either grow or evaporate; if they evaporate, their cores are released as solid nuclei.
A set of continuity equations has been derived which describes the temporal and spatial variations of aerosol droplet and condensation nuclei concentrations in air, as well as the sizes of cores in droplets; techniques to solve these equations accurately and efficiently have also been formulated. We present calculations which illustrate the precision and potential applications of the model.
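The size grid described above follows directly from the volume-doubling rule: doubling the particle volume between categories multiplies the radius by 2^(1/3) per bin, and 24 doublings take 0.01 µm to 0.01 × 2⁸ = 2.56 µm. A short sketch of that grid construction:

```python
# The 25 size bins double in particle volume between categories, so radii
# grow by a factor of 2**(1/3) per bin. This reproduces the stated radius
# range of 0.01 to 2.56 um. Function name is illustrative.

def bin_radii(r_min_um=0.01, n_bins=25):
    """Radii of geometrically spaced bins with volume doubling between bins."""
    return [r_min_um * 2.0 ** (i / 3.0) for i in range(n_bins)]

radii = bin_radii()
# First bin: 0.01 um. Last (25th) bin: 0.01 * 2**(24/3) = 2.56 um.
```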
Abstract
We have performed sensitivity tests on a one-dimensional physical-chemical model of the unperturbed stratospheric aerosols and have compared model calculations with observations. The sensitivity tests and comparisons with observations suggest that coagulation controls the particle number mixing ratio, although the number of condensation nuclei at the tropopause and the diffusion coefficient at high altitudes are also important. The sulfate mass and large particle number (r > 0.15 µm) mixing ratios are controlled by growth, sedimentation, evaporation at high altitudes and washout below the tropopause. The sulfur gas source strength and the aerosol residence time are much more important than the supply of condensation nuclei in establishing mass and large particle concentrations. The particle size is also controlled mainly by gas supply and residence time. OCS diffusion (not SO2 diffusion) dominates the production of stratospheric H2SO4 particles during unperturbed times, although direct injection of SO2 into the stratosphere could be significant if it normally occurs regularly by some transport mechanism. We suggest a number of in-situ observations of the aerosols and laboratory measurements of aerosol parameters that can provide further information about the physics and chemistry of the stratosphere and the aerosols found there.
Abstract
Studies using idealized ensemble data assimilation systems have shown that flow-dependent background-error covariances are most beneficial when the observing network is sparse. The computational cost of recently proposed ensemble data assimilation algorithms is directly proportional to the number of observations being assimilated. Therefore, ensemble-based data assimilation should both be more computationally feasible and provide the greatest benefit over current operational schemes in situations when observations are sparse. Reanalysis before the radiosonde era (pre-1931) is just such a situation.
The feasibility of reanalysis before radiosondes using an ensemble square root filter (EnSRF) is examined. Real surface pressure observations for 2001 are used, subsampled to resemble the density of observations we estimate to be available for 1915. Analysis errors are defined relative to a three-dimensional variational data assimilation (3DVAR) analysis using several orders of magnitude more observations, both at the surface and aloft. We find that the EnSRF is computationally tractable and considerably more accurate than other candidate analysis schemes that use static background-error covariance estimates. We conclude that a Northern Hemisphere reanalysis of the middle and lower troposphere during the first half of the twentieth century is feasible using only surface pressure observations. Expected Northern Hemisphere analysis errors at 500 hPa for the 1915 observation network are similar to current 2.5-day forecast errors.
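The EnSRF update used above can be illustrated for the simplest case of one observation of one state variable: the ensemble mean is updated with the usual Kalman gain, while the deviations from the mean are shrunk by a reduced gain so the analysis variance is correct without adding observation noise. This is a sketch of the standard serial square root form, not the paper's full multivariate implementation:

```python
import numpy as np

# Minimal scalar EnSRF update: the mean gets the full Kalman gain k,
# the deviations get the reduced gain alpha*k, so the analysis ensemble
# variance equals the theoretical (1 - k) * pb without perturbed obs.
# All numbers are illustrative.

def ensrf_update(ensemble, obs, obs_var):
    x = np.asarray(ensemble, dtype=float)
    mean, dev = x.mean(), x - x.mean()
    pb = dev @ dev / (len(x) - 1)                   # background variance
    k = pb / (pb + obs_var)                         # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (pb + obs_var)))
    new_mean = mean + k * (obs - mean)              # update the mean
    new_dev = (1.0 - alpha * k) * dev               # square root deviation update
    return new_mean + new_dev

analysis = ensrf_update([1.0, 2.0, 3.0], obs=2.5, obs_var=0.5)
```

For this toy case the background variance is 1, the gain is 2/3, and the analysis variance comes out to exactly (1 − k)·pb = 1/3, which is what the square root factor alpha is constructed to guarantee.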
Abstract
Measurements of the stratospheric aerosol by SAM II during the northern and southern winters of 1979 showed a pronounced increase in extinction on occasions when the temperature fell to a low value (below 200 K). In this paper we evaluate, from thermodynamic considerations, the correlation between extinction and temperature. As the temperature falls, the hygroscopic aerosols absorb water vapor from the atmosphere, growing as they do so. The effect of the temperature on the size distribution and composition of the aerosol is determined, and the optical extinction at 1 μm wavelength is calculated using Mie scattering theory. The theoretical predictions of the change in extinction with temperature and humidity are compared with the SAM II results at 100 mb, and the water vapor mixing ratio and aerosol number density are inferred from these results. A best fit of the theoretical curves to the SAM II data gives a water vapor content of 5–6 ppmv, and a total particle number density of 6–7 particles cm⁻³.
This paper describes the life cycle of the background (nonvolcanic) stratospheric sulfate aerosol. The authors assume the particles are formed by homogeneous nucleation near the tropical tropopause and are carried aloft into the stratosphere. The particles remain in the Tropics for most of their life, and during this period of time a size distribution is developed by a combination of coagulation, growth by heteromolecular condensation, and mixing with air parcels containing preexisting sulfate particles. The aerosol eventually migrates to higher latitudes and descends across isentropic surfaces to the lower stratosphere. The aerosol is removed from the stratosphere primarily at mid- and high latitudes through various processes, mainly by isentropic transport across the tropopause from the stratosphere into the troposphere.
Abstract
Forecast skill of numerical weather prediction (NWP) models for precipitation accumulations over California is rather limited at subseasonal time scales, and the low signal-to-noise ratio makes it challenging to extract information that provides reliable probabilistic forecasts. A statistical postprocessing framework is proposed that uses an artificial neural network (ANN) to establish relationships between NWP ensemble forecasts and gridded observed 7-day precipitation accumulations, and to model the increase or decrease of the probabilities for different precipitation categories relative to their climatological frequencies. Adding predictors with geographic information and location-specific normalization of forecast information permits the use of a single ANN for the entire forecast domain and thus reduces the risk of overfitting. In addition, a convolutional neural network (CNN) framework is proposed that extends the basic ANN and takes images of large-scale predictors as inputs that inform local increase or decrease of precipitation probabilities relative to climatology. Both methods are demonstrated with ECMWF ensemble reforecasts over California for lead times up to 4 weeks. They compare favorably with a state-of-the-art postprocessing technique developed for medium-range ensemble precipitation forecasts, and their forecast skill relative to climatology is positive everywhere within the domain. The magnitude of skill, however, is low for weeks 3 and 4, and suggests that additional sources of predictability need to be explored.
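The climatology-relative idea above can be sketched without any network: if categories are defined by climatological quantiles, each has a known climatological frequency, and a model's output can be read as multiplicative increases or decreases of those frequencies, renormalized into a probability vector. The adjustment values below stand in for an ANN's output and are purely illustrative:

```python
import numpy as np

# Sketch: category probabilities expressed as adjustments of climatological
# frequencies. With tercile categories the climatological frequency of each
# is 1/3; the (here, made-up) model output scales each frequency up or down
# and the result is renormalized to sum to one.

def adjusted_probabilities(clim_freq, adjustment):
    """Scale climatological category frequencies and renormalize."""
    p = np.asarray(clim_freq) * np.asarray(adjustment)
    return p / p.sum()

clim = [1/3, 1/3, 1/3]        # tercile categories: dry / normal / wet
adj = [0.6, 0.9, 1.5]         # stand-in ANN output: shift mass toward 'wet'
probs = adjusted_probabilities(clim, adj)
```

A climatological forecast corresponds to all adjustments equal to one, which is why skill is naturally measured relative to climatology in this framing.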