Search Results
Showing 1–10 of 20 items for Author or Editor: Warren Wiscombe
Abstract
We present a method for calculating the spectral albedo of snow which can be used at any wavelength in the solar spectrum and which accounts for diffusely or directly incident radiation at any zenith angle. For deep snow, the model contains only one adjustable parameter, an effective grain size, which is close to observed grain sizes. A second parameter, the liquid-equivalent depth, is required only for relatively thin snow.
In order for the model to make realistic predictions, it must account for the extreme anisotropy of scattering by snow particles. This is done by using the “delta-Eddington” approximation for multiple scattering, together with Mie theory for single scattering.
The spectral albedo from 0.3 to 5 μm wavelength is examined as a function of the effective grain size, the solar zenith angle, the snowpack thickness, and the ratio of diffuse to direct solar incidence. The decrease in albedo due to snow aging can be mimicked by reasonable increases in grain size (50–100 μm for new snow, growing to 1 mm for melting old snow).
The model agrees well with observations for wavelengths above 0.8 μm. In the visible and near-UV, on the other hand, the model may predict albedos up to 15% higher than those which are actually observed. Increased grain size alone cannot lower the model albedo sufficiently to match these observations. It is also argued that the two major effects which are neglected in the model, namely nonsphericity of snow grains and near-field scattering, cannot be responsible for the discrepancy. Insufficient snow depth and error in measured absorption coefficient are also ruled out as the explanation. The remaining hypothesis is that visible snow albedo is reduced by trace amounts of absorptive impurities (Warren and Wiscombe, 1980, Part II).
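The delta-Eddington scaling at the core of the model reduces, for deep snow, to a simple transformation of the Mie single-scattering parameters. A minimal Python sketch, pairing that transformation with a similarity-theory estimate of semi-infinite diffuse albedo (the single-scattering values are illustrative stand-ins, not the paper's Mie results):

    import math

    def delta_eddington(omega, g):
        """Delta-Eddington scaling: truncate the forward peak by a
        fraction f = g**2 of the scattered energy."""
        f = g * g
        omega_s = (1.0 - f) * omega / (1.0 - f * omega)  # scaled single-scattering albedo
        g_s = (g - f) / (1.0 - f)                        # scaled asymmetry factor, g/(1+g)
        return omega_s, g_s

    def semi_infinite_albedo(omega, g):
        """Similarity-theory estimate of the diffuse albedo of a deep
        scattering layer: a = (1 - s)/(1 + s), s = sqrt((1-w)/(1-wg)).
        A rough stand-in for the full delta-Eddington solution."""
        omega_s, g_s = delta_eddington(omega, g)
        s = math.sqrt((1.0 - omega_s) / (1.0 - omega_s * g_s))
        return (1.0 - s) / (1.0 + s)

    # Illustrative only: visible-wavelength ice grains scatter almost
    # conservatively (omega near 1) and strongly forward (g ~ 0.89);
    # absorption grows with grain size and wavelength.
    for omega in (0.9999, 0.999, 0.99):
        print(omega, "->", round(semi_infinite_albedo(omega, 0.89), 3))

The steep sensitivity of albedo to small departures of omega from unity is what lets modest grain-size increases mimic the albedo decrease of aging snow.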
Abstract
Small highly absorbing particles, present in concentrations of only 1 part per million by weight (ppmw) or less, can lower snow albedo in the visible by 5–15% from the high values (96–99%) predicted for pure snow in Part I. These particles have, however, no effect on snow albedo beyond 0.9 μm wavelength where ice itself becomes a strong absorber. Thus we have an attractive explanation for the discrepancy between theory and observation described in Part I, a discrepancy which seemingly cannot be resolved on the basis of near-field scattering and nonsphericity effects.
Desert dust and carbon soot are the most likely contaminants. But careful measurements of spectral snow albedo in the Arctic and Antarctic point to a “grey” absorber, one whose imaginary refractive index is nearly constant across the visible spectrum. Thus carbon soot, rather than the red iron oxide normally present in desert dust, is strongly indicated at these sites. Soot particles of radius 0.1 μm, in concentrations of only 0.3 ppmw, can explain the albedo measurements of Grenfell and Maykut on Arctic Ice Island T-3. This amount is consistent with some observations of soot in Arctic air masses. A concentration of 1.5 ppmw of soot is required to explain the Antarctic observations of Kuhn and Siogas, which seemed an unrealistically large amount for the earth's most unpolluted continent until we learned that burning of camp heating fuel and aircraft exhaust had indeed contaminated the measurement site with soot.
Midlatitude snowfields are likely to contain larger absolute amounts of soot and dust than their polar counterparts, but the snowfall is also much larger, so that the ppmw contamination does not differ drastically until melting begins. Nevertheless, the variations in absorbing particle concentration which will exist can help to explain the wide range of visible snow albedos reported in the literature.
Longwave emissivity of snow is unaltered by its soot and dust content. The depression of snow albedo in the visible is therefore a systematic effect, always resulting in more energy being absorbed at a snow-covered surface than would be the case for pure snow. Thus man-made carbon soot aerosol may continue to exert a significant warming effect on the earth's climate even after it is removed from the atmosphere.
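The leverage of trace impurities can be seen with a back-of-the-envelope mixing calculation. A hedged sketch in Python, reusing the similarity albedo estimate above; the soot mass absorption efficiency and the pure-ice co-albedo are assumed round numbers, not values from the paper:

    import math

    RHO_ICE = 917.0   # kg m^-3
    B_SOOT = 1.0e4    # m^2 kg^-1: assumed visible mass absorption efficiency of soot

    def snow_albedo_with_soot(r_eff, co_albedo_ice, soot_ppmw, g=0.89):
        """Diffuse albedo of deep snow carrying a trace soot mass fraction.
        Grain specific extinction from geometric optics (Q_ext ~ 2):
        k_ext = 3/(2 rho_ice r_eff); soot adds absorption in proportion
        to its mass fraction."""
        k_ext = 3.0 / (2.0 * RHO_ICE * r_eff)           # m^2 per kg of snow
        c = soot_ppmw * 1.0e-6                          # mass fraction
        co_albedo = co_albedo_ice + c * B_SOOT / k_ext  # effective 1 - omega
        omega = 1.0 - co_albedo
        s = math.sqrt((1.0 - omega) / (1.0 - omega * g))
        return (1.0 - s) / (1.0 + s)

    # Illustrative: 100-um grains, nearly nonabsorbing ice in the visible.
    for ppmw in (0.0, 0.3, 1.5):
        print(ppmw, "ppmw ->", round(snow_albedo_with_soot(100e-6, 1e-6, ppmw), 3))

With these assumed numbers, tenths of a ppmw already depress the visible albedo by several percent, consistent in magnitude with the 5–15% range quoted above.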
The fundamental climatic role of radiative processes has spurred the development of increasingly sophisticated models of radiative transfer in the earth–atmosphere system. Since the basic physics of radiative transfer is rather well known, this was thought to be an exercise in refinement. It therefore came as a great surprise when large differences (30–70 W m^(−2)) were found among the longwave infrared fluxes predicted by over 30 radiation models for identical atmospheres during the Intercomparison of Radiation Codes in Climate Models (ICRCCM) exercise in the mid-1980s. No amount of further calculation could explain these and other intermodel differences; thus, it became clear that what was needed was a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, such as temperature and humidity.
To obtain this dataset, the ICRCCM participants charged the authors with developing an experimental field program. So the authors developed a program concept for the Spectral Radiance Experiment (SPECTRE), organized a team of scientists with expertise in atmospheric field spectroscopy, remote sensing, and radiative transfer, and secured funding from the Department of Energy and the National Aeronautics and Space Administration. The goal of SPECTRE was to establish a reference standard against which to compare models and also to drastically reduce the uncertainties in humidity, aerosol, etc., which radiation modelers had invoked in the past to excuse disagreements with observations. To avoid the high cost and sampling problems associated with aircraft, SPECTRE was designed to be a surface-based program.
The field portion of SPECTRE took place 13 November to 7 December 1991, in Coffeyville, Kansas, in conjunction with the FIRE Cirrus II field program, and most of the data have been calibrated to a usable form and will soon appear on a CD-ROM. This paper provides an overview of the data obtained; it also outlines the plans to use these data to further advance the ICRCCM goal of testing the verisimilitude of radiation parameterizations used in climate models.
Abstract
This is the second of two papers analyzing the internal liquid water content (LWC) structure of marine stratocumulus (Sc) based on observations taken during the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment (FIRE) 1987 and Atlantic Stratocumulus Transition Experiment (ASTEX) 1992 field programs. Part I examined wavenumber spectra and the three-decade scale range (tens of meters to tens of kilometers) over which scale invariance holds; the inability of spectral analysis to distinguish between different random processes was also underscored. This indeterminacy is removed in this part by applying multifractal analysis techniques to the LWC fields, leading to a characterization of the role of intermittency in marine Sc.
Two multiscaling statistics are computed and associated nonincreasing hierarchies of exponents are obtained: structure functions and H(q), singular measures and D(q). The real variable q is the order of a statistical moment (e.g., q = 1.0 yields a mean); D(q) quantifies intermittency, H(q) nonstationarity. Being derived from the slopes of lines on log(statistic) versus log(scale) plots, these exponents are defined only where those lines are reasonably straight; the range over which this happens defines the scale-invariant range. Being nonconstant, the derived H(q) and D(q) indicate multifractality rather than monofractality of LWC fields.
Two exponents can serve as first-order measures of nonstationarity and intermittency: H₁ = H(1) and C₁ = 1 − D(1). For the ensemble average of all FIRE and all ASTEX data, the authors find the two corresponding points in the (H₁, C₁) plane to be close: (0.28, 0.10) for FIRE and (0.29, 0.08) for ASTEX. This indicates that the dynamics determining the internal structure of marine Sc depend little on the local climatology. In contrast, the scatter of spatial averages for individual flights around the ensemble average illustrates ergodicity violation. Finally, neither multiplicative cascades (with H₁ = 0) nor additive Gaussian models such as fractional Brownian motions (with C₁ = 0) adequately reproduce the LWC fluctuations in marine Sc.
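Both statistics reduce to slope estimates on log-log plots. A minimal sketch on a synthetic one-dimensional signal standing in for an LWC transect: q-th-order structure functions give ζ(q) and H(q) = ζ(q)/q, while coarse-grained absolute-gradient measures give K(q) and D(q) = 1 − K(q)/(q − 1), with C₁ = 1 − D(1) approximated by taking q near 1:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for an LWC transect (Brownian-like, so H(1) ~ 0.5
    # and C1 ~ 0; the FIRE/ASTEX values were about 0.28 and 0.1).
    f = np.cumsum(rng.standard_normal(2**14))

    def structure_H(f, q, lags=(1, 2, 4, 8, 16, 32, 64)):
        """H(q) = zeta(q)/q from <|f(x+r) - f(x)|^q> ~ r^zeta(q)."""
        pts = [(np.log(r), np.log(np.mean(np.abs(f[r:] - f[:-r])**q)))
               for r in lags]
        x, y = np.array(pts).T
        return np.polyfit(x, y, 1)[0] / q

    def singular_D(f, q, scales=(1, 2, 4, 8, 16, 32, 64)):
        """D(q) = 1 - K(q)/(q - 1) from the coarse-grained gradient
        measure <eps_l^q> ~ l^-K(q), eps normalized to unit mean."""
        eps = np.abs(np.diff(f))
        eps /= eps.mean()
        pts = []
        for l in scales:
            n = (eps.size // l) * l
            coarse = eps[:n].reshape(-1, l).mean(axis=1)
            pts.append((np.log(l), np.log(np.mean(coarse**q))))
        x, y = np.array(pts).T
        return 1.0 + np.polyfit(x, y, 1)[0] / (q - 1.0)

    print("H(1) =", round(structure_H(f, 1.0), 2))
    print("C1  ~", round(1.0 - singular_D(f, 1.001), 2))  # D(1) via q -> 1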
Abstract
This study investigates the internal structure of marine stratocumulus (Sc) using the spatial fluctuations of liquid water content (LWC) measured along horizontal flights off the coast of southern California during the First ISCCP Regional Experiment (FIRE) in summer of 1987. The results of FIRE 87 data analyses are compared to similar ones for marine Sc probed during the Atlantic Stratocumulus Transition Experiment (ASTEX) in summer 1992 near the Azores. In this first of two parts, the authors use spectral analysis to determine the main scale-invariant regimes, defined by the ranges of scales where wavenumber spectra follow power laws; from there, they discuss stationarity issues. Although establishing stationarity—statistical invariance under translation—is crucial for obtaining meaningful spatial statistics (e.g., in climate diagnostics), its importance is often overlooked. The sequel uses multifractal analysis techniques and addresses intermittency issues. By improving our understanding of both nonstationarity and intermittency in atmospheric data, we are in a better position to formulate successful sampling strategies.
Comparing the spectral responses of different instruments to natural LWC variability, the authors find scale breaks (characteristic scales separating two distinct power law regimes) that are spurious, being traceable to well-documented idiosyncrasies of the Johnson–Williams and forward scattering spectrometer probes. In data from the King probe, the authors find no such artifacts; all spectra are of the scale-invariant form k^(−β) with exponents β in the range 1.1–1.7, depending on the flight. Using the whole FIRE 87 King LWC database, the authors find power-law behavior with β = 1.56 ± 0.06 from 20 m to 20 km. From a spectral vantage point, the ASTEX cloud system behaves statistically like a scaled-up version of FIRE 87: a similar exponent β = 1.43 ± 0.08 is obtained, but the scaling range is shifted to [60 m, 60 km], possibly due to the 2–3 times greater boundary layer thickness.
Finally, the authors reassess the usefulness of spectral analysis:
• Its main shortcoming is ambiguity: very different looking stochastic processes can yield similar, even identical, spectra. This problem impedes accurate modeling of the LWC data and, ultimately, is why multifractal methods are required.
• Its main asset is applicability in stationary and nonstationary situations alike; in conjunction with scaling, it can be used to detect nonstationary behavior in data.
Since β > 1, LWC fields in marine Sc are nonstationary within the scaling range and stationary only at larger scales. Nonstationarity implies long-range correlations, and we demonstrate the damage these cause when trying to estimate means and standard deviations with limited amounts of LWC data.
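The basic operation behind these β estimates is a straight-line fit to the spectrum on log-log axes. A minimal sketch on synthetic data (a raw periodogram fit; the published numbers come from spectra averaged over whole flight databases and restricted to the scaling range):

    import numpy as np

    rng = np.random.default_rng(1)

    def spectral_slope(f, dx=1.0):
        """Estimate beta in E(k) ~ k^-beta from a log-log fit to the
        periodogram of a demeaned 1D series."""
        f = f - f.mean()
        spec = np.abs(np.fft.rfft(f))**2
        k = np.fft.rfftfreq(f.size, d=dx)
        pos = k > 0
        return -np.polyfit(np.log(k[pos]), np.log(spec[pos]), 1)[0]

    # Integrated white noise has beta = 2, safely in the nonstationary
    # regime 1 < beta < 3, like the LWC exponents 1.1-1.7 reported above.
    f = np.cumsum(rng.standard_normal(2**14))
    print("beta ~", round(spectral_slope(f), 2))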
Abstract
Several studies have uncovered a break in the scaling properties of Landsat cloud scenes at nonabsorbing wavelengths. For scales greater than 200–400 m, the wavenumber spectrum is approximately power law in k^(−5/3), but from there down to the smallest observable scales (50–100 m) follows another k^(−β) law with β > 3. This implies very smooth radiance fields. The authors reexamine the empirical evidence for this scale break and explain it using fractal cloud models, Monte Carlo simulations, and a Green function approach to multiple scattering theory. In particular, the authors define the “radiative smoothing scale” and relate it to the characteristic scale of horizontal photon transport. The scale break was originally thought to occur at a scale commensurate with either the geometrical thickness Δz of the cloud, or with the “transport” mean free path l_t = [(1 − g)σ]^(−1), which incorporates the effect of forward scattering (σ is extinction and g the asymmetry factor of the phase function). The smoothing scale is found to be approximately …
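Both candidate scales are directly computable, and a scale break can be located with a two-segment fit. A hedged sketch with illustrative stratocumulus numbers, not the paper's:

    import numpy as np

    def transport_mfp(sigma, g):
        """Transport mean free path l_t = 1/((1 - g) sigma): the distance
        over which photon direction decorrelates despite forward scattering."""
        return 1.0 / ((1.0 - g) * sigma)

    # Illustrative: extinction 0.03 m^-1 (tau ~ 9 over a 300-m cloud), g = 0.85.
    print("l_t ~", round(transport_mfp(0.03, 0.85)), "m")

    def find_scale_break(k, E):
        """Pick the breakpoint minimizing the summed residuals of two
        independent log-log linear fits, one on each side."""
        lk, lE = np.log(k), np.log(E)
        best = None
        for i in range(4, k.size - 4):
            r1 = np.polyfit(lk[:i], lE[:i], 1, full=True)[1][0]
            r2 = np.polyfit(lk[i:], lE[i:], 1, full=True)[1][0]
            if best is None or r1 + r2 < best[0]:
                best = (r1 + r2, k[i])
        return best[1]

    # Synthetic spectrum: k^-5/3 above 200 m, k^-3.5 below, as for Landsat.
    k = np.logspace(-4, -1, 60)   # cycles per meter
    E = np.where(k < 1/200.0, k**(-5/3), (1/200.0)**(3.5 - 5/3) * k**(-3.5))
    print("break near", round(1.0 / find_scale_break(k, E)), "m")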
Abstract
Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of “bounded cascade” models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller-scale variability, where the radiative transfer is more three-dimensional, contributes less to the plane-parallel albedo bias than the larger scales, which are more variable. The lack of significant three-dimensional effects also relies on the assumption of a relatively simple geometry. Even with these assumptions, the independent pixel approximation is accurate only for fluxes averaged over large horizontal areas, many photon mean free paths in diameter, and not for local radiance values, which depend strongly on the interaction between neighboring cloud elements.
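The plane-parallel bias and the independent pixel approximation (IPA) can be demonstrated on a one-dimensional cascade field. A hedged sketch, with a bounded-cascade-like generator and a conservative two-stream albedo standing in for the Monte Carlo calculation (all parameters illustrative):

    import numpy as np

    rng = np.random.default_rng(2)

    def bounded_cascade(n_levels=12, c=0.8, f=2**(-1.0/3.0)):
        """Bounded-cascade-like field: split cells in two, shifting liquid
        water sideways by a factor that shrinks each level, so fluctuations
        stay bounded while the mean liquid water is conserved."""
        tau = np.array([1.0])
        for n in range(n_levels):
            s = rng.choice([-1.0, 1.0], size=tau.size) * c * f**n
            tau = np.column_stack([tau * (1 + s), tau * (1 - s)]).ravel()
        return tau

    def albedo(tau, g=0.85):
        """Conservative two-stream stand-in: R = (1-g)tau / (2 + (1-g)tau)."""
        t = (1.0 - g) * tau
        return t / (2.0 + t)

    tau = 13.0 * bounded_cascade()     # mean optical depth 13, fractal streets
    pp = albedo(tau.mean())            # plane-parallel: albedo of the mean
    ipa = albedo(tau).mean()           # independent pixel: mean of the albedos
    print("plane-parallel", round(pp, 3), "| IPA", round(ipa, 3),
          "| bias", round(pp - ipa, 3))

Because albedo is concave in tau, the plane-parallel value exceeds the IPA average (Jensen's inequality); the Monte Carlo result above says the further step from IPA to full 3D transport changes the area-average albedo by only about 1%.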
Abstract
By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^(D(r)), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially among larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
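The exponent D(r) plays the role of a box-counting dimension: the number of boxes of size l holding at least one drop scales as l^(−D), with D = 1 for drops that fill the line and D < 1 for clustering. A sketch contrasting Poisson-like drops with drops clustered by a crude beta-model cascade (an illustration of the scaling, not the companion paper's estimator):

    import numpy as np

    rng = np.random.default_rng(3)

    def box_dimension(x, sizes=tuple(0.5**k for k in range(4, 12))):
        """Box-counting exponent D from counts of occupied boxes of size l
        on the unit interval."""
        pts = [(np.log(l), np.log(len(np.unique(np.floor(x / l)))))
               for l in sizes]
        lx, ly = np.array(pts).T
        return -np.polyfit(lx, ly, 1)[0]

    # Homogeneous (Poisson-like) drop positions fill the line: D ~ 1.
    poisson = rng.random(4000)

    # Beta-model cascade: halve the cells each level; drops survive only in
    # cells that stay "wet" (probability p), giving D ~ 1 + log2(p) ~ 0.68.
    p = 0.8
    cascade = np.array([])
    while cascade.size < 1000:   # redraw in the rare case the cascade dies out
        cascade = rng.random(200000)
        for level in range(1, 11):
            wet = rng.random(2**level) < p
            cascade = cascade[wet[(cascade * 2**level).astype(int)]]

    print("Poisson   D ~", round(box_dimension(poisson), 2))
    print("clustered D ~", round(box_dimension(cascade), 2))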
Abstract
A new method for retrieving cloud optical depth from ground-based measurements of zenith radiance in the red (RED) and near-infrared (NIR) spectral regions is introduced. Because zenith radiance does not have a one-to-one relationship with optical depth, a monochromatic retrieval is not possible. On the other hand, algebraic combinations of spectral radiances, such as the normalized difference cloud index (NDCI), while largely removing nonuniqueness and the radiative effects of cloud inhomogeneity, can result in poor retrievals because of the index's insensitivity to cloud fraction. Instead, it is proposed that both RED and NIR radiances be used for the retrieval, as points on the “RED versus NIR” plane. The proposed retrieval method is applied to Cimel measurements at the Atmospheric Radiation Measurement (ARM) site in Oklahoma. The Cimel, a multichannel sun photometer, is part of the Aerosol Robotic Network (AERONET)—a ground-based network for monitoring aerosol optical properties. The retrieval results are compared with those from the microwave radiometer (MWR) and multifilter rotating shadowband radiometer (MFRSR) located next to the Cimel at the ARM site. In addition, the performance of the retrieval method is assessed using a fractal model of cloud inhomogeneity and broken cloudiness. The preliminary results look promising, both theoretically and against measurements.
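The geometry of the retrieval can be illustrated with a toy forward model: tabulate (RED, NIR) zenith radiances over a grid of optical depth and cloud fraction, exploiting a surface that is dark in RED and bright in NIR (vegetation), then invert by nearest neighbor in the RED versus NIR plane. The two-stream transmittance, surface albedos, and clear-sky term below are assumptions, not the authors' radiative transfer:

    import numpy as np

    G = 0.85                      # assumed droplet asymmetry factor
    A_RED, A_NIR = 0.10, 0.50     # assumed vegetated-surface albedos

    def trans(tau):
        """Conservative two-stream transmittance stand-in."""
        return 2.0 / (2.0 + (1.0 - G) * tau)

    def zenith_radiance(tau, fc, alb):
        """Toy zenith radiance under cloud fraction fc with optical depth
        tau, including surface-cloud multiple reflections."""
        T = trans(tau)
        cloudy = T / (1.0 - alb * (1.0 - T))
        clear = 0.05 + 0.2 * alb   # assumed weak clear-sky zenith signal
        return fc * cloudy + (1.0 - fc) * clear

    taus = np.linspace(1.0, 60.0, 60)
    fracs = np.linspace(0.3, 1.0, 15)
    TT, FF = np.meshgrid(taus, fracs, indexing="ij")
    red = zenith_radiance(TT, FF, A_RED)
    nir = zenith_radiance(TT, FF, A_NIR)

    def retrieve(red_obs, nir_obs):
        """Nearest neighbor in the RED-versus-NIR plane."""
        d2 = (red - red_obs)**2 + (nir - nir_obs)**2
        i, j = np.unravel_index(np.argmin(d2), d2.shape)
        return taus[i], fracs[j]

    # Round trip: synthesize an observation at tau = 20, fc = 0.9, invert it.
    obs = (zenith_radiance(20.0, 0.9, A_RED), zenith_radiance(20.0, 0.9, A_NIR))
    tau_hat, fc_hat = retrieve(*obs)
    print("retrieved tau, fc:", round(tau_hat, 1), round(fc_hat, 2))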
Abstract
Characterizing the performance of ground-based commercial radiometers in cold and/or low-pressure environments is critical for developing accurate flux measurements in the polar regions and in the upper troposphere and stratosphere. Commercially available broadband radiometers have a stated operational temperature range of, typically, −20° to +50°C; within this range, their sensitivities vary with temperature by less than 1%. But for deployments on high-altitude platforms or in polar regions, which can be much colder than −20°C, information on the temperature dependence of sensitivity is not always available. In this paper, the temperature dependencies of the sensitivities of popular pyranometers and pyrgeometers manufactured by Kipp and Zonen were tested in a thermal-vacuum chamber. When their body temperature is lowered to −60°C, pyranometer sensitivity drops by 4%–6% from the factory-default specification. Pyrgeometer sensitivity increases by 13% from the factory-default specification over a similar temperature change. When the chamber pressure is lowered from 830 to 6 hPa, the sensitivity decreases by about 2% for the pyranometer and increases by about 2% for the pyrgeometer. Note that these temperature and pressure dependencies are specific to the instruments that were tested and should not be applied to others. These findings show that, for measurements suitable for climate studies, it is crucial to characterize temperature and/or pressure effects on radiometer sensitivity for deployments on high-altitude platforms and in polar regions.
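In processing, such a characterization enters as a temperature-dependent correction to the factory sensitivity. A hedged sketch with an invented calibration table whose magnitude mirrors the pyranometer result above (both instrument constants are placeholders):

    import numpy as np

    # Hypothetical lab characterization: relative pyranometer sensitivity
    # versus body temperature (1.0 at the calibration temperature; the ~5%
    # falloff by -60 C mirrors the magnitude reported above).
    T_LAB = np.array([-60.0, -40.0, -20.0, 0.0, 20.0])   # deg C
    REL_S = np.array([0.95, 0.97, 0.99, 1.00, 1.00])

    S_FACTORY = 9.2e-6   # V per (W m^-2), hypothetical factory sensitivity

    def irradiance(volts, body_temp_c):
        """Convert thermopile voltage to W m^-2, correcting the factory
        sensitivity for the measured body temperature."""
        rel = np.interp(body_temp_c, T_LAB, REL_S)
        return volts / (S_FACTORY * rel)

    v = 4.6e-3   # volts
    print("corrected:  ", round(irradiance(v, -60.0), 1), "W m^-2")
    print("uncorrected:", round(v / S_FACTORY, 1), "W m^-2")

Without the correction, a cold pyranometer reads low by exactly the sensitivity drop, here about 5%.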