Search Results

You are looking at 1 - 10 of 11 items for

  • Author or Editor: R. E. Larson
P. E. Wilkniss
and
R. E. Larson

Abstract

Radon was determined in the atmosphere over the Arctic Ocean in flights of a United States Naval Research Laboratory aircraft in April and May 1974. Simultaneously collected air samples were analyzed for carbon monoxide, methane, trichlorofluoromethane, and carbon tetrachloride. Flights at 305 m altitude revealed significant spatial and temporal variations in Arctic air masses and the formation of distinct fronts during warm-air advection. Radon measurements, Defense Meteorological Satellite Project images, and conventional meteorological charts demonstrated the transport of North American air west of Greenland, and of marine air and European air east of Greenland, to the Arctic Ocean basin. Radon and the other trace gases served to identify stratospheric air masses and showed that southern continental air did not penetrate over the Arctic Ocean basin at high altitudes (7.6–8.2 km). Trace gas measurements aboard ships and aircraft during different seasons in the Arctic suggest that seasonal variations are masked by synoptic variations.

Full access
Vincent E. Larson
,
Jean-Christophe Golaz
, and
William R. Cotton

Abstract

The joint probability density function (PDF) of vertical velocity and conserved scalars is important for at least two reasons. First, the shape of the joint PDF determines the buoyancy flux in partly cloudy layers. Second, the PDF provides a wealth of information about subgrid variability and hence can serve as the foundation of a boundary layer cloud and turbulence parameterization.

This paper analyzes PDFs of stratocumulus, cumulus, and clear boundary layers obtained from both aircraft observations and large eddy simulations. The data are used to fit five families of PDFs: a double delta function, a single Gaussian, and three PDF families based on the sum of two Gaussians.

Overall, the double Gaussian (i.e., binormal) PDFs perform better than the single Gaussian or double delta function PDFs. In cumulus layers with low cloud fraction, the improvement occurs because typical PDFs are highly skewed, and it is crucial to accurately represent the tail of the distribution, which is where cloud occurs. Since the double delta function has been shown in prior work to be the PDF underlying mass-flux schemes, the data analysis herein hints that mass-flux simulations may be improved upon by using a parameterization built upon a more realistic PDF.
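As a minimal numerical sketch of why a two-Gaussian mixture can represent the skewed distributions typical of cumulus layers while a single Gaussian cannot, the following computes the skewness of a hypothetical double Gaussian PDF. All parameter values (component weights, means, and widths) are invented for illustration and are not taken from the paper.

```python
import math

def gaussian(x, mu, sigma):
    """Normal PDF evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def double_gaussian(x, w, mu1, s1, mu2, s2):
    """Mixture of two Gaussians with weight w on the first component."""
    return w * gaussian(x, mu1, s1) + (1 - w) * gaussian(x, mu2, s2)

def moment(pdf, n, lo=-10.0, hi=10.0, steps=20000):
    """Numerical n-th raw moment of a PDF via the trapezoidal rule."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        wgt = 0.5 if i in (0, steps) else 1.0
        total += wgt * (x ** n) * pdf(x) * dx
    return total

# A broad "environment" mode plus a narrow, displaced "updraft" mode.
pdf = lambda x: double_gaussian(x, 0.9, 0.0, 1.0, 2.5, 0.5)

m1 = moment(pdf, 1)
m2 = moment(pdf, 2)
m3 = moment(pdf, 3)
var = m2 - m1 ** 2
skew = (m3 - 3 * m1 * var - m1 ** 3) / var ** 1.5
print(f"mixture skewness ~ {skew:.2f}")  # nonzero: the tail carries the cloud
```

A single Gaussian with the same mean and variance would have skewness exactly zero, so it cannot place extra probability in the saturated tail where cumulus cloud occurs.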

Full access
Jean-Christophe Golaz
,
Vincent E. Larson
, and
William R. Cotton

Abstract

A new cloudy boundary layer single-column model is presented. It is designed to be flexible enough to represent a variety of cloudiness regimes—such as cumulus, stratocumulus, and clear regimes—without the need for case-specific adjustments. The methodology behind the model is the so-called assumed probability density function (PDF) method. The parameterization differs from higher-order closure or mass-flux schemes in that it achieves closure by the use of a relatively sophisticated joint PDF of vertical velocity, temperature, and moisture. A family of PDFs is chosen that is flexible enough to represent various cloudiness regimes; a double Gaussian family proposed in previous works is used. Predictive equations for grid box means and a number of higher-order turbulent moments are advanced in time. These moments are in turn used to select a particular member from the family of PDFs for each time step and grid box. Once a PDF member has been selected, the scheme integrates over the PDF to close higher-order moments and buoyancy terms and to diagnose cloud fraction and liquid water. Since all the diagnosed moments for a given grid box and time step are derived from the same unique joint PDF, they are guaranteed to be consistent with one another. A companion paper presents simulations produced by the single-column model.

Full access
Jean-Christophe Golaz
,
Vincent E. Larson
, and
William R. Cotton

Abstract

A new single-column model for the cloudy boundary layer, described in a companion paper, is tested for a variety of regimes. To represent the subgrid-scale variability, the model uses a joint probability density function (PDF) of vertical velocity, temperature, and moisture content. Results from four different cases are presented and contrasted with large eddy simulations (LES). The cases include a clear convective layer based on the Wangara experiment, a trade wind cumulus layer from the Barbados Oceanographic and Meteorological Experiment (BOMEX), a case of cumulus clouds over land, and a nocturnal marine stratocumulus boundary layer.

Results from the Wangara experiment show that the model is capable of realistically predicting the diurnal growth of a dry convective layer. Compared to the LES, the layer produced is slightly less well mixed and entrainment is somewhat slower. The cloud cover in the cloudy cases varied widely, ranging from a few percent cloud cover to nearly overcast. In each of the cloudy cases, the parameterization predicted cloud fractions that agree reasonably well with the LES. Typically, cloud fraction values tended to be somewhat smaller in the parameterization, and cloud bases and tops were slightly underestimated. Liquid water content was generally within 40% of the LES-predicted values for a range of values spanning almost two orders of magnitude. This was accomplished without the use of any case-specific adjustments.

Full access
P. E. Wilkniss
,
R. E. Larson
,
D. J. Bressan
, and
Joseph Steranka

Abstract

Full access
Z. J. Lebo
,
C. R. Williams
,
G. Feingold
, and
V. E. Larson

Abstract

The spatial variability of rain rate R is evaluated by using both radar observations and cloud-resolving model output, focusing on the Tropical Warm Pool–International Cloud Experiment (TWP-ICE) period. In general, the model-predicted rain-rate probability distributions agree well with those estimated from the radar data across a wide range of spatial scales. The spatial variability in R, defined as the standard deviation σ(R) of R (for R greater than a predefined threshold R_min), is found to vary according to both the average of R over a given footprint μ(R) and the footprint size or averaging scale Δ. There is good agreement between area-averaged model output and radar data at a height of 2.5 km. The model output at the surface is used to construct a scale-dependent parameterization of σ(R) as a function of μ(R) and Δ that can be readily implemented into large-scale numerical models. The variability in both the rainwater mixing ratio q_r and R as a function of height is also explored. From the statistical analysis, a scale- and height-dependent formulation for the spatial variability of both q_r and R is provided for the analyzed tropical scenario. Last, it is shown how this parameterization can be used to assist in constraining parameters that are often used to describe the surface rain-rate distribution.
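The statistic described above can be sketched as follows: for each footprint of a given averaging scale Δ, compute the mean μ(R) and standard deviation σ(R) over grid points exceeding a threshold R_min. The rain field, the threshold value, and the footprint sizes below are all synthetic assumptions for illustration, not the TWP-ICE data.

```python
import random

random.seed(0)
N = 64
R_MIN = 0.1  # mm/h, assumed threshold

# Synthetic rain-rate field: mostly dry, with patchy exponential rain.
field = [[random.expovariate(1.0) if random.random() < 0.3 else 0.0
          for _ in range(N)] for _ in range(N)]

def footprint_stats(field, delta, r_min=R_MIN):
    """Return (mu, sigma) for each delta-by-delta footprint,
    using only grid points with R > r_min."""
    n = len(field)
    stats = []
    for i0 in range(0, n, delta):
        for j0 in range(0, n, delta):
            vals = [field[i][j]
                    for i in range(i0, i0 + delta)
                    for j in range(j0, j0 + delta)
                    if field[i][j] > r_min]
            if len(vals) < 2:
                continue
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals)
            stats.append((mu, var ** 0.5))
    return stats

for delta in (8, 16, 32):
    s = footprint_stats(field, delta)
    mean_sigma = sum(sig for _, sig in s) / len(s)
    print(f"Delta = {delta:2d} grid cells: mean sigma(R) = {mean_sigma:.2f}")
```

Tabulating σ(R) against μ(R) for several Δ values is the raw material from which a scale-dependent fit such as the one described in the abstract could be constructed.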

Full access
Vincent E. Larson
,
Jean-Christophe Golaz
,
Hongli Jiang
, and
William R. Cotton

Abstract

One problem in computing cloud microphysical processes in coarse-resolution numerical models is that many microphysical processes are nonlinear and small in scale. Consequently, there are inaccuracies if microphysics parameterizations are forced with grid box averages of model fields, such as liquid water content. Rather, the model needs to determine information about subgrid variability and input it into the microphysics parameterization.

One possible solution is to assume the shape of the family of probability density functions (PDFs) associated with a grid box and sample it using the Monte Carlo method. In this method, the microphysics subroutine is called repeatedly, once with each sample point. In this way, the Monte Carlo method acts as an interface between the host model’s dynamics and the microphysical parameterization. This avoids the need to rewrite the microphysics subroutines.

A difficulty with the Monte Carlo method is that it introduces into the simulation statistical noise or variance, associated with the finite sample size. If the family of PDFs is tractable, one can sample solely from cloud, thereby improving estimates of in-cloud processes. If one wishes to mitigate the noise further, one needs a method for reduction of variance. One such method is Latin hypercube sampling, which reduces noise by spreading out the sample points in a quasi-random fashion.

This paper formulates a sampling interface based on the Latin hypercube method. The associated family of PDFs is assumed to be a joint normal/lognormal (i.e., Gaussian/lognormal) mixture. This method of variance reduction has a couple of advantages. First, the method is general: the same interface can be used with a wide variety of microphysical parameterizations for various processes. Second, the method is flexible: one can arbitrarily specify the number of hydrometeor categories and the number of calls to the microphysics parameterization per grid box per time step.

This paper performs a preliminary test of Latin hypercube sampling. As a prototypical microphysical formula, this paper uses the Kessler autoconversion formula. The PDFs that are sampled are extracted diagnostically from large-eddy simulations (LES). Both stratocumulus and cumulus boundary layer cases are tested. In this diagnostic test, the Latin hypercube can produce somewhat less noisy time-averaged estimates of Kessler autoconversion than a traditional Monte Carlo estimate, with no additional calls to the microphysics parameterization. However, the instantaneous estimates are no less noisy. This paper leaves unanswered the question of whether the Latin hypercube method will work well in a prognostic, interactive cloud model, but this question will be addressed in a future manuscript.
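A minimal sketch (not the paper's implementation) of the contrast described above: estimating the grid-box mean of a Kessler-type autoconversion rate by plain Monte Carlo versus one-dimensional Latin hypercube (stratified) sampling. The subgrid liquid water distribution, the rate coefficient, and the threshold are all assumed values.

```python
import math
import random
import statistics

K_AUTO = 1e-3   # autoconversion rate coefficient (1/s), assumed
Q_CRIT = 0.5    # autoconversion threshold (g/kg), assumed
ND = statistics.NormalDist(mu=-0.5, sigma=0.7)  # distribution of ln(q_c), assumed

def kessler(q_c):
    """Kessler-type autoconversion: linear above a threshold, zero below."""
    return K_AUTO * max(q_c - Q_CRIT, 0.0)

def mc_estimate(n, rng):
    """Plain Monte Carlo: n independent lognormal samples of q_c."""
    return sum(kessler(math.exp(ND.inv_cdf(rng.random())))
               for _ in range(n)) / n

def lhs_estimate(n, rng):
    """Latin hypercube: one uniform draw per equal-probability stratum."""
    total = 0.0
    for i in range(n):
        u = (i + rng.random()) / n      # stratified uniform in (0, 1)
        total += kessler(math.exp(ND.inv_cdf(u)))
    return total / n

rng = random.Random(1)
mc = [mc_estimate(16, rng) for _ in range(200)]
lhs = [lhs_estimate(16, rng) for _ in range(200)]
print(f"MC  estimator std: {statistics.stdev(mc):.2e}")
print(f"LHS estimator std: {statistics.stdev(lhs):.2e}")
```

Both estimators are unbiased for the same mean, but spreading the sample points across equal-probability strata removes most of the sampling noise for a fixed number of microphysics calls, which is the mechanism of variance reduction the paper exploits.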

Full access
Vincent E. Larson
,
Robert Wood
,
Paul R. Field
,
Jean-Christophe Golaz
,
Thomas H. Vonder Haar
, and
William R. Cotton

Abstract

A grid box in a numerical model that ignores subgrid variability has biases in certain microphysical and thermodynamic quantities relative to the values that would be obtained if subgrid-scale variability were taken into account. The biases are important because they are systematic and hence have cumulative effects. Several types of biases are discussed in this paper. Namely, numerical models that employ convex autoconversion formulas underpredict (or, more precisely, never overpredict) autoconversion rates, and numerical models that use convex functions to diagnose specific liquid water content and temperature underpredict these latter quantities. One may call these biases the “grid box average autoconversion bias,” “grid box average liquid water content bias,” and “grid box average temperature bias,” respectively, because the biases arise when grid box average values are substituted into formulas valid at a point, not over an extended volume. The existence of these biases can be derived from Jensen’s inequality.
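The Jensen's-inequality argument above can be illustrated numerically: for a convex autoconversion formula, feeding in the grid box average liquid water never overpredicts the true average autoconversion over the box. The Kessler-type formula, its coefficients, and the subgrid samples below are invented for illustration.

```python
K_AUTO = 1e-3   # rate coefficient (1/s), assumed
Q_CRIT = 0.5    # threshold liquid water (g/kg), assumed

def autoconversion(q):
    """Convex (piecewise linear) Kessler-type autoconversion."""
    return K_AUTO * max(q - Q_CRIT, 0.0)

# Subgrid liquid water samples: half the box cloudy, half nearly clear.
subgrid_q = [0.0, 0.1, 0.2, 0.9, 1.1, 1.3]

true_avg = sum(autoconversion(q) for q in subgrid_q) / len(subgrid_q)
biased = autoconversion(sum(subgrid_q) / len(subgrid_q))

print(f"average-then-convert: {biased:.2e}")   # the systematic underestimate
print(f"convert-then-average: {true_avg:.2e}")
assert biased <= true_avg  # Jensen's inequality for a convex function
```

Because the clear-air samples pull the box-mean liquid water toward the threshold, the "average-then-convert" value misses the contribution of the cloudy tail, which is exactly the grid box average autoconversion bias named in the abstract.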

To assess the magnitude of the biases, the authors analyze observations of boundary layer clouds. Often the biases are small, but the observations demonstrate that the biases can be large in important cases.

In addition, the authors prove that the average liquid water content and temperature of an isolated, partly cloudy, constant-pressure volume of air cannot increase, even temporarily. The proof assumes that liquid water content can be written as a convex function of conserved variables with equal diffusivities. The temperature decrease is due to evaporative cooling as cloudy and clear air mix. More generally, the authors prove that if an isolated volume of fluid contains conserved scalars with equal diffusivities, then the average of any convex, twice-differentiable function of the conserved scalars cannot increase.

Full access
Vincent E. Larson
,
Robert Wood
,
Paul R. Field
,
Jean-Christophe Golaz
,
Thomas H. Vonder Haar
, and
William R. Cotton

Abstract

A key to parameterization of subgrid-scale processes is the probability density function (PDF) of conserved scalars. If the appropriate PDF is known, then grid box average cloud fraction, liquid water content, temperature, and autoconversion can be diagnosed. Despite the fundamental role of PDFs in parameterization, there have been few observational studies of conserved-scalar PDFs in clouds. The present work analyzes PDFs from boundary layers containing stratocumulus, cumulus, and cumulus-rising-into-stratocumulus clouds.

Using observational aircraft data, the authors test eight different parameterizations of PDFs, including double delta function, gamma function, Gaussian, and double Gaussian shapes. The Gaussian parameterization, which depends on two parameters, fits most observed PDFs well but fails for large-scale PDFs of cumulus legs. In contrast, three-parameter parameterizations appear to be sufficiently general to model PDFs from a variety of cloudy boundary layers.

If a numerical model ignores subgrid variability, the model has biases in diagnoses of grid box average liquid water content, temperature, and Kessler autoconversion, relative to the values it would obtain if subgrid variability were taken into account. The magnitude of such biases is assessed using observational data. The biases can be largely eliminated by three-parameter PDF parameterizations.

Prior authors have suggested that boundary layer PDFs from short segments are approximately Gaussian. The present authors find that the hypothesis that PDFs of total specific water content are Gaussian can almost always be rejected for segments as small as 1 km.

Full access
L. W. Larson
,
R. L. Ferral
,
E. T. Strem
,
A. J. Morin
,
B. Armstrong
,
T. R. Carroll
,
M. D. Hudlow
,
L. A. Wenzel
,
G. L. Schaefer
, and
D. E. Johnson

Abstract

The River and Flood Program in the National Weather Service, in its mission to save lives and property, has the responsibility to gather hydrologic data from a variety of sources and to assemble the data to make timely and reliable hydrologic forecasts. The intent of this paper, the second in a series of three, is to present an overview of the operational responsibilities of the River and Flood Program: how data are collected, what models and systems are currently in operation to process the data, and how these procedures and techniques are applied to different types of hydrologic forecasting.

Full access