Search Results

You are looking at 1 - 10 of 24 items for

  • Author or Editor: Jean-Christophe Golaz
Vincent E. Larson and Jean-Christophe Golaz

Abstract

Parameterizations of turbulence often predict several lower-order moments and make closure assumptions for higher-order moments. In principle, the low- and high-order moments share the same probability density function (PDF). One closure assumption, then, is the shape of this family of PDFs. When the higher-order moments involve both velocity and thermodynamic scalars, the PDF shape has often been assumed to be a double or triple delta function, which is equivalent to assuming a mass-flux model with no subplume variability. However, because the assumed-PDF methodology is fairly general, PDF families other than delta functions can be assumed.

This paper proposes closures for several third- and fourth-order moments. To derive the closures, the moments are assumed to be consistent with a particular PDF family, namely, a mixture of two trivariate Gaussians. (Some authors call this PDF a double Gaussian or binormal PDF.) Separately from the PDF assumption, the paper also proposes a simplified relationship between scalar and velocity skewnesses. This PDF family and skewness relationship are simple enough to yield analytic closure formulas relating the moments. If certain conditions hold, the set of moments is realizable in a specific sense: it corresponds to a real Gaussian-mixture PDF, one that is normalized and nonnegative everywhere.
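Although the paper's closures apply to a trivariate mixture, the underlying idea — that every moment of a Gaussian mixture follows analytically from the component parameters — can be sketched in one dimension. The mixture fraction, means, and widths below are illustrative values, not taken from the paper:

```python
def mixture_raw_moments(a, mu1, sig1, mu2, sig2):
    """First four raw moments of a two-component 1-D Gaussian mixture.

    a is the mixture fraction of component 1; (mu_i, sig_i) are the
    component means and standard deviations.  Gaussian raw moments:
    E[x] = mu, E[x^2] = mu^2 + s^2, E[x^3] = mu^3 + 3 mu s^2,
    E[x^4] = mu^4 + 6 mu^2 s^2 + 3 s^4.
    """
    def raw(mu, s):
        return (mu,
                mu**2 + s**2,
                mu**3 + 3*mu*s**2,
                mu**4 + 6*mu**2*s**2 + 3*s**4)
    return tuple(a*x + (1 - a)*y for x, y in zip(raw(mu1, sig1),
                                                 raw(mu2, sig2)))

# Hypothetical narrow-updraft / broad-downdraft decomposition of w:
m1, m2, m3, m4 = mixture_raw_moments(0.3, 1.0, 0.5, -0.4, 0.3)
variance = m2 - m1**2
skewness = (m3 - 3*m1*m2 + 2*m1**3) / variance**1.5  # positively skewed
```

The paper derives the reverse direction in analytic form: given a few predicted low-order moments, the mixture parameters are determined, and the higher-order moments are then read off from the same PDF.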

This paper compares the new closure formulas with both large eddy simulations (LESs) and closures based on double and triple delta PDFs. This paper does not implement the closures in a single-column model and test them interactively. Rather, the comparisons are diagnostic; that is, low-order moments are extracted from the LES and treated as givens that are input into the closures. This isolates errors in the closures from errors in a single-column model. The test cases are three atmospheric boundary layers: a trade wind cumulus layer, a stratocumulus layer, and a clear convective case. The new closures have shortcomings, but nevertheless are superior to the double or triple delta closures in most of the cases tested.

Full access
Anning Cheng, Kuan-Man Xu, and Jean-Christophe Golaz

Abstract

A hierarchy of third-order turbulence closure models is used to simulate boundary layer cumuli in this study. An unrealistically strong liquid water oscillation (LWO) is found in the fully prognostic model, which predicts all third moments. The LWO propagates from cloud base to cloud top at a speed of 1 m s⁻¹, with a period of about 1000 s. Liquid water buoyancy (LWB) terms in the third-moment equations contribute to the LWO. The LWO mainly affects the vertical profiles of cloud fraction, mean liquid water mixing ratio, and the fluxes of liquid water potential temperature and total water, but has less impact on the vertical profiles of the other second and third moments.

In order to minimize the LWO, a moderately large diffusion coefficient and a large turbulent dissipation at its originating level are needed. However, this approach distorts the vertical distributions of cloud fraction and liquid water mixing ratio. A better approach is to parameterize LWB more realistically. A minimally prognostic model, which diagnoses all third moments except that of vertical velocity, is shown to produce better results than the fully prognostic model.

Full access
Vincent E. Larson, Jean-Christophe Golaz, and William R. Cotton

Abstract

The joint probability density function (PDF) of vertical velocity and conserved scalars is important for at least two reasons. First, the shape of the joint PDF determines the buoyancy flux in partly cloudy layers. Second, the PDF provides a wealth of information about subgrid variability and hence can serve as the foundation of a boundary layer cloud and turbulence parameterization.

This paper analyzes PDFs of stratocumulus, cumulus, and clear boundary layers obtained from both aircraft observations and large eddy simulations. The data are used to fit five families of PDFs: a double delta function, a single Gaussian, and three PDF families based on the sum of two Gaussians.

Overall, the double Gaussian (i.e., binormal) PDFs perform better than the single Gaussian or double delta function PDFs. In cumulus layers with low cloud fraction, the improvement occurs because typical PDFs are highly skewed, and it is crucial to accurately represent the tail of the distribution, which is where cloud occurs. Since the double delta function has been shown in prior work to be the PDF underlying mass-flux schemes, the data analysis herein hints that mass-flux simulations may be improved by using a parameterization built upon a more realistic PDF.

Full access
Jean-Christophe Golaz, Vincent E. Larson, and William R. Cotton

Abstract

A new single-column model for the cloudy boundary layer, described in a companion paper, is tested for a variety of regimes. To represent the subgrid-scale variability, the model uses a joint probability density function (PDF) of vertical velocity, temperature, and moisture content. Results from four different cases are presented and contrasted with large eddy simulations (LES). The cases include a clear convective layer based on the Wangara experiment, a trade wind cumulus layer from the Barbados Oceanographic and Meteorological Experiment (BOMEX), a case of cumulus clouds over land, and a nocturnal marine stratocumulus boundary layer.

Results from the Wangara experiment show that the model realistically predicts the diurnal growth of a dry convective layer. Compared to the LES, the layer produced is slightly less well mixed and entrainment is somewhat slower. Cloud cover in the cloudy cases varied widely, from a few percent to nearly overcast. In each of the cloudy cases, the parameterization predicted cloud fractions that agree reasonably well with the LES, although cloud fraction values tended to be somewhat smaller and cloud bases and tops were slightly underestimated. Liquid water content was generally within 40% of the LES-predicted values over a range spanning almost two orders of magnitude. This was accomplished without the use of any case-specific adjustments.

Full access
Jean-Christophe Golaz, Vincent E. Larson, and William R. Cotton

Abstract

A new cloudy boundary layer single-column model is presented. It is designed to be flexible enough to represent a variety of cloudiness regimes (such as cumulus, stratocumulus, and clear layers) without the need for case-specific adjustments. The methodology behind the model is the so-called assumed probability density function (PDF) method. The parameterization differs from higher-order closure and mass-flux schemes in that it achieves closure by using a relatively sophisticated joint PDF of vertical velocity, temperature, and moisture. A double Gaussian PDF family proposed in previous work is used, chosen because it is flexible enough to represent various cloudiness regimes. Predictive equations for grid box means and a number of higher-order turbulent moments are advanced in time. These moments are in turn used to select a particular member from the family of PDFs for each time step and grid box. Once a PDF member has been selected, the scheme integrates over the PDF to close higher-order moments and buoyancy terms and to diagnose cloud fraction and liquid water. Since all the diagnosed moments for a given grid box and time step are derived from the same joint PDF, they are guaranteed to be consistent with one another. A companion paper presents simulations produced by the single-column model.
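The final diagnostic step — integrating over the selected PDF to obtain cloud properties — reduces, in the simplest univariate case, to asking what fraction of the assumed distribution of total water lies above saturation. The sketch below uses a single Gaussian rather than the scheme's joint double Gaussian, and all numbers are illustrative only:

```python
import math

def cloud_fraction(qt_mean, qt_std, q_sat):
    """Fraction of the grid box where total water exceeds saturation,
    assuming subgrid total water is Gaussian: 1 - CDF(q_sat)."""
    z = (q_sat - qt_mean) / qt_std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Mean total water one standard deviation below saturation (kg/kg):
cf = cloud_fraction(qt_mean=9.0e-3, qt_std=0.5e-3, q_sat=9.5e-3)
# roughly 16% of the grid box is cloudy even though the mean is unsaturated
```

This is why an assumed-PDF scheme can produce partial cloudiness in a grid box whose mean state is subsaturated, something a scheme based only on grid-mean saturation cannot do.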

Full access
Jing Huang, Elie Bou-Zeid, and Jean-Christophe Golaz

Abstract

This is the second part of a study of turbulence and vertical fluxes in the stable atmospheric boundary layer. Building on the suite of large-eddy simulations in Part I, where the effects of stability on the turbulent structures and kinetic energy are investigated, first-order parameterization schemes are assessed and tested in the Geophysical Fluid Dynamics Laboratory (GFDL) single-column model. The applicability of the gradient-flux hypothesis is examined first, and stable conditions are found to be favorable for that hypothesis. However, introducing a stability correction function fm as a multiplicative factor on the neutral mixing length lN is shown to be problematic, because fm computed a priori from large-eddy simulations tends not to be a universal function of stability. Motivated by this observation, a novel mixing-length model is proposed that conforms much better to large-eddy simulation results under stable conditions and converges to the classic model under neutral conditions. Test cases imposing steady as well as unsteady forcings are developed to evaluate the new model. The new model exhibits robust performance as the stability strength changes, whereas other models are sensitive to changes in stability. For cases with unsteady forcings, which are rarely simulated or tested, the single-column model results are also closer to the large-eddy simulations when the new model is used. However, unsteady cases are much more challenging for the turbulence closure formulations than cases with steady surface forcing.
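For concreteness, the construct being criticized can be sketched: a gradient-flux (K-theory) closure in which the mixing length is the neutral length lN multiplied by a stability correction fm. The Blackadar form for lN and the linear cutoff form for fm below are common textbook choices, not the functions used in the paper:

```python
KAPPA = 0.4  # von Karman constant

def l_neutral(z, lam=40.0):
    """Blackadar neutral mixing length: ~kz near the surface,
    asymptoting to lam (m) aloft."""
    return KAPPA * z / (1.0 + KAPPA * z / lam)

def f_m(Ri, c=5.0):
    """Illustrative stability correction as a function of the gradient
    Richardson number; cuts turbulence off entirely at Ri = 1/c."""
    return max(0.0, 1.0 - c * Ri) if Ri > 0.0 else 1.0

def momentum_flux(z, dudz, Ri):
    """Gradient-flux hypothesis: -u'w' = K du/dz with K = l^2 |du/dz|
    and l = lN * fm, the multiplicative form questioned in the paper."""
    l = l_neutral(z) * f_m(Ri)
    return l * l * abs(dudz) * dudz
```

The paper's point is that no single function fm(Ri) fits the large-eddy simulations across stability regimes, which motivates replacing the product lN·fm with a different mixing-length formulation altogether.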

Full access
Huan Guo, Jean-Christophe Golaz, Leo J. Donner, Paul Ginoux, and Richard S. Hemler

Abstract

A unified turbulence and cloud parameterization based on multivariate probability density functions (PDFs) has been incorporated into the GFDL atmospheric general circulation model (AM3). This PDF-based parameterization not only predicts subgrid variations in vertical velocity, temperature, and total water, which bridge subgrid-scale processes (e.g., aerosol activation and cloud microphysics) and grid-scale dynamic and thermodynamic fields, but also unifies the treatment of the planetary boundary layer (PBL), shallow convection, and cloud macrophysics. This parameterization is called the Cloud Layers Unified by Binormals (CLUBB) parameterization. With CLUBB incorporated and coupled with a two-moment cloud microphysical scheme, AM3–CLUBB allows for a more physically based and self-consistent treatment of aerosol activation, cloud micro- and macrophysics, the PBL, and shallow convection.

The configuration and performance of AM3–CLUBB are described. Cloud and radiation fields, as well as most basic climate features, are modeled realistically. Relative to AM3, AM3–CLUBB improves the simulation of coastal stratocumulus and its seasonal cycle, a longstanding deficiency in GFDL models, especially at higher horizontal resolution, although global skill scores deteriorate slightly. Sensitivity experiments show that 1) the two-moment cloud microphysics helps relieve the coastal stratocumulus deficiency, 2) using the CLUBB subgrid cloud water variability in the cloud microphysics has a considerable positive impact on global cloudiness, and 3) adjusting CLUBB parameters improves the overall agreement between model and observations.

Full access
Jean-Christophe Golaz, Marc Salzmann, Leo J. Donner, Larry W. Horowitz, Yi Ming, and Ming Zhao

Abstract

The recently developed GFDL Atmospheric Model version 3 (AM3), an atmospheric general circulation model (GCM), incorporates a prognostic treatment of cloud drop number to simulate the aerosol indirect effect. Since cloud drop activation depends on cloud-scale vertical velocities, which are not reproduced in present-day GCMs, additional assumptions on the subgrid variability are required to implement a local activation parameterization into a GCM.

This paper describes the subgrid activation assumptions in AM3 and explores sensitivities by constructing alternate configurations. These alternate model configurations exhibit only small differences in their present-day climatology. However, the total anthropogenic radiative flux perturbation (RFP) between present-day and preindustrial conditions varies by ±50% from the reference, because of a large difference in the magnitude of the aerosol indirect effect. The spread in RFP does not originate directly from the subgrid assumptions but indirectly through the cloud retuning necessary to maintain a realistic radiation balance. In particular, the paper shows a linear correlation between the choice of autoconversion threshold radius and the RFP.

Climate sensitivity changes only minimally between the reference and alternate configurations. Because the configurations share nearly the same climate sensitivity but differ substantially in anthropogenic forcing, they would likely produce substantially different warming from preindustrial to present day if implemented in a fully coupled model.

Full access
Kentaroh Suzuki, Graeme Stephens, Alejandro Bodas-Salcedo, Minghuai Wang, Jean-Christophe Golaz, Tokuta Yokohata, and Tsuyoshi Koshiro

Abstract

This study examines the warm rain formation process over the global ocean in global climate models. Methodologies developed to analyze CloudSat and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite observations are employed to investigate the cloud-to-precipitation process of warm clouds and are applied to the model results to examine how the models represent the process for warm stratiform clouds. A limitation of the present study is that statistics for stratiform clouds in climate models are compared with satellite observations that include both stratiform and (shallow) convective clouds; nevertheless, the comparison exposes similarities and differences between the models and the observations. A problem common to some models is that they tend to produce rain at a faster rate than is observed. These model characteristics are further examined in the context of cloud microphysics parameterizations using a simplified one-dimensional model of warm rain formation that isolates key microphysical processes from full interactions with other processes in global climate models. The one-dimensional model reproduces key characteristics of the global model statistics when the corresponding autoconversion schemes are assumed, so the global model behavior depicted by the statistics can be interpreted as reflecting the autoconversion parameterizations adopted in the models. Comparisons of the one-dimensional model with satellite observations hint at improvements to the formulation of the parameterization scheme, thus offering a novel way of constraining key parameters in autoconversion schemes of global models.
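The isolation strategy behind the simplified one-dimensional model can be sketched with a toy zero-dimensional budget: cloud water is depleted by a single autoconversion formula and the resulting rain is accumulated. The Kessler-type threshold formula and all constants here are illustrative stand-ins, not the schemes evaluated in the study:

```python
def autoconversion(qc, k=1.0e-3, qc_crit=0.5e-3):
    """Kessler-type autoconversion rate (kg/kg/s): linear in cloud
    water above a critical mixing ratio, zero below it."""
    return k * max(qc - qc_crit, 0.0)

def rain_after(qc0, dt=1.0, t_end=3600.0):
    """Integrate the toy cloud-to-rain budget for t_end seconds;
    returns the final cloud water and accumulated rain."""
    qc, qr = qc0, 0.0
    for _ in range(int(t_end / dt)):
        a = autoconversion(qc) * dt
        qc -= a
        qr += a
    return qc, qr

qc, qr = rain_after(1.0e-3)
# water is conserved (qc + qr == qc0) and qr approaches qc0 - qc_crit
```

Raising k (or lowering qc_crit) makes rain form faster at the same cloud water content, which is exactly the kind of behavior the one-dimensional framework is used to diagnose in the full models.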

Full access
Vincent E. Larson, Jean-Christophe Golaz, Hongli Jiang, and William R. Cotton

Abstract

One problem in computing cloud microphysical processes in coarse-resolution numerical models is that many microphysical processes are nonlinear and small in scale. Consequently, there are inaccuracies if microphysics parameterizations are forced with grid box averages of model fields, such as liquid water content. Rather, the model needs to determine information about subgrid variability and input it into the microphysics parameterization.

One possible solution is to assume the shape of the family of probability density functions (PDFs) associated with a grid box and sample it using the Monte Carlo method. In this method, the microphysics subroutine is called repeatedly, once with each sample point. In this way, the Monte Carlo method acts as an interface between the host model’s dynamics and the microphysical parameterization. This avoids the need to rewrite the microphysics subroutines.

A difficulty with the Monte Carlo method is that the finite sample size introduces statistical noise, that is, variance, into the simulation. If the family of PDFs is tractable, one can sample solely from cloud, thereby improving estimates of in-cloud processes. To mitigate the noise further, one needs a method of variance reduction. One such method is Latin hypercube sampling, which reduces noise by spreading out the sample points in a quasi-random fashion.

This paper formulates a sampling interface based on the Latin hypercube method. The associated family of PDFs is assumed to be a joint normal/lognormal (i.e., Gaussian/lognormal) mixture. This method of variance reduction has a couple of advantages. First, the method is general: the same interface can be used with a wide variety of microphysical parameterizations for various processes. Second, the method is flexible: one can arbitrarily specify the number of hydrometeor categories and the number of calls to the microphysics parameterization per grid box per time step.

This paper performs a preliminary test of Latin hypercube sampling. As a prototypical microphysical formula, this paper uses the Kessler autoconversion formula. The PDFs that are sampled are extracted diagnostically from large-eddy simulations (LES). Both stratocumulus and cumulus boundary layer cases are tested. In this diagnostic test, the Latin hypercube can produce somewhat less noisy time-averaged estimates of Kessler autoconversion than a traditional Monte Carlo estimate, with no additional calls to the microphysics parameterization. However, the instantaneous estimates are no less noisy. This paper leaves unanswered the question of whether the Latin hypercube method will work well in a prognostic, interactive cloud model, but this question will be addressed in a future manuscript.
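A stripped-down, one-dimensional sketch of the comparison in this abstract: estimate grid-mean Kessler autoconversion from an assumed Gaussian of cloud water, once with independent Monte Carlo draws and once with Latin hypercube strata. The PDF parameters, sample counts, and rate constants are illustrative; the paper's interface samples a joint normal/lognormal mixture:

```python
import random
from statistics import NormalDist

def kessler(qc, k=1.0e-3, qc_crit=0.5e-3):
    """Kessler-type autoconversion rate: linear above a threshold."""
    return k * max(qc - qc_crit, 0.0)

PDF = NormalDist(mu=0.6e-3, sigma=0.3e-3)  # assumed subgrid qc PDF

def mc_estimate(n, rng):
    """Plain Monte Carlo: n independent draws from the PDF."""
    return sum(kessler(max(0.0, PDF.inv_cdf(rng.random())))
               for _ in range(n)) / n

def lhs_estimate(n, rng):
    """Latin hypercube: one random draw inside each of n
    equal-probability strata, spreading points across the PDF."""
    probs = [(i + rng.random()) / n for i in range(n)]
    return sum(kessler(max(0.0, PDF.inv_cdf(p))) for p in probs) / n

rng = random.Random(0)
mc = [mc_estimate(8, rng) for _ in range(200)]
lh = [lhs_estimate(8, rng) for _ in range(200)]

def spread(xs):
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / len(xs)
# With the same 8 microphysics calls per estimate, the Latin hypercube
# estimates scatter far less from trial to trial than the plain
# Monte Carlo ones (compare spread(lh) with spread(mc)).
```

Both estimators are unbiased; the Latin hypercube only changes how the sample points are placed, which is why it needs no additional calls to the microphysics parameterization.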

Full access