Abstract

Many cloud microphysical processes occur on a much smaller scale than a typical numerical grid box can resolve. In such cases, a probability density function (PDF) can act as a proxy for subgrid variability in these microphysical processes. This method is known as the assumed PDF method. By placing a density on the microphysical fields, one can use samples from this density to estimate microphysics averages. In the assumed PDF method, the calculation of such microphysical averages has primarily been done using classical Monte Carlo methods and Latin hypercube sampling. Although these techniques are fairly easy to implement and ubiquitous in the literature, they suffer from slow convergence rates as a function of the number of samples. This paper proposes using deterministic quadrature methods instead of traditional random sampling approaches to compute the microphysics statistical moments for the assumed PDF method. For smooth functions, the quadrature-based methods can achieve much greater accuracy with fewer samples by choosing tailored quadrature points and weights instead of random samples. Moreover, these techniques are fairly easy to implement and conceptually similar to Monte Carlo–type methods. As a prototypical microphysical formula, Khairoutdinov and Kogan’s autoconversion and accretion formulas are used to illustrate the benefit of using quadrature instead of Monte Carlo or Latin hypercube sampling.

1. Introduction

To accurately simulate weather and climate, it is necessary to accurately simulate cloud microphysical processes. This is difficult in part because microphysical fields, such as cloud water and rain, vary on spatial scales smaller than a typical numerical grid box. Reducing gridbox size in order to resolve microphysical processes is often computationally prohibitive. Moreover, if the gridbox size is large, then it is usually inaccurate to estimate the microphysical process rates simply by using gridbox averages of the microphysics fields (Pincus and Klein 2000). The reason is that the relationship between the microphysical process rates and the microphysics quantities themselves is usually nonlinear. For example, the autoconversion rate, which is the rate of mass transfer from cloud to rain droplets due to the coalescence of cloud droplets, typically depends nonlinearly on the cloud water mixing ratio among other quantities (e.g., Khairoutdinov and Kogan 2000). The nonlinearity of the microphysics processes means that, in order to accurately calculate the grid mean, one needs to take into account the probability density of the underlying microphysical fields over the grid box, such as the probability density function (PDF) of the cloud water mixing ratio (Larson et al. 2001).

To calculate grid means, one may use the assumed PDF method, which has been used in process studies (e.g., Sommeria and Deardorff 1977; Mellor 1977; Manton and Cotton 1977) and global simulations (e.g., Smith 1990; Tompkins 2002; Bogenschutz et al. 2013; Guo et al. 2014). In the assumed PDF method, the microphysical fields belong to a family of random variables with a known PDF whose shape or functional form is assumed. For instance, the PDF shape may be assumed to be normal or lognormal (Mellor 1977; Golaz et al. 2002; Cheng and Xu 2006; Bogenschutz and Krueger 2013; Larson and Griffin 2013). The PDF represents the subgrid variability in the microphysics fields. The choice of PDF shape depends upon the underlying microphysics properties, observational data, and mathematical tractability (Larson et al. 2005). Although the functional form of the PDF is assumed, the PDF itself may vary over space and time as its mean, variance, and other statistical moments vary. Once the PDF has been calculated, and a microphysics model or scheme is chosen, then the grid means of the microphysical process rates can be calculated by integrating the microphysical rate formulas over the joint PDF of the microphysical variables.

From this point of view, an essential aspect of calculating grid-mean microphysical process rates in a coarse-resolution model is integration. Several different integration methods have been proposed in the past.

One such method is feeding within-cloud averages, rather than grid means, into microphysical schemes (e.g., Park and Bretherton 2009). A within-cloud average is calculated by dividing a grid mean by the cloud fraction, namely, the fraction of a grid box that is estimated to be occupied by cloud. When the grid mean and cloud fraction are known, this is a simple calculation. However, this method does ignore variability within clouds. It is tantamount to assuming that the subgrid PDF is a double delta function, in which one delta function represents cloudy air and the other represents clear air. This can lead to a significant underestimate of the true mean (see Fig. 9, which illustrates this discrepancy for two microphysics quantities).

A second integration method is analytic integration of a microphysical process rate formula over a subgrid PDF (Zhang et al. 2002; Larson and Griffin 2006; Morrison and Gettelman 2008; Cheng and Xu 2009; Griffin and Larson 2013; Larson and Griffin 2013). In this method, the subgrid variability, including the within-cloud variability, is represented by a PDF shape such as a lognormal. Then the microphysical process rate is integrated over all parts of the grid box, weighted appropriately by the PDF. Analytic integration accounts for within-cloud variability and provides exact grid means, but is feasible only for process rate formulas that are simple.

A third integration method is Monte Carlo integration (e.g., Barker et al. 2002; Gentle 2003; Pincus et al. 2003; Räisänen et al. 2004; Räisänen and Barker 2004; Räisänen et al. 2005; Pincus et al. 2006), in which samples are drawn from the subgrid PDF, and the process rate is computed for each sample. Advantages of Monte Carlo integration are that it is simple to implement and applicable to all types of microphysical schemes, including complex numerical ones. Furthermore, implementing a Monte Carlo or any sampling-based method requires little modification to a microphysics scheme. Thus, a Monte Carlo method can act as a general interface between a cloud parameterization and the microphysics. This makes it easy to compute any statistical function of the microphysics quantities in a modular way. However, Monte Carlo integration has a slow convergence rate; that is, it exhibits a slow decay of statistical noise with the number of samples. In fact, it is well known that the statistical noise decreases according to $O(N^{-1/2})$, where N is the number of sample points used to compute the average (Owen 1992). This means that in order to reduce the error or statistical noise by one-half, one would need to quadruple the number of samples! This can be prohibitive if one needs to repeat the integration for every grid box and time step in a large simulation.

To reduce the statistical noise, Larson et al. (2005) approximated the microphysical averages via Latin hypercube sampling and integration. This approach decreases statistical noise by forcing samples to explore the space more evenly. In this approach, one does not obtain statistically independent samples. However, the statistical noise can be shown to decrease faster than the $O(N^{-1/2})$ Monte Carlo rate, where, again, N is the number of samples. This means that in order to reduce the error or statistical noise by one-half, one may only need to double the number of samples (Owen 1992) [in practice, as we will later show, convergence is usually better than the Monte Carlo rate]. Although Latin hypercube sampling has been used to integrate microphysics in single-column simulations (Larson et al. 2005; Larson and Schanen 2013), it has not yet been used in global simulations, even those that use the assumed PDF method (Bogenschutz et al. 2013; Guo et al. 2014).

Latin hypercube sampling outperforms Monte Carlo approaches in most cases. However, with any random sampling–based approach, there will always be statistical noise associated with the answer, as long as one has a finite number of samples. This means that for a fixed sample size, every time one calculates the microphysics averages, one will obtain a slightly different answer.

In this paper, we explore a deterministic numerical quadrature approach, which does not use random samples. Instead, one uses tailored quadrature points and weights to compute the microphysics averages. Deterministic quadrature methods have been studied for centuries, but only recently have they been used to compute statistical moments (Xiu 2009; Golaz et al. 2011). Quadrature methods do exhibit numerical quadrature error, of course. However, since the quadrature points and weights are chosen deterministically, there is no sampling noise for a fixed sample size (Monte Carlo methods will give different mean estimates for a fixed sample size N due to sampling noise). Moreover, we will show that in most cases, when the function being averaged is relatively smooth (as defined by the number of continuous derivatives), the convergence or reduction in error is strikingly better than random sampling approaches. In addition, unlike analytic integration methods, deterministic quadrature applies generally to any microphysical process rate formulation, including complex numerical algorithms such as those that might be used to compute ice process rates.

In this paper, we will only consider processes that occur locally within a single grid box, such as autoconversion and accretion, rather than processes that depend on adjacent grid boxes, such as radiative transfer of heat. The latter processes depend on cloud overlap properties (e.g., Morcrette and Fouquart 1986; Morcrette and Jakob 2000; Hogan and Illingworth 2000; Räisänen and Barker 2004; Oreopoulos et al. 2012). Moreover, our quadrature algorithms have been designed to sample from the in-cloud portion of the grid box, as in Larson and Schanen (2013).

This paper is structured as follows. Section 2 introduces the general form of the local microphysics quantities, the dependent fields, and the joint PDF of those fields that is used in the assumed PDF method. A change of variables is then performed to decorrelate the joint density. In section 3, random sampling and quadrature based approaches are introduced and compared. Finally, in section 4 we apply the quadrature techniques to calculate the autoconversion and accretion rates using the formulas of Khairoutdinov and Kogan (2000).

2. Mathematical setup

Although many variables are relevant to microphysics, only a small subset of these variables is relevant for most individual microphysical processes. For instance, to represent variability in clouds, turbulence, and drizzle processes, Larson and Griffin (2013) use a PDF with six variables: total water mixing ratio, vertical velocity, liquid water potential temperature, cloud droplet number concentration, rainwater mixing ratio, and rain number concentration. However, most microphysics process rates do not depend directly on all six variables. In fact, the examples used in this paper (the autoconversion and accretion rates) depend on only two of these variables, namely the cloud water mixing ratio and the cloud droplet concentration.

The subgrid PDF that we will use in the assumed PDF method is a mixture of two multivariate normal/lognormal densities. Larson and Griffin (2013) and others suggest using this subgrid PDF as it allows positive and negative skewness for the normal variables, positivity for the lognormal variables, mathematical tractability, and simplicity (Larson and Golaz 2005). To distinguish between the normal and lognormal random variables, let $\mathbf{x}$ and $\mathbf{y}$ denote the vectors of normal and lognormal random variables, respectively, where $\mathbf{x} \in \mathbb{R}^{k_1}$, $\mathbf{y} \in \mathbb{R}^{k_2}$, and $k = k_1 + k_2$ represents the total dimensionality. Let $p_1(\mathbf{x}, \mathbf{y})$ and $p_2(\mathbf{x}, \mathbf{y})$ be two joint multivariate densities. Then, the mixture PDF we will consider for $\mathbf{x}$ and $\mathbf{y}$ is

$p(\mathbf{x}, \mathbf{y}) = \alpha\, p_1(\mathbf{x}, \mathbf{y}) + (1 - \alpha)\, p_2(\mathbf{x}, \mathbf{y}),$

where $0 \le \alpha \le 1$ is the mixture weight. The form of the joint normal/lognormal PDF, for both i = 1 and 2, is given by

$p_i(\mathbf{x}, \mathbf{y}) = \frac{1}{(2\pi)^{k/2}\, |\boldsymbol{\Sigma}_i|^{1/2}\, \prod_{j=1}^{k_2} y_j} \exp\!\left[ -\frac{1}{2} (\mathbf{s} - \boldsymbol{\mu}_i)^{\mathrm{T}} \boldsymbol{\Sigma}_i^{-1} (\mathbf{s} - \boldsymbol{\mu}_i) \right],$

where $\mathbf{s} = (x_1, \ldots, x_{k_1}, \ln y_1, \ldots, \ln y_{k_2})^{\mathrm{T}}$, and

$\boldsymbol{\mu}_i = E[\mathbf{s}], \qquad \boldsymbol{\Sigma}_i = E\big[ (\mathbf{s} - \boldsymbol{\mu}_i)(\mathbf{s} - \boldsymbol{\mu}_i)^{\mathrm{T}} \big],$

where $\boldsymbol{\mu}_i$ and $\boldsymbol{\Sigma}_i$ are the mean vector and covariance matrix of $\mathbf{s}$. The expected value is taken with regard to $p_i$. Note that the mean vector and covariance matrix are determined at every time step and grid box per the macroscale predictive equations, provided by the assumed PDF method. Thus, this information is assumed to be given. Next, consider a general microphysics quantity that depends on the microphysics fields $\mathbf{x}$ and $\mathbf{y}$, denoted by $f(\mathbf{x}, \mathbf{y})$. We are interested in approximating the mean quantity

$\bar{f} = \int f(\mathbf{x}, \mathbf{y})\, p(\mathbf{x}, \mathbf{y})\, d\mathbf{x}\, d\mathbf{y} = \alpha \int f(\mathbf{x}, \mathbf{y})\, p_1(\mathbf{x}, \mathbf{y})\, d\mathbf{x}\, d\mathbf{y} + (1 - \alpha) \int f(\mathbf{x}, \mathbf{y})\, p_2(\mathbf{x}, \mathbf{y})\, d\mathbf{x}\, d\mathbf{y},$

where the integrals are taken over all admissible values of $\mathbf{x}$ and $\mathbf{y}$. If the microphysics quantity f were linear, then the mean would simply be a linear combination of the means of $\mathbf{x}$ and $\mathbf{y}$. In this case, sampling from $p_1$ and $p_2$ would be unnecessary. However, in most practical cases, f is nonlinear, and simply plugging the means of $\mathbf{x}$ and $\mathbf{y}$ into f would not be an accurate estimate of $\bar{f}$.1
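To illustrate the effect of this nonlinearity outside of any particular microphysics scheme, the short Python sketch below compares the exact mean of a power-law rate of a lognormal field with the same rate evaluated at the field's mean; the exponent and moments are arbitrary illustrative values rather than values taken from any scheme.

import numpy as np

# Hypothetical lognormal cloud water field: ln q ~ N(mu, sigma^2). Values are invented.
mu, sigma = np.log(2.0e-4), 0.8
p = 2.47                                   # nonlinear exponent, similar in spirit to KK autoconversion

# Exact lognormal moments: E[q] = exp(mu + sigma^2/2) and E[q^p] = exp(p*mu + p^2*sigma^2/2).
mean_q     = np.exp(mu + 0.5 * sigma**2)
mean_rate  = np.exp(p * mu + 0.5 * p**2 * sigma**2)   # true mean of q^p
naive_rate = mean_q**p                                 # "plug in the mean" estimate

print(mean_rate, naive_rate, mean_rate / naive_rate)
# The true mean exceeds the naive estimate because q^p is convex (Jensen's inequality).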

To evaluate (4), we will first perform a change of variables so that we can write the joint density as a product of independent one-dimensional densities.

a. Change of variables

In this section we will show how to decorrelate the multivariate normal/lognormal density. We present a simple affine transformation based on the Cholesky decomposition of the covariance matrix. One could also use a Rosenblatt transformation, which uses the conditional cumulative distribution function to transform the system to a set of independent random variables [Robinson 1998; see Eq. (50) therein]. This approach is more general than the Cholesky approach because it can be applied to non-Gaussian random variables. However, since we are only considering normal and lognormal random variables, we will present the change of variables using only the Cholesky approach.

For simplicity, we will only consider a single joint density, instead of a mixture model. The change of variables is analogous for $p_1$ and $p_2$ in (4). Thus, for notational convenience, we will leave out the subscript i, which refers to a particular mixture component. Assume that the covariance matrix $\boldsymbol{\Sigma}$ is positive definite. Then, $\boldsymbol{\Sigma}$ has a Cholesky decomposition:

$\boldsymbol{\Sigma} = \mathbf{A} \mathbf{A}^{\mathrm{T}},$

where $\mathbf{A}$ is lower triangular, whose diagonal entries are real and positive. Consider the new variable $\mathbf{z}$ under an affine transformation

$\mathbf{s} = \mathbf{A} \mathbf{z} + \boldsymbol{\mu}, \qquad \text{or equivalently} \qquad \mathbf{z} = \mathbf{A}^{-1} (\mathbf{s} - \boldsymbol{\mu}),$

where $\mathbf{s}$ and $\boldsymbol{\mu}$ are defined as above and $\mathbf{z} \in \mathbb{R}^{k}$. Then, the exponential term in (2) becomes

$\exp\!\left( -\tfrac{1}{2}\, \mathbf{z}^{\mathrm{T}} \mathbf{z} \right).$

The Jacobian of the transformation from $\mathbf{z}$ to $(\mathbf{x}, \mathbf{y})$ is

$\left| \frac{\partial (\mathbf{x}, \mathbf{y})}{\partial \mathbf{z}} \right| = |\mathbf{A}| \prod_{j=1}^{k_2} y_j,$

so that, using $|\mathbf{A}| = |\boldsymbol{\Sigma}|^{1/2}$,

$p(\mathbf{x}, \mathbf{y})\, d\mathbf{x}\, d\mathbf{y} = \frac{1}{(2\pi)^{k/2}} \exp\!\left( -\tfrac{1}{2}\, \mathbf{z}^{\mathrm{T}} \mathbf{z} \right) d\mathbf{z}.$

Therefore,

$\bar{f} = \int_{\mathbb{R}^{k}} f\big( \mathbf{x}(\mathbf{z}), \mathbf{y}(\mathbf{z}) \big)\, \varphi(\mathbf{z})\, d\mathbf{z},$

where

$\varphi(\mathbf{z}) = \prod_{j=1}^{k} \frac{1}{\sqrt{2\pi}} \exp\!\left( -\tfrac{1}{2} z_j^2 \right)$

is the standard normal density for independent random variables. Note that since $\mathbf{A}$ is lower triangular with real and positive diagonal entries, the inverse $\mathbf{A}^{-1}$ exists. To write an explicit form for the original variables in terms of $\mathbf{z}$, let us write

$\mathbf{A} = [a_{jl}], \qquad \boldsymbol{\mu} = (\mu_1, \ldots, \mu_k)^{\mathrm{T}},$

where $a_{jl} = 0$ for $l > j$, and $a_{jj} > 0$. Then,

$x_j = \mu_j + \sum_{l=1}^{j} a_{jl} z_l \quad (j = 1, \ldots, k_1), \qquad y_j = \exp\!\left( \mu_{k_1 + j} + \sum_{l=1}^{k_1 + j} a_{k_1 + j,\, l}\, z_l \right) \quad (j = 1, \ldots, k_2).$

Now, instead of integrating (4), one can more easily integrate (9), where all the random variables are independent standard normals. The change of variables formula can be applied to each density in (4) where the only difference will be the form of the Cholesky decomposition and the mean values.
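To make the change of variables concrete, the following Python sketch decorrelates a bivariate normal/lognormal pair using the Cholesky factor of its covariance matrix. It is only an illustration with invented moment values, not the implementation used later in CLUBB.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical moments of s = (x, ln y): x is normal, y is lognormal. Values are invented.
mu    = np.array([1.0e-4, np.log(70.0)])      # mean of (x, ln y)
Sigma = np.array([[4.0e-8, 3.0e-5],
                  [3.0e-5, 0.25  ]])          # covariance of (x, ln y)

A = np.linalg.cholesky(Sigma)                 # Sigma = A A^T, with A lower triangular

# Draw independent standard normals z and map them to the correlated pair (x, y).
z = rng.standard_normal((100000, 2))
s = mu + z @ A.T                              # s = mu + A z, applied row by row
x, y = s[:, 0], np.exp(s[:, 1])               # the second component is exponentiated

# Check that the sample covariance of (x, ln y) recovers Sigma.
print(np.cov(np.column_stack([x, np.log(y)]).T))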

The integration of microphysics is often challenged by discontinuities in the support of the distribution. What this means is that typically the domain of integration of certain microphysics variables is truncated to allow for the calculation of the mean only within the physical cloud. If we let $x_1$ be the cloud water mixing ratio, then $x_1 > 0$ denotes a location inside the cloud. Thus, we only want to calculate the microphysics average over the region $x_1 > 0$. But how does this truncated region look under the affine transformation in (5)? Since $\mathbf{A}$ is lower triangular, $x_1 = \mu_1 + a_{11} z_1$ depends on $z_1$ only, and the truncated region is simply given by $z_1 > a$, where $a = -\mu_1 / a_{11}$. Thus, the region of integration is still rectangular, and the integral in (4) and (9) can be written as

$\bar{f} = \int_{\mathbb{R}^{k}} f\big( \mathbf{x}(\mathbf{z}), \mathbf{y}(\mathbf{z}) \big)\, \mathbf{1}_{\{z_1 > a\}}\, \varphi(\mathbf{z})\, d\mathbf{z},$

where the indicator function $\mathbf{1}$ is defined as

$\mathbf{1}_{\{z_1 > a\}} = \begin{cases} 1, & z_1 > a \\ 0, & z_1 \le a. \end{cases}$

In section 3 we will explain how to compute (9) via random sampling and quadrature approaches. Subsequently, we explain how to approximate (13) as well.

b. Change of variables example

Consider Khairoutdinov and Kogan’s (KK) autoconversion formula (Khairoutdinov and Kogan 2000)

$\mathrm{AUTO} = 1350\, s^{2.47}\, N_c^{-1.79},$

where $s$ and $N_c$ represent the cloud water mixing ratio and the cloud droplet concentration, respectively. In this case, $k = 2$, so that the integral in (4) is only two-dimensional. It turns out that one can write the integral in (13) as a multiplication of two one-dimensional integrals. To see this, consider the Cholesky decomposition of the covariance matrix. For simplicity, let the PDF be a single density (i.e., $\alpha = 1$) instead of a mixture. Then,

$\mathbf{A} = \begin{pmatrix} \sigma_1 & 0 \\ \rho \sigma_2 & \sigma_2 \sqrt{1 - \rho^2} \end{pmatrix},$

where $\sigma_1$ and $\sigma_2$ are the standard deviations of $s$ and $\ln N_c$, and ρ is the correlation between $s$ and $\ln N_c$; that is,

$\rho = \frac{\mathrm{Cov}(s, \ln N_c)}{\sigma_1 \sigma_2},$

where $\mathrm{Cov}(s, \ln N_c)$ is the covariance between $s$ and $\ln N_c$. Therefore,

$s = \mu_1 + \sigma_1 z_1, \qquad \ln N_c = \mu_2 + \rho \sigma_2 z_1 + \sigma_2 \sqrt{1 - \rho^2}\, z_2.$

Now, we can write the autoconversion average as

$\overline{\mathrm{AUTO}} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 1350\, \big[ s(z_1) \big]^{2.47}\, \big[ N_c(z_1, z_2) \big]^{-1.79}\, \mathbf{1}_{\{ s(z_1) > 0 \}}\, \phi(z_1)\, \phi(z_2)\, dz_1\, dz_2,$

where $\phi$ is the standard normal density. Equation (17) provides an expression for the autoconversion in terms of standard normal variables. We can simplify this integral even further by writing the integrand as a product of a function of $z_1$ only and a function of $z_2$ only. Combining (15) and (17) yields

$\overline{\mathrm{AUTO}} = 1350\, e^{-1.79 \mu_2} \left[ \int_{a}^{\infty} (\mu_1 + \sigma_1 z_1)^{2.47}\, e^{-1.79 \rho \sigma_2 z_1}\, \phi(z_1)\, dz_1 \right] \left[ \int_{-\infty}^{\infty} e^{-1.79 \sigma_2 \sqrt{1 - \rho^2}\, z_2}\, \phi(z_2)\, dz_2 \right],$

where $a = -\mu_1 / \sigma_1$. Thus, (18) shows that the mean autoconversion in (15) can be written as a product of two one-dimensional integrals. In general, one may not be able to split the multidimensional integral into separate one-dimensional components. For example, if the process rate depended on $s$ and $N_c$ in a nonseparable way, such as through a term like $(s + N_c)^{p}$, then our two-dimensional integral could not be rewritten as two one-dimensional integrals after the affine change of variables.

In the next section, we will discuss approximating the integral (13) via random sampling and quadrature approaches. Afterward, we will compare convergence results for the mean estimate of autoconversion and accretion rates, illustrating the benefits of quadrature over random sampling approaches.

3. Approximating the grid mean

a. Monte Carlo and Latin hypercube sampling

In this section, we will briefly discuss random sampling and quadrature approaches to approximate the mean of a normal or lognormal random variable. For simplicity and uniformity of presentation, we will show the performance of these approaches for a normal random variable, since one can convert an integral over a lognormal random variable to a normal random variable by a simple change of variables (see section 2a).

Let us consider, in general, a function $f(\mathbf{x})$ of a random variable $\mathbf{x}$, where $\mathbf{x}$ is a normal random variable. We are interested in approximating the mean or expected value of f:

$\bar{f} = E[f(\mathbf{x})] = \int_{\mathbb{R}^{k}} f(\mathbf{x})\, \varphi(\mathbf{x})\, d\mathbf{x},$

where $\varphi$ is the normal density. Without loss of generality, let us assume that $\varphi(\mathbf{x})$ can be written as a product of one-dimensional standard normal densities.

Perhaps the simplest and most ubiquitous approach to solving the type of integral in (19) is the Monte Carlo method. It is based on generating samples from the probability density function and then computing their empirical average. To be precise, let $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(N)}$ be N samples from $\varphi$. Then, the Monte Carlo approximation to (19) is

$\bar{f} \approx \hat{f}_N = \frac{1}{N} \sum_{i=1}^{N} f\big( \mathbf{x}^{(i)} \big).$

Although this type of approximation is simple, it has one major drawback: the solution converges very slowly. In fact, as a direct consequence of the central limit theorem, one can show that

$\big\| \bar{f} - \hat{f}_N \big\| = O\big( N^{-1/2} \big),$

where the left-hand side is the norm, or root-mean-squared error, with regard to the PDF $\varphi$; that is,

$\big\| \bar{f} - \hat{f}_N \big\|^2 = E\Big[ \big( \bar{f} - \hat{f}_N \big)^2 \Big],$

where N is the number of Monte Carlo samples [see Glasserman 2003, Eq. (A.1) therein]. Essentially, this means that to decrease the error by one-half, one would have to quadruple the number of samples. This can be prohibitive if the number of samples is already large.
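For reference, the Monte Carlo estimator above takes only a few lines of Python; the test integrand exp(x) below is a stand-in chosen because its exact mean under a standard normal, e^{1/2}, is known.

import numpy as np

def monte_carlo_mean(f, n, rng):
    """Estimate E[f(Z)], Z ~ N(0, 1), from n independent samples."""
    z = rng.standard_normal(n)
    return f(z).mean()

rng   = np.random.default_rng(1)
exact = np.exp(0.5)                        # E[exp(Z)] for Z ~ N(0, 1)
for n in [10, 100, 1000, 10000]:
    est = monte_carlo_mean(np.exp, n, rng)
    print(n, abs(est - exact) / exact)     # error decays roughly like 1/sqrt(n)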

Alternatively, McKay et al. (1979) introduced a variance reduction technique called Latin hypercube sampling (LHS). LHS is a stratified sampling approach that produces samples that are more evenly distributed throughout the sampling space. This prevents samples from accumulating and clustering in any particular region. LHS amounts to subdividing the probability space into equal probability regions and then picking a sample in each nonoverlapping region (if samples are taken from overlapping regions, this is called stratified sampling). Let $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(N)}$ be N Latin hypercube samples from $\varphi$; then the Latin hypercube approximation to (19) is again

$\bar{f} \approx \hat{f}_N = \frac{1}{N} \sum_{i=1}^{N} f\big( \mathbf{x}^{(i)} \big).$

General convergence rate estimates like (21) are not available for Latin hypercube sampling (Manteufel 2000). At most, in general, one can show that Latin hypercube sampling does no worse than Monte Carlo (Owen 1998):

$\mathrm{Var}\big( \hat{f}_N^{\mathrm{LHS}} \big) \le \frac{\sigma_f^2}{N - 1},$

where $\sigma_f^2$ is the variance of f, so that N Latin hypercube samples are never appreciably worse than N − 1 independent Monte Carlo samples. For simple additive functions [i.e., $f(x_1, \ldots, x_k) = \sum_{j=1}^{k} f_j(x_j)$], one can show that the standard deviation of the estimated mean using Latin hypercube decays faster than the Monte Carlo rate (Stein 1987). Furthermore, if the underlying inputs for the additive system are uniform, the convergence rate improves further (Manteufel 2000). For stratified sampling, a similar improvement over the Monte Carlo rate can be shown (McKay et al. 1979). This means that LHS can potentially converge significantly faster than Monte Carlo, with minimal additional overhead expense. The only additional expense incurred by the LHS approach is in the computation of the LHS points. In the LHS approach, the probability space, which is taken to be the unit hypercube by default, must be divided into equal probability regions, after which samples are chosen in each nonoverlapping region. This is relatively easy to do, except that one requires the inverse cumulative distribution function (CDF) to map those samples back to the density $\varphi$. This extra expense is usually outweighed by the gain in accuracy and convergence of the Latin hypercube approach.
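A one-dimensional Latin hypercube sampler is similarly short. The sketch below is a simplified illustration (it is not the SILHS sampler of Larson and Schanen 2013): it places one uniform sample in each of N equal-probability bins and maps the samples to a standard normal with the inverse CDF; in higher dimensions, each coordinate would be stratified and permuted independently.

import numpy as np
from scipy.stats import norm

def latin_hypercube_normal(n, rng):
    """One LHS sample per equal-probability stratum, mapped to N(0, 1)."""
    u = (rng.permutation(n) + rng.random(n)) / n   # one point in each bin (i/n, (i+1)/n)
    return norm.ppf(u)                             # inverse CDF maps to the normal density

rng   = np.random.default_rng(2)
exact = np.exp(0.5)                                # E[exp(Z)], Z ~ N(0, 1)
for n in [10, 100, 1000]:
    z = latin_hypercube_normal(n, rng)
    print(n, abs(np.exp(z).mean() - exact) / exact)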

b. One-dimensional quadrature methods

Hermite quadrature

In this section we introduce the readers to the Gauss–Hermite quadrature approximation to the mean. Again, without loss of generality, it suffices to consider means of standard normal random variables only.

Consider a quadrature approximation to the one-dimensional integral defined as

$\int_{-\infty}^{\infty} f(x)\, \varphi(x)\, dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$

where $\{ x_i \}_{i=1}^{n}$ and $\{ w_i \}_{i=1}^{n}$ are referred to as the quadrature points and weights, respectively. Note that in general, an integral approximation to (19) that can be written as in (24) is known as a quadrature formula. The quadrature points and weights are chosen such that the accuracy is maximized with the least number of points. If we take the points $x_i$ to be the roots of the nth-degree Hermite polynomial, with an appropriate choice of weights, then (24) is exact for all polynomials of degree 2n − 1 (Quarteroni et al. 2007, their section 9.5). Moreover, these points and weights are optimal in the sense that one cannot achieve higher accuracy with fewer points (Quarteroni et al. 2010, their section 4.4). This type of quadrature is known as interpolatory and is derived from approximating the function with a Hermite interpolating polynomial. The optimality of the Hermite quadrature integration can be seen as a consequence of the optimality of the Hermite interpolant.2 In general, it can be shown that

 
formula

where

$\varphi(x) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\tfrac{1}{2} x^2 \right)$

in (19) is the standard normal density, and s is the maximum number of derivatives of f with finite second moment (Quarteroni et al. 2007; Boyd 1984). If f is infinitely differentiable, where all derivatives have finite second moment, then the quadrature rule converges faster than any power of 1/n. In fact, convergence may even be exponential, which is known as spectral convergence (Hesthaven et al. 2006).

To illustrate the effectiveness of quadrature rules to approximate (19), Fig. 1a shows the convergence plot of the Hermite quadrature approximation compared with Monte Carlo and Latin hypercube approaches. Because the function being integrated is infinitely differentiable, we obtain exponential or spectral convergence, which is illustrated by the nonlinear error decay on the log-log scale. In fact, convergence for the quadrature rule is exponential; that is, the error decays like $O(e^{-cn})$ for some constant $c > 0$ [Boyd 1984; see Eq. (2.2) therein].
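In practice, the Gauss–Hermite points and weights are available from standard libraries. The sketch below uses NumPy's hermgauss, which is defined for the weight exp(−x²); rescaling the points by √2 and dividing the weights by √π converts the rule into an average against the standard normal density. The test integrand exp(x) is again only an illustration.

import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_mean(f, n):
    """Approximate E[f(Z)], Z ~ N(0, 1), with an n-point Gauss-Hermite rule."""
    x, w = hermgauss(n)                    # nodes/weights for the weight exp(-x^2)
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

exact = np.exp(0.5)
for n in [2, 4, 8, 16]:
    err = abs(gauss_hermite_mean(np.exp, n) - exact) / exact
    print(n, err)                          # error drops far faster than 1/sqrt(n)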

Fig. 1.

(a) Hermite quadrature easily outperforms both Monte Carlo and Latin hypercube sampling approaches. (b) Hermite quadrature exhibits very poor performance when we add a single jump discontinuity at a = 1 where . (c) Convergence for Hermite quadrature with , where we integrate over the entire domain. This time, the discontinuity is present only in the derivative so convergence is slightly better.


c. One-dimensional quadrature over truncated domains

At the end of section 2a we briefly discussed that in the process of approximating many of the microphysics mean values, the region being integrated is often truncated. This can introduce a simple discontinuity in the function itself or potentially in its derivatives. Without loss of generality, let us consider integrating a one-dimensional function over the truncated region $[a, \infty)$:

$\bar{f}_a = \int_{a}^{\infty} f(x)\, \varphi(x)\, dx.$

Equivalently, one can write this integral over the entire real line as

$\bar{f}_a = \int_{-\infty}^{\infty} f(x)\, \mathbf{1}_{\{x > a\}}\, \varphi(x)\, dx,$

where the integrand $f(x)\, \mathbf{1}_{\{x > a\}}$ is defined over $(-\infty, \infty)$. If we write the integral using the latter notation, it is clear that the integrand may have a jump discontinuity at $x = a$, or one of its derivatives may be discontinuous at x = a. If we use a Hermite quadrature rule over the full domain $(-\infty, \infty)$, the convergence rate in (25) would possibly suffer by having a small smoothness parameter s. Moreover, because the function values are zero outside the truncated region, there are potentially many wasted quadrature points whose function values vanish, especially since the Hermite quadrature points cluster around the origin. If a = 0, then all Hermite quadrature points less than zero would be wasted since the function being evaluated is zero in that region. If the integrand itself is discontinuous, convergence may even be worse than a Monte Carlo approximation, especially when the discontinuity is farther from the origin (i.e., a > 1), because more Hermite quadrature points are essentially wasted (more function values are set to zero). Indeed, Fig. 1b shows that Hermite quadrature exhibits very poor convergence with even a single jump discontinuity at a = 1. Convergence is not as poor when discontinuities exist only in the derivatives (see Fig. 1c), but in this case Latin hypercube still outperforms Monte Carlo and quadrature methods.

In the next section, we will introduce two modified quadrature approaches tailored to truncated variables, which can significantly outperform even the Latin hypercube method.

1) Legendre–Gauss quadrature

In the first approach, we approximate (26) over a finite interval and use Legendre quadrature points to approximate the integral. To do so we introduce a cutoff for the interval, b, such that

$\int_{b}^{\infty} \varphi(x)\, dx \le \varepsilon_b$

for some small tolerance $\varepsilon_b$. In other words, we choose b such that the probability mass beyond b is negligible. We use the standard Legendre–Gauss quadrature rule over $[-1, 1]$ to approximate the integral in (27) over the interval [a, b], where $\{ x_i^{\mathrm{LG}} \}$ and $\{ w_i^{\mathrm{LG}} \}$ are the standard Legendre–Gauss quadrature points and weights, respectively (see Hesthaven et al. 2006, their section 5.2).

In our case, since we are integrating over the standard normal and Legendre–Gauss rules apply to functions with unit weight, our quadrature rule for (26) becomes

$\bar{f}_a \approx \sum_{i=1}^{n} \tilde{w}_i\, f(\tilde{x}_i)\, \varphi(\tilde{x}_i),$

where

$\tilde{x}_i = \frac{b - a}{2}\, x_i^{\mathrm{LG}} + \frac{a + b}{2}$

and

$\tilde{w}_i = \frac{b - a}{2}\, w_i^{\mathrm{LG}}.$

Convergence becomes

$\Big| \bar{f}_a - \sum_{i=1}^{n} \tilde{w}_i\, f(\tilde{x}_i)\, \varphi(\tilde{x}_i) \Big| \le C\, n^{-s} + \varepsilon_b,$

where C is a constant independent of n, s is the highest-order derivative of $f \varphi$ over [a, b] with finite second moment, and $\varepsilon_b$ is the error incurred by truncating the half-open interval, that is, $\varepsilon_b \approx \int_{b}^{\infty} f(x)\, \varphi(x)\, dx$. Thus, by integrating only the truncated region starting at the truncation point a, we remove any possible discontinuities that would otherwise hinder the convergence rate s.

In this approach, we emphasize that we are essentially changing the function we are integrating from $f$ to $f \varphi$, and the probability measure we are integrating against from the standard normal measure to the uniform (unit weight) measure on [a, b]. Thus, we are performing a change of measure and computing the mean on a different probability space, in order to optimize the degree of exactness for our integration rule. The benefit is that in this new measure space (i.e., domain) our function is smooth and thus convergence drastically improves (see Fig. 2). This quadrature rule is summarized in Table 1.
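A minimal Python version of this change of measure is sketched below; the truncation point a, cutoff b, and test integrand x² are arbitrary choices for illustration, and the analytic reference value is the standard result for the second moment of a truncated normal.

import numpy as np
from math import erf
from numpy.polynomial.legendre import leggauss

def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)   # standard normal density

def truncated_mean_legendre(f, a, b, n):
    """Approximate int_a^b f(x) phi(x) dx with an n-point Legendre-Gauss rule."""
    t, w = leggauss(n)                        # nodes/weights on [-1, 1], unit weight
    x  = 0.5 * (b - a) * t + 0.5 * (a + b)    # nodes mapped to [a, b]
    wt = 0.5 * (b - a) * w                    # rescaled weights
    return np.sum(wt * f(x) * phi(x))

# Example: f(x) = x**2 over [1, infinity), cutoff b = 8 (tail mass beyond b is negligible).
a, b  = 1.0, 8.0
exact = a * phi(a) + 0.5 * (1.0 - erf(a / np.sqrt(2.0)))   # analytic int_a^inf x^2 phi dx
for n in [2, 4, 8, 16, 32]:
    print(n, abs(truncated_mean_legendre(lambda x: x**2, a, b, n) - exact))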

Fig. 2.

(left) Convergence of quadrature rules for over the truncated region [1, ∞). (right) Convergence of quadrature rules for over the region [1, ∞). The Legendre quadrature levels out due to error and , respectively. The Laguerre quadrature levels out due to double precision limits.


Table 1.

To approximate $\int f(x)\, \varphi(x)\, dx$, where $\varphi$ is the standard normal density, over different domains, the table lists the appropriate choice of quadrature points and weights for use when approximating the integral via a quadrature rule of the form $\sum_i w_i\, f(x_i)$. The entries are built from the standard Gauss–Hermite, Legendre–Gauss, and Gauss–Laguerre quadrature points and weights, respectively.


2) Gauss–Laguerre quadrature

The Legendre–Gauss quadrature approach drastically improves the convergence rate, but only up to the error $\varepsilon_b$ associated with the cutoff b in (27). Instead, one can use a quadrature method directly tailored for the half-open interval $[0, \infty)$, that is, the Gauss–Laguerre quadrature rule. For any function g over $[0, \infty)$, we can approximate the integral against the weight $e^{-t}$ as

$\int_{0}^{\infty} g(t)\, e^{-t}\, dt \approx \sum_{i=1}^{n} w_i^{\mathrm{La}}\, g\big( t_i^{\mathrm{La}} \big),$

where $\{ t_i^{\mathrm{La}} \}$ and $\{ w_i^{\mathrm{La}} \}$ are the standard Gauss–Laguerre quadrature points and weights on $[0, \infty)$, respectively (Boyd 2000, section 17). In our case, since our truncated region is $[a, \infty)$, we can approximate (26) by the following quadrature rule:

$\bar{f}_a \approx \sum_{i=1}^{n} \tilde{w}_i\, f(\tilde{x}_i)\, \varphi(\tilde{x}_i),$

where

$\tilde{x}_i = a + t_i^{\mathrm{La}} \qquad \text{and} \qquad \tilde{w}_i = w_i^{\mathrm{La}}\, e^{t_i^{\mathrm{La}}}.$

Convergence is then

 
formula

where, again, s is the highest-order derivative of $f \varphi$ with finite second moment (Boyd 2000).

Again, we emphasize that in this approach we are essentially changing the function we are integrating, from $f$ to $f \varphi$, and the probability measure we are integrating against, from the standard normal measure to the exponential measure $e^{-t}\, dt$ on $[0, \infty)$. Thus, we are performing a change of measure for the integral and computing the mean on a different probability space. The benefit is that in this new measure, our function is smooth and thus convergence drastically improves.

Figure 2 illustrates convergence of the mean of two test functions over the truncated region $[a, \infty)$ with a = 1, using Legendre–Gauss and Gauss–Laguerre quadrature points. For the Legendre quadrature points, we take b = 8 so that the probability outside the cutoff region is less than $10^{-16}$.
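The corresponding Gauss–Laguerre rule avoids the cutoff b altogether. The sketch below (again with an arbitrary test integrand) shifts the Laguerre nodes by the truncation point a and absorbs the factor e^t into the weights, as in the rule above; combining the exponentials into a single call keeps the modified weights well behaved in double precision.

import numpy as np
from math import erf
from numpy.polynomial.laguerre import laggauss

def truncated_mean_laguerre(f, a, n):
    """Approximate int_a^inf f(x) phi(x) dx with an n-point Gauss-Laguerre rule."""
    t, w = laggauss(n)                       # nodes/weights for the weight exp(-t) on [0, inf)
    x = a + t                                # shifted nodes
    # Combined factor exp(t) * phi(a + t), written as a single exponential for stability.
    g = np.exp(t - 0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return np.sum(w * f(x) * g)

a     = 1.0
phi_a = np.exp(-0.5 * a**2) / np.sqrt(2.0 * np.pi)
exact = a * phi_a + 0.5 * (1.0 - erf(a / np.sqrt(2.0)))   # analytic int_a^inf x^2 phi dx
for n in [2, 4, 8, 16, 32]:
    print(n, abs(truncated_mean_laguerre(lambda x: x**2, a, n) - exact))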

In summary, in order to best approximate $\int f(x)\, \varphi(x)\, dx$, where $\varphi$ is the standard normal PDF, over regular and truncated domains, we can write

$\int_{D} f(x)\, \varphi(x)\, dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$

where the quadrature points and weights are chosen according to the type of domain D, for example, $(-\infty, \infty)$, $[a, b]$, or $[a, \infty)$, as given by Table 1.

d. Multidimensional quadrature

Consider once again the multidimensional mean integral

$\bar{f} = \int_{\mathbb{R}^{k}} f(\mathbf{x})\, \varphi(\mathbf{x})\, d\mathbf{x},$

where we assume $\varphi(\mathbf{x}) = \prod_{j=1}^{k} \varphi(x_j)$; that is, the joint density can be written as a product of one-dimensional densities. To derive the quadrature rule for a multivariate integral, first define the one-dimensional univariate quadrature rule

$Q_{n_j}[g] = \sum_{i=1}^{n_j} w_i^{(j)}\, g\big( x_i^{(j)} \big) \approx \int_{-\infty}^{\infty} g(x_j)\, \varphi(x_j)\, dx_j$

to approximate the integral of a univariate function g with regard to the density $\varphi(x_j)$. Then, the tensor product formula is defined as the sum over all possible combinations

$\bar{f} \approx \big( Q_{n_1} \otimes \cdots \otimes Q_{n_k} \big)[f] = \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_k = 1}^{n_k} w_{i_1}^{(1)} \cdots w_{i_k}^{(k)}\, f\big( x_{i_1}^{(1)}, \ldots, x_{i_k}^{(k)} \big).$

For the tensor product quadrature, this means that we will have a total of $n = n_1 n_2 \cdots n_k$ points. Convergence for the multidimensional tensor product quadrature is, at best, the minimum rate among all one-dimensional quadrature rules $Q_{n_j}$. In general, the convergence rate is given by

$O\big( n^{-s/k} \big),$

where s is the minimum degree of smoothness along each dimension, that is, the number of derivatives with finite variance, k is the dimension of the problem, and n is the total number of quadrature points. When $n_1 = n_2 = \cdots = n_k \equiv m$, so that $n = m^k$, the convergence rate becomes $O(m^{-s})$, which is the minimum convergence rate among any of the one-dimensional quadrature rules.

As discussed in section 2 [see Eq. (18)], the domain of integration is typically truncated in the first dimension. In this case, we simply replace the Hermite rule in the first dimension with Legendre–Gauss or Gauss–Laguerre quadrature. If we consider the truncated integral,

$\bar{f}_a = \int_{a}^{\infty} \int_{\mathbb{R}^{k-1}} f(\mathbf{x})\, \varphi(\mathbf{x})\, dx_2 \cdots dx_k\, dx_1,$

then we can approximate this with

$\bar{f}_a \approx \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_k = 1}^{n_k} \tilde{w}_{i_1}^{\mathrm{LG}}\, \varphi\big( \tilde{x}_{i_1}^{\mathrm{LG}} \big)\, w_{i_2}^{(2)} \cdots w_{i_k}^{(k)}\, f\big( \tilde{x}_{i_1}^{\mathrm{LG}}, x_{i_2}^{(2)}, \ldots, x_{i_k}^{(k)} \big)$

or

$\bar{f}_a \approx \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_k = 1}^{n_k} \tilde{w}_{i_1}^{\mathrm{La}}\, \varphi\big( \tilde{x}_{i_1}^{\mathrm{La}} \big)\, w_{i_2}^{(2)} \cdots w_{i_k}^{(k)}\, f\big( \tilde{x}_{i_1}^{\mathrm{La}}, x_{i_2}^{(2)}, \ldots, x_{i_k}^{(k)} \big),$

where the tilde points and weights are the mapped Legendre–Gauss and shifted Gauss–Laguerre points and weights defined in section 3c, and the remaining dimensions use Gauss–Hermite points and weights normalized to integrate against the standard normal density.

Quadrature rules (38), (40), and (41) can be easily applied to (13) in section 2a, for no greater cost and/or complexity than Monte Carlo or Latin hypercube sampling. Figure 3 shows two-dimensional tensor product quadrature points using Hermite quadrature in each dimension and a mixture of Legendre–Hermite and Laguerre–Hermite, respectively.
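In code, the tensor product rule is an outer product of one-dimensional points and weights. The sketch below (illustrative only; the integrand and truncation point are invented) builds a Legendre–Hermite tensor grid for a two-dimensional integral whose first dimension is truncated at z1 > a, mirroring the rules above.

import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.legendre import leggauss

def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def tensor_mean_legendre_hermite(f, a, b, n1, n2):
    """Approximate int_a^inf int_-inf^inf f(z1, z2) phi(z1) phi(z2) dz2 dz1."""
    # Dimension 1: Legendre-Gauss on [a, b] with the normal density folded in.
    t1, w1 = leggauss(n1)
    z1 = 0.5 * (b - a) * t1 + 0.5 * (a + b)
    v1 = 0.5 * (b - a) * w1 * phi(z1)
    # Dimension 2: Gauss-Hermite rescaled to the standard normal density.
    t2, w2 = hermgauss(n2)
    z2 = np.sqrt(2.0) * t2
    v2 = w2 / np.sqrt(np.pi)
    # Tensor product: evaluate f on the full grid and sum with product weights.
    Z1, Z2 = np.meshgrid(z1, z2, indexing="ij")
    return np.sum(np.outer(v1, v2) * f(Z1, Z2))

# Example: f(z1, z2) = z1**2 * exp(z2), truncated at a = 1 with cutoff b = 8.
approx = tensor_mean_legendre_hermite(lambda z1, z2: z1**2 * np.exp(z2), 1.0, 8.0, 8, 8)
print(approx)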

Fig. 3.

(left to right), (top to bottom) Two-dimensional tensor product quadrature rules using Hermite–Hermite, Hermite–Legendre, and Hermite–Laguerre in the x and y directions, respectively.


Figure 4 shows error rates using two-dimensional tensor product quadrature for the approximation of

 
formula

with a = −∞ or a = 1. When a = 1, we essentially introduce a jump discontinuity into the function, which causes the Hermite quadrature to perform poorly. However, Legendre and Laguerre quadrature do not suffer when a = 1; in fact, we get exponential convergence!

Fig. 4.

Two-dimensional tensor product integration with (a) a = −∞ and (b) a = 1 in (42).


Most microphysics process rates depend on at most two or three variables (Larson and Griffin 2013). Thus, we will most likely only need quadrature rules for, at most, two- to three-dimensional integrals. In these relatively low-dimensional cases, tensor product rules are still practical in that the number of quadrature points is reasonably small and convergence is optimal. However, for higher dimensions, the number of quadrature points can become very large, very quickly. For example, if we consider a 10-dimensional tensor product quadrature rule, and we wanted at least a second-order approximation (i.e., two points per dimension), then we already have $2^{10} = 1024$ quadrature points! In these cases, it may be beneficial and even necessary to use a sparse grid quadrature rule. For a thorough discussion about sparse grid quadrature, see Gerstner and Griebel (1998) or Holtz (2010, section 4 therein). In short, while tensor product grids have error rates $O(N^{-s/d})$, where N is the total number of quadrature points, d is the dimension, and s is the minimum degree of smoothness in any single dimension, sparse grid rules exhibit an error rate that decays like $O\big( N^{-s} (\log N)^{(d-1)(s+1)} \big)$. Thus, the error rate in sparse grids is independent of the dimension up to a logarithmic factor.

4. Comparing mean autoconversion and accretion calculated by sampling and deterministic quadrature

In this section we will compute grid means of two microphysical process rates—autoconversion and accretion—via Monte Carlo, Latin hypercube, and quadrature approaches. (Recall that autoconversion is the process by which cloud droplets coalesce and form raindrops. Accretion is the process by which raindrops collect cloud drops, causing the raindrops to grow in size.)

To calculate autoconversion and accretion rates, we will use Khairoutdinov and Kogan’s formulas [Khairoutdinov and Kogan 2000, their Eqs. (29) and (33)]:

$\mathrm{AUTO} = 1350\, s^{2.47}\, N_c^{-1.79}\, \mathbf{1}_{\{s > 0\}} \qquad \text{and} \qquad \mathrm{ACCR} = 67\, (s\, q_r)^{1.15}\, \mathbf{1}_{\{s > 0\}},$

where s is the cloud water mixing ratio, $N_c$ is the cloud droplet concentration, $q_r$ is the rainwater mixing ratio, and $\mathbf{1}_{\{s > 0\}}$ is the indicator function, which is zero when s < 0 (this reflects the fact that we only care about the quantity when we are physically in a cloud; i.e., s > 0). In the assumed PDF method, s and $N_c$ are described by a mixture of two normal/lognormal distributions [see Eq. (4) for the general form of the PDF and section 2 for more details].

To provide inputs for the various integration methods, we need an example subgrid PDF. To verify the accuracy of the methods, we need a benchmark analytic integration of the autoconversion and accretion rates. Here, both are provided by an implementation of the assumed PDF method called Cloud Layers Unified by Binormals (CLUBB). CLUBB is a single-column model that predicts a joint PDF of thermodynamic, turbulent, and microphysical quantities (Golaz et al. 2002; Larson and Golaz 2005). It contains an option to analytically integrate over the KK microphysics (Larson and Griffin 2013). CLUBB computes grid-mean microphysical rates using the assumed PDF method and then feeds those rates back into time-evolution equations. The process repeats for multiple time steps and each grid box layer.

To drive the assumed PDF method, we use the RICO (Rain in Cumulus over the Ocean) case configured within CLUBB to predict the subgrid moments (e.g., the means and variance of the microphysics fields s and ). RICO was a field program sponsored by the National Science Foundation (NSF) to study the relationship between shallow cumulus clouds and precipitation (Rauber et al. 2007; vanZanten et al. 2011). The study included sample data from the eastern Atlantic and western Caribbean during the two months of December 2004 through January 2005. The RICO case within CLUBB is a simulation of three days starting on 12 December 2004 and running through 15 December 2004. We will only consider the autoconversion and accretion rates for a single time step, approximately 72 h after the initial start, at 58 different grid levels or altitudes.3 We have chosen autoconversion and accretion rates as our test processes because their grid-mean values can be computed analytically by an option within CLUBB (Larson and Griffin 2013). Figure 5 shows the autoconversion and accretion rates for this time step at different altitudes computed analytically in CLUBB. These will be the values we will approximate via quadrature-based approaches.

Fig. 5.

Exact (a) autoconversion and (b) accretion rates for different altitudes at t = 72 h in the RICO experiment within CLUBB. The means and variances of the normal–lognormal mixture model are provided by CLUBB in the macroscale equations. The units for both autoconversion and accretion are kg kg−1 s−1.


The most efficient approach to compute the averages of (43) is to separate the mean integral (9) into two one-dimensional integrals (see section 2b for details). Alternatively, we can use a full tensor product grid using Eqs. (38)–(41). While the latter approach is not as efficient, it is more generally applicable to other microphysics formulas, so we perform the integration using both tensor product grids, which require $n^2$ quadrature points in two dimensions, and one-dimensional grids, which require only $O(n)$ quadrature points. Also, the underlying PDFs for s and $N_c$ are mixtures of two normal/lognormal densities [Eq. (4)], which means we have to perform the integration twice, once for each component of the mixture PDF. Thus, the exact number of random sample points and quadrature points used in the convergence figures is double that used for a single component.
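To make the procedure concrete, the sketch below assembles these pieces for the mean autoconversion rate of a single normal/lognormal component; the mixture case repeats the calculation for each component and combines the results with the mixture weights. All moment values are invented for illustration and are not the CLUBB/RICO values.

import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.legendre import leggauss

def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def mean_autoconversion(mu, Sigma, n1=12, n2=12, b=8.0):
    """E[1350 s^2.47 Nc^-1.79, s > 0] for s normal and Nc lognormal.

    mu and Sigma are the mean and covariance of (s, ln Nc); values here are illustrative."""
    A = np.linalg.cholesky(Sigma)                 # Sigma = A A^T, lower triangular
    a = -mu[0] / A[0, 0]                          # s > 0  is equivalent to  z1 > a

    t1, w1 = leggauss(n1)                         # truncated dimension: Legendre-Gauss
    z1 = 0.5 * (b - a) * t1 + 0.5 * (a + b)
    v1 = 0.5 * (b - a) * w1 * phi(z1)
    t2, w2 = hermgauss(n2)                        # untruncated dimension: Gauss-Hermite
    z2 = np.sqrt(2.0) * t2
    v2 = w2 / np.sqrt(np.pi)

    Z1, Z2 = np.meshgrid(z1, z2, indexing="ij")
    s   = mu[0] + A[0, 0] * Z1                    # s depends on z1 only
    lnN = mu[1] + A[1, 0] * Z1 + A[1, 1] * Z2     # ln Nc depends on both z1 and z2
    rate = 1350.0 * s**2.47 * np.exp(-1.79 * lnN)
    return np.sum(np.outer(v1, v2) * rate)

# Invented moments: mean s = 0.2 g/kg, sd 0.15 g/kg; mean Nc ~ 70 cm^-3, sd(ln Nc) = 0.4, corr 0.3.
mu    = np.array([2.0e-4, np.log(70.0)])
Sigma = np.array([[1.5e-4**2, 0.3 * 1.5e-4 * 0.4],
                  [0.3 * 1.5e-4 * 0.4, 0.4**2]])
print(mean_autoconversion(mu, Sigma))             # kg kg^-1 s^-1

Splitting the integral as in (18) would replace this tensor grid with two one-dimensional rules at lower cost.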

Figure 6 displays the convergence of the grid-mean autoconversion and accretion rates to the analytic solution of Larson and Griffin (2013) as the number of calls to the microphysics is increased. Figure 6 compares random sampling– and quadrature-based approaches. Analytic integration methods, such as those that have been used in past studies (e.g., Morrison and Gettelman 2008; Cheng and Xu 2009; Larson and Griffin 2013), would of course have negligible error in Fig. 6, but those methods cannot be used for complicated microphysical processes, such as those that can be computed only with numerical algorithms.

Fig. 6.

Plot of relative error for random-sampling and quadrature-based approaches for (a) autoconversion and (b) accretion by splitting the two-dimensional integral into two one-dimensional integrals. Quadrature outperforms random sampling for all N, where N represents the total number of sample pairs for both densities in the mixture model. The error is vertically averaged over the altitudes.


In our first test scenario, quadrature is performed by separating the mean integral into two one-dimensional integrals and performing one-dimensional quadrature formulas for each dimension. This is the most efficient method, but not the most general. These figures show that for the same number of points, Legendre–Gauss quadrature methods achieve significantly lower relative errors. For example, for the autoconversion rate, using only 20 quadrature points (10 quadrature points for each mixture PDF), one can obtain relative errors on the order of $10^{-9}$, versus $10^{0}$ for random sampling approaches. This is a decrease of nine orders of magnitude in the relative error. For the accretion rate, the reduction in error is not as large, showing a seven-order decrease in the relative error. Also, to be precise, the total number of quadrature points and random samples plotted on the x axis for Figs. 6 and 7 represents the total pairs of points (i.e., the mixing ratio and droplet concentration) for both PDFs in the mixture model, which consists of two joint normal/lognormal densities.4

Fig. 7.

Full tensor product quadrature vs random-sampling approaches for (a) autoconversion and (b) accretion. Legendre and Laguerre quadrature still beats random sampling for all N. Again, N represents the total number of quadrature or random sample pairs of points, used by both densities in the mixture model. The error is vertically averaged over the altitudes.


Figure 7 displays the convergence plots using the full two-dimensional tensor product quadrature rules. Note that because the microphysics formulas can be easily separated into each dimension, the tensor product formula is redundant and not as efficient. However, this approach is more generally applicable to multidimensional integrals that cannot be separated into a product of one-dimensional integrals and so we provide convergence plots using this method as well. Using only 32 total quadrature points on a tensor product grid (each grid in the mixture model being 16 points), Legendre–Gauss quadrature rules provide a reduction in error of about three orders of magnitude for the autoconversion rate, and two orders of magnitude for the accretion rate.

Figure 8 gives a qualitative visual of the accuracy of quadrature methods versus random sampling–based approaches. In particular, Fig. 8 (left) shows the autoconversion rate at different altitudes for a total of 500 random Monte Carlo and Latin hypercube samples versus a total of 16 quadrature points (or equivalently, 2 × 8² quadrature points for the full tensor product). Similarly, Fig. 8 (right) shows the accretion rate for N = 500 random Monte Carlo and Latin hypercube samples versus 16 quadrature points (again, equivalent to 2 × 8² tensor product quadrature points). Monte Carlo and Latin hypercube sampling are subject to sampling noise, which is represented as ±2σ error bars. It is hard to distinguish the analytic solution computed by CLUBB from the quadrature-based approaches in both of these plots.

Fig. 8.

(a) Autoconversion rate using 250 random samples vs only 8 quadrature points (or 64 quadrature points for the full tensor product). (b) Accretion rate using 250 random samples vs 8 quadrature points (equivalently, 64 tensor product quadrature points). Latin hypercube sampling is subject to sampling noise, which is represented by the shaded regions based on ±2σ error bars. The units for both accretion and autoconversion are kg kg−1 s−1.


For reference, Fig. 9 displays the exact solution versus the solution when we only propagate the mean of the PDFs of s and $N_c$, neglecting their respective variances. Notice how this method severely underestimates the mean. This is due to the nonlinearity of the autoconversion and accretion formulas in Eqs. (43).

Fig. 9.

(a) Exact autoconversion rate vs the same quantity calculated by propagating only the means of the microphysical fields. (b) Exact accretion rate vs the same quantity calculated by propagating only the means of the microphysical fields.


We have developed a library in FORTRAN that is able to perform numerical integration of multivariate functions using quadrature. The library functions are called from CLUBB to integrate the autoconversion and accretion formulas [see Eq. (43)] and compute the grid-mean autoconversion and accretion rates. The integration can be performed in the library using a fixed number of quadrature points or adaptively, such that a given relative tolerance is reached. In the adaptive approach, the subroutine adds quadrature points in the dimensions along which the integrated function has higher gradients. This procedure is repeated at every iteration until the relative error between consecutive integrations is below a prescribed tolerance. The results of an adaptive integration are shown in Fig. 10 for the accretion and autoconversion output for the same time point as in Fig. 8. These results, along with the analytical solution, are essentially indistinguishable. This enhancement in accuracy, however, requires more function evaluations during the integration. For a relative tolerance equal to $10^{-6}$, the autoconversion and accretion rates require 275 and 324 function evaluations, respectively, when Legendre quadrature is used. It is up to the CLUBB user to decide how many function evaluations can be afforded in order to obtain the desired accuracy.
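The adaptive strategy can be imitated in a few lines. The sketch below is a much-simplified stand-in for the FORTRAN library: it refines all dimensions together rather than selecting the dimension with the largest gradient, and simply doubles the number of one-dimensional quadrature points until successive estimates agree to a relative tolerance.

def adaptive_mean(integrate, n0=2, rtol=1.0e-6, n_max=256):
    """Refine a quadrature estimate until the relative change is below rtol.

    `integrate(n)` must return the quadrature estimate using n points per dimension."""
    n, previous = n0, integrate(n0)
    while n < n_max:
        n *= 2
        current = integrate(n)
        if abs(current - previous) <= rtol * max(abs(current), 1.0e-300):
            return current, n
        previous = current
    return previous, n

# Example: reuse mean_autoconversion from the sketch above.
# value, n_used = adaptive_mean(lambda n: mean_autoconversion(mu, Sigma, n1=n, n2=n))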

Fig. 10.

Adaptive quadrature approaches for the approximation of the (a) autoconversion and (b) accretion rates in CLUBB.


5. Summary and conclusions

In this paper, we provide an alternative calculation of grid-mean microphysics process rates by utilizing deterministic quadrature methods for integration in the assumed PDF method, as opposed to Latin hypercube sampling (Larson et al. 2005). In the most straightforward application of quadrature rules, we can transform the mean integral into an integral over standard normal variables and then apply a multidimensional Gauss–Hermite quadrature rule to approximate this quantity. When the region of integration is truncated, we can instead apply Legendre–Gauss or Gauss–Laguerre quadrature rules. The gain in accuracy from utilizing these quadrature rules is significant, as evidenced by the calculation of autoconversion and accretion means for the CLUBB RICO case.

In what microphysical applications might deterministic quadrature be of greatest practical value? First consider the alternatives. Analytic integration can provide precise, efficient answers for microphysical processes that can be represented by simple formulas. It is especially useful as a benchmark for other integration methods. It is too restricted, however, to provide a comprehensive solution to the problem of computing grid-mean microphysical rates in typical weather or climate models. On the other side of the spectrum, sampling methods, such as Monte Carlo or Latin hypercube integration, are general and relatively easy to implement, but they produce noisy solutions with high variability except when many sample points are used, which, for weather or climate modeling, requires the use of more processors on parallel supercomputers. Occupying a middle ground between analytic integration and sampling methods is deterministic quadrature. Deterministic quadrature is not as accurate or efficient as analytic methods, but it is much more broadly applicable. It is not as easy to implement as sampling methods, because it must be called separately for each microphysical process, but it is more efficient and accurate. It may be especially useful, for instance, if an atmospheric modeler desires to calculate grid means of selected ice microphysical processes or cloud droplet nucleation (e.g., Chuang et al. 1997; Ghan et al. 1997; Golaz et al. 2011), which are difficult to treat analytically, but the modeler does not wish to take on the computational expense of a more comprehensive treatment of subgrid variability.

Acknowledgments

This research was supported by the Department of Energy Office of Advanced Scientific Computing Research, work package 13-015334. V. Larson would also like to acknowledge support by the Office of Science (BER), U.S. Department of Energy, under Grant DE-SC0008323. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under Contract DE-AC04-94AL85000.

REFERENCES

Barker, H. W., R. Pincus, and J.-J. Morcrette, 2002: The Monte Carlo independent column approximation: Application within large-scale models. GCSS-ARM Workshop on the Representation of Cloud Systems in Large-Scale Models, Kananaskis, AB, Canada, GEWEX, 1–10.
Bogenschutz, P. A., and S. K. Krueger, 2013: A simplified PDF parameterization of subgrid-scale clouds and turbulence for cloud-resolving models. J. Adv. Model. Earth Syst., 5, 195–211.
Bogenschutz, P. A., A. Gettelman, H. Morrison, V. E. Larson, C. Craig, and D. P. Schanen, 2013: Higher-order turbulence closure and its impact on climate simulations in the Community Atmosphere Model. J. Climate, 26, 9655–9676.
Boyd, J. P., 1984: Asymptotic coefficients of Hermite function series. J. Comput. Phys., 54, 382–410.
Boyd, J. P., 2000: Chebychev and Fourier Spectral Methods. 2nd ed. Dover Publications, 688 pp.
Cheng, A., and K.-M. Xu, 2006: Simulation of shallow cumuli and their transition to deep convective clouds by cloud-resolving models with different third-order turbulence closures. Quart. J. Roy. Meteor. Soc., 132, 359–382.
Cheng, A., and K.-M. Xu, 2009: A PDF-based microphysics parameterization for simulation of drizzling boundary layer clouds. J. Atmos. Sci., 66, 2317–2334.
Chuang, C. C., J. E. Penner, K. E. Taylor, A. S. Grossman, and J. J. Walton, 1997: An assessment of the radiative effects of anthropogenic sulfate. J. Geophys. Res., 102, 3761–3778.
Gentle, J. E., 2003: Random Number Generation and Monte Carlo Methods. 2nd ed. Springer, 381 pp.
Gerstner, T., and M. Griebel, 1998: Numerical integration using sparse grids. Numer. Algorithms, 18, 209–232.
Ghan, S. J., L. R. Leung, R. C. Easter, and H. Abdul-Razzak, 1997: Prediction of cloud droplet number in a general circulation model. J. Geophys. Res., 102, 21 777–21 794.
Glasserman, P., 2003: Monte Carlo Methods in Financial Engineering. Springer, 596 pp.
Golaz, J.-C., V. Larson, and W. R. Cotton, 2002: A PDF-based model for boundary layer clouds. Part I: Method and model description. J. Atmos. Sci., 59, 3540–3551.
Golaz, J.-C., M. Salzmann, L. J. Donner, L. W. Horowitz, Y. Ming, and M. Zhao, 2011: Sensitivity of the aerosol indirect effect to subgrid variability in the cloud parameterization of the GFDL atmosphere general circulation model AM3. J. Climate, 24, 3145–3160.
Griffin, B. M., and V. E. Larson, 2013: Analytic upscaling of a local microphysics scheme. Part II: Simulations. Quart. J. Roy. Meteor. Soc., 139, 58–69.
Guo, H., J.-C. Golaz, L. J. Donner, P. Ginoux, and R. S. Hemler, 2014: Multivariate probability density functions with dynamics in the GFDL atmospheric general circulation model: Global tests. J. Climate, 27, 2087–2108.
Hesthaven, J. S., S. Gottlieb, and D. Gottlieb, 2006: Spectral Methods for Time-Dependent Problems. Cambridge University Press, 273 pp.
Hogan, R. J., and A. J. Illingworth, 2000: Deriving cloud overlap statistics from radar. Quart. J. Roy. Meteor. Soc., 126, 2903–2909.
Holtz, M., 2010: Sparse Grid Quadrature in High Dimensions with Applications in Finance and Insurance. Springer, 200 pp.
Khairoutdinov, M., and Y. Kogan, 2000: A new cloud physics parameterization in a large-eddy simulation model of marine stratocumulus. Mon. Wea. Rev., 128, 229–243.
Larson, V. E., and J. Golaz, 2005: Using probability density functions to derive consistent closure relationships among higher-order moments. Mon. Wea. Rev., 133, 1023–1042.
Larson, V. E., and B. M. Griffin, 2006: Coupling microphysics parameterizations to cloud parameterizations. 12th Conf. on Cloud Physics, Madison, WI, Amer. Meteor. Soc., P2.48. [Available online at https://ams.confex.com/ams/Madison2006/techprogram/paper_113330.htm.]
Larson, V. E., and B. M. Griffin, 2013: Analytic upscaling of local microphysics parameterizations, Part I: Derivation. Quart. J. Roy. Meteor. Soc., 139, 46–57.
Larson, V. E., and D. P. Schanen, 2013: The Subgrid Importance Latin Hypercube Sampler (SILHS): A multivariate subcolumn generator. Geosci. Model Dev., 6, 1813–1829.
Larson, V. E., R. Wood, P. R. Field, J.-C. Golaz, T. H. Vonder Haar, and W. R. Cotton, 2001: Systematic biases in the microphysics and thermodynamics of numerical models that ignore subgrid variability. J. Atmos. Sci., 58, 1117–1128.
Larson, V. E., J.-C. Golaz, H. Jiang, and W. R. Cotton, 2005: Supplying local microphysics parameterizations with information about subgrid variability: Latin hypercube sampling. J. Atmos. Sci., 62, 4010–4026.
Manteufel, R. D., 2000: Evaluating the convergence of Latin hypercube sampling. 41st AIAA/ASME Structures, Structural Dynamics and Materials Conf., Atlanta, GA, American Institute of Aeronautics and Astronautics, AIAA-2000-1636, 100–106. [Available online at http://arc.aiaa.org/doi/abs/10.2514/6.2000-1636.]
Manton, M. J., and W. R. Cotton, 1977: Formulation of approximate equations for modeling moist deep convection on the mesoscale. Atmospheric Science Paper 266, Colorado State University, 62 pp.
McKay, M. D., W. J. Conover, and R. J. Beckman, 1979: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21, 239–245.
Mellor, G. L., 1977: The Gaussian cloud model relations. J. Atmos. Sci., 34, 356–358.
Morcrette, J.-J., and Y. Fouquart, 1986: The overlapping of cloud layers in shortwave radiation parameterization. J. Atmos. Sci., 43, 321–328.
Morcrette, J.-J., and C. Jakob, 2000: The response of the ECMWF model to changes in the cloud overlap assumption. Mon. Wea. Rev., 128, 1707–1732.
Morrison, H., and A. Gettelman, 2008: A new two-moment bulk stratiform cloud microphysics scheme in the Community Atmosphere Model, version 3 (CAM3). Part I: Description and numerical tests. J. Climate, 21, 3642–3659.
Oreopoulos, L., D. Lee, Y. C. Sud, and M. J. Suarez, 2012: Radiative impacts of cloud heterogeneity and overlap in an atmospheric general circulation model. Atmos. Chem. Phys., 12, 9097–9111.
Owen, A. B., 1992: A central limit theorem for Latin hypercube sampling. J. Roy. Stat. Soc., 54, 541–551.
Owen, A. B., 1998: Latin supercube sampling for very high-dimensional simulations. ACM Trans. Model. Comput. Simul., 8, 71–102.
Park, S., and C. S. Bretherton, 2009: The University of Washington shallow convection and moist turbulence schemes and their impact on climate simulations with the Community Atmosphere Model. J. Climate, 22, 3449–3469.
Pincus, R., and S. A. Klein, 2000: Unresolved spatial variability and microphysical process rates in large-scale models. J. Geophys. Res., 105, 27 059–27 065.
Pincus, R., H. W. Barker, and J.-J. Morcrette, 2003: A fast, flexible, approximate technique for computing radiative transfer in inhomogeneous cloud fields. J. Geophys. Res., 108, 4376.
Pincus, R., R. Hemler, and S. A. Klein, 2006: Using stochastically generated subcolumns to represent cloud structure in a large-scale model. Mon. Wea. Rev., 134, 3644–3656.
Quarteroni, A., R. Sacco, and F. Saleri, 2007: Numerical Mathematics. 2nd ed. Springer, 655 pp.
Quarteroni, A., F. Saleri, and P. Gervasio, 2010: Scientific Computing with MATLAB and Octave. 3rd ed. Springer, 382 pp.
Räisänen, P., and H. W. Barker, 2004: Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation. Quart. J. Roy. Meteor. Soc., 130, 2069–2085.
Räisänen, P., H. W. Barker, M. F. Khairoutdinov, J. Li, and D. A. Randall, 2004: Stochastic generation of subgrid-scale cloudy columns for large-scale models. Quart. J. Roy. Meteor. Soc., 130, 2047–2067.
Räisänen, P., H. W. Barker, and J. N. S. Cole, 2005: The Monte Carlo Independent Column Approximation’s conditional random noise: Impact on simulated climate. J. Climate, 18, 4715–4730.
Rauber, R. M., and Coauthors, 2007: Rain in shallow cumulus over the ocean: The RICO campaign. Bull. Amer. Meteor. Soc., 88, 1912–1928.
Robinson, D. G., 1998: A survey of probabilistic methods used in reliability, risk and uncertainty analysis: Analytical techniques I. Sandia National Laboratories Rep. SAND-98-1189, 29 pp.
Smith, R. N. B., 1990: A scheme for predicting layer clouds and their water content in a general circulation model. Quart. J. Roy. Meteor. Soc., 116, 435–460.
Sommeria, G., and J. Deardorff, 1977: Subgrid-scale condensation in models of nonprecipitating clouds. J. Atmos. Sci., 34, 344–355.
Stein, M., 1987: Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29, 143–151.
Tompkins, A. M., 2002: A prognostic parameterization for the subgrid-scale variability of water vapor and clouds in large-scale models and its use to diagnose cloud cover. J. Atmos. Sci., 59, 1917–1942.
vanZanten, M., and Coauthors, 2011: Controls on precipitation and cloudiness in simulations of trade-wind cumulus as observed during RICO. J. Adv. Model. Earth Syst., 3, M06001.
Xiu, D., 2009: Fast numerical methods for stochastic computations: A review. Commun. Comput. Phys., 5, 242–272.
Zhang, J., U. Lohmann, and B. Lin, 2002: A new statistically based autoconversion rate parameterization for use in large-scale models. J. Geophys. Res., 107, 4750.

Footnotes

1

If f were convex, then Jensen’s inequality tells us that simply plugging in the means could largely underestimate the true mean.

2

The nth-degree Hermite interpolant is optimal in the sense that it is the nth-degree polynomial with the least variance w.r.t. standard normal density. In other words, it is the nth-degree approximation with the least projection error onto the weighted space, with standard normal weight.

3

Note that there is nothing unique about this particular time step. We chose a fixed time step to more clearly illustrate the benefit of quadrature at the smallest level. In fact, different time steps produce almost identical results, which we omit here.

4

For Fig. 6, the convergence in relative error is better than in Fig. 7 at a possible cost of two microphysics evaluations since the two-dimensional mean integrals are split into two one-dimensional integrals.