• Anderson, J. L., 1997: The impact of dynamical constraints on the selection of initial conditions for ensemble predictions: Low-order perfect model results. Mon. Wea. Rev., 125, 2969–2983.

• Augustin, F., A. Gilg, M. Paffrath, P. Rentrop, and U. Wever, 2008: Polynomial chaos for the approximation of uncertainties: Chances and limits. Eur. J. Appl. Math., 19, 149–190.

• Cameron, R., and W. Martin, 1947: The orthogonal development of nonlinear functionals in series of Fourier–Hermite functionals. Ann. Math., 48, 385–392.

• Debusschere, B. J., H. N. Najm, P. P. Pébay, O. M. Knio, R. G. Ghanem, and O. P. Le Maître, 2004: Numerical challenges in the use of polynomial chaos representations for stochastic processes. SIAM J. Sci. Comput., 26, 698–719.

• Evensen, G., 1997: Advanced data assimilation for strongly nonlinear dynamics. Mon. Wea. Rev., 125, 1342–1354.

• Frederiksen, J. S., 2000: Singular vectors, finite-time normal modes, and error growth during blocking. J. Atmos. Sci., 57, 312–333.

• Ghanem, R., and P. Spanos, 1991: Stochastic Finite Elements: A Spectral Approach. Springer, 214 pp.

• Legras, B., and R. Vautard, 1996: A guide to Liapunov vectors. Proc. ECMWF Seminar on Predictability, Vol. I, Reading, United Kingdom, European Centre for Medium-Range Weather Forecasts, 143–156.

• Le Maître, O. P., H. N. Najm, R. G. Ghanem, and O. M. Knio, 2004: Multi-resolution analysis of Wiener-type uncertainty propagation schemes. J. Comput. Phys., 197, 502–531.

• Loève, M., 1978: Probability Theory. 4th ed. Springer-Verlag, 425 pp.

• Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.

• Lorenz, E. N., 1984: Irregularity: A fundamental property of the atmosphere. Tellus, 36A, 98–110.

• Lorenz, E. N., 2005: A look at some details of the growth of initial uncertainties. Tellus, 57A, 1–11.

• Marzouk, Y. M., and H. N. Najm, 2009: Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. J. Comput. Phys., 228, 1862–1902.

• Najm, H. N., 2009: Uncertainty quantification and polynomial chaos techniques in computational fluid dynamics. Annu. Rev. Fluid Mech., 41, 35–52.

• Palmer, T. N., 2000: Predicting uncertainty in forecasts of weather and climate. Rep. Prog. Phys., 63, 71–116.

• Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297–3319.

• Wan, X., and G. E. Karniadakis, 2005: An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. J. Comput. Phys., 209, 617–642.

• Wiener, N., 1938: The homogeneous chaos. Amer. J. Math., 60, 897–936.

• Xiu, D., 2009: Fast numerical methods for stochastic computations: A review. Commun. Comput. Phys., 5, 242–272.

• Xiu, D., and G. E. Karniadakis, 2002: The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24, 619–644.

• Xu, L., and R. Daley, 2000: Towards a true 4-dimensional data assimilation algorithm: Application of a cycling representer algorithm to a simple transport problem. Tellus, 52A, 109–128.
Fig. 1. Comparison of MC and PC uncertainty: (a) rms(δX) and (b) δX, the difference between the uncertainty mean and the prediction, as functions of time. Here, r denotes the order of the PC expansion. The arrow at ∼day 25 indicates where the r = 1 PC deviates from MC; the arrows at ∼day 30 and ∼day 35 indicate the start of the deviation for r = 2 and r = 5, respectively.

Fig. 2. Time series of the δX, δY, and δZ components of S1.

Fig. 3. Time series of singular values. (a) Comparison of PC orders r = 1, 2, and 5 to MC for σ = 10⁻². (b) MC only, for σ = 10⁻⁵.

Fig. 4. Time series of the δX component of S1, S2, and S3 from MC and from PC orders r = 1, 2, and 5. Agreement is shown in the dominant S1 between MC and all PC orders for the period before ∼day 25, whereas the secondary and tertiary vectors S2 and S3 show early deviations. Where deviation occurs before day 22, the curves for r = 2 and 5 overlap, differing in magnitude only by about 10⁻³. The comparison for the δY and δZ components is the same (not shown).

Fig. 5. The 1D pdf of each uncertainty component, δX, δY, and δZ, normalized by the respective rms values σx, σy, and σz. The pdf curves for MC and r = 5 PC are in close agreement; the Gaussian r = 1 pdf curve is shown for contrast. The values of (σx, σy, σz) are: MC (0.185, 0.097, 0.233); PC r = 5 (0.185, 0.097, 0.224); and PC r = 1 (0.191, 0.087, 0.211).

Fig. 6. Time series of ξ*1, ξ*2, and ξ*3 in the TL limit; these vectors map to the PC order-one singular vectors S1, S2, and S3. (a) Vector components ξx (solid), ξy (dash), and ξz (dot) for the respective vectors ξ*1, ξ*2, and ξ*3. (b) Singular values λ*1 (solid), λ*2 (dash), and λ*3 (dot) for the vectors ξ*1, ξ*2, and ξ*3, respectively.


Polynomial Chaos Quantification of the Growth of Uncertainty Investigated with a Lorenz Model

• 1 Naval Research Laboratory, Washington, D.C.

Abstract

A time-dependent physical model whose initial condition is only approximately known can predict the evolving physical state to only within certain error bounds. In the prediction of weather, as well as its ocean counterpart, quantifying this uncertainty or the predictability is of critical importance because such quantitative knowledge is needed to provide limits on the forecast accuracy. Monte Carlo simulation, the accepted standard for uncertainty determination, is impractical to apply to the atmospheric and ocean models, particularly in an operational setting, because of these models’ high degrees of freedom and computational demands. Instead, methods developed in the literature have relied on a limited ensemble of simulations, selected from initial errors that are likely to have grown the most at the forecast time. In this paper, the authors present an alternative approach, the polynomial chaos method, to the quantification of the growth of uncertainty. The method seeks to express the initial errors in functional form in terms of stochastic basis expansions and solve for the uncertainty growth directly from the equations of motion. It is shown via a Lorenz model that the resulting solution can provide all the error statistics as in Monte Carlo simulation but requires much less computation. Moreover, it is shown that the functional form of the solution facilitates the uncertainty analysis. This is discussed in detail for the tangent linear case of interest to ensemble forecasting. The relevance of the uncertainty covariance result to data assimilation is also noted.

Corresponding author address: Colin Y. Shen, Naval Research Laboratory, 4555 Overlook Ave S.W., Washington, DC 20375. Email: colin.shen@nrl.navy.mil


1. Introduction

It has long been recognized in weather forecasting that the slightest error in the initial state of the forecast may lead to rapid, unrestrained growth of forecast errors, eventually rendering the forecast unreliable (Lorenz 1963). Such an outcome clearly has implications for ocean and climate model predictions as well, which, like weather forecasts, must rely on observations to approximate the initial state. The degree to which a model prediction can be considered reliable must therefore be judged in terms of errors that are quantifiable throughout the predictive calculation and the tolerable limits for such errors.

To the extent that the initial state is only approximately known, and its uncertainty is defined by a probability distribution of the errors, Monte Carlo simulation (MC) is the accepted standard for establishing the uncertainty of model prediction. However, the method can be impractical to apply to computationally demanding complex geophysical fluid systems having high degrees of freedom; it relies on a large number of model runs that have to adequately sample the error statistics in the initial state to determine the statistics of the error growth. There are efforts directed at circumventing this computational requirement by attempting to produce initial error vectors that potentially have the most rapid growth rates at the starting time of the forecast. One prominent effort is the singular vector analysis, which obtains the dominant error vectors at the forecast time consistent with the tangent linear approximation of a model (see, e.g., Palmer 2000). Another is the bred-vector approach, which seeks the dominant error vectors through iterative model integration and rescaling of perturbations grown from a selected number of initial error vectors (Toth and Kalnay 1997). The dominant error vectors found by either one of these two approaches are then used to generate a limited set of random initial conditions, with which an ensemble of forecasts is produced and the uncertainty determined from the ensemble. The adequacy of such selection of initial conditions for determining forecast uncertainty has been investigated by Anderson (1997) by using the models of Lorenz (1963, 1984). He showed that the subspace spanned by the dominant error vectors in the ensemble may not be enough to capture the forecast spread and other higher moments of error statistics. Whether this conclusion carries over to realistic operational forecast models remains to be seen.

In this paper, we consider an alternative approach to the quantification of error growth that is computationally more efficient and, under certain circumstances, as accurate as MC, as will be shown; moreover, it provides a functional description of the uncertainties that is not possible with MC. The approach is based on polynomial chaos (PC) expansions, a particular form of stochastic basis expansion that has been applied in various disciplines to study the statistical behavior of systems containing random parameters (Ghanem and Spanos 1991). The word "chaos" in this context is independent of the notion of chaos as extreme sensitivity to initial conditions; the approach is applicable to both linear and nonlinear systems. We will apply the method to the Lorenz (1984) model to show how the random errors can be expressed in terms of the stochastic basis expansion, and how the coefficients of the expansion, which relate the initial errors to the uncertainties at any given future time, can be obtained through direct integration of the governing equations. The model of Lorenz (1984) and his earlier model (Lorenz 1963) were developed to produce chaotic behavior and unpredictability similar to those found in realistic, complex ocean and weather models, but with only 3 degrees of freedom. Studies of forecast uncertainty in the literature have since generally begun with an investigation of one of his models, often the earlier version (Lorenz 1963). Here, the use of the PC method and the Lorenz (1984) model to study forecast uncertainty follows this established approach.

The PC method and its application to the Lorenz model are discussed in section 2, with additional details about the methodology provided in the appendix. In section 3, the result of the PC analysis is compared to that of direct MC simulation of the Lorenz model. Specifically, we show the extent to which the variance, covariance, and probability density of the uncertainty can be obtained accurately with low-order PC expansions. Additionally, it is shown that the first-order PC expansion is closely related to the tangent linear approximation of a model, and the coefficients of the first-order expansions correspond to the “propagator” of the tangent linear model. In this regard, the PC method can also be useful in the tangent linear analysis of the forecast uncertainty of interest to the ensemble forecasting.

Statistical quantification of the uncertainties of realistic weather/ocean model forecasts is an enormous challenge because of the high number of degrees of freedom of a real physical system and the implied high dimensionality of the uncertainty space. This same challenge also confronts the PC approach. In the conclusion section, we discuss how an existing strategy for addressing the high dimensionality may be used in conjunction with PC. Although the high dimensionality remains a challenge, studies aimed at understanding forecast uncertainties have continued with reduced models of limited degrees of freedom, and the PC method is equally practical to apply to such reduced models. This application is perhaps of more immediate interest, as the PC functional representation of the uncertainties is more complete and amenable to analysis than the existing approaches, such as those based on tangent linear modeling and MC-like simulation of the reduced models. In the conclusion section we illustrate an additional application of PC with an analysis of the sensitivity of the uncertainty to changes in the initial errors, and we also point out that the full statistics derived from PC expansions, in particular the error covariance function, are potentially of interest to statistically based data assimilation.

Although the focus of the present study is on the uncertainty growth due to initial errors, it will become evident from our discussion that the PC analysis is equally applicable to the uncertainty growth associated with uncertain physical parameters in a model, should such errors need to be included in the uncertainty quantification. We should also point out that the Lorenz model and other oscillatory systems of low degrees of freedom have been studied previously in the PC literature (Le Maître et al. 2004; Wan and Karniadakis 2005; Najm 2009) to develop special techniques that address the uncertainty of oscillation comparable in size to the oscillatory signal itself. The present study is concerned only with the initial error growth to a few tens of percent of the signal amplitude as in a typical forecast study.

2. The Lorenz model and its PC representation

Let t denote time and F and G denote constant forcing amplitudes. The Lorenz (1984) model for the evolution of the three state variables, (X, Y, Z), representing the spectral amplitudes of three basic modes of motion, is
$$\frac{dX}{dt} = -Y^2 - Z^2 - aX + aF, \qquad (1a)$$

$$\frac{dY}{dt} = XY - bXZ - Y + G, \qquad (1b)$$

$$\frac{dZ}{dt} = bXY + XZ - Z, \qquad (1c)$$
where a and b are fixed constant parameters. The model’s initial condition in this study can be cast in the form
$$\mathbf{X}(t=0) = \mathbf{X}_T(t=0) + (\sigma_1\xi_1,\ \sigma_2\xi_2,\ \sigma_3\xi_3)^{*}, \qquad (2)$$
which represents a situation where the initial value of the vector X = (X, Y, Z)*, for example, when derived from observations, may contain errors or uncertainties that cause the observed X to differ from the initial "exact" or "true" state, XT(t = 0) = (X0, Y0, Z0)*, by an amount x(t = 0) = (σ1ξ1, σ2ξ2, σ3ξ3)* (all boldface letters here and below denote column vectors or matrices, and the superscript * denotes their transpose). Such errors are typically random. For this study, as in Lorenz (2005), these random initial errors are assumed to be independent, with Gaussian distributions of zero mean and standard deviations σ1, σ2, and σ3; thus, ξ1, ξ2, and ξ3 are Gaussian random variables of zero mean and unit variance. The integration of (1a)–(1c) requires a specific starting value for each of X, Y, and Z. In Monte Carlo simulation, the starting values are generated randomly from the population of ξ1, ξ2, and ξ3, and one realization of X is obtained from (1a)–(1c) for each randomly generated vector ξ = (ξ1, ξ2, ξ3)*.
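The Monte Carlo procedure just described can be sketched in a few lines. The starting state, time step, step count, and ensemble size below are illustrative placeholders, not values taken from this study; only the model equations and parameters come from the text:

```python
import numpy as np

# Lorenz (1984) model, Eqs. (1a)-(1c); parameter values as given in section 3.
a, b, F, G = 0.25, 4.0, 8.0, 1.23

def lorenz84(X):
    """Right-hand side of (1a)-(1c); X has shape (..., 3)."""
    x, y, z = X[..., 0], X[..., 1], X[..., 2]
    return np.stack([-y**2 - z**2 - a*x + a*F,
                     x*y - b*x*z - y + G,
                     b*x*y + x*z - z], axis=-1)

def rk4_step(X, dt):
    k1 = lorenz84(X)
    k2 = lorenz84(X + 0.5*dt*k1)
    k3 = lorenz84(X + 0.5*dt*k2)
    k4 = lorenz84(X + dt*k3)
    return X + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(0)
sigma, dt, nsteps, nreal = 1e-2, 0.025, 200, 2000   # illustrative sizes only
X0 = np.array([1.0, 0.0, 0.0])                      # hypothetical starting state

XT = X0.copy()                                      # "exact" trajectory (xi = 0)
ens = X0 + sigma*rng.standard_normal((nreal, 3))    # one row per realization of xi
for _ in range(nsteps):
    XT = rk4_step(XT, dt)
    ens = rk4_step(ens, dt)

dX = ens - XT                                       # uncertainty dX = X - XT
rms_dX = np.sqrt(np.mean(np.sum(dX**2, axis=1)))
```

Each realization requires a full model integration, which is what makes MC costly for high-dimensional models; the PC alternative below replaces the ensemble loop with a single integration of coupled coefficient equations.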
Given that each ξ produces a specific X, it follows that X is inherently a function of ξ and that the statistics of X and of its functions f(X) are formally defined by the probability density functions (pdfs) of ξ1, ξ2, and ξ3. For example, the nth moment of f(X) is
$$\langle f(\mathbf{X})^{n} \rangle = \int f(\mathbf{X}(\boldsymbol{\xi}))^{n}\,(2\pi)^{-3/2}\,e^{-\boldsymbol{\xi}^{*}\boldsymbol{\xi}/2}\,d\xi_1\,d\xi_2\,d\xi_3. \qquad (3)$$
Independently, X being a function of ξ may be expanded in terms of polynomials of the random variables ξ1, ξ2, and ξ3. A multivariate polynomial representation called "polynomial chaos" for describing second-order random processes was first developed in studies of Brownian motion (Wiener 1938); its application to uncertainty quantification was introduced by Ghanem and Spanos (1991). In such a polynomial chaos (PC) representation, a random function X is expanded in orthogonal polynomials defined by a weighted inner product whose weight matches the pdf of the random variables ξ. In the present case, the weight is the multivariate Gaussian distribution in (3), and the corresponding polynomials are the Hermite polynomials (Ghanem and Spanos 1991); generalizations to other pdfs have been developed in recent years (Xiu and Karniadakis 2002). Once X is expressed in terms of the polynomials of ξ in this manner, statistics can be evaluated directly from (3). This is the essence of the PC approach: all realizations are accounted for simultaneously in the computation, as opposed to accumulating statistics through numerous trials as in the MC approach. This advantage is further evidenced below, where the PC expansion of X in (1) leads to a set of coupled differential equations that needs to be solved only once to generate the expansion from which the statistics of X are evaluated.
The multivariate Hermite polynomials Hn(ξ1, ξ2, ξ3) are derivable from the generating function of degree n:

$$H_{n;i,j,k} = e^{\boldsymbol{\xi}^{*}\boldsymbol{\xi}/2}\,(-1)^{n}\,\frac{\partial^{n} e^{-\boldsymbol{\xi}^{*}\boldsymbol{\xi}/2}}{\partial\xi_1^{i}\,\partial\xi_2^{j}\,\partial\xi_3^{k}},$$

where i + j + k = n and i, j, k ≤ n. For simplicity, Hn;i,j,k may be numbered sequentially with a single index m and represented in turn by a single-index multivariate basis function ψm(ξ1, ξ2, ξ3). The functions ψm are orthogonal in the sense of the weighted inner product ⟨ψiψj⟩ = ⟨ψi²⟩δij, where δij is the Kronecker delta and the inner product is defined by (3). One can then represent the state variables in the form
$$\mathbf{X}(t,\boldsymbol{\xi}) = \sum_{k=0}^{P} \boldsymbol{\alpha}_{k}(t)\,\psi_{k}(\xi_1,\xi_2,\xi_3), \qquad (4)$$
where αk = (αk, βk, γk)* is a function of t only, and the summation is truncated at polynomial order r. Let n = 3 be the dimension of ξ. Then the upper limit of the summation is P = (r + n)!/(r!n!) − 1. Supplementary details regarding the PC expansion are given in the appendix. By substituting (4) into (1a)–(1c), taking the inner product of ψk with (1a)–(1c), and noting ⟨ψiψj⟩/⟨ψj²⟩ = δij, the Lorenz Eqs. (1a)–(1c) can be transformed into a set of nonlinear, coupled equations for αk accurate to the rth order:
$$\frac{d\alpha_k}{dt} = -\sum_{i=0}^{P}\sum_{j=0}^{P} M_{ijk}\,(\beta_i\beta_j + \gamma_i\gamma_j) - a\,\alpha_k + aF\,\delta_{k0}, \qquad (5a)$$

$$\frac{d\beta_k}{dt} = \sum_{i=0}^{P}\sum_{j=0}^{P} M_{ijk}\,(\alpha_i\beta_j - b\,\alpha_i\gamma_j) - \beta_k + G\,\delta_{k0}, \qquad (5b)$$

$$\frac{d\gamma_k}{dt} = \sum_{i=0}^{P}\sum_{j=0}^{P} M_{ijk}\,(b\,\alpha_i\beta_j + \alpha_i\gamma_j) - \gamma_k, \qquad (5c)$$
where Mijk = ⟨ψiψjψk⟩/⟨ψk²⟩, and k = 0, 1, 2, … , P. The initial conditions consistent with (2) are as follows: α0 = X0, α1 = σ, αk>1 = 0; β0 = Y0, β1 = 0, β2 = σ, βk>2 = 0; and γ0 = Z0, γ1 = γ2 = 0, γ3 = σ, γk>3 = 0; for simplicity, σ1, σ2, and σ3 in (2) are assumed here to have the same value σ.
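The coupling coefficients Mijk are fixed numbers that can be tabulated once, before any time integration, by Gauss–Hermite quadrature. A one-dimensional sketch of that tabulation (the full trivariate case used in the paper is automated, as noted in section 3), using NumPy's probabilists' Hermite polynomials He_n, for which ⟨He_n²⟩ = n!:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Gauss quadrature for the weight exp(-x^2/2); rescale the weights so they
# integrate the standard normal density to 1.
nodes, weights = hermegauss(20)
weights = weights / sqrt(2.0*pi)

def He(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

def inner(*orders):
    """Gaussian-weighted inner product <He_i He_j ...> by quadrature."""
    v = np.ones_like(nodes)
    for n in orders:
        v = v * He(n, nodes)
    return np.sum(weights * v)

# Orthogonality check: <He_i He_j> = i! delta_ij
assert abs(inner(2, 3)) < 1e-10
assert abs(inner(3, 3) - factorial(3)) < 1e-10

# A sample coupling coefficient, M_{112} = <He_1 He_1 He_2> / <He_2^2>
M112 = inner(1, 1, 2) / inner(2, 2)

# Truncation size: P = (r + n)!/(r! n!) - 1 for n random dimensions
P = lambda r, n=3: factorial(r + n)//(factorial(r)*factorial(n)) - 1
```

Here M112 comes out to 1 because He₁² = He₂ + 1, and P(r) reproduces the truncation sizes quoted in section 3 (P = 3 for r = 1, P = 55 for r = 5).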

The PC representation is most efficient, that is, has the fastest convergence, when the pdf of the random function at the initial time is also used as the weight function in the definition of the weighted inner product of the PC basis functions (Najm 2009). However, the pdf initially used for the PC expansion is often not preserved over time in an evolving nonlinear system. This necessitates including increasingly higher-order terms in the expansion as the system evolves in order to maintain the accuracy of the approximation. In the present case, the growth of the uncertainty starting from an initial Gaussian error pdf is investigated for PC orders up to the fifth, as discussed below.

3. Results

The numerical integration of (5a)–(5c) is performed with a Runge–Kutta scheme for polynomial orders from r = 1 (P = 3) to r = 5 (P = 55). The coupling parameter Mijk in (5a)–(5c) is evaluated numerically in a separate step, and the evaluation has been automated. The other parameter values used in (5) are a = 0.25, b = 4.0, F = 8.0, and G = 1.23, as in Lorenz (2005). The MC result used to evaluate the PC accuracy is obtained from 10⁶ realizations; the sample size in Lorenz (2005) is 10⁵. We have additionally computed MC statistics with 10⁷ realizations to ascertain that the MC statistics from 10⁶ realizations have in fact sufficiently converged.

a. Mean and variance

The mean of X, that is, ⟨X⟩, is simply the zeroth-order coefficient α0(t). However, the uncertainty is defined here as δX = X − XT with respect to XT, the "exact" prediction in the absence of error; absence of error corresponds to the condition ξ(t = 0) = 0. It should be noted that XT(t) is not equal to ⟨X⟩ = α0(t) for an evolving nonlinear system subject to uncertain initial conditions. Accordingly, the "variance" of X with respect to XT is ⟨|δX|²⟩. Plotted in Figs. 1a and 1b, respectively, are the root-mean-square value rms(δX) = ⟨|δX|²⟩^1/2 and the magnitude of the mean difference δX = |α0 − XT| as functions of time for PC of order r = 1, 2, and 5. Also shown are the benchmark MC results obtained from 10⁶ realizations, that is, 10⁶ integrations of (1a)–(1c). In contrast, the PC equations (5a)–(5c) need to be integrated only once for each of the PC coefficients, which means that in Figs. 1a and 1b only four integrations of (5a)–(5c) are required for the r = 1 curve, 10 for the r = 2 curves, and 56 for the r = 5 curves.
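Orthogonality makes these statistics purely algebraic in the coefficients: the mean difference follows from α0 − XT, and ⟨|δX|²⟩ = |α0 − XT|² + Σ_{k≥1} |αk|²⟨ψk²⟩, with no sampling step at all. A sketch with made-up r = 1 coefficients (all numbers hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical PC coefficients alpha[k] (k = 0..P, each a length-3 vector for
# X, Y, Z) at a single time, together with the error-free value XT(t).
# For an r = 1 Hermite expansion, psi_0 = 1 and psi_k = xi_k, so <psi_k^2> = 1.
XT = np.array([1.02, -0.31, 0.55])            # made-up "exact" state
alpha = np.array([[1.03, -0.30, 0.56],        # alpha_0: the mean <X>
                  [0.08,  0.02, -0.01],       # alpha_1 (coefficient of xi_1)
                  [0.01,  0.05,  0.03],       # alpha_2 (coefficient of xi_2)
                  [-0.02, 0.01,  0.06]])      # alpha_3 (coefficient of xi_3)
psi_sq = np.array([1.0, 1.0, 1.0, 1.0])       # <psi_k^2> for orders 0 and 1

# Mean difference |alpha_0 - XT| and rms(dX) from orthogonality of the psi_k:
mean_diff = np.linalg.norm(alpha[0] - XT)
var = mean_diff**2 + np.sum(psi_sq[1:, None] * alpha[1:]**2)
rms_dX = np.sqrt(var)
```

For higher orders, only the table of ⟨ψk²⟩ and the number of coefficient rows change; the MC alternative would require re-running the model ensemble to convergence for the same numbers.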

The initial uncertainty for all cases begins at day 0 with σ = 10⁻², with corresponding rms(δX) = √3σ and δX = 0. During the first 20 days or so, when δX < ∼10⁻² and rms(δX) < ∼0.1, there is virtually no difference in the growth of δX and rms(δX) between PC and MC. This means that for small initial uncertainties, with σ less than 10% of the signal, the first-order PC (r = 1) alone is sufficient to capture the statistics of the subsequent uncertainty growth up to rms(δX) < ∼0.1. Above rms(δX) ≈ 0.2, the first-order PC result begins to deviate from MC; the approximate point of departure is indicated by the first arrow from the left, at which point the second-order and fifth-order PC curves still agree with the MC result. However, around day 30 the second-order PC also departs from MC, and finally, around day 35, the fifth-order PC departs from MC. At that point, rms(δX) is well above 0.5, or 50% of the standard deviation (∼1.0) of the temporal variation of XT itself, an uncertainty large enough that it would normally justify terminating the predictive calculation. The foregoing comparison confirms that PC approximations improve with increasing order of the PC expansion. Perhaps most interesting in this comparison is how well the order-one (r = 1) PC expansion reproduces the MC statistics up to rms(δX) ∼ 0.1. Moreover, it tracks the variation of the MC statistics, both the mean and the rms, qualitatively to day 50 (Figs. 1a,b), whereas the higher-order PC cannot once the high-order approximation starts to break down.

b. Covariance

The covariance of X, or the error covariance, is the matrix 𝗖 = 〈[X(t) − α0][X(t) − α0]*〉. All second-moment elements in this matrix are readily evaluated using (3). Here, 𝗖 can be analyzed for the eigenvalues λi and eigenvectors Si in the usual manner by solving
$$(\mathbf{C} - \lambda_{i}\,\mathbf{I})\,\mathbf{S}_{i} = 0.$$
From here on, we adopt the convention of referring to λi and Si as singular values and vectors; specifically, they are the left singular values and vectors, and the reason for this distinction will become apparent in section 3d. For the three-variable Lorenz model, the eigensolution yields three pairs of singular values and vectors, ordered by the size of the singular value, with λ1 > λ2 > λ3. In Fig. 2, the three components of S1 are shown as a function of time for 𝗖 obtained from the MC simulation. The distinguishing feature of this plot is that the δX component fluctuates noticeably less frequently than the δY and δZ components and that δY and δZ differ mainly by a phase shift. This is also found to be the case for the components of S2 and S3 (not shown). The square roots of λ1, λ2, and λ3 (solid curves) for the corresponding three singular vectors in the MC case are plotted on a logarithmic scale in Fig. 3a. Although all three singular values appear to increase with time, initially λ2 is approximately neutral and λ3 decreases with time.
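The eigensolution itself is routine once 𝗖 is in hand; a sketch using synthetic zero-mean uncertainty samples standing in for the actual MC or PC output (the anisotropic scalings are arbitrary, chosen only to give distinct eigenvalues):

```python
import numpy as np

# Synthetic zero-mean uncertainty samples dX = X - <X>, shape (N, 3);
# the per-component scalings 0.2, 0.1, 0.05 are arbitrary placeholders.
rng = np.random.default_rng(1)
dXs = rng.standard_normal((20000, 3)) * np.array([0.2, 0.1, 0.05])

C = (dXs.T @ dXs) / len(dXs)          # error covariance <dX dX*>
lam, S = np.linalg.eigh(C)            # eigh returns eigenvalues in ascending order
lam, S = lam[::-1], S[:, ::-1]        # reorder so that lambda_1 > lambda_2 > lambda_3

S1 = S[:, 0]                          # dominant (left) singular vector
```

With PC, the same 𝗖 is assembled directly from the expansion coefficients via (3), so the decomposition can be repeated at every output time at negligible cost.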

In Fig. 3a, the singular values obtained from PC orders r = 1, 2, and 5 are plotted for comparison with MC, and in the upper panel of Fig. 4 the δX component of S1 derived from PC is compared to that of MC. The PC order 1, 2, and 5 curves in the top panels of Figs. 3a and 4 agree well with the MC curves over the same time intervals that showed good agreement in the rms(δX) and δX plots of Fig. 1; as before, the deviation from the MC curve occurs first for PC order 1, followed by order 2 and then order 5. The comparison of the δY and δZ components of S1 between PC and MC (not shown) shows the same kind of agreement, as does the comparison of λ1 between PC and MC (Fig. 3a, top panel). The agreement, however, does not quite extend to the secondary singular vectors S2 and S3 and their singular values λ2 and λ3. Differences exist between PC and MC even at earlier times, albeit small and for a short period, as illustrated with the singular values and the δX components of S2 and S3 in the second and third panels of Figs. 3a and 4, respectively. The differences in the δX components of the singular vectors occur between days 6 and 8 and again after day 20. These two times coincide with the growth of rms(δX) to above ∼0.05 in Fig. 1a. This indicates that as the uncertainty becomes large, the inaccuracy of the PC approximation is likely to manifest itself first in the weaker singular vectors and values.

The results used for the foregoing comparisons are for uncertainty growth starting from an initial error of σ = 10⁻². If instead the initial error is set at 10⁻⁵, then there is practically no difference between PC and MC in any of the singular vectors/values (less than 10⁻³ of their magnitudes) during the entire time that rms(δX) grows from 10⁻⁵ to 10⁻¹, except for r = 1. However, even for r = 1 PC, the MC singular vector/value result is accurately reproduced for about half of this time, to about day 30, at which point the λ3 curve given by MC turns upward (Fig. 3b). The agreement to day 30 simply indicates that the interactions between small error vectors are sufficiently weak that the statistics are adequately represented by the first-order PC expansion. It can also be seen that during this period only one singular vector contributes to the uncertainty growth; the singular values of the other two vectors are either decaying or approximately neutral.

In the case of Fig. 3a, where the initial errors are larger with σ = 10−2, and the interactions between error vectors are not negligible, λ3 for the first-order PC (r = 1) deviates almost immediately from MC, whereas λ3 of the higher-order PC (r = 2 and r = 5) is able to follow that of MC for an extended period. On the other hand, λ2 and λ1 for the first-order PC are seen to agree with the MC results for a much longer period. It is noteworthy that, despite the total lack of interactions among the error vectors in the r = 1 PC case, the r = 1 λ1 is in good agreement with the MC result up to about day 25, which is consistent with the rms(δX) error growth result shown in Fig. 1a.

c. Probability density

With the PC solution, the probability density function (pdf) of δX is directly obtained from the pdf of ξ by sampling ξ in (4). This computation is much faster than estimating the pdf from MC applied to (1a)–(1c). For small rms(δX) values, the pdf of δX is found to be indistinguishable from a Gaussian. This is expected because the pdf of ξ is Gaussian, and while αk(t) is still small, few of the non-Gaussian higher-order modes are generated by the nonlinear coupling terms in the governing Eqs. (5a)–(5c). In the r = 1 case, where only the first-order polynomials are retained in (5a)–(5c), there are no nonlinear interactions among the coefficients; in this case, the pdf of δX remains Gaussian at all times, independent of the magnitude of α1(t). For r > 1, coefficients αk(t) of the higher-order polynomials appear in δX and interact nonlinearly, so the pdf of δX becomes increasingly non-Gaussian as αk(t) grows in time. Figure 5 shows the pdfs of the three components of δX for MC (solid) and for fifth-order PC (dash) at day 30, when rms(δX) has grown to ∼0.2. The one-dimensional pdf shown for each component is obtained by integrating the three-dimensional pdf over the other two components, that is, by forming the marginal distribution of each component. The agreement in the shape of the pdf between the r = 5 PC and MC is very close, despite the highly asymmetrical form. Plotted in the same figure for comparison is the pdf of the r = 1 PC case. The r = 1 pdf is necessarily Gaussian, and yet its standard deviations, given in the figure caption, are almost the same as those of MC. This agreement is consistent with the analysis of the second-moment statistics discussed earlier, and it suggests that the low-order second-moment statistics may be insensitive to the precise shape of the pdf when the uncertainty is less than 10% of the mean.
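Sampling the pdf from a PC solution requires only evaluating the polynomial expansion at random draws of ξ, rather than re-integrating the model once per sample as in MC. A one-dimensional sketch with hypothetical coefficients (not values from this study); the nonzero quadratic and cubic terms are chosen so that the resulting pdf is visibly skewed and non-Gaussian, as discussed above:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(2)
xi = rng.standard_normal(200_000)      # draws of the Gaussian germ xi

# Coefficients of He_0..He_3 for one uncertainty component; made-up numbers.
# The He_2 and He_3 terms introduce the non-Gaussianity.
alpha = [0.0, 0.15, 0.05, 0.01]
dX = hermeval(xi, alpha)               # evaluate the expansion at each sample

pdf, edges = np.histogram(dX, bins=100, density=True)
skew = np.mean((dX - dX.mean())**3) / np.std(dX)**3   # non-Gaussianity measure
```

No model equations appear in this step at all: once the coefficients are known, pdfs at any time are a post-processing exercise.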

d. The tangent linear limit

The tangent linear (TL) approximation of nonlinear weather and ocean models has been found to be practical and useful in the study of ensemble forecasting and data assimilation (Palmer 2000; Xu and Daley 2000). In the present case, the tangent linear limit is obtained by taking XT as the basic state and all other possible realizations as infinitesimal departures from XT. Thus, in this limit, XT evolves according to the fully nonlinear Lorenz Eqs. (1a)–(1c), whereas the departure x = X − XT evolves about XT according to the linearized Lorenz equations, that is, with x = (x, y, z)* governed by
[Eqs. (6a)–(6c): the linearized Lorenz equations for x, y, and z]
Because x at t = 0 represents the uncertainty of the initial state and is described by a Gaussian pdf, it can similarly be written as a PC expansion, as in (4):
x = ∑m α̂m(t)ψm(ξ),   (7)
and (6a)–(6c) can be similarly transformed into the evolution equations for the PC coefficients. However, at t = 0, the only nonzero PC coefficients in (7) are
α̂1(0) = (σ, 0, 0)*,  α̂2(0) = (0, σ, 0)*,  α̂3(0) = (0, 0, σ)*,
and because (6a)–(6c) are linear, the solution for x (t > 0) is simply x = α̂1ξ1 + α̂2ξ2 + α̂3ξ3; that is, all the other PC coefficients in (7) remain zero at t > 0, or, in matrix form,
x = 𝗕ξ,   (8)
where 𝗕 = {α̂1, α̂2, α̂3}. It should be clear that 𝗕 is a function of t as well as the starting time t0 when XT(t0) is chosen to begin the TL calculation, although we set t0 = 0 here. The covariance matrix of x given by (8) at t = t1 > 0 is
〈xx*〉 = 𝗕〈ξξ*〉𝗕* = 𝗕𝗕*,
whereas its inner product is
〈x*x〉 = 〈ξ*𝗕*𝗕ξ〉,
evaluated according to (3). We let ξ*i be the singular vector that satisfies (𝗕*𝗕 − λ*i𝗜)ξ*i = 0, and has unit length |ξ*i| = 1. Then, any vector ξ can be expressed as a linear combination of the column vector ξ*i or ξ = ξ*d, where ξ* = {ξ*1, ξ*2, ξ*3} is a matrix whose columns are singular vectors, and d is a column vector of random variables having the same Gaussian statistics as ξ—that is, zero mean and unit variance 〈di2〉 = 1 for i = 1, 2, and 3. It then follows that
〈x*x〉 = 〈d*ξ**𝗕*𝗕ξ*d〉.   (9)
Because 𝗕*𝗕ξ* = ξ*Λ, where Λ is a diagonal matrix with singular values λ*i as the diagonal elements, one has 〈d*ξ**𝗕*𝗕ξ*d〉 = 〈d*ξ**ξ*Λd〉 = λ*1d12〉 + λ*2d22〉 + λ*3d32〉, and from (9)
〈x*x〉 = λ*1 + λ*2 + λ*3.
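This identity, 〈x*x〉 equal to the sum of the singular values λ*i of 𝗕*𝗕, is easy to verify numerically. The matrix B below is a random stand-in for the propagator (in the paper its columns come from integrating the first-order PC coefficient equations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3x3 propagator; a stand-in for the PC coefficient matrix B.
B = rng.standard_normal((3, 3))

# lambda*_i: eigenvalues of B*B (squared singular values of B).
lam = np.linalg.eigvalsh(B.T @ B)

# Monte Carlo estimate of <x*x> with x = B xi and xi ~ N(0, I).
xi = rng.standard_normal((3, 500_000))
x = B @ xi
mc = np.mean(np.sum(x * x, axis=0))

print(mc, lam.sum())
```

The sampled value of 〈x*x〉 matches the sum of the λ*i to within Monte Carlo error.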
Conventionally, 〈x*x〉 given by (9) is related to its value x0 at t0. Because x(t0 = 0) = x0 = (σξ1, σξ2, σξ3)*, the solution (8) is equivalent to
x = 𝗕̂x0,   (10)
where 𝗕̂ = {α̂1, α̂2, α̂3}σ−1 (in general, α̂i = αi/σi when the σi each have a different value). It is clear that the matrix 𝗕̂ corresponds to the “propagator” of the standard TL solution; for this terminology, see Frederiksen (2000). The difference is that here 𝗕̂ is obtained via straightforward time integration of the evolution equations for α̂i, whereas repeated matrix multiplication is required to obtain 𝗕̂ by the conventional procedure. It then follows that (𝗕̂*𝗕̂ − λ*iσ−2𝗜)ξ*i = 0 and
〈x*x〉/〈x0*x0〉 = (λ*1 + λ*2 + λ*3)/(3σ2).
The ratio (λ*1 + λ*2 + λ*3)/(3σ2) is therefore the amount of amplification that the error variance 〈x*x〉 has undergone from t0 to t1. Note that the singular vector ξ*i is related to a vector xi at t1 by xi = 𝗕ξ*i. Multiplying this by 𝗕𝗕* and noting that 𝗕𝗕*𝗕ξ*i = λ*i𝗕ξ*i = λ*ixi, we have
𝗕𝗕*xi = λ*ixi,
where 𝗕𝗕* = 𝗖 is the covariance defined in section 3b. In this case, xi = Si is the singular vector given by the first-order PC solution in section 3b.
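The mapping just derived (each right singular vector ξ*i carried forward by 𝗕 becomes an eigenvector of the covariance 𝗖 = 𝗕𝗕* with the same λ*i) can be checked with a generic matrix; B below is again a random stand-in, not the Lorenz propagator:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))  # random stand-in for the propagator

# Right singular vectors xi*_i: eigenvectors of B*B with eigenvalues lambda*_i.
lam, Xi = np.linalg.eigh(B.T @ B)

# Carry each xi*_i forward: x_i = B xi*_i.
X = B @ Xi

# Each x_i is an eigenvector of the covariance C = B B* with the same lambda*_i.
C = B @ B.T
resid = C @ X - X * lam  # broadcasting scales column i of X by lam[i]
print(np.abs(resid).max())
```

The residual is at the level of floating-point roundoff, confirming 𝗖xi = λ*ixi.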

When the initial error is small (σ ≪ 1), the singular vectors Si in the TL limit and those given by the first-order PC solution of the nonlinear equations are essentially identical. In Figs. 3 and 4, we have shown the Si vectors for the first-order PC case as well as those for the higher-order PCs. In Fig. 6, we show the singular vectors ξ*i, which evolve to Si in the TL limit, as defined by the relationships xi = 𝗕ξ*i and xi = Si derived above; thus, the ξ*i vectors at any given point along the time axis in Fig. 6 map to Si of the first-order (r = 1) PC at the corresponding time in Fig. 4. These ξ*i vectors behave strikingly differently from Si: they fluctuate during an initial period of about 50 days and then become invariant in time. The singular vectors ξ*i and Si are known more precisely as the right and left singular vectors of 𝗕, respectively, because the singular value decomposition of 𝗕 is 𝗕 = 𝗦Λ1/2ξ*, in which Si and ξ*i form the column vectors of the left and right matrices 𝗦 and ξ*, respectively, and Λ is the diagonal matrix of the singular values λ*i. Of special interest are the limits of these singular vectors of 𝗕 as the separation between t1 and t0 increases. In the limit of t1 → +∞ with t0 fixed, the ξ*i given by 𝗕 are called the forward Lyapunov vectors, and the Si with t1 fixed and t0 → −∞ are the backward Lyapunov vectors. A summary discussion of the Lyapunov vectors and their exponents (i.e., their growth rates) in the context of the forecast problem can be found in Legras and Vautard (1996). In the present analysis, it suffices to note that the limit for the TL Lorenz model appears to have been reached at around t1 − t0 ≈ 50 days. Thus, if at t = 0 the initial error vectors x0i are chosen such that x0i = σξ*i(t), then for t < 50 days different x0i evolve to Si(t) because ξ*i varies with time.
However, for t > ∼50 days, a unique set of x0i exists that evolves to Si(t); these correspond to the time-invariant forward Lyapunov vectors. In ensemble forecasting, as noted in the introduction, identifying such a unique set of x0i error vectors at the start of the forecast is of considerable interest, because these vectors can guide the selection of a set of initial conditions that presumably would evolve to the Si(t) vectors associated with the bulk of the forecast uncertainty. In this section, we have shown how PC can be effective in obtaining such vectors from the TL equations.

4. Discussion and conclusions

The PC method provides an alternative way to obtain the stochastic solution for the forecast uncertainty due to initial condition errors. The method accounts for the uncertainty explicitly in functional form by using the PC expansion in the error vector space. As shown with the Lorenz model, the method enables the statistics to be determined completely without resorting to MC simulation, and the accuracy can be comparable to that of MC, depending on the order of the PC expansion.

For a full model having a large number of degrees of freedom, the dimensions of the error vector space can become just as large, and a full-fledged MC simulation is computationally prohibitive. Ensemble-forecasting methods have been developed that take an MC-like approach but rely on only a manageable number of simulations (Toth and Kalnay 1997; Palmer 2000). This is achieved, in principle, by identifying a priori the error vectors that are likely to contribute the most to the uncertainty growth and then performing a limited set of simulations with initial conditions constructed from these error vectors.

Although the PC method is applicable to high-dimensional error vector spaces and requires significantly less computation than MC, high dimensionality still poses a challenge: the number of PC equations to integrate increases with the number of uncertain error vectors and, moreover, with the order of the expansion. The latter is ameliorated in general by the fact that the PC expansion converges exponentially fast when the initial error pdf is known and used to define the inner product of the PC basis functions (Najm 2009), so a low-order expansion is often sufficient. As to the high dimensionality, it is possible to reduce the dimensions of an error vector space for the PC calculation by identifying the principal components that span the space, for example, via the Karhunen–Loève (KL) expansion of a random process (Loève 1978), as discussed in the appendix. A recent application of this approach to dimension reduction for use with PC is given by Marzouk and Najm (2009), who make use of the solution of the Lorenz (1963) model, although their primary concern is the inversion of noisy data for model parameters.

Because of the potentially large error vector space associated with realistic forecast models, it seems reasonable, as a first step in the application of PC to realistic models, to consider these models in the context of the TL approximation, which requires PC expansion only to first order when all the error variables have the same type of pdf, such as in the Gaussian case discussed in this paper. The first-order PC is generally simple enough to be amenable to practical implementation. However, even if the PC expansion is nonlinear, involving products of PC basis functions (for example, when a higher-order Hermite PC expansion is required to approximate a non-Gaussian pdf; see section 3c), the integration for the higher-order PC coefficients in the TL limit may still be practical because the TL PC equations for different orders are linear and decoupled. The application of PC to realistic models without the TL approximation will clearly increase the computational demand significantly, depending on the order of PC used. Nevertheless, the result shown in Fig. 1 for the Lorenz model suggests that the application of PC to nonlinear models can be fruitfully explored with even just low-order PC. In particular, Fig. 1 shows that the first-order PC alone obtains the correct rms error up to 0.1 (10%) and an approximately correct rms error up to 50% near day 35. This suggests that the practical first-order PC will also be useful for nonlinear forecast models once its application to the TL version of the models is verified. At the same time, the first-order and higher-order PC may find applications with reduced ocean or weather models of limited degrees of freedom, which are computationally less demanding. Such reduced models are often used in forecast studies to understand the nature and sources of forecast uncertainties, and PC's functional representation of the uncertainties may prove useful in such studies.

In section 3d, it was shown how the PC solution can be used in the TL case to obtain the singular vectors and singular values explicitly as functions of time and to determine their Lyapunov limits. The singular vectors ξ*i obtained in the TL case are of additional interest in that ξ*i at a given time represent the directions of maximum sensitivity of the uncertainty to changes in the initial errors; specifically, ξ*1(t) points in the direction along which the gradient |G| = |(∂E/∂ξ1, ∂E/∂ξ2, ∂E/∂ξ3)*| of the squared magnitude E(t) = x*x is a maximum. Here, x = α̂1ξ1 + α̂2ξ2 + α̂3ξ3 and G = 2𝗔ξ, in which the symmetric square matrix 𝗔 = {Aij} has elements Aij = Aji, Aii = α̂i*α̂i, A12 = α̂1*α̂2, A13 = α̂1*α̂3, and A23 = α̂2*α̂3. Clearly, identifying such a direction is of interest for planning observations that aim to reduce errors in the initial condition for forecasting.
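The relation G = 2𝗔ξ and the extremal property of ξ*1 can be checked numerically. In this sketch, Bhat is a random stand-in for the matrix of normalized PC coefficients {α̂1, α̂2, α̂3}/σ (not values from the Lorenz integration), so A = Bhat*Bhat has exactly the elements Aij = α̂i*α̂j given above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for Bhat = {ahat_1, ahat_2, ahat_3}/sigma.
Bhat = rng.standard_normal((3, 3))

A = Bhat.T @ Bhat               # A_ij = ahat_i* ahat_j, symmetric
def E(xi):                      # squared magnitude E = x*x with x = Bhat xi
    return xi @ A @ xi

xi0 = rng.standard_normal(3)
G = 2.0 * A @ xi0               # analytic gradient of E

# Central finite-difference check of the gradient.
eps = 1e-6
I = np.eye(3)
Gfd = np.array([(E(xi0 + eps * I[i]) - E(xi0 - eps * I[i])) / (2 * eps)
                for i in range(3)])

# Among unit vectors, |G| is largest along the leading eigenvector of A (xi*_1).
lam, V = np.linalg.eigh(A)
units = rng.standard_normal((3, 1000))
units /= np.linalg.norm(units, axis=0)
print(np.abs(G - Gfd).max(), np.linalg.norm(2 * A @ V[:, -1]))
```

The finite-difference check confirms the analytic gradient, and among random unit vectors |G| never exceeds its value along the leading eigenvector of 𝗔, i.e., along ξ*1.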

Although the focus here has been the uncertainty growth of the initial errors, the PC method presented here can easily include other uncertain variables that affect the forecast, such as inaccurately known boundary conditions, uncertain forcing functions, etc. These uncertain variables will simply be additional error dimensions in the PC expansion and thus the same PC methodology should be applicable to the more general problem. Finally, it is of interest to note that the error covariance, which is essential to most data assimilation methods, is readily computed from the PC solution, whereas the computation of time-dependent error covariance via the governing equations such as in the Kalman filter is computationally demanding. The latter difficulty has led to the use of limited MC-like sampling to estimate the error covariance (Evensen 1997). The potential use of PC to obtain error covariance directly for data assimilation, as well as to study the uncertainty problem for more realistic models, is presently under investigation.

Acknowledgments

This work is a contribution to the Advanced Research Initiative, Acoustic Field Uncertainty, at the Naval Research Laboratory, sponsored by the Office of Naval Research.

REFERENCES

  • Anderson, J. L., 1997: The impact of dynamical constraints on the selection of initial conditions for ensemble predictions: Low-order perfect model results. Mon. Wea. Rev., 125, 2969–2983.
  • Augustin, F., A. Gilg, M. Paffrath, P. Rentrop, and U. Wever, 2008: Polynomial chaos for the approximation of uncertainties: Chances and limits. Eur. J. Appl. Math., 19, 149–190.
  • Cameron, R., and W. Martin, 1947: The orthogonal development of nonlinear functionals in series of Fourier–Hermite functionals. Ann. Math., 48, 385–392.
  • Debusschere, B. J., H. N. Najm, P. P. Pébay, O. M. Knio, R. G. Ghanem, and O. P. Le Maître, 2004: Numerical challenges in the use of polynomial chaos representations for stochastic processes. SIAM J. Sci. Comput., 26, 698–719.
  • Evensen, G., 1997: Advanced data assimilation for strongly nonlinear dynamics. Mon. Wea. Rev., 125, 1342–1354.
  • Frederiksen, J. S., 2000: Singular vectors, finite-time normal modes, and error growth during blocking. J. Atmos. Sci., 57, 312–333.
  • Ghanem, R., and P. Spanos, 1991: Stochastic Finite Elements: A Spectral Approach. Springer, 214 pp.
  • Legras, B., and R. Vautard, 1996: A guide to Liapunov vectors. Proc. ECMWF Seminar on Predictability, Vol. I, Reading, United Kingdom, European Centre for Medium-Range Weather Forecasts, 143–156.
  • Le Maître, O. P., H. N. Najm, R. G. Ghanem, and O. M. Knio, 2004: Multi-resolution analysis of Wiener-type uncertainty propagation schemes. J. Comput. Phys., 197, 502–531.
  • Loève, M., 1978: Probability Theory. 4th ed. Springer-Verlag, 425 pp.
  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.
  • Lorenz, E. N., 1984: Irregularity: A fundamental property of the atmosphere. Tellus, 36A, 98–110.
  • Lorenz, E. N., 2005: A look at some details of the growth of initial uncertainties. Tellus, 57A, 1–11.
  • Marzouk, Y. M., and H. N. Najm, 2009: Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. J. Comput. Phys., 228, 1862–1902.
  • Najm, H. N., 2009: Uncertainty quantification and polynomial chaos techniques in computational fluid dynamics. Annu. Rev. Fluid Mech., 41, 35–52.
  • Palmer, T. N., 2000: Predicting uncertainty in forecasts of weather and climate. Rep. Prog. Phys., 63, 71–116.
  • Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297–3319.
  • Wan, X., and G. E. Karniadakis, 2005: An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. J. Comput. Phys., 209, 617–642.
  • Wiener, N., 1938: The homogeneous chaos. Amer. J. Math., 60, 897–936.
  • Xiu, D., 2009: Fast numerical methods for stochastic computations: A review. Commun. Comput. Phys., 5, 242–272.
  • Xiu, D., and G. E. Karniadakis, 2002: The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24, 619–644.
  • Xu, L., and R. Daley, 2000: Towards a true 4-dimensional data assimilation algorithm: Application of a cycling representer algorithm to a simple transport problem. Tellus, 52A, 109–128.

APPENDIX

Form of the PC Expansion

The details of the PC method and its applications, especially in engineering, have been discussed and reviewed extensively in the papers by Debusschere et al. (2004), Augustin et al. (2008), Najm (2009), and Xiu (2009), as well as in the book by Ghanem and Spanos (1991). We summarize here the basic elements of the method pertinent to this study.

Given a set of orthogonal functions Φn, n = 1, 2, … , that are Hermite polynomials of random variables ξj, j = 1, 2, … , K, the theorem of Cameron and Martin (1947) guarantees that the expansion of a second-order stochastic process v(t; ξ) in terms of Φn converges in the mean squared sense. That is, if v(t; ξ) is approximated as the weighted sum of the polynomial functions Φn
v(t; ξ) = ∑n=1N μn(t)Φn(ξ),   (A1)
then 〈[v(t; ξ) − ∑n=1N μn(t)Φn(ξ)]²〉 → 0 as N → ∞. Here, ξ denotes the random variables ξj, and t denotes time; however, space or space/time variables can equally be used in its place.
The Hermite polynomials Φn for a Gaussian random field are Hn;i,j,k, given in section 2. Explicitly, Φn in terms of three random variables ξj, j = 1, 2, and 3 to the second order are
Φ1 = 1,  Φ2 = ξ1,  Φ3 = ξ2,  Φ4 = ξ3,  Φ5 = ξ1² − 1,  Φ6 = ξ1ξ2,  Φ7 = ξ1ξ3,  Φ8 = ξ2² − 1,  Φ9 = ξ2ξ3,  Φ10 = ξ3² − 1.
It is customary to simplify the expression (A1) with a single index such as that shown in (4). In this case, ψm in (4) can be related to Φn sequentially as
[unnumbered equation: the sequential reordering relating ψm to Φn]
The PC coefficients μn, after reordering as above, are related in the same way to the PC coefficients in (4), namely the components of αm, with X taking the place of v. The calculation to determine the PC coefficients then proceeds as described in the paper.
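The orthogonality of this second-order basis under the Gaussian measure, which underlies inner products such as (3), can be checked with probabilists' Gauss–Hermite quadrature. The ordering of the ten Φn below is one natural choice and is not taken from the paper:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# 1D Gauss-Hermite rule for the weight exp(-x^2/2); 4 nodes integrate
# polynomials of degree <= 7 exactly, enough for products of the degree-2 basis.
x, w = hermegauss(4)
w = w / np.sqrt(2.0 * np.pi)  # normalize to the standard Gaussian measure

# Tensor quadrature grid over (xi1, xi2, xi3).
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
W = (w[:, None, None] * w[None, :, None] * w[None, None, :]).ravel()
x1, x2, x3 = X1.ravel(), X2.ravel(), X3.ravel()

# Second-order Hermite PC basis in three variables (10 terms; ordering assumed).
basis = [np.ones_like(x1), x1, x2, x3,
         x1**2 - 1, x1 * x2, x1 * x3,
         x2**2 - 1, x2 * x3, x3**2 - 1]

# Gram matrix <Phi_m Phi_n>: off-diagonal entries vanish by orthogonality.
Gram = np.array([[np.sum(W * pm * pn) for pn in basis] for pm in basis])
off = Gram - np.diag(np.diag(Gram))
print(np.abs(off).max())
print(np.diag(Gram))
```

The off-diagonal Gram entries vanish to roundoff, and the diagonal gives the normalization constants 〈Φn²〉 (1 for the linear and mixed terms, 2 for the pure quadratic terms).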

The number of terms in (A1) increases rapidly with the order of the expansion as well as with the number of random variables. The PC method is attractive when the expansion converges within the first few orders, which is usually the case when the pdfs of the random variables are close to the functional form of the weighting function of the orthogonal polynomials. In the special case of tangent linear modeling, it may not be necessary to go beyond the first-order expansion, as is evident in section 3d. To deal with the increasing number of random variables, which invariably arises in realistic oceanic and atmospheric models, the KL expansion can be invoked to select only the relevant dimensions of the uncertainty space for the PC analysis. The following illustrates the idea, based on Marzouk and Najm (2009).

Let the previous v now be a random function of the spatial variable r at time t = 0, that is, v0(r; ω) = v(r, t = 0; ω), where ω denotes an event of the sample space. Provided that the covariance function of v0 exists, the KL expansion of v0 is
v0(r; ω) = v̄0(r) + ∑i=1∞ √λi ci(ω)ϕi(r).   (A2)
In (A2), v̄0 is the mean; ci(ω) are the coefficients of the expansion; and λi and ϕi(r) are the eigenvalues and eigenfunctions associated with the covariance function C(r1, r2) of v0, as defined by the integral
∫D C(r1, r2)ϕi(r2) dr2 = λiϕi(r1)
over the domain D of interest. The coefficients ci(ω) in (A2) are uncorrelated random variables having zero mean and unit variance. With the terms in (A2) ordered in descending λi and with any integer L ≥ 1, the finite sum of (A2),
v0L(r; ω) = v̄0(r) + ∑i=1L √λi ci(ω)ϕi(r),   (A3)
is the optimal approximation of v0(r, ω) in the mean-squared sense that the total residual,
∫Ω ∫D [v0(r; ω) − v0L(r; ω)]² dr dP(ω),
is a minimum among all possible approximations of v0 obtained with any orthogonal basis functions other than ϕi, where P(ω) is the probability measure over the sample space Ω. Thus, by truncating (A2) at L to approximate v0(r, ω) in accordance with some predetermined criterion, the uncertainty space that would have been infinite is now limited to L degrees of freedom.
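A discrete sketch of this truncation: for an assumed exponential covariance C(r1, r2) = exp(−|r1 − r2|/ℓ) on a 1D grid (the kernel and its parameters are illustrative, not from the paper), the eigenvalues λi decay quickly, so a small number L of KL modes captures most of the variance:

```python
import numpy as np

# Discretized covariance C(r1, r2) = exp(-|r1 - r2| / ell) on [0, 1].
n, ell = 200, 0.2
r = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(r[:, None] - r[None, :]) / ell)

# Eigenpairs of the covariance, sorted by descending eigenvalue.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]

# Fraction of total variance captured by the leading L modes.
captured = np.cumsum(lam) / lam.sum()
L = int(np.searchsorted(captured, 0.95)) + 1
print(L, captured[L - 1])
```

Ordering the eigenpairs by descending λi and truncating where the cumulative variance fraction crosses a preset criterion (95% here) implements the L-mode reduction described above.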

Fig. 1.

Comparison of MC and PC uncertainty: (a) rms(δX) and (b) δX, the difference between the uncertainty mean and the prediction, as a function of time. Here, r denotes the order of the PC expansion. The arrow at ∼ day 25 indicates when the r = 1 PC deviates from MC. The arrows at ∼ day 30 and ∼ day 35 indicate the start of the deviation for r = 2 and r = 5, respectively.

Citation: Journal of Atmospheric and Oceanic Technology 27, 6; 10.1175/2009JTECHO727.1

Fig. 2.

Time series of the δX, δY, and δZ components of S1.


Fig. 3.

Time series of the singular values. (a) Comparison of PC orders r = 1, 2, and 5 to MC for σ = 10−2. (b) MC only, for σ = 10−5.


Fig. 4.

Time series of the δX component of S1, S2, and S3 from MC and PC orders r = 1, 2, and 5. The dominant vector S1 agrees between MC and all PC orders for the period before ∼ day 25, whereas the secondary and tertiary vectors S2 and S3 deviate earlier. Where deviation occurs before day 22, the curves for r = 2 and 5 overlap, differing in magnitude by only about 10−3. The comparison for the δY and δZ components is the same (not shown).


Fig. 5.

The 1D pdf of each uncertainty component, δX, δY, and δZ, normalized by their respective rms values σx, σy, and σz. The pdf curves for MC and r = 5 PC are in close agreement; the Gaussian r = 1 pdf curve is shown for contrast. The values of (σx, σy, σz) are (0.185, 0.097, 0.233) for MC, (0.185, 0.097, 0.224) for PC r = 5, and (0.191, 0.087, 0.211) for PC r = 1.


Fig. 6.

Time series of ξ*1, ξ*2, and ξ*3 in the TL limit. These vectors map to the first-order PC singular vectors S1, S2, and S3. (a) Vector components ξx (solid), ξy (dash), and ξz (dot) for the respective vectors ξ*1, ξ*2, and ξ*3. (b) Singular values λ*1 (solid), λ*2 (dash), and λ*3 (dot) for the vectors ξ*1, ξ*2, and ξ*3, respectively.

