• Cover, T. M., and J. A. Thomas, 1991: Elements of Information Theory. Wiley, 576 pp.

  • DelSole, T., 2004a: Predictability and information theory. Part I: Measures of predictability. J. Atmos. Sci., 61, 2425–2440.

  • DelSole, T., 2004b: Stochastic models of quasigeostrophic turbulence. Surv. Geophys., 25, 107–149.

  • DelSole, T., and P. Chang, 2003: Predictable component analysis, canonical correlation analysis, and autoregressive models. J. Atmos. Sci., 60, 409–416.

  • Gardiner, C. W., 1990: Handbook of Stochastic Methods. 2d ed. Springer-Verlag, 442 pp.

  • Joe, H., 1989: Relative entropy measures of multivariate dependence. J. Amer. Stat. Assoc., 84, 157–164.

  • Johnson, R. A., and D. W. Wichern, 1982: Applied Multivariate Statistical Analysis. Prentice-Hall, 594 pp.

  • Kleeman, R., 2002: Measuring dynamical prediction utility using relative entropy. J. Atmos. Sci., 59, 2057–2072.

  • Kloeden, P. E., and E. Platen, 1999: Numerical Solution of Stochastic Differential Equations. Springer, 636 pp.

  • Leung, L-Y., and G. R. North, 1990: Information theory and climate prediction. J. Climate, 3, 5–14.

  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.

  • Lorenz, E. N., 1969: Atmospheric predictability as revealed by naturally occurring analogues. J. Atmos. Sci., 26, 636–646.

  • Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293.

  • Schneider, T., and S. M. Griffies, 1999: A conceptual framework for predictability studies. J. Climate, 12, 3133–3155.

  • Tippett, M. K., and P. Chang, 2003: Some theoretical considerations on predictability of linear stochastic dynamics. Tellus, 55A, 148–157.

  • von Storch, H., and F. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp.

  • Fig. 1. Schematic illustrating the relation between different distributions and the associated concepts. A distribution is represented by a point, and the “distance” between the points indicates the degree of difference between the distributions. Each line segment is labeled to indicate its meaning. The dashed lines join distributions that usually are represented in different state spaces and therefore have undefined distances.

  • Fig. 2. Predictability of the two-variable stochastic model (26) with dynamical operator (45) and β = 0 (dashed), and of the regression forecast distribution with β = 5 for ensemble sizes 1, 2, 4, 8, 16, 32, 64, and 128 (solid lines, from bottom up).

  • Fig. 3. Predictability of the two-variable stochastic model (26) with β = 0 (dash with no error bar or circle) and associated regression forecast distribution with β = 5 and with 16 ensemble members (solid with no error bar or circle), as in Fig. 2. Also shown is the predictability estimated from 10 independent samples of the verification and initial condition (dash with error bars and circles) and associated regression forecast distribution (solid with error bars and circles), based on a Gaussian assumption for the distributions. The error bars show the 10th and 90th percentiles.

  • Fig. 4. Predictability of the Lorenz (1963) model estimated from 100 independent initial condition and verification pairs, assuming a Gaussian form for the distribution (dashed). The accessible forecast model is drawn from the same model class but with the parameter β adjusted from 28 to 20. The predictability of the regression forecast distribution based on one ensemble member and a Gaussian form for the distribution is shown as the solid line. The predictability based only on two leading potential predictable components of the accessible forecast model is shown as the solid line with circles.


Predictability and Information Theory. Part II: Imperfect Forecasts

  • 1 George Mason University, Fairfax, Virginia, and Center for Ocean–Land–Atmosphere Studies, Calverton, Maryland

Abstract

This paper presents a framework for quantifying predictability based on the behavior of imperfect forecasts. The critical quantity in this framework is not the forecast distribution, as used in many other predictability studies, but the conditional distribution of the state given the forecasts, called the regression forecast distribution. The average predictability of the regression forecast distribution is given by a quantity called the mutual information. Standard inequalities in information theory show that this quantity is bounded above by the average predictability of the true system and by the average predictability of the forecast system. These bounds clarify the role of potential predictability, of which many incorrect statements can be found in the literature. Mutual information has further attractive properties: it is invariant with respect to nonlinear transformations of the data, cannot be improved by manipulating the forecast, and reduces to familiar measures of correlation skill when the forecast and verification are joint normally distributed. The concept of potential predictable components is shown to define a lower-dimensional space that captures the full predictability of the regression forecast without loss of generality. The predictability of stationary, Gaussian, Markov systems is examined in detail. Some simple numerical examples suggest that imperfect forecasts are not always useful for joint normally distributed systems since greater predictability often can be obtained directly from observations. Rather, the usefulness of imperfect forecasts appears to lie in the fact that they can identify potential predictable components and capture nonstationary and/or nonlinear behavior, which are difficult to capture by low-dimensional, empirical models estimated from short historical records.

Corresponding author address: Timothy DelSole, Center for Ocean–Land–Atmosphere Studies, 4041 Powder Mill Rd., Suite 302, Calverton, MD 20705-3106. Email: delsole@cola.iges.org


1. Introduction

DelSole (2004a, hereafter Part I) discussed a framework for quantifying predictability based on information theory. This framework, which is reviewed in the following section, requires probability distributions that are not known and, in practice, are estimated from an imperfect forecast model. The purpose of this paper is to discuss an approach to accounting for imperfect forecasts within the above framework. The basic idea is to use not the forecast itself, but the conditional distribution of the state given the forecast. This idea was suggested by Schneider and Griffies (1999), although our interpretation appears to differ from theirs. The assumptions inherent in this approach are laid out in section 3 and the resulting predictability estimates are shown in section 4 to constitute a lower bound on the true predictability and potential predictability. The mutual information between verification and forecast is argued to be an attractive measure of forecast skill. Section 5 discusses predictable components of imperfect models and their potential significance in practical applications. The above concepts are illustrated in section 6 in the context of stationary, Gaussian, Markov systems. Numerical examples are presented in section 7. Finally, a summary of the results is given in section 8.

This paper considers the ideal case of large samples (large in a sense to be discussed in section 3). Strategies for dealing with small samples will be addressed in Part III.

2. Brief review of predictability

In this section we summarize the predictability framework proposed in Part I. Consider a dynamical system of dimension K. The state of the system at time t is specified by a K-dimensional vector xt, which specifies a point with coordinates xt in the K-dimensional space. If the state is uncertain, it is appropriate to describe the state by the density of possible points in phase space. This density is essentially a probability distribution function and evolves in time in a manner described by Liouville’s equation for conservative systems. The distribution of xt changes discontinuously after the system is observed. Let the set of all observations up to time t be denoted by ot. Note that xt and ot often reside in different spaces. The distribution of the state after observations become available is the conditional distribution p(xt|ot), whose mean is called the analysis. As is well known from state space estimation theory, the analysis distribution p(xt|ot) depends on the forecast model and therefore is conditioned on the forecast model.

It proves convenient to distinguish the state at two different times by different symbols. Thus, let the initial condition at time t be i = xt, the verification at time t + τ be v = xt+τ, and the observations up to time t be o = ot. The parameter τ is called the lead-time. The probability distribution functions (pdf’s) of i, v, o will be denoted by p(i), p(v), p(o), respectively, where the function p(·) is understood to differ according to its argument. In this notation, the analysis distribution at time t is p(i|o). For stationary systems, p(v) = p(i).

The distribution of a future state v = xt+τ, after observations become available, is denoted by p(v|o) and computed from the classical formula:
    p(v|o) = ∫ r(v|i) p(i|o) di,  (1)
where r(v|i) is a transition probability associated with a dynamical or stochastic model and the integral is a multiple integral. The distribution p(v|o) will be called the perfect model forecast distribution. This distribution describes our knowledge of the future state v = xt+τ after antecedent observations ot and a (perfect model) forecast based on those observations become available. We use the term forecast system to refer to the combined influence of the forecast model and the uncertainty in the initial condition. Note that a perfect model forecast distribution is not “perfectly predictable,” even for a deterministic model: even if r(v|i) is deterministic, and hence a delta function, p(v|o) from (1) is not a delta function, owing to the uncertainty in the initial condition described by p(i|o).
In the absence of (recent) observations ot, the variable v = xt+τ has a climatological distribution given by its marginal distribution
    p(v) = ∫ p(v|o) p(o) do.  (2)
If the system is stationary or cyclostationary, then the climatological distribution is independent of time or periodic and can be estimated from historical records. The variable v is said to be unpredictable if p(v|o) = p(v), which is equivalent to the statement that v is independent of the observations o.
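To make Eqs. (1) and (2) concrete, the following minimal sketch propagates a Gaussian analysis distribution through a hypothetical AR(1) system (the parameters a, q, mu0, s0 are assumed for illustration and are not from this paper). The perfect model forecast distribution relaxes to the climatological distribution as the lead time grows, at which point v becomes unpredictable.

```python
# Sketch (hypothetical AR(1) example): x_{t+1} = a x_t + q eps_t with
# Gaussian analysis distribution p(i|o) = N(mu0, s0^2). Propagating p(i|o)
# through the transition probability, as in Eq. (1), gives a Gaussian
# perfect-model forecast distribution p(v|o) with
#   mean(tau) = a^tau mu0
#   var(tau)  = a^(2 tau) s0^2 + q^2 (1 - a^(2 tau)) / (1 - a^2),
# which relaxes to the climatology of Eq. (2), p(v) = N(0, q^2/(1 - a^2)).

a, q = 0.9, 1.0          # AR coefficient and noise std (assumed values)
mu0, s0 = 2.0, 0.3       # analysis mean and spread (assumed values)

def forecast_moments(tau):
    """Mean and variance of p(v|o) at lead time tau for the AR(1) sketch."""
    mean = a**tau * mu0
    var = a**(2 * tau) * s0**2 + q**2 * (1 - a**(2 * tau)) / (1 - a**2)
    return mean, var

clim_var = q**2 / (1 - a**2)   # climatological variance

for tau in (0, 5, 20, 100):
    m, v = forecast_moments(tau)
    print(f"tau={tau:3d}  mean={m:8.5f}  var={v:8.5f}  clim_var={clim_var:.5f}")
```

At tau = 0 the forecast distribution equals the analysis distribution; at long lead times the forecast mean decays to the climatological mean and the variance saturates at the climatological variance.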

3. The accessible forecast distribution

A key problem with applying the above methodology is that, in practice, the transition probability r(v|i) for the climate system is not known. It follows then that the perfect model forecast distribution p(v|o) cannot be computed from (1) and, hence, is unknown too. Moreover, the transition probability r(v|i) cannot be estimated from data because nature provides only a single realization of xt+τ for a given value of xt, and the atmosphere has no natural analogues in the sense discussed by Lorenz (1969). For these reasons, the transition probability must be estimated from a model. The details of the model are immaterial: for example, the model could be purely empirical or purely physical. What is important is that the model provides a transition probability, which in all realistic cases differs from that of the true system. Moreover, the state space of the forecast model usually differs from the true state space. Consequently, the “initial condition” appropriate for the accessible forecast, denoted if, differs from the initial condition appropriate for the perfect model forecast, i. Let the initial condition distribution appropriate for the accessible forecast be p(if|o), and let the forecast verifying at time t + τ be f. The forecast distribution is then given by
    p(f|o) = ∫ r′(f|if) p(if|o) dif,  (3)
where r′(f|if) is the transition probability for the model. The distribution p(f|o) will be called the accessible forecast distribution, to distinguish it from the perfect model forecast distribution p(v|o), which is inaccessible in any realistic scenario. Samples drawn from p(f|o) constitute the forecast ensemble.

Obviously, model errors would be eliminated if they could be; we assume here that they cannot be eliminated easily. In such situations, there appears to be no alternative but to quantify predictability based on the past behavior of the model and observations, assuming that the past relation between model and observations will persist into the future. This assumption is reasonable for stationary systems, but is problematic for nonstationary systems, such as occur in climate change scenarios.

Let us assume, then, that the system is stationary. A complete description of the past behavior of the model and observations is the joint distribution p(v, o, f). The distribution of the verification given knowledge of the forecast and observations is the conditional distribution p(v|o, f). Since the true system evolves according to a set of laws that are (presumably) independent of any accessible forecast, f and v are conditionally independent, in the sense that
    p(v|o, f) = p(v|o).  (4)
Hence, if the joint distribution p(v, o, f) were really known, then the accessible forecast would be irrelevant for the purposes of measuring predictability. The fact that knowledge of p(v, o, f) is tantamount to knowledge of the perfect model distribution p(v|o) raises the question as to the role of the forecast model. The answer lies in the fact that some distributions are more accessible than others. For instance, the distributions p(v, o) and p(f, o) are anticipated to be complicated functions owing to the nonlinear transition probabilities associated with dynamical systems. On the other hand, if the forecast model captures enough detail in the nonlinear processes, then it is hoped that the forecast f will differ from v in “simple” ways that are “easily” corrected. For instance, if the forecast merely is biased relative to v, then the best prediction is the forecast distribution shifted by an amount that removes the bias. In this scenario, p(v, o) and p(f, o) would be impractical to estimate owing to their nonlinearity, but p(v, f) would not because v and f are related by an additive constant. The approach pursued here assumes that p(v, f) requires “much less” data for its estimation than p(v, o), otherwise we would use p(v, o) and dispense with the forecast altogether. Some insight into these assumptions is provided by the examples presented in section 7.
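The bias-correction scenario above can be sketched numerically. Under an assumed joint-normal setup (synthetic data; the bias b and noise level are illustrative, not from the paper), the regression forecast distribution p(v|f) reduces to a linear regression of v on f, and the fitted conditional mean removes the bias automatically.

```python
import numpy as np

# Sketch: estimating the regression forecast distribution p(v|f) by linear
# regression, for a hypothetical biased, noisy forecast f = v + b + noise.
rng = np.random.default_rng(0)
n, b = 5000, 3.0
v = rng.normal(0.0, 1.0, n)            # verification (climatology N(0, 1))
f = v + b + rng.normal(0.0, 0.5, n)    # biased accessible forecast

# For joint-normal (v, f), p(v|f) is Gaussian with
#   mean = mu_v + cov(v, f)/var(f) * (f - mu_f)
#   var  = var(v) - cov(v, f)^2 / var(f)
cov = np.cov(v, f)
slope = cov[0, 1] / cov[1, 1]
intercept = v.mean() - slope * f.mean()
resid_var = cov[0, 0] - cov[0, 1]**2 / cov[1, 1]

# Population values here: slope = 0.8, intercept = -2.4, resid var = 0.2;
# the negative intercept is exactly what removes the bias b.
print("slope:", slope, " intercept:", intercept, " resid var:", resid_var)
```

Note that the conditional variance (0.2 here) is smaller than the climatological variance (1.0): the regression forecast distribution is sharper than climatology whenever f and v covary.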

Since our focus is on predictability, we do not dwell on the question of how to utilize p(v, f) to characterize forecast errors; see Murphy (1993) and von Storch and Zwiers (1999) for discussion of this issue. Given the joint distribution p(v, f), the distribution of the verification given the accessible forecast is p(v|f). We call p(v|f) the regression forecast distribution for reasons that will become apparent. The regression forecast distribution p(v|f) has many desirable properties related to accuracy, reliability, and resolution, in the sense of Murphy (1993). Furthermore, if the accessible forecast is independent of the verification, then p(v|f) = p(v); that is, the regression forecast distribution reduces to the climatological distribution, and the variable v is said to be unpredictable. In such cases, the accessible forecast distribution gives no information about the verification that is not already contained in the climatological distribution.

If an ensemble of forecasts is available, say f1, f2, . . . , fM, then the desired regression forecast distribution is the conditional distribution given the forecast ensemble, p(v|f1, f2, . . . , fM). Typically, forecast ensembles from the same model are constructed such that each member is equally likely. In such cases, the order of the ensemble members is irrelevant, and hence the regression forecast distribution p(v|f1, f2, . . . , fM) can depend on the sample only through certain sufficient statistics. In section 6 we show that if the distribution is joint normal, then the ensemble mean is a sufficient statistic of the regression forecast distribution; that is, p(v|f1, f2, . . . , fM) = p(v|〈f〉), where 〈f〉 is the sample mean of the ensemble forecasts [also defined in (18)].
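A quick synthetic illustration of the sufficiency claim (assumed joint-normal members of the form f_m = v + independent noise; this is an illustrative sketch, not the paper's section 6 derivation): regressing the verification on all M members yields equal weights, so the ensemble mean carries all the usable information.

```python
import numpy as np

# Sketch: for exchangeable joint-normal ensemble members, regression on all
# members is equivalent to regression on the ensemble mean.
rng = np.random.default_rng(2)
n, M = 50000, 4
v = rng.normal(size=n)                          # verification
F = v[:, None] + rng.normal(size=(n, M))        # M exchangeable members

# Regress v on all M members jointly: the fitted weights are (nearly) equal,
X = np.column_stack([np.ones(n), F])
beta = np.linalg.lstsq(X, v, rcond=None)[0]
print(np.round(beta[1:], 3))                    # each weight close to 1/(M+1)

# so regressing on the ensemble mean alone gives essentially the same
# conditional-mean prediction.
Xm = np.column_stack([np.ones(n), F.mean(axis=1)])
bm = np.linalg.lstsq(Xm, v, rcond=None)[0]
print(np.corrcoef(X @ beta, Xm @ bm)[0, 1])     # correlation of predictions ~ 1
```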

Note that the forecast distribution p(f|o) plays a relatively minor role in predictability, as compared to the regression forecast distribution p(v|f). This point deserves mention since numerous predictability studies focus almost exclusively on the forecast distribution p(f|o). This emphasis is appropriate if the accessible forecast is perfect. If the accessible forecast is not perfect, however, structure in the forecast distribution is relevant only to the extent that it covaries with the event in question. One might suggest that the forecast distribution can be transformed into the relevant distribution for the event that we want to predict. Leaving aside the question of how to construct an appropriate transformation, there can be no more information in the derived forecast distribution than in the individual members of the forecast that were used to construct the distribution. Hence, it is sensible that the regression forecast distribution, which plays a central role in our predictability framework, depends on the actual realizations of the forecast rather than on some distribution derived from the realizations.

4. Predictability of regression forecasts

The previous section introduced the regression forecast distribution p(v|f), which can be interpreted as the distribution of the future state v given the accessible forecast f and the (past) joint behavior between these two variables. This section shows that the predictability of a regression forecast distribution constitutes a rigorous lower bound on the true predictability. Furthermore, the “potential predictability,” which is a measure of the difference between the accessible forecast distribution p(f|o) and its climatology p(f), provides an upper bound on the predictability of the regression forecast distribution. These bounds clarify the role of accessible forecasts in the estimation of predictability.

As discussed in Part I, the most fundamental definition of predictability is based on some measure of the difference between the perfect model distribution p(v|o) and climatological distribution p(v). Two measures of this difference are relative entropy Rυ,o and predictive information Pυ,o, as discussed in Kleeman (2002) and Schneider and Griffies (1999). DelSole (2004b) showed that the average of either of these quantities, over all observations, yields a quantity called the mutual information I(V; O):
    I(V; O) = ∫∫ p(v, o) log [ p(v, o) / ( p(v) p(o) ) ] dv do.  (5)
Mutual information has the attractive property that it is invariant with respect to invertible, nonlinear transformations.

Unfortunately, the perfect model distribution p(v|o) is not accessible, as discussed in the previous section. Instead, we have access to the forecast p(f|o) and the regression forecast distribution p(v|f). Hence, two distinct predictability measures may be conceived. First, the accessible forecast p(f|o) may be compared to its climatology p(f). By analogy with (5), the associated average predictability is the mutual information between F and O, denoted I(F; O). Here I(F; O) will be called potential predictability since this term is used similarly in the literature to describe the predictability of a forecast system relative to its climatology, without reference to the true system. Second, the regression forecast distribution p(v|f) may be compared to the climatological distribution p(v). It can be shown that the average predictability of the regression forecast distribution is the mutual information between F and V, denoted I(F; V); I(F; V) will be called the predictability of the regression forecast distribution.

We now show that the metrics I(V; O), I(F; O), and I(F; V) satisfy certain fundamental inequalities. First, Eq. (4) implies that the variables v, f, o form a Markov chain in the order f → o → v. By the fundamental data processing theorem in information theory (Cover and Thomas 1991, chapter 2), the mutual information between the above variables satisfies the inequality
    I(F; O) ≥ I(F; V).  (6)
This inequality states that the potential predictability of an accessible forecast system is greater than or equal to the predictability of the regression forecast distribution. The above inequality clarifies that potential predictability does not constitute an upper bound on the true predictability, as is sometimes implied in the literature; rather, it constitutes an upper bound on the average predictability of the regression forecast.
Conditional independence of the verification and accessible forecast also implies the opposite Markov chain v → o → f, from which it follows that
    I(V; O) ≥ I(V; F).  (7)
This inequality states that no regression forecast can have greater predictability than that of the true system. Equivalently, the predictability of the regression forecast distribution constitutes a lower bound on the average predictability of the true system.

In contrast to most other proposed measures of predictability, mutual information does not require that the state space of the accessible forecast and the true system be the same. The desirability of this property can be appreciated from the fact that investigators generally are interested in whether some set of forecast variables can provide useful predictors of the verification, regardless of whether the variables are the same. Large mutual information indicates that some forecast variables are statistically dependent on the verification and hence can provide useful predictors of it.

A schematic that may facilitate the interpretation of the above quantities is shown in Fig. 1. In this abstraction, a probability distribution is characterized by a “point,” and the distance between two points indicates the difference between two distributions. Each line segment in the figure has been labeled to indicate its meaning. Angles have no meaning. The distance between the climatological distribution p(v) and the perfect model distribution p(v|o) defines the predictability of the true system. However, we do not have access to the perfect model distribution p(v|o); rather, we have access to the accessible forecast distribution p(f|o). The “distance” between the perfect model distribution p(v|o) and the accessible forecast p(f|o) is the most complete description of forecast error (although this distance is undefined if these distributions are represented in different state spaces). The distance between the accessible forecast p(f|o) and its climatology p(f) represents potential predictability. The regression forecast distribution p(v|f) provides a link between these two types of predictability measures. The distance between the regression forecast distribution p(v|f) and the climatological distribution p(v) is the best estimate of predictability based solely on the accessible forecasts. The figure has been constructed such that the predictability of the regression forecast distribution is smaller than either the predictability of the true system or the predictability of the accessible forecast system, as required by inequalities (6) and (7).

A remarkable property of mutual information is that no operation on the forecast can increase mutual information, provided that the operation in question is independent of the verification. To show this, suppose that for a given forecast f we attempt to construct a new forecast, denoted L(f), with the goal of improving the skill over the original forecast f. If the operation L(·) is a function only of f, then the distribution of L(f), conditional on f and v, must be independent of v:
    p(L(f)|f, v) = p(L(f)|f).  (8)
It follows from this expression that the above variables form a Markov chain in the order
    v → f → L(f).  (9)
By the fundamental data processing theorem in information theory (Cover and Thomas 1991, chapter 2), the mutual information between the variables satisfies the inequality
    I(V; F) ≥ I(V; L(F)).  (10)
Hence, no manipulation of the forecast can enhance I(V;F). This property distinguishes I(V; F) from other skill metrics, such as mean square error, which often can be improved by biasing the forecast toward climatology. Since mutual information is invariant with respect to invertible, nonlinear transformations, the above proof implies that noninvertible transformations can only reduce mutual information. The above inequality has an intuitive interpretation in communication theory: It states that, if a message is sent through a noisy channel and received at the other end as an output, no manipulation of the output can increase the information about the message contained in the output.
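The data processing inequality (10) can be checked directly on a small discrete example (the joint pmf below is invented for illustration): merging forecast categories is a manipulation L(f), and it cannot increase the mutual information with the verification.

```python
import numpy as np

def mi(pxy):
    """Mutual information (nats) of a discrete joint pmf given as a 2D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

# Hypothetical joint pmf p(v, f): 2 verification states, 3 forecast states.
pvf = np.array([[0.30, 0.10, 0.05],
                [0.05, 0.20, 0.30]])

# A lossy manipulation L(f): merge forecast states 2 and 3 into one category.
pvL = np.column_stack([pvf[:, 0], pvf[:, 1] + pvf[:, 2]])

print("I(V; F)    =", mi(pvf))
print("I(V; L(F)) =", mi(pvL))   # never larger, by the data processing theorem
```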

Mutual information between forecast and verification also can be interpreted as a measure of skill, as suggested briefly by Leung and North (1990). The skill of a forecast can be measured in at least two distinct ways: by the “closeness” between forecast and verification, as measured by mean square error, or by the “temporal similarity” between forecast and verification, as measured by the correlation coefficient. Mutual information can be interpreted as a generalization of “similarity” measures since it is based on the fundamental probabilistic definition of independence and, hence, does not make implicit assumptions regarding the form of the relation between two variables. Furthermore, mutual information is invariant with respect to nonlinear transformations of the data, is invariant with respect to the roles of forecast and verification, cannot be improved by manipulating the forecast (provided the manipulation is independent of the verification), and vanishes if and only if the forecast is statistically independent of the verification. Also, owing to (5), forecasts with larger mutual information provide more information about the verification, which is a sensible measure of skill. Finally, in the case of bivariate normal distributions, mutual information is monotonically related to the correlation between forecast and verification. Therefore, mutual information reduces to a common measure of skill in suitable circumstances.

The above discussion implies that a forecast can contain large systematic errors and yet still have significant mutual information. In communication theory, we would say that the forecast is subject to distortion. Such distortion can be corrected if the functional relation between the forecast and verification can be inverted. This correction is implicitly included in the regression forecast distribution p(v|f). Adopting mutual information as a measure of skill would implicitly include this correction and hence eliminate the temptation to statistically correct forecasts for the purposes of improving skill. These comments should not be construed as suggesting that skill metrics, such as mean square error, are not useful. We are simply clarifying the fact that mutual information measures a different kind of skill than mean square error.

For continuous distributions, mutual information has no maximum value. Joe (1989) showed that the transformation [1 − exp(−2I)]^{1/2} produces a value in the interval [0, 1] and recovers the correlation, multiple correlation, and partial correlation in appropriate circumstances when the variables are normally distributed. Schneider and Griffies (1999) propose an analogous transformation for predictive information. For discrete distributions, mutual information is bounded above by the entropy of the verification H(V). Accordingly, in the discrete case, the ratio I(V; F)/H(V) is bounded between 0 and 1 and, hence, may provide an attractive skill score for discrete, probabilistic forecast verification. Joe (1989) discusses other normalizations of mutual information.
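These normalizations are easy to tabulate in the joint-normal case. The sketch below uses the standard closed form for bivariate-normal mutual information together with Joe's transformation (a textbook result, not specific to this paper):

```python
import math

# For joint-normal forecast and verification with correlation rho, the
# mutual information in nats is I = -0.5 * ln(1 - rho^2), and Joe's
# transformation [1 - exp(-2I)]^{1/2} maps I back onto |rho|, giving a
# skill score bounded between 0 and 1.
def gaussian_mi(rho):
    """Mutual information (nats) of a bivariate normal with correlation rho."""
    return -0.5 * math.log(1.0 - rho ** 2)

def joe_score(mi):
    """Joe's normalization of mutual information to the interval [0, 1]."""
    return math.sqrt(1.0 - math.exp(-2.0 * mi))

for r in (0.0, 0.5, 0.9, 0.99):
    i = gaussian_mi(r)
    print(f"rho={r:4.2f}  I={i:6.3f} nats  score={joe_score(i):4.2f}")
# The normalized score stays in [0, 1] while I itself is unbounded as rho -> 1.
```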

Mutual information can be interpreted not only as a measure of the dependence between variables, but also as a measure of the reduction of uncertainty when one variable becomes known. The latter interpretation follows from the identity
    I(V; F) = H(V) − H(V|F),  (11)
where H(V) is the entropy of the climatological distribution p(v) and H(V|F) is the conditional entropy of V given F (Cover and Thomas 1991, chapter 2). According to this identity, positive skill I(V; F) implies H(V) > H(V|F), implying that a skillful forecast reduces the average uncertainty of an event relative to the climatological distribution. This relation links the concepts of degree of dependence (skill) and reduction in uncertainty (predictability). Inequality (7) and identity (11) imply H(V|F) ≥ H(V|O), which states that the forecast cannot reduce uncertainty more than observations and a perfect forecast model. The above results can be extended to show that conditioning never increases the average uncertainty. It follows that a forecast based on all available knowledge should have less uncertainty than a forecast based on partial knowledge.
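Identity (11) can be verified directly for a small discrete joint distribution (the pmf below is invented for illustration):

```python
import numpy as np

# Check of identity (11), I(V; F) = H(V) - H(V|F), for a hypothetical
# discrete joint pmf p(v, f) over two verification and two forecast states.
pvf = np.array([[0.25, 0.15],
                [0.10, 0.50]])

pv = pvf.sum(axis=1)           # climatological pmf p(v)
pf = pvf.sum(axis=0)           # forecast pmf p(f)

H_V = -(pv * np.log(pv)).sum()                      # entropy H(V)
H_V_given_F = -(pvf * np.log(pvf / pf)).sum()       # conditional entropy H(V|F)
I = (pvf * np.log(pvf / np.outer(pv, pf))).sum()    # mutual information I(V; F)

print("H(V) =", H_V, " H(V|F) =", H_V_given_F, " I(V;F) =", I)
assert np.isclose(I, H_V - H_V_given_F)             # identity (11)
assert H_V_given_F <= H_V + 1e-12                   # skill reduces uncertainty
```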

5. Predictable components of an accessible forecast

An unpredictable component of a forecast is a random variable Fu that satisfies
    I(V; Fu) = 0.  (12)
If a forecast variable does not satisfy (12), then it is called a potential predictable component, denoted Fp. The word potential is used to indicate that these components are predictable in the accessible forecast but not necessarily in the true system, though this term often will be dropped in the sequel because predictable components of other forecasts will not be considered. In this section, we show that, under certain plausible assumptions, potential predictable components, and these components alone, can be used as predictors of a regression forecast. This result has important implications if the potential predictable components span a space of lower dimension than that of the full system.

The proof given below holds even if the variables are not joint normally distributed. In practice, however, the normal assumption is needed to identify predictable components. For instance, if the variables are joint normally distributed, then canonical correlation analysis (CCA) can identify the unpredictable components (DelSole 2004b). This procedure is equivalent to predictable component analysis proposed by Schneider and Griffies (1999), provided the distributions are joint normal (DelSole and Chang 2003). Even if variables are not normally distributed, CCA still might be a useful method of finding potential predictable components because it identifies components with large correlation. Whether more general methods of finding predictable components are needed for realistic systems is a question that only experiment can settle.

Suppose the forecast variables can be split into two groups: those with vanishing mutual information, Zu = {fu(1), fu(2), . . .}, called potential unpredictable components, and everything else, Zp = {fp(1), fp(2), . . .}, called potential predictable components. The unpredictable components are identified with weather noise. As such, it is plausible to assume that the unpredictable components are independent of observations jointly:
p(zu, o) = p(zu) p(o).  (13)
We make the stronger, yet still plausible, assumption that weather noise in the accessible forecast is independent of observations, verification, and predictable components:
p(zu, zp, v, o) = p(zu) p(zp, v, o).  (14)
This assumption holds automatically if weather noise is parameterized as independent, additive noise, as is usually the case in predictability studies. Note that the above assumption implies that Zu is independent of any combination of o, v, Zp.
Assumption (14) implies that the regression forecast distribution can be expressed as
p(v|f) = p(v|zp, zu) = p(v|zp).  (15)
It follows immediately from this identity that
I(V; F) = I(V; Zp).  (16)

The importance of the above identities, (15) and (16), is that the dimension of Zp may be much smaller than the dimension of the full system, especially in the context of monthly or seasonal predictability. Furthermore, the predictable components of an accessible forecast model can be determined with more accuracy than the predictable components of the observed system, owing to the availability of multiple realizations of the accessible forecast. Finally, inequality (6) implies I(Zp; O) ≥ I(V; Zp): The predictability of the potential predictable components is never less than the predictability of the regression forecast distribution. For these reasons, predictable components may provide an attractive basis set for reducing the dimension of the predictability analysis.

6. Regression forecasts for Gaussian, Markov systems

In this section, we illustrate the above concepts in the context of stationary, Gaussian, Markov systems. Before doing this, it is instructive to write expressions for the above quantities for joint normally distributed variables. If v and f are joint normally distributed, then it is well known that the conditional distribution p(v|f) is
p(v|f) = N(μυ + Σvf Σf⁻¹ (f − μf), Συ − Σvf Σf⁻¹ Σfv),  (17)
where μυ and Συ are the mean and covariance matrix of the marginal distribution p(v), μf and Σf are the analogous quantities for p(f), and Σvf = Σfvᵀ is the cross-covariance matrix between v and f [see Johnson and Wichern (1982), p. 170 for a derivation]. Readers familiar with statistical methods will recognize this result as equivalent to a least squares linear prediction of v, given f, for asymptotically large sample size. Although least squares estimation would be an obvious way of correcting a forecast, it arises here not because we are trying to minimize forecast error variance directly, but because, as is well known, it is equivalent to the conditional distribution p(v|f) for Gaussian variables.
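The equivalence between (17) and least squares regression is easy to confirm with synthetic joint-normal data; the regression matrix B and noise level below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic joint-normal sample: v depends linearly on f plus independent noise
f = rng.standard_normal((n, 2))
B = np.array([[0.7, 0.2],
              [-0.3, 0.5]])                         # "true" regression matrix
v = f @ B.T + 0.4 * rng.standard_normal((n, 2))

Sig_f = np.cov(f.T)                                 # covariance of f
Sig_vf = np.cov(v.T, f.T)[:2, 2:]                   # cross-covariance block

# Conditional-mean (regression) operator: Sig_vf Sig_f^{-1}, as in (17)
B_hat = Sig_vf @ np.linalg.inv(Sig_f)
print(np.round(B_hat, 2))                           # close to B
```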
The above considerations assume the availability of a single forecast. Now consider an ensemble of forecasts drawn randomly from p(f|o). Let the forecast ensemble be f1, f2, . . . , fM. The conditional distribution of the verification given the forecast ensemble is p(v|f1, f2, . . . , fM). For Gaussian variables, the distribution p(v|f1, f2, . . . , fM) is identical to p(v|〈f 〉), where 〈f〉 is the sample ensemble mean forecast
〈f〉 = (f1 + f2 + . . . + fM)/M.  (18)
The equivalence between p(v|f1, f2, . . . , fM) and p(v|〈f〉) can be seen in several ways. Perhaps the simplest is to note that, owing to the Gaussian form, the distribution p(v|f1, f2, . . . , fM) can depend only linearly on the forecasts f1, f2, . . . , fM. Furthermore, since there is no basis for treating any one forecast from the same model differently from the others, the distribution p(v|f1, f2, . . . , fM) must be invariant with respect to an interchange of any two forecasts. The properties of invariance and linearity imply that the distribution p(v|f1, f2, . . . , fM) can depend only on the sum over M vectors f1 + f2 + . . . + fM, which is proportional to the sample ensemble mean 〈f〉. From the joint normal distribution assumption, it follows that the conditional distribution p(v|〈f〉) is
p(v|〈f〉) = N(μυ + Συf Σf⁻¹ (〈f〉 − μf), Συ − Συf Σf⁻¹ Σfυ),  (19)
which has the same form as (17), but with mean μf and covariance matrices Συf and Σf pertaining to the sample ensemble mean forecast 〈f〉.
Since the regression forecast distribution p(v|f1, f2, . . . , fM) depends only on the sample mean forecast 〈f〉, it might appear that the regression forecast distribution is independent of the forecast spread. This is not the case, as we will now demonstrate. The population mean forecast, often called the signal, is a random variable given by
μs = E[f|o].  (20)
Note that E[f|o] is a random function since it depends on o. It is routine to show that
Σf = Σs + Σn,  (21)
where
Σs = E[(E[f|o] − μf)(E[f|o] − μf)ᵀ],  Σn = E[(f − E[f|o])(f − E[f|o])ᵀ],  (22)
in which E[·] with no conditioning represents the expectation over p(v, f1, f2, . . . , fM, o). The term Σs measures the variance of the “signal,” while the term Σn measures the variance of “forecast spread” or “noise.” Elementary sampling theory shows that
Σf = Σs + Σn/M,  Συf = E[(v − μυ)(E[f|o] − μf)ᵀ],  (23)
where Σf is the covariance matrix of 〈f〉, and Συf is the covariance matrix between v and 〈f〉. These expressions show that Σf depends on the spread of the forecast Σn. Since Σs and Σn are positive definite, increasing Σn increases the variance of 〈f〉 but does not alter the covariance Συf. Thus, appearances to the contrary, the distribution p(v|〈f〉) depends on forecast spread in the following sense: given two forecasts with the same signal but different ensemble spreads, the forecast with larger spread gives rise to a regression forecast distribution with larger uncertainty. The variation of the regression forecast distribution depends on the sample only through the sample ensemble mean.
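The role of spread is transparent in a scalar sketch: if each member is a common signal plus independent noise, fm = s + nm, then var〈f〉 = σs² + σn²/M, while the covariance with the verification involves only the signal. A Monte Carlo check (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, trials = 8, 100_000
sig_s, sig_n = 1.0, 0.5                  # signal and spread standard deviations

s = sig_s * rng.standard_normal(trials)               # one signal per forecast case
noise = sig_n * rng.standard_normal((trials, M))      # M independent members
ens = s[:, None] + noise                              # f_m = s + n_m
fbar = ens.mean(axis=1)                               # ensemble mean <f>

print(fbar.var(), sig_s**2 + sig_n**2 / M)            # both near 1.03125
```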
It is instructive to consider the case of a perfect accessible forecast. In a perfect model scenario, p(v|o) = p(f|o), which implies that
Συ = Σs + Σn,  Σf = Σs + Σn/M,  Συf = Σs.  (24)
The last relation arises because the forecast and verification can be represented each as a sum of a common signal plus independent noise, and all cross-covariances involving the noise terms vanish. Substituting these expressions into (19) gives a regression forecast distribution p(v|〈f〉) that is multivariate Gaussian with mean and covariance matrix
μPM = μυ + Σs (Σs + Σn/M)⁻¹ (〈f〉 − μf),  ΣPM = Συ − Σs (Σs + Σn/M)⁻¹ Σs,  (25)
where “PM” indicates perfect model. In the limit M → ∞, the regression forecast distribution approaches N(E[f|o], Σn), which is the (correct) perfect model distribution p(v|o). For finite ensemble size M, however, the conditional distribution p(v|〈f〉) differs from the perfect model forecast distribution p(v|o), even for a perfect model scenario, reflecting the fact that the forecast distribution has not been adequately sampled.
Now we consider the evolution of predictability in stationary, Gaussian, Markov systems. As discussed in Part I, the state xt of such a system can be interpreted as a solution of a linear stochastic model of the form
dxt/dt = 𝗔xt + w,  (26)
where 𝗔 is a stable dynamical operator and w is a Gaussian white noise process with zero mean and covariance matrix 𝗤. The properties of this system have been discussed extensively in the literature (Gardiner 1990; DelSole 2004b, and references therein). The main facts of relevance in this paper are the following. If the solution to (26) was begun in the infinite past, then xt is stationary. If the initial condition is drawn randomly from the stationary distribution p(xt), then the marginal distributions for the initial condition i = xt and verification v = xt+τ are equal, independent of time, and normally distributed with zero mean and constant covariance matrix Συ. Thus
p(i) = p(v) = N(0, Συ).  (27)
The solution to (26) with initial condition i can be written as the sum of two terms,
v = 𝗣i + eυ,  (28)
where 𝗣 = exp(𝗔τ) is the propagator of the system and eυ is Gaussian white noise with distribution
eυ ∼ N(0, Συ − 𝗣Συ𝗣ᵀ).  (29)
The random variables i and eυ are independent. It follows from the above two equations that the conditional distribution of v, given i, is
p(v|i) = N(𝗣i, Συ − 𝗣Συ𝗣ᵀ).  (30)
The predictability of this system, as measured by relative entropy, predictive information, and mutual information, has been discussed in Part I and need not be reproduced here.
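The conditional covariance in (30) can be checked numerically. Under the standard formulation dx/dt = 𝗔x + w with noise covariance 𝗤, the stationary covariance Συ solves the Lyapunov equation 𝗔Συ + Συ𝗔ᵀ = −𝗤, and Συ − 𝗣Συ𝗣ᵀ equals the accumulated-noise integral ∫₀^τ exp(𝗔s) 𝗤 exp(𝗔ᵀs) ds. A sketch with an arbitrary stable operator (the matrix below is illustrative):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-0.2, 5.0],       # an arbitrary stable operator (eigenvalues -0.2, -1)
              [0.0, -1.0]])
Q = np.eye(2)                    # white-noise covariance
tau = 1.0

Sig_v = solve_continuous_lyapunov(A, -Q)   # A S + S A^T = -Q: stationary covariance
P = expm(A * tau)                          # propagator over lead time tau
Sig_cond = Sig_v - P @ Sig_v @ P.T         # conditional covariance of v given i

# Cross-check against the finite-time noise integral by trapezoidal quadrature
s = np.linspace(0.0, tau, 2001)
Es = [expm(A * t) for t in s]
vals = np.array([E @ Q @ E.T for E in Es])
integ = (vals[1:] + vals[:-1]).sum(axis=0) * (s[1] - s[0]) / 2.0
print(np.allclose(Sig_cond, integ, atol=1e-4))   # True
```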
We attempt to forecast v given o. In this attempt, it would be unrealistic to assume that (26) is perfectly known. Thus, a forecast based on a stochastic model,
dxt/dt = 𝗔f xt + wf,  (31)
is attempted, where 𝗔f differs from 𝗔 in (26), and wf is Gaussian white noise with statistics possibly different from those of w in (26). The propagator of the forecast system is 𝗣f = exp(𝗔fτ), and the covariance matrix of the asymptotic forecast is Σf. To this model corresponds an analysis p(if|o), which represents the distribution of the initial condition appropriate for the forecast model. Note that p(if|o) is not equal to p(i|o): the analysis for the perfect model does not equal the analysis for the accessible forecast model. A forecast by the accessible forecast model starting from if satisfies the equation
f = 𝗣f if + ef,  (32)
where ef is a Gaussian random process with distribution
ef ∼ N(0, Σf − 𝗣f Σf 𝗣fᵀ).  (33)
The variables i and ef are independent. Physically, this independence follows from the fact that i is a realization from the true system (26) whereas ef represents the internal noise of the forecast. We could allow ef to have nonzero mean, in which case the forecast f would be biased, but this situation represents only a trivial extension of the unbiased case. It thus follows from (32) and (33) that
p(f|if) = N(𝗣f if, Σf − 𝗣f Σf 𝗣fᵀ).  (34)
Further progress requires clarifying the relation between i, if , and o. Since i and if arise from an analysis procedure, they depend only on the antecedent observations and forecast models. In particular, i and if are conditionally independent given the observations:
p(i, if|o) = p(i|o) p(if|o).  (35)
We consider the case in which the initial condition errors are small. Thus, for simplicity, we assume i = if , in which case the triplet (eυ, ef , i) forms a mutually independent set. Since all three variables are independent and normally distributed, any linear combination of the triplet is joint normally distributed. In particular, the pair (eυ + 𝗣i) and (ef + 𝗣f i) are joint normally distributed, from which it follows that v and f are joint normally distributed. Thus, the regression forecast distribution can be written immediately as (17), where it remains to determine the covariance matrices (the means are assumed to vanish).
From (34), the marginal distribution p(f) is Gaussian with zero mean and covariance
Σff = 𝗣f Συ 𝗣fᵀ + Σf − 𝗣f Σf 𝗣fᵀ,  (36)
where we have used the fact that i and ef are independent. In general, Σff can depend on lead time. This dependence arises whenever the initial condition for the forecast model is drawn from a distribution that differs from the marginal distribution of the forecast. If Σf and Συ are equal, implying that the climatologies of the forecast and true system coincide, then the covariance matrix Σff is independent of τ. The cross-covariance Συf is
Συf = 𝗣 Συ 𝗣fᵀ.  (37)
All quantities in (17) have now been specified. This completes the task of finding the regression forecast distribution p(v|f) for a stationary, Gaussian, Markov process.
Now consider the more general case of an ensemble of forecasts. An ensemble of forecasts can be interpreted as a set of independent vectors f1, f2, . . . , fM drawn randomly from the accessible forecast distribution p(f|i). As discussed in section 3, the appropriate regression forecast distribution is p(v|f1, f2, . . . , fM), which in the case of Gaussian distributions is identical to p(v|〈f〉), where 〈f〉 is the ensemble mean forecast
〈f〉 = (f1 + f2 + . . . + fM)/M.  (38)
To derive an expression for p(v|〈f〉), we may follow precisely the same procedure used to derive p(v|f), but with the new variable
〈ef〉 = (ef(1) + ef(2) + . . . + ef(M))/M,  so that 〈f〉 = 𝗣f i + 〈ef〉,  (39)
with distribution
〈ef〉 ∼ N(0, (Σf − 𝗣f Σf 𝗣fᵀ)/M).  (40)
The resulting regression forecast distribution for the ensemble is
p(v|〈f〉) = N(Συ〈f〉 Σ〈f〉⁻¹ 〈f〉, Συ − Συ〈f〉 Σ〈f〉⁻¹ Σ〈f〉υ).  (41)
Standard sampling theory gives
Σ〈f〉 = 𝗣f Συ 𝗣fᵀ + (Σf − 𝗣f Σf 𝗣fᵀ)/M,  Συ〈f〉 = 𝗣 Συ 𝗣fᵀ.  (42)
We recover the covariances for a single realization f, (36) and (37), by substituting M = 1 into the above expression. It can be verified that, in the limit of infinite ensemble size M → ∞, the regression forecast distribution (41) asymptotically approaches the perfect model distribution (30), provided Σf and Σvf remain nonsingular.
The predictive information, mutual information, and relative entropy for the regression forecast distribution p(v|〈f〉) are obtained by substituting (41) into the appropriate expressions in Part I. The results are
P〈f〉 = I〈f〉 = ½ log(|Συ| / |Σr|),
R〈f〉 = ½ [log(|Συ| / |Σr|) + tr(Συ⁻¹ Σr) + μrᵀ Συ⁻¹ μr − K],  (43)
where
μr = Συ〈f〉 Σ〈f〉⁻¹ 〈f〉,  Σr = Συ − Συ〈f〉 Σ〈f〉⁻¹ Σ〈f〉υ,  (44)
and K is the dimension of the state vector.
These expressions are isomorphic to those obtained for the perfect model distributions and, hence, have properties similar to those associated with the true system (i.e., relative entropy depends on initial condition, all three quantities decay monotonically with lead time, etc.).

7. Numerical examples

We now give numerical examples to illustrate the above concepts. Consider first a two-dimensional system with noise covariance matrix 𝗤 = 𝗜 and dynamical operator
𝗔 = [ −1/5  β ; 0  −1 ],  (45)
where β is a tunable parameter. This model can be solved analytically by methods described in Gardiner (1990) and DelSole (2004b). Since 𝗔 is upper triangular, its eigenvalues are −1/5 and −1, regardless of β. The case β = 0 corresponds to a normal dynamical operator in prewhitened coordinates, which constitutes a lower bound on the predictability of all stochastic systems with the same eigenvalues (Tippett and Chang 2003). Suppose that the “truth” is identified with the stochastic model with β = 0, while the accessible forecast is identified with β = 5; in both cases the covariance matrix is assumed to be 𝗤 = 𝗜. This experiment may be termed a perfect initial condition scenario since uncertainty arises from stochastic forcing within the model and not from the initial condition. The mutual information between verification and initial condition in the true system is given by
I(v; i) = ½ log( |Συ| / |Συ − 𝗣Συ𝗣ᵀ| ),  (46)
and is shown as the dashed line in Fig. 2. The solid lines show, for different ensemble sizes, the mutual information between verification and accessible forecast, as evaluated from I〈f〉 in (43). First, note that the predictability of the regression forecast distribution is always less than or equal to the predictability of the system. This reflects the inequality (7), which states that the predictability of a regression forecast distribution is a lower bound on the predictability of the true system. Second, note that the gain in predictability due to doubling the ensemble size is modest after one time unit. This is not surprising given that the regression forecast distribution of joint normally distributed variables depends on the sample only through the ensemble mean, so extra ensemble members merely refine the sample mean. The predictability of the regression forecast distribution converges to the true predictability more rapidly as β in the forecast model approaches its true value.
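The dashed curve in Fig. 2 can be reproduced in a few lines: for a Gaussian system the mutual information is I(v; i) = ½ log(|Συ|/|Συ − 𝗣Συ𝗣ᵀ|), with Συ from the Lyapunov equation and 𝗣 = exp(𝗔τ). A sketch for the β = 0 truth (lead times are illustrative):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def mutual_info(A, Q, tau):
    """Gaussian mutual information between verification and initial condition."""
    S = solve_continuous_lyapunov(A, -Q)      # stationary covariance
    P = expm(A * tau)                         # propagator
    C = S - P @ S @ P.T                       # conditional covariance
    return 0.5 * np.log(np.linalg.det(S) / np.linalg.det(C))

Q = np.eye(2)
A_true = np.array([[-0.2, 0.0],               # Eq. (45) with beta = 0
                   [0.0, -1.0]])
taus = [0.5, 1.0, 2.0, 4.0]
I_vals = [mutual_info(A_true, Q, t) for t in taus]
print(I_vals)   # positive and monotonically decaying with lead time
```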

How well can mutual information be estimated from finite samples? To gain insight into this question, we numerically generated time series from the above stochastic models using a forward Euler stochastic scheme (Kloeden and Platen 1999, p. 305) with a time step of 0.01 time units. The verification and observation time series were constructed by first integrating the true stochastic model (i.e., β = 0) and then sampling this single realization 10 times, once every 16 time units. Since the slowest decaying eigenmode has an e-folding time of 5 time units, sampling every 16 time units ensures that each verification–initial condition pair is effectively independent of all other such pairs. Within each 16 time unit interval, the initial condition is identified with the starting point and the verification is the value of the time series τ time units later. Accessible forecasts were constructed by starting at each initial condition and integrating the stochastic model using β = 5, with random forcing that was independent of that used to generate the truth. Multiple ensemble members were generated by integrating from the same initial condition but with independent realizations of the random forcing. The result of this procedure is to produce 10 v–i pairs and 10 v–〈f〉 pairs. Estimates of I(v, i) and I(v, 〈f〉), denoted Ie(v, i) and Ie(v, 〈f〉), were obtained by replacing population covariance matrices with sample covariance matrices in (46) and (43), respectively. This procedure was repeated 500 times to estimate the distribution of Ie(v, i) and Ie(v, 〈f〉).
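The forward Euler (Euler–Maruyama) integration used to generate these time series can be sketched as follows; the step count, seed, and helper name are illustrative:

```python
import numpy as np

def euler_maruyama(A, x0, dt, nsteps, rng):
    """Integrate dx = A x dt + dW, cov(dW) = I dt, by the forward Euler scheme."""
    x = np.asarray(x0, dtype=float).copy()
    out = [x.copy()]
    for _ in range(nsteps):
        x = x + (A @ x) * dt + np.sqrt(dt) * rng.standard_normal(x.size)
        out.append(x.copy())
    return np.array(out)

rng = np.random.default_rng(2)
A = np.array([[-0.2, 0.0], [0.0, -1.0]])    # the beta = 0 "truth"
traj = euler_maruyama(A, [0.0, 0.0], dt=0.01, nsteps=1600, rng=rng)  # 16 time units
print(traj.shape)   # (1601, 2)
```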

Figure 3 reproduces the exact values of I(v, i) and I(v, 〈f〉) for this model for 16 ensemble members (dashed and solid lines, with no filled circles, respectively). The figure also shows the mean and the 10th and 90th percentiles, as error bars, of the sample estimates Ie(v, i) and Ie(v, 〈f〉) (dashed and solid lines, with error bars). First, we see that the sample estimates Ie(v, i) and Ie(v, 〈f〉) tend to be biased upward relative to their respective exact values I(v, i) and I(v, 〈f〉). The magnitude of this bias decreases as the sample size increases. Second, we see that Ie(v, i) tends to be larger than Ie(v, 〈f〉), even though the latter is computed from 16 ensemble members. Quantitatively, Ie(v, i) > Ie(v, 〈f〉) in over 75% of the results for time lags less than 3 time units. We have verified that this result holds even if the accessible forecast model is perfect (i.e., if the accessible forecast model has β = 0, but with random forcing that is independent of the truth), provided the number of forecast ensemble members is less than 10. These limited results suggest that, if the system is joint normally distributed, there is no compelling reason to utilize accessible forecasts, since higher (but equally biased) estimates of predictability can be obtained by estimating I(v, i) directly from the observed time series. Conceivably, prior knowledge of the system could be incorporated into the estimation procedure to improve the predictability estimates, but this was not explored.

Now consider the nonlinear dynamical model of Lorenz (1963) with parameter values σ = 10, β = 8/3, and ρ = 28, for which the model is chaotic. This model is distinguished from the previous model in that it is nonlinear and non-Gaussian. This model was integrated with a fourth-order Runge–Kutta scheme starting from a random point near the origin to construct a single time series of length 2600 time units. After computing this time series, the initial 1000 time units were discarded to avoid spinup effects, and independent random numbers from a Gaussian distribution with zero mean and unit variance were added to the time series. The resulting time series then was sampled every 16 time units to construct 100 initial conditions. The accessible forecasts were constructed by integrating the Lorenz model from each of the 100 initial conditions, but with ρ = 20, all other parameters held the same (our major conclusions below do not appear to depend on the parameter being perturbed). Additional initial conditions for generating ensemble members were constructed by adding new, independent random numbers to the original solution of the Lorenz model. The resulting 100 v–i pairs and 100 v–〈f〉 pairs are insufficient to estimate distributions along the lines of Kleeman (2002). Nevertheless, the sample size is typical in climate research.
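The deterministic integration is standard fourth-order Runge–Kutta; a minimal sketch in the classical chaotic regime (σ = 10, ρ = 28, β = 8/3), with an illustrative spinup length:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt = 0.01
x = np.array([0.1, 0.0, 0.0])        # a point near the origin
for _ in range(20_000):              # 200 time units of spinup (illustrative)
    x = rk4_step(lorenz, x, dt)
print(x)                             # a state on (or very near) the attractor
```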

Given the small sample size, we evaluated mutual information by (incorrectly) assuming a Gaussian form for the distributions so that Eqs. (43) and (46) can be used. Figure 4 shows Ie(v, i) (dashed) and Ie(v, f) (solid) estimated from the time series for one ensemble member. We see that Ie(v, f) exceeds Ie(v, i), in contrast to Fig. 3. This result does not contradict inequality (7), which pertains to the exact probability distributions, because here a Gaussian form is imposed for the distributions. The essential reason for this result is that the accessible forecasts track the verification better than a linear prediction based on the initial condition, presumably because the accessible forecast to some extent captures important nonlinearity. Consequently, the covariance between v and f remains much higher as τ increases than the covariance between v and i. Interestingly, adding additional ensemble members does not substantially improve estimates of mutual information. The reason for this is that each ensemble member remains relatively close to the other members over the time scales considered; that is, each f is close to 〈f〉. Thus, adding new ensemble members does not add much information about the prediction.

The potential predictable components were obtained by performing CCA between the 100 f–i pairs. The mutual information between v and the first two predictable components fp, as computed from I〈f〉 in (43), is shown in Fig. 4 as the line with circles. Although the curve shows that the predictability based on two predictable components is comparable to the (Gaussian) mutual information between v and i, this appears to be a coincidence. The important result is that the predictability based on potential predictable components underestimates the predictability of the regression forecast distribution by a factor of 2. This result contradicts our hypothesis that only a small number of potential predictable components can capture the full predictability. The example given here, however, is more appropriately compared with short time weather forecasts, rather than with climate forecasts. We suspect that our hypothesis is valid for large dimensional climate systems on long time scales.

8. Summary

This paper proposed a predictability theory framework that accounts for imperfect forecast models. The critical quantity in this framework is neither the perfect model distribution, which is unknown anyway, nor the accessible forecast distribution, whose state space and variability may differ from the true system, but rather the conditional distribution of the state given all accessible forecasts. This idea also was proposed in Schneider and Griffies (1999), although our interpretation appears to differ from theirs. We have called this distribution the regression forecast distribution because, in the case of normal distributions, it is equivalent to a linear regression of the verification given the accessible forecast. Theoretically, the regression forecast distribution is not the best possible prediction. The best prediction is the conditional distribution given all forecasts and all antecedent observations. However, this latter distribution is independent of the accessible forecasts, reflecting the fact that an imperfect forecast is irrelevant if a perfect forecast model is available. The usefulness of imperfect forecasts appears to lie in the fact that they capture nonlinear or nonstationary behavior, which is difficult to capture in low-dimensional, statistical models estimated from short historical records.

This paper showed that the average predictability of the regression forecast distribution, denoted I(V; F), satisfies certain fundamental inequalities. First, I(V; F) provides a rigorous lower bound to the average predictability of the true system. This bound clarifies an important role of accessible forecasts in predictability studies. Second, I(V; F) is bounded above by the average potential predictability of the forecast model, defined as the average predictability of the accessible forecast distribution relative to its own climatology. The potential predictability of an accessible forecast system does not constitute an upper bound on the true predictability, as is sometimes asserted in the literature. In fact, the potential predictability and the true predictability need not have any relation to each other. Rather, the potential predictability constitutes an upper limit to the average predictability of the regression forecast distribution.

The absence of perfect models has led some authors to suggest that all measures of predictability require some reference to accessible forecast models; that is, a true predictability does not exist. But defining predictability with respect to forecast models leads to multiple definitions of predictability, one for each model. Also, one should be careful not to equate existence with accessibility: just because we do not have access to something does not mean it does not exist. The framework proposed here presumes the existence of a true predictability, which is the distribution that a perfect model would produce given the initial condition distribution (which itself is constructed from a data assimilation procedure using the perfect model). The true predictability is an inaccessible property of the climate system and associated observations. Accessible forecasts provide lower bound estimates of true predictability; they are not needed to define predictability. The framework correctly implies that classical, deterministic models are perfectly predictable if both the initial condition and dynamical model are known perfectly, but not otherwise. The framework also accounts for the model dependence of uncertainty in the initial condition. True predictability can never be quantified definitively since at any given time only imperfect forecasts and finite observations exist, and there is no plausible way to eliminate the possibility that a better forecast model or better observations could lead to greater predictability.

This paper suggested that mutual information between accessible forecast and verification I(V; F) provides an attractive measure of forecast skill. This measure arises naturally in our framework as the average predictability of the regression forecast distribution. It also measures the degree of dependence between two sets of variables, a property more fundamentally related to predictability than mean square error, which requires, for instance, that the forecast and verification be represented in the same state space. Furthermore, this measure is invariant with respect to nonlinear transformations of the data, is invariant with respect to the role of forecast and verification, cannot be improved by manipulating the forecast (provided the manipulation is independent of verification), and vanishes if and only if the forecast is statistically independent of the verification. In the case of bivariate normal distributions, mutual information reduces to familiar measures of skill based on the correlation between forecast and verification.

This paper showed that, under certain plausible assumptions, potential predictable components, and these components alone, completely describe the variability of regression forecasts. Potential predictable components therefore provide a basis for reducing the dimensionality of the predictability problem without loss of generality, provided they can be identified. If the forecast and observations are joint normally distributed, then potential predictable components can be obtained by canonical correlation analysis. In non-Gaussian cases, CCA may still provide a useful method of finding predictable components since it optimizes the correlation coefficient.

The predictability of regression forecast distributions for stationary, Gaussian, Markov systems was examined. The distribution of all relevant random variables was given explicitly. If ensemble forecasts are available, the regression forecast distribution varies only with the sample ensemble mean forecast, while the ensemble spread influences the predictability of the regression forecast distribution.

Simple numerical experiments were conducted to illustrate the above concepts and to gain insight into the usefulness of regression forecast distributions. In these experiments, the truth was identified with a single realization from a chosen model, while the forecast was generated by a model from the same class but with different parameter values. The exact predictability of a regression forecast distribution of a two-dimensional, Gaussian, Markov model was computed for various ensemble sizes. The results revealed that relatively small increases in predictability were to be gained with increasing ensemble size. This conclusion is not surprising since, for joint normal distributions, the regression forecast distribution depends on the forecast ensemble only through the ensemble mean, and hence the “extra” ensemble members merely “sharpen” the estimate of the ensemble mean. Sample estimates of the predictability of these stochastic systems, derived from numerical realizations, were biased upward relative to their true values. This bias, which is a manifestation of artificial skill that occurs in statistical prediction, represents a significant problem in the estimation of predictability from finite samples. In most cases, the true predictability estimated from finite realizations tended to be larger than the predictability of regression forecast distributions, even for large ensemble sizes. Further investigation of linear stochastic models in different parameter regimes suggests that, in the absence of prior information, imperfect forecast systems are not always useful in joint normally distributed systems since greater predictability often can be obtained directly from data. By contrast, the Lorenz (1963) model revealed the opposite behavior: Gaussian approximated regression forecasts generally have more predictability than Gaussian approximated perfect model distributions.
This difference was attributed to the nonlinear dynamics in the Lorenz model, which could be captured by an accessible forecast model from the same model class but with slightly incorrect parameter values, but not by a joint normal distribution, which essentially assumes a linear relation between verification and initial condition. Since the historical record is not adequate for developing anything more than simple linear regression laws, we concluded that the present usefulness of imperfect forecast models lies in the extent to which they capture relevant nonlinear dynamics and/or nonstationary behavior, or facilitate the identification of potential predictable components. These experiments on low-dimensional systems did not support the hypothesis that a truncated set of potential predictable components can capture most of the predictability. Whether this finding holds in more realistic, large-dimensional climate models on long time scales can only be settled by experiment.

The problem of estimating the predictability of systems from finite samples will be addressed in more detail in Part III.

Acknowledgments

I am very much indebted to J. Shukla, Tapio Schneider, Michael Tippett, and Ben Kirtman for instructive discussions. Comments from Tapio Schneider, acting as reviewer, and two other reviewers also led to improvements in this paper. Discussions with Lenny Smith also led to helpful clarifications. This research was supported by the NSF (ATM9814295), NOAA (NA96-GP0056), and NASA (NAG5-8202).

REFERENCES

  • Cover, T. M., and J. A. Thomas, 1991: Elements of Information Theory. Wiley, 576 pp.

  • DelSole, T., 2004a: Predictability and information theory. Part I: Measure of predictability. J. Atmos. Sci., 61, 2425–2440.

  • DelSole, T., 2004b: Stochastic models of quasigeostrophic turbulence. Surv. Geophys., 25, 107–149.

  • DelSole, T., and P. Chang, 2003: Predictable component analysis, canonical correlation analysis, and autoregressive models. J. Atmos. Sci., 60, 409–416.

  • Gardiner, C. W., 1990: Handbook of Stochastic Methods. 2d ed. Springer-Verlag, 442 pp.

  • Joe, H., 1989: Relative entropy measures of multivariate dependence. J. Amer. Stat. Assoc., 84, 157–164.

  • Johnson, R. A., and D. W. Wichern, 1982: Applied Multivariate Statistical Analysis. Prentice-Hall, 594 pp.

  • Kleeman, R., 2002: Measuring dynamical prediction utility using relative entropy. J. Atmos. Sci., 59, 2057–2072.

  • Kloeden, P. E., and E. Platen, 1999: Numerical Solution of Stochastic Differential Equations. Springer, 636 pp.

  • Leung, L-Y., and G. R. North, 1990: Information theory and climate prediction. J. Climate, 3, 5–14.

  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.

  • Lorenz, E. N., 1969: Atmospheric predictability as revealed by naturally occurring analogues. J. Atmos. Sci., 26, 636–646.

  • Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293.

  • Schneider, T., and S. M. Griffies, 1999: A conceptual framework for predictability studies. J. Climate, 12, 3133–3155.

  • Tippett, M. K., and P. Chang, 2003: Some theoretical considerations on predictability of linear stochastic dynamics. Tellus, 55A, 148–157.

  • von Storch, H., and F. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp.

Fig. 1.

Schematic illustrating the relation between different distributions and the associated concepts. A distribution is represented by a point and the “distance” between the points indicates the degree of difference between the distributions. Each line segment is labeled to indicate its meaning. The dashed lines join distributions that usually are represented in different state spaces and therefore have undefined distances.

Citation: Journal of the Atmospheric Sciences 62, 9; 10.1175/JAS3522.1

Fig. 2.

Predictability of the two-variable stochastic model (26) with dynamical operator (45) and β = 0 (dashed), and of the regression forecast distribution with β = 5 for ensemble sizes 1, 2, 4, 8, 16, 32, 64, and 128 (solid lines, from bottom up).


Fig. 3.

Predictability of the two-variable stochastic model (26) with β = 0 (dashed, without error bars or circles) and of the associated regression forecast distribution with β = 5 and 16 ensemble members (solid, without error bars or circles), as in Fig. 2. Also shown are the predictability estimated from 10 independent samples of the verification and initial condition (dashed, with error bars and circles) and that of the associated regression forecast distribution (solid, with error bars and circles), both based on a Gaussian assumption for the distributions. The error bars show the 10th and 90th percentiles.


Fig. 4.

Predictability of the Lorenz (1963) model estimated from 100 independent initial condition and verification pairs, assuming a Gaussian form for the distribution (dashed). The accessible forecast model is drawn from the same model class but with the parameter β adjusted from 28 to 20. The predictability of the regression forecast distribution based on one ensemble member and a Gaussian form for the distribution is shown as the solid line. The predictability based on only the two leading potential predictable components of the accessible forecast model is shown as the solid line with circles.

