Search Results

Showing 1–4 of 4 items for:

  • Author or Editor: Michael K. Tippett
  • Journal of the Atmospheric Sciences
Timothy DelSole and Michael K. Tippett

Abstract

This paper shows that if a measure of predictability is invariant to affine transformations and monotonically related to forecast uncertainty, then the component that maximizes this measure for normally distributed variables is independent of the detailed form of the measure. This result explains why different measures of predictability such as anomaly correlation, signal-to-noise ratio, predictive information, and the Mahalanobis error are each maximized by the same components. These components can be determined by applying principal component analysis to a transformed forecast ensemble, a procedure called predictable component analysis (PrCA). The resulting vectors define a complete set of components that can be ordered such that the first maximizes predictability, the second maximizes predictability subject to being uncorrelated with the first, and so on. The transformation in question, called the whitening transformation, can be interpreted as changing the norm in principal component analysis. The resulting norm renders noise variance analysis equivalent to signal variance analysis, whereas these two analyses lead to inconsistent results if other norms are chosen to define variance. Predictable components also can be determined by applying singular value decomposition to a whitened propagator in linear models. The whitening transformation is tantamount to changing the initial and final norms in the singular vector calculation. The norm for measuring forecast uncertainty has not appeared in prior predictability studies. Nevertheless, the norms that emerge from this framework have several attractive properties that make their use compelling. This framework generalizes singular vector methods to models with both stochastic forcing and initial condition error. These and other components of interest to predictability are illustrated with an empirical model for sea surface temperature.
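The whitening-plus-PCA procedure described in this abstract can be sketched numerically. The following is a minimal illustration on synthetic data, not the paper's implementation: the dimensions, the rank-1 signal, and the noise level are all arbitrary choices made here. The total covariance is used for whitening, after which an eigenanalysis of the (whitened) noise covariance orders components by predictability.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic forecast ensemble: nx state variables, nt cases, ne members
nx, nt, ne = 5, 500, 20
signal = rng.standard_normal((nt, 1)) @ rng.standard_normal((1, nx))  # rank-1 predictable signal
ens = signal[:, None, :] + 0.5 * rng.standard_normal((nt, ne, nx))    # plus unpredictable spread

# total (climatological) covariance from all members
X = ens.reshape(nt * ne, nx)
X = X - X.mean(axis=0)
C_total = X.T @ X / (nt * ne - 1)

# noise covariance: average within-ensemble (forecast spread) covariance
dev = ens - ens.mean(axis=1, keepdims=True)
C_noise = np.einsum('tei,tej->ij', dev, dev) / (nt * (ne - 1))

# whitening transformation W = C_total^{-1/2}
evals, evecs = np.linalg.eigh(C_total)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

# PCA of the whitened noise covariance; smallest noise fraction = most predictable
lam, q = np.linalg.eigh(W @ C_noise @ W)
patterns = np.linalg.inv(W) @ q   # components in original space, ordered by increasing noise fraction
```

Because the whitened total variance is the identity, ranking components by minimum noise variance and by maximum signal variance gives the same ordering, which is the equivalence the abstract describes.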

Timothy DelSole and Michael K. Tippett

Abstract

This paper introduces the average predictability time (APT) for characterizing the overall predictability of a system. APT is the integral of a predictability measure over all lead times. The underlying predictability measure is based on the Mahalanobis metric, which is invariant to linear transformation of the prediction variables and hence gives results that are independent of the (arbitrary) basis set used to represent the state. The APT is superior to some integral time scales used to characterize the time scale of a random process because the latter vanishes in situations where it should not, whereas the APT converges to reasonable values. The APT also can be written in terms of the power spectrum, thereby clarifying the connection between predictability and the power spectrum. In essence, predictability is related to the width of spectral peaks, with strong, narrow peaks associated with high predictability and nearly flat spectra associated with low predictability. Closed form expressions for the APT for linear stochastic models are derived. For a given dynamical operator, the stochastic forcing that minimizes APT is one that allows transformation of the original stochastic model into a set of uncoupled, independent stochastic models. Loosely speaking, coupling enhances predictability. A rigorous upper bound on the predictability of linear stochastic models is derived, which clarifies the connection between predictability at short and long lead times, as well as the choice of norm for measuring error growth. Surprisingly, APT can itself be interpreted as the "total variance" of an alternative stochastic model, which means that generalized stability theory and dynamical systems theory can be used to understand APT. The APT can be decomposed into an uncorrelated set of components that maximize predictability time, analogous to the way principal component analysis decomposes variance. Part II of this paper develops a practical method for performing this decomposition and applies it to meteorological data.
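The closed-form results mentioned above are easiest to see in a scalar special case. For a scalar AR(1) model x(t+1) = phi*x(t) + noise, the lead-tau forecast error variance is sigma^2*(1 - phi^(2*tau)), so the Mahalanobis-based predictability at lead tau reduces to phi^(2*tau) and the APT is a geometric series. The sketch below (a one-variable illustration only; the paper treats the general multivariate case) checks the summed measure against its closed form:

```python
import numpy as np

phi = 0.8                               # AR(1) coefficient (illustrative value)
leads = np.arange(1, 1000)              # lead times; tail beyond this is negligible
predictability = phi ** (2 * leads)     # Mahalanobis-based measure at each lead

apt_numeric = 2 * predictability.sum()  # APT: (twice the) integral over all leads
apt_closed = 2 * phi**2 / (1 - phi**2)  # geometric-series closed form
```

For phi = 0.8 both give APT of about 3.56 time steps; as phi approaches 1 (a sharper spectral peak) the APT diverges, matching the abstract's link between narrow peaks and high predictability.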

Timothy DelSole and Michael K. Tippett

Abstract

This paper proposes a new method for diagnosing predictability on multiple time scales without time averaging. The method finds components that maximize the average predictability time (APT) of a system, where APT is defined as the integral of the average predictability over all lead times. Basing the predictability measure on the Mahalanobis metric leads to a complete, uncorrelated set of components that can be ordered by their contribution to APT, analogous to the way principal components decompose variance. The components and associated APTs are invariant to nonsingular linear transformations, allowing variables with different units and natural variability to be considered in a single state vector without normalization. For prediction models derived from linear regression, maximizing APT is equivalent to maximizing the sum of squared multiple correlations between the component and the time-lagged state vector. The new method is used to diagnose predictability of 1000-hPa zonal velocity on time scales from 6 h to decades. The leading predictable component is dominated by a linear trend and presumably identifies a climate change signal. The next component is strongly correlated with ENSO indices and hence is identified with seasonal-to-interannual predictability. The third component is related to annular modes and exhibits decadal variability as well as a trend. The next few components have APTs exceeding 10 days. A reconstruction of the tropical zonal wind field based on the leading seven components reveals eastward propagation of anomalies with time scales consistent with the Madden–Julian oscillation. The remaining components have time scales less than a week and hence are identified with weather predictability. The detection of predictability on these time scales without time averaging is possible because predictability on different time scales is characterized by different spatial structures, which can be optimally extracted by suitable projections.
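The regression-based decomposition described above can be roughly sketched as an eigenproblem: after whitening by the lag-0 covariance, summing the squared lag-covariance matrices over leads and taking the leading eigenvector maximizes the sum of squared multiple correlations. The example below is a toy version on synthetic data (one persistent direction hidden among white noise); the series length, lag range, and AR coefficient are choices made here, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic multivariate series: one slow, predictable direction, rest white noise
nt, nx = 4000, 4
x = np.zeros((nt, nx))
for t in range(1, nt):
    x[t, 0] = 0.9 * x[t - 1, 0] + rng.standard_normal()  # persistent component
    x[t, 1:] = rng.standard_normal(nx - 1)               # unpredictable components

x = x - x.mean(axis=0)
C0 = x.T @ x / nt                        # lag-0 covariance

# whitening by C0^{-1/2}
w, v = np.linalg.eigh(C0)
W = v @ np.diag(w ** -0.5) @ v.T

# accumulate squared lag covariances in whitened coordinates
max_lag = 50
M = np.zeros((nx, nx))
for tau in range(1, max_lag + 1):
    Ct = x[:-tau].T @ x[tau:] / (nt - tau)   # lag-tau covariance
    G = W @ Ct @ W                           # whitened lag covariance
    M += G @ G.T

# leading eigenvector maximizes the summed squared multiple correlations (APT)
lam, q = np.linalg.eigh(M)
apt_values = 2 * lam[::-1]                   # components ordered by decreasing APT
leading_pattern = np.linalg.inv(W) @ q[:, -1]
```

On this toy series the leading pattern recovers the persistent direction, while the white-noise directions receive near-zero APT, mirroring the separation of slow and fast components in the abstract.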

Craig H. Bishop, Carolyn A. Reynolds, and Michael K. Tippett

Abstract

An exact closed form expression for the infinite time analysis and forecast error covariances of a Kalman filter is used to investigate how the locations of fixed observing platforms such as radiosonde stations affect global distributions of analysis and forecast error variance. The solution pertains to a system with no model error, time-independent nondefective unstable dynamics, time-independent observation operator, and time-independent observation error covariance. As far as the authors are aware, the solutions are new. It is shown that only nondecaying normal modes (eigenvectors of the dynamics operator) are required to represent the infinite time error covariance matrices. Consequently, once a complete set of nondecaying eigenvectors has been obtained, the solution allows for the rapid assessment of the error-reducing potential of any observational network that bounds error variance.
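A standard numerical route to the same infinite-time covariances (distinct from the paper's closed-form normal-mode solution) is to iterate the Kalman filter covariance recursion to its fixed point. The toy system below is entirely illustrative: a diagonal propagator with two growing modes and one decaying mode, a perfect model, and a fixed network observing the two growing variables.

```python
import numpy as np

# toy system: two growing modes and one decaying mode (illustrative setup)
M = np.diag([1.10, 1.05, 0.80])            # time-independent propagator
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])            # fixed observing network
R = 0.25 * np.eye(2)                       # observation error covariance

# iterate the Kalman filter covariance recursion to its fixed point
Pa = np.eye(3)
for _ in range(2000):
    Pf = M @ Pa @ M.T                      # forecast step (no model error)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    Pa = (np.eye(3) - K @ H) @ Pf          # analysis step

# Pf and Pa now approximate the infinite-time forecast/analysis covariances;
# the decaying mode carries no asymptotic error, consistent with the result
# that only nondecaying modes are needed to represent these matrices
```

In this scalar-decoupled case the analysis variance of a growing mode with eigenvalue m and observation error r converges to r*(1 - 1/m^2), while the stable mode's error vanishes.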

Atmospherically relevant time-independent basic states and their corresponding tangent linear propagators are obtained with the help of a (T21L3) quasigeostrophic global model. The closed form solution allows for an examination of the sensitivity of the error variances to many different observing configurations. It is also feasible to determine the optimal location of one additional observation given a fixed observing network, which, through repetition, can be used to build effective observing networks.
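The repeated "add the best next observation" strategy can be sketched as a greedy search. The example below is a simplified single-cycle stand-in for the paper's infinite-time criterion: a random background covariance over ten candidate sites (all values invented here), with each step assimilating the site that most reduces the total error variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# random background error covariance over 10 candidate sites (illustrative)
A = rng.standard_normal((10, 10))
P = A @ A.T + 0.1 * np.eye(10)
r = 0.5                                    # observation error variance per site

def assimilate(P, site, r):
    """Covariance update for a direct observation of one state variable."""
    h = np.zeros(P.shape[0])
    h[site] = 1.0
    k = P @ h / (h @ P @ h + r)            # Kalman gain for a scalar observation
    return P - np.outer(k, h @ P)

# greedily build a network: each step picks the site whose assimilation
# most reduces the total (trace) error variance
trace0 = np.trace(P)
chosen = []
for _ in range(3):
    scores = [np.trace(assimilate(P, s, r)) for s in range(10)]
    best = int(np.argmin(scores))
    chosen.append(best)
    P = assimilate(P, best, r)
```

Each greedy step is cheap because only a rank-1 covariance update is required, which is what makes repeated optimal-placement searches like the one described above feasible.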

Effective observing networks result in error variances several times smaller than other types of networks with the same number of column observations, such as equally spaced or land-based networks. The impact of the observing network configuration on global error variance is greater when the observing network is less dense. The impact of observations at different pressure levels is also examined. It is found that upper-level observations are more effective at reducing globally averaged error variance, but midlevel observations are more effective at reducing forecast error variance at and downstream of the baroclinic regions associated with midlatitude jets.
