Ensemble Oscillation Correction (EnOC): Leveraging Oscillatory Modes to Improve Forecasts of Chaotic Systems

Eviatar Bach (https://orcid.org/0000-0002-9725-0203),a,b,c Safa Mote,a,b V. Krishnamurthy,d A. Surjalal Sharma,e Michael Ghil,c,f and Eugenia Kalnaya,b

a Department of Atmospheric and Oceanic Science, University of Maryland, College Park, College Park, Maryland
b Institute for Physical Science and Technology, University of Maryland, College Park, College Park, Maryland
c Geosciences Department and Laboratoire de Météorologie Dynamique (CNRS and IPSL), École Normale Supérieure and PSL University, Paris, France
d Center for Ocean–Land–Atmosphere Studies, George Mason University, Fairfax, Virginia
e Department of Astronomy, University of Maryland, College Park, College Park, Maryland
f Department of Atmospheric and Oceanic Science, University of California, Los Angeles, Los Angeles, California

Open access

Abstract

Oscillatory modes of the climate system are among its most predictable features, especially at intraseasonal time scales. These oscillations can be predicted well with data-driven methods, often with better skill than dynamical models. However, since the oscillations only represent a portion of the total variance, a method for beneficially combining oscillation forecasts with dynamical forecasts of the full system was not previously known. We introduce Ensemble Oscillation Correction (EnOC), a general method to correct oscillatory modes in ensemble forecasts from dynamical models. We compute the ensemble mean—or the ensemble probability distribution—with only the best ensemble members, as determined by their discrepancy from a data-driven forecast of the oscillatory modes. We also present an alternate method that uses ensemble data assimilation to combine the oscillation forecasts with an ensemble of dynamical forecasts of the system (EnOC-DA). The oscillatory modes are extracted with a time series analysis method called multichannel singular spectrum analysis (M-SSA), and forecast using an analog method. We test these two methods using chaotic toy models with significant oscillatory components and show that they robustly reduce error compared to the uncorrected ensemble. We discuss the applications of this method to improve prediction of monsoons as well as other parts of the climate system. We also discuss possible extensions of the method to other data-driven forecasts, including machine learning.


© 2021 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Eviatar Bach, eviatarbach@protonmail.com; Safa Mote, ssm@umd.edu


1. Introduction and motivation

Weather predictability is inherently limited by chaotic error growth (Lorenz 1963, 1965). However, there are several physical processes that provide predictability for the atmosphere beyond the weather’s synoptic time scale. Charney and Shukla (1981) recognized the potential for long-range predictability from slowly varying boundary conditions, including sea surface temperatures (SSTs). This predictability is particularly relevant for the tropical atmosphere, where synoptic-scale instabilities are less prominent. Recently, Bach et al. (2019) quantified the predictability provided by SST to the atmosphere from data. Coupled atmosphere–ocean interactions are also important for predictability beyond the weather time scale (Penny et al. 2019).

On intraseasonal time scales, in addition to the above sources of predictability, there is potential predictability due to climate oscillations. Because of their near-periodicity and low frequency, oscillations are predictable beyond weather time scales. The oscillations that are thought to be important for predictability on intraseasonal time scales include the Madden–Julian oscillation (MJO), monsoon intraseasonal oscillations (MISOs), and extratropical oscillations (Ghil and Robertson 2002; Stan and Krishnamurthy 2019; Krishnamurthy 2019). El Niño–Southern Oscillation (ENSO) also possesses an oscillatory component that, at times when it is more prominent, makes ENSO more predictable (Ghil and Jiang 1998). Climate oscillations on longer time scales also present an opportunity for enhanced prediction; however, there is controversy about whether robust oscillatory modes—defined as occupying a narrow band of frequencies—exist in the climate system on decadal time scales (Ghil 2001; Mann et al. 2020).

Many studies have shown that oscillatory modes extracted from data can be effectively forecasted. We will refer to these forecasts, which rely purely on data or on data combined with a low-order model, as “data-driven” forecasts; they are also referred to as empirical or statistical forecasts in the literature. These data-driven oscillation forecasts often possess comparable or higher skill than dynamical forecasts of these oscillations. See the literature on data-driven prediction of extratropical oscillations (Keppenne and Ghil 1993; Strong et al. 1995; Vautard et al. 1996), the MJO (Kang and Kim 2010; Kondrashov et al. 2013; Chen et al. 2014), and MISOs (Krishnamurthy and Sharma 2017; Chen et al. 2018). As an example of the latter, Krishnamurthy and Sharma (2017) showed that the leading MISO mode can be predicted for up to about 80 days using a data-driven method, more skillfully than MISO forecasts obtained from state-of-the-art models. This demonstrates the potential for improved intraseasonal prediction of monsoon rainfall by better prediction of MISO.

However, these methods are limited to improving forecasts of the oscillation itself, not of the full signal. Although the oscillations can be predicted fairly well, the magnitude of this prediction will be small compared with that of the full field, because the oscillatory components make up only a fraction of its total variance (Strong et al. 1995; Mo 2001). Data-driven models are generally not competitive with dynamical models in predicting the part of the variance comprised of high-frequency daily weather at medium range, due to the high dimensionality of the system necessitating unfeasibly large amounts of training data (Van den Dool 1994; Palmer 2020; Rasp and Thuerey 2021).

Our aim, then, is to combine dynamical forecasts with data-driven oscillation forecasts, in order to improve the former. This idea was previously suggested by Vautard et al. (1992), Strong et al. (1995), and Ghil et al. (2004); as put by Strong et al. (1995), the idea is to form a “combination of […] an empirical model, to predict the smoothly varying part of the flow, and a dynamical model, to predict day-to-day ‘weather,’ superimposed on it” (p. 2628). However, to the best of our knowledge, such a method had not been developed until now.

Here, we present a general method, called ensemble oscillation correction (EnOC), for improving ensemble forecasting of a chaotic system’s full state by leveraging the predictability of oscillatory modes. EnOC selects ensemble members whose oscillation states are closest to a data-driven forecast of a given oscillatory mode. We also present an alternate method, ensemble oscillation correction with data assimilation (EnOC-DA), which treats oscillation forecasts as “observations” and uses data assimilation to combine them with a dynamical model forecast.

The rest of the paper is laid out as follows. Section 2 provides an overview of the existing methods to extract and forecast oscillations. Section 3 presents our novel EnOC and EnOC-DA methods. Section 4 describes the experiments applying EnOC and EnOC-DA to toy models, while section 5 discusses the results of these experiments. Finally, section 6 presents conclusions and future applications. The appendix contains a derivation of a theoretical error expression for EnOC and EnOC-DA.

2. Methods

We aim to use data-driven oscillation forecasts to improve prediction of systems that possess oscillatory modes. This requires three successive methods: 1) extraction of the oscillations from data, which we do here using multichannel singular spectrum analysis (section 2a), 2) mapping from a state in the full phase space to the corresponding state in the oscillations’ subspace (section 2b), and 3) forecasting in this subspace (section 2c).

a. Multichannel singular spectrum analysis

Multichannel singular spectrum analysis (M-SSA), also known as extended empirical orthogonal functions (extended EOFs; Weare and Nasstrom 1982), is a method used to extract spatiotemporal modes from multidimensional time series (Broomhead and King 1986; Plaut and Vautard 1994; Ghil et al. 2002). M-SSA helps separate such time series into a nonlinear trend, oscillatory modes, noise, and chaotic components (Ghil and Vautard 1991; Watari 1996). M-SSA has been applied in many climatic and other contexts; for instance, it has identified MISOs in Indian rainfall (Krishnamurthy and Shukla 2007; Moron et al. 2012) and been applied to Chinese rainfall (Wang et al. 1996), the MJO (Lau 2012), space weather (Sharma et al. 1993), and macroeconomic data (Groth and Ghil 2017), among others. Vautard et al. (1996) have demonstrated that the low-frequency modes extracted by M-SSA better correspond to the predictable modes of the atmosphere than those extracted by regular spatial EOFs.

We provide here a brief introduction to M-SSA; see Ghil et al. (2002), Alessio (2016, chapter 12), and Golyandina (2020) for a more detailed exposition. M-SSA is a form of principal component analysis (PCA), a widely used statistical method (Jolliffe 2002). In M-SSA, PCA is applied to sliding windows of length M along a time series in order to identify the orthogonal modes that capture the most variance in the time series. Here, we use the Broomhead–King variant of SSA, as opposed to the Toeplitz variant (see Ghil et al. 2002).

Consider a D-dimensional time series of length N, $\{x_d(n) \mid d = 1, \ldots, D;\ n = 1, \ldots, N\}$. After choosing a window length M based on the time scales of interest (see Vautard et al. 1992), we form the lagged vectors $\mathbf{x}_d(n) = [x_d(n), \ldots, x_d(n + M - 1)]$ for each dimension d, where $n = 1, \ldots, N - M + 1$. We combine these into the $(N - M + 1) \times DM$ trajectory matrix $\mathbf{X} = (\mathbf{X}_1, \ldots, \mathbf{X}_D)$, and form the covariance matrix $\mathbf{C} = \mathbf{X}^{\mathsf T}\mathbf{X}/(N - M + 1)$.

The eigendecomposition of $\mathbf{C}$ yields the eigenvalues $\{\lambda_k\}$ and eigenvectors $\{\mathbf{e}^k\}$ (Plaut and Vautard 1994; Groth and Ghil 2011). By analogy with the EOFs used in meteorology and oceanography (Preisendorfer and Mobley 1988), these eigenvectors are called space–time EOFs (ST-EOFs); see Ghil et al. (2002). The trace of the matrix $\mathbf{C}$ equals the total variance in the time series; it is also equal to the sum of the eigenvalues of $\mathbf{C}$, that is, $\mathrm{tr}(\mathbf{C}) = \sum_j \lambda_j$. Thus the ratio $\lambda_k/\mathrm{tr}(\mathbf{C})$ is the fraction of the total variance captured by mode k.
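As an illustration, the trajectory-matrix construction and eigendecomposition above can be sketched in a few lines of Python. This is a minimal sketch with our own variable names and conventions, not the authors' Julia implementation:

```python
import numpy as np

def mssa_eig(x, M):
    # Broomhead-King M-SSA on an (N, D) time series with window length M.
    # Returns the eigenvalues and ST-EOFs of the lag-covariance matrix,
    # sorted by decreasing captured variance.
    N, D = x.shape
    Np = N - M + 1
    # Trajectory matrix: M lagged copies per channel, shape (N - M + 1, D*M)
    X = np.hstack([np.column_stack([x[m:m + Np, d] for m in range(M)])
                   for d in range(D)])
    C = X.T @ X / Np                      # covariance matrix, (DM, DM)
    lam, E = np.linalg.eigh(C)            # eigh returns ascending order
    order = np.argsort(lam)[::-1]
    return lam[order], E[:, order]

# A noisy sine: the leading near-degenerate eigenpair captures the oscillation
t = np.arange(500)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * t / 40) + 0.1 * rng.standard_normal(500))[:, None]
lam, E = mssa_eig(x, M=60)
frac = lam[:2].sum() / lam.sum()          # variance fraction of the pair
```

In this synthetic example the leading pair captures nearly all of the variance, since the noise contributes only a small fraction of the total.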

Due to the orthogonality of sinusoids that are in quadrature, oscillatory modes appear as pairs of eigenvectors with nearly identical eigenvalues, generalizing the sine–cosine pair of Fourier analysis (Vautard and Ghil 1989; Ghil et al. 2002; Alessio 2016). Since M-SSA may falsely detect such oscillatory modes in noise, statistical significance tests have been developed to distinguish oscillations from colored noise (Allen and Smith 1996; Ghil et al. 2002; Groth and Ghil 2015). Groth and Ghil (2011) developed a varimax algorithm to reduce mixture effects between physically distinct modes and better isolate them.

After choosing a mode of interest k from the set of eigenmodes distinguished by M-SSA, the original time series is projected onto the eigenvector ek to obtain a principal component (PC). However, the PC combines the behavior of all the D variables and does not possess the same phase as the original time series. The reconstruction procedure then extracts the portion of the time series corresponding to mode k for each variable and with correct phase. The resulting time series are called the reconstructed components (RCs); see Ghil and Vautard (1991), Ghil et al. (2002), and Groth and Ghil (2011).

The reconstruction at time n for the single mode k and dimension d can be written as the following operation on the original time series:
$$r_d^k(n) = \frac{1}{M_n} \sum_{m=L_n}^{U_n} \left[\sum_{d'=1}^{D} \sum_{m'=1}^{M} x_{d'}(n - m + m')\, e_{d'}^{k}(m')\right] e_d^k(m); \qquad (1)$$
here $M_n = M$, $L_n = 1$, and $U_n = M$, except near the endpoints of the time series, for which the appropriate expressions are given in Vautard et al. (1992). Summing over all $\mathbf{r}^k$ yields the time series itself: $\mathbf{x}(n) = \sum_{k=1}^{DM} \mathbf{r}^k(n)$. Summing over a partial set $\mathcal{K} = \{k_1, \ldots, k_K\}$, with $K < DM$, yields a partial reconstruction; for example, if $\lambda_{k^*} \approx \lambda_{k^*+1}$ and the other statistical significance tests are satisfied, $\mathbf{r}^{\mathcal{K}}(n)$ is the reconstruction of an oscillatory pair, with $\mathcal{K} = \{k^*, k^*+1\}$ (Ghil et al. 2002). The RCs can be considered as the result of forward and reverse filtering of the original data (Harris and Yuan 2010).
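A direct, if inefficient, transcription of the reconstruction formula for the interior points, where $M_n = M$, might look as follows. The endpoint expressions of Vautard et al. (1992) are omitted, and the trajectory-matrix conventions are our own:

```python
import numpy as np

def rc_interior(x, M, ks):
    # Partial reconstruction over the mode set ks, at interior points only
    # (M_n = M, L_n = 1, U_n = M); endpoint values are left as zero.
    N, D = x.shape
    Np = N - M + 1
    X = np.hstack([np.column_stack([x[m:m + Np, d] for m in range(M)])
                   for d in range(D)])
    lam, E = np.linalg.eigh(X.T @ X / Np)
    E = E[:, np.argsort(lam)[::-1]]       # ST-EOFs, descending variance
    A = X @ E[:, ks]                      # principal components a^k(n)
    r = np.zeros((N, D))
    for j in range(len(ks)):
        ek = E[:, ks[j]].reshape(D, M)    # e_d^k(m)
        for n in range(M - 1, Np):        # interior times (0-based)
            for m in range(M):
                r[n] += A[n - m, j] * ek[:, m]
    return r / M

# For a pure sinusoid the leading pair carries all the variance, so the
# partial reconstruction recovers the series at interior points
t = np.arange(400)
x = np.sin(2 * np.pi * t / 40)[:, None]
r = rc_interior(x, M=60, ks=[0, 1])
```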

We refer to the D-dimensional Euclidean space $\mathcal{X} \subseteq \mathbb{R}^D$, with $\mathbf{x} \in \mathcal{X}$, as the full phase space. The DM-dimensional space $\mathbb{R}^{DM}$, in which the $\mathbf{x}_d(n)$ reside, is the embedding space. Any partial reconstruction with K components, $K < DM$, corresponds to a reduced subspace $\mathcal{X}_{\mathcal{K}}$ spanned by the K vectors $\{\mathbf{e}^k \mid k \in \mathcal{K}\}$, which we refer to as the reconstructed subspace. When the set $\mathcal{K}$ corresponds to one or several oscillatory pairs (i.e., $\mathcal{K} = \mathcal{O}$), we refer to such a subspace as the oscillation subspace.

It can be seen from Eq. (1) that the reconstructed time series at time n is computed using the time n in the original time series, as well as M − 1 points from the immediate future and M − 1 points from the immediate past, except near the endpoints where increasingly fewer points are available.

Having successively fewer points to average over makes the reconstruction less accurate near the endpoints. Methods for more accurate reconstruction near the endpoints have been developed, including the recent SSA with conditional predictions (Ogrosky et al. 2019; other methods for reconstruction near the endpoints are referenced therein).

Note that Eq. (1) is a linear transformation, which can be written as a (2M − 1)D × D matrix except near the endpoints. Since this operation is linear and rank-deficient, there is no unique sequence of (2M − 1) D-dimensional full phase space states corresponding to each D-dimensional reconstructed phase space state. However, the dynamics generating the time series will put additional constraints on possible sequences of full phase space states that are not captured by the linear reconstruction.

Because the reconstruction operation is not invertible, we cannot go directly from a point in the reconstructed phase space to the full space. Therefore, in order to use the forecast of the oscillatory modes to inform the forecast of the full state, we must do so indirectly. This is achieved using the EnOC and EnOC-DA algorithms described in section 3.

b. Projecting onto the oscillation subspace

For the EnOC algorithm that we introduce in section 3, we need a method to approximately map points from the full phase space to the reconstructed subspace; this projection is shown as the curved arrow from the full phase space to the oscillation subspace in Fig. 1.

Fig. 1.

Schematic diagram of projection to and forecasting in the oscillation subspace. We first project from the full phase space onto the oscillation subspace, as described in section 2b. Next, we forecast in the oscillation subspace, as described in section 2c. Note that for a pure sinusoidal oscillation—which would result, for example, from sampling a sinusoidal plane wave regularly in time at discrete points in space—the trajectories in the oscillation subspace will lie on an ellipse. We see that, in this example, the shape traced out by trajectories in the oscillation subspace deviates from an exact ellipse due to the oscillation’s nonlinearity.

Citation: Journal of Climate 34, 14; 10.1175/JCLI-D-20-0624.1

Assuming the time series to be produced by a deterministic, autonomous dynamical system, it can be seen that any point in the full phase space corresponds to a single point in the reconstructed space. This is because the operation which maps from the full space to the reconstructed space is a function of the present, as well as M − 1 past and M − 1 future points. Since trajectories cannot cross in phase space due to uniqueness, a full phase space state will always have the same future and past, and thus the same corresponding reconstructed subspace state. Thus, if we were able to integrate the system forward and backward in time for M − 1 steps in each direction, we could exactly map the full phase space to the reconstructed subspace. Since an exact mapping is often not feasible, we construct an approximate mapping instead. In particular, we need an approximation of the mapping that does not require future information, since this is not available in the real-time forecasting context.

One such method relies on using the endpoint version of Eq. (1), which makes use of the present and the immediate M − 1 past or future points (Vautard et al. 1992). This was used in mapping from the full to the reconstructed phase space in Lynch (2019). Ogrosky et al. (2019) provided a more accurate expression for the endpoints.

Alternatively, one can use an analog method, which we use in the present paper. This method requires only the system state at the current time. We assume that we have access to a historical time series $\{\mathbf{x}(t_i)\}$ of the system, as well as the corresponding time series $\{\mathbf{r}(t_i)\}$ of the RCs. Then, to project a full state $\mathbf{x}(t)$ into the reconstructed subspace, we first find its $k_f$ closest analogs, also referred to as nearest neighbors (Abarbanel et al. 1993), in the historical record of the full phase space, which occur at times that we denote by $t_1^*, \ldots, t_{k_f}^*$.

After finding the closest analogs $\mathbf{x}(t_1^*), \ldots, \mathbf{x}(t_{k_f}^*)$ in the full phase space, we average over their corresponding points $\mathbf{r}(t_1^*), \ldots, \mathbf{r}(t_{k_f}^*)$ in the historical record of the RCs, weighted by the inverse distance of the analogs in the full phase space. In other words, we estimate the state in the reconstructed subspace as
$$\tilde{\mathbf{r}}[\mathbf{x}(t)] = \frac{\sum_{i=1}^{k_f} \mathbf{r}(t_i^*)\, \|\mathbf{x}(t) - \mathbf{x}(t_i^*)\|^{-1}}{\sum_{i=1}^{k_f} \|\mathbf{x}(t) - \mathbf{x}(t_i^*)\|^{-1}}. \qquad (2)$$
Although here we only use the present state x(t) to search for analogs, it may be advantageous to find analogs using a window that also includes the immediate past. Such an approach is taken in Farmer and Sidorowich (1987, 1988), Ukhorskiy et al. (2004), Chen and Sharma (2006), and Xavier and Goswami (2007).
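The inverse-distance-weighted analog projection of Eq. (2) can be sketched as follows. A brute-force neighbor search is used here for clarity; the paper's implementation uses a k-d tree for better time complexity. Variable names are our own:

```python
import numpy as np

def project_to_subspace(x_now, x_hist, r_hist, k_f=10):
    # Eq. (2): inverse-distance-weighted average of the historical RCs
    # at the times of the k_f nearest full-phase-space analogs.
    # x_now: (D,); x_hist: (T, D); r_hist: (T, K).
    d = np.linalg.norm(x_hist - x_now, axis=1)
    idx = np.argsort(d)[:k_f]             # k_f nearest neighbors
    w = 1.0 / np.maximum(d[idx], 1e-12)   # guard against exact matches
    return (w[:, None] * r_hist[idx]).sum(axis=0) / w.sum()
```

When the query state coincides with a historical state, its (near-infinite) weight dominates and the projection returns that state's RC, as desired.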

Although the analog-based projection method works well for the low-dimensional systems considered here, it may be inaccurate for high-dimensional systems. We have found that lasso regression, where the predictors are states at multiple consecutive times, works better in high dimensions.

c. Forecasting in the oscillation subspace

We want to take a state $\mathbf{r}$ in the reconstructed subspace and forecast it from time $t_i$ to a future time $t_{i+1}$; such a forecast is shown by the red arrow labeled “Oscillation forecast” on the right side of Fig. 1. We denote this operation by $\mathcal{R}_{t_i \to t_{i+1}}[\mathbf{r}(t_i)]$.

Several forecast methods based on M-SSA exist; they include fitting an autoregressive model to the RCs (Ghil and Jiang 1998; Ghil et al. 2002), SSA with conditional prediction to yield an (M − 1)-step forecast (Ogrosky et al. 2019), and neural networks (Lisi et al. 1995). We use an analog method here as in Krishnamurthy and Sharma (2017): we search for similar states to the present one in the historical record, and forecast based on the trajectory of these past states.

We again assume that we have a historical record {r(ti)} of the RCs. Given the state r from which we want to start forecasting, we find its kr closest analogs in the record. The trajectories of nearby points are expected to remain close before starting to diverge (Farmer and Sidorowich 1987; Krishnamurthy and Sharma 2017). Therefore, we follow the trajectory of these analogs until the end of the forecast window and average over their final states.

This method is similar to the analog method of Lorenz (1969), applied here to forecasting in the reconstructed subspace instead of the full phase space; see also the references in Farmer and Sidorowich (1988) for other early work on analog forecasting.

In this paper we use the Euclidean distance to define the closest analogs. Other metrics can also be used, such as distance in a reduced-dimensional space. For instance, Krishnamurthy and Sharma (2017) reduced the dimension D by projecting onto the leading spatial EOFs of the RCs. For a better algorithmic time complexity than a sequential search, the record can be stored in a k-dimensional (k-d) tree (Bentley 1975).
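A minimal sketch of the analog forecast described above, again with a brute-force neighbor search for clarity and our own variable names:

```python
import numpy as np

def analog_forecast(r_now, r_hist, lead, k_r=10):
    # Find the k_r closest historical oscillation states (Euclidean
    # distance), follow each analog's trajectory `lead` steps ahead,
    # and average their final states.
    usable = len(r_hist) - lead           # analogs must have a recorded future
    d = np.linalg.norm(r_hist[:usable] - r_now, axis=1)
    idx = np.argsort(d)[:k_r]
    return r_hist[idx + lead].mean(axis=0)
```

For a near-periodic signal, the analogs of a given phase all evolve to the same later phase, so the averaged forecast tracks the oscillation.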

3. EnOC forecasting using oscillatory modes

a. EnOC algorithm

Here we introduce EnOC as a method for improving ensemble forecasts by averaging solely over the ensemble members whose oscillation state is close to the data-driven oscillation forecast. We assume that we have access to a historical time series of noisy observations or an analysis of the real system. This time series is used to perform the M-SSA, to map the full phase space $\mathcal{X}$ into the reconstructed subspace $\mathcal{X}_{\mathcal{K}}$, and to forecast the oscillation, as in Krishnamurthy and Sharma (2017).

EnOC relies on an ensemble of dynamical model forecasts of the full state $\{\mathbf{x}_i\}$, such as from an operational weather forecast model. Denote the projection of $\mathcal{X}$ into $\mathcal{X}_{\mathcal{K}}$ by $\tilde{\mathbf{r}}(\mathbf{x})$, as per section 2b above, and the operation that forecasts the reconstructed subspace state $\mathbf{r}(t_i)$ to time $t_{i+1}$ by $\mathcal{R}_{t_i \to t_{i+1}}[\mathbf{r}(t_i)]$, as per section 2c above. The EnOC algorithm is given by the following steps:

  1. Using the best current estimate $\hat{\mathbf{x}}(t_0)$ of the full system state at time $t_0$—based, for instance, on an analysis combining the ensemble forecasts with observations—find the corresponding point $\tilde{\mathbf{r}}[\hat{\mathbf{x}}(t_0)] = \tilde{\mathbf{r}}(t_0)$ in the reconstructed subspace. Carry out an oscillatory-subspace forecast into the future from time $t_0$ to $t_1$, and call it $\bar{\mathbf{r}}(t_1) = \mathcal{R}_{t_0 \to t_1}[\tilde{\mathbf{r}}(t_0)]$.

  2. Evolve each ensemble member $\{\mathbf{x}_i \mid i = 1, \ldots, m\}$ forward from time $t_0$ until the next forecast time $t_1$, using the dynamical model.

  3. Find the points $\tilde{\mathbf{r}}_i(t_1) = \tilde{\mathbf{r}}[\mathbf{x}_i(t_1)]$ in the oscillatory subspace that correspond to each ensemble member at time $t_1$. Then, select the $m' \le m$ ensemble members with the smallest distances $d_i = \|\tilde{\mathbf{r}}_i(t_1) - \bar{\mathbf{r}}(t_1)\|$ at time $t_1$, and compute the new ensemble mean $\bar{\mathbf{x}}^{(m')}(t_1)$ using only these $m'$ members.

  4. Repeat steps 1–3 for the next forecast time $t_2$, and so on.

These steps are illustrated schematically in Fig. 2.
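The member-selection step 3 reduces to a few lines. In this sketch (names are our own), `ens` holds the m full-state members and `r_tilde` their oscillation-subspace projections:

```python
import numpy as np

def enoc_mean(ens, r_tilde, r_bar, m_prime):
    # Distances d_i from each member's oscillation projection to the
    # data-driven oscillation forecast; average the m' closest members.
    # ens: (m, D) full states; r_tilde: (m, K) projections; r_bar: (K,).
    d = np.linalg.norm(r_tilde - r_bar, axis=1)
    best = np.argsort(d)[:m_prime]        # indices of the m' best members
    return ens[best].mean(axis=0)
```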
Fig. 2.

Schematic diagram of EnOC for two ensemble members. (top) The dynamical model is integrated in time for each member. At the beginning of the forecast window (thin gray bar), the best estimate of the system state is mapped into the oscillation subspace, and an oscillation forecast is started (red curve). At the end of the forecast window (thick gray bar), the model forecasts are mapped into the oscillation subspace, and compared to the oscillation forecast. In this case, we see that the oscillation of ensemble member 1 is closer to the oscillation forecast than ensemble member 2. (bottom) A time window is shown for the oscillation of each ensemble member for visualization purposes, but in reality the method uses only the single state at the end of the forecast window.


Note that we may be interested in the oscillations in only some of the variables, like rainfall in the case of the monsoon. If so, only the variables of interest need be used in the M-SSA, and subsequently in steps 1 and 3 above.

b. Optimal number m of ensemble members to average over

In step 3 above, we did not specify the number m′ of “best” ensemble members to average over. This will differ depending on the system, the forecast lead time, the total number of ensemble members m, and the ensemble spread, among other factors. Generally, the choice of m′ will be a trade-off between 1) the improvement due to using a larger m′ in the ensemble mean (Christiansen 2018; Wilks 2019), in the smaller error variance around the mean (Kalnay 2019), and in the ability to characterize the forecast uncertainty (Wilks 2019), versus 2) the inclusion of ensemble members that are far from the oscillation forecast. Here, we only optimize m′ with respect to the root-mean-square error (RMSE) in the ensemble mean.

For a given setup—and assuming we have access to a historical record of ensembles, oscillation forecasts, and a good estimate of the system state x^(t) at those times—we can estimate m′ by checking which one would have resulted in the smallest error. In other words,
$$m' = \operatorname*{arg\,min}_{\mu} \sum_t \|\bar{\mathbf{x}}^{(\mu)}(t) - \hat{\mathbf{x}}(t)\|^2, \qquad (3)$$
where t is the time in the historical record, $\hat{\mathbf{x}}(t)$ is the estimate of the true state at that time, and $\bar{\mathbf{x}}^{(\mu)}(t)$ is the average over the $\mu$ ensemble members with the smallest distances $d_i$ from the oscillation forecast. The norm can be chosen to optimize the desired features of the forecast.
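A sketch of this optimization over a historical record of ensembles; the array shapes and names here are our own assumptions:

```python
import numpy as np

def best_m_prime(ens_hist, rt_hist, rb_hist, truth):
    # Choose the member count mu whose oscillation-ranked ensemble mean
    # minimizes total squared error against the estimated true states.
    # ens_hist: (T, m, D); rt_hist: (T, m, K); rb_hist: (T, K); truth: (T, D).
    T, m, _ = ens_hist.shape
    errs = np.zeros(m)
    for t in range(T):
        d = np.linalg.norm(rt_hist[t] - rb_hist[t], axis=1)
        order = np.argsort(d)             # rank members by distance d_i
        # cumulative means over the mu closest members, mu = 1, ..., m
        means = np.cumsum(ens_hist[t][order], axis=0) \
                / np.arange(1, m + 1)[:, None]
        errs += ((means - truth[t]) ** 2).sum(axis=1)
    return int(np.argmin(errs)) + 1       # mu is 1-based
```

The cumulative-mean trick evaluates all candidate values of $\mu$ in one pass per record time.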

Note also that in this paper we pick a single m′ for all the variables. However, in cases where an oscillation comprises significantly more variance in some variables than others—or in some regions in the case of a spatiotemporal oscillation—assigning different values of m′ could improve the performance.

c. Data assimilation of oscillatory modes (EnOC-DA)

We present an alternate EnOC method, which we call ensemble oscillation correction with data assimilation (EnOC-DA). Instead of selecting ensemble members that have better oscillation forecasts, we use an ensemble data assimilation system to assimilate oscillation forecasts as pseudo-observations of the system. We are thereby combining the model forecast—treated as the first guess or background in the data assimilation (DA)—with the data-driven forecast of the oscillation, treated as an observation in DA.

EnOC-DA does not provide a significant advantage over regular EnOC in our toy model experiments in section 5, but it may do so for operational weather and climate prediction. For introductions to DA, see any number of books and review papers (e.g., Bengtsson et al. 1981; Ghil and Malanotte-Rizzoli 1991; Kalnay 2002; Asch et al. 2016; Carrassi et al. 2018).

In EnOC-DA, we assimilate the oscillation forecasts using the same forecast window that we would normally apply in EnOC. This choice differs from conventional DA in that we perform the assimilation at a point in time in the future, prior to having real observations at that time.

We use the ensemble transform Kalman filter (ETKF; Bishop et al. 2001), which allows for a nonlinear observation operator—in this case, the projection $\tilde{\mathbf{r}}[\mathbf{x}(t)]$ of the full phase space onto the reconstructed subspace, as discussed in section 2b. We estimate the observation error covariance matrix $\mathbf{R}$ empirically from the error statistics of the oscillation forecast. Thus, given oscillation forecasts $\bar{\mathbf{r}}(t_i)$ at points $\{t_i \mid i = 1, \ldots, N\}$ in time,
$$\hat{\mathbf{R}} = \frac{1}{N-1} \sum_{i=1}^{N} [\hat{\mathbf{r}}(t_i) - \bar{\mathbf{r}}(t_i)][\hat{\mathbf{r}}(t_i) - \bar{\mathbf{r}}(t_i)]^{\mathsf T}, \qquad (4)$$
where $\hat{\mathbf{r}}(t_i)$ is the true value of the oscillation at time $t_i$.
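The empirical estimate of Eq. (4) is simply a sample covariance of the oscillation-forecast errors; a minimal sketch:

```python
import numpy as np

def estimate_R(r_true, r_fcst):
    # Eq. (4): sample covariance of the oscillation-forecast errors
    # over N verification times; r_true, r_fcst have shape (N, K).
    err = r_true - r_fcst
    return err.T @ err / (len(err) - 1)
```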

d. Notes

If the system contains anticorrelated large-amplitude modes that are not included in the reconstruction then the error reduction may be small, since an improvement in the included modes will be partially offset by the missing ones that make up a large portion of the variance. Thus, the correlations between the RCs need to be inspected and all correlated and anticorrelated modes should be included.

This is related to the problem of separability in the SSA literature, and the correlations between the RCs—slightly modified to account for the endpoint effects—are referred to as w correlations (Golyandina et al. 2001). Since our EnOC algorithm reduces the ensemble spread in the oscillatory modes, having them be uncorrelated from the rest of the modes also helps the ensemble maintain a sufficient spread in these other modes.

Assuming that the RCs are uncorrelated, we derive in the appendix a rough estimate for the “best-case” ratio of the RMSE of the EnOC-corrected forecast to the RMSE of the uncorrected forecast:
$$\frac{\mathrm{RMSE}'}{\mathrm{RMSE}} = \left(1 - \frac{\sum_{j \in \mathcal{O}} \lambda_j}{\sum_{j \in \mathcal{P}} \lambda_j}\right)^{1/2}; \qquad (5)$$
here $\mathcal{O}$ is the set of indices of the oscillatory modes, and $\mathcal{P}$ is the set of indices of all modes excluding a mean mode, if it exists; see the appendix for further details. The two major MISOs make up about 14% of the variance in the daily rainfall anomalies (Krishnamurthy and Shukla 2007). Other oscillations in the large-scale atmospheric flow are estimated to comprise 20%–30% of the variance (Ghil and Robertson 2002; Stan and Krishnamurthy 2019).

The derivation assumes that EnOC results in a perfect prediction of the oscillation, so the real error improvement will generally be smaller than this equation suggests. Unsurprisingly, the larger the portion of the variance comprised by the oscillatory modes, the greater the potential error reduction.
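For a rough sense of scale, plugging the ~14% MISO variance fraction quoted above into the best-case ratio gives:

```python
# Best-case RMSE ratio when the oscillatory modes carry 14% of the
# variance, as for the two leading MISOs
ratio = (1 - 0.14) ** 0.5   # ≈ 0.927, i.e., at most a ~7% RMSE reduction
```

The 20%–30% variance fraction estimated for large-scale atmospheric oscillations gives best-case ratios of roughly 0.89 to 0.84.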

Note that EnOC is not restricted to oscillatory modes and is potentially useful given any modes that are predicted well using a data-driven forecasting method. However, we consider oscillatory modes the most likely to have useful data-driven forecasts for high-dimensional geophysical systems, as already demonstrated by several authors and reviewed here in section 1. We discuss the possible extension of the method to nonoscillatory modes in section 6a.

For the method to work, it is important to have ensemble members that yield oscillations approximating the correct one. This condition may not be satisfied for models that miss important physics (Ghil and Robertson 2000), or if the ensemble spread is too small. Such “underdispersion” occurs when the ensemble underestimates the uncertainty in the forecast, and it may then be beneficial to increase the ensemble spread.

Both EnOC and EnOC-DA are corrections performed after the ensemble forecast, so they can be considered a form of ensemble postprocessing (Vannitsem et al. 2018). EnOC is similar to the ensemble subsetting methods of Dong and Zhang (2016) and Ancell (2016) in that it picks a subset of ensemble members whose mean outperforms the ensemble mean. However, these methods fundamentally differ from EnOC in that they verify against real-time observations instead of a data-driven forecast, and in that they pick a subset and use it for future lead times, instead of a possibly different subset at every lead time.

e. Software implementation

In section 4, EnOC and EnOC-DA are applied to three nonlinear, chaotic systems of ordinary differential equations (ODEs). The experiments were performed using open-source Julia code that is available on GitHub. To efficiently search for analogs, we used the k-d tree data structure from the NearestNeighbors.jl library (Carlsson 2020). The Lyapunov exponents of the systems were computed using the DynamicalSystems.jl library (Datseris 2018). The continuous ranked probability score (CRPS) was computed with the properscoring library (Climate Corporation 2015). The time integrations used the fourth-order Runge–Kutta scheme. We used the parasweep library for Python (Bach 2021) to facilitate running multiple experiments in parallel at different lead times and with different parameters.

4. Experiments applying EnOC to toy models

We test EnOC and EnOC-DA on three ODE systems: two coupled Colpitts oscillators (e.g., Kennedy 1994), a Chua oscillator (Chua et al. 1986), and a periodically forced Lorenz (1963) model. These three systems exhibit oscillatory behavior and are all chaotic, with maximal Lyapunov exponents λ_max greater than zero. Lyapunov exponents characterize the rate of exponential error growth (Ott 2002). The Lyapunov time λ_max⁻¹ is the characteristic time scale of chaotic error growth.

We apply parametric model errors to each model in order to test a scenario as in Krishnamurthy and Sharma (2017), in which the oscillatory modes are poorly predicted by the dynamical model. For each system, we list its parameters, Lyapunov exponents, peak oscillation frequency, experiment configuration, and oscillation properties in Table 1.

Table 1.

Parameters, properties, and configuration for the ODE systems used in the experiments; nondimensional time units.

Our system of coupled Colpitts oscillators follows Rey et al. (2014):
\[
\dot{x}_1^{(i)}(t) = p_1 x_2^{(i)}(t) + c^{(i+1,i)} \bigl[ x_1^{(i+1)}(t) - x_1^{(i)}(t) \bigr],
\]
\[
\dot{x}_2^{(i)}(t) = -p_2 \bigl[ x_1^{(i)}(t) + x_3^{(i)}(t) \bigr] - p_3^{(i)} x_2^{(i)}(t),
\]
\[
\dot{x}_3^{(i)}(t) = p_4 \bigl\{ x_2^{(i)}(t) + 1 - \exp\bigl[ -x_1^{(i)}(t) \bigr] \bigr\}.
\]
Here ẋ = dx/dt, the indices are cyclic, and we use two oscillators. For the values of the parameters in Table 1, the two oscillators do not synchronize.
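For readers who wish to experiment, a minimal integration of this coupled system can be sketched as follows, using the fourth-order Runge–Kutta scheme mentioned in the software implementation subsection. The parameter values below are illustrative placeholders, not those of Table 1, and the sign of the exponential follows the standard Colpitts form.

```python
import numpy as np

# Illustrative parameters (hypothetical, close to canonical chaotic values):
P1, P2, P3, P4 = 5.0, 0.08, 0.7, 6.3
C = 0.05  # hypothetical coupling strength

def colpitts_rhs(x):
    """x has shape (2, 3): two oscillators with components (x1, x2, x3)."""
    dx = np.empty_like(x)
    for i in range(2):
        j = (i + 1) % 2  # cyclic index of the other oscillator
        dx[i, 0] = P1 * x[i, 1] + C * (x[j, 0] - x[i, 0])
        dx[i, 1] = -P2 * (x[i, 0] + x[i, 2]) - P3 * x[i, 1]
        dx[i, 2] = P4 * (x[i, 1] + 1.0 - np.exp(-x[i, 0]))
    return dx

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = np.array([[0.1, 0.2, 0.1], [0.2, 0.1, 0.1]])  # slightly different initial states
for _ in range(2000):
    x = rk4_step(colpitts_rhs, x, 0.01)
```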
The dimensionless Chua oscillator describes the so-called double-scroll family, and it is governed by the following equations (Chua et al. 1986):
\[
\dot{x} = \alpha \bigl[ y - x - f(x) \bigr],
\]
\[
\dot{y} = x - y + z,
\]
\[
\dot{z} = -\beta y,
\]
\[
f(x) = m_1 x + \tfrac{1}{2} (m_0 - m_1) \bigl( |x + 1| - |x - 1| \bigr).
\]
For this system, owing to the mixing of low-frequency variability into the oscillatory modes, we follow Groth and Ghil (2011) and perform a varimax rotation.
Mani et al. (2009) estimated the largest Lyapunov exponent for the Indian monsoon rainfall in the years 1979–2004 to be λ_max ≃ 0.36 day⁻¹, corresponding to a Lyapunov time λ_max⁻¹ ≃ 2.8 days. The Lyapunov time is thus significantly smaller than the leading MISO period of about 45 days (Krishnamurthy and Shukla 2007). To study a toy system with a Lyapunov time much smaller than the oscillation period, we use the following modified Lorenz (1963) system, with the addition of sinusoidal forcing in the x component:
\[
\dot{x} = \sigma (y - x) + c u,
\]
\[
\dot{y} = x (\rho - z) - y,
\]
\[
\dot{z} = x y - \beta z,
\]
\[
\dot{u} = v, \qquad \dot{v} = -\Omega^2 u.
\]
Ghil and Jiang (1998) used a similar system to illustrate the effectiveness of single-channel SSA in real-time ENSO prediction. Here, the periodic forcing given by Eq. (8d) represents the MISO cycle rather than the previous authors’ seasonal cycle. It should be noted that the maximum Lyapunov exponent for this sinusoidally driven Lorenz system is less than the value of 0.906 for the same parameters but with no driving. For this Lorenz system we use only the x and y variables in EnOC and EnOC-DA, since the oscillation is most prominent in these variables of the forced subsystem.
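A minimal sketch of integrating this forced system follows, with the classical Lorenz parameter values; the coupling c and forcing frequency Ω below are illustrative placeholders, not necessarily the Table 1 values.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classical Lorenz (1963) parameters
C, OMEGA = 1.0, 2.0 * np.pi / 10.0        # hypothetical coupling and frequency

def forced_lorenz_rhs(s):
    x, y, z, u, v = s
    return np.array([
        SIGMA * (y - x) + C * u,  # sinusoidal forcing cu enters x only
        x * (RHO - z) - y,
        x * y - BETA * z,
        v,                        # (u, v) is a harmonic oscillator,
        -OMEGA**2 * u,            # so u(t) oscillates with frequency Omega
    ])

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

s = np.array([1.0, 1.0, 1.0, 1.0, 0.0])  # start with u = 1, v = 0
for _ in range(1000):
    s = rk4_step(forced_lorenz_rhs, s, 0.01)
```

Because the forcing subsystem is linear and one-way coupled, its "energy" v² + Ω²u² is conserved (up to RK4 truncation error), consistent with the purely harmonic oscillation noted in section 5a.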

To create the “historical data” used in the EnOC algorithm of section 3a in order to carry out the M-SSA analysis of the dataset, project from the full phase space to the oscillation subspace, and forecast the oscillatory modes, we run a transient of 3000Δt, followed by a record of 22 000Δt, where Δt is the sampling time. A Gaussian error with variance of 10% of the standard deviation of each variable is then added to that variable.

To generate the ensembles, we draw from a Gaussian distribution around the true state, with standard deviations of 20% of the standard deviation of each variable. For the Chua system, randomly perturbed states are often outside the strange attractor’s basin of attraction and the trajectory becomes unbounded; we thus verify that each perturbed state is on a bounded trajectory, and discard it if it is not. We use m = 20 ensemble members.
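The ensemble construction just described can be sketched as follows; the climatological standard deviations and the boundedness test passed in are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(x_true, clim_std, m=20, frac=0.2, is_bounded=None):
    """Draw m members from a Gaussian around the true state, with per-variable
    standard deviation equal to `frac` (here 20%) of that variable's
    climatological standard deviation. For the Chua system, members on
    unbounded trajectories are discarded and redrawn; `is_bounded` stands in
    for that (user-supplied) trajectory test."""
    members = []
    while len(members) < m:
        x = x_true + frac * clim_std * rng.standard_normal(x_true.size)
        if is_bounded is None or is_bounded(x):
            members.append(x)
    return np.array(members)

ens = make_ensemble(np.array([1.0, -2.0, 0.5]), np.array([2.0, 3.0, 1.0]))
print(ens.shape)  # (20, 3)
```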

In each case, we present the results after finding the optimal m′ based on 1000 forecast cycles. We then run 10 000 forecast cycles to compare the error of the corrected versus uncorrected ensemble using this m′.

For the nearest-neighbor analog mapping from the full phase space to the oscillation subspace, as well as the oscillation forecast, we use kf = kr = 30 neighbors. In applying EnOC-DA, for each system and lead time, we run experiments with multiplicative inflation factor λ (corresponding to inflating the background covariance matrix by λ2), between 1.0 and 1.45 at increments of 0.05, and choose the one that results in the lowest error.
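The nearest-neighbor analog mapping can be sketched in plain NumPy; the actual Julia code uses a k-d tree (NearestNeighbors.jl) to make this search efficient for long records, and the names below are illustrative.

```python
import numpy as np

def analog_map(query, history_full, history_osc, k=30):
    """Map a full-space state into the oscillation subspace by averaging the
    oscillation-subspace values of its k nearest historical analogs.
    Brute-force distance computation here; a k-d tree replaces it at scale."""
    dists = np.linalg.norm(history_full - query, axis=1)
    nearest = np.argpartition(dists, k)[:k]  # indices of the k closest analogs
    return history_osc[nearest].mean(axis=0)

# Toy check: with history_osc = 2 * history_full on a line, the map is ~linear.
hist = np.linspace(0.0, 99.0, 100).reshape(-1, 1)
print(analog_map(np.array([50.0]), hist, 2.0 * hist, k=3))  # [100.]
```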

5. Results

a. Oscillation forecasts

Before showing the results of EnOC and EnOC-DA, we first consider the error growth with lead time of the oscillation forecasts themselves. This is shown in Fig. 3. The RMSE is computed as $\lVert \bar{r}(t) - \hat{r}(t) \rVert_2 / \sqrt{D'}$, where $\bar{r}(t)$ is the oscillation forecast, $\hat{r}(t)$ is the true state of the oscillation, and D′ is the number of variables used in the oscillation forecasts. The error obtained by predicting the climatological mean value of the oscillation is also included.

Fig. 3.

The root-mean-square error (RMSE) of the oscillation forecasts at a given lead time (blue), along with the error obtained by predicting the climatological mean value of the oscillation (red). This error is computed on a test set of 2200Δt.

Citation: Journal of Climate 34, 14; 10.1175/JCLI-D-20-0624.1

When the error of the oscillation forecast reaches the error obtained by predicting the climatological mean of the oscillation, the oscillation forecast no longer offers any useful skill. Whereas for the Colpitts and Chua systems the error grows with time, we see that for the forced Lorenz system the error instead oscillates without growing. This is because the oscillation in x and y for the forced Lorenz system is close to being purely harmonic, rather than broad-peaked, and it can thus be predicted well at long lead times; see Ghil and Childress (1987, section 12.6).

b. EnOC and EnOC-DA applied to ensemble mean

Figure 4 shows the RMSE as a function of lead time in forecasting the Colpitts system given by Eq. (6), comparing the ensembles corrected with EnOC and EnOC-DA to the uncorrected forecasts. We also plot the error as theoretically estimated from Eq. (5). Note that the RMSE here refers to $\lVert \bar{x}(t) - \hat{x}(t) \rVert_2 / \sqrt{D}$—where $\bar{x}(t)$ is the forecast and $\hat{x}(t)$ is the true state—averaged over all the forecast cycles.

Fig. 4.

The RMSE for the forecasts of the Colpitts system of Eq. (6) at a given lead time, comparing the uncorrected forecast, the EnOC forecast, the EnOC-DA forecast, and the estimate from Eq. (5). The error bars are the standard error in the mean over 10 000 forecast cycles for EnOC and 1000 cycles for EnOC-DA. We include, for reference, the error obtained by predicting the climatological mean as the dark yellow line.

Up until a lead time of about 7, the error reductions of EnOC are close to the predictions of Eq. (5). However, significant improvements persist even at the largest lead times, past 25. At longer lead times, the oscillation forecast loses skill and EnOC contributes less, but it still provides some predictability. EnOC-DA performs worse than EnOC except at the longest lead times.

Figure 5 displays the results of applying EnOC and EnOC-DA to the Chua system of Eq. (7). The error reductions closely track the estimates of Eq. (5), and EnOC and EnOC-DA perform similarly, with EnOC slightly better. The error reductions continue to the longest lead times shown on the plot, when the errors reach the level attained by predicting the climatological mean.

Fig. 5.

As in Fig. 4, but for the Chua system of Eq. (7).

Finally, Fig. 6 shows the error for the periodically forced Lorenz system. The results here are qualitatively quite different from those of the Colpitts and Chua systems in Figs. 4 and 5, since the oscillation period is significantly longer than the Lyapunov time. The error reductions are insignificant until around time 1, after which they keep increasing in magnitude with increasing lead time.

Fig. 6.

As in Fig. 4, but for the periodically forced Lorenz (1963) system of Eq. (8). The error obtained by predicting the climatological mean is not shown since it is outside of the range of the y axis, at around 15.8.

The larger reductions at longer lead times can be explained by the long oscillation period: at small lead times, the oscillation has only completed a small fraction of its period, and therefore does not provide significant predictability for the system. Moreover, as discussed in section 5a, the oscillation of this system can be predicted at long lead times without growing error. Here, the error reductions are much smaller than the “best case” predicted by Eq. (5) at all lead times.

For the above EnOC results of Figs. 4–6, we have picked the optimal number m′ of ensemble members to average over as described in section 3b. Note that this is not done for EnOC-DA, which uses all the ensemble members.

Figure 7 shows an example of an RMSE vs. m′ curve for the Chua system. We can clearly see the trade-off between two factors: too small an ensemble leads to a larger RMSE, while including too many ensemble members that are not close enough to the oscillation forecast also leads to a larger RMSE. This convex shape of the RMSE vs. m′ curve was common in our experiments.

Fig. 7.

The RMSE of the ensemble mean $\bar{x}^{(m')}$ vs m′ for the Chua system at a lead time of 3.0 time units (blue), along with the RMSE when choosing a random subset of ensemble members of length m′ (red).


In the case at hand, the optimal m′ is 11. In cases where the oscillation forecast is very informative about the system state, the optimal m′ will be small. If the oscillation forecast is useless, the error will generally be monotonically decreasing with increasing m′, and m′ = m (the total number of ensemble members) will be optimal.

For comparison, we also try picking random subsets of ensemble members. As we see in Fig. 7, the error generally decreases monotonically with increasing ensemble size, demonstrating that the subsets of ensemble members chosen by EnOC are better than chance.
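The two subset choices compared in Fig. 7 can be sketched as follows; the function names and inputs are illustrative, with the members' projections onto the oscillation subspace and the data-driven oscillation forecast assumed given.

```python
import numpy as np

def enoc_subset_mean(ens_full, ens_osc, osc_forecast, m_prime):
    """Average the m' ensemble members whose projections onto the oscillation
    subspace lie closest to the data-driven oscillation forecast."""
    dists = np.linalg.norm(ens_osc - osc_forecast, axis=1)
    chosen = np.argsort(dists)[:m_prime]  # indices of the m' closest members
    return ens_full[chosen].mean(axis=0)

def random_subset_mean(ens_full, m_prime, rng):
    """Baseline of Fig. 7: the mean of a random subset of the same size."""
    chosen = rng.choice(len(ens_full), size=m_prime, replace=False)
    return ens_full[chosen].mean(axis=0)
```

Sweeping m′ from 1 to m with `enoc_subset_mean` and scoring each subset mean against the truth reproduces the convex RMSE-vs-m′ curve described above.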

c. EnOC for probabilistic forecasts

Thus far, we have only considered the error in the ensemble mean. Often in ensemble forecasting, however, the distribution of the ensemble members is also of interest. Here we evaluate the performance of EnOC in terms of its impact on the forecast probability distribution.

A common error score for probabilistic forecasts is the continuous ranked probability score (CRPS; Hersbach 2000). Although defined for continuous probability distributions, a version called the ensemble CRPS can be computed by assuming that each ensemble member is a sample from a forecast probability distribution. A lower CRPS indicates a better forecast (Wilks 2019).

Here, instead of using EnOC to compute the ensemble mean, we use it to pick the best m′ ensemble members to characterize the forecast distribution. We then compare the CRPS computed by considering only those members to the CRPS computed by considering all the members.
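A minimal NumPy sketch of the ensemble CRPS in its standard kernel form follows (the experiments used the properscoring library instead).

```python
import numpy as np

def crps_ensemble(members, obs):
    """Ensemble CRPS in kernel form: E|X - y| - (1/2) E|X - X'|, where X and
    X' are drawn independently from the ensemble and y is the verifying
    observation. Lower is better; with a single member it reduces to the
    absolute error."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

print(crps_ensemble([3.0], 1.0))       # 2.0 (absolute error)
print(crps_ensemble([0.0, 2.0], 1.0))  # 0.5
```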

Figure 8 displays the CRPS for the Chua, periodically forced Lorenz (1963), and Colpitts systems, for both the uncorrected and EnOC forecasts. We see that EnOC results in a significant reduction in the CRPS for the Chua and forced Lorenz systems. For the Colpitts system, the reduction in CRPS is very small, and it disappears at lead times beyond 15.

Fig. 8.

CRPS for the three systems at a given lead time, comparing the uncorrected and EnOC forecasts.

These results demonstrate that EnOC can improve probabilistic forecasts. However, the results for the Colpitts system show that it is possible for EnOC to improve the ensemble mean forecast without significantly improving the probabilistic forecast. These results might be improved if the CRPS, rather than the RMSE, were used to optimize m′ (see section 3b). Moreover, we have found that EnOC-DA can reduce the CRPS more than EnOC in some cases.

6. Concluding remarks

a. Summary

Oscillatory modes are prevalent in the climate system, and tend to be more predictable than the overall signal (Ghil and Childress 1987; Ghil and Robertson 2002; Ghil et al. 2019; Krishnamurthy 2019). Previous studies have focused on the prediction of these modes, but did not use this information systematically to improve the prediction of the overall system.

Here, we presented a general method, which we term ensemble oscillation correction (EnOC), to improve ensemble forecasts of systems with oscillatory modes. We demonstrated a robust error reduction over uncorrected forecasts. Equation (5) provides a rough "best case" error reduction for this method based only on the percentage of the variance comprised by the oscillation.

We also introduced an alternate method that uses data assimilation, EnOC with Data Assimilation (EnOC-DA). EnOC-DA generally results in similar but slightly smaller error reductions compared to EnOC for the chaotic toy models we tested. This may be because spinup is important for the performance of ensemble Kalman filters (Kalnay and Yang 2010), and the filter was not spun up here. A hybrid method that incorporates a climatological background error covariance matrix could mitigate this problem (Penny 2017).

In this paper we used M-SSA due to the extensive literature on its use for extraction and prediction of climate oscillations, as described in section 2a. As an extension of this work, the EnOC ideas could be tested using a number of other methods of extracting oscillatory modes from multivariate time series data; see von Storch and Zwiers (1999), Jolliffe (2002), and Ghil et al. (2002) for an overview. These include the multitaper frequency domain–singular value decomposition (MTM-SVD: Mann and Park 1999), Hilbert EOFs (Rasmusson et al. 1981), principal oscillation patterns (Hasselmann 1988), predictive oscillation patterns (Kooperberg and O'Sullivan 1996), and the recent data-adaptive harmonic (DAH)–multilayer Stuart–Landau model (MSLM) methodology (Kondrashov et al. 2018).

Even more broadly, the idea of EnOC and EnOC-DA of optimally combining dynamical and data-driven forecasts could be extended to use machine learning forecasts, or other low-order, robust forecasts, instead of the oscillatory-mode forecasts on which the full-space forecast builds herein. As one example of many, the reduced-subspace forecast could rely on empirical model reduction (EMR: Kravtsov et al. 2010), which has been widely applied in the climate sciences and beyond (Kondrashov et al. 2015). EMR forecasts have participated for over a decade in the ENSO forecast plume of the International Research Institute for Climate and Society and were found, as recently as 2012, to be highly competitive with the ENSO forecasts of high-end climate models (Barnston et al. 2012).

b. Discussion: Application to real-time predictions

Motivated by the poor forecasts of the monsoon intraseasonal oscillations (MISOs) with state-of-the-art climate models (e.g., Krishnamurthy and Sharma 2017), we intend to apply this method to the South Asian monsoon region (see Pentakota et al. 2020).

The error reduction will vary in space according to the percentage of the variance represented by MISOs at that location. Rainfall reflects latent heating and thus the patterns of potential vorticity, which in turn are a key determinant of the atmospheric flow. Therefore, if ensemble members are chosen with EnOC based on their predicted rainfall, their error in forecasts of other variables is likely to be reduced as well.

This idea is supported by Lien et al. (2013), who found that data assimilation of observed precipitation reduced the forecast error in other fields. They attributed this improvement to the increased weight placed on ensemble members with more correct potential vorticity, which is done implicitly with the ensemble transform Kalman filter.

Acknowledgments

We would like to express our deep gratitude for the Monsoon Mission II funding for this work (Grant IITMMMIIUNIVMARYLANDUSA2018INT1) provided by the Ministry of Earth Science, Government of India. We thank Takemasa Miyoshi for the suggestion of an ensemble method. We also thank Chu-Chun Chang, Jagadish Shukla, and Takuma Yoshida for helpful discussions. Two anonymous reviewers provided insightful feedback that improved the manuscript. E.B. was supported by the University of Maryland Flagship Fellowship and Ann G. Wylie Fellowship, and Monsoon Mission II funding. V.K. was supported by the National Science Foundation (Grant 1338427), the National Oceanic and Atmospheric Administration (Grant NA140OAR4310160), and the National Aeronautics and Space Administration (Grant NNX14AM19G) of the United States. M.G.’s work on this paper was supported by the EIT Climate-KIC; EIT Climate-KIC is supported by the European Institute of Innovation and Technology (EIT), a body of the European Union, under Grant Agreement 190733. The present paper is TiPES contribution number 81; this project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement 820970. The authors acknowledge the University of Maryland supercomputing resources (http://hpcc.umd.edu) made available for conducting the research reported in this paper.

APPENDIX

Derivation of an A Priori Error Reduction Estimate

We derive a rough estimate of a “best case” expected error reduction of EnOC given some reasonable assumptions.

We start, for simplicity, with the squared prediction error of the forecast y^(t) of the one-dimensional time series y(t) at a particular time step t; the same results hold for multiple dimensions and multiple time steps. We assume that both the real and predicted time series have the same set of SSA eigenvectors, such that both can be decomposed into the same set of modes J, that is,
\[
y = \sum_{j \in J} r_j \quad \text{and} \quad \hat{y} = \sum_{j \in J} \hat{r}_j,
\]
where we drop the dependence on t. This assumption will hold if and only if the two corresponding covariance matrices commute with each other.
We regard each j as a single mode, although the derivation also works if each j is a subset of modes. In particular, each oscillatory mode will generally correspond to a pair of eigenvectors in quadrature; see section 2a. We then obtain
\[
(\hat{y} - y)^2 = \Bigl(\sum_j \hat{r}_j - \sum_j r_j\Bigr)^2 = \Bigl[\sum_j (\hat{r}_j - r_j)\Bigr]^2 = \sum_j (\hat{r}_j - r_j)^2 + \sum_{k \neq \ell} (\hat{r}_k - r_k)(\hat{r}_\ell - r_\ell),
\]
and the cross terms may be rewritten as
\[
\sum_{k \neq \ell} \bigl[(\hat{r}_k - \bar{r}_k) - (r_k - \bar{r}_k)\bigr]\bigl[(\hat{r}_\ell - \bar{r}_\ell) - (r_\ell - \bar{r}_\ell)\bigr],
\]
where $\bar{r}_i$ is the mean of the $i$th RC and we assume that the mean is the same for the real and predicted RCs.
Next, we assume that the different groups of RCs are uncorrelated, as discussed in section 2d, so that the second term above vanishes and
\[
(\hat{y} - y)^2 = \sum_j (\hat{r}_j - r_j)^2.
\]
Assume further that EnOC corrects mode $j'$ "perfectly," so that $(\hat{r}_{j'} - r_{j'})^2 = 0$. Then, the ratio of the uncorrected mean-square error MSE to the corrected mean-square error MSE′ will be
\[
\frac{\mathrm{MSE}}{\mathrm{MSE}'} = \frac{\sum_{j \neq j'} (\hat{r}_j - r_j)^2 + (\hat{r}_{j'} - r_{j'})^2}{\sum_{j \neq j'} (\hat{r}_j - r_j)^2} = 1 + \frac{(\hat{r}_{j'} - r_{j'})^2}{\sum_{j \neq j'} (\hat{r}_j - r_j)^2}.
\]
Now, we assume that the prediction error of each mode is proportional to the variance captured by that mode [i.e., $(\hat{r}_j - r_j)^2 = c_j \mathrm{Var}_j$]. Then,
\[
\frac{\mathrm{MSE}}{\mathrm{MSE}'} = 1 + \frac{c_{j'} \mathrm{Var}_{j'}}{\sum_{j \neq j'} c_j \mathrm{Var}_j}.
\]
Often the highest-variance mode corresponds simply to the long-term mean of the time series; unless, that is, y(t) is already an anomaly time series with mean zero. Assuming that the model estimates this mean well, $c_j$ will be small for this mode, and we set its $c_j = 0$.
We expect the noise modes to have large $c_j$, due to their unpredictability, and modes that are well predicted by the model to have small $c_j$. Here we make the simplifying assumption that, except for the mode corresponding to the mean, all the other modes are predicted equally well by the model, so that their $c_j$ values are the same. One thus obtains
\[
\frac{\mathrm{MSE}}{\mathrm{MSE}'} = 1 + \frac{\mathrm{Var}_{j'}}{\mathrm{Var}_{\mathrm{tot}} - \mathrm{Var}_{j'}},
\]
where $\mathrm{Var}_{\mathrm{tot}}$ is the total variance of all the modes, excluding the mean mode.
Expressing now the ratio of the corrected to uncorrected error in terms of the RMSE, we get
\[
\frac{\mathrm{RMSE}'}{\mathrm{RMSE}} = \Bigl(1 - \frac{\mathrm{Var}_{j'}}{\mathrm{Var}_{\mathrm{tot}}}\Bigr)^{1/2} = \Bigl(1 - \frac{\sum_{j \in O} \lambda_j}{\sum_{j \in P} \lambda_j}\Bigr)^{1/2},
\]
where $\lambda_j$ is the eigenvalue corresponding to mode $j$, $O$ is the set of indices of the oscillatory modes, and $P$ is the set of indices of all modes excluding a mean mode. The last equality follows from the discussion in section 2a.
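This result can be checked numerically: the sketch below generates three uncorrelated synthetic mode errors satisfying $(\hat{r}_j - r_j)^2 = c_j \mathrm{Var}_j$ with equal constants $c_j$, zeroes the error of the "corrected" mode, and recovers the predicted ratio. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
variances = np.array([4.0, 2.0, 1.0])  # Var_j of three uncorrelated modes

# Mode errors with mean-square c_j * Var_j and equal c_j = 0.09:
errors = 0.3 * np.sqrt(variances)[:, None] * rng.standard_normal((3, n))

mse = np.mean(errors.sum(axis=0) ** 2)                # uncorrected: all modes err
mse_corrected = np.mean(errors[1:].sum(axis=0) ** 2)  # mode j' = 0 corrected to zero
print(np.sqrt(mse_corrected / mse))                   # sample RMSE ratio
print(np.sqrt(1.0 - variances[0] / variances.sum()))  # theoretical ratio, ~0.655
```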

REFERENCES

  • Abarbanel, H. D. I., R. Brown, J. J. Sidorowich, and L. S. Tsimring, 1993: The analysis of observed chaotic data in physical systems. Rev. Mod. Phys., 65, 13311392, https://doi.org/10.1103/RevModPhys.65.1331.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Alessio, S. M., 2016: Singular spectrum analysis (SSA). Digital Signal Processing and Spectral Analysis for Scientists: Concepts and Applications, Springer, 537–571, https://doi.org/10.1007/978-3-319-25468-5.

    • Crossref
    • Export Citation
  • Allen, M. R., and L. A. Smith, 1996: Monte Carlo SSA: Detecting irregular oscillations in the presence of colored noise. J. Climate, 9, 33733404, https://doi.org/10.1175/1520-0442(1996)009<3373:MCSDIO>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ancell, B. C., 2016: Improving high-impact forecasts through sensitivity-based ensemble subsets: Demonstration and initial tests. Wea. Forecasting, 31, 10191036, https://doi.org/10.1175/WAF-D-15-0121.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Asch, M., M. Bocquet, and M. Nodet, 2016: Data Assimilation: Methods, Algorithms, and Applications. Society for Industrial and Applied Mathematics, 306 pp., https://doi.org/10.1137/1.9781611974546.

    • Crossref
    • Export Citation
  • Bach, E., 2021: parasweep: A template-based utility for generating, dispatching, and post-processing of parameter sweeps. SoftwareX, 13, 100631, https://doi.org/10.1016/j.softx.2020.100631.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bach, E., S. Motesharrei, E. Kalnay, and A. Ruiz-Barradas, 2019: Local atmosphere–ocean predictability: Dynamical origins, lead times, and seasonality. J. Climate, 32, 75077519, https://doi.org/10.1175/JCLI-D-18-0817.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Barnston, A. G., M. K. Tippett, M. L. L’Heureux, S. Li, and D. G. DeWitt, 2012: Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Amer. Meteor. Soc., 93, 631651, https://doi.org/10.1175/BAMS-D-11-00111.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bengtsson, L., M. Ghil, and E. Källen, Eds., 1981: Dynamic Meteorology: Data Assimilation Methods. Applied Mathematical Sciences Series, Vol. 36, Springer-Verlag, 330 pp., https://doi.org/10.1007/978-1-4612-5970-1.

    • Crossref
    • Export Citation
  • Bentley, J. L., 1975: Multidimensional binary search trees used for associative searching. Commun. ACM, 18, 509517, https://doi.org/10.1145/361002.361007.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420436, https://doi.org/10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Broomhead, D. S., and G. P. King, 1986: Extracting qualitative dynamics from experimental data. Physica D, 20, 217236, https://doi.org/10.1016/0167-2789(86)90031-X.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Carlsson, K., 2020: NearestNeighbors.jl: High performance nearest neighbor data structures and algorithms for Julia. GitHub, accessed 14 May 2021, https://github.com/KristofferC/NearestNeighbors.jl.

  • Carrassi, A., M. Bocquet, L. Bertino, and G. Evensen, 2018: Data assimilation in the geosciences: An overview of methods, issues, and perspectives. Wiley Interdiscip. Rev.: Climate Change, 9, e535, https://doi.org/10.1002/wcc.535.

    • Search Google Scholar
    • Export Citation
  • Charney, J. G., and J. Shukla, 1981: Predictability of monsoons. Monsoon Dynamics, J. Lighthill and R. P. Pearce, Eds., Cambridge University Press, 99–110, https://doi.org/10.1017/CBO9780511897580.009.

    • Crossref
    • Export Citation
  • Chen, J., and A. S. Sharma, 2006: Modeling and prediction of the magnetospheric dynamics during intense geospace storms. J. Geophys. Res. Space Phys., 111, A04209, https://doi.org/10.1029/2005JA011359.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chen, N., A. J. Majda, and D. Giannakis, 2014: Predicting the cloud patterns of the Madden–Julian oscillation through a low-order nonlinear stochastic model. Geophys. Res. Lett., 41, 56125619, https://doi.org/10.1002/2014GL060876.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chen, N., A. J. Majda, C. T. Sabeerali, and R. S. Ajayamohan, 2018: Predicting monsoon intraseasonal precipitation using a low-order nonlinear stochastic model. J. Climate, 31, 44034427, https://doi.org/10.1175/JCLI-D-17-0411.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Christiansen, B., 2018: Ensemble averaging and the curse of dimensionality. J. Climate, 31, 15871596, https://doi.org/10.1175/JCLI-D-17-0197.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chua, L., M. Komuro, and T. Matsumoto, 1986: The double scroll family. IEEE Trans. Circuits Syst., 33, 10721118, https://doi.org/10.1109/TCS.1986.1085869.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Climate Corporation, 2015: properscoring. Accessed 14 May 2021, https://github.com/TheClimateCorporation/properscoring.

  • Datseris, G., 2018: DynamicalSystems.jl: A Julia software library for chaos and nonlinear dynamics. J. Open Source Softw., 3, 598, https://doi.org/10.21105/joss.00598.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Dong, L., and F. Zhang, 2016: OBEST: An observation-based ensemble subsetting technique for tropical cyclone track prediction. Wea. Forecasting, 31, 5770, https://doi.org/10.1175/WAF-D-15-0056.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Farmer, J. D., and J. J. Sidorowich, 1987: Predicting chaotic time series. Phys. Rev. Lett., 59, 845848, https://doi.org/10.1103/PhysRevLett.59.845.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Farmer, J. D., and J. J. Sidorowich, 1988: Exploiting chaos to predict the future and reduce noise. Evolution, Learning and Cognition, Y. C. Lee, Ed., World Scientific, 277–330, https://doi.org/10.1142/9789814434102_0011.

    • Crossref
    • Export Citation
  • Ghil, M., 2001: Hilbert problems for the geosciences in the 21st century. Nonlinear Processes Geophys., 8, 211, https://doi.org/10.5194/npg-8-211-2001.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ghil, M., and S. Childress, 1987: Topics in Geophysical Fluid Dynamics: Atmospheric Dynamics, Dynamo Theory, and Climate Dynamics. Springer, 512 pp., https://doi.org/10.1007/978-1-4612-1052-8.

    • Crossref
    • Export Citation
  • Ghil, M., and P. Malanotte-Rizzoli, 1991: Data assimilation in meteorology and oceanography. Advances in Geophysics, Vol. 33, Elsevier, 141–266, https://doi.org/10.1016/S0065-2687(08)60442-2..

    • Crossref
    • Export Citation
  • Ghil, M., and R. Vautard, 1991: Interdecadal oscillations and the warming trend in global temperature time series. Nature, 350, 324327, https://doi.org/10.1038/350324a0.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ghil, M., and N. Jiang, 1998: Recent forecast skill for the El Niño/Southern Oscillation. Geophys. Res. Lett., 25, 171174, https://doi.org/10.1029/97GL03635.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ghil, M., and A. W. Robertson, 2000: Solving problems with GCMs: General circulation models and their role in the climate modeling hierarchy. General Circulation Model Development: Past, Present, and Future, D. A. Randall, Ed., Academic Press, 285–325.

    • Crossref
    • Export Citation
  • Ghil, M., and A. W. Robertson, 2002: “Waves” vs. “particles” in the atmosphere’s phase space: A pathway to long-range forecasting? Proc. Natl. Acad. Sci. USA, 99, 24932500, https://doi.org/10.1073/pnas.012580899.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ghil, M., and Coauthors, 2002: Advanced spectral methods for climatic time series. Rev. Geophys., 40, 1003, https://doi.org/10.1029/2000RG000092.

  • Ghil, M., D. Kondrashov, F. Lott, and A. W. Robertson, 2004: Intraseasonal oscillations in the mid-latitudes: Observations, theory and GCM results. ECMWF/CLIVAR Workshop on Simulation and Prediction of Intra-Seasonal Variability with Emphasis on the MJO, Reading, United Kingdom, ECMWF, 35–54.

  • Ghil, M., A. Groth, D. Kondrashov, and A. W. Robertson, 2019: Extratropical sub-seasonal to seasonal oscillations and multiple regimes: The dynamical systems view. Sub-Seasonal to Seasonal Prediction: The Gap between Weather and Climate Forecasting, A. W. Robertson, and F. Vitart, Eds., Elsevier, 119–142, https://doi.org/10.1016/B978-0-12-811714-9.00006-1.

    • Crossref
    • Export Citation
  • Golyandina, N., 2020: Particularities and commonalities of singular spectrum analysis as a method of time series analysis and signal processing. Wiley Interdiscip. Rev. Comput. Stat., 12, e1487, https://doi.org/10.1002/wics.1487.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Golyandina, N., V. Nekrutkin, and A. A. Zhigljavsky, 2001: Analysis of Time Series Structure: SSA and Related Techniques. 1st ed., Chapman and Hall/CRC, 320 pp.

    • Crossref
    • Export Citation
  • Groth, A., and M. Ghil, 2011: Multivariate singular spectrum analysis and the road to phase synchronization. Phys. Rev. E, 84, 036206, https://doi.org/10.1103/PhysRevE.84.036206.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Groth, A., and M. Ghil, 2015: Monte Carlo singular spectrum analysis (SSA) revisited: Detecting oscillator clusters in multivariate datasets. J. Climate, 28, 78737893, https://doi.org/10.1175/JCLI-D-15-0100.1.

  • Groth, A., and M. Ghil, 2017: Synchronization of world economic activity. Chaos, 27, 127002, https://doi.org/10.1063/1.5001820.

  • Harris, T. J., and H. Yuan, 2010: Filtering and frequency interpretations of singular spectrum analysis. Physica D, 239, 1958–1967, https://doi.org/10.1016/j.physd.2010.07.005.

  • Hasselmann, K., 1988: PIPs and POPs: The reduction of complex dynamical systems using principal interaction and oscillation patterns. J. Geophys. Res., 93, 11 015–11 021, https://doi.org/10.1029/JD093iD09p11015.

  • Hersbach, H., 2000: Decomposition of the continuous ranked probability score for ensemble prediction systems. Wea. Forecasting, 15, 559–570, https://doi.org/10.1175/1520-0434(2000)015<0559:DOTCRP>2.0.CO;2.

  • Jolliffe, I. T., 2002: Principal Component Analysis. 2nd ed. Springer-Verlag, 488 pp., https://doi.org/10.1007/b98835.

  • Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 341 pp.

  • Kalnay, E., 2019: Historical perspective: Earlier ensembles and forecasting forecast skill. Quart. J. Roy. Meteor. Soc., 145, 25–34, https://doi.org/10.1002/qj.3595.

  • Kalnay, E., and S.-C. Yang, 2010: Accelerating the spin-up of ensemble Kalman filtering. Quart. J. Roy. Meteor. Soc., 136, 1644–1651, https://doi.org/10.1002/qj.652.

  • Kang, I.-S., and H.-M. Kim, 2010: Assessment of MJO predictability for boreal winter with various statistical and dynamical models. J. Climate, 23, 2368–2378, https://doi.org/10.1175/2010JCLI3288.1.

  • Kennedy, M., 1994: Chaos in the Colpitts oscillator. IEEE Trans. Circuits Syst., 41, 771–774, https://doi.org/10.1109/81.331536.

  • Keppenne, C. L., and M. Ghil, 1993: Adaptive filtering and prediction of noisy multivariate signals: An application to subannual variability in atmospheric angular momentum. Int. J. Bifurcation Chaos, 3, 625–634, https://doi.org/10.1142/S0218127493000520.

  • Kondrashov, D., M. D. Chekroun, A. W. Robertson, and M. Ghil, 2013: Low-order stochastic model and “past-noise forecasting” of the Madden–Julian oscillation. Geophys. Res. Lett., 40, 5305–5310, https://doi.org/10.1002/grl.50991.

  • Kondrashov, D., M. D. Chekroun, and M. Ghil, 2015: Data-driven non-Markovian closure models. Physica D, 297, 33–55, https://doi.org/10.1016/j.physd.2014.12.005.

  • Kondrashov, D., M. D. Chekroun, X. Yuan, and M. Ghil, 2018: Data-adaptive harmonic decomposition and stochastic modeling of Arctic sea ice. Advances in Nonlinear Geosciences, A. A. Tsonis, Ed., Springer, 179–205, https://doi.org/10.1007/978-3-319-58895-7_10.

  • Kooperberg, C., and F. O’Sullivan, 1996: Predictive oscillation patterns: A synthesis of methods for spatial-temporal decomposition of random fields. J. Amer. Stat. Assoc., 91, 1485–1496, https://doi.org/10.1080/01621459.1996.10476716.

  • Kravtsov, S., D. Kondrashov, and M. Ghil, 2010: Empirical model reduction and the modeling hierarchy in climate dynamics and the geosciences. Stochastic Physics and Climate Modelling, T. N. Palmer and P. Williams, Eds., Cambridge University Press, 35–72.

  • Krishnamurthy, V., 2019: Predictability of weather and climate. Earth Space Sci., 6, 1043–1056, https://doi.org/10.1029/2019EA000586.

  • Krishnamurthy, V., and J. Shukla, 2007: Intraseasonal and seasonally persisting patterns of Indian monsoon rainfall. J. Climate, 20, 3–20, https://doi.org/10.1175/JCLI3981.1.

  • Krishnamurthy, V., and A. S. Sharma, 2017: Predictability at intraseasonal time scale. Geophys. Res. Lett., 44, 8530–8537, https://doi.org/10.1002/2017GL074984.

  • Lau, W. K. M., 2012: El Niño Southern Oscillation connection. Intraseasonal Variability in the Atmosphere–Ocean Climate System, 2nd ed. W. K. M. Lau and D. E. Waliser, Eds., Springer-Verlag, 297–334, https://doi.org/10.1007/978-3-642-13914-7.

  • Lien, G.-Y., E. Kalnay, and T. Miyoshi, 2013: Effective assimilation of global precipitation: Simulation experiments. Tellus, 65A, 19915, https://doi.org/10.3402/tellusa.v65i0.19915.

  • Lisi, F., O. Nicolis, and M. Sandri, 1995: Combining singular-spectrum analysis and neural networks for time series forecasting. Neural Process. Lett., 2, 6–10, https://doi.org/10.1007/BF02279931.

  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141, https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

  • Lorenz, E. N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus, 17, 321–333, https://doi.org/10.3402/tellusa.v17i3.9076.

  • Lorenz, E. N., 1969: Atmospheric predictability as revealed by naturally occurring analogues. J. Atmos. Sci., 26, 636–646, https://doi.org/10.1175/1520-0469(1969)26<636:APARBN>2.0.CO;2.

  • Lynch, E. M., 2019: Data driven prediction without a model. Doctoral thesis, University of Maryland, College Park, 107 pp., https://doi.org/10.13016/quty-dayf.

  • Mani, N. J., E. Suhas, and B. N. Goswami, 2009: Can global warming make Indian monsoon weather less predictable? Geophys. Res. Lett., 36, L08811, https://doi.org/10.1029/2009GL037989.

  • Mann, M. E., and J. Park, 1999: Oscillatory spatiotemporal signal detection in climate studies: A multiple-taper spectral domain approach. Advances in Geophysics, Vol. 41, Elsevier, 1–131, https://doi.org/10.1016/S0065-2687(08)60026-6.

  • Mann, M. E., B. A. Steinman, and S. K. Miller, 2020: Absence of internal multidecadal and interdecadal oscillations in climate model simulations. Nat. Commun., 11, 49, https://doi.org/10.1038/s41467-019-13823-w.

  • Mo, K. C., 2001: Adaptive filtering and prediction of intraseasonal oscillations. Mon. Wea. Rev., 129, 802–817, https://doi.org/10.1175/1520-0493(2001)129<0802:AFAPOI>2.0.CO;2.

  • Moron, V., A. W. Robertson, and M. Ghil, 2012: Impact of the modulated annual cycle and intraseasonal oscillation on daily-to-interannual rainfall variability across monsoonal India. Climate Dyn., 38, 2409–2435, https://doi.org/10.1007/s00382-011-1253-4.

  • Ogrosky, H. R., S. N. Stechmann, N. Chen, and A. J. Majda, 2019: Singular spectrum analysis with conditional predictions for real-time state estimation and forecasting. Geophys. Res. Lett., 46, 1851–1860, https://doi.org/10.1029/2018GL081100.

  • Ott, E., 2002: Chaos in Dynamical Systems. 2nd ed. Cambridge University Press, 487 pp.

  • Palmer, T., 2020: A vision for numerical weather prediction in 2030. 11 pp., https://arxiv.org/abs/2007.04830.

  • Penny, S. G., 2017: Mathematical foundations of hybrid data assimilation from a synchronization perspective. Chaos, 27, 126801, https://doi.org/10.1063/1.5001819.

  • Penny, S. G., E. Bach, K. Bhargava, C.-C. Chang, C. Da, L. Sun, and T. Yoshida, 2019: Strongly coupled data assimilation in multiscale media: Experiments using a quasi-geostrophic coupled model. J. Adv. Model. Earth Syst., 11, 1803–1829, https://doi.org/10.1029/2019MS001652.

  • Pentakota, S., and Coauthors, 2020: Advances in coupled data assimilation, ensemble forecasting, and assimilation of altimeter observations. CLIVAR Exchanges, No. 79, International CLIVAR Project Office, Southampton, United Kingdom, 27–30.

  • Plaut, G., and R. Vautard, 1994: Spells of low-frequency oscillations and weather regimes in the Northern Hemisphere. J. Atmos. Sci., 51, 210–236, https://doi.org/10.1175/1520-0469(1994)051<0210:SOLFOA>2.0.CO;2.

  • Preisendorfer, R. W., and C. D. Mobley, 1988: Principal Component Analysis in Meteorology and Oceanography. Elsevier, 425 pp.

  • Rasmusson, E. M., P. A. Arkin, W.-Y. Chen, and J. B. Jalickee, 1981: Biennial variations in surface temperature over the United States as revealed by singular decomposition. Mon. Wea. Rev., 109, 587–598, https://doi.org/10.1175/1520-0493(1981)109<0587:BVISTO>2.0.CO;2.

  • Rasp, S., and N. Thuerey, 2021: Data-driven medium-range weather prediction with a Resnet pretrained on climate simulations: A new model for WeatherBench. J. Adv. Model. Earth Syst., 13, e2020MS002405, https://doi.org/10.1029/2020MS002405.

  • Rey, D., M. Eldridge, U. Morone, H. D. I. Abarbanel, U. Parlitz, and J. Schumann-Bischoff, 2014: Using waveform information in nonlinear data assimilation. Phys. Rev. E, 90, 062916, https://doi.org/10.1103/PhysRevE.90.062916.

  • Sharma, A. S., D. Vassiliadis, and K. Papadopoulos, 1993: Reconstruction of low-dimensional magnetospheric dynamics by singular spectrum analysis. Geophys. Res. Lett., 20, 335–338, https://doi.org/10.1029/93GL00242.

  • Stan, C., and V. Krishnamurthy, 2019: Intra-seasonal and seasonal variability of the Northern Hemisphere extra-tropics. Climate Dyn., 53, 4821–4839, https://doi.org/10.1007/s00382-019-04827-9.

  • Strong, C., F. Jin, and M. Ghil, 1995: Intraseasonal oscillations in a barotropic model with annual cycle, and their predictability. J. Atmos. Sci., 52, 2627–2642, https://doi.org/10.1175/1520-0469(1995)052<2627:IOIABM>2.0.CO;2.

  • Ukhorskiy, A. Y., M. I. Sitnov, A. S. Sharma, and K. Papadopoulos, 2004: Global and multi-scale features of solar wind–magnetosphere coupling: From modeling to forecasting. Geophys. Res. Lett., 31, L08802, https://doi.org/10.1029/2003GL018932.

  • Van den Dool, H. M., 1994: Searching for analogues, how long must we wait? Tellus, 46A, 314–324, https://doi.org/10.3402/tellusa.v46i3.15481.

  • Vannitsem, S., D. S. Wilks, and J. W. Messner, Eds., 2018: Statistical Postprocessing of Ensemble Forecasts. Elsevier, 362 pp., https://doi.org/10.1016/C2016-0-03244-8.

  • Vautard, R., and M. Ghil, 1989: Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series. Physica D, 35, 395–424, https://doi.org/10.1016/0167-2789(89)90077-8.

  • Vautard, R., P. Yiou, and M. Ghil, 1992: Singular-spectrum analysis: A toolkit for short, noisy chaotic signals. Physica D, 58, 95–126, https://doi.org/10.1016/0167-2789(92)90103-T.

  • Vautard, R., C. Pires, and G. Plaut, 1996: Long-range atmospheric predictability using space–time principal components. Mon. Wea. Rev., 124, 288–307, https://doi.org/10.1175/1520-0493(1996)124<0288:LRAPUS>2.0.CO;2.

  • von Storch, H., and F. W. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp., https://doi.org/10.1017/CBO9780511612336.

  • Wang, X. L., J. Corte-Real, and X. Zhang, 1996: Intraseasonal oscillations and associated spatial-temporal structures of precipitation over China. J. Geophys. Res., 101, 19 035–19 042, https://doi.org/10.1029/96JD01225.

  • Watari, S., 1996: Separation of periodic, chaotic, and random components in solar activity. Sol. Phys., 168, 413–422, https://doi.org/10.1007/BF00148065.

  • Weare, B. C., and J. S. Nasstrom, 1982: Examples of extended empirical orthogonal function analyses. Mon. Wea. Rev., 110, 481–485, https://doi.org/10.1175/1520-0493(1982)110<0481:EOEEOF>2.0.CO;2.

  • Wilks, D. S., 2019: Statistical Methods in the Atmospheric Sciences. 4th ed. Elsevier, 840 pp., https://doi.org/10.1016/C2017-0-03921-6.

  • Xavier, P. K., and B. N. Goswami, 2007: An analog method for real-time forecasting of summer monsoon subseasonal variability. Mon. Wea. Rev., 135, 4149–4160, https://doi.org/10.1175/2007MWR1854.1.

1

When DM > N − M + 1, using the PCA “transpose trick” will be more computationally efficient; see Ghil et al. (2002, section A2).
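The transpose trick above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's implementation: the variable names and matrix sizes are hypothetical, and a plain random data matrix stands in for the lagged trajectory matrix.

```python
import numpy as np

# Data matrix: N samples of dimension D, with D >> N -- the regime in
# which the transpose trick pays off.
rng = np.random.default_rng(0)
N, D = 50, 2000
X = rng.standard_normal((N, D))
X -= X.mean(axis=0)  # center each column

# Direct PCA would eigendecompose the D x D covariance X.T @ X / (N - 1).
# The transpose trick instead eigendecomposes the much smaller N x N
# Gram matrix X @ X.T / (N - 1), which shares the nonzero eigenvalues.
G = X @ X.T / (N - 1)
eigvals, U = np.linalg.eigh(G)           # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, U = eigvals[order], U[:, order]

# Map the small eigenvectors back to D-dimensional EOFs:
# if G u = lam u, then C (X.T u) = lam (X.T u) for C = X.T X / (N - 1),
# and ||X.T u||^2 = (N - 1) lam, which gives the normalization below.
V = X.T @ U / np.sqrt(np.maximum(eigvals * (N - 1), 1e-12))

# Sanity check on the leading mode against the direct eigenproblem.
C = X.T @ X / (N - 1)
lead = V[:, 0]
assert np.allclose(C @ lead, eigvals[0] * lead, atol=1e-6)
```

The small eigenproblem costs O(N³) rather than O(D³), which is the source of the saving whenever the lagged state dimension exceeds the number of available samples.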

2

Note that this formula is not a computationally efficient way to implement reconstruction, as work can be saved by precomputing the PCs.
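The saving mentioned above can be seen in a minimal single-channel SSA sketch, where the PCs are projected once and then reused in the diagonal averaging. This is a simplified illustration, not the paper's code; the function name, window length, and toy series are hypothetical.

```python
import numpy as np

def ssa_reconstruct(x, M, modes):
    """Reconstruct the selected SSA modes of series x with window M,
    precomputing the principal components (PCs) once so that the
    diagonal-averaging loop reuses them instead of reprojecting."""
    N = len(x)
    K = N - M + 1
    # Trajectory matrix: K lagged windows of length M.
    T = np.array([x[i:i + M] for i in range(K)])
    # Lag-covariance eigenproblem gives the EOFs (columns of E).
    eigvals, E = np.linalg.eigh(T.T @ T / K)
    E = E[:, np.argsort(eigvals)[::-1]]
    # Precompute the PCs for the requested modes once: A = T E.
    A = T @ E[:, modes]
    # Rank-reduced trajectory matrix, then diagonal averaging:
    # each x(t) is covered by up to M overlapping windows.
    approx = A @ E[:, modes].T
    rc = np.zeros(N)
    counts = np.zeros(N)
    for i in range(K):
        rc[i:i + M] += approx[i]
        counts[i:i + M] += 1
    return rc / counts

# Example: a noisy sine is largely captured by the leading EOF pair.
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(400)
rc = ssa_reconstruct(x, M=40, modes=[0, 1])
```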

3

A linear transformation is one-to-one if and only if it is of full rank.

4

Objective analysis, or just analysis for short, refers to model estimates of the system that are corrected by observations using data assimilation (Bengtsson et al. 1981; Kalnay 2002).

  • Abarbanel, H. D. I., R. Brown, J. J. Sidorowich, and L. S. Tsimring, 1993: The analysis of observed chaotic data in physical systems. Rev. Mod. Phys., 65, 1331–1392, https://doi.org/10.1103/RevModPhys.65.1331.

  • Alessio, S. M., 2016: Singular spectrum analysis (SSA). Digital Signal Processing and Spectral Analysis for Scientists: Concepts and Applications, Springer, 537–571, https://doi.org/10.1007/978-3-319-25468-5.

  • Allen, M. R., and L. A. Smith, 1996: Monte Carlo SSA: Detecting irregular oscillations in the presence of colored noise. J. Climate, 9, 3373–3404,