## 1. Introduction

An important class of linear estimation problems is the extrapolation (prediction) of a field outside the data domain. The need for reliable predictions arises in many areas of climate studies, El Niño being a prototypical example. The importance of reliable prediction cannot be overemphasized, because significant changes in climatic variables typically have large environmental and economic impacts. Considered here is a general three-dimensional statistical predictor.

Prediction schemes may be classified into one of three categories—statistical, dynamical, and statistical–dynamical. In El Niño predictions, examples of statistical schemes include Barnett et al. (1988) and Xu and von Storch (1990). Examples of dynamical schemes include Cane et al. (1986) and Latif and Flügel (1991). Inoue and O’Brien (1984) used a combination of the two, called the statistical–dynamical scheme. This study presents a statistical prediction algorithm based on space–time empirical orthogonal functions (EOFs) of the predictand field.

The present method, in essence, is similar to purely statistical predictors such as autoregressive (AR) predictors discussed in many time series textbooks (e.g., Newton 1988). In the latter, a covariance matrix is used to construct the best unbiased predictor for a given time series. The so-called prediction normal equation is solved to make predictions. The predictor is “best” in the sense of minimum error variance. Also, the predictor field is always a subset of the predictand field. In this study, a similar predictor is developed in terms of EOFs.

EOFs naturally come into play because the covariance function (or matrix) is decomposed into a series of EOFs. One important motivation for this EOF representation is that it facilitates a space–time generalization of the method: spatial information is decomposed into “independent” modes that can be predicted separately. Further, the algorithm can easily be carried over to other basis functions, including complex EOFs or cyclostationary EOFs (Kim et al. 1996; Kim and North 1997). A particular set of basis functions may represent true physical modes better, and using it can improve the performance of the predictor. This is an important aspect of the present prediction algorithm. Finally, some EOF modes may be more beneficial for prediction than others. The EOF representation allows the selection of particular (physical) modes for prediction, thereby improving the predictability.

Section 2 of this article describes the linear prediction algorithm and the prediction error variance. The algorithm is constructed so that the prediction error variance is minimized. Section 3 shows how to compute finite-domain EOFs, which are an essential ingredient in the formulation of the prediction algorithm; it also discusses the actual implementation of the predictor from the continuum representation of the previous section. In section 4, the performance and properties of the predictor are illustrated in terms of artificial Markov processes. Actual applications of the predictor require extensive cross validations and are postponed to a later study. Sensitivity tests follow in section 5, where the sensitivity of the predictor is examined in terms of its construction parameters and the accuracy of the covariance statistics. Concluding remarks follow in section 6.

## 2. Method

### a. Formulation of optimal filter

For simplicity, the derivation here assumes continuum forms. In the next section, the actual implementation of the predictor using variables discrete in space and time will be discussed.

Let *T*(*x*), *x* = (**r̂**, *t*), be the field of interest, where **r̂** denotes position and *t* time. Throughout we will deal with the anomaly field such that 〈*T*(*x*)〉 = 0, where 〈 · 〉 denotes the ensemble average. We wish to predict *T*(*x*) for a particular realization at a point *x* in the whole space–time domain *R*, given data *T*(*x*′) taken from that same realization in a subset *D* ⊂ *R* (the data domain). Let *T̂*(*x*) be the prediction value at point *x.* Then, our predictor can be defined as

$$\hat{T}(x) = \int_D \Gamma(x, x')\, T(x')\, dx', \quad (1)$$

where Γ(*x, x*′) is a filter which we tailor to our needs. The filter provides a weighting over the data *T*(*x*′), *x*′ ∈ *D*, that leads to the prediction.

The prediction error for a particular realization is

$$\epsilon(x) = T(x) - \hat{T}(x), \quad (4)$$

with ensemble mean squared error

$$\langle \epsilon^2(x) \rangle = \langle [T(x) - \hat{T}(x)]^2 \rangle.$$

We minimize 〈*ϵ*^{2}(*x*)〉 by adjusting the shape of the filter Γ(*x, x*′). To this end, we take the variation of 〈*ϵ*^{2}〉 with respect to the filter function and set it to zero. Then, we find

$$\int_D \delta\Gamma(x, x') \left[ C(x, x') - \int_D \Gamma(x, x'')\, C(x'', x')\, dx'' \right] dx' = 0,$$

where *C*(*x, x*′) = 〈*T*(*x*)*T*(*x*′)〉 is the covariance function. Since this must hold for an arbitrary variation *δ*Γ, we should have

$$C(x, x') = \int_D \Gamma(x, x'')\, C(x'', x')\, dx'', \quad x' \in D. \quad (8)$$

The expression (8) constitutes an integral equation for Γ(*x, x*′), which is the desired filter. To solve the integral equation, we introduce a set of basis functions Ψ_{n}(*x*) defined on the whole domain *R*:

$$\int_R C(x, x')\, \Psi_n(x')\, dx' = R_n \Psi_n(x),$$

where the *R*_{n} are eigenvalues. They are orthonormal since *C*(*x, x*′) is symmetric:

$$\int_R \Psi_n(x)\, \Psi_m(x)\, dx = \delta_{nm}.$$

Solving this eigenvalue problem for Ψ_{n}(*x*) yields the EOFs defined on the space–time domain *R*. In terms of these, *T*(*x*) can be expressed as

$$T(x) = \sum_n T^R_n \Psi_n(x), \qquad T^R_n = \int_R T(x)\, \Psi_n(x)\, dx, \quad (12)$$

where the superscript signifies that the expansion is with respect to the basis defined on *R*. The symmetry of the covariance function also provides the expansion

$$C(x, x') = \sum_n R_n \Psi_n(x)\, \Psi_n(x'). \quad (15)$$

Similarly, a second set of basis functions {Φ_{n}(*x*)} is defined on the subregion *D*:

$$\int_D C(x, x')\, \Phi_n(x')\, dx' = D_n \Phi_n(x),$$

where the *D*_{n} are eigenvalues. The basis functions are again orthonormal:

$$\int_D \Phi_n(x)\, \Phi_m(x)\, dx = \delta_{nm}.$$

The filter kernel is now expanded as

$$\Gamma(x, x') = \sum_n \Gamma^D_n(x)\, \Phi_n(x'), \quad (16)$$

where the Γ^{D}_{n}(*x*) are the expansion coefficients of the filter kernel with respect to the Φ_{n}(*x*′). Substituting (16) into (8) gives

$$C(x, x') = \sum_n \Gamma^D_n(x)\, D_n\, \Phi_n(x'), \quad x' \in D. \quad (27)$$

Next, we project out the *n*th component of (27) [multiply through by Φ_{n}(*x*′) and integrate over *D*]:

$$\Gamma^D_n(x) = \frac{1}{D_n} \int_D C(x, x')\, \Phi_n(x')\, dx'.$$

Reassembling the kernel [inserting Γ^{D}_{n}(*x*), multiplying by Φ_{n}(*x*′), and summing] requires the expansion of *C*(*x, x*′) in terms of the Ψ_{n}(*x*). Using (15),

$$\Gamma_{\rm opt}(x, x') = \sum_n \sum_m \frac{R_m}{D_n}\, \Psi_m(x) \left[ \int_D \Psi_m(x'')\, \Phi_n(x'')\, dx'' \right] \Phi_n(x'). \quad (30)$$

Note that Ψ_{m}(*x*) is defined for values of *x* outside of *D*; this allows Γ_{opt}(*x, x*′) to be defined for *x* throughout *R*. The filter is completely determined once the basis functions Ψ_{m}(*x*), Φ_{n}(*x*) and the eigenvalues *R*_{m}, *D*_{n}, *n, m* = 0, 1, 2, . . . are found in advance. The final formula for the predictor is

$$\hat{T}(x) = \sum_n \sum_m \frac{R_m}{D_n}\, \Psi_m(x) \left[ \int_D \Psi_m(x'')\, \Phi_n(x'')\, dx'' \right] \int_D \Phi_n(x')\, T(x')\, dx'. \quad (31)$$
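The discrete analog of the optimal filter (30) is easy to verify numerically. The following is a minimal sketch, not from the paper: the exponential covariance kernel, the domain sizes, and all variable names are illustrative assumptions. It builds the filter from the two sets of EOFs and checks that, with every mode retained, the filter coincides with the classical best linear unbiased predictor C_RD C_DD⁻¹ and reduces to the identity on the data domain.

```python
import numpy as np

# Illustrative discrete analog of (30); the exponential kernel and all
# names here are assumptions, not from the paper.
N, M, tau0 = 30, 20, 10.0           # whole domain R: N points; data domain D: first M
t = np.arange(N)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau0)   # stationary covariance matrix
D = np.arange(M)                    # indices of the data domain
C_DD = C[np.ix_(D, D)]

R_vals, Psi = np.linalg.eigh(C)     # EOFs of R:  C Psi_m = R_m Psi_m
D_vals, Phi = np.linalg.eigh(C_DD)  # EOFs of D:  C_DD Phi_n = D_n Phi_n

a = Psi[D, :].T @ Phi               # overlap integrals a[m, n] = <Psi_m, Phi_n>_D

# Gamma(x, x') = sum_{n,m} (R_m / D_n) Psi_m(x) a[m, n] Phi_n(x'), all modes kept
Gamma = Psi @ ((R_vals[:, None] / D_vals[None, :]) * a) @ Phi.T

# With all modes retained, the filter equals the classical BLUP C_RD C_DD^{-1},
# and its restriction to D is the identity (predicted = observed inside D).
blup = C[:, D] @ np.linalg.inv(C_DD)
```

Truncating the sums over *m* and *n* to the leading modes is then a matter of slicing `Psi`, `Phi`, and the eigenvalue arrays before forming `Gamma`.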

### b. Prediction error

The prediction error *ϵ*(*x*) for a particular realization from (1) and (4) is

$$\epsilon(x) = T(x) - \int_D \Gamma_{\rm opt}(x, x')\, T(x')\, dx'.$$

If *x* happens to lie inside *D*, the Ψ_{m}(*x*) may be expanded in the complete set {Φ_{n}(*x*)}, *x* ∈ *D*, and the kernel (30) reduces for *x*, *x*′ ∈ *D* to Γ(*x, x*′) = *δ*(*x* − *x*′); the predicted value then coincides with the observed value.

Using the expansion coefficients *T*^{R}_{m} of (12), the error *ϵ*(*x*) can now be conveniently written as

$$\epsilon(x) = \sum_m T^R_m \left[ \Psi_m(x) - \sum_n \frac{1}{D_n} \left( \int_D \Psi_m(x'')\, \Phi_n(x'')\, dx'' \right) \int_D C(x, x')\, \Phi_n(x')\, dx' \right]. \quad (35)$$

For *x* ∈ *D*, the last integral reduces to *D*_{n}Φ_{n}(*x*) and the term in the square brackets vanishes for each *m* in the sum. Hence, *ϵ*(*x*) as well as 〈*ϵ*^{2}(*x*)〉 vanish at any truncation level so long as *x* ∈ *D*. Since the expansion coefficients are uncorrelated, 〈*T*^{R}_{m}*T*^{R}_{k}〉 = *R*_{m}*δ*_{mk}, the prediction error variance is

$$\langle \epsilon^2(x) \rangle = \sum_m R_m \left[ \Psi_m(x) - \sum_n \frac{1}{D_n} \left( \int_D \Psi_m(x'')\, \Phi_n(x'')\, dx'' \right) \int_D C(x, x')\, \Phi_n(x')\, dx' \right]^2. \quad (38)$$
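The two properties of the error variance — vanishing inside the data domain and growth toward the total variance outside it — can be checked numerically. This is a sketch under illustrative assumptions (exponential covariance kernel, full-mode filter, assumed names); it uses the equivalent BLUP form of the full-mode filter rather than the EOF sums.

```python
import numpy as np

# Prediction error variance of the full-mode filter; the exponential
# covariance kernel and all names are illustrative assumptions.
N, M, tau0 = 140, 100, 10.0
t = np.arange(N)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau0)   # unit total variance
D = np.arange(M)
C_DD = C[np.ix_(D, D)]

Gamma = C[:, D] @ np.linalg.inv(C_DD)                 # full-mode optimal filter
# <eps^2(x)> = C(x, x) - sum_j Gamma(x, j) C(j, x), the error variance of
# the optimal linear filter
err2 = np.diag(C) - np.einsum('ij,ij->i', Gamma, C[:, D])
```

Inside the data domain `err2` is zero to machine precision; beyond it, the error variance rises monotonically toward the total variance, the "plateau of no predictability" discussed in section 4.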

## 3. Finite-domain EOFs

In the actual implementation of (30), (31), and (38), the integrals should be interpreted as finite sums over the discrete space–time grid.

It is also stressed that the covariance function depends only on the lag under the stationarity assumption; this implies that *C*(*x, x*′) = *C*(*x* − *x*′).

The finite-domain EOFs are the eigenfunctions Ψ_{n}(*x*) such that

$$\int_R C(x - x')\, \Psi_n(x')\, dx' = R_n \Psi_n(x), \quad (39)$$

where *C*(*x, x*′) is the covariance kernel for a stationary process and *R*_{n} is the eigenvalue corresponding to Ψ_{n}(*x*). For stationary processes, we can conveniently factor the space and time dependence of the eigenfunctions:

$$\Psi_n(x) = \Phi_n(\hat{\mathbf{r}})\, \psi_n(t). \quad (40)$$

Consider first the purely temporal problem, *x* = *t.* Equation (39) is reduced to

$$\int_0^L C(t - t')\, \psi_n(t')\, dt' = R_n \psi_n(t),$$

where we may replace *C*(*τ*), *τ* = *t* − *t*′, by its Fourier integral in terms of the spectral density, *S*(*f*):

$$C(\tau) = \int_{-\infty}^{\infty} S(f)\, e^{2\pi i f \tau}\, df.$$

The spectral density is symmetric about *f* = 0, since *C*(*τ*) is a real function. For a discrete time series, the eigenfunctions *ψ*_{n}(*t*) can be approximated on the interval (0, *L*) as solutions of a matrix eigenvalue equation of size *L* with a known discrete covariance kernel. The eigenvectors converge to exact eigenfunctions (sine and cosine functions) as *L* → ∞. The spectrum of eigenvalues *R*_{n} also approaches *S*(*f* = *n*/*L*) for an infinitely long time series.

For a three-dimensional discrete dataset, spatial EOFs may be computed as eigenvectors of the spatial covariance matrix {**C**_{jk}}, where *j* and *k* are station indices. Temporal EOFs are then computed as eigenvectors of each principal component (PC) time series by invoking the factorization (40). Note that the continuum equations such as (30), (31), and (38) can readily be generalized for discrete variables by replacing integrals with summations, with the understanding that *T,* Φ_{n}, and Ψ_{n} are vectors. For instance, (12) represents a dot product of two vectors for discrete variables.
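The convergence of the eigenvalue spectrum toward the spectral density can be illustrated with a small numerical sketch. The exponential kernel, the record length, and all names below are assumptions for illustration; `S0` is the discrete-time spectral density of that kernel at zero frequency.

```python
import numpy as np

# Eigenvalues of the L x L Toeplitz covariance matrix of a stationary series
# approach its spectral density; the exponential kernel is an assumed example.
L, tau0 = 500, 10.0
rho = np.exp(-1.0 / tau0)
k = np.arange(L)
C = rho ** np.abs(k[:, None] - k[None, :])    # C(tau) = rho^|tau|
evals = np.linalg.eigvalsh(C)[::-1]           # eigenvalue spectrum, descending

# Discrete-time spectral density of this kernel at f = 0:
# S(0) = sum over all tau of rho^|tau| = (1 + rho) / (1 - rho)
S0 = (1 + rho) / (1 - rho)
```

The leading eigenvalue agrees with `S0` to within a fraction of a percent already at this record length, and the sum of the eigenvalues equals the trace of the matrix (the total variance, `L` here).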

## 4. One-dimensional prediction examples

### a. A first-order Markov process

The first example is a first-order Markov process,

$$\frac{dT(t)}{dt} = -\frac{T(t)}{\tau_0} + F(t),$$

where *τ*_{0} is the damping timescale and *F*(*t*) is assumed to be a white noise process satisfying 〈*F*(*t*)*F*(*t*′)〉 = *σ*^{2}_{F}*δ*(*t* − *t*′). The initial condition requires that *T*(*t*) be zero as *t* → −∞.

As should be expected, the sum of the retained terms *T*^{R}_{m}Ψ_{m}(*x*), that is, (35) without the last multiplicative term, represents the series closely for *x* ∈ *D* (Figs. 1 and 2).

Now, Fig. 3 shows the prediction error squared, (38), as a function of the number of modes retained. Because of discretization error, the variance is not exactly zero in the data domain. Outside the data domain, the error variance approaches the total variance as 1 − (*e*^{−1/τ_0})^{2h}, *h* being the prediction horizon (Newton 1988), with the diminution of ripples as more modes are retained. By retaining more modes, however, the prediction error may not be reduced beyond a certain limit, because this limit is a property of the particular process involved, not of the number of modes retained. The prediction error quickly reaches the plateau, implying that there is no predictability beyond that point (100% prediction error variance = total variance).

The optimal prediction kernel of the first-order Markov process as derived in (30) is shown in Fig. 4. The crest of sharp peaks approaches the Dirac delta function *δ*(*t* − *t*′) (actually a Dirac delta sequence). This is a consequence of the fact that for a minimum mse predictor the predicted value should be the same as the observed value in the data domain, *T̂*(*x*) = *T*(*x*), *x* ∈ *D*, which requires Γ_{opt}(*x, x*′) = *δ*(*x* − *x*′) in (1). The largest peak behind the delta function in Fig. 4 decays exponentially both in the positive *t* and the negative *t*′ directions. This implies that predictability comes from only a small number of observations close to the boundary of the data domain.
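This structure of the kernel can be demonstrated directly. For an exponential (first-order Markov) covariance, the full-mode optimal predictor reduces exactly to decaying the last observed value, which is the classical AR-1 forecast; the setup below is a sketch, with the kernel and all names assumed for illustration.

```python
import numpy as np

# For the exponential (first-order Markov) kernel, the full-mode optimal
# predictor reduces to decaying the last observed value:
# T_hat(t_M + h) = rho^h * T(t_M), the classical AR-1 forecast.
# Kernel choice and names are illustrative assumptions.
N, M, tau0 = 120, 100, 10.0
rho = np.exp(-1.0 / tau0)
t = np.arange(N)
C = rho ** np.abs(t[:, None] - t[None, :])
D = np.arange(M)

rng = np.random.default_rng(0)
T_D = rng.standard_normal(M)                  # any realization on the data domain
pred = C[M:, :M] @ np.linalg.solve(C[np.ix_(D, D)], T_D)

h = np.arange(1, N - M + 1)                   # prediction horizons 1..20
ar1 = T_D[-1] * rho ** h                      # AR-1 forecast from the last datum
```

Only the last observation enters the forecast, mirroring the exponential decay of the kernel away from the delta-function ridge in Fig. 4.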

### b. A second-order Markov process

The second example is a second-order Markov process,

$$\frac{d^2 T}{dt^2} + \frac{2}{\tau_0}\frac{dT}{dt} + \omega_0^2\, T = F(t),$$

where *F*(*t*) is a white noise with constant spectral density *σ*^{2}_{F}. The corresponding spectral density of *T*(*t*) is

$$S(f) = \frac{\sigma_F^2}{(\omega_0^2 - \omega^2)^2 + 4\omega^2/\tau_0^2}, \qquad \omega = 2\pi f.$$

Figure 5 shows the prediction error squared, (38), as a function of the number of modes retained. As the number of EOF modes is increased, prediction error is better resolved. As in the case of the first-order Markov process, the prediction error quickly reaches the plateau of no predictability (100% prediction error).

The optimal prediction kernel of the second-order Markov process is shown in Fig. 6. The crest of sharp peaks represents the Dirac delta function *δ*(*t* − *t*′). As for the first-order process, the kernel decays in the positive *t* and the negative *t*′ directions, but some undulation is obvious, reflecting the oscillatory nature of the second-order Markov process. Information useful for prediction is limited to the region close to the boundary of the data domain.

### c. Deterministic plus AR-1 process

The final example is the sum of a deterministic cosine function with a period of 100 units and an AR-1 process,

$$T(t) = \alpha\, T(t-1) + \epsilon(t),$$

where *α* = 0.78 and *ϵ*(*t*) is a white noise process with variance 0.11. In statistical predictions it is often advised that known deterministic signals be removed from data prior to prediction. This is because some statistical predictors such as AR cannot handle deterministic components correctly. The process of removing deterministic components, however, is not necessarily easy in practice. Since deterministic processes have unrestricted predictability, it is important for a predictor to handle them accurately. As shown in Fig. 7, the developed predictor does a good job in predicting the deterministic component outside the data domain.

Although the deterministic part was predicted accurately, the stochastic part was predicted reasonably for only a few points. The latter part, of course, sets an inherent prediction limit. The prediction error variance is almost comparable to that of an AR-1 prediction for the stochastic part only (see Fig. 8). This is reasonable because both are best unbiased linear predictors. The error variance of an AR-1 predictor is described in Newton (1988). Similarly, for processes other than AR processes one can find best linear unbiased predictors that would perform comparably to the present predictor. Of course, an important motivation for the present study is to develop a predictor that is convenient and easy to understand and that can easily be generalized into three dimensions.
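The comparability with the AR-1 predictor can be made quantitative. For an exponential covariance the error variance of the full-mode optimal filter coincides exactly with the AR-1 prediction error variance *σ*²(1 − *ρ*^{2h}) (cf. Newton 1988); the sketch below verifies this under assumed parameter values and names.

```python
import numpy as np

# The error variance of the full-mode predictor coincides with the AR-1
# prediction error sigma^2 (1 - rho^(2h)) for an exponential kernel.
# Kernel choice and all names are illustrative assumptions.
N, M, tau0, sigma2 = 130, 100, 10.0, 1.0
rho = np.exp(-1.0 / tau0)
t = np.arange(N)
C = sigma2 * rho ** np.abs(t[:, None] - t[None, :])
C_DD = C[:M, :M]

Gamma = C[M:, :M] @ np.linalg.inv(C_DD)       # filter rows outside the data domain
err2 = np.diag(C)[M:] - np.einsum('ij,ij->i', Gamma, C[M:, :M])

h = np.arange(1, N - M + 1)
ar1_err2 = sigma2 * (1.0 - rho ** (2 * h))    # AR-1 error variance (cf. Newton 1988)
```

This agreement is expected: both are best linear unbiased predictors of the same Markov process, so their ensemble error variances must match.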

## 5. Sensitivity tests

There are four parameters regarding the construction of a prediction filter: the sizes of the data domain and the prediction domain, and the numbers of retained EOF modes *n* and *m* in the two domains [see (30)]. Another important practical consideration is the accuracy of the covariance matrix, which is essentially governed by the total record length. A general and exhaustive sensitivity test employing many different types of datasets is difficult to conduct and is not included here. Instead, the sensitivity tests here are limited to the AR-1 process that we discussed earlier in (55).

### a. Sensitivity to retained mode numbers

To simplify the problem let us first set *n* = *m*; in other words, we use an equal number of EOFs in the data and prediction domains in constructing the prediction filter (30). Note that the variance explained by an equal number of EOFs is approximately the same in the data and prediction domains, respectively. Figure 8 shows the prediction error variance versus the number of retained EOF modes. The variance was normalized with respect to the total variance of the retained modes in the prediction domain. Since the prediction domain is larger than the data domain, the percent variance explained by the retained EOF modes in the prediction domain is slightly less than that in the data domain for *n* = *m.* This, in turn, causes a slight underestimation of the prediction variance. In the case of *n* = *m* = 10, the prediction error variance is incorrect because the covariability of the data is insufficiently resolved by 10 modes, which explain about 60% of the total variability. This test indicates that 30 EOF modes (80% of the total variance) should suffice to design a reasonably accurate filter. This accuracy implies near-optimal performance of the filter. The insensitivity of the performance of the filter to the number of retained modes also implies that the simplification *n* = *m* is acceptable for all practical purposes.

### b. Sensitivity to domain sizes

Figure 9 shows the error variance versus the size of the data domain; the performance of the predictor is insensitive to this parameter.

The performance of the predictor is also insensitive to the relative size of the prediction domain.

### c. Sensitivity to the total length of data

The most important ingredient in constructing the prediction filter is the set of EOFs derived from a sample covariance matrix. The accuracy of the covariance matrix determines the accuracy of the EOFs, which in turn determines the accuracy of the prediction filter. The length of available data is an important factor governing the accuracy of a covariance matrix: the shorter the record, the larger the sampling error in the computed covariance matrix.

Figure 12 shows the prediction error variance versus the record length. The loss of precision of the prediction filter is obvious with the decreased record length. The degradation of the performance is more severe for larger domains for the same record length as should be expected. The limited test indicates that the record length should be at least 10 times the size of the prediction domain for a reasonably accurate prediction filter. An inaccurate prediction filter may imply that prediction error variance will not be minimal in the ensemble sense. Thus, the predictor will be suboptimal.
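The suboptimality of a filter built from sampled statistics can be made explicit: the ensemble error variance of *any* filter is bounded below by that of the exact-covariance filter, and the gap reflects the sampling error. The sketch below simulates AR-1 records under assumed parameters (all names are illustrative).

```python
import numpy as np

# A filter built from a sample covariance matrix is suboptimal: its true
# (ensemble) error variance can only exceed that of the exact-covariance
# filter. AR-1 synthetic records and all names are illustrative assumptions.
M, H, tau0 = 50, 10, 10.0           # data-domain size M, prediction horizon H
rho = np.exp(-1.0 / tau0)
t = np.arange(M + H)
C = rho ** np.abs(t[:, None] - t[None, :])    # exact covariance

def true_err2(G):
    """Ensemble-mean squared error of an arbitrary H x M filter G."""
    return (np.diag(C)[M:] - 2.0 * np.einsum('ij,ij->i', G, C[M:, :M])
            + np.einsum('ij,jk,ik->i', G, C[:M, :M], G))

def sample_filter(n_records, rng):
    """Filter from a covariance matrix estimated from n_records AR-1 records."""
    X = np.empty((n_records, M + H))
    X[:, 0] = rng.standard_normal(n_records)
    innov = rng.standard_normal((n_records, M + H - 1)) * np.sqrt(1.0 - rho**2)
    for k in range(1, M + H):
        X[:, k] = rho * X[:, k - 1] + innov[:, k - 1]
    C_hat = X.T @ X / n_records
    return C_hat[M:, :M] @ np.linalg.inv(C_hat[:M, :M])

rng = np.random.default_rng(1)
err_opt = true_err2(C[M:, :M] @ np.linalg.inv(C[:M, :M]))
err_short = true_err2(sample_filter(200, rng))    # filter from a short record
err_long = true_err2(sample_filter(5000, rng))    # filter from a long record
```

Both sampled filters have true error variances at or above the optimum, and the excess typically shrinks as the record lengthens, consistent with the record-length requirement noted above.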

## 6. Summary and concluding remarks

Developed in this study is a general statistical predictor based on EOFs. It is the best linear unbiased predictor and minimizes the prediction error variance. Conceptually, the predictor is an extrapolator based on past covariance structure; hence, the accuracy of this covariance structure is an important limiting factor for the performance of the predictor. The predictor is obtained in the form of a filter kernel, which is applied to a space–time stream of data to produce a prediction. The developed predictor was applied to one-dimensional examples to test its validity and performance.

Like any other statistical predictor, the developed predictor cannot perform better than the prediction limit inherent in the data. If the covariance statistics are accurate, the performance of the predictor should approach this limit, because its formulation is based on minimizing the prediction error. All the examples demonstrate that the developed predictor performs as should be expected.

Four parameters concern the construction of a predictor: the sizes of the data domain and the prediction domain, and the numbers of retained EOF modes in the two domains. The sensitivity tests in section 5 indicate that the performance of the predictor is fairly insensitive to reasonable choices of these parameters, whereas the accuracy of the covariance statistics, governed by the record length, remains crucial.

There is also a further improvement to be made. In some prediction studies, it may be beneficial to include phase information. For one thing, the statistics of the data may not be stationary. The statistics of the surface temperature field, for example, exhibit significant seasonal dependency (Kim et al. 1996; Kim and North 1997). Also, some phenomena such as El Niño may be strongly phase locked with the seasonal cycle (Jin et al. 1994; Tziperman et al. 1994; Chang et al. 1994, 1995). This phase dependency can be incorporated easily by using cyclostationary EOFs (Kim and North 1997). The algorithm was constructed so that such an improvement can be incorporated easily.

## Acknowledgments

We gratefully acknowledge support for this work by the Department of Energy (DEFG03-98ER62610) via a grant to Texas A&M University. The Department of Energy does not necessarily endorse any of the conclusions drawn in the paper.

## REFERENCES

Barnett, T., N. Graham, M. A. Cane, S. E. Zebiak, S. Dolan, J. J. O’Brien, and D. Legler, 1988: On the prediction of the El Niño of 1986–1987. *Science,* **241,** 192–196.

Cane, M. A., S. E. Zebiak, and S. C. Dolan, 1986: Experimental forecasts of El Niño. *Nature,* **321,** 827–832.

Chang, P., B. Wang, T. Li, and L. Ji, 1994: Interactions between the seasonal cycle and the Southern Oscillation—Frequency entrainment and chaos in a coupled ocean–atmosphere model. *Geophys. Res. Lett.,* **21,** 2817–2820.

——, L. Ji, B. Wang, and T. Li, 1995: Interactions between the seasonal cycle and El Niño–Southern Oscillation in an intermediate coupled ocean–atmosphere model. *J. Atmos. Sci.,* **52,** 2353–2372.

Inoue, M., and J. J. O’Brien, 1984: A forecasting model for the onset of El Niño. *Mon. Wea. Rev.,* **112,** 2326–2337.

Jin, F.-F., J. D. Neelin, and M. Ghil, 1994: El Niño on the devil’s staircase: Annual subharmonic steps to chaos. *Science,* **264,** 70–72.

Kim, K.-Y., and G. R. North, 1997: EOFs of harmonizable cyclostationary processes. *J. Atmos. Sci.,* **54,** 2416–2427.

——, ——, and J. Huang, 1996: EOFs of one-dimensional cyclostationary time series: Computations, examples, and stochastic modeling. *J. Atmos. Sci.,* **53,** 1007–1017.

Latif, M., and M. Flügel, 1991: An investigation of short range climate predictability in the tropical Pacific. *J. Geophys. Res.,* **96,** 2661–2673.

Newton, H. J., 1988: *TIMESLAB: A Time Series Analysis Laboratory.* Wadsworth and Brooks, 623 pp.

Tziperman, E., L. Stone, M. A. Cane, and H. Jarosh, 1994: El Niño chaos: Overlapping of resonances between the seasonal cycle and the Pacific ocean–atmosphere oscillator. *Science,* **264,** 72–74.

Xu, J.-S., and H. von Storch, 1990: Predicting the state of the Southern Oscillation using principal oscillation pattern analysis. *J. Climate,* **3,** 1316–1329.

The first seven … of the first-order Markov process (*τ*_{0} = 10 and *σ*_{F} = 1). Error occurs in representing …

Citation: Journal of Climate 11, 11; 10.1175/1520-0442(1998)011<3046:EBLPAT>2.0.CO;2

Plot of prediction error squared as a function of the number of retained modes for the first-order Markov process (*τ*_{0} = 10 and *σ*_{F} = 1). Each curve approaches the plateau representing no predictability (100% prediction error).


The optimal prediction kernel, Γ_{opt}(*t, t*′), of the first-order Markov process (*τ*_{0} = 10 and *σ*_{F} = 1). Here, (T1, T2) represents (*t, t*′), where *t* lies in the prediction domain and *t*′ in the data domain. Only the subregion (*t, t*′) ∈ [80, 150] × [80, 100] is plotted.

Plot of prediction error squared as a function of the number of retained modes of the second-order Markov process (*τ*_{0} = 10, *ω*_{0} = 0.3, and *σ*_{F} = 1). Each curve approaches the plateau representing no predictability (100% prediction error).


The optimal prediction kernel, Γ_{opt}(*t, t*′), of the second-order Markov process (*τ*_{0} = 10, *ω*_{0} = 0.3, and *σ*_{F} = 1). Here, (T1, T2) represents (*t, t*′), where *t* lies in the prediction domain and *t*′ in the data domain. Only the subregion (*t, t*′) ∈ [80, 150] × [80, 100] is plotted.

Plot of cosine plus AR-1 time series (solid line) and 50 predicted values (dotted line) based on the first 100 points of the time series in comparison with the AR-1 prediction (short-dashed line). The long-dashed line is the deterministic component (cosine function) of the time series.


Plot of prediction error variance vs the number of retained EOF modes: (a)


Plot of normalized prediction error variance vs the


Same as Fig. 9 but for a time series consisting of a deterministic cosine function with the period of 100 units plus an AR-1 process.


Plot of normalized prediction error variance vs the


Plot of normalized prediction error variance vs the record length of data: (a)
