## 1. Introduction

Correlation functions and coefficients have been used extensively to quantify the phase (correlation) similarity between time series of ensemble members (hereafter, EMs). Taylor (1920) first described the physical concepts, and Wiener (1930) laid the theoretical foundations using generalized harmonic analysis. In addition, the autocorrelation function is transformed into the power spectrum through Fourier transform analysis (the Wiener–Khintchine formula; Taylor 1938). Various studies have established and described mathematical and physical correlations between two EMs. These studies have advanced our understanding of such meteorological phenomena as turbulence and chaos.

Thompson (1957) originally noted the sensitivity of large-scale atmospheric patterns to initial conditions. Lorenz (1963) suggested that a nonperiodic evolution occurs in three types of simultaneous, ordinary, differential equations, even if initial conditions have subtle differences. Because of the atmosphere’s chaotic behavior, a deterministic numerical forecast with a single atmospheric initial condition may have limited value for prediction. An ensemble forecast consisting of a number of individual forecast simulations, each simulation using slightly different initial conditions, can gauge and reduce the prediction errors that arise from chaotic behavior. Miyakoda et al. (1986), for example, used a global, nine-level general circulation model (GCM) and found that ensemble mean forecasts, when validated against observations, had smaller root-mean-square (RMS) errors and larger anomaly correlation scores than did individual forecasts. Ensemble forecasts are a practical approximation of the general stochastic dynamic prediction method (Epstein 1969); see Lewis (2005) for further details. The correlations among the members, the RMS differences between the members, and the RMS departure from the ensemble mean are all used to characterize ensemble spread and to estimate forecast skill or probability distributions in medium-range weather forecasts. [Such ensemble forecasts are provided routinely by meteorological centers such as the National Centers for Environmental Prediction (NCEP), the European Centre for Medium-Range Weather Forecasts (ECMWF), and the Japan Meteorological Agency (JMA; Murphy 1988; Kimoto et al. 1992).] Statistical diagnostics derived from forecast ensembles have also been used to estimate potential predictability at seasonal time scales (Rowell et al. 1995; Stern and Miyakoda 1995; Sugi et al. 1997; Rowell 1998; Phelps et al. 2004). For example, Rowell et al. 
(1995) examined the impact of internal atmospheric variability and sea surface temperature (SST) forcing on predictability over tropical North Africa using a signal-to-noise ratio from analysis of variance (Scheffe 1959). Shukla et al. (2000) used the signal-to-noise ratio along with anomaly correlation coefficients to suggest that winter mean circulation anomalies over the Pacific–North American region were highly predictable during years of large tropical SST anomalies.

The individual members of an ensemble of medium-range or seasonal forecasts can differ in two key ways: 1) in their "shape," by which we mean their mean values and the amplitudes of their temporal variations, and 2) in their "phase," as characterized by their temporal correlation. If all EMs are completely correlated, phase predictability is by definition perfect, even if mean values and amplitudes vary between the EMs. In contrast to the correlation diagnostic, RMS differences capture the differences in mean values and amplitudes among EMs over each time period, including differences in period characteristics. The RMS differences therefore estimate the "shape predictability" among EMs over a given period; conceivably, the RMS difference can be low even if the EMs are completely uncorrelated. A unified diagnostic for the statistical evaluation of predictability, one that examines both the phase and shape elements, is not in standard use. (Note that throughout this text, the term "predictability" is used to describe the degree to which initial conditions affect some aspect of a forecast ensemble, regardless of whether the forecast agrees with observations. This is in contrast to "predictive skill," which implies a comparison to observations.)

Koster et al. (2000, 2002, hereafter K02) use a statistical index, Ω, that does measure the degree of similarity between EMs in both phase and shape. According to K02, Ω measures the ratio of signal variance to total variance, much like the aforementioned diagnostic used by Rowell et al. (1995). Many recent studies (Koster et al. 2004, 2006; Dirmeyer et al. 2006; Guo et al. 2006) have used the Ω diagnostic to estimate the coupling strength between soil moisture and precipitation variability. These particular studies were performed as part of the Global Land–Atmosphere Coupling Experiment (GLACE), a project sponsored by the Global Energy and Water Cycle Experiment (GEWEX) and the Climate Variability and Predictability program (CLIVAR). Through joint analysis of the results of a dozen atmospheric general circulation models (AGCMs), GLACE found large degrees of coupling strength (for boreal summer) over the Great Plains of North America, India, and the Sahel (Koster et al. 2004).

The present paper provides a description of the mathematical structure of Ω to illustrate all elements of its usefulness for ensemble forecast analysis. Section 2 shows how Ω is calculated, and section 3 provides two separate mathematical interpretations of Ω. The behavior of Ω is explored in section 4 for some idealized, hypothetical situations. Section 5 then introduces a new application of Ω—its use in evaluating the predictability among ensemble members in medium-range forecasts.

## 2. Definition of Ω

Here $x_{ij}$ is a variable averaged over $n$ time periods ($j = 1, 2, \ldots, n$) for each of $m$ EMs ($i = 1, 2, \ldots, m$). Two variances are calculated. The ensemble mean is computed with (1), and the temporal variance of the ensemble mean ($\sigma_b^2$) is computed with (2) (see Fig. 1a):

$$\bar{x}_j = \frac{1}{m} \sum_{i=1}^{m} x_{ij}, \tag{1}$$

$$\sigma_b^2 = \frac{1}{n} \sum_{j=1}^{n} \left( \bar{x}_j - \bar{\bar{x}} \right)^2, \tag{2}$$

where $\bar{\bar{x}}$ is the temporal mean of $\bar{x}_j$, calculated with

$$\bar{\bar{x}} = \frac{1}{n} \sum_{j=1}^{n} \bar{x}_j. \tag{3}$$

The total variance ($\sigma^2$) is calculated across all time periods and all EMs (Fig. 1b):

$$\sigma^2 = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( x_{ij} - \bar{\bar{x}} \right)^2. \tag{4}$$

The index Ω is then defined as

$$\Omega = \frac{m \sigma_b^2 - \sigma^2}{(m-1)\,\sigma^2}. \tag{5}$$

If all EMs are identical, $\sigma_b^2$ equals $\sigma^2$, and Ω will be 1. In contrast, if all EMs are completely uncorrelated, then $\sigma_b^2$ approaches $\sigma^2 / m$, and Ω will be approximately 0. Thus, outside of sampling error, Ω varies from 0 to 1. Values closer to 1 indicate a greater degree of similarity amongst the EMs.
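As a concrete sketch of (1)–(5), the calculation can be written in a few lines of NumPy. The function name `omega` and the synthetic ensembles are ours, for illustration only:

```python
import numpy as np

def omega(x):
    """Similarity index Omega for an (m EMs) x (n time periods) array,
    following Eqs. (1)-(5)."""
    m, n = x.shape
    ens_mean = x.mean(axis=0)                          # Eq. (1): ensemble mean per period
    grand_mean = ens_mean.mean()                       # Eq. (3): temporal mean of (1)
    sigma2_b = ((ens_mean - grand_mean) ** 2).mean()   # Eq. (2): variance of the ensemble mean
    sigma2 = ((x - grand_mean) ** 2).mean()            # Eq. (4): total variance
    return (m * sigma2_b - sigma2) / ((m - 1) * sigma2)  # Eq. (5)

rng = np.random.default_rng(0)
base = rng.standard_normal(100)

identical = np.tile(base, (8, 1))            # perfectly similar EMs
independent = rng.standard_normal((8, 100))  # completely unrelated EMs

print(round(omega(identical), 3))     # 1.0
print(abs(omega(independent)) < 0.2)  # True: near 0, up to sampling error
```

The two limiting cases above mirror the text: identical EMs give Ω = 1, while unrelated EMs give Ω near 0.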

## 3. Mathematical analyses of Ω

### a. Mathematical analysis based on idealized normalizations

This section describes our first interpretation of the mathematical structure of Ω. Consider a generalized case of *m* EMs, each with *n* time periods; $X_{ij}$ is a simulated variable (e.g., temperature or precipitation amount), and $F_j$ is a forcing (e.g., SST or soil moisture) that varies with time. The $F_j$ time series is the same for each ensemble member. Here $X_{ij}$ and $F_j$ can be normalized as follows:

$$x_{ij} = \frac{X_{ij} - \bar{\bar{X}}}{\sigma_X}, \qquad f_j = \frac{F_j - \bar{F}}{\sigma_F},$$

where $\bar{\bar{X}}$ is the temporal ensemble mean of $X_{ij}$, and $\bar{F}$ is the temporal mean of $F_j$. Also, $\sigma_X$ and $\sigma_F$ represent the standard deviations of $X_{ij}$ (across all time periods and all EMs) and of $F_j$ (across all time periods), respectively.

We assume that $x_{ij}$ is controlled in part by the forcing term $f_j$ and in part by chaotic variability, represented by the random normal deviate $\xi_{ij}$. The variable $x_{ij}$ can then be rewritten as

$$x_{ij} = \rho f_j + \sqrt{1 - \rho^2}\,\xi_{ij}, \tag{8}$$

where $\rho$ is effectively the correlation coefficient between the variable and the underlying boundary forcing that controls it. By definition, $x_{ij}$, $f_j$, and $\xi_{ij}$ all have zero means, and (10) gives their (unit) variances.

The temporal variance $\sigma_b^2$ of the ensemble mean can be estimated using (1) and (8). Because $\xi_{ij}$ is a random variable, the ensemble mean of the chaotic term has variance $(1 - \rho^2)/m$, so that

$$\sigma_b^2 = \rho^2 + \frac{1 - \rho^2}{m}. \tag{14}$$

Substituting $\sigma_b^2$ from (14) into (5), with $\sigma^2 = 1$ for the normalized variable, yields

$$\Omega = \frac{m\left(\rho^2 + \frac{1 - \rho^2}{m}\right) - 1}{m - 1} = \rho^2.$$

In other words, Ω represents the square of the correlation coefficient between the variable of interest and the boundary forcing. Key to the usefulness of Ω is that it can be computed without knowing the particular character of the boundary forcing that controls the predictability. In other words, $f_j$, which might represent a subtle spatial pattern of the forcing, never needs to be explicitly computed.
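This interpretation is easy to verify numerically under the idealized model (8). The check below is our own sketch; the helper `omega` implements (1)–(5) as defined in section 2:

```python
import numpy as np

def omega(x):
    # Eqs. (1)-(5) from section 2
    m = x.shape[0]
    em = x.mean(axis=0)
    gm = em.mean()
    s2b = ((em - gm) ** 2).mean()
    s2 = ((x - gm) ** 2).mean()
    return (m * s2b - s2) / ((m - 1) * s2)

rng = np.random.default_rng(1)
m, n, rho = 16, 2000, 0.6

f = rng.standard_normal(n)                # normalized forcing f_j, shared by all EMs
xi = rng.standard_normal((m, n))          # chaotic variability xi_ij
x = rho * f + np.sqrt(1 - rho ** 2) * xi  # Eq. (8)

print(round(omega(x), 2))  # close to rho**2 = 0.36, up to sampling error
```

With a long series and many EMs, the computed Ω lands within sampling error of ρ², as the derivation predicts.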

### b. Mathematical analysis with strict normalizations

First, the mean value ($a_i$) and the temporal variance ($\sigma_{\mathrm{amp}\,i}^2$) of each ensemble member are calculated as follows:

$$a_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij}, \qquad \sigma_{\mathrm{amp}\,i}^2 = \frac{1}{n} \sum_{j=1}^{n} \left( x_{ij} - a_i \right)^2.$$

Each EM is then strictly normalized with $a_i$ and $\sigma_{\mathrm{amp}\,i}$, as shown in (20) and (21):

$$x'_{ij} = \frac{x_{ij} - a_i}{\sigma_{\mathrm{amp}\,i}}.$$

By construction, the mean of $x'_{kj}$ or $x'_{lj}$ across all time periods in each EM is 0, that is, $\frac{1}{n} \sum_{j=1}^{n} x'_{kj} = 0$. Expanding $\sigma_b^2$ in terms of these normalized anomalies produces a cross term $I$; if (3) is substituted in for $\bar{\bar{x}}^2$, then $I = 0$, and (26) can be written in terms of the anomaly cross correlation coefficients

$$R'_{kl} = \frac{1}{n} \sum_{j=1}^{n} x'_{kj}\, x'_{lj},$$

with $R'_{kk} = 1$ for all $k$. Equation (29) can be further rewritten so that each pairwise correlation is weighted by the ratio of the amplitude product $\sigma_{\mathrm{amp}\,k}\,\sigma_{\mathrm{amp}\,l}$ to $\sigma^2$.

The total variance decomposes in a parallel way. The first term, $\sigma_i^2$, is the variance of each EM about the mean value $\bar{\bar{x}}$ in (3). The second term, $\sigma_{\mathrm{mean}\,i}^2$, is the squared difference between the mean value $a_i$ of each EM and $\bar{\bar{x}}$. Next, (34) and (35) are substituted into (33) to yield the final decomposition of Ω.
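The two ingredients of this decomposition can be sketched numerically. The functions below are our minimal reading of the ACCC and variance-ratio ideas, not the exact weighted expressions of (26)–(35); the names `accc` and `avr` and the test series are ours:

```python
import numpy as np

def accc(x):
    """Average anomaly cross correlation coefficient R'_kl over all pairs k != l,
    after strictly normalizing each EM by its own mean and standard deviation."""
    m, n = x.shape
    xp = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    r = (xp @ xp.T) / n                      # R'_kl = (1/n) sum_j x'_kj x'_lj
    return r[~np.eye(m, dtype=bool)].mean()  # off-diagonal (k != l) average

def avr(x):
    """Average ratio of the pairwise amplitude products sigma_amp,k * sigma_amp,l
    to the total variance sigma^2 (a sketch of the 'shape' factor)."""
    m = x.shape[0]
    s = x.std(axis=1)                        # sigma_amp,i for each EM
    sigma2 = ((x - x.mean()) ** 2).mean()    # total variance about the grand mean
    return np.outer(s, s)[~np.eye(m, dtype=bool)].mean() / sigma2

rng = np.random.default_rng(2)
base = np.sin(2 * np.pi * np.arange(48) / 48)
ems = np.stack([base + 0.1 * rng.standard_normal(48) for _ in range(8)])

print(accc(ems) > 0.9)        # True: nearly in phase, so ACCC is close to 1
print(0.0 < avr(ems) <= 1.0)  # True: the variance-ratio factor lies in (0, 1]
```

Note that the variance-ratio factor cannot exceed 1, since the total variance includes both the within-EM variances and the squared mean offsets.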

## 4. Clarification of the mathematical structure of Ω

The similarity among ensemble members in a medium-range forecast decreases with time as the impact of the atmospheric initial conditions is lost. At least three factors induce this decrease. One is the increase of the phase difference (hereafter, PD) among EMs (Fig. 2a): the EMs of a forecast, for example, may predict different time variations of weather conditions even though they are characterized by the same frequencies of weather change. The second factor is the mean difference (hereafter, MD) among EMs (Fig. 2b): one EM may predict relatively warm conditions for the forecast period, whereas the other EMs may predict cooler conditions. The final factor is the amplitude difference (hereafter, AD) among EMs (Fig. 2c): for instance, large time variations of temperature may be predicted by some EMs but not by others.

In this section, we focus on how PD, MD, and AD affect Ω. The effect of each is isolated using hypothetical time series, as illustrated in Fig. 2.

### a. Influence of phase difference

A sinusoidal time series is prescribed for each of the *m* EMs; here, $n$ is the total number of time periods (one wavelength), $j$ is the specific time period, $i$ is the EM index ($m$ = 2–16 in steps of two), and $\Delta\tau_1$ is the PD in each EM (0–2π):

$$x_{ij} = \sin\!\left( \frac{2\pi j}{n} + (i - 1)\,\Delta\tau_1 \right).$$

Figure 4 plots Ω as a function of $\Delta\tau_1$. The second term on the right-hand side of (39) vanishes because the MD for the EMs is zero. The AD among all EMs is also zero for this idealized case.

The behavior of Ω depends on the number of EMs ($m$) considered; Ω is small for large sets of EMs when $\Delta\tau_1$ equals π. If there is no PD (PD = 0, 2π) among EMs, Ω is 1 regardless of the number of EMs. When $m$ is 2, Ω behaves like a cross correlation coefficient; if the two EMs are in phase (in opposite phase), then Ω approaches 1 (−1). In contrast, when the number of EMs is large enough (in this case, when $m$ = 16), Ω is effectively 0 for all nonzero PD. The value of Ω is not identically 0 when PD = π because in this case, $\sigma_b^2$ becomes 0, and (5) can be expressed as

$$\Omega = \frac{-\sigma^2}{(m-1)\,\sigma^2} = -\frac{1}{m-1}.$$
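The limiting values quoted above can be checked directly. The sinusoid construction below is our reading of the prescribed series, and `omega` implements (1)–(5):

```python
import numpy as np

def omega(x):
    # Eqs. (1)-(5) from section 2
    m = x.shape[0]
    em = x.mean(axis=0)
    gm = em.mean()
    s2b = ((em - gm) ** 2).mean()
    s2 = ((x - gm) ** 2).mean()
    return (m * s2b - s2) / ((m - 1) * s2)

n = 360
j = np.arange(n)

def phase_ensemble(m, dtau):
    # EM i is one sine wavelength, shifted in phase by (i-1)*dtau
    return np.stack([np.sin(2 * np.pi * j / n + i * dtau) for i in range(m)])

print(round(omega(phase_ensemble(16, 0.0)), 3))   # no PD: 1.0
print(round(omega(phase_ensemble(2, np.pi)), 3))  # m=2, opposite phase: -1.0
print(round(omega(phase_ensemble(16, np.pi)), 3)) # m=16, PD=pi: -1/(m-1) = -0.067
```

With PD = π the ensemble mean vanishes, so Ω collapses to −1/(m−1): −1 for two EMs, but effectively 0 for sixteen.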

### b. Influence of mean value difference

A second set of hypothetical time series isolates the MD: each EM is given the same sinusoid but a different mean value; $i$ is the index of the ensemble member, $j$ is the time period number, $n$ is the total number of time periods, and $\Delta\tau_2$ is an index of MD among EMs:

$$x_{ij} = \sin\!\left( \frac{2\pi j}{n} \right) + (i - 1)\,\Delta\tau_2.$$

There is no PD among EMs in this hypothetical, idealized case. Therefore, (39) can be written with the phase term fixed, leaving only the mean-difference contributions (45).

Figure 6 plots Ω versus $\Delta\tau_2$ for several ensemble sizes ($m$ = 2–16). Figure 7 plots the second term on the right-hand side of (45) versus $\Delta\tau_2$. In both figures, the values approach 0 for large sets of EMs when $\Delta\tau_2$ is large (e.g., $\Delta\tau_2$ = 5). That is, the impact of the second term on the right-hand side of (45) is negligible in the case of a large number of EMs, for which the Ω equation simplifies to (46), a function of the mean differences and $m$.

When *m* = 2, Ω approaches −1 as the MD increases. If there are many EMs (*m* = 16 in this experiment), Ω vanishes with increasing MD.
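These two limits can be reproduced with a toy ensemble. The constant-offset construction is our reading of the prescribed series (no PD, no AD), and `omega` implements (1)–(5):

```python
import numpy as np

def omega(x):
    # Eqs. (1)-(5) from section 2
    m = x.shape[0]
    em = x.mean(axis=0)
    gm = em.mean()
    s2b = ((em - gm) ** 2).mean()
    s2 = ((x - gm) ** 2).mean()
    return (m * s2b - s2) / ((m - 1) * s2)

n = 360
base = np.sin(2 * np.pi * np.arange(n) / n)

def mean_offset_ensemble(m, dtau):
    # identical sinusoids; EM i is shifted upward by the constant (i-1)*dtau
    return np.stack([base + i * dtau for i in range(m)])

print(round(omega(mean_offset_ensemble(2, 5.0)), 2))   # m=2, large MD: about -0.85
print(round(omega(mean_offset_ensemble(16, 5.0)), 2))  # m=16, large MD: near 0
```

As the offsets grow, the mean-difference variance dominates the total variance, driving Ω toward −1 for m = 2 but toward 0 for a large ensemble.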

### c. Influence of amplitude difference

A third set of hypothetical time series isolates the AD ($m$ = 2–16 in steps of two). Figure 8 shows a schematic of the series defined in (47): sinusoids that are in phase and share a zero mean but differ in amplitude across the *m* EMs; here, $\Delta\tau_3$ is an index that shows the AD among EMs. In this hypothetical case, (39) can be expressed with the phase and mean terms fixed, leaving only the amplitude-ratio contributions.

Figure 9 plots Ω versus $\Delta\tau_3$ for several EMs ($m$ = 2–16 in steps of two). The figure shows that when $\Delta\tau_3$ is moderately small (e.g., $\Delta\tau_3$ = 5), Ω increases with increasing ensemble size. In contrast, Ω approaches 0 when the amplitudes are quite different amongst the EMs, despite the zero values of PD and MD. The Ω values are constrained to lie between 0 and 1 when AD acts alone.
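The same qualitative behavior appears for any amplitude-only construction in which a few EMs come to dominate the total variance. The geometric amplitude growth below is one plausible stand-in for the series in (47), not the paper's exact form; `omega` implements (1)–(5):

```python
import numpy as np

def omega(x):
    # Eqs. (1)-(5) from section 2
    m = x.shape[0]
    em = x.mean(axis=0)
    gm = em.mean()
    s2b = ((em - gm) ** 2).mean()
    s2 = ((x - gm) ** 2).mean()
    return (m * s2b - s2) / ((m - 1) * s2)

n = 360
base = np.sin(2 * np.pi * np.arange(n) / n)

def amp_ensemble(m, dtau):
    # in-phase, zero-mean sinusoids; EM i has amplitude (1 + dtau)**(i-1)
    return np.stack([(1.0 + dtau) ** i * base for i in range(m)])

print(round(omega(amp_ensemble(16, 0.0)), 3))  # equal amplitudes: 1.0
print(round(omega(amp_ensemble(16, 1.0)), 3))  # strongly differing amplitudes: small
```

With equal amplitudes Ω is exactly 1; as the amplitude spread grows, Ω decreases toward 0 even though PD and MD are both zero, and it never becomes negative in this configuration.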

### d. Mathematical characteristics of three indices

Two indices clarify the impact of PD and the joint impact of MD and AD on the behavior of Ω. First, (42) provides the average value of the anomaly cross correlation coefficient (ACCC). The ACCC shows the impact of PD on Ω, regardless of MD and AD. A second index, defined in (46), is the average value of the variance ratio (AVR). The AVR indicates the impact of MD and AD on Ω, regardless of PD. Here we assume that the number of EMs is large, so that the impact of the second term in (45) can be neglected.

Note that while the value of Ω reflects the values of both ACCC and AVR, there is no simple functional form relating the three quantities. The ACCC and AVR are simply presented as the “phase” and “shape” aspects of Ω.

To summarize the results of the present section, we note that two types of statistical differences amongst ensemble members underlie Ω: phase differences (PD, as characterized by the ACCC) and shape differences (MD and AD, as characterized by the AVR). PD, MD, and AD affect the value of Ω in different ways, as illustrated in Figs. 4, 6, and 9. The concepts of ACCC and AVR will be used extensively in the next section.

## 5. Predictability among ensemble members in medium-range forecast using the new estimation methods

In this section, we use the mathematical structure of Ω revealed above to evaluate the predictability among ensemble members in medium-range forecasts from the viewpoint of similarity. Here predictability is defined as the impact of initial conditions on atmospheric behavior. We deal with idealized predictability; we do not compare the model ensemble results with observations. In essence, we assume that the model we use is perfect and that one of the EMs represents "nature," that is, it shows the true evolution of the various atmospheric fields.

### a. Model and data

Ensemble numerical simulations were integrated with the AGCM developed jointly by the Center for Climate System Research (CCSR; University of Tokyo) and the National Institute for Environmental Studies (NIES), hereafter the CCSR/NIES AGCM (Numaguti et al. 1997). The CCSR/NIES model used T42 horizontal truncation (128 × 64 grid cells, approximately 2.8° resolution) and 20 sigma-coordinate layers in the vertical.

### b. Methodology

The three diagnostics are evaluated in a moving window along the forecast: $j$ is the elapsed time (h), $n$ is the number of periods evaluated at every time step, and $m$ is the number of EMs; $\Omega_{mn}(j)$ was calculated for all $j$, as shown in Fig. 10. Here $\mathrm{ACCC}_{mn}(j)$ indicates the "phase predictability" and $\mathrm{AVR}_{mn}(j)$ the "shape predictability" among ensemble members. As the single statistical quantity that includes both phase and shape predictability, $\Omega_{mn}(j)$ indicates "similarity predictability."
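A sliding-window evaluation of this kind can be sketched as follows. The window handling, the helper names, and the synthetic ensemble (a shared sine plus noise whose spread grows with time, mimicking a diverging forecast) are all our choices:

```python
import numpy as np

def omega(x):
    # Eqs. (1)-(5) from section 2
    m = x.shape[0]
    em = x.mean(axis=0)
    gm = em.mean()
    s2b = ((em - gm) ** 2).mean()
    s2 = ((x - gm) ** 2).mean()
    return (m * s2b - s2) / ((m - 1) * s2)

def omega_series(x, n_win):
    """Omega_mn(j): Omega evaluated over a window of n_win periods around each j."""
    total = x.shape[1]
    half = n_win // 2
    return np.array([omega(x[:, j - half : j - half + n_win])
                     for j in range(half, total - half)])

rng = np.random.default_rng(3)
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)                     # shared evolution
spread = np.linspace(0.01, 2.0, 200)                    # ensemble spread grows with time
ens = signal + spread * rng.standard_normal((12, 200))  # 12 EMs

series = omega_series(ens, 12)
print(series[0] > 0.8)   # True: near-identical EMs early in the forecast
print(series[-1] < 0.3)  # True: similarity lost once the spread dominates
```

The same moving-window idea applies to the ACCC and AVR diagnostics, yielding the time series analyzed in the following subsections.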

### c. Grid scale

Figure 11 shows the time series of temperature at 500 hPa produced by 16 EMs ($m$ = 16) at a specific grid cell (46°N, 180°) in December. The 16 sets of 1 December atmospheric initial conditions used were constructed with the 1-h lagged approach. All data in Fig. 11 are averaged over 6 h. Figure 12 shows time series of $\Omega_{mn}(j)$, $\mathrm{ACCC}_{mn}(j)$, and $\mathrm{AVR}_{mn}(j)$ as calculated from the 16 EMs. Every point of each line was calculated from 3 days of data ($n$ = 12). Here $\mathrm{ACCC}_{mn}(j)$ was approximately constant (and thus showing stable phase predictability) at 0.7 until day 12, and $\mathrm{AVR}_{mn}(j)$ was nearly constant around 0.9 until day 12; thus, large predictability for shape also persisted. Large predictability for both phase and shape was equivalent to a large value of $\Omega_{mn}(j)$.

After day 12, $\Omega_{mn}(j)$ and $\mathrm{ACCC}_{mn}(j)$ decreased to values less than 0.05, a decrease significant at the 92% level according to a Monte Carlo analysis; $\mathrm{AVR}_{mn}(j)$ also decreased at this time, but the decrease was much smaller than that for $\mathrm{ACCC}_{mn}(j)$. The impact of $\mathrm{ACCC}_{mn}(j)$ on $\Omega_{mn}(j)$ was therefore dominant. After losing shape predictability around day 14, $\mathrm{AVR}_{mn}(j)$ fluctuated between 0.3 and 0.6. By this time, the impact of the initial conditions on the forecast had disappeared.

Note that physical constraints on meteorological fields limit the degree to which $\mathrm{AVR}_{mn}(j)$ can be reduced. In both the real world and models, temperatures, for example, will not vary by the amounts needed to reduce $\mathrm{AVR}_{mn}(j)$ to 0. Thus, our use of the term "shape predictability" throughout this text when referring to $\mathrm{AVR}_{mn}(j)$, though convenient, is not rigorously correct; in the strictest sense, "shape predictability" would refer to the degree to which $\mathrm{AVR}_{mn}(j)$ exceeds this lower limit. In any case, for the grid cell examined in Figs. 11 and 12, large decreases in both $\mathrm{ACCC}_{mn}(j)$ and $\mathrm{AVR}_{mn}(j)$ induce the large decrease in $\Omega_{mn}(j)$ at day 12: chaos destroys both the phase and shape similarity at approximately the same time.

Figure 13 presents the time series of 500-hPa temperature at a different grid cell (74°N, 96°W). As shown in Fig. 14, the corresponding $\Omega_{mn}(j)$ and $\mathrm{ACCC}_{mn}(j)$ values show large decreases after day 5. The decrease in $\Omega_{mn}(j)$ is in fact significantly larger than that in $\mathrm{ACCC}_{mn}(j)$, owing to a sharp decrease in $\mathrm{AVR}_{mn}(j)$. At high latitudes, SST has a small impact on atmospheric behavior, especially during winter, which may explain this sharp decrease. Here, predictability decreased not only because of an increase in PD but also because of increases in MD and AD.

### d. Zonal mean

Figure 15 shows (a) $\Omega_{mn}(j)$, (b) $\mathrm{ACCC}_{mn}(j)$, and (c) $\mathrm{AVR}_{mn}(j)$ for 500-hPa temperature at low, middle, and high latitudes. In all panels, dotted, solid, and dashed lines indicate zonal averages between 0° and 30°N, between 30° and 60°N, and between 60° and 90°N, respectively.

Consider the behavior of $\mathrm{ACCC}_{mn}(j)$ in Fig. 15b. On day 2, the high-latitude average value is larger than that at other latitudes. However, phase predictability is lost first at high latitudes, with values decreasing to 0.05 by day 16. In contrast, low-latitude values are initially about 0.65, but the phase predictability there persists until day 24. Midlatitude phase predictability shows the largest values of the three latitudinal bands from day 7 to 15 and lasts until day 17.

Figure 15c shows $\mathrm{AVR}_{mn}(j)$ for the three latitudinal bands. All three lines show similar values in the first few days, implying that shape predictability at early times has no latitudinal dependence; $\mathrm{AVR}_{mn}(j)$ at high latitudes, however, decreases first and then stabilizes after day 16. At mid- and low latitudes, the values take much longer to decrease. Comparison of Figs. 15b and 15c shows that phase and shape predictability vanish simultaneously at high latitudes. In lower and midlatitudes, however, shape predictability persists for several days after phase predictability is lost. Thus, at these latitudes, the effects of atmospheric chaos differ for phase and shape.

Figure 15a shows $\Omega_{mn}(j)$, the measure of comprehensive predictability, as a function of time at low, middle, and high latitudes. The relative positions of the lines in Fig. 15a look very similar to those for $\mathrm{ACCC}_{mn}(j)$ in Fig. 15b. At all three latitudes, however, note that $\Omega_{mn}(j)$ reached 0.05 between days 13 and 16, somewhat earlier than did $\mathrm{ACCC}_{mn}(j)$.

### e. Global mean

Figure 16 shows $\Omega_{mn}(j)$, $\mathrm{ACCC}_{mn}(j)$, and $\mathrm{AVR}_{mn}(j)$ for 500-hPa temperature averaged worldwide in December. Here $\Omega_{mn}(j)$ decreases with time and reaches 0.05 on day 15; by this measure, predictability has a time scale of 15 days. Predictability as measured by $\mathrm{ACCC}_{mn}(j)$, however, persists for more than 5 days longer. After a sharp decrease, $\mathrm{AVR}_{mn}(j)$ shows stable values of about 0.3 starting at day 20. Figure 16 shows nearly constant differences between $\Omega_{mn}(j)$ and $\mathrm{ACCC}_{mn}(j)$ at every time period, suggesting that the impact of MD and AD on predictability at global scales does not vary with time.

### f. Global distribution

Figure 17 shows global distributions of $\Omega_{mn}(j)$, $\mathrm{ACCC}_{mn}(j)$, and $\mathrm{AVR}_{mn}(j)$ for 500-hPa temperature on day 10. Values are calculated with 12 time periods from day 9 to 11. Large values of both $\mathrm{ACCC}_{mn}(j)$ and $\mathrm{AVR}_{mn}(j)$ occur over midlatitudes, especially over the oceans. The regional distributions resemble the strong westerly jets, in which long-period waves dominate atmospheric behavior. The large values of $\mathrm{ACCC}_{mn}(j)$ and $\mathrm{AVR}_{mn}(j)$ in these regions lead to correspondingly large values of $\Omega_{mn}(j)$. In midlatitudes, predictability is thus maintained not only in phase (PD) but also in mean value and amplitude (MD and AD).

At high latitudes, however, $\mathrm{ACCC}_{mn}(j)$ and $\mathrm{AVR}_{mn}(j)$ are small, and thus $\Omega_{mn}(j)$ is also small. At low latitudes, relatively large values of $\mathrm{AVR}_{mn}(j)$ persist on day 10. In contrast, small values of $\mathrm{ACCC}_{mn}(j)$ occur over many tropical regions, and $\Omega_{mn}(j)$ is small there, with values between 0 and 0.2. Thus, in the Tropics, similarity predictability is lost because of PD among EMs. The factors that reduce predictability therefore have a latitudinal dependence.

## 6. Summary

Two interpretations of the mathematical structure of the statistical similarity index Ω are provided. The first interpretation shows that, under the assumption that both boundary forcing and atmospheric chaos contribute separately to the value of a meteorological variable at a given time step, Ω is equivalent to the square of the correlation coefficient between the variable and the forcing. The nature of this boundary forcing, which presumably is multivariate and multifaceted, need not be established or understood for the calculation of Ω.

The second interpretation, the mathematically stricter one, shows Ω to be associated with two quantities: the average value of the anomaly cross correlation coefficient (ACCC) and the average value of the variance ratio (AVR) amongst the EMs. A second term, which reflects part of the similarity of mean values amongst the EMs, is negligible for large numbers of EMs, so the first term dominates the behavior of Ω. These statistical characteristics suggest that Ω reflects both phase similarity (correlation) and shape similarity (mean value and amplitude). It thus has an advantage over both the cross correlation coefficient, which shows similarity of phase but not shape among EMs, and the root-mean-square (RMS) difference, which indicates similarities in shape, including the effect of period characteristics.

Skill in ensemble weather forecasts is typically estimated with anomaly correlations or RMS differences. Even if large predictability for shape is estimated with the RMS difference, such skill may not be practical or reliable in the face of small predictability for phase similarity. The converse is also true. This paper suggests that by characterizing phase and shape predictability jointly, the $\Omega_{mn}(j)$ diagnostic may be a superior predictability measure. In addition, this study shows that relative losses (with time) in phase and shape predictability vary with latitude. As described in section 5, this study deals not with real predictability but with idealized predictability, under the assumptions that the model is perfect and that one of the EMs represents "nature." However, it may be possible to estimate real predictability with $\Omega_{mn}(j)$ by calculating the anomalies of each EM relative to observational data. Predictability also depends on the time scale of interest; because $\Omega_{mn}(j)$ can be evaluated for any averaging time scale, it is well suited to such estimates. Overall, $\Omega_{mn}(j)$ is seen to be a highly practical and versatile tool for the analysis of ensemble weather forecasts, and perhaps for other scientific and technological applications as well.

## REFERENCES

Dirmeyer, P. A., R. D. Koster, and Z. Guo, 2006: Do global models properly represent the feedback between land and atmosphere? *J. Hydrometeor.*, **7**, 1177–1198.

Epstein, E. S., 1969: Stochastic dynamic prediction. *Tellus*, **21**, 739–759.

Guo, Z., and Coauthors, 2006: GLACE: The Global Land–Atmosphere Coupling Experiment. Part II: Analysis. *J. Hydrometeor.*, **7**, 611–625.

Kimoto, M., H. Mukougawa, and S. Yoden, 1992: Medium-range forecast skill variation and blocking transition: A case study. *Mon. Wea. Rev.*, **120**, 1616–1627.

Koster, R. D., M. J. Suarez, and M. Heiser, 2000: Variance and predictability of precipitation at seasonal-to-interannual timescales. *J. Hydrometeor.*, **1**, 26–46.

Koster, R. D., P. A. Dirmeyer, A. N. Hahmann, R. Ipelaar, L. Tyahla, P. Cox, and M. J. Suarez, 2002: Comparing the degree of land–atmosphere interaction in four atmospheric general circulation models. *J. Hydrometeor.*, **3**, 363–375.

Koster, R. D., and Coauthors, 2004: Regions of strong coupling between soil moisture and precipitation. *Science*, **305**, 1138–1140.

Koster, R. D., and Coauthors, 2006: GLACE: The Global Land–Atmosphere Coupling Experiment. Part I: Overview. *J. Hydrometeor.*, **7**, 590–610.

Lewis, J. M., 2005: Roots of ensemble forecasting. *Mon. Wea. Rev.*, **133**, 1865–1885.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. *J. Atmos. Sci.*, **20**, 130–141.

Miyakoda, K., J. Sirutis, and J. Ploshay, 1986: One-month forecast experiments—without anomaly boundary forcings. *Mon. Wea. Rev.*, **114**, 2363–2401.

Murphy, J. M., 1988: The impact of ensemble forecasts on predictability. *Quart. J. Roy. Meteor. Soc.*, **114**, 463–493.

Numaguti, A., M. Takahashi, T. Nakajima, and A. Sumi, 1997: Description of CCSR/NIES atmospheric general circulation model. *CGER's Supercomputer Monogr. Rep.*, No. 3, NIES, 1–48.

Phelps, M. W., A. Kumar, and J. J. O'Brien, 2004: Potential predictability in the NCEP CPC dynamical seasonal forecast system. *J. Climate*, **17**, 3775–3785.

Rowell, D. P., 1998: Assessing potential seasonal predictability with an ensemble of multidecadal GCM simulations. *J. Climate*, **11**, 109–120.

Rowell, D. P., C. K. Folland, K. Maskell, and M. N. Ward, 1995: Variability of summer rainfall over tropical North Africa (1906–92): Observations and modeling. *Quart. J. Roy. Meteor. Soc.*, **121**, 699–704.

Scheffe, H., 1959: *The Analysis of Variance*. John Wiley and Sons, 477 pp.

Shukla, J., and Coauthors, 2000: Dynamical seasonal prediction. *Bull. Amer. Meteor. Soc.*, **81**, 2593–2606.

Stern, W., and K. Miyakoda, 1995: Feasibility of seasonal forecasts inferred from multiple GCM simulations. *J. Climate*, **8**, 1071–1085.

Sugi, M., R. Kawamura, and N. Sato, 1997: A study of SST-forced variability and potential predictability of seasonal mean fields using the JMA global model. *J. Meteor. Soc. Japan*, **75**, 717–736.

Taylor, G. I., 1920: Diffusion by continuous movements. *Proc. London Math. Soc.*, **20**, 196–212.

Taylor, G. I., 1938: The spectrum of turbulence. *Proc. Roy. Soc. London*, **A164**, 476–490.

Thompson, P., 1957: Uncertainty of initial state as a factor in predictability of large-scale atmospheric flow patterns. *Tellus*, **9**, 275–295.

Wiener, N., 1930: Generalized harmonic analysis. *Acta Math.*, **55**, 117–258.