Predictability of Week-3–4 Average Temperature and Precipitation over the Contiguous United States

Timothy DelSole, George Mason University, and Center for Ocean–Land–Atmosphere Studies, Fairfax, Virginia

Laurie Trenary, George Mason University, and Center for Ocean–Land–Atmosphere Studies, Fairfax, Virginia

Michael K. Tippett, Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York, and Center of Excellence for Climate Change Research, Department of Meteorology, King Abdulaziz University, Jidda, Saudi Arabia

Kathleen Pegion, George Mason University, and Center for Ocean–Land–Atmosphere Studies, Fairfax, Virginia

Abstract

This paper demonstrates that an operational forecast model can skillfully predict week-3–4 averages of temperature and precipitation over the contiguous United States. This skill is demonstrated at the gridpoint level (about 1° × 1°) by decomposing temperature and precipitation anomalies in terms of an orthogonal set of patterns that can be ordered by a measure of length scale and then showing that many of the resulting components are predictable and can be predicted in observations with statistically significant skill. The statistical significance of predictability and skill are assessed using a permutation test that accounts for serial correlation. Skill is detected based on correlation measures but not based on mean square error measures, indicating that an amplitude correction is necessary for skill. The statistical characteristics of predictability are further clarified by finding linear combinations of components that maximize predictability. The forecast model analyzed here is version 2 of the Climate Forecast System (CFSv2), and the variables considered are temperature and precipitation over the contiguous United States during January and July. A 4-day lagged ensemble, comprising 16 ensemble members, is used. The most predictable components of winter temperature and precipitation are related to ENSO, and other predictable components of winter precipitation are shown to be related to the Madden–Julian oscillation. These results establish a scientific basis for making week-3–4 weather and climate predictions.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author e-mail: Timothy DelSole, tdelsole@gmu.edu


1. Introduction

Operational weather forecasts are skillful out to 7–10 days (Simmons and Hollingsworth 2002), and operational seasonal forecasts are skillful out to 3–8 months (depending on season and model; Barnston et al. 2012), but there is relatively limited evidence that forecasts are skillful in the intermediate 3–4-week range (Newman et al. 2003; Pegion and Sardeshmukh 2011; Wang et al. 2014). If skillful forecasts in the 3–4-week range existed, they would have significant social and economic value because many management decisions in agriculture, food security, water resources, and disaster risk are made on this time scale. However, most studies that claim predictability in the 3–4-week range identify this skill in the tropics (Li and Robertson 2015), in upper-level quantities like geopotential height fields (Pegion and Sardeshmukh 2011), or in certain global climate indices (Wang et al. 2014), whereas the skill of midlatitude land surface quantities like 2-m temperature or precipitation tends to be negligible (Li and Robertson 2015). Johnson et al. (2013) develop an empirical model for predicting North American 2-m temperature out to 4 weeks based on a linear trend and statistical relations with the Madden–Julian oscillation (MJO) and El Niño–Southern Oscillation (ENSO) and find that this empirical model has skill in certain regions and phases of the MJO. This paper will show that an operational forecast model makes skillful predictions of week-3–4 average temperature and precipitation over the contiguous United States (CONUS).

Predictability of temperature and precipitation depends very much on the spatial and temporal scale under consideration. Beyond weather time scales (e.g., 7–10 days), it is widely accepted that only large-scale spatial structures are predictable. Accordingly, we propose a novel approach to investigating subseasonal predictability using a set of spatial patterns that can be ordered by length scale. We will show that week-3–4 averages of time series corresponding to many of these spatial patterns can be skillfully predicted by a state-of-the-art prediction model. In addition, we find linear combinations of these time series that maximize predictability and show that many of these predictable components can be predicted with skill.

2. Data

The computations performed in this study are strongly constrained by the availability of forecasts; hence, it is helpful to discuss data issues first. We analyze retrospective forecasts, called "hindcasts," from version 2 of the Climate Forecast System (CFSv2; Saha et al. 2014). The CFSv2 is a coupled atmosphere–ocean–land–ice model and is initialized based on analysis products for the atmosphere, ocean, land, and sea ice. The hindcasts under investigation were initialized at 0000, 0600, 1200, and 1800 UTC of each day over the 12-yr period from January 1999 to December 2010. Although these hindcasts were integrated out to 45 days, only the 2-week mean over weeks 3–4 was considered. Only one hindcast per initialization time is available, so a lagged-ensemble approach is employed, whereby forecasts initialized at different times but verifying at the same time are averaged. In general, skill increases with the size of the lagged ensemble until it saturates around 4 days (as shown in section 4). Accordingly, we consider hindcasts based on a 4-day lagged ensemble, which contains 16 members derived from four hindcasts per day. To be clear, the 4-day lagged ensemble is computed from the hindcasts initialized within the 4 days up to and including time t that verify over the same week-3–4 target period. We consider hindcasts of temperature and precipitation over CONUS initialized only in January and July (i.e., boreal winter and summer).
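The assembly of the 16-member lagged ensemble described above can be sketched as follows. The array layout, function name, and argument names are illustrative assumptions, not the authors' code; the only substantive content is the windowing rule (four initializations per day over a 4-day window ending at the most recent initialization).

```python
import numpy as np

def lagged_ensemble_mean(forecasts, init_times, latest_init, n_days=4, per_day=4):
    """Mean of a lagged ensemble: average the E = n_days * per_day forecasts
    initialized at or before `latest_init` and within an `n_days` window.

    forecasts : array of shape (n_inits, ...), one week-3-4 forecast per init
    init_times : initialization times in days (0.25-day spacing for 6-h inits)
    """
    window = (init_times <= latest_init) & (init_times > latest_init - n_days)
    members = forecasts[window]
    # 4 days x 4 initializations per day = 16 members, as in the text
    assert members.shape[0] == n_days * per_day, "missing initializations"
    return members.mean(axis=0)
```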

For verification, the 2-week mean temperature is compared to estimates from the NCEP–NCAR reanalysis (Kistler et al. 2001). Similarly, hindcasts of daily precipitation were verified relative to the Climate Prediction Center (CPC) unified gauge-based analysis (Chen et al. 2008).

Climatologies of daily temperature and precipitation are quite noisy and require significant smoothing. No significant dependence of hindcast climatology on lead time was detected, so the model climatology for each calendar day was estimated by averaging all hindcasts verifying on the same day and over all lead times. In addition, the daily climatology was fit to a second-order polynomial over the 76-day period starting from the first of each month. Various checks and visual comparisons were made to ensure that the estimated climatologies were reasonable.
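The polynomial smoothing step described above might be sketched as follows, assuming the day-by-day averaging over hindcasts has already produced a raw daily climatology; the function name is hypothetical:

```python
import numpy as np

def smooth_climatology(daily_clim):
    """Fit a second-order polynomial to a raw daily climatology over the
    76-day window starting from the first of the month, as described in
    the text.  `daily_clim` is a length-76 array of day-averaged values."""
    days = np.arange(len(daily_clim))
    coeffs = np.polyfit(days, daily_clim, deg=2)  # least-squares quadratic fit
    return np.polyval(coeffs, days)               # smoothed climatology
```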

MJO indices are computed from CFSv2 hindcasts in the manner of Trenary et al. (2017). Specifically, the familiar real-time multivariate MJO indices (RMM1 and RMM2) of Wheeler and Hendon (2004) were derived from an EOF analysis of observations, and then the resulting EOF patterns were projected on model variables. In contrast to the standard approach, a 120-day running mean was not subtracted from the indices; hence, our MJO indices include interannual variability.

3. Methods

This section describes our methods for 1) defining an orthogonal set of large-scale patterns, 2) quantifying predictability and skill, and 3) finding patterns that maximize predictability and skill.

a. Eigenvectors of the Laplacian operator

We project temperature and precipitation fields onto the eigenvectors of the Laplacian operator over CONUS. Laplacian eigenvectors provide a convenient orthogonal basis set that can be ordered by a measure of length scale. Special cases of Laplacian eigenvectors include Fourier series and spherical harmonics, which are used routinely to decompose time series by time scale and spatial structures by length scale, respectively. Eigenvectors of the Laplacian operator over CONUS were obtained using a Green’s function method described in DelSole and Tippett (2015), which should be consulted for details (codes are available upon request). The resulting spatial patterns are orthogonal with respect to an area-weighted inner product and ordered such that the first corresponds to a spatially uniform pattern over the domain (i.e., the largest spatial scale that fits in the domain), and subsequent patterns correspond to dipoles, tripoles, quadrupoles, and so forth of decreasing length scale. These vectors depend only on the geometry of the domain and therefore are data independent, in contrast to empirical orthogonal functions (EOFs). Thus, a single set of spatial patterns are used to analyze different variables and seasons.
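The paper obtains these eigenvectors with the Green's function method of DelSole and Tippett (2015); as a simple stand-in for illustration only, the same qualitative basis can be produced from a discrete graph Laplacian on a masked grid, with eigenvectors ordered from large to small scale and the first spatially uniform:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenvectors(mask):
    """Eigenvectors of a discrete (graph) Laplacian on the grid points where
    `mask` is True -- an illustrative stand-in for the Green's-function
    method used in the paper.  Columns of `vecs` are ordered by increasing
    eigenvalue, i.e., decreasing length scale; the first is spatially uniform."""
    pts = np.argwhere(mask)
    index = {tuple(p): k for k, p in enumerate(pts)}
    n = len(pts)
    L = np.zeros((n, n))
    for (i, j), k in index.items():
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if nb in index:            # neighbor inside the domain
                L[k, k] += 1.0         # degree term
                L[k, index[nb]] -= 1.0 # off-diagonal coupling
    vals, vecs = eigh(L)               # symmetric eigenproblem, ascending order
    return vals, vecs
```

Like the Laplacian eigenvectors in the text, these depend only on the geometry of the domain mask, not on the data.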

Laplacian eigenvectors 2–10 over CONUS are shown in Fig. 1. The first eigenvector is not shown because it equals a constant over the whole domain. The second and third eigenvectors measure the east–west and north–south gradients, respectively. The next two eigenvectors correspond to a tripole and quadrupole, and so on. The percent variance of observed 2-week means explained by the first 20 Laplacian eigenvectors is shown in Fig. 2; similar percentages are found in the model (not shown). As expected, the explained variance tends to decrease with decreasing spatial scale.

Fig. 1.

Laplacian eigenvectors 2–10 over the contiguous United States.

Citation: Journal of Climate 30, 10; 10.1175/JCLI-D-16-0567.1

Fig. 2.

Fraction of variance of observed 2-week means explained by individual Laplacian eigenfunctions 1–20.

b. Measure of predictability

Predictability refers to the degree to which a variable in a model is predictable by that model. As such, predictability is an inherent property of a model that can be measured independently of observations. The standard approach to measuring predictability is to consider an ensemble of predictions initialized at equally likely states of the system. Although the CFSv2 reforecast dataset does not have multiple ensemble members for the same initial condition (i.e., a "burst" ensemble), an ensemble can be approximated by grouping hindcasts that are initialized 6 h apart and verify on the same day. The resulting ensemble often is called a lagged ensemble (Hoffman and Kalnay 1983). Let f_{t,τ} denote the forecast anomaly initialized at time t − τ and verifying at time t, where τ is the lead time; time is measured in units of days. If E is the ensemble size, then the mean of the lagged ensemble is defined as follows:
$$\bar{f}_{t,\tau} = \frac{1}{E} \sum_{e=0}^{E-1} f_{t,\,\tau + e/4}, \qquad (1)$$
where the increment 1/4 arises because hindcasts were initialized 6 h apart (i.e., 1/4 of a day apart).
If a variable is not predictable, then the ensemble members would be independent and the expected variance of the ensemble mean would be 1/E times the expected variance of the climatological distribution. The standard test for this hypothesis is analysis of variance (ANOVA; Rowell 1998). To test the null hypothesis of no predictability, ANOVA uses the statistic
$$F = \frac{\sigma_S^2}{\sigma_N^2}, \qquad (2)$$
where σ_S² is an estimate of the variance of ensemble means (i.e., "signal"), given by
$$\sigma_S^2 = \frac{1}{T-1} \sum_{t=1}^{T} \left( \bar{f}_{t,\tau} - \langle \bar{f} \rangle \right)^2, \qquad (3)$$
with T the number of verification times and ⟨f̄⟩ the average of the ensemble means over all times, and σ_N² is an estimate of the variance about the ensemble means (i.e., "noise"), given by
$$\sigma_N^2 = \frac{1}{T(E-1)} \sum_{t=1}^{T} \sum_{e=0}^{E-1} \left( f_{t,\,\tau + e/4} - \bar{f}_{t,\tau} \right)^2. \qquad (4)$$

If the noise perturbations are independent and identically distributed Gaussian random variables, then F follows an F distribution with T − 1 and T(E − 1) degrees of freedom, which can be used to test significance. Unfortunately, the independence assumption is unrealistic for forecasts initialized a few days apart because large-scale fields tend to be serially correlated on daily time scales. Therefore, the standard hypothesis test is not appropriate for subseasonal forecasts. We propose a block permutation test for assessing predictability. Specifically, under the null hypothesis of no predictability, the forecasts would be exchangeable in the sense that each value of F obtained from a permutation of (independent) samples is equally likely. Accordingly, we construct a permuted ensemble by drawing forecasts from random years. Importantly, the entire sequence of forecasts within a year is drawn, ensuring that the serial correlation across consecutive days is preserved. This sampling is tantamount to randomly permuting (or "shuffling") the years assigned to the forecasts. The statistic F is computed for the permuted ensemble, and this procedure is repeated many times (i.e., 10 000 times). The rank of the F obtained from the unpermuted ensemble is evaluated relative to the values of F for the permuted ensembles. Under the hypothesis of exchangeability, the rank is uniformly distributed. The actual lagged ensemble is said to be predictable if the observed value of F exceeds the 95th percentile of the F values obtained from permuted samples.
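A minimal sketch of the ANOVA statistic in (2) and the block permutation test follows. It assumes one plausible reading of the shuffling described above, namely that each lagged member's year-long forecast sequence is drawn from an independently permuted set of years, which destroys the common signal across members while preserving day-to-day serial correlation; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def anova_f(ens):
    """ANOVA statistic F in (2) for an ensemble array of shape (T, E):
    variance of ensemble means over variance about ensemble means."""
    T, E = ens.shape
    means = ens.mean(axis=1)
    sig2 = means.var(ddof=1)                                   # signal, (3)
    noise2 = ((ens - means[:, None]) ** 2).sum() / (T * (E - 1))  # noise, (4)
    return sig2 / noise2

def permutation_pvalue(ens_by_year, n_perm=1000):
    """Block permutation test: `ens_by_year` has shape (Y, D, E) --
    years x days-within-year x lagged members.  Whole-year sequences are
    shuffled independently for each member, preserving serial correlation."""
    Y, D, E = ens_by_year.shape
    f_obs = anova_f(ens_by_year.reshape(Y * D, E))
    count = 0
    for _ in range(n_perm):
        perm = np.empty_like(ens_by_year)
        for e in range(E):
            # draw this member's entire year-long sequences from random years
            perm[:, :, e] = ens_by_year[rng.permutation(Y), :, e]
        if anova_f(perm.reshape(Y * D, E)) >= f_obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

For predictable data (a signal shared across members) the observed F far exceeds the permuted values, giving a small p value.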

c. Measure of skill

Skill refers to the degree to which a forecast predicts the observed variable. Two standard measures of skill are mean square error and correlation ρ. Significance tests for skill based on mean square error have been discussed by DelSole and Tippett (2014), while those based on correlation are standard. Unfortunately, these tests are not appropriate for forecasts initialized at daily intervals because of the serial correlation mentioned above. We again apply a permutation method in which the year labels for the observations are randomly permuted. By selecting the entire sequence of observations within a year, the serial correlation between observations on daily time scales is preserved. After shuffling the year labels for the observations, the correlation coefficient between forecasts and shuffled observations can be computed. This procedure is repeated many times (i.e., 10 000 times) to build up an empirical distribution for the correlation under the null hypothesis of independence. The 95th percentile of the resulting samples then defines the 5% significance threshold value for the correlation coefficient.
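The corresponding permutation threshold for correlation skill can be sketched in the same way; the names are illustrative, and the essential step is that whole-year blocks of observations are shuffled so that daily serial correlation survives:

```python
import numpy as np

rng = np.random.default_rng(1)

def corr_skill_threshold(fcst, obs_by_year, n_perm=1000, q=0.95):
    """5% significance threshold for correlation skill, obtained by
    randomly permuting the year labels of the observations.

    fcst : flattened forecast series of shape (Y*D,)
    obs_by_year : observations of shape (Y, D), years x days-within-year
    """
    corrs = []
    for _ in range(n_perm):
        # shuffle whole-year blocks, preserving within-year serial correlation
        shuffled = obs_by_year[rng.permutation(obs_by_year.shape[0])].ravel()
        corrs.append(np.corrcoef(fcst, shuffled)[0, 1])
    return np.quantile(corrs, q)
```

A forecast is deemed skillful when its actual correlation with the unshuffled observations exceeds this threshold.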

d. Predictable component analysis

In some cases, none of the time series for the Laplacian eigenvectors can be predicted with skill. However, this result does not prove that there is no skill, because it is possible that some linear combination of eigenvectors can be predicted with skill. To test this possibility, we find the linear combination of eigenvectors that maximizes the predictability measure F in (2). This procedure is formally equivalent to predictable component analysis (see DelSole and Tippett 2007 for a review). We briefly review this procedure to clarify its application in our particular situation. Let the weights of the linear combination be w_1, …, w_M, such that the quantity being forecast is the following:
$$\phi_{t,\tau} = \sum_{m=1}^{M} w_m f_{t,\tau,m}, \qquad (5)$$
where f_{t,τ,m} is the forecast anomaly for the mth Laplacian eigenvector. If the weights are collected into the vector w, then the predictability measure (2) can be written equivalently as follows:
$$F = \frac{\mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_S \mathbf{w}}{\mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_N \mathbf{w}}, \qquad (6)$$
where Σ_S and Σ_N are covariance matrices for the ensemble mean and residuals about the ensemble mean, respectively, defined as
$$\boldsymbol{\Sigma}_S = \frac{1}{T-1} \sum_{t=1}^{T} \left( \bar{\mathbf{f}}_{t,\tau} - \langle \bar{\mathbf{f}} \rangle \right) \left( \bar{\mathbf{f}}_{t,\tau} - \langle \bar{\mathbf{f}} \rangle \right)^{\mathsf T} \qquad (7)$$
and
$$\boldsymbol{\Sigma}_N = \frac{1}{T(E-1)} \sum_{t=1}^{T} \sum_{e=0}^{E-1} \left( \mathbf{f}_{t,\,\tau+e/4} - \bar{\mathbf{f}}_{t,\tau} \right) \left( \mathbf{f}_{t,\,\tau+e/4} - \bar{\mathbf{f}}_{t,\tau} \right)^{\mathsf T}, \qquad (8)$$
where f_{t,τ} denotes the vector of forecast anomalies for the M Laplacian eigenvectors. To find an extremum, we differentiate (6) with respect to w:
$$\frac{\partial F}{\partial \mathbf{w}} = \frac{2 \boldsymbol{\Sigma}_S \mathbf{w}}{\mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_N \mathbf{w}} - \frac{2 \left( \mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_S \mathbf{w} \right) \boldsymbol{\Sigma}_N \mathbf{w}}{\left( \mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_N \mathbf{w} \right)^2} \qquad (9)$$
$$= \frac{2}{\mathbf{w}^{\mathsf T} \boldsymbol{\Sigma}_N \mathbf{w}} \left( \boldsymbol{\Sigma}_S \mathbf{w} - \lambda \boldsymbol{\Sigma}_N \mathbf{w} \right), \qquad (10)$$
where λ is the value of F for the linear combination defined by the weights w. If Σ_N is positive definite, which is typically true when the number of Laplacian eigenvectors is much smaller than the sample size, then the derivative vanishes when w satisfies the generalized eigenvalue problem:
$$\boldsymbol{\Sigma}_S \mathbf{w} = \lambda \boldsymbol{\Sigma}_N \mathbf{w}. \qquad (11)$$
It can be proven that if the eigenvalues (and corresponding eigenvectors) are ordered in descending order, then the first eigenvector maximizes F, the second maximizes F subject to being uncorrelated with the first (in a sense defined shortly), and so on. Moreover, the eigenvalues give the corresponding maximized F values. These solutions define the predictable components, the first of which will be called the "most predictable component." Each eigenvector can be substituted in (5) to define the time series associated with that component. Because the covariance matrices are symmetric, the resulting time series for different components are uncorrelated. The spatial structure of a predictable component is obtained from regression. The regression coefficient between the predictable component time series in (5) and the mth Laplacian eigenvector is
$$r_m = \frac{\operatorname{cov}\!\left( f_{t,\tau,m},\, \phi_{t,\tau} \right)}{\operatorname{var}\!\left( \phi_{t,\tau} \right)}. \qquad (12)$$
The Laplacian eigenvectors are then summed using the weights specified in the vector r. Note that a regression coefficient can be computed for the mth Laplacian eigenvector even if that vector was not included in the optimization procedure discussed above (i.e., when m > M). We use 20 Laplacian eigenvectors to construct the spatial pattern. This choice effectively imposes a prescribed level of spatial smoothing for the regression pattern.
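Under the definitions in (6)–(11), the maximization reduces to a generalized symmetric eigenproblem. The following sketch assumes a forecast array of shape (times, members, eigenvector amplitudes); names and layout are illustrative, not the authors' code:

```python
import numpy as np
from scipy.linalg import eigh

def predictable_components(ens):
    """Predictable component analysis sketch.  `ens` has shape (T, E, M):
    verification times x ensemble members x Laplacian-eigenvector amplitudes.
    Solves the generalized eigenproblem  Sigma_S w = lambda Sigma_N w  of (11)."""
    T, E, M = ens.shape
    means = ens.mean(axis=1)                    # (T, M) ensemble means
    anom = means - means.mean(axis=0)
    sigma_s = anom.T @ anom / (T - 1)           # signal covariance, as in (7)
    resid = (ens - means[:, None, :]).reshape(T * E, M)
    sigma_n = resid.T @ resid / (T * (E - 1))   # noise covariance, as in (8)
    lam, w = eigh(sigma_s, sigma_n)             # generalized symmetric problem
    order = np.argsort(lam)[::-1]               # most predictable component first
    return lam[order], w[:, order]
```

The eigenvalues are the maximized F values, and each weight vector defines a component time series via (5).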

Note that the above procedure yields a complete set of predictable components for each lead time τ. This lead time dependence is sensible because predictability is characterized by different patterns at different time scales. An alternative approach is to characterize predictability over all time scales, which can be done by maximizing a measure of predictability integrated over all lead times. This approach is called average predictability time (APT; DelSole and Tippett 2009) analysis. APT analysis is not used here because we want to demonstrate the existence of predictability specifically for the week-3–4 forecasts. Although APT analysis can find predictable components on subseasonal time scales, testing the hypothesis of predictability on subseasonal time scales is not straightforward because the integral includes the short weather lead times that are predictable. By applying predictable component analysis for only one lead time, subseasonal predictability can be tested in isolation from predictability on other time scales.

The sampling distribution of the maximized F values (i.e., the eigenvalues) under the null hypothesis of no predictability can be estimated using a permutation technique similar to that described above, in which the year labels assigned to forecasts are randomly permuted. The only extra step is that instead of drawing a single variable, an entire M-dimensional vector is drawn, corresponding to the amplitudes of the M Laplacian eigenvectors for the relevant forecast. Again, an essential element of the technique is to draw the entire sequence of forecasts within a year for the M eigenvectors, which preserves the serial correlation on daily time scales. After generating a mock ensemble forecast dataset comprising T time steps and E ensemble members, the covariance matrices are computed and the generalized eigenvalue problem in (11) is solved. This process is repeated many times (i.e., 10 000 times) to build up an empirical distribution for the eigenvalues.

4. Results

The correlation skill of 4-day lagged ensembles of week-3–4 temperature and precipitation hindcasts over CONUS during January and July is shown in Fig. 3. Statistically insignificant values at the 5% level (according to the permutation test) are masked out. The figure shows that winter temperature and precipitation and summer temperature are skillfully predicted by the CFSv2 over a third to a half of the area of CONUS. Summer precipitation shows effectively no skill (e.g., the numbers of positive and negative correlations are approximately equal). Although some negative correlations are statistically significant in a local sense, we do not believe them to be field significant.

Fig. 3.

Correlation skill of week-3–4 temperature and precipitation CFSv2 hindcasts over CONUS during January and July from 1999 to 2010 (12 yr). The hindcasts are based on a 4-day lagged ensemble (comprising 16 members drawn from 4× daily hindcasts). Values that are statistically insignificant at the 5% level (according to the permutation test) are masked out. The percentage area with significant correlation skill (positive and negative) is indicated in the title of each panel.

Our goal is to diagnose the predictability and skill shown in Fig. 3 in terms of large-scale spatial structures. The predictability and skill of individual Laplacian eigenvectors of January temperature as a function of ensemble size are shown in Fig. 4. Qualitatively similar results are obtained for other variables and time periods. Not surprisingly, predictability decreases with ensemble size because each additional member is initialized farther from the target and therefore contains more noise. The signal-to-noise ratio (SNR) decreases by a factor of 2–3 from a 12-h to a 4-day lagged ensemble. In contrast, the skill tends to increase with ensemble size, provided the skill is sufficiently large.

Fig. 4.

(top) Predictability and (bottom) skill of week-3–4 CFSv2 hindcasts of January temperature for individual eigenfunctions as a function of ensemble size (measured in days spanned by the lagged ensemble). The numbered labels indicate the Laplacian eigenfunction.

The predictability of week-3–4 temperature and precipitation CFSv2 hindcasts projected onto individual Laplacian eigenvectors is shown in Fig. 5. Predictability is quantified by the SNR derived from the statistic F defined in (2). We use E = 2, which is equivalent to analyzing differences between hindcasts initialized 6 h apart, as done in weather prediction studies (Simmons and Hollingsworth 2002). The figure reveals that several spatial structures of winter temperature and precipitation and summer temperature are predictable. Summer precipitation also is predictable but for fewer spatial structures. In general, temperature is more predictable than precipitation, and winter is more predictable than summer. Note, however, that precipitation is more predictable than temperature for certain components during winter (e.g., the eighth and ninth components).

Fig. 5.

Predictability (as measured by the SNR) of week-3–4 temperature and precipitation hindcasts over the CONUS during January and July from the CFSv2 for individual Laplacian eigenvectors (the first 10 of which are shown in Fig. 1). Different symbols correspond to different variables and months, as indicated in the bottom legend. The dashed lines show the 5% significance threshold estimated from 10 000 permutation samples, using the color corresponding to the relevant variable and month (e.g., the black dashed shows the significance thresholds for the black dots corresponding to January temperature). The SNRs below the smallest significance threshold are not shown.

Although the above results demonstrate week-3–4 predictability, this result does not necessarily imply that the associated hindcasts are skillful (i.e., that the hindcasts can predict observed anomalies with skill). In most cases, mean square error shows no significant skill. Accordingly, we consider skill based on correlation, which is invariant to linear transformations of the forecast and thus does not penalize biases or errors in forecast amplitude. The skill of the hindcasts based on a 4-day lagged ensemble is shown in Fig. 6. The figure shows that many spatial structures of winter temperature and precipitation and summer temperature can be predicted with skill by week-3–4 hindcasts. The fact that skill exists for correlation but not for mean square error suggests that an amplitude correction is necessary for skill. Only one spatial structure (i.e., the 19th) of summer precipitation has skill exceeding the relevant significance level, but it is unlikely that it would remain significant after the multiple comparisons required to identify it are taken into account. Thus, we conclude that large-scale week-3–4 winter temperature and precipitation and summer temperature can be predicted with skill, but we find little evidence that large-scale summer precipitation can be predicted with skill at weeks 3–4.

Fig. 6.

Correlation skill of 4-day lagged ensemble hindcasts of week-3–4 temperature and precipitation over CONUS from the CFSv2 for individual Laplacian eigenvectors (the first 10 of which are shown in Fig. 1). The format of the figure is similar to Fig. 5.

Although no individual Laplacian eigenvector has significant skill for summer precipitation, this result does not necessarily imply that summer precipitation cannot be predicted with skill. In particular, it is possible that some linear combination of eigenvectors can be predicted with skill. To test this possibility, we apply predictable component analysis to find linear combinations of Laplacian eigenvectors that maximize predictability. A critical step in this procedure is selecting the number of eigenvectors. This step is tantamount to a model selection problem and is one of the most challenging problems in statistics (Fukunaga 1990; Hastie et al. 2003; Taylor and Tibshirani 2015). Fortunately, we have found that our results are not sensitive to the precise number of eigenvectors over the range examined (we did not look beyond 20). To further validate our results, we have partitioned the years into two parts: a training sample in which the most predictable components are identified and a verification sample onto which the predictable components are projected and used as an independent test of predictability. We find that the predictability and skill in the verification sample tend to saturate after about 9 eigenvectors and remain nearly the same (or even grow) by 20 eigenvectors (not shown). The time series and associated regression pattern corresponding to individual predictable components are virtually independent of the number of eigenvectors once that number exceeds five or so. For subsequent calculations, we use the same number of eigenvectors (viz., nine) for all months and variables.

The maximized signal-to-noise ratios for CFSv2 week-3–4 hindcasts are shown in Fig. 7. As above, we use ensemble size E = 2, corresponding to differences between hindcasts initialized 6 h apart. The shaded area shows the 95% confidence intervals for no predictability. The results suggest that all components are predictable (because all results lie outside the shaded confidence region). However, precipitation components near the trailing end tend to be only marginally significant. There is a "kink" in the signal-to-noise spectra at one or two components, indicating predictability significantly greater than the background significance threshold.

Fig. 7.

Maximized SNRs of CFSv2 week-3–4 hindcasts of temperature and precipitation over CONUS. The maximization is performed using the first nine Laplacian eigenfunctions over CONUS, which are shown in Fig. 1. The shaded region shows the 95% confidence interval for no predictability estimated by permutation methods.

The regression map between the most predictable component time series and relevant field is shown in Fig. 8. The winter temperature and precipitation patterns are similar to the observed ENSO teleconnection patterns derived from monthly means (Yang and DelSole 2012), suggesting that CFSv2 week-3–4 predictability arises from El Niño/La Niña events. The summer temperature pattern also bears some resemblance to model–ENSO teleconnection patterns (e.g., compare to Fig. 7 of Wang et al. 2012), but the correspondence to the summer precipitation pattern is weak.

Fig. 8.

Regression coefficients between the most predictable component time series and the associated variable. The regression map is derived by regressing time series onto the first 20 Laplacian eigenvectors. The choice of 20 imposes an implicit level of spatial smoothing. The pattern is normalized to lie between −1 and 1, and the multiplicative factor to obtain kelvins or mm day−1 for temperature and precipitation, respectively, is indicated in the title above each panel.

The skills of the predictable components are shown in Fig. 9. The figure shows that the most predictable components have skill at weeks 3–4 for winter temperature and precipitation and summer temperature. In contrast, the most predictable component of summer precipitation has no significant skill (it is too small to appear in the figure). About two to three predictable components of winter temperature and precipitation and summer temperature have skill. Confidence intervals for the correlation skills overlap (not shown), indicating that the correlations cannot be distinguished. It follows that the ranking according to skill cannot be determined based on the available data. Thus, the fact that the most predictable component is not the most skillful is not necessarily meaningful.

Fig. 9.

The correlation skill of the predictable components (i.e., components that maximize the SNR) of week-3–4 temperature and precipitation hindcasts from CFSv2 over CONUS. The format of the figure is similar to Fig. 5.

To gain insight into the nature of the predictability and skill, we show in Fig. 10 time series of the most predictable components. These time series confirm that secular trends are small. In addition, for the components with the most skill, the time series exhibit relatively large jumps between years but relatively small fluctuations within a year. This feature suggests that the predictability comes from predicting the overall mean during the month rather than predicting variations within the month. To test this possibility, the forecasts within a month were decomposed into the sum of two terms, a monthly mean plus an anomaly relative to the monthly mean, and then the correlation skill of these two components was computed separately. The result, shown in Fig. 11, shows that skill associated with the monthly mean often dominates. Moreover, skill in predicting the anomalies rarely exceeds 0.35, whereas skill in predicting monthly means frequently exceeds 0.35.
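The monthly-mean/anomaly decomposition of skill described above can be sketched as follows; the helper function and array layout are hypothetical, not the authors' code:

```python
import numpy as np

def decompose_skill(fcst_by_month, obs_by_month):
    """Split week-3-4 series into a monthly mean plus a within-month anomaly
    and correlate each part with observations separately.

    Inputs have shape (n_months, n_days): month instances x days within month.
    Returns (r_mean, r_anom): skill of monthly means and of anomalies."""
    f_mean = fcst_by_month.mean(axis=1, keepdims=True)
    o_mean = obs_by_month.mean(axis=1, keepdims=True)
    r_mean = np.corrcoef(f_mean.ravel(), o_mean.ravel())[0, 1]
    f_anom = (fcst_by_month - f_mean).ravel()   # within-month variations
    o_anom = (obs_by_month - o_mean).ravel()
    r_anom = np.corrcoef(f_anom, o_anom)[0, 1]
    return r_mean, r_anom
```

When the shared signal lives in the monthly means, r_mean is large while r_anom stays near zero, which is the pattern reported in Fig. 11.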

Fig. 10.

Time series of the most predictable components of week-3–4 CFSv2 hindcasts of (left) temperature and (right) precipitation over CONUS. Each time series shows a 2-week mean of a variable: for observations (red), this time series corresponds to a 2-week running mean and serves as verification; for hindcasts (black), each 2-week mean is computed separately by averaging leads 15–28 days of each hindcast initialized on each day of the month. Forecasts initialized on consecutive days in a given month are plotted as a single time series for each year; a time series beginning on the first of the month (indicated by a dot) is disconnected from time series of the previous year. The title of each panel indicates the month and variable of the predictable component. The correlation coefficient between the observed and hindcast time series is indicated in the title of each panel.


Fig. 11.

Skill of predictable components for week-3–4 CFSv2 hindcasts of temperature and precipitation over CONUS. The week-3–4 forecasts within a month were decomposed into the sum of two terms, a monthly mean and an anomaly relative to the monthly mean, and the correlation skill of each component was computed separately. The skill of predicting monthly means is shown on the x axis, and the skill of predicting anomalies is shown on the y axis. The shaded box indicates the area in which both skills are below 0.35 (an approximate 5% significance threshold). The number label indicates the order of the predictable component, and the different colors denote different months and variables, as indicated in the legend key.


Given that predictability appears to be dominated by the monthly mean component, it is reasonable to explore relations with other variables by computing correlations between monthly mean quantities. The simultaneous squared correlation between each predictable component in CFSv2 and the Niño-3.4 index is shown in Fig. 12a. We call this measure R2 because it corresponds to the coefficient of determination of a regression model for predicting the component based on Niño-3.4. Because the Niño-3.4 index is persistent on weekly time scales, its value in the model is very close to its initial value, which in turn is close to the observed value. Thus, these correlations measure the ENSO teleconnections in the model. We see that the most predictable components of winter temperature and precipitation in CFSv2 are highly correlated with ENSO.
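The R2 measure for a single index is simply the squared Pearson correlation, which equals the coefficient of determination of a one-predictor linear regression. A minimal sketch (index and series names are illustrative):

```python
import numpy as np

def squared_correlation(component, index):
    """R2 between a predictable-component time series and a single climate
    index such as Nino-3.4: the squared Pearson correlation, equal to the
    coefficient of determination of a one-predictor regression."""
    r = np.corrcoef(np.asarray(component, float),
                    np.asarray(index, float))[0, 1]
    return r * r
```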

Fig. 12.

The R2 values of the degree of association between CONUS predictable components of CFSv2 with (a) ENSO and (b) MJO activity. ENSO is measured by the observed Niño-3.4 index, while the MJO is measured by the RMM1 and RMM2 indices of Wheeler and Hendon (2004) as derived from the CFSv2. (bottom) The “partial” R2 value, which measures (c) the association with ENSO after the MJO has been regressed out and (d) the association with MJO after ENSO has been regressed out. The horizontal dashed line shows the 5% significance threshold based on 12 samples (i.e., the number of years in the period 1999–2010).


In addition to ENSO, the MJO is often cited as a phenomenon that may give rise to subseasonal predictability (Vitart 2014). To explore this, we compute the coefficient of determination between the predictable component and the RMM1 and RMM2 indices defined in Wheeler and Hendon (2004). These indices were computed from daily CFSv2 fields, then averaged over week-3–4 hindcasts, and then averaged over the month so that a correlation could be computed using only monthly values. The coefficient of determination is the squared correlation between the predictable component and the best linear combination of RMM1 and RMM2, and it measures the fraction of variance of the predictable component that can be predicted from the MJO indices. The values computed from monthly mean quantities are shown in Fig. 12b and reveal that the most predictable components of winter temperature and precipitation are significantly correlated with MJO activity.
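The coefficient of determination for two predictors can be computed from an ordinary least-squares fit of the component on a constant plus both indices. A minimal sketch under the assumption of generic index series (names are illustrative):

```python
import numpy as np

def coefficient_of_determination(y, predictors):
    """R2 of the least-squares fit of y on a constant plus the given
    predictor series, i.e. the squared correlation between y and the best
    linear combination of the predictors (e.g., RMM1 and RMM2)."""
    y = np.asarray(y, float)
    # Design matrix: intercept column followed by one column per predictor.
    X = np.column_stack([np.ones_like(y)] +
                        [np.asarray(p, float) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

If the component is an exact linear combination of the two indices, the returned value is 1.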

It is well known that ENSO and MJO activity tend to be correlated. This correlation confounds the interpretation of pairwise correlations. To clarify the relations further, we quantify the degree of relation after one of the indices has been regressed out. A convenient measure of the degree of relation between Y and Z, after X has been removed, is

R2_{YZ|X} = (SSE_X − SSE_{X,Z}) / SSE_X,    (13)

where SSE_X is the sum of squared errors of a regression prediction of Y based on X, and SSE_{X,Z} is the sum of squared errors of a regression prediction of Y based on X and Z. A constant term is understood to be included in all regression models. The quantity R2_{YZ|X} lies between 0 and 1 and can be interpreted as the fraction of variance of Y explained by Z after the linear relation with X has been removed from all variables. In the case of ENSO after the MJO has been removed (Fig. 12c), only the leading predictable component of winter precipitation shows a significant relation with ENSO. In contrast, the leading component of winter temperature has a significant correlation with ENSO (see Fig. 12a), but not after the MJO has been removed (its R2 of about 0.4 falls just below the significance threshold; see Fig. 12c). This result does not necessarily mean that the leading component of winter temperature is unrelated to ENSO; rather, a relation could exist but the sample size (i.e., 12 yr) may be too small to detect it. In the case of the MJO after ENSO has been removed, shown in Fig. 12d, the third and fourth predictable components of winter precipitation show a significant relation with the MJO.
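The partial R2 defined above can be computed directly from two regression fits, one with and one without the additional predictor. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def sse(y, predictors):
    """Sum of squared errors of the least-squares regression of y on a
    constant plus the given predictor series."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones_like(y)] +
                        [np.asarray(p, float) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def partial_r2(y, x, z):
    """Fraction of the variance of y left unexplained by x that is
    explained by additionally including z: (SSE_x - SSE_xz) / SSE_x."""
    sse_x = sse(y, [x])
    return (sse_x - sse(y, [x, z])) / sse_x
```

When z explains all of the residual variance the measure is 1, and when z adds no information beyond x it is 0.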

For completeness, we note that a similar analysis was performed using the North Atlantic Oscillation (NAO) index. We find that correlations between the NAO and the predictable components are marginally significant but become insignificant once the MJO has been regressed out (not shown).

5. Conclusions

This paper shows that an operational forecast model skillfully predicts week-3–4 temperature and precipitation over the contiguous United States. This skill can be identified at the gridpoint level (about 1° × 1°) and by projecting data onto an orthogonal set of large-scale CONUS patterns (derived from the eigenvectors of the Laplacian operator). An important element of this identification is a permutation significance test that accounts for serial correlation on daily time scales. Skill is detected based on correlation measures but not based on mean square error measures, indicating that an amplitude correction is necessary for skill. Our results differ from those of Li and Robertson (2015), perhaps because we analyzed weeks 3 and 4 together rather than separately and analyzed only one month at a time.

Winter temperature and precipitation tend to have more predictability than their summer counterparts, with summer precipitation having the weakest predictability of all quantities considered in this paper. In addition, the most predictable components were identified by finding linear combinations of Laplacian eigenvectors that maximize the signal-to-noise ratio. The results of this maximization procedure clarify the spatial structure of the predictable variability. The most predictable component during winter effectively represents the model’s ENSO teleconnection pattern. Some predictable components of winter precipitation are associated with MJO activity. The skill of the predictable components is dominated by the skill in predicting the mean value during a month rather than by the skill in predicting anomalies relative to the monthly mean. By explicitly identifying patterns in an operational forecast model that are predictable on subseasonal time scales and demonstrating that these patterns can be predicted with skill in observations, the above results provide a scientific basis for week-3–4 predictions.

Acknowledgments

We thank two reviewers and the editor Joseph Barsugli for helpful comments that led to improved clarity in the final paper. This research was supported primarily by the National Oceanic and Atmospheric Administration, under the Climate Test Bed program (NA10OAR4310264) and the MAPP program (NA14OAR4310184). Additional support was provided by the National Science Foundation (AGS-1338427), National Aeronautics and Space Administration (NNX14AM19G), and the National Oceanic and Atmospheric Administration (NA14OAR4310160). The views expressed herein are those of the authors and do not necessarily reflect the views of these agencies.

REFERENCES

  • Barnston, A., M. K. Tippett, M. L. L’Heureux, S. Li, and D. G. DeWitt, 2012: Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Amer. Meteor. Soc., 93 (Suppl.), doi:10.1175/BAMS-D-11-00111.2.
  • Chen, M., W. Shi, P. Xie, V. B. S. Silva, V. E. Kousky, R. Wayne Higgins, and J. E. Janowiak, 2008: Assessing objective techniques for gauge-based analyses of global daily precipitation. J. Geophys. Res., 113, D04110, doi:10.1029/2007JD009132.
  • DelSole, T., and M. K. Tippett, 2007: Predictability: Recent insights from information theory. Rev. Geophys., 45, RG4002, doi:10.1029/2006RG000202.
  • DelSole, T., and M. K. Tippett, 2009: Average predictability time: Part II: Seamless diagnosis of predictability on multiple time scales. J. Atmos. Sci., 66, 1188–1204, doi:10.1175/2008JAS2869.1.
  • DelSole, T., and M. K. Tippett, 2014: Comparing forecast skill. Mon. Wea. Rev., 142, 4658–4678, doi:10.1175/MWR-D-14-00045.1.
  • DelSole, T., and M. K. Tippett, 2015: Laplacian eigenfunctions for climate analysis. J. Climate, 28, 7420–7436, doi:10.1175/JCLI-D-15-0049.1.
  • Fukunaga, K., 1990: An Introduction to Statistical Pattern Recognition. 2nd ed. Academic Press, 591 pp.
  • Hastie, T., R. Tibshirani, and J. H. Friedman, 2003: Elements of Statistical Learning. Corrected ed. Springer, 552 pp.
  • Hoffman, R. N., and E. Kalnay, 1983: Lagged average forecasting, an alternative to Monte Carlo forecasting. Tellus, 35A, 100–118, doi:10.1111/j.1600-0870.1983.tb00189.x.
  • Johnson, N. C., D. C. Collins, S. B. Feldstein, M. L. L’Heureux, and E. E. Riddle, 2013: Skillful wintertime North American temperature forecasts out to 4 weeks based on the state of ENSO and the MJO. Wea. Forecasting, 29, 23–38, doi:10.1175/WAF-D-13-00102.1.
  • Kistler, R., and Coauthors, 2001: The NCEP–NCAR 50-Year Reanalysis: Monthly means CD-ROM and documentation. Bull. Amer. Meteor. Soc., 82, 247–267, doi:10.1175/1520-0477(2001)082<0247:TNNYRM>2.3.CO;2.
  • Li, S., and A. W. Robertson, 2015: Evaluation of submonthly precipitation forecast skill from global ensemble prediction systems. Mon. Wea. Rev., 143, 2871–2889, doi:10.1175/MWR-D-14-00277.1.
  • Newman, M., P. D. Sardeshmukh, C. R. Winkler, and J. S. Whitaker, 2003: A study of subseasonal predictability. Mon. Wea. Rev., 131, 1715–1732, doi:10.1175//2558.1.
  • Pegion, K., and P. D. Sardeshmukh, 2011: Prospects for improving subseasonal predictions. Mon. Wea. Rev., 139, 3648–3666, doi:10.1175/MWR-D-11-00004.1.
  • Rowell, D. P., 1998: Assessing potential seasonal predictability with an ensemble of multidecadal GCM simulations. J. Climate, 11, 109–120, doi:10.1175/1520-0442(1998)011<0109:APSPWA>2.0.CO;2.
  • Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, doi:10.1175/JCLI-D-12-00823.1.
  • Simmons, A. J., and A. Hollingsworth, 2002: Some aspects of the improvement in skill of numerical weather prediction. Quart. J. Roy. Meteor. Soc., 128, 647–677, doi:10.1256/003590002321042135.
  • Taylor, J., and R. J. Tibshirani, 2015: Statistical learning and selective inference. Proc. Natl. Acad. Sci. USA, 112, 7629–7634, doi:10.1073/pnas.1507583112.
  • Trenary, L., T. DelSole, M. K. Tippett, and K. Pegion, 2017: A new method for determining the optimal lagged ensemble. J. Adv. Model. Earth Syst., doi:10.1002/2016MS000838, in press.
  • Vitart, F., 2014: Evolution of ECMWF sub-seasonal forecast skill scores. Quart. J. Roy. Meteor. Soc., 140, 1889–1899, doi:10.1002/qj.2256.
  • Wang, H., A. Kumar, W. Wang, and B. Jha, 2012: U.S. summer precipitation and temperature patterns following the peak phase of El Niño. J. Climate, 25, 7204–7215, doi:10.1175/JCLI-D-11-00660.1.
  • Wang, W., M.-P. Hung, S. J. Weaver, A. Kumar, and X. Fu, 2014: MJO prediction in the NCEP Climate Forecast System version 2. Climate Dyn., 42, 2509–2520, doi:10.1007/s00382-013-1806-9.
  • Wheeler, M. C., and H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932, doi:10.1175/1520-0493(2004)132<1917:AARMMI>2.0.CO;2.
  • Yang, X., and T. DelSole, 2012: Systematic comparison of ENSO teleconnection patterns between models and observations. J. Climate, 25, 425–446, doi:10.1175/JCLI-D-11-00175.1.
  • Fig. 1.

    Laplacian eigenvectors 2–10 over the contiguous United States.

  • Fig. 2.

    Fraction of variance of observed 2-week means explained by individual Laplacian eigenfunctions 1–20.

  • Fig. 3.

    Correlation skill of week-3–4 temperature and precipitation CFSv2 hindcasts over CONUS during January and July from 1999 to 2010 (12 yr). The hindcasts are based on a 4-day lagged ensemble (comprising 16 members drawn from 4× daily hindcasts). Values that are statistically insignificant at the 5% level (according to the permutation test) are masked out. The percentage area with significant correlation skill (positive and negative) is indicated in the title of each panel.

  • Fig. 4.

    (top) Predictability and (bottom) skill of week-3–4 CFSv2 hindcasts of January temperature for individual eigenfunctions as a function of ensemble size (measured in days spanned by the lagged ensemble). The numbered labels indicate the Laplacian eigenfunction.

  • Fig. 5.

    Predictability (as measured by the SNR) of week-3–4 temperature and precipitation hindcasts over the CONUS during January and July from the CFSv2 for individual Laplacian eigenvectors (the first 10 of which are shown in Fig. 1). Different symbols correspond to different variables and months, as indicated in the bottom legend. The dashed lines show the 5% significance threshold estimated from 10 000 permutation samples, using the color corresponding to the relevant variable and month (e.g., the black dashed shows the significance thresholds for the black dots corresponding to January temperature). The SNRs below the smallest significance threshold are not shown.

  • Fig. 6.

    Correlation skill of 4-day lagged ensemble hindcasts of week-3–4 temperature and precipitation over CONUS from the CFSv2 for individual Laplacian eigenvectors (the first 10 of which are shown in Fig. 1). The format of the figure is similar to Fig. 5.

  • Fig. 7.

    Maximized SNRs of CFSv2 week-3–4 hindcasts of temperature and precipitation over CONUS. The maximization is performed using the first nine Laplacian eigenfunctions over CONUS, which are shown in Fig. 1. The shaded region shows the 95% confidence interval for no predictability estimated by permutation methods.

  • Fig. 8.

    Regression coefficients between the most predictable component time series and the associated variable. The regression map is derived by regressing time series onto the first 20 Laplacian eigenvectors. The choice of 20 imposes an implicit level of spatial smoothing. The pattern is normalized to lie between −1 and 1, and the multiplicative factor to obtain kelvins or mm day−1 for temperature and precipitation, respectively, is indicated in the title above each panel.

