## 1. Introduction

In view of recent scientific developments and a growing societal reliance on seasonal climate forecasts, it is time to reexamine the seasonal predictability of weather statistics and the contribution to predictive skill from dynamical general circulation models (GCMs). The last few years have seen rapid advances in seasonal climate prediction, driven by the expansion of information about the climate system and a better understanding of its physical mechanisms. The improvements include the availability of more and better data, from both observations and dynamical models, opening possibilities for studying climate variability and predictability over broader regions with better spatial and temporal resolution. For example, it has become possible to describe, study, and predict interannual–interdecadal variability in nontraditional variables derived from daily data. Traditionally, seasonal precipitation forecasting efforts have concentrated on seasonal total precipitation and used El Niño–Southern Oscillation (ENSO) as the primary predictor (e.g., Barnston et al. 1994). However, ENSO influences the seasonal probability distribution functions (PDFs) of daily temperature and precipitation in many regions of the contiguous United States (Gershunov and Barnett 1998a; Gershunov 1998; Smith and Sardeshmukh 2000). This is especially true for the tails of the PDFs—the frequencies of daily extreme temperatures, heavy precipitation, and streamflow events (Gershunov 1998; Cayan et al. 1999). Moreover, climate patterns evolving on longer timescales modulate ENSO-related predictability (Gershunov and Barnett 1998b; Gershunov et al. 1999; Minobe and Mantua 1999), and it is quite likely that modes of climate variability other than ENSO exert a strong enough influence on seasonal weather statistics to provide significant seasonal predictability in non-ENSO years.
Our aim here is to construct reasonable forecasting models that take advantage of all relevant climate modes to predict seasonal statistics of daily precipitation and to evaluate these models specifically for non-ENSO as well as for ENSO years.

Several large ENSO events of recent years have been observed in unprecedented detail. Unfortunately, this has not yet contributed significantly to the long-range predictability of ENSO events themselves; that is, it has not rendered ENSO itself reasonably predictable before the actual onset of an event (Landsea and Knaff 2000). Once an event is set in motion, however, usually in the boreal summer, ENSO excursions into warm El Niño and cold La Niña extremes continue to develop in a sufficiently predictable fashion through the following winter. This phase locking to the seasonal cycle, together with an understanding of how seasonal ENSO extremes affect regional climate statistics around the world, forms the basis of ENSO-related hydrologic predictability.

There is also a growing realization that other modes of climate variability are important drivers of climate signals, with or without contemporaneous ENSO extremes. Such modes, or so-called oscillations, relevant to North American climate include the North Pacific Oscillation (NPO; Mantua et al. 1997; Minobe 1997, 1999), the North Atlantic Oscillation (NAO; Hurrell and van Loon 1997; Stephenson et al. 2000), and the related Arctic Oscillation (AO; Thompson and Wallace 1998; Ambaum et al. 2001; Dommenget and Latif 2002). The NAO varies on timescales ranging from intraseasonal to interdecadal. Most of this variability is currently unpredictable. Its low-frequency time evolution may be predictable (e.g., Mokhov et al. 2000), but explains little variance. The more slowly evolving NPO can be predicted with persistence (Barnett 1981; Kushnir et al. 2002) and perhaps with simple dynamical models as well (Schneider and Miller 2001). Whether or not these are legitimate self-contained climate “modes” or regional manifestations of global red noise (Hasselmann 1976), the important point for predictability is that they are manifested in large-scale slowly evolving SST patterns and that some of them have a consistent influence on Pacific–North American atmospheric circulation and U.S. climate in their own right (e.g., Mantua et al. 1997; Hurrell and van Loon 1997) and as strong modulators of ENSO signals (Gershunov and Barnett 1998b; Gershunov et al. 1999; Minobe and Mantua 1999; Bonsal et al. 2001). This fact can be exploited for long-range hydrologic prediction. A reasonable prediction system for North American precipitation, therefore, can employ these patterns in SST as predictors with or without explicit dynamical accounting of the atmospheric circulation as an intermediate step.

Advances in forecasting methodology have also taken place giving rise to at least three fundamental approaches to regional seasonal forecasting: fully statistical, fully dynamical, and hybrid techniques (e.g., Barnston and Smith 1996; Chen et al. 1999; Gershunov et al. 2000). Here, we construct and compare “optimal” hybrid and fully statistical forecasting models that make use of many climate modes relevant for contiguous U.S. precipitation as well as the presence or absence of ENSO forcing. These models do not depend on the assumption of climatic stationarity, even in the purely statistical approach, so any existing trends can be sources of predictability. We do not consider fully dynamical techniques (i.e., regional dynamical models nested within the GCM grid) because these are currently inferior to the statistical and hybrid methodologies for practical and theoretical reasons (Gershunov et al. 2000; Chen 2002).

The main predictand variable considered here is the seasonal frequency of heavy daily precipitation events. Section 2 describes the data; section 3, the methodology. Section 4 gives a feel for the statistical component of the prediction methodology and for the most relevant sources of wintertime predictability by applying the methodology in a diagnostic mode. Section 5 assesses the statistical and hybrid approaches in specification mode, presenting skill maps and optimization surfaces for January–February–March (JFM; hereafter all three-month periods are designated by the first letters of their months) as well as the seasonal cycle of field-averaged skill. Section 6 evaluates statistical predictability of heavy daily precipitation frequency based on SST at lead times of up to six months. Section 7 contains discussion and conclusions.

## 2. Data

### a. Predictands

The predictand variables considered here are derived using the serially complete (no missing values) daily station data compiled by Eischeid et al. (2000) from the National Climatic Data Center (NCDC) summary of the day (TD3200) dataset, quality controlled according to Reek et al. (1992), and recently updated and expanded to the entire contiguous United States. These data suffer from known problems such as gauge undercatch due to wind-induced turbulence at the gauge orifice and wetting losses on the internal walls of the gauge (Groisman and Legates 1994). Extreme values may suffer most from undercatch bias as they tend to be accompanied by strong winds. This problem is exacerbated in northern and mountainous regions in winter during snowfall. The spatial interpolation of missing values is another source of error, although there appears to be no systematic bias in the estimation procedures (Eischeid et al. 2000). A random subsample of 262 stations (circles in Figs. 1c,f,i) was extracted from a total of 4397 stations. Subsampling was performed to reduce the computational load while giving adequate regional coverage for the entire contiguous United States. For applications requiring better spatial forecast resolution, more stations can be used regionally to achieve the desired forecast resolution at the expense of computational efficiency, but without compromising skill.

A “heavy” precipitation event is defined as daily precipitation total exceeding the 50th, 75th, or 90th percentile of the seasonal local (station) 50-yr (1950–99) climatology (P50, P75, or P90, respectively). No theoretical probability distribution is assumed for the daily data. Quantiles are based on the empirical PDFs as in Gershunov (1998). Because these frequency–intensity variables are referred to local station PDF quantiles, they suffer less from systematic measurement bias than do raw magnitudes. In any case, signals due to known modes of climate variability resolved by the set of stations used here compared well with signals derived from the high-quality U.S. Historical Climatology Network dataset used in previous work (e.g., Gershunov and Barnett 1998a,b).

Certainly, seasonal total precipitation (Ptot) as well as above-median daily precipitation frequency (P50) are better resolved, more stable, better behaved, and usually more predictable variables than frequencies of more extreme precipitation (see discussion in section 5c). P90 is defined on smaller samples and is noisier than P75, which is still noisier and thus harder to predict than P50. However, in what follows, we focus more attention on the more extreme P90 because heavier precipitation events are more related to floods, are more important to predict, and generally present more of a challenge.
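The quantile-based event counting defined above can be sketched in a few lines; this is a minimal illustration assuming a serially complete daily record at a single station, with function and variable names that are ours rather than the authors':

```python
import numpy as np

def heavy_event_frequency(daily_precip, pct=90.0):
    """Seasonal frequency of daily precipitation exceeding a local
    empirical percentile threshold (e.g., P90 for pct=90). As in
    Gershunov (1998), no theoretical distribution is assumed: the
    threshold comes from the empirical PDF of the full climatology.

    daily_precip : array of shape (n_years, n_days_per_season),
                   daily totals at one station for one season.
    Returns (threshold, counts), where counts[y] is the number of
    days in year y exceeding the climatological threshold.
    """
    # Empirical percentile of the pooled seasonal climatology.
    threshold = np.percentile(daily_precip.ravel(), pct)
    counts = (daily_precip > threshold).sum(axis=1)
    return threshold, counts
```

In practice the same routine would be applied per station and per season, with pct set to 50, 75, or 90 for P50, P75, and P90, respectively.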

### b. Predictors

SST data (Reynolds and Smith 1994) cover the common time period 1950–99. These data are used as predictor fields for the statistical forecast models. They are the same data used to force the atmospheric GCMs (AGCMs) used here in the hybrid approach.

Atmospheric circulation fields used as predictors in the statistical component of the hybrid approach are 500-mb heights from the ECHAM3 (Kaurola 1997) and the Community Climate Model version 3 (CCM3; Kiehl et al. 1998) AGCMs, both for the common period 1950–99, and from the National Aeronautics and Space Administration's (NASA's) Seasonal-to-Interannual Prediction Project (NSIPP) AGCM (Roads et al. 2003) for the period 1961–99. All AGCM data are 10-member ensemble averages resolved on the T42 grid (roughly 2.8° × 2.8°). We use Atmospheric Model Intercomparison Project (AMIP)-type integrations: AGCMs forced with observed, as opposed to predicted, SST. These are all “base” runs that form the climatology for making anomalies for the International Research Institute for Climate Prediction (IRI) forecasts. The 500-mb heights from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis (Kalnay et al. 1996) are also used for the 50-yr common time period to specify the upper limit on predictability.

## 3. Methods

Hybrid dynamical–statistical and purely statistical models are assessed regarding their ability to specify the seasonal parameters describing daily rainfall PDFs, specifically the tails, for example, P90. To allow direct comparison of hybrid and statistical results, the same statistical model, canonical correlation analysis (CCA), is used to relate climate to the predictand. CCA was originally developed by Hotelling (1935, 1936) to identify and quantify associations between two sets of variables and was initially used in the social sciences; it is not restricted to either discrete or continuous data. Barnett and Preisendorfer (1987) first applied the technique in a climate prediction context. The method itself does not imply a causal direction; theory does. In climate prediction, we use CCA, as originally intended by Hotelling, to match patterns in the predictor field with patterns in the predictand field.
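The core of CCA is finding paired linear combinations of the two fields whose time series are maximally correlated. A minimal numpy sketch, assuming two matrices of (for example) PC scores with years as rows; the function name and interface are ours, not a standard library's:

```python
import numpy as np

def cca(X, Y, k):
    """Minimal canonical correlation analysis via SVD.

    X, Y : arrays of shape (T, p), e.g., leading-PC scores of the
    predictor and predictand fields. Returns the first k canonical
    correlations and the weight matrices A (for X) and B (for Y).
    Illustrative only; an operational CCA would prefilter with EOFs
    and guard against overfitting, as in Barnett and Preisendorfer
    (1987)."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    # Whiten each field via its thin SVD.
    Ux, Sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    # Singular values of the whitened cross-products are the
    # canonical correlations.
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    A = Vxt.T @ np.diag(1.0 / Sx) @ U[:, :k]
    B = Vyt.T @ np.diag(1.0 / Sy) @ Vt.T[:, :k]
    return s[:k], A, B
```

The columns of `Xc @ A` and `Yc @ B` are the canonical variates; their pairwise correlations are the entries of `s`.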

The basic climate forcing (i.e., predictor field) is anomalous SST, related via atmospheric circulation to the hydrologic response. In the purely statistical scheme, the CCA relates SST directly to hydrology, while in the hybrid approach an AGCM is first used to translate the SST forcing dynamically into the atmospheric response, that is, geopotential heights. It is worth noting that geopotential heights are a well-behaved, well-simulated variable in GCMs, while interannual signals in daily precipitation statistics are inadequately simulated by GCMs (Gershunov and Barnett 1998a). Geopotential heights (500-mb level) become the forcing for the statistical (CCA) component. An advantage of using CCA as the statistical component in the hybrid methodology is that it does not weight the collocated grid points more heavily than other points, but rather bases the predictive relationship on large-scale patterns in the predictor field. This gets around the common dynamical downscaling problems of spatial bias and grid-level noise. The circulation patterns need not be true to reality as long as they respond consistently to SST forcing patterns. Therein lies an implicit GCM spatial bias corrector, a feature lacking in dynamical and grid-based or local types of statistical downscaling models. The difference between the hybrid and statistical approaches, then, is that the hybrid approach uses an explicit dynamical atmosphere. Otherwise, the initial forcing, outcome (i.e., predictand), and training period are the same, facilitating direct comparison and thus assessment of the GCMs' contribution to predictive skill.

Our pattern-to-pattern approach identifies and exploits the relevant predictor–predictand patterns that are the manifestations of climatic changes on interannual–interdecadal, or even century (e.g., anthropogenic), timescales. Barnston and Smith (1996) applied the CCA forecasting model in the statistical mode while Gershunov et al. (2000) describe the application of CCA in the hybrid mode. Here, we further improve upon this methodology by adding an optimization procedure to find the adequate model complexity for a specific application, set of predictors, forcing, etc. The methodology is described and discussed below.

### a. Methodology at a glance

**Predictor:** *Hybrid approach*: large-scale seasonal (and ensemble)-average GCM circulation (500-mb heights) from a multiyear global SST-forced GCM ensemble integration. *Statistical version*: (i) contemporaneous observed SST or (ii) antecedent observations (SST anomalies observed in previous months).

**Predictand:** Observed seasonal indices of daily precipitation (e.g., seasonal frequencies of daily extremes).

**Statistical model:**

1. Predictor and predictand fields are prefiltered with *p* principal components (PCs) each (the same number of PCs for simplicity);
2. Patterns of variability in the predictor and predictand fields represented by their respective PCs (*p* of them) are related to each other via *k* canonical correlates derived from CCA. Thus, *k* ≤ *p* ≪ *T*, where *T* is the number of temporal observations available for model training;
3. The optimal statistical model is defined by considering cross-validated measures of skill for all reasonable combinations of *p* and *k* displayed on the skill optimization surface (SOS). The PCs are recalculated for each “leave-one-out” cross-validation iteration.

**Forecast:** *Hybrid*: the global SST field is operationally forecast; the AGCM is forced by the forecast SST; the predictor field is computed; and patterns in the dynamically predicted predictor field are downscaled to the predictand using the optimal statistical model complexity. *Statistical*: (i) SST is forecast as in the hybrid case, or (ii) the antecedent forcing field is observed at the appropriate lead time. A statistical forecast is then constructed, again using the optimal model complexity.
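The statistical-model steps above can be condensed into one function: prefilter both fields with *p* PCs, relate the PC scores through *k* canonical correlates, and map a new predictor state onto the predictand field. This is a minimal numpy sketch under our own naming and shape conventions (years as rows), not the authors' code:

```python
import numpy as np

def pca_cca_predict(Xtr, Ytr, xnew, p, k):
    """PCA-CCA prediction sketch: prefilter predictor and predictand
    fields with p PCs each, relate them via k <= p canonical
    correlates, and predict the predictand field for a new predictor
    state xnew (shape (1, n_predictor_points))."""
    xm, ym = Xtr.mean(0), Ytr.mean(0)
    Xc, Yc = Xtr - xm, Ytr - ym
    # PCA prefilter: the leading p right singular vectors are the EOFs.
    Ex = np.linalg.svd(Xc, full_matrices=False)[2][:p]
    Ey = np.linalg.svd(Yc, full_matrices=False)[2][:p]
    Px, Py = Xc @ Ex.T, Yc @ Ey.T              # PC scores, shape (T, p)
    # CCA via SVD of the cross-products of orthonormalized PC scores.
    Qx, Rx = np.linalg.qr(Px)
    Qy, Ry = np.linalg.qr(Py)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    A = np.linalg.solve(Rx, U[:, :k])          # predictor canonical weights
    B = np.linalg.solve(Ry, Vt.T[:, :k])       # predictand canonical weights
    # Project xnew onto predictor canonical variates, damp by the
    # canonical correlations, and map back to the predictand field.
    u = (xnew - xm) @ Ex.T @ A
    ypc = (u * s[:k]) @ np.linalg.pinv(B)
    return ym + ypc @ Ey
```

The regression step uses the fact that, with orthonormalized variates, the least-squares map from predictor to predictand variates is simply the diagonal matrix of canonical correlations.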

In this paper, we will compare the performance of hybrid and statistical methodologies in specification mode (section 5) while actual forecasting results (section 6) will be based on the statistical methodology using an antecedent forcing field.

### b. Model validation and skill optimization

Validation of statistical and hybrid models is an important part of the prediction effort. We evaluate the fit of our models to the observed historical data by using cross-validated (jackknifed) applications of the model; that is, for each year on the seasonal record, a forecast or specification is constructed with the PCA–CCA models trained on the remaining years. Building on this validation technique, we examine the ability of the hybrid and statistical models to reproduce the observed indices. Skill is defined at each station as the correlation coefficient between the observed and predicted predictand. Cross validation provides an unbiased estimate of skill to accompany any specific forecast made with the hybrid or statistical methodology.
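A minimal sketch of this leave-one-out skill estimate, assuming predictor and predictand arrays with years as rows; the pluggable `fit_predict` interface and all names are our illustration, not the authors' code:

```python
import numpy as np

def jackknife_skill(X, Y, fit_predict):
    """Leave-one-out ("jackknife") cross-validated skill: for each
    year, the model is trained on the remaining years and used to
    predict the held-out year; skill at each station is then the
    correlation between the observed and predicted series.
    `fit_predict(Xtr, Ytr, xnew)` can be any model, e.g., a PCA-CCA
    scheme."""
    T = len(X)
    pred = np.empty_like(Y, dtype=float)
    for t in range(T):
        keep = np.arange(T) != t
        pred[t] = np.asarray(fit_predict(X[keep], Y[keep], X[t:t + 1])).ravel()
    # Per-station observation-prediction correlation.
    yo = Y - Y.mean(0)
    yp = pred - pred.mean(0)
    return (yo * yp).sum(0) / np.sqrt((yo ** 2).sum(0) * (yp ** 2).sum(0))
```

Because each year's prediction never sees that year in training, the resulting correlations are (nearly) unbiased skill estimates.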

Skill is optimized as follows. A metric is chosen to summarize seasonal skill for the entire spatial predictand field (i.e., 262 stations across the contiguous United States). Here, we use field-averaged (all-station) skill, that is, cross-validated prediction–observation correlation. For each reasonable model complexity (i.e., *p*–*k* combination; see section 5b), one value thus summarizes the field-averaged skill. This value is then displayed for each model complexity on a *p*–*k* plot. Such a display defines the SOS. The SOS is then used to choose the optimal model complexity for the season (and forcing type) of interest. The model is recalculated entirely for each model complexity and each cross validation (i.e., each specific seasonal prediction). Therefore, the model (PCs and canonical correlates) need not be exactly the same from one cross validation to another, even when model complexity remains constant. The SOS thus points to an approximate optimal model complexity for a specific predictor–predictand pair, season, and forcing (i.e., a conditional model). An exact optimal model does not exist in practice. We shall see below that the optimal model usually lies in a region of models with similar complexities and skills. In a practical operational context, the differences in skill between models of roughly optimal complexity are insignificant. In any case, the annual progression of seasonal skill can be studied, for example, by plotting the maximum field-averaged skill for each season. Such a seasonal skill curve describes the annual cycle of predictability (see section 5c).
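Constructing the SOS amounts to a grid search over model complexities. A sketch, assuming a caller-supplied `skill_fn(p, k)` that returns the field-averaged cross-validated correlation for one complexity (e.g., by wrapping a jackknifed PCA-CCA model); the function name and interface are ours:

```python
import numpy as np

def skill_optimization_surface(skill_fn, P=17):
    """Tabulate field-averaged cross-validated skill for every model
    complexity 1 <= k <= p <= P and locate the optimum."""
    sos = np.full((P, P), np.nan)          # rows: p, cols: k (1-based)
    for p in range(1, P + 1):
        for k in range(1, p + 1):          # only k <= p is meaningful
            sos[p - 1, k - 1] = skill_fn(p, k)
    p_opt, k_opt = np.unravel_index(np.nanargmax(sos), sos.shape)
    return sos, (p_opt + 1, k_opt + 1)
```

With *P* = 17 this evaluates 153 complexities, consistent with the optimal model typically sitting on a broad plateau rather than a sharp peak.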

Calculation of the SOS and seasonal skill curves is computationally demanding because of cross validation and the many *p*–*k* combinations. In our case, each seasonal curve (Figs. 4–6) summarizes results from 91 800 model estimations [153 *p*–*k* combinations (the sum of 1 through 17) × 50 cross validations × 12 seasons] computed for each predictor–predictand pair. Because the spatial autocorrelation of seasonally expressed climatological fields is considerable, the predictor–predictand patterns involved in the CCA tend to be rather large scale, and results are insensitive to the spatial resolution of the predictor–predictand fields. Accordingly, all predictor fields were spatially averaged on a 5.6° × 5.6° grid. As described in the data section, predictand fields were also reduced (subsampled from over 4000 stations) to calculate the SOSs and seasonal curves. Once optimal seasonal models are defined, the skill maps and/or the operational forecasts can be calculated on better-resolved predictand fields, giving better spatial resolution of forecast and skill. Our experimentation confirms that the field-averaged skill is practically insensitive to changes in the spatial resolution of either the predictor or the predictand fields (results not shown).
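The quoted total follows from a line of arithmetic:

```python
# Number of model estimations behind each seasonal skill curve:
# all p-k combinations with k <= p <= 17, times 50 leave-one-out
# cross validations, times 12 overlapping three-month seasons.
combos = sum(range(1, 18))      # 1 + 2 + ... + 17 = 153 complexities
estimations = combos * 50 * 12
print(estimations)              # 91800
```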

## 4. Diagnostic results: Coupled diagnostics

The prediction system explored here draws on the coupling between large-scale climate and regional hydroclimate variability. To understand the dynamical underpinnings, that is, the physical sources of predictability, the statistical framework (CCA) can be used in diagnostic as well as prognostic modes. Although the main thrust of this paper is toward predictability, it is important first to briefly elucidate the natural, or physical, foundation of this predictability. As an example, consider the results presented in Fig. 1. The leading modes (CC1) of coupled large-scale climate–regional hydroclimate variability, that is, observed 500 mb–P90 (Figs. 1a,b,c), modeled 500 mb–P90 (Figs. 1d,e,f), and observed Pacific SST (PSST)–P90 (Figs. 1g,h,i), all capture a combination of mainly ENSO and the NPO (Mantua et al. 1997; Minobe 1997). This is no surprise, as both are paramount in their joint effect on the hydroclimate of the contiguous United States (Gershunov and Barnett 1998b; Gershunov et al. 1999). CCA parsimoniously summarizes the salient features of these forcings in one leading coupled mode.

Los Niños grandes (big El Niños) of 1998, 1983, and 1958 are prominent, as are the smaller events (1987 and 1992) of the latter half of the record. Las Niñas are more prominent in the early half of the record. This is consistent with the NPO's modulation of ENSO's effect on North American hydroclimate (Gershunov and Barnett 1998b; Gershunov et al. 1999). The discrete late 1970s climate “shift” (Graham 1994) is apparent in the CC1 time evolution (Figs. 1a,d,g). This shift is also consistent with the decadal progression of the NPO (Mantua et al. 1997; Minobe 1997). The common notion that AGCMs do not respond to extratropical SST anomalies (e.g., Lau 1997) has been challenged recently by Kushnir et al. (2002). Lending support to their claim, the North Pacific SST change is also visible in the time series of CC1 for the modeled 500 mb–P90 relationship (Fig. 1d). Apparently, the late 1970s climatic change exerted strong influence on the hydroclimate of the contiguous United States, and specifically on the frequencies of heavier wintertime precipitation events. The ECHAM3 AGCM was able to reproduce the salient circulation anomalies associated with the SST variability linked to observed interannual and decadal changes rooted in the North Pacific (the CCM3 500-mb heights–P90 CC1, not shown, exhibits similar features). The very end of the record (JFM 1999) hints at a shift to pre-1977 NPO conditions observed since fall 1998 (e.g., Gershunov et al. 1999), apparently exacerbated, if not forced, by the strong La Niña of 1999 (see especially Fig. 1g).

The general decline of eastern North Pacific and southeastern U.S. geopotential heights throughout the record and the associated increase in precipitation in the south and central United States (decrease in the extreme Northwest and the Ohio River valley) can also be viewed as a linear trend, consistent with the more continuous view of Graham and Diaz (2001). In any case, several prominent climatic influences on U.S. hydrology are integrated in the leading CC mode. Higher-order coupled modes reflect modifications of the leading coupled pattern as well as possibly distinct forms of climatic variability. It is worth noting that the resemblance between observed and modeled patterns deteriorates at higher-order CCs. The nature of all these climate modes and the degree of their interrelatedness are subjects of vigorous scientific debate and inquiry and, in any case, are beyond the focus of the current work. The diagnostic results displayed in Fig. 1 are presented simply to exemplify the coupled large-scale climate–regional hydroclimate variability on which the hydrologic predictability discussed below is based.

## 5. Statistical and hybrid potential predictability—Precipitation specification skill

In this section, we compare the utility of three predictor fields (observed atmospheric circulation, modeled atmospheric circulation, and observed SST) in their ability to predict P90. Although these skills represent prediction in a statistical sense, all of the predictors used in this section are contemporaneous with P90 (JFM). The skills presented here should therefore not be considered forecast skills in a strict physical sense, but rather specification skills. Given the predictors used here, these specification skills are upper limits on predictability for the hybrid and statistical methods. Skill based on the observed circulation represents the ultimate upper limit on seasonal predictability, attainable only if it were possible to predict the seasonal atmospheric circulation perfectly. The other two examples of specification skill (modeled atmospheric circulation and observed SST) are directly comparable and represent the upper limits of skill attainable with the hybrid and fully statistical methods, respectively, were it possible to predict the seasonal SST field perfectly. True predictability with lagged predictors is considered in section 6.

### a. Specification skill maps: JFM

Figure 2 presents cold season skills expressed as correlations between observed P90 and P90 statistically predicted from observed 500-mb heights (OBS500: upper panels), 500-mb heights modeled with ECHAM3 (ECHAM500: middle panels), and observed PSST (bottom panels). ECHAM3 accounts for the most skill out of the three GCMs considered (see section 5c). The skill obtained from OBS500 heights is presented as the upper limit of field-optimized specification skill only. Figure 2 is further broken down into columns presenting optimal model skill for all 50 yr (1950–99: left panels), ENSO years (El Niño and La Niña extremes: middle panels), and non-ENSO years (right panels). Here, as in Gershunov (1998), ENSO-active years are defined to occur when DJF Niño-3.4 (5°S–5°N, 170°–120°W) SST exceeds its one standard deviation thresholds during the period 1950–99. El Niño winters (JFM) are thus defined to be 1958, 1964, 1966, 1969, 1970, 1973, 1983, 1987, 1992, and 1998; and La Niña winters are 1950, 1951, 1956, 1971, 1974, 1976, 1985, 1989, 1996, and 1999. Results can be summarized as follows:

- Although skill magnitudes vary, the patterns of predictability are rather robust for all years, ENSO years, and non-ENSO years, independent of the specific predictor used. Significant predictability achieved in the Northwest, Great Plains, and the eastern United States is mostly ENSO related.
- The Southwest appears to be predictable even in non-ENSO winters.
- Even if the circulation is known, ENSO still enhances seasonal predictability (cf. Figs. 2b and 2c).
- The optimal statistical model appears to perform at least as well as the optimal hybrid model based on ECHAM500 as predictor (cf. middle and bottom panels of Fig. 2).
- Predictability achieved in non-ENSO winters is intriguing. It is discussed further in the following section.
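The ENSO stratification used above can be sketched as follows; the inputs here are illustrative, while the real criterion uses observed DJF Niño-3.4 SST for 1950–99:

```python
import numpy as np

def classify_enso_winters(years, nino34_djf, n_std=1.0):
    """Classify winters as El Nino / La Nina / neutral by whether the
    DJF Nino-3.4 SST anomaly exceeds +/- n_std standard deviations of
    the record, following the definition in Gershunov (1998)."""
    anom = nino34_djf - nino34_djf.mean()
    thr = n_std * nino34_djf.std()
    nino = [y for y, a in zip(years, anom) if a > thr]
    nina = [y for y, a in zip(years, anom) if a < -thr]
    return nino, nina
```

Years falling in neither list are the “non-ENSO” winters over which the right-hand panels of Fig. 2 are evaluated.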

### b. Skill optimization surface (SOS): JFM

The choices of model complexity for the results presented in Fig. 2 are based on the SOSs presented in Fig. 3. Field-averaged skill from CCA models of various complexities is recorded and displayed on the SOS. Model complexities (*p*, *k*) were allowed to range over 1 ≤ *p* ≤ *P* initial EOF patterns in the predictor and predictand fields and 1 ≤ *k* ≤ *p* CCA paired modes relating them, where *P* = 17 was chosen as the maximum reasonable number of patterns. This choice was made considering the 50-yr sample and the observation that, in our experiments, skill usually declined for models overparameterized beyond this approximate threshold. Optimal model complexity can then be chosen either for all forcings (all years) or for a specific forcing (e.g., ENSO) if the involvement of an important specific forcing is anticipated. For example, if an ENSO extreme is known to be a factor, it is possible to identify the events with appropriate amplitude and timing in the historical record. If the sample of such events is adequately large within the model-training period, skill can be computed using those years only. ENSO-conditional models are constructed with all years' data, but skill values computed on just the ENSO-extreme years yield a more accurate estimate of the conditional forecast or specification skill (conditional on ENSO forcing, in this case; Figs. 3b,e,h). The SOS computed in this way also gives a more accurate estimate of the optimal model complexity required for the relevant climate state.
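The conditional-skill evaluation described above (train on all years, score on the conditioning years only) can be sketched as:

```python
import numpy as np

def conditional_skill(obs, pred, years, cond_years):
    """Per-station skill conditional on a climate state: predictions
    come from a model trained on all years, but the observation-
    prediction correlation is evaluated over the conditioning years
    only (e.g., ENSO extremes). Arrays have shape (years, stations);
    names are our illustration."""
    mask = np.isin(years, cond_years)
    o = obs[mask] - obs[mask].mean(0)
    p = pred[mask] - pred[mask].mean(0)
    return (o * p).sum(0) / np.sqrt((o ** 2).sum(0) * (p ** 2).sum(0))
```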

The simplicity of the PSST–P90 optimal model (*p* = 3, *k* = 3) allows for an easy diagnosis of the source of skill in non-ENSO winters. An examination of the three leading coupled CCA patterns of the JFM PSST–P90 relationship reveals that the leading pattern is responsible for much of the skill in the Southwest (see Fig. 2i), even in non-ENSO years. This pattern (based on three PCs, i.e., *p* = 3) is not shown, but it resembles Fig. 1h, with the P90 relationship more focused on the Southwest in non-ENSO winters. Moreover, a specification experiment based on North Pacific SST only (north of 20°N) yielded a skill map for non-ENSO JFM similar to Fig. 2i in both pattern and magnitude (not shown). We conclude that this strong non-ENSO predictability is due to the part of the NPO uncorrelated with ENSO excursions.

Apparently, the modeled 500-mb heights are sensitive enough to this forcing pattern to account for non-ENSO JFM predictability in the Southwest. We could have guessed that from Figs. 1d–f, since the leading coupled pattern accounts for more than just ENSO variability. Although skill patterns are similar, the statistical model skill is more extensive in the Southwest than that obtained with a hybrid model (cf. Figs. 2f and 2i). In general, the optimal statistical model performs at least as well as the best hybrid model (cf. Figs. 3d–f and 3g–i). The difference in this case may be practically negligible, but it is consistent and can be much larger in other seasons (see below).

In general, Fig. 3 confirms the nonspatial comparative points made in the previous section on the basis of Fig. 2:

- Predictability is rather robust. It is achieved by many models of approximately the optimal complexity required to capture the salient climate signals.
- ENSO contributes strongly to potential predictability even if the average seasonal circulation is known (OBS500 as predictor).
- The statistical model based on PSST performs better than the hybrid model based on any of the GCMs considered here.

### c. Seasonal cycle of potential predictability

Figure 4 displays the seasonal progression of the maximum P90 field-averaged skill corresponding to the optimal-complexity model from the seasonal SOSs. Skills due to seven predictors are displayed for all years (left panels), ENSO years (center panels), and non-ENSO years (right panels). The following observations summarize the seasonal predictability curves presented in Fig. 4:

- Maximum predictability for most predictors occurs in winter: earliest for OBS500 (NDJ), slightly later for PSST (DJF), and latest for GCM-modeled 500-mb heights and lagged PSST (JFM).
- Summertime predictability is generally at a minimum for all predictors. This minimum occurs slightly later, in early fall, for the PSST-based statistical model, and extends farther into fall with hybrid models, especially the one using NSIPP atmospheric circulation.
- ENSO is responsible for a considerable share of predictability throughout the year, except in summer and early fall, when skill is generally at a minimum.
- The difference between OBS500-based skills (the upper limit) and those based on all other predictors is largest in non-ENSO years.
- There is a large difference between ENSO and non-ENSO years' skill for most predictors. Of course, these differences are most pronounced around the seasonal skill maxima. The seasonal cycle of predictability is reinforced by ENSO because of ENSO's persistence and phase locking to the seasonal cycle. Additionally, tropical–midlatitude teleconnections are more efficient in the winter hemisphere, further reinforcing the seasonal cycle of ENSO-related predictability.
- There is no significant difference between the statistical model skill achieved with Pacific-only SST (PSST) and with an SST field including both the Atlantic and Pacific Oceans (APSST).

A note on optimization is in order here. Plotting skill curves from five “suboptimal” models lying in the optimal region of complexity did not visibly change these results (figure not shown). All models lying within the optimal region of complexity exhibit almost identical skill. Our optimization procedure merely identifies an approximate region of optimal complexity.

The seasonal cycle of predictability is further explored in the context of other precipitation statistics and selected predictors for all years. Figure 5 shows that, throughout the year, Ptot and P50 are more predictable than P75, which in turn is more predictable than P90. However, the seasonal cycle of predictability based on each predictor is similar for all predictands; Fig. 5 shows this to be true with (Fig. 5a) OBS500, (Fig. 5b) ECHAM500 and NSIPP500, and (Fig. 5c) contemporaneous PSST as predictors. The fact that P90 is the most difficult variable to predict is not surprising, as it is the noisiest of the four, being defined on the smallest daily precipitation samples. However, we continue to focus on P90 in the following investigation of true predictability because we consider the ability to forecast heavy precipitation events, and extreme weather events in general, more important than forecasting seasonal means or totals. In the following investigation of predictability with lagged PSST, it is worth keeping in mind that still better predictability can be achieved for the less extreme precipitation statistics.
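For readers reconstructing the predictands, here is a minimal sketch of one plausible way to compute these statistics from a station's daily record. It assumes (the paper's precise definitions appear in its data section, not shown here) that P50/P75/P90 count the days exceeding climatological wet-day percentile thresholds and that Ptot is the seasonal total; the 0.1 wet-day cutoff and the data layout are illustrative.

```python
import numpy as np

def seasonal_stats(daily_pr, wet_thresh=0.1):
    """Seasonal precipitation statistics from one station's daily record.

    daily_pr: dict mapping year -> 1-D array of that season's daily totals.
    Returns, per year: Ptot (seasonal total) and P50/P75/P90 (counts of
    days exceeding the climatological 50th/75th/90th percentiles of
    wet-day amounts, pooled over the whole record).
    """
    # climatological wet-day percentile thresholds, pooled over all years
    wet = np.concatenate([p[p > wet_thresh] for p in daily_pr.values()])
    t50, t75, t90 = np.percentile(wet, [50, 75, 90])
    stats = {}
    for yr, p in daily_pr.items():
        stats[yr] = dict(Ptot=float(p.sum()),
                         P50=int((p > t50).sum()),
                         P75=int((p > t75).sum()),
                         P90=int((p > t90).sum()))
    return stats
```

By construction the P90 count is drawn from the smallest sample of days, which is one way to see why it is the noisiest of the four statistics.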

## 6. Statistical predictability—P90 prediction skill

The preceding examination of specification skills achieved with various predictors makes it apparent that a statistical approach based directly on SST forcing is superior to the hybrid approach, in which the atmosphere is modeled dynamically via today's generation of AGCMs. The statistical model bypasses the atmosphere, but the atmosphere is the implicit conveyor of SST forcing information to the continental hydrologic response. There may be applications for which purely statistical approaches will not work, for example, anthropogenic climate change estimation, but for seasonal prediction, purely statistical models are currently preferable. A seasonal forecast can be achieved via contemporaneous SST patterns forecast by a mixture of statistical and dynamical methods, as is done operationally for AGCM-based forecasts. Here, we investigate predictability simply with lagged monthly PSST. This approach assumes that there is persistence in the SST field and/or that SST anomalies evolve in a consistent manner that depends on their previous state: both reasonable assumptions for the tropical and North Pacific Ocean on monthly timescales. Besides ample evidence in the literature, this assumption is also supported by the predictability results. Consider, for example, P90 predictability based on PSST lagged 1 month prior to each 3-month season (e.g., December PSST predicting JFM P90, and so on; see the red line in Fig. 4). This seasonal cycle of statistical predictability is comparable to the hybrid specification skill achieved with the top AGCM throughout the year for all years and forcing types considered (i.e., ENSO and non-ENSO).

In Fig. 6, this statistical predictability result is extended to PSST lags of up to six months. Here again, the seasonal cycle of predictability is strong for all years, and especially strong for ENSO years, when wintertime (DJF and JFM) P90 appears to be predictable at lead times of at least two months with almost the same field-averaged skill as that achieved at lag 0 (January PSST predicting JFM P90, and so on) and by specification (JFM PSST predicting JFM P90). In non-ENSO years, predictability is achieved at much lower skill levels, but it is still consistent for JFM P90 at lead times of up to several months (Fig. 6c).

Let us finally consider the maps of statistical prediction skill achieved via lagged PSST for JFM P90. Figure 7 shows prediction skill maps displayed in the same format and on the same magnitude scale as the specification skills in Fig. 2. As before, the left panels represent all years; the middle panels, ENSO years; and the right panels, non-ENSO years. The top, middle, and bottom rows, however, represent PSST lags of 1, 3, and 5 months, respectively. Again, and at all lags, we see generally the same characteristic predictability patterns that we saw in Fig. 2. At 1-month lead time (Figs. 7a,b,c; December PSST predicting JFM P90), prediction skill magnitudes are comparable to the PSST specification skill (Figs. 2g,h,i). At longer lead times, ENSO-related skill deteriorates in the Southwest (Figs. 7e,h), while non-ENSO-related skill there weakens and shrinks but remains significant out to 5-month lead times (Figs. 7f,i). To be sure, skill generally declines at longer lead times. At 5-month lead, field significance may be low; that is, the proportion of stations with significant skill begins to approach that expected by chance. Yet the characteristic patterns are still visible, and they are unlikely to arise by chance.
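The significance contours on these skill maps correspond to fixed critical correlation values for roughly 50 years of data. The paper does not spell out its test; the snippet below shows one standard choice, a one-sided Fisher-z approximation, which ignores serial correlation (serial correlation would reduce the effective sample size and raise the thresholds).

```python
import numpy as np

def r_critical(n, z_onesided):
    """One-sided critical correlation under the null of zero correlation,
    using the Fisher z approximation: atanh(r) ~ Normal(0, 1/(n - 3))."""
    return float(np.tanh(z_onesided / np.sqrt(n - 3)))

# standard normal quantiles for one-sided 90%, 95%, and 99% levels
levels = {90: 1.2816, 95: 1.6449, 99: 2.3263}
thresholds = {lvl: r_critical(50, z) for lvl, z in levels.items()}
```

With n = 50 this gives critical correlations of about 0.18, 0.24, and 0.33, respectively; field significance can then be judged by whether the fraction of stations exceeding, say, the 95% threshold is well above the 5% expected by chance.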

## 7. Discussion and conclusions

The spatial robustness of specification and prediction skill is reassuring. Wintertime patterns of ENSO-related precipitation predictability have been extensively studied. These U.S. patterns constitute part of the global ENSO-related climate signal (e.g., Kiladis and Diaz 1989). Both statistical and hybrid schemes efficiently use ENSO forcing to skillfully reproduce regional precipitation patterns. It is no surprise that these patterns (Figs. 2b,e,h and 7b,e,h) qualitatively agree with those described in past studies, especially when similar frequency–intensity variables were considered (e.g., Gershunov 1998). The non-ENSO-related predictability, however, is tantalizing (Figs. 2c,f,i and 7c,f,i).

Knowing the circulation, one can make an excellent specification of precipitation, especially in late fall–early winter (Fig. 4). Although the less extreme daily precipitation frequencies and the seasonal total amounts are much better specified (Fig. 5a), P90 is largely determined by OBS500 over large regions of the contiguous United States, even in late winter (Figs. 2a,b,c). It is interesting to note that, even when the seasonal average circulation is known, specification skill is significantly better during extreme ENSO winters. This may be partly due to reduced intraseasonal variability during El Niño winters, as noted by Smith and Sardeshmukh (2000).

In operational prediction, SST is the fundamental forcing. Differences in skill between the hybrid and statistical SST-based approaches may not be large, especially in JFM, but are consistently in favor of the simpler statistical approach. Although both methods account for significant specification skill, a purely statistical model performs better than a hybrid model that includes an explicit dynamical atmosphere modeled with today's state-of-the-art GCMs. Figures 2 and 3 make this clear for JFM. Figures 4 and 5 confirm and strengthen this conclusion for other predictable seasons. We hope that future improvements in dynamical modeling will enhance hybrid predictability beyond the capabilities of statistical models alone.

A result more important than the relative performance of purely statistical versus hybrid prediction schemes is the fact that both approaches capture similar spatial patterns of significant precipitation predictability due to both ENSO and non-ENSO forcing. In winter, predictability due to non-ENSO influences appears to be limited to the southwestern United States and to derive from North Pacific SST variability uncorrelated with ENSO. This includes non-ENSO-related interannual as well as lower-frequency variability in the North Pacific. Diagnostic examination of the coupled SST–precipitation modes responsible for this predictability suggests that trends and climate “shifts” associated with North Pacific interdecadal variability play an important role in influencing the frequencies of heavy precipitation events in the contiguous United States, especially in the Southwest.

Groisman et al. (1998, 2001), Pielke and Downton (2000), Cayan et al. (2001), and Graham and Diaz (2001) have recently documented the existence of trends in U.S. hydroclimate. Our diagnostic results support these findings. As far as prediction is concerned, CCA naturally accounts for possible trends coexisting in the predictor and predictand fields, as it does for any mode of coupled variability, and in so doing circumvents the need to assume a stationary climate, a requirement of composite-type statistical forecasting models (e.g., Gershunov 1998). Trends, therefore, become a source of predictive skill.

Summertime precipitation predictability is at a minimum by any method, even when contemporaneous observed circulation patterns are used as the direct statistical predictor. However, both for OBS500 and for PSST as statistical predictors, the U.S.-average summertime skill for Ptot and P50 is at least as good as the JFM skill for P90 (Fig. 5). Admittedly, prediction of summer precipitation is difficult, but this observation makes us confident that a careful selection of the relevant dynamical predictors, as well as the use of antecedent climate conditions, should yield useful predictability in the warm season, perhaps even for the more extreme precipitation statistics. We are also working to further refine the hybrid approach by including predictors specifically related to summertime precipitation, for example, moisture flux.

In principle, it is possible to improve seasonal predictability for all seasons by enhancing the effectiveness of the methodology proposed here. Skill improvements can be achieved by using more than one predictor variable in either the hybrid or the statistical mode. This includes the option of taking account of the predictors' temporal evolution (i.e., stacking several months of the predictor) leading up to the forecast season. GCM forecast and antecedent observed variables could also be mixed in the same forecasting model at a common lead time. An investigation of physically meaningful combinations of predictor variables with both the statistical and hybrid approaches would make an interesting regional predictability study. Even with our current approach, it is certain that regional predictability can be improved through a more spatially focused optimization procedure. Here, optimization was performed for the entire contiguous United States. It is clear, for example, that even without regionally focused optimization, the seasonal prediction skill would be significantly higher, especially for non-ENSO winters, were it computed for the southwestern United States only.
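The stacking option amounts to concatenating several monthly predictor maps into one matrix before the EOF truncation, so that the predictor's temporal evolution enters the patterns. A minimal sketch, with an assumed (years, months, gridpoints) array layout:

```python
import numpy as np

def stack_lags(monthly, lag_months):
    """Concatenate monthly predictor maps at several lags into one
    predictor matrix: rows are years, columns are gridpoint-lag pairs.

    monthly: array of shape (years, months, gridpoints);
    lag_months: indices of the months to stack, e.g., the three
    months preceding the forecast season.
    """
    return np.concatenate([monthly[:, m, :] for m in lag_months], axis=1)
```

The stacked matrix can then be fed to the same PC truncation and CCA machinery as a single-month predictor.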

We considered seasonal predictability of daily heavy precipitation events. Many other statistics of weather and hydrology are also predictable via similar methods. Streamflow, for example, integrates precipitation and temperature influences over seasons, and forecast lead times can be much longer because the memory of the initial anomalous SST forcing can persist through snow cover, soil moisture, and delayed temperature effects. Statistics of hydrologic extremes, for example, annual peak flow and flood damage, may be amenable to seasonal forecasts, as they are known to be consistently sensitive to modes of climate variability such as trends and ENSO (Katz et al. 2002). Seasonal accumulations of extreme weather events (Domonkos 2001) are also likely candidates for seasonal prediction. Forecast indices can provide specific information to assess the risk of extreme rainfall events, floods, and other stressful weather conditions on a seasonal basis.

## Acknowledgments

Funding by NOAA/GCIP Grant NA77RJ0453 made this work possible. Additional funding from NSF Grant ATM-99-01110, NASA Grant NAG5-8292, and the California Applications Project (NOAA Grant NA17RJ1231) contributed to this research effort. We thank Tim Barnett and Rick Lawford for their personal encouragement and intellectual input. Tim Barnett conceived the original idea for hybrid seasonal prediction. Conversations with Lisa Goddard and Tony Westerling stimulated the creative processes. Thanks are also due to Jon Eischeid for providing the precipitation data and to Mary Tyree and Larry Riddle for help with data processing. NCEP–NCAR reanalysis data were provided by the NOAA–CIRES Climate Diagnostics Center, Boulder, Colorado, via their Web site at http://www.cdc.noaa.gov/. IRI provided their AGCM base runs. Critical comments by Francis Zwiers and two anonymous reviewers improved the manuscript.

## REFERENCES

Ambaum, M. H. P., B. J. Hoskins, and D. B. Stephenson, 2001: Arctic Oscillation or North Atlantic Oscillation? *J. Climate*, **14**, 3495–3507.

Barnett, T. P., 1981: On the nature and causes of large-scale thermal variability in the central North Pacific Ocean. *J. Phys. Oceanogr.*, **11**, 887–904.

Barnett, T. P., and R. Preisendorfer, 1987: Origins and levels of monthly and seasonal forecast skill for United States surface air temperatures determined by canonical correlation analysis. *Mon. Wea. Rev.*, **115**, 1825–1850.

Barnston, A. G., and T. M. Smith, 1996: Specification and prediction of global surface temperature and precipitation from global SST using CCA. *J. Climate*, **9**, 2660–2697.

Barnston, A. G., and Coauthors, 1994: Long-lead seasonal forecasts—Where do we stand? *Bull. Amer. Meteor. Soc.*, **75**, 2097–2114.

Bonsal, B. R., A. Shabbar, and K. Higuchi, 2001: Impacts of low frequency variability modes on Canadian winter temperature. *Int. J. Climatol.*, **21**, 95–108.

Cayan, D. R., K. T. Redmond, and L. G. Riddle, 1999: ENSO and hydrologic extremes in the western United States. *J. Climate*, **12**, 2881–2893.

Cayan, D. R., S. A. Kammerdiener, M. D. Dettinger, J. M. Caprio, and D. H. Peterson, 2001: Changes in the onset of spring in the western United States. *Bull. Amer. Meteor. Soc.*, **82**, 399–415.

Chen, S-C., 2002: Model mismatch between global and regional simulation. *Geophys. Res. Lett.*, **29**, 1060, doi:10.1029/2001GL013570.

Chen, S-C., J. O. Roads, H-M. H. Juang, and M. Kanamitsu, 1999: Global to regional simulations of California wintertime precipitation. *J. Geophys. Res.*, **104**, 31517–31532.

Dommenget, D., and M. Latif, 2002: A cautionary note on the interpretation of EOFs. *J. Climate*, **15**, 216–225.

Domonkos, P., 2001: Temporal accumulations of extreme daily mean temperature anomalies. *Theor. Appl. Climatol.*, **68**, 17–32.

Eischeid, J. K., P. A. Pasteris, H. F. Diaz, M. S. Plantico, and N. J. Lott, 2000: Creating a serially complete, national daily time series of temperature and precipitation for the western United States. *J. Appl. Meteor.*, **39**, 1580–1591.

Gershunov, A., 1998: ENSO influence on intraseasonal extreme rainfall and temperature frequencies in the contiguous United States: Implications for long-range predictability. *J. Climate*, **11**, 3192–3203.

Gershunov, A., and T. Barnett, 1998a: ENSO influence on intraseasonal extreme rainfall and temperature frequencies in the contiguous United States: Observations and model results. *J. Climate*, **11**, 1575–1586.

Gershunov, A., and T. Barnett, 1998b: Interdecadal modulation of ENSO teleconnections. *Bull. Amer. Meteor. Soc.*, **79**, 2715–2725.

Gershunov, A., T. Barnett, and D. Cayan, 1999: North Pacific Interdecadal Oscillation seen as factor in ENSO-related North American climate anomalies. *Eos, Trans. Amer. Geophys. Union*, **80**, 25–30.

Gershunov, A., T. Barnett, D. Cayan, T. Tubbs, and L. Goddard, 2000: Predicting and downscaling ENSO impacts on intraseasonal precipitation statistics in California: The 1997/98 event. *J. Hydrometeor.*, **1**, 201–209.

Graham, N. E., 1994: Decadal-scale climate variability in the tropical and North Pacific during the 1970s and 1980s—Observations and model results. *Climate Dyn.*, **10**, 135–162.

Graham, N. E., and H. F. Diaz, 2001: Evidence for intensification of North Pacific winter cyclones since 1948. *Bull. Amer. Meteor. Soc.*, **82**, 1869–1893.

Groisman, P. Ya, and D. Legates, 1994: The accuracy of United States precipitation data. *Bull. Amer. Meteor. Soc.*, **75**, 215–228.

Groisman, P. Ya, and Coauthors, 1998: Changes in the probability of heavy precipitation: Important indicators of climatic change. *Climatic Change*, **42**, 243–283.

Groisman, P. Ya, R. W. Knight, and T. R. Karl, 2001: Heavy precipitation and high streamflow in the contiguous United States: Trends in the 20th century. *Bull. Amer. Meteor. Soc.*, **82**, 219–246.

Hasselmann, K., 1976: Stochastic climate models. Part I: Theory. *Tellus*, **28**, 473–485.

Hotelling, H., 1935: The most predictable criterion. *J. Educ. Psychol.*, **26**, 139–142.

Hotelling, H., 1936: Relations between two sets of variates. *Biometrika*, **28**, 321–377.

Hurrell, J. W., and H. van Loon, 1997: Decadal variations in climate associated with the North Atlantic Oscillation. *Climatic Change*, **36**, 301–326.

Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. *Bull. Amer. Meteor. Soc.*, **77**, 437–471.

Katz, R. W., M. B. Parlange, and P. Naveau, 2002: Statistics of extremes in hydrology. *Adv. Water Resour.*, **25**, 1287–1304.

Kaurola, J., 1997: Some diagnostics of the northern wintertime climate simulated by the ECHAM3 model. *J. Climate*, **10**, 201–222.

Kiehl, J. T., J. J. Hack, G. B. Bonan, B. A. Boville, D. L. Williamson, and P. J. Rasch, 1998: The National Center for Atmospheric Research Community Climate Model: CCM3. *J. Climate*, **11**, 1131–1150.

Kiladis, G. N., and H. Diaz, 1989: Global climatic anomalies associated with extremes in the Southern Oscillation. *J. Climate*, **2**, 1069–1090.

Kushnir, Y., W. A. Robinson, I. Bladé, N. M. J. Hall, S. Peng, and R. Sutton, 2002: Atmospheric GCM response to extratropical SST anomalies: Synthesis and evaluation. *J. Climate*, **15**, 2233–2256.

Landsea, C. W., and J. A. Knaff, 2000: How much skill was there in forecasting the very strong 1997/98 El Niño? *Bull. Amer. Meteor. Soc.*, **81**, 2107–2119.

Lau, N-C., 1997: Interactions between global SST anomalies and the midlatitude atmospheric circulation. *Bull. Amer. Meteor. Soc.*, **78**, 21–33.

Mantua, N. J., S. R. Hare, Y. Zhang, J. M. Wallace, and R. C. Francis, 1997: A Pacific interdecadal climate oscillation with impacts on salmon production. *Bull. Amer. Meteor. Soc.*, **78**, 1069–1079.

Minobe, S., 1997: A 50–70 year climatic oscillation over the North Pacific and North America. *Geophys. Res. Lett.*, **24**, 683–686.

Minobe, S., 1999: Resonance in bidecadal and pentadecadal climate oscillations over the North Pacific: Role in climatic regime shifts. *Geophys. Res. Lett.*, **26**, 855–858.

Minobe, S., and N. Mantua, 1999: Interdecadal modulation of interannual atmospheric and oceanic variability over the North Pacific. *Progress in Oceanography*, Vol. 43, Pergamon Press, 163–192.

Mokhov, I. I., A. V. Eliseev, D. Handorf, V. K. Petukhov, K. Dethloff, A. Weisheimer, and D. V. Khvorost'yanov, 2000: North Atlantic Oscillation: Diagnosis and simulation of decadal variability and its long-period evolution. *Izv. Atmos. Oceanic Phys.*, **36**, 555–565.

Pielke Jr., R. A., and M. W. Downton, 2000: Precipitation and damaging floods: Trends in the United States, 1932–97. *J. Climate*, **13**, 3625–3637.

Reek, R. S., S. R. Doty, and T. W. Owen, 1992: A deterministic approach to validation of historical daily temperature and precipitation data from the cooperative network. *Bull. Amer. Meteor. Soc.*, **73**, 753–762.

Reynolds, R. W., and T. M. Smith, 1994: Improved global sea surface temperature analyses using optimum interpolation. *J. Climate*, **7**, 929–948.

Roads, J., S-C. Chen, and M. Kanamitsu, 2003: US regional climate simulations and seasonal forecasts. *J. Geophys. Res.*, in press.

Schneider, N., and A. J. Miller, 2001: Predicting western North Pacific Ocean climate. *J. Climate*, **14**, 3997–4002.

Smith, C. A., and P. D. Sardeshmukh, 2000: The effect of ENSO on the intraseasonal variance of surface temperatures in winter. *Int. J. Climatol.*, **20**, 1543–1557.

Stephenson, D. B., V. Pavan, and R. Bojariu, 2000: Is the North Atlantic Oscillation a random walk? *Int. J. Climatol.*, **20**, 1–18.

Thompson, D. W. J., and J. M. Wallace, 1998: The Arctic Oscillation signature in the wintertime geopotential height and temperature fields. *Geophys. Res. Lett.*, **25**, 1297–1300.

Fig. 2. P90 specification skill expressed as correlations between the cross-validated forecasts and observations at stations. All values are displayed on the same range. Uncolored areas are regions of insignificant negative correlations. Skill maps are shown for (a), (d), (g) all years (1950–99); (b), (e), (h) ENSO (cold and warm episodes) years; and (c), (f), (i) non-ENSO years. Rows of panels represent results based on three different predictor fields: (a), (b), (c) OBS500; (d), (e), (f) ECHAM500; and (g), (h), (i) PSST. The three contours represent the 90th, 95th, and 99th percent levels of significance in order of increasing correlations. All skills are obtained with the model complexity optimized for field-averaged skill (see text and Fig. 3). Optimal model complexity (*p,* *k*) is given in the title above each panel.

Citation: Journal of Climate 16, 16; 10.1175/1520-0442(2003)016<2752:HDPFOT>2.0.CO;2


Fig. 3. SOS for JFM P90 displays cross-validated field-averaged skill for all combinations of patterns (PCs: 1 ≤ *p* ≤ 17) and relationships between these patterns (CCs: 1 ≤ *k* ≤ *p*). Field-averaged skill is summarized as the average station correlation between predicted (cross validated) and observed P90 for (a), (d), (g) all 50 winters (1950–99); (b), (e), (h) ENSO winters (20 cold and warm episodes); and (c), (f), (i) other, or non-ENSO, winters. As in Fig. 2, the predictors are arranged as follows: (a), (b), (c) OBS500; (d), (e), (f) ECHAM500; and (g), (h), (i) PSST. All results are displayed on the same color scale for simple comparison.


Fig. 4. Annual cycle of P90 specification and prediction skill, shown as the max field-averaged skill from the SOSs for all 12 three-month seasons based on seven predictors, six contemporaneous (OBS500: black solid line; ECHAM500, CCM500, NSIPP500: green solid, dotted, and dashed lines, respectively; APSST and PSST: blue dotted and solid lines, respectively) and one lagged (PSST in the month preceding each 3-month season: red solid line). Field-averaged skill for (a) all 50 yr (39 for NSIPP500), (b) ENSO years, and (c) non-ENSO years. The dashed vertical line in (b) separates the seasons where ENSO years were defined as the calendar years following the onset of a warm or cold event, Y(1), and the calendar years of onset, Y(0).


Fig. 5. Annual cycles of P90, P75, P50, and Ptot specification skill, shown for the entire 50 yr of record as the max field-averaged skill from the seasonal SOSs. (a) Skill for P90 (thick solid line), P75 (dotted line), P50 (dashed line), and Ptot (thin solid line) based on OBS500. (b) Skill for P90 (solid lines) and Ptot (dotted lines) based on ECHAM500 (thick lines) and NSIPP500 (thin lines). (c) Skill for P90, P75, P50, and Ptot based on contemporaneous seasonal PSST, displayed in the same convention as in (a).


Fig. 6. Annual cycle of P90 prediction skill, shown as the max field-averaged skill from the seasonal SOSs based on lagged monthly PSST at lags of 1–6 months (lag1–lag6) before the beginning of each 3-month season (red to light green lines). Specification skill is based on contemporaneous seasonal PSST (blue line, same as in Fig. 4) and monthly PSST in the first month of each 3-month period (lag0, black line). All other conventions are exactly as in Fig. 4.


Fig. 7. Same as Fig. 2, but for predictability based on monthly PSST lagged by (a), (b), (c) 1 month (lag1); (d), (e), (f) 3 months (lag3); and (g), (h), (i) 5 months (lag5).
