## 1. Introduction

An important reason to improve regional climate prediction is agricultural production's sensitivity to climate; a second is the scientific merit of elucidating seasonal climatic influences. ENSO is a prominent example, and its effects have been investigated for several decades (Berlage 1966; Bjerknes 1966; Ropelewski and Halpert 1987; and many others). Philander (1990) gives a good review of the earlier literature. More quantitative studies have been done recently; for example, Ropelewski and Halpert (1996) describe regional precipitation *distribution* changes worldwide where they previously found influence. While earlier studies used mainly an atmospheric variable for prediction, often the Southern Oscillation index, SST was soon identified as the likely major source of seasonal timescale effects (Philander 1990).

Detailed, quantitative, statistical studies of SST influence on continental climates became practical with the release of adequate long-term data, especially the Comprehensive Ocean–Atmosphere Data Set (COADS). Barnston (1994) and Unger (1995) studied SST influence on U.S. precipitation, Unger with screening multiple regression. Barnston and Smith (1996; henceforth BS96) extended their analysis globally. These three studies' main goal was empirical forecasting: concurrent prediction (specification, in GCM terminology) was done largely as a benchmark. Montroy (1997, henceforth MT) was perhaps the first to identify tropical Pacific SST influence on eastern U.S. precipitation. His study was not limited strictly to ENSO effects and analyzed SST patterns in greater detail. These more quantitative studies used linear statistical methods and, except for Unger and MT, mainly canonical techniques. However, precipitation studies, even on seasonal timescales and using near-global SSTs, have achieved limited prediction skill, even at 0 lag, over land outside the Tropics.

This work emphasizes simultaneous (i.e., 0 lag) monthly and seasonal SST correlation to predict precipitation. Precipitation was chosen for several reasons. First, its prediction appears technically difficult due to noisy and highly non-Gaussian distributions. Second, improvements should be applicable to other climate variables. Third, precipitation critically affects food production. Our motivating heuristic is the high sensitivity that the local probability distributions of a chaotic system's variables (regional climate) are likely to show to only moderate, but persistent, changes in system forcing (Lorenz 1964): here, monthly SST and precipitation anomalies (SSTAs and PAs).

Using detailed SST pattern analysis and several statistical steps, we find sizeable SST–precipitation correlations during much of the year: multiple, hindcast-bias-corrected, monthly and seasonal correlation coefficients (*R*_{c}, *R*_{sc}) of about 0.2–0.4 and 0.3–0.6, respectively, over much of the United States, especially in winter and summer. The 1994–99 out-of-sample data show better than expected skill. Useful precipitation and climate forecasts at leads of one to a few months appear feasible using an ocean circulation model, or even statistically. Preliminary statistical forecasts (not included; using lagged SSTs, Markowski and North 1999) support this judgment. We also find other results, for example, 1) unexpectedly strong and widespread Gulf of Mexico SSTA correlations; 2) missing influence from the central North Pacific's main variability center, that about the subarctic front (the anomaly's east–west position appears important instead); and 3) ENSO event differences noted by others can be explained by differing SSTA patterns.

While this study was limited to the United States for simplicity, methodology should be applicable to most regions and climate variables. Related work, for example, BS96, suggests similar skills should be obtainable over much of the global land surface.

## 2. Methodology

Monthly SST anomaly principal component (PC) time series and transformed precipitation anomalies (PAs) were correlated using multiple least-squares linear regression (MLR). (For clarity, empirical orthogonal function, EOF, will denote a PC's eigenvector, usually shown as a geographic SSTA pattern.) The PAs were derived from the monthly U.S. National Climatic Data Center (NCDC) State Climatic Division (CD) Precipitation Data Base (Guttman and Quayle 1996). Regressions were done for each CD using each of the seasons' months from each year from 1950 through 1993 (44 yr). Monthly precipitation was first normalized to unit variance (*σ*^{2} = 1) to remove changes in distribution width, especially from the annual cycle. Monthly means were subtracted to obtain anomalies. (Note: numerous abbreviations and variables are used throughout this text. A list of them is provided in appendix A.)
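The precipitation preprocessing described above can be sketched as follows. This is a minimal illustration, not the paper's code; the function name `normalize_monthly` and the flat array layout are our own assumptions:

```python
import numpy as np

def normalize_monthly(precip, months):
    """Per calendar month: scale precipitation to unit variance
    (sigma^2 = 1) to remove distribution-width changes (especially the
    annual cycle), then subtract the monthly mean to obtain anomalies.
    `precip` and `months` are flat arrays over all years; `months`
    holds calendar-month labels 1..12 (assumed constant variance > 0)."""
    precip = np.asarray(precip, dtype=float)
    anom = np.empty_like(precip)
    for m in range(1, 13):
        sel = (months == m)
        x = precip[sel] / precip[sel].std(ddof=0)  # unit variance
        anom[sel] = x - x.mean()                   # remove monthly mean
    return anom
```

Each calendar month's anomalies then have zero mean and unit variance by construction, matching the normalization stated above.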

### a. SSTA data preparation

A 1° gridded January 1950 through 1993 COADS dataset (da Silva et al. 1994) was averaged over 3° to 5° squares, depending on ocean region size. Earlier data were not used due to poor Pacific coverage (Oort et al. 1987). Squares with at least half coverage were considered valid. Interpolation was done for squares with less coverage if any pair of opposing border squares was valid. The final time series had no missing values and were the PC analysis input.

Because semiglobal analysis would likely miss smaller important patterns, several separate regions were analyzed (Table 1). The PC calculation used all 528 months. Squares were not area-weighted to retain SSTA pattern sensitivity in northern ocean regions, especially since North Pacific weather systems often propagate directly into the United States. The Gulf of Mexico was chosen mainly for its influence on inflowing U.S. moisture.

### b. Precipitation anomaly transformation

#### 1) Necessity of a PA transformation

Although the precipitation data were averages over substantial time and space, distributions were far from Gaussian. For example, 262 of the 344 October–December (OND) CD anomaly distributions failed our *χ*^{2} normality test (see later) at *p* = 0.1, even though that season is likely to be nonarid and well behaved nearly everywhere. At *p* = 0.5, only 4% (13) passed, versus ≈50% if normal. Regressions removed only small to moderate variance fractions; thus, residual distributions differed little in shape. Since severely non-Gaussian residuals make even basic MLR statistics unreliable, if not meaningless, a transformation was critical. Moreover, since MLR minimizes squared deviations, the usual extremes from floods and droughts would further harm results.
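A six-bin *χ*^{2} normality test of the kind used here can be sketched as below. The binning and fitting details of the paper's actual test (appendix B) may differ; this generic version uses equal-probability bins under a fitted normal:

```python
import numpy as np
from scipy.stats import norm, chi2

def chi2_normality_p(x, nbins=6):
    """Generic six-bin chi-square normality test (a sketch; the
    paper's exact procedure is in its appendix B). Bin edges are
    equal-probability quantiles of a normal fit, so each bin expects
    len(x)/nbins counts under the null hypothesis."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    edges = norm.ppf(np.linspace(0.0, 1.0, nbins + 1), mu, sigma)
    observed, _ = np.histogram(x, edges)           # outer edges are +-inf
    expected = len(x) / nbins
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = nbins - 1 - 2                            # 2 fitted parameters
    return chi2.sf(stat, dof)                      # p value
```

A strongly skewed sample of the size used here (n = 528 months) fails decisively, while a normal sample usually passes.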

#### 2) Transformation development and description

The precipitation distributions' (PDs) large shape variations over the United States, and often within seasons, complicated transform development. Resemblance to a shifted lognormal suggested a lognormal transform, but distributions were truncated at small values to varying degrees, some highly. Where seasonal dryness was common, 0 or ≈0 values often occurred. Figure 1 schematically illustrates this behavior.

To avoid the log's zero singularity, the distributions' smaller values were adjusted. A PA distribution was first shifted so that its smallest value was 0 (largely reconstituting the normalized PD); a constant was then added to the now-zero value(s), forcing geometric symmetry about the median. The addition was tapered to 0 by the median, as the dashed line in Fig. 1 illustrates. The result was log transformed and normalized to *σ* = 1. (See appendix B for detail.) The transformed modes and averages were near 0, and extreme event influence was largely eliminated. Residual normality and independence, the most critical MLR requirements, were carefully checked: normality with a six-bin *χ*^{2} test, and residual behavior with time series and scatterplots (see appendix B).
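The transform's shift–offset–log–normalize sequence can be sketched as below. This is a simplified stand-in: the exact taper and symmetry construction are in the paper's appendix B, and the choice `c = med**2 / hi` (geometric symmetry between the largest value and the adjusted minimum) is our assumption:

```python
import numpy as np

def transform_pa(p):
    """Sketch of the tPA transform (simplified; details differ from
    the paper's appendix B). Steps: shift so the minimum is 0; add a
    constant, tapered to 0 by the median, so the smallest value sits
    in rough geometric symmetry about the median; log transform;
    normalize to sigma = 1. Assumes a wet regime (median > 0)."""
    p = np.asarray(p, dtype=float)
    p = p - p.min()                       # smallest value -> 0
    med, hi = np.median(p), p.max()
    c = med ** 2 / hi                     # assumed: hi/med = med/c
    w = np.clip((med - p) / med, 0.0, 1.0)  # linear taper, 0 at median
    y = np.log(p + c * w)
    y -= y.mean()                         # mode/average near 0
    return y / y.std(ddof=0)              # sigma = 1
```

On a skewed, gamma-like sample this yields finite, unit-variance, roughly symmetric anomalies, the behavior the text describes.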

Overall, precipitation distributions transformed into approximately normal ratio anomalies, *Y*(*t*), giving regressions that predicted (approximately) ratios to medians as *y*′_{d}(*t*) = *Y*_{dM} exp[*y*_{d}(*t*)*s*_{Y}], where *Y*_{dM} is CD *d*'s seasonal median (normalized) precipitation, *s*_{Y} its tPA's unnormalized *σ*, and *y*_{d}(*t*) the predicted tPA for month *t*.

### c. Multiple linear regression and hindcast bias corrections

The regression model was

*y*_{d}(*t*) = *a*_{d} + Σ_{i} *a*_{di}*x*_{i}(*t*) + Σ_{j} *a*_{dj}*x*_{j}(*t*) + ɛ_{d}(*t*), (1)

where *x*(*t*) is a month's PC; the two sums and indices, *i*, *j*, indicate that predictors from two ocean regions were sometimes used; the *a*_{di} are the MLR coefficients and constant; and ɛ_{d}(*t*) is the residual, or error. Months, typically three, were selected serially from each year, a "season," so that the number, *N*, in an MLR was usually 132. Predictors were usually 10 to 14 leading PCs from one or two ocean regions. MLR statistics were linearly interpolated to a grid using Delaunay triangulation on representative CD centers (from NCDC) and contoured. Significance was log interpolated.

*R*'s positive (i.e., hindcast) bias was removed using the standard adjustment (Neter et al. 1996; Wherry 1931) to obtain the unbiased population (also termed ensemble) estimate, *R*_{c}:

*R*^{2}_{c} = 1 − (*s*^{2}_{y·x}/*s*^{2}_{y})[(*N* − 1)/(*N* − *υ* − 1)], (2)

where *d* is dropped for simplicity, *υ* is the independent variable number, and *s*_{y·x} and *s*_{y} are the MLR residuals' and dependent variable's (i.e., the tPAs') *σ*s, respectively. (Here, *s*_{y} = 1 from tPA normalization.) Overall MLR significance was obtained with the *F* test, which directly accounts for finite sample (i.e., hindcast) bias.
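The Wherry-type adjustment of Eq. (2) is small and easily computed; a minimal sketch (function name is ours), written in the equivalent 1 − (1 − *R*^{2}) form with *s*_{y} = 1:

```python
def wherry_adjusted_r2(r2, n, v):
    """Hindcast-bias (Wherry 1931) correction for multiple R^2:
    R_c^2 = 1 - (1 - R^2) * (N - 1) / (N - v - 1),
    where n is the sample size and v the number of predictors.
    Can be negative when R^2 is small relative to v/N."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - v - 1)
```

For the typical case here (*N* = 132, *υ* = 14), an in-sample *R*^{2} of 0.3 corrects to about 0.22, and an *R*^{2} of 0 corrects to a negative value, illustrating why the raw hindcast *R* is optimistic.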

The quantity *σ*_{y·x}, ≈ the rms prediction error (e.g., Neter et al. 1996, chapter 7), and *R*_{c}, the hindcast-corrected monthly or seasonal correlation, are related by

*σ*^{2}_{y·x} = *σ*^{2}_{Y}(1 − *R*^{2}_{c}), (3)

where *σ*_{Y} is *Y*'s unbiased *σ*. The expected and average anomaly, and the probability that *Y* differs in sign from predicted, are estimated from these quantities. The value *R*_{c} is operationally the most important statistic: it determines nonclimatology prediction frequency and overall SSTA sensitivity [since (1) can be recast as *RX*(*t*) = *y*(*t*), where *X*(*t*) is the predictor sum in (1) and *X* and *y* are normalized to *σ* = 1].

Seasonal average correlation coefficients (*R*_{s}) were obtained by averaging each season's predicted and actual tPAs, *Y*(*t*), and correlating these 44 paired averages. The value *R*_{s} tends to increase since averaging reduces noise; however, its hindcast bias increases for similar reasons (see appendix C). The 44-point correlation also adds bias; it was removed by Eq. (2) with *N* = 44, *υ* = 1.
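The seasonal averaging and 44-point correction can be sketched as below. This applies only the Eq. (2) correction with *N* = 44, *υ* = 1; the fuller seasonal bias correction of (4) is omitted, and the function name and array layout are our own:

```python
import numpy as np

def seasonal_r_sc(pred, actual, months_per_season=3):
    """Average predicted and actual monthly tPAs within each season
    (rows = years after reshape), correlate the yearly pairs, then
    remove the correlation's own hindcast bias via Eq. (2) with
    N = number of years and v = 1. Sketch only: the additional
    seasonal bias term of Eq. (4) is not applied."""
    pred = np.asarray(pred, float).reshape(-1, months_per_season).mean(axis=1)
    act = np.asarray(actual, float).reshape(-1, months_per_season).mean(axis=1)
    n = len(pred)                                   # e.g., 44 years
    r2 = np.corrcoef(pred, act)[0, 1] ** 2
    r2_c = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)     # Eq. (2), v = 1
    return np.sqrt(max(r2_c, 0.0))
```

With 132 months (44 seasons of 3), a perfectly predicted series returns a corrected seasonal correlation of 1, and noisy predictions return values between 0 and 1.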

The seasonal bias, *b*_{s}, is given by (4) in terms of *b*_{1}, *R*^{2}'s hindcast bias; *α* and *γ*, the variance ratios of the monthly to averaged residuals and predicted tPAs, respectively; and *a*_{s}, the *seasonal* correlation's slope. The corrected value *R*^{2}_{sc} is then obtained from *R*^{2}_{s} as *R*^{2}_{sc} = *R*^{2}_{s} − *b*_{s}. (Appendix C has a brief derivation.) These monthly and seasonal hindcast corrections, and cross-validation's (XV) hindcast-bias correction and out-of-sample (OOS) skill estimation, were checked by Monte Carlo simulation. It showed adding *a*_{s} in (4) substantially reduced false positives where *p* > 0.1.

### d. Trade-offs: Ease of use versus skill optimization and interpretation

Our approach trades off simplicity, ease of use, and interpretation versus skill optimization, typically done by removing nonsignificant predictors. Properly removing them is not simple (see section 2h) while keeping them penalizes MLR significance and skill since: (a) variance explained by chance grows; (b) hindcast-bias correction depends directly on predictor number; and (c) seasonal bias correction (4) becomes more conservative. Advantages are that different seasons, ocean regions, and their combinations, can be easily examined since biases are readily corrected. We emphasize that the above corrections *remove* *R* and *R*_{s}'s hindcast biases. Results should be conservative since removing poor predictors should appreciably improve skill, and other improvements described below seem likely.

### e. Out-of-sample, forecast, hindcast, population, and artificial skill confusion

The population skill *R*_{c} is *not* equivalent to OOS skill (*R*_{os}), often termed forecast skill. The value *R*_{os} has typically been "estimated" by XV after selecting in-sample variables by their skill. (See BS96 and references therein.) Unlike population (ensemble) skill, *R*_{os} is a function, sometimes strong, of *N*_{os}, the OOS prediction number. The hindcast and OOS skill differences, *R*^{2} − *R*^{2}_{os} and *R*^{2} − *R*^{2}_{c}, have *both* been called artificial skill, but they are clearly distinct *biases* (Davis 1977). When *N*_{os} = *N*, the in-sample observation number, then *R*^{2} − *R*^{2}_{os} ≡ Δ_{R²}; this need not equal *R*^{2} − *R*^{2}_{c}, so *R*^{2}_{os}, *R*^{2}_{c}, and *R*_{os} must be distinguished. Davis shows that *R*^{2}_{os} ≅ *R*^{2}_{c}, since if *N*_{os} ≅ *N*, *b*_{1} ≅ Δ_{R²}.

### f. Multiple PC regression advantages and disadvantages

#### 1) Advantages

Multiple PC regression has important advantages compared to often-used canonical correlation methods, such as SVD. Advantages 1 and 2 follow from combining PC analysis with MLR.

1. MLR removes the question of EOF accuracy (e.g., North et al. 1982) since PC analysis is used *only* to find reasonably strong and persistent anomaly patterns, as per our heuristic motivation (see introduction). Eigenvalue and EOF pattern accuracy do not have special relevance. Instead, the important quantities are an EOF's influence and statistical significance, which MLR gives directly as *a*_{di}'s size and significance in (1) (see also Wang 2001), and *R*, which is little affected by EOF mixing.

2. Quantitative statistics, such as *R*, *R*_{s}, *σ*_{y·x}, and PA sensitivity to SSTs, are also directly obtained, unlike with other pattern correlation methods, such as canonical and combined field techniques (Bretherton et al. 1992, henceforth BSW).

3. Target field prediction using only PC 1 performed notably better than canonical correlation analysis (CCA) in most tests by BSW, and nearly as well as other combined field techniques. MLR avoids the largest error BSW found: PC 1 missing the signal, which was in BSW's PC 2.

4. All SST "signal" is available since CD data were not gridded or filtered. Since CDs reflect climate differences, such as a mountainous region bordering a plain, adjacent CDs can behave differently. Interpolation then causes signal loss.

5. MLR is a well-characterized and sensitive technique when its statistical requirements are met.

6. Hindcast bias (often misleadingly termed artificial skill) is modest, well understood, and can be accurately removed (Wherry 1931). Its fundamental cause is finite sample size and, thus, it is common to nearly all predictive statistical methods.

7. Predictions, their confidence limits, and PA SST sensitivity can be obtained with textbook methods (e.g., Neter et al. 1996, chapter 6).

8. Optimal predictors can be used when predictions are wanted under all conditions (Davis 1977).

#### 2) Limitations and difficulties; choice of EOFs

The main limitation is that predictors must be chosen a priori to avoid adding hard-to-quantify artificial skill (also Nicholls 2001): EOF significance and OOS skill must be traded off against possible exclusion of important predictors (MI). Davis and MI recommend the a priori choice, used here, if a reasonable model is not available. Little difficulty arises if a few PCs capture nearly all variance. Also, eigenvalue size can be misleading. Small ocean regions or SSTAs can disproportionately influence weather patterns but contribute only small variance, for example, the western tropical Pacific.

Figures 2 and 3 show the Pacific and Gulf EOFs used. The first 8 Pacific EOFs were judged likely important by their centers of action and variance fractions; EOFs 9 and 10 were included for comparison: they appeared unlikely to be important. EOFs above 12 appeared too complex. The first four Gulf EOFs included nearly all Gulf variance. Most results here use these Pacific and Gulf EOFs, or only the Pacific, to avoid false skill from variable selection. Since our methods can easily examine many cases, including predictors based on a posteriori performance is a danger: these EOFs are our a priori, naïve, choice.

### g. EOF field significance, *p*_{f}

EOF significance has scientific and, thus, statistical importance. A current question is which, if any, ocean regions influence climate—except for ENSO, SST is usually assumed slaved to the atmosphere (Neelin and Weng 1999).

The highly spatially correlated tPAs (adjacent CDs ≈ 0.9) and many CDs make likely sizeable chance patterns with high local (single CD) significance, *p,* that mimic real influence and require consideration of the entire CD *p* field (Livezey and Chen 1983, henceforth LC). More sensitive field significance, *p*_{f}, tests typically use Monte Carlo methods (LC). For qualitative guidance, about 200 EOF significance maps were made substituting random and varying persistence (lag 1 autocorrelation, *r*_{1}) first-order autoregressive (AR1) time series for PCs.

However, accurately estimating *p*_{f} is not straightforward: 1) large chance low-*p* areas are likely; 2) large acausal correlation areas seem present, many probably from atmospheric forcing: AR1 *r*_{1} ≈ 0.5 time series substituted for PCs gave unusually many large areas with *p* ≪ 0.1; 3) seasonal testing is required: spatial correlation depends notably on season; 4) the large implicit number of trials in a single map, 344, and 10 to 14 maps per season make CDs with chance *p* ≪ 10^{−3} likely; 5) relatively small areas of substantial influence seem common; thus, the usual, entire region test (LC) will likely miss significant EOFs. Nonuniform spatial PA correlation increases this difficulty. Items 1 through 4 together require a great many Monte Carlo trials for accurate confidence limits. Item 5 suggests using a regional field test, but properly defining regions does not appear straightforward. Using only statistically significant predictors should add appreciable skill.

To avoid these difficulties, simple but conservative Bonferroni tests (e.g., Neter et al. 1996; LC) were adopted; these proved adequate to give some interesting results: only local significance is considered, joint independence is assumed, and sufficiently small *p* is required so that chance occurrence on one map in a set is ≤*p*_{f}. We chose *p*_{f} = 0.1 to reduce rejecting EOFs with real influence. Ten maps per set and (conservatively) assuming 300 independent points per map require *p* ≤ 1.5 × 10^{−5} for one, and ≤3.3 × 10^{−4} each for two independent points on one map. (Independence, defined *r* ≤ 0.20, required CDs ≈1000 km apart.) Ocean regions were separately assessed since influence is expected. The *p*s for MLR significance, only one map, are less stringent: 1.5 × 10^{−4} and 1.3 × 10^{−3}.

### h. Out-of-sample testing

Although MLR predictions should be reliable since care was given to statistical rigor, out-of-sample testing gives an independent and overall evaluation. Six years following the in-sample data were used, 1994–99 [National Oceanic and Atmospheric Administration–Cooperative Institute for Research in Environmental Sciences (NOAA–CIRES Climate Diagnostic Center 2000; Reynolds and Smith 1994)]. Mapped and quantitative (Heidke) skills were examined. For simplicity, predictions were made using an optimal predictor (Davis 1977).

#### 1) Optimal predictor construction including *p* and *p*_{f}

An optimal predictor weights the MLR coefficients *a*_{id} in (1) when an a priori model is unavailable. Following Davis, the weights, *w*_{id}, used here are

*w*_{id} = *t*^{2}_{id}/(*t*^{2}_{id} + *λ*),

where *t*_{id} is *a*_{id}'s Student's *t* statistic, and *λ* determines *w*_{id}'s *p* dependence (via *t*). Usually 1 < *λ* < 2. To use PC field and MLR significance advantageously, *w*_{id} was reduced to 0 by additional factors when *p* > 0.1 (see appendix D).

#### 2) Quantitative skill assessment—Heidke skill

Skill was quantified with the Heidke score, *S*_{H}; Barnston et al. (1999, their appendix) describe *S*_{H} in detail. Briefly, predictands are divided evenly into terciles: low, "normal," and high; *S*_{H} is given by

*S*_{H} = 100(*N*_{c} − *N*_{ec})/(*N*_{p} − *N*_{ec}),

where *N*_{c} and *N*_{p} are the number of correct and total predictions (or guesses), respectively. The value *N*_{ec} is the expected number correct by chance, *N*_{p}/3. Here, *S*_{H} = −50, 0, 25, and 100 for 0, exactly chance (*N*_{c} = *N*_{ec}), 50%, and 100% correct, respectively. Predicting only when accuracy is expected improves *S*_{H}; otherwise, poor guesses dilute real skill.
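The tercile Heidke score above reduces to a one-line function (name is ours), which reproduces the reference values quoted in the text:

```python
def heidke_skill(n_correct, n_pred):
    """Heidke skill for tercile forecasts:
    S_H = 100 * (N_c - N_ec) / (N_p - N_ec), with N_ec = N_p / 3
    the number expected correct by chance. Yields -50 for no correct
    predictions, 0 for exactly chance, 25 for 50% correct, and 100
    for all correct."""
    n_ec = n_pred / 3.0
    return 100.0 * (n_correct - n_ec) / (n_pred - n_ec)
```

For 90 predictions: 0 correct gives −50, 30 correct (chance) gives 0, 45 correct (50%) gives 25, and 90 correct gives 100.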

## 3. Results

We show three main types of results: single EOF significance and influence, MLR skill for a set of EOFs, and OOS skills. Results are mainly paired same-month (0 lag) PC and tPA correlations; "forecast" will be used to denote future prediction. (The terms correlation and influence are usually used interchangeably, with real influence the default assumption. Cause and effect are distinguished in some cases.) Different season length results are shown since statistical significance often increases with observation number, *N.* However, an EOF's influence location and strength often vary, and its influence may be missed unless constant in location or very strong during part of a long season.

### a. Noteworthy EOFs: Large influence, acausality, and Pacific 2's missing influence

We find several clearly significant Pacific and Gulf EOFs, and one Pacific whose influence is unexpectedly absent. Three EOFs stand out on a full annual basis and most seasons, Pacific 1, and Gulf 1 and 3; they are discussed first.

#### 1) Pacific 1: The main ENSO EOF

Figure 4 shows Pacific 1 significance for 3-month seasons over an annual cycle. The large *p* < 10^{−3} area positions, and some <1.5 × 10^{−5} during January–March (JFM), identify Pacific 1 as the main ENSO EOF. Seasonal behavior is consistent with previous ENSO studies (e.g., Ropelewski and Halpert 1987; MT); in addition, considerably more detail, some substantially larger influence areas, and more seasonal changes appear than previously identified. Three JFM Florida CD *p*s are <10^{−7} and the southernmost Texas (TX) CD is field significant, *p* < 1.5 × 10^{−5}. (Henceforth, "field significant" will mean significant by our local Bonferroni tests and Gaussian MLR residuals.) Opposing responses occur; the JFM dipole (Fig. 4a) is especially strong. It also corresponds well with MT's JFM result: his rotated EOF 1 is very close to our Pacific 1. Like MT, we find the dipole fades rapidly on either side of JFM. Identifiable influence is least during July–September (JAS), largely due to shifting influence position. On an annual basis, EOF 1 is field significant over much of the desert southwest: two New Mexico *p*s are ≤10^{−7} and western TX has *p* < 1.5 × 10^{−5}. (TX residuals fail our *χ*^{2} test.) EOF 1's influence moves northward from late spring to summer [Fig. 12 shows June–August (JJA)] and apparently has influence all year, although May–July and August–October (ASO) are not field significant. These seem likely significant based on random time series pattern areas (section 2h).

#### 2) Acausal Gulf EOFs

Gulf 1 March–June (Fig. 5a) has *p*s < 10^{−5} and influence throughout the year (inspecting 3-month season maps as earlier). Influence is fairly constant from late fall through spring and weakest in summer and early fall. Gulf 3 (Fig. 5b, annual basis) is not as strong, but our two-point test shows field significance (Idaho and South Carolina CDs). Gulf 3's influence moves considerably and half of its southern dipole (Fig. 5b) is weak each season. (Gulf 2 and 4 often show large areas with low *p*.)

But contrary to expectations, Gulf influence is usually not near the Gulf and is often distant, for example, the Northwest (Fig. 5a). The Gulf's small area does not seem capable of causing such strong and distant effects. Several explanations seem reasonable:

(a) The Gulf anomalies are forced by atmospheric systems causing the precipitation (Frankignoul 1985; Neelin and Weng 1999).

(b) Gulf SSTAs are part of a larger pattern that includes the storm formation region (SFR) off Cape Hatteras; this pattern forces the North Atlantic Oscillation (NAO), the link between cause and effect. Sutton and Allen (1997, henceforth SA) found wintertime SFR SSTAs and the NAO strongly correlated.

(c) SSTAs not considered force the correlations, for example, a region not analyzed, such as the Indian Ocean, or higher-order Pacific PCs. The first 10 have only ≈1/2 of all variance, but higher-order PCs (>10) typically have short persistence, ≤0.5, like the Gulf (≈0.5).

(d) Nonlinear SST interactions force the Gulf. Rapid variation could occur even though the Pacific PCs vary slowly: most of the first 10 had persistence ≥0.65, and 1–3 usually ≥0.8.

Explanations (a) and (b) can be tested. Explanation (a) predicts Gulf PCs to be nearly AR1 time series (red noise): they are. [Water's large heat capacity reddens white noise atmospheric forcing (Frankignoul 1985).] Precipitation should then mainly correlate with the PC's random forcing (noise) term, ɛ_{t}, of its AR1-like time series, that is, *x*_{t} = *r*_{1}*x*_{t−1} + ɛ_{t}. The value ɛ_{t} is the residual from a month's weather forcing, *r*_{1} is *x*_{t}'s lag 1 autocorrelation (persistence); and ɛ_{t} = *x*_{t} − *r*_{1}*x*_{t−1}. If SST is causal, ɛ_{t} should show much weaker correlation than *x*_{t}, and *x*_{t−1} should show 1-month forecast skill where its 0 lag correlations are strong. Here, ɛ_{t}'s stronger 0 lag correlations than *x*_{t} and *x*_{t−1}'s negligible forecast skill indicate atmospheric forcing. Gulf EOF 1's loading (Fig. 3a) also suggests atmospheric forcing: loading decreases monotonically from the coast and explains 66% of Gulf variance. (Note: atmospheric forcing can create causal SSTAs as “causal” is used here; they at least need long enough persistence, ≳0.6, to become influential; their correlation must not be due to forcing, that is, ɛ_{t}.)
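The AR1 decomposition used in this causality test can be sketched numerically. This is a hypothetical helper, not the paper's code; the paper's full test also compares precipitation correlations against ɛ_{t} versus *x*_{t} and checks *x*_{t−1}'s forecast skill:

```python
import numpy as np

def ar1_innovations(x):
    """Split a PC time series into its persistence part and the
    innovation (weather-forcing residual) of its AR1-like model
    x_t = r1 * x_{t-1} + eps_t, i.e., eps_t = x_t - r1 * x_{t-1},
    with r1 estimated as the lag-1 autocorrelation."""
    x = np.asarray(x, dtype=float)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]     # persistence estimate
    eps = x[1:] - r1 * x[:-1]                 # innovation series
    return r1, eps
```

Correlating precipitation with `eps` rather than `x` then distinguishes atmospheric forcing (strong ɛ_{t} correlation) from causal SST influence (strong *x*_{t} correlation and lagged skill), as argued above.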

Explanation (b) requires that the strongest Gulf EOF patterns continue into the SFR. The strongest "extended Gulf" EOF (EG; Table 1 and Fig. 6a) has this character, with 34% of EG's variance and similar *p*, ≤10^{−6}, at similar locations. However, the statistical tests above indicate acausality. The loading pattern of EG 1 is also like Gulf 1's with respect to the coast. Thus, EG 1 appears related to Gulf 1 but acausal vis-à-vis the NAO, though the opposite may be true.

None of the next four EG EOFs matches Gulf 2 or 3 well. While resembling Gulf 3 (Fig. 3c), EG 2 (Fig. 6b) shows considerable loading pattern difference. Its PC is much more persistent, *r*_{1} ≈ 0.8, has good 1-month forecast skill, and often strongly correlates with the ENSO EOF, Pacific 1, *r* ≈ 0.6, with distinctly overlapping influence. Extended Gulf 2 seems associated with Pacific 1, but not Gulf 3.

Because the previous results fit the behavior expected from atmospheric forcing very well, it likely causes most or almost all of the Gulf's correlations. Caveats are that explanations (c) and (d) cannot be ruled out, and that Gulf 3 or EG 1 may have some causal influence not explained by the Pacific: the large monthly variability here could swamp the winter-average variability SA analyzed. Nevertheless, very strong correlations exist between the Gulf, its surrounding ocean regions, and U.S. precipitation. Reproducing these should be an important test for coupled GCMs.

#### 3) Pacific EOF 2's missing influence and EOF 5

Pacific EOF 2 (Fig. 2b) has the second-largest variance fraction, marks the Kuroshio Extension and subarctic front, coincides with the large SST variability across the northern Pacific, and is at least as coherent as EOF 1, but during fall and winter its significance areas appear little more than random. Several researchers conclude that EOF 2 should affect U.S. winter weather (Latif and Barnett 1994, henceforth LB; Chen et al. 1996; Nakamura et al. 1997), but none is apparent here. Contrarily, EOF 2 shows influence March through August. (Fig. 12d shows JJA.) Although usually not close to field significance by our conservative tests, areas are large enough to suggest a positive result by more sensitive (Monte Carlo) tests.

EOF 5 (Fig. 2d), straddling EOF 2, shows major influence instead. EOF 5 has a large influence area with several *p*s ≈ 7 × 10^{−4} on an annual basis. Figure 7 shows its strongest season, October–December; field significance is clear: several CDs within the 10^{−4} (0) contour have 10^{−6} < *p* < 10^{−5}. EOF 5 also shows large influence areas most seasons: March–June, north-central and northeast U.S.; ASO, most of the U.S. middle third; after December, mainly on the West Coast. March–June and October–February maps are field significant. EOF 5 appears to strongly modulate ENSO influence much of the year; compare Figs. 7 and 4. Gershunov and Barnett (1998, henceforth, GB98) report modulation during JFM. We see moderate JFM modulation, but EOF 1 and 5 JFM correlation is (unusually) high, −0.45, so they are not independent.

EOF 2 and 5's differing influence is likely partly due to EOF physical and statistical stationarity limitations: EOFs cannot explicitly show motion. Much of EOF 2's variance is likely a motion artifact, such as subarctic front waves (Gill 1982, 493–547; Nakamura et al. 1997; Yuan et al. 1999), or, especially, north–south front motion. Small position shifts generate large variance from the front's large temperature gradient, which PC analysis singles out. EOF 5's shape also indicates motion: when motion is substantial, a dipole EOF will typically straddle an EOF centered in the motion (Kim and Wu 1999); front rotation also generates dipole variance. Position changes will convolve regional SSTs; for example, if the Arctic Ocean is anomalously warm but the front is south, PC 2 is likely to be negative, not positive, since position is so influential. Standard analysis will likely be problematic; separating variability is recommended.

EOF 5's large influence may be dynamically expected: its dipole spacing (50°–60°) is near typical polar jet wavenumbers 3 and 4. By coupling through cyclones, especially the Aleutian low, the dipole should affect the low's strength, position (LB; Chen et al. 1996; Nakamura et al. 1997) and shape and, thus, the short waves forming from it and propagating across the United States. Consistent with polar jet behavior, EOF 5's summer influence weakens and moves to the northern United States.

Pacific EOF 5 is also unusual with respect to causality: its correlations appear to be partly causal and about 2/3 to 1/2 from atmospheric forcing. Region separation is also seen, especially late spring and early summer: one region tends to be mainly causal, another mainly acausal. Causality seems absent from about July to October, although correlation areas are large. Note that when some causality exists, atmospheric forcing will develop some real influence.

In summary, EOF 2 mainly appears to reflect the subarctic front's position, with actual influence likely during late spring and summer.

### b. Seasonal averaging and seasonal average correlations

Figures 8 and 9 show monthly regression significance and hindcast-bias-corrected seasonal average skill, *R*_{sc} (section 2c), for four seasons with the first 10 Pacific PCs as predictors. (Fig. 12 shows JJA.) Including the Gulf PCs adds much skill, but they were not used due to their apparent acausality. As expected, seasonal skill closely follows monthly significance and usually exceeds *R*_{c} (*R*_{c} not shown). Some chance positives and negatives are expected. (*p* = 0.1 for 3-month seasons corresponds to *R*_{sc} ≈ 0.3.) Seasonal skills are likely to be chance if not near a locally significant area.

Large low-*p* areas are Fig. 8's outstanding features. Except the weakest, JAS, all seasons are field significant and most have high skills. The later fall and winter months show large areas with skill ≥0.5 and some CDs > 0.6; JFM shows skills ≥0.7 and substantial area with *R*_{sc} > 0.6. Although JAS's *p* < 0.1 area is within chance, it has good OOS Heidke skill, 16. JAS's EOF 1 and 2 likely have real influence since their patterns are similar to adjacent seasons where they are stronger, especially EOF 1.

Winter skill is in large part due to ENSO-related EOFs. Pacific 1 is typically dominant, except on most of the West Coast. Other EOFs have greater influence here, especially 3, 4, and 8. EOF 3 makes major contributions from October to March and appears related to an intermediate North Pacific Oscillation (NPO) state (LB; GB98). In many seasons, these EOFs also often strongly modulate the main ENSO response. Except JFM, coastal southern California shows little skill.

### c. Out-of-sample test results

Out-of-sample predictions were made for 6 yr, 1994–99, a period expected to be challenging since it includes weak ENSO conditions and a rapid switch from strong El Niño to strong La Niña states (1998–99). Semiquantitative mapped and quantitative numerical skills follow.

#### 1) OOS mapped results

Figures 10 and 11 compare predicted, *Y*_{est}, and actual, *Y*_{s}, transformed-anomaly season averages for the best predicted colder and warmer seasons, JFM and JJA. Transformed anomalies facilitate comparing the wide precipitation range across regions. The maps show approximately the season average log of the ratio of the monthly to the season's median monthly precipitation (monthly logs normalized to *σ* = 1: tPA normalization; section 2b). Normalized by seasonal average *σ,* values would typically be 1.4–2 times larger, depending on CD. An important caveat is that longer-term skill may be greatly underestimated due to the small number of seasons: on an *R*^{2} basis, little or no skill is expected here (Davis 1977). Nevertheless, results are consistent with and confirm the in-sample skills, Fig. 9, since they appear better than expected statistically. Since most seasons' predictions include a large area, they should intuitively represent at least two to four independent predictions per year.

##### (i) October–December and April–June seasons

April–June (AMJ) and OND maps were inspected for all OOS years. These seasons' accuracy varied much more than JFM's and JJA's. The 1999 OND prediction is strikingly accurate and 1996's strikingly poor; the 1997 prediction is poor, while 1994 and 1995 appear generally skillful. Overall OND Heidke skill is good. The 1994–96 AMJ predictions look fairly skillful, and 1997's quite good; but 1998's is poor and 1999's bad. The 1999 result arose largely because the main ENSO PC (EOF 1) showed a strong La Niña while precipitation behaved like a strong El Niño. While 1998 was the reverse of 1999 with respect to PC 1 and precipitation, the difference was not as large: although PC 1 still indicated an El Niño, it was rapidly changing to La Niña values. The rapid change likely accounts for some of the error.

The OND and AMJ accuracies may be highly variable because jet stream and ITCZ positions change rapidly during these seasons: our PCs may poorly capture position and stability differences and other ocean regions may be more influential. Regressions for these seasons' first and last 2 months support this reasoning: they usually show the main influence areas shifting markedly, but do little to explain the 1999 AMJ errors.

##### (ii) Out-of-sample predictions for the January–March season

Figure 10 shows actual and OOS JFM predictions for the strong 1998 El Niño and 1999 La Niña; these are about the best obtained. Figure 10a is a fairly classic JFM pattern. The 1998 actual anomalies are large since the values shown are (about) log_{e}(*Y*_{s}/*Y*_{M}), *Y*_{M} the season's *Y*(*t*) median. (This figure preserves the transform inverse, section 2f. Hereafter, "normal" or "normal median" will mean *Y*_{M}.) The several 1.4s (Fig. 10a) indicate seasonal anomalies about *e*^{1.4(0.62)}, or 2.4 times the normal monthly median; 0.62 is Florida's CD 4 *σ*_{Y} and typical. Large anomalies are likewise predicted: Florida's CD 4 *Y*_{est} and *Y*_{s} are nearly identical; good prediction is expected since *R*_{sc} is >0.7 (Fig. 9a). Interestingly, the oddly dry CD in western Texas is well predicted. The California (CA) results are better than expected. Figure 10b has large areas with no predictions, especially the central and northwest United States. Most are where regressions are not significant; see Fig. 8. The main errors, ≈1.0 to 1.4*σ*_{Y}, are differences from the classic pattern (Barnston et al. 1999). The largest are in Nebraska and Kansas, where expected precipitation was absent, and eastern Michigan and northern Ohio, which were wet instead of dry. Overall, the Appalachian region is not nearly as dry as expected. The major EOFs, 1 and 3, are strong: PC 3 greatly helps correct the overly dry result that would obtain from only the main ENSO EOF (see Fig. 4), especially in Tennessee and southward.
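
The conversion from a mapped transformed anomaly back to a precipitation ratio is simple arithmetic; a minimal sketch (the function name is illustrative, and 0.62 is the Florida CD 4 *σ*_{Y} quoted above):

```python
import math

def ratio_to_median(tpa_value, sigma_y):
    """Convert a sigma-normalized transformed anomaly back to a
    precipitation ratio: roughly exp(tPA * sigma_Y), since mapped
    values are ~log_e(Y_s / Y_M) with monthly logs scaled to sigma = 1."""
    return math.exp(tpa_value * sigma_y)

# Florida CD 4 example from Fig. 10a: tPA ~ 1.4, sigma_Y ~ 0.62
r = ratio_to_median(1.4, 0.62)
print(round(r, 1))  # 2.4 times the normal monthly median
```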

Figures 10c,d show the 1999 La Niña anomalies. Actual anomalies again differ notably from the classic pattern, especially in the Appalachians, southern Texas, and several states to its north: much area expected to be dry is close to normal or even wet, such as Oklahoma (OK). Prediction is good in Arizona (AZ) and southern CA, and at least of correct sign along the Appalachian states. Northwest Wyoming error is unusually high. The Southeast is predicted rather well. More PCs are strong than in 1998; PC 3 tends to degrade accuracy except near eastern Pennsylvania and the Northwest. Other strong PCs are 5, 6 (which has little significance), 7, 8, and 9. These generally increase accuracy, especially in AZ and Maryland.

##### (iii) June–August out-of-sample predictions

Figure 11 shows the 1996 and 1997 JJA summers. In-sample regression results and EOF 1 and 2 significance are in Fig. 12. Year 1996 illustrates more typical accuracy and a nearly null ENSO state, PC 1 about −0.2*σ.* But precipitation resembles a strong La Niña, so that the very dry conditions in western Wyoming and nearby states are almost completely missed; the largest error is ≈1.5*σ*_{Y}. Small dry parts of Montana and Idaho are predicted moderately well, as are locally significant CDs in Washington and Oregon. Principal component 5 predicts a small wet anomaly, which gives small errors in the Dakotas but helps to their north where precipitation is normal; PCs 2 and 3 make the major contributions and give the fairly accurate wet central United States and New Mexico predictions. A few scattered predictions are small but of correct sign: one appears in North Carolina.

Figures 11c,d show the strong 1997 El Niño summer. Accuracy is remarkably good (likely somewhat fortuitously) at CDs where *p* is small enough for prediction. PCs 1 and 2 make the major contributions; PC 2's main effect is near each side of the Colorado–Kansas border (but adds to Iowa and Nebraska errors). Several other PCs are quite strong, but with much lower significance, and add only moderately to accuracy. Notably, EOF 1 contributes in the proper region, as opposed to much of the area obtained by Wang et al. (1999) without transformed variates. Figures 11b,d also show notably better predictability than indicated by recent GCM results (Koster et al. 1999), although adding a surface moisture variable would probably improve accuracy.

#### 2) Out-of-sample quantitative skill—Heidke skill scoring

Predictions (guesses) were made only where they were likely useful, that is, sufficiently above chance, to avoid washing out real skill; if actual skill or OOS sample size, *N*_{os}, is small, as here, results may even be negative (Davis 1977). We optimize *R*^{2} skill. Some improvement could be gained by optimizing *S*_{H} instead, but since large errors are serious in practice, *R*^{2} seems reasonable.

##### (i) Prediction decision criteria and parameters

The values *Y*_{est} (with *Y*_{s} renormalized to *σ* = 1) and *R*_{sc} were the main parameters for outer- and middle-tercile nonclimatology decisions, respectively. Outer terciles were picked if |*Y*_{est}| > *L*_{k}, the subscript *k* indicating that more than one limit, *L,* was tried. The value *L* should be large enough to expect an outer-tercile result. Larger *L* should increase *S*_{H} if real skill is present, until *N*_{p} and *N*_{c} become small enough for stochastic noise to dominate, an effect seen in Barnston et al.'s (1999) Fig. 7.

The value *L* = 0.33 was our main choice: one more correct guess than chance is expected per eight. Normal distribution tercile boundaries, ±0.44 (*N*_{c}/*N*_{p} = 1/2), give an increase of one per six. (In practice, fractions correct are likely to be smaller due to OOS size, *Y*_{s} departures from normality, and climate nonstationarity.) When *R*_{sc} > 0.45, *L*/(1 − *R*^{2}_{sc})^{1/2} was used to account for the smaller *σ*_{y · x} [Eq. (3)]. The middle tercile was chosen if *R*_{sc} > 0.45 and |*Y*_{est}|(1 − *R*^{2}_{sc})^{1/2} ≤ 0.14: a small limit is needed to keep the prediction plus error likely within the tercile. The value 0.14 was simply estimated; optimization may give useful improvement. [The factor (1 − *R*^{2}_{sc})^{1/2} again comes from Eq. (3).]
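
The decision rules above can be sketched as follows, implementing the stated thresholds literally; the function name and structure are illustrative, not the authors' code:

```python
import math

def tercile_guess(y_est, r_sc, L=0.33):
    """Return 'above', 'below', 'middle', or None (stay with climatology),
    following the stated decision rules. y_est is the prediction with Y_s
    renormalized to sigma = 1; r_sc is bias-corrected seasonal skill."""
    resid = math.sqrt(1.0 - r_sc**2)           # residual-sigma factor, Eq. (3)
    if r_sc > 0.45 and abs(y_est) * resid <= 0.14:
        return "middle"                         # error likely stays in tercile
    limit = L / resid if r_sc > 0.45 else L     # stated outer-tercile limit
    if y_est > limit:
        return "above"
    if y_est < -limit:
        return "below"
    return None                                 # no nonclimatology guess

print(tercile_guess(0.5, 0.3))   # above
print(tercile_guess(0.1, 0.6))   # middle
print(tercile_guess(0.2, 0.3))   # None
```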

##### (ii) Quantitative skill considerations: Table 2

Table 2 shows quantitative skill for four seasons, each using all OOS years. Results for at least three *L*_{k} are shown under prediction limit; some seasons include *S*_{H} with other decision criteria for illustration. The number of predictions, *N*_{p}, number of correct predictions, *N*_{c}, and expected correct, *N*_{ec}, columns explicitly show *L*_{k} sensitivity. Maximum possible predictions per season are 2064 (6 × 344 CDs). Like the OOS maps, Heidke skills include the in-sample annual cycle so that OOS baseline shifts from climate change, or other nonstationarity, directly reduce skill. Testing here is more demanding than used by Barnett and Preisendorfer (1987, henceforth BP) since BP derived their terciles from their OOS predictions.
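
Given *N*_{p}, *N*_{c}, and *N*_{ec}, the Heidke score follows the standard categorical form; a minimal sketch, assuming equiprobable categories (*N*_{ec} = *N*_{p}/3 for terciles):

```python
def heidke_skill(n_correct, n_pred, n_classes=3):
    """Heidke skill score (percent) for categorical guesses:
    S_H = 100 * (N_c - N_ec) / (N_p - N_ec), with expected-correct
    N_ec = N_p / n_classes for equiprobable categories."""
    n_ec = n_pred / n_classes
    return 100.0 * (n_correct - n_ec) / (n_pred - n_ec)

# e.g., 120 correct of 300 tercile guesses (N_ec = 100):
print(round(heidke_skill(120, 300)))  # 10
```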

##### (iii) Quantitative skill: Results, Table 2

Table 2's salient feature is the consistently positive skill: even the most problematic season, April–June (see earlier), shows slightly positive results. The Heidke skill score usually increases with *L,* as expected for real skill. JFM's large *L*_{k} range shows behavior when prediction is unusually good (note the many high-*R*_{sc} CDs, Fig. 9a), ENSO response is strong, and strong ENSOs occur: even *L* = 1.0 gives ≈280 predictions, and more than half are correct for *L* > 0.55. In-sample tercile divisions varied about normal (see appendix D). JFM skills shown for normal terciles (±0.44) are nearly as good as those including regional variation, a result typical of all seasons though not explicitly shown.

OND illustrates other commonalities. The value 1.2 under "Other decision criteria" shows skills when monthly tPAs are exponentiated by 1.2 before normalization. This change (tails stretched, middle compressed) corrects much of the kurtosis from the log transform. Reduced tercile jitter and larger *N*_{p} should increase skill; both effects are seen. From July through December, adding EG 2 to the predictors and orthogonalizing (EG2 + orth under Other decision criteria) notably increased *N*_{p}, *S*_{H}, and *p*_{fo}. (The optimal predictor assumes orthogonality, while EG 2 often correlated strongly with several PCs.) Extended Gulf 2 had large low-*p* areas in all the included 3-month seasons and field significance in most.

##### (iv) Quantitative skill: Estimating statistical significance, p_{fo}

Table 2's third column shows *z* scores (see appendix D). The value *z* determines *p*_{fo} directly if predictands are independent, but spatial correlation biases *z* high (BP, their appendix, section 4). Correction was done following BP (see appendix D). Table 2's bottom shows conservative *t*_{B} (field) significance levels based on 15 degrees of freedom. (The *B* indicates the underlying binomial distribution.)
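
Before any spatial-correlation correction, the *z* score is just the normal approximation to the binomial under the chance hypothesis; a sketch (function name illustrative; *p* = 1/3 for tercile guesses):

```python
import math

def binomial_z(n_correct, n_pred, p=1.0 / 3.0):
    """Normal-approximation z score for n_correct successes in n_pred
    independent tercile guesses under the chance hypothesis p = 1/3.
    Spatial correlation among predictands biases this z high, so a
    correction (e.g., following BP) is still needed."""
    mean = n_pred * p
    sd = math.sqrt(n_pred * p * (1.0 - p))
    return (n_correct - mean) / sd

z = binomial_z(120, 300)
print(round(z, 2))  # 2.45
```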

Nearly all Table 2 entries have *p*_{fo} ≤ 1/6; during JFM many have *p*_{fo} ≤ 0.05. Surprisingly, even JAS (not shown) has *p*_{fo} ≈ 1/6 and *S*_{H} ≈ 16 in spite of its low MLR significance. Its skill apparently results from a few influential PCs, especially 1 and 2. Considered together, the results described earlier indicate that real skill is achievable during most of the year. Given the small *N*_{os} and the *t* statistic's standard error, ±1, the results seem distinctive.

## 4. Discussion

### a. Seasonal correlation results: Comparison to previous studies

Previous statistical studies often used canonical correlation methods, usually with EOF prefiltering (e.g., BS96; BP). While BS96 predicted temperature reasonably well, precipitation results were not good: simple prediction using ENSO composites shows markedly more potential skill (Barnston et al. 1999). The seasonal 0 lag skills of Figs. 8 and 11b are notably greater than those of similar studies. Several factors contribute.

#### 1) Factors limiting and improving skill

In hindsight, canonical methods' limited performance may be straightforwardly explained:

- These methods rely fundamentally on linear regression: precipitation distributions' high skew and extreme values give flawed regression statistics, even for one predictor variable (Tabachnick and Fidell 1996, henceforth TB96, chapter 4; Wang and Swail 2001, henceforth WS). This characteristic of continental precipitation is a nontrivial difficulty.

- EOF prefiltering the precipitation field is more likely to filter out the desired signal than enhance it, since SST influence typically accounts for a small variance fraction. Instead, prefiltering will likely capture mainly stochastic atmospheric variance, since it is highly spatially correlated on 1–2-month timescales (e.g., droughts) and usually dominates.

- Canonical methods typically generate much artificial skill (BP; MI): cross-validation is often used for correction, but its help is limited (see below). Using XV effectively subtracts two large quantities, each with substantial uncertainties. This difference, at best, is likely to have uncertainty that is hard to characterize [see MI and Davis (1977)].

- Canonical methods are "exquisitely sensitive" to departures from normality (TB96, p. 640).

Several factors improve our results: (i) smaller ocean regions were analyzed, following MT; (ii) a CD's entire PA was used, so no variance was lost by filtering or interpolation; (iii) significance levels, correlations, and reliability were markedly improved by removing the PA distributions' large skew and truncation via transformation (TB96; WS); and (iv) MLR recombines signals smeared by PC analysis (see smearing below), improving skill more than expected from unrelated predictors.

#### 2) Caveats and additional discussion

Some care is needed when comparing results here to others'. Seasonally averaged tPA predictions are, approximately, the season's geometric mean ratio to each month's (monthly *σ* normalized) median. This ratio may not be easily related to the usual normal, the season average, if monthly averages or distribution shapes change greatly within the season. (Normalizing anomalies by monthly *σ,* as usually done, causes a similar ambiguity.)

Secondly, XV has often been used to remove artificial skill. However, XV (assuming Gaussian distributions) gives estimated OOS skill, *R*_{os}, for *N*_{os} ≅ *N,* the number of observations used in model construction. The *R*_{os} will usually be noticeably lower than population-based skills (those shown here) unless *N* is immense (>10^{3}) or *R*^{2}_{c}*N* is large: here, the 0.4 contour would lie at about 0.3's position, with accordingly less influence area, and 0.5's at about 0.4's. Contours ≥ 0.6 would be little affected since the governing quantity is squared. For only two or three predictors and *N* ≥ 100, *R*_{c} − *R*_{os} becomes small, a trade-off between predictor number and *R*_{os} discussed in sections 2c–e. [Davis (1977) treats these skill types in detail.] Note that XV does *not* identify well most artificial skill generated by screening techniques like CCA: only that from predictors whose skill comes from a few influential observations. However, these are preferably dealt with before the main analysis (TB96, chapter 4).
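
The shrinkage behind this *R*_{c} − *R*_{os} trade-off can be illustrated with Wherry's (1931) adjusted-*R*^{2} formula (cited in the references); a sketch with illustrative numbers, not the paper's exact contour calculation:

```python
def shrunken_r2(r2, n, k):
    """Wherry-type shrinkage of a sample multiple R^2 for n observations
    and k predictors: 1 - (1 - R^2)(n - 1)/(n - k - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Illustration: an apparent R = 0.4 (R^2 = 0.16) with 10 predictors
# shrinks severely at small n but changes little once n is large.
for n in (50, 100, 500):
    print(n, round(shrunken_r2(0.16, n, 10), 3))
```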

We emphasize: the term artificial skill requires differentiation from other biases. Hindcast, population (also termed ensemble), and OOS skill are already well defined (section 2e; Davis 1977). We recommend reserving artificial skill for the false skill arising from selecting predictors based on their performance.

### b. ENSO event differences and nonlinearity: Comparison with simpler methods

Methods here explain ENSO event differences found using simpler approaches and show additional SST influence.

#### 1) Improvement over index-composite techniques: Examples

Smith et al. (1999) and Harrison and Larkin (2001, henceforth HL) find the 1997–98 ENSO differing from previous events. Many of their differences appear attributable to using a single ENSO index, however robust. Artifacts are also likely due to composites' small sample sizes. PC-MLR analysis appears to capture effects better: Pacific EOF 1 shows strong and widespread responses in both time and space (Fig. 4), especially JJA (Fig. 12), when HL find little effect. It also properly predicts, OOS, the 1997–98 winter wet southeast coast and Texas regions during a strong warm event (see Figs. 10b, 9a; also MT). A wet southern CA winter is also correctly predicted (Fig. 9a). (Southern CA is influenced by EOFs 1, 2, 5, and 7; unless PC 1 is very strong, as in 1997–98, we expect differences such as HL note.) Sometimes we confirm index results, especially in SON (HL, their Fig. 1): we expected a very dry 1997–98 Northwest winter; and, as we both find, the opposite occurred.

#### 2) Warm and cold event nonlinearities: Physical and dynamical

Our work apparently explains warm and cold event differences seen in other studies (e.g., Hoerling et al. 2001, henceforth HO; MT). Differences should result from both physical and dynamical nonlinearities. Some of the former is requisite since precipitation cannot be negative, and water vapor's nearly exponential temperature dependence should cause a positive skew. Our PA transform directly adjusts for both these effects (e.g., the log is the exponential inverse), so that our tPAs should have little purely physical nonlinearity (which inverse parameters absorb and quantify).

Unlike others, we find little dynamical nonlinearity, which can be found by including |PC 1| as a predictor. It typically shows less significance than expected by chance. Other EOFs account for most differences, for example, EOF 4 correlates highly with |PC 1|. In particular, HO's south-central and Northeast difference pattern coincides very well with EOF 3's influence. We do find a small northwestern region with sufficient area and low *p* to suggest reality and coinciding well with HO. But the effective sample size is likely quite small and the 1997–98 event a major contributor, so that chance may easily be responsible.

### c. Effects inherent in EOF-PC analysis

EOF map sets typically had some maps with a few small areas of local significance, some with fairly large areas, and usually two or three maps with areas large enough to suggest field significance. All three area types often overlapped or fit together. This joint behavior suggests signal smearing among EOFs. Further evidence is overlapping areas forming clearly significant MLR fields (common in our Atlantic set, not shown). Much of this smearing appears inherent in PC analysis. Horel (1981) and Richman (1986) show EOFs' sensitivity to analysis region boundaries, especially when variance is near a border (K.-Y. Kim 1999, personal communication). EOFs' poor ability, rotated or not, to represent motion, usual for SSTAs, will complicate EOF patterns and spread variance among several, or more likely many, EOFs. MLR helps by recombining at least some of these parts: those overlapping in a map set. The values of *R* and *p* are invariant if the predictors are linearly transformed. Varimax, skew rotation, and cyclostationary analysis (Kim 2000) may also help. Factor analysis should give useful noise reduction. However, a simple means to analyze noncyclical motion seems unavailable.
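
The invariance of *R* under linear transformation of the predictors, which underlies MLR's ability to recombine smeared signals, is easy to verify numerically; a sketch with synthetic data (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 120, 3
X = rng.standard_normal((n, k))               # stand-in "PC" predictors
y = X @ np.array([0.5, -0.3, 0.2]) + rng.standard_normal(n)

def multiple_r2(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

T = np.array([[2.0, 1.0, 0.0],                # any invertible linear mix
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
r2_a = multiple_r2(X, y)
r2_b = multiple_r2(X @ T, y)                  # linearly mixed predictors
print(np.isclose(r2_a, r2_b))  # True
```

The mixed predictors span the same column space, so the fit, and hence *R* and *p*, are unchanged.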

### d. Using additional ocean regions and pattern field testing—Future work

Other ocean regions show clear skill, for example, the Atlantic (Table 1); but simply adding a group of predictors will likely reduce overall skill by including too many unimportant variables. A pattern-limited field test seems needed for selection, since a whole-field test will likely miss smaller significant areas. Other considerations are predictor consistency over time and dynamics; see also Wang (2001).

### e. Noise EOFs and SSTAs

Some EOFs, including all Gulf EOFs, had surprisingly strong and distant correlations, likely from atmospheric forcing. Although their PCs had short persistence, ≈1 month half-life, and little 1-month forecast skill, occasionally one had very high 3-month forecast skill, Gulf 3 in particular. Its JJA forecast from March–May (MAM) has clear field significance: *p* ≈ 10^{−7} and a large second area with *p*s ≪ 10^{−2}. Since atmospheric systems causing the Gulf correlations are large-scale, mechanisms could be excited with delayed influence, for example, oceanic adjustment. Gulf 3 suggests snow cover: MAM forecasts JJA, and its full PC is needed, not just ɛ_{t}. Soil moisture does not appear responsible.

An apparent conclusion from the previous material and our AR1 guidance maps (section 2f) is that in the United States, at least, large regions show high local significance when correlated with randomlike or AR1, *r*_{1} ≈ 0.5, time series. While some may be due to SSTAs, many are likely from large-scale atmospheric processes. Since these should have roughly semiglobal effects, they should cause SSTAs in other ocean regions, and their correlations will pass usual statistical tests and XV. They pose a statistical forecasting pitfall and suggest wariness when short-persistence PCs (or other predictors) seem to forecast well. We can only advise caution, offer plausible causes where possible, and suggest future research.
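
The pitfall can be illustrated by Monte Carlo: two *independent* AR1 series with *r*_{1} = 0.5 exceed a naive white-noise 5% correlation threshold far more often than 5% of the time. A sketch under illustrative parameters (series length, trial count):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, r1, rng):
    """Generate an AR1 series with lag-1 autocorrelation r1."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = r1 * x[t - 1] + rng.standard_normal()
    return x

n, trials, r1 = 240, 500, 0.5
# Correlate pairs of independent AR1 series and apply the naive
# white-noise 5% threshold |r| > 1.96/sqrt(n): autocorrelation
# inflates the exceedance rate well above the nominal 0.05.
thresh = 1.96 / np.sqrt(n)
hits = sum(
    abs(np.corrcoef(ar1(n, r1, rng), ar1(n, r1, rng))[0, 1]) > thresh
    for _ in range(trials)
)
print(hits / trials)  # substantially greater than 0.05
```

This is why locally significant correlations with short-persistence predictors warrant the AR1 guidance maps and extra caution described above.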

## 5. Summary and conclusions

Relatively high concurrent (0 lag) precipitation prediction skill is found over much of the United States for most of the year: hindcast-bias-corrected correlation coefficients are typically 0.2–0.4 on monthly and 0.3–0.6 on seasonal timescales, considerably higher than in earlier comparable studies. As found earlier, high-skill regions are seasonally dependent, winter being highest. While this study was limited to the United States, the methods developed can be applied to climate parameters generally. Related studies, such as BS96, suggest similar results are obtainable over most continents. Out-of-sample (OOS) tests confirmed expected accuracy (as did cross-validation).

Factors improving results were detailed SSTA pattern analysis, multiple regression, monthly data, and, critically, a precipitation transform giving Gaussian regression residuals. Method advantages are trustworthy statistics, especially correlation coefficients, and rigorous monthly, and simple and conservative seasonal average, hindcast-bias removal. The number of predictors chosen (necessarily) a priori can be traded off against the difficulty of removing unimportant ones without introducing false skill. Moreover, the predicted response to an SST state and its confidence limits are obtainable by textbook methods.

Quantitative OOS testing with 1994–99 data showed distinct skill for nearly all seasons, often with surprising accuracy in some years and, at times, the opposite. This behavior was consistent with in-sample scatterplots. Elucidating the causes of poor predictions should be interesting future work. At least one non-Pacific EOF found field significant during in-sample analysis (EG 2) performed well in OOS testing, suggesting its skill was not just due to selection from many possibilities.

Improved skill seems likely using better databases and techniques, such as EOF rotation, trimming-corrected COADS data (Wolter 1997), and factor and cyclostationary analysis, in addition to removing unimportant predictors. Prospects for several-month-lead forecasts appear good, combining ocean circulation models with statistical models or using statistical models alone. [Initial work showed population skill typically decreased by only ≈0.1 for late fall and winter 3-month forecasts (Markowski and North 1999).] Carefully including other ocean predictors seems likely to noticeably increase forecast and concurrent skill.

Examining single EOF correlations gave additional results:

- The strong North Pacific variability about the subarctic front (typically the main North Pacific EOF) appears largely due to the front's north–south movement and showed little winter precipitation influence. Instead, the east–west temperature gradient or anomaly position along the front showed much influence and notably modulated ENSO effects during several seasons (seen also by GB98). Other Pacific EOFs also modulated ENSO influence, especially EOF 3.

- Methods here explain ENSO event differences found by others using simpler approaches, show additional SST influence, and account for much previously unexplained nonlinearity.

- Gulf of Mexico EOFs and a large region including the Gulf show unusually strong and widespread U.S. precipitation correlations during all seasons. Their PC behavior strongly suggests the correlations are mainly due to atmospheric forcing.

- The previous and similarly caused correlations are likely to be troublesome in statistical forecasts since they mimic causal influence and are not identified by usual significance and cross-validation tests. However, a few highly significant forecast skills suggest some may be useful forecasters, perhaps due to delayed influences such as snow cover.

- Because the previous correlations are strong and extend over large ocean regions (perhaps near global), they should provide useful, if not important, tests for coupled GCMs.

## Acknowledgments

This work was supported in part by the National Institute for Global Environmental Change, Southern Region, U.S. DOE; the sponsor does not necessarily endorse this study's findings. We are indebted to Kwang-Yul Kim of The Florida State University for many helpful discussions, comments, and his EOF algorithm. This manuscript also benefited substantially from comments and review by Matt Gilmore (University of Illinois at Urbana–Champaign). We thank Steve Schroeder for editing help. The first author is most grateful to Yongyun Hu for his crucial encouragement at the start of this work.

## REFERENCES

Barnett, T. P., and R. Preisendorfer, 1987: Origins and levels of monthly and seasonal forecast skill for United States surface air temperature determined by canonical correlation analysis. *Mon. Wea. Rev.*, **115**, 1825–1850.

Barnston, A. G., 1994: Linear statistical short-term climate predictive skill in the Northern Hemisphere. *J. Climate*, **7**, 1513–1564.

Barnston, A. G., and T. M. Smith, 1996: Specification and prediction of global surface temperature and precipitation from global SST using CCA. *J. Climate*, **9**, 2660–2697.

Barnston, A. G., A. Leetma, V. Kousky, R. Livezey, E. O'Lenic, H. Van den Dool, A. J. Wagner, and D. A. Unger, 1999: NCEP forecasts of the El Niño of 1997–98 and its U.S. impacts. *Bull. Amer. Meteor. Soc.*, **80**, 1829–1852.

Berlage, H. P., 1966: The Southern Oscillation and World Weather. *Meded. Verh.*, Monogr. No. 88, Koninklijk Nederlands Meteorologish Institut, 150 pp.

Bjerknes, J., 1966: A possible response of the atmospheric Hadley circulation to equatorial anomalies of ocean temperature. *Tellus*, **18**, 820–829.

Bretherton, C. S., C. Smith, and J. M. Wallace, 1992: An intercomparison of methods for finding coupled patterns in climate data. *J. Climate*, **5**, 541–560.

Chelton, D. B., 1983: Effects of sampling errors in statistical estimation. *Deep-Sea Res.*, **30**, 1083–1103.

Chen, T. C., J. M. Chen, and C. K. Wikle, 1996: Interdecadal variation in U.S. Pacific coast precipitation over the past four decades. *Bull. Amer. Meteor. Soc.*, **77**, 1197–1205.

da Silva, A., A. C. Young, and S. Levitus, 1994: Algorithms and Procedures. Vol. 1, *Atlas of Surface Marine Data 1994*, NOAA Atlas NESDIS 6, 83 pp.

Davis, R. E., 1977: Techniques for statistical analysis and prediction of geophysical fluid systems. *Geophys. Astrophys. Fluid Dyn.*, **8**, 245–277.

Frankignoul, C., 1985: Sea surface temperature anomalies, planetary waves, and air–sea feedback in the middle latitudes. *Rev. Geophys.*, **23**, 357–390.

Gershunov, A., and T. P. Barnett, 1998: Interdecadal modulation of ENSO teleconnections. *Bull. Amer. Meteor. Soc.*, **79**, 2715–2725.

Gill, A. E., 1982: *Atmosphere–Ocean Dynamics*. Academic Press, 662 pp.

Guttman, N. B., and R. G. Quayle, 1996: A historical perspective of U.S. climate divisions. *Bull. Amer. Meteor. Soc.*, **77**, 293–303.

Harrison, D. E., and N. K. Larkin, 2001: Comments on "Comparison of 1997–98 U.S. temperature and precipitation anomalies to historical ENSO warm phases." *J. Climate*, **14**, 1894–1895.

Hoerling, M. P., A. Kumar, and T. Xu, 2001: Robustness of the nonlinear climate response to ENSO's extreme phases. *J. Climate*, **14**, 1277–1293.

Horel, J. D., 1981: A rotated principal component analysis of the interannual variability of the Northern Hemisphere 500 mb height field. *Mon. Wea. Rev.*, **109**, 2080–2092.

Kim, K.-Y., 2000: Statistical prediction of cyclostationary processes. *J. Climate*, **13**, 1098–1115.

Kim, K.-Y., and Q. Wu, 1999: A comparison study of EOF techniques: Analysis of nonstationary data with periodic statistics. *J. Climate*, **12**, 185–199.

Koster, R. D., M. J. Suarez, and M. Heiser, 1999: Variance and predictability of precipitation at seasonal-to-interannual timescales. *J. Hydrometeor.*, **1**, 26–46.

Latif, M., and T. P. Barnett, 1994: Causes of decadal climate variability over the North Pacific and North America. *Science*, **266**, 634–637.

Livezey, R. E., and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. *Mon. Wea. Rev.*, **111**, 46–59.

Lorenz, E. N., 1964: The problem of deciding the climate from the governing equations. *Tellus*, **16**, 1–12.

Markowski, G. R., and G. R. North, 1999: On the climatic influence of sea surface temperature: Indications of substantial correlation and predictability. Preprints, *10th Symp. on Global Change Studies*, Dallas, TX, Amer. Meteor. Soc., 282–284.

Michaelsen, J., 1987: Cross-validation in statistical climate forecast models. *J. Climate Appl. Meteor.*, **26**, 1589–1600.

Montroy, D. L., 1997: Linear relation of central and eastern North American precipitation to tropical Pacific SST anomalies. *J. Climate*, **10**, 541–558.

Nakamura, H., G. Lin, and T. Yamagata, 1997: Decadal climate variability in the North Pacific during recent decades. *Bull. Amer. Meteor. Soc.*, **78**, 2215–2225.

Neelin, D. J., and W. Weng, 1999: Analytical prototypes for ocean–atmosphere interaction at midlatitudes. Part I: Coupled feedbacks as a sea surface temperature dependent stochastic process. *J. Climate*, **12**, 697–721.

Neter, J., M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, 1996: *Applied Linear Regression Models*. 3d ed. Irwin, 720 pp.

Nicholls, N., 2001: The insignificance of significance testing. *Bull. Amer. Meteor. Soc.*, **82**, 981–986.

NOAA–CIRES Climate Diagnostics Center, cited 2000: Reynolds SST. [Available online at http://www.cdc.noaa.gov/cdc/data.reynolds_sst.html.]

North, G. R., T. L. Bell, R. F. Cahalan, and F. J. Moeng, 1982: Sampling errors in the estimation of empirical orthogonal functions. *Mon. Wea. Rev.*, **110**, 701–706.

Oort, A. H., Y. H. Pan, R. W. Reynolds, and C. F. Ropelewski, 1987: Historical trends in the surface temperature over the oceans based on the COADS. *Climate Dyn.*, **2**, 29–38.

Philander, S. G. H., 1990: *El Niño, La Niña, and the Southern Oscillation*. Academic Press, 289 pp.

Reynolds, R. W., and T. M. Smith, 1994: Improved global sea surface temperature analyses using optimum interpolation. *J. Climate*, **7**, 929–948.

Richman, M. B., 1986: Rotation of principal components. *J. Climatol.*, **6**, 293–335.

Ropelewski, C. F., and M. S. Halpert, 1987: Global and regional scale precipitation patterns associated with the El Niño/Southern Oscillation. *Mon. Wea. Rev.*, **115**, 1606–1626.

Ropelewski, C. F., and M. S. Halpert, 1996: Quantifying Southern Oscillation–precipitation relationships. *J. Climate*, **9**, 1043–1059.

Smith, S. R., D. M. Legler, M. J. Remigio, and J. J. O'Brien, 1999: Comparison of the 1997–98 U.S. temperature and precipitation anomalies to historical ENSO warm phases. *J. Climate*, **12**, 3507–3515.

Spiegel, M. R., 1961: *Theory and Problems of Statistics: Schaum's Outline Series*. McGraw-Hill, 359 pp.

Sutton, R. T., and M. R. Allen, 1997: Decadal predictability of North Atlantic SST and climate. *Nature*, **388**, 563–567.

Tabachnick, T. G., and L. S. Fidell, 1996: *Using Multivariate Statistics*. 3d ed. Harper Collins College, 880 pp.

Unger, D. A., 1995: Skill assessment strategies for screening regression predictions based on a small sample size. Preprints, *13th Conf. on Probability and Statistics in the Atmospheric Sciences*, San Francisco, CA, Amer. Meteor. Soc., 260–267.

Wang, H., M. Ting, and M. Ji, 1999: Prediction of seasonal mean United States precipitation based on El Niño sea surface temperatures. *Geophys. Res. Lett.*, **26**, 1341–1344.

Wang, R., 2001: Prediction of seasonal climate in a low-dimensional phase space derived from the observed SST forcing. *J. Climate*, **14**, 77–99.

Wang, X. L., and V. R. Swail, 2001: Changes of extreme wave heights in Northern Hemisphere oceans and related atmospheric circulation regimes. *J. Climate*, **14**, 2204–2221.

Wherry, R. J., Sr., 1931: A new formula for predicting the shrinkage of the coefficient of multiple correlation. *Ann. Math. Stat.*, **2**, 440–457.

Wolter, K., 1997: Trimming problems and remedies in COADS. *J. Climate*, **10**, 1980–1997.

Yuan, G., I. Nakano, H. Fujimori, T. Nakamura, T. Kamoshida, and A. Kaya, 1999: Tomographic measurements of the Kuroshio Extension meander and its associated eddies.

,*Geophys. Res. Lett.***26****,**79–82.

## APPENDIX A

### Variables and Abbreviations

ɛ(*t*) Error or residual, month *t*

*σ*, *σ*^{2} Standard deviation, variance

*a*_{di} MLR coefficient of *i*th predictor and CD *d*

*a*_{s} Seasonal correlation slope

*b*_{s} Hindcast bias estimate of *R*_{s}

*N* Number of observations (usually in a MLR)

*N*_{c} Number of correct predictions

*N*_{cef} Effectively independent *N*_{c}

*N*_{ec} Number of expected correct predictions, *N*_{p}/3

*N*_{os} Number of OOS predictions

*N*_{p} Number of predictions

*N*_{pef} Effectively independent *N*_{p}

*p* Significance; single-CD (local) MLR significance

*p*_{f} Field statistical significance

*p*_{fo} Out-of-sample field significance

*R*_{d} MLR correlation coefficient (hindcast skill)

*R*′ *R*_{c} or *R*_{sc}

*R*_{c} Population multiple correlation coefficient

*R*_{os} Expected OOS correlation coefficient (skill)

*R*_{s} Seasonal average correlation coefficient, with hindcast bias

*R*_{sc} *R*_{s} corrected for hindcast bias (ensemble estimate, also skill)

*r* Simple linear correlation coefficient

*r*_{1} Lag 1 autocorrelation coefficient

*r*_{ij} Cross-correlation coefficient

*S*_{H} Heidke skill score

*s*_{y·x} Standard deviation of regression residuals

*s*_{y} Standard deviation of regression predictands

*υ* Number of regression predictors

*y*(*t*) Predicted transformed precipitation anomaly

*Y*(*t*) Transformed PA (tPA)

*Y*_{est} Season's predicted monthly tPA average

*Y*_{s} Season's actual monthly tPA average

AR1 Autoregressive process, order 1

CCA Canonical correlation analysis

CD(s) U.S. State Climatic Division(s)

EG Extended Gulf region: Table 1

MLR Multiple least-squares linear regression

NCDC U.S. National Climatic Data Center

OOS Out-of-sample

PA(s) Precipitation anomaly(s)

PC(s) Principal component(s); PC time series

PD(s) Precipitation distribution(s)

SSTA(s) SST anomaly(s)

SVD Singular value decomposition

tPA(s) Transformed observed monthly PA(s)

XV Cross-validation

## APPENDIX B

### Precipitation Anomaly Distribution Transformation and Regression Residual Testing

#### Transformation algorithm

Let *y*(*t*) be the normalized PAs for a given CD and a consecutive series of months, *t*, from each year. After shifting *y*(*t*) so that *y*(*t*_{L}) = 0, choose *c*_{0} so that *y*(*t*_{M})/[*y*(*t*_{L}) + *c*_{0}] = *y*(*t*_{H})/*y*(*t*_{M}), where *y*(*t*_{L}), *y*(*t*_{M}), and *y*(*t*_{H}) are the lowest, median, and highest *y*(*t*), respectively. When *y*(*t*) < *y*(*t*_{M}), square interpolate the adjustment so that it decreases rapidly as *y* → *y*(*t*_{M}); values *y*(*t*) > *y*(*t*_{M}) are left unadjusted. A log transform about the median then gives "geometric" transformed anomalies, *Y*(*t*): *Y*(*t*) = ln[*y*′(*t*)/*y*(*t*_{M})]. The values *Y*(*t*), normalized to *σ* = 1, are the tPAs.
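The algorithm can be sketched in code. The closed form *c*_{0} = *y*(*t*_{M})^{2}/*y*(*t*_{H}) follows from the condition above once *y*(*t*_{L}) = 0, but the exact quadratic taper of the adjustment is our reading of "square interpolate," so treat this as an illustrative sketch rather than the authors' exact implementation:

```python
import numpy as np

def transform_pa(y):
    """Sketch of the Appendix B 'geometric' PA transform.

    Assumes a simple quadratic taper of the adjustment below the median
    and y(t_M) > 0 after shifting (many zero months would break the logs).
    """
    y = np.asarray(y, dtype=float)
    y = y - y.min()                      # shift so y(t_L) = 0
    y_m = np.median(y)
    y_h = y.max()
    # y_M / (y_L + c0) = y_H / y_M with y_L = 0  =>  c0 = y_M**2 / y_H
    c0 = y_m**2 / y_h
    # quadratic ("square") taper: full adjustment c0 at y = 0, zero at the
    # median; values above the median are left unadjusted
    adj = np.where(y < y_m, c0 * ((y_m - y) / y_m) ** 2, 0.0)
    y_adj = y + adj
    Y = np.log(y_adj / y_m)              # log transform about the median
    return Y / Y.std()                   # normalize to sigma = 1
```

The lowest value maps to ln(*c*_{0}/*y*_{M}) rather than −∞, which is the point of the adjustment.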

#### Transformed anomaly and residual distribution examination

##### *χ*^{2} normality testing and results

The *χ*^{2} normality tests used six bins, with equal 18% intervals and 4.5% tail bins (|*z*| > 1.690; two observations expected per month in season). Tail bin counts were added together. Values in the outer 2.0% and 0.5% tails were also counted, to see whether the transform reduced or eliminated extreme-event bias; these counts were too small for *χ*^{2} use.

Table B1 shows transform performance, with the 10 Pacific and 4 Gulf PCs as predictors and *χ*^{2} *p* = 0.05. Summer distributions from states with highly arid CDs, typically the U.S. Southwest, are shown separately since they had the most very small values and are the most severe test. Results are typical: residual distributions are closely normal, with failing fractions well within chance. The Southwest July–September failure count was little more than 1*σ* above expected, even if the several wetter Texas CDs are excluded. The tail counts show extreme values are well removed. The tPAs differ from strict normality, but only residual normality is required. Some limitations: too many very small raw values will give a nonnormal, negatively peaked residual distribution, and over long seasons, distribution-shape changes need consideration.
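A generic version of such a binned *χ*^{2} statistic can be written as follows. The default edges reproduce the paper's |*z*| > 1.690 (≈4.5%) tails, but the interior cutpoints here are illustrative, not the paper's exact 18% bins:

```python
import numpy as np
from math import erf, sqrt

def chi2_stat(z, edges=(-1.690, -0.845, 0.0, 0.845, 1.690)):
    """Binned chi-square goodness-of-fit statistic against N(0, 1).

    `edges` are interior z cutpoints (5 cuts -> 6 bins); expected bin
    probabilities come from the standard normal CDF.
    """
    z = np.asarray(z, dtype=float)
    cdf = lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0)))   # standard normal CDF
    probs = np.diff([0.0] + [cdf(c) for c in edges] + [1.0])
    obs, _ = np.histogram(z, bins=[-np.inf] + list(edges) + [np.inf])
    exp = probs * z.size
    return float(((obs - exp) ** 2 / exp).sum())
```

The statistic is compared with the *χ*^{2} critical value for (bins − 1) degrees of freedom at the chosen *p*.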

##### Residual autocorrelation and scatterplot examination

We checked residual *r*_{1} autocorrelation two ways: nominally, using a season's entire residual time series, and using within-season values only. The nominal calculation includes the decorrelation between years, giving *r*_{1} ≈ 2/3 of the season-only value. Because autocorrelation varies seasonally, an annual basis is not adequate. Autocorrelations appeared statistically consistent with 0. (MLR requirements are based on the *nominal* method.)
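The *r*_{1} check is the standard lag-1 sample autocorrelation; a minimal sketch:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation r1 of a residual time series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # ratio of lag-1 cross products to total sum of squares
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())
```

Applied season by season, this gives the "within-season" values; applied to the concatenated series it gives the nominal values, which include the between-year decorrelation.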

Season-averaged residual time series and their 44-point correlation scatterplots were examined for many CDs with the highest skills. No unusual behavior was found: outlier points did not appear to drive correlations (an important reason for XV) nor was nonrandom residual behavior visible.

## APPENDIX C

### Brief Derivation of Seasonal Average Hindcast Bias Correction, Eq. (4)

Since *y*_{i} can be written as *y*_{i} = *rx*_{i} + *e*_{i}, where *e*_{i} are the regression residuals, the product-moment formula for *r*_{s} in terms of sums (*x* and *y* means = 0) becomes

*r*_{s} = Σ *x*_{sj}*y*_{sj}/[(Σ *x*^{2}_{sj})(Σ *y*^{2}_{sj})]^{1/2}.

After normalizing *x* and *y* to *σ* = 1, formally let Σ *e*^{2}_{sj}/(*N*/*n*) = *α* Σ *e*^{2}_{i}/*N* and Σ *x*^{2}_{sj}/(*N*/*n*) = *γ* Σ *x*^{2}_{i}/*N*, where *i* denotes monthly values and *n* = months per season, and use Σ *e*^{2}_{i} = (1 − *r*^{2})Σ *x*^{2}_{i} and Σ *x*_{sj}*y*_{sj} = *a*_{s} Σ *x*^{2}_{sj} (*a*_{s} will be ≈1 when *p* ≤ 0.1). Substituting and simplifying yields

*r*_{s} = *r*/[*r*^{2} + (*α*/*γ*)(1 − *r*^{2})]^{1/2}.

Here, *α* will typically be ≈1/*n*. If the *x*_{i} were uncorrelated, *γ* also would be ≈1/*n*. But the *x*_{i} derive from persistent SSTAs, so *γ* will be closer to 1, since each *x*_{sj} will be ≈ its corresponding *x*_{i}'s.
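The relation can be checked numerically. The sketch below (illustrative parameters: AR(1) persistence 0.8, monthly signal 0.4, 3-month seasons) computes *α* and *γ* empirically and compares the seasonal correlation predicted by *r*/[*r*^{2} + (*α*/*γ*)(1 − *r*^{2})]^{1/2} — the relation that follows from the substitutions above, neglecting the seasonal cross term — with the directly computed seasonal correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
N, n = 30000, 3                          # months, months per season (illustrative)

# persistent AR(1) "SST-like" predictor plus a noisy predictand
x = np.zeros(N)
for t in range(1, N):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
y = 0.4 * x + rng.standard_normal(N)

# normalize to sigma = 1 so the monthly slope equals the correlation r
x = (x - x.mean()) / x.std()
y = (y - y.mean()) / y.std()
r = float((x * y).mean())                # monthly correlation
e = y - r * x                            # monthly residuals

# non-overlapping n-month seasonal averages
xs, ys, es = (v.reshape(-1, n).mean(axis=1) for v in (x, y, e))

alpha = (es**2).mean() / (e**2).mean()   # seasonal/monthly residual variance ratio
gamma = (xs**2).mean() / (x**2).mean()   # seasonal/monthly predictor variance ratio

r_s_pred = r / np.sqrt(r**2 + (alpha / gamma) * (1 - r**2))
r_s_emp = float(np.corrcoef(xs, ys)[0, 1])
```

With persistent *x*, *γ* stays well above 1/*n* while *α* ≈ 1/*n*, so the seasonal correlation exceeds the monthly one, as the derivation implies.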

## APPENDIX D

### Out-of-Sample Testing: Optimal Weighting and Heidke Skill Scoring

#### Additional optimal weighting procedures and discussion

Choosing *λ* [Eq. (5)] depends on predictor confidence: the tradeoff is underweighting variables that have real influence versus overweighting poor predictors and losing forecast skill. Smaller *λ* values give more weight to less-significant (smaller *t*) predictors. The effect of *λ* rapidly becomes minor as *t* increases, since *t* is squared. Davis (1977) shows that *λ* should lie between 1 and 2 when *t* is moderately low. (Davis' *R*^{2}/〈*r*^{2}〉 is well approximated by *t*^{2} for the *N* and *p* here.)

A PC's *p*_{f} was included by varying *λ* with *p*_{f}. For field-significant PCs: *λ* = 1.7 for *t* < 1.0, linearly tapered to 0 over 1.0 < *t* < 2.3 (0 for *t* > 2.3). PCs with very large low-*p* areas plus long persistence, *r*_{1} ≥ 0.7, were treated similarly. Otherwise, *λ* was more conservative: 2 for *t* < 1.75, linearly tapered to 0 over 1.75 < *t* < 3.5.
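The taper rules can be expressed as a small helper. The function name is ours, and the Davis-style shrinkage form *t*^{2}/(*t*^{2} + *λ*) mentioned in the comment is an assumption — Eq. (5) itself is not reproduced here:

```python
def damping_lambda(t, field_significant):
    """Piecewise-linear lambda taper per the Appendix D text (sketch).

    Shrinkage is assumed applied downstream as t**2 / (t**2 + lam),
    so lam -> 0 means no damping of a clearly significant predictor.
    """
    t = abs(t)
    if field_significant:
        lo, hi, lam0 = 1.0, 2.3, 1.7     # field-significant (or persistent) PCs
    else:
        lo, hi, lam0 = 1.75, 3.5, 2.0    # more conservative otherwise
    if t < lo:
        return lam0
    if t > hi:
        return 0.0
    return lam0 * (hi - t) / (hi - lo)   # linear taper between lo and hi
```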

A PC's *p*_{f} was also used where local *p* was ≈0.1: for CDs with ≥1 field-significant PC, weights applied to *Y*_{est} varied linearly from 1.0 to 0.0 over 0.12 < *p* < 0.18, and over 0.095 ≤ *p* ≤ 0.135 otherwise. This weighting avoids most nonclimatology predictions for CDs with only a few moderate-*p* predictors; the higher limits partially compensate for the MLR significance reduction from using many predictors. The tradeoff is underweighting PCs with real influence that do not appear field significant.

#### Heidke skill: Tercile divisions

A *Y*_{s}'s actual tercile divisions often differed from normal. Since 3 does not evenly divide the training season *N,* 44, divisions fall evenly between two *Y*_{s} values. Averaging many *Y*_{s} pairs flanking normalized tercile divisions gave 0.43, indicating unbiased terciles. Boundary jitter was evident—often statistically significant between nearby CDs. To reduce this systematic error, average terciles within a state (and adjoining states if <4 CDs) were used; 0.43 was also examined since average regional bias was small.

#### Heidke skill field significance

*Z* scores (Table 2, column 3) = (*N*_{c} − *N*_{ec})/*σ*_{B}, where *σ*_{B} is the binomial distribution *σ*, [*Np*(1 − *p*)]^{1/2}, with *p* = 1/3 the chance-correct probability. Here, *σ*_{B} is well approximated by a Gaussian if *N* ≥ 15 (Spiegel 1961, chapter 7). The value *Z* overestimates *p*_{fo} due to spatial correlation (BP, their appendix, section 4): neighboring CD *Y*_{s} are usually correlated ≈0.9. Correction followed BP's approach. As for *p*_{f}, effective independent prediction and correct-guess numbers, *N*_{pef} and *N*_{cef}, are needed. They were approximated following BP except that cross correlations *r*_{ij} were used [BP's Eq. (A17)], as *Y*_{s} is normalized to 1*σ*; *r*_{ij} were limited to CDs with *p* ≤ 0.1. They were weighted by *w*_{i}*w*_{j}, the fractions of years predictions were made at CDs *i* and *j*, instead of (*σ*_{i}*σ*_{j}), to account for differing prediction numbers. The bias of |*r*_{ij}| was removed before summing. [In BP's Eq. (A17), using |*r*_{ij}| gives the total normalized equivalent CDs per typical CD, ≈1/10 here.] Diagonal elements are summed since *w*_{i} is not always 1. Then *t*_{B} = *Z*/(Σ *w*_{i}*w*_{j}*r*_{ij})^{1/2}, where *t*_{B} corresponds to a binomial distribution with *N* = *N*_{pef}.
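The uncorrected binomial *Z* score is straightforward; a minimal sketch (function name ours):

```python
import math

def heidke_z(n_correct, n_pred, p_chance=1.0 / 3.0):
    """Z score of the correct-forecast count vs. binomial chance.

    sigma_B = [N p (1 - p)]**0.5; the Gaussian approximation to the
    binomial is good for N >= 15 (Spiegel 1961). Spatial-correlation
    correction (the N_pef adjustment above) is not included here.
    """
    n_expected = n_pred * p_chance
    sigma_b = math.sqrt(n_pred * p_chance * (1.0 - p_chance))
    return (n_correct - n_expected) / sigma_b
```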

Fig. 2. The first 10 Pacific EOFs and their variance fraction (Var). (a)–(d) The most influential and persistent and (e)–(j) usually less influential and less persistent; see text. Loadings have been scaled for presentation by ≈100 after unit-length normalization. Dot–dashed lines indicate negative PC contributions. Zero contours are dotted.

Citation: Journal of Hydrometeorology 4, 5; 10.1175/1525-7541(2003)004<0856:CIOSST>2.0.CO;2


Fig. 3. As in Fig. 2 except the first four Gulf of Mexico EOFs and normalization ≈50.


Fig. 4. Local significance maps of the main ENSO EOF, Pacific 1, for (a)–(f) six 3-month seasons covering the year. Regions enclosed with a thick solid (thin) *p* = 0.1 contour (and thick dot–dashed inner contours) are wet (dry) during a warm (cold) event. Contour intervals are a factor of 10, except a thin dot–dashed line marks *p* = 0.03. The *p* = 1 × 10^{−4} contour is labeled 0 and is thickest; the innermost contour is 1.5 × 10^{−5}. Dots are approximate Climatic Division centers. Note the large areas of significance and the strong transient dipole in (a) Jan–Mar. The first 10 Pacific PCs were used as lag 0 predictors; Gulf PCs were not used, see text.


Fig. 5. (a) Gulf EOF 1 and (b) EOF 3 for Mar–Jun and the full year, respectively. Otherwise as in Fig. 4, except the four Gulf PCs are included as regression predictors and different line styles only indicate opposite responses.


Fig. 6. As in Fig. 3 but for the first two extended Gulf EOFs; see Table 1.


Fig. 7. As in Fig. 4 except Pacific EOF 5 during its season of maximum influence. The wet line style corresponds to EOF 5's right pole warm (Fig. 2d).


Fig. 8. (a)–(d) Overall regression local significance for four 3-month seasons. Contours, dots, and predictors (lag 0) as in Fig. 4, except the 0.03 contour is omitted and opposing responses do not apply. Note the high significance levels and field significance (see text) for most seasons.


Fig. 9. Season-averaged MLR bias-corrected 0-lag prediction skill, *R*_{sc}, corresponding to the significance results and seasons in Figs. 8a–d. Contours begin at 0.3; intervals are 0.1. The 0.3 and 0.6 contours are enhanced for clarity. Note the good predictability during winter months.


Fig. 10. Jan–Mar actual and predicted season-average precipitation for (a),(c) 1998 and (b),(d) 1999. Averaged monthly transformed anomalies are shown. These are approximately log_{e} of the ratio of the season average to the actual median precipitation of the season's months (monthly data normalized to *σ* = 1, see text). Predicted (actual) contours start at 0.2 (0.4), with sequence 0.2, 0.4, 0.7, 1.0, 1.2, 1.4, … Corresponding ratios to observed medians (reciprocals when negative) are ≈1.1, 1.3, 1.5, 1.8, 2.1, 2.3, … Positive (negative) anomalies are indicated by thick (thin) solid outer contours and thin solid (thick dot–dashed) inner contours. Darkest shading starts at ±1.4. Dots are approximate U.S. State Climatic Division centers.


Fig. 11. As in Fig. 10 but for the 1996 and 1997 Jun–Aug seasons.


Fig. 12. (a) Regression significance as in Fig. 8, (b) prediction skill, *R*_{sc}, as in Fig. 9, and (c),(d) Pacific EOF 1 and 2 significance as in Fig. 4, but all for the Jun–Aug season. EOF 2 dry contours correspond to a cold North Pacific anomaly (Fig. 2b).


Table 1. Ocean regions selected for EOF analysis.

Table 2. Heidke skill scores, effective *t* statistic, difference from correct relative to the binomial distribution standard deviation, *σ*_{B}, and actual prediction numbers. Nonclimatology prediction levels and other conditions are rightmost (see text).

Table B1. Normality *χ*^{2} results, and numbers and expected percentage of values in the tails of transformed and residual distributions. The *χ*^{2} test used six bins, *p* = 0.05. Southwest states (CA, NV, UT, AZ, NM, and TX) were considered separately because of their dry seasons. See text for details and discussion.