An El Niño–Southern Oscillation Climatology and Persistence (CLIPER) Forecasting Scheme

John A. Knaff, Department of Atmospheric Science, Colorado State University, Fort Collins, Colorado

and
Christopher W. Landsea, NOAA/Hurricane Research Division, Miami, Florida


Abstract

A statistical prediction method, based entirely on the optimal combination of persistence, the month-to-month trend of initial conditions, and climatology, is developed for the El Niño–Southern Oscillation (ENSO) phenomenon. The selection of predictors is by design intended to avoid any pretense of predictive ability based on “model physics” and the like; rather, it specifies the optimal “no-skill” forecast as a baseline comparison for more sophisticated forecast methods. Multiple least squares regression using the method of leaps and bounds is employed to select the best of 14 possible predictors, based upon 1950–94 developmental data. Between zero and four predictors were chosen for each of 12 separate regression models, developed separately for each initial calendar month. The predictands to be forecast include the Southern Oscillation (pressure) index (SOI) and the Niño 1+2, Niño 3, Niño 4, and Niño 3.4 SST indices for the equatorial eastern and central Pacific at lead times ranging from zero seasons (0–2 months) through seven seasons (21–23 months). Though hindcast ability is strongly seasonally dependent, substantial improvement is achieved over simple persistence, with the largest gains occurring at two–seven-season (6–23 month) lead times. For example, expected maximum forecast ability for the Niño 3.4 SST region, depending on the initial date, reaches 92%, 85%, 64%, 41%, 36%, 24%, 24%, and 28% of variance for leads of zero to seven seasons. Comparable maxima of persistence-only forecasts explain 92%, 77%, 50%, 17%, 6%, 14%, 21%, and 17%, respectively. Developers of more sophisticated statistical and dynamical forecasting models are encouraged to utilize this ENSO-CLIPER model in place of persistence when assessing whether they have achieved forecasting skill; to this end, real-time results for this model are made available via a Web site.

Corresponding author address: Dr. John A. Knaff, Department of Atmospheric Science, Colorado State University, Fort Collins, CO 80523.

Email: knaff@upmoist.atmos.colostate.edu


1. Introduction

Year-to-year variations of the El Niño–Southern Oscillation (ENSO) produce robust, large-scale changes in the distribution of global precipitation (Ropelewski and Halpert 1987, 1989) and surface temperature (Halpert and Ropelewski 1992), in addition to altering both the frequency and location of tropical cyclones (Gray et al. 1994; Nicholls 1992; Lander 1994). These teleconnected effects are due to ENSO-forced global circulation changes (Horel and Wallace 1981; Arkin 1982). Thus, successful seasonal forecasts of ENSO variability are crucial for useful predictions of these precipitation, temperature, and tropical cyclone variations.

In recent years, efforts to understand the ENSO phenomenon have moved into the arena of real-time forecasting of ENSO itself. Predictions associated with this activity are published quarterly within the Experimental Long-lead Forecast Bulletin (hereafter referred to as ELLFB; Barnston 1996a). Methodologies presently being used to forecast ENSO variability can be broadly subdivided into two groups: 1) statistical models [nonlinear analog model (NLAM), Drosdowsky (1994); linear inverse model (LIM), Penland and Magorian (1993) and Zhang et al. (1993); singular spectrum analysis–maximum entropy method (SSA–MEM), Keppenne and Ghil (1992) and Jiang et al. (1995); constructed analog forecasts (CAF), Van den Dool (1994); canonical correlation analysis (CCA), Barnston and Ropelewski (1992)] and 2) dynamic models [hybrid coupled model (HCM), Barnett et al. (1993); Cane and Zebiak model (CZ), Zebiak and Cane (1987); LDEO2, Chen et al. (1995); coupled model projects 9 and 10 (CMP9 and CMP10), Ji et al. (1994); Center for Ocean–Land–Atmosphere (COLA), Kirtman et al. (1996); Bureau of Meteorology Research Centre (BMRC), Kleeman (1993); Oxford, Balmaseda et al. (1994)]. Many of these models (LIM, HCM, CZ, LDEO2, CMP9, CMP10, COLA, BMRC) predict the spatial aspects of the SST field; however, it is common to compare the predictive capabilities of all of these models by examining regionally averaged SST anomalies.

The real-time ENSO forecasting ability of a subset of these models—two statistical (CCA and CAF) and three dynamic (CZ, HCM, and CMP9)—was assessed by Barnston et al. (1994). Two-season lead time predictions (for example, a forecast of December through February conditions issued two seasons in advance on the first of June) were examined. In this analysis, forecast ability was measured by computing linear correlation coefficients between predicted and observed SST variations. All of the models achieved correlations of around 0.65, explaining about 40% of the variance of observed eastern and central equatorial Pacific SST anomalies during the years 1982–93. These results were suggested to show “skill” in that they exceeded the forecast ability of a simple 1-month persistence forecast, which could explain only about 5% of the variance in the observations. (A “persistence” forecast uses the current 1-month anomalous conditions as the predicted future anomaly values. For example, if the SSTs were 0.73°C above average, persistence would also forecast a +0.73°C anomaly for the following months.) Note that validation tests of the three dynamic models, because of their relatively recent origin, entailed 4 yr of hindcast tests in lieu of independent data. (A “hindcast” refers specifically to prediction of a past event based upon initial conditions assessed prior to the event in question. Both statistical and dynamic models utilize hindcast testing.) However, successful hindcasts, even with dynamic models, do not ensure equally successful future predictions. As noted by Barnston et al. (1994), “there is no substitute for real-time forecasting.”
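For concreteness, the sketch below (in Python, not part of the original study) shows how such a persistence forecast and its explained variance would be computed; the sample anomaly values are invented for illustration.

    import numpy as np

    def persistence_forecast(anomalies, lead_months):
        # Use the current monthly anomaly as the forecast `lead_months` ahead.
        return anomalies[:-lead_months]

    def variance_explained(forecast, observed):
        # Square of the linear correlation coefficient between the two series.
        r = np.corrcoef(forecast, observed)[0, 1]
        return r ** 2

    # Invented monthly SST anomalies (deg C) standing in for a Nino index.
    sst_anom = np.array([0.73, 0.80, 0.95, 1.10, 0.60, 0.20, -0.10, -0.40])
    forecast = persistence_forecast(sst_anom, 1)
    observed = sst_anom[1:]
    print(f"persistence explains {100 * variance_explained(forecast, observed):.0f}% of variance")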

Nonetheless, two issues complicate the assessment of real-time forecast skill in these statistical and dynamic ENSO models. The first concern is the lack of success by these models during the last few years. For example, during late 1994 and early 1995, there was a reemergence of El Niño conditions, including SST anomalies of +1.0° to +2.0°C developing throughout the eastern and central equatorial Pacific. Southern Oscillation index (SOI) values averaging 1.5 standard deviations below average, convective activity well above normal near the date line and equator, and weakened trade winds throughout much of the equatorial Pacific also occurred, consistent with a mature El Niño event (Halpert et al. 1996). The June 1994 issue of the ELLFB (Barnston 1994b) noted that nearly all models—statistical and dynamic—suggested that no El Niño was imminent, even though the models were run only a half-year before the 1994–95 El Niño appeared and reached its mature stage. The statistical models, including the LIM, SSA–MEM, CAF, and CCA schemes, all predicted neutral ENSO conditions for December 1994 through February 1995. The dynamic models including the CMP9 and BMRC schemes called for weak ENSO cold phase (La Niña) conditions, and the CZ model suggested near-neutral conditions. Only the HCM correctly forecast the warming that was later observed in the central tropical Pacific (140°W to the date line). However, in June 1995 (Barnston 1995b), even the HCM was unable to forecast the moderate La Niña event of late 1995 and early 1996 (Halpert et al. 1996) and also incorrectly predicted a very strong El Niño event to occur in late (boreal) spring and summer 1996; neutral conditions actually prevailed. While the reasons for these wide-scale forecast failures are beyond the scope of this paper, suffice it to say that the forecast “success” reported by Barnston et al. (1994) may have been premature.

The second ENSO forecasting assessment issue regards the method for determining skill in seasonal forecasts. Traditionally, skill, as defined by Barnston et al. (1994) and in most of the statistical and dynamical modeling studies referenced previously, is the ability to show significantly greater forecasting success than persistence of initial conditions, where success is usually measured either by maximizing the linear correlation coefficient or by minimizing the root-mean-square error (rmse). However, we suggest that persistence is an inappropriately limited test for determining the threshold of skill in forecasting the ENSO phenomenon. For example, in January 1988 the Niño 3 (5°N–5°S, 90°–150°W) SST anomaly was +0.75°C, which though moderately warm was much reduced from the +2.03°C value of September 1987, at the height of the 1986–87 El Niño event. Using the January 1988 anomaly value as the benchmark persistence forecast would subsequently produce large errors, as conditions moved quickly to a strong La Niña within only a few months. Any modeling scheme that predicted a cold ENSO event, or even a return to neutral conditions, would show success above persistence and thus skill, at least for this particular case. However, in determining whether skillful forecasts were made by a particular model, the inclusion of month-to-month trends of the initial conditions would have made for a more stringent test. In this case, persistence plus trend would have suggested that a return to La Niña, or at least neutral ENSO conditions, was to occur. Of course, the addition of the trend of initial conditions will not always lead to correct forecasts, as borne out by the lack of a La Niña following the 1991–92 and 1993 El Niño events. The point is that adding trend to persistence generally provides improved forecasts over persistence alone.

In addition to the trend of initial values, ENSO conditions are also known to decay preferentially depending on the season (Wright 1985; Wright et al. 1988). Persistence of SST anomalies typically produces the smallest errors when forecasting boreal late fall and winter conditions and the largest errors for late spring and summer values. This tendency is due to the as yet unexplained strong phase locking between ENSO and the annual cycle. Rasmusson et al. (1990) showed that the ENSO signal has two dominant timescales: a biennial mode with a period very close to 24 months, and a mode with a period of 4–5 yr. More importantly, the biennial cycle is tightly locked to the annual cycle. Therefore, consideration of the calendar date with respect to the climatology of composite ENSO events can enhance the forecast ability of ENSO prediction schemes. For instance, if January initial conditions are warm, it is likely that conditions in the following January will be cooler.

To provide a more stringent test for skill in seasonal ENSO forecasting, a multiple regression technique has been fashioned that takes best advantage of climatology, persistence, and the trend of initial conditions: the ENSO-CLIPER. This new model is presented as a replacement for pure persistence in determining the skill threshold for ENSO forecasting. We thus redefine skill in ENSO prediction as the ability to show significant improvement over the forecast capability of ENSO-CLIPER, rather than just persistence. The ENSO-CLIPER, which is based upon 43 yr of surface data, is in this sense an optimal “no-skill” forecasting procedure: models whose performance compares unfavorably with ENSO-CLIPER are said to have no skill. Note that optimal combinations of climatology, persistence, and trend as no-skill forecasts are already in common usage for both tropical cyclone motion (CLIPER; Neumann 1972) and intensity (SHIFOR; Jarvinen and Neumann 1979) forecasting. CLIPER-type models have proved invaluable for providing benchmarks in testing tropical cyclone track and intensity forecasting algorithms (Neumann and Pelissier 1981; DeMaria et al. 1990; DeMaria and Kaplan 1994; Gross and Lawrence 1996). All statistical and dynamical tropical cyclone models are compared for their relative improvement with respect to CLIPER and SHIFOR, rather than simple persistence (see DeMaria et al. 1990). The use of these two no-skill models has provided an invaluable tool in validating new tropical cyclone prediction schemes, both in real time and in hindcasts.

This paper details the development of the ENSO-CLIPER scheme and its expected forecast ability for the SOI and the various SST indices. The next section describes the developmental datasets utilized for both the predictors and predictands. Section 3 details the methodology used in the creation of the ENSO-CLIPER model, and section 4 presents the results of hindcasts on dependent data and estimates of future forecast ability. Section 5 compares the performance of the ENSO-CLIPER with other ENSO prediction schemes. Section 6 provides an example of an independent forecast of ENSO conditions for 1996–98. Following a summary and discussion section, the appendix presents all of the independent ENSO-CLIPER forecasts.

2. Data

Two predictor and predictand datasets were utilized. The years 1950–94 were chosen as the dependent period from which the multiple regression equations were derived. Forty-three years of data are used to create the predictors for each equation; because of the varying lead times, the predictand datasets vary as shown in Table 1. The SOI dataset was computed as the standardized Tahiti minus Darwin (Fig. 1) sea level pressure difference, analyzed as monthly values. Following Trenberth and Shea (1987), the SOI was created from monthly sea level pressure anomalies relative to 1950–79 mean values, each standardized by its climatological standard deviation, yielding the difference between the standardized Tahiti and standardized Darwin sea level pressure anomalies. These data were retrieved from the Climate Prediction Center’s anonymous FTP site (140.90.50.22).
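A minimal sketch of this SOI construction, assuming a monthly series that begins in January and a slice selecting the 1950–79 base period (the function and variable names are ours, not part of the original processing):

    import numpy as np

    def standardized_anomaly(slp, base):
        # By-calendar-month anomalies of a monthly series (January start),
        # standardized by the base-period standard deviation for that month.
        slp = np.asarray(slp, dtype=float)
        months = np.arange(slp.size) % 12
        out = np.empty_like(slp)
        for m in range(12):
            clim = slp[base][months[base] == m].mean()
            std = slp[base][months[base] == m].std(ddof=1)
            out[months == m] = (slp[months == m] - clim) / std
        return out

    def soi(tahiti_slp, darwin_slp, base):
        # SOI = standardized Tahiti anomaly minus standardized Darwin anomaly.
        return standardized_anomaly(tahiti_slp, base) - standardized_anomaly(darwin_slp, base)

    # base = slice(0, 360) would select a 1950-79 climatology for a series
    # of monthly values beginning in January 1950.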

The SST data were the high-resolution global SSTs of Smith et al. (1996), which are based upon optimal interpolation of in situ ship and buoy data, supplemented by satellite SST retrievals as available. The SSTs are monthly values on a 2° grid for 1950–90. Areal-averaged anomalies were computed from a 1950–79 monthly climatology to derive the four standard SST indices commonly utilized for ENSO monitoring (e.g., Barnston and Ropelewski 1992). These were supplemented with a more detailed (1° × 1°) dataset described in Reynolds and Smith (1995), which is available for the period 1991–present. These areas, shown in Fig. 1, include Niño 1+2 (0°–10°S, 80°–90°W), Niño 3 (5°N–5°S, 90°–150°W), Niño 4 (5°N–5°S, 150°W–160°E), and Niño 3.4 (5°N–5°S, 120°–170°W). The Niño 3.4 index has been identified as the SST region having the strongest concurrent association with midlatitude and tropical ENSO-forced circulation variations (Barnston et al. 1997).
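The index computation itself reduces to an areal average of gridded anomalies over each box. The sketch below assumes a (time, lat, lon) anomaly array and applies a cos(latitude) area weighting; the array layout and weighting are our assumptions, not a description of the Smith et al. (1996) processing:

    import numpy as np

    def nino_index(anom, lats, lons, lat_bounds, lon_bounds):
        # Area-weighted mean SST anomaly over a lat/lon box.
        # anom: (time, lat, lon) array of anomalies in deg C.
        lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
        lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
        box = anom[:, lat_mask][:, :, lon_mask]
        w = np.cos(np.deg2rad(lats[lat_mask]))[:, None] * np.ones(lon_mask.sum())
        return (box * w).sum(axis=(1, 2)) / w.sum()

    # Nino 3.4 (5N-5S, 120-170W) in a 0-360 longitude convention:
    # nino34 = nino_index(anoms, lats, lons, (-5.0, 5.0), (190.0, 240.0))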

3. Methodology

The ENSO-CLIPER predictive model utilizes multiple linear regression based on least squares deviations, with predictors screened by the method of leaps and bounds (IMSL 1987). This selection routine examines every possible combination of the predictors, finding the best multiple regression equation having one, two, three, . . . , predictors. Prospective predictors were retained only if they were significant in the regression test at a level beyond 95% using a t test and increased the total variance explained by at least 2.5%. If no predictor met these two criteria, then no ENSO-CLIPER forecast equation is obtained and a zero anomaly (climatology) forecast is made. This occurred only occasionally, most notably at three-season lead times and beyond. Other restrictions on regression predictors, imposed to avoid “overfitting,” are detailed below.
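The selection logic can be sketched as follows. Here a brute-force subset search stands in for the IMSL leaps-and-bounds routine, and the two retention criteria above (95% significance by t test and at least a 2.5% gain in variance explained) appear as explicit thresholds; the helper names and details are ours:

    import itertools
    import numpy as np
    from scipy import stats

    def fit_ols(X, y):
        # Least squares with intercept; returns R^2 and t statistics of slopes.
        n, m = X.shape
        A = np.column_stack([np.ones(n), X])
        beta = np.linalg.lstsq(A, y, rcond=None)[0]
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        cov = (resid @ resid) / (n - m - 1) * np.linalg.inv(A.T @ A)
        return r2, beta[1:] / np.sqrt(np.diag(cov)[1:])

    def select_predictors(X, y, max_predictors=4, min_gain=0.025, alpha=0.05):
        n = len(y)
        chosen, chosen_r2 = (), 0.0
        for k in range(1, max_predictors + 1):
            crit = stats.t.ppf(1.0 - alpha / 2.0, n - k - 1)
            candidates = []
            for s in itertools.combinations(range(X.shape[1]), k):
                r2, t = fit_ols(X[:, s], y)
                if np.all(np.abs(t) > crit):  # every coefficient 95% significant
                    candidates.append((r2, s))
            if not candidates:
                break
            r2, subset = max(candidates)
            if r2 < chosen_r2 + min_gain:
                break  # fails the 2.5% added-variance criterion
            chosen, chosen_r2 = subset, r2
        return chosen, chosen_r2  # () means a climatology (zero-anomaly) forecast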

The SST indices and SOI are forecast at leads of zero to seven seasons. All forecasts are made for 3-month target prediction intervals but are made for each individual monthly initiation time. Here we follow the nomenclature of Barnston and Ropelewski (1992), wherein zero lead indicates predictions for the period beginning with the next immediately upcoming month (their Fig. 5). For example, a forecast issued on 1 February for February–April conditions is termed a zero-lead seasonal forecast; a 1 February forecast for May–July is a one-season lead forecast, and so forth. The limit of 2-yr lead time (i.e., seven seasons) reflects the fact that hindcast ability becomes negligible beyond a seven-season lead.
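In code form, this nomenclature amounts to a simple mapping from initiation month and lead to a 3-month target period (purely illustrative):

    import calendar

    def target_season(init_month, lead_seasons):
        # Lead zero targets the 3-month period beginning with the initiation month.
        start = init_month + 3 * lead_seasons
        return "".join(calendar.month_abbr[(start + i - 1) % 12 + 1][0] for i in range(3))

    print(target_season(2, 0))  # 1 February, zero lead  -> FMA (February-April)
    print(target_season(2, 1))  # 1 February, one season -> MJJ (May-July)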

As stated in the introduction, the aim is to optimally utilize trend and climatology to augment persistence as a no-skill ENSO forecast. As shown by Wright (1985) and Wright et al. (1988), persistence of initial conditions depends upon both the region being forecast and the season. For example, persistence (the anomaly of the previous month) explains 92% of the variance at a zero-season lead for Niño 3.4 during November–January but only 45% for May–July. In contrast, Niño 1+2 SSTs have peak zero-season lead persistence during July–September (85% of the variance) and a minimum in February–April (29%). Thus, to account for such a strong annual cycle in the effectiveness of persistence, separate regressions were performed for each monthly initial starting date.

A pool of 14 predictors was available for selection by the regression scheme (the construction of this pool is sketched in code following the list below). Each regression had the choice of 1-, 3-, or 5-month averages of initial predictor anomalies for the parameter being predicted, and similar choices for the trend of the initial conditions (1-, 3-, or 5-month differences of average anomalies). For example, predictions made 1 January had the choice of December, October–December, or August–December mean initial values for predictor conditions. Options for trend of initial conditions (again from a 1 January starting point) included December minus November, October–December minus July–September, or August–December minus March–July. Similarly, the regression considered the 3-month initial conditions and trend of the other four predictands. Hence, the potential predictors used for a prediction of Niño 3.4 are as follows:

  1. initial Niño 3.4 conditions (1 month)

  2. initial Niño 3.4 conditions (3 month)

  3. initial Niño 3.4 conditions (5 month)

  4. trend of Niño 3.4 conditions (1 month)

  5. trend of Niño 3.4 conditions (3 month)

  6. trend of Niño 3.4 conditions (5 month)

  7. initial Niño 1+2 conditions (3 month)

  8. trend of Niño 1+2 conditions (3 month)

  9. initial Niño 3 conditions (3 month)

  10. trend of Niño 3 conditions (3 month)

  11. initial Niño 4 conditions (3 month)

  12. trend of Niño 4 conditions (3 month)

  13. initial SOI conditions (3 month)

  14. trend of SOI conditions (3 month)
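A sketch of how this 14-member pool could be assembled from monthly anomaly series ending at the initiation time (the ordering and names are ours; the averaging and differencing follow the text above):

    import numpy as np

    def initial_mean(x, k):
        # 1-, 3-, or 5-month average of the most recent anomalies.
        return float(np.mean(x[-k:]))

    def trend(x, k):
        # Latest k-month mean minus the immediately preceding k-month mean,
        # e.g., for a 1 January start and k = 3: OND minus JAS.
        return initial_mean(x, k) - float(np.mean(x[-2 * k:-k]))

    def predictor_pool(series, target="nino34"):
        # `series` maps index names to monthly anomaly arrays ending at initiation.
        pool = []
        for k in (1, 3, 5):  # predictors 1-6: the target index itself
            pool += [initial_mean(series[target], k), trend(series[target], k)]
        for name in ("nino12", "nino3", "nino4", "soi"):
            if name != target:  # predictors 7-14: the other indices, 3-month only
                pool += [initial_mean(series[name], 3), trend(series[name], 3)]
        return np.array(pool)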

As noted above, the regression procedure imposed an additional criterion to inhibit predictor selection beyond meaningful significance: the regression may retain no more than one of predictors 1, 2, and 3 and no more than one of predictors 4, 5, and 6. This restriction minimizes the chance that multicollinearity among predictors creates artificial hindcast ability (Aczel 1989). The variety of initial conditions and trends of the predictand allows flexibility in handling a strong annual cycle of persistence. For example, for Niño 3.4 SST zero- and one-season lead forecasts, it was common for the 1-month initial conditions and trends to be chosen, whereas at lead times of four seasons and longer, the 3-month or 5-month initial conditions (typically as negatively correlated predictors) and trends were instead chosen. Rather than manually selecting the highest persistence and trend time periods, we allowed the regression model to perform the selection while adhering to the above criterion. If no predictors are found, which is occasionally the case at longer leads, the equation produces a climatology forecast.

All results from the multiple regressions are adjusted, or degraded, to reflect what should be expected in completely independent future forecasts rather than the values obtained in the hindcasts. This adjustment of both the variance explained and the rmse is performed following the methodology of Davis (1979) and Shapiro (1984).

This methodology begins with a definition of the amount of artificial ability (AA) or variance explained in Eq. (1):
AA = (M/N)(1 − AH),  (1)
where M = number of predictors (which varies from equation to equation), N = number of observations (43 yr), and AH = hindcast ability obtained from the regression equation, expressed as the fraction of variance explained.
When hindcasts are applied to independent data, it is expected that the degradation is twice this estimate of artificial ability and thus the actual forecast ability (AF) can be estimated as shown in Eq. (2):
AF = AH − 2(AA).  (2)
Since forecast ability is related to the square errors, the adjusted rmse (rmseA) can also be estimated as shown in Eq. (3):
rmseA = rmse × [(1 − AF)/(1 − AH)]^(1/2).  (3)
The results to be shown in the following sections have been adjusted to reflect these likely degradations in performance on independent data.
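In code, the adjustment of Eqs. (1)–(3) reads as follows. The reconstructed forms above, and the assumption of M = 2 predictors in the example, are our reading of the Davis (1979) methodology rather than a verbatim transcription:

    import math

    def adjusted_skill(ah, m, n, rmse_h):
        # ah: hindcast variance explained; m: predictors; n: sample size.
        aa = (m / n) * (1.0 - ah)                             # Eq. (1)
        af = ah - 2.0 * aa                                    # Eq. (2)
        rmse_a = rmse_h * math.sqrt((1.0 - af) / (1.0 - ah))  # Eq. (3)
        return af, rmse_a

    # Example of the same order as the lead-one hindcast below (r = 0.67,
    # rmse = 0.38 deg C), with m = 2 and n = 43 assumed for illustration:
    af, rmse_a = adjusted_skill(ah=0.67 ** 2, m=2, n=43, rmse_h=0.38)
    print(f"adjusted r = {math.sqrt(af):.2f}, adjusted rmse = {rmse_a:.2f} deg C")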

Five separate predictands (Niño 1+2, Niño 3, Niño 4, and Niño 3.4 SST indices and the SOI), plus eight different forecast periods (zero to seven seasons lead), and 12 initial starting times (1 January, 1 February, . . . , 1 December) yield a total of 480 regression relationships that were examined. An equation for each was developed using the 1950–94 data, which provided a sample of 43 hindcast data points.

4. Results

Of the total 480 possible regression equations, 411 met the first two criteria of 95% confidence and a 2.5% increase in hindcast ability, providing nonnegligible forecast ability (i.e., significantly greater than a linear correlation coefficient of zero). These were based on one to four predictors, with most equations containing two to three predictors. In the remainder of the paper, we shall focus only on the Niño 3.4 SSTs for detailed examples of the results. An illustrative example of the form of the prediction equations for Niño 3.4 for a 1 January forecast initiation time, in a nonnormalized format, is as follows (JFM ≡ January–March, AMJ ≡ April–June, JAS ≡ July–September, and OND ≡ October–December):
[Eight regression equations, one for each lead from zero (JFM) through seven seasons (OND of the following year), predicting the Niño 3.4 anomaly from the selected predictors.]

The equations were tested in hindcasts on the 1950–94 data, yielding linear correlation coefficients of r = 0.93, 0.67, 0.56, 0.63, 0.64, 0.47, 0.36, and 0.00, and rmse’s of 0.28°, 0.38°, 0.53°, 0.73°, 0.58°, 0.45°, 0.59°, and 0.59°C, respectively. (The seven-season lead forecast, for this case, could not provide nonnegligible hindcasts, so the 1950–94 average anomaly value for October–December is forecast. This value, while close to zero, is nonzero because of the difference between climatological values for the 1950–94 period and the 1950–79 era from which the anomalies were computed.) Following Davis (1979) and Shapiro (1984), these values would be expected to degrade (in independent forecast tests) to r = 0.93, 0.64, 0.51, 0.59, 0.60, 0.42, 0.30, and 0.00, and rmse’s of 0.29°, 0.39°, 0.55°, 0.76°, 0.61°, 0.46°, 0.61°, and 0.59°C, respectively.

Hindcast results for Niño 3.4 SSTs are presented versus observations in the time series of Fig. 2. For simplicity and clarity, only values for spring (March–May), summer (June–August), fall (September–November), and winter (December–February) are plotted. Close correspondence between the hindcasts and observed values occurred in the zero- and one-season lead results but degraded at longer leads, while remaining nonnegligible at seven-season lead. Forecasts made after 1 December 1992, to the right of the vertical dashed lines in Fig. 2, constitute independent tests of the ENSO-CLIPER forecast scheme. Somewhat greater than expected performance degradation is noted in this small set of out-of-sample forecasts. Values for the independent forecasts are provided in tabular form in the appendix, and a comparison to other ENSO prediction schemes is shown in the next section. For the remainder of the paper, only the adjusted linear correlations and rmse will be presented.

Figure 3 presents adjusted hindcast values (the variance explained expected in independent forecasts) for all five predictands at lead times ranging from zero to seven seasons, based upon initial forecast times of 1 January, 1 April, 1 July, and 1 October. These results are presented as adjusted anomaly correlation coefficients. In general, forecast ability declines as the lead time increases, though not always: notably, the 1 January forecasts for October–December Niño 3.4 show a higher correlation (r = 0.61) than do the forecasts for the earlier July–September period (r = 0.52). Additionally, it is evident that the ability of forecasts at the same lead time for the same predictand is extremely dependent on the initial forecast time of year. Again using the example of Niño 3.4, two-season lead forecasts have correlations that vary from a low of r = 0.52 for those verifying in July–September (see Fig. 3a) to a high of r = 0.80 for those verifying in January–March (Fig. 3c). This variability justifies the approach of taking advantage of the strong annual cycle of predictability at the expense of reducing the number of data points available for the regression. Where no predictors fit the two primary criteria, no forecast equations were produced and the correlation is shown as “0.”

Comparisons of the five predictands in Fig. 3 reveal that the Niño 3.4 and Niño 4 regions have, in general, the most proficient hindcasts at all lead times. The lowest hindcast correlations are generally observed at leads of zero to two seasons for the SOI and at leads of three to seven seasons for the Niño 1+2 SSTs. Figure 4 compares the forecast ability available from persistence with that of the ENSO-CLIPER Niño 3.4 forecasts. While persistence provides essentially all of the forecast ability at lead times of zero and one season, particularly for the SST indices, the predictability the ENSO-CLIPER model gains from the other predictors is crucial at one- to four-season leads. At leads of five to seven seasons, persistence again becomes a useful predictor, but in a negative sense for the Niño 3.4 SSTs; predictive ability is also much reduced. Whereas persistence quickly drops to negligible levels, ENSO-CLIPER retains significant forecast ability in some cases out to seven-season lead. Of basic interest are the differences between the performance of ENSO-CLIPER and persistence-only forecasts in relation to the initial forecast date. While persistence has substantial forecast ability (greater than r = 0.60) at zero-, one-, and two-season leads from the 1 July initial point, it achieves this level only at the zero-season lead from the 1 April initial point, reconfirming earlier results on the annual cycle dependence of persistence. Corresponding rmse values are shown in Fig. 5. Typically, beyond zero-season lead, ENSO-CLIPER has rmse values significantly lower than those of persistence alone. The largest rmse values for both forecasts generally occur during October–December, with minima during April–June. This annual cycle is due to increased variability of observed SST in the Niño 3.4 region during October–December and decreased variability during April–June.

Table 2 presents a more comprehensive view of the forecast ability (correlation coefficient) for zero- through seven-season leads for all five predictands, stratified by verification time, maxima/minima, and annual average values. Note that for Niño 3.4 SSTs, depending on the initial date, ENSO-CLIPER has peak maximum forecast ability of 92%, 85%, 64%, 41%, 36%, 24%, 24%, and 28% of the variance explained (the square of the values shown in Table 2) for leads of zero to seven seasons in advance. Comparable 1-month persistence forecasts should give 92%, 77%, 50%, 17%, 6%, 14%, 21%, and 17%, respectively. Averaged over the entire year (i.e., 12 forecast initiation times), ENSO-CLIPER should give 81%, 55%, 34%, 24%, 18%, 18%, 12%, and 7% for leads of zero to seven seasons in advance, whereas persistence alone should provide only 74%, 34%, 7%, 0%, 3%, 6%, 8%, and 6%. Dramatic gains in forecast ability are shown over persistence, especially at the two- to four-season leads. The ENSO-CLIPER model improves upon persistence by explaining 7%, 21%, 27%, 24%, 15%, 12%, 10%, and 6%, respectively, more of the variance annually for leads zero through seven. Thus, both for the most “predictable” time of year and for the year as a whole, significant improvements are realized over persistence alone with the use of ENSO-CLIPER.

5. Independent performance

The ENSO-CLIPER model was constructed from a dependent dataset of 43 yr, which, given the typical ENSO timescale of 3–5 yr, contains only 10–15 complete ENSO realizations and may be on the short side of the record length needed for this regression technique. Other SST datasets were examined before selecting the Smith et al. (1996) data; however, it was found that before approximately 1950 the data quality in the equatorial Pacific region was inadequate for the purposes of this scheme. The earlier section included estimates of future skill based on the work of Davis and Shapiro, but it is unknown whether these estimates will hold true on completely independent data. To supplement the adjusted correlation and rmse values, the available independent forecast values since 1993 and their verification are listed in the appendix, and a comparison is performed against other published schemes over the same period.

Overall, the years 1993–96 have been a particularly difficult period of ENSO history to predict because of the long-running 1990–95 warm ENSO conditions. Using data available in the ELLFB (Barnston 1993a,b, 1994a–d, 1995a–d, 1996a–c) in both numeric and graphical formats, forecasts of Niño 3.4 (CCA and CAF) and Niño 3 (CZ, LIM, and CMP9) over the same years but with differing initial forecast times, seasonal leads, and verification times are compared. Table 3 shows the rmse differences between ENSO-CLIPER and the various schemes listed above. Note that with only a couple of exceptions, ENSO-CLIPER outperformed (shown as a negative difference) or was equivalent to its competitors. Only the LIM, another statistically based model, significantly improved upon ENSO-CLIPER, and this occurred only at a single (three-season) lead time.

Similarly, one can calculate the linear correlation coefficients between the observed and predicted values for the five schemes listed above and for ENSO-CLIPER. These results, shown in Table 4, suggest again that no statistical scheme or numerical model has had success improving over the ENSO-CLIPER scheme. It is also apparent that none of the schemes, ENSO-CLIPER included, has been able to make independent forecasts that explain a nonnegligible, positive amount of the variance beyond lead one. Granted, these results are based upon a very small sample, but all are independent and all are fairly judged against one another.

Again, the difficulties that all of these models have been having stem, at least to some degree, from the rather anomalous long warm event (1990–95) that occurred during the years under consideration. However, the limited testing shown here indicates that 1) ENSO-CLIPER is at least competitive with many of the currently available models, 2) none of the intercompared models during the years 1993–96 showed skill in forecasting ENSO, defined as significantly exceeding the forecast ability of ENSO-CLIPER (with the possible exception of the LIM at lead three), and 3) all models, including ENSO-CLIPER, performed poorly during 1993–96.

6. A forecast for 1996–98

Starting in late 1995 and continuing throughout much of 1996, a weak to moderate La Niña event occurred in the equatorial Pacific (Halpert et al. 1996). By late 1996, SSTs had returned to near-average values with the exception of cooler than normal waters in the Niño 1+2 region. However, the SOI remained slightly positive and substantially drier than normal conditions continued in the central and eastern equatorial Pacific (as measured by outgoing longwave radiation); thus, the weak La Niña–type regime has continued. Input values for the pool of ENSO-CLIPER predictors on 1 December 1996 were as follows:
[Initial values of the 14 candidate predictors as of 1 December 1996.]

Employing these predictors in the ENSO-CLIPER model yields the forecasts for December 1996–February 1997 (zero-season lead) through September–November 1998 (seven-season lead) shown in Fig. 6. The adjusted rmse is also plotted as horizontal bars, indicating the degree of uncertainty associated with each forecast (a function of both the ENSO-CLIPER model forecast ability and the annual cycle of variability), though the true error will not be known until after verification. All predictands are forecast to return to near neutral by winter/spring 1997 and then progress to moderate El Niño warm conditions during late 1997 and early 1998.

7. Summary and conclusions

The ENSO-CLIPER model offers a baseline no-skill forecast of ENSO variability that is more stringent than persistence-based forecasts. ENSO-CLIPER is a linear multiple regression model that utilizes current ENSO values, the month-to-month change of these current conditions (trend), and consideration of the annual cycle to produce forecasts of ENSO-linked SST indices and SOI values. From one to four predictors are chosen out of a pool of 14 potential predictors by a multiple regression technique using the method of leaps and bounds. If no predictors are selected for a given lead time, climatological conditions are forecast. After adjustment of the hindcast ability to account for future degradation, significant improvements are shown over simple persistence of current conditions. The forecast equations are designed to be applied at the start of each calendar month, and eight forecasts are produced for each predictand, extending from lead times of zero to seven seasons.

It is hoped that ENSO-CLIPER will either replace or at least supplement the current use of persistence as the measure of skill by which other ENSO models, both statistical and dynamical, are judged. Using persistence alone as the skill threshold for ENSO predictive models is overly simplistic. By optimally utilizing the available climatology and persistence information, we are able to construct a more stringent no-skill test for comparison. As recognized in Barnston et al. (1994), true forecast skill cannot be judged until an adequate sample of real-time predictions has been run. To assist with such an analysis, the appendix provides all of the independent forecasts available since 1 January 1993, and section 5 has compared these independent results with other available schemes. Additionally, future ENSO-CLIPER forecasts will be available monthly at the following Web site: http://tropical.atmos.colostate.edu/~knaff. The program to run ENSO-CLIPER is also available upon request.

Much has been gained in the science of day-to-day hurricane forecasting through the use of simple models based entirely on climatology and persistence. CLIPER-type models have proved invaluable for providing a benchmark in testing tropical cyclone track and intensity forecasting algorithms. While the tropical cyclone track model CLIPER has been improved upon by hybrid statistical–dynamical models and by dynamical models, the tropical cyclone intensity model SHIFOR has yet to be surpassed by either. A similar scenario will likely unfold in the field of ENSO forecasting. Although some current statistical and dynamical ENSO models may have difficulty showing improvements over ENSO-CLIPER, truly skillful ENSO models will undoubtedly be developed in the near future. To this end, we encourage verification of all ENSO forecasts using the ENSO-CLIPER forecasts as the no-skill threshold instead of persistence.

Acknowledgments

The authors wish to thank Tony Barnston, William Gray, Dennis Mayer, and John Sheaffer for valuable comments on an earlier draft of this manuscript. Gerry Bell provided many helpful discussions on the topic as well. Barb Brumit, Amie Hedstrom, Bill Thorson, and Rick Taft provided excellent technical and computing expertise. The lead author is supported by NOAA under Contract NA37RJ0202 (William Gray, PI) with supplemental support from NSF under Contract ATM-9417563 (William Gray, PI). The second author was funded through the 1995/96 NOAA Postdoctoral Program in Climate and Global Change.

REFERENCES

  • Aczel, A. D., 1989: Complete Business Statistics. Richard D. Irwin, 1056 pp.

  • Arkin, P. A., 1982: The relationship between interannual variability in the 200 mb tropical wind field and the Southern Oscillation. Mon. Wea. Rev.,110, 1393–1404.

  • Balmaseda, M. A., D. L. T. Anderson, and M. K. Davey, 1994: ENSO prediction using a dynamical ocean model coupled to statistical atmospheres. Tellus,46A, 497–511.

  • Barnett, T. P., M. Latif, N. Graham, M. Flugel, S. Pazan, and W. White, 1993: ENSO and ENSO-related predictability. Part I: Prediction of equatorial Pacific sea surface temperature with a hybrid coupled ocean–atmosphere model. J. Climate,6, 1545–1566.

  • Barnston, A. G., Ed., 1993a: Experimental Long-lead Forecast Bulletin.2, 3, 20 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1993b: Experimental Long-lead Forecast Bulletin.2, 4, 23 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1994a: Experimental Long-lead Forecast Bulletin.3, 1, 35 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1994b: Experimental Long-lead Forecast Bulletin.3, 2, 39 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1994c: Experimental Long-lead Forecast Bulletin.3, 3, 37 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1994d: Experimental Long-lead Forecast Bulletin.3, 4, 38 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1995a: Experimental Long-lead Forecast Bulletin.4, 1, 48 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1995b: Experimental Long-lead Forecast Bulletin.4, 2, 45 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1995c: Experimental Long-lead Forecast Bulletin.4, 3, 44 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1995d: Experimental Long-lead Forecast Bulletin.4, 4, 55 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1996a: Experimental Long-lead Forecast Bulletin.5, 1, 54 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1996b: Experimental Long-lead Forecast Bulletin.5, 2, 78 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, 1996c: Experimental Long-lead Forecast Bulletin.5, 3, 60 pp. [Available from Climate Prediction Center, W/NMC51 Room 604, Washington, DC 20233.].

  • ——, and C. F. Ropelewski, 1992: Prediction of ENSO episodes using canonical correlation analysis. J. Climate,5, 1316–1345.

  • ——, and Coauthors, 1994: Long-lead seasonal forecasts—Where do we stand? Bull. Amer. Meteor. Soc.,75, 2097–2114.

  • ——, M. Chelliah, and S. B. Goldenberg, 1997: What part of the tropical Pacific SST most represents the ENSO? Atmos.–Ocean, in press.

  • Chen, D., S. E. Zebiak, A. J. Busalacchi, and M. A. Cane, 1995: An improved procedure for El Niño forecasting: Implications for predictability. Science,269, 1699–1702.

• Davis, R. E., 1979: A search for short range climate predictability. Dyn. Atmos. Oceans,3, 485–497.

  • DeMaria, M., and J. Kaplan, 1994: A Statistical Hurricane Intensity Prediction Scheme (SHIPS) for the Atlantic basin. Wea. Forecasting,9, 209–220.

  • ——, M. B. Lawrence, and J. T. Kroll, 1990: An error analysis of Atlantic tropical cyclone track guidance models. Wea. Forecasting,5, 47–61.

  • Drosdowsky, W., 1994: Analog (nonlinear) forecasts of the Southern Oscillation index time series. Wea. Forecasting,9, 78–84.

  • Gray, W. M., C. W. Landsea, P. W. Mielke Jr., and K. J. Berry, 1994: Predicting Atlantic seasonal tropical cyclone activity by 1 June. Wea. Forecasting,9, 103–115.

• Gross, J. M., and M. B. Lawrence, 1996: 1995 National Hurricane Center forecast verification. Minutes of the 50th Interdepartmental Hurricane Conf., Miami, FL, OFCM, 829–850.

  • Halpert, M. S., and C. F. Ropelewski, 1992: Surface temperature patterns associated with the Southern Oscillation. J. Climate,5, 577–593.

  • ——, G. D. Bell, V. E. Kousky, and C. F. Ropelewski, 1996: Climate assessment for 1995. Bull. Amer. Meteor. Soc.,77(5), S1–S43.

  • Horel, J. D., and J. M. Wallace, 1981: Planetary-scale atmospheric phenomena associated with the Southern Oscillation. Mon. Wea. Rev.,109, 813–829.

  • IMSL, 1987: FORTRAN Subroutines for Statistical Analysis. International Mathematical and Statistical FORTRAN Library, 1232 pp.

  • Jarvinen, B. R., and C. J. Neumann, 1979: Statistical forecasts of tropical cyclone intensity. NOAA Tech. Memo. NWS NHC-10, 22 pp. [Available from NOAA/NWS/NHC, Miami, FL 33165.].

  • Ji, M., A. Kumar, and A. Leetmaa, 1994: An experimental coupled forecast system at the National Meteorological Center: Some early results. Tellus,46A, 398–418.

  • Jiang, N., M. Ghil, and D. Neelin, 1995: Quasi-quadrennial and quasi-biennial variability in the equatorial Pacific. Climate Dyn.,12, 101–112.

• Keppenne, C. L., and M. Ghil, 1992: Adaptive spectral analysis and prediction of the Southern Oscillation index. J. Geophys. Res.,97, 20 449–20 454.

• Kirtman, B. P., J. Shukla, B. Huang, Z. Zhu, and E. K. Schneider, 1996: Multiseasonal predictions with a coupled tropical ocean–global atmosphere system. Mon. Wea. Rev.,124, 789–808.

  • Kleeman, R., 1993: On the dependence of hindcast skill on ocean thermodynamics in a coupled ocean–atmosphere model. J. Climate,6, 2012–2033.

  • Lander, M., 1994: An exploratory analysis of the relationship between tropical storm formation in the western North Pacific and ENSO. Mon. Wea. Rev.,122, 636–651.

  • Neumann, C. J., 1972: An alternative to the HURRAN tropical cyclone model system. NOAA Tech. Memo. NWS SR-62, 22 pp. [Available from NOAA/NWS/NHC, Miami, FL 33165.].

  • ——, and J. M. Pelissier, 1981: Models for the prediction of tropical cyclone motion over the North Atlantic: An operational evaluation. Mon. Wea. Rev.,109, 522–538.

  • Nicholls, N., 1992: Recent performance of a method for forecasting Australian seasonal tropical cyclone activity. Aust. Meteor. Mag.,40, 105–110.

  • Penland, C., and T. Magorian, 1993: Prediction of Niño 3 sea-surface temperatures using linear inverse modeling. J. Climate,6, 1067–1076.

  • Rasmusson, E. M., X. Wang, and C. F. Ropelewski, 1990: The biennial component of ENSO variability. J. Mar. Syst.,1, 71–90.

  • Reynolds, R. W., and T. M. Smith, 1995: A high-resolution global sea surface temperature climatology. J. Climate,8, 1571–1583.

  • Ropelewski, C. F., and M. S. Halpert, 1987: Global and regional scale precipitation patterns associated with the El Niño/Southern Oscillation (ENSO). Mon. Wea. Rev.,115, 1606–1626.

  • ——, and ——, 1989: Precipitation patterns associated with the high index phase of the Southern Oscillation. J. Climate,2, 268–284.

  • Shapiro, L. J., 1984: Sampling errors in statistical models of tropical cyclone motion: A comparison of predictor screening and EOF techniques. Mon. Wea. Rev.,112, 1378–1388.

  • Smith T. M., R. W. Reynolds, R. E. Livezey, and D. C. Stokes, 1996: Reconstruction of historical sea surface temperatures using empirical orthogonal functions. J. Climate,9, 1403–1420.

  • Trenberth, K. E., and D. J. Shea, 1987: On the evolution of the Southern Oscillation. Mon. Wea. Rev.,115, 3078–3096.

  • Van den Dool, H. M., 1994: Searching for analogues, how long must we wait? Tellus,46A, 314–324.

  • Wright P. B., 1985: The Southern Oscillation: An ocean–atmosphere feedback system? Bull. Amer. Meteor. Soc.,66, 398–412.

  • ——, J. M. Wallace, T. P. Mitchell, and C. Deser, 1988: Correlation structure of the El Niño/Southern Oscillation phenomenon. J. Climate,1, 609–625.

  • Zebiak, S. E., and M. A. Cane, 1987: A model El Niño–Southern Oscillation. Mon. Wea. Rev.,115, 2262–2278.

  • Zhang, B., J. Lie, and Z. Sun, 1993: A new multidimensional time series forecasting method based on the EOF iteration scheme. Adv. Atmos. Sci.,10, 243–247.

APPENDIX

Independent Forecasts 1993–Present

Though the intention of the ENSO-CLIPER methodology was not to create a superior ENSO forecast, the regressions have so far offered predictions of SST anomalies that are quite satisfactory and competitive with more sophisticated methodologies (see section 5). Shown in Tables A1 through A5 are the independent predictions (January 1993–present) of Niño 4, Niño 3.4, Niño 3, Niño 1+2, and the SOI, respectively. This period of ENSO history was very difficult to forecast. Long-running warm conditions existed in the eastern equatorial Pacific for most of late 1990 through early 1995, during which persistence was a dominant mode. Following a final resurgence of warm conditions in late 1994, cold conditions quickly developed in the latter half of 1995. Forecasting ENSO conditions was quite difficult during this period, as evidenced by some large errors experienced by other forecast schemes (see Barnston 1994a–d, 1995a–d).

Examination of these few actual independent forecasts shows that the statistical estimates of rmse from the hindcast results have been quite accurate, though they slightly underestimate the independent rmse values. For example, the independent rmse values for Niño 3.4 for lead times zero through seven are 0.33°, 0.54°, 0.72°, 0.77°, 0.74°, 0.74°, 0.77°, and 0.79°C, respectively. These are lower than the corresponding persistence rmse values of 0.36°, 0.61°, 0.78°, 0.99°, 0.86°, 0.87°, 0.86°, and 0.91°C. Beyond lead two, the ENSO-CLIPER rmse values are 10%–40% smaller than those for persistence. All forecasts exhibit similar error patterns, with the largest errors during the early summer of the 1994 warm event, which followed warm conditions in 1993. The increase in overall rmse was likely due to the nonclimatological nature of the 1993–94 period, when warm conditions persisted. The errors of these schemes tend to be biennial in nature, as expected given the dominant timescale of variation of the ENSO phenomenon.
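The verification arithmetic behind these comparisons is simply the root-mean-square error of each forecast series against the observations over common verification dates, as in the illustrative sketch below (the sample values are invented):

    import numpy as np

    def rmse(forecast, observed):
        forecast, observed = np.asarray(forecast), np.asarray(observed)
        return float(np.sqrt(np.mean((forecast - observed) ** 2)))

    # Invented verification sample (deg C) for one lead time:
    obs = [0.4, 1.1, 0.2, -0.7, -1.0, 0.1]
    cliper = [0.2, 0.6, 0.5, -0.2, -0.6, 0.3]
    persist = [0.9, 0.3, 1.0, 0.4, -0.1, -0.8]
    print(f"ENSO-CLIPER rmse {rmse(cliper, obs):.2f} vs persistence rmse {rmse(persist, obs):.2f}")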

Fig. 1.

Locations of the four SST indices and of the SOI stations utilized as predictors and predictands.


Fig. 2.

ENSO-CLIPER Niño 3.4 hindcasts and forecasts (dashed lines) versus observations (solid lines, in °C). The dashed vertical line demarcates the separation between developmental-sample hindcasts (left) and independent forecast tests (right). Panels display zero- through seven-season leads, respectively. Sample-based linear correlation coefficients and rmse are displayed.


Fig. 3.

ENSO-CLIPER adjusted anomaly correlations for all five predictands (Niño 1+2, Niño 3, Niño 4, and Niño 3.4 SST indices and the SOI) for lead times ranging from zero to seven seasons, based upon initial forecast times of (a) 1 January, (b) 1 April, (c) 1 July, and (d) 1 October.


Fig. 4.

Adjusted anomaly correlations for the Niño 3.4 predictand for both the ENSO-CLIPER (solid line, C) and a 1-month persistence forecast (dashed line, P) for lead times ranging from zero to seven seasons, based upon initial forecast times of (a) 1 January, (b) 1 April, (c) 1 July, and (d) 1 October.


Fig. 5.

Same as Fig. 4 except showing rmse in °C.


Fig. 6.

Time series of actual ENSO-CLIPER forecasts of (a) Niño 4, (b) Niño 3.4, (c) Niño 3, (d) Niño 1+2 SST indices in °C, and (e) the SOI, based upon conditions up to 1 April 1996. Circled points are the observed 3-month average values. Circled points with horizontal bars are the 3-month average forecast values. The horizontal bar indicates the adjusted rmse.


Table 1. 

Description of data time series used to create the predictor and predictand datasets for the ENSO–CLIPER at each of the lead times.

Table 2. 

Adjusted linear correlation coefficients of the ENSO-CLIPER (CLI) model and persistence (PER) versus observed values for the four Niño SST regions and the SOI are presented. These are stratified by verification season (spring, March–May; summer, June–August; fall, September–November; winter, December–February), the maximum and minimum skill periods, and the annual average of skill, for zero- through seven-season leads.

Table 2. (Continued)
Table 3. 

Rmse differences produced by taking the ENSO-CLIPER rmse minus the rmse of the various schemes listed in the first column. Negative numbers indicate that the ENSO-CLIPER scheme outperformed the methodology listed at the left. Note that the verification times varied between individual schemes. The number of available comparisons per lead time and methodology is shown in parentheses.

Table 4. 

Correlation coefficients (× 100) between various schemes, ENSO–CLIPER, and the observed SST anomalies for the same periods discussed in Table 3. The “Mod” column indicates the correlations of the various models listed to the left versus the observed SST anomalies, and the “EC” column indicates the correlation values for the same time periods using ENSO–CLIPER.


Table A1. Niño 4 independent forecasts from 1993 through early 1996. The verification date is listed in the first column, followed by the observed value and then the zero- through seven-season lead forecast values. All values are in degrees Celsius.


Table A2. Same as Table A1 except for Niño 3.4.


Table A3. Same as Table A1 except for Niño 3.


Table A4. Same as Table A1 except for Niño 1+2.


Table A5. Same as Table A1 except for SOI and values are in standard deviations.

Save

Fig. 1. Locations of the four SST indices and of the SOI stations utilized as predictors and predictands.

Fig. 2. ENSO-CLIPER Niño 3.4 hindcasts and forecasts (dashed lines) versus observations (solid lines, °C). The dashed vertical line demarcates the separation between developmental sample-based hindcasts (left) and independent forecast tests (right). Panels display zero- through seven-season leads, respectively. Sample-based linear correlation coefficients and rmse are displayed.

Fig. 3. ENSO-CLIPER adjusted anomaly correlations for all five predictands (the Niño 1+2, Niño 3, Niño 4, and Niño 3.4 SST indices and the SOI) for lead times ranging from zero to seven seasons, based upon initial forecast times of (a) 1 January, (b) 1 April, (c) 1 July, and (d) 1 October.

Fig. 4. Adjusted anomaly correlations for the Niño 3.4 predictand for both ENSO-CLIPER (solid line, C) and a 1-month persistence forecast (dashed line, P) for lead times ranging from zero to seven seasons, based upon initial forecast times of (a) 1 January, (b) 1 April, (c) 1 July, and (d) 1 October.

Fig. 5. Same as Fig. 4 except showing rmse in °C.

Fig. 6. Time series of actual ENSO-CLIPER forecasts of the (a) Niño 4, (b) Niño 3.4, (c) Niño 3, and (d) Niño 1+2 SST indices in °C, and (e) the SOI, based upon conditions up to 1 April 1996. Circled points are the observed 3-month average values; circled points with horizontal bars are the 3-month average forecast values, with the horizontal bar indicating the adjusted rmse.
