The magnitude of seasonal predictability for a variable depends on the departure of its probability density function (PDF) for a particular season from the corresponding climatological PDF. Differences between the PDFs can be due to differences in various moments (e.g., the mean or the spread) from their corresponding values for the climatological PDF. Which moments of the PDF systematically contribute to year-to-year changes in seasonal predictability is an area of particular interest. Previous analyses of seasonal atmospheric variability have indicated that most atmospheric predictability is (i) due to El Niño–Southern Oscillation (ENSO) sea surface temperatures (SSTs) and (ii) primarily due to changes in the mean of the PDF of the atmospheric variability, with changes in the spread of the PDF playing a secondary role. The present analysis extends this assessment to the seasonal predictability of ENSO SSTs themselves. Based on an analysis of seasonal hindcasts, the results indicate that the spread (or the uncertainty) in the prediction of ENSO SSTs does not depend systematically on the mean amplitude of the predicted ENSO SST anomalies, and further, that year-to-year changes in uncertainty are small. Therefore, similar to atmospheric predictability, the predictability of ENSO SSTs may also reside in the prediction of their mean amplitude; the spread, being almost constant, does not have a systematic impact on predictability.
For seasonal climate variability, sea surface temperature (SST) anomalies in the equatorial Pacific associated with El Niño–Southern Oscillation (ENSO) are the dominant source of predictability over certain geographical locations [e.g., the Pacific–North America (PNA) and the Maritime Continent; Trenberth et al. 1998; National Research Council 2010]. Understanding the connections between seasonal climate anomalies and ENSO has been a major focus of research toward enhancing our predictive capability on seasonal-to-interannual time scales (National Research Council 2010).
Previous results have indicated a quasi-linear relationship between the amplitude of ENSO SSTs and their atmospheric and terrestrial response (Hoerling et al. 1997). A larger atmospheric response for a larger-amplitude ENSO event translates into a larger signal-to-noise ratio (SNR). The known relationship between SNR and various measures of prediction skill (Kumar and Hoerling 2000; Kumar 2009) dictates a higher expected skill for seasonal prediction for larger-amplitude ENSO events than for smaller-amplitude events. Following this chain of relationships, correctly anticipating the amplitude of ENSO SST anomalies, and the uncertainty therein, is important for anticipating the magnitude of downstream societal impacts (Ropelewski and Halpert 1986; Glantz 2000).
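The link between SNR and expected skill can be illustrated with a short sketch. A commonly used perfect-model form expresses the expected anomaly correlation of the ensemble-mean forecast as SNR/sqrt(1 + SNR^2); this specific functional form is a standard textbook choice assumed here for illustration, not one quoted from this article.

```python
import math

def expected_correlation(snr):
    """Expected anomaly correlation of an ensemble-mean forecast for a
    given signal-to-noise ratio, under a perfect-model assumption
    (illustrative form, not taken from this article)."""
    return snr / math.sqrt(1.0 + snr ** 2)

# Larger-amplitude ENSO events imply a larger SNR and hence higher expected skill.
for snr in (0.5, 1.0, 2.0):
    print(f"SNR = {snr:.1f} -> expected correlation = {expected_correlation(snr):.2f}")
```

The monotonic increase of expected correlation with SNR is what connects larger ENSO amplitude to higher expected seasonal prediction skill in the argument above.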
The uncertainty in a climate forecast is quantified based on an ensemble of predictions. An ensemble of forecasts is an essential requirement for the following reason: the initial conditions (ICs) for different components of the earth system cannot be specified with perfect accuracy; therefore, the uncertainty in ICs, and its influence on the subsequent forecast, needs to be adequately sampled by an ensemble forecast approach (Kumar and Murtugudde 2013). Because of the nonlinear nature of the climate system, forecasts starting from small differences in ICs diverge increasingly (and hence have increasing predictive uncertainty) with increasing lead time. In practice, forecasts beyond 2–3 weeks exceed the range over which deterministic forecasts have been shown to be skillful, and have to be cast in terms of probabilities of outcomes that can also be quantified based on an ensemble prediction approach.
Given the importance of correctly anticipating the amplitude of ENSO for the coming seasons, an important aspect of ENSO prediction is to quantify how the spread (or the divergence) among initialized forecasts depends on lead time and, further, whether there is a systematic change in forecast spread from one ENSO event to another. Regarding year-to-year variations in the spread (or the forecast uncertainty) among an ensemble of ENSO forecasts, a question of importance is the following: Does the uncertainty in the amplitude of an ENSO prediction depend systematically on the mean amplitude of the ENSO SST forecast?
Previous results based on simpler models for ENSO prediction indicated that information about the uncertainty in the amplitude of an ENSO prediction (quantified as the spread among the ensemble forecasts) does not add systematically to its predictability, which was found to depend more on the amplitude of the ensemble mean of the predicted SSTs (Tang et al. 2005, 2008). If one summarizes the ensemble of predicted SSTs as a probability density function (PDF), then the results implied that the spread of the PDF of SSTs had little systematic contribution to SST predictability, which was primarily determined by the change in the mean of the PDF. Indirectly, this also implies that the uncertainty in the amplitude of ENSO did not depend systematically on the amplitude of the mean ENSO forecast (i.e., there is no systematic difference in the spread when the PDF of ENSO forecasts is centered around a neutral ENSO state or when it is centered around a warm or a cold ENSO state).
Based on an extensive set of hindcasts from an operational seasonal prediction system, the possibility of a systematic dependence of the uncertainty of ENSO prediction on the mean amplitude is further analyzed. A unique aspect of this analysis is the availability of a very large set of predicted SSTs for any target season from a comprehensive coupled prediction system, which is described next.
2. Data and analysis procedure
The retrospective predictions (referred to as predictions or hindcasts hereafter) examined in this work are from the National Centers for Environmental Prediction (NCEP) Climate Forecast System, version 2 (CFSv2; Kumar et al. 2012; Xue et al. 2013). The predictions are initialized in all calendar months from January 1982 to December 2010. For each month, predictions with ICs at 0000, 0600, 1200, and 1800 UTC were made every 5 days starting 1 January. In this analysis, we use predictions from 24 ICs in each month for subsequent target months and seasons. For example, for the target month of January, the 24 predictions are from ICs on 2, 7, 12, 17, 22, and 27 December at 0000, 0600, 1200, and 1800 UTC. For CFSv2 predictions, the ocean and atmosphere ICs are from the NCEP Climate Forecast System Reanalysis (CFSR; Saha et al. 2010; Xue et al. 2011). In this analysis, a zero-month lead refers to a prediction initialized in the previous month (e.g., a prediction for the month of January from ICs of December).
The atmospheric component of CFSv2 is at T126L64 resolution and is coupled to version 4 of the Modular Ocean Model (MOM4), as well as to a three-layer sea ice model and a four-layer land model. The ocean model has 40 levels in the vertical, extending to a maximum depth of 4737 m. The meridional resolution is 0.25° in the tropics, tapering to 0.5° northward of 10°N and southward of 10°S. Previous analyses have demonstrated that CFSv2 has good simulation and prediction skill for the SST anomaly (SSTA) in the tropical Pacific associated with ENSO (Zhu et al. 2012; Kim et al. 2012; Xue et al. 2013).
For validation, the monthly SST data are from version 2 of the NOAA optimally interpolated (OIv2) SST dataset at 1° × 1° resolution (Reynolds et al. 2002). The monthly mean Niño-3.4 index is the average of the SSTA in (5°S–5°N, 120°–170°W) for both OIv2 and CFSv2. Similar to Xue et al. (2013), two climatologies are used to compute the anomalies for both the CFSv2 and OIv2 SST, in order to eliminate the impact of a discontinuity in the initial ocean analysis and subsequent predictions around 1998–99 (Xue et al. 2011; Zhang et al. 2012). The first climatology is the average over January 1982–December 1998, and the second over January 1999–December 2010. The climatologies of CFSv2 also depend on the lead time. Monthly mean SST data, from both the CFSv2 predictions and OIv2, are averaged into 3-month seasonal means prior to their use in the various analyses.
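The split-climatology treatment described above can be sketched in a few lines of NumPy; the function name, array shapes, and break-year argument are illustrative and not taken from the CFSv2 processing code.

```python
import numpy as np

def anomalies_with_split_climatology(sst, years, break_year=1999):
    """Remove two separate climatologies: one for years before break_year
    (1982-1998 in the paper) and one from break_year on (1999-2010),
    mirroring the treatment used to avoid the ~1998/99 discontinuity.
    sst: array of shape (nyears, ...) of seasonal means for one target
    season and one lead time (climatologies are lead dependent)."""
    sst = np.asarray(sst, dtype=float)
    years = np.asarray(years)
    early = years < break_year          # first climatological period
    anom = np.empty_like(sst)
    anom[early] = sst[early] - sst[early].mean(axis=0)    # remove early climatology
    anom[~early] = sst[~early] - sst[~early].mean(axis=0)  # remove late climatology
    return anom
```

Because the climatologies are lead dependent, this function would be applied separately to each lead time's forecast time series.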
Statistical analyses used in this paper include (i) the standard deviation of the ensemble mean of predictions and (ii) the standard deviation of the departure of individual predictions from the ensemble mean. If F_ij is the predicted SST anomaly for a particular season in year i and for ensemble member j, the ensemble-mean predicted anomaly F_i for year i is

$$F_i = \frac{1}{M}\sum_{j=1}^{M} F_{ij} \qquad (1)$$
where M is the number of individual predictions in the ensemble forecasts. The standard deviation of the ensemble-mean predicted anomaly F_i is defined as

$$\sigma = \left[\frac{1}{N}\sum_{i=1}^{N} F_i^2\right]^{1/2} \qquad (2)$$
where N is the number of years in the prediction time series; σ is referred to as the signal associated with the predictions. The difference between an individual prediction and the corresponding ensemble mean,

$$F'_{ij} = F_{ij} - F_i \qquad (3)$$
and the standard deviation over all individual forecasts,

$$\sigma_i = \left[\frac{1}{M}\sum_{j=1}^{M} \left(F'_{ij}\right)^2\right]^{1/2} \qquad (4)$$
represents the uncertainty (or the noise) among the ensemble of forecasts for year i. We note that the uncertainty defined by Eq. (4) and the ensemble-mean forecast defined by Eq. (1) depend on the year i. The average spread over all predictions is the average of Eq. (4) over the index i. One can also define the total standard deviation for each forecast time series as

$$\sigma_j^{\mathrm{tot}} = \left[\frac{1}{N}\sum_{i=1}^{N} F_{ij}^2\right]^{1/2} \qquad (5)$$
For the CFSv2 prediction with M = 24, a better estimate of the total standard deviation is the average of Eq. (5) over the 24 different estimates. An additional computation in our analysis is the linear regression between either the predicted SST F_ij or the noise and the Niño-3.4 SST index. Once again, the regression between the predicted SST F_ij and Niño-3.4 can be computed over the 24 different forecast time series. We also compute the noise and signal after dividing the entire forecast time series into ENSO and non-ENSO years; for doing so, the average in Eq. (2) is taken over the subset of years that fall in each category to define the corresponding signal, and similarly the average of Eq. (4) is taken over the corresponding events in each category to define the measure of noise.
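The signal/noise decomposition in Eqs. (1)–(5) can be sketched with a minimal NumPy example; the function name, array shapes, and synthetic input are illustrative, not part of the CFSv2 processing code.

```python
import numpy as np

def signal_and_noise(F):
    """F: hindcast anomalies of shape (N_years, M_members), e.g. Niño-3.4
    for one target season and one lead time (illustrative shapes).
    Returns the signal, the per-year spread, and the member-averaged
    total standard deviation."""
    F = np.asarray(F, dtype=float)
    Fbar = F.mean(axis=1)                          # Eq. (1): ensemble mean per year
    signal = np.sqrt((Fbar ** 2).mean())           # Eq. (2): std of the ensemble mean
    dev = F - Fbar[:, None]                        # Eq. (3): member departures
    spread = np.sqrt((dev ** 2).mean(axis=1))      # Eq. (4): per-year uncertainty
    total = np.sqrt((F ** 2).mean(axis=0)).mean()  # Eq. (5), averaged over members
    return signal, spread, total
```

With M = 24 members and N = 29 years, `F` would have shape (29, 24) per target season and lead time.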
Figure 1 compares the standard deviation of the SSTA (contours) and the regressions of the SSTA onto the Niño-3.4 SST index (shading) in CFSv2 hindcasts at different lead times with their observational counterparts. To be on an equal footing with the observations, the model results are averages of the standard deviations for the individual forecast time series (24 in all) and, similarly, averages of the regressions based on the individual forecast time series. In general, the spatial pattern and amplitude of the standard deviation of hindcast SST correspond well with observations, with the largest amplitudes associated with ENSO variability (contours in Fig. 1). The main discrepancies in the hindcasts are (i) a smaller standard deviation in the eastern Pacific off the coast of South America and (ii) a larger standard deviation in the central Pacific compared to observations. Both are common biases shared by many coupled models (e.g., Li and Xie 2012). The westward extent of the ENSO-related standard deviation (indicated by the 0.5°C contour) is well replicated in the model.
Regression of SSTA onto Niño-3.4 SST index shows a horseshoe pattern in the Pacific with good correspondence between the model and observations (shading in Fig. 1). The largest positive regressions are collocated with the largest standard deviation of SSTs. Consistent with the standard deviation, ENSO regression in the eastern Pacific off the coast of South America is also weaker in the model forecasts than in the observation, suggesting a model bias in that the amplitude of SST variability in the eastern Pacific off the coast of South America is underestimated.
We next investigate the evolution of variability with increasing lead time, and of its signal and noise components, for the Niño-3.4 SST index (Fig. 2). As should be the case for an analysis based on a fixed target season, the total standard deviation of the Niño-3.4 SST index is nearly constant with lead time in CFSv2 (Fig. 2a). However, there is a small increase for 1–2-month lead hindcasts, which may imply model biases and/or an adjustment due to initial shock. Overall, the total standard deviation is slightly larger in the model than in the observations (the rightmost bar in Fig. 2a), suggesting that the amplitude of ENSO variability is slightly overestimated in the model forecasts.
The time evolution of noise in the Niño-3.4 SST index shows a steady, quasi-linear increase with increasing lead time. An increase in the standard deviation associated with the noise component is to be expected, as an ensemble of hindcasts starting from a small cloud of ICs will diverge increasingly at longer leads (Peng et al. 2011; Kumar and Murtugudde 2013). On the other hand, the standard deviation associated with the ensemble mean of the hindcasts (i.e., the signal) decreases slowly with lead time. This should also be the case, as for a fixed target season the ensemble mean of hindcasts at longer leads should asymptote toward the model's climatology (i.e., toward zero anomaly).
The various components of the hindcast standard deviation for the Niño-3.4 SST index therefore evolve with lead time in an expected manner: for an unbiased prediction system the total standard deviation should stay the same at different lead times, and because the uncertainty among different hindcasts increases with lead time, the standard deviation associated with the ensemble means should decrease to compensate. We point out that although the total standard deviations of the model and observations can be compared (Fig. 2a), the decomposition into noise and signal can only be done for the model hindcasts, owing to the availability of an ensemble of forecasts. It is the decomposition of the total standard deviation into signal and noise components that determines the level of predictability and prediction skill. Compared with the amplitude of the uncertainty (Fig. 2b), the amplitude of the signal is much larger, suggesting that the SNR, and hence the predictability of ENSO on the interannual time scale, is dominated by the ensemble-mean signal.
To quantify the possible dependence of noise on the strength of the Niño-3.4 SST index, Fig. 3 shows the scatter between the ensemble mean of hindcasts of the Niño-3.4 SST index (x axis) and the spread among ensemble members from different ICs (y axis) for various lead times. Consistent with increasing spread with lead time, the general tendency is for the amplitude of the spread to move upward along the y axis with lead time. No systematic relationship is apparent between the mean amplitude of the predicted Niño-3.4 SST index and its uncertainty. One exception is that for positive SST anomalies (El Niño events) there is a tendency for reduced spread. This tendency toward smaller uncertainty for positive values of the Niño-3.4 index may be associated with the fact that for large positive ENSO events the upper values of SST excursions are capped by a limit imposed by the thermostat mechanism (Sun and Liu 1996), constraining the uncertainty in the prediction.
An important fact to note in Fig. 3 is the difference in the range of scales on the x and y axes. While the variations in the ensemble mean of Niño-3.4 SST index predictions range from −2.0° to +2.0°C, the typical range of uncertainty for a particular lead time is ~0.3°C. Therefore, compared to year-to-year changes in the amplitude of the ensemble mean, year-to-year changes in uncertainty are much smaller, a point also noted by Tang et al. (2008). In Fig. 3, we note that the model reproduces the asymmetry between El Niño and La Niña, with a larger SSTA amplitude for the former than for the latter (Hoerling et al. 1997; Burgers and Stephenson 1999; Okumura et al. 2011; Kumar and Hu 2014).
Even though there does not seem to be a systematic relationship between the ensemble mean and the uncertainty, and although year-to-year variations in the uncertainty are much smaller than the corresponding variations in the mean signal, there is some variability in the amplitude of the uncertainty from one year to another. For 6-month lead hindcasts, for example, the amplitude of the uncertainty ranges from approximately 0.4° to 0.8°C. This event-to-event variability could be due to several reasons: seasonality, sampling error, biases in the forecast system, or it could be real. Scatterplots between the predicted amplitude of the Niño-3.4 SST index and its uncertainty for a particular target season (i.e., just the subset of points in Fig. 3 for that season) also show some year-to-year variability (not shown); therefore, the possibility that the variations in uncertainty in Fig. 3 are due to seasonality can be discounted.
Whether the change in the uncertainty of the Niño-3.4 SST index is due to sampling or is real can only be partially addressed, because of the limited sample size of 24 hindcasts for a given target season. This is because a statistically robust estimate of the population standard deviation requires a much larger sample size than is needed for estimating the population mean (Sardeshmukh et al. 2000). Nonetheless, there are several ways to address whether year-to-year variability in uncertainty is due to sampling or is real and, in particular, whether it depends systematically on the amplitude of the Niño-3.4 SST index. One approach is to separate the lead time evolution of the various components of variability for ENSO and non-ENSO events, and results for such composites are shown in Fig. 4. The layout of Fig. 4 is similar to that of Fig. 2, except that the standard deviation of the ensemble mean and the noise are computed separately for ENSO and non-ENSO years, where ENSO years are defined as those in which the absolute value of the predicted ensemble-mean Niño-3.4 SST index exceeds 0.5°C.
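The compositing criterion above can be sketched in a few lines; the function and variable names are illustrative, and the per-year spread is assumed to have been computed as in Eq. (4).

```python
import numpy as np

def composite_noise(nino34_mean, spread, threshold=0.5):
    """Split years into ENSO and non-ENSO categories using the predicted
    ensemble-mean Niño-3.4 index (|mean| > threshold, in degC) and average
    the per-year spread [Eq. (4)] within each category."""
    nino34_mean = np.asarray(nino34_mean, dtype=float)
    spread = np.asarray(spread, dtype=float)
    enso = np.abs(nino34_mean) > threshold   # ENSO years: warm or cold events
    return spread[enso].mean(), spread[~enso].mean()
```

Applying this at each lead time yields the ENSO and non-ENSO noise curves composited in Fig. 4.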
Because the composite is based on the magnitude of ENSO events itself, for the standard deviation of the ensemble mean there are obvious differences between the magnitude for ENSO and non-ENSO years. For noise, there is little difference in the magnitude between ENSO and non-ENSO events and both increase with an increasing lead time. This result may indicate that aggregated over a larger sample, changes in uncertainty between ENSO and non-ENSO events are small.
To discern systematic variations in the uncertainty of the SST prediction with the mean amplitude of the predicted Niño-3.4 SST index, one can also compute a linear regression between them. This analysis is similar to the one shown in Fig. 1, where SSTs were regressed against the Niño-3.4 SST index. Such a linear regression between the noise in the SST prediction and the absolute value of the Niño-3.4 index does not show any systematic spatial pattern (not shown). Unlike in Fig. 1, where the amplitude of the regression with the predicted ensemble-mean SSTA reaches 1°C per standard deviation of the Niño-3.4 SST index, the regression with the uncertainty has a much smaller amplitude, in the range from −0.1° to 0.1°C per standard deviation of the Niño-3.4 SST index. This is consistent with the much smaller range of year-to-year variations in uncertainty noted in Fig. 3.
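A regression of this kind can be sketched as a simple least-squares fit; the function name and inputs are illustrative, with the per-year spread again assumed to come from Eq. (4).

```python
import numpy as np

def noise_vs_amplitude_slope(nino34_abs, spread):
    """Least-squares slope of the per-year spread [Eq. (4)] against the
    absolute predicted Niño-3.4 amplitude; a slope near zero indicates no
    systematic dependence of uncertainty on ENSO amplitude."""
    slope, intercept = np.polyfit(np.asarray(nino34_abs, dtype=float),
                                  np.asarray(spread, dtype=float), 1)
    return slope, intercept
```

In the gridpoint analysis described above, the same fit would be repeated at each grid point to map the regression's spatial pattern.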
In the final analysis, we show the spatial patterns of the ensemble mean and the associated uncertainty composited over ENSO and non-ENSO events (defined in the same way as in Fig. 4). In Fig. 5, for predictions at increasing lead times, composites for ENSO (non-ENSO) events are shown at left (right). Comparing left and right in Figs. 5a–c, the results further confirm that the amplitude and spatial pattern of the uncertainty in the SST prediction have little dependence on the amplitude of the Niño-3.4 SST index. The correlations between the spatial patterns of uncertainty for ENSO and non-ENSO events at different lead times (indicated by the numerical value for the set of plots in each row) all exceed 0.96. Given that predictability and prediction skill are a function of the SNR (Kumar and Hoerling 2000), the results also indicate (i) much higher skill in the prediction of ENSO events than of non-ENSO events and (ii) a systematic dependence of predictability on the ensemble mean of the predicted ENSO anomaly.
4. Summary and discussion
The results based on the analysis of an extensive set of seasonal hindcasts indicate (i) little systematic dependence of the uncertainty in ENSO prediction on the predicted mean amplitude and (ii) year-to-year variations in uncertainty that are much smaller than the corresponding variations in the ensemble mean of the predictions. This is consistent with earlier analyses of ENSO predictions using simpler models (Tang et al. 2005, 2008), and also with results analyzing variations in atmospheric seasonal variability with ENSO amplitude (Kumar et al. 2000; Tippett et al. 2004; Peng and Kumar 2005; Jha and Kumar 2009).
In the context of seasonal predictability, the results imply that most of ENSO predictability resides in the shift of the PDF of ENSO SSTs (i.e., changes in the first moment of the PDF associated with the ensemble mean of the ENSO SST prediction) rather than in changes in the spread of the PDF. As similar sets of hindcasts are now routinely available for a large set of seasonal prediction systems (Graham et al. 2011; Kirtman et al. 2014), the results in this work need to be validated using other models.
One open aspect of this analysis is that there are some year-to-year variations in uncertainty, and although they are not related to the prediction of ENSO amplitude in a systematic manner, it is unclear whether such variations are due to sampling, are an artifact of the characteristics of the forecast system, or have a physical basis. If year-to-year variations in uncertainty have a physical basis, and if the physical reasons can be understood, then there would be a justification for incorporating them to modify the forecast probabilities associated with the seasonal prediction of ENSO beyond the information contained in the prediction of the mean amplitude alone. If, however, year-to-year variations in uncertainty are mostly a consequence of sampling, then the seasonal predictability of ENSO will be dominated by the information contained in the mean amplitude of the predicted SSTs. The results indicate that until the physical basis for year-to-year variations in the spread of ENSO hindcasts is resolved or better understood, basing forecast probabilities on the prediction of the mean amplitude of ENSO alone would be the prudent strategy.
Constructive comments by three anonymous reviewers were helpful in improving the final version of the manuscript.