Abstract

This work demonstrates the influence of the initial amplitude of the sea surface temperature anomaly (SSTA) associated with El Niño–Southern Oscillation (ENSO), which varies over its evolutionary phases, on the forecast skill of ENSO in retrospective predictions of the Climate Forecast System, version 2. The prediction skill varies with the phase of the ENSO cycle. For 0–6-month lead predictions, the averaged skill (linear correlation) of the Niño-3.4 index is in the range of 0.15–0.55 when the amplitude of the Niño-3.4 index is smaller than 0.5°C (e.g., the initial phase or neutral condition of ENSO) and 0.74–0.93 when the amplitude is larger than 0.5°C (e.g., the mature condition of ENSO). The dependence of the prediction skill of ENSO on its phase is linked to the variation of the signal-to-noise ratio (SNR). This variation is found to be mainly due to changes in the amplitude of the signal (the prediction of the ensemble mean) during different phases of the ENSO cycle, as the noise (the forecast spread among ensemble members), both in the Niño-3.4 region and in the whole Pacific, does not depend much on the Niño-3.4 amplitude. It is also shown that the spatial pattern of the unpredictable noise in the Pacific is similar to that of the predictable signal. These results imply that skillful prediction of the ENSO cycle, either at the initiation of an event or during the transition phase of the cycle, when the anomaly signal is weak and the SNR is small, is an inherent challenge.

1. Introduction

El Niño–Southern Oscillation (ENSO) is a naturally occurring fluctuation of the coupled ocean–atmosphere system and the leading mode of seasonal-to-interannual variability in the tropics (e.g., Sarachik and Cane 2010). ENSO is also the primary source for the predictable component of global atmospheric, oceanic, and terrestrial climate variations on seasonal and interannual time scales (Ropelewski and Halpert 1987; Glantz 2000; National Research Council 2010), including surface air temperature and precipitation in North America (Ropelewski and Halpert 1986) and sea surface temperature (SST) variability in the North Pacific (Wen et al. 2012; Hu et al. 2014b) and North Atlantic (Hu and Huang 2007; Hu et al. 2013a). Nevertheless, the limits of predictability of ENSO are still ambiguous and are a topic of continuing debate.

Chen et al. (2004) presented successful retrospective forecasts of the interannual climate fluctuations in the tropical Pacific Ocean for the period from 1857 to 2003 at lead times of up to 2 years, using a coupled ocean–atmosphere model. They argued that the evolution of El Niño is controlled more by self-sustaining internal dynamics than by stochastic forcing, so that coupled model prediction of El Niño depends more on the initial conditions (ICs) than on unpredictable atmospheric noise, leading to a long predictability horizon. On the other hand, skill estimates of predictions/hindcasts of ENSO from state-of-the-art climate models suggest much shorter predictability limits and, further, an appreciable seasonal dependence in predictability (Kumar and Hu 2014; Kumar et al. 2017). Shorter limits on ENSO predictability are also apparent in forecast plumes of ENSO at various operational centers, which depict considerable forecast divergence among ensemble members that start from small differences in ICs (e.g., see Figs. S1 and S2 in the online supplemental material). An improved understanding of the limits of predictability of ENSO is important for quantifying our ability to predict global climate variability and for managing user expectations about the future prospects of seasonal predictions.

Over the last few decades, significant progress has been achieved in operational ENSO prediction; however, prediction challenges remain (Barnston et al. 2012; Zheng et al. 2016; Zhang and Gao 2016). For example, ENSO prediction skill is lower in boreal spring and summer than in boreal autumn and winter, indicating difficulties in forecasting the initial or transition phases of ENSO (Kumar et al. 2017). In a recent example, the majority of ENSO prediction models were unable to successfully predict the follow-up La Niña in 2011/12 and the interrupted warm event in 2012 (http://iri.columbia.edu/climate/ENSO/currentinfo/archive/index.html; Zhang et al. 2013), a scenario that seems inconsistent with the general sense that the prediction skill of ENSO is high. For example, the overall correlation between the predicted and observed monthly mean Niño-3.4 index can reach 0.7–0.8 for 7–8-month lead predictions (Hu and Huang 2007; Jin et al. 2008; Barnston et al. 2012; Xue et al. 2013), much higher than the seasonal skill for precipitation and surface temperature over land (Peng et al. 2013; Zhu et al. 2013).

In addition to the dependence of forecast skill on the phase of the ENSO cycle (and on season), there are also appreciable interdecadal variations in the prediction skill of ENSO (Wang et al. 2010; Barnston et al. 2012). One hypothesis attributes the decreased skill of ENSO prediction since 2000 to the reduced amplitude of the SST anomaly (SSTA), which is associated with a weakening of the atmosphere–ocean coupling in the whole tropical Pacific (Hu et al. 2013b, 2017a). This hypothesis is supported by evidence based on signal-to-noise ratio (SNR) considerations and the relationship between the SNR and expected prediction skill: a large (small) SNR corresponds to high (low) values of different measures of prediction skill (Kumar and Hoerling 2000; Tang et al. 2004, 2005, 2008; Tang and Deng 2011; Kumar and Hu 2014).

Moreover, the forecast spread among individual models (or among individual ensemble members of a model) is quasi-independent of the amplitude of the ensemble-mean anomalies (e.g., Kumar and Hu 2014), implying that the interannual variation of the SNR (or of predictability) is mainly determined by the signal (ensemble mean) rather than by the noise (model spread) (Tang et al. 2005, 2008). Analyzing the standard deviation and the prediction skill over a 4-yr running window, Wang et al. (2010) also showed that the prediction skill was higher for both Niño-3.4 and global SSTA when the variance of the observed Niño-3.4 index was larger, confirming the link between prediction skill and ENSO (signal) intensity (Jin et al. 2008). These studies indicate that the lower prediction skill for ENSO during the initial or transition phase of the ENSO cycle in boreal spring could simply be an artifact of variations in the SNR over the ENSO cycle, particularly if the noise component does not vary appreciably during the cycle.

As the SNR is an important factor in determining variations in predictability and prediction skill, a comprehensive analysis of how the SNR, and the signal and noise separately, vary with the ENSO cycle is of importance. A similar analysis for variations of seasonal atmospheric and terrestrial anomalies in the context of ENSO SSTAs as a boundary forcing was done by Kumar et al. (2000), who examined the impact of SST forcing on different moments of the probability density function (PDF) of seasonal-mean atmospheric states based on an ensemble of atmospheric general circulation model (AGCM) simulations. They noted that the impact of interannual variations in SSTs (mainly ENSO) on the spread of the seasonal-mean atmospheric states (i.e., the second moment of the PDFs) over the Pacific–North American region during boreal winter was small (Kumar and Hoerling 2000; Kumar and Chen 2015; Jha et al. 2018). This conclusion was in contrast with the well-defined impact of the amplitude of ENSO SSTA on the first moment (the seasonal-mean anomaly) of the PDF of the seasonal-mean atmospheric state. For seasonal predictions of atmospheric anomalies, therefore, the results of Kumar and Hoerling (1998) and Kumar et al. (2000), further confirmed by the analyses of Peng and Kumar (2005), Jha and Kumar (2009), Peng et al. (2011), and Kumar et al. (2017) for other variables, imply that the dominant contribution to seasonal predictability and prediction skill from one year to another comes from the impact of SSTs on the first moment of the PDF, with little impact of SSTs on the second moment.

In the context of the prediction skill of ENSO as an initial-value problem, the question we investigate here is, How do the SNR and the prediction skill vary with the amplitude of ENSO (which in turn is also a function of the phase of the ENSO cycle)? Further, can the variations in the prediction skill of ENSO over its cycle also be explained simply by SNR considerations (in that the variations in the SNR are dominated by the variations in the signal component, with the noise component varying much less)? We demonstrate that this may indeed be the case. The analysis is based on an extensive hindcast dataset from a coupled seasonal forecast system, with an ensemble of forecasts available for each target season. We analyze the dependence of the prediction skill of ENSO on its amplitude (corresponding to the phase of the ENSO cycle). Furthermore, the variations in the forecast uncertainty, or spread, of the ENSO SSTA during different phases of the ENSO cycle are also studied. The rest of the paper is organized as follows: The data used in this work are introduced in section 2, the results are presented in section 3, and a summary and discussion are given in section 4.

2. Data

The retrospective predictions (also called hindcasts or forecasts hereafter) examined in this work are from the NCEP Climate Forecast System, version 2 (CFSv2; Kumar et al. 2012; Hu et al. 2013a; Xue et al. 2013; Saha et al. 2014). The predictions are initialized in all calendar months from January 1982 to December 2010. For each month, predictions with ICs at 0000, 0600, 1200, and 1800 UTC were produced every 5 days starting 1 January. In this analysis, we use predictions from 24 ICs in each month for the subsequent target seasons (Kumar and Hu 2014). Following a common convention in climate forecasting (e.g., Kumar et al. 2012), we refer to predictions initialized in the previous month as 0-month lead predictions. For example, for the 0-month lead prediction of January, the 24 monthly mean predictions are from ICs on 2, 7, 12, 17, 22, and 27 December at 0000, 0600, 1200, and 1800 UTC. The ocean and atmosphere ICs are from the NCEP Climate Forecast System Reanalysis (CFSR; Saha et al. 2010; Xue et al. 2011).
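
To make the lead-time convention concrete, the short sketch below enumerates the 24 ICs that feed a 0-month-lead monthly forecast under the rules just described (IC days every 5 days counted from 1 January, four cycles per day). The function name and structure are illustrative only, not part of CFSv2 itself.

```python
from datetime import date, timedelta

def ensemble_ics(year, target_month):
    """List the 24 ICs for a 0-month-lead forecast of `target_month`: all IC days
    falling in the preceding month (every 5 days counted from 1 January) at
    0000, 0600, 1200, and 1800 UTC."""
    # Month preceding the target month
    ic_year, ic_month = (year - 1, 12) if target_month == 1 else (year, target_month - 1)
    # IC days are every 5 days, starting 1 January of the IC year
    day, ics = date(ic_year, 1, 1), []
    while day.year == ic_year:
        if day.month == ic_month:
            ics.extend((day, hh) for hh in (0, 6, 12, 18))
        day += timedelta(days=5)
    return ics

# Example: the 0-month-lead January 1983 forecast uses
# 2, 7, 12, 17, 22, and 27 December 1982 at four cycles each (24 ICs)
print(len(ensemble_ics(1983, 1)), ensemble_ics(1983, 1)[:4])
```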

The atmospheric component of CFSv2 has a horizontal resolution of about 100 km × 100 km with 64 vertical layers (T126L64) and is coupled to version 4 of the Modular Ocean Model (MOM4), as well as a three-layer sea ice model and a four-layer land model. The ocean model has 40 levels in the vertical, extending to a maximum depth of 4737 m. Its horizontal resolution is 0.5° zonally and 0.25° meridionally in the tropics, tapering to a uniform 0.5° northward of 10°N and southward of 10°S.

In addition to the 9-month hindcasts during 1982–2010, forecasts of CFSv2 have been operational at NCEP since March 2011 (Saha et al. 2014). It has been demonstrated that CFSv2 has good simulation and prediction skill for SSTA in the tropical Pacific associated with ENSO (Zhu et al. 2012; Kim et al. 2012; Xue et al. 2013; Zheng et al. 2016). Compared with version 1 of CFS (Xue et al. 2013) and other ENSO forecast models (Zheng et al. 2016), CFSv2 prediction skills for SSTs in the tropical Pacific and for ENSO are comparable. Beyond the tropical Pacific, CFSv2 has also shown reasonable prediction skill for SSTA in the North Atlantic (Hu et al. 2013a) and the North Pacific (Wen et al. 2012; Hu et al. 2014b), as well as for the Asian summer and winter monsoon (e.g., Zuo et al. 2013; Jiang et al. 2013a,b; Ramu et al. 2016) and North American climate variability (e.g., Peng et al. 2013; Zhu et al. 2013).

For validation, the seasonal-mean SST data used in this work are from version 2 of the NOAA optimal interpolation (OIv2) SST dataset at 1° × 1° resolution (Reynolds et al. 2002). The monthly mean Niño-3.4 index is the average of the SSTA over 5°S–5°N, 120°–170°W for both OIv2 and the CFSv2 hindcasts. Following Xue et al. (2013), two climatologies are used to compute the anomalies of both the CFSv2-predicted and OIv2-analyzed SST, in order to eliminate the impact of a discontinuity in the initial ocean analysis and the subsequent predictions around 1998/99 (Xue et al. 2011; Kumar et al. 2012). The first climatology is the average over January 1982–December 1998, and the second over January 1999–December 2010. Also, all the monthly mean SST data, both from the CFSv2 predictions and from OIv2, are converted into 3-month seasonal means prior to the correlation, signal, and noise calculations. The CFSv2 prediction skill is defined as the linear correlation between the CFSv2-predicted and OIv2-analyzed 3-month-mean SSTA.
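
As an illustration of the anomaly and skill computations described above, here is a minimal Python sketch assuming simple 1-D monthly arrays. The split 1982–1998/1999–2010 climatologies follow the text, while the use of running (overlapping) 3-month means and the exact handling of missing data are assumptions of this sketch.

```python
import numpy as np

def split_climatology_anomaly(sst, years, months):
    """Remove two separate monthly climatologies (Jan 1982-Dec 1998 and
    Jan 1999-Dec 2010) to avoid the discontinuity around 1998/99.
    `sst`, `years`, and `months` are 1-D arrays of equal length."""
    anom = np.full(sst.shape, np.nan, dtype=float)
    for period in (years <= 1998, years >= 1999):
        for m in range(1, 13):
            sel = period & (months == m)
            anom[sel] = sst[sel] - sst[sel].mean()
    return anom

def three_month_mean(x):
    """Running (overlapping) 3-month mean of a monthly series -- an assumed
    implementation of the seasonal averaging described in the text."""
    return np.convolve(x, np.ones(3) / 3.0, mode="valid")

def prediction_skill(predicted, observed):
    """Skill = linear correlation between predicted and observed 3-month-mean SSTA."""
    return np.corrcoef(predicted, observed)[0, 1]
```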

3. Results

a. Prediction skills of ENSO and no-ENSO months

To analyze the dependence of the prediction skill of the ENSO cycle on its phases, we first divide the ENSO cycle, based on the observed (OIv2) Niño-3.4 index at the IC time of the forecast, into two groups: one with an amplitude larger than 0.5°C (called ENSO months) and another with an amplitude smaller than or equal to 0.5°C (called no-ENSO months). Based on this classification, 62% of the total months are ENSO months and 38% are no-ENSO months. We then compute the prediction skill of the Niño-3.4 index, as well as the signal and spread in the forecast Niño-3.4 index, for the two groups. Here, the forecast spread, representing the noise (or the uncertainty) in ENSO predictions, is defined as the standard deviation of the individual prediction members about the predicted ensemble-mean SSTA, denoted σ_noise, while the signal is defined as the standard deviation of the ensemble-mean SSTA, denoted σ_signal. For a forecast variable F_{m,n} with ensemble member m and year n,

 
$$\bar{F}_n = \frac{1}{M}\sum_{m=1}^{M} F_{m,n},$$

$$\bar{F} = \frac{1}{N}\sum_{n=1}^{N} \bar{F}_n,$$

$$\sigma_{\mathrm{signal}} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\bar{F}_n - \bar{F}\right)^2},$$

$$\sigma_{\mathrm{noise}} = \sqrt{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M}\left(F_{m,n} - \bar{F}_n\right)^2},$$

$$\mathrm{SNR} = \frac{\sigma_{\mathrm{signal}}}{\sigma_{\mathrm{noise}}},$$

where M = 24 (total ensemble members) and N = 29 (years).
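
As an illustration, here is a minimal numpy sketch of the signal, noise, and SNR computations defined above, applied to one category and one lead time; the array layout and function name are assumptions for this example, not part of the CFSv2 system.

```python
import numpy as np

def signal_noise_snr(F):
    """Signal, noise, and SNR for a hindcast array F of shape (M, N):
    M ensemble members by N years, for one lead time and one category
    (e.g., ENSO or no-ENSO months), following the definitions above."""
    ens_mean = F.mean(axis=0)                                # ensemble mean for each year
    clim = ens_mean.mean()                                   # mean over all years
    sigma_signal = np.sqrt(np.mean((ens_mean - clim) ** 2))  # std dev of the ensemble mean
    sigma_noise = np.sqrt(np.mean((F - ens_mean) ** 2))      # spread about the ensemble mean
    return sigma_signal, sigma_noise, sigma_signal / sigma_noise

# Synthetic example with M = 24 members and N = 29 years
rng = np.random.default_rng(0)
F = 1.2 * np.sin(np.linspace(0.0, 6.0, 29)) + 0.4 * rng.standard_normal((24, 29))
print(signal_noise_snr(F))
```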

The prediction skill of the Niño-3.4 index differs greatly between the two groups (Fig. 1). Here, the correlations are calculated using the mean and standard deviation within each category. The skill is much higher for the ENSO months than for the no-ENSO months, generally consistent with the argument of Wang et al. (2010) that high (low) ENSO prediction skill corresponds to large (small) ENSO amplitude. For 0–6-month leads, the skill is in the range of 0.14–0.54 for the no-ENSO group and 0.74–0.93 for the ENSO group. Interestingly, as shown in Fig. 2c, where the skill at each lead time is normalized by the 0-month-lead skill, the skill decreases faster with lead time for the no-ENSO group than for the ENSO group. This is consistent with the slope of the relationship between the expected maximum correlation and the SNR [see Fig. 2 of Kumar and Hoerling (2000) and Fig. 9 of Kumar et al. (2017)]. These results suggest that achieving high prediction skill may inherently be a challenging prospect during the initiation of an El Niño or a La Niña event, or during their transition phase, when the amplitude of the Niño-3.4 index is small at the initial time of the prediction. In contrast, CFSv2 better predicts the evolution of a pronounced anomaly when a large anomaly is already present at the initial time of the prediction.
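
A short sketch of the two skill computations just described (within-category standardized correlation and skill normalized by the 0-month-lead value) is given below, assuming simple 1-D numpy arrays; the function names are illustrative.

```python
import numpy as np

def categorical_correlation(pred, obs):
    """Correlation between predicted (ensemble mean) and observed Niño-3.4
    within one category (ENSO or no-ENSO months): both series are standardized
    with the mean and standard deviation of that category only."""
    p = (pred - pred.mean()) / pred.std()
    o = (obs - obs.mean()) / obs.std()
    return np.mean(p * o)

def normalized_skill(skill_by_lead):
    """Skill at each lead divided by the 0-month-lead skill (cf. Fig. 2c)."""
    skill_by_lead = np.asarray(skill_by_lead, dtype=float)
    return skill_by_lead / skill_by_lead[0]
```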

Fig. 1.

Dependence of prediction skills of CFSv2 predicted 3-month-mean Niño-3.4 index on lead time and Niño-3.4 index amplitude based on OIv2 SSTA. Prediction skill is computed based on all forecasts during January 1982–December 2010. Red (gray) bars represent the skill when the initial amplitudes of OIv2 |Niño-3.4 index| > 0.5°C (|Niño-3.4 index| ≤ 0.5°C). The rightmost bar is the average for all 0–6-month leads.

Fig. 2.

Dependence of (a) forecast noise (computed as the spread among ensemble members) and (b) SNR of CFSv2 predicted 3-month-mean Niño-3.4 index on lead time and Niño-3.4 index amplitude. The analysis is based on all forecasts during January 1982–December 2010. (c) As in Fig. 1, but for forecast skill relative to the forecast skill in 0-month lead. Red (gray) bars represent the skills for the initial amplitudes of OIv2 |Niño-3.4 index| > 0.5°C (|Niño-3.4 index| ≤ 0.5°C). The rightmost bar is the average for all 0–6-month leads.

The differences of the prediction skill between the ENSO and no-ENSO months are linked to the SNR differences (Fig. 2b). For example, consistent with the skill differences, SNR is much larger for the ENSO months than for the no-ENSO months. Consistent with decrease of prediction skill with lead time, SNR decreases as lead time increases. The decrease of SNR is faster for the ENSO months than for the no-ENSO months, possibly because the SNR at 0-month lead is much higher for the former than for the latter.

An interesting and important feature is the similarity in the amplitude of the noise between the ENSO and no-ENSO months at a given lead time, even though the noise in both groups increases with lead time (Fig. 2a). This indicates that the spread among individual forecasts (noise) is largely independent of the amplitude of ENSO (the signal). The low prediction skill during no-ENSO months, therefore, is almost solely due to the small amplitude of the SSTA (signal).

b. Prediction skills of the ENSO cycle

The dependence of the prediction skill on the amplitude of the Niño-3.4 index is further demonstrated by calculating the skill in different phases of the ENSO cycle (Fig. 3). The ENSO cycle is divided into 14 phases based on the observed Niño-3.4 index value and its tendency (see Table 1 for details). The x axis of Fig. 3 represents the ranges of the Niño-3.4 index, and the y axis shows the averaged SSTA (left axis) and the skill (right axis) at different lead times for each range of the Niño-3.4 index. The evolution of the skill with the ENSO cycle in Fig. 3 is consistent with the results shown in Fig. 1, confirming the dependence of the prediction skill on the amplitude of the Niño-3.4 index. Furthermore, the skill is less than 0.3 in the initiation (or transition) phase of an El Niño or a La Niña event. We should point out that this analysis does not account for the asymmetric evolution of the warm and cold phases of the ENSO cycle (Hoerling et al. 1997; Kessler 2002; Hu et al. 2014a, 2017b).
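
For illustration, a phase classification of this kind could be coded as below. The amplitude bin edges here are placeholders, not the actual ranges of Table 1 (which are not reproduced in this text), and the function name is hypothetical.

```python
import numpy as np

def enso_phase(nino34, tendency, edges=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
    """Assign one of 14 phases from the Niño-3.4 value (°C) and its tendency
    (rising if positive, falling otherwise): 7 amplitude bins x 2 tendencies.
    The bin `edges` are placeholders, not the actual Table 1 ranges."""
    amp_bin = int(np.digitize(nino34, edges))   # 0..6: which amplitude range
    trend = "rising" if tendency > 0 else "falling"
    return f"bin{amp_bin}_{trend}"

# Example: a moderate warm anomaly that is still growing
print(enso_phase(0.8, tendency=+0.2))   # -> "bin4_rising"
```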

Fig. 3.

Dependence of prediction skill of CFSv2 predicted 3-month-mean Niño-3.4 index on lead time and 14 ENSO phases classified based on OIv2 SSTA and its tendency (see Table 1 for details). Bars represent the Niño-3.4 index based on OIv2 SSTA. The blue dashed lines with different marks are the prediction skill at different lead months, and the black solid line is the mean skill averaged over all lead times. The amplitude of the Niño-3.4 index is on the left y axis, and the prediction skill is on the right y axis. The x axis is the range of the Niño-3.4 index for the different phases of ENSO. The analysis is based on all forecasts during January 1982–December 2010.

Table 1.

Classification of the 14 ENSO phases based on the OIv2 SSTA range and tendency. The numbers in the rightmost column show the fraction of months in each phase relative to the total number of months in all phases.

The variations in the amplitude of the ensemble-mean forecast and of the spread among the forecasts with the amplitude of ENSO are illustrated in Fig. 4. Consistent with the results shown in Fig. 2, while the forecast ensemble mean evolves with the ENSO cycle through its different phases, the forecast spread range (shading in Fig. 4) is almost constant across the phases, implying that the forecast spread is quasi-independent of the amplitude of the predicted ensemble-mean anomalies. Here, the forecast spread range shown in Fig. 4 (shading) is defined as one standard deviation of the spread. As a result, the variations of the SNR in the various phases of the ENSO cycle (bars in Fig. 4) are determined by the ensemble-mean anomalies (green curve in Fig. 4). Thus, the SNR (predictability) and prediction skill are low in the initial or transition phases of the ENSO cycle and high in its mature phase (bars in Fig. 4). We note that the constancy of the noise across different phases of the ENSO cycle has important implications for predictability, which will be further discussed in section 4.

Fig. 4.

Dependence of the ensemble mean (green line) and the spread range superimposed on the forecast ensemble mean (shading) of the CFSv2 predicted 3-month-mean Niño-3.4 index on the 14 ENSO phases classified based on OIv2 SSTA and its tendency (see Table 1 for details). The spread range (shading) refers to one standard deviation of the spread. Both the ensemble mean and the spread are averages over 0–6-month lead hindcasts. Bars represent the corresponding SNR. The amplitude of the Niño-3.4 index is on the left y axis, and the SNR is on the right y axis. The x axis is the range of the Niño-3.4 index for the different phases of the ENSO cycle. The analysis is based on all forecasts during January 1982–December 2010.

c. Spatial distribution patterns of noise and signal variations

The independence of the SSTA noise from the amplitude of the observed Niño-3.4 index also holds over the whole Pacific Ocean. For example, both the spatial pattern and the amplitude of the noise in the Pacific are nearly identical at a given lead time for the ENSO-month and no-ENSO-month composites (Fig. 5). The close resemblance of the spatial patterns of noise is indicated by linear pattern correlation coefficients of 0.97–0.98 over the domain displayed in Fig. 5. The spatial distributions show that large noise is mainly located in the eastern tropical Pacific, the North Pacific, and the South Pacific, while small noise is found in the western tropical Pacific and the subtropical North and South Pacific. The noise distribution pattern changes little with increasing lead time.
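
A minimal sketch of such a pattern correlation between two noise maps is given below; whether the published values used centering or area weighting is not stated, so this unweighted, centered version is an assumption.

```python
import numpy as np

def pattern_correlation(map_a, map_b):
    """Centered linear pattern correlation between two 2-D noise maps (e.g.,
    the ENSO-month and no-ENSO-month composites in Fig. 5). NaNs (e.g., land
    points) are excluded; no area weighting is applied in this simple version."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    ok = np.isfinite(a) & np.isfinite(b)
    a = a[ok] - a[ok].mean()
    b = b[ok] - b[ok].mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))
```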

Fig. 5.

Noise (standard deviation of the spread among ensemble members) in CFSv2 predicted 3-month-mean SSTA (°C) for lead times of (a) 0, (b) 2, (c) 4, and (d) 6 months. (left) The noise for OIv2 |Niño-3.4 index| > 0.5°C, and (right) the noise for |Niño-3.4 index| ≤ 0.5°C.

In addition to the similarity in the spatial distribution of noise between the ENSO and no-ENSO months, the leading pattern of the noise variability in the tropical Pacific is itself similar to that of the signal. This is demonstrated with an empirical orthogonal function (EOF) analysis of the interannual variability of the ensemble mean and of the interannual variability of the departures of the forecasts from the ensemble mean. The EOF analysis is based on the covariance matrix over the global domain. For the noise variability, to have the same data length as the signal variability in the EOF calculation, the departure of a randomly selected ensemble member from the ensemble mean is used. Once the leading EOF pattern of the noise variability is obtained, the departures of each of the remaining 23 ensemble members from the ensemble mean are projected onto the noise EOF pattern to obtain a noise time series for each forecast member. This procedure is repeated for forecasts at different lead times.
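
The following numpy sketch illustrates this procedure (SVD-based EOF of the anomaly matrix, no area weighting); the array shapes and function names are assumptions for this example.

```python
import numpy as np

def leading_eof(field):
    """Leading EOF of `field` with shape (time, space): SVD of the time-mean-
    removed anomaly matrix. Returns the unit-norm spatial pattern and PC1."""
    anom = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    pattern = vt[0]                    # EOF1 spatial pattern
    pc = anom @ pattern                # PC1 time series
    return pattern, pc

def noise_eof_and_projections(members, ens_mean, seed=0):
    """EOF1 of the departure of one randomly chosen member from the ensemble
    mean, then projection of the remaining members' departures onto it.
    `members` has shape (M, years, points); `ens_mean` has shape (years, points)."""
    rng = np.random.default_rng(seed)
    pick = rng.integers(members.shape[0])          # randomly selected member
    pattern, _ = leading_eof(members[pick] - ens_mean)
    departures = np.delete(members, pick, axis=0) - ens_mean[None]
    pcs = departures @ pattern                     # noise time series, shape (M-1, years)
    return pattern, pcs

# The signal EOF would simply be: leading_eof(ens_mean)
```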

Figure 6 shows the dominant EOF patterns of the variability of the ensemble mean and of the noise for 6-month lead forecasts. The spatial patterns of the leading modes of the signal (Fig. 6a) and the noise (Fig. 6b) strongly resemble each other and are similar to the spatial pattern of ENSO variability. This is confirmed by the time series of EOF1 for the signal (black curve in Fig. 6c), which closely follows the interannual variability of ENSO [see Fig. 18 of Xue et al. (2011)]. Although the variance explained by EOF1 of the noise is small on a global basis, it is appreciable locally in the tropical Pacific. For example, in the central and eastern tropical Pacific, EOF1 of the noise (Fig. 6b) explains 20% of the total variance, while EOF1 of the ensemble mean (Fig. 6a) explains 60% (see Fig. S3). The corresponding second modes mainly reflect off-equatorial variability (not shown) and explain smaller fractions of variance (e.g., 9% and 4% of the variance of the signal and of the ensemble spread for 6-month lead forecasts, respectively), and thus are not discussed here. The similarity of the signal and noise patterns in Fig. 6 suggests that departures of individual forecasts from the ensemble mean are mostly reflected in the amplitude of the predicted ENSO from one forecast to another (i.e., some forecasts have a stronger anomaly than the ensemble mean while others are weaker).

Fig. 6.

EOF1 of (a) ensemble mean and (b) spread, and (c) PC1 of ensemble mean (black curve) and spread (dots) for 6-month lead forecasts. The red curve is the variance of spread PC1 normalized by its corresponding climatological variance.

From Fig. 6c, we can also see that the corresponding PC1 of the spread (dots) is largely random. The randomness of the spread PC1 is confirmed by the fact that its variance (red curve in Fig. 6c) has no systematic relationship with the PC1 of the ensemble mean (black curve in Fig. 6c). This is consistent with the conclusion that the ensemble member spread (noise) is independent of the ensemble mean (signal). The results are similar for forecasts at other lead times (see Fig. S4 for the 3-month lead).

4. Summary and discussion

This work investigates the quantitative impact of ENSO phase and amplitude on the prediction skill of ENSO in retrospective predictions of CFSv2. We note that the prediction skill varies with the phase of the ENSO cycle. The forecast skill (linear correlation) of the Niño-3.4 index is in the range of 0.15–0.55 for the no-ENSO months and 0.74–0.93 for the ENSO months for 0–6-month leads. The differences in prediction skill are linked to SNR differences: the SNR is much larger for the ENSO months than for the no-ENSO months, resulting in higher predictability for the former than for the latter. The variation of the SNR with the ENSO cycle is mainly due to the change in the signal, while the noise component is largely independent of the amplitude of the SSTA both in the Niño-3.4 region and in the whole Pacific. Moreover, the spatial patterns of the dominant modes of the signal and noise variability strongly resemble each other, and both are similar to the spatial pattern of ENSO.

The constancy of the noise and the spatial similarity between the predictable signal and the noise can be used to explain the dependence of the correlation skill on the phase of the ENSO cycle (Fig. 3). The constancy of the noise and its spatial structure mean that individual forecasts are scattered around the ensemble mean. The observed anomaly, on the other hand, can be regarded as one member of the forecast ensemble. In the transition phase of the ENSO cycle, or at the initiation of an ENSO event, because of the amplitude of the noise and the small predictable signal, the observations could have either the same sign as the ensemble mean or the opposite sign. This is illustrated in the plume diagrams of Niño-3.4 SST predictions for 2015 (Fig. S1) and for 2017 (Fig. S2). For forecasts during 2015 (a strong El Niño event), because of the large amplitude of the ensemble-mean anomaly, all possible forecast outcomes have the same sign. For 2017, however, because of the smaller amplitude of the ensemble-mean anomaly, some forecasts can also have an anomaly of opposite sign. In a scenario similar to the 2017 case, when the correlation is computed over a large number of weak cases, some of the observed events would have the same sign as the ensemble-mean prediction while others would have the opposite sign, resulting in a small correlation and low predictability. In the mature phase of an ENSO event, however, the observed ENSO anomalies, even in the presence of noise, would have the same sign as the ensemble mean of the forecasts (Fig. S1), resulting in a large correlation and high predictability.

The dependence of the prediction skill on the ENSO cycle implies that the relatively high skill of operational ENSO prediction mainly comes from the skill during ENSO events with a large-amplitude Niño-3.4 index, when strong anomaly signals in the ocean and atmosphere already exist at the initial time of the prediction. Therefore, the prediction skill of ENSO on seasonal and interseasonal time scales in state-of-the-art coupled general circulation models may be largely due to the success in nowcasting large anomalies or in forecasting the subsequent evolution of ENSO events that are close to their mature phase. On the other hand, it is an inherent challenge for CFSv2 (and other models) to predict the initiation of an El Niño or a La Niña event, or their transition phase, because during those periods the signal to be predicted is small while the noise component in the forecast remains relatively large.

As the prediction skill of ENSO and its connection with the phase of the ENSO cycle may depend on the model, it is necessary to compare the results of the present work with those from other seasonal prediction systems. For example, based on optimal error analysis, Tang and Deng (2011) noted that predictions initialized from a large-amplitude SSTA have smaller error growth; this conclusion differs from the analysis presented here, which shows that the model forecast spread (noise) is quasi-independent of the amplitude of the SSTA (signal).

Acknowledgments

The authors appreciate the constructive comments and insightful suggestions as well as many detailed corrections from three reviewers. B. Huang is supported by grants from the NSF (AGS-1338427), NASA (NNX14AM19G), and NOAA (NA14OAR4310160 and NA17OAR4310144). The scientific results and conclusions, as well as any view or opinions expressed herein, are those of the authors and do not necessarily reflect the views of NWS, NOAA, or the Department of Commerce.

REFERENCES

Barnston, A. G., M. K. Tippett, M. L. L'Heureux, S. Li, and D. G. DeWitt, 2012: Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Amer. Meteor. Soc., 93, 631–651, https://doi.org/10.1175/BAMS-D-11-00111.1.

Chen, D., M. A. Cane, A. Kaplan, S. E. Zebiak, and D. J. Huang, 2004: Predictability of El Niño over the past 148 years. Nature, 428, 733–736, https://doi.org/10.1038/nature02439.

Glantz, M. H., 2000: Currents of Change: Impacts of El Niño and La Niña on Climate and Society. Cambridge University Press, 266 pp.

Hoerling, M. P., A. Kumar, and M. Zhong, 1997: El Niño, La Niña, and the nonlinearity of their teleconnections. J. Climate, 10, 1769–1786, https://doi.org/10.1175/1520-0442(1997)010<1769:ENOLNA>2.0.CO;2.

Hu, Z.-Z., and B. Huang, 2007: The predictive skill and the most predictable pattern in the tropical Atlantic: The effect of ENSO. Mon. Wea. Rev., 135, 1786–1806, https://doi.org/10.1175/MWR3393.1.

Hu, Z.-Z., A. Kumar, B. Huang, W. Wang, J. Zhu, and C. Wen, 2013a: Prediction skill of monthly SST in the North Atlantic Ocean in NCEP Climate Forecast System version 2. Climate Dyn., 40, 2745–2759, https://doi.org/10.1007/s00382-012-1431-z.

Hu, Z.-Z., A. Kumar, H.-L. Ren, H. Wang, M. L'Heureux, and F.-F. Jin, 2013b: Weakened interannual variability in the tropical Pacific Ocean since 2000. J. Climate, 26, 2601–2613, https://doi.org/10.1175/JCLI-D-12-00265.1.

Hu, Z.-Z., A. Kumar, Y. Xue, and B. Jha, 2014a: Why were some La Niñas followed by another La Niña? Climate Dyn., 42, 1029–1042, https://doi.org/10.1007/s00382-013-1917-3.

Hu, Z.-Z., A. Kumar, B. Huang, J. Zhu, and Y. Guan, 2014b: Prediction skill of North Pacific variability in NCEP Climate Forecast System version 2: Impact of ENSO and beyond. J. Climate, 27, 4263–4272, https://doi.org/10.1175/JCLI-D-13-00633.1.

Hu, Z.-Z., A. Kumar, B. Huang, J. Zhu, and H.-L. Ren, 2017a: Interdecadal variations of ENSO around 1999/2000. J. Meteor. Res., 31, 73–81, https://doi.org/10.1007/s13351-017-6074-x.

Hu, Z.-Z., A. Kumar, B. Huang, J. Zhu, R.-H. Zhang, and F.-F. Jin, 2017b: Asymmetric evolution of El Niño and La Niña: The recharge/discharge processes and role of the off-equatorial sea surface height anomaly. Climate Dyn., 49, 2737–2748, https://doi.org/10.1007/s00382-016-3498-4.

Jha, B., and A. Kumar, 2009: A comparative analysis of change in the first and second moment of the PDF of seasonal means with ENSO SSTs. J. Climate, 22, 1412–1423, https://doi.org/10.1175/2008JCLI2495.1.

Jha, B., A. Kumar, and Z.-Z. Hu, 2018: An update on the estimate of predictability of seasonal mean atmospheric variability using North American Multi-Model Ensemble. Climate Dyn., https://doi.org/10.1007/s00382-016-3217-1.

Jiang, X., S. Yang, Y. Li, A. Kumar, X. Liu, Z. Zuo, and B. Jha, 2013a: Seasonal-to-interannual prediction of the Asian summer monsoon in the NCEP Climate Forecast System version 2. J. Climate, 26, 3708–3727, https://doi.org/10.1175/JCLI-D-12-00437.1.

Jiang, X., S. Yang, Y. Li, A. Kumar, W. Wang, and Z. Gao, 2013b: Dynamical prediction of the East Asian winter monsoon by the NCEP Climate Forecast System. J. Geophys. Res. Atmos., 118, 1312–1328, https://doi.org/10.1002/jgrd.50193.

Jin, E. K., and Coauthors, 2008: Current status of ENSO prediction skill in coupled ocean–atmosphere models. Climate Dyn., 31, 647–664, https://doi.org/10.1007/s00382-008-0397-3.

Kessler, W. S., 2002: Is ENSO a cycle or a series of events? Geophys. Res. Lett., 29, 2125, https://doi.org/10.1029/2002GL015924.

Kim, H.-M., P. J. Webster, and J. A. Curry, 2012: Seasonal prediction skill of ECMWF System 4 and NCEP CFSv2 retrospective forecast for the Northern Hemisphere winter. Climate Dyn., 39, 2957–2973, https://doi.org/10.1007/s00382-012-1364-6.

Kumar, A., and M. P. Hoerling, 1998: Annual cycle of Pacific–North American seasonal predictability associated with different phases of ENSO. J. Climate, 11, 3295–3308, https://doi.org/10.1175/1520-0442(1998)011<3295:ACOPNA>2.0.CO;2.

Kumar, A., and M. P. Hoerling, 2000: Analysis of a conceptual model of seasonal climate variability and implications for seasonal prediction. Bull. Amer. Meteor. Soc., 81, 255–264, https://doi.org/10.1175/1520-0477(2000)081<0255:AOACMO>2.3.CO;2.

Kumar, A., and Z.-Z. Hu, 2014: How variable is the uncertainty in ENSO sea surface temperature prediction? J. Climate, 27, 2779–2788, https://doi.org/10.1175/JCLI-D-13-00576.1.

Kumar, A., and M. Chen, 2015: Inherent predictability, requirements on ensemble size, and complementarity. Mon. Wea. Rev., 143, 3192–3203, https://doi.org/10.1175/MWR-D-15-0022.1.

Kumar, A., A. G. Barnston, P. Peng, M. P. Hoerling, and L. Goddard, 2000: Changes in the spread of the variability of the seasonal mean atmospheric states associated with ENSO. J. Climate, 13, 3139–3151, https://doi.org/10.1175/1520-0442(2000)013<3139:CITSOT>2.0.CO;2.

Kumar, A., M. Chen, L. Zhang, W. Wang, Y. Xue, C. Wen, L. Marx, and B. Huang, 2012: An analysis of the nonstationarity in the bias of sea surface temperature forecasts for the NCEP Climate Forecast System (CFS) version 2. Mon. Wea. Rev., 140, 3003–3016, https://doi.org/10.1175/MWR-D-11-00335.1.

Kumar, A., Z.-Z. Hu, B. Jha, and P. Peng, 2017: Estimating ENSO predictability based on multi-model hindcasts. Climate Dyn., 48, 39–51, https://doi.org/10.1007/s00382-016-3060-4.

National Research Council, 2010: Assessment of Intraseasonal to Interannual Climate Prediction and Predictability. National Academies Press, 192 pp.

Peng, P., and A. Kumar, 2005: A large ensemble analysis of the influence of tropical SSTs on seasonal atmospheric variability. J. Climate, 18, 1068–1085, https://doi.org/10.1175/JCLI-3314.1.

Peng, P., A. Kumar, and W. Wang, 2011: An analysis of seasonal predictability in coupled model forecasts. Climate Dyn., 36, 637–648, https://doi.org/10.1007/s00382-009-0711-8.

Peng, P., A. G. Barnston, and A. Kumar, 2013: A comparison of skill between two versions of the NCEP Climate Forecast System (CFS) and CPC's operational short-lead seasonal outlooks. Wea. Forecasting, 28, 445–462, https://doi.org/10.1175/WAF-D-12-00057.1.

Ramu, D. A., and Coauthors, 2016: Indian summer monsoon rainfall simulation and prediction skill in the CFSv2 coupled model: Impact of atmospheric horizontal resolution. J. Geophys. Res. Atmos., 121, 2205–2221, https://doi.org/10.1002/2015JD024629.

Reynolds, R. W., N. A. Rayner, T. M. Smith, D. C. Stokes, and W. Wang, 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609–1625, https://doi.org/10.1175/1520-0442(2002)015<1609:AIISAS>2.0.CO;2.

Ropelewski, C. F., and M. S. Halpert, 1986: North American precipitation and temperature patterns associated with the El Niño/Southern Oscillation (ENSO). Mon. Wea. Rev., 114, 2352–2362, https://doi.org/10.1175/1520-0493(1986)114<2352:NAPATP>2.0.CO;2.

Ropelewski, C. F., and M. S. Halpert, 1987: Global and regional scale precipitation patterns associated with the El Niño/Southern Oscillation. Mon. Wea. Rev., 115, 1606–1626, https://doi.org/10.1175/1520-0493(1987)115<1606:GARSPP>2.0.CO;2.

Saha, S., and Coauthors, 2010: The NCEP Climate Forecast System Reanalysis. Bull. Amer. Meteor. Soc., 91, 1015–1057, https://doi.org/10.1175/2010BAMS3001.1.

Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.

Sarachik, E. S., and M. A. Cane, 2010: The El Niño–Southern Oscillation Phenomenon. Cambridge University Press, 384 pp.

Tang, Y., and Z. Deng, 2011: Bred vector and ENSO predictability in a hybrid coupled model during the period 1881–2000. J. Climate, 24, 298–314, https://doi.org/10.1175/2010JCLI3491.1.

Tang, Y., R. Kleeman, and A. M. Moore, 2004: A simple method for estimating variations in the predictability of ENSO. Geophys. Res. Lett., 31, L17205, https://doi.org/10.1029/2004GL020673.

Tang, Y., R. Kleeman, and A. M. Moore, 2005: Reliability of ENSO dynamical predictions. J. Atmos. Sci., 62, 1770–1791, https://doi.org/10.1175/JAS3445.1.

Tang, Y., H. Lin, and A. M. Moore, 2008: Measuring the potential predictability of ensemble climate predictions. J. Geophys. Res., 113, D04108, https://doi.org/10.1029/2007JD008804.

Wang, W., M. Chen, and A. Kumar, 2010: An assessment of the CFS real-time seasonal forecasts. Wea. Forecasting, 25, 950–969, https://doi.org/10.1175/2010WAF2222345.1.

Wen, C., Y. Xue, and A. Kumar, 2012: Seasonal prediction of North Pacific SSTs and PDO in the NCEP CFS hindcasts. J. Climate, 25, 5689–5710, https://doi.org/10.1175/JCLI-D-11-00556.1.

Xue, Y., B. Huang, Z.-Z. Hu, A. Kumar, C. Wen, D. Behringer, and S. Nadiga, 2011: An assessment of oceanic variability in the NCEP Climate Forecast System Reanalysis. Climate Dyn., 37, 2511–2539, https://doi.org/10.1007/s00382-010-0954-4.

Xue, Y., M. Chen, A. Kumar, Z.-Z. Hu, and W. Wang, 2013: Prediction skill and bias of tropical Pacific sea surface temperatures in the NCEP Climate Forecast System version 2. J. Climate, 26, 5358–5378, https://doi.org/10.1175/JCLI-D-12-00600.1.

Zhang, R.-H., and C. Gao, 2016: The IOCAS intermediate coupled model (IOCAS ICM) and its real-time predictions of the 2015–2016 El Niño event. Sci. Bull., 61, 1061–1070, https://doi.org/10.1007/s11434-016-1064-4.

Zhang, R.-H., F. Zheng, J. Zhu, and Z. Wang, 2013: A successful real-time forecast of the 2010-11 La Niña event. Sci. Rep., 3, 1108, https://doi.org/10.1038/srep01108.

Zheng, Z., Z.-Z. Hu, and M. L'Heureux, 2016: Predictable components of ENSO evolution in real-time multi-model predictions. Sci. Rep., 6, 35909, https://doi.org/10.1038/srep35909.

Zhu, J., B. Huang, L. Marx, J. L. Kinter III, M. A. Balmaseda, R.-H. Zhang, and Z.-Z. Hu, 2012: Ensemble ENSO hindcasts initialized from multiple ocean analyses. Geophys. Res. Lett., 39, L09602, https://doi.org/10.1029/2012GL051503.

Zhu, J., B. Huang, Z.-Z. Hu, J. L. Kinter III, and L. Marx, 2013: Predicting US summer precipitation using NCEP Climate Forecast System version 2 initialized by multiple ocean analyses. Climate Dyn., 41, 1941–1954, https://doi.org/10.1007/s00382-013-1785-x.

Zuo, Z., S. Yang, Z.-Z. Hu, R. Zhang, W. Wang, B. Huang, and F. Wang, 2013: Predictable patterns and predictive skills of monsoon precipitation in Northern Hemisphere summer in NCEP CFSv2 reforecasts. Climate Dyn., 40, 3071–3088, https://doi.org/10.1007/s00382-013-1772-2.

Footnotes

Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JCLI-D-18-0285.s1.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).
