An Analysis of the Temporal Evolution of ENSO Prediction Skill in the Context of the Equatorial Pacific Ocean Observing System

Arun Kumar, Mingyue Chen, and Yan Xue
Climate Prediction Center, National Centers for Environmental Prediction, College Park, Maryland

David Behringer
Environmental Modeling Center, National Centers for Environmental Prediction, College Park, Maryland

Abstract

Subsurface ocean observations in the equatorial tropical Pacific Ocean increased dramatically starting in the early 1990s because of the completion of the TAO moored array and a steady increase in Argo floats. This analysis explores whether the steady increase in ocean observations can be discerned as improvements in the skill of predicting sea surface temperature (SST) variability associated with El Niño–Southern Oscillation (ENSO). The analysis is based on the time evolution since 1982 of the skill of predicting SSTs in the equatorial tropical Pacific with a seasonal prediction system. It is found that for forecasts up to a 6-month lead time, a clear fingerprint of increases in subsurface ocean observations is not readily apparent in the time evolution of prediction skill, which is dominated much more by the signal-to-noise characteristics of the SSTs to be predicted. Finding no clear relationship between an increase in ocean observations and the prediction skill of SSTs, various possibilities for why this may be so are discussed. The discussion is intended to motivate further exploration of the tropical Pacific observing system, its influence on the skill of ENSO prediction, and the capabilities of the current generation of coupled models and ocean data assimilation systems to take advantage of ocean observations.

Corresponding author address: Dr. Arun Kumar, 5830 University Research Court, College Park, MD 20740. E-mail: arun.kumar@noaa.gov


1. Introduction

For the current generation of dynamical coupled seasonal forecast systems, retrospective forecasts (also referred to as hindcasts) are routinely made starting from the early 1980s. The availability of hindcasts allows us to assess the prediction skill of sea surface temperature (SST) anomalies in the equatorial tropical Pacific (Jin et al. 2008; Stockdale et al. 2011; Xue et al. 2013), a region dominated by El Niño–Southern Oscillation (ENSO) SST variability. The importance of ENSO for global climate variability has been widely recognized and provides the basis for the seasonal prediction efforts that are currently operational at multiple centers (Graham et al. 2011; Peng et al. 2013).

Because the memory of the coupled system resides in the ocean, the ocean component of coupled seasonal forecast systems is typically initialized from an ocean analysis estimated based on ocean data assimilation (ODA) systems (Balmaseda et al. 2009; Zhu et al. 2012). In an ODA system, surface and subsurface ocean observations are merged with a guess field of the ocean state that is generated by the forward integration of the ocean model starting from the previous analysis. Merging of observations with the forecast guess provides an observational constraint and corrects for errors in the guess field that are introduced because of upper limits in the accuracy of predictions and biases in the ocean models and surface forcing.

Along with advances in ocean models and ODAs that are expected to improve prediction skill, the number of observations in the equatorial tropical Pacific has been steadily increasing over the years. This can be seen in the time series of the total number of observations in the tropical Pacific (Fig. 1, black line). It is reasonable to assume that an increase in ocean observations, by providing additional constraints, would also lead to improvements in the ocean analysis and prediction skill.

Fig. 1. Time evolution of the number of temperature profiles per month in the equatorial Pacific Ocean. Different lines correspond to different observing systems: XBT (blue line), TAO (red line), Argo (green line), and total (black line).

Given this evolution of the ocean observing system, a natural question is whether improvements in the observing system can also be seen as systematic improvements in the prediction skill of ENSO. An increase in subsurface ocean observations in the equatorial tropical Pacific is of particular relevance for ENSO prediction because it is the ocean heat content that provides the memory for skill in the long-range prediction of ENSO (Meinen and McPhaden 2000).

Based on hindcasts from a seasonal prediction system, we analyze to what extent the temporal evolution of SST prediction skill in the equatorial Pacific can be related to the corresponding evolution in ocean observations. A complicating factor in discerning the fingerprint of an increase in ocean observations on prediction skill is the considerable interannual and decadal variability in ENSO, which can also result in variations in prediction skill. We demonstrate that such variations in prediction skill indeed occur and can be related to a basic constraint that signal and noise place on skill measures. Finding no clear relationship between skill and the number of ocean observations in the equatorial tropical Pacific, we discuss various possibilities that could either mask this relationship or provide a physical basis for why our attempt to relate the state of the tropical Pacific Ocean observing system to the skill of ENSO prediction was unsuccessful.

2. Data and analysis procedure

Hindcast data to assess skill of ENSO prediction are from the National Centers for Environmental Prediction (NCEP) Climate Forecast System, version 2 (CFSv2). The CFSv2 consists of fully coupled ocean, atmosphere, and land component models (Saha et al. 2014). The atmospheric component is the NCEP Global Forecast System at a horizontal resolution of T126 (~100 km) with 64 vertical levels extending from the surface to 0.26 hPa. The oceanic component is the Geophysical Fluid Dynamics Laboratory Modular Ocean Model, version 4, which uses 40 levels in the vertical, a zonal resolution of 0.5°, and a meridional resolution of 0.25° between 10°S and 10°N, gradually increasing through the tropics until becoming fixed at 0.5° poleward of 30°S and 30°N.

The predictions are initialized in all calendar months from January 1982 to December 2013. For each month, predictions with initial conditions (ICs) at 0000, 0600, 1200, and 1800 UTC were made every fifth day starting 1 January. In this analysis, we use predictions from 20 ICs in each month as forecasts for subsequent target months and seasons. For example, for the target month of January, 20 predictions from ICs on 7 December, 12 December, 17 December, 22 December, and 27 December at 0000, 0600, 1200, and 1800 UTC are used. In this analysis, a 0-month lead forecast for a target season refers to a prediction initialized in the previous month [e.g., predictions for January–February–March (JFM) from ICs of December]. Similarly, 3-month (6-month) lead forecasts for JFM refer to forecasts initialized in September (June). CFSv2 forecasts extend nine full months from the month of the initial condition, and therefore 6 months is the longest lead for which skill in predicting seasonal means can be analyzed. Possible implications of this limitation will be discussed later. For the CFSv2 predictions, the ocean and atmosphere ICs are from the NCEP Climate Forecast System Reanalysis (CFSR; Saha et al. 2010; Xue et al. 2011).
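
To make the lead-time and initial-condition conventions concrete, a minimal Python sketch follows. The function name, the treatment of leap years, and the rule of taking the last five every-fifth-day dates of the initialization month are illustrative assumptions rather than a description of the operational hindcast configuration.

from datetime import date, timedelta

def ics_for_target(target_year, target_month, lead_months=0):
    # The IC month is assumed to be lead_months + 1 calendar months before the target month.
    shift = lead_months + 1
    ic_month = (target_month - 1 - shift) % 12 + 1
    ic_year = target_year + (target_month - 1 - shift) // 12

    # Candidate IC days: every fifth day counted from 1 January of the IC year.
    day = date(ic_year, 1, 1)
    pentad_days = []
    while day.year == ic_year:
        if day.month == ic_month:
            pentad_days.append(day)
        day += timedelta(days=5)

    # The last five such days of the IC month, at four cycles each, give 20 ICs.
    return [(d, hour) for d in pentad_days[-5:] for hour in (0, 6, 12, 18)]

# Example: JFM 2012 at 0-month lead is fed by ICs on 7, 12, 17, 22, and 27 Dec 2011.
print(ics_for_target(2012, 1, lead_months=0))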

The predicted SSTs are verified against the observed optimum interpolation (OI) SST analysis (Reynolds et al. 2002). The prediction skill is quantified by the spatial anomaly correlation between predicted and observed seasonal mean SST anomalies over the equatorial tropical Pacific region (10°S–10°N, 130°E–80°W), which comprises the core region of ENSO SST variability.
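
The spatial anomaly correlation used as the skill metric can be computed as in the following minimal NumPy sketch. The function name, the cosine-of-latitude weighting, and the option to remove the regional mean (centered vs uncentered AC) are assumptions for illustration, not a description of the verification code actually used.

import numpy as np

def spatial_anomaly_correlation(fcst_anom, obs_anom, lat, centered=False):
    # Area-weighted spatial anomaly correlation between a forecast and an
    # observed seasonal-mean SST anomaly map over a region such as
    # 10S-10N, 130E-80W. Anomalies are assumed to be relative to climatology.
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(fcst_anom)
    valid = np.isfinite(fcst_anom) & np.isfinite(obs_anom)  # skip land/missing points
    f, o, w = fcst_anom[valid], obs_anom[valid], w[valid]
    if centered:
        # Optionally remove the area-weighted regional mean of each map first.
        f = f - np.sum(w * f) / np.sum(w)
        o = o - np.sum(w * o) / np.sum(w)
    return np.sum(w * f * o) / np.sqrt(np.sum(w * f**2) * np.sum(w * o**2))

# Example with synthetic fields on a 0.5-degree grid spanning 10S-10N, 130E-80W.
lat = np.arange(-10.0, 10.5, 0.5)
lon = np.arange(130.0, 280.5, 0.5)
rng = np.random.default_rng(1)
obs = rng.normal(size=(lat.size, lon.size))
fcst = obs + rng.normal(scale=0.5, size=obs.shape)
print(round(spatial_anomaly_correlation(fcst, obs, lat), 2))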

3. Results

a. Interannual variations in skill

The tropical Pacific Ocean observing system has seen a steady increase in the number of observations over time. The time series of ocean observations in the equatorial Pacific (8°S–8°N) shown in Fig. 1 gives the number of ocean temperature profiles per month from expendable bathythermographs (XBT, in blue), TAO/TRITON moored buoys (in red), and, more recently, Argo floats (in green). Temperature profiles from these platforms are the ones most widely ingested into ocean data assimilation systems to provide initial conditions for seasonal predictions, and they are therefore of direct relevance in the context of ENSO prediction.

Starting from the 1980s, there has been a sixfold increase in the total number of monthly temperature profiles. The most dramatic increase came with the implementation of the TAO array in the early 1990s. The introduction of Argo floats starting in 2000 led to a further increase in the number of observations, while the contribution from XBTs steadily declined. Another notable feature in the time series is a dramatic reduction in the number of temperature profiles from the TAO/TRITON array after 2012. This decline in the number of observations and the deterioration of the TAO/TRITON (hereafter TAO) array were noticed by the international community and led to a workshop on the Tropical Pacific Observing System (TPOS). Recommendations from the workshop can be found in the TPOS-2020 report (GCOS/GOOS/WCRP 2014).

Changes in the tropical Pacific observing system (a steady increase in the number of observations since 1980, followed by a decline after 2012) pose a natural question: can a fingerprint of such changes be discerned in the skill of SST forecasts in the equatorial Pacific? One would expect that an increase in ocean observations would lead to a better analysis of the ocean state and a subsequent increase in forecast skill. On the other hand, if a fingerprint related to changes in the number of observations is not discernible in the temporal evolution of skill, then what are the possible factors that could either mask this relationship or provide a physical basis for why a clear relationship may not exist? Exploring answers to these questions is useful in the design and assessment of the tropical Pacific Ocean observing system, particularly in the context of (i) evaluating the influence of the ocean observing system on the forecast skill of ENSO, and (ii) assessing the capabilities of ocean data assimilation systems to make use of ocean observations.

We analyze the skill of SST prediction in the equatorial Pacific based on CFSv2 hindcasts of seasonal means at different lead times. The skill is quantified by the spatial anomaly correlation between predicted and observed anomalies over the region (10°S–10°N, 130°E–80°W). The time series of spatial anomaly correlation (AC) for the 0-month lead forecast is shown in Fig. 2a. It is encouraging to see that the skill of SST prediction is generally high; however, there are also considerable year-to-year variations. What is also immediately apparent is that, corresponding to the upward trend in the number of observations (Fig. 1), there is no clear upward trend in AC; the AC skill in the early 1980s, when ocean temperature profiles were scarce, is as high as in the decade of the 2000s with a much larger number of ocean observations. We note that, corresponding to the decrease in TAO observations after 2012, there is also a decrease in AC, which will be discussed later.

Fig. 2. Time evolution of SST prediction skill in the equatorial Pacific. Skill is measured as the anomaly correlation (AC) between ensemble mean predicted and observed SST anomalies, and AC is computed over the region (10°S–10°N, 130°E–80°W) for (a) 0-month lead, (b) 3-month lead, and (c) 6-month lead forecasts. SST predictions are from the Climate Forecast System, version 2. (d) Time evolution of predicted (colored lines) and observed (black) SSTs averaged over the domain for which AC was computed.

We next demonstrate that the interannual variability in AC is not random but is closely related to the amplitude of the SST anomaly to be predicted. Figure 2d shows the mean SST anomaly averaged over the same region for which AC was calculated. Comparing the AC and mean SST time series, one gets the impression that AC tends to be high when the mean SST anomaly (either positive or negative) is also large. This can be clearly seen for the strong El Niño events of 1982/83 and 1997/98. The relationship between AC and the mean SST anomaly is easier to glean from a scatterplot between the two, shown in Fig. 3a. From the scatterplot it is immediately apparent that, in general, AC is larger (smaller) when the observed mean SST anomaly has larger (smaller) deviations from zero. In Fig. 3a the points are also color coded such that all points before (after) the end of 1994 are in red (blue). The selection of 1994 as the point of demarcation is based on the fact that this is approximately the time when the implementation of the TAO array was completed (Fig. 1). We note that the characteristic feature of the relationship between the seasonal mean SST anomaly and AC is independent of the period analyzed.

Fig. 3. Scatterplot between the mean amplitude of the observed SST anomaly (x axis) and SST prediction skill (y axis) for (a) 0-month lead, (b) 3-month lead, and (c) 6-month lead forecasts. The mean amplitude of the observed SST anomaly is the black line in Fig. 2d, and SST prediction skill is from the colored lines in Fig. 2.

There is a simple explanation for the butterfly pattern of the relationship between AC and mean SST, and it relates to the theoretical relationship between the predictable signal, the unpredictable noise, and various measures of skill. Kumar and Hoerling (2000) demonstrated that as the ratio of predictable signal to unpredictable noise (the signal-to-noise, or SN, ratio) becomes larger, the expected value of AC asymptotes toward its upper limit of one. Conversely, when the SN ratio is low, the expected value of AC is also low, and for a zero SN ratio, AC is also zero. Similar relationships exist between the SN ratio and other measures of prediction skill (Kumar 2009).

For short lead-time predictions, to first order, the initial SST anomaly can be considered the predictable signal. An ensemble of predictions also contains a noise component that is reflected in the spread among the predictions that start from different initial conditions. This spread (or noise) is due to the growth of uncertainties in the specification of initial conditions (Kumar and Murtugudde 2013) and is a ubiquitous feature of the initialized prediction problem.

It has also been shown that the amplitude of the unpredictable noise has little interannual variability (Tippett et al. 2004; Tang et al. 2005; Kumar and Hu 2014). Considering the spread among the ensemble of forecasts as the noise (which has little year-to-year variability) and the observed anomaly as the signal to be predicted, it is plausible to assume that for any year the mean SST anomaly is proportional to the SN ratio, and hence the relationship shown in Fig. 3 follows from the theoretical relationship discussed in Kumar and Hoerling (2000, see their Fig. 3). We note that similar results are found for AC computed over the regions associated with various Niño indices [e.g., the Niño-3.4 SST index (not shown)].
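
A schematic way to express this constraint, consistent with the behavior described in Kumar and Hoerling (2000) but written here as an illustrative approximation rather than their exact expression, is that the expected AC grows with the SN ratio as SN/sqrt(1 + SN^2): zero for a zero SN ratio and asymptoting to one for large SN. The short Python sketch below evaluates this curve.

import numpy as np

def expected_ac(snr):
    # Illustrative expected anomaly correlation as a function of the
    # signal-to-noise (SN) ratio: zero at snr = 0, asymptoting to one as
    # snr grows. A schematic approximation, not the exact expression of
    # Kumar and Hoerling (2000).
    snr = np.asarray(snr, dtype=float)
    return snr / np.sqrt(1.0 + snr**2)

# Example: weak, moderate, and strong signal amplitudes relative to the noise.
for snr in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"S/N = {snr:3.1f}  ->  expected AC = {expected_ac(snr):.2f}")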

The above analysis explains the interannual variations in AC as a function of the amplitude of the SST anomaly one seeks to predict. It is, therefore, possible that the tight relationship between the amplitude of ENSO events and prediction skill transforms interannual and epochal variations in ENSO amplitude into similar variations in prediction skill. Such variations could easily overshadow the fingerprint of an increase in ocean observations in the equatorial Pacific on the prediction skill of SSTs. Indeed, it has been documented that since 2000 the characteristics of ENSO variability have changed toward smaller-amplitude and shorter-lived ENSO events (Wang et al. 2010; McPhaden 2012; Hu et al. 2013). Consistent with the relationship between the SN ratio and the expected value of AC, the average ENSO prediction skill also declined (Wang et al. 2010; Barnston et al. 2012). This decrease in ENSO variability, therefore, could easily have overshadowed the fingerprint of an increase in ocean observations on prediction skill.

It could be argued that subsurface ocean temperature observations may not have a direct influence on the prediction of SSTs at the 0-month lead because, for this time scale (and as will be quantified later), a forecast based on merely persisting observed SST anomalies can have skill on par with that from initialized seasonal prediction systems. This being the case, an argument could be put forth that SST forecasts with no subsurface ocean observations (i.e., persistence) can do as well as forecasts initialized from an ocean analysis that included subsurface observations. To counter this argument, temporal variations in the skill of SST prediction at 3- and 6-month leads were also assessed.

Shown in Fig. 2b is the AC for the 3-month lead SST forecast, and in Fig. 2c the AC for the 6-month lead SST forecast. The overall skill for longer lead-time forecasts is lower than that for the 0-month lead forecast, as it should be, because for a fixed target season AC has to decrease with increasing lead time. The other features of the respective time series remain similar to the AC time series for the 0-month lead forecast: there is no discernible relationship between temporal variations in skill and the number of observations, and, further, skill is higher during the earlier period when ocean observations were scarce. The respective scatterplots shown in Figs. 3b and 3c repeat the butterfly pattern that was evident for the 0-month lead forecast (Fig. 3a), indicating that fluctuations in skill for longer lead-time forecasts are also dominated by signal-to-noise considerations. In a later discussion we will also demonstrate that the AC for CFSv2 3- and 6-month lead SST forecasts was indeed better than that of persistence, indicating that subsurface ocean observations did add to the skill of the forecasts. The influence of the number of observations on the improvement in skill, however, is still not discernible.

There is another curious feature that further questions the relationship between the number of subsurface ocean observations and skill. From a visual comparison of the red and blue dots in Fig. 3, one gets the impression that lower values of AC are much more frequent after 1994 than before. This feature is quantified in Fig. 4, where the frequency distributions of AC for the two periods are compared. It is clear that high AC values were much more frequent before 1994 than after, and this contrast increases with lead time. Conversely, lower, and even negative, AC values occur more frequently after 1994. This is consistent with studies demonstrating that ENSO prediction skill has been lower after 2000 (Wang et al. 2010; Barnston et al. 2012) and can be attributed to changes in the characteristics of ENSO variability (McPhaden 2012; Hu et al. 2013).

Fig. 4. Frequency distribution of anomaly correlations for forecasts between 1982 and 1994 (red bars) and forecasts between 1995 and 2013 (blue bars) for (a) 0-month lead, (b) 3-month lead, and (c) 6-month lead forecasts. Anomaly correlations were binned at intervals of 0.3, and bin range values are shown on the x axis.

In the final analysis we compare the skill of forecasts from the initialized seasonal prediction system (CFSv2) with the skill of forecasts based on persistence. The persistence forecast is constructed by taking the SST anomalies of a previous season as the forecast for subsequent seasons. For example, a persistence forecast for JFM was based on persisting the October–November–December (OND) seasonal mean anomalies. Seasonal mean SST anomalies are persisted because persistence of shorter time averages (e.g., monthly) resulted in noisier forecast SST anomalies.
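
A persistence baseline of this kind can be constructed as in the sketch below. This is a minimal illustration using a region-averaged monthly anomaly series and a temporal correlation for scoring, whereas the paper's metric is the spatial AC; the assumption that the persisted season ends with the initialization month (e.g., OND persisted as the 0-month lead JFM forecast) follows the example given above.

import numpy as np

def persistence_forecast(monthly_anom, lead_months=0):
    # Persistence forecast of 3-month (seasonal) mean anomalies from a monthly
    # series. The persisted season is assumed to end with the initialization
    # month, which sits lead_months + 1 months before the target season starts
    # (e.g., OND persisted as the JFM forecast at 0-month lead).
    x = np.asarray(monthly_anom, dtype=float)
    seas = np.convolve(x, np.ones(3) / 3.0, mode="valid")  # 3-month running means
    shift = lead_months + 3
    return seas[:-shift], seas[shift:]  # (persistence forecast, verifying target)

# Example with a synthetic red-noise series standing in for observed anomalies.
rng = np.random.default_rng(0)
anom = np.zeros(384)
for t in range(1, anom.size):
    anom[t] = 0.9 * anom[t - 1] + rng.normal(scale=0.3)
fcst, target = persistence_forecast(anom, lead_months=3)
print("Temporal correlation, persistence vs. target:", round(np.corrcoef(fcst, target)[0, 1], 2))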

The lead-time dependence of AC for persistence and CFSv2 forecasts is compared in Fig. 5. We separated the skill comparison into two periods: 1982–94 and 1995–2013. For a forecast lead time of 0 months, the AC for persistence is comparable to, or even slightly better than, that for CFSv2. For longer lead times, however, the CFSv2 initialized predictions have better skill than persistence, and the gap between the two increases with lead time. The skill for both persistence and CFSv2 is lower after 1994, a reflection of the changes in the characteristics of ENSO after 2000 (McPhaden 2012; Hu et al. 2013).

Fig. 5. Lead-time dependence of the anomaly correlation for persistence (dashed lines) and for CFSv2 (solid lines) forecasts, computed over two periods: 1982–94 (red lines) and 1995–2013 (blue lines).

The gap in skill between persistence and CFSv2 forecasts is indicative of the influence of observational data in the ocean assimilation on the subsequent forecasts of ENSO. Before 1994, when subsurface ocean observations were sparse, the gap in skill was due either to the assimilation of SSTs, to the surface forcing, or to the sparse XBTs. After 1994, the gap in skill between persistence and CFSv2 forecasts remained the same despite an increase in the number of subsurface ocean observations. A possible interpretation is that the continuation of observations that were also available prior to 1994, for example, SSTs, maintained this gap, with the additional subsurface ocean observations adding little further improvement in prediction skill. The efficacy of SST information in generating subsurface ocean information is discussed in section 3b.

In Fig. 2 we noted that, coincident with the decline of the TAO array starting around 2012, there was a decline in the prediction skill of ENSO. If these two facts were considered in isolation, it would be tempting to conclude that the decrease in skill was due to the decrease in TAO observations. However, we note that over the same period the amplitude of SST anomalies was also small (Fig. 2d). The resulting small SN ratio offers an alternative explanation for the apparent link between the reduction in skill and TAO observations. Similar periods of low skill, for example around 2002 and 1996, also occurred even though the number of observations from the TAO array was at its peak.

b. Other factors that could be responsible for the lack of a relationship between skill and the number of observations

There could be other reasons why a fingerprint of the increase in ocean observations is not readily discerned in the evolution of AC. Some of the possibilities are embedded in the dynamics of the coupled ocean–atmosphere system in the tropical Pacific, while others could be related to the efficacy of current ocean data assimilation systems in utilizing ocean observations; these are discussed next.

Kumar et al. (2014) and Servonnat et al. (2014), based on analyses of coupled model simulations in which the model-predicted SSTs were nudged toward observed SST values, demonstrated that the specification of SST alone can generate realistic subsurface ocean temperature variability in the equatorial Pacific. This is particularly true for the subsurface ocean temperature near the date line, a region of considerable importance for governing ENSO variability (Meinen and McPhaden 2000; Zhang and McPhaden 2010). In Kumar et al. (2014), the physical mechanism by which SST variability is communicated to the subsurface ocean was argued to be ocean–atmosphere coupling, whereby changes in SSTs lead to shifts in precipitation patterns and surface winds. Oceanic adjustment to changes in the surface wind provides the physical link through which information contained in the interannual variability of SSTs is communicated to the subsurface ocean.

Given that the specification of SSTs can generate realistic subsurface temperature information, a steady increase in temperature profiles may represent redundant information. Assimilation of SSTs in the 1980s, by generating subsurface ocean information, may have been responsible for skillful ENSO prediction even during a period when subsurface ocean observations were still scarce. If this is the case, it could also explain why the difference in AC between persistence and CFSv2 forecasts (Fig. 5) is similar over the two periods. One could infer that, because SST observations were abundant over both periods, the improvement in skill of CFSv2 over persistence was the same even though (i) subsurface ocean observations increased dramatically after 1994, and (ii) changes in the characteristics of ENSO led to lower predictability and skill after 1994. In support of the argument that assimilation of SSTs alone provides subsurface ocean information, we also note that there are seasonal prediction systems that obtain their initial conditions from an ocean data assimilation procedure in which the only observational information enters via nudging toward the observed SSTs. ENSO skill in these prediction systems is found to be comparable to that of systems with more advanced data assimilation methods that utilize subsurface ocean measurements (Chen et al. 2004; Tang et al. 2005; Keenlyside et al. 2005; Luo et al. 2008; Keenlyside et al. 2008). We also note that in the ODA system an estimate of the surface wind (e.g., from an atmospheric analysis) is also included and may further add to the generation of subsurface ocean information.

Another possibility that cannot be discounted is that the current generation of ocean models and data assimilation systems is not advanced enough to take advantage of either the additional ocean observations or observations of variables that may add to the skill (e.g., salinity or ocean currents). Along the same lines, even if the assimilation systems are able to ingest the information and provide a better ocean analysis, initial shocks1 at the beginning of the forecast may quickly disperse the relevant information because of imbalances between the ocean analysis and the model's preferred state. This could easily be the case because, while most ocean data assimilation systems are not coupled, the seasonal prediction systems themselves are coupled, and thus the initial ocean analysis may not be consistent with the coupled ocean–atmosphere evolution. Further, the current generation of ocean data assimilation systems does not assimilate observations of ocean currents in conjunction with the assimilation of ocean temperature and salinity, which could also exacerbate the problem of imbalances in the analyzed fields. Under either of these possibilities, in which the ocean data assimilation systems are not advanced enough to take advantage of increasing observations, or initial imbalances after forecast initiation dissipate the assimilated information, the prediction skill of SSTs over the analysis period (Fig. 2) may not make optimal use of the increased ocean observations and, as a consequence, may not reflect a fingerprint of the steady increase in ocean observations.

Ocean variability also evolves on a slower time scale. It is conceivable that, as long as sparse ocean observations are able to capture the key aspects of the coupled ocean–atmosphere dynamics, for example, the buildup of the warm water volume in the equatorial Pacific, the initial ocean state may still be adequately resolved for the purpose of skillful ENSO prediction. If this notion is correct, then an increase in ocean observations may, once again, represent redundant information. Admittedly, variability on the faster time scales associated with the overlying atmosphere, such as the Madden–Julian oscillation (MJO) or westerly wind bursts (WWBs), could play a major role in determining the exact outcome of individual ENSO events (Kessler and Kleeman 2000; Batstone and Hendon 2005; Wang et al. 2011). However, because such events are associated with stochastic atmospheric forcing superimposed on the slower coupled ocean–atmosphere dynamics, this forcing cannot be predicted ahead of time and cannot add to ENSO prediction skill at longer forecast lead times. Once again, in the context of ENSO prediction, even though additional ocean observations may better resolve higher-frequency ocean variability related to stochastic atmospheric forcing, they may not contribute systematically to longer-lead ENSO prediction skill, and therefore a fingerprint of an increase in ocean observations may not be evident in prediction skill. Following a comment raised during the review, it is also possible that, because of the specification of surface winds in the ODA systems, the ocean evolution over short lead times is dominated by surface wind–driven feedbacks, and therefore the influence of subsurface observations may not be manifest in the prediction skill at the lead times analyzed here. Confirmation of this possibility, however, has to wait until seasonal forecasts are routinely extended to longer leads.

4. Conclusions

Based on an assessment of the time evolution of ENSO prediction skill in hindcasts from a seasonal prediction system, we demonstrated that a clear fingerprint of increasing ocean observations on prediction skill cannot be discerned. Over the analysis period, the temporal variability in ENSO prediction skill is much more closely related to the mechanistic constraint of the SN ratio, whose influence can easily dominate that of increasing ocean observations. The lack of a relationship between the time evolution of prediction skill and the equatorial Pacific observing system raises some intriguing possibilities as alternative explanations. In section 3b we discussed various hypotheses that could account for our inability to identify the fingerprint of increasing ocean observations on ENSO prediction skill. The role of these various possibilities in modulating the relationship between observational data and ENSO prediction skill is an important question that needs to be followed up by the research community.

Although our analysis was based on a single seasonal prediction system and needs to be repeated with hindcasts from other seasonal prediction systems, we suspect that similar results (i.e., no clear trend in ENSO prediction skill with an increase in ocean observations in the equatorial tropical Pacific) will be found. We base this on the following reasoning: (i) prior to 1990, during the era of sparse equatorial Pacific observations, the prediction skill for equatorial Pacific SSTs in CFSv2 was as high as after 1990, and the skill of this prediction system is also similar to that of other seasonal prediction systems (Jin et al. 2008; Barnston et al. 2012); and (ii) there are seasonal prediction systems driven solely by observed SST information that have demonstrated ENSO prediction skill similar to that of systems with more advanced ocean data assimilation that utilize subsurface ocean observations.

We caution that, at present, no judgments on the requirements for the ocean observing system in the equatorial Pacific should be made on the basis of ENSO prediction skill alone. The same equatorial observing system is also used to meet a multiplicity of user requirements beyond improving the skill of ENSO prediction (GCOS/GOOS/WCRP 2014). Observations also play a crucial role in the validation of various physical processes in models and assimilation systems and are necessary in our quest to improve them. In the context of ENSO prediction and predictability, however, the plausible reasons for the lack of a relationship between the time evolution of prediction skill and the increase in the number of ocean observations nonetheless pose intriguing questions that need to be understood, a challenging task. Such an understanding may require a community effort and is essential to (i) address the design and adequacy of the ocean observing system in the equatorial Pacific, and (ii) improve ocean data assimilation systems in the context of ENSO prediction.

Acknowledgments

We thank two anonymous reviewers and the editor for their comments. Comments by Michelle L'Heureux and Zeng-Zhen Hu are also appreciated. The scientific results and conclusions, as well as any views or opinions expressed herein, are those of the authors and do not necessarily reflect the views of NWS, NOAA, or the Department of Commerce.

REFERENCES

  • Balmaseda, M. A., and Coauthors, 2009: Ocean initialization for seasonal forecasts. Oceanography, 22, 154–159, doi:10.5670/oceanog.2009.73.

  • Barnston, A. G., M. K. Tippett, M. L. L'Heureux, S. Li, and D. G. DeWitt, 2012: Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Amer. Meteor. Soc., 93, 631–651, doi:10.1175/BAMS-D-11-00111.1.

  • Batstone, C., and H. H. Hendon, 2005: Characteristics of stochastic variability associated with ENSO and the role of the MJO. J. Climate, 18, 1773–1789, doi:10.1175/JCLI3374.1.

  • Chen, D., M. A. Cane, A. Kaplan, S. E. Zebiak, and D. Huang, 2004: Predictability of El Niño over the past 148 years. Nature, 428, 733–736, doi:10.1038/nature02439.

  • GCOS/GOOS/WCRP, 2014: Tropical Pacific Observing System, 2020 Workshop (TPOS 2020). Vol. I: Workshop report and recommendations. Rep. TPOS-2020, 66 pp. [Available online at http://tpos2020.org/wp-content/uploads/TPOS-2020-Workshop-Report-FINAL-300114.pdf.]

  • Graham, R., and Coauthors, 2011: Long-range forecasting and the Global Framework for Climate Services. Climate Res., 47, 47–55, doi:10.3354/cr00963.

  • Hu, Z.-Z., A. Kumar, H.-L. Ren, H. Wang, M. L'Heureux, and F.-F. Jin, 2013: Weakened interannual variability in the tropical Pacific Ocean since 2000. J. Climate, 26, 2601–2613, doi:10.1175/JCLI-D-12-00265.1.

  • Jin, E. K., and Coauthors, 2008: Current status of ENSO prediction skill in coupled ocean–atmosphere models. Climate Dyn., 31, 647–664, doi:10.1007/s00382-008-0397-3.

  • Keenlyside, N. S., M. Latif, M. Botzet, J. Jungclaus, and U. Schulzweida, 2005: A coupled method for initializing El Niño–Southern Oscillation forecasts using sea surface temperature. Tellus, 57A, 340–356, doi:10.1111/j.1600-0870.2005.00107.x.

  • Keenlyside, N. S., M. Latif, M. Botzet, J. Jungclaus, L. Kornblueh, and E. Roeckner, 2008: Advancing decadal-scale climate prediction in the North Atlantic sector. Nature, 453, 84–88, doi:10.1038/nature06921.

  • Kessler, W. S., and R. Kleeman, 2000: Rectification of the Madden–Julian oscillation into the ENSO cycle. J. Climate, 13, 3560–3575, doi:10.1175/1520-0442(2000)013<3560:ROTMJO>2.0.CO;2.

  • Kumar, A., 2009: Finite samples and uncertainty estimates for skill measures for seasonal predictions. Mon. Wea. Rev., 137, 2622–2631, doi:10.1175/2009MWR2814.1.

  • Kumar, A., and M. P. Hoerling, 2000: Analysis of a conceptual model of seasonal climate variability and implications for seasonal predictions. Bull. Amer. Meteor. Soc., 81, 255–264, doi:10.1175/1520-0477(2000)081<0255:AOACMO>2.3.CO;2.

  • Kumar, A., and R. Murtugudde, 2013: Predictability, uncertainty and decision making: A unified perspective to build a bridge from weather to climate. Curr. Opinion Environ. Sustain., 5, 327–333, doi:10.1016/j.cosust.2013.05.009.

  • Kumar, A., and Z.-Z. Hu, 2014: How variable is the uncertainty in ENSO sea surface temperature prediction? J. Climate, 27, 2779–2788, doi:10.1175/JCLI-D-13-00576.1.

  • Kumar, A., H. Wang, Y. Xue, and W. Wang, 2014: How much of monthly subsurface temperature variability in the equatorial Pacific can be recovered by the specification of sea surface temperatures? J. Climate, 27, 1559–1577, doi:10.1175/JCLI-D-13-00258.1.

  • Luo, J.-J., S. Masson, S. K. Behera, and T. Yamagata, 2008: Extended ENSO predictions using a fully coupled ocean–atmosphere model. J. Climate, 21, 84–93, doi:10.1175/2007JCLI1412.1.

  • McPhaden, M. J., 2012: A 21st century shift in the relationship between ENSO SST and warm water volume anomalies. Geophys. Res. Lett., 39, L09706, doi:10.1029/2012GL051826.

  • Meinen, C. S., and M. J. McPhaden, 2000: Observations of warm water volume changes in the equatorial Pacific and their relationship to El Niño and La Niña. J. Climate, 13, 3551–3559, doi:10.1175/1520-0442(2000)013<3551:OOWWVC>2.0.CO;2.

  • Peng, P., A. G. Barnston, and A. Kumar, 2013: A comparison of skill among two versions of NCEP Climate Forecast System (CFS) and CPC's operational short-lead seasonal outlooks. Wea. Forecasting, 28, 445–462, doi:10.1175/WAF-D-12-00057.1.

  • Reynolds, R. W., N. A. Rayner, T. M. Smith, D. C. Stokes, and W. Wang, 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609–1625, doi:10.1175/1520-0442(2002)015<1609:AIISAS>2.0.CO;2.

  • Saha, S., and Coauthors, 2010: The NCEP Climate Forecast System Reanalysis. Bull. Amer. Meteor. Soc., 91, 1015–1057, doi:10.1175/2010BAMS3001.1.

  • Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, doi:10.1175/JCLI-D-12-00823.1.

  • Servonnat, J., J. Mignot, E. Guilyardi, D. Swingedouw, R. Seferian, and S. Labetoulle, 2014: Reconstructing the subsurface ocean decadal variability using surface nudging in a perfect model framework. Climate Dyn., 44, 315–338, doi:10.1007/s00382-014-2184-7.

  • Stockdale, T. N., D. Anderson, M. Balmaseda, F. Doblas-Reyes, L. Ferranti, K. Mogensen, F. Molteni, and F. Vitart, 2011: ECMWF Seasonal Forecast System 3 and its prediction of sea surface temperature. Climate Dyn., 37, 455–471, doi:10.1007/s00382-010-0947-3.

  • Tang, Y., R. Kleeman, and A. M. Moore, 2005: Reliability of ENSO dynamical predictions. J. Atmos. Sci., 62, 1770–1791, doi:10.1175/JAS3445.1.

  • Tippett, M. K., R. Kleeman, and Y. Tang, 2004: Measuring the potential utility of seasonal climate predictions. Geophys. Res. Lett., 31, L22201, doi:10.1029/2004GL021575.

  • Wang, W., M. Chen, and A. Kumar, 2010: An assessment of the CFS real-time seasonal forecasts. Wea. Forecasting, 25, 950–969, doi:10.1175/2010WAF2222345.1.

  • Wang, W., M. Chen, A. Kumar, and Y. Xue, 2011: How important is intraseasonal surface wind variability to real-time ENSO prediction? Geophys. Res. Lett., 38, L13705, doi:10.1029/2011GL047684.

  • Xue, Y., B. Huang, Z.-Z. Hu, A. Kumar, C. Wen, D. Behringer, and S. Nadiga, 2011: An assessment of oceanic variability in the NCEP Climate Forecast System Reanalysis. Climate Dyn., 37, 2511–2539, doi:10.1007/s00382-010-0954-4.

  • Xue, Y., M. Chen, A. Kumar, Z.-Z. Hu, and W. Wang, 2013: Prediction skill and bias of tropical Pacific sea surface temperature in the NCEP Climate Forecast System version 2. J. Climate, 26, 5358–5378, doi:10.1175/JCLI-D-12-00600.1.

  • Zhang, X., and M. J. McPhaden, 2010: Surface layer heat balance in the eastern equatorial Pacific Ocean on interannual time scales: Influence of local versus remote wind forcing. J. Climate, 23, 4375–4394, doi:10.1175/2010JCLI3469.1.

  • Zhu, J., B. Huang, L. Marx, J. L. Kinter III, M. A. Balmaseda, R.-H. Zhang, and Z.-Z. Hu, 2012: Ensemble ENSO hindcasts initialized from multiple ocean analyses. Geophys. Res. Lett., 39, L09602, doi:10.1029/2012GL051503.
1 Initial shocks in forecasts refer to a situation in which the initial state is not in dynamical balance, so that the information assimilated from observational data is quickly dispersed at the beginning of the forecast. For example, an ocean data assimilation system, while assimilating observed temperature profiles, may not keep the ocean currents in geostrophic balance, and the resulting dynamical imbalances lead to a dispersion of the assimilated information.
