An Improved Coupled Model for ENSO Prediction and Implications for Ocean Initialization. Part II: The Coupled Model

  • 1 National Centers for Environmental Prediction, NWS/NOAA, Washington, D.C.

Abstract

An improved forecast system has been developed and implemented for ENSO prediction at the National Centers for Environmental Prediction (NCEP). This system consists of a new ocean data assimilation system and an improved coupled ocean–atmosphere forecast model (CMP12) for ENSO prediction. The new ocean data assimilation system is described in Part I of this two-part paper.

The new coupled forecast model (CMP12) is a variation of the standard NCEP coupled model (CMP10). Major changes in the new coupled model are improved vertical mixing for the ocean model; relaxation of the model’s surface salinity to the climatological annual cycle; and incorporation of an anomalous freshwater flux forcing. Also, the domain in which the oceanic SST couples to the atmosphere is limited to the tropical Pacific.

Evaluation of ENSO prediction results shows that the new coupled model, using the more accurate ocean initial conditions, achieves higher prediction skill. However, two sets of hindcasting experiments (one using the more accurate ocean initial conditions but the old coupled model, the other using the new coupled model but the less accurate ocean initial conditions) result in no improvement in prediction skill. These results indicate that future improvement in ENSO prediction skill requires systematically improving both the coupled model and the ocean analysis system. The authors’ results also suggest that for the purpose of initializing the coupled model for ENSO prediction, care should be taken to give sufficient weight to the model dynamics during the ocean data assimilation. This can reduce the danger of aliasing large-scale model biases into the low-frequency variability in the ocean initial conditions, and also reduce the introduction of small-scale noise into the initial conditions caused by overfitting the model to sparse observations.

Corresponding author address: Dr. Ming Ji, Climate Modeling Branch, National Centers for Environmental Prediction, 5200 Auth Road, Rm. 807, Camp Springs, MD 20746.

Email: ming.ji@noaa.gov

1. Introduction

A major accomplishment of the recently completed international research program, Tropical Ocean and Global Atmosphere (TOGA, 1985–94), was the development of the capability to predict the El Niño–Southern Oscillation (ENSO) phenomenon. ENSO is the most important known seasonal to interannual climate variation and involves coupled interactions between the tropical Pacific Ocean and the global atmosphere (Bjerknes 1969; Wyrtki 1975, 1985). It has been linked to global climatic anomalies (Ropelewski and Halpert 1987).

Cane et al. (1986) developed the first dynamical coupled model for the prediction of ENSO. This model and a number of subsequent coupled models used similar ocean initialization methods in which the observed surface wind stress in the tropical Pacific was used to force the ocean model prior to the start of a prediction (Latif et al. 1993; Kleeman 1993; Kirtman et al. 1997). A different ocean initialization approach (Ji et al. 1994; Kleeman et al. 1995; Rosati et al. 1997) used not only the surface wind stress information but also observed surface and subsurface temperature data by way of ocean data assimilation. This ocean initialization method for a coupled prediction model is similar in spirit to the initialization of atmospheric models for numerical weather prediction.

Chen et al. (1995) have developed an improved ocean initialization scheme for the Zebiak and Cane (1987) coupled model. By incorporating a new initialization scheme into their prediction system without any modification to the coupled model itself, they showed significant improvement in ENSO prediction skill compared to using the standard Zebiak and Cane (1987) initialization method. Their result clearly shows that initialization improves ENSO prediction skill. Ji and Leetmaa (1997) found that initialization of the ocean by the assimilation of observed subsurface temperature data generally leads to improved forecast skill. However, with one version of the coupled model, an improved oceanic initialization resulted in degradation of prediction skill for hindcasts in the winter season. The skill for predictions started in the winter season was improved when changes were made to the coupled model. However, even with the improved model, data assimilation did not have significant positive impact on the skill for these predictions. They concluded that improving the coupled model is at least as important as improving the initialization.

The research results presented here are Part II of a two-part paper. In Part I (Behringer et al. 1998), an improved ocean analysis system based on a three-dimensional variational ocean data assimilation scheme (Derber and Rosati 1989) is described. We have shown that this improved ocean analysis system is capable of producing ocean initial conditions that can capture seasonal to interannual variability in the tropical Pacific more accurately. This improvement is achieved by incorporating vertical variation in first-guess error variance and an overall reduction in magnitude of the estimated first-guess error. The improvements in the ocean data assimilation scheme lead to reduced instances of locally overfitting the model to sparse observations; hence, the result is reduced aliasing of model bias into low-frequency variability. The ocean initial conditions produced are smoother, indicating less high-frequency and small-scale noise, which is probably not relevant to ENSO. In Part II, we wish to demonstrate that the ocean initial conditions with more accurate low-frequency variability and lower small-scale noise can significantly improve ENSO prediction skill.

In this paper, we will describe an improved coupled forecast model. We will show that the ocean initial conditions produced with the improved ocean analysis system can significantly improve ENSO prediction skill. However, this improvement is achieved only with an improved coupled model; therefore, further improvement in ENSO forecasting will require improvements in both the ocean initialization and the coupled model.

This paper is organized as follows. The new coupled model is described in section 2. Hindcast experiments using the new coupled model, and comparisons with the results from our previous coupled model are presented in section 3. In section 4, we discuss the reasons for the improvement in the hindcasting skill by further analyzing impacts from the improved ocean initialization and the improved coupled model. Our results are summarized in section 5.

2. The coupled forecast model

The primary interest at NCEP is to improve the capability for predicting ENSO episodes using coupled ocean–atmosphere models. A central part of this effort is improving the ocean data assimilation system to produce a better ocean initialization for the coupled forecast model. In Part I, we demonstrated that the recent changes in the analysis system have improved the quality of the ocean analyses. This result indicates that the ocean initial conditions obtained from the new analysis system probably are better initial conditions for ENSO prediction. On the other hand, better forecast skill can also be the result of an improved coupled ocean–atmosphere forecast model. In this section, we describe a new coupled forecast model, denoted as CMP12. Results of El Niño forecast experiments using the improved ocean initial conditions and the new coupled model will be discussed in the next two sections.

a. The coupled model

The new coupled model (CMP12) used for this study is a modified version of the previous NCEP coupled model, denoted as CMP10 (Ji et al. 1996). The ocean model is the same as the one used in the ocean analysis system as described in Part I. It is the Modular Ocean Model developed at the Geophysical Fluid Dynamics Laboratory (Bryan 1969; Cox 1984; Philander et al. 1987), configured for the Pacific domain of 45°S–55°N, 120°E–70°W. The atmosphere model is a low-resolution version of NCEP’s global medium-range forecast model (Kanamitsu et al. 1991) with a horizontal spectral resolution of T40 and modified convection and cloud parameterizations designed to produce a better climate simulation (Ji et al. 1994; Kumar et al. 1996). One-way anomaly coupling is used from the atmosphere to the ocean; that is, only anomalies of surface stress, heat, and freshwater fluxes produced by the atmosphere model are retained. This is because the AGCM is unable to produce realistic mean annual cycles for these fluxes. The ocean is then driven by total fluxes made up of these anomalies added to the climatological mean annual cycle wind stress of The Florida State University (Goldenberg and O’Brien 1981) and the climatological heat flux components of Oberhuber (1988). Coupling from the ocean to the atmosphere is accomplished by using total SST from the ocean model to force the atmosphere model.
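
As a concrete illustration of the anomaly coupling just described, the sketch below shows how the total wind stress forcing for the ocean model could be assembled from the AGCM stress anomaly and the FSU climatological annual cycle. This is a schematic, not the NCEP code; the function and variable names and array layouts are assumptions, and the heat and freshwater fluxes would be treated analogously.

```python
def total_wind_stress(agcm_stress, agcm_stress_clim, fsu_stress_clim, month):
    """Anomaly coupling of wind stress (schematic).

    agcm_stress      : (nlat, nlon) instantaneous surface stress from the AGCM
    agcm_stress_clim : (12, nlat, nlon) AGCM's own climatological annual cycle
    fsu_stress_clim  : (12, nlat, nlon) FSU climatological annual cycle
    month            : calendar month index, 0-11

    Only the AGCM stress anomaly is retained; the ocean is driven by that
    anomaly added to the observed (FSU) climatological annual cycle.
    """
    anomaly = agcm_stress - agcm_stress_clim[month]   # discard the AGCM mean annual cycle
    return fsu_stress_clim[month] + anomaly
```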

Two changes were made to the ocean model for CMP12: a relaxation of the OGCM’s surface salinity to the climatology of Levitus et al. (1994) was added, and the parameter ν0 in the Richardson number–dependent vertical mixing formulation (Pacanowski and Philander 1981) for the eddy viscosity coefficient was modified. The eddy viscosity coefficient ν is given by

$$\nu = \frac{\nu_0}{(1 + \alpha\,Ri)^{n}} + \nu_b, \qquad (1)$$

where Ri is the Richardson number and α, νb, ν0, and n are parameters to be chosen empirically. Pacanowski and Philander (1981) suggested ν0 = O(50 cm2 s−1). In tuning our earlier versions of the coupled model, we had set ν0 = 5 cm2 s−1. Over time, it has become clear that this value is too small, resulting in an unrealistically shallow equatorial undercurrent core in our earlier ocean analyses. In the new ocean analysis system, this parameter is changed to 50 cm2 s−1, as recommended by Pacanowski and Philander (1981), and the same change is made for the new coupled model. Thus, the ocean model in CMP12 is configured identically to the model used in the new ocean analysis system.
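
A minimal sketch of Eq. (1) follows. Only the change of ν0 from 5 to 50 cm2 s−1 comes from the text above; the background viscosity, α, and n used below are illustrative defaults, not necessarily the values used in CMP12.

```python
def eddy_viscosity(Ri, nu0=50.0, nub=1.0, alpha=5.0, n=2):
    """Richardson-number-dependent eddy viscosity, Eq. (1):
        nu = nu0 / (1 + alpha * Ri)**n + nub     [cm^2 s^-1]
    Written for stable stratification (Ri >= 0); unstable profiles are
    typically handled separately (e.g., by convective adjustment).
    """
    return nu0 / (1.0 + alpha * Ri) ** n + nub

# Strong shear (small Ri) gives strong mixing; strong stratification
# (large Ri) shuts mixing down toward the background value.
for Ri in (0.0, 0.25, 1.0, 10.0):
    print(Ri, eddy_viscosity(Ri))
```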

In addition, a number of changes in the coupling between the ocean and atmosphere models have been adopted. Specifically, an anomalous freshwater flux from the atmosphere to the ocean has been added, and the active region of the Pacific basin, in which SST from the ocean model is allowed to force the atmosphere model, has been reduced from the entire basin (45°S–55°N) to the equatorial band between 15°S and 15°N. However, basinwide stress, heat, and freshwater flux anomalies produced by the AGCM are still used to force the ocean model. Additionally, a negative heat flux feedback of 5 W m−2 °C−1 has been introduced, in addition to the standard anomalous heat flux forcing used in the CMP10 model.

Since we do not have a realistic climatological freshwater flux to force the ocean to maintain a realistic climatological sea surface salinity (SSS) field, relaxation of the surface salinity to the Levitus et al. (1994) climatology is a reasonable approach. Observations such as those of outgoing longwave radiation (OLR) indicate that a shift of a significant portion of the tropical rainfall from the western Pacific to the central Pacific occurs during a warm ENSO episode. AGCM simulations forced with the observed monthly SSTs show that our model is able to reasonably simulate the tropical rainfall anomalies associated with ENSO (Kumar et al. 1996). Hence a reasonable approach is to use the AGCM rainfall anomaly to represent the anomalous freshwater flux while the effect of the climatological freshwater flux is approximated by relaxation to the Levitus SSS climatology. Since the NCEP model has no demonstrated skill for SST predictions beyond the immediate equatorial Pacific, we do not force the atmosphere model with the ocean model SSTs for the entire basin. The negative heat flux feedback was included in order to damp the predicted SST anomalies in the model, which at times in the past appeared to be too large. However, the actual impact of this change on the model is unclear at present.
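
The sketch below collects the three surface forcing ingredients just described: the negative heat flux feedback, the AGCM rainfall anomaly used as the anomalous freshwater flux, and the relaxation of surface salinity toward the Levitus climatology. Only the 5 W m−2 °C−1 feedback coefficient comes from the text; the relaxation timescale and all names are hypothetical.

```python
def surface_forcing(q_anom, rain_anom, sst_anom, sss, sss_clim,
                    lam=5.0, tau_days=30.0):
    """Schematic surface forcing terms for a CMP12-style coupling.

    q_anom    : AGCM surface heat flux anomaly (W m-2)
    rain_anom : AGCM rainfall anomaly, used as the anomalous freshwater flux
    sst_anom  : predicted SST anomaly (degC)
    sss, sss_clim : model and climatological (Levitus) sea surface salinity
    lam       : negative heat flux feedback (5 W m-2 per degC, from the text)
    tau_days  : hypothetical relaxation timescale for surface salinity
    """
    heat_flux = q_anom - lam * sst_anom                       # damps predicted SST anomalies
    freshwater_flux = rain_anom                               # anomaly only; no climatological P-E
    sss_tendency = (sss_clim - sss) / (tau_days * 86400.0)    # relax SSS toward climatology
    return heat_flux, freshwater_flux, sss_tendency
```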

Ideally, it would be desirable to study the model’s sensitivity to each of these changes. Unfortunately, the complexity of the coupled GCMs makes it nearly impossible to explore the impact of each of these changes. A large number of hindcasts would be necessary to quantify the impact of each change by statistically examining the model’s performance as measured by forecast skill. The computational cost to evaluate every single parameter change would be prohibitive. Many of the changes implemented were based on making the model more “realistic,” such as adding the relaxation to a surface salinity climatology and adding anomalous freshwater flux by using the AGCM’s rainfall anomalies. At present, we cannot isolate the impact of each of these changes. However, as will be shown in the next section, the combination of all these changes has had a positive impact on the model performance, which has contributed to improved prediction skill.

b. Postprocessing

Monthly mean total SSTs for the tropical Pacific were saved during the forecasts. Monthly SST anomalies are obtained by removing the coupled model’s SST climatology. The coupled model’s SST climatologies were computed by averaging all predictions initiated from the same month of all years. Therefore, the method for estimating the predicted monthly SST anomalies does not involve observations of SST. This means that the average of the predicted SST anomalies is 0 over the hindcasting period. However, the observed SST anomalies used at the Climate Prediction Center are estimated relative to the climatology for the period of 1950–79 (Reynolds and Smith 1995). Therefore, the observed SST anomalies, when averaged over the model’s hindcasting period of 1981–95, are not 0. The 1981–95 period is warmer than the 1950–79 period because of several strong warm ENSO episodes that occurred during the 1980s and more frequent warm episodes that occurred during the early 1990s. Hence, the predicted SST anomalies are adjusted so that, when averaged over the hindcasting period, they have the same mean as the observed SST anomalies. This adjustment is made by adding the mean observed monthly mean SST anomaly for the 1981–95 period to the predicted monthly mean SST anomalies. In essence, the average of the observed SST anomalies for the 1981–95 period is treated as a model bias. In addition, our postprocessing procedure averages predicted monthly SST anomalies having the same lead time but produced from three predictions starting from three consecutive months. This significantly reduces month-to-month noise from different predictions. In the subsequent sections, the forecast lead time is defined as the average time lag between the initial conditions and the target month. For example, a 3-month lead prediction for July is the average of three monthly mean predictions for June, July, and August, which are initiated in March, April, and May, respectively. However, in real-time forecasting, this smoothing reduces the effective lead time by 1 month; that is, the forecast for July is not available until the forecast made from May is completed. This procedure differs from that used for our previous models (Ji et al. 1994; Ji et al. 1996), which averages three predicted monthly mean SST anomalies for a given target month but with different lead times because they are produced from predictions of three consecutive monthly starts. In the subsequent sections, we will show comparisons of predictions made from different versions of the coupled model. For consistency, predictions made from our previous coupled model, that is, CMP10, are reprocessed using the same postprocessing method as for CMP12. We also postprocessed the CMP12 predictions using the previous postprocessing method and found that it does not change our conclusions.
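
As a rough illustration of this postprocessing, the sketch below computes bias-adjusted anomalies and the three-consecutive-start average, assuming the predicted Nino3.4 SSTs are stored as one 12-month forecast per monthly start beginning in January; the array layout and names are assumptions.

```python
import numpy as np

def postprocess(preds, obs_anom_mean_1981_95):
    """preds[s, k]: monthly mean predicted Nino3.4 SST (degC) for forecast
    month k (0..11) of the run started in chronological start month s,
    with one start per calendar month and s = 0 a January start.
    obs_anom_mean_1981_95: mean observed anomaly over the hindcast period.
    """
    n_starts = preds.shape[0]
    # model climatology: average all runs launched in the same calendar month
    clim = np.stack([preds[m::12].mean(axis=0) for m in range(12)])
    anom = preds - clim[np.arange(n_starts) % 12]
    # treat the 1981-95 mean observed anomaly as a model bias and add it back
    anom += obs_anom_mean_1981_95
    # average forecasts at the same lead from three consecutive starts, e.g.,
    # the 3-month-lead July value combines the June, July, and August forecasts
    # started in March, April, and May; label the result by the middle start
    smoothed = np.full_like(anom, np.nan)
    smoothed[1:-1] = (anom[:-2] + anom[1:-1] + anom[2:]) / 3.0
    return smoothed
```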

3. Results from hindcasting experiments

To evaluate the performance of the new coupled forecast system, 1-yr hindcasts are carried out. Two sets of ocean initial conditions, RA5 and RA6, are available. RA5 is produced with our previous ocean data assimilation system (Ji et al. 1995); RA6 is produced with the improved ocean data assimilation system described in Part I. In this section, we describe hindcast results using the CMP12 model and RA6 ocean initial conditions. To evaluate improvements in prediction skill, these hindcast results are compared with prediction results from our standard forecast model, that is, CMP10 using RA5 ocean initial conditions. Note that the CMP10 results were actually produced previously and described in Ji et al. (1996). Major features of these two sets of predictions are listed in Table 1.

The common practice for evaluating a coupled model’s ENSO prediction capability is to correlate the time series of area-averaged SST anomalies between the predictions and the observations in an equatorial Pacific region that is latitudinally bounded from 5°S to 5°N. For this discussion, we choose the area between 170° and 120°W, which is also referred to as the Nino3.4 region (Barnston et al. 1994). Figure 1 shows time series of Nino3.4 SST anomalies predicted by the CMP10 (dash) and CMP12 (heavy solid) models for the 1981–96 period for lead times of 3, 6, and 9 months. The observed SST anomalies (Reynolds and Smith 1994) for Nino3.4 are shown in the thin solid lines. Both models predicted the low-frequency interannual evolution of the SST anomalies reasonably well, especially for the short lead time (3 months). At the longer lead times, CMP10 lags behind in predicting the 1982/83 warm ENSO episode, which CMP12 predicted better. Both models lagged behind for the 1986/87 warm episode, especially in coming out of the event, that is, the rapid cooling during 1988. However, CMP12 performed somewhat better in this respect at the 9-month lead. In addition, CMP12 exhibited a cooling during 1987, which is spurious. For the period of the 1990s, CMP10 had trouble coming out of the 1991/92 warm episode, which led to completely missing the warming during late spring 1993. CMP10 also missed the warm episode during the late fall/winter of 1994/95 at the longer lead time. On the other hand, CMP12 fared much better during the 1990s. It was able to come out of the 1991/92 warm event, and thus predict a small SST rise in 1993, although with a different timing and amplitude when compared with the observations. CMP12 also was able to predict the warm episode of 1994/95 and the cold episode of 1995/96 at lead times of at least 6 months. Overall, CMP12 tracks the observations better during 1981–91, mainly because it was better during the 1982/83 event and during 1988, especially at the 9-month lead time. For the 1992–96 period, CMP12 tracks the observations much better than CMP10.
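
For reference, a minimal sketch of the Nino3.4 area average (5°S–5°N, 170°–120°W) on a regular latitude–longitude grid; the grid conventions and names are assumptions.

```python
import numpy as np

def nino34(sst_anom, lat, lon):
    """Area-weighted mean SST anomaly over the Nino3.4 box.

    sst_anom : (..., nlat, nlon) SST anomalies (degC)
    lat, lon : 1-D coordinates in degrees; lon assumed in the 0-360
               convention, so 170W-120W corresponds to 190E-240E.
    """
    jj = (lat >= -5.0) & (lat <= 5.0)
    ii = (lon >= 190.0) & (lon <= 240.0)
    box = sst_anom[..., jj, :][..., :, ii]
    # cosine-of-latitude weights for the area average
    w = np.cos(np.deg2rad(lat[jj]))[:, None] * np.ones(ii.sum())
    return (box * w).sum(axis=(-2, -1)) / w.sum()
```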

The commonly used measures of skill for coupled models are anomaly correlation coefficients (ACC) and root-mean-square errors (rmse) between the predictions and the observations for area-averaged SST anomalies. These skill measures for the CMP10 and CMP12 models for the Nino3.4 SST anomalies are shown in Fig. 2. For the entire common period from January 1982 to March 1996, CMP12 clearly exhibited higher skill than CMP10 for lead times longer than about four months. The right panels of Fig. 2 show ACC and rmse for both models based on a total of 48 predictions initiated from January 1992 to December 1995. For this period, CMP12 outperformed CMP10 by a bigger margin compared to the full 1982–95 period. The period since the 1991/92 warm episode was a difficult one for ENSO prediction models. Ji et al. (1996) showed that not only did the coupled prediction models exhibit lower skill levels than during the 1980s (Barnston et al. 1994; Chen et al. 1995), the persistence forecast also showed much lower skill, that is, a shorter persistence time for anomalies, when comparing the 1992–95 period to the 1982–92 period. The observations of sea level pressure, low-level zonal wind, SST, and subsurface ocean heat content anomalies in the Pacific during the two periods also show remarkable contrasts in the characteristics of their interannual variability (Kleeman et al. 1996; Latif et al. 1997; Ji et al. 1996). The observational evidence suggests a possible change in the nature of coupled interactions in the tropical Pacific, thereby resulting in lower ENSO predictability for the 1990s. Thus it is a challenge to improve the skill of ENSO prediction for the early 1990s. If an ACC value of 0.6–0.7 is considered as a rough criterion for useful prediction, Fig. 2 suggests that the predictability for the 1992–95 period for the CMP10 model was 3–4 months. For the CMP12 model, the predictability for this period was about 7 months.
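
The two skill measures can be computed as sketched below (one common convention; the paper does not spell out whether time means are removed again before correlating the anomaly series).

```python
import numpy as np

def acc(pred_anom, obs_anom):
    """Anomaly correlation coefficient between two anomaly time series."""
    p = pred_anom - pred_anom.mean()
    o = obs_anom - obs_anom.mean()
    return (p * o).sum() / np.sqrt((p ** 2).sum() * (o ** 2).sum())

def rmse(pred_anom, obs_anom):
    """Root-mean-square error of the predicted anomalies."""
    return np.sqrt(np.mean((pred_anom - obs_anom) ** 2))
```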

Individual 12-month predictions are shown in Fig. 3 and Fig. 4. There are 168 1-yr predictions for CMP10 (Fig. 3) and 181 1-yr predictions for CMP12 (Fig. 4). The thin lines in each panel in the figure depict predictions starting from three consecutive months for each year. For example, the top panels in both figures depict all the predictions initiated in January, February, and March (JFM) of each year for the entire hindcasting period. The three letters shown in the upper right of each panel are the initials for each of the three starting months of the predictions depicted in the panel. Observed Nino3.4 SST anomalies are shown in dashed curves. We wish to point out several features in this figure. First, both models exhibit spread in the forecasts; that is, hindcasts starting from consecutive months may result in very different predictions. Clear examples of this difference can be seen in the top panel of Fig. 3 during the 1992–96 period for the CMP10 model. The spread of the predictions is the reason why averaging over several predictions reduces noise in the prediction results, particularly for the 1992–96 period. The spread among the predictions for CMP12 is smaller than for CMP10, indicating that the predictions from the CMP12 model are more consistent. However, CMP12 consistently predicted a cooling during 1987 that did not actually occur. The comparison of predictions from both models also reveals that for the CMP10 model, the predictions initiated prior to or during the onset of warm ENSO episodes appear to have difficulty coming out of the warm episodes. For example, many CMP10 predictions initiated in late 1982 to early 1983, in late 1987, and in winter 1991 to spring 1992 were unable to predict the cooling during the subsequent seasons. CMP12, on the other hand, appears to be better in predicting the cooling after warm ENSO events, particularly during 1983, 1988, and 1992.

Figure 5 depicts differences between the observed SST climatology (Reynolds and Smith 1995) and the coupled model SST climatology for CMP10 (left panels) and CMP12 (right panels). These difference (bias) fields are obtained by subtracting the observed climatology from the model climatology. The model’s monthly mean annual cycle is obtained by averaging all the predictions initiated in June and December. Monthly mean bias fields for January, April, July, and October are shown in the figure. These are the center months for each of the winter, spring, summer, and fall seasons. Comparing these systematic SST biases for the two models, we notice that the biases in SST climatology for CMP10 are generally warm in the central to eastern tropical Pacific, whereas for CMP12, the magnitudes of the bias fields are generally smaller than those of CMP10. Furthermore, in the equatorial region, biases for CMP12 tend to be negative (cold bias), except for the spring season (April) in the far eastern equatorial Pacific. Figure 5 suggests that for the CMP10 model, the coupled ocean–atmosphere system is often in a “perpetual spring” or even “perpetual El Niño” state because SSTs in the eastern tropical Pacific are strongly biased to be warmer than the observed climatology. The northern spring season is characterized by warm SST throughout the tropical Pacific and weak trades. Many previous studies have found that the northern spring season is a period during which the coupled ocean–atmosphere system loses part of its “memory” (Zebiak and Cane 1987; Battisti 1988) and the predictability for ENSO is the lowest. The error growth rate for the coupled system during this time of the year tends to be the highest (Blumenthal 1991; Xue et al. 1997). Hence, the perpetual springlike state of the CMP10 model may facilitate random error growth, which results in the predictions having a larger spread. The fact that CMP12 has no tendency toward perpetual springlike conditions may partially explain its better performance.

One may ask if the difference in model bias characteristics between CMP10 and CMP12 results from different model characteristics or from different initializations. We believe it is the former because we have examined the CMP12 bias from predictions using a set of different ocean initial conditions similar to RA5 (see section 4). The bias fields for these (not shown) are nearly identical to those shown in Fig. 5 for CMP12. Therefore the difference in the bias appears to be a result of changes in the model characteristics. Additionally, it could be argued that the coupled model climatology refers to the 1981–95 base period, during which the average SST in the tropical Pacific was warmer than in the 1950–79 base period. However, Reynolds and Smith (1995) showed that the average warming during 1982–93 for the tropical Pacific region of 10°S–10°N, 150°–90°W is about 0.4°C. For many large areas in the equatorial Pacific, the CMP10 bias is above 1°C, especially for the northern fall and winter seasons; therefore, there is no doubt that CMP10 has a significant warm SST bias.

It is also of interest for potential users of the forecasts to know the spatial distribution and the seasonality of the prediction skill for this model. The spatial distribution of the SST forecast skill for the 6-month lead predictions for the boreal winter seasons is shown in Fig. 6. The anomaly correlation skill for the winter seasons is defined as the temporal correlation of the observed and the predicted SST anomalies at each grid point for 3-month averages centered in December, January, and February. For CMP12, the predictions are verified for 1981/82 through 1995/96, whereas for CMP10, predictions are verified for 1982/83 through 1995/96. Hence, there are 45 (42) seasons verified for the CMP12 (CMP10) model. Note that the effective sample sizes are quite small because the actual degrees of freedom are much smaller than the number of seasons verified; therefore, the statistics could be unstable. Nevertheless, the skill maps can be indicative of the skillful regions of the forecasts for each model. From Fig. 6, we found that skillful forecasts are expected only within a narrow region confined between 10°S and 10°N in the equatorial Pacific and east of the date line. This is similar to our previous coupled models and to many other coupled dynamical forecast models. Comparing CMP12 and CMP10, CMP12 appears to have somewhat higher skill for the northern winter seasons, and more importantly, the skillful forecast region (area where the skill is above 0.8) for CMP12 extends farther west of the date line. This is important because the significant region of coupled air–sea interactions that impact North America is in the central-western tropical Pacific near the date line. Hence, improving the SST forecast for this area is highly desirable.

Shown in Fig. 7 are skill comparisons for the prediction of Nino3.4 SST anomalies. In addition to overall prediction skill, we grouped the predictions initiated during the northern warm seasons from May through September, and during the early northern winter months (November, December, and January). We found from this comparison that predictions starting in the warm seasons can sustain quite high skill (above or near 0.9) for nearly three seasons while skill for the predictions initiated during early winter months starts to drop after only about one season. At about 1-yr lead time, the predictions from all starting months have similar skill of about 0.65, which is still considered useful.

4. Discussion

In previous sections and in Part I, we have described an improved ENSO prediction system that consists of an improved ocean data assimilation system and a new coupled forecast model. Hindcasting results show that this system produces more accurate El Niño forecasts. It is of strong interest to understand whether the improvement in prediction skill results from the more accurate ocean initial conditions or from the improvements in the coupled model.

Figure 8 shows the 1-month rms spread for CMP10 (dash) and CMP12 (solid) for the individual predictions as a function of the prediction lead times. Symbolically, let A = {ai}, i = 1, 2, . . . , 180 and B = {bi}, i = 1, 2, . . . , 180. If A represents a set of 6-month lead predictions of monthly mean SST anomalies for the target period between July 1981 and June 1996 (a total of 180 predictions), and B represents a set of predictions for the same target period but with a 5-month lead, then the value on each curve at the 6-month lead time is the rms difference between A and B. The difference between A and B arises because A has a lead time 1 month longer than B. Note that this is not an estimate of the prediction error at a 6-month lead time; that is measured by the rms difference between predictions and observations. This instead provides estimates of the error growth from one month to the next during the forecasts. The first value of each curve, that is, at a lead time of 1 month, represents the rms difference between all 1-month predictions and the observations, which indicates the first-month error growth due to errors in the initial conditions. As the lead time becomes longer, the 1-month rms separation becomes larger, as indicated by the general increase of the 1-month rms spread. We notice that the two curves are approximately parallel, indicating that both models probably have a similar error growth rate. However, the fact that the curve for CMP12 is below that for CMP10 reflects that the RA6 initial conditions have a lower noise level, which contributes to improved consistency in the CMP12 forecast results (cf. Figs. 3 and 4). Lowering the noise level in the initial conditions can contribute to improved prediction skill because some of the noise in the initial conditions, when projected onto the growing modes, that is, the singular vectors (Xue et al. 1997), could have nontrivial amplitudes, which can lead to poor predictions. Therefore, a lower noise level in the initial conditions can extend the range of useful forecasts within the limit of predictability, given the same error growth rate for the model. Thus, the improved initialization for CMP12 plays an important role in the improvement of the prediction skill. Additional evidence comes from running the same forecast model with two sets of ocean initial conditions of varying quality.
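
A minimal sketch of the 1-month rms separation statistic described above (cf. Fig. 8); the data layout and names are assumptions.

```python
import numpy as np

def one_month_rms_separation(fcst, obs):
    """fcst[t, L-1]: prediction of the monthly Nino3.4 anomaly for target
    month t made at a lead of L months (L = 1..max_lead); obs[t] is the
    verifying observation.

    Returns r where r[0] is the rms difference between the 1-month lead
    predictions and the observations, and r[L-1] for L > 1 is the rms
    difference between the L-month and (L-1)-month lead predictions of
    the same target months.
    """
    max_lead = fcst.shape[1]
    r = np.empty(max_lead)
    prev = np.asarray(obs, dtype=float)
    for L in range(1, max_lead + 1):
        r[L - 1] = np.sqrt(np.nanmean((fcst[:, L - 1] - prev) ** 2))
        prev = fcst[:, L - 1]
    return r
```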

We have shown in Part I that our improved data assimilation system results in analyses that are more accurate for the low-frequency variability. More accurate analyses presumably lead to a better initialization for the coupled model. The obvious question, then, is what impact this actually has on prediction skill. To address this question, prediction experiments were carried out using the CMP12 model and a third set of ocean initial conditions denoted as CTL. The CTL analyses are produced as part of the present research using an ocean data assimilation scheme that adopted some improvements of the new system, but did not incorporate the two major changes of the new analysis system, that is, the vertical variation of the first-guess error variance and the overall reduction in the magnitude of the estimated first-guess error. Comparison with the independent observations of sea level shows that RA6 is more accurate than CTL for the low-frequency variability. The CTL analyses are produced in order to facilitate our understanding of the impact on analysis quality of improving the first-guess error covariance function. We use CTL as a proxy for less accurate ocean initial conditions for additional hindcast experiments.

We denote the hindcasts using the CMP12 model and CTL initial conditions as C12CTL, which contains 1-yr predictions initiated in March, June, September, and December of each year for the period of December 1980 to December 1995. Furthermore, in order to quantify the impact of model improvement on prediction skill, we produced a set of 1-yr predictions identical to C12CTL but using the CMP10 model and RA6 initial conditions. We denote this dataset as C10RA6. For simplicity, we denote the subset of predictions using the CMP12 model and RA6 initial conditions that is common to the C12CTL experiment as C12RA6, and the same subset of predictions but from the CMP10 model and RA5 initial conditions as C10RA5. Major features of these prediction experiments are listed in Table 1.

Hindcast skills for Nino3.4 SST anomalies as measured by ACC and rmse, based on the predictions from C12RA6, C12CTL, C10RA6, and C10RA5, are shown in Fig. 9. The skill estimates in Fig. 9 show that C10RA5, C10RA6, and C12CTL form a tight pack while C12RA6 clearly stands out as a set of more skillful predictions. Note that there is a large random element inherent in coupled GCM predictions. Some indication of this can be seen in Figs. 3 and 4. Thus the sample size of 61–64 for each set of predictions may still be too small to completely overcome the uncertainty, which is difficult to estimate. However, the clear separation between C12RA6 and the other three experiments that closely band together strongly suggests that C12RA6 is probably a better set of predictions.

In Part I, we have shown that the RA6 analyses simulate the observed variability more accurately than the CTL analyses. The prediction results shown here suggest that RA6 is a better set of initial conditions than CTL (C12RA6 vs C12CTL). The ocean data assimilation system that produced RA6 uses a stronger dynamical constraint through an overall reduction in the magnitude of the estimated first-guess error, that is, giving higher relative weights to the model field and lower weights to the observations. The hindcasting results indicate that, for the purpose of initializing the coupled model, it is desirable for the analysis system to have a stronger dynamical constraint because it gives a better fit to the low-frequency variability by reducing aliasing of large-scale model bias into that variability. The RA6 ocean initial conditions are also smoother than CTL, which indicates that RA6 contains fewer dynamically inconsistent features, probably caused by locally overfitting the model to data. These results imply that the C12RA6 predictions probably have lower initial error and require less dynamical adjustment from initialization, as indicated by Fig. 8, which therefore helps to achieve the higher prediction skill.

On the other hand, a comparison of skill levels in Fig. 9 between C12RA6 and C10RA6 suggests that improvements in the prediction skill also result from the improvements in the coupled model, since both experiments used the same more accurate ocean initial conditions (RA6). This indicates that the model improvement and the better ocean initialization are equally important in improving prediction skill. The CMP10 model was unable to take advantage of the better ocean initial conditions from RA6, as illustrated by the comparison of C10RA6, C10RA5, and C12RA6, suggesting that the model error in the CMP10 model plays a dominant role in limiting further increases in the prediction skill. On the other hand, the increased skill level of C12RA6 over C12CTL, both of which used the CMP12 model, suggests that improving the ocean initialization can have a significant impact on the prediction skill, provided that the coupled model is able to take advantage of the better ocean initial conditions.

5. Summary

In this paper, we described a new version of the coupled ENSO prediction model (CMP12). A number of changes in both the ocean model and in the coupling scheme were incorporated into CMP12. Hindcasting experiments using the CMP12 model and the RA6 initial conditions showed a significant increase in the ENSO prediction skill over those using CMP12 and the less accurate initial conditions (CTL) and over CMP10 using either set of initial conditions.

An extended version of the NCEP ocean analysis system that adopts a stronger dynamical constraint than the previous analysis system is described in a companion paper. The stronger dynamical constraint in the new data assimilation scheme not only resulted in analyses having a better fit for the low-frequency variability, but also proved to be a very important factor leading to the improved prediction skill. We believe the new analysis system provides better ocean initialization because the more accurate low-frequency variability and lower small-scale noise level in the ocean initial conditions are desirable for ENSO prediction. This was confirmed by hindcasting experiments using the same coupled model but different ocean initial conditions (C12RA6 vs C12CTL). The higher relative weighting given to the ocean model field allows the ocean data assimilation system to act as a stronger filter. It reduces the impact of the high-frequency, small-scale variations inadequately sampled by the observations, while retaining the low-frequency, large-scale signals relevant to ENSO. Also, by avoiding the local overfitting to observations, it reduces errors that result from aliasing the mean model-forcing error into low-frequency variability, thus leading to a more accurate simulation of that variability. These improvements suggest that a strong dynamical constraint in ocean data assimilation is desirable for the purpose of ocean initialization for ENSO prediction.

Our improvement in the prediction skill resulted from both more accurate ocean initial conditions and a better coupled model. Our experiments (C10RA6) show that the CMP10 model was unable to take advantage of the more accurate ocean initial conditions to produce better predictions. On the other hand, the C12CTL experiment, which used the improved model (CMP12) with the less accurate initial conditions, was also unable to achieve better prediction skill. These results reflect the complexity of the coupled forecast system. Improving one component of the system does not necessarily lead to improvement in prediction skill. Individual changes to the data assimilation system and the forecast model can sometimes lead to unexpected results. Future efforts to improve the forecast system must take a systematic approach to improving both the ocean data assimilation system and the coupled model. This makes the task more challenging.

Acknowledgments

Support for this research is provided by NOAA’s Office of Global Programs through the Climate and Global Change Program. Dr. Y. Xue (NCEP/UCAR) helped to calculate the rms error spread of the coupled model predictions. The authors wish to express their gratitude to two anonymous reviewers, whose thoughtful comments helped to improve this manuscript greatly.

REFERENCES

  • Barnston, A. G., H. M. Van den Dool, S. E. Zebiak, T. P. Barnett, M. Ji, D. R. Rodenhuis, M. A. Cane, A. Leetmaa, N. E. Graham, C. F. Ropelewski, V. E. Kousky, E. A. O’Lenic, and R. E. Livezey, 1994: Long-lead seasonal forecasts—Where do we stand? Bull. Amer. Meteor. Soc.,75, 2079–2114.

  • Battisti, D. S., 1988: Dynamics and thermodynamics of a warming event in a coupled tropical atmosphere-ocean model. J. Atmos. Sci.,45, 2889–2919.

  • Behringer, D. W., M. Ji, and A. Leetmaa, 1998: An improved coupled model for ENSO prediction and implications for ocean initialization. Part I: The ocean data assimilation system. Mon. Wea. Rev.,126, 1013–1021.

  • Bjerknes, J., 1969: Atmospheric teleconnections from the equatorial Pacific. Mon. Wea. Rev.,97, 163–172.

  • Blumenthal, M. B., 1991: Predictability of a coupled ocean–atmosphere model. J. Climate,4, 766–784.

  • Bryan, K., 1969: A numerical method for the study of the World Ocean. J. Comput. Phys.,4, 347–376.

  • Cane, M. A., S. E. Zebiak, and S. C. Dolan, 1986: Experimental forecasts of El Niño. Nature,321, 827–832.

  • Chen, D., S. E. Zebiak, A. J. Busalacchi, and M. A. Cane, 1995: An improved procedure for El Niño forecasting. Science,269, 1699–1702.

  • Cox, M. D., 1984: A primitive equation, 3-dimensional model of the ocean. GFDL Ocean Group Tech. Rep. 1, Geophysical Fluid Dynamics Laboratory/NOAA, Princeton University, 143 pp.

  • Derber, J. D., and A. Rosati, 1989: A global oceanic data assimilation system. J. Phys. Oceanogr.,19, 1333–1347.

  • Goldenberg, S. B., and J. J. O’Brien, 1981: Time and space variability of tropical Pacific wind stress. Mon. Wea. Rev.,109, 1190–1207.

  • Ji, M., and A. Leetmaa, 1997: Impact of data assimilation on ocean initialization and El Niño prediction. Mon. Wea. Rev.,125, 742–753.

  • ——, A. Kumar, and A. Leetmaa, 1994: An experimental coupled forecast system at the National Meteorological Center: Some early results. Tellus,46A, 398–418.

  • ——, A. Leetmaa, and J. Derber, 1995: An ocean analysis system for seasonal to interannual climate studies. Mon. Wea. Rev.,123, 460–481.

  • ——, ——, and V. E. Kousky, 1996: Coupled model prediction of ENSO during the 1980s and the 1990s at the National Centers for Environmental Prediction. J. Climate,9, 3105–3120.

  • Kanamitsu, M., and Coauthors, 1991: Description of NMC global data assimilation and forecast system. Wea. Forecasting,6, 425–435.

  • Kirtman, B. P., J. Shukla, B. Huang, Z. Zhu, and E. K. Schneider, 1997: Multiseasonal predictions with a coupled tropical ocean–global atmosphere system. Mon. Wea. Rev.,125, 789–808.

  • Kleeman, R., 1993: On the dependence of hindcast skill in a coupled ocean-atmosphere model on ocean thermodynamics. J. Climate,6, 2012–2033.

  • ——, A. M. Moore, and N. R. Smith, 1995: Assimilation of subsurface thermal data into a simple ocean model for the initialization of an intermediate tropical coupled ocean–atmosphere forecast model. Mon. Wea. Rev.,123, 3103–3113.

  • ——, R. A. Colman, N. R. Smith, and S. B. Power, 1996: A recent change in the mean state of the Pacific basin climate: Observational evidence and atmospheric and oceanic responses. J. Geophys. Res. (Oceans),101, 20483–20499.

  • Kumar, A., M. P. Hoerling, M. Ji, A. Leetmaa, and P. Sardeshmukh, 1996: Assessing a GCM’s suitability for making seasonal predictions. J. Climate,9, 115–129.

  • Latif, M., A. Sterl, E. Maier-Reimer, and M. M. Junge, 1993: Structure and predictability of the El Niño/Southern Oscillation phenomenon in a coupled ocean–atmosphere general circulation model. J. Climate,6, 700–708.

  • ——, R. Kleeman, and C. Eckert, 1997: Greenhouse warming, decadal variability or El Niño? An attempt to understand the anomalous 1990s. J. Climate,10, 2221–2239.

  • Levitus, S., R. Burgett, and T. P. Boyer, 1994: World Ocean Atlas 1994. Vol. 3. Salinity. NOAA Atlas NESDIS 3, 99 pp.

  • Oberhuber, J. M., 1988: An atlas based on the “COADS” data set: The budgets of heat, buoyancy and turbulent kinetic energy at the surface of the global ocean. Rep. 15, Max-Planck-Institut für Meteorologie, 20 pp. [Available from Max-Planck-Institut für Meteorologie, Bundesstrasse 55, Hamburg, Germany.].

  • Pacanowski, R., and S. G. H. Philander, 1981: Parameterization of vertical mixing in numerical models of tropical oceans. J. Phys. Oceanogr.,11, 1443–1451.

  • Philander, S. G. H., W. J. Hurlin, and A. D. Seigel, 1987: A model of the seasonal cycle in the tropical Pacific ocean. J. Phys. Oceanogr.,17, 1986–2002.

  • Reynolds, R. W., and T. M. Smith, 1994: Improved global sea surface temperature analysis using optimum interpolation. J. Climate,7, 929–948.

  • ——, and ——, 1995: A high resolution global sea surface temperature climatology. J. Climate,8, 1571–1583.

  • Ropelewski, C. F., and M. S. Halpert, 1987: Global and regional scale precipitation patterns associated with El Niño/Southern Oscillation. Mon. Wea. Rev.,115, 1606–1626.

  • Rosati, A., K. Miyakoda, and R. Gudgel, 1997: The impact of ocean initial conditions on ENSO forecasting with a coupled model. Mon. Wea. Rev.,125, 754–772.

  • Wyrtki, K., 1975: El Niño—The dynamical response of the equatorial Pacific to atmospheric forcing. J. Phys. Oceanogr.,5, 572–584.

  • ——, 1985: Water displacements in the Pacific and the genesis of El Niño cycles. J. Geophys. Res.,90, 7129–7132.

  • Xue, Y., M. A. Cane, and S. E. Zebiak, 1997: Predictability of a coupled model of ENSO using singular vector analysis. Part I: Optimal growth in seasonal background and ENSO cycles. Mon. Wea. Rev.,125, 2043–2056.

  • Zebiak, S. E., and M. A. Cane, 1987: A model El Niño–Southern Oscillation. Mon. Wea. Rev.,115, 2262–2278.

Fig. 1.

Predicted Nino3.4 SST anomalies (°C) for 1981–96 at 3-month, 6-month, and 9-month lead times. The predictions from the CMP10 (CMP12) model are shown as dashed (solid) lines; the observed Nino3.4 SST anomalies are shown as thin solid lines.

Fig. 2.

Skill estimates as a function of lead time for prediction of Nino3.4 SST anomalies for the CMP10 (dash) and CMP12 (solid) models for the 1981–95 period (left panels) and the 1992–95 period (right panels). Shown in the top panels are anomaly correlation coefficients (ACC) between the predictions and the observations; shown in the lower panels are rmse’s.

Fig. 3.

Evolution of Nino3.4 SST anomalies (°C) from all individual predictions (thin solid lines) initiated monthly using the CMP10 model for 1982–95. Shown in each panel are the predictions grouped by three consecutive starting months. The observed Nino3.4 SST anomalies are shown in the heavy-dashed lines.

Fig. 4.

Same as Fig. 3 but for the CMP12 model for 1981–95.

Fig. 5.

Differences between the coupled model SST climatology (average of June and December starts) and the observed SST climatology for January, April, July, and October as representative for different seasons. Contour intervals are 0.5°C.

Fig. 6.

Spatial distribution (skill map) of the temporal anomaly correlation coefficients between the observed and the predicted SST anomalies. The skill map for CMP12 (CMP10) is shown in the upper (lower) panel. The anomaly correlation coefficients are computed based on 45 (42) 3-month averages centered in December, January, and February for the boreal winters of 1981/82 (1982/83) through 1995/96 for CMP12 (CMP10). Contour values are indicated in the figure.

Fig. 7.

Anomaly correlation coefficients (left) and rmse’s (right) between the predicted and the observed monthly mean SST anomalies as a function of lead time for Nino3.4 SST anomalies. These are for CMP12 for the period of 1981–95. Skills for the predictions starting in the northern warm/cold/all seasons are shown in the dot–dashed/dashed/solid curves. The warm season is defined as May through September; the cold season is defined as November, December, and January.

Fig. 8.

One-month rms separation as an estimate of error growth for the CMP10 (dash) and CMP12 (solid) models (see text in section 4).

Fig. 9.

Skill estimates (ACC, left; rmse, right) as a function of lead time for prediction of Nino3.4 SST anomalies from four prediction experiments that used two coupled models and three initial conditions. These predictions are designated as C12RA6 (solid), C12CTL (short dash), C10RA6 (dot–dash), and C10RA5 (long dash). The predictions are initiated in March, June, September, and December of each year for 1981–95. The two coupled models, that is, CMP12 and CMP10, are designated as C12 and C10; the three initial conditions are designated as RA5, RA6, and CTL (see Table 1 and section 4).

Table 1.

List of prediction experiments.