UNCERTAINTY IN SEASONAL FORECASTING.
Any prediction of the future evolution of the Earth system requires an associated assessment of its uncertainty. This is true whether the forecast is for the days ahead or is a longer-term prediction for the following months and seasons.
For seasonal forecasts, the uncertainty associated with inexact initial conditions, which can grow rapidly in time, is usually addressed by running multiple forecasts with perturbations applied to the initial state of the ocean and atmosphere (Arribas et al. 2011; Stockdale et al. 2011). The perturbations are chosen to be of a magnitude that represents the uncertainty in the observational measurements and in the analysis tools used to process them. As the forecast evolves, the differences between the ensemble members, known as the ensemble “spread,” should then reflect the typical forecast error, or “uncertainty”; in other words, the eventual real-world evolution should lie within the cluster of this forecast ensemble. Forecast uncertainty also arises from our inexact representation of Earth system physics. This contribution is sampled by employing different Earth system models (Yun et al. 2005; Weisheimer et al. 2009; Smith et al. 2013), the so-called multimodel approach, often supplemented by perturbations to physical processes, known as stochastic physics schemes, which further account for structural errors within a particular model (Buizza et al. 1999). The use of ensembles to quantify uncertainty enables forecasting probabilities of different outcomes, which in turn demands that forecast evaluation be conducted using probabilistic skill metrics (e.g., Candille and Talagrand 2005).
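The kind of probabilistic evaluation such ensembles demand can be sketched in a few lines. The arrays below are synthetic stand-ins, not CHFP output, and the Brier skill score shown is only one of the metrics discussed by Candille and Talagrand (2005):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a hindcast set: 28 start years, 40 ensemble members
# predicting a standardized seasonal-mean anomaly, plus verifying observations.
n_years, n_members = 28, 40
obs = rng.normal(0.0, 1.0, n_years)
ensemble = 0.6 * obs[None, :] + rng.normal(0.0, 0.9, (n_members, n_years))

# Probabilistic event: observed anomaly in the upper tercile of the record.
threshold = np.quantile(obs, 2.0 / 3.0)
event = (obs > threshold).astype(float)

# Forecast probability = fraction of members predicting the event.
prob = (ensemble > threshold).mean(axis=0)

# Brier score (lower is better) and skill relative to a climatological
# forecast that always issues the observed base rate.
brier = np.mean((prob - event) ** 2)
brier_clim = np.mean((event.mean() - event) ** 2)
bss = 1.0 - brier / brier_clim
```

A skill score near zero means the ensemble adds little over climatology; the same machinery applies unchanged to real hindcast arrays once the event threshold is defined.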
The advantage of multimodel approaches over a single seasonal forecast model has been amply demonstrated (e.g., Hagedorn et al. 2005). Multiple models employ a range of methods for representing small-scale physical processes and for numerically integrating the underlying equation set, usually resulting in a partial cancellation of the biases and errors of individual models. The biases of individual models nevertheless require calibration, both for communication of forecasts to the public and for their use in applications such as health, agriculture, and energy (e.g., Challinor et al. 2005; Morse et al. 2005). To accomplish this, forecasting centers accompany their predictions with sets of forecasts conducted for dates in the past, referred to as hindcasts. Comparing these hindcasts with the observed evolution of the atmosphere and with surface measurements of temperature and precipitation allows the biases to be characterized and accounted for (Di Giuseppe et al. 2013a,b). For this process to be robust, and not subject to the vagaries of interannual variability, these hindcast suites need to span a large number of years, typically two decades or longer. The hindcast suite of ensemble integrations therefore represents a significant investment of supercomputing resources.
THE CLIMATE-SYSTEM HISTORICAL FORECAST PROJECT.
Leading operational and research centers around the globe thus collectively possess a sizable database of past hindcasts that potentially represents an immensely valuable resource for the research community interested in questions pertaining to seasonal prediction and predictability. For this potential to be realized, the hindcast suites need to be freely and publicly available, in a common format that facilitates their manipulation and analysis. The World Climate Research Programme’s (WCRP) Working Group on Subseasonal to Interdecadal Prediction (WGSIP) therefore initiated a project known as the Climate-System Historical Forecast Project (CHFP) to achieve these aims, with the project launched at the WCRP Workshop on Seasonal Prediction in June 2007 (Kirtman and Pirani 2009). The CHFP invites leading centers to contribute their hindcast suites on a voluntary basis to a common database hosted at the Centro de Investigaciones del Mar y la Atmósfera (CIMA) in Argentina. These hindcasts are made freely available for noncommercial purposes through a web portal, while advanced users can access files through wget or Open-source Project for a Network Data Access Protocol (OPeNDAP)-based scripts.
A description of the contributing centers and models is given in Table 1. As a guide to data producers, a set of 27 atmospheric and 13 oceanic variables is requested as monthly averages. Although not all centers are able to fulfill the full complement, a common set of nine variables is available as monthly means from all centers. Daily data submission is also encouraged where resources allow, and as a minimum, daily (near) surface temperature and precipitation output are presently available for seven contributing systems (Table 1, column 6). (A full, up-to-date list of available variables is maintained at http://chfps.cima.fcen.uba.ar/DS/summary2.php.)
Details of modeling systems that contribute to the CHFP database. The column “Daily T/P” indicates the availability of (at least) daily precipitation and 2-m/surface temperature. For the other systems, only monthly averaged data are presently available as of November 2017.


To facilitate comparison, CHFP uses a common grid of 2.5° × 2.5° for atmospheric fields and 1° × 1° for oceanic variables. The self-describing network Common Data Form (netCDF) format is adopted, and three types of metadata are specified in each case: dimensions, variables, and global attributes. Work is ongoing to bring the metadata and dimension conventions into compliance so that an impending migration to the Earth System Grid Federation (ESGF) can take place.
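As a rough illustration of what interpolation to the common grid involves, the sketch below block-averages a synthetic field from a hypothetical 1.25° native grid onto the 2.5° target. The actual regridding procedures used by contributing centers are not specified here, and a faithful version would weight by grid-cell area (cosine of latitude) and handle non-nested grids:

```python
import numpy as np

# Synthetic field on a hypothetical 1.25 deg native grid (144 x 288 cells).
native = np.random.default_rng(4).normal(size=(144, 288))

# 2x2 block average onto the CHFP common 2.5 deg atmospheric grid.
# NOTE: a sketch only -- proper regridding should be area weighted and
# cope with native grids that are not integer multiples of 2.5 deg.
coarse = native.reshape(72, 2, 144, 2).mean(axis=(1, 3))
```

Equal-weight block averaging conserves the global mean exactly, which is why simple subsampling is usually avoided for precipitation-like fields.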
The database is accessed by scientists around the world; since its inception it has seen a steady increase in active users, who originate from more than 30 countries. Figure 1 shows the evolution of the number of hindcast systems included in CHFP and the total download per year from 2012 to 2016; a gradual increase in the number of models is visible.

Evolution of the number of hindcast systems contained in the CHFP database (blue bars) and total download (red curve; in GB) between 2012 and 2016.
Citation: Bulletin of the American Meteorological Society 98, 11; 10.1175/BAMS-D-16-0209.1

POTENTIAL RESEARCH USE OF CHFP.
There is obviously a wide range of research questions that can be addressed with such a hindcast dataset. Many uses of seasonal prediction hinge on predicting, or determining the limits of predictability of, near-surface temperature and tropical rainfall (Rajeevan et al. 2012). Seasonal forecasting has also found recent application in the midlatitudes, with current modeling systems now showing skill at predicting the North Atlantic Oscillation (NAO) for the winter ahead (Scaife et al. 2014). Yuan et al. (2015) demonstrated the use of CHFP precipitation hindcasts in combination with those of the North American Multimodel Ensemble (NMME; Kirtman et al. 2014) for hydrological applications in both tropical and midlatitude basins.
To provide guidance on the use of the database and also to avoid duplication of effort, WGSIP also supports specific sponsored subprojects. These have included the Seasonal Prediction Model Intercomparison Project (SMIP), the first and second Global Land–Atmosphere Coupling Experiments (GLACE), the Stratosphere-Resolving Historical Forecast Project (stratHFP), and the Sea Ice Historical Forecast Project (iceHFP). In addition, recently initiated WGSIP projects are the Long-Range Forecast Transient Intercomparison Project (LRFTIP); SNOWGLACE, which is evaluating the impact of realistic snow initialization on the skill of subseasonal-to-seasonal forecasts; and WGSIP’s teleconnections initiative, which is aimed at diagnosing tropical–extratropical interactions at seasonal and subseasonal time scales. Many of these projects were designed to examine the impact of a particular component of the Earth system (e.g., the land surface, stratospheric phenomena such as the quasi-biennial oscillation, and sea ice) on prediction and predictability. WGSIP’s projects frequently draw on the CHFP database, supplemented by additional sensitivity integrations when needed. Further details of the recently initiated WGSIP projects are available in Merryfield et al. (2017), with Butler et al. (2016) and Osman et al. (2016) reporting on the research deriving from stratHFP and SMIP, respectively. Osman and Vera (2017) assessed the predictability and prediction skill of climate anomalies over South America from CHFP models and confirmed that the multimodel ensemble performed on average better than any single model.
Ideas for CHFP-based experiments can be submitted to WGSIP for consideration for support, which may lead to participating centers conducting additional experiments. The only rule for consideration is that suggested model hindcast experiments should be conducted in true forecast mode and should not incorporate any information concerning the climate or environment after the experiment initialization, such as data concerning the evolution of the sea surface temperatures (SSTs) or the occurrence of volcanic eruptions.
AN EXAMPLE ANALYSIS: ENSO.
Here we briefly present one example of the collective multimodel hindcast skill at predicting the SST patterns associated with El Niño–Southern Oscillation (ENSO) in the Pacific Ocean and the associated rainfall patterns. Skillful prediction of this mode of Pacific SST variability is important owing to its near-global impact on surface precipitation through teleconnections. In the following analysis, all ensemble members of all prediction systems are combined into one superensemble, with each integration weighted equally. Forecast systems with larger numbers of ensemble members thus contribute more to the superensemble than systems with few integrations.
The evolution of the Niño-3.4 (a central–eastern Pacific region: 5°S–5°N, 170°–120°W) SST is often used to monitor ENSO. Niño-3.4-averaged SST anomalies based on NOAA's Optimum Interpolation Sea Surface Temperature (OISST) observational analysis and the multimodel ensemble average for a core set of CHFP prediction systems at lead times of 0, 3, and 6 months are presented in Fig. 2. The close match between the observations and the forecast SST at the short lead time is not due solely to forecast quality, since the temporal variation in SST derives mostly from the changing initial conditions; a forecast based simply on persisting the initial conditions, or the initial-condition anomaly, would give a similar visual impression of a reliable forecasting system and has nearly the same skill, as quantified by the anomaly correlation values in the right panel of Fig. 2. This emphasizes the importance of using forecast skill metrics that judge a forecast by its performance relative to a baseline forecast. At lead times of 3 and 6 months, the ensemble mean is still able to reproduce the evolution of the Niño-3.4 SST anomalies, and the advantage of the CHFP forecast models over persistence is clear.
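The skill comparison against persistence can be sketched as follows; the series are synthetic stand-ins for the Niño-3.4 index rather than OISST or CHFP data, and the coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def anomaly_correlation(forecast, observed):
    """Correlation of the anomalies about the respective sample means."""
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return float(f @ o / np.sqrt((f @ f) * (o @ o)))

# Synthetic Nino-3.4-like series: index at 28 initialization dates, the
# verifying index one season later, and a model ensemble-mean forecast.
n = 28
init = rng.normal(0.0, 1.0, n)                    # index at initialization
obs = 0.8 * init + rng.normal(0.0, 0.6, n)        # verifying index
forecast = 0.75 * init + rng.normal(0.0, 0.4, n)  # illustrative model forecast

acc_model = anomaly_correlation(forecast, obs)
acc_persistence = anomaly_correlation(init, obs)  # persist the initial anomaly
```

A forecast is only skillful in a useful sense when `acc_model` exceeds `acc_persistence`, which is the comparison made in the right panel of Fig. 2.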

(left) Seasonal-mean Niño-3.4 index (area-averaged SST anomaly in 5°S–5°N, 170°–120°W), as observed (OISST analysis; black) and predicted by CHFP models (red) initialized from February, May, August, and November 1982–2009 at (a) 0-, (b) 3-, and (c) 6-month lead times. Circles indicate mean values and error bars indicate standard deviations of predictions from 95 ensemble members. (right) Comparison of CHFP anomaly correlation skill values with those based on persisting the observed Niño-3.4 value prior to the start of the forecast.

The longer-term predictability of SST associated with ENSO results in improved predictions of precipitation anomalies, both locally over the Pacific and globally through teleconnections, in both the Northern Hemisphere summer and winter (Fig. 3). The composite rainfall anomaly of La Niña minus El Niño years shows a strong correspondence at a lead time of 0 months over the Pacific Ocean basin, the Americas, and the Indian Ocean basin. There are, however, some significant spatial differences over the Indian subcontinent during the monsoon, and likewise over central and West Africa, where ENSO teleconnections are fairly weak and monsoon precipitation is more strongly influenced by the Atlantic and Gulf of Guinea SST dipole (Camberlin et al. 2001).
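The compositing itself is straightforward to sketch; below, the index and precipitation arrays are random stand-ins for the observed Niño-3.4 series and the GPCP or ensemble fields, with event years selected as in the figure (seasonal-mean index exceeding ±1):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: a seasonal-mean Nino-3.4 index for 28 years and a
# matching precipitation field (mm/day) on a 2.5 deg (lat, lon) grid.
n_years = 28
nino34 = rng.normal(0.0, 1.2, n_years)
precip = rng.normal(2.0, 0.5, (n_years, 72, 144))

la_nina = nino34 < -1.0  # cold-event years
el_nino = nino34 > 1.0   # warm-event years

# Composite difference (La Nina minus El Nino), as in Fig. 3.
composite = precip[la_nina].mean(axis=0) - precip[el_nino].mean(axis=0)
```

With real data, the resulting map would be compared grid point by grid point between observations and the 0-month-lead multimodel ensemble.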

Composite precipitation differences (La Niña minus El Niño) based on years 1982–2009 in which observed seasonal-mean Niño-3.4 index exceeds ±1, from (left) GPCP observations and (right) the multimodel ensemble at 0-month lead, for (top) JJA and (bottom) DJF.

As stated earlier, an ensemble of forecasts is used to sample the various uncertainties in the forecasts due to errors in the initial conditions or, when multiple or perturbed models are considered, in the modeling system itself. If, for a given quantity such as surface temperature, the differences between the ensemble members are smaller than the errors in the forecast, this implies that these sources of error are underestimated, and the forecast is deemed “overconfident.” Likewise, in rare cases the opposite can occur, where models have too much spread and are underconfident (Kumar et al. 2014; Eade et al. 2014). However, it is clear from Fig. 4, which compares Niño-3.4 ensemble spreads and root-mean-square errors (RMSE), that overconfidence is a common deficiency in the tropics for many of the CHFP contributing models. In some cases the ensemble mean error is more than twice the ensemble spread. The figure also clearly demonstrates the major advantage of the multimodel approach: when the various modeling systems are combined into a single superensemble consisting of all ensemble members of each system, the combination of different modeling approaches inflates the ensemble spread to match the error growth almost exactly.
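The spread-versus-error diagnostic of Fig. 4 can be illustrated with a synthetic example; the numbers below are invented to mimic an overconfident single model whose members share a structural error, and are not fitted to any CHFP system:

```python
import numpy as np

rng = np.random.default_rng(2)

def rmse_and_spread(ens, obs):
    """RMSE of the ensemble mean and time-mean ensemble standard deviation."""
    err = ens.mean(axis=0) - obs
    return np.sqrt(np.mean(err ** 2)), np.mean(ens.std(axis=0, ddof=1))

# Synthetic stand-ins (not CHFP data): each "model" shares one structural
# error across its members, which its own spread cannot sample.
n_years, n_members, n_models = 28, 10, 5
obs = rng.normal(0.0, 1.0, n_years)

models = [obs + rng.normal(0.0, 0.7, n_years)            # structural error
          + rng.normal(0.0, 0.3, (n_members, n_years))   # member noise
          for _ in range(n_models)]

rmse_1, spread_1 = rmse_and_spread(models[0], obs)       # overconfident
multi = np.concatenate(models, axis=0)                   # superensemble
rmse_m, spread_m = rmse_and_spread(multi, obs)           # spread grows
```

Because each model's structural error is shared by its members, a single model's spread misses it entirely; pooling models with different structural errors both reduces the ensemble-mean error and recovers the missing variance, which is the mechanism behind the near-perfect spread-error match of the superensemble in Fig. 4.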

RMSE (black) and ensemble standard deviation (red) for Niño-3.4 prediction by nine CHFP models and the multimodel ensemble, as a function of lead time for predictions from August 1982–2009.

CHALLENGES AND SUSTAINABILITY.
To ensure that the CHFP database is sustainable and well utilized by the research community over the longer term, a number of challenges need to be addressed. First and foremost, the forecasting systems that have contributed to the CHFP are not static but are subject to intermittent upgrades with improvements to model physics and data assimilation systems. With each new release, a new set of hindcasts is conducted so that changing bias characteristics can be accounted for. If the CHFP is to remain relevant for research, it is imperative that each new state-of-the-art system be included in the database. Indeed, by retaining all earlier model releases, the intention is that the CHFP will serve to document the systematic improvement in seasonal forecasting systems over time. A key challenge to this endeavor is that, at present, both the submission of data and the maintenance of the CHFP database are conducted on a voluntary basis without external funding sources.
A second aspect that is critical to sustainability is the use of a data format that is self-describing and standard. While many operational centers store forecast data in version 2 of the gridded binary (GRIB2) format, a strategic choice was made to archive CHFP data in the netCDF format more commonly used in the research and climate modeling communities. Climate prediction protocols require the ability to handle multiple time axes (viz., the hindcast start date associated with a particular real-time forecast and the hindcast time step). To address this issue, new protocols have been developed within the European Union–funded Seasonal-to-Decadal Climate Prediction for the Improvement of European Climate Services (SPECS) project. Once these protocols are in place, processing of large hindcast ensembles stored within a single netCDF file will be possible with standard software packages. An added advantage is that this will also permit the database to be migrated to the ESGF, which is already used to access the Earth system model output assessed by the Intergovernmental Panel on Climate Change (IPCC) process. These actions will underpin the sustainability of the database.
The final issue regards the choice of fields submitted to the database. On the one hand, the core set of variables should be kept small enough that data volumes remain manageable and the submission process does not become too onerous for contributing centers, which is particularly important for a voluntary undertaking. On the other hand, many research questions or applications-modeling undertakings require noncore model variables or data at daily frequency and must therefore make recourse to the subset of models for which such data are available. Additional noncore fields are submitted on an ad hoc, center-by-center basis, with the result that the available model set changes according to the scientific question posed. The database would be more robust for research purposes if it adhered to a more rigid protocol of a core set of variables, supplemented by additional dataset tiers for modeling centers willing to provide them. Ideally, these additional variables and their respective archiving frequencies would evolve in time in response to requests from database users; feedback from the user community is thus strongly desirable and encouraged.
OUTLOOK.
We are presently experiencing an undeniable and inexorable evolution toward open data policies in support of research institutions and their undertakings. Open access ensures the scientific potential of data is maximized for the full benefit of society, increases scientific feedback concerning the strengths and drawbacks of the data, and allows fair and equal access for the scientific community, which is of particular importance to scientists in developing countries who face difficulties in participating in international multiorganization projects. As examples of this development, data from the most recent European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) launches in the Sentinel series of satellites are made available in near–real time, emulating access rights for remote sensing products that have long been the norm in the United States. Many journals and funding agencies now insist on open data policies as a condition for publication or financial support. Political pressure associated with the climate change debate has started to lead national meteorological agencies to release additional station records to the public, beyond those already available on the Global Telecommunication System (GTS). Climate modeling undertakings assessed by the IPCC have been open access since inception and, since the Fifth Assessment Report, have included coordinated experiments on decadal prediction. Leading operational centers from around the world have for a number of years submitted their short- to medium-range forecasts in near–real time to the open-access Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE; Bougeault et al. 2010) database, an approach emulated since 2015 by the Subseasonal to Seasonal (S2S; Vitart et al. 2017) prediction database.
The CHFP database therefore represents another piece of the meteorological open-access puzzle, making a vast set of seasonal forecasts freely available to the research community, facilitating the move toward seamless prediction. As the present limitations in the database are addressed, the prospects for a growing and active user base and long-term sustainability of the undertaking are bright. We hope that the CHFP database will continue to grow and will chart the improvements in initialized seasonal climate predictions as they increase in skill. We invite all readers and users to actively communicate their experiences with the database to the WGSIP working group so that this prospect becomes a reality.
FOR FURTHER READING
Arribas, A., and Coauthors, 2011: The GloSea4 ensemble prediction system for seasonal forecasting. Mon. Wea. Rev., 139, 1891–1910, doi:10.1175/2010MWR3615.1.
Baehr, J., and Coauthors, 2015: The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model. Climate Dyn., 44, 2723–2735, doi:10.1007/s00382-014-2399-7.
Bougeault, P., and Coauthors, 2010: The THORPEX interactive grand global ensemble. Bull. Amer. Meteor. Soc., 91, 1059–1072, doi:10.1175/2010BAMS2853.1.
Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125, 2887–2908, doi:10.1002/qj.49712556006.
Butler, A. H., and Coauthors, 2016: The climate-system historical forecast project: Do stratosphere-resolving models make better seasonal climate predictions in boreal winter? Quart. J. Roy. Meteor. Soc., 142, 1413–1427, doi:10.1002/qj.2743.
Camberlin, P., S. Janicot, and I. Poccard, 2001: Seasonality and atmospheric dynamics of the teleconnection between African rainfall and tropical sea-surface temperature: Atlantic vs. ENSO. Int. J. Climatol., 21, 973–1005, doi:10.1002/joc.673.
Candille, G., and O. Talagrand, 2005: Evaluation of probabilistic prediction systems for a scalar variable. Quart. J. Roy. Meteor. Soc., 131, 2131–2150, doi:10.1256/qj.04.71.
Challinor, A. J., J. M. Slingo, T. R. Wheeler, and F. J. Doblas-Reyes, 2005: Probabilistic simulations of crop yield over western India using the DEMETER seasonal hindcast ensembles. Tellus, 57A, 498–512, doi:10.3402/tellusa.v57i3.14670.
Cottrill, A., and Coauthors, 2013: Seasonal forecasting in the Pacific using the coupled model POAMA-2. Wea. Forecasting, 28, 668–680, doi:10.1175/WAF-D-12-00072.1.
Di Giuseppe, F., F. Molteni, and E. Dutra, 2013a: Real-time correction of ERA-Interim monthly rainfall. Geophys. Res. Lett., 40, 3750–3755, doi:10.1002/grl.50670.
Di Giuseppe, F., A. M. Tompkins, and F. Molteni, 2013b: A rainfall calibration methodology for impacts modelling based on spatial mapping. Quart. J. Roy. Meteor. Soc., 139, 1389–1401, doi:10.1002/qj.2019.
Eade, R., D. Smith, A. Scaife, E. Wallace, N. Dunstone, L. Hermanson, and N. Robinson, 2014: Do seasonal-to-decadal climate predictions underestimate the predictability of the real world? Geophys. Res. Lett., 41, 5620–5628, doi:10.1002/2014GL061146.
Fereday, D. R., A. Maidens, A. Arribas, A. A. Scaife, and J. R. Knight, 2012: Seasonal forecasts of Northern Hemisphere winter 2009/10. Environ. Res. Lett., 7, 034031, doi:10.1088/1748-9326/7/3/034031.
Hagedorn, R., F. J. Doblas-Reyes, and T. N. Palmer, 2005: The rationale behind the success of multi-model ensembles in seasonal forecasting—I. Basic concept. Tellus, 57A, 219–233, doi:10.1111/j.1600-0870.2005.00103.x.
Imada, Y., H. Tatebe, M. Ishii, Y. Chikamoto, M. Mori, M. Arai, M. Watanabe, and M. Kimoto, 2015: Predictability of two types of El Niño assessed using an extended seasonal prediction system by MIROC. Mon. Wea. Rev., 143, 4597–4617, doi:10.1175/MWR-D-15-0007.1.
Jungclaus, J., and Coauthors, 2013: Characteristics of the ocean simulations in the Max Planck Institute Ocean Model (MPIOM) the ocean component of the MPI-Earth system model. J. Adv. Model. Earth Syst., 5, 422–446, doi:10.1002/jame.20023.
Kirtman, B., and A. Pirani, 2009: The state of the art of seasonal prediction: Outcomes and recommendations from the First World Climate Research Program Workshop on Seasonal Prediction. Bull. Amer. Meteor. Soc., 90, 455–458, doi:10.1175/2008BAMS2707.1.
Kirtman, B., and Coauthors, 2014: The North American multimodel ensemble: Phase-1 seasonal-to-interannual prediction; phase-2 toward developing intraseasonal prediction. Bull. Amer. Meteor. Soc., 95, 585–601, doi:10.1175/BAMS-D-12-00050.1.
Kumar, A., P. Peng, and M. Chen, 2014: Is there a relationship between potential and actual skill? Mon. Wea. Rev., 142, 2220–2227, doi:10.1175/MWR-D-13-00287.1.
MacLachlan, C., and Coauthors, 2015: Global Seasonal Forecast System version 5 (GloSea5): A high-resolution seasonal forecast system. Quart. J. Roy. Meteor. Soc., 141, 1072–1084, doi:10.1002/qj.2396.
Merryfield, W. J., and Coauthors, 2013: The Canadian seasonal to interannual prediction system. Part I: Models and initialization. Mon. Wea. Rev., 141, 2910–2945, doi:10.1175/MWR-D-12-00216.1.
Merryfield, W. J., and Coauthors, 2017: Advancing climate forecasting. Eos, Trans. Amer. Geophys. Union, in press.
Molteni, F., and Coauthors, 2011: The new ECMWF seasonal forecast system (system 4). European Centre for Medium-Range Weather Forecasts Tech. Rep. 656, 51 pp. [Available online at www.ecmwf.int/sites/default/files/elibrary/2011/11209-new-ecmwf-seasonal-forecast-system-system-4.pdf.]
Morse, A. P., F. J. Doblas-Reyes, M. B. Hoshen, R. Hagedorn, and T. N. Palmer, 2005: A forecast quality assessment of an end-to-end probabilistic multi-model seasonal forecast system using a malaria model. Tellus, 57A, 464–475, doi:10.3402/tellusa.v57i3.14668.
Osman, M., and C. S. Vera, 2017: Climate predictability and prediction skill on seasonal time scales over South America from CHFP models. Climate Dyn., 49, 2365–2383, doi:10.1007/s00382-016-3444-5.
Osman, M., C. S. Vera, and F. J. Doblas-Reyes, 2016: Predictability of the tropospheric circulation in the Southern Hemisphere from CHFP models. Climate Dyn., 46, 2423–2434, doi:10.1007/s00382-015-2710-2.
Rajeevan, M., C. Unnikrishnan, and B. Preethi, 2012: Evaluation of the ensembles multi-model seasonal forecasts of Indian summer monsoon variability. Climate Dyn., 38, 2257–2274, doi:10.1007/s00382-011-1061-x.
Saha, S., and Coauthors, 2006: The NCEP Climate Forecast System. J. Climate, 19, 3483–3517, doi:10.1175/JCLI3812.1.
Scaife, A. A., and Coauthors, 2014: Skillful long-range prediction of European and North American winters. Geophys. Res. Lett., 41, 2514–2519, doi:10.1002/2014GL059637.
Scinocca, J. F., N. A. McFarlane, M. Lazare, J. Li, and D. Plummer, 2008: Technical note: The CCCma third generation AGCM and its extension into the middle atmosphere. Atmos. Chem. Phys., 8, 7055–7074, doi:10.5194/acp-8-7055-2008.
Sigmond, M., J. F. Scinocca, and P. J. Kushner, 2008: Impact of the stratosphere on tropospheric climate change. Geophys. Res. Lett., 35, L12706, doi:10.1029/2008GL033573.
Smith, D. M., and Coauthors, 2013: Real-time multi-model decadal climate predictions. Climate Dyn., 41, 2875–2888, doi:10.1007/s00382-012-1600-0.
Stevens, B., and Coauthors, 2013: Atmospheric component of the MPI-M Earth System Model: ECHAM6. J. Adv. Model. Earth Syst., 5, 146–172, doi:10.1002/jame.20015.
Stockdale, T. N., and Coauthors, 2011: ECMWF seasonal forecast system 3 and its prediction of sea surface temperature. Climate Dyn., 37, 455–471, doi:10.1007/s00382-010-0947-3.
Takaya, Y., and Coauthors, 2017a: Japan Meteorological Agency/Meteorological Research Institute-Coupled Prediction System version 1 (JMA/MRI-CPS1) for operational seasonal forecasting. Climate Dyn., 48, 313–333, doi:10.1007/s00382-016-3076-9.
Takaya, Y., and Coauthors, 2017b: Japan Meteorological Agency/Meteorological Research Institute-Coupled Prediction System version 2 (JMA/MRI-CPS2): Atmosphere–land–ocean–sea ice coupled prediction system for operational seasonal forecasting. Climate Dyn., doi:10.1007/s00382-017-3638-5, in press.
Vitart, F., and Coauthors, 2017: The subseasonal to seasonal prediction (S2S) project database. Bull. Amer. Meteor. Soc., 98, 163–173, doi:10.1175/BAMS-D-16-0017.1.
Voldoire, A., and Coauthors, 2013: The CNRM-CM5.1 global climate model: Description and basic evaluation. Climate Dyn., 40, 2091–2121, doi:10.1007/s00382-011-1259-y.
von Salzen, K., and Coauthors, 2013: The Canadian Fourth Generation Atmospheric Global Climate Model (CanAM4). Part I: Representation of physical processes. Atmos.–Ocean, 51, 104–125, doi:10.1080/07055900.2012.755610.
Watanabe, M., and Coauthors, 2010: Improved climate simulation by MIROC5: Mean states, variability, and climate sensitivity. J. Climate, 23, 6312–6335, doi:10.1175/2010JCLI3679.1.
Weisheimer, A., and Coauthors, 2009: ENSEMBLES: A new multi-model ensemble for seasonal-to-annual prediction—skill and progress beyond DEMETER in forecasting tropical Pacific SSTs. Geophys. Res. Lett., 36, L21711, doi:10.1029/2009GL040896.
Yuan, X., J. K. Roundy, E. F. Wood, and J. Sheffield, 2015: Seasonal forecasting of global hydrologic extremes: System development and evaluation over GEWEX basins. Bull. Amer. Meteor. Soc., 96, 1895–1912, doi:10.1175/BAMS-D-14-00003.1.
Yun, W. T., L. Stefanova, A. K. Mitra, T. S. V. V. Kumar, W. Dewar, and T. N. Krishnamurti, 2005: A multi-model superensemble algorithm for seasonal climate prediction using DEMETER forecasts. Tellus, 57A, 280–289, doi:10.3402/tellusa.v57i3.14699.