## 1. Introduction

One of the most difficult areas of research is predicting whether a season will be "wet or dry" in advance. Some success has been achieved in this regard from the response of seasonal climate to El Niño or La Niña events (Meehl et al. 2006; Philander 1990; Barnett et al. 1994). Both statistical and dynamical models have had limited success with such seasonal predictions (Krishnamurti et al. 1999; Gadgil et al. 2005). The performance of statistical models depends on the quality of the long-period datasets used to train them, while dynamical models must approximate some of the physical processes of the atmosphere and ocean (e.g., precipitation from individual cumulus clouds) because those processes operate on scales far too small to be resolved explicitly in a climate model. Many current climate models focus on seasonal forecasts; these generally carry low seasonal predictability since they face a major challenge in simulating the mean rainfall distribution, that is, the climatology, over the Asian monsoon region (Kang et al. 2002; Waliser et al. 2003). In recent years several coupled models have emerged that demonstrated the ability to predict seasonal means of climate with greater accuracy, yet even these often failed to predict the seasonal mean or the annual cycle reasonably (Kirtman et al. 2002; Yang and Anderson 2000). A reason for such problems resides in the familiar growth of initial state uncertainties and the growth of errors (Lorenz 1969). To overcome this type of problem, ensemble simulations were suggested (Brankovic et al. 1990; Brankovic and Palmer 1997; Palmer et al. 2004). Several techniques were attempted to combine the ensemble forecasts of a single model. It was noted that even these methods had difficulties in simulating the mean states of climate reasonably well. Perturbing a single model still carries inherent problems that arise from model deficiencies, data assimilation, model physics, and the representation of land surface processes.
These deficiencies carry over to the model ensembles. The growth of systematic errors in general circulation models (GCMs) and coupled forecast system models (CFSs) will remain one of the central problems for producing accurate predictions of climate variability over the next 80–100 yr (Randall et al. 2007). An issue impeding progress is the growth of systematic modeling errors that relate to the particular physical parameterizations of a model. The lack of observational data and of understanding of nonlinear feedbacks among various physical parameterizations makes it difficult to assess the overall errors generated in a model (Reichler and Kim 2008; Martin et al. 2010).

The climatological mean seasonal rainfalls in all CFSs were found to be deficient over northern India, with excessive rainfall over the oceanic region around the peninsula (Drbohlav and Krishnamurthy 2010). Watanabe et al. (2007) and Kim et al. (2008) have shown how the climatological mean of the precipitation simulated by a coupled model can be brought into agreement with observational fields of precipitation.

In this paper we illustrate the performance of as many as 16 state-of-the-art coupled climate models providing 15 seasons of forecasts. Examining the Asian monsoon region, we find that the monsoon rainfall prediction skills of most of these individual models are indeed quite poor. A starting point in seasonal forecasting is the model climatology: if that departs significantly from the observed climatology, then further progress in the prediction of anomalies becomes futile. This paper shows that, given reliable rainfall observations from as many as 10 000 rain gauge sites over the monsoon land areas (Fig. 1; Yatagai et al. 2009), a downscaling of precipitation forecasts is possible. If one next constructs a multimodel superensemble (Krishnamurti et al. 2006), it is possible to obtain a seasonal rainfall climatology that carries very high skill. In this part, we describe the coupled models, the other datasets, the downscaling methodology, the synthetic superensemble (SSE) methodology, the observed climatology of monsoonal rainfall compared with that of the member models, the ensemble mean (EM), and the superensemble, and results from the statistical skill scores. Simple skill metrics for the evaluation of the climatology forecasts include the root-mean-square error (RMSE), spatial correlations, and equitable threat scores. The second part of this study (Krishnamurti and Kumar 2012, hereafter Part II) addresses the rainfall anomalies and includes the probabilistic Brier skill score.

Predictions of the averaged all-India monsoon rainfall, a season in advance, have been operationally provided by the India Meteorological Department over many years (Rajeevan et al. 2002, 2003, 2007; Thapliyal and Rajeevan 2003). This is based on a statistical multiple-regression approach whose predictors include parameters such as an El Niño parameter; sea surface temperature (SST) anomalies over the Arabian Sea and south Indian Ocean; minimum temperature over central India and the eastern coast of India; Northern Hemisphere surface air temperature; the zonal wind pattern at 20-km height over India; the Southern Oscillation index; the pressure tendency at Darwin on the El Niño–Southern Oscillation (ENSO) time scale; sea level pressure over Argentina; the pressure gradient over western Europe; the surface pressure anomaly over the Northern Hemisphere; pressure over the equatorial Indian Ocean; and Himalayan and Eurasian snow cover. This approach has had some limited success; however, it failed to predict the below normal summer monsoon rains during several recent seasons (Nanjundiah 2009). The dynamical modeling approach has utilized both atmospheric general circulation models (AGCMs), with prescribed climatological SSTs, and coupled atmosphere–ocean models (Krishnamurti et al. 2006). The AGCMs are not designed to incorporate effects of varying SST anomalies on time scales other than the annual cycle. The coupled atmosphere–ocean models seem better suited for seasonal prediction of the monsoon; however, their performance record has not been very impressive (Nanjundiah 2009; Gadgil et al. 2005). A mix of statistical and dynamical models has been suggested (J. Slingo, personal communication) for forecasts of the monsoon over India, where the results from dynamical models could be used for statistical modeling.
Essentially, this calls for a number of predictands derived from large-scale climate model forecasts; those are in turn used to statistically predict the local monsoon behavior. This implies that seasonal climate forecasts from dynamical models over the Indian region have carried rather low skills. This was also noted in the present study, where we examined monsoon forecasts from a suite of as many as 16 coupled atmosphere–ocean models. The Florida State University (FSU) modeling utilizes a suite of coupled atmosphere–ocean global models; the forecasts from each are first downscaled, following Krishnamurti et al. (2009) and Chakraborty and Krishnamurti (2009), and are then subjected to the construction of a downscaled multimodel superensemble forecast (Krishnamurti et al. 1999). This includes the training and the forecast phases at a horizontal resolution of 25 km. In effect, we capitalize on local statistical improvements that arise from both the downscaling and the superensemble methodology.

This study demonstrated that the performance of the superensemble was greatly enhanced when we used a greater number of member models together with the high-resolution rainfall datasets. Comparing each model’s results from coarse resolution to downscaled rainfall shows a 10%–25% increase in correlation coefficients (CC) while RMSE is decreased by 50%. These improvements are most obvious in the superensemble predictions.

This study differs from a previous study by Chakraborty and Krishnamurti (2009) in the following areas.

1. In the present study we make use of 16 coupled atmosphere–ocean models, as compared to 4 models in our previous work.

2. We make use of the Yatagai et al. (2009) collection of as many as 10 000 rain gauge observations that provide daily rainfall totals covering 43 yr over a wide Asian monsoon domain. The previous study was limited to India and made use of only 1803 rain gauge observations (Rajeevan et al. 2006).

3. We address the reduction of errors as the number of models is increased from 5 to 10 to 15 in the superensemble predictions.

4. We address the skills of single-model-based ensembles versus multimodel-based ensembles.

5. We explore the probabilistic Brier skill scores of all 16 member models and compare those with the scores of the multimodel ensembles and superensemble predictions.

6. We evaluate skills separately for subregions such as Japan, Korea, Taiwan, China, and India, which was not done in the previous paper.

7. We also show that, with 16 models versus 4 models, the multimodel errors are reduced for the ensemble predictions.

In some of our earlier studies with the superensemble technique (Krishnamurti et al. 2000; Mishra and Krishnamurti 2007), it was noted that four to eight member models were sufficient for the construction of a skillful multimodel superensemble forecast. Given 16 member models for the present study, it was possible to address the sensitivity of the error reductions to the number of models used; this is addressed in Part II of the study. If a single model is subjected to multiple forecasts using perturbed initial states, then one may need as many as 25–50 realizations to provide improved forecasts from an ensemble mean (Palmer 1993; Toth and Kalnay 1993). According to Leith (1974), most of the improvement in the ensemble mean is achieved with 8–10 members, whereas 30 members would be required to estimate second-order statistics. We have noted that as few as four or five better-performing member models, in terms of skill, can be used to construct a superensemble that carries higher skill than the member models (Krishnamurti et al. 2006).

## 2. Coupled models used in this study

Rainfall datasets (for 15 yr, 1987–2001) from 16 coupled models are included in this study. These datasets were acquired through personal contacts with the data producers. Table 1 provides, for the atmospheric and ocean components of each model, details of the model name, model resolution, the initial conditions for the simulations, and the number of ensemble forecasts for each model run. The ensemble mean forecasts from a single model's several runs are also included in this study. All multimodel datasets were bilinearly interpolated to a common horizontal resolution of 2.5° latitude by 2.5° longitude prior to ensemble averaging. These models carry several different physical parameterizations and ocean components, and they tend to provide a robust ensemble for the seasonal climate experiments. A list of acronyms used for the member models in this study is provided in Table 2.

Table 2 presents the acronyms for the model names and their institutional affiliations.
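The bilinear interpolation of each model's rainfall onto the common 2.5° grid can be sketched as follows. This is a minimal numpy implementation, not the paper's code; array names are illustrative, and it assumes regular, ascending latitude/longitude axes with the target points inside the source grid:

```python
import numpy as np

def bilinear_regrid(field, src_lat, src_lon, dst_lat, dst_lon):
    """Bilinearly interpolate a (lat, lon) field onto a target grid,
    e.g. a model's native grid onto the common 2.5° x 2.5° grid."""
    # Index of the source cell containing each target coordinate
    iy = np.clip(np.searchsorted(src_lat, dst_lat) - 1, 0, len(src_lat) - 2)
    ix = np.clip(np.searchsorted(src_lon, dst_lon) - 1, 0, len(src_lon) - 2)
    # Fractional position of the target point inside that cell
    wy = ((dst_lat - src_lat[iy]) / (src_lat[iy + 1] - src_lat[iy]))[:, None]
    wx = ((dst_lon - src_lon[ix]) / (src_lon[ix + 1] - src_lon[ix]))[None, :]
    # Four surrounding corner values for every target point
    f00 = field[np.ix_(iy, ix)]
    f01 = field[np.ix_(iy, ix + 1)]
    f10 = field[np.ix_(iy + 1, ix)]
    f11 = field[np.ix_(iy + 1, ix + 1)]
    # Weighted average of the four corners
    return ((1 - wy) * (1 - wx) * f00 + (1 - wy) * wx * f01
            + wy * (1 - wx) * f10 + wy * wx * f11)
```

By construction, this reproduces any field that varies linearly in latitude and longitude exactly, which is a convenient sanity check for a regridding routine.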

The present study is based on coupled model datasets acquired from the Asia-Pacific Economic Cooperation (APEC) Climate Center (APCC)/Climate Prediction and its Application to Society (CliPAS) project (Wang et al. 2009), which was developed to understand seasonal and subseasonal climate variability. The CliPAS project also includes seven coupled models from the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) project (see Palmer et al. 2004 for model details). Each model has a different forecast length and ensemble size, but all the models were integrated for 5 months (1 May–30 September) for the summer season and for 5 months (1 November–31 March) for the winter season. No flux correction is applied in these coupled models. The Bureau of Meteorology Research Centre (BMRC), National Centers for Environmental Prediction (NCEP), and Geophysical Fluid Dynamics Laboratory (GFDL) models use ocean data assimilation for initialization; the Scale Interaction Experiment (SINT) and Seoul National University (SNU) models use an SST nudging scheme, while the University of Hawaii (UH) model uses SST and thermocline-depth nudging. These coupled models did not carry land initialization schemes: the NCEP and SNU models use NCEP reanalysis data as land surface initial conditions, while the other models use climatological land surface conditions.

## 3. Observational datasets and precipitation climatology

A high-resolution (0.25° latitude/longitude) daily observed rainfall dataset for the Asian monsoon region is used, following the Asian Precipitation Highly-Resolved Observational Data Integration Toward Evaluation (APHRODITE) of water resources project (Yatagai et al. 2009). This dataset is based on a dense rain gauge observation network (Fig. 1). The temporal resolution of this dataset is daily from 1961 to 2004, while the spatial coverage is 60.125°–149.875°E, 0.125°–54.875°N. The dataset is based on nearly 10 000 rain gauge sites, and an analysis of the daily rainfall totals at 25-km resolution forms part of the APHRODITE datasets. Monthly means covering the entire period 1987–2001 were obtained for all the model forecasts and for the observed rains.

The seasonal averages of the observed rainfall are presented in Fig. 2. The monthly datasets were grouped into four seasons: March–May (MAM), June–August (JJA), September–November (SON), and December–February (DJF). These maps carry the familiar seasonal climatology of monsoon rainfall shown by many other rainfall products. In Fig. 2a, MAM rainfall is noted over the eastern Indian region, southern Sri Lanka, Taiwan, southeast China, southern Japan, and most of the landmasses of the Indonesian archipelago. Figure 2b shows the climatological JJA rainfall maxima over various parts of the monsoon Asia region (the Western Ghats and Himalayan foothills in India, coastal Myanmar, Laos, Vietnam, southern Taiwan, southern China, Korea, the Philippines, and southwest Japan). During SON (Fig. 2c) the coastal regions of India, Vietnam, China, and Japan, as well as the Indonesian archipelago, receive good rainfall. In DJF (Fig. 2d) rainfall is scanty or absent over most of the monsoon Asia region. Some of the regional features in the seasonal climatology are of interest because of the high resolution of the dataset. This observed dataset is used as the benchmark for all forecast validations, model downscaling, and the superensemble prediction. One of the interesting points of this study is to see to what extent the member models, the ensemble mean, and the downscaled superensemble are able to replicate this observed seasonal climatology. This is a necessary exercise prior to the examination of seasonal precipitation anomaly forecasts. This study emphasizes the results for the northern summer and winter seasons.

Seasonal distribution of the observed rainfall from APHRODITE (units mm day^{−1}) over the monsoon Asia region. Resolution of APHRODITE analysis is 25 km.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1
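The grouping of monthly means into seasonal climatologies described above can be sketched as follows. Shapes and names are illustrative (not from the paper), and for simplicity DJF is formed within a single calendar year rather than pairing December with the following January–February:

```python
import numpy as np

# Calendar-month indices (0 = Jan .. 11 = Dec) for the four seasons.
SEASONS = {"MAM": [2, 3, 4], "JJA": [5, 6, 7],
           "SON": [8, 9, 10], "DJF": [11, 0, 1]}

def seasonal_climatology(monthly):
    """Average monthly mean rainfall into seasonal climatology maps.

    monthly : array of shape (n_years, 12, nlat, nlon).
    Returns a dict mapping season name -> (nlat, nlon) climatological mean.
    """
    return {name: monthly[:, idx].mean(axis=(0, 1))
            for name, idx in SEASONS.items()}
```

A full treatment would shift December into the following year's DJF average, at the cost of losing one season at each end of the record.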


## 4. The downscaling methodology

The downscaling is based on a least squares linear regression between the observed and model rainfall at each grid point:

*R*_{obs} = *a*·*R*_{mdl} + *b* + *ϵ*, (1)

where *R*_{obs} and *R*_{mdl} are the observed and interpolated model forecasts of rainfall, respectively; *a* and *b* are regression coefficients known as the slope and intercept of the least squares fitting; and *ϵ* is the error term. The basic principle of this method is to minimize the absolute value of the error term (|*ϵ*|). Following the cross-validation principle, the year for which regression coefficients are to be calculated is kept aside and the remaining years are used to calculate these regression coefficients. Michaelsen (1987), Kug et al. (2008), and Sahai (2009) applied this cross-validation technique to various datasets and discussed its merits and demerits. The downscaled model rainfall is obtained using these regression coefficients:

*R*_{dscl} = *a*·*R*_{mdl} + *b*, (2)

where *R*_{dscl} is the downscaled rainfall forecast of the model; here, *a* and *b* are calculated using Eq. (1) at each grid point and separately for every month of the year. These regression coefficients carry information that varies spatially and temporally and are used to predict the regional precipitation over monsoonal Asia. This method is illustrated as a schematic in Fig. 3a. This procedure was used to construct the downscaled datasets for the seasons of 1987–2001. The downscaling at each grid point provides the model bias over all regions and is described below in section 6.
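As a sketch, the cross-validated regression at a single grid point for a given calendar month might be implemented as follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def downscale_cross_validated(r_mdl, r_obs):
    """Leave-one-year-out linear-regression downscaling at one grid point.

    r_mdl, r_obs : 1-D arrays of yearly model and observed rainfall for a
    given calendar month. For each year j, the slope a and intercept b are
    fit by least squares on all years except j, and the downscaled value
    for year j is a * r_mdl[j] + b (Eq. 2).
    """
    n = len(r_mdl)
    r_dscl = np.empty(n)
    for j in range(n):
        keep = np.arange(n) != j                         # hold out year j
        a, b = np.polyfit(r_mdl[keep], r_obs[keep], 1)   # least squares fit
        r_dscl[j] = a * r_mdl[j] + b                     # downscaled forecast
    return r_dscl
```

In the full procedure this fit is repeated independently at every 25-km grid point and for every forecast month, which is what produces the spatially and temporally varying coefficient maps discussed in section 6.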

Schematic shows the steps involved in (a) downscaling methodology and (b) synthetic superensemble forecasts. The model’s forecasts are statistically evaluated for their errors during the training phase, and these statistical weights are passed on to the models toward construction of the multimodel superensemble.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1


## 5. Synthetic superensemble technique

The weights of the superensemble are obtained during the training phase by minimizing the error term

*G* = Σ_{t=1}^{N_{train}} (*S*_{t} − *O*_{t})², (3)

where *G* is the error term that is used to obtain the weights, *N*_{train} is the length of the training period, and *S*_{t} and *O*_{t} are the superensemble and the observed values, respectively, at training time *t*. Equation (3) provides the weights *w*_{i} (*i* = 1, 2, 3, …, *N*_{mdl}, where *N*_{mdl} is the number of models) assigned to each model. These weights are used to build the superensemble forecasts during the forecast phase using Eq. (4),

*S* = *Ō* + Σ_{i=1}^{N_{mdl}} *w*_{i}(*F*_{i} − *F̄*_{i}), (4)

where *S* is the superensemble prediction, *w*_{i} are the weights for the individual models *i*, *F*_{i} and *F̄*_{i} are the *i*th model forecast and its mean over the training period, *Ō* is the observed mean over the training period, and *N*_{mdl} is the number of models. The weights calculated in the training phase vary in sign from negative to positive depending on the model's correlation with observations spatially and temporally.

In addition, the weights vary geographically, thus taking into account the regional variation and biases of each model. The weights used here make the superensemble an exceptional forecasting tool compared to the other usual ensemble tools (Stefanova and Krishnamurti 2002; Chakraborty and Krishnamurti 2006). An outline of superensemble methodology is sketched in Fig. 3b.
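A minimal sketch of the training and forecast phases at a single grid point is given below. The names are illustrative, and the weights are obtained with an ordinary least squares solve, which minimizes the squared-error term of Eq. (3):

```python
import numpy as np

def superensemble_weights(F_train, O_train):
    """Training phase: least-squares weights for the superensemble.

    F_train : (N_train, N_mdl) member-model forecasts during training.
    O_train : (N_train,) observed values during training.
    Returns the weights plus the training means needed for Eq. (4).
    """
    F_mean = F_train.mean(axis=0)              # per-model training mean
    O_mean = O_train.mean()                    # observed training mean
    A = F_train - F_mean                       # model anomalies
    y = O_train - O_mean                       # observed anomalies
    # Minimizes sum_t (S_t - O_t)^2 over the training period
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w, F_mean, O_mean

def superensemble_forecast(F_new, w, F_mean, O_mean):
    """Forecast phase, Eq. (4): S = O_mean + sum_i w_i (F_i - F_mean_i)."""
    return O_mean + (F_new - F_mean) @ w
```

Because the weights may be negative, a poorly correlated model can contribute with reversed sign, which is one way the superensemble differs from a simple ensemble mean.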

For the climate forecasts, a slightly modified superensemble method is constructed, which is called the synthetic superensemble (Yun et al. 2003; Krishnamurti et al. 2006; Chakraborty and Krishnamurti 2006). In this method the forecast and observation fields are expanded in time using principal components (PCs) and in space using empirical orthogonal functions (EOFs). We use as many as 62 components for each of these. The method involves the following steps.

(i) The observed field is first expanded in terms of its EOFs:

*O*(*x*, *t*) = Σ_{k=1}^{M} *p*_{k}(*t*) Φ_{k}(*x*), (5)

where *p*_{k}(*t*) stands for the PC time series, Φ_{k}(*x*) stands for the EOF component of the *k*th mode, which is a function of space, and *M* is the number of retained modes. Here, *M* is selected in such a way that the EOFs explain 99% of the variance in the actual dataset.

(ii) Each member model forecast is expanded in the same manner:

*F*_{i}(*x*, *t*) = Σ_{k=1}^{M} *F̃*_{i,k}(*t*) Φ_{i,k}(*x*), (6)

where *i* denotes the *i*th model, and *F̃*_{i,k}(*t*) and Φ_{i,k}(*x*) are the PC time series and the spatial EOF of the *k*th mode for that model.

(iii) During the training period, a regression estimates the *k*th mode of the observed PC time series as the linear combination of the *k*th mode of the PC time series of the member models:

*p*_{k}(*t*) = Σ_{i=1}^{N_{mdl}} *α*_{i,k} *F̃*_{i,k}(*t*) + *E*_{k}(*t*), (7)

where *i* denotes the *i*th member in the ensemble, *α*_{i,k} is the weight of the *i*th model for mode *k*, and *E*_{k}(*t*) is the error term. The weights (*α*_{i,k}) are estimated using multiple linear regression that minimizes the variance of the error term *E*_{k}(*t*).

(iv) The regressed PC time series of the *k*th mode for the *i*th model can be defined as *α*_{i,k}*F̃*_{i,k}(*t*), and the synthetic data for the *i*th member model are defined as

*F*^{syn}_{i}(*x*, *t*) = Σ_{k=1}^{M} *α*_{i,k} *F̃*_{i,k}(*t*) Φ_{k}(*x*), (8)

where Φ_{k}(*x*) are the EOF components from the observation. Once the synthetic datasets are generated for each member model, the superensemble of Eqs. (3) and (4) is constructed from these synthetic forecasts to yield the synthetic superensemble prediction.
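A compressed sketch of the EOF expansion and the mode-by-mode regression is given below, simplified to a single member model (the full method regresses the observed PCs on all member models jointly). The SVD-based decomposition and all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def eof_pc(field, n_modes):
    """EOF/PC decomposition of a (time, space) field via SVD.

    Returns pc (time, n_modes) and eof (n_modes, space) such that
    field is approximated by pc @ eof.
    """
    U, s, Vt = np.linalg.svd(field, full_matrices=False)
    pc = U[:, :n_modes] * s[:n_modes]      # PC time series p_k(t)
    eof = Vt[:n_modes]                     # spatial patterns Phi_k(x)
    return pc, eof

def synthetic_forecast(obs, fcst, n_modes):
    """One-model sketch: regress each observed PC mode on the model's
    corresponding PC mode, then project back onto the observed EOFs."""
    p_obs, eof_obs = eof_pc(obs, n_modes)
    p_mdl, _ = eof_pc(fcst, n_modes)
    p_reg = np.empty_like(p_obs)
    for k in range(n_modes):               # mode-by-mode linear regression
        a, b = np.polyfit(p_mdl[:, k], p_obs[:, k], 1)
        p_reg[:, k] = a * p_mdl[:, k] + b
    return p_reg @ eof_obs                 # synthetic data on observed EOFs
```

The key design point is that the synthetic data live on the observed EOFs, so each model's forecast variability is mapped onto observed spatial patterns before the superensemble weights are computed.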

## 6. The downscaling regression coefficients and their spatial distributions

The spatial distributions of the downscaling coefficients carry valuable information regarding the biases and errors of the model rainfall. Here, *a* (the slope) is the ratio of change in observed rainfall to change in model rainfall, and it can be positive or negative. For *a* = 1, the model rainfall varies one-to-one with the observed rainfall at a grid point, while a value of zero implies no relationship between them. If the rate of change of model rainfall is greater (less) than the observed value, then *a* is less than (greater than) 1. The downscaling coefficient *b* is the intercept; it implies a baseline error and shows the overall model bias at that grid point. Positive (negative) values of *b* denote an underestimate (overestimate) of the model rainfall. In Figs. 4 and 5 we show the geographical distributions of *a* and *b* for the month of July over 15 yr (1987–2001); these coefficients arise directly from the downscaling algorithm. The geographical distribution is very revealing of the bias errors of each member model's large-scale prediction with respect to the observed mesoscale rainfall distributions of Yatagai et al. (2009). Figure 4 shows many red values over northern India, Myanmar, and China; in these regions the slope is much greater than 1, in places as high as 6. The July rainfall is underestimated by the models there, and the slope correction during the training phase of downscaling makes a geographical correction for these bias errors for each member model. The purple and light purple shading in Fig. 4 denotes regions where the models overestimate rainfall; the predominant green shading to the north denotes smaller slope errors. The downscaling algorithm corrects for all these errors. Figure 5 illustrates the baseline (intercept) error for 15 yr of model forecasts for the month of July. Large intercept errors are noted, especially over the Indian monsoon belt and extending over southern China. Such local errors arise from land surface and orographic features and the models' nonlinear interplay with them. Overall the intercept errors are generally positive, showing that the model forecasts underestimate the observed APHRODITE rainfall. It is interesting that the geographical distributions of the slope and intercept errors are rather similar for most models. In general the wintertime values of these errors north of the equator are much smaller than the summertime values, suggesting that the summertime model errors are strongly influenced by the heavy convective rains that predominate. We calculated the slope and intercept errors separately for each month of a seasonal forecast. This is necessary to capture the growth of such errors from month 1 to months 2 and 3 of the forecasts; those errors generally amplify as the forecast length increases, and we are thus able to incorporate monthly changes in these downscaling coefficients as a function of forecast length.

Grid-by-grid values of the spatially varying downscaling coefficient, the slope *a*, for a typical July month. Positive values denote regions where the models underestimate rains and negative values denote regions where the models overestimate them. Red coloring prevails over regions of heavy monsoon rains, where the models seem to underestimate rainfall totals.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1


Grid-by-grid values of the spatially varied downscaling coefficient, the intercept *b* for a typical July month. Positive values denote regions where the models underestimate rains and the negative values denote regions where the models overestimate rainfall totals.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1


## 7. Model precipitation climatology for the coarse and the downscaled resolutions

In this section, the model climatology for the summer monsoon months June, July, and August is presented at the model forecast resolution (2.5° × 2.5°) and at the downscaled high horizontal resolution of 0.25° × 0.25° (approximately 25 km) in Figs. 6 and 7, respectively. At both resolutions the observed rainfall climatology is shown in the top-left panel. In Fig. 6 the spatial correlations and the RMSEs (provided as insets) over the monsoon domain are presented for each forecast model, the ensemble mean, and the superensemble. At the model resolution the RMS errors for the precipitation climatology range from 2.45 to 5.54 mm day^{−1} for the member models, whereas the RMS errors for the ensemble mean and the superensemble are 2.50 and 1.72, respectively. The corresponding spatial correlations for the member models range from 0.52 to 0.81, and those for the ensemble mean and the superensemble are 0.81 and 0.92, respectively. The validation of the large-scale rainfall was done using the APHRODITE rainfall averaged to the model resolution. In Fig. 7 we show the results at the downscaled resolution of 0.25° × 0.25°, where the full-resolution APHRODITE rainfall was used for the validation. Here, at the 25-km resolution, the RMS errors range from 1.46 to 2.65 for the member models, and their spatial correlations range between 0.84 and 0.92. The spatial correlations for the ensemble mean and the superensemble are 0.89 and 0.99, respectively, and the RMS errors are as low as 1.59 for the ensemble mean and 0.19 for the superensemble. These results clearly reflect a major improvement in the model rainfall climatology on the mesoscale. It is worth noting that the correlation of each member model's rainfall climatology with the mesoscale observed rains increased from roughly 0.70 to almost 0.90, a major improvement from the downscaling.
These skills increase further for the superensemble, with values as high as 0.99. An improved model climatology is a prerequisite for any climate modeling prior to addressing skills for precipitation anomalies; in that context the superensemble has a better start at seasonal climate prediction of precipitation. In Figs. 8 and 9 we illustrate the corresponding seasonal climatologies at the coarse model resolution and at the mesoscale resolution for the winter season months December, January, and February. Similar major improvements were noted from the downscaling of each member model, followed by further improvements from the ensemble mean and eventually from the construction of the multimodel superensemble for each season. The final RMS error and spatial correlation for the downscaled superensemble over the monsoon domain were 0.08 and 0.99, respectively; these are very impressive numbers for the downscaled climatology.
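The skill metrics used throughout this section, and the equitable threat score mentioned in the introduction, can be computed as in the following sketch (names and the choice of a simple domain-wide average are illustrative; the paper may weight grid points differently):

```python
import numpy as np

def rmse(fcst, obs):
    """Root-mean-square error over all grid points."""
    return float(np.sqrt(np.mean((fcst - obs) ** 2)))

def spatial_correlation(fcst, obs):
    """Pattern (spatial) correlation between forecast and observed maps."""
    f = fcst.ravel() - fcst.mean()
    o = obs.ravel() - obs.mean()
    return float(f @ o / np.sqrt((f @ f) * (o @ o)))

def equitable_threat_score(fcst, obs, thresh):
    """ETS for rainfall exceeding a threshold: 1 is perfect, 0 is the
    skill expected from random hits."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    n = f.size
    hits_random = (hits + false_alarms) * (hits + misses) / n  # chance hits
    return float((hits - hits_random)
                 / (hits + false_alarms + misses - hits_random))
```

Note that the spatial correlation is insensitive to a uniform bias, while the RMSE is not, which is why the two metrics are reported together.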

Precipitation climatology (JJA, 1987–2001) from 16 coupled models at coarse resolution (2.5° × 2.5°). (top) The name of each member model and the correlation coefficient is indicated, while (bottom right) the root-mean-square-error values are identified and carry results for the ensemble mean and the multimodel superensemble.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1


As in Fig. 6, but here precipitation is downscaled to 0.25° × 0.25° resolution.

Citation: Journal of Climate 25, 1; 10.1175/2011JCLI4125.1


## 8. Scatter diagrams of coarse resolution rainfall, downscaled rainfall, and rainfall climatology

In Figs. 10a–e scatter diagrams for the monsoon Asia domain are presented. These illustrate a scatter for the observed and predicted rainfall. Figure 10a shows the scatter for the coarse-resolution rainfall climatology of the 16 models; also shown are the scatter for the climatology of the coarse-resolution ensemble mean and the superensemble. This scatter is based on the datasets from all such grid points that simultaneously carry nonzero rains for the observed and the model forecasts. The best results are obtained for the multimodel superensemble. Many models carry correlations (between the observed and the modeled rainfall climatology, where the APHRODITE rainfalls were averaged to the coarse resolution) on the order of 0.70, whereas the superensemble elevates that value to 0.96. At the coarse resolution the largest value of rainfall (for the climatology) does not exceed values around 18 mm day^{−1}. The downscaled climatology is shown in Fig. 10b. Here the largest rainfall amounts are now around 30 mm day^{−1}. This is brought about by the downscaling of each model’s forecast with respect to the APHRODITE rainfalls using the cross-validation method. When we compare Figs. 10a and 10b the correlation of each of the models is noted to increase from the downscaling. Here we use the APHRODITE rainfalls at 25 km resolution for estimating the correlations of the model’s downscaled forecast rains. The superensmeble carries almost no scatter and all points fall along roughly the 45° slope line showing that a near-perfect climatology is attainable from the construction of the downscaled multimodel superensemble. It is due to the fact that the downscaling in each model’s forecast is regressed against the observed climatology. That elevates the correlation of each member model’s climatology to values in excess of 0.90 (Fig. 10b). 
The superensemble construction is carried out after downscaling each member model; here again the collective bias errors of the downscaled member models are reduced using the observed climatology as the reference. The same conclusion holds for the DJF season (Fig. 10c), although the scatter in Fig. 10c is wider and the correlation is therefore smaller than in Fig. 10b. Together these two steps reduce the error of the superensemble drastically. Having a near-perfect model climatology (i.e., that of the superensemble) is a necessary first step prior to evaluating the skill of the rainfall anomalies for the seasonal climate. The correlations of the climatology for some individual member models are as low as 0.70; the total rains predicted by such models carry large errors from both their seasonal climatology forecasts and their anomalies. During the Atmospheric Model Intercomparison Project (AMIP) period (January 1979–December 1988), Gadgil and Sajini (1998) noted that many models carried rather large errors in the model climatology. Models have improved considerably in recent decades, and the downscaled climatology and the superensemble add another dimension to such improvements.
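The correlation statistic behind these scatter diagrams can be sketched as follows. This is a minimal illustration, not the paper's code: observed and forecast climatologies are compared only at grid points that simultaneously carry nonzero rain in both fields, and the synthetic arrays stand in for the APHRODITE and model grids.

```python
import numpy as np

def climatology_correlation(obs, fcst):
    """Correlate observed and forecast rainfall climatology over grid
    points where both fields carry nonzero rain (as in Fig. 10)."""
    obs, fcst = obs.ravel(), fcst.ravel()
    wet = (obs > 0) & (fcst > 0)   # keep simultaneously rainy points only
    return float(np.corrcoef(obs[wet], fcst[wet])[0, 1])

# Illustrative fields: a forecast that is a biased, noisy copy of "truth".
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, size=(40, 40))            # mm/day "observed"
fcst = 0.7 * obs + rng.normal(0.0, 1.0, obs.shape)  # underestimating model
print(round(climatology_correlation(obs, fcst), 2))
```

A model that systematically underestimates rain, as here, can still carry a high correlation; the scatter diagrams therefore show both the correlation and the departure of the point cloud from the 45° line.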

(a) Relation between forecasted and observed precipitation (includes results from all grid points), units mm day^{−1}, over the monsoon Asia region, during JJA, for the precipitation climatology from 16 coupled models at the coarse resolution 2.5° × 2.5°. The last row of the figure includes the results for the ensemble mean and the superensemble. These results are from model output at 2.5° latitude/longitude, prior to the downscaling. The dots only include the scatter of 15-yr-averaged climatological rains for each model. (b) As in (a), but for the downscaled precipitation at the higher resolution of 0.25° × 0.25°. (c) As in (b), but for DJF. (d) Relation between forecasted and observed precipitation (covering all grid points) over monsoon Asia during JJA of 1987–2001 from 16 coupled models, ensemble mean, and the superensemble, before downscaling at 2.5° × 2.5° resolution. The scatter here includes the rains for all 15 seasons of model forecasts; hence, there are more points here compared to (a) and (b). (e) As in (d), but for downscaled precipitation at the higher resolution of 0.25° × 0.25°.


If the data for the individual years are all included in a scatter diagram of seasonal rainfall forecasts (averaged for each season), without averaging over the 15 yr, then the scatter includes many more points; these are shown in Fig. 10d for the coarse model resolution. This carries 15 times more scatter points than the previous illustration of the 15-yr-averaged climatology. Here we also include the results for the coarse-resolution ensemble mean and the superensemble. Since many years of coarse-resolution forecast datasets appear in the same diagram, the scatter shows a poorer overall skill, the spread being larger. The correlations of the seasonal rains, observed versus model, at the coarse resolution range from 0.20 to 0.71. The coarse-resolution superensemble elevates that value to 0.74, slightly better than the best model; the ensemble mean carries a value of 0.66.

In Fig. 10e we show the downscaled precipitation forecasts and observations, where all the seasonally averaged datasets covering the 15 yr are plotted separately. The downscaled rainfall carries much higher rainfall rates, almost reaching 30 mm day^{−1}; the coarse-resolution rainfall totals, prior to downscaling, were only around 18 mm day^{−1}. Thus the picture of the ensemble mean and the superensemble is now quite different for this downscaled product.

## 9. Improvement of all models by downscaling

A comparison of equitable threat scores (ETSs, a current standard metric for precipitation skill evaluation) for the seasonal forecasts between 1987 and 2001 is examined. The member models’ forecasts at coarse resolution carry smaller ETSs than the downscaled member models (Figs. 11a and 11b) for the JJA and DJF seasons. The validation of the large-scale model forecasts invokes the APHRODITE rains averaged to the coarse resolution of 2.5° latitude by 2.5° longitude, whereas the downscaled models invoke the 0.25° latitude/longitude resolution of the APHRODITE rains. The threat score for heavy rains, in excess of 12 mm day^{−1}, shows a systematic improvement for the downscaled models; that improvement is in fact clear at all rainfall thresholds. These downscaling-based improvements are a starting point prior to the construction of the superensemble, which provides a further improvement in the predicted seasonal rainfall. Figure 11b shows the ETS for the DJF season, where the same kind of improvement is achieved as in the JJA season.
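The ETS used above can be written down explicitly. A sketch with synthetic rain values (not the paper's data): hits above the number expected by chance are normalized by all forecast and observed events, so a perfect forecast scores 1 and a random one scores near 0.

```python
import numpy as np

def equitable_threat_score(obs, fcst, threshold):
    """ETS for rain at or above a threshold (mm/day)."""
    o = obs >= threshold
    f = fcst >= threshold
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    total = o.size
    # Hits expected from a random forecast with the same event counts.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    denom = hits + misses + false_alarms - hits_random
    return float((hits - hits_random) / denom) if denom != 0 else 0.0

obs = np.array([0.0, 5.0, 14.0, 20.0, 2.0, 13.0])   # synthetic mm/day
fcst = np.array([1.0, 6.0, 15.0, 4.0, 0.5, 18.0])
print(equitable_threat_score(obs, fcst, 12.0))       # heavy-rain threshold
```

In the verification above this score is computed over all grid points of the monsoon Asia domain, once per rainfall threshold, for the coarse and downscaled forecasts separately.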

(a) ETS of precipitation over monsoon Asia for JJA averaged during the years 1987–2001 before and after downscaling. Abscissa shows threshold values (mm day^{−1}) and ordinate the ETS. Lines with filled squares denote downscaled precipitation, while filled diamonds indicate coarse-resolution precipitation. Every model’s ETS improves with the downscaling. (b) As in (a), but for the DJF season.


The RMS errors of the total rainfall for the individual summer seasons of each year, covering 1987 to 2001, for each member model’s downscaled forecast, including results for the ensemble mean and the superensemble, are shown in Fig. 12. Vertical bars denote the RMS errors of precipitation: the rightmost bar for each year carries the result for the multimodel superensemble, and the blank bar to its left shows the ensemble mean. One message conveyed by this illustration is that the lowest RMS errors are generally carried by the multimodel superensemble. Some models carry an RMS value as high as 4 mm day^{−1}, whereas the value for the superensemble is around 2.1 mm day^{−1}.
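The metric plotted as the bars of Fig. 12 is the ordinary root-mean-square difference between forecast and observed seasonal-mean rainfall over all grid points. A minimal sketch with made-up values:

```python
import numpy as np

def rms_error(fcst, obs):
    """RMS error of seasonal-mean rainfall (mm/day) over all grid points."""
    diff = np.asarray(fcst) - np.asarray(obs)
    return float(np.sqrt(np.mean(diff ** 2)))

obs = [4.0, 8.0, 12.0, 6.0]    # synthetic observed seasonal means, mm/day
fcst = [5.0, 7.0, 14.0, 4.0]   # differences: 1, -1, 2, -2
print(rms_error(fcst, obs))    # sqrt((1 + 1 + 4 + 4)/4) ≈ 1.58
```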

The RMS error (along ordinate) of the seasonal forecasts of downscaled precipitation over the monsoon Asia region (69.125°–149.875°E, 0.125°–54.875°N) during JJA of 1987–2001 (years along abscissa) for 16 coupled models, and for the ensemble mean (EM) and the superensemble (SSE). Each year carries 16 vertical bars for the member models plus the bars for the ensemble mean and the superensemble.


## 10. Spinup during the downscaling and during the training phase of the superensemble

How the downscaling coefficients (‘*a*’ and ‘*b*’) vary with time is shown in Fig. 13a at the location 75.5°E, 13.5°N. This shows the spinup, following Chakraborty and Krishnamurti (2009), of the slope ‘*a*’ and the intercept ‘*b*’ for the July and December forecasts, respectively, for the FSU coupled model with Arakawa–Schubert convection and New Radiation (ANR). These are calculated separately for each month of forecasts. These results pertain to month one of the forecasts at a single grid point whose location is identified in the figure caption. A similar spinup is noted at each grid point for forecast lengths of 2 and 3 months. Basically we note here that roughly 7 yr of forecasts are needed during the training phase of the downscaling to stabilize the values of the slope and intercept. In our study, given the cross-validation method, it was possible to use 15 yr of datasets to overcome this spinup issue in a uniform manner, which assured the stabilization of these coefficients.
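The spinup can be illustrated with a short sketch, using synthetic series rather than the ANR forecasts: at one grid point the observed rain is regressed on the model rain, O ≈ *a*F + *b*, over a growing training window, and the slope and intercept settle down as years are added. The "true" values *a* = 1.4 and *b* = 2.0 below are arbitrary.

```python
import numpy as np

# Synthetic 15-yr series at one grid point: the model rain and an
# "observed" rain that is a linearly rescaled, noisy version of it.
rng = np.random.default_rng(1)
years = 15
model_rain = rng.gamma(2.0, 4.0, years)                        # mm/day
obs_rain = 1.4 * model_rain + 2.0 + rng.normal(0, 1.5, years)

# Refit the downscaling regression on progressively longer windows.
for n in range(3, years + 1):
    a, b = np.polyfit(model_rain[:n], obs_rain[:n], 1)
    print(f"{n:2d} yr: a = {a:5.2f}, b = {b:5.2f}")
```

With only a few training years the coefficients wander; as the window grows past roughly 7–10 yr they converge toward the underlying values, mirroring the behavior of Fig. 13a.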

Dependence of the spinup of the slope, intercept, and weights on the number of training years, calculated at 75.5°E, 13.5°N. (a) Stabilization of the downscaling coefficients *a* and *b* with the number of training years, from FSU coupled model forecasts (Chakraborty and Krishnamurti 2009). (b) Weights assigned to the member models as a function of training length for the calculation of the superensemble. The values of the coefficients become stable when more than 10 yr of data are used during the training phase for the downscaling and for the construction of the multimodel superensemble.


The spinup of the superensemble weights for five of the coupled models is illustrated in Fig. 13b. Roughly 10 yr of forecasts during the training phase of the superensemble were needed to stabilize these weights. The weights had to be calculated separately for each forecast length. These results pertain to a single grid point; weights vary from one grid point to the next, but a very similar spinup behavior was noted at all grid points. These illustrations show the spinup behavior of precipitation for one of the seasonal forecasts valid for the month of July. A similar spinup was noted uniformly for months 2 and 3 of the forecasts.
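The weights in question come from a training-phase regression. A minimal sketch, with synthetic anomalies rather than the actual forecasts: at one grid point the member-model anomalies are regressed against the observed anomaly over the training years, giving one least-squares weight per model in the spirit of Krishnamurti et al. (1999).

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 15
truth = rng.normal(0.0, 2.0, n_years)                       # observed anomaly
# Five member models with differing skill: scaled truth plus noise.
models = np.stack([c * truth + rng.normal(0.0, 1.0, n_years)
                   for c in (0.9, 0.6, 0.4, 1.1, 0.2)])

# Least-squares weights: minimize |truth - w^T models|^2 over training years.
weights, *_ = np.linalg.lstsq(models.T, truth, rcond=None)
superensemble = weights @ models                            # combined anomaly
print(np.round(weights, 2))
```

In the full scheme such regressions are repeated at every grid point and for every variable and forecast length, and the weights stabilize only once the training record is long enough, which is the spinup shown in Fig. 13b.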

## 11. Conclusions

This study makes use of a large suite of 16 coupled atmosphere–ocean models. The combination of high-resolution precipitation datasets, the downscaling, and the multimodel superensemble makes it possible to replicate the observed seasonal climatology of precipitation, to a high degree of accuracy, from the multimodel ensembles. The downscaling works on one model at a time, whereas the multimodel superensemble works laterally across all models. This procedure first reduces the bias of each model’s forecast for each month, and the superensemble then removes the collective biases by assigning geographically distributed weights, thus identifying regions of superior performance for each member model of our multimodel suite. The conventional ensemble mean and the bias-corrected ensemble mean utilize a single uniform weight everywhere, namely 1/*N*, where *N* is the total number of models. The multimodel superensemble utilizes as many as 10^{7} weights (roughly the product of the number of two-dimensional grid points, the number of vertical levels, the number of variables, and the number of models). A feature of the downscaling is that each model’s forecast rain is regressed toward the observed climatology for each month of the forecasts. The forecasts provided by each model prior to downscaling carry correlation skills (model versus observed climatology) in the range of 0.52 to 0.81. The downscaling is done with respect to the much heavier observed rainfall climatology at the higher resolution of 25 km; models at the coarse operational resolutions underestimate the rainfall compared to the high-resolution observed estimates. The downscaling improves the rainfall forecast skill of each member model: the correlation skills of the suite of forecast models rise to the range 0.85 to 0.92, higher than the skills shown at the coarser resolutions.
The superensemble’s correlation improves from 0.92 to 0.99 after the downscaling. Much further improvement comes from the construction of the superensemble using the cross-validation principle, with almost no scatter between the observed and the model climatology; the correlation reaches a value of nearly 1.0.

Overall, the contributions of the downscaled multimodel superensemble to the seasonal monsoon rainfall climatology are impressive: some 12 years ago, when the first AMIP results were reported by Gadgil and Sajini (1998), large errors were noted in the placement of the monsoon rainfall belt. Many models placed the rains too far south, near 10°N, for the summer season, whereas the observed rainfall maximum is located closer to the monsoon trough near 20°N. This feature was attributed to the use of smoothed mountains in several of the AMIP models. The much higher accuracy obtained from the construction of the multimodel superensemble is a good starting point for addressing rainfall anomalies, which are addressed in Part II of this study. During the AMIP era many models defined a precipitation anomaly as a departure from the model climatology, necessitated by the large errors in the climatological location of their monsoon rainfall belt. In our study it is now possible to define model rainfall anomalies using the observed climatology as the reference.

## Acknowledgments

This work was supported by NSF Award AGS-0419618 and NASA Awards NNX10AG86G and NNX07AD39G. We wish to acknowledge the APCC/CliPAS for providing coupled model datasets.

## REFERENCES

Barnett, T. P., and Coauthors, 1994: Forecasting global ENSO-related climate anomalies. *Tellus*, **46A**, 381–397.

Brankovic, C., and T. N. Palmer, 1997: Atmospheric seasonal predictability and estimates of ensemble size. *Mon. Wea. Rev.*, **125**, 859–874.

Brankovic, C., T. N. Palmer, F. Molenti, S. Tibaldi, and U. Cubasch, 1990: Extended range predictions with ECMWF models: Time lagged ensemble forecasting. *Quart. J. Roy. Meteor. Soc.*, **116**, 867–912.

Chakraborty, A., and T. N. Krishnamurti, 2006: Improved seasonal climate forecasts of the South Asian summer monsoon using a suite of 13 coupled ocean–atmosphere models. *Mon. Wea. Rev.*, **134**, 1697–1721.

Chakraborty, A., and T. N. Krishnamurti, 2009: Improving global model precipitation forecasts over India using downscaling and the FSU superensemble. Part II: Seasonal climate. *Mon. Wea. Rev.*, **137**, 2736–2757.

Cocke, S., and T. E. LaRow, 2000: Seasonal predictions using a coupled ocean–atmospheric regional spectral model. *Mon. Wea. Rev.*, **128**, 689–708.

Delworth, T. L., and Coauthors, 2006: GFDL’s CM2 global coupled climate models—Part I: Formulation and simulation characteristics. *J. Climate*, **19**, 643–674.

Drbohlav, L. H.-K., and V. Krishnamurthy, 2010: Spatial structure, forecast errors, and predictability of the south Asian monsoon in CFS monthly retrospective forecasts. *J. Climate*, **23**, 4750–4769.

Fu, X., and B. Wang, 2004: The boreal summer intraseasonal oscillations simulated in a hybrid coupled atmosphere–ocean model. *Mon. Wea. Rev.*, **132**, 2628–2649.

Gadgil, S., and S. Sajini, 1998: Monsoon precipitation in the AMIP runs. *Climate Dyn.*, **14**, 659–689.

Gadgil, S., M. Rajeevan, and R. Nanjundiah, 2005: Monsoon prediction – Why yet another failure? *Curr. Sci.*, **88**, 1389–1400.

Kang, I.-S., and Coauthors, 2002: Intercomparison of GCM simulated anomalies associated with the 1997/98 El Niño. *J. Climate*, **15**, 2791–2805.

Kim, H.-J., B. Wang, and Q. Ding, 2008: The global monsoon variability simulated by CMIP3 coupled climate models. *J. Climate*, **21**, 5271–5294.

Kirtman, B. P., Y. Fan, and E. K. Schneider, 2002: The COLA global coupled and anomaly coupled ocean–atmosphere GCM. *J. Climate*, **15**, 2301–2320.

Krishnamurti, T. N., and V. Kumar, 2012: Improved seasonal precipitation forecasts for the Asian monsoon using 16 atmosphere–ocean coupled models. Part II: Anomaly. *J. Climate*, **25**, 65–88.

Krishnamurti, T. N., C. M. Kishtawal, T. E. LaRow, D. R. Bachiochi, Z. Zhang, C. E. Williford, S. Gadgil, and S. Surendran, 1999: Improved weather and seasonal climate forecasts from multimodel superensemble. *Science*, **285**, 1548–1550.

Krishnamurti, T. N., C. M. Kishtawal, W. D. Shin, and C. E. Williford, 2000: Improving tropical precipitation forecasts from a multianalysis superensemble. *J. Climate*, **13**, 4217–4227.

Krishnamurti, T. N., A. Chakraborty, R. Krishnamurti, W. K. Dewar, and C. A. Clayson, 2006: Seasonal prediction of sea surface temperature anomalies using a suite of 13 coupled atmosphere–ocean models. *J. Climate*, **19**, 6069–6088.

Krishnamurti, T. N., A. K. Mishra, A. Chakraborty, and M. Rajeevan, 2009: Improving global model precipitation forecasts over India using downscaling and the FSU superensemble. Part I: 1–5-day forecasts. *Mon. Wea. Rev.*, **137**, 2713–2735.

Kug, J.-S., I.-S. Kang, and D.-H. Choi, 2007: Seasonal climate predictability with tier-one and tier-two prediction system. *Climate Dyn.*, **31**, 403–416, doi:10.1007/s00382-007-0264-7.

Kug, J.-S., J.-Y. Lee, I.-S. Kang, B. Wang, and C. K. Park, 2008: Optimal multimodel ensemble method in seasonal climate prediction. *Asia-Pac. J. Atmos. Sci.*, **44**, 233–247.

Leith, C. E., 1974: Theoretical skill of Monte Carlo forecasts. *Mon. Wea. Rev.*, **102**, 409–418.

Lorenz, E. N., 1969: Atmospheric predictability as revealed by naturally occurring analogues. *J. Atmos. Sci.*, **26**, 636–646.

Luo, J.-J., S. Masson, S. K. Behera, S. Shingu, and T. Yamagata, 2005: Seasonal climate predictability in a coupled OAGCM using a different approach for ensemble forecast. *J. Climate*, **18**, 4474–4497.

Martin, G. M., S. F. Milton, C. A. Senior, M. E. Brooks, S. Ineson, T. Reichler, and J. Kim, 2010: Analysis and reduction of systematic errors through a seamless approach to modeling weather and climate. *J. Climate*, **23**, 5933–5957.

Meehl, G. A., H. Teng, and G. Branstator, 2006: Future changes of El Niño in two global coupled climate models. *Climate Dyn.*, **26**, 549.

Michaelsen, J., 1987: Cross-validation in statistical climate forecast models. *J. Climate Appl. Meteor.*, **26**, 1589–1600.

Mishra, A. K., and T. N. Krishnamurti, 2007: Current status of multimodel superensemble and operational NWP forecast of the Indian summer monsoon. *J. Earth Syst. Sci.*, **116**, 1–16.

Nanjundiah, R., 2009: A quick-look assessment of forecasts for the Indian summer monsoon rainfall in 2009. Indian Institute of Science Rep. 2009 As 02, 33 pp.

Palmer, T. N., 1993: Extended-range atmospheric prediction and the Lorenz model. *Bull. Amer. Meteor. Soc.*, **74**, 49–65.

Palmer, T. N., and Coauthors, 2004: Development of a European Multi-Model Ensemble System for Seasonal-to-Interannual Prediction (DEMETER). *Bull. Amer. Meteor. Soc.*, **85**, 853–872.

Philander, S. G. H., 1990: *El Niño, La Niña and the Southern Oscillation*. Academic Press, 289 pp.

Rajeevan, M., D. S. Pai, and V. Thapliyal, 2002: Predictive relationships between Indian Ocean sea surface temperatures and Indian summer monsoon rainfall. *Mausam (New Delhi)*, **53**, 337–348.

Rajeevan, M., D. S. Pai, S. K. Dikshit, and R. R. Kelkar, 2003: IMD’s new operational models for long range forecast of southwest monsoon rainfall over India and their verification for 2003. *Curr. Sci.*, **86**, 422–431.

Rajeevan, M., J. Bhate, J. Kale, and B. Lal, 2006: High resolution daily gridded rainfall data for the Indian region: Analysis of break and active monsoon spells. *Curr. Sci.*, **91**, 296–306.

Rajeevan, M., D. S. Pai, A. R. Kumar, and B. Lal, 2007: New statistical models for long range forecasting of southwest monsoon rainfall over India. *Climate Dyn.*, **28**, 813–828.

Randall, D. A., and Coauthors, 2007: Climate models and their evaluation. *Climate Change 2007: The Physical Science Basis*, S. Solomon et al., Eds., Cambridge University Press, 589–662.

Reichler, T., and J. Kim, 2008: How well do coupled models simulate today’s climate? *Bull. Amer. Meteor. Soc.*, **89**, 303–311.

Saha, S., and Coauthors, 2006: The NCEP Climate Forecast System. *J. Climate*, **19**, 3483–3517.

Sahai, A. K., 2009: Challenges in real time seasonal prediction: A plea for enhanced scientific rigor. *APCC Newsletter*, Vol. 4, APCC, Haeundae-gu Busan, South Korea, 3–6.

Stefanova, L., and T. N. Krishnamurti, 2002: Interpretation of seasonal climate forecast using Brier skill score, The Florida State University superensemble, and the AMIP-I dataset. *J. Climate*, **15**, 537–544.

Thapliyal, V., and M. Rajeevan, 2003: Updated operational models for long range forecasts of Indian summer monsoon rainfall. *Mausam (New Delhi)*, **54**, 495–504.

Toth, Z., and E. Kalnay, 1993: Ensemble forecasting at NMC: The generation of perturbations. *Bull. Amer. Meteor. Soc.*, **74**, 2317–2330.

Waliser, D. E., and Coauthors, 2003: AGCM simulations of intraseasonal variability associated with the Asian summer monsoon. *Quart. J. Roy. Meteor. Soc.*, **129**, 2897–2925.

Wang, B., and Coauthors, 2009: Advance and prospect of seasonal prediction: Assessment of the APCC/CliPAS 14-model ensemble retrospective seasonal prediction (1980–2004). *Climate Dyn.*, **33**, 93–117.

Watanabe, M., and Coauthors, 2007: Improved climate simulation by MIROC5: Mean states, variability, and climate sensitivity. *J. Climate*, **23**, 6312–6335.

Yang, X.-Q., and J. Anderson, 2000: Correction of systematic errors in coupled GCM forecasts. *J. Climate*, **13**, 2072–2085.

Yatagai, A., O. Arakawa, K. Kamiguchi, H. Kawamoto, M. I. Nodzu, and A. Hamada, 2009: A 44-year daily gridded precipitation dataset for Asia based on a dense network of rain gauges. *SOLA*, **5**, 137–140.

Yun, W.-T., L. Stefanova, and T. N. Krishnamurti, 2003: Improvement of the multimodel superensemble technique for seasonal forecasts. *J. Climate*, **16**, 3834–3840.

Zhong, A., H. H. Hendon, and O. Alves, 2005: Indian Ocean variability and its association with ENSO in a global coupled model. *J. Climate*, **18**, 3634–3649.