Improved Seasonal Precipitation Forecasts for the Asian Monsoon Using 16 Atmosphere–Ocean Coupled Models. Part II: Anomaly

T. N. Krishnamurti, Department of Earth, Ocean and Atmospheric Science, The Florida State University, Tallahassee, Florida

Vinay Kumar, Department of Earth, Ocean and Atmospheric Science, The Florida State University, Tallahassee, Florida

Abstract

This is the second part of a paper on improved seasonal precipitation forecasts for the Asian monsoon using 16 atmosphere–ocean coupled models. The study utilizes a large suite of coupled atmosphere–ocean models; this second part largely addresses the skill of rainfall anomaly forecasts. Both deterministic and probabilistic skill measures are used, including RMS errors, anomaly correlations, equitable threat scores, and the Brier skill score. Since the skill of the rainfall climatology could be raised to very high levels with a downscaled multimodel superensemble, it is of interest to ask how far this methodology can go toward improving the skill of seasonal rainfall anomaly forecasts. These skills can be improved through a sequence of multimodel postprocessing steps: using a dense rain gauge network over Asia, downscaling the forecasts of each member model, and constructing a multimodel superensemble that benefits from the persistence of the member models' errors. This paper also addresses the spinup of the downscaling and superensemble statistics, that is, the number of years of model data needed for the training phase of the downscaling and of the superensemble construction. In the context of cross validation, the training phase includes 14 seasons of monsoon data. The forecast phase is a single season, the one withheld from the training phase each time.

The relationship between data length and the number of models needed for enhanced skill is another issue that is addressed. Seasonal climate forecasts are evaluated over the larger monsoon Asia domain and over regional belts. The superensemble forecasts invariably have the highest skill compared to the member models, both domain wide and regionally. This is largely due to the presence of large systematic errors in models that carry low seasonal prediction skill: such models carry persistent signatures of systematic errors, and those errors are recognized by the multimodel superensemble. The probabilistic skills show that the superensemble-based forecasts carry a much higher reliability score than the member models; in other words, they are the most reliable of the entire suite. The performance of the member models and of the superensemble is also examined for heavy versus deficient monsoon rainfall seasons. One conclusion of this study is that, given the uncertainties in current modeling of seasonal rainfall, postprocessing of multimodel forecasts with the superensemble methodology provides the most promising results for rainfall anomaly forecasts. These results are confirmed by an additional skill metric in which the RMS errors and correlations are evaluated using normalized precipitation anomalies for the forecasts and the observed estimates.

Corresponding author address: T. N. Krishnamurti, Dept. of Earth, Ocean and Atmospheric Science, The Florida State University, Tallahassee, FL 32310. E-mail: tkrishnamurti@fsu.edu


1. Introduction

In Part I of this paper (Kumar and Krishnamurti 2012, hereafter Part I), we addressed the prediction of monsoon rainfall climatology using a suite of 16 atmosphere–ocean global coupled models. Part II addresses skills of forecasts of seasonal precipitation anomalies over Asia. This paper differs from a similar research effort that was carried out previously using four atmosphere–ocean coupled models (Chakraborty and Krishnamurti 2009). The present study includes as many as 16 models.

A priority for most climate prediction centers has been to predict seasonal rainfall extremes in advance. Hastenrath (1987) linked the interannual variability of monsoon rainfall to preseason departures of the combined atmosphere–ocean–land system and noted that antecedent anomalies in the large-scale circulation explain about half of the interannual variance in rainfall anomalies. A canonical ensemble correlation prediction model was used to improve seasonal prediction and achieved a correlation coefficient of 0.4 over 29 yr (Shen et al. 2001). Krishnamurti et al. (2002) showed skill in rainfall anomaly prediction for the North American and Asian monsoon regions using coupled model forecasts and Atmospheric Model Intercomparison Project (AMIP) hindcast datasets, evaluated with statistical skill scores. Coelho et al. (2006) addressed the seasonal predictability of South American rainfall using the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) datasets; they found that coupled models have skill in their deterministic forecasts of rainfall at 1-month lead time. Wang et al. (2007) used a spatial filter to improve seasonal rainfall anomaly forecasts for the Asian monsoon region; their method enhanced the correlation coefficient by 7%–40% and reduced the RMSE by 2%–10%. Wang and Fan (2009) developed a new scheme to improve the seasonal prediction of summer precipitation for the East Asian region and found the largest improvement in the anomaly pattern correlation coefficient, from 0.12 (using the old scheme, based on direct model output of the precipitation anomalies) to 0.22 (using the new scheme, based on both model predictions and observed spatial patterns of historical “analog years”). Yet the issue of forecasting the rainfall anomalies a season in advance remains a challenge.

The sensitivity of the forecasts to the number of member models and to the data length is also covered in this study. Our use of multimodel forecasts relies on the cross-validation method for forecast data sampling (Krishnamurti et al. 2006). This is necessitated by the short forecast data records presently available from the multimodels. We make forecasts covering the monsoon seasons (winter and summer) for the years 1987–2001. The training phase includes all seasons other than the one being forecast; that is, the forecast of a given season always excludes that season from the training phase. During the training phase, each monthly and seasonal forecast from each model is downscaled. This is followed by the construction of the multimodel superensemble, whose weights are geographically distributed, with a separate set at each grid location. Many of the details on resolution, datasets, downscaling, and superensemble methodology are provided in Part I of this study. Because of the very high skill of the superensemble-based rainfall climatology, we define rainfall anomalies with respect to the observed rainfall climatology. Some past model studies (Gadgil and Sajani 1998) defined observed rainfall anomalies with respect to the observed rainfall climatology and the model-based anomalies with respect to the model-based climatology; that distinction does not appear to be necessary in this study. This study also addresses the minimum number of years of forecasts needed to equilibrate the spinup of the downscaling and of the multimodel superensemble during the training and forecast phases.
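The cross-validated construction described above can be summarized in a short sketch. The snippet below is only an illustration under simplifying assumptions: the array names are hypothetical, the downscaled member-model and observed seasonal anomalies are assumed to be available on a common grid, and a plain least-squares fit at each grid point stands in for the FSU superensemble regression detailed in Part I; it is not the code used in this study.

```python
import numpy as np

def leave_one_out_superensemble(models, obs):
    """Cross-validated multimodel superensemble sketch.

    models : array (n_years, n_models, n_points), downscaled model anomalies
    obs    : array (n_years, n_points), observed (rain gauge based) anomalies
    Returns an array (n_years, n_points) of combined anomaly forecasts in which
    each season is predicted with weights trained on all the other seasons.
    """
    n_years, n_models, n_points = models.shape
    forecast = np.empty((n_years, n_points))
    for y in range(n_years):
        train = [k for k in range(n_years) if k != y]    # withhold target season
        for p in range(n_points):
            A = models[train, :, p]                      # (n_train, n_models)
            b = obs[train, p]
            # geographically distributed weights: one regression per grid point
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            forecast[y, p] = models[y, :, p] @ w         # weighted combination
    return forecast
```

In such a fit, a model with a persistent systematic error receives a stable (possibly negative or fractional) weight, which is the property the superensemble exploits.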

The present study addresses geographical distributions and skills of forecasts for seasonal rainfall anomalies. These are first presented in terms of scatter diagrams. Both deterministic and probabilistic skill measures, including the RMS errors, the spatial correlations, the equitable threat scores, and the Brier skill scores (BSS), are used for forecast evaluations. These skills are evaluated over a large monsoon domain and over subregional belts of the monsoon (India, China, Taiwan, South Korea, and Japan; Fig. 1). Forecasts of extremes are compared, contrasting the dry monsoon year 1987 with the wet monsoon year 1988. The member models fare rather poorly in predicting such contrasting monsoon regimes. Several of those poor forecasts still carry persistent systematic errors, which are capitalized on by the multimodel superensemble. The dry and wet spells carry opposite signs for the anomalies; the superensemble performs a collective bias removal that improves on such extremes beyond what is presently possible from the individual member models. These topics are expounded upon in this study.

Fig. 1. Subdomains of study over different parts of the monsoon Asia.

2. Forecasts over monsoon Asia domain

a. Scatter plot of the rainfall anomaly over monsoon Asia

Unlike the scatterplot of the climatological rains shown earlier in Part I of this study, the behavior of the rainfall anomalies is quite different. We take all forecast grid points and show, in Fig. 2, a scatterplot of the observed versus the predicted rainfall anomalies for the entire summer season. The correlations between the observed and the predicted season-long rainfall anomalies for the 16 member models range from −0.20 to 0.36. The superensemble is able to elevate the correlation to 0.49. Very similar results were obtained for the winter seasons as well (not shown here). The combination of near-perfect skill for the climatology and these higher values for the rainfall anomalies from the multimodel superensemble makes it a valuable seasonal prediction product at this stage. The implications of these results are significant. In an operational forecast environment, one does not know at the outset which single model will carry the best forecast for the coming season; since model forecast skill tends to vary considerably from one forecast to the next, a superensemble forecast of the rainfall anomaly provides some assurance of having the best available product.
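For reference, the two deterministic measures quoted throughout this section, the correlation between the predicted and observed seasonal anomalies over all grid points and the RMS error, amount to the short computation sketched below (the function and array names are illustrative assumptions, with the anomalies taken on the common 25-km verification grid).

```python
import numpy as np

def anomaly_scores(pred, obs):
    """Scatter correlation and RMS error of seasonal rainfall anomalies.

    pred, obs : 1-D arrays (mm/day), one value per grid point of the
                verification domain, for the forecast and the observed estimate.
    """
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    corr = np.corrcoef(pred, obs)[0, 1]          # pattern (anomaly) correlation
    rmse = np.sqrt(np.mean((pred - obs) ** 2))   # RMS error, mm/day
    return corr, rmse
```

Applied to each member model, to the ensemble mean, and to the superensemble, these two numbers correspond to the kind of correlation values inset in Fig. 2 and the RMS errors and correlations summarized in Fig. 3.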

Fig. 2. Scatterplot of the JJA precipitation anomaly (mm day−1) over the larger monsoon Asia region for all the member models, the ensemble mean (EM), and the superensemble (SSE). The inset numbers at the bottom show the value of the correlation between the predicted and the observed estimates of the rainfall anomalies.

b. Seasonal forecast skills for precipitation over monsoon Asia

The domains for the Asian monsoon region are outlined in Fig. 1. The histograms presented in Fig. 3 show the RMS errors and the correlations for the downscaled seasonal forecasts of rain with respect to the observed estimates at nearly 25-km resolution. Also included are vertical bars for the downscaled ensemble mean and for the downscaled multimodel superensemble. These pertain to skills for the summer monsoon months of June, July, and August (JJA); the forecasts cover 15 yr, indicated along the abscissa. For the most part, the general conclusion is that the multimodel superensemble almost always carries the lowest RMS errors and the highest correlations; there are very few exceptions, where it carries a skill close to that of the best model. The errors for the ensemble mean are almost always somewhat larger than those of the superensemble. In a real-time framework one can place great reliance on the superensemble forecasts, since its performance is consistently better than that of all the member models and the ensemble mean.

Fig. 3. The ordinate shows the (a) RMS error and (b) anomaly correlation for the rainfall anomaly over the monsoon Asia region; the abscissa denotes the years. Each yearly set of histograms carries forecasts for each of the 16 models, all separately identified by a different color. The last set of histograms shows the 16-yr-averaged skills.

c. Seasonal rainfall anomaly forecasts for a dry and wet monsoon rainfall season

From the sample of 15 yr of seasonal forecasts, we selected the forecasts for the years 1987 and 1988 to contrast a dry with a wet monsoon year. Details of the monsoon for each of these two years were presented by Krishnamurti et al. (1989, 1990). The year 1987 was characterized by very deficient rainfall over northern India, with 25% below-normal rainfall. The different panels, from the top left of Fig. 4a, show the seasonal observed rainfall anomalies from the Yatagai et al. (2009) rain gauge–based datasets and those from the 16 different model forecasts. These are all downscaled model forecasts of the rainfall anomalies during the forecast phase of the multimodel superensemble. The last two panels of this illustration show the rainfall anomaly forecasts from the downscaled ensemble mean and the downscaled multimodel superensemble. All of these anomalies were calculated with respect to the APHRODITE-based observed rainfall climatology. In each of these panels we provide, at the top right, the pattern correlations (i.e., the anomaly correlations) and, at the bottom right, the RMS error of the respective forecast. From an examination of these seasonal forecasts we note that many models carry rather low correlations, ranging from −0.05 to 0.39 (Fig. 4a). For the ensemble mean and the superensemble, these numbers were elevated to 0.40 and 0.43, respectively. The superensemble also carried the lowest RMS error, 1.36, whereas these values were as high as 2.11 for one of the member models. The dryness revealed by the strongest negative values of the rainfall anomalies was present in the forecasts from the National Centers for Environmental Prediction (NCEP), University of Hawaii (UH), Met Office (UKMO), Laboratoire d'Océanographie Dynamique et de Climatologie (LODY), European Centre for Medium-Range Weather Forecasts (ECMWF), and Geophysical Fluid Dynamics Laboratory (GFDL) models. This was also reflected by the multimodel superensemble. The spatial distributions of low rains during the dry year 1987 were best reflected by the multimodel superensemble. Some prominent features in the seasonal rainfall over this larger monsoon domain include the below-normal rains over northern India extending north-northwestward from the east coast of India; those dry features are best represented by the multimodel superensemble. The above-normal rains over northeast China are somewhat underestimated by the multimodel superensemble, the GFDL model, and the FSU KOR model; most other models fail to predict this feature. The above-normal rains over Bangladesh were underestimated by most models, except by the Bureau of Meteorology Research Centre (BMRC), UH, ECMWF, and The Florida State University (FSU) models with Arakawa–Schubert convection and new radiation (ANR), with Kuo convection and new radiation (KNR), and with Kuo convection and old radiation (KOR). Generally, the forecasts of the individual member models were not consistent from one forecast to the next.

Fig. 4. (a) JJA precipitation anomaly (mm day−1) for 1987. The first panel shows the observed rainfall anomalies, the last two panels show the results for the ensemble mean and the superensemble, and the other panels show the results for each of the member models. The inset numbers at the top right show the correlations of the rainfall anomalies with respect to the observed estimates; the inset numbers at the bottom show the corresponding RMS errors.

The summer season maps of the rainfall anomalies for 1988, an above-normal monsoon rainfall year, and the respective skills of the member models over the Asian monsoon domain are presented in Fig. 4b. The observed anomalies for this season's rains are shown in the top panel of Fig. 4b, with above-normal rains over India south of 20°N and along the foothills of the Himalayas, Myanmar, Malaysia, and central and north-central China. The models show a varied performance; their RMS errors ranged from 1.33 to 1.81, whereas those for the ensemble mean and the superensemble were 1.37 and 1.58, respectively. The spatial correlations of the member models ranged from −0.08 to 0.46; the ensemble mean and the superensemble carried values of 0.39 and 0.50, respectively. Prominent regions of above-normal rains during the summer season of 1988 were along the foothills of the Himalayas, south-central India, and western China. The Istituto Nazionale di Geofisica e Vulcanologia (INGV), KOR, KNR, LODY, and UKMO models predicted the above-normal rains over western China. Some of the other features were seen in the forecasts from the multimodel superensemble but were somewhat underestimated. Some models, such as the INGV, UH, and BMRC, predicted excessive rains over northern India during this wet season; their overall scores for the RMS errors and the correlations were somewhat low because the distributions of predicted rains over the entire monsoon domain carried large errors. Some measurable improvements in the seasonal summer monsoon rainfall anomalies were seen from the multimodel superensemble, as reflected by the overall scores.

Fig. 4. (b) JJA precipitation anomaly (mm day−1) for 1988. The first panel shows the observed rainfall anomaly, the last two panels show the results for the ensemble mean and the superensemble, and the other panels show the results for each of the member models. The inset numbers at the top right show the correlations of the rainfall anomalies with respect to the observed estimates; the inset numbers at the bottom show the corresponding RMS errors.

The results for the winter season (December–February) rainfall anomaly forecasts of 1987 and 1988, shown in Figs. 4c and 4d, were quite similar to those for the summer season, both in these skill measures and in the forecast geographical distributions of the anomalous rains. Many models showed excessive positive rainfall anomalies over eastern China; these features were corrected by the superensemble and less so by the ensemble mean. The ensemble mean assigns the same weight, 1/N, to every model at all grid points, so all regions are corrected uniformly. The superensemble recognizes the geographically persistent systematic errors of several of the models along the east coast of China, where the models overestimated the rains, and its fractional, positive, and negative weights were able to make regional corrections.

Fig. 4. (c) As in (a), except for the DJF.

Fig. 4. (d) As in (b), except for the DJF.

As can be seen from the top-right inset entries in Fig. 4d for the correlation (model versus observed estimates of the precipitation anomalies), several models carry negative correlations for their seasonal forecasts of precipitation anomalies; these were corrected to positive values by the superensemble. As noted previously, even models with poor scores often carry persistent errors, and the superensemble is able to correct for such systematic biases. The weights of the superensemble are geographically distributed, are generally positive or negative fractions, and carry a large amount of information on model behavior derived from the training phase, which is carried over to the forecast phase as shown here.

3. Brier skill scores (a probabilistic measure)

Here we follow Brier (1950) and Stefanova and Krishnamurti (2002) for the estimation of probabilistic skills in the evaluation of seasonal climate forecasts. A probabilistic forecast is based on the estimated probability of occurrence of a particular event; here the event is the precipitation anomaly exceeding a preselected threshold. The BSS is a tool that is often used for the verification of probabilistic forecasts.

Following Murphy (1973) we partition the Brier score (BS) into three terms:

$$\mathrm{BS} = \frac{1}{N}\sum_{k=1}^{K} n_k\,(y_k - \bar{o}_k)^2 \;-\; \frac{1}{N}\sum_{k=1}^{K} n_k\,(\bar{o}_k - \bar{o})^2 \;+\; \bar{o}\,(1 - \bar{o}), \qquad (1)$$

where $N$ is the total number of forecasts, $y_k$ denotes the $k$th of the $K$ discrete forecast probability categories, $n_k$ is the number of forecasts in that category, $\bar{o}_k$ denotes the observed conditional frequency given the forecast $y_k$, and $\bar{o}$ is the unconditional mean frequency of the particular event.

The first term in Eq. (1), reliability, measures the accuracy of the forecast probabilities. If the observed conditional frequency is equal to the forecast probability, the reliability term attains its best possible value, meaning that the forecast is perfectly reliable. The second term, resolution, expresses the distance between the conditional observed frequencies and the unconditional climatological frequency, and thus reflects how well the forecasts separate negative and positive anomalies from climatology; a poor forecast fails to make this distinction, whereas a forecast with good resolution can replicate the extreme events of the climate. The last term, uncertainty, is a measure of the variability of the observations only, so it is not influenced by the forecast.

Skill scores are calculated with respect to a reference forecast,

$$\mathrm{SS} = \frac{\mathrm{Score}_{\mathrm{forecast}} - \mathrm{Score}_{\mathrm{reference}}}{\mathrm{Score}_{\mathrm{perfect}} - \mathrm{Score}_{\mathrm{reference}}},$$

where Score is any common measure of probabilistic forecast skill, for example, the BS or the Heidke skill score. The Brier skill score is likewise defined with respect to a reference forecast,

$$\mathrm{BSS} = \frac{\mathrm{BS} - \mathrm{BS}_{\mathrm{reference}}}{\mathrm{BS}_{\mathrm{perfect}} - \mathrm{BS}_{\mathrm{reference}}} = 1 - \frac{\mathrm{BS}}{\mathrm{BS}_{\mathrm{reference}}},$$

where $\mathrm{BS}_{\mathrm{reference}}$ is calculated against the climatological forecast and $\mathrm{BS}_{\mathrm{perfect}} = 0$. For a perfect forecast BSS = 1, and for the climatological forecast BSS = 0.

In the first method [synthetic superensemble 1 (SSE1)] the probability weights assigned to the superensemble are based on the regression method, using the regression weights $a_i$ of the member models together with a parameter $\lambda$ that is empirically assigned its best value of 0.5.

In the second method (SSE2) the weights are based on the hit rate for an event or a nonevent during the training period: $c_i$ is the sum of the hit rates for the event and the nonevent for the $i$th model over the training period, and $\kappa$ is a parameter empirically chosen as 3. The hit rates are derived from the standard contingency counts, where $\alpha$ is the number of events forecast and observed, $\beta$ is the number of events forecast but not observed, $\gamma$ is the number of events observed but not forecast, and $\delta$ is the number of events neither forecast nor observed.
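A minimal sketch of the Murphy (1973) partition of Eq. (1) and of the resulting BSS is given below. It is only an illustration under the assumptions stated in the comments (binary event indicators for a chosen anomaly threshold, forecast probabilities grouped into equally spaced bins, and illustrative function names); it is not the implementation used in this study.

```python
import numpy as np

def brier_decomposition(prob, event, n_bins=10):
    """Murphy (1973) partition of the Brier score.

    prob  : forecast probabilities of the event (values in [0, 1]), e.g. the
            fraction of member models whose anomaly exceeds the threshold
    event : 1 where the event was observed, 0 otherwise
    Returns (reliability, resolution, uncertainty, BS), with
    BS = reliability - resolution + uncertainty as in Eq. (1).
    """
    prob = np.asarray(prob, dtype=float)
    event = np.asarray(event, dtype=float)
    n = prob.size
    o_bar = event.mean()                          # unconditional event frequency
    edges = np.linspace(0.0, 1.0, n_bins + 1)     # probability categories
    k = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
    rel = res = 0.0
    for i in range(n_bins):
        in_bin = (k == i)
        n_k = in_bin.sum()
        if n_k == 0:
            continue
        y_k = prob[in_bin].mean()                 # representative forecast probability
        o_k = event[in_bin].mean()                # observed conditional frequency
        rel += n_k * (y_k - o_k) ** 2
        res += n_k * (o_k - o_bar) ** 2
    rel, res = rel / n, res / n
    unc = o_bar * (1.0 - o_bar)
    return rel, res, unc, rel - res + unc

def brier_skill_score(bs, bs_reference):
    """BSS = 1 - BS/BS_reference, with the climatological forecast as reference."""
    return 1.0 - bs / bs_reference
```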

From our results based on the use of the 16 coupled atmosphere–ocean models for seasonal forecasts of precipitation, we noted several findings. Figure 5 contains four panels, one for each threshold of precipitation anomaly intensity (indicated in the top-right inset box). The ordinate denotes the observed relative frequency and the abscissa the forecast probability, grouped into bins. A slope of 1.0, where the points are aligned along the 45° line, indicates a forecast that is most reliable. The results for the member models are shown by thin dashed lines and have slopes of around 20°, indicating that their forecasts are not very reliable. By combining those models it is possible to increase the reliability; the superensemble curve falls almost along the 45° line. Three ensemble-based results are shown: the multimodel ensemble mean (MME) and two versions of the multimodel superensemble. One of these utilizes weights based on the FSU superensemble regression, and the other uses weights based on the probabilistic method of Stefanova and Krishnamurti (2002), which utilizes the hit rate (also called the probability of detection, the number of correctly forecast events divided by the total number of observed events) and the miss rate (the number of cases where an event was observed but not forecast). Overall, our results for the Asian summer monsoon are that the multimodel superensemble provides the most reliable probabilistic forecasts.
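The reliability curves of Fig. 5 come from the same binned quantities used in the decomposition above; a standalone sketch follows (the equal-width binning and the names are assumptions for illustration).

```python
import numpy as np

def reliability_curve(prob, event, n_bins=10):
    """Points of a reliability diagram: mean forecast probability (abscissa)
    versus observed relative frequency (ordinate) in each probability bin.
    A perfectly reliable forecast lies along the 45-degree diagonal."""
    prob = np.asarray(prob, dtype=float)
    event = np.asarray(event, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    k = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
    x, y = [], []
    for i in range(n_bins):
        in_bin = (k == i)
        if in_bin.any():
            x.append(prob[in_bin].mean())    # forecast probability
            y.append(event[in_bin].mean())   # observed relative frequency
    return np.array(x), np.array(y)
```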

Fig. 5. Probability forecasts for JJA over the monsoon Asia region. The ordinate denotes the observed relative frequency and the abscissa the corresponding forecast probability. MME stands for the multimodel ensemble mean, SSE1 for the superensemble using method 1, SSE2 for the superensemble using method 2, and Mdls for the member models of the multimodel suite. Curves close to the 45° line correspond to the most reliable forecasts.

Figure 6 also contains four panels, one for each threshold of precipitation anomaly. The frequency with which each forecast probability $y_i$ is issued by each forecasting method is shown here. These are known as sharpness diagrams; sharpness reflects the spread across the discrete forecast categories and is a function of the forecasts alone. A U-shaped distribution is considered best for a probabilistic forecast. Here we obtain a typical form, with the forecasts reasonably sharp for the monsoon Asia region in the summer season. Table 1 summarizes some important aspects of the probabilistic forecasts, namely BSS, BSrel, and BSres. In general, the reliability (BSrel) is very strong for all precipitation anomaly thresholds and all methods. The reliability from SSE2 is the highest, as is also clearly seen in Fig. 5. The probabilistic forecast of the superensemble calculated using method 1 (SSE1) shows a 26.0% improvement over the MME, while method 2 (SSE2) shows a 28.5% improvement.

Fig. 6. Sharpness diagrams for the probabilistic forecasts for JJA over the monsoon Asia region. The ordinate denotes the relative frequency with which each forecast probability is issued and the abscissa denotes the forecast probability.

Table 1. The reliability (BSrel), resolution (BSres), and total Brier skill score (BSS) for different thresholds of the precipitation anomaly (denoted ppta, mm day−1).

4. Number of models

In this study we utilized a total of 15 seasons of data from 16 member models. In the context of cross validation, the number of seasons available for training the ensemble forecasts is reduced by one. For a given number of years of data, we noted an optimal number of models that provides the best results. This is illustrated in Fig. 7. Here the RMS errors of the seasonal precipitation forecasts over Asia are plotted along the ordinate, and the abscissa denotes the number of member models used. Three different curves are shown for the use of 5, 10, and 15 yr of total data for the construction of the superensemble. These results show that for each choice of the number of years of available forecasts a different minimum value of the RMS error is attained. As the number of years of forecast data is increased, the number of models required to attain the minimum RMS error increases. Furthermore, the RMS error decreases with an increase in forecast data length.
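The kind of sensitivity summarized in Fig. 7 can be emulated with synthetic data, as in the sketch below. Everything in it (the synthetic anomalies, the persistent per-model biases, the grid size, and the plain least-squares combination) is an assumption made purely to illustrate the trade-off between the number of models and the training length; it does not use the actual forecast datasets or the FSU regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_validated_rmse(n_models, n_years, n_points=300):
    """Leave-one-out RMSE of a regression-combined multimodel forecast built
    from synthetic anomalies in which each model has a persistent bias."""
    truth = rng.normal(0.0, 1.0, (n_years, n_points))
    bias = rng.normal(0.0, 1.0, (n_models, n_points))   # persistent systematic error
    noise = rng.normal(0.0, 0.7, (n_years, n_models, n_points))
    models = truth[:, None, :] + bias[None, :, :] + noise
    err2 = 0.0
    for y in range(n_years):
        train = [k for k in range(n_years) if k != y]   # withhold one season
        for p in range(n_points):
            A = np.column_stack([models[train, :, p], np.ones(len(train))])
            w, *_ = np.linalg.lstsq(A, truth[train, p], rcond=None)
            pred = np.append(models[y, :, p], 1.0) @ w
            err2 += (pred - truth[y, p]) ** 2
    return np.sqrt(err2 / (n_years * n_points))

# RMSE as a function of the number of member models for two training lengths
for years in (5, 15):
    errors = [round(cross_validated_rmse(m, years), 2) for m in (2, 4, 8, 12, 16)]
    print(years, "yr:", errors)
```

In this toy setting the cross-validated RMSE tends to reach a minimum at an intermediate number of models, and that minimum tends to shift toward more models and lower error as the training length grows, qualitatively consistent with the behavior described above.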

Fig. 7. Sensitivity of the superensemble forecast to the number of models for different lengths of datasets. The ordinate denotes RMS errors and the abscissa denotes the number of models. The three different curves show the results from the usage of different lengths of forecast datasets.

5. Single-model-based ensembles versus multimodel superensemble

Several of the single models included in our multimodel suite carry many forecasts for each start time. The data used from such single models in our study were the ensemble means of those runs. Generally, the several runs for the same start time are generated using perturbed initial states; in the seasonal climate context, some of these different initial states are generated by lagging the start by a few days while still providing data for the same start dates. The question of how the results from a single model compare with those of a multimodel superensemble is frequently asked. Based on several such comparisons, we found that the multimodel superensemble is always quite superior, in terms of performance skill, to the single-model-based ensembles. Figure 8 illustrates such results for the RMS errors, where the models that included 10 or more ensemble members are shown. These four models are the BMRC, GFDL, NCEP, and UH.

Fig. 8. The vertical bars show the RMS error (along the ordinate) for each single-model ensemble mean compared to the overall ensemble mean (clear bar) and the superensemble (dark bar), shown at the far right of each yearly set. Also shown at the far right is the overall average for the 15 yr. These results pertain to the larger monsoon domain. The lowest RMS errors are seen for the superensemble at the far right of each set of bars.

In Fig. 8 we show the RMS errors (along the ordinate) for these four models, shown as vertical bars, as a function of the 15 yr of forecasts along the abscissa. Also shown, at the far right of each yearly set, are the results for the ensemble mean of all four models and for the superensemble. What stands out is that the multimodel superensemble carries smaller errors for the seasonal forecasts of summer monsoon rains over the large Asia domain than the single-model-based ensembles and their joint ensemble mean (shown in the last set of bars at the far right). Clearly the superensemble benefits from the diversity of physical parameterizations, resolutions, ocean model formulations, initial states, and land surface physics; the single-model ensembles carry a limited range of such options.

6. Forecasts over subdomains of monsoon Asia

a. Skills of precipitation anomaly over India

The histograms of the RMS errors and correlations for the summer seasonal forecasts of the rainfall anomalies over the India domain, identified in Fig. 1, are shown in Figs. 9a and 9b, respectively. The abscissa denotes the years for which these forecasts were made. The RMS errors for the multimodel superensemble are generally around 1.4 to 2.5, whereas the values for the member models are generally in the range of 1.5 to 3.5. The correlations of the model-based rainfall anomalies with respect to the observed anomalies, at 25-km resolution, are much lower than those for the multimodel superensemble, which consistently has values around 0.5. Overall, these results over the Indian domain are slightly better than those over the Asian monsoon domain shown in Fig. 1, because the dense observational coverage of rain gauges helps the downscaling and the training of the multimodel superensemble. An exception was the above-normal rainfall year of 1999, when most models performed rather poorly and displayed seasonal correlations around −0.3; this was also reflected in the ensemble mean. The multimodel superensemble had a score of +0.13. Many of the models that carried poor correlation skills generally described rainfall anomalies that were out of phase with the observed anomalies. An illustration of this feature is presented in Fig. 9c. Here, for an India subdomain, we show the time history of the rainfall anomaly forecasts for four models. The observed rainfall anomalies are shown by a heavy dark line. Two of these models carried negative correlations and show the out-of-phase relationship with respect to the observed rainfall anomaly estimates. Such an out-of-phase relationship can arise from underestimates of rainfall due to deficiencies in the physical parameterizations. If this kind of feature is persistent, then the superensemble is able to recognize the systematic errors and can benefit from them. This is one of the important findings of this study.

Fig. 9. The ordinate shows the (a) RMS error and (b) anomaly correlation for the rainfall anomaly over the India domain; the abscissa denotes the years. Each yearly set of histograms contains forecasts for each of the 16 models, all separately identified by a different color. The last set of histograms shows the 16-yr-averaged skills. (c) JJA rainfall anomaly for the observations (APHRODITE, solid line) and four models (ANR, KNR, METF, and BMRC), area averaged over the central Indian region (75°–90°E, 20°–30°N). The abscissa denotes the years of the forecasts, and the ordinate denotes the rainfall anomaly (mm day−1).

The correlations for 3 out of the 15 yr showed a varied performance from the member models; these were the summer seasons of 1990, 1994, and 1999 (Fig. 9c). During all other years the member models generally had positive correlations (between the model-based rainfall anomaly forecasts and the corresponding observed estimates). These three years showed many negative correlations for the member models; however, the superensemble, in spite of such poor performance from the member models, had low but positive scores. The summer season of 1999 had the largest RMS errors and negative correlations for a number of models. These scores were seen for the larger Asian monsoon domain as well as for most of the regional domains described in section 6c. The failures of a large number of climate models during the summer of 1999 may be related to unusually cold sea surface temperatures (SSTs) in the eastern tropical Pacific and warm SSTs in the western tropical Pacific and Indian Oceans that were remarkably persistent during this period (Hoerling and Kumar 2003). It is not clear how such cold SST anomalies would affect the model forecasts. From such seasonal forecasts, we can say that one can rely on the multimodel superensemble to provide the highest skill. The question remains whether such an improvement in skill is useful to the user community that utilizes this information.

Figures 10a and 10b provide two further examples of seasonal forecasts, for the years 1989 and 1994. These were selected to show the consistent improvements in the rainfall distributions over the Asia domain from the multimodel superensemble-based forecasts. In both of these examples the member models and the ensemble mean do not describe the wet spells of the respective years very well. However, these member-model forecasts carry enough information on their systematic errors for those errors to be corrected by the superensemble. Many models underestimate the rains; however, they place their heavy rains in roughly the correct geographical locations. It should also be stated that during the downscaling each member model was bias corrected during the training phase only. During the forecast phase no further corrections are made to the member model forecasts displayed in Figs. 10a and 10b. The superensemble, however, does use the training phase statistics for a further collective bias removal. The bias corrections of the member models derived from the training phase do not remove all systematic errors, since such errors have a spread from one forecast to the next; the errors generated during the forecast phase for the member models cannot be entirely removed using the training phase's downscaling statistics alone.

Fig. 10. (a) JJA precipitation anomaly (mm day−1) for 1989 over the Indian region. The inset numbers at the top denote the correlations of the predicted precipitation anomaly with respect to the observed estimates; the bottom inset numbers denote the corresponding RMS errors.

Fig. 10. (b) As in (a), but for the JJA precipitation anomaly for 1994.

b. Local and regional skills within India

At the outset it must be stated that the large-scale models' precipitation datasets were first extracted at a resolution of roughly 250 km and then downscaled to a resolution of 25 km. Moving to a district-level forecast would take the interpretation of the results to a resolution of a few kilometers; asking whether a downscaled multimodel superensemble has any useful information content at that district level may be asking too much. Eight arbitrarily selected districts are numbered in Fig. 11. There was some consistency in the seasonal forecasts of rains for these districts over India. These districts are local regions that occupy rather small areas; the districtwise domains are often on the order of one to two degrees of latitude by one to two degrees of longitude. There is considerable interest in seasonal prediction of precipitation at this level. At this stage, what might be possible is to bring to the table a product that is superior to, and more consistent than, all 16 participating member models of our suite in its forecasts of seasonal rains. Figures 12a and 12b illustrate the results for the 15 summer seasons (1987–2001); the seasons are along the abscissa and the seasonal rainfall (mm day−1) is along the ordinate. The four different curves denote the observed estimates (the APHRODITE product, shown by a thick dark line), a selected model (ECMWF, thin solid line), the ensemble mean (dotted line), and the multimodel superensemble (dashed line). The inset numbers at the top left of each panel provide the location coordinates of the district; the inset at the top right of each panel provides the RMS errors of the seasonal forecasts for the ECMWF, the ensemble mean, and the multimodel superensemble. The model results are not very impressive for extrema of rainfall at the district level. The time history of the superensemble forecasts of rain at the district level is rather flat, but it carries a somewhat higher skill than all the member models, and the forecast from the multimodel superensemble has the lowest RMS error. The intensity of rainfall clearly varies from district to district, as indicated by the range (7–14 mm day−1) along the ordinate. The usefulness of forecasts at this roughly one-degree scale will require further work on modeling, observations, and postprocessing. Why not use a mesoscale modeling suite, where each member model starts at a horizontal resolution of a few kilometers? Carrying those types of forecasts to a season-long time scale has been questioned by several authors, for example, Kanamitsu et al. (2010). They allude to the excitation of domain-size modes in regional models that contaminate seasonal time-scale predictions; these modes are excited because of ill-posed lateral boundary conditions in mesoscale regional models. These questions need to be addressed further and are beyond the scope of this study.

Fig. 11. The map of India showing district boundaries. The arbitrarily selected 8 districts are marked by numbers 1–8.

Fig. 12. (a) Total rainfall (along the ordinate, mm day−1) for the first four districts (marked 1–4 in Fig. 11); the abscissa denotes the 15 yr for which forecasts were made. In each panel the four curves show the observed estimate (APHRODITE), the ECMWF forecast, the ensemble mean, and the multimodel superensemble. Also shown are the RMS errors for the best model (ECMWF), the ensemble mean, and the multimodel superensemble.

Fig. 12. (b) As in (a), but for districts 5–8 of Fig. 11.

c. Histogram showing skills of precipitation anomaly forecasts over China, Taiwan, Japan, and South Korea

Proceeding from the large monsoon domain to the smaller subdomains identified in Fig. 1, the same statistics, the RMS errors and the correlations of the modeled rainfall anomalies with respect to the observed anomalies, have been evaluated (Figs. 13 and 14). Some of these subdomains are much smaller in size, and it is of interest to see how the seasonal forecasts might degrade as the domain size is reduced. The subdomains considered here include China, Taiwan, Japan, and South Korea. These results pertain to the prediction of rainfall anomalies for the summer season (June, July, and August). The model RMS errors over China range from roughly 1.0 to 2.5, and the corresponding values over Taiwan, Japan, and South Korea range from 1.1 to 5.0, 0.85 to 3.5, and 1.0 to 4.0, respectively. The respective minimum RMS errors for the rainfall anomalies from the ensemble mean and the superensemble, for China, Taiwan, Japan, and South Korea, are (1.1, 1.0), (1.4, 1.0), (0.95, 0.8), and (1.2, 1.0). In all cases, some improvement in regional skill was possible from the multimodel superensemble. The correlations for the subregions China, Taiwan, Japan, and South Korea ranged from −0.5 to +0.7, −0.7 to +0.8, −0.6 to +0.8, and −0.7 to +0.85, respectively. The respective maximum correlations for the ensemble mean and the multimodel superensemble are (0.7, 0.8), (0.5, 0.7), (0.75, 0.8), and (0.8, 0.85). Overall, there is clearly an interannual variability in these skills. The consistent picture that emerges is that the multimodel superensemble provides the highest skills over these subdomains. Another metric often used in evaluations of climate model precipitation forecasts, a normalized rainfall anomaly expressed as a percentage, is a powerful measure and is presented in the next section.

Fig. 13. The ordinate shows the RMS error and the anomaly correlation for the rainfall anomaly over the (a),(b) China and (c),(d) Taiwan domains; the abscissa denotes the years. Each yearly set of histograms carries forecasts for each of the 16 models, all separately identified by a different color. The last set of histograms shows the 15-yr-averaged skills.

Fig. 14. As in Fig. 13, but for the (e),(f) Japan and (g),(h) South Korea domains.

7. Percentages of normalized seasonal model rainfall anomalies

Table 2 shows the correlation coefficients and the RMS errors for the seasonal rainfall anomalies expressed as a percent departure from their normal values. Here the normal is the 15-yr (1987–2001) summer-season average of the observed total rain, and the observed rains for these same periods are derived from the APHRODITE product. We take the seasonally predicted rainfall of a model at a grid point, derive the rainfall anomaly by subtracting the observed climatological rainfall value, divide by the observed climatological rain (to normalize it), and express the result as a percentage by multiplying by 100. Finally, these values are averaged over the respective domains. (A short sketch of this computation is given at the end of this section.) The normalization and subsequent averaging make it easier to standardize the performance of the model results over different regions where the rainfall intensities differ. The table lists the model acronyms from top to bottom; the last two entries at the bottom are the values for the ensemble mean and for the multimodel superensemble. The results for the various domains are presented in the different columns. The correlations are often small, and even the ensemble mean of the correlations conveys low values. However, these normalized scores from the multimodel superensemble are very robust. Many of the low skills among the member models arise from the normalization; moderate to heavy rains are often very poorly predicted by the models with low skill. The grid-point-by-grid-point corrections from the combined use of the downscaling and the multimodel superensemble elevate these scores to rather high values. One can ask how long it would take for a single model to improve its scores to the levels currently attainable from the downscaled superensemble. As an example, consider the INGV model's performance over Taiwan: the correlation shown in Table 2 for INGV is around 0.08, whereas the superensemble carries a score of +0.70. Models with good skill and models with low skill (and persistent systematic errors) both contribute to these elevated skills of the multimodel superensemble. This is the major finding of this paper; the superensemble exists only because of these multimodels. To elevate the score of a single model, further work is needed on resolution, data assimilation, and all aspects of model physics, cloud microphysics, dynamics, surface processes, and the entire representation of the oceans. Lee (1999) worked on a Ph.D. dissertation implementing a prognostic cloud scheme for radiative transfer, in place of an older diagnostic cloud scheme, in a single model. In the prognostic cloud scheme the radiative transfer computations carried cloud fractions and cloud liquid water as explicit dependent variables, whereas the diagnostic cloud scheme specified the clouds from certain threshold values of the model-based relative humidities. The impact of that change on the day-6 anomaly correlations at the 500-hPa level, for a sample of 30 forecasts between 20° and 80°N, was around 0.02. An illustration based on Lee (1999), showing the NWP impact of these two cloud–radiation algorithms, is given in Fig. 15. This is a small impact, and one must address numerous such model changes to improve the current state of the models. That work took nearly 4 years for a Ph.D. student; single models must chip away at hundreds of such algorithm, resolution, and data issues.
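As referenced above, the normalization used for Table 2 amounts to the following short computation (the function and array names are illustrative assumptions; the climatology is the 15-yr observed seasonal mean at each grid point).

```python
import numpy as np

def normalized_anomaly_percent(forecast, obs_clim):
    """Seasonal rainfall anomaly as a percent departure from the observed
    climatological rain at each grid point, plus its domain average.

    forecast : predicted seasonal-mean rain (mm/day) on the 25-km grid
    obs_clim : observed 1987-2001 seasonal-mean climatology (mm/day), same grid
    """
    forecast = np.asarray(forecast, dtype=float)
    obs_clim = np.asarray(obs_clim, dtype=float)
    # percent departure; in practice, grid points with near-zero climatological
    # rain would need to be masked before dividing
    pct = 100.0 * (forecast - obs_clim) / obs_clim
    return pct, pct.mean()
```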

Table 2. Correlation coefficients (CC) and RMSEs for the normalized precipitation anomalies over the monsoon Asia, India, China, Taiwan, Japan, and South Korea regions for the 16 member models (from top down) and for the ensemble mean and the superensemble.
Fig. 15. Anomaly correlations between 20° and 80°N for two radiative schemes: RTDC (an older emissivity/absorptivity-based radiative transfer scheme with threshold relative humidity–based clouds; old scheme) and RTPC (a band model for radiative transfer with explicit clouds; new scheme). The results are based on 30 days of forecasts during September 1998.

8. Conclusions

The forecast of seasonal rainfall anomalies, a season in advance, is a major scientific challenge for the monsoon world. The region is known for large droughts and floods and for its need of advance information for agriculture, and many scientific efforts have been made to provide the best possible precipitation forecasts (e.g., Rajeevan et al. 2007). That progress largely came from a statistical multiple regression approach that included a number of predictors. The approach followed in this study is a mix of multimodel-based seasonal forecasts, a comprehensive downscaling, and the construction of a superensemble from the model outputs. The availability of a comprehensive rainfall data archive covering several decades of daily rain gauge datasets (Yatagai et al. 2009) made it possible to carry out the downscaling and the construction of a multimodel superensemble at a high horizontal resolution, 25 km here. This rainfall data archive includes close to 10 000 rain gauge sites over Asia. The downscaling takes each month's forecasts of rain from each model and statistically relates them to the APHRODITE rain. During the downscaling of each model's forecasts, geographical fields of that model's systematic errors are identified and corrected by the downscaling algorithm. In the training phase of the multimodel superensemble, the weights for each member model at each grid point (separated by 25 km) are obtained using these same APHRODITE rains. The final model validation of the total rain and of the rainfall anomalies also makes use of these same high-resolution observed rainfall estimates. During the AMIP period there was a large mismatch between the model-based climatology and the observed rainfall climatology, and model rainfall anomalies were then calculated with respect to the model climatology. That is no longer necessary, since it is now possible to derive a downscaled superensemble-based rainfall anomaly with respect to the observed climatology; the superensemble-based rainfall climatology is very close to the observed climatology. Roughly 11 yr of past seasonal forecast datasets were needed during the training phase of the downscaling and superensemble construction for the stabilization of the statistical weights. We also find that further improvements on the results presented in this paper may be possible if a larger number of forecasts is carried out by each member model, in which case lower errors could be achieved from the use of a larger number of member models. For a given data length, an optimal number of model forecasts provides the least error; using more models than the optimal number appears to degrade the results. Our results show that the downscaled superensemble consistently carries much better skill for the prediction of rainfall anomalies over the Asian monsoon rainfall belt and for several of its subregions than the member models. These model validations are based on both deterministic skill measures and the probabilistic Brier skill score. The Brier skill scores shown here clearly indicate that the probabilistic reliability is highest for the multimodel superensemble when compared to any of the member model forecasts; in the probabilistic sense, the superensemble-based forecasts are the most reliable. It is noted that the superensemble provides a 26% improvement over the score for the ensemble mean. The last row of Table 2 shows the normalized anomaly correlations and RMS errors from the multimodel superensemble for the various subdomains. These skills for the superensemble are generally much higher than those of the 16 member models; this is clearly a major result of our findings. The elevated scores (CC = 0.56–0.89 and RMS error = 2.39–14.05) for the superensemble come in part from a number of poorer models whose large systematic errors are persistent and are corrected by the superensemble.

The districtwise skills over subregions of India were also calculated. Here we start from model resolutions of around 100 km or larger. The forecasts were first downscaled to the observed precipitation resolution of 25 km prior to the construction of the superensemble forecasts, and roughly 3–4 such final grid points describe a district. At that resolution, although the multimodel superensemble produced the best results compared to all member models, much more skill is desired for predicting extrema of seasonal rains. Further improvements may come from models with much higher resolutions and possibly also from a suite of high-resolution mesoscale models. Rainfall observations at resolutions better than 25 km may also be needed for such forecast improvements.

The bias correction of the multimodel superensemble is not an additive process but a multiplicative one; this is a unique aspect of the superensemble. We have also included a normalized rainfall anomaly-based metric that shows the uniquely large improvements from the superensemble in the correlations and RMS errors of the observed versus the modeled rainfall anomalies. This study points to the presence of large systematic errors in many models that carry poor skill; the persistence of such errors enables the superensemble to benefit from them. Such persistence of errors seems to arise from weaknesses in the physical parameterizations that lead some member models to persistently underestimate or overestimate the rainfall anomalies. Further research on the postprocessing of model results would be quite helpful for extracting more information from model forecasts and their errors.

Acknowledgments

This work was supported by NSF Award AGS-0419618 and NASA Awards NNX10AG86G and NNX07AD39G. We wish to acknowledge the APCC/CliPAS for providing coupled model datasets.

REFERENCES

  • Brier, G. W., 1950: Verification of forecasts expressed in terms of probabilities. Mon. Wea. Rev., 78, 1–3.

  • Chakraborty, A., and T. N. Krishnamurti, 2009: Improving global model precipitation forecasts over India using downscaling and the FSU superensemble. Part II: Seasonal climate. Mon. Wea. Rev., 137, 2736–2757.

  • Coelho, C. A. S., D. B. Stephenson, F. J. Doblas-Reyes, and M. Balmaseda, 2006: The skill of empirical and combined/calibrated coupled multi-model South American seasonal predictions during ENSO. Adv. Geosci., 6, 51–55.

  • Gadgil, S., and S. Sajani, 1998: Monsoon precipitation in the AMIP runs. Climate Dyn., 14, 659–689.

  • Hastenrath, S., 1987: On the prediction of Indian monsoon rainfall anomalies. J. Climate Appl. Meteor., 26, 847–857.

  • Hoerling, M. P., and A. Kumar, 2003: The perfect ocean for drought. Science, 299, 691–694.

  • Kanamitsu, M., K. Yoshimura, Y.-B. Yhang, and S.-Y. Hong, 2010: Errors of interannual variability and trend in dynamical downscaling of reanalysis. J. Geophys. Res., 115, D17115, doi:10.1029/2009JD013511.

  • Krishnamurti, T. N., H. S. Bedi, and M. Subramaniam, 1989: The summer monsoon of 1987. J. Climate, 2, 321–340.

  • Krishnamurti, T. N., H. S. Bedi, and M. Subramaniam, 1990: The summer monsoon of 1988. Meteor. Atmos. Phys., 42, 19–37.

  • Krishnamurti, T. N., L. Stefanova, A. Chakraborty, T. S. V. Kumar, S. Cocke, D. Bachiochi, and B. Mackey, 2002: Seasonal forecasts of precipitation anomalies for North American and Asian monsoons. J. Meteor. Soc. Japan, 80, 1415–1426.

  • Krishnamurti, T. N., A. Chakraborty, R. Krishnamurti, W. K. Dewar, and C. A. Clayson, 2006: Seasonal prediction of sea surface temperature anomalies using a suite of 13 coupled atmosphere–ocean models. J. Climate, 19, 6069–6088.

  • Kumar, V., and T. N. Krishnamurti, 2012: Improved seasonal precipitation forecasts for the Asian monsoon using 16 atmosphere–ocean coupled models. Part I: Climatology. J. Climate, 25, 39–64.

  • Lee, H.-S., 1999: Cloud specification and forecasts in the Florida State University global spectral model. Ph.D. dissertation, The Florida State University, 361 pp.

  • Murphy, A. H., 1973: A new vector partition of the probability score. J. Appl. Meteor., 12, 595–600.

  • Rajeevan, M., S. S. Pai, A. R. Kumar, and B. Lal, 2007: New statistical models for long range forecasting of southwest monsoon rainfall over India. Climate Dyn., 28, 813–828.

  • Shen, S. S. P., W. K. M. Lau, K.-M. Kim, and G. Li, 2001: A canonical ensemble correlation prediction model for seasonal precipitation anomaly. NASA Rep. NASA/TM-2001-209989, 65 pp.

  • Stefanova, L., and T. N. Krishnamurti, 2002: Interpretation of seasonal climate forecast using Brier skill score, The Florida State University superensemble, and the AMIP-I dataset. J. Climate, 15, 537–544.

  • Wang, H., and K. Fan, 2009: A new scheme for improving the seasonal prediction of summer precipitation anomalies. Wea. Forecasting, 24, 548–554.

  • Wang, L., C. Zhu, and W.-T. Yun, 2007: Improvement of model forecast on the Asian summer rainfall anomaly with the application of a spatial filtering scheme. Theor. Appl. Climatol., 88, 225–230.

  • Yatagai, A., O. Arakawa, K. Kamiguchi, H. Kawamoto, M. I. Nodzu, and A. Hamada, 2009: A 44-year daily gridded precipitation dataset for Asia based on a dense network of rain gauges. SOLA, 5, 137–140.
  • Fig. 1.

    Subdomains of study over different parts of monsoon Asia.

  • Fig. 2.

    Scatterplot of the JJA precipitation anomaly (mm day−1) over the larger monsoon Asia region for all the member models, the ensemble mean (EM), and the superensemble (SSE). (bottom) The inset numbers show the correlation between the predicted and the observed estimates of the rainfall anomalies.

  • Fig. 3.

    (a) RMS error and (b) anomaly correlation (ordinate) for the rainfall anomaly over the monsoon Asia region. The abscissa denotes the years. Each yearly set of histograms carries forecasts for each of the 16 models, separately identified by color. The last set of histograms shows the 16-yr-averaged skills.

  • Fig. 4.

    (a) JJA precipitation anomaly (mm day−1) for 1987. The first panel shows the observed rainfall anomalies, the last two panels show the results for the ensemble mean and the superensemble, and the remaining panels show the results for each of the member models. (top right) The inset numbers show the correlations of the rainfall anomalies with respect to the observed estimates. (bottom) The inset numbers show the corresponding RMS errors.

  • Fig. 4.

    (b) JJA precipitation anomaly (mm day−1) for 1988. The first panel shows the observed rainfall anomaly, the last two panels show the results for the ensemble mean and the superensemble, and the remaining panels show the results for each of the member models. (top right) The inset numbers show the correlations of the rainfall anomalies with respect to the observed estimates. (bottom) The inset numbers show the corresponding RMS errors.

  • Fig. 4.

    (c) As in (a), except for the DJF.

  • Fig. 4.

    (d) As in (b), except for the DJF.

  • Fig. 5.

    Probability forecasts for JJA over the monsoon Asia region. The ordinate denotes the observed relative frequency and the abscissa denotes the corresponding forecast probability. MME stands for the multimodel ensemble mean, SSE1 for the superensemble using method 1, SSE2 for the superensemble using method 2, and Mdls for the member models of the multimodel suite. Forecasts falling close to the 45° diagonal are the most reliable.

  • Fig. 6.

    Sharpness diagrams for probabilistic forecasts for JJA over the monsoon Asia region. Here the ordinate denotes the observed relative frequency and the abscissa denotes the corresponding forecast probability.

  • Fig. 7.

    Sensitivity of the superensemble forecast to the number of models for different lengths of datasets. The ordinate denotes RMS errors and the abscissa denotes the number of models. The three different curves show the results from the usage of different lengths of forecast datasets.

  • Fig. 8.

    The vertical bars show the RMS error (ordinate) for each single-model ensemble mean as compared to the overall ensemble mean (clear bar) and the superensemble (dark bar), shown at the far right for each year. Also shown at the far right is the overall average for 15 yr. These results pertain to the larger monsoon domain. The smallest RMS errors are seen for the superensemble at the far right of each set of bars.

  • Fig. 9.

    (a) RMS error and (b) anomaly correlation (ordinate) for the rainfall anomaly over the monsoon Asia region. The abscissa denotes the years. The yearly sets of histograms contain forecasts for each of the 16 models, separately identified by color. The last set of histograms shows the 16-yr-averaged skills. (c) Rainfall anomaly (JJA) for the observations (APHRODITE, solid line) and the models (ANR, KNR, METF, and BMRC), area averaged over the central India region (75°–90°E, 20°–30°N). The abscissa denotes the forecast years, and the ordinate denotes the rainfall anomalies (mm day−1).

  • Fig. 10.

    (a) JJA precipitation anomaly (mm day−1) for 1989 over the Indian region. The inset numbers on top denote the correlations of the predicted precipitation anomaly with respect to the observed rainfall anomaly estimates; the bottom inset numbers denote the corresponding RMS errors.

  • Fig. 10.

    (b) As in (a), but for the JJA precipitation anomaly for 1994.

  • Fig. 11.

    Map of India showing district boundaries. The eight arbitrarily selected districts are marked by the numbers 1–8.

  • Fig. 12.

    (a) Total rainfall (ordinate, mm day−1) for the first four districts (marked 1–4 in Fig. 11); the abscissa denotes the 15 yr for which forecasts were made. The four curves show the total rains for the four districts. Also shown are the RMS errors for the best model (ECMWF), the ensemble mean, and the multimodel superensemble.

  • Fig. 12.

    (b) As in (a), but this carries results for districts 5–8 of Fig. 11.

  • Fig. 13.

    RMS error and anomaly correlation (ordinate) for the rainfall anomaly over the (a),(b) China and (c),(d) Taiwan domains. The abscissa denotes the years. Each yearly set of histograms carries forecasts for each of the 16 models, separately identified by color. The last set of histograms shows the 15-yr-averaged skills.

  • Fig. 14.

    As in Fig. 13, but for (e),(f) Japan and (g),(h) South Korea domains.

  • Fig. 15.

    Anomaly correlations between 20° and 80°N for two radiative schemes: RTDC (an older emissivity/absorptivity-based radiative transfer scheme with threshold relative humidity–based clouds; old scheme) and RTPC (a band model for radiative transfer with explicit clouds; new scheme). Results are based on 30 days of forecasts during September 1998.
