
Multiconvective Parameterizations as a Multimodel Proxy for Seasonal Climate Studies

T. E. LaRow, Center for Ocean–Atmospheric Prediction Studies, The Florida State University, Tallahassee, Florida

S. D. Cocke, Department of Meteorology, The Florida State University, Tallahassee, Florida

D. W. Shin, Center for Ocean–Atmospheric Prediction Studies, The Florida State University, Tallahassee, Florida

Abstract

A six-member multicoupled model ensemble is created by using six state-of-the-art deep atmospheric convective schemes. The six convective schemes are used inside a single model and make up the ensemble. This six-member ensemble is compared against a multianalysis ensemble, which is created by varying the initial start dates of the atmospheric component of the coupled model. Both ensembles were integrated for seven months (November–May) over a 12-yr period from 1987 to 1998. Examination of the sea surface temperature and precipitation shows that, while the deterministic skill scores are slightly better for the multianalysis ensemble, the probabilistic skill scores favor the multimodel approach. Combining the two ensembles to create a larger ensemble size increases the probabilistic skill score relative to the multimodel alone. This altered-physics approach to creating a multimodel ensemble is seen as an easy way for small modeling centers to generate ensembles with better reliability than can be obtained by varying the initial conditions alone.

Corresponding author address: Dr. Timothy LaRow, Center for Ocean–Atmospheric Prediction Studies, The Florida State University, Tallahassee, FL 32306. Email: larow@coaps.fsu.edu


1. Introduction

The use of ensembles to quantify forecast uncertainty is widely practiced within the operational numerical weather prediction (NWP) and climate communities. One common method of generating ensemble members for climate forecasts involves using slightly different initial conditions, either by staggering the initial atmospheric start dates or by adding perturbations to the analysis. The lagged-average approach to developing an ensemble has been used by many (e.g., Molteni et al. 1996; LaRow and Krishnamurti 1998), due in part to the fact that seasonal predictions are strongly determined by the lower boundary conditions, most notably the sea surface temperatures (Charney and Shukla 1981; Palmer and Anderson 1994). For seasonal climate studies this might not be the optimal design, since the technique uses only a single model and thus provides only a limited estimate of the uncertainty about the initial state. As shown by Déqué et al. (1994), this can lead to biased probability forecasts.

A new approach has been proposed by which ensembles are generated by combining outputs from several different global models (Palmer et al. 2000; Palmer and Shukla 2000). This approach estimates not only uncertainties in the initial state but also uncertainties in our understanding of the physical processes operating in the climate system. Using this approach, Palmer et al. (2000) showed that the probabilistic skill of the multimodel ensemble was higher than that of any of the individual model ensembles; however, the increase in skill was largely due to the increase in ensemble size and reliability.

Other atmospheric-only multimodel approaches include the superensemble bias removal technique of Krishnamurti et al. (1999, 2000). This technique minimizes the model biases during a “training period” using least squares linear regression; the resulting coefficients are then applied to the models during the forecast period. The technique was shown to reduce the model errors during the forecast phase.

Extending the atmospheric-only ideas of Palmer et al. (2000) to coupled climate models, the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project was devised (Palmer et al. 2004). Seven different global coupled models were collected and run at a single site for a series of 6-month hindcast studies. The collection of coupled climate models within the DEMETER project showed impressive reliability (Palmer et al. 2004), and the probabilistic forecasts were successfully downscaled for malaria incidence and crop yield forecasting.

In this paper, we propose a different method of generating multimodel ensembles for seasonal climate integrations. Instead of using a variety of models, as is currently done by some of the major operational centers, we use a single model, the Florida State University (FSU) coupled model, with six different state-of-the-art deep convective parameterizations to develop our ensembles. The main advantage of creating the ensembles this way is that a single model is easier to maintain; the disadvantage is that the various versions of the model are not entirely independent.

From a coupled model perspective, perhaps no single parameterization is more important than deep convection. In the Tropics, the SSTs and deep convection are strongly coupled via the net heat, freshwater, and momentum fluxes. Tropical deep convection is also known to influence extratropical weather via upper-level Rossby wave dynamics that alter the placement of the subtropical and polar jet streams (e.g., Trenberth et al. 1998).

The purpose of this paper is not to show which convective scheme works best within the FSU coupled model, but to show that by changing the deep convective parameterization one can obtain an effective multimodel proxy that allows a better estimate of the model uncertainty and can potentially yield skill superior to that obtained by altering the initial conditions within a single model (or be used in conjunction with that approach). Such an approach could be advantageous to modeling groups who do not have access to data from the major modeling centers, yet wish to develop multimodel ensembles for their purposes.

This paper is organized as follows. Section 2 discusses the model and the convective parameterizations used in this paper. Section 3 provides the experimental details. Section 4 briefly discusses the skill measures used. Section 5 presents the deterministic results, section 6 presents the probabilistic results, and, finally, discussion and conclusions are given in section 7.

2. Models

In this study, the FSU global coupled model (LaRow and Krishnamurti 1998; Cocke and LaRow 2000) is used. The components of the coupled system include the FSU global spectral model, at resolution T63L17 (triangular truncation at 63 waves, with 17 vertical levels), and the Max Planck Institute’s global ocean model, the Hamburg Ocean Primitive Equation (HOPE) model. The net heat flux, freshwater flux, wind stress, and surface solar radiation are time averaged and spatially interpolated to the ocean grid every two model hours. The ocean model is integrated for one time step using the atmospheric forcing, and the resulting SSTs are passed back to the atmospheric model. The coupled model does not employ any type of anomaly coupling or flux correction technique and is dynamically coupled between 50°N and 50°S. Poleward of this region the temperatures and salinities are relaxed toward the Levitus (1982) climatology.

Six different formulations of the FSU coupled model are used in this study, based on six different cumulus convection parameterizations. The six state-of-the-art deep convection schemes are the Emanuel scheme (Emanuel and Zivkovic-Rothman 1999), the Zhang and McFarlane (1995) scheme, a modified Kuo scheme (Krishnamurti et al. 1983), and three versions of the Arakawa–Schubert (A–S) convective parameterization. Two versions of the A–S scheme are currently in use at the National Centers for Environmental Prediction (NCEP; Pan and Wu 1994) and the Goddard Space Flight Center (GSFC; Moorthi and Suarez 1992). The third version of the A–S scheme was formerly used at the U.S. Naval Research Laboratory (NRL) before it changed operationally to the Emanuel scheme. In this paper the six model formulations are identified according to the institution where the convective scheme was obtained; for example, the Emanuel scheme is referred to as “MIT” and the Kuo scheme as “FSU.”

3. Experimental details

The six models are integrated each season over a 12-yr period (start years 1986–97). The integrations commence on 1 November of the respective year and continue for 210 days (7 months), so the forecast seasons span 1987–98. The initial conditions for the atmospheric model are taken from the corresponding 1200 UTC European Centre for Medium-Range Weather Forecasts (ECMWF) analyses. The ocean initial conditions are taken from a continuous initialization procedure (LaRow and Krishnamurti 1998) in which, prior to coupling to the atmospheric model, the ocean model is forced by the FSU-observed wind stresses and relaxed (Newtonian relaxation) toward the Reynolds and Smith (1994) monthly mean SSTs. Outputs were saved once a day, and weekly and monthly mean fields were derived from the daily output. Anomalies were calculated using both the weekly and monthly mean fields and were defined with respect to the individual model’s climatology (i.e., bias corrected). This ensemble is called the multimodel (MM).
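To make this anomaly definition concrete, here is a minimal NumPy sketch of a bias-corrected anomaly calculation using a leave-one-year-out climatology (the cross-validation form applied later, in section 5b); the array layout and names are ours, not the FSU processing code:

```python
import numpy as np

def cross_validated_anomalies(x):
    """Leave-one-year-out anomalies: each year's climatology is the
    mean over all *other* years at the same lead time, so a forecast
    never contributes to its own reference (i.e., bias corrected).
    x has shape (n_years, n_leads)."""
    n = x.shape[0]
    loo_clim = (x.sum(axis=0, keepdims=True) - x) / (n - 1)
    return x - loo_clim

# Example: 12 years x 7 monthly means of a hypothetical SST index
rng = np.random.default_rng(0)
sst = 26.0 + rng.normal(size=(12, 7))
anomalies = cross_validated_anomalies(sst)   # same (12, 7) shape
```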

In addition, a control integration was conducted for the same 12 years using the NCEP convection scheme, the standard configuration of the FSU coupled model. For each year, a five-member ensemble was developed by varying the initial start date of the atmosphere using consecutive ECMWF 1200 UTC analyses centered on 1 November. The ocean initial conditions were the same as those used in the MM. This ensemble is called the multianalysis (MA). Weekly and monthly mean fields and anomalies were derived exactly as for the MM.

4. Skill measures

There exist many measures of skill [e.g., the Heidke skill score, linear error in probability space, the Brier skill score (BSS), the equitable threat score (ETS), the relative operating characteristics (ROC), and the root-mean-square (rms) error]. In this paper, we focus on the BSS, the ETS, and the ROC. Although we do not believe that the rms error is a good measure of model skill, for reasons we will point out in the next section, we also show rms error results there so that comparisons can be made to other coupled models.

The Brier score (Brier 1950) is defined as

B = \frac{1}{N} \sum_{i=1}^{N} \left( p_i - v_i \right)^2 , \qquad (1)

where p_i is the forecast probability of the event for the ith forecast and v_i = 1 if the event verified or v_i = 0 if it did not. The BSS can then be defined by

\mathrm{BSS} = 1 - \frac{B}{B_{\mathrm{ref}}} , \qquad (2)

where B_ref is the Brier score of a reference forecast, which may be taken as the climatological forecast B_CLIM = o(1 − o), with o the observed frequency of the event. Therefore a forecast with BSS ≤ 0 has no skill and a perfect deterministic forecast has BSS = 1.
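As an illustration, a minimal NumPy sketch of Eqs. (1) and (2), using the climatological reference B_CLIM = o(1 − o) as above (function and variable names are ours):

```python
import numpy as np

def brier_score(p, v):
    """Eq. (1): mean squared difference between forecast
    probabilities p and binary outcomes v (1 if the event verified)."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    return np.mean((p - v) ** 2)

def brier_skill_score(p, v):
    """Eq. (2) with a climatological reference, B_CLIM = o*(1 - o),
    where o is the observed frequency of the event."""
    o = np.mean(np.asarray(v, float))
    return 1.0 - brier_score(p, v) / (o * (1.0 - o))

# Four forecasts of one event: probabilities vs. outcomes
print(brier_skill_score([0.8, 0.2, 0.6, 0.1], [1, 0, 1, 0]))  # 0.75
```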

The ROC has long been used in other fields such as the medical sciences (e.g., Swets 1973; Erdreich and Lee 1981) but has only recently been adopted in the atmospheric sciences (Stanski et al. 1989). The ROC tests the performance of a probabilistic forecast by using the 2 × 2 contingency table of forecast versus observation shown in Table 1.

The hit rate and false alarm rate are defined from the contingency table as

H = \frac{\mathrm{hits}}{\mathrm{hits} + \mathrm{misses}} , \qquad (3)

F = \frac{\mathrm{false\ alarms}}{\mathrm{false\ alarms} + \mathrm{correct\ rejections}} . \qquad (4)

The hit rate and false alarm rate for forecast probabilities exceeding a given threshold are defined as

H(p_t) = \frac{1}{\bar{o}} \int_{p_t}^{1} o(p)\, g(p)\, dp , \qquad (5)

F(p_t) = \frac{1}{1 - \bar{o}} \int_{p_t}^{1} \left[ 1 - o(p) \right] g(p)\, dp , \qquad (6)

where

\bar{o} = \int_{0}^{1} o(p)\, g(p)\, dp \qquad (7)

is the climatological frequency of the event occurring, g(p) is the probability density function of the forecast probabilities, and o(p) is the observed frequency of the event given a forecast probability p. The ROC curve is a plot of H(p_t) versus F(p_t) for a set of threshold probabilities p_t between 0 and 1, where p_t is defined a priori. For a perfect forecast H(p_t) = 1 and F(p_t) = 0. The resulting area under the ROC curve (AROC) is a measure of forecast skill, with a perfect forecast having AROC = 1 and a no-skill forecast having AROC ≤ 0.5. The ROC is used at operational centers such as ECMWF (Buizza et al. 1998) and the International Research Institute (Mason and Graham 1999) to evaluate the performance of operational medium- and long-range ensemble forecasts.
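The following minimal sketch shows how H(p_t), F(p_t), and the AROC can be computed from a sample of forecast probabilities and binary outcomes, assuming a simple trapezoidal integration over a priori thresholds (names are ours):

```python
import numpy as np

def roc_points(p, v, thresholds):
    """H(pt) and F(pt), Eqs. (3)-(6), built from the 2 x 2 contingency
    table at each probability threshold pt. Assumes the sample holds
    at least one event and one non-event."""
    p, v = np.asarray(p, float), np.asarray(v, bool)
    H, F = [], []
    for pt in thresholds:
        warn = p >= pt                      # event forecast at this pt
        hits = np.sum(warn & v)
        misses = np.sum(~warn & v)
        fa = np.sum(warn & ~v)              # false alarms
        cr = np.sum(~warn & ~v)             # correct rejections
        H.append(hits / (hits + misses))
        F.append(fa / (fa + cr))
    return np.array(F), np.array(H)

def aroc(p, v, thresholds=np.linspace(0.0, 1.0, 11)):
    """Area under the ROC curve (trapezoidal rule): 1 is a perfect
    forecast; <= 0.5 indicates no skill."""
    F, H = roc_points(p, v, thresholds)
    order = np.argsort(F)                   # integrate along F
    return np.trapz(H[order], F[order])
```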
The ETS (Gilbert 1884; Doswell et al. 1990) is given by

\mathrm{ETS} = \frac{H - H_r}{F + O - H - H_r} , \qquad (8)

where H_r is the number of hits expected by chance,

H_r = \frac{F \times O}{T} , \qquad (9)

F is the number of grid boxes that forecast more than a threshold, O is the number of grid boxes that observe more than the threshold, H is the number of grid boxes that correctly forecast more than the threshold, and T is the total number of grid boxes inside the verification domain.
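A minimal sketch of Eqs. (8) and (9) applied to gridded forecast and observed fields (names and the thresholding convention are ours):

```python
import numpy as np

def equitable_threat_score(fcst, obs, threshold):
    """Eqs. (8)-(9) over a verification domain; fcst and obs are
    gridded fields and threshold is the threat level (e.g., mm/day)."""
    f = np.asarray(fcst) >= threshold
    o = np.asarray(obs) >= threshold
    T = f.size                  # total grid boxes in the domain
    F = f.sum()                 # boxes forecast above the threshold
    O = o.sum()                 # boxes observed above the threshold
    H = np.sum(f & o)           # boxes correctly forecast (hits)
    Hr = F * O / T              # hits expected by chance, Eq. (9)
    denom = F + O - H - Hr
    return (H - Hr) / denom if denom else np.nan
```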

a. Note on ROC scores

The definition of the false alarm rate, with the correct rejections (CRs) in the denominator, can lead to a false expectation of skill. This can occur if the ROC is calculated over a large spatial domain in which the number of CRs is large for events that are not climatologically expected to occur. The problem is most apparent for the precipitation field, since certain large subtropical regions climatologically receive very little or no rain: the model is given credit for skill simply for not predicting rain where it is not climatologically favored. Stephenson (2000) proposed the “odds ratio” as a means of quantifying skill in a dichotomous weather event. This measure is less sensitive to hedging than other skill measures, can be defined in terms of the hit rate and false alarm rate, and might therefore be a better alternative to the ROC score in such regions.
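For reference, the odds ratio follows from the same 2 × 2 contingency-table counts; a one-function sketch (our naming, following Stephenson 2000):

```python
def odds_ratio(hits, misses, false_alarms, correct_rejections):
    """Stephenson's (2000) odds ratio: the odds of a hit divided by
    the odds of a false alarm; values > 1 indicate skill. Equivalently
    theta = [H/(1 - H)] / [F/(1 - F)] in terms of the hit rate H and
    false alarm rate F of Eqs. (3)-(4)."""
    return (hits * correct_rejections) / (misses * false_alarms)
```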

5. Deterministic results

This study focuses on the SST and precipitation fields, since these are arguably the two most important fields from both a coupled model and a seasonal forecast standpoint.

a. SST bias

The December–February (DJF) 12-yr-averaged tropical Pacific (30°N–30°S, 120°E–75°W) SST biases for the MM are shown in Fig. 1. The individual average SST biases are calculated with respect to the NCEP model in order to highlight the impact of the convective parameterizations on the mean SST field of the control model. The NCEP bias itself is determined by subtracting the Reynolds and Smith (1994) observed SSTs from the seasonal SST of the NCEP model. The MA SST bias is not shown since it is nearly identical to the NCEP SST bias shown in Fig. 1d.

The NCEP configuration (Fig. 1d) shows an area of colder-than-observed temperatures (−0.5° to −1.5°) confined to the eastern Pacific and another cold area along the equator between 160°E and the date line. In the western Pacific and along the east coast of Australia the NCEP scheme has a +0.5° warm bias. The other five ensemble members show marked differences from the NCEP bias. Three of the five models (MIT, NCAR, and NRL) have colder western Pacific and warmer eastern Pacific temperatures. All models show an area of cold bias in the eastern Pacific when compared against the observed values; this cold bias develops within the first two months and is likely a dynamical ocean response resulting from the lack of explicit subsurface initialization. Within the equatorial waveguide region, the NRL model exhibits the largest cold bias, with an area-averaged value of −1.0°, while the FSU model is the warmest, with an area-averaged value of +1.0°. Throughout the tropical Pacific, all models exhibit large differences from the NCEP model. The fact that these biases are as large as the signal to be predicted lends some support to this type of multimodel ensemble approach.

b. SST plumes and anomalies

The monthly SST plumes for all seven months from the MM and MA ensembles in the Niño-3 (5°S–5°N, 150°–90°W) and Niño-3.4 (5°S–5°N, 170°–120°W) regions are shown in Figs. 2 and 3, respectively. The dashed lines show the observed SSTs, with open circles marking the individual months. The basic observed pattern in the Niño-3 region is for the SSTs to increase during the first six months of each forecast, followed by a decrease in May with the formation of the seasonal equatorial cold tongue. In the Niño-3 region, both the MM and MA ensemble members reproduce this SST increase during the 12 years, with slopes and magnitudes similar to the observed. Some of the MM members show excessive warming compared to the observed during the last three months; this is especially true during the 1994/95 and 1995/96 forecasts.

In the Niño-3.4 region the observed SSTs show greater intraseasonal variability over the 12-yr period than in the Niño-3 region. This region is also characterized by larger variation among the MM ensemble members than was found in the Niño-3 region. The larger MM variation results from the fact that the Niño-3.4 region is less dominated by ocean dynamics, so the influence of the convective parameterization schemes can be seen more clearly. For example, during the 1987/88 forecast three of the six models forecast a decrease in the SSTs during the first four months followed by an increase in months 5 and 6, whereas the MA failed to show any decrease. A similar pattern, in which the MM spread differs markedly from the MA spread, is found during the 1997/98 forecast: the observed SSTs decrease from 29.5° to 28.5°C, the MA ensemble is almost constant during the period, while the MM ensemble shows three models increasing in temperature and three models decreasing, similar to the observed pattern.

The Niño-3 and Niño-3.4 monthly SST anomaly plumes for the MM and MA ensembles are shown in Figs. 4 and 5, respectively. For comparison purposes the “observed” anomalies (shown in black with open circles) are calculated using the climatology derived from the 1986–97 period; cross validation was used in the calculation of the anomalies. The MM and MA ensembles simulated the observed trends in the anomalies, although large-magnitude anomalies proved more difficult for both ensembles to predict. During 1987/88 both ensembles failed to simulate the large negative anomalies (<−2°C) observed in the Niño-3 region during the last two months of the integration; however, the MM simulated larger negative anomalies than the MA, with one MM member reaching −0.7°C by April 1988 compared to the observed value of −1.7°C and the MA value of −0.2°C. Similar difficulty in predicting large positive anomalies at longer lead times (>5 months) was found in the Niño-3 region during the 1996/97 forecast. With only 12 years of runs, it is difficult to say whether this result is due to insufficient sample size or to a fundamental problem with the FSU coupled model or the initialization scheme. Offline multidecadal integrations show that the coupled model (control model) produces SST anomalies with ENSO-like frequency in the Niño-3 region, with somewhat weak magnitudes close to 1.5°C (not shown).

In both the Niño-3 and Niño-3.4 regions the spread among the individual members of the MM is larger than that of the MA. During the first six years the effect of the initial conditions on the evolution of the MA SST anomalies in the Niño-3 and Niño-3.4 regions is small out to lead times of seven months, with all members of the MA tending to cluster together. The sensitivity to the initial conditions becomes greater during the next five years (1992–96), with the spread within the ensemble becoming larger by month 6 of the integration. This increase in sensitivity during the 1990s corresponds to the period in which predictive skill declined in some coupled models (Ji et al. 1996; Latif et al. 1997). During the strong ENSO event of 1997/98 all members of the MA ensemble again cluster together during the entire 7-month integration.

The MM shows larger spread than the MA during all 12 years, with the spread becoming apparent by month 3. In contrast to the MA, the MM members do not tend to cluster together during the first six years of the integration but maintain noticeable spread throughout all 12 years, with the MM straddling the observed values during most of the years. This fact is highlighted by the variances in Table 2: for each month of each year the variance in the Niño-3 and Niño-3.4 regions is calculated, and the monthly averages are shown for the MM and the MA. For all seven months of the integration the MA variance remains an order of magnitude smaller than that of the MM. This is seen as a possible weakness of the lagged forecasting approach for seasonal prediction, since there is very little variability among the MA ensemble members.

The SST variances for the MM and MA increase with forecast length, although the rate of increase declines with forecast time. The largest increase occurs during months 1 and 2, indicating a rapid adjustment away from the initial conditions. The influence of the convective parameterizations on the evolution of the SST field is small during the first month (variances of 0.04 and 0.03°C² in the Niño-3 and Niño-3.4 regions, respectively); however, this is still 4 and 3 times larger than the MA. The smaller variance in the first month arises primarily because the evolution of the SST field in the eastern tropical Pacific tends to be dominated by the ocean’s subsurface initial conditions.

Figure 6 shows the 7-month Niño-3.4 average SST drift and average absolute SST along with the observed values. The observations were calculated from the forecast period and therefore have higher values (shown with the dash–dotted line) than if a longer-period climatology were used. The mean SST drift shows considerable variability among the MM members in both the Niño-3.4 and Niño-3 (not shown) regions during the 7-month forecasts, while the MA members tend to cluster together. Similarly, the MM absolute SSTs spread much more than the MA, and their range of solutions encompasses the observed values much better. Both ensembles show the warming trend found in the observations.

c. SST rms error and anomaly correlation

The SST rms error and anomaly correlation in the Niño-3 and Niño-3.4 regions are shown in Fig. 7. In both regions, the MM and MA have lower rms errors than persistence at lead times greater than 3 months. The MA, with its smaller SST variance, generally has the lower rms error at short lead times, while the MM, with the larger variance, has the smaller rms error at longer lead times. The MM and MA rms error curves parallel each other during the 7-month integrations, suggesting that the two ensemble methods are not independent of each other. As with the rms error, the MM has the lower anomaly correlation, the highest correlations belonging to the MA ensemble. Although the differences are small, both ensembles show a sharp decline in the anomaly correlation at lead times greater than three months as the forecasts proceed into the boreal spring season. Based on these results alone one might conclude that the MA is slightly more skillful than the MM. However, as was shown in Table 2 and Figs. 4, 5, and 6, the members of the MA tend to cluster together, which can lead to smaller rms errors and higher anomaly correlations if the ensemble members happen to be close to the observations. In addition, the outliers associated with the MM can penalize its rms error skill scores.

d. Precipitation bias

The MM precipitation biases over North and South America and adjacent oceans for DJF are shown in Fig. 8. The bias was calculated as model minus observed, using the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP; Xie and Arkin 1997) monthly precipitation. All six models capture the basic climatological patterns over North America, with the highest precipitation amounts along western North America and a secondary maximum over the eastern United States. However, the models tend to produce too much precipitation along the west coast of North America (almost double the observed amount of 6 mm day−1) and too little in the southeastern United States. There the FSU, GSFC, and MIT models underestimated the observed rainfall amounts by more than 50%, while the NRL model produced almost no bias, with precipitation amounts of 4 mm day−1 centered over Louisiana, Mississippi, and Alabama (not shown). It should be noted that much of the wintertime precipitation over North America is associated with large-scale frontal systems rather than individual convective systems. However, the locations of the midlatitude storm tracks are directly influenced by tropical Pacific deep convection via Rossby wave dynamics. The deep convective activity in the Tropics tends to be collocated with the higher SST anomalies and, as noted above, the differences in the SST biases and plumes from the various convective schemes can be substantial in the tropical Pacific.

The DJF MM ensemble precipitation biases show similar large-scale features among the various convective parameterizations. A dry bias exists in all models over northern Brazil, in the South Pacific region, and off the east coast of North America. A southward shift in the location of the ITCZ in the Pacific results in a wet/dry bias couplet in the eastern Pacific. The NRL model is the only one that does not show this couplet; instead it maintains a dry bias throughout the eastern Pacific. In addition, the NRL model is the only model that does not show a precipitation bias in the southeastern United States.

The bias score, calculated as the ratio of the forecast amount to the observed amount, shows that globally all models underpredict events for threats less than 6 mm day−1 (not shown). Similar results hold for the Brazil and southeastern United States domains. Over the western United States all models have a bias close to one for threats less than 3 mm day−1, with the biases increasing sharply for threats greater than 4 mm day−1. Only in the southeastern U.S. domain do the MA ensemble members show variability; in all other domains the spread of the MA bias is negligible.

The average precipitation ETS values for four selected domains are shown in Fig. 9. In three domains the MM ETS is higher than the MA, with the two curves paralleling each other in all domains. Only over North America does the MA have the higher precipitation ETS; this is because the MM fails to place the precipitation amounts in the correct locations compared to the MA and is therefore penalized in the ETS calculation. The highest scores (close to 0.5) are seen over Brazil for small threats of 1 mm day−1 and over North America for threats of 3 mm day−1. The Tropics have relatively uniform scores of about 0.35 for both the MM and MA. The low scores at the higher threat values reflect the fact that the coupled model fails to reproduce the observed frequency of large rainfall amounts.

6. Probabilistic score

a. SST Brier skill score

The BSS for both the MM (BSSMM) and the MA (BSSMA) are shown in Table 3, along with the Brier score of the climatological forecast (BCLIM) and the observed frequency of the event, o.

As shown in Table 3, increasing the ensemble size (combining the MM and MA) yields an increase in the BSS over both BSSMA and BSSMM 83% of the time across all threats; this increase occurs mostly at the small threats. Compared to the MA, the MM has the higher BSS for the large negative threats, while the MA has the higher BSS for the large positive threats. Both the MM and MA show their largest BSS at threats of 0.2°C.

b. SST ROC

The areas under the SST ROC curves (AROC) for threats of 0.0°, ±0.5°, and ±1.0° in the Niño-3 and Niño-3.4 regions, for all years/all months (84 months) and for all DJF (36 months), from the MM and MA are shown in Tables 4 and 5, respectively. The highest values for a particular threat and domain are shown in bold. All of the MM and MA values in Tables 4 and 5 exceed the no-skill value of 0.5. For all years and all months the MM AROC exceeds the MA in 70% of the cases; only in the Niño-3.4 region for threats greater than ±1° does the MA have a higher score. The skill of the SST forecast in the Niño-3 region does not appear to be strongly influenced by the size of the ensemble.

Although the sample size in Table 5 is small (only 36 members), the DJF predictive skill of both the MM and MA exceeds the no-skill value of 0.5 in the Niño-3 and Niño-3.4 regions for all threats. The MM has higher AROC values in the Niño-3 region than in the Niño-3.4 region; conversely, the MA shows higher values in the Niño-3.4 region. The MA AROC scored higher than the MM only for the negative threats in the Niño-3.4 region. Combining the MM and MA slightly increases the skill of the SST forecast over either ensemble alone: the combined skill score was greater than the MM in 4 of the 12 cases (33% of the time) and greater than the MA 50% of the time.

For threats >1.0°, the perfect AROC values are misleading since the number of observed threats is small (Nobs = 3); both the MM and MA predicted these events with no false alarms or misses. In addition, the larger positive threat hits during the DJF forecasts are associated with the SSTs early in the forecast period (see Figs. 4 and 5) and are therefore generally a reflection not of predictive skill but of the initialization of the coupled model. It is suggested that, for the coupled ocean–atmosphere system, the AROC is not an accurate measure of skill for fields whose memory is long relative to the forecast period, for example, the SST field; in this case the ROC is largely a measure of the initialization scheme.

c. Precipitation probabilistic skill

The tropical Pacific SSTs are known to influence the global circulation, especially in the extratropics, through diabatic heating. In this section we examine the precipitation skill scores over regions of North and South America using observations from the CMAP dataset.

Figure 10 shows the spatial precipitation ROC pattern of the MM and MA over North and South America and adjacent oceans for all DJF for threats of 0.5, 1.0, and 2.0 mm day−1. ROC values greater than or equal to 0.5 are colored; where the ROC value is less than 0.5 and there was at least one event at the threat, the area is shaded gray. Plotting the ROC this way highlights regions where the ensembles show skill versus regions where they do not despite the occurrence of at least one event. Areas left blank are locations where no events occurred during the period.

For all three precipitation threats, the AROC is highest over the tropical Pacific for both ensembles. Over North and South America the MM ensemble shows more areas with skill (fewer gray areas) than the MA for all three threats. In addition, the MM is able to produce the higher threat amounts of 2 mm day−1 in the southeastern United States and in Brazil. Over Brazil, the MA failed to show skill, with AROC values below 0.5 for all threats (Table 6). As shown in Table 6, the MM AROC values over Brazil are even higher than the combined-ensemble values for all threats, showing that, at least for this region, an increase in ensemble size does not necessarily translate into increased skill; the very low MA AROC over Brazil contributed to the combined ensemble failing to score higher than the MM.

For comparison with the AROC table, the DJF global tropical precipitation reliability diagram for both the MM and MA ensembles is shown in Fig. 11. For all three threats (0.5, 1.0, and 2.0 mm day−1) the MM (solid line) shows greater reliability than the MA, especially at the larger threats. Even at the 2.0 mm day−1 threat, where Table 6 shows the MM not scoring higher than the MA over the tropical Pacific, the diagram demonstrates that the MM has the greater reliability. The greater MM spread shown in Table 2 contributed to this greater reliability.
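For readers reproducing a diagram like Fig. 11, a minimal sketch of the binning behind a reliability curve, assuming a 10-bin partition of forecast probability (names are ours):

```python
import numpy as np

def reliability_curve(p, v, n_bins=10):
    """Per probability bin, return the mean forecast probability and
    the observed relative frequency; a perfectly reliable forecast
    system lies on the diagonal of the reliability diagram."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    mean_p = np.full(n_bins, np.nan)
    obs_freq = np.full(n_bins, np.nan)
    for k in range(n_bins):
        in_bin = idx == k
        if in_bin.any():
            mean_p[k] = p[in_bin].mean()
            obs_freq[k] = v[in_bin].mean()
    return mean_p, obs_freq
```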

7. Summary and conclusions

This paper examined the use of multiple convective parameterizations as a multimodel proxy for seasonal climate studies. Six different atmospheric convective schemes make up the multimodel proxy in the FSU coupled ocean–atmosphere model. The six model formulations were each run for 12 years, commencing on 1 November of each year (1986–97) and continuing for 210 days. A control integration was also performed in which a single convective scheme was used and the ensemble was generated by varying the atmospheric initial conditions by a few days centered on 1 November of the respective year. The ocean initial conditions were identical for the MM and MA ensembles and were taken from a continuous ocean initialization procedure.

The influence of the convective parameterizations on the evolution of the tropical Pacific SSTs was found to be substantial. During DJF (1-month lead forecasts) the MM tropical Pacific (30°N–30°S) SST biases were found to differ by up to ±1.0° from the MA. These biases developed within the first few months of the integrations and persisted throughout the rest of the integration period. All models were found to have a cold bias in the eastern Pacific, although in some models (e.g., FSU) the bias was relatively small. Compared to the MA ensemble, the FSU member produced the warmest tropical Pacific SSTs while the NRL member was the coldest. Associated with these SST differences were large differences in the teleconnected precipitation patterns over North America; in fact, the NRL model, with the coldest tropical Pacific waters (relative to the NCEP model), produced the smallest DJF precipitation bias over North America of the six schemes.

Examination of the DJF Niño-3 and Niño-3.4 SST variances from the MM and MA shows that the MM has variances an order of magnitude larger than the MA at lead times greater than one month. Variances for both ensembles continue to increase with forecast length, the MM reaching the largest variance of 0.75°C² in April in the Niño-3.4 region; the MA reached a maximum variance of only 0.14°C², in the Niño-3 region in May.

The SST rms error and anomaly correlation showed that the MA and MM have relatively similar rms errors for all seven months of the integration. The precipitation ETS showed that the MM has greater skill than the MA over all regions selected except North America. Probabilistic scoring of the precipitation and SSTs using the ROC shows that the MM has greater skill over most of the globe, including North America. This results from the fact that the MM solutions span a larger range of response than the MA and tend to encompass the observed values more often; thus the MM has greater reliability. Increasing the ensemble size by combining the MM and MA ensembles did produce slightly higher AROC values than either ensemble method alone. In the vast majority of cases (80%) the SST MM AROC scored higher than the MA ensemble, considering the Niño-3.4 and Niño-3 regions together; for the Niño-3 region alone, the SST MM AROC was greater than or equal to the MA 100% of the time. In general, for all domains chosen, the MM precipitation showed greater skill (AROC and ETS) than the MA forecast. This is not too surprising given the small range of response of the MA ensembles to the varying of the atmospheric initial conditions, a fact highlighted by the SST anomaly plume and mean SST drift diagrams. It could be argued that the atmospheric initial conditions did not adequately sample the phase space, since we only went back a maximum of three days and forward two days centered on 1 November. We feel, however, that increasing the MA ensemble size further by selecting dates farther from 1 November would not substantially alter the MA results. It is believed that by perturbing the ocean initial states (as is done by some of the DEMETER models) we could obtain greater model spread and reliability than by perturbing the atmosphere alone.

Although the analysis is far from complete and the ensemble sizes are relatively small, the large variations the MM ensemble exhibits in the SST and precipitation fields highlight it as a potential multimodel proxy for seasonal climate studies.

Acknowledgments

Computations were performed using the IBM SP4 at FSU. COAPS receives its base support from the Applied Research Center, funded by NOAA’s Office of Global Programs awarded to Dr. James J. O’Brien. The authors thank two anonymous reviewers and Noel Keenlyside for their insightful and helpful comments.

REFERENCES

  • Brier, G. W., 1950: Verification of forecasts expressed in terms of probabilities. Mon. Wea. Rev., 78, 1–3.

  • Buizza, R., T. Petroliagis, T. N. Palmer, J. Barkmeijer, M. Hamrud, A. Hollingsworth, A. Simmons, and N. Wedi, 1998: The impact of model resolution and ensemble size on the performance of an ensemble prediction system. Quart. J. Roy. Meteor. Soc., 124, 1935–1960.

  • Charney, J. G., and J. Shukla, 1981: Predictability of monsoons. Monsoon Dynamics, J. Lighthill and R. Pearce, Eds., Cambridge University Press, 99–110.

  • Cocke, S. D., and T. E. LaRow, 2000: Seasonal predictions using a regional spectral model embedded within a coupled ocean–atmosphere model. Mon. Wea. Rev., 128, 689–708.

  • Déqué, M., J. F. Royer, and R. Stroe, 1994: Formulation of Gaussian probability forecast based on model extended-range integrations. Tellus, 46A, 52–65.

  • Doswell III, C. A., R. Davies-Jones, and D. L. Keller, 1990: On summary measures of skill in rare event forecasting based on contingency tables. Wea. Forecasting, 5, 575–586.

  • Emanuel, K. A., and M. Zivkovic-Rothman, 1999: Development and evaluation of a convective scheme for use in climate models. J. Atmos. Sci., 56, 1766–1782.

  • Erdreich, L. S., and E. T. Lee, 1981: Use of relative operating characteristics analysis in epidemiology: A method for dealing with subjective judgment. Amer. J. Epidemiol., 114, 649–662.

  • Gilbert, G. K., 1884: Finley’s tornado predictions. Amer. Meteor. J., 1, 166–172.

  • Ji, M., A. Leetmaa, and V. Kousky, 1996: Coupled model predictions of ENSO during the 1980s and the 1990s at the National Centers for Environmental Prediction. J. Climate, 9, 3105–3120.

  • Krishnamurti, T. N., S. Low-Nam, and R. Pasch, 1983: Cumulus parameterization and rainfall rates II. Mon. Wea. Rev., 111, 816–828.

  • Krishnamurti, T. N., C. Kishtawal, T. E. LaRow, D. Bachiochi, Z. Zhang, C. E. Williford, S. Gadgil, and S. Surendran, 1999: Improved weather and seasonal climate forecasts from multimodel superensemble. Science, 285, 1548–1550.

  • Krishnamurti, T. N., C. Kishtawal, Z. Zhang, T. E. LaRow, D. Bachiochi, and C. E. Williford, 2000: Multimodel ensemble forecasts for weather and seasonal climate. J. Climate, 13, 4169–4216.

  • LaRow, T. E., and T. N. Krishnamurti, 1998: Initial conditions and ENSO prediction. Tellus, 50A, 76–98.

  • Latif, M., R. Kleeman, and C. Eckert, 1997: Greenhouse warming, decadal variation, or El Niño: An attempt to understand the anomalous 1990s. J. Climate, 10, 2221–2239.

  • Levitus, S., 1982: Climatological Atlas of the World Ocean. NOAA Prof. Paper 13, 173 pp. and 17 microfiche.

  • Mason, S. J., and N. E. Graham, 1999: Conditional probabilities, relative operating characteristics, and relative operating levels. Wea. Forecasting, 14, 713–725.

  • Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: Methodology and validation. Quart. J. Roy. Meteor. Soc., 122, 73–119.

  • Moorthi, S., and M. J. Suarez, 1992: Relaxed Arakawa–Schubert: A parameterization of moist convection for general circulation models. Mon. Wea. Rev., 120, 978–1002.

  • Palmer, T. N., and D. L. T. Anderson, 1994: The prospects for seasonal forecasting. Quart. J. Roy. Meteor. Soc., 120, 755–793.

  • Palmer, T. N., and J. Shukla, 2000: Editorial to DSP/PROVOST special issue. Quart. J. Roy. Meteor. Soc., 126, 1989–1990.

  • Palmer, T. N., C. Brankovic, and D. S. Richardson, 2000: A probability and decision-model analysis of PROVOST seasonal multi-model ensemble integrations. Quart. J. Roy. Meteor. Soc., 126, 2013–2033.

  • Palmer, T. N., and Coauthors, 2004: Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER). Bull. Amer. Meteor. Soc., 85, 853–872.

  • Pan, H-L., and W. S. Wu, 1994: Implementing a mass flux convection parameterization package for the NMC MRF model. Preprints, Tenth Conf. on Numerical Weather Prediction, Portland, OR, Amer. Meteor. Soc., 96–98.

  • Reynolds, R. W., and T. M. Smith, 1994: Improved global sea surface temperature analyses using optimum interpolation. J. Climate, 7, 929–948.

  • Stanski, H. R., L. J. Wilson, and W. R. Burrows, 1989: Survey of common verification methods in meteorology. World Weather Watch Tech. Rep. 8 (WMO/TD-358), World Meteorological Organization, Geneva, Switzerland, 114 pp.

  • Stephenson, D. B., 2000: Use of the “odds ratio” for diagnosing forecast skill. Wea. Forecasting, 15, 221–232.

  • Swets, J. A., 1973: The relative operating characteristic in psychology. Science, 182, 990–999.

  • Trenberth, K. E., G. W. Branstator, D. Karoly, A. Kumar, N. C. Lau, and C. Ropelewski, 1998: Progress during TOGA in understanding and modeling global teleconnections associated with tropical sea surface temperatures. J. Geophys. Res., 103, 14291–14324.

  • Xie, P., and P. A. Arkin, 1997: Global precipitation: A 17-year monthly analysis based on gauge observations, satellite observations, and numerical model output. Bull. Amer. Meteor. Soc., 78, 2539–2558.

  • Zhang, G. J., and N. A. McFarlane, 1995: Sensitivity of climate simulations to the parameterization of cumulus convection in the Canadian Climate Centre general circulation model. Atmos.–Ocean, 33, 407–446.

Fig. 1. Tropical Pacific 1-month lead time DJF SST bias for the six different convective schemes. Negative values are shaded. Contour interval is 0.5°. The zero contour line is suppressed.

Fig. 2. Niño-3 SST plumes for all seven months of the 12 years: (top) the multimodel (MM) and (bottom) multianalysis (MA). Observed plumes are shown with the dashed line and calculated using the Reynolds and Smith (1994) data. The open circles represent each of the seven months.

Fig. 3. As in Fig. 2, except for SST plumes in the Niño-3.4 SST region.

Fig. 4. Niño-3 SST anomaly plumes for all seven months of the 12 years: (top) MM and (bottom) MA. Observed plumes as in Fig. 2.

Fig. 5. As in Fig. 4, except for SST anomaly plumes in the Niño-3.4 region.

Fig. 6. (left) Niño-3.4 mean SST drift and (right) mean absolute SST. Solid lines are the MM, dashed lines are the MA, and the dash–dotted line is the observed in the absolute SST plots. Observed values are the Reynolds and Smith (1994) SSTs. Units are °C.

Fig. 7. (left column) The Niño-3 and Niño-3.4 SST rms errors for the seven months from the entire 12-yr period. The MM is shown with the solid line, MA is shown with the dashed line. Persistence is given by the dash–dot line. (right column) The Niño-3 and Niño-3.4 anomaly correlation for the same time period as in the (left column), solid line is the MA and the dashed line is the MM. Anomalies are calculated with respect to the NCEP version 2 optimum interpolation (OIv2) 1971–2000 climatology.

Fig. 8. DJF precipitation bias from the six different convective schemes. Bias is determined over the 12-yr period and is calculated by model minus observed where the monthly CMAP observed precipitation was used. Negative values are shaded. Contour interval is 1 mm day−1. Zero contour line is suppressed.

Fig. 9. DJF MM and MA precipitation ETS for four selected domains. The MM (solid lines) and MA (dashed lines). (top left) The global ETS, (top right) the Brazil domain ETS, (bottom left) the ETS for North America, and (bottom right) the ETS for the global Tropics. CMAP observed precipitation is used. Equitable threats are in mm day−1.

Fig. 10. DJF spatial pattern of the precipitation AROC values for threats of 0.5, 1.0, and 2.0 mm day−1. (top row) MM and (bottom row) MA. Color shades for AROC ≥ 0.5. Gray shades are for areas where there existed at least one threat but an AROC < 0.5.

Fig. 11. DJF tropical precipitation reliability diagram for both the MM and MA ensembles. Threat values are 0.5, 1.0, and 2.0 mm day−1. Solid line is the MM and the dashed line is the MA. The solid diagonal line is the perfect reliability line.

Table 1. The 2 × 2 ROC contingency table.

Table 2. Monthly SST variances in the Niño-3 and Niño-3.4 regions for all seven months for the multimodel and multianalysis.

Table 3. The DJF 1-month lead SST Niño-3.4 region Brier skill score for the MA (BSSMA) and the MM (BSSMM) and the combined MM plus MA (BSSMM+MA) for various threats. Highest BSS values are shown in bold. Also shown are the Brier score for a climatological forecast (BCLIM) and the observed frequency (o).

Table 4. The SST AROC scores for all years and all months (N = 84) in the Niño-3 and Niño-3.4 domains. Highest AROC values are shown in bold; Nobs is the observed number of occurrences in months. Combined is the MM + MA ensemble.

Table 5. As in Table 4, but the SST AROC scores are for all DJF (N = 36) in the Niño-3 and Niño-3.4 domains at 1-month lead time. Highest area under the ROC curve is shown in bold.

Table 6. One-month lead precipitation AROC for all DJF (N = 36). Highest AROC values between the MM and MA are in bold. Combined is the MM + MA.
