1. Introduction
There has been growing interest in forecasts at subseasonal time scales (i.e., 3–4 weeks; National Research Council 2010; National Academies of Sciences, Engineering, and Medicine 2016), which fill the gap between medium-range weather forecasts and seasonal prediction. The Madden–Julian oscillation (MJO; Madden and Julian 1971), the primary mode of tropical intraseasonal climate variability in boreal winter and spring, is considered a major source of global predictability on the subseasonal time scale (e.g., Waliser 2011). With advances in models and initialization techniques (e.g., Vitart 2014), marked improvements in dynamical MJO predictions have been reported, and their skill now exceeds that of empirical predictions (Kim et al. 2018). For example, at the National Centers for Environmental Prediction (NCEP), Wang et al. (2014) found that the Climate Forecast System, version 2 (CFSv2), had useful MJO prediction skill out to 20 days, significantly better than its previous version (CFSv1), whose skillful predictions extended 10–15 days (Seo et al. 2009). Similar skill has been reported for dynamical MJO predictions at other operational centers, such as with the Predictive Ocean Atmosphere Model for Australia (POAMA; Rashid et al. 2011), at the European Centre for Medium-Range Weather Forecasts (ECMWF; Vitart et al. 2010; Vitart 2014), and at the Beijing Climate Center, China (Liu et al. 2017). Vitart et al. (2017) and Lim et al. (2018) summarized the latest dynamical MJO prediction capability by evaluating MJO predictions in models participating in the World Weather Research Program–World Climate Research Program (WWRP–WCRP) Subseasonal to Seasonal (S2S) Prediction Project.
Against the backdrop of recent advances in MJO prediction skill, it remains an open question to what extent further gains in dynamical MJO prediction skill can be achieved. One methodology to address this question is to estimate the predictability of the MJO, an intrinsic property of the climate system that quantifies the upper limit of MJO prediction skill. Compared with the quantification of MJO prediction skill, however, there have been relatively few attempts to characterize MJO predictability. The “perfect model” approach, first introduced to MJO research by Waliser et al. (2003), is a commonly used way to characterize MJO predictability: it assesses a model’s ability to predict its own MJO variability with forecasts starting from slightly perturbed initial states. Applying this approach with an AGCM, Waliser et al. (2003) demonstrated that the predictability of the MJO extended to about 25–30 days for upper-level circulation fields and to about 10–15 days for precipitation. A similar predictability time scale was found by Reichler and Roads (2005) with a different AGCM. Later, a longer time horizon for MJO predictability was suggested with coupled GCMs (e.g., Fu et al. 2008; Pegion and Kirtman 2008). The early models used in these MJO predictability studies, however, were generally poor at simulating the MJO (e.g., Zhang et al. 2006).
In recent years, as extensive hindcast datasets (e.g., the S2S hindcast dataset) became available, MJO predictability was reevaluated (e.g., Rashid et al. 2011; Kim et al. 2014; Neena et al. 2014; Liu et al. 2017). For example, Neena et al. (2014) conducted a comprehensive analysis of MJO predictability based on hindcasts from eight coupled models participating in the Intraseasonal Variability Hindcast Experiment (ISVHE) and found that the estimated predictability was highly model dependent, ranging from 30 to more than 45 days. The recent review by Kim et al. (2018) not only synthesized the latest progress in MJO prediction but also briefly described progress in MJO predictability, studies of which were mostly based on hindcast datasets. Although the existing hindcast datasets provide an opportunity to update previous MJO predictability estimates, they are not sufficient to isolate the influence of individual factors (e.g., the atmospheric convection scheme) on the uncertainties in predictability estimates, because these datasets were generated with models that differ in physics, resolution, and initialization procedures, as well as in ensemble size [e.g., ranging from 4 to 11 in Neena et al. (2014)].
A further limitation of the hindcast-based predictability studies, despite their use of contemporary models, is that the perfect model approach (e.g., Waliser et al. 2003; Reichler and Roads 2005; Fu et al. 2008; Pegion and Kirtman 2008) is applied to ensembles of forecasts initialized from observational analyses. When forecasts are initialized from observed analyses, an initial imbalance (or initial shock) arises from mismatches between the model and observed states, and this shock can affect the estimates of MJO predictability. This setup can be contrasted with one in which a model’s forecasts are initialized from states taken from its own long-term simulation, thereby avoiding the influence of initial shock on the predictability estimate.
Understanding the possible reasons for uncertainties in current estimates of MJO predictability (Neena et al. 2014) is important because the gap between predictability and actual prediction skill provides the motivation for further model improvement efforts (National Research Council 2010); errors in predictability estimates therefore need to be quantified. Beyond the specifics of the forecast configuration used to estimate MJO predictability (e.g., the specification of initial conditions), MJO simulations have been found to be highly sensitive to the convective parameterization (e.g., Wang and Schlesinger 1999; Zhang and Mu 2005; Bechtold et al. 2008; Lin et al. 2008; Zhu et al. 2017b), the representation of air–sea coupling (e.g., Waliser et al. 1999; Kemball-Cook et al. 2002), and model resolution (particularly the atmospheric vertical resolution; Inness et al. 2001). Few studies, however, have addressed the influence of such sensitivities on predictability estimates, an influence that becomes even more difficult to disentangle when combined with the initial shock discussed above.
In this paper, we revisit MJO predictability within the perfect model framework using CFSv2 (Saha et al. 2014), the current operational model at NCEP. Our focus is on quantifying the effect of convection schemes on the estimate of MJO predictability in a setup where forecasts start from initial states taken from the model simulation itself, thereby avoiding the influence of initial shock. We focus on the convective parameterization in atmospheric models because it has been considered of foremost importance in shaping the characteristics of MJO simulations (e.g., Zhang et al. 2006). For example, studies suggest that the simulated MJO is strongly sensitive to the criteria for the onset of convection, such as the convective entrainment rate and the critical relative humidity (Wang and Schlesinger 1999; Zhang and Mu 2005; Bechtold et al. 2008; Lin et al. 2008). Improved MJO representation in the ECMWF Integrated Forecast System has been attributed to improved representations of convection and diffusion (Bechtold et al. 2008). Based on CFSv2, Zhu et al. (2017b) also found that MJO simulations strongly depend on the choice of convection scheme: the Relaxed Arakawa–Schubert (RAS) cumulus parameterization (Moorthi and Suarez 1992, 1999) in their study produced significantly better MJO eastward propagation than the simplified Arakawa–Schubert (SAS) cumulus parameterization (Pan and Wu 1995). Their diagnostics further indicated that RAS realistically represented the MJO-related air–sea interactions, whereas SAS unrealistically simulated the intraseasonal wind variability, resulting in significant biases in latent heat flux and SST variability. Unrealistic SST variations, in turn, degraded the MJO simulation by affecting SST-modulated heat fluxes and the boundary layer moisture convergence or surface moist static energy (e.g., Flatau et al. 1997; Maloney and Sobel 2004).
Given the critical role of convective parameterization in MJO simulations, it is possible that the large uncertainties in current estimates of MJO predictability (Neena et al. 2014) are largely due to differences in convective parameterization schemes. This possibility is addressed in this study by performing predictability experiments with the RAS and SAS schemes, respectively. In addition, we investigate the influence of the initial shock at the start of a model forecast on the estimate of MJO predictability, and we discuss the influence of convection schemes on the so-called Maritime Continent prediction barrier.
2. Model, experiments, and datasets
a. Model
As in the study of Zhu and Kumar (2019), CFSv2 (Saha et al. 2014) is used, but with one difference in the version used here. The atmospheric component of the CFSv2 (Saha et al. 2014) uses the 2007 version of the NCEP operational Global Forecast System (GFS), with the SAS cumulus parameterization (Pan and Wu 1995) as its convection scheme. In this study, the 2011 version of the GFS is used, but the model physics are still configured as in Saha et al. (2014). The 2011 version of the GFS includes, in addition to SAS, two other built-in convection schemes: the RAS scheme (Moorthi and Suarez 1992, 1999) and the SAS, version 2 (SAS2), scheme (Han and Pan 2011). In a study of how MJO-related tropical convection is simulated in the context of differences among various SST analyses, Wang et al. (2015) tested the three convection schemes in an uncoupled framework and found that the impacts of the SST analyses depend on the model physics. Zhu et al. (2017a) also applied the three schemes in a low-resolution version (Zhu et al. 2017c) of CFSv2 and demonstrated a significant effect of atmospheric convection schemes on ENSO predictions. In this study, RAS and SAS are used for the MJO predictability experiments, and the associated coupled model configurations are referred to as RASmod and SASmod, respectively. Our previous study (Zhu et al. 2017b) demonstrated that RASmod simulates the MJO more realistically than SASmod, which was related to a more realistic air–sea feedback in RASmod.
b. Model experiments
Initialized from the Climate Forecast System Reanalysis (CFSR; Saha et al. 2010) state on 1 January 1980, RASmod and SASmod were first integrated freely for 30 yr (Zhu et al. 2017b). After the 11th year of each simulation, restart files of the two free runs were saved daily for the ocean (and sea ice) and every 12 h for the atmosphere (and land). Based on these restart files, three sets of prediction experiments (referred to as RAS, SAS, and SAS_RASic) were conducted for the boreal winters of the second 10 yr of the model simulations (referred to as the “reference”). For each experiment, 45-day predictions were made every 5 days, starting from 1 November of each of the 10 model years until the end of the following March (31 cases in total for each winter), with nine ensemble members for each initial date. The nine ensemble members were generated by perturbing the atmospheric initial conditions from the reference simulations, that is, by adding a small fraction (1%–5%) of the atmospheric state difference between the initial time and 12 h before or after, as illustrated in the sketch below. For each of the three prediction experiments, there are in total 10 yr × 31 cases × 9 members (=2790) 45-day predictions. The prediction procedure, including the initialization, is similar to that in Zhu and Kumar (2019).
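As a schematic illustration of this perturbation procedure (not the operational code; the array names, the random draw of the fraction, and the choice of sign are assumptions made for the sketch), consider:

```python
import numpy as np

def perturb_initial_conditions(x0, x_minus12h, x_plus12h, n_members=9, seed=0):
    """Build ensemble initial conditions by adding a small fraction (1%-5%)
    of the atmospheric state difference between the initial time and 12 h
    before or after, as described above. The state arrays are assumed to be
    flattened atmospheric prognostic fields from the reference simulation."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        frac = rng.uniform(0.01, 0.05)          # small fraction: 1%-5%
        if rng.random() < 0.5:                  # difference w.r.t. 12 h before...
            diff = x0 - x_minus12h
        else:                                   # ...or 12 h after
            diff = x_plus12h - x0
        members.append(x0 + frac * diff)
    return np.stack(members)                    # (n_members, state_dim)
```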
In RAS and SAS, the prediction model was RASmod and SASmod, respectively, and the initial conditions were constructed from restart files saved during the corresponding free run (with nine ensemble members generated as described above). The predictions are verified against the RASmod or SASmod reference simulation. The comparison between RAS and SAS thus isolates the difference in MJO predictability estimates that results entirely from the convection schemes.
In SAS_RASic, the prediction model was SASmod, while its initial conditions were taken from the RASmod simulation. This experiment takes into account the fact that SASmod is poorer at simulating the MJO than RASmod, which is closer to observations (Zhu et al. 2017b; Fig. 1). In SAS_RASic, the RASmod state was treated as “observations” and used for forecast initialization and verification. The SAS_RASic configuration therefore mimics operational MJO predictions, in which a biased model produces forecasts starting from observed initial states. Because of initial imbalances, forecasts in SAS_RASic initially go through a spinup (or initial shock) period, and the predictability estimate based on it is similar to hindcast-based estimates (e.g., Rashid et al. 2011; Kim et al. 2014; Neena et al. 2014; Liu et al. 2017). Since this initial spinup does not exist in RAS, the comparison of SAS_RASic with RAS combines the impact of the convection parameterization on MJO predictions (in a perfect model framework) with the effect of the initial shock.
Fig. 1. Composite MJO life cycle of intraseasonal anomalies of OLR (W m−2; shading) and U850 (m s−1; contours) in (a) the CFSR reanalysis (a proxy for observations), (b) RASmod simulations, and (c) SASmod simulations. For each phase, the composite value is the average of the days when the MJO phase angle is within the phase and the MJO amplitude is greater than 1. Phase 8 is repeated as phase 0 for continuity of the display.
c. Analysis method and other datasets
To extract the MJO component in RASmod or SASmod, a procedure similar to that of Wheeler and Hendon (2004) is adopted: a combined empirical orthogonal function (EOF) analysis of the equatorially averaged model 850- and 200-hPa zonal winds (U850 and U200) and outgoing longwave radiation (OLR). Specifically, based on daily mean fields from the last 20 years of the RASmod or SASmod simulations, the following steps, as employed in Wang et al. (2014), are taken to define the MJO (the following text is adopted from there with minor modifications): 1) the daily climatology of U850, U200, and OLR is calculated as the annual mean plus the first four harmonics of the 20-yr average; 2) raw daily mean anomalies are computed as the deviation of the total fields from the climatology; 3) filtered anomalies are obtained by applying a 20–100-day bandpass filter to the raw daily mean anomalies; and 4) EOFs are computed for the combined OLR, U200, and U850 filtered anomalies averaged between 15°S and 15°N and normalized by the respective standard deviation of each field. The first two leading EOFs (not shown) are taken as the representation of the MJO in the RASmod and SASmod simulations, and their corresponding normalized principal components (PC1 and PC2) are used to define its amplitude [MJOamp = (PC1² + PC2²)^(1/2)] and phase angle {MJOpha = tan⁻¹[(−PC1)/PC2]}.
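The combined EOF construction and the amplitude/phase definitions above can be condensed into a short sketch (assuming the input fields have already been meridionally averaged over 15°S–15°N and 20–100-day bandpass filtered; names and shapes are illustrative):

```python
import numpy as np

def combined_eof_rmm(olr, u200, u850):
    """Combined EOF analysis following steps 1-4 of section 2c.
    Inputs are (time, longitude) arrays of meridionally averaged,
    20-100-day filtered anomalies."""
    # Step 4: normalize each field by its own standard deviation
    fields = [f / f.std() for f in (olr, u200, u850)]
    X = np.concatenate(fields, axis=1)            # (time, 3 * nlon)

    # Leading combined EOFs from an SVD of the anomaly matrix
    u, s, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    eof1, eof2 = vt[0], vt[1]

    # Principal components, normalized to unit variance
    pc1 = u[:, 0] * s[0]
    pc2 = u[:, 1] * s[1]
    pc1, pc2 = pc1 / pc1.std(), pc2 / pc2.std()

    # MJO amplitude and phase angle as defined in the text
    amp = np.sqrt(pc1**2 + pc2**2)
    pha = np.arctan2(-pc1, pc2)                   # tan^-1[(-PC1)/PC2]
    return eof1, eof2, pc1, pc2, amp, pha
```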
The reference and predicted MJO indices are obtained by projecting the reference and predicted anomalous fields onto the above two EOF modes and are referred to as the real-time multivariate MJO (RMM) indices (Wheeler and Hendon 2004). The predicted field anomalies are obtained by removing a background that is a function of starting date and lead day and that represents seasonal and interannual variability. The background is computed as a fourth-order polynomial fit over the 31 five-day periods (corresponding to 155 days) for each year and each lead time; applying a third- or fifth-order polynomial fit instead yields a negligible skill difference. For a consistent verification, the same definition of anomalies is used for the verifying reference state, which is done by reconstructing the reference dataset as if it were a forecast member for each initial time and target day. The standard deviation of the reference RMM indices is then used to normalize both the reference and predicted RMM indices. The MJO prediction skill in terms of the RMM indices is measured by the bivariate anomaly correlation coefficient (ACC) and bivariate root-mean-square error (RMSE), following Lin et al. (2008).
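A minimal sketch of the background removal and the bivariate skill measures of Lin et al. (2008), with illustrative array names (a1 and a2 are the reference RMM indices and b1 and b2 the ensemble-mean predictions at a given lead time, over all cases):

```python
import numpy as np

def remove_background(x, order=4):
    """Remove an order-4 polynomial fit over the 31 five-day start dates
    (155 days) for one year and one lead time; x is a length-31 series."""
    t = np.arange(x.size)
    return x - np.polyval(np.polyfit(t, x, order), t)

def bivariate_skill(a1, a2, b1, b2):
    """Bivariate ACC and RMSE of the RMM indices (Lin et al. 2008)."""
    acc = np.sum(a1 * b1 + a2 * b2) / (
        np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2)))
    rmse = np.sqrt(np.mean((a1 - b1)**2 + (a2 - b2)**2))
    return acc, rmse
```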
This study is mostly based on a perfect model framework, but a limited amount of observational data is still used; for example, the CFSR reanalysis (Saha et al. 2010) is used to initialize the long-term simulations and to verify the simulated MJO characteristics.
3. Results
A brief comparison of MJO properties is first made between RASmod and SASmod to demonstrate the sensitivity of MJO simulations to the atmospheric convection scheme. Figure 1 compares the simulated MJO life cycles, together with that in the CFSR reanalysis (Saha et al. 2010), by showing the composite OLR and U850 anomalies as a function of the eight MJO phases (Wheeler and Hendon 2004). For each phase, the composite values are calculated as the average of the 20–100-day filtered anomalies over the days when MJOpha is within that phase and MJOamp is greater than 1 (a minimal compositing sketch is given below). It is noted that the MJO life cycle in CFSR (Fig. 1a) is similar to the one derived with CFSR winds and NOAA AVHRR OLR (Liebmann and Smith 1996), for example, Fig. 2 in Wang et al. (2014).
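The compositing used for Fig. 1 can be sketched as follows (the mapping from the phase angle to the eight Wheeler–Hendon phase bins is an assumed convention):

```python
import numpy as np

def phase_composite(field, amp, pha, phase):
    """Average a 20-100-day filtered anomaly field over the days when
    the MJO phase angle falls within a given phase and MJOamp > 1.

    field : (time, ...) filtered anomalies
    amp, pha : daily MJO amplitude and phase angle (radians, in (-pi, pi])
    phase : integer 1-8
    """
    # Map the phase angle onto eight 45-degree bins (assumed convention)
    bins = (np.floor((pha + np.pi) / (np.pi / 4)).astype(int) % 8) + 1
    mask = (bins == phase) & (amp > 1.0)
    return field[mask].mean(axis=0)
```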
Strong negative OLR anomalies (enhanced convection) are shown to propagate from the Indian Ocean (in phases 2 and 3) across the Maritime Continent (in phases 4 and 5) to the western Pacific (in phases 6 and 7). The composite U850 shows consistent convergence (divergence) in association with enhanced (suppressed) convection. For the model simulations, it is evident that the eastward propagation of convection is generally well captured by RASmod (Fig. 1b) but is ill organized in SASmod, with a substantially weaker propagation signal (Fig. 1c). Differences in MJO propagation between RASmod and SASmod can be presented in more detail with regression analyses, for example, against Indian Ocean precipitation (Zhu et al. 2017b). For more comprehensive diagnostics of the propagation bias in SASmod, the reader is also referred to Zhu et al. (2017b), where the bias was attributed to the simulated intraseasonal wind variability, which in turn biased the latent heat flux and SST variability.
Taking initial conditions from the long integrations with RASmod and SASmod, the three sets of MJO predictability experiments (i.e., RAS, SAS, and SAS_RASic) were then performed. As representative metrics of their overall predictability, Fig. 2 shows the bivariate ACC and RMSE of the RMM indices between the ensemble-mean predictions and the corresponding indices in the reference simulation, as a function of lead day. For comparison, the operational MJO hindcast skill of CFSv2 [which also uses the SAS convection scheme and is initialized from the observational analysis (i.e., CFSR)] for the period 1999–2010 (Wang et al. 2014; referred to as CFSv2_9910) is also included. As expected, because of the additional error sources from the initialization shock in CFSv2_9910, its prediction skill drops the fastest with lead time, with the ACC and RMSE reaching ~0.1 and >1.8, respectively, at day 45. As concluded by Wang et al. (2014), the CFSv2 operational predictions have useful MJO prediction skill out to 20 days, when the ACC is about 0.5 and the RMSE reaches about 1.4 (the value expected when climatology is used as the forecast; Lin et al. 2008; Rashid et al. 2011).
Fig. 2. (a) Bivariate ACC and (b) bivariate RMSE for predictions of RAS (black), SAS (red), and SAS_RASic (blue), along with CFSv2 operational predictions for 1999–2010 (gray; Wang et al. 2014). The horizontal line in (a) is 0.5.
The skill of RAS (black curves in Fig. 2) and SAS (red curves in Fig. 2) measures the MJO potential predictability of CFSv2 with the RAS and SAS convection schemes, respectively. The effect of the convection scheme on MJO predictability is generally indistinguishable at short lead times (e.g., <12 days by ACC and <8 days by RMSE; Fig. 2) but becomes evident as the lead time increases. At lead times of >25 days, the two schemes exhibit an ACC difference of ~0.1–0.2 and an RMSE difference of ~0.2 when forecasting their own respective reference states, with RAS clearly having higher skill than SAS. Taking 0.5 as the threshold of useful ACC skill, the MJO can be predicted more than 45 days ahead in RAS, whereas only ~31 days is achieved in SAS; both are significantly longer than the current CFSv2 operational skill (~20 days; Wang et al. 2014).
The ~15-day difference in predictability arising solely from switching between the SAS and RAS schemes in a single model spans the range of predictability estimates in Neena et al. (2014) for an ensemble of models in which a multitude of factors (e.g., model physics, resolution, initialization, and ensemble size) differ across the models. Thus, our results support a leading-order contribution of the convection scheme to the substantial uncertainties in estimated MJO predictability (e.g., Waliser 2011; Neena et al. 2014).
In the SAS_RASic experiment, the SAS convection scheme is used to predict the RAS reference state. This experiment is akin to real forecast situations, which start from the observed atmospheric analysis with a biased forecast model. In contrast to SAS, the skill difference of SAS_RASic relative to RAS is present at both short and long leads. After the first 10–15 days, the ACC skill in SAS_RASic decreases at a rate similar to that in SAS and CFSv2_9910. Considering that the same SASmod model is used in all three sets of predictions (but with different initializations), the similar rate of skill decrease at the longer lead times (>10–15 days) suggests that at those lead times the inherent predictability of SASmod (i.e., the skill measured by SAS) controls the evolution of prediction skill in SAS_RASic and CFSv2_9910, while their overall skill difference is related to the initialization.
Next, we analyze why the MJO predictability is larger for RASmod than for SASmod (i.e., the difference between the black and red curves in Fig. 2). The predictability of a variable is determined by the relative magnitude of its signal and noise components. For ensemble-based seasonal atmospheric forecasts, predictability is quantified as the signal-to-noise ratio, where the signal is generally defined as the SST-forced atmospheric variability (quantified as the variability of the ensemble mean) and the noise as the atmospheric internal variability (quantified as the variability of individual forecasts around the ensemble mean) (e.g., Kumar and Hoerling 1995; Rowell 1998). For forecasts as an initial value problem, a signal-to-noise ratio (SNR) measure of predictability can also be used for the MJO, where, at each lead time, the signal is defined as the variability of the ensemble-mean RMM indices across prediction cases and the noise as the variability of individual ensemble members about their ensemble mean.
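A minimal sketch of this partitioning, assuming the predicted RMM indices are arranged as (case, member, lead, 2) and that signal and noise are computed exactly as just stated (the operational calculation may differ in detail):

```python
import numpy as np

def signal_noise(pred_rmm):
    """Lead-time-dependent signal and noise from predicted RMM indices.

    pred_rmm : array (case, member, lead, 2) holding RMM1 and RMM2.
    Returns (signal, noise), each of shape (lead,).
    """
    ens_mean = pred_rmm.mean(axis=1)                       # (case, lead, 2)
    # Signal: variability of the ensemble-mean indices across cases
    signal = np.sqrt((ens_mean**2).sum(axis=-1).mean(axis=0))
    # Noise: spread of individual members about the ensemble mean
    dev = pred_rmm - ens_mean[:, None, :, :]
    noise = np.sqrt((dev**2).sum(axis=-1).mean(axis=(0, 1)))
    return signal, noise
```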
Figure 3 shows the evolution of the signal and noise estimates with forecast lead time. In SAS and RAS, the noise component becomes as large as the signal component at lead times of around 27 and 38 days, respectively, which generally correspond to the times when their ACCs drop to 0.6 (Fig. 2a). This is also consistent with the analysis of Kumar and Hoerling (2000), who, in the context of seasonal predictions, demonstrated that for an SNR of 1 the expected value of the ACC is close to 0.65.
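This correspondence can be rationalized with an idealized perfect-model relation (assuming Gaussian statistics and a large ensemble; finite ensemble size lowers the expected value somewhat, consistent with the ~0.65 of Kumar and Hoerling 2000):

```latex
% Expected anomaly correlation of the ensemble mean verified against a
% single member, for signal amplitude S and noise amplitude N:
\mathrm{ACC}_{\infty} = \frac{S}{\sqrt{S^{2}+N^{2}}}
                      = \frac{\mathrm{SNR}}{\sqrt{1+\mathrm{SNR}^{2}}},
\qquad \mathrm{SNR} = \frac{S}{N}.
% For SNR = 1 this gives ACC of about 0.71 in the large-ensemble limit.
```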
Fig. 3. Signal (solid lines) and noise (dashed lines) estimates for predictions of RAS (black), SAS (red), and SAS_RASic (blue).
From the evolution of the signal and noise, it is evident that the smaller predictability (or smaller ACC) in SAS is caused by a weaker signal component than in RAS (solid curves in Fig. 3). In fact, a tendency toward a weaker MJO signal is also evident in the SASmod simulation, as shown in Fig. 1c. It is interesting to note that the noise component in the evolution of the MJO is similar for the two convection schemes. The analysis therefore suggests that the MJO predictability of a coupled system may be strongly controlled by the strength of its own MJO signal (and the dependence of that signal on the convection scheme), which could be one reason why substantial differences appear in MJO predictability estimates with different coupled models (Neena et al. 2014). In addition, the effect of initialization on the predictability estimate can be seen by comparing SAS_RASic with RAS and SAS. Since SAS_RASic and RAS are initialized from the same states, their signals are identical at the beginning, but the SAS_RASic signal gradually converges toward that of SAS after the first 15 days. This initialization effect underscores the difficulty of estimating MJO predictability from hindcast datasets (e.g., Rashid et al. 2011; Kim et al. 2014; Neena et al. 2014; Liu et al. 2017).
We further examine how forecast errors grow with the different convection schemes by comparing the large-scale evolution of the component fields that compose the RMM indices (i.e., U850, U200, and OLR) between SAS_RASic and RAS. Figure 4 presents the spatial correlation coefficients of the predicted U850, U200, and OLR anomalies (daily anomalies calculated relative to a fourth-order polynomial fit, as described in section 2) over 30°E–90°W, 30°S–30°N against the RASmod reference, as a function of forecast lead time and averaged over all 310 prediction cases. In RAS, the forecast skill for OLR (solid blue curves in Fig. 4) clearly decays faster than that for the circulation fields (U850 and U200; solid black and red curves, respectively, in Fig. 4), and the difference becomes evident shortly after forecast initialization, suggesting that the large-scale flow is more predictable than the smaller-scale convection.
Fig. 4. Spatial correlation coefficients (averaged over all prediction cases) of U850 (black), U200 (red), and OLR (blue) over 30°E–90°W, 30°S–30°N against RASmod simulations, as a function of lead time, in predictions of RAS (solid lines) and SAS_RASic (dotted lines).
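The pattern-correlation metric of Fig. 4 can be sketched as follows (a simple unweighted version over the fixed domain, with illustrative array names; adding cosine-latitude weighting would be a natural refinement):

```python
import numpy as np

def mean_pattern_correlation(pred, ref):
    """Spatial anomaly correlation over a fixed domain (e.g., 30E-90W,
    30S-30N) for one lead time, averaged over all prediction cases.

    pred, ref : arrays (case, lat, lon) of predicted and reference anomalies
    """
    p = pred.reshape(pred.shape[0], -1)
    r = ref.reshape(ref.shape[0], -1)
    p = p - p.mean(axis=1, keepdims=True)
    r = r - r.mean(axis=1, keepdims=True)
    corr = (p * r).sum(axis=1) / np.sqrt(
        (p**2).sum(axis=1) * (r**2).sum(axis=1))
    return corr.mean()          # average over all prediction cases
```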
The forecast skill differences between OLR and the large-scale flow are even larger in SAS_RASic immediately after the forecasts start. For instance, the skill difference at day 1 in SAS_RASic is as large as that at day 5 in RAS. This comparison indicates that, as a result of replacing the RAS convection scheme with SAS, the prediction skill for OLR (blue curves in Fig. 4) degrades faster than that for the large-scale flow (U850 and U200; black and red curves, respectively, in Fig. 4); at day 5, for example, the reduction in OLR skill is 0.3, in contrast to 0.15 for the wind components. This is a consequence of an initial adjustment due to the inconsistency between the initial conditions (taken from the RASmod simulation) and the forecast model (SASmod). Such an initial adjustment (or initial shock) is also likely to have a large influence on the skill of MJO predictions in operational models. This result also confirms a previous speculation by Xie et al. (2012) and Ma et al. (2014) [who worked within the framework of the so-called Transpose-AMIP experiments (Williams et al. 2013)] that systematic errors, particularly those associated with moist processes, develop within 1–2 forecast days and are likely the result of issues with model parameterizations. This analysis suggests that further improvement in MJO predictions can be expected from progress in convection parameterizations and from reduction of the initial shock.
The effect of the convection scheme on MJO predictions initialized from phases 1 and 4 is shown in Fig. 5, which compares the composite OLR (shading) and U850 (contours) predictions between SAS_RASic and RAS. Compared with the RASmod reference (Figs. 5a,b), the most significant deficiency in SAS_RASic (Figs. 5e,f) is that the propagation of its predicted anomalies is too slow. For example, for predictions starting from phase 1, the predicted enhanced convection (negative OLR anomalies) in SAS_RASic (Fig. 5e) is still largely confined to the Indian Ocean by day 30, whereas the enhanced convection has propagated into the western Pacific in both the RASmod reference simulation (Fig. 5a) and RAS (Fig. 5c). The propagation of U850 is also slow, as represented by the evolution of the boundary (thick curves) between westerlies and easterlies. For instance, while the eastern boundary in SAS_RASic (Fig. 5e) generally remains west of the date line during the 40-day forecast period, it propagates well eastward and beyond the date line at around day 25 in both RASmod (Fig. 5a) and RAS (Fig. 5c). This propagation bias is also a feature of the MJO in CFSv2 with the SAS convection scheme (Wang et al. 2014; Zhu et al. 2017b). Another deficiency of SAS_RASic is that the overall amplitude of its predicted OLR anomalies remains weaker than in the reference and in RAS throughout the forecast period. This prediction bias can likewise be explained by the character of the MJO with the SAS convection scheme (i.e., Fig. 1c). Similar deficiencies of SAS_RASic also exist in predictions starting from phase 4 (Figs. 5b,d,f).
Fig. 5. Evolution of composite OLR (W m−2; shading) and U850 (m s−1; contours) starting from (left) initial phase 1 and (right) initial phase 4 in (a),(b) RASmod simulations and in predictions of (c),(d) RAS and (e),(f) SAS_RASic.
The choice of atmospheric convection scheme also influences the phase dependence of MJO predictability. In many MJO prediction systems (e.g., Vitart et al. 2007; Seo et al. 2009; Wang et al. 2014), low prediction skill is documented when the MJO-associated convection moves through the Maritime Continent, giving rise to the so-called Maritime Continent prediction barrier problem. Figure 6 compares the dependence of prediction skill on target phase and lead time among RAS, SAS, and SAS_RASic, with phase 8 repeated as phase 0 for continuity of the display. In particular, the prediction skill is calculated separately for each MJO phase by using the target days (the days to be predicted) on which the reference (RASmod or SASmod) MJOpha is within that phase. For the two perfect-model predictability experiments (i.e., RAS and SAS; Figs. 6a–d), both the ACC and RMSE measures exhibit small variations in skill at short lead times; for example, out to ~25 days in RAS and ~20 days in SAS, the ACC is higher than 0.8 (Figs. 6a,c) and the RMSE (Figs. 6b,d) is smaller than 1 for all target phases. Beyond those lead times, larger skill variations with target phase are seen in both predictability experiments. In RAS (Figs. 6a,b), there is slightly lower skill (smaller ACC and larger RMSE) for target phase 4. More pronounced skill variations occur in SAS, in which significantly lower skill is seen for target phases 4 and 7/8 but better skill (larger ACC and smaller RMSE) is seen for target phases 5 and 1. In both RASmod (Fig. 1b) and SASmod (Fig. 1c), phase 4 corresponds to convection over the Maritime Continent. Thus, the lower skill for this phase indicates that both RASmod and SASmod have difficulty predicting the propagation of the MJO across the Maritime Continent, as do many other MJO prediction systems (e.g., Vitart et al. 2007; Seo et al. 2009; Wang et al. 2014). The Maritime Continent prediction barrier problem, however, is clearly less evident when the RAS convection scheme is used rather than SAS.
Fig. 6. MJO prediction skill in terms of (left) ACC and (right) RMSE, as a function of lead time and target phase, in predictions of (a),(b) RAS, (c),(d) SAS, and (e),(f) SAS_RASic.
In SAS_RASic (Figs. 6e,f), skill variations with target phase are also present at lead times longer than ~25 days, and the variations are stronger than in RAS but weaker than in SAS. Its skill variations are likewise characterized by lower skill for target phases 4 and 7/8, as in SAS. It should be clarified that the phase dependence of skill in SAS_RASic does not represent an inherent feature of the prediction model alone (i.e., SASmod), because it also mixes in the influence of the initialization; instead, it represents a feature of the prediction system, as in most previous studies (e.g., Vitart et al. 2007; Seo et al. 2009; Wang et al. 2014; Neena et al. 2014; Kim et al. 2014). In contrast, the estimate with SAS (RAS) represents an inherent feature of SASmod (RASmod).
4. Summary and discussion
In this study, we revisited MJO predictability based on the NOAA Climate Forecast System, version 2 (CFSv2), with the perfect model approach. We specifically addressed the causes of the “uncertainty” in current estimates of MJO predictability (e.g., Neena et al. 2014), with a focus on the effect of the atmospheric convection parameterization. In our experiments, two atmospheric convection schemes were applied in CFSv2. The analysis suggests that a ~15-day difference in the predictability estimate arises solely from switching between the SAS and RAS schemes in a single model. This difference spans the range of predictability estimates in Neena et al. (2014) for an ensemble of models in which a multitude of factors (e.g., model physics, resolution, initialization, and ensemble size) differ across the models, which indicates the importance of the convection scheme in studies of MJO predictability. However, as pointed out by a reviewer, this finding does not establish that the convection scheme is at the heart of MJO predictability without further experiments on how the MJO is also influenced by other parameterized processes in the model.
Further diagnostics suggest that the shorter predictability with the SAS scheme is mainly caused by the overly weak MJO signal associated with it. Wang et al. (2015) suggested that the weak MJO amplitude with an updated SAS scheme (i.e., SAS, version 2; Han and Pan 2011) was consistent with its less intense convective activity, which was related to a lower troposphere that was too dry as a result of persistently weak shallow convective moistening. We are not certain whether the same mechanism can explain the weak MJO signal with SAS, and additional sensitivity experiments (e.g., changing the convection trigger in SAS) will be necessary to understand the MJO simulation bias. In addition, a more dynamical explanation of the shorter predictability with SAS will require assessment based on process-based metrics (e.g., dynamics-oriented diagnostics; Wang et al. 2018) in addition to the current performance-based skill metrics (e.g., correlation and RMSE); this is a common challenge in MJO prediction research (Kim et al. 2018) and is under consideration for a future project.
In this study, the effect of the convection scheme was further explored by comparing two experiments that predict the same model MJO events. This comparison also demonstrated the importance of convective parameterizations for model errors, particularly those associated with moist processes. It is thus suggested that improving convection parameterizations will be an efficient way to reduce model bias and to improve prediction capability, specifically for the MJO.
In addition, while it remains unclear at present whether the Maritime Continent prediction barrier problem represents an inherent feature of the MJO or a model error (e.g., Neena et al. 2014; Kim et al. 2014), our experiments indicate that the choice of atmospheric convection scheme influences the phase dependence of MJO predictability. In particular, the barrier is present in all experiments, but it is clearly more pronounced in the experiment that exhibits a larger MJO propagation bias and lower predictability. This indicates that, even if the prediction barrier represents an inherent feature, it can be exacerbated by biases in convection parameterizations.
We also note that our study is based on a specific model, and the MJO predictability estimate is specific to CFSv2. However, our methodology should be applicable to other models, and it offers a clean way to isolate the influence of one factor (e.g., the atmospheric convection scheme) from others on the current uncertainties in estimates of MJO predictability (e.g., Neena et al. 2014). In fact, MJO predictability studies seem to face a problem similar to that in process-based MJO studies (e.g., of the role of coupling), in which “far more attention has been paid to inter-GCM variations [in the effects of coupling] than to intra-GCM variations” (DeMott et al. 2015). Thus, studies similar to ours with diverse models should be encouraged; they would not only assess the model dependence of our conclusions but, more important, explore the influence of other factors on estimates of MJO predictability.
Acknowledgments
We thank NOAA’s Climate Program Office for their support through the Modeling, Analysis, Predictions, and Projections (MAPP) program. Author Zhu is partially supported by NASA Ocean Salinity Science Team Grant NNX17AK09G. We are grateful for the constructive comments from the editor and three anonymous reviewers.
REFERENCES
Bechtold, P., M. Köhler, T. Jung, F. Doblas-Reyes, M. Leutbecher, M. J. Rodwell, F. Vitart, and G. Balsamo, 2008: Advances in simulating atmospheric variability with the ECMWF model: From synoptic to decadal time-scales. Quart. J. Roy. Meteor. Soc., 134, 1337–1351, https://doi.org/10.1002/qj.289.
DeMott, C. A., N. P. Klingaman, and S. J. Woolnough, 2015: Atmosphere-ocean coupled processes in the Madden-Julian oscillation. Rev. Geophys., 53, 1099–1154, https://doi.org/10.1002/2014RG000478.
Flatau, M., P. J. Flatau, P. Phoebus, and P. P. Niiler, 1997: The feedback between equatorial convection and local radiative and evaporative processes: The implications for intraseasonal oscillations. J. Atmos. Sci., 54, 2373–2386, https://doi.org/10.1175/1520-0469(1997)054<2373:TFBECA>2.0.CO;2.
Fu, X., B. Yang, Q. Bao, and B. Wang, 2008: Sea surface temperature feedback extends the predictability of tropical intraseasonal oscillation. Mon. Wea. Rev., 136, 577–597, https://doi.org/10.1175/2007MWR2172.1.
Han, J., and H. L. Pan, 2011: Revision of convection and vertical diffusion schemes in the NCEP Global Forecast System. Wea. Forecasting, 26, 520–533, https://doi.org/10.1175/WAF-D-10-05038.1.
Inness, P. M., J. M. Slingo, S. J. Woolnough, R. B. Neale, and V. D. Pope, 2001: Organization of tropical convection in a GCM with varying vertical resolution: Implications for the simulation of the Madden–Julian oscillation. Climate Dyn., 17, 777–793, https://doi.org/10.1007/s003820000148.
Kemball-Cook, S., B. Wang, and X. Fu, 2002: Simulation of the intraseasonal oscillation in the ECHAM-4 model: The impact of coupling with an ocean model. J. Atmos. Sci., 59, 1433–1453, https://doi.org/10.1175/1520-0469(2002)059<1433:SOTIOI>2.0.CO;2.
Kim, H. M., P. J. Webster, V. E. Toma, and D. Kim, 2014: Predictability and prediction skill of the MJO in two operational forecasting systems. J. Climate, 27, 5364–5378, https://doi.org/10.1175/JCLI-D-13-00480.1.
Kim, H. M., F. Vitart, and D. E. Waliser, 2018: Prediction of the Madden–Julian oscillation: A review. J. Climate, 31, 9425–9443, https://doi.org/10.1175/JCLI-D-18-0210.1.
Kumar, A., and M. P. Hoerling, 1995: Prospects and limitations of seasonal atmospheric GCM predictions. Bull. Amer. Meteor. Soc., 76, 335–345, https://doi.org/10.1175/1520-0477(1995)076<0335:PALOSA>2.0.CO;2.
Kumar, A., and M. P. Hoerling, 2000: Analysis of a conceptual model of seasonal climate variability and implications for seasonal prediction. Bull. Amer. Meteor. Soc., 81, 255–264, https://doi.org/10.1175/1520-0477(2000)081<0255:AOACMO>2.3.CO;2.
Liebmann, B., and C. A. Smith, 1996: Description of a complete (interpolated) outgoing longwave radiation dataset. Bull. Amer. Meteor. Soc., 77, 1275–1277, https://doi.org/10.1175/1520-0477-77.6.1274.
Lim, Y., S. Son, and D. Kim, 2018: MJO prediction skill of the subseasonal-to-seasonal prediction models. J. Climate, 31, 4075–4094, https://doi.org/10.1175/JCLI-D-17-0545.1.
Lin, H., G. Brunet, and J. Derome, 2008: Forecast skill of the Madden–Julian oscillation in two Canadian atmospheric models. Mon. Wea. Rev., 136, 4130–4149, https://doi.org/10.1175/2008MWR2459.1.
Liu, X., and Coauthors, 2017: MJO prediction using the sub-seasonal to seasonal forecast model of Beijing Climate Center. Climate Dyn., 48, 3283–3307, https://doi.org/10.1007/s00382-016-3264-7.
Ma, H.-Y., and Coauthors, 2014: On the correspondence between mean forecast errors and climate errors in CMIP5 models. J. Climate, 27, 1781–1798, https://doi.org/10.1175/JCLI-D-13-00474.1.
Madden, R. A., and P. R. Julian, 1971: Detection of a 40-50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci., 28, 702–708, https://doi.org/10.1175/1520-0469(1971)028<0702:DOADOI>2.0.CO;2.
Maloney, E. D., and A. H. Sobel, 2004: Surface fluxes and ocean coupling in the tropical intraseasonal oscillation. J. Climate, 17, 4368–4386, https://doi.org/10.1175/JCLI-3212.1.
Moorthi, S., and M. J. Suarez, 1992: Relaxed Arakawa–Schubert: A parameterization of moist convection for general circulation models. Mon. Wea. Rev., 120, 978–1002, https://doi.org/10.1175/1520-0493(1992)120<0978:RASAPO>2.0.CO;2.
Moorthi, S., and M. J. Suarez, 1999: Documentation of version 2 of relaxed Arakawa–Schubert cumulus parameterization with convective downdrafts. NOAA Office Note 99-01, 44 pp.
National Academies of Sciences, Engineering, and Medicine, 2016: Next Generation Earth System Prediction: Strategies for Subseasonal to Seasonal Forecasts. National Academies Press, 350 pp., https://doi.org/10.17226/21873.
National Research Council, 2010: Assessment of Intraseasonal to Interannual Climate Prediction and Predictability. National Academies Press, 192 pp.
Neena, J., J. Y. Lee, D. Waliser, B. Wang, and X. Jiang, 2014: Predictability of the Madden–Julian oscillation in the Intraseasonal Variability Hindcast Experiment (ISVHE). J. Climate, 27, 4531–4543, https://doi.org/10.1175/JCLI-D-13-00624.1.
Pan, H.-L., and W.-S. Wu, 1995: Implementing a mass flux convection parameterization package for the NMC medium-range forecast model. NMC Office Note 409, 43 pp., https://www2.mmm.ucar.edu/wrf/users/phys_refs/CU_PHYS/Old_SAS.pdf.
Pegion, K., and B. Kirtman, 2008: The impact of air–sea interactions on the predictability of the tropical intraseasonal oscillation. J. Climate, 21, 5870–5886, https://doi.org/10.1175/2008JCLI2209.1.
Rashid, H. A., H. H. Hendon, M. C. Wheeler, and O. Alves, 2011: Prediction of the Madden–Julian oscillation with the POAMA dynamical prediction system. Climate Dyn., 36, 649–661, https://doi.org/10.1007/s00382-010-0754-x.
Reichler, T., and J. O. Roads, 2005: Long-range predictability in the tropics. Part II: 30–60-day variability. J. Climate, 18, 634–650, https://doi.org/10.1175/JCLI-3295.1.
Rowell, D. P., 1998: Assessing potential seasonal predictability with an ensemble of multidecadal GCM simulations. J. Climate, 11, 109–120, https://doi.org/10.1175/1520-0442(1998)011<0109:APSPWA>2.0.CO;2.
Saha, S., and Coauthors, 2010: The NCEP Climate Forecast System Reanalysis. Bull. Amer. Meteor. Soc., 91, 1015–1057, https://doi.org/10.1175/2010BAMS3001.1.
Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.
Seo, K. H., W. Q. Wang, J. Gottschalck, Q. Zhang, J. K. E. Schemm, W. R. Higgins, and A. Kumar, 2009: Evaluation of MJO forecast skill from several statistical and dynamical forecast models. J. Climate, 22, 2372–2388, https://doi.org/10.1175/2008JCLI2421.1.
Vitart, F., 2014: Evolution of ECMWF sub-seasonal forecast skill scores. Quart. J. Roy. Meteor. Soc., 140, 1889–1899, https://doi.org/10.1002/qj.2256.
Vitart, F., S. Woolnough, M. A. Balmaseda, and A. M. Tompkins, 2007: Monthly forecast of the Madden–Julian oscillation using a coupled GCM. Mon. Wea. Rev., 135, 2700–2715, https://doi.org/10.1175/MWR3415.1.
Vitart, F., A. Leroy, and M. C. Wheeler, 2010: A comparison of dynamical and statistical predictions of weekly tropical cyclone activity in the Southern Hemisphere. Mon. Wea. Rev., 138, 3671–3682, https://doi.org/10.1175/2010MWR3343.1.
Vitart, F., and Coauthors, 2017: The Subseasonal to Seasonal (S2S) Prediction Project database. Bull. Amer. Meteor. Soc., 98, 163–173, https://doi.org/10.1175/BAMS-D-16-0017.1.
Waliser, D. E., 2011: Predictability and forecasting. Intraseasonal Variability of the Atmosphere–Ocean Climate System, W. K. M. Lau and D. E. Waliser, Eds., 2nd ed. Springer, 433–476.
Waliser, D. E., K. M. Lau, and J.-H. Kim, 1999: The influence of coupled SSTs on the Madden–Julian oscillation: A model perturbation experiment. J. Atmos. Sci., 56, 333–358, https://doi.org/10.1175/1520-0469(1999)056<0333:TIOCSS>2.0.CO;2.
Waliser, D. E., K. M. Lau, W. Stern, and C. Jones, 2003: Potential predictability of the Madden–Julian oscillation. Bull. Amer. Meteor. Soc., 84, 33–50, https://doi.org/10.1175/BAMS-84-1-33.
Wang, B., and Coauthors, 2018: Dynamics-oriented diagnostics for the Madden–Julian oscillation. J. Climate, 31, 3117–3135, https://doi.org/10.1175/JCLI-D-17-0332.1.
Wang, W., and M. Schlesinger, 1999: The dependence on convection parameterization of the tropical intraseasonal oscillation simulated by the UIUC 11-layer atmospheric GCM. J. Climate, 12, 1423–1457, https://doi.org/10.1175/1520-0442(1999)012<1423:TDOCPO>2.0.CO;2.
Wang, W., M.-P. Hung, S. J. Weaver, A. Kumar, and X. Fu, 2014: MJO prediction in the NCEP Climate Forecast System version 2. Climate Dyn., 42, 2509–2520, https://doi.org/10.1007/s00382-013-1806-9.
Wang, W., A. Kumar, J. X. Fu, and M.-P. Hung, 2015: What is the role of the sea surface temperature uncertainty in the prediction of tropical convection associated with the MJO? Mon. Wea. Rev., 143, 3156–3175, https://doi.org/10.1175/MWR-D-14-00385.1.
Wheeler, M. C., and H. H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932, https://doi.org/10.1175/1520-0493(2004)132<1917:AARMMI>2.0.CO;2.
Williams, K. D., and Coauthors, 2013: The Transpose-AMIP II experiment and its application to the understanding of Southern Ocean cloud biases in climate models. J. Climate, 26, 3258–3274, https://doi.org/10.1175/JCLI-D-12-00429.1.
Xie, S., H.-Y. Ma, J. S. Boyle, S. A. Klein, and Y. Zhang, 2012: On the correspondence between short- and long-time-scale systematic errors in CAM4/CAM5 for the Year of Tropical Convection. J. Climate, 25, 7937–7955, https://doi.org/10.1175/JCLI-D-12-00134.1.
Zhang, C., M. Dong, S. Gualdi, H. H. Hendon, E. D. Maloney, A. Marshall, K. R. Sperber, and W. Wang, 2006: Simulations of the Madden–Julian oscillation in four pairs of coupled and uncoupled global models. Climate Dyn., 27, 573–592, https://doi.org/10.1007/s00382-006-0148-2.
Zhang, G. J., and M. Mu, 2005: Simulation of the Madden–Julian Oscillation in the NCAR CCM3 using a revised Zhang–McFarlane convection parameterization scheme. J. Climate, 18, 4046–4064, https://doi.org/10.1175/JCLI3508.1.
Zhu, J., and A. Kumar, 2019: Role of sea surface salinity feedback in MJO predictability: A study with CFSv2. J. Climate, 32, 5745–5759, https://doi.org/10.1175/JCLI-D-18-0755.1.
Zhu, J., A. Kumar, W. Wang, Z.-Z. Hu, B. Huang, and M. A. Balmaseda, 2017a: Importance of convective parameterization in ENSO predictions. Geophys. Res. Lett., 44, 6334–6342, https://doi.org/10.1002/2017GL073669.
Zhu, J., W. Wang, and A. Kumar, 2017b: Simulations of MJO propagation across the maritime continent: Impacts of SST feedback. J. Climate, 30, 1689–1704, https://doi.org/10.1175/JCLI-D-16-0367.1.
Zhu, J., A. Kumar, H.-C. Lee, and H. Wang, 2017c: Seasonal predictions using a simple ocean initialization scheme. Climate Dyn., 49, 3989–4007, https://doi.org/10.1007/s00382-017-3556-6.