1. Introduction
An international field campaign was conducted over the tropical Indian Ocean and its surrounding regions during October 2011–March 2012 to collect observations for the study of convective initiation of the Madden–Julian oscillation (MJO; Madden and Julian 1971, 1972). This field campaign was a joint project of the Cooperative Indian Ocean Experiment on Intraseasonal Variability in the Year 2011 (CINDY2011), the Dynamics of the MJO (DYNAMO), the Atmospheric Radiation Measurement Program (ARM) MJO Investigation Experiment (AMIE), and the Littoral Air–Sea Processes (LASP) program. Hereafter, it is referred to in brief as the DYNAMO field campaign. Yoneyama et al. (2013) provided detailed descriptions of this field campaign.
Three MJO events were observed during the DYNAMO field campaign (Fig. 1a). They are described in detail by Gottschalck et al. (2013). Real-time forecasts produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) during the field campaign captured the convective initiation of the three MJO events with variable success. The ECMWF forecasts of the three MJO events are evaluated in this study, using a global measure and a local measure of the MJO.
The global measure is the all-season Real-time Multivariate MJO (RMM) index of Wheeler and Hendon (2004). It has been used to assess the statistics of MJO forecast skill of operational and research models (Lin et al. 2008; Gottschalck et al. 2010) and of the MJO events during DYNAMO (Fu et al. 2013). This global measure of the MJO provides quantitative information on the MJO amplitude in the entire tropics, but only qualitative information on the location, amplitude, and phase speed of its convective anomalies (Wheeler and Hendon 2004; Matsueda and Endo 2011; Wang et al. 2013). The local measure is based on an MJO tracking method. It provides quantitative information on the strength, propagation speed, and timing of MJO precipitation over a given longitudinal region (section 2c). Understanding the mechanisms of the MJO impact on weather and climate worldwide (Zhang 2013) requires information on the precise location of the convection center of the MJO relative to the background flow. This information is crucial to identifying Rossby waves generated by the MJO (Matthews 2004) that propagate toward higher latitudes and induce MJO teleconnection patterns (Kiladis and Weickmann 1992; Grimm and Silva Dias 1995; Renwick and Revell 1999). A model's capability of forecasting the global teleconnection pattern of the MJO depends on its skill in forecasting MJO convection centers in terms of their strengths, propagation speeds, and timing.
The objectives of this study are to
document the ECMWF model’s forecast skill during DYNAMO as a benchmark for the comparative evaluation of forecast and hindcast skill of other operational and research models for the same MJO events;
advocate the need for both global and local measures of MJO forecast skill;
illustrate the large variability of MJO forecast skill between individual events and among different quantities (strength, speed, and timing); and
quantify the sensitivity of MJO forecast skill to observations assimilated in the analysis, to lower-tropospheric humidity over the Indian Ocean, and to sea surface temperature (SST), using the ECMWF forecast model.
Data, methods, and numerical experiments are described in section 2. Section 3 evaluates the MJO forecast skill of the ECMWF model for the three MJO events during DYNAMO using the global and local skill measures of the MJO. ECMWF forecasts of wind, temperature, and humidity profiles during the DYNAMO field campaign are evaluated against sounding observations in section 4. Results from three sets of numerical experiments designed to explore sources of forecast skill are presented in section 5. A summary and conclusions are given in section 6.
2. Data and methods
a. Observations
Daily rainfall data (0.25° × 0.25°) from the Tropical Rainfall Measuring Mission (TRMM; Kummerow et al. 2000) Multisatellite Precipitation Analysis, version 7 (Huffman et al. 2007), were used for observational MJO tracking to define the measure of MJO local forecast skill (section 2c). DYNAMO soundings (Yoneyama et al. 2013) were used to evaluate vertical profiles of wind, temperature, and humidity in the forecast at selected sounding sites (Fig. 2). The sounding frequency ranged from 2 to 8 day−1 (Table 1). The sounding observations were interpolated onto the vertical levels of the forecast model output (section 2b). The impacts of some of these sounding observations, as well as atmospheric motion vector (wind) data from the geostationary Meteorological Satellite 7 (Meteosat-7, hereafter Met-7) over the Indian Ocean, were assessed in the denial experiments (section 2d).
Table 1. List of sounding observations from DYNAMO starting from 1 Oct 2011.
b. ECMWF forecast and analysis
Output from two operational forecast components of the ECMWF global model [the Integrated Forecast System (IFS)] was diagnosed in this study (Table 2). The first is the high-resolution atmospheric model (HRES) with a spectral truncation of T1279, a reduced Gaussian grid (equivalent to 16-km global grid spacing), and 91 vertical levels (denoted T1279L91). Persisted SST anomalies are used as the lower boundary condition. Forecasts from 12 to 240 h at 12-h intervals, initialized at 0000 and 1200 UTC from 1 October 2011 to 31 January 2012, were evaluated in this study.
Table 2. List of operational forecast products. All forecasts were initialized daily at 0000 and 1200 UTC.
The other ECMWF forecast component is the Ensemble Prediction System (ENS), which includes a control run and 50 perturbed members. Its model configuration changes from the higher resolution T639 (32 km) with 62 levels (T639L62) for the first 10 days, with persisted SST anomalies as the lower boundary condition, to the coarser resolution T319 (64 km) (T319L62) beyond 10 days, when the atmospheric model is coupled to an ocean model (the Hamburg Ocean Primitive Equation model; Wolff et al. 1997). In addition, two other control forecasts with fixed high (T639L62) and low (T319L62) horizontal resolutions throughout the period, with the same transition from an uncoupled to a coupled configuration after 10 days, were made to serve as verification/calibration. All forecasts were initialized daily at 0000 and 1200 UTC.
Both the ECMWF operational analysis and the Interim ECMWF Re-Analysis (ERA-Interim) during DYNAMO were used in this study for verification. ERA-Interim was produced by IFS model cycle Cy31r2 at a resolution of T255L60. The operational analysis was generated by the Cy37r2 and Cy37r3 versions of the IFS at a higher resolution (T1279L91) during DYNAMO.
All forecast and (re)analysis data were regridded to a 1° × 1° horizontal grid mesh. There are two exceptions: zonal wind, temperature, and relative humidity profiles from HRES at 0.25° × 0.25° horizontal grid spacing were used for comparisons to the DYNAMO sounding observations, while zonal winds and outgoing longwave radiation (OLR) at 2.5° × 2.5° horizontal grid spacing were used to calculate the RMM index following Gottschalck et al. (2010).
c. MJO metrics
We used two metrics to measure global and local MJO forecast skill. The global measure is the all-season RMM index of Wheeler and Hendon (2004). The RMM index is commonly used in MJO diagnostics (e.g., Waliser et al. 2009) and in assessments of operational MJO forecast skill (Gottschalck et al. 2010). It defines the MJO using the two leading EOFs of global tropical OLR and upper- and lower-tropospheric zonal wind components. It is essentially a global measure of the MJO, with the zonal wind pattern being the dominant component (Straub 2013). The longitudinal position and amplitude of the MJO can be described using the principal components (PCs) of the two leading EOFs. Following Lin et al. (2008) and Gottschalck et al. (2010), we used the bivariate correlation (COR) and the root-mean-square error (RMSE) of the RMM index to measure MJO forecast skill.
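The bivariate COR and RMSE follow Lin et al. (2008). A minimal sketch in Python (illustrative only, not the verification code used in this study) computes both measures from paired observed and forecast (RMM1, RMM2) values at a single lead time:

```python
import numpy as np

def bivariate_cor_rmse(obs_rmm, fcst_rmm):
    """Bivariate correlation and RMSE of the RMM index (Lin et al. 2008)
    at one forecast lead time.

    obs_rmm, fcst_rmm : arrays of shape (N, 2) holding observed and
    forecast (RMM1, RMM2) pairs for N verification days.
    """
    a1, a2 = obs_rmm[:, 0], obs_rmm[:, 1]
    b1, b2 = fcst_rmm[:, 0], fcst_rmm[:, 1]
    # Bivariate correlation: projection of forecast onto observed RMM vectors
    cor = np.sum(a1 * b1 + a2 * b2) / (
        np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2)))
    # Bivariate RMSE over both RMM components
    rmse = np.sqrt(np.mean((a1 - b1)**2 + (a2 - b2)**2))
    return cor, rmse
```

A perfect forecast gives COR = 1 and RMSE = 0; a forecast is commonly regarded as useful while COR remains at or above 0.5.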
The local measure of the MJO is based on an MJO tracking method. This method provides quantitative information on the strength, eastward propagation speed, and timing of precipitation for a given MJO event in a given longitudinal sector, such as the Indian and western Pacific Oceans. The MJO speed here is the average eastward propagation speed of a convection center of the MJO. It differs from the phase speed based on the RMM index (Matsueda and Endo 2011; Wang et al. 2013). The phase speed may depict the propagation speed of MJO convection in a composite, but for a given individual MJO event it cannot provide information on the precise location of a convection center (Straub 2013; Gottschalck et al. 2013), because a convection center may be located in a wide range of longitudes within an RMM phase (e.g., phase 2 in late October in Fig. 1b).
Daily precipitation anomalies were first generated by removing the annual cycle. A 5-day running mean was then applied to remove high-frequency perturbations. Interannual perturbations were removed by subtracting the mean of analysis/forecast anomalies over the most recent 120 days prior to each day, following Gottschalck et al. (2010). On a time–longitude diagram of precipitation anomalies averaged over a latitudinal belt (10°S–10°N in this study), a set of straight lines can be drawn within a given longitudinal range L (50°–160°E in this study), each with a slope and a starting date at the western boundary of L (Fig. 3a). Positive precipitation anomalies are integrated along each line and divided by the total number of zonal grid points within L; this is referred to as the averaged precipitation along that line. Among this set of lines with different slopes and starting dates, the one with the largest averaged precipitation represents the MJO event (thick line in Fig. 3a) and is referred to as its track. The averaged precipitation of this track measures the strength of precipitation for this MJO event, the slope its eastward propagation speed, and the starting date its timing. All three MJO quantities (strength, speed, and timing) may vary with the tracking range L. Once L is chosen for the purpose of the study, the track and the quantities that represent the MJO event can be determined objectively and uniquely.
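The tracking procedure above can be sketched as a brute-force search over candidate starting dates and slopes. The following is a simplified illustration under stated assumptions (daily time steps, a fixed 111 km per degree of longitude, and candidate speeds of 3–12 m s−1), not the authors' implementation:

```python
import numpy as np

def track_mjo(precip, lons, dates_per_day=1, dlon_km=111.0,
              speeds=np.arange(3.0, 12.1, 0.5)):
    """Minimal sketch of the MJO tracking method (hypothetical code).

    precip : 2D array (time, lon) of precipitation anomalies averaged
             over 10S-10N, already filtered as described in the text.
    lons   : 1D array of longitudes (deg E) spanning the tracking range L.
    Returns (strength, speed, start_index) of the track, i.e. the line
    maximizing positive precipitation averaged over the zonal grid
    points of L.
    """
    ntime, nlon = precip.shape
    best = (-np.inf, None, None)
    for t0 in range(ntime):                      # candidate starting dates
        for c in speeds:                         # candidate speeds (m/s)
            km_per_day = c * 86.4                # m/s -> km/day
            total = 0.0
            for j in range(nlon):
                # time (days) for the line to reach longitude j
                dt = (lons[j] - lons[0]) * dlon_km / km_per_day
                t = t0 + int(round(dt * dates_per_day))
                if t >= ntime:
                    break
                total += max(precip[t, j], 0.0)  # positive anomalies only
            avg = total / nlon                   # average over zonal points
            if avg > best[0]:
                best = (avg, c, t0)
    return best
```

Applied to a time–longitude array of anomalies over 50°–160°E, the returned averaged precipitation, slope, and starting date give the strength, speed, and timing of the event.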
When the tracking method is applied to an observed MJO event, a track can always be found to represent that event. When this method is applied to forecasts, however, there is no guarantee that a track with maximum averaged precipitation can be found with a slope corresponding to the range of MJO propagation speeds. As an example, there was an MJO event in November 2011 in the observations (Fig. 1a) and in the forecast (Fig. 3a). A track with maximum averaged precipitation is found in both (the thick solid line in Fig. 3a). There was another MJO event during December in the observations (Fig. 1a), but no track with maximum averaged precipitation within the speed range of 3–12 m s−1 can be found in that forecast in December (Fig. 3a): averaged precipitation continues to increase with increasing speed beyond 12 m s−1. Even if a track with maximum averaged precipitation could be found at a speed greater than 12 m s−1, it could not be considered an MJO event. Therefore, no track is identified in the forecast for the December MJO event.
d. Numerical experiments
Three sets of numerical experiments were conducted: observational data denial, humidity relaxation, and SST forcing (Table 3). The observational denial experiments were designed to explore the impact of observations in the Indian Ocean region on the MJO forecast. The humidity relaxation and SST forcing experiments were designed to identify potential sources of MJO forecast skill. Two model configurations were used in these numerical experiments. One is the high-resolution forecast system (T1279L91, version Cy38r1), which will be referred to as high-resolution prediction (HP). All forecasts by HP were initialized daily at 0000 and 1200 UTC from 1 October 2011 to 31 January 2012, and forecast output was archived every 12 h up to 240 h.
Table 3. Forecast and experiment configurations.
HP was first used with data assimilation to generate initial conditions including all available observations collected during the DYNAMO field campaign. It was then integrated for 10 days in a single forecast (or hindcast) using these initial conditions. This is the control experiment (HP-CTL). In the second experiment, DYNAMO radiosonde and aircraft dropsonde data (Table 1) in the Indian Ocean region (Fig. 2) were excluded in the data assimilation system and 10-day forecasts were run from these modified initial conditions. This is referred to as the DYNAMO sounding denial run (HP-DNS). The procedure was repeated except this time atmospheric motion vector (AMV) winds derived from Met-7 satellite observations were excluded in the data assimilation process. This is referred to as the Met-7 denial run (HP-DNM).
The other model configuration is the ensemble forecast system (ENS, Cy38r1) with five ensemble members of fixed resolution (T159) coupled to the ocean model throughout the entire model integration of 32 days, which will be referred to as EP (ensemble prediction). The ensemble size of five was chosen because of computing constraints; it is the ensemble size used in the operational ECMWF reforecasts. The bivariate correlation (COR) used to assess MJO skill in this study tends to increase slightly with ensemble size, but five ensemble members are generally sufficient to assess the difference between two experiments (e.g., Vitart 2014). The ensemble perturbations were generated using singular vectors to perturb the atmospheric initial conditions (Buizza and Palmer 1995) and stochastic physics to perturb tendencies during the model integrations (Buizza et al. 1999). Perturbations were also applied to the ocean initial conditions by using five different ocean analyses that were produced by perturbing the wind stress during the ocean analysis (Vialard et al. 2005).
All EP forecasts were initialized daily at 0000 UTC from 1 October 2011 to 31 January 2012 using the same initial conditions as HP-CTL, including all observations from the DYNAMO field campaign. Forecast output was archived every 12 h. This is the ensemble control run (EP-CTL). In the humidity relaxation run (EP-HUM), EP was integrated using the same initial conditions but with relative humidity below the 400-hPa level over the Indian Ocean domain (Fig. 2) relaxed during the entire forecast toward the operational analysis with a relaxation time scale of 6 h. There are 91 vertical levels in the humidity relaxation experiment in order to better match the humidity profiles in the operational analysis. Humidity relaxation, or nudging, has been shown to be an effective way to bring numerical simulations of the MJO closer to observations in regional models (e.g., Hagos et al. 2011), but relaxation of a single field may have a detrimental effect on the overall dynamical and thermodynamical balance of the model.
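Conceptually, the relaxation in EP-HUM adds a Newtonian nudging tendency (q_analysis − q)/τ, with τ = 6 h, to the model humidity at each time step within the relaxation domain. A schematic one-step update (an illustration of the nudging concept, not the IFS implementation) is:

```python
def relax_humidity(q_model, q_analysis, dt_seconds, tau_seconds=6 * 3600.0):
    """One time step of Newtonian relaxation (nudging) of humidity
    toward the analysis: a tendency (q_analysis - q_model)/tau is
    added each step. Schematic only; in EP-HUM this is applied below
    400 hPa over the Indian Ocean domain with tau = 6 h.
    """
    return q_model + dt_seconds * (q_analysis - q_model) / tau_seconds
```

With this form, the model humidity decays toward the analysis with an e-folding time of τ; a shorter τ constrains the field more tightly to the analysis.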
In the SST forcing experiment (EP-SST, 61 vertical levels), observed daily SST fields were used in place of the ocean model in the EP forecasts. Intraseasonal signals in daily SST are mostly induced by the MJO and they bear a specific phase relation with atmospheric signals of the MJO (Zhang and Anderson 2003). When daily SST is used as the lower boundary condition, its intraseasonal signal is treated as existing independently of the MJO and its phase relation with the MJO cannot be maintained (de Boisséson et al. 2012). This exercise, however, is useful to test the sensitivity of the MJO to intraseasonally varying SST.
3. Global versus local MJO forecast skills
Assessment of MJO forecast skill can be made using the RMM index in terms of COR and RMSE (Lin et al. 2008). These global measures indicate that certain models are able to produce useful MJO forecasts (COR ≥ 0.5) with a lead time of up to 30 days (Gottschalck et al. 2010). By this standard, the ECMWF forecast model is in general able to predict the evolution of the MJO up to about 20 days (Vitart and Molteni 2010). During the DYNAMO period, MJO forecasts were useful with a lead time of 13 days by an atmospheric model and of around 25 days by a coupled model (Fu et al. 2013).
In this study, we applied both the global and local measures of the MJO to the three MJO events during DYNAMO (Fig. 1), which will be referred to as the October, November, and December MJO events, respectively. We demonstrate in this section that the global and local measures may provide different information on MJO forecast skill and that both measures are needed for a complete assessment of MJO forecast.
Observed and forecasted RMM indices for the October MJO event are shown in Fig. 4. Its observed RMM amplitude (normalized by one standard deviation) is extraordinarily large in phases 1 and 2 but quickly reduces to about one and remains so during the rest of its lifetime (Fig. 1b). All forecasts capture the large amplitude and its reduction through a lead time of 10 days. They also capture the observed small amplitudes through phases 4–8 up to a lead time of 5 days. The amplitude over the Maritime Continent (phases 4 and 5) is considerably underestimated by the forecast at lead times of 10 days and beyond. This is an example of the difficulty of ECMWF forecasts in propagating the MJO across the Maritime Continent (Vitart et al. 2007; Vitart and Molteni 2010). Such a Maritime Continent barrier also exists in other models (e.g., Fu et al. 2013).
Observed and forecasted RMM phase and amplitude of the November MJO event are shown in Fig. 5. Based on the observed RMM index, this MJO event did not propagate into the western Pacific, in contrast to the October event. Its amplitude remains large in phases 2–4 but quickly becomes less than one in phase 5 (Figs. 5a and 1b). All forecasts are able to capture this event up to a lead time of 5 days. For longer lead times, the forecast amplitude prematurely reduces to less than one in phases 3 and 4. RMM amplitudes of the ensemble mean and control run follow each other closely up to day 10, after which they start departing from each other in phases 4 and 5. All forecasts underestimate the observed amplitude at lead times of 5–10 days in phases 3–5. At a lead time of 15 days the control run and some ensemble members produce amplitudes close to the observed ones in parts of phases 2 and 3.
The RMM phase diagram for the December event (Fig. 6) is very different from the other two events. No forecast maintains the observed RMM amplitude in phases 5 and 6 from the beginning of the forecast. In fact, based on the observed RMM index, this MJO event barely existed. This is a case of decoupling between the large-scale circulation, which shows eastward propagation only in phases 5 and 6 as represented by the observed RMM index, and an apparent eastward propagation of convective systems over the Indian Ocean represented by precipitation (Fig. 1a). The contrast between the MJO representations by the RMM index and precipitation presents a challenge to our perception and definition of the MJO.
Forecast skill for the three MJO events is quantitatively measured by COR and RMSE as functions of forecast lead time (Fig. 7). Forecast skill varies strongly among the three MJO events. The forecast of the November event (Figs. 7c,d) has the highest skill, with COR coefficients above 0.8 and RMSE below 1.0 up to lead times of 15 days. Forecast skill is clearly the lowest for the December event (Figs. 7e,f): COR coefficients approach zero by a lead time of 10 days, when RMSE increases beyond 1.0. The poor skill in forecasting the RMM index for this event might be related to the decoupling of the circulation from MJO convection.
The relative performance of different forecast configurations also differs among the three events. For the October event (Fig. 7a), almost all individual ensemble members are outperformed by the other forecasts (e.g., E-CTL, L-CTL, and H-CTL), which do not differ much from each other. This is not the case for the other two events. The ensemble mean appears to be better than HRES and all the ensemble control runs for the November and December events, except for RMSE during the first 8 days of the forecast of the November event. The high- and low-resolution controls are hardly distinguishable from each other until after a lead time of 10 days for the December event, which may be due to the complexity of this event and to the ocean coupling after day 10 of the forecast.
The underestimation of the observed RMM amplitude by the forecast is also captured by the local measure of MJO tracking. Between forecast lead times of 1 and 2 days there is a sharp decrease in predicted MJO strength (Fig. 8) as measured by the precipitation integrated along the MJO track (section 2c). This sharp decrease comes from the model adjustment to the initial shock produced by the analysis. After that, forecast strength continues to decrease with lead time, but more gradually. By day 15, the forecast strength asymptotically approaches a level that is about one-third of the observed. The ensemble spread of predicted MJO strength as measured by precipitation barely grows with lead time and remains below 50% of the observed strength through lead times of 15 days. This is in contrast to the predicted RMM amplitude, for which the ensemble spread is large (about 90% of the observed; Figs. 4–6). As shown in section 5 for the 32-day ensemble forecasts, forecast MJO strength continues to decrease slowly beyond day 15 and approaches almost zero at a lead time of 30 days. The error growth in predicted MJO strength is very similar for the three DYNAMO MJO events, even though the observed strengths of the October and December events are only about 65% of that of the November event (4.4 and 4.6 vs 7.0 mm day−1).
While both global and local measures of the MJO provide information on the forecast skill of the MJO amplitude (RMM) or strength (tracking), only the local measure is able to quantify the forecast skill of the propagation speed of MJO convection. The propagation speeds of the three DYNAMO MJO events as estimated from the tracking are roughly 6 m s−1. Errors in the forecast speed (Fig. 9) show characteristics that differ from those of forecast strength. At a lead time of 1 day the speed is slightly overestimated by almost all forecasts for all three events. Beyond day 1, forecast errors develop with lead time very differently for the three MJO events. For the October event (Fig. 9a), the forecast speeds fluctuate between over- and underestimates every 3–4 days in forecast lead time, and their spread gradually increases with lead time. For the November event (Fig. 9b), forecast MJO speeds gradually increase with lead time, but the spread remains the smallest among the three events up to day 15. Forecast speeds for the December event (Fig. 9c) are the worst among the three. They become much larger than observed at and beyond a lead time of 2 days and spread widely thereafter between the observed speed and 11.5 m s−1. When the forecast speed is 9 m s−1 or higher, the MJO is no longer distinguishable from convectively coupled Kelvin waves (Kiladis et al. 2009) in the forecast.
Errors in the timing of convective initiation (Fig. 10) behave similarly to the speed errors. For the October event (Fig. 10a), they fluctuate between positive (too late) and negative (too early) values with increasing spread after a lead time of 5 days. For the November event (Fig. 10b), most errors are positive and increase gradually with lead time, with less ensemble spread than for the October event. For the December event (Fig. 10c), the timing error is large even within a 5-day forecast lead time, indicating that the ECMWF model is not capable of predicting the December case.
The comparison of the three MJO events suggests that deterioration in the global forecast skill is related more to poor local skill in forecasting the propagation speed than to that in strength or timing. The propagation speed is best forecast for the November MJO event (Fig. 9b), for which the global skill (Fig. 7c) remains high up to a lead time of 15 days. The speed forecast is the worst for the December MJO event, for which the global skill deteriorates the fastest. The correspondence between the global skill and the local skills in strength and timing is much subtler.
4. Forecast validation against sounding observations
In this section, forecasts of zonal wind, temperature, and humidity profiles are compared to DYNAMO sounding observations at selected sites. This is to evaluate the general ECMWF forecast skill over the tropical Indian Ocean and to see how this may be related to the MJO forecast skill. The mean departures of HRES forecasts from sounding observations at Gan Island (Fig. 2) are shown in Fig. 11. The largest errors in zonal wind occur in the upper troposphere centered at 200 hPa, where the forecasts exhibit a strong westerly bias reaching 3–4 m s−1 at a lead time of 10 days, and in the lower troposphere centered at 850 hPa, where the forecasts exhibit an easterly bias reaching 3 m s−1 at a lead time of 10 days. During October–December 2011 westerlies dominated at low levels and easterlies at high levels in sounding observations at Gan Island (Fig. 12a). The biases shown in Fig. 11a suggest that the zonal circulation in the forecast is too weak. Errors in 200-hPa zonal winds are associated with excessive mass flux in the upper troposphere and an overestimation of parameterized cumulus friction in the ECMWF IFS model (Bechtold et al. 2012).
The largest temperature errors of 1–2 K occur around 150 hPa at a lead time of 10 days. Two other layers of significant but smaller (≤1 K) temperature errors are centered around 300 and 700 hPa. Above the 300-hPa level, where the IFS assimilation system includes some satellite humidity observations but no sounding humidity is used, forecast humidity errors are exceptionally large (too humid). In addition, large errors in relative humidity are found around 400 hPa, where the forecasts become too dry by 3%–4%. Similar error characteristics of the HRES and ensemble mean forecasts have also been found at Diego Garcia and Manus Island (not shown). The error profiles in Fig. 11 closely reflect the overall error distributions in the tropics documented by Bechtold et al. (2012).
Despite these errors, HRES captures the main intraseasonal variations in zonal wind and humidity when compared to the sounding observations at Gan Island. During the DYNAMO field campaign, several episodes of strong westerlies in the lower to midtroposphere accompanied by strong easterlies in the upper troposphere were observed (Fig. 12a). These wind events occurred in late October, November, and December 2011 and in late January 2012. The HRES forecast captured these events up to a lead time of 10 days, though the forecast amplitude decreases with lead time (Figs. 12b–d), resulting in the low-level easterly biases and upper-level westerly biases seen in the mean errors (Fig. 11a). Each of the individual westerly wind bursts was related to an MJO event over the Indian Ocean (Gottschalck et al. 2013), except the event that occurred in January 2012.
The MJO signature is also identified in the moistening of the mid- and upper troposphere observed at Gan Island (Fig. 13a). One marked feature in the evolution of relative humidity at Gan Island is a shift from a period with frequent moist episodes before January 2012 to a period with a much drier troposphere. This shift of humidity regime is part of the seasonal cycle over the Indian Ocean and is related to the Asian winter monsoon, when dry air from the northern extratropics pushes its way into the tropics and seasonal precipitation moves southward across the equator (Yoneyama et al. 2013; Gottschalck et al. 2013). The HRES forecast captured this shift almost precisely up to a lead time of 10 days.
There were also regime shifts observed at other sounding sites. For example, at Diego Garcia, zonal wind was dominated by easterlies in the entire troposphere until the beginning of December 2011 when westerlies became prominent. Meanwhile, the upper-tropospheric zonal wind at Manus shifted from a period of alternations between westerlies and easterlies to one of complete easterly domination (Yoneyama et al. 2013). The HRES forecast also captured these shifts almost perfectly up to a lead time of 10 days (not shown).
These comparisons between sounding observations and HRES forecasts indicate that the systematic errors in upper-tropospheric humidity do not contribute dominantly to the reduction of MJO forecast skill of HRES seen in section 3. This suggests an insignificant role of upper-tropospheric humidity in the MJO forecast.
5. Results from numerical experiments
a. Observation denial experiments
Vertically (400–850 hPa) averaged RMSE of the zonal wind between the analysis of each denial experiment (HP-DNS and HP-DNM) and that of the control (HP-CTL) is shown in Fig. 14. As expected, the impact of the DYNAMO sounding data and the Met-7 AMVs on the analysis is significant over the Indian Ocean (Figs. 14a,b). The large RMSE in the Pacific and Atlantic ITCZs is likely caused by small shifts in convective cell locations arising from the perturbed analysis. The difference between the average RMSE of the two data denial experiments (Fig. 14c) shows that the impact of assimilating DYNAMO sounding data is essentially local, near the sounding sites. The DYNAMO sounding data have impacts on the temperature and humidity fields similar to those on the zonal wind (not shown).
The impact of observations over the Indian Ocean on forecast skill is limited. The DYNAMO sounding observations and Met-7 AMVs over the Indian Ocean have no significant impact on global MJO forecast skill: the two RMM skill measures (COR and RMSE) are indistinguishable up to a lead time of 10 days with and without these observations (Fig. 15, solid lines). The removal of the observations serves as an additional perturbation to the initial conditions, which may affect the local skill of individual ensemble members measured by MJO tracking but has no significant effect on the ensemble mean and spread (not shown). This suggests that MJO forecast skill is not significantly affected by these observations over the Indian Ocean; it may be influenced more by the representation of physics and dynamics in the model and by its ability to correctly represent scale interactions and extratropical teleconnections (Vitart and Jung 2010). Locally, the observations affect the moisture, temperature, and wind analyses and can modulate smaller-scale structures, but this has only a limited impact on overall MJO forecast skill.
b. Humidity relaxation and SST forcing experiments
Humidity relaxation toward that of the operational analysis over the Indian Ocean has no effect on the global forecast skill within a lead time of 8 days, but slightly degrades it afterward (Fig. 15, blue dotted line). This is in contrast to improved MJO simulations with humidity nudging in Hagos et al. (2011). This is also at odds with our current perception of the possible role of humidity in the MJO (Johnson et al. 1999; Kemball-Cook et al. 2002; Kiladis et al. 2005; Ling et al. 2013; Zhao et al. 2013). The degradation of the extended forecast beyond 8 days by humidity relaxation may occur because the behavior of convection over the Indian Ocean under the influence of relaxed humidity becomes less directly connected to the large-scale circulation controlled by many other factors. This might be another example of deterioration of global forecast skill of the MJO caused by decoupling between the large-scale circulation and convection in the forecast.
The local MJO forecast skill measured by MJO tracking is affected by the humidity relaxation only slightly (not shown). The ensemble spread in MJO strength is reduced a little for the November and December events with lead times larger than 10 days when the forecast amplitudes have dropped below 30% of the observed. There is no detectable effect on forecasted propagation speed. There is a sign that forecast timing becomes worse with humidity relaxation beyond a lead time of 8 days, possibly caused by a decoupling between convection and the large-scale circulation.
Using daily SSTs with intraseasonal signals does not improve the short-term global forecast skill. However, beyond a lead time of 15 days, forecasts with daily SST forcing are clearly improved (Fig. 15, orange dashed line). Locally, over the Indian and Pacific Oceans, daily SSTs tend to reduce the ensemble spread of forecast initial timing (not shown). This suggests an anchoring effect of intraseasonal perturbations in SST on MJO precipitation.
6. Summary and conclusions
We have evaluated ECMWF forecast skill for the three MJO events observed during the DYNAMO field campaign (Yoneyama et al. 2013), quantified the effect of selected observations in the Indian Ocean region on MJO forecasts, and assessed the sensitivity of MJO forecasts to humidity relaxation and specified SSTs. MJO forecast skill was measured globally using the RMM index (Wheeler and Hendon 2004), and locally over the Indian and western Pacific Oceans using an MJO tracking method. The main results are as follows:
Error growth of MJO forecasts shows very different characteristics depending on whether it is based on the global or local measure. Forecast errors and ensemble spread defined by the global measures (COR and RMSE of the RMM index) always grow gradually and monotonically. In contrast, when defined by the local measure of MJO tracking, characteristics of MJO forecast errors may jump suddenly (e.g., amplitude at a lead time of 2 days) and fluctuate (e.g., propagation speed and timing). The global measure tends to lead to a more optimistic conclusion on MJO forecast skill than the local measure. Even when the global skill might be considered useful (COR > 0.5, RMSE < 1) at a lead time of 15 days, the local skill may indicate otherwise: forecast MJO strength is less than 30% of the observed, with a large spread in forecast speed. As expected, the ensemble mean is better than any individual forecast when evaluated by the global measure. No single forecast configuration shows obvious and consistent superiority to others by the local measure.
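As a concrete illustration, the bivariate COR and RMSE used here can be computed from observed and forecast RMM components as follows (a minimal sketch following the standard definitions of Lin et al. 2008; the array names are illustrative):

```python
import numpy as np

def rmm_skill(a1, a2, b1, b2):
    """Bivariate correlation (COR) and RMSE between observed RMM
    components (a1, a2) and forecast components (b1, b2) at a single
    lead time, following Lin et al. (2008).

    All inputs are 1-D arrays over the verification dates.
    """
    a1, a2, b1, b2 = map(np.asarray, (a1, a2, b1, b2))
    # COR: projection of the forecast RMM vector onto the observed one,
    # normalized by the two amplitudes, summed over verification dates.
    num = np.sum(a1 * b1 + a2 * b2)
    den = (np.sqrt(np.sum(a1**2 + a2**2))
           * np.sqrt(np.sum(b1**2 + b2**2)))
    cor = num / den
    # RMSE: root-mean-square distance in the (RMM1, RMM2) phase plane.
    rmse = np.sqrt(np.mean((a1 - b1)**2 + (a2 - b2)**2))
    return cor, rmse
```

A perfect forecast gives COR = 1 and RMSE = 0; the thresholds COR > 0.5 and RMSE < 1 quoted above then bound the range usually deemed useful.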
Global and local forecast skills differ substantially among the three DYNAMO MJO events. Forecast strength varies the least and forecast propagation speed the most among the three events. The deterioration of the global MJO forecast skill appears to be related to the poor local forecast skill for propagation speed. The December event is an unconventional and controversial one, difficult to forecast by both global and local measures, possibly because of the decoupling between the large-scale circulation and convection.
The main source of ECMWF forecast skill or error for the convective initiation of the MJO is not from the Indian Ocean alone for the three MJO events. Observations in the Indian Ocean region do not significantly affect forecasts of the three MJO events, and local humidity and SST also have minimal effects in the short-range forecast (15 days). These somewhat surprising results appear to contradict the current common notion of the important role of moisture in the MJO (Johnson et al. 1999; Kemball-Cook et al. 2002; Kiladis et al. 2005). The results suggest that MJO forecast skill is more influenced by the analysis beyond the Indian Ocean and the model's ability to correctly represent extratropical teleconnections and physics–dynamics scale interactions of the MJO phenomenon.
Through comparing global and local MJO forecast skills, this study advocates the need for both. In a sense, the local MJO forecast skill sets a higher standard for measuring models' capability of forecasting the MJO. It exposes poor skill in forecasting the strength and propagation speed of MJO precipitation that is not apparent in the global skill measure. This study has demonstrated that the MJO tracking method is a useful tool for evaluating model forecasts, hindcasts, and simulations of individual MJO events, complementary to the global measure of the MJO based on the RMM index. The results from this study, based on three MJO events, can be generalized only after being confirmed by evaluation of a large number of MJO events and their forecasts. The extent to which the deterioration of MJO forecast skill, in both global and local measures, is related to model infidelity versus MJO predictability needs to be explored.
Acknowledgments
Two anonymous reviewers provided careful comments on the submitted manuscript, which helped improve this article. Mark Rodwell provided valuable comments on this study. The first author thanks ECMWF for hosting his visit for a year, during which this study was conducted. The last author thanks the National Science Foundation, Department of Energy, and Office of Naval Research for their support, and thanks ECMWF for sponsoring his visit to work on this study.
REFERENCES
Bechtold, P., and Coauthors, 2012: Progress in predicting tropical systems: The role of convection. Tech. Memo. 686, ECMWF Tech. Rep., 61 pp.
Buizza, R., and T. N. Palmer, 1995: The singular-vector structure of the atmospheric general circulation. J. Atmos. Sci., 52, 1434–1456, doi:10.1175/1520-0469(1995)052<1434:TSVSOT>2.0.CO;2.
Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125, 2887–2908, doi:10.1002/qj.49712556006.
de Boisséson, E., M. A. Balmaseda, F. Vitart, and K. Mogensen, 2012: Impact of the sea surface temperature forcing on hindcasts of Madden-Julian Oscillation events using the ECMWF model. Ocean Sci., 8, 1071–1084, doi:10.5194/os-8-1071-2012.
Fu, X. H., J. Y. Lee, P. C. Hsu, H. Taniguchi, B. Wang, W. Q. Wang, and S. Weaver, 2013: Multi-model MJO forecasting during DYNAMO/CINDY period. Climate Dyn., 41, 1067–1081, doi:10.1007/s00382-013-1859-9.
Gottschalck, J., and Coauthors, 2010: A framework for assessing operational Madden–Julian oscillation forecasts: A CLIVAR MJO Working Group Project. Bull. Amer. Meteor. Soc., 91, 1247–1258, doi:10.1175/2010BAMS2816.1.
Gottschalck, J., P. E. Roundy, C. J. Schreck III, A. Vintzileos, and C. Zhang, 2013: Large-scale atmospheric and oceanic conditions during the 2011–12 DYNAMO field campaign. Mon. Wea. Rev., 141, 4173–4196, doi:10.1175/MWR-D-13-00022.1.
Grimm, A. M., and P. L. Silva Dias, 1995: Analysis of tropical–extratropical interactions with influence functions of a barotropic model. J. Atmos. Sci., 52, 3538–3555, doi:10.1175/1520-0469(1995)052<3538:AOTIWI>2.0.CO;2.
Hagos, S., L. R. Leung, and J. Dudhia, 2011: Thermodynamics of the Madden–Julian oscillation in a regional model with constrained moisture. J. Atmos. Sci., 68, 1974–1989, doi:10.1175/2011JAS3592.1.
Huffman, G. J., and Coauthors, 2007: The TRMM Multisatellite Precipitation Analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales. J. Hydrometeor., 8, 38–55, doi:10.1175/JHM560.1.
Johnson, R. H., T. M. Rickenbach, S. A. Rutledge, P. E. Ciesielski, and W. H. Schubert, 1999: Trimodal characteristics of tropical convection. J. Climate, 12, 2397–2418, doi:10.1175/1520-0442(1999)012<2397:TCOTC>2.0.CO;2.
Kemball-Cook, S., B. Wang, and X. H. Fu, 2002: Simulation of the intraseasonal oscillation in the ECHAM-4 model: The impact of coupling with an ocean model. J. Atmos. Sci., 59, 1433–1453, doi:10.1175/1520-0469(2002)059<1433:SOTIOI>2.0.CO;2.
Kiladis, G. N., and K. M. Weickmann, 1992: Circulation anomalies associated with tropical convection during northern winter. Mon. Wea. Rev., 120, 1900–1923, doi:10.1175/1520-0493(1992)120<1900:CAAWTC>2.0.CO;2.
Kiladis, G. N., K. H. Straub, and P. T. Haertel, 2005: Zonal and vertical structure of the Madden–Julian oscillation. J. Atmos. Sci., 62, 2790–2809, doi:10.1175/JAS3520.1.
Kiladis, G. N., M. C. Wheeler, P. T. Haertel, K. H. Straub, and P. E. Roundy, 2009: Convectively coupled equatorial waves. Rev. Geophys., 47, RG2003, doi:10.1029/2008RG000266.
Kummerow, C., and Coauthors, 2000: The status of the Tropical Rainfall Measuring Mission (TRMM) after two years in orbit. J. Appl. Meteor., 39, 1965–1982, doi:10.1175/1520-0450(2001)040<1965:TSOTTR>2.0.CO;2.
Lin, H., G. Brunet, and J. Derome, 2008: Forecast skill of the Madden–Julian oscillation in two Canadian atmospheric models. Mon. Wea. Rev., 136, 4130–4149, doi:10.1175/2008MWR2459.1.
Ling, J., C. Li, W. Zhou, X. Jia, and C. Zhang, 2013: Effect of boundary layer latent heating on MJO simulations. Adv. Atmos. Sci., 30, 101–115, doi:10.1007/s00376-012-2031-x.
Madden, R. A., and P. R. Julian, 1971: Detection of a 40–50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci., 28, 702–708, doi:10.1175/1520-0469(1971)028<0702:DOADOI>2.0.CO;2.
Madden, R. A., and P. R. Julian, 1972: Description of global-scale circulation cells in tropics with a 40–50 day period. J. Atmos. Sci., 29, 1109–1123, doi:10.1175/1520-0469(1972)029<1109:DOGSCC>2.0.CO;2.
Matsueda, M., and H. Endo, 2011: Verification of medium-range MJO forecasts with TIGGE. Geophys. Res. Lett., 38, L11801, doi:10.1029/2011GL047480.
Matthews, A. J., 2004: Intraseasonal variability over tropical Africa during northern summer. J. Climate, 17, 2427–2440, doi:10.1175/1520-0442(2004)017<2427:IVOTAD>2.0.CO;2.
Renwick, J. A., and M. J. Revell, 1999: Blocking over the South Pacific and Rossby wave propagation. Mon. Wea. Rev., 127, 2233–2247, doi:10.1175/1520-0493(1999)127<2233:BOTSPA>2.0.CO;2.
Straub, K. H., 2013: MJO initiation in the real-time multivariate MJO index. J. Climate, 26, 1130–1151, doi:10.1175/JCLI-D-12-00074.1.
Vialard, J., F. Vitart, M. A. Balmaseda, T. Stockdale, and D. L. T. Anderson, 2005: An ensemble generation method for seasonal forecasting with an ocean–atmosphere coupled model. Mon. Wea. Rev., 133, 441–453, doi:10.1175/MWR-2863.1.
Waliser, D., and Coauthors, 2009: MJO simulation diagnostics. J. Climate, 22, 3006–3030, doi:10.1175/2008JCLI2731.1.
Vitart, F., 2014: Evolution of ECMWF sub-seasonal forecast skill scores. Quart. J. Roy. Meteor. Soc., doi:10.1002/qj.2256, in press.
Vitart, F., and T. Jung, 2010: Impact of the Northern Hemisphere extratropics on the skill in predicting the Madden–Julian Oscillation. Geophys. Res. Lett., 37, L23805, doi:10.1029/2010GL045465.
Vitart, F., and F. Molteni, 2010: Simulation of the Madden–Julian Oscillation and its teleconnections in the ECMWF forecast system. Quart. J. Roy. Meteor. Soc., 136, 842–855, doi:10.1002/qj.623.
Vitart, F., S. Woolnough, M. A. Balmaseda, and A. M. Tompkins, 2007: Monthly forecast of the Madden–Julian oscillation using a coupled GCM. Mon. Wea. Rev., 135, 2700–2715, doi:10.1175/MWR3415.1.
Wang, W., M.-P. Hung, S. J. Weaver, A. Kumar, and X. Fu, 2013: MJO prediction in the NCEP Climate Forecast System version 2. Climate Dyn., doi:10.1007/s00382-013-1806-9, in press.
Wheeler, M. C., and H. H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932, doi:10.1175/1520-0493(2004)132<1917:AARMMI>2.0.CO;2.
Wolff, J. O., E. Maier-Reimer, and S. Legutke, 1997: The Hamburg ocean primitive equation model. Deutsches Klimarechenzentrum Tech. Rep. 13, Hamburg, Germany, 98 pp.
Yoneyama, K., C. Zhang, and C. N. Long, 2013: Tracking pulses of the Madden–Julian Oscillation. Bull. Amer. Meteor. Soc., 94, 1871–1891, doi:10.1175/BAMS-D-12-00157.1.
Zhang, C., 2013: Madden–Julian Oscillation: Bridging weather and climate. Bull. Amer. Meteor. Soc., 94, 1849–1870, doi:10.1175/BAMS-D-12-00026.1.
Zhang, C., and S. P. Anderson, 2003: Sensitivity of intraseasonal perturbations in SST to the structure of the MJO. J. Atmos. Sci., 60, 2196–2207, doi:10.1175/1520-0469(2003)060<2196:SOIPIS>2.0.CO;2.
Zhao, C., T. Li, and T. Zhou, 2013: Precursor signals and processes associated with MJO initiation over the tropical Indian Ocean. J. Climate, 26, 291–307, doi:10.1175/JCLI-D-12-00113.1.
The IFS model was updated from version Cy37r2 to Cy37r3 in November 2011.
For ease of discussion, "strength" of the MJO is used for local skill and "amplitude" is used for global skill.
This is our upper bound of tracking speed. Forecast speeds greater than this are not included in the statistics.