The Madden–Julian oscillation (MJO) is an important source of predictability. The boreal 2004/05 winter is used as a case study to conduct predictability experiments with the Weather Research and Forecasting (WRF) Model. That winter season was characterized by an MJO event, weak El Niño, strong North Atlantic Oscillation, and extremely wet conditions over the contiguous United States (CONUS). The issues investigated are as follows: 1) growth of forecast errors in the tropics relative to the extratropics, 2) propagation of forecast errors from the tropics to the extratropics, 3) forecast error growth on spatial scales associated with MJO and non-MJO variability, and 4) the relative importance of MJO and non-MJO tropical variability on predictability of precipitation over CONUS.
Root-mean-square errors in forecasts of normalized eddy kinetic energy (NEKE) (200 hPa) show that errors in initial conditions in the tropics grow faster than in the extratropics. Potential predictability extends out to about 4 days in the tropics and 9 days in the extratropics. Forecast errors in the tropics quickly propagate to the extratropics, as demonstrated by experiments in which initial conditions are perturbed only in the tropics. Forecast errors in NEKE (200 hPa) on scales related to the MJO grow more slowly than errors in non-MJO variability over localized areas in the tropics at short lead times. Potential predictability of precipitation extends to 1–5 days over most of CONUS but to longer leads (7–12 days) over regions with orographic precipitation in California. Errors in initial conditions on small scales relative to the MJO grow quickly, propagate to the extratropics, and degrade forecast skill of precipitation.
Since its discovery in the 1970s (Madden and Julian 1971, 1972, 1994), the Madden–Julian oscillation (MJO) has attracted significant research interest to further explain its mechanisms and its role in weather and climate variability (Zhang 2005; Lau and Waliser 2012; Zhang 2013). The typical life cycle of the MJO shows regions of tropical intraseasonal (20–100 day) enhanced and suppressed convective anomalies propagating eastward from the tropical Indian Ocean to the western Pacific (e.g., Hendon and Salby 1994). The atmospheric response associated with the diabatic heating shows circulation anomalies that influence tropical and extratropical weather variability (Kiladis and Weickmann 1992; Carvalho et al. 2004; Matthews 2004; Seo and Son 2012; Zhang 2013; Adames and Wallace 2014; Jones and Carvalho 2014).
Because the time scale of the MJO (~30–60 days) is longer than synoptic scales, the MJO is considered to be a significant source of potential predictability in the tropics and extratropics of both hemispheres (Waliser et al. 2003; Jones et al. 2004a,b; Gottschalck et al. 2010, 2013). Therefore, studies in the 1990s realized the importance of determining the skill of operational numerical weather prediction models in forecasting the MJO (Chen and Alpert 1990; Lau and Chang 1992). However, global models at the time were not able to maintain the convectively coupled structure of the MJO, and forecast skill of the MJO was limited to about 5–7-day lead times (Hendon et al. 2000; Jones et al. 2000).
Efforts to improve model physics, especially convective parameterizations, and model resolutions have led to noticeable advances in the representation of the MJO in global models. Using 35 yr of hindcasts, Lin et al. (2008) showed that forecast skill of the MJO in two global Canadian models extends out to 2 weeks. Matsueda and Endo (2011) analyzed medium-range forecasts from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) and concluded that operational models have different skills in forecasting the onset and phase evolution of the MJO. Rashid et al. (2010) show that the Predictive Ocean Atmosphere Model for Australia (POAMA) forecasts the MJO out to about 21 days. Wang et al. (2014) demonstrate that the Climate Forecast System model (CFS, version 2) from the National Centers for Environmental Prediction predicts the evolution of the MJO out to a 20-day lead. Vitart et al. (2007), Hirons et al. (2013a,b), and Vitart (2014) discuss the reasons for improvements in the MJO representation in the European Centre for Medium-Range Weather Forecasts (ECMWF) model over the years and estimate the current forecast skill to be about 32 days, which is consistent with the study of Kim et al. (2014). In addition, air–sea coupling appears to be important to extend forecast skill of the MJO (Shelly et al. 2014).
Another noteworthy line of research has been on teleconnections associated with the MJO and extratropical forecast skill (Jones et al. 2004b; Vitart and Molteni 2010). Studies of these linkages have demonstrated that extratropical forecast skill for precipitation (Janowiak et al. 2010; Jones et al. 2011a,b; Xavier et al. 2014) and temperature (Matsueda and Takaya 2015) is higher during active MJO days than during inactive days. When the MJO is active, tropical convective activity is organized, and the diabatic heating generates Rossby wave responses that propagate into the extratropics of both hemispheres (Matthews et al. 2004; Seo and Son 2012). The enhanced predictability in the extratropics can thus be hypothesized to result from an organization of the atmosphere on large scales, which are more predictable than small scales. Interestingly, some studies also show that intraseasonal and synoptic extratropical waves can influence the organization of tropical convection and the evolution of the MJO (Ray and Li 2013; Sakaeda and Roundy 2015, 2016).
Although previous studies have shed some light on the influential nature of the MJO in forecast skill, there are still many important aspects that have not been explored. This study investigates several questions regarding the MJO and short-to-medium-range forecast skill in the tropics and Northern Hemisphere extratropics during boreal winter. Specifically, when an MJO event is occurring, how does forecast error growth in the tropics compare against the extratropics? How quickly do forecast errors in the tropics propagate to the extratropics? Do forecast errors associated with the MJO grow significantly slower than errors related to non-MJO variability? Do forecast errors in MJO and non-MJO variability have similar influences on the potential predictability of precipitation in the extratropics?
To study these questions, we focus on the boreal 2004/05 winter, when an MJO event developed with enhanced convection over the western Indian Ocean on 18 December 2004, intensified and propagated eastward over Indonesia and the western Pacific, and ended on 20 January 2005. The average amplitude of this event was close to the median value of the historical frequency distribution of MJO amplitudes. That event occurred within a weak warm El Niño–Southern Oscillation (ENSO) phase and a strong North Atlantic Oscillation (NAO) event. In addition, large areas in the western and midwestern contiguous United States (CONUS) received extremely high amounts of precipitation, especially during January 2005 [see Figs. 1, 13, and 14 in Jones and Carvalho (2014)]. The combined impact of the storms during this period reached tens of millions of dollars in damage, and over 20 people were killed. Jones and Carvalho (2014) carried out modeling experiments for the same case study and investigated how the amplitude of the MJO influences the precipitation distribution over CONUS. They show that the frequency distribution of precipitation over CONUS is very sensitive to the amplitude of the MJO, particularly the occurrence of heavy precipitation. In this study, the predictability experiments are described in section 2, and results are presented in section 3. Section 4 contains a summary and conclusions.
2. Numerical model and experiments
Potential predictability is investigated in this study with the “perfect-model twin experiments approach” (Waliser et al. 2003; Tribbia and Baumhefner 2004) and the Weather Research and Forecasting (WRF, version 3.7.1) Model (Skamarock et al. 2008). WRF is a fully compressible, nonhydrostatic, prognostic model suitable for idealized and realistic numerical simulations of the atmosphere. The model uses a terrain-following hydrostatic pressure coordinate in the vertical and the Arakawa C staggered grid in the horizontal.
WRF is configured with two nested grids (Fig. 1). The first domain (D1) has 54-km horizontal grid spacing, extending over a large area (18.1°S–53.5°N; 26.3°E–26.3°W), and captures the variability associated with the MJO. The second domain (D2; 18-km grid spacing) covers CONUS and is used to estimate potential predictability of precipitation. The model setup includes 41 vertical levels with the model top at 50 hPa. Nudging is not applied, and one-way interaction between D1 and D2 is used. The configuration used here includes parameterizations for microphysics (vapor, ice, cloud, rain, and snow) (Hong et al. 2004), solar and infrared radiation transfer (Iacono et al. 2008), Monin–Obukhov similarity theory (Skamarock et al. 2008), a land surface model (Unified Noah; Chen and Dudhia 2001), planetary boundary layer [Yonsei University (YSU) PBL; Hong et al. 2006], and cumulus convection (Kain 2004).
Forecasts are performed from 17 December 2004 to 20 January 2005, a period characterized by a moderate MJO event. Climate Forecast System Reanalysis (CFSR) data from the National Centers for Environmental Prediction (NCEP) (Saha et al. 2010) are used as initial and boundary conditions. Forecasts are made once a day, always initialized at 0000 UTC, and each forecast extends to 15 days. For each day D, the unperturbed forecast uses the CFSR data at 0000 UTC (RD@00) as initial conditions. In addition, 10 perturbed forecast members are run each day. Perturbations to the initial conditions in the D1 domain are computed as subdaily differences between reanalyses and are assumed to represent uncertainties in the state of the atmosphere. Table 1 shows the different combinations of perturbations. Here, RD@HH (where HH = 00, 06, 12, or 18, corresponding to 0000, 0600, 1200, or 1800 UTC, respectively) are reanalyses on the day of the forecast D, and RD±1@HH are reanalyses on the day before or after the day of the forecast. Note that perturbations computed in this way are not necessarily independent of each other.
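The construction of a perturbed member can be illustrated with a minimal sketch; this is not the actual WRF/CFSR preprocessing, and the array names and toy fields are hypothetical stand-ins for one variable on one model level.

```python
import numpy as np

def make_perturbed_ic(analysis_00, rean_a, rean_b):
    # A perturbed member starts from the 0000 UTC analysis plus a
    # subdaily reanalysis difference (e.g., RD@06 minus RD@00),
    # taken as a proxy for uncertainty in the atmospheric state.
    return analysis_00 + (rean_a - rean_b)

rng = np.random.default_rng(0)
analysis = rng.standard_normal((10, 20))              # toy (lat, lon) field
r06 = analysis + 0.1 * rng.standard_normal((10, 20))  # stand-in for RD@06
r00 = analysis.copy()                                 # stand-in for RD@00

member_ic = make_perturbed_ic(analysis, r06, r00)
```

In practice this would be repeated for each prognostic variable and model level, producing the 10 perturbed members listed in Table 1.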
Three types of experiments are performed:
All Lat experiment: Perturbations to the initial conditions are applied over the entire D1 domain.
TRO experiment: Perturbations are applied only in tropical latitudes (18.1°S–20°N) over the D1 domain. Gaussian weights w are used to smoothly decrease the influence of the perturbations poleward of 10°N (w = 1 for 18.1°S–10°N, w = 0.85 at 15°N, w = 0.5 at 17.5°N, and w = 0.15 at 20°N).
NMJ experiment: Perturbations without large-scale tropical intraseasonal anomalies associated with the MJO are applied to tropical latitudes (18.1°S–20°N) in the D1 domain. The MJO signal is calculated by applying a recursive bandpass Murakami filter (20–100-day cutoff periods) to the CFSR data during 1 November 2004–31 March 2005. The bandpass-filtered anomalies are additionally filtered in space by retaining only zonal wavenumbers 1–5 [see Jones and Carvalho (2014) for details]. The filtered anomalies are subtracted from the perturbation analyses described above before they are added to the initial conditions. Some additional details and caveats are worth mentioning. The filter does not separate eastward from westward components because some westward-propagating signals are correlated with the eastward propagation of the MJO. In addition, the MJO signal as used in this study refers to the large-scale features of the oscillation (i.e., 20–100 days; zonal wavenumbers 1–5). Variations outside these bands are referred to here as non-MJO signals, but they are not entirely independent of the MJO. Inspection of forecasts of tropical precipitation (D1 domain) does not suggest a reintensification of the MJO in the NMJ experiment relative to the MJO signal in the TRO experiment (not shown).
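The latitudinal tapering of the perturbations in the TRO and NMJ experiments can be sketched as follows. The weights are interpolated through the anchor values quoted above; the zero weight poleward of 20°N is an assumption made for illustration.

```python
import numpy as np

def taper_weights(lat):
    # Weights decrease the influence of tropical perturbations poleward
    # of 10N: w = 1 for 18.1S-10N, 0.85 at 15N, 0.5 at 17.5N, 0.15 at
    # 20N, and (assumed) 0 poleward of 20.5N.
    anchor_lat = np.array([-18.1, 10.0, 15.0, 17.5, 20.0, 20.5])
    anchor_w = np.array([1.0, 1.0, 0.85, 0.5, 0.15, 0.0])
    return np.interp(lat, anchor_lat, anchor_w)

lats = np.linspace(-18.1, 25.0, 50)
w = taper_weights(lats)
# for a (lat, lon) perturbation field: tapered = w[:, None] * perturbation
```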
The three experiments are used to compare the growth rate of forecast errors in the tropics and Northern Hemisphere extratropics. Specifically, experiments TRO and NMJ allow for investigating the relative roles of MJO and non-MJO variability and how quickly errors in initial conditions propagate from the tropics to the extratropics. Note also that perturbations to the initial conditions are only applied to domain D1; domain D2 always starts from unperturbed reanalysis and WRF is configured with one-way interaction. In this setup, perturbations propagate over the D1 domain and are transmitted through the lateral boundaries of domain D2. This allows quantifying the relative roles of tropical and extratropical influences on the potential predictability of precipitation over CONUS.
3. Results
Forecast error growth in eddy kinetic energy (EKE) is a useful metric to study predictability of three-dimensional, nonlinear, multiscale geophysical flows such as the atmosphere (e.g., Palmer et al. 2014; Tribbia and Baumhefner 2004). This is computed as EKE = 0.5(u*2 + υ*2), where u* = u − [u] and υ* = υ − [υ] are departures from the zonal means [u] and [υ] in the D1 domain. To get a first perspective in the All Lat experiment, EKE (200 hPa) is computed for all forecasts (11 members, 15-day forecast for each day during 17 December 2004–20 January 2005). To compare error growth over different latitudes, EKE is normalized at each grid point by subtracting the time mean and dividing by the standard deviation of the 0-day lead forecast averaged over all members:
NEKE(λ, φ, l, p, t) = [EKE(λ, φ, l, p, t) − μ(λ, φ)]/σ(λ, φ),

where λ is longitude; φ is latitude; l = 1, …, 15 days; p = 1, …, 11 members; t is time (N = 35 days; 17 December 2004–20 January 2005); and μ(λ, φ) and σ(λ, φ) are the mean and standard deviation in the All Lat experiment.
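The EKE computation and its normalization can be sketched with toy wind fields (dimension sizes are illustrative, not those of the D1 grid):

```python
import numpy as np

rng = np.random.default_rng(1)
P, L, T, NY, NX = 3, 4, 5, 6, 8              # members, leads, days, lat, lon
u = rng.standard_normal((P, L, T, NY, NX))   # toy 200-hPa zonal wind
v = rng.standard_normal((P, L, T, NY, NX))   # toy 200-hPa meridional wind

# eddy components: departures from the zonal mean
u_star = u - u.mean(axis=-1, keepdims=True)
v_star = v - v.mean(axis=-1, keepdims=True)
eke = 0.5 * (u_star**2 + v_star**2)

# normalize at each grid point with the mean and standard deviation of
# the 0-day lead forecast averaged over all members and days
base = eke[:, 0]                             # (member, day, lat, lon)
mu = base.mean(axis=(0, 1))
sigma = base.std(axis=(0, 1))
neke = (eke - mu) / sigma
```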
Potential predictability (Luo and Wood 2006; Lavers et al. 2014) is used to determine the ability of the model to predict itself given small perturbations in the initial conditions. In this approach, one member is taken as validation, another member as forecast, and score statistics are computed; the process is repeated with another member as validation and another as forecast, and so on, until all combinations are done (there are 55 possible combinations). The root-mean-square (rms) error of normalized eddy kinetic energy (NEKE) for a given lead time l and combination of forecast members F and validation V is

rms(λ, φ, l, F, V) = {(1/N) Σt [NEKEF(λ, φ, l, t) − NEKEV(λ, φ, l, t)]²}¹ᐟ².
The ensemble average of all combinations of rms errors is then computed and used here as a metric of potential predictability. The rms errors are expressed in units of local standard deviation; rms = 1 indicates that forecast errors grow by one standard deviation, and it is used here as a limit of potential predictability. Uncertainties in potential predictability are estimated with standard deviations from rms errors derived from all possible combinations of forecast–validation pairs.
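The perfect-model verification over all forecast–validation pairs can be sketched as follows (toy NEKE fields at a single lead time; sizes are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, T, NY, NX = 11, 35, 6, 8                 # members, days, lat, lon
neke = rng.standard_normal((M, T, NY, NX))  # toy NEKE at one lead time

# rms error for every unordered forecast-validation pair (55 pairs)
pairs = list(combinations(range(M), 2))
rms = np.empty((len(pairs), NY, NX))
for k, (f, v) in enumerate(pairs):
    rms[k] = np.sqrt(((neke[f] - neke[v]) ** 2).mean(axis=0))  # over days

rms_mean = rms.mean(axis=0)    # potential-predictability metric
rms_spread = rms.std(axis=0)   # uncertainty estimate (spread across pairs)
```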
Mean rms errors of NEKE in the All Lat experiment grow to approximately 0.2–0.8 at 1-day lead over most of the tropics and midlatitudes (Fig. 2, top). Error growth, however, is much faster (0.8–1.0) over midlatitudes in Asia, the North Pacific, the Atlantic, and small areas in the tropics. As expected, forecast errors grow to more than one standard deviation as the lead time increases to 5 and 10 days. Large rms errors in the tropics have small spatial scales at 5 days and subsequently enlarge to broad areas at 10 days, suggesting upscale propagation of forecast errors.
The results above raise some important issues. While the MJO is considered a source of potential predictability, uncertainties in initial conditions on spatiotemporal scales not directly associated with the MJO (which is defined here as 20–100 days and zonal wavenumbers 1–5) can significantly impact forecast skill. To quantify these relationships, forecast error growth in the TRO and NMJ experiments is compared. The normalization of EKE (200 hPa) in these experiments, however, is done as

NEKEEXP(λ, φ, l, p, t) = [EKEEXP(λ, φ, l, p, t) − μEXP(λ, φ)]/σ(λ, φ),
where EXP denotes either the TRO or NMJ experiment. Note that the normalization uses the mean EKE from the given experiment, but the standard deviation is always from the All Lat experiment. This is done because signal-to-noise ratios in the initial conditions in both experiments are different; recall that in the NMJ experiment the intraseasonal signal associated with the MJO is removed from the perturbations prior to being added to the initial conditions. In this normalization, rms errors are expressed in units of standard deviation from the All Lat experiment, and rms = 1 is also used as a limit of potential predictability.
Figure 3 shows the ratio of rms errors between the two experiments. Since perturbations to initial conditions in both experiments are only added between 18.1°S and 20°N, rms errors at 1 day are zero (or extremely small) poleward of 20°N, and, therefore, the ratio is undefined at those latitudes (Fig. 3, top). In the tropics, large rms ratios occur over the Indian Ocean, the central Pacific, and South America and indicate that forecast errors in NEKE grow up to 3 times faster in the NMJ than in the TRO experiment. At 5 days (Fig. 3, middle), forecast errors in the NMJ experiment continue to grow faster than in the TRO experiment over most of the tropics, except over some small areas where the ratio is less than one. In addition, errors originating in the tropics propagate poleward of 20°N. At 10 days (Fig. 3, bottom), the ratio of error growth between the two experiments decreases, although it remains larger than one over most of the domain.
To get a better quantification of differences in forecast error growth among the three experiments, Fig. 4 shows zonal averages of mean rms in NEKE (200 hPa). The All Lat experiment (Fig. 4, top) clearly shows that forecast errors in the tropics grow faster than in the extratropics: rms = 1.0 at 3 days at the equator, whereas rms = 0.8–1.0 at about 12 days at 40°N. If one arbitrarily sets a threshold of rms = 1, then potential predictability of EKE (200 hPa) extends to about 3 days in the tropics and about 10–15 days in the Northern Hemisphere extratropics. In the TRO experiment (Fig. 4, middle), rms = 1.0 at 3 days between 18.1°S and 20°N, while forecast errors quickly saturate in the NMJ experiment (Fig. 4, bottom). In addition, forecast errors in the tropics propagate to the extratropics at a slower rate in the TRO than in the NMJ experiment: rms = 0.4 at 8 days at 30°N (Fig. 4, middle) and rms = 0.4 at 6 days at 30°N (Fig. 4, bottom). The All Lat and TRO experiments also suggest that errors in the extratropics can propagate into the tropics and decrease predictability; for instance, rms errors are higher at 6–15 days over 10°S–10°N in the All Lat (Fig. 4, top) than in the TRO experiment (Fig. 4, middle). This is consistent with the momentum budget analysis of Sakaeda and Roundy (2015), who showed that intraseasonal and synoptic wave breaking modulates equatorial zonal momentum and influences Rossby and Kelvin waves, which can then affect the organization of tropical convection over the Western Hemisphere, Africa, and the Indian Ocean.
Figure 5 summarizes potential predictability in the All Lat experiment and displays median rms errors in NEKE (200 hPa) over the tropics and extratropics. Uncertainties (vertical bars) are estimated with the standard deviation of rms errors derived from all possible combinations of forecast–validation pairs. Forecast errors grow at the same pace in both areas up to 1 day but considerably faster thereafter in the tropics. While rms = 1 is reached at 4 days in the tropics, it is reached at 9 days in the extratropics. Additionally, forecast errors in the extratropics grow almost linearly between 3 and 15 days, whereas error growth in the tropics deviates slightly from this linear behavior.
Several previous studies have discussed scale interactions and upscale propagation of forecast errors (Tribbia and Baumhefner 2004; Palmer et al. 2014; Durran and Weyn 2016). The dependency of forecast error growth on spatial scales is now specifically investigated in this MJO event using the All Lat experiment and the following calculations. Forecasts of u and υ (200 hPa) are first spatially filtered into two groups of zonal wavenumbers (1–5 and 6–54), and EKE is then computed and normalized. The wavenumber 1–5 group captures the large-scale circulation associated with the MJO, while the wavenumber 6–54 group represents small-scale EKE not directly related to the large-scale features of the MJO. For reference, wavenumbers 5 and 54 correspond to 5929.2 and 549 km, respectively, in the zonal direction in the D1 domain. Note that, as before, the filtering procedure does not separate westward- from eastward-propagating waves. Next, forecast verification is performed as explained before, and rms errors are calculated for each group. Last, ensemble averages of all combinations of rms errors are computed. Uncertainties in potential predictability for both groups of zonal wavenumbers are estimated with the standard deviations of rms errors derived from all possible combinations of forecast–validation pairs. To determine whether or not the error growth in the wavenumber 1–5 group is different than in the wavenumber 6–54 group, we compare the mean rms error plus one standard deviation in the wavenumber 1–5 group against the mean rms error plus one standard deviation in the wavenumber 6–54 group.
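The zonal wavenumber decomposition can be sketched with an FFT along the longitude axis. Treating the regional D1 domain as zonally periodic is an idealization made here for illustration, and the toy grid has 64 longitudes rather than the 108 implied by wavenumber 54.

```python
import numpy as np

def keep_wavenumbers(field, kmin, kmax):
    # Retain only zonal wavenumbers kmin..kmax along the last axis
    # (longitude); all other Fourier coefficients are zeroed.
    spec = np.fft.rfft(field, axis=-1)
    k = np.arange(spec.shape[-1])
    spec = spec * ((k >= kmin) & (k <= kmax))
    return np.fft.irfft(spec, n=field.shape[-1], axis=-1)

rng = np.random.default_rng(3)
u = rng.standard_normal((6, 64))        # toy (lat, lon) wind field
u_large = keep_wavenumbers(u, 1, 5)     # MJO-related large scales
u_small = keep_wavenumbers(u, 6, 32)    # smaller, non-MJO scales
```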
Figure 6 shows mean rms errors in the zonal wavenumber 1–5 group. At 1 day, errors grow to about 0.1–0.6 over most of the domain, although they grow faster in some areas in the tropics and midlatitudes. At 7 days, rms errors in NEKE grow significantly over large areas, especially in the tropics; at 14 days, errors saturate over the Indian Ocean and other areas in the tropics and midlatitudes. Comparing these results with the corresponding NEKE errors in zonal wavenumbers 6–54 (Fig. 7) shows much faster growth on small spatial scales than on the large scales associated with the MJO. At 1 day, rms errors grow to 0.6–1.6 over most of the domain (Fig. 7, top). Subsequently, NEKE errors enlarge and saturate over broad areas in the tropics (Fig. 7, middle and bottom). Note also that errors can momentarily decrease locally as lead time increases, likely indicating upscale propagation of errors. It is important to note, however, that differences in error growth between MJO and non-MJO scales are distinct from each other mostly in localized regions in the tropics at 1-day lead (cf. thin black contours in Figs. 6 and 7).
To summarize these relationships, Fig. 8 shows median rms errors over the tropics and extratropics for each group of zonal wavenumbers. Uncertainties in potential predictability are estimated with standard deviations (vertical bars) among all possible combinations of forecast–verification pairs. In the tropics (Fig. 8, top), NEKE errors grow fast, and rms ≥ 1 at 1–2 days in the wavenumber 6–54 group. In contrast, errors grow at a slower rate for wavenumbers 1–5, and predictability (rms = 1) extends to about 5–6 days. It is noteworthy that NEKE errors in wavenumbers 6–54 grow slower than in wavenumbers 1–5 at lead times longer than 8 days. In the extratropics (Fig. 8, bottom), rms errors in wavenumbers 6–54 grow fast up to 1 day, slow thereafter, and nearly saturate. A normalized error near rms ≅ 0.8–0.9 corresponds to a random phase error in the NEKE (200 hPa) pattern, which may be expected for short waves because many waves with different phase errors are present in the domain. In contrast, rms > 1 corresponds to a more systematic phase error of the fewer long waves present in the domain. For the long waves (the wavenumber 1–5 group), potential predictability in NEKE (rms = 1) extends to about 10 days in the extratropics.
The results above lead to three main conclusions: 1) forecast errors in the tropics grow faster than in the extratropics, 2) forecast errors associated with the MJO (20–100 day; zonal wavenumbers 1–5) grow slower than errors on spatiotemporal scales not related to the large-scale characteristics of the oscillation over localized areas in the tropics and short lead times, and 3) forecast errors in the tropics quickly propagate to the extratropics. This motivates the investigation of how potential predictability of precipitation over CONUS relates to uncertainties in initial conditions in the tropics–extratropics and the MJO. This topic is investigated using WRF forecasts in the D2 domain. We recall that initial conditions in the D2 domain are unperturbed, and, therefore, errors in precipitation forecasts are transmitted through the lateral boundaries of the D2 domain.
The All Lat experiment is analyzed first. Since precipitation is highly skewed (Lavers et al. 2014), a square root transformation is applied to daily precipitation. Potential predictability is estimated with correlations between pairs of forecast members F and V:

cor(λ, φ, l) = Σt (F − F̄)(V − V̄)/(N σF σV),

where overbars indicate time means, and σF and σV are standard deviations. The ensemble average over all combinations of correlations is then computed.
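The correlation-based verification can be sketched as follows (toy precipitation series for one grid point and lead time; the gamma distribution stands in for the skewed daily amounts):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
M, T = 11, 35                                          # members, days
precip = rng.gamma(shape=0.8, scale=4.0, size=(M, T))  # skewed, mm/day

x = np.sqrt(precip)   # square root transform reduces the skewness

def corr(a, b):
    # Pearson correlation between a forecast and a validation series
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

cors = [corr(x[f], x[v]) for f, v in combinations(range(M), 2)]
cor_mean = float(np.mean(cors))   # ensemble-average potential predictability
```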
Figure 9 shows mean correlations of daily precipitation from the All Lat experiment. At 2 days (Fig. 9, top), predictability of precipitation is fairly high (cor > 0.7) over a broad area extending from California to the U.S. Midwest, Southeast, and Northeast, and lower over Oregon, Montana, and North and South Dakota. The region of high predictability collocates with the region of heavy orographic precipitation during January 2005 [see Fig. 1 in Jones and Carvalho (2014)]. At 5 days (Fig. 9, middle), predictability decreases substantially over most of CONUS (cor = 0.1–0.5), although it is still high over the Sierra Nevada and Northern California and near the Great Lakes. In both locations the direction of the large-scale flow strongly determines orographic precipitation (e.g., California) and lake-effect precipitation (e.g., Great Lakes) and enhances predictability. At 10 days, correlations are smaller than 0.5 (Fig. 9, bottom) over most of CONUS; nevertheless, correlations are still greater than 0.5 at high elevations in some locations in Northern California.
Predictability of precipitation is further investigated with rms errors. The square root transformation is used, and rms errors are computed as explained before, except that normalization is not applied. Figure 10 shows error growth in precipitation forecasts in the All Lat experiment. At 2 days, rms errors of up to 1.5 mm day−1 are observed over most of CONUS. Subsequently, at 5 and 10 days, rms errors grow to more than 1.5 mm day−1, especially in Southern California, the Sierra Nevada, Arizona, Utah, and Colorado, and over a long band in the eastern CONUS. Differences between the spatial patterns in Figs. 9 and 10 are related to precipitation magnitudes: rms errors are larger where precipitation amounts are larger, even when forecasts and validations differ mainly by a phase lag, which the correlation metric penalizes.
Potential predictability of precipitation based on the correlation and rms error metrics is summarized for the All Lat experiment. Figure 11 (top) shows the last lead time at which correlation is above 0.51 (an arbitrary value). For most of CONUS, potential predictability extends out to 1–5 days, except over the Coastal Range and Sierra Nevada in Northern California, where predictability extends to 6–12 days. Evidently, orographic precipitation in these regions was very intense. Figure 11 (bottom) shows the first lead time at which rms error exceeds one standard deviation of the local frequency distribution of daily precipitation. Regions where rms errors never exceed one standard deviation are set to undefined (indicated in white). In the western CONUS, the rms error metric shows higher predictability than correlations and extends to 6–10 days over broad areas. Interestingly, potential predictability of precipitation extends to long leads (7–14 days) over several areas in the eastern CONUS. It is important to note, however, that the results above refer to potential predictability. Forecast skill is lower when model forecasts are verified against observed precipitation.
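The two threshold-based summaries in Fig. 11 can be sketched with idealized correlation-decay and rms-growth curves; the functional forms below are illustrative, not fits to the experiments.

```python
import numpy as np

LEADS, NY, NX = 15, 6, 8
lead = np.arange(1, LEADS + 1)[:, None, None]      # 1..15 days
cor = (1.0 - 0.08 * lead) * np.ones((1, NY, NX))   # idealized decay
rms = (0.2 * lead) * np.ones((1, NY, NX))          # idealized growth

# last lead (1-based) at which correlation is still above 0.51
above = cor > 0.51
last_lead = np.where(above.any(axis=0),
                     LEADS - np.argmax(above[::-1], axis=0), 0)

# first lead at which rms error exceeds the threshold (here 1.0);
# grid points that never exceed it would be left undefined (0 here)
exceeds = rms > 1.0
first_lead = np.where(exceeds.any(axis=0),
                      np.argmax(exceeds, axis=0) + 1, 0)
```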
The question of how errors in initial conditions in the tropics for MJO and non-MJO variability affect potential predictability of precipitation over CONUS is addressed by comparing the TRO and NMJ experiments. The square root transformation is applied to daily precipitation forecasts in each experiment, and rms errors are calculated as explained before (normalization is not applied). We recall that perturbations in the TRO and NMJ experiments are added to the initial conditions only in the tropics in the D1 domain; domain D2 (18 km) always starts from the unperturbed analyses. Perturbations originating in the tropics in the D1 domain are transmitted through the lateral boundaries of the D2 domain as the forecasts evolve. For this reason, errors in precipitation forecasts in the TRO and NMJ experiments grow slower than in the All Lat experiment, and potential predictability in those experiments extends beyond 14 days over many areas of CONUS (not shown). To make the presentation more succinct, Fig. 12 shows, as an example, the ratio of rms errors between the NMJ and TRO experiments at 5 days. Errors in forecasts of precipitation across many areas of CONUS grow faster (rms ratio ≥ 1.0) in the NMJ than in the TRO experiment. Similar spatial patterns are seen at other lead times (not shown).
To summarize differences between the NMJ and TRO experiments, Fig. 13 shows medians and interquartile ranges of the ratio of rms errors calculated over the three CONUS sectors as functions of lead time. The median value is above one for several lead times in all three sectors, showing that errors in precipitation forecasts grow faster in the NMJ than in the TRO experiment, especially between 3- and 12-day lead times. The averages of the median rms ratios are 1.06 (western), 1.09 (central), and 1.15 (eastern) for lead times between 4 and 12 days. Locally, however, rms errors can grow much faster in the NMJ than in the TRO experiment. The mean 75th percentiles of the rms ratios are 1.25 (western), 1.28 (central), and 1.31 (eastern) for lead times between 1 and 14 days. These results show that tropical errors in initial conditions on scales not directly associated with the MJO (i.e., the NMJ experiment) grow fast and have an important influence on the potential predictability of precipitation over CONUS.
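The sector summaries can be sketched as follows (toy rms errors for one sector; magnitudes and sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(6)
LEADS, NPTS = 14, 400                         # lead times, sector grid points
rms_tro = 0.5 + 0.1 * rng.random((LEADS, NPTS))
rms_nmj = rms_tro * (1.1 + 0.2 * rng.random((LEADS, NPTS)))

ratio = rms_nmj / rms_tro                     # > 1: NMJ errors grow faster
med = np.median(ratio, axis=1)                # median ratio per lead time
q75 = np.percentile(ratio, 75, axis=1)        # local upper quartile
mean_med_4_12 = float(med[3:12].mean())       # average over 4-12-day leads
```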
4. Summary and conclusions
Potential predictability is investigated during the 2004/05 boreal winter, when the MJO was active from 18 December 2004 to 20 January 2005. That winter season was also characterized by extremely wet conditions across large areas of CONUS. Predictability experiments with WRF are done to investigate how forecast errors in the tropics grow relative to the extratropics, the importance of MJO and non-MJO variability, and potential predictability of precipitation over CONUS.
Normalized EKE at 200 hPa is used as a metric to investigate predictability, and the results show that errors in initial conditions in the tropics grow faster than in the extratropics. For this particular case study (model and experiments), potential predictability of NEKE extends out to about 4 days in the tropics and 9 days in the extratropics (All Lat; Fig. 5). Forecast errors originating in the tropics quickly propagate to the extratropics, as demonstrated by experiments in which initial conditions are perturbed only in the tropics. Furthermore, forecast errors in tropical NEKE (200 hPa) grow substantially faster with non-MJO errors in the initial conditions than with MJO errors (Figs. 3 and 4). To investigate the growth of forecast errors on different spatial scales, NEKE (200 hPa) is spatially filtered into two groups of zonal wavenumbers, 1–5 and 6–54. Forecast errors in NEKE (200 hPa) on zonal wavenumbers 1–5, and therefore directly related to the MJO, grow slower than errors in non-MJO variability over localized areas in the tropics and short lead times. Predictability of NEKE in the tropics extends to 1–2 days in the wavenumber 6–54 group and 5–6 days in the wavenumber 1–5 group.
Potential predictability of precipitation in this case study is estimated to extend to about 1–5 days over most of CONUS but to longer leads (7–12 days) over the western and eastern CONUS sectors, especially in regions with orographic (e.g., California) and lake effect (e.g., Great Lakes) precipitation. Moreover, an important question investigated here is how tropical errors in initial conditions associated with the MJO and non-MJO variability affect the predictability of precipitation over CONUS. The results show that errors on small scales relative to the large-scale characteristics of the MJO can quickly grow, propagate to the extratropics, and degrade forecast skill of precipitation over CONUS (Figs. 3, 12, and 13).
This study highlights interesting points about potential predictability associated with the MJO. The MJO manifests as coupling between tropical convection and planetary-scale circulation anomalies. In the convective field, the MJO has substantial spectral power between 30 and 60 days and eastward-propagating zonal wavenumbers of approximately 1–3 (Wheeler and Kiladis 1999). Anomalies in outgoing longwave radiation during the MJO life cycle are a manifestation of the collective diabatic heating of mesoscale convective systems organized within the envelope of enhanced convective activity (Hendon and Liebmann 1994; Hendon and Salby 1994; Zhang 2005). As previously discussed, progress in representing the large-scale characteristics of the MJO in global models and improvements in forecast skill of the oscillation have been significant in approximately the last 25 years. In particular, advances in computing technology and software engineering have enabled global models to run at horizontal resolutions that were not possible 5–10 years ago. Thus, global models are now able to resolve additional spatial scales that occur within the MJO life cycle. However, as shown here, when an MJO event is occurring, errors in initial conditions on scales not directly associated with the MJO (defined here as 20–100 days; zonal wavenumbers 1–5) grow quickly, and upscale propagation of forecast errors contaminates the large scales and degrades forecast skill in the tropics and extratropics. To fully explore the potential predictability of the MJO, further improvements in model representation of tropical mesoscale processes and in observational systems are needed. The Dynamics of the MJO (DYNAMO) field campaign is one example that may guide future research (e.g., Kerns and Chen 2014).
In this study, we have used EKE (200 hPa) to investigate potential predictability of the atmospheric circulation during an MJO event, including differences in predictability on scales associated with MJO and non-MJO variability. The scale dependence of precipitation predictability and its linkages to atmospheric predictability (e.g., EKE at 200 hPa) have not been investigated here. Last, some caveats are worth mentioning. Potential predictability is investigated here during only one MJO event, which occurred in association with a weak warm ENSO phase and a strong NAO event. Because the study is based on only one event, the dependence of forecast error growth on spatial scale and MJO phase could not be evaluated. In addition, interactions between different modes of climate variability may also alter potential predictability associated with the MJO. These issues are being investigated and will be reported on in a separate study.
This research was supported by the Climate and Large-Scale Dynamics Program of the National Science Foundation (AGS-1053294). The CFSR was provided by the Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory, Boulder, Colorado (available online at http://rda.ucar.edu/datasets/ds093.0). The research also benefited from computational support provided by CISL, NCAR.