1. Introduction
The flooding and landslides in California that followed the strong 1982/83 and 1997/98 El Niño events raised concerns that the predicted strong 2015/16 El Niño could lead to similar outcomes. An El Niño advisory was issued in March 2015, and in April 2015 forecasters at the NOAA/Climate Prediction Center (CPC) began to note the potential for enhanced precipitation over parts of California during winter 2015/16 (CPC 2015a,b). In August 2015, CPC forecasters indicated that the upcoming El Niño would be historically strong with Niño-3.4 index values exceeding +2.0°C (CPC 2015c). Hoell et al. (2016) used historical AMIP simulations to conclude that there was an 85% chance of increased California precipitation during El Niño events as strong as the 2015/16 event was predicted to be. In contrast, operational CPC precipitation forecasts for California gave a 40%–60% chance of above-median precipitation for January–March 2016 (CPC 2015d). The weaker CPC probabilities turned out to be justified in the sense that below-median precipitation was observed across much of California (Jong et al. 2018; Chen and Kumar 2018), frustrating hopes for the end of California’s multiyear drought (Seager et al. 2015; Wahl et al. 2017). Wet conditions during the 2016/17 La Niña raised further questions as to the impact of El Niño–Southern Oscillation (ENSO) on California precipitation.
ENSO is a primary source of skill in seasonal precipitation forecasts. Consequently, seasonal precipitation forecasts tend to closely resemble ENSO teleconnections and be most confident during ENSO events. Increased precipitation over California during the Northern Hemisphere winter and early spring is expected during El Niño (e.g., Schonher and Nicholson 1989; Jong et al. 2016), and La Niña is associated with the opposite pattern and drier conditions across California (e.g., L’Heureux et al. 2015; Deser et al. 2018). However, seasonal forecasts are necessarily probabilistic because predictable signals explain a relatively small fraction of the observed seasonal climate variability. Also, the small number of strong El Niño events in the recent observational record is a source of uncertainty in estimates of ENSO teleconnections. To overcome this limitation of the historical record, Kumar and Chen (2017) examined the relationship of California precipitation and ENSO in integrations of the NCEP Climate Forecast System version 2 (CFSv2; Saha et al. 2014). While CFSv2 does show a mean shift (signal) toward wetter conditions over California during El Niño conditions, the variability unexplained by ENSO (noise) is large enough that even during strong El Niño events, drier-than-average conditions, as occurred in 2015/16, are possible. The signal-to-noise ratio is in fact less than one over all of California in the CFSv2 model (Kumar and Chen 2017). Using a broader set of models and examining all winter seasons, correlations for predictions of California precipitation with ENSO are typically on the order of 0.3–0.5, or ~9%–25% of variability explained (Kumar and Chen 2020).
While it is discouraging that even the most extreme El Niño events offer limited predictability of California precipitation on seasonal time scales, forecasts of subseasonal climate anomalies may offer some hope. For instance, monthly forecasts made at the start of January and February in 2016 and 2017 were able to capture the correct sign of the observed anomaly for much of the month (Wang et al. 2017). In those monthly forecasts, a zonal wind index for offshore, upper-level winds over the eastern North Pacific Ocean provided an explanation for this skill in the sense that January and February values of the wind index are well correlated to California statewide precipitation, and the wind index can be skillfully predicted. Compared to ENSO, the North Pacific jet stream is a more proximate factor in the evolution of precipitation over California. Previous research on atmospheric rivers has also emphasized the importance of the jet stream in steering moisture and storms toward California (Ralph et al. 2018; Fish et al. 2019). However, a question that remains is what are the relative contributions of tropical Pacific variability and the more immediately located upper-level wind anomalies in driving precipitation variability and forecast skill over California on subseasonal and submonthly time scales.
Given the linkages between jet stream variability over the North Pacific Ocean and leading tropical modes, such as ENSO and the Madden–Julian oscillation (MJO), it can be a challenge to untangle the drivers that result in skillful predictions of California precipitation. For instance, monthly averages of the zonal wind index are significantly correlated with ENSO at a level of 0.6 for winter months (Wang et al. 2017). Others have shown that certain phases of the MJO can help to drive California precipitation (Higgins et al. 2000; Jones 2000; Arcodia et al. 2020). Determining the sources of skill and predictability has practical implications too. For instance, for the purpose of model development and improvement, it is useful to know whether improving the dynamics of the jet stream or reducing errors in the representation and prediction of tropical modes would be more likely to improve California precipitation predictions. Some have speculated that reducing cold tongue biases and improving predictions of sea surface temperature (SST) in the western equatorial Pacific may enhance precipitation predictions (Dias and Kiladis 2019; Bayr et al. 2019; Ferrett et al. 2020).
This study, inspired by Kumar and Chen (2017), examines drivers of California precipitation predictability and skill in hindcasts from the Subseasonal Experiment (SubX; Pegion et al. 2019) suite of models. Using model data avoids some of the sample size limitations of the historical record. Moreover, the relatively low skill of the precipitation forecasts, especially at longer leads, means that the model precipitation is, to a large extent, independent of observations. We address the following questions: In the model, what is the relative sensitivity of California precipitation anomalies to the wind index and ENSO? What is the relationship between the zonal wind index and California precipitation after removing the influence of ENSO? Beyond ENSO, do anomalous patterns of tropical Pacific SST and outgoing longwave radiation (OLR) relate significantly to predicted California precipitation anomalies? Do SubX models demonstrate skill in predicting California precipitation, and what are some of the sources of skill or, conversely, of forecast error?
The paper is organized as follows. Data and methods are described in section 2. A preliminary analysis in section 3 is the basis for restricting our attention to a single SubX model and to statewide averages of California precipitation. Results are presented in section 4 and discussed in section 5.
2. Data and methods
a. SubX forecasts
Forecast data from SubX (Pegion et al. 2019) are analyzed in this study. This multimodel ensemble dataset contains retrospective and real-time forecasts. The real-time forecasts provide guidance for operational Week 3–4 outlooks at NOAA/CPC, which are issued weekly. Seven SubX models with varying ensemble sizes, initialization schemes, and ocean coupling make forecasts of 0000–0000 UTC daily averages. Forecasts extend approximately one month from their start date; a forecast valid 30 days after the start date is said to have a 30-day lead time. Retrospective forecast periods vary with model, as do start times, start frequency, and forecast length. Most of the model retrospective forecasts from SubX include the period 1999–2016, and all analysis here uses retrospective forecasts for that period. Because California precipitation occurs primarily during cool-season months, and because of our interest in ENSO, we further restrict our analysis to the boreal winter months of November–March (NDJFM).
b. Observations
The Unified gridded precipitation dataset (Chen et al. 2008) is commonly used at CPC for precipitation analysis and verification. However, the time resolution of the Unified data is 1200–1200 UTC daily averages, which is incompatible with the 0000–0000 UTC daily averages of the SubX reforecast data. This incompatibility is a problem both for SubX forecasts of daily averages and for longer forecast target windows (e.g., weekly averages), since there are 12 h at both the beginning and end of the forecast target window where the SubX and Unified data do not match. Therefore, we use observation-based precipitation data from 3-hourly values of the North American Regional Reanalysis (NARR; Mesinger et al. 2006). The correlation of statewide California averages of 1200–1200 UTC daily averages of NARR with the Unified data is greater than 0.95. The Unified data tend to have slightly larger amplitude than the NARR data for extreme precipitation events (Becker et al. 2009), likely due to the Unified dataset's more direct use of precipitation observations. We note that SubX forecast skill is slightly lower when computed using the Unified data (not shown).
The observed Niño-3.4 index is computed from anomalies of daily OISST data (Reynolds et al. 2007) averaged over the region of 5°N–5°S and 170°–120°W. The zonal wind index from Wang et al. (2017) is the difference of daily 200-hPa zonal wind anomalies from the NCEP–NCAR reanalysis data averaged over two boxes over the eastern North Pacific Ocean: 220°–240°E, 27°–40°N minus 210°–240°E, 45°–60°N. Observations of daily gridded SST anomalies in the tropical Pacific Ocean are processed from OSTIA (Donlon et al. 2012), and daily gridded OLR anomalies are computed from the High-Resolution Infrared Radiation Sounder (HIRS; Lee et al. 2007). Observational and model forecast climatologies are computed as the first three annual harmonics of daily data and removed from the total fields to form anomalies. Forecast climatologies are lead-time dependent, which means that lead-dependent biases as well as the seasonal cycle are removed from model forecasts.
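As a concrete illustration of this climatology calculation, the first three annual harmonics can be fit to daily data by least squares, as in the sketch below. This is a minimal example on synthetic data; the function and variable names are ours, not from any SubX processing code.

```python
import numpy as np

def harmonic_climatology(day_of_year, values, n_harmonics=3, period=365.25):
    """Least-squares fit of the mean plus the first n annual harmonics."""
    t = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, values, rcond=None)
    return X @ coeffs  # smooth climatology evaluated at each input day

# Synthetic daily series: a seasonal cycle plus noise
rng = np.random.default_rng(0)
doy = np.arange(1, 366)
total = (5.0 + 2.0 * np.cos(2.0 * np.pi * (doy - 15) / 365.25)
         + rng.normal(0.0, 0.5, doy.size))
anomalies = total - harmonic_climatology(doy, total)
```

For forecasts, the same fit would be performed separately at each lead time, so that subtracting the climatology removes lead-dependent biases along with the seasonal cycle.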
c. Methods
We use linear correlation as a measure of association between two quantities, for instance, to measure the association of ENSO or zonal wind with California precipitation. We use partial correlation as a measure of association between two quantities while accounting for (i.e., conditional on) a third quantity. Specifically, the partial correlation between California precipitation and the zonal wind index is computed by linearly removing the Niño-3.4 index from the two time series and correlating the residuals. Likewise, the partial correlation between California precipitation and the Niño-3.4 index is computed by linearly removing the zonal wind index from the two time series and correlating the residuals.
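A minimal sketch of this partial-correlation calculation, using synthetic series in place of the actual indices (the variable names are illustrative; here z plays the role of the Niño-3.4 index):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlate x and y after linearly removing z (with intercept) from both."""
    design = np.column_stack([np.ones_like(z), z])
    res_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    res_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

# Toy example: z drives both x and y, so their plain correlation is
# substantial while the partial correlation given z is near zero.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)
r_plain = np.corrcoef(x, y)[0, 1]
r_partial = partial_corr(x, y, z)
```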
3. Preliminary analysis
For the period 1999–2016, the observed correlations between monthly Niño-3.4 and monthly precipitation during NDJFM over California are neither statistically significant nor uniformly positive (Fig. 1), which differs from the correlations seen in seasonal (3-month) averages over longer periods (Jong et al. 2016; Kumar and Chen 2020). On the other hand, correlations between the monthly zonal wind index and monthly precipitation span the entire state of California and are statistically significant. Since the associations are fairly uniform across California, we focus on the statewide precipitation average in the analysis that follows.
Monthly correlations between precipitation anomalies and (a) the Niño-3.4 index and (b) zonal wind index during November–March over 1999–2016. The data are pooled, meaning that November–March are not averaged together and are retained as monthly averages. The stippling indicates significance at the 95% level using the Student’s t test.
Citation: Weather and Forecasting 36, 5; 10.1175/WAF-D-21-0061.1
The skill of predicting the statewide average of California precipitation was computed for each SubX model's reforecast as a function of lead time and forecast-target averaging window (Figs. S1–S7 in the online supplemental material). Differing start days of the retrospective forecasts and ensemble sizes make a direct comparison of model skill impractical, and that is not our goal here. Likewise, a calculation of the multimodel ensemble mean skill as a function of lead time is impossible for the same reason: forecasts with the same lead and target period are not available for models with disparate start days. Some notable features of the skill across models include that CCSM4 has lower skill at the shortest leads than the other models, likely because it does not have its own atmospheric data assimilation system. NRL also has relatively low skill in its short-lead forecasts of 1-day averages, which could be related to its initialization or ensemble strategy. Apart from these differences, skill across models for the statewide average of California precipitation is more similar than different. Therefore, we show results from a single model, the Flow-Following Icosahedral Model (FIM; Sun et al. 2018), which is a coupled model that is among the more skillful in the suite of SubX models. The Global Ensemble Forecast System (GEFS version 11 in SubX; Zhu et al. 2018) ensemble mean is slightly more skillful than FIM, but the larger GEFS ensemble (11 members versus 4 in FIM) can explain this difference. Since GEFS and FIM have the same start days (once a week on Wednesdays), their skill comparison is more straightforward. The skill of a randomly selected 4-member GEFS ensemble mean is statistically indistinguishable from that of the FIM ensemble mean in predicting California-wide precipitation and zonal winds according to a sign test (Fig. S8). Another reason for focusing on FIM is that it is a coupled model, whereas GEFS is not (Zhu et al. 2018). A coupled model can potentially represent the relation of California precipitation with dynamically evolving SST and OLR anomalies.
4. Results
a. Skill
The skill of FIM in predicting NDJFM California precipitation anomalies depends on both the forecast lead time and the length of the forecast averaging window (Fig. 2a). On the x axis, the forecast lead time describes the prediction for X days into the future from the model's starting condition. The color shading in the legend indicates the length of the forecast averaging window. For instance, a forecast of a 2-week average with a lead time of 8 days consists of the average of forecast days 8–21. Due to the inherent predictability limits of weather forecasting, the skill of forecasting a 1-day average decays steeply, dropping to a correlation of 0.5 around lead-day 8 and to 0.2 by around a 2-week lead. At NOAA/CPC, subseasonal forecasts are produced for Week 2 (forecast days 8–14) and for Week 3–4 (forecast days 15–28), and the FIM correlations for these targets are 0.5 and 0.3, respectively, reflecting a sharp drop in skill with lead times beyond day 7 that is not offset by longer forecast-averaging windows. Taking a correlation of 0.5 as a threshold for useful skill, that skill is lost at roughly the same lead regardless of forecast-averaging window: at day 9 for the single-day average forecast, at day 10 for the 1-week average, and at day 7 for the 2-week average. While the 3-week and monthly average forecast correlations (yellow and green shaded lines in Fig. 2a) remain at or slightly above 0.5 at short leads, this may well be primarily due to higher skill in the earlier part of the forecast-averaging window. Notably, we do not see skill with the signature of ENSO, that is, skill whose source is persistent and whose influence increases as the averaging window broadens due to enhancement of the predictable signal and reduction of unpredictable noise. Nor do we see skill extending to leads at which MJO forecasts have skill (Wang et al. 2014; Vitart 2014).
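The lead-time and averaging-window convention used in Fig. 2 can be made concrete with a short sketch (the daily array below is a stand-in for a single precipitation forecast, not SubX data):

```python
import numpy as np

def window_average(forecast, lead, window):
    """Average forecast days lead .. lead + window - 1 (leads are 1-based).

    forecast: 1-D array of daily values, where index 0 holds forecast day 1.
    """
    return forecast[lead - 1 : lead - 1 + window].mean()

daily = np.arange(1.0, 31.0)                # stand-in 30-day daily forecast
week2 = window_average(daily, 8, 7)         # Week 2: forecast days 8-14
weeks34 = window_average(daily, 15, 14)     # Week 3-4: forecast days 15-28
lead8_2wk = window_average(daily, 8, 14)    # 2-week average at 8-day lead: days 8-21
```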
On the other hand, the skill that FIM demonstrates in predicting the zonal wind index has similar characteristics to its California precipitation skill, albeit with slightly higher values (Fig. 2b). FIM predicts daily averages of the zonal wind index with a correlation exceeding 0.5 out to ~10 days and with a correlation of 0.7 for the monthly average. Though not shown, all SubX models demonstrate greater skill in predicting the zonal wind than in predicting California precipitation.
The correlation by forecast lead time (x axis) between observed and FIM ensemble mean predictions of (a) California statewide average precipitation anomalies and (b) the ensemble mean zonal wind index anomalies for monthly averages during November–March 1999–2016. The colors indicate different averaging windows ranging from daily to monthly. All correlations are statistically significant at the 95% level using a Student’s t test, with the exception of correlations beyond 23 days lead for the 1-day average.
b. Predictability
To directly investigate the question of what signals lead to skillful precipitation forecasts, we take the view that predictability is a prerequisite for skill and examine precipitation predictability in the FIM. First, we consider potential predictability of precipitation and ask to what extent the Niño-3.4 and wind indices in the model are sources of precipitation skill. The term potential is used because simultaneous values are analyzed, as in observational studies of potential predictability (Hoerling and Kumar 2002; DelSole et al. 2013). Computing the average precipitation anomaly in FIM ensemble members as a function of Niño-3.4 and the wind index (Fig. 3a) shows that average precipitation anomalies increase as the wind index increases (a strong gradient changing from orange to green shades in the vertical direction), regardless of the value of Niño-3.4. On the other hand, there are no clear changes in average precipitation anomalies as the Niño-3.4 index changes (no clear gradients in the horizontal direction of Fig. 3a). In fact, when the zonal wind index is near zero, California precipitation anomalies shift slightly from wetter conditions (light green) during negative Niño-3.4 values to drier conditions (light orange) for positive Niño-3.4 values. Overall, California precipitation anomalies discernibly shift depending on the value of the zonal wind index (Fig. 3b) but shift little with the value of the Niño-3.4 index (Fig. 3c), and this is true both for the central tendency of the precipitation distribution and for its percentiles. The only shift in the precipitation distribution with changes in Niño-3.4 is for values around 1.5°–2°C. We conclude that in FIM the wind index is a significant source of potential predictability and ENSO is not.
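The conditional averaging underlying Fig. 3a amounts to a two-dimensional binned statistic. A sketch on synthetic data follows; the coefficients are loosely motivated by the relationships described in the text (wind correlated with Niño-3.4, precipitation driven by the wind alone) and are not fit to the actual FIM output:

```python
import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(2)
n = 5000
nino34 = rng.normal(0.0, 1.0, n)                 # stand-in Nino-3.4 index
wind = 0.54 * nino34 + rng.normal(0.0, 1.0, n)   # wind correlated with ENSO
precip = 0.6 * wind + rng.normal(0.0, 1.0, n)    # precip driven by wind only

mean_precip, nino_edges, wind_edges, _ = binned_statistic_2d(
    nino34, wind, precip, statistic="mean", bins=10)
# mean_precip[i, j]: average precip anomaly in Nino-3.4 bin i, wind bin j
```

In this construction the binned means vary systematically along the wind axis but show no clear gradient along the Niño-3.4 axis once the wind is accounted for, mirroring the qualitative behavior of Fig. 3a.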
(a) Two-dimensional density plot of statewide averaged California precipitation anomalies conditional on the zonal wind index and the Niño-3.4 index. The mean and selected percentiles of the distribution of statewide averaged California precipitation anomalies conditional on (b) the zonal wind index and (c) the Niño-3.4 index. (d) A scatterplot of the zonal wind and Niño-3.4 indices. Lines in (b) and (c) show the 10th, 25th, 75th, and ~90th percentiles of precipitation and the mean. The color bar inset in (a) shows the average California precipitation value in mm day−1. Data show monthly averages from the 4 FIM members (1252 samples = 313 starts × 4 members) for November–March during 1999–2016. Gaps are present where there are no samples.
These details may be a consequence of limited sampling during the 1999–2016 hindcast period (note the gap in Niño-3.4 values between 2° and 2.5°C in Figs. 3a,c), but it is clear that wetter conditions occur during a positive Niño-3.4 index only with the support of a stronger wind index (all of the green values in Fig. 3a on the right side of the zero line are in the upper quadrant, where the wind index is positive). Likewise, dry conditions occur during a negative Niño-3.4 index only with the support of a weaker wind index (nearly all of the orange values in Fig. 3a on the left side of the zero line are located in the lower quadrant, where the wind index is negative). The strong dependence on zonal wind and lack of dependence on ENSO is even more striking in daily FIM averages (Fig. S9). Therefore, ENSO plays little role in subseasonal predictability of California precipitation according to FIM, with the zonal wind index playing a dominant role in predictability of daily to monthly averages. Despite the correlation between the wind index and Niño-3.4 in FIM being 0.54 (Fig. 3d), only the wind index shows a strong correlation with California precipitation.
As in the observed data (correlation of 0.6 for the monthly average; not shown), there is a clear relationship between the model’s Niño-3.4 index and the zonal wind across various averaging windows and lead times (Fig. 4). Unlike Fig. 2, which correlates the observations with the prediction, Fig. 4 displays the correlation between the predicted Niño-3.4 index and the predicted zonal wind index. A forecast lead time of 10 indicates the correlation between the two indices predicted 10 days into the future. In the ensemble means, the correlation between the model Niño-3.4 index and the zonal wind increases with lead time and averaging window, which is consistent with the relative enhancement of the ENSO signal in the ensemble mean as the synoptic-scale predictability related to the initial conditions fades away. There are also significant correlations, from 0.4 to 0.7, between the model’s California precipitation and the model’s zonal wind index for all averaging windows and lead times, with little dependence on lead time (Fig. 5a). When the model’s Niño-3.4 index is linearly removed from precipitation and zonal wind, the partial correlations are not notably different (cf. Fig. 5a with Fig. 5c), indicating that the association between precipitation and the zonal wind is not due to ENSO. Interestingly, the correlations continue to increase with lead time, which means the predictable signal for California precipitation must be provided by the zonal wind.
Correlations by forecast lead time (x axis) between FIM ensemble mean predictions of the zonal wind index and the FIM ensemble mean Niño-3.4 index. The colors indicate different averaging windows ranging from daily to monthly. November–March anomalies are created from removing the lead-dependent climatology over 1999–2016. All correlations are statistically significant at the 95% level using a Student’s t test.
The correlation by forecast lead time (x axis) of FIM ensemble mean predictions of California precipitation with (a) the ensemble mean wind index and (b) the ensemble mean Niño-3.4 index. The correlation by forecast lead time (x axis) of FIM ensemble mean predictions of California precipitation with (c) the ensemble mean wind index and the ensemble mean Niño-3.4 index removed and with (d) the ensemble mean Niño-3.4 index and the ensemble mean wind index removed. November–March anomalies are created from removing the lead-dependent climatology over 1999–2016. All correlations are statistically significant at the 95% level using a Student’s t test, with few exceptions. In (b), correlations are only significant beyond the 16-day lead for the 1-day average and beyond the 7-day lead for the 7-day average. In (d), the 1-day average is insignificant at 2–3-, 10–11-, and 22-day leads.
Even though the wind–precipitation relation is not sensitive to the presence of ENSO, is it possible that the zonal wind plays some role in ENSO–precipitation linkages? Based on the conditional distribution of precipitation with Niño-3.4 (Fig. 3c), the small positive correlations (less than 0.2) between the Niño-3.4 index and California precipitation are to be expected (Fig. 5b). Again, there is a tendency for the correlations to increase with lead time as the variability in the ensemble mean related to initial conditions decreases and Niño-3.4 explains more variance. However, with the removal of the zonal wind, the ENSO–precipitation relation changes markedly (Fig. 5d). The partial correlations are near zero or even negative (reaching −0.4 for some averaging windows and lead times). Though El Niño is weakly related to wetter conditions and La Niña is associated with a drier state, the removal of the zonal wind reverses these relationships. This behavior is consistent with the previously noted behavior in which near-zero zonal wind index values accompany an inversion of the relationship between monthly average precipitation and ENSO (Fig. 3). This finding is new, and we consider some potential mechanisms for it in the discussion section.
An interpretation of the results presented in Figs. 4 and 5 is that the ENSO–precipitation relationship is due to ENSO influencing the zonal wind and the zonal wind in turn influencing California precipitation. This two-step process means that the connection from ENSO to precipitation is relatively weak. However, the zonal wind varies for many reasons independent of ENSO. When ENSO and the zonal wind happen to be in alignment (Fig. 4), positive associations between ENSO and precipitation can exist (Fig. 5b). However, the zonal wind appears to influence precipitation independently of ENSO.
c. Forecast errors
The potential predictability analysis thus suggests that forecast skill is strongly related to the zonal wind index and negligibly related to Niño-3.4. If this is the case, errors in wind index predictions should be strongly related to errors in precipitation predictions. Errors are computed by subtracting the observed values from the ensemble mean forecast values. Indeed, errors in California precipitation are significantly correlated with errors in the zonal wind index in the FIM model at all leads and averaging windows (Fig. 6a). Conversely, the correlation between errors in California precipitation anomalies and errors in the Niño-3.4 index is close to zero for all forecast averaging windows and lead times (Fig. 6b). Errors in the prediction of precipitation are associated with errors in the zonal wind, not errors in the Niño-3.4 index.
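The error-correlation diagnostic is simple to compute. In the sketch below, synthetic forecasts share a common circulation error that affects both wind and precipitation; that shared-error structure is an assumption made purely for illustration, and all names and coefficients are ours:

```python
import numpy as np

def error_correlation(fcst_a, obs_a, fcst_b, obs_b):
    """Correlate the errors (forecast minus observation) of two quantities."""
    return np.corrcoef(fcst_a - obs_a, fcst_b - obs_b)[0, 1]

rng = np.random.default_rng(3)
n = 2000
shared = rng.normal(size=n)                    # common circulation error (assumed)
wind_err = shared + 0.5 * rng.normal(size=n)
precip_err = shared + 0.5 * rng.normal(size=n)
obs_wind = rng.normal(size=n)
obs_precip = rng.normal(size=n)
r_err = error_correlation(obs_wind + wind_err, obs_wind,
                          obs_precip + precip_err, obs_precip)
```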
The correlation by forecast lead time (x axis) of the FIM ensemble mean prediction error (forecasts minus observations) of the statewide average California precipitation anomalies with the error in (a) the zonal wind index and (b) the error in the Niño-3.4 index. The colors indicate different averaging windows ranging from daily to monthly. Observations are subtracted from the ensemble mean forecast to obtain the error. November–March anomalies are created from removing the lead-dependent climatology over 1999–2016. In (a), correlations are statistically significant at the 95% level using a Student’s t test, whereas in (b), none of the correlations are significant.
d. Other tropical forcings
Though the Niño-3.4 SST index has significant relationships with many other ENSO indices over the tropical Pacific Ocean (Bamston et al. 1997), there may be other regions or variables that are better related to California precipitation. For instance, during the 2015/16 El Niño, some (Lee et al. 2018; Siler et al. 2017) argued that the unique pattern, or flavor, of the tropical SST anomalies resulted in the abnormally dry conditions in California. To examine the possibility that SST and OLR anomalies across the tropical Pacific account for predictability and error in California precipitation, we computed correlation maps associated with the 32-day average in FIM. Within the model, ensemble mean SST anomalies that are positively correlated with ensemble mean California precipitation appear as an El Niño–like pattern, with maximum correlations on the equator in the central and eastern Pacific Ocean (Fig. 7a). The same is true for OLR, with an El Niño–like pattern of negative correlations evident from the date line to the eastern tropical Pacific Ocean (Fig. 7b). However, correlations are weak in both maps, with maxima around 0.2–0.4, meaning that SST and OLR correspond to only 4%–16% of the variance in predictions of monthly California precipitation. Correlations between forecast errors (Figs. 7c,d) are even weaker, with insignificant correlations (less than 0.2) across the tropical Pacific. Thus, while very limited predictability may arise from ENSO-like patterns in the tropical Pacific for monthly averages, it does not appear that other tropical SST or OLR regions contribute systematically to predictions of California precipitation. Likewise, SST and OLR correlations with forecast errors do not expose patterns that account for errors in California precipitation.
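Correlation maps like those in Fig. 7 can be computed pointwise by correlating a single index time series with every grid point of a (time, lat, lon) field. The sketch below uses a small synthetic field; the dimensions, names, and coefficients are illustrative only:

```python
import numpy as np

def corr_map(index, field):
    """Pointwise correlation of a 1-D time series with a (time, lat, lon) field."""
    idx = (index - index.mean()) / index.std()
    f = field - field.mean(axis=0)
    cov = (f * idx[:, None, None]).mean(axis=0)
    return cov / field.std(axis=0)

rng = np.random.default_rng(4)
nt, ny, nx = 300, 8, 12
precip_index = rng.normal(size=nt)            # stand-in precipitation series
sst = 0.5 * precip_index[:, None, None] + rng.normal(size=(nt, ny, nx))
r_map = corr_map(precip_index, sst)           # one correlation per grid point
```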
Correlation maps of FIM ensemble mean California statewide average precipitation anomalies with (a) ensemble mean sea surface temperature anomalies and (b) ensemble mean outgoing longwave radiation anomalies across the tropical Pacific Ocean. Correlation maps of California statewide average precipitation anomaly errors with (c) sea surface temperature anomaly errors and (d) outgoing longwave radiation anomaly errors across the tropical Pacific Ocean. November–March anomalies are created from removing the lead-dependent climatology over 1999–2016. Correlations are statistically significant for coefficients greater than 0.1 and less than −0.1 at the 95% level using a Student’s t test.
Precipitation skill could arise from shorter-time-scale variations, such as the MJO and other subseasonal processes. However, lead–longitude plots of forecast skill and error between daily averaged California precipitation anomalies and SST and OLR anomalies averaged between 5°S and 5°N show small correlations, at most around 0.2–0.3, between forecasts of precipitation and SST/OLR (Fig. 8). As in Fig. 7, the correlation patterns resemble ENSO, with a positive relationship with SST and a negative relationship with the overlying OLR. Overall, there is no strong evidence that differences between the observations and forecasts in SST or OLR patterns are correlated with errors in precipitation. Furthermore, the patterns shown in the forecast correlation plots are fairly stationary with lead time, showing only slight fluctuations in strength. In particular, there are no hints of eastward or westward propagation that might suggest a lead–lag relationship with the MJO or other subseasonal equatorial modes.
As in Fig. 7, but for time–longitude diagrams of 5°S–5°N averaged anomalies across the equatorial Pacific Ocean of (left) sea surface temperature and (right) outgoing longwave radiation. The x axis indicates the longitude, and the y axis denotes the forecast lead time by day. Data are based on daily (1-day) averages. Correlations are statistically significant for coefficients greater than 0.1 and less than −0.1 at the 95% level using a Student’s t test.
5. Discussion
Predictability and forecast errors in subseasonal California precipitation are more directly related to the upper-level zonal winds over the eastern North Pacific Ocean than to tropical Pacific variability. In the FIM model, ENSO and the MJO do not appear to contribute substantially to forecast skill or to account for errors in the predictions. Instead, the analysis presented here, albeit focused on a single SubX model, strongly indicates that accurately predicting the zonal winds near California is key to capturing California precipitation on daily to monthly time scales. Most of our analysis has focused on linear relations that are well described by correlation and partial correlation coefficients, so nonlinear relations may exist that are not captured in such a linear framework. However, we note that our nonparametric analysis (the binning in Fig. 3c) does not particularly support a nonlinear relation between ENSO and California precipitation. Also, we examined simultaneous rather than lagged relationships between the ENSO state and California precipitation, primarily because the persistence of ENSO exceeds 30 days. Future work might consider lagged associations with less persistent, subseasonal modes (Mundhenk et al. 2018).
Our findings are consistent with those of Wang et al. (2017), who found that monthly average predictions from ECMWF and CFSv2 were not significantly associated with tropical variability in forecasts of the 2015/16 El Niño or the 2016/17 La Niña. Also during these two events, Singh et al. (2018) and Swenson et al. (2019) showed that even seasonal mean forecasts were sensitive to the details of the extratropical, upper-level circulation anomalies over the North Pacific Ocean and California; that is, these seasonal anomalies were consistent with noise rather than with forcing from tropical heating. Other studies of subseasonal precipitation over California, such as the S2S-model results of Pan et al. (2019), obtain skill similar to that shown in the FIM and other SubX models. Generally, there is minimal evidence that daily-to-monthly tropical variations play a large role in modulating the predictability and prediction skill of subseasonal precipitation over California.
Nonetheless, we have shown that predictions of California precipitation can be skillful when time scales shorter than seasonal averages are considered, which provides an option for users who find the level of unpredictable noise in seasonal forecasts intolerable. Diagrams like those in Fig. 2 and Figs. S1–S7 can be used to select skill levels that better match the interests and confidence-level triggers for decision making. For example, looking at Fig. 2, someone may not feel comfortable taking action when correlations are 0.3 (9% of the observed variance is explained by the forecasts) but may be able to make a decision from correlation coefficients of 0.5 or greater (25% explained variance). If a correlation of ~0.5 or greater is required, then a 32-day average forecast would meet this condition, and so would a 7-day (1-week) averaging window for forecasts out to lead-8. In other words, a Week 2 forecast (a forecast made today for the average of 8–14 days from now) would meet these criteria. However, a Week 3–4 forecast (a 14-day average), which starts at lead-15, would fall short, offering a correlation of only ~0.3. Without a longer reforecast dataset, we were unable to test how skill may vary across decades (e.g., Weisheimer et al. 2020).
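The explained-variance figures quoted above follow from the standard identity that, for a simple linear relationship, the fraction of variance explained is the square of the correlation coefficient. A minimal check of that arithmetic (not part of the paper’s analysis):

```python
# Explained variance is the square of the correlation coefficient:
# r = 0.3 -> 9% of the variance; r = 0.5 -> 25%.
def explained_variance_pct(r):
    return 100 * r ** 2

print(round(explained_variance_pct(0.3), 1))  # 9.0
print(round(explained_variance_pct(0.5), 1))  # 25.0
```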
The analysis within this paper may also help inform decisions related to model development. Because the FIM is a coupled model, the lack of significant correlations with tropical Pacific SST and OLR may suggest that coupled processes are not essential to skillful subseasonal forecasts of California precipitation. The relative unimportance of coupled processes could also be why the GEFS, with its prescribed, uncoupled SSTs, appears to offer skill equivalent to the FIM in predictions of California precipitation and zonal wind. Therefore, focusing on the model’s representation of the jet stream and mid-to-upper-level circulation over the North Pacific Ocean may pay greater dividends in improving California precipitation forecasts (e.g., Jung et al. 2010; Rodwell et al. 2018; Grams et al. 2018; Chang et al. 2019; Sánchez et al. 2020, and many more).
Our use of partial correlation to analyze the relations between California precipitation, Niño-3.4, and the zonal wind index fits into the framework of causal networks, or causal inference (Pearl 2009; Kretschmer et al. 2021). Here, the nodes of the graphical model are ENSO, the zonal wind, and California precipitation. Kretschmer et al. (2021) used these three nodes as an example of a mediator type of causal structure. A simple linear regression with Niño-3.4 as the predictor and California precipitation as the predictand suggests a causal effect between the two (a positive, nonzero coefficient). However, when their wind index (based on storm-track activity) is added as a second predictor, the Niño-3.4 index is given zero weight. This implies that if the wind index is known, knowing the ENSO state provides no additional information (in a linear sense). Equivalently, ENSO and California precipitation are conditionally independent given the wind index. Moreover, the stronger relation of California precipitation with the wind index than with ENSO is consistent with wind index variability being influenced by factors in addition to ENSO.
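The mediator structure described above can be illustrated with a purely synthetic sketch (our own construction, not the paper’s data or indices): ENSO influences a wind index, the wind index alone drives precipitation, and regression recovers the conditional independence of precipitation from ENSO given the wind.

```python
# Synthetic illustration of the ENSO -> zonal wind -> precipitation
# mediator structure. All coefficients here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
enso = rng.standard_normal(n)                 # stand-in for a Nino-3.4 index
wind = 0.6 * enso + rng.standard_normal(n)    # wind index: ENSO plus other drivers
precip = 0.8 * wind + rng.standard_normal(n)  # precipitation depends on wind only

def ols(y, *predictors):
    """Ordinary least-squares coefficients (intercept appended last)."""
    X = np.column_stack(predictors + (np.ones_like(y),))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# ENSO alone gets a positive, nonzero weight (~0.6 * 0.8 = 0.48)...
b_enso = ols(precip, enso)[0]
# ...but conditioning on the mediator drives the ENSO weight toward zero.
b_enso_given_wind, b_wind = ols(precip, enso, wind)[:2]

print(round(b_enso, 2), round(b_enso_given_wind, 2), round(b_wind, 2))
```

With the wind index included, the ENSO coefficient is statistically indistinguishable from zero, mirroring the conditional independence discussed in the text.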
We find that useful skill in California precipitation predictions does not extend much beyond lead times of 7–10 days, regardless of averaging window. This behavior is in contrast to rainfall predictions in other regions, where there are strong ENSO and MJO signals and where longer window averages are more skillful than shorter window averages at long lead times (Tippett et al. 2015; Vigaud et al. 2019).
Our finding of a negative statistical relation between ENSO and California precipitation after accounting for zonal wind variability is intriguing, but at present we cannot offer a thorough physical explanation. Though not examined here, others have demonstrated that a warmer ocean relative to the land can result in drying over the land (e.g., Roxy et al. 2015). Perhaps the land–sea thermal contrast, which is weaker during El Niño owing to above-average SSTs off coastal California, can locally result in more offshore winds, increased subsidence, and drying over land. Another possible physical mechanism is that the poleward transport of tropical moisture along integrated water vapor bands depends on the ENSO phase, being highest in the neutral phase and lowest in the El Niño phase (Bao et al. 2006). Consistent with this is the reduction of Pacific tropical moisture exports to the Northern Hemisphere during El Niño years (Knippertz et al. 2013). We leave it to future investigation to explore this new and unexpected result. More importantly, it is clear from the analysis so far that, in the FIM, the zonal wind is key to predicting California subseasonal precipitation variability and that this linkage does not depend on ENSO. Moreover, in order to see a positive association between ENSO and precipitation, albeit weak, the zonal wind must be involved.
Acknowledgments
We thank Michael Bell (IRI) for patiently answering questions about the SubX reforecast data. We are also grateful to Arun Kumar and Mingyue Chen (NOAA/CPC) for their initial reviews and to three anonymous reviewers for their constructive comments.
Data availability statement
The SubX reforecast data are available at the International Research Institute (IRI) for Climate and Society data library: https://iridl.ldeo.columbia.edu/SOURCES/.Models/.SubX/. The 3-hourly averages of the NCEP North America Regional Reanalysis data are located at https://iridl.ldeo.columbia.edu/SOURCES/.NOAA/.NCEP/.EMC/.NARR/. Past NOAA/CPC seasonal outlooks are available here: https://www.cpc.ncep.noaa.gov/products/archives/long_lead/llarc.ind.php.
REFERENCES
Arcodia, M. C., B. P. Kirtman, and L. S. P. Siqueira, 2020: How MJO teleconnections and ENSO interference impacts U.S. precipitation. J. Climate, 33, 4621–4640, https://doi.org/10.1175/JCLI-D-19-0448.1.
Bamston, A. G., M. Chelliah, and S. B. Goldenberg, 1997: Documentation of a highly ENSO-related SST region in the equatorial Pacific. Atmos.–Ocean, 35, 367–383, https://doi.org/10.1080/07055900.1997.9649597.
Bao, J.-W., S. A. Michelson, P. J. Neiman, F. M. Ralph, and J. M. Wilczak, 2006: Interpretation of enhanced integrated water vapor bands associated with extratropical cyclones: Their formation and connection to tropical moisture. Mon. Wea. Rev., 134, 1063–1080, https://doi.org/10.1175/MWR3123.1.
Bayr, T., D. I. V. Domeisen, and C. Wengel, 2019: The effect of the equatorial Pacific cold SST bias on simulated ENSO teleconnections to the North Pacific and California. Climate Dyn., 53, 3771–3789, https://doi.org/10.1007/s00382-019-04746-9.
Becker, E. J., E. H. Berbery, and R. W. Higgins, 2009: Understanding the characteristics of daily precipitation over the United States using the North American regional reanalysis. J. Climate, 22, 6268–6286, https://doi.org/10.1175/2009JCLI2838.1.
Chang, Y., S. D. Schubert, R. D. Koster, A. M. Molod, and H. Wang, 2019: Tendency bias correction in coupled and uncoupled global climate models with a focus on impacts over North America. J. Climate, 32, 639–661, https://doi.org/10.1175/JCLI-D-18-0598.1.
Chen, M., and A. Kumar, 2018: Winter 2015/16 atmospheric and precipitation anomalies over North America: El Niño response and the role of noise. Mon. Wea. Rev., 146, 909–927, https://doi.org/10.1175/MWR-D-17-0116.1.
Chen, M., W. Shi, P. Xie, V. B. S. Silva, V. E. Kousky, R. Wayne Higgins, and J. E. Janowiak, 2008: Assessing objective techniques for gauge-based analyses of global daily precipitation. J. Geophys. Res., 113, D04110, https://doi.org/10.1029/2007JD009132.
CPC, 2015a: El Niño/Southern Oscillation (ENSO) diagnostics discussion. NOAA/Climate Prediction Center, accessed 26 December 2020, https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_disc_mar2015/ensodisc.html.
CPC, 2015b: Prognostic discussion for the long-lead seasonal outlooks. NOAA/Climate Prediction Center, accessed 26 December 2020, https://www.cpc.ncep.noaa.gov/products/archives/long_lead/PMD/2015/201504_PMD90D.
CPC, 2015c: El Niño/Southern Oscillation (ENSO) diagnostics discussion. NOAA/Climate Prediction Center, accessed 26 December 2020, https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_disc_aug2015/ensodisc.html.
CPC, 2015d: 3-month precipitation outlooks. NOAA/Climate Prediction Center, accessed 26 December 2020, https://www.cpc.ncep.noaa.gov/products/archives/long_lead/gifs/2015/201508prcp.gif.
DelSole, T., A. Kumar, and B. Jha, 2013: Potential seasonal predictability: Comparison between empirical and dynamical model estimates. Geophys. Res. Lett., 40, 3200–3206, https://doi.org/10.1002/grl.50581.
Deser, C., I. R. Simpson, A. S. Phillips, and K. A. McKinnon, 2018: How well do we know ENSO’s climate impacts over North America, and how do we evaluate models accordingly? J. Climate, 31, 4991–5014, https://doi.org/10.1175/JCLI-D-17-0783.1.
Dias, J., and G. N. Kiladis, 2019: The influence of tropical forecast errors on higher latitude predictions. Geophys. Res. Lett., 46, 4450–4459, https://doi.org/10.1029/2019GL082812.
Donlon, C. J., M. Martin, J. Stark, J. Roberts-Jones, E. Fiedler, and W. Wimmer, 2012: The Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) system. Remote Sens. Environ., 116, 140–158, https://doi.org/10.1016/j.rse.2010.10.017.
Ferrett, S., M. Collins, H.-L. Ren, B. Wu, and T. Zhou, 2020: The role of tropical mean-state biases in modeled winter Northern Hemisphere El Niño teleconnections. J. Climate, 33, 4751–4768, https://doi.org/10.1175/JCLI-D-19-0668.1.
Fish, M. A., A. M. Wilson, and F. M. Ralph, 2019: Atmospheric river families: Definition and associated synoptic conditions. J. Hydrometeor., 20, 2091–2108, https://doi.org/10.1175/JHM-D-18-0217.1.
Grams, C. M., L. Magnusson, and E. Madonna, 2018: An atmospheric dynamics perspective on the amplification and propagation of forecast error in numerical weather prediction models: A case study. Quart. J. Roy. Meteor. Soc., 144, 2577–2591, https://doi.org/10.1002/qj.3353.
Higgins, R. W., J.-K. E. Schemm, W. Shi, and A. Leetmaa, 2000: Extreme precipitation events in the western United States related to tropical forcing. J. Climate, 13, 793–820, https://doi.org/10.1175/1520-0442(2000)013<0793:EPEITW>2.0.CO;2.
Hoell, A., M. Hoerling, J. Eischeid, K. Wolter, R. Dole, J. Perlwitz, T. Xu, and L. Cheng, 2016: Does El Niño intensity matter for California precipitation? Geophys. Res. Lett., 43, 819–825, https://doi.org/10.1002/2015GL067102.
Hoerling, M. P., and A. Kumar, 2002: Atmospheric response patterns associated with tropical forcing. J. Climate, 15, 2184–2203, https://doi.org/10.1175/1520-0442(2002)015<2184:ARPAWT>2.0.CO;2.
Jones, C., 2000: Occurrence of extreme precipitation events in California and relationships with the Madden–Julian oscillation. J. Climate, 13, 3576–3587, https://doi.org/10.1175/1520-0442(2000)013<3576:OOEPEI>2.0.CO;2.
Jong, B.-T., M. Ting, and R. Seager, 2016: El Niño’s impact on California precipitation: Seasonality, regionality, and El Niño intensity. Environ. Res. Lett., 11, 054021, https://doi.org/10.1088/1748-9326/11/5/054021.
Jong, B.-T., M. Ting, R. Seager, N. Henderson, and D. E. Lee, 2018: Role of equatorial Pacific SST forecast error in the late winter California precipitation forecast for the 2015/16 El Niño. J. Climate, 31, 839–852, https://doi.org/10.1175/JCLI-D-17-0145.1.
Jung, T., M. J. Miller, and T. N. Palmer, 2010: Diagnosing the origin of extended-range forecast errors. Mon. Wea. Rev., 138, 2434–2446, https://doi.org/10.1175/2010MWR3255.1.
Knippertz, P., H. Wernli, and G. Gläser, 2013: A global climatology of tropical moisture exports. J. Climate, 26, 3031–3045, https://doi.org/10.1175/JCLI-D-12-00401.1.
Kretschmer, M., S. V. Adams, A. Arribas, R. Prudden, N. Robinson, E. Saggioro, and T. G. Shepherd, 2021: Quantifying causal pathways of teleconnections. Bull. Amer. Meteor. Soc., https://doi.org/10.1175/BAMS-D-20-0117.1, in press.
Kumar, A., and M. Chen, 2017: What is the variability in U.S. West Coast winter precipitation during strong El Niño events? Climate Dyn., 49, 2789–2802, https://doi.org/10.1007/s00382-016-3485-9.
Kumar, A., and M. Chen, 2020: Understanding skill of seasonal mean precipitation prediction over California during boreal winter and role of predictability limits. J. Climate, 33, 6141–6163, https://doi.org/10.1175/JCLI-D-19-0275.1.
Lee, H.-T., A. Gruber, R. G. Ellingson, and I. Laszlo, 2007: Development of the HIRS outgoing longwave radiation climate dataset. J. Atmos. Oceanic Technol., 24, 2029–2047, https://doi.org/10.1175/2007JTECHA989.1.
Lee, S.-K., H. Lopez, E.-S. Chung, P. DiNezio, S.-W. Yeh, and A. T. Wittenberg, 2018: On the fragile relationship between El Niño and California rainfall. Geophys. Res. Lett., 45, 907–915, https://doi.org/10.1002/2017GL076197.
L’Heureux, M. L., M. K. Tippett, and A. G. Barnston, 2015: Characterizing ENSO coupled variability and its impact on North American seasonal precipitation and temperature. J. Climate, 28, 4231–4245, https://doi.org/10.1175/JCLI-D-14-00508.1.
Mesinger, F., and Coauthors, 2006: North American Regional Reanalysis. Bull. Amer. Meteor. Soc., 87, 343–360, https://doi.org/10.1175/BAMS-87-3-343.
Mundhenk, B. D., E. A. Barnes, E. D. Maloney, and C. F. Baggett, 2018: Skillful empirical subseasonal prediction of landfalling atmospheric river activity using the Madden–Julian oscillation and quasi-biennial oscillation. Climate Atmos. Sci., 1, 20177, https://doi.org/10.1038/s41612-017-0008-2.
Pan, B., K. Hsu, A. AghaKouchak, S. Sorooshian, and W. Higgins, 2019: Precipitation prediction skill for the West Coast United States: From short to extended range. J. Climate, 32, 161–182, https://doi.org/10.1175/JCLI-D-18-0355.1.
Pearl, J., 2009: Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, 464 pp.
Pegion, K., and Coauthors, 2019: The Subseasonal Experiment (SubX): A multimodel subseasonal prediction experiment. Bull. Amer. Meteor. Soc., 100, 2043–2060, https://doi.org/10.1175/BAMS-D-18-0270.1.
Ralph, F. M., M. D. Dettinger, M. M. Cairns, T. J. Galarneau, and J. Eylander, 2018: Defining “atmospheric river”: How the Glossary of Meteorology helped resolve a debate. Bull. Amer. Meteor. Soc., 99, 837–839, https://doi.org/10.1175/BAMS-D-17-0157.1.
Reynolds, R. W., T. M. Smith, C. Liu, D. B. Chelton, K. S. Casey, and M. G. Schlax, 2007: Daily high-resolution-blended analyses for sea surface temperature. J. Climate, 20, 5473–5496, https://doi.org/10.1175/2007JCLI1824.1.
Rodwell, M. J., D. S. Richardson, D. B. Parsons, and H. Wernli, 2018: Flow-dependent reliability: A path to more skillful ensemble forecasts. Bull. Amer. Meteor. Soc., 99, 1015–1026, https://doi.org/10.1175/BAMS-D-17-0027.1.
Roxy, M. K., K. Ritika, P. Terray, R. Murtugudde, K. Ashok, and B. N. Goswami, 2015: Drying of Indian subcontinent by rapid Indian Ocean warming and a weakening land–sea thermal gradient. Nat. Commun., 6, 7423, https://doi.org/10.1038/ncomms8423.
Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.
Sánchez, C., J. Methven, S. Gray, and M. Cullen, 2020: Linking rapid forecast error growth to diabatic processes. Quart. J. Roy. Meteor. Soc., 146, 3548–3569, https://doi.org/10.1002/qj.3861.
Schonher, T., and S. E. Nicholson, 1989: The relationship between California rainfall and ENSO events. J. Climate, 2, 1258–1269, https://doi.org/10.1175/1520-0442(1989)002<1258:TRBCRA>2.0.CO;2.
Seager, R., M. Hoerling, S. Schubert, H. Wang, B. Lyon, A. Kumar, J. Nakamura, and N. Henderson, 2015: Causes of the 2011–14 California drought. J. Climate, 28, 6997–7024, https://doi.org/10.1175/JCLI-D-14-00860.1.
Siler, N., Y. Kosaka, S.-P. Xie, and X. Li, 2017: Tropical ocean contributions to California’s surprisingly dry El Niño of 2015/16. J. Climate, 30, 10 067–10 079, https://doi.org/10.1175/JCLI-D-17-0177.1.
Singh, D., M. Ting, A. A. Scaife, and N. Martin, 2018: California winter precipitation predictability: Insights from the anomalous 2015–2016 and 2016–2017 seasons. Geophys. Res. Lett., 45, 9972–9980, https://doi.org/10.1029/2018GL078844.
Sun, S., R. Bleck, S. G. Benjamin, B. W. Green, and G. A. Grell, 2018: Subseasonal forecasting with an icosahedral, vertically quasi-Lagrangian coupled model. Part I: Model overview and evaluation of systematic errors. Mon. Wea. Rev., 146, 1601–1617, https://doi.org/10.1175/MWR-D-18-0006.1.
Swenson, E. T., D. M. Straus, C. E. Snide, and A. al Fahad, 2019: The role of tropical heating and internal variability in the California response to the 2015/16 ENSO event. J. Atmos. Sci., 76, 3115–3128, https://doi.org/10.1175/JAS-D-19-0064.1.
Tippett, M. K., M. Almazroui, and I.-S. Kang, 2015: Extended-range forecasts of areal-averaged rainfall over Saudi Arabia. Wea. Forecasting, 30, 1090–1105, https://doi.org/10.1175/WAF-D-15-0011.1.
Vigaud, N., M. K. Tippett, and A. W. Robertson, 2019: Deterministic skill of subseasonal precipitation forecasts for the East Africa–West Asia sector from September to May. J. Geophys. Res. Atmos., 124, 11 887–11 896, https://doi.org/10.1029/2019JD030747.
Vitart, F., 2014: Evolution of ECMWF sub-seasonal forecast skill scores. Quart. J. Roy. Meteor. Soc., 140, 1889–1899, https://doi.org/10.1002/qj.2256.
Wahl, E. R., H. F. Diaz, R. S. Vose, and W. S. Gross, 2017: Multicentury evaluation of recovery from strong precipitation deficits in California. J. Climate, 30, 6053–6063, https://doi.org/10.1175/JCLI-D-16-0423.1.
Wang, S., A. Anichowski, M. K. Tippett, and A. H. Sobel, 2017: Seasonal noise versus subseasonal signal: Forecasts of California precipitation during the unusual winters of 2015–2016 and 2016–2017. Geophys. Res. Lett., 44, 9513–9520, https://doi.org/10.1002/2017GL075052.
Wang, W., M.-P. Hung, S. J. Weaver, A. Kumar, and X. Fu, 2014: MJO prediction in the NCEP Climate Forecast System version 2. Climate Dyn., 42, 2509–2520, https://doi.org/10.1007/s00382-013-1806-9.
Weisheimer, A., D. J. Befort, D. MacLeod, T. Palmer, C. O’Reilly, and K. Strømmen, 2020: Seasonal forecasts of the twentieth century. Bull. Amer. Meteor. Soc., 101, E1413–E1426, https://doi.org/10.1175/BAMS-D-19-0019.1.
Zhu, Y., and Coauthors, 2018: Toward the improvement of subseasonal prediction in the National Centers for Environmental Prediction global ensemble forecast system. J. Geophys. Res. Atmos., 123, 6732–6745, https://doi.org/10.1029/2018JD028506.