1. Introduction
Successful seasonal forecasts have great economic and societal benefits (Meinke and Stone 2005; Lemos and Dilling 2007; Kumar 2010). Despite the unpredictable variability produced by internal atmospheric dynamics, the response to tropical sea surface temperature (SST) anomalies, especially those related to the El Niño–Southern Oscillation (ENSO) phenomenon, has long been considered a major source of seasonal predictability for different parts of the world (Palmer and Anderson 1994; Barnett et al. 1994; Livezey et al. 1996). Based on this concept of boundary forcing, seasonal prediction systems have been developed using atmosphere-only models (Rowell 1998; Kumar and Hoerling 2000; Kumar et al. 2001; Barnston et al. 2005), as well as initialized coupled models, which include two-way air–sea interaction and predict the future state of the sea surface (Stockdale et al. 1998; Chakraborty and Krishnamurti 2006; Peng et al. 2011; Gleixner et al. 2017).
By integrating a general circulation model (GCM) with varied initial conditions but identical boundary forcing, Shukla (1981) suggested that the prediction skill of the seasonal mean is determined by the evolution of low-frequency planetary waves. Thus, the ENSO-induced predictability inevitably relies on the models’ ability to simulate the large-scale atmospheric response to tropical SST anomalies. In the presence of an El Niño (La Niña) event, one of the most dominant features in the Northern Hemisphere circulation response is a deepened (weakened) low pressure center over the North Pacific (Bjerknes 1966, 1969; Namias 1976), which can be explained by the propagation of Rossby waves from the tropics (Hoskins and Karoly 1981; Webster 1981; Horel and Wallace 1981). Through this pathway, ENSO’s effects can be transmitted to the North Pacific and North America (Ropelewski and Halpert 1986; Papineau 2001; Schubert et al. 2008; Johnson et al. 2014). This ENSO–North Pacific relationship is considered a fundamental process that should be reasonably reproduced in models, despite the diversity of model configurations, resolutions, numerical methods, and parameterization schemes (Held and Kang 1987; Stoner et al. 2009; Hurwitz et al. 2014; Deser et al. 2017).
Most previous studies have focused on the peak season of ENSO [December–February (DJF)] when the amplitudes of SST anomalies and teleconnections are largest (Gershunov and Barnett 1998; Yang and DelSole 2012; Bellenger et al. 2014), with a few other studies having also looked at the other seasons before or after the peak (Alexander et al. 2002; Spencer and Slingo 2003; Bladé et al. 2008; Lee et al. 2014; Jong et al. 2016). According to Kumar and Hoerling (1998), the potential for seasonal predictability over North America is stronger during the late winter and spring season, as the SST-forced signals remain strong enough but the background noise is substantially reduced. By looking at the relationship between California precipitation and ENSO events, Jong et al. (2016) found that the influence of El Niño on California precipitation becomes stronger when the SST anomalies weaken in the spring, and suggested that this can be attributed to a warming climatological mean sea surface temperature, which is favorable for deep convection. Some other studies have attributed extreme events over North America during the spring season to the prolonged influence of ENSO (Wolter et al. 1999; Bates et al. 2001; Schmidt et al. 2001). Therefore, the evaluation of the models’ performance in simulating the springtime ENSO teleconnection is of great practical value but has received relatively little attention.
Two notable prior studies that examined the model fidelity of the springtime ENSO teleconnection are those of Alexander et al. (2002) and Spencer and Slingo (2003). They each pointed out that, compared to observations, the modeled sea level pressure (SLP) anomalies over the North Pacific remain too strong during the springtime after the peak of ENSO events. Alexander et al.’s (2002) conclusions were drawn from an ensemble of experiments with an atmospheric GCM coupled to a one-dimensional entraining ocean mixed layer model beneath each atmospheric grid cell, with observed SSTs specified in the eastern equatorial Pacific. They speculated that the model’s bias toward a stronger North Pacific SLP response during spring was likely caused by the lack of ocean dynamics in their model setup, although sampling uncertainty due to internal variability may have also contributed. Spencer and Slingo (2003), on the other hand, used an ensemble of HadAM3 atmospheric GCM simulations forced by observed SSTs. They also found that the ENSO anomalies in the simulated North Pacific SLP field persisted too strongly into March–May (MAM), and argued that this could be due to the model’s limited ability to simulate the tropical precipitation accurately, even in the presence of the imposed observed SSTs. In particular, the model was found to overestimate the spring precipitation over the warmest tropical SSTs, leading to a stronger extratropical wave train response. The lack of two-way coupling and deficiencies in the convective scheme were suggested as possible reasons behind the bias in simulated precipitation, and the bias was not found to be reduced significantly by increasing the vertical and horizontal resolutions.
The aim of the present study is to evaluate the springtime ENSO teleconnection in current generations of Earth system models (ESMs) and assess the extent to which the overly strong springtime teleconnection bias discussed in the aforementioned studies, almost two decades ago, remains ubiquitous among models and model configurations. In addition, we discuss the implications of this bias for the simulation of ENSO-induced surface climate anomalies over North America. More specifically, our aims are 1) to accurately place the magnitude of the bias within the context of the internal variability that is present in both the observational record and model simulations; 2) to examine, within one model, the dependence of the bias on experimental design (e.g., prescribed SSTs vs coupled ocean); 3) to assess the fidelity of springtime ENSO teleconnections across a large suite of state-of-the-art global ESMs; and 4) to assess the implications of the bias for the simulated surface climate response to ENSO over North America. To achieve these aims, we make use of nine sets of simulations from the Community Earth System Model (CESM), preindustrial (piControl) simulations from the Coupled Model Intercomparison Project phases 5 and 6 (CMIP5 and CMIP6), and Atmospheric Model Intercomparison Project (AMIP) simulations from CMIP6.
The rest of this study is organized as follows. The data and methods used are described in section 2. The assessments of the suite of CESM configurations and other model simulations (CMIP5, CMIP6, and ERA20CM) are presented in section 3. The implications for the modeled surface temperature and precipitation anomalies over North America are addressed in section 4. In section 5, we briefly discuss the possible origin of the bias before a summary and conclusions are provided in section 6.
2. Data and methods
a. Observation-based products
Our primary period of analysis is 1920 to 2010. The SST dataset used to identify the ENSO events is the Extended Reconstructed Sea Surface Temperature version 3b (ERSSTv3b; Smith et al. 2008) from the National Oceanic and Atmospheric Administration (NOAA), which provides monthly analyses from 1854 at 2° × 2° horizontal resolution. Three SLP datasets are used for evaluating the ENSO-related circulation anomalies: the European Centre for Medium-Range Weather Forecasts (ECMWF) twentieth-century reanalysis (ERA20C; Poli et al. 2016), the NOAA Twentieth Century Reanalysis version 2 (20CRv2; Compo et al. 2011), and the Met Office Hadley Centre’s mean sea level pressure dataset (HadSLP2r; Allan and Ansell 2006). Note that the interpolation procedure applied in HadSLP2r could introduce uncertainties over regions with sparse observations. ERA20C and 20CRv2 each assimilate surface pressure observations, while ERA20C additionally assimilates marine surface winds and has a higher resolution. We primarily use ERA20C, but the same qualitative conclusions can be drawn from the other two datasets. The ERA20C-based horizontal winds and specific humidity on pressure levels are also used. Our primary near-surface air temperature dataset is the monthly Berkeley Earth Surface Temperature (BEST; Rohde et al. 2013) dataset, which provides a 1° × 1° gridded temperature analysis created from a large sample of in situ thermometer measurements. For precipitation data over land, we use the Global Precipitation Climatology Centre (GPCC) Full Data Product version 7 (Schneider et al. 2014) at 1° × 1° horizontal resolution, which is derived from a combination of rain gauge–based analyses and remote sensing. The monthly global precipitation products of ERA20C and the Global Precipitation Climatology Project (GPCP) version 2 (Adler et al. 2003) are also used. The GPCP precipitation product is available on a 2.5° × 2.5° grid from 1979 to the present and is based on rain gauge stations, satellites, and sounding observations.
b. Model simulations
1) CESM simulations
We make use of a wide array of atmosphere-only and fully coupled simulations with the Community Earth System Model (Hurrell et al. 2013). These simulations are summarized in Table 1 and are available for download from http://www.cesm.ucar.edu/experiments/. For the atmosphere-only simulations, we include three sets of 10-member Tropical Ocean–Global Atmosphere simulations (TOGA; Lau and Nath 1994; Trenberth et al. 1998; Deser et al. 2018) with CESM version 1 (CESM1) that only differ in the SST dataset used: ERSSTv3b (Smith et al. 2008), ERSSTv4 (Huang et al. 2015), and ERSSTv5 (Huang et al. 2017) SSTs, henceforth referred to as TOGA-ERSSTv3b, TOGA-ERSSTv4, and TOGA-ERSSTv5, respectively. By prescribing observed SSTs within the tropical belt (from 28°S to 28°N) and using the climatological seasonal cycle elsewhere, the TOGA configuration focuses on the model’s response to the observed historical evolution of tropical SST anomalies. We also use a 10-member ensemble of Global Ocean–Global Atmosphere (GOGA) simulations with CESM1 with observed time-varying SSTs from ERSSTv4 imposed globally (GOGA-ERSSTv4). In addition to the CESM1 simulations, we assess two three-member GOGA ensembles with the newly developed CESM version 2 (CESM2) (Danabasoglu et al. 2020). One of them employs the low-top Community Atmosphere Model (CESM2-CAM6) (Bogenschutz et al. 2018) and the other employs the high-top Whole Atmosphere Community Climate Model (CESM2-WACCM6) (Gettelman et al. 2019). Note that these two CESM2 simulations cover a shorter period from 1950 to 2010 (Table 1).
Length of the simulations and number of El Niño (EN) and La Niña (LN) events identified in the CMIP5 and CMIP6 piControl simulations (with CESM-related models marked by asterisks), the nine sets of CESM simulations, and the ERA20CM ensemble from ECMWF, as introduced in section 2. The CMIP6 models whose AMIP simulations are also included are marked by boldface font.
For the fully coupled configurations, we use a 20-member ensemble of the tropical Pacific pacemaker simulations performed with CESM1 (PACEMAKER; Kosaka and Xie 2013; Deser et al. 2017). In the PACEMAKER simulations, the observed evolution of the ENSO events is prescribed by nudging the eastern equatorial Pacific (20°S–20°N, 180°–80°W) SST anomalies toward observations. We also make use of a 40-member ensemble of fully coupled CESM1 historical simulations (LENS-his) and a 1801-yr-long piControl (LENS-pi) simulation conducted as part of the CESM1 Large Ensemble (LENS) project (Kay et al. 2015). With the exception of LENS-pi, all of the above simulations are forced with historical atmospheric forcings (Hurrell et al. 2013).
2) Other model simulations (CMIP5, CMIP6, and ERA20CM)
We also assess the piControl simulations from 43 CMIP5 and 20 CMIP6 models (listed in Table 1). The piControl simulations typically offer a larger sample of ENSO events than historical simulations, which is helpful for accurately diagnosing the ENSO-forced response in the presence of internal variability. To assess the behavior of the CMIP models with prescribed observed SSTs, we also use the AMIP simulations from selected CMIP6 models. To ensure a sufficient sample size, only models that provide more than three AMIP members covering 1979 (or 1950) to 2010, or at least one member covering the whole 1920–2010 period, are included, such that the AMIP composites contain at least as many events as observations. This leaves nine CMIP6 models with AMIP simulations (boldface font in Table 1), in addition to the CESM2-CAM6 and CESM2-WACCM6 simulations described above. We also make use of a 10-member ensemble of atmospheric model integrations known as ERA20CM (Hersbach et al. 2015), available from ECMWF. It uses prescribed historical SSTs and sea ice cover from HadISST2 (Titchner and Rayner 2014), with no assimilation of atmospheric observations.
c. Methods
1) The definition of ENSO events
In this study, the El Niño (EN) and La Niña (LN) events are identified by first calculating the Niño-3.4 index [defined by Barnston et al. (1997) as the area-averaged SST anomalies over 5°S–5°N, 120°–170°W] for each month during 1920–2010. A three-point binomial filter is used to smooth the indices before the DJF mean is calculated and linear detrending is subsequently performed. The EN (LN) events are defined as winters when the detrended DJF Niño-3.4 index is higher (lower) than plus (minus) one standard deviation (Okumura and Deser 2010; Deser et al. 2017). We identified 16 EN events and 14 LN events based on ERSSTv3b as the observed ENSO events (Table 2). We use the same events in the simulations with prescribed historical SSTs and in the PACEMAKER simulations. Since the CESM2 simulations begin in 1950, only events after 1950 are included for those simulations. A summary of the number of events used for each model or ensemble is given in Table 1.
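For concreteness, the event-identification steps described above can be sketched in a few lines of Python. This is a minimal illustration only; the function name identify_enso_events, its argument layout, and the choice of labeling each winter by its December year are assumptions of the sketch rather than part of the original analysis code.

```python
import numpy as np

def identify_enso_events(nino34_monthly, years, n_std=1.0):
    """Identify EN/LN winters from monthly Nino-3.4 SST anomalies.

    nino34_monthly : array of shape (n_years, 12), monthly Nino-3.4
                     anomalies relative to the monthly climatology,
                     with row y covering Jan-Dec of years[y].
    years          : array of calendar years (length n_years).
    Returns the EN and LN years, each winter labeled by its December year.
    """
    years = np.asarray(years)

    # Three-point binomial (1-2-1) smoothing of the monthly index.
    flat = np.asarray(nino34_monthly).reshape(-1)
    smoothed = np.convolve(flat, [0.25, 0.5, 0.25], mode="same").reshape(-1, 12)

    # DJF mean: December of year y with January and February of year y+1.
    djf = (smoothed[:-1, 11] + smoothed[1:, 0] + smoothed[1:, 1]) / 3.0
    djf_years = years[:-1]

    # Linear detrend of the DJF series.
    djf = djf - np.polyval(np.polyfit(djf_years, djf, 1), djf_years)

    # Threshold at plus/minus one standard deviation.
    sigma = djf.std()
    return djf_years[djf > n_std * sigma], djf_years[djf < -n_std * sigma]
```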
The El Niño (EN) and La Niña (LN) years identified in observations based on ERSSTv3b during 1920–2010. Extremely strong events are marked by boldface font.
The composites of anomalies during the EN (LN) years are obtained by first removing, at each grid point, the monthly climatological annual cycle (computed over 1920–2010 for the observational record and over the total simulation length used for each model; Table 1), and then compositing based on the identified EN (LN) winters. For the coupled models and AMIP simulations shorter than 1920–2010, to take into account any differences from the observed ENSO amplitude, the composite anomalies are further normalized by the ratio of the composite-mean DJF EN or LN Niño-3.4 anomaly in observations (ERSSTv3b, 1920–2010) to that in the model over its available simulation period.
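The compositing and amplitude-normalization procedure can be illustrated with the following sketch, assuming the fields are held as xarray DataArrays with a monthly time coordinate. The helper enso_composite, its arguments, and the way event months are passed in are illustrative choices, not the actual analysis code.

```python
import xarray as xr

def enso_composite(field, event_times, clim_period=slice("1920", "2010"),
                   scale=1.0):
    """Composite monthly anomalies of `field` over a set of event months.

    field       : xr.DataArray with a monthly datetime `time` coordinate.
    event_times : timestamps of the months being composited (e.g., all
                  Feb-Mar months following the identified EN winters).
    scale       : ratio of the observed (ERSSTv3b, 1920-2010) to modeled
                  composite-mean DJF Nino-3.4 anomaly, used to normalize
                  runs whose ENSO amplitude differs from observations.
    """
    # Remove the annual cycle: monthly climatology over the chosen period.
    clim = field.sel(time=clim_period).groupby("time.month").mean("time")
    anom = field.groupby("time.month") - clim

    # Average the anomalies over the requested event months and rescale.
    return scale * anom.sel(time=event_times).mean("time")
```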
2) Statistical testing
To evaluate the statistical significance of the difference between the modeled and observed ENSO response in the presence of sampling uncertainty, a random sampling technique is employed following Deser et al. (2017, 2018). For observations, we randomly sample with replacement (bootstrap) 16 EN and 14 LN events from the observed events. This is repeated 1000 times to form 1000 resampled ENSO composites for observations. The same procedure is applied to the modeled ENSO events, keeping the sample size of the EN and LN events the same as in observations (16 EN and 14 LN). Note that for the model simulations with more than one member, the EN (LN) events of each member are put into a single large “sampling pool” before bootstrapping.
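A minimal sketch of this bootstrap is given below; the helper bootstrap_npi_composites and the idea of feeding it the seasonal-mean NPI values of the pooled events are assumptions of the illustration, and the trailing comment shows how the percentile ranges used later for the box-and-whisker comparisons would be extracted.

```python
import numpy as np

def bootstrap_npi_composites(event_npi, n_events, n_boot=1000, seed=0):
    """Bootstrap composite NPI values following Deser et al. (2017, 2018).

    event_npi : 1-D array holding the seasonal-mean NPI anomaly of every
                EN (or LN) event in the pool; for multimember ensembles
                the events of all members are concatenated into one pool.
    n_events  : number of events drawn per resample (16 for EN, 14 for LN,
                matching the observed record).
    Returns an array of n_boot resampled composite means.
    """
    rng = np.random.default_rng(seed)
    # Resample events with replacement and average each draw.
    draws = rng.choice(event_npi, size=(n_boot, n_events), replace=True)
    return draws.mean(axis=1)

# Example usage (hypothetical pooled EN events):
# boots = bootstrap_npi_composites(npi_en_pool, n_events=16)
# p5, p25, p75, p95 = np.percentile(boots, [5, 25, 75, 95])
```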
3. Evaluation of the modeled springtime North Pacific ENSO teleconnection
a. CESM simulations
To depict the North Pacific circulation related to ENSO, we define a North Pacific index (NPI) after Trenberth and Hurrell (1994) and Deser et al. (2017) as the area-averaged SLP anomalies over 35°–60°N, 165°E–145°W. The seasonal evolution of the observed and modeled NPI is illustrated in Fig. 1 for EN–LN, and for EN and LN separately. Here, we include the results from the ERA20C, HadSLP2r, and 20CRv2 reanalyses, as well as all nine CESM configurations. The response in HadSLP2r is weaker than that in ERA20C and 20CRv2, although the character of its seasonal evolution is similar: the intensification of the observed anomalies occurs in November and peaks in January, before decaying rapidly over the next 2 months (February and March). The peak value in the models is about 50% larger than observed, and occurs in February, not January (i.e., one month delayed compared to observations). When considering the timing of the peak SLP anomaly during individual events, it is only for EN events that there is a consistent difference between models and observations (Fig. S1 in the online supplemental material). For LN events, there is no consistent difference between the models and observations in the probability of the timing of the peak SLP anomaly (Fig. S2), but this can be reconciled with the differences in Fig. 1c by the fact that, in the model, the NPI peak value is on average larger for February-peak events than for January-peak events, whereas the opposite is true in observations (not shown). The discrepancy shown in Fig. 1 appears from February to March in the EN–LN and EN composites, and from February to May in the LN composite. In either EN or LN, that discrepancy indicates stronger North Pacific circulation anomalies in the models compared to those in observations. Note that in the LN composite, the difference between the models and observations is much smaller during April–May compared to that during February–March (FM). Although the peak magnitude of the NPI is larger for the EN composite, the bias in the EN–LN composite is dominated by that in the LN composite. Given that prior studies have suggested that the North Pacific circulation response may not scale linearly with ENSO amplitude (Frauen et al. 2014; Garfinkel et al. 2019; Jiménez-Esteve and Domeisen 2019), we further check whether this springtime bias is heavily dependent on the inclusion of the strongest ENSO events of the record in the composite. The results are shown in Fig. S3, where it can be seen that the bias remains when the extreme events of 1982/83 and 1997/98 are excluded. In fact, if anything, the bias does not seem to be present in the extreme events, but the limited sample size in observations inhibits our ability to draw any strong conclusions in this regard. This bias is, therefore, a characteristic feature of ENSO in general and is not dominated by the extreme events, and we proceed to consider all ENSO events within the time series together.
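For reference, the NPI used throughout this section is simply an area-weighted average of the SLP anomalies over the box defined above. A minimal xarray sketch is shown below; the helper name north_pacific_index and the assumption of an ascending latitude coordinate on a 0°–360° longitude grid are ours.

```python
import numpy as np

def north_pacific_index(slp_anom):
    """Area-weighted mean SLP anomaly over 35-60N, 165E-145W
    (Trenberth and Hurrell 1994).

    slp_anom : xr.DataArray of SLP anomalies with ascending `lat` and
               0-360 degree `lon` coordinates (145W corresponds to 215E).
    """
    box = slp_anom.sel(lat=slice(35, 60), lon=slice(165, 215))
    weights = np.cos(np.deg2rad(box.lat))  # account for converging meridians
    return box.weighted(weights).mean(("lat", "lon"))
```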
To test the significance of the difference between the simulated and observed EN–LN NPI, the random sampling method of Deser et al. (2017) (see section 2) is applied to generate the box plots in Fig. 2. As depicted in Fig. 2c, the bottom and top of each box represent the 25th and 75th percentiles of the sorted bootstrapped NPI composites, respectively, and the middle line is the average of the 1000 bootstrapped NPI composites. The 5th and 95th percentiles are marked by the whiskers. For model simulations with more than one member, the composite anomaly of each individual member is marked by a blue circle. For the reanalysis and the simulation with only one member (LENS-pi), the blue circle represents the value of the composited NPI for that one time series. The height of the box and whiskers can be interpreted as the range of the uncertainty due to internal variability for a sample size that is equivalent to that of observations. The difference between the 75th and 25th percentiles of the bootstrapped NPI composites varies from 2.47 hPa in DJF to 1.38 hPa in MAM for ERA20C, and from 2.68 hPa in DJF to 1.71 hPa in MAM for the average of the nine CESM simulations. Thus, both in observations and model simulations, the internal variability during the winter season is generally larger than that during the spring, which is consistent with previous studies (Kumar and Hoerling 1998). During February and March, the interquartile ranges of the observational and model samples do not overlap, and for the majority of model simulations the 95% confidence intervals do not overlap either, which indicates a highly significant difference between the model and observations during these months, in the sense that the amplitude of the ENSO teleconnection is larger in the model. Quantitatively, where the 95% confidence interval of the bootstrapped samples of the model does not overlap with the observed value, which is true in every case in February and March, there is less than a 5% chance that the observed value would be sampled from the model distribution and, therefore, less than a 5% chance that the model is behaving like the observations.
Having found the bias in the NPI to be largest in FM, we show a more detailed view of the spatial distribution of the bias in this season in Fig. 3. The observed (based on ERA20C) and simulated (TOGA-ERSSTv3b and PACEMAKER) SLP anomalies over the North Pacific during FM of the EN and LN events, and their difference (EN–LN), are shown by the contours in Fig. 3, whereas for the CESM simulations the difference in the composite anomaly from observed is depicted by shading. The corresponding composite results for the other CESM simulations are similar to the ones shown here (Fig. S4). The box area used for defining the NPI is roughly centered on the EN–LN composite anomalies of the nine CESM simulations throughout the DJ–FM seasons, and covers the maximum SLP anomaly difference from observations in FM (Fig. 3; see also Figs. S4 and S5). Note that the observed center of action (150°W) is about 20° east of the center of the NPI box [170°W, the same as that in Trenberth and Hurrell (1994)], but moving the NPI box 20° to the east (so that it is instead centered on the observed anomaly) does not qualitatively change the results here. The exception is that for the composites of the LENS-his and LENS-pi simulations, the relatively weak NPI bias during EN events (Fig. 1) mainly reflects a shift of the anomaly center away from our NPI box. Overall, the conclusions are not strongly dependent on the averaging region used, and the bias is clear from the density of contours in Fig. 3.
In observations, the negative values during EN years over the North Pacific dominate the EN–LN composite difference (cf. Figs. 3d,g). This asymmetric feature in the amplitude of the EN and LN anomalies can either be attributed to a true asymmetry in the nature of the ENSO response or to internal variability–induced uncertainty associated with the relatively small sample size in observations (Zhang et al. 2014; Deser et al. 2017). By examining the distribution of 1000 bootstrapped NPI composites for the EN and LN (inverted) from observations, the significance of this asymmetry can be tested. As shown in Fig. 4b, the 10th percentile of the bootstrapped inverted LN composite is greater than the 90th percentile of the EN composite (i.e., this is a strong indication that there is a real asymmetry between EN and LN responses in this season). By excluding the extreme EN events of 1982/83 and 1997/98, this asymmetric feature becomes less significant (Fig. 4d), suggesting that the contribution from extreme EN events in giving an overall stronger North Pacific response in EN events is important. As shown in Fig. 4a, a similar asymmetric response over the North Pacific can also be seen in DJ, but, as concluded in Deser et al. (2017) for the DJF season, it does not pass significance tests. The modeled responses in FM are also generally stronger in EN years than in LN years (although less dramatic than observed) in most of the CESM simulations (Fig. 3; see also Fig. S4). Given the reduced importance of internal variability in the model composites due to the larger sample size, this lends further support to the asymmetric nature of the EN and LN responses in this season. Despite the asymmetric absolute response, the consistency of both sets of CESM simulations in Fig. 3 in giving a stronger response over the North Pacific than observations in FM during both EN and LN is obvious.
b. Simulations with other models
Figures 5a and 5c show the composite of the FM SLP anomalies (EN–LN) over the North Pacific area for the 43 CMIP5 models and 20 CMIP6 models from the coupled piControl simulations, and Fig. 5e shows the same but for the CMIP6 AMIP simulations. Similar to CESM (Fig. 3; see also Fig. S4), the multimodel mean composites of both CMIP5 and CMIP6 show a stronger ENSO response than observations, and the discrepancy is largest in FM. In addition, the modeled response is centered farther west than that in observations, much like CESM. The seasonal evolution of the NPI (Figs. 5b,d,f) in the multimodel mean (blue lines) of the CMIP5 and CMIP6 simulations also shows that the modeled NPI tends to peak one month (two months in CMIP6’s piControl simulations) later than observations. As was also found in CESM, the bias in the CMIP5/CMIP6 models is dominated by the bias during La Niña events, with the simulated springtime teleconnection during El Niño events being closer to observed (Fig. S6). However, it is not appropriate to conclude that the models are better at simulating the NPI during EN events given that the modeled DJ NPI in EN events is generally weaker than observed. It is speculated that if the modeled NPI during DJ of the EN events were closer to the observational value, the CMIP models might exhibit stronger North Pacific circulation anomalies in the FM season as well. This also indicates that the seasonal average commonly used for model evaluation (i.e., DJF) may average out compensating errors on subseasonal time scales. Similar to what was found in CESM (Fig. S3b), the discrepancy in EN events becomes larger in the CMIP models after excluding the extreme events (not shown). To summarize, the biases in the distribution and evolution of North Pacific SLP anomalies during ENSO events in the CMIP models are consistent with those revealed by the CESM simulations in the previous section.
The difference between modeled and observed (MOD-OBS) NPI for the EN–LN composite during FM is summarized for the individual models in Fig. 6. Here, the 5th–95th (25th–75th) confidence intervals in the modeled and observed NPI anomalies are generated using the same random sampling technique as in Fig. 2. The models marked by red plus signs are those with composite NPI anomalies lying outside of the 5th–95th percentile range of the bootstrapped observational composites—that is, with less than a 5% chance that the modeled NPIs could be sampled from the observed distribution given the observational uncertainty. To test based on the uncertainty range in the models, a blue plus sign is used to indicate when the observed value lies outside of the 5th–95th confidence interval of a given model. These two assessments of significance will only differ if the model’s sampling uncertainty differs from that in observations. It is clear that the majority of the CMIP models tend to give a stronger rather than weaker ENSO response over the North Pacific area compared to observations. Specifically, only three CMIP5 models and one CMIP6 model are associated with a positive (although not significant) bias in the NPI. After taking into account the sampling uncertainty, the composite FM NPI from 23 out of 43 CMIP5 models and 11 out of 20 CMIP6 models’ piControl simulations is significantly (exceeding the 95% confidence level against the uncertainties in both observations and models) more negative than observed, further emphasizing that the bias identified in CESM is present in many other models as well.
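The two complementary significance checks used to assign the red and blue plus signs can be summarized in the short sketch below; the helper significance_flags and its inputs (bootstrapped composite distributions plus the full-sample composite values) are illustrative, not the actual analysis code.

```python
import numpy as np

def significance_flags(model_boots, obs_boots, model_value, obs_value):
    """Two complementary significance checks for a model's FM NPI bias.

    model_boots, obs_boots : bootstrapped composite NPI distributions
                             (~1000 values each).
    model_value, obs_value : full-sample composite NPI of the model and
                             of the reanalysis, respectively.
    """
    obs_lo, obs_hi = np.percentile(obs_boots, [5, 95])
    mod_lo, mod_hi = np.percentile(model_boots, [5, 95])

    # "Red plus": model composite falls outside the observed 5th-95th range.
    outside_obs_range = (model_value < obs_lo) or (model_value > obs_hi)
    # "Blue plus": observed composite falls outside the model 5th-95th range.
    outside_model_range = (obs_value < mod_lo) or (obs_value > mod_hi)
    return outside_obs_range, outside_model_range
```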
We recognize that the bias in coupled runs could arise from a bias in the response to the ENSO SST anomalies, a bias in the representation of the SST anomalies themselves, or both. For example, modeled ENSO SST anomalies are well known to extend too far west (Kiehl and Gent 2004; Li and Xie 2014). Despite the possible contribution of biases in the simulated ENSO SSTs to these coupled model composites, similar biases exist when observed SSTs are prescribed in these models (Figs. 5 and 6), indicating that the primary issue lies in how the atmosphere responds to ENSO SST anomalies. Recall that to partially eliminate the potential influence from differences in the amplitude of the ENSO events, the NPI from the coupled simulations has been scaled by the ratio of the composite mean Niño-3.4 anomalies in ERSSTv3b to that in the model. The corresponding ratio of the DJF mean Niño-3.4 index in each model to that in observations is marked by a green dot in Fig. 6. This shows that there is no clear relationship between a model’s NPI bias and its ENSO amplitude. The conclusions here are not sensitive to the season used for scaling: replacing the DJF Niño-3.4 indices with the FM or FMA index gives qualitatively the same results (not shown). It is clear that the NPI bias in FM is not simply a persistence of a bias from early winter (DJ), as shown by the red dots in Fig. 6. There is a suggestion that the models that have less of an FM bias actually have a too-weak teleconnection during DJ (also see Fig. S6). Therefore, the FM bias might be even more ubiquitous across the models if those models had a more accurate early winter teleconnection.
The influence of coupling can be analyzed using the CESM multi-configuration experiments and the piControl/AMIP simulations from CMIP6. In CESM, the biases in the coupled LENS-his and LENS-pi simulations are significantly smaller (the 25th–75th confidence intervals do not overlap) than those in the TOGA-ERSSTv3b and TOGA-ERSSTv4 simulations (Fig. 6b). However, no significant improvement is found in the PACEMAKER simulation compared to the other atmosphere-only simulations. Like LENS-his and LENS-pi, PACEMAKER is a coupled simulation, but it has more realistic ENSO SST anomalies. Therefore, the apparent improvement shown in the LENS-his and LENS-pi simulations is unlikely to be due to the two-way coupling between atmosphere and ocean. A possibility is that the differences in the ENSO simulation in the coupled model partially offset the atmospheric model errors. A comparison between the AMIP-type simulations in CMIP6 and their corresponding coupled simulations shows no significant improvement in the presence of coupling, so the AMIP simulations do not produce the bias because they lack coupling, but rather because of errors in the atmospheric response to the SSTs. We also consider ERA20CM, a 10-member ensemble of prescribed historical SST simulations. These are analogous to the CESM GOGA runs but with an entirely different model, and it can be seen in Fig. 6b (far right) that this model has a very similar springtime bias when forced by the observed ENSO events. This provides further evidence for the ubiquity of the bias in different climate models, even when forced with observed SSTs. Furthermore, as the different atmosphere-only configurations of CESM display no significant differences from each other, no evidence has been found that extratropical SSTs (comparing TOGA and GOGA) or the model top height (comparing CESM2-CAM6 and CESM2-WACCM6) have any bearing on the bias.
In most cases, the models developed by the same institution resemble each other in terms of the amplitude of the NPI biases, especially when the models differ only in atmospheric chemistry (MIROC-ESM and MIROC-ESM-CHEM), biogeochemistry (e.g., NorESM1-M and NorESM1-ME), or stratospheric representation (e.g., CMCC-CM and CMCC-CMS; CESM1-CAM5 and CESM1-WACCM; and CESM2-CAM6 and CESM2-WACCM6). Model resolution, in general, is not a determining factor for the occurrence of the bias (e.g., IPSL-CM5A-LR and IPSL-CM5A-MR; MPI-ESM-LR and MPI-ESM-MR), although there are some cases where improvements are seen (e.g., BCC-CSM-1.1 and BCC-CSM-1.1-M in CMIP5, and HadGEM3-GC31-LL and HadGEM3-GC31-MM in CMIP6). There are also cases where changes in the physics lead to improvement (e.g., compare IPSL-CM5B-LR with IPSL-CM5A-MR and IPSL-CM5A-LR).
4. Implications for the modeled climate response over North America
The North Pacific atmospheric circulation can have a profound impact on the surface climate over North America. For example, when the low pressure system around the Aleutian Islands becomes deeper and shifted eastward, the southerly and westerly winds along the west coast of North America are intensified, increasing the frequency of heavy daily precipitation events over the western United States (Kim et al. 2019), and warming a large portion of Alaska (Papineau 2001) and the western United States (Favre and Gershunov 2009).
To investigate the influence of the circulation bias on the simulation of the climate impacts over North America, we first compare the modeled EN–LN composites of surface air temperature and precipitation anomalies during FM in the CESM PACEMAKER simulations with observations in Figs. 7a and 7c. The PACEMAKER simulations are selected over other CESM configurations since they have a larger ensemble size than the TOGA and GOGA simulations (20 members compared to 10 members), and two-way air–sea interaction is included. Moreover, compared to the LENS-his and LENS-pi simulations, the SST anomalies during ENSO are more realistic. For surface air temperature (TAS; Fig. 7a), significant warm biases are found in the northwest, covering Alaska to Nunavut. Significant cold biases are mostly seen to the north and northeast of the Great Lakes. Weak warm biases also occur along the coastal area of the Gulf of Mexico. The accompanying circulation anomalies at 850 hPa suggest that the warm bias over Alaska can be directly linked to the northeast branch of the cyclonic circulation bias over the North Pacific, and the associated southerly warm advection. The weak warm biases along the coastal area of the Gulf of Mexico accompany a southerly wind bias in the western North Atlantic. The anomalous cold air over northeastern Canada south of Hudson Bay appears to be related to circulation anomalies over the western Atlantic and Hudson Bay, rather than directly influenced by the NPI. The distribution of precipitation (PREC; Fig. 7c) shows a patch of significant dry biases along the west coast from southern Canada to the northern tier of the United States, and wet biases over the southwestern and the southeastern United States. The moisture flux shown in vectors suggests that there are offshore moisture transport biases prevailing over the dry-bias region and southerly onshore moisture transport biases associated with the southwest United States wet bias. Following Jong et al. (2016), we apply the composite analysis to the extreme and nonextreme EN events separately and find that California precipitation is markedly increased in both types of events in PACEMAKER, in contrast to observations, which show a larger signal during extreme EN events (not shown). A possible consequence is that the models would give an overoptimistic estimation of the role played by moderate EN events in relieving periods of drought in California.
Similar results are given by the multimodel mean of the CMIP5 models in Fig. 7b for surface air temperature and in Fig. 7d for precipitation. Compared to PACEMAKER, the warm biases shown in Alaska and western Canada are weaker in the CMIP5 models, mainly because the North Pacific circulation bias and the associated southerly anomalies (Figs. 7e,f) are weaker in the CMIP5 ensemble mean. The cold biases south of Hudson Bay, by contrast, are stronger in the CMIP5 models. The major difference in the composite precipitation bias in the CMIP5 models compared to PACEMAKER is that the wet bias over California is missing, which can be explained by the limited eastward extension of the cyclonic flow anomalies that prevent the inland moisture flux anomalies from reaching the southwest United States in the CMIP5 models.
From another perspective, the linkage between the surface climate response and the NPI bias can be established by regressing the composite climate biases onto the NPI bias across the CMIP5 models. The regression maps shown in Figs. 8a and 8b provide further evidence that the above surface climate biases found over western North America are, indeed, connected to the NPI bias. Note that the sign of the NPI bias has been inverted before calculating the regression coefficients, so that the sign of the regression coefficient matches the sign of the anomalies in the composite bias discussed above. Figure 8a indicates that, in the CMIP5 models, a stronger North Pacific ENSO teleconnection tends to be accompanied by a warmer anomaly over northwest North America and a colder anomaly in the southeast. In contrast to Fig. 7b, no significant temperature biases to the north and northeast of the Great Lakes are present. In addition, the weak cold biases seen over the coastal area of the Gulf of Mexico in the regression map across the CMIP5 models are of opposite sign compared to the composite bias. This suggests that these features found in the composite bias are not directly tied to the NPI bias. The across-model regression coefficient map between the precipitation composite bias and the NPI bias (Fig. 8b) exhibits significant positive precipitation anomalies in California (90% confidence level) and dry biases to the north, which is in line with a strengthening of the canonical precipitation response to ENSO as depicted by Mo and Higgins (1998), Mo and Schemm (2008), and many other studies. Note that compared to the composite results, the NPI regressions show a weaker dry bias to the west of Hudson Bay and a more coastal-confined wet bias over the southeastern United States.
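The across-model regression underlying the Fig. 8-style maps amounts to a grid-point-by-grid-point least squares fit, with one sample per model. The sketch below illustrates this; the helper regress_bias_on_npi and its argument layout are assumptions of the illustration rather than the analysis code used here.

```python
import numpy as np
from scipy import stats

def regress_bias_on_npi(bias_maps, npi_bias):
    """Across-model regression of a composite bias field onto the NPI bias.

    bias_maps : array of shape (n_models, nlat, nlon); each model's EN-LN
                composite bias (model minus observed) at every grid point.
    npi_bias  : array of shape (n_models,); each model's FM NPI bias,
                sign-inverted so that positive values denote a stronger
                teleconnection than observed.
    Returns regression slopes and two-sided p-values on the grid.
    """
    n_models, nlat, nlon = bias_maps.shape
    slopes = np.empty((nlat, nlon))
    pvals = np.empty((nlat, nlon))
    for j in range(nlat):
        for i in range(nlon):
            res = stats.linregress(npi_bias, bias_maps[:, j, i])
            slopes[j, i] = res.slope
            pvals[j, i] = res.pvalue
    return slopes, pvals
```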
In the EN–LN composite field of SLP anomalies, the springtime anomalies in both the PACEMAKER simulations and the coupled CMIP5 models exhibit a positive bias centered over the North Atlantic (Figs. 7e,f). However, the regression of SLP anomalies onto the NPI bias across the CMIP5 models makes clear that the magnitude of this North Atlantic bias is not related to the magnitude of the NPI bias (Fig. 8c), suggesting the Atlantic and Pacific biases may have distinct causes. The lack of significant circulation changes around Hudson Bay in the regression map is also consistent with the weak climate impacts (both in TAS and PREC) displayed over that area (Figs. 8a,b). The cold biases over the southeastern United States are under the influence of the prevailing northerly winds brought by the weak low pressure anomalies over the southeastern North Atlantic, which are linked to the NPI bias. In summary, the influence of the model biases in the simulation of a stronger North Pacific ENSO teleconnection after the peak season is consistent with a prolonged ENSO impact on western North America in the models (Mo and Higgins 1998; Papineau 2001; Deser et al. 2018).
5. Potential causes of the February–March NPI bias
It is well known that diabatic heating related to tropical precipitation anomalies plays a critical role in triggering the extratropical Rossby wave train pattern during ENSO events (Hoskins and Karoly 1981). Here we investigate whether there is any evidence that this mechanism underlies the FM NPI bias in models. The differences in the EN–LN composited anomalous precipitation fields for CESM PACEMAKER, TOGA-ERSSTv3b, and the multimodel mean of the CMIP5 piControl simulations relative to ERA20C, averaged over FM, are shown in Figs. 9a, 9c, and 9e, respectively. Figure 9g shows the regression, across CMIP5 models, of the precipitation simulation bias at each grid point over the ocean onto the NPI bias. Since precipitation in ERA20C may have substantial influence from the underlying model, we also show an equivalent comparison for the shorter period 1979–2010 using the GPCP observational product in Fig. S7. Both give consistent conclusions.
According to Spencer and Slingo’s (2003) analysis, the simulated precipitation over the equatorial southwestern Pacific to the east of New Guinea (central and western equatorial Pacific) remains too high (low) during the springtime of EN (LN) events. However, as can be seen in Figs. 9a, 9c, and 9e, in our study the EN–LN composites of modeled precipitation to the east of New Guinea around the date line are actually lower than observations in FM. Furthermore, these negative precipitation anomalies are not significantly correlated with the North Pacific SLP bias across the CMIP5 models, as indicated by Fig. 9g. Instead, across the CMIP5 models, a weak positive correlation with the NPI bias can be found for the precipitation anomalies from the western tropical Pacific around the Philippines into the equatorial western Pacific. A similar structure is seen in the composite fields, where positive precipitation biases are found over the western tropical Pacific surrounding the Maritime Continent, which extend farther east along the north of the equator in the TOGA-ERSSTv3b simulation. A significant relationship is also found for precipitation anomalies over the midlatitude North Pacific (e.g., the dry biases around the Kuroshio area and wet biases over the central North Pacific), but of course these are very likely due to the circulation bias as opposed to causing it. Over the Indian Ocean, there is an overall dry-bias signal accompanying the overly strong North Pacific ENSO teleconnection in the models. The detailed structure reveals that while there is a consistent dry bias over the south Indian Ocean in each composite analysis (Figs. 9a,c,e), significant negative correlations are found north of the equator in the regression map across the CMIP5 models (Fig. 9g). Similar dry biases are also seen over the north Indian Ocean in the PACEMAKER differences from observations, but are not present in the TOGA experiments.
Although the precipitation-induced diabatic heating is essential for generating the ENSO teleconnection, the climatological background flow (e.g., the location and intensity of the jet stream) could modify the source of the waves and influence the subsequent wave propagation (Simmons et al. 1983; Held and Ting 1990; Hoskins et al. 1983; Sardeshmukh and Hoskins 1988). As a preliminary search for discrepancies in the mean flow between models and observations, the zonal wind climatology (over all years) in FM at 200 hPa (U200) is examined (Figs. 9b,d,f,h). The results from the regression onto the NPI across the CMIP5 models suggest that the strength of the climatological East Asian subtropical jet stream is positively correlated with the NPI bias across the models. Compared to observations, the East Asian jet stream over the North Pacific in the PACEMAKER simulations extends farther eastward while the TOGA-ERSSTv3b simulation shows no obvious difference from the observed background jet stream intensity. Therefore, it is possible that the climatological U200 plays a role in some models, although there is not a consistent discrepancy in each case. It is possible that different mechanisms are acting in different setups; for example, the north Indian Ocean precipitation bias may be important in the coupled configurations like CMIP5 and PACEMAKER, while other factors may be important in TOGA. There are still several other factors possibly involved in determining the ultimate structure and intensity of the teleconnection pattern, such as the role played by transient eddies, the detailed horizontal structure and vertical profile of the diabatic heating associated with the tropical precipitation anomalies (Ting and Sardeshmukh 1993), feedbacks involving extratropical diabatic heating, and the interactions between all these potential factors.
6. Summary and conclusions
The North Pacific ENSO teleconnection acts as an important atmospheric bridge that helps transmit the influence of ENSO-related SST anomalies to North American surface climate (Alexander et al. 2002). Due to a larger forced signal relative to background noise from internal variability, the late winter to spring season is believed to have a potential for higher seasonal predictability for North America compared to the midwinter (Kumar and Hoerling 1998). However, this predictability inevitably relies on the model’s capability to reasonably reproduce the SST-forced teleconnection pattern over the North Pacific during that season. Here we find that CESM1 and CESM2 display a significantly biased North Pacific ENSO teleconnection intensity after the peak of El Niño/La Niña events, especially in the 2-month average from February to March, regardless of how the experiments are configured (i.e., observed SSTs specified in the atmospheric model, eastern tropical Pacific SST anomalies nudged to observed in the coupled model, or free-running fully coupled simulations). Specifically, the modeled SLP anomalies have larger amplitude over the North Pacific, and maximize one month later than in observations. A comparison with piControl simulations from 43 CMIP5 and 20 CMIP6 models demonstrates that more than half of the CMIP models exhibit a similar significant bias (i.e., they exhibit an overly strong teleconnection response to ENSO in the FM season, which is not a carry-over phenomenon from the preceding December–January season).
Both CESM and the CMIP ensemble indicate that a stronger North Pacific ENSO teleconnection after the peak of El Niño/La Niña is associated with a bias in the simulation of anomalous surface climate over western North America. As revealed by the EN–LN composites, this takes the form of a warm bias over Alaska, a wet bias over California, and a dry bias along the west coast from southern Canada to the northern United States compared to observations. These features are expected based on the large-scale circulation-induced thermal advection and moisture transports that accompany the stronger North Pacific circulation anomaly. In general, the models’ tendency to display a stronger North Pacific ENSO teleconnection during spring likely prolongs the simulated influence of ENSO over North America beyond what is observed. In a broader sense, the ENSO-forced signal during that season is overestimated in the models.
In the modeling work of Garfinkel et al. (2019), the early spring North Pacific SLP response to ENSO events is of similar magnitude to its winter counterpart (see Fig. 1 of their paper). Our study suggests that this is a model deficiency, and care should be taken in interpreting modeling results related to the strength of springtime North Pacific circulation anomalies during ENSO events. Furthermore, understanding the mechanism behind this bias is critically important for developing future models that can be used effectively to study springtime ENSO teleconnections or to provide seasonal forecasts. While both Spencer and Slingo (2003) and Alexander et al. (2002) speculated that a potential cause of the bias is the lack of dynamical ocean–atmosphere coupling, our study suggests that most coupled models also suffer from a similar problem. By examining the tropical Pacific precipitation discrepancy in the CESM PACEMAKER, TOGA-ERSSTv3b, and CMIP5 models’ piControl simulations, we show that the central tropical Pacific precipitation bias in the PACEMAKER and TOGA-ERSSTv3b simulations is of the opposite sign to that in Spencer and Slingo’s (2003) analysis. Moreover, no significant relationship is detected in the CMIP5 models between the bias in precipitation anomalies in the tropical Pacific and the North Pacific circulation bias. Instead, the NPI biases in the CMIP5 models are found to be significantly negatively correlated with precipitation biases over the tropical north Indian Ocean. However, such a precipitation bias is not present in the TOGA runs. There are many other factors that could play a role, such as biases in transient eddy feedbacks on the extratropical circulation or biases in the vertical structure of the heating perturbation, and a detailed investigation of such possibilities is needed to further understand the ultimate reason behind this model deficiency.
Despite considerable improvements in coupled models over the past 15 years, the springtime bias in ENSO teleconnections to the North Pacific and the attendant climate impacts over North America remain ubiquitous and lack explanation. Further work is needed to fully understand the bias and disentangle the roles played by its multiple contributing components. Inspired by Held et al. (2002) and the body of work preceding it (Held and Kang 1987; Held et al. 1989; Ting and Sardeshmukh 1993), stationary wave modeling that decomposes the stationary field into multiple contributing components may be a useful tool in future studies aimed at this understanding.
Acknowledgments
We appreciate the three anonymous reviewers for their thoughtful comments. The CESM project is supported primarily by the National Science Foundation (NSF). This material is supported by the National Natural Science Foundation of China (41875127) and is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement 1852977. Computing and data storage resources, including the Cheyenne supercomputer (doi:10.5065/D6RX99HX), were provided by the Computational and Information Systems Laboratory (CISL) at NCAR. Ruyan Chen was supported by the graduate visitor program of the Advanced Study Program at NCAR. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table 1) for producing and making available their model output. For CMIP, the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals.
REFERENCES
Adler, R. F., and Coauthors, 2003: The version-2 Global Precipitation Climatology Project (GPCP) monthly precipitation analysis (1979–present). J. Hydrometeor., 4, 1147–1167, https://doi.org/10.1175/1525-7541(2003)004<1147:TVGPCP>2.0.CO;2.
Alexander, M. A., I. Bladé, M. Newman, J. R. Lanzante, N.-C. Lau, and J. D. Scott, 2002: The atmospheric bridge: The influence of ENSO teleconnections on air–sea interaction over the global oceans. J. Climate, 15, 2205–2231, https://doi.org/10.1175/1520-0442(2002)015<2205:TABTIO>2.0.CO;2.
Allan, R., and T. Ansell, 2006: A new globally complete monthly historical gridded mean sea level pressure dataset (HadSLP2): 1850–2004. J. Climate, 19, 5816–5842, https://doi.org/10.1175/JCLI3937.1.
Barnett, T., and Coauthors, 1994: Forecasting global ENSO-related climate anomalies. Tellus, 46A, 381–397, https://doi.org/10.3402/tellusa.v46i4.15487.
Barnston, A. G., M. Chelliah, and S. B. Goldenberg, 1997: Documentation of a highly ENSO–related SST region in the equatorial Pacific: Research note. Atmos.–Ocean, 35, 367–383, https://doi.org/10.1080/07055900.1997.9649597.
Barnston, A. G., A. Kumar, L. Goddard, and M. P. Hoerling, 2005: Improving seasonal prediction practices through attribution of climate variability. Bull. Amer. Meteor. Soc., 86, 59–72, https://doi.org/10.1175/BAMS-86-1-59.
Bates, G. T., M. P. Hoerling, and A. Kumar, 2001: Central U.S. springtime precipitation extremes: Teleconnections and relationships with sea surface temperature. J. Climate, 14, 3751–3766, https://doi.org/10.1175/1520-0442(2001)014<3751:CUSSPE>2.0.CO;2.
Bellenger, H., E. Guilyardi, J. Leloup, M. Lengaigne, and J. Vialard, 2014: ENSO representation in climate models: From CMIP3 to CMIP5. Climate Dyn., 42, 1999–2018, https://doi.org/10.1007/s00382-013-1783-z.
Bjerknes, J., 1966: A possible response of the atmospheric Hadley circulation to equatorial anomalies of ocean temperature. Tellus, 18, 820–829, https://doi.org/10.3402/tellusa.v18i4.9712.
Bjerknes, J., 1969: Atmospheric teleconnections from the equatorial Pacific. Mon. Wea. Rev., 97, 163–172, https://doi.org/10.1175/1520-0493(1969)097<0163:ATFTEP>2.3.CO;2.
Bladé, I., M. Newman, M. A. Alexander, and J. D. Scott, 2008: The late fall extratropical response to ENSO: Sensitivity to coupling and convection in the tropical west Pacific. J. Climate, 21, 6101–6118, https://doi.org/10.1175/2008JCLI1612.1.
Bogenschutz, P. A., A. Gettelman, C. Hannay, V. E. Larson, R. B. Neale, C. Craig, and C.-C. Chen, 2018: The path to CAM6: Coupled simulations with CAM5.4 and CAM5.5. Geosci. Model Dev., 11, 235–255, https://doi.org/10.5194/gmd-11-235-2018.
Chakraborty, A., and T. N. Krishnamurti, 2006: Improved seasonal climate forecasts of the South Asian summer monsoon using a suite of 13 coupled ocean–atmosphere models. Mon. Wea. Rev., 134, 1697–1721, https://doi.org/10.1175/MWR3144.1.
Compo, G. P., and Coauthors, 2011: The Twentieth Century Reanalysis Project. Quart. J. Roy. Meteor. Soc., 137 (654), 1–28, https://doi.org/10.1002/qj.776.
Danabasoglu, G., and Coauthors, 2020: The Community Earth System Model version 2 (CESM2). J. Adv. Model Earth Syst., 12, e2019MS001916, https://doi.org/10.1029/2019MS001916.
Deser, C., I. R. Simpson, K. A. McKinnon, and A. S. Phillips, 2017: The Northern Hemisphere extratropical atmospheric circulation response to ENSO: How well do we know it and how do we evaluate models accordingly? J. Climate, 30, 5059–5082, https://doi.org/10.1175/JCLI-D-16-0844.1.
Deser, C., I. R. Simpson, A. S. Phillips, and K. A. McKinnon, 2018: How well do we know ENSO’s climate impacts over North America, and how do we evaluate models accordingly? J. Climate, 31, 4991–5014, https://doi.org/10.1175/JCLI-D-17-0783.1.
Favre, A., and A. Gershunov, 2009: North Pacific cyclonic and anticyclonic transients in a global warming context: Possible consequences for western North American daily precipitation and temperature extremes. Climate Dyn., 32, 969–987, https://doi.org/10.1007/s00382-008-0417-3.
Frauen, C., D. Dommenget, N. Tyrrell, M. Rezny, and S. Wales, 2014: Analysis of the nonlinearity of El Niño–Southern Oscillation teleconnections. J. Climate, 27, 6225–6244, https://doi.org/10.1175/JCLI-D-13-00757.1.
Garfinkel, C. I., I. Weinberger, I. P. White, L. D. Oman, V. Aquila, and Y.-K. Lim, 2019: The salience of nonlinearities in the boreal winter response to ENSO: North Pacific and North America. Climate Dyn., 52, 4429–4446, https://doi.org/10.1007/s00382-018-4386-x.
Gershunov, A., and T. P. Barnett, 1998: ENSO influence on intraseasonal extreme rainfall and temperature frequencies in the contiguous United States: Observations and model results. J. Climate, 11, 1575–1586, https://doi.org/10.1175/1520-0442(1998)011<1575:EIOIER>2.0.CO;2.
Gettelman, A., and Coauthors, 2019: The Whole Atmosphere Community Climate Model version 6 (WACCM6). J. Geophys. Res. Atmos., 124, 12 380–12 403, https://doi.org/10.1029/2019JD030943.
Gleixner, S., N. S. Keenlyside, T. D. Demissie, F. Counillon, Y. Wang, and E. Viste, 2017: Seasonal predictability of Kiremt rainfall in coupled general circulation models. Environ. Res. Lett., 12, 114016, https://doi.org/10.1088/1748-9326/aa8cfa.
Held, I. M., and I.-S. Kang, 1987: Barotropic models of the extratropical response to El Niño. J. Atmos. Sci., 44, 3576–3586, https://doi.org/10.1175/1520-0469(1987)044<3576:BMOTER>2.0.CO;2.
Held, I. M., and M. Ting, 1990: Orographic versus thermal forcing of stationary waves: The importance of the mean low-level wind. J. Atmos. Sci., 47, 495–500, https://doi.org/10.1175/1520-0469(1990)047<0495:OVTFOS>2.0.CO;2.
Held, I. M., S. W. Lyons, and S. Nigam, 1989: Transients and the extratropical response to El Niño. J. Atmos. Sci., 46, 163–174, https://doi.org/10.1175/1520-0469(1989)046<0163:TATERT>2.0.CO;2.
Held, I. M., M. Ting, and H. Wang, 2002: Northern winter stationary waves: Theory and modeling. J. Climate, 15, 2125–2144, https://doi.org/10.1175/1520-0442(2002)015<2125:NWSWTA>2.0.CO;2.
Hersbach, H., C. Peubey, A. Simmons, P. Berrisford, P. Poli, and D. Dee, 2015: ERA-20CM: A twentieth-century atmospheric model ensemble. Quart. J. Roy. Meteor. Soc., 141, 2350–2375, https://doi.org/10.1002/qj.2528.
Horel, J. D., and J. M. Wallace, 1981: Planetary-scale atmospheric phenomena associated with the southern oscillation. Mon. Wea. Rev., 109, 813–829, https://doi.org/10.1175/1520-0493(1981)109<0813:PSAPAW>2.0.CO;2.
Hoskins, B. J., and D. J. Karoly, 1981: The steady linear response of a spherical atmosphere to thermal and orographic forcing. J. Atmos. Sci., 38, 1179–1196, https://doi.org/10.1175/1520-0469(1981)038<1179:TSLROA>2.0.CO;2.
Hoskins, B. J., I. N. James, and G. H. White, 1983: The shape, propagation and mean-flow interaction of large-scale weather systems. J. Atmos. Sci., 40, 1595–1612, https://doi.org/10.1175/1520-0469(1983)040<1595:TSPAMF>2.0.CO;2.
Huang, B., and Coauthors, 2015: Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4). Part I: Upgrades and intercomparisons. J. Climate, 28, 911–930, https://doi.org/10.1175/JCLI-D-14-00006.1.
Huang, B., and Coauthors, 2017: Extended Reconstructed Sea Surface Temperature, version 5 (ERSSTv5): Upgrades, validations, and intercomparisons. J. Climate, 30, 8179–8205, https://doi.org/10.1175/JCLI-D-16-0836.1.
Hurrell, J. W., and Coauthors, 2013: The Community Earth System Model: A framework for collaborative research. Bull. Amer. Meteor. Soc., 94, 1339–1360, https://doi.org/10.1175/BAMS-D-12-00121.1.
Hurwitz, M. M., N. Calvo, C. I. Garfinkel, A. H. Butler, S. Ineson, C. Cagnazzo, E. Manzini, and C. Peña-Ortiz, 2014: Extra-tropical atmospheric response to ENSO in the CMIP5 models. Climate Dyn., 43, 3367–3376, https://doi.org/10.1007/s00382-014-2110-z.
Jiménez-Esteve, B., and D. I. V. Domeisen, 2019: Nonlinearity in the North Pacific atmospheric response to a linear ENSO forcing. Geophys. Res. Lett., 46, 2271–2281, https://doi.org/10.1029/2018GL081226.
Johnson, N. C., D. C. Collins, S. B. Feldstein, M. L. L’Heureux, and E. E. Riddle, 2014: Skillful wintertime North American temperature forecasts out to 4 weeks based on the state of ENSO and the MJO. Wea. Forecasting, 29, 23–38, https://doi.org/10.1175/WAF-D-13-00102.1.
Jong, B.-T., M. Ting, and R. Seager, 2016: El Niño’s impact on California precipitation: Seasonality, regionality, and El Niño intensity. Environ. Res. Lett., 11, 054021, https://doi.org/10.1088/1748-9326/11/5/054021.
Kay, J. E., and Coauthors, 2015: The Community Earth System Model (CESM) large ensemble project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 1333–1349, https://doi.org/10.1175/BAMS-D-13-00255.1.
Kiehl, J. T., and P. R. Gent, 2004: The Community Climate System Model, version 2. J. Climate, 17, 3666–3682, https://doi.org/10.1175/1520-0442(2004)017<3666:TCCSMV>2.0.CO;2.
Kim, H.-M., Y. Zhou, and M. A. Alexander, 2019: Changes in atmospheric rivers and moisture transport over the northeast Pacific and western North America in response to ENSO diversity. Climate Dyn., 52, 7375–7388, https://doi.org/10.1007/s00382-017-3598-9.
Kosaka, Y., and S.-P. Xie, 2013: Recent global-warming hiatus tied to equatorial Pacific surface cooling. Nature, 501, 403–407, https://doi.org/10.1038/nature12534.
Kumar, A., 2010: On the assessment of the value of the seasonal forecast information. Meteor. Appl., 17, 385–392, https://doi.org/10.1002/met.167.
Kumar, A., and M. P. Hoerling, 1998: Annual cycle of Pacific–North American seasonal predictability associated with different phases of ENSO. J. Climate, 11, 3295–3308, https://doi.org/10.1175/1520-0442(1998)011<3295:ACOPNA>2.0.CO;2.
Kumar, A., and M. P. Hoerling, 2000: Analysis of a conceptual model of seasonal climate variability and implications for seasonal prediction. Bull. Amer. Meteor. Soc., 81, 255–264, https://doi.org/10.1175/1520-0477(2000)081<0255:AOACMO>2.3.CO;2.
Kumar, A., A. G. Barnston, and M. P. Hoerling, 2001: Seasonal predictions, probabilistic verifications, and ensemble size. J. Climate, 14, 1671–1676, https://doi.org/10.1175/1520-0442(2001)014<1671:SPPVAE>2.0.CO;2.
Lau, N.-C., and M. J. Nath, 1994: A modeling study of the relative roles of tropical and extratropical SST anomalies in the variability of the global atmosphere–ocean system. J. Climate, 7, 1184–1207, https://doi.org/10.1175/1520-0442(1994)007<1184:AMSOTR>2.0.CO;2.
Lee, S.-K., B. E. Mapes, C. Wang, D. B. Enfield, and S. J. Weaver, 2014: Springtime ENSO phase evolution and its relation to rainfall in the continental U.S. Geophys. Res. Lett., 41, 1673–1680, https://doi.org/10.1002/2013GL059137.
Lemos, M. C., and L. Dilling, 2007: Equity in forecasting climate: Can science save the world’s poor? Sci. Public Policy, 34, 109–116, https://doi.org/10.3152/030234207X190964.
Li, G., and S.-P. Xie, 2014: Tropical biases in CMIP5 multimodel ensemble: The excessive equatorial Pacific cold tongue and double ITCZ problems. J. Climate, 27, 1765–1780, https://doi.org/10.1175/JCLI-D-13-00337.1.
Livezey, R. E., M. Masutani, and M. Ji, 1996: SST-forced seasonal simulation and prediction skill for versions of the NCEP/MRF model. Bull. Amer. Meteor. Soc., 77, 507–518, https://doi.org/10.1175/1520-0477(1996)077<0507:SFSSAP>2.0.CO;2.
Meinke, H., and R. C. Stone, 2005: Seasonal and inter-annual climate forecasting: The new tool for increasing preparedness to climate variability and change in agricultural planning and operations. Climatic Change, 70, 221–253, https://doi.org/10.1007/s10584-005-5948-6.
Mo, K. C., and R. W. Higgins, 1998: Tropical influences on California precipitation. J. Climate, 11, 412–430, https://doi.org/10.1175/1520-0442(1998)011<0412:TIOCP>2.0.CO;2.
Mo, K. C., and J. E. Schemm, 2008: Relationships between ENSO and drought over the southeastern United States. Geophys. Res. Lett., 35, L15701, https://doi.org/10.1029/2008GL034656.
Namias, J., 1976: Some statistical and synoptic characteristics associated with El Niño. J. Phys. Oceanogr., 6, 130–138, https://doi.org/10.1175/1520-0485(1976)006<0130:SSASCA>2.0.CO;2.
Okumura, Y. M., and C. Deser, 2010: Asymmetry in the duration of El Niño and La Niña. J. Climate, 23, 5826–5843, https://doi.org/10.1175/2010JCLI3592.1.
Palmer, T. N., and D. L. T. Anderson, 1994: The prospects for seasonal forecasting—A review paper. Quart. J. Roy. Meteor. Soc., 120, 755–793, https://doi.org/10.1002/qj.49712051802.
Papineau, J. M., 2001: Wintertime temperature anomalies in Alaska correlated with ENSO and PDO. Int. J. Climatol., 21, 1577–1592, https://doi.org/10.1002/joc.686.
Peng, P., A. Kumar, and W. Wang, 2011: An analysis of seasonal predictability in coupled model forecasts. Climate Dyn., 36, 637–648, https://doi.org/10.1007/s00382-009-0711-8.
Poli, P., and Coauthors, 2016: ERA-20C: An atmospheric reanalysis of the twentieth century. J. Climate, 29, 4083–4097, https://doi.org/10.1175/JCLI-D-15-0556.1.
Rohde, R., and Coauthors, 2013: A new estimate of the average Earth surface land temperature spanning 1753 to 2011. Geoinfor. Geostat. Overview, 1 (1), https://doi.org/10.4172/2327-4581.1000101.
Ropelewski, C. F., and M. S. Halpert, 1986: North American precipitation and temperature patterns associated with the El Niño/Southern Oscillation (ENSO). Mon. Wea. Rev., 114, 2352–2362, https://doi.org/10.1175/1520-0493(1986)114<2352:NAPATP>2.0.CO;2.
Rowell, D. P., 1998: Assessing potential seasonal predictability with an ensemble of multidecadal GCM simulations. J. Climate, 11, 109–120, https://doi.org/10.1175/1520-0442(1998)011<0109:APSPWA>2.0.CO;2.
Sardeshmukh, P. D., and B. J. Hoskins, 1988: The generation of global rotational flow by steady idealized tropical divergence. J. Atmos. Sci., 45, 1228–1251, https://doi.org/10.1175/1520-0469(1988)045<1228:TGOGRF>2.0.CO;2.
Schmidt, N., E. K. Lipp, J. B. Rose, and M. E. Luther, 2001: ENSO influences on seasonal rainfall and river discharge in Florida. J. Climate, 14, 615–628, https://doi.org/10.1175/1520-0442(2001)014<0615:EIOSRA>2.0.CO;2.
Schneider, U., A. Becker, P. Finger, A. Meyer-Christoffer, M. Ziese, and B. Rudolf, 2014: GPCC’s new land surface precipitation climatology based on quality-controlled in situ data and its role in quantifying the global water cycle. Theor. Appl. Climatol., 115, 15–40, https://doi.org/10.1007/s00704-013-0860-x.
Schubert, S. D., Y. Chang, M. J. Suarez, and P. J. Pegion, 2008: ENSO and wintertime extreme precipitation events over the contiguous United States. J. Climate, 21, 22–39, https://doi.org/10.1175/2007JCLI1705.1.
Shukla, J., 1981: Dynamical predictability of monthly means. J. Atmos. Sci., 38, 2547–2572, https://doi.org/10.1175/1520-0469(1981)038<2547:DPOMM>2.0.CO;2.
Simmons, A. J., J. M. Wallace, and G. W. Branstator, 1983: Barotropic wave propagation and instability, and atmospheric teleconnection patterns. J. Atmos. Sci., 40, 1363–1392, https://doi.org/10.1175/1520-0469(1983)040<1363:BWPAIA>2.0.CO;2.
Smith, T. M., R. W. Reynolds, T. C. Peterson, and J. Lawrimore, 2008: Improvements to NOAA’s historical merged land–ocean surface temperature analysis (1880–2006). J. Climate, 21, 2283–2296, https://doi.org/10.1175/2007JCLI2100.1.
Spencer, H., and J. M. Slingo, 2003: The simulation of peak and delayed ENSO teleconnections. J. Climate, 16, 1757–1774, https://doi.org/10.1175/1520-0442(2003)016<1757:TSOPAD>2.0.CO;2.
Stockdale, T. N., D. L. T. Anderson, J. O. S. Alves, and M. A. Balmaseda, 1998: Global seasonal rainfall forecasts using a coupled ocean–atmosphere model. Nature, 392, 370–373, https://doi.org/10.1038/32861.
Stoner, A. M. K., K. Hayhoe, and D. J. Wuebbles, 2009: Assessing general circulation model simulations of atmospheric teleconnection patterns. J. Climate, 22, 4348–4372, https://doi.org/10.1175/2009JCLI2577.1.
Ting, M., and P. D. Sardeshmukh, 1993: Factors determining the extratropical response to equatorial diabatic heating anomalies. J. Atmos. Sci., 50, 907–918, https://doi.org/10.1175/1520-0469(1993)050<0907:FDTERT>2.0.CO;2.
Titchner, H. A., and N. A. Rayner, 2014: The Met Office Hadley Centre sea ice and sea surface temperature data set, version 2: 1. Sea ice concentrations. J. Geophys. Res. Atmos., 119, 2864–2889, https://doi.org/10.1002/2013JD020316.
Trenberth, K. E., and J. W. Hurrell, 1994: Decadal atmosphere–ocean variations in the Pacific. Climate Dyn., 9, 303–319, https://doi.org/10.1007/BF00204745.
Trenberth, K. E., G. W. Branstator, D. Karoly, A. Kumar, N.-C. Lau, and C. Ropelewski, 1998: Progress during TOGA in understanding and modeling global teleconnections associated with tropical sea surface temperatures. J. Geophys. Res., 103, 14 291–14 324, https://doi.org/10.1029/97JC01444.
Webster, P. J., 1981: Mechanisms determining the atmospheric response to sea surface temperature anomalies. J. Atmos. Sci., 38, 554–571, https://doi.org/10.1175/1520-0469(1981)038<0554:MDTART>2.0.CO;2.
Wolter, K., R. M. Dole, and C. A. Smith, 1999: Short-term climate extremes over the continental United States and ENSO. Part I: Seasonal temperatures. J. Climate, 12, 3255–3272, https://doi.org/10.1175/1520-0442(1999)012<3255:STCEOT>2.0.CO;2.
Yang, X., and T. DelSole, 2012: Systematic comparison of ENSO teleconnection patterns between models and observations. J. Climate, 25, 425–446, https://doi.org/10.1175/JCLI-D-11-00175.1.
Zhang, T., J. Perlwitz, and M. P. Hoerling, 2014: What is responsible for the strong observed asymmetry in teleconnections between El Niño and La Niña? Geophys. Res. Lett., 41, 1019–1025, https://doi.org/10.1002/2013GL058964.