1. Introduction
Polar environmental predictability has become a growing area of research over the last decade, spurred by a combination of environmental (sea ice loss, rapid warming) and socioeconomic (increasing economic interests, national security) factors (e.g., Jung et al. 2016). This effort spans research on potential predictability in dynamical models (e.g., Holland et al. 2011; Blanchard-Wrigglesworth et al. 2011), the development of real-world forecasts using a range of dynamical and statistical models (e.g., Wang et al. 2013; Merryfield et al. 2013; Sigmond et al. 2013; Msadek et al. 2014; Yuan et al. 2016), improved model simulation of polar-specific processes such as the sea ice floe size distribution (e.g., Roach et al. 2018), advances in sea ice data assimilation (e.g., Zhang et al. 2018), and the deployment of observing networks and fieldwork campaigns [e.g., NASA’s Operation IceBridge and Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) platforms or the upcoming Multidisciplinary Drifting Observatory for the Study of Arctic Climate experiment]. Recent examples of this growing momentum in polar predictability include the start of regular seasonal sea ice forecasts such as the Sea Ice Outlook (Stroeve et al. 2015), a year-round sea ice forecast portal (Wayand et al. 2019), and the Year of Polar Prediction (YOPP) taking place over 2017–19 (Jung et al. 2016).
An emerging picture from potential predictability studies shows that forecasts of pan-Arctic sea ice area (SIA) and volume (SIV) should be skillful for about one and three years, respectively, yet SIA forecasts of observations tend to lose skill within a season or two, revealing a gap between potential and observed forecast skill (e.g., Bushuk et al. 2019). Polar predictability of other variables such as air temperature and precipitation has received less attention. Surface air temperature predictability tends to be coupled to the predictability of SIA over the marginal ice zone and adjacent regions (e.g., Day et al. 2014a), given the influence of sea ice on lower-tropospheric air temperature particularly outside the summer months. He et al. (2018) find that the predictability time scale of the Arctic atmosphere is seasonal at best both in observations and a suite of GCMs.
At the same time, studies have focused on unveiling skillful predictors of sea ice. For Arctic summer sea ice forecasts, a range of variables have been shown to offer skill, such as preceding spring sea ice thickness (e.g., Day et al. 2014b), spring melt-pond fraction (Schröder et al. 2014), late winter/early spring sea ice motion (Williams et al. 2016), ocean heat flux (Woodgate et al. 2010), stratospheric conditions (Smith et al. 2018), spring longwave radiation/cloud fraction (Kapsch et al. 2013), surface winds (Ogi et al. 2010), summer tropospheric temperatures and downwelling longwave radiation (Ding et al. 2017), or summer tropical Pacific sea surface temperatures (SSTs) (Hu et al. 2016; Ding et al. 2019). Thus, a range of ocean, sea ice, and atmospheric predictors, both local and remote, are thought to influence the evolution of summer sea ice and thus offer forecast skill. Nevertheless, it is the atmosphere that dominates forecast error growth of Arctic sea ice at the daily to seasonal time scales (Tietsche et al. 2016), owing to its smaller overall heat capacity, thermal inertia, and shorter predictability time scales relative to the ocean–sea ice system.
Antarctic sea ice predictability has generally received less attention than the Arctic. Here, it is thought that sea ice thickness is less relevant as a predictor than in the Arctic, likely given the thinner, less perennial sea ice, while ocean heat content and SST anomalies are particularly relevant predictors (Holland et al. 2013). More so than in the Arctic, remote sources of Antarctic variability have been found in tropical Pacific SSTs (e.g., Yuan 2004), particularly linked with forcing from the El Niño–Southern Oscillation (ENSO), which is thought to influence Antarctic sea ice at both annual (e.g., Stuecker et al. 2017) and decadal (Stammerjohn et al. 2008; Meehl et al. 2016) time scales, but not at shorter, monthly time scales (Kohyama and Hartmann 2016). More locally, an important source of Antarctic atmospheric variability that impacts sea ice is the southern annular mode (SAM; e.g., Simpkins et al. 2012). While there is a vast body of work on ENSO predictability (e.g., Latif et al. 1998), the predictability of the SAM or other local modes of Antarctic variability is less well studied.
What is the relative importance of remote versus local sources for polar variability? One way to investigate this problem is through the use of regional climate models (RCMs) in which ensembles are created either by forcing an RCM with boundary conditions and/or initial conditions (ICs) that are not held fixed (e.g., Mikolajewicz et al. 2005; Döscher et al. 2010; Rinke et al. 2004), or by using different RCM domains (Sein et al. 2014). The comparison of ensemble spread and the ensemble mean can then offer a quantification of local to remotely sourced variability. One of the drawbacks, however, from using RCMs for polar studies is that the RCM domain is limited, by definition, to one pole, and all of the above-cited RCM studies focus on the Arctic. Nevertheless, these studies have found that in long (multidecadal) historical simulations of Arctic sea ice the relative contribution of locally sourced variability is largest in summer, as the influence of large-scale processes decreases, yet in general, remotely sourced variability dominates. Within the Arctic, the Barents–Kara–Greenland–Iceland–Norwegian (GIN) Seas have shown greater internal variability relative to other Arctic regions both in atmospheric (geopotential heights, temperature) and sea ice (ice thickness) variables (Sein et al. 2014). Other studies, using a combination of models and observations, have shown how trends in tropical SSTs exert an influence on atmospheric and sea ice trends in both the Arctic (Ding et al. 2014; Meehl et al. 2018) and Antarctica (Meehl et al. 2016).
Less work has been done on quantifying the contributions of remotely sourced variability (or forecast error growth) to initial-value seasonal polar predictability. In other words, how much would a polar prediction improve if the tropics/midlatitudes could be perfectly predicted at seasonal time scales? This problem can be explored using a nudging approach in which a forecast simulation is relaxed toward a known solution over a specific domain (e.g., the tropics and/or midlatitudes). Using this technique, Jung et al. (2014) explored winter atmospheric predictability in the ECMWF model and found modest to negligible improvement in weekly-to-monthly forecast skill in the northern midlatitudes originating from the tropics. More recently, Ye et al. (2018), using a similar methodology, found some tropically sourced skill over the North Pacific but less over the far North Atlantic in the Northern Hemisphere, and greater skill in winter relative to summer. Additionally, over the Southern Ocean they found tropically sourced skill mostly in the Bellingshausen–Amundsen Seas. In an earlier study, Ferranti et al. (1990) did find a significant improvement in 15-day atmospheric forecast skill over the North Pacific and Asia (but marginal over the North Atlantic/Europe) originating in the tropics, but did not consider polar predictability. In this work, we use a similar nudging technique to the papers above but in a fully coupled GCM, which allows us to investigate the predictability of the atmosphere, sea ice, and ocean components. Unlike the above papers, we focus on monthly-to-seasonal predictability and use a perfect-model experiment (PME) approach whereby we quantify the predictability inherent to the model (Collins 2002). To the authors’ knowledge, this is the first study that investigates remote influence on polar predictability using a nudging approach with a fully coupled GCM.
2. Data, model, and experiment design
For observational data of sea level pressure (SLP), SST, and SIA we use the NCEP–NCAR reanalysis SLP (Kalnay et al. 1996), the Hadley SST product (Rayner et al. 2003), and the NSIDC sea ice index SIA (Fetterer et al. 2002, updated 2017).
a. Model simulations
We use the NCAR Community Earth System Model, version 1, with the Community Atmosphere Model, version 5 [CESM1 (CAM5); see Hurrell et al. 2013]. The model simulates fully coupled atmosphere, ocean, sea ice, and land components at ~1° resolution, and is among the CMIP5 models with the highest fidelity in simulating observations (Knutti et al. 2013). We explore seasonal polar predictability in a year 2000 mean state by initializing PMEs from year 2000 ICs taken from preexisting twentieth-century simulations from the CESM-Large Ensemble (CESM-LENS) experiment (Kay et al. 2015). We create two forecast cycles of three sets of PMEs, each set consisting of 6 different forecast ensembles of 15 runs each that are seven months long. The three sets comprise a free-running control (Free PME) and two nudged experiments (Nudge30 and Nudge55), in which the forecasts are relaxed toward a reference solution equatorward of roughly 30° and 55° latitude, respectively (the nudging weighting factor α is shown in Fig. 1). Each forecast ensemble has identical external forcing (e.g., greenhouse gases, ozone) and ICs in all components taken from one of the simulations from CESM-LENS, chosen to sample the range of year 2000 sea ice conditions in CESM-LENS (high to low SIA and SIV). To create the forecast ensemble, a random white noise perturbation of order 10⁻¹⁴ °C is added to the initial temperature field in the atmosphere across the 15 runs of each ensemble. We initialize the two forecast cycles at two dates that are symmetric with respect to the seasonal cycle: 1 May 2000 and 1 November 2000. This allows us to compare predictability in the same season for each hemisphere at the same lead time (e.g., boreal winter predictability in the Arctic and austral winter predictability in Antarctica). The simulations include a total of 540 (2 cycles × 3 sets × 6 ensembles × 15 members) 7-month integrations that are summarized in Table 1.
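As a rough illustration of how such a forecast ensemble can be generated, the sketch below adds white-noise perturbations of order 10⁻¹⁴ °C to a single initial atmospheric temperature field; the array shapes and function names are illustrative assumptions, not the actual CESM restart-file workflow.

```python
import numpy as np

def perturb_initial_temperature(T0, n_members=15, amplitude=1e-14, seed=0):
    """Build an ensemble of initial states by adding white noise of order
    `amplitude` (degrees C) to one initial atmospheric temperature field.

    T0 : ndarray of shape (lev, lat, lon), e.g., taken from a year 2000
         CESM-LENS state (here just a toy field).
    """
    rng = np.random.default_rng(seed)
    noise = amplitude * rng.standard_normal((n_members,) + T0.shape)
    return T0[None, ...] + noise  # shape (member, lev, lat, lon)

# Toy example on a coarse 30-level, ~2-degree grid
T0 = 250.0 + np.zeros((30, 96, 144))
ensemble = perturb_initial_temperature(T0)
print(ensemble.shape)  # (15, 30, 96, 144)
```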
Table 1. Summary of perfect-model experiments.

Fig. 1. Nudging weighting factor α in the (left) Nudge30 and (right) Nudge55 PMEs.
Fig. 2. (top left) Latitudinal boxplot of SIC variability in CESM-LENS for year 2000; the horizontal red lines represent the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the more extreme data points. The green line is the 55° latitude. Maps show the regional seas. (bottom five rows) Mean monthly variability in year 2000 SIA (in millions of km²) in CESM-LENS and mean monthly SIA across the forecast PMEs. The gray shading represents the ±1σ spread about the climatology in CESM-LENS.
b. Skill metrics used
We quantify forecast skill by assessing the root-mean-square error (RMSE) and the normalized root-mean-square error (NRMSE) as used in the PME literature (Collins 2002), whereby skill is quantified by considering each single member of an ensemble as the “truth,” and all other members from that ensemble as forecasts. We choose these metrics over the anomaly correlation coefficient (ACC) given known ACC biases (Bushuk et al. 2019). We have also calculated the integrated ice edge error (IIEE, Goessling et al. 2016), which yields similar results to RMSE and NRMSE (not shown).
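The RMSE and NRMSE expressions referenced in the text as Eqs. (1) and (2) do not appear in this version; a standard perfect-model formulation following Collins (2002), which we assume matches the one used here, is

\mathrm{RMSE}(\tau) = \left\{ \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j \neq i} \left[ x_i(\tau) - x_j(\tau) \right]^2 \right\}^{1/2}, \quad (1)

\mathrm{NRMSE}(\tau) = \frac{\mathrm{RMSE}(\tau)}{\sqrt{2}\,\sigma_c}, \quad (2)

where x_i(τ) is member i of an N-member ensemble at lead time τ and σ_c is the climatological standard deviation of the control (here, CESM-LENS) for the corresponding month; the factor √2 makes NRMSE approach 1 when ensemble members are no more alike than two random draws from the control climate, so values significantly below 1 indicate skill above climatology. A minimal numerical sketch of the same computation, with hypothetical variable names, is:

```python
import numpy as np

def nrmse(forecasts, sigma_control):
    """Perfect-model NRMSE of a scalar quantity (e.g., SIA) at one lead time.

    forecasts     : (n_members,) array, one value per ensemble member
    sigma_control : climatological standard deviation from the control run
    """
    x = np.asarray(forecasts, dtype=float)
    n = x.size
    diffs = x[:, None] - x[None, :]            # every member vs. every other
    mse = np.sum(diffs**2) / (n * (n - 1))     # i == j terms are zero
    return np.sqrt(mse) / (np.sqrt(2.0) * sigma_control)
```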
3. Results
We first inspect the mean-state response of the forecast experiments in the context of the CESM-LENS climate for year 2000. We do not expect the Free PME to show any drift over the short 7-month forecasts, as the model physics and external forcing are identical to CESM-LENS, but it is not known a priori whether model drift may develop in the nudged PMEs and affect the mean state; we note that Arctic RCMs that use different domains but identical physics and reanalysis boundary forcing can show significantly different mean climate states [e.g., Sein et al. (2014) found in Arctic RCMs that the Arctic Oscillation is an internally generated mode of variability as long as the Aleutian low region is included in the domain]. Additionally, past nudging experiments have shown mean-state drift (Greatbatch et al. 2012). Since climate variability and predictability are mean-state dependent (Goosse et al. 2009; Screen 2014; Blanchard-Wrigglesworth and Bushuk 2019), it is important to consider this issue [note how the NRMSE metric compares the forecast spread to the control climate variability, the denominator in Eq. (2)]. Figure 2 shows the climatological SIA values in CESM-LENS (averaged over 1996–2005 using all ensemble members) and the ensemble-mean SIA in the forecast PMEs. As expected, there is no drift in the Free PMEs. There is a small drift with respect to climatology over Antarctica in the Nudge30 PME, a slight decrease in SIA in the last 2–3 months of the forecast ensembles (mostly over the south Indian Ocean) that is too modest to affect sea ice variability (Goosse et al. 2009). Elsewhere, there is no significant drift in the sea ice conditions of the nudged PMEs. We also inspect the atmospheric mean state and its variability in all three PMEs. We show the patterns of mean sea level pressure (MSLP) in June–August (JJA; lead 2–4-month forecast in the 1 May PMEs) and December–February (DJF; lead 2–4-month forecast in the 1 November PMEs; herein we refer to these forecast leads simply as JJA or DJF) in Fig. S1 (in the online supplemental material) and the leading EOF patterns of MSLP over 20°–90°N and 20°–90°S in Fig. S2. The Free and Nudge55 PMEs replicate the CESM-LENS MSLP fields almost identically, whereas the Nudge30 PME replicates the mean CESM-LENS MSLP in the Arctic but has a positive MSLP bias over Antarctica, and its leading EOF pattern of variability also shows significant biases with respect to CESM-LENS. The cause of this bias and why it is confined to the Southern Hemisphere remain unclear.
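For reference, a minimal sketch of how a leading MSLP EOF of the kind compared in Fig. S2 can be computed (area weighting by the square root of the cosine of latitude, leading singular vector of the anomaly matrix); the variable names and domain handling are illustrative assumptions rather than the exact analysis behind the figures.

```python
import numpy as np

def leading_eof(field, lats):
    """Leading EOF of an anomaly field with shape (time, lat, lon).

    Grid cells are weighted by sqrt(cos(latitude)) so that the implied
    covariance matrix is area weighted.
    """
    nt, ny, nx = field.shape
    w = np.sqrt(np.cos(np.deg2rad(lats)))[None, :, None]
    anom = field - field.mean(axis=0)
    X = (anom * w).reshape(nt, ny * nx)
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    eof1 = vt[0].reshape(ny, nx)        # leading spatial pattern (weighted space)
    pc1 = X @ vt[0]                     # leading principal component time series
    var_frac = s[0]**2 / np.sum(s**2)   # fraction of variance explained
    return eof1, pc1, var_frac
```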
a. Sea ice predictability
1) Hemispheric predictability
We now analyze the predictability of Arctic and Antarctic SIA. Figure 3 shows the NRMSE for SIA in both polar regions for the 1 May and 1 November PMEs. The results from the Free PME agree with previously published results from perfect-model studies: significant seasonal SIA predictability in both the Arctic and Antarctica, while SIV shows higher seasonal predictability (see Fig. S3). In the 1 May Free PME, the rapid loss of skill from June to July in Arctic SIA showcases the so-called predictability barrier (e.g., Blanchard-Wrigglesworth et al. 2011; Day et al. 2014b). Interestingly, the loss in predictability in the 1 November Free PME is also not linear with lead time, as a rapid loss in forecast skill in the first two months is followed by a plateau in forecast skill, similar to results in Bushuk et al. (2019). In the Arctic 1 May PMEs, there is significant improvement in forecast skill (lower NRMSE) in the Nudge55 PME relative to the Free PME throughout the forecast period, whereas forecast skill in the Nudge30 PME is not statistically different to the Free PME until the last two months of the forecast (Fig. 3a). The forecast of the summer SIA minimum (September) shows no significant improvement in the Nudge30 relative to Free PME, and a forecast skill improvement of ~25% in the Nudge55 PME relative to the Free PME. Thus, three quarters of the forecast error in the summer SIA minimum forecast is due to local (Arctic) forecast error growth.

Fig. 3. (a),(b) SIA NRMSE for the Arctic and (c),(d) Antarctic for the (left) 1 May PMEs and (right) 1 Nov PMEs. A small filled circle indicates that the NRMSE is statistically different to 1 at the 95% level, and a larger open circle indicates that the Nudge PME NRMSE is statistically different to the Free PME NRMSE.
In the 1 November PMEs (Fig. 3b), the improvement in forecast skill in the Nudge55 PME is more pronounced (forecast skill improvement of ~60%), whereas forecast skill in the Nudge30 is not statistically different to the Free PME for most of the forecast. For Antarctic SIA (Figs. 3c,d), forecast skill in the Nudge30 PME is not significantly different to the Free PME in either 1 May or 1 November PMEs, while the Nudge55 PMEs show a marked improvement in forecast skill relative to the Free PME for both 1 May and 1 November PMEs (forecast skill improvement of ~65%–75%).
2) Regional predictability
We next analyze regional sea ice predictability. We split the Arctic into three regions and the Antarctic into five regions (see Fig. 2). In the Arctic, we define a “North Pacific” region, which includes seas both south and north of the Bering Strait (Bering and Okhotsk Seas to the south, and Beaufort–Chukchi–East Siberian–Laptev Seas to the north). This simplifies our analysis as this region encompasses the full annual evolution of the sea ice edge north and south of the Bering Strait, and displays sea ice variability in all months of the year. We also define a “North Atlantic” region, which includes the East Greenland, Barents, and Kara Seas, and a “Canadian” region, which includes the Labrador–Baffin–Hudson–Canadian Arctic archipelago seas. In Antarctica the five regions, each 72° in longitude, are roughly aligned west to east along the Weddell–south Indian–southwest Pacific–Ross–Amundsen and Bellingshausen (A&B) Seas.
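A minimal sketch of how regional SIA can be aggregated from sea ice concentration within such longitude sectors follows; the variable names, and in particular the sector boundary longitudes, are hypothetical placeholders, since the exact sector edges are not listed here.

```python
import numpy as np

def sector_mask(lons, lon_min, lon_max):
    """Boolean mask for a longitude sector; handles wrapping past 360E."""
    lons = np.mod(lons, 360.0)
    lo, hi = np.mod(lon_min, 360.0), np.mod(lon_max, 360.0)
    return (lons >= lo) & (lons < hi) if lo < hi else (lons >= lo) | (lons < hi)

def regional_sia(sic, cell_area, lons, lon_min, lon_max):
    """Sea ice area (million km^2) inside one longitude sector.

    sic       : (lat, lon) sea ice concentration (0-1)
    cell_area : (lat, lon) grid-cell area in km^2
    lons      : (lat, lon) longitudes in degrees east
    """
    mask = sector_mask(lons, lon_min, lon_max)
    return np.sum(sic * cell_area * mask) / 1e6

# Hypothetical 72-degree-wide Antarctic sectors, west to east
# (boundary longitudes are placeholders, not the study's exact edges)
antarctic_sectors = {
    "Weddell":      (300.0,  12.0),
    "South Indian": ( 12.0,  84.0),
    "SW Pacific":   ( 84.0, 156.0),
    "Ross":         (156.0, 228.0),
    "A&B":          (228.0, 300.0),
}
```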
Figure 4 shows the NRMSE for SIA in Arctic regions in both 1 May and 1 November PMEs. In the 1 May PMEs, we observe a rapid loss of forecast skill in the first two months, followed by a plateau in skill (in the North Atlantic) or even a slight reemergence of skill by the end of summer (in the North Pacific and Canadian regions). The Nudge30 PME offers no improved skill in any region relative to the Free PME, while the Nudge55 PME offers improved skill relative to the Free PME in the North Pacific and Canadian sectors throughout the forecasts; in the North Atlantic, however, forecast skill is only significantly improved in the first three months and not for the September summer minimum. Thus the remote contribution to forecast error growth of the September minimum is regionally confined to the Pacific and Canadian regions (note, however, the low summer SIA and SIA variability in the Canadian region in Fig. 2).

Fig. 4. NRMSE of Arctic regional seas for the Free, Nudge30, and Nudge55 (top) 1 May PMEs and (bottom) 1 Nov PMEs.
In the 1 November PMEs (Fig. 4, bottom row), all three sectors show similar patterns of predictability in the Free PME, with a rapid loss of forecast skill in the first two months followed by a slower loss of skill, or a slight reemergence of skill in the Pacific and North Atlantic regions. In the Nudge30 PME, the Pacific shows significant improvement in skill relative to the Free PME from January onward, the North Atlantic shows improvement in skill over January–March, while the Canadian region shows no improvement in skill at any lead time. In the Nudge55 PME, all three regions show forecast skill improvement over the Free and Nudge30 PMEs, with the Pacific (North Atlantic) showing the largest (smallest) improvement in skill relative to the Free PME, and much greater forecast skill improvement than that offered by the 1 May Nudge55 PME.
Figure 5 shows the NRMSE for SIA in Antarctic regions in both 1 May and 1 November PMEs. In the 1 November (austral summer) Free PME, forecast skill loss with lead time tends to be slightly more linear than in the Arctic (i.e., less evidence of a predictability barrier). The Nudge30 PME only offers short-lived significant improvement in forecast skill relative to the Free PME in the A&B and Ross Seas—elsewhere, forecast skill is not significantly different. The Nudge55 PME shows significantly improved skill in all regions for all lead times, with forecast skill improvement of ~60%.

Fig. 5. As in Fig. 4, but for Antarctic regions. Months when mean SIA approaches zero are left blank (SW Pacific and south Indian in the austral summer).
In the 1 May PMEs (austral winter; Fig. 5, top row), the Free PME shows predictability similar to the 1 November Free PME, although there are slightly more regional differences in predictability: the A&B and south Indian sectors show significant forecast skill throughout the forecasts (seasonal NRMSE of 0.5 in A&B Seas, ~0.7 in Ross Sea), while the southwest Pacific and Weddell sectors lose skill after 4–5 months of forecast lead time. As in the 1 November PMEs, the Nudge30 PME only offers brief improvement in skill relative to the Free PME in the Ross Sea, where forecast skill improvement is ~15% in July–September. Interestingly, in the south Indian and southwest Pacific sectors, the Nudge30 PME shows a faster loss of forecast skill relative to the Free PME. We hypothesize that the model bias that results from nudging introduces an enhanced meridional component to the variability in the SLP field over the south Indian–southwest Pacific seas (see the trough in the first EOF aligned at ~15°W in JJA and ~25°E in DJF in Fig. S2), which results in greater variability of northerly/southerly winds and associated temperature/sea ice responses (warm/cold, respectively) in the region compared to the control ensemble. In all sectors, the Nudge55 PME offers marked improvement over both the Nudge30 and Free PMEs (forecast skill improvement of ~80%).
b. Atmospheric predictability
Since previous studies have suggested that various atmospheric variables (e.g., temperature, winds, SLP) in the polar regions may serve as useful predictors of sea ice variability, we now investigate the predictability of the atmosphere. We begin by showing, in Fig. 6, the RMSE of seasonal-mean SLP for JJA in the 1 May PMEs and for DJF in the 1 November PMEs, together with the background CESM-LENS variability [quantified as the denominator of NRMSE in Eq. (2)].

Fig. 6. RMSE (mb) in PMEs for mean seasonal MSLP in (top two rows) JJA (forecast lead of 2–4 months) and (bottom two rows) DJF (forecast lead of 2–4 months). The inner and outer latitude circles in magenta are the 55° and 30° latitudes, respectively. Stippling indicates the PME RMSE is significantly different to the CESM-LENS values at the 95% level.
During the boreal winter (DJF), atmospheric variability in the Northern Hemisphere is significantly larger than in JJA (Fig. 6, third row). In CESM-LENS, the Arctic and North Atlantic centers of action merge into one center of action, representing variability in the wintertime extension of the Atlantic storm track into the Arctic. The Pacific center of action remains located over the Aleutian low. The patterns of forecast skill in the Free and Nudge30 PMEs are similar to those in JJA but more pronounced: we see progressively reduced (from CESM-LENS to Free PME to Nudge30 PME) RMSE values in the North Pacific (NRMSE ~0.7–0.8 in Free PME, 0.3–0.6 in Nudge30), but mostly unchanged forecast skill over the Arctic and North Atlantic (NRMSE > 0.9 in both PMEs). In the Nudge55 PME, there is significantly lower central Arctic RMSE (unlike in JJA above, and NRMSE ~0.3–0.6), reflecting how forecast error growth of central Arctic MSLP is less “local” in winter than in summer.
In Antarctica, the main center of action of SLP variability in CESM-LENS is over the A&B Seas, collocated with the Amundsen low in both DJF and JJA (Fig. 6, second and fourth rows), but, as in the Northern Hemisphere, it is stronger during winter (JJA). During JJA, there is no significant forecast skill in the Free PME anywhere in Antarctica (NRMSE > 0.9; Fig. S4), while during DJF there is some forecast skill (note the lower RMSE values in Fig. 6 and NRMSE of 0.7) in the A&B Seas. In the Nudge30 PME, the RMSE in JJA is actually higher than that in CESM-LENS over most of Antarctica, indicating increased variability relative to the freely running simulation, rather than the decreased variability expected a priori. This likely stems from the bias that develops in Nudge30, which enhances variability over Antarctica (see Fig. S2). The only exception is over the A&B Seas, where NRMSE is < 0.8. During DJF, RMSE is reduced relative to CESM-LENS around 50°–60°S and along the A&B and Ross Seas (NRMSE 0.7–0.8 there) but unchanged relative to the Free PME over the Antarctic continent and the Weddell–south Indian–southwest Pacific seas (NRMSE > 0.9). In the Nudge55 PMEs, there is very high predictability over the whole Antarctic continent to the South Pole in both JJA (NRMSE 0.1–0.4) and DJF (NRMSE 0.1–0.2).
Next we take a more global outlook on predictability by inspecting the zonal mean of gridded NRMSE (i.e., the NRMSE calculated at each grid cell) of monthly air temperature at a lead time of 3 months in pressure (height)–latitude plots in Fig. 7 (see Fig. S5 for all lead times). Starting with the Free PME, we see that outside the tropics and at all heights, most forecast skill is lost after a one-month lead time in both the summer and winter forecasts. Nevertheless, significant forecast skill remains over the equatorial troposphere, particularly near the surface and toward the top of the troposphere (NRMSE < 0.7 for all lead times). In the Nudge30 PME, predictability is quasi-perfect equatorward of 30° (as expected given the nudging) but drops off rapidly poleward; by lead month two, skill is mostly lost poleward of around 35° (NRMSE > 0.9).
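As a sketch of the gridded-then-zonally-averaged NRMSE shown in these pressure–latitude plots, the function below applies the same pairwise perfect-model formula at every grid cell and then averages over longitude; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def zonal_mean_nrmse(forecasts, sigma_control):
    """NRMSE at each (lev, lat, lon) grid cell, then zonally averaged.

    forecasts     : (member, lev, lat, lon) monthly-mean air temperature
    sigma_control : (lev, lat, lon) control-run standard deviation
    Returns a (lev, lat) array suitable for a pressure-latitude plot.
    (For large grids, loop over member pairs instead to save memory.)
    """
    x = np.asarray(forecasts, dtype=float)
    n = x.shape[0]
    diffs = x[:, None, ...] - x[None, :, ...]            # all member pairs
    mse = np.sum(diffs**2, axis=(0, 1)) / (n * (n - 1))  # i == j terms are zero
    nrmse = np.sqrt(mse) / (np.sqrt(2.0) * sigma_control)
    return nrmse.mean(axis=-1)                           # zonal mean
```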

Fig. 7. Elevation–latitudinal plots of the zonal mean of gridded NRMSE in PMEs for mean monthly air temperature in (top) July (forecast lead 3 months) and (bottom) January (forecast lead 3 months).
In the Nudge55 PMEs, in contrast, significant forecast skill is found in the Antarctic stratosphere at all lead times in both 1 May and 1 November PMEs, and in the Arctic stratosphere in the 1 November PME (note the contrast between the two polar stratospheres in Fig. 7c). Forecast skill tends to be lost at lower heights (higher pressures), and thus the troposphere is less predictable than the stratosphere, yet the polar differences mirror those found for SLP in Fig. 6: the Antarctic troposphere has higher forecast skill than the Arctic troposphere, particularly in their respective summers (cf. the lack of forecast skill in the Arctic troposphere at a 3-month lead time in July with the forecast skill in the Antarctic troposphere at a 3-month lead time in January in Fig. 7). During summer, the lower troposphere over the Arctic shows very little skill (Fig. 7c), consistent with the SLP skill results in Fig. 6. Analyses of T, V, Q, and geopotential height for lead months 2–7 show similar patterns (see Figs. S6–S8).
Overall, these results show a seasonal evolution in the link between the Arctic and the midlatitudes. During the summer months, Arctic predictability is mostly unaffected by midlatitude influence, illustrating large sources of error growth internal to the Arctic. During the winter, Arctic predictability is more influenced by the midlatitudes, particularly aloft and at subpolar latitudes. On the other hand, Antarctic predictability shows a strong link to the midlatitudes in both winter and summer. This contrast between Arctic and Antarctic atmospheric predictability agrees with the contrast in sea ice predictability found above in the comparison between the Nudge55 and the Free PMEs.
c. SST predictability
We now consider the predictability of SSTs. Anomalous SST patterns are the primary source of atmospheric seasonal predictability (e.g., Rowell 1998) and teleconnections, and thus a key question in interpreting our experimental results is the following: Is the modest/negligible amount of remote forcing from the tropics on polar predictability due to a lack of seasonal tropical SST predictability? Figures 8 and 9 show the RMSE and NRMSE, respectively, in seasonal SSTs (forecast lead time 2–4 months) for the three PMEs and CESM-LENS. We see that in CESM-LENS, the main variability in SSTs takes place in the central-east tropical Pacific, an expression of ENSO. Other main regions of variability are the boundary currents, particularly the Kuroshio and the Gulf Stream. In the Free PME, we see high forecast skill in the tropical east Pacific cold tongue, particularly in DJF (NRMSE of ~0.2 or lower). In other tropical ocean areas, NRMSEs are around 0.2–0.4. In contrast, the midlatitude oceans show fairly low forecast skill, with typical NRMSE values of ~0.7, with the exception of regions of deep ocean convection in the subpolar North Atlantic, especially in DJF, and the A&B Seas in the Southern Ocean. In the Nudge30 PME, we see an enhancement of SST forecast skill relative to the Free PME in the North Pacific and North Atlantic Oceans that is more marked in DJF than in JJA. In contrast, Southern Ocean forecast skill is not much different, with the exception of an improvement in forecast skill over the A&B Seas. In the Nudge55 PMEs, ocean SSTs have high forecast skill globally, with the exception of the GIN Seas and the Arctic marginal sea ice zone in JJA. In general, the SST predictability patterns align longitudinally with the sea ice predictability patterns (e.g., higher skill in the Bellingshausen–Amundsen Seas for both SST and sea ice, lower in the Weddell Sea).

Fig. 8. RMSE (in °C) in PMEs for mean seasonal SSTs in (left) JJA (forecast lead of 2–4 months) and (right) DJF (forecast lead of 2–4 months). The green lines are the 30° and 55° latitudes. Stippling indicates the PME RMSE is significantly different to the CESM-LENS values at the 95% level.
Fig. 9. NRMSE in PMEs for mean seasonal SSTs in (left) JJA (forecast lead of 2–4 months) and (right) DJF (forecast lead of 2–4 months). The green lines are the 30° and 55° latitudes. Stippling indicates the PME NRMSE is significant at the 95% level.
d. Assessing model teleconnection biases
How might model biases affect our results in the context of observations? One can hypothesize that biases in the strength and location of simulated teleconnections in a GCM could impact the interpretation of our results: if the model simulates weaker teleconnections, then we may find that predictability is less influenced by remote forcing than if it simulated stronger teleconnections. One way to gain some insight into this issue is by comparing teleconnections in CESM-LENS to observations. Figure 10 shows the correlation between seasonal tropical equatorial Pacific SSTs at 0°, 150°W (approximately the center of the Niño-3.4 domain and the region where SST variability and predictability peak in Figs. 8 and 9) and global SLP anomalies in CESM-LENS (using all ensemble members from 1939 to 2005) and in observations for the period 1950–2016. All data are detrended prior to calculating correlations, and for CESM-LENS we show the average over all individual ensemble members. The most obvious feature is the SLP dipole between the east-central Pacific and the western Pacific–Indian Ocean regions, a defining signature of the Southern Oscillation. To the south, a center of action is present over the A&B Seas (r > 0.4 in CESM-LENS, r > 0.2 in observations), which is slightly stronger in DJF than in JJA, while the SLP field in the remainder of the Southern Ocean is mostly uncorrelated with tropical Pacific SSTs. To the north, the North Pacific and the subtropical North Atlantic are correlated with tropical Pacific SSTs in DJF. The Arctic is mostly uncoupled from tropical Pacific SSTs in CESM-LENS, and only weakly coupled in observations (r ~ 0.2, not significant at the 95% level).
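A minimal sketch of the detrended point-correlation analysis behind Figs. 10–12 follows; the two-sided p-value from scipy's pearsonr stands in for the 95% significance stippling, and the variable names and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def detrend(y):
    """Remove a linear least-squares trend from a 1D time series."""
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def correlation_map(index, field, alpha=0.05):
    """Correlate a detrended index with a detrended field at each grid cell.

    index : (time,) e.g., seasonal SST at a reference point such as 0, 150W
    field : (time, lat, lon) e.g., seasonal SLP anomalies
    Returns the correlation map and a boolean mask of p < alpha (stippling).
    """
    idx = detrend(np.asarray(index, dtype=float))
    nt, ny, nx = field.shape
    r = np.full((ny, nx), np.nan)
    p = np.ones((ny, nx))
    for j in range(ny):
        for i in range(nx):
            r[j, i], p[j, i] = pearsonr(idx, detrend(field[:, j, i]))
    return r, p < alpha
```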

Fig. 10. Correlations in (top) CESM-LENS and (bottom) observations between seasonal SST anomalies in the central equatorial Pacific at 0°, 150°W (marked with a black ×) and seasonal SLP. All data are detrended prior to calculating correlations. Stippling indicates statistically significant correlations at the 95% level.
We next investigate the connection between Arctic SIA and global SSTs. Figures 11a and 11b show the correlation between September Arctic SIA and global SSTs in the preceding JJA in both observations (using 1979–2017 data) and CESM-LENS (averaged over 30 ensemble members, using years 1967–2005 to compare records of equal length with observations). All data are detrended. In observations, there is a modest but statistically significant link to the central-eastern subequatorial Pacific (r > 0.4) and the eastern branch of the Pacific decadal oscillation that is absent in CESM-LENS. However, we note that this teleconnection shows strong internal variability across ensemble members in CESM-LENS, and some members show patterns similar to observations (see Fig. S8). Analyzing tropical SST–global SLP linkages as in Fig. 10, but using a tropical domain centered on the SST region most highly correlated with Arctic SIA in Fig. 11b, reveals stronger sub-Arctic SLP–tropical SST linkages in observations than in CESM-LENS. However, we also note the large amount of internal variability in this teleconnection pattern across ensemble members (see Fig. S9).

Fig. 11. Correlation between September Arctic SIA and global JJA SSTs in CESM-LENS (averaged over 30 ensemble members, 1967–2005) and observations (1979–2017), and correlation between JJA SSTs averaged over the central equatorial Pacific (5°–20°N, 140°–170°W, black box in plots) and global JJA SLPs in CESM-LENS (averaged over 30 ensemble members) and observations over the same years. All data are detrended prior to calculating correlations. Stippling indicates statistically significant correlations at the 95% level.
Further analysis of atmospheric teleconnections using one-point correlation maps of seasonal SLP anomalies at the southern centers of action of the North Atlantic Oscillation (NAO) and Pacific–North American (PNA) modes of variability [locations taken from Wallace and Gutzler (1981)] shows overall good agreement between CESM-LENS and observations in DJF (see Fig. 12), as was also found for the previous-generation NCAR GCM, CCSM4 (Coats et al. 2013).

Fig. 12. One-point correlation maps of seasonal DJF SLP anomalies in (a),(b) CESM-LENS and (c),(d) observations for the southern centers of action of the North Atlantic Oscillation (30°N, 20°W) and the Pacific–North American pattern (20°N, 160°W). Stippling indicates statistically significant correlations at the 95% level.
While it is beyond the scope of this paper to further investigate teleconnection patterns in the model and observations, this preliminary analysis suggests that to first order the model adequately captures the main teleconnection modes. While we note that in observations there is a link between September Arctic SIA and preceding JJA SSTs in the tropical Pacific that is absent in the ensemble-mean analysis of CESM-LENS (Fig. 11), it is unclear whether this represents a model mean-state bias or internal variability in the evolution of this teleconnection pattern, as single members of CESM-LENS show a teleconnection pattern similar to observations.
4. Discussion and conclusions
We have quantified the impact of the tropics and midlatitudes on seasonal polar predictability with a perfect-model experiment using a nudging approach. Overall, our results show a strong seasonality and regional dependence of the remote influence on polar predictability. Forecast skill of the Arctic summer sea ice and atmosphere in CESM is primarily governed by local error growth, while during winter the midlatitudes are a significant source of forecast error (or potential forecast skill, were the midlatitudes predictable at seasonal time scales). The tropics offer at best a modest improvement in forecast skill, mainly in the North Pacific sector and in winter. This result agrees with previous studies (Ye et al. 2018; Jung et al. 2014) that found, using the ECMWF atmospheric model, that tropical nudging offers skill generally confined to the North Pacific and North Atlantic, and greater skill in winter than in summer.
Regionally in the Arctic, we find that forecast skill in the Atlantic sector (Kara–Barents–Greenland Seas) is less remotely forced than that in the Pacific sector, echoing previous RCM-based results (Sein et al. 2014; Döscher et al. 2010). Concerning the seasonal predictability of Arctic summer SIA, we do not find significant improvement in forecast skill when the tropics are nudged. This result suggests that for seasonal forecasting of summer Arctic SIA, a forecast from a fully coupled RCM may not be at a major disadvantage with respect to a forecast from a fully coupled GCM.
In Antarctica, forecast skill is strongly influenced by the midlatitudes year-round (in contrast to the Arctic, where the midlatitude influence is found mostly in the winter). This difference from the Arctic may be affected by the seasonality of the climate mean state. During the boreal summer, the Arctic marginal ice zone (MIZ) that drives SIA variability lies farther poleward (around 75°N) than the Antarctic MIZ does during the austral summer (around 70°S; see Fig. 2), and thus one might expect from a simple geometric argument (distance to the nudging boundary) that the midlatitudes would yield more improved skill in Antarctica in the austral summer than in the Arctic in the boreal summer (cf. Figs. 3a and 3d). By the same argument, when comparing the midlatitude impact on sea ice predictability in summer and winter at each pole, one may expect a larger improvement in predictability relative to the Free PME in winter, as the sea ice edge migrates equatorward toward 55°. However, we find that seasonal differences in atmospheric predictability mirror the seasonal and polar differences in sea ice predictability: in the Arctic, forecast skill of the atmosphere is much more influenced by the midlatitudes in winter than in summer, whereas in Antarctica, the influence of the midlatitudes on forecast skill is strong in both seasons.
As in the Arctic, the tropics have a very weak impact on Antarctic seasonal predictability, confined to the A&B and Ross Seas, that is, the Amundsen low region and its vicinity. This regional signature of tropical influence agrees with previous work that highlights the coupling of this region with the tropics, even if the magnitude of the forecast skill improvement is perhaps weaker than anticipated. While the overall weak tropical–polar predictability link we find (particularly the lack of a summertime tropics–Arctic link) seems to somewhat contradict previous results documenting the coupling between tropical SST trends and polar climate (Ding et al. 2014; Meehl et al. 2016), we note that the different time scales (decadal versus seasonal) are likely crucial (Kohyama and Hartmann 2016). Additionally, it is unclear what role the model drift in the tropically nudged experiment may play. Model drift in published nudging experiments has been noted (Greatbatch et al. 2012) but is generally not discussed in the literature. Finally, we are aware that the findings described here may reflect CESM’s biases in replicating observed tropical–global linkages (Ding et al. 2019). Future work is planned to assess tropical–polar links in a similar modeling framework, both at longer time scales and using a different GCM, to examine whether the modest polar predictability originating in the tropics is a common feature across GCMs.
Acknowledgments
We thank foremost Patrick Callaghan of NCAR, who developed the nudging module for CESM and provided crucial assistance in implementing it, and Jen Kay, Marika Holland, and David Bailey of NCAR’s Polar Climate Working Group for assisting with computing resources. We also thank Patrick Kelly, Brian Mapes, and Xiao Yuan for further assistance with the modeling, and Cecilia Bitz, Aaron Donohoe, Ian Eisenman, Mitch Bushuk, and Francisco Doblas-Reyes for thoughtful discussions. EBW was supported by the Office of Naval Research (N000141812175) and NSF’s Antarctic Program (PLR1643436), and Ding was supported by NSF’s Polar Programs (OPP1744598).
REFERENCES
Blanchard-Wrigglesworth, E., and M. Bushuk, 2019: Robustness of Arctic sea-ice predictability in GCMs. Climate Dyn., 52, 5555–5566, https://doi.org/10.1007/s00382-018-4461-3.
Blanchard-Wrigglesworth, E., C. M. Bitz, and M. M. Holland, 2011: Influence of initial conditions and climate forcing on predicting Arctic sea ice. Geophys. Res. Lett., 38, L18503, https://doi.org/10.1029/2011GL048807.
Bushuk, M., R. Msadek, M. Winton, G. Vecchi, X. Yang, A. Rosati, and R. Gudgel, 2019: Regional Arctic sea-ice prediction: Potential versus operational seasonal forecast skill. Climate Dyn., 52, 2721–2743, https://doi.org/10.1007/s00382-018-4288-y.
Coats, S., J. E. Smerdon, B. I. Cook, and R. Seager, 2013: Stationarity of the tropical Pacific teleconnection to North America in CMIP5/PMIP3 model simulations. Geophys. Res. Lett., 40, 4927–4932, https://doi.org/10.1002/grl.50938.
Collins, M., 2002: Climate predictability on interannual to decadal time scales: The initial value problem. Climate Dyn., 19, 671–692, https://doi.org/10.1007/s00382-002-0254-8.
Day, J., E. Hawkins, and S. Tietsche, 2014a: Will Arctic sea ice thickness initialization improve seasonal forecast skill? Geophys. Res. Lett., 41, 7566–7575, https://doi.org/10.1002/2014GL061694.
Day, J., S. Tietsche, and E. Hawkins, 2014b: Pan-Arctic and regional sea ice predictability: Initialization month dependence. J. Climate, 27, 4371–4390, https://doi.org/10.1175/JCLI-D-13-00614.1.
Ding, Q., J. M. Wallace, D. S. Battisti, E. J. Steig, A. J. Gallant, H.-J. Kim, and L. Geng, 2014: Tropical forcing of the recent rapid Arctic warming in northeastern Canada and Greenland. Nature, 509, 209, https://doi.org/10.1038/nature13260.
Ding, Q., and Coauthors, 2017: Influence of high-latitude atmospheric circulation changes on summertime Arctic sea ice. Nat. Climate Change, 7, 289–295, https://doi.org/10.1038/nclimate3241.
Ding, Q., and Coauthors, 2019: Fingerprints of internal drivers of Arctic sea ice loss in observations and model simulations. Nat. Geosci., 12, 28–33, https://doi.org/10.1038/s41561-018-0256-8.
Döscher, R., K. Wyser, H. M. Meier, M. Qian, and R. Redler, 2010: Quantifying Arctic contributions to climate predictability in a regional coupled ocean-ice-atmosphere model. Climate Dyn., 34, 1157–1176, https://doi.org/10.1007/s00382-009-0567-y.
Ferranti, L., T. Palmer, F. Molteni, and E. Klinker, 1990: Tropical–extratropical interaction associated with the 30–60 day oscillation and its impact on medium and extended range prediction. J. Atmos. Sci., 47, 2177–2199, https://doi.org/10.1175/1520-0469(1990)047<2177:TEIAWT>2.0.CO;2.
Fetterer, F., K. Knowles, W. Meier, and M. Savoie, 2002: Sea ice index (updated 2017). National Snow and Ice Data Center, Boulder, CO, accessed 11 October 2018, http://nsidc.org/data/G02135.html.
Goessling, H. F., S. Tietsche, J. J. Day, E. Hawkins, and T. Jung, 2016: Predictability of the Arctic sea ice edge. Geophys. Res. Lett., 43, 1642–1650, https://doi.org/10.1002/2015GL067232.
Goosse, H., O. Arzel, C. M. Bitz, A. de Montety, and M. Vancoppenolle, 2009: Increased variability of the Arctic summer ice extent in a warmer climate. Geophys. Res. Lett., 36, L23702, https://doi.org/10.1029/2009GL040546.
Greatbatch, R. J., G. Gollan, T. Jung, and T. Kunz, 2012: Factors influencing Northern Hemisphere winter mean atmospheric circulation anomalies during the period 1960/61 to 2001/02. Quart. J. Roy. Meteor. Soc., 138, 1970–1982, https://doi.org/10.1002/qj.1947.
He, S., E. M. Knudsen, D. W. Thompson, and T. Furevik, 2018: Evidence for predictive skill of high-latitude climate due to midsummer sea ice extent anomalies. Geophys. Res. Lett., 45, 9114–9122, https://doi.org/10.1029/2018GL078281.
Holland, M. M., D. A. Bailey, and S. Vavrus, 2011: Inherent sea ice predictability in the rapidly changing Arctic environment of the Community Climate System Model, version 3. Climate Dyn., 36, 1239–1253, https://doi.org/10.1007/s00382-010-0792-4.
Holland, M. M., E. Blanchard-Wrigglesworth, J. Kay, and S. Vavrus, 2013: Initial-value predictability of Antarctic sea ice in the Community Climate System Model 3. Geophys. Res. Lett., 40, 2121–2124, https://doi.org/10.1002/grl.50410.
Hu, C., S. Yang, Q. Wu, Z. Li, J. Chen, K. Deng, T. Zhang, and C. Zhang, 2016: Shifting El Niño inhibits summer Arctic warming and Arctic sea-ice melting over the Canada Basin. Nat. Commun., 7, 11721, https://doi.org/10.1038/ncomms11721.
Hurrell, J. W., and Coauthors, 2013: The Community Earth System Model: A framework for collaborative research. Bull. Amer. Meteor. Soc., 94, 1339–1360, https://doi.org/10.1175/BAMS-D-12-00121.1.
Jung, T., M. A. Kasper, T. Semmler, and S. Serrar, 2014: Arctic influence on subseasonal midlatitude prediction. Geophys. Res. Lett., 41, 3676–3680, https://doi.org/10.1002/2014GL059961.
Jung, T., and Coauthors, 2016: Advancing polar prediction capabilities on daily to seasonal time scales. Bull. Amer. Meteor. Soc., 97, 1631–1647, https://doi.org/10.1175/BAMS-D-14-00246.1.
Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–472, https://doi.org/10.1175/1520-0477(1996)077<0437:TNYRP>2.0.CO;2.
Kapsch, M.-L., R. G. Graversen, and M. Tjernström, 2013: Springtime atmospheric energy transport and the control of Arctic summer sea-ice extent. Nat. Climate Change, 3, 744, https://doi.org/10.1038/nclimate1884.
Kay, J., and Coauthors, 2015: The Community Earth System Model (CESM) large ensemble project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 1333–1349, https://doi.org/10.1175/BAMS-D-13-00255.1.
Knutti, R., D. Masson, and A. Gettelman, 2013: Climate model genealogy: Generation CMIP5 and how we got there. Geophys. Res. Lett., 40, 1194–1199, https://doi.org/10.1002/grl.50256.
Kohyama, T., and D. L. Hartmann, 2016: Antarctic sea ice response to weather and climate modes of variability. J. Climate, 29, 721–741, https://doi.org/10.1175/JCLI-D-15-0301.1.
Latif, M., and Coauthors, 1998: A review of the predictability and prediction of ENSO. J. Geophys. Res., 103, 14 375–14 393, https://doi.org/10.1029/97JC03413.
Meehl, G. A., J. M. Arblaster, C. M. Bitz, C. T. Chung, and H. Teng, 2016: Antarctic sea-ice expansion between 2000 and 2014 driven by tropical Pacific decadal climate variability. Nat. Geosci., 9, 590–595, https://doi.org/10.1038/ngeo2751.
Meehl, G. A., C. T. Chung, J. M. Arblaster, M. M. Holland, and C. M. Bitz, 2018: Tropical decadal variability and the rate of Arctic sea ice decrease. Geophys. Res. Lett., 45, 11 326–11 333, https://doi.org/10.1029/2018GL079989.
Merryfield, W., W.-S. Lee, W. Wang, M. Chen, and A. Kumar, 2013: Multi-system seasonal predictions of Arctic sea ice. Geophys. Res. Lett., 40, 1551–1556, https://doi.org/10.1002/grl.50317.
Mikolajewicz, U., D. V. Sein, D. Jacob, T. Königk, R. Podzun, and T. Semmler, 2005: Simulating Arctic sea ice variability with a coupled regional atmosphere-ocean-sea ice model. Meteor. Z., 14, 793–800, https://doi.org/10.1127/0941-2948/2005/0083.
Msadek, R., G. Vecchi, M. Winton, and R. Gudgel, 2014: Importance of initial conditions in seasonal predictions of Arctic sea ice extent. Geophys. Res. Lett., 41, 5208–5215, https://doi.org/10.1002/2014GL060799.
Ogi, M., K. Yamazaki, and J. M. Wallace, 2010: Influence of winter and summer surface wind anomalies on summer Arctic sea ice extent. Geophys. Res. Lett., 37, L07701, https://doi.org/10.1029/2009GL042356.
Rayner, N. A., D. E. Parker, E. B. Horton, C. K. Folland, L. V. Alexander, D. P. Rowell, E. C. Kent, and A. Kaplan, 2003: Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res., 108, 4407, https://doi.org/10.1029/2002JD002670.
Rinke, A., P. Marbaix, and K. Dethloff, 2004: Internal variability in Arctic regional climate simulations: Case study for the SHEBA year. Climate Res., 27, 197–209, https://doi.org/10.3354/cr027197.
Roach, L. A., C. Horvat, S. M. Dean, and C. M. Bitz, 2018: An emergent sea ice floe size distribution in a global coupled ocean–sea ice model. J. Geophys. Res. Oceans, 123, 4322–4337, https://doi.org/10.1029/2017JC013692.
Rowell, D. P., 1998: Assessing potential seasonal predictability with an ensemble of multidecadal GCM simulations. J. Climate, 11, 109–120, https://doi.org/10.1175/1520-0442(1998)011<0109:APSPWA>2.0.CO;2.
Schröder, D., D. L. Feltham, D. Flocco, and M. Tsamados, 2014: September Arctic sea-ice minimum predicted by spring melt-pond fraction. Nat. Climate Change, 4, 353–357, https://doi.org/10.1038/nclimate2203.
Screen, J. A., 2014: Arctic amplification decreases temperature variance in northern mid-to high-latitudes. Nat. Climate Change, 4, 577–582, https://doi.org/10.1038/nclimate2268.
Sein, D. V., N. V. Koldunov, J. G. Pinto, and W. Cabos, 2014: Sensitivity of simulated regional Arctic climate to the choice of coupled model domain. Tellus, 66A, 23966, https://doi.org/10.3402/tellusa.v66.23966.
Sigmond, M., J. Fyfe, G. Flato, V. Kharin, and W. Merryfield, 2013: Seasonal forecast skill of Arctic sea ice area in a dynamical forecast system. Geophys. Res. Lett., 40, 529–534, https://doi.org/10.1002/grl.50129.
Simpkins, G. R., L. M. Ciasto, D. W. Thompson, and M. H. England, 2012: Seasonal relationships between large-scale climate variability and Antarctic sea ice concentration. J. Climate, 25, 5451–5469, https://doi.org/10.1175/JCLI-D-11-00367.1.
Smith, K. L., L. M. Polvani, and L. B. Tremblay, 2018: The impact of stratospheric circulation extremes on minimum Arctic sea ice extent. J. Climate, 31, 7169–7183, https://doi.org/10.1175/JCLI-D-17-0495.1.
Stammerjohn, S., D. Martinson, R. Smith, X. Yuan, and D. Rind, 2008: Trends in Antarctic annual sea ice retreat and advance and their relation to El Niño–Southern Oscillation and southern annular mode variability. J. Geophys. Res., 113, C03S90, https://doi.org/10.1029/2007JC004269.
Stroeve, J., E. Blanchard-Wrigglesworth, V. Guemas, S. Howell, F. Massonnet, and S. Tietsche, 2015: Improving predictions of Arctic sea ice extent. Eos, Trans. Amer. Geophys. Union, 96, https://doi.org/10.1029/2015EO031431.
Stuecker, M. F., C. M. Bitz, and K. C. Armour, 2017: Conditions leading to the unprecedented low Antarctic sea ice extent during the 2016 austral spring season. Geophys. Res. Lett., 44, 9008–9019, https://doi.org/10.1002/2017GL074691.
Tietsche, S., E. Hawkins, and J. Day, 2016: Atmospheric and oceanic contributions to irreducible forecast uncertainty of Arctic surface climate. J. Climate, 29, 331–346, https://doi.org/10.1175/JCLI-D-15-0421.1.
Wallace, J. M., and D. S. Gutzler, 1981: Teleconnections in the geopotential height field during the Northern Hemisphere winter. Mon. Wea. Rev., 109, 784–812, https://doi.org/10.1175/1520-0493(1981)109<0784:TITGHF>2.0.CO;2.
Wang, W., M. Chen, and A. Kumar, 2013: Seasonal prediction of Arctic sea ice extent from a coupled dynamical forecast system. Mon. Wea. Rev., 141, 1375–1394, https://doi.org/10.1175/MWR-D-12-00057.1.
Wayand, N., C. Bitz, and E. Blanchard-Wrigglesworth, 2019: A year-round subseasonal-to-seasonal sea ice prediction portal. Geophys. Res. Lett., 46, 3298–3307, https://doi.org/10.1029/2018GL081565.
Williams, J., B. Tremblay, R. Newton, and R. Allard, 2016: Dynamic preconditioning of the minimum September sea-ice extent. J. Climate, 29, 5879–5891, https://doi.org/10.1175/JCLI-D-15-0515.1.
Woodgate, R. A., T. Weingartner, and R. Lindsay, 2010: The 2007 Bering Strait oceanic heat flux and anomalous Arctic sea-ice retreat. Geophys. Res. Lett., 37, L01602, https://doi.org/10.1029/2009GL041621.
Ye, K., T. Jung, and T. Semmler, 2018: The influences of the Arctic troposphere on the midlatitude climate variability and the recent Eurasian cooling. J. Geophys. Res. Atmos., 123, 10 162–10 184, https://doi.org/10.1029/2018JD028980.
Yuan, X., 2004: ENSO-related impacts on Antarctic sea ice: A synthesis of phenomenon and mechanisms. Antarct. Sci., 16, 415–425, https://doi.org/10.1017/S0954102004002238.
Yuan, X., D. Chen, C. Li, L. Wang, and W. Wang, 2016: Arctic sea ice seasonal prediction by a linear Markov model. J. Climate, 29, 8151–8173, https://doi.org/10.1175/JCLI-D-15-0858.1.
Zhang, Y.-F., C. M. Bitz, J. L. Anderson, N. Collins, J. Hendricks, T. Hoar, K. Raeder, and F. Massonnet, 2018: Insights on sea ice data assimilation from perfect model observing system simulation experiments. J. Climate, 31, 5911–5926, https://doi.org/10.1175/JCLI-D-17-0904.1.