Modeled Interannual Variability of Arctic Sea Ice Cover is within Observational Uncertainty

Christopher Wyburn-Powell, Department of Atmospheric and Oceanic Sciences, and Institute of Arctic and Alpine Research, University of Colorado Boulder, Boulder, Colorado

Alexandra Jahn, Department of Atmospheric and Oceanic Sciences, and Institute of Arctic and Alpine Research, University of Colorado Boulder, Boulder, Colorado

Mark R. England, Department of Earth and Planetary Science, University of California, Santa Cruz, Santa Cruz, California


Abstract

Internal variability is the dominant cause of projection uncertainty of Arctic sea ice in the short and medium term. However, it is difficult to determine the realism of simulated internal variability in climate models, as observations only provide one possible realization while climate models can provide numerous different realizations. To enable a robust assessment of simulated internal variability of Arctic sea ice, we use a resampling technique to build synthetic ensembles for both observations and climate models, focusing on interannual variability, which is the dominant time scale of Arctic sea ice internal variability. We assess the realism of the interannual variability of Arctic sea ice cover as simulated by six models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) that provide large ensembles compared to four observational datasets. We augment the standard definition of model and observational consistency by representing the full distribution of resamplings, analogous to the distribution of variability that could have randomly occurred. We find that modeled interannual variability typically lies within observational uncertainty. The three models with the smallest mean state biases are the only ones consistent in the pan-Arctic for all months, but no model is consistent for all regions and seasons. Hence, choosing the right model for a given task as well as using internal variability as an additional metric to assess sea ice simulations is important. The fact that CMIP5 large ensembles broadly simulate interannual variability consistent within observational uncertainty gives confidence in the internal projection uncertainty for Arctic sea ice based on these models.

Significance Statement

The purpose of this study is to evaluate the historical simulated internal variability of Arctic sea ice in climate models. Determining model realism is important to have confidence in the projected sea ice evolution from these models, but so far only mean state and trends are commonly assessed metrics. Here we assess internal variability with a focus on the interannual variability, which is the dominant time scale for internal variability. We find that, in general, models agree well with observations, but as no model is within observational uncertainty for all months and locations, choosing the right model for a given task is crucial. Further refinement of internal variability realism assessments will require reduced observational uncertainty.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Christopher Wyburn-Powell, chwy8767@colorado.edu

1. Introduction

Arctic sea ice has declined precipitously since 1979, at a faster rate than at any time over the last millennium (Brennan and Hakim 2022), with less than half the summer area and one-quarter the summer volume remaining (Schweiger et al. 2011; Notz and Stroeve 2018). This observed decline is due to both anthropogenic climate change and internal variability, which can act to amplify or dampen the trend from external forcing alone (Kay et al. 2011; Notz and Marotzke 2012). The relative contribution of internal variability to the observed September sea ice area decline remains uncertain but has been estimated at 43%–53% (Stroeve et al. 2007; Kay et al. 2011; Ding et al. 2019). Internal variability also influences future sea ice projections, leading to large internal variability uncertainty, especially for the next few decades (Kay et al. 2011; Jahn et al. 2016; Bonan et al. 2021). As internal variability is such a large contributor to the observed and projected changes in Arctic sea ice cover, but global climate models (GCMs) differ in the magnitude of their simulated sea ice internal variability (Olonscheck and Notz 2017), it is imperative that we understand how realistically models simulate internal variability.

Internal variability of Arctic sea ice has been shown to be spatially heterogeneous (England et al. 2019) and to act on multiple time scales from annual to multidecadal (Zhang and Wallace 2015; Ding et al. 2017, 2019; Brennan et al. 2020). Over the historical period, internal variability has been the dominant cause of sea ice decline in many regions, most notably parts of the Kara Sea in summer and the Barents Sea in winter (Li et al. 2017; England et al. 2019; Dörr et al. 2021). Sea ice loss in recent decades has been most rapid and expansive in the summer, particularly in the shelf seas, which have transitioned from mainly ice-covered to ice-free for more of the summer, facilitating high internal variability (Onarheim et al. 2018; Mioduszewski et al. 2019). These areas of rapid and unpredictable change coincide with the most impactful areas for a range of stakeholders from shipping and oil interests to indigenous peoples and biodiversity (Kovacs et al. 2011; Petrick et al. 2017; Christensen and Nilsson 2017; Chen et al. 2020).

The established way to estimate internal variability in GCMs is to use multiple realizations of single-model initial-condition large ensembles (SMILEs) or long constant-forcing model runs to assess the ensemble spread or standard deviation (Olonscheck and Notz 2017; Lehner et al. 2020; Maher et al. 2020). SMILEs have successfully been used to study internal variability in the context of polar temperatures (England 2021), precipitation trends (Dai and Bloecker 2019), and regional trends (McKinnon and Deser 2018; Hu et al. 2019). However, such analysis cannot be done on observations, because observations provide only one realization of reality over a limited record length. It is this single realization of reality over a relatively short period of time that has previously prevented direct assessment of internal variability of Arctic sea ice in models compared to observations. Hence, previous sea ice model assessments have focused on trends (e.g., Swart et al. 2015; Rosenblum and Eisenman 2017), sensitivity to warming (e.g., Winton 2011; Niederdrenk and Notz 2018), and mean state (e.g., Davy and Outten 2020). Furthermore, even if we were able to precisely disentangle internal variability from the forced response in observations, comparisons with GCMs would still be challenging because we do not know where the one realization seen in the observations falls within the probability distribution obtained from a model ensemble (Notz 2015).

Here, we provide the first direct comparison of internal variability of Arctic sea ice from a suite of SMILEs from phase 5 of the Coupled Model Intercomparison Project (CMIP5) with observations, by using a statistical technique to construct a “synthetic ensemble” of Arctic sea ice variability, following McKinnon et al. (2017). Synthetic ensembles have been used for several climate variability questions such as for sea surface temperature (Chan et al. 2020), climate extremes (Deser et al. 2020a), precipitation (McKinnon and Deser 2021), ocean chlorophyll concentration (Elsworth et al. 2021), and Antarctic sea ice trends (Chemke and Polvani 2020). Here we present the first use of a synthetic ensemble for studying Arctic sea ice, specifically to assess the realism of internal variability on interannual time scales. Using the synthetic ensemble method, we are able to show that generally the simulated interannual variability fits within the observational uncertainty derived from different datasets, but that there are considerable seasonal and spatial differences, and that some models perform better than others for a given task. We also show that interannual variability makes up approximately three-quarters of the total Arctic sea ice internal variability, and hence the majority of the sea ice internal variability over the past 42 years.

2. Data sources

a. Observational data

We primarily use two observational datasets for sea ice concentration (SIC), the National Snow and Ice Data Center (NSIDC) Climate Data Record (CDR) version 4 (Meier et al. 2021) and the Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST1) (Rayner et al. 2003). To further test the sensitivity of our results to the observational dataset used, we also utilize datasets derived from the satellite algorithms NASA Team (NT) (Cavalieri et al. 1984) and NASA Bootstrap (BT) (Comiso 1986). Together, these datasets are a representative sample of interpretations of past sea ice conditions, with both the mean state and variability differing between the datasets due to observational uncertainties (Comiso et al. 2017; Kern et al. 2019). Sea ice area (SIA) was chosen over sea ice extent (SIE) because the variability of SIA is less dependent on the satellite algorithm and is intrinsically more precise, making it better suited for comparing internal variability between models (Notz 2014). All analysis is performed using monthly data for 1979–2020. Missing data in the NSIDC datasets and discontinuities in the HadISST1 dataset were filled using the same month's data from a different year rather than by interpolation, to avoid unrealistic SIC values (see Table S1 in the online supplemental material for the specific replacements used).
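
To make the SIA versus SIE distinction concrete, the following is a minimal sketch (not the authors' code) of how the two quantities are typically computed from a gridded SIC field and the corresponding grid-cell areas; the variable names and the conventional 15% extent threshold are standard practice rather than details taken from this paper.

```python
import numpy as np

def sea_ice_area(sic, cell_area):
    """SIA: concentration-weighted sum of grid-cell areas (sic as a fraction, 0-1)."""
    return np.nansum(sic * cell_area)

def sea_ice_extent(sic, cell_area, threshold=0.15):
    """SIE: total area of cells whose concentration exceeds the (conventional) 15% threshold."""
    return np.nansum(np.where(sic >= threshold, cell_area, 0.0))

# Illustrative example on a small synthetic grid
rng = np.random.default_rng(0)
sic = rng.uniform(0.0, 1.0, size=(50, 50))   # fractional sea ice concentration
cell_area = np.full((50, 50), 625.0)         # e.g., 25 km x 25 km cells, in km^2
print(sea_ice_area(sic, cell_area), sea_ice_extent(sic, cell_area))
```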

b. Model data

Six models from the Climate Variability and Predictability Program (CLIVAR) Multi-Model Large Ensemble Archive (Deser et al. 2020b) are utilized in this analysis, as detailed in Table 1. EC-Earth was excluded from this analysis because no SIC output was available. All models are CMIP5-class and use historical and representative concentration pathway (RCP) 8.5 forcing, the high-emissions CMIP5 scenario. The models from the Multi-Model Large Ensemble Archive are diverse in their mean state and trends, spanning nearly the full range of CMIP5 sea ice projections (see Fig. 1 in Bonan et al. 2021). In winter, model mean-state biases are typically smaller in absolute and relative terms than in summer (see Table S2). The notable outliers in summer are CanESM2, with the largest negative mean-state bias of −54% in September, and CSIRO-Mk3.6, an extreme positive outlier in all seasons, with +83% in September. Although GFDL CM3 is not as large an outlier in mean state in September, its SIA loss over the period 1979–2020 is by far the most rapid. The six models range in ensemble size from 20 to 100 members (see Table 1). We present results for all members of the SMILEs to assess each GCM's ability to realistically simulate the observed interannual variability. We also provide subsampled results, scaled to 20 members, the size of the smallest large ensemble, for model intercomparison with our consistency metric. Subsampling is discussed in more detail in section 3c.

Table 1. Models used in this analysis from the CLIVAR Multi-Model Large Ensemble Archive (Deser et al. 2020b).

3. Methods

a. Resampling technique

We estimate interannual variability in a single model member or observational time series by assuming the forced response is represented by an ordinary least squares linear trend. This assumption is deemed appropriate for 1979–2020, but may not be applicable for time periods extending further back (England et al. 2019; England 2021), and allows us to follow the methodology of McKinnon et al. (2017). Anomalies from this linear trend are therefore considered to be largely due to interannual variability alone. Typically the ensemble mean is a more accurate measure of the forced response, but detrending using the individual member produces similar results (as discussed in sections 3d and 4c) and is chosen here so that the same methodology can be applied to models and observations, as detailed in section 3d.

By using this technique we can calculate a consistent metric of interannual variability to directly compare model members and observations. We consider all months for the pan-Arctic and present spatial results for September and March, the months of minimum and maximum SIA, respectively. We resample the anomalies from the linear trend 10 000 times for SIA and 1000 times for each SIC grid box, with replacement, using a 2-yr bootstrap block size. This can be considered analogous to shuffling independent anomalies to produce a range of alternative scenarios that would have been equally likely to occur, allowing us to calculate metrics of interannual variability for a representative sample of all possible scenarios (see Fig. 1). As suggested by McKinnon et al. (2017) and McKinnon and Deser (2018), we retain spatial coherence by resampling in the time dimension for all grid boxes at once. A total of 10 000 resamplings was chosen for the pan-Arctic to increase the reliability of the consistency classifications, whereas 1000 was deemed sufficient for the spatial analysis, as each grid box has a smaller impact on the results if its classification were to change upon rerunning the experiment. A 2-yr block size is chosen because the normalized autocorrelation frequently exceeds 0.4 at a lag of 1 year, and the drop-off in autocorrelation between lags of 1 and 2 years is much larger than that between lags of 2 and 3 years (not shown), both spatially and for the pan-Arctic time series. Resampling with a 1- or 2-yr block size leads to almost identical results (not shown).
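
As a concrete illustration of this step, the sketch below detrends a single pan-Arctic SIA series with an OLS fit and then block-bootstraps the residuals in 2-yr blocks, returning the distribution of standard deviations analogous to Fig. 1e. It is a minimal reimplementation under stated assumptions (synthetic data, simple block selection), not the released analysis code; see the data availability statement for the latter.

```python
import numpy as np

def detrend_linear(y):
    """Remove an OLS linear trend; the residuals are treated as interannual anomalies."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def block_resample(anom, n_resamples=10_000, block=2, rng=None):
    """Resample anomalies with replacement in blocks (default 2 yr), preserving length.
    For gridded data, the same time indices would be applied to every grid box at once
    to retain spatial coherence."""
    gen = np.random.default_rng(rng)
    n = len(anom)
    n_blocks = int(np.ceil(n / block))
    out = np.empty((n_resamples, n))
    for i in range(n_resamples):
        starts = gen.integers(0, n - block + 1, size=n_blocks)
        idx = (starts[:, None] + np.arange(block)).ravel()[:n]
        out[i] = anom[idx]
    return out

# Synthetic 42-yr September SIA series (10^6 km^2), 1979-2020, for illustration only
rng = np.random.default_rng(1)
sia = np.linspace(7.0, 4.0, 42) + rng.normal(0.0, 0.5, 42)
sd_dist = block_resample(detrend_linear(sia)).std(axis=1)   # analogous to Fig. 1e
print(sd_dist.mean(), sd_dist.std())                        # mu and sigma of the distribution
```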

Fig. 1. Resampling methodology, applied to the observed September SIA. (a) Observed sea ice area from CDR (dots) with linear trend (gray dashed line). (b) Anomalies from the linear trend. (c),(d) Two different random resamplings of the anomalies in (b), color coded to match the year of the anomaly. (e) Distribution of the standard deviation with respect to time for all 10 000 resamplings. The printed statistics use σ for standard deviation. In (e) the red vertical line represents the standard deviation of the original data, gray indicates the distribution of standard deviations for the 10 000 resamplings, and the black line indicates a normal distribution.

We focus our analysis on the standard deviation of the sea ice state over the 42-yr period 1979–2020, not the trends, as we want to assess the realism of the models' simulated interannual variability rather than the realism of the simulated trends [see Swart et al. (2015) for a discussion of simulated trends compared to observations]. The standard deviation with respect to time is computed either for the 10 000 pan-Arctic SIA resamplings or for the 1000 SIC resamplings in each grid cell. To represent the distribution of these resamplings or ensemble members we use the standard deviation (σ) and mean (μ). Here, σ can be considered analogous to the range of interannual variability that could have occurred, given the underlying data; μ is analogous to the typical interannual variability represented in the resamplings.

To directly compare interannual variability between models and observations we define three measures of variability as follows, where σLE is internal variability in SMILEs and both σmem and σobs are the interannual variability within a synthetic ensemble (a computational sketch follows the definitions):

  • σLE and μLE—Standard deviation and mean of standard deviations within a single large ensemble, without resampling, an established measure of the full range of internal variability.

  • σmem and μmem—Standard deviation and mean of the standard deviations of all resamplings of a single model member. The resampling process for a given ensemble member is equivalent to that of the observations in Fig. 1. The median member’s value across all members of the SMILE is denoted σ¯mem and μ¯mem.

  • σobs and μobs—Standard deviation and mean of the standard deviations of all resamplings of the single realization of the observational dataset. These metrics relate to Fig. 1 as the standard deviation and mean of the distribution in Fig. 1e.
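
The three measures above can be summarized in code. The sketch below reuses the detrend_linear and block_resample helpers from the previous sketch and assumes a members array of shape (n_members, n_years) holding pan-Arctic SIA for one calendar month; it is an illustrative outline of the definitions, not the authors' implementation.

```python
import numpy as np

def resampling_sd_dist(series, n_resamples=10_000):
    """Std dev (over time) of every block-bootstrap resampling of one detrended series."""
    return block_resample(detrend_linear(series), n_resamples).std(axis=1)

def smile_metrics(members):
    """sigma_LE / mu_LE across members (no resampling) and per-member sigma_mem / mu_mem."""
    per_member_sd = np.array([detrend_linear(m).std() for m in members])
    sigma_LE, mu_LE = per_member_sd.std(), per_member_sd.mean()
    dists = np.array([resampling_sd_dist(m) for m in members])   # (n_members, n_resamples)
    sigma_mem, mu_mem = dists.std(axis=1), dists.mean(axis=1)    # one value per member
    return sigma_LE, mu_LE, sigma_mem, mu_mem

# For a single observational record, the same per-series recipe yields sigma_obs and mu_obs:
# d = resampling_sd_dist(obs_sia); sigma_obs, mu_obs = d.std(), d.mean()
# The median of sigma_mem / mu_mem across members gives the sigma-bar_mem / mu-bar_mem values.
```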

b. Consistency

To assess the realism of simulated internal variability, we utilize a consistency metric to provide a binary classification as to whether the modeled variability is within or outside the range of observational uncertainty. For sea ice analysis in the past, consistency has typically been defined by at least one member of a large ensemble overlapping with observations (e.g., Notz 2015; Swart et al. 2015; Jahn 2018). However, this is a relatively low bar for models to reach. Other more elaborate consistency methods have been applied to other aspects of the climate system (e.g., Santer et al. 2008) and applied to Arctic sea ice by Stroeve et al. (2012). However, the methodology of Santer et al. (2008) bases consistency assessments on trends rather than on the internal variability independent of the trends, as is the goal here. Hence, here we use resampling and define consistency by comparing distributions, as this allows us to test whether the resampled distributions overlap. Comparing distributions is a more stringent test of consistency than comparing the single values for each ensemble member or observational dataset that would be available without resampling. Further augmentation of this binary classification is achieved by comparing SMILEs with four diverse observational datasets independently, adding the category of “consistent within observational uncertainty.” We use this three-category consistency classification rather than a significance or probability value (e.g., from a Student’s t test), as both the resampled average variability (μmem) and standard deviation of variability (σmem) are positively skewed across members. Nonetheless, we find that a 95% confidence interval is in fact similar to our consistency classification, but classifies fewer instances of inconsistency in the pan-Arctic than our method.

Applying this consistency metric to Arctic SIA, each SMILE realization or observational dataset has a different value of interannual variability for each of the 10 000 resamplings. These 10 000 resamplings from a single member or observational time series are approximately normally distributed and as such can be thought of as probability distribution functions (PDFs) (see Fig. 2). The width of the PDFs shows the distribution of the 10 000 resamplings, indicating the range of possible interannual variabilities (proportional to σmem and σobs). The location on the horizontal axis indicates the average interannual variability (μmem and μobs). For models and observations to be considered “consistent” in the following, we require their means (their position on the horizontal axis in Fig. 2) and their standard deviations (the height on the vertical axis) to overlap such that at least one member is greater than the lowest observational dataset and one member is lower than the highest observational dataset, for each of the σ and μ metrics independently. Average SIC differences do not preclude a consistent classification, as variability may be equal between a SMILE and observational datasets but about different means. However, due to the zero-bound nature of SIC, if a mean state differs so much that SMILE members have at least some sea ice where there is no sea ice in the observational datasets, we exclude those regions from the analysis rather than classifying them as inconsistent. We do this because the focus of our analysis is on assessing the realism of actual sea ice variability, so we only compare regions where there is variability in both models and observations.
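
A minimal sketch of this three-way classification for one month and one metric (σ or μ) follows; in the paper the test is applied to σ and μ independently and the results are then combined, but the per-metric logic is the part sketched here. The numbers in the example are purely illustrative.

```python
import numpy as np

def classify(member_vals, obs_vals):
    """Three-way consistency for one metric: member_vals are per-member values
    (e.g., sigma_mem across a SMILE); obs_vals holds one value per observational
    dataset (e.g., sigma_obs for CDR, HadISST1, NT, and BT)."""
    lo, hi = member_vals.min(), member_vals.max()
    labels = []
    for v in obs_vals:
        if v < lo:
            labels.append("too high")        # every member exceeds this dataset
        elif v > hi:
            labels.append("too low")         # every member falls below this dataset
        else:
            labels.append("consistent")      # dataset value lies within the member range
    if all(lbl == "consistent" for lbl in labels):
        return "consistent"
    if len(set(labels)) > 1:                 # datasets disagree on the classification
        return "consistent within observational uncertainty"
    return "inconsistent (" + labels[0] + ")"

# Illustrative values only
print(classify(np.array([0.30, 0.42, 0.55]), np.array([0.35, 0.50, 0.28, 0.60])))
```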

Fig. 2. Distribution of pan-Arctic SIA standard deviations across members, resamplings, and observations. Probability distribution functions (PDFs) for detrended standard deviation of pan-Arctic SIA, for (a)–(f) March and (g)–(l) September. PDFs are produced from the mean (μ) and standard deviation (σ) across the 10 000 resamplings. Each individual resampled member (σmem) is plotted with a thin line colored according to the legend, the average resampled member (σ¯mem) is colored similarly with a thick line, and the resampled observations (σobs) are in red for the four datasets according to the legend. Percentiles noted on the figure are the single values of σobs or μobs for the observational datasets relative to the distribution of σmem and μmem across members.

c. Ensemble size

We have included SMILEs with as few as 20 ensemble members in our analysis, as the standard deviation between members (σLE), representing the full range of internal variability, increases only marginally beyond approximately 8–12 members, compared to the full range of 20–100 members (see Fig. S2). This leads us to consider SMILEs of at least 12 members sufficient to generate enough diversity between realizations to capture most aspects of internal variability. Assessing different time periods or other aspects of the climate system may require considerably more members (Milinski et al. 2020). With increasing ensemble size, the values of the minimum and maximum σmem diverge, making it easier for a SMILE to overlap with observations (see Fig. S1). Our primary goal is to assess the individual realism of SMILEs when compared with observations, using as much information from each model as is available. Hence, we present results without subsampling the members to a consistent ensemble size. However, as others may be interested in a direct comparison of the interannual variability in CMIP5 SMILEs, we provide subsampled results in the online supplemental material, where consistency is standardized to 20 members, the size of the smallest SMILE, in the pan-Arctic (Fig. S5) and spatially (Fig. S6).
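
One plausible way to standardize a larger ensemble to 20 members, shown below, is to draw many random 20-member subsets and record how often the consistency classification holds. This reuses the classify helper sketched above; the repeated-subsetting procedure is an assumption for illustration, as the exact subsampling protocol is not spelled out in this section.

```python
import numpy as np

def subsampled_consistency(member_vals, obs_vals, k=20, n_draws=1000, rng=0):
    """Fraction of random k-member subsets classified as fully consistent.
    Assumed illustrative procedure; reuses classify() from the sketch above."""
    gen = np.random.default_rng(rng)
    labels = [
        classify(gen.choice(member_vals, size=k, replace=False), obs_vals)
        for _ in range(n_draws)
    ]
    return np.mean([lbl == "consistent" for lbl in labels])
```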

d. Detrending

The ensemble mean of a SMILE is considered a good representation of the “forced response” of the model to the changing climate (Frankcombe et al. 2018). However, observations have only one realization, and hence the observed forced trend must be computed from that single realization. Therefore, in our analysis we use the individual members' linear trends over the period 1979–2020 as a representation of the forced response, to enable the same methodology to be applied to observations and models for direct comparisons. The SMILEs provide the perfect place to test the impact of this choice: we find that linear detrending, rather than removing the ensemble mean, results in only a marginal decrease in variability (an 8% reduction for σmem and 11% for σLE), yielding a very similar ratio (see Fig. S11).

Applying linear detrending largely removes low-frequency variability. We reached this conclusion by also detrending ensemble members and observations with a 2-yr fifth-order low-pass Butterworth filter (Roberts and Roberts 1978), which explicitly removes low-frequency variability, and obtaining almost identical consistency results to those based on a simple linear trend (see Fig. S7 in comparison to Fig. 8). This low-pass detrending removes variability on time scales longer than 2 years, the period beyond which autocorrelation in the sea ice is negligible. The good agreement between the linearly detrended and the low-pass filtered data suggests that both anomaly calculation methods effectively isolate interannual variability. The variability in our resampled anomalies of individual SMILE members (σmem) captures approximately three-quarters of the internal variability across SMILE realizations without resampling (σLE), as discussed further in section 4c. This enables us to conclude that our detrending and resampling analysis primarily assesses interannual variability, and that this is the dominant time scale of internal variability for Arctic sea ice for the period 1979–2020.
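
For reference, one way to implement the low-pass detrending described above is sketched below with SciPy's Butterworth filter. The paper does not spell out the sampling or cutoff convention it used, so the sketch assumes a full monthly series and a 2-yr cutoff period; it is an illustration, not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_detrend(series, cutoff_years=2.0, samples_per_year=12, order=5):
    """Subtract the low-pass-filtered (periods longer than the cutoff) part of a series,
    leaving the interannual and higher-frequency anomalies."""
    nyquist = samples_per_year / 2.0                 # cycles per year
    wn = (1.0 / cutoff_years) / nyquist              # normalized cutoff, must satisfy 0 < wn < 1
    b, a = butter(order, wn, btype="low")
    smooth = filtfilt(b, a, series)                  # zero-phase (forward-backward) filtering
    return series - smooth

# Illustrative monthly SIA-like record, 1979-2020 (synthetic numbers)
rng = np.random.default_rng(2)
months = np.arange(42 * 12)
sia = 8.0 - 0.006 * months + 3.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0.0, 0.4, months.size)
anomalies = lowpass_detrend(sia)
```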

In the spatial analysis we obtain a linear trend for each grid cell, using the same detrending method as for the pan-Arctic. While we find some isolated instances of grid cells where the linear SIC trend exceeds 100% or falls below 0%, the differences in consistency are extremely small if a different detrending method is used, such as a 2-yr low-pass filter (see Fig. S7) or trends capped to physical bounds (not shown). Hence, the detrending method does not affect the conclusions drawn from the analysis.

e. Time periods

The time period considered is the observational period 1979–2020, focused on the seasonal extremes of March and September for the spatial analysis; 1979–2020 is chosen for observations due to the availability of high-quality spatial data from 1979 onward, which is particularly important for assessing interannual sea ice variability. We found that shifting the time period used for the models to better match the observed mean sea ice state yielded negligible differences spatially and minimally affected pan-Arctic results for shifts of a few years to a decade. When matching the observed mean state required adjustments of many decades, the changes in the results were larger. However, in some instances a model did not have any time period in the whole historical and future simulations when its mean state matched the observed mean state. Furthermore, we want to assess the realism of the interannual variability as simulated, to complement previous model assessments of trends and mean state that were done over the same periods in models and observations (e.g., Swart et al. 2015; Notz and SIMIP Community 2020). Hence, although internal variability has been shown to be sensitive to the mean state (Goosse et al. 2009; Jahn et al. 2016; Olonscheck and Notz 2017; Massonnet et al. 2018), and some models have more linear SIA and SIC declines than others over 1979–2020, we find that the choice of exact period analyzed did not materially impact our results.

The use of a 42-yr time period is dictated by the length of the observational record, which raises the question of whether such a period is sufficient for our analyses. To answer this question, different time period lengths were assessed within the models (see Fig. 3). Time periods longer than approximately 20 years yield similar σmem/σobs ratios, which gives us confidence that our results for this metric are representative of a broad range of time periods. Similarly, the ratio σmem/σLE changes rapidly for short time periods but becomes relatively stable for time periods of at least a few decades. To confirm this, we conducted a similar time period analysis for the period 1953–2020 using low-pass filtered SIA. This more clearly indicates the stabilization of the ratio of σmem to σLE at approximately 75%, independent of the length of the time period beyond approximately 30 years (see Fig. S3). Spatially, when we compare the shorter 32-yr time periods 1979–2010 and 1989–2020 to the full time period of 1979–2020, we find small consistency differences between the time periods for some regions, but these differences are not substantial enough to alter our main conclusions (see Fig. S4).

Fig. 3. Influence of the length of the time period on the standard deviation of pan-Arctic SIA. The standard deviation with respect to time is shown for time periods between 6 years and the maximum length of a linear trend in SIA, bootstrapped 1000 times. Thick lines show the median ensemble member; shading shows ±1 standard deviation. (a),(b) The ratio of standard deviation across resamplings (σmem) to standard deviation across members (σLE) over a subset of the time periods (a) 1965–2066 for March and (b) 1970–2040 for September. (c),(d) The ratio of standard deviation across resamplings (σmem) to standard deviation across resampled observations (σobs) in the HadISST1 dataset for the period 1979–2020 in (c) March and (d) September.

4. Results

a. Resampled variability in models and observations

Resampling the observations and SMILE models, we find that the variability of models is generally similar to observations, but with considerable seasonal and regional variability. The variability in both models (σmem) and observations (σobs) shows distinct seasonality in the pan-Arctic, peaking in the autumn with the exception of CSIRO-Mk3.6 [see Fig. 4 and shown for average variability (μ) in Fig. S8]. In spring we find larger variation between different realizations of the same model than between model averages. This highlights the sensitivity of interannual variability to realization, and why we assess realism based on consistency rather than comparison between the median SMILE member and observations (see section 3b). The results of this consistency assessment are discussed further in section 4b.

Fig. 4. Seasonality of resampled variability in ensemble members and observations for pan-Arctic sea ice area. The distribution of standard deviations (σmem) across ensemble members is shown for each model and month as box-and-whisker charts, where whiskers show the full range of ensemble members, boxes show the interquartile range, and gray bars indicate the median member. Values of resampled variability in observations (σobs) are shown as horizontal lines for each of the four datasets.

Observations have substantial uncertainties that impact the value of observational interannual variability (σobs). Hence, the choice of which dataset to use for comparison with models can affect whether observations fall within the large ensemble range, both for the pan-Arctic and spatially (see Figs. 4 and 6). Furthermore, the uncertainties vary seasonally, with the largest relative uncertainty of pan-Arctic observational variability in the winter and spring (see Fig. 5). Hence, it is easier for models to fall within the observational uncertainty in the winter and spring than in the summer and autumn. For most months, we find that the ensemble median variability (σ¯mem; gray bars in Fig. 4) of the majority of models is similar to or higher than the observed variability (σobs; in red). However, as we do not know how typical the observations are, we cannot use these differences to diagnose model biases.

Fig. 5. Resampled variability of pan-Arctic sea ice area for the four observational datasets, showing (a),(c) absolute values and (b),(d) percentage uncertainty, calculated as the range of σobs divided by the mean of σobs.

Fig. 6. Resampled modeled and observed variability of September sea ice concentration. (top three rows) Standard deviation of resamplings for the six models (σmem) for the maximum, median, and minimum member for each grid cell. (bottom) Standard deviation of resamplings for the four observational datasets (σobs). The color bar applies to all subplots on this figure. The same analysis for March is shown in Fig. S9.

Spatially, there is considerable difference in the locations of maximum variability between models and the observational datasets in September (see Fig. 6). We find large-magnitude differences throughout the ice-covered region between different models and when comparing models with observations. Despite these large differences in the ensemble medians between models, we find that the range between members for a given model is considerably larger in most instances. Again, this draws attention to the differences in interannual variability between realizations. In comparison to September, the location and magnitude of the highest variability in March is more similar between different models, with the range between members being very large for the ice edge region (see Fig. S9). Observational uncertainty is also highly variable between regions; for example, NT exhibits much higher variability in the central Arctic in September than the other datasets (see Fig. 6). When we combine both the spread of model simulations across realizations and the spread of interpretations of the observational record, we find broad agreement between models and observations. This is true both in the pan-Arctic and spatially in their representation of Arctic sea ice interannual variability.

b. Consistency of models and observations

When utilizing the range of observational datasets for the pan-Arctic, we find model consistency for a majority of the time (57%) across models and months (see Fig. 7i). Models consistent within observational uncertainty account for 33% of months, far greater than the 10% of months identified as inconsistent. It is important to note that these proportions relate to the specific six models we analyzed, which capture the full spread of the CMIP5 sea ice simulations (Bonan et al. 2021). Nonetheless, the common pattern is for GCMs to be predominantly consistent within observational uncertainty. By our definition of consistency, all models except CSIRO-Mk3.6 and GFDL CM3 are consistent in September for all observational datasets. In the spring, when observational uncertainty is largest, we find that all models are consistent within observational uncertainty, and in April and May all models are consistent with all observational datasets. Looking across all months, we find that only MPI ESM1 is unambiguously consistent with all observational datasets, while CESM1 and GFDL ESM2M are consistent but not for all observational datasets. CanESM2, CSIRO-Mk3.6, and GFDL CM3 (the models with the largest mean-state bias) are the only models with inconsistent classifications beyond observational uncertainty. The added stringency gained by using both metrics is demonstrated by CanESM2 and GFDL CM3: both are classified as consistent in all months for σ, but when μ is also considered, each has two months with inconsistencies.

Fig. 7. Consistency between models and observations in pan-Arctic SIA. White indicates consistency between models and all observational datasets, while reds and blues indicate inconsistency in at least one metric. Specifically, dark blue indicates the model is inconsistent with observations, as all members are too low, while dark red indicates inconsistency due to all members being too high. (c),(f),(i) Two metrics are combined. Here, light blue means one of the metrics classifies the model as too low while the other metric is consistent, and light red indicates that the model is too high in one metric but consistent in the other metric. There are no instances of too-high and too-low classifications for a given month by the different metrics. (g)–(i) All observational products are combined. Here, black indicates disagreement in classification between the observational datasets, indicating consistency within observational uncertainty.

When considering consistency spatially, each grid cell can be considered to have a distribution of PDFs similar to Fig. 2 and thus can be categorized in the same way. Consistency in σ and μ is highly correlated but with some differences, indicating the benefit of using both metrics (areas of light blue and light red in Fig. 8). As noted earlier, we focus on the seasonal minimum and maximum sea ice area in September and March, respectively, and present a consistency classification only where both the model and observations exhibit nonzero sea ice.

Fig. 8. Spatial consistency of interannual variability between large-ensemble members and observations. Grid cells where a large ensemble has at least one member whose variability overlaps with that of the resampled observed SIC are shown in white, indicating consistency. Regions where the classification differs between the maximum and minimum observational datasets are shaded black, indicating consistency within observational uncertainty. Areas without sea ice, either in the model or observations, are shaded beige. Areas shaded in red and blue indicate inconsistency in at least one metric, using the same color scheme as Fig. 7.

Similarly to the pan-Arctic, we find no areas where the σ and μ metrics produce different signs of inconsistency. With the exception of CSIRO-Mk3.6, the shelf and marginal seas in September are broadly consistent within observational uncertainty in all models, with CESM1 and GFDL ESM2M performing the best. CSIRO-Mk3.6 shows the largest inconsistencies in March, with an underestimation of variability in the Barents Sea. All other models simulate consistent variability in the Barents Sea, where atypically rapid SIC decline has occurred (Li et al. 2017). Regions of both too-high and too-low variability occur for MPI ESM1 in September, yet this model is consistent for September in the pan-Arctic, indicating that these regions counteract each other for SIA. For March the models are more dissimilar than in September, with no regions of over- or underestimation of interannual variability common to all models. Large portions of the central Arctic Ocean have very little observed or modeled variability in March, due to the 100% bound on SIC. This means that small absolute biases in the modeled interannual variability can cause an inconsistent classification (see Fig. S9). With our consistency classification we conclude that more models simulate realistic interannual variability in September than in March. However, even models that perform well in some regions in September or March generally do a poorer job in the other month, indicating that the skill of a certain model in simulating interannual variability is highly seasonally and regionally dependent.

c. Internal variability captured by resampling versus ensemble spread

Our best estimate of the full range of internal variability, on high- and low-frequency time scales, is from SMILEs; here we use the standard deviation between detrended members (σLE) to represent it. As we consider the resampled standard deviation of SMILE members and observations to be representative of interannual variability and not the full range of internal variability, we would expect the ratio σmem/σLE to be less than one. For all seasons, when looking at pan-Arctic SIA, the interannual variability represented by the median standard deviation across resamplings (σ¯mem) is less than the internal variability simulated by multiple realizations without resampling (σLE), with an annual average ratio of 75.9% across models (Fig. 9). This ratio is robust irrespective of the detrending method, with averages of 74.4% and 82.4% when the ensemble mean or a 2-yr low-pass filter, respectively, is used for detrending (see Fig. S11).

This ratio of three-quarters interannual variability and one-quarter lower-frequency variability also holds for different time period lengths, as discussed in section 3e, and is relatively stable for any given 42-yr time period between 1950–91 and 2050–91. Hence, we expect interannual variability to remain the dominant portion of internal variability for the near future. The general underestimation of the resampled variability, compared with the benchmark of the large ensemble spread, is in agreement with previous applications of this methodology to surface temperature, precipitation, and sea level pressure (McKinnon et al. 2017; McKinnon and Deser 2018). When considering the difference between σLE and σmem spatially, we find the largest underestimations along the ice edge, but in general the pan-Arctic signal is replicated homogeneously across the Arctic (see Fig. S10).

Fig. 9. Seasonality of the ratio of internal variability across SMILEs and interannual variability of resampled members for pan-Arctic sea ice area. Lines show the ratio of the standard deviation of the median resampled member to the standard deviation across members without resampling (σ¯mem to σLE); shading shows the interquartile range of the ratios for all members.

5. Discussion

Sea ice poses unique challenges in assessing internal variability: a short time period of high-quality observations, physical bounds of 0%–100%, and changes in variability as the mean state changes. Despite this, we were able to apply the synthetic ensemble method to Arctic sea ice as used in McKinnon et al. (2017) and McKinnon and Deser (2018) for temperature, precipitation, and sea level pressure. Similarly to previous research, we found that resampling leads to an underestimation of the full range of internal variability captured by a large ensemble, both in the pan-Arctic (where σ¯mem ≈ 0.76σLE; see Fig. 9) and also locally across the Arctic Ocean (Fig. S10). This agrees with the expectation that low-frequency variability is not fully captured by the resampling (McKinnon and Deser 2018). Hence our analysis primarily assesses the interannual component of internal variability. Interestingly, this proportion of three-quarters of the internal variability being due to interannual variability matches closely with the 75% contribution from atmospheric temperature fluctuations to Arctic sea ice variability found by Olonscheck et al. (2019) via a “decoupling” methodology. Both of these independent analyses hence suggest that Arctic sea ice interannual variability is largely unpredictable.

Our analysis assumes that a given anomaly is equally likely to have occurred in 1979 or 2020. This is a reasonable assumption, despite the fact that variability has been shown to increase as sea ice extent decreases (Goosse et al. 2009; Jahn et al. 2016; Olonscheck and Notz 2017; Massonnet et al. 2018), as we showed that neither the length of the period considered (Fig. 3) nor the period itself (Fig. S12) substantially changes the results. However, as the Arctic approaches seasonally ice-free conditions, an “equally likely” assumption will no longer be valid. For example, it would not be appropriate to assume that a September SIA negative anomaly of one million square kilometers (as occurred in 2007) would be equally likely to occur when the mean state in September is practically zero in most models.

All of the SMILEs, except CSIRO-Mk3.6, capture the seasonal cycle of σmem and μmem, with the highest values in the summer. However, the magnitude of observational uncertainty also needs to be taken into account, as it determines how stringent the consistency classifications are. Observational uncertainty is largest in the winter for the pan-Arctic (see Fig. 5), and therefore it is easier for models to be consistent during this part of the year. Spatially, we find the largest differences in variability between observational datasets in the central Arctic during September (see Fig. 6). Nevertheless, we still find that most models simulate too-high variability in this region in September, and it is only the extreme variability of NT compared with the other observational datasets that allows a “consistent within observational uncertainty” classification for most models (see Fig. 8). Consensus regarding which observational dataset is the most realistic for these areas would be required before determining which models have the better representation of variability in the high SIC regions.

As we have shown that almost all models simulate consistent members across seasons, we can say that most of the SMILE models are realistic in their simulation of historical interannual variability. The realism of internal variability is a complementary assessment to analyses of the mean state, sensitivity to warming, and trends (Swart et al. 2015; Rosenblum and Eisenman 2017; Winton 2011; Niederdrenk and Notz 2018; Davy and Outten 2020). Some of these metrics are interrelated, but each provides part of the picture for a full model assessment of Arctic sea ice. We show that the CMIP5 models with inconsistent months or large regions of inconsistency are those with the largest mean-state biases, but even these models are consistent for several months of the year in the pan-Arctic and for most regions in March and September. This suggests that avoiding mean-state biases is important for correctly simulating the evolution of the Arctic sea ice cover [see Massonnet et al. (2018)], but models can have moderately large mean-state biases and still simulate realistic sea ice interannual variability. Furthermore, as we find that most CMIP5 SMILE models agree with observations in terms of their interannual variability for the pan-Arctic in September, the more than two decades of prediction uncertainty for an ice-free Arctic attributed to internal variability in climate models (Notz 2015; Jahn et al. 2016) is likely realistic. However, no SMILE model performs well in all months and regions. If the focus is on a single season or region, though, a CMIP5 SMILE model can be found whose interannual variability is consistent with observations. This is true even for hotspots of internal variability such as the Barents Sea in winter and the shelf seas in summer (England et al. 2019; Bonan et al. 2021), showing the robustness of the consistency classification.

6. Conclusions

In this study, we showed that simulated interannual variability of CMIP5 large ensemble models is typically within observational uncertainty, by generating a synthetic ensemble of Arctic sea ice variability and using a binary classification of consistency that considers the full distribution of resamplings to aid the assessment of model realism. This analysis method considers approximately three-quarters of Arctic sea ice internal variability, on the dominant interannual time scale for the period 1979–2020. Sea ice variability is another metric that augments the realism assessment of GCMs in the context of Arctic sea ice beyond the typical mean state and trend consistency and the assessment of sea ice sensitivity (Swart et al. 2015; Rosenblum and Eisenman 2017; Winton 2011; Niederdrenk and Notz 2018; Davy and Outten 2020).

We showed that all models are able to simulate the seasonal cycle of interannual variability with peaks in the summer, except CSIRO-Mk3.6, which has by far the largest mean state biases (see Table S2), caused by aerosol issues (Uotila et al. 2013). We demonstrate that all modeled interannual variability is within observational uncertainty, except for CanESM2 in January and November, GFDL CM3 in August and November, and CSIRO-Mk3.6 in August–October for the pan-Arctic. Except for areas of low absolute variability in the central Arctic Ocean, there are no inconsistencies that are common across all six models we assessed. Spatially, we find the models underestimate interannual variability for most regions in March, and in September most models overestimate variability in the central Arctic. The marginal seas, which have high absolute variability, are generally realistically simulated, although our assessment is limited to where both models and observations have sea ice. No model simulated the spatial interannual variability in both March and September without inconsistencies, but most models simulated at least one of the two months realistically. CESM1 and GFDL ESM2M simulate September spatial variability very well, with very few areas of inconsistency, including the highly variable shelf seas. In March, MPI ESM1 performs best, with only the Siberian coast displaying too-high variability.

In summary, in this first direct comparison of interannual variability between observations and models, we have shown that estimates of interannual variability from models are largely consistent with observations. However, model skill varies by month and region, highlighting that the best model to use for a study varies based on the context. To be able to assess the impact of the full range of internal variability, including the low-frequency variability (McKinnon and Deser 2018), first requires an improved understanding of the drivers of low-frequency variability on Arctic sea ice. Generally, the fact that the simulated interannual variability of most CMIP5 large ensembles agrees quite well with historical observations, especially in September, increases trust in the internal variability uncertainty of Arctic sea ice projections.

Acknowledgments.

This work was supported by the National Science Foundation under Grant 1847398. We would also like to acknowledge high-performance computing on Cheyenne (https://www.doi.org/10.5065/D6RX99HX) provided by NCAR’s Computational and Information Systems Laboratory, sponsored by the National Science Foundation.

Data availability statement.

All code required to replicate this study has been made open-access via Zenodo at https://www.doi.org/10.5281/zenodo.6687725. All data used in the analysis are already freely available. The CLIVAR Large Ensemble Archive can be obtained from the NCAR Climate Data Gateway (https://www.earthsystemgrid.org/dataset/ucar.cgd.ccsm4.CLIVAR_LE.html). The NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration (version 4) is available from https://doi.org/10.7265/efmz-2t65. The Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST) is available from https://www.metoffice.gov.uk/hadobs/hadisst/index.html. Additionally, the summary statistics from the resampled synthetic ensemble (μLE, σLE, μmem, σmem, μobs, and σobs) can be accessed from the Arctic Data Center at https://doi.org/10.18739/A2H98ZF3T.

REFERENCES

  • Bonan, D. B., F. Lehner, and M. M. Holland, 2021: Partitioning uncertainty in projections of Arctic sea ice. Environ. Res. Lett., 16, 044002, https://doi.org/10.1088/1748-9326/abe0ec.
  • Brennan, M. K., and G. J. Hakim, 2022: Reconstructing Arctic sea ice over the common era using data assimilation. J. Climate, 35, 1231–1247, https://doi.org/10.1175/JCLI-D-21-0099.1.
  • Brennan, M. K., G. J. Hakim, and E. Blanchard-Wrigglesworth, 2020: Arctic sea-ice variability during the instrumental era. Geophys. Res. Lett., 47, e2019GL086843, https://doi.org/10.1029/2019GL086843.
  • Cavalieri, D. J., P. Gloersen, and W. J. Campbell, 1984: Determination of sea ice parameters with the Nimbus 7 SMMR. J. Geophys. Res., 89, 5355–5369, https://doi.org/10.1029/JD089iD04p05355.
  • Chan, D., A. Cobb, L. R. Zeppetello, D. S. Battisti, and P. Huybers, 2020: Summertime temperature variability increases with local warming in midlatitude regions. Geophys. Res. Lett., 47, e2020GL087624, https://doi.org/10.1029/2020GL087624.
  • Chemke, R., and L. M. Polvani, 2020: Using multiple large ensembles to elucidate the discrepancy between the 1979–2019 modeled and observed Antarctic sea ice trends. Geophys. Res. Lett., 47, e2020GL088339, https://doi.org/10.1029/2020GL088339.
  • Chen, J., and Coauthors, 2020: Changes in sea ice and future accessibility along the Arctic Northeast Passage. Global Planet. Change, 195, 103319, https://doi.org/10.1016/j.gloplacha.2020.103319.
  • Christensen, M., and A. E. Nilsson, 2017: Arctic sea ice and the communication of climate change. Pop. Commun., 15, 249–268, https://doi.org/10.1080/15405702.2017.1376064.
  • Comiso, J. C., 1986: Characteristics of Arctic winter sea ice from satellite multispectral microwave observations. J. Geophys. Res., 91, 975–994, https://doi.org/10.1029/JC091iC01p00975.
  • Comiso, J. C., W. N. Meier, and R. Gersten, 2017: Variability and trends in the Arctic Sea ice cover: Results from different techniques. J. Geophys. Res. Oceans, 122, 6883–6900, https://doi.org/10.1002/2017JC012768.
  • Dai, A., and C. E. Bloecker, 2019: Impacts of internal variability on temperature and precipitation trends in large ensemble simulations by two climate models. Climate Dyn., 52, 289–306, https://doi.org/10.1007/s00382-018-4132-4.
  • Davy, R., and S. Outten, 2020: The Arctic surface climate in CMIP6: Status and developments since CMIP5. J. Climate, 33, 8047–8068, https://doi.org/10.1175/JCLI-D-19-0990.1.
  • Deser, C., and Coauthors, 2020a: Insights from Earth system model initial-condition large ensembles and future prospects. Nat. Climate Change, 10, 277–286, https://doi.org/10.1038/s41558-020-0731-2.
  • Deser, C., and Coauthors, 2020b: Insights from Earth system model initial-condition large ensembles and future prospects. Nat. Climate Change, 10, 277–286, https://doi.org/10.1038/s41558-020-0731-2.
  • Ding, Q., and Coauthors, 2017: Influence of high-latitude atmospheric circulation changes on summertime Arctic sea ice. Nat. Climate Change, 7, 289–295, https://doi.org/10.1038/nclimate3241.
  • Ding, Q., and Coauthors, 2019: Fingerprints of internal drivers of Arctic sea ice loss in observations and model simulations. Nat. Geosci., 12, 28–33, https://doi.org/10.1038/s41561-018-0256-8.
  • Dörr, J., M. Årthun, T. Eldevik, and E. Madonna, 2021: Mechanisms of regional winter sea-ice variability in a warming Arctic. J. Climate, 34, 8635–8653, https://doi.org/10.1175/JCLI-D-21-0149.1.
  • Elsworth, G. W., N. S. Lovenduski, and K. A. McKinnon, 2021: Alternate history: A synthetic ensemble of ocean chlorophyll concentrations. Global Biogeochem. Cycles, 35, e2020GB006924, https://doi.org/10.1029/2020GB006924.
  • England, M. R., 2021: Are multi-decadal fluctuations in Arctic and Antarctic surface temperatures a forced response to anthropogenic emissions or part of internal climate variability? Geophys. Res. Lett., 48, e2020GL090631, https://doi.org/10.1029/2020GL090631.
  • England, M. R., A. Jahn, and L. Polvani, 2019: Nonuniform contribution of internal variability to recent Arctic sea ice loss. J. Climate, 32, 4039–4053, https://doi.org/10.1175/JCLI-D-18-0864.1.
  • Frankcombe, L. M., M. H. England, J. B. Kajtar, M. E. Mann, and B. A. Steinman, 2018: On the choice of ensemble mean for estimating the forced signal in the presence of internal variability. J. Climate, 31, 5681–5693, https://doi.org/10.1175/JCLI-D-17-0662.1.
  • Goosse, H., O. Arzel, C. M. Bitz, A. De Montety, and M. Vancoppenolle, 2009: Increased variability of the Arctic summer ice extent in a warmer climate. Geophys. Res. Lett., 36, L23702, https://doi.org/10.1029/2009GL040546.
  • Hu, K., G. Huang, and S. P. Xie, 2019: Assessing the internal variability in multi-decadal trends of summer surface air temperature over East Asia with a large ensemble of GCM simulations. Climate Dyn., 52, 6229–6242, https://doi.org/10.1007/s00382-018-4503-x.
  • Jahn, A., 2018: Reduced probability of ice-free summers for 1.5°C compared to 2°C warming. Nat. Climate Change, 8, 409–413, https://doi.org/10.1038/s41558-018-0127-8.
  • Jahn, A., J. E. Kay, M. M. Holland, and D. M. Hall, 2016: How predictable is the timing of a summer ice-free Arctic? Geophys. Res. Lett., 43, 9113–9120, https://doi.org/10.1002/2016GL070067.
  • Jeffrey, S., L. Rotstayn, M. Collier, S. Dravitzki, C. Hamalainen, C. Moeseneder, K. Wong, and J. Syktus, 2013: Australia's CMIP5 submission using the CSIRO-Mk3.6 model. Aust. Meteor. Oceanogr. J., 63 (1), 1–13, https://doi.org/10.22499/2.6301.001.
  • Kay, J. E., M. M. Holland, and A. Jahn, 2011: Inter-annual to multi-decadal Arctic sea ice extent trends in a warming world. Geophys. Res. Lett., 38, L15708, https://doi.org/10.1029/2011GL048008.
  • Kay, J. E., and Coauthors, 2015: The Community Earth System Model (CESM) large ensemble project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 1333–1349, https://doi.org/10.1175/BAMS-D-13-00255.1.
  • Kern, S., T. Lavergne, D. Notz, L. Toudal Pedersen, R. Tage Tonboe, R. Saldo, and A. MacDonald Sørensen, 2019: Satellite passive microwave sea-ice concentration data set intercomparison: Closed ice and ship-based observations. Cryosphere, 13, 3261–3307, https://doi.org/10.5194/tc-13-3261-2019.
  • Kirchmeier-Young, M. C., F. W. Zwiers, and N. P. Gillett, 2017: Attribution of extreme events in Arctic sea ice extent. J. Climate, 30, 553–571, https://doi.org/10.1175/JCLI-D-16-0412.1.

    • Search Google Scholar
    • Export Citation
  • Kovacs, K. M., C. Lydersen, J. E. Overland, and S. E. Moore, 2011: Impacts of changing sea-ice conditions on Arctic marine mammals. Mar. Biodivers., 41, 181194, https://doi.org/10.1007/s12526-010-0061-0.

    • Search Google Scholar
    • Export Citation
  • Lehner, F., C. Deser, N. Maher, J. Marotzke, E. M. Fischer, L. Brunner, R. Knutti, and E. Hawkins, 2020: Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. Earth Syst. Dyn., 11, 491508, https://doi.org/10.5194/esd-11-491-2020.

    • Search Google Scholar
    • Export Citation
  • Li, D., R. Zhang, and T. R. Knutson, 2017: On the discrepancy between observed and CMIP5 multi-model simulated Barents Sea winter sea ice decline. Nat. Commun., 8, 14991, https://doi.org/10.1038/ncomms14991.

    • Search Google Scholar
    • Export Citation
  • Maher, N., and Coauthors, 2019: The Max Planck Institute Grand Ensemble: Enabling the exploration of climate system variability. J. Adv. Model. Earth Syst., 11, 20502069, https://doi.org/10.1029/2019MS001639.

    • Search Google Scholar
    • Export Citation
  • Maher, N., F. Lehner, and J. Marotzke, 2020: Quantifying the role of internal variability in the temperature we expect to observe in the coming decades. Environ. Res. Lett., 15, 054014, https://doi.org/10.1088/1748-9326/ab7d02.

    • Search Google Scholar
    • Export Citation
  • Massonnet, F., M. Vancoppenolle, H. Goosse, D. Docquier, T. Fichefet, and E. Blanchard-Wrigglesworth, 2018: Arctic sea-ice change tied to its mean state through thermodynamic processes. Nat. Climate Change, 8, 599603, https://doi.org/10.1038/s41558-018-0204-z.

    • Search Google Scholar
    • Export Citation
  • McKinnon, K. A., and C. Deser, 2018: Internal variability and regional climate trends in an observational large ensemble. J. Climate, 31, 67836802, https://doi.org/10.1175/JCLI-D-17-0901.1.

    • Search Google Scholar
    • Export Citation
  • McKinnon, K. A., and C. Deser, 2021: The inherent uncertainty of precipitation variability, trends, and extremes due to internal variability, with implications for western U.S. water resources. J. Climate, 34, 96059622, https://doi.org/10.1175/JCLI-D-21-0251.1.

    • Search Google Scholar
    • Export Citation
  • McKinnon, K. A., A. Poppick, E. Dunn-Sigouin, and C. Deser, 2017: An “observational large ensemble” to compare observed and modeled temperature trend uncertainty due to internal variability. J. Climate, 30, 75857598, https://doi.org/10.1175/JCLI-D-16-0905.1.

    • Search Google Scholar
    • Export Citation
  • Meier, W. N., F. Fetterer, A. Windnagel, and J. Stewart, 2021: NOAA/NSIDC climate data record of passive microwave sea ice concentration, version 4. Tech. Rep., National Snow and Ice Data Center, 44 pp., https://doi.org/10.7265/efmz-2t65.

  • Milinski, S., N. Maher, and D. Olonscheck, 2020: How large does a large ensemble need to be? Earth Syst. Dyn., 11, 885901, https://doi.org/10.5194/esd-11-885-2020.

    • Search Google Scholar
    • Export Citation
  • Mioduszewski, J. R., S. Vavrus, M. Wang, M. Holland, and L. Landrum, 2019: Past and future interannual variability in Arctic sea ice in coupled climate models. Cryosphere, 13, 113124, https://doi.org/10.5194/tc-13-113-2019.

    • Search Google Scholar
    • Export Citation
  • Niederdrenk, A. L., and D. Notz, 2018: Arctic sea ice in a 1.5°C warmer world. Geophys. Res. Lett., 45, 19631971, https://doi.org/10.1002/2017GL076159.

    • Search Google Scholar
    • Export Citation
  • Notz, D., 2014: Sea-ice extent and its trend provide limited metrics of model performance. Cryosphere, 8, 229243, https://doi.org/10.5194/tc-8-229-2014.

    • Search Google Scholar
    • Export Citation
  • Notz, D., 2015: How well must climate models agree with observations? Philos. Trans. Roy. Soc., 373A, 20140164, https://doi.org/10.1098/rsta.2014.0164.

    • Search Google Scholar
    • Export Citation
  • Notz, D., and J. Marotzke, 2012: Observations reveal external driver for Arctic sea-ice retreat. Geophys. Res. Lett., 39, L08502, https://doi.org/10.1029/2012GL051094.

    • Search Google Scholar
    • Export Citation
  • Notz, D., and J. Stroeve, 2018: The trajectory towards a seasonally ice-free Arctic ocean. Curr. Climate Change Rep., 4, 407416, https://doi.org/10.1007/s40641-018-0113-2.

    • Search Google Scholar
    • Export Citation
  • Notz, D., and SIMIP Community, 2020: Arctic sea ice in CMIP6. Geophys. Res. Lett., 47, e2019GL086749, https://doi.org/10.1029/2019GL086749.

    • Search Google Scholar
    • Export Citation
  • Olonscheck, D., and D. Notz, 2017: Consistently estimating internal climate variability from climate model simulations. J. Climate, 30, 95559573, https://doi.org/10.1175/JCLI-D-16-0428.1.

    • Search Google Scholar
    • Export Citation
  • Olonscheck, D., T. Mauritsen, and D. Notz, 2019: Arctic sea-ice variability is primarily driven by atmospheric temperature fluctuations. Nat. Geosci., 12, 430434, https://doi.org/10.1038/s41561-019-0363-1.

    • Search Google Scholar
    • Export Citation
  • Onarheim, I. H., T. Eldevik, L. H. Smedsrud, and J. C. Stroeve, 2018: Seasonal and regional manifestation of Arctic sea ice loss. J. Climate, 31, 49174932, https://doi.org/10.1175/JCLI-D-17-0427.1.

    • Search Google Scholar
    • Export Citation
  • Petrick, S., K. Riemann-Campe, S. Hoog, C. Growitsch, H. Schwind, R. Gerdes, and K. Rehdanz, 2017: Climate change, future Arctic sea ice, and the competitiveness of European Arctic offshore oil and gas production on world markets. Ambio, 46, 410422, https://doi.org/10.1007/s13280-017-0957-z.

    • Search Google Scholar
    • Export Citation
  • Rayner, N. A., D. E. Parker, E. B. Horton, C. K. Folland, L. V. Alexander, D. P. Rowell, E. C. Kent, and A. Kaplan, 2003: Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. Atmos., 108, 4407, https://doi.org/10.1029/2002JD002670.

    • Search Google Scholar
    • Export Citation
  • Roberts, J., and T. D. Roberts, 1978: Use of the Butterworth low-pass filter for oceanographic data. J. Geophys. Res. Oceans, 83, 55105514, https://doi.org/10.1029/JC083iC11p05510.

    • Search Google Scholar
    • Export Citation
  • Rodgers, K. B., J. Lin, and T. L. Frölicher, 2015: Emergence of multiple ocean ecosystem drivers in a large ensemble suite with an Earth system model. Biogeosciences, 12, 33013320, https://doi.org/10.5194/bg-12-3301-2015.

    • Search Google Scholar
    • Export Citation
  • Rosenblum, E., and I. Eisenman, 2017: Sea ice trends in climate models only accurate in runs with biased global warming. J. Climate, 30, 62656278, https://doi.org/10.1175/JCLI-D-16-0455.1.

    • Search Google Scholar
    • Export Citation
  • Santer, B. D., and Coauthors, 2008: Consistency of modelled and observed temperature trends in the tropical troposphere. Int. J. Climatol., 28, 17031722, https://doi.org/10.1002/joc.1756.

    • Search Google Scholar
    • Export Citation
  • Schweiger, A., R. Lindsay, J. Zhang, M. Steele, H. Stern, and R. Kwok, 2011: Uncertainty in modeled Arctic sea ice volume. J. Geophys. Res. Oceans, 116, C00D06, https://doi.org/10.1029/2011JC007084.

  • Stroeve, J. C., M. M. Holland, W. Meier, T. Scambos, and M. Serreze, 2007: Arctic sea ice decline: Faster than forecast. Geophys. Res. Lett., 34, L09501, https://doi.org/10.1029/2007GL029703.

  • Stroeve, J. C., V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W. N. Meier, 2012: Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations. Geophys. Res. Lett., 39, L16502, https://doi.org/10.1029/2012GL052676.

  • Sun, L., M. Alexander, and C. Deser, 2018: Evolution of the global coupled climate response to Arctic sea ice loss during 1990–2090 and its contribution to climate change. J. Climate, 31, 7823–7843, https://doi.org/10.1175/JCLI-D-18-0134.1.

  • Swart, N. C., J. C. Fyfe, E. Hawkins, J. E. Kay, and A. Jahn, 2015: Influence of internal variability on Arctic sea-ice trends. Nat. Climate Change, 5, 86–89, https://doi.org/10.1038/nclimate2483.

  • Uotila, P., S. O’Farrell, S. J. Marsland, and D. Bi, 2013: The sea-ice performance of the Australian climate models participating in the CMIP5. Aust. Meteor. Oceanogr. J., 63, 121–143, https://doi.org/10.22499/2.6301.008.

  • Winton, M., 2011: Do climate models underestimate the sensitivity of Northern Hemisphere sea ice cover? J. Climate, 24, 3924–3934, https://doi.org/10.1175/2011JCLI4146.1.

  • Zhang, R., and J. M. Wallace, 2015: Mechanisms for low-frequency variability of summer Arctic sea ice extent. Proc. Natl. Acad. Sci. USA, 112, 4570–4575, https://doi.org/10.1073/pnas.1422296112.
