This paper studies regional climate variability for the time period 1900–2013 using parsimonious stochastic models. Instrumental data records on 5° × 5°, 2° × 2°, and equal-area grids are examined. A long-range dependent (LRD) stochastic process is used as a simplified description of the multitude of response times in the climate system. Fitting a linear trend to the global mean surface temperature (GMST) implies a warming of 0.08 K decade−1, which is highly significant under an LRD null hypothesis (p < 10−4). The regional trends are distributed around the global mean trend, while the fluctuation levels increase when going from global to regional scales. The temperature fluctuations of the tropical oceans are observed to be strongly influenced by El Niño–Southern Oscillation (ENSO) and are, therefore, more consistent with autoregressive processes of order 1 [AR(1)]. A likelihood-ratio test is used to systematically determine the best null model [AR(1) or LRD]. About 80% of the regional warming trends are found to be significant (at the 5% significance level).
Given the extensive evidence of global warming, there is now increased attention to whether trends can be detected on local and/or regional scales and to the spatiotemporal pattern of climate variability. Stott et al. (2010) and Knutson et al. (2013) have presented such analyses using control runs of climate models as a null hypothesis for trend detection. An alternative and complementary approach, which we pursue in this paper, is to use stochastic models. The main objective is to test the hypothesis of a linear trend versus the null hypothesis of a "stationary climate." That is, we assume that a temperature time series Y(t) can be modeled as the superposition of a deterministic trend signal m(t) and a stationary, stochastic process (climate noise) X(t):
Y(t) = m(t) + X(t), (1)

with m(t) = a0 + a1t. The choice of a linear trend is mainly used to test the hypothesis that a stationary climate can explain the last 110 years of warming, without assuming the correctness of this model (Bloomfield 1992).
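As a concrete illustration, the decomposition in Eq. (1) can be reproduced on synthetic data: superpose a linear trend on white noise and recover the slope by ordinary least squares (OLS). This is a minimal sketch; the trend, intercept, and noise level below are arbitrary illustration values, not estimates from the temperature datasets.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly series spanning 114 years, like the 1900-2013 analysis period
t = np.arange(114 * 12) / 12.0           # time in years
a0, a1 = -0.3, 0.008                     # intercept and slope (K per year), arbitrary
x = 0.15 * rng.standard_normal(t.size)   # white "climate noise"
y = a0 + a1 * t + x                      # Eq. (1): trend plus stationary noise

a1_hat, a0_hat = np.polyfit(t, y, 1)     # OLS estimates of slope and intercept
```

For correlated (LRD) noise the OLS point estimate is unchanged, but its uncertainty grows; quantifying that uncertainty is the subject of section 3.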
For the regional surface temperature series analyzed in this paper we find, except for a small part of the land area, significant positive serial correlation (after detrending) of the residuals, with higher persistence over oceans compared to land. Thus, the stochastic part of the model X(t) should have built-in memory, consistent with the serial correlations of the observations. For the global mean surface temperature (GMST) there is evidence of long-range dependence (LRD) (Bloomfield 1992; Rypdal et al. 2013). Similar statistics are found in some grid cells, and it is therefore reasonable to choose stochastic models that exhibit scaling and slowly decaying autocorrelation functions (ACFs).
For the GMST, Cohn and Lins (2005) and Koutsoyiannis and Montanari (2007) have raised doubts about the statistical significance of a warming trend under an LRD null hypothesis, while Bunde and Lennartz (2012) find that a linear trend is significant at the 5% but not the 1% significance level. We have conducted our own analysis (presented in section 4) using standard statistical methods, which shows that a linear trend for the GMST is highly significant (p < 10−4). We note that a second-order polynomial trend (with the linear term set to zero) is a better model in terms of the explained variation R2, reflecting that global warming has been accelerating.
On regional scales, the question of statistical significance of trends is not as clear-cut because of the much lower signal-to-noise ratio. This is illustrated in Fig. 1, where we have plotted monthly deseasonalized temperature data for the city of Moscow, Russia, together with the global mean temperature anomaly. While the trend estimates (slopes) are distributed around the GMST trend estimate, the fluctuation level is much higher. However, for many grid cells the persistence parameter (e.g., Hurst exponent in the LRD model) is lower than for the GMST. Thus the result of a detection analysis is not given a priori. A complicating factor is that regions strongly influenced by El Niño–Southern Oscillation (ENSO) have stronger persistence on time scales of 2–5 yr than predicted by an LRD process (Huybers and Curry 2006) and lower persistence than is predicted from an LRD model on time scales longer than a decade. In fact, the estimated power spectral densities (PSDs) of the temperature fluctuations in regions strongly influenced by ENSO are inconsistent with a power law, but fit better with the Lorentzian-shaped PSDs that characterize an autoregressive process of order 1, the so-called AR(1) model.1
We note that in some aspects it is unsatisfactory to use AR(1) models to describe ENSO dynamics, since we know that ENSO is an oscillatory mode in the climate system. The AR(1) models, which can be seen as discretizations of the Ornstein–Uhlenbeck processes, take shape from simple linear first-order equations with dissipation and random forcing, and hence they are incapable of describing oscillating modes. On the other hand, we are not seeking an accurate physical model of ENSO; rather, we need to quantify how the fluctuation levels in the climate noise vary with time scales. More specifically, we need to make an estimate of the natural climate variability on centennial time scales based on the statistical properties of the climate variability on the shorter time scales. The role of the models in trend detection is therefore to correctly prescribe the fluctuation levels on the long time scales using parameters estimated from the statistics on the shorter time scales. If we apply an LRD model in the ENSO regions, we will estimate very large Hurst exponents, which in turn will overestimate the natural variability on the centennial time scales.
For many grid cells it is not clear whether to choose an AR(1) or an LRD process. This is an inherent statistical problem given the available sample length of about 110 years of data (Percival et al. 2001). Vyushin et al. (2012) find that climate variability appears to be more persistent than an AR(1) process and less persistent than a power-law process, and conclude that both representations are potentially useful for statistical applications. Thus, in a first attempt we compute the statistical significance against both null models. A similar approach is taken by Franzke (2012), who classifies the degree of significance based on the fraction of a set of null models that are rejected by the observations. We advance this approach further by selecting the "best" null model based on a likelihood-ratio (LR) criterion. The LR test classifies the ENSO regions as significantly (at the 5% significance level) better described by an AR(1) model than by the LRD model fractional Gaussian noise (fGn). This is consistent with our empirical analysis and also with the findings of Huybers and Curry (2006). We also observe examples of the opposite [fGn better than AR(1)], while many grid cells are classified as undecided in the sense that the test is unable to discriminate between the two models. By assessing trend significance against the best null model we find that about 80% of the grid points have significant warming trends at the 5% significance level.
The remainder of this paper is organized as follows: In section 2 we review the stochastic models used in this study. An outline of the statistical methods used is given in section 3. In particular, we review the trend detection methodology used in this paper. The main results are presented in section 4. We discuss our findings in section 5.
2. Stochastic models
a. Hurst exponent
As noticed by Hurst (1957), many signals in nature satisfy scaling in the sense that the fluctuation levels of their coarse-grained versions vary as power-law functions of the aggregation scale. For a time series Xt, this means that the standard deviation of the running mean Yk = t^(−1)(Xk−t+1 + … + Xk) scales as a power law of the window length t, so if the signal is stationary we can define the Hurst exponent H ∈ (0, 1) by the relation

std(Yk) ∝ t^(H−1). (2)
From stationarity and Eq. (2) it follows that the autocovariance function γ(τ) of Xt decays asymptotically as a power law:

γ(τ) ∝ τ^(2H−2) as τ → ∞. (3)
The parameter σ > 0 is the standard deviation, while the Hurst exponent H ∈ (0, 1) determines the correlation structure. For H = 1/2, the stochastic process Xt is white noise, while H > 1/2 gives persistent (positively correlated) random variables. The case H < 1/2 corresponds to negative correlations and is not relevant here [see Rypdal and Løvsletten (2013) for an application of antipersistent stochastic processes with power-law statistics].
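The scaling relation above suggests a simple graphical estimator: compute the standard deviation of block means over a range of aggregation scales and read H from the slope in a log-log plot. The sketch below is our illustration (not the estimator used in this paper, which is maximum likelihood); for synthetic white noise it recovers H ≈ 1/2.

```python
import numpy as np

def hurst_aggvar(x, scales):
    """Estimate H from the scaling of the std of block means, std ~ t**(H - 1)."""
    stds = []
    for t in scales:
        n = len(x) // t
        y = x[: n * t].reshape(n, t).mean(axis=1)   # coarse-grained series at scale t
        stds.append(y.std())
    # slope of the log-log regression equals H - 1
    slope = np.polyfit(np.log(scales), np.log(stds), 1)[0]
    return slope + 1.0

rng = np.random.default_rng(0)
white = rng.standard_normal(2**16)
H_white = hurst_aggvar(white, [2**k for k in range(1, 9)])   # expect H near 0.5
```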
One can extend the definition of the Hurst exponent to also include certain nonstationary processes. For instance, if X(t) is nonstationary with a power-law variogram but has stationary increments, then one can define the Hurst exponent by

std[X(t + Δt) − X(t)] ∝ Δt^(H−1). (4)
With this (extended) definition a Brownian motion has Hurst exponent H = 3/2 while Gaussian white noise has H = 1/2.
Two classes of stochastic processes with well-defined Hurst exponents are the self-similar processes (Embrechts and Maejima 2002) and the multifractal processes (e.g., Løvsletten and Rypdal 2012) with finite second moments. The Ornstein–Uhlenbeck process, defined as the solution to the stochastic differential equation (SDE)

dX(t) = −(1/τ) X(t) dt + σ dB(t), (5)
where B(t) is a Brownian motion and τ > 0, does not satisfy the scaling relation Eq. (2). However, an Ornstein–Uhlenbeck process scales asymptotically. When τ → ∞, X(t) converges to a Brownian motion and as τ → 0 the process X(t) is a Gaussian white noise.
b. Fractional Gaussian noise
The LRD model adopted in this paper is the fGn. If we assume that Xt is a Gaussian and stationary stochastic process that satisfies the scaling property of Eq. (2), then these properties define the class of fGns. In discrete time an fGn can be defined as the increments of a continuous-time fractional Brownian motion (fBm) BH(t) (Mandelbrot and Van Ness 1968):

Xt = BH(t) − BH(t − 1).
In continuous time fGn is not well defined as a (finite variance) process, but rather as a random signed measure. However, using the definition of fBm one can write a formal (but divergent) integral representation of fGn:

X(t) = C ∫_{−∞}^{t} (t − s)^(H−3/2) dB(s). (6)
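In discrete time the autocovariance of an fGn is available in closed form, γ(k) = (σ^2/2)(|k + 1|^(2H) − 2|k|^(2H) + |k − 1|^(2H)), so short exact fGn samples can be drawn by factorizing the covariance matrix. The sketch below uses a Cholesky factorization; this illustrates the model, and is not the simulation method used in the paper (fast generators would use circulant embedding instead).

```python
import numpy as np

def fgn_cov(n, H, sigma=1.0):
    """Exact Toeplitz autocovariance matrix of a discrete-time fGn of length n."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * sigma**2 * (np.abs(k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    return gamma[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

rng = np.random.default_rng(1)
n, H = 512, 0.8
L = np.linalg.cholesky(fgn_cov(n, H))
x = L @ rng.standard_normal(n)          # one exact fGn sample path
```

For H = 1/2 the matrix reduces to the identity (white noise), and for H > 1/2 all lagged covariances are positive, as expected for a persistent process.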
c. Ornstein–Uhlenbeck and AR(1) processes
An Ornstein–Uhlenbeck (OU) process is defined by replacing the power law (t − s)^(H−3/2) in Eq. (6) with an exponential kernel ∝ e^−(t−s)/τ. This introduces a characteristic time scale τ > 0, and the formulation is equivalent to the SDE in Eq. (5). Straightforward discretization of this equation gives an AR(1) process:

Xt+Δt = ϕ Xt + εt, (7)
where ϕ = 1 − Δt/τ, and εt are independent and identically distributed Gaussian random variables. The power spectral density of an OU process is Lorentzian, with S(f) ~ f−2 for f ≫ 1/τ and S(f) ~ f0 for f ≪ 1/τ. Hence we have two scaling regimes, one corresponding to Brownian motion (i.e., H = 3/2) on short time scales, and one corresponding to white noise (i.e., H = 1/2) on long time scales. The transition between these time scales is given by the characteristic time τ, which is also the e-folding time for the ACF.
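A short simulation illustrates the correspondence between the AR(1) coefficient and the OU characteristic time; the values of τ and Δt below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, dt = 20.0, 1.0                  # characteristic time in sampling units (arbitrary)
phi = 1.0 - dt / tau                 # AR(1) coefficient from the OU discretization
n = 200_000

x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]   # the AR(1) recursion

# Theoretical ACF: phi**k = exp(k*log(phi)) ~ exp(-k*dt/tau) when dt << tau,
# so the ACF e-folds over roughly tau/dt samples.
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # sample lag-1 autocorrelation, near phi
```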
3. Statistical methods
In this section we present theory for trend significance testing for linear models where the noise is an LRD process. Many of these results can be found in Ko et al. (2008) and the references therein, but we will also present some extensions and modifications of the existing theory. We note that the statistical methods used in this paper have been tested and validated in the supplementary material.
Consider n observations from the linear trend model in Eq. (1) where the climate variability Xt is represented by an fGn with scale parameter var(X1) = σ^2 and Hurst exponent H. From the definition of an fGn it follows that the random vector X = (X1, …, Xn)^T is multivariate normally distributed,

X ~ N(0, Γ), (8)
where the n × n covariance matrix Γ is the Toeplitz matrix of the autocovariances [γ(0), …, γ(n − 1)]; that is, the elements (i, j) of Γ are of the form γ(|i − j|), with γ(⋅) defined in Eq. (3). Denote by Λ = Γ/σ^2 the correlation matrix of X, and note that Λ depends only on the Hurst exponent H. It is convenient to write the linear trend model in vector form:

Y = F^T a + X, (9)
where a = (a0, a1)^T and F is the 2 × n design matrix with ones in the first row and the sampling times (1, 2, …, n) in the second row. The ordinary least squares (OLS) estimator of a can then be written as

â = (F F^T)^(−1) F Y. (10)
This estimator has a bivariate normal distribution with mean a and covariance matrix

cov(â) = σ^2 (F F^T)^(−1) F Λ F^T (F F^T)^(−1).
If we define c(H) to be element (2, 2) of the normalized covariance matrix (F F^T)^(−1) F Λ F^T (F F^T)^(−1), then the estimator for the slope is distributed as â1 ~ N(a1, σ^2 c(H)); that is,

(â1 − a1) / [σ c(H)^(1/2)] ~ N(0, 1), (11)
where c(H)^(1/2) ~ n^(H−2). A closed-form expression for the variance factor c(H) can be found in Lee and Lund (2004). By setting a1 = 0, Eq. (11) gives the distribution of trend estimates under the null hypothesis of no trend. It follows that a (1 − α) × 100% confidence interval is given by

â1 ± z_(α/2) c(H)^(1/2) σ, (12)
with â1 the (OLS) estimated slope and zα the upper α quantile of the standard normal distribution. The corresponding p value (the probability of an fGn producing a larger trend estimate than the observed estimate) is given by

p = 1 − Φ(â1 / [c(H)^(1/2) σ]), (13)
where Φ is the cumulative distribution function of a standard normal random variable. Equations (12) and (13) come with the tacit assumption of known noise parameters. For most practical applications, one only has access to a set of parameter estimates. To assess trend significance, in a first attempt, one can just plug in the estimates of the noise parameters. For consistent estimators this approach results in an asymptotically (i.e., as the sample size goes to infinity) valid significance test. The advantage of this approach is that analytical formulas are available.
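The plug-in test described above can be sketched in a few lines: build the fGn correlation matrix Λ, obtain c(H) as element (2, 2) of the normalized covariance matrix of the OLS estimator, and convert a slope estimate into a one-sided p value. This is our illustrative implementation of the formulas above, with plug-in noise parameters and without any small-sample correction.

```python
import math
import numpy as np

def fgn_corr(n, H):
    """Correlation (Toeplitz) matrix of a discrete-time fGn."""
    k = np.arange(n, dtype=float)
    rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    return rho[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

def slope_variance_factor(n, H):
    """c(H): element (2, 2) of (F F^T)^-1 F Lambda F^T (F F^T)^-1."""
    F = np.vstack([np.ones(n), np.arange(1, n + 1)])   # 2 x n design matrix
    A = np.linalg.inv(F @ F.T) @ F                     # maps Y to the OLS estimate
    C = A @ fgn_corr(n, H) @ A.T                       # cov(a_hat) / sigma^2
    return C[1, 1]

def trend_p_value(a1_hat, sigma, n, H):
    """One-sided p value for a positive trend under the fGn null (plug-in)."""
    z = a1_hat / (sigma * math.sqrt(slope_variance_factor(n, H)))
    return 0.5 * math.erfc(z / math.sqrt(2.0))         # 1 - Phi(z)

# For H = 1/2 (white noise), c(H) reduces to the classical OLS slope variance factor
c_white = slope_variance_factor(100, 0.5)
```

Since c(H) grows with H, the same estimated slope becomes less significant as the assumed persistence of the noise increases.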
To estimate the Hurst exponent we use the maximum likelihood (ML) method (e.g., McLeod et al. 2007). As noted by Koutsoyiannis and Montanari (2007), the usual white-noise estimator for the scale parameter σ is severely biased for LRD processes. A better alternative is to use the ML estimator, adjusted such that the sample length n in the denominator is replaced with n − 2:

σ̂^2 = x^T Λ(Ĥ)^(−1) x / (n − 2), (14)
where Ĥ is the maximum likelihood estimate (MLE) of the Hurst exponent, x the vector of residuals, and Λ(Ĥ) the corresponding Toeplitz matrix formed from the ACF of n observations. In Eq. (14) the matrix Λ(Ĥ)^(−1/2) has the effect of decorrelating an fGn vector X, in the sense that the components of Z = σ^(−1) Λ(Ĥ)^(−1/2) X are independent, standard normal variables.
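A minimal version of the ML estimation can be written by profiling out the scale parameter and scanning the Hurst exponent over a grid. This is a sketch under that simplification; in practice a numerical optimizer would replace the grid.

```python
import numpy as np

def fgn_corr(n, H):
    """Correlation (Toeplitz) matrix of a discrete-time fGn."""
    k = np.arange(n, dtype=float)
    rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    return rho[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

def neg_log_likelihood(H, x):
    """Gaussian negative log-likelihood with the scale parameter profiled out."""
    n = len(x)
    Lam = fgn_corr(n, H)
    _, logdet = np.linalg.slogdet(Lam)
    s2 = x @ np.linalg.solve(Lam, x) / n      # profiled ML estimate of sigma^2
    return 0.5 * (n * np.log(s2) + logdet + n)

rng = np.random.default_rng(3)
x = rng.standard_normal(400)                  # true process: white noise, H = 0.5
grid = np.arange(0.05, 1.0, 0.05)
H_hat = grid[np.argmin([neg_log_likelihood(H, x) for H in grid])]
```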
The noise parameters are estimated from the residuals x, found by subtracting the OLS linear trend. Several authors (e.g., Koutsoyiannis and Montanari 2007; Franzke 2012) have argued that, to reflect the null hypothesis, these estimates should be calculated directly from the data. This gives a very weak significance test, since only the null hypothesis, and not both the null and the alternative hypotheses, is taken into account. Indeed, if we have a trend, this approach will lead to an erroneously high estimate of the scale parameter and of the Hurst exponent. If we instead subtract an estimated trend, given the null hypothesis, we introduce a small bias in the estimates. A similar bias is also introduced by just subtracting the sample mean (see Table S2 in the supplementary material). However, this inherent bias can be accounted for by adopting the small-sample correction proposed by Ko et al. (2008), and the details of this procedure can be found in the supplementary material.
While uncertainties in the estimates of the Hurst exponent and the scale parameter are taken into account with this small-sample correction method, the significance test still depends crucially on the estimated Hurst exponent. To add robustness to our results, we consider ML estimates on several time scales, and also detrended fluctuation analysis of order 2 and simple variograms. The advantage of these methods is that one can visually inspect the scaling properties (taking into account the well-known error bars). In addition we have inspected the ACFs for detrended data. From these nonparametric methods we identify a lack of scaling for the temperature fluctuations in some grid cells, most notably in the ENSO region.
Trend detection under an AR(1) model follows along the same lines with an explicit description given by Lee and Lund (2008).
4. Analysis of surface temperature data
a. Data
Four datasets are analyzed in this project. The first is the HadCRUT4 surface temperature anomalies (Morice et al. 2012), which combine the land temperatures from the CRU surface temperature data version 4 (CRUTEM4; Jones et al. 2012) and the sea surface temperatures (SSTs) from the Hadley Centre SST data version 3 (HadSST3; Kennedy et al. 2011). We also use the NOAA Merged Land–Ocean Surface Temperature Analysis (MLOST, V3.5.4) data developed by Smith and Reynolds (2005). In both of these datasets the mean temperature in 5° × 5° grid cells is provided with monthly time resolution. In addition to these we use the Berkeley Earth 15 984-cell equal-area dataset, and the GISS Surface Temperature Analysis (hereafter GISS; Hansen et al. 2010), with 1200-km smoothing, which is given on 2° × 2° grids. Possible sources of differences between the GISS, HadCRUT4, and NOAA MLOST data products have been briefly discussed by Libardoni and Forest (2011). The majority of land surface data [which come from the Global Historical Climatology Network (GHCN)] are treated differently in the construction of the different datasets. For instance, in the construction of the HadCRUT4 data there is a requirement that stations should have a certain number of observations in the normal period 1961–90, while in the construction of the GISS data (with 1200-km smoothing) a station is only included if there are other stations within a 1200-km radius with a period of overlap of at least 20 years. In addition, each data product uses different SSTs, and there are differences in the way that data are extrapolated, or not extrapolated. The Berkeley land temperatures are constructed from 16 preexisting data archives. The current archive uses over 39 000 unique stations, which is roughly 5 times the number of stations used in GHCN. The Berkeley SST is a modified version of HadSST3.
All four datasets were downloaded on 1 October 2015 from the web pages listed in the supplementary material. The time period analyzed is January 1900–December 2013.
b. Sampling scale
For the regional surface temperature series we observe that direct application of the ML method tends to give higher estimates of the Hurst exponent compared with detrended fluctuation analysis of order 2 (DFA2). For the latter we have control over which time scales contribute to the estimate. We also observe that the discrepancy between the two methods disappears if the signals are coarse grained over 4-month windows prior to the ML estimation (i.e., if a new, coarser time series is produced by dividing the series into 4-month segments and averaging the data points within each segment). Which time scales should be emphasized in the parameter estimation is always a trade-off between the improved statistics achieved when focusing on the shorter scales and the increased relevance and importance of the longer scales. The choice to apply a 4-month coarse graining is based on this type of consideration, and it is meant to ensure that distinctive features of the month-to-month fluctuations do not have too large an impact on the predicted centennial-scale fluctuation level.
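The 4-month coarse graining described above amounts to averaging non-overlapping blocks, e.g.:

```python
import numpy as np

def coarse_grain(x, window=4):
    """Average non-overlapping blocks of `window` samples; drop any remainder."""
    n = len(x) // window
    return x[: n * window].reshape(n, window).mean(axis=1)

monthly = np.arange(8.0)             # toy monthly series
quarterly4 = coarse_grain(monthly)   # two 4-month means
```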
c. GMST trend significance
In Table 1 we present the results of a trend detection analysis for the four GMST time series. We see that there is very little variation between the four data products, with linear trends ≈0.08 K decade−1 and fluctuation levels σwn ≈ 0.15 K (4 months)−1. Here σwn denotes the white-noise estimator, which is defined in Eq. (14) with Λ replaced by the identity matrix. The MLEs of the Hurst exponents are H = 0.97 for the GISS data and H = 0.98 for the other three GMST time series (not shown in the table). Since the methods we apply are restricted to the case H < 1, we should be attentive to the fact that the high estimates for H could simply be a result of the upper bound H = 1. This would be the case if the GMST scales with an exponent H > 1. However, this can be tested using the DFA2 estimator, which is applicable both for H < 1 and H > 1. Applying the DFA2 estimator to the GMST data gives estimates in the range from H = 0.87 to H = 0.96 for all four data products. The bias-corrected ML estimates are HBC = 0.99, and the resulting adjusted ML estimator for the fluctuation level [see Eq. (14)] is σ ≃ 0.45 K (4 months)−1. The rather large discrepancy between the estimates for the fluctuation level is caused by Hurst exponents close to one.
The statistical significance of the trend estimates is computed using HBC and σ with the small-sample correction outlined in section 3 (details of this method are given in the supplementary material). The p values for the OLS slopes are less than 10−4, and the trends are thus highly significant. The 95% confidence intervals for the trends are ≈0.08 ± 0.03 K decade−1.
d. Regional results
We start the discussion of regional statistics by first considering the GISS dataset. Figure 2a shows the estimated trends, and as can be seen in Table 2, the regional trends are distributed around the GMST trend. We observe warming over all of Earth’s surface, except for a small region in the North Atlantic. The warming trends are generally weaker in the SST compared to surface air temperature (SAT) over land; in particular, we observe weaker trends in the Pacific Ocean.
Figure 2b shows the (white noise) fluctuation levels of the temperature signal (i.e., the standard deviation around the regression line). A summary of these estimates can be found in Table 2. The MLEs of the fluctuation levels based on an AR(1) model and an fGn model yield similar results. Much larger fluctuation levels are observed over land than over the oceans, and hence it is not a priori clear that the stronger trend over land is more significant than the weaker trend in the oceans. There are also large fluctuation levels around the equator in the Pacific Ocean. This is a region that is colder than average during the La Niña cold phase and warmer than average during the El Niño warm phase. In this region the standard deviations are influenced by ENSO, and not only by the year-to-year variability. As discussed in the introduction, this is one of the reasons why an AR(1) process is a better null model in this region.
The estimated Hurst exponents are shown in Fig. 2c, and we observe stronger persistence in SSTs than in land temperatures. In North America and in Eurasia the estimated model is close to a white-noise process (i.e., H ≈ 0.5), while we apparently have strong LRD in the oceans, in particular in the tropical Pacific. A similar picture is seen in Fig. 2d, where we have plotted the estimated correlation time τ of an AR(1) process. We observe that the estimated correlation time varies from a few months over much of Earth's land areas to a couple of years in the tropical Pacific and tropical Atlantic.
Based on the parameter estimates presented in Fig. 2 we can compute the p values for the estimated trends. As illustrated in Figs. 3a and 3b, these p values depend crucially on the chosen null model. In Fig. 3a we have shown a map of the p values computed with respect to the fGn model, and in Fig. 3b we have shown the corresponding p values computed with respect to the AR(1) model. A striking feature in these plots is that the SST trends for cell points in the Pacific Ocean are determined as significant with respect to an AR(1) model, but cannot be concluded as significant if we apply an LRD model. Hence, our interpretation of the significance of the local warming trends in the Pacific Ocean depends on which model is best suited to describe the correlation structure in these data.
As discussed in the introduction, we observe that many of the time series in this region (see, e.g., Figs. 1b,d) have statistical properties that are strongly influenced by ENSO. That is, the PSDs are not power laws, but rather have strong persistence on the shortest time scales and white-noise characteristics on longer scales. In contrast, many of the SST series in the North Atlantic basin, the statistical properties of which are influenced by the Atlantic multidecadal oscillation (AMO), are consistent with a scaling model. It is important to realize that a persistent (H > 0.5) scaling description of the climate noise is a parsimonious way of stating that there are natural oscillations on all scales, and the parameter H determines the relative fluctuation levels of the slow oscillations compared to the faster modes. However, as the PSD reveals, the ENSO is too strong to be consistent with an LRD model and must be seen as an anomalous oscillation in this description. Whether or not the AMO is anomalous with respect to an LRD description is difficult to determine from the instrumental record due to insufficient statistics. In any case, it is evident that the persistent multidecadal SST variability in the North Atlantic and SAT variability over adjacent continents is related to the AMO and the North Atlantic Oscillation (NAO) (Li et al. 2013).
To systematically determine whether an AR(1) or an LRD null model is better suited at a given geographic location, we apply the likelihood-ratio (LR) model selection test (see Fig. 3c). We observe that AR(1) processes are preferred over an fGn in much of the Pacific Ocean, while fGn models are preferred in the North Atlantic and over the adjacent continents.
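A bare-bones stand-in for this model selection can be sketched as follows: since the AR(1) and fGn models have the same number of parameters, one can simply compare their maximized Gaussian likelihoods and pick the larger. This is our illustrative sketch (with grid-search MLEs), not the exact LR test used in the paper.

```python
import numpy as np

def gauss_nll(R, x):
    """Gaussian negative log-likelihood for correlation matrix R, scale profiled out."""
    n = len(x)
    _, logdet = np.linalg.slogdet(R)
    s2 = x @ np.linalg.solve(R, x) / n
    return 0.5 * (n * np.log(s2) + logdet + n)

def ar1_corr(n, phi):
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return phi**k

def fgn_corr(n, H):
    k = np.arange(n, dtype=float)
    rho = 0.5 * (np.abs(k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    return rho[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

def best_model(x):
    """Pick AR(1) or fGn by the larger maximized likelihood (grid-search MLEs)."""
    n = len(x)
    nll_ar1 = min(gauss_nll(ar1_corr(n, p), x) for p in np.arange(0.0, 0.99, 0.05))
    nll_fgn = min(gauss_nll(fgn_corr(n, H), x) for H in np.arange(0.05, 1.0, 0.05))
    return "AR(1)" if nll_ar1 < nll_fgn else "fGn"

# ENSO-like short-memory data should be classified as AR(1)
rng = np.random.default_rng(4)
n, phi = 600, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
label = best_model(x)
```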
In Fig. 3d we have combined Figs. 3a and 3b so that the p value for the preferred model is plotted at each grid point. When combining the two models we obtain more grid points with significant warming than under the fGn null hypothesis, but fewer than under the AR(1) null model.
e. Comparisons of the datasets
To add robustness to the results presented in the previous section, we have repeated the same regional statistical analysis on the datasets from HadCRUT4, Berkeley Earth, and NOAA MLOST. The trends and standard deviations are shown in Fig. 4 and summarized in Table 2. The persistence parameters are shown in Fig. 5. For the GISS dataset, these estimates are shown in Fig. 2. The most notable difference between the four data products is in the southern oceans. This can be seen by comparing the persistence parameters, and also the standard deviations.
In Fig. 6 the statistical significance of the trends, based on the best null model, is shown for the HadCRUT4, Berkeley Earth, and NOAA MLOST data. The patterns are similar to what we found for the GISS data, with the largest domains of insignificant trends in the Pacific and North Atlantic Oceans. Table 3 shows the percentages of trends that are significant. The relative frequency of significant trends, at the 5% significance level tested against the best null model, is approximately 80% for all the data products. The HadCRUT4 data show the smallest percentage (70%) of significant trends, but this can be understood from the difference in spatial coverage (see Fig. 6d).
5. Summary and discussion
This paper studies climate variability after 1900 using simple stochastic models and four different data products. The results are in many respects similar for the four data products, although there are some differences that are discussed in section 4e.
One of our main focuses has been statistical significance testing of regional temperature trends in this time period with an LRD representation of the internal climate variability. Several studies have presented such detection analysis for a few selected locations, and an advantage of this study is that we get a global overview of local and regional climate variability.
Bloomfield (1992) has shown that the GMST trend is significantly different from zero. Our study confirms this conclusion with an updated estimate of the GMST trend of 0.08 ± 0.03 K decade−1. Here, the error bars indicate the 95% confidence interval under the assumption of a linear trend superposed on long-range dependent (LRD) stationary fluctuations, which in this work are represented by the fGn model. Under the same assumption we have shown that the p value (the probability of an fGn producing pseudotrends larger than the observed warming) is less than 10−4.
For regional surface temperatures we find that approximately 80% of the analyzed grid cells have significant warming trends. This number is obtained by first choosing the best null model [fGn or AR(1)] based on a likelihood-ratio criterion, and subsequently applying trend detection with the most appropriate model. This approach is preferable to the standard method, which is to restrict the analysis to a single class of models (e.g., fGn). The main reason for this is that some regions, in particular those strongly influenced by ENSO, show a lack of scaling, while other regions are more consistent with LRD processes.
A similar fraction of grid cells with significant warming trends (about 80%) was also found by Karoly and Wu (2005) for trends over 1903–2002, although a one-sided test was used there. The results of our study, as well as those of Karoly and Wu (2005), Stott et al. (2010), and Knutson et al. (2013), are evidence that global warming is observable on regional scales.
The regions where we do not have warming trends, or where we cannot establish significance of the warming trends, can be identified with feedback mechanisms in the ocean dynamics. In fact, the lack of warming trends in the North Atlantic basin can partly be explained by the 60-yr periodicity in the AMO. The AMO began a negative phase around the year 1900, and in the time period 1900–2013 (the period we have analyzed) it had not quite completed two full cycles. Consequently, the AMO has a negative contribution to the SST trends over the period.
Another region where we cannot establish significant warming trends is the equatorial Pacific Ocean, specifically its eastern part (see, e.g., Fig. 3d). This is related to the so-called Pacific cold tongue, a region around the equator west of South America that experiences cooling relative to the other regions of the Pacific Ocean. The phenomenon is produced by upwelling of cold water in the eastern Pacific and its amplification by the trade winds. Our results for this region are consistent with a study by Zhang et al. (2010), in which principal component analysis is used to discern a spatial pattern for the variations in the SST over the last century, and in which the Pacific cold tongue is identified in the second orthogonal function mode. Climate models show that the cooling mode is not observed in the preindustrial period, and therefore it might be seen as a negative dynamical feedback to global warming (Zhang et al. 2010).
In a wider perspective, this paper presents a simple methodology for accurately quantifying local and regional temperature variability on centennial time scales. Several authors have used climate models to determine the relative contribution of natural variations to the overall uncertainty in climate predictions for the next century (see, e.g., Monier et al. 2015; Deser et al. 2012, 2014). In these studies, the natural variability is defined as the variations of the individual runs around the ensemble means. The obvious advantage of climate models in this respect is the availability of a large number of runs, which makes it possible to construct ensemble means. When analyzing the instrumental temperature records, we only have a single realization at each location, and we have to apply different methods in order to separate internal climate variability from the climate system's response to the anthropogenic changes in radiative forcing. This separation of signals into noise terms (internal variability) and trends is exactly what is done in trend significance testing, and hence this paper contains a description of natural climate variability, including its dependence on geographic location and how its fluctuation levels depend on time scale. Our study can be seen as a complement to the ongoing efforts of using climate models to quantify uncertainty in future climate projections.
Acknowledgments
This work has received support from the Norwegian Research Council under Contract 229754/E10. We thank the referees for useful comments that helped improve the paper. The authors also acknowledge useful discussions with Kristoffer Rypdal and Hege-Beate Fredriksen.
Supplemental information related to this paper is available at the Journals Online website: http://dx.doi.org/10.1175/JCLI-D-15-0437.s1.
1 AR(1) models are commonly used to model climate noise [e.g., Fig. SPM.1(b) in IPCC (2013)].