Global Surface Temperature Response to 11-Yr Solar Cycle Forcing Consistent with General Circulation Model Results

T. Amdur, Department of Earth and Planetary Sciences, Harvard University, Cambridge, Massachusetts

A. R. Stine, Department of Earth and Climate Sciences, San Francisco State University, San Francisco, California

P. Huybers, Department of Earth and Planetary Sciences, Harvard University, Cambridge, Massachusetts

ABSTRACT

The 11-yr solar cycle is associated with a roughly 1 W m−2 trough-to-peak variation in total solar irradiance and is expected to produce a global temperature response. The sensitivity of this response is, however, contentious. Empirical best estimates of global surface temperature sensitivity to solar forcing range from 0.08 to 0.18 K (W m−2)−1. In comparison, best estimates from general circulation models forced by solar variability range between 0.03 and 0.07 K (W m−2)−1, prompting speculation that physical mechanisms not included in general circulation models may amplify responses to solar variability. Using a lagged multiple linear regression method, we find a sensitivity of global-average surface temperature ranging between 0.02 and 0.09 K (W m−2)−1, depending on which predictor and temperature datasets are used. On the basis of likelihood maximization, we give a best estimate of the sensitivity to solar variability of 0.05 K (W m−2)−1 (0.03–0.09 K; 95% confidence interval). Furthermore, through updating a widely used compositing approach to incorporate recent observations, we revise prior global temperature sensitivity best estimates of 0.12–0.18 K (W m−2)−1 downward to 0.07–0.10 K (W m−2)−1. The finding of a most likely global temperature response of 0.05 K (W m−2)−1 supports a relatively modest role for solar cycle variability in driving global surface temperature variations over the twentieth century and removes the need to invoke processes that amplify the response relative to that exhibited in general circulation models.

© 2021 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: T. Amdur, amdur@g.harvard.edu


1. Introduction

The 11-yr solar cycle is associated with a roughly 1 W m−2 trough-to-peak variation in total solar irradiance (TSI) over the past half century, corresponding to a 0.25 W m−2 anomaly in globally averaged top-of-atmosphere insolation. Such a regular, albeit small, increase in shortwave flux during solar-cycle maxima is expected to cause temperature variations that can be examined to better understand Earth’s responses to anomalies in radiative forcing. Indeed, the relationship between solar cycle forcing and surface temperature response has spawned a long history of studies investigating the magnitude of the global temperature response (Pittock 1978), with more recent studies taking advantage of satellite observations, global reanalysis, and gridded surface observation products. Recent empirical estimates of the sensitivity of global-average surface temperature to solar forcing (Table 1) range from 0.08 to 0.18 K (W m−2)−1. Throughout this study, surface temperature sensitivity is given as the temperature response per watt per meter squared anomaly in the solar constant, as per convention (e.g., Douglass and Clader 2002; Camp and Tung 2007). Implicit in these comparisons is the assumption that climate sensitivity to solar cycle forcing is consistent over time and thus can be compared over various intervals.

Table 1.

Published and updated sensitivities to solar cycle forcing. Updated estimates use the same methods and datasets but with the latter updated to cover 1959–2019 or 1882–2019. Rows 1–9 describe observational studies, and rows 10–12 are model studies. Publication 6, Lean and Rind (2008), originally reports a temperature response rather than sensitivity, and sensitivity estimates produced by their model formulation are reported here. Updated sensitivities for all publications using HadCRUT3 data are calculated using HadCRUT4. Standard errors, if reported, reflect a 95% confidence interval assuming a normal distribution. The following inference methods are represented: multiple linear regression (MLR), composite mean difference (CMD), wavelet decomposition (WD), linear discriminant analysis (LDA), Fourier transform (FT), and energy balance model (EBM).


Observational estimates rely upon a variety of methods, the most common of which is multiple linear regression (MLR), where TSI features as one of the predictors. Estimates of the sensitivity of global-average temperature to solar forcing from MLR from the past 20 years range from 0.08 to 0.11 K (W m−2)−1 (Benestad and Schmidt 2009; Douglass and Clader 2002; Folland et al. 2018; Lean and Rind 2008; Misios et al. 2016).

Studies using a composite mean difference approach (CMD) have reported larger estimates for the magnitude of the solar response. Notably, Camp and Tung (2007) estimate a response as high as 0.18 ± 0.08 K (W m−2)−1 (all ranges given as 95% confidence intervals) when using CMD with NCEP reanalysis temperature fields over 1959–2004. CMD involves generating a spatial pattern of temperature response by averaging and differencing the annual meridional temperature anomalies for years with high solar activity versus low solar activity. Using an approach called linear discriminant analysis (LDA), which is similar to CMD but in which residuals are constrained to be orthogonal in space and time to the retained pattern, Tung and Camp (2008) find a sensitivity of 0.17 ± 0.04 K (W m−2)−1 when also analyzing NCEP reanalysis temperature fields over 1959–2004. These sensitivities are the highest that we are aware of in the literature. Tung et al. (2008) obtained lower sensitivities, however, when applying LDA to other global temperature datasets, highlighting the importance of evaluating different datasets and their uncertainties.

Perhaps surprisingly, it appears more difficult to detect a response to solar variability in a forced general circulation model simulation. Simulated responses to solar variability are difficult to disentangle from internal variability (Hegerl et al. 1997) and appear small in magnitude (Benestad and Schmidt 2009; MacMynowski et al. 2011; Stevens and North 1996). A representative model study by Misios et al. (2016) finds that models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) display a response to solar cycle forcing of 0.07 ± 0.10 K (W m−2)−1, with 2 models of 31 examined exhibiting a negative global temperature response to solar cycle forcing.

If empirical estimates indicating a sensitivity of 0.12 K (W m−2)−1 or higher are correct, most estimates from model simulations are low, possibly because relevant processes are not accounted for within the models. One set of mechanisms calls upon the fact that solar variability is greatest at shorter wavelengths that are largely absorbed in the stratosphere, with UV heating of the stratosphere causing a top-down response that is either regional (e.g., Andrews et al. 2015) or global (e.g., Meehl et al. 2009) in scale. Because the solar cycle includes a modulation of the sun’s magnetic field, others have suggested feedbacks involving changes in the solar wind modifying the flux of cosmic rays that, in turn, alter cloud properties (Svensmark et al. 2017). A review of proposed processes is provided by Gray et al. (2010). It is also possible that the sensitivity of global temperature to solar variations is indicative of the feedback strength determining the response to greenhouse gas forcing (Tung and Camp 2008; Cai and Tung 2012). In this view, model simulations showing a weaker response to solar variability may also exhibit too low a sensitivity to greenhouse gas forcing.

Given the implications of a disagreement between solar-cycle sensitivity estimates in observations and models, it is useful to determine whether such a disagreement can be addressed by further examination of observational data and techniques. We first present a baseline estimate of the global temperature response to solar cycle forcing. We then use comparisons between this baseline and other approaches to explain the range of estimates found in the literature. Although our primary line of analysis is focused on estimating sensitivity using multiple regression, we also re-examine estimates made using compositing approaches. We do not address wavelet decomposition, a third approach used by Scafetta and West (2005), because it was found by Benestad and Schmidt (2009) to be overly sensitive to parameter choices.

2. Baseline estimate of the sensitivity of global temperature to solar variability

An initial “baseline” estimate of the global temperature response to solar cycle forcing is obtained using up-to-date forcing and temperature records and multiple linear regression with lags (MLR). MLR yields a set of regression coefficients that minimize the difference of observed from predicted temperatures given a set of predictors. We follow the approach of past studies (Douglass and Clader 2002; Lean and Rind 2008; Misios et al. 2016) in attributing global-average surface temperature anomalies to four predictors:
$$\hat{T}(t) = \beta_0 + \beta_s F_s(t - \delta_s) + \beta_a F_a(t - \delta_a) + \beta_\upsilon F_\upsilon(t - \delta_\upsilon) + \beta_e F_e(t - \delta_e). \quad (1)$$
Here T̂(t) is the monthly global surface temperature anomaly relative to climatology, Fs is the anomaly in total solar irradiance (TSI), Fa is anthropogenic forcing from greenhouse gases, Fυ is volcanic stratospheric aerosol forcing, and Fe is an index for the El Niño–Southern Oscillation (ENSO). The δ terms represent lags, such that our analysis is a multiple-lag linear regression.
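As a concrete illustration of the regression in Eq. (1), the sketch below builds a design matrix of lagged predictors and fits it by ordinary least squares. The series, lag values, and coefficients are synthetic placeholders patterned on the text rather than the datasets used here, and padding the start of each lagged series with its first value is an assumption for illustration only.

```python
import numpy as np

def lag_series(x, k):
    """Shift a series back by k steps, padding the start with its first value."""
    y = np.roll(x, k)
    y[:k] = x[0]
    return y

def lagged_mlr(temp, predictors, lags):
    """Ordinary least squares fit of Eq. (1): an intercept plus lagged predictors.

    temp       : 1D array of monthly global temperature anomalies
    predictors : dict of name -> 1D array (same length as temp)
    lags       : dict of name -> lag in months
    Returns the intercept and a dict of regression coefficients.
    """
    names = list(predictors)
    X = np.column_stack([np.ones(len(temp))] +
                        [lag_series(predictors[n], lags[n]) for n in names])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    return beta[0], dict(zip(names, beta[1:]))

# Synthetic example: recover a prescribed 0.05 K (W m-2)-1 solar coefficient.
rng = np.random.default_rng(0)
months = np.arange(12 * 60)
F = {"solar": 0.5 * np.sin(2 * np.pi * months / 132),    # ~11-yr cycle, W m-2
     "anthro": 0.02 * months / 12,                        # slow ramp, W m-2
     "volcanic": -np.exp(-((months - 300) / 24.0) ** 2),  # one eruption-like pulse
     "enso": rng.standard_normal(months.size)}            # stand-in ENSO index
lags = {"solar": 11, "anthro": 144, "volcanic": 7, "enso": 3}
temp = (0.05 * lag_series(F["solar"], 11) + 0.4 * lag_series(F["anthro"], 144)
        + 0.1 * lag_series(F["volcanic"], 7) + 0.06 * lag_series(F["enso"], 3)
        + 0.1 * rng.standard_normal(months.size))
intercept, coeffs = lagged_mlr(temp, F, lags)
print(round(coeffs["solar"], 3))  # close to the prescribed 0.05
```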

Global-average surface temperature is given by HadCRUT4 from the Hadley Center (Morice et al. 2012). TSI estimates are obtained from the Solar Influence for SPARC–High Energy Particle Precipitation in the Atmosphere (SOLARIS-HEPPA) monthly TSI reconstruction used for CMIP6 model simulations (Matthes et al. 2017). Anthropogenic forcing is represented by the CMIP5 effective CO2 concentration radiative forcing from all greenhouse gases, as calculated by the representative concentration pathway (RCP). Effective CO2 values from 2005 to 2019 are estimated using the RCP6.0 scenario (Meinshausen et al. 2011). Volcanic forcing is taken from the stratospheric aerosol optical depth reconstructions used in CMIP6 model inputs (Arfeuille et al. 2014; Thomason et al. 2018), which is a time series predictor calculated by taking the spatially weighted mean of 550-nm extinction coefficients integrated over all layers above 15 km. ENSO is represented by the multivariate ENSO index (MEI; Wolter and Timlin 1998).

Monthly resolution is used for all records in order to capture response and lag information at subannual time scales, although comparisons are made against estimates derived from annual-resolution data later. Because the temperature response to forcing in the climate system is not instantaneous, lags for all predictors are selected for each combination of predictors in the regression model such that the total contribution of all predictors is maximized. This is accomplished by simultaneously fitting lags for each forcing to maximize explained temperature variability in the median global temperature estimate from HadCRUT4. Bounds for the allowed lag between global temperature and each predictor are prescribed between zero time offset and one-quarter of a cycle or one e-folding of the respective predictor process: specifically, 0–36 months for solar (δs), 0–12 months for volcanic and ENSO (δυ; δe), and 0–20 years for anthropogenic forcing (δa). In practice, we find explained variance is maximized for δs = 11 months, δυ = 7 months, δe = 3 months, and δa = 12 years.
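One simple way to select the variance-maximizing lags described above is a grid search over the allowed lag ranges, refitting the regression for each combination and keeping the set with the highest explained variance. The sketch below assumes this brute-force strategy; the search algorithm is not specified in the text, so this is illustrative only, and the input names are placeholders.

```python
import numpy as np
from itertools import product

def lag_series(x, k):
    """Shift a series back by k steps, padding the start with its first value."""
    y = np.roll(x, k)
    y[:k] = x[0]
    return y

def explained_variance(temp, predictors, lags):
    """R^2 of an OLS fit of temp on an intercept plus lagged predictors."""
    X = np.column_stack([np.ones(len(temp))] +
                        [lag_series(f, k) for f, k in zip(predictors, lags)])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    return 1.0 - resid.var() / temp.var()

def best_lags(temp, predictors, lag_bounds, step=1):
    """Search all lag combinations within the given (low, high) bounds and return
    the combination that maximizes explained variance."""
    grids = [range(lo, hi + 1, step) for lo, hi in lag_bounds]
    return max(product(*grids),
               key=lambda lags: explained_variance(temp, predictors, lags))

# Usage sketch with the bounds quoted in the text (months); a coarse step keeps the
# four-dimensional search tractable. `temp`, `tsi`, `ghg`, `volc`, and `enso` are
# placeholder arrays for the temperature and predictor records.
# best_lags(temp, [tsi, ghg, volc, enso],
#           [(0, 36), (0, 240), (0, 12), (0, 12)], step=3)
```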

Our multiple regression analysis gives a global temperature sensitivity of 0.05 K (W m−2)−1 with a 95% confidence interval of 0.00–0.10 K (W m−2)−1. Confidence intervals are estimated using the results of 10,000 regressions, each involving a different realization of surrogate TSI data obtained from phase randomization of the observations (Theiler et al. 1992). The central value is generally consistent with the predicted global temperature response to solar cycle forcing estimated in model studies (Table 1) but smaller than estimates from several observational studies. The remainder of this paper reconciles this baseline MLR estimate, which we refer to as ASH21, with other estimates.
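The phase-randomization surrogates used for these confidence intervals can be generated by randomizing the Fourier phases of the TSI series while preserving its amplitude spectrum (Theiler et al. 1992). A minimal sketch follows; `fit_solar_coefficient` in the commented loop is a hypothetical stand-in for refitting the regression with each surrogate.

```python
import numpy as np

def phase_randomize(x, rng):
    """Return a surrogate with the same power spectrum as x but randomized Fourier
    phases (Theiler et al. 1992). Assumes a real-valued, evenly sampled series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    spectrum = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phases[0] = 0.0            # keep the mean (zero-frequency) term real
    if n % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist term real for even-length series
    surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n)
    return surrogate + x.mean()

# Sketch of the confidence-interval loop (hypothetical helper names):
# rng = np.random.default_rng(1)
# betas = [fit_solar_coefficient(phase_randomize(tsi, rng), other_predictors, temp)
#          for _ in range(10_000)]
# ci = np.percentile(betas, [2.5, 97.5])
```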

3. Discrepancies between empirical estimates

Discrepancies in existing estimates of the global temperature response to solar variability point to the need to intercompare estimates in a systematic manner. Benestad and Schmidt (2009) note two important sources of discrepancies. First, because multiple regression models cannot account for all forced changes or all internal sources of variability, there is the possibility of overattribution of variability to the selected predictors. This may not be resolved by adding additional predictors because many potential contributions are relatively weak and the available record only affords a certain number of degrees of freedom for fitting. Sources of internal variability include the Atlantic multidecadal oscillation (AMO) and the interdecadal Pacific oscillation (IPO), which also have variability on 11-yr time scales and thus may affect solar-cycle sensitivity estimates. In practice, however, we find that adding AMO and IPO indices to the MLR leads to little change in the estimated global temperature response, with inferred sensitivities of monthly temperature to solar variability ranging from 0.05 to 0.06 K (W m−2)−1, depending on which combination of AMO and IPO indices is included in the regression (Table 2).

Table 2.

Comparison of sensitivity [K (W m−2)−1] for various combinations of predictors and temperature records. The left column labels the combination as either the ASH21 baseline or the ASH21 baseline with specified modification. Variance-maximizing lags are used for each calculation, and 95% confidence intervals are estimated using phase randomization of the TSI predictor. Combinations that cannot be calculated are left blank. Although not indicative of absolute precision, values are reported to the thousandths place for purposes of intercomparison.


A second source of discrepancy noted by Benestad and Schmidt (2009) is collinearity between predictors. The conventionally used predictors for global surface temperature (volcanic aerosols, TSI, anthropogenic forcing, and ENSO) are not orthogonal, with volcanic aerosols and ENSO containing spectral energy at the same decadal time scales as solar cycle forcing. For the predictors chosen for our baseline estimate, 56% of spectral energy in TSI is contained within periods between 8 and 14 years, as compared with 34% of anthropogenic forcing, 19% of volcanic forcing, and 10% of ENSO spectral energy. The resulting collinearity makes the regression estimates sensitive to changes in other predictors.

There also appears to be shared variance in the centennial trends present in solar and anthropogenic forcing. Over the twentieth century, several solar forcing reconstructions depict a secular increase in TSI that is largely collinear with the increase in anthropogenic forcing, as depicted by either effective greenhouse gas concentrations or radiative forcing (Wang et al. 2005; Matthes et al. 2017). The linear trend in the anthropogenic forcing predictor used in our best estimate is approximately 1.8 W m−2 since 1882, whereas the secular trend of the TSI predictor is 0.44 W m−2 since 1882, corresponding to an increase in top-of-atmosphere radiative forcing of ~0.1 W m−2. As a result, any global temperature response to long-term solar forcing is potentially conflated with the anthropogenic temperature response and thus potentially subject to bias. For example, if the TSI predictor is replaced with its 0.44 W m−2 linear trend over 1882–2019, MLR yields a sensitivity of 0.42 K (W m−2)−1, evidently conflating the response to trends in greenhouse gas forcing with a response to TSI. We address indeterminacy in trends by linearly detrending all TSI records.

The effects of collinearity can also increase when allowing for lags among predictors. Specifically, lags selected to maximize the solar response coefficient may permit alignment of otherwise uncorrelated signals with TSI. Misios et al. (2016) address this collinearity by applying an 8-yr high-pass filter to their ENSO predictor. This approach, however, has the drawback of tending to attribute ENSO-induced variability in surface temperature at longer time scales to the remaining predictors with spectral energy at decadal time scales, most notably TSI. It follows that use of an 8-yr high-pass ENSO predictor results in a higher estimate of global temperature sensitivity to solar forcing with greater uncertainty, adjusting our baseline estimate from 0.05 ± 0.05 to 0.07 ± 0.06 K (W m−2)−1. We address this issue by using an unfiltered ENSO record and allowing all predictors to freely lag simultaneously such that their explained variance is maximized.

To the two issues already pointed out by Benestad and Schmidt (2009), we also consider the topic of data resolution. Converting a monthly time series to annual resolution involves block averaging and is a form of low-pass filtering that can influence the representation of seasonal-scale processes such as volcanic aerosol or ENSO effects. Sensitivity is estimated using annual-resolution data by 3 of the 12 studies listed in Table 1 (Camp and Tung 2007; Tung et al. 2008; Misios et al. 2016). The question of whether coarsening the predictor and temperature records affects the precision of sensitivity estimates is addressed in Table 2, where estimates from various model formulations are repeated for both monthly and annual resolutions. Across the formulations we consider, estimates from annual models are, on average, less than 1% larger than those from monthly models, but the uncertainty is 13% larger. The ASH21 baseline estimate is representative: the annual-resolution ASH21 estimate is 0.058 ± 0.06 K (W m−2)−1, whereas the monthly-resolution estimate is 0.054 ± 0.05 K (W m−2)−1 (Table 2), where values are reported to the thousandths place for purposes of intercomparison.
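For reference, the block averaging used to coarsen a monthly series to annual resolution amounts to a simple reshape-and-mean, as sketched below for a series that starts in January and spans whole years; this is a generic illustration rather than a description of any particular dataset's processing.

```python
import numpy as np

def annual_means(monthly):
    """Block-average a monthly series into calendar-year means.
    Assumes the series starts in January and its length is a multiple of 12."""
    monthly = np.asarray(monthly, dtype=float)
    return monthly.reshape(-1, 12).mean(axis=1)

# Example: 24 months of data collapse to 2 annual values.
print(annual_means(np.arange(24)))  # [ 5.5 17.5]
```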

4. Choice of predictors and variability of estimates

Observational MLR studies vary in which predictor datasets are used, reflecting uncertainty about which predictor is most suitable for inferring sensitivity. To estimate the magnitude of the resulting model uncertainty, we compare our baseline MLR results to those obtained when using the TSI, anthropogenic, volcanic, and El Niño–Southern Oscillation predictors used in past MLR studies (Lean and Rind 2008; Misios et al. 2016). For TSI, in addition to SOLARIS-HEPPA (Matthes et al. 2017), we evaluate the solar-cycle-only TSI reconstruction by Lean (2000) and the reconstruction by Wang et al. (2005). For anthropogenic forcing, in addition to our baseline estimate using the RCP effective CO2 concentration (Matthes et al. 2017), we also evaluate the anthropogenic radiative forcing used by Miller et al. (2014). Our adopted baseline radiative forcing record from volcanic stratospheric aerosols (Arfeuille et al. 2014; Thomason et al. 2018) is compared to a global-mean aerosol optical depth record at 550 nm (Sato et al. 1993), as used by Lean and Rind (2008) and Misios et al. (2016). Last, the MEI representation of El Niño–Southern Oscillation (Wolter and Timlin 1998) is compared with the Niño-3.4 indices determined from the Kaplan and ERSSTv5 records (Kaplan et al. 1998; Huang et al. 2017), as well as the 8-yr high-passed version of the Kaplan Niño-3.4 index used by Misios et al. (2016).

The effect of predictor choice on solar cycle sensitivity estimates is illustrated by comparing our ASH21 baseline estimate with an MLR analysis using the same predictors as Lean and Rind (2008), where the latter model is referred to here as LR08. LR08 predictors are updated through 2019. Unlike the other TSI predictors, the LR08 TSI record is not detrended, in order to reproduce previously published results. Predictors are scaled by their estimated regression coefficients and depicted in units of temperature in Fig. 1. Although the predicted global temperature time series match each other to within 0.17 K over all times, estimates of the response to solar forcing under LR08 are 38% larger [0.07 K (W m−2)−1] than under ASH21 [0.05 K (W m−2)−1] because of subtle differences in the shape of the predictor time series and associated differences in estimated lag coefficients. Because the global-average temperature record is dominated by a response to anthropogenic forcing, slight variations in the temperature variation attributed to the anthropogenic predictor can have large effects on the scaling of other predictors. ASH21 ascribes more temperature variability to the anthropogenic predictor and a smaller fraction of variability to all other predictors, leading to a weaker TSI response. Variation in predictor choice also affects the estimated lag of other predictors. For example, both LR08 and ASH21 use the same ENSO predictor (MEI), but δe for the ENSO predictor is 4 months for LR08 and 3 months for ASH21.

Fig. 1.

Observed and predicted temperature anomalies: (a) HadCRUT4 global-average surface temperature anomaly at monthly (gray) and annual-average (black) resolution. Predicted monthly temperature anomalies, as determined from lag multiple regression, are overlaid for the LR08 (blue) and ASH21 (orange) models. Also shown are the temperature contribution from anthropogenic predictors for LR08 (smoothly varying blue) and ASH21 (smoothly varying orange). Individual temperature contributions estimated for (b) TSI, (c) volcanic aerosols, and (d) ENSO forcing are shown for LR08 (blue) and ASH21 (orange). The predicted monthly global temperature anomalies shown in (a) are equal to the sum of the contributions from anthropogenic, TSI, volcanic aerosol, and ENSO forcings.


To further explore the contribution of predictor choice to uncertainty in sensitivity, we repeat our MLR analysis using each of the 48 combinations associated with the 3 TSI, 2 anthropogenic, 2 volcanic aerosol, and 4 ENSO predictors. Across the different predictor combinations, solar cycle sensitivity ranges from 0.03 to 0.09 K (W m−2)−1 (Fig. 2). The relative contribution of each predictor choice toward the spread of global temperature estimates is assessed using a linear regression of the sensitivity of temperature to solar forcing as a function of predictor choice. Perhaps unsurprisingly, the choice of TSI reconstruction accounts for the majority of the variance among response estimates, explaining 65% of the total. With the exception of the ASH21 estimate, all estimates shown in Fig. 2 below 0.06 K (W m−2)−1 use the Lean (2000) reconstruction, which includes no secular trend in TSI beyond changes in cycle amplitude. Other differences in predictor choice have a relatively minor influence. The choice of anthropogenic forcing predictor accounts for 8% of the variance across the 48 combinations of predictors and their associated temperature response, the choice of volcanic predictor 0%, and the choice of ENSO predictor 20%. The remaining 7% of the variance is associated with nonlinear interactions among predictor choices.
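One way to implement such an attribution of spread to each predictor choice is an analysis-of-variance-style decomposition over the 48 combinations: group the inferred sensitivities by the level of one factor and report the between-group share of variance. This is an assumed formulation, since the exact encoding of the regression on predictor choice is not specified; the names in the commented usage are placeholders.

```python
import numpy as np
from itertools import product

def variance_share(sensitivities, labels):
    """Share of total variance explained by the group means of one factor.

    sensitivities : 1D array, one inferred sensitivity per predictor combination
    labels        : sequence of factor levels (e.g., TSI dataset name) per combination
    """
    s = np.asarray(sensitivities, dtype=float)
    labels = np.asarray(labels)
    group_means = {lvl: s[labels == lvl].mean() for lvl in set(labels.tolist())}
    fitted = np.array([group_means[lvl] for lvl in labels.tolist()])
    return fitted.var() / s.var()

# Example: enumerate the 3 x 2 x 2 x 4 = 48 combinations and score one factor.
# tsi_options, anthro_options, volc_options, enso_options are placeholder label lists,
# and `sens` would hold the 48 MLR sensitivities in the same order.
# combos = list(product(tsi_options, anthro_options, volc_options, enso_options))
# print(variance_share(sens, [c[0] for c in combos]))  # share attributable to TSI choice
```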

Fig. 2.

Relative likelihoods of MLR models using various predictor combinations. For 48 combinations of predictor dataset choices, the relative likelihood is assessed on the basis of residuals between predicted and observed monthly global temperature anomalies. Models are assessed over the interval 1882–2019, and the median likelihood is set to 1. Estimates using the LR08 and ASH21 models are shown in blue and orange, respectively. Combinations using MEI as an ENSO predictor are marked with a diamond and have, on average, a nine-times-larger likelihood relative to combinations using alternative ENSO predictors. The likelihood-weighted distribution of sensitivities from 48 model combinations is applied to the 100 HadCRUT4 ensemble members and is plotted at bottom to show a median of 0.05 K (W m−2)−1, the interquartile range (IQR; box), and the 95% range of 0.03 to 0.09 K (W m−2)−1 (whiskers).


5. Determining relative likelihoods of MLR model formulations

Given the large space of predictors and global-average temperature time series that could be used to generate and evaluate the MLR model, we turn to Bayesian inversion as a means to determine which sets of predictors are more likely given their agreement with the observed global surface temperature record. The likelihood of a given model can be evaluated using an unnormalized version of Bayes’s theorem:
$$P(M \mid T_{\mathrm{obs}}) \propto P(T_{\mathrm{obs}} \mid M) \times P(M), \quad (2)$$
where M is the set of predictors used to generate the model and Tobs is the observed global-average temperature record. We expect that the combined effects of measurement and model error will lead to a residual between observed temperatures and an MLR model. The prior, P(M), is assumed to be uniform. Residuals are assumed to be normally distributed, and relative likelihoods are assigned to M according to a joint distribution:
$$P(T_{\mathrm{obs}} \mid M) \sim \mathcal{N}\left[T_{\mathrm{model}} - T_{\mathrm{obs}},\, \Sigma(t)\right], \quad (3)$$
where N is a multivariate-normal distribution with mean zero and covariance Σ(t) that is evaluated according to the residuals, Tmodel − Tobs.
The covariance term in Eq. (3) accounts for autocorrelation inherent to temperature time series not otherwise accounted for by our model, and is defined as
$$\Sigma^2(t) = \Sigma_m^2(t) + \Sigma_i^2. \quad (4)$$
Measurement error covariance Σ_m^2(t) is estimated using the variance across the annual-resolution HadCRUT4 ensemble, whereas internal variability covariance Σ_i^2 is calculated as the variance of residuals between the global-average temperature in 110 CMIP5 historical runs and the temperature predicted by MLR models. MLR models are fit for each historical run using predictor time series from the external forcing applied to the models (Wang et al. 2005; Miller et al. 2014; Sato et al. 1993) as well as each model’s simulated Niño-3.4 index. Autocorrelation for both internal variability and measurement error, and thus the off-diagonal covariance, is represented as an autoregressive order-1 process. The autocorrelation of internal variability is determined empirically to be 0.56 from the residuals of the CMIP5 ensemble after removing the MLR fits. The autocorrelation for measurement error is specified to be 0.9 in order to account for long-duration systematic biases (Chan et al. 2019). The combined Σ yields a standard deviation for the residual ranging from 0.19 K for the last two decades of the twentieth century to 0.26 K at the beginning of the twentieth century.
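A minimal sketch of how the residual covariance in Eqs. (3) and (4) might be assembled and used to score a model follows, assuming an AR(1) structure for both error terms as described above. The standard deviations in the commented usage are placeholders; only the autocorrelation values (0.9 and 0.56) are taken from the text.

```python
import numpy as np

def ar1_covariance(sigma, rho):
    """Covariance of an AR(1) process: Cov[i, j] = sigma_i * sigma_j * rho**|i - j|.

    sigma : 1D array of per-time-step standard deviations (may vary in time)
    rho   : lag-1 autocorrelation
    """
    n = len(sigma)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.outer(sigma, sigma) * rho ** lags

def log_likelihood(residuals, cov):
    """Log of a zero-mean multivariate normal density evaluated at the residuals."""
    n = len(residuals)
    sign, logdet = np.linalg.slogdet(cov)
    quad = residuals @ np.linalg.solve(cov, residuals)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

# Sketch: combine measurement-error and internal-variability terms, then score a model.
# `sigma_measurement`, `sigma_internal`, `t_model`, and `t_obs` are placeholder arrays.
# cov = ar1_covariance(sigma_measurement, 0.9) + ar1_covariance(sigma_internal, 0.56)
# relative_likelihood = np.exp(log_likelihood(t_model - t_obs, cov))
```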

Figure 2 displays the likelihood estimates for the 48 predictor forcing combinations. The estimates using the LR08 and ASH21 predictor sets are plotted as well. As a result of collinearity between predictors, the estimated global temperature response depends on predictor choice, with a 95% spread of 0.03–0.09 K (W m−2)−1. The sensitivity of global temperature to TSI has some association with the relative likelihood of the model, with the primary distinguishing factor being which ENSO predictor is specified. Using MEI as the ENSO predictor, in particular, gives an almost order-of-magnitude higher likelihood and a lower inferred sensitivity to solar forcing of 0.05 K (W m−2)−1, as opposed to an average of 0.07 K (W m−2)−1 for other ENSO predictors.

Because such a wide range of estimated global temperature responses can be determined from seemingly reasonable combinations of predictor choices, we caution against reporting any single estimate relying on a particular combination of predictors. Additional formulations of MLR models are possible beyond the model space of this likelihood analysis, some of which are shown in Table 2. These comprise the use of alternative temperature records, incorporation of additional predictors, and annually resolved MLR models. We also consider performing MLR over the time period 1959–2019, when global temperature data coverage is relatively more robust. This broader space of combinations gives estimates ranging from 0.02 to 0.08 K (W m−2)−1, comparable with both the 95% range obtained using the likelihood analysis and with the 95% confidence intervals in the ASH21 baseline estimate obtained from phase randomization. The major exception is that use of NCEP reanalysis temperatures over 1959–2019 gives negative estimates of global temperature sensitivity to TSI variations, possibly because of inaccuracies in climate trends brought about by changes in data availability through time (Bengtsson et al. 2004).

6. Modest power of the MLR test inferred from analysis of forced and unforced simulations

To assess the power of MLR to accurately characterize the global temperature sensitivity to TSI variations, we repeat the MLR analysis using output from the general circulation model integrations of the CMIP5 ensemble. More specifically, we apply the MLR method to monthly global-average surface temperature from two sets of CMIP5 experiments: historical runs that include anthropogenic, volcanic aerosol, and solar forcings, and preindustrial control runs that have no external forcing. For both sets of experiments, the ENSO predictor is determined by calculating a Niño-3.4 index internal to each simulation. From the CMIP5 model ensemble, we draw upon 110 historical runs of global temperature over 1882–2005, as well as 5289 124-yr subsets of control runs that span the same duration as the historical runs, all of which are regridded to 2.5° × 2.5° resolution. Control run subsets are drawn from longer runs by taking overlapping 124-yr windows, with successive windows within a run spaced 3 years apart, corresponding to approximately one-quarter of a solar cycle. Because the preindustrial control runs are unperturbed by external forcings, the realizations of TSI sensitivity βs from these experiments give a null distribution of coefficients in the absence of any true solar forcing signal. By comparing the two distributions of solar response coefficients, the statistical power of MLR can be estimated under the assumption that the actual TSI response is approximated by the CMIP5 historical runs. Implicit in this approach is an assumption that models accurately capture internal variability at decadal time scales, which may not be the case (Smith et al. 2020).

The median sensitivity for the historical experiments is 0.06 K (W m−2)−1, with a distribution that generally coincides with the high-likelihood sensitivity estimate distributions from observations for 1882–2005 and 1882–2019, which have a median of 0.04 and 0.05 K (W m−2)−1, respectively (Fig. 3a). The control distribution is centered on 0.00 K (W m−2)−1 but with a local minimum in likelihood at 0 K (W m−2)−1 because responses are biased away from zero by selection of a lag that maximizes explained variance. From these two distributions, we can empirically determine the statistical power of the MLR approach, defined as the probability of correctly rejecting the null hypothesis—in this case, rejecting a scenario with no global surface temperature response to solar cycle forcing (Wilks 2011); 58% of observations from the historical experiment are above the 95th percentile of the control distribution, indicating that our MLR test has a modest statistical power by which to discern a true signal.
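The power calculation described above amounts to comparing the two empirical coefficient distributions: the fraction of forced-run coefficients that exceed the 95th percentile of the control-run coefficients. A minimal sketch follows, with synthetic stand-ins for the CMIP5-derived sensitivities.

```python
import numpy as np

def statistical_power(null_betas, forced_betas, alpha=0.05):
    """Fraction of forced-run coefficients exceeding the (1 - alpha) quantile of the
    null (control-run) coefficient distribution, i.e., the power of a one-sided test."""
    threshold = np.quantile(null_betas, 1.0 - alpha)
    return np.mean(np.asarray(forced_betas) > threshold)

# Example with synthetic coefficient distributions standing in for the CMIP5 results
# (the actual distributions come from refitting the MLR to control and historical runs).
rng = np.random.default_rng(0)
null_betas = rng.normal(0.00, 0.03, 5289)   # placeholder control-run sensitivities
forced_betas = rng.normal(0.06, 0.03, 110)  # placeholder historical-run sensitivities
print(statistical_power(null_betas, forced_betas))
```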

Fig. 3.

CMIP5 TSI sensitivities using CMD and MLR. Histograms are of the TSI sensitivities inferred for control simulations (black) and historical simulations (orange). Control simulations are drawn from subsets of CMIP5 control runs that are unforced. Historical simulations are drawn from 110 CMIP5 historical runs that include external radiative forcings, including TSI. Distributions are shown using (a) monthly-resolution MLR from 1882 to 2005, (b) annual-resolution CMD from 1959 to 2005, and (c) monthly-resolution MLR from 1959 to 2005. Corresponding observational estimates from HadCRUT4 (solid line) and NCEP reanalysis (dashed line) over 1959–2019 or 1882–2019 and using MLR (red) or CMD (blue) are included where possible. Also shown in (a) is the box plot from Fig. 2, indicating the distribution of observational sensitivity estimates over 1882–2019. A distribution following the same procedure is also generated for 1882–2005, matching the time interval of CMIP5 runs. The CMIP5 distributions yield estimates of statistical power at p = 0.05 for each approach of 0.58 [in (a)], 0.47 [in (b)], and 0.32 [in (c)], indicating that the MLR test using the longest record available is the most discriminating approach of those examined.


7. Solar cycle sensitivity using the composite mean difference approach

The largest estimates of sensitivity to TSI have come from the composite mean difference (CMD) method (Camp and Tung 2007; Tung and Camp 2008). CMD involves calculating the spatially varying response to a forcing by taking the difference of means between two subsets of years featuring either positive or negative anomalies of the predictor forcing:
$$c(i) = \frac{1}{N_1 + N_2}\left[\sum_{y_1}^{N_1} T(i, y_1) - \sum_{y_2}^{N_2} T(i, y_2)\right]. \quad (5)$$
Here y1 indexes over the N1 years determined to coincide with anomalously large TSI forcing, y2 indexes over the N2 years having anomalously small TSI forcing, and T(i, y) is the annual zonal-mean temperature at latitude i in year y. To estimate temperature variability attributed to solar forcing, T(i, y) is projected onto c(i):
$$p(y) = \sum_i c(i) \times T(i, y). \quad (6)$$
A study by Camp and Tung (2007) using CMD estimated a sensitivity of global temperatures to TSI of 0.18 ± 0.08 K (W m−2)−1 from NCEP reanalysis. Using a related linear discriminant analysis approach, Tung et al. (2008) also obtained a sensitivity estimate of 0.17 ± 0.04 K (W m−2)−1 using temperatures from NCEP reanalysis and lower estimates of 0.12 ± 0.04 K (W m−2)−1 using temperature estimates from ERA-40, GISS, or HadCRUT3 datasets. In all cases, the authors arrive at these estimates after removing years following the El Chichón and Pinatubo eruptions (1982–83 and 1992–93, respectively), as well as years during which TSI is within 0.06 W m−2 of the 1959–2004 mean. We find differences in sensitivity estimates of no more than 0.02 K (W m−2)−1 when varying the TSI threshold between 0 and 0.12 W m−2.
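A minimal sketch of the CMD pattern and projection, following Eqs. (5) and (6) as reconstructed above, is given below; year censoring (eruption years and near-zero TSI years) is assumed to be applied before selecting the high- and low-TSI indices, and the array layout is an illustrative choice.

```python
import numpy as np

def cmd_pattern(T, high_years, low_years):
    """Composite mean difference pattern c(i), Eq. (5): zonal-mean temperatures summed
    over high-TSI years minus low-TSI years, normalized by the total number of years.

    T          : array of shape (n_lat, n_years), annual zonal-mean temperature anomalies
    high_years : indices of years with anomalously large TSI
    low_years  : indices of years with anomalously small TSI
    """
    n = len(high_years) + len(low_years)
    return (T[:, high_years].sum(axis=1) - T[:, low_years].sum(axis=1)) / n

def cmd_projection(T, c):
    """Project each year's zonal-mean temperature onto the CMD pattern, Eq. (6)."""
    return c @ T  # p(y) = sum_i c(i) * T(i, y)
```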

We update the CMD estimates to span 1959–2019 and find a sensitivity of 0.10 ± 0.13 K (W m−2)−1 using NCEP and 0.07 ± 0.13 K (W m−2)−1 using HadCRUT4. These results and confidence intervals are consistent with the ASH21 baseline of 0.05 ± 0.05 K (W m−2)−1. Similarly, updating the 0.17 ± 0.04 K (W m−2)−1 NCEP reanalysis estimate and 0.12 ± 0.04 K (W m−2)−1 HadCRUT3 estimate obtained by Tung et al. (2008) using linear discriminant analysis by extending the record from 1959–2004 to 1959–2019 gives a more muted sensitivity of 0.10 ± 0.12 K (W m−2)−1 for NCEP reanalysis and 0.10 ± 0.12 K (W m−2)−1 for HadCRUT4. Due to updates in temperature reconstructions since earlier publication of linear discriminant analysis estimates, our estimates also differ when using the original interval of 1959–2004. The smaller confidence intervals reported by Camp and Tung (2007) and Tung and Camp (2008) are based on standard regression error. Our confidence interval estimates are wider than previously reported intervals because our estimates account for autocorrelation through recomputing results 10,000 times using phase-randomized versions of the TSI signal.

The discrepancy between 1959–2004 and 1959–2019 CMD estimates is indicative of volatility associated with the exact time interval chosen for the analysis. We examine the variability of CMD estimates by computing estimates over successive 46-yr intervals (Fig. 4). The criteria used by Camp and Tung (2007) for excluding years are similarly applied, whereby years associated with the El Chichón and Pinatubo eruptions and years when the TSI anomaly is near zero are censored. Whereas the reported sensitivity of CMD applied to NCEP reanalysis for the 46-yr period over 1959–2004 is 0.18 ± 0.08 K (W m−2)−1, the 46-yr period over 1974–2019 gives 0.06 K (W m−2)−1. Using HadCRUT4 surface temperatures, a local minimum of 0.05 K (W m−2)−1 is reached for a sliding 46-yr analysis when evaluating 1961–2006. MLR estimates are also variable when calculated over sliding 46-yr windows, ranging between 0.04 and 0.09 K (W m−2)−1 for NCEP temperatures and between 0.04 and 0.07 K (W m−2)−1 for HadCRUT4. In comparison with CMD, however, sliding MLR estimates for HadCRUT4 are more evenly distributed about the best estimate obtained when using the 1959–2019 interval (Fig. 4), whereas the 1959–2019 NCEP-derived MLR estimate is a clear outlier (as discussed earlier).

Fig. 4.

Sliding estimate of TSI sensitivity for different methods and datasets. Shown are estimates using CMD (blue), LDA (gray), and MLR (red) applied to the HadCRUT4 (solid line) and NCEP reanalysis (dashed line) datasets. (a) Sensitivity estimates for sliding 46-yr windows centered at the year shown on the x axis. (b) Central estimates and 95% confidence intervals estimated using CMD (Camp and Tung 2007) and LDA (Tung et al. 2008) over 1959–2004 and centered on 1982, with confidence intervals determined using standard regression uncertainties. (c) Central estimates and 95% confidence intervals for the period 1959–2019, with confidence intervals determined using phase randomization. Over the most recent 46 years, as well as over the wider 1959–2019 period, central estimates indicate sensitivities below 0.10 K (W m−2)−1.


Similar to our evaluation of statistical power for MLR, we use CMIP5 model experiments to examine differences between the MLR and CMD approaches. We again select overlapping subsets from the CMIP5 control runs, each regridded to 2.5° × 2.5° resolution. For each sample, we estimate global temperature sensitivity to solar cycle forcing using the same approach as applied to the observations [Eq. (5)], including with respect to censoring years having small TSI anomalies. We evaluate the CMIP5 historical output between 1959 and 2005, an interval that begins when global surface temperature variations are relatively well constrained and ends with the termination of the historical simulations in 2005.

The empirical distribution from applying CMD to CMIP5 control runs over 1959–2005 has a median sensitivity of zero with a 95% range from −0.11 to 0.11 K (W m−2)−1 (Fig. 3b). Performing the same CMD analyses on 111 runs of the CMIP5 historical simulations that include external volcanic and solar forcing over 1959–2005 yields a median of 0.08 K (W m−2)−1 and a 95% range from −0.03 to 0.21 K (W m−2)−1; 47% of observations from this historical experiment are above the 95th percentile of the control distribution. For purposes of comparison, we also estimate sensitivity for the same CMIP5 runs using monthly-resolution MLR over 1959–2005 (Fig. 3c).

8. Discussion and conclusions

Revisiting various estimates of the sensitivity of global temperatures to variations in TSI with updated observations, we find values that are in keeping with those indicated by general circulation model simulations (Table 1). Specifically, a likelihood-weighted distribution (Fig. 2) gives a maximum likelihood value of 0.05 K (W m−2)−1 and a 95% confidence interval of 0.03–0.09 K (W m−2)−1. Whereas additional processes have been proposed to amplify the climate response to solar cycle forcing, observations of the global temperature response to solar forcing are found to be in keeping with standard general circulation model simulations (Fig. 3). There are, however, several indications of broad uncertainty in inferred TSI sensitivity.

Observational estimates are generally consistent with results not only from historical CMIP5 simulations, but also from unforced CMIP5 simulations. Only one of the five observational TSI sensitivities that we consider in Fig. 3 is above the 95th percentile of the results from the unforced CMIP5 simulations. The CMD-derived estimate using NCEP reanalysis between 1959 and 2019 of 0.10 K (W m−2)−1 has a p value of 0.03 relative to its corresponding null distribution. A complete analysis, however, should account for the fact that we are in a multiple testing regime. For example, MLR applied to the same dataset gives a negative TSI sensitivity. If a Bonferroni correction is performed, which accounts for multiple tests by dividing the significance threshold of the p value by the number of tests performed, the CMD result no longer appears significant at 95% confidence.

Inability to distinguish observed estimates from the control distribution reflects substantial overlap between results from CMIP5 control and historical distributions. Statistical power is 0.47 for CMD between 1959 and 2005, 0.58 for MLR between 1882 and 2005, and 0.32 for MLR between 1959 and 2005 (Fig. 3). Consistent with our focus, the CMIP5 analyses suggest that the longest MLR estimates are both the most powerful, as judged by the statistical power of the test, and the most likely to give an accurate estimate, as judged by the narrowest 95% confidence interval. There should be little expectation, however, for the null hypothesis to be rejected, unless the TSI sensitivity is, in fact, substantially higher than represented in general circulation model simulations.

Uncertainty is also indicated by variation in the inferred sensitivity of global temperature according to which predictors and response variables are used (Table 2) as well as the interval that is analyzed (Fig. 4). As an example of where predictor uncertainty may arise, reconstructions of TSI before the start of the satellite instrumental record in 1978 generally require determining the relationship between TSI and sunspots. This relationship is established by comparing sunspot counts to recorded TSI over the satellite interval, but given calibration uncertainties in satellite observing platforms, it is difficult to verify the stability of such a relationship (Coddington et al. 2016; Dudok de Wit et al. 2017). The instrumental temperature record may also exhibit uncertainties beyond those captured by the HadCRUT4 temperature ensemble. Systematic biases in temperature records as a result of measurement and data assimilation error have recently been shown to substantively affect the global-average temperature record over the early twentieth century (Chan et al. 2019), a period when MLR suggests zero or negative correlation between TSI and temperature. The fact that selection of particular epochs strongly influences sensitivity estimates may also reflect internal variability altering estimates of sensitivity to TSI variability. In the regional case of solar-cycle influences on the North Atlantic Oscillation (NAO), Chiodo et al. (2019) also find that estimates of an NAO response to solar cycles are sensitive to the intervals examined.

Additional uncertainty in TSI sensitivity may be contributed by methodological limitations associated with the representation of a lagged response to forcing. Our MLR approach does not account for a distribution of response time scales. In fact, the global temperature response should be integrated over a range of time scales that cannot be fully captured by a single lag (e.g., Proistosescu and Huybers 2017). Thus, one may find that an anthropogenic forcing time series with a larger increase toward the present day generates better predictions, but for the wrong reasons. This provides a possible explanation as to why MLR results using an anthropogenic forcing dataset that does not include the effects of aerosol forcing (the CMIP5 effective CO2 concentration) give a Bayesian relative likelihood twice as large, on average, as those based on a forcing dataset that does include the effects of aerosol forcing, specifically that provided by Miller et al. (2014).

Further exploration of how to design a test that increases statistical power in identifying a solar response appears to be warranted. Examination of how missing data, measurement uncertainty, and lags respectively influence CMD and MLR estimates is also potentially useful. One approach may be to test any new methods with regard to their ability to distinguish the global temperature response to solar forcing between historical and control simulations, omitting data not available from observations and adding noise to the remainder in order to better emulate available observations.

Acknowledgments

We thank Cristian Proistosescu and Anna Lea Albright for their helpful insights. The paper was improved in responding to comments on an earlier version by three reviewers, with special thanks to K. K. Tung (University of Washington) for extensive comments that improved the paper, including identification of a significant error in the initial draft. Author Amdur is supported by Future Investigators in NASA Earth and Space Science and Technology (FINESST-19) Grant 80NSSC19K1327. Authors Stine and Huybers were supported by NSF P2C2 Award AGS-1903674, and Stine was additionally supported by NSF Award ICER-1824770.

Data availability statement.

All sensitivity calculations published herein, and their associated datasets, have been collected and archived (https://github.com/tamdur/SolarCycleSensitivity). All forcing datasets used in this work are publicly available through the Climatic Research Unit, World Climate Research Program LLNL CMIP5 and CMIP6, NASA Goddard Institute for Space Studies, and NOAA Physical Sciences Laboratory websites and are documented in the repository. See citations within this paper for more information.

REFERENCES

  • Andrews, M., J. Knight, and L. Gray, 2015: A simulated lagged response of the North Atlantic Oscillation to the solar cycle over the period 1960–2009. Environ. Res. Lett., 10, 054022, https://doi.org/10.1088/1748-9326/10/5/054022.

  • Arfeuille, F. X., D. Weisenstein, H. Mack, E. Rozanov, T. Peter, and S. Brönnimann, 2014: Volcanic forcing for climate modeling: A new microphysics-based data set covering years 1600–present. Climate Past, 10, 359–375, https://doi.org/10.5194/cp-10-359-2014.

  • Benestad, R., and G. Schmidt, 2009: Solar trends and global warming. J. Geophys. Res., 114, D14101, https://doi.org/10.1029/2008JD011639.

  • Bengtsson, L., S. Hagemann, and K. I. Hodges, 2004: Can climate trends be calculated from reanalysis data? J. Geophys. Res., 109, D11111, https://doi.org/10.1029/2004JD004536.

  • Cai, M., and K.-K. Tung, 2012: Robustness of dynamical feedbacks from radiative forcing: 2% solar versus 2×CO2 experiments in an idealized GCM. J. Atmos. Sci., 69, 2256–2271, https://doi.org/10.1175/JAS-D-11-0117.1.

  • Camp, C. D., and K. K. Tung, 2007: Surface warming by the solar cycle as revealed by the composite mean difference projection. Geophys. Res. Lett., 34, L14703, https://doi.org/10.1029/2007GL030207.

  • Chan, D., E. C. Kent, D. I. Berry, and P. Huybers, 2019: Correcting datasets leads to more homogeneous early-twentieth-century sea surface warming. Nature, 571, 393–397, https://doi.org/10.1038/s41586-019-1349-2.

  • Chiodo, G., J. Oehrlein, L. M. Polvani, J. C. Fyfe, and A. K. Smith, 2019: Insignificant influence of the 11-year solar cycle on the North Atlantic Oscillation. Nat. Geosci., 12, 94–99, https://doi.org/10.1038/s41561-018-0293-3.

  • Coddington, O., J. Lean, P. Pilewskie, M. Snow, and D. Lindholm, 2016: A solar irradiance climate data record. Bull. Amer. Meteor. Soc., 97, 1265–1282, https://doi.org/10.1175/BAMS-D-14-00265.1.

  • Douglass, D. H., and B. D. Clader, 2002: Climate sensitivity of the Earth to solar irradiance. Geophys. Res. Lett., 29, 33-1–33-4, https://doi.org/10.1029/2002GL015345.

  • Dudok de Wit, T., G. Kopp, C. Fröhlich, and M. Schöll, 2017: Methodology to create a new total solar irradiance record: Making a composite out of multiple data records. Geophys. Res. Lett., 44, 1196–1203, https://doi.org/10.1002/2016GL071866.

  • Folland, C. K., O. Boucher, A. Colman, and D. E. Parker, 2018: Causes of irregularities in trends of global mean surface temperature since the late 19th century. Sci. Adv., 4, eaao5297, https://doi.org/10.1126/sciadv.aao5297.

  • Fröhlich, C., and J. Lean, 1998: The sun’s total irradiance: Cycles, trends, and related climate change uncertainties since 1976. Geophys. Res. Lett., 25, 4377–4380, https://doi.org/10.1029/1998GL900157.

  • Gray, L. J., and Coauthors, 2010: Solar influences on climate. Rev. Geophys., 48, RG4001, https://doi.org/10.1029/2009RG000282.

  • Hegerl, G. C., K. Hasselmann, U. Cubasch, J. F. Mitchell, E. Roeckner, R. Voss, and J. Waszkewitz, 1997: Multi-fingerprint detection and attribution analysis of greenhouse gas, greenhouse gas-plus-aerosol and solar forced climate change. Climate Dyn., 13, 613–634, https://doi.org/10.1007/s003820050186.

  • Huang, B., and Coauthors, 2017: Extended reconstructed sea surface temperature, version 5 (ERSSTv5): Upgrades, validations, and intercomparisons. J. Climate, 30, 8179–8205, https://doi.org/10.1175/JCLI-D-16-0836.1.

  • Kaplan, A., M. A. Cane, Y. Kushnir, A. C. Clement, M. B. Blumenthal, and B. Rajagopalan, 1998: Analyses of global sea surface temperature 1856–1991. J. Geophys. Res., 103, 18 567–18 589, https://doi.org/10.1029/97JC01736.

  • Lean, J., 2000: Evolution of the sun’s spectral irradiance since the Maunder Minimum. Geophys. Res. Lett., 27, 2425–2428, https://doi.org/10.1029/2000GL000043.

  • Lean, J., and D. H. Rind, 2008: How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophys. Res. Lett., 35, L18701, https://doi.org/10.1029/2008GL034864.

  • Lean, J., J. Beer, and R. Bradley, 1995: Reconstruction of solar irradiance since 1610: Implications for climate change. Geophys. Res. Lett., 22, 3195–3198, https://doi.org/10.1029/95GL03093.

  • MacMynowski, D. G., H.-J. Shin, and K. Caldeira, 2011: The frequency response of temperature and precipitation in a climate model. Geophys. Res. Lett., 38, L16711, https://doi.org/10.1029/2011GL048623.

  • Matthes, K., and Coauthors, 2017: Solar forcing for CMIP6 (v3.2). Geosci. Model Dev., 10, 2247–2302, https://doi.org/10.5194/gmd-10-2247-2017.

  • Meehl, G. A., J. M. Arblaster, K. Matthes, F. Sassi, and H. van Loon, 2009: Amplifying the Pacific climate system response to a small 11-year solar cycle forcing. Science, 325, 1114–1118, https://doi.org/10.1126/science.1172872.

  • Meinshausen, M., and Coauthors, 2011: The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change, 109, 213–241, https://doi.org/10.1007/s10584-011-0156-z.

  • Miller, R. L., and Coauthors, 2014: CMIP5 historical simulations (1850–2012) with GISS ModelE2. J. Adv. Model. Earth Syst., 6, 441–478, https://doi.org/10.1002/2013MS000266.

  • Misios, S., and Coauthors, 2016: Solar signals in CMIP-5 simulations: Effects of atmosphere–ocean coupling. Quart. J. Roy. Meteor. Soc., 142, 928–941, https://doi.org/10.1002/qj.2695.

  • Morice, C. P., J. J. Kennedy, N. A. Rayner, and P. D. Jones, 2012: Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set. J. Geophys. Res., 117, D08101, https://doi.org/10.1029/2011JD017187.

  • Pittock, A. B., 1978: A critical look at long-term sun–weather relationships. Rev. Geophys., 16, 400–420, https://doi.org/10.1029/RG016i003p00400.

  • Proistosescu, C., and P. J. Huybers, 2017: Slow climate mode reconciles historical and model-based estimates of climate sensitivity. Sci. Adv., 3, e1602821, https://doi.org/10.1126/sciadv.1602821.

  • Sato, M., J. E. Hansen, M. P. McCormick, and J. B. Pollack, 1993: Stratospheric aerosol optical depths, 1850–1990. J. Geophys. Res., 98, 22 987–22 994, https://doi.org/10.1029/93JD02553.

  • Scafetta, N., and B. J. West, 2005: Estimated solar contribution to the global surface warming using the ACRIM TSI satellite composite. Geophys. Res. Lett., 32, L18713, https://doi.org/10.1029/2005GL023849.

  • Smith, D. M., and Coauthors, 2020: North Atlantic climate far more predictable than models imply. Nature, 583, 796–800, https://doi.org/10.1038/s41586-020-2525-0.

  • Stevens, M. J., and G. R. North, 1996: Detection of the climate response to the solar cycle. J. Atmos. Sci., 53, 2594–2608, https://doi.org/10.1175/1520-0469(1996)053<2594:DOTCRT>2.0.CO;2.

  • Svensmark, H., M. Enghoff, N. Shaviv, and J. Svensmark, 2017: Increased ionization supports growth of aerosols into cloud condensation nuclei. Nat. Commun., 8, 2199, https://doi.org/10.1038/s41467-017-02082-2.

  • Theiler, J., S. Eubank, A. Longtin, B. Galdrikian, and J. D. Farmer, 1992: Testing for nonlinearity in time series: The method of surrogate data. Physica D, 58, 77–94, https://doi.org/10.1016/0167-2789(92)90102-S.

  • Thomason, L. W., and Coauthors, 2018: A global space-based stratospheric aerosol climatology: 1979–2016. Earth Syst. Sci. Data, 10, 469–492, https://doi.org/10.5194/essd-10-469-2018.

  • Tung, K. K., and C. D. Camp, 2008: Solar cycle warming at the Earth’s surface in NCEP and ERA-40 data: A linear discriminant analysis. J. Geophys. Res., 113, D05114, https://doi.org/10.1029/2007JD009164.

  • Tung, K. K., J. Zhou, and C. D. Camp, 2008: Constraining model transient climate response using independent observations of solar-cycle forcing and response. Geophys. Res. Lett., 35, L17707, https://doi.org/10.1029/2008GL034240.

  • Wang, Y.-M., J. Lean, and N. Sheeley Jr., 2005: Modeling the sun’s magnetic field and irradiance since 1713. Astrophys. J., 625, 522–538, https://doi.org/10.1086/429689.

  • Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. Academic Press, 704 pp.

  • Wolter, K., and M. S. Timlin, 1998: Measuring the strength of ENSO events: How does 1997/98 rank? Weather, 53, 315–324, https://doi.org/10.1002/j.1477-8696.1998.tb06408.x.
  • Fig. 1. Observed and predicted temperature anomalies: (a) HadCRUT4 global-average surface temperature anomaly at monthly (gray) and annual-average (black) resolution. Predicted monthly temperature anomalies, as determined from lagged multiple regression, are overlaid for the LR08 (blue) and ASH21 (orange) models. Also shown are the temperature contributions from anthropogenic predictors for LR08 (smoothly varying blue) and ASH21 (smoothly varying orange). Individual temperature contributions estimated for (b) TSI, (c) volcanic aerosols, and (d) ENSO forcing are shown for LR08 (blue) and ASH21 (orange). The predicted monthly global temperature anomalies shown in (a) are equal to the sum of the contributions from anthropogenic, TSI, volcanic aerosol, and ENSO forcings.
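    As a rough illustration of the lagged multiple linear regression underlying Fig. 1, the following Python sketch decomposes a synthetic monthly temperature series into contributions from anthropogenic, TSI, volcanic, and ENSO predictors. All data, lag values, and coefficients below are invented for illustration and are not the datasets or lags used in the analysis.

```python
# Minimal sketch of a lagged MLR decomposition (illustrative, synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_months = 12 * 60                      # 60 years of monthly data

# Synthetic stand-ins for the predictors (anthropogenic, TSI, volcanic, ENSO).
t = np.arange(n_months)
predictors = {
    "anthro": 0.01 * t / 12.0,                          # slow trend
    "tsi": 0.5 * np.sin(2 * np.pi * t / (11 * 12)),     # 11-yr cycle
    "volcanic": -np.exp(-((t - 300) / 24.0) ** 2),      # one eruption pulse
    "enso": rng.standard_normal(n_months).cumsum() * 0.01,
}
lags = {"anthro": 0, "tsi": 1, "volcanic": 6, "enso": 3}  # months, illustrative

def lagged(x, lag):
    # Simple circular lag; adequate for this sketch.
    return np.roll(x, lag)

# Synthetic "observed" temperature with known coefficients plus noise.
true_beta = {"anthro": 1.0, "tsi": 0.05, "volcanic": 0.3, "enso": 0.1}
y = sum(true_beta[k] * lagged(predictors[k], lags[k]) for k in predictors)
y += 0.05 * rng.standard_normal(n_months)

# Design matrix of lagged predictors plus an intercept; ordinary least squares.
X = np.column_stack(
    [lagged(predictors[k], lags[k]) for k in predictors] + [np.ones(n_months)]
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Per-predictor temperature contributions, analogous to the panels of Fig. 1.
contributions = {k: beta[i] * X[:, i] for i, k in enumerate(predictors)}
print({k: round(float(beta[i]), 3) for i, k in enumerate(predictors)})
```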

  • Fig. 2. Relative likelihoods of MLR models using various predictor combinations. For 48 combinations of predictor dataset choices, the relative likelihood is assessed on the basis of residuals between predicted and observed monthly global temperature anomalies. Models are assessed over the interval 1882–2019, and the median likelihood is set to 1. Estimates using the LR08 and ASH21 models are shown in blue and orange, respectively. Combinations using MEI as an ENSO predictor are marked with a diamond and have, on average, a nine-times-larger likelihood relative to combinations using alternative ENSO predictors. The likelihood-weighted distribution of sensitivities from 48 model combinations is applied to the 100 HadCRUT4 ensemble members and is plotted at bottom to show a median of 0.05 K (W m−2)−1, the interquartile range (IQR; box), and the 95% range of 0.03 to 0.09 K (W m−2)−1 (whiskers).
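    A minimal sketch of how predictor combinations might be likelihood-weighted from their regression residuals, in the spirit of Fig. 2. The Gaussian-residual likelihood and all numbers below are illustrative assumptions, not the paper's exact procedure.

```python
# Likelihood weighting of candidate MLR models (illustrative, synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_models, n_obs = 48, 1656              # e.g., monthly data over 1882-2019

# Synthetic residual series and TSI sensitivities for each predictor combination.
resid_std = rng.uniform(0.100, 0.103, n_models)          # K
residuals = resid_std[:, None] * rng.standard_normal((n_models, n_obs))
sensitivity = rng.normal(0.05, 0.015, n_models)           # K per W m-2

# Gaussian log-likelihood of each model's residuals (up to an additive constant).
sigma2 = residuals.var(axis=1)
loglik = -0.5 * n_obs * np.log(sigma2) - 0.5 * (residuals ** 2).sum(axis=1) / sigma2

# Relative likelihood, with the median model set (approximately) to 1.
rel_lik = np.exp(loglik - np.median(loglik))

# Likelihood-weighted median of the sensitivity distribution.
weights = rel_lik / rel_lik.sum()
order = np.argsort(sensitivity)
cdf = np.cumsum(weights[order])
median_sens = sensitivity[order][np.searchsorted(cdf, 0.5)]
print(f"likelihood-weighted median sensitivity: {median_sens:.3f} K (W m-2)^-1")
```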

  • Fig. 3. CMIP5 TSI sensitivities using CMD and MLR. Histograms are of the TSI sensitivities inferred for control simulations (black) and historical simulations (orange). Control simulations are drawn from subsets of CMIP5 control runs that are unforced. Historical simulations are drawn from 110 CMIP5 historical runs that include external radiative forcings, including TSI. Distributions are shown using (a) monthly-resolution MLR from 1882 to 2005, (b) annual-resolution CMD from 1959 to 2005, and (c) monthly-resolution MLR from 1959 to 2005. Corresponding observational estimates from HadCRUT4 (solid line) and NCEP reanalysis (dashed line) over 1959–2019 or 1882–2019 and using MLR (red) or CMD (blue) are included where possible. Also shown in (a) is the box plot from Fig. 2, indicating the distribution of observational sensitivity estimates over 1882–2019. A distribution following the same procedure is also generated for 1882–2005, matching the time interval of CMIP5 runs. The CMIP5 distributions yield estimates of statistical power at p = 0.05 for each approach of 0.58 [in (a)], 0.47 [in (b)], and 0.32 [in (c)], indicating that the MLR test using the longest record available is the most discriminating approach of those examined.
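    The statistical-power figures quoted for Fig. 3 follow from comparing sensitivity estimates in forced and unforced simulations. A hypothetical sketch, using entirely synthetic distributions in place of the CMIP5 estimates, is:

```python
# Sketch of the power calculation: the unforced control runs define a null
# distribution of apparent TSI sensitivities, and power is the fraction of
# forced historical runs exceeding the one-sided p = 0.05 threshold.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sensitivity estimates, K (W m-2)^-1 (not CMIP5 values).
control = rng.normal(0.00, 0.02, 5000)      # unforced runs: no true signal
historical = rng.normal(0.04, 0.02, 110)    # forced runs: modest true signal

# One-sided detection threshold at p = 0.05 from the control distribution.
threshold = np.quantile(control, 0.95)

# Statistical power: probability of detecting the signal in a forced run.
power = np.mean(historical > threshold)
print(f"threshold = {threshold:.3f} K (W m-2)^-1, power = {power:.2f}")
```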

  • Fig. 4. Sliding estimate of TSI sensitivity for different methods and datasets. Shown are estimates using CMD (blue), LDA (gray), and MLR (red) applied to the HadCRUT4 (solid line) and NCEP reanalysis (dashed line) datasets. (a) Sensitivity estimates for sliding 46-yr windows centered at the year shown on the x axis. (b) Central estimates and 95% confidence intervals estimated using CMD (Camp and Tung 2007) and LDA (Tung et al. 2008) over 1959–2004 and centered on 1982, with confidence intervals determined using standard regression uncertainties. (c) Central estimates and 95% confidence intervals for the period 1959–2019, with confidence intervals determined using phase randomization. Over the most recent 46 years, as well as over the wider 1959–2019 period, central estimates indicate sensitivities below 0.10 K (W m−2)−1.
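    Phase randomization (surrogate data in the sense of Theiler et al. 1992) can be sketched as follows. The residual model, forcing series, and surrogate-based confidence interval below are illustrative assumptions rather than a reproduction of the calculation behind Fig. 4c; surrogates preserve the power spectrum of the residuals while scrambling their phases.

```python
# Phase-randomization confidence interval for a regression slope (illustrative).
import numpy as np

rng = np.random.default_rng(3)

def phase_randomize(x, rng):
    """Surrogate of x with the same power spectrum but randomized phases."""
    n = x.size
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0                       # keep the mean component real
    if n % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

# Synthetic forcing and temperature with a known sensitivity plus red noise.
n = 61 * 12                                                       # 1959-2019, monthly
forcing = np.sin(2 * np.pi * np.arange(n) / (11 * 12))            # W m-2
noise = np.convolve(rng.standard_normal(n), np.ones(24) / 24, mode="same")
temp = 0.05 * forcing + 0.1 * noise                               # K

def fit_sensitivity(y, x):
    # Slope of y regressed on x (intercept included).
    return np.polyfit(x, y, 1)[0]

est = fit_sensitivity(temp, forcing)
resid = temp - est * forcing

# Re-estimate the sensitivity on many surrogate realizations of the residuals.
null = np.array([
    fit_sensitivity(est * forcing + phase_randomize(resid, rng), forcing)
    for _ in range(1000)
])
lo, hi = np.quantile(null, [0.025, 0.975])
print(f"estimate {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}] K (W m-2)^-1")
```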
