Search Results

You are looking at 1–10 of 17 items for:

  • Author or Editor: Peter Huybers
  • Journal of Climate
Peter Huybers

Abstract

Spectral analysis of the Greenland Ice Sheet Project 2 (GISP2) δ18O record has been interpreted to show a 1/(1470 yr) spectral peak that is highly statistically significant (p < 0.01). The presence of such a peak, if accurate, provides an important clue about the mechanisms controlling glacial climate. As is standard, however, statistical significance was judged relative to a null model, H0, consisting of a first-order autoregressive process, AR(1). In this study, H0 is generalized using an autoregressive moving-average process, ARMA(p, q). A rule of thumb is proposed for evaluating the adequacy of H0 that involves comparing the expected and observed variances of the logarithm of a spectral estimate, which are generally consistent insofar as removal of the ARMA structure from a time series results in an approximately level spectral estimate. An AR(1), or ARMA(1, 0), process is shown to be an inadequate representation of the GISP2 δ18O structure, whereas higher-order ARMA processes result in approximately level spectral estimates. After suitably leveling GISP2 δ18O and accounting for multiple hypothesis testing, multitaper spectral estimation indicates that the 1/(1470 yr) peak is insignificant. The seeming prominence of the 1/(1470 yr) peak is explained as the result of evaluating a spectrum involving higher-order ARMA structure against an overly simple null model and of the peak having been selected because it appeared anomalous. The proposed technique for evaluating the significance of spectral peaks is also applicable to other geophysical records.

Significance Statement

A suitable null hypothesis is necessary for obtaining accurate test results, but a means for evaluating the adequacy of a null hypothesis for a spectral peak has been lacking. A generalized null model is presented in the form of an autoregressive, moving-average process whose adequacy can be gauged by comparing the observed and expected variance of log spectral density. Application of the method to the GISP2 δ18O record indicates that spectral structure found at 1/(1470 yr) is statistically insignificant.
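As a rough illustration of the rule of thumb described above, the sketch below generates a synthetic ARMA(2, 1) series in place of the GISP2 record, removes fitted ARMA structure of increasing order, and compares the observed variance of the log periodogram against its white-noise value (the trigamma function at one, π²/6). The coefficients and series length are arbitrary choices, and a raw periodogram stands in for the paper's multitaper estimate.

```python
# Hedged sketch of the adequacy check: residuals from an adequate ARMA fit
# should have an approximately level spectrum, so var(log periodogram)
# should be close to its white-noise value, trigamma(1) = pi^2/6.
import numpy as np
from scipy.signal import periodogram
from scipy.special import polygamma
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(0)
# Synthetic stand-in for the record: an ARMA(2, 1) process, arbitrary coefficients
proc = ArmaProcess(ar=[1, -0.9, 0.2], ma=[1, 0.5])
x = proc.generate_sample(nsample=2048, distrvs=rng.standard_normal)

expected = polygamma(1, 1)  # variance of log(chi^2_2 / 2), i.e., pi^2/6 ~ 1.64

for order in [(1, 0, 0), (2, 0, 1)]:           # AR(1) vs ARMA(2, 1) null models
    resid = ARIMA(x, order=order).fit().resid  # remove fitted ARMA structure
    _, pxx = periodogram(resid)
    observed = np.var(np.log(pxx[1:-1]))       # drop zero and Nyquist frequencies
    print(f"ARMA{order}: observed {observed:.2f} vs expected {expected:.2f}")
```

The underfit AR(1) residuals retain spectral color, inflating the observed variance well above the expected value, whereas the ARMA(2, 1) residuals come out close to it.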

Open access
Peter Huybers

Abstract

The spread in climate sensitivity obtained from 12 general circulation model runs used in the Fourth Assessment of the Intergovernmental Panel on Climate Change indicates a 95% confidence interval of 2.1°–5.5°C, but this reflects compensation between model feedbacks. In particular, cloud feedback strength negatively covaries with the albedo feedback as well as with the combined water vapor plus lapse rate feedback. If the compensation between feedbacks is removed, the 95% confidence interval for climate sensitivity expands to 1.9°–8.0°C. Neither of the quoted 95% intervals adequately reflects the understanding of climate sensitivity, but their differences illustrate that model interdependencies must be understood before model spread can be correctly interpreted.

The degree of negative covariance between feedbacks is unlikely to result from chance alone. It may, however, result from the method by which the feedbacks were estimated, physical relationships represented in the models, or from conditioning the models upon some combination of observations and expectations. This compensation between model feedbacks—when taken together with indications that variations in radiative forcing and the rate of ocean heat uptake play a similar compensatory role in models—suggests that conditioning of the models acts to curtail the intermodel spread in climate sensitivity. Observations used to condition the models ought to be explicitly stated, or there is the risk of doubly calling on data for purposes of both calibration and evaluation. Conditioning the models upon individual expectation (e.g., anchoring to the Charney range of 3° ± 1.5°C), to the extent that it exists, greatly complicates statistical interpretation of the intermodel spread.
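A minimal Monte Carlo sketch of the compensation effect described above: when the cloud feedback negatively covaries with the other feedbacks, the implied spread in climate sensitivity narrows relative to independent feedbacks. The feedback means, standard deviations, correlation, and forcing below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
F2x, n = 3.7, 200_000                 # W m^-2 forcing for doubled CO2; sample size
mu = np.array([-2.2, 0.6])            # [non-cloud, cloud] feedbacks, W m^-2 K^-1
sd = np.array([0.3, 0.3])             # illustrative spreads

for rho, label in [(-0.7, "compensating"), (0.0, "independent")]:
    cov = np.diag(sd**2)
    cov[0, 1] = cov[1, 0] = rho * sd[0] * sd[1]
    lam = rng.multivariate_normal(mu, cov, size=n).sum(axis=1)  # net feedback
    ecs = F2x / -lam                  # equilibrium climate sensitivity, K
    lo, hi = np.percentile(ecs, [2.5, 97.5])
    print(f"{label}: 95% interval {lo:.1f} to {hi:.1f} K")
```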

Full access
Parker Liautaud
and
Peter Huybers

Abstract

Proxy reconstructions indicate that sea level responded more sensitively to CO2 radiative forcing in the late Pleistocene than in the early Pleistocene, a transition that was proposed to arise from changes in ice-sheet dynamics. In this study, we analyze the links between sea level, orbital variations, and CO2 using an energy-balance model having a simple ice sheet. Model parameters, including for age models, are inferred over the late Pleistocene using a Bayesian method, and the inferred relationships are used to evaluate CO2 levels over the past 2 million years in relation to sea level. Early Pleistocene model CO2 averages 244 ppm (241–246 ppm 95% confidence interval) across 2 to 1 million years ago and indicates that sea level was less sensitive to radiative forcing than in the late Pleistocene, consistent with foregoing δ11B-derived estimates. Weaker early Pleistocene sea level sensitivity originates from a weaker ice-albedo feedback and the fact that smaller ice sheets are thinner, absent changes over time in model equations or parameters. An alternative scenario involving thin and expansive early Pleistocene ice sheets, in accord with some lines of geologic evidence, implies 15-ppm-lower average CO2 or ~10–15-m-higher average sea level during the early Pleistocene relative to the original scenario. Our results do not rule out dynamical transitions during the middle Pleistocene, but indicate that variations in the sea level response to CO2 forcing over the past 2 million years can be explained on the basis of nonlinearities associated with ice-albedo feedbacks and ice-sheet geometry that are consistently present across this interval.
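The statement that smaller ice sheets are thinner follows from plastic ice-sheet scaling, in which thickness grows with the square root of horizontal extent. A hedged sketch, with an arbitrary yield stress and idealized circular geometry rather than anything from the paper's model:

```python
import numpy as np

tau = 5.0e4                 # basal yield stress, Pa (illustrative)
rho_ice, g = 917.0, 9.81    # ice density (kg m^-3), gravity (m s^-2)

for area_Mkm2 in [2.0, 8.0, 16.0]:                 # ice-sheet area, million km^2
    A = area_Mkm2 * 1e12                           # m^2
    L = np.sqrt(A / np.pi)                         # equivalent radius, m
    h_max = np.sqrt(2 * tau * L / (rho_ice * g))   # perfectly plastic max thickness
    print(f"area {area_Mkm2:4.1f} Mkm^2 -> max thickness {h_max:.0f} m")
```

Because thickness scales only as the square root of extent, a small ice sheet stores less sea-level equivalent per unit area, contributing to a weaker early Pleistocene sea level response.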

Full access
Duo Chan
and
Peter Huybers

Abstract

Most historical sea surface temperature (SST) estimates indicate warmer World War II SSTs than expected from forcing and internal climate variability. If real, this World War II warm anomaly (WW2WA) has important implications for decadal variability, but the WW2WA may also arise from incomplete corrections of biases associated with bucket and engine room intake (ERI) measurements. To better assess the origins of the WW2WA, we develop five different historical SST estimates (reconstructions R1–R5). Using uncorrected SST measurements from the International Comprehensive Ocean–Atmosphere Data Set (ICOADS) version 3.0 (R1) gives a WW2WA of 0.41°C. In contrast, using only buckets (R2) or ERI observations (R3) gives WW2WAs of 0.18° and 0.08°C, respectively, implying that uncorrected biases are the primary source of the WW2WA. We then use an extended linear-mixed-effect method to quantify systematic differences between subsets of SSTs and develop groupwise SST adjustments based on differences between pairs of nearby SST measurements. Using all measurements after applying groupwise adjustments (R4) gives a WW2WA of 0.13°C [95% confidence interval (c.i.): 0.01°–0.26°C] and indicates that U.S. and U.K. naval observations are the primary cause of the WW2WA. Finally, nighttime bucket SSTs are found to be warmer than their daytime counterparts during WW2, prompting a daytime-only reconstruction using groupwise adjustments (R5) that has a WW2WA of 0.09°C (95% c.i.: −0.01° to 0.18°C). R5 is consistent with the range of internal variability found in either the CMIP5 (95% c.i.: −0.10° to 0.10°C) or CMIP6 ensembles (95% c.i.: −0.11° to 0.10°C). These results support the hypothesis that the WW2WA is an artifact of observational biases, although further data and metadata analyses will be important for confirmation.
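A sketch of the groupwise-adjustment idea in miniature: offsets for measurement groups are recovered from mean differences between pairs of nearby observations, here by ordinary least squares with a sum-to-zero constraint rather than the paper's extended linear-mixed-effect method. The group names and difference values are invented for illustration.

```python
import numpy as np

groups = ["US", "UK", "NL", "JP"]
# (i, j, mean nearby-pair SST difference group_i - group_j, deg C); made-up values
pairs = [(0, 1, 0.10), (0, 2, 0.25), (1, 2, 0.15), (1, 3, 0.12), (2, 3, -0.02)]

A = np.zeros((len(pairs) + 1, len(groups)))
b = np.zeros(len(pairs) + 1)
for row, (i, j, d) in enumerate(pairs):
    A[row, i], A[row, j], b[row] = 1.0, -1.0, d
A[-1, :] = 1.0   # pair differences fix only contrasts; pin the mean offset to zero

offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
for name, off in zip(groups, offsets):
    print(f"{name}: {off:+.3f} deg C")
```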

Open access
Duo Chan
and
Peter Huybers

Abstract

The International Comprehensive Ocean–Atmosphere Data Set (ICOADS) is a cornerstone for estimating changes in sea surface temperature (SST) over the instrumental era. Interest in determining SST changes to within 0.1°C makes detecting systematic offsets within ICOADS important. Previous studies have corrected for offsets among engine room intake, buoy, and wooden and canvas bucket measurements, as well as noted discrepancies among various other groupings of data. In this study, a systematic examination of differences in collocated bucket SST measurements from ICOADS 3.0 is undertaken using a linear-mixed-effect model according to nations and more-resolved groupings. Six nations and a grouping for which nation metadata are missing, referred to as “deck 156,” together contribute 91% of all bucket measurements and have systematic offsets among one another of as much as 0.22°C. Measurements from the Netherlands and deck 156 are colder than the global average by −0.10° and −0.13°C, respectively, both at p < 0.01, whereas Russian measurements are offset warm by 0.10°C at p < 0.1. Furthermore, of the 31 nations whose measurements are present in more than one grouping of data (i.e., deck), 14 contain decks that show significant offsets at p < 0.1, including all major collecting nations. Results are found to be robust to assumptions regarding the independence and distribution of errors as well as to influences from the diurnal cycle and spatially heterogeneous noise variance. Correction for systematic offsets among these groupings should improve the accuracy of estimated SSTs and their trends.
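As a hedged sketch of the linear-mixed-effect setup, the snippet below fits nation-level offsets as fixed effects with a year-level random effect using statsmodels. The synthetic data, grouping choice, and offset values are illustrative assumptions rather than the paper's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
true_offset = {"NL": -0.10, "RU": 0.10, "GB": 0.00, "deck156": -0.13}  # deg C

rows = []
for _ in range(2000):
    nation = rng.choice(list(true_offset))
    year = int(rng.integers(1900, 1941))
    # difference between a bucket SST and a collocated reference value, deg C
    d = true_offset[nation] + 0.3 * rng.standard_normal()
    rows.append((nation, year, d))
df = pd.DataFrame(rows, columns=["nation", "year", "sst_diff"])

# Nation offsets as fixed effects; shared year-to-year variability as a random effect
fit = smf.mixedlm("sst_diff ~ 0 + nation", df, groups=df["year"]).fit()
print(fit.params.round(3))
```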

Full access
Marena Lin
and
Peter Huybers

Abstract

In an earlier study, a weaker trend in global mean temperature over the past 15 years relative to the preceding decades was characterized as significantly lower than the trends contained within phase 5 of the Coupled Model Intercomparison Project (CMIP5) ensemble. In this study, divergence between model simulations and observations is estimated using a fixed-intercept linear trend with a slope estimator that has one-third the noise variance of simple linear regression. Following the approach of the earlier study, whereby intermodel spread is used to assess the distribution of trends, but using the fixed-intercept trend metric, demonstrates that recently observed trends in global mean temperature are consistent with the CMIP5 ensemble for all 15-yr intervals of observation–model divergence since 1970. Significant clustering of global trends according to modeling center indicates that the spread in CMIP5 trends is better characterized using ensemble members drawn across models as opposed to using ensemble members from a single model. Despite model–observation consistency at the global level, substantial regional discrepancies in surface temperature trends remain.
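A minimal Monte Carlo sketch of why a fixed-intercept trend has lower noise variance than ordinary least squares: pinning the intercept removes the freedom that inflates the OLS slope's sensitivity to noise. Here the intercept is pinned at zero, a simplification of the study's approach (which fixes it using the preceding record), so the exact variance ratio is only indicative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(15.0)                                   # a 15-yr window, annual steps
noise = 0.1 * rng.standard_normal((50_000, t.size))   # pure noise, zero true trend

b_fixed = noise @ t / (t @ t)            # slope with the intercept fixed at zero
tc = t - t.mean()
b_ols = noise @ tc / (tc @ tc)           # ordinary least-squares slope

print(f"variance ratio (fixed / OLS): {np.var(b_fixed) / np.var(b_ols):.2f}")
```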

Full access
Martin P. Tingley
and
Peter Huybers

Abstract

Reconstructing the spatial pattern of a climate field through time from a dataset of overlapping instrumental and climate proxy time series is a nontrivial statistical problem. The need to transform the proxy observations into estimates of the climate field, and the fact that the observed time series are not uniformly distributed in space, further complicate the analysis. Current leading approaches to this problem are based on estimating the full covariance matrix between the proxy time series and instrumental time series over a “calibration” interval and then using this covariance matrix in the context of a linear regression to predict the missing instrumental values from the proxy observations for years prior to instrumental coverage.

A fundamentally different approach to this problem is formulated by specifying parametric forms for the spatial covariance and temporal evolution of the climate field, as well as “observation equations” describing the relationship between the data types and the corresponding true values of the climate field. A hierarchical Bayesian model is used to assimilate both proxy and instrumental datasets and to estimate the probability distribution of all model parameters and the climate field through time on a regular spatial grid. The output from this approach includes an estimate of the full covariance structure of the climate field and model parameters as well as diagnostics that estimate the utility of the different proxy time series.

This methodology is demonstrated using an instrumental surface temperature dataset after corrupting a number of the time series to mimic proxy observations. The results are compared to those achieved using the regularized expectation–maximization algorithm, and in these experiments the Bayesian algorithm produces reconstructions with greater skill. The assumptions underlying these two methodologies and the results of applying each to simple surrogate datasets are explored in greater detail in Part II.
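A generative sketch of the model structure just described: a climate field that evolves as an AR(1) process in time with an exponential spatial covariance, observed through simple linear observation equations for instrumental and proxy data. All dimensions and parameter values are illustrative, and the actual hierarchical model includes additional parameters and priors.

```python
import numpy as np

rng = np.random.default_rng(4)
n_loc, n_yr = 20, 100
locs = rng.uniform(0, 10, size=(n_loc, 2))                    # station coordinates
dist = np.linalg.norm(locs[:, None] - locs[None, :], axis=-1)

sigma2, phi, alpha = 1.0, 3.0, 0.7     # variance, spatial range, AR(1) coefficient
Sigma = sigma2 * np.exp(-dist / phi)   # exponential spatial covariance
L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n_loc))

T = np.zeros((n_yr, n_loc))            # the latent climate field
for k in range(1, n_yr):
    innovation = L @ rng.standard_normal(n_loc)
    T[k] = alpha * T[k - 1] + np.sqrt(1 - alpha**2) * innovation

# Observation equations: instrumental data add white noise to the field;
# proxies respond linearly to the field with larger noise
instrumental = T + 0.2 * rng.standard_normal(T.shape)
proxy = 0.5 * T + 0.6 * rng.standard_normal(T.shape)
```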

Full access
Martin P. Tingley
and
Peter Huybers

Abstract

Part I presented a Bayesian algorithm for reconstructing climate anomalies in space and time (BARCAST). This method involves specifying simple parametric forms for the spatial covariance and temporal evolution of the climate field as well as “observation equations” describing the relationships between the data types and the corresponding true values of the climate field. As this Bayesian approach to reconstructing climate fields is new and different, it is worthwhile to compare it in detail to the more established regularized expectation–maximization (RegEM) algorithm, which is based on an empirical estimate of the joint data covariance matrix and a multivariate regression of the instrumental time series onto the proxy time series. The differing assumptions made by BARCAST and RegEM are detailed, and the impacts of these differences on the analysis are discussed. Key distinctions between BARCAST and RegEM include their treatment of spatial and temporal covariance, the prior information that enters into each analysis, the quantities they seek to impute, the end product of each analysis, the temporal variance of the reconstructed field, and the treatment of uncertainty in both the imputed values and functions of these imputations. Differences between BARCAST and RegEM are illustrated by applying the two approaches to various surrogate datasets. If the assumptions inherent to BARCAST are not strongly violated, then in scenarios comparable to practical applications BARCAST results in reconstructions of both the field and the spatial mean that are more skillful than those produced by RegEM, as measured by the coefficient of efficiency. In addition, the uncertainty intervals produced by BARCAST are narrower than those estimated using RegEM and contain the true values with higher probability.
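For reference, the skill metric named above, the coefficient of efficiency, is simple to state in code; this definition is standard, though the arrays passed in are of course placeholders.

```python
import numpy as np

def coefficient_of_efficiency(truth, recon):
    """CE = 1 - SSE / (sum of squares of truth about its mean).
    CE = 1 is a perfect reconstruction; CE <= 0 means no skill relative
    to always predicting the truth's mean."""
    truth, recon = np.asarray(truth, float), np.asarray(recon, float)
    sse = np.sum((truth - recon) ** 2)
    return 1.0 - sse / np.sum((truth - truth.mean()) ** 2)

# Example: a noisy reconstruction of a toy signal
rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, 6, 200))
noisy = signal + 0.3 * rng.standard_normal(200)
print(round(coefficient_of_efficiency(signal, noisy), 2))
```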

Full access
Karen A. McKinnon
and
Peter Huybers

Abstract

The seasonal cycle in temperature is a large and well-observed response to radiative forcing, suggesting its potential as a natural analog to human-caused climate change. Although there have been advances constraining some climate feedback parameters using seasonal observations, the seasonal cycle has not been used to inform about the local temperature sensitivity to greenhouse gas forcing. In this study, we uncover a nonlinear relationship between the amplitude and phase of the seasonal cycle and forced temperature trends in seven CMIP5-era large ensembles across the Northern Hemisphere extratropical continents. We develop a mixture energy balance model that reproduces this relationship and reveals the unexpected finding that the phasing of the seasonal cycle—in addition to the amplitude—contains information about local temperature sensitivity to seasonal forcing over land. Using this energy balance model framework, we compare the pattern and magnitude of the seasonally inferred sensitivity with those of the surface temperature response to anthropogenic radiative forcing. The seasonally constrained model largely reproduces the pattern of human-caused temperature trends seen in climate models (r = 0.81, p value < 0.01), including polar amplification, but the magnitude of the response is smaller by about a factor of 3. Our results show the relevance of both phasing and amplitude for constraining patterns of local feedbacks and suggest the utility of additional research to better understand the differences in sensitivity between seasonal and greenhouse gas forcing.

Significance Statement

Warming in response to increased greenhouse gases is not spatially uniform across land. We wanted to understand whether the familiar seasonal cycle in temperature could provide information about climate change. We found that climate models show a strong link between the seasonal cycle and future warming: places with a larger and more delayed temperature response to the seasonal cycle in solar forcing tend to warm more across the Northern Hemisphere midlatitudes. A very simple model for the climate system, whose parameters are based on the seasonal cycle, captures the pattern but not the magnitude of warming. Our findings suggest that there are some similarities between the processes that control temperature change on seasonal and climate change time scales, but that we must understand the difference between seasonal and longer-term sensitivity to warming before the seasonal cycle can be used to reduce uncertainty about climate change.
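One way to see how both amplitude and phase carry information about sensitivity is a single-box energy balance, a much-reduced cousin of the paper's mixture model: for heat capacity C and net feedback λ, the response to sinusoidal annual forcing has gain 1/√(λ² + (ωC)²) and phase lag arctan(ωC/λ). The parameter values below are illustrative only.

```python
import numpy as np

omega = 2 * np.pi / (365.25 * 86400)   # annual frequency, rad s^-1
lam = 2.0                              # net feedback, W m^-2 K^-1 (illustrative)
C = 1.0e7                              # effective heat capacity, J m^-2 K^-1
F0 = 100.0                             # seasonal forcing amplitude, W m^-2

gain = 1.0 / np.hypot(lam, omega * C)              # K per (W m^-2)
lag_days = np.arctan2(omega * C, lam) / omega / 86400

print(f"seasonal amplitude {F0 * gain:.1f} K, phase lag {lag_days:.0f} days")
# A larger lambda (stronger damping) reduces both the amplitude and the lag,
# which is why amplitude and phase jointly constrain local sensitivity.
```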

Restricted access
Duo Chan
,
Geoffrey Gebbie
, and
Peter Huybers

Abstract

Land surface air temperatures (LSAT) inferred from weather station data differ among major research groups. The estimate by NOAA’s monthly Global Historical Climatology Network (GHCNm) averages 0.02°C cooler between 1880 and 1940 than Berkeley Earth’s and 0.14°C cooler than the Climatic Research Unit’s estimate. Such systematic offsets can arise from differences in how poorly documented changes in measurement characteristics are detected and adjusted. Building upon an existing pairwise homogenization algorithm used in generating the fourth version of NOAA’s GHCNm (GHCNmV4), referred to as PHA0, we propose two revisions to account for autocorrelation in climate variables. One version, PHA1, makes a minimal modification to PHA0 by extending the threshold used in breakpoint detection to be a function of LSAT autocorrelation. The other version, PHA2, uses penalized likelihood to detect breakpoints by globally optimizing a model-selection problem. To facilitate efficient optimization for series with more than 1000 time steps, a multiparent genetic algorithm is proposed for PHA2. Tests on synthetic data generated by adding breakpoints to CMIP6 simulations and realizations from a Gaussian process indicate that PHA1 and PHA2 similarly outperform PHA0 in recovering accurate climatic trends. Applied to unhomogenized GHCNmV4, both revised algorithms detect breakpoints that correspond with available station metadata. Uncertainties are estimated by perturbing algorithmic parameters, and an ensemble is constructed by pooling 50 PHA1- and 50 PHA2-based members. The continental-mean warming in this new ensemble is consistent with that of Berkeley Earth, despite using different homogenization approaches. Relative to unhomogenized data, our homogenization increases the 1880–2022 trend by 0.16 [0.12, 0.19]°C century⁻¹ (95% confidence interval), leading to continental-mean warming of 1.65 [1.62, 1.69]°C over 2010–22 relative to 1880–1900.

Significance Statement

Accurately correcting for systematic errors in observational records of land surface air temperature (LSAT) is critical for quantifying historical warming. Existing LSAT estimates are subject to systematic offsets associated with processes including changes in instrumentation and station movement. This study improves a pairwise homogenization algorithm by accounting for the fact that climate signals are correlated over time. The revised algorithms outperform the original in identifying discontinuities and recovering accurate warming trends. Applied to monthly station temperatures, the revised algorithms adjust trends in continental mean LSAT since the 1880s to be 0.16°C century⁻¹ greater relative to raw data. Our estimate is most consistent with that from Berkeley Earth and indicates lesser and greater warming than estimates from NOAA and the Met Office, respectively.
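A hedged sketch of penalized-likelihood breakpoint detection on a pairwise difference series, in the spirit of PHA2 but reduced to a single-breakpoint BIC scan on synthetic data; the real algorithm optimizes over many breakpoints globally and accounts for autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 240                                    # months of pairwise difference series
x = 0.1 * rng.standard_normal(n)
x[150:] += 0.3                             # synthetic station break of 0.3 deg C

def bic(sse, n_obs, n_params):
    return n_obs * np.log(sse / n_obs) + n_params * np.log(n_obs)

bic0 = bic(np.sum((x - x.mean()) ** 2), n, 1)          # no-break model: one mean

def bic_break(k):                                      # two means plus a break time
    sse = np.sum((x[:k] - x[:k].mean()) ** 2) + np.sum((x[k:] - x[k:].mean()) ** 2)
    return bic(sse, n, 3)

ks = range(12, n - 12)                                 # keep segments non-trivial
k_best = min(ks, key=bic_break)
print(f"break at t={k_best}, accepted: {bic_break(k_best) < bic0}")
```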

Restricted access