Search Results

You are looking at 1 - 10 of 19 items for

  • Author or Editor: Peter Huybers
  • Refine by Access: Content accessible to me
Peter Huybers

Abstract

Spectral analysis of the Greenland Ice Sheet Project 2 (GISP2) δ18O record has been interpreted to show a 1/(1470 yr) spectral peak that is highly statistically significant (p < 0.01). The presence of such a peak, if accurate, provides an important clue about the mechanisms controlling glacial climate. As is standard, however, statistical significance was judged relative to a null model, H0, consisting of an autoregressive order one process, AR(1). In this study, H0 is generalized using an autoregressive moving-average process, ARMA(p, q). A rule of thumb is proposed for evaluating the adequacy of H0 that involves comparing the expected and observed variances of the logarithm of a spectral estimate, which are generally consistent inasmuch as removal of the ARMA structure from a time series results in an approximately level spectral estimate. An AR(1), or ARMA(1, 0), process is shown to be an inadequate representation of the GISP2 δ18O structure, whereas higher-order ARMA processes result in approximately level spectral estimates. After suitably leveling GISP2 δ18O and accounting for multiple hypothesis testing, multitaper spectral estimation indicates that the 1/(1470 yr) peak is insignificant. The seeming prominence of the 1/(1470 yr) peak is explained as the result of evaluating a spectrum involving higher-order ARMA structure and of the peak having been selected because it appeared anomalous. The proposed technique for evaluating the significance of spectral peaks is also applicable to other geophysical records.

Significance Statement

A suitable null hypothesis is necessary for obtaining accurate test results, but a means for evaluating the adequacy of a null hypothesis for a spectral peak has been lacking. A generalized null model is presented in the form of an autoregressive moving-average process whose adequacy can be gauged by comparing the observed and expected variance of log spectral density. Application of the method to the GISP2 δ18O record indicates that spectral structure found at 1/(1470 yr) is statistically insignificant.
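A minimal sketch of this adequacy check in Python, under stated assumptions: the input y stands in for an evenly resampled record such as GISP2 δ18O, the ARMA fit uses statsmodels, and a raw periodogram (rather than the multitaper estimate used in the paper) is taken for simplicity. The expected variance of the log of a periodogram ordinate is the trigamma value ψ₁(1) = π²/6.

    # Sketch of the rule of thumb: prewhiten with an ARMA(p, q) null model and
    # compare the observed variance of the log spectral estimate with its
    # expected value. Illustrative only; y is a hypothetical evenly spaced series.
    import numpy as np
    from scipy.signal import periodogram
    from scipy.special import polygamma
    from statsmodels.tsa.arima.model import ARIMA

    def adequacy_check(y, p=1, q=0):
        fit = ARIMA(y, order=(p, 0, q)).fit()   # ARMA(p, q) with a mean term
        resid = fit.resid                       # prewhitened series
        f, S = periodogram(resid, detrend=False)
        S = S[1:]                               # drop the zero frequency
        observed = np.var(np.log(S))
        expected = float(polygamma(1, 1))       # pi^2 / 6 for a chi-square_2 estimate
        return observed, expected

    # If the observed variance substantially exceeds the expected value, the
    # ARMA(p, q) null leaves structure in the spectrum and is judged inadequate.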

Open access
Peter Huybers

Abstract

The spread in climate sensitivity obtained from 12 general circulation model runs used in the Fourth Assessment of the Intergovernmental Panel on Climate Change indicates a 95% confidence interval of 2.1°–5.5°C, but this reflects compensation between model feedbacks. In particular, cloud feedback strength negatively covaries with the albedo feedback as well as with the combined water vapor plus lapse rate feedback. If the compensation between feedbacks is removed, the 95% confidence interval for climate sensitivity expands to 1.9°–8.0°C. Neither of the quoted 95% intervals adequately reflects the understanding of climate sensitivity, but their differences illustrate that model interdependencies must be understood before model spread can be correctly interpreted.

The degree of negative covariance between feedbacks is unlikely to result from chance alone. It may, however, result from the method by which the feedbacks were estimated, physical relationships represented in the models, or from conditioning the models upon some combination of observations and expectations. This compensation between model feedbacks—when taken together with indications that variations in radiative forcing and the rate of ocean heat uptake play a similar compensatory role in models—suggests that conditioning of the models acts to curtail the intermodel spread in climate sensitivity. Observations used to condition the models ought to be explicitly stated, or there is the risk of doubly calling on data for purposes of both calibration and evaluation. Conditioning the models upon individual expectation (e.g., anchoring to the Charney range of 3° ± 1.5°C), to the extent that it exists, greatly complicates statistical interpretation of the intermodel spread.
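A worked equation may help clarify why compensation among feedbacks narrows the spread (generic feedback notation, not taken from the article). Writing equilibrium sensitivity as

    \Delta T_{2\times} = \frac{F_{2\times}}{\lambda_0 - \sum_i \lambda_i},

the intermodel variance of the summed feedbacks is

    \operatorname{Var}\Big(\sum_i \lambda_i\Big) = \sum_i \operatorname{Var}(\lambda_i) + 2\sum_{i<j} \operatorname{Cov}(\lambda_i, \lambda_j),

so negative covariances, such as between the cloud and albedo feedbacks, reduce the spread of the denominator and hence of \Delta T_{2\times}; setting the covariance terms to zero, as when the compensation is removed, necessarily widens the interval.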

Full access
Parker Liautaud and Peter Huybers

Abstract

Proxy reconstructions indicate that sea level responded more sensitively to CO2 radiative forcing in the late Pleistocene than in the early Pleistocene, a transition that was proposed to arise from changes in ice-sheet dynamics. In this study we analyze the links between sea level, orbital variations, and CO2 using an energy-balance model coupled to a simple ice sheet. Model parameters, including those governing age models, are inferred over the late Pleistocene using a Bayesian method, and the inferred relationships are used to evaluate CO2 levels over the past 2 million years in relation to sea level. Early Pleistocene model CO2 averages 244 ppm (241–246 ppm 95% confidence interval) across 2 to 1 million years ago and indicates that sea level was less sensitive to radiative forcing than in the late Pleistocene, consistent with previous δ11B-derived estimates. Weaker early Pleistocene sea level sensitivity originates from a weaker ice-albedo feedback and from the fact that smaller ice sheets are thinner, without any change over time in model equations or parameters. An alternative scenario involving thin and expansive early Pleistocene ice sheets, in accord with some lines of geologic evidence, implies 15-ppm-lower average CO2 or ~10–15-m-higher average sea level during the early Pleistocene relative to the original scenario. Our results do not rule out dynamical transitions during the middle Pleistocene, but they indicate that variations in the sea level response to CO2 forcing over the past 2 million years can be explained on the basis of nonlinearities associated with ice-albedo feedbacks and ice-sheet geometry that are consistently present across this interval.
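One standard way to see why smaller ice sheets are thinner, offered here as an illustration of the geometric nonlinearity invoked above rather than as the article's specific model, is the perfectly plastic profile for an ice sheet of half-width L on a flat bed:

    h(x) = \sqrt{\frac{2\tau_0}{\rho g}\,(L - x)}, \qquad h_{\max} \propto \sqrt{L},

where \tau_0 is the yield stress. Ice volume therefore grows faster than ice area as the sheet expands, so a given change in area corresponds to a larger change in volume, and hence in sea level, when the ice sheet is large.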

Full access
Duo Chan and Peter Huybers

Abstract

Differences in sea surface temperature (SST) biases among groups of bucket measurements in the International Comprehensive Ocean–Atmosphere Dataset, version 3.0 (ICOADS3.0), were recently identified; these groupwise differences introduce offsets of as much as 1°C and have first-order implications for regional temperature trends. In this study, the origin of these groupwise offsets is explored through the covariation between offsets and diurnal cycle amplitudes. Examination of an extended bucket model leads to the expectation that offsets and amplitudes can covary with either sign, whereas misclassified engine room intake (ERI) temperatures invariably lead to negative covariance because ERI measurements are warmer and have a smaller diurnal amplitude. An analysis of ICOADS3.0 SST measurements that are inferred to come from buckets indicates that offsets after the 1930s primarily result from the misclassification of ERI measurements, according to five lines of evidence. 1) Prior to when ERI measurements become available in the 1930s, offset–amplitude covariance is weak and generally positive, whereas covariance is stronger and generally negative subsequently. 2) The introduction of ERI measurements in the 1930s is accompanied by a wider range of offsets and diurnal amplitudes across groups, with 3) approximately 20% of estimated diurnal amplitudes being significantly smaller than buoy and drifter observations. 4) Regressions of offsets against amplitudes intersect independently determined end-member values of ERI measurements. 5) Offset–amplitude slopes become less negative across all regions and seasons between 1960 and 1980, when ERI temperatures were independently determined to become less warmly biased. These results highlight the importance of accurately determining measurement procedures for bias corrections and reducing uncertainty in historical SST estimates.
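The sign argument for ERI misclassification can be illustrated with a small mixture calculation (the end-member numbers below are assumed for illustration, not values from the study): if a fraction f of a group's nominal bucket reports are actually ERI temperatures, both the group's offset and its apparent diurnal amplitude vary linearly with f, which yields negative offset–amplitude covariance and regression lines that pass through the ERI end member.

    # Illustrative mixture calculation; end-member values are assumed, not observed.
    import numpy as np

    bucket_offset, bucket_amp = 0.0, 0.4   # hypothetical bucket end member (deg C)
    eri_offset, eri_amp = 0.3, 0.1         # hypothetical ERI end member: warm, damped diurnal cycle

    f = np.linspace(0.0, 1.0, 6)           # fraction of reports that are misclassified ERI
    group_offset = (1 - f) * bucket_offset + f * eri_offset
    group_amp = (1 - f) * bucket_amp + f * eri_amp

    # Groups with more misclassified ERI reports are warmer but show a smaller
    # diurnal cycle, so offsets and amplitudes covary negatively; the implied
    # regression slope connects the bucket and ERI end members.
    slope = (eri_offset - bucket_offset) / (eri_amp - bucket_amp)
    print(np.column_stack([f, group_offset, group_amp]), slope)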

Free access
Duo Chan and Peter Huybers

Abstract

Most historical sea surface temperature (SST) estimates indicate warmer World War II SSTs than expected from forcing and internal climate variability. If real, this World War II warm anomaly (WW2WA) has important implications for decadal variability, but the WW2WA may also arise from incomplete corrections of biases associated with bucket and engine room intake (ERI) measurements. To better assess the origins of the WW2WA, we develop five different historical SST estimates (reconstructions R1–R5). Using uncorrected SST measurements from the International Comprehensive Ocean–Atmosphere Data Set (ICOADS) version 3.0 (R1) gives a WW2WA of 0.41°C. In contrast, using only buckets (R2) or ERI observations (R3) gives WW2WAs of 0.18° and 0.08°C, respectively, implying that uncorrected biases are the primary source of the WW2WA. We then use an extended linear-mixed-effect method to quantify systematic differences between subsets of SSTs and develop groupwise SST adjustments based on differences between pairs of nearby SST measurements. Using all measurements after applying groupwise adjustments (R4) gives a WW2WA of 0.13°C [95% confidence interval (c.i.): 0.01°–0.26°C] and indicates that U.S. and U.K. naval observations are the primary cause of the WW2WA. Finally, nighttime bucket SSTs are found to be warmer than their daytime counterparts during WW2, prompting a daytime-only reconstruction using groupwise adjustments (R5) that has a WW2WA of 0.09°C (95% c.i.: −0.01° to 0.18°C). R5 is consistent with the range of internal variability found in either the CMIP5 (95% c.i.: −0.10° to 0.10°C) or CMIP6 ensembles (95% c.i.: −0.11° to 0.10°C). These results support the hypothesis that the WW2WA is an artifact of observational biases, although further data and metadata analyses will be important for confirmation.

Open access
Duo Chan and Peter Huybers

Abstract

The International Comprehensive Ocean–Atmosphere Dataset (ICOADS) is a cornerstone for estimating changes in sea surface temperature (SST) over the instrumental era. Interest in determining SST changes to within 0.1°C makes detecting systematic offsets within ICOADS important. Previous studies have corrected for offsets among engine room intake, buoy, and wooden and canvas bucket measurements, as well as noted discrepancies among various other groupings of data. In this study, a systematic examination of differences in collocated bucket SST measurements from ICOADS3.0 is undertaken using a linear-mixed-effect model according to nations and more-resolved groupings. Six nations and a grouping for which nation metadata are missing, referred to as “deck 156,” together contribute 91% of all bucket measurements and have systematic offsets relative to one another of as much as 0.22°C. Measurements from the Netherlands and deck 156 are offset cold relative to the global average by 0.10° and 0.13°C, respectively, both at p < 0.01, whereas Russian measurements are offset warm by 0.10°C at p < 0.1. Furthermore, of the 31 nations whose measurements are present in more than one grouping of data (i.e., deck), 14 contain decks that show significant offsets at p < 0.1, including all major collecting nations. Results are found to be robust to assumptions regarding the independence and distribution of errors as well as to influences from the diurnal cycle and spatially heterogeneous noise variance. Correcting for systematic offsets among these groupings should improve the accuracy of estimated SSTs and their trends.
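A simplified sketch of recovering groupwise offsets from collocated pairs, offered as a least-squares stand-in for the linear-mixed-effect model described above (the pair list and group indices are hypothetical):

    # Each collocated pair constrains the difference between two group offsets;
    # a sum-to-zero row makes the solution identifiable. Least-squares analogue
    # of the mixed-effect approach; data are hypothetical.
    import numpy as np

    def group_offsets(pairs, n_groups):
        A = np.zeros((len(pairs) + 1, n_groups))
        d = np.zeros(len(pairs) + 1)
        for k, (i, j, diff) in enumerate(pairs):
            A[k, i], A[k, j] = 1.0, -1.0    # diff ~ offset_i - offset_j
            d[k] = diff
        A[-1, :] = 1.0                       # constrain offsets to sum to zero
        offsets, *_ = np.linalg.lstsq(A, d, rcond=None)
        return offsets

    # Three hypothetical groups and their pairwise SST differences (deg C):
    pairs = [(0, 1, 0.12), (1, 2, -0.20), (0, 2, -0.08)]
    print(group_offsets(pairs, 3))

A full treatment would additionally model nation- and deck-level effects and heterogeneous noise, as the abstract indicates.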

Full access
Marena Lin and Peter Huybers

Abstract

In an earlier study, a weaker trend in global mean temperature over the past 15 years relative to the preceding decades was characterized as significantly lower than those contained within the Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. In this study, divergence between model simulations and observations is estimated using a fixed-intercept linear trend, whose slope estimator has one-third the noise variance of simple linear regression. Following the approach of the earlier study, in which intermodel spread is used to assess the distribution of trends, but substituting the fixed-intercept trend metric, recently observed trends in global mean temperature are found to be consistent with the CMIP5 ensemble for all 15-yr intervals of observation–model divergence since 1970. Significant clustering of global trends according to modeling center indicates that the spread in CMIP5 trends is better characterized using ensemble members drawn from across models than using ensemble members from a single model. Despite model–observation consistency at the global level, substantial regional discrepancies in surface temperature trends remain.
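The variance reduction from fixing the intercept follows from standard regression formulas (not specific to this article): for independent noise of variance \sigma^2 at times t_1, \ldots, t_n,

    \operatorname{Var}(\hat\beta_{\mathrm{OLS}}) = \frac{\sigma^2}{\sum_i (t_i - \bar t)^2}, \qquad
    \operatorname{Var}(\hat\beta_{\mathrm{fixed}}) = \frac{\sigma^2}{\sum_i (t_i - t_0)^2},

where t_0 is the time at which the intercept is pinned. Deviations from the start of an interval are larger than deviations from its midpoint, so the fixed-intercept denominator is larger and the slope variance smaller; the precise factor (quoted above as one-third) depends on the interval length and on how the intercept value is determined from the preceding data.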

Full access
Geoffrey Gebbie and Peter Huybers

Abstract

Ocean tracer distributions have long been used to decompose the deep ocean into constituent water masses, but previous inverse methods have generally been limited to just a few water masses that have been defined by a subjective choice of static property combinations. Through air–sea interaction and upper-ocean processes, all surface locations are potential sources of distinct tracer properties, and thus it is natural to define a distinct water type for each surface site. Here, a new box inversion method is developed to explore the contributions of all surface locations to the ocean interior, as well as the degree to which the observed tracer fields can be explained by a steady-state circulation with unchanging surface-boundary conditions. The total matrix intercomparison (TMI) method is a novel way to invert observations to solve for the pathways connecting every surface point to every interior point. In the limiting case that the circulation is steady and that five conservative tracers are perfectly observed, the TMI method unambiguously recovers the complete pathways information, owing to the fact that each grid box has, at most, six neighbors. Modern-day climatologies of temperature, salinity, phosphate, nitrate, oxygen, and oxygen-18/oxygen-16 isotope ratios are simultaneously inverted at 4° × 4° grid resolution with 33 vertical levels. Given boundary conditions at the surface and seafloor, the TMI method reconstructs the entire interior distribution of the observed tracers. Assuming that seafloor fluxes of tracer properties can be neglected, the method suggests that 25% or less of the water residing in the deep North Pacific originated in the North Atlantic. Integrating over the global ocean, the Southern Ocean is dominant, as the inversion indicates that almost 60% of the ocean volume originates from south of the Southern Hemisphere subtropical front.
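The counting argument behind this recoverability can be made explicit (a schematic of the steady-state balance that the TMI method inverts, in generic notation): for a conservative tracer, each interior grid box satisfies

    c_i = \sum_{j \in N(i)} m_{ij}\, c_j, \qquad m_{ij} \ge 0, \qquad \sum_{j \in N(i)} m_{ij} = 1,

where N(i) contains at most six neighbors. Mass conservation plus five perfectly observed conservative tracers therefore supply six equations for at most six unknown weights per box, and assembling the local weights into one sparse matrix traces every interior point back to its surface sources.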

Full access
Geoffrey Gebbie and Peter Huybers

Abstract

A number of previous observational studies have found that the waters of the deep Pacific Ocean have an age, or elapsed time since contact with the surface, of 700–1000 yr. Numerical models suggest ages twice as old. Here, the authors present an inverse framework to determine the mean age and its upper and lower bounds given Global Ocean Data Analysis Project (GLODAP) radiocarbon observations, and they show that the potential range of ages increases with the number of constituents or sources that are included in the analysis. The inversion requires decomposing the World Ocean into source waters, which is obtained here using the total matrix intercomparison (TMI) method at up to 2° × 2° horizontal resolution with 11,113 surface sources. The authors find that the North Pacific at 2500-m depth can be no younger than 1100 yr old, which is older than some previous observational estimates. Accounting for the broadness of the surface regions where waters originate leads to a reservoir-age correction that is almost 100 yr smaller than would be estimated with a two- or three-water-mass decomposition and explains some of the discrepancy with previous observational studies. A best estimate of mean age is also presented using the mixing history along circulation pathways. Subject to the caveats that inference of the mixing history would benefit from further observations and that radiocarbon cannot rule out the presence of extremely old waters from exotic sources, the deep North Pacific waters are estimated to be 1200–1500 yr old, which is more in line with existing numerical model results.
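For orientation, the decay arithmetic underlying radiocarbon-derived ages (standard relations, not the study's full inversion) is

    t = \frac{1}{\lambda}\,\ln\!\left(\frac{{}^{14}\mathrm{C}_{\mathrm{source}}}{{}^{14}\mathrm{C}_{\mathrm{interior}}}\right), \qquad \frac{1}{\lambda} \approx 8267\ \mathrm{yr}\ \ (\text{half-life } 5730\ \mathrm{yr}),

and the apparent age also inherits the reservoir age of the source water, that is, its radiocarbon depletion while still at the surface. When interior water draws on many surface regions with differing reservoir ages, as in the decomposition above, the appropriate correction differs from that implied by only two or three water masses.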

Full access
Martin P. Tingley and Peter Huybers

Abstract

Reconstructing the spatial pattern of a climate field through time from a dataset of overlapping instrumental and climate proxy time series is a nontrivial statistical problem. The need to transform the proxy observations into estimates of the climate field, and the fact that the observed time series are not uniformly distributed in space, further complicate the analysis. Current leading approaches to this problem are based on estimating the full covariance matrix between the proxy time series and instrumental time series over a “calibration” interval and then using this covariance matrix in the context of a linear regression to predict the missing instrumental values from the proxy observations for years prior to instrumental coverage.

A fundamentally different approach to this problem is formulated by specifying parametric forms for the spatial covariance and temporal evolution of the climate field, as well as “observation equations” describing the relationship between the data types and the corresponding true values of the climate field. A hierarchical Bayesian model is used to assimilate both proxy and instrumental datasets and to estimate the probability distribution of all model parameters and the climate field through time on a regular spatial grid. The output from this approach includes an estimate of the full covariance structure of the climate field and model parameters as well as diagnostics that estimate the utility of the different proxy time series.
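A schematic of the kind of hierarchical model described above, in generic notation and with simplifications that are assumptions of this sketch rather than the paper's exact specification: the gridded field evolves as a first-order autoregression with spatially correlated innovations, and each data type has its own observation equation,

    T_t = \mu + \alpha\,(T_{t-1} - \mu) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma), \qquad \Sigma_{k\ell} = \sigma^2 e^{-\phi d_{k\ell}},
    y^{\mathrm{inst}}_t = H_I T_t + e^{\mathrm{inst}}_t, \qquad y^{\mathrm{prox}}_t = \beta_0 + \beta_1 H_P T_t + e^{\mathrm{prox}}_t,

with priors placed on the process and observation parameters. Sampling the joint posterior (for example, with a Gibbs sampler) then returns the field on the grid, the parameters, and their full uncertainty, which is the output described above.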

This methodology is demonstrated using an instrumental surface temperature dataset after corrupting a number of the time series to mimic proxy observations. The results are compared with those achieved using the regularized expectation–maximization algorithm, and in these experiments the Bayesian algorithm produces reconstructions with greater skill. The assumptions underlying these two methodologies, and the results of applying each to simple surrogate datasets, are explored in greater detail in a companion paper.

Full access