Search Results

Showing 1–10 of 21 items for:

  • Author or Editor: Jason E. Smerdon
  • All content
Marc Stieglitz and Jason E. Smerdon

Abstract

The objective of this work is to develop a Simple Land-Interface Model (SLIM) that captures the seasonal and interannual behavior of land–atmosphere coupling, as well as the subsequent subsurface temperature evolution. The model employs the one-dimensional thermal diffusion equation driven by a surface flux boundary condition. While the underlying physics is straightforward, the SLIM framework allows a qualitative understanding of the first-order controls that govern the seasonal coupling between the land and atmosphere by implicitly representing the dominant processes at the land surface. The model is used to perform a suite of experiments that demonstrate how changes in surface air temperature and coupling conditions control subsurface temperature evolution. The work presented here suggests that a collective approach employing both complex and simple models, when joined with analyses of observational data, has the potential to increase understanding of land–atmosphere coupling and the subsequent evolution of subsurface temperatures.
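
A minimal sketch of the numerics the abstract describes, assuming an explicit finite-difference scheme for the one-dimensional heat equation with a prescribed surface heat flux as the upper boundary condition; the grid, diffusivity, conductivity, and sinusoidal forcing below are illustrative assumptions, not SLIM's actual parameters:

    import numpy as np

    # Illustrative 1-D soil column driven by a surface-flux boundary
    # condition (explicit scheme); all values are assumptions, not SLIM's.
    kappa = 1e-6               # thermal diffusivity (m^2 s^-1), assumed
    k_cond = 2.0               # thermal conductivity (W m^-1 K^-1), assumed
    dz, nz = 0.1, 200          # 0.1 m spacing, 20 m deep column
    dt = 0.4 * dz**2 / kappa   # time step within the explicit stability limit
    T = np.full(nz, 10.0)      # uniform initial temperature (deg C)

    def step(T, flux):
        """Advance one step; flux (W m^-2) is positive into the ground."""
        Tn = T.copy()
        r = kappa * dt / dz**2
        # interior nodes: dT/dt = kappa * d2T/dz2
        Tn[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # surface node: prescribed flux F = -k dT/dz via a ghost node
        Tn[0] += r * (2.0 * (T[1] - T[0]) + 2.0 * dz * flux / k_cond)
        # bottom node: insulated (zero-flux) lower boundary
        Tn[-1] += r * 2.0 * (T[-2] - T[-1])
        return Tn

    # drive the column with an idealized seasonal cycle of surface flux
    year = 365.0 * 86400.0
    for n in range(int(year / dt)):
        T = step(T, 15.0 * np.sin(2.0 * np.pi * n * dt / year))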

Full access
Jason E. Smerdon, Alexey Kaplan, and Diana Chang

Abstract

The regularized expectation maximization (RegEM) method has been used in recent studies to derive climate field reconstructions of Northern Hemisphere temperatures during the last millennium. Original pseudoproxy experiments that tested RegEM [with ridge regression regularization (RegEM-Ridge)] standardized the input data in a way that improved the performance of the reconstruction method, but included data from the reconstruction interval for estimates of the mean and standard deviation of the climate field—information that is not available in real-world reconstruction problems. When standardizations are confined to the calibration interval only, pseudoproxy reconstructions performed with RegEM-Ridge suffer from warm biases and variance losses. Only cursory explanations of this so-called standardization sensitivity of RegEM-Ridge have been published, but they have suggested that the selection of the regularization (ridge) parameter by means of minimizing the generalized cross validation (GCV) function is the source of the effect. The origin of the standardization sensitivity is more thoroughly investigated herein and is shown not to be associated with the selection of the ridge parameter; sets of derived reconstructions reveal that GCV-selected ridge parameters are minimally different for reconstructions standardized either over both the reconstruction and calibration intervals or over the calibration interval only. While GCV may select ridge parameters that are different from those that precisely minimize the error in pseudoproxy reconstructions, RegEM reconstructions performed with truly optimized ridge parameters are not significantly different from those that use GCV-selected ridge parameters. The true source of the standardization sensitivity is attributable to the inclusion or exclusion of additional information provided by the reconstruction interval, namely, the mean and standard deviation fields computed for the complete modeled dataset. These fields are significantly different from those for the calibration period alone because of the violation of a standard EM assumption that missing values are missing at random in typical paleoreconstruction problems; climate data are predominantly missing in the preinstrumental period when the mean climate was significantly colder than the mean of the instrumental period. The origin of the standardization sensitivity therefore is not associated specifically with RegEM-Ridge, and more recent attempts to regularize the EM algorithm using truncated total least squares could theoretically also be susceptible to the problems affecting RegEM-Ridge. Nevertheless, the principal failure of RegEM-Ridge arises because of a poor initial estimate of the mean field, and therefore leaves open the possibility that alternative methods may perform better.
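
The standardization difference at issue can be made concrete in a few lines. In this sketch (all names and values are synthetic illustrations, not the paper's experimental setup), a time-by-gridpoint field has a colder unobserved era, so statistics computed over the calibration interval alone differ systematically from full-period statistics:

    import numpy as np

    rng = np.random.default_rng(0)
    nt, ns = 1000, 50                  # synthetic time steps x grid points
    cal = slice(850, 1000)             # last 150 steps = calibration interval
    field = rng.normal(0.0, 1.0, (nt, ns))
    field[:850] -= 0.5                 # colder preinstrumental era, so values
                                       # are not missing at random

    # standardization in the original pseudoproxy tests: statistics over the
    # full interval, information unavailable in a real reconstruction
    z_full = (field - field.mean(0)) / field.std(0)

    # real-world analogue: statistics from the calibration interval only
    mu, sd = field[cal].mean(0), field[cal].std(0)
    z_cal = (field - mu) / sd

    # the calibration-only mean is systematically warm relative to the full
    # period, the bias the abstract traces through RegEM-Ridge
    print((mu - field.mean(0)).mean())  # ~ +0.42 in this synthetic setup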

Full access
Jason E. Smerdon, Alexey Kaplan, and Daniel E. Amrhein

Abstract

The commenters confirm the errors identified and discussed in Smerdon et al., which either invalidated or required the reinterpretation of quantitative results from pseudoproxy experiments presented or used in several earlier papers. These errors have a strong influence on the spatial skill assessments of climate field reconstructions (CFRs), despite their small impacts on skill statistics averaged over the Northern Hemisphere. On the basis of spatial performance and contrary to the claim by the commenters, the Regularized Expectation Maximization method using truncated total least squares (RegEM-TTLS) cannot be considered a preferred CFR technique. Moreover, distinctions between CFR methods in the context of the discussion in the original paper are immaterial. Continued investigations using accurately described and faithfully executed pseudoproxy experiments are critical for further evaluation and improvement of CFR methods.

Full access
Jason E. Smerdon, Alexey Kaplan, and Daniel E. Amrhein

Abstract

Pseudoproxy experiments evaluate statistical methods used to reconstruct climate fields from paleoclimatic proxies during the Common Era. These experiments typically employ output from millennial simulations by general circulation models (GCMs). It is demonstrated that multiple published pseudoproxy studies have used erroneously processed GCM surface temperature fields: the NCAR Community Climate System Model (CCSM), version 1.4, field was incorrectly oriented geographically and the GKSS ECHO-G FOR1 field was corrupted by a hemispheric-scale smoothing in the Western Hemisphere. These problems are not associated with the original model simulations; they instead arose because of incorrect processing of the model data for the pseudoproxy experiments. The consequences of these problems are evaluated for the studies in which the incorrect fields were used. Some quantitative results are invalidated by the findings: these include all experiments that used the corrupted ECHO-G field and those aspects of previous CCSM experiments that focused on Niño-3 reconstructions. Other results derived from the CCSM field can be reinterpreted based on the information provided herein and their qualitative characteristics remain similar.
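
Processing errors of this kind can be guarded against with a simple orientation check before a model field is used; the function below is a hypothetical example for the latitude axis (an analogous check applies to longitude):

    import numpy as np

    def orient_north_up(field, lats):
        """Return (field, lats) with latitude ascending south to north.

        field: array whose first axis is latitude; lats: 1-D latitudes.
        Both names are hypothetical; the check itself is generic.
        """
        lats = np.asarray(lats)
        if lats[0] > lats[-1]:          # stored north to south: flip both
            field, lats = field[::-1], lats[::-1]
        if not np.all(np.diff(lats) > 0):
            raise ValueError("latitude axis is not monotonic")
        return field, lats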

Full access
Sloan Coats, Benjamin I. Cook, Jason E. Smerdon, and Richard Seager

Abstract

Pancontinental droughts in North America, or droughts that simultaneously affect a large percentage of the geographically and climatically distinct regions of the continent, present significant on-the-ground management challenges and, as such, are an important target for scientific research. The methodology of paleoclimate-model data comparisons is used herein to provide a more comprehensive understanding of pancontinental drought dynamics. Models are found to simulate pancontinental drought with the frequency and spatial patterns exhibited by the paleoclimate record. They do not, however, agree on the modes of atmosphere–ocean variability that produce pancontinental droughts because simulated El Niño–Southern Oscillation (ENSO), Pacific decadal oscillation (PDO), and Atlantic multidecadal oscillation (AMO) dynamics, and their teleconnections to North America, are different between models and observations. Despite these dynamical differences, models are able to reproduce large-magnitude centennial-scale variability in the frequency of pancontinental drought occurrence—an important feature of the paleoclimate record. These changes do not appear to be tied to exogenous forcing, suggesting that simulated internal hydroclimate variability on these time scales is large in magnitude. Results clarify our understanding of the dynamics that produce real-world pancontinental droughts while assessing the ability of models to accurately characterize future drought risks.

Full access
Sloan Coats, Jason E. Smerdon, Benjamin I. Cook, and Richard Seager

Abstract

Multidecadal drought periods in the North American Southwest (25°–42.5°N, 125°–105°W), so-called megadroughts, are a prominent feature of the paleoclimate record over the last millennium (LM). Six forced transient simulations of the LM along with corresponding historical (1850–2005) and 500-yr preindustrial control runs from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are analyzed to determine if atmosphere–ocean general circulation models (AOGCMs) are able to simulate droughts that are similar in persistence and severity to the megadroughts in the proxy-derived North American Drought Atlas. Megadroughts are found in each of the AOGCM simulations of the LM, although there are intermodel differences in the number, persistence, and severity of these features. Despite these differences, a common feature of the simulated megadroughts is that they are not forced by changes in the exogenous forcing conditions. Furthermore, only the Community Climate System Model (CCSM), version 4, simulation contains megadroughts that are consistently forced by cooler conditions in the tropical Pacific Ocean. These La Niña–like mean states are not accompanied by changes to the interannual variability of the El Niño–Southern Oscillation system and result from internal multidecadal variability of the tropical Pacific mean state, of which the CCSM has the largest magnitude of the analyzed simulations. Critically, the CCSM is also found to have a realistic teleconnection between the tropical Pacific and North America that is stationary on multidecadal time scales. Generally, models with some combination of a realistic and stationary teleconnection and large multidecadal variability in the tropical Pacific are found to have the highest incidence of megadroughts driven by the tropical Pacific boundary conditions.

Full access
Jason E. Smerdon, Alexey Kaplan, Diana Chang, and Michael N. Evans

Abstract

Canonical correlation analysis (CCA) is evaluated for paleoclimate field reconstructions in the context of pseudoproxy experiments assembled from the millennial integration (850–1999 c.e.) of the National Center for Atmospheric Research Community Climate System Model, version 1.4. A parsimonious method for selecting the order of the CCA model is presented. Results suggest that the method is capable of resolving multiple (3–13) climatic patterns given the estimated proxy observational network and the amount of observational uncertainty. CCA reconstructions are compared to those derived from the regularized expectation maximization method using ridge regression regularization (RegEM-Ridge). CCA and RegEM-Ridge yield similar skill patterns that are characterized by high correlation regions collocated with dense pseudoproxy sampling areas in North America and Europe. Both methods also produce reconstructions characterized by spatially variable warm biases and variance losses, particularly at high pseudoproxy noise levels. RegEM-Ridge in particular is subject to significantly larger variance losses than CCA, even though the spatial correlation patterns of the two methods are comparable. Results collectively indicate the importance of evaluating the field performance of methods that target spatial climate patterns during the last several millennia and suggest that the results of currently available climate field reconstructions should be interpreted carefully.
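
For readers unfamiliar with the mechanics, a truncated-CCA reconstruction can be sketched as follows. This is a bare-bones illustration, not the paper's implementation: the three truncation parameters correspond in spirit to the field, proxy, and CCA truncations, but their values and the paper's order-selection method are omitted here and the defaults are arbitrary assumptions:

    import numpy as np

    def cca_reconstruct(T_cal, P_cal, P_new, kt=20, kp=20, d=5):
        """Sketch of a truncated-CCA climate field reconstruction.

        T_cal: calibration field (time x gridpoints); P_cal, P_new: proxy
        matrices (time x proxies). kt, kp, d are illustrative truncations.
        """
        Tm, Pm = T_cal.mean(0), P_cal.mean(0)
        # reduce both data sets to their leading SVD (EOF) subspaces
        Ut, st, Vt = np.linalg.svd(T_cal - Tm, full_matrices=False)
        Ut, st, Vt = Ut[:, :kt], st[:kt], Vt[:kt]
        Up, sp, Vp = np.linalg.svd(P_cal - Pm, full_matrices=False)
        Up, sp, Vp = Up[:, :kp], sp[:kp], Vp[:kp]
        # canonical pairs from the SVD of the whitened cross-covariance
        A, rho, Bt = np.linalg.svd(Ut.T @ Up, full_matrices=False)
        # project new proxies to canonical space, damp by the canonical
        # correlations, and map back to the field
        W = ((P_new - Pm) @ Vp.T / sp) @ Bt.T[:, :d]
        return Tm + (W * rho[:d]) @ A[:, :d].T * st @ Vt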

Full access
Johannes P. Werner, Juerg Luterbacher, and Jason E. Smerdon

Abstract

A pseudoproxy comparison is presented for two statistical methods used to derive annual climate field reconstructions (CFRs) for Europe. The employed methods use the canonical correlation analysis (CCA) procedure presented by Smerdon et al. and the Bayesian hierarchical model (BHM) method adopted from Tingley and Huybers. Pseudoproxy experiments (PPEs) are constructed from modeled temperature data sampled from the 1250-yr paleo-run of the NCAR Community Climate System Model (CCSM), version 1.4, by Ammann et al. Pseudoproxies approximate the distribution of the multiproxy network used by Mann et al. over the European region of interest. Gaussian white noise is added to the temperature data to mimic the combined signal and noise properties of real-world proxies. Results indicate that, while both methods perform well in areas with good proxy coverage, the BHM method outperforms the CCA method across the entire field and additionally returns objective error estimates.
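
The pseudoproxy construction described above reduces to sampling the model field at proxy locations and adding white noise scaled to a target signal-to-noise ratio; a minimal sketch (function name and the SNR value are illustrative):

    import numpy as np

    rng = np.random.default_rng(42)

    def make_pseudoproxies(temps, snr=0.5):
        """Add Gaussian white noise to gridpoint temperature series.

        temps: time x sites array sampled at the proxy locations; snr is
        the ratio of signal to noise standard deviations (0.5 is used
        here only as an illustration).
        """
        noise_sd = temps.std(axis=0) / snr
        return temps + rng.normal(0.0, noise_sd, temps.shape)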

Full access