Search Results

You are looking at 1 - 10 of 49 items for

  • Author or Editor: Peter A. Stott
  • All content
Nikolaos Christidis and Peter A. Stott
Full access
Nikolaos Christidis and Peter A. Stott
Free access
Nikolaos Christidis and Peter A. Stott
Full access
Nikolaos Christidis and Peter A. Stott

Abstract

The new Hadley Centre system for attribution of weather and climate extremes provides assessments of how human influence on the climate may lead to a change in the frequency of such events. Two different types of ensembles of simulations are generated with an atmospheric model to represent the actual climate and what the climate would have been in the absence of human influence. Estimates of the event frequency with and without the anthropogenic effect are then obtained. Three experiments conducted so far with the new system are analyzed in this study to examine how anthropogenic forcings change the odds of warm years, summers, or winters in a number of regions where the model reliably reproduces the frequency of warm events. In all cases warm events become more likely because of human influence, but estimates of the likelihood may vary considerably from year to year depending on the ocean temperature. While simulations of the actual climate use prescribed observational data of sea surface temperature and sea ice, simulations of the nonanthropogenic world also rely on coupled atmosphere–ocean models to provide boundary conditions, and this is found to introduce a major uncertainty in attribution assessments. Improved boundary conditions constructed with observational data are therefore introduced to minimize this uncertainty. As an experiment with the new boundary conditions indicates, in more than half of the 10 cases considered here anthropogenic influence results in warm events being 3 times more likely, and extreme events 5 times more likely, during September 2011–August 2012.
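The core of such an assessment is a comparison of event frequencies between the two ensembles. A minimal sketch in Python, where the ensemble sizes, distributions, and the 1 K event threshold are illustrative assumptions rather than values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ensembles of regional temperature anomalies (K):
# "actual" runs include anthropogenic forcings; "natural" runs exclude them.
actual = rng.normal(loc=0.8, scale=0.5, size=500)
natural = rng.normal(loc=0.0, scale=0.5, size=500)

threshold = 1.0  # assumed definition of a "warm event" (K anomaly)

# Event frequency with and without the anthropogenic effect.
p1 = float(np.mean(actual > threshold))
p0 = float(np.mean(natural > threshold))

# How the odds of the event change because of human influence.
risk_ratio = p1 / p0
print(f"P(warm | actual)={p1:.3f}  P(warm | natural)={p0:.3f}  ratio={risk_ratio:.1f}")
```

Both frequencies carry sampling uncertainty, which is one reason the abstract stresses that likelihood estimates vary with the prescribed ocean state.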

Full access
Fraser C. Lott and Peter A. Stott

Abstract

Although it is critical to assess the accuracy of attribution studies, the fraction of attributable risk (FAR) cannot be directly assessed from observations since it involves the probability of an event in a world that did not happen, the “natural” world where there was no human influence on climate. Instead, reliability diagrams (usually used to compare probabilistic forecasts to the observed frequencies of events) have been used to assess climate simulations employed for attribution and, by inference, to evaluate the attribution study itself. The Brier score summarizes the reliability diagram's assessment of a model. By constructing a modeling framework where the true FAR is already known, this paper shows that Brier scores are correlated with the accuracy of a climate model ensemble’s calculation of FAR, although only weakly. This weakness exists because the diagram does not account for the accuracy of simulations of the natural world. This is better represented by two reliability diagrams from early and late in the period of study, which would have, respectively, less and greater anthropogenic climate forcing. Two new methods are therefore proposed for assessing the accuracy of FAR, based on using the earlier observational period as a proxy for observations of the natural world. It is found that errors from model-based estimates of these observable quantities are strongly correlated with errors in the FAR estimated in the model framework. These methods thereby provide new observational estimates of the accuracy in FAR.
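For concreteness, FAR and the Brier score can each be written down in a few lines; the probabilities and outcomes below are synthetic assumptions, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-year event probabilities from an all-forcings ensemble
# and a "natural" ensemble (here simply half as likely, by assumption).
n = 50
p_all = rng.uniform(0.2, 0.8, size=n)
p_nat = 0.5 * p_all

# Fraction of attributable risk: FAR = 1 - P_natural / P_all.
far = 1.0 - p_nat / p_all

# Brier score of the all-forcings probabilities against binary outcomes;
# here the outcomes are drawn from p_all itself, i.e. a reliable model.
outcomes = (rng.uniform(size=n) < p_all).astype(float)
brier = float(np.mean((p_all - outcomes) ** 2))
```

A reliability diagram bins the forecast probabilities and plots the observed frequency per bin; the Brier score condenses the same comparison into a single number.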

Full access
Nikolaos Christidis, Andrew Ciavarella, and Peter A. Stott

Abstract

Attribution analyses of extreme events estimate changes in the likelihood of their occurrence due to human climatic influences by comparing simulations with and without anthropogenic forcings. Classes of events are commonly considered that share only one or more key characteristics with the observed event. Here we test the sensitivity of attribution assessments to such event definition differences, using the warm and wet winter of 2015/16 in the United Kingdom as a case study. A large number of simulations from coupled models and an atmospheric model are employed. In the most basic case, warm and wet events are defined relative to climatological temperature and rainfall thresholds. Several other classes of events are investigated that, in addition to threshold exceedance, also account for the effect of observed sea surface temperature (SST) anomalies, the circulation flow, or modes of variability present during the reference event. Human influence is estimated to increase the likelihood of warm winters in the United Kingdom by a factor of 3 or more for events occurring under any atmospheric and oceanic conditions, but also for events with a similar circulation or oceanic state to 2015/16. The likelihood of wet winters is found to increase by at least a factor of 1.5 in the general case, but results from the atmospheric model, conditioned on observed SST anomalies, are more uncertain, indicating that decreases in the likelihood are also possible. The robustness of attribution assessments based on atmospheric models is highly dependent on the representation of SSTs without the effect of human influence.
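The event-definition sensitivity tested here amounts to computing the same probability ratio over different subsets of the ensembles. A sketch of that idea, with all numbers (the NAO-like index, the 1 K threshold, the regression slopes) invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ensembles: each member carries a winter temperature anomaly
# (K) and a circulation index (an NAO-like value, invented here).
n = 2000
nao_all = rng.normal(0.0, 1.0, n)
temp_all = 0.6 + 0.5 * nao_all + rng.normal(0.0, 0.4, n)  # with forcings
nao_nat = rng.normal(0.0, 1.0, n)
temp_nat = 0.5 * nao_nat + rng.normal(0.0, 0.4, n)        # natural world

warm = 1.0     # assumed warm-winter threshold (K)
nao_obs = 1.2  # assumed circulation state of the reference event

def risk_ratio(keep_all, keep_nat):
    """Probability ratio for warm winters within a class of events."""
    p1 = np.mean(temp_all[keep_all] > warm)
    p0 = np.mean(temp_nat[keep_nat] > warm)
    return p1 / p0

# Broadest class: warm winters under any circulation state.
rr_any = risk_ratio(np.ones(n, dtype=bool), np.ones(n, dtype=bool))

# Narrower class: warm winters with a circulation similar to the event.
rr_cond = risk_ratio(np.abs(nao_all - nao_obs) < 0.5,
                     np.abs(nao_nat - nao_obs) < 0.5)
```

Narrowing the class shrinks the samples on both sides of the ratio, so conditional estimates are noisier, in line with the greater uncertainty reported for the SST-conditioned results.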

Full access
Peter A. Stott and Simon F. B. Tett

Abstract

Spatially and temporally dependent fingerprint patterns of near-surface temperature change are derived from transient climate simulations of the second Hadley Centre coupled ocean–atmosphere GCM (HADCM2). Trends in near-surface temperature are calculated from simulations in which HADCM2 is forced with historical increases in greenhouse gases only and with both greenhouse gases and anthropogenic sulfur emissions. For each response an ensemble of four simulations is carried out. An estimate of the natural internal variability of the ocean–atmosphere system is taken from a long multicentury control run of HADCM2.

The aim of the study is to investigate the spatial and temporal scales on which it is possible to detect a significant change in climate. Temporal scales are determined by taking temperature trends over 10, 30, and 50 yr using annual mean data, and spatial scales are defined by projecting these trends onto spherical harmonics.

Each fingerprint pattern is projected onto the recent observed pattern to give a scalar detection variable. This is compared with the distribution expected from natural variability, estimated by projecting the fingerprint pattern onto a distribution of patterns taken from the control run. Detection is claimed if the detection variable is greater than the 95th percentile of the distribution expected from natural variability. The results show that climate change can be detected on the global mean scale for 30- and 50-yr trends but not for 10-yr trends, assuming that the model’s estimate of variability is correct. At subglobal scales, climate change can be detected only for 50-yr trends and only for large spatial scales (greater than 5000 km).
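The projection-and-percentile test described above can be sketched in a few lines; the fingerprint, observed pattern, and control patterns below are random stand-ins, not model output:

```python
import numpy as np

rng = np.random.default_rng(3)
nspace = 100  # grid points retained after truncation to large scales

# Stand-in fingerprint of 50-yr temperature trends (unit vector).
fingerprint = rng.normal(0.0, 1.0, nspace)
fingerprint /= np.linalg.norm(fingerprint)

# Stand-in observed trend pattern: scaled fingerprint plus noise.
obs = 3.0 * fingerprint + rng.normal(0.0, 1.0, nspace)

# Control-run trend patterns supply the natural-variability distribution.
control = rng.normal(0.0, 1.0, size=(200, nspace))

det_obs = obs @ fingerprint             # scalar detection variable
det_ctl = control @ fingerprint         # its distribution under no forcing
threshold = np.percentile(det_ctl, 95)  # 95th percentile of that noise

detected = det_obs > threshold          # detection claim at the 5% level
```

The test's power depends on how long a control run is available: the 95th percentile is estimated from the control patterns, so an underestimate of natural variability (as noted below for small spatial scales) biases detection toward false positives.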

Patterns of near-surface temperature trends for the 50 yr up to 1995 from the simulation that includes only greenhouse gas forcing are inconsistent with the observed patterns at small spatial scales (less than 2000 km). In contrast, patterns of temperature trends for the simulation that includes both greenhouse gas and sulfate forcing are consistent with the observed patterns at all spatial scales.

The possible limits to future detectability are investigated by taking one member of each ensemble to represent the observations and other members of the ensemble to represent model realizations of future temperature trends. The results show that for trends to 1995 the probability of detection is greatest at spatial scales greater than 5000 km. As the future signal of climate change becomes larger relative to the noise of natural variability, detection becomes very likely at all spatial scales by the middle of the next century.

The model underestimates climate variability as seen in the observations at spatial scales less than 2000 km. Therefore, some caution must be exercised when interpreting model-based detection results that include a contribution of small spatial scales to the climate change fingerprint.

Full access
Nikolaos Christidis, Richard A. Betts, and Peter A. Stott
Open access
Philip W. Mote, Peter A. Stott, and Robert S. Harwood

Abstract

The authors have used a spectral, primitive equation mechanistic model of the stratosphere and mesosphere to simulate observed stratospheric flow through the winters of 1991–92 and 1994–95 by forcing the model at 100 hPa with observed geopotential height. The authors assess the model’s performance quantitatively by comparing the simulations with the United Kingdom Meteorological Office (UKMO) assimilated stratosphere–troposphere data. Time-mean, zonal-mean temperatures are generally within 5 K and winds within 5 m s−1; transient features, such as wave growth, are mostly simulated well. The phase accuracy of planetary-scale waves declines with altitude and wavenumber, and the model has difficulty correctly simulating traveling anticyclones in the upper stratosphere. The authors examine the minor warming of January 1995, which was unusual in its depth and development and which the model simulated fairly well. The authors also examine the minor warming of January 1992, which the model missed, and a major warming in February 1992 that occurred in the model but not in the observations.

Full access
Donald P. Cummins, David B. Stephenson, and Peter A. Stott

Abstract

This study has developed a rigorous and efficient maximum likelihood method for estimating the parameters in stochastic energy balance models (with any number k > 0 of boxes) given time series of surface temperature and top-of-the-atmosphere net downward radiative flux. The method works by finding a state-space representation of the linear dynamic system and evaluating the likelihood recursively via the Kalman filter. Confidence intervals for estimated parameters are straightforward to construct in the maximum likelihood framework, and information criteria may be used to choose an optimal number of boxes for parsimonious k-box emulation of atmosphere–ocean general circulation models (AOGCMs). In addition to estimating model parameters, the method enables hidden state estimation for the unobservable boxes corresponding to the deep ocean, and also enables noise filtering for observations of surface temperature. The feasibility, reliability, and performance of the proposed method are demonstrated in a simulation study. To obtain a set of optimal k-box emulators, models are fitted to the 4 × CO2 step responses of 16 AOGCMs in CMIP5. It is found that for all 16 AOGCMs three boxes are required for optimal k-box emulation. The number of boxes k is found to influence, sometimes strongly, the impulse responses of the fitted models.
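The state-space/Kalman-filter machinery can be illustrated with a two-box version; the physical parameters, noise levels, and step forcing below are assumed values chosen for the sketch, not those estimated in the paper:

```python
import numpy as np

def kalman_loglik(y, A, b, f, Q, H, R, x0, P0):
    """Gaussian log-likelihood of scalar observations y under
    x[t+1] = A x[t] + b * f[t] + w,  w ~ N(0, Q)
    y[t]   = H x[t] + v,             v ~ N(0, R),
    evaluated recursively with the Kalman filter."""
    x, P, ll = x0, P0, 0.0
    for t in range(len(y)):
        # Innovation and its variance (scalar observation).
        innov = y[t] - (H @ x)[0]
        S = (H @ P @ H.T + R)[0, 0]
        ll += -0.5 * (np.log(2.0 * np.pi * S) + innov**2 / S)
        # Measurement update.
        K = P @ H.T / S
        x = x + K[:, 0] * innov
        P = P - K @ H @ P
        # Time update.
        x = A @ x + b * f[t]
        P = A @ P @ A.T + Q
    return ll

# Discrete-time (annual step) two-box system; parameters are assumed.
lam, gam, C1, C2 = 1.2, 0.7, 8.0, 100.0
A = np.eye(2) + np.array([[-(lam + gam) / C1, gam / C1],
                          [gam / C2, -gam / C2]])
b = np.array([1.0 / C1, 0.0])
H = np.array([[1.0, 0.0]])    # only the surface box is observed
Q = np.diag([0.05, 0.001])    # process-noise covariance
R = np.array([[0.02]])        # measurement-noise variance

# Simulate an abrupt-4xCO2-style step response and score it.
rng = np.random.default_rng(4)
T = 150
f = np.full(T, 7.4)           # constant step forcing (W m^-2)
x = np.zeros(2)
y = np.empty(T)
for t in range(T):
    y[t] = x[0] + rng.normal(0.0, np.sqrt(R[0, 0]))
    x = A @ x + b * f[t] + rng.multivariate_normal(np.zeros(2), Q)

ll = kalman_loglik(y, A, b, f, Q, H, R, np.zeros(2), np.eye(2))
```

Maximizing this log-likelihood over the physical parameters yields the maximum likelihood estimates, and information criteria can then compare fits with different numbers of boxes, as described in the abstract.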

Restricted access