Search Results

You are looking at 1 - 10 of 49 items for

  • Author or Editor: Peter A. Stott
  • Refine by Access: All Content
Nikolaos Christidis and Peter A. Stott
Free access
Nikolaos Christidis and Peter A. Stott
Full access
Nikolaos Christidis and Peter A. Stott
Full access
Nikolaos Christidis and Peter A. Stott

Abstract

The new Hadley Centre system for attribution of weather and climate extremes provides assessments of how human influence on the climate may lead to a change in the frequency of such events. Two different types of ensembles of simulations are generated with an atmospheric model to represent the actual climate and what the climate would have been in the absence of human influence. Estimates of the event frequency with and without the anthropogenic effect are then obtained. Three experiments conducted so far with the new system are analyzed in this study to examine how anthropogenic forcings change the odds of warm years, summers, or winters in a number of regions where the model reliably reproduces the frequency of warm events. In all cases warm events become more likely because of human influence, but estimates of the likelihood may vary considerably from year to year depending on the ocean temperature. While simulations of the actual climate use prescribed observational data of sea surface temperature and sea ice, simulations of the nonanthropogenic world also rely on coupled atmosphere–ocean models to provide boundary conditions, and this is found to introduce a major uncertainty in attribution assessments. Improved boundary conditions constructed with observational data are introduced in order to minimize this uncertainty. In more than half of the 10 cases considered here anthropogenic influence results in warm events being 3 times more likely and extreme events 5 times more likely during September 2011–August 2012, as an experiment with the new boundary conditions indicates.
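
The central calculation behind such assessments is the ratio of event probabilities estimated from the two ensembles. Below is a minimal illustrative sketch in Python, not the Hadley Centre system's code; the ensemble sizes, anomaly values, and threshold are invented for the example.

    # Illustrative only: estimate how much more likely a warm event becomes
    # under anthropogenic forcing, given two ensembles of simulated values.
    import numpy as np

    def probability_ratio(actual_ens, natural_ens, threshold):
        """Ratio of event probabilities with and without human influence."""
        p_actual = np.mean(np.asarray(actual_ens) > threshold)    # all forcings
        p_natural = np.mean(np.asarray(natural_ens) > threshold)  # natural only
        return p_actual / p_natural  # e.g. 3.0 = event is 3 times more likely

    # Hypothetical 1000-member ensembles of temperature anomalies (degC)
    rng = np.random.default_rng(0)
    actual = rng.normal(0.8, 0.5, 1000)   # with human influence
    natural = rng.normal(0.0, 0.5, 1000)  # boundary conditions without it
    print(probability_ratio(actual, natural, threshold=1.0))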

Full access
Fraser C. Lott and Peter A. Stott

Abstract

Although it is critical to assess the accuracy of attribution studies, the fraction of attributable risk (FAR) cannot be directly assessed from observations since it involves the probability of an event in a world that did not happen, the “natural” world where there was no human influence on climate. Instead, reliability diagrams (usually used to compare probabilistic forecasts to the observed frequencies of events) have been used to assess climate simulations employed for attribution and by inference to evaluate the attribution study itself. The Brier score summarizes this assessment of a model by the reliability diagram. By constructing a modeling framework where the true FAR is already known, this paper shows that Brier scores are correlated with the accuracy of a climate model ensemble’s calculation of FAR, although only weakly. This weakness exists because the diagram does not account for accuracy of simulations of the natural world. This is better represented by two reliability diagrams from early and late in the period of study, which would have, respectively, less and greater anthropogenic climate forcing. Two new methods are therefore proposed for assessing the accuracy of FAR, based on using the earlier observational period as a proxy for observations of the natural world. It is found that errors from model-based estimates of these observable quantities are strongly correlated with errors in the FAR estimated in the model framework. These methods thereby provide new observational estimates of the accuracy of FAR.
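
For readers unfamiliar with the two quantities, the sketch below shows how the fraction of attributable risk and the Brier score are conventionally computed; the probabilities, forecasts, and outcomes are made up for illustration and are not taken from the paper.

    # Illustrative definitions of FAR and the Brier score.
    import numpy as np

    def far(p_actual, p_natural):
        """Fraction of attributable risk: FAR = 1 - P_natural / P_actual."""
        return 1.0 - p_natural / p_actual

    def brier_score(forecast_probs, outcomes):
        """Mean squared difference between forecast probabilities and binary
        observed outcomes (0 = event did not occur, 1 = it did)."""
        f = np.asarray(forecast_probs, dtype=float)
        o = np.asarray(outcomes, dtype=float)
        return np.mean((f - o) ** 2)

    print(far(p_actual=0.3, p_natural=0.1))        # FAR = 2/3
    print(brier_score([0.9, 0.2, 0.7], [1, 0, 1])) # ~0.047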

Full access
Peter A. Stott and Simon F. B. Tett

Abstract

Spatially and temporally dependent fingerprint patterns of near-surface temperature change are derived from transient climate simulations of the second Hadley Centre coupled ocean–atmosphere GCM (HADCM2). Trends in near-surface temperature are calculated from simulations in which HADCM2 is forced with historical increases in greenhouse gases only and with both greenhouse gases and anthropogenic sulfur emissions. For each response an ensemble of four simulations is carried out. An estimate of the natural internal variability of the ocean–atmosphere system is taken from a long multicentury control run of HADCM2.

The aim of the study is to investigate the spatial and temporal scales on which it is possible to detect a significant change in climate. Temporal scales are determined by taking temperature trends over 10, 30, and 50 yr using annual mean data, and spatial scales are defined by projecting these trends onto spherical harmonics.

Each fingerprint pattern is projected onto the recent observed pattern to give a scalar detection variable. This is compared with the distribution expected from natural variability, estimated by projecting the fingerprint pattern onto a distribution of patterns taken from the control run. Detection is claimed if the detection variable is greater than the 95th percentile of the distribution expected from natural variability. The results show that climate change can be detected on the global mean scale for 30- and 50-yr trends but not for 10-yr trends, assuming that the model’s estimate of variability is correct. At subglobal scales, climate change can be detected only for 50-yr trends and only for large spatial scales (greater than 5000 km).

Patterns of near-surface temperature trends for the 50 yr up to 1995 from the simulation that includes only greenhouse gas forcing are inconsistent with the observed patterns at small spatial scales (less than 2000 km). In contrast, patterns of temperature trends for the simulation that includes both greenhouse gas and sulfate forcing are consistent with the observed patterns at all spatial scales.

The possible limits to future detectability are investigated by taking one member of each ensemble to represent the observations and other members of the ensemble to represent model realizations of future temperature trends. The results show that for trends to 1995 the probability of detection is greatest at spatial scales greater than 5000 km. As the future signal of climate change becomes larger relative to the noise of natural variability, detection becomes very likely at all spatial scales by the middle of the next century.

The model underestimates climate variability as seen in the observations at spatial scales less than 2000 km. Therefore, some caution must be exercised when interpreting model-based detection results that include a contribution of small spatial scales to the climate change fingerprint.
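
The detection step described above, projecting a trend pattern onto the fingerprint and comparing the resulting scalar with the distribution obtained from the control run, can be outlined as follows. This is a hedged sketch under simplifying assumptions (a plain dot-product projection on flattened patterns), not the HADCM2 analysis code.

    # Illustrative detection test: is the observed projection above the 95th
    # percentile of projections of control-run (natural variability) patterns?
    import numpy as np

    def detection_variable(fingerprint, pattern):
        """Scalar projection of a trend pattern onto the fingerprint."""
        f, p = np.asarray(fingerprint).ravel(), np.asarray(pattern).ravel()
        return np.dot(f, p) / np.dot(f, f)

    def detected(fingerprint, observed_pattern, control_patterns, pct=95):
        obs_value = detection_variable(fingerprint, observed_pattern)
        control_values = [detection_variable(fingerprint, c)
                          for c in control_patterns]
        return obs_value > np.percentile(control_values, pct)

    # Hypothetical fingerprint, observed pattern, and 500 control patterns
    rng = np.random.default_rng(1)
    fp = rng.normal(size=100)
    obs = 0.5 * fp + rng.normal(size=100)
    controls = rng.normal(size=(500, 100))
    print(detected(fp, obs, controls))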

Full access
Nikolaos Christidis, Andrew Ciavarella, and Peter A. Stott

Abstract

Attribution analyses of extreme events estimate changes in the likelihood of their occurrence due to human climatic influences by comparing simulations with and without anthropogenic forcings. Classes of events are commonly considered that only share one or more key characteristics with the observed event. Here we test the sensitivity of attribution assessments to such event definition differences, using the warm and wet winter of 2015/16 in the United Kingdom as a case study. A large number of simulations from coupled models and an atmospheric model are employed. In the most basic case, warm and wet events are defined relative to climatological temperature and rainfall thresholds. Several other classes of events are investigated that, in addition to threshold exceedance, also account for the effect of observed sea surface temperature (SST) anomalies, the circulation flow, or modes of variability present during the reference event. Human influence is estimated to increase the likelihood of warm winters in the United Kingdom by a factor of 3 or more for events occurring under any atmospheric and oceanic conditions, but also for events with a similar circulation or oceanic state to 2015/16. The likelihood of wet winters is found to increase by at least a factor of 1.5 in the general case, but results from the atmospheric model, conditioned on observed SST anomalies, are more uncertain, indicating that decreases in the likelihood are also possible. The robustness of attribution assessments based on atmospheric models is highly dependent on the representation of SSTs without the effect of human influence.
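
As a minimal sketch of the conditioning idea tested above, the probability ratio can be recomputed after restricting both ensembles to members whose circulation index resembles the reference event. The index, thresholds, and data below are illustrative assumptions, not the paper's method or values.

    # Illustrative conditional risk ratio: restrict each ensemble to members
    # with a circulation index inside circ_range before taking probabilities.
    import numpy as np

    def conditional_risk_ratio(temp_all, circ_all, temp_nat, circ_nat,
                               warm_threshold, circ_range=None):
        def event_prob(temp, circ):
            temp, circ = np.asarray(temp), np.asarray(circ)
            keep = np.ones(temp.size, dtype=bool)
            if circ_range is not None:
                keep = (circ >= circ_range[0]) & (circ <= circ_range[1])
            return np.mean(temp[keep] > warm_threshold)
        return event_prob(temp_all, circ_all) / event_prob(temp_nat, circ_nat)

    # Made-up ensembles: temperature anomaly and a circulation index per member
    rng = np.random.default_rng(2)
    t_all, c_all = rng.normal(0.6, 1.0, 2000), rng.normal(0.0, 1.0, 2000)
    t_nat, c_nat = rng.normal(0.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000)
    print(conditional_risk_ratio(t_all, c_all, t_nat, c_nat, 1.0))             # any conditions
    print(conditional_risk_ratio(t_all, c_all, t_nat, c_nat, 1.0, (0.5, 1.5))) # similar circulation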

Full access
Nikolaos Christidis, Richard A. Betts, and Peter A. Stott
Open access
Nikolaos Christidis, Mark McCarthy, Andrew Ciavarella, and Peter A. Stott
Full access
Thomas C. Peterson, Peter A. Stott, and Stephanie Herring

Attribution of extreme events shortly after their occurrence stretches the current state of the art of climate change assessment. To help foster the growth of this science, this article illustrates some approaches to answering questions about the role of human factors, and the relative role of different natural factors, for six specific extreme weather or climate events of 2011.

Not every event is linked to climate change. The rainfall associated with the devastating Thailand floods can be explained by climate variability. But long-term warming played a part in the others. While La Niña contributed to the failure of the rains in the Horn of Africa, an increased frequency of such droughts there was linked to warming in the Western Pacific–Indian Ocean warm pool. Europe's record warm temperatures would probably not have been as unusual if the high temperatures had been caused only by the atmospheric flow regime without any long-term warming.

Calculating how the odds of a particular extreme event have changed provides a means of quantifying the influence of climate change on the event. The heat wave that affected Texas has become distinctly more likely than it was 40 years ago. In the same vein, the likelihood of very warm November temperatures in the UK has increased substantially since the 1960s.

Comparing climate model simulations with and without human factors shows that the cold UK winter of 2010/2011 has become about half as likely as a result of human influence on climate, illustrating that some extreme events are becoming less likely due to climate change.

Full access