Search Results

Showing 1–9 of 9 items for Author or Editor: Aurélien Ribes
Julien Cattiaux and Aurélien Ribes

Abstract

Weather extremes are the showcase of climate variability. Given their societal and environmental impacts, they are of great public interest. The prevention of natural hazards, the monitoring of single events, and, more recently, their attribution to anthropogenic climate change constitute key challenges for both weather services and scientific communities. Before a single event can be scrutinized, it must be properly defined; in particular, its spatiotemporal characteristics must be chosen. So far, this definition is made with some degree of arbitrariness, yet it might affect conclusions when explaining an extreme weather event from a climate perspective. Here, we propose a generic road map for defining single events as objectively as possible. In particular, as extreme events are inherently characterized by a small probability of occurrence, we suggest selecting the space–time characteristics that minimize this probability. In this way, we are able to automatically identify the spatiotemporal scale at which the event has been the most extreme. According to our methodology, the European heat wave of summer 2003 would be defined as a 2-week event over France and Spain, and the Boulder, Colorado, intense rainfall of September 2013 as a 5-day local event. Importantly, we show that in both cases, maximizing the rarity of the event does not maximize (or minimize) its fraction of attributable risk to anthropogenic climate change.
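The selection rule described above (pick the space–time scale at which the event's occurrence probability is smallest) can be sketched numerically. Everything below (the synthetic anomalies, the Gaussian exceedance estimate, and the candidate windows) is an illustrative assumption, not the paper's actual data or fitting procedure:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def exceedance_prob(climatology, event_value):
    """Gaussian estimate of P(X >= event_value) from a climatological sample."""
    mu, sigma = climatology.mean(), climatology.std(ddof=1)
    z = (event_value - mu) / sigma
    return 0.5 * (1 - erf(z / sqrt(2)))

# Hypothetical daily temperature anomalies: 60 past summers of 90 days each,
# plus one candidate event summer with a warm shift.
past = rng.normal(0.0, 2.0, size=(60, 90))
event = rng.normal(3.0, 2.0, size=90)

best = None
for window in (1, 5, 15, 30, 90):  # candidate temporal scales (days)
    # Aggregate with a running mean, then take each summer's maximum.
    kernel = np.ones(window) / window
    clim_max = np.array([np.convolve(y, kernel, "valid").max() for y in past])
    event_max = np.convolve(event, kernel, "valid").max()
    p = exceedance_prob(clim_max, event_max)
    if best is None or p < best[1]:
        best = (window, p)  # keep the scale minimizing the probability

print(f"most extreme at a {best[0]}-day scale (p ~ {best[1]:.3g})")
```

A full implementation would also loop over candidate spatial domains and use a distribution better suited to extremes than the Gaussian used here for brevity.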

Open access
Elodie Charles, Benoit Meyssignac, and Aurélien Ribes

Abstract

Observations and climate models are combined to identify an anthropogenic warming signature in the upper ocean heat content (OHC) changes since 1971. We apply a new detection and attribution analysis developed by Ribes et al. that uses a symmetric treatment of the magnitude and the pattern of the climate response to each radiative forcing. A first estimate of the OHC response to natural, anthropogenic, greenhouse gas, and other forcings is derived from a large ensemble of CMIP5 simulations. Observational datasets from historical reconstructions are then used to constrain this estimate. A spatiotemporal observational mask is applied to compare simulations with actual observations and to overcome reconstruction biases. Results on the 0–700-m layer from 1971 to 2005 show that the global OHC would have increased since 1971 by 2.12 ± 0.21 × 10⁷ J m⁻² yr⁻¹ in response to GHG emissions alone. But this has been compensated for by other anthropogenic influences (mainly aerosol), which induced an OHC decrease of 0.84 ± 0.18 × 10⁷ J m⁻² yr⁻¹. The natural forcing has induced a slight global OHC decrease since 1971 of 0.13 ± 0.09 × 10⁷ J m⁻² yr⁻¹. Compared to previous studies we have separated the effect of the GHG forcing from the effect of the other anthropogenic forcing on OHC changes. This has been possible by using a new detection and attribution (D&A) method and by analyzing simultaneously the global OHC trends over 1957–80 and over 1971–2005. This bivariate method takes advantage of the different time variation of the GHG forcing and the aerosol forcing since 1957 to separate both effects and reduce the uncertainty in their estimates.
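The three contributions quoted in the abstract can be combined into a net trend. The quadrature combination of the uncertainties below assumes the forcing responses are independent, which is a simplification for illustration only:

```python
# Contributions to the 0-700 m OHC trend over 1971-2005, in units of
# 1e7 J m^-2 yr^-1, as quoted in the abstract: (central value, uncertainty).
ghg     = (+2.12, 0.21)
other   = (-0.84, 0.18)  # other anthropogenic influences, mainly aerosol
natural = (-0.13, 0.09)

# Net trend is the sum of the central values.
net = ghg[0] + other[0] + natural[0]
# Assuming independent uncertainties, combine them in quadrature.
err = (ghg[1] ** 2 + other[1] ** 2 + natural[1] ** 2) ** 0.5
print(f"net OHC trend ~ {net:.2f} +/- {err:.2f} x 1e7 J m^-2 yr^-1")
```

With the abstract's numbers this gives a net increase of about 1.15 × 10⁷ J m⁻² yr⁻¹, i.e. roughly half of the GHG-only response survives the aerosol compensation.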

Open access
Aurélien Ribes, Soulivanh Thao, and Julien Cattiaux

Abstract

Describing the relationship between a weather event and climate change—a science usually termed event attribution—involves quantifying the extent to which human influence has affected the frequency or the strength of an observed event. In this study we show how event attribution can be implemented through the application of nonstationary statistics to transient simulations, typically covering the 1850–2100 period. The use of existing CMIP-style simulations has many advantages, including their availability for a large range of coupled models and the fact that they are not conditioned on a given oceanic state. We develop a technique for providing a multimodel synthesis, consistent with the uncertainty analysis of long-term changes. Lastly, we describe how model estimates can be combined with historical observations to provide a single diagnosis accounting for both sources of information. The potential of this new method is illustrated using the 2003 European heat wave and under a Gaussian assumption. Results suggest that (i) it is feasible to perform event attribution using transient simulations and nonstationary statistics, even for a single model; (ii) the use of multimodel synthesis in event attribution is highly desirable given the spread in single-model estimates; and (iii) merging models and observations substantially reduces uncertainties in human-induced changes. Investigating transient simulations also enables us to derive insightful diagnostics of how the targeted event will be affected by climate change in the future.

Open access
Aurélien Ribes, Nathan P. Gillett, and Francis W. Zwiers

Abstract

Climate change detection and attribution studies rely on historical simulations using specified combinations of forcings to quantify the contributions from greenhouse gases and other forcings to observed climate change. In the last CMIP5 exercise, in addition to the so-called all-forcings simulations, which are driven with a combination of anthropogenic and natural forcings, natural forcings–only and greenhouse gas–only simulations were prioritized among other possible experiments. This study addresses the question of optimally designing this set of experiments to estimate the recent greenhouse gas–induced warming, which is highly relevant to the problem of constraining estimates of the transient climate response. Based on Monte Carlo simulations and considering experimental designs with a fixed budget for the number of simulations that modeling centers can perform, the most accurate estimate of historical greenhouse gas–induced warming is obtained with a design using a combination of all-forcings, natural forcings–only, and aerosol forcing–only simulations. An investigation of optimal ensemble sizes, given the constraint on the total number of simulations, indicates that allocating larger ensemble sizes to weaker forcings, such as natural-only, is optimal.

Full access
Laurent Terray, Lola Corre, Sophie Cravatte, Thierry Delcroix, Gilles Reverdin, and Aurélien Ribes

Abstract

Changes in the global water cycle are expected as a result of anthropogenic climate change, but large uncertainties exist in how these changes will be manifest regionally. This is especially the case over the tropical oceans, where observed estimates of precipitation and evaporation disagree considerably. An alternative approach is to examine changes in near-surface salinity. Datasets of observed tropical Pacific and Atlantic near-surface salinity combined with climate model simulations are used to assess the possible causes and significance of salinity changes over the late twentieth century. Two different detection methodologies are then applied to evaluate the extent to which observed large-scale changes in near-surface salinity can be attributed to anthropogenic climate change.

Basin-averaged observed changes are shown to enhance salinity geographical contrasts between the two basins: the Pacific is getting fresher and the Atlantic saltier. While the observed Pacific and interbasin-averaged salinity changes exceed the range of internal variability provided by control climate simulations, Atlantic changes are within the model estimates. Spatial patterns of salinity change, including a fresher western Pacific warm pool and a saltier subtropical North Atlantic, are not consistent with internal climate variability. They are similar to anthropogenic response patterns obtained from transient twentieth- and twenty-first-century integrations, therefore suggesting a discernible human influence on the late twentieth-century evolution of the tropical marine water cycle. Changes in tropical and midlatitude Atlantic salinity levels are not found to be significant compared to internal variability. Implications of the results for the understanding of recent and future tropical marine water cycle changes are discussed.

Full access
Philippe Naveau, Aurélien Ribes, Francis Zwiers, Alexis Hannart, Alexandre Tuel, and Pascal Yiou

Abstract

Both climate and statistical models play an essential role in the process of demonstrating that the distribution of some atmospheric variable has changed over time and in establishing the most likely causes for the detected change. One statistical difficulty in the research field of detection and attribution resides in defining events that can be easily compared and accurately inferred from reasonable sample sizes. As many impacts studies focus on extreme events, the inference of small probabilities and the computation of their associated uncertainties quickly become challenging. In the particular context of event attribution, the authors address the question of how to compare records between the counterfactual “world as it might have been” without anthropogenic forcings and the factual “world that is.” Records are often the most important events in terms of impact and get much media attention. The authors will show how to efficiently estimate the ratio of two small probabilities of records. The inferential gain is particularly substantial when a simple hypothesis-testing procedure is implemented. The theoretical justification of such a proposed scheme can be found in extreme value theory. To illustrate this study’s approach, classical indicators in event attribution studies, like the risk ratio or the fraction of attributable risk, are modified and tailored to handle records. The authors illustrate the advantages of their method through theoretical results, simulation studies, temperature records in Paris, and outputs from a numerical climate model.
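The record-based risk ratio and fraction of attributable risk described above can be illustrated with a crude Monte Carlo estimate. This brute-force version is not the paper's extreme-value-theory estimator (whose whole point is to avoid needing huge samples), and all distributions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def record_prob(samples, history):
    """Fraction of draws that would break the historical record."""
    return (samples > history.max()).mean()

# A shared observed record to beat, plus large samples from two worlds.
history = rng.normal(0.0, 1.0, 50)
factual = rng.normal(0.8, 1.0, 200_000)          # "world that is" (shifted mean)
counterfactual = rng.normal(0.0, 1.0, 200_000)   # "world as it might have been"

p1 = record_prob(factual, history)
p0 = record_prob(counterfactual, history)
rr = p1 / p0                  # risk ratio for breaking the record
far = 1.0 - 1.0 / rr          # fraction of attributable risk
print(f"RR ~ {rr:.1f}, FAR ~ {far:.2f}")
```

The sample sizes needed here to stabilize `p0` show why records are statistically hard; the paper's extreme value theory approach is designed to infer such small probabilities from realistic sample sizes.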

Full access
Andrew Schurer, Gabi Hegerl, Aurélien Ribes, Debbie Polson, Colin Morice, and Simon Tett

Abstract

The transient climate response (TCR) quantifies the warming expected during a transient doubling of greenhouse gas concentrations in the atmosphere. Many previous studies quantifying the observed historical response to greenhouse gases, and with it the TCR, used multimodel mean fingerprints and found reasonably constrained values, which contributed to the IPCC's estimated (>66%) range from 1° to 2.5°C. Here, it is shown that while the multimodel mean fingerprint is statistically more powerful than any individual model's fingerprint, it does lead to overconfident results when applied to synthetic data if model uncertainty is neglected. A Bayesian method is therefore used that estimates TCR, accounting for climate model and observational uncertainty, with indices of global temperature that aim to better constrain the aerosol contribution to the historical record. Model uncertainty in the aerosol response was found to be large. Nevertheless, an overall TCR estimate of 0.4°–3.1°C (>90%) was calculated from the historical record, which reduces to 1.0°–2.6°C when using prior information that rules out negative TCR values and model misestimates of more than a factor of 3, and to 1.2°–2.4°C when using the multimodel mean fingerprints with a variance correction. Modeled temperature, as in the observations, is calculated as a blend of sea surface and air temperatures.
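The effect of the prior on the TCR interval can be shown with a toy grid computation. The Gaussian pseudo-likelihood below (centre 1.75 K, spread 0.8 K) is an arbitrary illustrative choice, not the paper's likelihood; only the mechanism (truncating negative TCR values raises the lower bound) is the point:

```python
import numpy as np

# Grid over candidate TCR values (K) and a purely illustrative likelihood.
tcr = np.linspace(-2.0, 5.0, 7001)
dx = tcr[1] - tcr[0]
like = np.exp(-0.5 * ((tcr - 1.75) / 0.8) ** 2)

def credible_interval(prior, q=0.90):
    """Central q credible interval of the posterior on the grid."""
    post = like * prior
    post = post / (post.sum() * dx)        # normalize on the grid
    cdf = np.cumsum(post) * dx
    lo = tcr[np.searchsorted(cdf, (1 - q) / 2)]
    hi = tcr[np.searchsorted(cdf, (1 + q) / 2)]
    return lo, hi

flat = np.ones_like(tcr)
positive = (tcr > 0).astype(float)         # prior ruling out negative TCR

lo_flat, hi_flat = credible_interval(flat)
lo_pos, hi_pos = credible_interval(positive)
print(f"flat prior: [{lo_flat:.2f}, {hi_flat:.2f}] K; "
      f"positive-only prior: [{lo_pos:.2f}, {hi_pos:.2f}] K")
```

As in the abstract, informative prior constraints mostly tighten the lower end of the range, since that is where the excluded values sit.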

Open access
Pascal Yiou, Julien Cattiaux, Davide Faranda, Nikolay Kadygrov, Aglaé Jézéquel, Philippe Naveau, Aurélien Ribes, Yoann Robin, Soulivanh Thao, Geert Jan van Oldenborgh, and Mathieu Vrac
Free access
Lukas Brunner, Carol McSweeney, Andrew P. Ballinger, Daniel J. Befort, Marianna Benassi, Ben Booth, Erika Coppola, Hylke de Vries, Glen Harris, Gabriele C. Hegerl, Reto Knutti, Geert Lenderink, Jason Lowe, Rita Nogherotto, Chris O’Reilly, Saïd Qasmi, Aurélien Ribes, Paolo Stocchi, and Sabine Undorf

Abstract

Political decisions, adaptation planning, and impact assessments need reliable estimates of future climate change and related uncertainties. To provide these estimates, different approaches to constrain, filter, or weight climate model projections into probabilistic distributions have been proposed. However, an assessment of multiple such methods to, for example, expose cases of agreement or disagreement, is often hindered by a lack of coordination, with methods focusing on a variety of variables, time periods, regions, or model pools. Here, a consistent framework is developed to allow a quantitative comparison of eight different methods; focus is given to summer temperature and precipitation change in three spatial regimes in Europe in 2041–60 relative to 1995–2014. The analysis draws on projections from several large ensembles, the CMIP5 multimodel ensemble, and perturbed physics ensembles, all using the high-emission scenario RCP8.5. The methods’ key features are summarized, assumptions are discussed, and resulting constrained distributions are presented. Method agreement is found to be dependent on the investigated region but is generally higher for median changes than for the uncertainty ranges. This study, therefore, highlights the importance of providing clear context about how different methods affect the assessed uncertainty—in particular, the upper and lower percentiles that are of interest to risk-averse stakeholders. The comparison also exposes cases in which diverse lines of evidence lead to diverging constraints; additional work is needed to understand how the underlying differences between methods lead to such disagreements and to provide clear guidance to users.
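One simple member of the "weighting" family of methods compared above can be sketched as follows. The Gaussian performance weight, the scale parameter `sigma_d`, and all numbers are hypothetical; real methods in this class also account for model interdependence:

```python
import numpy as np

# Hypothetical per-model historical bias vs. observations (K) and projected
# summer warming for 2041-60 relative to 1995-2014 (K).
bias       = np.array([0.1, 0.8, 0.3, 1.5, 0.2])
projection = np.array([2.4, 3.1, 2.7, 3.6, 2.5])

# Down-weight models whose historical state is far from observations;
# sigma_d is a tuning choice that controls how aggressively they are filtered.
sigma_d = 0.5
weights = np.exp(-((bias / sigma_d) ** 2))
weights = weights / weights.sum()           # normalize to a distribution

weighted_mean = weights @ projection
unweighted_mean = projection.mean()
print(f"unweighted: {unweighted_mean:.2f} K, weighted: {weighted_mean:.2f} K")
```

Even this toy example shows how the choice of weighting scheme moves the central estimate, which is why the coordinated comparison of methods described in the abstract matters for the tails that risk-averse users rely on.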

Open access