Search Results

You are looking at 1–10 of 10 items for

  • Author or Editor: Philippe Naveau

Alexis Hannart and Philippe Naveau

Abstract

Multiple changes in Earth’s climate system have been observed over the past decades. Determining how likely each of these changes is to have been caused by human influence is important for decision making with regard to mitigation and adaptation policy. Here we describe an approach for deriving the probability that anthropogenic forcings have caused a given observed change. The proposed approach is anchored in causal counterfactual theory, which was introduced recently, and in fact already partly used, in the context of extreme weather event attribution (EA). We argue that these concepts are also relevant to, and can be straightforwardly extended to, the context of detection and attribution of long-term trends associated with climate change (D&A). For this purpose, and in agreement with the principle of fingerprinting applied in the conventional D&A framework, a trajectory of change is converted into an event occurrence defined by maximizing the causal evidence associated with the forcing under scrutiny. Other key assumptions used in the conventional D&A framework, in particular those related to numerical model error, can also be adapted conveniently to this approach. Our proposal thus allows us to bridge the conventional framework with standard causal theory, in an attempt to improve the quantification of causal probabilities. An illustration suggests that our approach tends to yield a significantly higher estimate of the probability that anthropogenic forcings have caused the observed temperature change, thus supporting more assertive causal claims.
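
In causal counterfactual theory, the central quantity is the probability of necessary causation, PN = max(0, 1 - p0/p1), where p0 and p1 are the probabilities of the event in the counterfactual and factual worlds. A minimal sketch of that calculation; the ensembles, threshold, and numbers below are purely illustrative, not results from the paper:

```python
import numpy as np

def prob_necessary_causation(p0, p1):
    """Probability of necessary causation PN = max(0, 1 - p0/p1),
    where p0 (p1) is the event probability in the counterfactual (factual) world."""
    return max(0.0, 1.0 - p0 / p1)

# Illustrative exceedance probabilities estimated from two synthetic ensembles
# (made-up numbers, not results from the paper).
counterfactual_runs = np.random.default_rng(0).normal(0.0, 1.0, size=1000)
factual_runs = np.random.default_rng(1).normal(0.8, 1.0, size=1000)
threshold = 1.5  # event: exceeding this anomaly

p0 = np.mean(counterfactual_runs > threshold)
p1 = np.mean(factual_runs > threshold)
print(f"p0={p0:.3f}, p1={p1:.3f}, PN={prob_necessary_causation(p0, p1):.3f}")
```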

Full access
Philippe Naveau and Soulivanh Thao

Abstract

Global climate models, like any in silico numerical experiments, are affected by different types of bias. Uncertainty quantification remains a challenge in any climate detection and attribution analysis. A fundamental methodological question is to determine which statistical summaries, while bringing relevant signals, can be robust with respect to multimodel errors. In this paper, we propose a simple statistical framework that significantly improves signal detection in climate attribution studies. We show that the complex bias correction step can be entirely bypassed for models in which the bias between the simulated and unobserved counterfactual worlds is the same as that between the simulated and unobserved factual worlds. To illustrate our approach, we infer emergence times in precipitation from the CMIP5 and CMIP6 archives. The detected anthropogenic signal in yearly maxima of daily precipitation clearly emerges at the beginning of the twenty-first century. In addition, no CMIP model seems to outperform the others, and a weighted linear combination of all models improves the estimation of emergence times.

Significance Statement

We show that the bias in multimodel global climate simulations can be efficiently handled when the appropriate metric is chosen. This metric leads to an easy-to-implement statistical procedure based on a checkable assumption. This allows us to demonstrate that optimal convex combinations of CMIP outputs can improve the signal strength in finding emergence times. Our data analysis procedure is applied to yearly maxima of precipitation from the CMIP5 and CMIP6 databases. The anthropogenic forcing signal clearly emerges in extreme precipitation at the beginning of the twenty-first century.
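
The practical upshot of the assumption stated above is that working with within-model differences between factual and counterfactual summaries lets a model's additive bias cancel out. A minimal, purely synthetic sketch of that idea together with a simple emergence-time criterion; the function names, criterion, and numbers are illustrative and not the paper's estimator:

```python
import numpy as np

def factual_minus_counterfactual(fact, cfact):
    """Within-model difference of a summary statistic; if a model's additive bias
    is the same in its factual and counterfactual runs, it cancels here."""
    return np.asarray(fact) - np.asarray(cfact)

def emergence_time(years, diffs, noise_level):
    """First year after which the multimodel-mean difference stays above the
    noise level (illustrative criterion only)."""
    mean_diff = diffs.mean(axis=0)          # average over models
    above = mean_diff > noise_level
    for i in range(len(years)):
        if above[i:].all():
            return years[i]
    return None

# Purely synthetic example: 5 models, yearly-maximum summaries for 1950-2100.
rng = np.random.default_rng(42)
years = np.arange(1950, 2101)
biases = rng.normal(0, 2, size=(5, 1))               # model-specific additive biases
signal = 0.02 * np.clip(years - 2000, 0, None)       # trend only in the factual world
fact = 10 + biases + signal + rng.normal(0, 0.3, (5, years.size))
cfact = 10 + biases + rng.normal(0, 0.3, (5, years.size))

diffs = factual_minus_counterfactual(fact, cfact)
print("estimated emergence year:", emergence_time(years, diffs, noise_level=0.5))
```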

Full access
Maxime Taillardat, Olivier Mestre, Michaël Zamo, and Philippe Naveau

Abstract

Ensembles used for probabilistic weather forecasting tend to be biased and underdispersive. This paper proposes a statistical method for postprocessing ensembles based on quantile regression forests (QRF), a generalization of random forests for quantile regression. Unlike ensemble model output statistics (EMOS), this method does not fit a parametric probability density function (PDF) but instead provides an estimate of the desired quantiles. It is a nonparametric approach that eliminates any assumption on the variable subject to calibration. The method can estimate quantiles using not only the members of the ensemble but any available predictor, including statistics on other variables.

The method is applied to the Météo-France 35-member ensemble forecast (PEARP) for surface temperature and wind speed for available lead times from 3 up to 54 h and compared to EMOS. All postprocessed ensembles are much better calibrated than the PEARP raw ensemble, and experiments on real data also show that QRF performs better than EMOS and can bring a real gain for human forecasters. QRF provides sharp and reliable probabilistic forecasts. Finally, the classical scoring rules used to verify predictive forecasts are complemented by the introduction of entropy as a general measure of reliability.
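
For readers unfamiliar with QRF, the core idea (Meinshausen 2006) is to reuse the leaves of a fitted random forest to build a predictive distribution rather than a single mean. A minimal sketch using scikit-learn's random forest and synthetic data; the predictors, data, and helper function are illustrative and not the operational PEARP setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def qrf_predict_quantiles(forest, X_train, y_train, X_new, quantiles):
    """Quantile predictions in the spirit of quantile regression forests:
    weight each training response by how often it shares a leaf with the new
    point, then take weighted empirical quantiles."""
    leaves_train = forest.apply(X_train)          # (n_train, n_trees) leaf indices
    leaves_new = forest.apply(X_new)              # (n_new, n_trees)
    order = np.argsort(y_train)
    y_sorted = y_train[order]
    preds = np.empty((len(X_new), len(quantiles)))
    for i, leaf_row in enumerate(leaves_new):
        same_leaf = leaves_train == leaf_row      # co-leaf membership per tree
        leaf_sizes = same_leaf.sum(axis=0)
        # per-tree weights 1/leaf_size for co-located samples, averaged over trees
        weights = (same_leaf / np.maximum(leaf_sizes, 1)).mean(axis=1)
        cdf = np.cumsum(weights[order]) / weights.sum()
        preds[i] = np.interp(quantiles, cdf, y_sorted)
    return preds

# Toy usage: predictors could be raw-ensemble statistics; numbers are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + rng.gamma(2.0, 1.0, size=500)       # skewed "observation"
forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=10).fit(X, y)
print(qrf_predict_quantiles(forest, X, y, X[:3], quantiles=[0.1, 0.5, 0.9]))
```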

Full access
Elsa Bernard, Philippe Naveau, Mathieu Vrac, and Olivier Mestre

Abstract

One of the main objectives of statistical climatology is to extract relevant information hidden in complex spatial–temporal climatological datasets. To identify spatial patterns, most well-known statistical techniques are based on the concept of intra- and intercluster variances (like the k-means algorithm or EOFs). As the analysis of quantitative extremes like heavy rainfall has become increasingly prevalent among climatologists and hydrologists in recent decades, finding spatial patterns with methods based on deviations from the mean (i.e., variances) may not be the most appropriate strategy for studying such extremes. For practitioners, simple and fast clustering tools tailored for extremes have been lacking. A possible avenue for bridging this methodological gap is to take advantage of multivariate extreme value theory, a well-developed research field in probability, and to adapt it to the context of spatial clustering. In this paper, a novel algorithm based on this idea is proposed and studied. The approach is compared and discussed with respect to the classical k-means algorithm through the analysis of weekly maxima of hourly precipitation recorded in France (fall season, 92 stations, 1993–2011).
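
One extremes-tailored dissimilarity from multivariate extreme value theory is the F-madogram, d(i, j) = 0.5 E|F_i(X_i) - F_j(X_j)|, which depends only on the extremal dependence between two stations and not on their marginal scales. A minimal sketch of clustering block maxima with this distance; the hierarchical-clustering step is a simple stand-in for a partitioning-around-medoids-type algorithm, and the data are synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import rankdata

def f_madogram_matrix(maxima):
    """Pairwise F-madogram distances between stations.
    maxima: array (n_blocks, n_stations) of block maxima.
    d(i, j) = 0.5 * mean |F_i(X_i) - F_j(X_j)|, with empirical CDFs F_i, F_j."""
    n, p = maxima.shape
    u = np.column_stack([rankdata(maxima[:, j]) / (n + 1) for j in range(p)])
    d = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            d[i, j] = d[j, i] = 0.5 * np.mean(np.abs(u[:, i] - u[:, j]))
    return d

# Synthetic stand-in for weekly maxima at 10 "stations" (not real data):
# the first five stations share a common extremal factor, the last five do not.
rng = np.random.default_rng(1)
common = rng.gumbel(size=(300, 1))
maxima = np.where(np.arange(10) < 5,
                  common + 0.3 * rng.gumbel(size=(300, 10)),
                  rng.gumbel(size=(300, 10)))

dist = f_madogram_matrix(maxima)
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)
```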

Full access
Malaak Kallache, Elena Maksimovich, Paul-Antoine Michelangeli, and Philippe Naveau

Abstract

The performance of general circulation models (GCMs) varies across regions and periods. When projecting into the future, it is therefore not obvious whether to reject or to prefer a certain GCM. Combining the outputs of several GCMs may enhance results. This paper presents a method to combine multimodel GCM projections by means of a Bayesian model combination (BMC). Here the influence of each GCM is weighted according to its performance against observations in a training period; as an outcome, BMC predictive distributions for not-yet-observed values are obtained. Technically, GCM outputs and observations are assumed to vary randomly around common means, which are interpreted as the actual target values under consideration. Posterior parameter distributions of the authors’ Bayesian hierarchical model are obtained by a Markov chain Monte Carlo (MCMC) method. Advantageously, all parameters, such as the bias and precision of the GCMs, are estimated together. Potential time dependence is accounted for by integrating a Kalman filter. The significance of trend slopes of the common means is evaluated by analyzing the posterior distribution of the parameters. The method is applied to assess the evolution of ice accumulation over the oceanic Arctic region in cold seasons. The observed ice index is created from NCEP reanalysis data. Outputs of seven GCMs are combined using the training period 1962–99 and the prediction periods 2046–65 and 2082–99 under Special Report on Emissions Scenarios (SRES) A2 and B1. A continuing decrease of ice accumulation is visible for the A2 scenario, whereas the index stabilizes for the B1 scenario in the second prediction period.
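
As a rough intuition for how such a combination weights models, the sketch below estimates each GCM's bias and error variance against observations in a training period and forms a precision-weighted combination of the bias-corrected projections. This is a deliberately simplified stand-in: the paper's hierarchical model, MCMC sampling, and Kalman filter are not reproduced, and all numbers are synthetic.

```python
import numpy as np

def combine_gcms(train_obs, train_gcms, future_gcms):
    """Simplified stand-in for a Bayesian model combination: estimate each GCM's
    additive bias and error variance against observations over the training period,
    then form a precision-weighted combination of the bias-corrected projections."""
    biases = train_gcms.mean(axis=1) - train_obs.mean()
    resid_var = ((train_gcms - biases[:, None]) - train_obs).var(axis=1)
    weights = (1.0 / resid_var) / (1.0 / resid_var).sum()
    corrected = future_gcms - biases[:, None]
    return weights, (weights[:, None] * corrected).sum(axis=0)

# Synthetic example: 7 GCMs, 38-yr training period, 20-yr projection period.
rng = np.random.default_rng(7)
truth_train = rng.normal(0.0, 1.0, 38)
truth_future = rng.normal(-1.0, 1.0, 20)
bias = rng.normal(0, 0.5, 7)[:, None]
gcms_train = truth_train + bias + rng.normal(0, 0.4, (7, 38))
gcms_future = truth_future + bias + rng.normal(0, 0.4, (7, 20))

w, combined = combine_gcms(truth_train, gcms_train, gcms_future)
print("weights:", np.round(w, 2))
```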

Full access
Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, and Olivier Mestre
Open access
Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, and Olivier Mestre

Abstract

To satisfy a wide range of end users, rainfall ensemble forecasts have to be skillful for both low precipitation and extreme events. We introduce local statistical postprocessing methods based on quantile regression forests and gradient forests with a semiparametric extension for heavy-tailed distributions. These hybrid methods make use of the forest-based outputs to fit a parametric distribution that is suitable to model jointly low, medium, and heavy rainfall intensities. Our goal is to improve ensemble quality and value for all rainfall intensities. The proposed methods are applied to daily 51-h forecasts of 6-h accumulated precipitation from 2012 to 2015 over France using the Météo-France ensemble prediction system called Prévision d’Ensemble ARPEGE (PEARP). They are verified with a cross-validation strategy and compete favorably with state-of-the-art methods like the analog ensemble or ensemble model output statistics. Our methods do not assume any parametric link between the variables to calibrate and possible covariates. They do not require any variable selection step and can make use of more than 60 available predictors, such as summary statistics on the raw ensemble, deterministic forecasts of other parameters of interest, or probabilities of convective rainfall. In addition to improving overall performance, the hybrid forest-based procedures produce the largest skill improvements for forecasting heavy rainfall events.
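
The hybrid idea, splicing a flexible nonparametric body with a parametric, heavy-tail-capable upper part, can be illustrated very simply: below, forest-style empirical quantiles are used up to a high threshold and a generalized Pareto fit to the excesses is used beyond it. This tail-only splice is only an illustration, not the paper's extended-GPD fit over all intensities, and the sample is synthetic.

```python
import numpy as np
from scipy.stats import genpareto

def tail_extended_quantile(leaf_sample, q, threshold_prob=0.9):
    """Simplified hybrid scheme: empirical quantiles below a high threshold,
    generalized Pareto extrapolation for quantiles in the tail."""
    u = np.quantile(leaf_sample, threshold_prob)
    if q <= threshold_prob:
        return np.quantile(leaf_sample, q)
    excesses = leaf_sample[leaf_sample > u] - u
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    # conditional probability of the target quantile given exceedance of u
    q_cond = (q - threshold_prob) / (1.0 - threshold_prob)
    return u + genpareto.ppf(q_cond, shape, loc=0.0, scale=scale)

# Toy "forest output": a heavy-tailed sample standing in for same-leaf rainfall values.
rng = np.random.default_rng(3)
sample = rng.gamma(0.6, 4.0, size=2000)
print("q0.5:", round(tail_extended_quantile(sample, 0.5), 2),
      " q0.99:", round(tail_extended_quantile(sample, 0.99), 2))
```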

Open access
Eva Marquès, Valéry Masson, Philippe Naveau, Olivier Mestre, Vincent Dubreuil, and Yves Richard

Abstract

An ever-growing portion of the global population lives in urban areas. Cities are expanding quickly and, consequently, the urban heat island effect has become a major concern for city dwellers’ health and thermal comfort. For this reason, city planners want access to urban meteorological databases for the local areas where specific attention is needed. With the growth of connected devices, it is possible to collect unconventional but massive temperature measurements from people’s everyday activities. In this article, we study temperatures measured by thermometers embedded in ordinary personal cars. To assess the quality of such opportunistic data, we first identify the factors that degrade the measurements. After preprocessing, the measurement error is estimated against two weather station networks that provide a local-scale reference in the cities of Dijon and Rennes, France. The overall aggregation of private car temperature measurements allows us to estimate the urban heat island very precisely at a 200-m resolution. We detect the cooling effect of parks in the Rennes and Paris urban areas. In Barcelona and Dijon, we observe the impact of regional environments and the orographic effect on the urban heat island. With our method, similar maps can be made accessible to every interested city in western Europe to target critical areas and support urban planning decisions.
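
As a toy illustration of the aggregation step, the sketch below bins opportunistic readings onto an approximately 200-m grid and takes a robust per-cell summary. The column names, coordinates, and values are made up, and the quality-control and error-estimation steps described in the abstract are omitted.

```python
import numpy as np
import pandas as pd

def grid_median_temperature(df, cell_m=200.0):
    """Aggregate opportunistic car temperature readings onto a regular grid
    (cell size ~200 m). Column names are illustrative, not from the study."""
    lat0 = df["lat"].mean()
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * np.cos(np.deg2rad(lat0))
    df = df.assign(
        ix=np.floor(df["lon"] * m_per_deg_lon / cell_m).astype(int),
        iy=np.floor(df["lat"] * m_per_deg_lat / cell_m).astype(int),
    )
    return df.groupby(["ix", "iy"])["temp_c"].agg(["median", "count"]).reset_index()

# Synthetic readings scattered around a city center (made-up coordinates and values).
rng = np.random.default_rng(5)
obs = pd.DataFrame({
    "lon": 5.04 + rng.normal(0, 0.02, 5000),
    "lat": 47.32 + rng.normal(0, 0.02, 5000),
    "temp_c": 20 + rng.normal(0, 1.5, 5000),
})
print(grid_median_temperature(obs).head())
```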

Full access
Philippe Naveau, Aurélien Ribes, Francis Zwiers, Alexis Hannart, Alexandre Tuel, and Pascal Yiou

Abstract

Both climate and statistical models play an essential role in the process of demonstrating that the distribution of some atmospheric variable has changed over time and in establishing the most likely causes for the detected change. One statistical difficulty in the research field of detection and attribution resides in defining events that can be easily compared and accurately inferred from reasonable sample sizes. As many impact studies focus on extreme events, the inference of small probabilities and the computation of their associated uncertainties quickly become challenging. In the particular context of event attribution, the authors address the question of how to compare records between the counterfactual “world as it might have been” without anthropogenic forcings and the factual “world that is.” Records are often the most important events in terms of impact and get much media attention. The authors show how to efficiently estimate the ratio of two small probabilities of records. The inferential gain is particularly substantial when a simple hypothesis-testing procedure is implemented. The theoretical justification of such a proposed scheme can be found in extreme value theory. To illustrate this study’s approach, classical indicators in event attribution studies, like the risk ratio or the fraction of attributable risk, are modified and tailored to handle records. The authors illustrate the advantages of their method through theoretical results, simulation studies, temperature records in Paris, and outputs from a numerical climate model.
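
The record-based indicators mentioned above can be made concrete with a small simulation: under a stationary, exchangeable counterfactual climate the probability that year n sets a record is about 1/n, and a fraction of attributable risk adapted to records compares this with the factual record probability via 1 - p0/p1. The ensembles, trend, and estimator below are illustrative only.

```python
import numpy as np

def record_probability(runs):
    """Fraction of runs in which the final year sets a new record (strict maximum)."""
    runs = np.asarray(runs)
    return np.mean(runs[:, -1] > runs[:, :-1].max(axis=1))

def far_for_records(factual_runs, counterfactual_runs):
    """FAR adapted to records: 1 - p0/p1, where p0 and p1 are record probabilities
    in the counterfactual and factual worlds."""
    p1 = record_probability(factual_runs)
    p0 = record_probability(counterfactual_runs)
    return p0, p1, 1.0 - p0 / p1

# Synthetic ensembles of annual values over 60 years (illustrative only).
rng = np.random.default_rng(11)
years = 60
counterfactual = rng.normal(0.0, 1.0, size=(5000, years))
factual = counterfactual + np.linspace(0.0, 1.0, years)   # add a warming trend

p0, p1, far = far_for_records(factual, counterfactual)
print(f"p0={p0:.3f} (close to 1/{years}={1/years:.3f}), p1={p1:.3f}, FAR={far:.3f}")
```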

Full access
Pascal Yiou, Julien Cattiaux, Davide Faranda, Nikolay Kadygrov, Aglae Jézéquel, Philippe Naveau, Aurelien Ribes, Yoann Robin, Soulivanh Thao, Geert Jan van Oldenborgh, and Mathieu Vrac
Free access