Search Results

Showing 1–7 of 7 items for Author or Editor: Philippe Naveau (all content).
Alexis Hannart and Philippe Naveau

Abstract

Multiple changes in Earth’s climate system have been observed over the past decades. Determining how likely each of these changes is to have been caused by human influence is important for decision making with regard to mitigation and adaptation policy. Here we describe an approach for deriving the probability that anthropogenic forcings have caused a given observed change. The proposed approach is anchored in causal counterfactual theory, which was introduced recently, and in fact partly used already, in the context of extreme weather event attribution (EA). We argue that these concepts are also relevant to, and can be straightforwardly extended to, the context of detection and attribution of long-term trends associated with climate change (D&A). For this purpose, and in agreement with the principle of fingerprinting applied in the conventional D&A framework, a trajectory of change is converted into an event occurrence defined by maximizing the causal evidence associated with the forcing under scrutiny. Other key assumptions used in the conventional D&A framework, in particular those related to numerical model error, can also be adapted conveniently to this approach. Our proposal thus allows us to bridge the conventional framework with standard causal theory, in an attempt to improve the quantification of causal probabilities. An illustration suggests that our approach tends to yield a significantly higher estimate of the probability that anthropogenic forcings have caused the observed temperature change, thus supporting more assertive causal claims.
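
A central quantity in the causal counterfactual framework the abstract refers to is the probability of necessary causation, bounded below by PN = max(0, 1 − p0/p1), where p0 is the event probability in the counterfactual world (no anthropogenic forcings) and p1 in the factual world. The sketch below is illustrative only; the numbers are invented and the function name is not from the paper.

```python
def probability_of_necessary_causation(p0: float, p1: float) -> float:
    """Pearl's lower bound on the probability that a forcing was a
    necessary cause of an observed event: max(0, 1 - p0/p1)."""
    if not (0.0 < p1 <= 1.0 and 0.0 <= p0 <= 1.0):
        raise ValueError("probabilities must lie in [0, 1] with p1 > 0")
    return max(0.0, 1.0 - p0 / p1)

# Illustrative numbers (not taken from the paper): an event with a 1%
# chance in the counterfactual world and 5% in the factual world.
pn = probability_of_necessary_causation(p0=0.01, p1=0.05)  # ≈ 0.8
```

The bound makes the abstract's point concrete: the more the factual probability exceeds the counterfactual one, the closer PN gets to 1 and the more assertive the causal claim can be.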

Full access
Malaak Kallache, Elena Maksimovich, Paul-Antoine Michelangeli, and Philippe Naveau

Abstract

The performance of general circulation models (GCMs) varies across regions and periods. When projecting into the future, it is therefore not obvious whether to reject or to prefer a certain GCM. Combining the outputs of several GCMs may improve the results. This paper presents a method to combine multimodel GCM projections by means of a Bayesian model combination (BMC). The influence of each GCM is weighted according to its performance, relative to observations, during a training period; as an outcome, BMC predictive distributions for as-yet-unobserved values are obtained. Technically, GCM outputs and observations are assumed to vary randomly around common means, which are interpreted as the actual target values under consideration. Posterior parameter distributions of the authors’ Bayesian hierarchical model are obtained by a Markov chain Monte Carlo (MCMC) method. Advantageously, all parameters, such as the bias and precision of each GCM, are estimated together. Potential time dependence is accounted for by integrating a Kalman filter. The significance of the trend slopes of the common means is evaluated by analyzing the posterior distributions of the parameters. The method is applied to assess the evolution of ice accumulation over the oceanic Arctic region in cold seasons. The observed ice index is created from NCEP reanalysis data. Outputs of seven GCMs are combined using the training period 1962–99 and the prediction periods 2046–65 and 2082–99 under Special Report on Emissions Scenarios (SRES) A2 and B1. A continuing decrease in ice accumulation is visible for the A2 scenario, whereas the index stabilizes for the B1 scenario in the second prediction period.
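
The weighting idea can be conveyed with a much-simplified, non-Bayesian stand-in: weight each model by its inverse mean squared error against observations over a training period. The paper instead infers weights, biases, and precisions jointly via MCMC with a Kalman filter; everything below (function names, toy data) is illustrative only.

```python
def inverse_mse_weights(model_runs, observations):
    """Weights proportional to 1 / training-period MSE, normalized to 1."""
    raw = []
    for run in model_runs:
        mse = sum((m - o) ** 2 for m, o in zip(run, observations)) / len(observations)
        raw.append(1.0 / mse)
    total = sum(raw)
    return [w / total for w in raw]

def combine(model_projections, weights):
    """Weighted combination of the models' future projections."""
    return sum(w * p for w, p in zip(weights, model_projections))

# Toy training data (illustrative only):
obs = [1.0, 1.2, 1.1]
runs = [[1.1, 1.3, 1.0],   # close to observations -> large weight
        [0.5, 0.9, 1.6]]   # far from observations  -> small weight
weights = inverse_mse_weights(runs, obs)
```

Better-performing GCMs receive more influence in the combined projection, which is the qualitative behavior the full Bayesian treatment formalizes with posterior distributions rather than point weights.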

Full access
Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, and Olivier Mestre

Abstract

To satisfy a wide range of end users, rainfall ensemble forecasts have to be skillful for both low precipitation and extreme events. We introduce local statistical postprocessing methods based on quantile regression forests and gradient forests, with a semiparametric extension for heavy-tailed distributions. These hybrid methods use the forest-based outputs to fit a parametric distribution that is suitable for jointly modeling low, medium, and heavy rainfall intensities. Our goal is to improve ensemble quality and value for all rainfall intensities. The proposed methods are applied to daily 51-h forecasts of 6-h accumulated precipitation from 2012 to 2015 over France using the Météo-France ensemble prediction system called Prévision d’Ensemble ARPEGE (PEARP). They are verified with a cross-validation strategy and compete favorably with state-of-the-art methods such as the analog ensemble or ensemble model output statistics. Our methods do not assume any parametric link between the variables to calibrate and possible covariates. They do not require a variable selection step and can make use of the more than 60 available predictors, such as summary statistics on the raw ensemble, deterministic forecasts of other parameters of interest, or probabilities of convective rainfall. In addition to improvements in overall performance, the hybrid forest-based procedures produced the largest skill improvements for forecasting heavy rainfall events.
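
The "hybrid" idea of the abstract, empirical quantiles for low and medium intensities with a parametric fit for the heavy tail, can be sketched in miniature. Here a simple exponential excess model stands in for the semiparametric heavy-tailed family used in the paper; the function, threshold choice, and data are illustrative assumptions, not the authors' implementation.

```python
import math

def hybrid_quantile(sample, tau, tail_frac=0.9):
    """Empirical quantile for tau <= tail_frac; parametric
    (exponential) tail quantile above that threshold."""
    xs = sorted(sample)
    n = len(xs)
    if tau <= tail_frac:
        return xs[min(int(tau * n), n - 1)]
    u = xs[int(tail_frac * n)]                    # tail threshold
    excesses = [x - u for x in xs if x > u]
    scale = sum(excesses) / len(excesses)         # exponential MLE for excesses
    p = (tau - tail_frac) / (1.0 - tail_frac)     # level within the tail
    return u - scale * math.log(1.0 - p)

# Illustrative sample: 100 evenly spaced "rainfall" values.
sample = [i / 10 for i in range(1, 101)]
q50, q99 = hybrid_quantile(sample, 0.5), hybrid_quantile(sample, 0.99)
```

Below the threshold the forecast distribution follows the data (as forest-based outputs would); beyond it, the parametric tail extrapolates to intensities never seen in training, which is what makes such hybrids useful for extremes.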

Open access
Elsa Bernard, Philippe Naveau, Mathieu Vrac, and Olivier Mestre

Abstract

One of the main objectives of statistical climatology is to extract relevant information hidden in complex spatial–temporal climatological datasets. To identify spatial patterns, most well-known statistical techniques are based on the concept of intra- and intercluster variances (like the k-means algorithm or EOFs). As the analysis of extremes such as heavy rainfall has become increasingly prevalent among climatologists and hydrologists over the last decades, finding spatial patterns with methods based on deviations from the mean (i.e., variances) may not be the most appropriate strategy for studying such extremes. For practitioners, simple and fast clustering tools tailored for extremes have been lacking. A possible avenue for bridging this methodological gap is to take advantage of multivariate extreme value theory, a well-developed research field in probability, and to adapt it to the context of spatial clustering. In this paper, a novel algorithm based on this plan is proposed and studied. The approach is compared with and discussed relative to the classical k-means algorithm through the analysis of weekly maxima of hourly precipitation recorded in France (fall season, 92 stations, 1993–2011).
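
A distance commonly used when adapting extreme value theory to clustering, and a plausible core for the kind of algorithm described above, is the F-madogram, d(i, j) = ½ E|F_i(X_i) − F_j(X_j)|, estimated from ranks. Unlike the Euclidean distance underlying k-means, it depends only on the dependence between stations' maxima, not on their marginal scales. This is an illustrative sketch, not the authors' exact implementation.

```python
def f_madogram(x, y):
    """Empirical F-madogram distance between two maxima series
    (0 for identical series, larger for weaker dependence)."""
    n = len(x)

    def ecdf_values(v):
        # Rank-based empirical CDF values F(v_i) in (0, 1).
        order = sorted(range(n), key=lambda i: v[i])
        r = [0.0] * n
        for pos, i in enumerate(order):
            r[i] = (pos + 1) / (n + 1)
        return r

    fx, fy = ecdf_values(x), ecdf_values(y)
    return 0.5 * sum(abs(a - b) for a, b in zip(fx, fy)) / n

# Two toy weekly-maxima series at different stations:
x = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0]
y = list(reversed(x))
```

Feeding such a pairwise distance matrix to a medoid-based clustering routine then groups stations by the similarity of their extremal behavior rather than by mean-level variability.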

Full access
Maxime Taillardat, Olivier Mestre, Michaël Zamo, and Philippe Naveau

Abstract

Ensembles used for probabilistic weather forecasting tend to be biased and underdispersive. This paper proposes a statistical method for postprocessing ensembles based on quantile regression forests (QRF), a generalization of random forests for quantile regression. Unlike ensemble model output statistics (EMOS), this method does not fit a parametric probability density function (PDF) but instead provides an estimation of the desired quantiles. It is a nonparametric approach that eliminates any assumption on the variable subject to calibration. The method can estimate quantiles using not only the members of the ensemble but also any available predictor, including statistics on other variables.

The method is applied to the Météo-France 35-member ensemble forecast (PEARP) for surface temperature and wind speed for available lead times from 3 up to 54 h and compared to EMOS. All postprocessed ensembles are much better calibrated than the raw PEARP ensemble, and experiments on real data also show that QRF performs better than EMOS and can bring a real gain for human forecasters. QRF provides sharp and reliable probabilistic forecasts. Finally, the classical scoring rules used to verify predictive forecasts are complemented by the introduction of entropy as a general measure of reliability.
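
The entropy idea can be sketched as follows: a perfectly reliable ensemble produces a flat rank histogram, so the normalized Shannon entropy of the observed-rank frequencies (1 for a flat histogram, smaller otherwise) summarizes reliability in a single number. The details below are an illustrative assumption, not the paper's exact definition.

```python
import math

def rank_histogram_entropy(ranks, n_members):
    """Normalized Shannon entropy of the rank histogram.

    ranks: rank of each observation within its sorted ensemble,
           an integer in 0..n_members (so n_members + 1 bins).
    Returns 1.0 for a perfectly flat histogram, 0.0 for a
    degenerate one (all observations falling in the same bin).
    """
    n_bins = n_members + 1
    counts = [0] * n_bins
    for r in ranks:
        counts[r] += 1
    total = len(ranks)
    h = 0.0
    for c in counts:
        if c:
            p = c / total
            h -= p * math.log(p)
    return h / math.log(n_bins)

flat = rank_histogram_entropy([0, 1, 2, 0, 1, 2], n_members=2)  # ≈ 1.0
biased = rank_histogram_entropy([0, 0, 0, 0], n_members=2)      # 0.0
```

A raw underdispersive ensemble concentrates observations in the outermost rank bins and scores well below 1; a well-calibrated postprocessed ensemble should score close to it.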

Full access
Philippe Naveau, Aurélien Ribes, Francis Zwiers, Alexis Hannart, Alexandre Tuel, and Pascal Yiou

Abstract

Both climate and statistical models play an essential role in the process of demonstrating that the distribution of some atmospheric variable has changed over time and in establishing the most likely causes for the detected change. One statistical difficulty in the research field of detection and attribution resides in defining events that can be easily compared and accurately inferred from reasonable sample sizes. As many impact studies focus on extreme events, the inference of small probabilities and the computation of their associated uncertainties quickly become challenging. In the particular context of event attribution, the authors address the question of how to compare records between the counterfactual “world as it might have been” without anthropogenic forcings and the factual “world that is.” Records are often the most important events in terms of impact and receive much media attention. The authors show how to efficiently estimate the ratio of two small probabilities of records. The inferential gain is particularly substantial when a simple hypothesis-testing procedure is implemented. The theoretical justification of such a proposed scheme can be found in extreme value theory. To illustrate this study’s approach, classical indicators in event attribution studies, like the risk ratio or the fraction of attributable risk, are modified and tailored to handle records. The authors illustrate the advantages of their method through theoretical results, simulation studies, temperature records in Paris, and outputs from a numerical climate model.
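
One convenient property behind record-based attribution is that, in a stationary and exchangeable (counterfactual) world, the probability that the n-th value of a series is a record is simply 1/n. Comparing a factual estimate of the record probability against this baseline yields record-tailored versions of the risk ratio and the fraction of attributable risk (FAR). The estimator and numbers below are illustrative only, not the authors' procedure.

```python
def record_risk_indicators(factual_record_freq, n):
    """Risk ratio and FAR for records at time n, using the
    stationary counterfactual baseline p0 = 1/n.

    factual_record_freq: estimated probability that year n sets a
    record in the factual world (e.g., a frequency across runs).
    """
    p0 = 1.0 / n
    p1 = factual_record_freq
    risk_ratio = p1 / p0
    far = 1.0 - p0 / p1          # fraction of attributable risk
    return risk_ratio, far

# Illustrative: suppose records at year 50 occur in 10% of factual
# simulations, versus the 1/50 = 2% counterfactual baseline.
rr, far = record_risk_indicators(0.10, n=50)  # rr ≈ 5.0, far ≈ 0.8
```

Because the counterfactual baseline 1/n is known exactly under the stationarity assumption, only the factual probability has to be inferred, which is one way the inference of a ratio of two small probabilities can be made more efficient.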

Full access
Pascal Yiou, Julien Cattiaux, Davide Faranda, Nikolay Kadygrov, Aglae Jézéquel, Philippe Naveau, Aurelien Ribes, Yoann Robin, Soulivanh Thao, Geert Jan van Oldenborgh, and Mathieu Vrac
Free access