Search Results

You are looking at 1 - 8 of 8 items for

  • Author or Editor: Olivier Mestre
  • All content
Olivier Mestre and Stéphane Hallegatte

Abstract

Fluctuations of the annual number of tropical cyclones over the North Atlantic and of the energy dissipated by the most intense hurricane of a season are related to a variety of predictors [global temperature, SST and detrended SST, the North Atlantic Oscillation (NAO), and the Southern Oscillation index (SOI)] using generalized additive and linear models. This study demonstrates that SST and SOI are predictors of interest. SST is found to positively influence both the annual number of tropical cyclones and the intensity of the most intense hurricanes. The use of specific additive models reveals nonlinearity in the responses to SOI that has to be taken into account using changepoint models. The long-term trend in SST is found to influence the annual number of tropical cyclones but adds no information for predicting the intensity of the most intense hurricane.
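To illustrate the changepoint idea in isolation, here is a minimal sketch (not the authors' code) of a continuous piecewise-linear ("broken-stick") fit on a synthetic SOI-like predictor. It assumes a known changepoint location for simplicity; in practice the changepoint itself would be estimated, and the paper uses generalized additive models rather than this plain least-squares version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a response whose slope changes at soi = 0
soi = rng.uniform(-3, 3, 200)
y = 5.0 + 1.5 * np.minimum(soi, 0.0) + 0.2 * np.maximum(soi, 0.0) \
    + rng.normal(0.0, 0.3, 200)

def fit_changepoint(x, y, cp):
    """Least-squares fit of a continuous piecewise-linear model with one
    changepoint at `cp`: intercept at cp, slope below, slope above."""
    X = np.column_stack([
        np.ones_like(x),
        np.minimum(x, cp) - cp,   # active below the changepoint
        np.maximum(x, cp) - cp,   # active above the changepoint
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

beta = fit_changepoint(soi, y, cp=0.0)
# beta ≈ [value at changepoint, slope below, slope above]
```

Scanning `cp` over a grid and keeping the fit with the lowest residual sum of squares is the usual way to estimate an unknown changepoint location.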

Full access
Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, and Olivier Mestre

Abstract

To satisfy a wide range of end users, rainfall ensemble forecasts have to be skillful for both low precipitation and extreme events. We introduce local statistical postprocessing methods based on quantile regression forests and gradient forests with a semiparametric extension for heavy-tailed distributions. These hybrid methods make use of the forest-based outputs to fit a parametric distribution suitable for jointly modeling low, medium, and heavy rainfall intensities. Our goal is to improve ensemble quality and value for all rainfall intensities. The proposed methods are applied to daily 51-h forecasts of 6-h accumulated precipitation from 2012 to 2015 over France using the Météo-France ensemble prediction system called Prévision d’Ensemble ARPEGE (PEARP). They are verified with a cross-validation strategy and compete favorably with state-of-the-art methods such as analog ensembles or ensemble model output statistics. Our methods do not assume any parametric link between the variables to calibrate and possible covariates. They do not require any variable selection step and can make use of more than 60 available predictors, such as summary statistics of the raw ensemble, deterministic forecasts of other parameters of interest, or probabilities of convective rainfall. In addition to improvements in overall performance, the hybrid forest-based procedures produce the largest skill improvements for forecasting heavy rainfall events.
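To show the forest-based quantile idea underlying these methods (without the semiparametric tail extension), here is a minimal Meinshausen-style quantile regression forest sketch on synthetic, heteroscedastic rainfall-like data. It assumes scikit-learn is available and is an illustration, not the authors' implementation: a standard random forest is trained on the conditional mean, and quantiles are then read off the weighted empirical distribution of training targets that share leaves with the new point.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy data: skewed "rainfall-like" target whose spread grows with the predictor
X = rng.uniform(0.0, 1.0, (500, 1))
y = rng.gamma(2.0, 0.5 + 2.0 * X[:, 0])

forest = RandomForestRegressor(n_estimators=50, min_samples_leaf=10,
                               random_state=0)
forest.fit(X, y)
train_leaves = forest.apply(X)  # (n_samples, n_trees) leaf indices

def qrf_quantiles(x_new, quantiles):
    """Weight each training point by how often it falls in the same leaf
    as x_new, then invert the weighted empirical CDF of the targets."""
    leaves = forest.apply(x_new.reshape(1, -1))[0]
    w = np.zeros(len(y))
    for t in range(forest.n_estimators):
        in_leaf = train_leaves[:, t] == leaves[t]
        w[in_leaf] += 1.0 / in_leaf.sum()
    w /= forest.n_estimators
    order = np.argsort(y)
    cdf = np.cumsum(w[order])
    return np.interp(quantiles, cdf, y[order])

q = qrf_quantiles(np.array([0.9]), [0.1, 0.5, 0.9])
```

The hybrid methods of the paper go one step further: instead of using these raw quantiles directly, they fit a parametric, heavy-tail-aware distribution to the forest output.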

Open access
Elsa Bernard, Philippe Naveau, Mathieu Vrac, and Olivier Mestre

Abstract

One of the main objectives of statistical climatology is to extract relevant information hidden in complex spatial–temporal climatological datasets. To identify spatial patterns, most well-known statistical techniques are based on the concept of intra- and intercluster variances (like the k-means algorithm or EOFs). As the analysis of quantitative extremes such as heavy rainfall has become more and more prevalent among climatologists and hydrologists over recent decades, finding spatial patterns with methods based on deviations from the mean (i.e., variances) may not be the most appropriate strategy. For practitioners, simple and fast clustering tools tailored to extremes have been lacking. A possible avenue for bridging this methodological gap is to take advantage of multivariate extreme value theory, a well-developed research field in probability, and adapt it to the context of spatial clustering. In this paper, a novel algorithm based on this idea is proposed and studied. The approach is compared with the classical k-means algorithm through the analysis of weekly maxima of hourly precipitation recorded in France (fall season, 92 stations, 1993–2011).
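Clustering tools tailored to extremes typically replace the Euclidean geometry behind k-means with a dependence measure from extreme value theory. A common choice in this line of work is the F-madogram distance between maxima series, which is 0 for identical series and 1/6 for independent ones. The sketch below computes pairwise F-madogram distances on synthetic station maxima; it is an illustration of the distance only, not the paper's clustering algorithm.

```python
import numpy as np

def f_madogram(maxima):
    """Pairwise F-madogram distances between the columns of `maxima`
    (rows = block maxima, columns = stations), using empirical CDFs
    obtained from ranks: d(i, j) = 0.5 * mean |F_i(M_i) - F_j(M_j)|."""
    n, p = maxima.shape
    ranks = np.argsort(np.argsort(maxima, axis=0), axis=0) + 1
    F = ranks / (n + 1.0)
    D = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            D[i, j] = D[j, i] = 0.5 * np.mean(np.abs(F[:, i] - F[:, j]))
    return D

rng = np.random.default_rng(2)
common = rng.gumbel(size=300)
a = common + 0.1 * rng.normal(size=300)  # two strongly dependent "stations"
b = common + 0.1 * rng.normal(size=300)
c = rng.gumbel(size=300)                 # an independent "station"
D = f_madogram(np.column_stack([a, b, c]))
# D[0, 1] is small; D[0, 2] is near the independence value 1/6
```

Feeding such a distance matrix to a medoid-based clustering routine (rather than variance-based k-means) groups stations by the dependence of their extremes.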

Full access
Maxime Taillardat, Olivier Mestre, Michaël Zamo, and Philippe Naveau

Abstract

Ensembles used for probabilistic weather forecasting tend to be biased and underdispersive. This paper proposes a statistical method for postprocessing ensembles based on quantile regression forests (QRF), a generalization of random forests for quantile regression. Rather than fitting a parametric probability density function (PDF), as in ensemble model output statistics (EMOS), this method provides an estimate of the desired quantiles. The approach is nonparametric and makes no distributional assumption about the variable being calibrated. It can estimate quantiles using not only the ensemble members but any available predictor, including statistics of other variables.

The method is applied to the Météo-France 35-member ensemble forecast (PEARP) for surface temperature and wind speed at lead times from 3 to 54 h and compared to EMOS. All postprocessed ensembles are much better calibrated than the raw PEARP ensemble, and experiments on real data show that QRF performs better than EMOS and can bring a real gain for human forecasters. QRF provides sharp and reliable probabilistic forecasts. Finally, the classical scoring rules used to verify predictive forecasts are complemented by the introduction of entropy as a general measure of reliability.
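For contrast with QRF, the following is a highly simplified EMOS-style sketch on synthetic temperature data: a Gaussian predictive distribution whose mean is affine in the ensemble mean and whose variance is affine in the ensemble variance. Operational EMOS estimates these coefficients by minimizing the CRPS; the two least-squares steps here are only a stand-in to make the structure of the model visible.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 2000, 35

# Synthetic verification archive: a biased 35-member ensemble with
# flow-dependent spread around the (unknown) truth
truth = rng.normal(15.0, 5.0, n)
spread = 0.5 + rng.uniform(0.0, 1.5, n)
ens = truth[:, None] + 1.0 + rng.normal(0.0, spread[:, None], (n, m))
obs = truth + rng.normal(0.0, 0.5, n)

ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1)

# EMOS-style predictive law: N(a + b * ens_mean, c + d * ens_var).
# Step 1: fit the mean correction; step 2: regress squared residuals
# on the ensemble variance to calibrate the spread.
A = np.column_stack([np.ones(n), ens_mean])
(a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
resid2 = (obs - (a + b * ens_mean)) ** 2
B = np.column_stack([np.ones(n), ens_var])
(c, d), *_ = np.linalg.lstsq(B, resid2, rcond=None)
# The fit should recover the +1 bias: a ≈ -1, b ≈ 1
```

Because EMOS commits to a parametric family, it can struggle with variables whose conditional distribution is far from that family; this is the assumption that QRF removes.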

Full access
Olivier Mestre, Christine Gruber, Clémentine Prieur, Henri Caussinus, and Sylvie Jourdain

Abstract

One major concern with climate change is a possible increase in extreme temperature events, in both occurrence and intensity. To study this phenomenon, reliable daily series are required, for instance to compute daily-based indices: high-order quantiles, annual extrema, number of days exceeding thresholds, and so on. Because observed series are likely to be affected by changes in measurement conditions, suitable homogenization procedures are required. Although a very large number of procedures have been proposed for adjusting observed series at a monthly time scale, few have been proposed for daily temperature series. This article proposes a new adjustment method for temperature series at a daily time scale. The method, called spline daily homogenization (SPLIDHOM), relies on an indirect nonlinear regression, with the regression functions estimated by cubic smoothing splines. It is able to adjust the mean of the series as well as its high-order quantiles and moments. When well-correlated series are available, SPLIDHOM improves on the results of two widely used methods, as a result of an optimal selection of the smoothing parameter. Application to the Toulouse, France, temperature series is shown as a real-world example.
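The indirect-regression idea can be sketched as follows: fit candidate-versus-reference transfer functions separately before and after a known break, then use them to map the earlier segment onto the post-break relationship. The code below is a simplified additive variant on synthetic data, with low-order polynomials standing in for the cubic smoothing splines of SPLIDHOM; it is not the published method, and the break date is assumed known.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Synthetic reference series and a candidate with an inhomogeneity
# (mean shift plus a slight nonlinearity) before the break
ref = 10.0 + 8.0 * np.sin(np.linspace(0.0, 40.0, n)) + rng.normal(0.0, 1.0, n)
cand = ref + 0.5 * rng.normal(size=n)
brk = n // 2
cand[:brk] += 1.5 + 0.05 * ref[:brk]

# Candidate-vs-reference transfer functions on each side of the break
# (cubic smoothing splines in SPLIDHOM; degree-3 polynomials here)
f_before = np.polynomial.Polynomial.fit(ref[:brk], cand[:brk], deg=3)
f_after = np.polynomial.Polynomial.fit(ref[brk:], cand[brk:], deg=3)

# Adjust the early segment onto the post-break relationship
adjusted = cand.copy()
adjusted[:brk] = cand[:brk] - f_before(ref[:brk]) + f_after(ref[:brk])

bias_before = np.mean(cand[:brk] - ref[:brk])
bias_after_adj = np.mean(adjusted[:brk] - ref[:brk])
# The ~2-degree pre-break bias is largely removed
```

Because the correction is a smooth function of the reference value rather than a constant, nonlinear inhomogeneities that distort quantiles, not just the mean, can also be reduced.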

Full access
Michaël Zamo, Liliane Bel, Olivier Mestre, and Joël Stein

Abstract

Numerical weather forecast errors are routinely corrected through statistical postprocessing by several national weather services. These statistical postprocessing methods build a regression function, called model output statistics (MOS), between observations and forecasts, based on an archive of past forecasts and associated observations. Because of the limited spatial coverage of most near-surface parameter measurements, MOS has historically been produced only at meteorological station locations. Nevertheless, forecasters and forecast users increasingly ask for improved gridded forecasts. The present work aims at building improved hourly wind speed forecasts over the grid of a numerical weather prediction model. First, a new observational analysis, which scores better statistically than those operationally used at Météo-France, is described and used as gridded pseudo-observations. This analysis, obtained with an interpolation strategy selected from several alternatives in an intercomparison study conducted internally at Météo-France, is very parsimonious (it requires only two additive components) and demands little computation. Then, several scalar regression methods are built and compared, using the new analysis as the observation. The most efficient MOS is based on random forests trained on blocks of nearby grid points. This method greatly improves forecasts compared with the raw output of numerical weather prediction models. Furthermore, building each random forest on blocks and limiting those forests to shallow trees does not impair performance compared with unpruned, pointwise random forests, which alleviates the storage burden of the objects and speeds up operations.
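A toy version of the block strategy: split a small grid into blocks, train one shallow random forest per block with the raw forecast and the grid coordinates as predictors, and compare against the raw forecast. This sketch assumes scikit-learn, uses synthetic data with a location-dependent bias, and evaluates in sample for brevity; a real verification would hold out a test period.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Toy 8x8 grid, 300 "days", with a bias that grows eastward
ny = nx = 8
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
n_days = 300
truth = rng.gamma(2.0, 2.0, (n_days, ny, nx))
raw = truth + 1.0 + 0.3 * xx + rng.normal(0.0, 0.5, (n_days, ny, nx))

# One shallow forest per 4x4 block; coordinates let the trees resolve
# within-block spatial structure without one forest per grid point
block = 4
preds = np.empty_like(raw)
for by in range(0, ny, block):
    for bx in range(0, nx, block):
        sl = (slice(None), slice(by, by + block), slice(bx, bx + block))
        shape = raw[sl].shape
        Xb = np.column_stack([
            raw[sl].ravel(),
            np.broadcast_to(yy[sl[1:]], shape).ravel(),
            np.broadcast_to(xx[sl[1:]], shape).ravel(),
        ])
        rf = RandomForestRegressor(n_estimators=20, max_depth=6,
                                   random_state=0)
        rf.fit(Xb, truth[sl].ravel())
        preds[sl] = rf.predict(Xb).reshape(shape)

mae_raw = np.mean(np.abs(raw - truth))
mae_mos = np.mean(np.abs(preds - truth))
```

Storing a handful of shallow forests instead of one deep forest per grid point is what makes the approach practical to run and archive operationally.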

Open access
Florian Dupuy, Olivier Mestre, Mathieu Serrurier, Valentin Kivachuk Burdá, Michaël Zamo, Naty Citlali Cabrera-Gutiérrez, Mohamed Chafik Bakkay, Maud-Alix Mader, Guillaume Oller, and Jean-Christophe Jouhaud

Abstract

Cloud cover provides crucial information for many applications, such as planning land observation missions from space. It remains, however, a challenging variable to forecast, and numerical weather prediction (NWP) models suffer from significant biases, justifying the use of statistical postprocessing techniques. In this study, cloud cover from ARPEGE (Météo-France's global NWP model) is postprocessed using a convolutional neural network (CNN), the most popular machine learning tool for images. In our case, the CNN allows the integration of the spatial information contained in NWP outputs. We use a gridded cloud cover product derived from satellite observations over Europe as ground truth, and the predictors are spatial fields of various variables produced by ARPEGE at the corresponding lead time. We show that a simple U-Net architecture (a particular type of CNN) produces significant improvements over Europe. Moreover, the U-Net outclasses more traditional machine learning methods used operationally, such as random forests and logistic quantile regression. When using a large number of predictors, a first step toward interpretation is to rank the predictors by importance. Traditional ranking methods (permutation importance, sequential selection, etc.) require substantial computational resources. We therefore introduce a weighting predictor layer ahead of the traditional U-Net architecture to produce such a ranking. The small number of additional weights to train (one per predictor) does not affect the computational time, a major advantage over traditional methods.
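The weighting predictor layer amounts to one learnable scalar per input channel, applied before the U-Net; after training, the magnitude of each weight serves as an importance score for the corresponding predictor. A numpy sketch of the forward pass (in a real model these weights would be trained jointly with the CNN, e.g., in Keras or PyTorch):

```python
import numpy as np

def weighting_layer(x, w):
    """Per-predictor scalar weighting applied ahead of a CNN.
    x: input fields of shape (batch, height, width, n_predictors);
    w: one learnable weight per predictor, shape (n_predictors,).
    Broadcasting multiplies every grid point of channel k by w[k]."""
    return x * w

rng = np.random.default_rng(6)
fields = rng.normal(size=(2, 16, 16, 5))   # 5 predictor fields
w = np.array([1.0, 0.0, 0.5, 2.0, 0.0])    # illustrative trained weights
out = weighting_layer(fields, w)
# Channels with weight 0 are effectively pruned before the U-Net
```

Ranking predictors then reduces to sorting `np.abs(w)`, at the cost of only `n_predictors` extra parameters, versus retraining or re-evaluating the full network many times for permutation or sequential-selection importance.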

Restricted access
Stéphane Vannitsem, John Bjørnar Bremnes, Jonathan Demaeyer, Gavin R. Evans, Jonathan Flowerdew, Stephan Hemri, Sebastian Lerch, Nigel Roberts, Susanne Theis, Aitor Atencia, Zied Ben Bouallègue, Jonas Bhend, Markus Dabernig, Lesley De Cruz, Leila Hieta, Olivier Mestre, Lionel Moret, Iris Odak Plenković, Maurice Schmeits, Maxime Taillardat, Joris Van den Bergh, Bert Van Schaeybroeck, Kirien Whan, and Jussi Ylhaisi

Capsule

State-of-the-art statistical postprocessing techniques for ensemble forecasts are reviewed, together with the challenges posed by the demand for timely, high-resolution, and reliable probabilistic information. Possible research avenues are also discussed.

Full access