Search Results

Showing 1–5 of 5 items for Author or Editor: Omar Bellprat
Omar Bellprat, Sven Kotlarski, Daniel Lüthi, and Christoph Schär

Abstract

Perturbed physics ensembles (PPEs) have been widely used to assess climate model uncertainties and have provided new estimates of climate sensitivity and parametric uncertainty in state-of-the-art climate models. So far, PPEs have mainly been generated with global climate models, and little work has been conducted with regional climate models. This paper discusses the parameter uncertainty in two PPEs of a regional climate model driven by reanalysis data for the present climate over Europe. The uncertainty is evaluated for 2-m temperature, precipitation, and total cloud cover, with a focus on the annual cycle, interannual variability, and selected extremes.

The authors show that, provided observational uncertainty is taken into account, the simulated spread of the PPEs encompasses the observations at a regional scale in terms of the annual cycle and the interannual variability. To rank the PPEs, a new skill metric is proposed that takes into account observational uncertainty and natural variability. The metric is a generalization of the climate prediction index (CPI) and is compared to metrics used in other studies. The consideration of observational uncertainty is particularly important for total cloud cover, and it reveals that current observations do not allow for a systematic evaluation of high precipitation intensities over the entire European domain.

The skill framework is additionally used to identify important model parameters, which are of interest for objective model calibration.
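The generalized CPI itself is not reproduced in this listing. As a rough illustration of the idea only, the sketch below (hypothetical function and variable names, not the authors' metric) normalizes the squared climatological bias by the sum of interannual variance and observational error variance, so that larger observational uncertainty widens the tolerance of the score:

```python
import numpy as np

def cpi_like_score(model, obs, sigma_obs):
    """Illustrative CPI-style skill score (lower is better).

    model, obs : arrays of shape (n_years, n_regions)
    sigma_obs  : observational uncertainty (std. dev.) per region

    The squared climatological bias is normalized by natural
    (interannual) variability plus observational error variance;
    a simplified stand-in for the metric described in the abstract.
    """
    bias = model.mean(axis=0) - obs.mean(axis=0)   # climatological bias
    var_nat = obs.var(axis=0, ddof=1)              # interannual variance
    tolerance = var_nat + np.asarray(sigma_obs, float) ** 2
    return float(np.mean(bias ** 2 / tolerance))
```

With this form, a score near zero means the model's biases are small relative to what natural variability and measurement error together can explain.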

Omar Bellprat, Javier García-Serrano, Neven S. Fučkar, François Massonnet, Virginie Guemas, and Francisco J. Doblas-Reyes

Omar Bellprat, Sven Kotlarski, Daniel Lüthi, Ramón De Elía, Anne Frigon, René Laprise, and Christoph Schär

Abstract

An important source of model uncertainty in climate models arises from unconfined model parameters in physical parameterizations. These parameters are commonly estimated on the basis of manual adjustments (expert tuning), which carries the risk of overtuning the parameters for a specific climate region or time period. This issue is particularly germane in the case of regional climate models (RCMs), which are often developed and used in one or a few geographical regions only. This study addresses the role of objective parameter calibration in this context. Using a previously developed objective calibration methodology, an RCM is calibrated over two regions (Europe and North America) and is used to investigate the transferability of the results. A total of eight different model parameters are calibrated, using a metamodel to account for parameter interactions. The study demonstrates that the calibration is effective in reducing model biases in both domains. For Europe, this concerns in particular a pronounced reduction of the summer warm bias and the associated overestimation of interannual temperature variability that have persisted through previous expert tuning efforts and are common in many global and regional climate models. The key process responsible for this improvement is an increased hydraulic conductivity. Higher hydraulic conductivity increases the water availability at the land surface and leads to increased evaporative cooling, stronger low cloud formation, and associated reduced incoming shortwave radiation. The calibrated parameter values are found to be almost identical for both domains; that is, the parameter calibration is transferable between the two regions. This is a promising result and indicates that models may be more universal than previously considered.
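The metamodel used for the calibration is described in the paper itself; as a hedged sketch of the general idea only, the code below fits a quadratic response surface with pairwise interaction terms to (parameter setting, skill score) pairs by ordinary least squares. All names and design choices are illustrative, not the authors' implementation:

```python
import numpy as np

def quadratic_design(params):
    """Design matrix for a quadratic metamodel with pairwise
    interactions: columns 1, p_i, p_i^2, and p_i * p_j (i < j)."""
    p = np.asarray(params, float)
    n, k = p.shape
    cols = [np.ones(n)]
    cols += [p[:, i] for i in range(k)]
    cols += [p[:, i] ** 2 for i in range(k)]
    cols += [p[:, i] * p[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

def fit_metamodel(params, scores):
    """Least-squares fit of the metamodel coefficients."""
    X = quadratic_design(params)
    coef, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)
    return coef

def predict(coef, params):
    """Evaluate the fitted metamodel at new parameter settings."""
    return quadratic_design(params) @ coef
```

Once fitted to a modest number of full model runs, such a cheap surrogate can be searched exhaustively for the parameter combination that minimizes the skill score, which is the essence of objective calibration.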

Stefan Siegert, Omar Bellprat, Martin Ménégoz, David B. Stephenson, and Francisco J. Doblas-Reyes

Abstract

The skill of weather and climate forecast systems is often assessed by calculating the correlation coefficient between past forecasts and their verifying observations. Improvements in forecast skill can thus be quantified by correlation differences. The uncertainty in the correlation difference needs to be assessed to judge whether the observed difference constitutes a genuine improvement or is compatible with random sampling variations. A widely used statistical test for correlation difference is known to be unsuitable, because it assumes that the competing forecasting systems are independent. In this paper, appropriate statistical methods are reviewed to assess correlation differences when the competing forecasting systems are strongly correlated with one another. The methods are used to compare correlation skill between seasonal temperature forecasts that differ in initialization scheme and model resolution. A simple power analysis framework is proposed to estimate the probability of correctly detecting skill improvements, and to determine the minimum number of samples required to reliably detect improvements. The proposed statistical test has a higher power of detecting improvements than the traditional test. The main examples suggest that sample sizes of climate hindcasts should be increased to about 40 years to ensure sufficiently high power. It is found that seasonal temperature forecasts are significantly improved by using realistic land surface initial conditions.
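The paper's exact test is not reproduced in this listing. One standard option for comparing two correlations that share the same verifying observations, sketched below as an illustration (not necessarily the authors' method), is Steiger's (1980) Z-test for dependent correlations; unlike the naive independent-samples test, it accounts for the correlation between the two forecast systems themselves:

```python
import math
import numpy as np

def steiger_z(obs, fcst1, fcst2):
    """Steiger's Z-test for the difference of two dependent correlations
    r(fcst1, obs) and r(fcst2, obs), which share the variable `obs`.

    Returns (z, two_sided_p). Assumes roughly Gaussian data and that
    the two forecast systems are not perfectly correlated.
    """
    n = len(obs)
    r13 = np.corrcoef(fcst1, obs)[0, 1]   # skill of system 1
    r23 = np.corrcoef(fcst2, obs)[0, 1]   # skill of system 2
    r12 = np.corrcoef(fcst1, fcst2)[0, 1] # dependence between systems
    rbar = 0.5 * (r13 + r23)
    # covariance of the Fisher-transformed correlations (pooled form)
    c = (r12 * (1 - 2 * rbar**2)
         - 0.5 * rbar**2 * (1 - 2 * rbar**2 - r12**2)) / (1 - rbar**2) ** 2
    z = (math.atanh(r13) - math.atanh(r23)) * math.sqrt((n - 3) / (2 - 2 * c))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

For a genuine improvement, z is large and p small; because the shared-observation dependence shrinks the variance of the difference, such a test is typically more sensitive than treating the two skill estimates as independent samples.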

Neven S. Fučkar, François Massonnet, Virginie Guemas, Javier García-Serrano, Omar Bellprat, Mario Acosta, and Francisco J. Doblas-Reyes