Search Results

You are looking at 1-10 of 13 items for:

  • Author or Editor: Francisco J. Doblas-Reyes
Francisco J. Doblas-Reyes and Michel Déqué

Abstract

A general digital bandpass filtering procedure is presented whose advantages over other simple filtering methods are 1) versatility of design, since only a set of three parameters is needed to calculate the filter weights through a simple analytic expression; 2) good performance at the transition band, depending on the number of weights considered; and 3) reduced Gibbs oscillations in the pass band, obtained by convolving a given raw filter with a convergence window. To illustrate the method, the filter is first used to assess the ability of the Météo-France general circulation model ARPEGE to simulate midtropospheric low-frequency intraseasonal variability in the Northern Hemisphere. The filter allows the model drawbacks to be assessed in different frequency bands. As a second example, synoptic-scale baroclinic fluctuations in midlatitudes are also studied. It is shown that the horizontal and vertical structure of these fluctuations depends little on the frequency band up to a period of 10 days, but that the zonal wavelength increases as lower-frequency fluctuations are considered.
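
For readers who want a concrete starting point, the sketch below illustrates one common windowed bandpass design of this kind (a Lanczos-type filter; the paper's exact analytic expression and convergence window are not reproduced here). The three parameters are the two cutoff frequencies and the number of weights, and the window tapers the raw weights to damp the Gibbs oscillations.

    import numpy as np

    def bandpass_weights(n, fc_low, fc_high):
        """Windowed bandpass weights: a sketch of a Lanczos-type design.

        n       -- number of weights on each side of the central one (2*n + 1 in total)
        fc_low  -- lower cutoff frequency (cycles per time step)
        fc_high -- upper cutoff frequency (cycles per time step, at most 0.5)
        """
        k = np.arange(-n, n + 1)
        # Ideal bandpass response: difference of two ideal low-pass (sinc) kernels.
        raw = 2.0 * fc_high * np.sinc(2.0 * fc_high * k) - 2.0 * fc_low * np.sinc(2.0 * fc_low * k)
        # Convergence (Lanczos sigma) window: damps Gibbs oscillations in the pass band.
        sigma = np.sinc(k / (n + 1))
        return raw * sigma

    # Example: retain intraseasonal periods of roughly 10-90 days in daily data.
    weights = bandpass_weights(n=60, fc_low=1.0 / 90.0, fc_high=1.0 / 10.0)
    series = np.random.randn(1000)                      # synthetic daily series
    filtered = np.convolve(series, weights, mode="valid")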

Full access
Virginie Guemas, Ludovic Auger, and Francisco J. Doblas-Reyes

Abstract

Commonly used statistical tests of hypothesis, also termed inferential tests, that are available to meteorologists and climatologists all require that the data in the time series to which they are applied be independent. However, most of the time series usually handled are in fact serially dependent. A common way to handle such serial dependence is to replace, in those statistical tests, the actual number of data with an estimated effective number of independent data, computed from a classical and widely used formula based on the autocorrelation function. Although perfectly demonstrable under certain hypotheses, this formula provides unreliable results in practical cases, for two different reasons. First, the formula has to be applied using the estimated autocorrelation function, which carries a large uncertainty because of the usual shortness of the available time series. After the impact of this uncertainty is illustrated, recommendations are made for preliminary treatment of the time series prior to any application of the formula. Second, the formula is derived under the hypothesis of identically distributed data, which is often not valid in real climate or meteorological problems. It is shown how this issue arises from real physical processes that induce temporal coherence, and how violating the hypotheses affects the results provided by the formula.
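
For reference, the classical formula in question is usually written as N_eff = N / (1 + 2 * sum_k rho_k), with rho_k the lag-k autocorrelation. The sketch below shows how it is typically applied to an estimated autocorrelation function; details such as where the sum is truncated vary between authors and are part of what the paper discusses.

    import numpy as np

    def effective_sample_size(x, max_lag=None):
        """Effective number of independent data from the classical formula
        N_eff = N / (1 + 2 * sum_k rho_k).  Truncating the sum at the first
        non-positive autocorrelation is one common convention among several."""
        x = np.asarray(x, dtype=float)
        n = x.size
        xa = x - x.mean()
        if max_lag is None:
            max_lag = n // 2
        denom = np.sum(xa * xa)
        rho = np.array([np.sum(xa[:n - k] * xa[k:]) / denom for k in range(1, max_lag + 1)])
        cut = np.argmax(rho <= 0) if np.any(rho <= 0) else rho.size
        n_eff = n / (1.0 + 2.0 * np.sum(rho[:cut]))
        return max(1.0, min(n_eff, float(n)))

    # Example: an AR(1) series with lag-1 autocorrelation 0.7 has roughly
    # N * (1 - 0.7) / (1 + 0.7) effectively independent values.
    rng = np.random.default_rng(0)
    x = np.zeros(1000)
    for t in range(1, x.size):
        x[t] = 0.7 * x[t - 1] + rng.standard_normal()
    print(effective_sample_size(x))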

Full access
Robin J. T. Weber, Alberto Carrassi, and Francisco J. Doblas-Reyes

Abstract

Seasonal-to-decadal predictions are initialized using observations of the present climatic state in full field initialization (FFI). Such model integrations undergo a drift toward the model attractor because of model deficiencies that introduce a bias in the model. The anomaly initialization (AI) approach reduces the drift by adding an estimate of the bias onto the observations, at the expense of a larger initial error.
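
Schematically, and using hypothetical gridded fields rather than anything from the paper, the two sets of initial conditions differ only in whether the model's climatological bias is retained:

    import numpy as np

    # Hypothetical fields: observed state at the start date, plus observed and
    # model climatologies for that calendar date (the 0.5 offset stands in for
    # the model bias).
    obs_state = np.random.randn(64, 128)
    obs_clim = np.zeros((64, 128))
    mod_clim = 0.5 + np.zeros((64, 128))

    # Full field initialization (FFI): start from the observed state itself.
    ic_ffi = obs_state

    # Anomaly initialization (AI): add the observed anomaly to the model
    # climatology, i.e. shift the observations by the estimated bias.
    ic_ai = mod_clim + (obs_state - obs_clim)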

In this study FFI is associated with the fidelity paradigm, and AI with an instance of the mapping paradigm, in which the initial conditions are mapped onto the imperfect model attractor by adding a fixed error term; the mapped state on the model attractor should correspond to the nature state. Two diagnostic tools assess how well AI conforms to its own paradigm under various circumstances of model error: the degree of approximation of the model attractor is measured by calculating the overlap of the PDF of the AI initial conditions with the model PDF, and the sensitivity to random error in the initial conditions reveals how well the selected initial conditions on the model attractor correspond to the nature states. As a useful reference, the initial conditions of FFI are subjected to the same analysis.
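
One simple estimator of the first diagnostic, the overlap between the PDF of the AI initial conditions and the model PDF, is the overlapping coefficient of two histograms; the sketch below is generic and not necessarily the estimator used in the study.

    import numpy as np

    def pdf_overlap(sample_a, sample_b, bins=50):
        """Overlapping coefficient of two empirical PDFs, between 0 (disjoint
        distributions) and 1 (identical), estimated on a common histogram grid."""
        lo = min(sample_a.min(), sample_b.min())
        hi = max(sample_a.max(), sample_b.max())
        edges = np.linspace(lo, hi, bins + 1)
        pdf_a, _ = np.histogram(sample_a, bins=edges, density=True)
        pdf_b, _ = np.histogram(sample_b, bins=edges, density=True)
        return np.sum(np.minimum(pdf_a, pdf_b) * np.diff(edges))

    # Hypothetical example: AI initial conditions versus a long free model run.
    rng = np.random.default_rng(1)
    model_run = rng.normal(0.5, 1.0, size=5000)    # sample of the model attractor
    ai_initial = rng.normal(0.5, 1.3, size=200)    # AI initial conditions
    print(pdf_overlap(ai_initial, model_run))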

Hindcast experiments with a hierarchy of low-order coupled climate models show that the initial conditions generated using AI approximate the model attractor only under certain conditions: differences in higher-than-first-order moments between the model and nature PDFs must be negligible. Where such conditions fail, FFI is likely to perform better.

Full access
Prince K. Xavier, Jean-Philippe Duvel, and Francisco J. Doblas-Reyes

Abstract

The intraseasonal variability (ISV) of the Asian summer monsoon represented in seven coupled general circulation models (CGCMs) from the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) project is analyzed and evaluated against observations. The focus is on the spatial and seasonal variations of the ISV of outgoing longwave radiation (OLR). The large-scale organization of convection, the propagation characteristics, and the air–sea coupling related to the monsoon ISV are also evaluated. A multivariate local mode analysis (LMA) reveals that most models produce less organized convection and ISV events of shorter duration than observed. Compared to the real atmosphere, the simulated perturbation patterns are poorly reproducible from one event to another. Most models simulate too-weak sea surface temperature (SST) perturbations and a systematic phase quadrature between OLR, surface winds, and SST, indicative of a slab-ocean-like response of the SST to surface flux perturbations. The relatively coarse vertical resolution of the different ocean GCMs (OGCMs) limits their ability to represent intraseasonal processes, such as diurnal warm-layer formation, that are important for a realistic simulation of the SST perturbations at intraseasonal time scales. Models sharing the same atmospheric GCM (AGCM) but coupled to different OGCMs tend to have similar biases in the simulated ISV, indicating the dominant role of the atmospheric model in setting the nature of the intraseasonal variability. This implies that improvements in the representation of ISV in coupled models must fundamentally come from fixing problems in the large-scale organization of convection in AGCMs.
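
The phase-quadrature argument can be made explicit with a slab-ocean mixed layer heat budget; this is a minimal sketch, not a calculation from the paper, with rho, c_p, and H the density, specific heat, and mixed layer depth, and Q' the intraseasonal surface flux perturbation:

    \rho c_p H \,\frac{\partial T'}{\partial t} = Q', \qquad
    Q' = \hat{Q}\cos(\omega t) \;\Rightarrow\;
    T' = \frac{\hat{Q}}{\rho c_p H \omega}\,\sin(\omega t).

The SST perturbation therefore lags the flux (and the associated OLR and surface wind perturbations) by a quarter of a cycle, which is the phase quadrature mentioned above; ocean dynamics and vertical mixing, absent from a slab ocean, break this simple relationship.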

Full access
Prince K. Xavier, Jean-Philippe Duvel, Pascale Braconnot, and Francisco J. Doblas-Reyes

Abstract

The intraseasonal variability (ISV) is an intermittent phenomenon with variable perturbation patterns. To assess the robustness of the ISV simulated by climate models, it is thus interesting to consider the distribution of perturbation patterns rather than a single average pattern. To inspect this distribution, the authors first introduce a distance that measures the similarity between two patterns. The reproducibility (realism) of the simulated intraseasonal patterns is then defined as the distribution of distances between each pattern and the average simulated (observed) pattern. Good reproducibility is required to analyze the physical source of the simulated disturbances. The realism distribution is required to estimate the proportion of simulated events whose perturbation pattern is similar to observed patterns. The median value of this realism distribution is introduced as an ISV metric. The reproducibility and realism distributions are used to evaluate the boreal summer ISV of precipitation over the Indian Ocean for 19 models from phase 3 of the Coupled Model Intercomparison Project (CMIP3). The 19 models are ranked in order of increasing ISV metric. In agreement with previous studies, the four best ISV metrics are obtained for models whose convective closure is totally or partly based on moisture convergence. Models with high metric values (poorly realistic) tend to give (i) poorly reproducible intraseasonal patterns, (ii) rainfall perturbations poorly organized at large scales, (iii) small day-to-day variability with overly red temporal spectra, and (iv) a less accurate summer monsoon rainfall distribution. This confirms that the ISV is an important link in the seamless system that connects weather and climate.
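
To make the construction concrete, the sketch below computes a simple centered root-mean-square distance between two rainfall perturbation patterns and uses it to form the realism metric; the actual distance in the paper is defined on multivariate local-mode patterns, so this is only a stand-in.

    import numpy as np

    def pattern_distance(p, q):
        """Centered RMS distance between two perturbation patterns
        (smaller means more similar)."""
        pa, qa = p - p.mean(), q - q.mean()
        return np.sqrt(np.mean((pa - qa) ** 2))

    def realism_metric(simulated_event_patterns, observed_mean_pattern):
        """Median distance of each simulated event pattern to the average
        observed pattern: the summary ISV metric described above.  Using the
        average simulated pattern instead would give the reproducibility
        distribution."""
        d = [pattern_distance(ev, observed_mean_pattern) for ev in simulated_event_patterns]
        return np.median(d)

    # Hypothetical example: 30 simulated event patterns on a small lat-lon grid.
    rng = np.random.default_rng(2)
    events = rng.standard_normal((30, 20, 40))
    obs_mean = rng.standard_normal((20, 40))
    print(realism_metric(events, obs_mean))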

Full access
Javier García-Serrano, Christophe Cassou, Hervé Douville, Alessandra Giannini, and Francisco J. Doblas-Reyes

Abstract

One of the most robust remote impacts of El Niño–Southern Oscillation (ENSO) is the teleconnection to tropical North Atlantic (TNA) sea surface temperature (SST) in boreal spring. However, important questions remain open. In particular, the timing of the ENSO–TNA relationship is not well understood. The three previously proposed mechanisms rely on teleconnection dynamics involving a time lag of one season with respect to the ENSO mature phase in winter, but recent results have shown that the persistence of ENSO into spring is necessary for the development of the TNA SST anomalies. Likewise, the effective atmospheric forcing in the deep TNA that drives the regional air–sea interaction has yet to be identified. In this manuscript a new dynamical framework for understanding the ENSO–TNA teleconnection is proposed, in which a continuous atmospheric forcing is present throughout the ENSO decaying phase. Observational datasets from the satellite era, which include reliable estimates over the ocean, are used to illustrate the mechanism at play. The dynamics rely on the remote Gill-type response to the zonally compensated ENSO heat source over the Amazon basin, associated with perturbations in the Walker circulation. Under El Niño conditions, the anomalous diabatic heating in the tropical Pacific is compensated by anomalous diabatic cooling, in association with negative rainfall anomalies and descending motion over northern South America. A pair of anomalous cyclonic circulations straddling the equator is established at upper-tropospheric levels in the tropical Atlantic, displaying a characteristic baroclinic structure with height. In the TNA region, the mirrored anomalous anticyclonic circulation at lower-tropospheric levels weakens the northeasterly trade winds, leading to a reduction in evaporation and in the ocean mixed layer depth, and hence to positive SST anomalies. Although latent heat flux anomalies dominate the remote response, sensible heat flux and shortwave radiation anomalies also appear to contribute. The "lagged" relationship between the mature ENSO in winter and the peak TNA SSTs in spring seems to be phase locked with the seasonal cycle, both in the location of the mechanism's centers of action and in the regional SST variance.

Full access
Verónica Torralba, Francisco J. Doblas-Reyes, Dave MacLeod, Isadora Christel, and Melanie Davis

Abstract

Climate predictions tailored to the wind energy sector represent an innovation in the use of climate information to better manage the future variability of wind energy resources. Wind energy users have traditionally relied on a simple approach based on retrospective climatological information. Climate predictions can instead better support the balance between energy demand and supply, as well as decisions about the scheduling of maintenance work. One limitation to the use of climate predictions is their bias, which has until now prevented their incorporation in wind energy models, because these models require variables with statistical properties similar to those observed. To overcome this problem, two techniques for the bias adjustment of probabilistic climate forecasts are considered here: a simple bias correction and a calibration method. Both approaches assume that the seasonal distributions are Gaussian. The methods are linear and robust and do not require parameter estimation, essential features given the small sample sizes of current climate forecast systems. This paper is the first to explore the impact of the necessary bias adjustment on the forecast quality of an operational seasonal forecast system, using the European Centre for Medium-Range Weather Forecasts seasonal predictions of near-surface wind speed to produce useful information for wind energy users. The results reveal to what extent the bias adjustment techniques, in particular the calibration method, are indispensable for producing statistically consistent and reliable predictions. The forecast-quality assessment shows that calibration is a fundamental requirement for a high-quality climate service.
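
The sketch below contrasts the two kinds of adjustment on a hypothetical hindcast; the calibration shown is a generic variance-inflation scheme that rescales the ensemble-mean signal and the ensemble spread separately, which follows the spirit of the method evaluated in the paper but is not guaranteed to match it in detail.

    import numpy as np

    def simple_bias_correction(fcst, obs):
        """Shift every member so the hindcast mean matches the observed mean.
        fcst: (start_dates, members); obs: (start_dates,)."""
        return fcst + (obs.mean() - fcst.mean())

    def calibration(fcst, obs):
        """Variance-inflation-style calibration: rescale the ensemble-mean
        signal and the deviations from the ensemble mean so the calibrated
        forecast has the observed interannual variance while preserving the
        ensemble-mean correlation with the observations."""
        ens_mean = fcst.mean(axis=1)
        signal = ens_mean - ens_mean.mean()        # centered ensemble-mean signal
        noise = fcst - ens_mean[:, None]           # deviations from the ensemble mean
        r = np.corrcoef(ens_mean, obs)[0, 1]
        alpha = abs(r) * obs.std() / signal.std()
        beta = np.sqrt(1.0 - r ** 2) * obs.std() / noise.std()
        return obs.mean() + alpha * signal[:, None] + beta * noise

    # Hypothetical hindcast: 30 start dates, 15 members, e.g. seasonal-mean
    # 10-m wind speed, with matching observations.
    rng = np.random.default_rng(3)
    obs = rng.normal(7.0, 2.0, size=30)
    fcst = 5.0 + 0.5 * (obs[:, None] - 7.0) + rng.normal(0.0, 1.0, size=(30, 15))
    print(simple_bias_correction(fcst, obs).mean(), calibration(fcst, obs).std(), obs.std())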

Full access
Stefan Siegert, Omar Bellprat, Martin Ménégoz, David B. Stephenson, and Francisco J. Doblas-Reyes

Abstract

The skill of weather and climate forecast systems is often assessed by calculating the correlation coefficient between past forecasts and their verifying observations. Improvements in forecast skill can thus be quantified by correlation differences. The uncertainty in the correlation difference needs to be assessed to judge whether an observed difference constitutes a genuine improvement or is compatible with random sampling variations. A widely used statistical test for correlation differences is known to be unsuitable because it assumes that the competing forecasting systems are independent. In this paper, appropriate statistical methods are reviewed for assessing correlation differences when the competing forecasting systems are strongly correlated with one another. The methods are used to compare correlation skill between seasonal temperature forecasts that differ in initialization scheme and model resolution. A simple power-analysis framework is proposed to estimate the probability of correctly detecting skill improvements and to determine the minimum number of samples required to detect them reliably. The proposed statistical test has higher power to detect improvements than the traditional test. The main examples suggest that the sample sizes of climate hindcasts should be increased to about 40 years to ensure sufficiently high power. It is found that seasonal temperature forecasts are significantly improved by using realistic land surface initial conditions.
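
As an illustration of why the dependence matters, and of one simple way to respect it, the sketch below uses a paired bootstrap: resampling whole years keeps the pairing between the two forecast systems and the observations, so the sampling distribution of the correlation difference automatically accounts for the correlation between systems. The analytical tests reviewed in the paper are a different (and cheaper) route to the same goal.

    import numpy as np

    def correlation_difference_ci(obs, fcst_a, fcst_b, n_boot=10000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for
        cor(obs, fcst_a) - cor(obs, fcst_b), resampling paired years."""
        rng = np.random.default_rng(seed)
        n = obs.size
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, size=n)       # resample years with replacement
            r_a = np.corrcoef(obs[idx], fcst_a[idx])[0, 1]
            r_b = np.corrcoef(obs[idx], fcst_b[idx])[0, 1]
            diffs[i] = r_a - r_b
        return np.quantile(diffs, [alpha / 2.0, 1.0 - alpha / 2.0])

    # Hypothetical 30-year hindcasts from two strongly correlated systems.
    rng = np.random.default_rng(4)
    truth = rng.standard_normal(30)
    fcst_a = truth + 0.8 * rng.standard_normal(30)
    fcst_b = 0.9 * fcst_a + 0.4 * rng.standard_normal(30)   # shares errors with system A
    print(correlation_difference_ci(truth, fcst_a, fcst_b))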

Full access
Deborah Verfaillie, Francisco J. Doblas-Reyes, Markus G. Donat, Núria Pérez-Zanón, Balakrishnan Solaraju-Murali, Verónica Torralba, and Simon Wild

Abstract

Decadal climate predictions are increasingly used by stakeholders interested in the evolution of climate over the coming decade. However, assessments of the added value of these initialized decadal predictions over other sources of information typically used by stakeholders generally rely on forecast accuracy, while probabilistic aspects, although crucial to users, are often overlooked. In this study, the quality of near-surface air temperature from initialized predictions is assessed in terms of reliability, an essential characteristic of climate simulation ensembles, and compared to the reliability of noninitialized simulations performed with the same model ensembles. Here, reliability is defined as the capability to obtain a true estimate of the forecast uncertainty from the ensemble spread. We show that the added value of initialization in terms of reliability is limited, the initialized predictions being significantly more reliable than their noninitialized counterparts only for specific regions and the first forecast year. By analyzing reliability for different forecast system ensembles, we further highlight that the combination of models seems to play a more important role than the ensemble size of each individual forecast system. This is because the multimodel samples different model errors related to model physics, numerics, and initialization approaches, allowing for a certain level of error compensation. Finally, this study demonstrates that all forecast system ensembles are affected by systematic biases and dispersion errors that degrade the reliability. These errors make bias correction and calibration necessary to obtain reliable estimates of forecast probabilities that can be useful to stakeholders.
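
Under the definition used here, a necessary condition for reliability is that the ensemble spread matches, on average, the error of the ensemble mean. The sketch below checks that single spread-error ratio on a hypothetical hindcast; the study itself uses more complete reliability diagnostics than this one number.

    import numpy as np

    def spread_error_ratio(fcst, obs):
        """Ratio of mean ensemble spread to the RMSE of the ensemble mean.
        Values near 1 are necessary (not sufficient) for reliability; values
        below 1 indicate an overconfident ensemble, above 1 an overdispersive one.
        fcst: (start_dates, members); obs: (start_dates,)."""
        ens_mean = fcst.mean(axis=1)
        rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
        spread = np.sqrt(np.mean(fcst.var(axis=1, ddof=1)))
        return spread / rmse

    # Hypothetical decadal hindcast: 25 start dates, 40 members, spread too small.
    rng = np.random.default_rng(5)
    signal = rng.standard_normal(25)
    obs = signal + rng.normal(0.0, 1.0, size=25)
    fcst = signal[:, None] + rng.normal(0.0, 0.5, size=(25, 40))   # overconfident
    print(spread_error_ratio(fcst, obs))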

Open access
Marta Terrado, Llorenç Lledó, Dragana Bojovic, Asun Lera St. Clair, Albert Soret, Francisco J. Doblas-Reyes, Rodrigo Manzanas, Daniel San-Martín, and Isadora Christel

Abstract

Climate predictions, from three weeks to a decade into the future, can provide invaluable information for climate-sensitive socioeconomic sectors such as renewable energy, agriculture, or insurance. However, communicating and interpreting these predictions is not straightforward. Barriers hindering user uptake include a terminology gap between climate scientists and users, the difficulty of dealing with probabilistic outcomes for decision-making, and the lower skill of climate predictions compared with weather forecasts. This paper presents a gaming approach to break communication and understanding barriers through the application of the Weather Roulette conceptual framework. In the game, the player can choose between two forecast options: one that uses ECMWF seasonal predictions and one that uses climatology-derived probabilities. For each option, the bet is spread proportionally to the predicted probabilities, either in a single-year game or in a game covering the whole period of 33 past years. The paper provides skill maps of forecast-quality metrics commonly used by the climate prediction community (e.g., the ignorance skill score and the ranked probability skill score), which in the game are linked to metrics easily understood by the business sector (e.g., interest rate and return on investment). In a simplified context, we illustrate how, in skillful regions, the economic benefits of using ECMWF predictions arise in the long term and are higher than those of using climatology. The paper thus provides an example of how to convey the usefulness of climate predictions and transfer knowledge from climate science to potential users. If applied, this approach could provide the basis for a better integration of knowledge about climate anomalies into operational and managerial processes.
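
A minimal sketch of the Weather Roulette accounting, under the usual proportional-betting rules and with odds set by climatology (the game's exact pay-out scheme may differ): the yearly wealth multiplier is the forecast probability assigned to the verifying category divided by its climatological probability, and the geometric growth of wealth gives an effective interest rate that is directly tied to the difference in ignorance score between the two forecast options.

    import numpy as np

    def weather_roulette(fcst_probs, clim_probs, outcomes):
        """Effective yearly interest rate from betting proportionally to the
        forecast probabilities against climatological odds (a generic sketch).
        fcst_probs, clim_probs: (years, categories); outcomes: (years,) indices."""
        years = np.arange(outcomes.size)
        # Wealth multiplier each year: stake fraction on the verifying category
        # (the forecast probability) times the climatological odds (1 / clim prob).
        growth = fcst_probs[years, outcomes] / clim_probs[years, outcomes]
        rate = np.exp(np.mean(np.log(growth))) - 1.0      # geometric-mean growth - 1
        # Equivalent form: 2**(IGN_clim - IGN_fcst) - 1, with IGN the mean
        # ignorance score in bits.
        ign_clim = np.mean(-np.log2(clim_probs[years, outcomes]))
        ign_fcst = np.mean(-np.log2(fcst_probs[years, outcomes]))
        return rate, 2.0 ** (ign_clim - ign_fcst) - 1.0

    # Hypothetical tercile forecasts over 33 years.
    rng = np.random.default_rng(6)
    outcomes = rng.integers(0, 3, size=33)
    clim = np.full((33, 3), 1.0 / 3.0)
    fcst = np.full((33, 3), 0.2)
    fcst[np.arange(33), outcomes] = 0.6      # a toy forecast favoring the verifying tercile
    print(weather_roulette(fcst, clim, outcomes))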

Free access