The Biggest Unknowns Related to Decadal Prediction: What 50 Experts Think Are the 5 Major Knowledge Gaps

Dragana Bojovic, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain (https://orcid.org/0000-0001-7354-1885)

Roberto Bilbao, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Leandro B. Díaz, Centro de Investigaciones del Mar y la Atmósfera, Consejo Nacional de Investigaciones Científicas y Técnicas, and Departamento de Ciencias de la Atmósfera y los Océanos, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, and Instituto Franco Argentino sobre Estudios de Clima y sus Impactos (UMI3351), Centre National de la Recherche Scientifique, Buenos Aires, Argentina, and Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Markus Donat, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Pablo Ortega, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Yohan Ruprich-Robert, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Balakrishnan Solaraju-Murali, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Marta Terrado, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Deborah Verfaillie, Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain

Francisco Doblas-Reyes, Barcelona Supercomputing Center (BSC-CNS), and Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain

© 2019 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

CORRESPONDING AUTHOR: Dragana Bojovic, dragana.bojovic@bsc.es

EUCP WORKSHOP ON SCIENTIFIC KNOWLEDGE GAPS RELATED TO DECADAL CLIMATE PREDICTION

What: The Horizon 2020 (H2020)-funded project European Climate Prediction System (EUCP) and the Barcelona Supercomputing Center's (BSC) Earth Sciences Department organized the workshop to discuss pertinent issues related to the development and application of decadal climate prediction. The organizers invited participants of the larger, parallel CMIP6 Workshop and all other interested scientists; as a result, the workshop reached the capacity of the venue, with 50 participants, all experts in the field.

When: 25 March 2019

Where: Barcelona, Spain

Decadal climate prediction is a relatively new line of research that is gaining momentum within the climate community as well as in the climate services arena. Given that time scales of 1–10 years address the needs of stakeholders from various economic sectors and societal groups, we expect wide interest in and use of this innovative climate information once pending scientific concerns are addressed. In parallel with an event of high relevance for the international climate community, the Coupled Model Intercomparison Project Phase 6 (CMIP6) Model Analysis Workshop, we organized a workshop to discuss the biggest challenges in decadal climate prediction. The workshop was attended by 50 climate scientists from six continents with a broad spectrum of research interests (Fig. 1). It started with a panel discussion about the most relevant issues for the scientific community. After voting for five key topics (Fig. 1), we held two rounds of roundtable discussions on these topics. In this 2-h-long interactive session, each scientist had the opportunity to share their thoughts and experiences on two of the topics. The workshop collected many new views and possible solutions for the problems under discussion, but it also opened new questions and recognized the need for further scientific consideration. This report summarizes the main findings and conclusions from each of the roundtables.

Fig. 1. (left) Participants' main research interests and (right) percentage of votes given to each of the topics listed in the opening panel discussion. The five topics with the highest score were selected for the roundtable discussions.

PHYSICS BEHIND THE SOURCES OF DECADAL CLIMATE PREDICTABILITY.

To predict climate variability beyond its persistence/autocorrelation time scale, the community generally uses forecast systems built on numerical coupled global climate models (CGCMs). However, at decadal time scales, the shortness of the observational record compared to the time scale of interest strongly limits our ability to evaluate model performance. Knowing how much variability is externally or internally driven is key to understanding the sources and limits of climate predictability. At the regional scale in particular, simulations from CGCMs fed with historical external forcing show large differences, highlighting model uncertainties and the importance of internal variability.
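
One common way to make this separation concrete is to approximate the externally forced component by the mean of a single-model initial-condition ensemble and the internal variability by the deviations from that mean. The hypothetical Python/NumPy sketch below illustrates this with synthetic data; the function name, array shapes, and numbers are assumptions for illustration, not results from the workshop.

```python
import numpy as np

def forced_and_internal(ens):
    """Crude decomposition for a single-model initial-condition ensemble.

    ens: array (n_members, n_years) of, e.g., regional-mean temperature
         anomalies from historical simulations sharing the same external forcing.
    The ensemble mean approximates the externally forced response; the
    deviations from it approximate the internal variability.
    """
    forced = ens.mean(axis=0)      # (n_years,)
    internal = ens - forced        # (n_members, n_years)
    return forced, internal

# Toy example: a shared forced trend plus member-dependent internal variability
rng = np.random.default_rng(0)
years = np.arange(1960, 2020)
trend = 0.02 * (years - years[0])                        # shared "forced" warming
ens = trend + 0.4 * rng.standard_normal((20, years.size))
forced, internal = forced_and_internal(ens)
print(forced.var(), internal.var())                      # forced vs internal variance
```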

Due to the limited observational record, model uncertainties, and the potential importance of both internal variability and short-lived forcings at the regional scale, further research is needed. In this context, several ideas were proposed during the workshop, grouped under four subtopics:

  1. Perfect model framework. This framework aims to estimate the potential ability of forecast systems to make skillful predictions. It consists of performing ensemble simulations in which the initial conditions are slightly perturbed from one member to another and identifying the lead time beyond which no information from the initialization can be detected (a minimal code sketch of this idea follows this list). This approach gives insights into the predictability of the real world, assuming that the simulated signal-to-noise ratio is comparable to the observed one.

  2. Process-oriented experiments. Not all mechanisms driving decadal variability imply decadal predictability, yet those involving ocean dynamics are expected to be more predictable. Isolating such processes within a model intercomparison framework would help evaluate the limits of predictability associated with those mechanisms and their impacts. Participants suggested performing ensemble simulations in which specific surface oceanic boundary conditions are imposed to trigger such mechanisms (e.g., by imposing additional ocean surface heat, freshwater, or momentum fluxes to evaluate the climate response to a given atmospheric forcing or to sea ice melting). Evaluation of the ensemble spread would give information on the decadal predictability associated with the forcing response. However, to better assess the predictability of the oceans and their predictable impacts on the atmosphere, we also need to reduce the model uncertainty coming from unresolved oceanic processes (using higher-resolution models and improved parameterizations).

  3. Pacemaker simulations. Observed decadal variability over a specific region can drive decadal variability over another. An approach that constrains CGCMs toward an observed mode of variability over a specific region (e.g., through sea surface temperature restoring), while leaving the model free to adjust everywhere else, allows the mechanisms that give rise to observed teleconnections, and their predictability, to be investigated while circumventing the effects of model biases in the simulated decadal variability.

  4. High- versus low-top atmosphere prediction systems. Stratospheric variability is dominated by a strong quasi-biennial oscillation, which is known to modulate monsoon precipitation and sudden stratospheric warmings. Such oscillatory behavior can be considered a source of predictability at multiyear time scales. However, the extent to which current climate forecast systems can account for this potential source of predictability is still unknown. To test this, it was proposed to compare outputs from decadal prediction systems that resolve the stratosphere (high-top models) with outputs from systems that do not (low-top models).
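
As a minimal illustration of the perfect-model idea mentioned in the first item above, the hypothetical Python/NumPy sketch below (the function name, array shapes, and synthetic data are assumptions for illustration, not material from the workshop) treats each ensemble member in turn as a pseudo-observation and measures how well the mean of the remaining members tracks it at each lead time.

```python
import numpy as np

def perfect_model_skill(ens):
    """Perfect-model correlation skill.

    ens: array (n_start_dates, n_members, n_lead_times) of, e.g., annual-mean
         anomalies from an initialized hindcast set.
    Each member is treated in turn as the pseudo-observation ("truth"); the
    mean of the remaining members is the prediction. Returns the correlation
    between prediction and pseudo-truth at each lead time, averaged over the
    choices of pseudo-truth.
    """
    n_start, n_mem, n_lead = ens.shape
    skill = np.zeros(n_lead)
    for lead in range(n_lead):
        r_values = []
        for truth_idx in range(n_mem):
            truth = ens[:, truth_idx, lead]
            others = np.delete(ens[:, :, lead], truth_idx, axis=1).mean(axis=1)
            r_values.append(np.corrcoef(truth, others)[0, 1])
        skill[lead] = np.mean(r_values)
    return skill

# Toy example: 30 start dates, 10 members, 10 lead years of synthetic anomalies
rng = np.random.default_rng(1)
signal = rng.standard_normal((30, 1, 10))      # component shared by all members
noise = rng.standard_normal((30, 10, 10))      # member-dependent noise
ens = signal + noise
print(perfect_model_skill(ens))                # perfect-model skill per lead year
```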

INITIALIZATION.

Initialization is the process of phasing the model toward the observed climate state at the beginning of each prediction, from which the simulated climate then evolves freely during the forecast. The potential problems and uncertainties identified, to be addressed through systematic sensitivity experiments, are grouped under the following four subtopics:

  1. Ensemble generation and size. Clear guidance on the optimum ensemble size, and on how best to perturb the different ensemble members, is missing. A better understanding of ensemble generation strategies and sizes that provide sufficient sampling of the relevant uncertainties while minimizing the computational cost would be helpful.

  2. Generation of initial conditions. Initializing climate models toward observed climate states often introduces unphysical artifacts in the simulations, such as model drift and initialization shocks (a minimal sketch of how such drift is commonly diagnosed and removed from hindcasts follows this list). To avoid such artifacts, it is important to ensure the internal consistency of the different Earth system components. This highlights the need for coupled generation of initial conditions for the different components and a consistent approach to improving models. It may further be worth exploring new initialization techniques, for example, using analogs consistent with the observed large-scale climate state to initialize predictions from a model-specific climate attractor.

  3. Initializing different Earth system components. Initialization has so far mostly focused on the ocean and the atmosphere, while questions about how best to initialize other slow climate components, such as the land surface, vegetation, or the carbon cycle, are yet to be tackled.

  4. Inhomogeneities of observations. It is currently unclear how the inhomogeneity of observational datasets in space or time affects the model drift or the actual prediction signal. It also affects the consistency between the observations used to generate the initial conditions for the hindcasts and those used for the forecasts. Questions were also raised about whether the available observations are optimal for producing predictions or whether additional, more targeted observations (e.g., sea ice thickness) are needed.
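
To make the notion of model drift in the second item above more concrete, the hypothetical Python/NumPy sketch below (function names, array shapes, and synthetic data are assumptions, not workshop material) illustrates how drift is commonly estimated as the lead-time-dependent mean difference between hindcasts and observations and then subtracted from the forecasts.

```python
import numpy as np

def estimate_drift(hindcasts, observations):
    """Lead-time-dependent mean drift.

    hindcasts:    array (n_start_dates, n_members, n_lead_times)
    observations: array (n_start_dates, n_lead_times), verifying values aligned
                  with each start date and lead time.
    Returns the mean hindcast-minus-observation difference per lead time.
    """
    ens_mean = hindcasts.mean(axis=1)                 # (n_start, n_lead)
    return (ens_mean - observations).mean(axis=0)     # (n_lead,)

def remove_drift(forecast, drift):
    """Subtract the lead-dependent drift from a new forecast ensemble.

    forecast: array (n_members, n_lead_times); drift broadcasts over members.
    """
    return forecast - drift

# Toy example with a drift that grows with lead time
rng = np.random.default_rng(2)
n_start, n_mem, n_lead = 20, 5, 10
truth = rng.standard_normal((n_start, n_lead))
drift_true = 0.1 * np.arange(n_lead)                  # drift grows with lead
hindcasts = truth[:, None, :] + drift_true + 0.3 * rng.standard_normal((n_start, n_mem, n_lead))
drift_hat = estimate_drift(hindcasts, truth)
print(np.round(drift_hat, 2))                         # close to drift_true
```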

FORECAST UNCERTAINTY.

From a physical point of view, forecast uncertainty is tightly linked to internal climate variability, the response to short-lived forcings, and the signal-to-noise paradigm. The following elements of uncertainty were discussed in more detail at the workshop:

  1. Irreducible uncertainty. This is the unpredictable part of internal variability and can explain the most important regional differences in predictability. Perfect-model predictions are affected by this irreducible uncertainty but are free of model error and, as such, provide a powerful approach to estimating the upper bound of skill, also understood as predictability. However, many forecast systems disagree on the signal-to-noise ratio in some areas (a minimal sketch of a signal-to-noise diagnostic follows this list). Different signal-to-noise ratios lead to underconfident or overconfident predictions, depending on the forecast system, and thus hinder our understanding of which regions are truly predictable. Improving the realism of the major modes of internal climate variability in GCMs is thus essential to constrain the irreducible uncertainty in future forecast systems.

  2. Model uncertainty. As approximate discretized representations of the Earth system, climate models introduce an additional source of uncertainty related to the misrepresentation of certain physical processes that are either not resolved or imperfectly parameterized in the models. To reduce model uncertainty, it is important to keep improving climate models, as well as to perform coordinated multimodel experiments to identify the regions and processes that are consistently predicted with skill and to investigate the reasons for the intermodel differences.

  3. Model spread. Emerging constraints are a promising tool for advancing the understanding of the model spread and guiding future efforts to constrain (and not necessarily reduce) it. Current results from the climate projection community suggest that model weighting to improve ensemble performance is not always worth the effort, especially if the number of available models is large enough that the contribution of the worst-performing ones cancels out. However, concern was raised about the lack of real model diversity, as many GCMs contributing to CMIP6 share certain model components or come from the same model family.

  4. Observational uncertainty. This is evidenced by important disagreements among the different observational datasets. Moreover, when data quality is not homogeneous in time, the final forecast skill can be compromised by nonstationary features introduced in the initial conditions or in the reference data against which the forecast is evaluated. Participants favored the use of long and more homogeneous datasets, for both forecast initialization and verification, over datasets with higher quality in the recent period but larger uncertainty before.
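
To make the signal-to-noise discussion in the first item above more concrete, the hypothetical Python/NumPy sketch below contrasts the variance of the ensemble-mean signal with the total ensemble variance and combines it with the skill against observations into a diagnostic akin to the commonly used ratio of predictable components; the function names and synthetic data are assumptions for illustration only.

```python
import numpy as np

def signal_to_total_variance(ens):
    """Ratio of ensemble-mean ("signal") variance to total ensemble variance.

    ens: array (n_start_dates, n_members) of hindcast anomalies at one lead time.
    """
    signal_var = ens.mean(axis=1).var(ddof=1)
    total_var = ens.var(ddof=1)       # variance over all start dates and members
    return signal_var / total_var

def ratio_of_predictable_components(ens, obs):
    """RPC-like diagnostic: correlation with observations divided by the square
    root of the modeled signal-to-total variance ratio. Values well above 1 are
    often interpreted as an underconfident (too-noisy) ensemble."""
    r = np.corrcoef(ens.mean(axis=1), obs)[0, 1]
    return r / np.sqrt(signal_to_total_variance(ens))

# Toy example: a model signal that is weak relative to the observations
rng = np.random.default_rng(3)
n_start, n_mem = 40, 10
predictable = rng.standard_normal(n_start)
obs = predictable + 0.5 * rng.standard_normal(n_start)
ens = 0.3 * predictable[:, None] + rng.standard_normal((n_start, n_mem))
print(ratio_of_predictable_components(ens, obs))   # typically > 1 here
```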

FORECAST QUALITY ASSESSMENT.

The most important questions discussed were how to measure forecast quality and what scores to use. A more philosophical discussion of how to define quality also took place. A summary of the main points follows:

  1. Measuring forecast quality. Forecast quality is a multifaceted property that measures different aspects of the simultaneous correspondence between forecasts and observations. An important way of measuring forecast quality is by assessing the ability of the forecast to represent different statistical characteristics of the observations. The quality of a forecast will depend on many choices, such as the ensemble size, whether bias adjustment has been applied, the spatial resolution, or the presence of important trends in the initial conditions.

  2. Forecast quality metrics. Participants discussed whether quality assessment is best performed with the metrics already available from the seasonal prediction community or whether new metrics tailored to decadal predictions are needed (two commonly used deterministic scores are sketched after this list). Considering the user perspective, the scientists also discussed the usability of relative measures (e.g., anomalies relative to a given climatology) versus more direct, absolute measures.

  3. Comparing initialized predictions and noninitialized projections. It was discussed whether users care about this distinction or whether they are only interested in receiving the best possible climate information for their specific needs, merging different sources.
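
As a minimal illustration of the metrics discussion above, the hypothetical Python/NumPy sketch below computes two deterministic scores commonly used to verify decadal hindcasts: the anomaly correlation of the ensemble mean and a mean-squared-error skill score relative to a climatological forecast. The function names and synthetic data are assumptions for illustration only.

```python
import numpy as np

def anomaly_correlation(hindcast_mean, obs):
    """Correlation between ensemble-mean hindcast anomalies and observed anomalies."""
    return np.corrcoef(hindcast_mean, obs)[0, 1]

def msess(hindcast_mean, obs):
    """Mean-squared-error skill score relative to the observed climatology.

    1 is a perfect forecast, 0 is no better than climatology, < 0 is worse.
    """
    mse_forecast = np.mean((hindcast_mean - obs) ** 2)
    mse_climatology = np.mean((obs - obs.mean()) ** 2)
    return 1.0 - mse_forecast / mse_climatology

# Toy example at a single lead time, e.g., 30 start dates of a hindcast set
rng = np.random.default_rng(4)
obs = rng.standard_normal(30)
hindcast_mean = 0.7 * obs + 0.5 * rng.standard_normal(30)
print(anomaly_correlation(hindcast_mean, obs), msess(hindcast_mean, obs))
```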

USABILITY OF DECADAL PREDICTION.

Over the past few decades, the user community has widely applied climate projections as an important source of climate change information. The participants discussed users' readiness to adopt decadal predictions for decisions in the coming decade and suggested aspects that should be improved if we are to achieve this goal:

  1. Communication. Effective communication and dissemination strategies, developed by an interdisciplinary team and in consultation with users from various sectors, are necessary to enhance the visibility of existing decadal climate information and pave the way for its broader adoption. This will demand using clear and precise language, with well-defined and consistently used terminology, as well as sustained interaction with users. Clear communication of uncertainty could be particularly relevant to improving the usability of decadal predictions. New ways of communicating both the predictive skill and its uncertainty will be required, in particular for the occurrence of extreme events. Applying understandable and intuitive measures of forecast skill could show the potential added value of climate forecast systems.

  2. United voice. Climate and communication scientists should align their story to have a more effective impact on users. In addition, trustworthy institutions with long-standing reputations (e.g., WMO, national weather services) could help spread awareness of decadal climate services.

  3. Resolution. Owing to the high computing power and data storage required to run and store a decadal prediction experiment, the resolution of such climate information is relatively coarse. Coarse resolution can impede the broader use of predictions in some areas, and participants encouraged efforts to produce downscaled products of higher resolution, which could lead to better and more user-relevant climate information. The group proposed finding the break-even point at which the quality of the forecast is not compromised by the resolution of the simulation.

  4. Open access data. Making the data publicly available would provide equal opportunities for the scientific and user communities to foster research and the development of products. This could help reach the critical mass of users required for broader uptake and the sharing of experiences, which could give decadal prediction a more substantial role in decision-making.

LESSONS LEARNED.

The most concrete recommendations, such as using a coordinated multimodel approach to better understand predictability and forecast quality, refer to how to set up the experiments and run the simulations. Other points where further research should focus include how best to measure and communicate forecast quality and how to use emerging constraints to advance the understanding of the model spread. Systematic sensitivity studies will be necessary to address many of the identified knowledge gaps and, ideally, to provide a set of best practices for the initialization of decadal predictions. Interesting discussions also emerged from the usability and uncertainty roundtables, for example, about the importance of a united and authoritative voice and the need to explore additional strategies when communicating new climate services. Finally, the workshop provided a space for brainstorming and sharing ideas among 50 climate scientists from all over the world. This brought up new questions and issues that should be addressed in the near future if we are to move decadal prediction from research to a widely used operational climate service.

ACKNOWLEDGMENTS

The workshop was funded by the EUCP project. EUCP received funding from the European Union under Horizon 2020 (Grant Agreement 776613). We thank all workshop participants for their valuable and constructive contributions.
