Assessing the Simulation of Precipitation in Earth System Models
What: Scientists who model, observe, analyze, and evaluate precipitation met to strategize about how to invigorate work on evaluating precipitation simulated by Earth system models.
When: 1–2 July 2019
Where: Rockville, Maryland
Earth system models (ESMs) bridge observationally based and theoretical understanding of the Earth system. They are among the most frequently used tools to study a variety of questions related to variability and changes in Earth’s climate. For many applications, ESMs must realistically simulate observed large-scale precipitation patterns and seasonal cycles, which have a multitude of societal and national security implications. Despite steady improvement in the simulation of precipitation, model errors in many aspects of precipitation characteristics limit the use of ESMs both for understanding Earth system variability and change and for decision-making.
The impetus for this workshop was to accelerate efforts to improve ESMs by designing a capability to comprehensively evaluate simulated precipitation in ESMs—a capability that will help ESM developers better understand their models and provide them with quantitative targets for demonstrating model improvements. A group of experts with diverse interests participated in the workshop, including model developers, observational experts, scientists with expertise in diagnosing or evaluating simulated precipitation and related processes, and several with experience in objectively summarizing model performance with metrics.
Two goals steered the workshop discussions:
1) Identify a holistic set of observed rainfall characteristics that could be used to define metrics that gauge the consistency between ESMs and observations
2) Assess state-of-the-science methods used to evaluate simulated rainfall and identify areas of research for exploratory metrics to improve the understanding of model biases and meet stakeholder needs
The first challenge was to identify a set of observed characteristics that can reliably be used for benchmarking models—establishing observational targets and determining how far models are from these targets. Multiple viable approaches were discussed, and workshop participants agreed that it was most important to establish a starting point that, while imperfect, can be useful and provide a foundation for future improvement and expansion. A set of six precipitation characteristics was agreed upon as an appropriate starting point for developing baseline precipitation metrics: the spatial distribution of mean state precipitation (including snow), the seasonal cycle, variability on time scales from diurnal to decadal, intensity and frequency distributions, extremes, and drought. Expansion of these basic characteristics is envisioned through a tiered system including a wider range of quantitative measures that provide significantly more detail than the basic metrics. All metrics and diagnostics are designed to be applied to a common set of simulations from the current phase of the Coupled Model Intercomparison Project (CMIP6).
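To make the idea of "determining how far models are from these targets" concrete, a mean-state baseline metric can be as simple as an area-weighted error statistic between a model's climatological precipitation and an observational reference. The sketch below is illustrative only—the grids and values are toy data, not CMIP6 output or any observational product—and the function name is ours, not part of any specific metrics package.

```python
# A minimal sketch of one baseline-style metric: the area-weighted RMSE
# between simulated and observed climatological mean precipitation.
# Toy data throughout; a real evaluation would use regridded model and
# observational fields on a common latitude-longitude grid.
import numpy as np

def area_weighted_rmse(model, obs, lat):
    """RMSE over a lat-lon grid, weighting each latitude row by cos(lat)."""
    weights = np.cos(np.deg2rad(lat))[:, np.newaxis]   # shape (nlat, 1)
    weights = np.broadcast_to(weights, model.shape)    # one weight per cell
    sq_err = (model - obs) ** 2
    return np.sqrt(np.average(sq_err, weights=weights))

# Toy 3x4 grids of mean precipitation (mm/day); the "model" carries a
# uniform 0.5 mm/day wet bias relative to the "observations".
lat = np.array([-30.0, 0.0, 30.0])
obs = np.full((3, 4), 3.0)
model = obs + 0.5
print(area_weighted_rmse(model, obs, lat))  # 0.5 for a uniform bias
```

In practice each of the six baseline characteristics would map to one or more such scalar statistics, so that a model's performance can be summarized, compared across models, and tracked across model versions.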
While the primary function of the baseline metrics is to benchmark model simulations of precipitation for documenting model performance and improvements over time, precipitation metrics are also useful for a broader community of researchers and stakeholders. The second part of the workshop focused on developing a more diagnostics-oriented set of precipitation metrics, which were referred to as “exploratory.” These exploratory metrics target critical precipitation-related characteristics and processes that are actively being researched but to date lack widely adopted measures for established benchmarking. They can be useful for model developers in guiding model development, for Earth system scientists investigating precipitation variability and change, and for researchers and stakeholders interested in specific aspects of precipitation relevant to their applications. Exploratory metrics were grouped into three types according to their functions and characteristics: process-oriented metrics, regime-oriented metrics, and use-inspired metrics. Over time, these exploratory metrics may inform further baseline metrics that can be included in the set of benchmarks.
The group plans to move forward by incorporating the initial set of benchmarks into a common analysis framework and applying it to CMIP6 and earlier generations of climate models, and in parallel continuing to develop exploratory metrics. Ultimately, this effort aims to provide a guide to modelers as they strive to improve simulated precipitation.
The workshop was funded by the Regional and Global Model Analysis (RGMA) Program Area within the Earth and Environmental Systems Modeling Program in the Climate and Environmental Sciences Division at the U.S. Department of Energy and hosted by Program Manager Renu Joseph. RGMA also supported Gleckler via the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Pendergrass via National Science Foundation IA 1844590, and Leung via the Pacific Northwest National Laboratory managed by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC06-76RLO-1830. We thank all workshop participants for their contributions.