Special Collection: Process-oriented Diagnostics in CMIP6 Models and Beyond
L. Ruby Leung, William R. Boos, Jennifer L. Catto, Charlotte A. DeMott, Gill M. Martin, J. David Neelin, Travis A. O’Brien, Shaocheng Xie, Zhe Feng, Nicholas P. Klingaman, Yi-Hung Kuo, Robert W. Lee, Cristian Martinez-Villalobos, S. Vishnu, Matthew D. K. Priestley, Cheng Tao, and Yang Zhou

Abstract

Precipitation sustains life and supports human activities, making its prediction one of the most societally relevant challenges in weather and climate modeling. Limitations in modeling precipitation underscore the need for diagnostics and metrics to evaluate precipitation in simulations and predictions. While routine use of basic metrics is important for documenting model skill, more sophisticated diagnostics and metrics, aimed at connecting model biases to their sources and revealing precipitation characteristics relevant to downstream uses of model precipitation, are critical for improving models and their applications. This paper illustrates examples of exploratory diagnostics and metrics, including 1) spatiotemporal-characteristics metrics such as diurnal variability, probability of extremes, duration of dry spells, spectral characteristics, and spatiotemporal coherence of precipitation; 2) process-oriented metrics based on the rainfall–moisture coupling and temperature–water vapor environments of precipitation; and 3) phenomena-based metrics focusing on precipitation associated with weather phenomena, including low pressure systems, mesoscale convective systems, frontal systems, and atmospheric rivers. Together, these diagnostics and metrics delineate the multifaceted and multiscale nature of precipitation, its relationships with the environment, and its generation mechanisms. The metrics are applied to historical simulations from phases 5 and 6 of the Coupled Model Intercomparison Project. Models exhibit diverse skill as measured by the suite of metrics, with very few models consistently ranked as top or bottom performers across multiple metrics. Analysis of model skill across metrics and models suggests possible relationships among subsets of metrics, motivating more systematic analysis to understand model biases and inform model development.
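As an illustration of one of the spatiotemporal-characteristics metrics named above, here is a minimal sketch of a dry-spell-duration calculation from a precipitation time series. The 1 mm/day threshold and the example series are illustrative assumptions, not values taken from the paper; the paper's actual diagnostics are more elaborate.

```python
import numpy as np

def dry_spell_durations(precip, threshold=1.0):
    """Return lengths (in time steps) of consecutive runs below `threshold`.

    `precip` is a 1-D array of precipitation rates (e.g. mm/day);
    a "dry" step is one below `threshold` (assumed 1 mm/day here).
    """
    dry = precip < threshold
    # Pad with False so every dry run has both a start and an end edge.
    edges = np.diff(np.concatenate(([False], dry, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)   # wet -> dry transitions
    ends = np.flatnonzero(edges == -1)    # dry -> wet transitions
    return ends - starts

# Hypothetical 10-day series (mm/day) with a 3-day and a 2-day dry spell.
series = np.array([5.0, 0.2, 0.0, 0.5, 8.0, 3.0, 0.1, 0.9, 4.0, 2.0])
print(dry_spell_durations(series))  # → [3 2]
```

From the run lengths, a distribution (or mean duration) of dry spells can be compared between a model and observations, which is the spirit of the duration-of-dry-spells metric described in the abstract.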

Open access