Search Results

You are looking at 1–10 of 16 items for:

  • Model performance/evaluation
  • Process-Oriented Model Diagnostics
  • All content
Grey S. Nearing, Benjamin L. Ruddell, Martyn P. Clark, Bart Nijssen, and Christa Peters-Lidard

used to understand model consistency, accuracy, and precision. They defined model evaluation as a process in which model outputs are compared against observations, model intercomparison as a process in which multiple models are applied to specific test cases, and model benchmarking as a process in which model outputs are compared against a priori expectations of model performance. Our first purpose here is to formalize the concept of model benchmarking. A benchmark consists of three distinct

Full access
Maik Renner, Axel Kleidon, Martyn Clark, Bart Nijssen, Marvin Heidkamp, Martin Best, and Gab Abramowitz

, which include many different process parameterizations, may not represent diurnal land–atmosphere interaction well. This is important because land–atmosphere feedbacks propagate to larger scales and may ultimately affect model sensitivity to global change (Miralles et al. 2019). Here, we employ a diagnostic model evaluation based on diurnal signatures of surface heat fluxes to quantify the performance of LSMs specifically for diurnal heat redistribution processes and point toward parameterizations

Open access
Jiabao Wang, Hyemi Kim, Daehyun Kim, Stephanie A. Henderson, Cristiana Stan, and Eric D. Maloney

. 2013; Ahn et al. 2017). Performance-based MJO simulation diagnostics and metrics have been developed for a consistent evaluation of the MJO fidelity in models (Waliser et al. 2009). Although MJO simulation has generally improved in models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) over CMIP3 models in terms of MJO variance and eastward propagation (Hung et al. 2013), models still tend to produce a weaker MJO with faster eastward propagation (e.g., Kim et al. 2014a

Open access
Zhe Feng, Fengfei Song, Koichi Sakaguchi, and L. Ruby Leung

different resolutions are needed. In this study, we develop a process-oriented approach to systematically evaluate the performance of climate models at mesoscale resolutions (grid spacing 10–50 km) in simulating warm-season MCS-like precipitation features and their favorable large-scale environments over the United States. Climate simulations at mesoscale resolutions are becoming more commonly available (e.g., Caldwell et al. 2019; Gutjahr et al. 2019; Roberts et al. 2019), motivating the need to

Open access
Catherine M. Naud, James F. Booth, Jeyavinoth Jeyaratnam, Leo J. Donner, Charles J. Seman, Ming Zhao, Huan Guo, and Yi Ming

. 2018b), in part because of the nonlinear nature of the model physics. Because these two issues cannot easily be untangled, the goal of this study is therefore to examine both the impact of the convection parameterization and of the tuning of cloud parameters on the model representation of cloud cover in the cold sector of extratropical cyclones. To do this, we take advantage of metrics designed to evaluate modeled cloud cover in the cold sector of extratropical cyclones (e.g., Naud et al. 2014

Full access
Suzana J. Camargo, Claudia F. Giulivi, Adam H. Sobel, Allison A. Wing, Daehyun Kim, Yumin Moon, Jeffrey D. O. Strong, Anthony D. Del Genio, Maxwell Kelley, Hiroyuki Murakami, Kevin A. Reed, Enrico Scoccimarro, Gabriel A. Vecchi, Michael F. Wehner, Colin Zarzycki, and Ming Zhao

Camargo, S. J., A. H. Sobel, A. G. Barnston, and K. A. Emanuel, 2007b: Tropical cyclone genesis potential index in climate models. Tellus, 59A, 428–443, https://doi.org/10.1111/j.1600-0870.2007.00238.x. Camargo, S. J., M. K. Tippett, A. H. Sobel, G. A. Vecchi, and M. Zhao, 2014: Testing the performance of tropical cyclone genesis indices in future climates using the HIRAM model. J. Climate, 27, 9171

Free access
Alexis Berg and Justin Sheffield

differences in their simulation of land–atmosphere fluxes, including ET (e.g., Mueller and Seneviratne 2014, and references therein). Much research has been directed over the past decade toward evaluating the representation of ET in climate models, based on global land ET products derived from observations, such as remote sensing data, upscaled in situ measurements, and/or land surface models driven by observations (e.g., Mueller et al. 2013). Perhaps less attention has been devoted, until recently

Full access
Eric D. Maloney, Andrew Gettelman, Yi Ming, J. David Neelin, Daniel Barrie, Annarita Mariotti, C.-C. Chen, Danielle R. B. Coleman, Yi-Hung Kuo, Bohar Singh, H. Annamalai, Alexis Berg, James F. Booth, Suzana J. Camargo, Aiguo Dai, Alex Gonzalez, Jan Hafner, Xianan Jiang, Xianwen Jing, Daehyun Kim, Arun Kumar, Yumin Moon, Catherine M. Naud, Adam H. Sobel, Kentaroh Suzuki, Fuchang Wang, Junhong Wang, Allison A. Wing, Xiaobiao Xu, and Ming Zhao

and complementary to other efforts such as European Earth System Model Bias Reduction and Assessing Abrupt Climate Change (EMBRACE) project/Earth System Model eValuation Tool (ESMValTool) and Coordinated Set of Model Evaluation Capabilities (CMEC) that use open-source software packages for multimodel evaluation. Because most other efforts have thus far largely emphasized basic performance metrics for models, the MDTF effort described here is complementary and advantageous to these other efforts as

Open access

Toward a Systematic Evaluation of Warm Conveyor Belts in Numerical Weather Prediction and Climate Models. Part I: Predictor Selection and Logistic Regression Model

Julian F. Quinting and Christian M. Grams

regression and the limitations of this approach. In section 4 we evaluate the performance of the models during Northern Hemisphere winter and demonstrate their applicability to an operational ECMWF ensemble forecast of a WCB event during January 2011. The study ends with concluding remarks and an outlook in section 5. 2. Data a. Predictor dataset The predictor selection as well as the development and evaluation of the logistic regression models is based on ECMWF’s interim reanalysis data (ERA

Restricted access
Motoki Nagura, J. P. McCreary, and H. Annamalai

. Since for in (2b), is the depth-integrated pressure anomaly relative to its value off Sumatra. In Fig. 6b, we used ERA-Interim winds to evaluate (2). Sverdrup (1947) derived solution (2) from depth-integrated equations, and in so doing lost all information about the vertical structure of the flow. In a model that allows for vertical structure, baroclinic adjustments (namely, the radiation of baroclinic Rossby waves across the basin) tend to trap the Sverdrup flow in the upper ocean (e

Full access