Search Results
used to understand model consistency, accuracy, and precision. They defined model evaluation as a process in which model outputs are compared against observations, model intercomparison as a process in which multiple models are applied to specific test cases, and model benchmarking as a process in which model outputs are compared against a priori expectations of model performance. Our first purpose here is to formalize the concept of model benchmarking. A benchmark consists of three distinct
, which include many different process parameterizations, may not represent diurnal land–atmosphere interaction well. This is important because land–atmosphere feedbacks propagate to larger scales and may ultimately affect model sensitivity to global change ( Miralles et al. 2019 ). Here, we employ a diagnostic model evaluation based on diurnal signatures of surface heat fluxes to quantify the performance of LSMs specifically for diurnal heat redistribution processes and point toward parameterizations
. 2013 ; Ahn et al. 2017 ). Performance-based MJO simulation diagnostics and metrics have been developed for a consistent evaluation of the MJO fidelity in models ( Waliser et al. 2009 ). Although MJO simulation has generally improved in models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) over CMIP3 models in terms of MJO variance and eastward propagation ( Hung et al. 2013 ), models still tend to produce a weaker MJO with faster eastward propagation (e.g., Kim et al. 2014a
different resolutions are needed. In this study, we develop a process-oriented approach to systematically evaluate the performance of climate models at mesoscale resolutions (grid spacing 10–50 km) in simulating warm-season MCS-like precipitation features and their favorable large-scale environments over the United States. Climate simulations at mesoscale resolutions are becoming more commonly available (e.g., Caldwell et al. 2019 ; Gutjahr et al. 2019 ; Roberts et al. 2019 ), motivating the need to
. 2018b ), in part because of the nonlinear nature of the model physics. Because these two issues cannot easily be untangled, the goal of this study is therefore to examine both the impact of the convection parameterization and of the tuning of cloud parameters on the model representation of cloud cover in the cold sector of extratropical cyclones. To do this, we take advantage of metrics designed to evaluate modeled cloud cover in the cold sector of extratropical cyclones (e.g., Naud et al. 2014
differences in their simulation of land–atmosphere fluxes, including ET (e.g., Mueller and Seneviratne 2014 , and references therein). Much research has been directed over the past decade toward evaluating the representation of ET in climate models, based on global land ET products derived from observations, such as remote sensing data, upscaled in situ measurements, and/or land surface models driven by observations (e.g., Mueller et al. 2013 ). Perhaps less attention has been devoted, until recently
and complementary to other efforts such as European Earth System Model Bias Reduction and Assessing Abrupt Climate Change (EMBRACE) project/Earth System Model eValuation Tool (ESMValTool) and Coordinated Set of Model Evaluation Capabilities (CMEC) that use open-source software packages for multimodel evaluation. Because most other efforts have thus far largely emphasized basic performance metrics for models, the MDTF effort described here is complementary and advantageous to these other efforts as
.1175/JCLI4282.1 Camargo , S. J. , A. H. Sobel , A. G. Barnston , and K. A. Emanuel , 2007b : Tropical cyclone genesis potential index in climate models . Tellus , 59A , 428 – 443 , https://doi.org/10.1111/j.1600-0870.2007.00238.x . 10.1111/j.1600-0870.2007.00238.x Camargo , S. J. , M. K. Tippett , A. H. Sobel , G. A. Vecchi , and M. Zhao , 2014 : Testing the performance of tropical cyclone genesis indices in future climates using the HIRAM model . J. Climate , 27 , 9171
regression and the limitations of this approach. In section 4 we evaluate the performance of the models during Northern Hemisphere winter and demonstrate their applicability to an operational ECMWF ensemble forecast of a WCB event during January 2011. The study ends with concluding remarks and an outlook in section 5 . 2. Data a. Predictor dataset The predictor selection as well as the development and evaluation of the logistic regression models is based on ECMWF’s interim reanalysis data (ERA
. Since for in (2b) , is the depth-integrated pressure anomaly relative to its value off Sumatra. In Fig. 6b , we used ERA-Interim winds to evaluate (2). Sverdrup (1947) derived solution (2) from depth-integrated equations, and in so doing lost all information about the vertical structure of the flow. In a model that allows for vertical structure, baroclinic adjustments (namely, the radiation of baroclinic Rossby waves across the basin) tend to trap the Sverdrup flow in the upper ocean (e