Search Results

You are looking at 1–10 of 3,735 items for:

  • Model performance/evaluation
  • Journal of Climate
  • All content
Gab Abramowitz, Ray Leuning, Martyn Clark, and Andy Pitman

-Surface Parameterization Schemes (PILPS) (Henderson-Sellers et al. 1996; Chen et al. 1997) that enough high-temporal-resolution data were available to characterize LSM biases statistically. While Chen et al. (1997) were the first to define performance that was “good enough” (using the model ensemble mean as a benchmark), no justification was offered as to why this was a satisfactory level of performance. Indeed, even more recent evaluations [e.g., the Community Land Model (CLM) (Dai et al. 2003) and Organizing

Full access
Valentina Radić and Garry K. C. Clarke

further downscaling over a region of interest. A common way to address this problem is to evaluate model output against the reference data and then prequalify the models based on their ability to simulate climate in the region or variable of interest (e.g., Dettinger 2005; Milly et al. 2005; Tebaldi et al. 2005; Wang and Overland 2009; Barnett et al. 2008). Lacking reference data for the future, climate model performance is evaluated against the present-day climate. Models that best
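The prequalification strategy this snippet describes (screening models by their skill against present-day reference data before using them for projections) can be sketched in a few lines. This is an illustrative toy example, not code from any of the cited studies; the model names, the series values, and the choice of an RMSE-based ranking are all assumptions made for demonstration:

```python
import math

def rmse(series, reference):
    """Root-mean-square error of a model series against reference data."""
    return math.sqrt(
        sum((s, r) and (s - r) ** 2 for s, r in zip(series, reference)) / len(reference)
    )

def prequalify(model_runs, reference, n_best=2):
    """Rank model runs by RMSE against present-day reference data; keep the best n."""
    ranked = sorted(model_runs, key=lambda run: rmse(run[1], reference))
    return [name for name, _ in ranked[:n_best]]

# Hypothetical present-day reference climatology and three candidate model runs.
reference = [10.0, 12.0, 14.0, 13.0]
model_runs = [
    ("model_a", [10.2, 12.1, 13.8, 13.1]),
    ("model_b", [8.0, 15.0, 11.0, 16.0]),
    ("model_c", [10.0, 12.0, 14.5, 12.5]),
]

print(prequalify(model_runs, reference))  # ['model_a', 'model_c']
```

The ranking criterion is deliberately simple; the cited studies use a variety of skill measures, and the point here is only the select-then-project workflow.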

Full access
Shoji Kusunoki and Osamu Arakawa

with regard to the geographical distribution of monthly mean precipitation. Section 5 considers model performance with respect to the seasonal march of the rainy season. Section 6 evaluates the ability of the models to reproduce extreme precipitation events. Section 7 discusses the reasons for the differences between the CMIP5 and CMIP3 models. We present conclusions in section 8.

2. Models

Table 1 shows information on the 31 CMIP5 models used in this study. The majority of these models were

Full access
Lifen Jiang, Yaner Yan, Oleksandra Hararuk, Nathaniel Mikle, Jianyang Xia, Zheng Shi, Jerry Tjiputra, Tongwen Wu, and Yiqi Luo

wide range in current permafrost areas, active layer parameters, and model ability to simulate the coupling between soil and air temperatures (Koven et al. 2013). Additionally, projected loss of permafrost extent in response to climate change also varied greatly between the models. Evaluating the models' performance and understanding the sources of uncertainties in the simulated contemporary state of the land carbon cycle are essential steps toward improving the credibility of future climate

Full access
Juan Li, Bin Wang, and Young-Min Yang

. 2016; Xing et al. 2017). Thus, it is necessary to assess the performance of models with respect to the EASM in MJ and JA separately, so as to improve the subseasonal prediction of dynamical models. The EAWM features surface air temperature variability that is dominated by a northern mode and a southern mode, which have distinct circulation structures (Wang et al. 2010). These unique features of the EAWM have barely been evaluated in climate models. Meanwhile, a set of systematic metrics that

Open access
Sara A. Rauscher, Todd D. Ringler, William C. Skamarock, and Arthur A. Mirin

not been so rigorously evaluated in idealized settings, or at a minimum that performance has not been well documented. As part of a hierarchical approach to test the veracity of global high-resolution and global variable-resolution simulations for regional modeling applications, we analyze a series of idealized, full-physics aquaplanet test cases produced using CAM version 5 coupled to the new MPAS dynamical core (CAM-MPAS). To provide context for this analysis, the new CAM-MPAS model is compared

Full access
Noel C. Baker and Patrick C. Taylor

1. Introduction

Efforts to standardize climate model experiments and collect simulation data—such as the Coupled Model Intercomparison Project (CMIP)—provide the means to intercompare and evaluate climate models. The evaluation of models using observations is a critical component of model assessment. Performance metrics—or the systematic determination of model biases—succinctly quantify aspects of climate model behavior. With the many global climate models that participate in CMIP, it is
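The "performance metrics" this snippet refers to are, at their simplest, scalar summaries of model-minus-observation differences. As a hedged illustration (not taken from the paper excerpted above), mean bias and RMSE against observations might be computed as follows; the data values are synthetic:

```python
import math

def bias(model, obs):
    """Mean model-minus-observation difference (systematic error)."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error, penalizing large point-wise departures."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Toy example: a "model" temperature field that is uniformly 0.5 K too warm.
obs = [280.0, 281.0, 282.0, 283.0]
model = [o + 0.5 for o in obs]

print(bias(model, obs))   # 0.5
print(rmse(model, obs))   # 0.5
```

For a uniform offset the two metrics coincide; they diverge once errors vary in sign or magnitude, which is why evaluation studies typically report several complementary metrics rather than one.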

Full access
John E. Walsh, William L. Chapman, Vladimir Romanovsky, Jens H. Christensen, and Martin Stendel

(Nakicenovic and Swart 2000). For the evaluation performed in this study, we use only the output from the twentieth-century simulations (20C3M) to evaluate the models' performance. The CMIP3 model output is compared here against the 40-yr European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40), which directly assimilates air temperature and SLP observations into a reanalysis product spanning 1958–2000. Precipitation is computed by the model used in the data assimilation

Full access
Yi Zhang, Haoming Chen, and Rucong Yu

favorable ambient environment for the simulation of EC stratus clouds. Therefore, it is worthwhile investigating the sensitivity of these stratus clouds in a GCM with different resolution configurations. This paper will evaluate the performance of the Community Atmosphere Model, version 5 (CAM5), in simulating EC stratus clouds and associated environmental fields under different resolutions. We will document the common strengths, limitations, and changes from low- to high-resolution experiments. We hope

Full access
Liang Chen and Oliver W. Frauenfeld

models driven by historical natural and anthropogenic forcings in CMIP3, Zhou and Yu (2006) found that the robustness of temperature estimates averaged over China is lower than that of the global and hemispheric average, and discrepancies exist between the observed and simulated spatial patterns of temperature trends. By comparing output from 24 models with observational data in China, Miao et al. (2012) evaluated the performance of the CMIP3 GCMs in simulating temperature, and found that 18

Full access