Search Results

Showing 1–10 of 4,112 items for:

  • Model performance/evaluation
  • Journal of Climate
  • Refine by Access: Content accessible to me
Gab Abramowitz, Ray Leuning, Martyn Clark, and Andy Pitman

-Surface Parameterization Schemes (PILPS) (Henderson-Sellers et al. 1996; Chen et al. 1997) that enough high temporal resolution data were available to characterize LSM biases statistically. While Chen et al. (1997) were the first to define performance that was “good enough” (using the model ensemble mean as a benchmark), no justification was offered as to why this was a satisfactory level of performance. Indeed, even more recent evaluations [e.g., the Community Land Model (CLM) (Dai et al. 2003) and Organizing

Full access
Valentina Radić and Garry K. C. Clarke

further downscaling over a region of interest. A common way to address this problem is to evaluate model output against the reference data and then prequalify the models based on their ability to simulate climate in the region or variable of interest (e.g., Dettinger 2005; Milly et al. 2005; Tebaldi et al. 2005; Wang and Overland 2009; Barnett et al. 2008). Lacking reference data for the future, the climate model performance is evaluated against the present-day climate. Models that best

Full access
Shoji Kusunoki and Osamu Arakawa

with regard to the geographical distribution of monthly mean precipitation. Section 5 considers model performance with respect to the seasonal march of the rainy season. Section 6 evaluates the ability of the models to reproduce extreme precipitation events. Section 7 discusses the reasons for the differences between the CMIP5 and CMIP3 models. We present conclusions in section 8. 2. Models Table 1 shows information on 31 CMIP5 models used in this study. The majority of these models were

Full access
Lifen Jiang, Yaner Yan, Oleksandra Hararuk, Nathaniel Mikle, Jianyang Xia, Zheng Shi, Jerry Tjiputra, Tongwen Wu, and Yiqi Luo

wide range in current permafrost areas, active layer parameters, and model ability to simulate the coupling between soil and air temperatures (Koven et al. 2013). Additionally, projected loss of permafrost extent in response to climate change also varied greatly between the models. Evaluating the models’ performance and understanding the sources of uncertainties in the simulated contemporary state of the land carbon cycle are essential steps forward to improve the credibility of future climate

Full access
Juan Li, Bin Wang, and Young-Min Yang

. 2016; Xing et al. 2017). Thus, it is necessary to assess the performance of models with respect to the EASM in MJ and JA separately, in favor of improving the subseasonal prediction of dynamical models. The EAWM features surface air temperature variability that is dominated by a northern mode and southern mode, which have distinct circulation structures (Wang et al. 2010). These unique features of the EAWM have barely been evaluated in climate models. Meanwhile a set of systematic metrics that

Open access
Sara A. Rauscher, Todd D. Ringler, William C. Skamarock, and Arthur A. Mirin

not been so rigorously evaluated in idealized settings, or at a minimum that performance has not been well documented. As part of a hierarchical approach to test the veracity of global high-resolution and global variable-resolution simulations for regional modeling applications, we analyze a series of idealized, full-physics aquaplanet test cases produced using CAM version 5 coupled to the new MPAS dynamical core (CAM-MPAS). To provide context for this analysis, the new CAM-MPAS model is compared

Full access
Kexin Song, Jiuwei Zhao, Ruifen Zhan, Li Tao, and Lin Chen

_N and the SCS, the ECMWF model performed best in simulating the TCGF among all models, while the MRI model had a higher correlation with the observed TCTD than other models. The FGOALS model showed the best performance in reproducing TCTD over the NA_T and Caribbean Sea, while the NICAM model simulated the interdecadal variability of TCGF/TCTD very well over the WNP_W. This indicates that for a given region we need to first evaluate the model performances and then choose the best model to describe

Open access
Min-Seop Ahn, Peter J. Gleckler, Jiwoo Lee, Angeline G. Pendergrass, and Christian Jakob

framework to evaluate state-of-the-art climate models. Six phases of CMIP spanning several decades include more than 100 climate models of varying complexity, with many of them including multiple realizations. The well-established framework for model evaluation and research opens the possibility for a more systematic and potentially semi-operational benchmarking of model performance (e.g., Flato et al. 2013; Gleckler et al. 2016; Eyring et al. 2019; Pendergrass et al. 2020). This has inspired

Free access
John E. Walsh, William L. Chapman, Vladimir Romanovsky, Jens H. Christensen, and Martin Stendel

(Nakicenovic and Swart 2000). For the evaluation performed in this study, we use only the output from the twentieth-century simulations (20C3M) to evaluate the models’ performance. The CMIP3 model output is compared here against the 40-yr European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40), which directly assimilates observed air temperature and SLP observations into a reanalysis product spanning 1958–2000. Precipitation is computed by the model used in the data assimilation

Full access
Rachael N. Isphording, Lisa V. Alexander, Margot Bador, Donna Green, Jason P. Evans, and Scott Wales

progress across CMIP generations. Benchmarking differs from standard model evaluation in that benchmarking requires performance expectations to be defined a priori (Abramowitz 2005, 2012). Since its publication, many additional studies have investigated the diurnal cycle (Tang et al. 2021), temporal variability (Ahn et al. 2022), and daily distributions of rainfall (Martinez-Villalobos et al. 2022) in GCMs, underpinned by the work presented in U.S. DOE (2020). International efforts to

Open access