Search Results

You are looking at 1–10 of 17 items for:

  • Model performance/evaluation
  • All content
Brian A. Colle, Zhenhai Zhang, Kelly A. Lombardo, Edmund Chang, Ping Liu, and Minghua Zhang

, but there are likely major differences between the models attributed to spatial resolution and physics. We evaluate the models separately, rank them, and use the selected members based on model performance during the historical period to determine if this approach has an impact on the future cyclone results. Finally, some physical reasons for the model difference in cyclone frequency and intensity have been explored in the western Atlantic storm-track region. In summary, this paper will address

Full access
Jeanne M. Thibeault and Anji Seth

). Model selection was based on the availability of 6-hourly data from the CMIP5 archive at the time of writing, which are required to compute vertically integrated moisture transport. This research also takes a first look at future projections in this subset of models, highlighting processes that lead to model disagreement about the direction of change in northeast-region warm-season precipitation, which may be particularly important for evaluating model credibility. Fig. 1. Mean JJA precipitation

Full access
Baird Langenbrunner and J. David Neelin

ensemble mean (MMEM), and measures of sign agreement. In these alternative measures, the CMIP5 model ensemble does unexpectedly well compared to observations. The performance on sign agreement measures is decent enough to motivate questions regarding the optimal way to apply significance tests within multimodel ensembles. We provide some explanation in the discussion section, noting that even though a full answer may not yet exist, such alternative measures are relevant to the evaluation of

Full access
Kerrie L. Geil, Yolande L. Serra, and Xubin Zeng

). Global and limited-area model simulations have been conducted in the past to evaluate the representation of the NAMS and the results show a wide range of model ability. Arritt et al. (2000) demonstrated that the Met Office (UKMO) HadCM2 global model could simulate generally realistic NAMS circulation and precipitation, whereas Yang et al. (2001) showed that the National Center for Atmospheric Research (NCAR) CCM3 global model was unable to simulate these NAMS features. Liang et al. (2008) found

Full access
Lin Chen, Yongqiang Yu, and De-Zheng Sun

of the biases a. Relationship of biases among different feedbacks For further evaluating model performance and revealing the origin of biases analyzed above, we have compared the skill scores of the feedbacks of SWCRF, LWCRF, precipitation, and vertical velocity at 500 hPa (ω500) (Fig. 3). As is shown in Figs. 3a–c, there is a positive correlation between the intermodel variations in the simulated SWCRF feedback and the intermodel variations in other simulated feedbacks: the feedback of

Full access
Meng-Pai Hung, Jia-Lin Lin, Wanqiu Wang, Daehyun Kim, Toshiaki Shinoda, and Scott J. Weaver

in the CMIP3 version. Some of the CMIP5 models have also evolved from “climate system models” to “Earth system models” and include biogeochemical components and time-varying carbon fluxes among the ocean, atmosphere, and terrestrial biosphere. The evaluation of the performance of the CMIP5 models in terms of the tropical intraseasonal variability and the comparison of the simulation fidelity against that of the former generation of models is the goal of this study. The evaluation of the tropical

Full access
Zaitao Pan, Xiaodong Liu, Sanjiv Kumar, Zhiqiu Gao, and James Kinter

determining the WH pattern, so we also compared its relative role in determining the temporal variations of WH temperature. Figure 9 contrasts the best and worst model performance measured by r. The shaded area represents the spread (maximum minus minimum) of the top 25 best-r members and the eight lines are the best eight members. [We chose eight best (worst) members mainly for plot clarity.] The members captured the temporal variation of the WH Tmax very well, especially during the latter half of

Full access
Justin Sheffield, Andrew P. Barrett, Brian Colle, D. Nelun Fernando, Rong Fu, Kerrie L. Geil, Qi Hu, Jim Kinter, Sanjiv Kumar, Baird Langenbrunner, Kelly Lombardo, Lindsey N. Long, Eric Maloney, Annarita Mariotti, Joyce E. Meyerson, Kingtse C. Mo, J. David Neelin, Sumant Nigam, Zaitao Pan, Tong Ren, Alfredo Ruiz-Barradas, Yolande L. Serra, Anji Seth, Jeanne M. Thibeault, Julienne C. Stroeve, Ze Yang, and Lei Yin

processing constraints, and so for some analyses (in particular those requiring high-temporal-resolution data) a smaller subset of the core models was analyzed. When data for noncore models were available, these were also evaluated for some analyses and the results are highlighted if they showed better (or particularly poor) performance. The specific models used for each individual analysis are provided within the results section where appropriate. b. Overview of methods Data from the historical CMIP5

Full access
Justin Sheffield, Suzana J. Camargo, Rong Fu, Qi Hu, Xianan Jiang, Nathaniel Johnson, Kristopher B. Karnauskas, Seon Tae Kim, Jim Kinter, Sanjiv Kumar, Baird Langenbrunner, Eric Maloney, Annarita Mariotti, Joyce E. Meyerson, J. David Neelin, Sumant Nigam, Zaitao Pan, Alfredo Ruiz-Barradas, Richard Seager, Yolande L. Serra, De-Zheng Sun, Chunzai Wang, Shang-Ping Xie, Jin-Yi Yu, Tao Zhang, and Ming Zhao

1. Introduction This is the second part of a three-part paper on phase 5 of the Coupled Model Intercomparison Project (CMIP5; Taylor et al. 2012) model simulations for North America. This second part evaluates the CMIP5 models in their ability to replicate the observed variability of North American continental and regional climate, and related climate processes. Sheffield et al. (2013, hereafter Part I) evaluate the representation of the climatology of continental and regional climate

Full access
Sanjiv Kumar, James Kinter III, Paul A. Dirmeyer, Zaitao Pan, and Jennifer Adams

i.e., r 2 ) evaluate models' performances relative to the observations (see related figures in the supplemental material). Sl. No. is as given in Table 1. We have summarized individual model results for the warming hole simulation skill in Table 3. For models having more than one ensemble member, we calculated model results for each individual ensemble member and then averaged across different ensemble members in the given model. The relative power in the second harmonic ranges from 1.9 (INMCM4
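The excerpt above describes computing a skill score for each ensemble member and then averaging within each model before tabulating results. A minimal sketch of that averaging step, assuming per-member correlation values are already in hand; the model names and member values here are illustrative placeholders, not data from the paper:

```python
# Sketch of the per-model ensemble averaging described in the excerpt:
# a skill score (here squared correlation, r^2) is computed for each
# ensemble member, then averaged across members within each model.
def model_skill(member_r: dict[str, list[float]]) -> dict[str, float]:
    """Average r^2 across ensemble members for each model."""
    return {
        model: sum(r * r for r in rs) / len(rs)
        for model, rs in member_r.items()
    }

# Illustrative placeholder inputs (not paper data):
member_r = {
    "MODEL-A": [0.8, 0.7, 0.75],  # three ensemble members
    "MODEL-B": [0.6],             # single member
}
skills = model_skill(member_r)
```

Ranking the resulting dictionary by value would reproduce the kind of best/worst model ordering the excerpt refers to.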

Full access