Search Results

You are looking at 1–10 of 19,281 items for:

  • Model performance/evaluation
  • Refine by Access: All Content
Gab Abramowitz, Ray Leuning, Martyn Clark, and Andy Pitman

…Land-Surface Parameterization Schemes (PILPS) (Henderson-Sellers et al. 1996; Chen et al. 1997) that enough high-temporal-resolution data were available to characterize LSM biases statistically. While Chen et al. (1997) were the first to define performance that was “good enough” (using the model ensemble mean as a benchmark), no justification was offered as to why this was a satisfactory level of performance. Indeed, even more recent evaluations [e.g., the Community Land Model (CLM) (Dai et al. 2003) and Organizing…

Full access
Valentina Radić and Garry K. C. Clarke

…further downscaling over a region of interest. A common way to address this problem is to evaluate model output against reference data and then prequalify the models based on their ability to simulate climate in the region or variable of interest (e.g., Dettinger 2005; Milly et al. 2005; Tebaldi et al. 2005; Wang and Overland 2009; Barnett et al. 2008). Lacking reference data for the future, climate model performance is evaluated against the present-day climate. Models that best…

Full access
Cécile B. Ménard, Jaakko Ikonen, Kimmo Rautiainen, Mika Aurela, Ali Nadir Arslan, and Jouni Pulliainen

…; Pomeroy et al. 2007), Framework for Understanding Structural Errors (FUSE; Clark et al. 2008), and the Joint UK Land Environment Simulator (JULES) Investigation Model (JIM; Essery et al. 2013)]. In parallel, the methods employed to evaluate model performance have also come under scrutiny; efforts to establish standardized guidelines for model evaluation have been numerous. For example, Taylor (2001) and Jolliff et al. (2009) have proposed single summary diagrams to represent multiple…
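The single summary diagram of Taylor (2001) condenses a model–observation comparison into three linked statistics. A minimal sketch of those statistics (the function name `taylor_stats` is illustrative, not from any of the papers above):

```python
import numpy as np

def taylor_stats(model, obs):
    """Statistics underlying a Taylor (2001) diagram: standard
    deviations, correlation, and centered RMS difference."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    sigma_m = model.std()   # population std (ddof=0)
    sigma_o = obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    # Centered (bias-removed) RMS difference between the two series
    crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return {"sigma_model": sigma_m, "sigma_obs": sigma_o,
            "corr": r, "crmsd": crmsd}
```

The diagram fits all three quantities on one polar plot because they obey the law of cosines: crmsd² = σ_m² + σ_o² − 2·σ_m·σ_o·r.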

Full access
Lifen Jiang, Yaner Yan, Oleksandra Hararuk, Nathaniel Mikle, Jianyang Xia, Zheng Shi, Jerry Tjiputra, Tongwen Wu, and Yiqi Luo

…wide range in current permafrost areas, active layer parameters, and model ability to simulate the coupling between soil and air temperatures (Koven et al. 2013). Additionally, projected loss of permafrost extent in response to climate change also varied greatly between the models. Evaluating the models' performance and understanding the sources of uncertainties in the simulated contemporary state of the land carbon cycle are essential steps toward improving the credibility of future climate…

Full access
Shoji Kusunoki and Osamu Arakawa

…with regard to the geographical distribution of monthly mean precipitation. Section 5 considers model performance with respect to the seasonal march of the rainy season. Section 6 evaluates the ability of the models to reproduce extreme precipitation events. Section 7 discusses the reasons for the differences between the CMIP5 and CMIP3 models. We present conclusions in section 8. 2. Models Table 1 shows information on 31 CMIP5 models used in this study. The majority of these models were…

Full access
Prem Woli and Joel O. Paz

…). In accord with this emphasis, several agricultural research activities are carried out in this region and need data on Rg. Although a large number of models exist that can estimate Rg from commonly available meteorological variables, researchers have used only a limited number of methods for generating Rg or have explored the performance of only a few methods for the southeastern United States. For instance, Thornton and Running (1999) evaluated the reformulated Bristow and Campbell…
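The Bristow and Campbell (1984) approach estimates global solar radiation Rg from the diurnal temperature range via an empirical transmittance curve. A sketch of the basic form, assuming the common three-coefficient parameterization (the coefficient values below are placeholders; in practice they are fitted per site):

```python
import math

def bristow_campbell(ra, delta_t, a=0.7, b=0.005, c=2.4):
    """Estimate global solar radiation Rg (same units as ra) from the
    diurnal temperature range, after Bristow and Campbell (1984).
    ra:      extraterrestrial radiation (e.g., MJ m-2 day-1)
    delta_t: diurnal temperature range, Tmax - Tmin (deg C)
    a, b, c: empirical site-specific coefficients (placeholder values)
    """
    # Transmittance rises with the diurnal range and saturates toward 'a'
    tau = a * (1.0 - math.exp(-b * delta_t ** c))
    return tau * ra
```

A larger day–night temperature swing implies clearer skies, so the estimate increases monotonically with `delta_t` and is bounded above by `a * ra`.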

Full access
Graham P. Weedon, Christel Prudhomme, Sue Crooks, Richard J. Ellis, Sonja S. Folwell, and Martin J. Best

…are used here to help interpret the processes involved in transforming precipitation into discharge variability. However, we evaluate average model performance by comparing two time series of the same variable—modeled discharge with observed discharge—by using amplitude ratio spectra and phase spectra (section 6). Unlike Bode plots, this requires no assumptions about the system being modeled. In this case, negative phase values are plausible, indicating that model discharge variations lead…
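Comparing two time series of the same variable frequency by frequency can be sketched with a raw cross-spectrum: the amplitude ratio shows whether the model over- or under-represents variability at each frequency, and the phase shows lead/lag. This is an illustrative sketch only; the paper's actual estimator (smoothing, tapering, sign conventions) may differ:

```python
import numpy as np

def spectral_comparison(model, obs):
    """Amplitude-ratio and phase spectra between two equal-length series.
    Returns (freqs, amp_ratio, phase); phase is that of the model
    relative to the observations at each frequency."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    m = np.fft.rfft(model - model.mean())
    o = np.fft.rfft(obs - obs.mean())
    freqs = np.fft.rfftfreq(len(model))
    # Ratio of spectral amplitudes; guard against division by zero
    amp_ratio = np.abs(m) / np.maximum(np.abs(o), 1e-12)
    # Phase of the cross-spectrum m * conj(o)
    phase = np.angle(m * np.conj(o))
    return freqs, amp_ratio, phase
```

At a frequency where the model reproduces the observed amplitude, the ratio is 1; a nonzero phase there quantifies the timing offset in radians.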

Full access
Juan Li, Bin Wang, and Young-Min Yang

…2016; Xing et al. 2017). Thus, it is necessary to assess the performance of models with respect to the EASM in MJ and JA separately, which would help improve the subseasonal prediction of dynamical models. The EAWM features surface air temperature variability that is dominated by a northern mode and a southern mode, which have distinct circulation structures (Wang et al. 2010). These unique features of the EAWM have barely been evaluated in climate models. Meanwhile, a set of systematic metrics that…

Open access
R. Padilla-Hernández, W. Perrie, B. Toulany, and P. C. Smith

…observations. They used a model spatial resolution of 1/12° × 1/12° in order to capture the rapidly varying wave field generated by Hurricane Bonnie, which is much finer than is typically implemented in operational forecasting. For example, the fine-resolution wave model grid for the Gulf of Maine Ocean Observing System (GoMOOS; information online at ) is 0.2° × 0.2°. In this study, three modern widely used third-generation spectral wave models are evaluated: (a) the Simulating Waves…

Full access
Cecile B. Menard, Richard Essery, Gerhard Krinner, Gabriele Arduini, Paul Bartlett, Aaron Boone, Claire Brutel-Vuilmet, Eleanor Burke, Matthias Cuntz, Yongjiu Dai, Bertrand Decharme, Emanuel Dutra, Xing Fang, Charles Fierz, Yeugeniy Gusev, Stefan Hagemann, Vanessa Haverd, Hyungjun Kim, Matthieu Lafaysse, Thomas Marke, Olga Nasonova, Tomoko Nitta, Masashi Niwano, John Pomeroy, Gerd Schädler, Vladimir A. Semenov, Tatiana Smirnova, Ulrich Strasser, Sean Swenson, Dmitry Turkov, Nander Wever, and Hua Yuan

…previous MIPs: albedo is still a major source of uncertainty, surface exchange parameterizations are still problematic, and individual model performance is inconsistent. In fact, models are less classifiable with results from more sites, years, and evaluation variables. Our initial, or false, hypothesis had to be killed off. Developments have been made, particularly in terms of the complexity of snow process representations, and conclusions from PILPS2(d) and snow MIPs have undoubtedly driven model…

Full access