Search Results

You are looking at 1–10 of 17,578 items for:

  • Model evaluation/performance
  • All content
Gab Abramowitz, Ray Leuning, Martyn Clark, and Andy Pitman

… Land-Surface Parameterization Schemes (PILPS) (Henderson-Sellers et al. 1996; Chen et al. 1997) that enough high temporal resolution data were available to characterize LSM biases statistically. While Chen et al. (1997) were the first to define performance that was “good enough” (using the model ensemble mean as a benchmark), no justification was offered as to why this was a satisfactory level of performance. Indeed, even more recent evaluations [e.g., the Community Land Model (CLM) (Dai et al. 2003) and Organizing …

Full access
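The Abramowitz et al. excerpt notes that Chen et al. (1997) treated the model ensemble mean as the benchmark for “good enough” performance. As a rough illustration of that idea only (not the authors' code), the sketch below compares each model's RMSE against observations with the RMSE of the ensemble mean; the function and variable names are hypothetical.

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def benchmark_against_ensemble_mean(model_runs, obs):
    """Compare each model's RMSE with the RMSE of the multi-model ensemble mean.

    model_runs: dict mapping model name -> 1D array of simulated values
    obs:        1D array of observed values of the same length
    """
    ensemble_mean = np.mean(np.stack(list(model_runs.values())), axis=0)
    benchmark_rmse = rmse(ensemble_mean, obs)
    report = {}
    for name, run in model_runs.items():
        model_rmse = rmse(run, obs)
        # A model "beats the benchmark" if it is closer to obs than the ensemble mean.
        report[name] = {"rmse": model_rmse,
                        "beats_ensemble_mean": model_rmse < benchmark_rmse}
    return benchmark_rmse, report
```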
Valentina Radić and Garry K. C. Clarke

… further downscaling over a region of interest. A common way to address this problem is to evaluate model output against the reference data and then prequalify the models based on their ability to simulate climate in the region or variable of interest (e.g., Dettinger 2005; Milly et al. 2005; Tebaldi et al. 2005; Wang and Overland 2009; Barnett et al. 2008). Lacking reference data for the future, climate model performance is evaluated against the present-day climate. Models that best …

Full access
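The Radić and Clarke excerpt describes prequalifying climate models by their skill at reproducing present-day climate before using them for downscaling. A minimal sketch of that selection step, assuming RMSE against a present-day reference series as the (hypothetical) skill score:

```python
import numpy as np

def prequalify_models(simulations, reference, n_keep=5):
    """Rank climate models by RMSE against present-day reference data; keep the best.

    simulations: dict of model name -> array of simulated present-day values
    reference:   array of reference (observed/reanalysis) values on the same grid/times
    n_keep:      number of top-ranked models to retain
    """
    scores = {
        name: float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(reference)) ** 2)))
        for name, sim in simulations.items()
    }
    ranked = sorted(scores, key=scores.get)  # smallest RMSE first
    return ranked[:n_keep], scores
```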
Cécile B. Ménard, Jaakko Ikonen, Kimmo Rautiainen, Mika Aurela, Ali Nadir Arslan, and Jouni Pulliainen

… Pomeroy et al. 2007), Framework for Understanding Structural Errors (FUSE; Clark et al. 2008), and the Joint UK Land Environment Simulator (JULES) Investigation Model (JIM; Essery et al. 2013)]. In parallel, the methods employed to evaluate model performance have also come under scrutiny; efforts to establish standardized guidelines for model evaluation have been numerous. For example, Taylor (2001) and Jolliff et al. (2009) have proposed single summary diagrams to represent multiple …

Full access
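The Ménard et al. excerpt cites the single summary diagrams of Taylor (2001) and Jolliff et al. (2009). The statistics plotted on a Taylor diagram can be computed as below; this is a generic sketch, not code from the paper.

```python
import numpy as np

def taylor_statistics(model, obs):
    """Statistics summarized on a Taylor (2001) diagram.

    Returns the correlation coefficient, the ratio of standard deviations
    (model/obs), and the centered (bias-removed) RMS difference.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    corr = float(np.corrcoef(model, obs)[0, 1])
    std_ratio = float(np.std(model) / np.std(obs))
    centered_rms = float(np.sqrt(np.mean(
        ((model - model.mean()) - (obs - obs.mean())) ** 2)))
    return {"correlation": corr, "std_ratio": std_ratio, "centered_rmse": centered_rms}
```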
Jawad S. Touma, William M. Cox, Harold Thistle, and James G. Zapert

Performance Evaluation of Dense Gas Dispersion Models. Jawad S. Touma, Atmospheric Sciences Modeling Division, Air Resources Laboratory, National Oceanic and Atmospheric Administration, Research Triangle Park, North Carolina; William M. Cox, Office of Air Quality Planning and Standards, U.S. Environmental …

Full access
Noel C. Baker and Patrick C. Taylor

1. Introduction. Efforts to standardize climate model experiments and collect simulation data—such as the Coupled Model Intercomparison Project (CMIP)—provide the means to intercompare and evaluate climate models. The evaluation of models using observations is a critical component of model assessment. Performance metrics—or the systematic determination of model biases—succinctly quantify aspects of climate model behavior. With the many global climate models that participate in CMIP, it is …

Full access
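The Baker and Taylor excerpt defines performance metrics as the systematic determination of model biases. One simple example of such a metric, given here only as an illustrative sketch with hypothetical inputs, is an area-weighted mean bias of a model field relative to observations:

```python
import numpy as np

def mean_bias(model_field, obs_field, lat):
    """Area-weighted mean bias of a model field relative to observations.

    model_field, obs_field: 2D arrays shaped (lat, lon)
    lat: 1D array of latitudes in degrees, used for cos(lat) area weighting
    """
    model_field = np.asarray(model_field, dtype=float)
    obs_field = np.asarray(obs_field, dtype=float)
    weights = np.cos(np.deg2rad(np.asarray(lat, dtype=float)))[:, None]
    weights = np.broadcast_to(weights, model_field.shape)
    diff = model_field - obs_field
    return float(np.sum(diff * weights) / np.sum(weights))
```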
Graham P. Weedon, Christel Prudhomme, Sue Crooks, Richard J. Ellis, Sonja S. Folwell, and Martin J. Best

… are used here to help interpret the processes involved in transforming precipitation into discharge variability. However, we evaluate average model performance by comparing two time series of the same variable—modeled discharge with observed discharge—by using amplitude ratio spectra and phase spectra (section 6). Unlike Bode plots, this requires no assumptions about the system being modeled. In this case, negative phase values are plausible, indicating that model discharge variations lead …

Full access
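The Weedon et al. excerpt evaluates model performance by comparing modeled and observed discharge through amplitude ratio spectra and phase spectra. A bare-bones FFT-based sketch of that kind of comparison is given below; it is a simplification (no windowing, tapering, or confidence intervals) and not the authors' procedure, and the sign convention for "leads"/"lags" is a choice.

```python
import numpy as np

def amplitude_ratio_and_phase(modeled, observed, dt=1.0):
    """Cross-spectral comparison of two time series of the same variable.

    Returns frequencies, the amplitude ratio |FFT(modeled)|/|FFT(observed)|,
    and the phase difference (radians) of modeled relative to observed at
    each frequency. dt is the sampling interval.
    """
    modeled = np.asarray(modeled, dtype=float) - np.mean(modeled)
    observed = np.asarray(observed, dtype=float) - np.mean(observed)
    freqs = np.fft.rfftfreq(len(observed), d=dt)
    fm = np.fft.rfft(modeled)
    fo = np.fft.rfft(observed)
    # Skip the zero frequency: the means have already been removed.
    amplitude_ratio = np.abs(fm[1:]) / np.abs(fo[1:])
    phase_difference = np.angle(fm[1:] * np.conj(fo[1:]))
    return freqs[1:], amplitude_ratio, phase_difference
```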
Huaqing Cai and Robert E. Dumais Jr.

… compiles and compares single-object attribute statistics from both forecasts and observations without matching each individual forecast object with its corresponding observed object; the former needs to match the forecast objects with observed objects first, then calculate performance statistics such as the percentage of forecast objects that matched with observed objects. Davis et al. (2006a,b) showed that both performance metrics are useful for evaluating NWP model storm forecast performance …

Full access
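The Cai and Dumais excerpt contrasts object-based verification approaches, one of which matches forecast objects to observed objects and reports the percentage matched. The sketch below uses a simple centroid-distance criterion as the (hypothetical) matching rule; the fuzzy-logic matching used in MODE-style verification (Davis et al. 2006a,b) is considerably more elaborate.

```python
import numpy as np

def matched_object_fraction(forecast_centroids, observed_centroids, max_distance):
    """Fraction of forecast objects matched to at least one observed object.

    forecast_centroids, observed_centroids: lists of (x, y) object centroids
    max_distance: maximum centroid separation for two objects to count as a match
    """
    if not forecast_centroids:
        return 0.0
    observed = np.asarray(observed_centroids, dtype=float)
    matched = 0
    for fx, fy in forecast_centroids:
        # Match if any observed centroid lies within max_distance of this forecast object.
        if observed.size and np.min(np.hypot(observed[:, 0] - fx,
                                             observed[:, 1] - fy)) <= max_distance:
            matched += 1
    return matched / len(forecast_centroids)
```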
Lifen Jiang, Yaner Yan, Oleksandra Hararuk, Nathaniel Mikle, Jianyang Xia, Zheng Shi, Jerry Tjiputra, Tongwen Wu, and Yiqi Luo

… wide range in current permafrost areas, active layer parameters, and model ability to simulate the coupling between soil and air temperatures (Koven et al. 2013). Additionally, projected loss of permafrost extent in response to climate change also varied greatly between the models. Evaluating the models' performance and understanding the sources of uncertainties in the simulated contemporary state of the land carbon cycle are essential steps toward improving the credibility of future climate …

Full access
Shoji Kusunoki and Osamu Arakawa

… with regard to the geographical distribution of monthly mean precipitation. Section 5 considers model performance with respect to the seasonal march of the rainy season. Section 6 evaluates the ability of the models to reproduce extreme precipitation events. Section 7 discusses the reasons for the differences between the CMIP5 and CMIP3 models. We present conclusions in section 8. 2. Models. Table 1 shows information on 31 CMIP5 models used in this study. The majority of these models were …

Full access
Jason C. Knievel, David A. Ahijevych, and Kevin W. Manning

… between the 10- and 4-km simulations was not possible, the latter did seem to produce a more realistic phase in the diurnal mode, perhaps partly because no cumulus parameterization was used. In this article we focused on the WRF model because of its growing prominence in the operational and research communities. However, our underlying point is more general: patterns such as the modes of rainfall frequency we examined are an underused standard for evaluating the performance of numerical weather …

Full access