Search Results

You are viewing 1–10 of 1,426 items for:

  • Model performance/evaluation
  • Journal of Hydrometeorology
  • Refine by Access: Content accessible to me
Cécile B. Ménard, Jaakko Ikonen, Kimmo Rautiainen, Mika Aurela, Ali Nadir Arslan, and Jouni Pulliainen

; Pomeroy et al. 2007 ), Framework for Understanding Structural Errors (FUSE; Clark et al. 2008 ), and the Joint UK Land Environment Simulator (JULES) Investigation Model (JIM; Essery et al. 2013 )]. In parallel, the methods employed to evaluate model performance have also come under scrutiny; efforts to establish standardized guidelines for model evaluation have been numerous. For example, Taylor (2001) and Jolliff et al. (2009) have proposed single summary diagrams to represent multiple

Full access
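The single summary diagrams mentioned in this excerpt (e.g. Taylor 2001) plot three related statistics for each model against one reference. A minimal sketch of computing them with NumPy; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def taylor_stats(obs, mod):
    """Statistics plotted on a Taylor (2001) diagram: the two standard
    deviations, the correlation, and the centered RMS difference."""
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    sd_obs = obs.std()
    sd_mod = mod.std()
    r = np.corrcoef(obs, mod)[0, 1]
    # Centered RMS difference: anomalies about each series' own mean,
    # so mean bias is deliberately excluded from this statistic.
    crmsd = np.sqrt(np.mean(((mod - mod.mean()) - (obs - obs.mean())) ** 2))
    return sd_obs, sd_mod, r, crmsd
```

The reason one point can encode all of this is the identity crmsd² = sd_obs² + sd_mod² − 2·sd_obs·sd_mod·r, which is the law of cosines the diagram's geometry exploits.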
Graham P. Weedon, Christel Prudhomme, Sue Crooks, Richard J. Ellis, Sonja S. Folwell, and Martin J. Best

are used here to help interpret the processes involved in transforming precipitation into discharge variability. However, we evaluate average model performance by comparing two time series of the same variable—modeled discharge with observed discharge—by using amplitude ratio spectra and phase spectra ( section 6 ). Unlike Bode plots, this requires no assumptions about the system being modeled. In this case, negative phase values are plausible, indicating that model discharge variations lead

Full access
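A bare-bones frequency-domain comparison of two series along the lines this excerpt describes can be sketched with NumPy's FFT. This is a raw single-realization form, not the paper's method: real analyses average over segments or tapers, and the sign convention for phase is a choice made explicit in the comment:

```python
import numpy as np

def amplitude_phase_spectra(obs, mod, dt=1.0):
    """Raw amplitude-ratio and relative-phase spectra for two aligned series.
    With this sign choice, negative phase means the modeled series leads."""
    obs = np.asarray(obs, float) - np.mean(obs)
    mod = np.asarray(mod, float) - np.mean(mod)
    O = np.fft.rfft(obs)
    M = np.fft.rfft(mod)
    freqs = np.fft.rfftfreq(len(obs), d=dt)
    with np.errstate(divide="ignore", invalid="ignore"):
        amp_ratio = np.abs(M) / np.abs(O)   # > 1: model too variable at f
    phase = np.angle(O * np.conj(M))        # radians, in (-pi, pi]
    return freqs, amp_ratio, phase
```

As the excerpt notes, this kind of comparison needs no assumptions about the underlying system: it only compares the two series bin by bin.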
Mohammad Safeeq, Guillaume S. Mauger, Gordon E. Grant, Ivan Arismendi, Alan F. Hamlet, and Se-Yeun Lee

important to evaluate and assess whether any model inherits systematic biases, whether these are more prevalent in some landscapes than others, and whether these biases can be reduced to improve model performance. Any evaluation of bias should also address how the choice of model ( Vano et al. 2012 ), meteorological data ( Elsner et al. 2014 ), or even parameterization scheme ( Tague et al. 2013 ) affects model behavior. Here, we examine the source of a range of parametric and structural uncertainties

Full access
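A check for systematic bias like the one this excerpt describes often starts from percent bias. A minimal NumPy sketch; sign conventions vary between papers, and here positive means overestimation:

```python
import numpy as np

def percent_bias(obs, sim):
    """Percent bias: 100 * sum(sim - obs) / sum(obs).
    Positive => the model overestimates on average (conventions vary)."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

Computing the statistic separately for subsets of sites grouped by landscape class (the grouping itself is problem-specific) is one way to ask whether bias is more prevalent in some landscapes than others.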
Xiaoli Yang, Xiaohan Yu, Yuqian Wang, Xiaogang He, Ming Pan, Mengru Zhang, Yi Liu, Liliang Ren, and Justin Sheffield

full ensemble, which could be adopted by hydrological models for climate change impact assessment across China. This paper is organized as follows. Sections 2 and 3 describe the datasets used in the study as well as the statistical downscaling methods, the bias-correction method, the performance evaluation metrics, and the subensemble selection methods. Section 4 presents the main results, including evaluation of model performance, optimization of the model subensemble, and its

Free access
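The bias-correction step this excerpt refers to is commonly implemented as empirical quantile mapping; a minimal sketch under that assumption (the names are illustrative, and operational schemes add moving windows, parametric tails, and trend handling):

```python
import numpy as np

def quantile_map(model_ref, obs_ref, model_new):
    """Empirical quantile mapping: replace each model value with the observed
    value at the same quantile of the reference-period distributions."""
    qs = np.linspace(0.0, 1.0, 101)
    mq = np.quantile(model_ref, qs)   # model reference-period quantiles
    oq = np.quantile(obs_ref, qs)     # observed reference-period quantiles
    # Locate each new value among the model quantiles and read off the
    # corresponding observed quantile (linear interpolation, clipped ends).
    return np.interp(model_new, mq, oq)
```

Applied to the reference period itself, the corrected model distribution matches the observed one by construction, which is the usual sanity check on the method.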
M. J. Best, G. Abramowitz, H. R. Johnson, A. J. Pitman, G. Balsamo, A. Boone, M. Cuntz, B. Decharme, P. A. Dirmeyer, J. Dong, M. Ek, Z. Guo, V. Haverd, B. J. J. van den Hurk, G. S. Nearing, B. Pak, C. Peters-Lidard, J. A. Santanello Jr., L. Stevens, and N. Vuichard

relative errors, this type of analysis also identifies metrics for which one model performs better than another, or where errors in multiple models are systematic. This has the advantage over evaluation of giving a clear indication that performance improvements are achievable for those metrics where another model already performs better. For example, in Fig. 1b , metrics 4 and 11 apparently have the largest relative errors for both models A and B (and are hence likely to be flagged as development

Full access
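The benchmarking logic this excerpt describes (flag metrics where another model already does better, since improvement there is demonstrably achievable) can be sketched as follows; the error matrix values are hypothetical:

```python
import numpy as np

def achievable_improvements(errors, model):
    """Given a (metrics x models) array of normalized errors, return the
    metric indices where some other model has a lower error than `model`,
    i.e. where improvement is demonstrably achievable."""
    errors = np.asarray(errors, float)
    others = np.delete(errors, model, axis=1)
    return np.nonzero(others.min(axis=1) < errors[:, model])[0]

# Hypothetical normalized errors for models A and B on three metrics:
errors = np.array([[0.8, 0.3],
                   [0.4, 0.6],
                   [0.9, 0.7]])
```

For model A (column 0) this flags the first and third metrics; for model B only the second. A large shared error on some metric, by contrast, suggests a systematic problem no current model has solved.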
Baozhang Chen, Jing M. Chen, Gang Mo, Chiu-Wai Yuen, Hank Margolis, Kaz Higuchi, and Douglas Chan

carbon fluxes. The purposes of this study are threefold: (i) to test and verify the capability and accuracy of the EASS model when coupled to a GCM and applied to a large area with significant heterogeneity, such as Canada's landmass; (ii) to evaluate the effect of land-cover heterogeneity on regional energy, water, and carbon flux simulation based on remote sensing data; and (iii) to explore upscaling methodologies using satellite-derived data. In this paper, we briefly present a basic

Full access
A. M. Ukkola, A. J. Pitman, M. G. De Kauwe, G. Abramowitz, N. Herger, J. P. Evans, and M. Decker

) for all soil layers within the top 3 m and (where applicable) any layer partly within 3 m weighted by the fraction of this layer located above 3 m. b. Observations We used two observed precipitation products to evaluate model performance. These were global monthly time series products by 1) the Climatic Research Unit (CRU TS 3.23; Harris et al. 2014 ) and 2) Global Precipitation Climatology Centre (GPCC, version 7; Schneider et al. 2016 ). Both products are available at a 0.5° spatial resolution

Full access
Kristie J. Franz, Terri S. Hogue, and Soroosh Sorooshian

evaluation of a model requires three critical elements: a performance criterion, a benchmark, and an outcome. Performance criterion refers to the ability to match the desired variable being modeled; in this instance the variables of interest are simulated snow water equivalent (SWE), melt, and discharge. The benchmark is an alternative to the model being evaluated. Given forecasting as the proposed application of the SAST, our benchmark is identified as the operational National Weather Service (NWS) SNOW

Full access
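One common way to turn the three elements named in this excerpt (criterion, benchmark, outcome) into a single number is an MSE-based skill score against the benchmark; a minimal sketch with illustrative names:

```python
import numpy as np

def benchmark_skill(obs, model, benchmark):
    """MSE skill score: 1 - MSE(model) / MSE(benchmark).
    Outcome: > 0 means the model beats the benchmark; 1 is a perfect model."""
    obs, model, benchmark = (np.asarray(a, float) for a in (obs, model, benchmark))
    mse_model = np.mean((model - obs) ** 2)
    mse_bench = np.mean((benchmark - obs) ** 2)
    return 1.0 - mse_model / mse_bench
```

Here the performance criterion is squared error in the variable of interest (SWE, melt, or discharge in the excerpt), the benchmark series stands in for the alternative model (the operational NWS model in the excerpt), and the sign of the score is the outcome.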
Shaobo Sun, Baozhang Chen, Quanqin Shao, Jing Chen, Jiyuan Liu, Xue-jun Zhang, Huifang Zhang, and Xiaofeng Lin

simulations; however, the datasets were not validated against measurements, and the LSMs used were relatively early versions. The first objective of this study was to develop long-term (1979–2012), consistent ET estimates for China (0.25° × 0.25°) using multiple LSMs driven by recently developed observation-based forcing datasets. The modeled ET values were evaluated against measurements from nine flux towers at the site scale and against land-surface-water-budget-based ET estimates at the regional scale. In

Full access
Lei Meng and Steven M. Quiring

handle frozen soils or accurately simulate other important processes). All these studies have shown that model performance varies by location and that intermodel comparisons are useful for evaluating and improving model performance. However, to date, there have been no direct comparisons of the Variable Infiltration Capacity (VIC), the Decision Support System for Agrotechnology Transfer (DSSAT; Ritchie and Otter 1985 ), and the Climatic Water Budget (CWB; Thornthwaite 1948 ; Thornthwaite and

Full access