Search Results

You are looking at 1–10 of 1,770 items for:

  • Model performance/evaluation
  • Journal of Hydrometeorology
  • Refine by Access: All Content
Cécile B. Ménard, Jaakko Ikonen, Kimmo Rautiainen, Mika Aurela, Ali Nadir Arslan, and Jouni Pulliainen

; Pomeroy et al. 2007), Framework for Understanding Structural Errors (FUSE; Clark et al. 2008), and the Joint UK Land Environment Simulator (JULES) Investigation Model (JIM; Essery et al. 2013)]. In parallel, the methods employed to evaluate model performance have also come under scrutiny; efforts to establish standardized guidelines for model evaluation have been numerous. For example, Taylor (2001) and Jolliff et al. (2009) have proposed single summary diagrams to represent multiple
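The Taylor (2001) diagram referenced in this excerpt condenses several statistics into a single plot. A minimal sketch of the quantities it summarizes (standard deviations, correlation, and centered RMS difference), computed here for synthetic data rather than any dataset from the article:

```python
import numpy as np

def taylor_stats(obs, mod):
    """Statistics summarized on a Taylor (2001) diagram: standard deviations,
    correlation, and centered (pattern) RMS difference."""
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    obs_a = obs - obs.mean()
    mod_a = mod - mod.mean()
    corr = np.corrcoef(obs, mod)[0, 1]
    # Centered RMS difference; related to the other quantities by
    # E'^2 = sigma_obs^2 + sigma_mod^2 - 2*sigma_obs*sigma_mod*corr
    crmsd = np.sqrt(np.mean((mod_a - obs_a) ** 2))
    return {"sigma_obs": obs_a.std(), "sigma_mod": mod_a.std(),
            "corr": corr, "crmsd": crmsd}

# Synthetic example (hypothetical "observed" and "modeled" series).
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=365)
mod = 1.1 * obs + rng.normal(0.0, 2.0, size=365)
print(taylor_stats(obs, mod))
```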

Full access
Mohammad Safeeq, Guillaume S. Mauger, Gordon E. Grant, Ivan Arismendi, Alan F. Hamlet, and Se-Yeun Lee

important to evaluate and assess whether any model inherits systematic biases, whether these are more prevalent in some landscapes than others, and whether these biases can be reduced to improve model performance. Any evaluation of bias should also address how the choice of model (Vano et al. 2012), meteorological data (Elsner et al. 2014), or even parameterization scheme (Tague et al. 2013) affects model behavior. Here, we examine the source of a range of parametric and structural uncertainties

Full access
Bong-Chul Seo, Witold F. Krajewski, Felipe Quintero, Steve Buan, and Brian Connelly

that the USGS rating curves are accurate and did not consider their uncertainty in our analysis. 3. Methodology In this section, we briefly describe the concept of the mix-and-match approach using multiple precipitation forcing products and hydrologic models. We discuss a modeling framework implemented for a fair comparison/evaluation of streamflow predictions generated from different modeling elements. This section also outlines tactics to assess the accuracy/performance of precipitation forcing
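As an illustration of the mix-and-match idea described in this excerpt, the sketch below scores every combination of precipitation forcing product and hydrologic model with a common metric. The forcing and model names, the run_model placeholder, and the use of Nash–Sutcliffe efficiency are assumptions made for illustration, not details taken from the article:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs observed streamflow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def run_model(model_name, forcing_name, forcing_series):
    """Placeholder standing in for a hydrologic model run with one forcing."""
    rng = np.random.default_rng(abs(hash((model_name, forcing_name))) % 2**32)
    return 0.9 * forcing_series + rng.normal(0.0, 1.0, size=forcing_series.size)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 10.0, size=365)                  # synthetic "observed" discharge
forcings = {f"forcing_{k}": obs + rng.normal(0.0, s, size=obs.size)
            for k, s in zip("ABC", (0.5, 1.0, 2.0))}  # hypothetical products
models = ["model_1", "model_2"]                       # hypothetical models

# Mix and match: every forcing-model combination, scored with the same metric.
scores = {(m, f): nse(obs, run_model(m, f, series))
          for m in models for f, series in forcings.items()}
for (m, f), s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{m} + {f}: NSE = {s:.3f}")
```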

Full access
Xiaoli Yang, Xiaohan Yu, Yuqian Wang, Xiaogang He, Ming Pan, Mengru Zhang, Yi Liu, Liliang Ren, and Justin Sheffield

full ensemble, which could be adopted by hydrological models for assessment of climate change effects across China. This paper is organized as follows. Sections 2 and 3 describe the datasets used in the study as well as the statistical downscaling methods, the bias-correction method, the performance evaluation metrics, and the subensemble selection methods. Section 4 presents the main results, including evaluation of the model performance, optimization of the model subensemble, and its
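The excerpt names a bias-correction method without specifying it here. A minimal sketch of one common option, empirical quantile mapping, applied to synthetic data rather than the study's ensemble:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each model value to the observed value
    with the same non-exceedance probability in the calibration period."""
    model_hist = np.sort(np.asarray(model_hist, float))
    obs_hist = np.sort(np.asarray(obs_hist, float))
    probs = np.linspace(0.0, 1.0, model_hist.size)
    # Probability of each future value under the historical model CDF,
    # then the observed quantile at that probability.
    p_future = np.interp(model_future, model_hist, probs)
    return np.interp(p_future, probs, obs_hist)

rng = np.random.default_rng(2)
obs_hist = rng.gamma(2.0, 3.0, size=1000)            # synthetic "observations"
model_hist = 0.8 * obs_hist + 1.0                    # biased model, same period
model_future = 0.8 * rng.gamma(2.0, 3.0, size=500) + 1.2
corrected = quantile_map(model_hist, obs_hist, model_future)
print(obs_hist.mean(), model_future.mean(), corrected.mean())
```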

Free access
Hui Guo, Ying Hou, Yuting Yang, and Tim R. McVicar

simulation (e.g., Müller Schmied et al. 2016; Mockler et al. 2016). As a result, although previous evaluation studies have gained valuable understanding of the performance and uncertainty in these modeled global runoff datasets, their limitations necessitate more comprehensive evaluations of simulated extreme flows from more models, under different environmental conditions and against higher-quality streamflow observations. Table 1. Overview of global-scale studies evaluating hydrological

Restricted access
M. J. Best, G. Abramowitz, H. R. Johnson, A. J. Pitman, G. Balsamo, A. Boone, M. Cuntz, B. Decharme, P. A. Dirmeyer, J. Dong, M. Ek, Z. Guo, V. Haverd, B. J. J. van den Hurk, G. S. Nearing, B. Pak, C. Peters-Lidard, J. A. Santanello Jr., L. Stevens, and N. Vuichard

relative errors, this type of analysis also identifies metrics for which one model performs better than another, or where errors in multiple models are systematic. This has the advantage over evaluation of giving a clear indication that performance improvements are achievable for those metrics where another model already performs better. For example, in Fig. 1b, metrics 4 and 11 apparently have the largest relative errors for both models A and B (and are hence likely to be flagged as development
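A minimal sketch of the kind of cross-model comparison described in this excerpt: for each metric, identify which model performs better and how far the other model is from it. The error values and this particular definition of relative error are made up for illustration and are not taken from the article or its Fig. 1:

```python
import numpy as np

# Rows: metrics 1..4, columns: models A and B (hypothetical error values).
errors = np.array([
    [0.40, 0.35],
    [0.10, 0.30],
    [0.55, 0.50],
    [0.20, 0.05],
])

# Error of each model relative to the better of the two for that metric:
# 0 means "already the best of the pair"; larger means more room to improve.
best = errors.min(axis=1, keepdims=True)
relative = (errors - best) / best

for i, (rel_a, rel_b) in enumerate(relative, start=1):
    better = "A" if rel_a == 0 else "B"
    print(f"metric {i}: model {better} better; "
          f"relative errors A={rel_a:.2f}, B={rel_b:.2f}")
```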

Full access
Graham P. Weedon, Christel Prudhomme, Sue Crooks, Richard J. Ellis, Sonja S. Folwell, and Martin J. Best

are used here to help interpret the processes involved in transforming precipitation into discharge variability. However, we evaluate average model performance by comparing two time series of the same variable—modeled discharge with observed discharge—by using amplitude ratio spectra and phase spectra (section 6). Unlike Bode plots, this requires no assumptions about the system being modeled. In this case, negative phase values are plausible, indicating that model discharge variations lead
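A minimal sketch of comparing modeled and observed discharge through amplitude ratio and phase spectra, using a plain FFT on synthetic series. In practice spectral averaging or smoothing would be needed, and the function name and sign-convention handling here are illustrative rather than the paper's exact procedure:

```python
import numpy as np

def amplitude_ratio_and_phase(obs, mod, dt=1.0):
    """Per-frequency amplitude ratio (model/observed) and phase spectrum of
    two discharge series. With this convention, negative phase means the
    model variations lead the observations."""
    obs = np.asarray(obs, float) - np.mean(obs)
    mod = np.asarray(mod, float) - np.mean(mod)
    freqs = np.fft.rfftfreq(obs.size, d=dt)
    f_obs = np.fft.rfft(obs)
    f_mod = np.fft.rfft(mod)
    ratio = np.abs(f_mod) / np.maximum(np.abs(f_obs), 1e-12)  # amplitude ratio
    phase = np.angle(f_obs * np.conj(f_mod))                   # phase (radians)
    return freqs[1:], ratio[1:], phase[1:]                     # drop zero frequency

# Synthetic example: model leads the observations by 2 days at a 30-day period.
t = np.arange(720.0)
obs = np.sin(2 * np.pi * t / 30.0)
mod = 0.8 * np.sin(2 * np.pi * (t + 2.0) / 30.0)
freqs, ratio, phase = amplitude_ratio_and_phase(obs, mod)
k = np.argmin(np.abs(freqs - 1.0 / 30.0))
print(f"period 30 d: amplitude ratio {ratio[k]:.2f}, phase {phase[k]:.2f} rad")
```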

Full access
Baozhang Chen, Jing M. Chen, Gang Mo, Chiu-Wai Yuen, Hank Margolis, Kaz Higuchi, and Douglas Chan

carbon fluxes. The purposes of this study are threefold: (i) to test and verify the capability and accuracy of the EASS model when coupled to a GCM and applied to a large area with significant heterogeneity, such as Canada’s landmass; (ii) to evaluate the effect of land-cover heterogeneity on regional energy, water, and carbon flux simulation based on remote sensing data; and (iii) to explore up-scaling methodologies using satellite-derived data. In this paper, we briefly present a basic

Full access
A. M. Ukkola, A. J. Pitman, M. G. De Kauwe, G. Abramowitz, N. Herger, J. P. Evans, and M. Decker

) for all soil layers within the top 3 m and (where applicable) any layer partly within 3 m weighted by the fraction of this layer located above 3 m. b. Observations We used two observed precipitation products to evaluate model performance. These were global monthly time series products by 1) the Climatic Research Unit (CRU TS 3.23; Harris et al. 2014) and 2) Global Precipitation Climatology Centre (GPCC, version 7; Schneider et al. 2016). Both products are available at a 0.5° spatial resolution
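The layer weighting described in this excerpt (full weight for layers inside the top 3 m, partial weight for a layer straddling 3 m) can be written compactly; the layer depths and moisture values below are hypothetical, not taken from the article:

```python
import numpy as np

def top3m_soil_moisture(layer_sm, layer_top, layer_bottom, limit=3.0):
    """Average soil moisture over the top `limit` metres: layers fully above
    the limit count with their full thickness; a layer straddling the limit
    is weighted by the fraction of its thickness located above the limit."""
    layer_sm = np.asarray(layer_sm, float)
    layer_top = np.asarray(layer_top, float)
    layer_bottom = np.asarray(layer_bottom, float)
    thickness_above = np.clip(np.minimum(layer_bottom, limit) - layer_top, 0.0, None)
    return np.sum(layer_sm * thickness_above) / np.sum(thickness_above)

# Hypothetical layer structure (depths in metres) and volumetric soil moisture.
tops    = np.array([0.0, 0.1, 0.35, 1.0, 2.0])
bottoms = np.array([0.1, 0.35, 1.0, 2.0, 4.0])   # last layer straddles 3 m
sm      = np.array([0.30, 0.28, 0.25, 0.22, 0.20])
print(f"top-3 m soil moisture: {top3m_soil_moisture(sm, tops, bottoms):.3f}")
```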

Full access
Kristie J. Franz, Terri S. Hogue, and Soroosh Sorooshian

evaluation of a model requires three critical elements: a performance criterion, a benchmark, and an outcome. Performance criterion refers to the ability to match the desired variable being modeled; in this instance the variables of interest are simulated snow water equivalent (SWE), melt, and discharge. The benchmark is an alternative to the model being evaluated. Given forecasting as the proposed application of the SAST, our benchmark is identified as the operational National Weather Service (NWS) SNOW
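The performance-criterion / benchmark / outcome framing in this excerpt lends itself to a skill score relative to the benchmark simulation. A minimal sketch using mean squared error on synthetic SWE series; the metric choice and the data are assumptions for illustration, not the article's:

```python
import numpy as np

def mse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean((sim - obs) ** 2)

def skill_score(obs, model_sim, benchmark_sim):
    """Skill of the model relative to a benchmark simulation:
    1 = perfect, 0 = no better than the benchmark, < 0 = worse."""
    return 1.0 - mse(obs, model_sim) / mse(obs, benchmark_sim)

# Synthetic SWE example: the "benchmark" is an alternative, simpler model.
rng = np.random.default_rng(3)
obs = np.sin(np.linspace(0.0, np.pi, 180)) * 500.0      # mm of SWE
benchmark_sim = obs + rng.normal(0.0, 60.0, size=obs.size)
model_sim = obs + rng.normal(0.0, 30.0, size=obs.size)
print(f"skill vs benchmark: {skill_score(obs, model_sim, benchmark_sim):.2f}")
```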

Full access