1. Introduction
Abramowitz et al. (2008) found that statistical models outperform physics-based models at estimating land surface states and fluxes and concluded that land surface models are not able to fully utilize information in forcing data. Gong et al. (2013) provided a theoretical explanation for this result and also showed how to measure both the underutilization of available information by a particular model as well as the extent to which the information available from forcing data was unable to resolve the total uncertainty about the predicted phenomena. That is, they separated uncertainty due to forcing data from uncertainty due to imperfect models.
Dynamical systems models, however, are composed of three primary components (Gupta and Nearing 2014): model structures are descriptions of and solvers for hypotheses about the governing behavior of a certain class of dynamical systems, model parameters describe details of individual members of that class of systems, and forcing data are measurements of the time-dependent boundary conditions of each prediction scenario. The analysis by Gong et al. (2013) did not distinguish between uncertainties that are due to a misparameterized model and those that are due to a misspecified model structure, and we propose that this distinction is important for directing model development and efforts to both quantify and reduce uncertainty.
The problem of segregating these three sources of uncertainty has been studied extensively (e.g., Keenan et al. 2012; Montanari and Koutsoyiannis 2012; Schöniger et al. 2015; Liu and Gupta 2007; Kavetski et al. 2006; Draper 1995; Oberkampf et al. 2002; Wilby and Harris 2006; Poulin et al. 2011; Clark et al. 2011). Almost ubiquitously, the methods that have been applied to this problem are based on the chain rule of probability theory (Liu and Gupta 2007). These methods ignore model structural error completely (e.g., Keenan et al. 2012), require sampling a priori distributions over model structures (e.g., Clark et al. 2011), or rely on distributions derived from model residuals (e.g., Montanari and Koutsoyiannis 2012). In all cases, results are conditional on the proposed model structure(s). Multimodel ensembles allow us to assess the sensitivity of predictions to a choice between different model structures, but they do not facilitate true uncertainty attribution or partitioning. Specifically, any distribution (prior or posterior) over potential model parameters and/or structures is necessarily degenerate (Nearing et al. 2015, manuscript submitted to Hydrol. Sci. J.), and sampling from or integrating over such distributions does not facilitate uncertainty estimates that approach any true value.
The theoretical development by Gong et al. (2013) fundamentally solved this problem. They first measured the amount of information contained in the forcing data—that is, the total amount of information available for the model to translate into predictions1—and then showed that this represents an upper bound on the performance of any model (not just the model being evaluated). Deviation between a given model’s actual performance and this upper bound represents uncertainty due to errors in that model. The upper bound can, in theory, be estimated using an asymptotically accurate empirical regression (e.g., Cybenko 1989; Wand and Jones 1994). That is, estimates and attributions of uncertainty produced by this method approach correct values as the amount of evaluation data increases—something that is not true for any method that relies on sampling from degenerate distributions over models.
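Stated in the notation used later in this paper (Z for the benchmark observations, U for the forcing data, H for entropy, and I for mutual information), this bound is an instance of the data-processing inequality (e.g., Cover and Thomas 1991): for any model m acting on the forcing data,

\[
I\bigl(Z; m(U)\bigr) \;\le\; I(Z;U) \;\le\; H(Z),
\]

so that no model can extract more information about the observations than the forcing data themselves contain.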
In this paper, we extend the analysis of information use efficiency by Gong et al. (2013) to consider model parameters. We do this by using a “large sample” approach (Gupta et al. 2014) that requires field data from a number of sites. Formally, this is an example of model benchmarking (Abramowitz 2005). A benchmark consists of 1) a specific reference value for 2) a particular performance metric that is computed against 3) a specific dataset. Benchmarks have been used extensively to test land surface models (e.g., van den Hurk et al. 2011; Best et al. 2011; Abramowitz 2012; Best et al. 2015). They allow for direct and consistent comparisons between different models, and although it has been argued that they can be developed to highlight potential model deficiencies (Luo et al. 2012), there is no systematic method for doing so [see discussion by Beck et al. (2009)]. What we propose is a systematic benchmarking strategy that at least lets us evaluate whether the problems with land surface model predictions are due primarily to forcings, parameters, or structures.
We applied the proposed strategy to benchmark the four land surface models that are run in phase 2 of the North American Land Data Assimilation System (NLDAS-2; Xia et al. 2012a,b), which is a continental-scale ensemble land modeling and data assimilation system. The structure of the paper is as follows. The main text describes the application of this theory to NLDAS-2. Methods are given in section 2 and results in section 3. Section 4 offers a discussion of both the strengths and limitations of information-theoretic benchmarking in general and of how the results can be interpreted in the context of our application to NLDAS-2. A brief and general theory of model performance metrics is given in the appendix, along with an explanation of the basic concept of information-theoretic benchmarking. The strategy is general enough to be applicable to any dynamical systems model.
2. Methods
a. NLDAS-2
NLDAS-2 produces distributed hydrometeorological products over the contiguous United States that are used primarily for drought assessment and for initializing numerical weather prediction (NWP) models. NLDAS-2 is the second generation of the NLDAS and became operational at the National Centers for Environmental Prediction in 2014. Xia et al. (2012a) provided extensive details about the NLDAS-2 models, forcing data, and parameters, so we present only a brief summary here.
NLDAS-2 runs four land surface models over a North American domain (25°–53°N, 125°–67°W) at ⅛° resolution: 1) Noah, 2) Mosaic, 3) the Sacramento Soil Moisture Accounting (SAC-SMA) model, and 4) the Variable Infiltration Capacity model (VIC). Noah and Mosaic run at a 15-min time step whereas SAC-SMA and VIC run at an hourly time step; however, all produce hourly time-averaged output of soil moisture in various soil layers and evapotranspiration at the surface. Mosaic has three soil layers with thicknesses of 10, 30, and 160 cm. Noah uses four soil layers with thicknesses of 10, 30, 60, and 100 cm. SAC-SMA uses conceptual water storage zones that are postprocessed to produce soil moisture values at the depths of the Noah soil layers. VIC uses a 10-cm surface soil layer and two deeper layers of spatially varying thickness. Here we are concerned with estimating surface and root-zone (top 100 cm) soil moisture. The former is taken to be the moisture content of the top 10 cm (the top layer of each model), and the latter the depth-weighted average over the top 100 cm of the soil column.
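As a concrete illustration of the depth weighting described above, the following sketch (with hypothetical function and variable names) computes a 0–100-cm root-zone value from layered model output; the example uses the Noah layer thicknesses, whose top three layers exactly span the root zone:

def root_zone_soil_moisture(layer_sm, layer_thickness_cm, target_depth_cm=100.0):
    """Depth-weighted average soil moisture over the top target_depth_cm of soil.
    layer_sm: volumetric soil moisture of each layer, top to bottom.
    layer_thickness_cm: thickness of each layer (cm), top to bottom."""
    remaining = target_depth_cm
    weighted_sum = 0.0
    for sm, thickness in zip(layer_sm, layer_thickness_cm):
        overlap = min(thickness, remaining)  # portion of this layer inside the root zone
        weighted_sum += sm * overlap
        remaining -= overlap
        if remaining <= 0.0:
            break
    return weighted_sum / target_depth_cm

# Noah layer thicknesses are 10, 30, 60, and 100 cm; the bottom layer lies below 100 cm.
print(root_zone_soil_moisture([0.30, 0.28, 0.25, 0.22], [10, 30, 60, 100]))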
Atmospheric data from the North American Regional Reanalysis (NARR), which are natively at 32-km spatial resolution and 3-h temporal resolution, are interpolated to the 15-min and ⅛° resolution required by NLDAS-2. NLDAS-2 forcing also includes several observational datasets, including a daily gauge-based precipitation analysis, which is temporally disaggregated to hourly using a number of different data sources, as well as satellite-derived shortwave radiation used for bias correction. A lapse-rate correction between the NARR grid elevation and the NLDAS grid elevation was also applied to several NLDAS-2 surface meteorological forcing variables. NLDAS forcings consist of eight variables: 2-m air temperature (K), 2-m specific humidity (kg kg−1), 10-m zonal and meridional wind speed (m s−1), surface pressure (kPa), hourly integrated precipitation (kg m−2), and incoming longwave and shortwave radiation (W m−2). All models act only on the total wind speed, and in this study we also used only the net radiation (sum of shortwave and longwave) so that a total of six forcing variables were considered at each time step.
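The preprocessing of the forcing variables is straightforward; a minimal sketch (hypothetical function and argument names) of the reduction from the eight NLDAS-2 forcing variables to the six regressors used here is:

def collapse_forcings(tair, qair, uwind, vwind, psurf, precip, lwdown, swdown):
    """Collapse the eight NLDAS-2 forcing variables into six: temperature, humidity,
    total wind speed, surface pressure, precipitation, and the sum of incoming
    shortwave and longwave radiation (the net radiation referred to above)."""
    wind_speed = (uwind**2 + vwind**2) ** 0.5  # total 10-m wind speed (m s-1)
    net_radiation = swdown + lwdown            # incoming shortwave plus longwave (W m-2)
    return tair, qair, wind_speed, psurf, precip, net_radiation

print(collapse_forcings(293.0, 0.008, 2.0, -1.5, 98.5, 0.0, 350.0, 600.0))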
Parameters used by each model are listed in Table 1. The vegetation and soil classes are categorical variables and are therefore unsuitable for use as regressors in our benchmarks. The vegetation classification indices were thus mapped onto a five-dimensional real-valued parameter set using the University of Maryland (UMD) classification system (Hansen et al. 2000). These real-valued vegetation parameters included optimum transpiration air temperature (called topt in the Noah model and literature), a radiation stress parameter (rgl), maximum and minimum stomatal resistances (rsmax and rsmin), and a parameter used in the calculation of vapor pressure deficit (hs). Similarly, the soil classification indices were mapped, for use in NLDAS-2 models, to soil hydraulic parameters: porosity, field capacity, wilting point, a Clapp–Hornberger-type exponent, saturated matric potential, and saturated conductivity. These mappings from class indices to real-valued parameters ensured that similar parameter values generally indicated similar phenomenological behavior. In addition, certain models use one or two time-dependent parameters: monthly climatology of greenness fraction, quarterly albedo climatology, and monthly leaf area index (LAI). These were each interpolated to the model time step and so had different values at each time step.
Table 1. Parameters used by the NLDAS-2 models.
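The class-to-parameter mappings described above amount to lookup tables. The sketch below illustrates the idea for the vegetation classes using invented placeholder values; the actual values come from the UMD vegetation classification (Hansen et al. 2000) and the corresponding soil parameter tables:

# Illustrative placeholder values only; these are not the actual UMD/NLDAS-2 tables.
VEG_CLASS_TO_PARAMS = {
    # class index: (topt [K], rgl, rsmin [s m-1], rsmax [s m-1], hs)
    1: (298.0, 30.0, 125.0, 5000.0, 47.0),
    7: (302.0, 100.0, 40.0, 5000.0, 36.0),
}

def vegetation_parameters(class_index):
    """Replace a categorical vegetation class index with five real-valued parameters,
    so that similar parameter vectors imply similar phenomenological behavior."""
    return VEG_CLASS_TO_PARAMS[class_index]

print(vegetation_parameters(7))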
b. Benchmarks
As mentioned in the introduction, a model benchmark consists of three components: a particular dataset, a particular performance metric, and a particular reference value for that metric. The following subsections describe these three components of our benchmark analysis of NLDAS-2.
1) Benchmark dataset
As was done by Kumar et al. (2014) and Xia et al. (2014), we evaluated the NLDAS-2 models against quality-controlled hourly soil moisture observations from the Soil Climate Analysis Network (SCAN). Although there are over 100 operational SCAN sites, we used only those 49 sites with at least 2 years of complete hourly data during the period of 2001–11. These sites are distributed throughout the NLDAS-2 domain (Fig. 1). The SCAN data have measurement depths of 5, 10, 20.3, 51, and 101.6 cm (2, 4, 8, 20, and 40 in.) and were quality controlled (Liu et al. 2011) and depth averaged to 10 and 100 cm to match the surface and root-zone depth-weighted model estimates.
For evapotranspiration (ET), we used level 3 station data from the AmeriFlux network (Baldocchi et al. 2001). We used only those 50 sites that had at least 4000 time steps of hourly data during the period 2001–11. The AmeriFlux network was also used by Mo et al. (2011) and by Xia et al. (2015) for evaluation of the NLDAS-2 models, and a gridded flux dataset from Jung et al. (2009), based on the same station data, was used by Peters-Lidard et al. (2011) to assess the impact of soil moisture data assimilation on ET estimates in the NLDAS framework.
2) Benchmark metrics and reference values
Nearing and Gupta (2015) provide a brief overview of the theory of model performance metrics, and the general formula for a performance metric is given in the appendix. All performance metrics measure some aspect (either quantity or quality) of the information content of model predictions, and the metric that we propose here uses this fact explicitly.
The basic strategy for measuring uncertainty due to model errors is to first measure the amount of information available in the model inputs (forcing data and parameters) and then to subtract the information that is contained in the model predictions. The latter can never exceed the former, and because no model is perfect, the difference measures uncertainty (i.e., lack of complete information) that is due to model error (Nearing and Gupta 2015). This requires that we measure information (and uncertainty) using a metric that is additive, in the sense that the total quantity of information available from two independent sources is the sum of the information available from each source. The only metrics that meet this requirement are those based on Shannon-type entropy (Shannon 1948), so we use this standard definition of information and accordingly measure uncertainty as (conditional) entropy (the appendix contains further explanation).
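Written explicitly, with H denoting Shannon entropy and I mutual information (Shannon 1948), the additivity requirement and the resulting definition of uncertainty are

\[
H(X_1, X_2) \;=\; H(X_1) + H(X_2) \qquad \text{for independent sources } X_1 \text{ and } X_2,
\]
\[
H(Z) \;=\; I(Z;X) + H(Z \mid X),
\]

so that the mutual information I(Z;X) measures the portion of the total uncertainty H(Z) about the observations Z that is resolved by a source X, and the conditional entropy H(Z|X) measures the uncertainty that remains.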
To segregate the three sources of uncertainty (forcings, parameters, and structures), we require three reference values. The first is the total entropy of the benchmark observations, which we notate H(Z), where Z denotes the benchmark observations.
The second reference value measures the information about the benchmark observations that is contained in the model forcing data, which we notate I(Z; U), where U denotes the forcing data.
Our third reference value is the total amount of information about the benchmark observations that is contained in the forcing data plus the model parameters, which we notate I(Z; U, θ), where θ denotes a given model's parameter set.
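In this notation, the input-based reference values and the benchmark performance metric for a particular land surface model are mutual informations between the benchmark observations and, respectively, the model inputs and the model predictions:

\begin{align*}
  I(Z;U) &= H(Z) - H(Z \mid U), \\
  I(Z;U,\theta) &= H(Z) - H(Z \mid U,\theta), \\
  I(Z;\hat{Z}) &= H(Z) - H(Z \mid \hat{Z}),
\end{align*}

where \hat{Z} denotes the predictions from that model and the first reference value is H(Z) itself. The difference I(Z;U,\theta) - I(Z;\hat{Z}) measures information that is available to the model but not used by it (uncertainty due to model structural error), H(Z) - I(Z;\hat{Z}) is the total uncertainty in the model predictions, and the remainder of the total uncertainty is apportioned between forcings and parameters by comparing the reference values (section 3).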
3) Calculating information metrics
As described in the appendix, the reference values that measure information in the model inputs cannot be computed directly; they were instead estimated by training empirical regressions that map each set of inputs (forcings alone, and forcings plus parameters) onto the benchmark observations. Because such regressions can, in principle, approximate the input–output relationship arbitrarily well (e.g., Cybenko 1989), the information contained in their estimates approaches the information contained in the inputs themselves as the amount of training data grows.
The value of the benchmark performance metric for each land surface model was computed in the same way as the reference values, by measuring the information about the benchmark observations that is contained in that model's predictions.
It is important to point out that we did not use a split-record training/prediction approach for either the forcing-only or the forcing-plus-parameter regressions; the regressions were trained on the full benchmark dataset. This is necessary for this type of analysis because the purpose of the regressions is to estimate the total amount of information about the evaluation data that is recoverable from the model inputs, not to act as predictive models in their own right.
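Entropies and mutual informations of this kind can be estimated in several standard ways (see, e.g., Paninski 2003). The sketch below shows one simple plug-in (histogram based) estimator of the mutual information between observations and a set of estimates; it illustrates the quantity being computed rather than the exact estimator used in this study, and all names are hypothetical:

import numpy as np

def mutual_information(obs, est, bins=20):
    """Plug-in estimate of I(obs; est) = H(obs) + H(est) - H(obs, est), in bits,
    from paired samples, using histogram (discrete) entropies."""
    def entropy(counts):
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))
    joint, _, _ = np.histogram2d(obs, est, bins=bins)
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

# Example: estimates that track the observations share information with them.
rng = np.random.default_rng(0)
obs = rng.normal(size=10000)
est = obs + 0.5 * rng.normal(size=10000)
print(mutual_information(obs, est))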
4) Training the regressions
A separate set of regressions was trained for each of the two input-based reference values (forcings only, and forcings plus parameters) and for each observed variable (surface soil moisture, root-zone soil moisture, and ET).
We used sparse pseudo-input Gaussian processes (SPGPs; Snelson and Ghahramani 2006), which are kernel-based emulators of differentiable functions. SPGPs are computationally efficient and very general in the class of functions that they can emulate. SPGPs use a stationary anisotropic squared exponential kernel (see Rasmussen and Williams 2006, chapter 4) that we call an automatic relevance determination (ARD) kernel for reasons that are described presently. Because the land surface responds differently during rain events than it does during dry-down, we trained two separate SPGPs for each observation variable to act on time steps 1) during and 2) between rain events. Thus, each benchmark regression actually consisted of a pair of SPGPs: one for rainy periods and one for dry periods.
Because the NLDAS-2 models effectively act on all past forcing data, it was necessary for the regressions to act on lagged forcings. We used hourly lagged forcings from the 15 h previous to time t.
Because of the time-lagged regressors, each SPGP for rainy time steps in the forcing-only and forcing-plus-parameter benchmarks acted on a large number of input dimensions. The ARD kernel assigns a separate length scale to each input dimension, so that regressors which contribute little information are effectively ignored during training; this is the automatic relevance determination property for which the kernel is named.
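To make the regression setup concrete, the sketch below builds a lagged-forcing design matrix and fits an anisotropic (ARD) Gaussian process to synthetic data. It uses a dense Gaussian process from scikit-learn as a stand-in for the sparse pseudo-input formulation of Snelson and Ghahramani (2006), and all names and values are illustrative:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def lagged_design_matrix(forcings, n_lags=15):
    """Row t contains the current and previous n_lags hourly forcings, flattened.
    forcings has shape (n_times, n_forcing_vars)."""
    rows = [forcings[t - n_lags : t + 1].ravel() for t in range(n_lags, len(forcings))]
    return np.asarray(rows)

rng = np.random.default_rng(0)
forcings = rng.normal(size=(500, 6))        # six forcing variables per hourly time step
X = lagged_design_matrix(forcings)          # (485, 96) lagged regressors
y = X[:, -6:].mean(axis=1) + 0.1 * rng.normal(size=len(X))  # synthetic target

# One length scale per input dimension (an anisotropic squared exponential kernel):
# regressors that carry little information are assigned long length scales and are
# effectively ignored, which is the automatic relevance determination property.
kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:300], y[:300])
print(gp.predict(X[300:310]))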
3. Results
a. Soil moisture
Figure 4 compares the model and benchmark estimates of soil moisture with SCAN observations and also provides anomaly correlations for the model estimates, which for Noah were very similar to those presented by Kumar et al. (2014). The spread of the benchmark estimates around the 1:1 line represents uncertainty that was unresolvable given the input data—this occurred when we were unable to construct an injective mapping from inputs to observations. This happened, for example, near the high range of the soil moisture observations, which indicates that the forcing data were not representative of the largest rainfall events at these measurement sites. This might be due to localized precipitation events that are not always captured by the ⅛° forcing data, and it is an example of the kind of representativeness error that this information analysis detects—the forcing data simply lack this type of information.
It is clear from these scatterplots that the models did not use all available information in the forcing data. In concordance with the empirical results of Abramowitz et al. (2008) and the theory of Gong et al. (2013), the statistical models here outperformed the physics-based models. This is not at all surprising considering that the regressions were trained on the benchmark dataset, which—to reemphasize—is necessary for this particular type of analysis. Figure 5 reproduces the conceptual diagram from Fig. 2 using the data from this study and directly compares the three benchmark reference values with the values of the benchmark performance metric. Table 2 lists the fractions of total uncertainty in each model's predictions that are attributable to forcings, parameters, and model structures.
Table 2. Fractions of total uncertainty due to forcings, parameters, and structures.
The total uncertainty in each set of model predictions was generally about 90% of the total entropy of the benchmark observations (this was similar for all four land surface models and can be inferred from Fig. 5). Forcing data accounted for about a quarter of this total uncertainty in near-surface (10 cm) soil moisture and about one-sixth of the total uncertainty in the 100-cm observations (Table 2). The difference is expected, since surface soil moisture responds more dynamically to the system boundary conditions, and errors in measurements of those boundary conditions therefore have a larger effect on predictions of the near-surface response.
In all cases except SAC-SMA, parameters accounted for about half of total uncertainty in both soil layers, but for SAC-SMA this percentage was higher, at 60% and 70% for the two soil depths, respectively (Table 2). Similarly, the efficiencies of the different parameter sets were relatively low—below 45% in all cases and below 30% for SAC-SMA (Table 3). SAC-SMA parameters are a strict subset of the others, so it is not surprising that this set contained less information. In general, these results indicate that the greatest potential for improvement to NLDAS-2 simulations of soil moisture would come from improving the parameter sets.
Although the total uncertainty in all model predictions was similar, the model structures themselves performed very differently. Overall, VIC performed the worst and was able to use less than a quarter of the information available to it, while SAC-SMA was able to use almost half (Table 3). SAC-SMA had less information to work with (from parameters; Fig. 5), but it was better at using what it had. The obvious extension of this analysis would be to measure which of the parameters not used by SAC-SMA are the most important, and then to determine how SAC-SMA might account for the processes represented by those missing parameters. It is interesting to note that the model structure that performed the best, SAC-SMA, was an uncalibrated conceptual model, whereas Noah, Mosaic, and VIC are ostensibly physics based (and VIC parameters were calibrated).
The primary takeaway from these results is that there is significant room to improve both the NLDAS-2 models and parameter sets, but that the highest return on investment, in terms of predicting soil moisture, will likely come from looking at the parameters. This type of information-based analysis could easily be extended to look at the relative value of individual parameters.
b. Evapotranspiration
Figure 6 compares the model and benchmark estimates of ET with AmeriFlux observations. Again, the spread in the benchmark estimates is indicative of substantial unresolvable uncertainty given the various input data. Figure 5 again plots the ET reference values and the values of the ET performance metrics. For ET, forcing data accounted for about two-thirds of total uncertainty in the predictions from all four models (Table 2). Parameters accounted for about one-fifth of total uncertainty, and model structures accounted for only about 10%. In all three cases, the fractions of ET uncertainty due to the different components were essentially the same across the four models. In terms of efficiency, the forcing data resolved less than half of the total uncertainty in the benchmark observations, and the parameters and structures generally had efficiencies between 50% and 60%, with the efficiencies of the models being slightly higher (Table 3). Again, the ET efficiencies were similar among all four models and their respective parameter sets.
4. Discussion
The purpose of this paper is twofold. First, we demonstrate (and expand) information-theoretic benchmarking as a way to quantify contributions to uncertainty in dynamical model predictions without relying on degenerate priors or on specific model structures. Second, we use this strategy to measure the potential for improving various aspects of the continental-scale hydrologic modeling system NLDAS-2.
Related to NLDAS-2 specifically, we found significant potential to improve all parts of the modeling system. Parameters contributed the most uncertainty to soil moisture estimates, and forcing data contributed the majority of uncertainty to evapotranspiration estimates; however, the models themselves used only a fraction of the information that was available to them. Differences between the ET results and those from the soil moisture experiments highlight that model adequacy (Gupta et al. 2012) depends very much on the specific purpose of the model (in this case, the “purpose” indicates which variable we are particularly interested in predicting with the model). As mentioned above, an information use efficiency analysis like this one could easily be extended to look not only at the information content of individual parameters, but also at that of individual process components of a model by using a modular modeling system (e.g., Clark et al. 2011). We therefore expect that this study will serve as a foundation for a diagnostic approach to both assessing and improving model performance—again, in a way that does not rely on simply comparing a priori models. The ideas presented here will also guide the development and evaluation of the next phase of NLDAS, which will run at a finer spatial scale and include updated physics in the land surface models, assimilation of remotely sensed water states, improved model parameters, and higher-quality forcing data.
Related to benchmarking theory in general, there have recently been a number of large-scale initiatives to compare, benchmark, and evaluate the land surface models used for hydrological, ecological, and weather and climate prediction (e.g., van den Hurk et al. 2011; Best et al. 2015); however, we argue that those efforts have not exploited the full power of model benchmarking. The most exciting aspect of the benchmarking concept seems to be its ability to help us understand and measure factors that limit model performance—specifically, benchmarking's ability to place (asymptotically accurate) upper bounds on the potential to improve various components of the modeling system. As we mentioned earlier, essentially all existing methods for quantifying uncertainty rely on a priori distributions over model structures, and because such distributions are necessarily incomplete, there is no way for such analyses to produce uncertainty estimates that converge toward correct values. What we outline here can provide such estimates: it is often at least theoretically possible to use regressions that asymptotically approximate the true relationship between model inputs and outputs (Cybenko 1989).
The caveat here is that although this type of benchmarking-based uncertainty analysis solves the problem of degenerate priors, the problem of finite evaluation data remains. We can argue that information-theoretic benchmarking allows us to produce asymptotic estimates of uncertainty, but since we will only ever have access to a finite number of benchmark observations, the best we can ever hope to do in terms of uncertainty partitioning (using any available method) is to estimate uncertainty in the context of whatever data we have available. We can certainly extrapolate any uncertainty estimates into the future (e.g., Montanari and Koutsoyiannis 2012), but there is no guarantee that such extrapolations will be correct. Information-theoretic benchmarking does not solve this problem. All model evaluation exercises necessarily ask the question “What information does the model provide about the available observations?” Such is the nature of inductive reasoning.
Similarly, although it is possible to explicitly consider error in the benchmark observations during uncertainty partitioning (Nearing and Gupta 2015), any estimate of this observation error ultimately and necessarily constitutes part of the model that we are evaluating (Nearing et al. 2015, manuscript submitted to Hydrol. Sci. J.). The only thing that we can ever assess during any type of model evaluation (in fact, during any application of the scientific method) is whether a given model (including all probabilistic components) is able to reproduce various instrument readings with certain accuracy and precision. Like any other type of uncertainty analysis, benchmarking is fully capable of testing models that do include models of instrument error and representativeness.
The obvious open question is about how to use this to fix our models. It seems that the method proposed here might, at least theoretically, help to address the question in certain respects. To better understand the relationship between individual model parameters and model structures, we could use an analysis like the one presented here to measure the information contributed by individual parameters, or by individual process representations within a modular modeling framework (e.g., Clark et al. 2011, 2015), and thereby identify the specific components of a modeling system that fail to provide, or fail to use, available information.
To summarize, Earth scientists are collecting ever-increasing amounts of data from a growing number of field sites and remote sensing platforms. These data are typically not cheap, and we expect that it will be valuable to understand the extent to which we are able to fully utilize this investment—that is, by using it to characterize and model biogeophysical relationships. Hydrologic prediction in particular seems to be a data-limited endeavor. Our ability to apply our knowledge of watershed physics is limited by unresolved heterogeneity in the systems at different scales (Blöschl and Sivapalan 1995), and we see here that this difficulty manifests in our data and parameters. Our ability to resolve prediction problems will, to a large extent, be dependent on our ability to collect and make use of observational data, and one part of this puzzle involves understanding the extents to which 1) our current data are insufficient and 2) our current data are underutilized. Model benchmarking has the potential to help distinguish these two issues.
Acknowledgments
Thank you to Martyn Clark (NCAR) for his help with organizing the presentation. The NLDAS-2 data used in this study were acquired as part of NASA’s Earth–Sun System Division and archived and distributed by the Goddard Earth Sciences (GES) Data and Information Services Center (DISC) Distributed Active Archive Center (DAAC). Funding for AmeriFlux data resources was provided by the U.S. Department of Energy’s Office of Science.
APPENDIX
A General Description of Model Performance Metrics
We begin with five things: 1) a (probabilistic) model, 2) forcing data that describe the time-dependent boundary conditions of each prediction scenario, 3) model parameters that describe the particular system being simulated, 4) benchmark observations of the phenomena that the model is intended to predict, and 5) a metric that measures the correspondence (here, the shared information) between model predictions and those observations.
Because it is necessary to have a model to translate the information contained in forcing data and parameters into information about the predicted phenomena, any measurement of the information content of a set of model inputs is necessarily made through some model of the relationship between inputs and observations. The benchmark reference values avoid conditioning this measurement on any particular land surface model by using empirical regressions that can, asymptotically, extract all of the information that the inputs contain about the benchmark observations (e.g., Cybenko 1989; Wand and Jones 1994). The difference between such a reference value and the value of the same metric applied to a given model's predictions therefore measures information that is lost because of errors in that model.
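In the notation of section 2, the information-theoretic special case used throughout this paper measures the mutual information between the benchmark observations Z and a set of estimates \hat{Z} (whether produced by a land surface model or by a benchmark regression):

\[
I(Z;\hat{Z}) \;=\; H(Z) - H(Z \mid \hat{Z})
\;=\; \sum_{z,\hat{z}} p(z,\hat{z}) \,\log \frac{p(z \mid \hat{z})}{p(z)},
\]

which is zero when the estimates carry no information about the observations and equals H(Z) when they resolve the observations completely.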
REFERENCES
Abramowitz, G., 2005: Towards a benchmark for land surface models. Geophys. Res. Lett., 32, L22702, doi:10.1029/2005GL024419.
Abramowitz, G., 2012: Towards a public, standardized, diagnostic benchmarking system for land surface models. Geosci. Model Dev., 5, 819–827, doi:10.5194/gmd-5-819-2012.
Abramowitz, G., Leuning R. , Clark M. , and Pitman A. , 2008: Evaluating the performance of land surface models. J. Climate, 21, 5468–5481, doi:10.1175/2008JCLI2378.1.
Baldocchi, D., and Coauthors, 2001: FLUXNET: A new tool to study the temporal and spatial variability of ecosystem-scale carbon dioxide, water vapor, and energy flux densities. Bull. Amer. Meteor. Soc., 82, 2415–2434, doi:10.1175/1520-0477(2001)082<2415:FANTTS>2.3.CO;2.
Beck, M. B., and Coauthors, 2009: Grand challenges of the future for environmental modeling. White Paper, National Science Foundation, Arlington, VA, 135 pp. [Available online at http://www.ewp.rpi.edu/hartford/~ernesto/S2013/MMEES/Papers/ENVIRONMENT/1EnvironmentalSystemsModeling/Beck2009-nsfwhitepaper.pdf.]
Best, M. J., and Coauthors, 2011: The Joint UK Land Environment Simulator (JULES), model description—Part 1: Energy and water fluxes. Geosci. Model Dev., 4, 677–699, doi:10.5194/gmd-4-677-2011.
Best, M. J., and Coauthors, 2015: The plumbing of land surface models: Benchmarking model performance. J. Hydrometeor., 16, 1425–1442, doi:10.1175/JHM-D-14-0158.1.
Beven, K. J., and Young P. , 2013: A guide to good practice in modeling semantics for authors and referees. Water Resour. Res., 49, 5092–5098, doi:10.1002/wrcr.20393.
Blöschl, G., and Sivapalan M. , 1995: Scale issues in hydrological modelling: A review. Hydrol. Processes, 9, 251–290, doi:10.1002/hyp.3360090305.
Clark, M. P., Kavetski D. , and Fenicia F. , 2011: Pursuing the method of multiple working hypotheses for hydrological modeling. Water Resour. Res., 47, W09301, doi:10.1029/2010WR009827.
Clark, M. P., and Coauthors, 2015: A unified approach for process-based hydrologic modeling: 1. Modeling concept. Water Resour. Res., 51, 2498–2514, doi:10.1002/2015WR017198.
Cover, T. M., and Thomas J. A. , 1991: Elements of Information Theory. Wiley-Interscience, 726 pp.
Cybenko, G., 1989: Approximation by superpositions of a sigmoidal function. Math. Control Signal, 2, 303–314, doi:10.1007/BF02551274.
Draper, D., 1995: Assessment and propagation of model uncertainty. J. Roy. Stat. Soc., 57B, 45–97.
Edwards, A. F. W., 1984: Likelihood. Cambridge University Press, 243 pp.
Gong, W., Gupta H. V. , Yang D. , Sricharan K. , and Hero A. O. , 2013: Estimating epistemic and aleatory uncertainties during hydrologic modeling: An information theoretic approach. Water Resour. Res., 49, 2253–2273, doi:10.1002/wrcr.20161.
Gupta, H. V., and Nearing G. S. , 2014: Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resour. Res., 50, 5351–5359, doi:10.1002/2013WR015096.
Gupta, H. V., Clark M. P. , Vrugt J. A. , Abramowitz G. , and Ye M. , 2012: Towards a comprehensive assessment of model structural adequacy. Water Resour. Res., 48, W08301, doi:10.1029/2011WR011044.
Gupta, H. V., Perrin C. , Kumar R. , Blöschl G. , Clark M. , Montanari A. , and Andréassian V. , 2014: Large-sample hydrology: A need to balance depth with breadth. Hydrol. Earth Syst. Sci., 18, 463–477, doi:10.5194/hess-18-463-2014.
Hansen, M. C., DeFries R. S. , Townshend J. R. G. , and Sohlberg R. , 2000: Global land cover classification at 1 km spatial resolution using a classification tree approach. Int. J. Remote Sens., 21, 1331–1364, doi:10.1080/014311600210209.
Jaynes, E. T., 2003: Probability Theory: The Logic of Science. Cambridge University Press, 727 pp.
Jung, M., Reichstein M. , and Bondeau A. , 2009: Towards global empirical upscaling of FLUXNET eddy covariance observations: Validation of a model tree ensemble approach using a biosphere model. Biogeosciences, 6, 2001–2013, doi:10.5194/bg-6-2001-2009.
Kavetski, D., Kuczera G. , and Franks S. W. , 2006: Bayesian analysis of input uncertainty in hydrological modeling: 2. Application. Water Resour. Res., 42, W03408, doi:10.1029/2005WR004376.
Keenan, T. F., Davidson E. , Moffat A. M. , Munger W. , and Richardson A. D. , 2012: Using model–data fusion to interpret past trends, and quantify uncertainties in future projections, of terrestrial ecosystem carbon cycling. Global Change Biol., 18, 2555–2569, doi:10.1111/j.1365-2486.2012.02684.x.
Kumar, S. V., and Coauthors, 2014: Assimilation of remotely sensed soil moisture and snow depth retrievals for drought estimation. J. Hydrometeor., 15, 2446–2469, doi:10.1175/JHM-D-13-0132.1.
Liu, Y. Q., and Gupta H. V. , 2007: Uncertainty in hydrologic modeling: Toward an integrated data assimilation framework. Water Resour. Res., 43, W07401, doi:10.1029/2006WR005756.
Liu, Y. Q., and Coauthors, 2011: The contributions of precipitation and soil moisture observations to the skill of soil moisture estimates in a land data assimilation system. J. Hydrometeor., 12, 750–765, doi:10.1175/JHM-D-10-05000.1.
Luo, Y. Q., and Coauthors, 2012: A framework for benchmarking land models. Biogeosciences, 9, 3857–3874, doi:10.5194/bg-9-3857-2012.
Mo, K. C., Long L. N. , Xia Y. , Yang S. K. , Schemm J. E. , and Ek M. , 2011: Drought indices based on the climate forecast system reanalysis and ensemble NLDAS. J. Hydrometeor., 12, 181–205, doi:10.1175/2010JHM1310.1.
Montanari, A., and Koutsoyiannis D. , 2012: A blueprint for process-based modeling of uncertain hydrological systems. Water Resour. Res., 48, W09555, doi:10.1029/2011WR011412.
Neal, R. M., 1993: Probabilistic inference using Markov chain Monte Carlo methods. Tech. Rep. CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 144 pp. [Available online at http://www.cs.toronto.edu/~radford/ftp/review.pdf.]
Nearing, G. S., and Gupta H. V. , 2015: The quantity and quality of information in hydrologic models. Water Resour. Res., 51, 524–538, doi:10.1002/2014WR015895.
Nearing, G. S., Gupta H. V. , and Crow W. T. , 2013: Information loss in approximately Bayesian estimation techniques: A comparison of generative and discriminative approaches to estimating agricultural productivity. J. Hydrol., 507, 163–173, doi:10.1016/j.jhydrol.2013.10.029.
Oberkampf, W. L., DeLand S. M. , Rutherford B. M. , Diegert K. V. , and Alvin K. F. , 2002: Error and uncertainty in modeling and simulation. Reliab. Eng. Syst. Saf., 75, 333–357, doi:10.1016/S0951-8320(01)00120-X.
Paninski, L., 2003: Estimation of entropy and mutual information. Neural Comput., 15, 1191–1253, doi:10.1162/089976603321780272.
Peters-Lidard, C. D., Kumar S. V. , Mocko D. M. , and Tian Y. , 2011: Estimating evapotranspiration with land data assimilation systems. Hydrol. Processes, 25, 3979–3992, doi:10.1002/hyp.8387.
Poulin, A., Brissette F. , Leconte R. , Arsenault R. , and Malo J.-S. , 2011: Uncertainty of hydrological modelling in climate change impact studies in a Canadian, snow-dominated river basin. J. Hydrol., 409, 626–636, doi:10.1016/j.jhydrol.2011.08.057.
Rasmussen, C., and Williams C. , 2006: Gaussian Processes for Machine Learning. MIT Press, 248 pp.
Schöniger, A., Wöhling T. , and Nowak W. , 2015: A statistical concept to assess the uncertainty in Bayesian model weights and its impact on model ranking. Water Resour. Res., 51, 7524–7546, doi:10.1002/2015WR016918.
Shannon, C. E., 1948: A mathematical theory of communication. Bell Syst. Tech. J., 27, 379–423, doi:10.1002/j.1538-7305.1948.tb01338.x.
Snelson, E., and Ghahramani Z. , 2006: Sparse Gaussian processes using pseudo-inputs. Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. C. Platt, Eds., Neural Information Processing Systems, 1257–1264. [Available online at http://papers.nips.cc/paper/2857-sparse-gaussian-processes-using-pseudo-inputs.]
van den Hurk, B., Best M. , Dirmeyer P. , Pitman A. , Polcher J. , and Santanello J. , 2011: Acceleration of land surface model development over a decade of GLASS. Bull. Amer. Meteor. Soc., 92, 1593–1600, doi:10.1175/BAMS-D-11-00007.1.
Wand, M. P., and Jones M. C. , 1994: Kernel Smoothing. CRC Press, 212 pp.
Weijs, S. V., Schoups G. , and Giesen N. , 2010: Why hydrological predictions should be evaluated using information theory. Hydrol. Earth Syst. Sci., 14, 2545–2558, doi:10.5194/hess-14-2545-2010.
Wilby, R. L., and Harris I. , 2006: A framework for assessing uncertainties in climate change impacts: Low-flow scenarios for the River Thames, UK. Water Resour. Res., 42, W02419, doi:10.1029/2005WR004065.
Xia, Y., and Coauthors, 2012a: Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. J. Geophys. Res., 117, D03109, doi:10.1029/2011JD016051.
Xia, Y., and Coauthors, 2012b: Continental-scale water and energy flux analysis and validation for North American Land Data Assimilation System project phase 2 (NLDAS-2): 2. Validation of model-simulated streamflow. J. Geophys. Res., 117, D03110, doi:10.1029/2011JD016051.
Xia, Y., Sheffield J. , Ek M. B. , Dong J. , Chaney N. , Wei H. , Meng J. , and Wood E. F. , 2014: Evaluation of multi-model simulated soil moisture in NLDAS-2. J. Hydrol., 512, 107–125, doi:10.1016/j.jhydrol.2014.02.027.
Xia, Y., Hobbins M. T. , Mu Q. , and Ek M. B. , 2015: Evaluation of NLDAS-2 evapotranspiration against tower flux site observations. Hydrol. Processes, 29, 1757–1771, doi:10.1002/hyp.10299.
Ziv, J., and Zakai M. , 1973: On functionals satisfying a data-processing theorem. IEEE Trans. Inf. Theory, 19, 275–283, doi:10.1109/TIT.1973.1055015.
¹ Contrary to the suggestion by Beven and Young (2013), we use the term “prediction” to mean a model estimate before it is compared with observation data for some form of hypothesis testing or model evaluation. This definition is consistent with the etymology of the word and is meaningful in the context of the scientific method.