Scale-Dependent Value of QPF for Real-Time Streamflow Forecasting

Ganesh R. Ghimire (https://orcid.org/0000-0002-4284-3941), Witold F. Krajewski, and Felipe Quintero

Iowa Flood Center and IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, Iowa

Abstract

Incorporating rainfall forecasts into a real-time streamflow forecasting system extends the forecast lead time. Since quantitative precipitation forecasts (QPFs) are subject to substantial uncertainties, questions arise on the trade-off between the time horizon of the QPF and the accuracy of the streamflow forecasts. This study explores the problem systematically, examining the uncertainties associated with QPFs and their hydrologic predictability. The focus is on the scale dependence of the trade-off among the QPF time horizon, basin scale, space–time scale of the QPF, and streamflow forecasting accuracy. To address this question, the study first performs a comprehensive independent evaluation of the QPFs at 140 U.S. Geological Survey (USGS) monitored basins with a wide range of spatial scales (~10–40 000 km2) over the state of Iowa in the midwestern United States. The study uses High-Resolution Rapid Refresh (HRRR) and Global Forecast System (GFS) QPFs for short- and medium-range forecasts, respectively. Using the Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimate (QPE) as a reference, the results show that the rainfall-to-rainfall QPF errors are scale dependent. The results from the hydrologic forecasting experiment show that both QPFs demonstrate clear value for real-time streamflow forecasting at longer lead times in the short to medium range relative to the no-rain streamflow forecast. The value of QPFs for streamflow forecasting is particularly apparent for basin sizes below 1000 km2. The space–time scale, or reference time tr (ratio of forecast lead time to basin travel time), of ~1 yields the largest streamflow forecasting skill, with a systematic decrease in forecasting accuracy for tr > 1.

© 2021 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

G. R. Ghimire’s current affiliation: Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee.

Corresponding author: Ganesh R. Ghimire, ghimiregr@ornl.gov


1. Introduction

Despite recent advances in the numerical weather prediction (NWP) models and weather observations, quantitative precipitation forecasting still reflects considerable uncertainty. Quantitative precipitation forecasts (QPFs) are needed in real-time streamflow (flood) forecasting to extend the forecast lead time (e.g., Cuo et al. 2011; Collischonn et al. 2005). Given significant uncertainties associated with QPFs, particularly with respect to location, timing, and magnitude, these errors and uncertainties propagate to the hydrologic forecasts when used as deterministic forcing of the hydrologic rainfall–runoff models (e.g., Cuo et al. 2011; Collischonn et al. 2005; Adams and Pagano 2016; Adams and Dymond 2019, 2018; Hardy et al. 2016). Operational hydrologic services, such as the National Weather Service (NWS) in the United States, therefore, consider the use of QPFs skillful up to 1 or 2 days of lead time for hydrologic forecasting (e.g., Adams and Dymond 2019; Wu et al. 2020) at many operational river forecasting centers. As a result, it has remained a challenge to the hydrologic community to extend forecast lead times from a few hours to a week (e.g., Cuo et al. 2011) using QPFs. Particularly at short lead times, it has been shown that the short-term extrapolation of current radar–rainfall patterns can increase forecasting skill (e.g., Vivoni et al. 2006) in some cases. Ensemble QPFs, also regarded as an alternative to deterministic forecasts, are actively researched, but their operational use has lagged behind other methods (Cuo et al. 2011; Calvetti and Pereira Filho 2014; Adams and Ostrowski 2010; Adams and Dymond 2018; Shrestha et al. 2013; Wu et al. 2020; Sharma et al. 2018, 2017; Seo et al. 2006).

Exploring methods to improve streamflow forecasting accuracy in the short, medium, and long ranges has been an active area of hydrologic research. In this study, we step back and pose a simple question that has been largely overlooked in the hydrologic forecasting community: How much value do QPFs add to real-time streamflow forecasting across scales? We explore this question by evaluating streamflow forecasts on a spectrum of scenarios from a “no-rain” forecast, to a QPF-derived forecast, to a surrogate of a “perfect” QPF (i.e., the QPE). In this quest, we assess the uncertainties associated with QPFs at short to medium range and their hydrologic predictability using an operational hydrologic forecasting system. We emphasize the scale dependence of the trade-off among the forecast lead time, basin scale, space–time scale of the QPF, and accuracy of the streamflow forecasts in the operational setting of streamflow (flood) forecasting.

A number of studies in the literature have explored the use of QPFs in hydrologic forecasting, ranging from deterministic to probabilistic approaches. Cloke and Pappenberger (2009), Cuo et al. (2011), and Wu et al. (2020) provide the most comprehensive literature reviews of efforts to use QPEs and QPFs for short- to medium-range streamflow forecasting. Cuo et al. (2011) emphasize in their review that ensemble outputs from NWP models can account for QPF uncertainty, thereby enhancing the streamflow forecasting skill at longer lead times. They discuss and recommend four areas of focus for improving QPF-based streamflow forecasting: 1) selection and evaluation of NWP-derived QPFs, 2) enhancement of QPFs, 3) adoption of suitable hydrologic models, and 4) integration of NWP models and hydrologic forecast systems. Cloke and Pappenberger (2009) and Wu et al. (2020), in their comprehensive reviews of ensemble prediction-based forecasting, concur that it has added value while citing several key challenges, mostly similar to the ones highlighted by Cuo et al. (2011). The major operational challenge with ensemble QPF-based (probabilistic) forecasting has been its use in an operational setting and communicating these forecasts properly (e.g., Cloke and Pappenberger 2009; Adams and Dymond 2018; Silvestro et al. 2011; Wu et al. 2020).

For short-range (up to ~12 h) streamflow (flood) forecasting, using radar-based nowcasting in a deterministic framework also has been found useful (e.g., Cuo et al. 2011; Vivoni et al. 2006). Vivoni et al. (2006) explore the flood predictability as a function of lead time, catchment scale, and rainfall spatial variability in a simulated real-time operational mode across nested basins in Oklahoma and show the advantage of using QPFs at short-range flood forecasting.

In the medium (from ~12 h to 10 days) to long (up to ~30 days) ranges, the utility of QPFs in deterministic hydrologic forecasting is less clear owing to considerable QPF uncertainty. Therefore, the hydrologic forecasting community has slowly transitioned toward an ensemble prediction (probabilistic) system. Reviews by Cuo et al. (2011), Cloke and Pappenberger (2009), and Wu et al. (2020) discuss this in detail. The accuracy of an ensemble prediction system largely depends on the method of constructing the ensembles, the number of ensemble members, and the process of evaluating the ensemble forecasts (Wu et al. 2020). While communicating probabilistic forecasts has been a challenge, Adams and Dymond (2018) show that the ensemble mean and median exhibit much smaller errors for all basins compared to the NOAA/NWS legacy deterministic hydrologic forecasts for lead times larger than 96 h. They perform better particularly for fast-responding basins beginning at about 36 h of lead time and hence could be an alternative. In this study, we follow a similar direction in the evaluation of ensemble forecasts.

The quantification of streamflow (flood) forecasting skill across forecast lead time, basin scale, and space–time variability of rainfall has been less studied despite its significance to the hydrologic forecasting community (e.g., Georgakakos 1986; Pereira Fo et al. 1999; Dolciné et al. 2001; Vivoni et al. 2006). Most reported studies have focused on QPF accuracy rather than exploring the operational QPF-based streamflow forecasting accuracy (Welles et al. 2007; Cuo et al. 2011; Pagano et al. 2004). With the increasing availability of operational QPFs, the number of studies pursued in that direction is increasing (e.g., Demargne et al. 2009; Pagano et al. 2004; Zhou et al. 2011; Adams and Dymond 2019, 2018; Zalenski et al. 2017). For instance, Adams and Dymond (2019) explore the utility of QPFs for short-range streamflow (flood) forecasting focusing on the NOAA/NWS Ohio River Forecast Center stage forecasts. They show for basins from ~500 to ~49 500 km2 that hydrologic forecasts are reliable up to 6–12-h lead times for flood forecasting. Beyond 12 h, they show that stage forecast errors increase for all flow conditions, and more profoundly for floods.

In this study, we assess the hydrologic forecasting skill using QPFs in the short to medium range, focusing on the dependency with forecast lead time, catchment scale, and space–time scales of the QPFs. Our working hypothesis is that the potential value of QPFs is much higher at small spatial scales. Consider the following situation (Fig. 1). A rainfall forecast (QPF) is displaced in space from the true rainfall by a large enough distance that it completely misses basin A. The predicted response is in error, although from the meteorological weather prediction point of view this could be considered a rather good forecast (because of the similarity of the forecast to the true rainfall pattern in shape and quantity). Suppose now that the same forecast affects basin B, which is larger than and includes basin A. As the areas affected by the true (QPE) and predicted (erroneous) rainfall (QPF) in basin B are approximately similar, the predicted response for basin B should be much better than the prediction for A. Can we detect such situations in actual data? Do they affect the statistical measures of the forecasting system performance?

Fig. 1. A demonstration of rainfall QPF relative to two basin scales: one large and the other small. The rainfall corresponds to 1200 UTC 9 Sep 2016. Examples of (left) QPE and (right) QPF. Basin A (Black Hawk Creek near Hudson, 800 km2) is nested inside the larger basin B (Cedar River at Waterloo, 13 300 km2). Note that the QPF completely misses basin A.

Citation: Journal of Hydrometeorology 22, 7; 10.1175/JHM-D-20-0297.1

A similar example can be constructed from the point of view of the total predicted rainfall amount. Figure 1 also gives some indications in terms of rainfall amount relative to basin scales. The same error in the amount of the QPF affects small rivers more, as they carry less water than large rivers. A completely botched forecast of a record storm in Iowa has little consequence for the streamflow in the lower Mississippi River.

Note that assessment of the value of QPFs for hydrologic forecasting is not straightforward. True rainfall—as well as its estimates (QPEs) and forecasts (QPFs)—demonstrates significant space–time variability, which has important implications for hydrologic forecasts. These variabilities are coupled with basin size and shape, e.g., as characterized by the width function of the drainage network (see Ayalew and Krajewski 2017; Perez et al. 2018), leading to a wide variety of possible outcomes. For instance, for the same QPF the amount of runoff volume already in the river network could affect the hydrologic predictions at basins of different size depending on the forecast horizon. Similarly, the rainfall amounts associated with QPFs may provide more or less value to hydrologic forecasts across basin scales.

While the above situations are familiar to operational hydrological forecasters, we explore them systematically across spatial scales and forecast lead times. This is the main contribution of our study. Rather than focusing on a single basin, we consider many that range in size but share many other hydroclimatological and physiotopographical attributes. This study does not answer all questions pertaining to the value of QPFs to hydrologic forecasting but does provide key insights on many of the issues discussed above.

2. Experimental domain and data

a. Study area

We explore the value of QPFs for real-time streamflow forecasting in the domain of the U.S. state of Iowa and its interior rivers (see Fig. 2). The study domain conforms to the model domain of the Iowa Flood Center (IFC) forecasting model (e.g., Krajewski et al. 2017; Quintero et al. 2016, 2020). The interior rivers of Iowa are within the state boundaries, with small portions draining from the states of Minnesota and South Dakota (Krajewski et al. 2020). About 65% of Iowa drains to the Mississippi River on the east while the rest of the state drains to the Missouri River on the west (e.g., Larimer 1957; Ghimire et al. 2018, 2020). The watersheds in Iowa are predominantly agricultural. The northeastern part of the state mostly comprises deeply carved terrain with narrow valleys and higher-slope channels, while the rest of the state comprises low-relief terrain with mild-slope streams and rivers (e.g., Prior 1991; Ghimire et al. 2020; Krajewski et al. 2020). Iowa experiences significant seasonal climate variability. For instance, in the warm season moist air from the Gulf of Mexico directed by the Great Plains low-level jet—including its strength, shear, and relative divergent circulations—results in intense summer rainfall. In winter, Canadian airflow results in cold, dry weather. Throughout the year, air moving across the western United States causes mild, dry weather in the state (e.g., Krajewski et al. 2020; Budikova et al. 2010).

Fig. 2. Map of the experimental domain of Iowa. Green dots represent the USGS stream gauge stations used for the evaluation of forecasting skills.


b. Streamflow data

There are about 140 USGS stream gauge stations and more than 280 IFC stage-only sensors in Iowa monitoring streams and rivers in real time every 15 min (Krajewski et al. 2017; Kruger et al. 2016) (Fig. 2). We used streamflow observations at USGS stream gauge sites (USGS 2020) for independent evaluation of streamflow forecasts for the years 2016 to 2018. USGS stream gauge stations in Iowa monitor streamflow at basins ranging from ~10 to 40 000 km2 in size, which enables us to characterize variability in streamflow forecasting accuracy across spatial scales.

c. Quantitative precipitation forecasts

Since the study is focused on exploring the value of QPFs for streamflow forecasting, it requires a selection of operational QPF products. We used three rainfall products for the study: 1) the MRMS QPE, which serves as a surrogate for the perfect QPF; 2) the HRRR QPF; and 3) the GFS QPF (see Table 1). The MRMS rainfall product is rain gauge corrected (e.g., Zhang et al. 2016, 2011; Qi et al. 2016). By applying the local gauge bias correction, the MRMS system produces a more accurate QPE at a latency of about 1.5 h (e.g., Zhang et al. 2016; Ghimire and Krajewski 2020). The HRRR QPF comes from a NOAA real-time, high-resolution, cloud-resolving, convection-allowing atmospheric model (NOAA 2020a; Alexander et al. 2010; Benjamin et al. 2009). The model is initialized on a 3-km grid, and radar data are assimilated every 15 min over a 1-h period, producing the standard HRRR product for short-range streamflow forecasting in the conterminous United States (CONUS).

Table 1. Details of QPE and QPFs used in the study.

The National Centers for Environmental Prediction (NCEP) produces operational Global Forecast System (GFS) analysis and forecast grids (NOAA 2020c) for the medium range at 3- and 6-h time intervals up to 240 h of forecast lead time, with a spatial resolution of ~25 km. In Fig. 3, we present an example comparing the QPFs used in the study, clearly demonstrating the spatiotemporal variability and bias over Iowa. The National Water Model (NWM) (NOAA 2020b) adopts an ensemble scheme for GFS in the medium range constructed with time-lagged GFS forcing. Member 1 is the most recent GFS forecast cycle output, while members 2–8 are the successively older GFS cycle outputs (see Fig. 4b). The oldest one corresponds to the forecast run performed 36 h earlier. For our study, however, we used only the 6-h accumulation interval GFS product. Figure 4 demonstrates the use of QPFs in the real-time streamflow forecasting scheme, which we discuss in further detail in section 3b.

Fig. 3. Rainfall map for (a) MRMS QPE, (b) HRRR QPF, and (c) GFS QPF for the month of September 2016 over Iowa. Note that the QPFs shown correspond to the 6-h lead time forecasts. GFS uses a 6-h accumulation interval for the 6-h lead time.


Fig. 4. QPF configurations used for streamflow forecasting across forecast lead times: (a) different QPF schemes and (b) the construction of GFS ensemble members from the first member (original) of the GFS system. The yellow triangles depict the time at which the forecasts are issued.


3. Methods

a. The hydrologic model

At the core of a real-time streamflow forecasting system is an operational hydrologic forecasting model. Here we use a nonlinear, physics-based, spatially distributed model called the Hillslope-Link Model (HLM), used by the IFC for real-time streamflow forecasting. The model decomposes a landscape into a system of hillslopes and links (e.g., Krajewski et al. 2017; Quintero et al. 2016). Hillslopes are the hydrologic response units where the rainfall–runoff transformation process occurs. The average size of hillslopes is about 0.4 km2. The model has four storage components at each hillslope-link system: 1) channel storage q(t) (m3 s−1); 2) pondage on the surface Sp(t) (m); 3) soil top-layer storage St(t) (m); and 4) subsurface storage Ss(t) (m). The water transport component uses a nonlinear velocity formulation (Ghimire et al. 2018; Mantilla 2007) and aggregates water from two upstream links, as appropriate. We selected the HLM model for this study because it has origins in the scaling properties of the river network and thus readily allows scale-based analyses. The model is sensitive to the rain rate, but the sensitivity changes with basin scale: at larger scales, water transport through the river network smooths out the short-scale temporal fluctuations, and it is the total volume of rainfall at the basin scale that determines the basin response (e.g., G. R. Ghimire et al. 2021, unpublished manuscript). The version of the model used for our study is relatively simple to implement, yet credibly reproduces observed streamflow across scales. Moreover, the model is not calibrated to any basin (or scale) or rainfall product and thus, in principle, given a more accurate input should produce a better streamflow prediction. Refer to, e.g., Krajewski et al. (2017), Quintero et al. (2016), and Quintero et al. (2020) for more details on HLM.

b. Experimental design

We set up our investigation in two phases. The first phase entails the rainfall-to-rainfall evaluation. We compute errors and uncertainties associated with the QPFs with respect to a surrogate for the perfect QPF, i.e., the MRMS QPE. We use standard verification techniques (see section 3c) to evaluate the QPFs independent of the hydrologic model.

The second phase involves exploring the propagation of QPF uncertainties to streamflow forecasts, which requires the use of a hydrologic forecasting model. A typical real-time streamflow forecasting system is a two-step process. The first step is the standard analysis step, also referred to as the updating procedure, which creates initial conditions every hour for running the forecasting model. We use the observed rainfall, in this case the MRMS QPE, to simulate the states of the system. The output from this simulation provides states of the system at the forecast issue time (0 h of Fig. 4a) to run the hydrologic forecasting model in the second step. In addition, we implement an updating procedure by Collischonn et al. (2005) that can potentially improve the streamflow forecasts by updating the initial states of the system with the observed streamflow, which one could think of as the simplest form of data assimilation. Acknowledging the complexity of updating the large number of states in the forecasting model, the simplest way to implement the scheme is to compare the simulated streamflow at the USGS stream gauge site with the observed streamflow. We compute the streamflow updating correction factor (refer to Collischonn et al. 2005) called FCA given by
$\mathrm{FCA}_k = \dfrac{Q_{\mathrm{obs}}}{Q_{\mathrm{sim}}}$, (1)
where $\mathrm{FCA}_k$ is the updating correction factor at link k where streamflow observations are available, $Q_{\mathrm{obs}}$ is the observed streamflow at link k, and $Q_{\mathrm{sim}}$ is the simulated streamflow at link k. Then, the corrected streamflow at any link i upstream of link k in the river network is computed by Eq. (2):
$Q_{\mathrm{up}}^{i,k} = \mathrm{FCA}_k \times Q_{\mathrm{sim}}^{i} \times \dfrac{A_i}{A_k} + Q_{\mathrm{sim}}^{i}\left(1 - \dfrac{A_i}{A_k}\right)$, (2)
where $Q_{\mathrm{up}}^{i,k}$ is the corrected streamflow at link i, $Q_{\mathrm{sim}}^{i}$ is the simulated streamflow at link i, $A_i$ is the upstream drainage area at link i, and $A_k$ is the upstream drainage area at link k. We provide in Fig. 5 a schematic for updating the streamflow using Eq. (2). For example, we use the observed streamflow at Osage to update the simulated streamflow at links 1 and 2. Note that the effect of the streamflow update at link 2 will be more prominent than at link 1 because link 2 drains a larger fraction of the gauged area. Similarly, at links 3 and 4 (i.e., for all links in the green subbasin), we use the observed streamflow at Janesville to update the simulated streamflow. During dry periods, channel flow is mainly fed by groundwater flow. Therefore, we correct the groundwater storage volume $S_s$ in HLM employing the same correction factor $\mathrm{FCA}_k$. The corrections are weighted by the fraction of groundwater flux in the streamflow in the drainage network, $P_S^{i}$. The updated groundwater storage $S_s^{\mathrm{up},i}$ is then given by Eq. (3):
$S_s^{\mathrm{up},i} = \mathrm{FCA}_k \times S_s^{i} \times P_S^{i} + S_s^{i}\left(1 - P_S^{i}\right)$, (3)
where $S_s^{i}$ is the simulated groundwater storage at link i and $P_S^{i}$ is the fraction of streamflow at link i contributed by groundwater flux from the corresponding hillslope.
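The updating procedure described above (the FCA factor, the area-weighted streamflow correction, and the groundwater-storage correction) can be sketched in a few lines. This is a minimal illustration, not IFC/HLM code; all function and variable names, and the example numbers, are hypothetical.

```python
def fca(q_obs_k, q_sim_k):
    """Updating correction factor FCA_k at a gauged link k."""
    return q_obs_k / q_sim_k

def update_streamflow(q_sim_i, fca_k, area_i, area_k):
    """Corrected streamflow at a link i upstream of gauged link k.

    The correction is area-weighted: links draining a larger share of the
    gauged drainage area A_k receive a stronger correction.
    """
    w = area_i / area_k
    return fca_k * q_sim_i * w + q_sim_i * (1.0 - w)

def update_groundwater(s_sim_i, fca_k, p_s_i):
    """Corrected groundwater storage at link i, weighted by the
    groundwater fraction P_S^i of streamflow at that link."""
    return fca_k * s_sim_i * p_s_i + s_sim_i * (1.0 - p_s_i)

# Hypothetical example: the gauge observes 120 m^3/s, the model simulates 100 m^3/s.
f = fca(120.0, 100.0)                                # FCA_k = 1.2
q_far = update_streamflow(10.0, f, 500.0, 13300.0)   # mild correction far upstream
q_near = update_streamflow(80.0, f, 12000.0, 13300.0)  # strong correction near the gauge
```

The behavior mirrors the Fig. 5 discussion: a link draining only a small fraction of the gauged area is barely adjusted, while a link just upstream of the gauge is pulled strongly toward the observation.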
Fig. 5. Schematic for updating states of the system using observed streamflow. The update locations (1, 2, 3, and 4) represent the locations at which initial simulated states are updated with the observed streamflow at USGS stations. For example, simulated streamflow at forecast issue time at 1 and 2 are updated using the observed streamflow at the Osage station.


In the second step of the experiment, we run the IFC flood forecasting model independently for the three QPF inputs discussed in section 2c and the no-rain forecast (see Fig. 4a). The no-rain forecast assumes that there is no rainfall anywhere in the basin during the forecast horizon. The real-time forecast scheme we employ here is consistent with the legacy forecasts used by the NWS and NWM. We issue forecasts every hour with HRRR out to 18 h (the forecast model was run up to 5 days) and every 6 h with GFS out to 10 days (the forecast model was run up to 15 days). For continuity and simplicity in the implementation of GFS, we use 6-h accumulation intervals (magenta bars in Figs. 4a and 4b).

c. Evaluation metrics

We compute errors in QPFs with respect to QPEs using both categorical and continuous verification measures. The categorical verification measures correspond to the performance of the QPFs associated with the detection of rainfall (e.g., CAWCR 2017; Seo et al. 2018). These measures are computed based on a rain/no-rain comparison between QPF and QPE grids using rainfall detection thresholds. We use three rainfall thresholds of 0.5, 1, and 2 mm. The measures are probability of detection (POD), false alarm rate (FAR), frequency bias (FB), and Gilbert skill score (GSS). These measures are computed based on the contingency table presented in Table 2 and Eqs. (4)–(7):
$\mathrm{POD} = \dfrac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$, (4)

$\mathrm{FAR} = \dfrac{\mathrm{FP}}{\mathrm{TP}+\mathrm{FP}}$, (5)

$\mathrm{FB} = \dfrac{\mathrm{TP}+\mathrm{FP}}{\mathrm{TP}+\mathrm{FN}}$, (6)

$\mathrm{GSS} = \dfrac{\mathrm{TP}\times\mathrm{TN}-\mathrm{FN}\times\mathrm{FP}}{(\mathrm{FN}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN}+\mathrm{FP}+\mathrm{TN})+(\mathrm{TP}\times\mathrm{TN}-\mathrm{FN}\times\mathrm{FP})}$. (7)
The POD [0, 1] measures the fraction of correctly forecasted rainfall grids to the total number of observed rainfall grids, with a perfect score of 1. The FAR [0, 1] measures the fraction of forecasted rainfall grids where rainfall actually did not occur, with 0 representing the perfect score. The FB [0, ∞] is the ratio of the total number of forecasted rainfall grids to the total number of observed rainfall grids and signifies whether the forecasts show a tendency to underforecast (FB < 1) or overforecast (FB > 1). The GSS [−1/3, 1], also referred to as the equitable threat score, depicts the overall skill in correctly detecting rainfall, with 1 representing a perfect score. Because the GSS accounts for hits expected by chance, it provides a verification of rainfall forecasts that can be compared more fairly across regimes.
Table 2. Contingency table for categorical verification of QPFs. TP, FP, TN, and FN represent true positive, false positive, true negative, and false negative, respectively.
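As a concrete illustration of the contingency table and the four categorical measures above, the counts and scores can be computed from QPF and QPE grids as follows. This is a minimal sketch assuming the two grids are already matched in space and time; all names are hypothetical.

```python
import numpy as np

def contingency(qpf, qpe, thresh=1.0):
    """Count TP, FP, FN, TN between forecast and reference grids
    using a rain/no-rain detection threshold (mm)."""
    f = np.asarray(qpf) >= thresh   # forecast says rain
    o = np.asarray(qpe) >= thresh   # reference says rain
    tp = np.sum(f & o)
    fp = np.sum(f & ~o)
    fn = np.sum(~f & o)
    tn = np.sum(~f & ~o)
    return tp, fp, fn, tn

def categorical_scores(tp, fp, fn, tn):
    """POD, FAR, FB, and GSS from the contingency counts."""
    pod = tp / (tp + fn)
    far = fp / (tp + fp)
    fb = (tp + fp) / (tp + fn)
    hits_term = tp * tn - fn * fp  # numerator shared by GSS
    gss = hits_term / ((fn + fp) * (tp + fn + fp + tn) + hits_term)
    return pod, far, fb, gss
```

This form of the GSS is algebraically equivalent to the familiar equitable threat score (hits minus hits expected by chance, over events minus chance hits).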
The continuous verification of QPFs quantifies the forecasting skill in terms of the amount of rainfall with respect to QPEs. For the verification, we use the mean areal precipitation (MAP) across USGS basins (refer to Quintero et al. 2016). Here we employ bias (B), correlation (r), root-mean-square difference (RMSD), and mean absolute difference (MAD). We refer to these statistics as “differences” rather than “errors” because the reference QPE is corrupted with significant uncertainty of its own (e.g., Ciach et al. 2007; Villarini and Krajewski 2010; Seo et al. 2018). The bias B computed here is multiplicative, depicting the under- or overprediction of rainfall volume in a basin:
$B = \dfrac{\sum_t \mathrm{MAP}_{\mathrm{QPF}}(t)}{\sum_t \mathrm{MAP}_{\mathrm{QPE}}(t)}$, (8)

$\mathrm{RMSD} = \sqrt{\dfrac{\sum \left(\mathrm{MAP}_{\mathrm{QPF}} - \mathrm{MAP}_{\mathrm{QPE}}\right)^2}{N}}$, (9)

$\mathrm{MAD} = \dfrac{\sum \left|\mathrm{MAP}_{\mathrm{QPF}} - \mathrm{MAP}_{\mathrm{QPE}}\right|}{N}$, (10)

where $\mathrm{MAP}_{\mathrm{QPF}}$ and $\mathrm{MAP}_{\mathrm{QPE}}$ correspond to the MAP for QPF and QPE, respectively, and $N$ is the number of time intervals.
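The continuous measures above (B, RMSD, and MAD) can be sketched directly from paired MAP series. A minimal illustration assuming the forecast and reference series are already aligned in time (names hypothetical):

```python
import numpy as np

def continuous_scores(map_qpf, map_qpe):
    """Multiplicative bias B, RMSD, and MAD between forecast and
    reference mean areal precipitation (MAP) time series."""
    f = np.asarray(map_qpf, dtype=float)
    o = np.asarray(map_qpe, dtype=float)
    b = f.sum() / o.sum()                    # multiplicative volume bias
    rmsd = np.sqrt(np.mean((f - o) ** 2))    # root-mean-square difference
    mad = np.mean(np.abs(f - o))             # mean absolute difference
    return b, rmsd, mad
```

Note that B can equal 1 (unbiased total volume) even when RMSD and MAD are large, since compensating timing errors cancel in the sums but not in the differences.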
Exploring the value of QPFs for streamflow forecasting across scales requires a continuous evaluation of QPF-derived streamflow forecasts. We evaluate the streamflow forecasts at the 140 USGS streamflow monitoring sites across Iowa. We use standard streamflow forecast evaluation metrics. As emphasized in section 1, we follow the approach of, e.g., Adams and Dymond (2018) to evaluate the ensemble forecasts for the medium-range using the ensemble mean and median. The metrics we report here are Kling–Gupta efficiency (KGE), normalized MAE (nMAE), hydrograph timing (TH), and percent peak difference (PD). The KGE involves three components: 1) Pearson’s correlation (r), 2) variance ratio (α), and 3) mean ratio (β) (see Gupta et al. 2009). The ideal value of the KGE is equal to 1, which can be achieved when each component attains the value of 1:
$\mathrm{KGE} = 1 - \sqrt{(r-1)^2 + (\alpha-1)^2 + (\beta-1)^2}$, (11)

where $r$ is the correlation, $\alpha = \sigma_f/\sigma_o$, $\beta = \mu_f/\mu_o$, $\sigma_f$ is the standard deviation of forecasts, $\sigma_o$ is the standard deviation of observations, $\mu_f$ is the mean of forecasts, and $\mu_o$ is the mean of observations. For instance, if $\alpha$ and $\beta$ are close to 1, the KGE is dominated by $r$. Knoben et al. (2019) illustrated that the interpretation of the KGE should not be guided by our typical understanding of Nash–Sutcliffe efficiency (NSE) values, for which only positive values are considered as skillful forecasts.
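The KGE and its three components can be computed from paired forecast and observation series. A minimal sketch (not the authors' evaluation code; names hypothetical):

```python
import numpy as np

def kge(forecast, observed):
    """Kling-Gupta efficiency with its components r, alpha, beta."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    r = np.corrcoef(f, o)[0, 1]    # Pearson correlation
    alpha = f.std() / o.std()      # variability ratio
    beta = f.mean() / o.mean()     # mean (bias) ratio
    score = 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
    return score, r, alpha, beta
```

Consistent with Knoben et al. (2019), a mean-flow benchmark forecast yields KGE = 1 − √2 ≈ −0.41 rather than 0, so KGE values should not be read on the NSE scale.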
The nMAE is computed similarly to Eq. (10), except that we normalize it by the upstream drainage area of the basin. The TH depicts the timing agreement between the streamflow forecasts and observations. We compute it as the number of hours a forecasted time series needs to be shifted so that its cross correlation with the observations is maximized. A positive value of TH indicates a delay in the timing of forecasts, while a negative value indicates early timing. We compute the PD as
$\mathrm{PD} = \dfrac{\mathrm{peak}_f - \mathrm{peak}_o}{\mathrm{peak}_o} \times 100$, (12)

where $\mathrm{peak}_f$ and $\mathrm{peak}_o$ are the annual peaks of the forecasted and observed streamflow time series, respectively.
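The TH and PD metrics can be sketched as follows. The timing search is one plausible implementation of the cross-correlation shift described above; the maximum search window is an assumption, and all names are hypothetical.

```python
import numpy as np

def peak_difference(peak_f, peak_o):
    """Percent peak difference PD: positive means an overpredicted peak."""
    return (peak_f - peak_o) / peak_o * 100.0

def hydrograph_timing(forecast, observed, max_shift=12):
    """TH: the shift (in time steps, e.g., hours) that maximizes the cross
    correlation between forecast and observation. Positive TH means the
    forecast hydrograph is late; negative TH means it is early."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    n = len(f)
    best_shift, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = f[s:], o[:n - s] if s > 0 else o  # slide forecast back by s
        else:
            a, b = f[:n + s], o[-s:]                 # slide forecast forward
        if len(a) < 2 or a.std() == 0 or b.std() == 0:
            continue  # not enough overlap, or no variability to correlate
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best_shift = r, s
    return best_shift
```

For a forecast hydrograph that is an exact copy of the observed one delayed by three time steps, this search returns TH = +3.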

4. Results

a. Categorical verification of QPFs

To understand the QPF errors and uncertainties in rainfall detection, we use Eqs. (4)–(7) and compute the categorical standard verification measures. Note that we evaluate QPFs on an annual basis. In Fig. 6, we show POD, FAR, and GSS for the HRRR QPF. Two distinct patterns emerge from this figure. First, there is a clear systematic dependence of these measures on the rainfall threshold selected for the detection of rainfall across years. The smaller the threshold, the higher the detection skill associated with the HRRR QPF. Second, a clear dependence of these three measures on forecast lead time emerges across rainfall thresholds. Note that the skill deteriorates significantly after a lead time of 1 h and steadily declines as lead time increases. Since POD and FAR behave in opposite directions, the overall behavior of forecasting skill is captured by GSS. The HRRR QPF shows a GSS of ~0.3 on average after a lead time of 1 h, clearly indicating that there is significant room for improvement in correctly detecting the observed rainfall. As Seo et al. (2018) indicated, the initially higher forecasting skill of the HRRR QPF arises from the data assimilation scheme employed in the HRRR algorithm.

Fig. 6.

Categorical verification of the HRRR QPF relative to the MRMS QPE across three rainfall thresholds of 0.5, 1, and 2 mm for the year 2016. Each column corresponds to the categorical verification measures, i.e., POD, FAR, and GSS, respectively. Categorical verification results for 2017 and 2018 show very similar patterns.

Citation: Journal of Hydrometeorology 22, 7; 10.1175/JHM-D-20-0297.1

We also compute POD, FAR, and GSS for the GFS QPF (most recent forecast cycle), presented in Fig. 7. The relationship between the skill measures and the forecast lead times across rainfall thresholds and years is similar to the one demonstrated by the HRRR QPF. Note, however, that POD and GSS decay more slowly with lead time than we observe with HRRR. For example, POD at a 1-mm threshold is ~0.6 for a lead time of 2 days, which HRRR demonstrates only for the first hour. We suggest that these measures reflect the detection skill of the QPF, and the GFS QPF depicts a relatively higher skill largely due to its coarser resolution. However, verifying this by upscaling the HRRR is difficult due to a significant mismatch in the forecast horizon limit. Also, we kept the rainfall threshold the same for the GFS QPF, which uses 3- and 6-h accumulation intervals, hence the higher likelihood of achieving higher detection skill. We also compute FB for both QPFs. As Fig. 8 shows, there is a clear distinction between the two QPFs in terms of FB. At longer lead times in the short range, HRRR shows reasonable skill (FB ~ 1 on average), while it overforecasts at shorter lead times. GFS, however, mostly overforecasts (FB > 1 on average) across lead times, which we largely attribute to the coarser resolution of GFS (~25 km) relative to HRRR (~3 km). The categorical forecasting skill reported here is broadly similar to that in Moser et al. (2015) and Seo et al. (2018). We discuss in detail how these QPF uncertainties in terms of rainfall detection propagate to the hydrologic forecasts in section 4c.

Fig. 7.

Categorical verification of the GFS QPF relative to the MRMS QPE across three rainfall thresholds of 0.5, 1, and 2 mm for the year 2016. Each column corresponds to the categorical verification measures, i.e., POD, FAR, and GSS, respectively. Categorical verification results for 2017 and 2018 show similar patterns.


Fig. 8.

Categorical verification measure, frequency bias (FB) of the HRRR and GFS relative to the MRMS QPE across three rainfall thresholds of 0.5, 1, and 2 mm for the year 2016. Categorical verification results for 2017 and 2018 show similar patterns.


As Fig. 1 alluded, the displacement in space could have implications for the detectability of rainfall across basin scales. The categorical evaluation across spatial scales shows an indication of that signature (see Fig. S1 in the online supplemental material). The smaller basins demonstrate significantly greater variability in categorical skill than the larger ones, despite not showing a very strong signature on average.

b. Continuous verification of QPFs

The categorical verification measures only represent QPF uncertainties in correctly detecting rainfall, and hence cannot completely illuminate the impact on hydrologic forecasts. A rainfall quantity that has hydrologic relevance at the catchment scale is the MAP (e.g., Quintero et al. 2016). Here we perform the continuous evaluation of the QPFs in terms of MAP using Eqs. (8)–(10). In Fig. 9, we show the continuous verification metrics r, B, RMSD, and MAD for the HRRR QPF across USGS basin scales and forecast lead times for 2016. A clear dependence structure emerges between these measures and catchment scale, as we anticipated in Fig. 1. As the basin size increases, the forecasting skill increases while the corresponding variability decreases. None of the four metrics shows a significant change in the dependence structure across forecast lead times. However, note the increase in the variability with increasing lead times, particularly for smaller basins. The bias B shows overprediction, particularly up to a lead time of 6 h, while it remains close to 1 on average for longer lead times in the short range. Note the similar behavior illustrated by FB in Fig. 8.
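The continuous measures can be sketched compactly as well. Since Eqs. (8)–(10) are not reproduced in this excerpt, the sketch below assumes the standard definitions of correlation, multiplicative bias, RMSD, and MAD applied to the observed and forecast MAP series; the function name is ours.

```python
import numpy as np

def continuous_scores(map_obs, map_fcst):
    """Continuous verification of forecast vs. observed mean areal
    precipitation (MAP); standard definitions assumed."""
    r = np.corrcoef(map_obs, map_fcst)[0, 1]            # correlation
    B = np.sum(map_fcst) / np.sum(map_obs)              # multiplicative bias (1 = unbiased)
    rmsd = np.sqrt(np.mean((map_fcst - map_obs) ** 2))  # root-mean-square difference
    mad = np.mean(np.abs(map_fcst - map_obs))           # mean absolute difference
    return r, B, rmsd, mad
```

Note that r is insensitive to a constant multiplicative error, which is why the text argues for examining r and B together: a forecast with r near 1 but B of 2 would still double the rainfall volume delivered to the hydrologic model.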

Fig. 9.

Continuous verification of the HRRR QPF relative to the MRMS QPE based on mean areal precipitation (MAP) for the year 2016. Each row corresponds to lead times of 3, 6, 12, and 15 h, respectively, while each column corresponds to the continuous verification measures, i.e., correlation (r), RMSD, MAD, and multiplicative bias (B), respectively. Verification results for 2017 and 2018 show similar patterns.


We observe behavior of the GFS QPF similar to that of HRRR across spatiotemporal scales (Fig. 10). Note, however, a clear distinction in terms of r: with increasing lead times, r declines rapidly, particularly after 2 days. The overall values of RMSD and MAD appear larger, which we can attribute to the 6-h accumulation intervals associated with GFS.

Fig. 10.

Continuous verification of the GFS QPF relative to the MRMS QPE based on mean areal precipitation (MAP) for the year 2016. Each row corresponds to lead times of 24, 48, 96, and 192 h, respectively, while each column corresponds to the continuous verification measures, i.e., correlation (r), RMSD, MAD, and multiplicative bias (B), respectively. Verification results for 2017 and 2018 show similar patterns.


c. Hydrologic evaluation of QPFs for streamflow forecasting

Hydrologic evaluation of QPFs requires one to employ a hydrologic forecasting model as outlined in section 3 and systematically investigate the propagation of QPF uncertainties. Here, we produce streamflow forecasts using the three QPFs discussed before in addition to the no-rain forecast and evaluate the predictability of streamflow across forecast lead times, catchment scales, and space–time scale of the QPFs. In the following discussion, we focus particularly on the hydrologic predictability associated with these forecasts with emphasis on the KGE.

A clear signature emerges from the relationship between the KGE and basin size across forecast lead times (see Fig. 11 and Fig. S2). Across forecast lead times, the variability of the KGE (Fig. 11), in terms of both median and interquartile range, shows a clear dependence on catchment size for both forecast scenarios, i.e., without and with streamflow update. Note that the boxplots represent the distribution of the KGE across all years and basins pooled together. The results indicate a systematic decrease of the median and interquartile range of forecasting skill associated with the MRMS, HRRR, no-rain, and HRRR-only forecasts with lead times in the short range. As expected, the MRMS QPE, which can be treated as a surrogate for a perfect QPF, shows the best performance among the forcings. When the updating procedure outlined in Eqs. (1) and (2) is adopted, the MRMS QPE shows changes in the KGE with lead time (right column of Fig. 11), as opposed to the constant KGE obtained when the streamflow update is not performed (left column of Fig. 11). As we alluded to previously, updating streamflow while issuing the forecast clearly enhances forecasting skill across both space and time scales. There are two criteria by which one could decide whether QPFs add value to streamflow forecasting: the resulting forecasts should be skillful, and they should improve on the no-rain forecast. If forecasts forced with a QPF perform no better than the no-rain forecast, the QPF demonstrates no added value. In particular, at longer lead times in the short range (12 and 15 h), the HRRR QPF is skillful and shows improvement (i.e., an increase in the median and a reduced interquartile range of the KGE) over the no-rain forecast for basins below 1000 km2. There is still room for improvement to at least match the performance of the QPE (MRMS).
For basins larger than 1000 km2, the forecasting skill across QPFs does not reveal any significant change in its relationship with forecast lead time (also refer to Fig. S2). Further, we explore the relationship of nMAE, HT, and PD with basin scale (Figs. S3–S5). The scale dependence of these metrics is more apparent than that of the KGE. These metrics also illustrate that basins larger than 1000 km2 show enhanced predictability over the no-rain forecast. Note that HT generally increases with increasing basin size (Fig. S4); because it is not a normalized quantity, HT increases with the increasing time of concentration for larger basins. The overall results of the HRRR QPF for 2017 and 2018 show a similar signature.

Fig. 11.

Variability of the KGE with basin scales across forecast lead times for the HRRR QPF. “HRRR QPF only” refers to the forecasting skills associated with the HRRR QPF without updating with observed rainfall, i.e., MRMS QPE. Boxplots of the KGE correspond to streamflow forecasts (left) without and (right) with streamflow update using observations. The boxplots depict the variability of the KGE conditional on basin size in the horizontal axis. For example, boxplots at 1000 km2 correspond to the basin size below 1000 km2, boxplots at 10 000 km2 correspond to the basin size between 1000 and 10 000 km2, and so on. The horizontal line inside each box represents the median while the box represents the interquartile range (75th quartile–25th quartile). The whiskers represent 1.5 times the standard deviation from the mean.


In the medium-range forecasts using the GFS QPF, the KGEs reveal a pattern (Fig. 12) similar to that of the HRRR QPF. Here, we use the forecast issued every 6 h for the streamflow forecast evaluation, owing to the 6-h accumulation interval (temporal resolution) of the GFS. The KGE skill for the GFS QPF in Fig. 12 corresponds to the eight-member ensemble mean. In addition to enhanced performance using the streamflow updating procedure (right column of Fig. 12), the QPFs show an increase in the KGE with increasing basin size and, as expected, a decrease with increasing forecast lead time. Though we computed skills up to 10 days, we show the results only up to 4 days for brevity, as the patterns do not change at longer lead times. We discuss the temporal evolution of forecasting skill across the entire medium-range time horizon in Fig. 13. Note the clear improvement of the GFS QPF over no-rain forecasts for basin sizes even up to 10 000 km2 (Fig. 12 and Fig. S6). Also, GFS-only forecasts significantly underperform if we do not update the forecast in real time using QPEs. The results show that the GFS QPF manifests its clear value for basin scales from around 1000 up to 5000 km2 (Fig. S6) across forecast lead times. Despite the GFS QPF showing improvement over no-rain forecasts, its forecasting skill deteriorates (median KGE < 0) for basin sizes below 1000 km2 and lead times greater than 48 h. Our results for nMAE, HT, and PD across basin size (Figs. S7–S9) corroborate the conclusions from the relational structure of the KGE. For larger basins in the short to medium range, however, the differences in performance across QPF combinations are generally indistinguishable.

Fig. 12.

Variability of the KGE with basin scales across forecast lead times for the GFS QPF. “GFS QPF only” refers to the forecast skills associated with the GFS QPF without updating with observed rainfall, i.e., MRMS QPE. QPE + GFS QPF (mean) corresponds to the GFS ensemble mean. Boxplots of the KGE correspond to streamflow forecasts (left) without and (right) with streamflow update using observations. The boxplots depict the variability of the KGE conditional on basin size in the horizontal axis. For example, boxplots at 1000 km2 correspond to the basin size below 1000 km2, boxplots at 10 000 km2 correspond to the basin size between 1000 and 10 000 km2, and so on.


Fig. 13.

(top) Evolution of the KGE across forecast lead times for three representative nested basin scales shown in columns (refer to the basins in the inset). The color code corresponds to the different QPF combinations. (bottom) The enlarged view of the top row up to lead time of 24 h to highlight the evolution of the KGE associated with the HRRR QPF. Note that forecasting skill presented here corresponds to results in the left columns of Figs. 11 and 12.


In Fig. 13, we show the evolution of the KGE across forecast lead times for three nested basins in the Cedar River. Here we can capture the hydrologic predictability across the full spectrum of forecast lead times in the short to medium range. For the small basin (770 km2), one observes the largest improvement of the GFS and HRRR QPFs over the no-rain forecast, with systematically smaller improvement at larger basin scales. For example, at a 48-h lead time, the GFS QPF retains some forecasting skill (KGE ~ 0.25) while the no-rain forecast has already lost skill (−0.41 < KGE < 0). Note that the evolution of the KGE at longer lead times in the medium range for smaller basins shows some irregular jumps. We can largely attribute this to the space–time interaction of the GFS QPF with smaller basins. Also, the GFS model does not entail data assimilation schemes, thus contributing to initial jumps. Note the remarkable deterioration of correlation r and bias B at longer lead times in smaller basins in Fig. 10. However, as the basin scale increases, the impact of QPF uncertainty on forecasting skill diminishes, and the KGE remains above 0.5.

An important aspect of exploring the impact of QPFs on streamflow forecasting is the space–time interaction of QPFs with basins in terms of their response time. Normalizing the forecast lead time by the travel time of the basin allows one to compare small and large basins in a meaningful way. Defining tr as the ratio of forecast lead time tL to the travel time of the basin tt, we compute tt for the longest channel of the basin using a constant flow velocity of 0.7 m s−1. This velocity roughly corresponds to the average velocity in Iowa streams (Ghimire et al. 2018). We present the variability of the KGE with the space–time scale represented by the ratio tr in Fig. 14. The tr depicts the implicit contribution of the river network topology to the propagation of QPF uncertainties to streamflow forecasts.
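The reference-time computation above reduces to a one-line unit conversion; the sketch below (our function name, with a hypothetical basin used only to illustrate the unit handling) shows it explicitly.

```python
def reference_time(lead_time_h, channel_length_km, velocity_ms=0.7):
    """tr = tL / tt: forecast lead time over basin travel time, with tt
    computed for the longest channel at a constant flow velocity
    (0.7 m/s, roughly the average for Iowa streams)."""
    travel_time_h = channel_length_km * 1000.0 / velocity_ms / 3600.0  # hours
    return lead_time_h / travel_time_h

# Hypothetical basin with a 50-km longest channel: tt is about 19.8 h,
# so a 15-h HRRR lead time gives tr < 1, while a 24-h GFS lead time gives tr > 1.
```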

Fig. 14.

Variability of the KGE with space–time scale of the HRRR pertaining to streamflow forecasts (a) without and (b) with real-time streamflow update, respectively. The tr is the ratio of forecast lead time tL to the travel time of the basin tt. The tt is computed for the longest channel of the basin using the constant flow velocity of 0.7 m s−1. The boxplots at tr = 1, for example, comprise all data points in the range 0 ≤ tr < 1, boxplots at tr = 2 comprise all data points in the range 1 ≤ tr < 2, and so on.


The results in Fig. 14 show that the performance across the QPFs in the short range is generally similar for tr < 1 for both forecast scenarios. This is the case when not all stormwater reaches the basin outlet within the lead time of the forecast. As tr exceeds 1, the KGE (note also the median KGE) starts declining sharply. This suggests that for tr > 1, the stormwater will have already reached the basin outlet within the forecast lead time. From the results in Fig. 11 and Figs. S2–S5, we infer that this occurs at longer lead times (e.g., 12 and 15 h) for smaller basins. The implication is that the best performance could be achieved for basins whose response time is on the order of, or longer than, the QPF's forecast lead time. In other words, the dynamics of the storm within the basin can be more credibly captured for tr < 1. Note, however, that for tr > 1 the HRRR QPF begins to show clear improvement over the no-rain forecast, while also highlighting the room for improvement to match the performance of the QPE. The results show a very similar pattern for the GFS QPF in terms of space–time variation (Fig. 15). Note that a larger range of tr is shown here because the longest forecast lead time is up to 10 days. Again, for tr > 1, the sharp decline of the KGE (i.e., the decrease of the median and the increase of the variability for the GFS ensemble mean) is apparent across years. We observe that the GFS QPF starts losing its forecasting skill (KGE < 0) as tr exceeds ~6, despite showing consistent improvement over the no-rain forecast.

Fig. 15.

Variability of the KGE with space–time scale of the GFS pertaining to streamflow forecasts (a) without and (b) with real-time streamflow update, respectively. The boxplots at tr = 1, for example, comprise all data points in the range 0 ≤ tr < 1, boxplots at tr = 2 comprise all data points in the range 1 ≤ tr < 2, and so on.


5. Discussion

The results from this study provide some key insights regarding the value of QPFs for streamflow forecasting. We discuss the results and their implications in terms of the assessment of QPF uncertainties and the propagation of QPF uncertainties from short- to medium-range real-time hydrologic forecasts across spatial scales.

The categorical and continuous verification results of QPFs carry different implications for streamflow forecasting. For example, we observe that the overall rainfall detection skill of QPFs in the short to medium range is low (GSS of about 0.3) across years (Figs. 6 and 7). Seo et al. (2018) suggest that hydrologic predictability for such QPFs will also be low. Here we argue that a more relevant quantity for examining the propagation of QPF uncertainties to hydrologic forecasts is the combination of correlation r and bias B at basin scales (Figs. 9 and 10). The results clearly highlight that one should not rely on categorical verification skill alone to assess the underlying QPF uncertainties from a hydrologic forecasting point of view.

The comprehensive evaluation of QPF-based streamflow forecasts at 140 USGS-monitored basins in Iowa provides some key insights in relation to scales. We explored in detail the added value of QPFs across basin scales, forecast lead times, and space–time scale of the QPF, which is of primary importance for the hydrologic forecasting community. The results for both QPFs (Figs. 11 and 12) show that the value of using QPFs is more apparent at longer lead times than shorter ones in the short to medium range (e.g., Adams and Dymond 2019). Note the similar performance of the QPFs relative to the QPE and no-rain forecasts at shorter lead times. A possible explanation is that the updating procedure of the real-time streamflow forecasting has a large influence on the forecasts in this range. There is enough water in the channel networks before they receive any contribution from the QPF forcing, thereby contributing to this result. We find the behavior of forecasting skill across forecast lead times consistent with the findings from similar studies (e.g., Collischonn et al. 2005; Adams and Dymond 2018, 2019; Vivoni et al. 2006).

It is also of critical importance for forecasters to have insights into the trade-offs for reasonable forecasting skill across basin scales and forecast lead times, which we alluded to in section 4. One key point highlighted by Adams and Dymond (2019) in exploring the effect of QPFs on stage forecasting errors is the flood-level stratification. Adams and Dymond (2019) stratify the stage forecasts above and below the flood levels and clearly show that stage forecast errors above flood levels are much higher for fast-responding basins. Therefore, we speculate that the trade-offs across lead times and basin scales might be different for flooding and nonflooding conditions. Also, note that in the medium range, one could see an added value of using QPFs when reported in terms of the ensemble mean/median. Our findings are congruous with those from some recent studies such as Adams and Dymond (2018). The space–time scale tr, which can aptly describe the interaction of storm dynamics with basins, provides a more suitable measure for forecasting skill evaluation due to its normalized form. Though Vivoni et al. (2006) explored it only in the short range, our results across the short- to medium-range spectrum reveal a similar dependence on tr. We also show that we can achieve the highest performance for QPFs when tr < 1. For tr < 1 (generally for larger basins), the runoff volume already in the river network dominates the overall performance. For tr > 1 (generally for smaller basins), however, the quality of QPFs as explained by the categorical and continuous measures (Figs. 6–10) adds more value to the overall performance. Thus, tr ~ 1 generally depicts the largest streamflow forecasting skill, with a systematic decrease in forecasting accuracy for tr > 1. Note that there is enough room for bridging the gap with QPE-based (perfect QPF) forecasts at longer lead times in the medium range.
Various approaches, such as increasing the ensemble size and using multimodel ensemble schemes, have been discussed (e.g., Cuo et al. 2011; Wu et al. 2020; Cloke and Pappenberger 2009) and practiced to incorporate the full range of uncertainty. The QPFs used for short- to medium-range streamflow forecasting in this study represent different spatial and temporal resolutions. The uncertainty arising from this aspect is a separate issue, which has been discussed sufficiently in the literature (e.g., Lobligeois et al. 2014; Quintero et al. 2016; Ghimire and Krajewski 2020). Note that our focus here is to explore the worth of these QPFs as is for real-time streamflow forecasting.

6. Conclusions

In this study, we acknowledge the errors and uncertainties associated with QPFs and explore the scale-dependent value of QPFs for real-time hydrologic forecasting. We emphasize the trade-offs between the forecast lead time, basin size, space–time scale of the QPF, and streamflow forecasting accuracy. We address this issue through a comprehensive evaluation of QPFs at 140 USGS-monitored basins (~10–40 000 km2) across Iowa for the years 2016–18. We use HRRR and GFS QPFs for short- and medium-range streamflow forecasting, respectively, conforming to the scheme employed by the NWM. First, we conduct an independent evaluation of both QPFs with respect to the MRMS QPE to assess QPF uncertainties. Second, we use the IFC flood forecasting model to investigate the propagation of QPF uncertainties to hydrologic forecasts. The results of this study indicate the following:

  1. The QPF errors and uncertainties from rainfall-to-rainfall evaluation are scale dependent. The errors generally increase with increasing forecast lead times and decrease with increasing basin scales in the short to medium range.

  2. The overall results show that both QPFs clearly show added value for real-time streamflow forecasting at longer lead times in the short to medium range. Their value is particularly apparent for basin sizes below 1000 km2. The results also indicate room for improvement of QPF-based streamflow forecasting accuracy across scales.

  3. The tr ~ 1 generally depicts the largest streamflow forecasting skill, with a systematic decrease in forecasting accuracy for tr > 1. The value of using QPFs is, however, more apparent for tr > 1.

Our study is subject to several limitations. For example, we have not accounted for the uncertainty arising from the incompatibility of spatial and temporal resolutions associated with the QPFs. Several studies (e.g., Lobligeois et al. 2014; Quintero et al. 2016; G. R. Ghimire et al. 2021, unpublished manuscript) have explored this effect on streamflow predictions. We also have not accounted for inherent forecasting model uncertainty. The use of persistence-based QPFs has been shown to improve forecasting skill, particularly at short lead times (Wilson et al. 1998; Lin et al. 2005; Seo et al. 2018).

Note that the QPF scheme used in this study for real-time streamflow forecasting conforms to the scheme used by the NWS for the NWM. Therefore, our findings have implications for guiding the evaluation of NWM forecasting skill across the CONUS, in addition to the roughly 4000 river forecast locations. The hydrologic forecasters at NOAA/NWS could clearly benefit from the insights developed here in terms of trade-offs between forecast lead time, basin scale, space–time scale, and forecasting accuracy. While we used a different model (the IFC's HLM), our earlier model comparison studies (e.g., ElSaadani et al. 2018; Rojas et al. 2020) indicate that the two models have similar predictive skill; therefore, the model choice has little effect on our conclusions. As Carpenter and Georgakakos (2006) indicate, the performance of distributed models such as the HLM should generally be better than that of their lumped counterparts.

Several active areas of research aim to enhance hydrologic predictability and extend the forecasting range. For instance, the space–time predictability of soil moisture (Vivoni et al. 2006), data assimilation techniques, AI techniques, and the methods outlined in the review by Cuo et al. (2011) provide promising directions for further research.

Acknowledgments

This study did not receive any external funding; it was funded by the Iowa Flood Center of the University of Iowa. The second author also acknowledges partial support from the Rose and Joseph Summers endowment. The authors are grateful to many colleagues at the IFC who facilitated the study by providing observational and computational support as well as fruitful discussions.

REFERENCES

  • Adams, T. E., and J. Ostrowski, 2010: Short lead-time hydrologic ensemble forecasts from numerical weather prediction model ensembles. World Environmental and Water Resources Congress 2010, Providence, RI, American Society of Civil Engineers, 2294–2304, https://doi.org/10.1061/41114(371)237.

  • Adams, T. E., and T. C. Pagano, 2016: Flood Forecasting: A Global Perspective. Elsevier, 487 pp.

  • Adams, T. E., and R. Dymond, 2018: Evaluation and benchmarking of operational short-range ensemble mean and median streamflow forecasts for the Ohio River basin. J. Hydrometeor., 19, 1689–1706, https://doi.org/10.1175/JHM-D-18-0102.1.

  • Adams, T. E., and R. Dymond, 2019: The effect of QPF on real-time deterministic hydrologic forecast uncertainty. J. Hydrometeor., 20, 1687–1705, https://doi.org/10.1175/JHM-D-18-0202.1.

  • Alexander, C. R., S. S. Weygandt, T. G. Smirnova, S. Benjamin, P. Hofmann, E. P. James, and D. A. Koch, 2010: High Resolution Rapid Refresh (HRRR): Recent enhancements and evaluation during the 2010 convective season. 25th Conf. on Severe Local Storms, Denver, CO, Amer. Meteor. Soc., 9.2, https://ams.confex.com/ams/25SLS/techprogram/paper_175722.htm.

  • Ayalew, T. B., and W. F. Krajewski, 2017: Effect of river network geometry on flood frequency: A tale of two watersheds in Iowa. J. Hydrol. Eng., 22, 6017004, https://doi.org/10.1061/(ASCE)HE.1943-5584.0001544.

  • Benjamin, S. G., T. G. Smirnova, S. S. Weygandt, M. Hu, S. R. Sahm, B. D. Jamison, M. M. Wolfson, and J. O. Pinto, 2009: The HRRR 3-km storm-resolving, radar-initialized, hourly updated forecasts for air traffic management. Aviation, Range and Aerospace Meteorology Special Symp. on Weather-Air Traffic Management Integration, Phoenix, AZ, Amer. Meteor. Soc., P1.2, https://ams.confex.com/ams/89annual/techprogram/paper_150430.htm.

  • Blaylock, B., 2020: University of Utah HRRR Data Archive. Accessed 8 January 2020, http://home.chpc.utah.edu/~u0553130/Brian_Blaylock/cgi-bin/hrrr_download.cgi.

  • Budikova, D., J. S. M. Coleman, S. A. Strope, and A. Austin, 2010: Hydroclimatology of the 2008 Midwest floods. Water Resour. Res., 46, W12524, https://doi.org/10.1029/2010WR009206.

  • Calvetti, L., and A. J. Pereira Filho, 2014: Ensemble hydrometeorological forecasts using WRF hourly QPF and TOPMODEL for a middle watershed. Adv. Meteor., 2014, 484120, https://doi.org/10.1155/2014/484120.

  • Carpenter, T. M., and K. P. Georgakakos, 2006: Intercomparison of lumped versus distributed hydrologic model ensemble simulations on operational forecast scales. J. Hydrol., 329, 174–185, https://doi.org/10.1016/j.jhydrol.2006.02.013.

  • CAWCR, 2017: WWRP/WGNE Joint Working Group on forecast verification research. Accessed 1 August 2020, https://www.cawcr.gov.au/projects/verification/#Types_of_forecasts_and_verifications.

  • Ciach, G. J., W. F. Krajewski, and G. Villarini, 2007: Product-error-driven uncertainty model for probabilistic quantitative precipitation estimation with NEXRAD data. J. Hydrometeor., 8, 1325–1347, https://doi.org/10.1175/2007JHM814.1.

  • Cloke, H. L., and F. Pappenberger, 2009: Ensemble flood forecasting: A review. J. Hydrol., 375, 613–626, https://doi.org/10.1016/j.jhydrol.2009.06.005.

  • Collischonn, W., R. Haas, I. Andreolli, and C. E. M. Tucci, 2005: Forecasting River Uruguay flow using rainfall forecasts from a regional weather-prediction model. J. Hydrol., 305, 87–98, https://doi.org/10.1016/j.jhydrol.2004.08.028.

  • Cuo, L., T. C. Pagano, and Q. J. Wang, 2011: A review of quantitative precipitation forecasts and their use in short- to medium-range streamflow forecasting. J. Hydrometeor., 12, 713–728, https://doi.org/10.1175/2011JHM1347.1.

  • Demargne, J., M. Mullusky, K. Werner, T. Adams, S. Lindsey, N. Schwein, W. Marosi, and E. Welles, 2009: Application of forecast verification science to operational river forecasting in the U.S. National Weather Service. Bull. Amer. Meteor. Soc., 90, 779–784, https://doi.org/10.1175/2008BAMS2619.1.

  • Dolciné, L., H. Andrieu, D. Sempere-Torres, and D. Creutin, 2001: Flash flood forecasting with coupled precipitation model in mountainous Mediterranean basin. J. Hydrol. Eng., 6, 1–10, https://doi.org/10.1061/(ASCE)1084-0699(2001)6:1(1).

  • ElSaadani, M., W. F. Krajewski, R. Goska, and M. B. Smith, 2018: An investigation of errors in distributed models’ stream discharge prediction due to channel routing. J. Amer. Water Resour. Assoc., 54, 742–751, https://doi.org/10.1111/1752-1688.12627.

  • Georgakakos, K. P., 1986: A generalized stochastic hydrometeorological model for flood and flash-flood forecasting: 2. Case studies. Water Resour. Res., 22, 2096–2106, https://doi.org/10.1029/WR022i013p02096.

  • Ghimire, G. R., and W. F. Krajewski, 2020: Hydrologic implications of wind farm effect on radar-rainfall observations. Geophys. Res. Lett., 47, e2020GL089188, https://doi.org/10.1029/2020GL089188.

  • Ghimire, G. R., W. F. Krajewski, and R. Mantilla, 2018: A power law model for river flow velocity in Iowa basins. J. Amer. Water Resour. Assoc., 54, 1055–1067, https://doi.org/10.1111/1752-1688.12665.

  • Ghimire, G. R., N. Jadidoleslam, W. F. Krajewski, and A. A. Tsonis, 2020: Insights on streamflow predictability across scales using horizontal visibility graph based networks. Front. Water, 2, 17, https://doi.org/10.3389/frwa.2020.00017.

  • Gupta, H. V., H. Kling, K. K. Yilmaz, and G. F. Martinez, 2009: Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling. J. Hydrol., 377, 80–91, https://doi.org/10.1016/j.jhydrol.2009.08.003.

  • Hardy, J., J. J. Gourley, P. E. Kirstetter, Y. Hong, F. Kong, and Z. L. Flamig, 2016: A method for probabilistic flash flood forecasting. J. Hydrol., 541, 480–494, https://doi.org/10.1016/j.jhydrol.2016.04.007.

  • IFC, 2020: IFC Archive. Accessed 1 August 2020, http://s-iihr51.iihr.uiowa.edu/precipitation/mrms_gc1h/.

  • Iowa Mesonet, 2020: Iowa environmental mesonet. Accessed 8 January 2020, https://mtarchive.geol.iastate.edu/.

  • Knoben, W. J. M., J. E. Freer, and R. A. Woods, 2019: Technical note: Inherent benchmark or not? Comparing Nash-Sutcliffe and Kling-Gupta efficiency scores. Hydrol. Earth Syst. Sci., 23, 4323–4331, https://doi.org/10.5194/hess-23-4323-2019.

  • Krajewski, W. F., and Coauthors, 2017: Real-time flood forecasting and information system for the State of Iowa. Bull. Amer. Meteor. Soc., 98, 539–554, https://doi.org/10.1175/BAMS-D-15-00243.1.

  • Krajewski, W. F., G. R. Ghimire, and F. Quintero, 2020: Streamflow forecasting without models. J. Hydrometeor., 21, 1689–1704, https://doi.org/10.1175/JHM-D-19-0292.1.

    • Search Google Scholar
    • Export Citation
  • Kruger, A., W. F. Krajewski, J. J. Niemeier, D. L. Ceynar, and R. Goska, 2016: Bridge-mounted river stage sensors (BMRSS). IEEE Access, 4, 89488966, https://doi.org/10.1109/ACCESS.2016.2631172.

    • Search Google Scholar
    • Export Citation
  • Larimer, O. J., 1957: Drainage Areas of Iowa Streams. Iowa Highway Research Board Bulletin 7, 404 pp.

  • Lin, C., S. Vasić, A. Kilambi, B. Turner, and I. Zawadzki, 2005: Precipitation forecast skill of numerical weather prediction models and radar nowcasts. Geophys. Res. Lett., 32, L14801, https://doi.org/10.1029/2005GL023451.

    • Search Google Scholar
    • Export Citation
  • Lobligeois, F., V. Andréassian, C. Perrin, P. Tabary, and C. Loumagne, 2014: When does higher spatial resolution rainfall information improve streamflow simulation? An evaluation using 3620 flood events. Hydrol. Earth Syst. Sci., 18, 575594, https://doi.org/10.5194/hess-18-575-2014.

    • Search Google Scholar
    • Export Citation
  • Mantilla, R., 2007: Physical basis of statistical scaling in peak flows and stream flow hydrographs for topologic and spatially embedded random self-similar channel networks. Ph.D. thesis, University of Colorado Boulder, 144 pp.

  • Moser, B. A., W. A. Gallus, and R. Mantilla, 2015: An initial assessment of radar data assimilation on warm season rainfall forecasts for use in hydrologic models. Wea. Forecasting, 30, 14911520, https://doi.org/10.1175/WAF-D-14-00125.1.

    • Search Google Scholar
    • Export Citation
  • NCEP, 2020: NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive. Research Data Archive at the NCAR, accessed 1 August 2020, https://doi.org/10.5065/D65D8PWK .

  • NOAA, 2020a: The High-Resolution Rapid Refresh (HRRR). Accessed 1 August 2020, https://rapidrefresh.noaa.gov/hrrr/.

  • NOAA, 2020b: The National Water Model (NWM). Office of Weather Prediction, accessed 1 August 2020, https://water.noaa.gov/about/nwm.

  • NOAA, 2020c: NCEP products inventory. Accessed 1 August 2020, https://www.nco.ncep.noaa.gov/pmb/products/gfs/.

  • Pagano, T., D. Garen, and S. Sorooshian, 2004: Evaluation of official western U.S. seasonal water supply outlooks, 1922–2002. J. Hydrometeor., 5, 896909, https://doi.org/10.1175/1525-7541(2004)005<0896:EOOWUS>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Pereira Fo, A. J., K. C. Crawford, and D. J. Stensrud, 1999: Mesoscale precipitation fields. Part II: Hydrometeorologic modeling. J. Appl. Meteor., 38, 102125, https://doi.org/10.1175/1520-0450(1999)038<0102:MPFPIH>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Perez, G., R. Mantilla, and W. F. Krajewski, 2018: The influence of spatial variability of width functions on regional peak flow regressions. Water Resour. Res., 54, 76517669, https://doi.org/10.1029/2018WR023509.

    • Search Google Scholar
    • Export Citation
  • Prior, J. C., 1991: Landforms of Iowa. University of Iowa Press, 153 pp.

  • Qi, Y., S. Martinaitis, J. Zhang, and S. Cocks, 2016: A real-time automated quality control of hourly rain gauge data based on multiple sensors in MRMS system. J. Hydrometeor., 17, 16751691, https://doi.org/10.1175/JHM-D-15-0188.1.

    • Search Google Scholar
    • Export Citation
  • Quintero, F., W. F. Krajewski, R. Mantilla, S. Small, and B.-C. Seo, 2016: A spatial–dynamical framework for evaluation of satellite rainfall products for flood prediction. J. Hydrometeor., 17, 21372154, https://doi.org/10.1175/JHM-D-15-0195.1.

    • Search Google Scholar
    • Export Citation
  • Quintero, F., B.-C. Seo, and R. Mantilla, 2020: Improvement and evaluation of the Iowa Flood Center Hillslope Link Model (HLM) by calibration-free approach. J. Hydrol., 584, 124686, https://doi.org/10.1016/j.jhydrol.2020.124686.

    • Search Google Scholar
    • Export Citation
  • Rojas, M., F. Quintero, and W. F. Krajewski, 2020: Performance of the National Water Model in Iowa using independent observations. J. Amer. Water Resour. Assoc., 56, 568585, https://doi.org/10.1111/1752-1688.12820.

    • Search Google Scholar
    • Export Citation
  • Seo, B. C., F. Quintero, and W. F. Krajewski, 2018: High-resolution QPF uncertainty and its implications for flood prediction: A case study for the eastern Iowa flood of 2016. J. Hydrometeor., 19, 12891304, https://doi.org/10.1175/JHM-D-18-0046.1.

    • Search Google Scholar
    • Export Citation
  • Seo, D.-J., H. D. Herr, and J. C. Schaake, 2006: A statistical post-processor for accounting of hydrologic uncertainty in short-range ensemble streamflow prediction. Hydrol. Earth Syst. Sci. Discuss., 3, 19872035, https://doi.org/10.5194/hessd-3-1987-2006.

    • Search Google Scholar
    • Export Citation
  • Sharma, S., and Coauthors, 2017: Eastern U.S. verification of ensemble precipitation forecasts. Wea. Forecasting, 32, 117139, https://doi.org/10.1175/WAF-D-16-0094.1.

    • Search Google Scholar
    • Export Citation
  • Sharma, S., R. Siddique, S. Reed, P. Ahnert, P. Mendoza, and A. Mejia, 2018: Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system. Hydrol. Earth Syst. Sci., 22, 18311849, https://doi.org/10.5194/hess-22-1831-2018.

    • Search Google Scholar
    • Export Citation
  • Shrestha, D. L., D. E. Robertson, Q. J. Wang, T. C. Pagano, and H. A. P. Hapuarachchi, 2013: Evaluation of numerical weather prediction model precipitation forecasts for short-term streamflow forecasting purpose. Hydrol. Earth Syst. Sci., 17, 19131931, https://doi.org/10.5194/hess-17-1913-2013.

    • Search Google Scholar
    • Export Citation
  • Silvestro, F., N. Rebora, and L. Ferraris, 2011: Quantitative flood forecasting on small- and medium-sized basins: A probabilistic approach for operational purposes. J. Hydrometeor., 12, 14321446, https://doi.org/10.1175/JHM-D-10-05022.1.

    • Search Google Scholar
    • Export Citation
  • USGS, 2020: USGS current water data for Iowa. Accessed 8 January 2020, https://waterdata.usgs.gov/ia/nwis/rt.

  • Villarini, G., and W. F. Krajewski, 2010: Review of the different sources of uncertainty in single polarization radar-based estimates of rainfall. Surv. Geophys., 31, 107129, https://doi.org/10.1007/s10712-009-9079-x.

    • Search Google Scholar
    • Export Citation
  • Vivoni, E. R., D. Entekhabi, R. L. Bras, V. Y. Ivanov, M. P. Van Horne, C. Grassotti, and R. N. Hoffman, 2006: Extending the predictability of hydrometeorological flood events using radar rainfall nowcasting. J. Hydrometeor., 7, 660677, https://doi.org/10.1175/JHM514.1.

    • Search Google Scholar
    • Export Citation
  • Welles, E., S. Sorooshian, G. Carter, and B. Olsen, 2007: Hydrologic verification: A call for action and collaboration. Bull. Amer. Meteor. Soc., 88, 503512, https://doi.org/10.1175/BAMS-88-4-503.

    • Search Google Scholar
    • Export Citation
  • Wilson, J. W., N. A. Crook, C. K. Mueller, J. Sun, and M. Dixon, 1998: Nowcasting thunderstorms: A status report. Bull. Amer. Meteor. Soc., 79, 20792099, https://doi.org/10.1175/1520-0477(1998)079<2079:NTASR>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Wu, W., R. Emerton, Q. Duan, A. W. Wood, F. Wetterhall, and D. E. Robertson, 2020: Ensemble flood forecasting: Current status and future opportunities. Wiley Interdiscip. Rev.: Water, 7, e1432, https://doi.org/10.1002/wat2.1432.

    • Search Google Scholar
    • Export Citation
  • Zalenski, G., W. F. Krajewski, F. Quintero, P. Restrepo, and S. Buan, 2017: Analysis of national weather service stage forecast errors. Wea. Forecasting, 32, 14411465, https://doi.org/10.1175/WAF-D-16-0219.1.

    • Search Google Scholar
    • Export Citation
  • Zhang, J., and Coauthors, 2011: National Mosaic and Multi-Sensor QPE (NMQ) system description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 13211338, https://doi.org/10.1175/2011BAMS-D-11-00047.1.

    • Search Google Scholar
    • Export Citation
  • Zhang, J., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 621638, https://doi.org/10.1175/BAMS-D-14-00174.1.

    • Search Google Scholar
    • Export Citation
  • Zhou, H., G. Tang, N. Li, F. Wang, Y. Wang, and D. Jian, 2011: Evaluation of precipitation forecasts from NOAA Global Forecast System in hydropower operation. J. Hydroinform., 13, 8195, https://doi.org/10.2166/hydro.2010.005.

    • Search Google Scholar
    • Export Citation

  • Adams, T. E., and J. Ostrowski, 2010: Short lead-time hydrologic ensemble forecasts from numerical weather prediction model ensembles. World Environmental and Water Resources Congress 2010, Providence, RI, American Society of Civil Engineers, 2294–2304, https://doi.org/10.1061/41114(371)237.

  • Adams, T. E., and T. C. Pagano, 2016: Flood Forecasting: A Global Perspective. Elsevier, 487 pp.

  • Adams, T. E., and R. Dymond, 2018: Evaluation and benchmarking of operational short-range ensemble mean and median streamflow forecasts for the Ohio River basin. J. Hydrometeor., 19, 1689–1706, https://doi.org/10.1175/JHM-D-18-0102.1.

  • Adams, T. E., and R. Dymond, 2019: The effect of QPF on real-time deterministic hydrologic forecast uncertainty. J. Hydrometeor., 20, 1687–1705, https://doi.org/10.1175/JHM-D-18-0202.1.

  • Alexander, C. R., S. S. Weygandt, T. G. Smirnova, S. Benjamin, P. Hofmann, E. P. James, and D. A. Koch, 2010: High Resolution Rapid Refresh (HRRR): Recent enhancements and evaluation during the 2010 convective season. 25th Conf. on Severe Local Storms, Denver, CO, Amer. Meteor. Soc., 9.2, https://ams.confex.com/ams/25SLS/techprogram/paper_175722.htm.

  • Ayalew, T. B., and W. F. Krajewski, 2017: Effect of river network geometry on flood frequency: A tale of two watersheds in Iowa. J. Hydrol. Eng., 22, 6017004, https://doi.org/10.1061/(ASCE)HE.1943-5584.0001544.

  • Benjamin, S. G., T. G. Smirnova, S. S. Weygandt, M. Hu, S. R. Sahm, B. D. Jamison, M. M. Wolfson, and J. O. Pinto, 2009: The HRRR 3-km storm-resolving, radar-initialized, hourly updated forecasts for air traffic management. Aviation, Range and Aerospace Meteorology Special Symp. on Weather-Air Traffic Management Integration, Phoenix, AZ, Amer. Meteor. Soc., P1.2, https://ams.confex.com/ams/89annual/techprogram/paper_150430.htm.

  • Blaylock, B., 2020: University of Utah HRRR Data Archive. Accessed 8 January 2020, http://home.chpc.utah.edu/~u0553130/Brian_Blaylock/cgi-bin/hrrr_download.cgi.

  • Budikova, D., J. S. M. Coleman, S. A. Strope, and A. Austin, 2010: Hydroclimatology of the 2008 Midwest floods. Water Resour. Res., 46, W12524, https://doi.org/10.1029/2010WR009206.

  • Calvetti, L., and A. J. Pereira Filho, 2014: Ensemble hydrometeorological forecasts using WRF hourly QPF and topmodel for a middle watershed. Adv. Meteor., 2014, 484120, https://doi.org/10.1155/2014/484120.

  • Carpenter, T. M., and K. P. Georgakakos, 2006: Intercomparison of lumped versus distributed hydrologic model ensemble simulations on operational forecast scales. J. Hydrol., 329, 174–185, https://doi.org/10.1016/j.jhydrol.2006.02.013.

  • CAWCR, 2017: WWRP/WGNE Joint Working Group on forecast verification research. Accessed 1 August 2020, https://www.cawcr.gov.au/projects/verification/#Types_of_forecasts_and_verifications.

  • Ciach, G. J., W. F. Krajewski, and G. Villarini, 2007: Product-error-driven uncertainty model for probabilistic quantitative precipitation estimation with NEXRAD data. J. Hydrometeor., 8, 1325–1347, https://doi.org/10.1175/2007JHM814.1.

  • Cloke, H. L., and F. Pappenberger, 2009: Ensemble flood forecasting: A review. J. Hydrol., 375, 613–626, https://doi.org/10.1016/j.jhydrol.2009.06.005.