Abstract

This study describes the generation and testing of a reference rainfall product created from field campaign datasets collected during the NASA Global Precipitation Measurement (GPM) mission Ground Validation Iowa Flood Studies (IFloodS) experiment. The study evaluates ground-based radar rainfall (RR) products acquired during IFloodS in the context of building the reference rainfall product. The purpose of IFloodS was not only to attain a high-quality ground-based reference for the validation of satellite rainfall estimates but also to enhance understanding of flood-related rainfall processes and the predictability of flood forecasting. We assessed six RR estimates (IFC, Q2, CSU-DP, NWS-DP, Stage IV, and Q2-Corrected) using data from rain gauge and disdrometer networks located in the broader field campaign area of central and northeastern Iowa. We performed the analyses at time scales ranging from 1 h to the entire campaign period in order to compare the capabilities of each RR product and to characterize the error structure at scales frequently used in hydrologic applications. The evaluation results show that the Stage IV estimates outperform the other estimates, demonstrating the need for gauge-based bias corrections of radar-only products. Such corrections should account for each product’s algorithm-dependent error structure, which can then be used to build unbiased rainfall products for the campaign reference. We characterized the statistical error structure (e.g., systematic and random components) of each RR estimate and used it to generate a campaign reference rainfall product. To assess the hydrologic utility of the reference product, we performed hydrologic simulations driven by the reference product over the Turkey River basin. The comparison of hydrologic simulation results demonstrates that the campaign reference product performs better than Stage IV in streamflow generation.

1. Introduction

Rainfall estimates from ground-based radars are often used as a reference to assess the capabilities and limitations inherent in using space-based rainfall estimates in hydrologic modeling and prediction (e.g., Schumacher and Houze 2000; Chandrasekar et al. 2008; Villarini et al. 2009). During the period from late spring to early summer in 2013, the National Aeronautics and Space Administration (NASA) conducted a hydrology-oriented field campaign called Iowa Flood Studies (IFloodS) in collaboration with the Iowa Flood Center (IFC) at The University of Iowa. This field campaign sought to enhance the understanding of flood-related rainfall processes and the prediction capability in flood forecasting as well as to support activities of Global Precipitation Measurement (GPM) Ground Validation (see, e.g., Hou et al. 2014; Skofronick-Jackson et al. 2017). A number of scientific instruments were deployed in central and northeastern Iowa to collect high-quality precipitation data and thus improve flood forecasting capabilities (Petersen and Krajewski 2013). Therefore, this unique campaign can be understood in the context of many other NASA field experiments briefly summarized in Dolan et al. (2018).

While multiple types of rainfall datasets (e.g., satellite, radar, rain gauge, and disdrometer) are available through IFloodS, we focus on evaluating the ground-based radar rainfall (RR) composite products. We evaluate these products and characterize their uncertainties with the goal of building a campaign reference product for satellite data validation and distributed hydrologic modeling (e.g., Reed et al. 2004; Smith et al. 2004). The radar-only products used in the evaluation are the U.S. Next Generation Weather Radar (NEXRAD) single-polarization (SP) estimates [i.e., the next-generation National Mosaic and QPE system (Q2) and IFC products] and products generated using dual-polarization (DP) procedures (i.e., the U.S. National Weather Service operational and Colorado State University experimental blended precipitation processing algorithms). We also compare these radar-only products with rain-gauge-corrected RR estimates (the Stage IV and Q2-Corrected products). Through comprehensive product intercomparison, we explore the algorithm-dependent features (e.g., SP versus DP) of the RR estimates. We also characterize the uncertainty of products with different temporal and spatial resolutions using dense rain gauge and disdrometer networks as ground reference. This multiscale characterization is required for hydrologic modeling frameworks that assess model predictive abilities as a function of space and time scales. Based on the evaluation and error characterization results, we create the campaign reference product by combining selected RR estimates with data from the NASA polarimetric radar (NPOL) that was located at the center of the campaign domain. We do not include a detailed evaluation of NPOL RR estimates in this study because a comparison between individual (e.g., NPOL) and composite products would not be fair: individual radar products are often affected by significant range effects (e.g., Fabry et al. 1992) that are less impactful for composite products. A detailed evaluation of the performance of NPOL estimates is documented in Chen et al. (2017). We also drive the IFC hillslope-link model (HLM) with the reference product over the Turkey River basin in Iowa and assess its predictive capability in flood prediction.

The paper is structured as follows. In section 2, we introduce the study area in which the IFloodS campaign was conducted and describe the datasets: the RR products and the rain gauge and disdrometer data. Section 3 describes the methodology we used for RR product evaluation and error characterization in this study. In section 4, we present the comparison and evaluation results and discuss the observed similarities and discrepancies among the RR products. In section 5, we provide a procedure to create the campaign reference product and evaluate its hydrologic utility. Section 6 summarizes and discusses the main findings and future work.

2. Data

In this section, we briefly introduce the IFloodS study area and describe the RR products collected during the campaign and the ground reference datasets (i.e., rain gauge and disdrometer data) used for the evaluation of the collected products. The IFloodS domain covers central and northeastern Iowa; the major basins in the area are the Cedar and Iowa River basins in the middle of the domain and the Turkey River basin near the northeastern Iowa border (Fig. 1). Following the record flood that occurred in these basins in 2008, they have been used in a number of hydrologic studies to investigate various hydrologic factors (see, e.g., Gupta et al. 2010; Cunha et al. 2012; Seo et al. 2013; Smith et al. 2013; Ayalew et al. 2014). The basin areas are visible from the existing network of NEXRAD radars (KARX in La Crosse, Wisconsin; KDMX in Des Moines, Iowa; KDVN in Davenport, Iowa; and KMPX in Minneapolis, Minnesota). Although the field deployment of rainfall measuring devices started as early as April, we define the analysis time window as the period of 1 May–15 June 2013 in order to synchronize the different periods of collected data and products. We refer to this time window as the “official” IFloodS period. Further details of the precipitation events that occurred during the period are described in Cunha et al. (2015) and Seo et al. (2015a).

Fig. 1.

IFloodS spatial domain and the distribution of the rain gauge and disdrometer networks used in the RR product evaluation. The shaded circular areas indicate the 230-km range domain from the involved NEXRAD radars. The circular lines in the middle of the domain demarcate every 50 km from the NPOL radar location. The Cedar–Iowa and Turkey River basins are presented in the middle of the domain and in the northeast close to the Iowa border, respectively.


a. Radar rainfall products

We acquired six NEXRAD-based rainfall composite products through the campaign: the IFC real-time product; Q2; the Colorado State University DP product (CSU-DP); the National Weather Service real-time DP product (NWS-DP); the National Centers for Environmental Prediction (NCEP) Stage IV analysis; and the Q2 product with rain gauge correction (Q2-Corrected). As featured in Table 1, these composite products can be categorized into three types: radar-only SP (IFC and Q2), radar-only DP (CSU-DP and NWS-DP), and rain-gauge-corrected (Stage IV and Q2-Corrected) products. We provide a brief comparison of space and time resolutions and estimation algorithms of each product in Table 1.

Table 1.

The RR composite products evaluated and their resolution and algorithm comparison.


Using processing algorithms documented in Krajewski et al. (2013), the IFC rain rate map is generated every 5 min with a grid spacing of a quarter decimal minute (approximately 0.5 km). Seven NEXRAD radars (KEAX in Kansas City, Missouri; KFSD in Sioux Falls, South Dakota; KOAX in Omaha, Nebraska; and four more radars discussed earlier) are used to cover the entire state of Iowa, whereas the Q2 product is created with a 5-min and one-hundredth decimal degree (approximately 1 km) resolution over the entire United States. While the IFC uses a single NEXRAD ZR equation (Z = 300R^1.4), the Q2 algorithm uses four different ZR equations (see Table 1) that depend on the precipitation type classification based on the three-dimensional structure of reflectivity and environmental (atmospheric) variables with physically based heuristic rules (Zhang et al. 2011). We note that there have been many changes and improvements in Q2, and it is now called Multi-Radar Multi-Sensor (MRMS; Zhang et al. 2016). The evaluation of Q2/MRMS and their comparison with Stage IV are reported in Chen et al. (2013) and Zhang et al. (2016).
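For illustration, the single NEXRAD ZR conversion used in the IFC algorithm can be inverted as follows (a minimal sketch, not the IFC implementation; the 40-dBZ example value is arbitrary):

```python
def zr_rain_rate(dbz, a=300.0, b=1.4):
    """Invert Z = a * R^b for rain rate R (mm/h).

    Z is the linear reflectivity (mm^6 m^-3) obtained from dBZ;
    a=300, b=1.4 is the default NEXRAD relation used by the IFC algorithm.
    """
    z_linear = 10.0 ** (dbz / 10.0)    # dBZ -> linear Z
    return (z_linear / a) ** (1.0 / b)

# Example: a 40-dBZ echo corresponds to roughly 12 mm/h under this relation.
rate_40dbz = zr_rain_rate(40.0)
```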

For the CSU-DP product, the radar Level II volume data (e.g., Kelleher et al. 2007) from four radars (KARX, KDMX, KDVN, and KMPX) were postprocessed, not in real time, using a DP algorithm called CSU-HIDRO (Cifelli et al. 2011) and a hybrid scan algorithm documented in Seo et al. (2011) for combining data from multiple elevation angles [for more details on the product generation procedures, refer to Seo et al. (2015a)]. The time and space resolution is identical to that of the IFC product. The CSU-DP product covers only the IFloodS domain and does not provide full coverage of the entire state of Iowa. For the NWS-DP product, the instantaneous precipitation rate (Level III) products, generated using the algorithm reported in Istok et al. (2009), were collected for the involved radars. We applied the procedure described by Cunha et al. (2013) to generate hourly rainfall accumulations from the instantaneous precipitation rates. We then combined the data from the individual radars into a composite map using an exponentially decaying weighting scheme (e.g., Zhang et al. 2005) that assigns each radar a weight based on its distance from a given location. Most DP algorithms are based on procedures that identify hydrometeor types and select relevant rain rate estimators. Both the CSU (e.g., Lim et al. 2005) and NWS (e.g., Park et al. 2009) identification algorithms use similar fuzzy logic, but the architecture of the classification procedure differs in terms of the input variables and membership functions employed. These DP identification algorithms perform part of the data quality control (see Table 1) and yield categories of nonprecipitation radar echoes (e.g., ground clutter and biological returns) as well as hydrometeor types. The comparison of NWS DP and SP products, as well as the effect of hydrometeor identification, is documented in Cunha et al. (2013).
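The distance-weighted compositing step can be sketched as follows; this is a simplified illustration in the spirit of the exponentially decaying scheme of Zhang et al. (2005), and the e-folding length is our placeholder, not the value used for the NWS-DP composite:

```python
import math

def composite_rain(estimates, distances_km, efold_km=100.0):
    """Blend rain rate estimates from several radars at one grid point.

    Each radar's estimate is weighted by exp(-d^2 / L^2), so closer
    radars dominate; radars without valid data (None) are skipped.
    """
    wsum, vsum = 0.0, 0.0
    for rate, d in zip(estimates, distances_km):
        if rate is None:
            continue
        w = math.exp(-(d * d) / (efold_km * efold_km))
        wsum += w
        vsum += w * rate
    return vsum / wsum if wsum > 0.0 else None

# Two radars at equal range contribute equally (a simple average, 3.0 mm/h).
blended = composite_rain([2.0, 4.0], [50.0, 50.0])
```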

The Stage IV product (Lin and Mitchell 2005; Wu et al. 2012) consists of hourly rain-gauge-corrected precipitation analyses with some manual quality control performed by forecasters at the River Forecast Centers (RFCs). The rainfall maps that cover each individual RFC are collected at NCEP and are then combined into national coverage based on the 4-km Hydrologic Rainfall Analysis Project (HRAP) grid (see, e.g., Reed and Maidment 1999). The Q2-Corrected product represents an hourly Q2 bias correction using a national network of rain gauges (e.g., Kim et al. 2009); the detailed procedures are documented in Zhang et al. (2011).

A comparison of the IFC and Q2 SP estimation algorithms presented in Table 1 demonstrates that different ZR relations can be used even for identical meteorological targets, depending on the result of precipitation classification in Q2. This can lead to a major discrepancy between the two SP products. The DP algorithms initially implement a sophisticated quality control method (e.g., Ryzhkov and Zrnić 1998; Park et al. 2009) to eliminate nonprecipitation echoes that have been identified during the hydrometeor classification step and then apply a designated relationship between rain rate and measured radar variables (i.e., differential reflectivity, specific differential phase, or horizontal reflectivity) according to the classified types. There are two major differences in defining the rain rate conversion between the CSU and NWS algorithms (we do not discuss the difference in the hydrometeor identification procedures): 1) there is no rain rate estimation in the CSU algorithm when the radar beam observes the melting layer or ice region and 2) the CSU algorithm uses both the specific differential phase Kdp and differential reflectivity Zdr for the liquid phase, while the NWS algorithm seems to rely more heavily on Zdr. Section 4 discusses the algorithm-derived differences in rain rate estimation among products in more detail.

b. Rain gauge and disdrometer data

We use rain gauge and disdrometer data as a ground reference to evaluate the collected RR products. We acquired data from local networks that were operated by NASA, IFC, the USDA Agriculture Research Service (ARS), and the University of Wyoming as well as the national networks of Automated Surface Observing System (ASOS; Clark 1995), Automated Weather Observing System (AWOS), and NWS Cooperative Observer Program (COOP; NOAA 1989). As illustrated in Fig. 1, we selected rain gauge sites that effectively cover the IFloodS study area and discussed basins.

NASA deployed 20 and 5 rain gauge platforms, each with double tipping-bucket gauges, in the Turkey River basin and the South Fork Iowa River basin, respectively. In addition, 20 NASA-owned disdrometers [14 Autonomous Parsivel Units (APUs) and 6 two-dimensional video disdrometers (2DVDs)] were distributed toward the southeast from the domain center (some of them were clustered). Furthermore, 30 IFC gauges, similar to the platforms in the NASA network, were clustered around Iowa City, and four more IFC gauge platforms were deployed in central Iowa. The NASA and IFC rain gauge networks transmitted time-of-tip records, from which tip counts accumulated at a 5-min resolution were derived and used. The ARS deployed 15 rain gauges within the South Fork Iowa River basin, all equipped with double tipping buckets (Coopersmith et al. 2015). The University of Wyoming group placed four triple tipping-bucket gauges with soil moisture probes in the vicinity of the IFC network. The ASOS and AWOS data were collected at their original resolutions (i.e., 1 and 5 min, respectively) and accumulated over the designated time intervals (e.g., hourly). The use of the NWS COOP data was limited to the rain total and daily analyses because the network reports data only daily.
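Time-of-tip records can be converted to fixed-interval accumulations as sketched below (a hypothetical illustration; the 0.254 mm per tip corresponds to a common 0.01-in. bucket and is our assumption, not a stated property of the NASA or IFC gauges):

```python
def tips_to_accum(tip_times_s, interval_s=300, duration_s=3600, mm_per_tip=0.254):
    """Bin time-of-tip records (seconds from record start) into
    fixed-interval rain depths (mm); each tip adds one bucket volume
    to the interval it falls in."""
    n_bins = duration_s // interval_s
    accum = [0.0] * n_bins
    for t in tip_times_s:
        idx = int(t // interval_s)
        if 0 <= idx < n_bins:
            accum[idx] += mm_per_tip
    return accum
```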

3. Methodology

We provide the analysis procedures that are associated with product evaluation and error characterization with respect to multiple time scales ranging from 1 h to the entire campaign period. As shown in Table 1, with respect to radar–rain gauge (RG) comparison, the majority of the RR products have comparable spatial resolutions (0.5 and 1 km), so it is assumed that point rainfall measurements from rain gauges well represent the areal rainfall over such spatial scales. This assumption can be justified for given time scales (e.g., hourly) of the analyses because rainfall spatial variability is relatively small at such spatial (even for the 4-km resolution of Stage IV) and temporal scales (e.g., Villarini et al. 2008). This enables direct RG comparison without considering a spatial sampling disagreement (e.g., Seo and Krajewski 2011) between different measuring devices (e.g., radar versus rain gauge). Subhourly scale (e.g., 15 and 30 min) evaluation may require much denser rain gauge networks because gauge representativeness decreases (rainfall spatial variability increases) at finer temporal scales.

a. Product evaluation

The evaluation began by examining and comparing accumulated rain totals for the entire campaign period. We present and discuss the observed discrepancies that arise from the different estimation algorithms among all acquired RR products. In addition, we perform an RG comparison analysis to assess campaign totals at the ground reference locations. We also use the Parameter-Elevation Regressions on Independent Slopes Model (PRISM) rain gauge interpolated analysis (Daly et al. 2008) as a gridded reference for the campaign total (we assume that the rainfall spatial variability at the time scale of the entire period and spatial scale of 4 km is relatively small), which enables us to explore the spatial error structure of each RR product at the 6-week period scale.

While the campaign total analysis reveals only the overall agreement with ground reference data, the temporal variation of the error (over- or underestimation depending on individual events) might be somewhat compensated for and concealed in this analysis. In fact, the different rainfall estimators (see Table 1) determined by the classification procedures and their outcomes tend to be sensitive to individual rain events. The classification and resulting estimators among different RR products, which are associated with meteorological regimes and storm types, could become major factors of estimation error (e.g., Rosenfeld et al. 1995; Anagnostou 2004). Consequently, we selected two precipitation cases, identified as a snow/mix event (with stratiform rain) and a mesoscale convective system event, respectively. We present the results of the RG comparison analysis and discuss possible reasons for the detected algorithm-derived discrepancies. This event-based analysis allows for a persuasive assessment of the potential benefits of using DP versus SP algorithms and exposes the baseline performance of each algorithm.

In the multiscale RG comparison, we use time resolutions of 1, 3, 6, 12, and 24 h. For these accumulation times, we integrated the RR products and ground reference data over the corresponding time intervals from the original data resolutions. If the missing minutes or hours in a specific accumulation window exceed 10% of the designated time interval, the corresponding accumulation is regarded as missing and is excluded from the analysis. We define the systematic tendency of the RR products using the overall and conditional bias terms. We also employ two more statistical metrics, the correlation coefficient and the root-mean-square error normalized by the mean of the rain gauge data (normalized RMSE), to measure the accuracy across the presented time scales and to compare performance among the RR products.
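The aggregation rule and the normalized RMSE metric can be sketched as follows (a minimal illustration; function and variable names are ours):

```python
import math

def aggregate(series, window, max_missing_frac=0.10):
    """Sum a regularly spaced series (None = missing) over non-overlapping
    windows; a window whose missing fraction exceeds 10% is itself
    reported as missing, following the rule described above."""
    out = []
    for i in range(0, len(series) - window + 1, window):
        chunk = series[i:i + window]
        n_missing = sum(1 for v in chunk if v is None)
        if n_missing > max_missing_frac * window:
            out.append(None)
        else:
            out.append(sum(v for v in chunk if v is not None))
    return out

def normalized_rmse(gauge, radar):
    """RMSE of radar-gauge pairs normalized by the gauge mean."""
    pairs = [(g, r) for g, r in zip(gauge, radar)
             if g is not None and r is not None]
    mse = sum((r - g) ** 2 for g, r in pairs) / len(pairs)
    gauge_mean = sum(g for g, _ in pairs) / len(pairs)
    return math.sqrt(mse) / gauge_mean
```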

b. Error characterization

In general, the error is identified as the discrepancy between the true and estimated rainfall, and we use rain gauge measurements as a reference against RR estimates at radar pixels that are collocated with the gauge location. As we discussed earlier in this section, sampling scale disagreements between rain gauges and radars are less impactful with respect to the temporal (1–24 h) and spatial (0.5–4 km) scales used in this analysis. The RR estimation error is typically defined using two mathematical notions of multiplicative and additive terms (represented as the proportions/ratios and differences, respectively), and both terms have been employed numerous times in the literature. In this study, we adopt the multiplicative term of the error/bias to characterize the error structure of the acquired RR products. The full procedure of error characterization conforms to the one in Ciach et al. (2007).

As an initial step in the error characterization, we estimate and eliminate a systematic or climatological tendency, which is described as the overall bias factor B:

 
$$B = \frac{\sum_{t} R_g(t)}{\sum_{t} R_r(t)} \qquad (1)$$

where Rg(t) and Rr(t) denote rain gauge and radar rainfall at a time step t, and RG data pairs are aggregated for all of the time steps in the period in order to calculate the overall bias factor. This value should be unique for the same RR product regardless of the data accumulation time scales if the RG data pairs at any time scale are not significantly affected by missing data or gaps. After removing this overall bias, we need to account for the over- or underestimation that occurs depending on the RR magnitude (e.g., Katz and Murphy 1997; Ciach et al. 2000). This behavior can be determined by the concept of conditional expectation function h(·):

 
$$h(r_r) = E\{R_g \mid R_r = r_r\} \qquad (2)$$

where E{·} denotes an expectation function, Rr is a random variable, and rr is a specific RR value. The function h(rr) implies a systematic distortion describing the conditionality of error on the RR magnitude. This tendency can be estimated using the nonparametric kernel smoothing regression (e.g., Nadaraya 1965) or the two-parameter (ah and bh) power-law function:

 
$$h(r_r) = a_h\, r_r^{\,b_h} \qquad (3)$$

Although Eqs. (1) and (3) account for the systematic behaviors of RR estimates, there is a remaining component describing a stochastic process of random uncertainties. We address this random component by estimating conditional variance of the error in Eq. (4) and use a three-parameter (σ0e, ae, and be) function in Eq. (5) to take into account the random factor:

 
$$\sigma_e^2(r_r) = \operatorname{Var}\!\left\{\frac{R_g}{h(R_r)} \,\middle|\, R_r = r_r\right\} \qquad (4)$$

$$\sigma_e(r_r) = \sigma_{0e} + a_e \exp(-b_e\, r_r) \qquad (5)$$

where σe denotes the standard deviation of the multiplicative error, and the estimated random feature is used to combine selected RR estimates for the campaign reference products. In our error characterization, we assumed that the conditional mean and standard deviation of the RR error are stationary for convenience in modeling because accounting for nonstationarity in the modeling procedure is challenging. We also note that other factors (e.g., radar beam altitude) can be considered in modeling errors depending on product types (e.g., individual versus composite) while we used rain rate as a main factor in this study.
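The first two systematic steps of this error model, the overall bias factor of Eq. (1) and the conditional power-law distortion of Eq. (3), can be sketched as follows; this is a simplified illustration that fits the power law by binned log-log least squares rather than the kernel regression of Ciach et al. (2007), and all names are ours:

```python
import math

def overall_bias(gauge, radar):
    """Overall multiplicative bias factor B: total gauge rain over
    total radar rain, pooled over all time steps (Eq. (1))."""
    return sum(gauge) / sum(radar)

def fit_power_law(gauge, radar, n_bins=10):
    """Crude fit of h(rr) = ah * rr^bh (Eq. (3)).

    Pairs are sorted by radar rain rate, bin-averaged to approximate the
    conditional mean E{Rg | Rr = rr}, and fitted in log-log space by
    ordinary least squares.
    """
    pairs = sorted(zip(radar, gauge))
    size = max(1, len(pairs) // n_bins)
    xs, ys = [], []
    for i in range(0, len(pairs), size):
        chunk = pairs[i:i + size]
        rr = sum(p[0] for p in chunk) / len(chunk)  # bin-mean radar rate
        rg = sum(p[1] for p in chunk) / len(chunk)  # bin-mean gauge rate
        if rr > 0.0 and rg > 0.0:
            xs.append(math.log(rr))
            ys.append(math.log(rg))
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    bh = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
          / sum((x - x_bar) ** 2 for x in xs))
    ah = math.exp(y_bar - bh * x_bar)
    return ah, bh
```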

4. RR product evaluation

In this section, we present the analysis results of product rain totals (for the entire period and two selected events) and the statistical evaluation of the products with respect to diverse time scales (1, 3, 6, 12, and 24 h). The former analysis compares the total amounts of rainfall for the specified periods among the RR products and the ground reference data and assesses the algorithm-dependent strengths and weaknesses of each product. The latter approach discloses the statistical structure of the product error and provides useful information for the reference product generation and hydrologic modeling.

a. Difference among products

1) Campaign totals

The rain maps of the campaign totals for the “official period” (1 May–15 June) are illustrated in Fig. 2, in which the rain-gauge-corrected (Stage IV and Q2-Corrected), radar-only SP-based (IFC and Q2), and radar-only DP-based (CSU-DP and NWS-DP) products are aligned from the left to the right panels. While we evaluate the RR products using the ground reference data for the spatial domain shown in Fig. 1, we present the campaign totals for the entire state of Iowa in Fig. 2. For that reason, the CSU-DP map in Fig. 2, which we created using the data from the four NEXRAD radars only (see section 2), shows missing (gray) rain areas that are not covered by the four radars. The main features of the rainfall spatial structure are captured in most composite products, with some differences. The CSU-DP product shows distinct range rings at far ranges from the individual radar locations because the CSU-DP algorithm does not estimate rain rate when the radar beam interacts with the melting layer or ice regions, and the chance of detecting the cold regions increases at far range with higher sampling altitudes. On the other hand, the NWS-DP exposes inconsistency among individual radar observations (likely due to radar calibration errors; we discuss this issue in section 6) as well as quality control issues such as the wind farm effects discussed in Seo et al. (2015b). The wind farm locations are also clearly visible in the Q2 and Q2-Corrected products in Fig. 2.

Fig. 2.

Rain total maps of the RR products accumulated over the entire campaign period (1 May–15 June). Shown are the (left) rain-gauge-corrected (Stage IV and Q2-Corrected), (center) radar-only SP (IFC and Q2), and (right) radar-only DP (CSU-DP and NWS-DP) products. Since the CSU-DP product was created using only four radars (KARX, KDMX, KDVN, and KMPX), the coverage of this product is limited to the central and eastern regions of Iowa.


To ensure that the PRISM rain gauge interpolation analysis can be used as a gridded reference at the campaign total scale, we first evaluate the PRISM rain totals against rain gauge observations (Fig. 3). Although only the ASOS and NWS COOP rain gauge network data are incorporated in the PRISM analysis shown in Fig. 3 (left panel), the PRISM analysis agrees well with the IFC, NASA, and ARS network data, clustering near the one-to-one line as shown in Fig. 3 (right panel). This agreement with the independent network data confirms that the PRISM estimation is reliable as a reference (only at the scale of rain totals) and allows a further analysis to show the spatial pattern of product error using the normalized error/difference term:

 
$$e = \frac{RR_{\text{total}} - \text{PRISM}_{\text{total}}}{\text{PRISM}_{\text{total}}} \qquad (6)$$

where RRtotal denotes the campaign totals from the six RR products presented in Fig. 2. The RR products are resampled (averaged) to the same spatial resolution as PRISM (4 km), and the normalized error calculated by Eq. (6) is mapped in Fig. 4. The blue and red colors used in Fig. 4 distinguish under- and overestimation patterns, respectively. Since the campaign totals of Stage IV (top-left panel in Fig. 2) and PRISM (left panel in Fig. 3) look quite similar, the Stage IV error shown in Fig. 4 is smaller than that of the other products, which implies that the rain gauge correction in Stage IV was successful. We note that a small number of ASOS rain gauges (e.g., 15 in Iowa) are commonly used for both PRISM and Stage IV. The Q2-Corrected estimates tend to reduce the error that was originally observed in Q2, but some errors remain. The IFC product in Fig. 4 shows underestimation mostly in the northeast and some overestimation within the domain of the KOAX radar. The area covered by KFSD shows some radar beam blockage effects and significant differences from the surrounding radars (e.g., KOAX and KDMX). It is likely that the KFSD difference from other radars detected in the IFC product was suitably handled in the Q2 and NWS-DP algorithms. However, the Q2 and NWS-DP maps shown in Fig. 4 raise other questions regarding substantial overestimation in the KOAX and KDMX regions. The CSU-DP shows mostly underestimation due to range effects.
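The resampling step and the Eq. (6) computation can be sketched as follows (a minimal illustration; grid dimensions are assumed divisible by the resampling factor, and all values are hypothetical):

```python
def block_average(grid, factor):
    """Coarsen a 2-D grid (list of rows) by averaging non-overlapping
    factor x factor blocks; dimensions must be divisible by factor."""
    out = []
    for i in range(0, len(grid), factor):
        row = []
        for j in range(0, len(grid[0]), factor):
            block = [grid[a][b]
                     for a in range(i, i + factor)
                     for b in range(j, j + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def normalized_error(rr_total, prism_total):
    """Eq. (6): normalized error/difference of a product's campaign
    total against the PRISM total at the same 4-km pixel."""
    return (rr_total - prism_total) / prism_total
```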

Fig. 3.

(left) Rain total map of the PRISM rain gauge interpolation analysis and (right) rain gauge comparison of PRISM rain totals. Independent gauges (e.g., NASA, IFC, and ARS) show good agreement with the PRISM data.


Fig. 4.

Normalized error/difference maps estimated by Eq. (6) for the campaign totals. Red and blue colors indicate over- and underestimation, respectively. The map alignment is as in Fig. 2.


In Fig. 5, we present the rain gauge comparison results. The rain-gauge-corrected products show relatively good agreement but indicate slight overestimation. While Q2 and NWS-DP demonstrate significant overestimation, as seen in Fig. 4, IFC and CSU-DP show underestimation. Although the dots denoting the RG pairs for the IFC product cluster along the one-to-one line in Fig. 5, the scatter is relatively larger than that of the rain-gauge-corrected products (e.g., Stage IV), and many more dots (rain gauges in the blue area in the top-center panel of Fig. 4) lie in the underestimation region.

Fig. 5.

The RG comparison of the campaign totals. The rain gauge color code is as in Fig. 3 (right panel). Shown are the (left) rain-gauge-corrected (Stage IV and Q2-Corrected), (center) radar-only SP (IFC and Q2), and (right) radar-only DP (CSU-DP and NWS-DP) products.


2) Event totals

We selected two example precipitation cases to demonstrate the algorithm-derived capabilities of RR estimation. The first case is a snow/mix case with stratiform rain during the period of 2–4 May. The second, which took place during 27–30 May, was wetter and characterized by convective systems; some of the convective storms were followed by widespread stratiform rain. For the detailed meteorological characteristics of these two events, refer to Seo et al. (2015a). Figure 6 shows the 3- and 4-day event rain totals of the RR products with the same configuration as in Fig. 2. Figure 7 presents the corresponding event-based RG comparison.

Fig. 6.

Rain total maps for the two selected events characterized by (a) snow/mix with stratiform rain (2–4 May) and (b) a mesoscale convective system (27–30 May). The map alignment is as in Fig. 2.


Fig. 7.

The RG comparison of event rain totals shown in Fig. 6 for the two selected events characterized by (a) snow/mix with stratiform rain (2–4 May) and (b) a mesoscale convective system (27–30 May).

Regarding the snow/mix case, it is hard to conclude that the DP algorithms show superior performance (see Fig. 7a); in particular, the CSU-DP shows significant underestimation, which is due to the range limitation arising from the detection of a low-level melting layer. However, we note that the rain gauge measurements in such a cold case might contain errors as well, because the rain gauges used in this study are mostly nonheated tipping-bucket types. We believe any such errors were not considerable because the frozen and mixed precipitation transitioned to stratiform rain after a short duration of snow. For the SP algorithm comparison, the IFC does not capture the rainfall feature in the northeast (Fig. 6a) that appears in the Q2 and Stage IV products. It is likely that the rain type classification and the application of different ZR equations in Q2 (see Fig. 8a) lead to this difference. The ZR curves in Fig. 8a show that the snow and stratiform (represented as “M-P” in Fig. 8a) types yield larger rain rates in the lower reflectivity range (e.g., 0–30 dBZ) than the single ZR relation (NEXRAD) used in the IFC algorithm.
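The effect of the rain-type-dependent ZR choice can be illustrated numerically. The sketch below uses common textbook coefficients (NEXRAD default Z = 300R^1.4, Marshall–Palmer Z = 200R^1.6, “tropical” Z = 250R^1.2); the operational coefficients in the IFC and Q2 algorithms may differ:

```python
def rain_rate(dbz, a, b):
    """Invert Z = a * R**b, where Z is reflectivity in linear units
    (mm^6 m^-3) converted from dBZ, and R is rain rate in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

# Illustrative textbook coefficients (operational values may differ):
# NEXRAD default Z = 300 R^1.4, Marshall-Palmer Z = 200 R^1.6,
# "tropical" Z = 250 R^1.2.
for dbz in (10, 20, 30, 40):
    r_nexrad = rain_rate(dbz, 300.0, 1.4)
    r_mp = rain_rate(dbz, 200.0, 1.6)
    r_trop = rain_rate(dbz, 250.0, 1.2)
    print(f"{dbz} dBZ: NEXRAD {r_nexrad:.2f}, M-P {r_mp:.2f}, "
          f"tropical {r_trop:.2f} mm/h")
```

Running this shows the Marshall–Palmer relation producing larger rain rates than the NEXRAD default below roughly 30–40 dBZ (consistent with the low-reflectivity behavior noted above), while the ordering reverses at higher reflectivity; the tropical relation produces the largest rates at convective reflectivities.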

Fig. 8.

Comparison of rain rate estimation functions in the SP and DP algorithms: (a) ZR relation curves show the difference in rain rate estimation between the IFC and Q2 algorithms (the inset shows a zoomed-in view for the reflectivity range of 0–30 dBZ) and (b) rain rate estimation functions according to identified hydrometeor types in the CSU-DP and NWS-DP algorithms. The coefficient “A” for the ice and snow types in the NWS-DP rain rate estimation changes according to hydrometeor classes.

In Fig. 7b, the convective example shows better agreement for both the SP and DP algorithms than the cold case does. Here, the DP algorithms tend to work better than the SP ones in terms of scatter and the alignment of RG pairs on the one-to-one line. The superior performance of the CSU-DP is particularly noticeable (upper-right panel in Fig. 7b): most dots are concentrated on the line (indicating very good agreement), except at some NWS COOP gauge locations (yellow dots). Because the COOP gauges are well distributed over the analysis domain, this disagreement can be attributed to the range limitation seen in the upper-right panel in Fig. 2, as some of these gauges are located far from the radars. Regarding the difference between the two DP estimates in Fig. 7b, one probable driving factor is the set of DP variables used in rain rate estimation. Since the CSU-DP shows better agreement within its observable range, the rain rate estimation in the CSU-DP, based on both Kdp and Zdr for the liquid phase (see Fig. 8b), seems more reliable than that in the NWS-DP, which is based mostly on Zdr and Zh. The detailed equations for each phase in Fig. 8b are listed in Table 1. Concerning the SP estimates in Fig. 7b, it seems likely that the use of the “tropical” ZR (Fig. 8a) in Q2 generates some overestimation and the differences between the IFC and Q2.
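The phase-conditional structure of a DP algorithm can be sketched as follows. The selection logic mirrors the description above (liquid phase estimated from Kdp and Zdr; no estimate through ice or melting regions, which produces the CSU-DP range limitation), but the coefficients are hypothetical placeholders, not the equations listed in Table 1:

```python
def dp_rain_rate(phase, kdp, zdr_db):
    """Phase-conditional DP rain rate (mm/h). Mimics the *structure* of a
    blended DP algorithm; the coefficients are illustrative placeholders."""
    if phase == "liquid":
        # R(Kdp, Zdr): Kdp-based estimators are largely insensitive to the
        # absolute calibration of Zh, one reason this combination is favored.
        return 63.0 * max(kdp, 0.0) ** 0.85 * 10.0 ** (-0.05 * zdr_db)
    if phase in ("ice", "melting"):
        # No rain rate is estimated through ice/melting regions -- this is
        # what limits the observable range of the CSU-DP in cold cases.
        return None
    raise ValueError(f"unknown phase: {phase}")

print(dp_rain_rate("liquid", kdp=1.2, zdr_db=0.8))  # positive rate
print(dp_rain_rate("ice", kdp=1.2, zdr_db=0.8))     # None
```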

b. Multiscale comparison

We evaluated the RG agreement with respect to accumulation time scales (1, 3, 6, 12, and 24 h) that are frequently used in hydrologic models. Figure 9 shows two-dimensional histograms of the hourly RG comparison, where the colors indicate data occurrences for the given RG magnitudes at a 1-mm resolution. The overall bias values in the upper-right corner of each panel indicate under- (>1) or overestimation (<1) by the RR estimates. The overall tendency of under- or overestimation in Fig. 9 is similar to that observed in Figs. 4 and 5. We performed the same analysis for the other accumulation time scales and confirmed that the bias values remained in the same range while the scatter decreased as the time scale increased. The Stage IV panel in Fig. 9 reveals relatively frequent false detection along the x (radar) axis at values smaller than 30 mm. We speculate that this false detection might arise from a mismatch of spatial scales (point versus 4 × 4 km2) and the small-scale variability of rainfall.

Fig. 9.

Two-dimensional histograms of the hourly RG comparison. Different colors indicate data occurrences for given RG pairs with a 1-mm resolution. Overall bias values are provided in the upper-right corner of each panel. The solid black lines represent the averaged tendency described by the presented overall bias values.

In Table 2, we present three statistical metrics (overall bias, correlation coefficient, and normalized RMSE) from the six RG datasets at five time scales. The overall bias should not change with time scale if there is no significant effect from missing data or gaps; we therefore calculated the overall bias values in Table 2 from the hourly RG data. Figure 10 illustrates the change in correlation and RMSE with time scale and demonstrates that temporal aggregation yields better RG agreement, with increasing correlation and decreasing RMSE. However, the correlation at the longest time span (24 h) slightly decreases for most products in Fig. 10 (top panel), probably because the NWS COOP dataset is added to the analysis at that scale. In particular, the pronounced correlation drop for the CSU-DP at the 24-h scale arises because some COOP gauges located far from the radars fall outside the observable range of the CSU-DP, as seen in Fig. 2. Overall, the rain-gauge-corrected (Stage IV) product shows statistically superior performance in all metrics, assuming that the difference in spatial resolution (4 versus 0.5 and 1 km) is negligible. Despite its range limitation, the CSU-DP agrees well with rain gauge observations at all scales, and its agreement is comparable to that of Stage IV in both correlation and RMSE. Based on the presented metrics, the NWS-DP does not appear much better than the SP products.
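The three metrics can be computed from matched hourly RG pairs; the following sketch (variable names and synthetic data are ours) also illustrates why the overall multiplicative bias is invariant under temporal aggregation, so long as the aggregation windows tile the record evenly:

```python
import numpy as np

def rg_metrics(gauge, radar):
    """Overall bias (G/R), correlation, and RMSE normalized by the gauge mean."""
    bias = gauge.sum() / radar.sum()
    corr = np.corrcoef(gauge, radar)[0, 1]
    nrmse = np.sqrt(np.mean((radar - gauge) ** 2)) / gauge.mean()
    return bias, corr, nrmse

def aggregate(series, hours):
    """Accumulate an hourly series to a coarser scale (truncates any
    trailing partial window, which would otherwise perturb the bias)."""
    n = len(series) // hours * hours
    return series[:n].reshape(-1, hours).sum(axis=1)

# Synthetic 6-week hourly record: 1008 h divides evenly by 3, 6, 12, 24.
rng = np.random.default_rng(0)
gauge = rng.gamma(0.3, 2.0, size=1008)               # skewed "rain" amounts
radar = np.clip(0.8 * gauge + rng.normal(0, 0.5, 1008), 0, None)
for h in (1, 3, 6, 12, 24):
    print(h, rg_metrics(aggregate(gauge, h), aggregate(radar, h)))
```

Because aggregation only regroups the same totals, the G/R bias printed for every scale is identical, matching the statement above that the overall bias should not change with time scale absent missing data.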

Table 2.

The RG comparison results with respect to time scale: three statistical metrics (overall bias, correlation coefficient, and normalized RMSE).

Fig. 10.

Two statistical metrics of multiscale RG comparison: correlation coefficient and normalized RMSE. The CSU-DP correlation drop at 24 h is caused by the NWS COOP rain gauges that are located outside of the observable range shown in Fig. 2.

c. Error characterization

The error structure of the RR products is characterized for the aforementioned time scales. As discussed in section 3, the overall systematic tendency in the RR field is first eliminated by multiplying the RR estimates by the bias value (see Table 2). Next, we use both nonparametric and parametric regression methods to model the remaining conditional bias. The advantage of the nonparametric approach is that the bias structure is not constrained by a predefined functional form, as it is in the parametric approach. However, the conditional bias curve estimated by nonparametric Gaussian smoothing was inconsistent and showed abrupt changes at large RR values for shorter time scales (e.g., 1 h). This behavior can be attributed to the limited sample size: there are few large RR values at the hourly scale for the given 6-week period, although temporal aggregation increases their number. For that reason, we present only the parametric results in Fig. 11 for all RR products and time scales. The common feature in Fig. 11 is that RR aggregation over longer time spans reduces the conditional bias. Table 3 presents the parameters of the power function defined in Eq. (3). The estimated curve shapes in Fig. 11 and the parameter values in Table 3 are comparable to those in Ciach et al. (2007). The presented conditional structure is useful for hydrologic applications forced by RR estimates because systematic differences in rainfall volume tend to significantly affect errors in streamflow simulations and predictions (see, e.g., Seo et al. 2013). We describe the random error structure in section 5, where it is used to combine different RR estimates into the reference product. We note that the error models provided in this study represent uncertainty features averaged over the 6-week campaign period and may vary for different events or seasons.
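Assuming the power function of Eq. (3) relates the conditional expectation of gauge rainfall to the bias-corrected radar value as E[G | R] = aR^b (the paper's exact formulation may differ), the parametric fit reduces to linear regression in log-log space. A minimal sketch on synthetic pairs:

```python
import numpy as np

def fit_conditional_bias(radar, gauge):
    """Fit E[G | R] = a * R**b by least squares in log-log space,
    using positive R-G pairs only (a power law is linear in logs)."""
    mask = (radar > 0) & (gauge > 0)
    x, y = np.log(radar[mask]), np.log(gauge[mask])
    b, log_a = np.polyfit(x, y, 1)  # slope = exponent, intercept = log(a)
    return np.exp(log_a), b

# Synthetic check: pairs generated from a known power law with
# multiplicative noise should return roughly (a, b) = (1.2, 0.9).
rng = np.random.default_rng(1)
r = rng.gamma(2.0, 3.0, size=5000)
g = 1.2 * r ** 0.9 * np.exp(rng.normal(0, 0.1, size=5000))
a_hat, b_hat = fit_conditional_bias(r, g)
print(a_hat, b_hat)
```

The sample-size limitation noted above shows up here directly: with few large-R pairs, the high end of the log-log cloud is sparse and the fitted exponent becomes unstable, which is why aggregation to longer time scales stabilizes the fit.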

Fig. 11.

Conditional bias of the RR products represented by a power-law function with respect to time scale. Table 3 presents the power-law function parameters.

Table 3.

Estimated power-law function parameters describing the RR conditional bias with respect to time scale.


5. Reference product

a. Reference product generation

This section describes the procedures for creating the campaign reference rainfall product. These procedures involve correcting the systematic errors (overall and conditional biases) of the RR estimates and combining the corrected estimates with weights calculated from the relative magnitude of their random errors. The random errors are characterized by the standard deviation of the residual errors after the overall and conditional bias corrections. We assumed the random error to be normally distributed because the bias and skewness in the RR estimates had been removed. We tested Stage IV, Q2-Corrected, IFC, CSU-DP, and two NPOL DP products as ingredients of the reference product. The NPOL estimates, denoted NPOL-RR and NPOL-RC, correspond to the data processing and QPE algorithms known as DROPS2 (Pippitt et al. 2015; Chen et al. 2017) and CSU-HIDRO (Cifelli et al. 2011), respectively. We excluded the NWS-DP and Q2 because of the former's relatively low performance in the evaluation and the availability of Q2-Corrected, respectively. As shown in Fig. 12, we estimated the random error function in Eq. (5) for the four selected RR composites and the two NPOL products. The parameters of the conditional bias function in Eq. (3) were also estimated for the NPOL products. We estimated the parameters for both the conditional mean and random components at the hourly scale, at which the reference product is generated.
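The weighting formula itself is given by Eq. (5), which we do not reproduce here; a common scheme consistent with weighting by the relative magnitude of random errors is inverse-variance combination of the bias-corrected estimates. The sketch below assumes that form, with scalar placeholder standard deviations (in the paper, the random error standard deviation varies with rain rate, as in Fig. 12):

```python
import numpy as np

def combine(estimates, sigmas):
    """Combine bias-corrected RR estimates using inverse-variance weights.
    estimates: per-product values for one pixel/hour; sigmas: per-product
    random-error standard deviations (scalars here for simplicity)."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w /= w.sum()                       # normalize so weights sum to 1
    return np.tensordot(w, np.asarray(estimates, dtype=float), axes=1)

# Hypothetical hourly values (mm) from four products at one pixel:
est = np.array([2.0, 2.4, 1.8, 2.2])
sig = np.array([0.5, 1.0, 0.8, 0.6])   # placeholder random-error std devs
print(combine(est, sig))
```

With equal sigmas this reduces to a simple average; otherwise the lowest-uncertainty product dominates, which is the intended behavior of weighting by relative random error magnitude.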

Fig. 12.

Random error structures at the hourly scale for the four composite and two NPOL DP products. Model parameters are provided in the figure.

We examined a variety of combinations of the ingredient products, and the resulting candidates for the reference product looked more or less similar at the scale of the 6-week period, mainly because of the bias correction applied in the combining procedure. We then selected the statistically better ones through an independent evaluation at the scale of campaign totals. For the selected reference, we combined Stage IV, Q2-Corrected, IFC, CSU-DP, and NPOL-RC. Figure 13 illustrates the map of campaign totals of the selected reference product and its independent evaluation using NWS COOP and CoCoRaHS (Cifelli et al. 2005) observations. The term “independent” is justified here because both networks collect daily reports only, and their observations were not used to quantify the hourly-scale uncertainties shown in Figs. 11 and 12. As shown in Fig. 13, some of the CoCoRaHS observations have quality control issues (e.g., missing data), and we therefore did not include this network in the simple quantitative/statistical evaluation. The bias (G/R) and mean absolute error of the reference product relative to the COOP observations are 0.97 and 28.3 mm (9.4% of the mean of the COOP totals), respectively. Based on the agreement with the COOP data (Fig. 13), the reference product appears almost unbiased, which is the most important requirement for hydrologic prediction (e.g., Seo et al. 2013).

Fig. 13.

Rain total maps of the campaign reference product and its independent evaluation using the NWS COOP and CoCoRaHS rain gauge data. The scatterplots show RG comparison of the campaign totals between the reference product and rain gauge observations. The CoCoRaHS observations show a quality control issue (e.g., missing).

b. Hydrologic evaluation

We created the reference product by correcting major uncertainty features (e.g., overall and conditional biases) of the selected RR products. A direct evaluation or verification of the reference product at finer scale (e.g., hourly) was not feasible because of the lack of independent ground reference data at the required scale. Rain gauge and disdrometer data collected during IFloodS were all included in the RR uncertainty characterization and used in the reference product generation procedures. Therefore, in this section, we force a hydrologic model using the reference product and assess its predictive capability in flood forecasting.

We used the IFC hillslope-link model (HLM) to simulate streamflow during the campaign period. This distributed hydrologic model is based on landscape decomposition into hillslopes and channels, and its configuration and governing equations are documented in Krajewski et al. (2017). Here, suffice it to say that the model is terrain based; that is, it represents water transport through the stream and river network. The key components are 1) rainfall-to-runoff transformation at the hillslopes and 2) water routing in the river channels. The main feature of the HLM is that it is calibration-free: the model parameters are determined a priori, and therefore the model does not “favor” any particular input product. Model calibration could conceal differences in streamflow generation among the precipitation forcing products. The use of the HLM can be understood in the context of the Prediction in Ungauged Basins (PUB; Sivapalan 2003) initiative because HLM predictions are not limited to locations where streamflow observations exist. Although such a physics-based model does not always guarantee accurate predictions, our earlier and ongoing evaluations of the HLM (e.g., Cunha et al. 2012; Seo et al. 2013, 2018; Ayalew et al. 2014; Quintero et al. 2016; Krajewski et al. 2017) have indicated acceptable performance. We selected the Turkey River basin for this hydrologic evaluation because five USGS stream gauges provide discharge there at spatial scales ranging from about 450 to about 4000 km2. In addition, the 20 NASA rain gauges densely deployed within the basin (see Fig. 1) allowed us to compare simulation results driven by gauge-based gridded estimates with those driven by the reference product.

We created a gauge-interpolated rainfall product at the hourly scale using a geostatistical procedure known as optimal interpolation, of which ordinary kriging is an example (e.g., Tabios and Salas 1985). Figure 14 shows the campaign rainfall totals of the reference and gauge interpolation products over the Turkey River basin and indicates the locations of the USGS stations and NASA rain gauges. We then ran the HLM with the rainfall forcing of the reference, gauge interpolation, and Stage IV products. We compare each simulated hydrograph at the five USGS stations with streamflow observations in Fig. 15 and present performance metrics in Table 4 to quantitatively assess the hydrologic prediction capability associated with each rainfall product. We note that rating curve uncertainty was not accounted for in the analysis. The performance metrics used here are the Kling–Gupta efficiency (KGE; Gupta et al. 2009), correlation, and normalized RMSE. All simulations started with the same initial conditions, that is, the same amounts of water in the soil and in the channels. Figure 15 shows that the simulations driven by the gauge interpolation product agree better with the USGS observations than those driven by the RR products. The gauge interpolation simulation tends to capture the small peaks in May well, while both the reference and Stage IV simulations somewhat overestimate them (more significantly in Stage IV). We think these streamflow overestimations were caused not by systematic rainfall overestimation in the reference products but rather by complicated hydrologic processes and interactions between initial soil water content and the dynamic space–time distribution of rainfall. We confirmed that there was little difference between the reference and gauge interpolation products in the total amounts of mean areal precipitation (particularly for the event in early May) at all five scales. Regarding the significant event in late May and early June, the simulations driven by the reference product captured the flood peak and its timing well for the relatively small basins (e.g., at Spillville in Fig. 15). The delayed streamflow peak at Elkader and the noticeable underestimation at Garber do not appear to be rainfall issues because all forcing products led to similar results. Given the evaluation metrics in Table 4, we conclude that the overall performance of the reference product in generating streamflow is superior to that of Stage IV.
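The Kling–Gupta efficiency used in Table 4 decomposes model skill into correlation, variability, and bias terms (Gupta et al. 2009): KGE = 1 − sqrt((r − 1)² + (α − 1)² + (β − 1)²), where α is the ratio of simulated to observed standard deviations and β the ratio of means. A direct implementation (the discharge values are illustrative):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al. 2009); 1 is a perfect score."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias (volume) ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([10.0, 25.0, 60.0, 40.0, 15.0])  # hypothetical discharges
print(kge(obs, obs))        # perfect simulation scores 1.0
print(kge(1.2 * obs, obs))  # penalized for a 20% volume bias
```

Because the bias ratio β enters the score directly, an almost unbiased forcing product (like the reference product above) is rewarded even when correlation alone would not distinguish it from a biased one.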

Fig. 14.

The maps of campaign rain totals of the (a) reference and (b) gauge interpolation products over the Turkey River basin.

Fig. 15.

Hydrologic simulation results driven by the gauge interpolation, campaign reference, and Stage IV products at the five USGS stream gauge stations in the Turkey River basin.

Table 4.

Performance metrics for hydrologic simulations at the five USGS stream gauge locations in the Turkey River basin.


While the superior performance of the gauge-only rainfall product may come as a surprise, the setup in terms of gauge density (one gauge per 200 km2) and quality (dual gauges at each location) would be difficult to replicate in an operational environment. In Iowa, for example, this would require some 800 rain gauge sites. The best strategy therefore seems to be what has already been implemented operationally, that is, rain-gauge-corrected radar rainfall. The good performance of the Stage IV and campaign reference products offers solid evidence supporting this approach.

6. Summary and conclusions

We evaluated the RR composite products collected during the NASA IFloodS campaign, which was designed to provide a high-quality ground-based reference for the validation of satellite rainfall estimates. We characterized the acquired RR products as SP (IFC and Q2), DP (CSU-DP and NWS-DP), and rain-gauge-corrected (Stage IV and Q2-Corrected) estimates. We used data from a number of rain gauge and disdrometer networks (NASA, IFC, USDA ARS, University of Wyoming, ASOS, AWOS, and NWS COOP) as ground reference to assess the algorithm-derived capability of the RR products and their potential benefit for hydrologic prediction. Some of these networks were newly deployed, while others preexisted within the campaign area. We performed the evaluation and error characterization of the RR products at multiple time scales ranging from 1 h to the entire campaign period.

The analysis of rain totals for the entire period showed significantly different spatial patterns (Figs. 2, 4) among the RR products. The RG comparison analysis verified this discrepancy (Fig. 5): the rain-gauge-corrected products (Stage IV and Q2-Corrected) were fairly close to the rain gauge observations, while all other products exhibited either over- or underestimation. In particular, the CSU-DP showed a range limitation because of an algorithm component in which rain rate is not estimated where the radar beam interacts with regions of ice or melting ice. In the event-based analysis, the DP-based algorithms performed better for the heavy rain case (based on the RG comparison in Fig. 7b), but the DP results were not superior to the SP ones for the presented snow/mix with stratiform case (Fig. 7a). This implies that the DP algorithms still need improvement [for a more detailed evaluation of the DP products and algorithms, refer to Cunha et al. (2015) and Seo et al. (2015a)]. In the comparison of the DP algorithms (see Fig. 7b, excluding the daily COOP gauges for a fair comparison), the algorithm using both Kdp and Zdr (CSU-DP) likely represents heavy rain better than the one based on Zdr and Zh (NWS-DP). The significant relative bias observed around the KOAX radar (NWS-DP in Fig. 6b) appeared to be affected by calibration errors in either Zdr or Zh. We confirmed with the Radar Operations Center (ROC) that the Zh values of the KOAX radar were somewhat hot (1.0–1.5 dBZ higher) relative to adjacent radars (e.g., KDMX) for May 2013. The underestimation around the KFSD radar in the IFC product is similarly explained by a relative Zh bias of −1.5 to −1.0 dBZ. We note that such calibration errors are a challenging issue, particularly in real-time applications, and we hope that the Dual-Frequency Precipitation Radar (DPR) recently launched by the GPM program will help address this problem in radar QPE (e.g., Schwaller and Morris 2011; Warren et al. 2018).

We performed the multiscale RG comparison using three statistical metrics: multiplicative bias, correlation coefficient, and normalized RMSE. As in the preceding analyses, the rain-gauge-corrected product (Stage IV) showed statistically superior results compared to the radar-only products. This implies that radar-only products should be corrected in a way (e.g., Steiner et al. 1999; Seo and Breidenbach 2002) that addresses their intrinsic error structure before they are used in hydrologic applications. However, one radar-only product, the CSU-DP, demonstrated noticeable capability and potential despite its range restriction. We expect that even a simple application of relations from the literature between rain rate and observed radar variables for some cold precipitation types would improve the CSU-DP algorithm for operational purposes. The vertical profile of reflectivity (VPR) approach (e.g., Krajewski et al. 2011) also has the potential to remedy the melting layer issue discussed earlier.

We quantitatively characterized the error structure of the RR products using the framework documented in Ciach et al. (2007). Using this characterization, we removed the systematic errors (overall and conditional biases) of the selected RR products (Stage IV, Q2-Corrected, IFC, CSU-DP, and NPOL-RC) and combined them according to their random error features to create the campaign reference product. We evaluated the reference product through HLM streamflow simulations. The simulation results and evaluation metrics in Fig. 15 and Table 4 demonstrate that the reference product performs better than Stage IV, which was the best RR composite product in our evaluation. We hope that the findings, understanding, and products (e.g., the campaign reference rainfall product) gained from this unique field campaign will prove useful for satellite product validation and various hydrologic modeling efforts.

Acknowledgments

This study was supported by NASA’s Iowa Flood Studies in collaboration with the Iowa Flood Center. The National Science Foundation provided partial support under Award EAR-1327830. The authors thank Carrie Langston and Pierre Kirstetter at the U.S. National Severe Storms Laboratory for providing the real-time Q2 and rain-gauge-corrected Q2 products. The Stage IV products were provided by NCAR/EOL under the sponsorship of the National Science Foundation. We also would like to thank Ali Tokay at the NASA Goddard Space Flight Center for providing the disdrometer observations and Munsung Keem at the University of Iowa for the meticulous quality controls of the disdrometer data. The authors are also grateful to the PMM/GPM Program Management for their support of IFloodS and this research.

REFERENCES

REFERENCES
Anagnostou
,
E.
,
2004
:
A convective/stratiform precipitation classification algorithm for volume scanning weather radar observations
.
Meteor. Appl.
,
11
,
291
300
, https://doi.org/10.1017/S1350482704001409.
Ayalew
,
T. B.
,
W. F.
Krajewski
, and
R.
Mantilla
,
2014
:
Connecting the power-law scaling structure of peak-discharges to spatially variable rainfall and catchment physical properties
.
Adv. Water Resour.
,
71
,
32
43
, https://doi.org/10.1016/j.advwatres.2014.05.009.
Chandrasekar
,
V.
,
V. N.
Bringi
,
S. A.
Rutledge
,
A.
Hou
,
E.
Smith
,
G. S.
Jackson
,
E.
Gorgucci
, and
W. A.
Petersen
,
2008
:
Potential role of dual-polarization radar in the validation of satellite precipitation measurements: Rationale and opportunities
.
Bull. Amer. Meteor. Soc.
,
89
,
1127
1145
, https://doi.org/10.1175/2008BAMS2177.1.
Chen
,
H.
,
V.
Chandrasekar
, and
R.
Bechini
,
2017
:
An improved dual-polarization radar rainfall algorithm (DROPS2.0): Application in NASA IFloodS field campaign
.
J. Hydrometeor.
,
18
,
917
937
, https://doi.org/10.1175/JHM-D-16-0124.1.
Chen
,
S.
, and Coauthors
,
2013
:
Evaluation and uncertainty estimation of NOAA/NSSL next-generation National Mosaic quantitative precipitation estimation product (Q2) over the continental United States
.
J. Hydrometeor.
,
14
,
1308
1322
, https://doi.org/10.1175/JHM-D-12-0150.1.
Ciach
,
G. J.
,
M. L.
Morrissey
, and
W. F.
Krajewski
,
2000
:
Conditional bias in radar rainfall estimation
.
J. Appl. Meteor.
,
39
,
1941
1946
, https://doi.org/10.1175/1520-0450(2000)039<1941:CBIRRE>2.0.CO;2.
Ciach
,
G. J.
,
W. F.
Krajewski
, and
G.
Villarini
,
2007
:
Product-error-driven uncertainty model for probabilistic quantitative precipitation estimation with NEXRAD data
.
J. Hydrometeor.
,
8
,
1325
1347
, https://doi.org/10.1175/2007JHM814.1.
Cifelli
,
R.
,
N.
Doesken
,
P.
Kennedy
,
L. D.
Carey
,
S. A.
Rutledge
,
C.
Gimmestad
, and
T.
Depue
,
2005
:
The Community Collaborative Rain, Hail, and Snow Network: Informal education for scientists and citizens
.
Bull. Amer. Meteor. Soc.
,
86
,
1069
1077
, https://doi.org/10.1175/BAMS-86-8-1069.
Cifelli
,
R.
,
V.
Chandrasekar
,
S.
Lim
,
P. C.
Kennedy
,
Y.
Wang
, and
S. A.
Rutledge
,
2011
:
A new dual-polarization radar rainfall algorithm: Application in Colorado precipitation events
.
J. Atmos. Oceanic Technol.
,
28
,
352
364
, https://doi.org/10.1175/2010JTECHA1488.1.
Clark
,
P.
,
1995
: Automated surface observations, new challenges - new tools. Preprints, Sixth Conf. on Aviation Weather Systems, Dallas, TX, Amer. Meteor. Soc., 445–450.
Coopersmith
,
E. J.
,
M. H.
Cosh
,
W. A.
Petersen
,
J.
Prueger
, and
J. J.
Niemeier
,
2015
:
Soil moisture model calibration and validation: An ARS watershed on the South Fork Iowa River
.
J. Hydrometeor.
,
16
,
1087
1101
, https://doi.org/10.1175/JHM-D-14-0145.1.
Cunha
,
L. K.
,
P. V.
Mandapaka
,
W. F.
Krajewski
,
R.
Mantilla
, and
A. A.
Bradley
,
2012
:
Impact of radar-rainfall error structure on estimated flood magnitude across scales: An investigation based on a parsimonious distributed hydrological model
.
Water Resour. Res.
,
48
,
W10515
, https://doi.org/10.1029/2012WR012138.
Cunha
,
L. K.
,
J. A.
Smith
,
M. L.
Baeck
, and
W. F.
Krajewski
,
2013
:
An early performance evaluation of the NEXRAD dual-polarization radar rainfall estimates for urban flood applications
.
Wea. Forecasting
,
28
,
1478
1497
, https://doi.org/10.1175/WAF-D-13-00046.1.
Cunha
,
L. K.
,
J. A.
Smith
,
W. F.
Krajewski
,
M. L.
Baeck
, and
B.-C.
Seo
,
2015
:
NEXRAD NWS polarimetric product evaluation for IFloodS
.
J. Hydrometeor.
,
16
,
1676
1699
, https://doi.org/10.1175/JHM-D-14-0148.1.
Daly
,
C.
,
M.
Halbleib
,
J. I.
Smith
,
W. P.
Gibson
,
M. K.
Dogget
,
G. H.
Tayor
,
J.
Curtis
, and
P. P.
Pasteris
,
2008
:
Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States
.
Int. J. Climatol.
,
28
,
2031
2064
, https://doi.org/10.1002/joc.1688.
Dolan
,
B.
,
B.
Fuchs
,
S. A.
Rutledge
,
E. A.
Barnes
, and
E. J.
Thompson
,
2018
:
Primary modes of global drop size distribution
.
J. Atmos. Sci.
,
75
,
1453
1476
, https://doi.org/10.1175/JAS-D-17-0242.1.
Fabry
,
F.
,
G. L.
Austin
, and
D.
Tees
,
1992
:
The accuracy of rainfall estimates by radar as a function of range
.
Quart. J. Roy. Meteor. Soc.
,
118
,
435
453
, https://doi.org/10.1002/qj.49711850503.
Gupta
,
H. V.
,
H.
Kling
,
K. K.
Yilmaz
, and
G. F.
Martinez
,
2009
:
Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling
.
J. Hydrol.
,
377
,
80
91
, https://doi.org/10.1016/j.jhydrol.2009.08.003.
Gupta
,
V. K.
,
R.
Mantilla
,
B. M.
Troutman
,
D.
Dawdy
, and
W. F.
Krajewski
,
2010
:
Generalizing a nonlinear geophysical flood theory to medium-size river networks
.
Geophys. Res. Lett.
,
37
,
L11402
, https://doi.org/10.1029/2009GL041540.
Hou
,
A. Y.
, and Coauthors
,
2014
:
Global Precipitation Measurement (GPM) Mission
.
Bull. Amer. Meteor. Soc.
,
95
,
701
722
, https://doi.org/10.1175/BAMS-D-13-00164.1.
Istok, M., and Coauthors, 2009: WSR-88D dual-polarization initial operational capabilities. 25th Conf. on Int. Interactive Information and Processing Systems (IIPS) in Meteorology, Oceanography, and Hydrology, Phoenix, AZ, Amer. Meteor. Soc., 15.5, http://ams.confex.com/ams/pdfpapers/148927.pdf.
Katz, R. W., and A. H. Murphy, Eds., 1997: Economic Value of Weather and Climate Forecasts. Cambridge University Press, 222 pp.
Kelleher, K. E., and Coauthors, 2007: A real-time delivery system for NEXRAD Level II data via the internet. Bull. Amer. Meteor. Soc., 88, 1045–1057, https://doi.org/10.1175/BAMS-88-7-1045.
Kessinger, C., S. Ellis, and J. Van Andel, 2003: The radar echo classifier: A fuzzy logic algorithm for the WSR-88D. Third Conf. on Artificial Intelligence Applications to the Environmental Sciences, Long Beach, CA, Amer. Meteor. Soc., P1.6, https://ams.confex.com/ams/annual2003/techprogram/paper_54946.htm.
Kim, D., B. Nelson, and D.-J. Seo, 2009: Characteristics of reprocessed Hydrometeorological Automated Data System (HADS) hourly precipitation data. Wea. Forecasting, 24, 1287–1296, https://doi.org/10.1175/2009WAF2222227.1.
Krajewski, W. F., B. Vignal, B.-C. Seo, and G. Villarini, 2011: Statistical model of the range-dependent error in radar-rainfall estimates due to the vertical profile of reflectivity. J. Hydrol., 402, 306–316, https://doi.org/10.1016/j.jhydrol.2011.03.024.
Krajewski, W. F., A. Kruger, S. Singh, B.-C. Seo, and J. A. Smith, 2013: Hydro-NEXRAD-2: Real time access to customized radar-rainfall for hydrologic applications. J. Hydroinform., 15, 580–590, https://doi.org/10.2166/hydro.2012.227.
Krajewski, W. F., and Coauthors, 2017: Real-time flood forecasting and information system for the State of Iowa. Bull. Amer. Meteor. Soc., 98, 539–554, https://doi.org/10.1175/BAMS-D-15-00243.1.
Lim, S., V. Chandrasekar, and V. N. Bringi, 2005: Hydrometeor classification system using dual-polarization radar measurements: Model improvements and in situ verification. IEEE Trans. Geosci. Remote Sens., 43, 792–801, https://doi.org/10.1109/TGRS.2004.843077.
Lin, Y., and K. E. Mitchell, 2005: The NCEP Stage II/IV hourly precipitation analyses: Development and applications. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2, https://ams.confex.com/ams/Annual2005/techprogram/paper_83847.htm.
Nadaraya, E. A., 1965: On non-parametric estimates of density functions and regression curves. Theory Probab. Appl., 10, 186–190, https://doi.org/10.1137/1110024.
NOAA, 1989: Cooperative station observations. National Weather Service Observing Handbook 2, 83 pp.
Park, H., A. V. Ryzhkov, D. S. Zrnić, and K.-E. Kim, 2009: The hydrometeor classification algorithm for the polarimetric WSR-88D: Description and application to an MCS. Wea. Forecasting, 24, 730–748, https://doi.org/10.1175/2008WAF2222205.1.
Petersen, W. A., and W. F. Krajewski, 2013: Status update on the GPM Ground Validation Iowa Flood Studies (IFloodS) field experiment. Geophysical Research Abstracts, Vol. 15, Abstract EGU2013-13345, http://meetingorganizer.copernicus.org/EGU2013/EGU2013-13345.pdf.
Pippitt, J. L., D. B. Wolff, W. Petersen, and D. A. Marks, 2015: Data and operational processing for NASA’s GPM Ground Validation program. 37th Conf. on Radar Meteorology, Norman, OK, Amer. Meteor. Soc., 111, https://ams.confex.com/ams/37RADAR/webprogram/Paper275627.html.
Quintero, F., W. F. Krajewski, R. Mantilla, S. Small, and B.-C. Seo, 2016: A spatial–dynamical framework for evaluation of satellite rainfall products for flood prediction. J. Hydrometeor., 17, 2137–2154, https://doi.org/10.1175/JHM-D-15-0195.1.
Reed, S. M., and D. Maidment, 1999: Coordinate transformations for using NEXRAD data in GIS-based hydrologic modeling. J. Hydrol. Eng., 4, 174–182, https://doi.org/10.1061/(ASCE)1084-0699(1999)4:2(174).
Reed, S. M., V. Koren, M. S. Smith, Z. Zhang, F. Moreda, and D.-J. Seo, 2004: Overall distributed model intercomparison project results. J. Hydrol., 298, 27–60, https://doi.org/10.1016/j.jhydrol.2004.03.031.
Rosenfeld, D., E. Amitai, and D. B. Wolff, 1995: Classification of rain regimes by the three-dimensional properties of reflectivity fields. J. Appl. Meteor., 34, 198–211, https://doi.org/10.1175/1520-0450(1995)034<0198:CORRBT>2.0.CO;2.
Ryzhkov, A. V., and D. S. Zrnić, 1998: Polarimetric rainfall estimation in the presence of anomalous propagation. J. Atmos. Oceanic Technol., 15, 1320–1330, https://doi.org/10.1175/1520-0426(1998)015<1320:PREITP>2.0.CO;2.
Schumacher, C., and R. A. Houze Jr., 2000: Comparison of radar data from the TRMM satellite and Kwajalein oceanic validation site. J. Appl. Meteor., 39, 2151–2164, https://doi.org/10.1175/1520-0450(2001)040<2151:CORDFT>2.0.CO;2.
Schwaller, M. R., and K. R. Morris, 2011: A ground validation network for the Global Precipitation Measurement mission. J. Atmos. Oceanic Technol., 28, 301–319, https://doi.org/10.1175/2010JTECHA1403.1.
Seo, B.-C., and W. F. Krajewski, 2011: Investigation of the scale-dependent variability of radar-rainfall and rain gauge error covariance. Adv. Water Resour., 34, 152–163, https://doi.org/10.1016/j.advwatres.2010.10.006.
Seo, B.-C., W. F. Krajewski, A. Kruger, P. Domaszczynski, J. A. Smith, and M. Steiner, 2011: Radar-rainfall estimation algorithms of Hydro-NEXRAD. J. Hydroinform., 13, 277–291, https://doi.org/10.2166/hydro.2010.003.
Seo, B.-C., L. K. Cunha, and W. F. Krajewski, 2013: Uncertainty in radar-rainfall composite and its impact on hydrologic prediction for the eastern Iowa flood of 2008. Water Resour. Res., 49, 2747–2764, https://doi.org/10.1002/wrcr.20244.
Seo, B.-C., B. Dolan, W. F. Krajewski, S. Rutledge, and W. Petersen, 2015a: Comparison of single and dual polarization based rainfall estimates using NEXRAD data for the NASA Iowa Flood Studies Project. J. Hydrometeor., 16, 1658–1675, https://doi.org/10.1175/JHM-D-14-0169.1.
Seo, B.-C., W. F. Krajewski, and K. V. Mishra, 2015b: Using the new dual-polarimetric capability of WSR-88D to eliminate anomalous propagation and wind turbine effects in radar-rainfall. Atmos. Res., 153, 296–309, https://doi.org/10.1016/j.atmosres.2014.09.004.
Seo, B.-C., F. Quintero, and W. F. Krajewski, 2018: High-resolution QPF uncertainty and its implications for flood prediction: A case study for the eastern Iowa flood of 2016. J. Hydrometeor., 19, 1289–1304, https://doi.org/10.1175/JHM-D-18-0046.1.
Seo, D.-J., and J. P. Breidenbach, 2002: Real-time correction of spatially nonuniform bias in radar rainfall data using rain gauge measurements. J. Hydrometeor., 3, 93–111, https://doi.org/10.1175/1525-7541(2002)003<0093:RTCOSN>2.0.CO;2.
Sivapalan, M., 2003: Prediction in ungauged basins: A grand challenge for theoretical hydrology. Hydrol. Processes, 17, 3163–3170, https://doi.org/10.1002/hyp.5155.
Skofronick-Jackson, G., and Coauthors, 2017: The Global Precipitation Measurement (GPM) mission for science and society. Bull. Amer. Meteor. Soc., 98, 1679–1695, https://doi.org/10.1175/BAMS-D-15-00306.1.
Smith, J. A., M. L. Baeck, G. Villarini, D. B. Wright, and W. F. Krajewski, 2013: Extreme flood response: The June 2008 flooding in Iowa. J. Hydrometeor., 14, 1810–1825, https://doi.org/10.1175/JHM-D-12-0191.1.
Smith, M. B., D.-J. Seo, V. I. Koren, S. M. Reed, Z. Zhang, A. Duan, F. Moreda, and S. Cong, 2004: The distributed model intercomparison project (DMIP): Motivation and experiment design. J. Hydrol., 298, 4–26, https://doi.org/10.1016/j.jhydrol.2004.03.040.
Steiner, M., J. A. Smith, S. J. Burges, C. V. Alonso, and R. W. Darden, 1999: Effect of bias adjustment and rain gauge data quality control on radar rainfall estimation. Water Resour. Res., 35, 2487–2503, https://doi.org/10.1029/1999WR900142.
Tabios, G. Q., III, and J. D. Salas, 1985: A comparative analysis of techniques for spatial interpolation of precipitation. Water Resour. Bull., 21, 365–380, https://doi.org/10.1111/j.1752-1688.1985.tb00147.x.
Villarini, G., P. V. Mandapaka, W. F. Krajewski, and R. J. Moore, 2008: Rainfall and sampling uncertainties: A rain gauge perspective. J. Geophys. Res., 113, D11102, https://doi.org/10.1029/2007JD009214.
Villarini, G., W. F. Krajewski, and J. A. Smith, 2009: New paradigm for statistical validation of satellite precipitation estimates: Application to a large sample of the TMPA 0.25° 3-hourly estimates over Oklahoma. J. Geophys. Res., 114, D12106, https://doi.org/10.1029/2008JD011475.
Warren, R. A., A. Protat, S. T. Siems, H. A. Ramsay, V. Louf, M. J. Manton, and T. A. Kane, 2018: Calibrating ground-based radars against TRMM and GPM. J. Atmos. Oceanic Technol., 35, 323–346, https://doi.org/10.1175/JTECH-D-17-0128.1.
Wu, W., D. Kitzmiller, and S. Wu, 2012: Evaluation of radar precipitation estimates from the National Mosaic and Multisensor Quantitative Precipitation Estimation system and the WSR-88D Precipitation Processing System over the conterminous United States. J. Hydrometeor., 13, 1080–1093, https://doi.org/10.1175/JHM-D-11-064.1.
Zhang, J., K. Howard, and J. J. Gourley, 2005: Constructing three-dimensional multiple-radar reflectivity mosaics: Examples of convective storms and stratiform rain echoes. J. Atmos. Oceanic Technol., 22, 30–42, https://doi.org/10.1175/JTECH-1689.1.
Zhang, J., and Coauthors, 2011: National Mosaic and multi-sensor QPE (NMQ) system: Description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 1321–1338, https://doi.org/10.1175/2011BAMS-D-11-00047.1.
Zhang, J., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 621–638, https://doi.org/10.1175/BAMS-D-14-00174.1.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).