1. Introduction
As national meteorological services look to enhance or replace their radar networks, the sensors’ performance must be quantified relative to the network layout and costs. Radar has been an invaluable source of information regarding severe thunderstorms, tornadoes, flash floods, and numerous other meteorological phenomena, but many of these benefits are a function of the radar network layout and coverage. For example, Cho and Kurdzo (2019) recently showed that the fraction of vertical coverage is paramount to tornado warning performance (probability of detection and false alarm rate), while Cho and Kurdzo (2020) showed a similar result for flash-flood warnings. For these reasons, radar network design is an important topic to investigate. We propose an approach to analyze and optimize network design that integrates several constraints, with a specific focus on flash flooding and quantitative precipitation estimation (QPE) applications. This study focuses on QPE from the standpoint of a sensor’s ability to accurately estimate rainfall based on network density; Cho and Kurdzo (2020) provided a similar perspective but with a focus on flash-flood warning performance.
QPE has been an intense research focus area for decades (e.g., Arkin et al. 1989; Gourley et al. 2002; Ryzhkov et al. 2005c,d; Hong and Gourley 2015) due in part to its critical impact on flash-flood warning guidance (Sene 2012; Hong and Gourley 2015). Flash floods, along with heat-related illnesses and severe weather (including lightning and tornadoes), are the top three causes of weather-related fatalities in the United States (National Climatic Data Center 2019). Because of the relative lack of rain gauge density (Mishra 2013; Xu et al. 2013), especially in rural and mountainous areas (sometimes with hundreds of kilometers of separation), radar provides invaluable rainfall estimation information in times of torrential rainfall and flash flooding. However, the accuracy of QPE is an area of nearly constant investigation. There are numerous methods for QPE, ranging from relationships between reflectivity factor and rainfall rate [R(Z)], to polarimetric estimators combining Z and differential reflectivity [R(Z, ZDR)] or using specific differential phase [R(KDP)], to specific attenuation [R(A)]. Each has its own strengths and weaknesses, and all of the methods have been investigated extensively in the literature (summarized in Bringi and Chandrasekar 2001; Zhang 2016; Ryzhkov and Zrnić 2019).
QPE accuracy will continue to be a central concern due to the role weather radars are expected to play in flash-flood forecasting. Not only do different methods for QPE have varying characteristics, but the radar network itself has a significant impact on QPE performance and capability. For example, Ryzhkov et al. (2014) show that the R(A) method does not perform reliably within and above the melting layer (ML). Matrosov (2008) shows that R(Z, ZDR) is also unreliable within and above the ML. For these reasons, the R(Z) relationship is still used in these regions, even when polarimetric estimates are available. Given the differences among methods and their potentially less accurate results, one could postulate that radar network density is a driving factor in QPE accuracy, especially in areas far from a radar. Smith et al. (1996), as well as Dotzek and Fehr (2003), have investigated the accuracy of QPE where the lowest beam height is significantly elevated and/or the beam is broadened, and found a general decrease in accuracy at far ranges from the radar.
In addition, the National Weather Service (NWS) has recently approved moving forward with switching the Weather Surveillance Radar-1988 Doppler (WSR-88D) operational QPE method to R(A) (Snow 2017), in part because of difficulties in managing polarimetric bias within the WSR-88D fleet (Zittel et al. 2014) and issues with partial beam blockage (Giangrande and Ryzhkov 2005; Ryzhkov et al. 2014). Of the available methods, given its relatively recent advancement, R(A) has yet to be extensively studied on extremely large datasets, although Cocks et al. (2018, 2019) have made great headway in this area. Cocks et al. (2019), in particular, investigated 49 cases from 37 calendar days. For this study, we refer to the R(A) method used in Cocks et al. (2018) and Wang et al. (2019), as well as discussions with A. Ryzhkov (2018–19, personal communications) and those recently published in Ryzhkov and Zrnić (2019).
Different metrics for QPE errors and models have been used (Ciach 2003; Mandapaka and Germann 2010), but in general, the goal is to determine how closely the remotely sensed radar estimates mirror in situ measurements. Many of the aforementioned studies analyzed the impact of error sources on QPE generated from single radars, and a few investigated QPE from radar networks. Among the most common network-based studies are those that quantify accuracy of mosaicked QPE using the WSR-88D network, for example, the National Mosaic and Multisensor system (NMQ; Zhang et al. 2011b) and the Multi-Radar/Multi-Sensor system (Zhang et al. 2016). Since these studies build on existing infrastructure, the impact of radar location or network design has been left unexplored. We propose to address this important gap for future network design.
Relevant factors impacting QPE accuracy at the network scale include sensor-level error sources, such as radar moment miscalibration, and moment-to-precipitation-rate relations [e.g., R(A)]. We use the Level-III QPE product on the WSR-88D Open Radar Product Generator (ORPG) as an analysis framework. This product has 250-m range resolution and 1° azimuthal resolution and assumes a constant rainfall rate at the base scan between updates. The operational Level-III product uses R(Z, ZDR) below the ML and R(Z) within and above the ML. The impact of downstream QPE processing is beyond the scope of this study. In particular, we do not consider advanced QPE processing techniques that correct for range-dependent error, such as the vertical profile of reflectivity (VPR; Andrieu and Creutin 1995; Kirstetter et al. 2010; Zhang and Qi 2010). This is expanded upon in the discussion and summary section.
Network designs may take various shapes, including full-network design for one type of radar, full-network design for multifunction radar, network design based on a system of systems or multiple radar designs, gap filling in a current network, or a combination of these modalities (e.g., Kurdzo and Palmer 2012; Cho and Kurdzo 2019). The Multifunction Phased Array Radar (MPAR; Weber et al. 2007; Zrnić et al. 2007; Stailey and Hondl 2016), Collaborative Adaptive Sensing of the Atmosphere (CASA; McLaughlin et al. 2009), and Spectrum Efficient National Surveillance Radar (SENSR; Weber et al. 2018) programs are some examples that have proposed different types of radar network designs in the United States. Of course, radar networks have been designed and deployed in several other countries, with varying degrees of coverage, frequency, and capabilities (e.g., Huuskonen et al. 2014).
This paper addresses the impact of QPE errors on network density through the use of a generalized model. Quantification is offered through model output and its application to geospatial network designs. A general trend of increased underestimates is seen at far ranges from the radar and high rainfall rates, arguing for increased density in future weather radar networks. The errors in this analysis are quantified using a large database of collected cases “truthed” with ASOS rain gauges. The factors considered for error sources are minimum height of the beam above ground level (including topography effects), cross-radial resolution, artificially added polarimetric bias, and gauge-observed rainfall rate. A support vector machine (SVM) regression model is constructed to estimate error. SVM regression (SVR) is a technique to minimize a function that deviates by no more than a stated margin of tolerance at any given input–output pair. The trained model is then applied to the WSR-88D network (with the caveat that we assume error is consistent across the network) to generate a quantification of networkwide, system-level QPE error. Different rainfall rates (based on historical return rates) and added polarimetric biases are considered. The model is applied to a series of example future network scenarios to show its potential usefulness for network design.
Within the QPE realm, beam height largely determines the hydrometeor regime that is being sampled in a given resolution volume. Below the ML (i.e., at low beam elevations or “close” to the radar), hydrometeors are liquid, while within and above the ML (i.e., at high beam elevations or “far” from the radar), hydrometeors become increasingly solid. Of the three QPE methods investigated within this paper, only the R(Z) relationship can be used within and above the ML (Ryzhkov and Zrnić 2019). The existence of the ML, especially in association with deep convection, can force the use of R(Z) instead of R(Z, ZDR) or R(A). Not only is R(Z) less accurate than the other methods overall, but it tends to have its highest errors within and above the ML, despite being the only current QPE method usable there (Ryzhkov and Zrnić 2019). Therefore, it must be scaled based on previous study results—a solution that is not “one size fits all.” The existence of significant QPE error is central to radar network density and design; it forms the backbone of our study and motivates the need for increased radar density.
The novel aspects of this study center on hazard monitoring capabilities that include aspects of sampling (e.g., radar network density), quantitative accuracy (e.g., QPE algorithm biases), and societal impact (population density) to optimize future iterations of a national weather radar network. The results of the QPE accuracy analysis are derived from a substantial number of cases, and this is, to our knowledge, the first model developed to assess R(A) accuracy. Various potential future network designs have previously been proposed (Cho 2015) but have not been evaluated for effectiveness at improving weather radar data product quality. This study attempts to mitigate that shortfall. Additionally, no previous study has quantified the expected QPE errors across the WSR-88D fleet based on polarimetric bias, an issue that has unfortunately plagued the WSR-88D network (Ryzhkov and Zrnić 2019). Finally, although gap-filling radar solutions have been addressed previously (e.g., Kurdzo and Palmer 2012; Cho and Kurdzo 2019), no application has been shown with regard to QPE error. Similar to the cost–density function that utilizes “perfect” coverage shown in Cho and Kurdzo (2019), the optimal locations of gap-filler radars based on QPE are discussed. This is an important contribution, since the combination of multiple hazard-based benefits from a monetary cost–benefit perspective could drive future radar network design and gap-filling options.
The paper is organized as follows: the data and methods section (section 2) sets forth the data collection and processing techniques; the results section (section 3) shows a series of case examples; and the discussion and summary section (section 4) expands on the possible scenarios in which this model can be used and summarizes our findings.
2. Data and methods
a. Dataset collection
Five datasets are used in this study: WSR-88D, ASOS gauge, rawinsonde, Atlas rainfall return rates, and topography data. The radar and gauge data are broken up into “cases” that consist of an approximately 1-h timeframe. For the warm-season months of May–August 2015–17, 4750 cases were selected manually by choosing radar sites that had experienced rainfall during a 1-h period. The warm season was chosen for the relative prevalence of flash-flood rainfall events according to the National Severe Storms Laboratory Flooded Locations and Simulated Hydrographs (FLASH) project database (Gourley et al. 2017). The convective nature of warm-season storms leads to the collection of higher rainfall rates, which is necessary to avoid data sparsity issues with the model. A visual representation of the case density is presented in Fig. 1.
Case count in database for each WSR-88D region of coverage. For each grid point, the closest radar site (by line of sight) is computed, and each radar site is assigned a value on the basis of the number of cases from that radar in the database.
Citation: Journal of Applied Meteorology and Climatology 59, 11; 10.1175/JAMC-D-19-0164.1
Case viability was determined through the use of the Advanced Hydrologic Prediction Service daily QPE mosaics, combined with 5-min data from the Corridor Integrated Weather System (CIWS) archives (Evans and Ducot 2006). One-min data are saved from all of the ASOS sites within 300 km of the selected radar during the selected time. The nearest rawinsonde station to the radar is automatically selected, and the most recent rawinsonde data available are used to determine the ML for later processing. All data across a 1-h timeframe are saved for the selected radar, including the two radar volumes on the “outside edges” of the hour (one prehour volume and one posthour volume).
Only ASOS sites were used in this study in an attempt to mitigate quality control issues from less-maintained gauges. Despite this restriction, over 220 000 ASOS data points with measurable precipitation were used in this study. No quality control was performed on the collected ASOS data. Radar data were not edited or corrected other than the ORPG processing steps described in the following subsection. Every effort was made to identify cases that had suspect data and remove them from the database. Issues could include incomplete radar data, incomplete ASOS data, or the lack of a nearby full/timely sounding. Soundings were chosen as the nearest available rawinsonde launch, both in space and in time, within 6 h. We acknowledge that the use of soundings for ML determination is imperfect, but given the large dataset, it should be sufficient for our purposes.
Topographical data used for modeling beam blockage and beam height were acquired from the U.S. Geological Survey (USGS) GTOPO30 dataset (Gesch et al. 1999). These data are at 30-arc-s resolution, providing an exceptionally fine grid for beam modeling. A 4/3 Earth’s radius assumption was made for all beam height calculations. This is a standard approach for beam blockage in weather radar, described in detail in Doviak and Zrnić (1993), which assumes that the refractivity decreases linearly with height above the surface of Earth. A representative beam blockage map for the current WSR-88D network is presented in Fig. 2. As a comparison metric for flash-flood rainfall rates, we make use of the National Oceanic and Atmospheric Administration (NOAA) Atlas 2 and Atlas 14 datasets (Miller et al. 1973; Pavlovic et al. 2013). These datasets provide rainfall return rates over various return period thresholds (RPTs) and accumulation intervals.
Calculated beam center elevation for the lowest (minimum) nonblocked radial in volume coverage pattern 12 for the current WSR-88D network. Only elevations down to 0.5° are considered. Elevations are reported in kilometers above ground level, and values over 4.5 km or areas that are completely blocked are shown in white.
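The 4/3 effective Earth’s radius beam-height calculation described above can be sketched as follows. This is a minimal Python version of the standard refraction-corrected formula from Doviak and Zrnić (1993); the function name and the optional site-altitude offset are our own conventions, not part of the study’s processing code:

```python
import math

def beam_height_km(range_km, elev_deg, radar_alt_km=0.0):
    """Beam-center height (km, AGL) under the 4/3 effective Earth's
    radius assumption (Doviak and Zrnic 1993)."""
    ka = (4.0 / 3.0) * 6371.0  # effective Earth's radius (km)
    theta = math.radians(elev_deg)
    # Standard refraction-corrected beam-height formula
    h = math.sqrt(range_km**2 + ka**2
                  + 2.0 * range_km * ka * math.sin(theta)) - ka
    return h + radar_alt_km
```

At a 0.5° elevation angle, this places the beam center near 5 km AGL at 230 km in range, which is consistent with the 4.5-km cutoff applied to the lowest nonblocked radial in Fig. 2.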
One consideration that has not been taken into account, because of its difficulty to quantify, is rain gauge error. As with any instrument, tipping-bucket rain gauges are subject to errors (Habib et al. 2001). Networkwide gauge errors have been investigated previously (Janowiak et al. 1998), but such errors must be quantified against other instrumentation. In most cases, this is a remote sensing instrument; in the case of Janowiak et al. (1998), satellites are used for the analysis. There are multiple ways to approach error correction. The first is to assume some type of error tolerance in our model (e.g., a Gaussian error model). The second approach, and possibly the most appealing for future application, is to use calibrated gauge output to feed our model. Finally, the approach we took in this study is to select the “most dependable” sensors. For example, we utilized only ASOS data, as opposed to also including Automated Weather Observing System data, which have been found to be less maintained and therefore less accurate (Tokay et al. 2010).
b. Processing methods
1) The ORPG simulator and rain relationship algorithms
The core processing engine for this study was a modified version of the ORPGSim software package, a MATLAB-based ORPG simulator that mimics the C language–based real-time ORPG (Kurdzo et al. 2018b). ORPGSim performs dual-polarimetric preprocessing, quantization, data recombination, and modular product generation, including products such as the hydrometeor classification algorithm (HCA; Park et al. 2009). ORPGSim seeks to provide nearly identical results to the operational ORPG, which can be accessed in public form as the Common Operations and Development Environment (CODE; National Weather Service 2018). ORPGSim was used solely for convenience when running in a supercomputing environment; use of the public CODE software would produce nearly identical results. As part of this study, a series of QPE modules were implemented in ORPGSim, including modules based on the R(Z), R(Z, ZDR), R(KDP), and R(A) methods. The formulation of the R(A) method used in this study is detailed in Wang et al. (2019). The ORPGSim framework was integrated into a software suite that acquires, processes, and outputs all relevant datasets needed for each case in the case database. Each case was processed on a separate core of the Massachusetts Institute of Technology (MIT) Lincoln Laboratory Supercomputing Center (LLSC) Grid (Bliss et al. 2006), allowing the processing of all 4750 cases in under one week. Inputs to the software suite included the radar site, date, and time. The suite then automatically downloaded the necessary radar data, all available ASOS data within 300 km of the radar, and the rawinsonde data closest in space and time.
The QPE methods used in this study are well documented in previous literature and are summarized in Zhang (2016) and Ryzhkov and Zrnić (2019). The original relation, R(Z), related the horizontal reflectivity factor to rainfall rate through a series of Z–R relationships. These relationships were based on empirical observations across many different campaigns. Unfortunately, with so many Z–R relationships, it was difficult to get accurate rainfall estimates across different rainfall regimes. The R(Z, ZDR) and R(KDP) methods incorporate polarimetric information into the QPE process. The R(Z, ZDR) approach is currently used in the operational WSR-88D network and combines horizontal reflectivity factor and differential reflectivity values to reach a QPE result. R(KDP) has a similar goal but predominantly uses specific differential phase to estimate rainfall rate. Finally, R(A) is a relatively new approach that utilizes path-integrated attenuation to determine rainfall rates. R(A) is less susceptible to major changes in rainfall drop size distributions and is based on the radial profile of horizontal reflectivity factor and differential phase (Ryzhkov et al. 2014; Wang et al. 2019).
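As a concrete illustration of the simplest of these relations, an R(Z) estimate inverts a power law of the form Z = aR^b. The coefficients below are the commonly cited WSR-88D convective defaults (a = 300, b = 1.4) and are illustrative only; they are not necessarily the coefficients applied in the ORPGSim processing for this study:

```python
def rain_rate_from_z(dbz, a=300.0, b=1.4):
    """Invert Z = a * R**b for rain rate R (mm/h) given reflectivity
    in dBZ. Defaults are common convective coefficients; operational
    Z-R choices vary by rainfall regime."""
    z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> linear units (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)
```

The sensitivity of the result to the (a, b) pair is exactly the regime-dependence problem described above: the same 40-dBZ echo maps to noticeably different rain rates under different published Z–R relationships.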
The decision to use R(A) as a benchmark for QPE accuracy was made because it is the most recent state-of-the-art approach to QPE, both within the NWS vision and, in many ways, the research realm. While it is true that the NWS decision to move forward with R(A) does not dictate how other industry partners may or may not use the R(A) method, R(A) appears to be getting the most attention in the literature in recent years. The NWS and its industry partners are also moving toward more multisystem tools such as MRMS, further stressing the need to look at the entire picture. However, we posit that any increase in network density will bring the beam elevation closer to the ground in more locations, giving a better representation of ground-level rainfall, especially in the presence of a low- to medium-elevation ML.
2) Polarimetric bias
3) Rainfall return rates
To provide realistic rainfall rates for flash-flood situations, we calculated return period/recurrence intervals using the NOAA Atlas 2 and 14 datasets. These datasets (specifically Atlas 14) calculate RPTs using return periods spanning 1–1000 years and accumulation intervals from minutes to months (Herman and Schumacher 2016). NOAA’s Atlas 2 precipitation frequency estimates (Miller et al. 1973), released in 1973, covered much of the western United States, while the work of Hershfield (1961) covered much of the eastern United States. These datasets provided 6- and 24-h accumulation intervals for RPTs of 2 and 100 years. These datasets were (mostly) superseded by NOAA’s Atlas 14 project starting in the early 2000s. As of the time of this study, all states had been updated to Atlas 14 except for Idaho, Montana, Oregon, Washington, and Wyoming. Those states are still available using the Atlas 2 dataset. Atlas 14 utilizes a more advanced technique for regionalization, resulting in the use of multiple stations to estimate a point-based rainfall rate at high resolution (Herman and Schumacher 2016).
We needed to choose a representative rainfall rate that gave us an estimate of a relatively rare rainfall event, but not too rare (i.e., on the scale of time that we would expect a flash-flooding event). We also wanted to keep with a 1-h accumulation interval due to our data collection and processing method. One of the native RPT/accumulation intervals in the Atlas 14 dataset is the 2-yr/1-h return rate, which fit our needs well. It is important to note that the use of this combination is meant to serve as an example; all of the following results can be modified to fit a different distribution or need. To match the 2-yr/1-h return from the Atlas 2 sites, we interpolated the native 6-/24-h and 2-yr datasets to achieve the 2-yr/1-h goal. The resolution of the grid matches our USGS GTOPO30 dataset (30 arc s), and the results are shown in Fig. 3.
The NOAA Atlas-derived 2-yr/1-h return rainfall rates across the CONUS (mm h−1). A combination of all Atlas 14 and Atlas 2 volumes was completed, with linear interpolation at volume boundaries. The Atlas 2 data were interpolated in time to match the 2-yr/1-h returns native to Atlas 14.
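The temporal interpolation of the Atlas 2 data described above can be sketched under an explicit assumption: that accumulation depth varies linearly in the logarithm of duration, one common convention for precipitation-frequency estimates. The study’s exact interpolation scheme is not detailed here, so this is a plausible sketch rather than the authors’ implementation:

```python
import math

def depth_1h(depth_6h, depth_24h):
    """Estimate a 1-h accumulation depth from 6- and 24-h depths for
    the same return period, assuming depth is linear in log(duration).
    This convention is an assumption for illustration only."""
    slope = (depth_24h - depth_6h) / (math.log(24.0) - math.log(6.0))
    # Extrapolating below the 6-h interval; clamp at zero to avoid
    # nonphysical negative depths for unusual input pairs.
    return max(0.0, depth_6h + slope * (math.log(1.0) - math.log(6.0)))
```

Dividing the resulting depth by 1 h yields the 2-yr/1-h return rainfall rate on the Atlas 2 portion of the grid.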
c. SVM regression modeling
The intent of the error models is to predict the expected radar-based QPE error based on a series of influencing factors, including minimum height of the beam, cross-radial resolution, actual rainfall rate, and added polarimetric bias. We intend to focus on systematic error based on these parameters. The models developed in this study were generated using an SVR technique. SVR is a method for mapping predictor data from a moderate-dimensional dataset to a model output through the use of kernel functions (Ho and Lin 2012). SVR is a subset of the SVM technique that borrows from the main SVM approach but applies it to a continuous range of outputs. An SVM separates data inputs into distinctive output classifications by determining multidimensional hyperplanes that provide the largest-possible separation between classes. SVR works similarly, although a margin of tolerance is used to provide a regression-type output that does not have to be categorized sequentially (Vapnik 1995). The technique seeks to minimize a function whose outputs deviate from the training data by no more than the margin of tolerance at each grid point.
The SVR technique minimizes prediction error by finding the hyperplane that maximizes a margin of tolerance (Ho and Lin 2012). Because of the complexity and size of our dataset, a nonlinear kernel was used to transform the data into a higher-dimensional feature space. Several kernels and settings were experimented with, including linear, Gaussian, radial basis functions, and polynomials. The best results (lowest error) calculable in a reasonable amount of time were achieved using a third-order polynomial kernel with standardized data. The standardization scales all of the input parameters to the same limits, allowing for a smoother representation in the model.
No iteration limit was put on the model solutions, and most models took tens of millions of iterations to converge. Sequential minimal optimization was used as the solver (Fan et al. 2005), duplicate data points were removed to increase solver efficiency, and no variable weighting was used. Hyperparameters, which are a form of starting parameters for the model based on an optimization/exhaustive search of training data, were not optimized because of the computational constraints of such a large dataset. Although there were three input parameters (minimum height of the beam, cross-radial resolution, and actual rainfall rate), the examples shown in section 3 make simplifying assumptions (i.e., a fixed beamwidth and a 0.5° tilt angle) so as to make visualization of the results more straightforward. After training, a cross validation was performed to assess model performance. Using a fivefold regression loss, the cross-validated mean squared error was calculated to be 17.51 mm2 h−2. A fivefold loss was calculated simply because of the massive size of the model; a more traditional 10-fold loss would have taken over a month to run on the LLSC. Although the mean squared error seems high, it is likely skewed by the fact that so many high-impact rainfall events were included in the dataset.
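The model configuration described above (standardized inputs, a third-order polynomial kernel, and fivefold cross-validated mean squared error) can be sketched in scikit-learn rather than the MATLAB toolchain actually used. The synthetic predictors and toy error surface below are placeholders for the case database, not the study’s data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three predictors described in the text:
# minimum beam height (km AGL), cross-radial resolution (km), and
# gauge-observed rain rate (mm/h).
n = 500
X = np.column_stack([
    rng.uniform(0.0, 4.5, n),    # minimum beam height
    rng.uniform(0.25, 4.0, n),   # cross-radial resolution
    rng.uniform(0.0, 80.0, n),   # observed rain rate
])
# Toy error surface: underestimates grow with beam height and rate.
y = -0.05 * X[:, 0] * X[:, 2] + rng.normal(0.0, 1.0, n)

# Standardized inputs feeding a third-order polynomial-kernel SVR,
# mirroring the configuration described in the text.
model = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3))
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_squared_error")
cv_mse = -scores.mean()  # fivefold cross-validated MSE (mm^2 h^-2)
```

With the real case database in place of `X` and `y`, `cv_mse` plays the role of the 17.51 mm2 h−2 figure quoted above.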
Cross-radial resolution was decoupled from minimum height of the beam by using the four lowest elevations of radar data in each volume. In using cross-radial resolution as an input metric for the model, a mapping can be performed that relates the resolution to antenna aperture size. This approach makes it possible to account for errors at different antenna aperture sizes, including the analyses in section 3 that cover different replacement radar types.
3. Results
a. WSR-88D network analysis
The first example of the SVR modeling technique is applied to the existing WSR-88D network to generate a baseline understanding of current network capabilities. These results can then be compared to other siting scenarios and can also be applied to different rainfall estimation techniques. Focus is given to two techniques: R(A) and R(Z, ZDR). As mentioned previously, for both techniques, a scaled R(Z) relation for deep convection is used within and above the ML, and an R(KDP) relation is used anywhere that the HCA detects a rain/hail categorization.
1) Specific attenuation example
Given the eventual goal of moving the WSR-88D fleet to the R(A) method of rainfall estimation (Snow 2017), the primary example in this study is an analysis of the WSR-88D network in the contiguous United States (CONUS) using specific attenuation. In this example, the model is trained using all radar data processed with the R(A) method. The resulting SVR model is shown in Fig. 4. The rain rate on the ordinate is the observed rainfall rate at the gauge (not the radar estimate). The shading/contouring is the error in the radar estimate of rainfall. The 1-h time scale was chosen based on the data collection methods described earlier (1-h events) and the return rate of 2-yr/1-h in the Atlas database.
SVR model results for the R(A) QPE method using 4750 cases across the CONUS. Distance (km) is tied to a 0.5° elevation angle, and rain rate (mm h−1) is the gauge-measured “truth” rainfall rate. The shading and contours represent the expected 1-h errors at different combinations of range and observed rainfall rate based on a comparison of radar and ASOS gauge data in the collected cases. Note that slightly positive errors (overestimates) are observed close to the radar and at low rainfall rates, whereas increasingly stronger negative errors (underestimates) are observed at farther ranges and higher rainfall rates.
The results in Fig. 4 show positive errors (overestimates) at relatively low rainfall rates and relatively close to the radar. However, the errors in these areas are quite small, and at low rainfall rates, often close to zero even out to 200 km in range. At higher rainfall rates, the error shifts to negative (underestimates). This is a known estimation issue with the R(A) method in deep convection (Ryzhkov et al. 2014; Cocks et al. 2018, 2019), and we expect to see it in the model, especially when training with predominantly convective cases in the warm season.
As the range increases and the rainfall rate is high, the errors become more strongly negative, sometimes exceeding a 50% error when compared with the observed rate. At these ranges, the beam is often within or above the ML, meaning that the R(Z) method is typically being used for rainfall estimation. No holistic correction for the ML was used; only a general trend of the layer’s presence at higher beam elevations is reflected in the model results. This is expected to be sufficient since only warm-season precipitation, with a focus on deep convection, was considered. It is also important to note that VPR attempts to correct for these far ranges from the radar; we have not included this in our model because of the lack of VPR in the operational ORPG and its relative lack of success in deep convection (Matrosov 2008; Matrosov et al. 2014).
The use of R(Z) within and above the ML is a significant source of error for QPE. In fact, this point is a critical aspect of this study. Given that we currently have no reliable way to perform QPE within and above the ML using polarimetric data, we are “stuck” with R(Z) in these regions. Theoretically, a denser network of weather radars would result in beam heights closer to ground level, meaning R(Z) would be used less in cases of deep convection. The inclusion of the R(Z) data within and above the ML helps the model capture the fact that regions farther from the radar tend to underestimate rainfall, theoretically necessitating a denser radar network to overcome the problem. These considerations will be important in future work when optimal network designs are compared.
The modeled results from Fig. 4 were then applied to the current WSR-88D network (assuming model stationarity across the CONUS) with the beam elevations shown in Fig. 2 and the Atlas rainfall rates shown in Fig. 3. The resulting errors are shown in Fig. 5. As expected from the model shown in Fig. 4, areas with lighter rainfall rates (i.e., the West Coast and the Northeast) display lower overall error and, in some cases, slightly positive error. Areas with heavier rainfall rates (i.e., the plains and the Southeast) display higher overall error, including occasionally severe underestimates along the Gulf Coast. The overall average rainfall rate error for the R(A) method at the Atlas-derived 2-yr/1-h return rainfall rates for all of the CONUS is −5.79 mm h−1, while the mean of the absolute value of all errors is 7.11 mm h−1. In general, errors increase in magnitude farther from a radar site, making gaps in coverage relatively clear in the results. Note that in this example, all beam elevations higher than 4.5 km were removed from plotting and from calculation of the means. These are areas where we do not feel there are sufficient training data to generate an accurate model of error.
Application of the R(A) SVR model results from Fig. 4 to the current WSR-88D network across the CONUS. Atlas-derived 2-yr/1-h return rainfall rates are assumed (see Fig. 3). The shading shows expected QPE error rates (mm h−1). The total mean and the mean absolute value of the error rate across the CONUS, given in the header, are −5.79 and 7.11 mm h−1, respectively. Radar sites are marked by black dots.
Citation: Journal of Applied Meteorology and Climatology 59, 11; 10.1175/JAMC-D-19-0164.1
2) Polarimetric bias examples
The SVR QPE method described in this study can also be used to identify the effects of polarimetric bias on QPE error. As stated earlier, we believe this is the first networkwide application of such a comparison with added bias. This is important because the WSR-88D fleet has historically suffered from polarimetric bias issues (Ryzhkov and Zrnić 2019), and these biases have never been quantified for QPE on a fleetwide basis. It is critical that these considerations be made not only with the current WSR-88D network, but also in the planning of future networks that may similarly suffer from polarimetric bias issues.
This example shows the importance of polarimetric calibration, as well as the potential advantages of the R(A) method over the R(Z, ZDR) method. The R(Z, ZDR) method with no added bias and the 2-yr/1-h return rainfall rates is shown in the top panel of Fig. 6. The R(Z, ZDR) models look similar to the R(A) models, with more strongly negative errors at farther ranges and higher rainfall rates (models not shown). Although the trend is similar due to the eventual use of R(Z) within and above the ML, the slope of error is steeper as the ML is approached, leading to higher overall error rates. This can be seen by comparing error rates at locations in radar coverage gaps in Figs. 5 and 6, which are similar due to their locations above the ML. It should be noted that polarimetric errors inherent in the training dataset cannot be accounted for in our processing; we assume their effect is negligible for all R(Z, ZDR) examples shown. The results in the top panel of Fig. 6 show a stronger negative (underestimate) bias in rainfall errors than the R(A) method, consistent with the documented accuracy advantage of R(A) over R(Z, ZDR) (Ryzhkov et al. 2014; Cocks et al. 2018, 2019). The total mean error observed is −12.67 mm h−1, while the mean of the absolute value of all errors is 12.70 mm h−1.
As in Fig. 5, but for the R(Z, ZDR) SVR model results (not shown) applied to the current WSR-88D network across the CONUS, showing results for (top) no assumed added polarimetric bias and (bottom) an added +0.5-dB bias.
As an example, a +0.5-dB polarimetric bias was added to the entire training dataset, and the model was retrained (model results not shown). The choice of +0.5 dB was made because many current WSR-88Ds have exhibited biases in this range over the past several years (Secrest 2016). Despite the stated goal of a 0.2-dB maximum bias for the NEXRAD network (up from 0.1 dB originally; Ryzhkov et al. 2005a), a +0.5-dB error is unfortunately not uncommon (Secrest 2016). This model was applied with the same parameters as the other examples for the existing WSR-88D CONUS network, and the results are shown in the bottom panel of Fig. 6. Virtually all errors become negative, meaning the total mean error and the mean of the absolute value error are equal at −17.31/17.31 mm h−1. This is a 35.7% increase in total error over the R(Z, ZDR) method with no added bias. The addition of a +0.5-dB bias across the entire fleet is meant to serve as an example, as it is an exceptionally high bias and would not be the same at every site. Lower biases at adjacent sites may reduce the operational impact of a high bias at a location where multiple WSR-88Ds are used to issue a flash-flood warning.
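The first-order effect of a ZDR bias on a power-law R(Z, ZDR) estimator can be sketched directly. The coefficients below are illustrative only, not the operational values; the key point is that a negative ZDR exponent turns a positive ZDR bias into a multiplicative rain-rate underestimate:

```python
def r_z_zdr(z_dbz, zdr_db, a=0.0142, b=0.770, c=-1.67):
    """Illustrative power-law R(Z, ZDR) rain-rate estimator (mm/h).
    Z and ZDR are converted from dB to linear units; a, b, and c are
    placeholder coefficients, not the operational ones."""
    z_lin = 10.0 ** (z_dbz / 10.0)
    zdr_lin = 10.0 ** (zdr_db / 10.0)
    return a * (z_lin ** b) * (zdr_lin ** c)

# A +0.5-dB ZDR bias scales the estimate by 10**(0.05 * c) regardless of Z:
unbiased = r_z_zdr(45.0, 1.2)
biased = r_z_zdr(45.0, 1.2 + 0.5)
print(round(biased / unbiased, 3))  # ~0.825, i.e., roughly an 18% underestimate
```

Because the bias enters multiplicatively, the fractional underestimate is the same at every reflectivity, which is consistent with the nearly uniform shift toward negative errors seen in the bottom panel of Fig. 6.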
b. Example applications to candidate network designs
A series of network design case studies was carried out using the R(A) method to assess the changes in overall error with added network density. The designs herein are not intended to be expected future network scenarios but are combinations of scenarios examined in previous literature (Cho 2015). This example is meant to demonstrate the capability of the method in this study to directly compare different network designs. The scenarios are combinations of the current WSR-88D sites and the addition of polarimetric weather radars at Terminal Doppler Weather Radar (TDWR), Airport Surveillance Radar (ASR), Air Route Surveillance Radar 4 (ARSR-4), and Common Air Route Surveillance Radar (CARSR) sites. The scenarios also differentiate between ASR sites with and without a Weather Systems Processor (WSP).
For each scenario, a new blockage/beam-height map was created using the sites included in the scenario. A fully polarimetric radar was assumed; however, the replacement radars were considered to have the range and beamwidth of the radars they were replacing. We assumed that the replacement radars were pencil-beam systems, with both azimuthal and elevation beamwidths equal to the azimuthal beamwidth of the radars they replaced. It is important to note that a wider beamwidth in elevation (a “fan beam”) would likely degrade the results. This means that replacements of TDWR, ASR, ARSR-4, and CARSR had maximum ranges of 90, 111, 467, and 370 km, respectively, and beamwidths of 1.0°, 2.0°, 1.7°, and 1.7°, respectively. Note that although the TDWR has a 0.5° beamwidth, the processed data are at 1.0° (Michelson et al. 1990). Additional scenarios that replaced radars with nonpolarimetric versions were considered but offered limited improvement due to the use of the R(Z) method.
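Beamwidth matters because the cross-range resolution of a pencil beam degrades linearly with range (cell width ≈ range × beamwidth in radians). A quick check of the assumed replacement systems, using the ranges and beamwidths listed above (the helper function name is our own):

```python
import math

def cross_range_width_km(range_km, beamwidth_deg):
    """Approximate azimuthal width of the beam cross section (km) at a given range."""
    return range_km * math.radians(beamwidth_deg)

# (max range km, beamwidth deg) for the assumed TDWR, ASR, ARSR-4, and CARSR replacements
systems = {"TDWR": (90, 1.0), "ASR": (111, 2.0), "ARSR-4": (467, 1.7), "CARSR": (370, 1.7)}
for name, (rng, bw) in systems.items():
    print(name, round(cross_range_width_km(rng, bw), 1))
```

At their maximum ranges, the assumed ARSR-4 and CARSR replacements would have beam cross sections of roughly 11–14 km, so long-range, wide-beam sites contribute coarser observations than their coverage footprint alone suggests.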
A listing of the scenarios and their mean 1-h QPE error is presented in Table 1. The representative error maps are shown in Figs. 7–10. Scenarios 4, 7, and 8 include all ASR sites and represent the most significant improvements in mean error, with 46.80%, 52.50%, and 59.93% improvements over the current WSR-88D network, respectively. The addition of the CARSR sites (scenario 6) also provides an improvement of 26.42%. Without inclusion of all ASR sites, such as in scenarios 2, 3, and 5, only modest improvements to the baseline mean error are observed, with 3.63%, 12.26%, and 11.92% improvements in mean total error, respectively.
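The relative-improvement figures follow directly from the mean errors. As a worked example (assuming the percentages are computed against the baseline total mean error of −5.79 mm h−1; the function name is ours), the 46.80% improvement for scenario 4 implies a scenario mean error magnitude near 3.08 mm h−1:

```python
def improvement_pct(baseline_err, scenario_err):
    """Percent reduction in mean QPE error magnitude relative to the baseline network."""
    return 100.0 * (abs(baseline_err) - abs(scenario_err)) / abs(baseline_err)

# Implied scenario-4 mean error magnitude, if computed against the
# baseline CONUS total mean error of -5.79 mm/h:
implied = abs(-5.79) * (1.0 - 0.4680)
print(round(implied, 2))                            # ~3.08 mm/h
print(round(improvement_pct(-5.79, -implied), 1))   # recovers ~46.8
```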
Evaluation of total and absolute mean QPE error across the CONUS for scenarios 1–8. Scenario 1 is the existing WSR-88D radar network (N), and all other scenarios provide a relative improvement in comparison with NEXRAD. The scenarios are marked by the inclusion of different radar networks, including TDWR (T), ASR with WSP (AW), all other ASR (A), ARSR-4 (4), and CARSR (C).
As in Fig. 5, but for the R(A) method in (top) scenario 1 and (bottom) scenario 2 for radar network design. These scenarios are designated in Table 1. Note that scenario 1 is the existing WSR-88D network.
As in Fig. 5, but for the R(A) method in (top) scenario 3 and (bottom) scenario 4 for radar network design. These scenarios are designated in Table 1.
As in Fig. 5, but for the R(A) method in (top) scenario 5 and (bottom) scenario 6 for radar network design. These scenarios are designated in Table 1.
As in Fig. 5, but for the R(A) method in (top) scenario 7 and (bottom) scenario 8 for radar network design. These scenarios are designated in Table 1.
Much of the impact on improvement from these scenarios rests with the location of the added sites relative to current gaps in coverage. This is also heavily impacted simply by the number of CONUS sites, which for TDWR, ASR with WSP, remaining ASR, ARSR-4, and CARSR options are 44, 33, 261, 40, and 73, respectively. Therefore, it makes intuitive sense that inclusion of the ASR sites would make the most significant impact, with the CARSR sites having a secondarily large impact. Of course, scenario 8, which includes all radars, displays the most significant improvement relative to the baseline. It is important to note that none of these scenarios alone would make sense due to the overlapping nature of many radar sites. A future network, at least in the United States, would likely use some combination of sites that provide the most benefit without significant overlap.
Because a majority of areas show underestimates in the total mean error values, new radar placement in areas with low 2-yr/1-h return rainfall rates will sometimes generate slight positive errors instead of slight negative errors. For this reason, the plots in Figs. 7–10 include two means; the first is the total mean error, while the second is the mean of the absolute value of the total error. The latter number provides an arguably better estimate of network-based QPE error. These values are also included in Table 1.
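The distinction between the two means is worth making concrete: signed errors of opposite sign cancel in the total mean but not in the absolute mean, so the latter better reflects network-wide QPE error magnitude. A toy illustration (the error values are invented):

```python
# Invented per-grid-point 1-h QPE errors (mm/h): mostly underestimates,
# with slight overestimates near a hypothetical new radar in a dry region.
errors = [-8.0, -6.0, -4.0, 0.5, 1.5]

total_mean = sum(errors) / len(errors)
abs_mean = sum(abs(e) for e in errors) / len(errors)
print(total_mean, abs_mean)  # -3.2 vs 4.0: cancellation hides 0.8 mm/h of error
```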
A focus on the Southeast and Gulf Coast states is provided in Fig. 11 and Table 2 as an additional example of network density effects on QPE errors in an area of heavier rainfall. As seen in Fig. 3, the Atlas-based rainfall rates along the Gulf Coast are significantly higher than in the rest of the CONUS. Those rainfall rates lead to higher errors, as seen in the upper portions of Fig. 4. In addition, the variation of error with range is smaller than at lower rainfall rates, indicating that it is particularly difficult to avoid QPE errors at extreme rainfall rates; underestimation is very likely according to the SVR model.
As in Fig. 5, but for the R(A) method in the southeastern United States. Scenarios (top) 1 and (bottom) 7 are compared.
c. Gap-filling example
There is significant overlap when including all WSR-88D, TDWR, and ASR radars due to their common proximity to airports. To generate more realistic scenarios, we created a cost–benefit metric that determines the ideal locations for new radars relative to QPE improvement. This metric, called the possible improvement factor (PIF), is the possible QPE error improvement multiplied by the base-10 logarithm of population density. The possible QPE error improvement is determined by assuming that every grid point is 1 km from a radar (but still maintains the 2-yr/1-h rainfall rate). This generation of a “perfect” error map (not shown) describes the best-possible scenario for radar coverage given the CONUS-wide rainfall rates. The perfect error scenario, along with a population density database, is used to generate the PIF at full-grid resolution. The PIF for the existing WSR-88D network is shown in Fig. 12. Note that western North Carolina and southeastern Pennsylvania are devoid of WSR-88D coverage and have relatively high rainfall rates (resulting in higher errors but a high possibility for improvement) as well as relatively high population densities.
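Under the definition above, PIF at each grid point is the error improvement available relative to the “perfect” network, weighted by log10 of population density. A minimal per-grid-point sketch with invented values (the function name and the clamping of low population densities are our own assumptions):

```python
import math

def pif(current_err, perfect_err, pop_density):
    """Possible improvement factor at one grid point.

    current_err, perfect_err: expected QPE errors (mm/h; typically negative).
    pop_density: persons per square km (clamped to >= 1 so log10 >= 0;
    this clamp is an assumption, not from the paper).
    """
    possible_improvement = abs(current_err) - abs(perfect_err)
    return possible_improvement * math.log10(max(pop_density, 1.0))

# Invented examples: a poorly covered, populated area vs. a well-covered rural plain.
print(round(pif(-20.0, -2.0, 150.0), 2))  # large benefit pool
print(round(pif(-4.0, -2.0, 5.0), 2))     # small benefit pool
```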
PIF for the CONUS, considering only the existing WSR-88D network. PIF is defined by the possible improvement of error relative to “perfect” (mm h−1) multiplied by the base-10 logarithm of population density (persons per square kilometer). The mean is 2.64. Western North Carolina and southeastern Pennsylvania show areas where additional radars would provide the highest cost–benefit ratio.
We then performed an experimental optimization using PIF to place radars at all possible locations and assess cost–benefit. The possible locations investigated included all TDWR, ASR, ARSR, and CARSR locations. At each iteration of the optimization, each remaining potential site is given a theoretical radar, and the reduction of CONUS-wide PIF is calculated. The site that reduces PIF the most is selected for placement of a gap-filling radar, and the process repeats.
The first radar is placed in western North Carolina, and the second radar is placed in southeastern Pennsylvania (not shown). This continues, lowering the remaining available PIF with each additional radar. We ran this optimization out past 350 additional radars and generated a plot of PIF versus the number of additional radars, shown in Fig. 13. The initial benefit per radar is high, as shown by the steep drop in available PIF early in the curve, but the benefit levels out with more radars. The benefit pool is cut in half after roughly 80 additional radars and stagnates after roughly 200 additional radars. All radars added in this example are of the WSR-88D type (i.e., pencil beam, 1° beamwidth, polarimetric, full range, etc.).
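The site-selection loop described above is a standard greedy optimization: score every remaining candidate by the PIF it would remove, place a radar at the best one, and repeat. A compact sketch on a toy grid (the site names, coverage sets, and PIF values are invented for illustration):

```python
# Toy greedy gap-filler selection over invented grid cells and candidate sites.
cell_pif = {"c1": 9.0, "c2": 7.0, "c3": 3.0, "c4": 2.0, "c5": 1.0}
coverage = {  # hypothetical candidate sites -> grid cells they would cover
    "siteA": {"c1", "c3"},
    "siteB": {"c2", "c4", "c5"},
    "siteC": {"c1", "c2"},
}

def greedy_place(cell_pif, coverage, n_radars):
    """Greedily choose sites that remove the most remaining PIF per iteration."""
    remaining = dict(cell_pif)
    candidates = dict(coverage)
    chosen = []
    for _ in range(min(n_radars, len(candidates))):
        # Score each remaining candidate by the PIF it would remove right now.
        best = max(candidates, key=lambda s: sum(remaining[c] for c in candidates[s]))
        chosen.append(best)
        for c in candidates.pop(best):
            remaining[c] = 0.0  # covered cells no longer contribute PIF
    return chosen, sum(remaining.values())

chosen, leftover = greedy_place(cell_pif, coverage, 2)
print(chosen, leftover)  # siteC is picked first (it removes the most PIF)
```

Re-scoring after each placement is what makes the curve in Fig. 13 flatten: once the high-PIF cells are covered, each additional radar removes less of the remaining pool.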
Mean CONUS QPE PIF as related to the number of additional radars in the network (relative to the existing WSR-88Ds). Note that additional radars for this example are only considered at existing TDWR, ASR, ARSR, and CARSR locations. The original benefit is high because of the steep drop in available PIF early in the curve, but the benefit levels out with more radars. The benefit pool is cut in half after roughly 80 additional radars and stagnates after roughly 200 additional radars.
4. Discussion and summary
This study offers a season-specific (warm season) method for QPE error as a benefit pool, leading to the possibility of determining gap-filler locations and entire network design in the CONUS. However, as previously mentioned, this is only one type of benefit. Additionally, only flash-flood situations at the Atlas-derived 2-yr/1-h return rainfall rates were considered. Other scenarios, such as QPE performance in more typical rainfall situations, vertical coverage for tornado and mesocyclone detection and warning (Cho and Kurdzo 2019), coverage impacts on flash-flood warning performance (Cho and Kurdzo 2020a), coverage impacts on nontornadic severe thunderstorm winds (Cho and Kurdzo 2020b), and low-level wind field observation and analysis for rapid ingest into models (McLaughlin et al. 2009; Stensrud et al. 2009), should be considered in future studies.
QPE calculations for 4750 1-h cases across 12 total warm-season months were used to create an SVR model for the R(A) QPE method. This model showed that R(A) QPE estimates display slightly positive (overestimate) errors of up to 5 mm h−1 at close ranges (within 100 km at a 0.5° elevation angle and at rainfall rates below 25 mm h−1). As the range (and hence beam height) and the gauge-measured rainfall rate increase, errors trend significantly negative (underestimates), with underestimates as large as −55 mm h−1 for a 70 mm h−1 gauge-measured rainfall rate at 190 km. The SVR results were applied to the existing WSR-88D network, along with “flash flood” rainfall rates estimated geospatially across the CONUS using the Atlas-derived 2-yr/1-h return rainfall rates. The existing network showed a mean underestimate of −5.79 mm h−1 across the CONUS in these flash-flood rainfall situations, but a focus on the southeastern CONUS resulted in underestimates of −19.15 mm h−1 across the region. Additionally, an SVR model for the R(Z, ZDR) QPE method was trained using varying values of added polarimetric bias. It was shown that adding a +0.5-dB bias to ZDR across the WSR-88D fleet resulted in CONUS-wide flash-flood underestimates increasing from −12.67 to −17.31 mm h−1.
The R(A) SVR model was used to analyze previously hypothesized radar network laydowns as examples for how much decrease in QPE error could be attained with a denser radar network. Closer spacing of radars would lead to lower elevations generally being used for QPE calculations, keeping the beam below the ML more often and resulting in increased QPE accuracy. A series of cases were studied, the most aggressive of which replaced the TDWR, ASR, ASR with WSP, ARSR, and CARSR radars across the CONUS with dual-polarimetric weather radars. In this case, the total mean 1-h QPE error with the R(A) method improves by 59.93% compared with the existing WSR-88D network. A “perfect” weather radar network was generated to devise a population- and climatological-based PIF for the CONUS. It was shown that approximately one-quarter of the associated benefit pool could be filled with just 25 additional “gap filling” weather radars.
There has been little previous work analyzing the actual effect of polarimetric bias on QPE error in extreme rainfall situations across the entire WSR-88D fleet. This is, in part, because of the relative lack of information regarding the current status of polarimetric accuracy on the WSR-88D radars, as well as the (often) lack of an additional polarimetric, well-calibrated, S-band radar with which to compare. Additionally, in many areas of the United States, rain gauges are relatively sparse, making small, regional-based studies difficult unless there is, for example, a local mesonet (e.g., Brock et al. 1995; Brotzge et al. 2018). Investigating polarimetric bias in this study is an attempt at quantifying the effects of bias across a wide region with an exceptionally large dataset. This study not only quantifies these effects but adds impetus to move toward the R(A) QPE method based on the notably improved accuracy nationwide.
A number of simplifying assumptions were made in this study that could be expanded upon in future work. First and foremost, there was no systematic correction for the ML height. The trend of larger errors at higher rainfall rates and farther ranges was evident, but this could be modeled based on proximity to the ML. Second, there was no attempt to ingest our method into a networked system such as MRMS (Zhang et al. 2016). Although most flash-flood warning operations and modeling are directly related to MRMS and its corrections, we felt that it was important to quantify errors at the individual sensor level. A multitude of studies have investigated the accuracy of MRMS (e.g., Kirstetter et al. 2015; Cocks et al. 2016; Qi et al. 2016; Cocks et al. 2017; Delbert et al. 2017), and this accuracy is often higher than that of the individual sensors due to the corrections and adjustments made through the use of numerous sensors. This is true of many quality-controlled datasets for many types of sensors, as errors are inherent in virtually all sensor types. However, we maintain that an understanding of errors at the sensor level can only positively impact a system like MRMS.
This expectation is predicated on the operational characteristics of MRMS reported in the literature (e.g., Zhang et al. 2016). Within the MRMS framework, multiple QPE products are produced, including one based solely on radar QPE. The radar-based QPE is derived from a “seamless hybrid scan reflectivity,” which takes into account blockages within the radar’s field of view. Additionally, an apparent vertical profile of reflectivity (AVPR) correction is applied to avoid QPE overestimates in the bright band associated with the ML. Beyond these corrections and quality control measures, a qualitative metric known as the radar QPE quality index (RQI) is derived to represent the radar QPE uncertainty associated with reflectivity changes with height and near the ML (Zhang et al. 2011a). One error metric, referred to as “error 4” in Zhang et al. (2016), is based on the beam spreading/ascending with range from the radar, similar to what our study aims to quantify. RQI is designed to be relatively low within and above the ML, reflecting the greater uncertainty expected when estimating precipitation from radar observations sampled in the ML and the ice phase. As a qualitative index, RQI does not offer an actual correction. For this reason, the argument for improving the sensor-level data (in a radar sense) through gap filling (for example) is strengthened, since observations within and above the ML are not easily recoverable, even with quality control and VPR methods within MRMS. A denser radar network with fewer cases of the beam within and above the ML would most certainly improve the quality of MRMS data. Nevertheless, an investigation of downstream MRMS errors coupled with existing network topology should be considered going forward.
QPE errors can vary greatly from case to case because the schemes will work better or worse in specific conditions. For example, an anomalously low or high ML could skew results based on range and beamwidth. Several nonstatistical case studies and combinations of case studies have been published in the literature, with the most recent examples pertaining to R(A) (Cocks et al. 2019; Ryzhkov and Zrnić 2019; Wang et al. 2019). As an example of differing results between cases, two studies comparing R(A) were recently presented at conferences. A case on 30 June 2012 during a derecho across the southern mid-Atlantic region conformed to the results presented in this study (Kurdzo et al. 2018a, 2020), while a study of the 31 May 2013 flash flooding and tornado case in El Reno/Oklahoma City, Oklahoma, showed the opposite trends, largely due to differences in the ML and the proliferation of hail in the supercell (Kurdzo et al. 2020). Therefore, while our results are generalized, they are not all-encompassing; they simply show the statistical trends.
Continental-scale QPE is not an easy problem to solve due to the multitude of possible errors and physical challenges present. In addition to poor coverage resulting from elevation and errors due to proximity to the ML, many areas of the United States are not covered by a radar at all. Gap filling has proven to be problematic due not only to cost, but terrain blockage in mountainous areas. Placement of radars on mountain peaks limits the ability to observe low altitudes, and placement anywhere else leads to terrain blockage. Making matters worse, areas with extreme elevation changes are often uniquely susceptible to flash flooding. These challenges will likely need to be weighed with cost–benefit analyses to determine optimal radar network design in future changes/additions to the NEXRAD fleet or a future radar network.
The gap-filling example demonstrates the ability to focus improvements in radar network coverage based on QPE and population density. Similar to the example shown in Cho and Kurdzo (2019), the product of potential QPE error improvement and population density displays “hot zones” that indicate the most beneficial areas for new radars from a cost–benefit perspective. The actual cost savings can be calculated using the methods described in Cho and Kurdzo (2020a). When combined with tornado, severe thunderstorm, and other types of benefits, these cost–benefit maps could be a driving force for future radar network designs and/or gap-filler analysis. It is recommended that future investigations regarding radar network design and modification take the PIF method into account in coordination with existing/potential siting options. This method allows for the formulation of relatively noncomplex optimization problems.
A critical finding in this study is the quantification of QPE errors at high rainfall rates and at far distances from the radar using thousands of individual cases. The primary explanation for this behavior is the existence of the ML. As the Earth’s surface curves away from a radar, the beam becomes increasingly elevated above the ground. In a less-dense radar network, this will lead to many areas of precipitation being observed only within and above the ML. This behavior has been generalized in our SVR model. By applying this model to future network designs, an optimal network density based on regional rainfall statistics can be determined. When combined with other benefit pools, a holistic optimization of a future weather radar network can be performed for maximum cost benefit. When considering flash-flood, severe thunderstorm, and tornado-based hazards in radar network design, climatology (and likely population density) should be combined with potential network geometries to estimate future benefit.
Additional data collection outside of the summer months and in different regimes, including the analysis of stratiform and tropical rainfall as separate categories, is planned for the future. This would also involve the inclusion of different R(Z) relationships for different rainfall regimes, likely improving accuracy within and above the ML. More cases, especially far from radars and in extremely heavy rainfall situations, would help fill in data-sparsity issues with the SVR model, resulting in a better fit and accuracy. Additional types of benefit pools should continue to be investigated to determine the most appropriate cost–benefit approach. These approaches will likely differ among regions of the CONUS and will most certainly differ in other nations, depending on the type and severity of weather experienced in those locations on a regular basis.
MRMS has already ingested the Canadian radars and the TDWR radars. Until recently, the Canadian radars were not dual polarimetric, and the TDWRs continue to lack dual-polarimetric capabilities. For this reason, MRMS has only been able to ingest R(Z)-based QPE for these radars, which has been shown to be the least-accurate QPE method. This study quantified 4750 cases using the S-band, dual-polarimetric WSR-88D network. Mixing non-dual-polarimetric and C-band radars (the TDWR and older Canadian network were both at C band during our analysis period) could lead to nontrivial differences in the results due to different scattering characteristics, increased attenuation, calibration, etc. For this reason, we did not incorporate TDWR or Canadian data into our analysis. The issue of incorporating TDWR data was addressed in this study by assuming that future TDWR sites in a new network would be dual-polarimetric and at S band, per the existing WSR-88D network. As the dual-polarimetric, S-band Canadian radars come online, differences in calibration and likely an entirely new study regarding their performance would be necessary to properly include them in an analysis such as the one presented in this paper. Regardless, given that these radars are ingested into MRMS, studies such as these should be carried out in the not-too-distant future.
Acknowledgments
This article is approved for public release, and distribution is unlimited. This material is based upon work supported by the National Oceanic and Atmospheric Administration under Air Force Contract FA8702-15-D-0001. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Oceanic and Atmospheric Administration. The authors thank the four anonymous reviewers for their insightful comments on the paper. The authors also thank Alexander Ryzhkov for numerous helpful discussions about the specific attenuation method and the current state of the science. We are also grateful for the initial outline formulation of this study by Jud Stailey and the support for this project from Kurt Hondl and Mark Weber.
REFERENCES
Andrieu, H., and J. D. Creutin, 1995: Identification of vertical profiles of radar reflectivity for hydrological applications using an inverse method. Part I: Formulation. J. Appl. Meteor., 34, 225–239, https://doi.org/10.1175/1520-0450(1995)034<0225:IOVPOR>2.0.CO;2.
Arkin, P. A., A. V. R. Krishna Rao, and R. R. Kelkar, 1989: Large-scale precipitation and outgoing longwave radiation from INSAT-1B during the 1986 southwest monsoon season. J. Climate, 2, 619–628, https://doi.org/10.1175/1520-0442(1989)002<0619:LSPAOL>2.0.CO;2.
Bliss, N. T., R. Bond, J. Kepner, H. Kim, and A. Reuther, 2006: Interactive grid computing at Lincoln Laboratory. Lincoln Lab. J., 16, 165–216.
Bringi, V. N., and V. Chandrasekar, 2001: Polarimetric Doppler Weather Radar: Principles and Applications. Cambridge University Press, 664 pp.
Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma mesonet: A technical overview. J. Atmos. Oceanic Technol., 12, 5–19, https://doi.org/10.1175/1520-0426(1995)012<0005:TOMATO>2.0.CO;2.
Brotzge, J. A., and Coauthors, 2018: Development of a statewide, multiuse surface and vertical profiling network: An overview of the New York state mesonet. 22nd Conf. on IOAS-AOLS, Austin, TX, Amer. Meteor. Soc., 7.6, https://ams.confex.com/ams/98Annual/webprogram/Paper333289.html.
Cho, J.-Y.-N., 2015: Revised Multifunction Phased Array Radar (MPAR) network siting analysis. MIT Lincoln Laboratory Project Rep. ATC-425, 84 pp., https://www.ll.mit.edu/sites/default/files/publication/doc/2018-05/Cho_2015_ATC-425.pdf.
Cho, J.-Y.-N., and J. M. Kurdzo, 2019: Weather radar network benefit model for tornadoes. J. Appl. Meteor. Climatol., 58, 971–987, https://doi.org/10.1175/JAMC-D-18-0205.1.
Cho, J.-Y.-N., and J. M. Kurdzo, 2020a: Weather radar network benefit model for flash flood casualty reduction. J. Appl. Meteor. Climatol., 59, 589–604, https://doi.org/10.1175/JAMC-D-19-0176.1.
Cho, J.-Y.-N., and J. M. Kurdzo, 2020b: Weather radar network benefit model for nontornadic thunderstorm wind casualty cost reduction. Wea. Climate Soc., 12, 789–804, https://doi.org/10.1175/WCAS-D-20-0063.1.
Ciach, G. J., 2003: Local random errors in tipping-bucket rain gauge measurements. J. Atmos. Oceanic Technol., 20, 752–759, https://doi.org/10.1175/1520-0426(2003)20<752:LREITB>2.0.CO;2.
Cocks, S. B., S. M. Martinaitis, B. Kaney, J. Zhang, and K. Howard, 2016: MRMS QPE performance during the 2013/14 cool season. J. Hydrometeor., 17, 791–810, https://doi.org/10.1175/JHM-D-15-0095.1.
Cocks, S. B., J. Zhang, S. M. Martinaitis, Y. Qi, B. Kaney, and K. Howard, 2017: MRMS QPE performance east of the Rockies during the 2014 warm season. J. Hydrometeor., 18, 761–775, https://doi.org/10.1175/JHM-D-16-0179.1.
Cocks, S. B., L. Tang, Y. Wang, J. Zhang, A. Ryzhkov, P. Zhang, and K. W. Howard, 2018: MRMS precipitation estimates using specific attenuation. 32nd Conf. on Hydrology, Austin, TX, Amer. Meteor. Soc., 77, https://ams.confex.com/ams/98Annual/webprogram/Paper335167.html.
Cocks, S. B., and Coauthors, 2019: A prototype quantitative precipitation estimation algorithm for operational S-band polarimetric radar utilizing specific attenuation and specific differential phase. Part II: Performance verification and case study analysis. J. Hydrometeor., 20, 999–1014, https://doi.org/10.1175/JHM-D-18-0070.1.
Delbert, W., C. Haonan, V. Chandrasekar, C. Robert, C. Carroll, R. David, M. Sergey, and Z. Yu, 2017: Evaluation of multisensor quantitative precipitation estimation in Russian River Basin. J. Hydrol. Eng., 22, E5016002, https://doi.org/10.1061/(ASCE)HE.1943-5584.0001422.
Dotzek, N., and T. Fehr, 2003: Relationship between precipitation rates at the ground and aloft—A modeling study. J. Appl. Meteor., 42, 1285–1301, https://doi.org/10.1175/1520-0450(2003)042<1285:RBPRAT>2.0.CO;2.
Doviak, R. J., and D. S. Zrnić, 1993: Doppler Radar and Weather Observations. Dover Publications, 481 pp.
Doviak, R. J., V. Bringi, A. Ryzhkov, A. Zahrai, and D. Zrnić, 2000: Considerations for polarimetric upgrades to operational WSR-88D radars. J. Atmos. Oceanic Technol., 17, 257–278, https://doi.org/10.1175/1520-0426(2000)017<0257:CFPUTO>2.0.CO;2.
Evans, J. E., and E. R. Ducot, 2006: Corridor integrated weather system. Lincoln Lab. J., 16, 59–80.
Fan, R. E., P. H. Chen, and C. J. Lin, 2005: Working set selection using second order information for training support vector machines. J. Mach. Learn. Res., 6, 1889–1918.
Gesch, D. B., K. L. Verdin, and S. K. Greenlee, 1999: New land surface digital elevation model covers the Earth. Eos, Trans. Amer. Geophys. Union, 80, 69–70, https://doi.org/10.1029/99EO00050.
Giangrande, S. E., and A. V. Ryzhkov, 2005: Calibration of dual-polarization radar in the presence of partial beam blockage. J. Atmos. Oceanic Technol., 22, 1156–1166, https://doi.org/10.1175/JTECH1766.1.
Gourley, J. J., R. A. Maddox, K. W. Howard, and D. W. Burgess, 2002: An exploratory multisensor technique for quantitative estimation of stratiform rainfall. J. Hydrometeor., 3, 166–180, https://doi.org/10.1175/1525-7541(2002)003<0166:AEMTFQ>2.0.CO;2.
Gourley, J. J., and Coauthors, 2017: The FLASH project: Improving the tools for flash flood monitoring and prediction across the United States. Bull. Amer. Meteor. Soc., 98, 361–372, https://doi.org/10.1175/BAMS-D-15-00247.1.
Habib, E., W. F. Krajewski, and A. Kruger, 2001: Sampling errors of tipping-bucket rain gauge measurements. J. Hydrol. Eng., 6, 159–166, https://doi.org/10.1061/(ASCE)1084-0699(2001)6:2(159).
Herman, G. R., and R. S. Schumacher, 2016: Extreme precipitation in models: An evaluation. Wea. Forecasting, 31, 1853–1879, https://doi.org/10.1175/WAF-D-16-0093.1.
Hershfield, D. M., 1961: Rainfall frequency atlas of the United States: For durations from 30 minutes to 24 hours and return periods from 1 to 100 years. U.S. Weather Bureau Tech. Paper 40, 65 pp., http://www.nws.noaa.gov/oh/hdsc/PF_documents/TechnicalPaper_No40.pdf.
Ho, C. H., and C. J. Lin, 2012: Large-scale linear support vector regression. J. Mach. Learn. Res., 13, 3323–3348.
Hong, Y., and J. J. Gourley, 2015: Radar Hydrology: Principles, Models, and Applications. CRC Press, 182 pp.
Huuskonen, A., E. Saltikoff, and I. Holleman, 2014: The operational weather radar network in Europe. Bull. Amer. Meteor. Soc., 95, 897–907, https://doi.org/10.1175/BAMS-D-12-00216.1.
Janowiak, J. E., A. Gruber, C. R. Kondragunta, R. E. Livezey, and G. J. Huffman, 1998: A comparison of the NCEP–NCAR reanalysis precipitation and the GPCP rain gauge–satellite combined dataset with observational error considerations. J. Climate, 11, 2960–2979, https://doi.org/10.1175/1520-0442(1998)011<2960:ACOTNN>2.0.CO;2.
Kirstetter, P.-E., H. Andrieu, G. Delrieu, and B. Boudevillain, 2010: Identification of vertical profiles of reflectivity for correction of volumetric radar data using rainfall classification. J. Appl. Meteor. Climatol., 49, 2167–2180, https://doi.org/10.1175/2010JAMC2369.1.
Kirstetter, P.-E., J. J. Gourley, Y. Hong, J. Zhang, S. Moazamigoodarzi, C. Langston, and A. Arthur, 2015: Probabilistic precipitation rate estimates with ground-based radar networks. Water Resour. Res., 51, 1422–1442, https://doi.org/10.1002/2014WR015672.
Kurdzo, J. M., and R. D. Palmer, 2012: Objective optimization of weather radar networks for low-level coverage using a genetic algorithm. J. Atmos. Oceanic Technol., 29, 807–821, https://doi.org/10.1175/JTECH-D-11-00076.1.
Kurdzo, J. M., E. F. Clemons, J. Y. N. Cho, P. L. Heinselman, and N. Yussouf, 2018a: Quantification of radar QPE performance based on SENSR network design possibilities. IEEE Radar Conf. (RadarConf18), Oklahoma City, OK, IEEE, 169–174, https://ieeexplore.ieee.org/document/8378551.
Kurdzo, J. M., E. R. Williams, D. J. Smalley, B. J. Bennett, D. C. Patterson, M. S. Veillette, and M. F. Donovan, 2018b: Polarimetric observations of chaff using the WSR-88D network. J. Appl. Meteor. Climatol., 57, 1063–1081, https://doi.org/10.1175/JAMC-D-17-0191.1.
Kurdzo, J. M., Y. Wen, C. M. Kuster, J. Y. N. Cho, and T. J. Schuur, 2020: Investigating the impact of radar observation height on streamflow modeling: The 31 May 2013 El Reno/Oklahoma City, OK flash flood case. 36th Conf. on Environmental Information Processing Technologies, Boston, MA, Amer. Meteor. Soc., 12B.4, https://ams.confex.com/ams/2020Annual/webprogram/Paper367153.html.
Mandapaka, P. V., and U. Germann, 2010: Radar-rainfall error models and ensemble generators. Rainfall: State of the Science, Geophys. Monogr., Vol. 191, Amer. Geophys. Union, 247–264, https://doi.org/10.1029/2010GM001003.
Matrosov, S. Y., 2008: Assessment of radar signal attenuation caused by the melting hydrometeor layer. IEEE Trans. Geosci. Remote Sens., 46, 1039–1047, https://doi.org/10.1109/TGRS.2008.915757.
Matrosov, S. Y., F. M. Ralph, P. J. Neiman, and A. B. White, 2014: Quantitative assessment of operational weather radar rainfall estimates over California’s Northern Sonoma County using HMT-West data. J. Hydrometeor., 15, 393–410, https://doi.org/10.1175/JHM-D-13-045.1.
McLaughlin, D., and Coauthors, 2009: Short-wavelength technology and the potential for distributed networks of small radar systems. Bull. Amer. Meteor. Soc., 90, 1797–1818, https://doi.org/10.1175/2009BAMS2507.1.
Michelson, M., W. Shrader, and J. Wieler, 1990: Terminal Doppler weather radar. Microwave J., 33, 139–148.
Miller, J., R. Frederick, and R. Tracey, 1973: Colorado. Vol. III, Precipitation-Frequency Atlas of the Western United States, NOAA Atlas 2, National Weather Service, 48 pp., https://mhfd.org/wp-content/uploads/2019/12/NOAA_Atlas_2_Precipitation_Frequency_Vol_3_Colorado-1.pdf.
Mishra, A. K., 2013: Effect of rain gauge density over the accuracy of rainfall: A case study over Bangalore, India. Springerplus, 2, 311, https://doi.org/10.1186/2193-1801-2-311.
National Climatic Data Center, 2019: 80-year list of severe weather fatalities. NOAA Doc., https://www.weather.gov/media/hazstat/80years.pdf.
National Weather Service, 2018: The Common Operations and Development Environment (CODE) for the WSR-88D open RPG. NOAA, https://www.weather.gov/code88d/.
Park, H. S., A. V. Ryzhkov, D. S. Zrnić, and K.-E. Kim, 2009: The hydrometeor classification algorithm for the polarimetric WSR-88D: Description and application to an MCS. Wea. Forecasting, 24, 730–748, https://doi.org/10.1175/2008WAF2222205.1.
Pavlovic, S., and Coauthors, 2013: NOAA Atlas 14: Updated precipitation frequency estimates for the United States. 2013 Fall Meeting, San Francisco, CA, Amer. Geophys. Union, Abstract H52B-07.
Qi, Y., S. Martinaitis, J. Zhang, and S. Cocks, 2016: A real-time automated quality control of hourly rain gauge data based on multiple sensors in MRMS system. J. Hydrometeor., 17, 1675–1691, https://doi.org/10.1175/JHM-D-15-0188.1.
Ryzhkov, A. V., and D. S. Zrnić, 2019: Radar Polarimetry for Weather Observations. Springer, 486 pp., https://doi.org/10.1007/978-3-030-05093-1.
Ryzhkov, A. V., S. E. Giangrande, V. M. Melnikov, and T. J. Schuur, 2005a: Calibration issues of dual-polarization radar measurements. J. Atmos. Oceanic Technol., 22, 1138–1155, https://doi.org/10.1175/JTECH1772.1.
Ryzhkov, A. V., S. E. Giangrande, and T. J. Schuur, 2005b: Rainfall estimation with a polarimetric prototype of WSR-88D. J. Appl. Meteor., 44, 502–515, https://doi.org/10.1175/JAM2213.1.
Ryzhkov, A. V., T. J. Schuur, D. W. Burgess, and D. Zrnić, 2005c: Polarimetric tornado detection. J. Appl. Meteor., 44, 557–570, https://doi.org/10.1175/JAM2235.1.
Ryzhkov, A. V., T. J. Schuur, D. W. Burgess, P. L. Heinselman, S. E. Giangrande, and D. S. Zrnić, 2005d: The joint polarization experiment: Polarimetric rainfall measurements and hydrometeor classification. Bull. Amer. Meteor. Soc., 86, 809–824, https://doi.org/10.1175/BAMS-86-6-809.
Ryzhkov, A. V., M. Diederich, P. Zhang, and C. Simmer, 2014: Potential utilization of specific attenuation for rainfall estimation, mitigation of partial beam blockage, and radar networking. J. Atmos. Oceanic Technol., 31, 599–619, https://doi.org/10.1175/JTECH-D-13-00038.1.
Secrest, G., 2016: ZDR calibration update. NOAA Doc., 9 pp., https://www.roc.noaa.gov/WSR88D/PublicDocs/TAC/2016/ZDRCalibrationUpdate_TAC2016Mar.pdf.
Sene, K., 2012: Flash Floods: Forecasting and Warning. Springer, 395 pp.
Smith, J. A., D. J. Seo, M. L. Baeck, and M. D. Hudlow, 1996: An intercomparison study of NEXRAD precipitation estimates. Water Resour. Res., 32, 2035–2045, https://doi.org/10.1029/96WR00270.
Snow, J., 2017: Memorandum: Recommendation of R(A) technique for QPE. NOAA Radar Operations Center Doc., 1 p., https://www.roc.noaa.gov/WSR88D/PublicDocs/TAC/2017/February2017NEXRADTAC-Specific%20Attenuation%20QPE%20Decision.pdf
Stailey, J. E., and K. D. Hondl, 2016: Multifunction phased array radar for aircraft and weather surveillance. Proc. IEEE, 104, 649–659, https://doi.org/10.1109/JPROC.2015.2491179.
Stensrud, D. J., and Coauthors, 2009: Convective-scale warn-on-forecast system. Bull. Amer. Meteor. Soc., 90, 1487–1500, https://doi.org/10.1175/2009BAMS2795.1.
Tokay, A., P. G. Bashor, and V. L. McDowell, 2010: Comparison of rain gauge measurements in the Mid-Atlantic region. J. Hydrometeor., 11, 553–565, https://doi.org/10.1175/2009JHM1137.1.
Vapnik, V., 1995: The Nature of Statistical Learning Theory. Springer, 201 pp.
Wang, Y., S. Cocks, L. Tang, A. Ryzhkov, P. Zhang, J. Zhang, and K. Howard, 2019: A prototype quantitative precipitation estimation algorithm for operational S-band polarimetric radar utilizing specific attenuation and specific differential phase. Part I: Algorithm description. J. Hydrometeor., 20, 985–997, https://doi.org/10.1175/JHM-D-18-0071.1.
Weber, M. E., J. Y. N. Cho, J. S. Herd, J. M. Flavin, W. E. Benner, and G. S. Torok, 2007: The next-generation multimission U.S. surveillance radar network. Bull. Amer. Meteor. Soc., 88, 1739–1752, https://doi.org/10.1175/BAMS-88-11-1739.
Weber, M. E., K. D. Hondl, M. J. Istok, and R. E. Saffle, 2018: NOAA’s spectrum efficient national surveillance radar (SENSR) research program. 34th Conf. on Environmental Information Processing Technologies, Austin, TX, Amer. Meteor. Soc., 10.1, https://ams.confex.com/ams/98Annual/webprogram/Paper337206.html.
Xu, H., C.-Y. Xu, H. Chen, Z. Zhang, and L. Li, 2013: Assessing the influence of rain gauge density and distribution on hydrological model performance in a humid region of China. J. Hydrol., 505, 1–12, https://doi.org/10.1016/j.jhydrol.2013.09.004.
Zhang, G., 2016: Weather Radar Polarimetry. CRC Press, 323 pp.
Zhang, J., and Y. Qi, 2010: A real-time algorithm for the correction of brightband effects in radar-derived QPE. J. Hydrometeor., 11, 1157–1171, https://doi.org/10.1175/2010JHM1201.1.
Zhang, J., Y. Qi, K. Howard, C. Langston, and B. Kaney, 2011a: Radar quality index (RQI)—A combined measure of beam blockage and VPR effects in a national network. Proc. Eighth Int. Symp. on Weather Radar and Hydrology, Exeter, United Kingdom, Royal Meteor. Soc., 388–393.
Zhang, J., and Coauthors, 2011b: National mosaic and multi-sensor QPE (NMQ) system: Description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 1321–1338, https://doi.org/10.1175/2011BAMS-D-11-00047.1.
Zhang, J., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 621–638, https://doi.org/10.1175/BAMS-D-14-00174.1.
Zittel, W. D., J. G. Cunningham, R. R. Lee, L. M. Richardson, R. L. Ice, and V. Melnikov, 2014: Use of hydrometeors, Bragg scatter, and sun spikes to determine system ZDR biases in the WSR-88D fleet. Eighth European Conf. on Radar in Meteorology and Hydrology (ERAD 2014), Garmisch-Partenkirchen, Germany, DWD and DLR, DAC.P12, https://www.roc.noaa.gov/WSR88D/PublicDocs/Publications/132_Zittel.pdf.
Zrnić, D. S., and Coauthors, 2007: Agile-beam phased array radar for weather observations. Bull. Amer. Meteor. Soc., 88, 1753–1766, https://doi.org/10.1175/BAMS-88-11-1753.
Zrnić, D. S., R. Doviak, G. Zhang, and A. Ryzhkov, 2010a: Bias in differential reflectivity due to cross coupling through the radiation patterns of polarimetric weather radars. J. Atmos. Oceanic Technol., 27, 1624–1637, https://doi.org/10.1175/2010JTECHA1350.1.
Zrnić, D. S., G. Zhang, and R. J. Doviak, 2010b: Bias correction and Doppler measurement for polarimetric phased-array radar. IEEE Trans. Geosci. Remote Sens., 49, 843–853, https://doi.org/10.1109/TGRS.2010.2057436.