• Chang, K.-T., 2009: Introduction to Geographic Information Systems. 5th ed. McGraw-Hill, 448 pp.

• Dyer, J. L., and R. C. Garza, 2004: A comparison of precipitation estimation techniques over Lake Okeechobee, Florida. Wea. Forecasting, 19, 1029–1043, doi:10.1175/824.1.

• Fabry, F., and I. Zawadzki, 1995: Long-term radar observations of the melting layer of precipitation and their interpretation. J. Atmos. Sci., 52, 838–851, doi:10.1175/1520-0469(1995)052<0838:LTROOT>2.0.CO;2.

• Fulton, R. A., J. P. Breidenbach, D. Seo, D. A. Miller, and T. O’Bannon, 1998: The WSR-88D rainfall algorithm. Wea. Forecasting, 13, 377–395, doi:10.1175/1520-0434(1998)013<0377:TWRA>2.0.CO;2.

• Gourley, J. J., Y. Hong, Z. L. Flamig, L. Li, and J. Wang, 2010: Intercomparisons of rainfall estimates from radar, satellite, gauge and combinations for a season of record rainfall. J. Appl. Meteor. Climatol., 49, 437–452, doi:10.1175/2009JAMC2302.1.

• Groisman, P. Ya., E. L. Peck, and R. G. Quayle, 1999: Intercomparison of recording and standard nonrecording U.S. gauges. J. Atmos. Oceanic Technol., 16, 602–609, doi:10.1175/1520-0426(1999)016<0602:IORASN>2.0.CO;2.

• Habib, E., B. F. Larson, and J. Graschel, 2009: Validation of NEXRAD multisensor precipitation estimates using an experimental dense rain gauge network in south Louisiana. J. Hydrol., 373, 463–478, doi:10.1016/j.jhydrol.2009.05.010.

• Joss, J., and A. Waldvogel, 1990: Precipitation measurement and hydrology. Radar in Meteorology, D. Atlas, Ed., Amer. Meteor. Soc., 577–606.

• Krajewski, W. F., and J. A. Smith, 2002: Radar hydrology: Rainfall estimation. Adv. Water Resour., 25, 1387–1394, doi:10.1016/S0309-1708(02)00062-3.

• Kursinski, A. L., and S. L. Mullen, 2008: Spatiotemporal variability of hourly precipitation over the eastern contiguous United States from stage IV multisensor analysis. J. Hydrometeor., 9, 3–21, doi:10.1175/2007JHM856.1.

• Lin, Y., and K. E. Mitchell, 2005: The NCEP stage II/IV hourly precipitation analyses: Development and applications. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2. [Available online at https://ams.confex.com/ams/pdfpapers/83847.pdf.]

• Maddox, R. A., J. Zhang, J. J. Gourley, and K. W. Howard, 2002: Weather radar coverage over the contiguous United States. Wea. Forecasting, 17, 927–934, doi:10.1175/1520-0434(2002)017<0927:WRCOTC>2.0.CO;2.

• Marzen, J., and H. E. Fuelberg, 2005: Developing a high resolution precipitation dataset for Florida hydrologic studies. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., J9.2. [Available online at https://ams.confex.com/ams/pdfpapers/83718.pdf.]

• Medlin, J. M., S. K. Kimball, and K. G. Blackwell, 2007: Radar and rain gauge analysis of the extreme rainfall during Hurricane Danny’s (1997) landfall. Mon. Wea. Rev., 135, 1869–1888, doi:10.1175/MWR3368.1.

• Nystuen, J. A., 1999: Relative performance of automatic rain gauges under different rainfall conditions. J. Atmos. Oceanic Technol., 16, 1025–1043, doi:10.1175/1520-0426(1999)016<1025:RPOARG>2.0.CO;2.

• Nystuen, J. A., J. R. Proni, P. G. Black, and J. C. Wilkerson, 1996: A comparison of automatic rain gauges. J. Atmos. Oceanic Technol., 13, 62–73, doi:10.1175/1520-0426(1996)013<0062:ACOARG>2.0.CO;2.

• Rinehart, R. E., 2006: Radar for Meteorologists. 4th ed. Knight Printing, 482 pp.

• Rohli, R., and A. J. Vega, 2011: Climatology. 2nd ed. Jones and Bartlett Learning, 433 pp.

• Tokay, A., P. G. Bashor, and V. L. McDowell, 2010: Comparison of rain gauge measurements in the mid-Atlantic region. J. Hydrometeor., 11, 553–565, doi:10.1175/2009JHM1137.1.

• Ulbrich, C. W., and N. E. Miller, 2001: Experimental test of the effects of Z–R law variations on comparison of WSR-88D rainfall amounts with surface rain gauge and disdrometer data. Wea. Forecasting, 16, 369–374, doi:10.1175/1520-0434(2001)016<0369:ETOTEO>2.0.CO;2.

• Wang, D., M. B. Smith, Z. Zhang, S. Reed, and V. I. Koren, 2000: Statistical comparison of mean areal precipitation estimates from WSR-88D, operational and historical gauge networks. 15th Conf. on Hydrology, Long Beach, CA, Amer. Meteor. Soc., 2.17.

• Wang, X., H. Xie, H. Sharif, and J. Zeitler, 2008: Validating NEXRAD MPE and stage III precipitation products for uniform rainfall on the Upper Guadalupe River basin of Texas Hill Country. J. Hydrol., 348, 73–86, doi:10.1016/j.jhydrol.2007.09.057.

• Westcott, N. E., S. E. Hollinger, and K. E. Kunkel, 2005: Use of real-time multisensor data to assess the relationship of normalized corn yield with monthly rainfall and heat stress across the central United States. J. Appl. Meteor., 44, 1667–1676, doi:10.1175/JAM2303.1.

• Westcott, N. E., H. V. Knapp, and S. D. Hilberg, 2008: Comparison of gauge and multi-sensor precipitation estimates over a range of spatial and temporal scales. J. Hydrol., 351, 1–12, doi:10.1016/j.jhydrol.2007.10.057.

• Wu, W., D. Kitzmiller, and S. Wu, 2012: Evaluation of radar precipitation estimates from the National Mosaic and Multisensor Quantitative Precipitation Estimation System and the WSR-88D Precipitation Processing System over the conterminous United States. J. Hydrometeor., 13, 1080–1093, doi:10.1175/JHM-D-11-064.1.

• Yilmaz, K. K., T. S. Hogue, K. Hsu, S. Sorooshian, H. V. Gupta, and T. Wagener, 2005: Intercomparison of rain gauge, radar, and satellite-based precipitation estimates with emphasis on hydrologic forecasting. J. Hydrometeor., 6, 497–517, doi:10.1175/JHM431.1.

• Young, C. B., and N. A. Brunsell, 2008: Evaluating NEXRAD estimates for the Missouri River basin: Analysis using daily raingauge data. J. Hydrol. Eng., 13, 549–553, doi:10.1061/(ASCE)1084-0699(2008)13:7(549).

• Young, C. B., A. A. Bradley, W. F. Krajewski, A. Kruger, and M. L. Morrissey, 2000: Evaluating NEXRAD multisensor precipitation estimates for operational hydrologic forecasting. J. Hydrometeor., 1, 241–254, doi:10.1175/1525-7541(2000)001<0241:ENMPEF>2.0.CO;2.
Fig. 1. (a) RMSE, (b) NRMSE, (c) percent bias, and (d) mean bias at each COOP station across the whole time period. For percent bias and mean bias, negative values indicate a tendency for MPE to underestimate observed precipitation and positive values indicate a tendency for MPE to overestimate observed precipitation.

Fig. 2. Cluster number for each COOP station based upon RMSE. Mean RMSE for clusters 1–3 is 14.35, 5.56, and 9.45 mm, respectively.

Fig. 3. RMSE for all the COOP stations for the (a) DJF and (b) JJA seasons.

Fig. 4. RMSE for all COOP stations for (a) observed 24-h totals of <2.54 mm and (b) observed intensities of >25.4 mm.

Fig. 5. (a) Mean RMSE and (b) mean NRMSE across the domain for minimal (nonzero observed precipitation ≤ 2.54 mm), light (between 2.54 and 6.35 mm), moderate (between 6.35 and 12.7 mm), heavy (between 12.7 and 25.4 mm), and very heavy (>25.4 mm) precipitation.

Fig. 6. As in Fig. 5, but for (a) mean percent bias and (b) mean bias.

Fig. 7. MPE 24-h total precipitation ending at 0700 EST 28 Aug 2011 (case 1).

Fig. 8. MPE bias for each station at which observed precipitation was greater than zero for the 24-h total precipitation ending at 0700 EST 28 Aug 2011 (case 1).

Fig. 9. Accumulations of (a) snow/sleet and (b) freezing rain for 29–30 Jan 2010 (case 2; figures obtained from the NWS Raleigh Weather Forecast Office).

Fig. 10. MPE 24-h total precipitation ending at 0700 EST 30 Jan 2010 (case 2).

Fig. 11. MPE bias for each station at which the observed precipitation was greater than zero for the 24-h total precipitation ending at 0700 EST 30 Jan 2010 (case 2).


Comparison of NCEP Multisensor Precipitation Estimates with Independent Gauge Data over the Eastern United States

Adrienne Wootten, State Climate Office of North Carolina, North Carolina State University, Raleigh, North Carolina

Ryan P. Boyles, State Climate Office of North Carolina, North Carolina State University, Raleigh, North Carolina

Abstract

Gauge-calibrated radar estimates of daily precipitation are compared with daily observed values of precipitation from National Weather Service (NWS) Cooperative Observer Network (COOP) stations to evaluate the multisensor precipitation estimate (MPE) product that is gridded by the National Centers for Environmental Prediction (NCEP) for the eastern United States (defined as locations east of the Mississippi River). This study focuses on a broad evaluation of MPE across the study domain by season and intensity. In addition, precipitation type is considered through two case studies, one of intense summer precipitation and one of frozen winter precipitation. Results of this study indicate a north–south gradient in the error of MPE and a seasonal pattern with the highest error in summer and autumn and the lowest error in winter. These results suggest that MPE is less able to estimate convective-scale precipitation than precipitation variations at larger spatial scales and that MPE is subject to errors related both to the measurement gauges and to the radar estimates used. The case studies suggest that MPE may have higher error associated with estimating the liquid equivalent of frozen precipitation when compared with NWS COOP network data, and they point to the need for further analysis of MPE error for frozen precipitation in diverse topographic regimes.

Corresponding author address: Adrienne Wootten, Centennial Campus Box 7236, N.C. State University, Raleigh, NC 27695-7236. E-mail: amwootte@ncsu.edu


1. Introduction

The National Centers for Environmental Prediction (NCEP) have created national mosaics of radar-based precipitation estimates that are calibrated with surface gauge observations around the country (Lin and Mitchell 2005). These gridded products are provided on the Hydrologic Rainfall Analysis Project (HRAP) 4.765-km grid at hourly, 6-hourly, and 24-hourly accumulation time scales using multisensor precipitation estimate (MPE) algorithms employed by National Weather Service (NWS) River Forecast Centers. This high-resolution dataset could be valuable for multiple applications, including hydrology, crop modeling, and mesoscale precipitation research. The analysis in this study focuses on the NCEP stage-IV estimate product, hereinafter referred to as MPE for simplicity.

As Lin and Mitchell (2005) describe, the NCEP stage-II estimates combine radar precipitation estimates with hourly observations from operationally available surface gauges [Automated Surface Observing System (ASOS) stations and Hydrometeorological Automated Data System (HADS) stations], which are used to adjust for the general bias in the radar returns. Stage-II estimates are available quickly but have little human quality control applied to them. Stage-III estimates receive some level of human quality control but use different algorithms than MPE (Fulton et al. 1998); another prime difference is that stage III is produced by the River Forecast Centers for individual regions, whereas MPE is a national mosaic. MPE is available from 2002 onward and has additional quality control and updated algorithms relative to the stage-III estimates. Additional information regarding the stage-II, stage-III, and MPE estimates is available online from NCEP (http://www.emc.ncep.noaa.gov/mmb/ylin/pcpanl/QandA/#STAGEX). Therefore, MPE products are generally more accurate and valuable in monitoring and research applications but are not rapidly available for dissemination or real-time operational purposes. Radar-based estimates of precipitation have also been used in monitoring and research applications but are subject to several errors associated with radar beam geometry. These include beam blockage, beam spreading, and brightband overestimates (e.g., Joss and Waldvogel 1990; Fulton et al. 1998; Ulbrich and Miller 2001; Krajewski and Smith 2002; Habib et al. 2009), some of which can also be associated with distance from the radar. Aside from other aspects of the MPE algorithm, the incorporation of gauge calibration is intended to address radar-estimate errors prior to the creation of MPE.

There are multiple studies that have investigated the accuracy of gauge-corrected radar estimates, including Wang et al. (2000), Marzen and Fuelberg (2005), Dyer and Garza (2004), Young et al. (2000), Yilmaz et al. (2005), and Westcott et al. (2005), but the majority of these studies have focused on the stage-II or stage-III estimates. Several more recent studies have begun to evaluate MPE. Each of these studies provides some measure of the accuracy of MPE, but the majority focus on small regions of the United States, consider only hourly MPE, or focus on the spatial structure of the estimates (Kursinski and Mullen 2008). In other cases, MPE is evaluated in depth but for small regions (e.g., Wang et al. 2008; Westcott et al. 2008; Habib et al. 2009; Gourley et al. 2010). In another case, the National Mosaic and Multisensor Quantitative Precipitation Estimation System (NMQ) and the Weather Surveillance Radar-1988 Doppler (WSR-88D) Precipitation Processing System (PPS) are evaluated against MPE (Wu et al. 2012). Although there are multiple studies for specific regions of the United States, there is no evaluation of MPE for the eastern United States as a whole. These smaller evaluations provide a sense of the error of MPE over individual regions, but MPE is also used for national and regional drought monitoring. In particular, the authors have witnessed an increase in widespread usage of MPE for routine weekly drought monitoring and as input for regional watershed modeling. This broader-scale use makes a larger-scale evaluation necessary, both to estimate error over the full domain and to identify any spatial patterns in that error. In their role as climate-science and climate-data resource centers, state climate offices are often asked to provide MPE and directly address user issues of appropriate data usage and data accuracy. The eastern United States was chosen as the domain of this study because there are numerous radar coverage gaps in the western United States (Maddox et al. 2002).

In addition, the eastern United States contains a wide range of diverse topographic regimes, including plains, rolling hills, and mountains. Given known limitations of radar and tipping-bucket gauges, some differences in MPE error are expected across complex topography and among storm types (such as tropical cyclones), and such differences should be identifiable in the results. In this study, MPE is evaluated at the daily time scale for 2002–11. The analysis is considered over the entire time period, seasonally, and for various observed precipitation-intensity ranges. In addition, the large domain of this study allows for additional insights into spatial patterns of error in MPE that were not visible in prior small-region studies. Section 2 of this paper describes the intensity ranges, along with the data used and the specific statistics that were considered in this evaluation. Section 3 discusses the results of the evaluation, and section 4 summarizes the conclusions. The goal of this study is to evaluate MPE across the eastern United States through a combination of a study of the larger domain and case studies that consider differences in precipitation type and intensity. This approach includes discussing the potential sources of error in MPE and recommendations for future evaluation. The results indicate several aspects of MPE of which potential users should be aware before applying the data in other applications.

2. Data and methods

Quality-controlled daily precipitation data from NWS Cooperative Observer Network (COOP, also known as TD-3200) stations are used to evaluate MPE across the eastern United States. This volunteer station network was chosen because the data from these gauges are not included in the MPE algorithms for estimating precipitation and therefore serve as an independent dataset for evaluation purposes. ASOS and HADS stations are not used in this analysis because the measurements from the tipping-bucket rain gauges from these networks are used in the calculation of MPE. Although the COOP network has several challenges, such as those discussed by Tokay et al. (2010), the spatial density and temporal coverage of the dataset provide the best local station comparison available for this analysis. Discussed in this section are efforts taken to minimize the errors associated with the COOP record as described by Tokay et al. (2010). In addition, note that there is overlap between the ASOS and HADS networks and the COOP network. For this analysis, only COOP stations that are not also part of the ASOS or HADS network are used. COOP data are available from the National Climatic Data Center (NCDC) and were retrieved through the Applied Climate Information System database. The COOP network makes use of one of two different rain gauges, and further description of the gauges is available online from the National Oceanic and Atmospheric Administration (http://www.crh.noaa.gov/lot/?n=coop). To minimize the influence of errors that are related to false reports by COOP gauges (Tokay et al. 2010), only those observations considered to be valid data elements by NCDC are used in this analysis. More information on these quality-control metrics is available online from the NWS Training Center and the NCDC (http://www.nwstc.noaa.gov/Hydrology/HYDRO/QCModule/QC-Intro.html; http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL). 
MPE gridded daily estimates are evaluated against COOP daily precipitation data. Observed and MPE estimates of daily total precipitation are for 24-h accumulations ending at 0700 local standard time [1200 UTC or 0700 eastern standard time (EST)] each day. To minimize errors associated with recording time for precipitation (Tokay et al. 2010), the COOP stations used in this study are restricted to those reporting between 0600 and 0800 local standard time according to the available records from NCDC. It is possible that a few stations changed their recording times during the period of analysis for this study, however, and this possibility should be considered as a source of error in this analysis. MPE products are used because they are subject to human quality control by scientists at the NWS River Forecast Centers. The focus of this analysis is on instances in which precipitation is observed. Given this focus, days on which COOP stations did not record precipitation or reported missing observations are not used to evaluate MPE. The intent is to focus the evaluation of MPE on instances in which precipitation is observed, and focusing on dates on which precipitation was recorded as greater than zero effectively negates errors associated with missing reports from COOP gauges (Tokay et al. 2010).
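The station screening described above (a 0600–0800 LST observation time and a valid, nonzero precipitation report) can be sketched as follows. The record fields and values here are hypothetical illustrations, not NCDC's actual schema:

```python
# Hypothetical COOP daily records; field names and values are illustrative.
records = [
    {"station": "A", "obs_time": 700, "precip_mm": 5.1},
    {"station": "A", "obs_time": 700, "precip_mm": 0.0},   # zero: excluded
    {"station": "B", "obs_time": 630, "precip_mm": 12.7},
    {"station": "B", "obs_time": 900, "precip_mm": 3.0},   # outside 0600-0800 window
    {"station": "C", "obs_time": 700, "precip_mm": None},  # missing report: excluded
]

def keep(rec):
    """Apply the two screens used in the study: an observation time between
    0600 and 0800 LST, and a valid, nonzero precipitation observation."""
    in_window = 600 <= rec["obs_time"] <= 800
    valid_nonzero = rec["precip_mm"] is not None and rec["precip_mm"] > 0
    return in_window and valid_nonzero

matched = [r for r in records if keep(r)]
```

Only the first and third records survive both screens; the remaining days would not enter the MPE-versus-gauge comparison.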

For this evaluation, daily MPE values are interpolated to the nearest COOP station using bilinear interpolation (e.g., Chang 2009) for each day and station in the domain. A bilinear interpolation was used to provide the best representation of MPE in comparison with the station data. The raw values available from MPE can be considered to be an average over each grid cell and therefore are not entirely representative of point data. Interpolation is used to determine the values of MPE at each COOP station. A sensitivity analysis was done on an annual and seasonal basis and compared the evaluation results for 435 stations in North Carolina, South Carolina, Georgia, Virginia, West Virginia, Tennessee, and Kentucky that are based on bilinear interpolation with those done with a nearest-neighbor approach. The results indicate that the average root-mean-square error (RMSE) of MPE is smaller when using the bilinear-interpolation approach (7.75 mm for bilinear interpolation vs 8.15 mm for nearest neighbor). Although this is less than a 1-mm difference on average, there are instances in which the bilinear interpolation makes a more significant difference in the results. For example, consider COOP station 402202 in Pleasant Hill, Tennessee. During summer (June–August), the RMSEs for the bilinear interpolation and nearest neighbor are 6.34 and 11.20 mm, respectively. This result reflects a 43% improvement when using the bilinear interpolation as opposed to nearest neighbor. The sensitivity analysis was not performed for the entire study domain of the MPE evaluation, and the results discussed are assumed to extend to the study domain. The sensitivity analysis was performed with stations that cover diverse topography that is similar to the variability in the larger study domain of the eastern United States. As such, the results of the sensitivity analysis can be assumed to be reflective of other regions of the larger domain. 
Therefore, bilinear interpolation is used for the eastern-U.S. analysis. The resulting datasets that are used in this study include the COOP station precipitation and the interpolated MPE precipitation from the closest possible grid points throughout the eastern United States. These matching MPE and COOP precipitation values are used in all of the analyses that follow. The focus of this analysis is primarily on the ability of MPE to capture the observed intensities of precipitation as shown by the station data. Therefore, these analyses focus on instances in which observed precipitation (i.e., station precipitation) is greater than zero. This analysis uses data for the period 2002–11, which represents the available MPE period of record at the time of analysis.
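The bilinear-interpolation step described above can be sketched as follows. The grid values, unit grid spacing, and station coordinates are illustrative only; the operational MPE field lives on the ~4.765-km HRAP projection:

```python
def bilinear(grid, x, y):
    """Bilinear interpolation of grid[j][i] (unit spacing) at fractional
    coordinates (x, y): a weighted average of the four surrounding values."""
    i0, j0 = int(x), int(y)
    dx, dy = x - i0, y - j0
    return (grid[j0][i0] * (1 - dx) * (1 - dy)
            + grid[j0][i0 + 1] * dx * (1 - dy)
            + grid[j0 + 1][i0] * (1 - dx) * dy
            + grid[j0 + 1][i0 + 1] * dx * dy)

# Toy 2x2 neighborhood of daily MPE totals (mm).
mpe = [[4.0, 8.0],
       [6.0, 10.0]]

# A station at the center of the four grid points gets the simple average.
print(bilinear(mpe, 0.5, 0.5))  # 7.0
```

A nearest-neighbor approach would instead assign the station the single closest grid value (here 4.0, 6.0, 8.0, or 10.0), which is the comparison made in the sensitivity analysis.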

To define the accuracy of MPE for daily precipitation estimates, the statistics considered include RMSE and percent bias. RMSE is defined as

RMSE = sqrt[ (1/n) Σ_{i=1}^{n} (MPE_i − obs_i)^2 ],   (1)

where n is the number of days used in an analysis, MPE_i is the interpolated MPE value for day i, and obs_i is the observed daily precipitation for day i for each COOP station. In addition, the normalized RMSE (NRMSE) is calculated for each station by dividing the RMSE by the mean of the daily observations for that station. Percent bias is also used as a measure of the accuracy of MPE and is defined as

percent bias = 100 × Σ_{i=1}^{n} (MPE_i − obs_i) / Σ_{i=1}^{n} obs_i,   (2)

where MPE_i and obs_i are as described previously. Bias is simply defined as the observed precipitation subtracted from the MPE precipitation and is computed daily at each COOP station location. For brevity, we consider only mean bias across the domain of each analysis. In addition, mean RMSE and mean percent bias represent the spatial means of RMSE and percent bias for various spatial aggregations. For this study, "mean" bias and "mean" RMSE are used interchangeably with average bias and average RMSE.
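As a sketch, the station-level statistics defined above can be computed from paired lists of daily MPE and observed totals at one station (pure Python; list names are illustrative):

```python
import math

def rmse(mpe, obs):
    """Root-mean-square error of MPE against the gauge observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(mpe, obs)) / len(obs))

def nrmse(mpe, obs):
    """RMSE normalized by the station's mean observed daily precipitation."""
    return rmse(mpe, obs) / (sum(obs) / len(obs))

def percent_bias(mpe, obs):
    """Total MPE departure as a percentage of total observed precipitation."""
    return 100.0 * sum(m - o for m, o in zip(mpe, obs)) / sum(obs)

def mean_bias(mpe, obs):
    """Mean of the daily biases (MPE minus observed), in mm."""
    return sum(m - o for m, o in zip(mpe, obs)) / len(obs)
```

For example, with MPE totals [4.0, 9.0] mm against observations [5.0, 10.0] mm, each day's bias is −1 mm, so the RMSE is 1.0 mm, the mean bias is −1.0 mm, and the percent bias is 100 × (−2/15) ≈ −13.3%.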

This study includes analyses across the domain, for the entire time period and for each season. For each of these groups, the accuracy of MPE is also considered for different ranges of observed daily precipitation intensities. Five observed ranges are defined:

  1. minimal is nonzero observed precipitation of less than or equal to 2.54 mm,

  2. light precipitation is between 2.54 and 6.35 mm,

  3. moderate precipitation is between 6.35 and 12.7 mm,

  4. heavy precipitation is between 12.7 and 25.4 mm, and

  5. very heavy precipitation is greater than 25.4 mm.

These intensity ranges are chosen in part to provide enough observations in each range for a robust analysis. Trace precipitation is typically defined as 0.254 mm or less by the NWS. Given that this analysis focuses only on instances in which precipitation was observed (i.e., any observations greater than zero), considering minimal precipitation to be less than 0.254 mm would limit the number of observations per station that fall in this category, however. In a similar way, if very heavy precipitation was classified at some level larger than 25.4 mm, the number of observations per station that fall in this category would also be small. Table 1 shows the overall average number of events (observed precipitation > 0 mm) for 2002–11 and the average number of events for different precipitation amounts. From Table 1, we can see that the numbers of events in each range become comparable to each other when we consider precipitation < 2.54 mm in place of trace precipitation (<0.254 mm) and precipitation > 25.4 mm in place of 50.8 mm. Therefore, the observed precipitation ranges described above are designed to include a similar number of observations per station for each range to ensure a robust analysis and comparison.
Table 1. Average number of events per COOP station (observed precipitation > 0 mm), and the number of events for given precipitation amounts for the period 2002–11.
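The five intensity ranges above can be expressed as a small classification function. This is a sketch; the treatment of each boundary as an inclusive upper bound is a choice made here, since the text does not state which side of each boundary is closed:

```python
def intensity_class(p_mm):
    """Classify a nonzero observed daily total (mm) into the study's five
    intensity ranges. Upper bounds are treated as inclusive (an assumption)."""
    if p_mm <= 2.54:
        return "minimal"
    if p_mm <= 6.35:
        return "light"
    if p_mm <= 12.7:
        return "moderate"
    if p_mm <= 25.4:
        return "heavy"
    return "very heavy"
```

Each matched MPE–gauge day would be binned by its observed total before the per-range error statistics are computed.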

3. MPE evaluation results

This section will consider the metrics and methods discussed previously but will focus on the entire domain and then on two case studies for frozen and liquid precipitation in North Carolina. For the entire domain, the evaluation is considered for the period 2002–11 and for seasonal differences. In this study, seasons are defined as December–February (DJF), March–May (MAM), June–August (JJA), and September–November (SON). The ranges of observed precipitation intensity are also explored.

a. Regional patterns and evaluation

Across the entire region, there are consistent patterns present in RMSE for MPE in comparison with COOP observations. Figures 1a–d display the RMSE, NRMSE, percent bias, and mean bias, respectively, across the entire time period for each station in the domain. For the entire region, mean RMSE is 8.13 mm, mean NRMSE is 2.55, mean percent bias is −14.86%, and mean bias is −0.457 mm, with the mean number of observations per station equal to 2667 days. As shown in Fig. 1a, RMSE is higher in the southern region of the domain than in the northern regions, and the same pattern appears in the NRMSE (Fig. 1b). In contrast, both percent bias and mean bias show no consistent pattern in space. The majority of stations (69%) in the domain have a percent bias in MPE that is between 0% and −20%, indicating a general tendency to underestimate observed precipitation across much of the domain. Although most stations (85%) show a mean bias within 1.27 mm of zero, the majority (347, or 81%) of the remaining 426 stations have a mean bias of less than −1.27 mm, which also suggests a tendency to underestimate observed precipitation. In the northern portions of the domain, a transition is observed from low RMSE over inland areas to higher RMSE along the East Coast shoreline. Higher RMSE in the southern part of the study area is likely associated with convective precipitation, which can occur there throughout the year. This suggests that MPE has greater difficulty in accurately estimating rainfall from convective processes that occur at scales smaller than the 4.765-km resolution of MPE. This spatial pattern is not evident for percent bias (Fig. 1c) or mean bias (Fig. 1d), however. To further show this spatial pattern, a simple K-means cluster analysis is performed on the RMSE data. The stations are stratified by RMSE into the three clusters shown in Fig. 2. 
Three clusters are chosen for this analysis because adding more clusters obscures the large-scale spatial pattern, likely because of other local-scale patterns in MPE error. The clusters are generally spread through the domain, but the majority of stations in cluster 2 (80%) are north of 38°N. For clusters 1–3, the mean RMSE is 14.35, 5.56, and 9.45 mm, respectively. Given the lower RMSE of the stations in cluster 2, and that most of these stations lie in the northern portion of the domain, there is a clear spatial pattern across the domain in the RMSE of MPE. The lack of a comparable spatial pattern in percent bias and mean bias may be the result of performance differences among individual COOP gauges.
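The K-means stratification of station RMSE values can be sketched with a minimal one-dimensional implementation. The paper does not specify its software or initialization, so the deterministic sorted-value initialization and the toy RMSE values below are illustrative assumptions:

```python
def kmeans_1d(values, k, iters=50):
    """Minimal 1-D K-means (k >= 2). Deterministic initialization spreads the
    k initial centers across the sorted values; this is an illustrative choice."""
    s = sorted(values)
    centers = [s[(len(s) - 1) * c // (k - 1)] for c in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Toy station RMSE values (mm) with three loose groupings, echoing the
# cluster means reported in the text (roughly 14, 5.5, and 9.5 mm).
rmse_values = [14.1, 14.6, 5.2, 5.8, 9.3, 9.6, 5.5, 14.3]
labels, centers = kmeans_1d(rmse_values, k=3)
```

With scalar inputs like these, the three clusters separate cleanly into low-, mid-, and high-RMSE groups, mirroring the stratification shown in Fig. 2.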

Fig. 1.

(a) RMSE, (b) NRMSE, (c) percent bias, and (d) mean bias at each COOP station across the whole time period. For percent bias and mean bias, negative values indicate a tendency for MPE to underestimate observed precipitation and positive values indicate a tendency for MPE to overestimate observed precipitation.

Citation: Journal of Applied Meteorology and Climatology 53, 12; 10.1175/JAMC-D-14-0034.1

Fig. 2.

Cluster number for each COOP station based upon RMSE. Mean RMSE for clusters 1–3 is 14.35, 5.56, and 9.45 mm, respectively.


Table 2 shows the mean RMSE, mean NRMSE, mean bias, and mean percent bias across the study domain by season. Mean RMSE is higher in summer and autumn than in the other seasons, and mean NRMSE is higher in DJF and JJA; the mean number of observations per station is similar for all seasons. The largest absolute values of mean bias and mean percent bias also occur during autumn, with the smallest absolute value of mean bias during spring. The north–south trend in RMSE and NRMSE is consistent for each season as well. Figures 3a and 3b show the RMSE for each station in the domain for DJF and JJA, respectively. In DJF (Fig. 3a), RMSE in the southern portion of the domain is larger than elsewhere. In JJA (Fig. 3b), this trend is still apparent but the north–south gradient is weaker, since localized convection can occur across the entire domain in summer. RMSE in JJA and SON is higher across the domain than in DJF, which also suggests that MPE has larger error when estimating precipitation from localized convection, or is subject to the known errors of the automated tipping-bucket rain gauges that are incorporated in the MPE algorithm (Nystuen et al. 1996; Nystuen 1999; Medlin et al. 2007). In addition, MPE is subject to errors in the radar-estimation algorithm (Dyer and Garza 2004; Habib et al. 2009). These include variations in the relationship between radar reflectivity and rainfall (the Z–R relationship), which contribute to underestimates (e.g., Ulbrich and Miller 2001), brightband overestimates, and beam blockage (e.g., Krajewski and Smith 2002). Given that MPE can be considered an average across each grid box (as per Fulton et al. 1998), however, it is also important to consider that this average is not necessarily representative of point-gauge values.
Across the eastern United States in DJF, midlatitude cyclones traverse the region and provide precipitation across large areas, mainly as stratiform precipitation, which is typically more spatially uniform. Although there is some convection in DJF in the domain, the majority of this convection is in the southern United States. Therefore, the precipitation interpolated from the MPE grid point closest to each station is more likely to be similar (i.e., lower RMSE) to the actual precipitation measured at the COOP station. In summer, however, the majority of precipitation for the eastern United States comes as more spatially unstructured convective precipitation that occurs on scales smaller than the resolution of MPE. Most thunderstorms occur on a scale of 0.5–5 km (Rohli and Vega 2011), but the associated heavy precipitation can be even more localized, and thus MPE experiences larger RMSE since precipitation occurring at the station may not be occurring throughout the associated grid cell. Precipitation from convective storms is also averaged across the approximately 4.7-km grid cell, which can further increase the error of MPE relative to the gauge, even with the interpolation used in this analysis.

Table 2.

Mean RMSE, mean NRMSE, mean bias, and mean percent bias of MPE, and the mean number of observations per station for the entire domain for each season.

Fig. 3.

RMSE for all the COOP stations for the (a) DJF and (b) JJA seasons.


RMSE across the domain increases as the observed precipitation intensity increases. Figures 4a and 4b show the RMSE for each station in the domain for minimal and very heavy intensities, respectively. For minimal intensities the mean RMSE for the domain is 3.32 mm; for very heavy intensities it is 19.05 mm. For minimal intensities the mean NRMSE for the domain is 3.20; for very heavy intensities it is 0.47. Figures 5a and 5b show the mean RMSE and mean NRMSE, respectively, across the domain for each intensity range. From this comparison it is evident that RMSE increases, and NRMSE decreases, with increasing intensity. For comparison, Figs. 6a and 6b show the mean percent bias and mean bias, respectively, across the domain for each intensity range. Both decrease with increasing observed intensity, indicating that MPE tends to overestimate precipitation at the lowest observed intensities and transitions to underestimating precipitation at the highest observed intensities. For the low intensities, MPE overestimates the observed precipitation by small amounts (0.76 mm on average) in most cases; for very heavy intensities, MPE generally underestimates precipitation across the domain.
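The intensity stratification used for Figs. 4–6 can be sketched by binning station days on the observed amount and computing the metrics within each bin. The bin edges follow the ranges defined in the text; the helper name and output layout are illustrative.

```python
import numpy as np

# Intensity bin edges (mm) from the text: minimal, light, moderate, heavy, very heavy
EDGES = np.array([2.54, 6.35, 12.7, 25.4])
NAMES = ["minimal", "light", "moderate", "heavy", "very heavy"]

def metrics_by_intensity(gauge, mpe):
    """RMSE and mean bias of MPE stratified by observed (gauge) intensity.

    Bins: minimal (0 < P <= 2.54 mm), light (2.54, 6.35], moderate
    (6.35, 12.7], heavy (12.7, 25.4], very heavy (> 25.4 mm).
    """
    gauge, mpe = np.asarray(gauge, float), np.asarray(mpe, float)
    keep = gauge > 0.0                           # nonzero observed days only
    gauge, mpe = gauge[keep], mpe[keep]
    bins = np.digitize(gauge, EDGES, right=True)  # bin index 0..4
    out = {}
    for b, name in enumerate(NAMES):
        sel = bins == b
        if not np.any(sel):
            continue                              # skip empty bins
        err = mpe[sel] - gauge[sel]
        out[name] = {"rmse": float(np.sqrt(np.mean(err ** 2))),
                     "mean_bias": float(err.mean()),
                     "n": int(sel.sum())}
    return out
```

With `right=True`, `np.digitize` places an observation of exactly 2.54 mm in the minimal bin, matching the "≤ 2.54 mm" definition.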

Fig. 4.

RMSE for all COOP stations for observed 24-h totals of (a) ≤2.54 mm (minimal) and (b) >25.4 mm (very heavy).


Fig. 5.

(a) Mean RMSE and (b) mean NRMSE across the domain for minimal (nonzero observed precipitation ≤ 2.54 mm), light (between 2.54 and 6.35 mm), moderate (between 6.35 and 12.7 mm), heavy (between 12.7 and 25.4 mm), and very heavy (>25.4 mm) precipitation.


Fig. 6.

As in Fig. 5, but for (a) mean percent bias and (b) mean bias.


The patterns shown previously for mean RMSE, mean NRMSE, mean bias, and mean percent bias are also evident in all four seasons. Table 3 shows the mean RMSE, mean NRMSE, mean percent bias, and mean bias across the domain for each season by intensity range. For all seasons, both the mean percent bias and the mean bias decrease with increasing observed intensity; that is, MPE overestimates lower intensities of precipitation and underestimates higher intensities. Mean RMSE also increases (and mean NRMSE decreases) with increasing precipitation intensity for all seasons. Table 3 also shows that, regardless of the intensity range considered or the mean number of observations per station, the highest mean RMSE and mean percent bias for MPE occur during JJA and SON. Multiple studies have shown that the tipping-bucket gauges used to calibrate MPE exhibit a similar pattern of errors, overestimating lower intensities and underestimating higher intensities. This indicates that the errors associated with MPE may be in part a result of the propagation of gauge error into the MPE product. Note also that the gauges used at ASOS stations for the calibration of MPE have begun to be converted from tipping-bucket gauges to more accurate weighing-bucket gauges (Tokay et al. 2010). As a result, the errors in MPE may be a combined error from the gauges and the radar. For example, tipping-bucket errors for light precipitation, along with brightband overestimates by the radar (e.g., Krajewski and Smith 2002), may have a combined influence on the overestimation of light precipitation by MPE. The combined influence of station-measurement errors and radar-estimate errors on MPE was not the focus of this study but is recommended as a topic for future work.

Table 3.

Mean RMSE, mean NRMSE, mean bias, and mean percent bias of MPE, and the mean number of observations per station for each season across the domain and time period for five intensity ranges: minimal (nonzero observed precipitation ≤ 2.54 mm), light (between 2.54 and 6.35 mm), moderate (between 6.35 and 12.7 mm), heavy (between 12.7 and 25.4 mm), and very heavy (>25.4 mm) precipitation.


Thus, MPE experiences the highest error when the observed intensity is larger than 25.4 mm during JJA or during periods of intense localized convective precipitation. Note that this study has not considered the difference in these metrics between convective and nonconvective events, or between localized and structured events in summer, which is a caveat of this result.

Another notable result is that, although mean bias and percent bias indicate that MPE underestimates precipitation in general, MPE consistently overestimates minimal precipitation and underestimates very heavy precipitation. Moreover, the increase in the absolute value of the bias with increasing intensity indicates that MPE overestimates small amounts of precipitation by a small margin (0.76 mm on average for observed precipitation of ≤2.54 mm) and underestimates larger amounts by a larger margin (10.16 mm on average for observed precipitation of >25.4 mm). These trends are consistent for each season (Tables 2 and 3). This pattern of error in MPE may be related to the error of the stations used in its calculation: several studies have shown that tipping-bucket gauges like those used to calculate MPE exhibit a similar pattern, overestimating small amounts of precipitation and underestimating large amounts (Nystuen et al. 1996; Nystuen 1999; Medlin et al. 2007). In addition, errors in the radar precipitation-estimation algorithms associated with beam overshooting and beam spreading have also been shown to produce overestimates of smaller precipitation amounts and underestimates of heavier amounts (e.g., Habib et al. 2009). Therefore, the error from station measurements may have propagated through the MPE calculation, resulting in a similar error in MPE. Although this pattern is present in this study, further evaluation of MPE should be performed with more very heavy observations to confirm this finding.

b. Case-study analysis

Two case studies are presented in this section to illuminate differences in error with regard to precipitation type:

  1. case 1 is the landfall of Hurricane Irene in North Carolina (the 24-h total precipitation ending 0700 EST 28 August 2011) and

  2. case 2 is a winter-weather event in North Carolina (the 24-h total precipitation ending 0700 EST 30 January 2010).

These two case studies are chosen to represent both the intense precipitation possible in hurricanes and severe thunderstorms and the potential differences in error resulting from different precipitation types, including snow, sleet, and freezing rain.

First, for case 1, Fig. 7 shows the daily total precipitation from MPE ending at 0700 EST 28 August 2011. From Fig. 7 it is apparent that precipitation from the landfall of Irene was confined to coastal North Carolina and Virginia. By following the same method that was described previously, the MPE precipitation from the closest grid point is interpolated to each station prior to the evaluation. Table 4 shows the RMSE, percent bias, average bias, and total number of stations for this case study.
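The station-matching step used throughout the evaluation can be sketched as a nearest-grid-point lookup. This is a simplified illustration: the operational MPE grid is a roughly 4.7-km polar-stereographic (HRAP) mesh, whereas this sketch treats grid points as flat 1-D coordinate arrays and uses a degree-space distance, which is adequate only over a small domain.

```python
import numpy as np

def nearest_grid_value(grid_lat, grid_lon, grid_precip, st_lat, st_lon):
    """Assign each station the MPE value from its closest grid point.

    grid_lat/grid_lon/grid_precip: 1-D arrays of grid-point coordinates
    and precipitation totals (a flattened grid); st_lat/st_lon: 1-D
    arrays of station coordinates. Distance is computed in degree space,
    an assumption that ignores map projection and Earth curvature.
    """
    # squared distance from every grid point to every station
    d2 = (grid_lat[:, None] - st_lat[None, :]) ** 2 \
       + (grid_lon[:, None] - st_lon[None, :]) ** 2
    idx = np.argmin(d2, axis=0)          # index of nearest grid point
    return grid_precip[idx]              # one MPE value per station
```

For a production analysis, a projected coordinate system or great-circle distance would replace the degree-space approximation.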

Fig. 7.

MPE 24-h total precipitation ending at 0700 EST 28 Aug 2011 (case 1).


Table 4.

Evaluation metrics for observed precipitation for case 1.


Although the RMSE for this case of intense precipitation is greater than 25.4 mm, the percent bias and average bias indicate a tendency to underestimate the total precipitation from the landfall of Hurricane Irene. The bias at each station in the domain (shown in Fig. 8) indicates that the largest MPE bias occurs in the areas of heaviest rainfall. The MPE error is also not spatially consistent in coastal North Carolina: several overestimates occur close to underestimates. Table 5 compares the over- and underestimates along with the observed precipitation amounts. Together, Table 5 and Fig. 8 indicate that MPE underestimates intense precipitation more frequently, and by larger magnitudes, than it overestimates observed precipitation in general, which is consistent with the broader results of the climatological analysis presented earlier. This result is probably not due to an inability to capture isolated convection as described previously, since precipitation from a hurricane is more spatially uniform. In this instance, the errors in hurricane rainfall may be related both to the measurement errors of the tipping-bucket rain gauges used in calibration and to the tendency of radar to underestimate intense rainfall. Last, the stations at which MPE overestimates precipitation may have been subject to measurement error at the time of recording: as Tokay et al. (2010) point out, COOP measurements can be reported inaccurately, which could cause a COOP station to overstate the actual amount.
Given that Hurricane Irene made landfall in North Carolina at 0700 EST 27 August 2011 according to the National Hurricane Center, it is possible that some COOP reports for the 28 August 2011 evaluation are erroneous yet still passed NCDC quality-control standards.

Fig. 8.

MPE bias for each station at which observed precipitation was greater than zero for the 24-h total precipitation ending at 0700 EST 28 Aug 2011 (case 1).


Table 5.

Comparison of MPE bias for underestimation and overestimation of precipitation.


In contrast, the second case focuses on an event with a mix of precipitation phases. Case 2 is a winter-weather event in North Carolina that included snow, sleet, and freezing rain across the entire state. Figure 9 shows the accumulations of snow and sleet (Fig. 9a) and freezing rain (Fig. 9b) for the entire event, from 1200 EST 29 January to 1500 EST 30 January 2010 (provided by the NWS Raleigh Weather Forecast Office). Snow and sleet covered the western and northern portions of the state, while freezing rain was dominant in the southeastern corner. Figure 10 shows the MPE total precipitation ending at 0700 EST 30 January 2010. It is also apparent from Fig. 10 that there are multiple discontinuous distortions in the MPE values in western and northeastern North Carolina, potentially related to errors in the mosaicking process. Table 6 shows the evaluation metrics for observed precipitation for case 2; note that the evaluation is against the liquid equivalent measured at each station. In this case the RMSE is smaller than in case 1, but the percent bias and average bias again indicate a tendency to underestimate observed precipitation. Figure 11 shows the bias of MPE across the domain for case 2. There is a tendency to underestimate precipitation, and the largest underestimates occurred in western North Carolina: the largest MPE errors were associated with the regions of heaviest snow and sleet during the event. In contrast, in areas of freezing rain or mixed precipitation types the error is lower than in the areas of sleet and snow. The results from these cases suggest that, while MPE does have larger errors for heavier precipitation in both DJF and JJA, there may be distinct differences based on the liquid equivalent of frozen precipitation.
These results suggest that MPE may estimate liquid precipitation better than the liquid equivalent of frozen precipitation. Two additional aspects should be considered, however. In case 2, the precipitation fell primarily as sleet and snow in western North Carolina, while most of the rest of the domain received a mixture of frozen and unfrozen precipitation. This suggests that, in instances of mixed precipitation, MPE may less accurately capture the liquid equivalent of the frozen fraction. Although heated tipping-bucket rain gauges were used at ASOS stations during this time period, it is possible that the resulting liquid equivalent of frozen precipitation was less than would be recorded by weighing or nonrecording gauges (Groisman et al. 1999). As Groisman et al. point out, tipping-bucket rain gauges generally tend to underestimate frozen precipitation for several reasons, including burial and wind-driven undercatch. This would result in an underestimate of the liquid equivalent, at least during the day of the event, which could be translated into MPE. Another factor in this case is that the only active ASOS gauge in the mountains of North Carolina during the event was at the Asheville Regional Airport; the sparse coverage of ASOS stations in the mountains may contribute to errors for this event given the lack of data for gauge calibration of MPE. A further possible reason for the difference may be the ability of the radar to estimate frozen precipitation. Frozen precipitation occurred in the mountains of this region, and it is possible that the error was higher there as a result of a combination of topography and precipitation type, particularly with regard to beam blockage (Joss and Waldvogel 1990).
On the other hand, frozen precipitation (particularly snow) is not easily detectable by radar given that such storms can be shallow in height, which leads to lower detection of snowfall in general relative to rainfall situations (Rinehart 2006). In addition, there are numerous documented issues associated with estimation of precipitation in the melting layer that may play a role (Fabry and Zawadzki 1995). This combination of factors can also contribute to the underestimate of the liquid equivalent of frozen precipitation in the mountains. It is recommended that further research and winter case studies with MPE be done to ascertain the amount of error that is associated with frozen precipitation in different topographic regimes.

Fig. 9.

Accumulations of (a) snow/sleet and (b) freezing rain for 29–30 Jan 2010 (case 2; figures obtained from the NWS Raleigh Weather Forecast Office).


Fig. 10.

MPE 24-h total precipitation ending at 0700 EST 30 Jan 2010 (case 2).


Table 6.

Evaluation metrics for observed precipitation for case 2.

Fig. 11.

MPE bias for each station at which the observed precipitation was greater than zero for the 24-h total precipitation ending at 0700 EST 30 Jan 2010 (case 2).


4. Summary and conclusions

An evaluation of the accuracy of MPE across multiple spatial scales has been presented. This analysis uses the NWS COOP station network to evaluate the MPE daily precipitation given the available station density and temporal coverage. Tokay et al. (2010) point out several error sources in the COOP data, which are addressed as part of this analysis. First, to negate errors associated with missing reports, the evaluation focuses on dates for which recorded precipitation was greater than zero. Second, to minimize the influence of false reports by COOP gauges, only those observations that are considered to be valid data elements by NCDC are used. Third, to minimize the errors associated with recording time, the COOP stations used in this analysis were restricted to those reporting between 0600 and 0800 local standard time according to available records from NCDC. These three areas should be noted as potential sources of error, although their influence on this analysis has been minimized as much as possible.
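The three screening steps above can be sketched as a single filtering pass over raw COOP daily records. The field names (`precip_mm`, `qc_flag`, `obs_time`) are hypothetical placeholders for illustration, not the NCDC schema.

```python
from datetime import time

def filter_coop_records(records):
    """Apply the three screens described in the text to raw COOP records.

    Each record is a dict with hypothetical keys: 'precip_mm' (float, or
    None when the report is missing), 'qc_flag' (True when NCDC marks
    the value a valid data element), and 'obs_time' (local standard time
    of the daily observation).
    """
    kept = []
    for r in records:
        if r["precip_mm"] is None or r["precip_mm"] <= 0.0:
            continue                      # drop missing and zero-precipitation days
        if not r["qc_flag"]:
            continue                      # drop values NCDC flags as invalid
        if not (time(6, 0) <= r["obs_time"] <= time(8, 0)):
            continue                      # keep 0600-0800 LST observers only
        kept.append(r)
    return kept
```

Only records passing all three screens would enter the station-versus-MPE comparison.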

The overall mean RMSE across the eastern United States for the MPE available from NCEP is 8.13 mm, with a mean NRMSE of 2.55, mean bias of −0.457 mm, and mean percent bias of −14.86%. Larger RMSE is present in the MPE for the southern portion of the domain. The pattern of increased error in summer across the domain also suggests that MPE experiences higher error when estimating localized or unstructured precipitation from convection, which is dominant in summer; that is, MPE has higher error for estimates of convective precipitation than for more stratiform precipitation. In addition, mean RMSE increases with increasing observed intensity, while mean bias and mean percent bias show that MPE overestimates low intensities by small amounts and underestimates high intensities by larger amounts. This result is consistent with previous work, such as that of Westcott et al. (2008) and Habib et al. (2009). The results of this study also agree with the findings of Dyer and Garza (2004) and Habib et al. (2009), who demonstrated that MPE underestimates precipitation in general and that there is seasonality in the MPE error. Several researchers have focused on known problems with radar-based estimates of precipitation, including beam blockage, beam spreading, and brightband overestimates (e.g., Joss and Waldvogel 1990; Fulton et al. 1998; Ulbrich and Miller 2001; Krajewski and Smith 2002; Habib et al. 2009). Another possible reason for at least the intensity-related errors is that the station precipitation used in the calculation of MPE is also subject to errors that grow with precipitation intensity, particularly for tipping-bucket rain gauges (Nystuen et al. 1996; Nystuen 1999; Medlin et al. 2007). Therefore, MPE errors can be associated with both the radar estimates and the station-measurement error.
Although the results of this evaluation suggest that MPE underestimates localized intense precipitation resulting from convection primarily in summer, there is no distinction made in the evaluation between localized or unstructured convective precipitation, structured convective precipitation, and nonconvective precipitation. Therefore, to draw a firm conclusion on this facet, an evaluation should be made with a distinction among these three types. The pair of case studies in this analysis shows that MPE underestimates intense precipitation, and also shows errors for estimating the liquid equivalent of frozen precipitation. Young and Brunsell (2008) found that MPE tends to underestimate precipitation in mountainous areas more than in other regions. In our analysis, however, the patterns in bias and percent bias do not indicate a stronger tendency for MPE to underestimate precipitation in the mountains overall. Our results do potentially agree with Young and Brunsell (2008) with regard to individual cases in winter. The results for the winter case study show that the bias in the mountains is ≤−6.35 mm while it is within 6.35 mm for much of the rest of the case’s domain. This result suggests some agreement with Young and Brunsell (2008) with regard to frozen precipitation although not with regard to combined liquid and frozen precipitation. Further analysis is recommended to isolate the error associated with different types of precipitation and topographic regimes.

Acknowledgments

We acknowledge Aaron Sims and Colin Loftin for providing technical assistance and the other staff and students of the State Climate Office of North Carolina for their support. In addition, we acknowledge NCEP and NCDC for providing the MPE and COOP station data. We also thank our anonymous reviewers for providing advice and guidance to improve this paper.

REFERENCES

• Chang, K.-T., 2009: Introduction to Geographic Information Systems. 5th ed. McGraw-Hill, 448 pp.

• Dyer, J. L., and R. C. Garza, 2004: A comparison of precipitation estimation techniques over Lake Okeechobee, Florida. Wea. Forecasting, 19, 1029–1043, doi:10.1175/824.1.

• Fabry, F., and I. Zawadzki, 1995: Long-term radar observations of the melting layer of precipitation and their interpretation. J. Atmos. Sci., 52, 838–851, doi:10.1175/1520-0469(1995)052<0838:LTROOT>2.0.CO;2.

• Fulton, R. A., J. P. Breidenbach, D. Seo, D. A. Miller, and T. O’Bannon, 1998: The WSR-88D rainfall algorithm. Wea. Forecasting, 13, 377–395, doi:10.1175/1520-0434(1998)013<0377:TWRA>2.0.CO;2.

• Gourley, J. J., Y. Hong, Z. L. Flamig, L. Li, and J. Wang, 2010: Intercomparisons of rainfall estimates from radar, satellite, gauge and combinations for a season of record rainfall. J. Appl. Meteor. Climatol., 49, 437–452, doi:10.1175/2009JAMC2302.1.

• Groisman, P. Ya., E. L. Peck, and R. G. Quayle, 1999: Intercomparison of recording and standard nonrecording U.S. gauges. J. Atmos. Oceanic Technol., 16, 602–609, doi:10.1175/1520-0426(1999)016<0602:IORASN>2.0.CO;2.

• Habib, E., B. F. Larson, and J. Graschel, 2009: Validation of NEXRAD multisensor precipitation estimates using an experimental dense rain gauge network in south Louisiana. J. Hydrol., 373, 463–478, doi:10.1016/j.jhydrol.2009.05.010.

• Joss, J., and A. Waldvogel, 1990: Precipitation measurement and hydrology. Radar in Meteorology, D. Atlas, Ed., Amer. Meteor. Soc., 577–606.

• Krajewski, W. F., and J. A. Smith, 2002: Radar hydrology: Rainfall estimation. Adv. Water Resour., 25, 1387–1394, doi:10.1016/S0309-1708(02)00062-3.

• Kursinski, A. L., and S. L. Mullen, 2008: Spatiotemporal variability of hourly precipitation over the eastern contiguous United States from stage IV multisensor analysis. J. Hydrometeor., 9, 3–21, doi:10.1175/2007JHM856.1.

• Lin, Y., and K. E. Mitchell, 2005: The NCEP stage II/IV hourly precipitation analyses: Development and applications. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2. [Available online at https://ams.confex.com/ams/pdfpapers/83847.pdf.]

• Maddox, R. A., J. Zhang, J. J. Gourley, and K. W. Howard, 2002: Weather radar coverage over the contiguous United States. Wea. Forecasting, 17, 927–934, doi:10.1175/1520-0434(2002)017<0927:WRCOTC>2.0.CO;2.

• Marzen, J., and H. E. Fuelberg, 2005: Developing a high resolution precipitation dataset for Florida hydrologic studies. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., J9.2. [Available online at https://ams.confex.com/ams/pdfpapers/83718.pdf.]

• Medlin, J. M., S. K. Kimball, and K. G. Blackwell, 2007: Radar and rain gauge analysis of the extreme rainfall during Hurricane Danny’s (1997) landfall. Mon. Wea. Rev., 135, 1869–1888, doi:10.1175/MWR3368.1.

• Nystuen, J. A., 1999: Relative performance of automatic rain gauges under different rainfall conditions. J. Atmos. Oceanic Technol., 16, 1025–1043, doi:10.1175/1520-0426(1999)016<1025:RPOARG>2.0.CO;2.

• Nystuen, J. A., J. R. Proni, P. G. Black, and J. C. Wilkerson, 1996: A comparison of automatic rain gauges. J. Atmos. Oceanic Technol., 13, 62–73, doi:10.1175/1520-0426(1996)013<0062:ACOARG>2.0.CO;2.

• Rinehart, R. E., 2006: Radar for Meteorologists. 4th ed. Knight Printing, 482 pp.

• Rohli, R., and A. J. Vega, 2011: Climatology. 2nd ed. Jones and Bartlett Learning, 433 pp.

• Tokay, A., P. G. Bashor, and V. L. McDowell, 2010: Comparison of rain gauge measurements in the mid-Atlantic region. J. Hydrometeor., 11, 553–565, doi:10.1175/2009JHM1137.1.

• Ulbrich, C. W., and N. E. Miller, 2001: Experimental test of the effects of Z–R law variations on comparison of WSR-88D rainfall amounts with surface rain gauge and disdrometer data. Wea. Forecasting, 16, 369–374, doi:10.1175/1520-0434(2001)016<0369:ETOTEO>2.0.CO;2.

• Wang, D., M. B. Smith, Z. Zhang, S. Reed, and V. I. Koren, 2000: Statistical comparison of mean areal precipitation estimates from WSR-88D, operational and historical gauge networks. 15th Conf. on Hydrology, Long Beach, CA, Amer. Meteor. Soc., 2.17.

• Wang, X., H. Xie, H. Sharif, and J. Zeitler, 2008: Validating NEXRAD MPE and stage III precipitation products for uniform rainfall on the Upper Guadalupe River basin of Texas Hill Country. J. Hydrol., 348, 73–86, doi:10.1016/j.jhydrol.2007.09.057.

• Westcott, N. E., S. E. Hollinger, and K. E. Kunkel, 2005: Use of real-time multisensor data to assess the relationship of normalized corn yield with monthly rainfall and heat stress across the central United States. J. Appl. Meteor., 44, 1667–1676, doi:10.1175/JAM2303.1.

• Westcott, N. E., H. V. Knapp, and S. D. Hilberg, 2008: Comparison of gauge and multi-sensor precipitation estimates over a range of spatial and temporal scales. J. Hydrol., 351, 1–12, doi:10.1016/j.jhydrol.2007.10.057.

• Wu, W., D. Kitzmiller, and S. Wu, 2012: Evaluation of radar precipitation estimates from the National Mosaic and Multisensor Quantitative Precipitation Estimation System and the WSR-88D Precipitation Processing System over the conterminous United States. J. Hydrometeor., 13, 1080–1093, doi:10.1175/JHM-D-11-064.1.

• Yilmaz, K. K., T. S. Hogue, K. Hsu, S. Sorooshian, H. V. Gupta, and T. Wagener, 2005: Intercomparison of rain gauge, radar, and satellite-based precipitation estimates with emphasis on hydrologic forecasting. J. Hydrometeor., 6, 497–517, doi:10.1175/JHM431.1.

• Young, C. B., and N. A. Brunsell, 2008: Evaluating NEXRAD estimates for the Missouri River basin: Analysis using daily raingauge data. J. Hydrol. Eng., 13, 549–553, doi:10.1061/(ASCE)1084-0699(2008)13:7(549).

• Young, C. B., A. A. Bradley, W. F. Krajewski, A. Kruger, and M. L. Morrissey, 2000: Evaluating NEXRAD multisensor precipitation estimates for operational hydrologic forecasting. J. Hydrometeor., 1, 241–254, doi:10.1175/1525-7541(2000)001<0241:ENMPEF>2.0.CO;2.