Search Results
You are looking at 1 - 10 of 12 items for
- Author or Editor: Zachary L. Flamig
Abstract
Advanced remote sensing and in situ observing systems employed during the Hydrometeorological Testbed experiment on the American River basin near Sacramento, California, provided a unique opportunity to evaluate correction procedures applied to gap-filling, experimental radar precipitation products in complex terrain. The evaluation highlighted improvements in hourly radar rainfall estimation due to optimizing the parameters in the reflectivity-to-rainfall (Z–R) relation, correcting for the range dependence in estimating R due to the vertical variability in Z in snow and melting-layer regions, and improving low-altitude radar coverage by merging rainfall estimates from two research radars operating at different frequencies and polarization states. This evaluation revealed that although the rainfall product from research radars provided the smallest bias relative to gauge estimates, in terms of the root-mean-square error (with the bias removed) and Pearson correlation coefficient it did not outperform the product from a nearby operational radar that used optimized Z–R relations and was corrected for range dependence. This result was attributed to better low-altitude radar coverage with the operational radar over the upper part of the basin. In these regions, the data from the X-band research radar were not available and the C-band research radar was forced to use higher-elevation angles as a result of nearby terrain and tree blockages, which yielded greater uncertainty in surface rainfall estimates. This study highlights the challenges in siting experimental radars in complex terrain. Last, the corrections developed for research radar products were adapted and applied to an operational radar, thus providing a simple transfer of research findings to operational rainfall products yielding significantly improved skill.
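For readers unfamiliar with the reflectivity-to-rainfall conversion mentioned above, the minimal sketch below inverts the standard power-law Z = aR^b for the rain rate. The Marshall-Palmer coefficients shown are conventional placeholder values, not the optimized parameters derived in the study, which the abstract does not list.

```python
# Illustrative sketch only: convert radar reflectivity (dBZ) to rain rate R (mm/h)
# using the standard power-law Z = a * R**b. The coefficients are the conventional
# Marshall-Palmer values (a=200, b=1.6), NOT the study's optimized parameters.

def reflectivity_to_rainrate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b for R, with Z converted from dBZ to linear units."""
    z_linear = 10.0 ** (dbz / 10.0)     # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)  # rain rate in mm/h

if __name__ == "__main__":
    for dbz in (20, 30, 40, 50):
        print(f"{dbz} dBZ -> {reflectivity_to_rainrate(dbz):.2f} mm/h")
```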
Abstract
Rainfall products from radar, satellite, rain gauges, and combinations have been evaluated for a season of record rainfall in a heavily instrumented study domain in Oklahoma. Algorithm performance is evaluated in terms of spatial scale, temporal scale, and rainfall intensity. Results from this study will help users of rainfall products to understand their errors. Moreover, it is intended that developers of rainfall algorithms will use the results presented herein to optimize the contribution from available sensors to yield the most skillful multisensor rainfall products.
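As a minimal illustration of scale-dependent evaluation of this kind, the sketch below aggregates a hypothetical hourly rainfall product and its reference to coarser temporal scales before computing bias, correlation, and RMSE. The data layout and variable names are assumptions for illustration, not the study's actual datasets.

```python
# Hedged sketch: score a rainfall product against a reference at several
# temporal aggregation scales. `product` and `reference` are assumed to be
# matched hourly accumulation series at the same locations.
import numpy as np

def aggregate(series, window):
    """Sum hourly accumulations into non-overlapping windows of `window` hours."""
    series = np.asarray(series, dtype=float)
    n = len(series) // window * window
    return series[:n].reshape(-1, window).sum(axis=1)

def evaluate(product, reference, windows=(1, 3, 6, 24)):
    """Return relative bias, Pearson correlation, and RMSE at each scale."""
    scores = {}
    for w in windows:
        p, r = aggregate(product, w), aggregate(reference, w)
        scores[w] = {
            "bias": (p.sum() - r.sum()) / r.sum(),   # relative bias
            "corr": np.corrcoef(p, r)[0, 1],         # Pearson correlation
            "rmse": np.sqrt(np.mean((p - r) ** 2)),  # root-mean-square error
        }
    return scores
```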
Abstract
Rainfall estimated from the polarimetric prototype of the Weather Surveillance Radar-1988 Doppler [WSR-88D (KOUN)] was evaluated using a dense Micronet rain gauge network for nine events on the Ft. Cobb research watershed in Oklahoma. The operation of KOUN and its upgrade to dual polarization was completed by the National Severe Storms Laboratory. Storm events included an extreme rainfall case from Tropical Storm Erin that had a 100-yr return interval. Comparisons with collocated Micronet rain gauge measurements indicated that all six rainfall algorithms that used polarimetric observations had lower root-mean-squared errors and higher Pearson correlation coefficients than the conventional algorithm that used reflectivity factor alone when considering all events combined. The reflectivity-based relation R(Z) was the least biased with an event-combined normalized bias of −9%. The bias for R(Z), however, was found to vary significantly from case to case and as a function of rainfall intensity. This variability was attributed to different drop size distributions (DSDs) and the presence of hail. The synthetic polarimetric algorithm R(syn) had a large normalized bias of −31%, but this bias was found to be stationary.
To evaluate whether polarimetric radar observations improve discharge simulation, recent advances in Markov Chain Monte Carlo simulation using the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) were used. This Bayesian approach infers the posterior probability density function of model parameters and output predictions, which allows us to quantify HL-RDHM uncertainty. Hydrologic simulations were compared to observed streamflow and also to simulations forced by rain gauge inputs. The hydrologic evaluation indicated that all polarimetric rainfall estimators outperformed the conventional R(Z) algorithm, but only after their long-term biases were identified and corrected.
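The gauge-comparison statistics named above (normalized bias, root-mean-square error, Pearson correlation) can be sketched as follows using common conventions; the paper's exact definitions and its long-term bias-correction step are not spelled out in the abstract, so treat these formulations as assumptions.

```python
# Hedged sketch of common radar-vs-gauge comparison statistics; not necessarily
# the exact definitions used in the study.
import numpy as np

def normalized_bias(radar, gauge):
    """Total radar minus total gauge rainfall, as a fraction of the gauge total."""
    radar, gauge = np.asarray(radar, float), np.asarray(gauge, float)
    return (radar.sum() - gauge.sum()) / gauge.sum()

def rmse(radar, gauge):
    radar, gauge = np.asarray(radar, float), np.asarray(gauge, float)
    return np.sqrt(np.mean((radar - gauge) ** 2))

def pearson_r(radar, gauge):
    return np.corrcoef(radar, gauge)[0, 1]

def bias_correct(radar, gauge):
    """Remove a long-term multiplicative bias by rescaling with the bias factor
    (an assumed, illustrative form of the correction applied before the
    hydrologic evaluation)."""
    return np.asarray(radar, float) / (1.0 + normalized_bias(radar, gauge))
```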
Abstract
Hazard Services is a software toolkit that integrates information management, hazard alerting, and communication functions into a single user interface. When complete, National Weather Service forecasters across the United States will use Hazard Services for operational issuance of weather and hydrologic alerts, making the system an instrumental part of the threat management process. As a new decision-support tool, incorporating an understanding of user requirements and behavior is an important part of building a system that is usable, allowing users to perform work-related tasks efficiently and effectively. This paper discusses the Hazard Services system and findings from a usability evaluation with a sample of end users. Usability evaluations are frequently used to support software and website development and can provide feedback on a system’s efficiency of use, effectiveness, and learnability. In the present study, a user-testing evaluation assessed task performance in terms of error rates, error types, response time, and subjective feedback from a questionnaire. A series of design recommendations was developed based on the evaluation’s findings. The recommendations not only further the design of Hazard Services, but they may also inform the designs of other decision-support tools used in weather and hydrologic forecasting.
Incorporating usability evaluation into the iterative design of decision-support tools, such as Hazard Services, can improve system efficiency, effectiveness, and user experience.
Abstract
This study evaluates rainfall estimates from the Next Generation Weather Radar (NEXRAD), operational rain gauges, Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) as inputs to a calibrated, distributed hydrologic model. A high-density Micronet of rain gauges on the 342-km² Ft. Cobb basin in Oklahoma was used as reference rainfall to calibrate the National Weather Service’s (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) at 4-km/1-h and 0.25°/3-h resolutions. The unadjusted radar product was the overall worst product, while the stage IV radar product with hourly rain gauge adjustment had the best hydrologic skill with a Micronet relative efficiency score of −0.5, only slightly worse than the reference simulation forced by Micronet rainfall. Simulations from TRMM-3B42RT were better than PERSIANN-CCS-RT (a real-time version of PERSIANN-CCS) and equivalent to those from the operational rain gauge network. The high degree of hydrologic skill with TRMM-3B42RT forcing was only achievable when the model was calibrated at TRMM’s 0.25°/3-h resolution, thus highlighting the importance of considering rainfall product resolution during model calibration.
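The abstract does not define the Micronet relative efficiency score; one common formulation of such a metric is a Nash-Sutcliffe-style efficiency computed against a benchmark simulation (here, the Micronet-forced run). The sketch below uses that assumed, illustrative definition.

```python
# Hedged sketch: a relative efficiency score measured against a benchmark
# simulation rather than the observed mean. This is an assumed formulation;
# the abstract does not give the exact formula.
import numpy as np

def relative_efficiency(q_sim, q_obs, q_benchmark):
    """1 - SSE(sim)/SSE(benchmark): 1 is perfect, 0 matches the benchmark's
    skill, and negative values indicate worse skill than the benchmark run."""
    q_sim, q_obs, q_benchmark = map(np.asarray, (q_sim, q_obs, q_benchmark))
    sse_sim = np.sum((q_sim - q_obs) ** 2)
    sse_bench = np.sum((q_benchmark - q_obs) ** 2)
    return 1.0 - sse_sim / sse_bench
```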
Abstract
This study quantifies the skill of the National Weather Service’s (NWS) flash flood guidance (FFG) product. FFG is generated by River Forecast Centers (RFCs) across the United States, and local NWS Weather Forecast Offices compare estimated and forecast rainfall to FFG to monitor and assess flash flooding potential. A national flash flood observation database consisting of reports in the NWS publication Storm Data and U.S. Geological Survey (USGS) stream gauge measurements is used to determine the skill of FFG over a 4-yr period. FFG skill is calculated at several different precipitation-to-FFG ratios for both observation datasets. Although a ratio of 1.0 nominally indicates a potential flash flooding event, this study finds that FFG can be more skillful when ratios other than 1.0 are considered. When the entire continental United States is considered, the highest observed critical success index (CSI) with 1-h FFG is 0.20 for the USGS dataset, which should be considered a benchmark for future research that seeks to improve, modify, or replace the current FFG system. Regional benchmarks of FFG skill are also determined on an RFC-by-RFC basis. When evaluated against Storm Data reports, the regional skill of FFG ranges from 0.00 to 0.19. When evaluated against USGS stream gauge measurements, the regional skill of FFG ranges from 0.00 to 0.44.
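As an illustration of the threshold sweep described above, the sketch below scores hypothetical precipitation, FFG, and flash flood observation arrays with the critical success index, CSI = hits / (hits + misses + false alarms), at several precipitation-to-FFG ratios. The inputs and ratio values are assumptions for illustration.

```python
# Hedged sketch: sweep the precipitation-to-FFG ratio threshold and score each
# threshold with the critical success index. Inputs are hypothetical arrays.
import numpy as np

def csi_by_ratio(precip, ffg, observed_flood, ratios=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """precip, ffg: rainfall and flash flood guidance per (time, location);
    observed_flood: boolean array of flash flood observations on the same grid."""
    precip, ffg, obs = map(np.asarray, (precip, ffg, observed_flood))
    scores = {}
    for r in ratios:
        predicted = precip >= r * ffg
        hits = np.sum(predicted & obs)
        misses = np.sum(~predicted & obs)
        false_alarms = np.sum(predicted & ~obs)
        denom = hits + misses + false_alarms
        scores[r] = hits / denom if denom else float("nan")
    return scores
```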
Abstract
There are numerous challenges with the forecasting and detection of flash floods, one of the deadliest weather phenomena in the United States. Statistical metrics of flash flood warnings over recent years depict a generally stagnant warning performance, while regional flash flood guidance utilized in warning operations was shown to have low skill scores. The Hydrometeorological Testbed—Hydrology (HMT-Hydro) experiment was created to allow operational forecasters to assess emerging products and techniques designed to improve the prediction and warning of flash flooding. Scientific goals of the HMT-Hydro experiment included the evaluation of gridded products from the Multi-Radar Multi-Sensor (MRMS) and Flooded Locations and Simulated Hydrographs (FLASH) product suites, including the experimental Coupled Routing and Excess Storage (CREST) model, the application of user-defined probabilistic forecasts in experimental flash flood watches and warnings, and the utility of the Hazard Services software interface with flash flood recommenders in real-time experimental warning operations. The HMT-Hydro experiment ran in collaboration with the Flash Flood and Intense Rainfall (FFaIR) experiment at the Weather Prediction Center to simulate the real-time workflow between a national center and a local forecast office, as well as to facilitate discussions on the challenges of short-term flash flood forecasting. Results from the HMT-Hydro experiment highlighted the utility of MRMS and FLASH products in identifying the spatial coverage and magnitude of flash flooding, while evaluating the perception and reliability of probabilistic forecasts in flash flood watches and warnings.
NSSL scientists and NWS forecasters evaluate new tools and techniques through real-time test bed operations for the improvement of flash flood detection and warning operations.
Abstract
The flash flood event of 23 June 2016 devastated portions of West Virginia and west-central Virginia, resulting in 23 fatalities and 5 new record river crests. The flash flooding was part of a multiday event that was classified as a billion-dollar disaster. The 23 June 2016 event occurred during real-time operations by two Hydrometeorology Testbed (HMT) experiments. The Flash Flood and Intense Rainfall (FFaIR) experiment focused on the 6–24-h forecast through the utilization of experimental high-resolution deterministic and ensemble numerical weather prediction and hydrologic model guidance. The HMT Multi-Radar Multi-Sensor Hydro (HMT-Hydro) experiment concentrated on the 0–6-h time frame for the prediction and warning of flash floods primarily through the experimental Flooded Locations and Simulated Hydrographs product suite. This study describes the various model guidance, applications, and evaluations from both testbed experiments during the 23 June 2016 flash flood event. Various model outputs provided a significant precipitation signal that increased the confidence of FFaIR experiment participants to issue a high risk for flash flooding for the region between 1800 UTC 23 June and 0000 UTC 24 June. Experimental flash flood warnings issued during the HMT-Hydro experiment for this event improved the probability of detection and resulted in a 63.8% increase in lead time to 84.2 min. Isolated flash floods in Kentucky demonstrated the potential to reduce the warned area. Participants characterized how different model guidance and analysis products influenced the decision-making process and how the experimental products can help shape future national and local flash flood operations.
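For context, if the 63.8% increase is measured against comparable operational warnings, the reported 84.2-min lead time implies a baseline of roughly 84.2 / 1.638 ≈ 51 min; this back-of-the-envelope figure is inferred from the abstract, not stated in it.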
Abstract
The Republic of Namibia, located along the arid and semiarid coast of southwest Africa, is highly dependent on reliable forecasts of surface and groundwater storage and fluxes. Since 2009, the University of Oklahoma (OU) and National Aeronautics and Space Administration (NASA) have engaged in a series of exercises with the Namibian Ministry of Agriculture, Water, and Forestry to build the capacity to improve the water information available to local decision-makers. These activities have included the calibration and implementation of NASA and OU’s jointly developed Coupled Routing and Excess Storage (CREST) hydrological model as well as the Ensemble Framework for Flash Flood Forecasting (EF5). Hydrological model output is used to produce forecasts of river stage height, discharge, and soil moisture.
To enable broad access to this suite of environmental decision support information, a website, the Namibia Flood Dashboard, hosted on the infrastructure of the Open Science Data Cloud, has been developed. This system enables scientists, ministry officials, nongovernmental organizations, and other interested parties to freely access all available water information produced by the project, including comparisons of NASA satellite imagery to model forecasts of flooding or drought. The local expertise needed to generate and enhance these water information products has been grown through a series of training meetings bringing together national government officials, regional stakeholders, and local university students and faculty. Aided by online training materials, these exercises have resulted in additional capacity-building activities with CREST and EF5 beyond Namibia as well as the initial implementation of a global flood monitoring and forecasting system.
Abstract
Quantitative precipitation estimation (QPE) products from the next-generation National Mosaic and QPE system (Q2) are cross-compared to the operational, radar-only product of the National Weather Service (Stage II) using the gauge-adjusted and manually quality-controlled product (Stage IV) as a reference. The evaluation takes place over the entire conterminous United States (CONUS) from December 2009 to November 2010. The annual comparison of daily Stage II precipitation to the radar-only Q2Rad product indicates that both have small systematic biases (absolute values < 8%), but the random errors with Stage II are much greater, as noted by a root-mean-squared difference of 4.5 mm day⁻¹ compared to 1.1 mm day⁻¹ with Q2Rad and a lower correlation coefficient (0.20 compared to 0.73). The Q2 logic of identifying precipitation types as being convective, stratiform, or tropical at each grid point and applying differential Z–R equations has been successful in removing regional biases (i.e., overestimated rainfall from Stage II east of the Appalachians) and greatly diminishes seasonal bias patterns that were found with Stage II. Biases and radar artifacts along the coastal mountain and intermountain chains were not mitigated with rain gauge adjustment and thus require new approaches by the community. The evaluation identifies a wet bias by Q2Rad in the central plains and the South and then introduces intermediate products to explain it. Finally, this study provides estimates of uncertainty using the radar quality index product for both the Q2Rad and the gauge-corrected Q2RadGC daily precipitation products. This error quantification should be useful to the satellite QPE community, who use Q2 products as a reference.
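The type-dependent Z–R logic described above can be sketched as follows; the coefficients are conventional NEXRAD-style placeholders (stratiform Z = 200R^1.6, convective Z = 300R^1.4, tropical Z = 250R^1.2) and are not necessarily the exact relations applied in Q2.

```python
# Hedged sketch: apply a different Z-R relation at each grid point according to
# its precipitation-type classification. Coefficients are conventional
# placeholders, not necessarily the Q2 system's exact relations.
import numpy as np

ZR = {"stratiform": (200.0, 1.6), "convective": (300.0, 1.4), "tropical": (250.0, 1.2)}

def grid_rain_rate(dbz_grid, type_grid):
    """dbz_grid: 2D reflectivity field (dBZ); type_grid: 2D array of type labels."""
    z_lin = 10.0 ** (np.asarray(dbz_grid, float) / 10.0)   # dBZ -> linear Z
    rate = np.zeros_like(z_lin)
    for ptype, (a, b) in ZR.items():
        mask = np.asarray(type_grid) == ptype
        rate[mask] = (z_lin[mask] / a) ** (1.0 / b)         # invert Z = a * R**b
    return rate
```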