
Evaluation and Benchmarking of Operational Short-Range Ensemble Mean and Median Streamflow Forecasts for the Ohio River Basin

  • 1 Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, and NOAA/NWS Ohio River Forecast Center, Wilmington, Ohio
  • 2 Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia

Abstract

This study presents findings from a real-time forecast experiment that compares legacy deterministic hydrologic stage forecasts to ensemble mean and median stage forecasts from the NOAA/NWS Meteorological Model-Based Ensemble Forecast System (MMEFS). The NOAA/NWS Ohio River Forecast Center (OHRFC) area of responsibility defines the experimental region. Real-time forecasts from subbasins at 54 forecast point locations, ranging in drainage area, geographic location within the Ohio River valley, and watershed response time, serve as the basis for analyses. In the experiment, operational hydrologic forecasts, with a 24-h quantitative precipitation forecast (QPF) and forecast temperatures, are compared to MMEFS hydrologic ensemble mean and median forecasts, with model forcings from the NOAA/NWS National Centers for Environmental Prediction (NCEP) North American Ensemble Forecast System (NAEFS), over the period from 30 November 2010 through 24 May 2012. Experiments indicate that MMEFS ensemble mean and median forecasts exhibit lower errors beginning at about a 90-h lead time when forecasts at all locations are aggregated. With fast response basins that peak at ≤24 h, ensemble mean and median forecasts exhibit lower errors much earlier, beginning at about a 36-h lead time, which suggests the viability of using MMEFS ensemble forecasts as an alternative to OHRFC legacy forecasts. Analyses show that ensemble median forecasts generally exhibit smaller errors than ensemble mean forecasts for all stage ranges. Verification results suggest that OHRFC MMEFS NAEFS ensemble forecasts are reasonable, but needed improvements are identified.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Thomas E. Adams III, tea@terrapredictions.org


1. Introduction

The use of hydrologic ensembles to produce probabilistic flood and water resources forecasts, using ensemble prediction systems (EPSs), is rapidly gaining acceptance (National Research Council 2006a,b; Cloke and Pappenberger 2009; Demargne et al. 2014). However, full adoption of probabilistic forecasts by the public and decision-makers as a replacement for traditional, single-valued deterministic hydrologic forecasts is problematic, particularly with how risk-based forecasts are communicated to end users (Demeritt et al. 2010; Pappenberger et al. 2013; Ramos et al. 2013; Michaels 2015) and because of “institutional conservatism” (Rayner et al. 2005). The National Academies of Sciences, Engineering, and Medicine (2018) report that, in weather-related decision-making, end users of weather-related forecasts benefit from 1) an understanding of forecasts developed over time, 2) prior experience with severe weather, and 3) other factors, such as family relationships. Regarding end user familiarity with hydrometeorological forecasts, National Research Council (2006a) and Joslyn and Savelli (2010) found that end users understand that hydrometeorological forecasts are uncertain and make internal adjustments to account for these uncertainties. Morss et al. (2008) also found that end users of weather forecasts understood forecasts are uncertain and that most preferred the inclusion of uncertainty information with the forecasts. But as Demeritt et al. (2010, 209–222) point out, resistance to the acceptance of EPS forecasts is “not simply cognitive or communicative”; there is also the need by decision-makers to “shift institutional liability for decisions taken in the face of uncertainty.” Murphy (1991) and Krzysztofowicz (1998) argue for the adoption of probabilistic hydrometeorological forecasting, pointing out that rational decision-making in such a system necessarily shifts decision-making from the forecaster to end users of forecasts. An intuitive understanding of this undoubtedly helps to shape the reluctance of end users to adopt probabilistic hydrometeorological forecasts. In other words, resistance by both individuals and many decision-makers to the adoption of forecasts derived from EPSs, in the form of probabilistic forecasts, is complex, even with the prior understanding that single-valued deterministic forecasts are uncertain. Stern and Easterling (1999) raise the added issue that climate forecasts must be relevant to end users to be useful; this applies to weather and hydrologic forecasts as well and points to the broad issue, not addressed in this paper, of how best to convey forecast uncertainty to end users in ways that are relevant to them. We might ask, however, whether an interim step can be taken with hydrologic ensemble prediction systems (HEPSs) that addresses two issues related to flood forecasting and the eventual adoption of probabilistic hydrologic forecasts, namely,

  1. Improving flood forecast accuracy over current deterministic hydrologic forecasting methods that rely on single-valued quantitative precipitation forecast (QPF)
  2. Softening the landscape for end users for eventual adoption of forecasts derived from HEPSs in the form of probabilistic forecasts.

In this paper, we explore the use of ensemble mean and median hydrologic forecasts from an HEPS as alternatives to deterministic predictions that depend on single-valued QPF. The study region is the forecast area of responsibility of the National Oceanic and Atmospheric Administration (NOAA), National Weather Service (NWS), Ohio River Forecast Center (OHRFC), shown in Fig. 1, which is one of thirteen NOAA/NWS River Forecast Centers (RFCs). Single-valued, deterministic QPF is a commonly used model forcing in hydrologic forecasting (Georgakakos and Hudlow 1984; Sokol 2003; Adams and Pagano 2016; Li et al. 2017) and is used by all NWS RFCs. Research has demonstrated that the use of deterministic QPF introduces considerable error into hydrologic forecasting (Cuo et al. 2011; Diomede et al. 2014; T. E. Adams and R. Dymond 2018, unpublished manuscript; Adams and Dymond 2018, manuscript submitted to J. Hydrometeor.). We hypothesize that ensemble mean or median forecasts have smaller error than deterministic hydrologic forecasts that rely on single-valued QPF, as suggested by Du et al. (2003) and Mylne et al. (2002) for numerical weather prediction (NWP) ensemble modeling systems.

Fig. 1. The NWS 13 RFCs—Alaska/Pacific RFC (APRFC), Arkansas–Red basin RFC (ABRFC), Colorado basin RFC (CBRFC), California–Nevada RFC (CNRFC), lower Mississippi RFC (LMRFC), mid-Atlantic RFC (MARFC), Missouri basin RFC (MBRFC), north-central RFC (NCRFC), Northeast RFC (NERFC), Northwest RFC (NWRFC), Ohio RFC (OHRFC), Southeast RFC (SERFC), and west Gulf RFC (WGRFC). Please note that several RFC boundaries extend beyond the United States’ national boundary into Canada and Mexico.

a. Background

NWS RFCs are responsible for providing routine river stage/flow forecast guidance to NWS Weather Forecast Offices (WFOs), following procedures described in Adams (2016) and Adams and Dymond (2018, manuscript submitted to J. Hydrometeor.). The central responsibility of most RFCs is flood prediction, although for RFCs in the western United States, water supply forecasting, largely for reservoir inflows, is perhaps of greater importance. RFCs utilize the NWS Community Hydrologic Prediction System (CHPS; Adams 2016), based on the Flood Early Warning System (FEWS; Deltares 2018). CHPS modeling is predominantly interactive, as described by Adams and Smith (1993), within the Linux-based NOAA/NWS Advanced Weather Interactive Processing System (AWIPS; National Research Council 1997, 2006a). The OHRFC employs several models within the CHPS operational environment, including the Sacramento Soil Moisture Accounting (SAC-SMA) model (Burnash et al. 1973; Burnash 1995), the SNOW-17 snow accumulation and ablation model (Anderson 1973), several lumped-parameter hydrologic routing models, and three reservoir simulation models. All OHRFC CHPS models were migrated from the legacy NWS River Forecast System (NWSRFS; U.S. Department of Commerce 1972) in 2011, with parallel NWSRFS and CHPS modeling through 2012. In addition to QPF, the principal hydrologic model forcings are observed precipitation and observed and forecasted temperature. Observed precipitation forcings are obtained from a multisensor estimation process involving rain gauges, NWS Next Generation Weather Radar (NEXRAD) Doppler radar, and, at some RFCs, remotely sensed satellite estimates of precipitation (Zhang et al. 2011; Kitzmiller et al. 2013; He et al. 2018). Forecasted precipitation is derived from NWP models, usually with meteorological forecaster adjustments made at the NWS Weather Prediction Center (WPC) and/or at local RFCs (Novak et al. 2014).

b. Research goals

The aim of this research is to determine the utility of using hydrologic ensemble mean or median forecasts of river stage from the NOAA/NWS Meteorological Model-Based Ensemble Forecast System (MMEFS), described in Adams and Ostrowski (2010), as an alternative to current, operational, single-valued deterministic hydrologic stage forecasts at the OHRFC and, possibly, elsewhere. Section 2 of this paper describes the real-time hydrologic forecasting experiment used in this study. Model simulations are restricted to watersheds in the OHRFC area of responsibility, shown in Fig. 1. The experiment consists of the concurrent generation of OHRFC operational river stage forecasts and MMEFS ensemble forecasts for the 30 November 2010–24 May 2012 period. Verification results of the ensemble median and mean forecasts relative to the OHRFC operational forecasts are presented in section 3. Experimental results are discussed in relation to verification of the MMEFS ensemble forecasts in section 4. Section 5 summarizes the experimental results and presents conclusions.

2. Research approach

The approach of this study is to compare OHRFC operational forecasts to MMEFS ensemble mean and median forecasts that use NWP model precipitation and temperature output from the NOAA/NWS National Centers for Environmental Prediction (NCEP) North American Ensemble Forecast System (NAEFS; Candille 2009) as hydrologic model forcings. The MMEFS, which utilizes raw NAEFS forcings, shown in Fig. 2, does not include pre- or postprocessing of either ensemble forcings or hydrologic ensemble output. All hydrologic modeling occurs within the OHRFC CHPS-FEWS forecasting system within AWIPS. The study period was from 30 November 2010 through 24 May 2012. The NAEFS consists of 42 ensemble members. The research methodology includes the following:

  1. Capturing OHRFC operational forecasts initialized at 1200 UTC (daily), with a 5-day forecast horizon
  2. Capturing automated MMEFS NAEFS hydrologic ensemble forecasts based on OHRFC 1200 UTC saved model states (daily), with a 7-day forecast horizon
  3. Deterministic verification of operational forecasts and MMEFS NAEFS ensemble mean and median forecasts (after 24 May 2012)
  4. Verification of MMEFS NAEFS ensemble forecasts (after 24 May 2012).
Fig. 2. MMEFS schematic showing the flow of NWP forcing data from NCEP through the NWS AWIPS into the OHRFC FEWS-based CHPS for ensemble forecast generation.

The real-time hydrologic forecasts were made using the legacy NWSRFS, relying on a geographically broad distribution of OHRFC forecast point locations with varying basin sizes and hydrologic response times. A total of 54 basins, shown in Fig. 3, were selected for the study. Calibrations of the SAC-SMA, SNOW-17, channel routing, and reservoir simulation models for operational use for all OHRFC subbasins were completed long before the experiments started, following guidelines presented by Anderson (2002) and Smith et al. (2013). Operational and MMEFS simulations utilize a 6-h time step for model forcings, internal computations, and output. Forecasts are evaluated on the basis of comparisons between U.S. Geological Survey (USGS) observed stages and model-estimated river stage values, which were transformed from simulated flow values using USGS station rating curves. Deterministic verification followed methods proposed by Welles et al. (2007) and Demargne et al. (2009).
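The flow-to-stage conversion can be illustrated with a short base R sketch; the rating table below is hypothetical (operational ratings come from USGS station records), and linear interpolation stands in for the agency’s published rating procedure.

```r
# Hypothetical rating table: paired discharge (m^3 s^-1) and stage (m) points.
rating <- data.frame(
  flow_cms = c(10, 50, 200, 800, 2500),
  stage_m  = c(0.5, 1.2, 2.8, 5.6, 9.3)
)

# Interpolate stage from simulated flow; rule = 2 holds the end values
# rather than extrapolating beyond the table.
flow_to_stage <- function(flow, rating) {
  approx(x = rating$flow_cms, y = rating$stage_m, xout = flow, rule = 2)$y
}

simulated_flow <- c(35, 420, 1800)      # example simulated flows (m^3 s^-1)
flow_to_stage(simulated_flow, rating)   # stages to pair with USGS observations
```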

Fig. 3. Map showing the location of the 54 experiment forecast point locations used in the OHRFC forecast area, listed in Tables 1, 2, and 3, identifying fast, medium, and slow responding basins. Locations of dams are shown with maximum storage capacities ≥ 250,000 ac ft (308 370 000 m³). Gray outlined polygons are the 696 modeled subbasins.

The 54 study basins (Fig. 3) are categorized as fast (Table 1), medium (Table 2), and slow (Table 3) responding. These include 26 fast, 20 medium, and 8 slow responding forecast point locations. The terms slow, medium, and fast refer to hydrograph time-to-peak-response times from the center of mass of the observed precipitation to the hydrograph peak. Response times less than 24 h are classified as fast, response times between 24 and 60 h are considered medium, and response times greater than 60 h are considered slow; see Office of Hydrologic Development (2000).
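As a small illustration of this classification, the following base R sketch bins hypothetical response times at the 24- and 60-h thresholds; the basin names and hours are invented for the example.

```r
# Classify basins into the fast/medium/slow categories used in Tables 1-3.
# Hypothetical response times (h) for three invented basins:
response_h <- c(basin_a = 18, basin_b = 36, basin_c = 72)

response_class <- cut(
  response_h,
  breaks = c(0, 24, 60, Inf),              # <=24 h fast, 24-60 h medium, >60 h slow
  labels = c("fast", "medium", "slow")
)
setNames(as.character(response_class), names(response_h))
```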

Table 1. Fast response basins used in the study listing NWS station identifier (ID), USGS ID, station name, basin area, and response time category.

Table 2. As in Table 1, but for medium response basins.

Table 3. As in Table 1, but for slow response basins.

a. Operational legacy forecasts

The study relied on operational forecasts, using the OHRFC operational modeling system outlined in section 1a, covering the period 30 November 2010–24 May 2012. All operational forecasts used 24-h duration (four 6-h periods per 24 h) QPF. The experimental period spans 541 days at 54 locations, with 28 forecast periods each (four 6-h periods per day over a 7-day verification horizon), resulting in 817 992 forecast verification pairs for analysis. It should be pointed out that the operational forecasts used in this study include modeling of all 696 subbasins in the OHRFC area, an approximately 450 000 km² region, shown in Fig. 3. Operational forecast horizons are five days.
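As a quick check of that count, the arithmetic is

$$541\ \text{days} \times 54\ \text{locations} \times 28\ \text{periods} = 817\,992\ \text{verification pairs}.$$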

b. MMEFS ensemble forecasts

The automated MMEFS NAEFS hydrologic model ensemble simulations exactly parallel the OHRFC operational forecasts. All simulations begin with 1200 UTC initializations, but they are not run until about 1800 UTC, when NAEFS data become available from NCEP. Simulations utilize the full operational suite of models and follow the same operational workflows as the legacy deterministic OHRFC model forecast runs. Forecast horizons are seven days. An example MMEFS NAEFS forecast is shown in Fig. 4.

Fig. 4. Example MMEFS NAEFS ensemble forecast showing 42 individual ensemble members (various colors), the ensemble median (black line identified with triangles), and the 75%–25% probability of exceedance confidence band (shaded region). The minor and moderate flood levels are indicated for reference.

c. Ensemble forecast verification

Ensemble verification results are presented to demonstrate that MMEFS NAEFS ensemble median and mean forecasts are derived from a system that has the desired properties of acceptable forecast skill, reliability, sharpness, and discrimination (Wilks 2006). A demonstration of acceptable ensemble forecast verification results will provide a degree of confidence that the ensemble median and mean forecasts are derived from a reasonably robust ensemble forecast system.

Demargne et al. (2010) discuss the need for verification of hydrologic ensemble forecasts, identifying the need to improve research and operations, specifically aimed at 1) monitoring changes in forecast quality over time, 2) analyzing sources of forecast error, and 3) evaluating forecast skill improvements resulting from the introduction of new science and technology. Consequently, we present some ensemble verification results to serve as a baseline evaluation of MMEFS forecast quality to identify areas of needed system improvement, which should further reduce MMEFS ensemble median and mean prediction errors.

Many statistical measures have been used to evaluate probabilistic forecasts. Johnson and Bowler (2009) note, for example, that resolution, the property that there should be large variability of the observed frequencies associated with different forecast probabilities about the climatological value, is desirable. Ensemble forecasts should also be reliable; that is, forecast probabilities should give an estimate of the expected frequencies of the event occurring. Welles et al. (2007) and Demargne et al. (2009) explain the necessity of hydrologic forecast verification. Hydrologic ensemble forecast verification methods, as recommended and discussed by Brown et al. (2010) and Demargne et al. (2010), are used to assess the MMEFS NAEFS ensemble forecasts.

The statistical measures to evaluate MMEFS ensemble forecasts are discussed below. Ensemble verification results are presented only to indicate the overall reasonableness of the MMEFS NAEFS-based ensemble forecasts, not as a comprehensive evaluation of MMEFS ensemble forecasts.

1) Continuous ranked probability skill score

The ranked probability score (RPS), shown in Eq. (1), is a measure of how well forecasts that are expressed as probability distributions match observed outcomes,

$$\mathrm{RPS} = \sum_{j=1}^{r} \left( \sum_{i=1}^{j} y_i - \sum_{i=1}^{j} o_i \right)^2, \quad (1)$$

where r is the number of outcomes, $y_j$ is the forecasted probability of outcome j, and $o_j$ is the actual probability of outcome j. As a note, the special case where $r = 2$ gives the Brier score (Wilks 2006). The RPS applies to probability forecasts for discrete categories, and the continuous ranked probability score (CRPS) extends the measure to continuous forecasts. Bradley and Schwartz (2011) show that the continuous ranked probability skill score [CRPSS; Eq. (4)] is a summary measure representing the weighted-average skill score, using climatology as the reference forecast in Eq. (4), over the continuous range of outcomes y. The CRPS, which can be derived from the mean-square error (MSE), and the CRPSS are given by

$$\mathrm{CRPS} = \int_{-\infty}^{\infty} \left[ F(y) - H(y - x) \right]^2 \, dy, \quad (2)$$

$$\overline{\mathrm{CRPS}} = \frac{1}{n} \sum_{k=1}^{n} \mathrm{CRPS}_k, \quad (3)$$

$$\mathrm{CRPSS} = 1 - \frac{\overline{\mathrm{CRPS}}}{\overline{\mathrm{CRPS}}_{\mathrm{clim}}}, \quad (4)$$

where x and F(y) are the observed value and the predicted (forecast) cumulative distribution, respectively, of a verification pair, and $H(y - x)$ is the Heaviside step function, equal to 1 for $y \ge x$ and 0 otherwise. The overbar refers to averaging of CRPS values across the sample of events. CRPSS values can range from $-\infty$ to 1, with perfect skill equal to 1 and negative values when the forecast has worse CRPS than the reference (climatology) forecast.
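To make the CRPS/CRPSS computation concrete, the following is a minimal sketch in base R with synthetic data; it is not the EVS implementation, and the ensemble sizes and error magnitudes are illustrative assumptions. It uses the common kernel (energy) form of the ensemble CRPS, mean|X − y| − 0.5 mean|X − X′|, which is equivalent to the integral in Eq. (2) evaluated for an empirical ensemble CDF.

```r
# Kernel form of the CRPS for one m-member ensemble forecast `ens`
# verifying against a single observation `obs`.
crps_ens <- function(ens, obs) {
  mean(abs(ens - obs)) - 0.5 * mean(abs(outer(ens, ens, "-")))
}

set.seed(1)
n_events <- 200
obs  <- rnorm(n_events, mean = 2, sd = 1)                       # observed stages (synthetic)
fcst <- lapply(obs, function(o) rnorm(42, mean = o, sd = 0.5))  # 42-member forecasts
clim <- replicate(n_events, rnorm(42, mean = 2, sd = 1),        # climatology reference
                  simplify = FALSE)

crps_f <- mean(mapply(crps_ens, fcst, obs))   # mean CRPS of the forecast, cf. Eq. (3)
crps_c <- mean(mapply(crps_ens, clim, obs))   # mean CRPS of the reference
crpss  <- 1 - crps_f / crps_c                 # Eq. (4): 1 = perfect, <0 = worse than reference
crpss
```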

2) Reliability diagram

Reliability diagrams (Hartmann et al. 2002) graphically represent the observed frequency of an event plotted against the forecast probability of the event, expressing how often (as a relative frequency) a given forecast probability actually occurred. The hit rate is calculated from the set of forecasts for each probability bin separately. Consequently, the hit rate and forecast frequency for each probability bin n are given by

$$\mathrm{HR}_n = \frac{O_n}{O_n + N_n}, \quad (5)$$

$$F_n = \frac{O_n + N_n}{T}, \quad (6)$$

where $F_n$ is the forecast frequency for bin n, $O_n$ is the number of observed instances in bin n, $N_n$ is the number of nonobserved instances in bin n, and T is the total number of forecasts.
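A minimal base R sketch of the binning behind a reliability diagram follows; the forecast probabilities and outcomes are synthetic (generated to be reliable by construction), not MMEFS output.

```r
set.seed(2)
p_fcst <- runif(5000)               # forecast probabilities of the event
event  <- rbinom(5000, 1, p_fcst)   # synthetic outcomes, reliable by construction

bins <- cut(p_fcst, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
rel  <- data.frame(
  bin_center = tapply(p_fcst, bins, mean),   # mean forecast probability per bin
  obs_freq   = tapply(event,  bins, mean),   # observed relative frequency, cf. Eq. (5)
  n_fcst     = as.vector(table(bins))        # forecast counts per bin, cf. Eq. (6)
)

# Points on the 1:1 line indicate reliable forecasts.
plot(rel$bin_center, rel$obs_freq, xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Forecast probability", ylab = "Observed frequency")
abline(0, 1, lty = 2)
```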

3) Rank histogram

Rank histograms are useful for evaluating ensemble forecasts because they effectively assess the reliability and the errors in the mean and spread of ensemble forecasts (Hamill 2001). Rank histograms are created by tallying the rank of each observation relative to the ensemble member values sorted from lowest to highest, which ideally produces a uniform distribution across the ranks.
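The construction is simple enough to sketch in a few lines of base R; the ensemble below is synthetic and deliberately underdispersed, so the histogram shows the U-shape symptomatic of underspread (cf. Fig. 10).

```r
set.seed(3)
n_events <- 500
obs <- rnorm(n_events)                                          # synthetic observations
ens <- matrix(rnorm(n_events * 42, sd = 0.5), nrow = n_events)  # underspread 42-member ensemble

# Rank of each observation among the 42 members plus itself (1..43);
# random tie-breaking avoids artifacts from exact ties.
obs_rank <- vapply(
  seq_len(n_events),
  function(i) rank(c(obs[i], ens[i, ]), ties.method = "random")[1],
  numeric(1)
)

# A flat histogram indicates adequate spread; peaks at the extreme ranks
# indicate the observation often falls outside the ensemble envelope.
hist(obs_rank, breaks = seq(0.5, 43.5, by = 1),
     main = "Rank histogram", xlab = "Rank of observation")
```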

4) Relative operating characteristic

The relative operating characteristic (ROC; Kharin and Zwiers 2003) provides information on the hit rates and false alarm rates that can be expected from the use of different probability thresholds. The ROC is a summary score used to describe the ability of forecasts to discriminate between events and nonevents. From the counts in Table 4,

$$\text{hit rate} = \frac{\text{hits}}{\text{hits} + \text{misses}}, \quad (7)$$

$$\text{false alarm rate} = \frac{\text{false alarms}}{\text{false alarms} + \text{correct negatives}}. \quad (8)$$

Table 4. Contingency table.
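A base R sketch of tracing the ROC by sweeping probability thresholds follows; the event series and forecast probabilities are synthetic stand-ins for the stage-threshold exceedances verified here.

```r
set.seed(4)
event  <- rbinom(2000, 1, 0.2)                            # 1 = stage threshold exceeded
p_fcst <- plogis(qlogis(0.2) + 2 * event + rnorm(2000))   # skillful forecast probabilities

# Hit rate (Eq. 7) and false alarm rate (Eq. 8) at one probability threshold.
roc_point <- function(p_thresh) {
  warn <- p_fcst >= p_thresh
  c(far = sum(warn & event == 0) / sum(event == 0),
    hr  = sum(warn & event == 1) / sum(event == 1))
}

pts <- t(sapply(seq(0, 1, by = 0.05), roc_point))
plot(pts[, "far"], pts[, "hr"], type = "b",
     xlab = "False alarm rate", ylab = "Hit rate")
abline(0, 1, lty = 2)   # diagonal = no discrimination
```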

3. Study results

Verification results from the experiment, comparing the MMEFS NAEFS ensemble mean and median forecasts to OHRFC operational deterministic forecasts, are summarized in Figs. 5 and 6. Mean error (ME), mean absolute error (MAE), and root-mean-square error (RMSE), based on predicted and observed stage pairs, are shown by forecast lead time in hours. Figure 5 compares results from fast response basins to the results for all basins. Figure 6 shows MAE for medium response and slow response basins. Several observations can be made from Fig. 5, namely,

  1. In most instances, there is little difference between ensemble mean and median values by lead time. With the exception of RMSE, for which ensemble mean values are smaller, ensemble median values are always smaller in magnitude than ensemble mean values, which suggests that ensemble median forecasts should be preferred over ensemble mean forecasts, since less error is incurred.
  2. With all 54 basins aggregated, with respect to ME, little difference exists between the ensemble median (and mean) forecast and the OHRFC operational forecast (OHRFC 24-h QPF) through lead time 54 h. Beginning with lead time 60 h, the OHRFC operational forecast becomes increasingly negatively biased with increasing lead time, whereas the MMEFS ensemble median forecasts remain unbiased through lead time 168 h.
  3. With fast response basins, the ensemble median forecast always shows ME values equal to or smaller in magnitude than the OHRFC operational forecasts, which become increasingly negative after lead time 72 h, whereas the ensemble median forecasts become more negative only very slowly with longer lead times.
  4. OHRFC operational forecasts have smaller MAE values compared to MMEFS ensemble median and mean forecasts until lead time 96 h, with all basins aggregated; however, for fast response basins, MMEFS ensemble median and mean forecasts have MAE values equal to or smaller than OHRFC operational forecasts beginning at about lead time 36 h.
  5. With respect to RMSE, with all basins aggregated, OHRFC operational forecasts exhibit smaller error compared to MMEFS ensemble median and mean forecasts until lead time 90 h, after which MMEFS ensemble median and mean forecast RMSE values are smaller. For fast response basins, however, MMEFS ensemble median and mean forecast RMSE values are approximately equal to or less than OHRFC operational forecast RMSE values beginning with lead time 72 h.
Fig. 5. ME, MAE, and RMSE (m) by lead time for all and fast response basins identified in Fig. 3 and in Tables 1, 2, and 3. Results are shown for the operational forecast (OHRFC 24-h QPF) and MMEFS NAEFS ensemble mean and median forecasts, 30 Nov 2010–24 May 2012.

Fig. 6. MAE (m) by lead time for medium and slow response basins identified in Fig. 3 and in Tables 2 and 3. Results are shown for the operational forecast (OHRFC 24-h QPF) and MMEFS NAEFS ensemble mean and median forecasts, 30 Nov 2010–24 May 2012.

Figure 6 shows that for medium and slow response basins, OHRFC operational forecasts tend to exhibit smaller forecast error compared to MMEFS ensemble median and mean forecasts until longer lead times are reached, ≥102 h. An explanation for this finding is discussed in section 4.

a. Forecast verification

Verification of the operational legacy forecasts uses the R language and environment for statistical computing (R Core Team 2017) and the contributed verification package (NCAR 2015). MMEFS NAEFS ensemble mean and median forecast verification statistics were obtained from ensemble analyses utilizing the NOAA/NWS Ensemble Verification System (EVS; Brown et al. 2010; Demargne et al. 2010). Operational forecast data are stored in the OHRFC PostgreSQL verification database, and MMEFS simulations are written to NWSRFS Ensemble Streamflow Prediction (ESP; Day 1985) format files. Verification measures used are ME, MAE, and RMSE, given in Eqs. (9), (10), and (11):

$$\mathrm{ME} = \frac{1}{n} \sum_{k=1}^{n} \left( \hat{y}_k - y_k \right), \quad (9)$$

$$\mathrm{MAE} = \frac{1}{n} \sum_{k=1}^{n} \left| \hat{y}_k - y_k \right|, \quad (10)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{k=1}^{n} \left( \hat{y}_k - y_k \right)^2}, \quad (11)$$

where $\hat{y}_k$ and $y_k$ are the predicted and observed kth stage values, respectively, for n total paired values. Units of measure for stage are meters unless reported otherwise. Values of ME, MAE, and RMSE equal to 0 imply perfect agreement, that is, no error.
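A short base R sketch of Eqs. (9)–(11) aggregated by lead time follows, with synthetic verification pairs in place of the PostgreSQL database contents; the lead-time-dependent bias is an invented illustration.

```r
set.seed(5)
verif <- data.frame(
  lead_h = rep(seq(6, 168, by = 6), each = 50),   # 6-h steps out to 168 h
  obs    = rnorm(28 * 50, mean = 3)               # observed stages (m), synthetic
)
# Predicted stages: random error plus a small bias that grows with lead time.
verif$pred <- verif$obs + rnorm(nrow(verif), sd = 0.3) - 0.002 * verif$lead_h

err <- verif$pred - verif$obs
stats_by_lead <- aggregate(
  err,
  by  = list(lead_h = verif$lead_h),
  FUN = function(e) c(ME = mean(e), MAE = mean(abs(e)), RMSE = sqrt(mean(e^2)))
)
head(do.call(data.frame, stats_by_lead))   # one row of ME/MAE/RMSE per lead time
```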

b. Ensemble verification

1) CRPSS

Figure 7 summarizes the forecast skill of all 54 basins, with aggregation across all forecast stage ranges (Fig. 7a) and for stage ranges ≥ 0.90 probability of nonexceedance (Fig. 7b). The results show that MMEFS NAEFS forecasts are skillful relative to unconditional sample climatology utilizing all observations over the period 30 November 2010–24 May 2012. Results also show that forecasts for stage ranges ≥ 0.90 probability of nonexceedance are more skillful than forecasts that are aggregated across all stage ranges, which is encouraging since RFCs emphasize flood forecasting. Figure 7 shows differentiation between fast, medium, and slow response basins, illustrating that MMEFS ensemble forecasts are most skillful for slow response basins and that forecast skill is most variable for medium response basins. Forecast skill, as expected, declines with increased lead times.

Fig. 7. CRPSS (dimensionless) by lead time for all forecast point locations identified in Fig. 3, for (a) all forecast stage ranges and (b) stage ranges ≥ 0.90 probability of nonexceedance. Point shading identifies basin response category.

An interesting point relates to some of the lowest CRPSS values in Fig. 7a, which correspond to the forecast point location at Pittsburgh, Pennsylvania (PTTP1). Referring to Table 5, CRPSS values aggregated across all forecast stage ranges demonstrate little to no skill. However, for stage ranges ≥ 0.90 probability of nonexceedance, MMEFS ensemble forecasts show reasonable skill. The reason for this difference relates to the complex physical setting described by Adams et al. (2018) involving downstream control at the Dashields, Pennsylvania, lock and dam that regulates the pool level at the PTTP1 stream gauge at low flows in a manner that is not well captured by the OHRFC modeling system. With higher flows, particularly flood flows, this control does not exist.

Table 5. CRPSS for Pittsburgh (PTTP1) for all stage ranges and for stages with probability of nonexceedance p = 0.90 by lead time.

2) Reliability

Figure 8 shows the reliability diagram for stage ranges ≥0.50 and ≥0.90 probability of nonexceedance, for lead times of 24, 48, 96, 120, and 168 h, aggregated across all 54 basins. Generally, the ensemble forecasts show reasonable reliability, but forecast overconfidence exists between forecast probabilities of 0.50 and 0.75 for both stage ranges. The result for the 24-h lead time for the ≥0.90 stage range is most likely a reflection of small-sample-size problems, which are a general concern since the study period was short, only 541 days (30 November 2010–24 May 2012). For the ≥0.50 stage range, however, 24-h lead-time reliability is better, possibly because of a larger sample size, reduced error, or both. The ≥0.50 stage range forecasts do exhibit a small degree of underconfidence at smaller forecast probability levels compared to the ≥0.90 stage range, which may be related to an overstatement of the importance of near-term QPF uncertainty relative to model state influences.

Fig. 8. Reliability diagrams for all 54 basins for lead times of 24, 48, 96, 120, and 168 h. Shown for stage ranges (left) ≥0.50 and (right) ≥0.90 probability of nonexceedance.

3) ROC

Figure 9 shows the ROC diagram, aggregated across all 54 basins, for stage ranges ≥ 0.90 probability of nonexceedance for forecast lead times of 24, 96, 120, and 168 h, indicating that MMEFS NAEFS ensemble forecasts discriminate between events and nonevents very well. This result is representative of similar analyses done for individual basins and for all forecast stage ranges.

Fig. 9. ROC for all 54 basins for lead times of 24, 96, 120, and 168 h. Shown for stage ranges ≥ 0.90 probability of nonexceedance.

4) Rank histogram

Rank histogram analyses are summarized with aggregation across all 54 basins in Fig. 10, showing severe underspread of the MMEFS ensemble forecasts, particularly at shorter lead times, illustrated for the 24-h lead time. An explanation for this result is shown in Fig. 4, which was specifically selected to illustrate the underspread problem. In this example, the 42 MMEFS NAEFS ensemble members appear as a single-valued forecast from the beginning of the forecast at 1800 UTC 4 April 2018 through 1200 UTC 7 April 2018. This occurred because 1) there were no model forcings to substantially perturb the hydrologic models over the period from 1800 UTC 4 April through 1200 UTC 7 April and 2) inherent model error is not included in the MMEFS.

Fig. 10. Rank histograms for all forecast point locations identified in Fig. 3 and for fast response basins for 24- and 168-h lead times.

4. Discussion

The two main topics of discussion are 1) MMEFS ensemble median and mean forecasts compared to OHRFC operational forecasts and 2) needed improvements to the MMEFS. The discussion of the first point focuses on the differences, relative to OHRFC operational forecasts, between MMEFS ensemble median and mean forecasts for fast response basins and those for medium and slow response basins. We suggest there are two principal explanations: first, there are differences in the quality of model calibrations, and second, there are independent operational modeling complications that produce larger initial errors for medium and slow response basins that are not present with fast response basins.

a. MMEFS ensemble median and mean forecasts

Results presented in section 3 (Fig. 5) clearly show, with all 54 basins aggregated, that MMEFS ensemble median and mean forecasts add considerable value over OHRFC operational forecasts at lead times ≥ 90 h. An important result is that, for fast response basins, MMEFS ensemble median and mean forecasts have smaller MAE than the OHRFC operational forecasts beginning at lead times ≥ 36 h; with respect to ME, the fast response ensemble median and mean forecasts display smaller error immediately, at a lead time of 6 h. This finding is important because it demonstrates the viability of using MMEFS ensemble median and mean forecasts as an alternative to the continued use of OHRFC operational forecasts relying on 24- or, currently, 48-h QPF.

There are likely several factors that contribute to lower errors found with shorter lead times, ≤90 h, for OHRFC operational forecasts compared to MMEFS ensemble median and mean forecasts. These include the following:

  1. The fact that OHRFC operational forecasts benefit from real-time interactive forecaster adjustments in the near term to account for hydrologic model state inaccuracies, which are infeasible with ensemble processing workflows
  2. The incorporation of observed flows in streamflow routings for nonheadwater basins with OHRFC operational forecasts, which is not tenable with hydrologic ensemble forecasting
  3. The QPF used in OHRFC operational forecasts is closer in time to the beginning of forecast initiation than is currently possible with the MMEFS ensemble QPF inputs, given the delay in computer processing. That is, near-term changes in meteorological conditions may not be reflected in the MMEFS ensemble QPF inputs but may be better captured in the OHRFC operational QPF.

Several factors influence the disparity in MMEFS ensemble median and mean forecast verification results for medium and slow response basins compared to fast response basins. These include the following:

  1. Model calibrations are good for fast response, headwater basins, but calibrations for medium and slow response basins are more problematic. Calibration for nonheadwater basins is more difficult because, in part, overwhelming upstream flow influences cannot be accurately separated from the total observed streamflow at local nonheadwater gauges. The separation of flows routed from upstream basins from the observed total streamflow is necessary to estimate the local observed streamflow. With the local observed streamflow, model parameter adjustments to the local basin can be made appropriately to reflect the hydrologic response of the local watershed being calibrated. The consequence is that hydrologic model parameters used for downstream basins usually can only be estimated, not calibrated. The outcome is that local downstream basin simulations can be considerably more in error than those for calibrated fast response headwater basins.
  2. Overall modeling for downstream medium and slow response basins is more complex, involving the use of flow routing and reservoir simulation models, which adds to modeling uncertainty and error.
  3. For some medium and all slow response basins, complex channel flow dynamics, including hydrodynamic backwater effects, adversely affect stage–discharge relationships and, on the main stem of the Ohio River, lock and dam controls influence stage–discharge relationships (Adams et al. 2018). The implication here is that, while flow simulations may be good, stage–discharge relationships are nonunique because of hysteresis. This nonuniqueness precludes an unambiguous conversion from flow to stage. Consequently, considerable error is incurred, which adversely influences verification scores because of the introduction of erroneous stage values. Manual forecaster adjustments in the OHRFC operational workflow are made, in part, to minimize stage–discharge rating hysteresis effects. Such adjustments are not possible in MMEFS NAEFS simulations. The need for dynamic flow routing modeling served as the basis for the development and operational implementation of the Ohio River Community Hydrologic Engineering Center River Analysis System (HEC-RAS) Model in CHPS (Adams et al. 2018). Unfortunately, this model was not used in the MMEFS NAEFS simulations.
  4. Reservoir releases significantly alter downstream flows. OHRFC operational forecasts benefit from the inclusion of U.S. Army Corps of Engineers (USACE) deterministic reservoir release schedules, including flow releases from locks and dams. MMEFS modeling relies on NWSRFS-based (now CHPS) reservoir model simulations, which, under many scenarios, can be quite erroneous compared to actual USACE reservoir releases, which involve human decision-making that is difficult to capture in a reservoir simulation model. Figure 3 shows that all slow and all but two medium response basin locations are downstream of significant reservoirs with maximum storage capacities ≥ 250,000 ac ft (308 370 000 m³). All of the fast response basins are upstream of these reservoirs. Consequently, OHRFC operational forecast errors are reduced in the near term relative to MMEFS NAEFS ensemble median and mean forecasts, which rely solely on model simulations of reservoir outflows.

b. MMEFS improvements

MMEFS ensemble forecasts do not currently make use of ensemble model-error correction methods suggested by, for example, Bogner and Kalas (2008) and Li et al. (2016), or of postprocessing bias correction of raw ensemble forecasts proposed by, for example, Hashino et al. (2007), Brown and Seo (2010), and Yuan and Wood (2012), to reduce uncertainties arising from model inputs and outputs, initial and boundary conditions, and the structure and parameter estimates of models. Wentao et al. (2017) recently reviewed statistical postprocessing methods for hydrometeorological ensemble forecasting, citing the need for further work on many fronts. These concerns include the need to address stationarity assumptions; to handle extreme events, including the timing of flood peaks in the case of streamflow modeling; to further investigate methods proposed to make adjustments at ungauged locations; and to continue research into methods that attempt to address total uncertainty, including model structure, parameter estimation, and model initial and boundary conditions. The need for such techniques is illustrated by the ensemble verification results in section 3b, which show ensemble underspread and apparent biases. Although not strictly an ensemble-modeling-related issue, MMEFS ensemble forecasts would benefit from improved hydrologic model calibrations for many basins. Model simulation error would be reduced further, and MMEFS verification would also improve, by incorporating the Ohio River Community HEC-RAS Model directly into MMEFS ensemble simulations for slow response rivers where complex hydrodynamics are inadequately handled by simple streamflow routing models. MMEFS ensemble verification metrics are clearly better for basins known to have good calibrations than for basins with suboptimal model calibrations.

5. Summary and conclusions

Experimental results from this study demonstrate that NAEFS-based MMEFS ensemble median forecasts have smaller forecast error than ensemble mean forecasts based on the ME and MAE verification measures. Although RMSE values suggest slightly lower forecast error for ensemble mean than for ensemble median forecasts, the analyses overall indicate that ensemble median forecasts should be preferred. More importantly, when forecasts spanning all ranges in stage and basin response times are aggregated, MMEFS ensemble mean and median forecasts show lower forecast error than legacy OHRFC operational forecasts, based on 24-h deterministic QPF, at long forecast lead times, beginning at approximately 90 h. This result demonstrates the viability of using ensemble mean/median forecasts for extended forecasts beyond the 4-day forecast horizon. When the analyses are restricted to fast response basins only, MMEFS ensemble mean and median forecasts have smaller forecast error than legacy OHRFC operational forecasts beginning at a lead time of about 36 h. This finding has potentially significant implications for forecast operations at the OHRFC and other hydrologic forecast centers. Specifically, the lower MMEFS ensemble mean/median forecast errors relative to the current operational OHRFC deterministic forecasts suggest the feasibility of moving from manually intensive, interactive forecasting procedures to a more automated operational environment built on ensemble forecasting methodologies. Of course, generating full probabilistic forecasts from ensemble methods is greatly preferred over using the ensemble mean or median, because the former conveys forecast uncertainty to end users whereas the latter does not (National Research Council 2006a). In addition, a wide range of water resources applications requires flow hydrographs; the proposed ensemble median/mean stage forecasts should not be translated to flow hydrographs because mass balance is not preserved. Consequently, for most other water resources applications, full hydrologic ensemble forecasts are needed.
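
For concreteness, the sketch below computes the three summary measures compared here (ME, MAE, and RMSE) for ensemble mean and median point forecasts derived from a synthetic, deliberately right-skewed ensemble; the data are hypothetical and serve only to show why the median resists outlying members.

```python
# Sketch, using synthetic stage data, of ME, MAE, and RMSE for ensemble
# mean versus ensemble median forecasts at a single lead time. The
# right-skewed member errors are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n_events, n_members = 200, 42
obs = rng.gamma(2.0, 5.0, size=n_events)                       # observed stages (ft)
member_err = rng.gamma(2.0, 1.0, (n_events, n_members)) - 1.5  # right-skewed errors
ens = obs[:, None] + member_err                                # ensemble forecasts

for name, fcst in [("mean", ens.mean(axis=1)),
                   ("median", np.median(ens, axis=1))]:
    err = fcst - obs
    me = err.mean()                     # mean error (bias)
    mae = np.abs(err).mean()            # mean absolute error
    rmse = np.sqrt((err ** 2).mean())   # root-mean-square error
    print(f"ensemble {name:6s}: ME={me:+.2f}  MAE={mae:.2f}  RMSE={rmse:.2f} ft")
```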

MMEFS ensemble verification results covering the 30 November 2010–24 May 2012 study period demonstrate forecast skill and reasonable forecast discrimination, reliability, and sharpness. However, the verification results also identify needed areas of improvement, such as accounting for model error, using a priori hindcast experiments to quantify that error, and adopting a postforecast ensemble bias correction methodology.
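
A standard way to expose the ensemble underspread noted above is the rank histogram of Hamill (2001); the minimal sketch below, with a synthetic and deliberately underdispersive ensemble, shows how observations pile up in the extreme ranks.

```python
# Sketch of the rank-histogram diagnostic (Hamill 2001): when member
# spread is smaller than forecast error, observations fall too often
# outside the ensemble envelope and the histogram is U-shaped. The
# ensemble below is synthetic and deliberately underspread.
import numpy as np

rng = np.random.default_rng(3)
n_events, n_members = 2000, 42
truth = rng.normal(0.0, 1.0, n_events)
center = truth + rng.normal(0.0, 1.0, n_events)    # forecast error sd = 1.0
members = center[:, None] + rng.normal(0.0, 0.5, (n_events, n_members))  # spread 0.5

# Rank of each observation within its ensemble (0 .. n_members).
ranks = (members < truth[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)
flat = n_events / (n_members + 1)   # expected count per bin if reliable
print(f"lowest bin: {hist[0]}, highest bin: {hist[-1]}, flat ~ {flat:.0f}")
# Inflated end bins relative to the flat expectation indicate underspread.
```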

Acknowledgments

The authors are grateful to James Brown, Hydrologic Solutions, Limited, Winchester, United Kingdom, for providing helpful guidance in this study and for supplying updated EVS software to the authors. We also thank the anonymous reviewers for their comments, which helped to improve the manuscript.

REFERENCES

  • Adams, T. E., 2016: Flood forecasting in the United States NOAA/National Weather Service. Flood Forecasting: A Global Perspective, T. E. Adams and T. C. Pagano, Eds., 1st ed. Elsevier, 249–310.

  • Adams, T. E., and G. Smith, 1993: National Weather Service interactive river forecasting using state, parameter, and data modifications. Proc. Int. Symp. Engineering Hydrology, San Francisco, CA, Environmental Water Resources Institute, 10 pp.

  • Adams, T. E., and J. Ostrowski, 2010: Short lead-time hydrologic ensemble forecasts from numerical weather prediction model ensembles. Proc. World Environmental and Water Resources Congress 2010, Providence, RI, Environmental Water Resources Institute, 2294–2304, https://doi.org/10.1061/41114(371)237.

  • Adams, T. E., and T. C. Pagano, Eds., 2016: Flood Forecasting: A Global Perspective. 1st ed., Elsevier, 478 pp.

  • Adams, T. E., S. Chen, and R. Dymond, 2018: Results from operational hydrologic forecasts using the NOAA/NWS OHRFC Ohio River Community HEC-RAS Model. J. Hydrol. Eng., 23, 04018028, https://doi.org/10.1061/(ASCE)HE.1943-5584.0001663.

  • Anderson, E. A., 1973: National Weather Service River Forecast System—Snow accumulation and ablation model. NOAA Tech. Rep. NWS-HYDRO-17, 229 pp.

  • Anderson, E. A., 2002: Calibration of conceptual hydrologic models for use in river forecasting. NWS Tech. Rep., 419 pp., http://www.nws.noaa.gov/oh/hrl/modelcalibration/1.%20Calibration%20Process/1_Anderson_CalbManual.pdf.

  • Bogner, K., and M. Kalas, 2008: Error-correction methods and evaluation of an ensemble based hydrological forecasting system for the Upper Danube catchment. Atmos. Sci. Lett., 9, 95–102, https://doi.org/10.1002/asl.180.

  • Bradley, A. A., and S. S. Schwartz, 2011: Summary verification measures and their interpretation for ensemble forecasts. Mon. Wea. Rev., 139, 3075–3089, https://doi.org/10.1175/2010MWR3305.1.

  • Brown, J. D., and D.-J. Seo, 2010: A nonparametric postprocessor for bias correction of hydrometeorological and hydrologic ensemble forecasts. J. Hydrometeor., 11, 642–665, https://doi.org/10.1175/2009JHM1188.1.

  • Brown, J. D., J. Demargne, D.-J. Seo, and Y. Liu, 2010: The Ensemble Verification System (EVS): A software tool for verifying ensemble forecasts of hydrometeorological and hydrologic variables at discrete locations. Environ. Modell. Software, 25, 854–872, https://doi.org/10.1016/j.envsoft.2010.01.009.

  • Burnash, R., 1995: The NWS River Forecast System—Catchment modeling. Computer Models of Watershed Hydrology, 1st ed., V. Singh, Ed., Water Resources Publication, 311–366.

  • Burnash, R., R. Ferral, and R. McGuire, 1973: A generalized streamflow simulation system: Conceptual modeling for digital computers. NWS Tech. Rep., 204 pp.

  • Candille, G., 2009: The multiensemble approach: The NAEFS example. Mon. Wea. Rev., 137, 1655–1665, https://doi.org/10.1175/2008MWR2682.1.

  • Cloke, H., and F. Pappenberger, 2009: Ensemble flood forecasting: A review. J. Hydrol., 375, 613–626, https://doi.org/10.1016/j.jhydrol.2009.06.005.

  • Cuo, L., T. C. Pagano, and Q. J. Wang, 2011: A review of quantitative precipitation forecasts and their use in short- to medium-range streamflow forecasting. J. Hydrometeor., 12, 713–728, https://doi.org/10.1175/2011JHM1347.1.

  • Day, G. N., 1985: Extended streamflow forecasting using NWSRFS. J. Water Resour. Plann. Manage., 111, 157–170, https://doi.org/10.1061/(ASCE)0733-9496(1985)111:2(157).

  • Deltares, 2018: Flood Early Warning System (FEWS). Deltares, https://www.deltares.nl/en/software/flood-forecasting-system-delft-fews-2.

  • Demargne, J., M. Mulluski, K. Werner, T. Adams, S. Lindsey, N. Schwein, W. Marosi, and E. Welles, 2009: Application of forecast verification science to operational river forecasting in the U.S. National Weather Service. Bull. Amer. Meteor. Soc., 90, 779–784, https://doi.org/10.1175/2008BAMS2619.1.

  • Demargne, J., J. Brown, D.-J. Seo, L. Wu, Z. Toth, and Y. Zhu, 2010: Diagnostic verification of hydrometeorological and hydrologic ensembles. Atmos. Sci. Lett., 11, 114–122, https://doi.org/10.1002/asl.261.

  • Demargne, J., and Coauthors, 2014: The science of NOAA’s operational hydrologic ensemble forecast service. Bull. Amer. Meteor. Soc., 95, 79–98, https://doi.org/10.1175/BAMS-D-12-00081.1.

  • Demeritt, D., S. Nobert, H. Cloke, and F. Pappenberger, 2010: Challenges in communicating and using ensembles in operational flood forecasting. Meteor. Appl., 17, 209–222, https://doi.org/10.1002/met.194.

  • Diomede, T., C. Marsigli, A. Montani, F. Nerozzi, and T. Paccagnella, 2014: Calibration of limited-area ensemble precipitation forecasts for hydrological predictions. Mon. Wea. Rev., 142, 2176–2197, https://doi.org/10.1175/MWR-D-13-00071.1.

  • Du, J., G. DiMego, M. S. Tracton, and B. Zhou, 2003: NCEP short-range ensemble forecasting (SREF) system: Multi-IC, multi-model and multi-physics approach. WMO/TD 1161, 5.09–5.10.

  • Georgakakos, K. P., and M. D. Hudlow, 1984: Quantitative precipitation forecast techniques for use in hydrologic forecasting. Bull. Amer. Meteor. Soc., 65, 1186–1200, https://doi.org/10.1175/1520-0477(1984)065<1186:QPFTFU>2.0.CO;2.

  • Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev., 129, 550–560, https://doi.org/10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2.

  • Hartmann, H. C., T. C. Pagano, S. Sorooshian, and R. Bales, 2002: Confidence builders: Evaluating seasonal climate forecasts from user perspectives. Bull. Amer. Meteor. Soc., 83, 683–698, https://doi.org/10.1175/1520-0477(2002)083<0683:CBESCF>2.3.CO;2.

  • Hashino, T., A. A. Bradley, and S. S. Schwartz, 2007: Evaluation of bias-correction methods for ensemble streamflow volume forecasts. Hydrol. Earth Syst. Sci., 11, 939–950, https://doi.org/10.5194/hess-11-939-2007.

  • He, Y., Y. Zhang, R. Kuligowski, R. Cifelli, and D. Kitzmiller, 2018: Incorporating satellite precipitation estimates into a radar-gauge multi-sensor precipitation estimation algorithm. Remote Sens., 10, 117, https://doi.org/10.3390/rs10010117.

  • Johnson, C., and N. Bowler, 2009: On the reliability and calibration of ensemble forecasts. Mon. Wea. Rev., 137, 1717–1720, https://doi.org/10.1175/2009MWR2715.1.

  • Joslyn, S., and S. Savelli, 2010: Communicating forecast uncertainty: Public perception of weather forecast uncertainty. Meteor. Appl., 17, 180–195, https://doi.org/10.1002/met.190.

  • Kharin, V. V., and F. W. Zwiers, 2003: On the ROC score of probability forecasts. J. Climate, 16, 4145–4150, https://doi.org/10.1175/1520-0442(2003)016<4145:OTRSOP>2.0.CO;2.

  • Kitzmiller, D., D. Miller, R. Fulton, and F. Ding, 2013: Radar and multisensor precipitation estimation techniques in National Weather Service hydrologic operations. J. Hydrol. Eng., 18, 133–142, https://doi.org/10.1061/(ASCE)HE.1943-5584.0000523.

  • Krzysztofowicz, R., 1998: Probabilistic hydrometeorological forecasts: Toward a new era in operational forecasting. Bull. Amer. Meteor. Soc., 79, 243–252, https://doi.org/10.1175/1520-0477(1998)079<0243:PHFTAN>2.0.CO;2.

  • Li, J., Y. Chen, H. Wang, J. Qin, J. Li, and S. Chiao, 2017: Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model. Hydrol. Earth Syst. Sci., 21, 1279–1294, https://doi.org/10.5194/hess-21-1279-2017.

  • Li, M., Q. J. Wang, J. C. Bennett, and D. E. Robertson, 2016: Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting. Hydrol. Earth Syst. Sci., 20, 3561–3579, https://doi.org/10.5194/hess-20-3561-2016.

  • Michaels, S., 2015: Probabilistic forecasting and the reshaping of flood risk management. J. Nat. Resour. Policy Res., 7, 41–51, https://doi.org/10.1080/19390459.2014.970800.

  • Morss, R. E., J. L. Demuth, and J. K. Lazo, 2008: Communicating uncertainty in weather forecasts: A survey of the U.S. public. Wea. Forecasting, 23, 974–991, https://doi.org/10.1175/2008WAF2007088.1.

  • Murphy, A. H., 1991: Probabilities, odds, and forecasts of rare events. Wea. Forecasting, 6, 302–307, https://doi.org/10.1175/1520-0434(1991)006<0302:POAFOR>2.0.CO;2.

  • Mylne, K. R., R. E. Evans, and R. T. Clark, 2002: Multi-model multi-analysis ensembles in quasi-operational medium-range forecasting. Quart. J. Roy. Meteor. Soc., 128, 361–384, https://doi.org/10.1256/00359000260498923.

  • National Academies of Sciences, Engineering, and Medicine, 2018: Integrating Social and Behavioral Sciences within the Weather Enterprise. The National Academies Press, 198 pp., https://doi.org/10.17226/24865.

  • National Research Council, 1997: An Assessment of the Advanced Weather Interactive Processing System: Operational Test and Evaluation of the First System Build. The National Academies Press, 124 pp., https://doi.org/10.17226/5995.

  • National Research Council, 2006a: Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. The National Academies Press, 124 pp., https://doi.org/10.17226/11699.

  • National Research Council, 2006b: Toward a New Advanced Hydrologic Prediction Service (AHPS). The National Academies Press, 68 pp., https://doi.org/10.17226/11598.

  • NCAR, 2015: verification: Weather forecast verification utilities, R package version 1.42. NCAR Research Applications Laboratory, https://CRAN.R-project.org/package=verification.

  • Novak, D. R., C. Bailey, K. F. Brill, P. Burke, W. A. Hogsett, R. Rausch, and M. Schichtel, 2014: Precipitation and temperature forecast performance at the Weather Prediction Center. Wea. Forecasting, 29, 489–504, https://doi.org/10.1175/WAF-D-13-00066.1.

  • Office of Hydrologic Development, 2000: National Weather Service Verification software users’ manual. NOAA Tech. Rep., 26 pp., http://www.nws.noaa.gov/oh/hrl/verification/ob3/VerifyUsersManual_ob3.pdf.

  • Pappenberger, F., E. Stephens, J. Thielen, P. Salamon, D. Demeritt, S. J. van Andel, F. Wetterhall, and L. Alfieri, 2013: Visualizing probabilistic flood forecast information: Expert preferences and perceptions of best practice in uncertainty communication. Hydrol. Processes, 27, 132–146, https://doi.org/10.1002/hyp.9253.

  • Ramos, M. H., S. J. van Andel, and F. Pappenberger, 2013: Do probabilistic forecasts lead to better decisions? Hydrol. Earth Syst. Sci., 17, 2219–2232, https://doi.org/10.5194/hess-17-2219-2013.

  • Rayner, S., D. Lach, and H. Ingram, 2005: Weather forecasts are for wimps: Why water resource managers do not use climate forecasts. Climatic Change, 69, 197–227, https://doi.org/10.1007/s10584-005-3148-z.

  • R Core Team, 2017: R: A language and environment for statistical computing. R Foundation for Statistical Computing, https://www.R-project.org.

  • Smith, M. B., D. P. Laurine, V. I. Koren, S. M. Reed, and Z. Zhang, 2013: Hydrologic model calibration in the National Weather Service. Calibration of Watershed Models, Q. Duan et al., Eds., Vol. 6, Water Science and Application Series, Amer. Geophys. Union, 133–152, https://doi.org/10.1029/WS006p0133.

  • Sokol, Z., 2003: MOS-based precipitation forecasts for river basins. Wea. Forecasting, 18, 769–781, https://doi.org/10.1175/1520-0434(2003)018<0769:MPFFRB>2.0.CO;2.

  • Stern, P. C., and W. E. Easterling, Eds., 1999: Making Climate Forecasts Matter. The National Academies Press, 192 pp., https://doi.org/10.17226/6370.

  • U.S. Department of Commerce, 1972: National Weather Service river forecast procedures. NOAA Tech. Rep., NWS-Hydro-14, 252 pp.

  • Welles, E., S. Sorooshian, G. Carter, and B. Olsen, 2007: Hydrologic verification: A call for action and collaboration. Bull. Amer. Meteor. Soc., 88, 503–511, https://doi.org/10.1175/BAMS-88-4-503.

  • Wentao, L., D. Qingyun, M. Chiyuan, Y. Aizhong, G. Wei, and D. Zhenhua, 2017: A review on statistical postprocessing methods for hydrometeorological ensemble forecasting. Wiley Interdiscip. Rev.: Water, 4, e1246, https://doi.org/10.1002/wat2.1246.

  • Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. International Geophysics Series, Vol. 100, Academic Press, 648 pp.

  • Yuan, X., and E. F. Wood, 2012: Downscaling precipitation or bias-correcting streamflow? Some implications for coupled general circulation model (CGCM)-based ensemble seasonal hydrologic forecast. Water Resour. Res., 48, W12519, https://doi.org/10.1029/2012WR012256.

  • Zhang, Y., S. Reed, and D. Kitzmiller, 2011: Effects of retrospective gauge-based readjustment of multisensor precipitation estimates on hydrologic simulations. J. Hydrometeor., 12, 429–443, https://doi.org/10.1175/2010JHM1200.1.