Abstract
The spatial description of high-resolution extreme daily rainfall fields is challenging because of the high spatial and temporal variability of rainfall, particularly in tropical regions due to the stochastic nature of convective rainfall. Geostatistical simulations offer a solution to this problem. In this study, a stochastic geostatistical simulation technique based on the spectral turning bands method is presented for modeling daily rainfall extremes in the data-scarce tropical Ouémé River basin (Benin). This technique uses meta-Gaussian frameworks built on Gaussian random fields, which are transformed into realistic rainfall fields using statistical transfer functions. The simulation framework can be conditioned on point observations and is computationally efficient in generating multiple ensembles of extreme rainfall fields. The results of tests and evaluations for multiple extremes demonstrate the effectiveness of the simulation framework in modeling more realistic rainfall fields and capturing their variability. It successfully reproduces the empirical cumulative distribution function of the observation samples and outperforms classical interpolation techniques like ordinary kriging in terms of spatial continuity and rainfall variability. The study also addresses the challenge of dealing with uncertainty in data-poor areas and proposes a novel approach for determining the spatial correlation structure even with low station density, resulting in a performance boost of 9.5% compared to traditional techniques. Additionally, we present a low-skill reference simulation method to facilitate a comprehensive comparison of the geostatistical simulation approaches. The simulations generated have the potential to provide valuable inputs for hydrological modeling.
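A minimal, unconditional sketch of the meta-Gaussian construction described above: a standard Gaussian random field is generated as a spectral (turning-bands-style) sum of random cosine bands and then passed through a transfer function to obtain a rainfall field. The Gaussian covariance model, the gamma transfer function, and the dry-area threshold are illustrative assumptions, not values from the study, and the conditioning on point observations is omitted.

```python
# Minimal sketch (not the authors' code): an unconditional meta-Gaussian rainfall
# simulation built from a spectral, turning-bands-style Gaussian random field.
# Covariance model, gamma transfer function, and dry-area threshold are illustrative.
import numpy as np
from scipy.stats import norm, gamma

def gaussian_random_field(nx, ny, cell, corr_len, n_harmonics=500, rng=None):
    """Standard-normal field with ~Gaussian covariance exp(-(h/corr_len)^2),
    approximated as a sum of random cosine 'bands' (spectral method)."""
    rng = np.random.default_rng(rng)
    x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell, indexing="ij")
    # Frequencies sampled from the spectral density of the Gaussian covariance.
    omega = rng.normal(0.0, np.sqrt(2.0) / corr_len, size=(n_harmonics, 2))
    phase = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)
    z = np.zeros((nx, ny))
    for (wx, wy), ph in zip(omega, phase):
        z += np.cos(wx * x + wy * y + ph)
    return np.sqrt(2.0 / n_harmonics) * z

def to_rainfall(z, p_dry=0.6, shape=0.8, scale=25.0):
    """Meta-Gaussian transfer: censor the Gaussian field for dry areas and map
    the wet fraction onto a gamma distribution of daily rainfall (mm)."""
    u = norm.cdf(z)
    wet = u > p_dry                      # driest quantiles stay at zero rain
    rain = np.zeros_like(z)
    rain[wet] = gamma.ppf((u[wet] - p_dry) / (1.0 - p_dry), a=shape, scale=scale)
    return rain

field = gaussian_random_field(nx=100, ny=100, cell=1.0, corr_len=20.0, rng=42)
rain = to_rainfall(field)
print(f"wet fraction: {np.mean(rain > 0):.2f}, max daily rain: {rain.max():.1f} mm")
```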
Abstract
Flash flooding remains a challenging prediction problem, which is exacerbated by the lack of a universally accepted definition of the phenomenon. In this article, we extend prior analysis to examine the correspondence of various combinations of quantitative precipitation estimates (QPEs) and precipitation thresholds to observed occurrences of flash floods, additionally considering short-term quantitative precipitation forecasts from a convection-allowing model. Consistent with previous studies, there is large variability between QPE datasets in the frequency of “heavy” precipitation events. There is also large regional variability in the best thresholds for correspondence with reported flash floods. In general, flash flood guidance (FFG) exceedances provide the best correspondence with observed flash floods, although the best correspondence is often found for exceedances of ratios of FFG above or below unity. In the interior western United States, NOAA Atlas 14 derived recurrence interval thresholds (for the southwestern United States) and static thresholds (for the northern and central Rockies) provide better correspondence. The 6-h QPE provides better correspondence with observed flash floods than 1-h QPE in all regions except the West Coast and southwestern United States. Exceedances of precipitation thresholds in forecasts from the operational High-Resolution Rapid Refresh (HRRR) generally do not correspond with observed flash flood events as well as QPE datasets, but they outperform QPE datasets in some regions of complex terrain and sparse observational coverage such as the southwestern United States. These results can provide context for forecasters seeking to identify potential flash flood events based on QPE or forecast-based exceedances of precipitation thresholds.
Significance Statement
Flash floods result from heavy rainfall, but it is difficult to know exactly how much rain will cause a flash flood in a particular location. Furthermore, different precipitation datasets can show very different amounts of precipitation, even from the same storm. This study examines how well different precipitation datasets and model forecasts, used by forecasters to warn the public of flash flooding, represent heavy rainfall leading to flash flooding around the United States. We found that different datasets have dramatically different numbers of heavy rainfall events and that high-resolution model forecasts of heavy rain correspond with observed flash flood events about as well as precipitation datasets based on rain gauge and radar data in some regions of the country with few observations.
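The correspondence analysis summarized above rests on contingency-table verification of precipitation-threshold exceedances against flash flood reports. The sketch below, using entirely synthetic data and a hypothetical sweep of FFG ratios, illustrates that scoring step only; the arrays, matching rules, and values are not from the paper.

```python
# Illustrative sketch only: scoring how well exceedances of a precipitation
# threshold (here, a ratio of flash flood guidance, FFG) correspond with
# observed flash flood reports. Data and the ratio sweep are hypothetical.
import numpy as np

def contingency_scores(qpe, threshold, observed_flood):
    """POD, FAR, and CSI for threshold exceedances vs. flood observations."""
    forecast = qpe >= threshold
    hits = np.sum(forecast & observed_flood)
    misses = np.sum(~forecast & observed_flood)
    false_alarms = np.sum(forecast & ~observed_flood)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
    return pod, far, csi

rng = np.random.default_rng(0)
qpe_6h = rng.gamma(shape=0.5, scale=15.0, size=10_000)     # 6-h QPE (mm), synthetic
ffg_6h = np.full_like(qpe_6h, 40.0)                        # 6-h FFG (mm), synthetic
floods = qpe_6h > rng.normal(55.0, 10.0, qpe_6h.size)      # synthetic flood reports

for ratio in (0.75, 1.0, 1.25):                            # FFG ratios above/below unity
    pod, far, csi = contingency_scores(qpe_6h, ratio * ffg_6h, floods)
    print(f"FFG ratio {ratio:.2f}: POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f}")
```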
Abstract
Predicting and managing the impacts of flash droughts is difficult owing to their rapid onset and intensification. Flash drought monitoring often relies on assessing changes in root-zone soil moisture. However, the lack of widespread soil moisture measurements means that flash drought assessments often use process-based model data like that from the North American Land Data Assimilation System (NLDAS). Such reliance opens flash drought assessment to model biases, particularly from vegetation processes. Here, we examine the influence of vegetation on NLDAS-simulated flash drought characteristics by comparing two experiments covering 1981–2017: open loop (OL), which uses NLDAS surface meteorological forcing to drive a land surface model using prognostic vegetation, and data assimilation (DA), which instead assimilates near-real-time satellite-derived leaf area index (LAI) into the land surface model. The OL simulation consistently underestimates LAI across the United States, causing relatively high soil moisture values. Both experiments produce similar geographic patterns of flash droughts, but OL produces shorter duration events and regional trends in flash drought occurrence that are sometimes opposite to those in DA. Across the Midwest and Southern United States, flash droughts are 4 weeks (about 70%) longer on average in DA than OL. Moreover, across much of the Great Plains, flash drought occurrence has trended upward according to the DA experiment, opposite to the trend in OL. This sensitivity of flash drought to the representation of vegetation suggests that representing plants with greater fidelity could aid in monitoring flash droughts and improve the prediction of flash drought transitions to more persistent and damaging long-term droughts.
Significance Statement
Flash droughts are a subset of droughts with rapid onset and intensification leading to devastating losses to crops. Rapid soil moisture decline is one way to detect flash droughts. Because there is a lack of widespread observational data, we often rely on model outputs of soil moisture. Here, we explore how the representation of vegetation within land surface models influences U.S. flash drought characteristics over 1981–2017. We show that the misrepresentation of vegetation status propagates soil moisture biases into flash drought monitoring, impacting our understanding of the onset, magnitude, duration, and trends in flash droughts. Our results suggest that the assimilation of near-real-time vegetation into land surface models could improve the detection, monitoring, and prediction of flash droughts.
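As a rough illustration of soil-moisture-based flash drought detection, the sketch below applies one commonly used percentile-decline criterion (a drop from above the 40th to below the 20th percentile of root-zone soil moisture within four weeks) to a synthetic weekly series; the criterion used in the study may differ.

```python
# Hedged sketch: a commonly used flash drought criterion based on rapid
# root-zone soil moisture decline. The weekly percentile series is synthetic
# and the thresholds are assumptions, not the study's exact definition.
import numpy as np

def flash_drought_onsets(sm_percentile, start_pct=40.0, end_pct=20.0, max_weeks=4):
    """Return indices (weeks) where a flash drought onset is detected."""
    onsets = []
    for t in range(len(sm_percentile) - max_weeks):
        if sm_percentile[t] >= start_pct:
            window = sm_percentile[t + 1 : t + 1 + max_weeks]
            if np.min(window) <= end_pct:
                onsets.append(t)
    return onsets

rng = np.random.default_rng(1)
weekly_pct = np.clip(60 + np.cumsum(rng.normal(-1.5, 6.0, 52)), 0, 100)  # one synthetic year
print("flash drought onset weeks:", flash_drought_onsets(weekly_pct))
```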
Abstract
Ensemble copula coupling (Schefzik et al.) is a widely used method to produce a calibrated ensemble from a calibrated probabilistic forecast. This process improves the statistical accuracy of the ensemble; in other words, the distribution of the calibrated ensemble members at each grid point more closely approximates the true expected distribution. However, the trade-off is that the individual members are often less physically realistic than the original ensemble: there is noisy variation among neighboring grid points, and, depending on the calibration method, extremes in the original ensemble are sometimes muted. We introduce neighborhood ensemble copula coupling (N-ECC), a simple modification of ECC designed to mitigate these problems. We show that, when used with the calibrated forecasts produced by Flowerdew’s reliability calibration, N-ECC improves both the visual plausibility and the statistical properties of the forecast.
Significance Statement
Numerical weather prediction (NWP) uses physical models of the atmosphere to produce a set of scenarios (called an ensemble) describing possible weather outcomes. These forecasts are used in other models to produce weather forecasts and warnings of extreme events. For example, NWP forecasts of rainfall are used in hydrological models to predict the probability of flooding. However, the raw NWP forecasts require statistical postprocessing to ensure that the range of scenarios they describe accurately represents the true range of possible outcomes. This paper introduces a new method of processing NWP forecasts to produce physically realistic, well-calibrated ensembles.
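For readers unfamiliar with ECC, the sketch below shows the standard reordering step that N-ECC modifies: at each grid point, the calibrated samples are rearranged to follow the rank order of the raw ensemble members, so the calibrated ensemble inherits the raw members' dependence structure. The data are synthetic and the code is not taken from the paper.

```python
# Minimal sketch of ensemble copula coupling (ECC), not the paper's code:
# calibrated samples are reordered at each grid point according to the ranks
# of the raw ensemble, restoring the raw spatial dependence template.
import numpy as np

def ecc(raw_ensemble, calibrated_samples):
    """raw_ensemble, calibrated_samples: arrays of shape (n_members, n_points).
    Returns calibrated members arranged in the raw ensemble's rank order."""
    ranks = np.argsort(np.argsort(raw_ensemble, axis=0), axis=0)   # rank of each raw member
    sorted_cal = np.sort(calibrated_samples, axis=0)               # calibrated order statistics
    return np.take_along_axis(sorted_cal, ranks, axis=0)

rng = np.random.default_rng(2)
raw = rng.gamma(2.0, 3.0, size=(20, 5))           # 20 raw members, 5 grid points (synthetic)
calibrated = rng.gamma(2.5, 3.5, size=(20, 5))    # 20 calibrated samples per point (synthetic)
ecc_members = ecc(raw, calibrated)
print(ecc_members.shape)                          # (20, 5), same dependence template as raw
```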
Abstract
Observations over a saltwater lagoon in the Altiplano show that evaporation E is triggered at noon, concurrent with the transition of a shallow, stable atmospheric boundary layer (ABL) into a deep mixed layer. We investigate the coupling between the ABL and E drivers using a land–atmosphere conceptual model, observations, and a regional model. Additionally, we analyze the ABL interaction with the aerodynamic and radiative components of evaporation using the Penman equation adapted to saltwater. Our results demonstrate that nonlocal processes are dominant in driving E. In the morning, the ABL is controlled by the local advection of warm air (∼5 K h⁻¹), which results in a shallow (<350 m), stable ABL, with virtually no mixing and no E (<50 W m⁻²). The warm-air advection ultimately connects the ABL with the residual layer above, increasing the ABL height h by ∼1 km. At midday, a thermally driven regional flow arrives at the lagoon, which first advects a deeper ABL from the surrounding desert (∼1500 m h⁻¹), leading to a further ∼700-m increase in h. The regional flow also causes an increase in wind (∼12 m s⁻¹) and an ABL collapse due to the entrance of cold air (∼−2 K h⁻¹) with a shallower ABL (∼−350 m h⁻¹). The turbulence produced by the wind decreases the aerodynamic resistance and mixes the water body, releasing the energy previously stored in the lake. The ABL feedback on E through vapor pressure enables high evaporation values (∼450 W m⁻² at 1430 LT). These results contribute to the understanding of E over water bodies in semiarid conditions and emphasize the importance of understanding ABL processes when describing evaporation drivers.
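A hedged sketch of the Penman combination equation used to separate the radiative and aerodynamic components of evaporation, with a crude salinity adjustment through a water activity coefficient; the constants, the altitude-adjusted psychrometric value, and the example inputs are illustrative assumptions, not the study's formulation.

```python
# Hedged sketch of the Penman combination equation for open-water evaporation
# (latent heat flux, W m^-2), with a crude salinity adjustment: the saturation
# vapor pressure is scaled by a water activity coefficient a_w < 1 for saltwater.
# The study's saltwater adaptation may differ; all input values are illustrative.
import math

CP = 1005.0        # specific heat of air, J kg^-1 K^-1
RHO_A = 1.1        # air density at Altiplano altitude, kg m^-3 (assumed)
GAMMA = 0.055      # psychrometric constant, kPa K^-1 (lower at high altitude; assumed)

def e_sat(t_c):
    """Saturation vapor pressure (kPa) over fresh water at t_c (deg C)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def penman_le(rn, g, t_air, e_air, r_a, a_w=1.0):
    """Penman latent heat flux (W m^-2).
    rn, g: net radiation and water heat storage change (W m^-2)
    t_air (deg C), e_air (kPa), r_a: aerodynamic resistance (s m^-1)
    a_w: water activity (1.0 for fresh water, <1 for brines)."""
    es = a_w * e_sat(t_air)                               # salinity-reduced saturation e
    delta = 4098.0 * e_sat(t_air) / (t_air + 237.3) ** 2  # slope of e_sat curve, kPa K^-1
    radiative = delta * (rn - g)                          # kPa W m^-2 K^-1
    aerodynamic = RHO_A * CP * max(es - e_air, 0.0) / r_a # same units (e in kPa)
    return (radiative + aerodynamic) / (delta + GAMMA)

# Midday example: strong wind lowers r_a and boosts the aerodynamic term.
print(f"{penman_le(rn=600, g=150, t_air=12.0, e_air=0.25, r_a=40.0, a_w=0.9):.0f} W m^-2")
```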
Abstract
The Canadian Precipitation Analysis (CaPA) system provides near-real-time precipitation analyses over Canada by combining observations with short-term numerical weather prediction forecasts. CaPA’s snowfall estimates suffer from the lack of accurate solid precipitation measurements to correct the first-guess estimate. Weather radars have the potential to add precipitation measurements to CaPA in all seasons but are not assimilated in winter due to radar snowfall estimate imprecision and lack of precipitation gauges for calibration. The main objective of this study is to assess the impact of assimilating Canadian dual-polarized radar-based snowfall data in CaPA to improve precipitation estimates. Two sets of experiments were conducted to evaluate the impact of including radar snowfall retrievals, one set using the high-resolution CaPA (HRDPA) with the currently operational quality control configuration and another increasing the number of assimilated surface observations by relaxing quality control. Experiments spanned two winter seasons (2021 and 2022) in central Canada, covering part of the CaPA domain. The results showed that the assimilation of radar-based snowfall data improved CaPA’s precipitation estimates 81.75% of the time for the 0.5-mm precipitation threshold. An increase in the probability of detection together with a decrease in the false alarm ratio suggested an improvement of the precipitation spatial distribution and estimation accuracy. Additionally, the results showed improvements for both precipitation mass and frequency biases for low precipitation amounts. For larger thresholds, the frequency bias was degraded. The results also indicated that the assimilation of dual-polarization radar data is beneficial for the two CaPA configurations tested in this study.
Abstract
The Weather Research and Forecasting (WRF) Model is used to dynamically downscale ERA-Interim global reanalysis data to test its performance as a regional climate model (RCM) for the Great Lakes region (GLR). Four cumulus parameterizations and three spectral nudging techniques applied to moisture are evaluated based on 2-m temperature and precipitation accumulation in the Great Lakes drainage basin (GLDB). Results are compared to a control simulation without spectral nudging, and additional analysis is presented showing the contribution of each nudged variable to temperature, moisture, and precipitation. All but one of the RCM test simulations have a dry precipitation bias in the warm months, and the only simulation with a wet bias also has the least precipitation error. It is found that the inclusion of spectral nudging of temperature dramatically improves a cold-season cold bias, and while the nudging of moisture improves simulated annual and diurnal temperature ranges, its impact on precipitation is complicated.
Significance Statement
Global climate models are vital to understanding our changing climate. While many include a coarse representation of the Great Lakes, they lack the resolution to represent effects like lake effect precipitation, lake breeze, and surface air temperature modification. Therefore, using a regional climate model to downscale global data is imperative to correctly simulate the land–lake–atmosphere feedbacks that contribute to regional climate. Modeling precipitation is particularly important because it plays a direct role in the Great Lakes’ water cycle. The purpose of this study is to identify the configuration of the Weather Research and Forecasting Model that best simulates precipitation and temperature in the Great Lakes region by testing cumulus parameterizations and methods of nudging the regional model toward the global model.
Abstract
We study the impact of uncertain precipitation estimates on simulated streamflows for the poorly gauged Yarlung Tsangpo basin (YTB), high mountain Asia (HMA). A process-based hydrological model at 0.5-km resolution is driven by an ensemble of precipitation estimation products (PEPs), including analyzed ground observations, high-resolution precipitation estimates, climate data records, and reanalyses over the 2008–15 control period. The model is then forced retrospectively from 1983 onward to obtain seamless discharge estimates until 2007, a period for which there is very sparse flow data coverage. Whereas temperature forcing is considered deterministic, precipitation is sampled from the predictive distribution, which is obtained through processing PEPs by means of a probabilistic processor of uncertainty. The employed Bayesian processor combines the PEPs and outputs the predictive densities of daily precipitation depth accumulation as well as the probability of precipitation occurrence, from which random precipitation fields for probabilistic model forcing are sampled. The predictive density of precipitation is conditional on the precipitation estimation predictors that are bias corrected and variance adjusted. For the selected HMA study site, discharges simulated from reanalysis and climate data records score lowest against observations at three flow gauging points, whereas high-resolution satellite estimates perform better, but are still outperformed by precipitation fields obtained from analyzed observed precipitation and merged products, which were corrected against ground observations. The applied methodology indicates how missing flows for poorly gauged sites can be retrieved and is further extendable to hydrological projections of climate.
Significance Statement
We show how to use different precipitation estimates, like computer simulations of weather or satellite observations, in conjunction with all available ground measurements in regions with generally poor meteorological and flow measurement infrastructure. We demonstrate how it is possible to retrieve past unobserved river flows using these estimates in combination with a hydrological computer model for streamflow simulations. The method can help us to better understand the hydrology of poorly gauged regions that play an important role in the distribution of water resources and can be affected by future changes. We applied the method to a large transboundary river basin in China. This basin holds water needed by large, densely populated regions of India that may become water constrained in a warmer climate.
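The toy sketch below conveys, with synthetic data, the general idea of bias correcting and variance adjusting a precipitation predictor against gauge observations and then sampling a predictive distribution from it; it is a stand-in for illustration only, not the Bayesian processor applied in the study.

```python
# Toy sketch of the idea behind a probabilistic precipitation processor (not the
# study's Bayesian processor): one product's estimates are bias corrected and
# variance adjusted against gauge observations in a transformed space, and a
# predictive distribution is sampled for ensemble forcing. Data are synthetic;
# the log transformation and linear model are assumptions.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(0.7, 8.0, 2000)                        # "gauge" daily wet-day totals (mm)
pep = 1.4 * obs + rng.normal(0.0, 4.0, obs.size)       # biased, noisy product estimate
pep = np.clip(pep, 0.01, None)

# Work in log space so residuals are closer to Gaussian (an assumption).
x, y = np.log(pep), np.log(np.clip(obs, 0.01, None))
x_adj = (x - x.mean()) / x.std() * y.std() + y.mean()  # bias correction + variance adjustment
slope, intercept = np.polyfit(x_adj, y, 1)             # simple linear predictive mean
sigma = np.std(y - (slope * x_adj + intercept))        # predictive spread

def sample_precip(pep_value, n=100):
    """Draw n daily precipitation samples (mm) given one product estimate."""
    xa = (np.log(pep_value) - x.mean()) / x.std() * y.std() + y.mean()
    return np.exp(rng.normal(slope * xa + intercept, sigma, n))

print(np.percentile(sample_precip(30.0), [10, 50, 90]).round(1))
```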
Abstract
Spaceborne microwave radiometers represent an important component of the Global Precipitation Measurement (GPM) mission due to their frequent sampling of rain systems. Microwave radiometers measure microwave radiation (brightness temperatures Tb), which can be converted into precipitation estimates with appropriate assumptions. However, detecting shallow precipitation systems using spaceborne radiometers is challenging, especially over land, as their weak signals are hard to differentiate from those associated with dry conditions. This study uses a random forest (RF) model to classify microwave radiometer observations as dry, shallow, or nonshallow over the Netherlands—a region with varying surface conditions and frequent occurrence of shallow precipitation. The RF model is trained on five years of data (2016–20) and tested with two independent years (2015 and 2021). The observations are classified using ground-based weather radar echo top heights. Various RF models are assessed, such as using only GPM Microwave Imager (GMI) Tb values as input features or including spatially aligned ERA5 2-m temperature and freezing level reanalysis and/or Dual-Frequency Precipitation Radar (DPR) observations. Independent of the input features, the model performs best in summer and worst in winter. The model classifies observations from high-frequency channels (≥85 GHz) with lower Tb values as nonshallow, higher values as dry, and those in between as shallow. Misclassified footprints exhibit radiometric characteristics corresponding to their assigned class. Case studies reveal dry observations misclassified as shallow are associated with lower Tb values, likely resulting from the presence of ice particles in nonprecipitating clouds. Shallow footprints misclassified as dry are likely related to the absence of ice particles.
Significance Statement
Published research concerning rainfall retrieval algorithms from microwave radiometers is often focused on the accuracy of these algorithms. While shallow precipitation over land is often characterized as problematic in these studies, little progress has been made with these systems. In particular, precipitation formed by shallow clouds, where shallow refers to the clouds being close to Earth’s surface, is often missed. This study is focused on detecting shallow precipitation and its physical characteristics to further improve its detection from spaceborne sensors. As such, it contributes to understanding which shallow precipitation scenes are challenging to detect from microwave radiometers, suggesting possible ways for algorithm improvement.
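A minimal sketch of the classification setup using scikit-learn, with synthetic brightness temperatures and an ERA5-like 2-m temperature predictor standing in for the real GMI, DPR, and radar echo-top data; the feature choices and class separations are illustrative assumptions only.

```python
# Hedged sketch (synthetic data, not GMI/DPR files): a random forest labels
# radiometer footprints as dry, shallow, or nonshallow from brightness
# temperatures and an auxiliary temperature predictor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(4)
n = 6000
labels = rng.choice(["dry", "shallow", "nonshallow"], n, p=[0.7, 0.2, 0.1])
tb_89v = np.select(                                      # 89-GHz V-pol Tb (K), synthetic
    [labels == "dry", labels == "shallow", labels == "nonshallow"],
    [rng.normal(275, 6, n), rng.normal(265, 6, n), rng.normal(245, 10, n)])
t2m = rng.normal(278, 6, n)                              # ERA5-like 2-m temperature (K), synthetic
X = np.column_stack([tb_89v, t2m])

train = rng.random(n) < 0.8                              # simple hold-out split
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], labels[train])
print(classification_report(labels[~train], clf.predict(X[~train])))
```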
Abstract
Hydrologic assessment of climate change impacts on complex terrains and data-sparse regions like High Mountain Asia is a major challenge. Combining hydrological models with satellite and reanalysis data for evaluating changes in hydrological variables is often the only available approach. However, uncertainties associated with the forcing dataset, coupled with model parameter uncertainties, can have significant impacts on hydrologic simulations. This work aims to understand and quantify how the uncertainty in precipitation and its interaction with the model uncertainty affect streamflow estimation in glacierized catchments. Simulations for four precipitation datasets [Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG), Climate Hazards Group Infrared Precipitation with Station (CHIRPS), ERA5-Land, and Asian Precipitation–Highly Resolved Observational Data Integration Toward Evaluation (APHRODITE)] and two glacio-hydrological models [Glacio-Hydrological Degree-Day Model (GDM) and Hydrological Model for Distributed Systems (HYMOD_DS)] are evaluated for the Marsyangdi and Budhigandaki River basins in Nepal. Temperature sensitivity of streamflow simulations is also investigated. Relative to APHRODITE, which compared well with ground stations, ERA5-Land overestimates the catchment average precipitation for both basins by more than 70%; IMERG and CHIRPS overestimate by ∼20%. Precipitation uncertainty propagation to streamflow exhibits strong dependence on model structure and streamflow components (snowmelt, ice melt, and rainfall-runoff), but overall uncertainty is dampened through the precipitation-to-streamflow transformation. Temperature is a significant additional source of uncertainty in hydrologic simulations of such environments. GDM was found to be more sensitive to temperature variations, with a >50% increase in total flow for a 20% increase in actual temperature, emphasizing that models that rely on lapse rates for the spatial distribution of temperature have much higher sensitivity. Results from this study provide critical insight into the challenges of utilizing satellite and reanalysis products for simulating streamflow in glacierized catchments.
Significance Statement
This work investigates the uncertainty of streamflow simulations due to climate forcing and model parameter/structure uncertainty and quantifies the relative importance of each source of uncertainty and its impact on simulating different streamflow components in glacierized catchments of High Mountain Asia. Results highlight that in high mountain regions, temperature uncertainty exerts a major control on hydrologic simulations, and that models that do not adequately represent the spatial variability of temperature are more sensitive to bias in the forcing data. These findings provide guidance on important aspects to be considered when modeling the glacio-hydrological response of catchments in such areas and are thus expected to impact both research and operational practice related to hydrologic modeling of glacierized catchments.
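To illustrate why models that rely on lapse rates for the spatial distribution of temperature are so sensitive to temperature bias, the sketch below extrapolates a station temperature to elevation bands with an assumed lapse rate and applies assumed degree-day melt factors; it is an illustrative sketch, not the GDM code.

```python
# Illustrative degree-day melt sketch (not GDM itself): station temperature is
# extrapolated to elevation bands with a lapse rate, and daily melt is a
# degree-day factor times positive degrees. Lapse rate and factors are assumed.
import numpy as np

LAPSE = -0.0065                # K per m (assumed)
DDF_SNOW, DDF_ICE = 5.0, 8.0   # mm w.e. per deg C per day (assumed)

def daily_melt(t_station, z_station, z_bands, snow_covered):
    """Melt (mm w.e. per day) per elevation band from one station temperature."""
    t_band = t_station + LAPSE * (z_bands - z_station)
    ddf = np.where(snow_covered, DDF_SNOW, DDF_ICE)
    return ddf * np.maximum(t_band, 0.0)

z_bands = np.array([3500.0, 4500.0, 5500.0])
snow = np.array([False, True, True])
for bias in (0.0, 1.0):  # effect of a 1-degree warm bias in the forcing temperature
    melt = daily_melt(t_station=12.0 + bias, z_station=2000.0, z_bands=z_bands, snow_covered=snow)
    print(f"T bias {bias:+.0f} K -> melt per band (mm/day): {melt.round(1)}")
```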