Abstract
The predictability of precipitation is hindered by finer-scale processes not captured explicitly in global numerical models, such as convective interactions, cloud microphysics, and boundary layer dynamics. However, there is growing demand across various sectors for medium- (3–10-day) and extended-range (10–30-day) quantitative precipitation forecasts (QPFs) and probabilistic QPFs (PQPFs). This study uses a novel statistical postprocessing technique, APPM, that combines analog postprocessing (AP) with probability matching (PM) to produce week-1 and week-2 accumulated precipitation forecasts over Taiwan. AP searches for historical predictions that closely resemble the current forecast and creates an AP ensemble using the observed high-resolution precipitation patterns corresponding to these forecast analogs. Frequency counting and PM are then separately applied to the AP ensemble to produce calibrated and downscaled PQPFs and bias-reduced QPFs, respectively. Evaluation over a 22-yr (1999–2020) period shows that raw ensemble forecasts from the GEFS of NOAA/NWS/Environmental Modeling Center, collected for the subseasonal experiment, are underdispersive with a wet bias. In contrast, the AP ensemble spread well represents forecast uncertainty, leading to substantially more reliable and skillful probabilistic forecasts. Furthermore, the AP-based PQPF demonstrates superior discrimination ability and yields notably greater economic benefits for a wider range of users, with the maximum economic value increasing by 30%–50% for the week-2 forecast. Compared to the raw ensemble mean forecast, the calibrated QPF exhibits lower mean absolute error and explains 3–8 times more variance in observations. Overall, the APPM technique significantly improves week-1 and week-2 QPFs and PQPFs over Taiwan.
Significance Statement
There are two significant challenges in improving precipitation forecasts beyond a few days in Taiwan. First, large-scale numerical models often struggle to accurately predict precipitation locations and magnitudes and to provide sufficient spatial detail. Second, probabilistic precipitation forecasts have been unreliable, failing to convey accurate uncertainty information to users. In response to these challenges, this study has developed a relatively simple yet effective technique that corrects the spatiotemporal distribution of predicted precipitation and downscales the forecasts from 1° to 1-km spatial resolution. Our results demonstrate that this technique significantly alleviates these two issues, resulting in more accurate precipitation forecasts and more reliable probabilistic precipitation forecasts within a 2-week timeframe.
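The analog search, frequency counting, and probability matching steps summarized in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the array shapes, member count `k`, and RMSE analog metric are assumptions introduced here.

```python
import numpy as np

def analog_ensemble(current_fcst, hist_fcsts, hist_obs, k=10):
    """Analog postprocessing (AP): pick the k historical forecasts closest
    to the current forecast (RMSE over the coarse grid) and return the
    high-resolution observations that accompanied them."""
    dists = np.sqrt(((hist_fcsts - current_fcst) ** 2).mean(axis=1))
    idx = np.argsort(dists)[:k]
    return hist_obs[idx]

def pqpf(ens, threshold):
    """Frequency counting: fraction of analog members exceeding a threshold,
    giving a calibrated probabilistic QPF at each grid point."""
    return (ens >= threshold).mean(axis=0)

def prob_matched_qpf(ens):
    """Probability matching (PM): keep the spatial pattern of the ensemble
    mean but draw its amplitudes from the pooled member distribution."""
    mean = ens.mean(axis=0)
    pooled = np.sort(ens.ravel())
    ranks = np.argsort(np.argsort(mean))           # rank of each grid point
    quantiles = (ranks + 0.5) / mean.size
    return pooled[(quantiles * (pooled.size - 1)).astype(int)]
```

The PM step preserves the ordering of the ensemble-mean field while restoring the heavier-tailed amplitude distribution of the individual members, which is why it reduces the smoothing bias of a plain ensemble mean.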
Abstract
The spatial description of high-resolution extreme daily rainfall fields is challenging because of the high spatial and temporal variability of rainfall, particularly in tropical regions due to the stochastic nature of convective rainfall. Geostatistical simulations offer a solution to this problem. In this study, a stochastic geostatistical simulation technique based on the spectral turning bands method is presented for modeling daily rainfall extremes in the data-scarce tropical Ouémé River basin (Benin). This technique uses meta-Gaussian frameworks built on Gaussian random fields, which are transformed into realistic rainfall fields using statistical transfer functions. The simulation framework can be conditioned on point observations and is computationally efficient in generating multiple ensembles of extreme rainfall fields. The results of tests and evaluations for multiple extremes demonstrate the effectiveness of the simulation framework in modeling more realistic rainfall fields and capturing their variability. It successfully reproduces the empirical cumulative distribution function of the observation samples and outperforms classical interpolation techniques like ordinary kriging in terms of spatial continuity and rainfall variability. The study also addresses the challenge of dealing with uncertainty in data-poor areas and proposes a novel approach for determining the spatial correlation structure even with low station density, resulting in a performance boost of 9.5% compared to traditional techniques. Additionally, we present a low-skill reference simulation method to facilitate a comprehensive comparison of the geostatistical simulation approaches. The simulations generated have the potential to provide valuable inputs for hydrological modeling.
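The meta-Gaussian idea described above can be sketched in two steps: a spectral turning-bands approximation of a Gaussian random field, followed by an empirical anamorphosis onto an observed rainfall sample. The spectral density, band count, and rank-based transfer below are illustrative assumptions, and the paper's conditioning on point observations is not shown.

```python
import numpy as np

def turning_bands_gaussian(coords, n_bands=500, scale=5.0, rng=None):
    """Approximate a stationary 2-D Gaussian random field as a normalized
    sum of cosine waves along random directions (spectral turning bands).
    `scale` sets an assumed correlation length via an exponential
    frequency distribution."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, np.pi, n_bands)         # band directions
    freq = rng.exponential(1.0 / scale, n_bands)     # assumed spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, n_bands)
    dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    proj = coords @ dirs.T                           # project points on bands
    field = np.cos(2.0 * np.pi * freq * proj + phase).sum(axis=1)
    return field * np.sqrt(2.0 / n_bands)            # unit-variance scaling

def to_rainfall(gauss, obs_sample):
    """Empirical anamorphosis: replace each Gaussian value by the rainfall
    value at the same empirical quantile of an observed-extremes sample,
    so the simulated field reproduces the observed CDF."""
    ranks = np.argsort(np.argsort(gauss))
    u = (ranks + 0.5) / gauss.size
    return np.quantile(np.sort(obs_sample), u)
```

Because each realization only requires drawing new band directions, frequencies, and phases, generating large ensembles of fields stays cheap, which is the computational advantage the abstract alludes to.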
Abstract
Information about past floods and historical precipitation records is fundamental to the management of water resources, but observational records usually cover only the last 100–150 years. Using several different data sources, such as newly digitized meteorological data from several stations in the south-eastern part of Romania, historical newspapers of that time, and daily large-scale reanalysis data, here we provide a detailed analysis of the atmospheric circulation conditions associated with a devastating flood event which took place in June 1897. The floods in June 1897 were among the most devastating natural disasters in Romania's history; they were caused by heavy rainfall that started at the beginning of May and continued for several weeks, resulting in widespread flooding, especially in the eastern part of the country. The most affected areas were the cities of Braila and Galati, located on the main course of the Danube River, where the floods caused extensive damage to infrastructure, including homes, bridges, and roads, and disrupted transportation and communication networks. The heavy rainfall events in June 1897 and the associated flood peak were triggered by intrusions of high potential vorticity (PV) anomalies toward the south-eastern part of Europe, persistent and pivotal cut-off lows over the analyzed region, and increased water vapor transport over the south-eastern part of Romania. We argue that digitizing and analyzing old meteorological records enables researchers to better understand the Earth's climate system and make more accurate predictions about future climate change.
Abstract
Potentially the greatest benefit of Commercial Microwave Links (CMLs) as opportunistic rainfall sensors lies in regions that lack dedicated rainfall sensors, most notably low- and middle income countries. However, current CML rainfall retrieval algorithms are predominantly tuned and applied to (European) CML networks in temperate or Mediterranean climates. This study investigates whether local quantitative precipitation estimates from CMLs in a tropical region, specifically Sri Lanka, can be improved by optimizing two dominant parameters in the rainfall retrieval algorithm RAINLINK, namely the wet-antenna correction factor Aa and the relative contribution of minimum and maximum received signal levels α. Using a grid search, based on ten months of CML data from 22 link-gauge clusters consisting of 105 sub-links that lie within 1 km of a daily rain gauge, optimal values of Aa and α are first derived for the entire country and compared to the default RAINLINK values. Subsequently, the CMLs are grouped by link length, frequency, climate zone, and daily rainfall depth classes, and Aa and α are derived for each of these classes. Calibrating parameters on all clusters across the country only leads to minor improvements. The actual optimal Aa and α values depend on the performance metric favored. Calibrating on network properties, particularly short link length and high frequency classes, does significantly improve rainfall estimates. By relating the optimal Aa and α values to known network meta data, the results from this study are potentially applicable to other tropical CML networks that lack nearby reference rainfall data.
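The parameter search described above can be sketched as follows. This is a heavily simplified stand-in for RAINLINK, not its actual retrieval chain: the k-R power-law coefficients, the error metric, and the synthetic inputs are illustrative assumptions.

```python
import numpy as np
from itertools import product

def retrieve_rain(att_min, att_max, length_km, Aa, alpha, a=0.034, b=0.71):
    """Simplified CML retrieval: blend min/max path attenuation with weight
    alpha, subtract a wet-antenna offset Aa, convert the resulting specific
    attenuation (dB/km) to rain rate with an assumed k-R power law."""
    att = alpha * att_max + (1.0 - alpha) * att_min - Aa
    k = np.maximum(att, 0.0) / length_km
    return (k / a) ** (1.0 / b)

def grid_search(att_min, att_max, length_km, gauge, Aa_grid, alpha_grid):
    """Exhaustively test (Aa, alpha) pairs and keep the one with the lowest
    mean absolute error against the reference rain gauges."""
    best = None
    for Aa, alpha in product(Aa_grid, alpha_grid):
        est = retrieve_rain(att_min, att_max, length_km, Aa, alpha)
        mae = np.abs(est - gauge).mean()
        if best is None or mae < best[0]:
            best = (mae, Aa, alpha)
    return best
```

Grouping links by length or frequency class, as in the study, simply means running the same search on each subset of link-gauge pairs and storing one (Aa, α) pair per class.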
Abstract
In the Gulf Coastal Plains of Texas, a state-of-the-art distributed network of field observatories, known as the Texas Water Observatory (TWO), has been developed to better understand the water, energy, and carbon cycles across the critical zone (encompassing aquifers, soils, plants, and atmosphere) at different spatiotemporal scales. Using more than 300 advanced real-time/near-real-time sensors, this observatory monitors high-frequency water, energy, and carbon storage and fluxes in the Brazos River corridor, which are critical for coupled hydrologic, biogeochemical, and land-atmosphere process understanding in the region. TWO provides a regional resource for better understanding and/or managing agriculture, water resources, ecosystems, biodiversity, disasters, health, energy, and weather/climate. TWO infrastructure spans common land uses in this region, including traditional/aspirational cultivated agriculture, rangelands, native prairie, bottomland hardwood forest, and coastal wetlands. Sites represent landforms from low-relief erosional uplands to depositional lowlands across climatic and geologic gradients of central Texas. We present the overarching vision of TWO and describe site design, instrumentation specifications, data collection, and quality control protocols. We also provide a comparison of water, energy, and carbon budgets across sites, including evapotranspiration, carbon fluxes, radiation budget, weather, profile soil moisture and soil temperature, soil hydraulic properties, hydrogeophysical surveys, and groundwater levels and groundwater quality reported at TWO primary sites for 2018–2020 (with certain data gaps). In conjunction with various earth-observing remote sensing and legacy databases, TWO provides a master testbed to evaluate process-driven or data-driven critical zone science, leading to improved natural resource management and decision support at different spatiotemporal scales.
Abstract
Flash flooding remains a challenging prediction problem, which is exacerbated by the lack of a universally accepted definition of the phenomenon. In this article, we extend prior analysis to examine the correspondence of various combinations of quantitative precipitation estimates (QPEs) and precipitation thresholds to observed occurrences of flash floods, additionally considering short-term quantitative precipitation forecasts from a convection-allowing model. Consistent with previous studies, there is large variability between QPE datasets in the frequency of “heavy” precipitation events. There is also large regional variability in the best thresholds for correspondence with reported flash floods. In general, flash flood guidance (FFG) exceedances provide the best correspondence with observed flash floods, although the best correspondence is often found for exceedances of ratios of FFG above or below unity. In the interior western United States, NOAA Atlas 14 derived recurrence interval thresholds (for the southwestern United States) and static thresholds (for the northern and central Rockies) provide better correspondence. The 6-h QPE provides better correspondence with observed flash floods than 1-h QPE in all regions except the West Coast and southwestern United States. Exceedances of precipitation thresholds in forecasts from the operational High-Resolution Rapid Refresh (HRRR) generally do not correspond with observed flash flood events as well as QPE datasets, but they outperform QPE datasets in some regions of complex terrain and sparse observational coverage such as the southwestern United States. These results can provide context for forecasters seeking to identify potential flash flood events based on QPE or forecast-based exceedances of precipitation thresholds.
Significance Statement
Flash floods result from heavy rainfall, but it is difficult to know exactly how much rain will cause a flash flood in a particular location. Furthermore, different precipitation datasets can show very different amounts of precipitation, even from the same storm. This study examines how well different precipitation datasets and model forecasts, used by forecasters to warn the public of flash flooding, represent heavy rainfall leading to flash flooding around the United States. We found that different datasets have dramatically different numbers of heavy rainfall events and that high-resolution model forecasts of heavy rain correspond with observed flash flood events about as well as precipitation datasets based on rain gauges and radar in some regions of the country with few observations.
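The correspondence analysis described above, including the scan over FFG ratios above and below unity, can be sketched with a standard 2x2 contingency table. The ratio set and the use of the critical success index (CSI) as the selection metric are assumptions for illustration.

```python
import numpy as np

def contingency_scores(exceed, observed):
    """Hits, misses, false alarms, and critical success index (CSI) for
    binary threshold exceedances versus observed flash flood reports."""
    hits = int(np.sum(exceed & observed))
    misses = int(np.sum(~exceed & observed))
    false_alarms = int(np.sum(exceed & ~observed))
    total = hits + misses + false_alarms
    csi = hits / total if total > 0 else float("nan")
    return hits, misses, false_alarms, csi

def best_ffg_ratio(qpe, ffg, observed, ratios=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Scan FFG ratio thresholds (above or below unity) and return the
    ratio whose QPE exceedances best match observed flash floods by CSI."""
    scores = {r: contingency_scores(qpe >= r * ffg, observed)[3] for r in ratios}
    return max(scores, key=lambda r: scores[r]), scores
```

Running such a scan per region is one way to reproduce the abstract's finding that the best-corresponding threshold is often a ratio of FFG rather than FFG itself.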
Abstract
Predicting and managing the impacts of flash droughts is difficult owing to their rapid onset and intensification. Flash drought monitoring often relies on assessing changes in root-zone soil moisture. However, the lack of widespread soil moisture measurements means that flash drought assessments often use process-based model data like that from the North American Land Data Assimilation System (NLDAS). Such reliance opens flash drought assessment to model biases, particularly from vegetation processes. Here, we examine the influence of vegetation on NLDAS-simulated flash drought characteristics by comparing two experiments covering 1981–2017: open loop (OL), which uses NLDAS surface meteorological forcing to drive a land surface model using prognostic vegetation, and data assimilation (DA), which instead assimilates near-real-time satellite-derived leaf area index (LAI) into the land surface model. The OL simulation consistently underestimates LAI across the United States, causing relatively high soil moisture values. Both experiments produce similar geographic patterns of flash droughts, but OL produces shorter duration events and regional trends in flash drought occurrence that are sometimes opposite to those in DA. Across the Midwest and Southern United States, flash droughts are 4 weeks (about 70%) longer on average in DA than OL. Moreover, across much of the Great Plains, flash drought occurrence has trended upward according to the DA experiment, opposite to the trend in OL. This sensitivity of flash drought to the representation of vegetation suggests that representing plants with greater fidelity could aid in monitoring flash droughts and improve the prediction of flash drought transitions to more persistent and damaging long-term droughts.
Significance Statement
Flash droughts are a subset of droughts with rapid onset and intensification leading to devastating losses to crops. Rapid soil moisture decline is one way to detect flash droughts. Because there is a lack of widespread observational data, we often rely on model outputs of soil moisture. Here, we explore how the representation of vegetation within land surface models influences the U.S. flash drought characteristics covering 1981–2017. We show that the misrepresentation of vegetation status propagates soil moisture biases into flash drought monitoring, impacting our understanding of the onset, magnitude, duration, and trends in flash droughts. Our results suggest that the assimilation of near-real-time vegetation into land surface models could improve the detection, monitoring, and prediction of flash droughts.
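Detecting flash drought onset from a soil-moisture series can be sketched as below, using one common rapid-intensification criterion: the percentile falls from above a starting threshold to below an ending threshold within a few weeks. The specific thresholds and window are illustrative assumptions, not the study's exact definition.

```python
def flash_drought_onsets(sm_pct, start_min=40.0, end_max=20.0, window=4):
    """Return week indices where the root-zone soil-moisture percentile
    drops from above `start_min` to below `end_max` within `window` weeks,
    a simple proxy for flash drought onset and rapid intensification."""
    onsets = []
    for t in range(len(sm_pct) - window):
        if sm_pct[t] > start_min and sm_pct[t + window] < end_max:
            onsets.append(t)
    return onsets
```

Applying such a detector to the OL and DA soil-moisture series at each grid cell is the kind of comparison that exposes the duration and trend differences the abstract reports.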