Abstract
Annual spring and summer runoff from western Colorado is relied upon by 40 million people, six states, and two countries. Cool season precipitation and snowpack have historically been robust predictors of seasonal runoff in western Colorado. Forecasts made with this information allow water managers to plan for the season ahead. Antecedent hydrological conditions, such as root zone soil moisture and groundwater storage, and weather conditions following peak snowpack also impact seasonal runoff. The roles of such factors were scrutinized in 2020 and 2021: seasonal runoff was much lower than expectations based on snowpack values alone. We investigate the relative importance of meteorological and hydrological conditions occurring before and after the snowpack season in predicting seasonal runoff in western Colorado. This question is critical because the most effective investment strategy for improving forecasts depends on whether errors arise before or after the snowpack season. This study is conducted using observations from the Snow Telemetry Network, root zone soil moisture and groundwater data from the Western Land Data Assimilation Systems, and a random forest–based statistical forecasting framework. We find that on average, antecedent root zone soil moisture and groundwater storage values do not add significant skill to seasonal water supply forecasts in western Colorado. In contrast, using precipitation and temperature data after the time of peak snowpack improves water supply forecasts significantly. The 2020 and 2021 runoffs were hampered by dry conditions both before and after the snowpack season. Both antecedent soil moisture and spring/summer precipitation data improved water supply forecast accuracy in these years.
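A random forest–based statistical water supply forecast of the kind named above can be sketched as follows. The predictor names, the synthetic data, and the leave-one-out evaluation are illustrative assumptions for the sketch, not the authors' dataset or exact framework.

```python
# Sketch of a random forest seasonal water-supply forecast with snowpack,
# antecedent soil moisture, and post-peak-SWE weather as candidate predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_years = 40

# Hypothetical standardized predictors for each water year.
peak_swe = rng.normal(size=n_years)        # peak snow water equivalent
antecedent_sm = rng.normal(size=n_years)   # fall root-zone soil moisture
spring_precip = rng.normal(size=n_years)   # precipitation after peak SWE

# Synthetic runoff: mostly snowpack-driven, with a post-peak weather term.
runoff = 0.8 * peak_swe + 0.3 * spring_precip + 0.1 * rng.normal(size=n_years)

X = np.column_stack([peak_swe, antecedent_sm, spring_precip])

# Leave-one-out cross-validation, a common choice for short hydrologic records.
preds = np.empty(n_years)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], runoff[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

print(f"LOO MAE: {mean_absolute_error(runoff, preds):.3f}")
```

Comparing the cross-validated error of models with and without a given predictor (e.g., antecedent soil moisture) is one simple way to measure the added skill the abstract discusses.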
Significance Statement
Seasonal water supply forecasts in western Colorado are highly valuable because spring and summer runoff from this region helps support the water supply of 40 million people. Accurate forecasts improve the management of the region’s water. Heavy investments have been made in improving our ability to monitor antecedent hydrological conditions in western Colorado, such as root zone soil moisture and groundwater. However, results from this study indicate that the largest source of uncertainty in western Colorado runoff forecasts is future weather. Therefore, improved subseasonal-to-seasonal weather forecasts for western Colorado are what is most needed to improve regional water supply forecasts and, in turn, the ability to properly manage western Colorado water.
Abstract
Satellites provide a useful way of estimating rainfall where the availability of in situ data is low, but their indirect nature of estimation means there can be substantial biases. Consequently, the assimilation of in situ data is an important step in improving the accuracy of the satellite rainfall analysis. The effectiveness of this step varies with gauge density, and this study investigated the effectiveness of statistical interpolation (SI), also known as optimal interpolation (OI), on a monthly time scale when gauge density is extremely low, using Papua New Guinea (PNG) as a study region. The topography of the region presented an additional challenge to the algorithm. An open-source implementation of SI was developed in Python 3 and confirmed to be consistent with an existing implementation, addressing a lack of open-source implementations of this classical algorithm. The effectiveness of the analysis produced by this algorithm was then compared to the pure satellite analysis over PNG from 2001 to 2014. When performance over the entire study domain was considered, the improvement from using SI was close to imperceptible because of the small number of stations available for assimilation and the small radius of influence of each station (imposed by the topography present in the domain). However, there was still value in using OI, as performance around each of the stations was noticeably improved, with the error consistently being reduced along with a general increase in the correlation metric. Furthermore, in an operational context, the use of OI provides an important function of ensuring consistency between in situ data and the gridded analysis.
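The core OI update described above can be sketched in one dimension: a satellite background field is nudged toward a handful of gauges, with each gauge's radius of influence set by a covariance length scale. The Gaussian covariance model and the error variances below are illustrative assumptions, not the study's configuration.

```python
# Minimal 1-D statistical (optimal) interpolation: analysis = background +
# B H^T (H B H^T + R)^{-1} (obs - H background), with a Gaussian background
# covariance of length scale L controlling each station's influence.
import numpy as np

def oi_analysis(grid_x, background, obs_x, obs, sigma_b=1.0, sigma_o=0.5, L=50.0):
    """Update `background` (defined on grid_x) with point observations via OI."""
    def gauss_cov(x1, x2):
        d = x1[:, None] - x2[None, :]
        return sigma_b**2 * np.exp(-0.5 * (d / L) ** 2)

    # Background interpolated to the observation locations (the H operator).
    Hb = np.interp(obs_x, grid_x, background)
    innovation = obs - Hb

    B_oo = gauss_cov(obs_x, obs_x)         # background covariance at obs sites
    R = sigma_o**2 * np.eye(len(obs))      # observation-error covariance
    B_go = gauss_cov(grid_x, obs_x)        # grid-to-obs covariances

    weights = np.linalg.solve(B_oo + R, innovation)
    return background + B_go @ weights

grid = np.linspace(0.0, 500.0, 101)            # 1-D grid, e.g., km
background = np.full_like(grid, 5.0)           # satellite monthly rainfall field
gauges_x = np.array([100.0, 300.0])
gauges = np.array([8.0, 3.0])                  # gauges disagreeing with satellite

analysis = oi_analysis(grid, background, gauges_x, gauges)
```

A small L, as the complex PNG topography would demand, confines each gauge's correction to its immediate surroundings, which is consistent with the domain-wide improvement being slight while near-station errors drop.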
Significance Statement
The blending of satellite and gauge rainfall data through a process known as statistical interpolation (SI) is known to be capable of producing a more accurate dataset that facilitates better estimation of rainfall. However, the performance of this algorithm over a domain such as Papua New Guinea, where gauge density is extremely low, is not often explored. This study reveals that, although an improvement over the entire Papua New Guinea domain was slight, the algorithm is still valuable as there was a consistent improvement around the stations. Additionally, an adaptable and open-source version of the algorithm is provided, allowing users to blend their own satellite and gauge data and create better geospatial datasets for their own purposes.
Abstract
The persistence or memory of soil moisture (θ) after rainfall has substantial environmental implications. Much work has been done to study soil moisture drydown for in situ and satellite data separately. In this work, we present a comparison of drydown characteristics across multiple U.K. soil moisture products, including satellite-merged (i.e., TCM), in situ (i.e., COSMOS-UK), hydrological model [i.e., Grid-to-Grid (G2G)], statistical model [i.e., Soil Moisture U.K. (SMUK)], and land surface model (LSM) [i.e., Climate Hydrology and Ecology research Support System (CHESS)] data. The drydown decay time scale (τ) for all gridded products is computed at an unprecedented resolution of 1–2 km, a scale relevant to weather and climate models. While their ranges of τ differ (except for SMUK and CHESS, which are similar) due to differences such as sensing depths, their spatial patterns are correlated with land cover and soil types. We further analyze the occurrence of drydown events at COSMOS-UK sites. We show that soil moisture drydown regimes exhibit strong seasonal dependencies, whereby the soil dries out more quickly in summer than in winter. These seasonal dependencies are important to consider during model benchmarking and evaluation. We show that fitted τ values based on COSMOS and the LSM are well correlated, with COSMOS biased toward lower τ. Our findings contribute to a growing body of literature to characterize τ, with the aim of developing a method to systematically validate model soil moisture products at a range of scales.
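Estimating the drydown decay time scale τ is commonly done by fitting an exponential relaxation toward a dry limit to each post-rainfall soil moisture recession. The functional form below is the standard exponential drydown model; the synthetic "observations", noise level, and parameter bounds are assumptions for this sketch.

```python
# Illustrative fit of the drydown decay time scale tau, assuming the common
# model theta(t) = theta_w + (theta_0 - theta_w) * exp(-t / tau), where
# theta_w is the dry (wilting-level) limit and theta_0 the post-rain value.
import numpy as np
from scipy.optimize import curve_fit

def drydown(t, theta_w, d_theta, tau):
    return theta_w + d_theta * np.exp(-t / tau)

rng = np.random.default_rng(0)
t_days = np.arange(0, 20)                  # days since the rainfall event
true_tau = 6.0
theta_obs = (drydown(t_days, 0.15, 0.20, true_tau)
             + 0.005 * rng.normal(size=t_days.size))  # noisy volumetric theta

popt, _ = curve_fit(drydown, t_days, theta_obs,
                    p0=(0.1, 0.1, 5.0),
                    bounds=([0.0, 0.0, 0.5], [0.5, 0.5, 60.0]))
theta_w_fit, d_theta_fit, tau_fit = popt
print(f"fitted tau = {tau_fit:.1f} days")
```

Repeating such fits per grid cell and per season would reproduce the kind of seasonal τ contrast (faster summer drydown, i.e., smaller τ) that the abstract reports.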
Significance Statement
While important for many aspects of the environment, the evaluation of modeled soil moisture has remained incredibly challenging. Sensors work at different space and time scales from the models, the definitions of soil moisture vary between applications, and soil moisture itself depends on soil properties, while its impact on evaporation or river flow depends more on its variation in time and space than on its absolute value. What we need is a method that allows us to compare the important features of soil moisture rather than its value. In this study, we choose to study drydown as a way to capture and compare the behavior of different soil moisture data products.
Abstract
NASA’s multi-satellite precipitation product from the Global Precipitation Measurement (GPM) mission, the Integrated Multi-satellitE Retrievals for GPM (IMERG) product, is validated over tropical and high-latitude oceans from June 2014 to August 2021. This oceanic study uses the GPM Validation Network’s island-based radars to assess IMERG when the GPM Core Observatory’s Microwave Imager (GMI) observes precipitation at these sites (i.e., IMERG-GMI). Error tracing from the Level 3 (gridded) IMERG V06B product back through to the input Level 2 (satellite footprint) Goddard Profiling Algorithm GMI V05 climate (GPROF-CLIM) product quantifies the errors separately associated with each step in the gridding and calibration of the estimates from GPROF-CLIM to IMERG-GMI. Mean relative bias results indicate that IMERG-GMI V06B overestimates Alaskan high-latitude oceanic precipitation by +147% and tropical oceanic precipitation by +12% with respect to surface radars. GPROF-CLIM V05 overestimates Alaskan oceanic precipitation by +15%, showing that the IMERG algorithm’s calibration adjustments to the input GPROF-CLIM precipitation estimates increase the mean relative bias in this region. In contrast, IMERG adjustments are minimal over tropical waters with GPROF-CLIM overestimating oceanic precipitation by +14%. This study discovered that the IMERG V06B gridding process incorrectly geolocated GPROF-CLIM V05 precipitation estimates by 0.1° eastward in the latitude band 75°N–S, which has been rectified in the IMERG V07 algorithm. Correcting for the geolocation error in IMERG-GMI V06B improved oceanic statistics, with improvements greater in tropical waters than Alaskan waters. This error tracing approach enables a high-precision diagnosis of how different IMERG algorithm steps contribute to and mitigate errors, demonstrating the importance of collaboration between evaluation studies and algorithm developers.
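The mean relative bias statistic used throughout the comparison above can be written compactly. One common definition, assumed here, is the difference of accumulated totals expressed as a percentage of the reference total; the sample values are illustrative, not the study's data.

```python
# Mean relative bias of a precipitation estimate against a reference,
# defined here as 100 * (sum(estimate) - sum(reference)) / sum(reference).
import numpy as np

def mean_relative_bias(estimate, reference):
    """Percent bias of accumulated estimate relative to accumulated reference."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * (estimate.sum() - reference.sum()) / reference.sum()

sat = np.array([2.0, 0.0, 5.0, 1.0])    # satellite precipitation (mm/h)
radar = np.array([1.5, 0.2, 4.0, 0.8])  # island radar reference (mm/h)
print(f"{mean_relative_bias(sat, radar):+.0f}%")  # prints "+23%"
```

Computing this statistic at each stage of the chain (GPROF-CLIM, gridding, IMERG calibration) is what lets the error-tracing approach attribute bias to individual algorithm steps.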
Abstract
Despite the intensifying interest in flash drought both within the United States and globally, moist tropical landscapes have largely escaped the attention of the flash drought community. Because these ecozones are acclimatized to receiving regular, near-daily precipitation, they are especially vulnerable to rapid-drying events. This is particularly true within the Caribbean Sea basin where numerous small islands lack the surface and groundwater resources to cope with swiftly developing drought conditions. This study fills the tropical flash drought gap by examining the pervasiveness of flash drought across the pan-Caribbean region using a recently proposed criterion based on the evaporative demand drought index (EDDI). The EDDI identifies 46 instances of widespread flash drought “outbreaks” in which significant fractions of the pan-Caribbean encounter rapid drying over 15 days and then maintain this condition for another 15 days. Moreover, a self-organizing maps (SOM) classification reveals a tendency for flash drought to assume recurring typologies concentrated in one of the Central American, South American, or Greater Antilles coastlines, although a simultaneous, Caribbean-wide drought is never observed within the 40-yr (1981–2020) period examined. Furthermore, three of the six flash drought typologies identified by the SOM initiate most often during Phase 2 of the Madden–Julian oscillation. Collectively, these findings motivate the need to more critically examine the transferability of flash drought definitions into the global tropics, particularly for small water-vulnerable islands where even island-wide flash droughts may only occupy a few pixels in most reanalysis datasets.
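A criterion of the kind described, rapid drying over 15 days that is then maintained for another 15 days, can be sketched on a daily EDDI percentile series. The 50-percentile-point jump and the 5-point maintenance tolerance below are illustrative assumptions, not necessarily the thresholds used in the study.

```python
# Hedged sketch of an EDDI-based flash drought onset check: flag day t if the
# EDDI percentile (higher = drier) rises by >= `jump` points over `window`
# days and then stays near that elevated level for another `window` days.
import numpy as np

def flash_drought_onsets(eddi_pct, window=15, jump=50.0, tol=5.0):
    """Return indices t where a flash drought begins in a daily series."""
    eddi_pct = np.asarray(eddi_pct, dtype=float)
    onsets = []
    for t in range(len(eddi_pct) - 2 * window):
        rise = eddi_pct[t + window] - eddi_pct[t]
        held = np.all(eddi_pct[t + window: t + 2 * window + 1]
                      >= eddi_pct[t + window] - tol)
        if rise >= jump and held:
            onsets.append(t)
    return onsets

# Synthetic series: percentile climbs rapidly around day 20 and then holds.
series = np.concatenate([np.full(20, 20.0),
                         np.linspace(20.0, 80.0, 15),
                         np.full(30, 80.0)])
print(flash_drought_onsets(series))
```

Applying such a per-pixel test across a reanalysis grid, then counting the fraction of the domain flagged simultaneously, gives the "widespread outbreak" events the abstract tallies.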
Significance Statement
The purpose of this study is to understand if flash drought occurs in tropical environments, specifically the Caribbean. Flash droughts are quickly evolving droughts, which have particularly acute impacts on agriculture and often catch stakeholders by surprise as conditions shift rapidly from wet to dry. Our results indicate that flash droughts occur with regular periodicity in the Caribbean. Expansive flash droughts tend to occur in coherent subregional clusters. Future studies will further investigate the drivers of these flash droughts to create early warning systems for flash drought.
Abstract
Freshwater supplies in most western Canadian watersheds are threatened by warming temperatures, which alter the snow-dominated hydrologic patterns that characterize these cold regions. In this study, we used datasets from 12 climate simulations, associated with seven global climate models and four future scenarios participating in the Coupled Model Intercomparison Project Phase 6, to calculate and assess the historical and future temporal patterns of 13 hydroclimate indicators relevant to water resources management. We conducted linear long-term trend and change analyses on their annual time series to provide insight into the potential regional impacts of the detected changes on water availability for all users. We implemented our framework for the Alberta oil sands region in Canada, to support the monitoring of environmental changes in this region relative to the established 1985–2014 baseline. Our analysis indicates a persistent increase in the occurrence of extreme hot temperatures, fewer extreme cold temperatures, and an increase in warm spells and heatwaves, while precipitation-related indices show minor changes. Consequently, deficits in regional water availability during summer and water-year periods, as depicted by the standardized precipitation evapotranspiration indices, are expected. The combined effects of the strong climate warming signals and the small increases in annual precipitation amounts generally detected in this study suggest that drier conditions may become more severe and frequent in the Alberta oil sands region. The challenging climate change risks identified for this region should therefore be continuously monitored, updated, and integrated to support sustainable management for all water users.
Abstract
The historical rise of irrigation has profoundly mitigated the effect of drought on agriculture in many parts of the United States. While irrigation directly alters soil moisture, meteorological drought indices ignore the effects of irrigation, since they are often based on simple water balance models that neglect the irrigation input. Reanalyses also largely neglect irrigation. Other approaches estimate the evaporative fraction (EF), which is correlated with soil moisture under water-limited conditions typical of droughts, with lower values corresponding to drier soils. However, those approaches require satellite observations of land surface temperature, meaning they cannot be used to study droughts prior to the satellite era. Here, we use a recent theory of land–atmosphere coupling—surface flux equilibrium (SFE) theory—to estimate EF from readily available observations of near-surface air temperature and specific humidity with long historical records. In contrast to EF estimated from a reanalysis that largely neglects irrigation, the SFE-predicted EF is greater at irrigated sites than at nonirrigated sites during droughts, and its historical trends are typically consistent with the spatial distribution of irrigation growth. Two sites at which SFE-predicted EF unexpectedly rises in the absence of changes in irrigation can be explained by increased flooding due to human interventions unrelated to irrigation (river engineering and the expansion of fish hatcheries). This work introduces a new method for quantifying agricultural drought prior to the satellite era. It can be used to provide insight into the role of irrigation in mitigating drought in the United States over the twentieth century.
Significance Statement
Irrigation grew profoundly in the United States over the twentieth century, increasing the resilience of American agriculture to drought. Yet observational records of agricultural drought, and its response to irrigation, are limited to the satellite era. Here, we show that a common measure of agricultural drought (the evaporative fraction, EF) can be estimated using widespread weather data, extending the agricultural drought record decades further back in time. We show that EF estimated using our approach is both sensitive and specific to the occurrence of irrigation, unlike an alternative derived from a reanalysis.
Abstract
In the arid and semiarid southwestern United States, both cool- and warm-season storms result in flash flooding, although the former storms have been much less studied. Here, we investigate a catalog of 52 flash-flood-producing storms over the 1996–2021 period for the arid Las Vegas Wash watershed using rain gauge observations, reanalysis fields, radar reflectivities, cloud-to-ground lightning flashes, and streamflow records. Our analyses focus on the hydroclimatology, convective intensity, and evolution of these storms. At the synoptic scale, cool-season storms are associated with open wave and cutoff low weather patterns, whereas warm-season storms are linked to classic and troughing North American monsoon (NAM) patterns. At the storm scale, cool-season events are southwesterly and southeasterly under open wave and cutoff low conditions, respectively, with long duration and low to moderate rainfall intensity. Warm-season storms, however, are characterized by short-duration, high-intensity rainfall, with either no apparent direction or southwesterly under classic and troughing NAM patterns, respectively. Atmospheric rivers and deep convection are the principal agents for the extreme rainfall and upper-tail flash floods in cool and warm seasons, respectively. Additionally, intense rainfall over the developed low valley is imperative for urban flash flooding. The evolution properties of seasonal storms and the resulting streamflows show that peak flows of comparable magnitude are “intensity driven” in the warm season but “volume driven” in the cool season. Furthermore, the distinctive impacts of complex terrain and climate change on rainfall properties are discussed with respect to storm seasonality.
Abstract
Recent advances in artificial intelligence (AI) and explainable AI (XAI) have created opportunities to better predict and understand drought processes. This study uses a machine learning approach for understanding the drivers of drought severity and extent in the Canadian Prairies from 2005 to 2019 using climate and satellite data. The model is trained on the Canadian Drought Monitor (CDM), an extensive dataset produced by expert analysis of drought impacts across various sectors that enables a more comprehensive understanding of drought. Shapley additive explanation (SHAP) is used to understand model predictions during emerging or worsening drought conditions, providing insight into the key determinants of drought. The results demonstrate the importance of capturing spatiotemporal autocorrelation structures for accurate drought characterization and elucidate the drought time scales and thresholds that optimally separate each CDM severity category. In general, there is a positive relationship between the severity of drought and the time scale of the anomalies. However, high-severity droughts are also more complex and driven by a multitude of factors. It was found that the satellite-based evaporative stress index (ESI), soil moisture, and groundwater were effective predictors of drought onset and intensification. Similarly, anomalous phases of large-scale atmosphere–ocean dynamics exhibit teleconnections with Prairie drought. Overall, this investigation provides a better understanding of the physical mechanisms responsible for drought in the Prairies, provides data-driven thresholds for estimating drought severity that could improve future drought assessments, and offers a set of early warning indicators that may be useful for drought adaptation and mitigation.
Significance Statement
This work is significant because it identifies drivers of drought onset and intensification in an agriculturally and economically important region of Canada. This information can be used in the future to improve early warning for adaptation and mitigation. It also uses state-of-the-art machine learning techniques to understand drought, including a novel approach called SHAP probability values to improve interpretability. This provides evidence that machine learning models are not black boxes and should be more widely considered for understanding drought and other hydrometeorological phenomena.
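The SHAP values used in this study are built on the game-theoretic Shapley value, which attributes a model's prediction to its input features. As a self-contained illustration of that underlying principle only (the feature names and contributions below are hypothetical, not taken from the study; production SHAP libraries use fast model-specific approximations rather than this brute force), an exact Shapley computation:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a set-valued payoff function `value`.

    phi_i = sum over subsets S not containing i of
            |S|! (n - |S| - 1)! / n! * [value(S + {i}) - value(S)]
    Brute force over all subsets, so only practical for a few features.
    """
    n = len(features)
    phi = {}
    for i, f in enumerate(features):
        others = features[:i] + features[i + 1:]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# Hypothetical additive "drought model": each feature's effect sums independently.
contrib = {"soil_moisture": -0.4, "ESI": -0.3, "groundwater": -0.1}
payoff = lambda s: sum(contrib[name] for name in s)
print(shapley_values(list(contrib), payoff))
# For an additive model, the Shapley value recovers each feature's
# contribution exactly; SHAP applies the same idea to nonadditive models.
```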