Search Results
Showing 1–10 of 16 items for Author or Editor: Phu Nguyen
Abstract
Floods are among the most devastating natural hazards affecting society, and flood forecasting is crucial for providing timely warnings that protect people and property. This research applied the high-resolution coupled hydrologic–hydraulic model from the University of California, Irvine, named HiResFlood-UCI, to simulate the historical 2008 Iowa flood. HiResFlood-UCI was forced with the near-real-time Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS) and NEXRAD Stage 2 precipitation data. The model was run using a priori hydrologic parameters and hydraulic Manning n values from lookup tables. The model results were evaluated in two ways: point comparison against USGS streamflow and areal validation of inundation maps against USDA flood extent maps derived from Advanced Wide Field Sensor (AWiFS) 56-m resolution imagery. The results show that the PERSIANN-CCS simulation tends to capture the observed hydrograph shape better than Stage 2 (minimum correlation of 0.86 for PERSIANN-CCS and 0.72 for Stage 2); however, at most of the stream gauges, the Stage 2 simulation provides more accurate estimates of flood peaks (49%–90% bias reduction relative to PERSIANN-CCS). Both simulations show good agreement with the AWiFS flood extent (critical success index of 0.67 for Stage 2 and 0.73 for PERSIANN-CCS). Since the PERSIANN-CCS simulation slightly underestimated the discharge, its probability of detection (0.93) is slightly lower than that of the Stage 2 simulation (0.97); as a trade-off, its false alarm rate (0.23) is better than that of the Stage 2 simulation (0.31).
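The POD, FAR, and CSI values quoted above all derive from a standard 2×2 contingency table over flood-map pixels. A minimal sketch in Python (the function name and toy grids are illustrative, not from the study):

```python
import numpy as np

def flood_map_scores(simulated, observed):
    """Contingency-table scores for binary flood-extent maps.

    simulated, observed: boolean arrays (True = pixel flooded).
    Returns probability of detection (POD), false alarm rate (FAR),
    and critical success index (CSI).
    """
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)           # flooded in both maps
    misses = np.sum(~sim & obs)        # observed flood missed by the model
    false_alarms = np.sum(sim & ~obs)  # simulated flood not observed
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Toy example on a 2x3 grid of pixels
sim = [[True, True, False], [True, False, False]]
obs = [[True, True, True], [False, False, False]]
pod, far, csi = flood_map_scores(sim, obs)
print(pod, far, csi)  # 2 hits, 1 miss, 1 false alarm -> 2/3, 1/3, 0.5
```

Higher POD and CSI with lower FAR indicate better agreement with the reference flood extent.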
Abstract
The Nile River basin is one of the global hotspots vulnerable to climate change impacts because of its fast-growing population and geopolitical tensions. Previous studies demonstrated that general circulation models (GCMs) frequently disagree on the sign of change in annual precipitation projections. Here, we first evaluate the performance of 20 GCMs from phase six of the Coupled Model Intercomparison Project (CMIP6) benchmarked against a high-spatial-resolution precipitation dataset dating back to 1983, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Climate Data Record (PERSIANN-CDR). Next, a Bayesian model averaging (BMA) approach is adopted to derive probability distributions of precipitation projections in the Nile basin. Retrospective analysis reveals that most GCMs exhibit considerable (up to 64% of mean annual precipitation) and spatially heterogeneous bias in simulating annual precipitation. Moreover, all GCMs underestimate interannual variability; the ensemble range is therefore underdispersive and a poor indicator of uncertainty. The projected changes from the BMA model show that both the magnitude and sign of change vary considerably across the Nile basin. Specifically, the projected changes in the two headwaters basins, the Blue Nile and the Upper White Nile, are 0.03% and −1.65%, respectively; both are statistically insignificant at α = 0.05. The uncertainty range estimated from the BMA model shows that the probability of a precipitation decrease is much higher in the Upper White Nile basin, whereas the projected change in the Blue Nile is highly uncertain in both magnitude and sign.
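Bayesian model averaging weights each GCM by how well it reproduces the benchmark record and then forms a weighted mixture of the members' projections. A simplified sketch (a full BMA, e.g. Raftery et al.'s, fits member-specific variances by EM; the Gaussian likelihood, common error scale, and all numbers here are illustrative):

```python
import numpy as np

# Hypothetical historical annual precipitation (mm) from three GCMs and a
# benchmark record over the same five years; all numbers are illustrative.
obs = np.array([900., 950., 1010., 870., 990.])
gcms = np.array([
    [880., 940., 1000., 860., 1005.],  # GCM A: close to the benchmark
    [700., 760., 820., 690., 800.],    # GCM B: strong dry bias
    [905., 960., 1020., 880., 985.],   # GCM C: close to the benchmark
])

# BMA weight for each member ~ its likelihood of having produced the
# observations; here a Gaussian likelihood with a common error scale.
sigma = 50.0
log_lik = -0.5 * np.sum(((gcms - obs) / sigma) ** 2, axis=1)
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()  # normalize to a probability distribution

# Weighted-mixture summary of the members' projected changes (%, illustrative)
proj_change = np.array([+1.0, -6.0, -0.5])
bma_change = np.sum(weights * proj_change)
prob_decrease = np.sum(weights[proj_change < 0.0])
print(weights.round(3), round(bma_change, 2), round(prob_decrease, 3))
```

The biased member receives near-zero weight, so the mixture is dominated by the two skillful members, and the probability of a decrease comes from the weight on members projecting drying.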
Abstract
Calibration is a crucial step in hydrologic modeling that is typically handled by tuning parameters to match an observed hydrograph. In this research, an alternative calibration scheme based on soil moisture was investigated as a means of identifying the potentially heterogeneous calibration needs of a distributed hydrologic model. The National Weather Service’s (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) was employed to carry out such a calibration, along with concentrated in situ soil moisture observations from the Iowa Flood Studies (IFloodS) field campaign in Iowa’s Turkey River basin. Synthetic single-pixel experiments were conducted to identify parameters relevant to soil moisture dynamics and to test the ability of three calibration procedures (discharge-based, soil moisture–based, and hybrid) to recapture prescribed parameter sets. Three storage parameters of HL-RDHM could be consistently identified using soil moisture RMSE as the objective function, and adding discharge-based calibration led to more consistent identification of all 11 storage and release parameters. Expanding to full-basin experiments, these three calibration procedures were applied following an investigation to find the most advantageous method of distributing the point-based calibrations carried out at each pixel collocated with an IFloodS observation site; a method based on pixel similarity was deemed most appropriate for this purpose. Additionally, streamflow simulations calibrated with soil moisture showed improvement in RMSE and Nash–Sutcliffe efficiency (NSE) for all calibration–validation events despite a short calibration period, a promising result when considering calibration of ungauged basins. However, supplementary evaluation metrics show mixed results for streamflow simulations, suggesting further investigation is required.
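The two objective functions named in the abstract, RMSE and NSE, have standard closed forms; a minimal sketch (the series here are illustrative, not study data):

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than the mean of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [10., 20., 30., 40.]
perfect = [10., 20., 30., 40.]
climatology = [25., 25., 25., 25.]  # constant mean of the observations
print(nse(perfect, obs), nse(climatology, obs))  # 1.0 and 0.0
```

In a calibration loop, either function (or a hybrid of the two, as in the study) serves as the score the parameter search minimizes or maximizes.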
Abstract
Flood mapping from satellites provides large-scale observations of flood events, but cloud obstruction in satellite optical sensors limits its practical usability. In this study, we implemented the Variational Interpolation (VI) algorithm to remove clouds from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) Snow-Covered Area (SCA) products. The VI algorithm estimated states of cloud-hindered pixels by constructing three-dimensional space–time surfaces based on assumptions of snow persistence. The resulting cloud-free flood maps, while maintaining the temporal resolution of the original MODIS product, showed an improvement of nearly 70% in average probability of detection (POD) (from 0.29 to 0.49) when validated with flood maps derived from Landsat-8 imagery. The second part of this study utilized the cloud-free flood maps for calibration of a hydrologic model to improve simulation of flood inundation maps. The results demonstrated the utility of the cloud-free maps, as simulated inundation maps had average POD, false alarm ratio (FAR), and Hanssen–Kuipers (HK) skill score of 0.87, 0.49, and 0.84, respectively, compared to POD, FAR, and HK of 0.70, 0.61, and 0.67 when original maps were used for calibration.
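The Hanssen–Kuipers (HK) skill score used alongside POD and FAR above is the probability of detection minus the probability of false detection. A minimal sketch over binary flood masks (toy data, not from the study):

```python
import numpy as np

def hanssen_kuipers(sim, obs):
    """HK (true skill statistic) for binary maps: POD minus the
    probability of false detection; 1 is perfect, 0 is no skill."""
    sim, obs = np.asarray(sim, bool), np.asarray(obs, bool)
    hits = np.sum(sim & obs)
    misses = np.sum(~sim & obs)
    false_alarms = np.sum(sim & ~obs)
    correct_neg = np.sum(~sim & ~obs)
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_neg)
    return pod - pofd

sim = np.array([1, 1, 0, 0, 1, 0], bool)  # simulated flooded pixels
obs = np.array([1, 1, 1, 0, 0, 0], bool)  # observed flooded pixels
hk = hanssen_kuipers(sim, obs)
print(hk)  # POD 2/3 minus POFD 1/3 -> 1/3
```

Unlike CSI, HK rewards correct negatives, which matters when most of a scene is dry.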
Abstract
Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite the availability of high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The proposed model’s performance is evaluated over the central CONUS during the summers of 2012 and 2013 at a spatial resolution of 0.08° and an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates than the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the detection skill of the model. Furthermore, the root-mean-square error (RMSE) of the PERSIANN-CNN rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data was lower than that of PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), showing the estimation accuracy of the proposed model.
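At its core, a convolutional layer slides a learned kernel over stacked input channels (here, IR and WV) to produce feature maps. A pure-numpy sketch of that single operation (this is not the PERSIANN-CNN architecture; the kernel and inputs are illustrative):

```python
import numpy as np

def conv2d_valid(channels, kernels, bias=0.0):
    """One convolutional filter (valid padding, stride 1) over a stack
    of input channels, followed by a ReLU, as in a CNN's first layer."""
    c, h, w = channels.shape
    kc, kh, kw = kernels.shape
    assert kc == c, "kernel must cover every input channel"
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(channels[:, i:i+kh, j:j+kw] * kernels) + bias
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Two hypothetical 5x5 channels (e.g. IR and WV brightness temperatures)
ir = np.ones((5, 5))
wv = np.ones((5, 5))
x = np.stack([ir, wv])              # shape (2, 5, 5)
k = np.full((2, 3, 3), 1.0 / 18.0)  # averaging kernel over both channels
y = conv2d_valid(x, k)
print(y.shape)  # (3, 3) feature map
```

A trained CNN stacks many such filters and layers, learning the kernels from data rather than fixing them by hand.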
Abstract
Little dispute surrounds the observed global temperature changes over the past decades. As a result, there is widespread agreement that a corresponding response in the global hydrologic cycle must exist. However, exactly how such a response manifests remains unsettled. Here we use a unique, recently developed, long-term satellite-based record to assess changes in precipitation across spatial scales. We show that warm climate regions exhibit decreasing precipitation trends, while arid and polar climate regions show increasing trends. At the country scale, precipitation appears to have increased in 96 countries and decreased in 104. We also explore precipitation changes over 237 major global basins. Our results show opposing trends at different scales, highlighting the importance of spatial scale in trend analysis. Furthermore, while the increasing global temperature trend is apparent in observations, the same cannot be said for the global precipitation trend according to the high-resolution dataset used in this study, PERSIANN-CDR.
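A per-pixel (or per-country, per-basin) trend of this kind is commonly summarized by the least-squares slope of the annual series, and aggregating pixels with opposing trends shows why the spatial scale of the analysis matters. A sketch with synthetic series (all numbers are illustrative):

```python
import numpy as np

def annual_trend(years, precip):
    """Least-squares linear trend (mm per year) of an annual series."""
    slope, _intercept = np.polyfit(np.asarray(years, float),
                                   np.asarray(precip, float), 1)
    return slope

years = np.arange(1983, 2013)
wet_pixel = 800.0 + 2.0 * (years - 1983)    # trend of +2.0 mm/yr
dry_pixel = 800.0 - 1.5 * (years - 1983)    # trend of -1.5 mm/yr
basin_mean = (wet_pixel + dry_pixel) / 2.0  # aggregate of opposing pixels

print(annual_trend(years, wet_pixel))   # ~ +2.0
print(annual_trend(years, dry_pixel))   # ~ -1.5
print(annual_trend(years, basin_mean))  # ~ +0.25: the signals largely cancel
```

The basin-mean trend is far smaller than either pixel trend, so a single large-scale number can mask strong, opposing local changes.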
Abstract
Most heavy precipitation events and extreme flooding over the U.S. Pacific coast can be linked to prevalent atmospheric river (AR) conditions. Reliable quantitative precipitation estimation at rich spatiotemporal resolution is therefore vital for water management and for early warning systems for flooding and landslides over these regions. At the same time, high-quality near-real-time measurement of AR precipitation remains challenging because of the region’s complex topography and meteorological conditions: orographic features occlude radar measurements, while infrared-based algorithms struggle to differentiate between cold brightband (BB) precipitation and warmer nonbrightband (NBB) precipitation, the latter characterized by greater orographic enhancement. In this study, we evaluate the performance of a recently developed near-real-time satellite precipitation algorithm, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN) Dynamic Infrared–Rain Rate-Now (PDIR-Now). The algorithm relies primarily on infrared information from geostationary satellites as input; consequently, PDIR-Now has the advantage of short data latency (a 15–60-min delay between observation and product delivery). The performance of PDIR-Now is analyzed with a focus on AR-related events dominated by NBB and BB precipitation over the Russian River basin. We use S-band (3-GHz) precipitation profilers together with Joss/Parsivel disdrometer measurements at the Middletown and Santa Rosa stations to classify BB and NBB precipitation events. Overall, our analysis shows that PDIR-Now is more skillful than the PERSIANN-Cloud Classification System (CCS) in retrieving precipitation rates for both BB and NBB events across the topographically complex study area.
We also assess well-known operational near-real-time precipitation products from 2017 to 2019, using conventional categorical and volumetric categorical indices, as well as continuous statistical metrics, to show the differences between high-resolution precipitation products such as Multi-Radar Multi-Sensor (MRMS).
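Volumetric categorical indices weight each pixel by its rain amount rather than counting hits and misses alone. A hedged sketch of one such index, a volume-weighted analogue of POD (the operational definitions used in the study may differ; the data are illustrative):

```python
import numpy as np

def volumetric_hit_index(sim, obs, threshold=0.0):
    """Fraction of the observed rain volume that the estimate detects
    above a threshold: 1 means all observed rain volume fell where the
    estimate also reported rain. A sketch of the 'volumetric' style of
    categorical index; exact operational formulas can differ."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    hit = (sim > threshold) & (obs > threshold)
    miss = (sim <= threshold) & (obs > threshold)
    detected = np.sum(sim[hit])   # estimated rain volume at hits
    missed = np.sum(obs[miss])    # observed rain volume at misses
    return detected / (detected + missed)

sim = np.array([5.0, 0.0, 2.0, 0.0])  # estimated rain (mm) per pixel
obs = np.array([4.0, 3.0, 2.0, 0.0])  # reference rain (mm) per pixel
vhi = volumetric_hit_index(sim, obs)
print(vhi)  # (5 + 2) / (5 + 2 + 3) = 0.7
```

The appeal over a plain hit count is that missing a heavy-rain pixel is penalized more than missing a drizzle pixel.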
Abstract
This study investigates the performance of Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Climate Data Record (PERSIANN-CDR) in a rainfall–runoff modeling application over the past three decades. PERSIANN-CDR provides precipitation data at a daily temporal resolution and 0.25° spatial resolution from 1983 to the present for the 60°S–60°N latitude band and 0°–360° longitude. The study is conducted in two phases over three test basins from the Distributed Hydrologic Model Intercomparison Project, phase 2 (DMIP2). In phase 1, a recent period (2003–10), when other high-resolution satellite-based precipitation products are available, is chosen. Precipitation evaluation against Stage IV gauge-adjusted radar data shows that PERSIANN-CDR and TRMM Multisatellite Precipitation Analysis (TMPA) perform similarly, with a higher correlation coefficient for TMPA (~0.8 vs ~0.75 for PERSIANN-CDR) and almost the same root-mean-square deviation (~6) for both products. TMPA and PERSIANN-CDR outperform PERSIANN mainly because, unlike PERSIANN, they are gauge-adjusted precipitation products. The National Weather Service Office of Hydrologic Development Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) is then forced with PERSIANN, PERSIANN-CDR, TMPA, and Stage IV data. Quantitative analysis using five different statistical and model efficiency measures against USGS streamflow observations shows that, in general, across all three DMIP2 basins the simulated hydrographs forced with PERSIANN-CDR and TMPA are in close agreement. Given the promising results of the first phase, the simulation is extended back to 1983, when only PERSIANN-CDR rainfall estimates are available.
The results show that PERSIANN-CDR-derived streamflow simulations are comparable to USGS observations, with correlation coefficients of ~0.67–0.73, relatively low biases (~5%–12%), and a high index of agreement (~0.68–0.83) between PERSIANN-CDR-simulated daily streamflow and USGS daily observations. These results demonstrate the capability of PERSIANN-CDR for hydrologic rainfall–runoff modeling, especially for long-term streamflow simulation over the past three decades.
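The "index of agreement" cited above is Willmott's d, which, like NSE, equals 1 for a perfect simulation but normalizes the squared error by the potential error about the observed mean. A minimal sketch (the series are illustrative):

```python
import numpy as np

def index_of_agreement(sim, obs):
    """Willmott's index of agreement d (0-1, 1 = perfect match)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    obar = obs.mean()
    num = np.sum((sim - obs) ** 2)                               # squared error
    den = np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)  # potential error
    return 1.0 - num / den

obs = np.array([10., 20., 30., 40.])
print(index_of_agreement(obs, obs))        # perfect agreement -> 1.0
print(index_of_agreement(obs * 0.9, obs))  # mild underestimation -> near 1
```

Reporting d alongside correlation and bias, as the abstract does, separates timing skill (correlation) from amplitude skill (bias, d).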
Abstract
Recent developments in deep neural networks (DNNs), specifically convolutional neural networks (CNNs), along with advances in computational power, open great opportunities to integrate massive amounts of real-time observations to characterize the spatiotemporal structure of surface precipitation. This study develops a CNN algorithm, named Deep Neural Network High Spatiotemporal Resolution Precipitation Estimation (Deep-STEP), that ingests satellite passive microwave (PMW) brightness temperatures (Tbs) at emission and scattering frequencies, combined with infrared (IR) Tbs from geostationary satellites and surface information, to automatically extract geospatial features related to precipitating clouds. These features allow the end-to-end Deep-STEP algorithm to instantaneously map surface precipitation intensities at a spatial resolution of 4 km. The main advantages of Deep-STEP over current state-of-the-art techniques are that 1) it learns and estimates complex precipitation systems directly from raw measurements in near–real time, 2) it extracts spatial-neighborhood features automatically, and 3) it fuses coarse-resolution PMW footprints with IR images to reliably retrieve surface precipitation at high spatial resolution. We anticipate that our proposed DNN algorithm will be a starting point for more sophisticated and efficient precipitation retrieval systems in terms of accuracy, fine spatial pattern detection skill, and computational cost.
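Fusing coarse PMW footprints with a fine IR grid implies resampling the PMW field onto the IR grid before the channels are stacked as network input. A sketch of one simple way to do this (nearest-neighbour block upsampling; the study's actual resampling scheme is not specified here, and all grids are synthetic):

```python
import numpy as np

# Hypothetical grids: a fine 8x8 IR brightness-temperature field and a
# coarser 2x2 PMW field whose footprints are 4x4 IR pixels each.
ir = np.random.default_rng(0).uniform(200., 300., size=(8, 8))
pmw = np.array([[250., 270.],
                [230., 260.]])  # one value per coarse footprint (K)

# Nearest-neighbour upsampling: replicate each coarse footprint over its
# 4x4 block of fine pixels, then stack with IR as a two-channel input --
# the kind of fused PMW/IR tensor a CNN like Deep-STEP could ingest.
pmw_fine = np.kron(pmw, np.ones((4, 4)))  # shape (8, 8)
inputs = np.stack([ir, pmw_fine])         # shape (2, 8, 8)
print(inputs.shape)
```

Smoother resampling (bilinear, footprint-weighted) is equally possible; the key point is that both channels end up on the same 4-km-style grid.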
Abstract
Recent tropical cyclones (TCs) have highlighted the hazards that TC rainfall poses to human life and property. These hazards are not adequately conveyed by the commonly used Saffir–Simpson scale. Additionally, while recurrence intervals (or, their inverse, annual exceedance probabilities) are sometimes used in the popular media to convey the magnitude and likelihood of extreme rainfall and floods, these concepts are often misunderstood by the public and have important statistical limitations. We introduce an alternative metric—the extreme rain multiplier (ERM), which expresses TC rainfall as a multiple of the climatologically derived 2-yr rainfall value. ERM allows individuals to connect (“anchor,” in cognitive psychology terms) the magnitude of a TC rainfall event to the magnitude of rain events that are more typically experienced in their area. A retrospective analysis of ERM values for TCs from 1948 to 2017 demonstrates the utility of the metric as a hazard quantification and communication tool. Hurricane Harvey (2017) had the highest ERM value during this period, underlining the storm’s extreme nature. ERM correctly identifies damaging historical TC rainfall events that would have been classified as “weak” using wind-based metrics. The analysis also reveals that the distribution of ERM maxima is similar throughout the eastern and southern United States, allowing for both the accurate identification of locally extreme rainfall events and the development of regional-scale (rather than local-scale) recurrence interval estimates for extreme TC rainfall. Last, an analysis of precipitation forecast data for Hurricane Florence (2018) demonstrates ERM’s ability to characterize Florence’s extreme rainfall hazard in the days preceding landfall.
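By definition, ERM divides a storm's rainfall by the local climatological 2-yr rainfall value, so an ERM of 4 means four times the rain of an event residents experience roughly every other year. A minimal sketch (all totals are illustrative, not actual storm data):

```python
import numpy as np

def extreme_rain_multiplier(storm_total, two_year_value):
    """ERM: storm rainfall expressed as a multiple of the local 2-yr
    rainfall value, anchoring the event to locally familiar rain amounts."""
    return np.asarray(storm_total, float) / np.asarray(two_year_value, float)

# A storm dropping 1000 mm where the 2-yr value is 250 mm has ERM = 4.
erm = extreme_rain_multiplier(1000.0, 250.0)
print(erm)  # 4.0

# The same formula applies gridpoint by gridpoint:
erm_grid = extreme_rain_multiplier([300., 600.], [250., 240.])
print(erm_grid)  # [1.2  2.5]
```

Because the denominator is local climatology, the same storm total maps to different ERM values in wet and dry regions, which is exactly the anchoring effect the metric is designed for.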