Search Results

You are looking at 1–10 of 14 items for:

  • Author or Editor: Kuo-lin Hsu
  • Journal of Hydrometeorology
  • All content
Kuo-lin Hsu, Tim Bellerby, and S. Sorooshian

Abstract

A new satellite-based rainfall monitoring algorithm that integrates the strengths of both low Earth-orbiting (LEO) and geostationary Earth-orbiting (GEO) satellite information has been developed. The Lagrangian Model (LMODEL) algorithm combines a 2D cloud-advection tracking system and a GEO data–driven cloud development and rainfall generation model with procedures to update model parameters and state variables in near–real time. The details of the LMODEL algorithm were presented in Part I. This paper describes a comparative validation of 1- and 3-h LMODEL accumulated rainfall outputs against ground radar rainfall measurements. LMODEL rainfall estimates consistently outperform accumulated 3-h microwave (MW)-only rainfall estimates, even before the more restricted spatial coverage provided by the latter is taken into account. In addition, LMODEL products remain effective and consistent between MW overpasses. Case studies demonstrate the potential of the LMODEL to combine the available satellite data into useful precipitation estimates at an hourly scale.

Full access
Tim Bellerby, Kuo-lin Hsu, and Soroosh Sorooshian

Abstract

The Lagrangian Model (LMODEL) is a new multisensor satellite rainfall monitoring methodology based on the use of a conceptual cloud-development model that is driven by geostationary satellite imagery and is locally updated using microwave-based rainfall measurements from low Earth-orbiting platforms. This paper describes the cloud development model and updating procedures; the companion paper presents model validation results. The model uses single-band thermal infrared geostationary satellite imagery to characterize cloud motion, growth, and dispersal at high spatial resolution (∼4 km). These inputs drive a simple, linear, semi-Lagrangian, conceptual cloud mass balance model, incorporating separate representations of convective and stratiform processes. The model is locally updated against microwave satellite data using a two-stage process that scales precipitable water fluxes into the model and then updates model states using a Kalman filter. Model calibration and updating employ an empirical rainfall collocation methodology designed to compensate for the effects of measurement time difference, geolocation error, cloud parallax, and rainfall shear.
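
The state-updating step described above can be illustrated with a generic Kalman filter measurement update. The sketch below is not the LMODEL code; it treats a single conceptual cloud-water state updated against one microwave-derived observation, with all numerical values chosen arbitrarily for illustration.

```python
import numpy as np

def kalman_update(x_prior, P_prior, z, H, R):
    """Generic scalar Kalman filter measurement update.

    x_prior : prior state estimate (e.g., a conceptual cloud-water store)
    P_prior : prior error variance
    z       : observation (e.g., an MW-derived rain rate)
    H       : observation operator mapping state to observation space
    R       : observation error variance
    """
    K = P_prior * H / (H * P_prior * H + R)   # Kalman gain
    x_post = x_prior + K * (z - H * x_prior)  # updated state
    P_post = (1.0 - K * H) * P_prior          # updated error variance
    return x_post, P_post

# Illustrative values only (not from the paper)
x, P = 2.0, 0.5          # prior state and variance
z, H, R = 3.1, 1.2, 0.8  # observation, observation operator, observation error variance
x_new, P_new = kalman_update(x, P, z, H, R)
print(x_new, P_new)
```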

Full access
Sepideh Sarachi, Kuo-lin Hsu, and Soroosh Sorooshian

Abstract

Earth-observing satellites provide a method to measure precipitation from space with good spatial and temporal coverage, but these estimates carry a high degree of uncertainty. Understanding and quantifying the uncertainty of the satellite estimates can be very beneficial when using these precipitation products in hydrological applications. In this study, the generalized normal distribution (GND) model is used to model the uncertainty of the Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN) precipitation product. The stage IV Multisensor Precipitation Estimator (a radar-based product) was used as the reference measurement. The distribution parameters of the GND model are further extended across various rainfall rates and spatial and temporal resolutions. The GND model is calibrated for an area of 5° × 5° over the southeastern United States for both summer and winter seasons from 2004 to 2009. The GND model is used to represent the joint probability distribution of satellite (PERSIANN) and radar (stage IV) rainfall. The method is further investigated for the period of 2006–08 over the Illinois River watershed south of Siloam Springs, Arkansas. Results show that, using the proposed method, the estimation of the precipitation is improved in terms of percent bias and root-mean-square error.
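
As a rough illustration of the uncertainty-modeling idea, the sketch below fits a generalized normal distribution to satellite-minus-radar rainfall differences using SciPy's gennorm. It is a simplified stand-in for the study's conditional GND model: the arrays are synthetic, and the actual model extends the distribution parameters across rain rates and space-time resolutions.

```python
import numpy as np
from scipy.stats import gennorm

# Hypothetical collocated rainfall samples (mm/h) at one space-time resolution;
# in the study these would be PERSIANN and stage IV values over the 5deg x 5deg domain.
rng = np.random.default_rng(0)
stage4 = rng.gamma(shape=2.0, scale=1.5, size=5000)    # "reference" radar rainfall
persiann = stage4 + rng.normal(0.0, 0.8, size=5000)    # satellite estimate with error

# Fit a generalized normal distribution to the satellite-minus-radar error
errors = persiann - stage4
beta, loc, scale = gennorm.fit(errors)
print(f"GND shape={beta:.2f}, location={loc:.2f}, scale={scale:.2f}")

# A simple illustrative adjustment: shift the satellite estimates by the fitted error location
persiann_adjusted = persiann - loc
```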

Full access
Mohammed Ombadi, Phu Nguyen, Soroosh Sorooshian, and Kuo-lin Hsu

Abstract

The Nile River basin is one of the global hotspots vulnerable to climate change impacts because of a fast-growing population and geopolitical tensions. Previous studies demonstrated that general circulation models (GCMs) frequently show disagreement in the sign of change in annual precipitation projections. Here, we first evaluate the performance of 20 GCMs from phase six of the Coupled Model Intercomparison Project (CMIP6) benchmarked against a high-spatial-resolution precipitation dataset dating back to 1983 from Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Climate Data Record (PERSIANN-CDR). Next, a Bayesian model averaging (BMA) approach is adopted to derive probability distributions of precipitation projections in the Nile basin. Retrospective analysis reveals that most GCMs exhibit considerable (up to 64% of mean annual precipitation) and spatially heterogeneous bias in simulating annual precipitation. Moreover, it is shown that all GCMs underestimate interannual variability; thus, the ensemble range is underdispersive and is a poor indicator of uncertainty. The projected changes from the BMA model show that the value and sign of change vary considerably across the Nile basin. Specifically, it is found that projected changes in the two headwaters basins, namely, the Blue Nile and Upper White Nile, are 0.03% and −1.65%, respectively; both are statistically insignificant at α = 0.05. The uncertainty range estimated from the BMA model shows that the probability of a precipitation decrease is much higher in the Upper White Nile basin whereas projected change in the Blue Nile is highly uncertain both in magnitude and sign of change.
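
A minimal sketch of the Bayesian model averaging idea follows: EM-estimated weights for a small synthetic "ensemble" of annual precipitation series benchmarked against a reference record. It assumes Gaussian kernels with a common variance and uses synthetic data; it is an illustration of the general approach, not the configuration used in the paper.

```python
import numpy as np

def bma_weights(obs, sims, n_iter=200):
    """Minimal EM estimate of BMA weights and a common error variance.

    obs  : (T,)  reference annual precipitation (e.g., a PERSIANN-CDR record)
    sims : (K,T) bias-corrected model historical simulations
    Returns weights (K,) and the common variance (scalar).
    """
    K, T = sims.shape
    w = np.full(K, 1.0 / K)
    var = np.var(obs - sims.mean(axis=0))
    for _ in range(n_iter):
        # E-step: responsibility of each model for each year
        dens = np.exp(-0.5 * (obs - sims) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and the shared variance
        w = z.mean(axis=1)
        var = np.sum(z * (obs - sims) ** 2) / T
    return w, var

# Hypothetical example with 5 "GCMs" and 30 years of annual precipitation (mm)
rng = np.random.default_rng(1)
obs = 800 + 80 * rng.standard_normal(30)
sims = obs + rng.normal(0, [[40], [60], [90], [120], [150]], size=(5, 30))
w, var = bma_weights(obs, sims)
print(np.round(w, 3), np.sqrt(var))
```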

Restricted access
Ali Behrangi, Kuo-lin Hsu, Bisher Imam, Soroosh Sorooshian, and Robert J. Kuligowski

Abstract

Data from geosynchronous Earth-orbiting (GEO) satellites equipped with visible (VIS) and infrared (IR) scanners are commonly used in rain retrieval algorithms. These algorithms benefit from the high spatial and temporal resolution of GEO observations, either in stand-alone mode or in combination with higher-quality but less frequent microwave observations from low Earth-orbiting (LEO) satellites. In this paper, a neural network–based framework is presented to evaluate the utility of multispectral information in improving rain/no-rain (R/NR) detection. The algorithm uses the powerful classification features of the self-organizing feature map (SOFM), along with probability matching techniques to map single- or multispectral input space into R/NR maps. The framework was tested and validated using the 31 possible combinations of the five Geostationary Operational Environmental Satellite 12 (GOES-12) channels. An algorithm training and validation study was conducted over the conterminous United States during June–August 2006. The results indicate that during daytime, the visible channel (0.65 μm) can yield significant improvements in R/NR detection capabilities, especially when combined with any of the other four GOES-12 channels. Similarly, for nighttime detection the combination of two IR channels—particularly channels 3 (6.5 μm) and 4 (10.7 μm)—resulted in significant performance gain over any single IR channel. In both cases, however, using more than two channels resulted only in marginal improvements over two-channel combinations. Detailed examination of event-based images indicates that the proposed algorithm is capable of extracting information useful for screening out no-rain pixels associated with cold, thin clouds and for identifying rain areas under warm but rainy clouds. Both cases have been problematic for IR-only algorithms.
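
The classification idea can be sketched with a small from-scratch self-organizing feature map: cluster multispectral pixel vectors onto a node grid, then assign each node a rain probability from collocated radar labels. This is a simplified stand-in for the paper's SOFM plus probability-matching procedure, with synthetic data and arbitrary parameters.

```python
import numpy as np

def train_sofm(X, grid=(10, 10), n_iter=3000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing feature map on feature vectors X (N, D)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows * cols, X.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))            # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                         # pull the neighborhood toward x
    return W

def node_rain_probability(W, X, is_raining):
    """Fraction of collocated radar 'rain' samples mapped to each SOFM node."""
    bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    p = np.zeros(len(W))
    for k in range(len(W)):
        hits = bmus == k
        p[k] = is_raining[hits].mean() if hits.any() else 0.0
    return p

# Hypothetical training data: standardized brightness temperatures from two channels
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 2))
is_raining = X[:, 1] < -0.5          # synthetic radar R/NR labels
W = train_sofm(X)
p_rain = node_rain_probability(W, X, is_raining)
# A new pixel is classified R if its best-matching node's rain probability exceeds a threshold.
```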

Full access
Negin Hayatbini, Kuo-lin Hsu, Soroosh Sorooshian, Yunji Zhang, and Fuqing Zhang

Abstract

The effective identification of clouds and monitoring of their evolution are important for more accurate quantitative precipitation estimation and forecasting. In this study, a new gradient-based cloud-image segmentation algorithm is developed using image processing techniques. The method integrates morphological image gradient magnitudes to separate cloud systems and delineate cloud-patch boundaries. A varying-scale kernel is implemented to reduce the sensitivity of the segmentation to noise and to capture objects whose edges vary in fineness in remote sensing images. The proposed method is flexible and extendable from single-band to multispectral imagery. Case studies were carried out to validate the algorithm by applying it to synthetic radiances for channels of the Geostationary Operational Environmental Satellite 16 (GOES-16) simulated by a high-resolution weather prediction model. The proposed method compares favorably with the existing cloud-patch-based segmentation technique implemented in the Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS) rainfall retrieval algorithm. Evaluation of event-based images indicates that, compared with the conventional segmentation technique used in PERSIANN-CCS, the proposed algorithm has the potential to improve rain detection and estimation skill, identifying cloud regions with an accuracy of up to 98%.
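
A minimal sketch of gradient-based cloud segmentation follows, using SciPy's morphological gradient and a scikit-image watershed on a synthetic infrared image. The kernel sizes, temperature thresholds, and two-scale combination are arbitrary illustrations, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Hypothetical IR brightness-temperature image (K); colder pixels = higher cloud tops
rng = np.random.default_rng(3)
tb = 290 - 40 * np.exp(-((np.indices((200, 200)) - 100) ** 2).sum(axis=0) / 2000.0)
tb += rng.normal(0, 0.5, tb.shape)

# Morphological gradient (dilation minus erosion) highlights cloud-edge transitions;
# a larger structuring element plays the role of a coarser-scale kernel.
grad_fine = ndi.morphological_gradient(tb, size=(3, 3))
grad_coarse = ndi.morphological_gradient(tb, size=(7, 7))
gradient = 0.5 * (grad_fine + grad_coarse)     # crude multiscale combination

# Markers: interiors of cold cloud cores (thresholds chosen arbitrarily here)
cloud_mask = tb < 270.0
cores, _ = ndi.label(tb < 258.0)

# Watershed on the gradient magnitude separates touching cloud patches
labels = watershed(gradient, markers=cores, mask=cloud_mask)
print(labels.max(), "cloud patches identified")
```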

Full access
Gab Abramowitz, Hoshin Gupta, Andy Pitman, Yingping Wang, Ray Leuning, Helen Cleugh, and Kuo-lin Hsu

Abstract

Data assimilation in the field of predictive land surface modeling is generally limited to using observational data to estimate optimal model states or restrict model parameter ranges. To date, very little work has attempted to systematically define and quantify error resulting from a model's inherent inability to simulate the natural system. This paper introduces a data assimilation technique that moves toward this goal by accounting for those deficiencies in the model itself that lead to systematic errors in model output. This is done using a supervised artificial neural network to “learn” and simulate systematic trends in the model output error. These simulations in turn are used to correct the model's output at each time step. The technique is applied in two case studies, using latent heat flux at one site and net ecosystem exchange (NEE) of carbon dioxide at another. Root-mean-square error (rmse) in latent heat flux per time step was reduced from 27.5 to 18.6 W m−2 (32%) and monthly from 9.91 to 3.08 W m−2 (68%). For NEE, rmse per time step was reduced from 3.71 to 2.70 μmol m−2 s−1 (27%) and annually from 2.24 to 0.11 μmol m−2 s−1 (95%). In both cases the correction provided significantly greater gains than single-criterion parameter estimation on the same flux.
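
The error-learning step can be sketched with a small scikit-learn neural network trained to predict the model-minus-observation error from the forcing, then applied as a per-time-step correction. The data below are synthetic and the network configuration is arbitrary; this illustrates the general approach rather than the study's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: land surface model output and flux-tower observations of latent heat
rng = np.random.default_rng(4)
forcing = rng.standard_normal((4000, 3))              # e.g., radiation, temperature, humidity
observed = 120 * forcing[:, 0] + 20 * np.sin(forcing[:, 1]) + rng.normal(0, 10, 4000)
modelled = 100 * forcing[:, 0]                        # imperfect model misses some physics

# Train a small ANN to learn the systematic model error as a function of the forcing
error = observed - modelled
train, test = slice(0, 3000), slice(3000, None)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(forcing[train], error[train])

# Correct the model output at each time step with the predicted error
corrected = modelled[test] + ann.predict(forcing[test])
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(modelled[test], observed[test]), "->", rmse(corrected, observed[test]))
```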

Full access
Yang Hong, David Gochis, Jiang-tao Cheng, Kuo-lin Hsu, and Soroosh Sorooshian

Abstract

Robust validation of the space–time structure of remotely sensed precipitation estimates is critical to improving their quality and confident application in water cycle–related research. In this work, the performance of the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) precipitation product is evaluated against warm season precipitation observations from the North American Monsoon Experiment (NAME) Event Rain Gauge Network (NERN) in the complex terrain region of northwestern Mexico. Analyses of hourly and daily precipitation estimates show that the PERSIANN-CCS captures the active and break periods well in the early and mature phases of the monsoon season. While the PERSIANN-CCS generally captures the spatial distribution and timing of diurnal convective rainfall, elevation-dependent biases exist, which are characterized by an underestimate in the occurrence of light precipitation at high elevations and an overestimate in the occurrence of precipitation at low elevations. The elevation-dependent biases contribute to a 1–2-h phase shift of the diurnal cycle of precipitation at various elevation bands. For reasons yet to be determined, the PERSIANN-CCS significantly underestimated a few active periods of precipitation during the late or “senescent” phase of the monsoon. Despite these shortcomings, the continuous domain and relatively high spatial resolution of PERSIANN-CCS quantitative precipitation estimates (QPEs) provide useful characterization of precipitation space–time structures in the North American monsoon region of northwestern Mexico, which should prove valuable for hydrological applications.
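
A minimal sketch of the kind of diurnal-cycle evaluation described here: group collocated satellite and gauge rainfall by local hour and elevation band, then compare the mean cycles and their peak hours. The pandas example below uses synthetic records; the column names and elevation bands are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly records at gauge sites: local hour, site elevation (m),
# gauge rainfall, and collocated satellite rainfall (mm/h)
rng = np.random.default_rng(5)
n = 20000
hour = rng.integers(0, 24, n)
elev = rng.uniform(0, 3000, n)
gauge = np.where(rng.random(n) < 0.10, rng.gamma(2.0, 1.0, n), 0.0)
satellite = np.where(rng.random(n) < 0.12, rng.gamma(2.0, 0.9, n), 0.0)

df = pd.DataFrame({"hour": hour,
                   "elev_band": pd.cut(elev, [0, 1000, 2000, 3000]),
                   "gauge": gauge, "satellite": satellite})

# Mean diurnal cycle per elevation band; comparing the two columns exposes
# elevation-dependent biases and any phase shift in the diurnal peak.
diurnal = df.groupby(["elev_band", "hour"], observed=True)[["gauge", "satellite"]].mean()
peak_hours = diurnal.groupby(level="elev_band", observed=True)["satellite"].idxmax()
print(diurnal.head(), peak_hours, sep="\n")
```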

Full access
Koray K. Yilmaz, Terri S. Hogue, Kuo-lin Hsu, Soroosh Sorooshian, Hoshin V. Gupta, and Thorsten Wagener

Abstract

This study compares mean areal precipitation (MAP) estimates derived from three sources: an operational rain gauge network (MAPG), a radar/gauge multisensor product (MAPX), and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) satellite-based system (MAPS) for the time period from March 2000 to November 2003. The study area includes seven operational basins of varying size and location in the southeastern United States. The analysis indicates that agreements between the datasets vary considerably from basin to basin and also temporally within the basins. The analysis also includes evaluation of MAPS in comparison with MAPG for use in flow forecasting with a lumped hydrologic model [Sacramento Soil Moisture Accounting Model (SAC-SMA)]. The latter evaluation investigates two different parameter sets, the first obtained using manual calibration on historical MAPG, and the second obtained using automatic calibration on both MAPS and MAPG, but over a shorter time period (23 months). Results indicate that the overall performance of the model simulations using MAPS depends on both the bias in the precipitation estimates and the size of the basins, with poorer performance in basins of smaller size (large bias between MAPG and MAPS) and better performance in larger basins (less bias between MAPG and MAPS). When using MAPS, calibration of the parameters significantly improved the model performance.
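
The MAP comparison can be sketched as averaging a gridded precipitation product over a basin mask and computing its bias against a gauge-based estimate. The example below uses synthetic grids and an arbitrary rectangular basin mask; it is not the operational MAP computation.

```python
import numpy as np

def mean_areal_precip(grid, basin_mask):
    """Average a gridded precipitation field (time, y, x) over a basin mask (y, x)."""
    return grid[:, basin_mask].mean(axis=1)

# Hypothetical 6-hourly precipitation fields on a small grid, plus a basin mask
rng = np.random.default_rng(6)
satellite_grid = rng.gamma(2.0, 1.2, size=(1000, 40, 40))   # satellite product (MAPS source)
gauge_grid = rng.gamma(2.0, 1.0, size=(1000, 40, 40))       # gauge-only analysis (MAPG source)
basin_mask = np.zeros((40, 40), dtype=bool)
basin_mask[10:25, 5:30] = True

maps = mean_areal_precip(satellite_grid, basin_mask)
mapg = mean_areal_precip(gauge_grid, basin_mask)

# Percent bias of satellite MAP relative to gauge MAP over the analysis period
pbias = 100.0 * (maps.sum() - mapg.sum()) / mapg.sum()
print(f"percent bias = {pbias:.1f}%")
```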

Full access
Hamed Ashouri, Phu Nguyen, Andrea Thorstensen, Kuo-lin Hsu, Soroosh Sorooshian, and Dan Braithwaite

Abstract

This study aims to investigate the performance of Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Climate Data Record (PERSIANN-CDR) in a rainfall–runoff modeling application over the past three decades. PERSIANN-CDR provides precipitation data at daily temporal and 0.25° spatial resolution from 1983 to present for the 60°S–60°N latitude band and 0°–360° longitude. The study is conducted in two phases over three test basins from the Distributed Hydrologic Model Intercomparison Project, phase 2 (DMIP2). In phase 1, a more recent period of time (2003–10) when other high-resolution satellite-based precipitation products are available is chosen. Precipitation evaluation analysis, conducted against stage IV gauge-adjusted radar data, shows that PERSIANN-CDR and the TRMM Multisatellite Precipitation Analysis (TMPA) perform similarly, with a higher correlation coefficient for TMPA (~0.8 vs ~0.75 for PERSIANN-CDR) and almost the same root-mean-square deviation (~6) for both products. TMPA and PERSIANN-CDR outperform PERSIANN, mainly because, unlike PERSIANN, TMPA and PERSIANN-CDR are gauge-adjusted precipitation products. The National Weather Service Office of Hydrologic Development Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) is then forced with PERSIANN, PERSIANN-CDR, TMPA, and stage IV data. Quantitative analysis using five different statistical and model efficiency measures against USGS streamflow observations shows that, in general, in all three DMIP2 basins the simulated hydrographs forced with PERSIANN-CDR and TMPA agree closely. Given the promising results in the first phase, the simulation process is extended back to 1983, when only PERSIANN-CDR rainfall estimates are available. The results show that PERSIANN-CDR-derived streamflow simulations are comparable to USGS observations, with correlation coefficients of ~0.67–0.73, relatively low biases (~5%–12%), and a high index of agreement criterion (~0.68–0.83) between PERSIANN-CDR-simulated daily streamflow and USGS daily observations. The results demonstrate the capability of PERSIANN-CDR in hydrological rainfall–runoff modeling applications, especially for long-term streamflow simulations over the past three decades.
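
For reference, here is a minimal sketch of the kind of streamflow skill measures mentioned above (correlation, percent bias, Nash-Sutcliffe efficiency, and Willmott's index of agreement) applied to a simulated-versus-observed daily series. The data are synthetic, and the metric set is illustrative rather than the exact five measures used in the study.

```python
import numpy as np

def correlation(sim, obs):
    return np.corrcoef(sim, obs)[0, 1]

def percent_bias(sim, obs):
    return 100.0 * (sim - obs).sum() / obs.sum()

def nash_sutcliffe(sim, obs):
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def index_of_agreement(sim, obs):
    num = ((obs - sim) ** 2).sum()
    den = ((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    return 1.0 - num / den

# Hypothetical daily streamflow series (m^3/s): observations vs a model forced
# with a satellite precipitation product
rng = np.random.default_rng(7)
obs = 50 + 20 * np.abs(rng.standard_normal(3650))
sim = obs * rng.normal(1.05, 0.15, 3650)

for name, fn in [("r", correlation), ("PBIAS (%)", percent_bias),
                 ("NSE", nash_sutcliffe), ("d", index_of_agreement)]:
    print(f"{name}: {fn(sim, obs):.2f}")
```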

Full access