Abstract
Despite the advantage of global coverage at high spatiotemporal resolutions, satellite remotely sensed precipitation estimates still suffer from insufficient accuracy and need to be improved for weather, climate, and hydrologic applications. This paper presents a deep neural network (DNN) framework that improves the accuracy of satellite precipitation products, focusing on reducing bias and false alarms. State-of-the-art deep learning techniques specialize in extracting structural information from massive amounts of image data, a capability well suited to the task of retrieving precipitation information from satellite cloud images. The stacked denoising autoencoder (SDAE), a widely used DNN, is applied to perform bias correction of satellite precipitation products. A case study is conducted on the Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) with a spatial resolution of 0.08° × 0.08° over the central United States, where the SDAE is used to process satellite cloud imagery and extract information over a window of 15 × 15 pixels. In the study, the summer of 2012 (June–August) and the winter of 2012/13 (December–February) serve as the training periods, while the same seasons of the following year (summer of 2013 and winter of 2013/14) are used for validation. To demonstrate the effectiveness of the methodology outside the study area, three additional regions are selected for further validation. Significant improvements are achieved in both rain/no-rain (R/NR) detection and precipitation rate quantification: the model corrects 33% and 43% of the false alarm pixels and reduces the bias in precipitation rates by 98% and 78% over the validation periods of the summer and winter seasons, respectively.
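To illustrate the kind of architecture the abstract refers to, the following is a minimal sketch of a stacked denoising autoencoder operating on flattened 15 × 15 pixel cloud-image patches: each layer is greedily pretrained to reconstruct its clean input from a corrupted copy, and the stacked encoders are then fine-tuned with a supervised head. The layer widths, corruption rate, optimizer settings, and the single regression output are illustrative assumptions and not the configuration used in the paper.

```python
# Sketch of an SDAE for 15x15-pixel cloud-image patches.
# Layer widths, noise level, and the supervised head are assumptions,
# not the authors' PERSIANN-CCS bias-correction configuration.
import torch
import torch.nn as nn

PATCH = 15 * 15           # flattened 15x15 window of cloud-image values
HIDDEN = [128, 64]        # assumed encoder widths

class DenoisingAE(nn.Module):
    """One layer trained to reconstruct its clean input from a corrupted copy."""
    def __init__(self, n_in, n_hidden, noise=0.2):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def corrupt(self, x):
        # Masking noise: randomly zero a fraction of the inputs.
        mask = (torch.rand_like(x) > self.noise).float()
        return x * mask

    def forward(self, x):
        return self.decoder(self.encoder(self.corrupt(x)))

def pretrain_layer(ae, data, epochs=10, lr=1e-3):
    """Greedy layer-wise pretraining with a reconstruction loss."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(ae(data), data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae

class SDAE(nn.Module):
    """Stacked pretrained encoders plus a regression head for the corrected rate."""
    def __init__(self, layers):
        super().__init__()
        self.encoders = nn.Sequential(*[ae.encoder for ae in layers])
        self.head = nn.Linear(HIDDEN[-1], 1)   # assumed single precipitation output

    def forward(self, x):
        return self.head(self.encoders(x))

# Toy end-to-end run on random data, standing in for normalized patch inputs.
x = torch.rand(256, PATCH)
layers, feats = [], x
for n_in, n_hid in zip([PATCH] + HIDDEN[:-1], HIDDEN):
    ae = pretrain_layer(DenoisingAE(n_in, n_hid), feats)
    feats = ae.encoder(feats).detach()
    layers.append(ae)

model = SDAE(layers)
target = torch.rand(256, 1)                    # stand-in for reference rain rates
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                            # supervised fine-tuning of the stack
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the same stacked representation could feed either a classification head for R/NR detection or a regression head for rate quantification; the choice shown here is the latter, purely for illustration.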
This article is included in the Seventh International Precipitation Working Group (IPWG) Workshop special collection.