• Arkin, P. A., and B. N. Meisner, 1987: The relationship between large-scale convective rainfall and cold cloud over the Western Hemisphere during 1982–84. Mon. Wea. Rev., 115, 51–74, doi:10.1175/1520-0493(1987)115<0051:TRBLSC>2.0.CO;2.
• Ba, M. B., and A. Gruber, 2001: GOES Multispectral Rainfall Algorithm (GMSRA). J. Appl. Meteor., 40, 1500–1514, doi:10.1175/1520-0450(2001)040<1500:GMRAG>2.0.CO;2.
• Behrangi, A., K.-L. Hsu, B. Imam, S. Sorooshian, G. J. Huffman, and R. J. Kuligowski, 2009a: PERSIANN-MSA: A precipitation estimation method from satellite-based multispectral analysis. J. Hydrometeor., 10, 1414–1429, doi:10.1175/2009JHM1139.1.
• Behrangi, A., K.-L. Hsu, B. Imam, S. Sorooshian, and R. J. Kuligowski, 2009b: Evaluating the utility of multispectral information in delineating the areal extent of precipitation. J. Hydrometeor., 10, 684–700, doi:10.1175/2009JHM1077.1.
• Behrangi, A., K. Hsu, B. Imam, and S. Sorooshian, 2010: Daytime precipitation estimation using bispectral cloud classification system. J. Appl. Meteor. Climatol., 49, 1015–1031, doi:10.1175/2009JAMC2291.1.
• Bengio, Y., 2009: Learning deep architectures for AI. Found. Trends Mach. Learn., 2, 1–127, doi:10.1561/2200000006.
• Bourlard, H., and Y. Kamp, 1988: Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern., 59, 291–294, doi:10.1007/BF00332918.
• Capacci, D., and B. J. Conway, 2005: Delineation of precipitation areas from MODIS visible and infrared imagery with artificial neural networks. Meteor. Appl., 12, 291–305, doi:10.1017/S1350482705001787.
• Glorot, X., A. Bordes, and Y. Bengio, 2011: Domain adaptation for large-scale sentiment classification: A deep learning approach. Proceedings of the 28th International Conference on Machine Learning, L. Getoor and T. Scheffer, Eds., Omnipress, 513–520.
• Hinton, G. E., and R. S. Zemel, 1993: Autoencoders, minimum description length and Helmholtz free energy. Advances in Neural Information Processing Systems 6, J. D. Cowan, G. Tesauro, and J. Alspector, Eds., Morgan Kaufmann, 3–10.
• Hinton, G. E., S. Osindero, and Y. W. Teh, 2006: A fast learning algorithm for deep belief nets. Neural Comput., 18, 1527–1554, doi:10.1162/neco.2006.18.7.1527.
• Hong, Y., K. L. Hsu, S. Sorooshian, and X. G. Gao, 2004: Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system. J. Appl. Meteor., 43, 1834–1852, doi:10.1175/JAM2173.1.
• Hsu, K.-L., X. G. Gao, S. Sorooshian, and H. V. Gupta, 1997: Precipitation estimation from remotely sensed information using artificial neural networks. J. Appl. Meteor., 36, 1176–1190, doi:10.1175/1520-0450(1997)036<1176:PEFRSI>2.0.CO;2.
• Hsu, K.-L., H. V. Gupta, X. Gao, and S. Sorooshian, 1999: Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation. Water Resour. Res., 35, 1605–1618, doi:10.1029/1999WR900032.
• Huffman, G. J., and Coauthors, 2007: The TRMM Multisatellite Precipitation Analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales. J. Hydrometeor., 8, 38–55, doi:10.1175/JHM560.1.
• Joyce, R. J., J. E. Janowiak, P. A. Arkin, and P. P. Xie, 2004: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. J. Hydrometeor., 5, 487–503, doi:10.1175/1525-7541(2004)005<0487:CAMTPG>2.0.CO;2.
• Kidd, C., D. R. Kniveton, M. C. Todd, and T. J. Bellerby, 2003: Satellite rainfall estimation using combined passive microwave and infrared algorithms. J. Hydrometeor., 4, 1088–1104, doi:10.1175/1525-7541(2003)004<1088:SREUCP>2.0.CO;2.
• Kuligowski, R. J., 2002: A self-calibrating real-time GOES rainfall algorithm for short-term rainfall estimates. J. Hydrometeor., 3, 112–130, doi:10.1175/1525-7541(2002)003<0112:ASCRTG>2.0.CO;2.
• Kummerow, C., and L. Giglio, 1995: A method for combining passive microwave and infrared rainfall observations. J. Atmos. Oceanic Technol., 12, 33–45, doi:10.1175/1520-0426(1995)012<0033:AMFCPM>2.0.CO;2.
• Kurino, T., 1997: A satellite infrared technique for estimating “deep/shallow” precipitation. Adv. Space Res., 19, 511–514, doi:10.1016/S0273-1177(97)00063-X.
• Martin, D. W., R. A. Kohrs, F. R. Mosher, C. M. Medaglia, and C. Adamo, 2008: Over-ocean validation of the global convective diagnostic. J. Appl. Meteor. Climatol., 47, 525–543, doi:10.1175/2007JAMC1525.1.
• Marzano, F. S., M. Palmacci, D. Cimini, G. Giuliani, and F. J. Turk, 2004: Multivariate statistical integration of satellite infrared and microwave radiometric measurements for rainfall retrieval at the geostationary scale. IEEE Trans. Geosci. Remote Sens., 42, 1018–1032, doi:10.1109/TGRS.2003.820312.
• Nasrollahi, N., K. L. Hsu, and S. Sorooshian, 2013: An artificial neural network model to reduce false alarms in satellite precipitation products using MODIS and CloudSat observations. J. Hydrometeor., 14, 1872–1883, doi:10.1175/JHM-D-12-0172.1.
• Netzer, Y., T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, 2011: Reading digits in natural images with unsupervised feature learning. 2011 NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, Neural Information Processing Systems Foundation, 4 pp.
• Nguyen, P., S. Sellars, A. Thorstensen, Y. Tao, H. Ashouri, D. Braithwaite, K. Hsu, and S. Sorooshian, 2014: Satellites track precipitation of Super Typhoon Haiyan. Eos, Trans. Amer. Geophys. Union, 95, 133–135, doi:10.1002/2014EO160002.
• Rumelhart, D. E., G. E. Hinton, and R. J. Williams, 1986: Learning representations by back-propagating errors. Nature, 323, 533–536, doi:10.1038/323533a0.
• Sorooshian, S., and Coauthors, 2011: Advanced concepts on remote sensing of precipitation at multiple scales. Bull. Amer. Meteor. Soc., 92, 1353–1357, doi:10.1175/2011BAMS3158.1.
• Tao, Y., X. Gao, K. Hsu, S. Sorooshian, and A. Ihler, 2016a: A deep neural network modeling framework to reduce bias in satellite precipitation products. J. Hydrometeor., 17, 931–945, doi:10.1175/JHM-D-15-0075.1.
• Tao, Y., X. Gao, A. Ihler, K. Hsu, and S. Sorooshian, 2016b: Deep neural networks for precipitation estimation from remotely sensed information. 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, IEEE, 1349–1355, doi:10.1109/CEC.2016.7743945.
• Tjemkes, S. A., L. van de Berg, and J. Schmetz, 1997: Warm water vapour pixels over high clouds as observed by METEOSAT. Contrib. Atmos. Phys., 70, 15–22.
• Vincent, P., H. Larochelle, Y. Bengio, and P.-A. Manzagol, 2008: Extracting and composing robust features with denoising autoencoders. Proc. 25th Int. Conf. on Machine Learning, Helsinki, Finland, ACM, 1096–1103, doi:10.1145/1390156.1390294.
• Vincent, P., H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, 2010: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11, 3371–3408.
• Weng, F., L. Zhao, R. R. Ferraro, G. Poe, X. Li, and N. C. Grody, 2003: Advanced microwave sounding unit cloud and precipitation algorithms. Radio Sci., 38, 8068, doi:10.1029/2002RS002679.
• Xie, J., L. Xu, and E. Chen, 2012: Image denoising and inpainting with deep neural networks. Advances in Neural Information Processing Systems 25, P. L. Bartlett et al., Eds., Neural Information Processing Systems Foundation, 350–358. [Available online at https://papers.nips.cc/paper/4686-image-denoising-and-inpainting-with-deep-neural-networks.pdf.]
• Zhang, J., and Coauthors, 2011: National Mosaic and Multi-Sensor QPE (NMQ) system: Description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 1321–1338, doi:10.1175/2011BAMS-D-11-00047.1.
Fig. 5. (a)–(c) POD, (d)–(f) FAR, and (g)–(i) CSI of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States for summer 2013 (June–August).

Fig. 6. (a)–(c) POD, (d)–(f) FAR, and (g)–(i) CSI of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States for winter 2013/14 (December 2013–February 2014). White indicates locations where fewer than 50 precipitation pixels are observed within the corresponding period.

Fig. 7. Visualization of WV and IR imageries and precipitation identification performance of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States at 1900 UTC 26 Jul 2013: (a),(b) snapshots of the WV and IR channels; (c) radar R/NR observation; and (d)–(f) hits, false alarms, and misses maps. Green, red, and blue indicate hits, false alarms, and misses, respectively.

Fig. 8. As in Fig. 7, but for 2100 UTC 21 Dec 2013.

Precipitation Identification with Bispectral Satellite Information Using Deep Learning Approaches

  • 1 Center for Hydrometeorology and Remote Sensing, and Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California
  • | 2 Department of Computer Science, University of California, Irvine, Irvine, California
  • | 3 Center for Hydrometeorology and Remote Sensing, and Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California

Abstract

In the development of a satellite-based precipitation product, two important aspects are sufficient precipitation information in the satellite input data and proper methodologies to extract such information and connect it to precipitation estimates. In this study, the effectiveness of state-of-the-art deep learning (DL) approaches in extracting useful features from bispectral satellite information, the infrared (IR) and water vapor (WV) channels, and producing rain/no-rain (R/NR) detection is explored. To verify the methodologies, two models are designed and evaluated: the first model, referred to as the DL-IR only method, applies deep learning approaches to the IR data only; the second model, referred to as the DL-IR+WV method, incorporates WV data to further improve precipitation identification performance. The radar stage IV data serve as the ground reference. The operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS), serves as a baseline model with which to compare performance. The experiments show significant improvement for both models in R/NR detection. The overall performance gains in the critical success index (CSI) over the verification periods are 21.60% for the DL-IR only model and 43.66% for the DL-IR+WV model compared to PERSIANN-CCS. In particular, the performance gains in CSI reach 46.51% and 94.57%, respectively, for the winter season. Moreover, specific case studies show that the deep learning techniques and the WV channel information effectively help recover a large number of missed precipitation pixels under warm clouds while reducing false alarms under cold clouds.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author e-mail: Yumeng Tao, yumengt@uci.edu


1. Introduction

As a key input to surface hydrological processes, reliable large-scale precipitation data are required for weather forecasting, water resources management, and climate analysis. Satellite-based precipitation estimation products now provide near-real-time global precipitation data at fine spatiotemporal resolutions, which is extremely valuable over remote regions. Compared with ground-based observations, these products also have the advantage of monitoring the complete evolution of storms (Nguyen et al. 2014).

To retrieve precipitation values, the most commonly used satellite information includes infrared (IR) data from geosynchronous Earth orbiting (GEO) satellites and passive microwave (PMW) data from low-Earth-orbiting satellites (Hsu et al. 1997; Weng et al. 2003). PMW data have the advantage of being directly related to the actual hydrometeor content, while IR data are limited to cloud-top information (Behrangi et al. 2009b; Kummerow and Giglio 1995). However, one main drawback of PMW data is their low temporal resolution (Marzano et al. 2004). IR data, on the other hand, have high spatial and temporal resolutions and thus can monitor the complete evolution of a local precipitation event (Arkin and Meisner 1987; Behrangi et al. 2009a). Many satellite-based precipitation estimation products built on the general concept of combining IR and PMW data have been developed and made operational in the past few years (Hong et al. 2004; Hsu et al. 1997; Huffman et al. 2007; Joyce et al. 2004; Kidd et al. 2003; Kuligowski 2002). Most algorithms estimate precipitation from IR information and then postprocess it with PMW information. With their high spatiotemporal resolutions, GEO satellite observations continue to serve as a fundamental source for precipitation estimation (Behrangi et al. 2009a; Huffman et al. 2007).

Moreover, some studies have incorporated multiple channels of GEO satellite data because IR cloud-top brightness temperature data alone do not contain sufficient information for accurate precipitation retrieval (Ba and Gruber 2001; Behrangi et al. 2010, 2009a). One common choice is the visible (VIS) wavelength cloud albedo data because of their high quality in the daytime (Behrangi et al. 2010; Capacci and Conway 2005; Hsu et al. 1999); the obvious drawback of VIS data is that they are unavailable at night. The water vapor (WV) channel, on the other hand, has also proved quite effective for precipitation retrieval in several previous studies, especially when used in combination with IR data (Behrangi et al. 2009b; Martin et al. 2008; Tjemkes et al. 1997).

Another important component of a satellite-based precipitation product is the algorithm used to produce precipitation estimates from the input data. It is essential to have methods capable of extracting the useful information in the satellite imageries and linking it to precipitation estimates (Nasrollahi et al. 2013; Sorooshian et al. 2011). In recent years, deep learning techniques, also known as deep neural networks, have been developed and widely applied in machine learning and computer vision (Bengio 2009; Hinton et al. 2006; Vincent et al. 2010). Tao et al. (2016a,b) applied these methods to satellite-based precipitation estimation and demonstrated their effectiveness in extracting more useful features from raw IR imageries and further improving the accuracy of the estimates.

In this paper, we explore the application of the deep learning techniques to precipitation estimation with bispectral information (IR and WV channels). As a first step, we focus on rain/no-rain (R/NR) identification because accurate precipitation areal delineation is essential for a precipitation estimation product. Specifically, we address the following tasks: 1) designing a deep neural network that is capable of dealing with satellite imageries from multiple channels; 2) demonstrating the effectiveness of the methodology on precipitation identification by comparing its performance with an operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS); and 3) evaluating the value of adding data sources in addition to IR imageries.

The remainder of this paper is organized as follows. Section 2 briefly describes the study region and data used for this study. Section 3 explains the concepts and detailed process of the deep learning approaches that were applied. Then, the specific experimental design is provided in section 4. Section 5 presents results for the models developed in this study against results from operational products. Finally, the main conclusions and recommended future work are discussed in section 6.

2. Study region and data used

The data used in this study are IR (10.8 μm) and WV (6.7 μm) imageries from the Geostationary Operational Environmental Satellite (GOES). WV channel data are selected as supplementary input to the IR data because they have proved helpful for precipitation identification and estimation in conjunction with IR data (Ba and Gruber 2001; Behrangi et al. 2009a,b; Kurino 1997). More specifically, condensation of water vapor is a necessary condition for precipitation formation. By adding a satellite measurement related to precipitable water, the vertical integral of water vapor, the performance of precipitation estimation products is expected to improve.

Compared to VIS channel data, WV channel data have the advantage of being available at both day and night, which keeps the precipitation estimation consistent.

Specifically, both channels’ data are processed to an hourly scale in this study. The study period covers the summer (June–August) and winter (December–February) seasons of 2012–14 at 0.08° × 0.08° spatial resolution. The stage IV radar and gauge precipitation data from the National Centers for Environmental Prediction (NCEP; http://www.emc.ncep.noaa.gov/mmb/ylin/pcpanl/stage4/) at the same spatial and temporal resolution serve as ground observations. Currently, we include only two summer seasons and two winter seasons of data for model verification purposes. PERSIANN-CCS serves as the baseline model for performance comparisons; a description of PERSIANN-CCS can be found in Hong et al. (2004). We selected the central United States (30°–45°N, 90°–105°W) as the study region to avoid mountainous areas, where radar data are less precise. The average precipitation rates over the study region are 0.097 mm h−1 in summer and 0.060 mm h−1 in winter.

3. Methodology

With useful information present in the IR and WV channels, it is essential to use proper methodologies to extract that information sufficiently. This paper takes advantage of newly developed deep learning techniques, also known as deep neural networks, for their capability to automatically extract useful features from raw imageries. Successful use of a deep neural network depends largely on how well one can optimize the parameters (weights) of the neural nodes, that is, on the calibration of the neural network with given data samples. In general, the parameter values are established through supervised or unsupervised training; the former requires data samples of input–output pairs and an effective search program to calculate the parameter values, while the latter needs only input data samples and a self-organization program to set the parameters. Because of the large number of parameters in a deep neural network, a traditional supervised training process can be time consuming and may fail to optimize the parameters from a random initialization. The idea behind deep learning techniques is to start the training process with unsupervised automatic feature extraction (pretraining) to capture the information in the input data and then apply supervised machine learning methods to perform the classification (Bengio 2009; Vincent et al. 2008). A detailed review of deep learning methodologies and their applications can be found in Bengio (2009).

Specifically, we use a stacked denoising auto-encoder (SDAE), a widely used deep learning technique introduced by Vincent et al. (2008, 2010). SDAEs have been shown to construct high-level representations from image patches effectively (Glorot et al. 2011; Netzer et al. 2011; Vincent et al. 2010; Xie et al. 2012). Tao et al. (2016a,b) applied the method to satellite precipitation estimation and demonstrated its ability to extract effective precipitation information from IR imageries. When satellite image patches are used as inputs, the extracted features describe the cloud patches. The method involves two main steps: 1) unsupervised feature extraction pretraining and 2) supervised neural network fine-tuning.

The first step of the SDAE is the unsupervised feature extraction process. The network structure for this process is an auto-encoder (AE), as shown in Fig. 1. Calibrating an AE adjusts the two sets of weights W12 and W23 to convert the input values x into effective features (hidden nodes h) and then into the reconstructed estimates x̂ (Bourlard and Kamp 1988; Hinton and Zemel 1993). The specific transformations can be expressed as
h = s(W12 x + b1)
and
x̂ = s(W23 h + b2),
where s is the activation function, a nonlinear mapping, and b1 and b2 are intercepts. For classification problems, the sigmoid function is commonly used as the activation function.
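Under the notation above (weights W12 and W23, intercepts b1 and b2, sigmoid activation s), the two transformations can be sketched in NumPy. The weight scales and random input are placeholders for illustration; the sizes mirror this study (15 × 15 = 225 inputs, 1000 hidden nodes):

```python
import numpy as np

def sigmoid(z):
    """Logistic activation s(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_forward(x, W12, b1, W23, b2):
    """Forward pass of a single auto-encoder:
    h = s(W12 x + b1) encodes the input into hidden features;
    x_hat = s(W23 h + b2) reconstructs the input from them."""
    h = sigmoid(W12 @ x + b1)
    x_hat = sigmoid(W23 @ h + b2)
    return h, x_hat

# Placeholder weights and input (random, small scale for illustration).
rng = np.random.default_rng(0)
n_in, n_hid = 225, 1000
x = rng.random(n_in)
W12 = rng.normal(scale=0.01, size=(n_hid, n_in))
W23 = rng.normal(scale=0.01, size=(n_in, n_hid))
h, x_hat = autoencoder_forward(x, W12, np.zeros(n_hid), W23, np.zeros(n_in))
```

Calibration then adjusts W12, W23, b1, and b2 so that the reconstruction x_hat matches x as closely as possible.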
Fig. 1.

Structure of an AE, which reconstructs the input information by learning the internal representations (hidden layer).

Citation: Journal of Hydrometeorology 18, 5; 10.1175/JHM-D-16-0176.1

To extract sparse and robust features, we need to avoid overfitting when calibrating an AE. A simple yet effective design to overcome the overfitting problem is the denoising auto-encoder (DAE), introduced by Vincent et al. (2008). The idea is to make the neural network extract features from inputs corrupted with extra noise; the process can be viewed as “denoising” the noisy images back to a clean version. In this study, we added the noise by randomly forcing a fraction of the input values to zero, called “masking noise.” When training a multilayer neural network (Fig. 2), we apply a DAE layer by layer, starting from the input layer, to calibrate the weights between each layer and the next hidden layer.
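A minimal sketch of the masking-noise corruption described above. The 25% masking fraction here is an illustrative assumption; the paper does not report the fraction used:

```python
import numpy as np

def add_masking_noise(x, fraction, rng):
    """Corrupt an input vector by forcing a random fraction of its
    entries to zero ("masking noise"). The DAE is then trained to
    reconstruct the clean x from this corrupted copy."""
    x_noisy = x.copy()
    n_mask = int(round(fraction * x.size))
    idx = rng.choice(x.size, size=n_mask, replace=False)
    x_noisy[idx] = 0.0
    return x_noisy

rng = np.random.default_rng(0)
x = np.ones(225)                          # stand-in for a 15 x 15 patch
x_noisy = add_masking_noise(x, 0.25, rng) # illustrative 25% masking
```

During pretraining, the corrupted x_noisy is fed through the AE while the reconstruction loss is still measured against the clean x.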

Fig. 2.

Structure of a four-layer, fully connected neural network. The first layer is the input layer, which is the IR patches used in this study. The next two layers are the hidden layers, which can be interpreted as extracted features. The last layer is the output layer, which contains the probabilities of R/NR for the centered pixel.


After the unsupervised feature extraction pretraining, the entire neural network is fine-tuned in a supervised manner. This step connects the input data directly to the target values to further adjust the weights for the final prediction task (Hinton et al. 2006; Vincent et al. 2010). The backward propagation of errors (backpropagation) algorithm is used to update the parameters to minimize the loss function in both steps (Rumelhart et al. 1986). In this study, the output layer produces R/NR probabilities for a single pixel, which are then thresholded to obtain binary classification results. The mean-square error (MSE) between the outputs and the target values serves as the loss function.
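The output stage can be sketched as follows. The 0.5 decision threshold is an illustrative assumption; the paper does not state the threshold value used:

```python
import numpy as np

def mse_loss(outputs, targets):
    """Mean-square error between network outputs and target values,
    used as the loss function in both training steps."""
    return float(np.mean((outputs - targets) ** 2))

def rain_no_rain(prob_rain, threshold=0.5):
    """Threshold the predicted rain probability into a binary
    R/NR label for the centered pixel (threshold is illustrative)."""
    return (prob_rain >= threshold).astype(int)

probs = np.array([0.1, 0.7, 0.45, 0.9])  # example predicted rain probabilities
labels = rain_no_rain(probs)             # binary R/NR decisions
```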

4. Experimental design

The overview of the experimental design is presented in Fig. 3. In this study, two models are built to explore the effectiveness of the deep learning approaches and of the additional information provided by the WV channel. Model 1 applies the SDAE to IR patches alone as input and predicts R/NR for the centered pixel; its structure is shown in Fig. 2, and it is referred to as DL-IR only in the remainder of this paper. Model 2 is built on the well-calibrated model 1 and incorporates WV imageries in a combined neural network, as shown in Fig. 4; hereafter, model 2 is referred to as DL-IR+WV.

Fig. 3.

Overview of the model training and verification process. Model 1 is a deep neural network with only IR imageries as input, as shown in Fig. 2. Model 2 is a combined deep neural network with both IR and WV imageries as inputs, as shown in Fig. 4.


Fig. 4.

Structure of a four-layer neural network with bispectral imageries. The input layer has two images, which connect to half of the hidden nodes in the first hidden layer separately.


As shown in Fig. 3, the unsupervised feature extraction pretraining step of the SDAE is applied first to both the IR and WV images. For DL-IR only (model 1), the supervised fine-tuning step then connects the IR images to the R/NR information for the centered pixel extracted from the stage IV data. For DL-IR+WV (model 2), we combine the WV imageries and their extracted features with the calibrated DL-IR only model using the design shown in Fig. 4 and apply the supervised fine-tuning step to the combined network. The input layer includes two patches, each connecting separately to the first hidden layer to produce its own extracted features; all features then act together to produce the R/NR predictions. We start from the DL-IR only model to take advantage of its calibrated parameters, which carry more effective information for transforming IR patches into R/NR probabilities than a random initialization.
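The forward pass of the combined network can be sketched as follows. Only the layer connectivity mirrors Fig. 4; the weight scales and random inputs are placeholders for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bispectral_forward(x_ir, x_wv, W_ir, W_wv, b1, W2, b2, W3, b3):
    """Each input patch connects only to its own half of the first
    hidden layer; the two halves are concatenated and fed through
    fully connected layers to the two output nodes (R/NR
    probabilities for the centered pixel)."""
    n_half = W_ir.shape[0]
    h_ir = sigmoid(W_ir @ x_ir + b1[:n_half])   # IR half of layer 1
    h_wv = sigmoid(W_wv @ x_wv + b1[n_half:])   # WV half of layer 1
    h1 = np.concatenate([h_ir, h_wv])           # 2000 nodes in this study
    h2 = sigmoid(W2 @ h1 + b2)                  # second hidden layer
    return sigmoid(W3 @ h2 + b3)                # output layer

# Placeholder weights sized as in the paper: 225 inputs per patch,
# 1000 nodes per half of layer 1, 1000 nodes in layer 2, 2 outputs.
rng = np.random.default_rng(0)
W_ir = rng.normal(scale=0.01, size=(1000, 225))
W_wv = rng.normal(scale=0.01, size=(1000, 225))
W2 = rng.normal(scale=0.01, size=(1000, 2000))
W3 = rng.normal(scale=0.01, size=(2, 1000))
out = bispectral_forward(rng.random(225), rng.random(225),
                         W_ir, W_wv, np.zeros(2000),
                         W2, np.zeros(1000), W3, np.zeros(2))
```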

Here, each input patch is a 15 × 15 pixel IR or WV patch, covering approximately 120 km × 120 km; the patches are selected with overlapping pixels. The DL-IR only model has 1000 hidden nodes in each of its two hidden layers. The first hidden layer of the DL-IR+WV model therefore has 2000 hidden nodes, with 1000 nodes connected to each input patch; the second layer remains at 1000 hidden nodes. The output layer of both models has two nodes, predicting the R/NR probabilities for the centered pixel of the 15 × 15 pixel input patch. These hyperparameters are selected based on the parameter selection results of previous studies and on experiments with different combinations (Tao et al. 2016a,b; Vincent et al. 2010). To choose the remaining important hyperparameters, such as the learning rate and the number of iterations, while avoiding overfitting, we divide the training data into two parts during calibration: the model is trained with 75% of the training data, and measures computed on the 25% holdout data determine the performance of different hyperparameter combinations.
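The overlapping-patch selection can be sketched as below. The stride-1 sliding window and the skipping of border pixels without a full 15 × 15 neighborhood are illustrative assumptions, not details stated in the paper:

```python
import numpy as np

def extract_patches(image, size=15):
    """Slide a size x size window over the image with stride 1
    (overlapping patches). Each patch predicts R/NR for its centered
    pixel; border pixels without a full window are skipped here."""
    half = size // 2
    rows, cols = image.shape
    patches, centers = [], []
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            patches.append(image[i - half:i + half + 1,
                                 j - half:j + half + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

# Toy 20 x 20 "image" of brightness temperatures for illustration.
img = np.arange(20 * 20, dtype=float).reshape(20, 20)
patches, centers = extract_patches(img)
```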

As described in section 2, the study region is located in the central United States (30°–45°N, 90°–105°W), and the study periods are the summer (June–August) and winter (December–February) seasons of 2012–14. To validate the models, we divided the data into training and verification periods: the summer and winter seasons of 2012/13 are used to calibrate the models, and the data of the following year serve as verification data. Note that we did not build separate models for the two seasons, because the potential use of such models is global; such climate differences are left for the models to detect from the cloud and water vapor information.

5. Results and discussion

In this section, we evaluate the performance of the DL-IR only model and the DL-IR+WV model to validate the effectiveness of the deep learning techniques and of the additional information contained in the WV channel. The results of both models are compared with PERSIANN-CCS, the operational precipitation product introduced in section 2. The verification measures used to evaluate precipitation identification performance are the probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI), along with the performance gain of these measures with respect to PERSIANN-CCS. Table 1 gives their definitions and desirable values. In addition, we provide a few case studies as examples to analyze the differences in precipitation identification performance between the models. All evaluations are conducted over the verification periods (summer 2013 and winter 2013/14).

Table 1.

Description of verification measures used. TP denotes the number of true positive events, MS the number of missed events, and FP the number of false positive events; the performance gain of a measure compares its value for a model with its value for a reference.

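The measures in Table 1 follow the standard contingency-table definitions built from TP, MS, and FP, which can be written directly as:

```python
def pod(tp, ms, fp):
    """Probability of detection: fraction of observed rain events that are hit."""
    return tp / (tp + ms)

def far(tp, ms, fp):
    """False alarm ratio: fraction of predicted rain events that did not occur."""
    return fp / (tp + fp)

def csi(tp, ms, fp):
    """Critical success index: hits relative to hits, misses, and false alarms."""
    return tp / (tp + ms + fp)

# Toy contingency counts: 6 hits, 2 misses, 2 false alarms.
print(pod(6, 2, 2), far(6, 2, 2), csi(6, 2, 2))  # 0.75 0.25 0.6
```

The desirable directions follow from the formulas: POD and CSI approach 1 for a perfect model, while FAR approaches 0.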

Table 2 provides the overall performance of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the verification periods. Both models show improvement in all measures compared to PERSIANN-CCS, and the DL-IR+WV model has the best performance of the three. Specifically, the DL-IR only model and the DL-IR+WV model have 14.83% and 29.41% performance improvement in POD compared to PERSIANN-CCS, respectively (0.449 and 0.506 compared to 0.391). At the same time, these two models have 9.86% and 20.75% performance improvement in FAR, respectively (0.620 and 0.564 compared to 0.681). These improvements result in significant increases (21.60% and 43.66%) in CSI performance for the models (0.259 and 0.306 compared to 0.213). The improvement of the DL-IR only model over PERSIANN-CCS demonstrates the effectiveness of the deep learning techniques in automatically extracting useful features for precipitation identification from the raw IR images. The DL-IR+WV model outperforms both PERSIANN-CCS and the DL-IR only model, which shows that the WV channel contains additional information that better supports delineating precipitation areas and that the methodology is capable of taking advantage of such information.
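For measures where higher values are better (POD and CSI), the quoted performance gains follow from the tabulated values as the relative gain over the reference; a quick check (assuming that convention, which reproduces the POD and CSI figures above; for FAR, where lower is better, the reported gain is instead a relative reduction):

```python
def gain(model, ref):
    """Relative performance gain (%) of a model over a reference, for a
    measure where higher values are better (e.g., POD or CSI)."""
    return 100.0 * (model - ref) / ref

# CSI from Table 2: PERSIANN-CCS 0.213, DL-IR only 0.259, DL-IR+WV 0.306.
print(round(gain(0.259, 0.213), 2))  # 21.6  (reported as 21.60%)
print(round(gain(0.306, 0.213), 2))  # 43.66
```

The same formula applied to the POD values (0.449 and 0.506 vs. 0.391) reproduces the reported 14.83% and 29.41% gains.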

Table 2.

Summary of R/NR classification performance over the verification periods (including both summer 2013 and winter 2013/14).


More detailed performance of the models over the different seasons of the verification periods is provided in Table 3. Compared to PERSIANN-CCS, the improvements in CSI are much higher in the winter (46.51% and 94.57% performance gains for the DL-IR only model and the DL-IR+WV model, respectively) than in the summer (4.28% and 13.82%, respectively). One reason for this difference is that PERSIANN-CCS already performs much better in the summer (CSI: 0.304) than in the winter (CSI: 0.129) over the central United States, because the strong convective storms of the summer are relatively easy to detect.

Table 3.

Summary of R/NR classification performance for summer 2013 and winter 2013/14.


Figures 5 and 6 present maps of POD, FAR, and CSI for the models over the summer and winter verification periods, respectively. Warm colors indicate high measure values, while cold colors indicate low values; note that high values are desirable for POD and CSI, while low values are desirable for FAR. Figures 5a–c show that the DL-IR+WV model outperforms the other two in the summer, especially in Kansas, Missouri, and Oklahoma. For FAR (Figs. 5d–f), no significant difference can be distinguished between the models, consistent with the FAR values in Table 3 (0.552, 0.575, and 0.554 for PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model, respectively). The CSI maps improve in ascending order from PERSIANN-CCS to the DL-IR only model to the DL-IR+WV model (Figs. 5g–i). Figure 6 shows similar but more pronounced improvements in the winter; in particular, noticeable decreases in FAR relative to PERSIANN-CCS can be observed for the DL-IR only and DL-IR+WV models (Figs. 6d–f). Overall, the performance improvements of both models are geographically consistent in both seasons, and the DL-IR+WV model performs best in both.

Fig. 5.

(a)–(c) POD, (d)–(f) FAR, and (g)–(i) CSI of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States for summer 2013 (June–August).

Citation: Journal of Hydrometeorology 18, 5; 10.1175/JHM-D-16-0176.1

Fig. 6.

(a)–(c) POD, (d)–(f) FAR, and (g)–(i) CSI of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States for winter 2013/14 (December 2013–February 2014). White indicates locations with fewer than 50 observed precipitation pixels during the corresponding period.


To further investigate how the models identify precipitation pixels from the IR and WV channels, Figs. 7 and 8 provide visualizations of two case studies, one in the summer and one in the winter. The two selected case studies help explain the performance improvement of the DL-IR only and DL-IR+WV models over PERSIANN-CCS; their performance statistics are presented in Table 4. Figure 7 presents the WV and IR imageries, stage IV R/NR observations, and R/NR identification results of all models for a rainfall event at 1900 UTC 26 July 2013. The cloud patch in the eastern part of the map is relatively warm with only a few cold pixels, as shown in Fig. 7b. Hence, only small sections of rainfall are correctly identified by PERSIANN-CCS (the green pixels in Fig. 7d), while a large rainy area over Missouri and Arkansas is captured by the stage IV observations (Fig. 7c). On the other hand, the cold cloud over the northeastern part of the map (Fig. 7b) leads to false alarms in PERSIANN-CCS (the red pixels in Fig. 7d). In Fig. 7e, the DL-IR only model reduces the false alarm pixels while identifying a similar amount of precipitation pixels; however, its overall improvement over PERSIANN-CCS is marginal (CSI performance gain of 1.84%). The DL-IR+WV model, in contrast, shows a significant improvement in delineating the precipitation area (the green area in Fig. 7f). Figures 7d–f show that the DL-IR+WV model successfully connects the large precipitation area captured by the stage IV observations, instead of identifying only small sections as PERSIANN-CCS and the DL-IR only model do, especially under the warm cloud over central Arkansas and eastern Oklahoma. Moreover, the DL-IR+WV model also avoids the false alarms in the northeastern part of the map. Its overall improvement over PERSIANN-CCS is very significant (CSI performance gain of 117.97%). This case study illustrates the supplementary information the WV channel offers for precipitation identification.

Fig. 7.

Visualization of WV and IR imageries and precipitation identification performance of PERSIANN-CCS, the DL-IR only model, and the DL-IR+WV model over the central United States for 1900 UTC 26 Jul 2013: (a),(b) snapshots of WV and IR channels; (c) radar R/NR observation; and (d)–(f) hits, false alarms, and misses maps. Green, red, and blue indicate hits, false alarms, and misses, respectively.


Fig. 8.

As in Fig. 7, but for 2100 UTC 21 Dec 2013.


Table 4.

Summary of R/NR classification performance in the case studies.


Similarly, Fig. 8 shows the WV and IR imageries, stage IV R/NR observations, and R/NR identification results of all models for a rainfall event at 2100 UTC 21 December 2013. As shown in Fig. 8d, PERSIANN-CCS misses a large area of precipitation in Oklahoma and Texas, where the cloud is not as cold as that in the eastern part of the map (Fig. 8b). The DL-IR only model (Fig. 8e) extends the hit pixels over most of Texas but still misses the pixels in the midwestern section of the map, where the cloud brightness temperatures are relatively high (Fig. 8b). Its overall improvement over PERSIANN-CCS is already quite significant (CSI performance gain of 46.00%), demonstrating the capability of the deep learning techniques to extract additional information from the IR data. Figure 8f shows that the DL-IR+WV model captures almost the entire precipitation region correctly (green), with only a few false alarms and misses at the edges (red and blue); its CSI performance gain with respect to PERSIANN-CCS is 67.73%. This again shows the value of the WV channel information.

6. Conclusions

This study explores the application of deep learning techniques to precipitation identification with bispectral information (IR and WV channels). Two models are built to evaluate the effectiveness of the methodology and the value of the additional information provided by the WV channel. The first model, referred to as the DL-IR only model in this paper, applies deep learning techniques to IR imageries to automatically extract useful features for precipitation identification. The second model, referred to as the DL-IR+WV model, incorporates the WV imageries and their extracted features into the calibrated DL-IR only model to further improve performance. The results of both models are compared with PERSIANN-CCS, an operational satellite-based precipitation product.

The experiments show significant improvements for both models in R/NR detection compared to PERSIANN-CCS. The performance gains in CSI are 21.60% and 43.66% over the verification periods for the DL-IR only model and the DL-IR+WV model, respectively. In particular, for the winter, the performance gains in CSI are as high as 46.51% and 94.57% for the models. In addition, case studies in both seasons show that the deep learning techniques and the WV channel information can help delineate precipitation regions with relatively warm clouds, while reducing false alarms with cold clouds.

The improved performance of the DL-IR only model demonstrates that the sparse, automatically extracted features from the IR imageries support better precipitation detection than a limited number of hand-designed features. The significant improvement of the DL-IR+WV model over both PERSIANN-CCS and the DL-IR only model verifies that the information contained in the WV imageries supplements the IR data to better delineate precipitation regions. The case studies show that the additional WV information can help capture warm cloud precipitation likely to be missed by other models and identify nonraining cold clouds.

There are also many research opportunities left for further exploration. One of the most promising is to extend this model to precipitation amount estimation as a second step and eventually provide a complete satellite-based precipitation estimation product. On the other hand, hourly satellite snapshots and hourly cumulative radar observations may not be the best pairing from which to learn a relationship. In the future, we plan to upgrade the ground observation data to a higher temporal resolution, such as the new 5-min NEXRAD data, to build a model with higher temporal resolution (Zhang et al. 2011). In addition, such studies should be conducted over a larger region with longer training and verification periods to investigate the stability of the model. In particular, mountainous areas may need more careful consideration in the choice of ground measurements for the model to remain accurate at larger scales.

Acknowledgments

Financial support for this study was provided by the National Science Foundation Cyber-Enabled Sustainability Science and Engineering program (Grant CCF-1331915), the U.S. Army Research Office (Grant W911NF-11-1-0422), and the NASA Earth and Space Science Fellowship (Grant NNX15AN86H).

REFERENCES

  • Arkin, P. A., and B. N. Meisner, 1987: The relationship between large-scale convective rainfall and cold cloud over the Western Hemisphere during 1982–84. Mon. Wea. Rev., 115, 51–74, doi:10.1175/1520-0493(1987)115<0051:TRBLSC>2.0.CO;2.
  • Ba, M. B., and A. Gruber, 2001: GOES Multispectral Rainfall Algorithm (GMSRA). J. Appl. Meteor., 40, 1500–1514, doi:10.1175/1520-0450(2001)040<1500:GMRAG>2.0.CO;2.
  • Behrangi, A., K.-L. Hsu, B. Imam, S. Sorooshian, G. J. Huffman, and R. J. Kuligowski, 2009a: PERSIANN-MSA: A precipitation estimation method from satellite-based multispectral analysis. J. Hydrometeor., 10, 1414–1429, doi:10.1175/2009JHM1139.1.
  • Behrangi, A., K.-L. Hsu, B. Imam, S. Sorooshian, and R. J. Kuligowski, 2009b: Evaluating the utility of multispectral information in delineating the areal extent of precipitation. J. Hydrometeor., 10, 684–700, doi:10.1175/2009JHM1077.1.
  • Behrangi, A., K. Hsu, B. Imam, and S. Sorooshian, 2010: Daytime precipitation estimation using bispectral cloud classification system. J. Appl. Meteor. Climatol., 49, 1015–1031, doi:10.1175/2009JAMC2291.1.
  • Bengio, Y., 2009: Learning deep architectures for AI. Found. Trends Mach. Learn., 2, 1–127, doi:10.1561/2200000006.
  • Bourlard, H., and Y. Kamp, 1988: Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern., 59, 291–294, doi:10.1007/BF00332918.
  • Capacci, D., and B. J. Conway, 2005: Delineation of precipitation areas from MODIS visible and infrared imagery with artificial neural networks. Meteor. Appl., 12, 291–305, doi:10.1017/S1350482705001787.
  • Glorot, X., A. Bordes, and Y. Bengio, 2011: Domain adaptation for large-scale sentiment classification: A deep learning approach. Proceedings of the 28th International Conference on Machine Learning, L. Getoor and T. Scheffer, Eds., Omnipress, 513–520.
  • Hinton, G. E., and R. S. Zemel, 1993: Autoencoders, minimum description length and Helmholtz free energy. Advances in Neural Information Processing Systems 6, J. D. Cowan, G. Tesauro, and J. Alspector, Eds., Morgan Kaufmann, 3–10.
  • Hinton, G. E., S. Osindero, and Y. W. Teh, 2006: A fast learning algorithm for deep belief nets. Neural Comput., 18, 1527–1554, doi:10.1162/neco.2006.18.7.1527.
  • Hong, Y., K. L. Hsu, S. Sorooshian, and X. G. Gao, 2004: Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system. J. Appl. Meteor., 43, 1834–1852, doi:10.1175/JAM2173.1.
  • Hsu, K.-L., X. G. Gao, S. Sorooshian, and H. V. Gupta, 1997: Precipitation estimation from remotely sensed information using artificial neural networks. J. Appl. Meteor., 36, 1176–1190, doi:10.1175/1520-0450(1997)036<1176:PEFRSI>2.0.CO;2.
  • Hsu, K.-L., H. V. Gupta, X. Gao, and S. Sorooshian, 1999: Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation. Water Resour. Res., 35, 1605–1618, doi:10.1029/1999WR900032.
  • Huffman, G. J., and Coauthors, 2007: The TRMM Multisatellite Precipitation Analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales. J. Hydrometeor., 8, 38–55, doi:10.1175/JHM560.1.
  • Joyce, R. J., J. E. Janowiak, P. A. Arkin, and P. P. Xie, 2004: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. J. Hydrometeor., 5, 487–503, doi:10.1175/1525-7541(2004)005<0487:CAMTPG>2.0.CO;2.
  • Kidd, C., D. R. Kniveton, M. C. Todd, and T. J. Bellerby, 2003: Satellite rainfall estimation using combined passive microwave and infrared algorithms. J. Hydrometeor., 4, 1088–1104, doi:10.1175/1525-7541(2003)004<1088:SREUCP>2.0.CO;2.
  • Kuligowski, R. J., 2002: A self-calibrating real-time GOES rainfall algorithm for short-term rainfall estimates. J. Hydrometeor., 3, 112–130, doi:10.1175/1525-7541(2002)003<0112:ASCRTG>2.0.CO;2.
  • Kummerow, C., and L. Giglio, 1995: A method for combining passive microwave and infrared rainfall observations. J. Atmos. Oceanic Technol., 12, 33–45, doi:10.1175/1520-0426(1995)012<0033:AMFCPM>2.0.CO;2.
  • Kurino, T., 1997: A satellite infrared technique for estimating “deep/shallow” precipitation. Adv. Space Res., 19, 511–514, doi:10.1016/S0273-1177(97)00063-X.
  • Martin, D. W., R. A. Kohrs, F. R. Mosher, C. M. Medaglia, and C. Adamo, 2008: Over-ocean validation of the global convective diagnostic. J. Appl. Meteor. Climatol., 47, 525–543, doi:10.1175/2007JAMC1525.1.
  • Marzano, F. S., M. Palmacci, D. Cimini, G. Giuliani, and F. J. Turk, 2004: Multivariate statistical integration of satellite infrared and microwave radiometric measurements for rainfall retrieval at the geostationary scale. IEEE Trans. Geosci. Remote Sens., 42, 1018–1032, doi:10.1109/TGRS.2003.820312.
  • Nasrollahi, N., K. L. Hsu, and S. Sorooshian, 2013: An artificial neural network model to reduce false alarms in satellite precipitation products using MODIS and CloudSat observations. J. Hydrometeor., 14, 1872–1883, doi:10.1175/JHM-D-12-0172.1.
  • Netzer, Y., T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, 2011: Reading digits in natural images with unsupervised feature learning. 2011 NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, Neural Information Processing Systems Foundation, 4 pp.
  • Nguyen, P., S. Sellars, A. Thorstensen, Y. Tao, H. Ashouri, D. Braithwaite, K. Hsu, and S. Sorooshian, 2014: Satellites track precipitation of Super Typhoon Haiyan. Eos, Trans. Amer. Geophys. Union, 95, 133–135, doi:10.1002/2014EO160002.
  • Rumelhart, D. E., G. E. Hinton, and R. J. Williams, 1986: Learning representations by back-propagating errors. Nature, 323, 533–536, doi:10.1038/323533a0.
  • Sorooshian, S., and Coauthors, 2011: Advanced concepts on remote sensing of precipitation at multiple scales. Bull. Amer. Meteor. Soc., 92, 1353–1357, doi:10.1175/2011BAMS3158.1.
  • Tao, Y., X. Gao, K. Hsu, S. Sorooshian, and A. Ihler, 2016a: A deep neural network modeling framework to reduce bias in satellite precipitation products. J. Hydrometeor., 17, 931–945, doi:10.1175/JHM-D-15-0075.1.
  • Tao, Y., X. Gao, A. Ihler, K. Hsu, and S. Sorooshian, 2016b: Deep neural networks for precipitation estimation from remotely sensed information. 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, IEEE, 1349–1355, doi:10.1109/CEC.2016.7743945.
  • Tjemkes, S. A., L. van de Berg, and J. Schmetz, 1997: Warm water vapour pixels over high clouds as observed by METEOSAT. Contrib. Atmos. Phys., 70, 15–22.
  • Vincent, P., H. Larochelle, Y. Bengio, and P.-A. Manzagol, 2008: Extracting and composing robust features with denoising autoencoders. Proc. 25th Int. Conf. on Machine Learning, Helsinki, Finland, ACM, 1096–1103, doi:10.1145/1390156.1390294.
  • Vincent, P., H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, 2010: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11, 3371–3408.
  • Weng, F., L. Zhao, R. R. Ferraro, G. Poe, X. Li, and N. C. Grody, 2003: Advanced microwave sounding unit cloud and precipitation algorithms. Radio Sci., 38, 8068, doi:10.1029/2002RS002679.
  • Xie, J., L. Xu, and E. Chen, 2012: Image denoising and inpainting with deep neural networks. Advances in Neural Information Processing Systems 25, P. L. Bartlett et al., Eds., Neural Information Processing Systems Foundation, 350–358. [Available online at https://papers.nips.cc/paper/4686-image-denoising-and-inpainting-with-deep-neural-networks.pdf.]
  • Zhang, J., and Coauthors, 2011: National Mosaic and Multi-Sensor QPE (NMQ) system: Description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 1321–1338, doi:10.1175/2011BAMS-D-11-00047.1.