Search Results

You are looking at items 21–30 of 830 for:

  • Deep learning
  • All content
Simon Veldkamp, Kirien Whan, Sjoerd Dirksen, and Maurice Schmeits

adaptive moment estimation (Adam; Kingma and Ba 2014), a variant of stochastic gradient descent that is very popular in deep learning. We use early stopping to determine the number of epochs (the number of times the training data are used during training). The neural networks used in this research were programmed using Keras (Chollet et al. 2015), with TensorFlow as the backend (Abadi et al. 2015). Adam was employed using default options for all parameters other than the learning rate decay parameter
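The Adam optimizer mentioned in this excerpt is compact enough to sketch directly. The NumPy implementation of a single update step below follows the standard formulation in Kingma and Ba (2014); it is an illustrative sketch, not the paper's code, and the variable names and toy quadratic objective are our own.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias correction, then a scaled step."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction (t starts at 1)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One step on a 1-D parameter, minimizing theta**2 (gradient = 2*theta):
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, 2 * theta, m, v, t=1)
```

Note that after bias correction the very first step has magnitude close to the learning rate regardless of the gradient scale, which is one reason Adam is forgiving about tuning.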

Restricted access
Ryan Lagerquist, John T. Allen, and Amy McGovern

: Global relationship between fronts and warm conveyor belts and the impact on extreme precipitation. J. Climate, 28, 8411–8429, doi:10.1175/JCLI-D-15-0171.1. Chollet, F., 2018: Deep Learning with Python. Manning, 384 pp. Clarke, L., and R. Renard, 1966: The U.S. Navy numerical frontal analysis scheme: Further development and a limited evaluation. J. Appl. Meteor., 5, 764–777,

Restricted access
Florian Dupuy, Olivier Mestre, Mathieu Serrurier, Valentin Kivachuk Burdá, Michaël Zamo, Naty Citlali Cabrera-Gutiérrez, Mohamed Chafik Bakkay, Jean-Christophe Jouhaud, Maud-Alix Mader, and Guillaume Oller

. The atmospheric research community has already taken advantage of the abilities of CNNs [see Reichstein et al. (2019) for an overview]. Most applications deal with images, for example from satellite observations to create cloud masks or derive rainfall (Drönner et al. 2018; Moraux et al. 2019), or from photographs for weather classification (Elhoseiny et al. 2015). Often, CNNs using NWP data as predictors (predictors are also called features in the deep learning community) are used to produce
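For readers unfamiliar with the core operation a CNN layer applies to a gridded predictor field, a minimal "valid" 2-D cross-correlation can be written directly in NumPy. The field and kernel below are toy values chosen for illustration, not data from the study.

```python
import numpy as np

def conv2d_valid(field, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the field
    with no padding and stride 1, as in a basic CNN layer."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging kernel over a toy 5x5 gridded field:
field = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.full((3, 3), 1 / 9)
smoothed = conv2d_valid(field, kernel)   # output shape (3, 3)
```

In a trained network the kernel weights are learned rather than fixed, and many kernels run in parallel to produce multiple feature maps.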

Open access
Kyle A. Hilburn, Imme Ebert-Uphoff, and Steven D. Miller

. Miller, 2020: Evaluating Geostationary Lightning Mapper flash rates within intense convective storms. J. Geophys. Res. Atmos., 125, e2020JD032827, doi:10.1029/2020JD032827. Samsi, S., C. J. Mattioli, and M. S. Veillette, 2019: Distributed deep learning for precipitation nowcasting. IEEE High Performance Extreme Computing Conf., Waltham, MA, IEEE, doi:10.1109/HPEC.2019.8916416. Sawada, Y., K

Open access
Xiaodong Chen, L. Ruby Leung, Yang Gao, and Ying Liu

: NeuralHydrology – Interpreting LSTMs in hydrology. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, W. Samek et al., Eds., Springer, 347–362, doi:10.1007/978-3-030-28954-6_19. Li, D., M. L. Wrzesien, M. Durand, J. Adam, and D. P. Lettenmaier, 2017: How much runoff originates as snow in the western United States, and how will that change in the future? Geophys. Res. Lett., 44, 6163–6172, doi:10.1002/2017GL073551. Li, Z

Restricted access
Andrew Geiss and Joseph C. Hardin

features), and similar precipitating features occur across many different PPI scans depending on the regional weather, for instance, the presence of a cold front and corresponding heavy precipitation in an extratropical cyclone. By learning common sub-pixel-scale features in the context of large-scale weather in PPI scans, a neural network can outperform interpolation schemes. Though introduced in the late 1980s, deep CNNs have become very popular since about 2010 for various image processing tasks
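The interpolation baseline such a network is compared against can be made concrete. The sketch below implements plain bilinear upsampling in NumPy under the assumption of a single-channel field; the function name and toy array are illustrative, not the authors' code.

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Bilinear interpolation: resample a 2-D field onto a grid
    `factor` times finer by weighting the four nearest coarse pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                  # fractional row offsets
    wx = (xs - x0)[None, :]                  # fractional column offsets
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

coarse = np.array([[0.0, 1.0], [2.0, 3.0]])
fine = bilinear_upsample(coarse, 2)          # output shape (4, 4)
```

A super-resolution network is judged against exactly this kind of baseline: interpolation can only blend neighboring values, whereas the network can reintroduce learned sub-pixel structure.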

Restricted access
Yingkai Sha, David John Gagne II, Gregory West, and Roland Stull


We present a novel approach for the automated quality control (QC) of precipitation observations from a sparse station network within the complex terrain of British Columbia, Canada. Our QC approach uses convolutional neural networks (CNNs) to classify bad observation values, incorporating a multi-classifier ensemble to achieve better QC performance. We train CNNs using human-QC'd labels from 2016 to 2017, with gridded precipitation and elevation analyses as inputs. Based on the classification evaluation metrics, our QC approach shows reliable and robust performance across different geographical environments (e.g., coastal and inland mountains), with an area under the curve (AUC) of 0.927 and type I/type II errors below 15%. Based on saliency-map interpretation studies, we explain the success of the CNN-based QC by showing that it captures the precipitation patterns around, and upstream of, the station locations. This automated QC approach is an option for eliminating bad observations in various applications, including the preprocessing of training datasets for machine learning, and it can be used in conjunction with human QC to improve upon what either method could accomplish alone.
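The evaluation metrics quoted in this abstract (AUC and type I/II error rates) are straightforward to compute. The NumPy sketch below uses a hypothetical toy label/score vector and a 0.5 decision threshold of our choosing, not the study's data.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (the Mann-Whitney U formulation; ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

def type_errors(labels, pred):
    """Type I error = false-positive rate; type II = false-negative rate."""
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    return fp / np.sum(labels == 0), fn / np.sum(labels == 1)

labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.3, 0.4, 0.6, 0.7])
auc = roc_auc(labels, scores)
t1, t2 = type_errors(labels, (scores >= 0.5).astype(int))
```

AUC is threshold-free (it ranks the raw scores), whereas the type I/II rates depend on the chosen classification threshold, which is why the two kinds of metric are usually reported together.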

Open access
Min Wang, Shudao Zhou, Zhong Yang, and Zhanhua Liu

application and low recognition accuracy. Therefore, we need a method that can automatically learn different cloud features. The convolutional neural network (CNN) has achieved great success in large-scale image classification tasks. The CNN is a deep feedforward artificial neural network capable of in-depth learning, which makes it possible to express features that are otherwise difficult to represent, to fully mine the associations within the data, and to extract the

Restricted access
Yumeng Tao, Kuolin Hsu, Alexander Ihler, Xiaogang Gao, and Soroosh Sorooshian

related to rainfall. In recent years, deep learning algorithms, also known as deep neural networks (DNNs), have been widely applied in many fields, including signal and image processing, computer vision, and language processing, in part for their ability to perform complex feature extraction (Bengio 2009; Hinton et al. 2006; LeCun et al. 2015). According to many recent studies (Glorot et al. 2011; Hinton et al. 2006; Lu et al. 2013; Tao et al. 2016a, 2018; Vincent et al. 2008; Yang et al. 2017

Full access
Alex M. Haberlie and Walker S. Ashley

-tree predictions (Gagne et al. 2017). In addition, this study uses the XGBoost algorithm (Chen and Guestrin 2016). XGBoost is an extension of gradient boosting (GB) that adds methods to reduce model overfitting. These algorithms can rival the performance of more complex algorithms (e.g., deep neural networks; Krizhevsky et al. 2012) in machine-learning competitions (Chen and Guestrin 2016), with less time spent tuning model hyperparameters. Gagne et al. (2017, their section 2.4) provide a detailed
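Gradient boosting, which XGBoost extends, can be illustrated with a minimal NumPy version that fits decision stumps to residuals. This is a generic sketch of the GB idea on a toy dataset, not the XGBoost algorithm itself, which adds regularization and many engineering refinements.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split (stump) regressor on one feature under squared error."""
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x):
        left, right = residual[x <= s], residual[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        err = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if err < best[0]:
            best = (err, s, lm, rm)
    return best[1:]                     # (split, left_value, right_value)

def gb_fit(x, y, n_rounds=20, lr=0.3):
    """Gradient boosting for squared error: each stump fits the residual
    of the current ensemble and is added with shrinkage lr."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        s, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= s, lv, rv)
        stumps.append((s, lv, rv))
    return y.mean(), stumps, pred

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
base, stumps, pred = gb_fit(x, y)
```

Each round shrinks the residual geometrically (by a factor of 1 − lr when the same split is repeatedly optimal), which is the behavior XGBoost regularizes and accelerates at scale.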

Full access