Search Results

You are looking at 1 - 10 of 16 items for:

  • Model performance/evaluation
  • The 1st NOAA Workshop on Leveraging AI in the Exploitation of Satellite Earth Observations & Numerical Weather Prediction
Christina Kumler-Bonfanti, Jebb Stewart, David Hall, and Mark Govett

the model learns to fit the training data. The paper is organized as follows. In section 2, the U-Net architecture is described, as are the numerical metrics used to evaluate its success. Section 3 describes both qualitatively and quantitatively the design and performance of the best U-Net model obtained for identifying tropical cyclones using the Global Forecast System (GFS) total precipitable water field as inputs. In section 4, three additional U-Net models are introduced that identify

Restricted access
John L. Cintineo, Michael J. Pavolonis, Justin M. Sieglaff, Anthony Wimmers, Jason Brunner, and Willard Bellon

trained model, which is useful in selecting hyperparameters (see section 2d). However, by choosing hyperparameter values that optimize performance on the validation set, the hyperparameters can be overfit to the validation set, just as the model weights (those adjusted by training) can be overfit to the training set. Thus, the selected model is also evaluated on the testing set, which is independent of the data used to fit both the model weights and the hyperparameters. c. Model architecture CNNs use a

Restricted access
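The train/validation/test protocol described in the excerpt above can be sketched in a few lines. Everything below is an illustrative stand-in, not material from the paper: synthetic 1D data, a closed-form ridge fit in place of a trained CNN, and a small λ grid as the hyperparameter search.

```python
import random

def fit_ridge_1d(xs, ys, lam):
    # Closed-form 1D ridge regression: w = Σxy / (Σx² + λ)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(300)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

# Three independent splits: weights are fit on the training set,
# hyperparameters are chosen on the validation set, and the final
# score is reported on the held-out test set.
train_x, train_y = xs[:200], ys[:200]
val_x, val_y = xs[200:250], ys[200:250]
test_x, test_y = xs[250:], ys[250:]

best_lam, best_w, best_val = None, None, float("inf")
for lam in [0.0, 0.1, 1.0, 10.0]:           # hyperparameter grid
    w = fit_ridge_1d(train_x, train_y, lam)
    err = mse(w, val_x, val_y)              # selection uses validation only
    if err < best_val:
        best_lam, best_w, best_val = lam, w, err

# The test set is touched exactly once, after selection is frozen.
print(best_lam, mse(best_w, test_x, test_y))
```

Because λ is chosen by its validation score, the validation error of the winner is an optimistic estimate, which is exactly why the excerpt insists on a third, independent test set.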
Kyle A. Hilburn, Imme Ebert-Uphoff, and Steven D. Miller

values. Performance is evaluated using metrics including the mean-square error (MSE), the coefficient of determination R², categorical metrics (probability of detection, false-alarm rate, critical success index, and categorical bias) at various output threshold levels, and the root-mean-square difference (RMSD) binned over the range of true output values. A potential disadvantage of ML is that it is statistically based, making it harder to interpret. So, besides producing a trained and

Open access
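The categorical metrics named in the excerpt above all derive from a 2×2 contingency table of thresholded forecast/observation pairs. A minimal sketch, assuming the usual forecast-verification definitions (POD = hits/(hits + misses), false-alarm ratio = false alarms/(hits + false alarms), CSI = hits/(hits + misses + false alarms), bias = forecast-yes/observed-yes); the sample values at the end are made up.

```python
def categorical_scores(pred, obs, threshold):
    """POD, false-alarm ratio, CSI, and bias from a 2x2 contingency
    table built by thresholding predicted and observed values."""
    hits = misses = false_alarms = 0
    for p, o in zip(pred, obs):
        p_yes, o_yes = p >= threshold, o >= threshold
        if p_yes and o_yes:
            hits += 1
        elif o_yes:
            misses += 1
        elif p_yes:
            false_alarms += 1
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, far, csi, bias

# Made-up example: 2 hits, 1 false alarm, 0 misses at threshold 0.5
pod, far, csi, bias = categorical_scores(
    [0.9, 0.2, 0.7, 0.1, 0.8], [1.0, 0.0, 0.1, 0.0, 0.9], 0.5
)
```

Sweeping `threshold` over the output range reproduces the "at various output threshold levels" evaluation the excerpt describes.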
Noah D. Brenowitz, Tom Beucler, Michael Pritchard, and Christopher S. Bretherton

deep convective self-aggregation above uniform SST. J. Atmos. Sci., 62, 4273–4292, https://doi.org/10.1175/JAS3614.1. Chevallier, F., and J.-F. Mahfouf, 2001: Evaluation of the Jacobians of infrared radiation models for variational data assimilation. J. Appl. Meteor., 40, 1445–1461, https://doi.org/10.1175/1520-0450(2001)040<1445:EOTJOI>2.0.CO;2. Chevallier, F., F. Chéruy, N. A. Scott, and

Restricted access
Andrew E. Mercer, Alexandria D. Grimes, and Kimberly M. Wood

Kaplan, J., and Coauthors, 2015: Evaluating environmental impacts on tropical cyclone rapid intensification predictability utilizing statistical models. Wea. Forecasting, 30, 1374–1396, https://doi.org/10.1175/WAF-D-15-0032.1. Karpatne, A., I. Ebert-Uphoff, S. Ravela, H. A. Babaie, and V. Kumar, 2018: Machine learning for the geosciences: Challenges and opportunities. IEEE Trans. Knowl. Data Eng., 31, 1544–1554

Restricted access
Hanoi Medina, Di Tian, Fabio R. Marin, and Giovanni B. Chirico

global NWP models (Bauer et al. 2015). The representation of these processes is especially challenging over continental areas of the Southern Hemisphere, where abundant vegetation and sparse observations for evaluation and data assimilation have limited the models' accuracy. Recent progress in forecasting tropical convection (Bechtold et al. 2014; Subramanian et al. 2017) and the increasing quantity and quality of global information encourage the use of NWP for tropical precipitation

Full access
Ryan Lagerquist, Amy McGovern, Cameron R. Homeyer, David John Gagne II, and Travis Smith

are 1D with lower spatial resolution), which would present a major difficulty for non-ML-based postprocessing methods such as SSPF. The rest of this paper is organized as follows. Section 2 briefly describes the inner workings of CNNs [a more thorough description is provided in Lagerquist et al. (2019), hereafter L19], section 3 describes the input data and preprocessing, section 4 describes the experiments used to find the best CNNs, section 5 evaluates the performance of the best CNNs, and

Free access
Dan Lu, Goutam Konapala, Scott L. Painter, Shih-Chieh Kao, and Sudershan Gangrade

southeastern United States, where snow has little influence on runoff. The LSTM performance is evaluated in comparison with a physics-based hydrologic model calibrated using the same training data as the LSTM. Results from this study should have important implications for streamflow simulation in rural watersheds, where data quality and availability are critical issues. The paper is organized as follows. Section 2 describes the LSTM network, various regularization techniques, the Bayesian LSTM and the

Restricted access
Imme Ebert-Uphoff and Kyle Hilburn

, by trying different sets of hyperparameters, training a complete model for each set, evaluating the resulting model, and then deciding which hyperparameter set yields the best performance. Algorithms range from simple exhaustive grid search (as illustrated in the "Using performance measures for NN tuning" section) to sophisticated algorithms (Kasim et al. 2020; Hertel et al. 2020). Sample application: Image-to-image translation from GOES to MRMS. We demonstrate many of the concepts in this

Full access
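The exhaustive grid search mentioned in the excerpt above can be sketched as a Cartesian product over a search space. The hyperparameter names here are illustrative, and the scoring function is a made-up analytic stand-in for "train a complete model and evaluate it", so the sketch runs end to end.

```python
from itertools import product

# Hypothetical search space; the names are illustrative, not from the paper.
grid = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [32, 64],
    "num_layers": [2, 3],
}

def train_and_score(params):
    # Stand-in for "train a complete model for this hyperparameter set,
    # then evaluate it on validation data" (higher is better here).
    return -abs(params["learning_rate"] - 1e-4) - params["num_layers"] * 0.01

# Enumerate every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=train_and_score,
)
```

Exhaustive search is simple but its cost is the product of the grid sizes (here 2×2×2 = 8 full trainings), which is what motivates the more sophisticated tuning algorithms the excerpt cites.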
Anthony Wimmers, Christopher Velden, and Joshua H. Cossuth

. However, it is difficult to generalize this difference because of the small sample size for category 5. Overall, the improvement is enough to justify limiting the remaining model evaluation to only the two-channel version of DeepMicroNet going forward. Fig. 5. (a) Intensity error (RMSE) according to best track MSW for the three model versions labeled in the legend, and (b) average standard deviation of the PDFs according to best track MSW. b. Model performance The following describes a two

Full access
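Stratifying error by the true value, as in the RMSE-by-best-track-MSW comparison described in the excerpt above, can be sketched as a binned RMSE. The function name, the bin width, and the sample numbers are all illustrative.

```python
import math
from collections import defaultdict

def binned_rmse(truth, pred, bin_width):
    """RMSE of (pred - truth), grouped into bins of the true value."""
    sq_errors = defaultdict(list)
    for t, p in zip(truth, pred):
        sq_errors[int(t // bin_width)].append((p - t) ** 2)
    # Map each bin's lower edge to the RMSE of the pairs falling in it.
    return {
        b * bin_width: math.sqrt(sum(v) / len(v))
        for b, v in sorted(sq_errors.items())
    }

# Made-up intensities: two weak cases, two strong cases, 20-unit bins.
result = binned_rmse([10, 12, 35, 38], [11, 14, 30, 40], 20)
```

Reporting error per bin rather than one aggregate number exposes exactly the kind of category-dependent behavior (e.g., a small, hard-to-generalize category 5 sample) that the excerpt discusses.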