Search Results

Showing 1–2 of 2 items for:

  • Model performance/evaluation
  • Weather and Forecasting
  • The 1st NOAA Workshop on Leveraging AI in the Exploitation of Satellite Earth Observations & Numerical Weather Prediction
John L. Cintineo, Michael J. Pavolonis, Justin M. Sieglaff, Anthony Wimmers, Jason Brunner, and Willard Bellon

… trained model, which is useful in selecting hyperparameters (see section 2d). However, by choosing hyperparameter values that optimize performance on the validation set, the hyperparameters can be overfit to the validation set, just as model weights (those adjusted by training) can be overfit to the training set. Thus, the selected model is also evaluated on the testing set, which is independent of the data used to fit both the model weights and the hyperparameters. c. Model architecture — CNNs use a …
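The snippet's point about overfitting hyperparameters to the validation set can be illustrated with a minimal sketch (not the authors' code; the synthetic data, ridge model, and candidate values are assumptions for illustration): hyperparameters are chosen by validation score, and only the final model touches the test set.

```python
# Sketch: train/validation/test roles in hyperparameter selection.
import numpy as np

rng = np.random.default_rng(0)

def make_split(n):
    # Hypothetical 1D regression data: y = 3x + noise.
    x = rng.uniform(-1, 1, n)
    y = 3 * x + rng.normal(0, 0.1, n)
    return x, y

def fit_ridge(x, y, lam):
    # Closed-form ridge slope for a single feature (the "model weights").
    return (x @ y) / (x @ x + lam)

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

x_tr, y_tr = make_split(200)   # fits the model weights
x_va, y_va = make_split(50)    # selects the hyperparameter
x_te, y_te = make_split(50)    # touched once, at the very end

# Choose the regularization strength that minimizes validation error ...
lams = [0.0, 0.1, 1.0, 10.0]
best_lam = min(lams, key=lambda lam: mse(fit_ridge(x_tr, y_tr, lam), x_va, y_va))

# ... then report the score on the independent test set, which was used
# to fit neither the weights nor the hyperparameter.
w = fit_ridge(x_tr, y_tr, best_lam)
test_error = mse(w, x_te, y_te)
```

Because `best_lam` was tuned against the validation set, the validation score is optimistically biased; `test_error` is the honest estimate.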

Full access
Eric D. Loken, Adam J. Clark, Amy McGovern, Montgomery Flora, and Kent Knopfmeier

…), probability of false detection (POFD), success ratio (SR), bias, and critical success index (CSI) can then be obtained [e.g., see Eqs. (3)–(7) in Loken et al. 2017]. These metrics form the basis of other forecast evaluation tools used herein, such as the ROC curve (Mason 1982) and performance diagram (Roebber 2009). ROC curves plot POD against POFD at multiple forecast probability thresholds (here, 1%, 2%, and 5%–95% in intervals of 5%). Area under the ROC curve (AUC) provides a measure of forecast …
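These scores all derive from the 2×2 contingency table of hits, misses, false alarms, and correct negatives. A minimal sketch (not the paper's code; the counts and ROC points are made-up values for illustration) of the standard definitions, plus a trapezoidal area under a set of (POFD, POD) points:

```python
# Sketch: contingency-table forecast metrics and a trapezoidal ROC area.
import numpy as np

def contingency_scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                    # probability of detection
    sr = hits / (hits + false_alarms)               # success ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, sr, bias, csi

def pofd(false_alarms, correct_negatives):
    # Probability of false detection (false alarm rate).
    return false_alarms / (false_alarms + correct_negatives)

def roc_auc(pofd_pts, pod_pts):
    # Trapezoidal area under (POFD, POD) points from a sweep of probability
    # thresholds, closed off with the (0, 0) and (1, 1) endpoints.
    # Sorting each axis assumes POD rises monotonically with POFD.
    x = np.concatenate([[0.0], np.sort(pofd_pts), [1.0]])
    y = np.concatenate([[0.0], np.sort(pod_pts), [1.0]])
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))

# Hypothetical counts at one probability threshold:
pod, sr, bias, csi = contingency_scores(hits=80, misses=20, false_alarms=40)
# pod = 0.8, sr ≈ 0.667, bias = 1.2, csi ≈ 0.571

# Hypothetical (POFD, POD) pairs from three thresholds:
auc = roc_auc([0.1, 0.3, 0.6], [0.5, 0.8, 0.95])
```

An AUC of 0.5 corresponds to a no-skill forecast (the ROC diagonal), which is why values well above 0.5 are read as discriminating skill.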

Full access