Browse

Showing items 141–146 of 146 for: Artificial Intelligence for the Earth Systems
Michaela Vorndran, Adrian Schütz, Jörg Bendix, and Boris Thies

Abstract

Large errors remain in predicting fog formation, dissipation, and duration. To address these deficiencies, machine learning (ML) algorithms are increasingly used in nowcasting alongside numerical fog forecasts because of their computational speed and their ability to learn nonlinear interactions between variables. Although powerful, ML models require careful training and thorough evaluation to prevent misinterpretation of the scores. In addition, the temporal order of a fog dataset and the autocorrelation of its variables must be considered. This study therefore demonstrates pitfalls of classification-based ML fog forecasting using an XGBoost fog forecasting model. By also using two baseline models that simulate guessing and persistence behavior, we establish two independent evaluation thresholds that allow the ML model's performance to be graded more meaningfully. We show that, despite high validation scores, the model can still fail in operational application. If the model merely reproduces persistence behavior, commonly used scores are insufficient to measure its performance. This is demonstrated through a separate analysis of fog formation and dissipation, because these transitions are crucial for a good fog forecast. We also show that the commonly used blockwise and leave-many-out cross-validation methods can inflate validation scores and are therefore less suitable than a temporally ordered expanding-window split. The presented approach provides an evaluation score that closely reflects not only the performance on the training and test datasets but also the operational model's fog forecasting abilities.
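To make the cross-validation point concrete, the following is a minimal sketch of a temporally ordered expanding-window evaluation compared against a persistence baseline. The synthetic data, the feature names (visibility, rel_humidity, wind_speed), and the XGBoost settings are illustrative assumptions, not the study's actual configuration; scikit-learn's TimeSeriesSplit is used here as one readily available expanding-window splitter.

```python
# Sketch: expanding-window evaluation of an ML fog classifier versus a
# persistence baseline. All data, features, and settings are placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

# Hypothetical time-ordered dataset: one row per observation time,
# binary target "fog" (1 = fog present at the forecast lead time).
rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "visibility": rng.uniform(100, 10000, n),
    "rel_humidity": rng.uniform(40, 100, n),
    "wind_speed": rng.uniform(0, 10, n),
})
y = (X["rel_humidity"] > 97).astype(int).to_numpy()  # toy fog label

# Expanding-window split: each fold trains on all data up to a cutoff
# and validates on the block that follows, preserving temporal order.
tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    model.fit(X.iloc[train_idx], y[train_idx])
    pred = model.predict(X.iloc[test_idx])

    # Persistence baseline: forecast the fog state of the previous time step.
    persistence = np.roll(y, 1)[test_idx]

    print(f"fold {fold}: model F1={f1_score(y[test_idx], pred):.2f}, "
          f"persistence F1={f1_score(y[test_idx], persistence):.2f}")
```

Because each fold only ever validates on data that lies after its training period, temporally autocorrelated samples cannot leak from the validation block into training, which is the inflation mechanism the abstract attributes to blockwise and leave-many-out splits.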

Significance Statement

This study points out current pitfalls in the training and evaluation of pointwise radiation fog forecasting with machine learning algorithms. Its objective is to raise awareness of 1) the need to account for the temporal stability of variables (autocorrelation) during training and evaluation, 2) the necessity of comparing a fog forecasting model directly against an independent performance threshold (baseline model) that shows whether it performs better than guessing, and 3) the fact that predictions of fog formation and dissipation must be evaluated separately, because a model that misses all of these transitions can still achieve high scores in the commonly used overall evaluation, as illustrated in the sketch below.
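As an illustration of point 3, the short sketch below scores formation and dissipation transitions separately for a binary fog time series; the arrays and the transition-hit definition are illustrative assumptions rather than the study's exact verification method. It shows that a pure persistence forecast misses every transition by construction, even though its overall accuracy can be high.

```python
# Sketch: separate verification of fog formation and dissipation events.
import numpy as np

def transition_hit_rate(obs, pred, kind="formation"):
    """Hit rate restricted to observed transition time steps.

    obs, pred: 1-D binary arrays (1 = fog) on the same time axis.
    kind: "formation" (0 -> 1) or "dissipation" (1 -> 0).
    """
    prev, curr = obs[:-1], obs[1:]
    if kind == "formation":
        events = (prev == 0) & (curr == 1)
    else:
        events = (prev == 1) & (curr == 0)
    if events.sum() == 0:
        return np.nan
    # A transition counts as a hit if the forecast matches the new state.
    return np.mean(pred[1:][events] == curr[events])

# A persistence forecast always predicts the previous state,
# so it misses every transition by construction:
obs = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0])
persistence = np.roll(obs, 1)
print(transition_hit_rate(obs, persistence, "formation"))    # 0.0
print(transition_hit_rate(obs, persistence, "dissipation"))  # 0.0
```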

Free access
Arnaud Mounier, Laure Raynaud, Lucie Rottner, Matthieu Plu, Philippe Arbogast, Michaël Kreitz, Léo Mignan, and Benoît Touzé

Abstract

Bow echoes (BEs) are bow-shaped lines of convective cells that are often associated with swaths of damaging straight-line winds and small tornadoes. This paper describes a convolutional neural network (CNN) able to detect BEs directly from French kilometer-scale model outputs in order to facilitate and accelerate the operational forecasting of BEs. The detections are based solely on the maximum pseudoreflectivity field as a predictor (“pseudo” because it is expressed in mm h−1 and not in dBZ). The training database is preprocessed to reduce the imbalance between the two classes (inside or outside a bow echo), and a sensitivity analysis of the CNN to a set of hyperparameters is performed. The selected CNN configuration has a hit rate of 86% and a false alarm rate of 39%. The strengths and weaknesses of this CNN are then highlighted with an object-oriented evaluation. The largest BE pseudoreflectivities are correctly detected by the CNN, although it tends to underestimate the size of BEs. Detected BE objects have wind gusts similar to those of the hand-labeled BEs. Most of the time, false alarm objects and missed objects are rather small (e.g., <1500 km2). In cooperation with forecasters, synthesis plots are proposed that summarize the BE detections in the French kilometer-scale models. A subjective evaluation of the CNN's performance is also reported. The overall positive feedback from forecasters agrees well with the object-oriented evaluation. Forecasters perceive these products as relevant and potentially useful for handling the large amount of data available from numerical weather prediction models.
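For orientation, the following is a minimal sketch of the kind of pixelwise CNN detector described, taking a single pseudoreflectivity field and returning an inside/outside bow-echo probability per grid point, together with one common way to compute a hit rate and a false alarm ratio from a binary contingency table. The architecture, the class weight, and the score definitions are assumptions for illustration and do not reproduce the paper's configuration.

```python
# Sketch: pixelwise bow-echo detection from a pseudoreflectivity field.
import torch
import torch.nn as nn

class BowEchoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, x):   # x: (batch, 1, ny, nx) pseudoreflectivity (mm/h)
        return self.net(x)  # logits with the same spatial shape

model = BowEchoCNN()
# Up-weight the rare "inside bow echo" class to reduce the class imbalance
# (the weight of 20 is an arbitrary illustrative choice).
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(20.0))

# Toy usage on one hypothetical reflectivity patch.
field = torch.randn(1, 1, 64, 64)
prob = torch.sigmoid(model(field))      # per-pixel bow-echo probability
pred_mask = (prob > 0.5).int()

def scores(pred, obs):
    """Hit rate and false alarm ratio from binary masks.

    These are common contingency-table definitions; the paper's exact
    definition of its 39% false alarm rate may differ.
    """
    hits = ((pred == 1) & (obs == 1)).sum()
    misses = ((pred == 0) & (obs == 1)).sum()
    false_alarms = ((pred == 1) & (obs == 0)).sum()
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    return float(hit_rate), float(false_alarm_ratio)
```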

Free access
Amy McGovern and Anthony J. Broccoli
Free access