Browse: Artificial Intelligence for the Earth Systems
Showing items 101–110 of 140.

Louis Le Toumelin, Isabelle Gouttevin, Nora Helbig, Clovis Galiez, Mathis Roux, and Fatima Karbou

Abstract

Estimating the impact of wind-driven snow transport requires modeling wind fields at a grid spacing finer than the 1 to a few kilometers used in current numerical weather prediction (NWP) systems. In this context, we introduce a new strategy to downscale wind fields from NWP systems to decametric scales, using high-resolution (30 m) topographic information. Our method (named “DEVINE”) builds on a convolutional neural network (CNN) trained to replicate the behavior of the complex atmospheric model ARPS, which was previously run on a large number (7279) of synthetic Gaussian topographies under controlled weather conditions. A 10-fold cross validation reveals that our CNN accurately emulates the behavior of ARPS (mean absolute error for wind speed = 0.16 m s⁻¹). We then apply DEVINE to real cases in the Alps, that is, downscaling wind fields forecast by the AROME NWP system using information from real alpine topographies. DEVINE proved able to reproduce the main features of wind fields in complex terrain (acceleration on ridges, leeward deceleration, and deviations around obstacles). Furthermore, an evaluation against quality-checked observations acquired at 61 sites in the French Alps reveals improved behavior of the downscaled winds (the AROME wind speed mean bias is reduced by 27% with DEVINE), especially at the most elevated and exposed stations. Wind direction is, however, only slightly modified. Hence, despite some current limitations inherited from the ARPS simulation setup, DEVINE appears to be an efficient downscaling tool whose minimalist architecture, low input data requirements (NWP wind fields and high-resolution topography), and competitive computing times may be attractive for operational applications.
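
The abstract describes a CNN that emulates ARPS on wind-direction-aligned topography and is then applied to real terrain. The sketch below illustrates that pipeline under stated assumptions: the network predicts a per-pixel, dimensionless speed-up factor from a rotated DEM patch, which then multiplies the NWP wind speed. The architecture, patch handling, and `SpeedUpCNN`/`downscale_wind` names are illustrative, not the authors' exact configuration.

```python
# A minimal sketch of DEVINE-style downscaling, assuming the CNN outputs a
# per-pixel speed-up factor for a high-resolution DEM patch rotated into the
# incoming wind direction. Layer sizes are illustrative.
import torch
import torch.nn as nn
from scipy.ndimage import rotate

class SpeedUpCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # dimensionless speed-up factor
        )

    def forward(self, dem):  # dem: (B, 1, H, W) 30-m topography patch
        return self.net(dem)

def downscale_wind(cnn, dem_patch, nwp_speed, nwp_dir_deg):
    """Downscale one NWP wind value onto a high-resolution DEM patch."""
    # Rotate the DEM so the incoming flow direction is fixed, mirroring the
    # controlled orientation of the synthetic training topographies.
    rotated = rotate(dem_patch, angle=nwp_dir_deg, reshape=False, order=1)
    x = torch.as_tensor(rotated, dtype=torch.float32)[None, None]
    factor = cnn(x)                   # acceleration/deceleration field
    return nwp_speed * factor         # decametric-scale wind speed map
```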

Significance Statement

Wind largely influences the spatial distribution of snow in mountains, with direct consequences on hydrology and avalanche hazard. Most operational models predicting wind in complex terrain use a grid spacing on the order of several kilometers, too coarse to represent the real patterns of mountain winds. We introduce a novel method based on deep learning to increase this spatial resolution while maintaining acceptable computational costs. Our method mimics the behavior of a complex model that is able to represent part of the complexity of mountain winds by using topographic information only. We compared our results with observations collected in complex terrain and showed that our model improves the representation of winds, notably at the most elevated and exposed observation stations.

Open access
Ignacio Lopez-Gomez, Amy McGovern, Shreya Agrawal, and Jason Hickey

Abstract

Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reductions in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium, and subseasonal time scales. To this end, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ∼200-km resolution and on the cubed sphere. The NWMs are trained using the ERA5 reanalysis product and a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM outperforms persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and it shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.
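
To make the loss-function comparison concrete, here is a hedged sketch of one plausible extreme-emphasizing loss next to the standard mean-square error. The exact exponential losses used in the paper are not reproduced here; this variant simply weights squared errors by an exponential of the standardized target anomaly magnitude, with `tau` (an assumed parameter) controlling how strongly extremes dominate, and it is symmetric in the anomaly sign.

```python
# A plausible extreme-weighted loss, in the spirit of the exponential losses
# described above (not the paper's exact functional form).
import torch

def mse_loss(pred, target):
    return ((pred - target) ** 2).mean()

def extreme_weighted_loss(pred, target, tau=1.0):
    # Symmetric: hot and cold extremes both receive large weights.
    # target is assumed to be a standardized temperature anomaly.
    weights = torch.exp(target.abs() / tau)
    return (weights * (pred - target) ** 2).mean()
```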

Significance Statement

Heatwaves are projected to become stronger and more frequent as a result of global warming. Accurate forecasting of these events would enable the implementation of effective mitigation strategies. Here we analyze the forecast accuracy of artificial intelligence systems trained on historical surface temperature data to predict extreme heat events globally, 1 to 28 days ahead. We find that artificial intelligence systems trained to focus on extreme temperatures are significantly more accurate at predicting heatwaves than systems trained to minimize errors in surface temperatures and remain equally skillful at predicting moderate temperatures. Furthermore, the extreme-focused systems compete with state-of-the-art physics-based forecast systems in the subseasonal range, while incurring a much lower computational cost.

Open access
Mohamad Abed El Rahman Hammoud, Humam Alwassel, Bernard Ghanem, Omar Knio, and Ibrahim Hoteit

Abstract

Backward-in-time predictions are needed to better understand the underlying dynamics of physical fluid flows and to improve future forecasts. However, integrating fluid flows backward in time is challenging because of numerical instabilities caused by the diffusive nature of fluid systems and the nonlinearities of the governing equations. Although this problem has long been addressed by using a nonpositive diffusion coefficient when integrating backward, that approach is notoriously inaccurate. In this study, a physics-informed deep neural network (PI-DNN) is presented to predict past states of a dissipative dynamical system from snapshots of the subsequent evolution of the system state. The performance of the PI-DNN is investigated through several systematic numerical experiments, and the accuracy of the backward-in-time predictions is evaluated in terms of different error metrics. The proposed PI-DNN can predict the previous state of Rayleigh–Bénard convection with an 8-time-step average normalized ℓ2 error of less than 2% for a turbulent flow at a Rayleigh number of 10⁵.
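
The physics-informed idea can be shown in a few lines: fit a network u(x, t) to later-time snapshots while penalizing the residual of the governing equation, then evaluate the network at earlier times. In this hedged sketch a 1D heat equation stands in for the paper's Rayleigh-Bénard system; the network size, diffusivity `nu`, and loss weight `lam` are all illustrative assumptions.

```python
# A minimal physics-informed sketch: data misfit on *subsequent* snapshots
# plus a PDE residual penalty from automatic differentiation, so evaluating
# the trained network at earlier t gives a backward-in-time prediction.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
nu = 0.01  # diffusivity (assumed known)

def pde_residual(x, t):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - nu * u_xx            # zero for an exact solution

def loss(x_obs, t_obs, u_obs, x_col, t_col, lam=1.0):
    data = ((net(torch.cat([x_obs, t_obs], dim=1)) - u_obs) ** 2).mean()
    phys = (pde_residual(x_col, t_col) ** 2).mean()
    return data + lam * phys

# Placeholder observation and collocation points (collocation may cover
# earlier times than the observed snapshots).
x_obs, t_obs, u_obs = torch.rand(100, 1), torch.rand(100, 1), torch.rand(100, 1)
x_col, t_col = torch.rand(200, 1), torch.rand(200, 1)
total = loss(x_obs, t_obs, u_obs, x_col, t_col)
```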

Open access
Masahiro Momoi, Shunji Kotsuki, Ryota Kikuchi, Satoshi Watanabe, Masafumi Yamada, and Shiori Abe

Abstract

Predicting the spatial distribution of maximum inundation depth (depth-MAP) is important for the mitigation of hydrological disasters induced by extreme precipitation. However, physics-based rainfall–runoff-inundation (RRI) models, which are used operationally to predict hydrological disasters in Japan, require massive computational resources for numerical simulations. Here, we aimed to develop a computationally inexpensive deep learning model (Rain2Depth) that emulates an RRI model. Our study focused on the Omono River (Akita Prefecture, Japan) and predicted the depth-MAP from spatial and temporal rainfall data for individual events. Rain2Depth is based on a convolutional neural network (CNN) and predicts the depth-MAP from 7-day successive hourly rainfall at 13 rain gauge stations in the basin. To train Rain2Depth, we simulated the depth-MAP with the RRI model forced by 50 ensembles of 30-yr data from large-ensemble weather/climate predictions. Instead of using the input and output data directly, we extracted important features from both with two dimensionality-reduction techniques [principal component analysis (PCA) and a CNN approach] prior to training the network, aiming to avoid overfitting caused by insufficient training data. The nonlinear CNN approach was superior to the linear PCA for extracting features. Finally, the Rain2Depth architecture was built by connecting the extracted input and output features through a neural network. Rain2Depth-based predictions were more accurate than predictions from our previous model (K20), which used ensemble learning of multiple regularized regressions for a specific station. Whereas K20 can predict maximum inundation depth only at stations, our approach achieves depth-MAP prediction by training the single model Rain2Depth.
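
The reduce-then-connect design can be sketched compactly. Below, linear PCA (the paper's weaker baseline, used here for brevity in place of the better-performing CNN encoder) compresses the depth-MAP fields, a regressor maps rainfall inputs to the reduced coefficients, and the inverse transform reconstructs a full map. Shapes, component counts, and the random placeholder data are assumptions for illustration.

```python
# A hedged sketch of the dimensionality-reduction pipeline behind Rain2Depth,
# with PCA standing in for the CNN feature extractor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((1500, 7 * 24 * 13))   # hourly rain, 13 gauges, 7 days/event
Y = rng.random((1500, 64 * 64))       # flattened simulated depth-MAP fields

pca = PCA(n_components=32).fit(Y)     # compress the output fields
coeffs = pca.transform(Y)             # (n_events, 32) reduced features

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
model.fit(X, coeffs)                  # connect rainfall to depth features

depth_map = pca.inverse_transform(model.predict(X[:1]))  # full map again
```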

Open access
Lander Ver Hoef, Henry Adams, Emily J. King, and Imme Ebert-Uphoff

Abstract

Topological data analysis (TDA) is a tool from data science and mathematics that is beginning to make waves in environmental science. In this work, we seek to provide an intuitive and understandable introduction to a tool from TDA that is particularly useful for the analysis of imagery, namely, persistent homology. We briefly discuss the theoretical background but focus primarily on understanding the output of this tool and discussing what information it can glean. To this end, we frame our discussion around a guiding example of classifying satellite images from the sugar, fish, flower, and gravel dataset produced for the study of mesoscale organization of clouds by Rasp et al. We demonstrate how persistent homology and its vectorization, persistence landscapes, can be used in a workflow with a simple machine learning algorithm to obtain good results, and we explore in detail how we can explain this behavior in terms of image-level features. One of the core strengths of persistent homology is how interpretable it can be, so throughout this paper we discuss not just the patterns we find but why those results are to be expected given what we know about the theory of persistent homology. Our goal is that readers of this paper will leave with a better understanding of TDA and persistent homology, will be able to identify problems and datasets of their own for which persistent homology could be helpful, and will gain an understanding of the results they obtain from applying the included GitHub example code.
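
Since the abstract leans on persistence landscapes as the vectorization step, here is a self-contained sketch of that construction: given a persistence diagram of (birth, death) pairs, the k-th landscape function at t is the k-th largest "tent" value max(0, min(t − b, d − t)). The diagram below is hypothetical; in practice it would come from persistent homology software applied to an image.

```python
# A minimal NumPy persistence landscape, the vectorization discussed above.
import numpy as np

def persistence_landscape(diagram, ts, k_max=3):
    """diagram: (n, 2) array of (birth, death); ts: 1D grid of t values."""
    b, d = diagram[:, 0][:, None], diagram[:, 1][:, None]        # (n, 1)
    tents = np.maximum(0.0, np.minimum(ts[None, :] - b, d - ts[None, :]))
    tents.sort(axis=0)                # ascending tent heights at each t
    k = min(k_max, tents.shape[0])
    return tents[::-1][:k]            # k largest heights: (k, len(ts))

dgm = np.array([[0.1, 0.9], [0.2, 0.5], [0.4, 0.6]])  # hypothetical diagram
landscape = persistence_landscape(dgm, np.linspace(0.0, 1.0, 101))
# Each row of `landscape` can be fed to a simple ML classifier as a feature.
```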

Significance Statement

Information such as the geometric structure and texture of image data can greatly support the inference of the physical state of an observed Earth system, for example, in remote sensing to determine whether wildfires are active or to identify local climate zones. Persistent homology is a branch of topological data analysis that allows one to extract such information in an interpretable way—unlike black-box methods like deep neural networks. The purpose of this paper is to explain in an intuitive manner what persistent homology is and how researchers in environmental science can use it to create interpretable models. We demonstrate the approach to identify certain cloud patterns from satellite imagery and find that the resulting model is indeed interpretable.

Open access
Andrew Geiss and Joseph C. Hardin

Abstract

Recently, deep convolutional neural networks (CNNs) have revolutionized image “super resolution” (SR), dramatically outperforming past methods for enhancing image resolution. They could be a boon for the many scientific fields that involve imaging or any regularly gridded datasets: satellite remote sensing, radar meteorology, medical imaging, numerical modeling, and so on. Unfortunately, while SR-CNNs produce visually compelling results, they do not necessarily conserve physical quantities between their low-resolution inputs and high-resolution outputs when applied to scientific datasets. Here, a method for “downsampling enforcement” in SR-CNNs is proposed. A differentiable operator is derived that, when applied as the final transfer function of a CNN, ensures the high-resolution outputs exactly reproduce the low-resolution inputs under 2D-average downsampling while improving performance of the SR schemes. The method is demonstrated across seven modern CNN-based SR schemes on several benchmark image datasets, and applications to weather radar, satellite imager, and climate model data are shown. The approach improves training time and performance while ensuring physical consistency between the super-resolved and low-resolution data.
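
The constraint named in the abstract (high-resolution output reproducing the low-resolution input under 2D-average downsampling) can be enforced with a simple differentiable correction, shown below. This additive variant is an illustration of the idea, not necessarily the paper's exact operator; the scale factor `s` is assumed to be an integer.

```python
# One way to enforce the downsampling constraint: add back the upsampled
# residual between the low-resolution input and the pooled CNN output.
import torch
import torch.nn.functional as F

def enforce_downsampling(sr, lr, s):
    """sr: (B, C, s*H, s*W) raw CNN output; lr: (B, C, H, W) LR input."""
    residual = lr - F.avg_pool2d(sr, s)   # current constraint violation
    corrected = sr + F.interpolate(residual, scale_factor=s, mode="nearest")
    # F.avg_pool2d(corrected, s) now equals lr up to float precision,
    # because nearest upsampling spreads each residual over an s x s block.
    return corrected

sr = torch.randn(2, 1, 64, 64)            # placeholder super-resolved field
lr = torch.randn(2, 1, 16, 16)            # placeholder low-resolution input
out = enforce_downsampling(sr, lr, 4)
```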

Significance Statement

Recent advancements in using deep learning to increase the resolution of images have substantial potential across the many scientific fields that use images and image-like data. However, most image super-resolution research has focused on the visual quality of outputs and is not necessarily well suited for use with scientific data, where known physics constraints may need to be enforced. Here, we introduce a method to modify existing deep neural network architectures so that they strictly conserve physical quantities in the input field when “super resolving” scientific data, and we find that the method can improve performance across a wide range of datasets and neural networks. Integrating known physics and established physical constraints into deep neural networks will be a critical step before their potential can be fully realized in the physical sciences.

Open access
Yongquan Qu and Xiaoming Shi

Abstract

The development of machine learning (ML) techniques enables data-driven parameterizations, which have been investigated in many recent studies. Some investigations suggest that ML models trained a priori (offline) exhibit satisfactory accuracy during training but poor performance when coupled to dynamical cores and tested. Here we use the evolution of the barotropic vorticity equation (BVE) with periodically reinforced shear instability as a prototype problem to develop and evaluate a model-consistent training strategy, which employs a numerical solver that supports automatic differentiation and includes the solver in the loss function for training ML-based subgrid-scale (SGS) turbulence models. This approach enables interaction between the dynamical core and the ML-based parameterization during the model training phase. The BVE model was run at low, high, and ultrahigh (truth) resolutions. Our training dataset contains only a short period of coarsened high-resolution simulations. Nevertheless, given initial conditions long after the training dataset time, the trained SGS model can still significantly increase the effective lead time of the BVE model running at low resolution, by up to 50% relative to the BVE simulation without an SGS model. We also tested using a covariance matrix to normalize the loss function and found that it can notably boost the performance of the ML parameterization. The SGS model’s performance is further improved by conducting transfer learning with a limited number of discontinuous observations, increasing the forecast lead-time improvement to 73%. This study demonstrates a potential pathway to using machine learning to enhance the prediction skills of climate and weather models.
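
The essence of model-consistent training is that the solver sits inside the loss, so gradients flow through the rollout into the SGS network. The hedged sketch below uses a toy differentiable 1D diffusion step in place of the barotropic vorticity dynamical core; the network size, rollout length, and placeholder data are all assumptions.

```python
# Solver-in-the-loop training: compose a differentiable coarse solver step
# with an ML subgrid-scale (SGS) correction inside the training loss.
import torch
import torch.nn as nn

sgs_net = nn.Sequential(nn.Linear(64, 128), nn.Tanh(), nn.Linear(128, 64))
dt, nu = 0.1, 0.05

def coarse_step(u):
    # Toy dynamical core: explicit diffusion with periodic boundaries.
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)
    return u + dt * nu * lap

def rollout_loss(u0, truth):
    """truth: (T, 64) coarsened high-resolution states; u0: (64,)."""
    u, loss = u0, 0.0
    for t in range(truth.shape[0]):
        u = coarse_step(u) + sgs_net(u)   # solver + learned SGS term
        loss = loss + ((u - truth[t]) ** 2).mean()
    return loss

opt = torch.optim.Adam(sgs_net.parameters(), lr=1e-3)
u0, truth = torch.randn(64), torch.randn(16, 64)   # placeholder data
loss = rollout_loss(u0, truth)
loss.backward()                    # gradients flow through the solver steps
opt.step()
```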

Significance Statement

Numerical weather prediction is performed at limited resolution for computational feasibility, and the schemes that estimate unresolved processes are called parameterizations. We propose a strategy for developing better deep learning–based parameterizations in which an automatically differentiable numerical solver is employed as the dynamical core and interacts with the parameterization scheme during its training. Such a solver enables model-consistent deep learning, because the parameterization is trained with the direct target of making the forecast of the full numerical model (dynamical core and parameterization together) match observations as closely as possible. We demonstrate the feasibility and effectiveness of this strategy using a surrogate model and advocate that such machine learning–enabled numerical models provide a promising pathway to developing next-generation weather forecast and climate models.

Open access
Fatemeh Farokhmanesh, Kevin Höhlein, and Rüdiger Westermann

Abstract

Numerical simulations in Earth-system sciences consider a multitude of physical parameters in space and time, leading to severe input/output (I/O) bandwidth requirements and challenges in subsequent data analysis tasks. Deep learning–based identification of redundant parameters and prediction of those from other parameters, that is, variable-to-variable (V2V) transfer, has been proposed as an approach to lessening the bandwidth requirements and streamlining subsequent data analysis. In this paper, we examine the applicability of V2V to meteorological reanalysis data. We find that redundancies within pairs of parameter fields are limited, which hinders application of the original V2V algorithm. Therefore, we assess the predictive strength of reanalysis parameters by analyzing the learning behavior of V2V reconstruction networks in an ablation study. We demonstrate that efficient V2V transfer becomes possible when considering groups of parameter fields for transfer and propose an algorithm to implement this. We investigate further whether the neural networks trained in the V2V process can yield insightful representations of recurring patterns in the data. The interpretability of these representations is assessed via layerwise relevance propagation that highlights field areas and parameters of high importance for the reconstruction model. Applied to reanalysis data, this allows for uncovering mutual relationships between landscape orography and different regional weather situations. We see our approach as an effective means to reduce bandwidth requirements in numerical weather simulations, which can be used on top of conventional data compression schemes. The proposed identification of multiparameter features can spawn further research on the importance of regional weather situations for parameter prediction and also in other kinds of simulation data.
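
Group-wise V2V transfer, as described, amounts to training a small reconstruction network that predicts one parameter field from a stack of several others. The sketch below is a hedged illustration: the number of source fields, the architecture, and the placeholder data are assumptions, and the paper's actual networks and parameter groupings differ.

```python
# A minimal group-wise variable-to-variable (V2V) reconstruction network.
import torch
import torch.nn as nn

n_sources = 4  # e.g., a group of predictive reanalysis parameter fields

v2v_net = nn.Sequential(
    nn.Conv2d(n_sources, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),     # the reconstructed target field
)

sources = torch.randn(8, n_sources, 128, 128)  # placeholder field stack
target = torch.randn(8, 1, 128, 128)           # placeholder target field
loss = ((v2v_net(sources) - target) ** 2).mean()
# A trained v2v_net lets the target field be dropped from storage and
# regenerated on demand, reducing I/O bandwidth requirements.
```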

Open access
Yanan Duan, Sathish Akula, Sanjiv Kumar, Wonjun Lee, and Sepideh Khajehei

Abstract

The National Oceanic and Atmospheric Administration has developed a very high-resolution streamflow forecast using the National Water Model (NWM) for 2.7 million stream locations in the United States. However, considerable challenges exist in quantifying uncertainty at ungauged locations and in establishing forecast reliability. A data science approach is presented to address these challenges. The long-range daily streamflow forecasts are analyzed from December 2018 to August 2021 for Alabama and Georgia. The forecast is evaluated at 389 USGS stream-gauging locations using standard deterministic metrics. Next, the forecast errors are grouped using watersheds’ biophysical characteristics, including drainage area, land use, soil type, and topographic index. The NWM forecasts are more skillful for larger and forested watersheds than for smaller and urban watersheds, and they considerably overestimate streamflow in urban watersheds. A classification and regression tree analysis confirms the dependency of the forecast errors on the biophysical characteristics. A densely connected neural network model consisting of six layers [deep learning (DL)] is developed using the biophysical characteristics and the NWM forecast as inputs and the forecast errors as outputs. The DL model learns location-invariant, transferable knowledge from the gauged locations and applies it to estimate forecast errors at ungauged locations. A temporal and spatial split of the gauged data shows that the probability of capturing the observations within the forecast range improves significantly, from 21% ± 1% for the NWM-only forecast to 82% ± 3% for the hybrid NWM-DL model. A trade-off between the overly constrained NWM forecast and the increased forecast uncertainty range of the DL model is noted.
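
The error-correction component is a straightforward dense network, and a hedged sketch makes the input/output wiring explicit: biophysical characteristics plus the NWM forecast go in, an estimated forecast error comes out, and the correction can then be applied at ungauged sites. The feature count, layer widths, and the way the error is applied are illustrative assumptions.

```python
# A six-layer densely connected error model, as described in the abstract
# (widths and features are assumed for illustration).
import torch
import torch.nn as nn

n_features = 5  # e.g., drainage area, land use, soil type, topographic
                # index, and the NWM forecast itself

error_model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),               # estimated forecast error
)

def corrected_forecast(features, nwm_forecast):
    # Subtract the learned error estimate from the raw NWM forecast.
    return nwm_forecast - error_model(features).squeeze(-1)

feats = torch.randn(10, n_features)   # placeholder watershed descriptors
nwm = torch.randn(10)                 # placeholder NWM forecasts
est = corrected_forecast(feats, nwm)
```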

Significance Statement

A hybrid biophysical–artificial intelligence (physics–AI) model is developed from first principles to estimate streamflow forecast errors at ungauged locations, improving the forecast’s reliability. Here, “first principles” refers to identifying the need for the hybrid physics–AI model; determining physically interpretable and machine-identifiable model inputs; developing and evaluating the deep learning (DL) model; and, finally, interpreting the hybrid model biophysically. A very high-resolution National Water Model (NWM) forecast, developed by the National Oceanic and Atmospheric Administration, serves as the biophysical component of the hybrid model. Of 2.7 million daily forecasts, fewer than 1% can be verified using the traditional hydrological method of comparing the forecast with observations, motivating the need for AI techniques to improve forecast reliability at millions of ungauged locations. An exploratory analysis followed by a classification and regression tree analysis establishes the dependency of the forecast errors on the biophysical attributes, which, along with the NWM forecast, are used to develop the DL model. The hybrid model is evaluated in the subtropical humid climate of Alabama and Georgia in the United States. Streamflow forecasts from 0- to 30-day lead times are archived and analyzed for 979 days (December 2018–August 2021) at 389 USGS gauging stations. Forecast reliability is assessed as the probability of capturing the observations within the ensemble range; it increases from 21% (±1%) for the NWM-only forecasts to 82% (±3%) for the hybrid physics–AI model.

Open access
Antonios Mamalakis, Elizabeth A. Barnes, and Imme Ebert-Uphoff

Abstract

Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our “lesson learned” that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes—a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method’s algorithm—often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.
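
To see exactly where the baseline enters, consider integrated gradients, one common XAI attribution method: attributions are (x − baseline) times the path-averaged gradient, so a zero map and a climatological-mean map as baselines generally yield different attributions for the same prediction. The model, input, and baselines in this sketch are placeholders, and the paper examines several XAI methods beyond this one.

```python
# A minimal integrated-gradients sketch showing the baseline dependence.
import torch

def integrated_gradients(model, x, baseline, steps=50):
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # straight-line path to x
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grad = path.grad.mean(dim=0)            # path-averaged gradient
    return (x - baseline) * avg_grad            # per-feature attribution

model = torch.nn.Linear(64, 1)                  # placeholder network
x = torch.randn(64)                             # e.g., a temperature map
attr_zero = integrated_gradients(model, x, torch.zeros_like(x))
attr_clim = integrated_gradients(model, x, torch.randn(64))  # placeholder
# attr_zero and attr_clim generally differ: each baseline answers a
# different science question about the same prediction.
```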

Significance Statement

In recent years, methods of explainable artificial intelligence (XAI) have found great application in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and interpret them physically. Here, we highlight that the attributions—and the physical interpretation—depend greatly on the choice of the baseline—a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.

Open access