Browse

You are looking at 121–130 of 164 items for: Artificial Intelligence for the Earth Systems
Adam B. Milstein, Joseph A. Santanello, and William J. Blackwell

Abstract

In recent decades, spaceborne microwave and hyperspectral infrared sounding instruments have significantly benefited weather forecasting and climate science. However, existing retrievals of lower-troposphere temperature and humidity profiles have limited vertical resolution and often cannot accurately represent key features such as the mixed-layer thermodynamic structure and the inversion at the planetary boundary layer (PBL) top. These limitations in PBL remote sensing from space create a compelling need to improve routine, global observations of the PBL and thereby enable advances in scientific understanding and in weather and climate prediction. To address this, we have developed a new 3D deep neural network (DNN) that enhances detail and reduces noise in level 2 granules of temperature and humidity profiles from the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU) instruments aboard NASA’s Aqua spacecraft. We show that the enhancement improves accuracy and detail, including key features such as capping inversions at the top of the PBL over land, resulting in more accurate estimates of PBL height.
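
The abstract does not spell out the network design, so purely as a hedged sketch of what a residual 3D CNN over sounder granules could look like (all channel and layer counts here are assumptions), in PyTorch:

```python
import torch.nn as nn

class Enhance3D(nn.Module):
    """Illustrative 3D enhancement network (not the paper's architecture).
    Input channels: (temperature, humidity); spatial dims: (pressure level,
    along-track, across-track) within a level 2 granule."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 2, 3, padding=1),
        )

    def forward(self, x):
        # residual design: predict a correction to the existing retrieval,
        # which sharpens detail (e.g., inversions) while suppressing noise
        return x + self.net(x)
```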

Open access
Maria Reinhardt, Sybille Y. Schoger, Frederik Kurzrock, and Roland Potthast

Abstract

This paper presents an innovative way of assimilating cloud observations into the icosahedral nonhydrostatic weather forecasting model for regional scale (ICON-D2), which is operated by the German Weather Service (Deutscher Wetterdienst, DWD). A convolutional neural network (CNN) is trained to detect clouds in camera photographs. The network’s output is a grayscale picture in which each pixel has a value between 0 and 1, describing the probability of the pixel belonging to a cloud (1) or not (0). By averaging over a box of pixels, a cloud-cover value for that region is obtained. A forward operator is built to map an ICON model state into the observation space: a three-dimensional grid in the space of the camera’s perspective is constructed, and the ICON model variable cloud cover (CLC) is interpolated onto that grid. The maximum CLC along the rays that form the camera grid is taken as a model equivalent for each pixel. After superobbing, monitoring experiments were conducted to compare the observations and model equivalents over a longer time period, yielding promising results. Further, we show the performance of a single assimilation step as well as a longer assimilation experiment over a period of 6 days, which also yields good results. These findings are a proof of concept, and further research is required before these new observations can be assimilated operationally in any numerical weather prediction (NWP) model.
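
A minimal numpy sketch of the forward operator as described, assuming the ICON cloud cover has already been interpolated onto points along each camera pixel's viewing ray (the array shape and box size are hypothetical):

```python
import numpy as np

def model_equivalent(clc_on_rays, box=8):
    """Toy forward operator. clc_on_rays: array of shape
    (n_px_y, n_px_x, n_steps) holding ICON cloud cover (CLC, 0..1)
    interpolated onto points along each camera pixel's ray."""
    pixel_cloud = clc_on_rays.max(axis=-1)   # max CLC along each ray
    h, w = pixel_cloud.shape
    h, w = h // box * box, w // box * box    # crop to a multiple of the box
    # superobbing: average box x box pixel regions into cloud-cover values
    return (pixel_cloud[:h, :w]
            .reshape(h // box, box, w // box, box)
            .mean(axis=(1, 3)))
```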

Open access
Jiaxin Chen, Ian G. C. Ashton, Edward C. C. Steele, and Ajit C. Pillai

Abstract

The safe and successful operation of offshore infrastructure relies on a detailed awareness of ocean wave conditions. Ongoing growth in offshore wind energy is focused on very large-scale projects deployed in ever more challenging environments, which inherently increases both cost and complexity and therefore the requirement for efficient operational planning. To support this, we propose a new machine learning framework for the short-term forecasting of ocean wave conditions to support critical decision-making associated with marine operations. Here, an attention-based long short-term memory (LSTM) neural network approach is used to learn the short-term temporal patterns from in situ observations. This is then integrated with an existing, low computational cost spatial nowcasting model to develop a complete framework for spatiotemporal forecasting. The framework addresses the challenge of filling gaps in the in situ observations and undertakes feature selection, with seasonal training datasets embedded. The full spatiotemporal forecasting system is demonstrated using a case study based on independent observation locations near the southwest coast of the United Kingdom. Results are validated against in situ data from two wave buoy locations within the domain and compared to operational physics-based wave forecasts from the Met Office (the United Kingdom’s national weather service). For these two example locations, the spatiotemporal forecast achieves R² = 0.9083 and 0.7409 in forecasting 1-h-ahead significant wave height and R² = 0.8581 and 0.6978 in 12-h-ahead forecasts, respectively. Importantly, this represents a respectable level of accuracy, comparable to traditional physics-based forecast products, while requiring only a fraction of the computational resources.

Significance Statement

Spectral wave models, based on modeling the underlying physics and physical processes, are traditionally used to generate wave forecasts but come at significant computational cost. In this study, we propose a machine learning forecasting framework developed using both in situ buoy observations and a surrogate regional numerical wave model. The proposed framework is validated against in situ measurements at two renewable energy sites and found to have very similar 12-h forecasting errors when benchmarked against the Met Office’s physics-based forecasting model, while requiring far less computational power. The framework is highly flexible, offers a low-cost, computationally light approach to the provision of short-term forecasts, and can operate with other types of observations and other machine learning algorithms to improve the availability and accuracy of the prediction.
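
As a hedged PyTorch sketch of an attention-based LSTM of the kind described (the authors' exact architecture and feature set are not given here; hidden sizes are assumptions):

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """Illustrative encoder: attends over LSTM hidden states of a buoy
    time series to forecast significant wave height at a fixed lead time."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)             # weighted sum of states
        return self.head(context)                # e.g., Hs 1 h or 12 h ahead
```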

Open access
Louis Le Toumelin, Isabelle Gouttevin, Nora Helbig, Clovis Galiez, Mathis Roux, and Fatima Karbou

Abstract

Estimating the impact of wind-driven snow transport requires modeling wind fields at a grid spacing finer than the 1 km to a few kilometers used in current numerical weather prediction (NWP) systems. In this context, we introduce a new strategy to downscale wind fields from NWP systems to decametric scales, using high-resolution (30 m) topographic information. Our method (named “DEVINE”) leverages a convolutional neural network (CNN) trained to replicate the behavior of the complex atmospheric model ARPS, which was previously run on a large number (7279) of synthetic Gaussian topographies under controlled weather conditions. A 10-fold cross validation reveals that our CNN is able to accurately emulate the behavior of ARPS (mean absolute error for wind speed = 0.16 m s⁻¹). We then apply DEVINE to real cases in the Alps, that is, downscaling wind fields forecast by the AROME NWP system using information from real alpine topographies. DEVINE proved able to reproduce the main features of wind fields in complex terrain (acceleration on ridges, leeward deceleration, and deviations around obstacles). Furthermore, an evaluation against quality-checked observations acquired at 61 sites in the French Alps reveals improved behavior of the downscaled winds (the AROME wind speed mean bias is reduced by 27% with DEVINE), especially at the most elevated and exposed stations. Wind direction is, however, only slightly modified. Hence, despite some current limitations inherited from the ARPS simulation setup, DEVINE appears to be an efficient downscaling tool whose minimalist architecture, low input data requirements (NWP wind fields and high-resolution topography), and competitive computing times may be attractive for operational applications.

Significance Statement

Wind largely influences the spatial distribution of snow in mountains, with direct consequences on hydrology and avalanche hazard. Most operational models predicting wind in complex terrain use a grid spacing on the order of several kilometers, too coarse to represent the real patterns of mountain winds. We introduce a novel method based on deep learning to increase this spatial resolution while maintaining acceptable computational costs. Our method mimics the behavior of a complex model that is able to represent part of the complexity of mountain winds by using topographic information only. We compared our results with observations collected in complex terrain and showed that our model improves the representation of winds, notably at the most elevated and exposed observation stations.
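
To make the idea concrete, one way such an emulator could be wired up (a hedged sketch, not the published DEVINE architecture; channel counts and the multiplicative form are assumptions) is a fully convolutional network mapping a topography patch to acceleration factors that modulate the coarse NWP wind:

```python
import torch.nn as nn

class DevineLike(nn.Module):
    """Hedged sketch of a topography-to-wind emulator: a fully
    convolutional network maps a 30-m topography patch to per-pixel
    acceleration factors for the coarse NWP wind."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),   # factors for (u, v, w)
        )

    def forward(self, topo, nwp_wind_speed):
        # topo: (batch, 1, H, W); nwp_wind_speed: (batch, 1, 1, 1)
        # downscaled wind = large-scale speed modulated by terrain effects
        return nwp_wind_speed * self.net(topo)
```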

Open access
Ignacio Lopez-Gomez, Amy McGovern, Shreya Agrawal, and Jason Hickey

Abstract

Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reductions in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium, and subseasonal time scales. To this end, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ∼200-km resolution and on the cubed sphere. The NWMs are trained on the ERA5 reanalysis product using a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM is able to outperform persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.

Significance Statement

Heatwaves are projected to become stronger and more frequent as a result of global warming. Accurate forecasting of these events would enable the implementation of effective mitigation strategies. Here we analyze the forecast accuracy of artificial intelligence systems trained on historical surface temperature data to predict extreme heat events globally, 1 to 28 days ahead. We find that artificial intelligence systems trained to focus on extreme temperatures are significantly more accurate at predicting heatwaves than systems trained to minimize errors in surface temperatures and remain equally skillful at predicting moderate temperatures. Furthermore, the extreme-focused systems compete with state-of-the-art physics-based forecast systems in the subseasonal range, while incurring a much lower computational cost.
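
The abstract does not give the exact functional forms, but a plausible pair of exponential losses that weight squared errors toward the tails (tau is a hypothetical scale parameter; these are illustrative, not necessarily the paper's definitions) would be:

```python
import torch

def exp_weighted_mse(pred, target, tau=1.0):
    """Squared errors weighted by an exponential of the (standardized)
    temperature anomaly, so warm extremes dominate the gradient."""
    w = torch.exp(target / tau)          # one-sided: emphasizes warm tails
    return (w * (pred - target) ** 2).mean()

def sym_exp_weighted_mse(pred, target, tau=1.0):
    # symmetric variant: emphasizes both warm and cold extremes, which
    # plausibly counteracts the smoothing of forecasts with lead time
    w = torch.exp(target.abs() / tau)
    return (w * (pred - target) ** 2).mean()
```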

Open access
Mohamad Abed El Rahman Hammoud, Humam Alwassel, Bernard Ghanem, Omar Knio, and Ibrahim Hoteit

Abstract

Backward-in-time predictions are needed to better understand the underlying dynamics of physical fluid flows and improve future forecasts. However, integrating fluid flows backward in time is challenging because of numerical instabilities caused by the diffusive nature of fluid systems and the nonlinearities of the governing equations. Although this problem has long been addressed by using a nonpositive diffusion coefficient when integrating backward, that approach is notoriously inaccurate. In this study, a physics-informed deep neural network (PI-DNN) is presented to predict past states of a dissipative dynamical system from snapshots of the subsequent evolution of the system state. The performance of the PI-DNN is investigated through several systematic numerical experiments, and the accuracy of the backward-in-time predictions is evaluated in terms of different error metrics. The proposed PI-DNN can predict the previous state of Rayleigh–Bénard convection with an 8-time-step average normalized ℓ2 error of less than 2% for a turbulent flow at a Rayleigh number of 10⁵.
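
A hedged sketch of the physics-informed idea, with simple 2D diffusion (periodic boundaries) standing in for the actual Rayleigh–Bénard dynamics: the network proposes a past state, and the loss penalizes the mismatch after re-integrating it forward through a differentiable solver. The network interface and step size are assumptions:

```python
import torch

def forward_step(u, dt=1e-3, nu=1.0):
    # one explicit finite-difference step of 2D diffusion, a stand-in
    # for the differentiable forward model of the dissipative system
    lap = (torch.roll(u, 1, -1) + torch.roll(u, -1, -1)
           + torch.roll(u, 1, -2) + torch.roll(u, -1, -2) - 4 * u)
    return u + dt * nu * lap

def pi_loss(net, snapshots, dt=1e-3):
    """snapshots: (n_t, H, W) observed subsequent evolution. The network
    (hypothetical interface) predicts the state one step before the first
    snapshot; integrating it forward must reproduce the observations."""
    u = net(snapshots)                   # predicted earlier state, (H, W)
    loss = 0.0
    for obs in snapshots:                # re-integrate and match each frame
        u = forward_step(u, dt)
        loss = loss + ((u - obs) ** 2).mean()
    return loss / len(snapshots)
```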

Open access
Masahiro Momoi, Shunji Kotsuki, Ryota Kikuchi, Satoshi Watanabe, Masafumi Yamada, and Shiori Abe

Abstract

Predicting the spatial distribution of maximum inundation depth (depth-MAP) is important for mitigating hydrological disasters induced by extreme precipitation. However, physics-based rainfall–runoff–inundation (RRI) models, which are used operationally to predict hydrological disasters in Japan, require massive computational resources. Here, we aimed to develop a computationally inexpensive deep learning model (Rain2Depth) that emulates an RRI model. Our study focused on the Omono River (Akita Prefecture, Japan) and predicted the depth-MAP from spatial and temporal rainfall data for individual events. Rain2Depth was developed based on a convolutional neural network (CNN) and predicts the depth-MAP from 7 days of successive hourly rainfall at 13 rain gauge stations in the basin. To train Rain2Depth, we simulated the depth-MAP with the RRI model forced by 50 ensemble members of 30-yr data from large-ensemble weather/climate predictions. Instead of using the input and output data directly, we extracted important features from both with two dimensionality reduction techniques [principal component analysis (PCA) and a CNN approach] prior to training the network, in order to avoid overfitting caused by insufficient training data. The nonlinear CNN approach was superior to the linear PCA at extracting features. Finally, the Rain2Depth architecture was built by connecting the extracted input and output features through a neural network. Rain2Depth-based predictions were more accurate than predictions from our previous model (K20), which used ensemble learning of multiple regularized regressions for a specific station. Whereas K20 can predict maximum inundation depth only at stations, our study achieved depth-MAP prediction by training the single model Rain2Depth.
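
As a toy illustration of the PCA-based variant described above (the paper found CNN features superior), with hypothetical array sizes and scikit-learn standing in for the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 7 * 24 * 13))   # hourly rain: 7 days x 13 gauges (toy)
Y = rng.random((200, 128 * 128))     # simulated depth-MAP, flattened (toy)

# reduce both spaces before learning the mapping, to limit overfitting
pca_in = PCA(n_components=20).fit(X)
pca_out = PCA(n_components=20).fit(Y)
Z_in, Z_out = pca_in.transform(X), pca_out.transform(Y)

# connect the two feature spaces with a small neural network
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(Z_in, Z_out)
depth_map = pca_out.inverse_transform(mlp.predict(Z_in[:1]))  # back to pixels
```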

Open access
Lander Ver Hoef, Henry Adams, Emily J. King, and Imme Ebert-Uphoff

Abstract

Topological data analysis (TDA) is a tool from data science and mathematics that is beginning to make waves in environmental science. In this work, we seek to provide an intuitive and understandable introduction to a tool from TDA that is particularly useful for the analysis of imagery, namely, persistent homology. We briefly discuss the theoretical background but focus primarily on understanding the output of this tool and discussing what information can be gleaned from it. To this end, we frame our discussion around a guiding example of classifying satellite images from the sugar, fish, flower, and gravel dataset produced for the study of mesoscale organization of clouds by Rasp et al. We demonstrate how persistent homology and its vectorization, persistence landscapes, can be used in a workflow with a simple machine learning algorithm to obtain good results, and we explore in detail how this behavior can be explained in terms of image-level features. One of the core strengths of persistent homology is how interpretable it can be, so throughout this paper we discuss not just the patterns we find but why those results are to be expected given what we know about the theory of persistent homology. Our goal is that readers of this paper will leave with a better understanding of TDA and persistent homology, will be able to identify problems and datasets of their own for which persistent homology could be helpful, and will gain an understanding of the results they obtain from applying the included GitHub example code.

Significance Statement

Information such as the geometric structure and texture of image data can greatly support the inference of the physical state of an observed Earth system, for example, in remote sensing to determine whether wildfires are active or to identify local climate zones. Persistent homology is a branch of topological data analysis that allows one to extract such information in an interpretable way—unlike black-box methods like deep neural networks. The purpose of this paper is to explain in an intuitive manner what persistent homology is and how researchers in environmental science can use it to create interpretable models. We demonstrate the approach to identify certain cloud patterns from satellite imagery and find that the resulting model is indeed interpretable.
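
For readers who want to try the workflow, a minimal example using the GUDHI library (one common TDA package; this only loosely mirrors the paper's own GitHub code) computes the persistent homology of an image and vectorizes it as a persistence landscape:

```python
import numpy as np
import gudhi
from gudhi.representations import Landscape

img = np.random.rand(64, 64)  # stand-in for a satellite image patch

# sublevel-set (cubical) persistent homology of the image
cc = gudhi.CubicalComplex(top_dimensional_cells=img)
cc.persistence()
h1 = cc.persistence_intervals_in_dimension(1)  # loops ("holes") in the image

# vectorize the diagram for use with a simple classifier downstream
features = Landscape(num_landscapes=3, resolution=100).fit_transform([h1])
```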

Open access
Andrew Geiss and Joseph C. Hardin

Abstract

Recently, deep convolutional neural networks (CNNs) have revolutionized image “super resolution” (SR), dramatically outperforming past methods for enhancing image resolution. They could be a boon for the many scientific fields that involve imaging or any regularly gridded datasets: satellite remote sensing, radar meteorology, medical imaging, numerical modeling, and so on. Unfortunately, while SR-CNNs produce visually compelling results, they do not necessarily conserve physical quantities between their low-resolution inputs and high-resolution outputs when applied to scientific datasets. Here, a method for “downsampling enforcement” in SR-CNNs is proposed. A differentiable operator is derived that, when applied as the final transfer function of a CNN, ensures the high-resolution outputs exactly reproduce the low-resolution inputs under 2D-average downsampling while improving performance of the SR schemes. The method is demonstrated across seven modern CNN-based SR schemes on several benchmark image datasets, and applications to weather radar, satellite imager, and climate model data are shown. The approach improves training time and performance while ensuring physical consistency between the super-resolved and low-resolution data.

Significance Statement

Recent advancements in using deep learning to increase the resolution of images have substantial potential across the many scientific fields that use images and image-like data. Most image super-resolution research has focused on the visual quality of outputs, however, and is not necessarily well suited for use with scientific data, where known physics constraints may need to be enforced. Here, we introduce a method to modify existing deep neural network architectures so that they strictly conserve physical quantities in the input field when “super resolving” scientific data, and we find that the method can improve performance across a wide range of datasets and neural networks. Integrating known physics and enforcing adherence to established physical constraints in deep neural networks will be a critical step before their potential can be fully realized in the physical sciences.
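
One operator that satisfies the stated property (a sketch consistent with the abstract, not necessarily the paper's exact derivation): average-downsample the raw super-resolved output back to the input grid, then add the discrepancy with the low-resolution input, upsampled by pixel replication.

```python
import torch.nn.functional as F

def enforce_downsampling(y_raw, x_lr, s):
    """Differentiable final layer for an SR network with scale factor s.
    y_raw: raw high-res output (B, C, s*H, s*W); x_lr: low-res input
    (B, C, H, W). Guarantees avg_pool2d(output, s) == x_lr exactly."""
    y_ds = F.avg_pool2d(y_raw, s)                  # back to low resolution
    correction = F.interpolate(x_lr - y_ds, scale_factor=s, mode='nearest')
    return y_raw + correction
```

Because nearest-neighbor upsampling replicates each coarse pixel over its s × s block, average-pooling the corrected output returns x_lr exactly, and the whole operation is differentiable, so it can be appended to any SR-CNN and trained end to end.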

Open access