Browse: Artificial Intelligence for the Earth Systems
Rikhi Bose
,
Adam L. Pintar
, and
Emil Simiu

Abstract

The objective of this paper is to employ machine learning (ML) and deep learning (DL) techniques to obtain, from input data (storm features) available in or derived from the HURDAT2 database, models capable of simulating important hurricane properties (e.g., landfall location and wind speed) consistent with historical records. In pursuit of this objective, a trajectory model providing the storm center in terms of longitude and latitude, and intensity models providing the central pressure and maximum 1-min wind speed at 10-m elevation, were created. The trajectory and intensity models are coupled and must be advanced together, six hours at a time, as the features that serve as inputs to the models at any given step depend on predictions at the previous time steps. Once a synthetic storm database is generated, properties of interest, such as the frequencies of large wind speeds, may be extracted from any part of the simulation domain. The coupling of the trajectory and intensity models obviates the need for an intensity decay model inland of the coastline. Prediction results are compared to historical data, and the efficacy of the storm simulation models is evaluated at four sites: New Orleans, Miami, Cape Hatteras, and Boston.
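
To make the coupled six-hourly stepping concrete, here is a minimal illustrative sketch (not the paper's implementation): the model classes, feature names, and update rules are placeholders standing in for the trained ML/DL trajectory and intensity models.

    def simulate_storm(trajectory_model, intensity_model, initial_state, n_steps):
        """Advance coupled trajectory and intensity models in 6-h increments."""
        states = [initial_state]                 # each state: dict of storm features
        for _ in range(n_steps):
            prev = states[-1]
            # Trajectory model predicts the next storm-center position from current features.
            lon, lat = trajectory_model.predict(prev)
            # Intensity models predict central pressure and max 1-min wind at 10 m,
            # using features that include the newly predicted position.
            pressure, wind = intensity_model.predict({**prev, "lon": lon, "lat": lat})
            states.append({"lon": lon, "lat": lat, "pressure": pressure,
                           "wind": wind, "time_h": prev["time_h"] + 6})
        return states

    # Placeholder stand-ins so the sketch runs end to end.
    class PersistenceTrajectory:
        def predict(self, s):                    # drift northwest by a fixed step
            return s["lon"] - 0.3, s["lat"] + 0.2

    class DecayIntensity:
        def predict(self, s):                    # slow filling and weakening
            return s["pressure"] + 1.0, s["wind"] * 0.98

    start = {"lon": -75.0, "lat": 25.0, "pressure": 950.0, "wind": 55.0, "time_h": 0}
    print(simulate_storm(PersistenceTrajectory(), DecayIntensity(), start, n_steps=4)[-1])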

Open access
Mark S. Veillette
,
James M. Kurdzo
,
Phillip M. Stepanian
,
Joseph McDonald
,
Siddharth Samsi
, and
John Y. N. Cho

Abstract

Radial velocity estimates provided by Doppler weather radar are critical measurements used by operational forecasters for the detection and monitoring of life-impacting storms. The sampling methods used to produce these measurements are inherently susceptible to aliasing, which produces ambiguous velocity values in regions with high winds and needs to be corrected using a velocity dealiasing algorithm (VDA). In the US, the Weather Surveillance Radar-1988 Doppler (WSR-88D) Open Radar Product Generator (ORPG) is a processing environment that provides a world-class VDA; however, this algorithm is complex and can be difficult to port to other radar systems outside of the WSR-88D network. In this work, a deep neural network (DNN) is used to emulate the 2-dimensional WSR-88D ORPG dealiasing algorithm. It is shown that a DNN, specifically a customized U-Net, is highly effective for building VDAs that are accurate, fast, and portable to multiple radar types. To train the DNN model, a large dataset is generated containing aligned samples of folded and dealiased velocity pairs. This dataset contains samples collected from WSR-88D Level-II and Level-III archives and uses the ORPG dealiasing algorithm output as a source of truth. Using this dataset, a U-Net is trained to produce the number of folds at each point of a velocity image. Several performance metrics are presented using WSR-88D data. The algorithm is also applied to other non-WSR-88D radar systems to demonstrate portability to other hardware/software interfaces. A discussion of the broad applicability of this method is presented, including how other Level-III algorithms may benefit from this approach.
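
The core arithmetic of applying per-pixel fold counts is simple; the sketch below (illustrative, not the ORPG or the authors' code) shows how a predicted fold field would be used to recover dealiased velocities, with synthetic example values.

    import numpy as np

    def unfold_velocity(v_aliased, fold_counts, v_nyquist):
        """Dealias radial velocities given integer fold counts per pixel.

        v_aliased   : 2D array of aliased radial velocities (m/s)
        fold_counts : 2D integer array predicted by the network (e.g., -2..2)
        v_nyquist   : Nyquist velocity of the scan (m/s)
        """
        # Each fold shifts the measured velocity by twice the Nyquist velocity.
        return v_aliased + 2.0 * v_nyquist * fold_counts

    # Synthetic example: a +1 fold restores a velocity that exceeded the Nyquist limit.
    v_ny = 26.0
    aliased = np.array([[10.0, -24.0], [5.0, 30.0 - 2.0 * v_ny]])
    folds = np.array([[0, 0], [0, 1]])
    print(unfold_velocity(aliased, folds, v_ny))   # lower-right value is recovered as 30.0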

Free access
Jiaxin Chen
,
Ian G. C. Ashton
,
Edward C. C. Steele
, and
Ajit C. Pillai

Abstract

The safe and successful operation of offshore infrastructure relies on a detailed awareness of ocean wave conditions. Ongoing growth in offshore wind energy is focused on very large-scale projects, deployed in ever more challenging environments. This inherently increases both cost and complexity and therefore the requirement for efficient operational planning. To support this, we propose a new machine learning framework for the short-term forecasting of ocean wave conditions to support critical decision-making associated with marine operations. Here, an attention-based long short-term memory (LSTM) neural network approach is used to learn the short-term temporal patterns from in situ observations. This is then integrated with an existing, low computational cost spatial nowcasting model to develop a complete framework for spatiotemporal forecasting. The framework addresses the challenge of filling gaps in the in situ observations and undertakes feature selection, with seasonal training datasets embedded. The full spatiotemporal forecasting system is demonstrated using a case study based on independent observation locations near the southwest coast of the United Kingdom. Results are validated against in situ data from two wave buoy locations within the domain and compared to operational physics-based wave forecasts from the Met Office (the United Kingdom’s national weather service). For these two example locations, the spatiotemporal forecast is found to have an accuracy of R² = 0.9083 and 0.7409 in forecasting 1-h-ahead significant wave height and R² = 0.8581 and 0.6978 in 12-h-ahead forecasts, respectively. Importantly, this represents respectable levels of accuracy, comparable to traditional physics-based forecast products, but requires only a fraction of the computational resources.

Significance Statement

Spectral wave models, based on modeling the underlying physics and physical processes, are traditionally used to generate wave forecasts but require significant computational cost. In this study, we propose a machine learning forecasting framework developed using both in situ buoy observations and a surrogate regional numerical wave model. The proposed framework is validated against in situ measurements at two renewable energy sites and found to have very similar 12-h forecasting errors when benchmarked against the Met Office’s physics-based forecasting model but requires far less computational power. The proposed framework is highly flexible and has the potential for offering a low-cost, low computational resource approach for the provision of short-term forecasts and can operate with other types of observations and other machine learning algorithms to improve the availability and accuracy of the prediction.
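
As a rough illustration of the kind of architecture described (an attention-based LSTM; the layer sizes, input variables, and forecast horizon below are assumptions, not the authors' configuration), a minimal PyTorch sketch:

    import torch
    import torch.nn as nn

    class AttentionLSTMForecaster(nn.Module):
        def __init__(self, n_features, hidden=64, horizon=12):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)        # scores each time step
            self.head = nn.Linear(hidden, horizon)  # one output per forecast lead (h)

        def forward(self, x):                       # x: (batch, time, n_features)
            out, _ = self.lstm(x)                   # (batch, time, hidden)
            weights = torch.softmax(self.attn(out), dim=1)  # attention over the window
            context = (weights * out).sum(dim=1)    # weighted summary of past observations
            return self.head(context)               # (batch, horizon) wave-height forecasts

    # Example: 8 windows of 48 past steps with 4 observed variables each.
    model = AttentionLSTMForecaster(n_features=4)
    print(model(torch.randn(8, 48, 4)).shape)       # torch.Size([8, 12])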

Free access
Alexandra N. Ramos-Valle
,
Joshua Alland
, and
Anamaria Bukvic

Abstract

Many urban coastal communities are experiencing more profound flood impacts due to accelerated sea level rise that sometimes exceed their capacity to protect the built environment. In such cases, relocation may serve as a more effective hazard mitigation and adaptation strategy. However, it is unclear how urban residents living in flood-prone locations perceive the possibility of relocation and under what circumstances they would consider moving. Understanding the factors affecting an individual’s willingness to relocate due to coastal flooding is vital for developing accessible and equitable relocation policies. The main objective of this study is to identify the key considerations that would prompt urban coastal residents to consider permanent relocation due to coastal flooding. We leverage survey data collected from urban areas along the U.S. East Coast, assessing attitudes towards relocation, and design an artificial neural network (ANN) and a random forest (RF) model to find patterns in the survey data and indicate which considerations impact the decision to consider relocation. We trained the models to predict whether respondents would relocate due to socioeconomic factors, past exposure and experiences with flooding, and their flood-related concerns. Analyses performed on the models highlight the importance of flood-related concerns that accurately predict relocation behavior. Some common factors among the model analyses are concerns with increasing crime, the possibility of experiencing one more flood per year in the future, and more frequent business closures due to flooding.
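
A minimal sketch of the two model types named above (hypothetical features and synthetic labels; the survey variables and settings are not those of the study):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))              # placeholder predictors (socioeconomic, exposure, concerns)
    y = (X[:, 0] + X[:, 3] > 0).astype(int)    # placeholder "would relocate" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

    print("RF accuracy :", rf.score(X_te, y_te))
    print("ANN accuracy:", ann.score(X_te, y_te))
    # Forest feature importances indicate which considerations drive the predictions.
    print("RF importances:", rf.feature_importances_)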

Free access
Louis Le Toumelin
,
Isabelle Gouttevin
,
Nora Helbig
,
Clovis Galiez
,
Mathis Roux
, and
Fatima Karbou

Abstract

Estimating the impact of wind-driven snow transport requires modeling wind fields with a lower grid spacing than the spacing on the order of 1 or a few kilometers used in current numerical weather prediction (NWP) systems. In this context, we introduce a new strategy to downscale wind fields from NWP systems to decametric scales, using high-resolution (30 m) topographic information. Our method (named “DEVINE”) relies on a convolutional neural network (CNN) trained to replicate the behavior of the complex atmospheric model ARPS, which was previously run on a large number (7279) of synthetic Gaussian topographies under controlled weather conditions. A 10-fold cross validation reveals that our CNN is able to accurately emulate the behavior of ARPS (mean absolute error for wind speed = 0.16 m s⁻¹). We then apply DEVINE to real cases in the Alps, that is, downscaling wind fields forecast by the AROME NWP system using information from real alpine topographies. DEVINE proved able to reproduce the main features of wind fields in complex terrain (acceleration on ridges, leeward deceleration, and deviations around obstacles). Furthermore, an evaluation on quality-checked observations acquired at 61 sites in the French Alps reveals improved behavior of the downscaled winds (AROME wind speed mean bias is reduced by 27% with DEVINE), especially at the most elevated and exposed stations. Wind direction is, however, only slightly modified. Hence, despite some current limitations inherited from the ARPS simulation setup, DEVINE appears to be an efficient downscaling tool whose minimalist architecture, low input data requirements (NWP wind fields and high-resolution topography), and competitive computing times may be attractive for operational applications.

Significance Statement

Wind largely influences the spatial distribution of snow in mountains, with direct consequences on hydrology and avalanche hazard. Most operational models predicting wind in complex terrain use a grid spacing on the order of several kilometers, too coarse to represent the real patterns of mountain winds. We introduce a novel method based on deep learning to increase this spatial resolution while maintaining acceptable computational costs. Our method mimics the behavior of a complex model that is able to represent part of the complexity of mountain winds by using topographic information only. We compared our results with observations collected in complex terrain and showed that our model improves the representation of winds, notably at the most elevated and exposed observation stations.
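
A schematic sketch of the downscaling idea (the published DEVINE architecture differs; channel counts, inputs, and layer choices here are assumptions): a small CNN maps a high-resolution topography patch plus the large-scale wind to a downscaled wind-speed field.

    import torch
    import torch.nn as nn

    class DownscalingCNN(nn.Module):
        def __init__(self):
            super().__init__()
            # Input channels: topography plus broadcast NWP wind speed and direction.
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=3, padding=1),  # downscaled wind speed
            )

        def forward(self, topo, wind_speed, wind_dir):
            # topo: (batch, 1, H, W); wind_speed / wind_dir: (batch,) large-scale values
            b, _, h, w = topo.shape
            speed = wind_speed.view(b, 1, 1, 1).expand(b, 1, h, w)
            direction = wind_dir.view(b, 1, 1, 1).expand(b, 1, h, w)
            return self.net(torch.cat([topo, speed, direction], dim=1))

    model = DownscalingCNN()
    out = model(torch.randn(2, 1, 64, 64), torch.tensor([5.0, 12.0]), torch.tensor([270.0, 90.0]))
    print(out.shape)   # torch.Size([2, 1, 64, 64])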

Free access
Ignacio Lopez-Gomez
,
Amy McGovern
,
Shreya Agrawal
, and
Jason Hickey

Abstract

Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reduction in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium and subseasonal time scales. To this purpose, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ∼200-km resolution and on the cubed sphere. The NWMs are trained using the ERA5 reanalysis product and a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM is able to outperform persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.

Significance Statement

Heatwaves are projected to become stronger and more frequent as a result of global warming. Accurate forecasting of these events would enable the implementation of effective mitigation strategies. Here we analyze the forecast accuracy of artificial intelligence systems trained on historical surface temperature data to predict extreme heat events globally, 1 to 28 days ahead. We find that artificial intelligence systems trained to focus on extreme temperatures are significantly more accurate at predicting heatwaves than systems trained to minimize errors in surface temperatures and remain equally skillful at predicting moderate temperatures. Furthermore, the extreme-focused systems compete with state-of-the-art physics-based forecast systems in the subseasonal range, while incurring a much lower computational cost.
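
The contrast between a standard mean-square-error loss and an extremes-focused loss can be sketched as follows (the exact exponential losses used in the paper may differ; the weighting form and scale parameter tau here are assumptions):

    import torch

    def mse_loss(pred, target):
        return ((pred - target) ** 2).mean()

    def exp_weighted_mse(pred, target, tau=1.0):
        # Larger |anomaly| -> exponentially larger weight, so errors on extremes dominate.
        weights = torch.exp(target.abs() / tau)
        return (weights * (pred - target) ** 2).mean()

    pred = torch.tensor([0.5, 1.0, 3.0])
    target = torch.tensor([0.0, 1.5, 4.5])   # the largest anomaly dominates the weighted loss
    print(mse_loss(pred, target).item(), exp_weighted_mse(pred, target).item())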

Free access
Elizabeth Carter
,
Carolynne Hultquist
, and
Tao Wen

Abstract

Globally available environmental observations (EOs), specifically from satellites and coupled earth systems models, represent some of the largest datasets of the digital age. As the volume of global EOs continues to grow, so does the potential of this data to help earth scientists discover trends and patterns in earth systems at large spatial scales. To leverage global EOs for scientific insight, earth scientists need targeted and accessible exposure to skills in reproducible scientific computing and spatiotemporal data science, and to be empowered to apply their domain understanding to interpret data-driven models for knowledge discovery. The GRRIEn (Generalizable, Reproducible, Robust, and Interpreted Environmental) analysis framework was developed to prepare earth scientists with an introductory statistics background and limited/no understanding of programming and computational methods to use global EOs to successfully generalize insights from local/regional field measurements across unsampled times and locations. GRRIEn analysis is generalizable, meaning results from a sample are translated to landscape scales by combining direct environmental measurements with global EOs using supervised machine learning; robust, meaning that the model shows good performance on data with scale-dependent feature and observation dependence; reproducible, based on a standard repository structure so that other scientists can quickly and easily replicate the analysis with a few computational tools; and interpreted, meaning that earth scientists apply domain expertise to ensure that model parameters reflect a physically plausible diagnosis of the environmental system. This tutorial presents standard steps for achieving GRRIEn analysis by combining conventions of rigor in traditional experimental design with the open-science movement.
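
As a bare-bones illustration of the "generalizable" step described above (hypothetical data; not code from the tutorial): a supervised model is fit to EO-derived features co-located with field measurements and then applied across the full landscape.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    eo_features_sampled = rng.normal(size=(200, 5))   # EO predictors at sampled field sites
    field_measurements = eo_features_sampled @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

    model = RandomForestRegressor(n_estimators=200, random_state=1)
    model.fit(eo_features_sampled, field_measurements)

    # Generalize: predict the target wherever the EO features are available.
    eo_features_grid = rng.normal(size=(10000, 5))    # EO predictors across unsampled locations
    landscape_prediction = model.predict(eo_features_grid)
    print(landscape_prediction.shape)                 # (10000,)
    # Interpretation step: check importances against domain knowledge.
    print(model.feature_importances_)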

Free access
Mohamad Abed El Rahman Hammoud
,
Humam Alwassel
,
Bernard Ghanem
,
Omar Knio
, and
Ibrahim Hoteit

Abstract

Backward-in-time predictions are needed to better understand the underlying dynamics of physical fluid flows and improve future forecasts. However, integrating fluid flows backward in time is challenging because of numerical instabilities caused by the diffusive nature of the fluid systems and nonlinearities of the governing equations. Although this problem has been long addressed using a nonpositive diffusion coefficient when integrating backward, it is notoriously inaccurate. In this study, a physics-informed deep neural network (PI-DNN) is presented to predict past states of a dissipative dynamical system from snapshots of the subsequent evolution of the system state. The performance of the PI-DNN is investigated using several systematic numerical experiments and the accuracy of the backward-in-time predictions is evaluated in terms of different error metrics. The proposed PI-DNN can predict the previous state of the Rayleigh–Bénard convection with an 8-time-step average normalized ℓ2 error of less than 2% for a turbulent flow at a Rayleigh number of 10⁵.
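
One generic way to express a physics-informed objective for backward-in-time prediction (a sketch under assumed components, not the paper's PI-DNN): penalize both the misfit of the predicted past state and the inconsistency of stepping that prediction forward with a differentiable surrogate of the dynamics.

    import torch

    def forward_step(state, dt=0.1):
        # Placeholder dissipative dynamics: simple linear damping stands in for the
        # actual forward model (e.g., Rayleigh-Benard convection).
        return state - dt * state

    def physics_informed_loss(predicted_past, observed_past, observed_next, lam=1.0):
        data_term = ((predicted_past - observed_past) ** 2).mean()
        # Physics term: the predicted past, advanced forward, should match the later snapshot.
        physics_term = ((forward_step(predicted_past) - observed_next) ** 2).mean()
        return data_term + lam * physics_term

    past = torch.randn(4, 16, 16)                 # earlier snapshots (training targets)
    nxt = forward_step(past)                      # observed subsequent snapshots
    pred = past + 0.05 * torch.randn_like(past)   # stand-in for a network's output
    print(physics_informed_loss(pred, past, nxt).item())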

Free access
Dario Dematties
,
Bhupendra A. Raut
,
Seongha Park
,
Robert C. Jackson
,
Sean Shahkarami
,
Yongho Kim
,
Rajesh Sankaran
,
Pete Beckman
,
Scott M. Collis
, and
Nicola Ferrier

Abstract

Accurate cloud type identification and coverage analysis are crucial in understanding the Earth’s radiative budget. Traditional computer vision methods rely on low-level visual features of clouds for estimating cloud coverage or sky conditions. Several handcrafted approaches have been proposed; however, scope for improvement still exists. Newer deep neural networks (DNNs) have demonstrated superior performance for cloud segmentation and categorization. However, these methods require either expert engineering intervention in the preprocessing steps (for the traditional methods) or human assistance in assigning cloud or clear-sky labels to pixels for training DNNs. Such human mediation imposes considerable time and labor costs. We present the application of a new self-supervised learning approach to autonomously extract relevant features from sky images captured by ground-based cameras, for the classification and segmentation of clouds. We evaluate a joint embedding architecture that uses self-knowledge distillation plus regularization. We use two datasets to demonstrate the network’s ability to classify and segment sky images: one with ∼85,000 images collected from our ground-based camera and another with 400 labeled images from the WSISEG database. We find that this approach can discriminate full-sky images based on cloud coverage, diurnal variation, and cloud base height. Furthermore, it semantically segments the cloud areas without labels. The approach shows competitive performance in all tested tasks, suggesting a new alternative for cloud characterization.
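
The self-distillation idea can be sketched generically (the encoders, projection sizes, augmentations, and update schedule below are assumptions, not the authors' network): a student encoder is trained to match an exponential-moving-average teacher across two views of the same unlabeled sky image.

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Tiny stand-in encoder; the real backbone would be far larger.
    student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32))
    teacher = copy.deepcopy(student)          # updated as an EMA of the student, not by gradients
    for p in teacher.parameters():
        p.requires_grad_(False)

    def distillation_loss(view1, view2, temp=0.1):
        s = F.log_softmax(student(view1) / temp, dim=-1)
        with torch.no_grad():
            t = F.softmax(teacher(view2) / temp, dim=-1)
        return -(t * s).sum(dim=-1).mean()    # cross-entropy between teacher and student outputs

    @torch.no_grad()
    def ema_update(momentum=0.99):
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_((1.0 - momentum) * ps)

    x = torch.randn(8, 3, 64, 64)                               # batch of unlabeled sky images
    loss = distillation_loss(x + 0.1 * torch.randn_like(x), x)  # two crude "views" of each image
    print(loss.item())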

Free access