Browse

You are looking at 91–100 of 164 items for: Artificial Intelligence for the Earth Systems
Maria J. Molina, Travis A. O’Brien, Gemma Anderson, Moetasim Ashfaq, Katrina E. Bennett, William D. Collins, Katherine Dagon, Juan M. Restrepo, and Paul A. Ullrich

Abstract

Climate variability and weather phenomena can cause extremes and pose significant risk to society and ecosystems, making continued advances in our physical understanding of such events of utmost importance for regional and global security. Advances in machine learning (ML) have been leveraged for applications in climate variability and weather, empowering scientists to approach questions using big data in new ways. Growing interest across the scientific community in these areas has motivated coordination between the physical and computer science disciplines to further advance the state of the science and tackle pressing challenges. During a recent workshop with participants from academia, private industry, and research laboratories, it became clear that a comprehensive review of recent and emerging ML applications for climate variability and weather phenomena that can cause extremes was needed. This article aims to fulfill this need by discussing recent advances, challenges, and research priorities in the following topics: sources of predictability for modes of climate variability, feature detection, extreme weather and climate prediction and precursors, observation–model integration, downscaling, and bias correction. This article provides a review both for domain scientists seeking to incorporate ML into their research and for readers with some ML experience seeking to broaden their knowledge of ML applications for climate variability and weather.

Open access
Chuyen Nguyen, Jason E. Nachamkin, David Sidoti, Jacob Gull, Adam Bienkowski, Rich Bankert, and Melinda Surratt

Abstract

Given the diversity of cloud-forcing mechanisms, it is difficult to classify and characterize all cloud types through the depth of the troposphere. Importantly, different cloud families often coexist even at the same atmospheric level. The Naval Research Laboratory (NRL) is developing machine learning–based cloud forecast models that fuse numerical weather prediction model and satellite data. These models were built for the dual purpose of understanding numerical weather prediction model error trends and improving the accuracy and sensitivity of the forecasts. The framework implements a UNet convolutional neural network (UNet-CNN) with features extracted from clouds observed by the Geostationary Operational Environmental Satellite-16 (GOES-16) as well as clouds predicted by the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS). The fundamental idea behind this novel framework is to apply UNet-CNN to separate variable sets extracted from GOES-16 and COAMPS in order to characterize and predict broad families of clouds that share similar physical characteristics. A quantitative evaluation based on an independent dataset of upper-tropospheric (high) clouds suggests that UNet-CNN models capture the complexity and error trends of the combined GOES-16 and COAMPS data and also improve forecast accuracy and sensitivity across lead times of 3–12 h. This paper includes an overview of the machine learning frameworks, an illustrative example of their application, and a comparative assessment of results for upper-tropospheric clouds.
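
As a rough illustration of the kind of model the abstract describes, here is a minimal UNet-style encoder–decoder sketch in PyTorch that maps a stack of GOES-16/COAMPS-derived feature channels to a per-pixel cloud-presence probability for a single cloud family; the channel count, layer sizes, and variable names are assumptions for illustration, not the NRL configuration.

```python
# Minimal UNet-style encoder-decoder sketch (PyTorch). The 8 input channels
# stand in for features extracted from GOES-16 and COAMPS; the single output
# channel is a per-pixel cloud-presence probability for one cloud family.
# All sizes are illustrative assumptions, not the NRL configuration.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=8, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)   # skip connection concatenated below
        self.head = nn.Conv2d(base, 1, 1)        # per-pixel logit

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution
        e2 = self.enc2(self.pool(e1))     # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

# One forward pass on a dummy batch: 4 scenes, 8 channels, 64 x 64 grid.
model = TinyUNet()
cloud_prob = model(torch.randn(4, 8, 64, 64))   # -> (4, 1, 64, 64)
```

The skip connection is what lets a UNet combine coarse spatial context with pixel-level detail, which is the property such a framework relies on when characterizing cloud families.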

Significance Statement

Clouds are difficult to forecast because, in addition to spatial location, accurate height, depth, and cloud type must be predicted. Satellite imagery is useful for verifying geographical location but is limited by its two-dimensional view: multiple cloud types can coexist at various heights within the same pixel. In this situation, cloud/no-cloud verification does not convey much information about why a forecast went wrong. Sorting clouds by physical attributes such as cloud-top height, atmospheric stability, and cloud thickness contributes to a better understanding, since very different physical mechanisms produce different types of clouds. Using a fusion of numerical model output and GOES-16 observations, we derive variables related to the atmospheric conditions that form and move clouds for our machine learning–based cloud forecast. Verification over the U.S. mid-Atlantic region revealed that our machine learning–based cloud forecasts correct systematic errors associated with high clouds and provide accurate and consistent cloud forecasts at lead times from 3 to 12 h.

Open access
Brian C. Filipiak, Nick P. Bassill, Kristen L. Corbosiero, Andrea L. Lang, and Ross A. Lazear

Abstract

Winter mixed-precipitation events are associated with multiple hazards and create forecast challenges because of the difficulty of determining the timing and amount of each precipitation type. In New York State, complex terrain enhances these forecast challenges. Machine learning is a relatively nascent tool that can help improve forecasting by synthesizing large amounts of data and finding underlying relationships. This study uses a random forest machine learning algorithm that generates probabilistic winter precipitation-type forecasts. Random forest configuration, testing, and development methods are presented to show how this tool can be applied to operational forecasting. Dataset generation and variation are also explained because of their essential role in the random forest. Last, the methodology of transitioning a machine learning algorithm from research to operations is discussed.
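
As a rough sketch of what a probabilistic precipitation-type forecast with a random forest looks like in practice, the snippet below trains scikit-learn's RandomForestClassifier on synthetic data and returns per-class probabilities; the predictor names, class labels, and data are placeholders for illustration, not the study's configuration or dataset.

```python
# Sketch: probabilistic winter precipitation-type classification with a
# random forest (scikit-learn). Predictors and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))       # e.g., 2-m temp, 850-hPa temp, RH, wet-bulb temp
y = rng.integers(0, 4, size=5000)    # 0=rain, 1=snow, 2=sleet, 3=freezing rain

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# predict_proba returns a probability for each precipitation type,
# which is the kind of output an operational probabilistic p-type product needs.
probs = rf.predict_proba(X_test)     # shape: (n_samples, 4)
```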

Significance Statement

Examining the role that machine learning can play in winter precipitation-type forecasting is an area of research with ample room for exploration, as much of the previous research has focused on applying machine learning to warm-season precipitation and severe weather events. Establishing a framework and methodology for successfully combining machine learning and weather research into effective operational tools is a valuable addition to the machine learning community. Because machine learning is increasingly being applied to meteorology, this work can act as a road map for developing other meteorological tools based on machine learning.

Open access
Christina Feng Chang, Marina Astitha, Yongping Yuan, Chunling Tang, Penny Vlahos, Valerie Garcia, and Ummul Khaira

Abstract

Tributary phosphorus (P) loads are one of the main drivers of eutrophication problems in freshwater lakes. Being able to predict P loads can aid in understanding subsequent load patterns and elucidate potential degraded water quality conditions in downstream surface waters. We demonstrate the development and performance of an integrated multimedia modeling system that uses machine learning (ML) to assess and predict monthly total P (TP) and dissolved reactive P (DRP) loads. Meteorological variables from the Weather Research and Forecasting (WRF) Model, hydrologic variables from the Variable Infiltration Capacity model, and agricultural management practice variables from the Environmental Policy Integrated Climate agroecosystem model are used to train the ML models to predict P loads. Our study presents a new modeling methodology using as testbeds the Maumee, Sandusky, Portage, and Raisin watersheds, which discharge into Lake Erie and contribute significant P loads to the lake. Two models were built: one for TP loads using 10 environmental variables and one for DRP loads using nine environmental variables. Both models ranked streamflow as the most important predictive variable. In comparison with observations, TP and DRP loads were predicted well both temporally and spatially. Modeled TP loads are within the ranges obtained in other studies and are on some occasions more accurate; modeled DRP loads exceed the performance measures reported in other studies. We explore the ability of both ML-based models to further improve as more data become available over time. This integrated multimedia approach is recommended for studying other freshwater systems and water quality variables using available decadal data from physics-based model simulations.
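
For illustration only, here is a minimal scikit-learn sketch of the kind of load regression and feature-importance ranking the abstract describes; the predictor names and synthetic data are assumptions, and the study's actual ML algorithm and variables may differ.

```python
# Sketch: monthly TP-load regression from multimedia-model predictors, with a
# feature-importance ranking analogous to the one mentioned in the abstract
# (streamflow first). Variable names and synthetic data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
predictors = ["streamflow", "precip", "soil_moisture", "fertilizer_p",
              "air_temp", "runoff", "baseflow", "tillage", "snowmelt", "manure_p"]
X = pd.DataFrame(rng.normal(size=(240, len(predictors))), columns=predictors)
y = 3.0 * X["streamflow"] + 0.5 * X["precip"] + rng.normal(scale=0.3, size=240)

model = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, y)

# Rank predictors by importance; with the synthetic target above, streamflow
# comes out on top, mirroring the kind of ranking the study reports.
ranking = pd.Series(model.feature_importances_, index=predictors).sort_values(ascending=False)
print(ranking.head())
```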

Open access
Kyle A. Hilburn

Abstract

Convolutional neural networks (CNNs) are opening new possibilities in the realm of satellite remote sensing. CNNs are especially useful for capturing the information in spatial patterns that is evident to the human eye but has eluded classical pixelwise retrieval algorithms. However, the black-box nature of CNN predictions makes them difficult to interpret, hindering their trustworthiness. This paper explores a new way to simplify CNNs that allows them to be implemented in a fully transparent and interpretable framework. This clarity is accomplished by moving the inner workings of the CNN out into a feature engineering step and replacing the CNN with a regression model. The specific example of the GOES Radar Estimation via Machine Learning to Inform NWP (GREMLIN) is used to demonstrate that such simplifications are possible and to show the benefits of the interpretable approach. GREMLIN translates images of GOES radiances and lightning into images of radar reflectivity, and previous research used explainable artificial intelligence (XAI) approaches to explain some aspects of how GREMLIN makes predictions. However, the Interpretable GREMLIN model shows that XAI missed several strategies, and XAI does not provide guarantees on how the model will respond when confronted with new scenarios. In contrast, the interpretable model establishes well-defined relationships between inputs and outputs, offering a clear mapping of the spatial context utilized by the CNN to make accurate predictions, and providing guarantees on how the model will respond to new inputs. The significance of this work is that it provides a new approach for developing trustworthy artificial intelligence models.
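
A minimal sketch of the general idea (not the actual Interpretable GREMLIN formulation): spatial context is made explicit through fixed neighborhood-average features, and a transparent regression replaces the CNN, so the input–output relationship can be read directly from the coefficients. The feature choices, scales, and data below are assumptions for illustration.

```python
# Sketch of an "interpretable CNN" replacement: pull the spatial context out
# into explicit feature engineering (neighborhood means at a few scales) and
# fit a transparent linear regression on top. Features and data are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
radiance = rng.normal(size=(128, 128))                    # stand-in for a satellite channel
target = uniform_filter(radiance, size=5) + 0.1 * rng.normal(size=(128, 128))

# Engineered features: the pixel value plus its neighborhood means at three scales.
features = np.stack([radiance,
                     uniform_filter(radiance, size=3),
                     uniform_filter(radiance, size=9),
                     uniform_filter(radiance, size=27)], axis=-1)

X = features.reshape(-1, 4)
y = target.ravel()
reg = LinearRegression().fit(X, y)

# The coefficients state exactly how much each spatial scale contributes, i.e.,
# the input-output relationship is written down rather than hidden inside a CNN.
print(dict(zip(["pixel", "3x3", "9x9", "27x27"], reg.coef_.round(3))))
```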

Significance Statement

Convolutional neural networks (CNNs) are very powerful tools for interpreting and processing satellite imagery. However, the black-box nature of their predictions makes them difficult to interpret, compromising their trustworthiness when applied in the context of high-stakes decision-making. This paper develops an interpretable version of a CNN model and shows that it performs similarly to the original CNN. The interpretable model is analyzed to obtain clear relationships between inputs and outputs, which elucidates the nature of the spatial context utilized by CNNs to make accurate predictions. The interpretable model has a well-defined response to inputs, providing guarantees for how it will respond to novel inputs. The significance of this work is that it provides an approach to developing trustworthy artificial intelligence models.

Open access
Tsuyoshi Thomas Sekiyama, Syugo Hayashi, Ryo Kaneko, and Ken-ichi Fukui

Abstract

Surrogate modeling is one of the most promising applications of deep learning techniques in meteorology. The purpose of this study was to downscale surface wind fields in a gridded format at a much lower computational load. We employed a superresolution convolutional neural network (SRCNN) as a surrogate model and created a 20-member ensemble by training the same SRCNN model with different random seeds. The downscaling accuracy of the ensemble mean remained stable throughout a year and was consistently better than that of the input wind fields. It was confirmed that 1) the ensemble spread was efficiently created, 2) the ensemble mean was superior to individual ensemble members, and 3) the ensemble mean was robust to the presence of outlier members. Ten years of training, validation, and test data were computed with our nested mesoscale weather forecast models rather than derived from public analysis datasets or real observations. The predictands were 1-km gridded surface zonal and meridional winds over a 180 km × 180 km domain around Tokyo, Japan. The predictors included 5-km gridded surface zonal and meridional winds, temperature, humidity, vertical gradient of the potential temperature, elevation, and land-to-water ratio, as well as 1-km gridded elevation and land-to-water ratio. Although a perfect surrogate of the weather forecast model could not be achieved, the SRCNN's downscaling accuracy, together with its overwhelmingly high prediction speed, likely makes this approach applicable to high-resolution advection simulations.
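
As an illustration of the setup described (a superresolution CNN surrogate plus a seed-based ensemble), here is a minimal PyTorch sketch; the nine input channels stand in for the listed predictors regridded to the 1-km grid, and the layer sizes and the omitted training loop are assumptions, not the study's configuration.

```python
# Sketch: an SRCNN-style three-layer CNN for wind downscaling, built several
# times with different random seeds and averaged into an ensemble mean.
# Layer sizes and channel counts are illustrative assumptions only.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Classic 3-layer SRCNN: patch extraction, nonlinear mapping, reconstruction."""
    def __init__(self, in_ch=9, out_ch=2):   # e.g., coarse winds + static fields -> u, v
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1),               nn.ReLU(),
            nn.Conv2d(32, out_ch, 5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def make_member(seed):
    torch.manual_seed(seed)     # a different seed gives a different initialization
    return SRCNN()              # (training loop omitted in this sketch)

members = [make_member(s) for s in range(20)]   # 20-member ensemble
x = torch.randn(1, 9, 180, 180)                 # predictors stacked on the 1-km grid
with torch.no_grad():
    ensemble_mean = torch.stack([m(x) for m in members]).mean(dim=0)
```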

Open access
Cameron C. Lee, Scott C. Sheridan, Gregory P. Dusek, and Douglas E. Pirhalla

Abstract

With climate change causing rising sea levels around the globe, multiple recent efforts in the United States have focused on the prediction of various meteorological factors that can lead to periods of anomalously high tides despite seemingly benign atmospheric conditions. As part of these efforts, this research explores monthly scale relationships between sea level variability and atmospheric circulation patterns and demonstrates two options for subseasonal to seasonal (S2S) predictions of anomalous sea levels using these patterns as inputs to artificial neural network (ANN) models. Results on the monthly scale are similar to previous research on the daily scale, with above-average sea levels and an increased risk of high-water events on days with anomalously low atmospheric pressure patterns and wind patterns leading to onshore or downwelling-producing wind stress. For some wind patterns, the risk of high-water events is more than 6 times the baseline risk, with an average water level anomaly of 94 mm above normal. In terms of forecasting, nonlinear autoregressive ANN models with exogenous input (NARX models) and pattern-based lagged ANN (PLANN) models show skill over postprocessed numerical forecast model output and simple climatology. Damped-persistence forecasts and PLANN models show nearly the same skill in predicting anomalous sea levels out to 9 months of lead time, with a slight edge to PLANN models, especially with regard to error statistics. This perspective on forecasting, using predefined circulation patterns along with ANN models, should aid in the real-time prediction of coastal flooding events, among other applications.
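
A minimal sketch in the spirit of a pattern-based lagged ANN: lagged monthly circulation-pattern indices feed a small neural network that predicts the sea level anomaly at a 9-month lead. The lag count, network size, and synthetic data are assumptions for illustration, not the study's NARX/PLANN configuration.

```python
# Sketch: lagged circulation-pattern indices -> sea level anomaly several months
# ahead, via a small neural network. Lags, lead time, and data are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_months, n_patterns, n_lags, lead = 480, 6, 3, 9
patterns = rng.normal(size=(n_months, n_patterns))   # monthly pattern indices
sea_level = rng.normal(size=n_months)                 # monthly sea level anomaly (mm)

# Stack lags 0..n_lags-1 of every pattern index as predictors for month t + lead.
rows = range(n_lags - 1, n_months - lead)
X = np.array([patterns[t - n_lags + 1:t + 1].ravel() for t in rows])
y = np.array([sea_level[t + lead] for t in rows])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=3)
model.fit(X[:-60], y[:-60])          # hold out the last 5 years for testing
forecast = model.predict(X[-60:])    # 9-month-lead anomaly predictions
```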

Open access
Oksana A. Chkrebtii and Frederick M. Bingham

Abstract

We explore the use of ocean near-surface salinity (NSS), that is, salinity at 1-m depth, as a rainfall occurrence detector for hourly precipitation using data from the Salinity Processes in the Upper-Ocean Regional Studies–2 (SPURS-2) mooring at 10°N, 125°W. Our proposed unsupervised learning algorithm consists of two stages. First, an empirical quantile-based identification of dips in NSS enables us to capture most events with hourly averaged rainfall rate of >5 mm h−1. Overestimation of precipitation duration is then corrected locally by fitting a parametric model based on the salinity balance equation. We propose a local precipitation model composed of a small number of calibration parameters representing individual rainfall events and their location in time. We show that unsupervised rainfall detection can be formulated as a statistical problem of predicting these variables from NSS data. We present our results and provide a validation technique based on data collected at the SPURS-2 mooring.
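
A minimal sketch of the first stage of the two-stage algorithm, flagging rainfall from dips in NSS with an empirical rolling quantile; the window length, quantile level, and synthetic series are assumptions, not the calibrated SPURS-2 values, and the second (salinity-balance) correction stage is not shown.

```python
# Sketch: flag likely rainfall hours from dips in near-surface salinity using an
# empirical rolling quantile threshold. Window, quantile, and data are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
hours = pd.date_range("2017-01-01", periods=24 * 60, freq="h")
nss = pd.Series(34.5 + 0.02 * rng.normal(size=len(hours)), index=hours)
nss.iloc[500:506] -= 0.15        # synthetic freshening dip from a rain event

# A dip is flagged where NSS falls below a low quantile of the preceding week.
window, q = 24 * 7, 0.02
threshold = nss.rolling(window, min_periods=24).quantile(q).shift(1)
rain_flag = nss < threshold

print(rain_flag.sum(), "hours flagged as likely rainfall")
```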

Significance Statement

Continuous monitoring of precipitation in the ocean is challenging when a physical rain gauge is not available in the region of interest. Indirect detection of precipitation from available data, such as changes in ocean near-surface salinity (NSS), can be used to construct a virtual rainfall detector. We propose to combine data-based and model-based methods to detect rainfall without the use of a physical rain gauge. We use NSS and precipitation data from a mooring in the eastern tropical Pacific Ocean to develop and test the method.

Open access
Sem Vijverberg, Raed Hamed, and Dim Coumou

Abstract

Soy harvest failure events can severely impact farmers and insurance companies and can raise global prices. Reliable seasonal forecasts of misharvests would allow stakeholders to prepare and take appropriate early action. However, especially for farmers, the reliability and lead time of current prediction systems provide insufficient information to justify within-season adaptation measures. Recent innovations have increased our ability to generate reliable statistical seasonal forecasts. Here, we combine these innovations to predict the 1–3 poor soy harvest years in the eastern United States. We first use a clustering algorithm to spatially aggregate crop-producing regions within the eastern United States that are particularly sensitive to hot–dry weather conditions. Next, we use observational climate variables [sea surface temperature (SST) and soil moisture] to extract precursor time series at multiple lags. This allows the machine learning model to learn the low-frequency evolution, which carries important information for predictability. A selection based on causal inference allows for physically interpretable precursors. We show that the robustly selected predictors are associated with the evolution of the horseshoe Pacific SST pattern, in line with previous research. We use the state of the horseshoe Pacific to identify years with enhanced predictability. We achieve high forecast skill for poor harvest events, even 3 months prior to sowing, using strict one-step-ahead train–test splitting. Over the last 25 years, when the horseshoe Pacific SST pattern was anomalously strong, 67% of the poor harvests predicted in February were correct. When operational, this forecast would enable farmers to make informed decisions on adaptation measures, for example, selecting more drought-resistant cultivars or changing planting management.
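
A minimal sketch of strict one-step-ahead (expanding-window) train–test splitting, applied here to a toy poor-harvest classification from lagged precursor indices; the predictors, label definition, and logistic regression model are assumptions for illustration, not the study's causally selected precursors or forecast model.

```python
# Sketch: predict a poor-harvest year from precursor indices using a strict
# one-step-ahead split, so each year is forecast with only earlier years.
# Predictors, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_years, n_precursors = 40, 3
X = rng.normal(size=(n_years, n_precursors))   # e.g., horseshoe-SST and soil-moisture indices
signal = X[:, 0] + 0.5 * rng.normal(size=n_years)
y = (signal < np.median(signal)).astype(int)   # 1 = poorer-than-median year (toy label)

preds = []
for test_year in range(15, n_years):           # start once enough training years exist
    model = LogisticRegression().fit(X[:test_year], y[:test_year])
    preds.append(model.predict_proba(X[test_year:test_year + 1])[0, 1])

print(np.round(preds, 2))                      # out-of-sample poor-harvest probabilities
```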

Significance Statement

If soy farmers knew that the upcoming growing season would be hot and dry, they could decide to take anticipatory action to reduce losses, that is, buying more drought-resistant soy cultivars or changing planting management. To make such decisions, farmers would need information even prior to sowing. At these very long lead times, a predictable signal can emerge from low-frequency processes of the climate system that affect surface weather via teleconnections. However, traditional forecast systems are unable to make reliable predictions at these lead times. In this work, we used machine learning techniques to train a forecast model based on these low-frequency components. This allowed us to make reliable predictions of poor harvest years even 3 months prior to sowing.

Open access