Browse

Showing items 71–80 of 164 for: Artificial Intelligence for the Earth Systems
Stephanie M. Ortland, Michael J. Pavolonis, and John L. Cintineo

Abstract

This paper presents the Thunderstorm Nowcasting Tool (ThunderCast), a 24-h, year-round model for predicting the location of convection that is likely to initiate or remain a thunderstorm in the next 0–60 min in the continental United States, adapted from existing deep learning convection applications. ThunderCast utilizes a U-Net convolutional neural network for semantic segmentation trained on 320 km × 320 km data patches with four inputs and one target dataset. The inputs are satellite bands from the Geostationary Operational Environmental Satellite-16 (GOES-16) Advanced Baseline Imager (ABI) in the visible, shortwave infrared, and longwave infrared spectra, and the target is Multi-Radar Multi-Sensor (MRMS) radar reflectivity at the −10°C isotherm in the atmosphere. On a pixel-by-pixel basis, ThunderCast has high accuracy, recall, and specificity but is subject to false-positive predictions resulting in low precision. However, the number of false positives decreases when buffering the target values with a 15 km × 15 km centered window, indicating ThunderCast’s predictions are useful within a buffered area. To demonstrate the initial prediction capabilities of ThunderCast, three case studies are presented: a mesoscale convective vortex, sea-breeze convection, and monsoonal convection in the southwestern United States. The case studies illustrate that the ThunderCast model effectively nowcasts the location of newly initiated and ongoing active convection, within the next 60 min, under a variety of geographical and meteorological conditions.
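The buffered verification described above can be sketched in a few lines. This is an illustrative NumPy example, not the authors' code: a predicted-positive pixel counts as a hit if any target-positive pixel falls inside a centered window around it. The window is given here in pixels (half=7 gives 15 × 15 pixels); mapping the paper's 15 km × 15 km window onto pixels depends on the grid spacing, which is an assumption left to the reader.

```python
import numpy as np

def buffered_hits(pred, target, half=7):
    """Mark each predicted-positive pixel as a hit if any target-positive
    pixel lies within a (2*half+1) x (2*half+1) centered window.
    pred, target: 2D boolean arrays on the same grid."""
    padded = np.pad(target, half, mode="constant", constant_values=False)
    hits = np.zeros_like(pred, dtype=bool)
    ys, xs = np.nonzero(pred)
    for y, x in zip(ys, xs):
        window = padded[y:y + 2 * half + 1, x:x + 2 * half + 1]
        hits[y, x] = window.any()
    return hits

def precision(pred, target, half=0):
    """Fraction of predicted positives that are (buffered) hits."""
    hits = buffered_hits(pred, target, half)
    return hits.sum() / max(pred.sum(), 1)
```

With half=0 this reduces to ordinary pixelwise precision; enlarging the window forgives small positional offsets, which is why the buffered scores reduce the apparent false-positive count.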

Significance Statement

In this research, a machine learning model is developed for short-term (0–60 min) forecasting of thunderstorms in the continental United States using geostationary satellite imagery as inputs for predicting active convection based on radar thresholds. Pending additional testing, the model may be able to provide decision-support services for thunderstorm forecasting. The case studies presented here indicate the model is able to nowcast convective initiation with 5–35 min of lead time in areas without radar coverage and anticipate future locations of storms without additional environmental context.

Open access
Charlotte Cambier van Nooten, Koert Schreurs, Jasper S. Wijnands, Hidde Leijnse, Maurice Schmeits, Kirien Whan, and Yuliya Shapovalova

Abstract

Precipitation nowcasting is essential for weather-dependent decision-making, but it remains a challenging problem despite active research. The combination of radar data and deep learning methods has opened a new avenue for research. Radar data are well suited for precipitation nowcasting due to the high space–time resolution of the precipitation field. On the other hand, deep learning methods allow the exploitation of possible nonlinearities in the precipitation process. Thus far, deep learning approaches have demonstrated equal or better performance than optical flow methods for low-intensity precipitation, but nowcasting high-intensity events remains a challenge. In this study, we have built a deep generative model with various extensions to improve nowcasting of heavy precipitation intensities. Specifically, we consider different loss functions and how the incorporation of temperature data as an additional feature affects the model’s performance. Using radar data from KNMI and 5–90-min lead times, we demonstrate that the deep generative model with the proposed loss function and temperature feature outperforms other state-of-the-art models and benchmarks. Our model, with both loss function and feature extensions, is skillful at nowcasting precipitation for high rainfall intensities, up to 60-min lead time.
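One common way to make a loss function emphasize heavy precipitation, in the spirit of the loss extensions discussed above, is to weight errors by the observed rain rate. The sketch below is illustrative only — the thresholds and weights are invented for the example and are not the paper's actual loss:

```python
import numpy as np

def weighted_mae(pred, obs, thresholds=(1.0, 5.0, 10.0),
                 weights=(1.0, 2.0, 5.0, 10.0)):
    """Mean absolute error with weights that grow with observed rain rate
    (mm/h), so errors at high intensities dominate the loss.
    thresholds split observations into len(weights) intensity bins."""
    w = np.full_like(obs, weights[0], dtype=float)
    for t, wt in zip(thresholds, weights[1:]):
        w = np.where(obs >= t, wt, w)
    return float(np.mean(w * np.abs(pred - obs)))
```

In a deep generative setting such a term is typically added to the adversarial loss, trading some fidelity at light rain for skill at the rare heavy-rain pixels.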

Open access
Jordan Richards, Raphaël Huser, Emanuele Bevacqua, and Jakob Zscheischler

Abstract

Extreme wildfires continue to be a significant cause of human death and biodiversity destruction within countries that encompass the Mediterranean Basin. Recent worrying trends in wildfire activity (i.e., occurrence and spread) suggest that wildfires are likely to be highly impacted by climate change. To facilitate appropriate risk mitigation, it is imperative to identify the main drivers of extreme wildfires and assess their spatiotemporal trends, with a view to understanding the impacts of the changing climate on fire activity. To this end, we analyze the monthly burnt area due to wildfires over a region encompassing most of Europe and the Mediterranean Basin from 2001 to 2020 and identify high fire activity during this period in eastern Europe, Algeria, Italy, and Portugal. We build an extreme quantile regression model with a high-dimensional predictor set describing meteorological conditions, land-cover usage, and orography, for the domain. To model the complex relationships between the predictor variables and wildfires, we make use of a hybrid statistical deep learning framework that allows us to disentangle the effects of vapor pressure deficit (VPD), air temperature, and drought on wildfire activity. Our results highlight that while VPD, air temperature, and drought significantly affect wildfire occurrence, only VPD affects wildfire spread. Furthermore, to gain insights into the effect of climate trends on wildfires in the near future, we focus on the extreme wildfires in August 2001 and perturb VPD and temperature according to their observed trends. We find that, on average over Europe, trends in temperature (median over Europe: +0.04 K yr⁻¹) lead to a relative increase of 17.1% and 1.6% in the expected frequency and severity, respectively, of wildfires in August 2001; similar analyses using VPD (median over Europe: +4.82 Pa yr⁻¹) give respective increases of 1.2% and 3.6%. Our analysis finds evidence suggesting that global warming can lead to spatially nonuniform changes in wildfire activity.
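Quantile regression models like the one above are trained by minimizing the pinball (quantile) loss rather than squared error. As a hedged illustration of that building block (not the paper's hybrid framework):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss: asymmetric penalty whose population
    minimizer is the tau-quantile of y given the predictors.
    Under-prediction is penalized by tau, over-prediction by (1 - tau)."""
    e = y - q_pred
    return float(np.mean(np.where(e >= 0, tau * e, (tau - 1) * e)))
```

For extreme quantiles (tau close to 1), under-prediction of large burnt areas is penalized far more heavily than over-prediction, which is what pushes the fitted surface toward the upper tail.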

Open access
Dwaipayan Chatterjee, Claudia Acquistapace, Hartwig Deneke, and Susanne Crewell

Abstract

In this study, we introduce a self-supervised deep neural network approach to classify satellite images into independent classes of cloud systems. The driving question of the work is to understand whether our algorithm can capture cloud variability and identify distinct cloud regimes. Ultimately, we want to achieve generalization such that the algorithm can be applied to unseen data and thus help automatically extract relevant information important to atmospheric science and renewable energy applications from the ever-increasing satellite data stream. We use cloud optical depth (COD) retrieved from postprocessed high-resolution Meteosat Second Generation (MSG) satellite data as input for the network. The network’s architecture is based on the DeepCluster, version 2, and consists of a convolutional neural network and a multilayer perceptron, followed by a k-means algorithm. We explore the network’s training capabilities by analyzing the centroids and feature vectors found from progressive minimization of the cross-entropy loss function. By making use of additional MSG retrieval products based on multichannel information, we derive the optimum number of classes to determine independent cloud regimes. We test the network capabilities on COD data from 2013 and find that the trained neural network gives insights into the cloud systems’ persistence and transition probability. The generalization on the 2015 data shows good skills of our algorithm with unseen data, but results depend on the spatial scale of cloud systems.
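The final clustering step in pipelines of this kind is typically a plain k-means over the learned feature vectors. A minimal Lloyd's-algorithm sketch in NumPy (illustrative, not the authors' implementation, which builds on DeepCluster v2):

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain Lloyd's k-means on an (n_samples, n_features) array, as the
    clustering step applied to feature vectors from a trained encoder."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids
```

Choosing k — the number of independent cloud regimes — is the nontrivial part, which the study addresses with additional MSG retrieval products rather than the clustering itself.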

Significance Statement

This study uses a self-supervised deep neural network to identify distinct cloud systems from cloud optical depth satellite images over central Europe. Satellite-retrieved products support the physical interpretation of the identified cloud classes and help optimize the number of identified classes. The trained neural network gives insights into cloud systems’ persistence and transition probability. The generalization capacity of the deep neural network with unseen data is promising but depends on the spatial scale of cloud systems.

Open access
Lei Meng and Laiyin Zhu

Abstract

Snow is an important component of Earth’s climate system, and snowfall intensity and variation often significantly impact society, the environment, and ecosystems. Understanding monthly and seasonal snowfall intensity and variations is challenging because of multiple controlling mechanisms at different spatial and temporal scales. Using 65 years of in situ snowfall observation, we evaluated seven machine learning algorithms for modeling monthly and seasonal snowfall in the Lower Peninsula of Michigan (LPM) based on selected environmental and climatic variables. Our results show that the Bayesian additive regression tree (BART) has the best fitting (R² = 0.88) and out-of-sample estimation skills (R² = 0.58) for the monthly mean snowfall, followed by the random forest model. The BART also demonstrates strong estimation skills for large monthly snowfall amounts. Both BART and the random forest models suggest that topography, local/regional environmental factors, and teleconnection indices can significantly improve the estimation of monthly and seasonal snowfall amounts in the LPM. These statistical models based on machine learning algorithms can incorporate variables at multiple scales and address nonlinear responses of snowfall variations to environmental/climatic changes. This demonstrates that multiscale machine learning techniques provide a reliable and computationally efficient approach to modeling snowfall intensity and variability.
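The fitting and out-of-sample skills quoted above are coefficients of determination. For reference, a minimal sketch of the metric (the standard definition, not code from the study):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - (residual sum of squares) /
    (total sum of squares). 1 is a perfect fit; 0 matches predicting
    the mean of y_true; negative is worse than the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

The gap between the in-sample value (0.88) and the out-of-sample value (0.58) is the usual sign that held-out evaluation, not fit quality, should drive model selection.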

Open access
Pedro Ortiz, Eleanor Casas, Marko Orescanin, Scott W. Powell, Veljko Petkovic, and Micky Hall

Abstract

Visible and infrared radiance products of geostationary orbiting platforms provide virtually continuous observations of Earth. In contrast, low-Earth orbiters observe passive microwave (PMW) radiances at any location much less frequently. Prior literature demonstrates the ability of a machine learning (ML) approach to build a link between these two complementary radiance spectra by predicting PMW observations using infrared and visible products collected from geostationary instruments, which could potentially deliver a highly desirable synthetic PMW product with nearly continuous spatiotemporal coverage. However, current ML models lack the ability to provide a measure of uncertainty of such a product, significantly limiting its applications. In this work, Bayesian deep learning is employed to generate synthetic Global Precipitation Measurement (GPM) Microwave Imager (GMI) data from Advanced Baseline Imager (ABI) observations with attached uncertainties over the ocean. The study first uses deterministic residual networks (ResNets) to generate synthetic GMI brightness temperatures with as little mean absolute error as 1.72 K at the ABI spatiotemporal resolution. Then, for the same task, we use three Bayesian ResNet models to produce a comparable amount of error while providing previously unavailable predictive variance (i.e., uncertainty) for each synthetic data point. We find that the Flipout configuration provides the most robust calibration between uncertainty and error across GMI frequencies, and then demonstrate how this additional information is useful for discarding high-error synthetic data points prior to use by downstream applications.
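In Bayesian deep learning, the per-point predictive variance is typically estimated by averaging over many stochastic forward passes, and high-uncertainty points can then be screened out as the abstract describes. A hedged NumPy sketch of that post-processing step (the sampling model itself — e.g., Flipout layers — is out of scope here):

```python
import numpy as np

def predictive_stats(mc_samples):
    """mc_samples: (n_draws, n_points) array of stochastic forward passes
    of a Bayesian network. Returns per-point predictive mean and variance."""
    return mc_samples.mean(axis=0), mc_samples.var(axis=0)

def keep_confident(pred_mean, pred_var, max_std):
    """Discard synthetic data points whose predictive standard deviation
    exceeds a threshold, keeping only well-calibrated predictions."""
    mask = np.sqrt(pred_var) <= max_std
    return pred_mean[mask], mask
```

When uncertainty is well calibrated against error — as reported for the Flipout configuration — this filter preferentially removes the high-error synthetic brightness temperatures before downstream use.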

Open access
Mary Ruth Keller, Christine Piatko, Mary Versa Clemens-Sewall, Rebecca Eager, Kevin Foster, Christopher Gifford, Derek Rollend, and Jennifer Sleeman

Abstract

Ships inside the Arctic basin require high-resolution (1–5 km), near-term (days to semimonthly) forecasts for guidance on scales of interest to their operations where forecast model predictions are insufficient due to their coarse spatial and temporal resolutions. Deep learning techniques offer the capability of rapid assimilation and analysis of multiple sources of information for improved forecasting. Data from the National Oceanic and Atmospheric Administration’s Global Forecast System, Multi-scale Ultra-high Resolution Sea Surface Temperature (MEaSUREs), and the National Snow and Ice Data Center’s Multisensor Analyzed Sea ice Extent (MASIE) were used to develop the sea ice extent deep learning forecast model, over the freeze-up periods of 2016, 2018, 2019, and 2020 in the Beaufort Sea. Sea ice extent forecasts were produced for 1–7 days in the future. The approach was novel for sea ice extent forecasting in using forecast data as model input to aid in the prediction of sea ice extent. Model accuracy was assessed against a persistence model. While the average accuracy of the persistence model dropped from 97% to 90% for forecast days 1–7, the deep learning model accuracy dropped only to 93%. A k-fold (fourfold) cross-validation study found that on all except the first day, the deep learning model, which includes a U-Net architecture with an 18-layer residual neural network (ResNet-18) backbone, does better than the persistence model. Skill scores improve the farther out in time, reaching 0.27. The model demonstrated success in predicting changes in ice extent of significance for navigation in the Amundsen Gulf. Extensions to other Arctic seas, seasons, and sea ice parameters are under development.
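Skill scores of this kind are usually defined relative to a reference forecast such as persistence. A minimal sketch of the accuracy-based form (an assumption about the exact score used — the paper's 0.27 comes from its own verification):

```python
def accuracy_skill_score(acc_model, acc_reference):
    """Skill relative to a reference forecast (e.g., persistence):
    1 is perfect, 0 matches the reference, negative is worse."""
    return (acc_model - acc_reference) / (1.0 - acc_reference)
```

For example, a model at 93% accuracy against a 90% persistence baseline scores 0.3 — close to, but not identical with, the reported day-7 value, since the published score is computed fold by fold rather than from the averaged accuracies.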

Significance Statement

Ships traversing the Arctic require timely, accurate sea ice location information to successfully complete their transits. After testing several potential candidates, we have developed a short-term (7-day) forecast process using existing observations of ice extent and sea surface temperature, with operational forecasts of weather and oceanographic variables, and appropriate machine learning models. The process included using forecasts of atmospheric and oceanographic conditions, as a human forecaster/analyst would. The models were trained for the Beaufort Sea north of Alaska using data and forecasts from 2016 combined with 2018–20. The results showed improvement in short-term forecasts of ice locations over current methods and also demonstrated correctly predicted changes in the sea ice that are important for navigation.

Open access
Selina M. Kiefer, Sebastian Lerch, Patrick Ludwig, and Joaquim G. Pinto

Abstract

Skillful weather prediction on subseasonal to seasonal time scales is crucial for many socioeconomic ventures. But forecasting, especially extremes, on these time scales is very challenging because the information from initial conditions is gradually lost. Therefore, data-driven methods are discussed as an alternative to numerical weather prediction models. Here, quantile regression forests (QRFs) and random forest classifiers (RFCs) are used for probabilistic forecasting of central European mean wintertime 2-m temperatures and cold wave days at lead times of 14, 21, and 28 days. ERA5 reanalysis meteorological predictors are used as input data for the machine learning models. For the winters of 2000/01–2019/20, the predictions are compared with a climatological ensemble obtained from E-OBS observational data. The evaluation is performed as full distribution predictions for continuous values using the continuous ranked probability skill score and as binary categorical forecasts using the Brier skill score. We find skill at lead times up to 28 days in the 20-winter mean and for individual winters. Case studies show that all used machine learning models are able to learn patterns in the data beyond climatology. A more detailed analysis using Shapley additive explanations suggests that both random forest (RF)-based models are able to learn physically known relationships in the data. This underlines that RF-based data-driven models can be a suitable tool for forecasting central European mean wintertime 2-m temperatures and the occurrence of cold wave days.
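The continuous ranked probability score (CRPS) used above has a convenient closed form for ensemble forecasts (such as the quantile predictions of a QRF or the climatological ensemble). As a hedged sketch of the standard estimator, not the authors' evaluation code:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble forecast of a scalar observation, via the
    identity CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are
    independent draws from the forecast distribution."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return float(term1 - term2)
```

The skill score is then 1 minus the ratio of the forecast's mean CRPS to that of the climatological reference, so positive values indicate skill beyond climatology.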

Significance Statement

Because of the chaotic nature of weather, it is very complicated to make predictions with traditional numerical methods 2–4 weeks in advance. Therefore, we use alternative, interpretable methods that “learn” to find statistically relevant patterns in meteorological data that can be used for forecasting central European mean surface wintertime temperatures and cold wave days. These methods are part of the so-called machine learning methods that do not rely on the traditional numerical equations anymore. We test our methods for 20 winters between 2000/01 and 2019/20 against a static weather prediction consisting of the past 30 winters. For single winters and in a mean over the 20 predicted winters, we find improved predictions up to 4 weeks in advance.

Open access
Clément Brochet, Laure Raynaud, Nicolas Thome, Matthieu Plu, and Clément Rambour

Abstract

Emulating numerical weather prediction (NWP) model outputs is important to compute large datasets of weather fields in an efficient way. The purpose of the present paper is to investigate the ability of generative adversarial networks (GANs) to emulate distributions of multivariate outputs (10-m wind and 2-m temperature) of a kilometer-scale NWP model. For that purpose, a residual GAN architecture, regularized with spectral normalization, is trained against a kilometer-scale dataset from the AROME Ensemble Prediction System (AROME-EPS). A wide range of metrics is used for quality assessment, including pixelwise and multiscale Earth-mover distances, spectral analysis, and correlation length scales. The use of wavelet-based scattering coefficients as meaningful metrics is also presented. The GAN generates samples with good distribution recovery and good skill in average spectrum reconstruction. Important local weather patterns are reproduced with a high level of detail, while the joint generation of multivariate samples matches the underlying AROME-EPS distribution. The different metrics introduced describe the GAN’s behavior in a complementary manner, highlighting the need to go beyond spectral analysis in generation quality assessment. An ablation study then shows that removing variables from the generation process is globally beneficial, pointing at the GAN limitations to leverage cross-variable correlations. The role of absolute positional bias in the training process is also characterized, explaining both accelerated learning and quality-diversity trade-off in the multivariate emulation. These results open perspectives about the use of GAN to enrich NWP ensemble approaches, provided that the aforementioned positional bias is properly controlled.
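The spectral-analysis metric mentioned above is commonly computed as a radially averaged power spectrum of each 2D field, comparing generated samples with the reference ensemble scale by scale. A minimal NumPy sketch (illustrative; the paper's full metric suite also includes Earth-mover distances and scattering coefficients):

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged power spectrum of a 2D field: bins the 2D FFT
    power by integer distance from the zero-frequency (DC) component."""
    f = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(f) ** 2
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts
```

Matching this average spectrum is necessary but not sufficient for sample quality, which is precisely the paper's point about going beyond spectral analysis.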

Open access
William Yik, Sam J. Silva, Andrew Geiss, and Duncan Watson-Parris

Abstract

Exploring the climate impacts of various anthropogenic emissions scenarios is key to making informed decisions for climate change mitigation and adaptation. State-of-the-art Earth system models can provide detailed insight into these impacts but have a large associated computational cost on a per-scenario basis. This large computational burden has driven recent interest in developing cheap machine learning models for the task of climate model emulation. In this paper, we explore the efficacy of randomly wired neural networks for this task. We describe how they can be constructed and compare them with their standard feedforward counterparts using the ClimateBench dataset. Specifically, we replace the serially connected dense layers in multilayer perceptrons, convolutional neural networks, and convolutional long short-term memory networks with randomly wired dense layers and assess the impact on model performance for models with 1 million and 10 million parameters. We find that models with less-complex architectures see the greatest performance improvement with the addition of random wiring (up to 30.4% for multilayer perceptrons). Furthermore, of 24 different model architecture, parameter count, and prediction task combinations, only one had a statistically significant performance deficit in randomly wired networks relative to their standard counterparts, with 14 cases showing statistically significant improvement. We also find no significant difference in prediction speed between networks with standard feedforward dense layers and those with randomly wired layers. These findings indicate that randomly wired neural networks may be suitable direct replacements for traditional dense layers in many standard models.
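Random wiring replaces a serial chain of dense layers with a random directed acyclic graph of layers. A toy NumPy sketch of the idea (an illustration of the general technique, not the paper's generator or its aggregation scheme — here each node simply sums its predecessors' activations before a ReLU dense layer):

```python
import numpy as np

def random_dag(n_nodes, p_edge, seed=0):
    """Random DAG over topologically ordered nodes: each node may receive
    an edge from any earlier node; every node gets at least one input."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((n_nodes, n_nodes)) < p_edge, k=1)
    for j in range(1, n_nodes):
        if not adj[:j, j].any():
            adj[rng.integers(0, j), j] = True
    return adj

def forward(x, adj, weights):
    """Forward pass: node j applies a ReLU dense layer (weights[j]) to the
    sum of its predecessors' activations; node 0 holds the input."""
    acts = [x]
    for j in range(1, adj.shape[0]):
        inp = sum(acts[i] for i in np.nonzero(adj[:j, j])[0])
        acts.append(np.maximum(weights[j] @ inp, 0.0))
    return acts[-1]
```

Because the wiring is sampled once and then fixed, parameter count and prediction speed stay comparable to a serial network of the same size, consistent with the timing result reported above.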

Significance Statement

Modeling various greenhouse gas and aerosol emissions scenarios is important for both understanding climate change and making informed political and economic decisions. However, accomplishing this with large Earth system models is a complex and computationally expensive task. As such, data-driven machine learning models have risen in prevalence as cheap emulators of Earth system models. In this work, we explore a special type of machine learning model called randomly wired neural networks and find that they perform competitively for the task of climate model emulation. This indicates that future machine learning models for emulation may significantly benefit from using randomly wired neural networks as opposed to their more-standard counterparts.

Open access