Abstract
Weather forecasting centers currently rely on statistical postprocessing methods to minimize forecast error. This improves skill but can lead to predictions that violate physical principles or disregard dependencies between variables, which can be problematic for downstream applications and for the trustworthiness of postprocessing models, especially when they are based on new machine learning approaches. Building on recent advances in physics-informed machine learning, we propose to achieve physical consistency in deep learning–based postprocessing models by integrating meteorological expertise in the form of analytic equations. Applied to the postprocessing of surface weather in Switzerland, we find that constraining a neural network to enforce thermodynamic state equations yields physically consistent predictions of temperature and humidity without compromising performance. Our approach is especially advantageous when data are scarce, and our findings suggest that incorporating domain expertise into postprocessing models allows the optimization of weather forecast information while satisfying application-specific requirements.
Significance Statement
Postprocessing is a widely used approach to reduce forecast error using statistics, but it may lead to physical inconsistencies. This outcome can be problematic for trustworthiness and downstream applications. We present the first machine learning–based postprocessing method intentionally designed to strictly enforce physical laws. Our framework improves physical consistency without sacrificing performance and suggests that human expertise can be incorporated into postprocessing models via analytic equations.
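The constraint idea in this abstract can be illustrated with a minimal sketch (ours, not the authors' architecture): have the network predict only temperature and dewpoint, then diagnose relative humidity analytically, here via the Magnus approximation, so the (temperature, dewpoint, humidity) triplet satisfies the state equation by construction.

```python
import numpy as np

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation to saturation vapor pressure (hPa)."""
    return 6.112 * np.exp(17.62 * t_celsius / (243.12 + t_celsius))

def consistent_humidity(temperature, dewpoint):
    """Diagnose relative humidity (%) from predicted temperature and
    dewpoint (deg C); the triplet is physically consistent by construction."""
    return 100.0 * (saturation_vapor_pressure(dewpoint)
                    / saturation_vapor_pressure(temperature))

# A postprocessing network would output (T, Td); RH is never predicted freely.
rh = consistent_humidity(np.array([20.0]), np.array([10.0]))
```

Saturated air (dewpoint equal to temperature) yields exactly 100% humidity, a consistency that an unconstrained multi-output network cannot guarantee.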
Abstract
This paper explores the application of emerging machine learning methods from image super resolution (SR) to the task of statistical downscaling. We specifically focus on convolutional neural network–based generative adversarial networks (GANs). Our GANs are conditioned on low-resolution (LR) inputs to generate high-resolution (HR) surface winds emulating Weather Research and Forecasting (WRF) Model simulations over North America. Unlike traditional SR models, where LR inputs are idealized coarsened versions of the HR images, WRF emulation involves using nonidealized LR and HR pairs, resulting in shared-scale mismatches due to internal variability. Our study builds upon current SR-based statistical downscaling by experimenting with a novel frequency-separation (FS) approach from the computer vision field. To assess the skill of SR models, we carefully select evaluation metrics and focus on performance measures based on spatial power spectra. Our analyses reveal how GAN configurations influence spatial structures in the generated fields, particularly biases in spatial variability spectra. Using power spectra to evaluate the FS experiments reveals that successful applications of FS in computer vision do not translate to climate fields. However, the FS experiments demonstrate the sensitivity of power spectra to a commonly used GAN-based SR objective function, which helps interpret and understand its role in determining spatial structures. This result motivates the development of a novel partial frequency-separation scheme as a promising configuration option. We also quantify the influence on GAN performance of nonidealized LR fields resulting from internal variability. Furthermore, we conduct a spectrum-based feature-importance experiment, allowing us to explore the dependence of the spatial structure of generated fields on different physically relevant LR covariates.
Significance Statement
We use artificial intelligence algorithms to mimic wind patterns from high-resolution climate models, offering a faster alternative to running these models directly. Unlike many similar approaches, we use datasets that acknowledge the essentially stochastic nature of the downscaling problem. Drawing inspiration from computer vision studies, we design several experiments to explore how different configurations impact our results. We find evaluation methods based on spatial frequencies in the climate fields to be quite effective at understanding how algorithms behave. Our results provide valuable insights into and interpretations of the methods for future research in this field.
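The spectral diagnostic this abstract relies on can be sketched generically; this radially averaged spatial power spectrum (our binning choices, not the paper's code) is the kind of measure used to expose biases in spatial variability.

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged spatial power spectrum of a 2D field: bin the
    2D FFT power by integer wavenumber annuli and average each annulus."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(ny)) * ny
    kx = np.fft.fftshift(np.fft.fftfreq(nx)) * nx
    kr = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    kbins = np.arange(0.5, min(nx, ny) // 2, 1.0)
    which = np.digitize(kr.ravel(), kbins)
    return np.array([power.ravel()[which == i].mean()
                     for i in range(1, len(kbins))])

# A pure sinusoid concentrates its power in a single wavenumber annulus.
row = np.sin(2 * np.pi * 4 * np.arange(64) / 64)
spec = radial_power_spectrum(np.tile(row, (64, 1)))
```

Comparing such spectra between generated and target fields reveals whether a model over- or under-represents variability at particular spatial scales.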
Abstract
There is growing use of machine learning algorithms to replicate subgrid parameterization schemes in global climate models. Parameterizations rely on approximations; thus, there is potential for machine learning to aid improvements. In this study, a neural network is used to mimic the behavior of the nonorographic gravity wave scheme used in the Met Office climate model, important for stratospheric climate and variability. The neural network is found to require only two of the six inputs used by the parameterization scheme, suggesting the potential for greater efficiency in this scheme. Use of a one-dimensional mechanistic model is advocated, allowing neural network hyperparameters to be chosen based on emergent features of the coupled system with minimal computational cost, and providing a testbed prior to coupling to a climate model. A climate model simulation, using the neural network in place of the existing parameterization scheme, is found to accurately generate a quasi-biennial oscillation of the tropical stratospheric winds, and correctly simulate the nonorographic gravity wave variability associated with El Niño–Southern Oscillation and stratospheric polar vortex variability. These internal sources of variability are essential for providing seasonal forecast skill, and the gravity wave forcing associated with them is reproduced without explicit training for these patterns.
Significance Statement
Climate simulations are required for providing advice to government, industry, and society regarding the expected climate on time scales of months to decades. Machine learning has the potential to improve the representation of some sources of variability in climate models that are too small to be directly simulated by the model. This study demonstrates that a neural network can simulate the variability due to atmospheric gravity waves that is associated with El Niño–Southern Oscillation and with the tropical and polar regions of the stratosphere. These details are important for a model to produce more accurate predictions of regional climate.
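The finding that only two of six inputs matter suggests an input-relevance analysis; the abstract does not give the procedure used, but a generic permutation-importance sketch conveys the idea: shuffle one input column at a time and record the increase in the emulator's error. The toy emulator below is a stand-in for the trained network.

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Increase in MSE when each input column is shuffled; inputs the
    emulator truly relies on show a large increase, redundant ones ~zero."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # six candidate inputs
y = 2.0 * X[:, 0] - X[:, 1]                   # target uses only two of them
emulator = lambda Z: 2.0 * Z[:, 0] - Z[:, 1]  # stands in for the trained net
scores = permutation_importance(emulator, X, y, rng)
```

Columns 2-5 come out with near-zero importance, the analog of the finding that four of the scheme's six inputs are redundant.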
Abstract
This paper presents the Thunderstorm Nowcasting Tool (ThunderCast), a 24-h, year-round model for predicting the location of convection that is likely to initiate or remain a thunderstorm in the next 0–60 min in the continental United States, adapted from existing deep learning convection applications. ThunderCast utilizes a U-Net convolutional neural network for semantic segmentation trained on 320 km × 320 km data patches with four inputs and one target dataset. The inputs are satellite bands from the Geostationary Operational Environmental Satellite-16 (GOES-16) Advanced Baseline Imager (ABI) in the visible, shortwave infrared, and longwave infrared spectra, and the target is Multi-Radar Multi-Sensor (MRMS) radar reflectivity at the −10°C isotherm in the atmosphere. On a pixel-by-pixel basis, ThunderCast has high accuracy, recall, and specificity but is subject to false-positive predictions resulting in low precision. However, the number of false positives decreases when buffering the target values with a 15 km × 15 km centered window, indicating ThunderCast’s predictions are useful within a buffered area. To demonstrate the initial prediction capabilities of ThunderCast, three case studies are presented: a mesoscale convective vortex, sea-breeze convection, and monsoonal convection in the southwestern United States. The case studies illustrate that the ThunderCast model effectively nowcasts the location of newly initiated and ongoing active convection, within the next 60 min, under a variety of geographical and meteorological conditions.
Significance Statement
In this research, a machine learning model is developed for short-term (0–60 min) forecasting of thunderstorms in the continental United States using geostationary satellite imagery as inputs for predicting active convection based on radar thresholds. Pending additional testing, the model may be able to provide decision-support services for thunderstorm forecasting. The case studies presented here indicate the model is able to nowcast convective initiation with 5–35 min of lead time in areas without radar coverage and anticipate future locations of storms without additional environmental context.
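The buffered verification described above can be sketched as a neighborhood check (our own illustration; the window half-width in pixels depends on the grid spacing, so a 15 km × 15 km buffer need not be 7 × 7 pixels).

```python
import numpy as np

def buffered_hits(pred, target, half_width):
    """Count a predicted-positive pixel as a hit if any target-positive
    pixel lies inside a centered (2*half_width + 1)^2 window."""
    ny, nx = target.shape
    hits = np.zeros_like(pred, dtype=bool)
    for y, x in zip(*np.nonzero(pred)):
        y0, y1 = max(0, y - half_width), min(ny, y + half_width + 1)
        x0, x1 = max(0, x - half_width), min(nx, x + half_width + 1)
        hits[y, x] = bool(target[y0:y1, x0:x1].any())
    return hits
```

Precision computed from `hits` rather than exact pixel matches rises, mirroring the drop in false positives the abstract reports for buffered targets.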
Abstract
Precipitation nowcasting is essential for weather-dependent decision-making, but it remains a challenging problem despite active research. The combination of radar data and deep learning methods has opened a new avenue for research. Radar data are well suited for precipitation nowcasting due to the high space–time resolution of the precipitation field. On the other hand, deep learning methods allow the exploitation of possible nonlinearities in the precipitation process. Thus far, deep learning approaches have demonstrated equal or better performance than optical flow methods for low-intensity precipitation, but nowcasting high-intensity events remains a challenge. In this study, we have built a deep generative model with various extensions to improve nowcasting of heavy precipitation intensities. Specifically, we consider different loss functions and how the incorporation of temperature data as an additional feature affects the model’s performance. Using radar data from KNMI and 5–90-min lead times, we demonstrate that the deep generative model with the proposed loss function and temperature feature outperforms other state-of-the-art models and benchmarks. Our model, with both loss function and feature extensions, is skillful at nowcasting precipitation for high rainfall intensities, up to 60-min lead time.
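One common family of loss-function extensions for heavy rain weights each pixel's squared error by the observed intensity class; the thresholds and weights below are illustrative, not the paper's values.

```python
import numpy as np

def weighted_mse(pred, obs,
                 thresholds=(1.0, 5.0, 10.0),
                 weights=(1.0, 2.0, 5.0, 10.0)):
    """MSE whose per-pixel weight grows with the observed rain rate (mm/h),
    so errors on heavy precipitation dominate the loss."""
    w = np.full(obs.shape, weights[0], dtype=float)
    for t, wt in zip(thresholds, weights[1:]):
        w = np.where(obs >= t, wt, w)
    return np.mean(w * (pred - obs) ** 2)
```

With a plain MSE, rare intense events contribute little to the average; the weighting rebalances training toward exactly the intensities that are hardest to nowcast.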
Abstract
Extreme wildfires continue to be a significant cause of human death and biodiversity destruction in countries around the Mediterranean Basin. Recent worrying trends in wildfire activity (i.e., occurrence and spread) suggest that wildfires are likely to be strongly affected by climate change. To facilitate appropriate risk mitigation, it is imperative to identify the main drivers of extreme wildfires and assess their spatiotemporal trends, with a view to understanding the impacts of the changing climate on fire activity. To this end, we analyze the monthly burnt area due to wildfires over a region encompassing most of Europe and the Mediterranean Basin from 2001 to 2020 and identify high fire activity during this period in eastern Europe, Algeria, Italy, and Portugal. We build an extreme quantile regression model with a high-dimensional predictor set describing meteorological conditions, land-cover usage, and orography for the domain. To model the complex relationships between the predictor variables and wildfires, we use a hybrid statistical deep learning framework that allows us to disentangle the effects of vapor pressure deficit (VPD), air temperature, and drought on wildfire activity. Our results highlight that while VPD, air temperature, and drought significantly affect wildfire occurrence, only VPD affects wildfire spread. Furthermore, to gain insight into the effect of climate trends on wildfires in the near future, we focus on the extreme wildfires of August 2001 and perturb VPD and temperature according to their observed trends. We find that, on average over Europe, trends in temperature (median over Europe: +0.04 K yr⁻¹) lead to relative increases of 17.1% and 1.6% in the expected frequency and severity, respectively, of wildfires in August 2001; similar analyses using VPD (median over Europe: +4.82 Pa yr⁻¹) give respective increases of 1.2% and 3.6%. Our analysis finds evidence suggesting that global warming can lead to spatially nonuniform changes in wildfire activity.
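The building block of an extreme quantile regression model like the one described is the pinball (quantile) loss, which a model minimizes to fit a chosen quantile level; a minimal sketch:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball loss for quantile level tau: under-predictions are
    penalized by tau, over-predictions by (1 - tau), so the minimizer
    over a constant prediction is the empirical tau-quantile."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
```

For tau = 0.9 the loss is lowest near the 90th percentile of the data, which is what makes it suitable for modeling extremes such as large burnt areas.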
Abstract
In this study, we introduce a self-supervised deep neural network approach to classify satellite images into independent classes of cloud systems. The driving question of the work is whether our algorithm can capture cloud variability and identify distinct cloud regimes. Ultimately, we want to achieve generalization such that the algorithm can be applied to unseen data and thus help automatically extract information relevant to atmospheric science and renewable energy applications from the ever-increasing satellite data stream. We use cloud optical depth (COD) retrieved from postprocessed high-resolution Meteosat Second Generation (MSG) satellite data as input for the network. The network’s architecture is based on DeepCluster, version 2, and consists of a convolutional neural network and a multilayer perceptron, followed by a k-means algorithm. We explore the network’s training capabilities by analyzing the centroids and feature vectors found from progressive minimization of the cross-entropy loss function. By making use of additional MSG retrieval products based on multichannel information, we derive the optimum number of classes for determining independent cloud regimes. We test the network’s capabilities on COD data from 2013 and find that the trained neural network gives insights into the cloud systems’ persistence and transition probabilities. Generalization to 2015 data shows good skill on unseen data, but results depend on the spatial scale of the cloud systems.
Significance Statement
This study uses a self-supervised deep neural network to identify distinct cloud systems from cloud optical depth satellite images over central Europe. Satellite-retrieved products support the physical interpretation of the identified cloud classes and help optimize the number of identified classes. The trained neural network gives insights into cloud systems’ persistence and transition probability. The generalization capacity of the deep neural network with unseen data is promising but depends on the spatial scale of cloud systems.
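The final clustering stage of a DeepCluster-style pipeline is plain k-means on feature vectors; a compact sketch, applied here to raw vectors rather than learned CNN features:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain k-means: assign each feature vector to the nearest centroid,
    then move each centroid to the mean of its members."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids
```

In the full pipeline, the cluster assignments serve as pseudo-labels that are fed back to train the convolutional network, which is what makes the approach self-supervised.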
Abstract
Snow is an important component of Earth’s climate system, and snowfall intensity and variability often significantly impact society, the environment, and ecosystems. Understanding monthly and seasonal snowfall intensity and variations is challenging because of multiple controlling mechanisms at different spatial and temporal scales. Using 65 years of in situ snowfall observations, we evaluated seven machine learning algorithms for modeling monthly and seasonal snowfall in the Lower Peninsula of Michigan (LPM) based on selected environmental and climatic variables. Our results show that the Bayesian additive regression tree (BART) has the best fitting (R² = 0.88) and out-of-sample estimation skill (R² = 0.58) for monthly mean snowfall, followed by the random forest model. BART also demonstrates strong estimation skill for large monthly snowfall amounts. Both the BART and random forest models suggest that topography, local/regional environmental factors, and teleconnection indices can significantly improve the estimation of monthly and seasonal snowfall amounts in the LPM. These statistical models based on machine learning algorithms can incorporate variables at multiple scales and address nonlinear responses of snowfall variations to environmental/climatic changes. This demonstrates that multiscale machine learning techniques provide a reliable and computationally efficient approach to modeling snowfall intensity and variability.
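The skill scores quoted above are coefficients of determination; for reference, a minimal implementation of R² as used for both in-sample fit and out-of-sample estimation:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus residual sum of squares
    over total sum of squares about the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect model scores 1, and a model that always predicts the climatological mean scores 0, which is why the drop from 0.88 (fit) to 0.58 (out of sample) is a meaningful measure of generalization.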
Abstract
Visible and infrared radiance products from geostationary platforms provide virtually continuous observations of Earth. In contrast, low-Earth orbiters observe passive microwave (PMW) radiances at any location much less frequently. Prior literature demonstrates the ability of a machine learning (ML) approach to build a link between these two complementary radiance spectra by predicting PMW observations from infrared and visible products collected by geostationary instruments, which could deliver a highly desirable synthetic PMW product with nearly continuous spatiotemporal coverage. However, current ML models lack the ability to provide a measure of uncertainty for such a product, significantly limiting its applications. In this work, Bayesian deep learning is employed to generate synthetic Global Precipitation Measurement (GPM) Microwave Imager (GMI) data from Advanced Baseline Imager (ABI) observations over the ocean, with attached uncertainties. The study first uses deterministic residual networks (ResNets) to generate synthetic GMI brightness temperatures with a mean absolute error as low as 1.72 K at the ABI spatiotemporal resolution. Then, for the same task, we use three Bayesian ResNet models that produce a comparable amount of error while providing previously unavailable predictive variance (i.e., uncertainty) for each synthetic data point. We find that the Flipout configuration provides the most robust calibration between uncertainty and error across GMI frequencies, and we then demonstrate how this additional information is useful for discarding high-error synthetic data points prior to use by downstream applications.
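The workflow in this abstract — sample a stochastic network, report predictive mean and variance, then screen out high-uncertainty points — can be sketched generically (`sample_fn` below is a stand-in for a Bayesian ResNet forward pass, not the authors' model):

```python
import numpy as np

def predictive_stats(sample_fn, x, n_samples=200):
    """Monte Carlo predictive mean and variance from repeated stochastic
    forward passes (e.g., Flipout draws new weight perturbations per call)."""
    draws = np.stack([sample_fn(x) for _ in range(n_samples)])
    return draws.mean(axis=0), draws.var(axis=0)

def filter_by_uncertainty(mean, var, max_std):
    """Keep only points whose predictive standard deviation is below a
    threshold, mimicking the screening of likely-high-error synthetic data."""
    keep = np.sqrt(var) <= max_std
    return mean[keep], keep
```

When uncertainty and error are well calibrated, thresholding on predictive standard deviation preferentially discards the points a downstream application would most regret using.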