Abstract
This study focused on developing a consensus machine learning (CML) model for tropical cyclone (TC) intensity-change forecasting, especially for rapid intensification (RI). The CML model was built upon selected classical machine learning models, with input data extracted from a high-resolution hurricane model, the Hurricane Weather Research and Forecasting (HWRF) system. The input data contained 21 or 34 RI-related predictors extracted from the 2018 version of HWRF (H218). This study found that TC inner-core predictors, especially inner-core relative humidity, can be critical for improving RI predictions. Moreover, this study emphasized the importance of resampling an imbalanced input dataset: edited nearest-neighbor and synthetic minority oversampling techniques improved the probability of detection (POD) for the RI class by ∼10%. This study also showed that the CML model performs satisfactorily on RI prediction compared with the operational models: CML reached a 56% POD and a 46% false alarm ratio (FAR), while the operational models had only 10%–30% POD but 50%–60% FAR. The CML performance on the non-RI classes was comparable to that of the operational models. The results indicated that, with proper and sufficient training data and RI-related predictors, CML has the potential to provide reliable probabilistic RI forecasts during a hurricane season.
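The resampling step described above maps directly onto the open-source imbalanced-learn package, which combines SMOTE oversampling with edited nearest-neighbor cleaning in a single estimator. The sketch below is illustrative only: the predictor matrix, class balance, and downstream classifier are placeholders, not the authors' H218-derived dataset or CML model.

```python
# Sketch: SMOTE oversampling + edited nearest-neighbor (ENN) cleaning for
# an imbalanced RI/non-RI training set. Illustrative placeholders only.
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 21))            # 21 RI-related predictors (placeholder)
y = (rng.random(1000) < 0.07).astype(int)  # rare RI class: heavily imbalanced

# SMOTEENN first synthesizes minority (RI) samples with SMOTE, then uses
# ENN to remove samples whose neighbors disagree with their label.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)

# Any member of a consensus ensemble could then be trained on the
# rebalanced data; a random forest stands in here.
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
```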
Abstract
Dynamical climate predictions are produced by assimilating observations and running ensemble simulations of Earth system models. This process is time consuming, and by the time the forecast is delivered, new observations are already available, making it partially outdated from the moment of release. Moreover, producing such predictions is computationally demanding, and their production frequency is restricted. We tested the potential of a computationally cheap weighted-average technique that can continuously adjust such probabilistic forecasts, in between production intervals, using newly available data. The method estimates local positive weights within a Bayesian framework, favoring members closer to observations. We tested the approach with the Norwegian Climate Prediction Model (NorCPM), which assimilates monthly sea surface temperature (SST) and hydrographic profiles with the ensemble Kalman filter (EnKF). By the time the NorCPM forecast is delivered operationally, a week of unused SST data are available. We demonstrate the benefit of our weighting method on retrospective hindcasts. The weighting method greatly enhanced the NorCPM hindcast skill compared to the standard equal-weight approach up to a 2-month lead time (global correlation of 0.71 vs 0.55 at a 1-month lead time and 0.51 vs 0.45 at a 2-month lead time). The skill at a 1-month lead time is comparable to the accuracy of the EnKF analysis. We also show that weights determined using SST data can be used to improve the skill of other quantities, such as the sea ice extent. Our approach can provide a continuous forecast between intermittent forecast production cycles and can be extended to other independent datasets.
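The core weighting idea can be illustrated compactly: under a Gaussian observation-error assumption, each ensemble member receives a positive weight proportional to its likelihood given the newly available observations, and the weighted ensemble replaces the equal-weight one. The function below is a hypothetical, global (non-local) simplification, not the NorCPM implementation.

```python
# Sketch: Bayesian likelihood weighting of ensemble members against newly
# available observations. Hypothetical simplification; not NorCPM code.
import numpy as np

def member_weights(members, obs, obs_err_std):
    """members: (n_members, n_points) forecast SST; obs: (n_points,) new SST.
    Returns positive weights summing to 1 that favor members closer to obs."""
    resid = members - obs                          # innovation per member
    loglik = -0.5 * np.sum((resid / obs_err_std) ** 2, axis=1)
    loglik -= loglik.max()                         # guard against underflow
    w = np.exp(loglik)
    return w / w.sum()

# Example: a likelihood-weighted mean replaces the equal-weight mean, and
# the same weights can be reused for other variables (e.g., sea ice extent).
rng = np.random.default_rng(0)
members = rng.normal(size=(30, 500))
obs = rng.normal(size=500)
weights = member_weights(members, obs, obs_err_std=1.0)
weighted_forecast = weights @ members
```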
Abstract
Over the past decade the use of machine learning in meteorology has grown rapidly. Specifically, neural networks and deep learning have been used at an unprecedented rate. To fill the dearth of resources covering neural networks with a meteorological lens, this paper discusses machine learning methods in a plain-language format targeted to the operational meteorological community. This is the second paper in a pair that aims to serve as a machine learning resource for meteorologists. While the first paper focused on traditional machine learning methods (e.g., random forests), here a broad spectrum of neural networks and deep learning methods is discussed. Specifically, this paper covers perceptrons, artificial neural networks, convolutional neural networks, and U-networks. Like the Part I paper, this manuscript discusses the terms associated with neural networks and their training. The manuscript then provides some intuition behind each method and concludes by showing each method used in a meteorological example of diagnosing thunderstorms from satellite images (e.g., lightning flashes). This paper is accompanied by an open-source code repository to allow readers to explore neural networks using either the dataset provided (which is used in the paper) or as a template for alternate datasets.
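As a taste of what the accompanying repository covers, the minimal convolutional network below classifies image patches as thunderstorm or no thunderstorm. It is a self-contained Keras sketch with random placeholder data, not the paper's code or dataset; the input shape, layer sizes, and labels are assumptions.

```python
# Sketch: a minimal CNN for thunderstorm/no-thunderstorm patch
# classification. Illustrative only; see the paper's repository for the
# actual meteorological examples.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),            # e.g., one satellite IR channel
    layers.Conv2D(16, 3, activation="relu"),    # learn local spatial features
    layers.MaxPooling2D(),                      # downsample, build larger context
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability of thunderstorm
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder patches and lightning-based labels stand in for real data.
X = np.random.rand(256, 32, 32, 1).astype("float32")
y = (np.random.rand(256) > 0.5).astype("float32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```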
Abstract
Low-Earth-orbiting (LEO) hyperspectral infrared (IR) sounders have significant yet untapped potential for characterizing the thermodynamic environments of convective initiation and ongoing convection. While LEO soundings are of value to weather forecasters, their temporal resolution is too limited to resolve the rapidly evolving thermodynamics of the convective environment. We have developed a novel nowcasting methodology to extend snapshots of LEO soundings forward in time by up to 6 h, creating a product available within National Weather Service systems for user assessment. Our methodology is based on parcel forward-trajectory calculations from the satellite-observing time to generate future soundings of temperature (T) and specific humidity (q) at regularly gridded intervals in space and time. The soundings are based on NOAA-Unique Combined Atmospheric Processing System (NUCAPS) retrievals from the Suomi National Polar-Orbiting Partnership (Suomi NPP) and NOAA-20 satellite platforms. The tendencies of derived convective available potential energy (CAPE) and convective inhibition (CIN) are evaluated against gridded, hourly accumulated rainfall obtained from Multi-Radar Multi-Sensor (MRMS) observations for 24 hand-selected cases over the contiguous United States. Areas with forecast increases in CAPE (and reduced CIN) are shown to be associated with areas of precipitation. The increases in CAPE and decreases in CIN are largest for areas with the heaviest precipitation and are statistically significant compared to areas without precipitation. These results imply that adiabatic parcel advection of LEO satellite sounding snapshots forward in time is capable of identifying convective initiation over an expanded temporal scale compared to soundings used only at the LEO satellite overpass time.
Significance Statement
Advection of low-Earth-orbiting (LEO) satellite observations of temperature and specific humidity forward in time exhibits skill in determining where and when convection eventually initiates. This approach provides a foundation for a new nowcasting methodology leveraging thermodynamic soundings derived from hyperspectral infrared (IR) sounders on LEO satellite platforms. This method may be useful for creating time-resolved soundings with the constellation of LEO satellites until hyperspectral infrared soundings are widely available from geostationary platforms.
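A toy version of the forward-trajectory idea: parcels initialized at the satellite retrieval points are advected horizontally with winds while conserving potential temperature and specific humidity (the adiabatic assumption), and future T/q soundings are rebuilt from the parcels arriving at each grid point. The single-step Euler integration and function below are illustrative assumptions, not the operational product's implementation.

```python
# Sketch: adiabatic forward advection of satellite sounding parcels.
# Illustrative only; not the NUCAPS nowcasting product's code.
import numpy as np

EARTH_RADIUS_M = 6.371e6

def advect_parcels(lon, lat, u, v, dt_s):
    """One forward-Euler step of horizontal parcel advection.
    lon, lat: parcel positions (degrees); u, v: winds (m/s) at the parcels."""
    dlat = np.degrees(v * dt_s / EARTH_RADIUS_M)
    dlon = np.degrees(u * dt_s / (EARTH_RADIUS_M * np.cos(np.radians(lat))))
    return lon + dlon, lat + dlat

# Each parcel carries its potential temperature and specific humidity
# unchanged along the trajectory; gridding the arriving values at each
# output time yields the "future soundings" from which CAPE and CIN are
# recomputed.
lon = np.array([-97.0, -96.5]); lat = np.array([35.0, 35.5])
u = np.array([10.0, 12.0]); v = np.array([2.0, 1.0])
lon, lat = advect_parcels(lon, lat, u, v, dt_s=900.0)  # one 15-min step
```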
Abstract
Simulation of atmosphere–ocean–ice interactions in coupled Earth modeling systems with kilometer-scale resolution is a new challenge in operational numerical weather prediction. This study presents an assessment of sensitivity experiments performed with different sea ice products in a convective-scale weather forecasting system for the European Arctic. At kilometer-scale resolution, sea ice products are challenged by the large footprint of passive microwave satellite observations and by spurious sea ice detection in the higher-resolution retrievals based on synthetic aperture radar instruments. We perform sensitivity experiments with sea ice concentration fields from 1) the global ECMWF-IFS forecast system, 2) a newly developed multisensor product processed through a coupled sea ice–ocean forecasting system, and 3) the AMSR2 product based on passive microwave observations. There are significant differences between the products on O(100)-km scales in the northern Barents Sea and along the marginal ice zone north of the Svalbard archipelago and toward the Fram Strait. These differences have a direct impact on the modeled surface skin temperature over ocean and sea ice, the turbulent heat flux, and 2-m air temperature (T2M). An assessment against Arctic weather stations shows a significant improvement of forecasted T2M to the north and east of Svalbard when using the new multisensor product; however, south of Svalbard this product has a negative impact. The different sea ice products result in changes of the surface turbulent heat flux of up to 400 W m−2, which in turn produce T2M variations of up to 5°C. Over a 2-day forecast lead time this can lead to uncertainties in weather forecasts of about 1°C even hundreds of kilometers away from the sea ice.
Significance Statement
Weather forecasting in polar regions requires an accurate description of sea ice properties because of the strong atmosphere–ocean–ice interactions there. With the increasing resolution of weather forecasting systems, there is also a need to advance the resolution of the sea ice characteristics in the models. This is, however, not straightforward due to various issues in the sea ice satellite products. This study explores new products and approaches for integrating high-resolution sea ice in a weather prediction system. We find that the model is sensitive to the choice of sea ice product and that providing an accurate sea ice field at kilometer-scale resolution remains challenging.
Abstract
On average, modern numerical weather prediction forecasts for daily tornado frequency exhibit no skill beyond day 10. However, in this extended-range lead window, particular model cycles have exceptionally high forecast skill for tornadoes because of their ability to correctly simulate the future synoptic pattern. Here, model initial conditions that produced more skillful forecasts for tornadoes over the United States were examined, and potential causes for low-skill cycles within the Global Ensemble Forecasting System, version 12 (GEFSv12), were highlighted. There were 88 high-skill and 91 low-skill forecasts in which the verifying day-10 synoptic pattern for tornado conditions revealed a western U.S. thermal trough and an eastern U.S. thermal ridge, a favorable configuration for tornadic storm occurrence. Initial conditions for high-skill forecasts tended to exhibit warmer sea surface temperatures throughout the tropical Pacific Ocean and Gulf of Mexico, an active Madden–Julian oscillation, and significant modulation of Earth-relative atmospheric angular momentum. Low-skill forecasts were often initialized during La Niña and negative Pacific decadal oscillation conditions. Significant atmospheric blocking over eastern Russia, for which the GEFSv12 overforecast the duration and characteristics of the downstream flow, was a common physical process associated with low-skill forecasts. This work helps to increase our understanding of the common causes of high- or low-skill extended-range tornado forecasts and could serve as a helpful tool for operational forecasters.
Significance Statement
This research provides a framework for anticipating a more (or less) skillful 10-day tornado forecast in an operational numerical weather prediction system. High-skill forecasts were associated with substantial tropical convection and warm sea surface temperatures throughout the Pacific Ocean and Gulf of Mexico, whereas low-skill forecasts were typically associated with a blocking anticyclone over eastern Russia. These findings are important because they permit increased or decreased confidence in a long-range forecast of tornado occurrence based on a dynamical prediction system.
Abstract
Four-dimensional COAMPS dynamic initialization (FCDI) analyses with high-temporal- and high-spatial-resolution GOES-16 atmospheric motion vectors (AMVs) are utilized to analyze the development and rapid intensification of a mesovortex about 150 km to the south of the center of the subtropical cyclone Henri (2021). During the period of Henri's unusual westward track along 30°N, the FCDI z = 300-m wind vector analyses demonstrate highly asymmetric wind fields and a horseshoe-shaped isotach maximum about 75 km from the center, characteristics more consistent with the definition of a subtropical cyclone than of a tropical cyclone. Furthermore, the Henri westward track and the vertical wind shear have characteristics resembling a Rossby wave breaking conceptual model. The GOES-16 mesodomain AMVs allow the visualization of a series of outflow bursts in space and time in association with the southern mesovortex development and intensification. The FCDI analyses, forced by those thousands of AMVs every 15 min, then depict the z = 13 910-m wind field responses and the subsequent z = 300-m wind field adjustments in the southern mesovortex. A second, northern outflow burst displaced to the southeast of the main Henri vortex also led to a strong low-level mesovortex. It was when the two outflow bursts joined to create an eastward radial outflow all along the line between them that the southern mesovortex reached maximum intensity and maximum size. In contrast to the numerical model predictions of intensification, outflow from the mesovortex directed over the main Henri vortex led to a decrease in intensity.
Abstract
This study introduces a novel method for comparing vertical thermodynamic profiles, focusing on the atmospheric boundary layer, across a wide range of meteorological conditions. This method is developed using observed temperature and dewpoint temperature data from 31 153 soundings taken at 0000 UTC and 32 308 soundings taken at 1200 UTC between May 2019 and March 2020. Temperature and dewpoint temperature vertical profiles are first interpolated onto a height above ground level (AGL) coordinate, after which the temperature of the dry adiabat defined by the surface-based parcel’s temperature is subtracted from each quantity at all altitudes. This allows for common sounding features, such as turbulent mixed layers and inversions, to be similarly depicted regardless of temperature and dewpoint temperature differences resulting from altitude, latitude, or seasonality. The soundings that result from applying this method to the observed sounding collection described above are then clustered to identify distinct boundary layer structures in the data. Specifically, separately at 0000 and 1200 UTC, a k-means clustering analysis is conducted in the phase space of the leading two empirical orthogonal functions of the sounding data. As compared to clustering based on the original vertical profiles, which results in clusters that are dominated by seasonal and latitudinal differences, clusters derived from transformed data are less latitudinally and seasonally stratified and better represent boundary layer features such as turbulent mixed layers and pseudoadiabatic profiles. The sounding-comparison method thus provides an objective means of categorizing vertical thermodynamic profiles with wide-ranging applications, as demonstrated by using the method to verify short-range Global Forecast System model forecasts.
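The transformation and clustering pipeline can be sketched compactly: subtract from each profile the dry adiabat through the surface parcel's temperature, project the transformed profiles onto their two leading EOFs (principal components), and run k-means in that phase space. The synthetic profiles, cluster count, and array shapes below are illustrative assumptions, not the authors' data or settings.

```python
# Sketch: dry-adiabat-relative sounding transform + EOF/k-means clustering.
# Synthetic placeholder profiles; illustrative of the method only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

GAMMA_D = 9.8e-3  # dry-adiabatic lapse rate (K per meter)

rng = np.random.default_rng(0)
n_soundings, n_levels = 500, 50
z_agl = np.linspace(0.0, 5000.0, n_levels)              # AGL heights (m)
T = 290.0 - 0.0065 * z_agl + rng.normal(0, 2, (n_soundings, n_levels))
Td = T - np.abs(rng.normal(5, 3, (n_soundings, n_levels)))

# Subtract the dry adiabat defined by each surface parcel's temperature,
# so mixed layers and inversions align across altitudes, latitudes, seasons.
dry_adiabat = T[:, :1] - GAMMA_D * z_agl[np.newaxis, :]
T_t, Td_t = T - dry_adiabat, Td - dry_adiabat

# Cluster in the phase space of the two leading EOFs of the stacked data.
pcs = PCA(n_components=2).fit_transform(np.hstack([T_t, Td_t]))
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)
```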
Abstract
Developed as part of a larger effort by the National Weather Service (NWS) Radar Operations Center to modernize its suite of single-radar severe weather algorithms for the WSR-88D network, the Tornado Probability Algorithm (TORP) and the New Mesocyclone Detection Algorithm (NMDA) were evaluated by operational forecasters during the 2021 National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) Experimental Warning Program Radar Convective Applications experiment. Both TORP and NMDA leverage new products and advances in radar technology to create rotation-based objects that interrogate single-radar data, providing important summary and trend information that aids forecasters in issuing time-critical and potentially life-saving weather products. Utilizing virtual resources such as Google Workspace and cloud instances on Amazon Web Services, 18 forecasters from NOAA/NWS and the U.S. Air Force participated remotely over three weeks during the spring of 2021, providing valuable feedback on the efficacy of the algorithms and their display in an operational warning environment. The experiment served as a critical step in the research-to-operations process for TORP and NMDA. This article discusses the details of the virtual HWT experiment and the results of each algorithm's evaluation during the testbed.
Significance Statement
Before newly developed radar-based severe weather applications are transitioned to forecasting operations, an experiment simulating their use by end users issuing severe weather warnings helps identify both how they are best utilized and what improvements are needed to increase their operational readiness. This study describes the forecaster evaluation, conducted in 2021, of the single-radar Tornado Probability Algorithm (TORP) and the New Mesocyclone Detection Algorithm (NMDA) in one of the first completely virtual Hazardous Weather Testbed (HWT) experiments. Participants stated that both TORP and NMDA offered marked improvement over the currently available algorithms, helping operational forecasters build confidence when issuing severe weather warnings and increasing their overall situational awareness of storms within their domain.
Abstract
The prediction of supercooled large drops (SLD) from the Thompson–Eidhammer (TE) microphysics scheme, run as part of the High-Resolution Rapid Refresh (HRRR) model, is evaluated with observations from the In-Cloud Icing and Large drop Experiment (ICICLE) field campaign. These observations are also used to train a random forest machine learning (ML) model, which is then used to predict SLD from several variables derived from HRRR model output. Results provide insight into the limitations and benefits of each model. Generally, the ML model increases both the probability of detection (POD) and the false alarm rate (FAR) of SLD compared to prediction from TE microphysics. Additionally, the POD of SLD increases with increasing forecast lead time for both models, likely because clouds and precipitation have more time to develop as forecast length increases. Since SLD take time to develop in TE microphysics and may be poorly represented in short-term (<3 h) forecasts, the ML model can provide improved short-term guidance on supercooled large-drop icing conditions. Results also show that TE microphysics predicts a frequency of SLD in cold (<−10°C) or high ice water content (IWC) environments that is too low compared to observations, whereas the ML model better captures the relative frequency of SLD in these environments.
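A minimal sketch of the ML side of this comparison, assuming a generic tabular setup: a random forest is trained on placeholder HRRR-derived predictors and scored with POD and FAR. The feature set, class balance, and the paper's exact FAR definition are assumptions here, not the ICICLE/HRRR configuration.

```python
# Sketch: random forest SLD prediction from HRRR-derived predictors,
# scored with POD and FAR. Placeholder data; not the ICICLE training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))            # HRRR-derived predictors (placeholder)
y = (rng.random(5000) < 0.15).astype(int)  # 1 = SLD present (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

hits = np.sum((pred == 1) & (y_te == 1))
misses = np.sum((pred == 0) & (y_te == 1))
false_alarms = np.sum((pred == 1) & (y_te == 0))
pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarms per yes-forecast
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```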