Abstract
Artificial intelligence (AI) is gaining popularity for severe weather forecasting. Recently, the authors developed an AI system using machine learning (ML) to produce probabilistic guidance for severe weather hazards, including tornadoes, large hail, and severe winds, using the National Severe Storms Laboratory’s (NSSL) Warn-on-Forecast System (WoFS) as input. Known as WoFS-ML-Severe, it performed well in retrospective cases, but its operational usefulness had yet to be determined. To examine the potential usefulness of the ML guidance, we conducted a control and treatment (experimental) group experiment during the 2022 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (HWT-SFE). The control group had full access to WoFS, while the experimental group had access to WoFS and ML products. Explainability graphics were also integrated into the WoFS web viewer. Both groups issued 1-h convective outlooks for each hazard. After issuing their forecasts, we surveyed participants on their confidence, the number of products viewed, and the usefulness of the ML guidance. We found the ML-based outlooks outperformed non-ML-based outlooks for multiple verification metrics for all three hazards and were rated subjectively higher by the participants. However, the difference in confidence between the two groups was not significant, and the experimental group self-reported viewing more products than the control group. Participants had mixed sentiments toward the explainability products: the products improved their understanding of the input/output relationships, but viewing them added to their workload. Although the experiment demonstrated the usefulness of ML guidance for severe weather forecasting, there are avenues to improve upon the ML guidance, and more training and exposure are needed to exploit its benefits fully.
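The abstract does not name the verification metrics used; for probabilistic hazard outlooks like these, the Brier score is one common choice. A minimal pure-Python sketch, illustrative only — the forecast probabilities and outcomes below are hypothetical, not the experiment's data:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probabilistic forecasts against binary
    outcomes (0 = no event, 1 = event); lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical 1-h tornado outlook probabilities vs. observed events
forecast = [0.1, 0.4, 0.7, 0.2, 0.9]
observed = [0, 0, 1, 0, 1]
print(round(brier_score(forecast, observed), 3))  # 0.31 / 5
```

Comparing such a score between control and experimental outlooks is one simple way to quantify "outperformed for multiple verification metrics."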
Significance Statement
We developed an artificial intelligence (AI) system to predict tornadoes, large hail, and damaging straight-line winds. The AI system was leveraged in real time during the 2022 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. This study reveals that forecasters using AI guidance produced more reliable and spatially accurate outlooks than those without. While AI and complementary explainability products did not reduce forecaster workload, both demonstrated great potential for improving severe weather forecasting. This research also highlights the importance of user feedback in refining AI tools for severe weather forecasting.
Abstract
Convective snow (CS) presents a significant hazard to motorists and is one of the leading causes of weather-related fatalities on Pennsylvania roadways. Thus, understanding environmental factors promoting CS formation and organization is critical for providing relevant and accurate information to those impacted. Prior research has been limited, mainly focusing on frontal CS bands often called “snow squalls”; thus, these studies do not account for the diversity of CS organizational modes that is frequently observed, highlighting a need for a robust climatology of broader CS events. To identify such events, a novel, radar-based CS detection algorithm was developed and applied to WSR-88D radar data from 10 cold seasons in central Pennsylvania, during which 159 cases were identified. Distinct convective organization modes were identified: linear (frontal) snow squalls, single cells, multicells, and streamer bands. Each algorithm-flagged radar scan containing CS was manually classified as one of these modes. Interestingly, the most-studied frontal mode only occurred <5% of the time, whereas multicellular modes dominated CS occurrence. Using the times associated with each CS mode, synoptic and local environmental information from model analyses was investigated. Key characteristics of CS environments compared to null cases include a 500-hPa trough in the vicinity, lower-tropospheric conditional instability, and sufficient moisture. Environments favorable for the different CS modes featured statistically significant differences in the 500-hPa trough axis position, surface-based CAPE, and the unstable layer depth, among others. These results provide insights into forecasting CS mode, explicitly presented in a forecasting decision tree.
Significance Statement
Convective snow events such as snow squalls are a leading cause of weather-related deaths on Pennsylvania roads. Research into these events is limited, thus negatively impacting forecast skill. To better understand convective snow event frequency of occurrence, inter- and intra-annual variability, and their supporting environments, we performed a 10-yr radar-based climatology of these events. We report the results of this climatology and on the statistically significant differences in their supporting environments. The latter are used to propose a forecasting framework for convective snow, which may improve the predictability of convective snow in an operational setting.
Abstract
Identifying radar signatures indicative of damaging surface winds produced by convection remains a challenge for operational meteorologists, especially within environments characterized by strong low-level static stability and convection for which inflow is presumably entirely above the planetary boundary layer. Numerical model simulations suggest the most prevalent method through which elevated convection generates damaging surface winds is via “up–down” trajectories, where a near-surface stable layer is dynamically lifted and then dropped with little to no connection to momentum associated with the elevated convection itself. Recently, a number of unique convective episodes during which damaging surface winds were produced by apparently elevated convection coincident with mesoscale gravity waves were identified and cataloged for study. A novel radar signature indicative of damaging surface winds produced by elevated convection is introduced through six representative cases. One case is then explored further via a high-resolution model simulation and related to the conceptual model of up–down trajectories. Understanding the processes responsible for, and radar signature indicative of, damaging surface winds produced by gravity wave coincident convection will help operational forecasters identify and ultimately warn for a previously underappreciated phenomenon that poses a threat to lives and property.
Significance Statement
We identified unique radar and observational signatures of thunderstorms that produce damaging surface winds through a recently discovered mechanism. The radar and observational signatures can be used to issue warnings to protect lives and property in situations where damaging winds were previously unexpected. Key observational signatures include associated increases in surface pressure, sustained wind, and wind gust magnitudes, as well as little to no change or an increase in surface temperature. In addition, base radar data exhibit a divergence signature, including in regions of little or no detectable precipitation. Additional study is needed to answer why some atmospheric environments are supportive of the unique damaging-wind-producing mechanism while others are not.
Abstract
Wildfire agencies use fire danger rating systems (FDRSs) to deploy resources and issue public safety measures. The most widely used FDRS is the Canadian fire weather index (FWI) system, which uses weather inputs to estimate the potential for wildfires to start and spread. Current FWI forecasts provide a daily numerical value, representing potential fire severity at an assumed midafternoon time for peak fire activity. This assumption, based on typical diurnal weather patterns, is not always valid. To address this, we developed an hourly FWI (HFWI) system using numerical weather prediction. We validate HFWI against the traditional daily FWI (DFWI) by comparing HFWI forecasts with observation-derived DFWI values from 917 surface fire weather stations in western North America. Results indicate strong correlations between forecasted HFWI and the observation-derived DFWI. A positive mean bias in the daily maximum values of HFWI compared to the traditional DFWI suggests that HFWI can better capture severe fire weather variations regardless of when they occur. We confirm this by comparing HFWI with hourly fire radiative power (FRP) satellite observations for nine wildfire case studies in Canada and the United States. We demonstrate HFWI’s ability to forecast shifts in fire danger timing, especially during intensified fire activity in the late evening and early morning hours, while allowing for multiple periods of increased fire danger per day—a contrast to the conventional DFWI. This research highlights the HFWI system’s value in improving fire danger assessments and predictions, hopefully enhancing wildfire management, especially during atypical fire behavior.
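One way to see why a positive mean bias arises is to compare the daily maximum of the hourly values against a single assumed-peak daily value. A toy sketch — the function name and the data are ours, not the HFWI system's code:

```python
def daily_max_bias(hourly_fwi_by_day, daily_fwi):
    """Mean bias of max(hourly FWI) relative to the single daily FWI value.

    hourly_fwi_by_day: list of 24-value lists (one per day)
    daily_fwi: list of single daily FWI values (same length)
    """
    diffs = [max(hours) - d for hours, d in zip(hourly_fwi_by_day, daily_fwi)]
    return sum(diffs) / len(diffs)

# Toy day whose fire weather peaks in the late evening rather than midafternoon
hourly = [[5] * 14 + [12, 18, 22, 25, 28, 30, 26, 20, 14, 8]]  # 24 hourly values
daily = [25]  # value computed for the assumed midafternoon peak
print(daily_max_bias(hourly, daily))  # positive: hourly max (30) exceeds 25
```

Whenever the true peak falls outside midafternoon, the hourly maximum exceeds the daily value, which is the bias pattern the abstract reports.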
Abstract
The Weather Prediction Center (WPC) issues Mesoscale Precipitation Discussions (MPDs) to highlight regions where heavy rainfall is expected to pose a threat for flash flooding. Issued as short-term guidance, the MPD consists of a graphical depiction of the threat area and a technical discussion of the forecasted meteorological and hydrological conditions conducive to heavy rainfall and the potential for a flash flood event. MPDs can be issued either during or in anticipation of an event and typically are valid for up to 6 h. This study presents an objective verification of WPC’s MPDs issued between 2016 and 2022, complete with a climatology, false alarm analysis, and contingency table-based skill scores (e.g., critical success index and fractional coverage). Regional and seasonal differences become evident when MPDs are assessed based on these groupings. MPDs improved in basic skill scores between 2016 and 2020, with a slight decline in scores for 2021 and 2022. The false alarm ratio of MPDs has decreased between 2016 and 2021. The most dramatic improvement over the period occurs in the MPDs in the winter season (December, January, and February) and along the West Coast (primarily atmospheric river events). The accuracy of MPDs in this group has quadrupled when measured by fractional coverage, and the false alarm rate is approximately one-fifth of the 2016 value. Skill during active monsoon seasons tends to decrease, partially due to the large size of MPDs issued for monsoon-related flash flooding events.
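The contingency-table scores named above — critical success index and false alarm ratio — have simple closed forms. A sketch with hypothetical counts, not the study's verification code or data:

```python
def csi(hits, misses, false_alarms):
    """Critical success index: hits / (hits + misses + false alarms)."""
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def far(hits, false_alarms):
    """False alarm ratio: fraction of forecast events that failed to verify."""
    denom = hits + false_alarms
    return false_alarms / denom if denom else 0.0

# Hypothetical counts for one season of MPDs
print(round(csi(40, 10, 20), 3))  # 40 / 70
print(round(far(40, 20), 3))      # 20 / 60
```

A decreasing FAR with steady or rising CSI is the improvement pattern the abstract describes for 2016–21.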
Abstract
It is widely known from energy balances that global oceans play a fundamental role in atmospheric seasonal anomalies via coupling mechanisms. However, numerical weather prediction models still have limitations in long-term forecasting due to their nonlinear sensitivity to initial deep oceanic conditions. As the Mediterranean climate has highly unpredictable seasonal variability, we designed a complementary method by supposing that 1) delayed teleconnection patterns provide information about ocean–atmosphere coupling on subseasonal time scales through the lens of 2) partially predictable quasi-periodic oscillations since 3) forecast signals can be extracted by smoothing noise in a continuous lead-time horizon. To validate these hypotheses, the subseasonal predictability of temperature and precipitation was analyzed at 11 reference stations in the Mediterranean area in the 1993–2021 period. The novel method presented here combines lag-correlated teleconnections (15 indices) with self-predictability techniques for the residual quasi-oscillation based on wavelet (cyclic) and autoregressive integrated moving average (ARIMA) (linear) analyses. The prediction skill of this teleconnection–wavelet–ARIMA (TeWA) combination was cross-validated and compared to that of the ECMWF Seasonal Forecast System, version 5 (SEAS5) (3 months ahead). Results show that the proposed TeWA approach improves the predictability of first-month temperature and precipitation anomalies by 50%–70% compared with the forecast of SEAS5. On a moving-averaged daily scale, the optimum prediction window is 30 days for temperature and 16 days for precipitation. The predictable ranges are consistent with atmospheric bridges in teleconnection patterns [e.g., Upper-Level Mediterranean Oscillation (ULMO)] and are reflected by spatial correlation with sea surface temperature (SST). Our results suggest that combining the TeWA approach with numerical models could open new research lines in subseasonal-to-seasonal forecasting.
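The lag-correlation step of a method like TeWA can be illustrated by scanning lags of a teleconnection index against a later station anomaly series. A pure-Python sketch on toy data of our own, not the authors' implementation:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_lag(index_series, anomaly_series, max_lag):
    """Lag (in samples) at which the teleconnection index best
    correlates (in absolute value) with the later station anomaly."""
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = pearson(index_series[:-lag], anomaly_series[lag:])
    return max(scores, key=lambda k: abs(scores[k]))

# Toy series: the anomaly echoes the index two steps later
idx = [0.1, 0.9, -0.4, 0.7, -0.8, 0.3, 0.6, -0.5, 0.2, 0.4]
ano = [0.0, 0.0] + idx[:-2]
print(best_lag(idx, ano, 4))  # recovers the built-in lag of 2
```

In the actual method, the part of the anomaly not explained by such lagged indices would then be modeled with the wavelet and ARIMA components.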
Significance Statement
The Mediterranean climate presents high natural variability that makes skillful seasonal forecasts very difficult to achieve. We propose to complement current forecasting methods with a statistical approach that combines two conceptual models: first, climate anomalies (cold/warm or dry/wet periods) are treated as smooth waves (with slow changes); and second, atmospheric and oceanic indices represent the atmosphere–ocean interactions that influence Mediterranean climate variability in a delayed way. The key finding is that combining both provides better predictability of climate variability, which is an opportunity to improve natural resource management and planning.
Abstract
Ten bow echo events were simulated using the Weather Research and Forecasting (WRF) Model with 3- and 1-km horizontal grid spacing with both the Morrison and Thompson microphysics schemes to determine the impact of refined grid spacing on this often poorly simulated mode of convection. Simulated and observed composite reflectivities were used to classify convective mode. Skill scores were computed to quantify model performance at predicting all modes, and a new bow echo score was created to specifically evaluate the accuracy of bow echo forecasts. The full morphology score for runs using the Thompson scheme was noticeably improved by refined grid spacing, while the skill of Morrison runs did not change appreciably. However, bow echo scores for runs using both schemes improved when grid spacing was refined, with Thompson runs improving most significantly. Additionally, near-storm environments were analyzed to understand why the simulated bow echoes changed as grid spacing was changed. A relationship existed between bow echo production and cold pool strength, as well as with the magnitude of microphysical cooling rates. More numerous updrafts were present in 1-km runs, leading to longer intense lines of convection, which were more likely to evolve into longer-lived bow echoes in more cases. Large-scale features, such as a low-level jet orientation more perpendicular to the convective line and surface boundaries, often had to be present for bow echoes to occur in the 3-km runs.
Abstract
Quantifying the costs of radar outages allows value to be attributed to the alternate datasets that help mitigate outages. When radars are offline, forecasters rely more heavily on nearby radars, surface reports, numerical weather prediction models, and satellite observations. Monetized radar benefit models allow value to be attributed to individual radars for mitigating the threat to life from tornadoes, flash floods, and severe winds. Eighteen radars exceed $20 million in annual benefits for mitigating the threat to life from these convective hazards. The Jackson, Mississippi, radar (KDGX) provides the most value ($41.4 million), with the vast majority related to tornado risk mitigation ($29.4 million). During 2020–23, the average radar is offline for 2.57% of minutes or 9.27 days per year and experiences an average of 58.9 outages per year lasting 4.32 h on average. Radar outage cost estimates vary by location and convective hazard. Outage cost estimates concentrate at the top, with 8, 2, 4, and 5 radars exceeding $1 million in outage costs during 2020, 2021, 2022, and 2023, respectively. The KDGX radar experiences outage frequencies of 4.92% and 5.50% during 2020 and 2023, resulting in outage cost estimates > $2 million in both years. Combining outage cost estimates for all radars suggests that approximately $29.1 million in annual radar outage costs may be attributable as value to alternative datasets for helping mitigate radar outage impacts.
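The abstract does not spell out the cost model, but the KDGX figures are consistent with a simple multiplicative pro-rating of annual benefit by outage fraction — an assumption on our part, sketched below with numbers taken from the abstract:

```python
def outage_cost(annual_benefit, outage_fraction):
    """Pro-rate a radar's annual risk-mitigation benefit by the fraction
    of time it was offline (simplified multiplicative assumption)."""
    return annual_benefit * outage_fraction

# KDGX figures from the abstract: $41.4M annual benefit,
# 4.92% outage frequency during 2020
cost_2020 = outage_cost(41.4e6, 0.0492)
print(round(cost_2020 / 1e6, 2))  # millions of dollars; > $2M, as the text states
```

Whatever the exact model, the pattern holds: a high-benefit radar with above-average outage frequency dominates the national outage-cost total.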
Significance Statement
This study combines information on radar status and monetized radar benefit models to attribute value to individual radars, estimate radar outage costs, and quantify the potential value of alternative datasets during outage-induced gaps in coverage. Eighteen radars exceed $20 million in annual benefits for mitigating the combined threat to life from tornadoes, flash floods, and severe winds. The first and third most valuable radars, both in Mississippi, experience outage frequencies twice the national average, accounting for a disproportionate share of the overall outage costs. Our findings suggest that characterizing and mitigating these outages might provide a near-term solution to better protect these communities from convective hazards. Combining outage cost estimates for all radars suggests that approximately $29.1 million in annual radar outage costs may be attributable as value to alternative datasets for helping mitigate the impacts of radar outages.
Abstract
This study uses fixed buoy time series to create an algorithm for sea surface temperature (SST) cooling underneath a tropical cyclone (TC) inner core. To build predictive equations, SST cooling is first related to single-variable predictors such as the SST before storm arrival, ocean heat content (OHC), mixed layer depth, sea surface salinity and stratification, storm intensity, storm translation speed, and latitude. Of all the single-variable predictors, initial SST before storm arrival explains the greatest amount of variance in the change in SST during storm passage. Using a combination of predictors, we created nonlinear predictive equations for SST cooling. In general, the best predictive equations have four predictors and are built with knowledge of the prestorm ocean structure (e.g., OHC), storm intensity (e.g., minimum sea level pressure), initial SST before storm arrival, and latitude. The best-performing SST cooling equations are split into two ocean regimes: ocean heat content less than 60 kJ cm−2 (greater spread in SST cooling values) and ocean heat content greater than 60 kJ cm−2 (SST cooling is always less than 2°C), which demonstrates the influence of the prestorm oceanic thermal structure on the in-storm SST value. The new equations are compared to what is currently used in a statistical–dynamical model. Overall, since the ocean provides the latent and sensible heat fluxes necessary for TC intensification, the results highlight the importance of consistently obtaining accurate in-storm upper-oceanic thermal structure for accurate TC intensity forecasts.
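The two-regime structure described above can be sketched as a piecewise predictor keyed to ocean heat content, with the high-OHC branch bounded below 2°C of cooling as the abstract reports. The function name, predictor set, and all coefficients below are hypothetical placeholders for illustration; the fitted equations themselves are in the paper.

```python
def predict_sst_cooling(ohc: float, sst0: float, mslp: float, lat: float,
                        regime_threshold: float = 60.0) -> float:
    """Sketch of a two-regime SST-cooling predictor (degC).

    Predictors follow the abstract: prestorm ocean heat content (kJ cm^-2),
    initial SST (degC), minimum sea level pressure (hPa), and latitude (deg).
    All coefficients are hypothetical placeholders, NOT the paper's fitted
    values.
    """
    if ohc >= regime_threshold:
        # High-OHC regime: reported cooling stays below 2 degC
        cooling = 0.5 + 0.01 * (1010.0 - mslp) - 0.005 * (ohc - regime_threshold)
        return max(0.0, min(cooling, 2.0))
    # Low-OHC regime: wider spread in cooling values
    return max(0.0, 0.1 * (sst0 - 26.0) + 0.02 * (1010.0 - mslp)
               + 0.03 * lat - 0.01 * ohc)
```

The design point the regime split captures is that a deep warm layer (high OHC) limits how much cold water can be mixed to the surface, so cooling there is both smaller and more predictable.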
Significance Statement
The ocean provides the heat and moisture necessary for tropical cyclone (TC) intensification. Since the heat and moisture transfer depend on the sea surface temperature (SST), we create statistical equations for the prediction of SST underneath the storm. The variables we use combine the initial SST before the storm arrives, the upper-ocean thermal structure, and the strength and translation speed of the storm. The predictive equations for SST are evaluated for how well they improve TC intensity forecasts. The best-performing equations can be used for prediction in operational statistical models, which can aid intensity forecasts.