Abstract
In 2009, advancements in NWP and computing power inspired a vision to advance hazardous weather warnings from a warn-on-detection to a warn-on-forecast paradigm. This vision would require not only the prediction of individual thunderstorms and their attributes but the likelihood of their occurrence in time and space. During the last decade, the warn-on-forecast research team at the NOAA National Severe Storms Laboratory met this challenge through the research and development of 1) an ensemble of high-resolution convection-allowing models; 2) ensemble- and variational-based assimilation of weather radar, satellite, and conventional observations; and 3) unique postprocessing and verification techniques, culminating in the experimental Warn-on-Forecast System (WoFS). Since 2017, we have directly engaged users in the testing, evaluation, and visualization of this system to ensure that WoFS guidance is usable and useful to operational forecasters at NOAA national centers and local offices responsible for forecasting severe weather, tornadoes, and flash floods across the watch-to-warning continuum. Although an experimental WoFS is now a reality, we close by discussing many of the exciting opportunities remaining, including folding this system into the Unified Forecast System, transitioning WoFS into NWS operations, and pursuing next-decade science goals for further advancing storm-scale prediction.
Significance Statement
The purpose of this research is to develop an experimental prediction system that forecasts the probability of severe weather hazards associated with individual thunderstorms up to 6 h in advance. This capability is important because some people and organizations, like those living in mobile homes, caring for patients in hospitals, or managing large outdoor events, require extended lead time to protect themselves and others from potential severe weather hazards. Our results demonstrate a prediction system that enables forecasters, for the first time, to message probabilistic hazard information associated with individual severe storms across the watch-to-warning time frame within the United States.
Abstract
The optical flow technique has advantages in motion tracking and has long been employed in precipitation nowcasting to track the motion of precipitation fields using ground radar datasets. However, the performance and forecast time scale of models based on optical flow are limited. Here, we present the results of applying a deep learning method to optical flow estimation to extend its forecast time scale and enhance nowcasting performance. It is shown that a deep learning model can better capture both multispatial and multitemporal motions of precipitation events compared with traditional optical flow estimation methods. The model comprises two components: 1) a regression process based on multiple optical flow algorithms, which captures multispatial features more accurately than a single optical flow algorithm; and 2) a U-Net-based network trained on multitemporal features of precipitation movement. We evaluated the model performance with cases of precipitation in South Korea. In particular, the regression process minimizes errors by combining multiple optical flow algorithms with a gradient descent method and outperforms models using only a single optical flow algorithm up to a 3-h lead time. Additionally, the U-Net plays a crucial role in capturing nonlinear motion that cannot be captured by a simple advection model through traditional optical flow estimation. Consequently, we suggest that the proposed optical flow estimation method with deep learning could play a significant role in improving the performance of current operational nowcasting models, which are based on traditional optical flow methods.
Significance Statement
The purpose of this study is to improve the accuracy of short-term rainfall prediction based on optical flow methods that have been employed for operational precipitation nowcasting. By utilizing open-source libraries, such as OpenCV, and commonly applied machine learning techniques, such as multiple linear regression and U-Net networks, we propose an accessible model for enhancing prediction accuracy. We expect that the improvement in prediction accuracy will significantly improve the practical application of operational precipitation nowcasting.
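The study's regression over multiple OpenCV optical flow algorithms and its U-Net are beyond a short sketch, but the core building block, estimating a motion vector from two radar frames by least squares and extrapolating the latest frame, can be illustrated with a single-window Lucas–Kanade solve. This is a deliberate simplification for illustration, not the paper's implementation; the frame data and dimensions below are invented:

```python
import numpy as np

def lucas_kanade_global(f0, f1):
    """Least-squares fit of a single (u, v) motion vector for the whole frame.

    Solves the brightness-constancy normal equations Ix*u + Iy*v = -It,
    a one-window version of the Lucas-Kanade optical flow estimate.
    """
    Iy, Ix = np.gradient((f0 + f1) / 2.0)    # spatial gradients (rows = y, cols = x)
    It = f1 - f0                             # temporal difference between frames
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)
    return u, v                              # pixels per frame in x and y

def advect(field, u, v):
    """Crude integer-pixel extrapolation of the latest frame along (u, v)."""
    return np.roll(np.roll(field, int(round(v)), axis=0), int(round(u)), axis=1)

# Synthetic example: a Gaussian "echo" drifting one pixel per frame in x
y, x = np.mgrid[0:64, 0:64]
echo = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 6.0 ** 2))
frame0, frame1 = echo, np.roll(echo, 1, axis=1)
u, v = lucas_kanade_global(frame0, frame1)
nowcast = advect(frame1, u, v)               # one-step extrapolation
```

An operational chain would replace the single global solve with dense flow fields (e.g., OpenCV's `cv2.calcOpticalFlowFarneback`) and semi-Lagrangian advection; the regression step in the abstract then weights several such fields to minimize one-step prediction error.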
Abstract
Intense tropical cyclones can form secondary eyewalls (SEs) that contract toward the storm center and eventually replace the inner eyewall, a process known as an eyewall replacement cycle (ERC). However, SE formation does not guarantee an eventual ERC, and often, SEs follow differing evolutionary pathways. This study documents SE evolution and progressions observed in numerous tropical cyclones, and results in two new datasets using passive microwave imagery: a global subjectively labeled dataset of SEs and eyes and their uncertainties from 72 storms between 2016 and 2019, and a dataset of 87 SE progressions that highlights the broad convective organization preceding and following an SE formation. The results show that two primary SE pathways exist: “No Replacement,” known as “Path 1,” and “Replacement,” known as the “Classic Path.” Most interestingly, 53% of the most certain SE formations result in an eyewall replacement. The Classic Path is associated with stronger column average meridional wind, a faster poleward component of storm motion, more intense storms, weaker vertical wind shear, greater relative humidity, a larger storm wind field, and stronger cold-air advection. This study highlights that a greater number of potential SE pathways exist than previously thought. The results of this study detail several observational features of SE evolution that raise questions about the physical processes that drive SE formations. Most important, environmental conditions and storm metrics identified here provide guidance for predictors in artificial intelligence applications for future tropical cyclone SE detection algorithms.
Abstract
Throughout the summer months in the Southeast United States (SEUS), isolated convective initiation (CI) can occur abundantly during the daytime with weak synoptic support (e.g., weak wind shear). Centered around this premise, a dual-summer, limited-area case study of CI events in relation to both geographical and meteorological features was conducted. The goal of this study was to help explain SEUS summertime CI in weak synoptic environments, which can enhance CI predictability. Results show that spatially nonrandom CI event patterns arise, with greater CI event density appearing over high elevation by midday. Later in the day, overall CI event counts subside as other mechanisms/factors emerge (e.g., urban heat island). Antecedent rainfall, instability, and moisture features are also higher on average where CI occurred. In a random forest feature importance analysis, elevation was the most important variable in dictating CI events in the early to midafternoon, while antecedent rainfall and wind direction consistently rank highest in permutation importance. The results cumulatively point, albeit via a muted, statistically nonsignificant signal, to a degree of spatial clustering of CI event occurrences across the study domain as a function of daytime heating and of features that enhance CI probabilities (e.g., differential heating and mesoscale thermal circulations).
Significance Statement
Widespread isolated thunderstorms in the Southeast United States summer season with weak synoptic support have been commonly observed. With forecasting these remaining a challenge, a dual-summer intercomparison of geographical/meteorological features with convective initiation events was conducted. Radar data with a minimum threshold for convective initiation detection (35 dBZ) were used. Spatial nonrandomness was discovered with greater event density appearing over higher elevation by midday. Features such as prior rainfall and atmospheric instability/moisture were higher on average where initiation occurred. In a feature importance analysis, elevation ranked higher in the early to midafternoon hours while antecedent rainfall and wind direction ranked highest overall in permutation importance. These results allude to the contribution of localized phenomena to the nonrandomness (e.g., mesoscale circulations).
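Permutation importance, the metric used above to rank antecedent rainfall and wind direction, is model-agnostic: shuffle one feature column, remeasure prediction error, and attribute the error increase to that feature. A minimal numpy sketch with a synthetic stand-in for the CI dataset (a plain least-squares model instead of the study's random forest, and invented feature roles):

```python
import numpy as np

def permutation_importance(predict, X, y, rng, n_repeats=10):
    """Mean increase in MSE when each feature column is shuffled in turn."""
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        shuffled_mses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])        # break feature j's link to y
            shuffled_mses.append(np.mean((predict(Xp) - y) ** 2))
        importances[j] = np.mean(shuffled_mses) - base_mse
    return importances

# Synthetic stand-in: column 0 (think "elevation") matters most, column 2 is noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a plain least-squares model in place of the study's random forest
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
predict = lambda Z: np.c_[Z, np.ones(len(Z))] @ coef

imp = permutation_importance(predict, X, y, rng)
```

The same loop works unchanged around a fitted random forest's `predict`, which is how the ranking in the abstract would be produced.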
Abstract
Convective available potential energy (CAPE) is an important index for storm forecasting. Recent versions (v15.2 and v16) of the Global Forecast System (GFS) predict lower values of CAPE during summertime in the continental United States than analyses and observations. We conducted an evaluation of the GFS in simulating summertime CAPE, using an example from the Unified Forecast System Case Study collection, to investigate the factors that lead to the low CAPE bias in the GFS. Specifically, we investigated the surface energy budget, soil properties, and near-surface and upper-level meteorological fields. Results show that the GFS simulates smaller surface latent heat flux and larger surface sensible heat flux than the observations. This can be attributed to the slightly drier-than-observed soil moisture in the GFS, which comes from an offline global land data assimilation system. The lower simulated CAPE in GFS v16 is related to the early drop of surface net radiation with excessive boundary layer cloud after midday when compared with GFS v15.2. A moisture-budget analysis indicates that errors in the large-scale advection of water vapor do not contribute to the dry bias in the GFS at low levels. Common Community Physics Package single-column model (SCM) experiments suggest that with realistic initial vertical profiles, SCM simulations generate a larger CAPE than runs with GFS initial conditions. SCM runs with an active land surface model tend to produce smaller CAPE than runs with prescribed surface fluxes. Note that these findings apply only to this case study; including more warm-season cases would enhance their generalizability.
Significance Statement
Convective available potential energy (CAPE) is one of the key parameters for severe weather analysis. The low bias of CAPE is identified by forecasters as one of the key issues for the NOAA operational global numerical weather prediction model, the Global Forecast System (GFS). Our case study shows that the lower CAPE in the GFS is related to an atmosphere that is drier than observed within the lowest 1 km. Further investigation suggests that this dry bias already exists in the initial conditions, which are produced by the Global Data Assimilation System, in which an earlier 6-h GFS forecast is combined with current observations. It is also attributed to simulated soil moisture that is slightly lower than observed. The lower CAPE in GFS v16 relative to GFS v15.2 in the case analyzed here is related to excessive boundary layer cloud formation beginning at midday, which reduces the net radiation reaching the surface and thus the latent heat feeding back to the low-level atmosphere.
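CAPE itself is just the vertically integrated positive buoyancy of a lifted parcel, which is easy to state in code even though operational calculations lift the parcel moist-adiabatically and use virtual temperature (e.g., via MetPy). A sketch with the parcel profile assumed given and an idealized check case:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape_from_profiles(z, t_parcel, t_env):
    """CAPE = g * integral of max(Tp - Te, 0) / Te dz  (J kg^-1).

    z: heights (m); t_parcel, t_env: parcel and environment temperatures (K).
    Negative-buoyancy layers (CIN) are excluded by the clip, per the definition.
    Real calculations use virtual temperature and a moist-adiabatic parcel ascent.
    """
    buoyancy = G * np.clip(t_parcel - t_env, 0.0, None) / t_env
    # trapezoidal integration over height
    return np.sum(0.5 * (buoyancy[1:] + buoyancy[:-1]) * np.diff(z))

# Idealized check: a parcel 1 K warmer than a 300-K environment over 10 km
z = np.linspace(0.0, 10000.0, 101)
cape = cape_from_profiles(z, np.full_like(z, 301.0), np.full_like(z, 300.0))
# analytically 9.81 * (1/300) * 10000 = 327 J kg^-1
```

The low bias discussed in the abstract enters through `t_parcel` (a drier boundary layer lowers the parcel's moist-adiabatic temperature aloft), not through the integral itself.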
Abstract
Despite the enormous potential of precipitation forecasts to save lives and property in Africa, low skill has limited their uptake. To assess and improve forecast performance, validation and postprocessing should be carried out continuously. Here, we evaluate the quality of reforecasts from the European Centre for Medium-Range Weather Forecasts over equatorial East Africa (EEA) against satellite and rain gauge observations for the period 2001–18. The 24-h rainfall accumulations are analyzed from short- to medium-range time scales; 48- and 120-h rainfall accumulations were also assessed. Skill was assessed against an extended probabilistic climatology (EPC) derived from the observations. Results show that the reforecasts overestimate rainfall, especially during the rainy seasons and over high-altitude areas. However, there is potential for skill in the raw forecasts up to a 14-day lead time. There is an improvement of up to 30% in the Brier score/continuous ranked probability score relative to EPC in most areas, especially the higher-altitude regions, decreasing with lead time. Aggregating the reforecasts enhances the skill further, likely due to a reduction in timing mismatches. However, for some regions of the study domain, the predictive performance is worse than EPC, mainly because of biases. Postprocessing the reforecasts using isotonic distributional regression considerably improves skill, increasing the number of grid points with a positive Brier skill score (continuous ranked probability skill score) by an average of 81% (91%) for lead times of 1–14 days. Overall, the study highlights the potential of the reforecasts, the spatiotemporal variation in skill, and the benefit of postprocessing in EEA.
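The Brier skill score used above compares a forecast's Brier score with that of a reference; positive values mean the reforecasts beat the climatology (here the EPC). The arithmetic is two lines, shown with made-up numbers standing in for the gridded verification:

```python
import numpy as np

def brier_score(prob, obs):
    """Mean squared error of probability forecasts against binary outcomes."""
    prob, obs = np.asarray(prob, float), np.asarray(obs, float)
    return np.mean((prob - obs) ** 2)

def brier_skill_score(prob_fc, prob_ref, obs):
    """BSS = 1 - BS_forecast / BS_reference; > 0 beats the reference (e.g., EPC)."""
    return 1.0 - brier_score(prob_fc, obs) / brier_score(prob_ref, obs)

# Toy verification: four rain/no-rain days, a sharp forecast vs. a 50% climatology
obs = [1, 0, 1, 0]
fc = [0.9, 0.1, 0.8, 0.2]
climo = [0.5, 0.5, 0.5, 0.5]
bss = brier_skill_score(fc, climo, obs)   # 1 - 0.025/0.25 = 0.9
```

The study's EPC reference replaces the flat 50% climatology with event frequencies estimated from the observational record at each grid point, which makes the skill comparison much harder to beat than a naive baseline.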
Abstract
A new version of the Weather Research and Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme was developed from the existing WDM6 scheme by predicting snow and graupel number concentrations. The new WDM6 scheme was tested for summer rainfall and winter snowfall cases to evaluate the effects of the prognostic number concentrations of snow and graupel on the simulated precipitation. Compared with the diagnosed values in the original WDM6 scheme, the number concentration of snow decreases at upper layers, and that of graupel decreases at all layers, in the new scheme. Rain number concentration is markedly reduced in the new WDM6 scheme owing to the newly added and modified sink processes. Therefore, the new scheme produces larger raindrops with a reduced number concentration relative to the original scheme, which hinders raindrop evaporation and produces more surface rain. Even though the enhanced surface rainfall in the new scheme degrades the bias score, the new scheme improves the equitable threat score and probability of detection in most cases. These scores all improved for warm-type summer cases under the new scheme. The new scheme also produces probability density functions of simulated liquid-equivalent precipitation rates that are more comparable to observations, alleviating the overprediction of precipitation frequencies in the heavy precipitation categories. Therefore, the new scheme improves the precipitation forecast for warm-type summer cases, which occur most frequently during the summer season over the Korean Peninsula.
Abstract
The Marshall Fire on 30 December 2021 became, in terms of cost, the most destructive wildfire in Colorado history as it evolved into a suburban firestorm in southeastern Boulder County, driven by strong winds and a snow-free, drought-influenced fuel state. The fire was driven by a strong downslope windstorm that maintained its intensity for nearly 11 h. The southward movement of a large-scale jet axis across Boulder County brought a quick transition that day into a zone of upper-level descent, enhancing the midlevel inversion and providing a favorable environment for an amplifying downstream mountain wave. In several aspects, this windstorm did not follow typical downslope windstorm behavior. NOAA rapidly updating numerical weather prediction guidance (including the High-Resolution Rapid Refresh) provided operationally useful forecasts of the windstorm, leading to the issuance of a High-Wind Warning (HWW) for eastern Boulder County. No Red Flag Warning was issued because of an overly restrictive relative humidity criterion (already published alternatives are recommended); however, owing to the HWW, a countywide burn ban was issued for that day. Consideration of spatial (vertical and horizontal) and temporal (both valid time and initialization time) neighborhoods allows some quantification of forecast uncertainty from deterministic forecasts, which is important in real-time use for forecasting and public warnings of extreme events. Essentially, these dimensions of the deterministic model were used to roughly approximate an ensemble forecast. These dimensions, including run-to-run consistency, are also important for subsequent evaluation of forecasts of small-scale features such as downslope windstorms and the tropospheric features responsible for them, similar to forecasts of deep, moist convection and related severe weather.
Significance Statement
The Front Range windstorm of 30 December 2021 combined extreme surface winds (>45 m s−1) with fire ignition resulting in an extraordinary and quickly evolving, extremely destructive wildfire–urban interface fire event. This windstorm differed from typical downslope windstorms in several aspects. We describe the observations, model guidance, and decision-making of operational forecasters for this event. In effect, an ensemble forecast was approximated by use of a frequently updated deterministic model by operational forecasters, and this combined use of temporal, spatial (horizontal and vertical), and other forecast dimensions is suggested to better estimate the possibility of such extreme events.
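The "ensemble approximated from a deterministic model" idea above is often realized as a time-lagged ensemble: successive initializations of the same model valid at one time are treated as members, and exceedance probabilities are read off as member fractions, optionally after a spatial-neighborhood maximum. A minimal sketch; the grid, wind values, and the wrap-around neighborhood are all invented simplifications:

```python
import numpy as np

def neighborhood_max(field, r=1):
    """Max over a (2r+1)x(2r+1) window via shifted copies (wraps at edges; crude)."""
    out = field.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, np.roll(np.roll(field, dy, axis=0), dx, axis=1))
    return out

def lagged_ensemble_prob(runs, threshold, r=1):
    """Fraction of time-lagged runs whose neighborhood max meets the threshold.

    runs: array (n_inits, ny, nx) of forecasts valid at the same time, each
    from a different initialization of the same deterministic model.
    """
    exceed = np.stack([neighborhood_max(f, r) >= threshold for f in runs])
    return exceed.mean(axis=0)

# Four successive deterministic runs (hypothetical wind speeds, m/s) on a tiny grid
runs = np.array([
    [[30.0, 48.0], [20.0, 25.0]],
    [[28.0, 50.0], [22.0, 27.0]],
    [[25.0, 38.0], [18.0, 24.0]],
    [[27.0, 52.0], [21.0, 26.0]],
])
prob = lagged_ensemble_prob(runs, threshold=45.0, r=1)  # 3 of 4 runs exceed 45
```

Run-to-run consistency shows up directly here: if successive initializations agree, the probability field sharpens toward 0 or 1, which is the signal forecasters exploited in this event.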
Abstract
The Marshall Fire on 30 December 2021 became the most destructive wildfire costwise in Colorado history as it evolved into a suburban firestorm in southeastern Boulder County, driven by strong winds and a snow-free and drought-influenced fuel state. The fire was driven by a strong downslope windstorm that maintained its intensity for nearly 11 hours. The southward movement of a large-scale jet axis across Boulder County brought a quick transition that day into a zone of upper-level descent, enhancing the midlevel inversion providing a favorable environment for an amplifying downstream mountain wave. In several aspects, this windstorm did not follow typical downslope windstorm behavior. NOAA rapidly updating numerical weather prediction guidance (including the High-Resolution Rapid Refresh) provided operationally useful forecasts of the windstorm, leading to the issuance of a High-Wind Warning (HWW) for eastern Boulder County. No Red Flag Warning was issued due to a too restrictive relative humidity criterion (already published alternatives are recommended); however, owing to the HWW, a countywide burn ban was issued for that day. Consideration of spatial (vertical and horizontal) and temporal (both valid time and initialization time) neighborhoods allows some quantification of forecast uncertainty from deterministic forecasts—important in real-time use for forecasting and public warnings of extreme events. Essentially, dimensions of the deterministic model were used to roughly estimate an ensemble forecast. These dimensions including run-to-run consistency are also important for subsequent evaluation of forecasts for small-scale features such as downslope windstorms and the tropospheric features responsible for them, similar to forecasts of deep, moist convection and related severe weather.
Significance Statement
The Front Range windstorm of 30 December 2021 combined extreme surface winds (>45 m s−1) with fire ignition, resulting in an extraordinary, quickly evolving, and extremely destructive wildfire–urban interface fire event. This windstorm differed from typical downslope windstorms in several respects. We describe the observations, model guidance, and decision-making of operational forecasters for this event. In effect, operational forecasters approximated an ensemble forecast by using a frequently updated deterministic model, and this combined use of temporal, spatial (horizontal and vertical), and other forecast dimensions is suggested as a way to better estimate the possibility of such extreme events.
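As a rough illustration of the neighborhood idea described above, the sketch below treats successive cycles of a deterministic model (time lagging) plus a small spatial neighborhood as a pseudo-ensemble and derives an exceedance probability for a wind threshold. This is a hypothetical minimal example under assumed grid shapes, threshold, and neighborhood size; it is not the forecasters' actual workflow.

```python
import numpy as np

def pseudo_ensemble_probability(runs, threshold, radius=1):
    """runs: list of 2D wind-speed grids (m/s), one per model cycle,
    all valid at the same forecast time. Returns a probability grid."""
    members = []
    for grid in runs:
        ny, nx = grid.shape
        # Each grid point's neighborhood maximum acts as one "member"
        # value, crediting near misses in space as well as in time.
        padded = np.pad(grid, radius, mode="edge")
        neigh_max = np.stack([
            padded[dy:dy + ny, dx:dx + nx]
            for dy in range(2 * radius + 1)
            for dx in range(2 * radius + 1)
        ]).max(axis=0)
        members.append(neigh_max >= threshold)
    # Probability = fraction of lagged runs whose neighborhood
    # meets or exceeds the threshold.
    return np.mean(members, axis=0)

# Toy example: three successive model cycles on a 3x3 grid (m/s).
runs = [np.array([[30., 48., 20.], [25., 44., 22.], [18., 21., 19.]]),
        np.array([[28., 46., 21.], [24., 40., 23.], [17., 20., 18.]]),
        np.array([[29., 38., 20.], [23., 47., 22.], [16., 19., 17.]])]
prob = pseudo_ensemble_probability(runs, threshold=45.0)
```

The design choice mirrors the abstract's point: run-to-run consistency across cycles, combined with spatial tolerance, converts a single deterministic model into a crude uncertainty estimate.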
Abstract
Several new precipitation-type algorithms have been developed to improve NWP predictions of surface precipitation type during winter storms. In this study, we evaluate whether it is possible to objectively declare one algorithm superior to another by comparing three precipitation-type algorithms validated with different techniques. The apparent skill of the algorithms depends on the choice of performance metric: an algorithm can score highly on some metrics and poorly on others. It is also possible for an algorithm to have high skill at diagnosing some precipitation types and poor skill with others. Algorithm skill is also highly dependent on the choice of verification data and methodology. Simply by changing which data are considered “truth,” we were able to substantially change the apparent skill of every algorithm evaluated herein. These findings suggest that an objective, unambiguous declaration of algorithm superiority is difficult, if not impossible. A contributing factor to algorithm performance is uncertainty in the microphysical processes that lead to phase changes of falling hydrometeors; these processes are treated differently by each algorithm, resulting in different biases in environments near 0°C. These biases are evident even when the algorithms are applied to ensemble forecasts. Hence, a multi-algorithm approach is advocated to account for this source of uncertainty. Although the apparent performance of this approach still depends on the choice of performance metric and precipitation type, a case-study analysis shows that it has the potential to provide better decision support than a single-algorithm approach.
Significance Statement
Many investigators are developing new and improved algorithms to diagnose surface precipitation type in winter storms. Whether these algorithms can be declared objectively superior to existing strategies is unknown. Herein, we evaluate different methods of measuring algorithm performance to assess whether it is possible to state that one algorithm is superior to another. The results of this study suggest that such claims are difficult, if not impossible, to make, at least for the algorithms considered herein. Because each algorithm has its own biases, we advocate a multi-algorithm approach in which multiple algorithms are applied to forecasts and a probabilistic prediction of precipitation type is generated. The potential value of this approach is demonstrated through a case-study analysis that shows promise for enhanced decision support.
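The multi-algorithm idea can be sketched as follows: run several precipitation-type algorithms on the same input and report the fraction of algorithms voting for each category as a probability. The three classification rules below (surface temperature, warm-nose, and thickness rules) are simplified stand-ins for illustration, not the algorithms evaluated in the study, and the profile fields are assumed names.

```python
from collections import Counter

def algo_surface_temp(profile):
    # Classify on 2-m temperature alone.
    return "snow" if profile["t2m_c"] <= 0.0 else "rain"

def algo_warm_layer(profile):
    # A melting layer aloft over a subfreezing surface implies freezing rain.
    if profile["max_warm_layer_c"] > 1.0:
        return "freezing_rain" if profile["t2m_c"] <= 0.0 else "rain"
    return "snow"

def algo_thickness(profile):
    # 1000-500-hPa thickness rule of thumb (~5400 m rain/snow line).
    return "snow" if profile["thickness_m"] < 5400.0 else "rain"

def multi_algorithm_ptype(profile):
    """Return the fraction of algorithms voting for each type."""
    votes = Counter(f(profile) for f in
                    (algo_surface_temp, algo_warm_layer, algo_thickness))
    n = sum(votes.values())
    return {ptype: count / n for ptype, count in votes.items()}

# Example: subfreezing surface beneath a warm nose. The algorithms
# disagree, and that disagreement is expressed as probabilities.
profile = {"t2m_c": -2.0, "max_warm_layer_c": 2.5, "thickness_m": 5420.0}
probs = multi_algorithm_ptype(profile)
```

Disagreement among the rules is exactly the signal the abstract argues for: rather than hiding each algorithm's bias, the spread of votes conveys the phase-change uncertainty to the forecaster.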
Abstract
Visible satellite imagery is widely used by operational weather forecast centers for tropical and extratropical cyclone analysis and marine forecasting. The absence of visible imagery at night can significantly degrade forecast capabilities, such as determining tropical cyclone center locations or tracking warm-topped convective clusters. This paper documents ProxyVis imagery, an infrared-based proxy for daytime visible imagery developed to address the lack of visible satellite imagery at night and the limitations of existing nighttime visible options. ProxyVis was trained on VIIRS day/night band imagery at times close to the full moon, using VIIRS IR channels that closely match GOES-16/17/18, Himawari-8/9, and Meteosat-9/10/11 channels. The final operational product applies the ProxyVis algorithms to geostationary satellite data and combines daytime visible and nighttime ProxyVis data to create full-disk animated GeoProxyVis imagery. The simple versions of the ProxyVis algorithm enable its generation from earlier GOES and Meteosat satellite imagery. ProxyVis offers significant improvement over existing operational products for tracking nighttime oceanic low-level clouds. Further, it is qualitatively similar to visible imagery for a wide range of backgrounds, synoptic conditions, and phenomena, enabling forecasters to use it without special training. ProxyVis was first introduced to National Hurricane Center (NHC) operations in 2018 and was found to be extremely useful by forecasters, becoming part of their standard operational satellite product suite in 2019. Currently, ProxyVis, implemented for GOES-16/18, Himawari-9, and Meteosat-9/10/11, is being used in operational settings and evaluated for transition to operations at multiple NWS offices and the Joint Typhoon Warning Center.
Significance Statement
This paper describes ProxyVis imagery, a new method for combining infrared channels to qualitatively mimic daytime visible imagery at nighttime. ProxyVis demonstrates that a simple linear regression can combine just a few commonly available infrared channels to develop a nighttime proxy for visible imagery that significantly improves a forecaster’s ability to track low-level oceanic clouds and circulation features at night, works for all current geostationary satellites, and is useful across a wide range of backgrounds and meteorological scenarios. Animated ProxyVis geostationary imagery has been operational at the National Hurricane Center since 2019 and is also currently being transitioned to operations at other NWS offices and the Joint Typhoon Warning Center.
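The regression idea in the statement above can be illustrated with a minimal sketch: fit an ordinary least squares model mapping a few IR brightness temperatures to a visible-like target. The channel names, synthetic data, and fitted coefficients below are assumptions for illustration only; they are not the operational ProxyVis training data or fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic brightness temperatures (K) for three IR channels.
ir039 = rng.uniform(220.0, 300.0, n)    # shortwave IR window
ir112 = rng.uniform(210.0, 295.0, n)    # longwave IR window
ir123 = ir112 - rng.uniform(0.0, 5.0, n)
# Synthetic "visible-like" target: colder cloud tops appear brighter,
# standing in for the VIIRS day/night band near full moon.
target = 1.5 - 0.004 * ir112 - 0.001 * (ir039 - ir112)

# Design matrix with an intercept column; solve the least squares fit.
X = np.column_stack([np.ones(n), ir039, ir112, ir123])
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)

proxy_vis = X @ coeffs                  # nighttime visible proxy
rmse = float(np.sqrt(np.mean((proxy_vis - target) ** 2)))
```

The appeal of such a simple form, as the statement notes, is that it transfers directly to any geostationary imager carrying comparable IR channels, with no special training needed to interpret the output.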