Abstract
The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.
Significance Statement
The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.
Abstract
The weather and climate greatly affect socioeconomic activities on multiple temporal and spatial scales. From a climate perspective, atmospheric and ocean characteristics have determined the life, evolution, and prosperity of humans and other species in different areas of the world. On smaller scales, atmospheric and sea conditions affect various sectors such as civil protection, food security, communications, transportation, and insurance. Weather and ocean forecasts are therefore high-value information, highlighting the need to adopt state-of-the-art forecasting systems. This importance has been acknowledged by the authorities of Saudi Arabia, who entrusted the National Center for Meteorology (NCM) with providing high-quality weather and climate analytics. This led to the development of a numerical weather prediction (NWP) system. The new system includes weather, wave, and ocean circulation components and has been operational since 2020, enhancing the national capabilities in NWP. This article describes the system and its performance and discusses future goals.
Abstract
We evaluated the ability of the fvGFS at 13-km resolution to simulate tropical cyclone genesis (TCG) by conducting hindcast experiments for 42 TCG events during 2018–19 in the western North Pacific (WNP). The hit rate improved as the lead time decreased from 5 to 4 days; however, from 4- to 3-day lead times, no consistent improvement in the temporal and spatial errors of TCG was obtained. More “Fail” cases occurred when and where a low-level easterly background flow prevailed: from mid-August to September 2018 and after October 2019, and mainly in the eastern WNP. In “Hit” cases, 850-hPa streamfunction and divergence, 200-hPa divergence, and the genesis potential index (GPI) provided favorable TCG conditions. However, the Hit–Fail differences in other suggested factors (vertical wind shear, 700-hPa moisture, and SST) were nonsignificant. By contrast, the reanalysis used for validation showed a significant difference only in 850-hPa streamfunction. We stratified the background flow of TCG into four types. The monsoon trough type (82%) provided the most favorable environmental conditions for successful hindcasts, followed by the subtropical high (45%), easterly (17%), and other (0%) types. These results indicate that fvGFS is more capable of enhancing monsoon trough circulation, which provides a much better environment for TCG development, but is less skillful in other types of background flow that provide weaker large-scale forcing. The results suggest that even the most advanced high-resolution weather forecast models, such as the fvGFS, warrant further improvement to properly simulate the subtle circulation features (e.g., mesoscale convective systems) that might provide seeds for TCG.
Significance Statement
This study provides an evaluation of tropical cyclone genesis prediction skill of fvGFS. Favorable large-scale environmental factors for successful prediction are identified. Skill dependence on environmental factors provides guidance for evaluating the reliability of a genesis forecast in advance.
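The stratification described above, grouping hindcasts by background-flow type and reporting a hit rate per type, can be sketched in a few lines. The event list, labels, and resulting rates below are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch: per-type hit rates for TCG hindcasts, assuming each
# event carries a background-flow-type label and a Hit/Fail outcome.
from collections import defaultdict

def hit_rates_by_type(events):
    """events: iterable of (flow_type, hit_bool); returns {type: hit fraction}."""
    counts = defaultdict(lambda: [0, 0])  # flow type -> [hits, total]
    for flow_type, hit in events:
        counts[flow_type][0] += int(hit)
        counts[flow_type][1] += 1
    return {t: hits / total for t, (hits, total) in counts.items()}

# Illustrative event list (not the 42 events evaluated in the study)
events = [("monsoon trough", True), ("monsoon trough", True),
          ("subtropical high", True), ("subtropical high", False),
          ("easterly", False), ("easterly", False)]
print(hit_rates_by_type(events))
```

Ranking the resulting fractions reproduces the kind of type-by-type comparison the abstract reports.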
Abstract
The recently deployed GOES-R series Geostationary Lightning Mapper (GLM) provides forecasters with a new, rapidly updating lightning data source to diagnose, forecast, and monitor atmospheric convection. Gridded GLM products have been developed to improve operational forecast applications, with variables including flash extent density (FED), minimum flash area (MFA), and total optical energy (TOE). While these gridded products have been evaluated, there is a continual need to integrate them with other datasets available to forecasters, such as radar, satellite imagery, and ground-based lightning networks. Data from the Advanced Baseline Imager (ABI), the Multi-Radar Multi-Sensor (MRMS) system, and one ground-based lightning network were compared against gridded GLM imagery from GOES-East and GOES-West in case studies of two supercell thunderstorms, along with a bulk study from 13 April to 31 May 2019, to provide further validation and applications of gridded GLM products from a data fusion perspective. Increasing FED and decreasing MFA corresponded with increasing thunderstorm intensity as seen in ABI infrared imagery and MRMS vertically integrated reflectivity products, and this relationship was most apparent for more robust and severe convection. Flash areas were also observed to maximize between clean-IR brightness temperatures of 210–230 K and isothermal reflectivity at −10°C of 20–30 dBZ. TOE observations from both GLMs provided additional context for local GLM flash rates in each case study, owing to their differing perspectives of convective updrafts.
Significance Statement
The Geostationary Lightning Mapper (GLM) is a lightning sensor on the current generation of U.S. weather satellites. This research shows how data from the space-based lightning sensor can be combined with radar, satellite imagery, and ground-based lightning networks to improve how forecasters monitor thunderstorms and issue warnings for severe weather. The rate of GLM flashes detected and the area they cover correspond well with radar and satellite signatures, especially in cases of intense and severe thunderstorms. When the GLM observes the same thunderstorm from the GOES-East and GOES-West satellites, the optical energy (brightness) of the flashes may help forecasters interpret the types of flashes observed from each sensor.
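The gridded variables named above can be illustrated with a toy gridder: FED counts the flashes whose footprint touches each cell, and MFA keeps the smallest flash footprint touching each cell. Representing a flash footprint as a set of cell indices is a simplification assumed for this sketch; operational GLM gridding works on the satellite fixed grid with dedicated software.

```python
# Toy sketch of flash extent density (FED) and minimum flash area (MFA).
import numpy as np

def grid_flashes(flashes, nx, ny):
    """flashes: list of (cells, area_km2), where cells is a set of (i, j)
    grid indices the flash footprint touches."""
    fed = np.zeros((ny, nx))
    mfa = np.full((ny, nx), np.inf)
    for cells, area in flashes:
        for i, j in cells:
            fed[j, i] += 1                    # one more flash extends into this cell
            mfa[j, i] = min(mfa[j, i], area)  # keep the smallest footprint seen
    mfa[np.isinf(mfa)] = np.nan               # cells untouched by any flash
    return fed, mfa

# Two illustrative flashes on a 2 x 1 grid: a large one spanning both cells
# and a small one confined to the second cell.
flashes = [({(0, 0), (1, 0)}, 120.0), ({(1, 0)}, 40.0)]
fed, mfa = grid_flashes(flashes, nx=2, ny=1)
print(fed)  # [[1. 2.]]
print(mfa)  # [[120.  40.]]
```

The second cell shows the signature the abstract highlights: higher FED with lower MFA, consistent with more frequent, smaller flashes in intense updrafts.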
Abstract
Extreme fire weather and fire behavior occurred during the New Year’s Eve period 30–31 December 2019 in southeast New South Wales, Australia. Fire progressed rapidly during the late evening and early morning periods, and significant extreme pyrocumulonimbus behavior developed, sometimes repeatedly in the same area. This occurred within a broader context of an unprecedented fire season in eastern Australia. Several aspects of the synoptic and mesoscale meteorology are examined, to identify contributions to fire behavior during this period. The passage of a cold front through the region was a key factor in the event, but other processes contributed to the severity of fire weather. Additional important features during this period included the movement of a negatively tilted upper-tropospheric trough, the interaction of the front with topography, and the occurrence of low-level overnight jets and of horizontal boundary layer rolls in the vicinity of the fireground.
Significance Statement
Wildfires and the weather that promotes their ignition and spread are a threat to communities and natural values globally, even in fire-adapted landscapes such as the western United States and Australia. In particular, savanna in the north of Australia regularly burns during the dry season while forest and grassland in the south burn episodically, mostly during the summer. Here, we examine the weather associated with destructive fires that occurred in southeast New South Wales, Australia, in late 2019. Weather and climate factors at several scales interacted to contribute to fire activity that was unusually dangerous. For meteorologists and emergency managers, case studies such as this are valuable to highlight conditions that may lead to future similar events. This case study also identified areas where improvements in fire weather service can be made, including the incorporation of more detailed weather information into models of fire behavior.
Abstract
Environments associated with severe hailstorms, compared to those of tornadoes, are often less apparent to forecasters. Understanding has evolved considerably in recent years: weak low-level shear and sufficient convective available potential energy (CAPE) above the freezing level are now recognized as most favorable for large hail. However, this understanding comes only from examining the mean characteristics of large-hail environments. How much variety exists within the kinematic and thermodynamic environments of large hail? Is there a balance between shear and CAPE analogous to that noted with tornadoes? We address these questions to move toward a more complete conceptual model. In this study, we investigate the environments of 92 323 hail reports (both severe and nonsevere) using ERA5 modeled proximity soundings. By employing a self-organizing map algorithm and subsetting these environments by a multitude of characteristics, we find that the conditions leading to large hail are highly variable, but three primary patterns emerge. First, hail growth depends on a favorable balance of CAPE, wind shear, and relative humidity, such that accounting for entrainment is important in parameter-based hail prediction. Second, hail growth is thwarted by strong low-level storm-relative winds, unless CAPE below the hail growth zone is weak. Finally, the maximum hail size possible in a given environment may be predictable from the depth of buoyancy, rather than from CAPE itself.
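A self-organizing map of the kind used to cluster these environments can be sketched compactly. This toy version trains a small one-dimensional map on generic feature vectors; the node count, learning-rate schedule, and Gaussian neighborhood are illustrative choices, not the study's configuration.

```python
# Minimal 1D self-organizing map (SOM) sketch: nodes compete for each sample,
# and the winner plus its grid neighbors are pulled toward the sample.
import numpy as np

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize nodes from randomly chosen samples
    nodes = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.1) # shrinking neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))  # best match
            grid_dist = np.abs(np.arange(n_nodes) - bmu)        # 1D map distance
            h = np.exp(-grid_dist**2 / (2 * sigma**2))          # neighborhood weight
            nodes += lr * h[:, None] * (x - nodes)
    return nodes

# Illustrative data: two clusters of 2D "environment" vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(1.0, 0.1, (20, 2))])
nodes = train_som(data)
```

After training, each node is a prototype environment, and samples are grouped by their best-matching node, which is the step that lets distinct "patterns" emerge from a large archive of soundings.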
Abstract
Analyses of cloud-top temperature and lightning characteristics of 48 Weather Research and Forecasting (WRF) Model–simulated ocean-based wind events, with 1-min temporal and 0.5-km horizontal resolution, revealed signatures similar to those of the corresponding 13 observed events detected by buoys and Coastal-Marine Automated Network (C-MAN) stations, as shown in prior research on ocean-based wind events by the first author. These events occurred in the eastern Gulf of Mexico and in the Atlantic Ocean from Florida northward through South Carolina. For the model-simulated events, each of which was required to have a downward vertical velocity of at least 10 m s−1 in the lowest 2 km associated with a convective storm, the coldest WRF cloud-top temperature (WCTT) and the peak WRF-estimated lightning flash rate occurred at an average of 4.2 and 1.1 min prior to the events, respectively. In 36 of the events, the peak estimated flash rate occurred within 5 min of the coldest WCTT. Cloud depth typically increased as the WCTT decreased, and the maximum depth occurred at an average of 2.9 min prior to the events. Thermal cooling and precipitation loading provided the negative buoyancy needed to help drive the wind events. Environmental characteristics of the model-simulated ocean-based wind events also resembled those associated with land-based wet downbursts, including moist air near the surface, lapse rates near moist adiabatic, and low cloud bases.
Abstract
This study conducts the first large-sample comparison of the impact of dropsondes in the tropical cyclone (TC) inner core, vortex, and environment on NWP-model TC forecasts. We analyze six observing-system experiments, focusing on four sensitivity experiments that denied dropsonde observations within annuli corresponding with natural breakpoints in reconnaissance sampling. These are evaluated against two other experiments detailed in a recent parallel study: one that assimilated and another that denied dropsonde observations. Experiments used a basin-scale, multistorm configuration of the Hurricane Weather Research and Forecasting (HWRF) Model and covered active periods of the 2017–20 North Atlantic hurricane seasons. Analysis focused on forecasts initialized with dropsondes that used mesoscale error covariance derived from a cycled HWRF ensemble, as these forecasts were where dropsondes had the greatest benefits in the parallel study. Some results generally support findings of previous research, while others are novel. Most notable was that removing dropsondes anywhere, particularly from the vortex, substantially degraded forecasts of maximum sustained winds. Removing in-vortex dropsondes also degraded outer-wind-radii forecasts in many instances. As such, in-vortex dropsondes contribute to a majority of the overall impacts of the dropsonde observing system. Additionally, track forecasts of weak TCs benefited more from environmental sampling, while track forecasts of strong TCs benefited more from in-vortex sampling. Finally, inner-core-only sampling strategies should be avoided, supporting a change made to the U.S. Air Force Reserve’s sampling strategy in 2018 that added dropsondes outside of the inner core.
Significance Statement
This study uses a regional hurricane model to conduct the most comprehensive assessment to date of the impact of dropsondes at different distances away from the center of a tropical cyclone (TC) on TC forecasts. The main finding is that in-vortex dropsondes are most important for intensity and outer-wind-radii forecasts. Particularly notable is the impact of dropsondes on TC maximum wind speed forecasts, as reducing sampling anywhere would degrade those forecasts.
Abstract
Realistic initialization of the land surface is important for producing accurate NWP forecasts, so making use of available observations is essential when estimating the surface state. In this work, sequential land surface data assimilation of soil variables is replaced with an offline cycling method: to obtain the best possible initial state for the lower boundary of the NWP system, the land surface model is rerun between forecasts with an analyzed atmospheric forcing. We found relative reductions in 2-m temperature root-mean-square error and mean error of 6% and 12%, respectively, and of 4.5% and 11% for 2-m specific humidity. During a convective event, the system produced useful forecasts (fractions skill score greater than that of the uniform forecast) of heavy precipitation [above 30 mm (12 h)−1] down to a 100-km length scale, where the reference failed to do so below 200 km. The different precipitation forcing caused differences in soil moisture fields that persisted for several weeks and consequently affected the surface fluxes of heat and moisture and the forecasts of screen-level parameters. The experiments also indicate diurnal and weather-dependent variations in the forecast errors that give valuable insight into the role of initial land surface conditions and land–atmosphere interactions in southern Scandinavia.
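The "useful forecast" criterion above rests on the fractions skill score (FSS), which compares neighborhood exceedance fractions of forecast and observation at a given length scale; a common usefulness threshold is 0.5 + f/2, where f is the observed base rate (the skill of a uniform forecast). A minimal sketch follows, with a naive looping gridder and illustrative arrays.

```python
# Fractions skill score (FSS) sketch: 1 - MSE(fractions) / MSE_reference.
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of threshold-exceeding pixels in an n x n box around each pixel."""
    pad = n // 2
    padded = np.pad(binary.astype(float), pad, mode="constant")
    out = np.empty_like(binary, dtype=float)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(forecast, observed, threshold, n):
    pf = neighborhood_fraction(forecast >= threshold, n)
    po = neighborhood_fraction(observed >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # no-overlap reference error
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

obs = np.zeros((5, 5))
obs[2, 2] = 40.0  # one heavy-precipitation pixel (illustrative units: mm)
print(fss(obs, obs, threshold=30.0, n=3))  # perfect forecast -> 1.0
```

Increasing the neighborhood size n corresponds to evaluating at coarser length scales, which is how a smallest "useful" scale (100 km versus 200 km in the abstract) is identified.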
Abstract
The prediction of snow accumulation remains a forecasting challenge. While the adoption of ensemble numerical weather prediction has enabled the development of probabilistic guidance, the challenges associated with snow accumulation, particularly snow-to-liquid ratio (SLR), remain when building snow-accumulation tools. In operations, SLR is generally assumed either to fit a simple mathematical relationship or to conform to a historical average. In this paper, the impacts of the choice of SLR on ensemble snow forecasts are tested. Ensemble forecasts from the nine-member High-Resolution Rapid Refresh Ensemble (HRRRE) were used to create 24-h snowfall forecasts for five snowfall events associated with winter cyclones. These snowfall forecasts were derived from model liquid precipitation forecasts using five SLR relationships and were evaluated against daily new-snowfall observations from the Community Collaborative Rain, Hail and Snow (CoCoRaHS) network. The results of this analysis show that the forecast error associated with individual members is similar to the error associated with the choice of SLR. The SLR with the lowest forecast error showed regional agreement across nearby observations. This suggests that, while there is no single SLR that works best everywhere, it may be possible to improve ensemble snow forecasts if the regions where particular SLRs perform best can be determined ahead of time. The implications of these findings for future ensemble snowfall tools are discussed.
Significance Statement
Snowfall prediction remains a challenge. Computer models are used to address the inherent uncertainty in forecasts. This uncertainty includes aspects like the location and rate of snowfall. Meteorologists run multiple similar computer models to understand the range of possible weather outcomes. One aspect of uncertainty is the snow-to-liquid ratio, or the ratio of snow depth to the amount of liquid water it melts into. This study tests how common predictions of snow-to-liquid ratio impact snowfall forecasts. The results show that snow-to-liquid ratio choices are as impactful as the models’ differing snow rate or snow location forecasts, and that no particular snow-to-liquid ratio is most accurate. These results underscore the importance of better snow-to-liquid ratio prediction to improve snowfall forecasts.
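The conversion underlying these snowfall forecasts is simple: snow depth equals liquid-equivalent precipitation multiplied by the SLR. A hedged sketch of how different SLR choices fan out an ensemble's snowfall guidance (the member precipitation values and SLR numbers are invented for illustration; they are not the paper's five SLR relationships):

```python
import numpy as np

def snowfall_from_qpf(qpf_mm, slr):
    """Snow depth (cm) from liquid-equivalent precipitation (mm):
    depth_cm = qpf_mm * SLR / 10 (the /10 converts mm to cm)."""
    return np.asarray(qpf_mm) * slr / 10.0

# Made-up 24-h liquid totals (mm) at one point from a 9-member ensemble
qpf_mm = np.array([10.0, 12.5, 8.0, 11.0, 13.0, 9.5, 12.0, 10.5, 11.5])

# Illustrative SLR assumptions: the classic 10:1 rule of thumb
# plus two hypothetical alternatives
slrs = {"10:1 rule": 10.0, "climatological": 13.0, "warm/wet": 7.0}

for name, slr in slrs.items():
    depths = snowfall_from_qpf(qpf_mm, slr)
    print(f"{name:>14}: mean {depths.mean():.1f} cm, "
          f"member spread {depths.max() - depths.min():.1f} cm")
```

Because the SLR multiplies the whole liquid field, swapping one SLR assumption for another shifts every member's snowfall by the same factor, which is why the choice of SLR can rival the member-to-member spread as a source of forecast error.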