Abstract
Multiscale valid time shifting (VTS) was explored for a real-time convection-allowing ensemble (CAE) data assimilation (DA) system featuring hourly assimilation of conventional in situ and radar reflectivity observations, developed by the Multiscale Data Assimilation and Predictability Laboratory. VTS triples the base ensemble size using two subensembles containing member forecast output before and after the analysis time. Three configurations were tested with 108-member VTS-expanded ensembles: VTS for individual mesoscale conventional DA (ConVTS) or storm-scale radar DA (RadVTS) and VTS integrated into both DA components (BothVTS). Systematic verification demonstrated that BothVTS matched the DA spread and accuracy of the best-performing individual component VTS. The 10-member forecasts showed that BothVTS performs similarly to ConVTS, with RadVTS having better skill in 1-h precipitation at forecast hours 1–6, while Both/ConVTS had better skill at later hours 7–15. An objective splitting of cases by 2-m temperature cold bias revealed RadVTS was more skillful than Both/ConVTS out to hour 10 for cold-biased cases, while BothVTS performed best at most hours for less-biased cases. A sensitivity experiment demonstrated improved performance of BothVTS when reducing the underlying model cold bias. Diagnostics revealed enhanced spurious convection of BothVTS for cold-biased cases was tied to larger analysis increments in temperature than moisture, resulting in erroneously high convective instability. This study is the first to examine the benefits of a multiscale VTS implementation, showing that BothVTS can be utilized to improve the overall performance of a multiscale CAE system. Further, these results underscore the need to limit biases within a DA and forecast system to best take advantage of VTS analysis benefits.
Abstract
Rogue waves are stochastic, individual ocean surface waves that are disproportionately large compared to the background sea state. They present considerable risk to mariners and offshore structures, especially when encountered in large seas. Current rogue wave forecasts are based on nonlinear processes quantified by the Benjamin–Feir index (BFI). However, there is increasing evidence that the BFI has limited predictive power in the real ocean and that rogue waves are largely generated by bandwidth-controlled linear superposition. Recent studies have shown that the bandwidth parameter crest–trough correlation r shows the highest univariate correlation with rogue wave probability. We corroborate this result and demonstrate that r has the highest predictive power for rogue wave probability from the analysis of open ocean and coastal buoys in the northeast Pacific. This work further demonstrates that crest–trough correlation can be forecast by a regional WAVEWATCH III wave model with moderate accuracy. This result leads to the proposal of a novel empirical rogue wave probability forecast, based on r, for risk assessment. Semilogarithmic fits between r and rogue wave probability were applied to generate the rogue wave probability forecast. A sample rogue wave probability forecast is presented for a large storm on 21–22 October 2021.
Significance Statement
Rogue waves pose a considerable threat to ships and offshore structures. The rare and unexpected nature of rogue waves makes predicting them an ongoing and challenging goal. Recent work based on an extensive dataset of waves has suggested that the wave parameter called the crest–trough correlation shows the highest correlation with rogue wave probability. Our work demonstrates that crest–trough correlation can be reasonably well forecast by standard wave models. This suggests that current operational wave models can support rogue wave prediction models based on crest–trough correlation for improved rogue wave risk evaluation.
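The semilogarithmic fit described in the abstract can be sketched as follows. This is a hypothetical illustration only: the relationship log10(p) = a·r + b between crest–trough correlation r and rogue wave probability p is fit by least squares, then applied to a modeled r value. All numbers below are invented for demonstration and are not the paper's data.

```python
import numpy as np

# Made-up calibration points: crest-trough correlation r versus observed
# rogue wave probability p (illustrative values only, not the paper's data).
r_obs = np.array([0.55, 0.60, 0.65, 0.70, 0.75, 0.80])
p_obs = np.array([1e-5, 3e-5, 8e-5, 2e-4, 6e-4, 1.5e-3])

# Semilogarithmic least-squares fit: log10(p) = a * r + b
a, b = np.polyfit(r_obs, np.log10(p_obs), 1)

def rogue_wave_probability(r):
    """Map a modeled crest-trough correlation to an empirical probability."""
    return 10.0 ** (a * r + b)

# A forecast r from a wave model (e.g., WAVEWATCH III) would be fed in here.
print(rogue_wave_probability(0.72))
```

In practice the calibration would come from the buoy dataset, and the forecast r from the regional wave model; the sketch only shows the shape of the mapping.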
Abstract
The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.
Significance Statement
The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.
Abstract
The weather and climate greatly affect socioeconomic activities on multiple temporal and spatial scales. From a climate perspective, atmospheric and ocean characteristics have determined the life, evolution, and prosperity of humans and other species in different areas of the world. On smaller scales, the atmospheric and sea conditions affect various sectors such as civil protection, food security, communications, transportation, and insurance. Weather and ocean forecasts thus constitute high-value information, highlighting the need to adopt state-of-the-art forecasting systems. This importance has been acknowledged by the authorities of Saudi Arabia, who entrusted the National Center for Meteorology (NCM) to provide high-quality weather and climate analytics. This led to the development of a numerical weather prediction (NWP) system. The new system includes weather, wave, and ocean circulation components and has been operational since 2020, enhancing the national capabilities in NWP. Within this article, a description of the system and its performance is discussed alongside future goals.
Abstract
We evaluated the ability of the 13-km-resolution fvGFS to simulate tropical cyclone genesis (TCG) by conducting hindcast experiments for 42 TCG events over 2018–19 in the western North Pacific (WNP). We observed an improved hit rate as lead time decreased from 5 to 4 days; however, from 4- to 3-day lead time, no consistent improvement in the temporal and spatial errors of TCG was obtained. More “Fail” cases occurred when and where a low-level easterly background flow prevailed: from mid-August to September 2018 and after October 2019, and mainly in the eastern WNP. In “Hit” cases, 850-hPa streamfunction and divergence, 200-hPa divergence, and genesis potential index (GPI) provided favorable TCG conditions. However, the Hit–Fail case differences in other suggested factors (vertical wind shear, 700-hPa moisture, and SST) were nonsignificant. By contrast, the reanalysis used for validation showed a significant difference only in 850-hPa streamfunction. We stratified the background flow of TCG into four types. The monsoon trough type (82%) provided the most favorable environmental conditions for successful hindcasts, followed by the subtropical high (45%), easterly (17%), and others (0%) types. These results indicated that fvGFS is more capable of enhancing monsoon trough circulation and provides a much better environment for TCG development but is less skillful in other types of background flow that provide weaker large-scale forcing. The results suggest that even the most advanced high-resolution weather forecast models such as the fvGFS warrant further improvement to properly simulate the subtle circulation features (e.g., mesoscale convective systems) that might provide seeds for TCG.
Significance Statement
This study provides an evaluation of tropical cyclone genesis prediction skill of fvGFS. Favorable large-scale environmental factors for successful prediction are identified. Skill dependence on environmental factors provides guidance for evaluating the reliability of a genesis forecast in advance.
Abstract
The recently deployed GOES-R series Geostationary Lightning Mapper (GLM) provides forecasters with a new, rapidly updating lightning data source to diagnose, forecast, and monitor atmospheric convection. Gridded GLM products have been developed to improve operational forecast applications, with variables including flash extent density (FED), minimum flash area (MFA), and total optical energy (TOE). While these gridded products have been evaluated, there is a continual need to integrate these products with other datasets available to forecasters such as radar, satellite imagery, and ground-based lightning networks. Data from the Advanced Baseline Imager (ABI), Multi-Radar Multi-Sensor (MRMS) system, and one ground-based lightning network were compared against gridded GLM imagery from GOES-East and GOES-West in case studies of two supercell thunderstorms, along with a bulk study from 13 April to 31 May 2019, to provide further validation and applications of gridded GLM products from a data fusion perspective. Increasing FED and decreasing MFA corresponded with increasing thunderstorm intensity from the perspective of ABI infrared imagery and MRMS vertically integrated reflectivity products, and were most apparent for more robust and severe convection. Flash areas were also observed to maximize between clean-IR brightness temperatures of 210–230 K and isothermal reflectivity at −10°C of 20–30 dBZ. TOE observations from both GLMs provided additional context of local GLM flash rates in each case study, due to their differing perspectives of convective updrafts.
Significance Statement
The Geostationary Lightning Mapper (GLM) is a lightning sensor on the current generation of U.S. weather satellites. This research shows how data from the space-based lightning sensor can be combined with radar, satellite imagery, and ground-based lightning networks to improve how forecasters monitor thunderstorms and issue warnings for severe weather. The rate of GLM flashes detected and the area they cover correspond well with radar and satellite signatures, especially in cases of intense and severe thunderstorms. When the GLM observes the same thunderstorm from the GOES-East and GOES-West satellites, the optical energy (brightness) of the flashes may help forecasters interpret the types of flashes observed from each sensor.
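The gridding idea behind a product like FED can be illustrated with a toy example. This is a deliberately simplified sketch, not the operational GLM gridder: it bins hypothetical flash centroids onto a regular lat/lon grid per time window, whereas the real FED product counts every grid cell a flash's optical footprint touches. The domain and flash locations below are invented.

```python
import numpy as np

# Hypothetical 0.02-degree grid over an invented domain.
lon_edges = np.linspace(-100.0, -98.0, 101)
lat_edges = np.linspace(35.0, 37.0, 101)

# Fake flash centroids for one accumulation window, strictly inside the grid.
rng = np.random.default_rng(0)
flash_lon = rng.uniform(-99.9, -98.1, 500)
flash_lat = rng.uniform(35.1, 36.9, 500)

# Simplified flash-extent-density analogue: flash counts per grid cell.
fed, _, _ = np.histogram2d(flash_lon, flash_lat, bins=[lon_edges, lat_edges])
print(int(fed.sum()))  # every flash lands in exactly one cell here
```

The operational product differs mainly in attributing each flash to all cells it overlaps and in accumulating over fixed (e.g., 1- or 5-min) windows; the binning structure is the same.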
Abstract
Extreme fire weather and fire behavior occurred during the New Year’s Eve period 30–31 December 2019 in southeast New South Wales, Australia. Fire progressed rapidly during the late evening and early morning periods, and significant pyrocumulonimbus activity developed, sometimes repeatedly in the same area. This occurred within a broader context of an unprecedented fire season in eastern Australia. Several aspects of the synoptic and mesoscale meteorology are examined to identify contributions to fire behavior during this period. The passage of a cold front through the region was a key factor in the event, but other processes contributed to the severity of fire weather. Additional important features during this period included the movement of a negatively tilted upper-tropospheric trough, the interaction of the front with topography, and the occurrence of low-level overnight jets and of horizontal boundary layer rolls in the vicinity of the fireground.
Significance Statement
Wildfires and the weather that promotes their ignition and spread are a threat to communities and natural values globally, even in fire-adapted landscapes such as the western United States and Australia. In particular, savanna in the north of Australia regularly burns during the dry season while forest and grassland in the south burn episodically, mostly during the summer. Here, we examine the weather associated with destructive fires that occurred in southeast New South Wales, Australia, in late 2019. Weather and climate factors at several scales interacted to contribute to fire activity that was unusually dangerous. For meteorologists and emergency managers, case studies such as this are valuable to highlight conditions that may lead to future similar events. This case study also identified areas where improvements in fire weather service can be made, including the incorporation of more detailed weather information into models of fire behavior.
Abstract
Environments associated with severe hailstorms, compared to those of tornadoes, are often less apparent to forecasters. Understanding has evolved considerably in recent years; namely, that weak low-level shear and sufficient convective available potential energy (CAPE) above the freezing level are most favorable for large hail. However, this understanding comes only from examining the mean characteristics of large hail environments. How much variety exists within the kinematic and thermodynamic environments of large hail? Is there a balance between shear and CAPE analogous to that noted with tornadoes? We address these questions to move toward a more complete conceptual model. In this study, we investigate the environments of 92 323 hail reports (both severe and nonsevere) using ERA5 modeled proximity soundings. By employing a self-organizing map algorithm and subsetting these environments by a multitude of characteristics, we find that the conditions leading to large hail are highly variable, but three primary patterns emerge. First, hail growth depends on a favorable balance of CAPE, wind shear, and relative humidity, such that accounting for entrainment is important in parameter-based hail prediction. Second, hail growth is thwarted by strong low-level storm-relative winds, unless CAPE below the hail growth zone is weak. Finally, the maximum hail size possible in a given environment may be predictable by the depth of buoyancy, rather than CAPE itself.
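The self-organizing map (SOM) clustering mentioned in the abstract can be sketched in a few lines. This is a minimal generic SOM, not the authors' configuration: the feature count, grid size, decay schedules, and random inputs are all hypothetical stand-ins for standardized sounding parameters (e.g., CAPE, shear, relative humidity) from ERA5 proximity profiles.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))          # 500 fake environments, 3 features
grid_h, grid_w = 3, 3                  # 3x3 map of nodes
W = rng.normal(size=(grid_h * grid_w, 3))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)])

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    frac = t / n_iter
    lr = 0.5 * (1 - frac)                          # decaying learning rate
    sigma = 1.5 * (1 - frac) + 0.1                 # decaying neighborhood radius
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))             # neighborhood kernel on the grid
    W += lr * h[:, None] * (x - W)                 # pull nodes toward the sample

# Assign each environment to its nearest node, i.e., its "pattern"
labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(axis=2), axis=1)
print(np.bincount(labels, minlength=grid_h * grid_w))
```

Once trained, nodes are composited (e.g., mean sounding per node) to reveal the distinct environmental patterns; the study's subsetting by report characteristics would then slice these node populations.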
Abstract
Analyses of cloud-top temperature and lightning characteristics of 48 Weather Research and Forecasting (WRF) Model–simulated ocean-based wind events, with 1-min temporal and 0.5-km horizontal resolution, revealed signatures similar to the corresponding 13 observed events detected by buoys and Coastal-Marine Automated Network (C-MAN) stations, as shown in prior research on ocean-based wind events by the first author. These events occurred in the eastern Gulf of Mexico and in the Atlantic Ocean from Florida northward through South Carolina. The coldest WRF cloud-top temperature (WCTT) and peak WRF-estimated lightning flash rate values of the model-simulated events, where each event was required to have a negative vertical velocity of at least 10 m s−1 in the lowest 2 km associated with a convective storm, occurred at an average of 4.2 and 1.1 min prior to the events, respectively. For 36 of the events, the peak estimated flash rate occurred within 5 min of the coldest WCTT. Cloud depth typically increased as the WCTT decreased, and the maximum depth occurred at an average of 2.9 min prior to the events. Thermal cooling and precipitation loading provided negative buoyancy needed to help drive the wind events. Environmental characteristics of the model-simulated ocean-based wind events also resembled those associated with land-based wet downbursts, including moist air near the surface, lapse rates near moist adiabatic, and low cloud bases.
Abstract
This study conducts the first large-sample comparison of the impact of dropsondes in the tropical cyclone (TC) inner core, vortex, and environment on NWP-model TC forecasts. We analyze six observing-system experiments, focusing on four sensitivity experiments that denied dropsonde observations within annuli corresponding with natural breakpoints in reconnaissance sampling. These are evaluated against two other experiments detailed in a recent parallel study: one that assimilated and another that denied dropsonde observations. Experiments used a basin-scale, multistorm configuration of the Hurricane Weather Research and Forecasting (HWRF) Model and covered active periods of the 2017–20 North Atlantic hurricane seasons. Analysis focused on forecasts initialized with dropsondes that used mesoscale error covariance derived from a cycled HWRF ensemble, as these forecasts were where dropsondes had the greatest benefits in the parallel study. Some results generally support findings of previous research, while others are novel. Most notable was that removing dropsondes anywhere, particularly from the vortex, substantially degraded forecasts of maximum sustained winds. Removing in-vortex dropsondes also degraded outer-wind-radii forecasts in many instances. As such, in-vortex dropsondes contribute to a majority of the overall impacts of the dropsonde observing system. Additionally, track forecasts of weak TCs benefited more from environmental sampling, while track forecasts of strong TCs benefited more from in-vortex sampling. Finally, inner-core-only sampling strategies should be avoided, supporting a change made to the U.S. Air Force Reserve’s sampling strategy in 2018 that added dropsondes outside of the inner core.
Significance Statement
This study uses a regional hurricane model to conduct the most comprehensive assessment to date of the impact of dropsondes at different distances away from the center of a tropical cyclone (TC) on TC forecasts. The main finding is that in-vortex dropsondes are most important for intensity and outer-wind-radii forecasts. Particularly notable is the impact of dropsondes on TC maximum wind speed forecasts, as reducing sampling anywhere would degrade those forecasts.