Abstract
This study investigates regional, seasonal biases in convection-allowing model forecasts of near-surface temperature and dewpoint in areas of particular importance to forecasts of severe local storms. One method compares model forecasts with objective analyses of observed conditions in the inflow sectors of reported tornadoes. A second method captures a broader sample of environments, comparing model forecasts with surface observations under certain warm-sector criteria. Both methods reveal a cold bias across all models tested in Southeast U.S. cool-season warm sectors. This is an operationally important bias given the thermodynamic sensitivity of instability-limited severe weather that is common in the Southeast cool season. There is not a clear bias across models in the Great Plains warm season, but instead more varied behavior with differing model physics.
Significance Statement
The severity of thunderstorms and the types of hazards they produce depend in part on the low-level temperature and moisture in the near-storm environment. It is important for numerical forecast models to accurately represent these fields in forecasts of severe weather events. We show that the most widely used short-term, high-resolution forecast models have a consistent cold bias of about 1 K (up to 2 K in certain cases) in storm environments in the southeastern U.S. cool season. Human forecasters must recognize and adjust for this bias, and future model development should aim to improve it.
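The warm-sector comparison described in the abstract can be sketched in a few lines. This is a purely hypothetical illustration, not the study's actual procedure: the model/observation pairing and the dewpoint and temperature thresholds below are invented stand-ins for the paper's warm-sector criteria.

```python
# Sketch: estimate a regional warm-sector 2-m temperature bias by pairing
# model forecasts with surface observations that meet simple warm-sector
# criteria. The thresholds (dewpoint >= 13 C, temperature >= 15 C) are
# hypothetical, not the study's.

def warm_sector_bias(pairs, td_min=13.0, t_min=15.0):
    """Mean model-minus-observation 2-m temperature error over warm-sector obs.

    pairs: iterable of (model_t2m, obs_t2m, obs_td2m) in degrees C.
    Returns None if no observation meets the criteria.
    """
    errors = [m - o for m, o, td in pairs if td >= td_min and o >= t_min]
    if not errors:
        return None
    return sum(errors) / len(errors)

# Example: three qualifying warm-sector pairs with a consistent cold bias.
pairs = [
    (18.0, 19.2, 14.0),   # model 1.2 C too cold
    (20.5, 21.3, 15.5),   # model 0.8 C too cold
    (16.9, 17.9, 13.2),   # model 1.0 C too cold
    (10.0, 11.0, 5.0),    # fails the dewpoint criterion; excluded
]
bias = warm_sector_bias(pairs)  # negative mean error indicates a cold bias
```

A bias of about −1 C over many such pairs is the kind of signal the study reports for Southeast cool-season warm sectors.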
Abstract
Fog is a phenomenon that exerts significant impacts on transportation, aviation, air quality, agriculture, and even water resources. While data-driven machine learning algorithms have shown promising performance in capturing nonlinear fog events at point locations, their applicability to other areas and time periods is questionable. This study addresses this issue by examining five decision-tree-based classifiers in a South Korean region where diverse fog formation mechanisms are at play. The five machine learning algorithms were trained at point locations and tested at other point locations for time periods independent of the training processes. Using the ensemble classifiers and high-resolution atmospheric reanalysis data, we also attempted to establish fog occurrence maps for the region. Results showed that machine learning models trained on the local datasets performed better in mountainous areas, where radiative cooling predominantly drives fog formation, than in inland and coastal regions. As the fog generation mechanisms diversified, the tree-based ensemble models appeared to encounter challenges in delineating their decision boundaries. When trained with the reanalysis data, their predictive skill decreased significantly, resulting in high false alarm rates. This prompted the need for postprocessing techniques to rectify the overestimated fog frequency. While postprocessing may ameliorate the overestimation, caution is needed in interpreting the resultant fog frequency estimates, especially in regions with more diverse fog generation mechanisms. The spatial upscaling of machine learning–based fog prediction models poses challenges owing to the intricate interplay of various fog formation mechanisms, data imbalances, and potential inaccuracies in reanalysis data.
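As a concrete, entirely hypothetical illustration of the verification and bias-correction ideas the fog abstract alludes to, the sketch below computes probability of detection (POD) and false alarm ratio (FAR) for a binary fog classifier, then raises the probability threshold until the forecast fog frequency no longer exceeds the observed frequency. The data and the calibration rule are invented; the study's actual postprocessing is not specified here.

```python
def pod_far(obs, pred):
    """Probability of detection and false alarm ratio from binary series."""
    hits = sum(1 for o, p in zip(obs, pred) if o and p)
    misses = sum(1 for o, p in zip(obs, pred) if o and not p)
    false_alarms = sum(1 for o, p in zip(obs, pred) if not o and p)
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return pod, far

def calibrate_threshold(obs_freq, probs):
    """Lowest threshold at which the forecast fog frequency stops exceeding
    the observed frequency; a crude correction for over-forecast fog."""
    n = len(probs)
    for thr in sorted(set(probs)):
        if sum(p >= thr for p in probs) / n <= obs_freq:
            return thr
    return 1.0

obs = [1, 0, 0, 1, 0, 0, 0, 0]                    # observed fog frequency 0.25
probs = [0.9, 0.6, 0.7, 0.8, 0.2, 0.1, 0.3, 0.4]  # classifier probabilities
raw = [p >= 0.5 for p in probs]                   # default threshold over-forecasts
thr = calibrate_threshold(sum(obs) / len(obs), probs)
calibrated = [p >= thr for p in probs]
```

Here the default threshold yields a perfect POD but a FAR of 0.5; raising the threshold removes the false alarms in this toy sample, though in practice such recalibration trades false alarms for misses.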
Abstract
Multiscale valid time shifting (VTS) was explored for a real-time convection-allowing ensemble (CAE) data assimilation (DA) system featuring hourly assimilation of conventional in situ and radar reflectivity observations, developed by the Multiscale Data Assimilation and Predictability Laboratory. VTS triples the base ensemble size using two subensembles containing member forecast output valid before and after the analysis time. Three configurations were tested with 108-member VTS-expanded ensembles: VTS applied to the mesoscale conventional DA alone (ConVTS), to the storm-scale radar DA alone (RadVTS), and to both DA components (BothVTS). Systematic verification demonstrated that BothVTS matched the DA spread and accuracy of the best-performing individual-component VTS. Ten-member forecasts showed that BothVTS performs similarly to ConVTS; RadVTS had better 1-h precipitation skill at forecast hours 1–6, while BothVTS and ConVTS had better skill at hours 7–15. An objective splitting of cases by 2-m temperature cold bias revealed that RadVTS was more skillful than BothVTS and ConVTS out to hour 10 for cold-biased cases, while BothVTS performed best at most hours for less-biased cases. A sensitivity experiment demonstrated improved performance of BothVTS when the underlying model cold bias was reduced. Diagnostics revealed that the enhanced spurious convection of BothVTS in cold-biased cases was tied to larger analysis increments in temperature than in moisture, resulting in erroneously high convective instability. This study is the first to examine the benefits of a multiscale VTS implementation, showing that BothVTS can be utilized to improve the overall performance of a multiscale CAE system. Further, these results underscore the need to limit biases within a DA and forecast system to take full advantage of VTS analysis benefits.
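The core VTS bookkeeping, tripling the ensemble by pooling each member's forecast valid slightly before and slightly after the analysis time with the forecast valid at the analysis time, can be sketched minimally. Each member state here is a single scalar for illustration; a real system shifts full model states, and the function names are invented.

```python
# Minimal sketch of valid-time-shifting ensemble expansion: an N-member base
# ensemble becomes 3N members by pooling three time-shifted subensembles.

def vts_expand(before, central, after):
    """Pool three time-shifted subensembles into one 3N-member ensemble."""
    if not (len(before) == len(central) == len(after)):
        raise ValueError("subensembles must share the base ensemble size")
    return before + central + after

def mean_and_spread(members):
    """Ensemble mean and spread (sample standard deviation)."""
    n = len(members)
    mean = sum(members) / n
    var = sum((m - mean) ** 2 for m in members) / (n - 1)
    return mean, var ** 0.5

# Toy 3-member base ensemble; a 36-member base would expand to the
# 108 members used in the study's configurations.
base = [1.0, 2.0, 3.0]
expanded = vts_expand([m - 0.5 for m in base], base, [m + 0.5 for m in base])
```

Because the shifted subensembles sample slightly different states, the pooled ensemble typically gains spread as well as size, which is part of the appeal of VTS for DA.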
Abstract
Rogue waves are stochastic, individual ocean surface waves that are disproportionately large compared to the background sea state. They present considerable risk to mariners and offshore structures, especially when encountered in large seas. Current rogue wave forecasts are based on nonlinear processes quantified by the Benjamin–Feir index (BFI). However, there is increasing evidence that the BFI has limited predictive power in the real ocean and that rogue waves are largely generated by bandwidth-controlled linear superposition. Recent studies have shown that the bandwidth parameter crest–trough correlation r has the highest univariate correlation with rogue wave probability. We corroborate this result and demonstrate that r has the highest predictive power for rogue wave probability based on analysis of open-ocean and coastal buoys in the northeast Pacific. This work further demonstrates that crest–trough correlation can be forecast by a regional WAVEWATCH III wave model with moderate accuracy. This result leads to the proposal of a novel empirical rogue wave risk assessment probability forecast based on r. Semilogarithmic fits between r and rogue wave probability were applied to generate the rogue wave probability forecast. A sample rogue wave probability forecast is presented for a large storm on 21–22 October 2021.
Significance Statement
Rogue waves pose a considerable threat to ships and offshore structures. The rare and unexpected nature of rogue waves makes predicting them an ongoing and challenging goal. Recent work based on an extensive dataset of waves has suggested that the wave parameter called the crest–trough correlation shows the highest correlation with rogue wave probability. Our work demonstrates that crest–trough correlation can be reasonably well forecast by standard wave models. This suggests that current operational wave models can support rogue wave prediction models based on crest–trough correlation for improved rogue wave risk evaluation.
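A semilogarithmic fit of the kind the rogue-wave abstract describes can be sketched as an ordinary least-squares line in (r, log10 p) space, then inverted to forecast probability from model-predicted r. The calibration points below are synthetic and the coefficients are invented; the paper's fitted relationship is not reproduced here.

```python
# Sketch: fit log10(p) = a + b * r by least squares, then forecast rogue
# wave probability from a model-forecast crest-trough correlation r.
import math

def semilog_fit(rs, probs):
    """Least-squares fit of log10(p) = a + b * r; returns (a, b)."""
    ys = [math.log10(p) for p in probs]
    n = len(rs)
    mx = sum(rs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(rs, ys))
         / sum((x - mx) ** 2 for x in rs))
    a = my - b * mx
    return a, b

def rogue_probability(r, a, b):
    """Invert the semilog relation to get a probability from r."""
    return 10.0 ** (a + b * r)

# Synthetic calibration points lying exactly on log10(p) = -4 + 2r,
# so the fit should recover a = -4 and b = 2.
rs = [0.55, 0.65, 0.75, 0.85]
probs = [10.0 ** (-4 + 2 * r) for r in rs]
a, b = semilog_fit(rs, probs)
forecast_p = rogue_probability(0.7, a, b)
```

Fitting in log space reflects the roughly exponential growth of rogue wave probability with crest–trough correlation that the semilogarithmic relationship implies.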
Abstract
The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.
Significance Statement
The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.
Abstract
Weather and climate greatly affect socioeconomic activities on multiple temporal and spatial scales. From a climate perspective, atmospheric and ocean characteristics have determined the life, evolution, and prosperity of humans and other species in different areas of the world. On smaller scales, atmospheric and sea conditions affect sectors such as civil protection, food security, communications, transportation, and insurance. Weather and ocean forecasting is therefore high-value information, highlighting the need to adopt state-of-the-art forecasting systems. This importance has been acknowledged by the authorities of Saudi Arabia, who entrusted the National Center for Meteorology (NCM) with providing high-quality weather and climate analytics. This led to the development of a numerical weather prediction (NWP) system. The new system includes weather, wave, and ocean circulation components and has been operational since 2020, enhancing the national capabilities in NWP. This article describes the system and its performance and discusses future goals.
Abstract
We evaluated the ability of the 13-km-resolution fvGFS to simulate tropical cyclone genesis (TCG) by conducting hindcast experiments for 42 TCG events over 2018–19 in the western North Pacific (WNP). The hit rate improved as lead time decreased from 5 to 4 days; however, from 4- to 3-day lead times, no consistent improvement in the temporal and spatial errors of TCG was obtained. More “Fail” cases occurred when and where a low-level easterly background flow prevailed: from mid-August to September 2018, after October 2019, and mainly in the eastern WNP. In “Hit” cases, 850-hPa streamfunction and divergence, 200-hPa divergence, and the genesis potential index (GPI) provided favorable TCG conditions. However, the Hit–Fail differences in other suggested factors (vertical wind shear, 700-hPa moisture, and SST) were nonsignificant. By contrast, the reanalysis used for validation showed a significant difference only in 850-hPa streamfunction. We stratified the background flow of TCG into four types. The monsoon trough type (82%) provided the most favorable environmental conditions for successful hindcasts, followed by the subtropical high (45%), easterly (17%), and others (0%) types. These results indicate that the fvGFS is more capable of enhancing monsoon trough circulation, which provides a much better environment for TCG development, but is less skillful for other types of background flow that provide weaker large-scale forcing. The results suggest that even the most advanced high-resolution weather forecast models, such as the fvGFS, warrant further improvement to properly simulate the subtle circulation features (e.g., mesoscale convective systems) that might provide seeds for TCG.
Significance Statement
This study provides an evaluation of tropical cyclone genesis prediction skill of fvGFS. Favorable large-scale environmental factors for successful prediction are identified. Skill dependence on environmental factors provides guidance for evaluating the reliability of a genesis forecast in advance.
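The stratified verification summarized in the TCG abstract amounts to computing hit rates per background-flow type. The small bookkeeping sketch below uses an invented case list; only the grouping logic mirrors the abstract, not the study's actual 42-case sample or its percentages.

```python
# Sketch: hit rates of genesis hindcasts grouped by background-flow type.
from collections import defaultdict

def hit_rate_by_type(cases):
    """cases: iterable of (flow_type, hit_bool); returns {type: hit_rate}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for flow_type, hit in cases:
        totals[flow_type] += 1
        hits[flow_type] += bool(hit)
    return {t: hits[t] / totals[t] for t in totals}

# Invented outcomes, ordered from strongest to weakest large-scale forcing.
cases = [
    ("monsoon_trough", True), ("monsoon_trough", True),
    ("monsoon_trough", True), ("monsoon_trough", False),
    ("subtropical_high", True), ("subtropical_high", False),
    ("easterly", False), ("easterly", False),
]
rates = hit_rate_by_type(cases)
```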
Abstract
The recently deployed GOES-R series Geostationary Lightning Mapper (GLM) provides forecasters with a new, rapidly updating lightning data source to diagnose, forecast, and monitor atmospheric convection. Gridded GLM products have been developed to improve operational forecast applications, with variables including flash extent density (FED), minimum flash area (MFA), and total optical energy (TOE). While these gridded products have been evaluated, there is a continual need to integrate these products with other datasets available to forecasters such as radar, satellite imagery, and ground-based lightning networks. Data from the Advanced Baseline Imager (ABI), Multi-Radar Multi-Sensor (MRMS) system, and one ground-based lightning network were compared against gridded GLM imagery from GOES-East and GOES-West in case studies of two supercell thunderstorms, along with a bulk study from 13 April to 31 May 2019, to provide further validation and applications of gridded GLM products from a data fusion perspective. Increasing FED and decreasing MFA corresponded with increasing thunderstorm intensity from the perspective of ABI infrared imagery and MRMS vertically integrated reflectivity products, and were most apparent for more robust and severe convection. Flash areas were also observed to maximize at clean-IR brightness temperatures of 210–230 K and −10°C isothermal reflectivity of 20–30 dBZ. TOE observations from both GLMs provided additional context for local GLM flash rates in each case study, owing to their differing perspectives of convective updrafts.
Significance Statement
The Geostationary Lightning Mapper (GLM) is a lightning sensor on the current generation of U.S. weather satellites. This research shows how data from the space-based lightning sensor can be combined with radar, satellite imagery, and ground-based lightning networks to improve how forecasters monitor thunderstorms and issue warnings for severe weather. The rate of GLM flashes detected and the area they cover correspond well with radar and satellite signatures, especially in cases of intense and severe thunderstorms. When the GLM observes the same thunderstorm from the GOES-East and GOES-West satellites, the optical energy (brightness) of the flashes may help forecasters interpret the types of flashes observed from each sensor.
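Two of the gridded products named in the GLM abstract, FED and MFA, can be illustrated with a toy gridding step. This is a simplified reconstruction under stated assumptions: flashes arrive already as sets of grid cells and a uniform cell area is assumed, whereas real GLM processing derives footprints from event and group geolocation.

```python
# Sketch: flash extent density (FED) counts how many flashes touch each grid
# cell; minimum flash area (MFA) keeps the smallest flash footprint area
# observed at each cell. A uniform 8 km x 8 km cell (64 km^2) is assumed.

def grid_fed_mfa(flashes, cell_area_km2=64.0):
    """flashes: list of sets of (i, j) grid cells covered by each flash.

    Returns (fed, mfa): dicts keyed by cell with the flash count and the
    minimum flash area (km^2) seen at that cell.
    """
    fed, mfa = {}, {}
    for cells in flashes:
        area = len(cells) * cell_area_km2
        for cell in cells:
            fed[cell] = fed.get(cell, 0) + 1
            mfa[cell] = min(mfa.get(cell, float("inf")), area)
    return fed, mfa

# A large three-cell flash and a small one-cell flash overlapping at (0, 0):
# the overlap cell gets FED = 2 and MFA from the smaller flash, matching the
# intuition that high FED and low MFA mark intensifying convection.
flashes = [{(0, 0), (0, 1), (1, 0)}, {(0, 0)}]
fed, mfa = grid_fed_mfa(flashes)
```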
Abstract
Extreme fire weather and fire behavior occurred during the New Year’s Eve period of 30–31 December 2019 in southeast New South Wales, Australia. Fire progressed rapidly during the late evening and early morning, and extreme pyrocumulonimbus behavior developed, sometimes repeatedly in the same area. This occurred within the broader context of an unprecedented fire season in eastern Australia. Several aspects of the synoptic and mesoscale meteorology are examined to identify their contributions to fire behavior during this period. The passage of a cold front through the region was a key factor in the event, but other processes contributed to the severity of the fire weather. Additional important features during this period included the movement of a negatively tilted upper-tropospheric trough, the interaction of the front with topography, and the occurrence of overnight low-level jets and horizontal boundary layer rolls in the vicinity of the fireground.
Significance Statement
Wildfires and the weather that promotes their ignition and spread are a threat to communities and natural values globally, even in fire-adapted landscapes such as the western United States and Australia. In particular, savanna in the north of Australia regularly burns during the dry season while forest and grassland in the south burn episodically, mostly during the summer. Here, we examine the weather associated with destructive fires that occurred in southeast New South Wales, Australia, in late 2019. Weather and climate factors at several scales interacted to contribute to fire activity that was unusually dangerous. For meteorologists and emergency managers, case studies such as this are valuable to highlight conditions that may lead to future similar events. This case study also identified areas where improvements in fire weather service can be made, including the incorporation of more detailed weather information into models of fire behavior.