Abstract
The meteorological conditions over the South Coast of New South Wales, Australia, are investigated for 18 March 2018, the day of the Tathra bushfire. We present an analysis of the event based on high-resolution (100- and 400-m grid length) simulations with the Bureau of Meteorology’s ACCESS numerical weather prediction system and available observations. Through this analysis we find several mesoscale features that likely contributed to the extreme fire event. Key among these was the development of horizontal convective rolls, which originated inland and aided the fire’s spread toward Tathra. The rolls interacted with the terrain to produce complex regions of strongly ascending and descending air, likely accelerating the lofting of firebrands and potentially contributing to the significant lee-slope fire behavior observed. Mountain waves, specifically trapped lee waves, occurred on the day and are hypothesized to have contributed to the strong winds around the time the fire began. These waves may also have influenced conditions during the period of peak fire activity, when the fire spotted across the Bega River and impacted Tathra. Finally, the passage of the cold front through the fireground was complex, with frontal regression observed at a nearby station and likely also at Tathra. We postulate that interactions between the strong prefrontal flow and the initially weak change resulted in highly variable and dangerous fire weather across the fireground for a significant period after the change initially occurred.
Significance Statement
The town of Tathra on the South Coast of New South Wales, Australia, was devastated on 18 March 2018, when a wildfire ignited in nearby bushland and quickly intensified to impact the town. Using high-resolution numerical weather simulations, we investigate the conditions that led to the extreme fire behavior. The simulations show that the fire ignited and intensified under highly variable conditions driven by complex interactions between the flow over nearby mountains and the passage of a strong cold front. This case study highlights the value of such models in understanding high-impact weather for the purpose of hazard preparedness and emergency response. Additionally, it contributes to a growing number of case studies that indicate the future direction of high-impact forecast services.
Abstract
The Center for Analysis and Prediction of Storms has recently developed capabilities to directly assimilate radar reflectivity and radial velocity data within the GSI-based ensemble Kalman filter (EnKF) and hybrid ensemble three-dimensional variational (En3DVar) system for initializing convective-scale forecasts. To assess the performance of EnKF and hybrid En3DVar with different hybrid weights (100%, 20%, and 0% static background error covariance, corresponding to pure 3DVar, hybrid En3DVar, and pure En3DVar, respectively) for assimilating radar data in a Warn-on-Forecast framework, a set of data assimilation and forecast experiments using the WRF Model is conducted for six convective storm cases from May 2017. Using an object-based verification approach, forecast objects of composite reflectivity and 30-min updraft helicity swaths are verified against reflectivity and rotation track objects in Multi-Radar Multi-Sensor data on space and time scales typical of National Weather Service warnings. Forecasts initialized by En3DVar or the best-performing EnKF ensemble member produce the highest object-based verification scores, while forecasts from 3DVar and the worst EnKF member produce the lowest scores. Averaged across the six cases, hybrid En3DVar using 20% static background error covariance does not improve forecasts over pure En3DVar, although improvements are seen in some individual cases. The false alarm ratios of EnKF members for both composite reflectivity and updraft helicity at the initial time are lower than those from the variational methods, suggesting that the EnKF analysis reduces spurious reflectivity and mesocyclone objects more effectively.
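To make the hybrid weighting concrete, the following minimal Python sketch blends a static and an ensemble-derived background error covariance with the three weights compared above (100%, 20%, and 0% static) and applies the resulting gain in a toy 1D analysis. All fields are synthetic and the explicit matrix blend is an illustration only; the operational GSI hybrid implements the same idea through an extended control variable, not a literal covariance sum.

```python
import numpy as np

# Toy 1D domain: n grid points, m observations, k ensemble members (all synthetic).
n, m, k = 50, 10, 20
rng = np.random.default_rng(0)

# Static background error covariance: Gaussian correlations with unit variance.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B_static = np.exp(-((dist / 5.0) ** 2))

# Flow-dependent covariance estimated from k ensemble perturbations.
perts = rng.standard_normal((k, n)).cumsum(axis=1)  # stand-in for correlated spread
perts -= perts.mean(axis=0)
B_ens = perts.T @ perts / (k - 1)

# Linear observation operator sampling every (n // m)th grid point.
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0
R = 0.1 * np.eye(m)            # observation error covariance
x_b = np.zeros(n)              # background state
y = rng.standard_normal(m)     # synthetic innovations

for w_static in (1.0, 0.2, 0.0):  # pure 3DVar, hybrid En3DVar, pure En3DVar
    B = w_static * B_static + (1.0 - w_static) * B_ens
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain form of the 3DVar solution
    x_a = x_b + K @ (y - H @ x_b)
    print(f"w_static={w_static:.1f}  increment norm={np.linalg.norm(x_a - x_b):.3f}")
```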
Abstract
This paper utilizes statistical and statistical–dynamical methodologies to select, from the full observational record, a minimal subset of dates that would provide representative sampling of local precipitation distributions across the contiguous United States (CONUS). The CONUS region is characterized by a great diversity of precipitation-producing systems, mechanisms, and large-scale meteorological patterns (LSMPs), which can provide a favorable environment for local precipitation extremes. This diversity is unlikely to be adequately captured in methodologies that rely on grossly reducing the dimensionality of the data—by representing it in terms of a few patterns evolving in time—and thus requires data thinning techniques based on high-dimensional dynamical or statistical data modeling. We have built a novel high-dimensional empirical model of temperature and precipitation capable of producing statistically accurate surrogate realizations of the observed 1979–99 (training period) evolution of these fields. This model also provides skillful hindcasts of precipitation over the 2000–20 (validation) period. We devised a subsampling strategy based on the relative entropy of the empirical model’s precipitation (ensemble) forecasts over CONUS and demonstrated that it generates a set of dates that captures a majority of high-impact precipitation events, while substantially reducing a heavy-precipitation bias inherent in an alternative methodology based on the direct identification of large precipitation events in the Global Ensemble Forecast System (GEFS), version 12 reforecasts. The impacts of data thinning on the accuracy of precipitation statistical postprocessing, as well as on the calibration and validation of the Hydrologic Ensemble Forecast Service (HEFS) reforecasts, are yet to be established.
Significance Statement
High-impact weather events are usually associated with extreme precipitation, which is notoriously difficult to predict even using highly resolved state-of-the-art numerical weather prediction models based on first physical principles. The same is true for statistical models that use past data to anticipate the future behavior likely to stem from an observed initial state. Here we use both types of models to identify the states in the historical climate record that are likely to lead to extreme precipitation events. We show that the overall statistics of precipitation over the contiguous United States are encapsulated in a greatly reduced set of such states, which could substantially alleviate the computational burden associated with testing the hydrological forecast models used for decision support.
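As a rough illustration of the subsampling idea described above, the sketch below scores each date by the relative entropy (Kullback–Leibler divergence) of its ensemble forecast distribution against climatology, using a Gaussian approximation, and keeps the highest-scoring dates. The synthetic data, the Gaussian fit, and the selection size are assumptions for illustration; the paper's empirical model and CONUS fields are not reproduced here.

```python
import numpy as np

def relative_entropy_gaussian(mu_f, var_f, mu_c, var_c):
    """KL divergence D(forecast || climatology) for 1D Gaussian fits."""
    return 0.5 * (np.log(var_c / var_f) + (var_f + (mu_f - mu_c) ** 2) / var_c - 1.0)

rng = np.random.default_rng(1)
n_dates, n_members = 1000, 30
# Synthetic ensemble precipitation forecasts (dates x members), plus climatology.
ens = rng.gamma(shape=2.0, scale=2.0, size=(n_dates, n_members))
mu_c, var_c = ens.mean(), ens.var()

# Signal of each date's forecast distribution relative to climatology.
d = relative_entropy_gaussian(ens.mean(axis=1), ens.var(axis=1), mu_c, var_c)

# Keep the dates whose forecast distributions depart most from climatology.
n_keep = 100
selected = np.argsort(d)[-n_keep:]
print(f"kept {n_keep}/{n_dates} dates; mean KL of kept set = {d[selected].mean():.3f}")
```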
Abstract
The relative operating characteristic (ROC) curve is a popular diagnostic tool in forecast verification, with the area under the ROC curve (AUC) used as a verification metric measuring the discrimination ability of a forecast. Along with calibration, discrimination is deemed a fundamental probabilistic forecast attribute. In particular, in ensemble forecast verification, the AUC provides a basis for comparing the potential predictive skill of competing forecasts. While this approach is straightforward for forecasts of common events (e.g., probability of precipitation), the interpretation of the AUC can turn out to be overly simplistic or misleading for rare events (e.g., precipitation exceeding some warning criterion). How should we interpret the AUC of ensemble forecasts when focusing on rare events? How can changes in the way probability forecasts are derived from the ensemble affect AUC results? How can we detect a genuine improvement in predictive skill? Based on verification experiments, a critical eye is cast on the interpretation of the AUC to answer these questions. In addition to the traditional trapezoidal approximation and the well-known binormal fitting model, we discuss a new approach that embraces the concept of imprecise probabilities and relies on the subdivision of the lowest ensemble probability category.
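For readers unfamiliar with the trapezoidal AUC referred to above, the following sketch builds the ROC polygon from the discrete probability levels of a synthetic k-member ensemble and integrates it with the trapezoidal rule. The rare-event base rate and the synthetic skill model are assumptions made purely for illustration.

```python
import numpy as np

def roc_auc_trapezoidal(probs, obs, n_members):
    """AUC via the trapezoidal rule over the ensemble's discrete probability levels."""
    pod, pofd = [0.0], [0.0]
    for t in np.arange(n_members, -1, -1) / n_members:  # thresholds 1, ..., 1/k, 0
        warn = probs >= t
        hits = np.sum(warn & (obs == 1))
        misses = np.sum(~warn & (obs == 1))
        false_alarms = np.sum(warn & (obs == 0))
        correct_nulls = np.sum(~warn & (obs == 0))
        pod.append(hits / max(hits + misses, 1))        # probability of detection
        pofd.append(false_alarms / max(false_alarms + correct_nulls, 1))
    pod, pofd = np.array(pod), np.array(pofd)
    return float(np.sum(np.diff(pofd) * (pod[1:] + pod[:-1]) / 2.0))

rng = np.random.default_rng(2)
k, n = 20, 5000
obs = (rng.random(n) < 0.01).astype(int)   # rare event, 1% base rate
# Synthetic ensemble probabilities loosely informed by the outcome.
probs = rng.binomial(k, 0.05 + 0.40 * obs) / k
print(f"AUC = {roc_auc_trapezoidal(probs, obs, k):.3f}")
```

Because each probability level contributes one vertex, a small ensemble yields a coarse polygon, which is exactly where the trapezoidal approximation becomes questionable for rare events.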
Abstract
In this exploratory study, storm-motion deviations are examined for 171 cases of concurrent tornadic and nontornadic supercells. This deviation, or “delta,” is defined as the shear-orthogonal distance between the observed supercell motion and a baseline supercell-motion prediction. Larger deltas—representing supercells moving farther right (in a shear-relative sense) of the baseline prediction—are hypothesized to be more likely associated with tornadoes than nearby supercells with smaller deltas, consistent with recent research. Automated radar tracking is used to calculate supercell motion at every scan, which is then compared with a model-derived hourly supercell-motion prediction to calculate the deltas. Tornadic supercells have larger average deltas (by 1.9–2.0 m s−1) than nearby nontornadic supercells when using 20- and 30-min storm-motion calculations, and the deltas are larger for the tornadic than for the nontornadic supercells ∼80% of the time. Average delta trends also are positive 62%–70% of the time prior to tornadogenesis. The supercell-motion deltas show a modest positive correlation with EF-scale damage rating, indicating a possible relationship between tornado rating and storm deviation. The relative delta differences between tornadic and nontornadic supercells appear more meaningful than the absolute delta magnitudes (i.e., about 70% of tornadic cases with negative average deltas had deltas that were less negative than those of concurrent nontornadic supercells). This concept shows promise as a potential tool to assist operational forecasters in tornado warning decisions.
Significance Statement
Supercells are rotating thunderstorms, and these storms produce the most destructive tornadoes. However, it has been challenging to forecast which supercells will produce tornadoes. In this exploratory study to help better forecast supercell tornadoes, we looked at how the observed supercell motion compared to the predicted motion, based on a commonly used method. We found tornadic supercells tend to move somewhat differently from the predicted motion—compared to nearby nontornadic supercells. This unusual movement often starts prior to tornadogenesis, potentially providing lead time to tornado formation. Pending further validation, development, and testing of real-time analysis tools, this storm-motion behavior could be used by operational forecasters as a factor to help determine when (or when not) to issue a tornado warning for a supercell thunderstorm, thus providing better information to the public.
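A minimal sketch of the delta computation described above: given an observed storm motion, a baseline prediction (e.g., a Bunkers-type estimate), and a deep-layer shear vector, the signed shear-orthogonal deviation is the projection of the motion difference onto the unit vector 90° clockwise of the shear. All vectors here are hypothetical; the study's radar tracking and model-derived predictions are not reproduced.

```python
import numpy as np

def shear_orthogonal_delta(v_obs, v_pred, shear):
    """Signed deviation (m/s) of observed vs. predicted storm motion, measured
    perpendicular to the deep-layer shear vector. Positive values mean the
    observed motion is farther right of the shear than the prediction."""
    shear_unit = np.asarray(shear, float) / np.linalg.norm(shear)
    right_unit = np.array([shear_unit[1], -shear_unit[0]])  # 90 deg clockwise of shear
    return float(np.dot(np.asarray(v_obs, float) - np.asarray(v_pred, float), right_unit))

# Hypothetical (u, v) vectors in m/s: 0-6-km shear, a Bunkers-style right-mover
# prediction, and a radar-tracked observed motion a bit farther right.
shear = (15.0, 5.0)
v_pred = (12.0, 2.0)
v_obs = (13.0, -1.0)
print(f"delta = {shear_orthogonal_delta(v_obs, v_pred, shear):+.1f} m/s")
```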
Abstract
The radius of maximum wind (R_max) in a tropical cyclone governs the footprint of hazards, including damaging wind, surge, and rainfall. However, R_max is an inconstant quantity that is difficult to observe directly and is poorly resolved in reanalyses and climate models. In contrast, outer wind radii are much less sensitive to such issues. Here we present a simple empirical model for predicting R_max from the radius of 34-kt (1 kt ≈ 0.51 m s−1) wind (R_17.5ms). The model requires as input only quantities that are routinely estimated operationally: maximum wind speed, R_17.5ms, and latitude. The form of the empirical model takes advantage of our physical understanding of tropical cyclone radial structure and is trained on the Extended Best Track database for the North Atlantic over 2004–20. Results are similar for the TC-OBS database. The physics reduces the relationship between the two radii to a dependence on two physical parameters, while the observational data enable an optimal estimate of the quantitative dependence on those parameters. The model performs substantially better than existing operational methods for estimating R_max. The model reproduces the observed statistical increase in R_max with latitude and demonstrates that this increase is driven by the increase in R_17.5ms with latitude. Overall, the model offers a simple and fast first-order prediction of R_max that can be used operationally and in risk models.
Significance Statement
If we can better predict the area of strong winds in a tropical cyclone, we can better prepare for its potential impacts. This work develops a simple model to predict the radius where the strongest winds in a tropical cyclone are located. The model is simple and fast and more accurate than existing models, and it also helps us to understand what causes this radius to vary in time, from storm to storm, and at different latitudes. It can be used in both operational forecasting and models of tropical cyclone hazard risk.
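To illustrate the physical reasoning, the sketch below predicts R_max from R_17.5ms, maximum wind speed, and latitude by assuming that the ratio of absolute angular momentum M = rV + fr²/2 between the two radii is a simple function of intensity, and then solving the resulting quadratic for R_max. The functional form and the coefficients a and b are placeholders for illustration, not the fitted values from the paper.

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (s^-1)

def predict_rmax(vmax, r17, lat_deg, a=0.7, b=0.02):
    """Illustrative Rmax predictor via absolute angular momentum M = r*V + f*r^2/2.
    The ratio Mmax/M17.5 is modeled as a simple decaying function of Vmax;
    coefficients a and b are hypothetical, not the paper's trained values."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))   # Coriolis parameter
    m17 = r17 * 17.5 + 0.5 * f * r17 ** 2               # M at the 34-kt (17.5 m/s) radius
    m_max = a * math.exp(-b * (vmax - 17.5)) * m17      # hypothetical M ratio model
    # Solve Mmax = r*Vmax + f*r^2/2 for r (physical positive root).
    return (-vmax + math.sqrt(vmax ** 2 + 2.0 * f * m_max)) / f

# Example: Vmax = 50 m/s, R17.5 = 200 km, 25 deg N.
print(f"Rmax ~ {predict_rmax(50.0, 200e3, 25.0) / 1e3:.0f} km")
```

Even with these placeholder coefficients, the angular momentum constraint yields an R_max of a few tens of kilometers for a 50 m s−1 storm, the right order of magnitude, and makes the latitude dependence enter naturally through the Coriolis parameter.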
Abstract
Lightning observations from the Earth Networks Total Lightning Network (ENTLN) and rainfall data from 270 automatic weather stations (AWS) over Guangzhou in 2017 are examined. The high-spatiotemporal-resolution data are used to analyze the relationship between lightning activity and precipitation in 14 758 short-duration rainfall (SDR) events. About 43% of the SDR events are accompanied by lightning activity (SDRWL). The rainfall intensity of SDRWL is significantly higher than that of SDR events with no lightning (SDRNL). Lightning activity is more likely to occur in SDR events with higher rainfall rates. A power-law relationship is found between lightning flash rate and rainfall rate, with a maximum correlation coefficient of 0.44. In about 55% of SDRWL, lightning flashes occur later than precipitation, and the opposite is found in about 35% of SDRWL. The lagged correlation coefficient between lightning and precipitation is largest when lightning is delayed by 5–10 min. The results also show that the lightning flash rate peak mostly occurs from −10 to 20 min after the rainfall rate peak, and this time lag is common in SDRWL of all intensities. The starting time of lightning is related to the rainfall intensity: in heavy SDRWL, lightning activity usually begins from −10 to 20 min after the onset of precipitation, while in weak SDRWL this window expands to ±1 h. These results indicate that the quantitative and temporal relationships between lightning and precipitation are more robust in heavy SDR events.
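The lagged-correlation and power-law analyses referred to above can be illustrated with the short sketch below, which generates synthetic 5-min rain-rate and flash-rate series (with lightning built to lag rainfall), scans lags of ±60 min for the maximum Pearson correlation, and fits the power-law exponent by log-log regression. The series and thresholds are synthetic stand-ins, not the Guangzhou AWS or ENTLN data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240  # 5-min intervals over 20 h of synthetic storm activity
rain = np.convolve(rng.gamma(1.5, 1.0, n), np.ones(6) / 6, mode="same")
lightning = np.roll(rain, 2) ** 1.3 + 0.2 * rng.standard_normal(n)  # lags rain ~10 min

def lagged_corr(x, y, lag):
    """Pearson correlation of x against y shifted later by `lag` steps."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = range(-12, 13)                          # +/- 60 min in 5-min steps
r = [lagged_corr(rain, lightning, k) for k in lags]
best = max(zip(lags, r), key=lambda t: t[1])
print(f"max correlation r={best[1]:.2f} at lightning lag {best[0] * 5:+d} min")

# Power-law fit F = a * R^b via log-log least squares over active intervals.
mask = (rain > 0.1) & (lightning > 0.1)
b, log_a = np.polyfit(np.log(rain[mask]), np.log(lightning[mask]), 1)
print(f"power-law exponent b = {b:.2f}")
```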
Abstract
The performance of first-moment and full-distribution bias-correction methods for monthly temperature distributions in seasonal prediction is analyzed by comparing two approaches: the standard all-in-data procedure and a 6-hourly stratification of the data. Five models are applied to remove the systematic errors of the CFSv2 temperature forecasts for the rainy season in the Ethiopian Blue Nile River basin domain. Using deterministic evaluation measures, it is found that the stratification marginally increases the forecast skill, especially in regions where the temperature distribution is distinctly multimodal. The improvement may be attributed to a split of the mixed distribution into a set of unimodal distributions. A necessary condition for this splitting is that the amplitude of the diurnal cycle be larger than the interannual variability in the sample. The greatest improvement from stratification is achieved with the first-moment correction model.
Significance Statement
This paper evaluates bias-correction methods of monthly forecast distributions of temperature to improve seasonal forecast skill. It is found that marginal skill is gained when bias correction of the diurnal cycle is performed. This paper contributes to the discussion on the value of subdaily model output data.
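The contrast between the all-in-data and stratified first-moment corrections can be sketched as follows: a synthetic 6-hourly temperature series is given an hour-dependent model bias, and a single mean-bias correction for the pooled sample is compared against one correction per 6-hourly class. The diurnal cycle, bias values, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = np.tile([0, 6, 12, 18], 400)                   # 6-hourly timestamps
diurnal = {0: -2.0, 6: 1.5, 12: 4.0, 18: 0.5}          # truth's diurnal cycle (deg C)
truth = np.array([diurnal[h] for h in hours]) + 0.8 * rng.standard_normal(hours.size)
fcst = truth + np.where(hours == 12, 2.5, -0.5)        # model bias depends on hour

# All-in-data first-moment correction: one mean bias for the pooled sample.
pooled = fcst - (fcst.mean() - truth.mean())

# Stratified correction: a separate mean bias for each 6-hourly class.
strat = fcst.copy()
for h in (0, 6, 12, 18):
    m = hours == h
    strat[m] -= fcst[m].mean() - truth[m].mean()

for name, x in [("pooled", pooled), ("stratified", strat)]:
    print(f"{name:>10s} RMSE = {np.sqrt(np.mean((x - truth) ** 2)):.2f} deg C")
```

When the bias varies with time of day, the pooled correction leaves a residual error at each synoptic hour that the stratified correction removes, which is the mechanism the abstract attributes to splitting a multimodal distribution into unimodal ones.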
Abstract
Better representation of the planetary boundary layer (PBL) in numerical models is one of the keys to improving forecasts of tropical cyclone (TC) structure and intensity, including rapid intensification. To meet this goal, our recent work has used observations to improve the eddy-diffusivity mass flux with prognostic turbulent kinetic energy (EDMF-TKE) PBL scheme in the Hurricane Analysis and Forecast System (HAFS). This study builds on that work by comparing a modified version of EDMF-TKE (MEDMF-TKE) with the hybrid EDMF scheme based on a K-profile method (HEDMF-KP) in the 2020 HAFS-globalnest model. Verification statistics based on 101 cases from the 2020 season demonstrate that MEDMF-TKE improves track forecasts, with a reduction of the large right bias seen in HEDMF-KP forecasts. The comparison of intensity performance is mixed, but the magnitude of the low bias at early forecast hours is reduced with the MEDMF-TKE scheme, which produces a wider range of TC intensities. Wind radii forecasts, particularly of the radius of maximum wind speed (RMW), are also improved with the MEDMF-TKE scheme. Composites of TC inner-core structure in and above the PBL highlight and explain differences between the two sets of forecasts, with MEDMF-TKE having a stronger and shallower inflow layer, stronger eyewall vertical velocity, and more moisture in the eyewall region. A case study of Hurricane Laura shows that MEDMF-TKE better represented the subtropical ridge and thus the motion of the TC. Finally, analysis of Hurricane Delta through a tangential wind budget highlights how and why MEDMF-TKE leads to faster spinup of the vortex and a better prediction of rapid intensification.
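For reference, an azimuthal-mean tangential wind budget of the general form used in such spinup analyses (the paper's exact decomposition may differ) is

$$
\frac{\partial \overline{v}}{\partial t}
= -\,\overline{u}\left(f + \overline{\zeta}\right)
- \overline{w}\,\frac{\partial \overline{v}}{\partial z}
+ \overline{F_v}
+ \text{eddy terms},
$$

where overbars denote azimuthal means, u, v, and w are the radial, tangential, and vertical wind components, ζ is the relative vertical vorticity, f is the Coriolis parameter, and F_v represents friction and diffusion. In the boundary layer the mean radial influx of absolute vorticity, the first term on the right, typically dominates spinup, which is consistent with the abstract's link between a stronger, shallower inflow layer in MEDMF-TKE and faster intensification of the vortex.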