Abstract
Understanding how model physics impact tropical cyclone (TC) structure, motion, and evolution is critical for the development of TC forecast models. This study examines the impacts of microphysics and planetary boundary layer (PBL) physics on forecasts using the Hurricane Analysis and Forecast System (HAFS), which is newly operational in 2023. The “HAFS-B” version is specifically evaluated, and three sensitivity tests (for over 400 cases in 15 Atlantic TCs) are compared with retrospective HAFS-B runs. Sensitivity tests are generated by 1) changing the microphysics in HAFS-B from Thompson to GFDL, 2) turning off the TC-specific PBL modifications that have been implemented in operational HAFS-B, and 3) combining the PBL and microphysics modifications. The forecasts are compared through standard verification metrics and through examination of composite structure. Verification results show that Thompson microphysics slightly degrades the days 3–4 track forecasts in HAFS-B but improves forecasts of long-term intensity. The TC-specific PBL changes reduce a negative intensity bias and improve rapid intensification (RI) skill, but cause some degradation in prediction of 34-kt (1 kt ≈ 0.51 m s−1) wind radii. Composites illustrate slightly deeper vortices in runs with the Thompson microphysics, and stronger PBL inflow with the TC-specific PBL modifications. These combined results demonstrate the critical role of model physics in regulating TC structure and intensity, and point to the need for continued improvements to HAFS physics. The study also shows that the combination of both PBL and microphysics modifications (both included in one of the two versions of HAFS in the first operational implementation) leads to the best overall results.
Significance Statement
A new hurricane model, the Hurricane Analysis and Forecast System (HAFS), is being introduced for operational prediction during the 2023 hurricane season. One of the most important parts of any forecast model is the set of “physics parameterizations,” or approximations of physical processes that govern turbulence, cloud formation, and similar phenomena. In this study, we tested these approximations in one configuration of HAFS, HAFS-B. Specifically, we looked at two different versions of the microphysics (modeling the growth of water and ice in clouds) and boundary layer physics (the approximations for turbulence in the lowest level of the atmosphere). We found that both of these sets of model physics had important effects on the forecasts from HAFS. The microphysics had notable impacts on the track forecasts and also changed the vertical depth of the model hurricanes. The boundary layer physics, including some of our changes based on observed hurricanes and turbulence-resolving models, helped the model better predict rapid intensification (periods when the wind speed increases quickly). Work is ongoing to improve the model physics for better forecasts of rapid intensification and overall storm structure, including storm size. The study also shows that the combination of both PBL and microphysics modifications leads to the best overall results and was therefore used in one of the two first operational implementations of HAFS.
Abstract
This study documents the features of tornadoes, their parent storms, and the environments of the only two documented tornado outbreak events in China. The two events were associated with Tropical Cyclone (TC) Yagi on 12 August 2018, with 11 tornadoes, and with an extratropical cyclone (EC) on 11 July 2021 (EC 711), with 13 tornadoes. Most tornadoes in TC Yagi were spawned from discrete minisupercells, while a majority of tornadoes in EC 711 were produced from supercells embedded in QLCSs or cloud clusters. In both events, the high-tornado-density area was better collocated with the K index than with MLCAPE, and with entraining rather than nonentraining parameters, possibly due to their sensitivity to midlevel moisture. EC 711 had a larger displacement between maximum entraining CAPE and vertical wind shear than TC Yagi, with the maximum entraining CAPE better collocated with the high-tornado-density area than the vertical wind shear was. Relative to TC Yagi, EC 711 had stronger entraining CAPE, 0–1-km storm-relative helicity, 0–6-km vertical wind shear, and composite parameters such as an entraining significant tornado parameter, which caused its generally stronger tornado vortex signatures (TVSs) and mesocyclones with a larger diameter and longer life span. No significant differences were found between the composite parameters of these two events and U.S. statistics. Although obvious dry-air intrusions were observed in both events, no apparent impact on tornado outbreak potential was observed in EC 711. In TC Yagi, however, the dry-air intrusion may have aided the tornado outbreak through cloudiness erosion and the resulting increase in surface temperature and low-level lapse rate.
Abstract
The evolution of supercell thunderstorms traversing complex terrain is not well understood and remains a short-term forecast challenge across the Appalachian Mountains of the eastern United States. Although case studies have been conducted, there has been no large multicase observational analysis focusing on the central and southern Appalachians. To address this gap, we analyzed 62 isolated warm-season supercells that occurred in this region. Each supercell was categorized as either crossing (∼40%) or noncrossing (∼60%) based on whether it maintained supercellular structure while traversing prominent terrain. The structural evolution of each storm was analyzed via operationally relevant parameters extracted from WSR-88D radar data. The most significant differences in radar-observed structure among storm categories were associated with the mesocyclone; crossing storms exhibited stronger, wider, and deeper mesocyclones, along with more prominent and persistent hook echoes. Crossing storms also moved faster. Among the supercells that crossed the most prominent peaks and ridges, significant increases in base reflectivity, vertically integrated liquid, echo tops, and mesocyclone intensity/depth were observed, in conjunction with more frequent large hail and tornado reports, as the storms ascended windward slopes. Then, as the supercells descended leeward slopes, significant increases in mesocyclone depth and tornado frequency were observed. Such results reinforce the notion that supercell evolution can be modulated substantially by passage through and over complex terrain.
Significance Statement
Understanding of thunderstorm evolution and severe weather production in regions of complex terrain remains limited, particularly for storms with rotating updrafts known as supercell thunderstorms. This study provides a systematic analysis of numerous warm-season supercells that moved through the central and southern Appalachian Mountains. We focus on operationally relevant radar characteristics and differences among storms that maintain supercellular structure as they traverse the terrain (crossing) versus those that do not (noncrossing). Our results identify radar characteristics useful in distinguishing between crossing and noncrossing storms, along with typical supercell evolution and severe weather production as storms cross the more prominent peaks and ridges of the central and southern Appalachian Mountains.
Abstract
The accurate prediction of short-term rainfall, and in particular the forecast of hourly heavy rainfall (HHR) probability, remains challenging for numerical weather prediction (NWP) models. Here, we introduce a deep learning (DL) model, PredRNNv2-AWS, a convolutional recurrent neural network designed for deterministic short-term rainfall forecasting. This model integrates surface rainfall observations and atmospheric variables simulated by the Precision Weather Analysis and Forecasting System (PWAFS). Our DL model produces realistic hourly rainfall forecasts for the next 13 h. Quantitative evaluations show that using surface rainfall observations as one of the predictors achieves higher performance (threat score), with relative improvements over NWP simulations of 263% for the first 3 h and 186% over the full forecast period, at a threshold of 5 mm h−1. Although the optical-flow method also performs well in the initial hours, its predictions worsen quickly in the later hours compared with the other experiments. The machine learning model LightGBM is then integrated to classify HHR from the predicted hourly rainfall of PredRNNv2-AWS. The results show that PredRNNv2-AWS better reflects actual HHR conditions than PredRNNv2 and PWAFS. A representative case demonstrates the superiority of PredRNNv2-AWS in predicting the evolution of the rainy system, which substantially improves the accuracy of the HHR prediction. A test case involving the extreme flood event in Zhengzhou exemplifies the generalizability of our proposed model. Our model offers a reliable framework for predicting target variables that can be obtained from numerical simulations and observations, e.g., visibility, wind power, solar energy, and air pollution.
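The threat score cited above is a standard contingency-table verification metric, and the percentage gains are relative improvements of one score over another. A minimal sketch with hypothetical counts (not values from the study):

```python
# Threat score (critical success index) from contingency-table counts:
# TS = hits / (hits + misses + false_alarms). Correct negatives are ignored,
# which makes the score suitable for rare events such as heavy rainfall.
def threat_score(hits, misses, false_alarms):
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

# Relative improvement (in percent) of a new score over a reference score,
# the form behind statements like "263% relative improvement over NWP".
def relative_improvement(score_new, score_ref):
    return 100.0 * (score_new - score_ref) / score_ref

print(threat_score(30, 10, 20))  # 0.5
```

A score tripling from 0.1 to 0.3, for example, is a 200% relative improvement under this definition.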
Abstract
A novel differential reflectivity (ZDR) column detection method, the hotspot technique, has been developed. Utilizing constant-altitude plan position indicators (CAPPIs) of ZDR, reflectivity, and a proxy for circular depolarization ratio at the height of the −10°C isotherm, the method identifies the location of the base of the ZDR column rather than the entire ZDR column depth. The new method is compared to two other existing ZDR column detection methods and shown to be an improvement in regions where there is a ZDR bias.
Significance Statement
Thunderstorm updrafts are the area of a storm where precipitation grows, electrification is initiated, and tornadoes may form. Therefore, accurate detection and quantification of updraft properties using weather radar data is of great importance for assessing a storm’s damage potential in real time. Current methods to automatically detect updraft areas, however, are error-prone due to common deficiencies in radar measurements. We present a novel algorithmic approach to identify storm updrafts that eliminates some of the known shortcomings of existing methods. In the future, our method could be used to develop new hail detection algorithms, or to improve short-term weather forecasting models.
Abstract
This study assesses the accuracy of the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) forecasts for clouds within stable and unstable environments (hereafter referred to as “stable” and “unstable” clouds). This evaluation is conducted by comparing these forecasts against satellite retrievals through a combination of traditional, spatial, and object-based methods. To facilitate this assessment, the Model Evaluation Tools (MET) community tool is employed. The findings underscore the significance of fine-tuning the MET parameters to achieve a more accurate representation of the features under scrutiny. The study’s results reveal that, when employing traditional pointwise statistics (e.g., frequency bias and equitable threat score), the results are consistent whether calculated from Method for Object-Based Diagnostic Evaluation (MODE)-based objects or derived from the complete fields. Furthermore, the object-based statistics offer valuable insights, indicating that COAMPS generally predicts cloud object locations accurately, though the spread of these predicted locations tends to increase with time. COAMPS tends to overpredict the object area for unstable clouds while underpredicting it for stable clouds over time. These results align with the traditional pointwise bias scores for the entire grid. Overall, the spatial metrics provided by the object-based verification methods emerge as crucial and practical tools for the validation of cloud forecasts.
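Frequency bias and the equitable threat score named here are standard contingency-table statistics; a minimal sketch under their usual textbook definitions (a simplification, not MET's implementation):

```python
# Frequency bias: ratio of forecast "yes" frequency to observed "yes" frequency.
# Values > 1 indicate overforecasting, < 1 underforecasting.
def frequency_bias(hits, misses, false_alarms):
    return (hits + false_alarms) / (hits + misses)

# Equitable threat score (Gilbert skill score): the threat score adjusted for
# the number of hits expected by random chance, so that a random forecast
# scores near zero.
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0
```

Because both scores are pointwise, they can be computed either over a full grid or over the cells inside MODE-matched objects, which is the consistency check the abstract describes.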
Significance Statement
As the general Navy meteorological and oceanographic (METOC) community engages in collaboration with the broader scientific community, our goal is to harness community tools like MET for the systematic evaluation of weather forecasts, with a specific focus on variables crucial to the Navy. Clouds, given their significant impact on visibility, hold particular importance in our investigations. Cloud forecasts pose unique challenges, primarily attributable to the intricate physics governing cloud development and the complexity of representing these processes within numerical models. Cloud observations are also constrained by limitations, arising from both top-down satellite measurements and bottom-up ground-based measurements. This study illustrates that, with a comprehensive understanding of community tools, cloud forecasts can be consistently verified. This verification encompasses traditional evaluation methods, measuring general qualities such as bias and root-mean-squared error, as well as newer techniques like spatial and object-based methods designed to account for displacement errors.
Abstract
In this study, we investigate the impact of assimilating densely distributed Global Navigation Satellite System (GNSS) zenith total delay (ZTD) and surface station (SFC) data on the prediction of very short-term heavy rainfall associated with afternoon thunderstorm (AT) events in the Taipei basin. Under weak synoptic-scale conditions, four cases characterized by different rainfall features are chosen for investigation. Experiments are conducted with a 3-h assimilation period, followed by 3-h forecasts. Also, various experiments are performed to explore the sensitivity of AT initialization. Data assimilation experiments are conducted with a convective-scale Weather Research and Forecasting–local ensemble transform Kalman filter (WRF-LETKF) system. The results show that ZTD assimilation can provide effective moisture corrections. Assimilating SFC wind and temperature data could additionally improve the near-surface convergence and cold bias, further increasing the impact of ZTD assimilation. Frequently assimilating SFC data every 10 min provides the best forecast performance, especially for rainfall intensity predictions. Such a benefit could still be identified in the earlier forecast initialized 2 h before the start of the event. Detailed analysis of a case on 22 July 2019 reveals that frequent assimilation provides initial conditions that can lead to fast vertical expansion of the convection and trigger an intense AT. This study proposes a new metric using the fraction skill score to construct an informative diagram that evaluates the location and intensity of the heavy rainfall forecasts and displays clear characteristics of the different cases. Issues of how assimilation strategies affect the impact of ground-based observations in a convective ensemble data assimilation system and AT development are also discussed.
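The fraction skill score (FSS) underlying the proposed diagram is a standard neighborhood verification metric. The diagram construction itself is specific to the paper, but the FSS can be sketched under its usual definition:

```python
import numpy as np

# Fraction skill score: compare neighborhood fractions of threshold exceedance
# between forecast and observed rainfall fields. 1 = perfect match, 0 = no skill.
def fss(forecast, observed, threshold, window):
    fcst_bin = (forecast >= threshold).astype(float)
    obs_bin = (observed >= threshold).astype(float)

    def fractions(field):
        # Moving-window mean over the valid interior (no padding), giving the
        # fraction of exceeding cells in each window-sized neighborhood.
        h, w = field.shape
        out = np.empty((h - window + 1, w - window + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = field[i:i + window, j:j + window].mean()
        return out

    pf, po = fractions(fcst_bin), fractions(obs_bin)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref else 1.0
```

A perfect forecast gives FSS = 1, while a displaced rain area lowers the score gradually rather than abruptly, which is what makes the FSS informative about heavy-rainfall location as well as intensity.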
Significance Statement
In this study, we investigate the impact of frequently assimilating densely distributed ground-based observations on predicting four afternoon thunderstorm events in the Taipei basin. While assimilating GNSS-ZTD data can improve the moisture fields for initializing convection, assimilating surface station data improves the prediction of rainfall location and intensity, particularly when surface data are assimilated at a very high frequency of 10 min.
Abstract
Estimates of soil moisture from two National Oceanic and Atmospheric Administration (NOAA) models are compared to in situ observations. The estimates are from a high-resolution atmospheric model with a land surface model [High-Resolution Rapid Refresh (HRRR) model] and a hydrologic model from the NOAA Climate Prediction Center (CPC). Both models produce wetter soils in dry regions and drier soils in wet regions, as compared to the in situ observations. These soil moisture differences occur at most soil depths but are larger at the deeper depths below the surface (100 cm). Comparisons of soil moisture variability are also assessed as a function of soil moisture regime. Both models have lower standard deviations as compared to the in situ observations for all soil moisture regimes. The HRRR model’s soil moisture is better correlated with in situ observations for drier soils as compared to wetter soils—a trend that was not present in the CPC model comparisons. In terms of seasonality, soil moisture comparisons vary depending on the metric, time of year, and soil moisture regime. Therefore, consideration of both the seasonality and soil moisture regime is needed to accurately determine model biases. These NOAA soil moisture estimates are used for a variety of forecasting and societal applications, and understanding their differences provides important context for their applications and can lead to model improvements.
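The comparison statistics described (mean bias, variability via the standard deviation, and correlation against in situ observations) can be sketched as follows; the sample values are illustrative, not from the study:

```python
from math import sqrt

# Compare modeled soil moisture against collocated in situ observations:
# mean bias, ratio of standard deviations, and Pearson correlation.
def compare_soil_moisture(model, obs):
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    bias = mean_m - mean_o                     # > 0: model too wet on average
    var_m = sum((m - mean_m) ** 2 for m in model) / (n - 1)
    var_o = sum((o - mean_o) ** 2 for o in obs) / (n - 1)
    cov = sum((m - mean_m) * (o - mean_o)
              for m, o in zip(model, obs)) / (n - 1)
    return bias, sqrt(var_m / var_o), cov / sqrt(var_m * var_o)
```

A standard-deviation ratio below 1 corresponds to the damped model variability reported here, and computing these statistics per soil-moisture regime and season is what separates the compensating biases the abstract warns about.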
Significance Statement
Soil moisture is an essential variable coupling the land surface to the atmosphere. Accurate estimates of soil moisture are important for forecasting near-surface temperature and moisture, predicting where clouds will form, and assessing drought and fire risks. There are multiple estimates of soil moisture available, and in this study, we compare soil moisture estimates from two different National Oceanic and Atmospheric Administration (NOAA) models to in situ observations. These comparisons include both soil moisture amount and variability and are conducted at several soil depths, in different soil moisture regimes, and for different seasons and years. This comprehensive assessment allows for an accurate assessment of biases within these models that would be missed when conducting analyses more broadly.
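The bias, variability, and correlation comparisons described above can be sketched with a hypothetical helper (assuming collocated model and in situ time series in the same units; this is an illustration, not the study's actual code):

```python
import numpy as np

def soil_moisture_stats(model, insitu):
    """Compare a model soil moisture time series to collocated in situ
    observations (same units, e.g., m^3 m^-3): mean bias, ratio of
    standard deviations, and linear correlation."""
    model = np.asarray(model, dtype=float)
    insitu = np.asarray(insitu, dtype=float)
    bias = np.mean(model - insitu)              # wet (+) or dry (-) bias
    std_ratio = np.std(model) / np.std(insitu)  # <1: model underdispersive
    corr = np.corrcoef(model, insitu)[0, 1]
    return bias, std_ratio, corr
```

A standard-deviation ratio below 1 corresponds to the lower model variability reported above, and stratifying such statistics by depth, season, and soil moisture regime is what distinguishes this assessment from a broad-brush comparison.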
Abstract
In this study, we developed and evaluated the Korean Forecast Icing Potential (K-FIP), an in-flight icing forecast system for the Korea Meteorological Administration (KMA) based on the simplified forecast icing potential (SFIP) algorithm. The SFIP is an algorithm used to postprocess numerical weather prediction (NWP) model forecasts for predicting potential areas of icing based on the fuzzy logic formulations of four membership functions: temperature, relative humidity, vertical velocity, and cloud liquid water content. In this study, we optimized the original version of the SFIP for the global NWP model of the KMA through three important updates using 34 months of pilot reports for icing as follows: using total cloud condensates, reconstructing membership functions, and determining the best weight combination for input variables. The use of all cloud condensates and the reconstruction of these membership functions resulted in a significant improvement in the algorithm compared with the original. The weight combinations for the KMA’s global model were determined based on the performance scores. While several sets of weights performed equally well, this process identified the most effective weight combination for the KMA model, which is referred to as the K-FIP. The K-FIP demonstrated the ability to successfully predict icing over the Korean Peninsula using observations made by research aircraft from the National Institute of Meteorological Sciences of the KMA. Ultimately, the K-FIP will provide improved forecasts of icing potential for safe and efficient aviation operations in South Korea.
Significance Statement
In-flight aircraft icing has posed a threat to safe flights for decades. With advances in computing resources and an improvement in the spatiotemporal resolutions of numerical weather prediction (NWP) models, icing algorithms have been developed using NWP model outputs associated with supercooled liquid water. This study evaluated and optimized the simplified forecast icing potential, an NWP model–based icing algorithm, for the global model of the Korea Meteorological Administration (KMA) using a long-term observational dataset to improve its prediction skills. With the improvements shown in this study, the SFIP implemented at the KMA will provide more informative predictions for safe and efficient air travel.
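The fuzzy-logic combination at the heart of the SFIP/K-FIP can be sketched as a weighted average of membership (interest) values (the function and field names here are illustrative; the actual membership functions and the optimized weight combination are defined in the study):

```python
def icing_potential(memberships, weights):
    """Weighted fuzzy-logic combination in the spirit of SFIP/K-FIP.

    memberships: interest values in [0, 1] per input field, e.g.,
    temperature, relative humidity, vertical velocity, condensate.
    weights: non-negative weight per field (the study tunes these
    against pilot reports; the values here are placeholders).
    """
    total = sum(weights[k] for k in memberships)
    return sum(weights[k] * memberships[k] for k in memberships) / total
```

Because the result is a normalized weighted average, it stays in [0, 1] and can be read directly as an icing potential; the optimization step in the study amounts to searching over the weight combinations for the best verification scores.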
Abstract
During the 2021 Spring Forecasting Experiment (SFE), the usefulness of the experimental Warn-on-Forecast System (WoFS) ensemble guidance was tested with the issuance of short-term probabilistic hazard forecasts. One group of participants used the WoFS guidance, while another group did not. Individual forecasts issued by two NWS participants in each group were evaluated alongside a consensus forecast from the remaining participants. Participant forecasts of tornadoes, hail, and wind at lead times of ∼2–3 h and valid at 2200–2300, 2300–0000, and 0000–0100 UTC were evaluated subjectively during the SFE by participants the day after issuance, and objectively after the SFE concluded. These forecasts exist between the watch and the warning time frame, where WoFS is anticipated to be particularly impactful. The hourly probabilistic forecasts were skillful according to objective metrics like the fractions skill score. While the tornado forecasts were more reliable than the other hazards, there was no clear indication of any one hazard scoring highest across all metrics. WoFS availability improved the hourly probabilistic forecasts as measured by the subjective ratings and several objective metrics, including increased probability of detection (POD) and decreased false alarm ratio (FAR) at high probability thresholds. Generally, the expert forecasts performed better than the consensus forecasts, though they tended to overforecast. Finally, this work explored the appropriate construction of practically perfect fields used during subjective verification, which participants frequently found to be too small and precise. Using a Gaussian smoother with σ = 70 km is recommended to create hourly practically perfect fields in future experiments.
Significance Statement
This work explores the impact of cutting-edge numerical weather prediction ensemble guidance (the Warn-on-Forecast System) on severe thunderstorm hazard outlooks at watch-to-warning time scales, typically between 1 and 6 h of lead time. Real-time forecast products in this time frame are currently provided on an as-needed basis, and the transition to continuous probabilistic forecast products across scales requires targeted research. Results showed that hourly probabilistic participant forecasts were skillful subjectively and statistically, and that the experimental guidance improved the forecasts. These results are promising for the implementation and value of the Warn-on-Forecast System to provide improved hazard timing and location guidance within severe weather watches. Suggestions are made to aid future subjective evaluations of watch-to-warning-scale probabilistic forecasts.
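The recommended Gaussian smoothing of storm reports into an hourly practically perfect field can be sketched as follows (the σ = 70 km value comes from the abstract; the grid spacing and the max-normalization are assumptions of this illustration, not necessarily the experiment's exact scaling):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def practically_perfect(report_grid, sigma_km=70.0, grid_spacing_km=3.0):
    """Smooth a binary grid of reports into a probability-like field.

    report_grid: 2D array of 0/1 report occurrences for one hour.
    sigma_km: Gaussian smoothing length (70 km per the recommendation).
    grid_spacing_km: assumed grid spacing, used to convert sigma from
    kilometers to grid points.
    """
    sigma_pts = sigma_km / grid_spacing_km
    smoothed = gaussian_filter(report_grid.astype(float), sigma=sigma_pts)
    peak = smoothed.max()
    # Illustrative normalization so the field peaks at 1 over reports.
    return smoothed / peak if peak > 0 else smoothed
```

A larger σ spreads each report's influence over a wider area, addressing the participants' complaint that the verification fields were too small and precise.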