Abstract
Intensity consensus forecasts can provide skillful overall guidance for intensity forecasting at the Joint Typhoon Warning Center, as they produce among the lowest mean absolute errors; however, these forecasts are far less useful for periods of rapid intensification (RI), when the guidance is generally low biased. One way to address this issue is to construct a consensus that also includes deterministic RI forecast guidance in order to increase intensification rates during RI. While this approach increases skill and eliminates some bias, consensus forecasts constructed this way generally remain low biased during RI events. Another approach is to construct a consensus forecast from an equally weighted average of deterministic RI forecasts. This yields a forecast that is generally among the top-performing RI guidance but suffers from false alarms and a resulting high bias. Neither approach described here is a prescription for forecast success, but both have qualities that merit consideration by operational centers faced with the difficult task of RI prediction.
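Both consensus constructions described above reduce to a (possibly weighted) mean over deterministic intensity members. A minimal sketch in Python; the function name and the kt units are illustrative, not taken from the study:

```python
import numpy as np

def consensus(forecasts, weights=None):
    """Equally weighted (or weighted) mean of deterministic intensity forecasts.

    forecasts: 2D array, shape (n_members, n_lead_times), intensities in kt.
    A member missing at a lead time may be encoded as NaN and is skipped.
    """
    f = np.asarray(forecasts, dtype=float)
    if weights is None:
        return np.nanmean(f, axis=0)  # equally weighted consensus
    w = np.asarray(weights, dtype=float)[:, None]
    # weighted mean over the members available at each lead time
    return np.nansum(w * f, axis=0) / np.nansum(w * ~np.isnan(f), axis=0)
```

Blending RI guidance into a standard consensus corresponds to enlarging `forecasts` with the RI members; the equally weighted RI-only consensus is the same call on the RI members alone.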
Abstract
Weather and climate greatly affect socioeconomic activities on multiple temporal and spatial scales. From a climate perspective, atmospheric and ocean characteristics have shaped the life, evolution and prosperity of humans and other species in different areas of the world. On smaller scales, atmospheric and sea conditions affect sectors such as civil protection, food security, communications, transportation and insurance. Weather and ocean forecasts are therefore high-value information, highlighting the need to adopt state-of-the-art forecasting systems. This importance has been acknowledged by the authorities of Saudi Arabia, who entrusted the National Center for Meteorology (NCM) with providing high-quality weather and climate analytics. This led to the development of a numerical weather prediction (NWP) system. The new system includes weather, wave and ocean circulation components and has been operational since 2020, enhancing the national capabilities in NWP. This article describes the system and its performance, alongside future goals.
Abstract
Several new precipitation-type algorithms have been developed to improve NWP predictions of surface precipitation type during winter storms. In this study, we evaluate whether it is possible to objectively declare one algorithm superior to another by comparing three precipitation-type algorithms validated using different techniques. The apparent skill of the algorithms depends on the choice of performance metric – an algorithm can score well on some metrics and poorly on others. It is also possible for an algorithm to have high skill at diagnosing some precipitation types and poor skill with others. Algorithm skill is also highly dependent on the choice of verification data and methodology. Simply by changing which data are considered “truth,” we were able to substantially change the apparent skill of all algorithms evaluated herein. These findings suggest that an objective declaration of algorithm “goodness” is not possible and that an unambiguous declaration of superiority is difficult, if not impossible. A contributing factor to algorithm performance is uncertainty in the microphysical processes that lead to phase changes of falling hydrometeors; these processes are treated differently by each algorithm, resulting in different biases in near-0°C environments. These biases are evident even when the algorithms are applied to ensemble forecasts. Hence, a multi-algorithm approach is advocated to account for this source of uncertainty. Though the apparent performance of this approach still depends on the choice of performance metric and precipitation type, a case-study analysis shows it has the potential to provide better decision support than the single-algorithm approach.
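The multi-algorithm approach advocated above amounts to pooling the categorical diagnoses of several algorithms and reporting both a consensus type and the level of agreement, which can serve as a crude uncertainty measure. A minimal sketch, with hypothetical category labels:

```python
from collections import Counter

def multi_algorithm_ptype(diagnoses):
    """Combine precipitation-type diagnoses from several algorithms.

    diagnoses: list of category labels, e.g. ["snow", "sleet", "snow"].
    Returns (most_common_type, agreement_fraction); a low agreement
    fraction flags cases where the algorithms disagree.
    """
    counts = Counter(diagnoses)
    ptype, n = counts.most_common(1)[0]
    return ptype, n / len(diagnoses)
```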
Abstract
Assimilating radar reflectivity into convective-scale NWP models remains a challenging topic in radar data assimilation. A primary reason is that the reflectivity forward observation operator is highly nonlinear. To address this challenge, a power transformation function is applied to the WRF Model’s hydrometeor and water vapor mixing ratio variables in this study. Three 3D variational data assimilation experiments are performed and compared for five high-impact weather events that occurred in 2019: (i) a control experiment that assimilates reflectivity using the original hydrometeor mixing ratios as control variables, (ii) an experiment that assimilates reflectivity using power-transformed hydrometeor mixing ratios as control variables, and (iii) an experiment that assimilates reflectivity and retrieved pseudo–water vapor observations using power-transformed hydrometeor and water vapor mixing ratios (qv) as control variables. Both qualitative and quantitative evaluations are performed for 0–3-h forecasts from the five cases. The analysis and forecast performance of the two experiments with power-transformed mixing ratios is better than that of the control experiment. Notably, assimilating pseudo–water vapor with power-transformed qv as an additional control variable improves the analysis and short-term forecasts for all cases. In addition, the cost function minimization converges faster in the two experiments that use the power transformation than in the control experiment.
Significance Statement
The effective use of radar reflectivity observations in any data assimilation scheme remains an important research topic because reflectivity observations explicitly include information about hydrometeors and also implicitly include information about the distribution of moisture within storms. However, it is difficult to assimilate reflectivity because the reflectivity forward observation operator is highly nonlinear. This study seeks to identify a more effective way to assimilate reflectivity into a convective-scale NWP model to improve the accuracy of predictions of high-impact weather events.
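As a rough illustration of the idea behind this study, a power transformation compresses the large dynamic range of mixing ratios, so the reflectivity operator (roughly a power law in the mixing ratio) behaves more linearly around the analysis state. The sketch below uses a Box-Cox-style transform with an illustrative exponent; the exact functional form and exponent used in the study are not reproduced here:

```python
import numpy as np

def power_transform(q, p=0.4):
    """Box-Cox-style forward power transform of a mixing ratio field (kg/kg).

    p is an illustrative exponent; smaller p compresses the dynamic
    range more strongly. q must be positive.
    """
    return (np.power(q, p) - 1.0) / p

def inverse_power_transform(qt, p=0.4):
    """Map a transformed field back to physical mixing ratios."""
    return np.power(p * qt + 1.0, 1.0 / p)
```

In a variational scheme the transformed field would serve as the control variable, with the inverse transform applied before evaluating the forward reflectivity operator.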
Abstract
The Marshall Fire on 30 December 2021 became the costliest wildfire in Colorado history as it evolved into a suburban firestorm in southeastern Boulder County, driven by strong winds acting on a snow-free, drought-influenced fuel state. The winds came from a strong downslope windstorm that maintained its intensity for nearly eleven hours. The southward movement of a large-scale jet axis across Boulder County brought a quick transition that day into a zone of upper-level descent, enhancing the mid-level inversion and providing a favorable environment for an amplifying downstream mountain wave. In several respects, this windstorm did not follow typical downslope windstorm behavior. NOAA rapidly updating numerical weather prediction guidance (including the High-Resolution Rapid Refresh) provided operationally useful forecasts of the windstorm, leading to the issuance of a high-wind warning (HWW) for eastern Boulder County. No Red Flag Warning was issued because of an overly restrictive relative humidity criterion (already published alternatives are recommended); however, owing to the HWW, a county-wide burn ban was issued for that day. Consideration of spatial (vertical and horizontal) and temporal (both valid time and initialization time) neighborhoods allows some quantification of forecast uncertainty from deterministic forecasts – important in real-time use for forecasting and public warnings of extreme events. In essence, these dimensions of the deterministic model were used to roughly approximate an ensemble forecast. These dimensions, including run-to-run consistency, are also important for subsequent evaluation of forecasts of small-scale features such as downslope windstorms and the tropospheric features responsible for them, similar to forecasts of deep, moist convection and related severe weather.
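The neighborhood idea described above can be sketched as pooling values of a forecast field over nearby grid points and over several initialization times, then treating the pooled values as a pseudo-ensemble. A minimal illustration; the function name, grid indices, and radius are arbitrary choices, not the paper's procedure:

```python
import numpy as np

def neighborhood_samples(runs, j, i, r=1):
    """Pool field values from a space/initialization-time neighborhood.

    runs: list of 2D arrays (one per model initialization) on the same grid.
    Returns all values within +/- r grid points of (j, i) across all runs,
    a rough pseudo-ensemble from which percentile spreads can be computed.
    """
    pool = []
    for field in runs:
        sub = field[max(j - r, 0):j + r + 1, max(i - r, 0):i + r + 1]
        pool.append(sub.ravel())
    return np.concatenate(pool)
```

Percentiles of the pooled sample (e.g. `np.percentile(pool, [10, 90])`) then give a crude uncertainty range from deterministic guidance alone.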
Abstract
The object-based verification procedure described in a recent paper by Duda and Turner was expanded herein to compare forecasts of composite reflectivity and 6-h precipitation objects between the two most recent operational versions of the High-Resolution Rapid Refresh (HRRR) model, versions 3 and 4, over an expanded set of warm-season cases in 2019 and 2020. In addition to analyzing all objects, a reduced set of forecast–observation object pairs was constructed by taking the best forecast match to a given observation object, for the purposes of bias reduction and unequivocal object comparison. Despite the apparent signal of improved scalar metrics such as the object-based threat score in HRRRv4 compared to HRRRv3, no statistically significant differences were found between the models. Nonetheless, many object attribute comparisons revealed indications of improved forecast performance in HRRRv4 compared to HRRRv3. For example, HRRRv4 had a reduced overforecasting bias for medium- and large-sized reflectivity objects, and for all objects during the afternoon. HRRRv4 also better replicated the distribution of object complexity and aspect ratio. Results for 6-h precipitation also suggested superior performance of HRRRv4 over HRRRv3. However, HRRRv4 had larger centroid displacement errors and more severely overforecast objects with high maximum precipitation amounts. Overall, this exercise revealed multiple forecast deficiencies in the HRRR, enabling developers to focus development efforts on specific improvements to model forecasts.
Significance Statement
This work builds upon the authors’ prior work in assessing model forecast quality using an alternative verification method—object-based verification. In this paper we verified two versions of the same model (one an upgrade from the other) that were making forecasts covering the same time window, using the object-based verification method. We found that the updated model was not statistically significantly better, although there were indications it performed better in certain aspects such as capturing the change in the number of storms during the daytime. We were able to identify specific problem areas in the models, which helps us direct model developers in their efforts to further improve the model.
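The best-match pairing described above can be illustrated with a simplified criterion: for each observed object, keep only the closest forecast object within some maximum distance. Object-based methods typically use a fuller interest score combining several attributes; this sketch substitutes centroid distance alone, and all names and thresholds are illustrative:

```python
import numpy as np

def best_matches(fcst_centroids, obs_centroids, max_dist=100.0):
    """For each observed object, keep the single closest forecast object.

    Centroids are (x, y) positions in km. Returns {obs_index: fcst_index};
    observed objects with no forecast object within max_dist stay unmatched.
    """
    f = np.asarray(fcst_centroids, dtype=float)
    o = np.asarray(obs_centroids, dtype=float)
    pairs = {}
    for k, c in enumerate(o):
        d = np.hypot(*(f - c).T)  # distance from obs centroid to every forecast centroid
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs[k] = j
    return pairs
```

Restricting verification to these best pairs removes duplicate matches, which is the bias-reduction role the reduced pair set plays in the comparison.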
Abstract
Multiscale valid time shifting (VTS) was explored for a real-time convection-allowing ensemble (CAE) data assimilation (DA) system, developed by the Multiscale data Assimilation and Predictability Laboratory, that features hourly assimilation of conventional in situ and radar reflectivity observations. VTS triples the base ensemble size using two subensembles containing member forecast output before and after the analysis time. Three configurations were tested with 108-member VTS-expanded ensembles: VTS applied to the mesoscale conventional DA alone (ConVTS), to the storm-scale radar DA alone (RadVTS), and to both DA components (BothVTS). Systematic verification demonstrated that BothVTS matched the DA spread and accuracy of the best-performing individual-component VTS. Ten-member forecasts showed that BothVTS performs similarly to ConVTS, with RadVTS having better skill in 1-h precipitation at forecast hours 1-6 while Both/ConVTS had better skill at later hours 7-15. An objective splitting of cases by 2-m temperature cold bias revealed that RadVTS was more skillful than Both/ConVTS out to hour 10 for cold-biased cases, while BothVTS performed best at most hours for less-biased cases. A sensitivity experiment demonstrated improved performance of BothVTS when the underlying model cold bias was reduced. Diagnostics revealed that the enhanced spurious convection of BothVTS for cold-biased cases was tied to larger analysis increments in temperature than in moisture, resulting in erroneously high convective instability. This study is the first to examine the benefits of a multiscale VTS implementation, showing that BothVTS can be used to improve the overall performance of a multiscale CAE system. Further, these results underscore the need to limit biases within a DA and forecast system to take best advantage of VTS analysis benefits.
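The core VTS mechanic, tripling the ensemble by treating time-shifted member forecasts as extra samples valid at the analysis time, can be sketched in a few lines. The member representation here is abstract, and this only shows the pooling step, not the DA cycling around it:

```python
def valid_time_shift(base_members, shifted_earlier, shifted_later):
    """Triple an ensemble by pooling time-shifted member forecasts.

    base_members: member states valid at the analysis time t.
    shifted_earlier / shifted_later: the same members' forecast output
    valid at t - dt and t + dt, treated as additional samples valid at t.
    """
    return list(base_members) + list(shifted_earlier) + list(shifted_later)
```

With a 36-member base ensemble, this pooling yields the 108-member VTS-expanded ensembles tested above.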
Abstract
A previous study has shown that a large portion of subseasonal-to-seasonal European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecasts for 2-m temperature exhibit properties of univariate bimodality, in some locations occurring in over 30% of forecasts. This study introduces a novel methodology to identify “bimodal events,” meteorological events that trigger the development of spatially and temporally correlated bimodality in forecasts. Understanding such events not only provides insight into the dynamics of the meteorological phenomena causing bimodal events, but also indicates when Gaussian interpretations of forecasts are detrimental. The methodology that is developed allows one to systematically characterize the spatial and temporal scales of the derived bimodal events, and thus uncover the flow states that lead to them. Three distinct regions that exhibit high occurrence rates of bimodality are studied: one in South America, one in the Southern Ocean, and one in the North Atlantic. It is found that bimodal events in each region appear to be triggered by synoptic processes interacting with geographically specific processes: in South America, bimodality is often related to Andes blocking events; in the Southern Ocean, bimodality is often related to an atmospheric Rossby wave interacting with sea ice; and in the North Atlantic, bimodality is often connected to the displacement of a persistent subtropical high. This common pattern of large-scale circulation anomalies interacting with local boundary conditions suggests that any deeper dynamical understanding of these events should incorporate such interactions.
Significance Statement
Repeatedly running weather forecasts with slightly different initial conditions provides some information on the confidence of a forecast. Occasionally, these sets of forecasts spread into two distinct groups or modes, making the “typical” interpretation of confidence inappropriate. What leads to such a behavior has yet to be fully understood. This study contributes to our understanding of this process by presenting a methodology that identifies coherent bimodal events in forecasts of near-surface air temperature. Applying this methodology to a database of such forecasts reveals several key dynamical features that can lead to bimodal events. Exploring and understanding these features is crucial for saving forecasters’ resources, creating more skillful forecasts for the public, and improving our understanding of the weather.
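One simple, commonly used screen for univariate bimodality in an ensemble is Sarle's bimodality coefficient, computed from sample skewness and kurtosis; values above about 5/9 suggest bimodality. This is an illustrative diagnostic, not necessarily the detection method used in the study:

```python
import numpy as np

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient for a 1D sample.

    Values above ~5/9 (the value for a uniform distribution) are a
    heuristic indicator of bimodality; a Gaussian sample gives ~1/3.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    m = x.mean()
    s = x.std(ddof=0)
    g1 = np.mean((x - m) ** 3) / s ** 3          # sample skewness
    g2 = np.mean((x - m) ** 4) / s ** 4 - 3.0    # sample excess kurtosis
    return (g1 ** 2 + 1.0) / (g2 + 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```

Applied gridpoint by gridpoint to 2-m temperature members, such a diagnostic flags candidate forecasts before any spatial or temporal clustering into coherent "bimodal events."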
Abstract
On 28 April 2019, hourly forecasts from the operational High-Resolution Rapid Refresh (HRRR) model consistently predicted an isolated supercell storm late in the day near Dodge City, Kansas, that subsequently was not observed. Two convection-allowing model (CAM) ensemble runs are created to explore the reasons for this forecast error and implications for severe weather forecasting. The 40-member CAM ensembles are run using the HRRR configuration of the WRF-ARW Model at 3-km horizontal grid spacing. The Gridpoint Statistical Interpolation (GSI)-based ensemble Kalman filter is used to assimilate observations every 15 min from 1500 to 1900 UTC, with the resulting ensemble forecasts run out to 0000 UTC. One ensemble assimilates only conventional observations, and its forecasts strongly resemble the operational HRRR, with all ensemble members predicting a supercell storm near Dodge City. In the second ensemble, conventional observations are supplemented with WSR-88D clear-air radial velocities, WSR-88D-diagnosed convective boundary layer heights, and GOES-16 all-sky infrared brightness temperatures to improve forecasts of the preconvective environment; in this ensemble, half of the members predict supercells. Results further show that the magnitude of the low-level meridional water vapor flux in the moist tongue largely separates members with and without supercells, with water vapor flux differences of 12% leading to these different outcomes. Additional experiments that assimilate only radar or only satellite observations show that both are important to predictions of the meridional water vapor flux. This analysis suggests that mesoscale environmental uncertainty remains a challenge that is difficult to overcome.
Significance Statement
Forecasts from operational numerical models are the foundation of weather forecasting. There are times when these models make forecasts that do not come true, such as 28 April 2019 when successive forecasts from the operational High-Resolution Rapid Refresh (HRRR) model predicted a supercell storm late in the day near Dodge City, Kansas, that subsequently was not observed. Reasons for this forecast error are explored using numerical experiments. Results suggest that relatively small changes to the prestorm environment led to large differences in the evolution of storms on this day. This result emphasizes the challenges to operational severe weather forecasting and the continued need for improved use of all available observations to better define the atmospheric state given to forecast models.
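The quantity separating members in this case, the low-level meridional water vapor flux, is essentially the product of specific humidity and meridional wind, optionally integrated over a pressure layer. A minimal sketch; the layer bounds, level ordering, and function name are illustrative assumptions, not the paper's exact diagnostic:

```python
import numpy as np

def meridional_moisture_flux(qv, v, p):
    """Vertically integrated meridional water vapor flux, (1/g) * integral of qv*v dp.

    qv: specific humidity (kg/kg); v: meridional wind (m/s); p: pressure
    levels (Pa), ordered from the top of the layer down to its base.
    Returns the flux in kg m^-1 s^-1.
    """
    g = 9.81  # gravitational acceleration (m s^-2)
    integrand = np.asarray(qv, dtype=float) * np.asarray(v, dtype=float)
    # trapezoidal rule written out explicitly (avoids NumPy-version differences)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p)) / g)
```

Comparing this integral between ensemble members is one way to quantify the ~12% flux differences noted above.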
Abstract
Cutoff lows are often associated with high-impact weather; therefore, it is critical that operational numerical weather prediction systems accurately represent the evolution of these features. However, medium-range forecasts of upper-level features using the Global Forecast System (GFS) are often subjectively characterized by excessive synoptic progressiveness, i.e., a tendency to advance troughs and cutoff lows too quickly downstream. To better understand synoptic progressiveness errors, this research quantifies seven years of 500-hPa cutoff low position errors over the globe, with the goal of objectively identifying regions where synoptic progressiveness errors are common and how frequently these errors occur. Specifically, 500-hPa features are identified and tracked in 0–240-h 0.25° GFS forecasts during April 2015–March 2022 using an objective cutoff low and trough identification scheme and compared to corresponding 500-hPa GFS analyses. In the Northern Hemisphere, cutoff lows are generally underrepresented in forecasts compared to verifying analyses, particularly over continental midlatitude regions. Features identified in short- to long-range forecasts are generally associated with eastward zonal position errors over the conterminous United States and northern Asia, particularly during the spring and autumn. Similarly, cutoff lows over the Southern Hemisphere midlatitudes are characterized by an eastward displacement bias during all seasons.
Significance Statement
Cutoff lows are often associated with high-impact weather, including excessive rainfall, winter storms, and severe weather. GFS forecasts are often subjectively noted to advance cutoff lows over the United States too quickly downstream, which limits forecast skill in potentially impactful scenarios. Therefore, this study quantifies the position error characteristics of cutoff lows in recent GFS forecasts. Consistent with typically anecdotal impressions of cutoff low position errors, this analysis demonstrates that cutoff lows over North America and central Asia are generally associated with an eastward position bias in medium- to long-range GFS forecasts. These results suggest that additional research to identify both the environmental conditions and the potential model deficiencies that may exacerbate this eastward bias would be beneficial.