Search Results
Abstract
Differences in the environments associated with tornadic and nontornadic mesocyclones are investigated using proximity soundings. Questions about the definition of proximity are raised. As the environments of severe storms are observed with increasingly high spatial and temporal resolution, the operational meaning of proximity becomes less clear. Thus the exploration of the proximity dataset is subject to certain caveats that are presented in some detail.
Results from this relatively small proximity dataset support a recently developed conceptual model of the development and maintenance of low-level mesocyclones within supercells. Three regimes of low-level mesocyclonic behavior are predicted by the conceptual model: (i) low-level mesocyclones are slow to develop, if at all, (ii) low-level mesocyclones form quickly but are short lived, and (iii) low-level mesocyclones develop slowly but have the potential to persist for hours. The model suggests that a balance is needed between the midtropospheric storm-relative winds, storm-relative environmental helicity, and low-level absolute humidity to develop long-lived tornadic mesocyclones. In the absence of that balance, such storms should be rare. The failure of earlier forecast efforts to discriminate between tornadic and nontornadic severe storms is discussed in the context of a physical understanding of supercell tornadogenesis. Finally, it is shown that attempts to gather large datasets of proximity soundings associated with rare weather events are likely to take many years.
Abstract
ECMWF provides the ensemble-based extreme forecast index (EFI) and shift of tails (SOT) products to facilitate forecasting severe weather in the medium range. Exploiting the ingredients-based method of forecasting deep moist convection, two parameters, convective available potential energy (CAPE) and a composite CAPE–shear parameter, have been recently added to the EFI/SOT, targeting severe convective weather. Verification results based on the area under the relative operating characteristic curve (ROCA) show high skill of both EFIs at discriminating between severe and nonsevere convection in the medium range over Europe and the United States. In the first 7 days of the forecast ROCA values show significant skill, staying well above the no-skill threshold of 0.5. Two case studies are presented to give some practical considerations and discuss certain limitations of the EFI/SOT forecasts and how they could be overcome. In particular, both convective EFI/SOT products are good at providing guidance for where and when severe convection is possible if there is sufficient lift for convective initiation. Probability of precipitation is suggested as a suitable ensemble product for assessing whether convection is likely to be initiated. The model climate should also be considered when determining whether severe convection is possible; EFI and SOT values are related to the climatological frequency of occurrence of deep, moist convection over a given place and time of year.
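The verification measure named above, the area under the relative operating characteristic curve (ROCA), can be sketched as follows. This is a minimal illustration using the rank-sum identity, with made-up forecast probabilities and outcomes, not data from the study:

```python
# Sketch of the ROC-area (ROCA) skill measure used to verify the EFI/SOT
# forecasts. The probabilities and observations below are illustrative.

def roc_area(probs, observed):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of event/non-event pairs the forecast ranks correctly."""
    events = [p for p, o in zip(probs, observed) if o]
    non_events = [p for p, o in zip(probs, observed) if not o]
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in non_events)
    return wins / (len(events) * len(non_events))

# ROCA > 0.5 means the forecast discriminates better than chance.
probs = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1]
observed = [1, 1, 0, 1, 0, 0, 0]
print(roc_area(probs, observed))  # 11/12, well above the 0.5 no-skill line
```

The rank-sum form avoids building the ROC curve explicitly and gives the same area as trapezoidal integration over all thresholds.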
Abstract
Increasing tornado warning skill in terms of the probability of detection and false-alarm ratio remains an important operational goal. Although many studies have examined tornado warning performance in a broad sense, less focus has been placed on warning performance within subdaily convective events. In this study, we use the NWS tornado verification database to examine tornado warning performance by order-of-tornado within each convective day. We combine this database with tornado reports to relate warning performance to environmental characteristics. On convective days with multiple tornadoes, the first tornado is warned significantly less often than the middle and last tornadoes. More favorable kinematic environmental characteristics, like increasing 0–1-km shear and storm-relative helicity, are associated with better warning performance related to the first tornado of the convective day. Thermodynamic and composite parameters are less correlated with warning performance. During tornadic events, over one-half of false alarms occur after the last tornado of the day decays, and false alarms are 2 times as likely to be issued during this time as before the first tornado forms. These results indicate that forecasters may be better “primed” (or more prepared) to issue warnings on middle and last tornadoes of the day and must overcome a higher threshold to warn on the first tornado of the day. To overcome this challenge, using kinematic environmental characteristics and intermediate products on the watch-to-warning scale may help.
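The two warning-skill measures named at the start of the abstract are standard contingency-table statistics. A minimal sketch, with hypothetical counts rather than values from the verification database:

```python
# Probability of detection (POD) and false-alarm ratio (FAR) from a
# 2x2 warning contingency table. Counts here are hypothetical.

def pod(hits, misses):
    # fraction of tornadoes that occurred inside a valid warning
    return hits / (hits + misses)

def far(hits, false_alarms):
    # fraction of warnings that verified with no tornado
    return false_alarms / (hits + false_alarms)

print(pod(hits=70, misses=30))        # 0.7
print(far(hits=70, false_alarms=70))  # 0.5
```

Increasing POD while decreasing FAR is the operational goal the abstract refers to; the two usually trade off against each other.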
Abstract
The primary objective of this study was to estimate the percentage of U.S. tornadoes that are spawned annually by squall lines and bow echoes, or quasi-linear convective systems (QLCSs). This was achieved by examining radar reflectivity images for every tornado event recorded during 1998–2000 in the contiguous United States. Based on these images, the type of storm associated with each tornado was classified as cell, QLCS, or other.
Of the 3828 tornadoes in the database, 79% were produced by cells, 18% were produced by QLCSs, and the remaining 3% were produced by other storm types, primarily rainbands of landfallen tropical cyclones. Geographically, these percentages as well as those based on tornado days exhibited wide variations. For example, 50% of the tornado days in Indiana were associated with QLCSs.
In an examination of other tornado attributes, statistically more weak (F1) and fewer strong (F2–F3) tornadoes were associated with QLCSs than with cells. QLCS tornadoes were more probable during the winter months than were cell tornadoes. Finally, QLCS tornadoes displayed a comparatively higher and statistically significant tendency to occur during the late night/early morning hours. Further analysis revealed a disproportional decrease in F0–F1 events during this time of day, which led the authors to propose that many (perhaps as many as 12% of the total) weak QLCS tornadoes were not reported.
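The statistical comparisons of tornado fractions by storm type could be made with a standard two-proportion test. The sketch below is illustrative only; the counts are invented and the paper's actual test may differ:

```python
import math

# Illustrative two-proportion z-test of the kind that could support the
# statement that QLCSs produce relatively more weak (F1) tornadoes than
# cells do. All counts below are made up, not the paper's data.

def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p value
    return z, p_value

# hypothetical weak-tornado counts: QLCS (k1 of n1) vs cell (k2 of n2)
z, p = two_proportion_z(k1=600, n1=700, k2=2400, n2=3000)
print(z > 0, p < 0.05)  # QLCS weak fraction significantly higher
```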
Abstract
Over the last 50 yr, the number of tornadoes reported in the United States has doubled from about 600 per year in the 1950s to around 1200 in the 2000s. This doubling is likely not related to meteorological causes alone. To account for this increase a simple least squares linear regression was fitted to the annual number of tornado reports. A “big tornado day” is a single day when numerous tornadoes and/or many tornadoes exceeding a specified intensity threshold were reported anywhere in the country. By defining a big tornado day without considering the spatial distribution of the tornadoes, a big tornado day differs from previous definitions of outbreaks. To address the increase in the number of reports, the number of reports is compared to the expected number of reports in a year based on linear regression. In addition, the F1 and greater Fujita-scale record was used in determining a big tornado day because the F1 and greater series was more stationary over time as opposed to the F2 and greater series. Thresholds were applied to the data to determine the number and intensities of the tornadoes needed to be considered a big tornado day. Possible threshold values included fractions of the annual expected value associated with the linear regression and fixed numbers for the intensity criterion. Threshold values of 1.5% of the expected annual total number of tornadoes and/or at least 8 F1 and greater tornadoes identified about 18.1 big tornado days per year. Higher thresholds such as 2.5% and/or at least 15 F1 and greater tornadoes showed similar characteristics, yet identified approximately 6.2 big tornado days per year. Finally, probability distribution curves generated using kernel density estimation revealed that big tornado days were more likely to occur slightly earlier in the year and have a narrower distribution than any given tornado day.
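The detrending step described above, fitting a least squares line to annual report counts so that a day's total can be compared with the year's expected total, can be sketched as follows. The counts are synthetic, constructed only to mirror the roughly 600-to-1200 rise the abstract describes:

```python
# Minimal sketch of the detrending used to define "big tornado day"
# thresholds: fit an OLS line to annual tornado report counts, then take
# a percentage of the year's expected total. Counts are synthetic.

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = list(range(1954, 2004))
# synthetic record rising from ~600 to ~1200 reports per year
counts = [600 + 12 * (y - 1954) for y in years]
slope, intercept = fit_line(years, counts)

expected_2000 = slope * 2000 + intercept
print(round(expected_2000))           # expected annual total for 2000
print(round(0.015 * expected_2000))   # a 1.5% single-day count threshold
```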
Abstract
The problem of forecasting the maintenance of mesoscale convective systems (MCSs) is investigated through an examination of observed proximity soundings. Furthermore, environmental variables that are statistically different between mature and weakening MCSs are input into a logistic regression procedure to develop probabilistic guidance on MCS maintenance, focusing on warm-season quasi-linear systems that persist for several hours. Between the mature and weakening MCSs, shear vector magnitudes over very deep layers are the best discriminators among hundreds of kinematic and thermodynamic variables. An analysis of the shear profiles reveals that the shear component perpendicular to MCS motion (usually parallel to the leading line) accounts for much of this difference in low levels and the shear component parallel to MCS motion accounts for much of this difference in mid- to upper levels. The lapse rates over a significant portion of the convective cloud layer, the convective available potential energy, and the deep-layer mean wind speed are also very good discriminators and collectively provide a high level of discrimination between the mature and dissipation soundings as revealed by linear discriminant analysis. Probabilistic equations developed from these variables used with short-term numerical model output show utility in forecasting the transition of an MCS with a solid line of 50+ dBZ echoes to a more disorganized system with unsteady changes in structure and propagation. This study shows that empirical forecast tools based on environmental relationships still have the potential to provide forecasters with improved information on the qualitative characteristics of MCS structure and longevity. This is especially important since the current and near-term value added by explicit numerical forecasts of convection is still uncertain.
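The probabilistic guidance described above comes from logistic regression, which maps sounding-derived predictors to a maintenance probability. A minimal sketch of that mapping follows; the coefficients and predictor values are invented for illustration and are not the study's fitted equations:

```python
import math

# Sketch of how a logistic regression equation turns environmental
# predictors into a probability of MCS maintenance. Coefficients and
# predictors below are hypothetical, not the paper's fitted values.

def maintenance_probability(coeffs, intercept, predictors):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    score = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-score))

# hypothetical predictors: deep-layer shear (m/s), CAPE (J/kg),
# deep-layer mean wind (m/s)
p = maintenance_probability(
    coeffs=[0.15, 0.001, 0.05],
    intercept=-6.0,
    predictors=[25.0, 2000.0, 12.0],
)
print(round(p, 2))  # probability between 0 and 1
```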
Abstract
The Storm Prediction Center (SPC) tornado database, generated from NCEI’s Storm Data publication, is indispensable for assessing U.S. tornado risk and investigating tornado–climate connections. Maximizing the value of this database, however, requires accounting for systemically lower reported tornado counts in rural areas owing to a lack of observers. This study uses Bayesian hierarchical modeling to estimate tornado reporting rates and expected tornado counts over the central United States during 1975–2016. Our method addresses a serious solution nonuniqueness issue that may have affected previous studies. The adopted model explains 73% (>90%) of the variance in reported counts at scales of 50 km (>100 km). Population density explains more of the variance in reported tornado counts than other examined geographical covariates, including distance from nearest city, terrain ruggedness index, and road density. The model estimates that approximately 45% of tornadoes within the analysis domain were reported. The estimated tornado reporting rate decreases sharply away from population centers; for example, while >90% of tornadoes that occur within 5 km of a city with population > 100 000 are reported, this rate decreases to <70% at distances of 20–25 km. The method is directly extendable to other events subject to underreporting (e.g., severe hail and wind) and could be used to improve climate studies and tornado and other hazard models for forecasters, planners, and insurance/reinsurance companies, as well as for the development and verification of storm-scale prediction systems.
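The underreporting correction has a simple arithmetic core: once a reporting rate is estimated for a location, the expected true count follows from the reported count. The toy distance-decay rate function below is a made-up stand-in for the paper's Bayesian hierarchical model, shown only to illustrate the idea:

```python
import math

# Toy illustration of the reporting-rate correction: expected true counts
# equal reported counts divided by the estimated reporting rate. The
# exponential distance-decay form here is invented, not the paper's model.

def reporting_rate(dist_km, scale=30.0):
    # hypothetical rate: near 1 close to a population center,
    # decaying toward a floor with distance
    return 0.25 + 0.75 * math.exp(-dist_km / scale)

def expected_true_count(reported, dist_km):
    return reported / reporting_rate(dist_km)

print(round(reporting_rate(5.0), 2))    # high near a city
print(round(reporting_rate(25.0), 2))   # lower 20-25 km out
print(round(expected_true_count(10, 25.0), 1))  # exceeds the reported 10
```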
Abstract
Radar-based convective modes were assigned to a sample of tornadoes and significant severe thunderstorms reported in the contiguous United States (CONUS) during 2003–11. The significant hail (≥2-in. diameter), significant wind (≥65-kt thunderstorm gusts), and tornadoes were filtered by the maximum event magnitude per hour on a 40-km Rapid Update Cycle model horizontal grid. The filtering process produced 22 901 tornado and significant severe thunderstorm events, representing 78.5% of all such reports in the CONUS during the sample period. The convective mode scheme presented herein begins with three radar-based storm categories: 1) discrete cells, 2) clusters of cells, and 3) quasi-linear convective systems (QLCSs). Volumetric radar data were examined for right-moving supercell (RM) and left-moving supercell characteristics within the three radar reflectivity designations. Additional categories included storms with marginal supercell characteristics and linear hybrids with a mix of supercell and QLCS structures. Smoothed kernel density estimates of events per decade revealed clear geographic and seasonal patterns of convective modes with tornadoes. Discrete and cluster RMs are the favored convective mode with southern Great Plains tornadoes during the spring, while the Deep South displayed the greatest variability in tornadic convective modes in the fall, winter, and spring. The Ohio Valley favored a higher frequency of QLCS tornadoes and a lower frequency of RM compared to the Deep South and the Great Plains. Tornadoes with nonsupercellular/non-QLCS storms were more common across Florida and the high plains in the summer. Significant hail events were dominated by Great Plains supercells, while variations in convective modes were largest for significant wind events.
Abstract
The threat of damaging hail from severe thunderstorms affects many communities and industries on a yearly basis, with annual economic losses in excess of $1 billion (U.S. dollars). Past hail climatology has typically relied on the National Oceanic and Atmospheric Administration/National Climatic Data Center’s (NOAA/NCDC) Storm Data publication, which has numerous reporting biases and nonmeteorological artifacts. This research seeks to quantify the spatial and temporal characteristics of contiguous United States (CONUS) hail fall, derived from multiradar multisensor (MRMS) algorithms for several years during the Next-Generation Weather Radar (NEXRAD) era, leveraging the Multiyear Reanalysis of Remotely Sensed Storms (MYRORSS) dataset at NOAA’s National Severe Storms Laboratory (NSSL). The primary MRMS product used in this study is the maximum expected size of hail (MESH). The preliminary climatology includes 42 months of quality controlled and reprocessed MESH grids, which spans the warm seasons for four years (2007–10), covering 98% of all Storm Data hail reports during that time. The dataset has 0.01° latitude × 0.01° longitude × 31 vertical levels spatial resolution, and 5-min temporal resolution. Radar-based and reports-based methods of hail climatology are compared. MRMS MESH demonstrates superior coverage and resolution over Storm Data hail reports, and is largely unbiased. The results reveal a broad maximum of annual hail fall in the Great Plains and a diminished secondary maximum in the Southeast United States. Potential explanations for the differences in the two methods of hail climatology are also discussed.
Abstract
The verification phase of the World Weather Research Programme (WWRP) Sydney 2000 Forecast Demonstration Project (FDP) was intended to measure the skill of the participating nowcast algorithms in predicting the location of convection, rainfall rate and occurrence, wind speed and direction, severe thunderstorm wind gusts, and hail location and size. An additional question of interest was whether forecasters could improve the quality of the nowcasts compared to the FDP products alone.
The nowcasts were verified using a variety of statistical techniques. Observational data came from radar reflectivity and rainfall analyses, a network of rain gauges, and human (spotter) observations. The verification results showed that the cell tracking algorithms predicted the location of the strongest cells with a mean error of about 15–30 km for a 1-h forecast, and were usually more accurate than an extrapolation (Lagrangian persistence) forecast. Mean location errors for the area tracking schemes were on the order of 20 km.
Almost all of the algorithms successfully predicted the frequency of rain throughout the forecast period, although most underestimated the frequency of high rain rates. The skill in predicting rain occurrence decreased very quickly into the forecast period. In particular, the algorithms could not predict the precise location of heavy rain beyond the first 10–20 min. Using radar analyses as verification, the algorithms' spatial forecasts were consistently more skillful than simple persistence. However, when verified against rain gauge observations at point locations, the algorithms had difficulty beating persistence, mainly due to differences in spatial and temporal resolution.
Only one algorithm attempted to forecast gust fronts. The results for a limited sample showed a mean absolute error of 7 km h−1 and mean bias of 3 km h−1 in the speed of the gust fronts during the FDP. The errors in sea-breeze front forecasts were half as large, with essentially no bias. Verification of the hail associated with the 3 November tornadic storm showed that the two algorithms that estimated hail size and occurrence successfully diagnosed the onset and cessation of the hail to within 30 min of the reported sightings. The time evolution of hail size was reasonably well captured by the algorithms, and the predicted mean and maximum hail diameters were consistent with the observations.
The Thunderstorm Interactive Forecast System (TIFS) allowed forecasters to modify the output of the cell tracking nowcasts, primarily using it to remove cells that were insignificant or diagnosed with incorrect motion. This manual filtering resulted in markedly reduced mean cell position errors when compared to the unfiltered forecasts. However, when forecasters attempted to adjust the storm tracks for a small number of well-defined intense cells, the position errors increased slightly, suggesting that in such cases the objective guidance is probably the best estimate of storm motion.
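The gust-front verification above reports a mean absolute error and a mean bias in speed. These two measures can be sketched directly; the forecast/observed speed pairs below are invented, not FDP data:

```python
# Mean absolute error and mean bias, the two speed-verification measures
# quoted for the gust-front forecasts. Speed pairs (km/h) are invented.

def mean_absolute_error(forecast, observed):
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(forecast)

def mean_bias(forecast, observed):
    # positive bias means forecast speeds exceed observed on average
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

forecast = [40.0, 35.0, 50.0, 45.0]
observed = [35.0, 38.0, 42.0, 43.0]
print(mean_absolute_error(forecast, observed))  # 4.5
print(mean_bias(forecast, observed))            # 3.0
```

Bias and MAE answer different questions: a forecast can have zero bias (errors cancel) while still having a large MAE.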