Reliability and Climatological Impacts of Convective Wind Estimations

Roger Edwards, National Weather Service Storm Prediction Center, Norman, Oklahoma

John T. Allen, Central Michigan University, Mount Pleasant, Michigan

Gregory W. Carbin, National Weather Service Weather Prediction Center, College Park, Maryland


Abstract

Convective surface winds in the contiguous United States are classified as severe at 50 kt (58 mi h−1, or 26 m s−1), whether measured or estimated. In 2006, NCDC (now NCEI) Storm Data, from which analyzed data are directly derived, began explicit categorization of such reports as measured gusts (MGs) or estimated gusts (EGs). Because of the documented tendency of human observers to overestimate winds, the quality and reliability of EGs (especially in comparison with MGs) has been challenged, mostly for nonconvective winds and controlled-testing situations, but only speculatively for bulk convective data. For the 10-yr period of 2006–15, 150 423 filtered convective-wind gust magnitudes are compared and analyzed, including 15 183 MGs and 135 240 EGs, both nationally and by state. Nonmeteorological artifacts include marked geographic discontinuities and pronounced “spikes” of an order of magnitude in which EG values (in both miles per hour and knots) end in the digits 0 or 5. Sources such as NWS employees, storm chasers, and the general public overestimate EGs, whereas trained spotters are relatively accurate. Analysis of the ratio of EG to MG and their sources also reveals an apparent warning-verification-influence bias in the climatological distribution of wind gusts imparted by EG reliance in the Southeast. Results from prior wind-tunnel testing of human subjects are applied to 1) illustrate the difference between measured and perceived winds for the database and 2) show the impact on the severe-wind dataset if EGs were bias-corrected for the human overestimation factor.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Roger Edwards, roger.edwards@noaa.gov


1. Background

By definition, convectively produced surface winds are classified in the United States as severe when they are greater than or equal to 50 kt [58 mi h−1 (hereinafter mph), or 26 m s−1] whether measured or estimated. However, the history of U.S. severe-wind criteria shows uncertainty and variability in their development well into the twentieth century (Galway 1989), partly associated with the vagaries of assigning explicit wind speed thresholds for perceived aviation and public interests. Those purposes did not necessarily match, as evident in the different, somewhat arbitrary thresholds established by the U.S. Air Force (USAF) and the U.S. Weather Bureau (USWB) in the 1950s and 1960s. During some of this period, for example, the USWB defined severe winds as a sustained 1-min average > 43 kt (50 mph, 22 m s−1) or gusts > 65 kt (75 mph, 34 m s−1). Using these criteria, the USWB Severe Local Storms (SELS) Center, forebear to the Storm Prediction Center (SPC), issued two classes of severe thunderstorm watches: aviation watches for surface gusts > 43 kt (50 mph, 22 m s−1) and public versions for surface gusts > 65 kt (75 mph, 34 m s−1). SELS discontinued the dual-watch system in 1970 when the USWB agreed to use the USAF gust threshold of 50 kt (58 mph) for the severe-wind definition. In the National Weather Service (NWS), that criterion nominally continues to the present.

Besides explicitly measured or estimated gusts meeting or exceeding the current threshold, wind reports that can verify warnings, and that appear in the SPC severe-weather database (Schaefer and Edwards 1999), include convective wind damage to structures and trees. Though the “wind” portion of the SPC dataset includes both in situ gust values (measured or estimated), and damage reports with gusts estimated ex post facto, the metadata do not consistently distinguish between them. This study encompasses all convective wind reports’ assigned values, whether or not damage was recorded with any given gust. For clarity, “convective gusts” hereinafter refer to all gusts in the database, regardless of whether thunder specifically was associated with any given report.

The SPC provides severe-convective-gust reports dating to 1950, taken from Storm Data, in comma-separated and GIS formats available online (http://www.spc.noaa.gov/wcm). Quality of these data matters for many applications in research, operational prediction, forecast verification, and hazard mitigation and mapping. Unreliable wind data hinder research and forensic efforts to, among many possible uses, 1) compare storm intensities to each other or in bulk over time for climate-effect analysis; 2) determine climatological frequencies, geographic distributions, or recurrence intervals for specific wind speeds and wind speed ranges; 3) detrend temporal increases in data volume that were documented as early as Weiss and Vescio (1998); and 4) offer explicit, probabilistic, convective-wind guidance from postprocessing of numerical weather-prediction (NWP) ensembles. Such NWP guidance incorporates SPC wind data for the purpose of devising calibrated convective hazard probabilities (Jirak et al. 2014). Because severe winds occur from every convective mode (Smith et al. 2013), convective wind already has proven to be the most challenging of the severe-weather hazards to calibrate from ensemble NWP (Jirak et al. 2014), even before considering wind-data quality.

Assorted biases and nonmeteorological secularities in these data have been discussed in the literature for decades (e.g., Kelly et al. 1985; Doswell et al. 2005; Brooks and Dotzek 2008). Doswell (1985) described the process by which estimated gusts were entered to that time, often with no distinction from measured events. Weiss et al. (2002) illustrated the following:

  • A roughly threefold increase in severe-gust reports from 1970 to 1999 (their Fig. 1), a report inflation continuing through 2014 (Tippett et al. 2015).

  • A distinct clustering of gust records around marginal severe thresholds (58 mph, 50 kt, 26 m s−1) and at miles-per-hour integers ending in 0 or 5 (their Fig. 2). This indicated a dominant influence of estimated gusts in the data, despite the lack of consistent, systematic distinction of measured versus estimated sources in that era.

Doswell et al. (2005) documented, among other factors, sharp discontinuities in convective gust-report density across borders of NWS county warning areas (CWAs), changes within those jurisdictions over time, and proliferation of gusts in the 1990s and early 2000s from nonstandard sensors (mesonet and privately deployed sensors) of unknown or differing calibrations. Trapp et al. (2006) noted substantial spatiotemporal misrepresentation of convective wind-damage areas by associated wind reports and contended that the mandated use of peak-wind estimates for damage is “essentially arbitrary and fraught with potential errors.” Those investigations did not explicitly differentiate measured and estimated gusts in the data. A radar- and detected-lightning-aided 5-yr analysis of Chinese “severe convective winds” (Yang et al. 2017) found northward warm-season shifts and a strong afternoon preference for MGs, as the United States experiences, but with a substantially lower “severe” threshold of 17 m s−1 (38 mph, 33 kt) and purposeful disuse of reports in high-altitude areas. MGs in the United States are logged as reported, without explicit regard to altitude, typically from sites reporting thunderstorms (but not always, as in convective wind events with little or no lightning) and rarely (in Storm Data comments) with specific mention of radar signature.

Weiss and Vescio (1998) recommended distinguishing measured and estimated winds in Storm Data, from which the SPC convective-reports database is derived directly. In 2006, NCDC (now NCEI) began doing so. Formats before and after this change are exemplified in Fig. 1. Storm Data contains default entries of “thunderstorm wind” followed by values in parentheses, with an acronym specifying whether it was a measured gust (MG) or estimated gust (EG), along with the measured “sustained” (MS) and estimated “sustained” (ES) convective wind categories. MGs from standard Automated Surface Observing System (ASOS) and Automated Weather Observing System (AWOS) sites are available independently prior to 2006 and have been analyzed in previous studies. For example, the Smith et al. (2013) climatology and spatial analysis, using a 2003–09 version of the same wind dataset as ours, indicated a strong eastward shift in maximum density of combined EG and MG report data into the Appalachian region, versus the subset of those that were MGs from ASOS and AWOS (their Fig. 6). Their sourcing breakdowns were not as detailed over as many years as ours (section 3d), so it is unknown how many of their non-ASOS, non-AWOS reports were MGs by other means, especially prior to 2006. Still, given the relatively high volume of explicitly binned EGs (including damage reports) in our data, including the 4 yr that overlap theirs, the implied eastern-United States weight toward EGs is generally consistent with our results (below) to the extent comparisons are valid across changing report-logging standards.

Fig. 1.

Screen captures of cropped portions of Storm Data pages for (a) 4 Jun 2005 in Iowa, exemplifying the pre-specification-era gust format, and (b) 23 May 2011 in Arkansas. In (b), EG and MG signify estimated and measured gusts, respectively. Note that neither the source of the EG nor instrument information for the MG is given. Gust values (kt) are in parentheses.


Specific categorization of EGs and their conterminous comparison with MGs from Storm Data necessarily begins in 2006. Among severe convective report modes, wind data are unique in that numeric magnitude estimates are assigned not only in the field by observers and surveyors (as with hail and tornadoes), but remotely for all other damage by mandate. Even tornadoes have an option for assigning “unknown” to the enhanced Fujita scale (EF) rating (Edwards et al. 2013). For more details on Storm Data convective-wind policy, see NOAA (2007).

The quality and reliability of EGs (especially compared to MGs) has been challenged, mostly for nonconvective winds and controlled-testing situations, but only speculatively for bulk convective wind reports. Doswell et al. (2005) stated, albeit with no sourcing or citation, “Human observers typically overestimate the wind speed, owing to a lack of experience with extreme winds.” The lead author’s three decades of anecdotal observations from storm-observing experience strongly support their contention, but likewise have no analytic basis.

By contrast, Miller et al. (2016b, hereinafter M16) performed the most thorough known examination of nonconvective EGs. They used daily wind data from the U.S. Global Historical Climatology Network (GHCN; Menne et al. 2012), applied gust factors, and then compared the results to the nearest actual or assumed human-estimated reports available in Storm Data. Unable to distinguish between in situ and ex post facto estimates, as is our situation with convective gusts, they assumed that unspecified gust reports were human estimated. A further assumption was made in proximal terms: that a GHCN-derived gust factor was representative on the scale of NWS forecast zones (or roughly the meso-β scale). Gust factors were drawn from relatively “flat” land away from the western United States and Appalachians—mainly the plains, Midwest, south, and Gulf and Atlantic coastal states. However, no further mitigation was performed to account for local terrain irregularities such as the Ozarks, Mesabi Iron Range, Ouachita Mountains, Raton Mesa, Caprock Escarpment, or Black Hills that do exist within those broader “flat” physiographic provinces. Regardless of those limitations, they found that estimates of gradient-wind gusts disproportionately resided in the upper portion of the observed distribution and were statistically improbable overestimates.

While M16 revealed probable human overestimation of nonconvective wind speeds, their proximity criterion is too large spatially to apply on the convective scale for similar observational versus human-estimate comparisons. Furthermore, convective wind estimates may be influenced by factors that are uncommon in nonconvective events, including rapid accelerations and decelerations over a timespan of seconds, visual impairment due to outflow dust or heavy precipitation, overlap of wind noise with sounds from thunder and precipitation, and inconsistent presence or lack of reference indicators as used in the Beaufort scale (Curtis 1897; Abbe 1914; Varney 1925). Unknown influences also may exist from psychological duress imparted by those factors, as well as from the mere presence of a frightening storm and non-wind convective hazards such as heavy rain, lightning, and hail that may influence perception of storm severity.

Immersive experience with wind estimation appears to matter in controlled settings. Using an anemometer-calibrated indoor chamber, Pluijms et al. (2015) determined that expert sailors judged wind speed and direction better than non-sailors did. This implies that experienced observers, such as many storm spotters and chasers, likewise may estimate wind more accurately than novices or the general public and helps to justify a breakdown of estimations by source as per M16 and section 3d herein. However, Storm Data contains no systematic information on experience levels within each stated estimation source. Pluijms et al. (2015) also used a maximum speed of ≈5 kt (2.6 m s−1), an order of magnitude below NWS severe criteria. Given these limitations and the results of M16, breakdowns by source are justifiable, but not with finer granularity than stated source type. Substantive discussion of explicitly psychological factors (e.g., worrisome wind noise, lightning and thunder, dark clouds) is beyond the scope of this study.

Agdas et al. (2012, hereinafter A12) conducted a wind-chamber experiment on 76 in situ human subjects. These individuals were tasked to estimate wind speeds of 10, 20, 30, 40, 50, and 60 mph (4.5, 8.9, 13.4, 17.9, 22.3, and 26.8 m s−1, respectively), all applied to each subject in random speed order. As with M16, wind speed errors were smaller among those reporting more exposure to high wind—in their case, Florida tropical cyclones. Convective effects (lightning, precipitation, extreme gustiness, etc.) were not considered explicitly in the A12 study either. The gap between actual and perceived NWS severe-threshold winds is represented by the dark gray area between the blue and red lines in Fig. 2. Positive absolute wind-estimate errors grew with increasing speed, beginning at 30 mph (13 m s−1), and nonlinearity of estimates became greater with strengthening speeds after 20 mph (9 m s−1). The highest tested winds of 60 mph (27 m s−1) were perceived to be 75 mph (33.5 m s−1), an overestimate by a factor of 1.25.

Fig. 2.

Human-perceived vs actual wind speeds in wind-chamber testing with point values at testing intervals. The horizontal and vertical scales are not equal (see axes). Short black vertical bars at each plotted point represent 95% confidence intervals at 10-mph (4.5 m s−1) intervals. The dark gray shading represents the difference domain between actual (red) and perceived (blue) severe wind. Adapted from A12.


A controlled, large-sample human study of wind-gust estimation in real convective scenarios is practically impossible. As such, A12 findings, as summarized graphically in Fig. 2 and annotated for actual and perceived “severe” winds, likely represent the best available numerical approximations to in situ human overestimation factors. Results from the A12 wind-perception curve therefore will be incorporated into a portion of this work (section 3e). The next section describes the analyses performed on the wind dataset itself.

2. Data and methods

The SPC convective database, as earlier described, contains specific tracking of measured and estimated reporting for each event beginning in 2006. As such, available 2006–15 convective-wind data are used herein, sorted first by MG and EG categories. For 2006–15, 124 subsevere values (119 estimated, 5 measured) were found in the SPC data and removed from the analysis set. A total of 150 423 convective-wind values remain for the 10-yr analysis period, encompassing 15 148 MGs, 35 MSs, 135 053 EGs, and 187 ESs.
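
A minimal sketch of this sorting step is shown below, assuming (hypothetically) that the SPC wind file has been read into a pandas DataFrame with a magnitude column in knots ("mag") and a magnitude-type column ("mag_type") containing MG, EG, MS, or ES; the file name and column names are illustrative and may differ from the distributed SPC files.

```python
# Sketch only: "spc_wind_2006_2015.csv", "mag" (kt), and "mag_type" are assumed
# names, not necessarily those used in the distributed SPC files.
import pandas as pd

wind = pd.read_csv("spc_wind_2006_2015.csv")

# Drop the subsevere entries (<50 kt) noted in the text.
wind = wind[wind["mag"] >= 50].copy()

# Tally the four Storm Data gust/sustained categories.
print(wind["mag_type"].value_counts())   # expected ordering: EG >> MG >> ES, MS
```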

For MGs, Storm Data does not supply calibration information regarding wind instruments, nor the type of instrument used, nor other consistent specifics on either human or instrumental sources (e.g., Fig. 1b); therefore, no such filtering can be done in this study. All gusts within each category (MG and EG) are treated without preference in terms of instrument reliability or potential classification error, acknowledging that systematic mechanical differences across instrument classes and misfiled reports may affect our results in unknown ways.1 We assume that the number of erroneously filed events in the 10-yr database is comparatively small and their effects will be minimized by the very large sample size. Still, we did mine Storm Data for sourcing information for both EGs and MGs.

The reasoning by which sustained (ES and MS) events are segregated from gusts in Storm Data is neither given in the publication’s documentation nor specified in most entry comments. No regulations or guidelines for distinguishing sustained wind from gusts are specifically elucidated in Storm Data policy either (e.g., NOAA 2007, p. 70). Given their relatively minuscule sample sizes (0.14% and 0.23% of total estimated and measured events, respectively), the sustained winds will be included within the EG and MG categories for our analytic purposes, which encompass severe convective winds as a whole. Thus, EG and MG hereinafter refer to all estimated and measured values, respectively. Figure 3 shows the geographic report distribution for the decadal period of our study.

Fig. 3.

Geographic distribution of convective-gust reports, 2006–15: (a) measured (blue) and (b) estimated (red).


Values further were sorted by the ≥65-kt (75 mph, 33 m s−1) operational definition of “significant” wind (Hales 1988), and by state and CWA, for comparison between wind-strength categories and across different parts of the contiguous United States (CONUS). On the basis of the results of Smith et al. (2013), we hypothesized that significant–severe gusts should be more common in the western CONUS and Great Plains states and that MGs would be a greater portion of the data in those regions than east of the Mississippi River. Based on Weiss et al. (2002) and operational experience with storm reports, we also hypothesized that values corresponding to digits ending in 0 and 5 (in miles per hour) would exhibit peaks relative to surrounding speeds for EGs but not MGs.
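
A sketch of this stratification, continuing from the DataFrame in the section 2 sketch and assuming a hypothetical two-letter state-code column ("st"), is given below; the sustained reports are folded into the gust bins as described above.

```python
SIG_KT = 65                                   # significant-severe threshold (kt)

# Fold MS/ES into the measured/estimated bins and flag significant gusts.
wind["measured"] = wind["mag_type"].isin(["MG", "MS"])
wind["significant"] = wind["mag"] >= SIG_KT

# Fraction of gusts that are significant, nationally and by state, per category.
national = wind.groupby("measured")["significant"].mean()
by_state = wind.groupby(["st", "measured"])["significant"].mean().unstack()
```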

3. Analytic results and interpretations

a. Basic wind speed data

On the whole, both MG and EG report counts exhibited pronounced decreases with increasing speed at near-logarithmic rates (Fig. 4), past the apparent high value at the marginal warning-verification threshold of 50 kt (58 mph, 26 m s−1). The largest anomalies were with the EGs at speeds ending in 0 or 5, whether in miles per hour or knots. In each such case, these anomalies added at least an order of magnitude to report totals. By the time values exceeded 100 mph (87 kt, 45 m s−1), the sample size steadily decreased on a logarithmic scale to ~10^1 total events from ~10^4 at marginal–severe thresholds. Away from those ending digits, MGs (blue) typically outnumbered EGs, except at the largest values (≥90 kt) where sample size was small.
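
One way to quantify the "spike" artifact is to compare report counts at speeds whose knot or back-converted mile-per-hour value ends in 0 or 5 with counts at neighboring speeds. A rough sketch, continuing from the earlier sketches and their assumed columns, follows.

```python
import numpy as np

eg = wind[~wind["measured"]]
counts = eg["mag"].round().astype(int).value_counts().sort_index()

# Back-convert knots to the nearest whole mile per hour (an approximation,
# since Storm Data stores rounded knots).
mph = np.rint(counts.index.values * 1.15078).astype(int)
spiky = (counts.index.values % 5 == 0) | (mph % 5 == 0)

share_at_spikes = counts[spiky].sum() / counts.sum()
print(f"{share_at_spikes:.1%} of EG reports fall on 0- or 5-ending speeds")
```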

Fig. 4.

Histogram of wind-report distribution. The ordinate is logarithmically scaled by event count, and the abscissa is linearly scaled by wind speed (kt). Measured reports are in blue, and estimated counts are in red, such that the stacked height of each combined red and blue pillar represents the total count for that speed value. Red numbers correspond to originally reported miles-per-hour values.


No physical explanation exists for the unlikely possibility of actual atmospheric production of winds an order of magnitude greater in count for integers ending in 0 or 5 by any measurement scale versus those ending in, say, 3 or 7. This very strongly suggests that EG “spikes” are secular artifacts. By contrast, the presence of precise EGs (Fig. 4) that fall between the zeroes and fives of both English wind units is counterintuitive and presents its own quandaries, namely: how do estimates such as 59 mph (51 kt, 26 m s−1), 71 mph (62 kt, 32 m s−1), or 83 mph (72 kt, 37 m s−1) occur? In those three instances, even metric units fall between the zeroes and fives, lowering the probability of EG sourcing from metrically literate foreign visitors. The limited available metadata do not reveal the extent to which errors in rounding and/or data entry (e.g., bad unit conversions or misclassification of MGs as EGs) contributed to the otherwise inexplicable specificity of such estimates. However, the great variability in neighboring values does imply false precision in the database.
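
A quick arithmetic check of those three examples (a sketch using standard unit conversions) confirms that neither the knot nor the meter-per-second equivalents end in 0 or 5 either:

```python
# 1 mph = 0.868976 kt = 0.44704 m/s
for mph in (59, 71, 83):
    kt = mph * 0.868976
    ms = mph * 0.44704
    print(f"{mph} mph ~ {kt:.0f} kt ~ {ms:.0f} m/s")
# 59 mph ~ 51 kt ~ 26 m/s
# 71 mph ~ 62 kt ~ 32 m/s
# 83 mph ~ 72 kt ~ 37 m/s
```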

As somewhat apparent in logarithmically scaled Fig. 4, and even more so in linearly scaled Fig. 5, MGs likewise exhibit relative peaks at values ending in the digits 0 or 5 (in miles per hour and knots). Though these MG peaks are much smaller, relative to neighboring values, than the corresponding EG peaks, the differences remain pronounced, especially on a linear ordinate. This also defies physical explanation in the real atmosphere. While Storm Data offers insufficient information to establish definitive causes for this nonmeteorological artifact (e.g., rounding), the misclassifications of nonconvective winds discussed in the section 2 footnote and the sourcing data available (section 3d) indicate that similar errors may contribute to the “spikes” for MGs here as well. On a logarithmic scale, the decrease in MG counts with strength conforms closely to a linear best fit, even with the aforementioned secularities that are relatively minor in amplitude compared to those in the EG data.

Fig. 5.

Histogram of the 50–70-kt subset of MGs; the ordinate is scaled linearly. Blue numbers represent miles-per-hour speeds.


A pronounced EG peak also is evident at the 50-kt (58 mph, 25.7 m s−1) marginal–severe threshold (Fig. 4)—the minimum speed criterion to verify a warning. That bin contained 54 229 reports, compared to 366 EGs of 51 kt (59 mph, 26 m s−1). This represents a decrease of two orders of magnitude across 1 kt (0.5 m s−1) of wind speed, also with no known physical cause. A somewhat less-pronounced MG peak also exists at the same marginal–severe threshold (Fig. 5). Preferential clustering of EGs at NWS warning (and warning-verification) criteria also has been documented in the nonconvective gust data (e.g., Fig. 6). These data, including the unknown specific number of damage events to which ≥50-kt (26 m s−1) wind values were arbitrarily assigned, collectively suggest a strong secular influence of warning-verification practices on gust values that are recorded into the climatology. That phenomenon is akin to documented verification-related thresholding effects on hail-data collection (e.g., Amburn and Wolf 1997; Allen and Tippett 2015; Blair et al. 2017), and similarly reinforces the idea that one purpose of convective-wind event collection is to serve a subjective warning-verification database. To the extent data are entered (or withheld) for warning-verification purposes, which is not knowable numerically without explicit documentation, the data may not represent a true climatology of severe thunderstorm winds.

Fig. 6.

Linear-scaled histograms of nonconvective report counts as follows: (a) human EGs from Storm Data and (b) MGs from the GHCN (Menne et al. 2012) U.S. daily station data. Bins including the NWS 58 mph (26 m s−1) nonconvective warning criterion, which matches that for severe convection, are labeled with the MGs in blue and EGs in red, as elsewhere herein. Adapted from Fig. 1 in Miller et al. (2016a).


b. Geographic distributions

Severe-convective gust data show pronounced discontinuities and shifts across the CONUS. As visually evident in Fig. 3a, MGs are most common in a roughly triangular corridor from the southern Great Plains to the northern plains, upper Midwest, and lower Great Lakes. Within this broad area, relatively dense spatial-density maxima appear in population centers such as Chicago–Milwaukee, Dallas–Fort Worth, Oklahoma City, Denver, Kansas City, Saint Louis, Indianapolis, Minneapolis–Saint Paul, and Omaha. A pronounced relative minimum in MGs within this area exists over the Nebraska Sand Hills, likely related to lack of both observation stations and population, except for a north–south strand corresponding to a major transportation corridor (US-81).

Compared to the contiguous plains and Midwest regions, lesser densities of MGs exist from the Appalachians to the East Coast and across the Gulf Coast region and Florida, except for a relatively higher concentration around the D.C. metropolitan area. Despite much lower population density (not shown), the plains states exhibit noticeably greater concentration and absolute numbers of MGs compared to the Atlantic and Gulf Coast states. This is consistent with the findings of Smith et al. (2013), whose data overlap ours by 4 yr, and whose radar-based results confirm a dominant meteorological cause. The presence of low MG densities over the sparsely populated Rocky Mountain west, by contrast, indicates some combination of sparse population and low instrument density influencing the data there, since MGs are sourced from both automated and manned instrumentation.

Relatively large MG concentrations also appear in Fig. 3a over southeastern Idaho, northern Utah (notably the Great Salt Lake Desert and embedded mountain ranges, as well as the urban corridor), and around the Phoenix and Tucson metropolitan areas of southern Arizona. In addition to a relative density of surface-observing sites, even over the deserts, the northern Utah maximum collocates with a meteorological tendency for dry microbursts and/or severe-wind-producing mesoscale convective systems to occur over this area (Seaman et al. 2016). Farther west, a striking lack of MGs is evident over both densely and sparsely populated areas of central and Northern California, as well as Southern California from the Los Angeles metropolitan area westward. This also suggests a meteorological influence dominating those from either population or observing-site density. However, MG gaps across northern Arizona, western New Mexico, and around the Nevada Test and Training Range (Area 51) region of south–central Nevada, appear to correspond to a dearth of available observing sites. Sparseness of MGs over the north woods of Maine and Minnesota, and around Lake Superior, may be both meteorological and population related.

In contrast, EG concentration (Fig. 3b) generally increases from the Rockies eastward to the East Coast, except for relative population minima across parts of the Appalachians, Maine, the Adirondack region of northeastern New York, and the Lake Superior region. A pronounced relative EG minimum appears over most of central and southern Florida, including the coastal corridors. The interior part of the minimum appears to be population driven, in terms of the relatively depopulated Everglades area and agricultural regions around Lake Okeechobee. However, there is a puzzling paucity of reports in the densely settled southeastern coastal metropolitan corridor from Palm Beach through Miami–Dade County, especially when compared with other metropolitan areas of the Southeast. Maxima in EGs over southern Arizona, however, as well as around the Twin Cities of Minnesota and the Dallas–Fort Worth metroplex, likely are population related. Curiously, a relative concentration of EGs appears over the California Central Valley, where several observing sites exist, yet MGs are absent in the decade of record.

The ratio of EGs to MGs exhibits striking differences across the CONUS (Fig. 7). The highest proportions of EGs to MGs are over parts of New England, where gust sample sizes are relatively small, and over the Southeast (excluding Florida), where sample sizes are large. South Carolina, in particular, has a 78:1 ratio of EGs to MGs. This is partly related to the relative lack of MGs in that area (Fig. 3a), but also likely involves secular factors. The EG/MG ratio decreases westward toward the Intermountain West; in fact, Utah and Nevada offer more MGs than EGs, by approximately 3:1–4:1. Oklahoma has the lowest EG/MG ratio of the plains states east of the Rockies. Explicit sourcing of reports via Storm Data (section 3d) indicates the influence of the relatively anemometer-dense Oklahoma Mesonet (Brock et al. 1995), which was in operation during the entire study period. By contrast, neighboring Kansas has an EG/MG ratio more than double Oklahoma’s, and not only because of fewer stations; Kansas also has a greater relative concentration of EGs (Fig. 3b) than any of the other plains states.
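
The state-level ratio mapped in Fig. 7 can be reproduced in a few lines; the sketch below continues from the earlier DataFrame and its assumed "st" column.

```python
is_eg = ~wind["measured"]
eg_by_state = wind[is_eg].groupby("st").size()
mg_by_state = wind[~is_eg].groupby("st").size()

ratio = (eg_by_state / mg_by_state).sort_values(ascending=False)
print(ratio.head())   # South Carolina is near 78:1 in the 2006-15 data
```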

Fig. 7.

Map of EG/MG ratio by state, 2006–15. Red (blue) numbers correspond to ratios above (below) unity. Ratios round to unity over Colorado and New Mexico (reddish gray number).


Gust records normalized and mapped by state land area (not shown) reveal that EGs per unit area increase eastward from minima over the West Coast and Great Basin to maxima over the Southeast, mid-Atlantic, and southern New England. South Carolina again stands out relative to other Southeastern states, with 73 EGs per 1000 km2, whereas Maryland (much smaller in size but with 42% of South Carolina’s EG count) has 94 EGs per 1000 km2, highest among states. Oklahoma had the greatest number of MGs per 1000 km2 with 7.5, again strongly indicating the mesonet influence. Otherwise, the mid-Atlantic and central plains had relatively maximized MG-density values, with minima across California and the Great Basin.

Data also were sorted according to the operationally customary 65-kt (33.4 m s−1) “significant–severe” threshold (Hales 1988). Nationwide, 6.3% of EGs were significant, compared to 8.8% of MGs. When mapped, the proportion of significant EGs is greatest across the plains and Rocky Mountain regions (Fig. 8), except for a low-sample-size anomaly in DC. In contrast, the proportion of MGs that are significant exhibits little regional change (not shown), except for 20% and 18% anomalies over Oregon and Utah, respectively, and another low-sample-size, high-percentage anomaly in DC.

Fig. 8.

Map by state of percent of EGs ≥ 65 kt (33.4 m s−1), 2006–15. Underlined values come from sample sizes < 10.


c. Population influence

Population influences on the EG and MG data, using 2010 census figures, are spatially apparent in mapping, as in the metropolitan geospatial clustering of Fig. 3 and as discussed above. When normalizing the data by state populations (Fig. 9), one characteristic stands out most prominently: the relatively high number of both MGs and EGs per 100 000 people in the central CONUS (Great Plains to Mississippi River) and Rocky Mountain states, which are relatively sparsely populated compared to areas to their east and to California. The Southeast, largely because of its high absolute volume of EGs (e.g., Fig. 3b), also has high EG tallies per capita, similar to the central CONUS and Rockies. California, with its CONUS-leading state population and overall dearth of both MGs and EGs (Fig. 3), has the lowest MG and EG total per capita (Fig. 9b).
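
A sketch of the per-capita normalization behind Fig. 9 follows; the census file name and its columns are hypothetical, and the gust DataFrame is the one from the earlier sketches.

```python
import pandas as pd

# Hypothetical 2010 census table keyed by two-letter state code.
pop = pd.read_csv("census_2010_state_population.csv", index_col="st")["population"]

is_eg = ~wind["measured"]
mg_per_100k = wind[~is_eg].groupby("st").size() / pop * 1e5
eg_per_100k = wind[is_eg].groupby("st").size() / pop * 1e5
```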

Fig. 9.

State-by-state map per 100 000 people of (a) MGs in blue and (b) EGs in red.


d. MG and EG sourcing analyses

To ascertain magnitude and geographic tendencies in MGs and EGs by the sources of the reports, severe-wind entries from Storm Data were parsed in bulk for the sourcing information. Since source titles changed within the 10-yr time frame of sampling (e.g., general public to public), and since other source bins were similar enough to combine subjectively with confidence (e.g., airplane pilot and trained spotter), equivalency assumptions were made between old and new source titles and in matching such closely similar categories. See the appendix for details on the filtering procedure we employed to distill the sources to the consistent sets listed in Tables 1 and 2, which offer the percentages of individual sources for years in which each source appeared.
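
The title consolidation can be expressed as a simple alias table. The sketch below includes only the two equivalencies mentioned in the text (the full set used for Tables 1 and 2 is in the appendix) and assumes the parsed Storm Data source string has been merged into the working table as a hypothetical "source" column.

```python
# Illustrative only: two of the equivalency assumptions described above.
SOURCE_ALIASES = {
    "general public": "public",
    "airplane pilot": "trained spotter",
}

wind["source_clean"] = (wind["source"]       # "source" is an assumed column name
                        .str.strip()
                        .str.lower()
                        .replace(SOURCE_ALIASES))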

Table 1.

Yearly sources of MGs and percentages of contribution (%) to the total number of reports. Sources of <0.25% are combined into the category “Other minor sources,” and include social media, unknown, government official, Severe Hazards Analysis and Verification Experiment (SHAVE) project (Ortega et al. 2009), parks/forest service, county officials, utility companies, 911 call centers, Coast Guard, state officials, post office, and buoy data. Note that in 2006 individual categories were not available for the ASOS, AWOS, and mesonet records, so these are listed together. Official measurements include automated or reliably measured observation stations recorded as AWOS, ASOS, official NWS observations, COOP observers, mesonets, and buoy observations.

Table 2.

As in Table 1, but for EGs and these sources making up >0.25%: social media, county and state officials, utility company, and 911 call center.


Different sources influence numeric report distributions and magnitudes differently. Figure 10 shows the MG fractional breakdown (using a log-y axis) by source type, normalized to a total of 1 across the set. In Fig. 10a, the bulk reports are evident, as in Fig. 4. In the remaining panels, where the blue bars exceed the red (i.e., atop purple), the distribution is overly skewed toward that source at that value, and vice versa. Where they overlap, they follow similar distributions. For example, the highest values of measured gusts are entirely due to mesonets, but otherwise these and other measured sources are well distributed. Storm chasers tend to skew magnitudes, as do the public, law enforcement, NWS employees, and other (non-NWS) federal agencies, while trained spotters seem to have little influence. Public MGs, in particular, preferentially yield higher-end, significant–severe values, with instrumentation of undocumented but likely poorer calibration relative to standardized ASOS and mesonet equipment.

Fig. 10.

Normalized MG frequency by magnitude for each source, 2006–15. (a) All (red); and MG (red) overlaid in blue by (b) ASOS, (c) AWOS, (d) mesonet, (e) COOP observer, (f) trained spotters, (g) amateur radio, (h) public, (i) emergency manager, (j) law enforcement, (k) official NWS observations, (l) fire/rescue, (m) department of highways, (n) NWS employees, (o) other federal agencies, (p) broadcast media, (q) storm chasers, and (r) other minor sources for MGs as characterized in Table 1. Each source set is normalized by its own sample size to allow for comparison for skewness in the respective distributions and displayed with a log-y axis to facilitate interpretation of the logarithmic decrease in likelihood associated with increasing wind intensity.


Assuming that MGs occur only for the categories known to be associated with measurements (buoy, AWOS, ASOS, mesonet, cooperative observer, and official NWS observations), 63.2% of measurements could be identified reliably as coming from measuring devices over the 2006–15 period; for the last 3 yr of the record, that figure is closer to 67.8%. Exploring the EGs, 0.38% of instrument-measured gusts (i.e., mesonet, ASOS, and AWOS) were misclassified as estimated. For EGs (Fig. 11), the pronounced packing of speed-value counts from departments of highways toward lower-end severe levels resembles that of instrumental “estimate” sources (ASOS, AWOS, mesonets, and official NWS observations) that apparently were misclassified as EGs. For these reasons, and because of the lead author’s operational experience with using such data where available in near–real time, we suspect (though Storm Data entries do not contain enough metadata to say with certainty) that most highway-department “estimates” instead are measurements from instrumented sites.
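
Misfiled instrument "estimates" of the kind just described can be flagged mechanically. The sketch below uses the normalized source strings from the previous sketch (the exact source labels are assumptions) and simply marks EGs whose stated source is a measuring platform so they can be reclassified or excluded.

```python
# Assumed (lowercased) source labels; the real Storm Data strings may differ.
INSTRUMENT_SOURCES = {"asos", "awos", "mesonet", "buoy",
                      "coop observer", "official nws observations"}

is_eg = ~wind["measured"]
suspect_eg = is_eg & wind["source_clean"].isin(INSTRUMENT_SOURCES)
print(f"{suspect_eg.sum()} EGs ({suspect_eg.sum() / is_eg.sum():.2%} of EGs) "
      f"come from instrument sources and may be misclassified")
```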

Fig. 11.

As in Fig. 10, but for EGs. Source panels are as in Fig. 10, except for (o) newspapers, (p) broadcast media, (r) social media, (s) utility companies, (t) NWS storm survey, (u) county officials, (v) state officials, (w) 911 call centers, and (x) other minor sources for EGs as characterized in Table 2.


NWS storm surveys yield the least-pronounced decrease in counts for higher magnitudes. This is an apparent artifact of the tendency for surveys preferentially to target suspected tornadoes and the most severe wind damage, with peak-wind estimations assigned to damage indicators from surveys; see Trapp et al. (2006) for more discussion on unrepresentativeness issues and untrustworthiness of values arising from such practices. At least secondarily, the verification-related requirement of Storm Data report entry for significant (65 kt or 33 m s−1; Hales 1988) wind values may contribute to this high-value survey bias as well. Within all MGs, Fig. 12 subsamples the collection of sources typically associated with reliable readings: “official” measurements including ASOS, AWOS, mesonets, cooperative observers, and buoys. While generally following a logarithmically linear decrease in values of higher sample size, the trend becomes more erratic in small sample sizes, particularly with values ≥80 kt (92 mph, 41 m s−1).

Fig. 12.

Distribution of MGs in comparison with a subsampled set composed of sources known to be reliably measured (ASOS, AWOS, mesonet, buoys, official NWS observations, and COOP observers). Purple represents the overlap between reliable (blue) and all (red) MGs on each column count.


Sourcing information for MGs (Fig. 13) and EGs (Fig. 14) reveals well-defined regional- to local-scale geographic concentrations and discontinuities for several types of sources. EG sourcing in particular exhibits pronounced relative maxima and minima that appear tied to NWS CWAs (e.g., overall density and fractions of total reports from emergency managers, storm spotters, newspapers, public, law enforcement, state and county officials, and amateur radio). Sharp discontinuities in the fractional distributions of many EG sources appear along many CWA boundaries. In absolute numbers, but not density (not shown), a strong regional concentration in EG reports from 911 call centers is apparent in a long swath of the eastern states from the Florida Panhandle to eastern New York. Usage of damage surveys to estimate convective gusts is proportionally high over northeastern Kansas (Fig. 14) and has been a relatively common practice in absolute numbers (not shown) across parts of Arkansas, western Kentucky, and southern Illinois, and the Red River of the North region in the Dakotas and Minnesota. Storm chasers offer <5% of EGs everywhere—even over the Great Plains and Midwest where that avocation is most common. Highway departments provided EGs in relatively dense proportions over parts of Florida, the Northeast, and the central plains. The local proportion of EGs from spotters and county and state officials is highly patchwork in nature across the CONUS. Some notable gaps or absences in sourcing appear as well—for example, law enforcement in central and northern Georgia, southwestern Alabama, and most of Massachusetts. Proportionally, mesonet-based EGs (presumably erroneous since mesonets are instruments) appear in northern New York and northwestern New England, as well as parts of Oklahoma, Missouri, Iowa, and Illinois. The absolute distribution of misclassified “estimated” reports from sources composed of measuring instruments (not shown) reveals no clear geographic biases, except for a concentration of mesonet sources in central Illinois.

Fig. 13.

Choropleth of MGs by CWA: (a) all sources as a density of reports per 10 000 mi2 (25 900 km2) within each CWA, colored as in the left legend at bottom; (b)–(r) fractional contribution to the total number of reports for each respective source identified with sample size n for each, colored as in the right legend at bottom. “Other minor sources” are characterized in Table 1.


Fig. 14.

As in Fig. 13, but for EGs, with sources as labeled. Other minor sources are characterized in Table 2.


As for MGs, the lower sample size and generally fewer, more diffuse gradients in sourcing are apparent in absolute numbers (not shown); however, as with some EG sourcing, strong CWA-boundary discontinuities and patchwork appearances exist in their local proportionalities (Fig. 13). The expected exception in numbers was with mesonets, with maxima in Oklahoma, west Texas, Kentucky, northern Utah, and central Iowa. However, the fraction of mesonet-based MGs does not stand out as much over those areas, because of relatively large report counts from all other sources. Still, a few other pronounced relative sourcing concentrations do appear, such as amateur radio in the northern Rockies and central Kentucky. In absolute numbers (not shown), storm spotters are maximized near urban centers such as Dallas–Fort Worth, Saint Louis, and Chicago, as well as over both urban and rural areas of northeastern Colorado, while media MG reports concentrate heavily in the Texas Panhandle. A remarkable maximum in spatial density of MGs appears over northeastern Kansas with no apparent reason, given that conventional observations (ASOS, AWOS) are not relatively densely packed there.

e. Gust modulation for wind-tunnel results

As noted in section 1, A12 test subjects estimated winds at ≈1.25 times actual speeds; in other words, perceived winds can be multiplied by ≈0.8 to obtain adjusted “actual” winds. We use a rounded 1.25 perceived/actual (P/A) ratio for all estimates, since the rounded P/A variation in A12 is within ±0.01 of that for all MG or EG winds at severe levels. If applied to all EG data, including the unknown number of post facto estimates, all values <73 mph (32.6 m s−1), or 126 474 data points (93% of all EGs), fall below severe limits. This procedure also reduces 7981 (93% of) significant EGs [i.e., all EGs 65–80 kt (75–92 mph, 33–41 m s−1)] to below significant–severe criteria. Put another way, normalizing all the EG data based on human-testing results leaves only 7% of the EGs as severe, and 7% of significant EGs as significant–severe. Interpolating 95% confidence intervals that A12 calculated at their fixed 50- and 60-mph (43 and 52 kt, 22.4 and 26.8 m s−1) experiment values yields an uncertainty of ±5 mph of perceived wind speed at the 73-mph “actual” severe-EG threshold. That uncertainty level would remove up to 1099 reports (0.8% of all EGs) or add up to 3201 (2.4% of all EGs, including the spike of 70-mph estimates) to the “actual” severe-EG data.
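
A sketch of this adjustment, applied to the gust DataFrame from the earlier sketches (magnitudes in knots), is shown below; the 1.25 factor and the severe and significant thresholds are those given in the text.

```python
P_OVER_A = 1.25                    # rounded A12 perceived/actual ratio
SEVERE_KT, SIG_KT = 50, 65

is_eg = ~wind["measured"]
adjusted_kt = wind.loc[is_eg, "mag"] / P_OVER_A     # "actual" equivalents of the EGs

frac_still_severe = (adjusted_kt >= SEVERE_KT).mean()
orig_sig = wind.loc[is_eg, "mag"] >= SIG_KT
frac_sig_still_sig = (adjusted_kt[orig_sig] >= SIG_KT).mean()

print(f"{frac_still_severe:.0%} of EGs remain severe after adjustment; "
      f"{frac_sig_still_sig:.0%} of significant EGs remain significant")
```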

Figure 15 visually illustrates the gap between measured and adjusted values with increasing speeds using the A12 P/A ratio, showing adjusted benchmarks for severe and significant–severe gusts. Values are given in kt because of Storm Data unit conventions, per Fig. 1 (i.e., when logged, miles-per-hour values are converted to rounded knots in the dataset). Marginal–severe EGs of 50 kt (58 mph, 25.7 m s−1) reduce to 40 kt (46 mph, 20.6 m s−1).

Fig. 15.

Estimated (blue) and vertically corresponding adjusted (red) data points spanning EGs of 50–113 kt (58–130 mph, 26–58 m s−1). Dots represent datapoint values and not the sample size at each. EG values with null points (e.g., no records exist of 97–99-kt EGs) have clear centers. The x axis has labels for severe and significant–severe adjusted values (kt). The gray area represents the spread between original and adjusted EGs. Analogous to Fig. 2, the horizontal lines (y-axis values) can be considered as “perceived” and the vertical lines (x-axis values) as “actual.”


Finally, we grouped all severe MGs with filtered bulk EGs (using the A12 P/A ratio) to create a bias-corrected, 10-yr severe-wind climatology. The density of resulting wind events per CWA (Fig. 16a) is influenced by both the raw distribution (shown as a yearly average in Fig. 16b) and CWA land area. The smoothed national distribution of the bias-corrected climatology (Fig. 16c) resembles that of the MG-only wind distributions in our Fig. 3a and in Fig. 1 of Smith et al. (2013). A strong severe-wind preference exists for the southern and central Great Plains, connected to a somewhat less-dense (but still relatively maximized) swath from the northern plains across much of the Midwest to the Ohio Valley. To the extent bias-corrected EGs represent “real” severe winds, they conform well to climatological distributions of severe MGs.
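
A rough sketch of such a gridded, bias-corrected climatology follows. The report-coordinate columns ("slat", "slon") are assumed names, and the latitude/longitude bin widths are only a coarse stand-in for the paper's equal-area 80 × 80 km2 grid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

is_eg = ~wind["measured"]
# Keep severe MGs plus EGs at or above 73 mph (~63 kt).
keep = (~is_eg & (wind["mag"] >= 50)) | (is_eg & (wind["mag"] >= 73 * 0.868976))

# Roughly 80-km bins in latitude and longitude (approximation only).
lat_edges = np.arange(24.0, 50.0 + 0.72, 0.72)
lon_edges = np.arange(-125.0, -66.0 + 0.8, 0.8)

counts, _, _ = np.histogram2d(wind.loc[keep, "slat"], wind.loc[keep, "slon"],
                              bins=[lat_edges, lon_edges])
mean_annual = counts / 10.0                          # 2006-15 -> per year
smoothed = gaussian_filter(mean_annual, sigma=1.5)   # 1.5-sigma Gaussian kernel
```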

Fig. 16.

(a) Total gust density delineated by CWA area of the subset of all MGs ≥ 50 kt (26 m s−1) and EGs ≥ 73 mph (2006–15). (b) Mean annual density of gusts for the subset as in (a), but per 80 × 80 km2 grid box. (c) As in (b), but using a 1.5-sigma-bandwidth Gaussian kernel smoother.


4. Summary and discussion

The convective severe-wind data are deeply suffused with artifacts that evade physical justification, as addressed in section 3a. Aside from the human-bias modulations to speeds themselves, discussed below, one method of bulk control would be to reduce the counts of winds ending in 0 or 5 (in knots or miles per hour). This can be accomplished most straightforwardly in either of two ways: 1) interpolate linearly between adjacent integers that are not affected by the secularity, or 2) apply a best-fit curve to values not ending in 0 or 5 and adjust to the resulting interpolated count.
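
A sketch of option 1, operating on a tally of report counts indexed by integer wind speed in knots, is shown below; option 2 could instead fit a curve (e.g., a log-linear fit) through the unaffected speeds and substitute the fitted counts.

```python
import numpy as np
import pandas as pd

def deflate_spikes(counts: pd.Series) -> pd.Series:
    """Replace counts at 0/5-ending speeds (in kt or back-converted mph) with
    values interpolated linearly from unaffected neighboring speeds.
    `counts` is indexed by integer wind speed (kt), sorted ascending."""
    speeds = counts.index.values
    mph = np.rint(speeds * 1.15078).astype(int)
    spiky = (speeds % 5 == 0) | (mph % 5 == 0)

    clean = counts.astype(float).copy()
    clean[spiky] = np.nan
    return clean.interpolate(method="linear", limit_direction="both")
```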

Wind-tunnel tests by A12 offered a 1.25 P/A ratio that we judged to be the most representative reduction factor from in situ EGs to MG-equivalent values; however, individuals’ estimation skill should vary considerably, as also indicated by A12. Results herein should be used for evaluating bulk data, at least to the extent in situ EGs are specified, and not for “repairing” any given single estimate. The larger (≈1.33) overestimation factor of M16 also may be valid in some convective scenarios, regardless. Our use of the ≈1.25 P/A ratio therefore should be considered conservative in evaluating a large-sample grouping of estimated convective gusts where human observers (as opposed to delayed, remote-damage-based estimates) were used. As such, we suggest that, for purposes of bulk research on the severe-wind data, in situ estimated values < 73 mph (32.6 m s−1) can be disregarded to the extent they are known, and all such estimates multiplied by 0.8 for “apples-to-apples” comparison with MGs. As noted above, this means only ≈7% of all in situ estimates would be retained as strictly severe winds with the reduction factor applied.

Furthermore, since 73-mph values represent 0.07% of the whole EG dataset used herein, effectively all in situ convective-wind estimates less than hurricane force (74 mph, 64 kt, 33 m s−1; Simpson 1974) may be considered subsevere. This coincidentally harks back to the pre-1970s SELS notion of gusts below significant thresholds as nonsevere, implied by the old public-watch criteria discussed in section 1. The degree to which the 93% of EGs adjusted below current 50-kt severe limits still represents damaging and hazardous conditions worthy of warning is unknown. Given these issues, to what extent, if at all, should the numeric threshold for “severe” convective wind be changed? This may be as much an engineering and perceptive social-science issue as a meteorological one. Overall, the EG records may tell no more than that subjectively intense winds occurred, but how intense cannot be determined accurately without calibrated sensors.

As presently formatted, the relative Storm Data population of in situ and post facto (damage-derived) estimates is not known precisely. Gust reports derived from damage may not carry remarks, thereby appearing similar to in situ estimates. Those mentioning damage in remarks do not necessarily elaborate on whether the gust was determined in situ or post facto. Such determinants should be labeled explicitly, so that every entry specifies whether the EG is taken directly from a human observer or inferred from a damage report. Alternatively, given the arbitrary nature of ex post facto assignments, only measurements and in situ estimates could be assigned numeric wind values—not damage reports.

In a similar vein, subsampling the measured gusts to those sources we know to be reliably measured appears to greatly reduce the influence of the “zero and five” peaks and yields something closer to a true logarithmic distribution, at some cost of sample size. As such, depending on the application, we recommend stratifying between estimations and measurements and considering preferential use of station-only data (e.g., Smith et al. 2013, plus mesonets) for MG research to ensure the highest data fidelity.

Analysis of raw, real-time “rough log” data from local storm reports (LSRs) was beyond the scope of this Storm Data–based study. Still, to assess their veracity for near-real-time, subjective verification, tabulation and mapping of bias-corrected estimated reports could be done daily on report-log data in parallel with tabulation and display of the original raw reports. This would illustrate both numeric and geographic shifts imparted by A12-based reduction of human-estimated values to those that most probably were severe in reality. Where measured and estimated gusts exist in very close proximity, the former should be used, though answering the question of precisely how close is beyond both the scope of this study and the capability of the existing wind data.

One avenue of further investigation should be the comparison of proximal MGs and EGs in a convective setting, analogous to M16’s nonconvective work. Anecdotal evidence and operational experience suggest similar overestimation factors in convective and nonconvective events, consistent with the A12 testing used in ours. One recent example, among numerous possibilities nationwide since 2006, was from a forward-propagating convective complex straddling the border of North and South Dakota on 10 August 2016. The Mobridge, South Dakota, automated station at 0516 UTC measured a peak gust of 54 mph (47 kt, 24 m s−1). According to a local NWS storm report, a “trained spotter” in Mobridge estimated 65-mph (56.5 kt, 29 m s−1) thunderstorm winds at 0520 UTC (Fig. 17). Rather consistent with our bulk findings, this event contained an apparent overestimate to a digit ending in 5, at a factor of 1.2, along with a time estimate rounded to a digit ending in zero. The same thunderstorm complex did produce measured severe gusts elsewhere along its track at other times. This situation also is similar to some Bow Echo and Mesoscale Convective Vortex Experiment (BAMEX) results noted by Trapp et al. (2006).

Fig. 17.

Sub-severe raw METAR observation (kt, medium gray shade) and severe spotter estimate in an LSR (light gray shade, mph) for convective gusts at Mobridge, 10 Aug 2016. Peak speeds are underlined in red.


The A12 adjustment factor will not ameliorate the aforementioned discontinuities (“spikes”) in the data inherent to observers’ customary use of EG integers that end in 0 or 5 (in knots or miles per hour). In nations not using English units, relative frequency maxima analogously should appear where estimated metric values (whether in meters per second or kilometers per hour) end in 0 or 5. Such determinations could be done, customized to units used in areas such as parts of Europe, where relatively robust datasets recently have been accumulated (e.g., the European Severe Weather Database; Dotzek et al. 2009; Groenemeijer et al. 2017). Bias correcting the data for any such metric secularities could allow more “apples-to-apples” comparison and interchangeable usage of convective wind speed data worldwide.

Results herein reveal that human estimators (generally spotters, storm chasers, and public sources) preferentially offer convective wind estimates based on either minimal thresholds to verify as severe (50 kt, 26 m s−1), or digits ending in 0 and 5 when using either miles per hour or knots. The difference in EG versus MG report counts, at these most-commonly estimated values, is consistently close to one order of magnitude for speeds up to about 90 kt (46 m s−1; Fig. 4); thereafter, sample size becomes very small. This raises issues with implications for mapping and weighting of bulk convective wind data, as well as for forecast verification. For example, even if not reduced in magnitude per the discussion above, should EG (MG) counts at those specific speeds be reduced (increased) by a factor of 10 to compensate for the secular count differences, and minimize potential biases to the spatial climatology, especially in areas of high (low) population or high (low) density of structures and trees to damage? Also assuming no magnitude reduction, we ask the following questions:

  • Should forecast-verification metrics (especially for outlooks and watches, but perhaps even warnings) be weighted heavily toward measured-wind reports? If so, in what way and how should observationally data-sparse areas be verified?

  • Should planar or gridded EG coverage be detrended by the corresponding density of MGs in the same regions or states in order to normalize them for the sake of verifying outlooks and watches? For example, in a given geographic area where the ratio of EGs to MGs (Fig. 7) is ≈20:1, one may require 20 EGs per grid unit to count the same as 1 MG for verifying a forecast (a minimal sketch of such weighting follows this list). This would require new areal forecast-verification methods that are more sophisticated, but likely more physically representative, than the one-size-fits-all nationwide metrics currently employed.

  • Alternatively, given the overestimation findings referenced above, should EGs weaker than hurricane force be discarded for verification purposes (as well as for research use)? Perhaps EGs should be set aside altogether for climatological and hazard-mapping applications. These and other issues are likely to arise from objective, scientifically based consideration of the disparity between EGs and MGs, in terms of both magnitude and report counts.
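To make the ratio-based weighting idea above concrete, a hedged sketch follows; the grid-unit structure, counts, and ratios are invented for illustration and do not represent an operational verification scheme.

```python
def mg_equivalent_reports(eg_count, mg_count, eg_to_mg_ratio):
    """Convert raw report counts in one grid unit to MG-equivalent reports.

    Each EG is down-weighted by the regional EG:MG ratio, so that, e.g.,
    20 EGs count the same as 1 MG where the ratio is 20:1 (cf. Fig. 7).
    """
    return mg_count + eg_count / eg_to_mg_ratio

# Illustrative grid units (all numbers are made up for demonstration)
grid_units = [
    {"name": "A", "eg": 40, "mg": 1, "ratio": 20.0},  # EG-dominated area
    {"name": "B", "eg": 5,  "mg": 4, "ratio": 5.0},   # better-instrumented area
]
for unit in grid_units:
    weighted = mg_equivalent_reports(unit["eg"], unit["mg"], unit["ratio"])
    print(f"Grid unit {unit['name']}: {weighted:.1f} MG-equivalent reports")
```

The two units end up with MG-equivalent counts of similar magnitude despite very different raw EG totals, which is the intent of normalizing EG coverage by regional reliance on estimates.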

Differences in the distribution of MG and EG sourcing across gust speeds were readily apparent in some cases, such as when comparing public MG reports (Fig. 10h) with more reliably calibrated instrumental sources such as ASOS, AWOS, and mesonets (Figs. 10b–d). As noted before, Storm Data metadata on anemometer calibration do not exist, but one inference is that public MGs come from poorer-quality instruments; another possible contributor is misclassification of some public EGs as MGs. In the operational setting, where all LSRs issued nationwide are displayed on a monitor as they are received, the lead author has noticed and coordinated corrections to obviously misclassified LSRs involving MG/EG transpositions; an unknown number likely slip through from the preliminary to the final logging process. EGs sourced from measuring instruments in Storm Data (e.g., Figs. 11b–d, 14b–d) are straightforward instances of misclassification and the most preventable. For research and reproducibility purposes, such as climatological evaluations, data known (or strongly suspected) to be misclassified should either be reclassified or be removed and flagged as erroneous.
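As one illustration of the last point, the sketch below flags rows whose source is an instrumented network but whose gust type is recorded as estimated, so that climatological work can reclassify or exclude them. The column names and the instrument-source list are assumptions about a generic tabular export, not NCEI’s actual schema.

```python
import pandas as pd

# Sources that, per the discussion above, should in principle yield measured gusts.
INSTRUMENT_SOURCES = {"ASOS", "AWOS", "MESONET", "BUOY",
                      "OFFICIAL NWS OBSERVATIONS", "COOP OBSERVER"}

def flag_suspect_egs(df, source_col="SOURCE", type_col="GUST_TYPE"):
    """Mark estimated gusts attributed to instrumented sources as suspect."""
    out = df.copy()
    out["suspect_eg"] = (out[type_col].str.upper().eq("EG")
                         & out[source_col].str.upper().isin(INSTRUMENT_SOURCES))
    return out

# Example with hypothetical rows
reports = pd.DataFrame({"SOURCE": ["ASOS", "TRAINED SPOTTER", "MESONET"],
                        "GUST_TYPE": ["EG", "EG", "MG"]})
print(flag_suspect_egs(reports))
```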

Preferential regional- to CWA-scale reliance on different sources of both EG and MG data was readily apparent (section 3d; Figs. 13 and 14), often with sharp discontinuities across borders. In this way, severe-wind data can be used to show that some jurisdictions, for example, rely relatively strongly on county or local officials to make reports, whereas others preferentially collect such information from 911 call centers, newspapers, amateur radio, or law enforcement. We can only speculate on the reasons for such highly variable sourcing preferences, or for sharp discontinuities in them across political or jurisdictional boundaries unrelated to atmospheric processes. Regardless, the clear message is that secularities in the storm-report data arise not only from the quality of the data (e.g., the presence of “zero and five” spikes) and the tendency for human overestimation, but also from whom the data are obtained. The documented inconsistencies in data-gathering practices for wind reports across the country could justify a more uniform, standardized policy across jurisdictions for collecting and accepting reports. Other related possibilities include the introduction of explicit digital sourcing metadata to Storm Data, enabling more analysis-friendly weighting of convective wind speeds not only by measured versus estimated (as in this article) but also by reliability of source, for research and climate-assessment uses. Our work shows that not all wind reports of the same value are created equal, nor should they be treated alike for research and forecast-verification purposes.

On the individual human level, possibilities exist for improvement in wind-gust reporting. An innovative, experiential approach to spotter training could involve personal calibration against severe winds in a chamber, where one is available, to improve estimates through immersion. In the meantime, spotters should be encouraged to measure rather than estimate winds using calibrated, scientific-grade anemometers. These should be sited outside the lee of obstacles, placed at the standard 10-m height at fixed sites, and, on mobile or portable platforms, situated above vehicular slipstreams. In the data, trained-spotter measurements appear to be mostly well distributed, as expected, in contrast to those from some other sources (e.g., storm chasers).

Flawed and inconsistent as we have found them to be, the U.S. convective-wind data still represent a large part of the most comprehensive severe-weather-event database in existence. The data are used initially for event documentation and warning verification at the local level, as well as for assessment of national-scale to mesoscale forecasts from SPC. These purposes, along with meteorological research, hazard mapping, climatological tracking, and preparedness efforts, compel a greater understanding of data quality. This includes the identification and mapping of nonmeteorological artifacts and secular influences, which became the main focus of this project. Additional ways to filter or normalize EGs may be developed, along with investigations into the causes of, and fixes for, MG artifacts. We hope this work guides improvements in both the temporal and spatial consistency of Storm Data gathering, sourcing, and verification practices, the primary aim being a more scientifically robust and useful future dataset for evaluating the convective-wind hazard in the United States.

Acknowledgments

We appreciate helpful insights from and discussions with Chuck Doswell, Patrick Marsh, David Prevatt, Bryan Smith, and Steve Weiss. Smith plotted the EG and MG maps used for Fig. 3, and Marsh created most of Fig. 4. The SPC Science Support Branch provided the software and maintained hardware to enable this work. Israel Jirak (SPC) provided beneficial manuscript review and helpful suggestions for data interpretation. Weiss offered historical insights and a review of an early draft of this paper. Much of the material herein previously appeared in an informal conference preprint (Edwards and Carbin 2016). The formal reviewers’ insights helped to mold this work into publishable form and are greatly appreciated.

APPENDIX

NCEI Storm Data Filtering Procedure for MG and EG Sourcing

For reproducibility, this appendix describes in more detail the source-filtering process employed herein. Sorting and condensing of categories were performed similarly to the hail-data methods of Allen and Tippett (2015). Using the original (raw) NCEI Storm Data entries, we first filtered for “thunderstorm wind” reports only and then examined the “source” column. All source titles are uppercase.

Initial analysis revealed the following raw, independent data-source titles for all wind reports, before any consolidation of categories with variations of similar names. The full list is NWS EMPLOYEE, SOCIAL MEDIA, LOCAL OFFICIAL, RAWS, TRIBAL OFFICIAL, UNKNOWN, UTILITY COMPANY, LAW ENFORCEMENT, AWOS, SHAVE PROJECT, STORM CHASER, PARK/FOREST SERVICE, DEPT OF HIGHWAYS, BROADCAST MEDIA, EMERGENCY MANAGER, AWSS, NWS STORM SURVEY, COUNTY OFFICIAL, TRAINED SPOTTER, RIVER/STREAM GAGE, ASOS, FIRE DEPARTMENT/RESCUE, COOP STATION, NWS EMPLOYEE (OFF DUTY), DROUGHT MONITOR, GOVT OFFICIAL, FIRE DEPT/RESCUE SQUAD, C-MAN STATION, OTHER FEDERAL AGENCY, AMATEUR RADIO, INSURANCE COMPANY, AIRPLANE PILOT, COASTAL OBSERVING STATION, 911 CALL CENTER, PUBLIC, NEWSPAPER, OFFICIAL NWS OBSERVATIONS, COAST GUARD, STATE OFFICIAL, COOP OBSERVER, COCORAHS, MESONET, DEPARTMENT OF HIGHWAYS, POST OFFICE, WLON, MARINER, “AWOS, ASOS, MESONET, ETC.,” BUOY, OFFICIAL NWS OBS., GENERAL PUBLIC, and METEOROLOGIST (NON NWS).

Restricting to severe wind made no change to the raw categories. Restricting further to EGs reduced the number of categories by 1, and restricting to MGs reduced the number by 3 from the original. To account for changes in source names over time (mostly in 2006, when category titles were revised), similar categories were blended as follows: NWS EMPLOYEE (OFF DUTY) → NWS EMPLOYEE, GENERAL PUBLIC → PUBLIC, DEPT OF HIGHWAYS → DEPARTMENT OF HIGHWAYS, OFFICIAL NWS OBS. → OFFICIAL NWS OBSERVATIONS, COOP STATION → COOP OBSERVER, METEOROLOGIST (NON NWS) → TRAINED SPOTTER, COASTAL OBSERVING STATION → “AWOS, ASOS, MESONET, ETC.” (2006 only), FIRE DEPT/RESCUE SQUAD → FIRE DEPARTMENT/RESCUE, AWSS → AWOS, C-MAN STATION → BUOY, AIRPLANE PILOT → TRAINED SPOTTER, RAWS → AWOS, MARINER → TRAINED SPOTTER, and COCORAHS → TRAINED SPOTTER. This left 36 sources for severe winds, 35 for EGs, and 33 for MGs.
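For reproducibility, a minimal sketch of this consolidation as a lookup table is shown below; the column name is an assumption about a generic tabular export of Storm Data, and the mapping is transcribed from the list above.

```python
import pandas as pd

# Blending of near-duplicate Storm Data source titles (transcribed from the list
# above); titles not listed are kept unchanged. The COASTAL OBSERVING STATION
# blend applies to 2006 only and would need a year condition in practice.
SOURCE_BLEND = {
    "NWS EMPLOYEE (OFF DUTY)": "NWS EMPLOYEE",
    "GENERAL PUBLIC": "PUBLIC",
    "DEPT OF HIGHWAYS": "DEPARTMENT OF HIGHWAYS",
    "OFFICIAL NWS OBS.": "OFFICIAL NWS OBSERVATIONS",
    "COOP STATION": "COOP OBSERVER",
    "METEOROLOGIST (NON NWS)": "TRAINED SPOTTER",
    "FIRE DEPT/RESCUE SQUAD": "FIRE DEPARTMENT/RESCUE",
    "AWSS": "AWOS",
    "C-MAN STATION": "BUOY",
    "AIRPLANE PILOT": "TRAINED SPOTTER",
    "RAWS": "AWOS",
    "MARINER": "TRAINED SPOTTER",
    "COCORAHS": "TRAINED SPOTTER",
}

def consolidate_sources(df, source_col="SOURCE"):
    """Return a copy of the report table with blended source categories."""
    out = df.copy()
    out[source_col] = out[source_col].replace(SOURCE_BLEND)
    return out
```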

REFERENCES

  • Abbe, C., 1914: The Beaufort wind scale. Mon. Wea. Rev., 42, 231–232, https://doi.org/10.1175/1520-0493(1914)42<231:TBWS>2.0.CO;2.

  • Agdas, D., G. D. Webster, and F. J. Masters, 2012: Wind speed perception and risk. PLOS ONE, 7, e49944, https://doi.org/10.1371/journal.pone.0049944.

  • Allen, J. T., and M. K. Tippett, 2015: The characteristics of United States hail reports: 1955–2014. Electron. J. Severe Storms Meteor., 10, 1–31.

  • Amburn, S. A., and P. L. Wolf, 1997: VIL density as a hail indicator. Wea. Forecasting, 12, 473–478, https://doi.org/10.1175/1520-0434(1997)012<0473:VDAAHI>2.0.CO;2.

  • Blair, S. F., and Coauthors, 2017: High-resolution hail observations: Implications for NWS warning operations. Wea. Forecasting, 32, 1101–1119, https://doi.org/10.1175/WAF-D-16-0203.1.

  • Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma Mesonet: A technical overview. J. Atmos. Oceanic Technol., 12, 5–19, https://doi.org/10.1175/1520-0426(1995)012<0005:TOMATO>2.0.CO;2.

  • Brooks, H. E., and N. Dotzek, 2008: The spatial distribution of severe convective storms and an analysis of their secular changes. Climate Extremes and Society, H. F. Diaz and R. Murnane, Eds., Cambridge University Press, 35–53.

  • Curtis, R. H., 1897: An attempt to determine the velocity equivalents of wind-forces estimated by Beaufort’s scale. Quart. J. Roy. Meteor. Soc., 23, 24–61, https://doi.org/10.1002/qj.49702310104.

  • Doswell, C. A., III, 1985: Storm scale analysis. The operational meteorology of convective weather, NOAA Tech. Memo. ERL ESG-15, Vol. II, 240 pp.

  • Doswell, C. A., III, H. E. Brooks, and M. P. Kay, 2005: Climatological estimates of daily local nontornadic severe thunderstorm probability for the United States. Wea. Forecasting, 20, 577–595, https://doi.org/10.1175/WAF866.1.

  • Dotzek, N., P. Groenemeijer, B. Feuerstein, and A. M. Holzer, 2009: Overview of ESSL’s severe convective storms research using the European Severe Weather Database ESWD. Atmos. Res., 93, 575–586, https://doi.org/10.1016/j.atmosres.2008.10.020.

  • Edwards, R., and G. W. Carbin, 2016: Estimated convective winds: Reliability and effects on severe-storm climatology. 28th Conf. on Severe Local Storms, Portland, OR, Amer. Meteor. Soc., 14B.6, https://ams.confex.com/ams/28SLS/webprogram/Manuscript/Paper300279/sls-estd.pdf.

  • Edwards, R., J. G. LaDue, J. T. Ferree, K. Scharfenberg, C. Maier, and W. L. Coulbourne, 2013: Tornado intensity estimation: Past, present, and future. Bull. Amer. Meteor. Soc., 94, 641–653, https://doi.org/10.1175/BAMS-D-11-00006.1.

  • Galway, J. G., 1989: The evolution of severe thunderstorm criteria within the Weather Service. Wea. Forecasting, 4, 585–592, https://doi.org/10.1175/1520-0434(1989)004<0585:TEOSTC>2.0.CO;2.

  • Groenemeijer, P., and Coauthors, 2017: Severe convective storms in Europe: Ten years of research at the European Severe Storms Laboratory. Bull. Amer. Meteor. Soc., 98, 2641–2651, https://doi.org/10.1175/BAMS-D-16-0067.1.

  • Hales, J. E., Jr., 1988: Improving the watch/warning program through use of significant event data. Preprints, 15th Conf. on Severe Local Storms, Baltimore, MD, Amer. Meteor. Soc., 165–168.

  • Jirak, I. L., C. J. Melick, and S. J. Weiss, 2014: Combining probabilistic ensemble information from the environment with simulated storm attributes to generate calibrated probabilities of severe weather hazards. 27th Conf. on Severe Local Storms, Madison, WI, Amer. Meteor. Soc., 2.5, https://ams.confex.com/ams/27SLS/webprogram/Manuscript/Paper254649/SLS2014_Cal_Hazards_exabs_Final.pdf.

  • Kelly, D. L., J. T. Schaefer, and C. A. Doswell III, 1985: Climatology of nontornadic severe thunderstorm events in the United States. Mon. Wea. Rev., 113, 1997–2014, https://doi.org/10.1175/1520-0493(1985)113<1997:CONSTE>2.0.CO;2.

  • Menne, M. J., I. Durre, R. S. Vose, B. E. Gleason, and T. G. Houston, 2012: An overview of the Global Historical Climatology Network-daily database. J. Atmos. Oceanic Technol., 29, 897–910, https://doi.org/10.1175/JTECH-D-11-00103.1.

  • Miller, P. W., A. W. Black, C. A. Williams, and J. A. Knox, 2016a: Maximum wind gusts associated with human-reported nonconvective wind events and a comparison to current warning issuance criteria. Wea. Forecasting, 31, 451–465, https://doi.org/10.1175/WAF-D-15-0112.1.

  • Miller, P. W., A. W. Black, C. A. Williams, and J. A. Knox, 2016b: Quantitative assessment of human wind speed overestimation. J. Appl. Meteor. Climatol., 55, 1009–1020, https://doi.org/10.1175/JAMC-D-15-0259.1.

  • NOAA, 2007: National Weather Service instruction. NOAA Rep. 10-1605, 70–74, http://www.nws.noaa.gov/directives/sym/pd01016005curr.pdf.

  • Ortega, K. L., T. M. Smith, K. L. Manross, K. A. Scharfenberg, A. Witt, A. G. Kolodziej, and J. J. Gourley, 2009: The Severe Hazards Analysis and Verification Experiment. Bull. Amer. Meteor. Soc., 90, 1519–1530, https://doi.org/10.1175/2009BAMS2815.1.

  • Pluijms, J. P., R. Cañal-Bruland, W. M. B. Tiest, F. A. Mulder, and G. J. P. Savelsbergh, 2015: Expertise effects in cutaneous wind perception. Atten. Percept. Psychophys., 77, 2121–2133, https://doi.org/10.3758/s13414-015-0893-6.

  • Schaefer, J. T., and R. Edwards, 1999: The SPC tornado/severe thunderstorm database. Preprints, 11th Conf. on Applied Climatology, Dallas, TX, Amer. Meteor. Soc., 6.11, https://ams.confex.com/ams/older/99annual/abstracts/1360.htm.

  • Seaman, M. P., C. R. Kruse, and R. Graham, 2016: An assessment of environments supportive of discretely propagating mesoscale convective systems in the Great Basin. 28th Conf. on Severe Local Storms, Portland, OR, Amer. Meteor. Soc., 13A.3, https://ams.confex.com/ams/28SLS/webprogram/Handout/Paper299656/HailEnvironmentPoster_final.pdf.

  • Simpson, R. H., 1974: The hurricane disaster-potential scale. Weatherwise, 27, 169–186, https://doi.org/10.1080/00431672.1974.9931702.

  • Smith, B. T., T. E. Castellanos, A. C. Winters, C. M. Mead, A. R. Dean, and R. L. Thompson, 2013: Measured severe convective wind climatology and associated convective modes of thunderstorms in the contiguous United States, 2003–09. Wea. Forecasting, 28, 229–236, https://doi.org/10.1175/WAF-D-12-00096.1.

  • Tippett, M. K., J. T. Allen, V. A. Gensini, and H. E. Brooks, 2015: Climate and hazardous convective weather. Curr. Climate Change Rep., 1, 60–73, https://doi.org/10.1007/s40641-015-0006-6.

  • Trapp, R. J., D. M. Wheatley, N. T. Atkins, R. W. Przybylinski, and R. Wolf, 2006: Buyer beware: Some words of caution on the use of severe wind reports in postevent assessment and research. Wea. Forecasting, 21, 408–415, https://doi.org/10.1175/WAF925.1.

  • Varney, B. M., 1925: On the use of the Beaufort scale of wind by the United States Weather Bureau. Mon. Wea. Rev., 53, 119–120, https://doi.org/10.1175/1520-0493(1925)53<119:UOTBSO>2.0.CO;2.

  • Weiss, S. J., and M. D. Vescio, 1998: Severe local storm climatology 1955–1996: Analysis of reporting trends and implications for NWS operations. Preprints, 18th Conf. on Severe Local Storms, Minneapolis, MN, Amer. Meteor. Soc., 536–539.

  • Weiss, S. J., J. A. Hart, and P. R. Janish, 2002: An examination of severe thunderstorm wind report climatology: 1970–1999. Extended Abstracts, 21st Conf. on Severe Local Storms, San Antonio, TX, Amer. Meteor. Soc., 11B.2, https://ams.confex.com/ams/pdfpapers/47494.pdf.

  • Yang, X., J. Sun, and Y. Zheng, 2017: A 5-yr climatology of severe convective wind events over China. Wea. Forecasting, 32, 1289–1299, https://doi.org/10.1175/WAF-D-16-0101.1.
1 Some EGs likely were MGs and vice versa, given our sourcing results in section 3d and M16’s findings that ≈5000 nonconvective winds measured by automated stations were misclassified as estimates from 1996 to 2013. The true extent of any such erroneous transpositions in the convective dataset is unknown and cannot necessarily be modeled statistically from M16’s nonconvective assumptions, which cover a partly different time period.

  • Fig. 1.

    Screen captures of cropped portions of Storm Data pages for (a) 4 Jun 2005 in Iowa, exemplifying pre-specification-era gust format, and (b) 23 May 2011 in Arkansas. In (b), EG and MG signify estimated and measured gusts, respectively. Note that neither the source of the EG nor instrument information for the MG are given. Gust values (kt) are in parentheses.

  • Fig. 2.

    Human-perceived vs actual wind speeds in wind-chamber testing with point values at testing intervals. The horizontal and vertical scales are not equal (see axes). Short black vertical bars at each plotted point represent 95% confidence intervals at 10-mph (4.5 m s−1) intervals. The dark gray shading represents the difference domain between actual (red) and perceived (blue) severe wind. Adapted from A12.

  • Fig. 3.

    Geographic distribution of convective-gust reports, 2006–15: (a) measured (blue) and (b) estimated (red).

  • Fig. 4.

    Histogram of wind-report distribution. The ordinate is logarithmically scaled by event count, and the abscissa is linearly scaled by wind speed (kt). Measured reports are in blue, and estimated counts are in red, such that the stacked height of each combined red and blue pillar represents the total count for that speed value. Red numbers correspond to originally reported miles-per-hour values.

  • Fig. 5.

    Histogram of the 50–70-kt subset of MGs; the ordinate is scaled linearly. Blue numbers represent miles-per-hour speeds.

  • Fig. 6.

    Linear-scaled histograms of nonconvective report counts as follows: (a) human EGs from Storm Data and (b) MGs from the GHCN (Menne et al. 2012) U.S. daily station data. Bins including the NWS 58 mph (26 m s−1) nonconvective warning criterion, which matches that for severe convection, are labeled with the MGs in blue and EGs in red, as elsewhere herein. Adapted from Fig. 1 in Miller et al. (2016a).

  • Fig. 7.

    Map of EG/MG ratio by state, 2006–15. Red (blue) numbers correspond to ratios above (below) unity. Ratios round to unity over Colorado and New Mexico (reddish gray number).

  • Fig. 8.

    Map by state of percent of EGs ≥ 65 kt (33.4 m s−1), 2006–15. Underlined values come from sample sizes < 10.

  • Fig. 9.

    State-by-state map per 100 000 people of (a) MGs in blue and (b) EGs in red.

  • Fig. 10.

    Normalized MG frequency by magnitude for each source, 2006–15. (a) All (red); and MG (red) overlaid in blue by (b) ASOS, (c) AWOS, (d) mesonet, (e) COOP observer, (f) trained spotters, (g) amateur radio, (h) public, (i) emergency manager, (j) law enforcement, (k) official NWS observations, (l) fire/rescue, (m) department of highways, (n) NWS employees, (o) other federal agencies, (p) broadcast media, (q) storm chasers, and (r) other minor sources for MGs as characterized in Table 1. Each source set is normalized by its own sample size to allow for comparison for skewness in the respective distributions and displayed with a log-y axis to facilitate interpretation of the logarithmic decrease in likelihood associated with increasing wind intensity.

  • Fig. 11.

    As in Fig. 10, but for EGs. Source panels are as in Fig. 10, except for (o) newspapers, (p) broadcast media, (r) social media, (s) utility companies, (t) NWS storm survey, (u) county officials, (v) state officials, (w) 911 call centers, and (x) other minor sources for EGs as characterized in Table 2.

  • Fig. 12.

    Distribution of MGs in comparison with a subsampled set composed of sources known to be reliably measured (ASOS, AWOS, mesonet, buoys, official NWS observations, and COOP observers). Purple represents the overlap between reliable (blue) and all (red) MGs on each column count.

  • Fig. 13.

    Choropleth of MGs by CWA: (a) all sources as a density of reports per 10 000 mi2 (25 900 km2) within each CWA, colored as in the left legend at bottom; (b)–(r) fractional contribution to the total number of reports for each respective source identified with sample size n for each, colored as in the right legend at bottom. “Other minor sources” are characterized in Table 1.

  • Fig. 14.

    As in Fig. 13, but for EGs, with sources as labeled. Other minor sources are characterized in Table 1.

  • Fig. 15.

    Estimated (blue) and vertically corresponding adjusted (red) data points spanning EGs of 50–113 kt (58–130 mph, 26–58 m s−1). Dots represent datapoint values and not the sample size at each. EG values with null points (e.g., no records exist of 97–99-kt EGs) have clear centers. The x axis has labels for severe and significant–severe adjusted values (kt). The gray area represents the spread between original and adjusted EGs. Analogous to Fig. 2, the horizontal lines (y-axis values) can be considered as “perceived” and the vertical lines (x-axis values) as “actual.”

  • Fig. 16.

    (a) Total gust density delineated by CWA area of the subset of all MGs ≥ 50 mph (26 m s−1) and EGs ≥ 73 mph (2006–15). (b) Mean annual density of gusts for the subset as in (a), but per 80 × 80 km2 grid box. (c) As in (b), but using a 1.5-sigma-bandwidth Gaussian kernel smoother.

  • Fig. 17.

    Sub-severe raw METAR observation (kt, medium gray shade) and severe spotter estimate in an LSR (light gray shade, mph) for convective gusts at Mobridge, 10 Aug 2016. Peak speeds are underlined in red.
