1. Introduction
As one of the National Centers for Environmental Prediction (NCEP), the Storm Prediction Center (SPC) will provide operational, short-term (0–12 h) forecast guidance for hazardous winter weather (McPherson 1994). Current plans for the SPC include issuance of regional event-driven mesoscale forecast products for all hazardous mesoscale weather phenomena within the contiguous United States (conUS). These products will be similar to mesoscale convective discussions currently being issued by the Operations Branch of the SPC (formerly the Severe Local Storms unit of the National Severe Storms Forecast Center) and will pertain to all types of mesoscale weather hazards, including winter weather and heavy rain in addition to severe convective storms. Primary users of the SPC products will be National Weather Service (NWS) forecast offices (or WFOs), which will use the products as guidance in preparing local short-term forecasts (or “nowcasts”).
As the SPC mission broadens to include greater emphasis on short-term forecasting of winter weather, it is necessary to develop a greater understanding of the mesoscale aspects of winter weather events. Such an understanding begins with a climatological study of winter weather events affecting the forecast domain of the SPC, that is, the conUS. In other words, a winter weather climatology will give SPC planners a better idea of what they will be facing once operational forecasting of winter weather begins. Specifically, it is hoped that data on seasonal frequency trends, duration, and areal size distributions of winter weather events will assist SPC developers in operational planning issues such as staffing requirements, workloads, forecast formats, forecast content, reasonable lead times, and forecast product schedules. Addressing these issues is the primary purpose of the climatological database to be discussed herein.
In this study, data were assembled exclusively from all reported Storm Data (SD) winter weather entries from 1982 to 1994. The exclusive use of SD as a data source reflects a secondary purpose of this study: to assess the ability of SD to provide useful observational data for purposes of winter weather forecast verification. Although alternative, more quantitative climatological data sources are available for some winter weather elements (e.g., snowfall amounts), no such alternatives exist for freezing precipitation. (The occurrence or nonoccurrence of freezing precipitation can be determined from other sources, but freezing precipitation amounts are not measured or recorded routinely.) Also, reports of blizzards are considered more likely to be found in SD than in other data sources, since one generally must infer the occurrence and duration of blizzard conditions from surface observations of wind speed, visibility, and type of obstruction.
Thus it becomes apparent that SD is perhaps the only unified source of data on all types of hazardous winter weather events. So if SD cannot be used to construct a useful database, then there may be no viable alternative data source, at least for freezing precipitation and blizzards. The importance of this issue becomes apparent when one notes that forecasting of these elements is a major part of the NWS mission (e.g., ice storm and blizzard warnings) and will become an even larger part with the advent of winter weather forecasts by the SPC. Thus, if problems are found in SD that prevent its use as a data source, then there would be no reliable source of observational data to verify many (perhaps most) forecasts of significant winter weather.
The objectives of this research thus are heavily dependent on the answer to the question of SD reliability. Nonetheless, results can be regarded as meaningful regardless of the answer, for even if SD is found to be deficient as a data source, then deficiencies can be documented with the aim of either (a) establishing an alternative data source for verification purposes, or (b) improving SD itself. On the other hand, should SD be found adequate, then the resultant climatological database would be more useful and SD can be shown to be a potentially viable source of data for winter weather forecast verification.
Section 2 contains a detailed description of the method by which state-by-state SD entries and narratives were combined into events based on time and space continuity. In section 3, results are presented from the database of more than 1600 events. Forecast verification and SD consistency issues are discussed in more detail in section 4. A summary and general discussion of findings and recommendations appear in section 5, followed by plans for future work in section 6.
2. Methodology
Monthly SD publications from 1982 through 1994 were reviewed systematically for “character of storm” titles containing any references to winter-type weather hazards (freezing or frozen precipitation, blizzard conditions, or extreme cold). Unfortunately, over much of the period examined, it was found that such titles generally were not used consistently in SD. For example, an entry featuring freezing rain, regardless of amount or intensity, might appear in SD with a title of “Freezing Rain,” “Ice,” “Glaze,” “Ice Storm,” or simply “Winter Storm,” depending on the state or region. (Note that SD entries are prepared locally by NWS offices for their particular area of forecast/warning responsibility and then are forwarded to a central location for compilation into the final monthly SD publication.) Thus it was necessary to conduct a general search for key words or phrases that relate to winter-type hazards in entry titles. These key words/phrases generally included any phrase containing “snow,” “blizzard,” “winter,” “freezing,” “ice,” “cold,” “glaze,” or “sleet.” If any of these words appeared in a given “character of storm” title, that entry was considered for examination as part of a winter weather event.
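As an illustration only, the title screen described above can be sketched in a few lines of Python. The keyword list is the one given in the text; the sample titles and the simple substring matching are assumptions for the sketch, not the actual search procedure used:

```python
# Keyword list taken from the phrases named in the text; matching here is a
# simple case-insensitive substring test, as a sketch of the search.
WINTER_KEYWORDS = ("snow", "blizzard", "winter", "freezing",
                   "ice", "cold", "glaze", "sleet")

def is_winter_title(title: str) -> bool:
    """Return True if a 'character of storm' title contains any winter keyword."""
    t = title.lower()
    return any(k in t for k in WINTER_KEYWORDS)

# Hypothetical sample of "character of storm" titles:
titles = ["Freezing Rain", "Ice Storm", "Tornado", "Heavy Snow/High Wind", "Glaze"]
winter = [t for t in titles if is_winter_title(t)]
```

A production screen would likely need word-boundary matching (so that, e.g., "ice" does not match unrelated words), but a substring test suffices to convey the approach.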
Since winter weather events often affect more than one state, a given event often is represented in SD by multiple entries, one from each area of SD responsibility. Thus it was necessary to determine which entries were part of the same event. This determination was made by passing through each monthly volume of SD four times, as described below. The process is demonstrated in step-by-step fashion using an example from a particularly active period in early January 1994, during which several significant winter storms affected the conUS.
In the first pass, all winter weather entries were identified by calendar date(s) and states (or sections of states) affected. For each calendar day, all states reporting winter weather on the given day were compiled into a tabular listing for each month, as shown in the example in Fig. 1. These monthly tables were used to obtain statistical data on frequency of winter weather as a function of calendar date. They also were used in subsequent passes to identify general patterns that may relate to the occurrence of widespread events affecting several states. For example, in Fig. 1, most northeastern and mid-Atlantic states are listed on 3–4 January and again on 7–8 January, suggesting the occurrence of widespread winter weather events in the eastern states during each of these two periods.
In the second pass, the first attempt was made to combine individual SD entries into events. For this study, an “event” is defined as a set of SD entries that are continuous in time and space. For example, if there indeed was a large winter storm that affected most of the northeastern United States on 3–4 January 1994 (as suggested initially by the listings in Fig. 1), then such a storm would be represented in SD by separate entries from each affected state (or portion of state, in some cases). Furthermore, each entry would include dates, times, and areas affected within the state. If examination of those data revealed a spatially and temporally continuous swath of winter weather, then all entries thus identified would be combined into a single event. [This process is similar to examination of damage reports following a tornado, to determine whether damage was caused by a single tornado (unbroken damage swath caused by one event) or a family of two or more tornadoes (broken areas, indicative of multiple events).]
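The grouping logic just described amounts to finding connected components among entries linked by spatial adjacency and temporal overlap. A minimal Python sketch follows; the `ADJACENT` set and day-of-month fields are simplified stand-ins for the zone/county geography and date–time information actually examined:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    state: str
    start: int  # first day of month affected
    end: int    # last day of month affected

# Hypothetical adjacency between state reporting areas (stand-in for
# the actual county/zone geography used in the study).
ADJACENT = {("PA", "NY"), ("NY", "VT"), ("IA", "MO")}

def linked(a: Entry, b: Entry) -> bool:
    """Two entries belong to the same event if their areas touch and
    their date ranges overlap or abut (temporal continuity)."""
    touching = (a.state, b.state) in ADJACENT or (b.state, a.state) in ADJACENT
    overlap = a.start <= b.end + 1 and b.start <= a.end + 1
    return touching and overlap

def group_events(entries):
    """Group entries into events (connected components under `linked`)."""
    events = []
    for e in entries:
        merged = None
        for ev in events:
            if any(linked(e, other) for other in ev):
                if merged is None:
                    ev.append(e)       # join the first matching event
                    merged = ev
                else:
                    merged.extend(ev)  # e bridges two events: merge them
                    ev.clear()
        events = [ev for ev in events if ev]
        if merged is None:
            events.append([e])         # e starts a new event
    return events

# A PA-NY swath on 3-5 Jan plus an isolated IA entry -> two events:
demo = group_events([Entry("PA", 3, 4), Entry("NY", 4, 5), Entry("IA", 2, 2)])
```

This is only a sketch of the continuity test; in the study itself the determination was made by manual inspection of the SD entries and narratives.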
The desire to compile entries into events is driven by the need to determine size and duration characteristics of each winter weather event. This need will become more apparent in section 3 when data are presented. (Intensity characteristics also would be desirable, but since winter storms often produce inherent variations in hazard types and/or freezing and frozen precipitation amounts, meaningful comparison of intensity between events is difficult, if not impossible, to achieve.) From a climatological perspective, the data obtained after the first pass (i.e., states affected as a function of dates of occurrence) are sufficient to provide useful information on seasonal and geographic variations in winter weather frequency. However, such “event day” data do not allow one to determine the actual magnitude of a given winter weather event, particularly a widespread or long-lived one, in terms of impact on national, regional, or local forecast operations (much less on human activities).
In the second pass, greater attention was paid to event type (i.e., heavy snow, ice, blizzard, etc.). Using the calendar tables created in the first pass, all SD winter weather entries again were reviewed, but this time they were sorted chronologically and combined into events if inspection of the SD entries suggested that they were spatially and temporally continuous.
An example of an event listing after the second pass is shown in Fig. 2a, which lists six events that were identified during the period 1–8 January 1994: 1) an isolated heavy lake-effect snow event in Michigan on 1 January, 2) a long-lived heavy snow event over the western mountain areas on 1–6 January, 3) a heavy snow event in Iowa on 2 January, 4) a heavy snow and ice storm over the eastern states on 3–5 January, 5) another widespread heavy snow event from the northern plains to New England on 5–8 January, and 6) an isolated ice event in Oklahoma on 6 January.
Each event identified was given one or more titles (snow, ice, heavy snow, ice storm, extreme cold, ground blizzard, and/or blizzard) based on characteristics described in the SD narratives. Every effort was made to standardize the use of these titles as much as the data in SD would allow (the often-variable character of storm titles used in SD notwithstanding), based on specific definitions created for each title as given in appendix A. A series of yes/no indicators was recorded in tabular form for each event, indicating the reported occurrence of heavy snow (HS), blowing/drifting snow (BS), freezing precipitation (ZR), wet snow (WS), high winds (HW), downed power lines (PL), and thunder with freezing or frozen precipitation (TS). Definitions of these elements also are given in appendix A. States, or parts of states, for which entries were found for each particular event also were listed (see Fig. 2a).
In the third pass, event listings were refined using closer reexamination of each event that was identified as such after the second pass. Using the list of states contained in each event listing, each state entry for that event was reexamined individually in more detail. To confirm time and space continuity, closer attention was paid to reported dates and beginning/ending times of each entry and the geographical areas affected within each state.
As part of the third pass, graphical outlines were drawn for each event in order to document general size, shape, and geographic location of the area affected. This aspect of the analysis involved transferring outlines of affected areas within each state to a base map of the United States. The resulting outlines thus also served to confirm spatial continuity of the area affected by a given event. Since most states reported affected regions within their area by listing affected forecast zones (as defined by the NWS) or counties, the base map was a county map of the United States, which was supplemented with reference maps of standard NWS zone divisions from each state. (Forecast zone boundaries do not always coincide with county boundaries, especially in western states where topography often dictates zone divisions.)
The process of creating graphical outlines is illustrated in Figs. 3–5 for the period 1–8 January 1994. This is an example of a particularly active period that involved several events occurring concurrently in different parts of the conUS. The task of identifying individual events for active periods such as this was more involved than usual. Thus, a preliminary step was required before individual events could be outlined and defined: a general depiction of all areas that were affected by reported winter weather during the period. Figure 3 is a depiction of all areas within which winter weather was reported during the period 1–8 January. This graph was created by highlighting all forecast zones or counties that were listed in at least one SD entry during the 8-day period. Areas within the overall winter weather area were annotated with appropriate dates of occurrence, which established the occurrence of overlapping events (i.e., multiple events affecting a given location during the period) in some areas. (These annotations have been omitted from Fig. 3 for the sake of clarity.) The process of generating these graphics led to confirmation of time and space continuity in some cases but revealed spatial or temporal gaps in some areas that initially were diagnosed as one event. Or, occasionally, the third pass would reveal that two events initially considered as separate were in fact continuous in time and space. Thus, the third pass was a refinement process that either confirmed the second-pass listings or led to adjustments therein.
Based on this additional analysis, the general graphic (Fig. 3) was redrawn as a series of graphics depicting each event separately that could be verified as being both spatially and temporally continuous. The individual events in our example, thus separated, are shown in the sectionals in Fig. 4. The initial set of six events was expanded to 10 in this case, for several reasons. First, time continuity factors suggested that two events actually occurred in the western mountains from 1 to 6 January: one from Oregon to Wyoming on 1–3 January (affecting Oregon on 1 January, and Idaho and Wyoming on 1–3 January), and another that began in Oregon on 4 January and spread to Colorado and Wyoming on 6 January. The fact that there were temporal “gaps” of 2–3 days between entries from both Oregon and Wyoming supports the splitting of this initially single event into two events. Second, a spatial gap was found between the 5–6 January event over the north-central states and the 7–8 January event in the northeast. Therefore, this event also was redefined as two events. Third, further inspection identified a local lake-effect snow event in northern upper Michigan on 4–5 January, which occurred in the wake of the first eastern states event on 3–5 January. (The 3–5 January event was unchanged in the third pass, but the particular Michigan entry for 4–5 January initially was thought to be part of the larger event that occurred farther west and south on 5–6 January.) Finally, heavy snow from the upper Great Lakes region to western New York on 5–8 January was determined to be lake-effect snowfall and, thus, was considered as a separate event from the events in the north-central and eastern states on 5–6 and 7–8 January, respectively. (The latter two events, according to the SD narratives, were associated directly with migratory midlatitude weather systems.) 
Events initially identified on 1 January over Michigan, on 2 January over Iowa, and on 6 January over Oklahoma were confirmed in the third pass and were not changed.
A note must be made regarding exceptions to the requirement of time and space continuity. For this study, the general requirement was that the area affected by a given event be spatially and temporally continuous; that is, there must be no breaks in the area affected, and the time of occurrence be unbroken between adjacent entries. The purpose of this restriction was to ensure (to the extent possible) that all reports from a given event did, in fact, result from similar processes within the same weather system. The spatial continuity requirement was relaxed, however, for events affecting mountainous terrain and for lake-effect snow events. In the case of mountain snow events, true spatial continuity is rarely achieved since snowfall usually is a function of elevation and, thus, can vary tremendously over short distances even when caused by similar processes within a single weather system. An example would be an event that produced heavy snow over the higher elevations of the Wasatch Mountains of Utah and the Rockies of central Colorado, but did not produce snow at lower elevations in between. (The events of 1–3 and 4–6 January in Fig. 4, which affected higher elevations from Oregon to Colorado, are examples.) In such cases, it was necessary to make subjective determinations regarding true continuity. In a vast majority of cases like this, time continuity was adequate to ensure that mountain snows in adjacent states were in fact produced by the same parent weather system, even if spatial distribution was discontinuous due to elevation. Similar rationale applies to lake-effect snow events, which often affect several isolated areas (e.g., downwind of Lakes Michigan, Erie, and Ontario, as demonstrated in the 5–8 January event over the Great Lakes in Fig. 4), yet are relatively consistent in time and arise from the same basic physical process. 
In cases of both mountain and lake-effect snow, it sometimes was necessary to group many reports over several days into a single long-duration event when it was apparent that similar conditions persisted over an extended period of time. (Such often is the case in mountain and lake-effect snow events.) In these cases it was impossible to break the event down further based on local effects, since SD narratives were not specific enough to identify cases of, say, local orographic effects or local changes in lake-effect snow band patterns.
The often-tedious process of generating detailed graphic outlines (e.g., Figs. 3 and 4) involved several complications, including periodic changes in forecast zone configurations during the period of study in many states, occasional errors in zone coding (incorrect format, missing zones, nonexistent zones, etc.), and the fact that some states occasionally did not designate zones or counties at all (having used more generic statements such as “southern half” or “central and east” instead). Since the goal was to obtain general geographical data in the form of location, size, and shape, the graphical outlines, derived as in Fig. 4, were “smoothed” into general outlines as shown in Fig. 5.
Figure 2b represents a refined version of the listing in Fig. 2a following the third pass. In this example, the events of 1 January in Michigan, 2 January in Iowa, 3–5 January in the Northeast, and 6 January in Oklahoma were confirmed in the third pass and were not changed. However, the initially single event of 1–6 January in the western states was broken into two events, and the initially single event of 5–8 January was redefined as four events, as discussed earlier.
In the third pass, all precipitation events (i.e., all events that were not extreme cold events) were classified based on size (areal coverage) and maximum duration. For each element, rating scales from zero to five were established in order to determine climatological distributions of these two elements. Similar in principle to the F scale for tornadoes and damaging windstorms (Fujita 1971) or the Saffir–Simpson scale for hurricanes (U.S. Department of Commerce 1995), higher numbers in each element were defined to reflect more severe conditions. Thus, a rating of zero in any element would reflect a minimal event (localized or brief), while a rating of five would reflect a major event (widespread or long duration). Appendix B contains definitions of the rating criteria for each element. (Note that extreme cold events, which composed only about 1% of all events, typically are much more widespread and long lived than precipitation events, so their inclusion in the database along with precipitation events would create an undesirable bias in size and duration statistics.)
Thresholds for both rating elements were defined on nonlinear scales, a necessary concession made in order to obtain useful data that account for the extreme variations in size and duration among events. Since value ranges for category thresholds were kept rather broad, most events either could be rated easily into one of the six categories for each element or could be narrowed to one of two adjacent categories (analogous to, say, a borderline F1/F2 tornado).
Thus, in the third pass, two ratings from zero to five were assigned to each event: one for areal coverage and one for maximum duration. These ratings can be seen in the examples in Fig. 2b, below AR and DR, respectively. Area ratings (AR) were assigned based on the size of the area affected by winter storm conditions, defined here as either heavy snow, ice storm conditions, or blizzard conditions, per NWS definitions. (This definition conforms closely with NWS criteria for winter storm warnings; see appendix B.) Thus, no area rating was assigned if the event did not meet any of these criteria during its lifetime.
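The two rating assignments can be sketched as simple bin lookups on nonlinear scales. The thresholds below are hypothetical stand-ins chosen only to be roughly consistent with the approximate category sizes quoted in section 3; the actual criteria are those of appendix B:

```python
import bisect

# Hypothetical nonlinear category thresholds (NOT the appendix B criteria):
# areas in units of 10^3 km^2, durations in hours; six categories, 0-5.
AREA_BOUNDS = [5, 25, 250, 1000, 2500]   # upper bound of categories 0-4
DURATION_BOUNDS = [3, 6, 12, 24, 48]

def rate(value: float, bounds) -> int:
    """Map a measured value onto a 0-5 rating via its category bin."""
    return bisect.bisect_right(bounds, value)

ar = rate(180, AREA_BOUNDS)      # 180 000 km^2 -> area rating (AR) 2
dr = rate(30, DURATION_BOUNDS)   # 30-h maximum duration -> duration rating (DR) 4
```

The nonlinear (roughly geometric) spacing of the bins is what lets a single 0-5 scale span both localized, brief events and the rare continental-scale, multiday ones.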
Using the graphical outlines and SD narratives, each precipitation event also was given one of four subjective classifications based on prevailing topographic influence and/or shape of the area affected: elongated or banded (BA), circular or irregular in shape (CI), occurring mainly over mountainous terrain (M), or lake effect (L). These classifications, which appear in the examples in Fig. 2b under “Area,” were not used directly in this study but were included in the database for possible future use. Note that in most cases of lake-effect snow, it was possible to separate lake-effect and non-lake-effect snowfall into separate events by examining the narratives. (Such was the case in the examples in Figs. 2–5.) Some states routinely listed lake-effect and non-lake-effect components as separate entries, even when both were present around the same time and region.
At this point it should be clear that the subjective nature of SD narratives and the nonstandardization of several SD elements (e.g., character of storm titles, reporting convention for geographic locations) led to an unavoidable amount of subjectivity in several aspects of this study. This is an unfortunate necessity when dealing with SD winter weather entries, the narratives of which often are written with varying levels of detail. (The issue of variable reporting procedures will be addressed further in section 4.) However, the amount of information that can be derived from SD winter weather reports actually is quite considerable. Many elements are reported with sufficient regularity and objectivity to allow one to construct a useful database. For example, maximum snowfall totals are reported in nearly all snow entries, and maximum ice (glaze) thicknesses are reported in most freezing rain or ice storm entries. Geographic areas affected within a state usually are listed by forecast zone, allowing for reasonably accurate assessments of areal coverages of affected areas in most cases. (Generic references, as discussed earlier, are the exception rather than the rule.) It is notable that similar levels of subjectivity exist in the reporting of other severe weather elements (e.g., reported hail size, F-scale ratings), and yet such data often are used in climatological studies (e.g., Kelly et al. 1978, 1985).
In order to minimize the effects of subjective interpretation as much as possible, a fourth pass through SD was conducted. Following completion of the third pass for the entire 13-yr period, the entire third-pass procedure was repeated from the beginning of the database. Changes then were made to the database if fourth-pass results disagreed with those of the third pass. This apparent redundancy was a test of repeatability, to ensure that the method of combining SD entries into events remained consistent throughout the time the third-pass analysis was conducted. Very few changes were made during the fourth pass, supporting the idea that data collection and analysis procedures were consistent despite the partially subjective nature of the data source. The need for few changes also supports the concept of reproducibility, that is, that the database could be reproduced and similar results obtained if procedures described herein were undertaken independently.
3. Data summary
Totals, means, and percentages by event type are summarized in Table 1 for all winter weather events within the conUS. Over the 13-yr period, a total of 1661 events were identified (from an estimated 5000–10 000 SD winter weather entries). There were 2075 “event days,” calendar days on which one or more events, or parts of events, occurred.
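The distinction between events and event days can be made concrete: an event day is counted once no matter how many events, or parts of events, touch it. A minimal sketch, with hypothetical dates:

```python
from datetime import date, timedelta

def event_days(events):
    """Count 'event days': calendar days on which at least one event
    (or part of one) was in progress.  Each event is a (start, end) pair."""
    days = set()
    for start, end in events:
        d = start
        while d <= end:
            days.add(d)
            d += timedelta(days=1)
    return len(days)

# Two overlapping January events plus one isolated event:
evts = [(date(1994, 1, 3), date(1994, 1, 5)),
        (date(1994, 1, 5), date(1994, 1, 8)),
        (date(1994, 1, 1), date(1994, 1, 1))]
```

Because overlapping events share calendar days, the event-day total (2075) can be far smaller than the sum of all event durations.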
The number of events and event days were examined by month, calendar day, size, and duration. Numbers also were broken down by type (heavy snow, blizzard, ice storm, etc.), yielding relative frequencies of these hazards.
a. Types of hazards
The breakdown of events by type of hazard is given in the bottom six rows of Table 1. Heavy snow, defined here as a storm total of 4 in. (10 cm) or more over most of the south-central and southeastern United States, and 6 in. (15 cm) or more elsewhere (corresponding closely to NWS 12-h heavy snow criteria, except in some mountainous areas where criteria are higher), was the most frequent hazard by far, being present in 81% of all reported events. Significant freezing rain or drizzle (“ice,” as defined in appendix A) was reported in only 24% of all events, and only half of those (or 12% of all events) qualified as ice storms by NWS definitions [i.e., freezing precipitation resulting in either structural damage or ice accumulations of 0.25 in. (0.64 cm) or more]. Only 10% of all events produced blizzard conditions. Thunder was reported with freezing or frozen precipitation in 6% of all events, but the actual number of thunder–snow and thunder–ice events likely is larger since many such events may be unreported in SD. (Thunder accompanying freezing or frozen precipitation, by itself, is not a criterion for required reporting in SD.)
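Given the yes/no indicator table built in section 2, hazard percentages of this kind reduce to counting flags. A toy example with hypothetical flag values (the element codes HS, ZR, and PL are those defined in section 2):

```python
# Hypothetical miniature flag table: one dict of yes/no indicators per event.
events = [
    {"HS": True,  "ZR": False, "PL": False},
    {"HS": True,  "ZR": True,  "PL": True},
    {"HS": False, "ZR": True,  "PL": True},
    {"HS": True,  "ZR": False, "PL": False},
]

def pct(flag: str) -> float:
    """Percentage of events reporting a given element."""
    return 100.0 * sum(e[flag] for e in events) / len(events)
```

Applied to the full 1661-event database, tabulations of this form yield the percentages quoted above.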
The high percentage of events that resulted in power outages (27%) is interesting, since only about half of these events were ice storms. This finding indicates that other factors (e.g., wet snow, high winds) are responsible for roughly half of all winter-weather-related power outages.
b. Yearly trend
Figure 6 shows the number of reported events and event days, respectively, by year. The general increase in both events and event days reflects improvements in SD reporting procedures over the years. A general observation from examination of SD is that several states failed to report winter weather at all during the early years of the study (i.e., most of the 1980s) but that most of these states began reporting winter weather at some point during the late 1980s or early 1990s. Thus, by 1994, nearly every state was reporting winter weather regularly. This point will be discussed in more detail in the following section, but is mentioned here since it would appear to play a key role in the yearly trend in numbers. Specifically, the lower numbers in earlier years are likely too low due to “missing” reports from the states that did not report winter weather (for whatever reason), while the higher numbers in later years are considered more representative.
Note that the number of events may be arbitrary (since the grouping of SD entries into events was done subjectively for the most part), but the number of event days is based directly on calendar dates as reported in SD and, thus, should be considered more objective. Therefore, the number of event days is particularly significant, especially when one notes from the 150–200-day average in recent years that significant winter weather occurs somewhere within the conUS on more than half of all calendar days, averaged throughout the year.
c. Monthly and seasonal variations
Monthly distributions of events and event days are depicted in Fig. 7. Events have occurred within the conUS in every month of the year. (Events in the summer months occurred exclusively in the Rocky Mountain regions of Colorado, Wyoming, or Montana.) The number of events decreases steadily through the late winter and spring months and increases steadily through the fall months. However, percentages of event days remain approximately uniform near or above 75% from November into March, before decreasing sharply in April and May. This finding reflects a greater tendency for multiple midwinter events to occur on the same day, especially during the peak months of December and January.
In order to examine seasonal variations in greater detail, frequencies were calculated by calendar day after the first data pass. These numbers, expressed as percentages of possible days and smoothed by a 7-day running average, are shown in Fig. 8. For any given calendar day, the data in Fig. 8 approximate the probability of occurrence of significant winter weather somewhere within the conUS on that day. Note that individual peaks in the traces in Fig. 8 are likely due to sampling variations and thus are probably not meaningful (although one could speculate that relative peaks from late November through mid-January are related to holiday travel periods, when winter storms are more likely to affect travel and thus are more likely to be reported).
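The daily-frequency calculation and smoothing can be sketched as follows. The wrap-around (circular) treatment of the calendar endpoints is one reasonable choice; the text does not specify how the endpoints were handled:

```python
def daily_frequency_pct(occurred, n_years):
    """Percentage of possible days: for each calendar day, the fraction of
    years in the record on which winter weather was reported that day."""
    return [100.0 * c / n_years for c in occurred]

def running_mean(x, window=7):
    """Centered running average, wrapping around the calendar year."""
    n, h = len(x), window // 2
    return [sum(x[(i + j) % n] for j in range(-h, h + 1)) / window
            for i in range(n)]
```

With 13 years of record, a calendar day reported in every year scores 100%; smoothing with a 7-day window then damps day-to-day sampling noise of the kind noted above.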
Maximum frequencies are from late November through late January. During this period, significant winter weather has occurred somewhere within the conUS, on average, on 80%–90% of all days. This finding suggests a busy time for forecast centers with national interest (e.g., the SPC), as significant winter weather can be expected on an average of about five days in six during midwinter. Several other notable features appear in Fig. 8, including 1) a sharp rise in frequencies from late October to mid-November; 2) persistent high frequencies (70% or more) through March, before decreasing sharply through April and early May; and 3) zero frequencies limited only to late July and early August.
Another observation worth noting in Fig. 8 is the persistence of relatively high winter weather frequencies well into the spring season. This period coincides with increasing frequencies of severe convective events [large hail and damaging convective winds; Kelly et al. (1985)] and overlaps the climatological peak for violent tornadoes in April (Kelly et al. 1978). Thus, forecast operations involving both winter storms and severe convection appear most likely during early spring but are hypothetically possible virtually any time of the year. This is an important finding to consider for planning of SPC operations, since it implies a need to plan for simultaneous severe convection and winter weather forecast operations, especially in early spring.
d. Horizontal scale (areal coverage)
Areal coverage statistics were compiled using area ratings (discussed in section 2) that were assigned to each precipitation event. (The 20 extreme cold events that were documented were not included in this compilation.) Distribution of events by size categories is shown in Fig. 9 for all events that qualified as winter storms.
A vast majority of events were found to be confined to small areas. More than half of all winter storms affected areas smaller than about 25 000 km2 (roughly the size of Vermont; area categories 0 and 1 in Fig. 9), and 84% affected areas smaller than about 250 000 km2 (roughly the size of Kansas; categories 0, 1, and 2 in Fig. 9). This finding strongly indicates the importance of mesoscale processes in most winter weather systems. Effective short-term forecasting of such systems thus will require a thorough understanding of the mesoscale processes that produce and focus the often-localized areas of severe winter weather.
e. Local duration
Local duration is defined as the maximum duration of the event at a given location, not storm lifetime, which typically is considerably longer. As discussed earlier, one must deal with subjectivity in determining maximum duration since SD time entries often are vague. Frequently, times of occurrence were given in generic terms such as “afternoon,” “all day,” etc., or occasionally only a single local time was given. In the latter case, it was not always clear whether the time was a start time or an end time for the event (although in most cases one could obtain a reasonable estimate of duration from the narrative). And in cases where both start times and end times were given, there was (and still is) no standard as to what exactly constitutes the “start” and “end” of a winter weather event. However, it is felt that the information available in SD entries from a given event was adequate to classify each event into one of the duration categories defined in appendix B.
As seen in Fig. 10, 70% of all reported precipitation events lasted a maximum of 6–24 h locally (duration categories 2 and 3). A considerable number (16%) lasted a maximum of 24–48 h locally (category 4). This finding has some bearing on viable forecast lead times and forecast intervals, suggesting that forecasts out to 12 h or less will cover only part of a winter weather event in most cases.
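The size and duration ratings reduce to simple threshold lookups. The exact category bounds are defined in appendix B of the study; the values below are assumptions consistent with the text (categories 0–1 below about 25 000 km², category 2 up to about 250 000 km²; duration categories 2–3 together spanning 6–24 h and category 4 spanning 24–48 h):

```python
import bisect

# Assumed upper category limits, consistent with the text but not
# taken verbatim from appendix B.
AREA_BOUNDS_KM2 = [2_500, 25_000, 250_000, 2_500_000]
DURATION_BOUNDS_H = [3, 6, 12, 24, 48]

def area_rating(area_km2):
    """Category index for areal coverage (0-1 below ~25 000 km2,
    2 from ~25 000 to ~250 000 km2, i.e., Vermont to Kansas)."""
    return bisect.bisect_right(AREA_BOUNDS_KM2, area_km2)

def duration_rating(hours):
    """Category index for maximum local duration (2-3 span 6-24 h,
    4 spans 24-48 h)."""
    return bisect.bisect_right(DURATION_BOUNDS_H, hours)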
4. Forecast verification and Storm Data reliability
In order to improve forecast quality over time, and to document the level of improvement, forecasts must be verified systematically. This is true especially for new or “spinup” forecast efforts, where forecast problems are more likely to arise. A structured verification effort, in turn, requires a reliable database with observed data on the element(s) being forecast. For the SPC winter weather forecasts, as well as for any NWS winter weather warnings, verification will require observed data on snowfall and freezing rain measurements, as well as documentation of blizzard conditions. Yet, it must be restated from the discussion in section 1 that no sources exist to provide quantitative climatological data on freezing rain accumulations or blizzard conditions. The only possible source of observational data on these types of events is SD, in which all such events are reported (in theory) and described, at least subjectively.
Several problems were found with winter weather documentation methods in SD. A few of these problems (such as generic descriptions of affected areas, nonstandard definitions of start/end times and inconsistent use thereof) have been discussed in previous sections. Other problems include wide variations in the amount of detail among narrative entries from different states, regional differences in heavy snow criteria [NWS criteria vary locally and regionally from 4 to 18 in. (10–45 cm) in 12 h], some nonmeteorological factors [e.g., blizzards may not be reported or forecast in the Colorado mountains, since the word “blizzard” may adversely affect business at local ski resorts (E. Thaler 1996, personal communication)], and general differences in reporting criteria from state to state. The latter problem sometimes led to sharp frequency gradients across state lines, as certain states did not report winter weather in SD at all during parts of the data period. This problem appeared more often during the beginning of the data period, but occasional decreases in frequency of winter weather reports were observed in some states. These trends often were absent in adjacent states and, thus, appeared to have nonmeteorological causes. (Personnel changes at local NWS offices are believed to be responsible for many local inconsistencies within a single state or region, as changes in staff bring in new individuals having different standards regarding how and what to report in SD.)
In general, improvements were noted in all aspects of SD winter weather reporting between the beginning and end of the period of study. More and more offices used forecast zone designators by 1994, thus reducing the use of generic phraseology to describe areal coverage. Start and end times, which often were missing or replaced with generic phrases in the 1980s, were included in nearly every entry in 1994 (although there still were no exact definitions of the start or end of a winter weather event). And, as mentioned in the previous section, several states that failed to report winter weather events at all in the early years of the study began to report them regularly by 1994.
But it is felt that more improvements are needed before SD can serve as a reliable source of observational data for forecast verification. Needed improvements include definitions, or at least better guidelines, for determining when a winter weather event starts and ends; standardization in reporting of certain key elements such as maximum reported snowfall or ice accumulations; and more specific guidelines covering which events to report, so that similar events are reported similarly by each state or region affected. Perhaps a quality control program would ensure more standardization of reporting procedures and help to implement the above improvements. Since SD is the only current source of ice and blizzard data (and that being mostly qualitative at present), it is vital to incorporate the above improvements. Alternatives are either (a) develop an entirely new database and observing system for reporting freezing rain accumulations and blizzards, or (b) forego verification of forecasts for these events. The first choice is considered impractical, and the second scientifically and operationally unsound.
While the above suggestions for improving the quality of SD should lead to improved forecast verification, there is a related issue involving real-time verification of forecasts, especially those with short lead times. The turnaround time between a given event and the final release of applicable SD reports typically is on the order of months due to the necessary time to accumulate and validate data sources locally, compose the appropriate entries, and collate local summaries into a national product. Thus, an accurate, consistent SD product would allow winter weather forecasts—including winter weather watches, warnings, and advisories, as well as SPC short-term forecasts—to be verified, but only well after the event. What also is needed is a standard means of providing observational data for verification during or immediately after a winter weather event. These data not only would help in subsequent preparation of SD reports on the event, but also would allow forecasters to update forecasts, watches, warnings, etc. more effectively in real time.
Currently, some WFOs use special weather statements to report snowfall, ice, or significant “impact” reports (road closings, ice damage, etc.). Other WFOs use public information statements, which often are issued well after the event is over. Still others use state or area weather summaries. Very few use local storm reports (LSRs) to report anything other than severe convective events (hail, tornadoes, etc.). Real-time winter weather reports could be standardized easily through universal use of LSRs by all WFOs to report significant winter weather. This reporting method would not only improve real-time tracking of forecasts and warnings, but would serve to inform downstream offices more effectively of approaching weather conditions. As the mission of the NWS focuses more on short-term forecasting, effective real-time dissemination of storm reports, including winter weather, will become even more important.
5. Summary and recommendations
A comprehensive climatological database has been assembled, containing all winter weather events reported in SD within the conUS from 1982 through 1994. Although winter weather climatological studies have been conducted on local and regional scales (e.g., Kocin and Uccellini 1990), this is believed to be the first comprehensive database of its type to be assembled on a national scale.
Potential benefits of a national winter weather database are far reaching. A primary example of such benefits is related to the original goal of this study: to support the operational spinup of the SPC national winter weather forecast program. Analysis of the database has provided statistics on seasonal and other variations in winter weather event frequencies, which are intended to help in operational planning issues such as forecast content, staffing requirements, forecast intervals, and lead times, etc. Furthermore, since all events in the database are identified by state(s), regional and local (state-by-state) studies are possible by extracting subsets of the complete database and performing statistical analyses of the subsets.
All winter weather precipitation events have been classified based on areal coverage and local duration. These classifications provide a means of comparing winter weather events in a relative sense and allow for a climatological assessment of size and duration distributions among winter weather events. It might be useful to extend these classifications into an overall rating system for winter storms (analogous to the Fujita or Saffir–Simpson scales for wind storms and hurricanes, respectively) by determining maximum intensity in addition to size and duration for each event. But considering the variety of elements to evaluate in winter storms (enormous range of spatial and temporal scales, differing precipitation types, and the frequent lack of a direct relationship between storm impact and meteorological parameters, such as maximum wind speeds or cyclone central pressure), an ideal, universal winter weather categorization scheme is considered difficult if not impossible to achieve.
Specific findings from the analysis of the data presented herein are summarized as follows, as related to operational forecast planning.
Winter weather is nearly a daily occurrence within the ConUS during the winter months. Significant winter weather occurs somewhere on roughly 70%–90% of all possible days from November through March. This finding suggests that the SPC will need to prepare for fulltime winter weather operations, in addition to other storm-related operations (e.g., severe convection, flooding), during the period from late fall through early spring.
Winter weather events have occurred within the conUS in all months. Although summertime events are rare, and are limited to the mountains of the western United States, winter weather forecast operations may be needed at almost any time of the year.
High frequencies of winter weather and severe convection overlap, especially in early spring. Forecast operations at the SPC thus may involve concurrent winter weather and severe convection, especially in March and April. (Heavy rain and flooding also may overlap these two hazards during the year.) Extra staffing may be needed to contend with these multiple hazards.
Most events affect small areas. This finding stresses the fact that winter weather events typically are mesoscale phenomena, and forecasting of these events thus will require development of improved mesoscale forecasting techniques. This requirement applies especially to the SPC, which will focus mainly on short-term, small-scale developments. Development of a science-support branch should be a priority during the SPC spinup phase, in order to establish an arena of applied research to assemble and test short-term winter weather forecast techniques.
Heavy snow is the dominant hazard on a national scale, being present in more than 80% all winter weather events. Significant ice is present in roughly one-fourth of all events, of which only about half (12% of all events) qualify as ice storms.
Most events last 12 h or more locally. This fact, coupled with the fact that even most small-scale events affect large areas relative to, say, flash floods or severe convective storms, makes it necessary for communities to begin effective preparations for widespread winter weather on the order of 12–24 h in advance of the onset of severe conditions. Thus it is important for effective watches, warnings, and forecasts to cover lead times of 12 h or more. This lead time is beyond the short-term (0–12 h) focus of the SPC. However, if NCEP is to provide effective guidance to WFOs, it will be necessary to establish guidance products that cover lead times of 12–24 h. The identity of the issuing center [either the SPC or another NCEP center; McPherson (1994)] is less important at this stage than the simple fact that such a guidance product will be needed.
The data analysis process employed in this study reveals that SD, in its current state, needs further standardization before it can be used effectively as a source of forecast verification data. Since it is the only viable verification source for freezing precipitation and blizzard conditions, it is imperative that reporting guidelines are standardized before forecast verification can be undertaken meaningfully. Based on inconsistencies found in recent SD winter weather entries, it is recommended that NWS management reexamine current SD reporting guidelines. The following specific recommendations are offered for improving the quality of SD and, thus, improving winter weather forecast verification efforts.
Remind offices with SD responsibility that all significant weather events, including winter storms, should be reported.
Incorporate guidelines that standardize titles of winter weather events. Definitions contained in appendix A are suggested.
Incorporate enhancements into SD guidelines regarding reporting detail levels. Require that all reports of snow and ice events include maximum reported snowfall and/or maximum reported ice (glaze) accumulations within the area of responsibility.
Clarify procedures for reporting beginning and ending times of long-duration events. Provide more specific definitions of what constitutes the beginning and end of an event.
Ensure that all offices are using proper zone or county designators when referring to the location(s) of winter storm conditions.
Include forecast verification as an SPC function. Verification should include local NWS winter weather watches, warnings, and advisories in addition to SPC winter weather forecasts.
There is no consistent procedure for real-time dissemination of most winter weather reports. Ground-truth information is needed in real time to track a forecast’s performance. This is true for all NWS offices, not just the SPC. Thus, it is strongly recommended that the NWS adopt a standard procedure and set of guidelines for reporting and disseminating winter weather reports in real time as LSRs. Some specific recommendations include the following.
Establishing the LSR product as a standard product for dissemination of all significant reports of winter weather.
Strongly encouraging all NWS offices to issue LSRs as soon as possible after a report of significant winter weather is received.
Establishing a set of specific criteria for “significant winter weather” events warranting LSR issuance. These criteria may include, but need not be limited to, any report that meets local or regional NWS winter storm, heavy snow, blizzard, or ice storm criteria. Specifically, criteria should include (a) any report of snowfall exceeding local heavy snow criteria; (b) any report of freezing rain/drizzle accumulations in excess of 0.25 in. (0.65 cm); (c) any report of power outages or structural damage due to downing of tree limbs, power lines, or other structures by snow or ice accumulations, whether or not winds are believed to have contributed to the outage; (d) road closings or other significant travel problems arising from snow, blowing snow, or freezing precipitation, even if winter storm criteria are not met; and (e) any report, whether localized or widespread, of blizzard conditions (measured or estimated winds of 35 mi h−1 or more and visibility less than 0.25 mi in falling and/or blowing snow1).
Providing specific definitions and titles for the above report types, as has been done already for severe convective reports (convective winds, hail, tornadoes). Titles defined in appendix A are suggested.
Incorporating winter weather events into the current standardization efforts regarding the format of LSRs, so that such reports can be entered, read, and decoded electronically using standard LSR encoding/decoding software.
6. Future work
A primary goal is to make the master database of winter weather events available to as many potential users as possible. It is hoped that other users, once having access to the database, can use the data to conduct specific research as they see fit. Unfortunately, the database is far too voluminous to include in a standard publication and is expected to continue growing as data from future years are added. Thus, current plans are to post the tabular (text) data on a year-by-year basis on the Internet, where prospective users can access the data for their own purposes.
The current database will be updated yearly as appropriate SD volumes become available. Data analysis will continue in several areas, most notably in the area of geographic distribution over the conUS. Current ongoing research involves computation of bimonthly event frequencies over a 1° latitude–longitude grid. Results should help locate areas or corridors of enhanced (or suppressed) winter weather activity and assist in evaluation of local winter storm frequencies. Other ongoing and/or planned research topics include a closer, more quantitative look at factors related to power outages, and establishment of a viable intensity rating system. Other ideas doubtless will arise with time, perhaps involving other local or regional tendencies.
In this study, analysis of the database consists of basic studies of frequency variations by year, month, calendar day, type of hazard, size, and duration. However, the database contains additional information that was not analyzed in this study. For example, detailed studies on causes of power outages are possible, since the occurrence of power outages and their causal factors are documented for each event. Also, each event has been given one of four classifications based on prevailing topographic influence and/or shape of the area affected. These data have been assembled but have yet to be analyzed. Analyses are possible, for example, of data pertaining to only lake-effect events as compared to all events, or relative frequencies of banded versus circular areas. Such studies are beyond the scope of this article, which is intended only to document the data acquisition process and present results that are directly applicable to SPC operational planning issues.
Acknowledgments
The author gratefully acknowledges the support of the NWS Southern Region, the SPC, and the National Severe Storms Laboratory (NSSL) during the preparation of this article. Publication support was provided jointly by Scientific Services Division, NWS Southern Region and by the SPC. Helpful reviews of the manuscript were provided by Robert Johns, SPC; David Andra, NWS, Norman Oklahoma; Dr. Harold Brooks, NSSL; and two anonymous reviewers. I also am grateful to the Mesoscale Analysis Group, NSSL, for many ideas shared and feedback provided. Special thanks are extended to Preston Leftwich, NWS Central Region, for making it possible to present an early version of this research at the 1995 NWS Winter Weather Workshop, and to Ronald McPherson, NCEP, for providing the opportunity to present preliminary findings of this research at the NWS 1995 Fall Directors Meeting. Joan O’Bannon, NSSL, was instrumental in the preparation of several of the figures.
REFERENCES
Fujita, T. T., 1971: Proposed characterization of tornadoes and hurricanes by area and intensity. SMRP Research Paper 91, 42 pp. [Available from Department of Geophysical Sciences, University of Chicago, Chicago, IL 60637.].
Kelly, D. L., J. T. Shaefer, R. P. McNulty, C. A. Doswell III, and R. F. Abbey Jr., 1978: An augmented tornado climatology. Mon. Wea. Rev.,106, 1172–1183.
——, ——, and C. A. Doswell III, 1985: The climatology of nontornadic severe thunderstorm events. Mon. Wea. Rev.,113, 1997–2014.
Kocin, P. J., and L. W. Uccellini, 1990: Snowstorms along the Northeastern Coast of the United States: 1955 to 1985. Meteor. Monogr., No. 44, Amer. Meteor. Soc., 280 pp.
McPherson, R. D., 1994: The National Centers for Environmental Prediction: Operational climate, ocean, and weather prediction for the 21st century. Bull. Amer. Meteor. Soc.,75, 363–373.
U.S. Department of Commerce, 1995: National hurricane operations plan. Publ. FCM-P12-1995, 134 pp. [Available from Federal Coordinator for Meteorological Services and Supporting Research, 8455 Colesville Rd., Ste 1500, Silver Spring, MD 20910.].
APPENDIX A
Definitions of Winter Weather Events Extracted from Storm Data
The following definitions are based on examination of all Storm Data entries for a given event and are not necessarily related to character of storm titles that appear with those entries.
Hvy snow (AA): An event in which storm-total snowfall exceeds 4 in. (10 cm) over the south-central and southeastern states (generally south of 37°N, excluding mountain areas) or 6 in. (15 cm) or more elsewhere. Although heavy snow criteria vary locally due to frequency of occurrence and elevation, this definition was chosen as a simple one to best fit NWS winter storm and heavy snow criteria. Thus, a hvy snow event should verify most winter storm or heavy snow warnings. The value AA is the maximum reported storm-total snow accumulation, in inches.
Snow: Any reported snowfall event in which the above heavy snow criteria are not met. This may include sleet events or blowing snow events that do not reach blizzard criteria. A snow event should be considered at most an advisory-type event and normally would not verify a winter storm warning. (Note: A single entry that is included because of an isolated incident, such as an auto accident on a slick road, generally is not included.)
Ice storm (xxx): Freezing precipitation that causes structural damage, particularly downed power lines, or in which accumulations of ice are reported to be 0.25 in. (0.64 cm) or more. An ice storm should verify a winter storm or ice storm warning. The xxx is either a 3-digit number indicating maximum thickness of ice in hundredths of inches (e.g., 150 would be 1.5 in.), or, if ice thickness is not reported, a letter indicating estimated extent of structural damage reports due ice: I = isolated, S = scattered, or W = widespread or numerous.
Ice: A freezing rain or freezing drizzle event in which ice storm criteria are not met. Consequences are limited to travel-related problems (e.g., accidents, slick roads). An ice event should be considered an advisory-type event and would not verify a winter storm or ice storm warning.
Ground blizzard: Blizzard conditions [winds >35 mi h−1 (15 m s−1) and frequent visibility <1/4 mi (0.4 km) in falling or blowing snow] can be confirmed, but maximum snowfall accumulations are below hvy snow criteria (above). A ground blizzard event that lasts more than 3 h should verify a blizzard warning. (To qualify as a blizzard, the narrative must either state “blizzard conditions” or indicate clearly that blizzard criteria were met. References to “near-blizzard conditions” are not sufficient.)
Blizzard (AA): Blizzard conditions can be confirmed and heavy snow criteria (see hvy snow, above) are met. A blizzard event should verify most winter storm or heavy snow warnings and should verify a blizzard warning if it lasts more than 3 h. (To qualify as a blizzard, the narrative must either state blizzard conditions or indicate clearly that blizzard criteria were met. References to near-blizzard conditions are not sufficient.) Here, AA is the maximum reported storm-total snowfall accumulation, in inches.
Extreme cold: Three or more states (or parts of states) have entries during the same period that refer specifically to either record low temperatures, wind chills of −20°F or less, or significant damage or loss (e.g., crop loss, frozen pipes) due to unusually cold weather. Isolated reports of cold-related casualties (hypothermia) were not considered unless they were accompanied by other information of the type described above.
Definitions of additional elements, as reported in yesno indicator columns:
HS (heavy snow): heavy snow criteria (as defined under hvy snow, above) are met in at least one SD entry from the event;
BS (blowing/drifting snow): blowing or drifting snow is mentioned as a contributing factor to weather-related problems, or blizzard or near-blizzard conditions are mentioned, in at least one SD entry from the event;
ZR (freezing rain): significant freezing precipitation (i.e., enough to contribute to weather hazards) is mentioned in at least one SD entry;
WS (wet snow): there is reference to wet (or high water content) snow in at least one SD entry from the event;
HW (high winds): at least one SD entry mentions either “high” winds, “strong” winds, winds of 35 mph or greater, blizzard conditions, or near-blizzard conditions;
PL (power lines): power outages, or downed power/utility lines, are reported in at least one SD entry from the event (note that power outages include those resulting from downed trees falling onto power lines); and
TS (thunder snow): thunder and/or lightning with freezing or frozen precipitation is reported in at least one SD entry from the event.
Note that an asterisk in a given column indicates that criteria for that element were met for that event; a dash indicates they were not met. For events in which power lines were downed (asterisk under PL), a plus sign (appearing occasionally under ZR, WS, or HW) indicates that the element was reported but did not contribute to downed power lines. A question mark indicates an element that was not reported specifically but was considered most likely to be the cause of downed power lines based on available information.
APPENDIX B
Rating Criteria for Winter Weather Events
All precipitation events (i.e., all events except extreme cold events) are rated in terms of size (areal coverage) and duration. For each element, an event was given the maximum rating that can be applied, based on available information.
Listing of states that reported significant winter weather in Storm Data by calendar date, January 1994. Standard two-letter state abbreviations are listed alphabetically for each date. Lower case letters represent geographic subdivisions: s = southern, e = eastern, n = northern, w = western, p = panhandle, and c = coastal. Similar listings were prepared for all calendar months from 1982 through 1994.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Example of winter weather event listings from the period 1–8 January 1994 (a) after second data pass and (b) after third data pass. See text for definitions of element headings (HS, BS, ZR, etc.). Asterisks (dashes) indicate that the given element was (was not) reported. Area and duration classifications and ratings (below area, AR, and DR; see text) (a) were not assigned in the second pass, (b) but were added in the third pass.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Areas in which significant winter weather was reported in Storm Data between 1 and 8 January 1994. Shaded areas enclosed by heavy solid lines are areas in which winter storm conditions (heavy snow, ice storm, or blizzard conditions; see appendix A) were reported. Heavy dashed lines enclose areas where winter weather was reported but did not meet winter storm criteria. Localized reports (e.g., specific snowfall amounts) that met winter storm criteria, but were not part of a larger area of winter weather, are denoted by ×.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Sectional maps showing breakdown of areas in Fig. 3 into individual events after third-pass analysis of SD entries, 1–8 January 1994. Format as in Fig. 3. Numbers refer to calendar dates of respective events.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Smoothed graphical outlines of winter weather events in Fig. 4. Format as in Fig. 4.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Yearly totals of reported winter weather events and event days, 1982–94.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Event and event day frequencies by month, 1982–94.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Percentage of winter weather days as a function of calendar day, based on 7-day running averages, 1982–94. Numbers for a given day represent the percent of days that significant winter weather occurred somewhere within the conUS on that day during the period 1982–94.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Distribution of winter storms by size of area affected by winter storm conditions. Bars represent percentages with respect to all events; plotted values are percentages with respect to winter storms only. “Adv” represents advisory events (i.e., those that did not meet NWS winter storm criteria). Area rating categories are defined in appendix B. For reference, the states of Vermont and Kansas lie near the lower and upper limits of category 2, respectively.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Distribution of winter weather events by maximum local duration. Categories refer to the maximum reported duration at a given location, as defined in appendix B. Percentages are plotted above each bar.
Citation: Weather and Forecasting 12, 2; 10.1175/1520-0434(1997)012<0193:ACOSWT>2.0.CO;2
Summary of winter weather event data, 1982–94. Note that sums of percentages by type add up to more than 100%, since many events contained multiple hazards (e.g., heavy snow and significant ice).
Note that the NWS definition of a ‘blizzard’ requires that these conditions persist for at least 3 h.