Through the Eyes of the Experts: Meteorologists’ Perceptions of the Probability of Precipitation

Alan E. Stewart, Castle A. Williams, Minh D. Phan, Alexandra L. Horst, Evan D. Knox, and John A. Knox

Department of Counseling and Human Development and Department of Geography, University of Georgia, Athens, Georgia

Abstract

Prior surveys of the public indicated that a variety of meanings and interpretations exist about the probability of precipitation (PoP). Does the same variety of meanings for the PoP exist among members of the professional atmospheric science community? What do members of the professional community think that the public should know to understand the PoP more fully? These questions were examined in a survey of 188 meteorologists and broadcasters. Meteorologists were observed to express a variety of different definitions of the PoP and also indicated a high degree of confidence in the accuracy of their definitions. Differences in the definitions stemmed from the way the PoP was derived from model output statistics, parsing of a 12-h PoP over shorter time frames, and generalizing from a point PoP to a wider coverage warning area. In this regard 43% of the online survey respondents believed that there was no or very little consistency in the definition of PoP; only 8% believed that the PoP definition has been used in a consistent manner. The respondents believed that the PoP was limited in its value to the general public because, on average, those surveyed believed that only about 22% of the population had an accurate conception of the PoP. These results imply that the atmospheric science community should work to achieve a wider consensus about the meaning of the PoP. Further, until meteorologists develop a consistent conception of the PoP and disseminate it, the public’s understanding of PoP-based forecasts may remain fuzzy.

Corresponding author address: Alan E. Stewart, Dept. of Counseling and Human Development, University of Georgia, 402 Aderhold Hall, Athens, GA 30602. E-mail: aeswx@uga.edu


1. Introduction

Precipitation probability forecasts became available to the public in the United States nearly 50 years ago (Murphy 1998). The probability of precipitation (PoP) forecast conveys the likelihood that measurable precipitation (≥0.01 in.) will occur at any given point in the forecast area within a particular time frame, usually 12 h (National Weather Service 1984, part C, chapter 11). Probabilistic forecasts have the advantage of quantitatively communicating the degree of uncertainty about the precipitation event compared to categorical forecasts (Murphy et al. 1980).
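To make the operational definition concrete, the following minimal Python sketch checks whether a single hypothetical point observation record would verify such a forecast (i.e., whether at least 0.01 in. accumulated at the point during a 12-h window); the observation format and threshold handling are illustrative assumptions, not an operational NWS verification procedure.

```python
# Illustrative check of the PoP event definition given above: did measurable
# precipitation (>= 0.01 in.) occur at the point during the 12-h window?
# The (timestamp, inches) observation format is a hypothetical stand-in.
from datetime import datetime, timedelta

def pop_event_occurred(point_obs, window_start, window_hours=12, threshold_in=0.01):
    window_end = window_start + timedelta(hours=window_hours)
    total_in = sum(amount for t, amount in point_obs if window_start <= t < window_end)
    return total_in >= threshold_in

# Example: 0.02 in. recorded at 1500 UTC verifies a window starting at 1200 UTC.
obs = [(datetime(2015, 6, 1, 15), 0.02)]
print(pop_event_occurred(obs, datetime(2015, 6, 1, 12)))  # True
```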

Shortly after the widespread use of the PoP in forecasts for the public, Allan Murphy, Robert Winkler, and their colleagues began a series of studies aimed at examining what forecasters and the general public understood about PoP forecasts. In a small study of meteorologists from the Travelers Weather Service, Murphy and Winkler observed that the forecasters differed in their interpretations of what constituted a precipitation event, and some forecasters were uncertain about the meaning and roles of probabilities in the forecasts (Murphy and Winkler 1971a). Building on this survey and examining both the conceptual and practical perspectives on the use of the PoP, Murphy and Winkler (1971b, p. 246) concluded that “[a]n operational recommendation which seems almost imperative is the clarification of the definition of precipitation and the interpretation of probabilities, both for the forecaster and for the public… and it is necessary to have a clear definition of a ‘precipitation probability,’ one that is meaningful both to the forecasters and to the public.” These observations were largely reinforced by the results of a survey of 689 National Weather Service (NWS) forecasters (Murphy and Winkler 1974). The data Murphy and Winkler collected suggested that forecasters at the time tended not to think in subjective probability terms (i.e., the likelihood of precipitation) but instead conceptualized the PoP from an objective perspective that involved relative frequencies (i.e., the proportion of times that the synoptic conditions like those being forecasted produced precipitation). Murphy and Winkler also reported that forecasters did not seem to understand fully the relationship between PoPs of different time periods. Perhaps most importantly, the questionnaire data indicated that forecasters expressed different preferred definitions of both a precipitation event and of the probability of precipitation. In a small study of Oregon residents, Murphy et al. (1980) found that the public did not understand what constituted a precipitation event, especially the point versus areal aspects of an event. The researchers also observed that while the 79 questionnaire respondents preferred probabilities to express uncertainty about precipitation, people frequently misunderstood the probability information they received.

Other researchers primarily have focused upon the public’s understanding, interpretation, and use of the PoP. Sink (1995) observed that although people favored quantitative probability information, they did not understand these probabilities better than verbal descriptors and did not correctly associate given probability levels with such descriptors. Wallsten et al. (1986) similarly observed that people did not consistently associate verbal descriptors with quantitative probabilities within a general context. Vislocky et al. (1995) argued that quantitative probability forecasts provided greater information for decision-making purposes than verbal narratives. They also suggested that an explicit numerical statement of the PoP be made that is separate from any forecast narratives.

More recently, Morss et al. (2008) observed that the public in the United States seldom knew the meteorologically correct definition of the PoP. Nonetheless, consistent with other studies, the public preferred that a PoP be communicated and appeared to derive some heuristic value from PoP forecasts despite using idiosyncratic or grossly incorrect definitions of the PoP. Such a “good enough” use of the PoP is somewhat consistent with the subjective probability of precipitation discussed by Murphy and Winkler (1971b), in which increasing values of PoP simply indicated a greater likelihood of precipitation. Joslyn et al. (2009) similarly reported that university undergraduates in the Pacific Northwest region of the United States largely did not understand the PoP. The research participants erroneously believed that the PoP pertained to amount of areal coverage or the proportion of the day in which precipitation would occur. A study of university students in the United Kingdom revealed, consistent with the prior research, that they had a variety of generally inaccurate interpretations of the PoP (Peachey et al. 2013).

Joslyn and Savelli (2010) indicated that although people wanted uncertainty information in forecasts, this information needed to be provided explicitly and in ways that left little room for end users to equivocate about the meaning of the uncertainty. In this regard, the provision of a reference class for precipitation probability forecasts appears especially important. A percentage in a PoP forecast is most useful when the percentage is accompanied by a point of comparison that anchors the intended meaning of the forecast (e.g., reference classes could be the probability of precipitation on days like those forecasted for today or the probability of rainfall sometime during the next 12 h; Gigerenzer et al. 2005; Joslyn et al. 2009).

Beyond the earlier work of Murphy and Winkler (1971a,b, 1974), relatively few studies have examined the perspectives of contemporary meteorologists regarding the definition, understanding, and uses of the PoP. For example, O’Hanrahan and Sweeney (2013) surveyed 15 meteorologists from the Met Office and reported that they often differed in their definitions and uses of probabilistic products in preparing forecasts. Thus, our purposes in this project were to ask a range of meteorologists who have some degree of experience with the PoP as modelers, operational forecasters, broadcasters, and so forth, to 1) convey their training and experiences in using the PoP, 2) provide their working definitions of the PoP and their degree of confidence in their definitions, 3) share their views about the degree of consistency with which PoP is defined in the atmospheric sciences, 4) provide estimates of the proportion of the general public that seems to have an accurate definition of the PoP, and 5) suggest possible alternatives to the PoP that might better convey the degree of uncertainty that exists for precipitation events and what people need to know so that they have a better understanding of the PoP. In pursuing these purposes, we were interested in the extent to which meteorologists’ length of professional work experience, training in general probability and statistics, and training specifically in the PoP might be related to the responses that they provided. In the sections that follow we describe the creation of a questionnaire to elicit meteorologists’ perceptions of the PoP, the deployment of the survey via the Internet, and the methods of data analysis used. We then present the results of the survey and discuss the similarities and differences that we observed compared to the results that Murphy and Winkler (1971a,b, 1974) reported nearly 40 years ago.

2. Method

This study was reviewed and approved prior to its beginning in July 2013 by the University of Georgia Institutional Review Board (IRB).

a. Participants

The demographic information for the 188 survey respondents appears in Table 1. The participants’ mean age was 46.0 yr [standard deviation (std dev) = 11.1 yr], with a range spanning from 25 to 70 yr. The 21 female participants (mean M = 40.0 yr, std dev = 12.7 yr) were significantly younger than the male participants [M = 46.7 yr, std dev = 10.7 yr, t(185) = 2.55, p = 0.008, and 95% confidence interval (CI) = 6.72–11.71 yr]. The participants reported nearly a quarter-century of work experience (M = 23.2 yr, std dev = 10.7 yr); the length of work experience ranged from 4 to 45 yr. Women (M = 18.3 yr, std dev = 12.9 yr) reported about 5 yr less work experience on average than men [M = 23.8 yr, std dev = 10.3 yr, t(185) = 2.12, p = 0.029, and 95% CI = 5.5–10.6 yr]. Approximately one-half of the participants reported working in academic settings or the public sector (e.g., National Weather Service); the remainder of the participants reported private sector employment. We also inquired about the scope of the respondents’ professional work activities; here, multiple activities could be selected on the online survey. Approximately 61% of the participants reported that their professional work involved broadcast meteorology and/or forecasting and operational meteorology (see Table 1). Forecasting and operational meteorology were most frequently mentioned in combination with consultation and decision support activities (54 respondents, 28.6%), aviation meteorology (49 respondents, 25.9%), research (43 respondents, 22.8%), and forecast verification (39 respondents, 20.6%). Broadcast meteorology activities were most frequently mentioned in combination with forecasting and operational meteorology (57 respondents, 30.1%) and with consulting meteorology (26 respondents, 13.8%). These combinations of work settings and professional activities suggested that the survey respondents were very involved in both generating and disseminating weather forecast products.
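As a hedged illustration of the two-sample comparisons reported in this paragraph, the sketch below runs a pooled-variance t test with SciPy on synthetic stand-in samples drawn from the reported means and standard deviations; it is not the study data or the authors’ analysis code, and the male sample size is an assumption.

```python
# Illustrative two-sample t test of the kind reported above; the samples are
# synthetic stand-ins generated from the reported summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ages_women = rng.normal(40.0, 12.7, 21)    # reported mean, std dev, and n for women
ages_men = rng.normal(46.7, 10.7, 166)     # assumed male sample size for illustration

t_stat, p_value = stats.ttest_ind(ages_men, ages_women)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")
```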

Table 1. Descriptive statistics for study participants. For the scope of professional work activities, respondents could select more than one professional work activity; therefore, the total number of responses in the work activity categories is greater than the sample size of 188.

b. Survey instrument

The online survey consisted of 15 questions that assessed the respondents’ demographic information; length of work experience in the atmospheric sciences; work settings and activities; and formal and informal education in probability, statistics, and training concerning the probability of precipitation. We inquired about the respondents’ definitions of the PoP and also asked them to provide a rating of their confidence in their own definitions. After inquiring about the respondents’ perceptions of consistency in the way the PoP is defined in the atmospheric science community, we asked them to estimate the proportion of the public that has a technically accurate idea of the PoP. The survey concluded with two open-ended questions concerning the type of knowledge respondents thought that the public should have to understand the PoP and any suggested alternatives to the PoP for conveying the likelihood of precipitation to the public. The survey items and response options (for closed-ended items) appear in the appendix to this article. We deliberately designed the survey to be brief to increase the likelihood of its completion and to minimize the time and effort required for respondents. Because we distributed the survey to professional work e-mail addresses, we were cognizant that many respondents would complete the survey while at work; this was another incentive to keep the survey brief.

c. Procedure

We developed an e-mail list, beginning with meteorologists employed with the National Weather Service. Meteorologists in charge (MICs) were identified from NWS offices in each state. Professional meteorologists with membership in the American Meteorological Society were chosen based on a range of backgrounds that included certified broadcast meteorologists; atmospheric scientists with noted interests (e.g., publications, presentations, and prior work and/or research experience) in statistics, modeling, and precipitation forecasting; and/or individuals interested in societal aspects of weather forecasting. The list of potential participants was chosen to represent a range of diverse roles and activities within meteorology that pertained to various aspects of the PoP, such as modeling, statistics, operational meteorology, and broadcast meteorology. We launched the survey on 25 July 2013 by sending an e-mail message and the survey link to the pool of 494 potential participants identified. The message provided a brief overview of the nature and purpose of the survey and functioned as the IRB-approved informed consent document for the study. The message contained a link to the survey, which was administered through the SurveyMonkey online survey platform. The survey was designed to be anonymous in order to minimize positive self-presentation biases and to increase the likelihood of the survey’s completion. For the same reasons, given that we sent the survey to specific NWS offices, we did not inquire about the respondents’ locations, as this could have made it possible to identify those who returned their completed surveys to us. Some of the potential participants that we e-mailed sent automated replies indicating that they were temporarily away or unavailable. For this reason, and to increase the response rate to the survey, we sent a reminder e-mail about the survey to all people on the original e-mail list weeks after the initial survey launch. We closed the survey on 1 October 2013. At the close of the survey we had received 188 usable (complete) responses, yielding a survey return rate of 38.1%.

d. Data analysis

We calculated descriptive statistics for the respondents’ demographic information; work settings and activities; and their prior training in probability, statistics, and training in the PoP. We content coded the open-ended replies that the respondents provided concerning their definitions of the PoP, what consumers knew about the PoP, and potential alternatives to the PoP. After reviewing these open-ended responses initially, a team of three raters agreed upon the themed categories that encompassed the responses. The three coders then worked independently to apply the codes to the responses. We evaluated the consistency of the coding with Krippendorff’s α statistic (Hayes and Krippendorff 2007). Krippendorff’s α can vary from 0 to 1, with higher values indicating a greater degree of consistency in how the independent raters applied the codes to each response. Higher interrater consistency means that the raters reached a consensus about the meaning of the responses.
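The authors cite the Hayes and Krippendorff (2007) procedure; as one hedged illustration of the statistic itself, the sketch below computes Krippendorff’s α for nominal codes using the third-party Python krippendorff package (an assumption about tooling, not the software used in the study) on an invented coding matrix.

```python
# Illustration of Krippendorff's alpha for three coders assigning nominal codes.
# Requires the third-party package: pip install krippendorff numpy
# The coding matrix is invented for demonstration; np.nan would mark a missing code.
import numpy as np
import krippendorff

reliability_data = np.array([
    [1, 2, 3, 3, 2, 1, 4, 1, 2, 5],   # coder 1's category codes for 10 responses
    [1, 2, 3, 3, 2, 2, 4, 1, 2, 5],   # coder 2
    [1, 2, 3, 1, 2, 1, 4, 1, 2, 5],   # coder 3
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```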

3. Results

a. Training in probability and statistics and the PoP

Table 2 depicts the proportion of the survey respondents that completed formal and/or informal (i.e., on the job) training in probability and statistics. A majority of the respondents had completed at least one formal course in probability and statistics as part of their undergraduate or graduate training in the atmospheric sciences. Over one-half of the respondents had completed on-the-job training in probability and statistics, with 87 (46.3%) reporting that they had both formal and informal training. These results are not surprising given the emphasis on calculus and statistics that exists in many atmospheric science or meteorology curricula.

Table 2. Respondents’ training in probability and statistics. The original survey questions included the option of “not sure” for the formal course work and on-the-job training items. Because so few people chose this option for the two questions depicted in this table, we combined the not sure responses with the “no” responses.

We asked the respondents to provide an open-ended description of their formal or informal training about the PoP; the authors content coded the participants’ responses. The degree of interrater agreement, as assessed via Krippendorff’s α, for the content coding on this survey item was acceptable (α = 0.81, 95% CI = 0.75–0.86). Slightly over half of the respondents did not report any training in the PoP (count n = 97, 51.6%). The most frequently reported modality of PoP-related training was a formal course in school (n = 32, 17.0%), followed next by a National Weather Service course or module on the PoP (n = 19, 10.1%). Other PoP-related learning activities included discussions or conversations about the PoP (4.8%), on-the-job training (3.7%), reading literature on one’s own (3.7%), or completing a COMET module (2.7%). Twelve respondents (6.4%) indicated informal or indeterminate training on the PoP. Of the 87 respondents in Table 2 who reported both formal coursework and on-the-job training in probability and statistics, 40.2% indicated they had received no specific PoP training, 19.5% covered the PoP in a university-based course, and 13.8% had completed an NWS course or module on the PoP. Anecdotally, several of the respondents who had completed an NWS course made specific mention of the NOAA technical report “Probability forecasting—Reasons, procedures, problems” by Hughes (1980). There were nine respondents (4.8%) who reported no training in probability and statistics (either formal or on the job) and who also reported receiving no training in the PoP.

b. Respondents’ definitions of PoP

All but three people provided responses to the open-ended inquiry of “what does the probability of precipitation mean to you? Please provide your definition of it.” The authors first examined the wide variety of responses to this item and agreed on a content-coding classification strategy to group the responses according to their basic meanings. Three of the authors independently coded the definitions of PoP based upon the initial categories that we developed. The overall interrater agreement was good: α = 0.85 and 95% CI = 0.82–0.89. In cases where ambiguous or lengthy responses resulted in different content classifications, the authors consulted to resolve those issues and reach agreement on the meaning of the response. The thematic categories, example responses, and the frequency of use for each category are provided in Table 3.

Table 3. Respondents’ definitions of the probability of precipitation. In the example column, quotations indicate responses given by the survey participants. In the confidence in definition column, M is the mean of the respondents’ ratings on a four-point scale ranging from 1 = not confident at all to 4 = completely confident.

One of the two most frequent responses (26.2% of respondents) we encountered generally approximated the National Weather Service’s established definition of PoP, which for the purposes of this article we are taking to be the technically correct definition: the probability of at least 0.01 in. of precipitation occurring somewhere in the forecast area within a given time frame (National Weather Service 1984). Among the five thematic categories, this one possessed the lowest amount of variability, primarily because of the very specific nature of the NWS definition. For example, the following definition is nearly equivalent, but somewhat less precise: “The percentage chance that a specified location will get measurable precipitation during a set time period.” Another definition in this category was a bit more elaborated and represented an upscaling from a point within a forecast area to a larger area: “The NWS PoP forecast represents the probability of .01″ of precipitation (measurable) at a given location during the specified time period (usually 12 hours)… . It is considered to be equivalent to the areal coverage of .01″ being observed over a specified region. For example, a 20% PoP means a 20% chance of .01″ at a point and/or 20% of the specified area will see .01″ of precipitation… forecasters frequently misuse the definition or use the PoP forecast to imply greater or lesser ‘amounts’ of precipitation.”

The second most common type of definition the respondents provided (mentioned by 26.2%) involved the likelihood of precipitation occurring over a wider area [e.g., city or county warning area (CWA)]. The definitions in this category differed from those discussed above in that they tended to prioritize aspects of areal coverage (over occurrence of precipitation at a given point) and were somewhat less precise concerning either amounts of precipitation or the occurrence of precipitation within a particular time frame (e.g., 12 h). A representative PoP definition within this category was: “From 0 to 100% with 0 meaning no chance of precipitation in your immediate area to 100% meaning definite precipitation in your immediate area.” Another such response was: “Probability of precipitation with 25 miles of a point with a forecast area.” Some respondents, however, tended to equate the PoP with the area that would be receiving precipitation: “The percentage of the area that will receive measurable rainfall.” Other responses tended to encompass both areal and point interpretations of the PoP: “If we have a 30%—that means 30 percent of the area will see measurable precipitation; maybe for just one minute. If we are speaking about gridded forecasts; then for each grid point there is a 30 percent chance the grid point will see measurable precipitation.”

The third category of definitions, mentioned by 24% of the respondents, contained components of both the temporal dimension (timing or duration of the precipitation) along with the areal or spatial extent that would be affected by the precipitation. To some degree, the definitions in this category appeared to be an amalgam of the meanings in the two aforementioned categories, but they were also less specific regarding the amount of precipitation. A straightforward example in this category was: “Formal measure of the likelihood of precipitation within a spatial region and within a given time period.” Another definition conveyed a more uncertain notion of area: “The chance precipitation will fall at a certain point, in a somewhat ‘undefined’ area, over a defined period of time (usually 12 hours).” In contrast, some respondents clearly indicated that areal coverage was not to be equated with the PoP: “The probability of precipitation over a defined region in a defined period. A 40% PoP means there is a 40% chance of precipitation at one point within a defined geographic area not a 40% ‘coverage’ of precipitation over that defined area.”

The fourth category of definitions (12% of respondents) reflected the more basic or fundamental conceptions of the PoP and how it is forecasted. At the heart of these definitions was the question: Given the current and forecasted synoptic conditions, how frequently in the past have such conditions resulted in precipitation? Examples of the definitions in this category included: “Given 100 weather patterns of similar nature, the number of patterns that produce measurable precipitation at a given point for a specified time period.” Similarly, “PoP is the number of times, given 100 similar situations (similar atmospheric states), that your location will receive measureable precipitation.” Other definitions, while seeming to acknowledge the type of synoptic conditions, emphasized the days or instances of the pattern: “A 20% chance of rain means that in cases like this, 2 out of 10 days will have rain,” or “The fraction of days in which precipitation occurs, when conditions are identical or close to the current ones.” These frequentist-based conceptions of the PoP, while operationalized in models to produce PoP MOS, do not map onto how a PoP forecast is verified using the NWS definition as discussed in the first category of responses (i.e., whether a precipitation event that produced at least 0.01 in. occurred somewhere in the forecast area during a given time).
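The frequentist reading described in this category can be sketched as a relative frequency over hypothetical analog cases; the analog precipitation amounts below are invented for illustration only.

```python
# Relative-frequency reading of the PoP described above: the fraction of
# hypothetical analog cases (similar synoptic patterns) that produced
# measurable precipitation (>= 0.01 in.) at the point of interest.
analog_precip_in = [0.00, 0.02, 0.00, 0.15, 0.00, 0.01, 0.00, 0.00, 0.33, 0.00]

pop_frequentist = sum(amt >= 0.01 for amt in analog_precip_in) / len(analog_precip_in)
print(f"Relative-frequency PoP: {pop_frequentist:.0%}")  # 40% for this invented set
```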

The final category of definitions was specific to the Schaefer and Livingston (1990) operationalization of the PoP as incorporating both the forecaster’s confidence in precipitation occurring and the areal extent of the precipitation. Exemplary definitions in this category included “PoP = C × A, where C = the confidence that precipitation will occur somewhere in the forecast area, and A = the percent of the area that will receive measurable precipitation, if it occurs at all.” A similar definition included: “I understand probability of precipitation as the chance of rainfall in a given area or region, taking into account overall coverage and forecaster confidence. The value is expressed as a percentage.” The semantic variability of the responses classified in this category was somewhat lower, given the rather specific formulation of PoP provided by Schaefer and Livingston (1990).
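Because the Schaefer and Livingston (1990) formulation quoted above is an explicit product of confidence and areal coverage, it can be written directly as a short function; the numerical example is ours, chosen only to illustrate the arithmetic.

```python
# PoP = C x A, as in the Schaefer and Livingston (1990) formulation quoted above:
# C = confidence that measurable precipitation occurs somewhere in the area,
# A = expected fraction of the area receiving measurable precipitation if it does.
def pop_schaefer_livingston(confidence, areal_coverage):
    return confidence * areal_coverage

# Example: 80% confidence and expected coverage of half the area -> 40% PoP.
print(f"{pop_schaefer_livingston(0.8, 0.5):.0%}")
```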

We observed several qualitative differences in the respondents’ preferred definitions of the PoP according to the scope of their professional work activities as given in Table 1. Broadcast meteorologists most frequently provided PoP definitions that involved the spatial and temporal aspects of precipitation (third category in Table 3; 28% of broadcasters). The second most common conception of the PoP for broadcast meteorologists was one that emphasized the spatial aspects of precipitation (second category in Table 3; 24% of broadcasters). Operational meteorologists rather evenly identified their preferred conception of the PoP as encompassing the spatial and temporal aspects of precipitation with measurement (first category in Table 3; 28% of operational meteorologists) or definitions that emphasized the areal aspects of precipitation (second category in Table 3; 28% of operational meteorologists). We observed a similar preference of PoP definitions among those working in aviation meteorology. Meteorologists working in consultation and decision support preferred the spatial and temporal with measurement definitions of PoP (first category in Table 3; 30% of consultation and decision support meteorologists) and secondarily cited the spatial and temporal definition of PoP (third category in Table 3; 29% of consultation and decision support meteorologists).

c. Confidence in PoP definitions

We asked the respondents to provide a rating of their degree of confidence in the definition that they provided. We used a four-point fully anchored rating scale (from 1 = not confident at all to 4 = completely confident). The mean values of the respondents’ confidence ratings as a function of membership in one of the five thematic categories are provided in Table 3. Perhaps most noteworthy about these summary statistics is that, despite the wide variability in the definitions, the respondents were uniformly confident in the definitions of the PoP that they provided. Although the ratings of confidence were numerically the highest for the first category, which included the standard and technically correct definition of PoP, no statistically significant differences were observed among the five means [F ratio F(4, 176) = 1.80, p = 0.13]. The respondents generally were between somewhat confident and completely confident in their definitions (M = 3.44, std dev = 0.64). One respondent was not confident at all in the definition of PoP he or she provided, and 12 respondents (6.9%) expressed only a little confidence in their definitions. Almost all of the respondents expressing this lower degree of confidence in their definition of the PoP reported working as broadcast meteorologists.
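For readers unfamiliar with the test behind the F statistic reported above, the sketch below runs a one-way ANOVA across five groups with SciPy; the rating lists are invented stand-ins, not the survey responses.

```python
# Illustrative one-way ANOVA comparing confidence ratings across the five
# definition categories; the ratings below are invented stand-ins.
from scipy import stats

ratings_by_category = [
    [4, 4, 3, 4, 4, 3, 4],   # category 1 (NWS-style definitions)
    [3, 4, 3, 3, 4, 2, 4],   # category 2
    [3, 3, 4, 3, 4, 3],      # category 3
    [4, 3, 3, 4, 2],         # category 4
    [3, 4, 4, 3],            # category 5
]

f_stat, p_value = stats.f_oneway(*ratings_by_category)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")
```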

d. Perceived consistency in PoP definitions

We were interested in finding out the extent to which the respondents perceived consistency in the way the PoP is defined within the atmospheric science community. We asked them to use a four-point fully anchored rating scale (from 1 = no consistent definition at all to 4 = very consistent definition exists). The mean for this question was 2.54 (std dev = 0.69), which fell between the very little and somewhat consistent options. Both the median and modal responses were 3.0 (somewhat consistent definition). The respondents’ acknowledgment of only little to moderate consistency in how the PoP is defined parallels the variety of PoP definitions we encountered and described in Table 3. In addition, there was a slight but statistically significant association between the respondents’ confidence in their own definitions and their perception of consistency in the definition of the PoP among atmospheric scientists (Spearman rank-order correlation rs = 0.30, 95% CI = 0.08–0.50).
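The association reported above is a Spearman rank-order correlation; a minimal SciPy sketch on invented paired ratings (not the survey data) is shown below.

```python
# Illustrative Spearman rank-order correlation between confidence in one's own
# PoP definition and perceived community-wide consistency; values are invented.
from scipy import stats

confidence_ratings = [4, 3, 4, 2, 3, 4, 3, 2, 4, 3]
consistency_ratings = [3, 2, 3, 2, 3, 3, 2, 1, 3, 3]

rho, p_value = stats.spearmanr(confidence_ratings, consistency_ratings)
print(f"Spearman rs = {rho:.2f}, p = {p_value:.3f}")
```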

e. Perceptions of the public’s idea of PoP

We asked the survey respondents to estimate the proportion of public consumers of weather forecast products that possess a technically accurate idea of the PoP. This was an interesting question to pose for several reasons, the first of which is that each respondent’s estimate likely reflected the proportion of the public that has an understanding of the PoP that is consistent with his or her own. Second, the responses to this question revealed the extent to which the respondents believed the public understood a very common forecast product. The distribution of the respondents’ estimates is shown in Fig. 1. The estimates of the proportion of the consuming public that had a technically accurate idea of the PoP ranged from 1% to 91%, where M = 22.68% and std dev = 18.19%. The median response was 16%, and the modal response was 11%. We also examined the estimates of the respondents who provided a PoP definition that was in accord with the National Weather Service definition of PoP (see the first row of Table 3). These respondents with a technically correct definition of the PoP estimated that between 2% and 71% of the public had a correct understanding of the PoP (M = 23.17%, std dev = 16.46%). The median was 21%.

Fig. 1. Respondents’ estimates of the percentage of the public that has a technically accurate idea of the probability of precipitation.

There was a weak but statistically significant relationship between a respondent’s degree of confidence in his or her definition of the PoP and the proportion of the public that the respondent estimated to understand the PoP (rs = 0.24, 95% CI = 0.029–0.46). A more noteworthy result was that the degree of consistency perceived in the atmospheric science community’s conception of the PoP was significantly related to the proportion of the public that was estimated to have a technically accurate understanding of the PoP (rs = 0.45, 95% CI = 0.27–0.59). That is, the more a respondent believed that a consistent definition of the PoP existed in the meteorological community, the greater the proportion of the public that the respondent believed possessed an accurate understanding of the PoP.

f. What does the public need to know to properly understand the PoP?

Given their views that few consumers understood the meaning of the PoP, we asked the respondents to convey what they thought the public needed to know to develop a proper understanding of the PoP. Most of the respondents (n = 176, 93.1%) provided responses. In coding the responses to this question, the overall interrater agreement was good (α = 0.90, 95% CI = 0.87–0.93). Table 4 shows the category themes, examples of responses for each theme, and the frequency with which the respondents mentioned each theme.

Table 4. What should the public know to have a proper understanding of the PoP? In the example column, quotations indicate responses given by the survey participants. In the n (%) column, there were 176 (93.1%) respondents of the 188 who answered this question.

The respondents cited a broad and varied knowledge base that would enable the public to develop a proper understanding of the PoP. The most frequent recommendation (27% of the respondents) was that people should know how forecasters define the PoP. Some respondents emphasized providing an accurate and succinct definition while others, especially broadcast meteorologists, cited the need for a consistent and broadly agreed-upon definition. A majority of respondents who mentioned this issue provided definitions of the PoP that emphasized the probability of precipitation over an area (i.e., scaling up from a point probability) or the probability of precipitation at a given point for some time interval.

The second most frequent category was related to the first theme and conveyed that people would benefit from understanding the meanings (explicit or implied) that are associated with particular PoP percentages. That is, how should users interpret the PoP percentages? Many respondents cautioned that the public should remember that a 20% PoP, for example, does not mean no rain and that, conversely, a 100% PoP does not guarantee that all locations in a forecast area will receive rainfall, although forecasters are certain at least one point in the area will. In the third thematic category, we observed a somewhat disconcerting result: about 10% of the respondents were doubtful about the utility of the PoP for the public or about the public’s ability to understand the PoP. In the fourth thematic category, approximately 9% of the respondents indicated the public should know about the areal extent of a PoP forecast and, relatedly, that the PoP refers to areal coverage of the forecasted precipitation. This recommendation contradicted what other respondents conveyed in the fifth category, namely, that the PoP is for a point and not for an area and, further, that the PoP does not correspond to areal coverage of the precipitation. Another group of survey respondents (Table 4, eighth theme) indicated that the public needs to know or distinguish between the point and areal concepts as these are used in the PoP forecast. A smaller number of respondents indicated that the public should know about the role of time in the PoP forecast; how PoP forecasts are verified; and that the PoP does not carry implications for the intensity, amount, or duration of the precipitation.

g. Alternatives to the use of the PoP

The final question in our survey asked the participants if they could think of alternatives to the PoP for conveying the likelihood of precipitation. There were 130 (69.1%) respondents of the 188 who suggested possible alternatives. In coding the responses to this question, the interrater agreement for this item was acceptable (α = 0.84, 95% CI = 0.79–0.89). Table 5 shows the category themes, exemplary responses for each category, and the proportion of the respondents whose suggested alternatives were within each topic.

Table 5. Respondents’ suggested alternatives to the PoP for conveying the likelihood of precipitation. In the example column, quotations indicate responses given by the survey participants. In the n (%) column, there were 130 (69.1%) respondents of the 188 who suggested possible alternatives to the use of the PoP.

Nearly one-half of the respondents to this item suggested greater use of qualitatively descriptive words and phrases for conveying the likelihood of precipitation. Twenty-six percent of the people who suggested this alternative also indicated, in response to the previous question, that the public should have a clearer definition of the PoP. This suggestion (i.e., descriptive words and phrases) appeared evenly among respondents of differing professional backgrounds. Another 18.5% of the respondents suggested supplying consumers with additional explanations about the PoP and using graphical methods for communicating the probability. The third most frequent suggestion was to emphasize information about the spatial distribution of forecasted precipitation, rather than probabilities for a grid point. This option was appealing to broadcast meteorologists whose audiences are widely dispersed throughout a viewing area. Another 8.5% of the respondents suggested greater reliance upon quantitative precipitation forecasts (QPFs) for conveying the probability of different amounts of precipitation over different time frames. The remaining 7% of the respondents mentioned providing some indication of forecaster confidence in the likelihood of the precipitation forecast, the use of odds ratios, and yes/no deterministic kinds of products as alternatives to the PoP.

4. Discussion

Our results reveal that there is no single definition of the probability of precipitation, even among the experts who know the most about the subject and who create and use PoP forecasts on a regular basis. Most noteworthy was the finding that 73.8% of the respondents expressed an understanding of the PoP that differed, in various ways, from the standard operational definition of the National Weather Service (i.e., the probability of measurable precipitation, ≥0.01 in., at a single point over a definite time period). Despite believing in the relative accuracy of their definitions, the respondents acknowledged that little consistency existed in the ways that the meteorological community defined the PoP. Further, they cited the need for greater definitional clarity and consistency so that the public can develop a better understanding of the PoP. In this regard, the 26.2% of respondents who provided a technically accurate definition of the PoP estimated that a comparable proportion of the public (23.2%) possessed a technically accurate understanding of the PoP. With this diversity of PoP meanings within the meteorological community, to what extent is it reasonable to expect that the public’s understanding of the PoP will improve?

Our results were remarkably consistent with what Murphy and Winkler (1974) reported regarding the ways that meteorologists defined precipitation events within the context of the PoP: “Different forecasters prefer different definitions of a precipitation event and of a precipitation probability, and, as a result, they often use different definitions in connection with their PoP forecasts” (p. 1451). Murphy and Winkler also acknowledged that the lack of a definitional consensus may lead to confusion among the public:

The fact that the forecasters perceive some confusion on the part of the public with regard to probability forecasts and the fact that the forecasters themselves exhibit some confusion both suggest that some confusion undoubtedly exists among members of the general public. Thus, we believe that a need also exists to initiate a program to educate the general public and members of specific user groups with regard to the purposes, meaning, and the use of probability forecasts (Murphy and Winkler 1974, p. 1452).

These authors also recommended that forecasters receive training in probability and statistics, especially as this relates to conveying uncertainty about different meteorological events. Our data suggested that most respondents (88.8%) had at least one formal course in probability and statistics as part of their training. Approximately 53% of them reported receiving on-the-job training in probability and statistics or PoP. Only 6.7% of our sample did not have any formal coursework or on-the-job training in this area. Thus, in contrast to the implications of Murphy and Winkler, possessing knowledge of probability and statistics in general has not necessarily brought clarity or consistency to the use of the PoP among members of the weather community over the last 40 years.

It is important to bracket our findings with respect to the sample that we surveyed: meteorologists in charge, broadcast meteorologists, atmospheric scientists with interests and expertise in precipitation modeling and forecasting, and those with interests in the societal aspects of forecasting. We did not focus as much on recruiting science operations officers, warning coordination meteorologists, or lead forecasters from the National Weather Service. Thus, our findings may not generalize to all professional forecasters.

While conducting this project, we became curious about the deeper meaning of the likelihood conveyed by the PoP. In speaking with some forecast modelers as we prepared the survey and in communicating with some respondents, we wondered whether additional clarity or consistency would be gained by understanding the method by which numerical models calculate PoP values. That is, would a further understanding of the PoP computational algorithm (i.e., its assumptions, variables, methods of calculation, and so forth) lead to a deeper understanding of the product and how it could be used in various ways? Many of our respondents were able to provide a definition of the PoP (see Table 3), but very few people seemed to know what went into generating the product. It would be interesting to observe the extent to which providing this background knowledge of the PoP algorithm might change the ways in which they use and provide consultation about the product.

Still, we are left with the interesting result that although forecast model performance and forecaster skill have increased and professional training in atmospheric science has advanced, forecasters’ knowledge and use of the PoP do not reflect this progress. The interpretation and use of the PoP have changed little over the 40 years since the Murphy and Winkler (1974) paper. What influences may have contributed to this outcome?

The steadfast fuzziness in the definition of the PoP, among both meteorologists and the public, may stem from the questions that people ask of the PoP product. Our survey respondents and the people who listened to the presentation of our results at the American Meteorological Society Annual Meeting frequently made comments such as 1) “Viewers in our broadcast area want a ‘yes’ or ‘no’ answer about whether they will see rain tomorrow”; 2) “If rain occurred when the PoP was low, people think the forecast was wrong”; or 3) “Unless the entire area gets rainfall, a high PoP might not mean very much to some people.”

We contend such comments and the implicit questions that they pose constitute a wicked problem for the forecaster using the PoP product (Allenby and Sarewitz 2011; Rittel and Webber 1973). Wicked problems are those that by their sheer nature and scope are resistant to resolution. Many forecasts of discrete meteorological events over an area and within a time frame, such as rain, thunderstorms, snow, and so forth, share the following characteristics of wicked problems:

  1. To understand or prepare for the event (e.g., rain) people must acquire new knowledge (i.e., of how the PoP is generated and what it means) and/or change their behavior (e.g., ask questions that are answerable by the PoP product).

  2. Important aspects of the forecast problem may not be fully understood until after the forecast is made and, to varying extents, the forecast verifies or not.

  3. The forecast is not unequivocally right or wrong (e.g., some people in a CWA did get rain with a 40% PoP, while some did not).

  4. Each precipitation event, even considering climatology and the performance of forecast models in the recent past, possesses novel and unique aspects; each atmospheric scenario and its corresponding forecast are slightly different.

  5. Alternative solutions to providing probabilistic statements about the likelihood of the event (i.e., rain) are few or nonexistent (Conklin 2006).

It is possible that the definitional variety we observed among our survey respondents reflected different heuristics for addressing the question of whether we will see rain (or thunderstorms or snow) tomorrow. For example, although models generate 6- and 12-h PoPs, these are for grid points, not county warning areas or viewing areas. How does the forecaster upscale the product from the grid point to the larger area? The Schaefer and Livingston (1990) definition that PoP = confidence × area may represent one such attempt. Similarly, conceptualizations of the PoP that encompass the proportion of an area likely to experience precipitation may reflect the differences between precipitation arising from synoptic forcing, such as frontal passage, and that associated with summertime convection. In other words, the results we observed about what defines the PoP and how it is used may reflect forecasters’ efforts to muddle intelligently through the recurrent wicked problems of translating specific forecasts to varied constituencies.

To some extent this approach may be working in that research has continued to suggest that the public is attached to the PoP and uses it (often in incorrect ways) to estimate the likelihood that they will encounter rain during the day (Morss et al. 2008; Murphy et al. 1980; Peachey et al. 2013). If nothing else, people seem to understand that higher values of PoP mean that precipitation is more likely while lower numbers suggest the opposite. Perhaps this will be as good as it gets. In recognition of this possibility, one of the most frequently mentioned strategies for supplementing the PoP involved the use of descriptive verbal phrases to communicate information about uncertainty (e.g., slight chance or likely) or areal coverage (e.g., isolated or scattered). The National Weather Service (1984) discussed the recommended use of such verbal qualifiers in an earlier edition of its operational manual. Some respondents believed that these phrases might function as interpretive anchors for different values of the PoP. The potential effectiveness of this strategy is tempered somewhat by previous psychological research suggesting that people make only loose and approximate associations of numerical probabilities with verbal descriptors (Sink 1995; Wallsten et al. 1986). Consequently, it would be important to present both quantitative and verbal descriptive information together, and perhaps on a regular basis, so that the meanings of the two modalities converge more consistently.

Along similar lines, the respondents also suggested the use of additional information to explain the nature of the PoP product and how it should be used. A graphical depiction of precipitation likelihood may present the information in a modality that is especially meaningful for some users. A number of respondents (8.5%) suggested the use of QPFs to supplement the PoP, especially as this may provide additional data about the amount and duration of precipitation (National Research Council 2006). These strategies fit well with the approaches for dealing with wicked problems in that they incorporate the perspectives of those with different relationships with the problem (e.g., forecasters, general public, and specialized users in the weather enterprise) and lead to dialog that can open new, viable options (Conklin 2006).

Our results are informative for the ongoing and emerging efforts to convey forecast uncertainty for routine weather and for extreme weather like severe thunderstorms or tornadoes (National Research Council 2006). It is essential to establish clear and consistent definitions of the uncertainty information (e.g., meanings of slight, marginal, or enhanced likelihoods of severe thunderstorms) in existing and new products and also to effectively communicate this uncertainty to end users.

The PoP is unique because it appears to be one of the oldest and most misunderstood of forecast products. The PoP’s age, dating from 1965, along with its intended public rather than technical audience, has led some to view it as a vestigial product from an earlier era. The fact that the public and members of the meteorological community have varied conceptions of its definition could be taken as an argument to retire or replace the PoP with an alternative that embodies the best of our current forecasting knowledge. Although such alternatives would be exciting to see, we contend that it is not so much the PoP itself, but its role in conveying degrees of uncertainty about precipitation events, that gives rise to its problematic interpretations and uses. A new product in this area may possess technological improvements, but care should be taken so that a product meant to quantify uncertainty does not become a product that ultimately is used with uncertainty because of confusion over its interpretation.

Acknowledgments

Thanks to the University of Georgia Vice President for Instruction for an Innovative Summer Instruction Grant to AES and JAK that made possible the research seminar that led to this work. The second author (CAW) acknowledges the support by an American Meteorological Society Graduate Fellowship and the AMS 21st Century Campaign in the completion of this project.

APPENDIX

Items Administered in the Online Survey

Perceptions of the probability of precipitation

We are interested in learning about the meanings and perceptions that people within the atmospheric science community associate with the phrase, probability of precipitation (or PoP). We hope to learn something from you and view you as a helpful source of information about this concept.

  1. What is your age?

  2. What is your gender?

  3. What is your ethnic identification? Please check all that apply. (Options: African American, Asian American, Caucasian American, Hispanic American, Native American, other.)

  4. For approximately how many years have you been working in the atmospheric sciences?

  5. Please indicate the professional settings in which you currently work. (Check all that apply.) [Options: academics—university, college, or research center; public sector meteorological services (National Weather Service/NOAA); private sector meteorological services; private sector research; and other.]

  6. Please indicate the scope of your professional activities in atmospheric science. (Please check all that apply). (Options: agricultural meteorology, aviation meteorology, broadcast meteorology, consultation/decision support, forensic meteorology, forecasting/operational meteorology, research, statistics/modeling, forecast verification, other.)

  7. In your formal training related to atmospheric sciences, did you complete a course that included probability and statistics? (Options: yes, no, not sure.)

  8. In your informal training related to atmospheric sciences (e.g., on-the-job training, an internship, a COMET module), did you complete training that included probability and statistics? (Options: yes, no, not sure.)

  9. If you received any training (formal or informal) specifically on the topic of the probability of precipitation (PoP), please describe the nature of that training briefly. (Text box)

  10. What does the probability of precipitation mean to you? (Please provide your definition of it.) (Text box)

  11. How confident are you in the definition of the PoP that you provided above? (Options: not confident at all, only a little confident, somewhat confident, completely confident.)

  12. To what extent do you believe that a consistent definition of the PoP exists within the atmospheric science community? (Options: no consistent definition of PoP at all, very little consistency in definition of PoP, somewhat consistent definition of PoP, very consistent definition of PoP.)

  13. What proportion of the public consumers of weather forecast products would you estimate have a technically accurate idea of the PoP? (0%–100%)

  14. What do you think that public consumers should know in order to have a proper understanding of the PoP? (Text box)

  15. Can you think of alternatives to the PoP for conveying the likelihood of precipitation? If so, can you describe these briefly? (Text box)

Fig. 1. Respondents’ estimates of the percentage of the public that has a technically accurate idea of the probability of precipitation.
