1. Introduction
Because the atmosphere is a dynamical system that exhibits limited predictability, weather forecasts are unavoidably uncertain. Meteorologists have recognized forecasts’ inherent uncertainty since the early days of modern weather forecasting (Murphy 1998; NRC 2006). Moreover, users of weather forecasts have substantial experience with forecasts and subsequent weather and, thus, likely understand that forecasts are imperfect. Despite this recognition of forecast uncertainty by meteorologists and users, most weather forecasts communicated to the public today contain, at best, limited information about uncertainty (NRC 2006).
Recently, advances in ensemble forecasting, growing understanding of potential pitfalls of deterministic1 forecasting, and evolving user needs have revitalized interest in the provision of weather forecast uncertainty information. In 2002, for example, the American Meteorological Society (AMS) “endorse[d] probability forecasts and recommend[ed] their use be substantially increased” (AMS 2002). In 2006, a National Research Council (NRC) committee on estimating and communicating uncertainty in weather and climate forecasts, sponsored by the National Weather Service (NWS), recommended: “The entire [weather and climate] Enterprise should take responsibility for providing products that effectively communicate forecast uncertainty information” (NRC 2006, p. 2). Communicating forecast uncertainty is important because it avoids conveying false certainty in forecasts, allows forecast providers to impart their knowledge about forecast uncertainty, and may help forecast users make more informed decisions (Murphy 1998; AMS 2002; NRC 2003, 2006, and references therein). Yet meteorologists often find it challenging to communicate uncertainty effectively.
To improve the meteorological community’s understanding of issues related to communicating forecast uncertainty, this article investigates how members of the U.S. public view weather forecast uncertainty and their preferences with respect to receiving forecast uncertainty information. Here, following the NRC (2006) report,2 we interpret weather forecast uncertainty information to include any communication format that conveys ambiguity or imperfect knowledge about future weather, in other words, that conveys something other than a single-valued prediction. Example formats include percentage probabilities; other numerical, worded, and graphical expressions of uncertainty; objective and subjective expressions of confidence; and indications of alternative future states. Our study is exploratory, investigating only a few of the many aspects of uncertainty-related forecasts. Nevertheless, the knowledge gained contributes to understanding when communicating uncertainty information might be desirable and what communication formats might be most effective. Such understanding can, if coupled with user-oriented product development efforts, support development of uncertainty-explicit forecast products grounded in empirical research.
In learning how to better communicate uncertainty, the weather forecasting community can build on knowledge about uncertainty communication in related areas, including seasonal climate prediction (e.g., Pulwarty and Redmond 1997; Phillips 2001; Patt 2001; Hartmann et al. 2002) and climate change (e.g., Moss and Schneider 2000; Patt and Schrag 2003; Oppenheimer and Todorov 2006). Relevant knowledge can also be drawn from research on other types of forecast and risk communication (e.g., Fischhoff 1994, 1995; Jardine and Hrudey 1997; Friedman et al. 1999; Morgan et al. 2002; Morss et al. 2005) and decision making under uncertainty (e.g., Kahneman et al. 1982; NRC 2006; Marx et al. 2007). Findings from this research include the importance of understanding how target audiences are likely to interpret and use the information of interest. However, while some knowledge from other contexts can be applied, uncertainty communication must also be investigated specifically in weather forecasting settings. Communication of weather forecasts is different from communication of longer-term climate-related and other risks in ways that raise interesting opportunities and research questions. For example, unlike some other contexts involving risk communication, weather forecasts are familiar to most people. Because they are widely available and regularly used, everyday weather forecasts also offer audiences frequent opportunities to evaluate new types of information and learn to interpret new formats. This suggests that the communication of weather forecast uncertainty will evolve through an iterative, dynamic process that connects learning from forecast recipients with product development.
A few previous studies have examined aspects of weather forecast uncertainty communication (e.g., Murphy et al. 1980; Ibrekk and Morgan 1987; Baker 1995; Patt and Schrag 2003; Gigerenzer et al. 2005; CFI Group 2005; Roulston et al. 2006; Broad et al. 2007; Joslyn et al. 2007). However, further work is needed to update findings that are several decades old, explore existing findings in new and broader contexts, and answer many questions that have not yet been addressed. Current key knowledge gaps include understanding how people interpret weather forecast uncertainty and how to communicate uncertainty more effectively in real-world (rather than theoretical or idealized) settings. Developing this understanding requires empirical research. Through such research, the meteorological community can learn from forecast users what uncertainty information they can and will understand and use, rather than basing products on what meteorologists assume or believe users should want or use (e.g., Ban 2007).
To begin addressing these knowledge gaps, this article examines fundamental aspects of laypeople’s perceptions of weather forecast uncertainty and their interpretations of and preferences for weather forecast uncertainty information. It focuses on the public, a major audience for weather forecasts, but results are likely also relevant to other, more targeted audiences. Five research questions are explored:
1) Do people infer uncertainty into deterministic forecasts and, if so, how much?
2) How much confidence do people have in different types of weather forecasts?
3) How do people interpret a type of uncertainty forecast information already commonly available and familiar: probability of precipitation forecasts?
4) To what extent do people prefer to receive forecasts that are deterministic versus those that express uncertainty?
5) What formats do people prefer for receiving forecast uncertainty information?
The research questions explored in this study are sufficiently complex and context-dependent that they cannot be definitively addressed in a single study. However, the findings presented here provide a baseline understanding that can be built upon in future work. The findings also contribute to our underlying knowledge about laypeople’s perceptions, interpretations, and preferences that can help forecasters provide forecast information that better meets users’ needs.
We investigate these research questions using results from a recent survey of members of the U.S. public. The survey questions analyzed here focus on everyday weather forecasts, to provide a foundation for understanding laypeople’s perspectives across a range of weather situations. Some of the questions focus on information currently available to the public, while others focus on information not currently provided to most people. The survey was implemented on a controlled-access Internet site with a nationwide sample. Because of this survey implementation, the results are more generalizable than those of previous studies performed with convenience samples, students, and other small or limited populations.
Section 2 presents the methodology, including the survey design and implementation. In section 3, we present and discuss our findings on the five research questions. The final section discusses the main results and their potential implications for real-world weather forecasting. Areas requiring further research are identified throughout the article.
2. Methodology: Survey design and implementation
To begin investigating our five research questions, we included eight uncertainty-related questions in a broader survey of the U.S. public’s experiences with and views on weather and weather forecasts. Other topics investigated in the survey (reported elsewhere) include the public’s sources of, uses of, preferences for, and value for weather forecast information in general (Lazo et al. 2008, manuscript submitted to Bull. Amer. Meteor. Soc., hereafter LMD); weather saliency (Stewart 2006); and use of forecast uncertainty information in decision making. The survey also asked questions about respondents’ weather-related activities and experiences and basic demographic questions. The uncertainty-related survey questions are presented in the appendix; the full survey is available upon request from the authors.
In developing the survey, we followed accepted methods and principles for writing survey questions (Dillman 2000) as well as general principles of survey research (Schuman and Presser 1996; Tourangeau et al. 2000). We first drafted the survey instrument through multiple iterations among the research team. We then had several peers review it for structure, content, and clarity. After revising the questions based on this review, we formally pretested a hard copy version of the survey with nonmeteorologists by conducting one-on-one verbal protocols (“think-alouds”) with recruited subjects (Ericsson and Simon 1993). These evaluations were used to refine and finalize the survey. While our research questions could be addressed using a variety of survey questions, development and testing procedures such as those we employed provide a reasonable assurance that survey questions are interpreted by respondents as intended by the researchers and can provide the information sought from respondents. Our survey questions—composed of question wording, formatting, and response categories—also provide a foundation for future related research.
The survey data were collected in November 2006 using a controlled-access Internet-based implementation. A survey research company (ResearchExec) programmed and hosted the survey and managed the data collection and quality control. A second company (Survey Sampling International, SSI) provided the sample. The sample was drawn from SSI’s U.S. Internet panel, which is a regularly screened and maintained database of people, recruited from multiple sources, who have actively indicated their willingness to respond to online surveys on a variety of topics. The only people permitted to access the survey were those invited by SSI via an e-mail containing a specific link to the survey Web site.3
After we and several others tested ResearchExec’s Internet version of the survey, the survey was implemented in three stages. We first obtained approximately 100 responses to confirm survey functionality and basic data quality. We then proceeded directly with full data collection, designed to be limited to the first 1200 complete responses. Preliminary analysis of this dataset indicated that Caucasians were overrepresented, and so we targeted 300 additional responses from non-Caucasians. Upon cutoff of survey implementation, we had 1891 responses, 371 of which were incomplete. We began our analysis with the 1520 completed surveys.
The survey was implemented with one question per screen, with questions in the same order for all respondents. Respondents were required to provide responses to each question other than the demographic questions, and they could not return to previous questions. The order of response options was randomized for those questions in which the options did not follow a logical sequence (see the appendix). The median time to complete the survey was 21 min; because respondents could start the survey and complete it at a later time without stopping the clock, a few long completion times skew the mean upward to 28 min.
While we cannot say that our sample is random, it includes a much broader range of people than some previous work limited to students, other convenience samples, or specific geographic regions. By hosting the survey on a controlled-access site with a sample provided by a reputable survey sampling company, our study also avoids some of the representativeness difficulties that occur with Internet-based surveys hosted on open-access Web sites (sometimes with a weather-specific orientation) with self-selected respondents (who sometimes can provide multiple responses). We have compared the sociodemographic characteristics of our respondent population to those of the general U.S. population, using data from the 2006 American Community Survey (ACS; U.S. Census Bureau 2007a),4 and based on the results, we believe that our results are more generalizable to the U.S. population than those of previous related work.
Our survey population is geographically diverse: it includes respondents from every U.S. state and the District of Columbia as well as two military personnel overseas. As shown in Table 1, our respondent population has similar gender, race, and household size characteristics to the U.S. public. Our population is somewhat older and more educated than the ACS population (Table 1); a more detailed comparison (available from the authors) shows that our survey population underrepresents people under age 24 and over age 75, as well as people with incomes less than $25,000 per year and over $100,000 per year. The underrepresentation of people with limited formal education and low incomes is typical of general surveys. It is also consistent with ACS data showing that households with Internet access tend to be better educated and have higher family incomes (U.S. Census Bureau 2007b). Some coverage error5 is inevitable with Internet-based surveys (Couper 2001). Thus, to complement studies such as ours, further work is needed with other survey implementations and research methods (including in-person methods such as interviews). Different methods are particularly important to reach difficult-to-access populations—especially those that may include people highly vulnerable to weather-related hazards.
Because many of the survey questions assume some basic knowledge and use of weather forecasts, the first question asked whether respondents ever use weather forecasts. Fifty-five of the 1520 respondents answered “no” and were not asked to answer most of the remaining questions, including those related to forecast uncertainty. The results reported in the remainder of this paper are based on the 1465 respondents who were asked the uncertainty questions.
3. Results
This section investigates the five research questions presented in the introduction, one research question in each of the subsections. The results are based on our analysis of the responses to eight survey questions (Q11–Q18), presented in the appendix. Note that the survey questions are not always discussed in the order in which they appeared in the survey.
a. Inferences of uncertainty in a deterministic forecast
As mentioned above, more than 96% of our survey respondents said that they use weather forecasts. In a survey question not discussed in detail here, we asked respondents how often they get weather forecasts from each of 10 different sources, including local and cable television, radio, newspapers, Web sites, and other people (family, friends, coworkers, etc.). On average, summed across all sources, people reported getting forecasts about 115 times a month (LMD). This suggests that most people in the United States have substantial experience with weather forecasts and subsequent weather. We hypothesize that, based in part on this experience, people have formed impressions about weather forecast accuracy and uncertainty. These impressions affect how they interpret and use forecasts. Understanding these interpretations (and potential misinterpretations) can help the meteorological community provide more useful forecast information. To explore how laypeople perceive forecast uncertainty, we employed two complementary approaches: investigating people’s uncertainty-related perceptions of a deterministic forecast (discussed in this section) and investigating their confidence in different types of forecasts (discussed in section 3b).
Q13 assessed how people perceive uncertainty in a deterministic weather forecast by asking respondents, given a high temperature forecast of 75°F for the following day, what they thought the actual high temperature would be.6 Response options included 75°F, various temperature ranges symmetric about 75°F (ranging from ±1°F to ±10°F), and “other.” As shown in Fig. 1, fewer than 5% of respondents expected the temperature to be the single value provided in the deterministic temperature forecast. About 95% expected the temperature to fall within a range around the single value. In other words, given this single-valued forecast, the vast majority of people inferred a range of possible values, that is, inferred uncertainty. Note that people’s multivalued perceptions of a single-valued forecast could arise for a variety of (perhaps interrelated) reasons, including an expectation of forecast inaccuracy, experience with spatial variations in temperature over a forecast region, perceptions of forecaster uncertainty, or an understanding that the future state of the atmosphere is uncertain. Here, given our definition of uncertainty, we summarize these as inferences of uncertainty in a deterministic forecast.
Among the 25 “other” responses written in by respondents, the most common were asymmetric temperature ranges (e.g., 74°–80°F) and comments that it depends on the situation. The asymmetric ranges suggest that some people may perceive forecasts as biased. We did not have enough of this type of response to permit further analysis, but the prevalence of perceived forecast bias could be explored in future work.
That 95% of respondents indicated nondeterministic perceptions of a deterministic forecast supports our hypothesis that most people have developed concepts about uncertainty in weather forecasts, even when verification or uncertainty information is not formally provided. This suggests that most people are aware that weather forecasts involve uncertainty. Figure 1 also shows that, although the majority of respondents expected the temperature to be within 1°–2° of the forecast, many interpreted the forecast as more uncertain. In other words, different people tended to infer a different range of uncertainties into the single-valued forecast. This will be discussed further in section 3b.
Because typical forecast accuracy varies with location, some of the differences among responses to this survey question may be associated with respondents’ different experiences with forecasts based on where they live (and perhaps have previously lived). Other factors (such as the season or demographic characteristics) likely also play a role. Exploring relationships between such potential explanatory factors and people’s perceptions of forecasts is a topic for future research. Another possible research area is exploring people’s conceptions of weather forecast accuracy, variability, and uncertainty in greater detail and how these relate to people’s perceptions and interpretations of forecasts.
b. Confidence in forecasts
People’s confidence in forecasts is likely related to their perceptions of forecast uncertainty, but not directly parallel. We examined the public’s confidence in weather forecasts from two perspectives. To assess people’s confidence in forecasts of different lead times, Q11 asked respondents to rate their confidence in weather forecasts (in general) for each of six different lead times, ranging from less than 1 day to 7–14 days. As Fig. 2 shows, respondents’ confidence in forecasts tended to decrease noticeably as lead time increased. For <1 day lead time, more than 40% of the respondents reported very high confidence and fewer than 2% reported very low confidence. For a 3-day lead time, nearly half of the respondents reported medium confidence. For a 7–14-day lead time, nearly half reported very low confidence. When individuals’ responses are analyzed separately for internal consistency, nearly 90% of the respondents expressed a similar trend of decreasing confidence with longer lead time. While meteorologists might expect this result based on their understanding of forecast skill, this is an empirical question that to our knowledge has not previously been investigated with the general public. Note also that about 10% of the respondents did not report lower confidence in longer lead time forecasts.
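The internal-consistency check described above reduces to a simple per-respondent screen. The following minimal sketch (in Python) illustrates one way such a screen could be implemented; the file and column names, the 1–5 ordinal coding of the confidence ratings, and the strict non-increasing criterion are our illustrative assumptions, not a description of the survey dataset or the exact analysis performed.

```python
# Illustrative sketch only: flags respondents whose confidence never rises
# as forecast lead time increases. Column names and the 1-5 ordinal coding
# (1 = "very low" ... 5 = "very high") are hypothetical assumptions.
import pandas as pd

LEAD_TIME_COLS = ["conf_lt1day", "conf_1day", "conf_2day",
                  "conf_3day", "conf_5day", "conf_7to14day"]

def nonincreasing(row: pd.Series) -> bool:
    """True if ratings never increase with lead time (ties allowed)."""
    vals = row[LEAD_TIME_COLS].tolist()
    return all(a >= b for a, b in zip(vals, vals[1:]))

responses = pd.read_csv("q11_responses.csv")  # hypothetical: one row per respondent
share_consistent = responses.apply(nonincreasing, axis=1).mean()
print(f"{share_consistent:.1%} of respondents show non-increasing confidence")
```

A stricter criterion (requiring at least one decrease, or tolerating a single reversal) would change the tally; the published figure may rest on a somewhat different rule.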
To assess people’s confidence in different types of forecasts, Q12 asked respondents to rate their confidence in forecasts of temperature, chance of precipitation, and amount of precipitation, separately, for 1-, 3-, and 7-day lead times. For all three lead times, our respondent population expressed the highest confidence in temperature forecasts, less confidence in forecasts of precipitation chance, and the least confidence in forecasts of precipitation amount. This is depicted for 1-day forecasts in Fig. 3. Consistent with the results in Fig. 2, for each of the three forecast types, respondents’ confidence decreases with increasing lead time (not shown).
Figures 2 and 3 also show that for any given forecast lead time or type, confidence varied significantly among individuals. This is similar to the result discussed in section 3a—that individuals had different perceptions of uncertainty in a deterministic forecast. To explore this relationship, we examined the correlation between responses to Q13 (perception of uncertainty in tomorrow’s high temperature forecast) and responses to the first part of Q12 (confidence in 1-day temperature forecasts). The results suggest, as expected, that people who inferred more uncertainty tended to have lower confidence in forecasts (Spearman’s ρ = −0.20, p < 0.001). The fairly low correlation, however, suggests that these are somewhat different psychological constructs, an idea that could be explored further in future work.
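For readers who wish to apply the same rank correlation to their own survey data, a minimal sketch follows. The variable codings (Q13 as the half-width of the selected temperature range, Q12 as an ordinal 1–5 confidence rating) and the placeholder responses are assumptions for illustration; only the statistic itself, Spearman’s ρ with its p value, matches the analysis reported above.

```python
# Minimal sketch of the rank correlation reported above. Codings are assumed:
# Q13 as the half-width (deg F) of the selected temperature range, Q12 as an
# ordinal 1-5 confidence rating. The responses below are placeholders.
from scipy.stats import spearmanr

q13_inferred_halfwidth = [0, 2, 5, 1, 3, 2, 10, 1, 5, 3]  # placeholder data
q12_confidence_1day    = [5, 4, 2, 5, 3, 4, 1, 4, 2, 3]   # placeholder data

rho, p = spearmanr(q13_inferred_halfwidth, q12_confidence_1day)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")  # negative rho expected
```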
Longer lead time forecasts tend to be less accurate (more uncertain) than shorter lead time forecasts (Murphy and Brown 1984), and precipitation tends to be more challenging to forecast than temperature due to its greater spatial and temporal variability. In conjunction with respondents’ tendency to express less confidence in longer lead time (versus shorter lead time) and precipitation (versus temperature) forecasts, this suggests that, at least on a general level, many members of the public understand that some forecast types tend to be more uncertain than others. Although our data cannot explain how people developed this relative forecast confidence, we hypothesize that, as discussed in section 3a, it is based at least in part on people’s day-to-day experience with forecasts and subsequent weather.
Meteorologists have substantial information about forecast uncertainty—both in general and in specific situations—much of which is not easily available to the public. Providing this information in an accessible format may help people decide how much confidence to place in a given forecast, augmenting their general (and widely varying) expectations of forecast uncertainty. Understanding how providing explicit uncertainty information may influence people’s perceptions and interpretations of forecasts is an important topic for future research, as is understanding how such uncertainty information may affect people’s behavioral responses to forecasts.
c. Interpretations of probability of precipitation forecasts
Although most publicly available weather forecasts currently contain limited uncertainty information, one type of uncertainty forecast—probability of precipitation (PoP)—has been routinely provided to the U.S. public for more than four decades. Understanding how the public interprets PoP forecasts is of inherent interest, given the frequent use of the PoP communication format in the United States. Such understanding may also inform the provision and design of other uncertainty forecasts.
Laypeople’s interpretations of PoP forecasts have previously been investigated in several studies, including Murphy et al. (1980), Sink (1995), Saviers and van Bussum (1997), and Gigerenzer et al. (2005). All of these studies found that the majority of respondents (except in Gigerenzer et al.’s New York City sample) did not know the meteorologically correct interpretation of the event component of PoP forecasts.7 This general result held whether precipitation likelihood was communicated using percentages or nonnumerical text, and whether respondents were asked in a multiple-choice or open-ended format. While these results on PoP interpretation have been widely cited in the meteorological community and their implications for forecast uncertainty communication discussed, all of these studies used geographically focused, convenience samples, and most of the studies’ samples were small. Moreover, Murphy et al.’s results are more than 25 yr old, and the most recent study (Gigerenzer et al.) focused primarily on Europe rather than the United States. Thus, we decided to update and reexamine previous findings by investigating interpretations of PoP using a larger, more representative U.S. respondent population.
Given our goal of further examining results from previous research, we modeled our survey questions on PoP interpretation after these previous studies (with minor changes that turned out to be important for our findings, as discussed below). Q14 asked respondents what they thought the forecast “There is a 60% chance of rain tomorrow” means. Q15 asked the same question using the NWS’s nonnumerical text equivalent forecast, “Rain likely tomorrow” (NWS 2005). Approximately 90% of respondents (1330) were asked the multiple-choice versions of both questions; the remainder (135) received both as open-ended versions. The multiple-choice versions (Q14a, Q15a) asked respondents to choose among six options: four interpretations of the forecast, “I don’t know,” and “other (please explain)” (see Tables 2 and 3, and the appendix). The interpretation options in the “60% chance of rain” question (Q14a) were based primarily on a similar question used in Gigerenzer et al.; those in the “rain likely” question (Q15a) were based primarily on a similar question used in Murphy et al.8 We provided “I don’t know” and “other” options, in contrast to the previous studies cited above, to encourage respondents to select an interpretation only if they agreed with it, rather than forcing them to choose one of the provided interpretations even if they agreed with none. The open-ended versions (Q14b, Q15b) asked respondents to explain the meaning of the forecast in their own words, being as specific as they could.
First, we discuss results from the multiple-choice versions of both questions. According to Gigerenzer et al. (2005, p. 624), PoP forecast accuracy is measured according to “the percentage correct of days when rain was forecast.” Therefore, in the “60% chance of rain” question, Gigerenzer et al. consider the “rain on 60% of the days like tomorrow” interpretation (third in Table 2) to be correct, although this interpretation does not mention precipitation location. The NWS definition of PoP is the likelihood of measurable precipitation at a point (NWS 2005). Therefore, in the “rain likely” question, Murphy et al. consider the “likely rain at any one particular point in the forecast area” interpretation (third in Table 3) to be correct. In the “60% chance of rain” question, the “correct” interpretation was selected by only 19% of respondents (Table 2). The correct interpretation was more popular in the “rain likely” question (Table 3), but it was still selected by only 29% of the respondents. Only 7% selected the correct answer in both. Our results therefore corroborate previous findings that the majority of laypeople do not identify the meteorologically correct interpretation of PoP. Unlike Gigerenzer et al., we found (with a more geographically diverse U.S. respondent population) that this is still true in the United States.
The first, second, and fourth interpretations in both questions (see Tables 2 and 3) refer to the areal coverage of precipitation, temporal coverage of precipitation, and forecasters’ beliefs about precipitation, respectively. Different interpretation types were preferred in each question, but the meaning of the options in the two questions is sufficiently different that the results are not directly comparable. Nearly one-quarter of the respondents to both questions selected the “forecasters” interpretation that was not offered in previous studies. This suggests that when asked in a multiple-choice format, people’s stated interpretations of precipitation likelihood depend on the question wording and options offered. Moreover, in the “60% chance of rain” question, 24% of respondents selected “other,” choosing to provide their own interpretation, and 9% selected “I don’t know.” That one-third of the respondents to this question did not select any of the four interpretations provided also suggests that a closed-ended question with limited options does not capture many laypeople’s interpretations of PoP.9
Next, we discuss write-in interpretations of precipitation likelihood, which were provided by the 135 respondents who received the two open-ended questions (Q14b, Q15b), the 320 respondents who selected “other” in Q14a, and the 41 who selected “other” in Q15a. These write-in responses were coded by one researcher, using categories discussed in Murphy et al. and Gigerenzer et al., and developed inductively from the data. Although these responses were provided in different contexts and were analyzed separately, overall they evoked similar themes. Consequently, we discuss the write-in results from the different questions together, noting differences as appropriate. For clarity and conciseness, we focus on our overall findings rather than detailed results. Table 4 summarizes the major types of write-in interpretations, with examples.
In each of the questions, nearly half or more of the write-in respondents restated the precipitation likelihood in some form (see Table 4), with no further interpretive information (other than perhaps a definition of “tomorrow”).10 Restatements were also common when Murphy et al. and Gigerenzer et al. asked similar open-ended questions. In our study, restatements were especially frequent among the “other” respondents to the “60% chance of rain” question, 100 of whom simply restated the 60% chance of rain. Several indicated that this was obvious (e.g., “exactly what it says”). For these respondents, the probability of precipitation, on its own, appeared to be sufficient explanation. Some who offered only a restatement may not require a more detailed interpretation. Others may have had difficulty forming a response, or they may have had a more detailed interpretation in mind that they did not (or could not) articulate.
A few respondents provided one of the two “correct” interpretations, and a few others provided one of the areal or temporal interpretations in Tables 2 or 3. Although a few respondents mentioned forecasters, none provided the “forecasters” interpretation from Tables 2 or 3. Thus, the vast majority of write-in respondents did not provide an option from the multiple-choice versions of the questions. Some respondents offered the rain “somewhere in the area” interpretation discussed in Murphy et al., and some discussed other forms of spatial and/or temporal coverage. Beyond this, respondents offered a variety of other interpretations (see Table 4). This diversity of interpretations, together with the fact that so few respondents to the open-ended questions provided an option from the multiple-choice versions, again suggests that the multiple-choice questions do not adequately assess many people’s interpretations of PoP. The discrepancy between responses to the multiple-choice and open-ended versions of the questions also suggests that when offered a closed set of meteorological interpretations, some respondents may have selected whichever one sounded best at the time.11 Prior to being asked the question, many people may not have considered what PoP means.
Previous studies have emphasized that most people do not know the meteorologically correct interpretation of PoP. This approaches interpretation of PoP from a meteorological or expert perspective. So do the multiple-choice questions, which asked people to select among meteorological interpretations. While some respondents answered from this perspective, others did not. As illustrated in Table 4, some interpreted the forecast in terms of the likelihood that they personally would experience rain, and some interpreted it in terms of implications for action. From these personal and use perspectives, understanding the technical definition of PoP may have limited value. In addition, only three respondents to the open-ended questions provided a response such as “I don’t know” rather than some type of interpretation. Thus, consistent with Murphy et al. and Gigerenzer et al., respondents to the open-ended questions seemed to think they had a sense of what PoP meant.
Overall, we found that even today, in the United States, most members of the public do not know the meteorologically correct interpretation of PoP forecasts. The interpretations of PoP that laypeople have are diverse. This makes sense given that forecasters and the media sometimes use different definitions of PoP when communicating with the public, and that even some meteorologists are confused about what PoP means [e.g., Murphy and Winkler (1974), Sink (1995), and Vislocky et al. (1995); note also the different definitions used by Murphy et al. and Gigerenzer et al.]. For many laypeople, the technical meaning of PoP has not been adequately and consistently explained. Based on this, some meteorologists and researchers have concluded that it is important to correct people’s misinterpretations, for example, by educating the public about the definition of PoP (Murphy et al. 1980) or specifying the reference class12 with PoP forecasts (Gigerenzer et al. 2005). Yet for many respondents, a detailed understanding of PoP did not appear necessary; for them, the likelihood of precipitation seemed, on its own, sufficient. Moreover, even though many respondents had nonspecific or incorrect interpretations of PoP, 70% of the respondents said that chance of precipitation forecasts were very or extremely important to them (LMD).
Why do many people find PoP forecasts important, despite having a nonspecific or meteorologically incorrect interpretation? We suspect that many people have used experience to form their own interpretations of PoP. Even if someone’s interpretation is not technically correct, it may be very close in many situations. This may be sufficient to meet that person’s needs given how he/she uses the forecast information. Many people are also likely interested in PoP to the extent that it indicates the chance of precipitation at locations and times of concern to them, to help them make weather-related decisions. From this personal or use perspective, the meteorologically correct definition of PoP may have limited meaning, and so better explanation of the meteorological definition may have limited value. Even if people knew the technically correct interpretation, they would still have to infer what it meant for their interests. In many situations, the important question for providing PoP and other types of uncertainty forecasts may be not whether people know the technical definition or understand the forecast precisely, but whether it meets their information needs—in other words, whether they can interpret the forecast well enough to use it in ways that benefit their decisions.
d. Preferences for deterministic versus uncertainty forecasts
Meteorologists sometimes argue that the meteorological community provides deterministic forecasts because a single number is what users, particularly members of the public, want. Others argue that providing uncertainty information will increase the value of forecasts to users. Assessing the validity of these beliefs requires empirical research. Understanding the extent to which users want and can use uncertainty information is important for deciding whether, when, and how to provide forecast uncertainty information and how rapidly new uncertainty information should be introduced. Such understanding can also aid in decision making about what user education and outreach may be needed to effectively communicate forecast uncertainty (NRC 2006). To begin exploring this issue, we investigated laypeople’s stated preferences for deterministic forecasts versus those expressing uncertainty in two scenarios: one in which respondents were not provided information about the weather situation, and one in which the uncertainty in a given weather situation was discussed. Since our goal is to test the extent to which people prefer single-valued forecasts, here we attempt to separate this from people’s attitudes toward complex uncertainty information by testing fairly brief, simple uncertainty communication formats.
Q18 assessed respondents’ preferences for deterministic forecasts in general by presenting two options for a local evening news forecast of tomorrow’s high temperature: a forecast of 76°F from channel A, and a forecast of 74°–78°F from channel B. Channel A’s forecast is deterministic (single valued), while channel B’s forecast expresses uncertainty. Respondents were asked whether they preferred the way channel A gives the forecast, preferred the way channel B gives the forecast, liked both, liked neither, or did not know. As shown in Fig. 4, only 22% of the respondents preferred the deterministic forecast. Twice as many (45%) preferred the forecast that expressed uncertainty. Combining those who preferred the uncertainty forecast with the 27% who liked both channels’ forecasts, over 70% of respondents preferred or were willing to receive this type of uncertainty information. As discussed in section 3a, people may have interpreted the temperature range in channel B’s forecast in different ways and, thus, may have had different reasons for liking this forecast (or not). At a broad level, however, in this scenario many respondents preferred or liked a non-single-valued forecast.
Q17 examined people’s preferences in a more complex scenario, when the uncertainty in the weather situation was briefly explained. This scenario told respondents that tomorrow’s high temperature would probably be 85°F, but a cold front might move through, in which case tomorrow’s high temperature would only be 70°F. Seven forecast options for this situation were provided, and for each option, respondents were asked whether or not they liked how the forecast was given. The first option was a high-temperature forecast of 85°F—a deterministic forecast. The remaining six forecasts expressed uncertainty in some form and will be discussed in section 3e.
About 35% of the respondents liked being given the forecast in the deterministic format (top bar in Fig. 5), and only 7% of the respondents liked the deterministic option but none of the uncertainty options. In contrast, over 90% of respondents liked being given the forecast in at least one of the uncertainty formats. Moreover, about 63% of the respondents liked at least one of the uncertainty options but not the deterministic option, which suggests that they preferred an uncertainty forecast. Thus, in this scenario, the vast majority of the respondents were willing to receive forecast uncertainty information in one of the formats tested, and the majority of the respondents appeared to prefer an uncertainty format.13
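The proportions above reduce to simple set logic over the yes/no “like” responses to the seven Q17 options. The sketch below shows one way such a tally could be computed; the file and column names are hypothetical, as is the 0/1 coding of the like/dislike answers.

```python
# Illustrative tally of the Q17 preference categories discussed above.
# Assumes a hypothetical file with one 0/1 "like" column per forecast option.
import pandas as pd

DET_COL = "like_deterministic"
UNC_COLS = [f"like_uncertainty_{i}" for i in range(1, 7)]  # six formats

q17 = pd.read_csv("q17_responses.csv")  # hypothetical dataset

liked_det = q17[DET_COL].eq(1)
liked_any_unc = q17[UNC_COLS].eq(1).any(axis=1)

print(f"liked the deterministic format:      {liked_det.mean():.0%}")
print(f"liked >= 1 uncertainty format:       {liked_any_unc.mean():.0%}")
print(f"liked only the deterministic format: {(liked_det & ~liked_any_unc).mean():.0%}")
print(f"liked uncertainty but not determ.:   {(~liked_det & liked_any_unc).mean():.0%}")
```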
Despite being told the forecast situation was uncertain in Q17, 7% of the respondents liked only the simple deterministic forecast. About half of this group (nearly 4% of respondents overall) also said they preferred channel A’s deterministic forecast format in Q18. Some of these respondents may like uncertainty information in a format other than those we provided; others may simply prefer deterministic forecasts. More generally, some respondents expressed a consistent preference for deterministic or uncertainty forecasts across Q17 and Q18, while others did not. This suggests that some people may prefer deterministic or uncertainty information across a range of situations, while others may be more flexible or may have different preferences in different circumstances. Further exploring the extent to which people can be categorized according to their preferences for deterministic versus uncertainty forecast information is a topic for future research. Related issues to investigate include when and why people want deterministic versus uncertainty information and how individuals’ preferences for deterministic versus uncertainty forecasts affect their use of forecast information.
In summary, these results suggest that even if some users have expressed a desire for single-value forecasts in certain situations, one cannot generalize that most people do not want forecasts that express uncertainty. In the two situations explored here, a significant majority of the respondents were willing to receive forecasts that express uncertainty—at least in the fairly simple formats tested—and many preferred the uncertainty forecasts. Only a small percentage of the respondents indicated a preference for the deterministic forecast format in both scenarios tested. Since people’s preferences depend on the forecast situation and the format of the uncertainty information, further work is needed exploring this issue in other contexts. In conjunction with preferences, empirically investigating the use and value of uncertainty versus deterministic forecasts is also important.
e. Preferences for uncertainty forecast formats
Given that many people are receptive to nondeterministic forecasts, it is important to understand how uncertainty forecast information is best conveyed. We began exploring the public’s preferences for different uncertainty forecast formats through two questions: one on communication of a probability forecast and one on communication of a bimodal forecast. Both questions tested only textual (nongraphical) formats.
Q16 followed the questions discussed in section 3c, on the interpretation of PoP forecasts. The question started by defining PoP, to provide respondents with a common understanding. We then provided four statements that communicated equivalent PoP information in different ways: percentage probability (e.g., 20% chance), relative frequency (e.g., 1 in 5 chance), odds (e.g., odds are 1 to 4, a common format in gambling), and nonnumerical text14 (e.g., slight chance). For each option, respondents were asked whether or not they liked the information given in this way; they could say yes or no to as many as they wished. We asked three versions of the question (Q16a–c), using PoPs of 20%, 50%, or 80%, with approximately one-third of the respondents receiving each version.
As shown in Fig. 6, most respondents liked the forecast conveyed in percentage format. A majority also liked the nonnumerical format. Only a minority liked the relative frequency format, and fewer liked odds. These general results are consistent across the three levels of PoP tested. Some results from previous research in nonweather contexts suggest that numeric probabilities can be a less effective communication format than relative frequencies, and that probability terms (such as those in the nonnumerical format) can lead to misinterpretations (Wallsten et al. 1986; Gigerenzer and Hoffrage 1995; NRC 2006). Thus, people’s preferred formats might not be the most readily understood. Results from section 3c, however, suggest that respondents who liked the percentage and nonnumerical formats at least believed they understood the information, even if most did not know the meteorological interpretation. Since PoP forecasts are commonly communicated to the U.S. public using percentages and nonnumerical text, respondents’ strong preference for these formats may be related to their familiarity. Frequent experience with forecasts in these formats and subsequent weather may have helped people understand the information well enough to find it useful, or this exposure may simply have led people to prefer the familiar formats.
Figure 6 also indicates that as the PoP increased from 20% to 80%, respondents tended to like the three numerical formats more and the nonnumerical format less. As the likelihood of precipitation increased, respondents may have tended to prefer the more specific information provided by the numerical formats. However, this result may not be due to differences between numerical and nonnumerical information in general. Rather, it may be related to the form of the NWS text equivalents for the different percentages: as the percentage changes from 20% to 50%, and then to 80%, the text equivalents become less specific, and for a PoP of 80%, the text equivalent implies certainty about the forecast.
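The format-by-PoP comparison in Fig. 6 amounts to a cross-tabulation of like rates by the PoP version each respondent received. A minimal sketch of that tabulation follows; the dataset layout, file name, and column names are hypothetical stand-ins.

```python
# Illustrative cross-tabulation for Q16: share liking each format, split by
# the PoP version (20%, 50%, or 80%) a respondent received. All names are
# hypothetical stand-ins for the survey dataset.
import pandas as pd

FORMAT_COLS = ["like_percentage", "like_rel_frequency",
               "like_odds", "like_nonnumerical"]

q16 = pd.read_csv("q16_responses.csv")  # one row per respondent, 0/1 likes
# "pop_version" holds 20, 50, or 80 depending on the version received
like_rates = q16.groupby("pop_version")[FORMAT_COLS].mean().round(2)
print(like_rates)  # rows: PoP level; columns: share liking each format
```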
Q17 investigated people’s preferences for different uncertainty forecast formats in a more complex situation, the cold front passage scenario described in section 3d. We designed this question to begin to assess people’s preferences for temperature forecast uncertainty information, which is less familiar than PoP. We also wanted to explore formats for communicating uncertainty information of a type common in weather forecasting, where the distribution of likely future weather states has a bimodal shape due to potential variations in the positioning or timing of a front or other weather system. Such scenarios are commonly communicated to the public using weather maps and explanations of the weather situation, but it is not clear how to best communicate this information in a relatively compact textual or numerical form.
In Q17, respondents were asked whether or not they liked how the forecast was given in each of seven options: one deterministic format and six uncertainty formats. Again, they could say yes or no to as many as they wished. The deterministic option (discussed in section 3d) presented the most likely situation: the high temperature will be 85°F. The six uncertainty options consisted of three sets of two forecasts, with each set conveying the high-temperature possibilities in a different way. The first set described two high-temperature possibilities (the most likely situation and the most likely alternative), expressing their relative likelihoods in words: it will most likely be 85°F but may be 70°F. The second set provided a high-temperature range: it will be between 70° and 85°F. The third set presented the likelihood of two possible high temperatures using percentage probabilities: there is an 80% chance of 85°F and 20% chance of 70°F. Note that unlike Q16, these formats do not provide equivalent information. Set three is the most specific and complex, while set two is the least specific. Within each set, two formats were tested: one with the added information “because a cold front may move through during the day” and one without the added explanation. (See the appendix for details.)
As discussed in section 3d, over 90% of the respondents to Q17 liked at least one of the six uncertainty options. As shown in Fig. 5, however, the extent to which the respondents liked each of the three formats for communicating the high-temperature possibilities depended on whether the cold front explanation was also provided. Without the cold front explanation, none of the formats was especially liked; approximately 15%–30% of respondents liked each, with “between 70°F and 85°F” being the most popular. With the explanation, the majority of the respondents liked the two nonpercentage formats, while the percentage format was much less popular. The effect of the explanation on people’s responses complicates comparing results on the three ways of conveying the high-temperature possibilities. It also illustrates the potential importance of details in how information is communicated.
Adding the cold front explanation increased the number of respondents who liked each format by 85% (nearly a factor of 2) or more. In this scenario, therefore, the respondent population clearly preferred forecasts with the cold front explanation, even though it made the forecasts longer. Understanding the generality of this result requires testing it in other contexts, but it suggests that people may like receiving an explanation of the weather situation or the source of forecast uncertainty as part of uncertainty-explicit weather forecasts. This interest in explanations is corroborated by the popularity of television weather forecasts. The explanation offered here is concise; results likely depend on the type of explanation provided.
Comparing Fig. 5 with Fig. 6 shows that a percentage format was much less liked in the cold front scenario (Q17) than it was for PoP forecasts (Q16). One possible explanation is that, as discussed above, percentage is a familiar format for communicating PoP but not temperature forecasts. Another possibility is that many people found the percentage options in the cold front scenario too long or complex. The differences between results of Q16 and Q17 illustrate the importance of context for people’s preferences in how forecast information is conveyed.
Together, the results from Q16 and Q17 suggest that when communicating uncertainty information to the public, details can be important. One cannot draw general conclusions based on these results. However, at least for PoP, significantly more people liked a percentage format than a relative frequency or odds format. For the cold front scenario, many people preferred an explanation for the forecast uncertainty. Further research is needed to test these findings in other forecast contexts, study other uncertainty communication formats (including graphics and icons), and investigate people’s understanding and use of different uncertainty forecast formats as well as their stated preferences.
4. Summary and discussion
Effectively communicating uncertainty is a major challenge for the meteorological community. To help meet this challenge, this article investigates laypeople’s perspectives on weather forecast uncertainty and uncertainty information by analyzing data from a nationwide survey with over 1400 respondents. The results address five research questions, presented in the introduction. Some parts of the article further examine results from previous research, using a larger, more representative respondent population. Other parts explore issues not previously addressed through empirical research. While definitively answering the research questions will require further research, our findings contribute to fundamental understanding of the public’s uncertainty-related perceptions, interpretations, and preferences. In doing so, the study can inform future research and may support the development of user-oriented uncertainty forecast products.
As background for communicating weather forecast uncertainty, we investigated people’s perceptions of uncertainty in deterministic weather forecasts and their confidence in forecasts. The vast majority of survey respondents (95%) inferred uncertainty into a deterministic high-temperature forecast. In addition, for a given type of forecast, different respondents had different notions of forecast confidence and uncertainty. Respondents generally had more confidence in shorter lead time weather forecasts and more confidence in temperature than precipitation forecasts. This suggests that many respondents have a general sense of the relative accuracy or uncertainty in different types of weather forecasts. Understanding people’s preexisting concepts related to forecast uncertainty is important for deciding when and how to provide additional forecast uncertainty information. Communicating uncertainty effectively on a forecast-by-forecast basis may help augment people’s general notions of how much confidence to place in weather forecasts with situation-specific information. Individuals’ confidence in forecasts is also likely to evolve as they gain experience with new information formats.
We also investigated people’s preferences for deterministic forecasts versus those that express uncertainty, using fairly simple uncertainty information in two forecast scenarios. In both scenarios, a significant majority of the respondents were willing to receive the forecast uncertainty information tested, and many respondents preferred the uncertainty forecasts to the deterministic forecasts. Some meteorologists have expressed concern that explicitly communicating uncertainty may reduce weather forecasters’ credibility (NRC 2006). However, our results suggest that most people are already aware that forecasts are imperfect. Moreover, at least in some situations, many members of the public may be receptive to more forecast uncertainty information than is now commonly provided to them.
Given community interest in improving communication of forecast uncertainty, we began exploring the broad issue of how to effectively communicate weather forecast uncertainty in different circumstances. Building on previous work, we investigated people’s interpretations of probability of precipitation forecasts, an uncertainty communication format that is already familiar in the United States. Consistent with previous studies, we found (with a larger, more representative sample) that the majority of the U.S. public does not know the meteorological interpretation of PoP forecasts. Respondents interpreted PoP in a variety of ways. Many provided nonspecific interpretations. Some interpreted PoP from a personal or use perspective, discussing how likely rain was to fall on them or how the forecast should be used. The results on PoP interpretation suggest that questions about weather forecasts asked from a meteorological perspective may not fully reflect nonmeteorologists’ interpretations and views, and that for some people the meteorological interpretation of PoP may have limited relevance. The effectiveness of information communication should be evaluated from different perspectives, including people’s understanding of the information, their attitudes toward it, and its influence on their behavior. When communicating uncertainty forecasts such as PoP, we propose that it is less important that people understand the forecast precisely from a meteorological perspective, and more important that they can understand the forecast well enough to infer information of interest to them that they can use in decisions.
Finally, we investigated people’s preferences among different textual (nongraphical) formats for communicating uncertainty, in two scenarios. For precipitation likelihood forecasts, we found that respondents generally liked the communication formats that are currently used: percentages and nonnumerical text. Relative frequency and odds formats were much less popular. People might, however, like different formats for a less familiar forecast type. For communication of a bimodal high-temperature forecast resulting from a possible frontal passage, we found that respondents liked forecasts that included a concise explanation of the weather situation creating the forecast uncertainty.
This article explores a broad set of issues related to the communication of weather forecast uncertainty. In doing so, it raises a variety of topics, both fundamental and practical, for more in-depth future work. These include further investigating people’s conceptualizations of forecast uncertainty, their interpretations of different types of uncertainty information, and their preferences for forecast uncertainty information presented in a wider range of formats (including graphics) in different contexts. Also important is examining how people use different types of forecast uncertainty information, and how this relates to people’s perceptions, interpretations, and preferences. This study focused on everyday weather forecasts and the general public’s perspectives in order to take advantage of our nationwide sample and create baseline knowledge applicable across a range of situations. Future work is needed in a range of contexts, especially in high-impact weather forecast situations (Morss et al. 2008) and with vulnerable populations that raise particular challenges for communicating weather-related risks. Future work is also needed with targeted user groups, such as emergency managers and private sector decision makers.
Addressing these issues will require empirical research employing a range of quantitative and qualitative social science methods. Moreover, because research results do not always translate directly to real-world behavior, practice-based knowledge on communicating uncertainty is also needed, from both the public and private sectors. By employing such complementary approaches focused on targeted issues, the weather research and forecasting communities can learn what forecast uncertainty information members of the public and other users want, need, can understand, and can use in different situations. Integrating this knowledge back into the forecast product development process can then help the meteorological community communicate uncertainty more effectively.
Acknowledgments
The authors thank Alan Stewart, Barbara Brown, and an anonymous reviewer for helpful comments on the paper. We are especially grateful to Susan Joslyn for comments that helped us clarify aspects of our interpretations.
This work is supported by NCAR’s Collaborative Program on the Societal Impacts and Economic Benefits of Weather Information (SIP), which is funded by the National Science Foundation and the National Oceanic and Atmospheric Administration through the U.S. Weather Research Program. Views and opinions in this paper are those of the authors.
REFERENCES
AMS, 2002: Enhancing weather information with probability forecasts. Bull. Amer. Meteor. Soc., 83, 450–452.
Baker, E. J., 1995: Public response to hurricane probability forecasts. Prof. Geogr., 47, 137–147.
Ban, R., 2007: Moving towards symbiosis between physical and social sciences. Wea. Soc. Watch, 1 (3), 1, 11.
Broad, K., A. Leiserowitz, J. Weinkle, and M. Steketee, 2007: Misinterpretations of the “cone of uncertainty” in Florida during the 2004 hurricane season. Bull. Amer. Meteor. Soc., 88, 651–667.
CFI Group, 2005: National Weather Service customer satisfaction survey: General public. Report to the National Oceanic and Atmospheric Administration, 154 pp. [Available online at http://www.nws.noaa.gov/com/files/NWS_Public_survey050608.pdf.]
Couper, M. P., 2001: Web surveys: A review of issues and approaches. Public Opinion Quart., 64, 464–494.
Dillman, D. A., 2000: Mail and Internet Surveys: The Tailored Design Method. 2d ed. John Wiley and Sons, 464 pp.
Ericsson, K. A., and H. A. Simon, 1993: Protocol Analysis: Verbal Reports as Data. Rev. ed. The MIT Press, 443 pp.
Fischhoff, B., 1994: What forecasts (seem to) mean. Int. J. Forecasting, 10, 387–403.
Fischhoff, B., 1995: Risk perception and communication unplugged: Twenty years of process. Risk Anal., 15, 137–145.
Friedman, S. M., C. L. Rogers, and S. Dunwoody, 1999: Communicating Uncertainty: Media Coverage of New and Controversial Science. Lawrence Erlbaum, 350 pp.
Gigerenzer, G., and U. Hoffrage, 1995: How to improve Bayesian reasoning without instruction: Frequency formats. Psychol. Rev., 102, 684–704.
Gigerenzer, G., R. Hertwig, E. van den Broek, B. Fasolo, and K. V. Katsikopoulos, 2005: A 30% chance of rain tomorrow: How does the public understand probabilistic weather forecasts? Risk Anal., 25, 623–629.
Hartmann, H. C., T. C. Pagano, S. Sorooshian, and R. Bales, 2002: Confidence builders: Evaluating seasonal climate forecasts from user perspectives. Bull. Amer. Meteor. Soc., 83, 683–698.
Ibrekk, H., and M. G. Morgan, 1987: Graphical communication of uncertain quantities to nontechnical people. Risk Anal., 7, 519–529.
Jardine, C. G., and S. E. Hrudey, 1997: Mixed messages in risk communication. Risk Anal., 17, 489–498.
Joslyn, S., K. Pak, D. Jones, J. Pyles, and E. Hunt, 2007: The effect of probabilistic information on threshold forecasts. Wea. Forecasting, 22, 804–812.
Kahneman, D., P. Slovic, and A. Tversky, 1982: Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 555 pp.
Lazo, J. K., R. E. Morss, and J. Demuth, 2008: 300 billion served: Households’ sources, perceptions, uses, and values for weather forecast information. Bull. Amer. Meteor. Soc., submitted.
Marx, S. M., E. U. Weber, B. S. Orlove, A. Leiserowitz, D. H. Krantz, C. Roncoli, and J. Phillips, 2007: Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ. Change, 17, 47–58.
Morgan, M. G., B. Fischhoff, A. Bostrom, and C. J. Atman, 2002: Risk Communication: A Mental Models Approach. Cambridge University Press, 366 pp.
Morss, R. E., O. V. Wilhelmi, M. W. Downton, and E. Gruntfest, 2005: Flood risk, uncertainty, and scientific information for decision-making: Lessons from an interdisciplinary project. Bull. Amer. Meteor. Soc., 86, 1593–1601.
Morss, R. E., J. K. Lazo, B. G. Brown, H. E. Brooks, P. T. Ganderton, and B. N. Mills, 2008: Societal and economic research and applications for weather forecasts: Priorities for the North American THORPEX program. Bull. Amer. Meteor. Soc., 89, 335–346.
Moss, R. H., and S. H. Schneider, 2000: Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. Guidance Papers on the Cross Cutting Issues of the Third Assessment Report, R. Pachauri, T. Taniguchi, and K. Tanaka, Eds., World Meteorological Organization, 33–51.
Murphy, A. H., 1998: The early history of probability forecasts: Some extensions and clarifications. Wea. Forecasting, 13, 5–15.
Murphy, A. H., and R. L. Winkler, 1974: Probability forecasts: A survey of National Weather Service forecasters. Bull. Amer. Meteor. Soc., 55, 1449–1452.
Murphy, A. H., and B. G. Brown, 1984: A comparative evaluation of objective and subjective weather forecasts in the United States. J. Forecasting, 3, 369–393.
Murphy, A. H., S. Lichtenstein, B. Fischhoff, and R. L. Winkler, 1980: Misinterpretations of precipitation probability forecasts. Bull. Amer. Meteor. Soc., 61, 695–701.
NRC, 2003: Communicating Uncertainties in Weather and Climate Information: A Workshop Summary. National Academies Press, 68 pp.
NRC, 2006: Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. National Academies Press, 124 pp.
NWS, 2005: WFO public weather forecast products specification. National Weather Service Instruction 10-503, 75 pp. [Available online at http://www.nws.noaa.gov/directives/010/pd01005003b.pdf.]
Oppenheimer, M., and A. Todorov, 2006: Global warming: The psychology of long term risk. Climatic Change, 77, 1–6.
Patt, A., 2001: Understanding uncertainty: Forecasting seasonal climate for farmers in Zimbabwe. Risk Decision Policy, 6, 105–119.
Patt, A., and D. P. Schrag, 2003: Using specific language to describe risk and probability. Climatic Change, 61, 17–30.
Phillips, J., 2001: Proceedings of the Workshop on Communication of Climate Forecast Information. IRI-CW/01/4, International Research Institute for Climate Prediction and Society, Palisades, NY, 74 pp.
Pulwarty, R. S., and K. T. Redmond, 1997: Climate and salmon restoration in the Columbia River basin: The role and usability of seasonal forecasts. Bull. Amer. Meteor. Soc., 78, 381–397.
Roulston, M. S., G. E. Bolton, A. N. Kleit, and A. L. Sears-Collins, 2006: A laboratory study of the benefits of including uncertainty information in weather forecasts. Wea. Forecasting, 21, 116–122.
Saviers, A. M., and L. J. van Bussum, 1997: Juneau public questionnaire: Results, analysis, and conclusions. NOAA Tech. Memo. NWS AR-44. [Available online at http://pajk.arh.noaa.gov/info/articles/survey/intro.htm.]
Schuman, H., and S. Presser, 1996: Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. Sage Publications, 372 pp.
Sink, S. A., 1995: Determining the public’s understanding of precipitation forecasts: Results of a survey. Natl. Wea. Dig., 19 (3), 9–15.
Stewart, A. E., 2006: Assessing the human experience of weather and climate: A further examination of weather salience. Preprints, Environmental Risk and Impacts on Society: Benefits and Challenges, Atlanta, GA, Amer. Meteor. Soc., 1.6. [Available online at http://ams.confex.com/ams/pdfpapers/101916.pdf.]
Tourangeau, R., L. J. Rips, and K. Rasinski, 2000: The Psychology of Survey Response. Cambridge University Press, 415 pp.
U.S. Census Bureau, cited 2007a: 2006 American Community Survey. [Available online at http://www.census.gov/acs/www/.]
U.S. Census Bureau, cited 2007b: Computer and Internet use in the United States: 2003. [Available online at http://www.census.gov/prod/2005pubs/p23-208.pdf.]
Vislocky, R. L., J. M. Fritsch, and S. N. DiRienzo, 1995: Operational omission and misuse of precipitation probability forecasts. Bull. Amer. Meteor. Soc., 76, 49–52.
Wallsten, T. S., D. V. Budescu, A. Rapoport, R. Zwick, and B. Forsyth, 1986: Measuring the vague meanings of probability terms. J. Exp. Psychol. Gen., 115, 348–365.
APPENDIX
Survey Questions
The survey questions discussed in this manuscript are presented below, using the question numbers and order from the full survey. Where different respondents were asked different versions of a question, the versions are denoted by a, b or a, b, c. For Q16, the three different versions are presented in curly brackets within the question text, separated by slashes. Subquestions that all respondents were asked to answer are denoted by i, ii, iii, etc. The number of respondents for each question or question version is provided following the question number, denoted by N.
The question wording is reproduced below, but the formatting (spacing, typeset, etc.) has been altered for space considerations.
Q11. (N = 1465; response choices for each lead time: very low, low, medium, high, very high)
Weather forecasts are available for up to 14 days into the future. This means that a 1-day forecast is for the weather 1 day (24 h) from now, that a 2-day forecast is for the weather 2 days (48 h) from now, and so on. How much confidence do you have in weather forecasts for the times listed below?
Forecasts for weather . . .
i. less than 1 day from now.
ii. 1 day from now.
iii. 2 days from now.
iv. 3 days from now.
v. 5 days from now.
vi. 7 to 14 days from now.
Q12. (N = 1465; response choices for each forecast type: very low, low, medium, high, very high)
For forecasts of weather 1 day (24 h) from now, how much confidence do you have in forecasts of the weather elements listed below?
i. temperature
ii. chance of precipitation
iii. amount of precipitation
Repeat: For forecasts of weather 3 days (72 h) from now . . .
Repeat: For forecasts of weather 7 days (168 h) from now . . .
Q13. (N = 1465)
Suppose the forecast high temperature for tomorrow for your area is 75°F. What do you think the actual high temperature will be?
I think the temperature will be . . .
75°F.
between 74°F and 76°F.
between 73°F and 77°F.
between 70°F and 80°F.
between 65°F and 85°F.
Other (please explain).
Q14a. (N = 1330; order of response options randomized except for “I don’t know” and “Other”)
Suppose the following text is the forecast for tomorrow:
“There is a 60% chance of rain tomorrow.”
Which of the options listed below do you think best describes what the forecast means?
It will rain tomorrow in 60% of the region.
It will rain tomorrow for 60% of the time.
It will rain on 60% of the days like tomorrow.
60% of weather forecasters believe that it will rain tomorrow.
I don’t know.
Other (please explain).
Q14b. (N = 135)
Suppose the following text is the forecast for tomorrow.
“There is a 60% chance of rain tomorrow.”
In your own words, please explain what you think this means. Please be as specific as you can.
Q15a. (N = 1330; order of response options randomized except for “I don’t know” and “Other”)
Suppose the following text is the forecast for tomorrow:
“Rain likely tomorrow.”
Which of the options listed below do you think best describes what the forecast means?
It will likely rain over the entire forecast area tomorrow.
It will likely rain throughout the day somewhere in the forecast area tomorrow.
It will likely rain at any one particular point in the forecast area tomorrow.
Weather forecasters are likely to believe that it will rain tomorrow.
I don’t know.
Other (please explain).
Q15b. (N = 135, same respondents as Q14b)
Suppose the following text is the forecast for tomorrow:
“Rain likely tomorrow.”
In your own words, please explain what you think this means. Please be as specific as you can.
Q16 {a/b/c}. (N = {489/489/487}; order of forecast options randomized for all versions; response choices for each forecast option: no, yes)
Probability of precipitation is defined as the chance that there will be a measurable amount of precipitation (such as rain, snow, hail, or sleet) at a certain location during a specified period of time. All the choices listed below are the same as a probability of precipitation of {20%/50%/80%}. For the options listed below, do you like this information given this way? Please think about each option separately (i.e., do not compare each option to the others listed).
i. Chance of precipitation tomorrow is {20%/50%/80%}.
ii. There is a {1 in 5/1 in 2/4 in 5} chance of precipitation tomorrow.
iii. The odds are {1 to 4/1 to 1/4 to 1} that it will rain tomorrow.
iv. {There is a slight chance of rain tomorrow/There is a chance of rain tomorrow/Rain tomorrow.}
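The three numerical options in Q16 are arithmetically equivalent restatements of one probability: a PoP of p% reduces to a fraction a/b in lowest terms, which the relative frequency format reads as “a in b” and the odds format as “a to b − a.” The short Python sketch below is illustrative only and is not part of the survey instrument; the function name pop_formats and the worded-option lookup are ours.

from fractions import Fraction

def pop_formats(p_percent):
    """Restate a probability of precipitation (p_percent, a whole
    percentage) in the four textual formats offered in Q16. The
    worded map covers only the survey's 20/50/80 cases."""
    frac = Fraction(p_percent, 100)   # e.g., 20% reduces to 1/5
    a, b = frac.numerator, frac.denominator
    worded = {
        20: "There is a slight chance of rain tomorrow.",
        50: "There is a chance of rain tomorrow.",
        80: "Rain tomorrow.",
    }
    return [
        f"Chance of precipitation tomorrow is {p_percent}%.",        # option i
        f"There is a {a} in {b} chance of precipitation tomorrow.",  # option ii
        f"The odds are {a} to {b - a} that it will rain tomorrow.",  # option iii
        worded[p_percent],                                           # option iv
    ]

for p in (20, 50, 80):
    print("\n".join(pop_formats(p)))

For the three survey values this reproduces options i–iv exactly: 20% gives 1 in 5 with odds of 1 to 4, 50% gives 1 in 2 with odds of 1 to 1, and 80% gives 4 in 5 with odds of 4 to 1.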
Q17. (N = 1465; response choices for each forecast option: no, yes)
Suppose that weather forecasters think the high temperature tomorrow will probably be 85°F. However, a cold front may move through during the day, in which case the high temperature tomorrow would only be 70°F. Based on this weather scenario, for the options listed below, would you like the forecast given in this way? Please think about each option separately (i.e., do not compare each option to the others listed).
i. The high temperature tomorrow will be 85°F.
ii. The high temperature tomorrow will most likely be 85°F, but it may be 70°F.
iii. The high temperature tomorrow will most likely be 85°F, but it may be 70°F, because a cold front may move through during the day.
iv. The high temperature tomorrow will be between 70°F and 85°F.
v. The high temperature tomorrow will be between 70°F and 85°F, because a cold front may move through during the day.
vi. There is an 80% chance that the high temperature tomorrow will be 85°F and a 20% chance that the high temperature tomorrow will be 70°F.
vii. There is an 80% chance that the high temperature tomorrow will be 85°F and a 20% chance that the high temperature tomorrow will be 70°F, because a cold front may move through during the day.
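The seven Q17 options vary along two dimensions: the form of the uncertainty expression (none, alternative value, range, or explicit percentages) and whether a concise causal explanation is attached. The Python sketch below makes that structure explicit; the class and function names (BimodalHigh, q17_variants) are ours, added purely for illustration, and the code simply regenerates options i–vii for the 85°F/70°F scenario rather than describing how the survey was constructed.

from dataclasses import dataclass

@dataclass
class BimodalHigh:
    likely: int    # most likely high temperature (deg F)
    alt: int       # alternative high if the front moves through (deg F)
    p_likely: int  # percent chance of the likely outcome
    reason: str    # concise explanation of the forecast uncertainty

def q17_variants(fcst):
    """Compose the seven Q17 texts: a deterministic baseline (option i),
    then three uncertainty forms, each without and with the causal
    explanation (options ii-vii)."""
    lo, hi = sorted((fcst.alt, fcst.likely))
    forms = [
        f"The high temperature tomorrow will most likely be "
        f"{fcst.likely}°F, but it may be {fcst.alt}°F",
        f"The high temperature tomorrow will be between {lo}°F and {hi}°F",
        # the article "an" is hard-coded to match the survey's 80% example
        f"There is an {fcst.p_likely}% chance that the high temperature "
        f"tomorrow will be {fcst.likely}°F and a {100 - fcst.p_likely}% "
        f"chance that the high temperature tomorrow will be {fcst.alt}°F",
    ]
    texts = [f"The high temperature tomorrow will be {fcst.likely}°F."]
    for form in forms:
        texts.append(form + ".")
        texts.append(form + f", because {fcst.reason}.")
    return texts

for text in q17_variants(BimodalHigh(85, 70, 80,
        "a cold front may move through during the day")):
    print(text)

Because respondents rated each option independently (yes/no), this factorial layout is only a compact way of exhibiting the two design dimensions the question varies.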
Q18. (N = 1465)
Suppose you are watching the local evening news on channel A. The weather report comes on, and the channel A weather forecaster says that the high temperature will be 76°F tomorrow. You then turn to the channel B local evening news. The weather report comes on, and the channel B weather forecaster says that the high temperature will be between 74°F and 78°F tomorrow.
Which way would you prefer to be given the weather forecast?
I prefer the way channel A gives the forecast.
I prefer the way channel B gives the forecast.
I like the way both channels give the forecast.
I don’t like the way either channel gives the forecast.
I don’t know.
Table 1. Comparison of the survey respondent population (N = 1520) with data from the 2006 American Community Survey (U.S. Census Bureau 2007a; data available online at http://factfinder.census.gov, accessed 1 Feb 2008). All results are rounded to three significant figures.
Table 2. Responses to Q14a, the meaning of the forecast “There is a 60% chance of rain tomorrow” (N = 1330).
Table 3. Responses to Q15a, the meaning of the forecast “Rain likely tomorrow” (N = 1330).
Table 4. Summary of write-in responses to Q14b, to Q15b, and from respondents who selected “other” in Q14a and Q15a. Interpretation types are listed in approximate order of decreasing frequency across the four questions. Some responses fit more than one interpretation type.
FOOTNOTES
1. Deterministic forecasts predict a single future state without providing information about forecast uncertainty.
2. As defined in NRC (2006, p. 1), “Uncertainty is an overarching term that refers to the condition whereby the state of a system cannot be known unambiguously. Probability is one way of expressing uncertainty.”
3. As a token of appreciation for completing the survey, respondents were entered into a prize drawing.
4. The ACS is a nationwide survey implemented by the U.S. Census Bureau to provide demographic data annually.
5. Coverage error results when not all members of the target population (in this case, the U.S. public) have an equal or known chance of participating in a survey (Dillman 2000).
6. The survey referred to temperature in degrees Fahrenheit because that is the unit generally used in forecasts provided to the U.S. public. For consistency with the survey, we use Fahrenheit rather than Celsius when discussing the results.
7. The event component of a PoP forecast is the precipitation event (Murphy et al. 1980) or class of events (Gigerenzer et al. 2005) to which the probability refers. Both Murphy et al. (1980) and Gigerenzer et al. (2005) found that the majority of respondents interpreted the probability component of PoP forecasts correctly.
8. Specifically, the first three interpretation options in Q14a and Q15a were modeled after three interpretation options from questions in Gigerenzer et al. (2005) and Murphy et al. (1980), respectively. The fourth (“forecasters”) interpretation option in both of our questions was added based on the open-ended responses reported in Gigerenzer et al. (2005).
9. A much smaller fraction of the respondents selected “I don’t know” or “other” in the “Rain likely” question. This difference in results between Q14a and Q15a is likely due at least in part to the different phrasing and meaning of the four interpretations, although it could also be due to the different wording of the forecast or the order of the two questions.
10. In Q14, most numerical restatements of 60% were accurate, suggesting (as discussed in Murphy et al. 1980 and Gigerenzer et al. 2005) that most people understand the percentage component of PoPs.
11. This may explain why results from multiple-choice PoP interpretation questions vary significantly among previous studies and among the different cities surveyed in Gigerenzer et al. (2005).
12. The reference class is the class (group) of events to which the probability refers (see Gigerenzer et al. 2005 and references therein).
13. The magnitude of these results may have been affected by the fact that respondents were offered six uncertainty options and only one deterministic option.