Search Results

You are looking at 1–10 of 14 items for Author or Editor: Susan Joslyn
Raoni Demnitz and Susan Joslyn

Abstract

The four experiments reported here tested the impact of recent negative events on decision-making. Participants were given a virtual budget to spend on crops of varying costs and payoffs that, in some cases, depended on drought conditions. Participants made 46 decisions based on either deterministic or probabilistic seasonal climate predictions. Participants experienced a sequence of droughts either immediately prior to the target trials (recent condition) or early in the trial sequence (distant condition). In experiment 1, participants made overly cautious crop choices when droughts were experienced recently. Subsequent experiments probed the cognitive mechanisms involved. The effect of recency on overcautiousness was reduced by a midexperiment message, although it did not matter whether the message described a changed or consistent venue and time period. This suggests that overcautiousness was not caused by deducing a climatic trend in the particular area. Instead, we argue that availability (the tendency to judge events that are easier to recall as more likely) was the major cause of increased cautiousness following recent droughts. Importantly, probabilistic predictions attenuated the impact of recency, inspired greater trust, and allowed participants to make better decisions overall than did deterministic predictions. Implications are discussed.

Free access
Sonia Savelli and Susan Joslyn

Abstract

Recreational boaters in the Pacific Northwest understand that there is uncertainty inherent in deterministic forecasts, as well as some of the factors that increase uncertainty. This was determined in an online survey of 166 boaters in the Puget Sound area. Understanding was probed using questions that asked respondents what they expected to observe when given a deterministic forecast with a specified lead time, for a particular weather parameter, during a particular time of year. It was also probed by asking respondents to estimate the number of observations, out of 100 or out of 10, that they expected to fall within specified ranges around the deterministic forecast. Almost all respondents anticipated some uncertainty in the deterministic forecast as well as specific biases, most of which were borne out by an analysis of local National Weather Service verification data. Interestingly, uncertainty and biases were anticipated for categorical forecasts indicating a range of values as well, suggesting that specifying numeric uncertainty would improve understanding. Furthermore, respondents’ answers suggested that they expected a high rate of false alarms among warning and advisory forecasts. Nonetheless, boaters indicated that they would take precautionary action in response to such warnings, in proportions related to the size of the boat they were operating. This suggests that uncertainty forecasts would be useful to these experienced forecast consumers, allowing them to adapt the forecast to their specific boating situation with greater confidence.

Full access
Jared LeClerc and Susan Joslyn

Abstract

What is the best way to communicate the risk of rare but extreme weather to the public? One suggestion is to communicate the relative risk of extreme weather in the form of odds ratios, but, to the authors’ knowledge, this suggestion has never been tested systematically. The experiment reported here provides an empirical test of this hypothesis. Participants performed a realistic computer simulation task in which they assumed the role of the manager of a road maintenance company and used forecast information to decide whether to take precautionary action to prevent icy conditions on a town’s roads. Participants with forecasts expressed as odds ratios were more likely to take appropriate precautionary action on a single target trial with an extreme low temperature forecast than participants using deterministic or probabilistic forecasts. However, participants using probabilistic forecasts performed better on trials involving weather within the normal range than participants with only deterministic forecast information. These results may provide insight into how best to communicate extreme weather risk. This paper offers clear evidence that people given relative risk information are more inclined to take precautionary action when threatened with a low-probability extreme weather event than people given only single-value or probabilistic forecasts.

Full access
Susan Joslyn and Raoni Demnitz

Abstract

Despite near-unanimous agreement among climate scientists about global warming, a substantial proportion of Americans remain skeptical or unconcerned. The two experiments reported here tested communication strategies designed to increase trust in and concern about climate change. They also measured attitudes toward climate scientists. Climate predictions were systematically manipulated to include either probabilistic (90% predictive interval) or deterministic (mean value) projections that described either concrete (i.e., heat waves and floods) or abstract (i.e., temperature and precipitation) events. The results revealed that projections that included the 90% predictive interval were considered more trustworthy than deterministic projections. In addition, in a nationally representative sample, Republicans who were informed of concrete events with predictive intervals reported greater concern and more favorable attitudes toward climate scientists than when deterministic projections were used. Overall, these findings suggest that while climate change beliefs may be rooted in partisan identity, they remain malleable, especially when targeted communication strategies are used.

Full access
Chen Su, Jessica N. Burgeno, and Susan Joslyn

Abstract

People access weather forecasts from multiple sources [mobile telephone applications (“apps”), newspapers, and television] that are not always in agreement for a particular weather event. The experiment reported here investigated the effects of inconsistency among forecasts on user trust, weather-related decisions, and confidence in user decisions. In a computerized task, participants made school-closure decisions on the basis of snow forecasts from different sources and answered a series of questions about each forecast. Inconsistency among simultaneous forecasts did not significantly reduce trust, although inaccuracy did. Moreover, inconsistency may convey useful information to decision-makers. Not only do participants appear to incorporate the information provided by all forecasts into their own estimates of the outcome, but our results also suggest that inconsistency gives rise to the impression of greater uncertainty, which leads to more cautious decisions. The implications for decisions in a variety of domains are discussed.

Free access
Susan Joslyn, Lou Nemec, and Sonia Savelli

Abstract

Two behavioral experiments tested the use of predictive interval forecasts and verification graphics by nonexpert end users. Most participants were able to use a simple key to understand a predictive interval graphic that showed a bracket indicating the upper and lower boundary values of the 80% predictive interval for temperature. In the context of a freeze warning task, the predictive interval forecast narrowed user expectations and alerted participants to the possibility of colder temperatures. As a result, participants using predictive intervals took precautionary action more often than did a control group using deterministic forecasts. Moreover, participants easily understood both deterministic and predictive interval verification graphics based on simple keys, employing them to correctly identify better-performing forecast periods. Importantly, participants with the predictive interval were more likely than those with the deterministic forecast to say they would use that forecast type in the future, demonstrating increased trust. Verification graphics also increased trust in both predictive interval and deterministic forecasts when the effects were isolated from familiarity in the second study. These results suggest that forecasts that include an uncertainty estimate might maintain user trust even when the single-value forecast fails to verify, an effect that may be enhanced by explicit verification data.

Full access
Jessica N. Burgeno and Susan L. Joslyn

Abstract

For high-impact weather events, forecasts often start days in advance. Forecasters believe that consistency among subsequent forecasts is important to user trust and can be reluctant to make changes when newer, potentially more accurate information becomes available. However, to date, there is little empirical evidence for an effect of inconsistency among weather forecasts on user trust, although the reduction in trust due to inaccuracy is well documented. The experimental studies reported here compared the effects of forecast inconsistency and inaccuracy on user trust. Participants made several school closure decisions based on snow accumulation forecasts one and two days prior to the target event. Consistency and accuracy were varied systematically. Although inconsistency reduced user trust, the reduction due to inaccuracy was greater in most cases, suggesting that it is inadvisable for forecasters to sacrifice accuracy in favor of consistency.

Free access
Jessica N. Burgeno and Susan L. Joslyn

Abstract

When forecasts for a major weather event begin days in advance, updates may be more accurate but inconsistent with the original forecast. Evidence suggests that the resulting inconsistency may reduce user trust. However, adding an uncertainty estimate to the forecast may attenuate any loss of trust due to forecast inconsistency, as has been shown with forecast inaccuracy. To evaluate this hypothesis, the experiment reported here tested the impact on trust of adding probabilistic snow accumulation forecasts to single-value forecasts in a series of original and revised forecast pairs (based on historical records) that varied in both consistency and accuracy. Participants rated their trust in the forecasts and used them to make school closure decisions. Half of the participants received single-value forecasts, and half also received the probability of 6 or more inches (the decision threshold in the assigned task). As in previous research, forecast inaccuracy was detrimental to trust, although probabilistic forecasts attenuated the effect. Moreover, the inclusion of probabilistic forecasts allowed participants to make economically better decisions. Surprisingly, in this study, inconsistency increased, rather than decreased, trust, perhaps because it alerted participants to uncertainty and led them to make more cautious decisions. Furthermore, the positive effect of inconsistency on trust was enhanced by the inclusion of probabilistic forecasts. This work has important implications for practical settings, suggesting that both probabilistic forecasts and forecast inconsistency provide useful information to decision-makers. Therefore, members of the public may well benefit from well-calibrated uncertainty estimates and newer, more reliable information.

Restricted access
Gala Gulacsik, Susan L. Joslyn, John Robinson, and Chao Qin

Abstract

The likelihood of threatening events is often simplified for members of the public and presented as risk categories such as the “watches” and “warnings” currently issued by the National Weather Service in the United States. However, research (e.g., Joslyn and LeClerc) suggests that explicit numeric uncertainty information (for example, 30%) improves people’s understanding as well as their decisions. Whether this benefit extends to dynamic situations in which users must process multiple forecast updates is as yet unknown. It may be that other likelihood expressions, such as color coding, are required under those circumstances. The experimental study reported here compared the effect of the categorical expressions “watches” and “warnings” with both color-coded and numeric percent chance expressions of the likelihood of a tornado in a situation with multiple updates. Participants decided whether and when to take shelter to protect themselves from a tornado on each of 40 trials, each with seven updated tornado forecasts. Understanding, decision quality, and trust were highest in conditions that provided percent chance information. Of all the expressions, color-coded likelihood information inspired the least trust, led to the greatest overestimation of likelihood, and was most often confused with severity information.

Restricted access
Gala Gulacsik, Susan L. Joslyn, John Robinson, and Chao Qin

Abstract

There are lingering questions about the effectiveness of the watch, warning, and advisory (WWA) system used to convey weather threats in the United States. Recently, there has been a shift toward alternative communication strategies such as the impact-based forecast. The study reported here compared users’ interpretations of a color-coded impact-based prototype designed for email briefings with a legacy WWA format. Participants, including emergency managers and members of the public, saw a weather briefing and rated event likelihood, severity, damage, and population affected. Then they recommended emergency response actions. Each briefing described the severity of the weather event and the degree of impact on population and property. In one condition, a color-coded impacts scale was added to the text description. In another, an advisory and/or warning was added to the text description. These were compared with the text-only control. Both emergency managers and members of the public provided higher ratings for event likelihood, severity, damage, and population affected, and recommended a greater response for higher impact levels, regardless of format. For both groups, the color-coded format decreased ratings for lower-impact events. Among members of the public, the color-coded format also led to increases in many ratings and a greater response at higher impact levels relative to the other two conditions. However, the highest ratings among members of the public were in the WWA condition. Somewhat surprisingly, the only effect of the WWA format on emergency managers was to reduce action recommendations, probably because of the inclusion of the “advisory” in some briefings.

Restricted access