• Avila, L. A., and S. R. Stewart, 2013: Annual weather summary: Atlantic hurricane season of 2011. Mon. Wea. Rev., 141, 2577–2596, doi:10.1175/MWR-D-12-00230.1.

  • Baker, E. J., 1979: Predicting response to hurricane warnings: A reanalysis of data from four studies. Mass Emerg., 4, 9–24.

  • Brommer, D. M., and J. C. Senkbeil, 2010: Pre-landfall evacuee perception of the meteorological hazards associated with Hurricane Gustav. Nat. Hazards, 55, 353–369, doi:10.1007/s11069-010-9532-7.

  • Christensen, L., and C. E. Ruch, 1980: The effect of social influence on response to hurricane warnings. Disasters, 4, 205–210, doi:10.1111/j.1467-7717.1980.tb00273.x.

  • Dash, N., and H. Gladwin, 2007: Evacuation decision making and behavioral response: Individual and household. Nat. Hazards Rev., 8, 69–77, doi:10.1061/(ASCE)1527-6988(2007)8:3(69).

  • DeYoung, S. E., T. Wachtendorf, R. A. Davidson, K. Xu, L. Nozick, A. K. Farmer, and L. Zelewicz, 2016: A mixed method study of hurricane evacuation: Demographic predictors for stated compliance to voluntary and mandatory orders. Environ. Hazards, 15, 95–112, doi:10.1080/17477891.2016.1140630.

  • Drake, L., 2012: Scientific prerequisites to comprehension of the tropical cyclone forecast: Intensity, track, and size. Wea. Forecasting, 27, 462–472, doi:10.1175/WAF-D-11-00041.1.

  • Emanuel, K., 2005: Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686–688, doi:10.1038/nature03906.

  • Huang, S.-K., M. K. Lindell, and C. S. Prater, 2015: Who leaves and who stays? A review and statistical meta-analysis of hurricane evacuation studies. Environ. Behav., 48, 991–1029, doi:10.1177/0013916515578485.

  • Joslyn, S. L., and R. M. Nichols, 2009: Probability or frequency? Expressing forecast uncertainty in public weather forecasts. Meteor. Appl., 16, 309–314, doi:10.1002/met.121.

  • Joslyn, S. L., and J. LeClerc, 2013: Decisions with uncertainty: The glass half full. Curr. Dir. Psychol. Sci., 22, 308–315, doi:10.1177/0963721413481473.

  • Joslyn, S. L., and M. A. Grounds, 2015: The use of uncertainty forecasts in complex decision tasks and various weather conditions. J. Exp. Psychol.: Appl., 21, 407–417, doi:10.1037/xap0000064.

  • Joslyn, S. L., S. Savelli, and L. Nadav-Greenberg, 2011: Reducing probabilistic weather forecasts to the worst-case scenario: Anchoring effects. J. Exp. Psychol.: Appl., 17, 342–353, doi:10.1037/a0025901.

  • Judd, C. M., G. H. McClelland, and C. S. Ryan, 2009: Data Analysis: A Model Comparison Approach. 2nd ed. Routledge, 328 pp.

  • Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616–632, doi:10.1111/j.1539-6924.2011.01647.x.

  • Lindell, M. K., S.-K. Huang, H.-L. Wei, and C. D. Samuelson, 2016: Perceptions and expected immediate reactions to tornado warning polygons. Nat. Hazards, 80, 683–707, doi:10.1007/s11069-015-1990-5.

  • Loewenstein, G. F., E. U. Weber, C. K. Hsee, and N. Welch, 2001: Risk as feelings. Psychol. Bull., 127, 267–286, doi:10.1037/0033-2909.127.2.267.

  • Meyer, R. J., K. Broad, B. Orlove, and N. Petrovic, 2013: Dynamic simulation as an approach to understanding hurricane risk responses: Insights from the Stormview lab. Risk Anal., 33, 1532–1552, doi:10.1111/j.1539-6924.2012.01935.x.

  • Meyer, R. J., J. Baker, K. Broad, J. Czajkowski, and B. Orlove, 2014: The dynamics of hurricane risk perception: Real-time evidence from the 2012 Atlantic hurricane season. Bull. Amer. Meteor. Soc., 95, 1389–1404, doi:10.1175/BAMS-D-12-00218.1.

  • Morss, R. E., J. L. Demuth, J. K. Lazo, K. Dickinson, H. Lazrus, and B. H. Morrow, 2015: Understanding public hurricane evacuation decisions and responses to forecast and warning messages. Wea. Forecasting, 31, 395–417, doi:10.1175/WAF-D-15-0066.1.

  • NOAA, 2009: “Hurricane local statement.” National Weather Service Glossary. Accessed 5 June 2016. [Available online at http://w1.weather.gov/glossary/.]

  • Petrolia, D., S. Bhattacharjee, and T. R. Hanson, 2011: Heterogeneous evacuation responses to storm forecast attributes. Nat. Hazards Rev., 12, 117–124, doi:10.1061/(ASCE)NH.1527-6996.0000038.

  • Richard, F. D., C. F. Bond Jr., and J. J. Stokes-Zoota, 2003: One hundred years of social psychology quantitatively described. Rev. Gen. Psychol., 7, 331–363, doi:10.1037/1089-2680.7.4.331.

  • Sherif, M., D. Taub, and C. I. Hovland, 1958: Assimilation and contrast effects of anchoring stimuli on judgments. J. Exp. Psychol., 55, 150–155, doi:10.1037/h0048784.

  • Sherman-Morris, K., 2013: The public response to hazardous weather events: 25 years of research. Geogr. Compass, 7, 669–685, doi:10.1111/gec3.12076.

  • Tversky, A., and D. Kahneman, 1974: Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131, doi:10.1126/science.185.4157.1124.

  • Tversky, A., and D. Kahneman, 1992: Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertainty, 5, 297–323, doi:10.1007/BF00122574.

  • Weber, E. U., 1994: From subjective probabilities to decision weights: The effect of asymmetric loss functions on the evaluation of uncertain outcomes and events. Psychol. Bull., 115, 228–242, doi:10.1037/0033-2909.115.2.228.

  • Weber, E. U., and D. J. Hilton, 1990: Contextual effects in the interpretation of probability words: Perceived base rate and severity of events. J. Exp. Psychol.: Hum. Percept. Perform., 16, 781–789, doi:10.1037/0096-1523.16.4.781.

  • Webster, P. J., G. J. Holland, J. A. Curry, and H. R. Chang, 2005: Changes in tropical cyclone number, duration, and intensity in a warm environment. Science, 309, 1844–1846, doi:10.1126/science.1116448.

  • Whitehead, J. C., B. Edwards, M. Van Willigen, J. R. Maiolo, K. Wilson, and K. T. Smith, 2000: Heading for higher ground: Factors affecting real and hypothetical hurricane evacuation behavior. Global Environ. Change, 2B, 133–142, doi:10.1016/S1464-2867(01)00013-4.

  • Wu, H. T., M. K. Lindell, C. S. Prater, and C. D. Samuelson, 2014: Effects of track and threat information on judgments of hurricane strike probability. Risk Anal., 34, 1025–1039, doi:10.1111/risa.12128.

  • Wu, H. T., M. K. Lindell, and C. S. Prater, 2015a: Process tracing analysis of hurricane information displays. Risk Anal., 35, 2202–2220, doi:10.1111/risa.12423.

  • Wu, H. T., M. K. Lindell, and C. S. Prater, 2015b: Strike probability judgments and protective action recommendations in a dynamic hurricane tracking task. Nat. Hazards, 79, 355–380, doi:10.1007/s11069-015-1846-z.

[Figures omitted from this version. Captions, in order of appearance:]
  • Means and standard errors (SE) for estimates of lives lost per condition.
  • Means and SE for estimates of percent chance the storm will be severe per condition.
  • Means and SE for severity expectations per condition.
  • Means and SE for estimates of lives lost per condition.
  • Means and SE for estimates of percent chance the storm will be severe per condition.
  • Means and SE for severity expectations per condition.


Weather Warning Uncertainty: High Severity Influences Judgment Bias

  • 1 Department of Psychology, College of Liberal Arts and Sciences, University of Florida, Gainesville, Florida
  • 2 Department of Psychology, College of Liberal Arts and Social Sciences, Georgia Southern University, Statesboro, Georgia
  • 3 Department of Psychology, College of Liberal Arts and Sciences, University of Florida, Gainesville, Florida

Abstract

Information about hurricanes changes as the storm approaches land. Additionally, people tend to think that severe events are more likely to occur even if the probability of that event occurring is the same as a less severe event. Thus, holding probability constant, this research tested the influence of severity on storm judgments in the context of updates about the approaching storm’s severity. In two studies, participants watched one of four (experiment 1) or one of five (experiment 2) sequences of updating hurricane warnings. The position of category 1 and category 5 hurricane warnings in the sequences varied (e.g., category 1 first and category 5 last, or category 5 first and category 1 last). After the videos, participants made judgments about the approaching storm. In experiment 1, participants generally overestimated the threat of the storm if they saw a category 5 hurricane warning in any position. Experiment 2, designed to test whether experiment 1 results were due to a contrast effect, revealed a similar pattern to experiment 1. Overall, when participants saw a category 5 hurricane warning, they anchored to severity regardless of updates that the storm had decreased in severity. Importantly, however, the extent of anchoring to severity depended on the type of judgment participants made. In terms of policy, the study proposes that weather warning agencies focus on message content at least as much as they focus on message accuracy.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Joy Losee, jl01745@ufl.edu


1. Introduction

As global temperature increases, the intensity of damage from natural disasters such as hurricanes is expected to increase (Emanuel 2005). Although atypical, in 2004 alone 9 of 14 named storms in the North Atlantic became hurricanes, 4 of which struck the southeastern United States (Webster et al. 2005). Additionally, Atlantic hurricanes in 2011 claimed 70 lives and cost $7.5 billion in the United States (Avila and Stewart 2013). Even with the potential consequences of approaching storms, people sometimes fail to heed weather warnings (Baker 1979). One prominent weather decision-making model, the Protective Action Decision Model (PADM; Lindell and Perry 2012), argues that three psychological processes occur as cues of a threat emerge from either the environment or warnings. These psychological processes are 1) predecisional processes (i.e., exposure, attention, and comprehension); 2) perceptions of threat, alternative mitigation options, and social stakeholders; and 3) the decision to take protective action. Unless the message or warning is particularly powerful, leading to skipping over stages, people usually proceed through the stages before preparing for a threat. And although each stage is not necessary for a person to engage in preparedness behavior, it is essential that people perceive threat.

However, much of the research on people’s lack of adherence to weather warnings and evacuation orders focuses on discovering why people do not comprehend or correctly interpret the weather warnings (Wu et al. 2014; Drake 2012) and how to make the information more accessible (Joslyn and LeClerc 2013; Joslyn et al. 2011). Understanding these comprehension processes is important. However, a complementary understanding of how different people may process this information after comprehending it is also important. Indeed, comprehension does not always link directly to rational behavior. People do not weigh gains and losses equally, and even when they comprehend weather warnings, they may make decisions based on their subjective perception of the warning information. Therefore, it is important to investigate how people subjectively perceive weather warnings in addition to how they comprehend them.

Prospect theory (Tversky and Kahneman 1992) describes subjective probability—the difference in weights people apply to potential gain and loss outcomes of a decision—which is an important factor in determining whether people heed weather warnings. Other research on weather and information processing has revealed that multiple contextual factors contribute to people’s judgments of weather threat (Christensen and Ruch 1980; Joslyn et al. 2011; Weber 1994; Wu et al. 2015a,b). For example, people are more likely to respond to warnings that include protective action recommendations, and the number of protective action recommendations in a warning is positively associated with people’s self-reported strike probabilities (Wu et al. 2015b).

Indeed, severity—especially high severity—may influence people’s perceptions of and predictions about an approaching weather threat. Severity is important because people will weigh expected outcomes based on the consequences of misjudging such outcomes (asymmetric loss function; Weber 1994; Weber and Hilton 1990). For example, people’s judgments about the probability of an event are influenced by the severity of the event they are predicting (Weber and Hilton 1990). High severity tends to make people think that an event is more likely to happen even if the base rate or prior probability (i.e., actual likelihood) of the event is low.

One example of the influence of severity on information processing is its possible influence on the use of the anchoring heuristic (Joslyn et al. 2011; Tversky and Kahneman 1974). The anchoring heuristic refers to a failure to adequately adjust an estimate away from an initial piece of quantitative information (Tversky and Kahneman 1974). A typical anchoring effect emerges when people are asked to estimate the solution to math problems such as 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 versus 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. People who see the first sequence estimate higher numbers than people who see the second one even though the correct answer for both groups is the same (Tversky and Kahneman 1974). In this example, the anchoring heuristic represents an insufficient adjustment away from the first number in the sequence, leading to inaccurate responses.
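The demonstration rests on simple arithmetic: multiplication is order-invariant, so the two sequences have the identical answer and any difference in estimates reflects only the anchor (the first numbers seen). A quick check:

```python
import math
from functools import reduce
from operator import mul

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

# Both orderings are the same product, 8! = 40320; only the anchor differs.
product_desc = reduce(mul, descending)
product_asc = reduce(mul, ascending)
assert product_desc == product_asc == math.factorial(8)  # 40320
```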

The anchoring heuristic is relevant to weather warnings because people use numerical quantities when judging likelihood under conditions of uncertainty. Weather warnings typically contain quantitative information, including percentages, degrees of severity, and descriptions of the weather such as wind speed or area affected. Indeed, many weather warning studies suggest the anchoring heuristic as an explanation for odd patterns in their data (e.g., Lindell et al. 2016; Wu et al. 2015a,b). Research related to anchoring and weather judgments does show that people exhibit an anchoring-like bias when processing information about severe weather but only under certain circumstances (Joslyn et al. 2011). For example, when participants received wind speed information emphasizing the lower bound [e.g., there is a 10% chance that the high wind speed will be lower than 3 kt (1 kt = 0.51 m s−1)], people did not use an anchoring heuristic. Instead, after learning the lower bound information, people adjusted their estimates closer to the more accurate 15-kt wind speed, which was the most likely wind speed for each day. Alternatively, when participants received wind speed forecast information emphasizing the upper bound (e.g., 10% chance that the high wind speed will be greater than 27 kt), their wind speed predictions for the next day were higher and anchored toward 27 kt; thus, anchoring emerged, leading to an insufficient adjustment away from 27 kt. In other words, people exhibited anchoring bias in the form of insufficient adjustment in the upper bound condition (estimation higher than 15 kt) even though they were aware of the low likelihood (10%) of the 27-kt wind speed (Joslyn et al. 2011). Compared to traditional anchoring (i.e., insufficient adjustment from the first position value), these results suggest that anchoring can be based on some salient degree of severity rather than order of information (described above). Conversely, if there are no salient severe values, then predictions relate less to the preceding information and consequently estimates are more accurate.

Another example of anchoring to severity occurred in a study examining judgments of likelihood of a storm striking a given area (Wu et al. 2014). Wu et al. suspected that people might not understand “cone of uncertainty” depictions (i.e., visual representations of the uncertainty—or error—surrounding a predicted storm track). To determine people’s comprehension of the cone of uncertainty, the study manipulated six factors: 1) direction of the track, 2) whether the storm was a category 1 or a category 4, 3) whether participants saw the storm’s track or intensity first, 4) type of track (cone with track, cone without track, or track without cone), 5) type of track likelihood judgment, and 6) hurricane information training. People were able to use base-rate information gained in training (a variable expected to influence comprehension) and to distribute probability of storm strike across locations in the cone and not only the location to which the track pointed. Indeed, people were generally able to accurately process basic hurricane information (e.g., use base-rate information and understand that the hurricane is most likely to strike toward the sector that the track points) independent of whether they saw a track, a track with a cone, or just a cone. However, although not an original prediction of the study, estimates of strike probability were higher not only along the given track but also in each of the sectors on the map outside the cone of uncertainty when the focal storm was a category 4 versus category 1. Thus, people thought the storm was more likely to strike everywhere when the forecast was for a more severe storm. Comparatively, people narrowed their estimates when the forecast was for a less severe storm. In sum, people may have anchored to the severity of the category 4 storm, which, similar to findings related to the asymmetrical loss function (Weber 1994), led to larger estimates of landfall likelihood.

Wu et al. (2014) acknowledged that participants made judgments based on a single piece of location information, whereas in a real-world setting, people would hear multiple predictions about a hurricane’s likelihood to make landfall. Little research beyond the studies by Wu et al. (2015b) and Meyer et al. (2013) has examined how current weather warning formats—multiple or changing predictions—affect threat perception. To our knowledge, however, neither of these groups has examined the specific effect of severity changes over the course of an evolving hurricane. Typically, people hear the first forecast about a storm and subsequent weather warnings follow, often confirming or reducing the risk. If anchoring occurs toward a feature of salient severity, such as what Joslyn et al. (2011) and Wu et al. (2014) found, then a tendency to anchor, or perceive higher-than-average severity, may occur even when people hear a severe forecast in the context of reports in which storm severity is downgraded. Indeed, though some research has described results in reference to the anchoring heuristic (e.g., Lindell et al. 2016), the effect of severity on the use of the anchoring heuristic remains untested empirically. Thus, we set out to test the effect of changing threat severity on people’s judgments and interpretations of weather risk information.

The present studies

The information that people have available influences the decision-making strategies that they use when making judgments under uncertainty, such as under the threat of severe weather (Weber and Hilton 1990; Joslyn and Nichols 2009). Indeed, weather warnings may involve multiple pieces of information that could also serve as an influence on decision-making (e.g., news updates). Currently, evidence suggests that when faced with multiple pieces of information, people may exhibit an anchoring-like bias when one of those pieces includes a severe prediction (Joslyn et al. 2011; Wu et al. 2014). Specifically, we propose that, in weather contexts, people will anchor to severity by insufficiently adjusting predictions away from a salient value of severity. Thus, this research directly examined two questions: 1) Do people process sequences of updating storm information differently based on the content (i.e., severity) of those updates? 2) Does high severity cause people to exhibit an anchoring-like bias when judging the probability of a storm?

To answer these questions, we used an experimental design with hypothetical scenarios. Although there are some differences in effect size of responding to real versus hypothetical hurricanes, the types of responses tend to be similar (Huang et al. 2015). In weather contexts, experimental designs are useful because they allow researchers to examine specific variables while controlling for others (e.g., location, susceptibility, timing; Huang et al. 2015). Much of the research on human interaction with severe weather involves case studies, which are useful for big-picture analyses but rarely allow for determining the causal influence of specific factors on risk perception (Sherman-Morris 2013). Because this study focused on severity, we used an experimental design to isolate and manipulate severity.

2. Experiment 1

To manipulate severity, we used a between-subjects design, where participants saw one of four possible sequences that contained two hurricane warnings. The four possible sequences or conditions were combinations of warnings for either a category 1 hurricane or a category 5 hurricane. We chose these two extremes because they provide the most powerful manipulation of severity, maximizing the difference between levels of severity. Thus, if severity does influence an anchoring-like bias, specifically in terms of a failure to sufficiently adjust estimates following a high-severity warning, then we hypothesized that seeing a category 5 warning (first, last, or twice in the sequence) would lead participants to make higher estimates of severity and probability than seeing a category 1 warning twice in the sequence.

Using videos with text and audio in a computer-based, in-laboratory, online survey, experiment 1 tested order effects of two sequential hurricane warnings of either a category 1 hurricane or a category 5 hurricane on perceptions of the storm when it hypothetically made landfall. The sequences were as follows: 1) no change in the predicted severity of the storm, either staying a category 5 across both warnings or staying a category 1 across both warnings; 2) an upgrade in storm severity from first (i.e., least severe; category 1) to last warning (i.e., most severe; category 5); or 3) a downgrade in severity from first to last warning (i.e., category 5 then category 1). This extends prior work (Joslyn et al. 2011; Wu et al. 2014, 2015b) by examining the extent to which anchoring occurs if severe and nonsevere information occur in the same context (e.g., changes in weather severity). Based on these studies, we expected that participants’ forecasts would be influenced by the severity of the hurricane in the warning. Specifically, if people tend to anchor to severity, we expected estimates to remain high even when contradictory information was presented later (e.g., when the storm downgraded from a category 5 to a category 1). If anchoring and insufficient adjustment away from severity did not occur, then we expected no difference between those who saw two category 1 warnings and those who saw a category 5 first and a category 1 second.

Method

1) Participants

A total of 214 undergraduate students (60% women, 40% men), primarily enrolled in an Introduction to Psychology course at a midsize college in the southeastern United States, participated in this study in exchange for course credit. The sample size reflects the number of participants available in a semester. We conducted a post hoc power analysis using r = 0.21, the overall average effect size in social psychology according to a meta-analysis of psychological meta-analyses (Richard et al. 2003). Thus, with a sample of 214 participants, we have 0.88 power to detect an effect size of 0.21 with a two-tailed alpha of 0.05. All data were collected according to American Psychological Association guidelines approved by the institutional review board.
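The reported power can be approximated with a Fisher z calculation. The sketch below is an illustration under the normal-approximation assumption; the authors' exact software is not stated, so a small discrepancy from the reported 0.88 is expected.

```python
import math
from statistics import NormalDist


def power_for_r(r, n, alpha=0.05):
    """Approximate two-tailed power to detect a correlation of size r
    with n participants, via the Fisher z normal approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)            # ~1.96 for alpha = .05
    z_effect = math.atanh(r) * math.sqrt(n - 3)   # Fisher z over its SE
    # Probability the test statistic exceeds the critical value in either tail
    return nd.cdf(z_effect - z_crit) + nd.cdf(-z_effect - z_crit)


power = power_for_r(0.21, 214)  # approximately 0.87, near the reported 0.88
```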

2) Design and materials

Using a between-subjects design, we randomly assigned participants to one of the four different conditions, which were sequences of two hurricane warnings for the same hurricane. Two conditions (sequences) indicated that the expected storm stayed the same (category 1 first and second, category 5 first and second) and one sequence indicated the storm was weakening (category 5 first and category 1 second), whereas another indicated the storm was strengthening (category 1 first and category 5 second).

3) Procedure

Participants completed the experiment in the laboratory using an online survey. A research assistant greeted participants and informed them that they would watch a series of weather forecasts about a single storm. We adapted the warnings from the “hurricane local statement” (HLS; NOAA 2009) structure, which includes a lead statement, a sentence detailing the counties to be affected, expected wind speed, and expected damage. To administer the warnings in a video format, the warnings closely resembled a television interruption message. Thus, the warnings were on a black background with white font that contained scrolling text, which read as follows: “The National Weather Service has issued a Category 1 [Category 5] Hurricane warning for the following counties/areas: Bulloch County, Jenkins County, Chatham County, Effingham County. Effective 04/26/2013 18:20:00 CDT.” Additionally, stationary text in the center of the screen read: “Emergency Alert System, [scrolling text], National Weather Service Issued a Category 1 [5] Hurricane Warning.” A computerized male voice also announced a warning of the storm and the damage typically associated with that category of storm. As the video began, text read, “Imagine you’re watching television at home and you see the following warning…” Next, participants saw the first warning (either a category 1 or category 5 hurricane warning). Then the screen turned black before words reappeared reading, “It’s later in the week and you see this warning…” The next screen showed the second warning, which was either a category 1 or category 5. After watching both hurricane warnings, participants read the instructions, “Consider the video you just watched and answer the following questions about the storm,” and made two numerical estimates: the percent chance of a severe storm and the number of lives that would be lost in the storm.
Specifically, participants answered the question, “When the storm makes landfall, what percentage (%) of a chance is there that the storm will be severe?” before answering the question, “Considering the aftermath of the storm, estimate how many lives will be lost?” Although the participants were not qualified in terms of a meteorological or environmental science degree to make actual estimates on these items, we were interested in layperson predictions. Additionally, typical anchoring studies use an open-ended numerical response as the dependent variable (Tversky and Kahneman 1974). Next, participants responded on four 7-point Likert scales—one asking the participant’s likelihood of preparing, another asking the participant’s estimate of the degree of damage the storm will have on their home, another asking the likelihood of damaging winds, and another the likelihood of flooding as a result of the storm. For analyses, we computed the mean of these four measures and created a composite variable called severity expectations. To assess intention to prepare, participants answered binary yes–no questions about evacuation. One question asked participants whether they would issue an evacuation order (and that they should do so only if they thought the storm was going to be more severe than a category 3). The second question asked participants if they would personally evacuate. Participants also answered whether they would like to receive more information about hurricane preparedness. Although not included in analyses because they were not directly related to hypotheses in the present studies, participants also rated the amount of damage they perceived in 20 pictures of hurricane damage and completed demographics measures that included race, age, gender, and previous experience with hurricanes.

3. Results

After log-transforming participants' estimates of lives lost and combining the Likert scale estimates into a composite measure called severity expectations (α = 0.84), we conducted hierarchical regression analyses using separate Helmert contrast codes for each hypothesis as simultaneous predictors (Table 1). This method has several advantages over a typical one-way ANOVA with post hoc testing, including increased power and the ability to test specific a priori hypotheses (Judd et al. 2009). Helmert contrasts compare the mean of one level of a categorical variable to the combined mean of the remaining levels. Thus, analyses with Helmert contrasts enter k − 1 contrasts in a hierarchical regression analysis, where k is the number of levels or categories.
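As a concrete illustration of this coding scheme, the sketch below generates a standard Helmert parameterization for a k-level factor. The study's specific H1–H3 weights (Table 1) may differ in sign or scaling, so treat `helmert_codes` as a generic construction rather than the authors' exact codes.

```python
import numpy as np

def helmert_codes(k):
    """Build the k-1 Helmert contrast columns for a k-level factor.

    Column j compares level j against the combined mean of all later
    levels; the columns are pairwise orthogonal and each sums to zero.
    """
    codes = np.zeros((k, k - 1))
    for j in range(k - 1):
        codes[j, j] = k - 1 - j      # weight on the focal level
        codes[j + 1:, j] = -1        # levels it is compared against
    return codes

print(helmert_codes(4))
# column 0: [3, -1, -1, -1] (level 1 vs. the rest), etc.
```

For the four warning sequences here, the first column is the analog of contrast H1: one group (category 1 only) against the pooled mean of the other three.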

Table 1. Planned contrasts for expected effect directions tested for each of the three dependent variables (DVs).

Specifically, this study had an independent variable with four levels, requiring three separate contrast codes. Importantly, each contrast code tests a specific hypothesis. Contrast H1 tested whether those who saw only category 1 warnings made lower estimates than those who saw any other warning sequence. Contrast H2 then tested the hypothesis that participants who saw a category 5 followed by a category 1 warning made lower estimates than the average of those who 1) saw a category 1 followed by a category 5 warning and 2) saw only category 5 warnings. Contrast H3 tested whether those who saw a category 1 followed by a category 5 warning made lower estimates than those who saw only category 5 warnings. In this experiment, the focal hypothesis was H1; thus, it was entered alone in the first step of the hierarchical regression. Contrasts H2 and H3 were entered in the second step to determine whether their addition significantly increased the variance explained. Additionally, because H1 was the focal hypothesis, follow-up polynomial trend analyses, which describe the shape (i.e., linear, quadratic, or cubic) of responses, were conducted only if contrasts H2 and H3 were significant predictors as a set. See Table 2 for all regression results. Like the Helmert contrasts, the polynomial contrasts are predictors in a multiple regression with the estimates as dependent variables. The linear contrast tested whether estimates increased linearly across conditions. The quadratic trend tested whether the responses across conditions formed a curved or U-shaped pattern. The cubic trend tested whether the responses followed a pattern with more than one peak and valley. See Table 1 for the coding scheme.
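The two-step procedure can be sketched with simulated data as follows. The contrast weights, group labels, and effect sizes here are assumptions for illustration, not the study's values; the ΔR² for the added contrasts is tested with the usual F test for a set of predictors.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(42)

# Hypothetical Helmert-style codes for the four conditions
# (1-1, 5-1, 1-5, 5-5); signs and scaling are illustrative only.
codes = {
    '1-1': (-3, 0, 0),   # H1: category-1-only vs. all other sequences
    '5-1': (1, -2, 0),   # H2: downgrade vs. the two cat-5-last sequences
    '1-5': (1, 1, -1),   # H3: upgrade vs. category-5-only
    '5-5': (1, 1, 1),
}

n_per = 50
conds = np.repeat(list(codes), n_per)
X = np.array([codes[c] for c in conds], dtype=float)
# Simulated outcome: elevated whenever any category 5 warning appears
y = np.where(conds == '1-1', 2.0, 3.0) + rng.normal(0, 0.5, conds.size)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared(X[:, :1], y)   # step 1: H1 alone
r2_step2 = r_squared(X, y)          # step 2: H1 + H2 + H3
n, p_full, df_added = len(y), 3, 2
f_stat = ((r2_step2 - r2_step1) / df_added) / ((1 - r2_step2) / (n - p_full - 1))
p_value = f_dist.sf(f_stat, df_added, n - p_full - 1)
print(f"delta R^2 = {r2_step2 - r2_step1:.3f}, p = {p_value:.3f}")
```

Because the simulated signal loads only on H1, the step 2 increment should be small and nonsignificant, mirroring the lives-lost result reported below for experiment 1.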

Table 2. Summary of multiple regression analyses for Helmert contrasts predicting three outcomes. CI = confidence interval.

a. Estimates of lives lost

Contrast H1 was significant, whereas in the second step contrasts H2 and H3 explained little additional variance (ΔR² < 0.01, n.s.). Thus, participants who saw only category 1 warnings estimated the number of lives lost to be significantly lower than those who saw any sequence containing a category 5 warning, regardless of order (Table 3; Fig. 1). That only contrast H1 was significant, whereas H2 and H3 were nonsignificant, supports the hypothesis that people anchor to the severity of the warnings regardless of their order; that is, participants provided higher estimates of potential loss of life if they were in any condition that included a category 5 warning, whether first or last.

Table 3. Means and SE for each outcome by warning order.

Fig. 1. Means and standard errors (SE) for estimates of lives lost per condition.

Citation: Weather, Climate, and Society 9, 3; 10.1175/WCAS-D-16-0071.1

b. Estimates of the chance the storm will be severe

Contrast H1 was significant, as expected (Table 2). Contrary to predictions, however, the addition of contrasts H2 and H3 in the second step explained a significant amount of variance (ΔR² = 0.08, p < 0.01). To analyze these differences further, we used polynomial contrasts (linear, quadratic, and cubic effects; Table 1). A significant linear trend revealed increasing estimates as the sequence of warnings increased in severity (Fig. 2). Thus, seeing two category 5 warnings led to higher chance estimates than seeing a category 1 followed by a category 5 warning, which in turn led to higher estimates than seeing a category 5 followed by a category 1 warning, which in turn exceeded the estimates of those who saw only category 1 warnings. Additionally, a marginal cubic trend suggested that the difference in percent chance estimates leveled off between those who saw a category 5 followed by a category 1 warning and those who saw a category 1 followed by a category 5 warning. Those who saw a category 5 warning in both positions made substantially higher estimates than all other groups (Table 3). Although some unexpected results emerged, these findings support the overarching hypothesis because participants who saw a category 5 warning in any position made higher estimates than those who saw only category 1 warnings.

Fig. 2. Means and SE for estimates of percent chance the storm will be severe per condition.


c. Severity expectation estimates

Again, contrast H1 was significant. In this case as well, contrasts H2 and H3 explained a significant amount of variance (ΔR² = 0.05, p < 0.05). To explore these effects further, we used polynomial regression. A significant linear trend revealed that severity expectations increased from those who saw only category 1 warnings to those who saw only category 5 warnings (Fig. 3). Additionally, a significant quadratic effect suggested that severity expectations increased most between participants who saw the category 1 warning twice and participants who saw a category 5 followed by a category 1 warning, with a slower increase between the latter and the other sequences that included a category 5 warning (Table 3; Fig. 3). Thus, although the anchoring-to-severity effect was less extreme in the category 5 followed by a category 1 sequence (with estimates lower than those for two category 5 warnings), it still represents a bias toward the severity information, in that the severity expectations differed from those of participants who saw the category 1 warning twice. In other words, a downgrade diminished, but did not eliminate, the bias.

Fig. 3. Means and SE for severity expectations per condition.


d. Binary decision questions

A binary logistic regression using the same Helmert codes as the previous models (Table 1) revealed that both the H1 and H2 codes were significant predictors of the decision to issue an evacuation order. See Table 4 for all regression results. This model accurately classified 84.5% of responses, more than 25% above chance accuracy. The significant H1 code showed that those who saw only category 1 warnings had 2.4 times the odds of declining to issue the evacuation order relative to those who saw any other combination. The significant, albeit weaker, H2 code showed that those who saw a category 5 followed by a category 1 warning had 1.98 times the odds of not issuing an evacuation order compared with participants who saw a category 1 followed by a category 5 warning and participants who saw the category 5 warning twice. A chi-squared analysis also showed a relationship between sequence and the decision to issue an evacuation order, χ²(3, N = 213) = 91.61, p < 0.001, Cramér's V = 0.67. Of those who saw only category 1 warnings, 20.8% agreed to issue an evacuation order compared to 96.2% of those who saw only category 5 warnings, 92.6% of those who saw a category 1 followed by a category 5 warning, and 69.8% of those who saw a category 5 followed by a category 1 warning.
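The chi-squared statistic and Cramér's V can be reproduced from a 4 × 2 contingency table. The cell counts below are reconstructed from the reported percentages and N = 213, so they are approximate; with these counts the test comes out close to the reported χ²(3) = 91.61 and V ≈ 0.66.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Yes/no counts for issuing an evacuation order, reconstructed from the
# reported percentages (20.8%, 69.8%, 92.6%, 96.2%) and N = 213.
table = np.array([
    [11, 42],  # category 1 twice
    [37, 16],  # category 5 then category 1
    [50, 4],   # category 1 then category 5
    [51, 2],   # category 5 twice
])

chi2, p, dof, _ = chi2_contingency(table)          # no Yates correction for 4x2
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}, V = {cramers_v:.2f}")
```

For a 4 × 2 table, min(rows, cols) − 1 = 1, so Cramér's V reduces to the square root of χ²/N.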

Table 4. Summary of binary logistic regression analyses for Helmert contrasts predicting three outcomes; N = 214.

We also analyzed participants' own intentions to evacuate using a binary logistic regression with the same Helmert codes as all other analyses (Table 1). This analysis showed that H1 and H2 were significant predictors of responses on this item. The model accurately classified 82.6% of cases, more than 25% better than chance accuracy. Indeed, when asked whether they would evacuate, participants who saw only category 1 warnings were significantly more likely (OR = 2.07) to say no than those who saw a category 5 warning in any position. Additionally, participants who saw a category 5 followed by a category 1 warning were more likely (OR = 1.39) to say no than the others who saw a category 5 warning. Another chi-squared analysis showed a relationship between sequence and intention to evacuate, χ²(3, N = 213) = 70.64, p < 0.001, Cramér's V = 0.58. Of those who saw only category 1 warnings, 26.0% said they would evacuate compared to 92.5% of those who saw only category 5 warnings, 87.0% of those who saw a category 1 followed by a category 5 warning, and 77.4% of those who saw a category 5 followed by a category 1 warning.

Interestingly, the significant H2 code in both binary logistic regressions indicated that participants who saw a category 5 followed by a category 1 warning were less likely to issue the evacuation order and to evacuate than participants who saw a category 5 warning last (preceded by either a category 1 or a category 5 warning). However, the percentages of participants who opted to issue the evacuation order (69.8%) and to evacuate (77.4%) in the category 5 followed by a category 1 sequence were both relatively high, again suggesting a diminished but persistent bias toward severity.

Finally, a chi-squared analysis of yes–no responses regarding whether participants wanted to receive more information about preparedness showed no significant relationship, χ²(3, N = 184) = 3.32, p > 0.05, Cramér's V = 0.13. The corresponding binary logistic regression was also nonsignificant, as it did not predict any yes responses.

4. Discussion

As expected, the results demonstrated that the type of sequence influenced participants' judgments of the landfalling storm. Whether participants saw a sequence of severe forecasts, a downgrade, or an upgrade, they tended to make the same high severity predictions compared to those who saw a sequence that did not include a severe warning. Participants also reported greater intention to engage in evacuation-related behaviors if they saw a category 5 warning at any position in the sequence compared to seeing only category 1 warnings. This pattern, however, was strongest when participants estimated how many lives would be lost in the storm and weakest when participants estimated the percent chance the storm would be severe. The difference in the pattern of results across measures may indicate that asking people about death engages a more visceral response than asking about percentages. The risk-as-feelings hypothesis (Loewenstein et al. 2001) proposes that some decisions may result from an emotional rather than a cognitive response. Specifically, Loewenstein et al. propose that the vividness of imagined consequences is one factor that may lead people to decide based more on affect than on cognitive evaluation. Although we did not measure affect in this study, the fact that participants exhibited the strongest bias toward the category 5 forecast on the item referring to death provides preliminary evidence that thinking about a fear-arousing event, such as loss of life, is an important context for the anchoring-to-severity effect.

Indeed, in comparison, estimates of the percent chance did not show as strong a bias toward the category 5 warning. The significant H2 and H3 codes, in addition to the significant linear trend, indicated that participants attempted to adjust their percent chance estimates from the first to the second hurricane warning. The percent chance estimate is a more abstractly numerical judgment, which may lack the salience that imagining loss of life evokes.

For the other measure, severity expectations, participants in the category 1 followed by a category 5 sequence and the category 5 followed by a category 1 sequence again attempted to adjust their estimates up or down from the first forecast. The significant H2 and H3 codes indicated that participants attempted to adjust their estimates toward the second forecast. However, what the significant quadratic trend reveals—specifically in terms of the category 5 followed by a category 1 sequence—is that adjustment is insufficient when a severe forecast is involved. Concretely, rather than this effect being a simple anchoring effect (basing estimates on the first forecast), the quadratic trend suggests that participants’ severity expectation estimates in the category 5 followed by a category 1 sequence are similar to participants’ estimates in the category 1 followed by a category 5 sequence, in which higher estimates represent a sufficient adjustment away from the category 1 warning.

Importantly, that high severity expectation estimates were made even when a severe warning was downgraded suggests an anchoring-to-severity effect, where the severe information was more influential than other contradictory information. Further, though not tested here, both the “risk as feelings” hypothesis (people react more cautiously to fear-arousing stimuli; Loewenstein et al. 2001) and the asymmetric loss function (people place more weight on fear-arousing information; Weber 1994) may explain why people would anchor and insufficiently adjust away from high severity information rather than exhibiting traditional anchoring (i.e., failure to adjust away from an initial piece of information). Additionally, this measure was a composite of items that referred to damage (e.g., “estimate the degree of damage the storm will have on your home”), which again may elicit a more visceral response than making percent chance estimates.

Even with the more realistic paradigm (i.e., sequential weather warnings), these results fit well with existing research (Joslyn et al. 2011; Weber 1994; Wu et al. 2014, 2015a). However, an alternative explanation for the similarities between the category 1 followed by a category 5 sequence and the category 5 followed by a category 1 sequence is that the sequences involved only two warnings, which in the case of an upgrade or downgrade represented extreme severity changes; participants' responses may therefore reflect a contrast effect (Sherif et al. 1958). Contrast effects emerge when the reference point (or new information) is too far removed from the scale on which people are anchored (based on old information). In this case, people will make judgments in the opposite direction from the contrasting reference point (Sherif et al. 1958). Thus, it may be that in the downgrade condition, when participants saw a category 1 warning following a category 5, they adjusted in the opposite direction, back toward the category 5, because of the contrast with, and thus implausibility of, the subsequent category 1 warning. The following experiment addressed this possibility while also providing a conceptual replication of the results from experiment 1.

5. Experiment 2

Although the results of experiment 1 suggested that participants anchored to a severe warning regardless of the context of the other warning, we conducted experiment 2 to examine possible contrast effects (Sherif et al. 1958). Experiment 2 created extra perceptual distance between the category 1 and category 5 warnings by using sequences of four hurricane warnings, varying the serial position (first, second, third, or last) of a single category 5 warning among three category 1 warnings. If the results of experiment 1 were due to a contrast effect, then the similarities between the upgrade and downgrade conditions observed in experiment 1 should disappear. Alternatively, if experiment 1's results reflected an anchoring-to-severity bias, then participants who viewed a sequence with a category 5 hurricane warning in any position should make higher estimates across the same measures from experiment 1, even if the sequence indicated a downgrade in the storm. This method is similar to that used by Wu et al. (2015b) and Meyer et al. (2013); in the present study, however, it tests whether contrast effects occur with more than two warnings in a series. Wu et al. (2015b) set out to examine how people process hurricane warning information generally and thus included many more parameters on which participants could base their judgments. Meyer et al. (2013) examined how hypothetical hurricane tracks would influence preparedness. In contrast, our approach is a more focused test of a single factor, severity, holding other factors constant.

Method

1) Participants

A total of 102 undergraduate students in introductory psychology courses at a midsize college in the Southeast (62% women, 38% men) participated in this study. As in experiment 1, this sample size reflects the number of participants available for the semester. We conducted a post hoc power analysis using rp = 0.39, the smallest effect size from the H1 hypotheses of experiment 1 (Table 2). With a sample of 102 participants, we had 0.99 power to detect an effect size of r = 0.39 with a two-tailed alpha of 0.05. All data were collected according to American Psychological Association guidelines and a protocol approved by an institutional review board.
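A post hoc power figure of this kind can be approximated with the Fisher z transformation. The routine below is our own approximation (the authors' software is not stated); for r = 0.39 and n = 102 it returns roughly 0.98, in line with the 0.99 reported.

```python
import numpy as np
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    """Approximate two-tailed power to detect a correlation of size r
    with n observations, via the Fisher z transformation."""
    z_effect = np.arctanh(r) * np.sqrt(n - 3)   # noncentrality parameter
    z_crit = norm.ppf(1 - alpha / 2)
    # power = P(|observed z| exceeds the critical value)
    return norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)

print(round(power_correlation(0.39, 102), 3))
```

Small discrepancies from dedicated power software are expected because the Fisher z method is a large-sample approximation rather than an exact noncentral-t calculation.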

2) Design, materials, and procedure

Participants were randomly assigned to one of five different sequences of four hurricane warnings for the same hurricane. The sequences comprised either all category 1 warnings or one category 5 warning positioned first, second, third, or last among three category 1 warnings. The sequence with a category 5 warning first, followed by three category 1 warnings, represented the most extreme example of a downgrade, whereas the sequence with a category 5 warning last, preceded by category 1 warnings, represented the most extreme upgrade. As in experiment 1, participants were instructed to think of the storm as a whole. The procedure and questionnaire were identical to those of experiment 1. Again, although we collected demographic measures and questions regarding participants' desire for additional hurricane information, we did not include them in this analysis because they were not relevant to our hypotheses.

6. Results

After log-transforming estimates of lives lost and combining the Likert scale estimates into a single composite called severity expectations (α = 0.83), we examined all dependent measures using the regression analog of one-way between-groups ANOVAs with planned contrasts (Table 5). Again, we computed Helmert contrast codes so that each contrast tested a specific hypothesis. The first set of contrasts used the category-1-only sequence as the reference group. Specifically, contrast A0 (category 1 warnings vs. else) tested the focal hypothesis that seeing only category 1 information would result in lower estimates than seeing any sequence that included category 5 information (Table 5). Contrast A1 tested whether estimates increased linearly as the position of the category 5 warning neared the end of the sequence. Contrast A2 tested whether estimates exhibited a curved or U-shaped pattern, such that they were highest when the category 5 warning was either first or last and lowest when it was in one of the two middle positions. Contrast A3 tested a cubic trend. We did not hypothesize that contrast A2 or A3 would be significant, but these tests were included to complete the model. The post hoc contrasts B0–B3 are similar except that they used the sequence with the category 5 warning in the last position as the reference group (Table 5; Judd et al. 2009). See Table 6 for all regression results.
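The linear (A1), quadratic (A2), and cubic (A3) trends among the four category 5 positions correspond to the standard orthogonal polynomial weights for four ordered groups; a minimal sketch of the weights and their orthogonality:

```python
import numpy as np

# Standard orthogonal polynomial contrast weights for four ordered
# groups (here, the serial position of the category 5 warning).
linear    = np.array([-3, -1,  1,  3])
quadratic = np.array([ 1, -1, -1,  1])
cubic     = np.array([-1,  3, -3,  1])

# Zero pairwise dot products mean the three trends partition the
# between-group variation and can be entered simultaneously as
# regression predictors.
for a, b in ((linear, quadratic), (linear, cubic), (quadratic, cubic)):
    print(int(a @ b))   # each prints 0
```

Each participant is scored with the weight for their group on each trend, and the trend's regression coefficient then tests the corresponding shape hypothesis.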

Table 5. Planned contrasts for expected effect directions tested for each of the three outcomes.

Table 6. Summary of multiple regression analyses for contrasts predicting three outcomes; N = 102.

a. Estimates of lives lost

Confirming the focal hypothesis A0, participants who saw only category 1 warnings predicted lower death tolls than all those who saw a category 5 warning in any position (see Table 5 for contrasts, Table 6 for regression results, and Table 7 for means). As seen in Fig. 4, among those who saw a category 5 warning in any of the positions, neither the linear, quadratic, nor cubic trend was significant (ps > 0.05). This result suggests that estimates were similar among all those who saw a category 5 warning, regardless of its position.

Table 7. Means and SE for each outcome according to the sequence of weather warnings viewed.

Fig. 4. Means and SE for estimates of lives lost per condition.


b. Estimates of chance the storm will be severe

The contrasts testing the focal hypothesis and the polynomial trends within those who saw a category 5 warning in any position were nonsignificant (ps > 0.05; see Table 8 and Fig. 5 for means). Thus, we conducted additional follow-up analyses using a second set of contrast codes (B0–B3; Table 5) that examined whether participants who received category 5 information last made higher estimates than others. The significant B0 contrast showed that only participants who saw a category 5 warning last made higher estimates than every other group. The polynomial trends for the groups that did not see a category 5 warning last were nonsignificant (ps > 0.05).

Table 8. Summary of regression analyses for contrasts predicting evacuation outcomes.

Fig. 5. Means and SE for estimates of percent chance the storm will be severe per condition.


c. Severity expectation estimates

With a significant A0 contrast supporting the focal hypothesis, participants who saw only category 1 warnings had lower severity expectations than those who saw a category 5 warning in any position (see Table 7 for means). As seen in Fig. 6, a significant linear trend (A1) among those who saw a category 5 warning revealed that estimates increased linearly, from the group that saw the category 5 warning first to the group that saw it last. Neither the quadratic nor the cubic trend was significant for severity expectations among these participants (ps > 0.05).

Fig. 6. Means and SE for severity expectations per condition.


d. Binary decision questions

We averaged responses on the two evacuation questions for two reasons: First, none of the participants who saw a category 5 warning last said no, which violated binary logistic regression assumptions. Second, the correlation between deciding to issue an evacuation order and deciding to personally evacuate was significant (r = 0.47, p < 0.01). A linear regression with the same contrast codes as the analyses above (Table 5) significantly predicted evacuation decisions (Table 8). A significant A0 contrast showed that participants who saw only category 1 warnings were more likely to respond no than participants who saw a category 5 warning in any position (see Table 9 for means). Additionally, among those who saw a category 5 warning, there was a linear trend: as the position of the category 5 warning moved from second to fourth, participants were increasingly likely to respond yes. Neither the quadratic nor the cubic trend was significant for these participants (ps > 0.05).

Table 9. Means and SE for each outcome by condition.

7. Discussion

Most of the results of experiment 2 supported the notion that severity biases people's estimates. Although the percent chance estimates in experiment 2 did not fully follow the expected pattern, neither did they support the contrast-effect explanation (Sherif et al. 1958). Instead, on this measure, participants appeared to make normative use of the information, with mean percent chance estimates for all conditions except the one with the category 5 warning last hovering around 50%. If contrast effects best explained the results, then the percent chance estimates should not have been highest for those who saw a category 5 warning last but should instead have been similar to those who saw only category 1 warnings: after three category 1 warnings, the category 5 warning should have seemed implausible. Instead, participants who saw a category 5 warning last made much higher estimates than those who saw other sequences.

In terms of severity expectations, participants who saw a category 5 warning first had lower severity expectations than those who saw a category 5 warning second, third, or fourth. This result suggests one of two possibilities. First, there may be a boundary condition for the anchoring-to-severity effect, whereby after a considerable amount of information about a downgrade, people anchor less on the severe forecast. Alternatively, the type of judgment may influence the extent to which people anchor to severity. As with experiment 1, results from the death toll estimates suggest the latter explanation. On this measure, seeing a category 5 warning in any position led to higher estimates than not seeing one at all. This result provides the strongest support for the anchoring-to-severity hypothesis. Indeed, this result, in combination with that from experiment 1, suggests that judgments about loss of life elicit anchoring to severity most strongly. Thus, our findings corroborate research on the asymmetric loss function (Weber 1994) and the risk-as-feelings hypothesis (Loewenstein et al. 2001).

Anchoring to severity also emerged in the measures of participants' intended evacuation behaviors. Much like the pattern of responses for the severity expectations measure, participants reported being more likely to engage in an evacuation-related action if they saw a category 5 warning in any position compared to seeing all category 1 warnings. Among those who saw a category 5 warning, intentions to engage in evacuation-related action increased as the position of the category 5 warning advanced from first to last. On this measure, responses indicated something of a preference for caution over negligence. Together, our results suggest that there are occasions when people anchor to severe information even when new information signals less severity, such as a downgrade in a storm warning.

8. Conclusions

This research provides novel evidence that people tend to overestimate the severity of a storm once they have heard a severe prediction related to that storm. The present research supports prior work on hurricane warnings by showing the importance of hurricane severity for risk perception and eventual protective action (DeYoung et al. 2016; Petrolia et al. 2011; Whitehead et al. 2000; Wu et al. 2014, 2015b). Indeed, Whitehead et al. (2000) found that the best predictor of hypothetical hurricane evacuation was storm intensity. Other studies have shown, for example, that hurricane intensity received the most clicks (i.e., attention) as participants tracked hypothetical hurricanes using a variety of information about the storms (Wu et al. 2015b). Yet another study found that the most important parts of a hurricane forecast were wind speed and landfall time (Brommer and Senkbeil 2010).

Together, prior findings and those of the present work suggest that storm severity is among the most salient features of an approaching hurricane. To date, however, much of the research on hurricane warnings does not directly test, with experimental control, the effect of severity as we have done here. Much of the existing research focuses on the importance of risk perception in people’s decisions to prepare for a hurricane (Dash and Gladwin 2007; Morss et al. 2015), and while the present research confirms the importance of risk perception (e.g., people who saw category 5 warnings were more likely to evacuate and issue an evacuation order), it also identifies conditions under which people may have a biased perception of risk.

Our results, however, are in some ways preliminary. Participants in our research were undergraduate students in a controlled environment, and thus we caution against direct application to forecasting and warnings. Indeed, a field study of risk perceptions for real hurricanes revealed that although participants overestimated the likelihood of hurricane-force wind, they underestimated the amount of damage those winds would cause (Meyer et al. 2014). Thus, questions remain about the correspondence between anchoring to severity and actual behaviors such as protective action. For that reason, future research should determine the extent of this anchoring-to-severity effect, its presence in a wider population, and its causal role in people's decisions to prepare for severe weather. In terms of risk as feelings (Loewenstein et al. 2001), research should further explore whether the type of judgment about a storm prompts a more visceral reaction, and how this reaction produces an anchoring-to-severity bias. Additionally, future investigations may examine the extent to which such reactions could be harnessed to improve responses to weather warnings.

We propose that weather warning agencies focus on message content. For example, emerging research on warnings suggests that including uncertainty information will improve decision-making for those potentially at risk (Joslyn and Grounds 2015). Additionally, we echo the recommendation made by Lindell et al. (2016) that decision-making officials should weigh the effect that higher hurricane intensity may have on inflating their perceptions of a hurricane's likelihood of striking their area. Indeed, although the costs of failing to act may include unnecessary damage or injury, acting when it is not necessary may also cause negative outcomes, such as the economic losses associated with closing schools and businesses. We hope that our findings will encourage both response agencies and other researchers to use experimental and behavioral methods to better understand how people perceive and make key decisions about hurricane preparedness and evacuation.

Acknowledgments

We thank the Brain Storm undergraduate research assistants for their assistance in data collection. None of the authors received funding for this work.

REFERENCES

  • Avila, L. A., and S. R. Stewart, 2013: Annual weather summary: Atlantic hurricane season of 2011. Mon. Wea. Rev., 141, 2577–2596, doi:10.1175/MWR-D-12-00230.1.

  • Baker, E. J., 1979: Predicting response to hurricane warnings: A reanalysis of data from four studies. Mass Emerg., 4, 9–24.

  • Brommer, D. M., and J. C. Senkbeil, 2010: Pre-landfall evacuee perception of the meteorological hazards associated with Hurricane Gustav. Nat. Hazards, 55, 353–369, doi:10.1007/s11069-010-9532-7.

  • Christensen, L., and C. E. Ruch, 1980: The effect of social influence on response to hurricane warnings. Disasters, 4, 205–210, doi:10.1111/j.1467-7717.1980.tb00273.x.

  • Dash, N., and H. Gladwin, 2007: Evacuation decision making and behavioral response: Individual and household. Nat. Hazards Rev., 8, 69–77, doi:10.1061/(ASCE)1527-6988(2007)8:3(69).

  • DeYoung, S. E., T. Wachtendorf, R. A. Davidson, K. Xu, L. Nozick, A. K. Farmer, and L. Zelewicz, 2016: A mixed method study of hurricane evacuation: Demographic predictors for stated compliance to voluntary and mandatory orders. Environ. Hazards, 15, 95–112, doi:10.1080/17477891.2016.1140630.

  • Drake, L., 2012: Scientific prerequisites to comprehension of the tropical cyclone forecast: Intensity, track, and size. Wea. Forecasting, 27, 462–472, doi:10.1175/WAF-D-11-00041.1.

  • Emanuel, K., 2005: Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686–688, doi:10.1038/nature03906.

  • Huang, S.-K., M. K. Lindell, and C. S. Prater, 2015: Who leaves and who stays? A review and statistical meta-analysis of hurricane evacuation studies. Environ. Behav., 48, 991–1029, doi:10.1177/0013916515578485.

  • Joslyn, S. L., and R. M. Nichols, 2009: Probability or frequency? Expressing forecast uncertainty in public weather forecasts. Meteor. Appl., 16, 309–314, doi:10.1002/met.121.

  • Joslyn, S. L., and J. LeClerc, 2013: Decisions with uncertainty: The glass half full. Curr. Dir. Psychol. Sci., 22, 308–315, doi:10.1177/0963721413481473.

  • Joslyn, S. L., and M. A. Grounds, 2015: The use of uncertainty forecasts in complex decision tasks and various weather conditions. J. Exp. Psychol.: Appl., 21, 407–417, doi:10.1037/xap0000064.

  • Joslyn, S. L., S. Savelli, and L. Nadav-Greenberg, 2011: Reducing probabilistic weather forecasts to the worst-case scenario: Anchoring effects. J. Exp. Psychol.: Appl., 17, 342–353, doi:10.1037/a0025901.

  • Judd, C. M., G. H. McClelland, and C. S. Ryan, 2009: Data Analysis: A Model Comparison Approach. 2nd ed. Routledge, 328 pp.

  • Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616–632, doi:10.1111/j.1539-6924.2011.01647.x.

  • Lindell, M. K., S.-K. Huang, H.-L. Wei, and C. D. Samuelson, 2016: Perceptions and expected immediate reactions to tornado warning polygons. Nat. Hazards, 80, 683–707, doi:10.1007/s11069-015-1990-5.

  • Loewenstein, G. F., E. U. Weber, C. K. Hsee, and N. Welch, 2001: Risk as feelings. Psychol. Bull., 127, 267–286, doi:10.1037/0033-2909.127.2.267.

  • Meyer, R. J., K. Broad, B. Orlove, and N. Petrovic, 2013: Dynamic simulation as an approach to understanding hurricane risk responses: Insights from the Stormview lab. Risk Anal., 33, 1532–1552, doi:10.1111/j.1539-6924.2012.01935.x.

  • Meyer, R. J., J. Baker, K. Broad, J. Czajkowski, and B. Orlove, 2014: The dynamics of hurricane risk perception: Real-time evidence from the 2012 Atlantic hurricane season. Bull. Amer. Meteor. Soc., 95, 1389–1404, doi:10.1175/BAMS-D-12-00218.1.

  • Morss, R. E., J. L. Demuth, J. K. Lazo, K. Dickinson, H. Lazrus, and B. H. Morrow, 2015: Understanding public hurricane evacuation decisions and responses to forecast and warning messages. Wea. Forecasting, 31, 395–417, doi:10.1175/WAF-D-15-0066.1.

  • NOAA, 2009: “Hurricane local statement.” National Weather Service Glossary. Accessed 5 June 2016. [Available online at http://w1.weather.gov/glossary/.]

  • Petrolia, D., S. Bhattacharjee, and T. R. Hanson, 2011: Heterogeneous evacuation responses to storm forecast attributes. Nat. Hazards Rev., 12, 117–124, doi:10.1061/(ASCE)NH.1527-6996.0000038.

  • Richard, F. D., C. F. Bond Jr., and J. J. Stokes-Zoota, 2003: One hundred years of social psychology quantitatively described. Rev. Gen. Psychol., 7, 331–363, doi:10.1037/1089-2680.7.4.331.

  • Sherif, M., D. Taub, and C. I. Hovland, 1958: Assimilation and contrast effects of anchoring stimuli on judgments. J. Exp. Psychol., 55, 150–155, doi:10.1037/h0048784.

  • Sherman-Morris, K., 2013: The public response to hazardous weather events: 25 years of research. Geogr. Compass, 7, 669–685, doi:10.1111/gec3.12076.

  • Tversky, A., and D. Kahneman, 1974: Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131, doi:10.1126/science.185.4157.1124.

  • Tversky, A., and D. Kahneman, 1992: Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertainty, 5, 297–323, doi:10.1007/BF00122574.

  • Weber, E. U., 1994: From subjective probabilities to decision weights: The effect of asymmetric loss functions on the evaluation of uncertain outcomes and events. Psychol. Bull., 115, 228–242, doi:10.1037/0033-2909.115.2.228.

  • Weber, E. U., and D. J. Hilton, 1990: Contextual effects in the interpretation of probability words: Perceived base rate and severity of events. J. Exp. Psychol.: Hum. Percept. Perform., 16, 781–789, doi:10.1037/0096-1523.16.4.781.

  • Webster, P. J., G. J. Holland, J. A. Curry, and H. R. Chang, 2005: Changes in tropical cyclone number, duration, and intensity in a warming environment. Science, 309, 1844–1846, doi:10.1126/science.1116448.

  • Whitehead, J. C., B. Edwards, M. Van Willigen, J. R. Maiolo, K. Wilson, and K. T. Smith, 2000: Heading for higher ground: Factors affecting real and hypothetical hurricane evacuation behavior. Global Environ. Change, 2B, 133–142, doi:10.1016/S1464-2867(01)00013-4.

  • Wu, H. T., M. K. Lindell, C. S. Prater, and C. D. Samuelson, 2014: Effects of track and threat information on judgments of hurricane strike probability. Risk Anal., 34, 1025–1039, doi:10.1111/risa.12128.

  • Wu, H. T., M. K. Lindell, and C. S. Prater, 2015a: Process tracing analysis of hurricane information displays. Risk Anal., 35, 2202–2220, doi:10.1111/risa.12423.

  • Wu, H. T., M. K. Lindell, and C. S. Prater, 2015b: Strike probability judgments and protective action recommendations in a dynamic hurricane tracking task. Nat. Hazards, 79, 355–380, doi:10.1007/s11069-015-1846-z.
1 The term “ps” in this manuscript refers to multiple p values, as opposed to the strike probability convention used in other weather-related articles.