1. Introduction
Tornadoes have captured the public’s attention with their catastrophic risks through popular culture phenomena like the “Wizard of Oz” movie, “Wicked the Musical,” and the television series “Storm Chasers.” People often think of the U.S. Midwest as Tornado Alley, yet the southeastern United States is also vulnerable to tornadoes, including the 27 April 2011 tornadoes in Alabama that killed 252 people and resulted in 11 billion U.S. dollars in damage (Samenow 2012). On average, more than 25% of the 1200 tornadoes that occur annually in the United States strike the Southeast (NOAA 2018). Most fatalities from tornadoes in the United States have occurred in the Southeast, in part because of high mobile home density (Ashley 2007; Donner 2007; Niederkrotenthaler et al. 2013; Schmidlin and King 1995). In addition to being lethal, tornadoes may present a high risk for public complacency. According to some estimates, 75% of tornado warnings in the United States are false alarms, or warnings that do not manifest into tornadoes (Stirling 2015). One unique aspect of the southeastern United States is frequent, short-lived, low-end tornadoes, which increase concerns about “too many false alarms and how this may desensitize the public” (Rasmussen 2015, p. 24).
Indeed, scientists have been concerned that false alarms for high-risk weather events like tornadoes generate a complacent public (LeClerc and Joslyn 2015; Mason 1982; Olson 1965). How stakeholders perceive a risk, like a tornado, can affect how complacent they are about that risk (Eiser 2004). However, past research examining the impact of false alarms and complacency has focused on the systems that create and distribute warnings rather than on how members of the public process those warnings (Brooks and Correia 2018; Brotzge et al. 2011; Golden and Adams 2000). As a result, whether false alarms affect public compliance with warnings remains largely speculative. For example, factors such as individuals’ false alarm intolerance or their responses to deterministic forecasts can affect their warning compliance (Roulston and Smith 2004), but more research is needed to understand public responses to tornado warnings.
Research so far has shown mixed results about how false alarms affect people’s responses to tornadoes (e.g., Ripberger et al. 2015; Simmons and Sutter 2009; Trainor et al. 2015). For example, individuals define and understand false alarms differently (Trainor et al. 2015). Roulston and Smith (2004) emphasized that deterministic forecasts make implicit assumptions about people’s false alarm intolerance, whereas probabilistic forecasts separate forecasting from individuals’ decision-making. Research on the 2011 tornado in Joplin, Missouri, found that frequent false alarms may have affected whether people responded to the tornado warning (Paul et al. 2015). Studies also have identified emotions as potential triggers for protective action taking in response to risks (Jin et al. 2016; Jin et al. 2012; Lerner et al. 2015; So 2013; Witte 1992; Witte and Allen 2000). However, it is unknown how tornado false alarms may affect individuals’ protective behaviors.
This study examines how people in the southeastern United States understand, process, and react to tornado watches and warnings, especially when these alerts turn out to be false alarms. We do so through four surveys (N = 4162) of southeastern U.S. adult residents. We then compare the social science research results with actual county false alarm ratio data.
2. Literature review
This section first defines false alarms, complacency, and the cry wolf effect. Then, the limited research on false alarm effects and tornadoes is reviewed, leading to the study’s research questions.
a. Definitions
1) False alarms
False alarms refer to events that were “forecast to occur but did not” (Wilks 2006, p. 261). The most relevant information is the two-by-two warning performance typology contingency table that the National Oceanic and Atmospheric Administration (NOAA) (NOAA 2019) uses. Based on 1) whether a severe weather warning was issued and 2) whether a severe weather event was observed, the contingency table provides four results: hits/true positives, false alarms/false positives, missed events/false negatives, and all clear/true negatives (see Table 1). NOAA (2019) defines the false alarm ratio (FAR) as “the number of false alarms divided by the total number of events forecast.”
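As a minimal numerical sketch (with hypothetical counts, not NOAA data), the FAR follows directly from two cells of the contingency table:

```python
def false_alarm_ratio(hits: int, false_alarms: int) -> float:
    """FAR = false alarms / total events forecast (hits + false alarms)."""
    forecast_total = hits + false_alarms
    if forecast_total == 0:
        raise ValueError("no events were forecast")
    return false_alarms / forecast_total

# Hypothetical example: of 100 warnings issued, 25 verified and 75 did not.
print(false_alarm_ratio(hits=25, false_alarms=75))  # 0.75
```

Note that missed events and all-clear cases (the other two cells of the table) do not enter the FAR, which is one reason the FAR alone cannot summarize warning performance.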
Warnings are issued for the majority of confirmed tornadoes (7 out of 10) (Erdman 2014; Golden and Adams 2000). However, approximately three out of every four tornado warnings that the National Weather Service (NWS) issues nationally are false alarms, largely due to insufficient technology and resources to more accurately detect tornadoes (Barnes et al. 2007; Erdman 2014). A tornado warning is considered a false alarm if 1) the tornado did not occur, 2) NOAA cannot confirm whether a tornado occurred, 3) the tornado occurred in a different location than predicted, and/or 4) the tornado occurred during a different time period than predicted (Trainor et al. 2015). Since 2012, lead time, probability of detection, and false alarm ratios for tornadoes across the United States have decreased, reflecting an increased emphasis on reducing tornado false alarms (Brooks and Correia 2018).
2) Complacency
Complacency is the “public’s propensity to believe a threat would not happen and therefore the public ignores the threat and is unwilling to prepare for the threat” (Wang and Kapucu 2008, p. 58). In other words, complacency can cause members of the public to become less effective at responding to low-probability, high-impact events (Drabek 2001; Fitzpatrick 1999; Heath and Millar 2004; Wang and Kapucu 2008). Heuristic research has found that individuals perceive the likelihood of rare, adverse events differently from the objective risk threat that these events pose (Kahneman 2011; Meyer and Kunreuther 2017). For example, individuals tend to perceive the likelihood of an event as high when they can imagine the event saliently and easily (availability bias). Individuals can believe that they are more immune than others to risks, and thereby treat risks below their threshold of concern (optimism bias). Last, individuals tend to focus on the low probability of risks in the immediate future, compared to the high probability of risks over a longer period (compounding bias). Given the limited research on complacency and tornadoes, it is important to note that complacency is a contested problem for weather-related risks (Burby 1998; Corbacioglu and Kapucu 2006; Tierney 2000; Tierney et al. 2001; Trainor et al. 2015; Wang and Kapucu 2008). In other words, it remains unclear whether complacency is indeed a problem for tornado warnings in the southeastern United States.
3) Cry wolf effect
The cry wolf effect is distrust of weather warning systems (Wickens et al. 2009), which may be fueled by false alarms (Bliss et al. 1995; Dixon and Wickens 2006; Sorensen and Sorenson 2007). Experimental research found that people respond to alarms in proportion to their perception of the probability of a risk occurring (Bliss et al. 1995). Therefore, people are unlikely to respond productively in the face of threats like tornadoes if they believe that alarms are unreliable.
b. False alarms and tornadoes
1) False alarm effect
Findings are mixed in terms of whether false alarms negatively affect how members of the public respond to tornadoes (e.g., Ripberger et al. 2015; Simmons and Sutter 2009; Trainor et al. 2015). Some research strongly suggests no false alarm effect (e.g., LeClerc and Joslyn 2015; Lindell et al. 2016). Summarizing this research area, Trainor et al. (2015) noted, “Most of the hazards literature suggests no cry wolf effect or negative influence of false alarms on household perceptions and behavior” (p. 335). For example, Lindell et al. (2016) found that previous false alarm experience had a nonsignificant negative correlation with individuals’ expectations of immediate sheltering, indicating that false alarms did not reliably reduce expected sheltering. In another study, lowering false alarm ratios did not significantly affect compliance with winter weather warnings (LeClerc and Joslyn 2015). Moreover, Trainor et al. (2015) found that the actual county false alarm ratio does not predict people’s perceived false alarm ratio, but does predict people’s protective action taking.
In comparison, some research finds that the false alarm ratio affects individual behaviors (e.g., Donner et al. 2012; Trainor et al. 2015; Wang and Kapucu 2008). For example, Wang and Kapucu (2008) speculated that public complacency results from 1) repeated threat warnings and experiences, like those from high-probability, low-impact events; 2) individuals’ capabilities to prepare; 3) individuals’ demographics; and 4) individuals’ risk assessments of expected damage. Furthermore, Simmons and Sutter (2009) found that tornadoes in areas with higher false alarm ratios killed and injured more people than tornadoes in areas with lower false alarm ratios, controlling for other factors like tornado frequency, tornado intensity, and length of advance warning. Ripberger et al. (2015) found that false alarms as well as missed detection contributed to lowering public trust in the National Weather Service and reduced intentions to respond productively to future tornadoes. Similarly, Jauernic and van den Broeke (2017) found that an increase in perceived false alarm ratios was associated with a lower likelihood of seeking shelter.
2) Public understanding of false alarms
The most comprehensive study of how people perceive tornado false alarms provides insights into potential complacency among tornado warning recipients. This survey of 804 U.S. residents living in counties that experienced tornado warnings between 2008 and 2010 found large variations in how people define false alarms (Trainor et al. 2015). Most respondents perceived false alarms as any weather event that does not occur as predicted, which misses the nuances of NOAA’s definition as presented earlier in this study. Likewise, another study conducted in the southern United States found that residents consider “watches, warnings, or siren without the clear presence of a tornado” as false alarms (Donner et al. 2012, p. 10).
In addition, some respondents considered a tornado to be a near-miss event if the intensity, direction, or other factors changed, but the tornado still occurred (Trainor et al. 2015). Still, others defined false alarms based on their perceptions of information credibility (e.g., spotter was mistaken) and their perceptions of the potential tornado impact (their property was not at risk). Others had no concept of what a false alarm was or thought that false alarms occurred when a warning system accidentally activated (Trainor et al. 2015).
Despite the many strengths of Trainor et al.’s (2015) comprehensive study, additional research is needed given that the study’s data collection occurred prior to widespread adoption of social media and new alerting technologies such as Wireless Emergency Alerts (WEAs). Trainor et al.’s study also did not include mobile home residents, who are more susceptible to tornado risks than fixed home residents (Donner 2007; Niederkrotenthaler et al. 2013; Schmidlin and King 1995) and contribute to the southeastern region’s higher fatality rate compared to the rest of the United States (Ashley 2007). In addition, Trainor et al.’s study took a national rather than regional approach, which did not allow for a contextual understanding of the tornado problem in the southeastern United States.
Research is needed to understand to what extent tornado false alarms affect people’s perceptions of false alarms and their protective action taking. We also need research to identify how people feel when they take unnecessary actions in response to tornado false alarms, given that people feel different emotions for different risks and some emotions trigger protective action taking (Lerner et al. 2015; Jin et al. 2016). Studies have found that individuals process and feel different thoughts and feelings in risky situations, which potentially affects how they respond to warnings (Jin et al. 2016; Jin et al. 2012; Lerner et al. 2015; So 2013; Witte 1992; Witte and Allen 2000). Previous content analysis research further found that individuals may feel anxiety, fear, and sadness in response to natural disasters (Jin et al. 2012). However, studies have not confirmed the potential impact of such emotions on how people respond to tornado warnings, including false alarms.
Last, limited research has studied mobile home residents, despite calls for more research on mobile home residents’ responses to tornado risks (Chaney and Weaver 2010; Simmons and Sutter 2009). This small body of scholarship finds that mobile home residents experience more fatalities and injuries from tornadoes than fixed home residents (Brooks and Doswell 2002; Donner 2007; Niederkrotenthaler et al. 2013; Schmidlin and King 1995; Simmons and Sutter 2009), but we do not know to what extent complacency to false alarms explains these findings.
Given the research reviewed, this study asks:
RQ1: How do residents of the southeastern United States evaluate the tornado false alarm ratios for their communities?
RQ2: To what extent do prior tornado false alarms affect southeastern U.S. residents’ perceptions of false alarms?
RQ3: How do southeastern U.S. residents respond when protective actions taken turn out to be in response to tornado false alarms?
3. Method
a. Survey development
Four surveys were conducted with the same population as a part of a funded research project. Items about false alarms were spread across the surveys. A large private survey company, Qualtrics, recruited participants from the southeastern United States based on a quota of demographics. The same company administered the online survey. Data collection ran from July 2016 through August 2016 and included 4165 residents of the southeastern United States. Specifically, the following states were included in this study: Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia. The average completion time for the surveys was 30 min [surveys 1 and 2: M = 42.1, standard deviation (SD) = 157.99, median (Mdn) = 26.37; surveys 3 and 4: M = 33.51, SD = 205.96, Mdn = 19.78]. Participants were compensated for their time through Qualtrics in accordance with Institutional Review Board guidelines.
b. Data cleaning
After reviewing open-ended responses and skipped questions, the researchers eliminated responses completed in under 30% of the median time, as well as responses taking more than three standard deviations above the median. The researchers also manually inspected all cases that took longer than two standard deviations above the median for concerns about data integrity, such as random character strings in open-ended responses and skipped questions. No additional data concerns were uncovered.
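The timing-based screening rule described above can be sketched as follows (illustrative only; the actual procedure also involved manual review of open-ended responses and skipped questions):

```python
import statistics

def screen_completion_times(times_min):
    """Keep responses within the rule described above: at least 30% of the
    median completion time and no more than three standard deviations
    above the median. Times are in minutes."""
    mdn = statistics.median(times_min)
    sd = statistics.stdev(times_min)
    lower, upper = 0.3 * mdn, mdn + 3 * sd
    return [t for t in times_min if lower <= t <= upper]

# With hypothetical times, a 5-min response falls below 30% of the
# 26-min median and is screened out:
print(screen_completion_times([5, 20, 25, 26, 27, 30, 31]))
```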
After this data cleaning procedure, 2076 cases from surveys 1 and 2 remained for analysis, with a mean completion time of 35.87 min (SD = 37.96, Mdn = 27.05) and 1973 cases from surveys 3 and 4 with a mean completion time of 26.01 min (SD = 30.11, Mdn = 19.93). The final dataset included 995 mobile home residents and 3054 general residents of the southeastern United States. The cases removed from the data cleaning procedure did not significantly impact any racial, gender, or age-based statistics, suggesting that the cases were fairly normally distributed and roughly proportionate across demographic categories. For more information about the participants’ demographics, see Table 2 below.
Survey demographics (total). The numbers reported in the tables deviate slightly from the total number of survey participants because participants could select multiple races and genders. The team used unique survey identification numbers to calculate the totals in these tables.
c. Measures and instruments
Each research question was addressed with a distinct set of survey questions. When possible (and described below), established measures were used. When no previously established measures could be found, questions were constructed based on prior literature and the focus group findings (N = 77) from the larger funded project.
1) Perceived false alarm ratio
To assess the perceived false alarm ratio, we modified Trainor et al.’s (2015) scale, and asked participants to answer the following questions: 1) “How frequent do you perceive false alarms to be for tornadoes in your area?” on a 0–100 scale for surveys 1 and 2 and 2) “[Tornado] False alarms occur frequently in my area” on a scale from 0 to 100 for surveys 3 and 4 where 0 is “never” and 100 is “certainly.”
2) Actual county false alarm ratio
NOAA (2019) defines the false alarm ratio as “the number of false alarms divided by the total number of events forecast.” To assess the actual false alarm ratio, the researchers used data from Iowa State University’s Iowa Environmental Mesonet (IEM) Cow database (NWS Storm Based Warning Verification) from 1 July 2011 to 31 July 2016 (Iowa Environmental Mesonet 2018). First, we pulled each NWS office’s data in the southeastern United States. Then, we calculated the county false alarm ratio by dividing the number of tornado warnings without observed tornadoes by the total number of tornado warnings for each county. We matched the actual county false alarm ratios to our survey data based on participants’ zip codes, yielding an actual county false alarm ratio for our dataset (surveys 1 and 2: M = 0.873, SD = 0.149; surveys 3 and 4: M = 0.880, SD = 0.135). About 3% of the data were missing (surveys 1 and 2: n = 71; surveys 3 and 4: n = 75) because participants did not enter proper zip codes, their zip codes did not correspond with county names, or their county did not have any false alarm information in the IEM Cow database. About 1% of cases (n = 12) came from counties with no false alarms, a 0% false alarm ratio. About 30% of the data showed a 100% false alarm ratio, meaning that for 30% of the survey participants’ counties, the NWS did not observe any of the tornadoes for which it issued tornado warnings.
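The county-level calculation described above amounts to counting unverified warnings per county. A sketch with a hypothetical input format (the IEM Cow database’s actual schema differs):

```python
from collections import defaultdict

def county_far(warnings):
    """warnings: iterable of (county, verified) pairs, where verified is
    True if a tornado was observed for that warning.
    Returns a mapping of county -> false alarm ratio."""
    issued = defaultdict(int)
    unverified = defaultdict(int)
    for county, verified in warnings:
        issued[county] += 1
        if not verified:
            unverified[county] += 1
    return {c: unverified[c] / issued[c] for c in issued}

# Hypothetical counts: county A had 3 warnings (1 verified),
# county B had 1 warning (unverified).
far = county_far([("A", True), ("A", False), ("A", False), ("B", False)])
```

Counties like hypothetical county B, where no warned tornado was ever observed, produce the 100% false alarm ratios described above.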
3) Perceived tornado alert accuracy
To explore perceived tornado alert accuracy, we asked the questions: 1) “How do you rate the accuracy of tornado weather alerts that you received in the past year?” on a 0–100 scale for surveys 1 and 2 and 2) “Tornado alerts usually provide accurate information” on a scale from 0 to 100 for surveys 3 and 4 where 0 is “never” and 100 is “certainly.”
4) Alerts received
To identify participants who received tornado alerts, we asked: “Did you receive a watch or warning about the tornado?” with options “watch,” “warning,” “both,” “I did not receive a message,” and “I do not recall.”
5) Actions taken
To assess actions that participants may have taken in response to a tornado warning, we asked: “Did you take action (have a physical response like going to a safe place in your home or collecting supplies) after receiving the message?” with the options “yes,” “no,” and “I do not recall.”
6) Future behavioral responses to tornado false alarms
To explore the potential effect of false alarms on how people respond to future tornado warnings, we asked six questions, such as “After receiving false alarms, I am more likely to prepare for the next tornado warning,” “After receiving false alarms, I am more likely to listen to future tornado warnings and follow directions,” and “After receiving false alarms, I am more critical of future tornado warnings” on a 1–7 scale where 1 is “strongly disagree,” 4 is “neither agree nor disagree,” and 7 is “strongly agree.”
7) Reasons for not taking protective action
To explore reasons why participants did not take protective action, we asked the question “Why did you not take action? (Choose all that apply)” with five responses such as “I did not believe the storm was a threat,” “A previous experience indicated I was not in danger,” and “I did not believe the alert was accurate.”
8) Emotional reactions to tornado warnings and false alarms
To explore positive and negative emotions in response to tornado warnings and false alarms, we used the modified differential emotions scale (Izard 1977) and asked the question “If you are in an area under a tornado warning, how often do you feel ______ about tornadoes?” on a 1–5 scale ranging from “never” to “most of the time.” We also asked the question “After you receive a tornado warning, but the event does NOT occur (a false alarm), how often do you feel?” on a 1–7 scale ranging from “never” to “always.” The scale for false alarms was transformed to a 1–5 scale to compare it with participants’ emotional reactions to tornado warnings. This scale covers the following emotions: anger, anxiety, apprehension, confusion, contempt, disgust, embarrassment, fear, guilt, sadness, shame, surprise, sympathy, gratitude, hope, relief, and uneasiness. Prior research also has employed this scale (Fredrickson et al. 2003; Jin et al. 2010; Jin et al. 2016) or a similar question (Coombs and Holladay 2005; Kim and Cameron 2011).
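The transformation of the 1–7 false alarm scale onto the 1–5 warning scale is not spelled out above; a standard linear rescaling, assumed here for illustration, would be:

```python
def rescale_1_7_to_1_5(x: float) -> float:
    """Linearly map a response on a 1-7 scale onto the 1-5 range.
    (Assumed method; the exact transformation used in the study
    is not reported.)"""
    return 1 + (x - 1) * (5 - 1) / (7 - 1)

# Endpoints map to endpoints and the midpoint to the midpoint:
print(rescale_1_7_to_1_5(1), rescale_1_7_to_1_5(4), rescale_1_7_to_1_5(7))
```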
9) Number of tornado warnings issued by counties
To assess the number of tornado warnings issued at the county level, the researchers used data from Iowa State University’s IEM Cow database from 1 July 2011 to 31 July 2016 (Iowa Environmental Mesonet 2018). We pulled the total number of tornado warnings issued by each county using each NWS office’s data in the southeastern U.S. area. Again, we combined the number of tornado warnings issued by each county with our survey data based on the participants’ zip codes, yielding data on tornado warnings issued (surveys 1 and 2: M = 12.56, SD = 9.30, surveys 3 and 4: M = 12.69, SD = 8.99). About 3% of data was missing (surveys 1 and 2: n = 71, surveys 3 and 4: n = 75) because participants did not enter proper zip codes, their zip codes did not correspond with county names, or their county did not have any false alarm information from the IEM Cow database.
10) Covariates: Gender, race, state, children, age, income, and housing type
To control for potential covariate effects in regression analysis, we included the following covariates: gender, race, state, children, age, income, and type of home. We measured gender by asking: “How do you identify your sex?” with three potential responses: “male,” “female,” and “other, prefer not to say.” We measured race by asking the question “How do you racially identify? Select as many as relevant” with the following response options: “Caucasian,” “Black or African American,” “Asian,” “Hispanic Caucasian,” “Native American,” “Hispanic Non-Caucasian,” and “Other.” We measured state by asking the following question: “What state do you currently live in?” with options of the 50 U.S. states and Washington, D.C. We measured parental status by asking the following question: “Do you have children under 18 living with you?” with options of “yes” and “no.” We measured age by asking the following question: “What is your age in years?” We measured income by asking the following question: “What is your total household income?” with options such as: “less than 20,000,” “20,001–30,000,” “30,001–40,000,” and “90,001+.” Last, we measured housing type by asking the following question: “How would you describe the home you currently live in?” with options such as “apartment building (4 or fewer units),” “one family house detached from any other house,” and “mobile home.”
4. Results
a. Perceived false alarms ratio and accuracy rate
Before reporting how the results answer our research questions, it is important to note that the data are widely dispersed, as indicated by the large standard deviations relative to the means. High standard deviations could indicate a lack of certainty in participants’ responses or divided opinions on divisive issues (Fitton et al. 2012; Leikin et al. 2013). Visual examination of the data found that the distributions of perceived false alarms and tornado alert accuracy showed peaks at low, middle, and high points, indicating divided opinions.
1) Perceived false alarm ratio
Participants estimated their false alarm ratio, on average, at 39% (M = 39.11, SD = 27.95). Mobile home residents estimated their false alarm ratio significantly lower (M = 36.02, SD = 27.23) than fixed home residents (M = 40.14, SD = 28.13) [t(2025) = 2.842, p < 0.01]. The distribution was skewed right (skewness = 0.326), indicating that more individuals estimated their false alarm ratio lower than higher (see Fig. 1). No statistical difference was found among states (see Table 3).
Perceived false alarm ratio.
Overall, participants estimated that false alarms were relatively infrequent (M = 36.43, SD = 28.07). The distribution was right-skewed (skewness = 0.522), indicating that more people thought that false alarms were relatively infrequent than frequent. An analysis of variance found a significant difference among states [F(11, 2020) = 2.534, p < 0.01] (see Table 3), yet no significant differences were found by housing type.
2) Tornado alert accuracy
Overall, respondents estimated that 65% of alerts generally provided accurate information (M = 64.95, SD = 24.05). The distribution was left-skewed (skewness = −0.617), indicating that more individuals agreed that alerts generally are accurate than inaccurate (see Fig. 2). No significant differences were found by housing type or states, yet descriptive differences among states exist (see Table 4).
Perceived tornado alert accuracy.
Respondents estimated tornado alert accuracy in the past year at 63.66% (SD = 25.93), with no significant difference between fixed and mobile home residents. The distribution was skewed left (skewness statistic = −0.630), indicating that more individuals estimated tornado alert accuracy higher than lower. Analysis of variance found that there was a significant difference among states [F(11, 2026) = 3.044, p < 0.001].
3) Relationship between perceived false alarm ratio and tornado alert accuracy
To examine the relationship between perceived false alarm ratio and perceived tornado alert accuracy, a Pearson correlation was computed. Results showed essentially no relationship between the two: the effect sizes were close to zero despite the large sample, although the correlation for surveys 1 and 2 reached statistical significance (surveys 1 and 2: r = 0.079, n = 2009, p < 0.01; surveys 3 and 4: r = 0.033, n = 1887, p > 0.05). This finding suggests that participants perceive false alarms and tornado alert accuracy as separate topics, perhaps indicating that false alarms are not the only factor contributing to their perceptions of inaccurate warnings.
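For reference, the statistic reported here is the standard Pearson product-moment correlation, which can be computed from two paired samples as:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

With large samples such as n ≈ 2000, even a near-zero r like 0.079 can reach statistical significance, which is why the effect size rather than the p value carries the interpretation above.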
b. Actual false alarm ratio’s impact on perceived false alarm ratio
To address RQ2, a Pearson correlation between the actual false alarm ratio and the perceived false alarm ratio was computed. Results show that there was no correlation between the perceived and actual false alarm ratios (surveys 1 and 2: r = 0.013, n = 1953, p > 0.05; surveys 3 and 4: r = 0.009, n = 1884, p > 0.05).
c. Behavioral and emotional responses to tornado false alarms
To address RQ3, we first asked those who reported that they did not take protective action during a past tornado why they did not take action. Results showed that 32.9% of them believed that the storm was not a threat, 21.3% reported that a previous experience indicated that they were not in danger, and 11.3% did not believe the alert was accurate (see Table 5).
Why not to take action.
We then asked participants how they thought they would respond after receiving a hypothetical tornado false alarm (see Table 6). About 31% of participants neither agreed nor disagreed with all statements about their possible responses to tornado false alarms, regardless of home type. The results seem to indicate ambivalence toward false alarms.
After being exposed to a tornado false alarm, 44.3% of participants agreed that they would be a little more critical of future tornado warnings (M = 4.23, SD = 1.67), regardless of housing type [t(4005) = 1.634, p > 0.05]. Fifty-five percent of respondents reported that they would seek additional information before deciding whether to take action in response to future warnings (M = 4.62, SD = 1.59). Fixed home residents were more likely to seek additional information in response to future warnings (M = 4.65, SD = 1.56) than mobile home residents (M = 4.51, SD = 1.66) [t(1592.273) = 2.364, p < 0.05]. While only 38.3% of participants agreed that they would second-guess future tornado warnings (M = 3.84, SD = 1.73), fixed home residents (M = 4.04, SD = 1.72) were more likely to second-guess future tornado warnings than mobile home residents (M = 3.70, SD = 1.80) [t(1613.293) = 5.240, p < 0.001].
Behavioral change after false alarms (on a 1–7 scale where 1 is “strongly disagree,” 4 is “neither agree nor disagree,” and 7 is “strongly agree”).
Nevertheless, 49.5% of participants agreed that they would be a little more likely to prepare for future tornadoes after receiving a false alarm, regardless of housing type (M = 4.56, SD = 1.54). More than half (57.9%) of participants reported being more likely to listen to future warnings and to follow directions after receiving a false alarm (M = 4.85, SD = 1.45), and 49.1% agreed that they would be more likely to share tornado warnings in the future (M = 4.58, SD = 1.51).
To further assess public responses to tornado false alarms, we looked at participants’ emotions. Participants were asked how they would feel under a tornado warning and how they would feel after receiving a tornado false alarm. Paired samples t tests between emotional reactions to false alarms and tornado warnings were conducted (see Table 7). Results showed that relief [M = +1.63, SD = 1.80, t(1900) = 39.566, p < 0.001] and gratitude [M = +1.38, SD = 1.67, t(1906) = 35.99, p < 0.001] significantly increased when tornado warnings turned out to be false alarms. On the other hand, fear [M = −1.18, SD = 1.49, t(1906) = −34.461, p < 0.001], uneasiness [M = −1.17, SD = 1.53, t(1894) = −33.316, p < 0.001], and anxiety [M = −1.11, SD = 1.46, t(1913) = −33.444, p < 0.001] significantly decreased when tornado warnings turned out to be false alarms. Apprehension, sadness, surprise, and sympathy significantly decreased, while embarrassment, hope, guilt, and shame significantly increased, but mean differences in these emotions between warnings and false alarms were less than one (see Table 7 for t statistics). There was no statistical difference in confusion, disgust, contempt, and anger when tornado warnings turned out to be false alarms. Results indicate that, when tornado warnings turned out to be false alarms, participants felt substantially more relief and gratitude and substantially less fear, uneasiness, and anxiety.
Emotional change after false alarms (on a 1–5 scale ranging from “never” to “most of the time.”).
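Each comparison above is a paired-samples t test on within-person differences (the false alarm rating minus the warning rating for the same emotion); the statistic is the mean difference divided by its standard error. A minimal sketch:

```python
import math

def paired_t(diffs):
    """Paired-samples t statistic: mean within-person difference
    divided by its standard error (sample SD over sqrt(n))."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))
```

The sign of the resulting t statistic matches the direction of the mean differences reported above (positive for emotions such as relief, negative for emotions such as fear).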
Last, we examined whether the actual and perceived false alarm ratios and the number of warnings issued at the county level predict physical protective action taking. Hierarchical logistic regression analysis was conducted (see Table 8). Based on previous research (Procopio and Procopio 2007; Senkbeil et al. 2014; Trainor et al. 2015), we included the following control variables: participants’ age, gender, race, income, housing type, the state where they live, and whether they have children.
Hierarchical logistic regression analysis predicting protective action taking via demographics, perceived, and actual false alarm ratios (eB = exponentiated B; * p < 0.05; ** p < 0.01; *** p < 0.001). SE = standard error.
Results indicate that the perceived false alarm ratio significantly predicted physical action taking (e.g., collecting supplies or sheltering in place) [surveys 1 and 2: Exp(B) = 1.008, p < 0.01, surveys 3 and 4: Exp(B) = 1.008, p < 0.01], controlling for demographics [surveys 1 and 2: Cox and Snell R2 = 0.103, Nagelkerke’s R2 = 0.143, χ2(23) = 144.708, p < 0.001, surveys 3 and 4: Cox and Snell R2 = 0.138, Nagelkerke’s R2 = 0.188, χ2(23) = 176.823, p < 0.001]. In other words, the higher individuals perceived the false alarm ratio to be, the more likely they were to report protective action taking. Specifically, when the perceived false alarm ratio increases by one point on the 0–100 scale, the estimated odds of taking protective action increase by about 0.8%.
However, the actual false alarm ratio did not predict reported protective action behaviors like sheltering in place [surveys 1 and 2: Exp(B) = 1.762, p = 0.192; surveys 3 and 4: Exp(B) = 1.337, p = 0.546]. The number of warnings issued by counties also did not predict protective behaviors [surveys 1 and 2: Exp(B) = 1.014, p = 0.107; surveys 3 and 4: Exp(B) = 1.010, p = 0.265].
Additionally, results indicate that perceived tornado alert accuracy predicts protective action taking [surveys 1 and 2: Exp(B) = 1.014, p < 0.001; surveys 3 and 4: Exp(B) = 1.017, p < 0.001]. Specifically, when perceived tornado alert accuracy increases by one percentage point, the estimated odds of taking protective action increase by 1.4%–1.7%. Living in a mobile home did not predict protective action taking [surveys 1 and 2: Exp(B) = 1.033, p = 0.829; surveys 3 and 4: Exp(B) = 1.073, p = 0.643].
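For readers less familiar with exponentiated logistic coefficients, a short worked example of the interpretation used above, treating the reported Exp(B) values as given:

```python
# Interpreting Exp(B): a one-point increase in the predictor multiplies
# the estimated odds of taking protective action by Exp(B).
def pct_change_in_odds(exp_b: float) -> float:
    """Percent change in the odds per one-point increase in the predictor."""
    return (exp_b - 1.0) * 100.0

# Exp(B) values reported above (surveys 3 and 4).
print(pct_change_in_odds(1.008))  # perceived false alarm ratio: ~0.8% per point
print(pct_change_in_odds(1.017))  # perceived alert accuracy: ~1.7% per point

# Over larger increases, odds changes compound multiplicatively, e.g., 10 points:
print((1.008 ** 10 - 1.0) * 100.0)  # ~8.3% higher odds
```

Note that these are changes in odds, not in probability; the corresponding change in probability depends on the baseline rate.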
5. Discussion and conclusions
Research on tornado false alarms has focused on technological advances, such as detecting event genesis, reducing actual false alarms, and advancing radar technology (e.g., Barnes et al. 2007; Brooks 2004; Polger et al. 1994; Smith 1994; Trafalis et al. 2003). Less research has focused on public responses to tornado warnings and false alarms (Donner et al. 2012; Ripberger et al. 2015; Simmons and Sutter 2009; Trainor et al. 2015). We add to that nascent research.
a. Perceptions of false alarms
Overall, the study’s findings suggest that southeastern U.S. residents may not pay close attention to false alarm ratios. Interestingly, the survey results indicate that southeastern U.S. residents perceived tornado warnings to be more accurate than they actually are. Survey respondents estimated the false alarm ratio to be 39%, whereas prior research found that 75% of tornado warnings in the United States are false alarms, or warnings that do not manifest into tornadoes (Stirling 2015). Previous research similarly found that people perceive the average false alarm ratio to be 45%, with a standard deviation of 27% (Jauernic and van den Broeke 2017), which is much lower, and more variable, than the national average false alarm ratio. Mobile home residents in our study estimated false alarm ratios significantly lower than fixed home residents did, and estimates differed significantly among the states where residents reside. As Trainor et al. (2015) and Lindell et al. (2016) noted, how individuals interpret false alarms can differ by their definition of a relevant tornado event and of a false alarm, as well as by the recency, frequency, and severity of tornado events.
Consistent with prior research (Trainor et al. 2015), we did not find a correlation between participants’ perceived false alarm ratios and their actual county false alarm ratios. In addition, survey participants estimated tornado alert accuracy to be around 65%. Unexpectedly, we did not find a correlation between perceived false alarm ratios and estimated tornado alert accuracy. People may therefore perceive false alarm ratios and tornado alert accuracy as distinct concepts rather than as two items at opposite ends of the same continuum. One important note in interpreting these results is that our survey participants held divided opinions and likely lacked certainty in their false alarm ratio estimates, as indicated by the large standard deviations in their responses and by visual examination of the distributions. Future research should focus on factors other than false alarms that shape people’s perceptions of warning accuracy, and on the connection between those perceptions and behavioral responses to warnings.
b. Responses to false alarms
Prior research argued that high actual tornado false alarm ratios may contribute to a complacent public (e.g., Ripberger et al. 2015; Simmons and Sutter 2009; Trainor et al. 2015), but this hypothesis had not been empirically tested in the southeastern United States, where most U.S. tornado fatalities have occurred (Ashley 2007; Donner 2007; Niederkrotenthaler et al. 2013; Schmidlin and King 1995). The results of our study indicate that the higher survey participants perceived tornado false alarm ratios to be, the more likely they were to report taking protective behaviors (e.g., seeking shelter). In contrast, participants’ actual county false alarm ratios did not predict their reported protective action taking, nor did the number of tornado warnings issued in their counties. Living in a mobile home also did not predict participants’ reported protective behaviors.
These findings run contrary to prior research. For example, Trainor et al. (2015) found that people in areas with higher actual county tornado false alarm ratios are less likely to take protective actions like sheltering in place. Jauernic and van den Broeke (2017) found that an increase in perceived false alarm ratio is associated with a lower likelihood of seeking shelter. Paul et al. (2015) found that residents in an area with frequent false alarms did not respond to warnings in a timely manner. Our findings also differ from Lindell et al.’s (2016) study, which found that people’s previous experience with false alarms has a nonsignificant, very weak, negative correlation with their expectations of immediate sheltering.
Additionally, perceived tornado alert accuracy predicted protective behaviors in our study. The higher that survey participants estimated tornado alert accuracy to be, the more likely they were to report taking protective actions in response to tornado warnings. In our logistic regression, perceived tornado alert accuracy showed a higher increase in estimated odds of taking protective actions than perceived false alarm ratios.
Residents’ perceptions of false alarm ratios and tornado alert accuracy appear to matter more than the actual false alarm ratios and the number of tornado warnings they have received when it comes to their behavioral responses. In particular and counterintuitively, when perceived false alarm ratios increased, participants were more likely to report taking protective actions. In fact, more than half (57.9%) of our survey participants reported being more likely to listen to future warnings and to follow directions after receiving a tornado false alarm. After receiving a false alarm, 49.1% of survey participants reported they would be more likely to share tornado warnings in the future. Future qualitative research is needed to unpack these counterintuitive findings. In the meantime, the findings are promising given that residents’ perceived false alarm ratios are lower than the actual false alarm ratios, indicating that concerns about an abundance of false alarms in the southeastern United States may be overblown.
Results indicated that when tornado warnings turn out to be false alarms, participants felt a great deal of relief and gratitude and much less fear, uneasiness, and anxiety. The changes in emotions may result from participants realizing that the tornado was a false alarm and/or realizing that they are safe. Or, there may be another reason why participants experienced relief and gratitude after learning that a tornado warning was a false alarm, which future research can explore. Because different emotions influence different behavioral decisions (Lerner et al. 2015; Jin et al. 2016), understanding how people’s emotions flow over the course of a tornado event can help design more effective warning messages (Nabi 2015), which future research can test. Future research can also study mediation and moderation of behavioral and emotional responses to tornado false alarms.
Our survey participants further shared that, after being exposed to a tornado false alarm, 44.3% would be a little more critical of future tornado warnings and 55% would seek additional information before deciding whether to act on future warnings. Only 38.3% of survey participants would second-guess future tornado warnings, although fixed home residents were more likely to second-guess than mobile home residents. These findings point to the importance of providing easily accessible information about ongoing tornado threats so that, when people seek additional information, they have a higher chance of finding accurate information.
c. Conclusions
The study’s findings indicate that concerns about false alarms generating a complacent public may be overblown (e.g., Ripberger et al. 2015; Simmons and Sutter 2009; Trainor et al. 2015), at least for tornadoes in the southeastern United States. We did not find clear evidence that false alarms (perceived and actual) generate a complacent public. Rather, we found that the higher that individuals perceive (i) false alarm ratios to be and (ii) tornado alert accuracy to be, the more likely they are to report taking protective actions. Furthermore, our study found that participants’ perceived false alarm ratios were lower than the actual false alarm ratios for their counties. These counterintuitive findings are puzzling and merit future research. It may be that members of the public and the scientific community conceptualize the false alarm ratio in different ways. In other words, for the scientific community, the false alarm ratio is how they assess forecasters’ performance. For the public, the false alarm ratio may be how they assign confidence in a forecast.
Our results also showed the complex behavioral and emotional responses participants experienced when tornado warnings turned out to be false alarms. Participants felt a great amount of relief and gratitude and much less fear, uneasiness, and anxiety when they learned that tornado warnings turned out to be false alarms. Still, more research is needed to understand how people respond to false alarms. The effect sizes from this study and a previous study (Trainor et al. 2015) indicate that demographics and false alarm ratios (actual and perceived) explain less than 20% of the variance in whether people take protective actions in response to tornado warnings. Similarly, an experiment on false alarms in a winter weather context found that the effect size of increased false alarms was only moderate (LeClerc and Joslyn 2015). Therefore, future research should focus on factors other than false alarms to understand why people may not take life-saving actions in response to tornado warnings, such as probabilistic versus deterministic forecasts (e.g., LeClerc and Joslyn 2015).
Overall, the study’s findings suggest that when on the fence about issuing a tornado warning, forecasters should issue the warning, given that false alarms (actual and perceived) do not clearly generate a complacent public. This is not to say that forecasters should warn for all potential tornadoes. Rather, lowering tornado false alarm ratios will not necessarily increase residents’ appropriate protective action taking in response to future tornado warnings.
6. Limitations
This research is limited by several factors. First, the findings cannot be generalized to other regions of the United States, other countries, or other hazard types, which future research can examine. Second, the study examined self-reported measures that can be affected by retrospective bias, particularly during disasters (Fischhoff et al. 2005). Future research could deploy the surveys developed here immediately after a tornado, should funding allow for such immediate data collection. Future research should identify the emotions people experience immediately after learning that a tornado warning is a false alarm and how these emotions may affect intended behavioral responses to future tornado warnings. The study examined a single point in time, and longitudinal research is needed to examine how people respond to repeated tornado exposure over time. As the study used only two measures, direct and indirect, of perceived false alarm ratios and tornado alert accuracy, future research can develop additional measurement items to more directly measure perceived false alarm ratios (e.g., ask participants to identify their county’s false alarm ratio in an open-ended survey question). Future research also can use qualitative approaches to investigate how residents in tornado-prone areas evaluate false alarms and how they respond when tornado warnings turn out to be false alarms.
In sum, findings from this study indicate that lowering false alarm ratios is not the magic bullet for preventing loss of life during tornadoes. Attention is needed to how policy changes (e.g., access to shelters), improved risk communication, and other factors can mitigate complacency and encourage appropriate protective action taking.
Acknowledgments
The material presented in the paper is based upon work supported by the National Oceanic and Atmospheric Administration (NOAA), VORTEX-SE Award NA15OAR4590237. The research findings contained in the paper are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NOAA. The authors thank Holly Roberts for her contributions to the larger funded project. We also thank the anonymous reviewers for their constructive feedback.
REFERENCES
Ashley, W. S., 2007: Spatial and temporal analysis of tornado fatalities in the United States: 1880–2005. Wea. Forecasting, 22, 1214–1228, https://doi.org/10.1175/2007WAF2007004.1.
Barnes, L. R., E. C. Gruntfest, M. H. Hayden, D. M. Schultz, and C. Benight, 2007: False alarms and close calls: A conceptual model of warning accuracy. Wea. Forecasting, 22, 1140–1147, https://doi.org/10.1175/WAF1031.1.
Bliss, J., 1995: Human probability matching behaviour in response to alarms of varying reliability. Ergonomics, 38, 2300–2312, https://doi.org/10.1080/00140139508925269.
Bliss, J., M. Dunn, and B. S. Fuller, 1995: Reversal of the cry-wolf effect: An investigation of two methods to increase alarm response rates. Perceptual Mot. Skills, 80, 1231–1242, https://doi.org/10.2466/pms.1995.80.3c.1231.
Brooks, H. E., 2004: Tornado-warning performance in the past and future: A perspective from signal detection theory. Bull. Amer. Meteor. Soc., 85, 837–844, https://doi.org/10.1175/BAMS-85-6-837.
Brooks, H. E., and C. A. Doswell III, 2002: Deaths in the 3 May 1999 Oklahoma City tornado from a historical perspective. Wea. Forecasting, 17, 354–361, https://doi.org/10.1175/1520-0434(2002)017<0354:DITMOC>2.0.CO;2.
Brooks, H. E., and J. Correia Jr., 2018: Long-term performance metrics for National Weather Service tornado warnings. Wea. Forecasting, 33, 1501–1511, https://doi.org/10.1175/WAF-D-18-0120.1.
Brotzge, J., S. Erickson, and H. Brooks, 2011: A 5-yr climatology of tornado false alarms. Wea. Forecasting, 26, 534–544, https://doi.org/10.1175/WAF-D-10-05004.1.
Burby, R. J., Ed., 1998: Cooperating with Nature: Confronting Natural Hazards with Land-Use Planning for Sustainable Communities. Joseph Henry Press, 366 pp.
Chaney, P. L., and G. S. Weaver, 2010: The vulnerability of mobile home residents in tornado disasters: The 2008 Super Tuesday Tornado in Macon County, Tennessee. Wea. Climate Soc., 2, 190–199, https://doi.org/10.1175/2010WCAS1042.1.
Coombs, W. T., and S. J. Holladay, 2005: Exploratory study of stakeholder emotions: Affect and crisis. The Effect of Affect in Organizational Settings, N. M. Ashkanasy, W. J. Zerbe, and C. E. J. Hartel, Eds., Emerald Group Publishing, 271–288.
Corbacioglu, S., and N. Kapucu, 2006: Organisational learning and self adaptation in dynamic disaster environments. Disasters, 30, 212–233, https://doi.org/10.1111/j.0361-3666.2006.00316.x.
Dixon, S. R., and C. D. Wickens, 2006: Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence in high workload. Hum. Factors, 48, 474–486, https://doi.org/10.1518/001872006778606822.
Donner, W. R., 2007: The political ecology of disaster: An analysis of factors influencing U.S. tornado fatalities and injuries, 1998–2000. Demography, 44, 669–685, https://doi.org/10.1353/dem.2007.0024.
Donner, W. R., H. Rodriguez, and W. Diaz, 2012: Tornado warnings in three southern states: A qualitative analysis of public response patterns. J. Homeland Secur. Emerg. Manage., 9 (2), https://doi.org/10.1515/1547-7355.1955.
Drabek, T. E., 2001: Disaster warning and evacuation responses by private business employees. Disasters, 25, 76–94, https://doi.org/10.1111/1467-7717.00163.
Eiser, J. R., 2004: Public Perception of Risk. Report prepared for Foresight Office of Science and Technology, 63 pp., https://pdfs.semanticscholar.org/a37b/331ea4e9afc023d13adfaf115c27a65233cb.pdf.
Erdman, J., 2014: Tornado warning false alarms: National Weather Service upgrades to impact-based warning system. Weather.com, accessed 20 April 2014, http://www.weather.com/safety/tornado/news/tornado-warning-false-alarms-impact-based-warnings-20140418.
Fischhoff, B., R. M. Gonzalez, J. S. Lerner, and D. A. Small, 2005: Evolving judgments of terror risks: Foresight, hindsight, and emotion. J. Exp. Psychol. Appl., 11, 124–139, https://doi.org/10.1037/1076-898X.11.2.124.
Fitton, D., J. C. Read, M. Horton, L. Little, N. Toth, and Y. Guo, 2012: Constructing the cool wall: A tool to explore teen meanings of cool. PsychNology J., 10 (2), 141–162, http://www.psychnology.org/File/PNJ10(2)/PSYCHNOLOGY_JOURNAL_10_2_FITTON.pdf.
Fitzpatrick, P., 1999: Hurricanes: A Reference Handbook. ABC-CLIO, 286 pp.
Fredrickson, B. L., M. M. Tugade, C. E. Waugh, and G. R. Larkin, 2003: What good are positive emotions in crises? A prospective study of resilience and emotions following the terrorist attacks on the United States on September 11, 2001. J. Pers. Soc. Psychol., 84, 365–376, https://doi.org/10.1037/0022-3514.84.2.365.
Golden, J. H., and C. R. Adams, 2000: The tornado problem: Forecast, warning, and response. Nat. Hazards Rev., 1, 107–118, https://doi.org/10.1061/(ASCE)1527-6988(2000)1:2(107).
Heath, R. L., and D. P. Millar, 2004: A rhetorical approach to crisis communication: Management, communication processes, and strategic responses, Responding to Crisis: A Rhetorical Approach to Crisis Communication, R. L. Heath and D. P. Millar, Eds., Taylor & Francis, 1–18.
Iowa Environmental Mesonet, 2018: Iowa State University, Iowa Environmental Mesonet (IEM) Cow (NWS Storm Based Warning Verification), accessed 20 July 2018, https://mesonet.agron.iastate.edu/cow/.
Izard, C. E., 1977: Human Emotions. Springer, 496 pp.
Jauernic, S. T., and M. S. van den Broeke, 2017: Tornado warning response and perceptions among undergraduates in Nebraska. Wea. Climate Soc., 9, 125–139, https://doi.org/10.1175/WCAS-D-16-0031.1.
Jin, Y., S. Park, and M. Len-Ríos, 2010: Strategic communication of hope and anger: A case of Duke University’s conflict management with multiple publics. Public Relat. Rev., 36, 63–65, https://doi.org/10.1016/j.pubrev.2009.08.015.
Jin, Y., A. Pang, and G. T. Cameron, 2012: Toward a publics-driven, emotion-based conceptualization in crisis communication: Unearthing dominant emotions in multi-staged testing of the Integrated Crisis Mapping (ICM) Model. J. Public Relations Res., 24, 266–298, https://doi.org/10.1080/1062726X.2012.676747.
Jin, Y., J. D. Fraustino, and B. F. Liu, 2016: The scared, the outraged, and the anxious: How crisis emotions, involvement, and demographics predict publics’ conative coping. Int. J. Strateg. Commun., 10, 289–308, https://doi.org/10.1080/1553118X.2016.1160401.
Kahneman, D., 2011: Thinking, Fast and Slow. Farrar, Straus and Giroux, 512 pp.
Kim, H. J., and G. T. Cameron, 2011: Emotions matter in crisis: The role of anger and sadness in the publics’ response to crisis news framing and corporate crisis response. Communic. Res., 38, 826–855, https://doi.org/10.1177/0093650210385813.
LeClerc, J., and S. Joslyn, 2015: The cry wolf effect and weather-related decision making. Risk Anal., 35, 385–395, https://doi.org/10.1111/risa.12336.
Leikin, R., R. Subotnik, D. Pitta-Pantazi, F. M. Singer, and I. Pelczer, 2013: Teachers’ views on creativity in mathematics education: An international survey. ZDM, 45, 309–324, https://doi.org/10.1007/s11858-012-0472-4.
Lerner, J. S., Y. Li, P. Valdesolo, and K. S. Kassam, 2015: Emotion and decision making. Annu. Rev. Psychol., 66, 799–823, https://doi.org/10.1146/annurev-psych-010213-115043.
Lindell, M. K., S.-K. Huang, H.-L. Wei, and C. D. Samuelson, 2016: Perceptions and expected immediate reactions to tornado warning polygons. Nat. Hazards, 80, 683–707, https://doi.org/10.1007/s11069-015-1990-5.
Mason, I., 1982: A model for assessment of weather forecasts. Aust. Meteor. Mag., 30, 291–303.
Meyer, R., and H. Kunreuther, 2017: The Ostrich Paradox: Why We Underprepare For Disasters. Wharton Digital Press, 132 pp.
Nabi, R. L., 2015: Emotional flow in persuasive health messages. Health Commun., 30, 114–124, https://doi.org/10.1080/10410236.2014.974129.
Niederkrotenthaler, T., and Coauthors, 2013: Injuries and post-traumatic stress following historic tornadoes: Alabama, April 2011. PLOS ONE, 8, e83038, https://doi.org/10.1371/journal.pone.0083038.
NOAA, 2018: U.S. Tornado Climatology: Average annual number of tornadoes (1991–2010). NOAA, accessed 21 July 2018, https://www.ncdc.noaa.gov/climate-information/extreme-events/us-tornado-climatology.
NOAA, 2019: Glossary of forecast verification metrics. NOAA, accessed 26 February 2019, https://www.nws.noaa.gov/oh/rfcdev/docs/Glossary_Forecast_Verification_Metrics.pdf.
Olson, R. H., 1965: On the use of Bayes’ theorem in estimating false alarm rates. Mon. Wea. Rev., 93, 557–558, https://doi.org/10.1175/1520-0493(1965)093<0557:OTUOBT>2.3.CO;2.
Paul, B. K., M. Stimers, and M. Caldas, 2015: Predictors of compliance with tornado warnings issued in Joplin, Missouri, in 2011. Disasters, 39, 108–124, https://doi.org/10.1111/disa.12087.
Polger, P. D., B. S. Goldsmith, R. C. Prsywarty, and J. R. Bocchieri, 1994: National Weather Service warning performance based on the WSR-88D. Bull. Amer. Meteor. Soc., 75, 203–214, https://doi.org/10.1175/1520-0477(1994)075<0203:NWSWPB>2.0.CO;2.
Procopio, C. H., and S. T. Procopio, 2007: Do you know what it means to miss New Orleans? Internet communication, geographic community, and social capital in crisis. J. Appl. Commun. Res., 35, 67–87, https://doi.org/10.1080/00909880601065722.
Rasmussen, E., 2015: VORTEX-Southeast program overview. 36 pp., ftp://ftp.atdd.noaa.gov/pub/vortexse/ProjectOverview.pdf.
Ripberger, J. T., C. L. Silva, H. C. Jenkins-Smith, D. E. Carlson, M. James, and K. G. Herron, 2015: False alarms and missed events: The impact and origins of perceived inaccuracy in tornado warning systems. Risk Anal., 35, 44–56, https://doi.org/10.1111/risa.12262.
Roulston, M. S., and L. A. Smith, 2004: The boy who cried wolf revisited: The impact of false alarm intolerance on cost–loss scenarios. Wea. Forecasting, 19, 391–397, https://doi.org/10.1175/1520-0434(2004)019<0391:TBWCWR>2.0.CO;2.
Samenow, J., 2012: Super tornado outbreak of April 27, 2011: One year anniversary. Washington Post, 27 April 2012, https://www.washingtonpost.com/blogs/capital-weather-gang/post/super-tornado-outbreak-of-april-27-2011-one-year-anniversary/2012/04/27/gIQARRLJlT_blog.html?utm_term=.818c26c970e9.
Schmidlin, T. W., and P. King, 1995: Risk factors for death in the 27 March 1994 Georgia and Alabama tornadoes. Disasters, 19, 170–177, https://doi.org/10.1111/j.1467-7717.1995.tb00367.x.
Senkbeil, J. C., D. A. Scott, P. Guinazu-Walker, and M. S. Rockman, 2014: Ethnic and racial differences in tornado hazard perception, preparedness, and shelter lead time in Tuscaloosa. Prof. Geogr., 66, 610–620, https://doi.org/10.1080/00330124.2013.826562.
Simmons, K. M., and D. Sutter, 2009: False alarms, tornado warnings, and tornado casualties. Wea. Climate Soc., 1, 38–53, https://doi.org/10.1175/2009WCAS1005.1.
Smith, M. R., 1994: The Moral Problem. Blackwell, 240 pp.
So, J., 2013: A further extension of the Extended Parallel Process Model (E-EPPM): Implications of cognitive appraisal theory of emotion and dispositional coping style. Health Commun., 28, 72–83, https://doi.org/10.1080/10410236.2012.708633.
Sorensen, J. B., and O. Sorenson, 2007: Corporate demography and income inequality. Amer. Sociol. Rev., 72, 766–783, https://doi.org/10.1177/000312240707200506.
Stirling, S., 2015: Three out of every four tornado warnings are false alarms. FiveThirtyEight, ABC News, https://fivethirtyeight.com/features/three-out-of-every-four-tornado-warnings-are-false-alarms/.
Tierney, K. J., 2000: Implementing a seismic computerized alert network (SCAN) for Southern California: Lessons and guidance from the literature on warning response and warning systems. University of Delaware Disaster Research Center, 91 pp., http://udspace.udel.edu/handle/19716/1155.
Tierney, K. J., M. K. Lindell, and R. W. Perry, 2001: Facing the Unexpected: Disaster Preparedness and Response in the United States. Joseph Henry Press, 320 pp., https://doi.org/10.17226/9834.
Trafalis, T. B., H. Ince, and M. Richman, 2003: Tornado detection with support vector machines. Proceedings of Dynamic Data Driven Application Systems, International Conference on Computational Science, Melbourne, Australia, https://dl.acm.org/citation.cfm?id=1757634.
Trainor, J. E., D. Nagele, B. Philips, and B. Scott, 2015: Tornadoes, social science, and the false alarm effect. Wea. Climate Soc., 7, 333–352, https://doi.org/10.1175/WCAS-D-14-00052.1.
Wang, X., and N. Kapucu, 2008: Public complacency under repeated emergency threats: Some empirical evidence. J. Public Adm. Res. Theory, 18, 57–78, https://doi.org/10.1093/jopart/mum001.
Wickens, C. D., S. Rice, D. Keller, S. Hutchins, J. Hughes, and K. Clayton, 2009: False alerts in air traffic control conflict alerting system: Is there a “cry wolf” effect? Hum. Factors, 51, 446–462, https://doi.org/10.1177/0018720809344720.
Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. Academic Press, 627 pp.
Witte, K., 1992: Putting the fear back into fear appeals: The extended parallel process model. Commun. Monogr., 59, 329–349, https://doi.org/10.1080/03637759209376276.
Witte, K., and M. Allen, 2000: A meta-analysis of fear appeals: Implications for effective public health campaigns. Health Educ. Behav., 27, 591–615, https://doi.org/10.1177/109019810002700506.