1. Introduction
Since the 1950s, the U.S. Weather Bureau, now the National Weather Service (NWS), has provided warnings for severe weather and tornadoes. Improvements over the last 30 years, including the implementation of Doppler radar, have significantly reduced casualties (Simmons and Sutter 2005). Currently, the NWS is considering refining warnings to include probabilities of severe weather rather than a binary deterministic warning. Sorensen (2000) laments that, for almost all hazards, no warning system is 100% reliable. While 100% reliability may not be achievable, probabilistic warnings do hold the promise of providing better and timelier information to those in the path of a violent storm or tornado.
Several surveys have queried the public about their likely response to probabilistic warnings, but to our knowledge none have surveyed businesses on how they may respond to new probabilistic warnings. This project intends to fill that gap, fulfilling the objective of the Weather Research and Forecasting Innovation Act of 2017, House of Representatives (H.R.) bill 353, which calls for the use of social and behavioral science to study severe weather warning systems.
Using local chambers of commerce to distribute a survey instrument, we studied businesses in north Texas to compare their likely responses to deterministic and probabilistic tornado warnings. The survey was tested with a focus group prior to distribution. This group met at the Grayson County courthouse and included personnel who would be responsible for decisions in the event of a tornado warning. A significant contribution of the focus group was the calibration of a 10-point behavior response scale that ranked responses from low to high effort. This behavior ranking scale provides a tool to examine how businesses may respond under different warning systems. In all, 180 fully completed surveys were received. Most responding firms were small, but almost one-quarter came from businesses with more than 50 employees. The largest numbers of returned surveys came from financial firms, business information, health services, education, and real estate.
Next, a logit regression model was created to analyze the survey results. The goal of the model was to identify the factors that have the greatest influence on a business’s response under different warning scenarios. The results from the regression model provide estimates of the likelihood of a high-effort response. Influential variables are then isolated to examine how the likelihood of a high-effort response varies while other variables are held constant.
The paper begins with a review of literature (section 2), followed by a discussion of the data collection method in section 3. Results from the survey are found in section 4, which also includes the results from the regression model and estimates of how the likelihood of a high-effort response could be improved. Section 5 offers a discussion of our results and compares findings with earlier research. Section 6 provides conclusions. We find that providing probabilistic information has a positive effect on how warnings are received and understood. Further, we find that as trust in the warnings increases, response improves.
2. Review of literature
Below we review literature in three distinct areas to provide a foundation for the current study. Because the survey was conducted on businesses rather than individuals, our first section reviews previous studies on businesses and disasters. That is followed by a review of previous work on warning response, including the impact of the warning’s presentation, confirmation of the warning, provided lead time, false alarms, and trust in the warning itself. The last section considers previous studies that used focus groups and weather-related scenarios; these studies assisted us in our research design and implementation.
a. Businesses and disasters
Businesses prepare for and respond to disasters differently than individuals. Beyond protection of life, there are financial considerations in terms of protecting assets and avoiding lost productivity or sales. Moreover, the impacts from a disaster go beyond the business itself, affecting employees who may see a loss of income and communities that may see lower tax revenues (Tierney 2007). In addition, the interruption to economic activity from major events can impose a lingering reduction in economic growth of 0.46% that impacts the community at large (Felbermayr and Gröschl 2014).
Despite this potential vulnerability to disasters, Webb et al. (2000) suggest most businesses are ill prepared, particularly with regard to preparations that are difficult, expensive, and time-consuming. Overall, small and newer businesses are more likely to fail after a disaster (Collier 2016), since most are underinsured or not insured at all. Thus, these businesses are largely dependent on government for both predisaster mitigation and postdisaster recovery (Tierney 2007).
Business survival of a disaster depends on the impacts of the storm, how a business plans for these events, experience with past disasters, and the size of the business. For example, after Hurricane Katrina, Sydnor et al. (2017) conducted a survey of over 350 businesses and concluded that the amount of damage and the loss of inventory and equipment were significant predictors of a business’s ability to reopen. Xiao and Peacock (2014) found that predisaster planning promotes the adoption of mitigation that reduces damage when the storm arrives. Marshall et al. (2015) found that businesses that had experienced previous disasters were more likely to survive and that the size of the business influences survival; similar to Collier (2016), larger businesses were more resilient than smaller ones.
b. Weather warnings and responses
Response is a complex decision prompted by the warning presentation, confirmation of the threat, lead times, false alarms, and trust in the information itself. Lindell (2018) found that warning response increases when individuals receive information that provides the timing of the event, which areas are likely to be impacted, and what actions could be taken for protection. Schumann et al. (2018) found that users’ interpretation of a warning presentation matters and that other influencing variables include how users gather information, their previous experience with tornadoes, and the local culture surrounding tornadoes.
Receipt of a warning is often followed by a tendency to confirm that the warning applies personally. This process of risk confirmation is common in tornado warnings (Mileti 1995; Mileti and Sorensen 1990). As an example, in January of 2008 a tornado occurred in Starkville, Mississippi, near Mississippi State University. A study of almost 3000 undergraduates found that over 40% went outside to confirm the tornado warning (Sherman-Morris 2009). Paul et al. (2015) also found that a major reason individuals did not heed the tornado warning in the 2011 Joplin, Missouri, event was a failure to credibly confirm the threat.
Lead times are another essential aspect of tornado warnings; the more time provided before the tornado arrives, the more time people have to protect themselves and their assets. Lead times have improved over the years: after the WSR-88D installation in 1992, 62% of all tornadoes were preceded by warnings, up from 37% before the installation (Bieringer and Ray 1996). On average, people prefer about 35 min of lead time before a tornado arrives (Mason and Senkbeil 2015; Hoekstra et al. 2011). However, Simmons and Sutter (2008) found that casualties from tornadoes increase with excessive lead time, placing the optimal lead time between 15 and 20 min. Miran et al. (2018) found that adding probabilities to the warning may increase response: when people are given a lead time with a probabilistic warning, they are more likely to respond than when they receive a deterministic warning with the same lead time.
When a warned tornado does not occur, it creates a false alarm. Even though people appear to want more accurate warnings than they currently receive, 86% of respondents in Schultz et al. (2010) said that false alarms do not affect their future response to severe weather warnings. However, other literature has found the opposite: false alarms damage trust in future severe weather warnings (Mackie 2014; Breznitz 1984). Findings also show that experiencing false alarms can decrease the action taken (LeClerc and Joslyn 2015) and increase casualties (Simmons and Sutter 2009) and that people in areas with higher false-alarm rates are less likely to take shelter (Trainor et al. 2015). However, the study of false alarms is complicated by how a false alarm is defined officially as compared with how the public perceives one. Officially, a false alarm occurs when a warned tornado does not occur, but the public may perceive a false alarm if a warned tornado did not directly affect them even if it did occur. More work on false alarms is needed, but, because of this definitional ambiguity, studies on the subject should be approached with some care.
The trust people have in current warning messages can also explain response. Drost et al. (2016) found that warning recipients neither trust nor distrust the severe weather warnings they receive. However, Ryherd (2016) found that respondents are more likely to seek shelter with improved tornado warnings and that women were significantly more likely to respond than men; that study did not find a significant relationship between education or income and shelter-seeking behavior. Kox and Thieken (2017) found that respondents had higher confidence in the more immediate (2-day) forecast than in the longer-term (7-day) forecast. Trust in the forecast had a significant effect on lowering the decision threshold to respond in all of their severe weather (frost) scenarios.
c. Focus group and survey design
One unique aspect of this study is the use of a behavior ranking scale to quantify response to a series of warning scenarios using both deterministic and probabilistic warnings. Prior to survey distribution, a focus group was used to create the scale. Previous studies investigating warning response have likewise used focus groups to provide feedback on surveys prior to their implementation (Jauernic and Van Den Broeke 2017; Stalker et al. 2015; Baumgart et al. 2008).
The current study asked respondents to consider a tornado scenario using deterministic and probabilistic warnings, similar to previous research. For example, to estimate whether respondents would take protective action, Morss et al. (2010) used two risk scenarios (protection of a reservoir and protection of a fruit orchard) and asked respondents to make binary decisions on nine different forecast scenarios. Kox and Thieken (2017) used four scenarios created from varying risk (low/high) and property value (low/high) to protect a garden from frost damage. Their study also examined changes to decision thresholds for response, including the impact of prior experience, ability to act, and trust in the forecast. They found that as trust in the information increased, response improved.
3. Methods
We began our project by consulting with the director of emergency management in Grayson County. In this meeting, we discussed the types of behaviors we expected from businesses during a tornado warning and how those behaviors might change with a probabilistic format. That led to the creation of a prototype survey with input from NOAA officials at the National Weather Center in Norman, Oklahoma. The purpose of the survey was to examine businesses’ intended responses to different types of tornado warnings and ultimately to compare responses across different warning systems. To evaluate the intensity of businesses’ responses to tornado scenarios, however, we needed a measurement tool that could capture reported behavior and the effort associated with it. With the help of the Grayson County Office of Emergency Management and the county judge, we then convened a focus group to assist in finalizing the survey and in creating a behavior ranking scale to gauge business response to tornado warnings under deterministic and probabilistic formats. Once the group’s work was complete, a final survey was produced for distribution to north Texas businesses through regional chambers of commerce.
a. Focus group
In January 2019, the focus group, composed of nine local business owners, was convened with the facilitation of the Grayson County Office of Emergency Management. Table 1 displays the variety of participants represented in the focus group: emergency departments, distribution centers, manufacturing plants, and a community college. Participants’ positions within these organizations included executive officers, directors, and safety personnel, and all participants reported having a role in and experience with making decisions for their organization in the face of severe weather.
Focus group members.
The intention of the focus group was to establish a behavior ranking scale that captured the level of effort in response to a tornado warning. The focus group protocol was as follows: After informed consent signatures were obtained, participants spent several minutes responding to a “behavior ranking exercise.” Individual participants were each asked to first reflect on the appropriateness of each of 10 behaviors as practical responses to a tornado, and then to rank the level of effort of each of these behaviors. Following their independent evaluations, we discussed the scale collectively. Consensus produced the behavior ranking scale reported in Table 2.
Behavior ranking scale.
This behavior ranking scale represents a reasonable list of actions businesses may take in response to a tornado, as well as the level of effort associated with each action—both of which were vetted by professionals who regularly make decisions in the face of a tornado. However, a brief caveat is warranted: the ranking in this scale should not be considered universal for every industry. For example, a community college’s action ranked “10” may differ from that of a manufacturing plant (i.e., the former does not shut down a production line as the latter does). Nevertheless, the focus group agreed that the resulting behavior ranking scale reflected a reasonable range of behaviors for most industries, properly ordered from least to most severe in nature. The focus group also helped to develop a distinction between “low effort” and “high effort” responses to a warning. Actions scored above 5 were considered “high effort” in that they require significantly more cost, consideration, and coordination; actions scored 5 or below were considered “low effort” in that they do not require substantial costs or behavioral change on the part of the firm. Thus, the behavior ranking scale can be conceived of as a spectrum of 10 actions, each involving increasing effort and cost, which can reasonably be split between “low” and “high” effort. With this vetted measurement scale in hand, we produced a survey that used the behavior ranking scale to capture businesses’ anticipated behaviors in response to varied warning systems.
b. Survey design
The survey instrument contained three sections: “descriptive questions,” “scenario questions,” and “closing questions and debrief.” All questions used in the survey, as well as the response options offered, are detailed in the appendix under Table A1. After responding to the descriptive questions (questions 1–20 in Table A1), respondents were shown a series of tornado scenarios that varied only with respect to the warning indicated. Each scenario was prefaced with the following statement: “It is a standard weekday afternoon. You are at your firm’s location and receive the following in a digital message.” Next, respondents received a deterministic warning in a text-only format, shown in Fig. 1, and were then asked four questions in response (questions 21–24 in Table A1). Importantly, one of these questions asked respondents to select 5 “behavioral responses” from the list of 10 possible actions constructed in the focus group (see Table 2). Note also that the list was randomized, and respondents were unaware of the underlying level of effort associated with each behavior.
The deterministic warning.
Citation: Weather, Climate, and Society 14, 1; 10.1175/WCAS-D-21-0029.1
Next, respondents were shown four probabilistic warning scenarios and responded to the same four questions as above. Probabilistic warnings were presented in graphical format as a radar image superimposed on a “plume” or “cone shaped” warning with an accompanying explanation for the graphic. Figure 2 offers an example of how probabilistic warnings were presented. Respondents were shown three additional plumes, with dots placed at 50%, 100%, and then 75%. In addition to the plume, respondents were given the following message to ensure they understood the figure: “This ‘plume’ or ‘cone’ represents your position in proximity to the storm in terms of probability. Assume time is not a factor in this scenario. Currently, there is almost a X% chance that the storm will affect your firm.”
The probabilistic warning.
After responding to the five warning scenarios, respondents were asked two conclusion questions, and then given space to provide open-ended commentary (questions 25–27 in Table A1).
c. Recruitment and sampling
To facilitate distribution of the survey, we coordinated with Dallas–Fort Worth (Texas) area chambers of commerce (Dallas, Frisco, McKinney, Plano, and Sherman). Relative to other parts of the United States, the Dallas–Fort Worth region has a higher risk of tornadoes, which means that most businesses have experience with decision-making in the face of tornadoes and severe weather warnings. Our choice of study area may limit the transferability of our results to regions of the country with less experience in severe weather decision-making.
The survey instrument was sent via email from the chambers to their members, accompanied by a brief statement from the research team about the project. Between March and July 2019, over 400 responses were collected, although not all respondents answered all questions. Most responding businesses (75%) were small, with fewer than 10 employees and/or customers. Follow-up emails requesting participation were sent by the respective chambers in an attempt to elicit more responses.
d. Logit regression analysis
A regression model allows the researcher to identify which variables are important in explaining changes to the dependent variable. Additionally, parameter estimates from a regression model can be used to examine what happens to the dependent variable when explanatory variables change. The regression analysis uses a logit model, so the dependent variable is binary: 0 indicates a low-effort response, and 1 indicates high effort. The result therefore estimates the likelihood that a business will respond to the warning with a high-effort action, based on the values of the explanatory variables. To make this estimate from our survey, we needed to classify each response as low or high effort. On our behavior ranking scale, low-effort responses occupy the lower end of the scale and high-effort responses the upper end. Each responding business was asked to choose five actions it would take for each warning scenario. First, those answers are used to find an average rating for each business and each warning scenario. Next, this average is converted to a binary variable set to 0 when low effort (0–5) or 1 when high effort (>5). Explanatory variables for the regression model fall into four groups: 1) type of warning; 2) experience with warnings, including trust in the warnings and experience with false alarms; 3) size of the business; and 4) attributes of the person who would decide how to respond.
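The low/high-effort coding described above can be sketched in a few lines. This is our own illustrative helper, not code from the original analysis; only the threshold of 5 comes from the focus group’s scale.

```python
def high_effort(selected_ranks):
    """Binary dependent variable for the logit model: 1 ("high effort")
    when the mean rank of a firm's chosen actions exceeds 5 on the
    10-point behavior ranking scale, else 0 ("low effort")."""
    avg = sum(selected_ranks) / len(selected_ranks)
    return 1 if avg > 5 else 0
```

For example, a firm selecting actions ranked 3, 4, 5, 6, and 7 averages exactly 5 and is coded as low effort.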
Survey questions had to be chosen and converted to a form consistent with a logit model, so some variables were converted to categorical or binary variables to estimate their impact properly. The model includes eight explanatory variables: two categorical, two binary, and four continuous. The two categorical variables are warning type, with the 25% probability warning as the omitted category, and decision-maker attributes, with “other” as the omitted category. There were 180 returned surveys with responses to every question used in the regression. Each responding business provided five responses, one for each probabilistic warning and one for the current deterministic warning, giving the final logit model 900 observations. Limiting the number of explanatory variables decreases the risk of multicollinearity while increasing the predictive power of the variables used (Daniere and Gilboy 1960). A correlation matrix and a variance inflation test were used to check for multicollinearity, which was not found.
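The variance inflation test can be illustrated with a minimal sketch (our own, not the authors’ code): each explanatory variable is regressed on the others, and its variance inflation factor (VIF) is 1/(1 − R²), with values near 1 indicating no multicollinearity.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X, where columns
    are explanatory variables (no intercept column included)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on an intercept plus all other columns.
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        factors.append(1.0 / (1.0 - r2))
    return factors
```

A common rule of thumb flags VIF values above roughly 5–10 as evidence of multicollinearity.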
4. Results
In this section, our survey results are presented in several forms. First, we summarize the most important responses to the descriptive questions about the businesses that responded. Next, responses to two questions about the various warning scenarios are examined: what level of response businesses report on the behavior scale and how much they trust the warning in each scenario. A logistic regression is then used to evaluate which variables are most significant in determining whether a response to the warnings can be classified as low or high effort. Next, the regression results are used to estimate the likelihood of a high-effort response for all warnings. Last, parameter estimates from the regression model are used to illustrate what change in response could be expected if the most significant variable, trust in the warning, is minimized and then maximized.
a. Descriptive statistics
The first question asked for the industry that best described the firm, selected from the list shown in Fig. 3. Besides “other” (26%), the most frequently selected categories were finance and insurance (14%) and business and information (11%); these two industries alone accounted for one-fourth of total respondents. Respondents who chose “other” were not asked to specify an industry.
Distribution of responses arranged by industry.
After the type of industry, we asked respondents about their experience with tornado warnings to set a baseline for their knowledge. Table 3 provides this information for the businesses that responded to the survey. About 44% reported having no experience with tornado decisions, and 56% reported experience. Approximately 44% had experienced a false alarm in the past, while 56% had not. On training, 56% of firms reported previous training in severe weather decision-making. Respondents were then asked about information sources: the most frequently selected was the National Weather Service (35%), with local news second (25%). In terms of warning time, 88% stated they preferred less than 1 h.
Business characteristics.
b. Warnings
Tornado warning scenarios formed the last portion of the survey. Respondents were asked to choose up to five actions their business would take in response to each warning scenario, drawn from the 10 choices on the behavior ranking scale. The first warning scenario in the survey was the current deterministic warning. It is worth noting again that the actions were not presented in the order of the scale, nor did businesses know that the actions they selected would be used to create a scale.
In the appendix, we provide frequency distributions of responses for each warning format and probability. To illustrate how response changes across warnings, we calculate a mean for each warning from its frequency distribution; the means are shown in Fig. 4. Each mean is then compared with the means of the other warnings, with the applicable frequency distribution from the appendix noted in parentheses. Not surprisingly, as the probability of tornado occurrence increases, so does the level of effort in responding to the warning. The deterministic warning is first compared with the 100% warning, followed by the response results for each of the other probabilistic warnings in ascending order.
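The mean for each warning can be computed from its frequency distribution as a rank-weighted average; a minimal sketch (our own helper, with hypothetical counts):

```python
def mean_from_frequencies(freq):
    """Weighted mean of behavior-scale ranks, where freq maps each
    rank (1-10) to the number of respondents who selected it."""
    total = sum(freq.values())
    return sum(rank * count for rank, count in freq.items()) / total
```

For instance, if ranks 5 and 10 were each chosen twice, the mean response would be 7.5.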
Response and trust comparison for all warning scenarios.
For the deterministic warning scenario, responses produced a mean of 6.34 on the behavior response scale (Fig. A1), indicating that businesses report a high-effort response when given a deterministic warning. The nearest probabilistic comparison is the 100% warning, whose mean of 6.55 on the behavior scale (Fig. A5) is slightly higher than the deterministic mean of 6.34.
The survey presented probabilistic warnings in random order to avoid ordering effects; however, they are discussed in ascending order to simplify the analysis. For the 25% probabilistic warning, responses produced a mean of 4.4 on the behavior scale (Fig. A2); based on the choices on the behavior ranking scale, businesses receiving this warning exhibit low effort in their response. The 50% probabilistic warning had a mean response of 5.4 (Fig. A3), suggesting some businesses are now actively responding to the warning. Next, the 75% probabilistic warning had a mean response of 5.7 (Fig. A4). Last, as mentioned earlier, the mean response for the 100% probabilistic warning is 6.55 (Fig. A5).
Next, trust in the warning is considered for all warning scenarios. As with the discussion of response, the deterministic and 100% warnings are compared first, followed by the probabilistic warnings in ascending order. Like the response results, trust in the deterministic warning was high, with firms trusting the warning at approximately 7 of 10. Trust in the 100% warning is the highest of all scenarios at 8.19, a notable change from the deterministic value of 7.
For the 25% warning, trust is 5 of 10. Trust increased to 5.12 for the 50% warning, indicating that trust follows the probability of tornado occurrence, although the change is small. A larger increase in trust is seen from the 50% to the 75% warning, where businesses trusted the 75% probabilistic warning at 7.16 of 10; for the 100% warning, trust is 8.19. Figure 4 compares the results among all warning scenarios for both responses on the behavior scale and trust in the warning.
c. Logit regression result
The regression results identify explanatory variables that have a significant influence on the dependent variable, which for this model is a low- or high-effort response. Each warning scenario was significant at the p < 0.01 level, suggesting increased probabilities have a significant effect on the likelihood of a high-effort response from a business in the path of a tornado. In addition to the warning format and probability of tornado occurrence, trust in the warning was the only other variable significant at the p < 0.01 level, with higher levels of trust corresponding to an increase in the probability of a high-effort response. Variables for the size of the business (employee and customer counts) and for false alarms were insignificant. Three attributes of the decision-maker were significant: whether they were trained to make response decisions and whether their position was that of an executive or a communication liaison. Table 4 provides the results of the logit model.
Logit regression model result. One, two, and three asterisks indicate significance at p < 0.1, p < 0.05, and p < 0.01, respectively.
A count pseudo-R² of 74% was calculated: the model correctly predicted whether a business would take a low- or high-effort response on 667 of the 900 observations.
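The count pseudo-R² is simply the share of observations the fitted model classifies correctly; a sketch of the calculation under the usual 0.5 cutoff (our own helper, not the authors’ code):

```python
def count_pseudo_r2(y_true, p_hat, cutoff=0.5):
    """Fraction of observations whose predicted class (p_hat > cutoff)
    matches the observed low/high-effort outcome (0 or 1)."""
    hits = sum(1 for y, p in zip(y_true, p_hat) if (p > cutoff) == bool(y))
    return hits / len(y_true)
```

With 667 correct classifications out of 900 observations, the statistic is 667/900 ≈ 0.74.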
d. Warning response improvement
The regression model identifies the most significant variables affecting the likelihood of a high-effort response, and its results can also be used to estimate how response would change if those variables are allowed to vary. First, a baseline estimate of the likelihood of a high-effort response is provided for each warning scenario. Next, we illustrate how response may change given an increase or decrease in warning trust, the only other highly significant variable. For the baseline estimate, continuous variables were set to their median values, while binary and categorical variables were set to the most selected response. Trust in the warning is the only variable that varied across the five warning scenarios and is set to the average response for each scenario. From these values, the likelihood that a business will respond to the warning with high effort can be estimated. Figure 5 shows the baseline estimates and the change in response when trust is either decreased or increased.
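These baseline estimates are obtained by plugging the fixed covariate values into the fitted logit equation. A minimal sketch with illustrative coefficients (not the paper’s estimates) shows how moving a positively signed trust variable between its minimum and maximum moves the predicted likelihood:

```python
import math

def logit_prob(intercept, coefs, x):
    """Predicted probability of a high-effort response from a fitted
    logit model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients only: with a positive trust coefficient,
# moving trust from its minimum (1) to its maximum (10) raises the
# predicted likelihood of a high-effort response.
p_min_trust = logit_prob(-1.0, [0.5], [1.0])
p_max_trust = logit_prob(-1.0, [0.5], [10.0])
```

The same mechanics apply to any covariate: hold the others at their baseline values and vary the one of interest.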
Trust estimates—baseline, minimized trust, and maximized trust—showing the effect of trust on response.
At the 25% probabilistic warning, businesses are projected to respond with high effort about 21% of the time. A 50% probabilistic warning significantly increases the likelihood of a high-effort response, to 55%. Moving from the 50% to the 75% probabilistic warning raises the likelihood of an active response to 66%, an increase of 11 percentage points. At 100% probability, there is an 89% chance that a business will respond with high effort. The likelihood of a high-effort response to the deterministic warning is 80%.
The results from a regression model can also be used to examine how the dependent variable changes when an explanatory variable is manipulated. Changes to trust in the warning are used to illustrate decreases and increases in the likelihood of a high-effort response: trust is first minimized and then maximized, and the resulting changes to the baseline estimates are examined.
When the level of trust is set to its minimum, the likelihoods for the 25% and 50% probabilistic warnings decrease by 10 and 28 percentage points relative to the baseline estimates. The 75% probabilistic warning sees the biggest change, dropping from a 64% baseline estimate to a 34% likelihood. Both the 100% probabilistic warning and the deterministic warning also show large changes, dropping 25 and 29 percentage points from baseline to 64% and 51% likelihoods of businesses responding with high effort.
Probabilistic warnings of 25% and 50% show the most notable changes when trust in the warning is maximized. The likelihood of a high-effort response jumps to 43% for the 25% warning and 73% for the 50% warning, both roughly 20 percentage points higher than the baseline. The 75% probabilistic warning sees a 14-percentage-point increase in the likelihood of a high-effort response, to 78%. The 100% probabilistic warning reaches a 92% likelihood of an active response, up 3 percentage points from the baseline, while the deterministic warning rises 8 percentage points to an 88% likelihood of a high-effort response. This result highlights the promise of probabilistic warnings: by providing the public with probabilistic information, and as trust in that information grows, the result may be a more robust response.
5. Discussion
In this section, a brief discussion of our results is offered to put this study in context with others that have addressed similar issues. We also point out some of the more striking findings that may warrant further investigation.
Most of the descriptive statistics relate to the business itself and do not provide comparison with previous research, although one question, preferred warning time, has been asked in other studies. In this project, 88% of respondents stated they preferred a warning time of less than 1 h. This result corresponds closely to the results of Mason and Senkbeil (2015) and Hoekstra et al. (2011).
The level of effort in the anticipated response action mirrored the probability of tornado occurrence communicated in the warning: as probability went up, response increased. One result warrants mention, namely the small change in response between the 50% and 75% warnings. The mean response on our scale for the 25% warning was 4.4. At the 50% warning, the mean response was 5.4, rising only to 5.7 for the 75% warning. The closeness of the responses at 75% and 50% suggests businesses may view these two probabilistic warnings similarly, creating a plateau in response. Predicted response then made another large jump at the 100% warning, increasing to 6.55.
Trust also increases with the probability of severe weather but shows only a small change between the 25% and 50% warnings before making a notable jump at the 75% warning. For a 100% warning, trust increases to 8.19. Rather than the midrange plateau seen in response, trust shows a marked rise after the 50% warning. A deterministic warning has an average trust value of 7. This result differs from that of Drost et al. (2016), whose participants reported midrange trust on their scale, suggesting neither high nor low trust in severe weather warnings. However, it is in line with the findings of Ryherd (2016) and Kox and Thieken (2017).
Experience with a false alarm was not a significant predictor of low- or high-effort response in the regression model. This corresponds to the result found by Schultz et al. (2010). Our result, however, may be misleading if false alarms influence trust in the warning, which is the strongest variable in the model. Simmons and Sutter (2009) found that false alarms do statistically influence response. That study used the official definition of a false alarm, but as mentioned in the literature review, the public may define a false alarm differently than the National Weather Service does. Trainor et al. (2015) found a great deal of variation in public perception of a false alarm, yet also found that official county-level false alarm rates do predict behavioral response. False alarms have been the topic of much research in recent years and will continue to generate interest among researchers. Within the context of that ongoing research, our nonsignificant result is itself interesting.
6. Conclusions
This paper uses a behavior ranking scale to examine how businesses anticipate responding to tornado warning information when presented in a probabilistic versus a deterministic format. Our result shows that as the probability of a tornado increases, businesses anticipate taking on higher-effort actions to protect their employees, customers, and assets.
The results from our regression show that, besides the probabilistic warnings themselves, the most significant factor in business decision-making is trust in the warning, which increases as the provided probabilities of a tornado increase. This adds to the understanding of how businesses respond to tornado threats and highlights the improvement probabilistic warnings offer. First, providing probabilistic information can have a positive effect on how warnings are received and understood. Furthermore, increased trust in the validity of the warning strengthens response even more.
This provides a promising outlook for warnings that utilize probabilities, but we acknowledge the study is not fully generalizable and has limitations. First, the study was conducted in north Texas, a region with elevated tornado risk. Residents and businesses there expect severe weather and may be more familiar with it than those in other parts of the country. A useful follow-up to this paper would be a study in a different part of the United States or a nationwide study. Second, we allowed participants to choose up to five actions they would engage in for each warning, but it is possible that each choice entailed other actions that would be taken implicitly though not chosen on the survey. Surveys designed with “if/then” types of decision processes may provide a solution. Third, the visualization of the probabilistic warnings focused solely on the spatial aspects of where the storm is in relation to the firm’s location. This ignores any effect time may have on response. Here, a logical follow-up study would be one that uses laboratory simulations allowing participants to utilize both the spatial and temporal elements of an impending storm before determining their response. Last, using only text for the deterministic warning, as compared with text and a picture, may have inadvertently influenced the anticipated response and perceived trust in the warnings. Including a picture may have assisted respondents in the warning confirmation process (Mileti and Sorensen 1990), enabling them to bypass actions 2 and 5, which were both categorized as low effort, in favor of higher-effort actions. Additionally, being able to confirm the tornado’s location with respect to their business may have increased trust in the warning itself.
Regardless of these acknowledged limitations, the results provide a significant contribution to understanding how businesses respond to probabilistic warnings and should be viewed as a valuable first step in evaluating how businesses may respond when warnings contain probabilities.
Acknowledgments
Funding was provided by the NOAA/Office of Oceanic and Atmospheric Research under NOAA–University of Oklahoma Cooperative Agreement NA16OAR4320115, U.S. Department of Commerce.
APPENDIX
Additional Material about the Questions and Responses Used in This Study
Table A1 shows all questions used in the survey, including descriptive questions, scenario questions, and closing questions, along with the response options offered. Figures A1–A5 show the frequency distributions of responses for each warning format and probability.
Table A1. Survey instrument questions and response options.
Fig. A1. Frequency distribution of deterministic responses.
Citation: Weather, Climate, and Society 14, 1; 10.1175/WCAS-D-21-0029.1
Fig. A2. Frequency distribution of 25% probabilistic responses.
Fig. A3. Frequency distribution of 50% probabilistic responses.
Fig. A4. Frequency distribution of 75% probabilistic responses.
Fig. A5. Frequency distribution of 100% probabilistic responses.
REFERENCES
Baumgart, L. A., E. J. Bass, B. Philips, and K. Kloesel, 2008: Emergency management decision making during severe weather. Wea. Forecasting, 23, 1268–1279, https://doi.org/10.1175/2008WAF2007092.1.
Bieringer, P., and P. Ray, 1996: A comparison of tornado warning lead times with and without NEXRAD Doppler Radar. Wea. Forecasting, 11, 47–52, https://doi.org/10.1175/1520-0434(1996)011<0047:ACOTWL>2.0.CO;2.
Breznitz, S., 1984: Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, 280 pp.
Collier, B., 2016: Small and young businesses are especially vulnerable to severe weather. Harvard Business Review, 23 November 2016, https://hbr.org/2016/11/small-and-youngbusinesses-are-especially-vulnerable-to-extreme-weather.
Daniere, A., and E. Gilboy, 1960: The specification of empirical consumption structures. Proceedings of the Conference on Consumption and Saving, I. Friend and R. Jones, Eds., Vols. 1 and 2, University of Pennsylvania Press, 93–136, https://doi.org/10.9783/9781512818444-003.
Drost, R., M. Casteel, J. Libarkin, S. Thomas, and M. Meister, 2016: Severe weather warning communication: Factors impacting audience attention and retention of information during tornado warnings. Wea. Climate Soc., 8, 361–372, https://doi.org/10.1175/WCAS-D-15-0035.1.
Felbermayr, G., and J. Gröschl, 2014: Naturally negative: The growth effects of natural disasters. J. Dev. Econ., 111, 92–106, https://doi.org/10.1016/j.jdeveco.2014.07.004.
Hoekstra, S., K. Klockow, R. Riley, J. Brotzge, H. Brooks, and S. Erickson, 2011: A preliminary look at the social perspective of warn-on-forecast: Preferred tornado warning lead time and the general public’s perceptions of weather risks. Wea. Climate Soc., 3, 128–140, https://doi.org/10.1175/2011WCAS1076.1.
Jauernic, S. T., and M. S. Van Den Broeke, 2017: Tornado warning response and perceptions among undergraduates in Nebraska. Wea. Climate Soc., 9, 125–139, https://doi.org/10.1175/WCAS-D-16-0031.1.
Kox, T., and A. H. Thieken, 2017: To act or not to act? Factors influencing the general public’s decision about whether to take protective action against severe weather. Wea. Climate Soc., 9, 299–315, https://doi.org/10.1175/WCAS-D-15-0078.1.
LeClerc, J., and S. Joslyn, 2015: The cry wolf effect and weather-related decision making. Risk Anal., 35, 385–395, https://doi.org/10.1111/risa.12336.
Lindell, M. K., 2018: Communicating imminent risk. Handbook of Disaster Research, 2nd ed., H. Rodríguez, W. Donner, and J. Trainor, Eds., Handbooks of Sociology and Social Research, Springer, 449–477, https://doi.org/10.1007/978-3-319-63254-4_22.
Mackie, B., 2014: Warning fatigue: Insights from the Australian bushfire context. Ph.D. dissertation, University of Canterbury, 294 pp., https://ir.canterbury.ac.nz/handle/10092/9029.
Marshall, M., L. Niehm, S. Sydnor, and H. Schrank, 2015: Predicting small business demise after a natural disaster: An analysis of pre-existing conditions. Nat. Hazards, 79, 331–354, https://doi.org/10.1007/s11069-015-1845-0.
Mason, J. B., and J. C. Senkbeil, 2015: A tornado watch scale to improve public response. Wea. Climate Soc., 7, 146–158, https://doi.org/10.1175/WCAS-D-14-00035.1.
Mileti, D. S., 1995: Factors related to flood warning response. U.S.–Italy Research Workshop on the Hydrometeorology, Impacts, and Management of Extreme Floods, Perugia, Italy, U.S. National Science Foundation and Italian National Research Council, 4.6, https://www.engr.colostate.edu/ce/facultystaff/salas/us-italy/papers/46mileti.pdf.
Mileti, D. S., and J. H. Sorensen, 1990: Communication for emergency public warnings. Oak Ridge National Laboratory Rep. ORNL-6609, 162 pp., https://doi.org/10.2172/6137387.
Miran, S. M., C. Ling, A. Gerard, and L. Rothfusz, 2018: The effect of providing probabilistic information about a tornado threat on people’s protective actions. Nat. Hazards, 94, 743–758, https://doi.org/10.1007/s11069-018-3418-5.
Morss, R. E., J. K. Lazo, and J. L. Demuth, 2010: Examining the use of weather forecasts in decision scenarios: Results from a US survey with implications for uncertainty communication. Meteor. Appl., 17, 149–162, https://doi.org/10.1002/met.196.
Paul, B., M. Stimers, and M. Caldus, 2015: Predictors of compliance with tornado warnings issued in Joplin, Missouri, in 2011. Disasters, 39, 108–124, https://doi.org/10.1111/disa.12087.
Ryherd, J. M., 2016: A qualitative analysis of public compliance to severe weather and tornado warnings. Senior Thesis, Dept. of Meteorology, Iowa State University, 22 pp., https://doi.org/10.31274/mteor_stheses-180813-4.
Schultz, D. M., E. C. Gruntfest, M. H. Hayden, C. C. Benight, S. Drobot, and L. R. Barnes, 2010: Decision making by Austin, Texas, residents in hypothetical tornado scenarios. Wea. Climate Soc., 2, 249–254, https://doi.org/10.1175/2010WCAS1067.1.
Schumann, R. L., III, K. D. Ash, and G. C. Bowser, 2018: Tornado warning perception and response: Integrating the roles of visual design, demographics, and hazard experience. Risk Anal., 38, 311–332, https://doi.org/10.1111/risa.12837.
Sherman-Morris, K., 2009: Tornado warning dissemination and response at a university campus. Nat. Hazards, 52, 623–638, https://doi.org/10.1007/s11069-009-9405-0.
Simmons, K. M., and D. Sutter, 2005: WSR-88D radar, tornado warnings, and tornado casualties. Wea. Forecasting, 20, 301–310, https://doi.org/10.1175/WAF857.1.
Simmons, K. M., and D. Sutter, 2008: Tornado warnings, lead times and tornado casualties: An empirical investigation. Wea. Forecasting, 23, 246–258, https://doi.org/10.1175/2007WAF2006027.1.
Simmons, K. M., and D. Sutter, 2009: False alarms, tornado warnings, and tornado casualties. Wea. Climate Soc., 1, 38–53, https://doi.org/10.1175/2009WCAS1005.1.
Sorensen, J. H., 2000: Hazard warning systems: Review of 20 years of progress. Nat. Hazards Rev., 1, 119–125, https://doi.org/10.1061/(ASCE)1527-6988(2000)1:2(119).
Stalker, S. L., T. Cullen, and K. Kloesel, 2015: Using PBL to prepare educators and emergency managers to plan for severe weather. Interdiscip. J. Probl. Based Learn., 9, 1, https://doi.org/10.7771/1541-5015.1441.
Sydnor, S., L. Niehm, Y. Lee, M. Marshall, and H. Schrank, 2017: Analysis of post-disaster damage and disruptive impacts on the operating status of small businesses after Hurricane Katrina. Nat. Hazards, 85, 1637–1663, https://doi.org/10.1007/s11069-016-2652-y.
Tierney, K. J., 2007: Businesses and disasters: Vulnerability, impacts, and recovery. Handbook of Disaster Research, H. Rodríguez, E. L. Quarantelli, and R. R. Dynes, Eds., Handbooks of Sociology and Social Research, Springer, 275–296, https://doi.org/10.1007/978-0-387-32353-4_16.
Trainor, J. E., D. Nagele, B. Phillips, and B. Scott, 2015: Tornadoes, social science, and the false alarm effect. Wea. Climate Soc., 7, 333–352, https://doi.org/10.1175/WCAS-D-14-00052.1.
Webb, G. R., K. J. Tierney, and J. M. Dahlhamer, 2000: Businesses and disasters: Empirical patterns and unanswered questions. Nat. Hazards Rev., 1, 83–90, https://doi.org/10.1061/(ASCE)1527-6988(2000)1:2(83).
Xiao, Y., and W. Peacock, 2014: Do hazard mitigation and preparedness reduce physical damage to businesses in disasters: The critical role of business disaster planning. Nat. Hazards Rev., 15, 04014007, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000137.