1. Introduction
Tornadoes are among the most frequent and deadly windstorms in the United States (Ashley 2007). Over 1000 tornadoes are reported on average annually, although only a small number of them cause injuries and deaths (NOAA 2021a). When a violent tornado occurs, buildings, with a few exceptions (e.g., storm shelters, critical facilities), do not have enough resistance to ensure the safety of their occupants (Brooks and Doswell 2002). According to posthazard surveys of the 2011 Joplin, Missouri, tornado, 87% of the fatalities occurred inside structures, both nonresidential and residential (Kuligowski et al. 2014). Timely delivery of warnings to the public at risk is crucial for seeking shelter or evacuating to the nearest safe rooms. Once a warning is delivered, the process of understanding, believing, confirming, and personalizing it takes time before an individual can properly respond (Brotzge and Donner 2013). Unlike hurricanes, winter storms, and heat waves, for which warning lead times vary from hours to days, official tornado warnings issued by the National Weather Service (NWS) are based on direct observation (e.g., radar) or reports from spotters and media (Mayhorn and McLaughlin 2014; Brotzge and Donner 2013). The average lead time for tornadoes is about 13 min (NOAA 2011). This places a higher emphasis on timeliness and effectiveness, which in turn calls for a better understanding of how warnings are communicated to the public and how certain factors influence the warning diffusion process.
Past research has investigated different warning scenarios and communication schemes with varied scopes. A communication–human information processing (C–HIP) model organized the diverse research literature into a coherent structure (Wogalter 2006). It divided the warning process into stages of source, channel, and receiver and identified important factors that could affect each stage. Official channels are designated by the government to warn other agencies and the public, whereas unofficial channels pertain to personal networks (Parker and Handmer 1998). The necessity of official warning channels is intuitive, as in the majority of cases they initiate the information diffusion process (Mayhorn and McLaughlin 2014). However, official warnings do not always reach the whole population and are subject to system failure (Parker and Handmer 1998). Unofficial warning channels add to the effectiveness of an overall communication scheme. Hui et al. (2010) developed a dynamic information diffusion model based on concepts from epidemiology and the standard threshold and cascade models. The model was then applied to an evacuation scenario to investigate the effects of different network structures, seeding strategies, network trust, and trust distribution on the diffusion process. Nagarajan et al. (2010) considered warnings disseminated through neighbors. Scenarios with various choices of which neighbors each household informs, and the effect of distance between the household and the hazard location, were studied and compared. It was found that the choice of which neighbors to inform had a considerable effect on the overall number of people who received warnings, whereas threat proximity influenced the time needed to reach the maximum warning level for the modeled area. A later work (Nagarajan et al. 2012) examined the influence of several factors of unofficial channels, such as the number of people who are willing to disseminate warnings, assimilation time (time required to make a decision), and time to inform (time required to finish the informing process within a hypothetical community). The results showed that informing in person had a considerable influence on the overall outcome.
These studies collectively shed light on the critical components of the warning diffusion process and reveal the extent to which they affect the final efficacy. Most of them focused on theoretical exploration and development in a hypothetical environment, whereas few considered the interactions between different warning channels. By simplifying a warning channel as a path through which the message flows from its source to its destination, they have not fully represented the distinctions among channels in accommodating the information processing activities of receivers (Lindell and Perry 2003).
In this study, an agent-based model is developed with NetLogo (Wilensky 1999) to simulate a coupled reception–dissemination process for tornado warnings. It is not reasonable to assume individuals would react in the same manner toward a warning (Stokoe 2016; Du et al. 2017). Age, for example, is a major factor that influences the behavior of individuals in warning dissemination and reception. Previous studies have shown that older adults could be disadvantaged in receiving warnings because of their impaired mobility, visual and/or hearing capacities, and higher likelihood of living alone (Mayhorn 2012). It was also found that older adults and younger adults differ in their access patterns to information channels (Cong et al. 2017; Hayden et al. 2007). The relationship between age groups and their probability of receiving a tornado warning from various channels is therefore investigated so that the influence of lead time, demographic composition, and tornado occurrence time can be probed and quantified. Traditional modeling techniques, with their focus on system states rather than individual behaviors, are ill equipped to describe such a dynamic, interactive communication process. Agent-based models, on the other hand, hold promise for modeling the adaptive behavior of individuals as well as the interactions among them and with the environment (Railsback and Grimm 2011).
The remainder of this paper is organized as follows: section 2 presents the model development and design concepts for tornado warning’s diffusion patterns. In section 3, a case study of the 2013 Moore, Oklahoma, tornado is used to calibrate and validate the model. In addition, its unique value in addressing certain what-if scenarios is demonstrated. The concluding remarks are provided in section 4.
2. Model development
The model development starts with a review of agent-based modeling and simulation with respect to its capabilities. The standard process, as well as theoretical frameworks proposed by others, plays a helpful role in the design of the overall model structure and constituent agents. The rules of interaction among these agents are encoded through a detailed execution flow.
a. Capabilities of ABMS and its applications in disaster research
The method of agent-based modeling and simulation (ABMS) is a tool for understanding and analyzing organized complex systems that are otherwise too difficult to model given their nonlinearity and size (Macal and North 2005; Rand and Rust 2011; Kerluke et al. 1994). Emerging from complexity and chaos theories, ABMS began gaining popularity in the 1990s as an alternative to traditional approaches such as discrete-event simulation (Heath et al. 2009). Its basic concept is to encode simple rules of behavior for individual agents and allow them to interact with each other and with the environment. The aggregated patterns are then measured and fed back to affect individual choices. ABMS allows for the exploration of agent-level theories of behavior and their cascading effects on large-scale phenomena. In comparison, rules of behavior must be written at the system level in system dynamics modeling, where heterogeneity can be difficult to examine (Sterman 2000). Analytical models, on the other hand, do not always match real-world data well because of simplistic assumptions, whereas empirical models rarely link to behavioral theory at the individual level. Although ABMS is more computationally intensive than closed-form solutions and regression models, the cost of computation has been decreasing rapidly. One common limitation is the lack of generalization beyond the instance being codified in ABMS. To overcome it, researchers often use analytical models as complements in order to extrapolate the model's results.
ABMS has been successfully applied to problems in a wide range of fields including economics, social science, biology, and engineering. Researchers have used it to model the role of influencers in diffusion of innovation and product adoption (Bohlmann et al. 2010; Garcia and Jager 2011) while large firms used it to improve revenues (North et al. 2010). Epstein (1999) argued that the agent‐based approach is well equipped to decouple individual rationality from macroscopic equilibrium and invites the interpretation of society as a distributed computational device. It is also applied to model brain cancer growth (Zhang et al. 2009), animal movement (Tang and Bennett 2010), and epidemic spread (Frias-Martinez et al. 2011).
Given its high resolution in temporal and spatial scales, ABMS has in recent years been explored in disaster research (Aguirre et al. 2011; Chen and Zhan 2008; Ha and Lykotrafitis 2012; Pan et al. 2007; Chen et al. 2006; Wagner and Agrawal 2014). Hawe et al. (2012) offered a thorough survey on the application of agent-based methods for simulating large-scale emergency response. Relative to system dynamics and discrete-event simulation, ABMS could be better suited for addressing the emergent phenomena during natural and man-made disasters while representing the agency of different actors. ABMS has been used to study crowd evacuation from fire by assigning different types of agents and behaviors (e.g., Ren et al. 2009; Tan et al. 2015). The microscopic approach adopted by Manley and Kim (2012) examined individuals with various levels and types of disabilities during evacuation. On the urban scale, models were constructed to evaluate emergency medical response to a mass casualty incident (Wang et al. 2012). Nejat and Damnjanovic (2012) developed an agent-based model accounting for homeowners' dynamic interactions with their neighbors during disaster recovery. The results pointed to the significant impact of the discount factor and the accuracy of signals on homeowners' reconstruction decisions, as well as the formation of clusters of reconstructed properties. Miles et al. (2019) identified the largest need in lifeline recovery modeling as how to simulate lifeline infrastructure as sociotechnical systems in comprehensive, meaningful ways; for housing recovery modeling, a major gap was the inability to simulate rental dynamics as well as the role of race and ethnicity. More recently, O'Shea et al. (2020) created a coupled hydrodynamic agent-based model to test the effect of flood warnings on human response.
b. Design specification for modeling tornado warning diffusion
The model design is specified in terms of scope, agents, properties, behaviors, environment, inputs and outputs, and time step. First, the scope of this study focuses on communities under tornado warning, regardless of whether a tornado actually occurs. Communication networks in these communities represent a fundamental medium through which participants create and share information with one another (Rogers 1995). Conceptually, each node on the network is thought of as being in a binary state (e.g., active/inactive, informed/uninformed, influenced/uninfluenced, and adopter/nonadopter), and active nodes can then spread the information along the edges of the underlying social network. The mechanisms by which such spreading or diffusion takes place have been studied in economics, information science, sociology, and epidemiology (e.g., Rogers 1995; Young 1998; Cosley et al. 2010; Hethcote 2000). The commonality among these modeling approaches is a probabilistic framework for describing individuals' behaviors within a population.
A warning is issued when a tornado has been sighted or indicated by weather radar, but over 70% of warnings are false alarms (Brotzge et al. 2011). False alarms reduce the credibility of a warning system and therefore would reduce warning responses (Breznitz 1984). However, the evidence from field studies has not always been consistent: some suggested a false alarm effect (e.g., Atwood and Major 1998) while others did not (e.g., Dow and Cutter 1998). A study on tornado casualties between 1998 and 2004 by Simmons and Sutter (2009) found that tornadoes killed and injured more people when they occurred in areas with a higher false alarm ratio. In addition, individuals' past experiences with tornadoes, a storm's timing, and local preparedness can all affect their perception and personalization of risk (Mileti 1999; Drabek 1999). When a tornado does occur, its narrow path means that only a small portion of the warned area (i.e., polygon) is affected. However, a tornado's destructive force and short lead time require immediate action to save lives. The agents in the model represent all adults (18 and older) who could receive a warning and act on it. Choosing individuals as the unit of analysis over households allows the model to capture finer, person-level information and interactions. People with sensory or developmental disabilities are excluded from the study but could be examined in the future.
The properties of agents include the determinants for patterns in a warning’s dissemination and reception such as demographics (e.g., age, race, gender, ethnicity, education, and marital status), socioeconomic characteristics (e.g., income, employment, and household size), and housing ownership and condition. Secondary factors related to their social network, information sourcing, and distribution are also considered.
The behavioral rules of agents are based on concepts from a modified persuasion model. The classic persuasion model, widely used in the field of communication, analyzes communication in terms of source, message, channel, receiver, and effect (Lasswell 1948). Information starts from the source and finally causes an effect in a receiver, which in turn creates feedback to the source. These terms set up a unidirectional information flow loop in which the original source is the only source of information. This assumption is inconsistent with the fact that a receiver usually interacts with multiple sources. Relative to the classic persuasion model, the modified model adds intermediaries as additional information sources so that receivers can get the information from the original source as well as from other sources. Meanwhile, the ultimate receivers can communicate with each other and can obtain information directly from the source (Lindell and Perry 2003).
The environmental factors pertain to the tornado's forecast and actual impact. The warning polygon, damage path, and intensity rating are among the variables to be examined. The intent is to reveal the linkage between warning, action (e.g., taking shelter), and outcome (e.g., injuries) toward improvement in disaster preparedness and response.
The inputs to the model are warning messages issued through official channels such as television (TV) and radio. The diffusion mechanism prescribes how initial messages are passed on to other agents through personal channels such as telephone, texting, social media, and face-to-face conversation. The model outputs describe not only the overall percentage of agents who have received the warning but also the breakdown into various categories. For example, one might be interested in knowing the reception rate in selected subpopulations (e.g., the elderly and minorities) for the purpose of highlighting elevated vulnerabilities and contributing variables.
Last, the time step dictates the duration of each simulation cycle (i.e., tick) and therefore determines the total number of reception–dissemination cycles to perform. In general, it reflects the sum of the time required for agents to receive the warning and to disseminate it to others on their social network.
c. Construction of an agent-based model
Model construction is a coding process for creating a computational version of the conceptual model. The coding can be done with a general-purpose programming language (e.g., C++, Java, and Python) or with an ABMS toolkit (Railsback et al. 2006). The execution flow for each tick of the warning diffusion model is shown in Fig. 1. At initialization, a set number of agents is created to represent the properties and connectedness of people in the real world: 1) agents labeled as younger adults and as older adults are created in proportion to their percentages in the target population, and 2) these agents are then linked together to form a network through which warning messages are exchanged.
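For concreteness, the initialization step might look like the following minimal Python sketch. It is illustrative only (the actual model is built in NetLogo), and the class name, attributes, and the roughly uniform wiring scheme are assumptions rather than the authors' implementation.

```python
import random

class Agent:
    """One adult resident; informed flips to True once a warning has been received."""
    def __init__(self, idx, age_group):
        self.idx = idx
        self.age_group = age_group   # "younger" (18-64) or "older" (65+)
        self.informed = False
        self.links = set()           # indices of agents on this agent's social network

def initialize(n_agents, frac_older, links_per_agent, seed=0):
    rng = random.Random(seed)
    # 1) Create agents in proportion to the age groups of the target population.
    agents = [Agent(i, "older" if rng.random() < frac_older else "younger")
              for i in range(n_agents)]
    # 2) Link agents into a social network (roughly uniform degree) through which
    #    warning messages are exchanged.
    for agent in agents:
        while len(agent.links) < links_per_agent:
            other = rng.choice(agents)
            if other is not agent:
                agent.links.add(other.idx)
                other.links.add(agent.idx)
    return agents
```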
A tornado warning can only be issued by the National Weather Service, which distributes it on its own platforms and through partners over a range of other channels (e.g., TV, radio, Internet, and siren), thereby triggering the dispersion process. Agents who are exposed to and receive the warning change their status from "uninformed" to "informed." This is referred to as the initial "seeding" process. When a new tick starts, one of the informed agents is selected as the sender before passing the message on to other agents (the receivers) on its social network. Decisions are then made on who the receivers are and through which communication channel the message is sent. For example, when senders use a telephone, texting, or face-to-face conversation as their communication channel, most likely only one receiving agent on the social network is selected. In contrast, potentially all agents connected to the sender will be exposed to the message once it is posted on social media.
Note that not every informed agent is willing to pass the warning on to others. Therefore, a probability function is introduced for the sender before a communication channel is chosen. At the same time, for an agent who has an incoming message, reception is never guaranteed; in many cases, people miss telephone calls, do not read text messages, or pay little attention to social media posts. Therefore, a reception probability function is defined on the receiver side. Once the warning is successfully sent and received, the link between this pair of agents is removed so that no further communication is allowed. The rationale is that agents are motivated to maximize the utility of time by contacting agents on their network with whom they have not yet communicated rather than reengaging with those they already have. This way, they can still receive the warning from other agents or official channels. The reception–dissemination process is repeated for every informed agent on the network. Once it is complete, another process takes place to afford uninformed agents the opportunity to receive the warning from official channels. This simply assumes that, within a tick, unofficial communication precedes official communication. In reality, the reverse order or the two coinciding might be possible, depending on the tick size.
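The per-tick logic described above can be sketched in the same illustrative Python (not the NetLogo implementation); the params lookups for dissemination willingness, channel choice, reception probability, and official-channel exposure are placeholders whose calibrated values appear in section 3c.

```python
def run_tick(agents, params, rng):
    """One reception-dissemination cycle: unofficial channels first, then official ones."""
    for sender in [a for a in agents if a.informed]:
        # Willingness to pass the warning on is a Bernoulli trial.
        if rng.random() >= params["p_disseminate"][sender.age_group]:
            continue
        # Choose one unofficial channel at random, weighted by the sender's age group.
        channels = params["p_channel"][sender.age_group]
        names = list(channels)
        channel = rng.choices(names, weights=[channels[c] for c in names])[0]
        # Telephone, texting, and face-to-face reach one contact; social media reaches all.
        if channel == "email_social":
            targets = list(sender.links)
        else:
            targets = [rng.choice(list(sender.links))] if sender.links else []
        for idx in targets:
            receiver = agents[idx]
            # Reception is never guaranteed on the receiver's side.
            if rng.random() < params["p_receive"][channel]:
                receiver.informed = True
                # Successful exchange: remove the link so this pair does not communicate again.
                sender.links.discard(idx)
                receiver.links.discard(sender.idx)
    # Uninformed agents then get a chance to receive the warning from official channels.
    for agent in agents:
        if not agent.informed and rng.random() < params["p_official"][agent.age_group]:
            agent.informed = True
```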
Once the official and unofficial communication processes are applied to all agents, the current tick ends and a new one starts. The simulation is terminated when any one of the following conditions is satisfied (a check of these conditions is sketched after the list):
- all agents have been informed,
- no link exists between informed and uninformed agents, or
- the last tick is reached (i.e., time runs out).
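A check of these stopping conditions, continuing the illustrative Python sketches above, could be written as follows.

```python
def should_stop(agents, tick, last_tick):
    """Return True when any termination condition of the simulation is met."""
    informed = [a for a in agents if a.informed]
    uninformed = [a for a in agents if not a.informed]
    all_informed = not uninformed
    # No remaining link bridges an informed agent and an uninformed one.
    no_bridging_link = not any(u.idx in a.links for a in informed for u in uninformed)
    out_of_time = tick >= last_tick
    return all_informed or no_bridging_link or out_of_time
```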
Then a range of statistics is calculated to provide an in-depth view of how warning messages have been communicated in the temporal dimension and through agents.
3. Case study: 2013 Moore tornado
a. Overview of the 2013 Moore tornado
On 20 May 2013, multiple storms developed during the early afternoon in central Oklahoma, and one supercell produced a single tornado that touched down near Newcastle, Oklahoma (NWS 2014). After touchdown, the violent tornado moved east-northeastward across Moore and parts of south Oklahoma City for about 40 min before finally dissipating near Lake Stanley Draper (NWS 2013). The EF5 (on the enhanced Fujita scale), 12-mi-long (19 km) tornado killed 24 people, injured over 200, and caused $2 billion in property damage (NOAA 2021b). In addition to the tornado, large hail and damaging winds caused damage in many areas. Storm interactions and the relatively narrow corridor of the most favorable conditions limited the time window and spatial extent of the tornado warning. The timeline of this event is presented below [times are all given in central standard time (CST)]:
- 1310 CST: The National Weather Service in Norman, Oklahoma, issued a tornado watch in effect until 2200 CST for 30 counties in central Oklahoma.
- 1408 CST: National Weather Service meteorologists detected a severe thunderstorm located near Bridge Creek moving northeast at 40 mi h−1 (1 mi h−1 = 1.6 km h−1).
- 1412 CST: The National Weather Service in Norman issued a severe thunderstorm warning for several counties in central and western Oklahoma until 1500 CST.
- 1438 CST: National Weather Service meteorologists detected a severe thunderstorm, capable of producing a tornado, located near Newcastle and moving east at 20 mi h−1.
- 1440 CST: The National Weather Service in Norman issued a tornado warning for several counties in central Oklahoma in effect until 1515 CST.
- 1442 CST: Tornado sirens were activated for the first time in Moore (2-min time lag because of clock synchronization).
- 1456 CST: A tornado formed and touched down at latitude 35.303, longitude −97.605.
- 1459 CST: National Weather Service meteorologists and storm spotters tracked a large and dangerous tornado near Newcastle. Doppler radar showed this tornado moving northeast at 20 mi h−1.
- 1501 CST: The National Weather Service in Norman issued a tornado warning for several counties in central and western Oklahoma in effect until 1545 CST.
- 1505 CST: Tornado sirens were activated for the second time in Moore.
- 1510 CST: Tornado sirens were activated for the third time in Moore, transmitting prerecorded messages that the tornado warning was still in effect.
- 1513 CST: Tornado sirens were activated for the fourth time in Moore.
- 1516 CST: The tornado moved into Moore.
- 1520 CST: Tornado sirens were activated for the fifth time in Moore.
- 1533 CST: The tornado dissipated at latitude 35.341, longitude −97.3999.
Among the 24 people who were killed in the storm, 15 (63%) were female, 9 (38%) were male; 1 (4%) was killed at a business, 13 (54%) were killed at home, 7 (29%) were killed at school, and 3 (13%) were killed at other buildings. In terms of age, 10 (42%) were children, 11 (46%) were adults, and 3 (13%) were older adults.
b. Poststorm telephone survey
The posttornado survey was approved by Texas Tech University’s Human Research Protection Program [Institutional Review Board (IRB) number 504002] and conducted by the Earl Survey Research Laboratory. The sample consisted of publicly listed telephone numbers drawn randomly from the area within the five zip codes (73173, 73065, 73170, 73160, 73165) on the path of the tornado (see Fig. 2). Interviews took place between 29 July and 8 September 2014. Calls were placed between 1600 and 2000 CST on Sunday–Thursday and between 1100 and 1500 CST on Saturday. Each telephone number was called up to five times, depending on the day and time, to ensure many opportunities to contact possible respondents.
One-half of the sample was treated as an oversample targeting people 65 or older (see Table 1). The average time to complete a survey was 21 min. Response rates and cooperation rates were calculated using American Association for Public Opinion Research (AAPOR) response rate 1 and cooperation rate 1 (AAPOR 2016). The population of the counties was obtained from the U.S. Census Bureau's 2012 American FactFinder. The margin of error was calculated using the raw number of completes (American Research Group 2017).
Table 1. Summary of telephone interviews conducted for the 2013 Moore tornado (CI indicates confidence interval).
Of the sample above, 80% (270 respondents) are randomly assigned to the training set for parameter estimation. The remaining 20% (68 respondents) are used as the testing set.
c. Parameter estimation
Parameters of the agent-based model are calibrated using data from multiple sources: a posttornado survey (Cong et al. 2018), the U.S. Census Bureau, the American Time Use Survey, and published literature (see Table 2 for a summary). When data are not available, parameters are assigned values based on sensitivity analysis and/or defined by the authors.
Table 2. Summary of key model parameters and their sources.
The simulation starts at 1440 CST, when the tornado warning was first issued by the NWS, and ends at 1533 CST, when the tornado dissipated, for a total of 53 min. Because the tick is set empirically to 6 min (accounting for the average time taken by a typical cycle of warning reception, assimilation, and dissemination), nine ticks are included in the simulation. Note that few literature sources exist for estimating the time required to pass a message from one person to another, averaged over multiple communication channels. Three agent population sizes are tried—100, 500, and 1000—to identify the threshold value beyond which the simulation result will not improve much.
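The tick count follows directly from this warning window and the assumed tick length, as in the short calculation below (values as reported above).

```python
import math

warning_issued     = 14 * 60 + 40   # 1440 CST, in minutes after midnight
tornado_dissipated = 15 * 60 + 33   # 1533 CST
tick_length_min    = 6              # assumed reception-assimilation-dissemination cycle

window_min = tornado_dissipated - warning_issued      # 53 min
n_ticks = math.ceil(window_min / tick_length_min)     # 9 ticks
print(window_min, n_ticks)
```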
With respect to agent properties, this study focuses on age as the key driving factor for interactions among agents. Thus, the heterogeneity exhibited in other factors such as marital status, education, and income is considered outside the scope of this work. The distribution of younger and older agents is derived from American Community Survey (ACS) data for Cleveland County, Oklahoma (where Moore is located), in 2013. The percentages of younger (18–64) and older (65+) adults are 85% and 15%, respectively.
The number of links for a given agent reflects the size of its social network. In social network research, Pool and Kochen (1978) posed two fundamental questions: 1) for individuals, the size of their network (i.e., degree) and 2) for a population, the distribution of degree. Network size has been directly estimated using the reverse small-world method (Killworth et al. 1984; Bernard et al. 1990), the summation method (McCarty et al. 2001), the diary method (Pool and Kochen 1978), and the cellular telephone record method (Onnela et al. 2007), among others. The scale-up method, which uses responses to questions of the form "How many Xs do you know?", can be very efficient but suffers from three distinct problems: barrier effects, transmission effects, and recall error (McCormick et al. 2010; Killworth et al. 2003, 2006). For "scale free" networks (Barabási 2003), the degree distribution is extremely skewed and seems to follow a power law. While the actual functional form of the degree distribution of the social network is not known, one estimate put the average personal network size in the United States at 750 (Zheng et al. 2006). Based on the assumption that these contacts are distributed evenly within the general population, one could calculate the size of the social network for residents living in Cleveland County as 0.64 (750 times the ratio of the Cleveland County population to the U.S. population in 2013). Obviously, this would be a gross underestimate, as people know many more neighbors, coworkers, and household members in proximity. In this study, the social network size is the sum of three components: family members living in the same household, family members not living together, and close acquaintances, assuming that, given a short lead time, emergency communication is limited to people with a close relationship (Marsden 1987). First, the average household size for Cleveland County in 2013 is 2.58, minus 0.25 for children under 17 (U.S. Census Bureau 2017); thus, the number of adult family members per household is about 2. The number of family members not living together (0.54) is calculated as the difference between family size (3.12) and household size (2.58). The number of acquaintances located in the affected areas (e.g., relatives, neighbors, and coworkers) is set to 0.5 because of the lack of data. As a result, the average number of links per agent is rounded to 3 and applied to all agents. Assigning a uniform link count rather than a varying one avoids creating a few "superagents" whose behaviors would likely dominate others. Furthermore, the location of agents, which affects the hearing of a siren, is defined as an environmental variable. Based on the American Time Use Survey (ATUS), 85.2% of younger adults and 86.8% of older adults are indoors during the simulation period.
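The link budget described above amounts to the back-of-the-envelope calculation below (a sketch using the reported census figures; the 0.5 acquaintance term is, as noted, an assumed value).

```python
household_size  = 2.58   # average household size, Cleveland County, 2013
children_per_hh = 0.25   # household members under 17
family_size     = 3.12   # average family size

adult_family_at_home  = round(household_size - children_per_hh)   # 2.33, taken as about 2
family_not_cohabiting = family_size - household_size              # 0.54
close_acquaintances   = 0.5                                       # assumed for lack of data

links_per_agent = round(adult_family_at_home + family_not_cohabiting + close_acquaintances)
print(links_per_agent)   # 3
```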
During the seeding process, the tornado warning is sent out by weather forecasters via channels such as local TV and radio stations and the NWS website and social media accounts. The reception of a warning by agents depends on their exposure to TV and radio programs within the window of the specific tornado warning (i.e., 1440–1533 CST) as well as computer use for checking email and Internet postings. The proportions of people in each age group watching TV, listening to the radio, or using a computer were extracted from the American Time Use Survey (Hofferth et al. 2018). The ATUS is an ongoing survey on time use in the United States sponsored by the Bureau of Labor Statistics and conducted by the U.S. Census Bureau since 2003. Each respondent provides detailed information on his or her activities during a designated 24-h period. To minimize year-to-year fluctuation, the ATUS data for 2011, 2012, and 2013 are aggregated using the ATUS Extract Builder (ATUS-X) to estimate time allocation across population subgroups and over time (IPUMS 2020). Thus, the TV reception rates for younger and older adults are 13.5% and 5.39%, respectively. For radio, they are both as low as 0.1% (rounded up to 1%). The percentage of younger adults exposed to the web, email, and social media postings is 1.3%, as compared with 0.4% of older adults.
A tornado siren, by design, is intended only to alert those who are outdoors that dangerous weather is approaching (NWS 2021). However, the survey shows that 54% of those who were indoors received the warning from the siren as compared with 37% of those who were outdoors. No detailed information is available on either the order in which warnings were received from multiple sources or the possible transition of persons between indoors and outdoors during the simulation. Thus, it is assumed that 30% of adults (younger and older) could hear a tornado siren when they are outdoors and only 10% when they are indoors. The repeated activations of the tornado siren are marked in four ticks (the 1510 and 1513 CST activations fall within one tick), and all uninformed agents are exposed to them.
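The official-channel ("seeding") probabilities described above can be collected into a single lookup, for example as in the sketch below. The structure and function names are illustrative rather than the authors' NetLogo code; the values are those reported in this subsection.

```python
import random

# Per-tick probability of receiving the warning from each official channel, by age group;
# siren reception depends on whether the agent is indoors.
P_OFFICIAL = {
    "younger": {"tv": 0.135, "radio": 0.01, "internet": 0.013,
                "siren_indoor": 0.10, "siren_outdoor": 0.30, "p_indoor": 0.852},
    "older":   {"tv": 0.0539, "radio": 0.01, "internet": 0.004,
                "siren_indoor": 0.10, "siren_outdoor": 0.30, "p_indoor": 0.868},
}

def official_reception(age_group, siren_active, rng):
    """Return True if an uninformed agent is reached by any official channel this tick."""
    p = P_OFFICIAL[age_group]
    indoors = rng.random() < p["p_indoor"]
    exposures = [p["tv"], p["radio"], p["internet"]]
    if siren_active:
        exposures.append(p["siren_indoor"] if indoors else p["siren_outdoor"])
    # Exposure to each channel is treated as independent.
    return any(rng.random() < q for q in exposures)

print(official_reception("older", siren_active=True, rng=random.Random(0)))
```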
The decision to disseminate the warning to others is modeled as independent repeated Bernoulli trials with two outcomes: P(x = yes) = p and P(x = no) = 1 − p. Its simple form facilitates direct implementation in the ABMS while serving as a credible proxy for an individual's decision making thanks to the temporal discretization. The posttornado survey reveals that 49.6% of younger adults and 39.7% of older adults were willing to pass on the warning to people they knew. When a randomly generated floating-point number within the range [0, 1) is smaller than these values, dissemination follows; otherwise, the simulation moves on to the next available informed agent. At each tick, an unofficial channel is also chosen at random. Based on the same survey, the probabilities of choosing telephone, texting, face-to-face conversation, and email/social media among younger adults are 44.3%, 20.8%, 22.6%, and 12.3%, respectively; for older adults, the probabilities are 64.8%, 18.3%, 14.1%, and 2.8%, respectively. Subsequently, each chosen channel corresponds to a single receiver or multiple receivers. For telephone, texting, and face-to-face conversation, it is assumed that only one agent on the sender's social network is randomly selected. In contrast, when the warning is disseminated through email or social media, all agents on one's network are selected.
Last, a reception condition needs to be satisfied to change a receiver's status from uninformed to informed, though related research is scarce. The probability of answering a telephone call is set to 12% based on the abandonment rate reported by the call center industry. People are more likely to read a text message, so 25% is used. For face-to-face conversation, the reception probability is assumed to be 10%, primarily accounting for the difficulty in finding people nearby (family members, coworkers, neighbors, etc.). Once again, 1.3% of younger adults and 0.4% of older adults spend time checking the web, email, and social media postings.
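These calibrated unofficial-channel parameters can likewise be grouped into lookup tables and sampled at each tick, as sketched below (illustrative Python with the values quoted above; for email/social media, reception depends on the receiver's age group).

```python
import random

# Willingness to pass on the warning (posttornado survey).
P_DISSEMINATE = {"younger": 0.496, "older": 0.397}

# Channel choice given a decision to disseminate (posttornado survey).
P_CHANNEL = {
    "younger": {"telephone": 0.443, "texting": 0.208, "face_to_face": 0.226, "email_social": 0.123},
    "older":   {"telephone": 0.648, "texting": 0.183, "face_to_face": 0.141, "email_social": 0.028},
}

# Probability that an incoming message is actually received, by channel.
P_RECEIVE = {"telephone": 0.12, "texting": 0.25, "face_to_face": 0.10,
             "email_social": {"younger": 0.013, "older": 0.004}}

def pick_channel(age_group, rng):
    names = list(P_CHANNEL[age_group])
    weights = [P_CHANNEL[age_group][c] for c in names]
    return rng.choices(names, weights=weights)[0]

rng = random.Random(1)
if rng.random() < P_DISSEMINATE["younger"]:       # Bernoulli willingness trial
    print(pick_channel("younger", rng))
```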
Since many model parameters are random variables, the outcome of each realization differs. To balance result convergence against computational demand, the simulation is run 10, 50, and 100 times, and comparisons are made not only for the overall rates of warning reception for younger and older adults but also for the rates of separate channels.
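Convergence of the stochastic output can be probed by repeating the full simulation and comparing averages across replication counts, along the lines below; run_simulation is a hypothetical stand-in for one complete model run that returns the reception rate by age group.

```python
import statistics

def mean_reception_rates(run_simulation, n_runs, seed=0):
    """Average reception rate per age group over n_runs independent realizations."""
    rates = {"younger": [], "older": []}
    for r in range(n_runs):
        result = run_simulation(seed=seed + r)    # e.g., {"younger": 0.93, "older": 0.77}
        for group, value in result.items():
            rates[group].append(value)
    return {group: statistics.mean(values) for group, values in rates.items()}

# Compare, e.g., mean_reception_rates(run_simulation, 10), ..., 50), and ..., 100).
```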
As shown in Fig. 3, the difference between the results from 10 runs and those from 100 runs is in the range from −2% to +5%. The range shrinks to −1% to +3% for 50 runs. Because the performance improvement from 50 to 100 runs is only marginal, 50 runs are selected as the default for the subsequent simulations.
Another parameter affecting the result is the number of agents included in the model. The results from 100 (baseline), 500, and 1000 agents are compared. The differences between them are all less than 3%. Therefore, use of 100 agents is deemed sufficient.
d. Results and discussion
Model validation determines how well the model corresponds to reality. For ABMS, it can be performed qualitatively (e.g., microface validation, macroface validation), quantitatively (e.g., empirical input validation and empirical output validation), or both (Macal and North 2005). Microface and macroface validation ensure that the mechanisms and properties of the agents and the aggregate patterns match reality in a meaningful way. They are often accomplished by following established theories or tested models when agent properties and rules are defined. In empirical input validation, input data are partitioned into two sets: 1) a training set for calibration and 2) a test set for validation. In addition, the relationship between model parameters and outputs can be explored through sensitivity analysis in which one parameter is varied while others are held constant. Empirical output validation determines whether a real case lies within the statistical distribution of the simulation. Optionally, cross validation is used to compare the ABMS result with the result from another model (e.g., regression or game theory) that has been validated. A close agreement would be indicative of the model's validity.
The simulation results are plotted against the testing set of survey data for the two age groups. As shown in Fig. 4, the modeled reception rate (0.77) for older adults is consistent with the observed one (0.78), indicating overall model efficacy. With respect to warning sources, the rates for TV, radio, Internet/app, and siren appear underestimated, while the rate for receiving the warning from texting is overestimated. When measured in absolute terms, older adults seem more exposed to official channels (TV, radio, and siren) than what the model suggests. For younger adults, the overall reception rate is overestimated by 20 percentage points, as shown in Fig. 5. Reasonable agreement can be found for the rates of reception for TV, radio, telephone, and texting. At the same time, the rates for Internet/app and siren are lower than the observation by ∼20 percentage points.
Another important measure in hazard communication is the number of sources from which persons receive their warning. Cong et al. (2017) analyzed communication patterns in the Joplin tornado (2011) and Tuscaloosa, Alabama, tornado (2011) and found that having more warning information sources significantly increased the odds of taking protective action. The average numbers of warning sources for younger adults are 2.91 (testing set) and 2.69 (simulation), whereas for older adults they are 2.43 (testing set) and 2.02 (simulation). The model underestimates the values for both age groups but performs better for the younger adults.
Across age groups and communication channels, the deviation between modeled and observed rates of reception is between −20 and +18 percentage points. Although one can certainly reduce it further, inherent variability and uncertainties within the model and observation make perfect agreement infeasible. Furthermore, overfitting to a limited set of data points would diminish the utility of the model when applied to other datasets. As stated earlier, the real power of ABMS lies with its capability to capture system dynamics that are often unobservable or difficult to observe. Recognizing a reasonable agreement with the observation (i.e., error less than 20 percentage points), the baseline model can then be tasked to address some interesting and important questions. Two examples are presented below.
1) If tornado sirens had not been available, how would the warning patterns be altered?
Tornado sirens (sometimes referred to as outdoor warning systems) are an expensive piece of emergency management infrastructure to install and operate. For example, Lubbock, Texas, recently approved $750,000 to purchase and install 45 outdoor sirens within the city limits (Juarez 2021). The debate often centers on cost and benefit, with consideration of the increasing adoption of mobile apps, reverse 911, and other technologies.
After removing the siren as a warning source and keeping other parameters intact, the model suggests that the overall reception rates for younger and older adults would drop from the baseline by 17 percentage points (see Fig. 6) and 6 percentage points (see Fig. 7), respectively. This indicates a greater reliance among older adults on receiving the warning from the tornado siren. For younger adults, the loss of the tornado siren would be largely offset by other official and unofficial communication channels in use. Nevertheless, what is of more concern is the reduction in the number of warning sources: for younger adults, the average declines from 2.69 (baseline simulation) to 2.18 (no-siren simulation), and for older adults it declines from 2.02 (baseline simulation) to 1.40 (no-siren simulation). As a consequence, people would be less likely to act on the warning upon reception, a common precursor to injuries and deaths. Similar analyses could be conducted for special scenarios in which sirens malfunction or telephone/Internet/TV service is interrupted. Only with agent-based models could one fully capture the inner workings of such a dynamic and networked system.
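In implementation terms, such a what-if scenario amounts to overriding the relevant parameters and rerunning the replications. A sketch of the no-siren case, continuing the illustrative lookups introduced earlier, is shown below.

```python
import copy

def no_siren_params(baseline_official):
    """Copy the baseline official-channel parameters and disable the siren."""
    scenario = copy.deepcopy(baseline_official)
    for group in ("younger", "older"):
        scenario[group]["siren_indoor"] = 0.0
        scenario[group]["siren_outdoor"] = 0.0
    return scenario

# The returned dictionary would simply replace P_OFFICIAL when rerunning the
# 50-replication experiment and recomputing reception rates and source counts.
```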
2) What is the effect of social connectedness on the warning's diffusion pattern?
One of the lesser-known subjects in emergency communication is the dissemination and sharing of information between people and its impact on saving lives and avoiding losses. Past studies conclude that connectivity, social capital, community functions, and planning are some of the most common disaster resilience indicators at the community level (Cutter 2016). One parameter in the model is the number of links one agent could have with others. By dialing it up or down, one can measure the change at either the global or local level.
Figures 8 and 9 show the comparison between the baseline simulation (with 3 links for each agent) and the new simulations (with 2 and 5 links). The reception rate for older adults improves from 77% to 80%, but for younger adults it stays the same when the number of links increases from 3 to 5 (as shown in Fig. 8). Figure 9 indicates that the contribution from telephone, texting, Internet/app, and face-to-face conversation seems marginal, possibly because TV and siren are the two dominant sources of warning in Moore. The impact of increasing connectivity would be more pronounced when people are not watching TV (e.g., at midnight or in the early morning) or a tornado siren is not available. When the number of links decreases, older adults see their reception rate drop to 72%. Once again, the effect on younger adults is minimal.
Other appropriate questions to examine with ABMS relate to the timing of the tornado (e.g., daytime vs nighttime; weekday vs weekend), lead time (e.g., short vs long), and technology adoption.
4. Concluding remarks
There are two common critiques of ABMS: 1) it does not link to real data and is only suitable for "toy problems," and 2) agent-based models can have so many parameters that they can be fit to any data. It is worth noting that such critiques apply to many, but not all, modeling techniques. In fact, ABMS provides a natural way to integrate real-world data and complexities into models and to add a layer of realism that is difficult to capture otherwise. To that end, the agent-based model developed in this study reliably mimics the reception–dissemination process of a tornado warning. The multichannel scheme allows individual residents to receive the warning from both official and unofficial sources. In addition, each of them has the capability to pass on time-sensitive, life-saving information to those on their social network. As a result, not only does the warning reach a larger fraction of the at-risk population, but the probability of getting the warning from multiple sources also increases. The latter is shown to have a positive correlation with taking protective actions.
The model is then applied to an actual EF5 tornado that struck Moore in 2013. Some of the model's parameters are first calibrated using data from government-sponsored programs and a poststorm telephone survey commissioned by the authors; others are derived from literature reviews, expert judgment, and sensitivity analysis. Once verified and validated, the model is used in two what-if analyses that examine the effects of the tornado siren and the social network. These examples illustrate the unique capabilities of agent-based modeling and simulation in unpacking complex problems and therefore justify the original motivation of this study.
Several aspects of the model could be further improved. For example, cross validation with other tornado events would help test the generalizability of the model and its parameters. However, it is conditional on the availability of individual-level behavioral data of high granularity. The authors are in the process of collecting such data on two other tornado events (the 2019 north Dallas tornado and the 2020 Nashville tornado) to which the model could be extended. Comparing multiple storms could add to the literature on how individuals are influenced by expressed or perceived risk severity in seeking additional information and taking protective actions (Neuwirth et al. 2000). It is worth noting that the National Weather Service in 2012 implemented new impact-based tornado warnings (IBWs) that use more extreme language such as "considerable" or "catastrophic" for intense tornadoes (Ripberger et al. 2015). However, a growing body of research questions the utility of extreme language in promoting effective sheltering responses (Casteel 2018).
The properties of agents can be broadened to include other factors affecting a warning's diffusion such as gender, ethnicity, education, employment, and income. However, too many variables may lead to multicollinearity, overfitting, inflated computational demand, and other model problems. Therefore, it is important to take a balanced view when defining agents. With respect to whether or not to disseminate warnings, the Bass diffusion model could better represent an agent's decision and its impact at a given time (Bass 1969). The model classifies adopters as "innovators" or "imitators" and states that the number of new adoptions at time t is a function of the potential market, the innovation coefficient, the imitation coefficient, and prior adoptions. In the real world, the more people talk about a product, the more other people adopt it. Introducing time-dependent choices would add further credence to the behavioral rules specified in the model.
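For reference, the Bass model mentioned here is commonly written in discrete form as (a standard statement of the model rather than notation taken from this study)

$$ n(t) = \left[ p + \frac{q}{m}\, N(t-1) \right] \left[ m - N(t-1) \right], $$

where $n(t)$ is the number of new adopters at time $t$, $N(t-1)$ is the cumulative number of prior adopters, $m$ is the potential market, $p$ is the innovation coefficient, and $q$ is the imitation coefficient.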
In the meantime, people constantly move, even within their own city or town, based on the time of day, day of the week, and activities engaged in. Even though aggregated patterns are derivable, they may never be reliable enough to predict the precise time–location relationship at the individual level. Use of newer technologies, on the other hand, is altering the way in which information is exchanged. After the 2011 Joplin tornado, the National Institute of Standards and Technology (NIST) recommended the deployment of a whole range of current and next-generation emergency communication "push" technologies (e.g., location-based mobile alerts and warnings, reverse 911, and outdoor siren systems with voice communication) to maximize an individual's chance of receiving emergency information and responding safely, effectively, and in a timely manner (Kuligowski et al. 2014). Opportunities emerge to fill the gap by leveraging high-resolution spatiotemporal datasets recorded by location-aware mobile telephones as model inputs. However, a wide adoption of newer information technology in disaster communication could inadvertently lead to an increase in vulnerability if key components fail because of power outages or other unmitigated interruptions. During this tornado outbreak, Kuligowski et al. (2013) reported that power outages peaked at 61 500, including 18 432 in Moore. Because there was no indication or record of when the power was cut and how many homes or businesses were affected during the warning phase, the current study did not examine the dependencies between critical lifeline systems, which is one of its limitations. Microlevel models such as ABM have the capability to learn from cascading failures observed in past events and to test future scenarios to improve system robustness.
Furthermore, the model could be improved by replacing the point estimates of model parameters with probability distribution functions to allow for systematic quantification of uncertainties and their propagation.
Acknowledgments.
This material is based upon work supported by the National Science Foundation under Grant CMMI 1663264. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Review and suggestions by Dr. Susan Jasko are greatly appreciated.
Data availability statement.
Because of privacy concerns, supporting data on human subjects can only be made available in aggregated terms. Details of the data and how to request access are available from daan.liang@ua.edu at the University of Alabama.
REFERENCES
Aguirre, B. E., S. El-Tawil, E. Best, K. B. Gill, and V. Fedorov, 2011: Contributions of social science to agent based models of building evacuation. Contemp. Soc. Sci., 6, 415–432, https://doi.org/10.1080/21582041.2011.609380.
American Association for Public Opinion Research, 2016: Standard definitions: Final dispositions of case codes and outcome rates for surveys. AAPOR, 80 pp., accessed 22 October 2021, https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf.
American Research Group, 2017: Margin of error calculator. Accessed 7 May 2021, http://americanresearchgroup.com/moe.html.
Ashley, W. S., 2007: Spatial and temporal analysis of tornado fatalities in the United States: 1880–2005. Wea. Forecasting, 22, 1214–1228, https://doi.org/10.1175/2007WAF2007004.1.
Atwood, L. E., and A. M. Major, 1998: Exploring the “cry wolf” hypothesis. Int. J. Mass Emerg. Disasters, 16, 279–302.
Barabási, A. L., 2003: Linked. Plume, 294 pp.
Bass, F. M., 1969: A new product growth for model consumer durables. Manage. Sci., 15, 215–227, https://doi.org/10.1287/mnsc.15.5.215.
Bernard, H. R., E. C. Johnsen, P. D. Killworth, C. McCarty, G. A. Shelley, and S. Robinson, 1990: Comparing four different methods for measuring personal social networks. Soc. Networks, 12, 179–215, https://doi.org/10.1016/0378-8733(90)90005-T.
Bohlmann, J. D., R. J. Calantone, and M. Zhao, 2010: The effects of market network heterogeneity on innovation diffusion: An agent-based modeling approach. J. Prod. Innovation Manage., 27, 741–760, https://doi.org/10.1111/j.1540-5885.2010.00748.x.
Breznitz, S., 1984: Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, 265 pp.
Brooks, H. E., and C. A. Doswell III, 2002: Deaths in the 3 May 1999 Oklahoma City tornado from a historical perspective. Wea. Forecasting, 17, 354–361, https://doi.org/10.1175/1520-0434(2002)017<0354:DITMOC>2.0.CO;2.
Brotzge, J., and W. Donner, 2013: The tornado warning process: A review of current research, challenges, and opportunities. Bull. Amer. Meteor. Soc., 94, 1715–1733, https://doi.org/10.1175/BAMS-D-12-00147.1.
Brotzge, J., S. Erickson, and H. Brooks, 2011: A 5-yr climatology of tornado false alarms. Wea. Forecasting, 26, 534–544, https://doi.org/10.1175/WAF-D-10-05004.1.
Casteel, M. A., 2018: An empirical assessment of impact based tornado warnings on shelter in place decisions. Int. J. Disaster Risk Reduct., 30, 25–33, https://doi.org/10.1016/j.ijdrr.2018.01.036.
Chen, X., and F. B. Zhan, 2008: Agent-based modeling and simulation of urban evacuation: Relative effectiveness of simultaneous and staged evacuation strategies. J. Oper. Res. Soc., 59, 25–33, https://doi.org/10.1057/palgrave.jors.2602321.
Chen, X., J. W. Meaker, and F. B. Zhan, 2006: Agent‐based modeling and analysis of hurricane evacuation procedures for the Florida Keys. Nat. Hazards, 38, 321–338, https://doi.org/10.1007/s11069-005-0263-0.
Cong, Z., J. Luo, D. Liang, and A. Nejat, 2017: Predictors for the number of warning information sources during tornadoes. Disaster Med. Public Health Prep., 11, 168–172, https://doi.org/10.1017/dmp.2016.97.
Cong, Z., A. Nejat, D. Liang, Y. Pei, and R. Javid, 2018: Individual relocation decisions after tornadoes: A multilevel analysis. Disasters, 42, 233–250, https://doi.org/10.1111/disa.12241.
Cosley, D., D. Huttenlocher, J. Kleinberg, X. Lan, and S. Suri, 2010: Sequential influence models in social networks. Proc. Fourth Int. AAAI Conf. on Weblogs and Social Media, Washington, DC, Association for the Advancement of Artificial Intelligence, 26–33, https://www.aaai.org/ocs/index.php/ICWSM/ICWSM10/paper/view/1530/1829.
Cutter, S. L., 2016: The landscape of disaster resilience indicators in the USA. Nat. Hazards, 80, 741–758, https://doi.org/10.1007/s11069-015-1993-2.
Dow, K., and S. L. Cutter, 1998: Crying wolf: Repeat responses to hurricane evacuation orders. Coastal Manage., 26, 237–252, https://doi.org/10.1080/08920759809362356.
Drabek, T. E., 1999: Understanding disaster warning responses. Soc. Sci. J., 36, 515–523, https://doi.org/10.1016/S0362-3319(99)00021-X.
Du, E., S. Rivera, X. Cai, L. Myers, A. Ernest, and B. Minsker, 2017: Impacts of human behavioral heterogeneity on the benefits of probabilistic flood warnings: An agent‐based modeling framework. J. Amer. Water Resour. Assoc., 53, 316–332, https://doi.org/10.1111/1752-1688.12475.
Epstein, J. M., 1999: Agent‐based computational models and generative social science. Complexity, 4, 41–60, https://doi.org/10.1002/(SICI)1099-0526(199905/06)4:5<41::AID-CPLX9>3.0.CO;2-F.
Frias-Martinez, E., G. Williamson, and V. Frias-Martinez, 2011: An agent-based model of epidemic spread using human mobility and social network information. 2011 IEEE Third Int. Conf. on Privacy, Security, Risk and Trust/Third Int. Conf. on Social Computing, Boston, MA, IEEE, 57–64, https://doi.org/10.1109/PASSAT/SocialCom.2011.142.
Garcia, R., and W. Jager, 2011: From the special issue editors: Agent‐based modeling of innovation diffusion. J. Prod. Innovation Manage., 28, 148–151, https://doi.org/10.1111/j.1540-5885.2011.00788.x.
Ha, V., and G. Lykotrafitis, 2012: Agent-based modeling of a multi-room multi-floor building emergency evacuation. Physica A, 391, 2740–2751, https://doi.org/10.1016/j.physa.2011.12.034.
Hawe, G. I., G. Coates, D. T. Wilson, and R. S. Crouch, 2012: Agent-based simulation for large-scale emergency response: A survey of usage and implementation. ACM Comput. Surv., 45, 8, https://doi.org/10.1145/2379776.2379784.
Hayden, M. H., S. Drobot, S. Radil, C. Benight, E. C. Gruntfest, and L. R. Barnes, 2007: Information sources for flash flood warnings in Denver, CO and Austin, TX. Environ. Hazards, 7, 211–219, https://doi.org/10.1016/j.envhaz.2007.07.001.
Heath, B., R. Hill, and F. Ciarallo, 2009: A survey of agent-based modeling practices. J. Artif. Soc. Soc. Simul., 12, 9.
Hethcote, H. W., 2000: The mathematics of infectious diseases. SIAM Rev., 42, 599–653, https://doi.org/10.1137/S0036144500371907.
Hofferth, S. L., S. M. Flood, and M. Sobek, 2018: American time use survey data extract builder: Version 2.7 dataset. University of Maryland and IPUMS, accessed 5 January 2021, https://doi.org/10.18128/D060.V2.7.
Hui, C., M. Goldberg, M. Magdon-Ismail, and W. A. Wallace, 2010: Simulating the diffusion of information: An agent-based modeling approach. Int. J. Agent Technol. Syst., 2, 31–46, https://doi.org/10.4018/jats.2010070103.
IPUMS, 2020: ATUS extract builder. University of Minnesota, accessed 7 May 2021, https://www.atusdata.org/atus/index.shtml.
Juarez C., 2021: After decades of debate, Lubbock will install tornado sirens within city limits. KCBD, accessed 7 May 2021, https://www.kcbd.com/2021/02/25/after-decades-debate-lubbock-will-install-tornado-sirens-within-city-limits/.
Kerluke, J. L., M. Ratke, and R. Adams, 1994: Complexification: Explaining a Paradoxical World through the Science of Surprise. HarperCollins, 320 pp.
Killworth, P. D., H. R. Bernard, and C. McCarty, 1984: Measuring patterns of acquaintanceship. Curr. Anthropol., 23, 318–397, https://doi.org/10.1086/203158.
Killworth, P. D., C. McCarty, H. R. Bernard, E. C. Johnsen, J. Domini, and G. A. Shelly, 2003: Two interpretations of reports of knowledge of subpopulation sizes. Soc. Networks, 25, 141–160, https://doi.org/10.1016/S0378-8733(02)00040-0.
Killworth, P. D., C. McCarty, E. C. Johnsen, H. R. Bernard, and G. A. Shelley, 2006: Investigating the variation of personal network size under unknown error conditions. Sociol. Methods Res., 35, 84–112, https://doi.org/10.1177/0049124106289160.
Kuligowski, E. D., L. Phan, M. Levitan, and D. Jorgensen, 2013: Preliminary reconnaissance of the May 20, 2013, Newcastle-Moore tornado in Oklahoma. NIST Special Publ. 1164, 59 pp., accessed 2 May 2021, https://doi.org/10.6028/NIST.SP.1164.
Kuligowski, E. D., F. T. Lombardo, L. T. Phan, M. L. Levitan, and D. P. Jorgensen, 2014: Technical investigation of the May 22, 2011, tornado in Joplin, Missouri. NIST NCSTAR 3, 428 pp., https://doi.org/10.6028/NIST.NCSTAR.3.
Lasswell, H. D., 1948: The structure and function of communication in society. The Communication of Ideas, Harper and Row, 37–51.
Lindell, M. K., and R. W. Perry, 2003: Communicating Environmental Risk in Multiethnic Communities. Communicating Effectively in Multicultural Contexts, Vol. 7, Sage Publications, 272 pp.
Macal, C. M., and M. J. North, 2005: Tutorial on agent-based modeling and simulation. Proc. Winter Simulation Conf., Orlando, FL, IEEE, https://doi.org/10.1109/WSC.2005.1574234.
Manley, M., and Y. S. Kim, 2012: Modeling emergency evacuation of individuals with disabilities (exitus): An agent-based public decision support system. Expert Syst. Appl., 39, 8300–8311, https://doi.org/10.1016/j.eswa.2012.01.169.
Marsden, P. V., 1987: Core discussion networks of Americans. Amer. Sociol. Rev., 52, 122–131, https://doi.org/10.2307/2095397.
Mayhorn, C. B., 2012: Warning the elderly: Understanding and overcoming barriers to risk communication. SUPDET 2012, National Fire Protection Association Conf., Phoenix, AZ, NFPA.
Mayhorn, C. B., and A. C. McLaughlin, 2014: Warning the world of extreme events: A global perspective on risk communication for natural and technological disaster. Saf. Sci., 61, 43–50, https://doi.org/10.1016/j.ssci.2012.04.014.
McCarty, C., P. D. Killworth, H. R. Bernard, E. Johnsen, and G. A. Shelley, 2001: Comparing two methods for estimating network size. Hum. Organ., 60, 28–39, https://doi.org/10.17730/humo.60.1.efx5t9gjtgmga73y.
McCormick, T. H., M. J. Salganik, and T. Zheng, 2010: How many people do you know? Efficiently estimating personal network size. J. Amer. Stat. Assoc., 105, 59–70, https://doi.org/10.1198/jasa.2009.ap08518.
Miles, S. B., H. V. Burton, and H. Kang, 2019: Community of practice for modeling disaster recovery. Nat. Hazards Rev., 20, 04018023, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000313.
Mileti, D., 1999: Disasters by Design: A Reassessment of Natural Hazards in the United States. Joseph Henry Press, 371 pp.
Nagarajan, M., D. Shaw, and P. Albores, 2010: Informal dissemination scenarios and the effectiveness of evacuation warning dissemination of households—A simulation study. Procedia Eng., 3, 139–152, https://doi.org/10.1016/j.proeng.2010.07.014.
Nagarajan, M., D. Shaw, and P. Albores, 2012: Disseminating a warning message to evacuate: A simulation study of the behaviors of neighbors. Eur. J. Oper. Res., 220, 810–819, https://doi.org/10.1016/j.ejor.2012.02.026.
Nejat, A., and I. Damnjanovic, 2012: Agent-based modeling of behavioral housing recovery following disasters. Comput.-Aided Civ. Infrastruct. Eng., 27, 748–763, https://doi.org/10.1111/j.1467-8667.2012.00787.x.
Neuwirth, K., S. Dunwoody, and R. J. Griffin, 2000: Protection motivation and risk communication. Risk Anal., 20, 721–734, https://doi.org/10.1111/0272-4332.205065.
NOAA, 2011: Tornadoes 101—An essential guide to tornadoes: Stay alert to stay alive. Accessed 10 December 2018, https://www.noaa.gov/stories/tornadoes-101.
NOAA, 2021a: U.S. tornado climatology. Accessed 25 May 2021, http://www.ncdc.noaa.gov/climate-information/extreme-events/us-tornado-climatology.
NOAA, 2021b: Storm Events Database. NCEI, accessed 7 May 2021, https://www.ncdc.noaa.gov/stormevents/.
North, M. J., and Coauthors, 2010: Multiscale agent‐based consumer market modeling. Complexity, 15, 37–47, https://doi.org/10.1002/cplx.20304.
NWS, 2013: The tornado outbreak of May 20, 2013. Norman, OK Weather Forecast Office, accessed 7 May 2021, https://www.weather.gov/oun/events-20130520.
NWS, 2014: May 2013 Oklahoma tornadoes and flash flooding. NOAA service assessment, accessed 7 May 2021, 63 pp., https://www.weather.gov/media/publications/assessments/13oklahoma_tornadoes.pdf.
NWS, 2021: Outdoor warning sirens: Frequently asked questions. Quad Cities, IA/IL Weather Forecast Office, accessed 24 May 2021, https://www.weather.gov/dvn/sirenFAQ.
Onnela, J., J. Saramäki, J. Hyvönen, G. Szabó, D. Lazer, K. Kaski, J. Kertész, and A. Barabási, 2007: Structure and tie strengths in mobile communication networks. Proc. Natl. Acad. Sci. USA, 104, 7332–7336, https://doi.org/10.1073/pnas.0610245104.
O’Shea, T., P. Bates, and J. Neal, 2020: Testing the impact of direct and indirect flood warnings on population behaviour using an agent-based model. Nat. Hazards Earth Syst. Sci., 20, 2281–2305, https://doi.org/10.5194/nhess-20-2281-2020.
Pan, X. S., C. S. Han, K. Dauber, and K. H. Law, 2007: A multi‐agent based framework for the simulation of human and social behaviors during emergency evacuations. AI Soc., 22, 113–132, https://doi.org/10.1007/s00146-007-0126-1.
Parker, D. J., and J. W. Handmer, 1998: The role of unofficial flood warning systems. J. Contingencies Crisis Manage., 6, 45–60, https://doi.org/10.1111/1468-5973.00067.
Pool, I. S., and M. Kochen, 1978: Contacts and influence. Soc. Networks, 1, 5–51, https://doi.org/10.1016/0378-8733(78)90011-4.
Railsback, S. F., and V. Grimm, 2011: Agent-Based and Individual-Based Modeling: A Practical Introduction. Princeton University Press, 352 pp.
Railsback, S. F., S. L. Lytinen, and S. K. Jackson, 2006: Agent-based simulation platforms: Review and development recommendations. Simulation, 82, 609–623, https://doi.org/10.1177/0037549706073695.
Rand, W., and R. T. Rust, 2011: Agent-based modeling in marketing: Guidelines for rigor. Int. J. Res. Mark., 28, 181–193, https://doi.org/10.1016/j.ijresmar.2011.04.002.
Ren, C., C. Yang, and S. Jin, 2009: Agent-based modeling and simulation on emergency evacuation. Complex Sciences, J. Zhou, Ed., Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Vol. 5, Springer, 1451–1461.
Ripberger, J. T., C. L. Silva, H. C. Jenkins-Smith, and M. James, 2015: The influence of consequence-based messages on public responses to tornado warnings. Bull. Amer. Meteor. Soc., 96, 577–590, https://doi.org/10.1175/BAMS-D-13-00213.1.
Rogers, E. M., 1995: Diffusion of Innovations. 4th ed. Free Press, 518 pp.
Simmons, K. M., and D. Sutter, 2009: False alarms, tornado warnings, and tornado casualties. Wea. Climate Soc., 1, 38–53, https://doi.org/10.1175/2009WCAS1005.1.
Sterman, J. D., 2000: Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw-Hill, 982 pp.
Stokoe, R. M., 2016: Putting people at the centre of tornado warnings: How perception analysis can cut fatalities. Int. J. Disaster Risk Reduct., 17, 137–153, https://doi.org/10.1016/j.ijdrr.2016.04.004.
Tan, L., M. Hu, and H. Lin, 2015: Agent-based simulation of building evacuation: Combining human behavior with predictable spatial accessibility in a fire emergency. Inf. Sci., 295, 53–66, https://doi.org/10.1016/j.ins.2014.09.029.
Tang, W., and D. A. Bennett, 2010: Agent‐based modeling of animal movement: A review. Geogr. Compass, 4, 682–700, https://doi.org/10.1111/j.1749-8198.2010.00337.x.
U.S. Census Bureau, 2017: Average number of people per household in the United States from 1960 to 2017. Statista, accessed 25 December 2018, https://www.statista.com/statistics/183648/average-size-of-households-in-the-us/.
Wagner, N., and V. Agrawal, 2014: An agent-based simulation system for concert venue crowd evacuation modeling in the presence of a fire disaster. Expert Syst. Appl., 41, 2807–2815, https://doi.org/10.1016/j.eswa.2013.10.013.
Wang, Y., K. L. Luangkesorn, and L. Shuman, 2012: Modeling emergency medical response to a mass casualty incident using agent based simulation. Socio-Econ. Plann. Sci., 46, 281–290, https://doi.org/10.1016/j.seps.2012.07.002.
Wilensky, U., 1999: NetLogo. Northwestern University Center for Connected Learning and Computer-Based Modeling, http://ccl.northwestern.edu/netlogo/.
Wogalter, M. S., Ed., 2006: Handbook of Warnings. CRC Press, 864 pp.
Young, H. P., 1998: Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press, 33 pp.
Zhang, L., Z. Wang, J. A. Sagotsky, and T. S. Deisboeck, 2009: Multiscale agent-based cancer modeling. J. Math. Biol., 58, 545–559, https://doi.org/10.1007/s00285-008-0211-1.
Zheng, T., M. J. Salganik, and A. Gelman, 2006: How many people do you know in prison? Using overdispersion in count data to estimate social structure in networks. J. Amer. Stat. Assoc., 101, 409–423, https://doi.org/10.1198/016214505000001168.