Determining Patterns of Routine Weather Information Usage and Their Demographic Determinants

Wesley Wehde, Texas Tech University, Lubbock, Texas (https://orcid.org/0000-0002-1616-5673)
and
Matthew Nowlin, College of Charleston, Charleston, South Carolina

Abstract

Social science studies of weather and natural hazards have examined in depth the sources of information individuals use in response to a disaster. This research has primarily focused on information sources in isolation and as they relate to severe weather. Thus, less research has examined how individuals use information acquisition strategies during routine times. This paper addresses this limitation by examining patterns of routine weather information source usage. Using three unique survey datasets and latent class analysis, we find that weather information source usage can be summarized by a limited number of coherent classes. Importantly, our results suggest that weather information types, or classes, are generally consistent across datasets and samples. We also find demographic determinants, particularly age, help to explain class membership; older respondents were more likely to belong to classes that are less reliant on technology-based information sources. Income and education also were related to more complex or comprehensive information use strategies. Results suggest that the prevalent view of single-source information usage in previous research may not be adequate for understanding how individuals access information, in both routine and extreme contexts.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Wesley Wehde, wwehde@ttu.edu


1. Introduction

Forecasts, from a wide variety of outlets, are a highly valued information resource in American society. Lazo et al. (2009) estimate that weather forecasts are worth over $31 billion per year to U.S. households, based on over 300 billion uses of forecasts per year, which amounts to approximately 115 uses per month by the average U.S. adult. Despite the high value of forecasts and weather information in general in the United States, research has not fully examined the different sources individuals use to acquire them, that is, who uses which sources (Wehde et al. 2021). Many different sources produce forecast products from which Americans can choose. These sources vary widely, ranging from the government itself to private companies such as the Weather Channel or AccuWeather or even private individuals such as family and friends.

People make choices daily about checking these many sources; however, we know little about who uses which sources or how much they rely on these sources. Interestingly, an initial examination of weather information sources by Demuth et al. (2011) finds little evidence of distinct patterns. Because these information sources vary widely across a variety of characteristics, understanding patterns among their users has distinct implications for the creation of a weather-aware public. Despite this, most other research in this domain focuses on sources individually (Comstock and Mallonee 2005; Sherman-Morris 2010; Doksæter Sivle and Kolstø 2016; Eachus and Keim 2019). However, members of the public likely use information sources in tandem, in predictable weather information-seeking patterns, as Demuth et al.’s (2011) model suggests. Additionally, models of protective action often include predecisional processes, including information search, that affect how members of the public respond to severe weather and take protective action (Lindell and Perry 2012). Despite the importance of these predecisional processes, research has focused on information search as it relates more immediately to severe weather, and we know little about what kinds of routine weather information source usage patterns exist. As a result of this lack of existing research, we also know relatively little about the individual determinants of those patterns.

In the following section, we review the relevant literature on information use behaviors in weather and natural hazards contexts. We then introduce the three datasets and latent class analysis techniques used to examine them. Next, we uncover a series of recurring, distinct patterns of weather information source usage. A relatively similar proportion of individuals falls into each category, suggesting no single information use pattern accounts for the majority of the sample. We also find evidence of contextual effects, especially in the Oklahoma sample. Last, we find that sociodemographic characteristics, particularly age and race, help explain patterns of routine weather information-seeking behaviors in logically predictable ways. We end with the implications of our findings for future research on weather information in both routine and extreme contexts.

a. Routine weather information use

In an influential article, Lazo et al. (2009) describe the process of valuing weather forecasts as four steps from sources to perceptions to uses to values, focusing primarily on value. The authors define sources as where, when, and how the public acquires information about weather forecasts. Perceptions are the understanding of those forecasts. Use then relates to what purposes forecasts play as an input to decisions or actions. Values are the dollar amounts the public ties to available forecast information. In a later paper, Demuth et al. (2011) focus in more depth on the first three steps of this model, using a nationwide sample of over 1400 respondents in the United States. Demuth et al. (2011) measure forecast use frequency from 10 sources: local television (TV), cable TV, newspapers, dial-in weather services, radio, NOAA weather radios, National Weather Service (NWS) webpages, other webpages, cellular telephones (hereinafter cell phones), and friends and family. However, except for a few exceptions (Lazo et al. 2009; Morss et al. 2010; Demuth et al. 2011; Doksæter Sivle and Kolstø 2016; Eachus and Keim 2019), research from meteorology, as well as the social sciences, broadly ignores the daily, routine gathering of weather information.

Although research on routine weather information and forecast use is relatively limited, existing models of communication may help to examine this context. The first model we use is the four-stage model of Lazo et al. (2009), focusing specifically on the sources stage. However, we also rely on Lindell and Perry’s (2012) model of protective action decision-making (PADM) and its incorporation of predecisional processes that occur before threat perception and decision-making. One aspect of these predecisional processes is channel access and preferences as they relate to information sources. Predecisional information-gathering processes are important because they may constrain or help determine how individuals seek out or receive information when a threat is imminent.

Research on this predecisional stage is mixed, with earlier studies finding that television is the most popular source of weather information (Demuth et al. 2011), while more recent research finds that Facebook and cell phone applications (i.e., “apps”) are the public’s most preferred sources (Eachus and Keim 2019). Eachus and Keim’s (2019) findings suggest that the relatively recent expansion of communication technologies, such as social media, has made the information environment for warning/severe weather information and routine/daily forecasts more complex (Reuter and Kaufhold 2018; Wehde et al. 2019). These changes, particularly the rise of social media, have resulted in what some have conceptualized as an interconnected dynamic system as opposed to a linear process (Morss et al. 2017). Information is no longer communicated primarily in an authority-to-citizen manner but also can be communicated in a citizen-to-citizen or resident-to-resident manner; see a series of papers by Robinson et al. (2019, 2022) for an investigation of this phenomenon related to tornadoes. These studies also examine the important relationships between demographics and information reception (Robinson et al. 2019, 2022).

b. Individual determinants of weather information use

Understanding who receives weather information is important as it may help forecasters, emergency managers, and other stakeholders target underserved populations. For example, Robinson and colleagues find that older respondents are less likely to receive warnings through electronic media such as text messages or Facebook (Robinson et al. 2019; Wehde et al. 2019). These findings emphasize the importance of what Mayhorn (2005) calls cognitive aging where cognitive capabilities for information search and technology use decline with age. Robinson et al. (2019) also find that gender matters, with men being less likely to receive warning information than women unless exposed to the highest levels of tornado risk. Other research finds that racial, but not ethnic (Hispanic), minority respondents were more likely to report receiving tornado warning information than their white counterparts (Ripberger et al. 2019; Robinson et al. 2022). These studies suggest that predecisional contexts depend on individual characteristics such as gender, age, and others.

While research on routine weather information reception is less frequent, Demuth et al. (2011) find that older respondents, and those who have lived in the same area longer, report receiving forecast information more frequently. Importantly, that study does not distinguish across types of sources, and the authors suggest this somewhat surprising age finding may reflect that generation’s reliance on television as a source. Taken together with research on tornadoes and other severe weather, existing research suggests demographic factors, especially gender, race, and age, may be related to weather information use and reception.

Interestingly, the other research reviewed on routine weather information use does not examine demographic differences (Doksæter Sivle and Kolstø 2016; Eachus and Keim 2019). These studies are also limited by their focus on individual information sources, often in isolation or without specification. For example, Eachus and Keim (2019) focus on a ranking method that isolates sources from each other. On the other hand, Demuth et al. (2011) simply look at the frequency of forecast information reception without regard to source specifics. However, some research does attempt to look at patterns of weather information sources, which we review next.

c. Patterns of information use and use of more than one source

Understanding patterns of weather information use, as a predecisional process, is important because previous work has found general but limited support for a positive relationship between multiple warning sources and response to the warning (Sorensen 2000). This finding has been echoed in more recent research by Paul et al. (2015), which finds that the use of more than one source is positively associated with compliance with tornado warnings. Even more recently, Miran et al. (2018) find that the number of information sources, not merely the use of more than one, is positively associated with protective action.

It is unsurprising that these studies only examine information sources in the event of extreme or severe weather; however, understanding patterns of information usage in routine weather may set the stage for which sources are available or familiar to individuals in the event of severe weather. Just as important, these studies do not examine which combinations of sources are used. Even those that emphasize the importance of more than one source (Paul et al. 2015; Miran et al. 2018) ignore exactly which sources, be they websites, family and friends, or television, deliver the warning or information. Although individuals are allowed to report receiving information from multiple sources, scholars have primarily focused on either the rank ordering of these sources (see Sherman-Morris 2010; Comstock and Mallonee 2005; Eachus and Keim 2019, among others) or simply the number of sources used.

However, some research on warnings and severe weather contexts examines the specifics of information sources. This research often draws on a process called “milling” in which respondents confirm information from one source with a subsequent source (Mileti and Darlington 1997; Hammer and Schmidlin 2002). These studies focus especially on the order of information use and the speed with which information travels. For example, Hammer and Schmidlin (2002) find that telephone calls were often used in the confirmation as opposed to the initial search stage. Another study by Rogers and Sorensen (1991) finds that permanent sirens combined with telephones or tone-alert radios reach the population most rapidly. The media reaches the population at the slowest rate, followed by sirens in isolation. These studies also suggest that sources differ in their ability to provoke action quickly enough. Comstock and Mallonee (2005) suggest that when individuals receive storm information from multiple sources, certain ones, such as information from weather changes, are less likely to provoke protective action. Sorensen’s (2000) review finds electronic and media sources have mixed effects on responses while sirens decrease responses to warnings. These sources also have differential effectiveness at reaching the population. The differences in both reception and response are multifaceted in that they consider the speed of information traveling, the effectiveness of communication, and the interaction of a variety of potential information sources (Mileti and Sorensen 1990; Wu et al. 2020). Given the consistency of the use of multiple sources and their varying effects, it is likely that patterns of use of weather information sources and their effects in nonsevere day-to-day settings are similarly complex.

Research on patterns of information not focused on sequential use requires different analytical techniques than those previously used. As such, recently scholars have begun to use factor analysis and structural equation modeling to examine the interitem correlations of information used in crisis or disaster settings (Arlikatti et al. 2019; Hua et al. 2020). Regarding information use patterns, the use of factor analysis by Demuth et al. (2011) failed to find an adequate solution, despite most individuals in their data reporting using many sources. More recently, Hua et al. (2020) successfully used factor analysis to produce three information scales: TV, other mass media, and peers. Another method of categorizing indicators is used by Li et al. (2022) who categorize protective action decisions into basic and complex behaviors (following Huntsman et al. 2021) based on researcher expertise. However, these methods of analysis consider only the correlation between the individual indicators but do not consider the patterns of use of all indicators at once, by individuals, or rely exclusively on information external to the data.

Thus, we propose that research on information use related to weather, severe or routine, can advance beyond these relatively simple, though useful, analytical techniques. Specifically, techniques that consider information sources as a package, as opposed to individual indicators, and that rely on both statistical and researcher information may be useful. Our solution is latent class analysis (LCA), which Walters et al. (2019) used to examine different types of tornado warning response patterns. They suggest that these patterns allow for targeting strategies from the NWS for tornado warnings and safety. We follow this study methodologically, applying the technique to information use as opposed to warning response. Similarly, the implications of our study may also help forecasters and others better target their public based on our findings. In the next section, we describe in detail our data sources and the application of LCA to our data.

2. Data and methods

To examine the use of sources for routine information about the weather, we draw on three different surveys of public opinion including a national sample of U.S. residents (hereinafter the National sample), a sample of residents living in coastal counties in the Southeast and Gulf Coasts in the United States, and a sample of residents from the U.S. state of Oklahoma. The National and Southeast/Gulf Coast samples were purchased through Qualtrics with demographic quotas based on the U.S. Census. The Oklahoma data were recruited using a random address-based sample. Initial contact was made through telephone calls and respondents could choose to respond to the surveys over the telephone or online.1 Using three unique data sources allows us to compare and contrast how individuals in differing regions of the United States, which face various types of natural hazards, gather information about the weather.

a. National data

The National data are taken from the 2019, 2020, and 2021 Severe Weather and Society Survey (WX), which is an annual survey administered through the University of Oklahoma to a nationally representative sample of the U.S. population (Ripberger et al. 2020a; Ripberger et al. 2020b, 2022). The National survey asks a series of questions regarding weather risk perceptions and behaviors as well as basic demographic information. For analysis, we combine the 2019 (n = 3006), 2020 (n = 3000), and 2021 (n = 1550) data.

b. Southeast and Gulf Coast data

The Southeast and Gulf Coast data (SEGC) are taken from a survey fielded in October 2018 of a representative sample of U.S. residents living in coastal counties in the Southeast and Gulf Coast regions in the United States including the states of North Carolina, South Carolina, Georgia, Florida, Alabama, Mississippi, Louisiana, and Texas. Respondents (n = 1520) were recruited through Qualtrics and were asked questions regarding risk perceptions and response behaviors associated with hurricanes.

c. Oklahoma data

The Oklahoma data are from the Mesoscale Integrated Sociogeographic Network (M-SISNet), a longitudinal (panel) survey, using an address-based sampling frame, that continuously measures public perceptions of climatic conditions and beliefs in Oklahoma (Jenkins-Smith et al. 2017). M-SISNet surveys are administered at the end of each season (winter, spring, summer, and autumn), and we combined data from the winter of 2014 to the winter of 2018 for a total of n = 41 592 responses.

d. Survey questions

All three surveys asked respondents about the sources of weather information on which they rely. However, the National and Oklahoma surveys asked respondents about the frequency for each source on a 1–6 scale with 1 = never, 2 = less than once per week, 3 = about once per week, 4 = several times per week, 5 = about once per day, and 6 = several times per day. The Southeast/Gulf Coast survey asked respondents to “check all that apply.” Each survey asked about the same sources and worded the sources in the same way. Specific question wording was as follows: The National and Oklahoma surveys asked, “How frequently do you get information about the WEATHER from each of the following sources?” The Southeast/Gulf Coast survey asked, “Which of the following sources do you use to get information about the WEATHER? Please check all that apply.” The sources were

  • Newspapers

  • Nongovernment internet websites (such as weather.com)

  • Government sponsored internet websites (such as noaa.gov)

  • Local TV (television) news

  • Cable TV (television) news (such as the Weather Channel)

  • Radio

  • Family, friends, or colleagues

  • Social media, such as Facebook and Twitter

  • Cell phone applications or automated text messages

  • Other (please specify).

To examine patterns in the use of information sources we used those questions to construct a measurement model using LCA.
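Note that the two response formats land on different scales: averaging the frequency items yields a mean on the 1–6 scale, whereas averaging the 0/1 check-all items yields the share of respondents selecting a source. A trivial sketch with made-up toy responses (the variable names and values are hypothetical, not the survey data):

```python
# Hypothetical toy responses; the real samples are far larger.
freq_labels = {1: "never", 2: "less than once per week", 3: "about once per week",
               4: "several times per week", 5: "about once per day",
               6: "several times per day"}

def mean_use(responses):
    """Mean frequency on the 1-6 scale, or the selection share for 0/1 data."""
    return sum(responses) / len(responses)

national_local_tv = [6, 4, 3, 5, 2]        # 1-6 frequency responses
segc_local_tv = [1, 1, 0, 1, 0, 1, 1, 1]   # check-all indicator responses

print(mean_use(national_local_tv))  # a 1-6 mean
print(mean_use(segc_local_tv))      # a proportion of the sample
```

This is why the Southeast/Gulf Coast averages reported below read as proportions rather than frequencies.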

e. LCA

To further examine information use we use LCA to detect patterns in the uses of sources of weather information. LCA is used in this study for several purposes. First, LCA is robust regarding polytomous input data. Indicator or manifest variables do not have to be binary or continuous but can take on multiple categories. Second, as compared with other measurement models such as factor or principal components analyses, LCA takes advantage of correlations between patterns of responses as opposed to correlations between the responses themselves. This modeling technique stratifies the manifest, or observed, variables by identifying a latent, or unobserved, categorical variable to eliminate confounding between the manifest variables. This relies on the conditional or local independence assumption, which states that all manifest variables are statistically independent when conditioned on the latent variable (Linzer and Lewis 2011). This latent variable is represented by a probabilistic outcome for each individual for each class (for each survey or survey wave, in this case). Individuals can then be categorized by the class into which they are more likely to fall. Thus, individuals with similar sets of responses will cluster into the same latent class. Because we apply LCA to two longitudinal datasets, we also assume homogeneity. This assumption states that the relationship between the manifest variables and the latent variable does not change over time (Ciampi et al. 2011). Also, LCA itself does not identify the number of relevant classes or categories. Rather, because the method relies on the distributional assumptions of the manifest variables, LCA produces a series of fit statistics to determine the appropriate model selection and fit. These statistics guide model selection based on parsimony, fit, and the goal of the analysis. 
The Bayesian information criterion (BIC) is the most commonly used measure of parsimonious model fit; however, improvements in log-likelihood ratios have also been used to guide model choice in the political science literature (Oser et al. 2013). In the next section, we discuss our findings regarding the LCA of information usage across each survey as well as results from logit analysis that examines the relationships between demographics and the likelihood of belonging to a particular class.
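The surveys do not come with replication code, so the following is a minimal, self-contained sketch of the LCA machinery described above: an EM loop for binary use/non-use indicators under the local independence assumption, fit to synthetic data. All names and data here are hypothetical; applied work typically uses dedicated packages (e.g., poLCA in R).

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model to binary indicators X (n x p) via EM.

    Returns class prevalences pi (K,), item-response probabilities
    theta (K, p), and posterior class memberships post (n, K).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, p))
    for _ in range(n_iter):
        # E-step: P(class | responses), assuming local independence
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item-response probabilities
        pi = post.mean(axis=0)
        theta = ((post.T @ X) / post.sum(axis=0)[:, None]).clip(1e-6, 1 - 1e-6)
    return pi, theta, post

# Synthetic data: two latent "styles" of use across four binary sources
rng = np.random.default_rng(1)
true_theta = np.array([[0.9, 0.8, 0.2, 0.1],   # traditional-source users
                       [0.1, 0.2, 0.8, 0.9]])  # technology-source users
z = rng.integers(0, 2, size=1000)
X = (rng.uniform(size=(1000, 4)) < true_theta[z]).astype(float)

pi, theta, post = fit_lca(X, n_classes=2)
```

Individuals are then assigned to the class with the highest posterior probability (post.argmax(axis=1)), which is how respondents are categorized before any regression on class membership.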

3. Findings

a. Descriptive results for information sources

We first look at the average use of the various information sources across the three surveys. As noted, both the National and Oklahoma surveys asked respondents about their frequency of use for each source, whereas the Southeast/Gulf Coast survey listed each information source and asked respondents to “check all that apply.” Figure 1 presents the average use for each source across all three surveys.

Fig. 1. Average use of weather information sources across surveys.

Citation: Weather, Climate, and Society 15, 3; 10.1175/WCAS-D-22-0106.1

As can be seen, local television is the most used source across all three samples. For the National data, the average use of local television is just under 4 (3.978), or several times per week, and for the Oklahoma data, the mean is 4.75, or between several times per week and about once per day. For the Southeast/Gulf Coast data, the mean is 0.753, which means that about 75% of the sample indicated that they use local television to get information about the weather. Beyond local television, there is variation in information use across the three samples.

For the National data, respondents indicated that they use applications on their cell phones at a mean of 3.533, or between about once per week and several times per week. Next, respondents, on average, use the web, cable television, radio, and people (i.e., family, friends, or colleagues) around once per week with averages of 3.169, 3.156, 2.984, and 2.928, respectively. Respondents from the National surveys were less likely to rely on social media, newspapers, and government websites for weather information than the other sources.

For the Southeast/Gulf Coast sample, over half the respondents noted that they use cable television, at 60.9%, and cell phone applications at 57.7%. Each of the remaining sources was indicated by less than one-half of the respondents, with newspapers being the least used.

The Oklahoma data respondents indicated that, on average, they rely on other people, radio, and the web at least once per week. This was followed closely by cell phone applications and cable television. Respondents from Oklahoma were likely to rely on government websites, newspapers, and social media less often than the other sources of weather information.

The different use of sources may be, in part, because of the different time frames in which the data were collected. As noted, the Oklahoma data were collected between 2014 and 2018, the Southeast/Gulf Coast data were collected in 2018, and the National data were collected between 2019 and 2021. The fact that cell phone applications and the internet were used more frequently in the National data than the others may be because the National data were collected most recently when the use of apps in daily life has become more ubiquitous.

Another possible reason for the different use of sources is the different types of natural hazards that participants from each survey face. Previous research has found correlations between risk perceptions of the public and their likelihood of exposure to various types of severe weather. In particular, exposure to tornadoes, hurricanes, and droughts is highly correlated with risk perceptions regarding those hazards among the public living in areas susceptible to them (Allan et al. 2020). It follows that the public would rely on various sources based on their related exposure and risk perceptions. For example, Southeast/Gulf Coast respondents are exposed to hurricanes, which are often covered by national news programs on cable television as well as other cable channels such as the Weather Channel, which may explain the increased use of cable television relative to the National and Oklahoma data. The Oklahoma respondents are exposed to tornadoes, which are fast-developing and localized hazards. Therefore, respondents may rely on friends, family, neighbors, and other close contacts to alert them when a tornado may be approaching their area.

In the next section, we use LCA to determine patterns of weather information use across the three samples. In addition, we use logistic regression to examine the relationship between patterns in information use and demographics including age, gender, race/ethnicity, income, and education.

b. LCA and logistic regression results

As noted, LCA requires the analyst to choose the number of latent classes for the model. Then, the analyst examines several goodness-of-fit measures to choose the appropriate model. One way of assessing model fit is comparing LCA estimations beginning with a one-class model and increasing the number of classes by one each time. In this manner, BIC is the most commonly used statistic for identifying appropriate solutions, and a smaller BIC indicates a better model fit.2 However, another approach that complements the use of a BIC statistic is to assess the percent reduction of the likelihood ratio chi-squared statistic (G squared) in comparison with the one-cluster model (Magidson and Vermunt 2004). In the following analysis, we compare fit statistics ranging from 1 to 7 classes for each of the three datasets.
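A sketch of the selection criteria just described. The parameter-count helper assumes nine indicators on the six-point frequency scale; in a real application the log-likelihood and G-squared values would come from the fitted models, so everything here is illustrative.

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: -2*logL + k*ln(n); smaller is better."""
    return -2.0 * log_lik + n_params * math.log(n_obs)

def n_params_lca(n_classes, n_items=9, n_cats=6):
    """Free parameters in an LCA model: (K - 1) mixing weights plus
    K * items * (categories - 1) item-response probabilities."""
    return (n_classes - 1) + n_classes * n_items * (n_cats - 1)

def pct_g2_reduction(g2_k, g2_one_class):
    """Percent reduction in the likelihood-ratio chi-square (G squared)
    relative to the one-class model (Magidson and Vermunt 2004)."""
    return 100.0 * (1.0 - g2_k / g2_one_class)
```

The BIC penalty grows quickly here: moving from K to K + 1 classes adds 1 + 9 * 5 = 46 parameters, so each additional class must buy a substantial log-likelihood gain to lower the BIC.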

We first examine the model fit statistics to determine the appropriate number of classes. Then, we compare the average use of each information source within each class with the overall average of that information source. Last, we use logistic regression to examine the associations between patterns of weather information use with demographic variables including age, gender, race/ethnicity, education, and income. We look first at the National data, followed by the Southeast/Gulf Coast data, and then the Oklahoma data.

c. National data

The LCA model fit statistics for the National data, ranging from 1 to 7 classes, are shown in Table 1.

Table 1. Model fit statistics for the National data.

For the National data, after comparing results across the three-, four-, and five-cluster models, we choose to report results for the four-cluster model because it had a markedly lower BIC than the three-cluster model, whereas the further improvement offered by the five-cluster model was smaller. Additionally, although the value of the BIC continues to decrease beyond the five-cluster model, the other fit statistics suggest only marginal improvement.

Next, we examine the nature of the LCA results by comparing information source usage within each class to the overall usage of that information source. This allows us to determine differences in the patterns of information use within and across the different classes. Each class represents a distinct pattern of weather information usage. Figure 2 presents a summary of the class outputs from the LCA analysis for National data.

Fig. 2. Differences between class mean and grand mean for source indicators for the National data.


The x axis represents the classification output from the LCA model. Each bar represents the difference between the grand mean (mean for the entire sample on the 1 to 6 scale shown in Fig. 1) and the mean within each class for each information source. Thus, the y axis represents the value of the differences between the grand mean, which is set at 0, and each information source within each class.

The first two indicators represent the most traditional modes of weather information: newspapers and radio. The second two bars represent the television information source indicators, specifically cable and local television. The next three bars represent the mean differences for indicators of network information sources including family and friends, social media, and cell phone applications. The last two bars represent these differences for website information sources, both government and nongovernment. These in combination with social media represent the weather information sources we inquired about that are available through the internet. Grouping the indicators and comparing each with a baseline provides a better understanding of patterns of information use within each class.
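Concretely, each bar in Fig. 2 is a class-conditional mean minus the grand mean. A sketch with toy data (the names and values are hypothetical, not the survey responses):

```python
import numpy as np

sources = ["newspaper", "radio", "cable_tv", "local_tv"]
rng = np.random.default_rng(0)
use = rng.integers(1, 7, size=(12, len(sources)))      # 1-6 frequency responses
cls = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2])   # LCA class assignments

grand_mean = use.mean(axis=0)  # the baseline plotted at 0
for k in np.unique(cls):
    diff = use[cls == k].mean(axis=0) - grand_mean  # bar heights for class k
    print(k, dict(zip(sources, np.round(diff, 2))))
```

By construction, the class-size-weighted differences sum to zero across classes, so classes above the baseline for a given source are balanced by classes below it.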

As shown in Fig. 2, respondents in class 1 were 24% of the sample and used all sources more than average except for local television. In particular, those in class 1 used social media and government websites more than average. Next, those in class 2, 27% of the sample, used all sources less than the average. Of note, respondents in class 2 used local television, cable television, and social media less than average. For class 2, websites followed by people were the closest to the average.

For class 3, respondents were likely to use all sources more than average, particularly social media. This class accounts for 10% of the sample. Last, respondents in class 4 used all sources of weather information less than the overall average. In particular, those in class 4 were less likely to use websites (government and others) and social media, followed closely by newspapers, radio, and people. Local television was the source closest to the average; even so, its use was still 0.5 points below average on the 1–6 scale. Class 4 accounts for the plurality of respondents, with 39% of the sample falling into that class.

Next, we use logistic regression to examine associations between the pattern of weather information sources and demographics. Respondents from the National survey were placed in one of the classes and coded as 1 if they belonged to that class and 0 otherwise. The demographic variables included age, gender (1 = male and 0 = nonmale; in this paper we also refer to this variable as "male"), race (1 = white and 0 = nonwhite; in this paper we also refer to this variable as "white"), income, and education. The results for the National data are shown in Table 2.
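The membership models can be sketched as one-vs-rest logits. The data below are synthetic, and the plain gradient-ascent fit stands in for the authors' statistical software; only the coding scheme (1 = in the class, 0 = otherwise) follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(18, 85, n)
male = rng.integers(0, 2, n).astype(float)
# Design matrix: intercept, standardized age, male indicator
X = np.column_stack([np.ones(n), (age - age.mean()) / age.std(), male])

# Synthetic truth: younger respondents are likelier to belong to this class
true_beta = np.array([-0.5, -1.0, 0.3])
member = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta = np.zeros(3)
for _ in range(5000):                      # gradient ascent on the log-likelihood
    mu = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (member - mu) / n
```

A negative fitted age coefficient here mirrors the paper's reading of Table 2: older respondents are less likely to be in the (hypothetical) class.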

Table 2.

Logit coefficient estimates with data from the National survey for the four dependent-variable classes. One, two, and three asterisks denote statistical significance at p < 0.10, p < 0.05, and p < 0.01, respectively. Standard errors are in parentheses.


As noted, each class represents a unique pattern of information use, and the logit regression analysis allows us to associate various demographic characteristics with those patterns of information use. Looking first at the results for class 1, the demographic profile for those more likely to be in class 1 includes those that are younger, male, nonwhite, with higher income, and with more education. For class 2, those that were older, nonmale, white, and with more education were more likely than others to be in that class. Respondents in class 3 were more likely to be younger, male, higher income, and educated. Those in class 4 were older, nonmale, nonwhite, and with lower levels of income and education than others.

Overall, the logit results for the National data suggest that respondents who are younger, male, educated, and with higher incomes tend to use all information sources more than average, particularly social media (class 1 and class 3). Older, nonmale respondents used all sources less than average (class 2 and class 4). Increased education was positively associated with being in classes 1, 2, and 3 but negatively associated with being in class 4, where all information sources were used less than average.

d. Southeast/Gulf Coast data

Table 3 shows the LCA model statistics for the Southeast/Gulf Coast data for 1 to 7 classes. The results are straightforward, and we chose the four-cluster model because the five-cluster model had a larger BIC.

Table 3.

LCA model fit statistics for the Southeast/Gulf data.


Next, we compare the use of information sources within each class with the overall mean. The information source variable in the Southeast/Gulf Coast survey is dichotomous, with a 1 if respondents indicated they use that source and a 0 if not. The y axis in Fig. 3 is based on the mean of the 0 or 1 information source variable, and 0 is the overall mean for each source. Just as in Fig. 2, the first two indicators represent newspapers and radio, respectively, followed by cable and local television, then family and friends, social media, and cell phone applications. The last two bars represent website information sources.
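For dichotomous use/don't-use indicators like these, a latent class model treats each class as a vector of use probabilities. Below is a toy EM implementation on simulated data with two planted classes; it is a stand-in sketch (the class labels "technology" and "traditional" and all probabilities are invented), not the authors' poLCA estimation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two planted classes: one favors the first two ("technology") sources,
# the other the last two ("traditional") sources.
truth = np.array([[0.9, 0.9, 0.2, 0.2],
                  [0.2, 0.2, 0.9, 0.9]])
z = rng.integers(0, 2, 500)                        # true (hidden) class labels
X = (rng.random((500, 4)) < truth[z]).astype(float)

def lca_em(X, k, n_iter=300):
    n, m = X.shape
    pi = np.full(k, 1.0 / k)                       # class shares
    theta = rng.uniform(0.3, 0.7, (k, m))          # P(use source | class), random start
    for _ in range(n_iter):
        # E-step: posterior class probabilities per respondent (log space)
        ll = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        post = np.log(pi) + ll
        post = np.exp(post - post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate shares and item-response probabilities
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

pi_hat, theta_hat, post = lca_em(X, k=2)
hard = post.argmax(axis=1)   # modal class assignment, as used for the logits
```

The modal assignments (`hard`) are the analogue of placing each respondent into a class before the regression stage; labels are only identified up to permutation.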

Fig. 3.

As in Fig. 2, but for the Southeast/Gulf Coast data.


As can be seen in Fig. 3, respondents in class 1 were more likely to use social media but less likely than average to use all other sources. They represent 19% of the sample. Respondents in class 2 made up 17% of the sample and were more likely to use website and television sources and less likely than average to use all other sources. Class 3 comprised 28% of respondents and used all sources more than average, most notably people. Those in class 4 were somewhat more likely to indicate local television as a source and were much less likely than average to use all other sources, particularly websites. They were the plurality of respondents, making up 36% of the sample.

Table 4 shows the results of the logistic regression that examines the association between the demographic variables and the likelihood of being in one of the four classes.

Table 4.

Logit coefficient estimates with data from the Southeast/Gulf Coast survey for the four dependent-variable classes. One, two, and three asterisks denote statistical significance at p < 0.10, p < 0.05, and p < 0.01, respectively. Standard errors are in parentheses.


As shown in Table 4, respondents in class 1 tended to be younger, whereas respondents in class 2 were more likely to be older, male, white, and with higher levels of education. Those in class 3, who used all sources more than average, tended to be younger, white, and more educated. Those who used local television more than average and all other sources less (class 4) were older, nonmale, nonwhite, and with less education.

The pattern of results for the Southeast/Gulf Coast data shows that younger respondents tend to rely on social media more than average and on other sources less than average, particularly traditional sources such as newspapers, radio, and television (class 1). Additionally, younger, white, and more educated respondents tend to use all sources more than average (class 3), whereas older and less educated respondents tended to use local TV (class 4) more and all sources, particularly websites, less than average. Class 2 presented some interesting results. Respondents in class 2 used television and internet sources more and all other sources less than average, a combination unique to the Southeast/Gulf Coast sample, and they were more likely to be older, male, white, and educated.

e. Oklahoma data

Looking at the Oklahoma data, the results of the LCA model fit are shown in Table 5. Based on these results, we chose the five-cluster model: although its BIC improvement over the four-cluster model was smaller, it provides more nuance.

Table 5.

LCA model fit statistics for the Oklahoma data.


Next, we compare the use of information sources across the classes with the overall average. The y axis represents the differences on the 1–6 scale of how often each source is used. As before, the sources from left to right are newspapers, radio, cable TV, local TV, family and friends, social media, cell phone applications, and website information sources, both government and nongovernment. The results for the Oklahoma data are shown in Fig. 4.

Fig. 4.

As in Fig. 2, but for source indicators for the Oklahoma data.


As shown in Fig. 4, those in class 1 were more likely to use technology sources including social media, apps, and websites, and less likely than average to use all other sources. They were 14% of respondents. Those in class 2 were 16% of the sample and were more likely to use nearly all sources of information, but they were less likely than average to use social media. Additionally, the use of people and cell phone applications by respondents in class 2 was near the average. Those in class 3 were the plurality of respondents at 38% of the sample; they used traditional media sources including newspapers and television more than average and used websites, apps, and social media far less than average. Respondents in class 4 were 13% of the sample and used websites more than average and all other sources less than average. Respondents in class 5 were more likely than average to use all sources of information, particularly social media, apps, and people. They were 18% of the sample.

We next examine the associations between demographics and information use with logistic regression; the results for the Oklahoma data are shown in Table 6.

Table 6.

Logit coefficient estimates with data from the Oklahoma survey for the five dependent-variable classes. One, two, and three asterisks denote statistical significance at p < 0.10, p < 0.05, and p < 0.01, respectively. Standard errors are in parentheses.


Looking first at the results for class 1, which included increased use of technology sources, respondents in this class were more likely than others to be younger, nonmale, white, and with higher levels of income. For class 2, respondents tended to be older and male and to have higher levels of income and education. Those who used traditional sources (class 3) tended to be older and nonwhite, with less income and less education. Those in class 4, who used websites more and people, radio, and social media less than average, tended to be older, male, white, and with more education. Respondents in class 5 used all sources more than average, most notably social media, apps, and people, and they tended to be younger, nonmale, and nonwhite, with more income and less education than other respondents.

With the Oklahoma data, age was a determining factor in different patterns of information use with younger respondents using technological information sources such as social media, apps, and websites (classes 1, 4, and 5), and older respondents more likely to use traditional sources of information (classes 2 and 3).

4. Discussion

There is a large variety of weather information sources available, yet little work has examined whether patterns exist in the use of sources for routine weather information. In this paper, we drew on three distinct samples of the U.S. public, including several national samples, a sample from the coastal counties of the Southeast and Gulf Coast, and several samples from the state of Oklahoma, to examine patterns in consumption of weather information sources. The use of different samples allowed us to examine results in regions that experience distinct weather hazards (e.g., coastal communities that experience hurricanes and Oklahoma, which experiences tornadoes) along with a national sample that experiences all types of hazards.

We first examine the average use of information sources, and those results are shown in Fig. 1. As shown, local television was the source used the most in each sample; however, there was wide variation in the use of other sources. To determine the patterns of weather information sources used, both within and across our samples, we used LCA to determine classes or categories of those patterns. Next, we compared the use of sources within each class with the overall average use of each source to determine variation within the classes. Last, we used logit regression to examine the associations between demographics (i.e., age, gender, race, income, and education) and the likelihood of being within each class across each sample.

Overall, we find some patterns in the use of weather information across the samples. Of note, each sample had a class of high use that tended to use all sources more than average; however, the percentage of respondents in the high user class varied across the samples. For the National data high users made up 10% of the sample, for the Southeast/Gulf Coast data it was 28%, and for the Oklahoma data it was 18%. Also of note, in the National (class 3) and Oklahoma (class 5) data social media was the source used the most in the high user classes, whereas in the Southeast/Gulf Coast data people were the most used information source in that class (class 3).

All three samples had classes that tended to use various information sources less than average. In general, there was a pattern in each sample where a class of respondents would use traditional sources such as newspapers, radio, and television (local and cable) more (less) than average and use technology sources like websites, apps, and social media less (more) than average. With the National data, two classes used all information less than average, but in one of those classes, class 2, websites and people were closer to the average than newspapers and television. A clearer pattern is found with the Southeast/Gulf Coast and the Oklahoma samples, where the lower-use classes tended to use one or a few sources more than average. For example, in the Southeast/Gulf Coast data, there was a class that used social media more than average and traditional sources less than average (class 1), and a class that used local television more and internet sources less than average (class 4). In the Oklahoma sample, one class of respondents used newspapers and television more than average and websites, apps, and social media far less than average (class 3). Another class in the Oklahoma data, class 1, used websites, apps, and social media more than average but used newspapers and television sources far less than average.

We also found some patterns between demographics and weather information use. Below we discuss the demographics associated with the high-use classes, traditional sources classes, and technology sources classes. Additionally, we also discuss a few other patterns that emerged from the LCA and the logit models.

a. High-use classes

The high-use classes were those that used all information sources more than average. Across each sample, those in the high-use classes were younger; in addition, respondents in the National and Southeast/Gulf Coast data were more educated, and respondents in the National and Oklahoma data had higher levels of income. Additionally, in the National data, high-use respondents were more likely to be male, and in the Southeast/Gulf Coast data respondents were more likely to be white. However, the Oklahoma data had some unique patterns: respondents in the high-use class there tended to be nonmale, nonwhite, and with less education.

b. Traditional sources classes

The traditional classes were those that used traditional sources more and other sources less than average. In the Southeast/Gulf Coast data, class 4 used local TV more than average but all other sources were used less than average, most notably the technology sources. There was a similar pattern in the Oklahoma data, where class 3 used newspapers and television more than average and all sources, particularly technology sources, less than average. For demographics, respondents in the traditional sources classes in both samples were older, nonwhite, and less educated. Those in the Oklahoma data also had less income. In addition, class 2 in the Oklahoma data used traditional sources and websites more than average, but social media sources slightly less and apps and people slightly more than average. Respondents in class 2 were older and male, with higher income and education.

The National data did not have as clear of a traditional sources class, rather two classes used all information sources less than average. Class 4 in the National data had the lowest use of all information sources and respondents in that class tended to be older, nonmale, nonwhite, and with less income and less education. Class 2 was lower than average on all sources but some of the technology sources (i.e., websites) were close to the average. Respondents in class 2 tended to be older, nonmale, white, and more educated.

c. Technology sources classes

The technology classes were those that used technology sources (websites, apps, social media) more and other sources less than average. Class 1 in the National data used government websites and social media more than average, yet newspapers and radio were also used more than average by respondents in that class. In the Southeast/Gulf Coast data, class 1 used social media more than average, and all other sources, particularly traditional sources, were used less than average. With the Oklahoma data, respondents in class 1 used social media, apps, and websites more than average and traditional sources less than average. Class 4 in the Oklahoma data used websites more than average and traditional sources considerably less than average. Across each sample, respondents in the technology sources classes, including class 1 in the National data, were younger. Apart from being younger, respondents in class 1 in the Oklahoma data were also nonmale and white and had higher income. Class 4 in the Oklahoma data was younger, male, white, and more educated.

As noted, class 1 in the National data used some technology sources as well as some traditional sources more than average and they tended to be younger, male, and nonwhite and had higher income and more education.

5. Conclusions

Our research suggests that weather information use can be classified into a relatively small number of types or strategies across three independent samples. More specifically, we find that certain patterns of high engagement, regardless of source, exist consistently across datasets. We also find evidence of web- or internet-based information strategies and traditional media strategies in differing contexts, suggesting consistency in these categories as well as gaps in individual weather information environments. While these information use patterns seem mostly robust to context, we do find that individuals who rely on their interpersonal networks are less consistently present across regions, only emerging in the Oklahoma data. Importantly, these weather information use strategies or patterns are also related to more readily available demographic information in line with previous research (Demuth et al. 2011; Wehde et al. 2019; Robinson et al. 2019; Hua et al. 2020).

However, our research is subject to certain limitations. First, our assumption of homogeneity over time in the Oklahoma and National samples limits our ability to determine whether patterns in the use of information are dynamic as opposed to static; we adopted this assumption for ease of analysis. Future research could examine how and whether weather information patterns change systematically over time. Are there important seasonal patterns in strategies? Does the public learn from recent severe weather and diversify or change their routine information patterns? Another avenue for future research is to examine the consequences of the weather information source patterns we have described. Given that we consider these patterns predecisional processes in protective action, future research could examine which classes lead to more effective protective action (Mileti and Sorensen 1990). Second, while we have three samples on which to examine weather information use, there is some geographic bias toward the South and Midwest regions. While we have one national sample, our data are more limited in representing Western regions, beset primarily by drier weather patterns, and the Northeast, where cold and snow are a more routine and common occurrence. Third, our measurement across datasets is not perfectly consistent. This may be a strength, as the general similarity of patterns emerges regardless of measurement specifics; however, it should not obscure the lack of consistent measurement in existing research. To best understand these phenomena, the scholarly community should develop a consistent battery of questions regarding sources of weather information that can be asked across contexts. Fourth, latent class analysis, although a powerful tool, requires some researcher input to determine the number of classes. Given the improvements over existing approaches, we believe this subjectivity is justified; however, as efforts to standardize measurement progress, the data necessary for stricter measurement models may become available.

We believe that this research is important for two primary reasons. First, research on weather information use has primarily focused on severe weather, as opposed to routine, contexts [see Demuth et al. (2011) for a key exception]. While we do not measure this explicitly, something future research should endeavor to do, it seems likely that members of the public rely on familiar sources of information in severe contexts. Thus, as the PADM and its concepts of predecisional processes might imply, members of the public may be limited to relying on sources used in nonsevere, more routine contexts. Second, and more importantly, we argue that individuals do not rely on weather information sources in isolation, despite this being the most common research approach [see Hua et al. (2020) for a recent exception]. Instead, individuals use coherent patterns or strategies of weather information acquisition and use (Wu et al. 2020). Our research suggests that this intuitive idea is supported by the data. Therefore, those tasked with providing information in the weather enterprise can build strategies to reach wider audiences. For example, many of the lower-use classes we identify suggest that individuals who consume less weather information on average still rely on social media somewhat more than on other sources. Thus, local forecasters concerned with missing these potentially less informed and more vulnerable groups in their communities could utilize their social media channels to try to ensure that these groups receive important weather information. Certain information use patterns, related to levels of engagement but also to similar source preferences such as internet-based ones, emerge across multiple, diverse samples. Knowing these patterns may help forecasters, broadcast meteorologists, emergency managers, and other stakeholders to develop effective communication strategies that reach the broadest proportion of individuals.

1

Descriptive statistics and question wording for the demographic variables can be found in the appendix.

2

Model fit cannot be determined using the more familiar chi-squared distribution for computing the p value because the data are sparse: there are 6^8, or approximately 1.68 million, possible response combinations. Information criteria like the Bayesian information criterion (BIC) therefore provide goodness-of-fit indicators that take both model fit and parsimony into account.
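The sparse-cell arithmetic in this footnote is easy to verify (assuming, as the ~1.68 million figure implies, eight six-category indicators):

```python
# Eight items, each answered on a six-point scale, yield 6**8 response patterns,
# far more cells than any survey sample can populate.
n_cats, n_items = 6, 8
n_patterns = n_cats ** n_items          # 1,679,616 possible combinations
approx_millions = round(n_patterns / 1e6, 2)
```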

Acknowledgments.

We thank the National Science Foundation under Grant IIA-1301789 for support of this research. We also thank the National Academies of Science, Engineering, and Medicine Gulf Research Program under Grant 2000007353 for support in data collection.

Data availability statement.

Oklahoma data are available upon request (http://crcm.ou.edu/epscordata/). National weather data are available at the Harvard Dataverse for 2019, 2020, and 2021. Southeast/Gulf Coast data are available upon reasonable request from the principal investigator (and author) Matthew Nowlin (nowlinmc@cofc.edu) at the College of Charleston.

APPENDIX

Demographic Survey Questions

There were three surveys in this study: the “National” survey, the “Southeast/Gulf Coast” survey, and the “Oklahoma” survey. In each survey, respondents were asked similar or, in some cases, identical demographic questions. Table A1 presents the descriptive statistics for each demographic variable across the three data sources. This appendix presents the specific question wording from each survey for the demographic questions.

Table A1.

Descriptive statistics for demographic variables across each sample.


For the age variable, each survey recorded a verbatim response to the question, “How old are you?” For the gender (male) variable, each survey asked, “Are you male or female?” The responses were limited to 0 for female and 1 for male. For the race/ethnicity (white) variable, responses were recoded as 1 for white and 0 for all others. The National and Oklahoma surveys asked, “Which of the following best describes your race?”:

  1. White

  2. Black or African American

  3. American Indian or Alaska Native

  4. Asian

  5. Native Hawaiian or Pacific Islander

  6. Two or more races

  7. Some other race (please specify).

The Southeast/Gulf Coast survey asked, “Which of the following best describes your race or ethnic background?”:

  1. American Indian

  2. Asian

  3. Black

  4. Hispanic

  5. White

  6. Something else.

For the income variable, the National survey asked, “Was the estimated annual income for your household in [the year before the survey]

  1. Less than $50,000

  2. At least $50,000 but less than $100,000

  3. At least $100,000 but less than $150,000

  4. $150,000 or more.

The Southeast/Gulf Coast survey asked, “Last year, that is, in 2017, what was your total family income from all sources, before taxes?” Possible responses were

  1. $0 to $19,999

  2. $20,000 to $39,999

  3. $40,000 to $59,999

  4. $60,000 to $79,999

  5. $80,000 to $99,999

  6. $100,000 to $149,999

  7. $150,000 or more.

The Oklahoma survey asked, “Thinking specifically about the past 12 months, what was your annual household income from all sources?” Because this survey asked for a verbatim response, we reported the median in Table A1 and used the logarithm of the variable in the analysis.
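As a small illustration of this handling, with invented dollar amounts (not survey data):

```python
import math
import statistics

# Hypothetical verbatim income responses from the open-ended question
incomes = [28_000, 45_000, 52_000, 61_000, 75_000, 90_000, 150_000]

median_income = statistics.median(incomes)      # the summary reported in Table A1
log_incomes = [math.log(x) for x in incomes]    # natural-log transform used in the models
```

The log transform compresses the long right tail of income, which is the usual motivation for using it in regression models.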

For the education variable, the National and Oklahoma surveys asked, “What is the highest level of education you have COMPLETED?” Possible responses were

  1. Less than high school

  2. High school/GED

  3. Vocational or technical training

  4. Some college—No degree

  5. 2-year college/associate’s degree

  6. Bachelor’s degree

  7. Master’s degree

  8. Ph.D./J.D. (law)/doctor.

The Southeast/Gulf Coast survey asked, “What is the highest level of education that you have completed?” Possible responses were

  1. Less than high school

  2. High school graduate/GED

  3. Vocational or technical training

  4. Some college

  5. 2 year/associate’s degree

  6. 4 year/bachelor’s degree

  7. Graduate or professional degree.

REFERENCES

• Allan, J. N., J. T. Ripberger, W. Wehde, M. Krocak, C. L. Silva, and H. C. Jenkins‐Smith, 2020: Geographic distributions of extreme weather risk perceptions in the United States. Risk Anal., 40, 2498–2508, https://doi.org/10.1111/risa.13569.

• Arlikatti, S., S.-K. Huang, C.-H. Yu, and C. Hua, 2019: ‘Drop, cover and hold on’ or ‘Triangle of life’ attributes of information sources influencing earthquake protective actions. Int. J. Saf. Secur. Eng., 9, 213–224, https://doi.org/10.2495/SAFE-V9-N3-213-224.

• Ciampi, A., A. Dyachenko, M. Cole, and J. McCusker, 2011: Delirium superimposed on dementia: Defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains. Int. Psychogeriatr., 23, 1659–1670, https://doi.org/10.1017/S1041610211000871.

• Comstock, R. D., and S. Mallonee, 2005: Comparing reactions to two severe tornadoes in one Oklahoma community. Disasters, 29, 277–287, https://doi.org/10.1111/j.0361-3666.2005.00291.x.

• Demuth, J. L., J. K. Lazo, and R. E. Morss, 2011: Exploring variations in people’s sources, uses, and perceptions of weather forecasts. Wea. Climate Soc., 3, 177–192, https://doi.org/10.1175/2011WCAS1061.1.

• Doksæter Sivle, A., and S. D. Kolstø, 2016: Use of online weather information in everyday decision‐making by laypeople and implications for communication of weather information. Meteor. Appl., 23, 650–662, https://doi.org/10.1002/met.1588.

• Eachus, J. D., and B. D. Keim, 2019: A survey for weather communicators: Twitter and information channel preferences. Wea. Climate Soc., 11, 595–607, https://doi.org/10.1175/WCAS-D-18-0091.1.

• Hammer, B., and T. W. Schmidlin, 2002: Response to warnings during the 3 May 1999 Oklahoma City tornado: Reasons and relative injury rates. Wea. Forecasting, 17, 577–581, https://doi.org/10.1175/1520-0434(2002)017<0577:RTWDTM>2.0.CO;2.

• Hua, C., S.-K. Huang, M. K. Lindell, and C.-H. Yu, 2020: Rural households’ perceptions and behavior expectations in response to seismic hazard in Sichuan, China. Saf. Sci., 125, 104622, https://doi.org/10.1016/j.ssci.2020.104622.

• Huntsman, D., H.-C. Wu, and A. Greer, 2021: What matters? Exploring drivers of basic and complex adjustments to tornadoes among college students. Wea. Climate Soc., 13, 665–676, https://doi.org/10.1175/WCAS-D-21-0008.1.

• Jenkins-Smith, H., J. Ripberger, C. Silva, N. Carlson, K. Gupta, M. Henderson, and A. Goodin, 2017: The Oklahoma meso-scale integrated socio-geographic network: A technical overview. J. Atmos. Oceanic Technol., 34, 2431–2441, https://doi.org/10.1175/JTECH-D-17-0088.1.

• Lazo, J. K., R. E. Morss, and J. L. Demuth, 2009: 300 billion served: Sources, perceptions, uses, and values of weather forecasts. Bull. Amer. Meteor. Soc., 90, 785–798, https://doi.org/10.1175/2008BAMS2604.1.

• Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616–632, https://doi.org/10.1111/j.1539-6924.2011.01647.x.

• Linzer, D. A., and J. B. Lewis, 2011: poLCA: An R package for polytomous variable latent class analysis. J. Stat. Software, 42 (10), 1–29, https://doi.org/10.18637/jss.v042.i10.

• Li, Y., H.-C. Wu, A. Greer, and D. O. Huntsman, 2022: Drivers of household risk perceptions and adjustment intentions to tornado hazards in Oklahoma. Wea. Climate Soc., 14, 1177–1199, https://doi.org/10.1175/WCAS-D-22-0018.1.

• Magidson, J., and J. K. Vermunt, 2004: Latent class models. The SAGE Handbook of Quantitative Methodology for the Social Sciences, D. Kaplan, Ed., SAGE, 175–198, https://doi.org/10.4135/9781412986311.

• Mayhorn, C. B., 2005: Cognitive aging and the processing of hazard information and disaster warnings. Nat. Hazards Rev., 6, 165–170, https://doi.org/10.1061/(ASCE)1527-6988(2005)6:4(165).

• Mileti, D. S., and J. H. Sorensen, 1990: Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. Tech. Rep. ORNL-6609, ON: DE91 004981, 160 pp., https://doi.org/10.2172/6137387.

• Mileti, D. S., and J. D. Darlington, 1997: The role of searching in shaping reactions to earthquake risk information. Soc. Probl., 44, 89–103.

• Miran, S. M., C. Ling, and L. Rothfusz, 2018: Factors influencing people’s decision-making during three consecutive tornado events. Int. J. Disaster Risk Reduct., 28, 150–157, https://doi.org/10.1016/j.ijdrr.2018.02.034.

• Morss, R. E., J. K. Lazo, and J. L. Demuth, 2010: Examining the use of weather forecasts in decision scenarios: Results from a US survey with implications for uncertainty communication. Meteor. Appl., 17, 149–162, https://doi.org/10.1002/met.196.

• Morss, R. E., and Coauthors, 2017: Hazardous weather prediction and communication in the modern information environment. Bull. Amer. Meteor. Soc., 98, 2653–2674, https://doi.org/10.1175/BAMS-D-16-0058.1.

• Oser, J., M. Hooghe, and S. Marien, 2013: Is online participation distinct from offline participation? A latent class analysis of participation types and their stratification. Political Res. Quart., 66, 91–101, https://doi.org/10.1177/1065912912436695.

• Paul, B. K., M. Stimers, and M. Caldas, 2015: Predictors of compliance with tornado warnings issued in Joplin, Missouri, in 2011. Disasters, 39, 108–124, https://doi.org/10.1111/disa.12087.

• Reuter, C., and M.-A. Kaufhold, 2018: Fifteen years of social media in emergencies: A retrospective review and future directions for crisis informatics. J. Contingencies Crisis Manage., 26, 41–57, https://doi.org/10.1111/1468-5973.12196.

• Ripberger, J. T., M. J. Krocak, W. W. Wehde, J. N. Allan, C. Silva, and H. Jenkins-Smith, 2019: Measuring tornado warning reception, comprehension, and response in the United States. Wea. Climate Soc., 11, 863–880, https://doi.org/10.1175/WCAS-D-19-0015.1.

• Ripberger, J. T., C. Silva, H. Jenkins-Smith, and M. Krocak, 2020a: The Severe Weather and Society Survey: Wx19. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/MLCJEW.

• Ripberger, J. T., M. Krocak, C. Silva, and H. Jenkins-Smith, 2020b: The Severe Weather and Society Survey: Wx20. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/EWOCUA.

• Ripberger, J. T., M. Krocak, C. Silva, and H. Jenkins-Smith, 2022: The Severe Weather and Society Survey: Wx21. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/QYZLSO.

  • Robinson, S. E., J. M. Pudlo, and W. Wehde, 2019: The new ecology of tornado warning information: A natural experiment assessing threat intensity and citizen‐to‐citizen information sharing. Public Adm. Rev., 79, 905916, https://doi.org/10.1111/puar.13030.

    • Search Google Scholar
    • Export Citation
  • Robinson, S. E., W. Wehde, and J. M. Pudlo, 2022: Use and access in the new ecology of public messaging. J. Contingencies Crisis Manage., 30, 5970, https://doi.org/10.1111/1468-5973.12361.

    • Search Google Scholar
    • Export Citation
  • Rogers, G. O., and J. H. Sorensen, 1991: Diffusion of emergency warning: Comparing empirical and simulation results. Risk Analysis, Springer, 117–134, https://doi.org/10.1007/978-1-4899-0730-1_14.

  • Sherman-Morris, K., 2010: Tornado warning dissemination and response at a university campus. Nat. Hazards, 52, 623638, https://doi.org/10.1007/s11069-009-9405-0.

    • Search Google Scholar
    • Export Citation
  • Sorensen, J. H., 2000: Hazard warning systems: Review of 20 years of progress. Nat. Hazards Rev., 1, 119125, https://doi.org/10.1061/(ASCE)1527-6988(2000)1:2(119).

    • Search Google Scholar
    • Export Citation
  • Walters, J. E., L. R. Mason, and K. N. Ellis, 2019: Examining patterns of intended response to tornado warnings among residents of Tennessee, United States, through a latent class analysis approach. Int. J. Disaster Risk Reduct., 34, 375386, https://doi.org/10.1016/j.ijdrr.2018.12.007.

    • Search Google Scholar
    • Export Citation
  • Wehde, W., J. M. Pudlo, and S. E. Robinson, 2019: “Is there anybody out there?”: Communication of natural hazard warnings at home and away. Soc. Sci. Quart., 100, 26072624, https://doi.org/10.1111/ssqu.12641.

    • Search Google Scholar
    • Export Citation
  • Wehde, W., J. T. Ripberger, H. Jenkins-Smith, B. A. Jones, J. N. Allan, and C. L. Silva, 2021: Public willingness to pay for continuous and probabilistic hazard information. Nat. Hazards Rev., 22, 04021004, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000444.

    • Search Google Scholar
    • Export Citation
  • Wu, H.-C., A. Greer, and H. Murphy, 2020: Perceived stakeholder information credibility and hazard adjustments: A case of induced seismic activities in Oklahoma. Nat. Hazards Rev., 21, 04020017, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000378.

    • Search Google Scholar
    • Export Citation
Save
  • Allan, J. N., J. T. Ripberger, W. Wehde, M. Krocak, C. L. Silva, and H. C. Jenkins-Smith, 2020: Geographic distributions of extreme weather risk perceptions in the United States. Risk Anal., 40, 2498–2508, https://doi.org/10.1111/risa.13569.

  • Arlikatti, S., S.-K. Huang, C.-H. Yu, and C. Hua, 2019: ‘Drop, cover and hold on’ or ‘Triangle of life’ attributes of information sources influencing earthquake protective actions. Int. J. Saf. Secur. Eng., 9, 213–224, https://doi.org/10.2495/SAFE-V9-N3-213-224.

  • Ciampi, A., A. Dyachenko, M. Cole, and J. McCusker, 2011: Delirium superimposed on dementia: Defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains. Int. Psychogeriatr., 23, 1659–1670, https://doi.org/10.1017/S1041610211000871.

  • Comstock, R. D., and S. Mallonee, 2005: Comparing reactions to two severe tornadoes in one Oklahoma community. Disasters, 29, 277–287, https://doi.org/10.1111/j.0361-3666.2005.00291.x.

  • Demuth, J. L., J. K. Lazo, and R. E. Morss, 2011: Exploring variations in people’s sources, uses, and perceptions of weather forecasts. Wea. Climate Soc., 3, 177–192, https://doi.org/10.1175/2011WCAS1061.1.

  • Doksæter Sivle, A., and S. D. Kolstø, 2016: Use of online weather information in everyday decision-making by laypeople and implications for communication of weather information. Meteor. Appl., 23, 650–662, https://doi.org/10.1002/met.1588.

  • Eachus, J. D., and B. D. Keim, 2019: A survey for weather communicators: Twitter and information channel preferences. Wea. Climate Soc., 11, 595–607, https://doi.org/10.1175/WCAS-D-18-0091.1.

  • Hammer, B., and T. W. Schmidlin, 2002: Response to warnings during the 3 May 1999 Oklahoma City tornado: Reasons and relative injury rates. Wea. Forecasting, 17, 577–581, https://doi.org/10.1175/1520-0434(2002)017<0577:RTWDTM>2.0.CO;2.

  • Hua, C., S.-K. Huang, M. K. Lindell, and C.-H. Yu, 2020: Rural households’ perceptions and behavior expectations in response to seismic hazard in Sichuan, China. Saf. Sci., 125, 104622, https://doi.org/10.1016/j.ssci.2020.104622.

  • Huntsman, D., H.-C. Wu, and A. Greer, 2021: What matters? Exploring drivers of basic and complex adjustments to tornadoes among college students. Wea. Climate Soc., 13, 665–676, https://doi.org/10.1175/WCAS-D-21-0008.1.

  • Jenkins-Smith, H., J. Ripberger, C. Silva, N. Carlson, K. Gupta, M. Henderson, and A. Goodin, 2017: The Oklahoma meso-scale integrated socio-geographic network: A technical overview. J. Atmos. Oceanic Technol., 34, 2431–2441, https://doi.org/10.1175/JTECH-D-17-0088.1.

  • Lazo, J. K., R. E. Morss, and J. L. Demuth, 2009: 300 billion served: Sources, perceptions, uses, and values of weather forecasts. Bull. Amer. Meteor. Soc., 90, 785–798, https://doi.org/10.1175/2008BAMS2604.1.

  • Li, Y., H.-C. Wu, A. Greer, and D. O. Huntsman, 2022: Drivers of household risk perceptions and adjustment intentions to tornado hazards in Oklahoma. Wea. Climate Soc., 14, 1177–1199, https://doi.org/10.1175/WCAS-D-22-0018.1.

  • Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616–632, https://doi.org/10.1111/j.1539-6924.2011.01647.x.

  • Linzer, D. A., and J. B. Lewis, 2011: poLCA: An R package for polytomous variable latent class analysis. J. Stat. Software, 42 (10), 1–29, https://doi.org/10.18637/jss.v042.i10.

  • Magidson, J., and J. K. Vermunt, 2004: Latent class models. The SAGE Handbook of Quantitative Methodology for the Social Sciences, D. Kaplan, Ed., SAGE, 175–198, https://doi.org/10.4135/9781412986311.

  • Mayhorn, C. B., 2005: Cognitive aging and the processing of hazard information and disaster warnings. Nat. Hazards Rev., 6, 165–170, https://doi.org/10.1061/(ASCE)1527-6988(2005)6:4(165).

  • Mileti, D. S., and J. H. Sorensen, 1990: Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. Tech. Rep. ORNL-6609, 160 pp., https://doi.org/10.2172/6137387.

  • Mileti, D. S., and J. D. Darlington, 1997: The role of searching in shaping reactions to earthquake risk information. Soc. Probl., 44, 89–103.

  • Miran, S. M., C. Ling, and L. Rothfusz, 2018: Factors influencing people’s decision-making during three consecutive tornado events. Int. J. Disaster Risk Reduct., 28, 150–157, https://doi.org/10.1016/j.ijdrr.2018.02.034.

  • Morss, R. E., J. K. Lazo, and J. L. Demuth, 2010: Examining the use of weather forecasts in decision scenarios: Results from a US survey with implications for uncertainty communication. Meteor. Appl., 17, 149–162, https://doi.org/10.1002/met.196.

  • Morss, R. E., and Coauthors, 2017: Hazardous weather prediction and communication in the modern information environment. Bull. Amer. Meteor. Soc., 98, 2653–2674, https://doi.org/10.1175/BAMS-D-16-0058.1.

  • Oser, J., M. Hooghe, and S. Marien, 2013: Is online participation distinct from offline participation? A latent class analysis of participation types and their stratification. Political Res. Quart., 66, 91–101, https://doi.org/10.1177/1065912912436695.

  • Paul, B. K., M. Stimers, and M. Caldas, 2015: Predictors of compliance with tornado warnings issued in Joplin, Missouri, in 2011. Disasters, 39, 108–124, https://doi.org/10.1111/disa.12087.

  • Reuter, C., and M.-A. Kaufhold, 2018: Fifteen years of social media in emergencies: A retrospective review and future directions for crisis informatics. J. Contingencies Crisis Manage., 26, 41–57, https://doi.org/10.1111/1468-5973.12196.

  • Ripberger, J. T., M. J. Krocak, W. W. Wehde, J. N. Allan, C. Silva, and H. Jenkins-Smith, 2019: Measuring tornado warning reception, comprehension, and response in the United States. Wea. Climate Soc., 11, 863–880, https://doi.org/10.1175/WCAS-D-19-0015.1.

  • Ripberger, J. T., C. Silva, H. Jenkins-Smith, and M. Krocak, 2020a: The Severe Weather and Society Survey: Wx19. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/MLCJEW.

  • Ripberger, J. T., M. Krocak, C. Silva, and H. Jenkins-Smith, 2020b: The Severe Weather and Society Survey: Wx20. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/EWOCUA.

  • Ripberger, J. T., M. Krocak, C. Silva, and H. Jenkins-Smith, 2022: The Severe Weather and Society Survey: Wx21. Harvard Dataverse, accessed 30 October 2022, https://doi.org/10.7910/DVN/QYZLSO.

  • Robinson, S. E., J. M. Pudlo, and W. Wehde, 2019: The new ecology of tornado warning information: A natural experiment assessing threat intensity and citizen-to-citizen information sharing. Public Adm. Rev., 79, 905–916, https://doi.org/10.1111/puar.13030.

  • Robinson, S. E., W. Wehde, and J. M. Pudlo, 2022: Use and access in the new ecology of public messaging. J. Contingencies Crisis Manage., 30, 59–70, https://doi.org/10.1111/1468-5973.12361.

  • Rogers, G. O., and J. H. Sorensen, 1991: Diffusion of emergency warning: Comparing empirical and simulation results. Risk Analysis, Springer, 117–134, https://doi.org/10.1007/978-1-4899-0730-1_14.

  • Sherman-Morris, K., 2010: Tornado warning dissemination and response at a university campus. Nat. Hazards, 52, 623–638, https://doi.org/10.1007/s11069-009-9405-0.

  • Sorensen, J. H., 2000: Hazard warning systems: Review of 20 years of progress. Nat. Hazards Rev., 1, 119–125, https://doi.org/10.1061/(ASCE)1527-6988(2000)1:2(119).

  • Walters, J. E., L. R. Mason, and K. N. Ellis, 2019: Examining patterns of intended response to tornado warnings among residents of Tennessee, United States, through a latent class analysis approach. Int. J. Disaster Risk Reduct., 34, 375–386, https://doi.org/10.1016/j.ijdrr.2018.12.007.

  • Wehde, W., J. M. Pudlo, and S. E. Robinson, 2019: “Is there anybody out there?”: Communication of natural hazard warnings at home and away. Soc. Sci. Quart., 100, 2607–2624, https://doi.org/10.1111/ssqu.12641.

  • Wehde, W., J. T. Ripberger, H. Jenkins-Smith, B. A. Jones, J. N. Allan, and C. L. Silva, 2021: Public willingness to pay for continuous and probabilistic hazard information. Nat. Hazards Rev., 22, 04021004, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000444.

  • Wu, H.-C., A. Greer, and H. Murphy, 2020: Perceived stakeholder information credibility and hazard adjustments: A case of induced seismic activities in Oklahoma. Nat. Hazards Rev., 21, 04020017, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000378.