What Does It Mean to Stand Out? How Visual Design and Presentation Affect Attention and Memory in a Warning Message

Nicholas Waugh College of Emergency Preparedness, Homeland Security, and Cybersecurity, University at Albany, State University of New York, Albany, New York

https://orcid.org/0009-0006-0945-721X
Jeannette Sutton College of Emergency Preparedness, Homeland Security, and Cybersecurity, University at Albany, State University of New York, Albany, New York

https://orcid.org/0000-0002-4345-9108
Laura Fischer Department of Agricultural Education and Communications, Davis College of Agricultural Sciences and Natural Resources, Texas Tech University, Lubbock, Texas

Ginger Orton Department of Agricultural Education and Communications, Davis College of Agricultural Sciences and Natural Resources, Texas Tech University, Lubbock, Texas


Abstract

In emergency communication, it is essential to call attention to key information that can be interpreted quickly and remembered easily. Individuals possess a limited number of cognitive resources to allocate to message processing in an emergency. Because of this, they are more likely to allocate attention to messages they are motivated to care about or to message attributes that stand out. In this study, we focus on how warning messages are attended to when they are viewed in a busy media environment and ask the question: “What does it mean to stand out?” To address the research questions, we used a sequential explanatory design, mixed-methods approach. We employed eye tracking, a memory exercise, and think-aloud interviews to investigate visual attention, memory, and perceptions in response to warnings communicated via Twitter and Wireless Emergency Alerts (WEAs) for snow squall (SS) and dust storm (DS) hazards. Our findings revealed insights to assist message designers as they develop warning messages without burdening the message receiver with contents that require additional cognitive load. Colors help to draw attention to key elements and evoke a feeling of risk. Icons also draw attention and can serve as a signal that catches the eye, especially when actively viewed in a busy messaging environment. Additionally, techniques to make key text stand out through bold or the use of ALL CAPS may reduce effortful processing and eliminate the need for conscious fixations while resulting in easily remembered content.

Significance Statement

Visual risk communication messaging is often used to provide individuals with quick decision-making and protective action information in response to hazards. As messages increase in length and complexity, a burden is placed on risk communicators to capture the attention of message receivers. This study uses eye tracking, a memory exercise, and think-aloud interviews to investigate what factors influence visual attention, memory, and perceptions in response to warning messages delivered over different channels. These methods allow us not only to answer questions about where people look, what they remember, and what draws their attention but also to recommend tactics, such as using ALL CAPS, colors, symbols, and icons, that message creators can use to maximize message effectiveness.

© 2025 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Jeannette Sutton, jsutton@albany.edu


1. Introduction

As messages get longer, alerting authorities need to know how to call attention to key information that can be interpreted quickly and remembered easily. Choosing what to attend to is difficult due to issues related to cognitive load and the overwhelming amount of content that people are exposed to daily. It is estimated that the average person spends 2 h and 27 min on social media every day and checks their mobile device 159 times a day (Howarth 2024). Therefore, the average person is exposed to new information in a near-constant feed of visual stimuli. This has the potential to pose problems for those who need to be alerted to imminent threats in their environments. Theoretical and empirical research on the process of motivating a person to take a protective action has demonstrated that the initial step includes receiving and attending to a cue from the environment, other people, or an information source in the form of a warning (Lindell and Perry 2012). Mileti and Sorensen (1990) identified warning message receipt as the first step in their Warning Response Model (WRM), a step that initiates message interpretation. Receiving the warning means that the message, in some form, is delivered and captures the receiver's attention amid all of the other competing sounds, sights, and feelings in the physical environment (Mileti and Peek 2000). Because individuals possess a limited number of cognitive resources to allocate to message processing, they are more likely to allocate attention to content they are preemptively motivated to care about (Lang 2000, 2009; Lang et al. 2012). Additionally, they will direct these resources to things that stand out—that is, features that are visually salient or different from neutrally presented text (i.e., lowercase, nonitalicized, and nonbold) and images (i.e., static and lacking color). Therefore, the presentation of content in a warning message takes on the dual task of communicating risk and urgency (Fischer et al. 2022), which can motivate a person to process the message while directing attention to the most salient content (Lang 2000, 2006, 2009; Lang et al. 2012). This means breaking through a very "noisy" and crowded media environment, as well as an equally crowded mental environment.

Some warning channels have technological affordances that elicit attention to important information using visual cues to highlight key information for viewers. For example, social media platforms allow users to include static and animated images with and without video, both of which have been shown to increase message saliency (Sutton et al. 2015). Investigations of attention to these visuals showed colors, fonts, and icons or symbols drew individual gaze to key information when viewed in a laboratory setting (Sutton and Fischer 2021). In contrast, channels such as opt-in Short Message Service (SMS) alerts or Wireless Emergency Alerts (WEAs) are presently limited to text content for warning messages (with some use of emoji). WEAs are limited to 90 and 360 characters of text and include a single symbol (a yellow triangle containing a red exclamation point near the words EMERGENCY ALERT) at the top of the message. Many studies have been conducted on message comprehension related to the contents of WEA messages, including what should be included (Sutton and Kuligowski 2019; Sutton et al. 2024) and in what order (Bean et al. 2014) to optimize message perception outcomes. However, how WEAs are attended to visually, and how this may differ from warnings that present content in text and visual format, has not yet been studied.

In this study, we focused on how warning messages are attended to when they are viewed in a busy media environment and asked the question: What does it mean to stand out? To do so, we investigated responses to two types of warning messages that are issued by the National Weather Service (NWS): the warning tweet and the Wireless Emergency Alert. We selected two hazard types that are both relatively new to the NWS suite of alerts: snow squall and dust storm. We investigated visual attention using a mixed-methods approach where we collected demographic information and eye-tracking data and asked participants to elaborate on their thoughts about the message they viewed in a memory task and think-aloud interview. A better understanding of how message design affects the ways audiences view and remember key content can help to inform future visual design strategies. Importantly, these messages may be the primary cue that signals a person to take protective action.

2. Background and literature

a. Visual attention

Visual attention allocation plays a critical role in information processing frameworks. Simply put, humans are constantly inundated with information and messages, and the human mind has a limited capacity to process incoming messages and new information (Zillman and Bryant 1985; Lang 2000; Duchowski 2007; King et al. 2019). When a viewer first looks at a stimulus, they begin to identify specific areas or regions within the message that attract their immediate attention—this is like scanning the landscape. After this scan, the viewer prioritizes which areas of an image or message to inspect more carefully (Duchowski 2007; Gong and Cummins 2020). Theoretically, visual attention has been described as the process of allocating cognitive and mental resources to a component of a message (Duchowski 2007). It has been operationalized as an eye movement that is stationary over an object (measured in s and ms) and the number of times (frequency count) the eye was stationary (Duchowski 2007).

Which regions a viewer prioritizes for visual inspection depends on two types of salience: 1) motivational and 2) visual. Motivational salience refers to items that capture visual attention due to an individual's intrinsic motivation—it can be goal oriented (i.e., they have been assigned a task, so they look for things that help them complete it), or it can relate to their prior knowledge or experience (i.e., looking for information that aligns with one's prior knowledge and beliefs) (MacInnis and Jaworski 1989; Fischer et al. 2020; Gong and Cummins 2020). In contrast, visual salience refers to items in the visual search field that stand out and can be interpreted by a receiver (Lang 2000, 2009; Lang et al. 2012; Fisher and Weber 2020; Fischer et al. 2022). Visual salience has been described as components of a message that "pop out" from their surroundings, can be distinguished from other components of the message, and garner visual attention (Yantis 1993; Pieters and Wedel 2007; Sutton and Fischer 2021; Fischer et al. 2022). Graphically, message designers use contrasting colors, new color additions, the edges of objects, novelty, motion, symbols, and emojis to help draw visual attention (Yantis 2005; Bruce and Tsotsos 2009; Zhang and Lin 2013; Sutton and Fischer 2021).

Textually, message designers can manipulate text to elicit visual salience through effects such as the use of all capital letters (“ALL CAPS”) or the use of italics, bold, or colored fonts which can draw attention to an important word or phrase (Edworthy and Hellier 2006; Frascara 2006; Vos et al. 2018; Sutton and Fischer 2021). Designers may also make use of punctuation like an exclamation point (!), which may draw attention to important sentences (Edworthy and Hellier 2006; Frascara 2006; Vos et al. 2018; Sutton and Fischer 2021). Visual salience has been found to drive attention, and if items elicit attention, there is a higher chance that they will be processed and later interpreted by the viewer (Yantis 1993; Pieters and Wedel 2007; Fischer et al. 2022).

Previous research, including studies by Fischer et al. (2022) and Sutton and Fischer (2021), has explored the design of tornado warning messages, particularly those disseminated via Twitter, now known as X, through the use of think-aloud interviews and eye-tracking methods. Fischer et al. (2022) found that within Twitter warning messages, design elements such as bold and ALL CAPS letters, icons, and graphics enhanced the visibility of certain message features. Similarly, Sutton and Fischer (2021) made use of eye-tracking methods to examine how these elements capture visual attention within tornado warnings, specifically identifying which features most effectively attract viewers’ attention. The current study expands this scope of research by analyzing visual attention to warnings for dust storm and snow squall sent via Twitter and Wireless Emergency Alert, allowing us to understand how visual attention to these features varies across message media.

b. Message processing

In this study, we utilize the Limited Capacity Model of Motivated Mediated Message Processing (LC4MP) (Lang 2006; Fisher and Weber 2020) to operationalize the cognitive processes related to message viewing, including "encoding," or selecting the components of the message to view; "storing" memories about message contents; and mentally "retrieving," or remembering, the information obtained following message exposure. The LC4MP outlines three phases of cognitive resource allocation upon message exposure: encoding, storage, and retrieval. Encoding represents the information a receiver views and selects to process when exposed to a message. Next, if cognitive resources allow, information is stored, or logged, into accessible memory. Then, in the retrieval stage, individuals access, or retrieve, that knowledge (i.e., remember it). The allocation of cognitive resources to the memory storage process is determined by the individual's motivation to process the message and by surrounding environmental stimulation (Fisher and Weber 2020), as well as by message-related variables such as message length, content density, and the medium, or channel, through which a message is received (Lang 2000).

c. Eye-tracking approach overview

The use of eye-tracking technology allows researchers to effectively explore where participants focus their gaze across various stimuli. As noted by Duchowski (2007) and discussed by Sutton and Fischer (2021), this method can effectively pinpoint locations within the visual field that capture individual attention, while also identifying the elements of visual stimuli that are most salient. By identifying and isolating specific components that command the highest levels of attention, particularly in the context of risk communication message design, we can establish which are the most engaging and impactful. Additionally, the combined use of eye tracking with follow-up think-aloud interviews enhances our understanding of the effectiveness of these components.

d. Study purpose and research questions

As noted in the WRM, publics must receive and attend to a message prior to preparing to take action (Mileti and Sorensen 1990). We argue that attention to a warning message also allows us to understand what components of the message elicit encoding, storage, and remembering of message content, which helps to facilitate protective action taking. Therefore, in this study, we investigated what people remembered about a warning message’s content and features as a proxy for where they placed their cognitive attention and why. We used memory to identify any potential differences between eye-tracking data that measured what information individuals spent time looking at and what information or message features individuals actually remembered shortly after message exposure.

Additionally, think-aloud interviews, or asking a person to verbalize their opinions as they view a message, also provide insight into why participants allocated visual attention (see Bean et al. 2014; Sutton and Fischer 2021). By verbally describing what stands out in a message while viewing it, viewers dedicated additional cognitive effort alongside visual scanning and attention. In other words, they became aware of what drew their attention as they looked and described what they saw and why. This may differ from what a viewer describes when asked what they remember about a message, as well as what a viewer actively fixates on during eye tracking (Lamme 2003).

By combining eye-gaze tracking, memory tasks, and think-aloud interviews, researchers can more fully understand the relationship between what people attended to in a message, what they remember about the message, and what elements and features attract their attention. Observing where users focus their attention while they simultaneously reflect on what they are thinking can give insight into cognitive processing (Clive et al. 2021). Additionally, the message features that are remembered most clearly can provide insight into what factors drew attention and why. Therefore, we ask the following questions:

  • RQ1: Where do participants allocate visual attention when viewing a WEA and Twitter message about an approaching hazard in their location?

  • RQ2: What do participants remember about a WEA and Twitter message about an approaching hazard in their location shortly after viewing the message?

  • RQ3: What visual features do participants say draw their attention in a WEA and Twitter message about an approaching hazard in their location?

3. Methods

To address the research questions posed in this study, we used a mixed-methods, sequential explanatory design (Creswell and Clark 2007), investigating responses to warnings communicated via Twitter and WEAs for snow squall (SS) and dust storm (DS) events. In the first phase of this study, we collected eye-tracking data, including fixation frequency and fixation duration, as measures of visual attention. Next, we conducted a memory exercise, in which participants were asked what contents and features in each message type were the most memorable and attention grabbing. Following this, we conducted think-aloud interviews, in which participants reviewed a WEA or Twitter message and were asked to describe what features stood out to them and why. Finally, we asked participants to complete a short questionnaire to capture their prior hazard experience and demographic information.

a. Participants

Our participants included college students at two large public universities: one located in the northeast United States, assigned to messages focusing on a snow squall, and one in the southwestern United States, assigned to messages focusing on a dust storm. Participants were recruited using university listserv email lists and were offered a $20 gift card incentive in return for their time. In each location, 20 participants completed the study for a total of 40 participants. Eye-tracking studies typically have relatively low numbers of participants (King et al. 2019; Sutton et al. 2020), especially those focusing on the usability of websites or other online technologies (King et al. 2019). In descriptive research studies, prior research indicates smaller samples, often fewer than 20 participants, are adequate for understanding patterns of viewing behavior (Jacob and Karn 2003). Because this study provides descriptive trends involving visual attention and features that affect it, the number of participants was evenly split between the two hazards and the two message types of interest. Participants were randomly assigned to one of the two conditions programmed for their geographical area.

At the northeast university, the participants were diverse in ethnicity, predominantly male, and had an average age of 21.7 years. Many indicated they had no formal training in map reading or meteorology. In contrast, the participants at the southwestern university all described themselves as white, were majority female, and had an average age of 20.7 years. Most of the southwestern participants likewise indicated that they had no formal training in map reading or meteorology. See Table 1 for the participants' full sociodemographic breakdown.

Table 1. Sociodemographic characteristics for participants.

In both locations, participants reported high levels of prior experience with their assigned hazard condition: SS or DS. Experience was measured using methods similar to those of Demuth (2018), assessing hazard experience on a five-point scale. In the SS location, which had experienced an SS event 2–3 weeks before data collection (Schneider 2023), the mean experience level with SS events was 3.49. In the DS location, which had experienced a significant dust storm event only 6–7 weeks prior to data collection (Lozano 2023), the mean experience level with DS events was 4.35.

b. Eye-tracking data collection

1) Eye-tracking procedure

The goal of the eye-tracking data collection was to objectively measure where participants fixated their eyes (fixation counts) and the amount of time they allocated to viewing each portion of the message (fixation duration). Eye-tracking data were collected in an on-campus research laboratory at each participant's university. Upon arrival at the laboratory, participants were introduced to the study activities and then provided informed consent. Participants began the study session seated in front of a computer screen for an eye-tracking procedure that used a Tobii Pro Fusion device to monitor their eye movements and gaze patterns (Sutton and Fischer 2021). This noninvasive eye-tracking method measures pupil movement, fixations, and viewing patterns by emitting infrared light toward the eyes of participants (Sutton and Fischer 2021). Prior to beginning the eye-tracking activity, participants performed a calibration procedure in which they fixated on moving objects on the screen to ensure accurate data capture.

After the eye-tracking calibration, the laboratory researcher asked the participants to imagine they were members of the public living in the city where the study was being performed. They were told that they were scrolling through Twitter when they stopped to look at some messages. Participants were advised to view each image on the computer screen at their own pace and to press the spacebar when they were done to move on to the next message, mimicking scrolling through a Twitter feed. The laboratory researcher remained present during the eye-tracking activity to ensure that participants remained still and to observe each participant's response as they viewed the stimuli. Time spent viewing the stimuli varied by location, averaging approximately 78 s for participants in the northeast and 57 s for participants in the southwest.

2) Stimuli

The study consisted of four primary stimuli and four “foils.” The primary stimuli included two Twitter-style messages (one for DS and one for SS) and two messages in the form of Wireless Emergency Alerts (one for DS and one for SS) that have been posted in a Twitter stream. Each message was designed to mimic real messages issued by the National Oceanic and Atmospheric Administration Storm Prediction Center (SPC) and NWS Weather Forecast Offices. Foils or “filler images” that were shown during the experiment served as a distraction from the primary stimuli. They were designed to simulate generic Twitter content that a college student might normally encounter when scrolling through a Twitter feed. See appendix C for a sample foil image.

(i) Twitter warning message

Twitter-style warning messages include two distinct components: the textual content of the tweet and an accompanying graphic image (see Fig. 1). The tweet text contains information pertaining to the hazard, its timing, location, and recommended protective actions. Text presented in ALL CAPS in our given stimuli includes the name of the hazard and the primary protective action guidance.

Fig. 1. Twitter warning message for (top) SS and (bottom) DS.

Citation: Weather, Climate, and Society 17, 1; 10.1175/WCAS-D-23-0140.1

In the graphic portion of the tweet, the image includes a banner with the name of the hazard warning; a large map with a red polygon displaying the area at risk; a smaller inset map showing the risk area relative to a broader geographic area; and a box, in black, containing text and icons describing the hazard and safety information. Additional information common to Twitter posts was also included for realism, including the account information, date/time of the tweet, and engagement metrics.

(ii) Wireless Emergency Alerts

The WEA stimuli replicate the exact text contained in WEAs issued for SS and DS events but are displayed in a Twitter stream (see Fig. 2). A real WEA would be delivered directly to an individual's phone and would appear as a text-type message on the home screen until the receiver dismissed it (i.e., a "push" notification). Each WEA consists of text and includes the name of the source, the hazard, the time, and protective action recommendations. WEAs also include a small icon and the words EMERGENCY ALERT in ALL CAPS. For the purpose of this experiment, the WEA message was posted to a fictitious Twitter account and includes the time of posting and engagement metrics.

Fig. 2. (top) SS WEA and (bottom) DS WEA posted to a Twitter stream.


(iii) Foils

Four foils, or filler messages, served as a distraction from the target image and were designed to represent the type of messages that a college student might normally encounter when scrolling through a Twitter feed (see appendix C). Foil topics included a recently released movie, a humorous tweet about dogs and cats, an advertisement for a streaming service, and content about an upcoming university event. The tweet for the university event varied by the location of data collection. Each foil included a brief text component and an image; all were issued by organizations.

(iv) Areas of interest

Each stimulus image was segmented into multiple areas of interest (AOIs) that varied depending on message content (see Fig. 3). AOIs represented defined regions or elements within each image, encompassing components such as text, objects, images, or other graphical and visual components (see appendixes A and B for full descriptions of the AOIs). The use of AOIs enabled a granular analysis of participants' gaze and visual attention allocation to components or features within an overall message. In short, AOIs allowed us to identify the components or features of the image to which participants most and least often allocated their visual attention.

Fig. 3. AOIs outlined within (top) Twitter warning message stimuli and (bottom) WEA message stimuli.

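The AOI assignment described above amounts to point-in-rectangle tests on fixation coordinates. The sketch below is an illustrative reconstruction, not the Tobii Pro software's implementation; the AOI names and pixel coordinates are invented for the example, loosely echoing the stimulus regions described in the text.

```python
from dataclasses import dataclass


@dataclass
class AOI:
    """A named rectangular region in screen coordinates (pixels)."""
    name: str
    x: float       # left edge
    y: float       # top edge
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def assign_aoi(aois, px, py):
    """Return the name of the first AOI containing the point, or None.

    Listing nested AOIs (e.g., a polygon inside a map) before their parents
    lets the more specific region win.
    """
    for aoi in aois:
        if aoi.contains(px, py):
            return aoi.name
    return None


# Hypothetical AOIs for illustration only.
aois = [
    AOI("red_polygon", 150, 250, 120, 100),  # nested inside the large map
    AOI("large_map", 0, 120, 450, 400),
    AOI("tweet_text", 0, 0, 600, 120),
]

print(assign_aoi(aois, 200, 300))  # red_polygon (the nested region wins)
print(assign_aoi(aois, 50, 130))   # large_map
```

Ordering the list from most to least specific is one simple way to handle overlapping AOIs, such as the red polygon drawn inside the large map.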

c. Eye-tracking analysis

Eye-tracking data were analyzed using Tobii Pro Lab software tools. After creating AOIs, all eye-tracking data collected via Tobii were downloaded as an interval-based tab-separated values (TSV) file and then opened and analyzed in Microsoft Excel. We measured the total number of fixations on the entire image and on each AOI. Fixation duration was measured for the overall image and each AOI and is reported in seconds. Our results present the calculated frequencies and percentages for fixation counts and the calculated means and standard deviations, in seconds, for fixation duration for the overall message and for specific areas of interest.
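As a sketch of the per-AOI aggregation described in this section, the following computes fixation counts, total duration, and mean/SD of fixation duration from TSV rows. The column names and sample values are assumptions for demonstration only; the actual Tobii interval-based export uses its own schema, and the study performed these calculations in Microsoft Excel.

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical TSV rows; column names are illustrative, not Tobii's schema.
SAMPLE_TSV = """participant\taoi\tfixation_duration_ms
P01\tlarge_map\t240
P01\tlarge_map\t310
P01\tred_polygon\t180
P02\tlarge_map\t275
P02\tred_polygon\t220
"""


def summarize_fixations(tsv_text):
    """Aggregate fixation count and duration (s) per AOI."""
    durations = defaultdict(list)
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        # Convert milliseconds to seconds for reporting.
        durations[row["aoi"]].append(float(row["fixation_duration_ms"]) / 1000.0)

    summary = {}
    for aoi, secs in durations.items():
        summary[aoi] = {
            "n_fixations": len(secs),
            "total_s": round(sum(secs), 3),
            "mean_s": round(statistics.mean(secs), 3),
            "sd_s": round(statistics.stdev(secs), 3) if len(secs) > 1 else 0.0,
        }
    return summary


print(summarize_fixations(SAMPLE_TSV))
```

Fixation percentages per AOI, as reported in the results, would then follow by dividing each AOI's count or total duration by the whole-image totals.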

d. Interview data collection

Interviews were conducted to gain insights into the contents and features of each message the participants remembered, what features drew their attention, and why attention was directed to specific regions. Interviews were conducted in two parts: first as a memory task and later as a think-aloud interview. Both are described next.

1) Memory task procedure

The memory task focused on what participants could remember from the warning image they viewed among the set of Twitter images. Using a semistructured interview guide, the researcher asked the participants to describe message contents and features that they could remember from the warning message to which they were previously exposed. For both the tweet and the WEA, participants were asked to describe what they remembered about the written words, colors, symbols, placement of features, and general information from the message.

2) Think-aloud procedure

Following the memory task, the participant was shown the warning message a second time and asked to describe the message features that stood out or drew their attention. They were also encouraged to state their initial reaction to the message and discuss parts of the message that they liked or disliked. Think-aloud interviews ranged from 11 to 28 min (average: 16.6 min) in the northeast and from 5 to 20 min (average: 8.8 min) in the southwest.

e. Interview data analysis

To ensure the trustworthiness of the data, each interview was audio recorded and uploaded into Otter.ai, an online audio transcription tool, and transcribed (Guest et al. 2006; Otter.ai 2024). After processing, transcripts were checked for accuracy by laboratory researchers and imported into Microsoft Excel for coding and analysis. Participant responses were organized by interview questions, and coding was performed on a question-by-question basis. Responses to each question were later grouped together and organized by topic.

The coding process began with the primary researcher manually analyzing each participant’s transcript without a predefined codebook. Reviewing each transcript, the researcher identified and generated inductive codes based on emergent themes in the participants’ responses. Specific message features highlighted or mentioned by participants, such as the presence of icons, the name of the sending organization, text formatting such as ALL CAPS, or color, were documented in a coding spreadsheet (see Tables 8, 12, 15, and 18 for identified codes). The process was iterative; upon identifying a new theme, the researcher revisited all previously analyzed responses to assess the presence or absence of that theme across the entire dataset (Aurini et al. 2021). As the analysis progressed, the researchers identified the most prevalent themes in participant responses, quantifying their occurrence across interviews. To ensure the dependability and confirmability of the results, two researchers met and utilized a peer debriefing process (MacQueen et al. 1998; Erlandson et al. 1993). During the peer debriefing, the two researchers discussed the codebook, code formation, and code definitions. The debriefing led to consistent codes and definitions, thus increasing the confirmability and dependability of the results (MacQueen et al. 1998).
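The bookkeeping behind this iterative tally, recording which transcripts each inductive code appears in and ranking codes by prevalence, can be illustrated with a short sketch. The codes and participant IDs below are invented for illustration; in the study itself this step was carried out by hand in a coding spreadsheet.

```python
def prevalence(codebook, participants):
    """Rank codes by the number (and percentage) of transcripts they appear in."""
    n = len(participants)
    rows = [
        (code, len(set(who)), round(100 * len(set(who)) / n))
        for code, who in codebook.items()
    ]
    # Most prevalent themes first
    return sorted(rows, key=lambda r: r[1], reverse=True)

# Hypothetical inductive codes mapped to the transcripts they were observed in.
# When a new theme emerges, earlier transcripts are revisited and the code's
# participant list is updated; the tally is then simply re-run.
codebook = {
    'color':       ['P01', 'P02', 'P04', 'P07'],
    'icon':        ['P02', 'P05'],
    'all_caps':    ['P03'],
    'sender_name': ['P01', 'P06', 'P07'],
}
all_participants = [f'P{i:02d}' for i in range(1, 8)]

print(prevalence(codebook, all_participants))
```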

4. Results

a. Visual attention

We first examined participants’ allocation of visual attention to the messages. For each of our eye-tracking measures, we present the number of participants who viewed each stimulus, the AOI, the average fixation duration (mean and standard deviation), the percentage of time they looked at the AOI, and the mean number of fixations.

1) Visual attention to the Twitter warning message

In response to the Twitter warning message (see Table 2), we found visual attention was highest for the overall graphic portion of the Twitter warning message for both SS (M = 10.07 s) and DS (M = 7.85 s). Within the graphic, most attention was directed to the large map (SS: M = 2.86 s; DS: M = 2.75 s), followed by the red polygon within the large map AOI (SS: M = 2.26 s; DS: M = 2.21 s).

Table 2.

Descriptive results from eye-tracking data for SS and DS NWS tweets. Mean and standard deviation data are presented in s.


Visual attention to the tweet text area garnered the second highest viewing time, with similar lengths of time allocated to text across both groups (SS: M = 8.01 s; DS: M = 6.97 s). Within the text area, participants who viewed the SS message looked at the protective action guidance AOI for a greater overall amount of time (M = 3.99 s) than those who viewed the DS message (M = 2.12 s). While viewing time was longer for the tweet image component, there was a higher average number of fixations on the tweet text area, especially on the protective action guidance. Visual attention directed to the Twitter user interface (UI) features was considerably lower in terms of both the average number of fixations and the overall viewing time.

2) Visual attention to the Wireless Emergency Alert

In response to the Wireless Emergency Alert image (see Table 3), we found participants’ visual attention across both groups was highest in the whole graphic area (SS: M = 11.76 s; DS: M = 14.99 s). Participants’ visual attention in this area was particularly focused on the WEA notification AOI (SS: M = 9.07 s; DS: M = 13.50 s).

Table 3.

Descriptive results from eye-tracking data for SS and DS WEAs. Mean and standard deviation data are presented in s.


Within the notification AOI, participants directed most of their visual attention to the WEA text (SS: M = 10.50 s; DS: M = 6.98 s) and the protective action guidance (SS: M = 5.90 s; DS: M = 3.46 s) AOIs. Attention was also placed on the ALL CAPS portion of the text (SS: M = 0.75 s; DS: M = 1.20 s). Noticeably, only one participant out of 40 fixated on the alert logo AOI.

b. Memory task

1) NWS tweet—Memorable features

In their description of the warning tweet, participants most commonly remembered information about the location (included in the content of the message) and the presence of a map. Location was noted by all participants (n = 10) in the SS group and a majority (70%; n = 7) of those in the DS group, in many cases by referencing the included map (see Table 4). For example, one participant stated they remembered, “the affected areas and if you are stuck in one what you should do.” They continued by saying, “and then there was a map about where it would . . . was gonna strike.” Another participant stated they remembered the severity of the hazard relative to the location, saying the hazard “was in the Lubbock area and around the towns around Lubbock, and that it seemed pretty severe.”

Table 4.

Interview participant quotes referencing “location.”


Another feature commonly remembered across both the DS and SS groups was the use of color within the messages. Ninety percent (n = 9) of participants in the SS group and 70% (n = 7) of participants in the DS group mentioned the use of color (see Table 5). One participant noted, “they had it covered in red, which caught my eye,” and another mentioned, “the redded out area of the map was the most impactful since it gave just the easy . . . like easy information to read from the map.” Others mentioned additional colors, such as the white text and the black background of the image.

Table 5.

Interview participant quotes referencing “color.”


Notably, when comparing across hazard groups, more participants in the SS group (n = 8) mentioned the name of the hazard and the inclusion of protective action guidance within the message when compared to the DS group (n = 4; see Tables 6 and 7). For example, one SS interviewee emphasized specific protective action guidance saying, “well, I remember it said that there is no safe place in a snow squall. That you should delay or not travel at all,” and another participant said, “they really made sure to emphasize that, that if you stay like stay put, then you’re gonna stay alive. So that was interesting.”

Table 6.

Interview participant quotes referencing “snow squall/dust storm.”

Table 7.

Interview participant quotes referencing “guidance.”


The remaining contents and features were less commonly discussed by both participant groups (see Table 8 for other identified codes). For example, fewer than half of the participants in each condition mentioned the inclusion of information about the time, the hazard conditions, and the source of the message. In addition, only one person in each group noted the use of differences in font style to emphasize some words in the text.

Table 8.

Content features of the NWS tweet remembered by participants from both the SS and DS groups.


2) WEA—Memorable features

There was variation in what was remembered in the Wireless Emergency Alert message depending upon which hazard a participant was exposed to. Across both hazards, the most frequently remembered content was information about the time. Eighty percent (n = 8) of participants in the SS group discussed the inclusion of a timing aspect in the message, while this was mentioned by 50% (n = 5) of DS participants (see Table 9). For example, one participant stated they remembered “when [the hazard] was happening, when it was taking place”; another participant recalled that the message “specified the time of the dust storm.”

Table 9.

Interview participant quotes referencing “time.”


The next most frequently remembered feature when summed across both hazards was the name of the hazard (n = 12). However, this was mentioned by DS interviewees far more often (n = 9) than SS interviewees (n = 3; see Table 10).

Table 10.

Interview participant quotes referencing snow squall/dust storm.


The third most frequently remembered feature of the WEA was the inclusion of the yellow emergency alert symbol as part of the header of the WEA. This symbol was mentioned by 70% (n = 7) of SS interviewees and 40% (n = 4) of DS interviewees (see Table 11). When discussing this feature, one participant stated,

Table 11.

Interview participant quotes referencing “emergency alert symbol.”


The symbol represented a caution kind of scenario where it’s a very serious situation for residents in the area. So, they’re trying to warn them in a sense to make them alert of the weather conditions in the area and make sure that they’re on point of not causing any injuries to themselves or anyone else.

Other contents, such as message source, hazard location and population at risk, hazard conditions, and protective action guidance, were mentioned infrequently. Only three out of the 20 interviewees mentioned the use of font differences (use of ALL CAPS) to call attention to individual words (see Table 12 for other identified codes).

Table 12.

Content features of the WEA remembered by participants from both the SS and DS groups.


c. Think-aloud interviews

1) Tweet

During think-aloud interviews, participants described features that captured their attention while viewing the tweet message. The most discussed feature across both participant groups was the color red, as stated by all participants (n = 10) in the SS group and 80% (n = 8) of those in the DS group (see Table 13). One participant said they noticed the color red, explaining, “I think the colors are associated with, certain feelings, or emotions. The colors aren’t necessarily a coincidence.” Another participant explained the use of color similarly, saying, “also red, right, is like warning, it’s danger. It’s anything that’s warm tone. In the sense, it seems more urgent than cooler tone colors.”

Table 13.

Interview participant quotes referencing “color red.”


Interviewees in both hazard groups also noted that the map caught their attention (SS: n = 5; DS: n = 4). However, similar to the mentions of the color red in the graphic portion of the tweet, participants frequently pointed to the red area of the map (the polygon) as the section that drew the most attention (see Table 14). Participants less frequently discussed the use of capital or bold letters, the heading of the message, and the time (see Table 15 for other identified codes).

Table 14.

Interview participant quotes referencing “map.”

Table 15.

Content features of the NWS tweet that were noted as capturing the attention of participants.


2) Wireless Emergency Alert

While viewing the WEA message again, participants in both the SS and DS groups most frequently discussed the use of capital letters or ALL CAPS within the message. Eighty percent (n = 8) of participants in the SS group and 40% (n = 4) of those in the DS group made specific reference to the change in text as attention grabbing (see Table 16). For example, one participant said they looked at “the capital snow squall warning . . . Because it’s in capital. So, I see it first looking at it.”

Table 16.

Interview participant quotes referencing “capital letters.”


Another commonly discussed content feature that captured attention was the emergency alert hazard symbol. Sixty percent of participants in both the SS (n = 6) and DS (n = 6) groups noted that this feature was effective in capturing their attention when viewing the message (see Table 17). One participant said they noticed “the caution icon next to emergency alerts, cause that shows something is serious.” Another participant explained how color and differences in font were both important for capturing attention, saying:

Table 17.

Interview participant quotes referencing “hazard symbol.”


The exclamation point in the yellow is the first thing that my eyes are drawn to and then the big bolder words, emergency alert. And then obviously, the all caps are where my eyes go after that.

Less frequently mentioned contents and features included the message heading, the use of bold letters, and the inclusion of time (see Table 18 for other identified codes).

Table 18.

Content features of the WEA that were noted as capturing the attention of participants.


5. Discussion

In this study, we investigated participants’ visual attention, memory, and verbalized thoughts toward two different types of warning messages, Twitter and WEA, for two different hazards, snow squall and dust storm. We collected eye-tracking data to identify where visual attention was allocated to the messages. We also conducted a memory task to understand what individual participants remembered about the message and followed this with a think-aloud interview to identify why message contents and features drew the attention of each individual. We addressed three primary research questions: Where do people look, what do they remember, and what draws their attention? The overarching question focuses on alert and warning message design and asks, “What does it mean to stand out?” Across all research questions, our findings broadly aligned with previous research, which focused only on Twitter warning messages for tornadoes. Our research extends beyond the scope of these previously performed studies, incorporating the examination of two new hazards, dust storm and snow squall, as well as WEA messages.

RQ1 investigated where people allocated visual attention when viewing a WEA and a Twitter message about an approaching hazard in their location. We found consistent patterns across both hazard types for each message. For the tweet, we found that most visual attention was allocated to the graphic portion of the message, with the focus placed on the map. The text portion of the message also received visual attention, as evidenced by the time allocated as well as the number of fixations on the words. However, there was limited visual attention directed to the words in ALL CAPS. These findings align with those seen in previous research focused on the use of Twitter warning messages in emergency communications (Fischer et al. 2022; Sutton and Fischer 2021). However, we are unaware of prior research that uses eye-tracking methods to measure visual attention to contents in a WEA. We found most visual attention was allocated to the guidance within the message, with some visual attention to the words presented in ALL CAPS. Notably, only one participant fixated on the alert logo, and only for a fraction of a second.

RQ2 investigated what people remembered about a WEA and Twitter message about an approaching hazard in their location shortly after viewing the message. Here, we found consistent patterns across both hazard types for the tweet, but less consistency for the WEA. For the Twitter warning, participants described remembering the color—in the banner of the graphic portion of the message and the polygon on the map. Fewer described remembering the recommended protective actions included in the messages; however, those who viewed the snow squall tweet described remembering protective actions nearly twice as often as those who viewed the dust storm tweet. Only one person said they remembered the use of ALL CAPS to call attention to specific words in the guidance portion of the message. Again, these findings are consistent with those seen in similar research on Twitter messages performed previously (Sutton and Fischer 2021; Fischer et al. 2022).

In response to the WEA, both participant groups remembered the time at which the hazard was occurring, but snow squall participants described remembering it twice as frequently as dust storm participants. Both groups also remembered the hazard type, but it was described three times more frequently by the dust storm participants. We also found that more than half of the participants who viewed a WEA described remembering the inclusion of the warning logo. Less than half the participants described the protective action guidance included in the message, and approximately 10% (n = 4) described remembering the use of ALL CAPS to call attention to specific words in the message. We are unaware of any prior research focused specifically on remembering the content included in a WEA message.

The underlying reason for the differences in content remembered between the different hazard types remains unclear. Notably, the content of the snow squall WEA was significantly briefer (170 characters) than the dust storm WEA (355 characters), which contained additional details about the hazard. This suggests that the amount of content included in a WEA message may affect how much detail is remembered. These observations highlight a need for future research to explore differences in message length and its influence on remembering information in emergency communications.

RQ3 investigated what visual features participants say draw their attention in a WEA and Twitter message about an approaching hazard in their location. Here, we again found consistency within message types and across hazards. Participants who viewed the tweets described the importance of color because it stands out and represents feelings of warning, danger, and urgency. They also stated that the use of ALL CAPS captures their attention because it stands out in contrast with the surrounding text. Participants who viewed the WEA also indicated the importance of contrast by identifying the use of ALL CAPS, the use of bold text for the header of the message, and the inclusion of a symbol to signify alert. This finding is again consistent with prior research conducted by Sutton and Fischer (2021), which found that, across different hazard types, participants tend to focus on differences in text such as those seen with the use of ALL CAPS.

While there was general consistency across the eye tracking, memory task, and think-aloud interviews for the message types and across hazards (with the exception of remembering for the WEA), we do find that what “stands out” may not be the same as what receives visual attention allocation. In this case, the features that were described as being remembered following the initial observation of the warning and the features that stood out during the think-aloud interviews were not always consistent with features that were fixated on during the eye-tracking activity. For example, visual attention was more frequently allocated to the tweet text than to the guidance presented in ALL CAPS, but the text in ALL CAPS was both remembered and, later, discussed during think-aloud interviews. In other words, in both tweets and WEA messages, text in ALL CAPS did not require fixation to be remembered by the viewer; it drew attention and was identified as salient without additional cognitive effort.

Similarly, for those who viewed the WEA message, only one person out of 20 allocated visual attention to the alert logo, but seven participants reported remembering it and 12 said that it was an important feature during think-aloud interviews. Furthermore, 19 out of 20 participants viewed and fixated on the WEA emergency alert header that was presented in bold, but none described remembering it when discussing the message.

This suggests a rather complicated relationship between visual attention allocated through eye fixations, remembering, and active viewing. Prior research has demonstrated the limitations of measuring visual attention through eye tracking alone: a fixation signals that attention was allocated, and possibly that deeper cognitive processing occurred, but it does not reveal the reason for that effort (Duchowski 2007; King et al. 2019), leaving researchers to wonder whether attention was allocated because of confusion, interest, or some other reason. Through the use of active viewing interviews, similar to the think-aloud interviews implemented by Sutton and Fischer (2021), we can learn why attention was placed on specific features. The addition of memory interviews adds new information. In the case of the messages examined in this study, we find that the allocation of visual attention did not consistently result in memories of that content. Instead, we find that some message elements that received little to no fixation time were easily remembered and described. Furthermore, during active viewing, our participants clearly identified the elements that drew their attention at that moment; however, these same elements did not lead to eye fixations when they viewed the warnings earlier.

Neuroscientists explain this puzzle as an instance where components of stimuli, such as an alert icon, are fully attended to but not perceived (Lamme 2003). This means that individuals may process a symbol, text, or other sensory input without being consciously aware of its presence in a message; some elements do not require fixation or cognitive effort to be remembered (Lamme 2003). Conversely, parts of a message may be attended to without reaching awareness; that is, a message receiver may fixate on portions of the message that never reach working memory.

Stimuli that are visually salient, such as bright colors and animated graphics, are processed more efficiently without the need for deep cognitive effort (Lamme 2003). Visual design experts endeavor to create imagery that reduces effortful processing. This is especially important for the design of content meant to be viewed, processed, stored, and retrieved under conditions of heightened stress. Those elements that can be more easily identified with little effort may be more likely to also be retrieved.

For those who design alert and warning messages, these findings lead to a few suggestions to increase visual attention and memory without burdening the message receiver with contents that require additional cognitive load. Colors help to draw attention to key elements and, for some, evoke a feeling of risk. Icons also draw attention and can serve as a signal that catches the eye, especially when actively viewed in a busy messaging environment. Perhaps, however, it is the techniques that make key text stand out, such as bold, italics, underline, or ALL CAPS, that reduce effortful processing and may even eliminate the need for conscious fixations while still resulting in easily remembered content. This is an important detail for risk communicators and message designers and one that should be investigated further as we continue to consider what it means to stand out.

6. Limitations and future research

This study focused on two messaging channels (the tweet and the WEA) for two hazards (snow squall and dust storm). Participants were drawn from university populations in areas where they had recently experienced the hazards represented in the messages. The university student research population represents a convenience sample and is not representative of the overall population. Furthermore, while sample sizes for eye-tracking studies are historically quite small for descriptive and exploratory research (King et al. 2019; Sutton et al. 2020), we recognize that the results of this study are not generalizable to other hazards or other populations. However, this research does lay a foundation for future research, especially as it relates to cognitive load and the strategies that can be used to increase visual saliency in continuous bodies of text. The results of this research assist researchers in understanding patterns of what stood out to the participants via eye tracking, along with qualitative insights from the interviews pertaining to why it stood out. This is especially important for messages that are issued and received under conditions of heightened stress, when cognitive effort is limited but also required for action. Future research should include experiments that provide statistical and inferential comparisons between message formats and manipulated elements.

The findings of the current study provide unique insights into what elements of a message stand out; however, we find some visually salient areas were not remembered (e.g., the warning icon). This finding lends itself to future research. One technique that we have not yet explored is analysis of the visual path participants take when viewing a message. We suggest future research should explore participants’ scan paths across a message.

We also recognize that the AOIs identified in this study were drawn to represent a computer screen such as that of a laptop. Consequently, the size of some AOIs makes it difficult to accurately assess the allocation of visual attention in some regions of each image. Future research should take this into account and manipulate messages to increase the size of AOIs and the space between them. Measuring visual attention on a smaller screen, such as a handheld device using eye-tracking goggles, would also provide insight into how viewers interact with warnings that are delivered in a mobile context.

While the layout and design of the messages we tested were ecologically valid, representing the same kinds of messages issued by the NWS, we did not vary the presentation of content to account for other potential designs. Furthermore, because we embedded the WEA in a tweet message, it was not presented in the format in which it would be displayed on the screen of a mobile device. Future research should examine visual attention to WEAs delivered on cellular devices.

7. Conclusions and practical implications

This study delves into the dynamics of visual risk communication and furthers insights into what tends to draw attention within this form of messaging, as well as why those contents draw such attention. These findings can guide visual risk communication writing practices and not only increase visual salience within messaging but also enhance message perception and comprehension, ultimately influencing decision-making for message recipients. As such, this work yields some practical implications that can be used in designing these forms of visual messaging across diverse message distribution channels such as Twitter and WEA and offers guidance on how to capture recipients’ attention and enhance information retention.

Design techniques, such as making use of ALL CAPS, prove to be effective in calling attention to specific words that warrant recipients’ attention. The use of ALL CAPS serves to make information stand out, capturing attention within a busy information environment. Similarly, the strategic use of color within messages allows message writers to emphasize either textual components or graphic components of visual messages, thus working to make them more visually salient and memorable to message receivers. The color red, for instance, invokes a sense of urgency when seen within a message. Thus, the use of this color can make specific, important components of the message pop out from others, providing a more effective presentation of critical message information. The incorporation of symbols and icons can further amplify the effectiveness of messages, allowing urgency and severity to be visually noted in a concise and consistent fashion.

Designers of visual risk communication messaging should practice care and balance when making use of these techniques. Overuse of techniques such as ALL CAPS can dilute their effectiveness. As such, it should be reserved for genuinely crucial message information that should be attended to and remembered. Similarly, the use of other techniques such as the incorporation of colors, symbols, and icons within messaging should be carefully balanced to avoid potential confusion.

Looking ahead, the advancing capabilities of systems such as WEA open opportunities for enhancing message content, potentially introducing features such as maps, icons, and emojis. In designing messages, the full scope of these capabilities should be employed. The use of these items within WEA messaging can help capture recipients’ attention and improve information retention.

Acknowledgments.

This research was funded by NOAA Grant NA21OAR4590124 to Dr. Jeannette Sutton and Dr. Laura Fischer. The results and recommendations are those made by the authors and may not reflect those of the funder.

Data availability statement.

Anonymized data included in this study can be made available upon request to the corresponding author.

APPENDIX A

Area of Interest Descriptions for Twitter Warning Messages

Table A1 outlines the AOIs that were identified for the Twitter warning messages shown as stimuli. Identified are the AOI names, the color of the AOI as seen in Fig. 3, and a short description of the AOI.

Table A1

Descriptions of AOIs in the tweets.


APPENDIX B

Area of Interest Descriptions for Wireless Emergency Alerts

Table B1 outlines the AOIs that were identified for the Wireless Emergency Alerts shown as stimuli. Identified are the AOI names, the color of the AOI as seen in Fig. 3, and a short description of the AOI.

Table B1

Descriptions of AOIs in WEAs.


APPENDIX C

Foil Images

Figures C1–C4 show the various foil images that were presented to participants. In each respective study location, all participants were shown identical images. Figure C4 includes the two location-dependent images that were shown.

Fig. C1.

Movie tweet.

Citation: Weather, Climate, and Society 17, 1; 10.1175/WCAS-D-23-0140.1

Fig. C2.

Humorous tweet.


Fig. C3.

Advertisement tweet.


Fig. C4.

Campus tweets.


REFERENCES

  • Aurini, J., M. Heath, and S. Howells, 2021: The How to of Qualitative Research. Sage Publications, 351 pp., https://www.researchgate.net/publication/304181341_The_How_To_of_Qualitative_Research#fullTextFileContent.

  • Bean, H., B. Liu, S. Madden, D. Mileti, J. Sutton, and M. Wood, 2014: Comprehensive testing of imminent threat public messages for mobile devices. National Consortium for the Study of Terrorism and Responses to Terrorism Tech. Rep., 206 pp., https://www.dhs.gov/sites/default/files/publications/Comprehensive%20Testing%20of%20Imminent%20Threat%20Public%20Messages%20for%20Mobile%20Devices.pdf.

  • Bruce, N. D. B., and J. K. Tsotsos, 2009: Saliency, attention, and visual search: An information theoretic approach. J. Vision, 9, 5, https://doi.org/10.1167/9.3.5.

  • Clive, M. A. T., J. M. Lindsay, G. S. Leonard, C. Lutteroth, A. Bostrom, and P. Corballis, 2021: Volcanic hazard map visualisation affects cognition and crisis decision-making. Int. J. Disaster Risk Reduct., 55, 102102, https://doi.org/10.1016/j.ijdrr.2021.102102.

  • Creswell, J. W., and V. L. P. Clark, 2007: Designing and Conducting Mixed Methods Research. SAGE Publications, 520 pp.

  • Demuth, J. L., 2018: Explicating experience: Development of a valid scale of past hazard experience for tornadoes. Risk Anal., 38, 1921–1943, https://doi.org/10.1111/risa.12983.

  • Duchowski, A., 2007: Eye Tracking Methodology: Theory and Practice. Springer, 328 pp.

  • Edworthy, J., and E. Hellier, 2006: Complex nonverbal auditory signals and speech warnings. Handbook of Warnings, M. S. Wogalter, Ed., Human Factors and Ergonomics, Lawrence Erlbaum Associates Publishers, 199–220.

  • Erlandson, D. A., E. L. Harris, B. L. Skipper, and S. D. Allen, 1993: Doing Naturalistic Inquiry: A Guide to Methods. Sage Publications, 224 pp.

  • Fisher, J. T., and R. Weber, 2020: Limited capacity model of motivated mediated message processing. The International Encyclopedia of Media Psychology, J. Bulck, Ed., Wiley, 1–14.

  • Fischer, L. M., C. Meyers, R. G. Cummins, C. Gibson, and M. Baker, 2020: Creating relevancy in agricultural science information: Examining the impact of motivational salience, involvement and pre-existing attitudes on visual attention to scientific information. J. Appl. Commun., 104, 25, https://doi.org/10.4148/1051-0834.2287.

    • Search Google Scholar
    • Export Citation
  • Fischer, L. M., G. Orton, J. Sutton, and M. Wallace, 2022: Show me and what will I remember? Exploring recall in response to NWS tornado warning graphics. J. Appl. Commun., 106, 22, https://doi.org/10.4148/1051-0834.2440.

    • Search Google Scholar
    • Export Citation
  • Frascara, J., Ed., 2006: Designing Effective Communications: Creating Contexts for Clarity and Meaning. Allworth Press, 297 pp.

  • Gong, Z., and R. G. Cummins, 2020: Redefining rational and emotional advertising appeals as available processing resources: Toward an information processing perspective. J. Promot. Manage., 26, 277299, https://doi.org/10.1080/10496491.2019.1699631.

    • Search Google Scholar
    • Export Citation
  • Guest, G., A. Bunce, and L. Johnson, 2006: How many interviews are enough?: An experiment with data saturation and variability. Field Methods, 18, 5982, https://doi.org/10.1177/1525822X05279903.

    • Search Google Scholar
    • Export Citation
  • Howarth, J., 2024: Worldwide daily social media usage (new 2024 data). Exploding Topics, accessed 8 January 2025, https://explodingtopics.com/blog/social-media-usage.

  • Jacob, R. J. K., and K. S. Karn, 2003: Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, J. Hyönä, R. Radach and H. Deubel, Eds., Elsevier, 573–605.

  • King, A. J., N. Bol, R. G. Cummins, and K. K. John, 2019: Improving visual behavior research in communication science: An overview, review, and reporting recommendations for using eye-tracking methods. Commun. Methods Meas., 13, 149177, https://doi.org/10.1080/19312458.2018.1558194.

    • Search Google Scholar
    • Export Citation
  • Lamme, V. A. F., 2003: Why visual attention and awareness are different. Trends Cogn. Sci., 7, 1218, https://doi.org/10.1016/S1364-6613(02)00013-X.

    • Search Google Scholar
    • Export Citation
  • Lang, A., 2000: The limited capacity model of mediated message processing. J. Commun., 50, 4670, https://doi.org/10.1111/j.1460-2466.2000.tb02833.x.

    • Search Google Scholar
    • Export Citation
  • Lang, A., 2006: Using the limited capacity model of motivated mediated message processing to design effective cancer communication messages. J. Commun., 56, S57S80, https://doi.org/10.1111/j.1460-2466.2006.00283.x.

    • Search Google Scholar
    • Export Citation
  • Lang, A., 2009: The limited capacity model of motivated mediated message processing. The SAGE Handbook of Media Processes and Effects, Sage, 99–112.

  • Lang, C., T. V. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, 2012: Depth matters: Influence of depth cues on visual saliency. Computer Vision—ECCV 2012: 12th European Conf. on Computer Vision, Florence, Italy, Springer, 101–115, https://link.springer.com/chapter/10.1007/978-3-642-33709-3_8.

  • Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616632, https://doi.org/10.1111/j.1539-6924.2011.01647.x.

    • Search Google Scholar
    • Export Citation
  • Lozano, J., 2023: Here comes the haboob: Texas high plains getting walloped by dust storms. Texas Tribune, 27 February, https://news4sanantonio.com/news/local/here-comes-the-haboob-texas-high-plains-getting-walloped-by-dust-storms-texas-wind-dust-dirt-accident-visibility-traffic-roadways.

  • MacInnis, D. J., and B. J. Jaworski, 1989: Information processing from advertisements: Toward an integrative framework. J. Mark., 53 (4), 123, https://doi.org/10.1177/002224298905300401.

    • Search Google Scholar
    • Export Citation
  • MacQueen, K. M., E. McLellan, K. Kay, and B. Milstein, 1998: Codebook development for team-based qualitative analysis. CAM J., 10, 3136, https://doi.org/10.1177/1525822X980100020301.

    • Search Google Scholar
    • Export Citation
  • Mileti, D. S., and J. H. Sorensen, 1990: Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. U.S. Department of Energy Tech. Rep., 162 pp., https://www.osti.gov/servlets/purl/6137387.

  • Mileti, D. S., and L. Peek, 2000: The social psychology of public response to warnings of a nuclear power plant accident. J. Hazard. Mater., 75, 181194, https://doi.org/10.1016/S0304-3894(00)00179-5.

    • Search Google Scholar
    • Export Citation
  • Otter.ai, 2024: Otter.ai. https://otter.ai/.

  • Pieters, R., and M. Wedel, 2007: Goal control of attention to advertising: The Yarbus implication. J. Consum. Res., 34, 224233, https://doi.org/10.1086/519150.

    • Search Google Scholar
    • Export Citation
  • Schneider, H. R., 2023: Brief but severe snow squalls expected to hit Capital Region Wednesday night. Times Union, 29 March, https://www.timesunion.com/news/article/snow-squalls-whiteouts-possible-wednesday-night-17866976.php.

  • Sutton, J., and E. D. Kuligowski, 2019: Alerts and warnings on short messaging channels: Guidance from an expert panel process. Nat. Hazards Rev., 20, 04019002, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000324.

    • Search Google Scholar
    • Export Citation
  • Sutton, J., and L. M. Fischer, 2021: Understanding visual risk communication messages: An analysis of visual attention allocation and think-aloud responses to tornado graphics. Wea. Climate Soc., 13, 173188, https://doi.org/10.1175/WCAS-D-20-0042.1.

    • Search Google Scholar
    • Export Citation
  • Sutton, J., C. B. Gibson, N. E. Phillips, E. S. Spiro, C. League, B. Johnson, S. M. Fitzhugh, and C. T. Butts, 2015: A cross-hazard analysis of terse message retransmission on Twitter. Proc. Natl. Acad. Sci. USA, 112, 14 79314 798, https://doi.org/10.1073/pnas.1508916112.

    • Search Google Scholar
    • Export Citation
  • Sutton, J., L. Fischer, L. E. James, and S. E. Sheff, 2020: Earthquake early warning message testing: Visual attention, behavioral responses, and message perceptions. Int. J. Disaster Risk Reduct., 49, 101664, https://doi.org/10.1016/j.ijdrr.2020.101664.

    • Search Google Scholar
    • Export Citation
  • Sutton, J., M. K. Olson, and N. A. Waugh, 2024: The Warning Lexicon: A multiphased study to identify, design, and develop content for warning messages. Nat. Hazards Rev., 25, 04023055, https://doi.org/10.1061/NHREFO.NHENG-1900.

    • Search Google Scholar
    • Export Citation
  • Vos, S. C., J. Sutton, Y. Yu, S. L. Renshaw, M. K. Olson, C. B. Gibson, and C. T. Butts, 2018: Retweeting risk communication: The role of threat and efficacy. Risk Anal., 38, 25802598, https://doi.org/10.1111/risa.13140.

    • Search Google Scholar
    • Export Citation
  • Yantis, S., 1993: Stimulus-driven attentional capture and attentional control settings. J. Exp. Psychol. Hum. Percept. Perform., 19, 676681, https://doi.org/10.1037/0096-1523.19.3.676.

    • Search Google Scholar
    • Export Citation
  • Yantis, S., 2005: How visual salience wins the battle for awareness. Nat. Neurosci., 8, 975977, https://doi.org/10.1038/nn0805-975.

  • Zhang, L., and W. Lin, 2013: Selective Visual Attention: Computational Models and Applications. IEEE Wiley, 352 pp.

  • Zillman, D., and J. Bryant, 1985: Selective Exposure to Communication. Routledge, 264 pp., https://doi.org/10.4324/9780203056721.

Save
  • Aurini, J., M. Heath, and S. Howells, 2021: The How to of Qualitative Research. Sage Publications, 351 pp., https://www.researchgate.net/publication/304181341_The_How_To_of_Qualitative_Research#fullTextFileContent.

  • Bean, H., B. Liu, S. Madden, D. Mileti, J. Sutton, and M. Wood, 2014: Comprehensive testing of imminent threat public messages for mobile devices. National Consortium for the Study of Terrorism and Responses to Terrorism Tech. Rep., 206 pp., https://www.dhs.gov/sites/default/files/publications/Comprehensive%20Testing%20of%20Imminent%20Threat%20Public%20Messages%20for%20Mobile%20Devices.pdf.

  • Bruce, N. D. B., and J. K. Tsotsos, 2009: Saliency, attention, and visual search: An information theoretic approach. J. Vision, 9, 5, https://doi.org/10.1167/9.3.5.

  • Clive, M. A. T., J. M. Lindsay, G. S. Leonard, C. Lutteroth, A. Bostrom, and P. Corballis, 2021: Volcanic hazard map visualisation affects cognition and crisis decision-making. Int. J. Disaster Risk Reduct., 55, 102102, https://doi.org/10.1016/j.ijdrr.2021.102102.

  • Creswell, J. W., and V. L. P. Clark, 2007: Designing and Conducting Mixed Methods Research. SAGE Publications, 520 pp.

  • Demuth, J. L., 2018: Explicating experience: Development of a valid scale of past hazard experience for tornadoes. Risk Anal., 38, 1921–1943, https://doi.org/10.1111/risa.12983.

  • Duchowski, A., 2007: Eye Tracking Methodology: Theory and Practice. Springer, 328 pp.

  • Edworthy, J., and E. Hellier, 2006: Complex nonverbal auditory signals and speech warnings. Handbook of Warnings, M. S. Wogalter, Ed., Human Factors and Ergonomics, Lawrence Erlbaum Associates Publishers, 199–220.

  • Erlandson, D. A., E. L. Harris, B. L. Skipper, and S. D. Allen, 1993: Doing Naturalistic Inquiry: A Guide to Methods. Sage Publications, 224 pp.

  • Fischer, L. M., C. Meyers, R. G. Cummins, C. Gibson, and M. Baker, 2020: Creating relevancy in agricultural science information: Examining the impact of motivational salience, involvement and pre-existing attitudes on visual attention to scientific information. J. Appl. Commun., 104, 25, https://doi.org/10.4148/1051-0834.2287.

  • Fischer, L. M., G. Orton, J. Sutton, and M. Wallace, 2022: Show me and what will I remember? Exploring recall in response to NWS tornado warning graphics. J. Appl. Commun., 106, 22, https://doi.org/10.4148/1051-0834.2440.

  • Fisher, J. T., and R. Weber, 2020: Limited capacity model of motivated mediated message processing. The International Encyclopedia of Media Psychology, J. Bulck, Ed., Wiley, 1–14.

  • Frascara, J., Ed., 2006: Designing Effective Communications: Creating Contexts for Clarity and Meaning. Allworth Press, 297 pp.

  • Gong, Z., and R. G. Cummins, 2020: Redefining rational and emotional advertising appeals as available processing resources: Toward an information processing perspective. J. Promot. Manage., 26, 277–299, https://doi.org/10.1080/10496491.2019.1699631.

  • Guest, G., A. Bunce, and L. Johnson, 2006: How many interviews are enough?: An experiment with data saturation and variability. Field Methods, 18, 59–82, https://doi.org/10.1177/1525822X05279903.

  • Howarth, J., 2024: Worldwide daily social media usage (new 2024 data). Exploding Topics, accessed 8 January 2025, https://explodingtopics.com/blog/social-media-usage.

  • Jacob, R. J. K., and K. S. Karn, 2003: Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, J. Hyönä, R. Radach, and H. Deubel, Eds., Elsevier, 573–605.

  • King, A. J., N. Bol, R. G. Cummins, and K. K. John, 2019: Improving visual behavior research in communication science: An overview, review, and reporting recommendations for using eye-tracking methods. Commun. Methods Meas., 13, 149–177, https://doi.org/10.1080/19312458.2018.1558194.

  • Lamme, V. A. F., 2003: Why visual attention and awareness are different. Trends Cogn. Sci., 7, 12–18, https://doi.org/10.1016/S1364-6613(02)00013-X.

  • Lang, A., 2000: The limited capacity model of mediated message processing. J. Commun., 50, 46–70, https://doi.org/10.1111/j.1460-2466.2000.tb02833.x.

  • Lang, A., 2006: Using the limited capacity model of motivated mediated message processing to design effective cancer communication messages. J. Commun., 56, S57–S80, https://doi.org/10.1111/j.1460-2466.2006.00283.x.

  • Lang, A., 2009: The limited capacity model of motivated mediated message processing. The SAGE Handbook of Media Processes and Effects, Sage, 99–112.

  • Lang, C., T. V. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, 2012: Depth matters: Influence of depth cues on visual saliency. Computer Vision—ECCV 2012: 12th European Conf. on Computer Vision, Florence, Italy, Springer, 101–115, https://link.springer.com/chapter/10.1007/978-3-642-33709-3_8.

  • Lindell, M. K., and R. W. Perry, 2012: The protective action decision model: Theoretical modifications and additional evidence. Risk Anal., 32, 616–632, https://doi.org/10.1111/j.1539-6924.2011.01647.x.

  • Lozano, J., 2023: Here comes the haboob: Texas high plains getting walloped by dust storms. Texas Tribune, 27 February, https://news4sanantonio.com/news/local/here-comes-the-haboob-texas-high-plains-getting-walloped-by-dust-storms-texas-wind-dust-dirt-accident-visibility-traffic-roadways.

  • MacInnis, D. J., and B. J. Jaworski, 1989: Information processing from advertisements: Toward an integrative framework. J. Mark., 53 (4), 1–23, https://doi.org/10.1177/002224298905300401.

  • MacQueen, K. M., E. McLellan, K. Kay, and B. Milstein, 1998: Codebook development for team-based qualitative analysis. CAM J., 10, 31–36, https://doi.org/10.1177/1525822X980100020301.

  • Mileti, D. S., and J. H. Sorensen, 1990: Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. U.S. Department of Energy Tech. Rep., 162 pp., https://www.osti.gov/servlets/purl/6137387.

  • Mileti, D. S., and L. Peek, 2000: The social psychology of public response to warnings of a nuclear power plant accident. J. Hazard. Mater., 75, 181–194, https://doi.org/10.1016/S0304-3894(00)00179-5.

  • Otter.ai, 2024: Otter.ai. https://otter.ai/.

  • Pieters, R., and M. Wedel, 2007: Goal control of attention to advertising: The Yarbus implication. J. Consum. Res., 34, 224–233, https://doi.org/10.1086/519150.

  • Schneider, H. R., 2023: Brief but severe snow squalls expected to hit Capital Region Wednesday night. Times Union, 29 March, https://www.timesunion.com/news/article/snow-squalls-whiteouts-possible-wednesday-night-17866976.php.

  • Sutton, J., and E. D. Kuligowski, 2019: Alerts and warnings on short messaging channels: Guidance from an expert panel process. Nat. Hazards Rev., 20, 04019002, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000324.

  • Sutton, J., and L. M. Fischer, 2021: Understanding visual risk communication messages: An analysis of visual attention allocation and think-aloud responses to tornado graphics. Wea. Climate Soc., 13, 173–188, https://doi.org/10.1175/WCAS-D-20-0042.1.

  • Sutton, J., C. B. Gibson, N. E. Phillips, E. S. Spiro, C. League, B. Johnson, S. M. Fitzhugh, and C. T. Butts, 2015: A cross-hazard analysis of terse message retransmission on Twitter. Proc. Natl. Acad. Sci. USA, 112, 14 793–14 798, https://doi.org/10.1073/pnas.1508916112.

  • Sutton, J., L. Fischer, L. E. James, and S. E. Sheff, 2020: Earthquake early warning message testing: Visual attention, behavioral responses, and message perceptions. Int. J. Disaster Risk Reduct., 49, 101664, https://doi.org/10.1016/j.ijdrr.2020.101664.

  • Sutton, J., M. K. Olson, and N. A. Waugh, 2024: The Warning Lexicon: A multiphased study to identify, design, and develop content for warning messages. Nat. Hazards Rev., 25, 04023055, https://doi.org/10.1061/NHREFO.NHENG-1900.

  • Vos, S. C., J. Sutton, Y. Yu, S. L. Renshaw, M. K. Olson, C. B. Gibson, and C. T. Butts, 2018: Retweeting risk communication: The role of threat and efficacy. Risk Anal., 38, 2580–2598, https://doi.org/10.1111/risa.13140.

  • Yantis, S., 1993: Stimulus-driven attentional capture and attentional control settings. J. Exp. Psychol. Hum. Percept. Perform., 19, 676–681, https://doi.org/10.1037/0096-1523.19.3.676.

  • Yantis, S., 2005: How visual salience wins the battle for awareness. Nat. Neurosci., 8, 975–977, https://doi.org/10.1038/nn0805-975.

  • Zhang, L., and W. Lin, 2013: Selective Visual Attention: Computational Models and Applications. IEEE Wiley, 352 pp.

  • Zillmann, D., and J. Bryant, 1985: Selective Exposure to Communication. Routledge, 264 pp., https://doi.org/10.4324/9780203056721.
