1. Introduction
Individuals have a vast array of weather information available to them from a variety of sources. They can watch a live television broadcast, check a weather website or a social media platform, or use a mobile weather application (MWA). These sources provide a range of weather information and specific tools used to organize, interpret, and display weather data. Most general forecast information, such as temperature, humidity, and current sky conditions, is displayed in a numerical, text, or graphical format. However, this information, along with other weather information such as satellite imagery, weather model ensembles, hurricane track forecasts, and weather radar—the topic of this study—can also be presented using a geographic display.
Weather radar displays were first introduced to the U.S. population through television stations that installed and broadcast using their own radars during the 1960s (Henson 2010). From these early beginnings, broadcast meteorologists provided detailed explanations of radar displays, interpreting the reflectivity images for viewers. By 1997, the National Weather Service (NWS) had installed 158 Doppler radar stations across the United States (Whiton et al. 1998). Since the widespread adoption of smartphones in the mid-2000s, hundreds of MWAs have become available, allowing anyone with this technology to access a weather radar display. However, when users interact with or view radar displays outside a weather broadcast, they must interpret and use the data on their own, without an expert explanation from a meteorologist. Thus, this study aims to understand how geographically displayed radar and related weather information are perceived and used to make decisions about impending weather.
Theoretical framework
The overarching goals of this study are to understand how weather radar is used by individuals and how the construal of situational risks and outcomes influences their perceived usefulness of a radar display. Usefulness is a key variable in this study as it helps to capture how understandable, effective, and/or accurate users find a radar display. Previous research by Saunders et al. (2018, 2021) defined and studied the perceived usefulness surrounding a radar display. To better understand how radar is perceived and used by individuals, this study incorporates construal level theory (CLT) and geospatial thinking into the theoretical framework created by Saunders et al. (2018) (see Fig. 1).

Full theoretical framework for the factors that may influence the perceived usefulness of a radar display, adapted from Saunders et al. (2018) and Saunders and Collins (2021).
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1

Construal level theory is grounded in general psychological research that explains how an individual thinks about objects or events that are separate from their immediate, self-centered environment (Trope and Liberman 2010). Objects, people, and events that are psychologically close to an individual are thought about more concretely, while psychologically distant objects, people, or events are construed more abstractly. Within psychological distance, there are four subjective dimensions: spatial distance, temporal distance, social distance, and hypothetical (probabilistic) distance (Trope and Liberman 2010). Trope and Liberman (2010) demonstrate the connections and relatedness among these four psychological distances. They also explain how these are influenced by the level of mental construal and can affect how people react to and think about a future reality.
CLT has been applied within risk perception and communication research (Zwickle and Wilson 2013) and for research on climate change (Duan et al. 2017; McDonald et al. 2015), and the CLT framework is also gaining attention in geographical research to understand how humans use distance to view the world around them, both in real and imagined contexts (Simandan 2016). While several applications of CLT exist for climate change and other environmental and technological risks, the framework has not yet been used to study how meteorological phenomena are construed.
To study the perception and use of radar displays, the model of geospatial thinking can help identify how and what information is interpreted from a radar display. Lobben and Lawrence (2015) explain how geospatial thinking skills involve three primitives: space, time, and attributes. Their synthesized geospatial thinking model can be applied in research, aiding in organizing ideas and in the creation and testing of hypotheses. Radar data are displayed both as static images and as animations that incorporate a dynamic attribute (reflectivity), over a dynamic time frame, within a given space. Lobben (2003) describes this type of animation as a “process animation” because each of the three components (attribute, time, and space) is dynamic. It has also been found that people think differently when they use maps than when they use other geometric objects, and evidence for this can be seen in the neural pathways imaged with functional magnetic resonance imaging (fMRI) (Lobben et al. 2014). Therefore, using a radar display requires the user to interpret meteorological data as they are displayed across a space that is moving forward in time.
Using a radar display requires the user to make assessments based on their meteorological knowledge, their past experiences both with using radar and with previous weather events (knowledge domain), and their spatial knowledge (familiarity of place). Radar animations, in some cases, can be used to estimate the location, timing, and movement of echoes from previous frames, providing an idea of how fast a storm or system is moving. This information is sometimes used to extrapolate into the future what may occur at the user’s location, depending on the type of precipitation event. The temporal and spatial attributes must be understood in combination with the user’s knowledge of meteorological attributes for the user to approximate how far away a weather event is from their location, estimate how much time they might have, and determine what they may experience. This study used radar reflectivity data because reflectivity is the most commonly viewed radar variable, a choice that also helped to focus the scope of the paper. However, several other radar variables are available through an array of sources, such as velocity data displays or products that help assess hail or tornadic debris signatures.
While this study does not explicitly test for the comprehension of using a radar display, some study aspects may reveal a radar user’s background meteorological knowledge. Radar users must have a way of interpreting what they view in a radar display and extrapolate to make decisions. Radar users would therefore need some level of meteorological knowledge to apply to what they are seeing when using a radar display. They also must use a scale to interpret the intensity of precipitation. Radar reflectivity is officially measured in decibels of Z (dBZ), which is defined by the NWS as a “nondimensional ‘unit’ of radar reflectivity that represents a logarithmic power ratio (in decibels, or dB) with respect to radar reflectivity factor, Z” (NWS 2022). However, on many radar displays, rainfall intensity is represented only by a color scale without any numerical values (Bryant et al. 2014). Scale color ramps are also not consistent across different radar displays. Users should have a general meteorological understanding of what to expect based on these scales (League et al. 2010; Wiggins 2014). In regard to severe weather situations, radar displays can include NWS warning products; therefore, it would be an advantage to the user if they are familiar with what the warnings represent and what to do if they are within an area with a warning (Lindell et al. 2016; Nagele and Trainor 2012). Klockow-McClain et al. (2020) found that how geospatial risks are depicted can influence threat estimates. Specifically, distance, risk boundaries, and color coding can change how risk is perceived over space.
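The logarithmic relationship in the dBZ definition above can be made concrete with a short sketch. The conversion from the linear reflectivity factor Z (in mm⁶ m⁻³) to dBZ is standard; the color thresholds below, however, are purely illustrative, since, as noted, color ramps are not consistent across radar displays.

```python
import math

def z_to_dbz(z: float) -> float:
    """Convert linear reflectivity factor Z (mm^6 m^-3) to dBZ,
    a logarithmic power ratio referenced to Z0 = 1 mm^6 m^-3."""
    return 10.0 * math.log10(z)

def dbz_to_color(dbz: float) -> str:
    """Map dBZ to a display color. These bins are hypothetical,
    loosely modeled on common rainbow ramps; real displays vary."""
    if dbz < 20:
        return "light green"   # drizzle / very light rain
    elif dbz < 35:
        return "green"         # light to moderate rain
    elif dbz < 45:
        return "yellow"        # moderate to heavy rain
    elif dbz < 50:
        return "orange"        # heavy rain
    else:
        return "red"           # intense rain, possible hail
```

Because the scale is logarithmic, a tenfold increase in Z adds only 10 dBZ, which is one reason color ramps, rather than raw numbers, dominate consumer displays.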
Geographic literacy may also influence the use and understanding of a radar display. Bednarz and Bednarz (2008) argue that increased abilities in spatial thinking are necessary to maximize use of the geospatial technologies that are now widely available. They reported that many people engage in spatial thinking only passively, treating maps and geospatial data as supplemental items, rather than making more direct, intentional use of maps and geospatial thinking. Hegarty et al. (2009) examined how naïve cartography—the use of simple symbology with static map representations—could have a negative effect on content understanding of a visual display. One example tested the animation component of a weather map. The authors note that participants preferred an animation to a still image and performed spatial information tasks better, despite other studies suggesting that animations may not improve map and task comprehension. In contrast, Drost et al. (2016) found that using an animation instead of a static image may provide more information, allowing for better comprehension. More recently, Witt and Clegg (2022) demonstrated the potential for animated ensemble forecast visualizations for hurricanes to improve viewer comprehension of possible hurricane paths and impacts. These concepts therefore still require further analysis.
Together, construal level theory and geospatial thinking, among other theories, provide the conceptual background for this research to explain how people might identify, comprehend, anticipate, and respond to near-future events. CLT could enhance personalization of an individual’s perception of threats from meteorological phenomena, allowing them to anticipate possible outcomes from viewing a radar display. This may include what type of precipitation is expected, the amount of precipitation they might receive, the timing of when they might expect a weather event, and if they will experience any weather hazards (lightning, flooding, tornadoes, etc.). In this way, CLT can be combined with risk perception theories for how people gather and respond to meteorological information. This relates to people’s attitudes, risk sensitivity, and specific fears that play a role in the perception of risk in general, or of a particular event or threat (Sjöberg 2000). Having the ability to perceive specific weather hazards from viewing a radar display is covered under the knowledge domains section of our theoretical framework. This knowledge of hazards and ultimately potential impacts caused from those hazards may come from personal past experiences and would be vital for knowing the appropriate protective actions to take during a hazardous weather event.
Using the Tampa Bay area of Florida as a case study, geospatial thinking and CLT guide the mixed methods used in this study to attain four objectives: 1) to discover the primary reasons why individuals seek out information from a radar display and what information they want most from this tool, 2) to understand what information a user receives from viewing a radar display and how that information is described, 3) to examine how time is estimated when viewing a radar display, and 4) to understand how different meteorological phenomena impact the usefulness of a radar display. Florida is an ideal place to study the use and perception of radar displays because it is an area of significant thunderstorm development (Collins et al. 2017).
2. Data and methods
a. Study participants
Participants were recruited from an advertisement placed at the end of the weather radar survey created by Saunders and Collins (2021). All participants were Tampa Bay area residents. There were 17 male and 13 female participants, for a total of 30 interviewees. Interviews were conducted from July through September 2019 mostly at public county libraries in the study area. The average age of a participant was 50, with a range between 25 and 73 years old.
The purpose of this study was not to test participants on their meteorological knowledge but rather to serve as a novel inquiry into how users describe what they see when they view a radar display and what information they gather from the tool. Twenty-two participants had no formal meteorological training. Of the eight participants who did have meteorological training, four said they had taken a meteorology course in college, three were self-taught through an online course/resources, and one said they were Skywarn1 trained. Overall, most participants came across as very knowledgeable about weather occurring in the Tampa Bay area and displayed experience with using a radar display. Interview length ranged from 31 to 96 min, with an average of 53 min. A $10 gift card was offered as an incentive for their participation in the study. Participants were given pseudonyms, which we use to refer to them throughout the article. This study was approved by the Institutional Review Board (IRB) and was certified exempt (IRB Pro00041097).
b. A structured scenario-based approach
The theoretical framework discussed above was critical for structuring our scenario-based research (Fig. 1). To identify how different meteorological phenomena impact the usefulness of a radar display, three psychological distances (spatial, temporal, and hypothetical) were incorporated into the “construal of situational risks and outcomes” section of the framework. The psychological distances aided the design of our study protocol by serving as a guide for selecting historical weather events on the basis of their meteorological characteristics. Events therefore varied in severity, direction of motion, onset location, translational speed, and duration. In addition, three important risk perception dimensions, including perceived likelihood (the probability of an individual being harmed), perceived susceptibility (how the individual views their vulnerability), and perceived severity (the degree of harm potentially caused by the hazard), were incorporated into the study protocol (Brewer et al. 2007). These dimensions became the degree of impact and the degree of certainty, which are included within the construal of situational risks and outcomes section of the theoretical framework.
c. Interview protocol and study questions
Semistructured interviews were conducted with two directives in mind. The first was to ask a series of follow-up questions to the survey from Saunders and Collins (2021) so that participants could elaborate on their previous responses about their radar use. Participants were first asked about their main reason for using a radar display and to think about the time in their life they found most memorable using a radar display. In contrast, they were also asked whether there had ever been a time when they did not find a radar display to be useful. Participants were asked what information they seek when using a radar display. The second directive was to observe participants using radar by having them participate in six radar scenarios with accompanying follow-up questions. This allowed participants to describe in their own words what they were viewing on a radar display, whether they were concerned about any meteorological hazards, and whether they were confident that they would be impacted during the scenario, among other topics. Before their interviews, participants were asked to fill out a preinterview survey to collect demographic data and to answer questions related to radar use, such as their preferred device type and how often they view a radar display.
The location used in the scenarios was Curtis Hixon Park (CHP) in downtown Tampa, which is centrally located and popular among Tampa Bay area residents. CHP is located 17.2 mi (27.7 km) northwest of the NWS Ruskin office’s Doppler weather radar, which has a spatial coverage radius between 80 and 140 mi (130–225 km). Before beginning the scenarios, each participant was shown images of CHP using Google Street View to make sure they were familiar with what their surroundings would look like. A dot was marked on the map to indicate where the participant was in each scenario (CHP) [for this publication, a white star outline has been added to Figs. 2–7 (described below) to highlight the location of the study area]. Participants were then guided through each radar scenario, each of which was split into two animation segments, with each segment having its own set of questions. Archived weather events (used in the scenarios) varied in their distance to the study location (CHP), their translational speed, the type of weather event (e.g., synoptic front vs airmass thunderstorm), and the degree of impacts expected at the park (severe or nonsevere). Tampa receives weather from different directions depending on the time of year; therefore, seasonality was also varied. The first animation segment consisted of the first six radar reflectivity frames, which were shown on a loop to participants while they answered three questions.
These questions used a 1–5 Likert-type scale to gather information such as how far the participant felt the (green) rainbands were from their location at the park (from 1 = far away to 5 = at their location) making their assessment from the last radar frame within the short loop; how certain they were that it would rain at the park based on the radar animation (from 1 = very uncertain to 5 = very certain); and then how much time they estimated they would have before any rain would begin at the park, if they thought it would rain (from 1 = having no time to 5 = having plenty of time). Participants were also asked to estimate how much time in minutes they would have before the green reflectivity values (lighter rain) would reach CHP after seeing the short animation for each scenario. Participants were told the month in which each scenario occurred to provide additional context. Each radar display had the valid date/time shown to the participants, and they were told that there were 6 min per radar frame.
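The extrapolation participants were asked to perform—estimating minutes until the leading edge of reflectivity reaches a fixed location from its frame-to-frame displacement—amounts to simple linear arithmetic. The sketch below is illustrative only (the function name and inputs are ours, not part of the study protocol); it assumes constant storm motion directly toward the location and uses the 6 min per radar frame stated in the scenarios.

```python
def estimate_arrival_minutes(distance_km: float,
                             km_per_frame: float,
                             minutes_per_frame: float = 6.0):
    """Linearly extrapolate minutes until an echo's leading edge
    reaches a location, given its displacement per radar frame.
    Returns None if the echo is stationary or moving away."""
    if km_per_frame <= 0:
        return None
    speed_km_per_min = km_per_frame / minutes_per_frame
    return distance_km / speed_km_per_min

# Example: a rainband 30 km away, advancing 3 km per 6-min frame
# (0.5 km per minute), would arrive in about 60 minutes.
eta = estimate_arrival_minutes(distance_km=30.0, km_per_frame=3.0)
```

Note that real storms accelerate, decay, and change direction, so this linear assumption is exactly the kind of judgment the interview questions probed.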

Radar frame of greatest reflectivity intensity for Curtis Hixon Park (CHP) for scenario NS1.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1


As in Fig. 2, but for scenario S1. A severe thunderstorm warning was present over the study area.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1


As in Fig. 2, but for scenario S2. A tornado warning was present over the study area.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1


As in Fig. 2, but for scenario NS2.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1


As in Fig. 2, but for scenario NS3.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1


As in Fig. 2, but for scenario S3.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1

For the second animation segment, participants were shown the full radar scenario animation (the first six frames plus an additional 17–22 frames depending on the duration of the scenario). Participants were asked how they thought their location at CHP would be impacted by the event, first estimating the intensity of rain that they would expect to experience (from 1 = experiencing light rain to 5 = experiencing heavy rain). Participants were also asked about their concern for any hazards for the scenario at the park and why. They then described what they were seeing as they looked at the radar display as well as what most drew their attention.
To conclude each scenario, participants were asked a series of questions to understand their decision-making and confidence when making decisions about what to do during a weather event while using a radar display. The first question asked whether they thought that the radar animation provided them with enough information to make a decision about what to do in the scenario, and they were then asked to explain their answer. Participants were asked if using a weather radar display helped them to feel confident when making a decision about a real-time precipitation event, using a 1–5 scale (from 1 = strongly disagree to 5 = strongly agree). The last question asked participants to rate on a 1–5 scale (from 1 = not at all useful to 5 = very useful) how useful they found the weather radar display for that particular scenario.
After the scenarios, participants were asked which scenario they would be most concerned about and why, about what prompted them to view a radar display, and what information they typically seek when they view radar. They were also asked several follow-up questions that included topics about zoom abilities and using “future radar” (reflectivity model data) animations and lightning indicators.
d. Radar scenarios
Scenarios were created using Gibson Ridge Analyst Software capable of displaying level-3 base reflectivity radar data. Archived radar data were downloaded from National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information and used to create six radar scenarios (National Centers for Environmental Information 2019). Three scenarios representing severe events were labeled with an “S” (S1, S2, and S3), and nonsevere events were labeled “NS” (NS1, NS2, and NS3). Radar images/animations were purposefully selected to vary the situational risks and outcomes observed in each scenario (distance, time, and meteorological attributes) (see Table 1 and Figs. 2–7 for details: the figures are labeled and displayed in the order in which they were shown to participants; the animations of each full radar scenario are available in the online supplemental material). Scenarios S1 and S2 (Figs. 3 and 4) had severe weather warnings issued that included the area of CHP. These warnings were included in the scenario and were available to participants. To select each scenario, the NOAA Storm Events Database was used to help search for specific events that impacted the Tampa Bay region, specifically at or near CHP. The Iowa Mesonet archived text products were used to verify the weather conditions for each of the scenarios, including area forecast discussions, local storm reports, and specific products such as severe thunderstorm warnings, tornado warnings, and flood advisories/statements.
Details of the six archived weather scenarios used in this study. Each scenario is broken down by attributes that describe the type of weather event, degree of impacts for the Tampa Bay region, and the number of minutes before precipitation would have started at CHP.


e. Mixed methods analyses
Interviews were audio recorded and transcribed, and all Likert-type scale and numeric responses were added to a spreadsheet; statistical analyses were performed using SPSS 25 software (Bryman 2012; Elliott and Woodward 2007). Qualitative analyses were performed through inductive code generation to allow themes to emerge by way of thematic analysis using NVivo 12 software (Denzin and Lincoln 2003; Gilbert 2001). All 30 interviews were coded by the principal investigator. A second outside researcher coded a subsection of the interviews to provide intercoder reliability. Any divergences in coding were discussed and modified until agreement was reached.
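The article does not name the intercoder reliability statistic used. As one common illustration only (a sketch, not the authors' procedure), Cohen's kappa for two coders' categorical codes on the same interview excerpts could be computed as follows:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    assigning one categorical code to each of the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal counts
    ca, cb = Counter(coder_a), Counter(coder_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    if pe == 1:
        return 1.0
    return (po - pe) / (1 - pe)
```

Kappa near 1 indicates strong agreement beyond chance; values near 0 indicate agreement no better than chance, which would prompt the kind of discussion-and-revision cycle the authors describe.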
3. Findings
a. Objective 1: Primary use and information seeking
Obviously, it’s a blueprint, of what kind of weather is on its way, short term maybe within two or three hours or less. And I watch it usually throughout the day and in the evening just to see what’s happening around me and anticipate (Patrick, age 63).
Twenty percent of participants directly mentioned wanting to know about severe weather, either a tropical cyclone or severe storms. Twenty-five participants indicated that they use a radar display several times a day, and the remaining five, several times a week. When they were asked to recall a most memorable time while using a radar display, over 75% mentioned using it during a hurricane, especially Hurricane Irma in 2017.
Survey data collected by Saunders and Collins (2021) found that “locating a hazard watch or warning for their area” and “locating a weather event” were the two most important pieces of information people were seeking when viewing radar; however, in that study, respondents chose from a predetermined list of responses. This study’s interviews, in which participants could describe their reasons for using radar in their own words, led to similar results, in that most participants said they use a radar display to understand what will happen for their area. The direction weather events move was mentioned often throughout the interviews as an important factor used for decision-making, as well as leading to the inability to make a decision. In fact, lack of directionality or storm motion was one of the most common reasons why a participant in this study said that a radar display was less useful.
Sixty percent of participants indicated that they “always” use their “smart” telephone (hereinafter smartphone) to view a radar display, and 30% indicated that they “usually” use their smartphone. Only 7% of participants indicated that they always use their TV to view radar (see Fig. 8). Radar was often brought up as participants discussed their daily routines. One-half of the participants noted using weather radar as part of their morning routine, just to “look around” their area for precipitation or storms in the vicinity, which suggests that participants have a nearby spatial lens when using radar. Some participants also referenced viewing a weather forecast in addition to viewing a radar display but phrased these actions in a way that did not differentiate between a weather forecast and radar. This merging of meteorological data sources reflects the integration of radar displays into weather apps that serve as multipurpose tools (i.e., delivering a forecast, a radar display, satellite imagery, and other tools). This also came up when participants discussed viewing hurricane information, a topic often raised when they were asked about a memorable time using radar: some participants mentioned viewing hurricane forecast tracks from model ensembles (spaghetti plots), tropical forecasts, satellite images, and even upper-air charts when asked about using a radar display. Therefore, it may be that some participants collectively view these meteorological data sources as “weather maps” while using the term “radar” display.

Participants indicated the frequency with which they use each device type to view a radar display.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1

b. Objective 2: Information and hazards conveyed
When describing what they saw when viewing radar, participants did not focus on the same characteristics for every scenario and instead described the most pressing attributes, which varied based on the spatial proximity and degree of impacts occurring in each scenario. Each of the nonsevere scenarios was described in a similar way that focused more on the location of reflectivity values. For scenario NS1, 21 participants described the location and/or direction in which “cells of rain” were moving. Participant descriptions for scenario NS2 focused on the structure and location of storm development. Descriptions for scenario NS3 mostly discussed the locations at which intensification of convective storms was occurring. Another interesting finding was that 7 participants for NS2 and 14 for NS3 noted that the weather event was likely caused by either the east coast or west coast “sea breeze.”
For scenario S1, the “linear” or “band” structure was highlighted the most, followed by defining this event as either a strong/severe thunderstorm or a thunderstorm that most likely occurred as a result of a “cold front.” There was a noticeable difference in how participants described scenario S2 when compared with S1, especially when discussing the colors of the reflectivity and the strength of the storms. Twenty-three participants labeled scenario S2 as a strong or severe thunderstorm, whereas 13 did so for S1. Several participants focused on the size and structure of the main line of storms in S2, with five participants describing the “bowing” nature of the severe thunderstorm. Scenarios NS2, NS3, and S3 were all summertime events and had a larger range of descriptions than the other scenarios. For scenario S3, most participants described how the precipitation continued to strengthen over the study area.
We explored participants’ perception of risk for potentially hazardous weather as they viewed each radar scenario. Scenarios were purposefully varied by severity/impacts and by the primary type of convective forcing so that each scenario had the potential for different hazards. In general, the most mentioned potential hazard in every scenario was lightning (Table 2). Because Florida—and the Tampa Bay region specifically—experiences some of the highest flash rate densities and lightning fatalities in the United States, this was expected (Albrecht et al. 2016; Ashley and Gilson 2009). Each scenario displayed only base reflectivity values and therefore did not include any form of lightning count or indication, as some radar displays offer. When participants were asked why they were concerned about lightning, there were a variety of recurring responses. The most common statement about lightning was that there is always the concern or threat of lightning because they live in Florida. Therefore, participants were more certain they would experience lightning than other hazards. An unexpected finding was that many participants stated that they knew there was lightning occurring during a scenario when they would see “red” or “orange” reflectivity values. This could mean that participants interpreted the red and orange reflectivity values to mean that negative impacts were more likely:
Participants were asked if they would be concerned for any meteorological hazards while viewing the radar display animation for each scenario. The center column displays the hazards about which participants stated they would be concerned in each scenario. Parentheses indicate the number of participants who stated each hazard; the boldface text indicates the hazards that occurred in the Tampa Bay area during the scenario. The right column describes what weather conditions and associated hazards occurred during each scenario in the Tampa Bay area. The Iowa Mesonet archived text products (i.e., local storm reports), the NOAA Storm Events Database, and the Storm Prediction Center’s Storm Reports were used to verify the weather conditions that occurred during each scenario.


Anytime I see like the reds and stuff, that’s the first thing, I’m like, “Oh, that’s going to be heavier, there’ll probably be more lightning and thunder associated with it” (Liza, 41).
Another participant stated that “Usually, you know, green, I don’t think of lightning, it’s just rain. Yellow and orange is more intense. I think orange is more wind” (Lucy, 51). Wind was the second most mentioned hazard for all scenarios except NS2, while scenario S2 had an equal number of mentions for wind and heavy rain. Just as for lightning, participants noted that seeing “red” and “orange” reflectivity values equated to experiencing greater wind speeds. One participant referred to this during scenario NS1:
Well in December, you don’t get a lot of lightning. But I mean, when I see the red and orange cells pass over, I would be concerned that there could be some high winds or there could be lightning that could typically show up when you have that much … that kind of weather pattern show up (Paul, 42).
Throughout the scenarios, participants said the colors representing reflectivity values drew the most attention, as color indicated the “intensity.” The association of color with the potential for hazards beyond rainfall intensity may stem from the use of a rainbow color scale, in which red is often associated with danger. One participant summarized what drew their attention: “I always look for red, red’s bad” (Jim, 64). Radar reflectivity is a measure of the strength of the energy returned to the radar, expressed in decibels of Z (dBZ). Therefore, any associations of color with hazards other than rain or hail would come from previous experiences with using radar and with hazardous weather events that had negative impacts.
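Because associations between color and hazard arise from how dBZ values are mapped onto the rainbow scale, it may help to make that mapping concrete. The following sketch is illustrative only, not part of the study protocol: the Marshall–Palmer coefficients (a = 200, b = 1.6) are one commonly used Z–R relation, and the color anchors in the comments are approximate.

```python
import math

def z_to_dbz(z):
    """Convert linear reflectivity factor Z (mm^6 m^-3) to dBZ."""
    return 10.0 * math.log10(z)

def dbz_to_z(dbz):
    """Inverse conversion: dBZ back to linear Z."""
    return 10.0 ** (dbz / 10.0)

def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Estimate rain rate R (mm/h) from dBZ via the Marshall-Palmer
    relation Z = a * R**b (a=200, b=1.6 is one common choice)."""
    return (dbz_to_z(dbz) / a) ** (1.0 / b)

# Approximate color anchors on a typical rainbow radar scale:
# ~20 dBZ "green" (light rain), ~35 dBZ "yellow", 50+ dBZ "red".
for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> ~{rain_rate_mm_per_h(dbz):.1f} mm/h")
```

Under this relation, “green” values near 20 dBZ correspond to well under 1 mm of rain per hour, while “red” values above 50 dBZ imply heavy rain or hail; note that the scale itself says nothing directly about lightning or wind.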
For severe scenarios S1 and S2, considerable attention was given to the severe weather warning polygons, which were often noted as “boxes” or “outlines”; however, there was some confusion as to what the “boxes” represented. While all but three participants made mention of the warnings in scenario S2, some participants were not sure if it was a tornado watch, tornado warning, or severe thunderstorm warning. The three participants who did not mention the warnings at all in their descriptions of the radar display also did not mention the possibility of a tornado when asked about potential hazards. As warning interpretation had not been a part of the original focus for this study, there was no way for participants to hover over or click on these warnings to gain more information. In some apps and online products, weather warnings are interactive and often provide additional information about the weather warning, so in a real-world scenario the participants might have had access to additional information about the warning. While part of the confusion about warning interpretation was due to the limitations of the Gibson Ridge Analyst software used to display the radar scenarios, it is still important to mention that some participants did not note or recognize a severe weather warning.
I’m trying to remember. I think yellow is watches and reds are warnings. And then it depends on if it’s tornadic or thunderstorms, severe thunderstorms. I always have to check sometimes as I go to different products, I always need to verify which one’s which (Diana, 55).
Scenario S2, which had a tornado warning in effect for CHP, had 12 participants mention the potential for a tornado (1 for a waterspout) and 18 who mentioned concern for high winds. This meant that less than one-half of the participants mentioned the possibility of a tornado occurring during an event with an active tornado warning at the study location. It was not expected that participants should ascertain a tornado from reflectivity alone. Instead, these findings may highlight the importance of radar displays having/needing legends that label features such as severe weather warnings.
Other hazards that participants reported were heavy rains and the potential for flooding. Scenario S3 was of particular interest for flooding hazards as up to 6 in. (15 cm) of rain fell around the Tampa Bay area during the event. However, more participants mentioned a concern for lightning (25 of 30 participants) and wind (19 of 30) than flooding (12 of 30). This was surprising, especially because 9 of 30 participants said they were concerned about flooding for scenario S1, which was more of a wind event.
Definitely rain and lightning, maybe wind. I’m sure there’s some wind in there somewhere, you know, there’s quite a bit of red. Again, maybe the rare waterspout, but maybe hail because its … June. So hail’s a possibility (Melissa, 41).
Inference of a waterspout could be due to the direction from which the storms were moving (from southwest to northeast) and from interpreting wind speeds using reflectivity values. Inference of hail could be from viewing the coverage of the “red” reflectivity values on the radar display and expecting more negative impacts. This fits well within the knowledge domains section of our framework; participant Melissa does note that a waterspout would be rare and that hail would be possible because it is June, both of which are pieces of information that she either learned or gleaned from personal experience.
c. Objective 3: Extrapolating/estimating time
To gain perspective on temporal decision-making and how individuals interpret temporal proximity, participants were asked how much time in minutes they thought they had before the green reflectivity values (lighter rain) would reach their area at CHP after seeing the short animation for each scenario. We used one-sample Student’s t tests to compare participants’ time estimates with the actual, calculated time until rain for each scenario. Because some participants gave a range of time for an estimate, the analyses were run two ways: first, with the ranges averaged; and second, using the lower-bound estimate. This comparison with actual time was not meant to assess skill but instead to help gain an understanding of how participants would extrapolate meteorological attributes, over a spatial area, into the future, to help relate the theories of CLT and geospatial thinking. Using either the averages or the lower-bound time estimates, we found that participants overestimated the amount of time they had for scenarios NS1, NS2, NS3, and S2 (see Table 3).
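The range-averaging step and the one-sample t test described above can be sketched as follows. The response data here are hypothetical, and the study’s actual analysis software is not specified, so this is only a minimal illustration of the method:

```python
import math
from statistics import mean, stdev

def midpoint(estimate):
    """Collapse a range response such as (10, 20) minutes to its
    average; scalar responses pass through unchanged."""
    return mean(estimate) if isinstance(estimate, tuple) else float(estimate)

def one_sample_t(estimates, actual_minutes):
    """One-sample t statistic comparing participants' time estimates
    with the known time until rain for a scenario."""
    n = len(estimates)
    m = mean(estimates)
    s = stdev(estimates)  # sample standard deviation (n - 1)
    return (m - actual_minutes) / (s / math.sqrt(n))

# Hypothetical responses (minutes); a tuple represents a stated range.
responses = [30, (20, 40), 45, 25, 60]
xs = [midpoint(r) for r in responses]
t = one_sample_t(xs, actual_minutes=25.0)
print(round(t, 3))  # positive t indicates overestimation
```

A positive t statistic indicates overestimation relative to the actual time; significance would be judged against a t distribution with n − 1 degrees of freedom.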
Participants’ mean estimated time for how long it would take for rain to reach them at the study location (CHP) for each radar scenario vs the actual time it took for rain to begin at CHP. Scenarios S2, NS1, NS2, and NS3 were all overestimated. Values in square brackets apply to the start of heavy rain; values in parentheses are a lower bound.


Scenario S1 was the only scenario that was underestimated, and, for scenario S3, estimates matched the actual mean as the scenario began with precipitation in the park. For S1 there was only one participant who overestimated the actual time and six who underestimated by 6 min or less. Participants may have underestimated the number of minutes they had before rain would begin during scenario S1 due to difficulties judging their distance to the storm system and the speed storms were moving. While reflectivity values appear to be closer to CHP, the storm system is moving at a slower speed, perhaps giving the illusion that they would have less time than they actually would. The structure and orientation of the storm may have also had an impact as some smaller pockets of reflectivity values appear slightly out ahead of the larger system.
In the opposite sense, for scenario S2, the mean difference was about 20 min more than the actual time. This scenario was interesting because eight participants estimated 30 min and eight estimated an hour, with only five participants providing times in between. There were two participants who underestimated the actual time, and four who overestimated by only 2 min. Scenario S2 reflectivity values appear to be farther away from CHP, but the storm is moving at a faster pace, thus making it seem like they would have more time before experiencing any rain. Therefore, the distance between a participant’s location and the storm system may have a greater influence on how participants extrapolate meteorological attributes (reflectivity values) into the future than the speed of the storm system. In this study, this led to participants overestimating the amount of time before rain would reach their location.
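The distance–speed trade-off discussed for scenarios S1 and S2 reduces to a simple arrival-time calculation. The sketch below uses hypothetical distances and storm speeds (not values from the study) to show how a nearby, slow-moving system can arrive later than a distant, fast-moving one:

```python
def minutes_until_rain(distance_km, speed_kmh):
    """Time (minutes) for the leading edge of a storm to cover a
    given distance at a constant speed."""
    return 60.0 * distance_km / speed_kmh

# Hypothetical values echoing the S1/S2 contrast:
near_slow = minutes_until_rain(15.0, 20.0)  # looks close on radar
far_fast = minutes_until_rain(40.0, 60.0)   # looks far away
print(near_slow, far_fast)  # the "far" storm arrives first
```

A viewer who anchors on apparent distance alone would reverse the ordering here, which mirrors the underestimation in S1 and overestimation in S2 reported above.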
In scenario NS1 the mean difference was about 9.5 min more than the actual time it would take for rain to begin at CHP, with the participant mean estimate around 15 min. For this scenario there was only one outlier of 45 min; even accounting for the outlier, the mean difference was 8.5 min. Ten percent of participants were within 1 min of the actual time, 57% estimated between 10 and 15 min, and the rest (33%) estimated 17.5 min or more.
Scenario NS2, representing a convective or “pop up” thunderstorm, proved to be a difficult scenario for estimating time. Two different times were analyzed: the first, 18 min, referenced the first, albeit short-lived, light rain to reach the park. Because of the sporadic nature of this event, CHP experienced a second and slightly heavier rain event around 60 min into the scenario. Using this second time marker, participants’ average time was an underestimate. We include both calculations in case participants estimated time based on the second occurrence of rain. While the NS2 scenario was analyzed for both the first and second rain events, note that participants were asked to estimate based on the first chance of rain.
Scenario NS3 was an event that appeared to be far to the east of the park at the first frame of the animation and most participants indicated that they were very uncertain that they would receive any rain; however, many noted that August is an unpredictable month when thunderstorms can “pop up.” Therefore, it was not surprising that NS3 had the largest range of estimated time, 102.5 min. It also had the largest standard deviation (20.8). There were eight participants who estimated within 5 min or less of the actual time for this event. The summertime airmass or pop-up storms (NS2 and NS3) were mentioned numerous times as being unpredictable and hard to estimate. Overall, we identified that participants in this study typically overestimated the amount of time they had before rain would begin in their area, typically by about 10–15 min. This section highlights some of the difficulties in being able to think geospatially. These results are preliminary; therefore, this is a topic that would be useful to study in more depth as part of future work.
d. Objective 4: Radar usefulness
Participants were asked to rate the usefulness of the radar display after viewing each scenario. Initially we hypothesized that if the degree of impact (severity or duration) and the degree of certainty for being impacted by a precipitation or storm event were greater, then the radar display should be perceived as more useful. It was also hypothesized that the closer a weather event is to an individual, either spatially or temporally, the radar display should be perceived as more useful. Descriptive statistics were used to compare the usefulness ratings, which highlighted slight variation between the six scenarios. Scenarios S1 and S2 had the highest usefulness ratings, with 90% of participants rating the radar display as very useful (5) on the five-point scale for scenario S1 and 93% for scenario S2. Scenarios NS2 and NS3 had the fewest ratings of 5 in comparison with the other scenarios, with 63% for NS2 and 60% for NS3. What is most interesting is that NS2 and NS3 had much larger ranges in usefulness ratings. This prompted further exploration to uncover the reasons for decreased usefulness ratings.
It is possible that the severity of a weather event has a positive relationship with the usefulness of viewing a radar display, as two of the severe scenarios, S1 and S2, had the highest usefulness ratings. However, an increase in a participant’s level of certainty that they will experience a weather event may also positively influence usefulness ratings. Therefore, the distance someone is from a weather event and the structure/type of weather event help to influence a participant’s level of certainty that they would experience the weather event in each scenario. One interesting example of how meteorological knowledge increased the usefulness ratings occurred in scenario NS3. Twenty participants were uncertain that they would be impacted by the weather event. However, there were participants who were able to identify a unique feature within the radar reflectivity and noted seeing either a “gust front,” “sea breeze,” or “outflow boundary.” Eight of those who specified this feature then indicated that the radar display provided them with enough information to make an independent decision. They stated that the radar made them very confident in this scenario and rated the radar display as very useful. In contrast, some of the participants who indicated they were not certain that the weather event would impact them mentioned that this event would have “caught them off guard.” Having prior knowledge and experience with viewing this type of convective weather event using a radar display helped participants to anticipate what weather was possible at CHP far better than those who did not. Of the eight participants who identified the feature, six had indicated that they had taken a meteorology course either for college credit or online.
When comparing participant groups who had taken a meteorology course with those who did not, NS3 was the only scenario that resulted in statistically significant differences in the distributions of participant’s certainty, confidence, and usefulness ratings.
Similarly, for scenario NS2, which displayed summertime convective thunderstorms, participants who indicated that the display was less useful, mostly said it was because of the uncertainty of where the storms were moving, given that they lacked direction. This highlights the importance of being able to interpret spatial proximity (see Fig. 9).

Participant quotations relating radar usefulness and uncertainty when using a radar display.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1

These examples also highlight the importance of being able to interpret temporal proximity, that is, estimate the amount of time a person has before they would be impacted by a weather event and whether they thought there was enough time to decide about what to do. Even though these storms were in the vicinity of the park, the usefulness ratings were lower due to not being able to make a decision about the direction the storms would move and ultimately, how much time participants would have before rain reached their location. This example expresses just how intricate and connected the relationships of distance, time, and meteorological attributes are and the importance of being able to understand all three variables.
Information sufficiency plays a key role in understanding how people use a radar display to make decisions and really speaks to the human elements connected to uncertainty within risk perception and decision-making. Just because someone has enough information to make a decision does not guarantee they will make the “correct” decision at the “correct” time. When asked if a radar display provided them with enough information to make a decision about what to do during the scenario, one participant stated: “Yes. Which doesn’t mean it’s the correct decision and only means I have the information to make it” (Ron, 65). To some degree, whether participants felt they were provided with enough information did influence how useful they rated the radar display. Many participants expressed that they use multiple radar displays (i.e., different MWAs or other sources) because they prefer specific features from each application or source. Others mentioned that they would use multiple radars (MWAs or sources) to verify what they were viewing between each radar application.
Other information discussed with participants concerned the use of other data and tools available within some radar displays, such as lightning indicators and “future radar.” The majority of participants found a lightning indicator to be a useful tool. There were a few who described it as messy or unnecessary. For example, one participant mentioned that if one has already heard thunder, then one knows one is at risk for lightning, deeming the indicator unnecessary. Another participant described why she does not find lightning detection useful (see Fig. 10). Again, although there were a few people who questioned how this information was collected and what the data represented, the majority found this feature to be useful.

Participant quotations that discuss their perception of lightning data.
Citation: Weather, Climate, and Society 15, 1; 10.1175/WCAS-D-22-0069.1

Another tool available within some mobile weather applications and from most television stations is the ability to view “future radar.” A “future radar” display depicts model reflectivity data over an area for a short window into the future, usually from 1 to 6 h. Of the 25 participants who responded about their use of “future radar” displays, only 5 said they found it to be a useful tool. Eight said they used “future radar” but had reservations about the quality or accuracy of the display, describing it as useful only for understanding the flow of the weather pattern; they did not trust the precipitation amounts or locations shown. The remaining 12 stated that they did not use “future radar” because they did not find it to be useful or accurate; most in this group said that they preferred to extrapolate into the future using their own knowledge of meteorological patterns based on what is currently shown on the radar display. The other 5 participants either did not answer this question or did not state whether they found it to be a useful tool.
It’s not the radar so much, it’s less confidence in what Mother Nature is going to do. The radar’s great. But what is Mother Nature up to? I prefer the storms that are in a high wind and you know, they’re coming down here to there. It’s those weird pop-up ones and those weird stationary ones, they don’t know what it’s going to do. It’s like a, it’s like an erratic child or something (Jay, 52).
Scenarios NS2 and NS3 had the lowest participant confidence ratings of the six scenarios, which mirrored the usefulness ratings.
In the winter, in the front situations it’s obvious that I could ignore that (false echo), but like in the summer when I literally need to know is it raining on the place that I’m supposed to be at like right now, and there’s all sorts of stuff, it’s like what is any of this, you know (Mike, 29).
This participant owns a company that performs most of its work outdoors and therefore relies on using radar to know when precipitation may be in the area. This participant noted that, especially during the summer months, this information could be the difference between keeping materials dry or delaying work. The complaints about delay or “lag” and “noise” suggest that some participants may not be as familiar with radar data and some of the data limitations or differences.
4. Conclusions
Uncertainty plays a key role in how people make decisions about impending weather events. A weather radar display is one tool that can be used to help address this uncertainty as it provides users with useful spatial and temporal inferences of meteorological data. The goal of this study was to highlight the information that is used and wanted most by radar users. It also shows the complexity and interconnectedness of meteorological data construed over distance and time, which are used to infer information and ultimately make decisions. This was done by incorporating construal level theory and the synthesized geospatial thinking model as theoretical background during the creation of the interview protocol and the design of the scenarios used in this study. To evaluate the perceptions of distance and time and how they are related, scenarios were designed with specific variations in spatial, temporal, and severity attributes. The synthesized model of geospatial thinking also served as a guide when interpreting participants’ responses to questions asked while viewing each scenario. It is important to understand how participants perceive distance and time in relation to meteorological hazards as these attributes may influence how they perceive risks associated with weather events.
Our findings show that radar reflectivity data are used most often as a tool to anticipate and in some cases plan for what will happen meteorologically, for their area, in the near future. This requires users to interpret what radar reflectivity is currently displaying and then extrapolate both spatially and temporally to conceptualize what meteorological attributes may occur at a location in the near future. In addition, users are trying to construe whether these meteorological attributes are hazardous as well as how certain they are that they will be impacted. A notable finding is that radar reflectivity is a tool used not only to infer information about precipitation but also to anticipate the degree of impacts for meteorological hazards such as lightning, wind, and severe weather phenomena such as tornadoes and hail. Some of these inferences go beyond what reflectivity values are intended to communicate, such as inferring that lightning or strong winds are present because red and orange reflectivity values are displayed on the radar. This finding fits well into the knowledge domains section of the theoretical framework and highlights the diversity of experiences people have when experiencing weather and the connections they make from past experiences that influence their future use and interpretation of meteorological information. This finding may also help to explain why several participants overestimated some of the hazards they expected to receive at the study location.
Overall, participants were knowledgeable about what Florida weather is capable of and what they should expect during different times of the year. This suggests that participants used Bayesian inference in an effort to lessen psychological distance associated with situational uncertainty and anticipated degree of impacts [as in Brügger (2020)]. This seasonal and place-based knowledge interacted with their domain-specific knowledge of weather radar data to influence how they described each radar scenario and how they explained knowing what hazards to anticipate. It would be of great interest to use a similar protocol in different geographic contexts to see whether location-based meteorological knowledge plays an important role in radar interpretation for other regions. We hypothesize that geographic location and prevailing weather patterns will have an important influence on how people use and interpret weather radar displays. It would likewise be useful to investigate whether the results would be similar if participants were not explicitly given the date or any seasonal information as part of the visual stimuli; would they ask for that information if it were not provided?
The average time that a participant had before any rain would begin in each scenario was commonly overestimated. This directly highlights the difficulties of geospatial thinking, which requires combining distance and time and extrapolating attributes for meteorological events. Especially when individuals must decide how to prepare for impending weather or whether to travel to avoid it, this finding could mean that they would not have enough time to carry out their plans before being impacted. Participants were more confident using a radar display during scenarios with a west-to-east flow pattern and less so during convective and sea breeze scenarios. Thus, even when participants tried to make the meteorological information provided in radar displays more concrete and relevant to their lives, they appeared to experience cognitive dissonance and possibly more abstract construal than expected on the basis of geographic proximity alone. This occurred more often in scenarios where geospatial thinking fundamentals (where and when) were challenged by the broader context, such as a meteorological environment characterized by slow and seemingly erratic storm movement. Future work is needed to gather more data on how distance and time are used to extrapolate meteorological data into the near future.
For most participants in most scenarios, a radar display was found to be a very useful tool that provided them with enough information to make a confident decision about what they should do when weather was approaching their area. However, the usefulness ratings decreased for many participants when the directionality of a precipitation or storm event was unclear, such as during a convective pop-up thunderstorm event or during an afternoon sea-breeze-induced event, thereby making it difficult to interpret spatial proximity. When discussing past experiences, radar data were described as inaccurate if participants experienced precipitation at their location but the precipitation was not displayed on radar, or vice versa. Some participants also disliked that there was a “lag” or delay in the update for radar images. These findings are also connected to the “credibility of a radar display” component of the theoretical model. Finding radar data to have a “lag” or delay may represent different understandings of how radar data are collected and displayed.
We found that 70% of our participants use more than one source or mobile weather application to gather their weather information even though the underlying meteorological data are the same (NWS radar). Often this is because they want specific features that only one application or source offers, or that one offers better than a competitor’s application, especially extra features such as a lightning indicator, satellite imagery, or hurricane-tracking capabilities. This is an important finding about the use of mobile weather applications that could be explored further. It points to the importance, in our theoretical framework, of display usability and knowledge-domain factors that may enhance comprehension of other types of weather information.
While it was intentional to include severe weather scenarios within this study, the perceptions of the severe weather warnings themselves were not originally intended to be a focal point for analysis. Therefore, the uncertainty surrounding these warnings as they are displayed on the radar will lead to future research. However, the discussions with participants surrounding how they typically interact with these warning polygons (being able to “hover over” or click on the polygon to get more information) and their inability to do so during this experiment highlight the importance of information sufficiency and understanding information-seeking behavior. If individuals are able to infer potential impending hazards when using a radar display, it is important that they are then able to determine the possible impacts from those hazards so that they take the appropriate protective actions. In addition, these findings may suggest the need for or importance of having legends that label features such as severe weather warnings.
This research has a few limitations, including that several participants gave some responses to Likert-type questions as a range, which were then averaged for calculations. While responses were elaborated on, no true comparisons for distance were made in this study because of the subjectivity of the measurement. To address this in a future study, we would increase the distance of storms for one of the scenarios and would ask participants to estimate how far away those storms are from their location using a standard measure of distance (e.g., miles or kilometers). Participants may also have had difficulty estimating the distance of storms because there was no scale bar on the radar display. This was not unique to the radar display used in this study; in fact, most radar displays do not provide a scale bar. Another limitation was that a laptop computer was used during the scenario portion of the interviews, which provided users with a larger, easier-to-view screen than a smartphone. Given that some respondents said they did use a computer or television to view radar in addition to a smartphone, this was not a large concern. However, for future research, use of a smartphone would be ideal. While our sample has characteristics that are useful for this study, it will be important to do complementary work with a sample that is representative of a broader population.
While the concept of usefulness captures several relevant components to explain why members of the public consult radar displays more or less frequently, there are more specific indications for future investigation. In this study, usefulness was often connected to the confidence participants had when making decisions using radar and the certainty they had about whether they would be impacted by a weather event. The severity of an event as well as being able to identify the direction of motion were also important. Future work should continue to explore those aspects to understand how construal of threats using weather radar displays can more effectively leverage concepts related to geospatial thinking and psychological distance. It was clear in this study that geographic aspects of location and movement were not sufficient alone because users drew on broader place-based seasonal patterns and previous experiences as a backdrop for interpretation of radar images. In fact, psychological distance may even increase and threats become more abstract in the absence of such preconditioning information because cognitive dissonance or confusion increases uncertainty. Future work should also address accessibility and motivational factors more thoroughly.
Acknowledgments.
The authors thank Dr. Meghan Cook for providing intercoder reliability for this study. Her expertise was a huge asset to ensuring the quality of data analyses. We thank Marshay Forbes for assistance in transcribing the interviews. Sections of this article have been previously published as part of the lead author’s dissertation (Saunders 2020). The authors also acknowledge the Tharp Fellowship for providing a research scholarship to support this study. Funds were used to pay for participant compensation in the form of $10 gift cards.
Data availability statement.
The interview data transcripts and recordings collected for this study are confidential and are stored on the lead author’s personal hard drive. Therefore, in adherence to human subject research guidelines and confidentiality agreements in our consent forms, these interview data are not publicly available.
Skywarn is the National Weather Service storm-spotter program. Volunteers can enroll in a class to become a spotter.
REFERENCES
Albrecht, R. I., S. J. Goodman, D. E. Buechler, R. J. Blakeslee, and H. J. Christian, 2016: Where are the lightning hotspots on Earth? Bull. Amer. Meteor. Soc., 97, 2051–2068, https://doi.org/10.1175/BAMS-D-14-00193.1.
Ashley, W. S., and C. W. Gilson, 2009: A reassessment of U.S. lightning mortality. Bull. Amer. Meteor. Soc., 90, 1501–1518, https://doi.org/10.1175/2009BAMS2765.1.
Bednarz, R. S., and S. W. Bednarz, 2008: The importance of spatial thinking in an uncertain world. Geospatial Technologies Homeland Security, D. Z. Sui, Ed., The GeoJournal Library, Vol. 94, Springer, 315–330, https://doi.org/10.1007/978-1-4020-8507-9_16.
Brewer, N. T., G. B. Chapman, F. X. Gibbons, M. Gerrard, K. D. McCaul, and N. D. Weinstein, 2007: Meta-analysis of the relationship between risk perception and health behavior: The example of vaccination. Health Psychol., 26, 136–145, https://doi.org/10.1037/0278-6133.26.2.136.
Brügger, A., 2020: Understanding the psychological distance of climate change: The limitations of construal level theory and suggestions for alternative theoretical perspectives. Global Environ. Change, 60, 102023, https://doi.org/10.1016/j.gloenvcha.2019.102023.
Bryant, B., M. Holiner, R. Kroot, K. Sherman-Morris, W. B. Smylie, L. Stryjewski, M. Thomas, and C. I. Williams, 2014: Usage of color scales on radar maps. J. Oper. Meteor., 2, 169–179, https://doi.org/10.15191/nwajom.2014.0214.
Bryman, A., 2012: Social Research Methods. 4th ed. Oxford University Press, 766 pp.
Collins, J. M., R. V. Rohli, and C. H. Paxton, 2017: Florida Weather and Climate: More Than Just Sunshine. University Press of Florida, 247 pp.
Denzin, N. K., and Y. S. Lincoln, 2003: Collecting and Interpreting Qualitative Materials. 2nd ed. Sage Publications, 696 pp.
Drost, R., M. Casteel, J. Libarkin, S. Thomas, and M. Meister, 2016: Severe weather warning communication: Factors impacting audience attention and retention of information during tornado warnings. Wea. Climate Soc., 8, 361–372, https://doi.org/10.1175/WCAS-D-15-0035.1.
Duan, R., A. Zwickle, and B. Takahashi, 2017: A construal-level perspective of climate change images in US newspapers. Climatic Change, 142, 345–360, https://doi.org/10.1007/s10584-017-1945-9.
Elliott, A. C., and W. A. Woodward, 2007: Statistical Analysis Quick Reference Guidebook. Sage Publications, 259 pp.
Gilbert, N., 2001: Researching Social Life. 2nd ed. Sage Publications, 406 pp.
Hegarty, M., H. S. Smallman, A. T. Stull, and M. S. Canham, 2009: Naïve cartography: How intuitions about display configuration can hurt performance. Cartographica: Int. J. Geogr. Inf. Geovisualization, 44, 171–186, https://doi.org/10.3138/carto.44.3.171.
Henson, R., 2010: Weather on the Air: A History of Broadcast Meteorology. Amer. Meteor. Soc., 241 pp.
Klockow-McClain, K. E., R. A. McPherson, and R. P. Thomas, 2020: Cartographic design for improved decision making: Trade-offs in uncertainty visualization for tornado threats. Ann. Amer. Assoc. Geogr., 110, 314–333, https://doi.org/10.1080/24694452.2019.1602467.
League, C. E., W. Diaz, B. Philips, E. J. Bass, K. Kloesel, E. Gruntfest, and A. Gessner, 2010: Emergency manager decision-making and tornado warning communication. Meteor. Appl., 17, 163–172, https://doi.org/10.1002/met.201.
Lindell, M. K., S. K. Huang, H. L. Wei, and C. D. Samuelson, 2016: Perceptions and expected immediate reactions to tornado warning polygons. Nat. Hazards, 80, 683–707, https://doi.org/10.1007/s11069-015-1990-5.
Lobben, A., 2003: Classification and application of cartographic animation. Prof. Geogr., 55, 318–328, https://doi.org/10.1111/0033-0124.5503016.
Lobben, A., and M. Lawrence, 2015: Synthesized model of geospatial thinking. Prof. Geogr., 67, 307–318, https://doi.org/10.1080/00330124.2014.935155.
Lobben, A., M. Lawrence, and R. Pickett, 2014: The map effect. Ann. Amer. Assoc. Geogr., 104, 96–113, https://doi.org/10.1080/00045608.2013.846172.
McDonald, R. I., H. Y. Chai, and B. R. Newell, 2015: Personal experience and the “psychological distance” of climate change: An integrative review. J. Environ. Psychol., 44, 109–118, https://doi.org/10.1016/j.jenvp.2015.10.003.
Nagele, D. E., and J. E. Trainor, 2012: Geographic specificity, tornadoes, and protective action. Wea. Climate Soc., 4, 145–155, https://doi.org/10.1175/WCAS-D-11-00047.1.
National Centers for Environmental Information, 2019: NEXRAD level III radar data. NOAA National Center for Environmental Information, accessed 1 December 2018, https://www.ncei.noaa.gov/has/HAS.FileAppRouter?datasetname=7000&subqueryby=STATION&applname=&outdest=FILE.
NWS, 2022: Glossary. NOAA, accessed 28 November 2022, https://w1.weather.gov/glossary/index.php?letter=d.
Saunders, M. E., 2020: The perceived usefulness of a weather radar display by Tampa Bay residents. Ph.D. dissertation, University of South Florida, 131 pp., https://digitalcommons.usf.edu/etd/9038/.
Saunders, M. E., and J. M. Collins, 2021: Factors influencing the motivations and perceived usefulness of a weather radar display in Tampa Bay. Bull. Amer. Meteor. Soc., 102, E1192–E1205, https://doi.org/10.1175/BAMS-D-20-0052.1.
Saunders, M. E., K. D. Ash, and J. M. Collins, 2018: Usefulness of the United States National Weather Service radar display as rated by website users. Wea. Climate Soc., 10, 673–691, https://doi.org/10.1175/WCAS-D-17-0108.1.
Simandan, D., 2016: Proximity, subjectivity, and space: Rethinking distance in human geography. Geoforum, 75, 249–252, https://doi.org/10.1016/j.geoforum.2016.07.018.
Sjöberg, L., 2000: Factors in risk perception. Risk Anal., 20 (1), 1–12, https://doi.org/10.1111/0272-4332.00001.
Trope, Y., and N. Liberman, 2010: Construal-level theory of psychological distance. Psychol. Rev., 117, 440–463, https://doi.org/10.1037/a0018963.
Whiton, R. C., P. L. Smith, S. G. Bigler, K. E. Wilk, and A. C. Harbuck, 1998: History of operational use of weather radar by U.S. weather services. Part II: Development of operational Doppler weather radars. Wea. Forecasting, 13, 244–252, https://doi.org/10.1175/1520-0434(1998)013<0244:HOOUOW>2.0.CO;2.
Wiggins, M. W., 2014: Differences in situation assessments and prospective diagnoses of simulated weather radar returns amongst experienced pilots. Int. J. Ind. Ergon., 44, 18–23, https://doi.org/10.1016/j.ergon.2013.08.006.
Witt, J. K., and B. A. Clegg, 2022: Dynamic ensemble visualizations to support understanding for uncertain trajectories. J. Exp. Psychol. Appl., 28, 451–467, https://par.nsf.gov/servlets/purl/10304290.
Zwickle, A., and R. Wilson, 2013: Construing risk: Implications for risk communication. Effective Risk Communication, J. Arvai and L. Rivers III, Eds., Routledge, 190–203, https://www.taylorfrancis.com/chapters/edit/10.4324/9780203109861-19.