There have been multiple efforts in recent years to simplify visual weather forecast products, with the goal of more efficient risk communication for the general public. Many meteorological forecast products, such as the cone of uncertainty, storm surge graphics, warning polygons, and Storm Prediction Center (SPC) convective outlooks, have created varying levels of public confusion, resulting in revisions, modifications, and improvements. However, the perception and comprehension of private weather graphics produced by television stations have been largely overlooked in peer-reviewed research. The goal of this study is to explore how the extended forecast graphic, more commonly known as the 7-day or 10-day forecast, is utilized by broadcasters and understood by the public. Data were gathered from surveys of both the general public and broadcast meteorologists. Results suggest this graphic is a source of confusion and highlight a disconnect between the meteorologists producing the graphic and the content prioritized by their audiences. Specifically, the timing and intensity of any precipitation or adverse weather events are the two most important variables from the viewpoint of the public. These variables are generally absent from the extended forecast graphic, forcing the public to draw their own conclusions, which may differ from what the meteorologist intends to convey. Other results suggest the placement of forecast high and low temperatures, use of probability of precipitation, icon inconsistency, and the length of time the graphic is shown also contribute to public confusion and misunderstanding.
There is limited nonproprietary research concerning the efficacy and comprehension of individual television weather forecast graphics. Although smartphone weather apps are quickly overtaking television broadcasts as people’s primary source of weather information (Phan et al. 2018; Zabini 2016), television remains an important staple in the suite of modalities used to consume weather information, especially among the 55 and older age demographic (Pew Research Center 2019, 2018, 2008). Weather forecast graphics, such as hourly forecasts and future satellite and radar, are often also used in local television stations’ weather apps and are therefore deserving of further study. Due to the multiple formats, types of information, and different time periods, these graphics are a potential source of confusion for viewers. Relatively little attention has been devoted to the possible misperceptions resulting from these graphics. The graphics are often displayed for only 20 s or less, without the presence of an on-camera meteorologist, or posted on social media with little or no contextual information included.
Previous research has sought to understand and close gaps in knowledge and communication between meteorologists and the public on products like tornado watches (Mason and Senkbeil 2015), the use of different colors in radar displays (Bryant et al. 2014), and tropical cyclone track and surge graphics (Lindner et al. 2019; Bostrom et al. 2018; Saunders and Senkbeil 2017; Sherman-Morris et al. 2015). Less is known about how the public perceives and understands graphics they routinely see in local television news broadcasts. The primary graphic of concern in our research is the extended forecast graphic (EFG) or long-range forecast graphic (i.e., the 5-, 7-, or 10-day forecast).
EFGs typically show between 3 and 10 days of predicted weather. Most EFGs shown by television stations across the country are formatted similarly with vertical panels containing the day of the week, forecast high and low temperatures, weather icons, and a probability of precipitation (PoP). In some cases, single-word or single-phrase text descriptions of the day’s weather are used instead of or in addition to the PoP. Information within the EFG is highly sought after online (Phan et al. 2018) and on television. This graphic is a cornerstone of local television weather broadcasts. It is often mentioned (but not shown) in teases on air and online to increase viewership and is sometimes shown multiple times during a weather broadcast. The format of local television weather broadcasts and the EFG may play an important role in the effectiveness of communicating a forecast, but this is a nascent area of research. This paper lays the groundwork for understanding more about how local television weather forecasts are formatted, what graphics are shown, what information is shown on graphics, and the length of time devoted to each graphic.
EFGs are a typical location for forecasters to display PoPs, a forecast variable known to be a source of confusion. The National Weather Service first used numerical PoPs in forecasts in 1965 (Murphy et al. 1980). The use of PoPs has since become widespread and is now included in almost every weather forecast on television, smartphone weather apps, newspapers, radio (including weather radio), and the internet. The use of PoPs has become so ubiquitous, meteorologists believe the public has grown attached to them and expects to see them in weather forecasts (Stewart et al. 2016; Zabini et al. 2015). This is despite the conclusions of abundant previous research that demonstrates the public’s poor understanding of the PoP (Kox and Thieken 2017; Abraham et al. 2015; Stewart et al. 2016; Zabini et al. 2015; Morss et al. 2010, 2008; Gigerenzer et al. 2005; Murphy et al. 1980). Confusion surrounding the PoP is not limited to the public. Stewart et al. (2016) found meteorologists are not in agreement with what a PoP represents or with how it should be implemented in a forecast. Surprisingly, over 70% of meteorologists surveyed in Stewart et al. (2016) had a different interpretation of the PoP from the National Weather Service’s (NWS) definition. Broadcast meteorologists exhibited the lowest confidence in their interpretation of how a PoP should be used (Stewart et al. 2016).
The NWS defines PoP as a combination of the forecaster’s confidence that precipitation will occur somewhere in the forecast area and the percent of the forecast area that will receive measurable precipitation (National Weather Service 2008). Assuming the forecaster’s confidence level is 100%, the PoP becomes strictly an expression of how much of the forecast area will receive measurable precipitation. In this case, the PoP is no longer a percent chance of rain, but rather a percent coverage of rain. There is currently no distinction on EFGs as to which of these interpretations of the PoP (Joslyn et al. 2009) should be used. Previous research (Klockow-McClain 2019; Joslyn et al. 2009) has concluded that there will be errors in interpretation of the forecast by the public without an explicit explanation of PoP in a given forecast (the reference class, timing, amount, etc.). Indeed, Morss et al. (2008) demonstrate the public’s confidence in precipitation forecasts is lower than that of other forecast components, mainly temperature. It is important to minimize these errors because the misinterpretation of a forecast can erode the relationship of trust between the public and broadcast meteorologist (Joslyn et al. 2009; Sherman-Morris 2005).
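The arithmetic behind the NWS definition can be sketched in a few lines. The values below are hypothetical, chosen only to illustrate how two different interpretations collapse into the same number:

```python
def pop(confidence: float, areal_coverage: float) -> float:
    """PoP per the NWS definition: forecaster confidence that precipitation
    will occur somewhere in the forecast area, multiplied by the fraction of
    the area expected to receive measurable precipitation, as a percentage."""
    return confidence * areal_coverage * 100

# 50% confident that rain develops, covering 80% of the area if it does -> 40% PoP
print(pop(0.5, 0.8))  # 40.0

# With 100% confidence, PoP reduces to pure areal coverage: 40% of the area -> 40% PoP
print(pop(1.0, 0.4))  # 40.0
```

Note how two very different weather scenarios yield the same 40% PoP, which is one source of the interpretive ambiguity discussed above.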
Confusion surrounding the PoP continues despite the efforts of broadcast meteorologists to educate the public (Spann 2017; Shepherd 2015; Panovich 2013). One goal of this paper is to better understand what forecast variables the public prioritizes. If PoP information is less important than other forecast components, the PoP could be modified or removed in an effort to reduce confusion. Murphy (1993) opined that one of the aspects of creating a good forecast was to include all the information that was important to the user or users. This paper adds to existing knowledge about what forecast variables are important to a group of users, but more importantly the ways in which the public prioritizes this information. It should be the goal of meteorologists producing graphics shown in weather broadcasts to include the information the public prioritizes in a format that is easily interpretable. Previous research has shown that while the public has demonstrated a desire to see a PoP (Morss et al. 2008), it was not the most important part of the precipitation forecast (Demuth et al. 2011). Other variables, like timing, intensity, and duration of the precipitation were listed as important (Demuth et al. 2011), but these aspects of the forecast are rarely presented on the EFG.
Four key research questions will be answered in this study:
1) What forecast variables are prioritized by the public and are they included on the EFG?
2) To what extent can the public comprehend and answer questions about a forecast after looking at an EFG?
3) What weather graphic do broadcasters say is most important to their audience and why?
4) For what length of time is the EFG shown during local weather broadcasts on television?
The data collected from question 2 will reveal whether or not the misinterpretations of the PoP described earlier remain present in our sample. Other factors analyzed in this research include the length of time the EFG is displayed during the television broadcast and where in the broadcast the EFG is shown relative to other graphics.
Methods and data collection
Public survey: Design, distribution, and data analysis.
Design and distribution.
A 33-question survey consisting of free-response and Likert-scale questions, forecast scenarios (Sealls 2015), and EFG interpretations was created in Qualtrics. The survey was distributed by Qualtrics to a database of their users who were at least 30 years old and were residents of Alabama, Georgia, Mississippi, or Tennessee (Fig. 1). The survey was completed by 158 people. The decision was made to limit the survey to this age group because questions related to graphics routinely shown on television were asked, and this represents, roughly, the demographic of people for whom television remains a popular modality to consume weather information (Pew Research Center 2018, 2008). The survey was limited to residents in four states in the southeast United States because this is a region where precipitation occurs year-round and as such, residents routinely consume precipitation forecasts. Confusion about the use of PoP seems to peak during the warm season when the precipitation regime is dominated by airmass convection or along the sea breeze front (Spann 2017; Hill et al. 2010). Additionally, previous projects (Joslyn et al. 2009) have limited their study area for similar investigations to regions that are notoriously wet. Because precipitation events are becoming more extreme in the southeast United States (Skeeter et al. 2019), further examination of the efficacy of precipitation forecasts in this region is of particular interest.
Participants were asked to pick out basic forecast data, like the forecast high temperature, timing, amount, and intensity of precipitation using the EFGs shown in Fig. 2. There was no time limit set for how long someone could view the EFG in the survey, as the graphic was visible on each page where there was a question related to it. Those who took the survey were then prompted to comment on what they liked or disliked about each EFG. Survey participants also ranked the following precipitation forecast variables from most important to least important: timing, intensity, duration, amount, percent chance (PoP). Tied ranks were not permitted.
A mixture of quantitative and qualitative procedures was used to determine whether the EFG is a source of confusion. For public audiences, previous research (Demuth et al. 2011) found that the most important elements of a forecast were timing and intensity. Thus, the first objective of this research was to test the robustness of those findings. Participants rated the importance of each precipitation forecast variable on a Likert scale of 1–5, with 1 representing very low importance and 5 indicating the variable was extremely important. A chi-square test was performed to test the null hypothesis that the number of responses across the categories of precipitation forecast importance is proportional between the less important (1–3) and more important (4–5) ranks.
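The structure of this test can be sketched as follows. The counts below are hypothetical placeholders, not the study's response data, so the resulting statistic will not match the value reported in the results; the sketch only illustrates the Pearson chi-square computation on a 2 × 5 table (less/more important × five precipitation variables):

```python
import math

# Hypothetical response counts (NOT the study's data): each of five precipitation
# variables is ranked by the same respondents; rows split responses into the
# "less important" (ranks 1-3) and "more important" (ranks 4-5) categories.
less = [40, 55, 110, 100, 120]   # timing, intensity, duration, amount, PoP
more = [112, 97, 42, 52, 32]

n = sum(less) + sum(more)
row_tot = [sum(less), sum(more)]
col_tot = [l + m for l, m in zip(less, more)]

# Pearson chi-square statistic against the null that the less/more split
# is proportional across the five variables
chi2 = 0.0
for j in range(len(less)):
    for i, obs in enumerate((less[j], more[j])):
        expected = row_tot[i] * col_tot[j] / n
        chi2 += (obs - expected) ** 2 / expected

dof = (2 - 1) * (len(less) - 1)  # (rows - 1) x (columns - 1) = 4

# For an even number of degrees of freedom the chi-square survival function
# has a closed form: P(X > x) = exp(-x/2) * sum_{k < dof/2} (x/2)^k / k!
half = chi2 / 2
p = math.exp(-half) * sum(half**k / math.factorial(k) for k in range(dof // 2))
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3g}")
```

With four degrees of freedom, a statistic this large corresponds to a vanishingly small p value, i.e., the null of proportional counts would be rejected.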
To measure the ability of participants to pick out data from an EFG, respondents were shown the EFG in Fig. 2 and asked to answer the following questions:
1) What is the chance of rain on Sunday?
2) What time will it rain on Sunday?
3) How much rain will fall on Sunday?
4) What will be the low temperature Sunday morning?
5) How much rain will fall between Tuesday and Wednesday?
6) What did you like or dislike about the graphic?
Questions 1–5 have correct answers and are therefore graded and discussed as percentages correctly answered. Question 6 was an open-response question about the public’s opinions of the EFG, which required categorizing responses into themes for discussion. These themes are used to assess the comprehension and perception of the EFG (Fig. 2).
Broadcaster survey: Design, distribution, and data analysis.
Design and distribution.
Because broadcast meteorologists are the producers and communicators of EFGs, their insight into better understanding this issue is invaluable. A 13-question survey was developed using Qualtrics and included free-response, closed-response, and Likert-scale questions. Limited information was gathered about a broadcaster’s personal information. The primary goal of surveying broadcasters was to gain more information about how they utilize the EFG, and what methods of conveying weather information they felt were or were not effective. Specific questions included how many days’ worth of data are typically included in the EFG, when in the broadcast the EFG is shown, how long the EFG is shown, and what the EFG’s purpose is in the forecast. Other data collected in this survey includes the typical shift the broadcaster works, state where currently employed, and opinions on the effectiveness of the PoP.
A different distribution method was used for the broadcaster survey. Responses to this survey were solicited twice through a private online group of over 2,500 broadcast meteorologists. Additionally, the National Weather Association emailed their database of members requesting participation from broadcast meteorologists across the country. Unlike the public survey, no geographic restrictions or age restrictions were placed on the broadcaster survey. Despite these efforts, the response rate for the broadcaster survey was less than 5%; 113 broadcast meteorologists completed the survey.
The broadcaster survey consisted of open-ended questions with longer responses, as it was important to capture broadcasters’ thoughts on the EFG. Questions were not limited to the communication of precipitation forecasts. Rather, questions focused on how broadcasters utilize weather graphics like the EFG, the length of time broadcasters spend creating graphics, how long they show the graphics during broadcasts, and opinions about what graphics were most important to their audiences. Specifically, broadcasters were asked to comment on the importance of the EFG, its exact purpose, and how long they show it during the broadcast. Thus, the results for this part of the research are summarized by descriptive statistics and qualitative thematic content lists of responses. The content themes were divided into categories indicating the EFG is important or very important, and categories indicating that something else was more important.
An additional step in the process of evaluating the EFG feedback from broadcasters was to view features of the EFG used in television weather broadcasts. Sixty television weather segments from 20 television stations across the study area shown in Fig. 1 were sampled during the summer of 2018. Morning, midday, and evening weekday newscasts were observed either live or on demand to measure the length of time the EFGs were shown in the main weather segment. Since the segment times did not follow a normal distribution, a nonparametric Kruskal–Wallis (KW) test was used to test for significant differences in the length of time the EFG was shown across each of the time categories.
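The KW test ranks all observations together and compares mean ranks across groups. A minimal sketch of that computation on hypothetical segment times (not the study's measurements; real data with tied values would also need a tie correction):

```python
# Hypothetical EFG display times in seconds (NOT the study's measurements):
# five newscasts each from the morning, midday, and evening slots.
morning = [9.0, 11.5, 12.0, 13.5, 14.0]
midday  = [10.5, 12.5, 13.0, 15.0, 16.5]
evening = [17.0, 19.5, 21.0, 24.0, 26.0]

groups = [morning, midday, evening]
pooled = sorted(t for g in groups for t in g)
rank = {t: i + 1 for i, t in enumerate(pooled)}  # no tied values in this sketch

n_total = len(pooled)
# Kruskal-Wallis H statistic: H = 12/(N(N+1)) * sum(n_i * mean_rank_i^2) - 3(N+1)
h = 12 / (n_total * (n_total + 1)) * sum(
    len(g) * (sum(rank[t] for t in g) / len(g)) ** 2 for g in groups
) - 3 * (n_total + 1)

print(f"H = {h:.2f}")
```

H is compared against a chi-square distribution with k − 1 = 2 degrees of freedom; values above the 5% critical value of 5.99 indicate a significant difference among the time slots.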
Public survey characteristics.
Demographic data were collected for 158 participants in the public survey (Table 1). The same demographic data averaged across the study area are also presented in Table 1 (Reed 2019). Respondents in age groups 30–39, 40–49, 50–59, and 60+ were evenly distributed, with 41, 35, 41, and 41 members in each respective group. White females made up the overwhelming majority of respondents in the general public survey. Females composed 77% (males 23%) of our sample. The average across our four-state sample area is 51% female (49% male). However, our sample is more closely aligned with data (Pew Research Center 2018) that show women are more likely to watch local news, where they would see an EFG. A total of 76% of those who took the survey listed their ethnicity as White/Caucasian. Most participants reported themselves as having advanced education, with only 5% possessing less than a high school diploma.
The sample represents a mixture of urban and suburban zip codes (Fig. 1). Approximately 77% of people say they check their local weather forecast at least once per day. Local television was the most frequent method for checking weather conditions (28%), followed closely by smartphone weather apps (21%). A total of 86% of respondents reported that their source of weather information includes a PoP with 91% saying they pay attention to it.
What forecast variables are most important for the public?
Results suggest the EFG is a source of confusion among members of the public, showcasing a potential disconnect between broadcast meteorologists and their audiences. The EFG contains vague or limited information, omits or fails to prioritize the information the public values most, is not designed to communicate hazardous or otherwise atypical weather, and rarely, if ever, shows nighttime weather.
Results of the chi-square test on ranked forecast variables are shown in Fig. 3. The null hypothesis that the number of responses across the categories of forecast importance is proportional between the less important (ranks 1–3) and more important (ranks 4–5) categories is rejected. The disproportionality between the two groups is statistically significant: χ2(4, N = 152) = 27.928, p < 0.001 (Table 2). This result is in agreement with Demuth et al. (2011): with regard to precipitation, timing and intensity are most important to the public. These two factors are absent from the EFG in Fig. 2.
EFG comprehension and perception.
As described in the methods, six questions were asked to assess public comprehension and perception of a sample EFG. Results from each question are presented in Fig. 4, where each question is labeled above its corresponding plot. Questions related to the timing and amount of expected precipitation within the EFG gave participants the most trouble. The dominant answers for each question in Fig. 4 show that 72% of people were not sure what time it would rain Sunday, 86% were not sure how much rain would fall Sunday, and 79% were not sure how much rain would fall between Tuesday and Wednesday. Participants struggled to answer these questions related to the precipitation forecast, most likely because there was no information displayed on the EFG regarding timing or amount of expected precipitation. The strongest evidence of this graphic creating confusion is seen when looking at the answers to question 2, where 72% of people said they were not sure when it would rain Sunday. This is problematic because timing was the forecast variable ranked as most important by participants. Without the timing information, members of the public are left to guess and fill the information gaps on their own.
Results from this question agree with previous studies (Zabini et al. 2015; Joslyn et al. 2009) that suggest that in search of additional information not present on the EFG, people try to infer the information they want from the weather icon present, the PoP, or any text that may be present (i.e., “isolated,” “scattered”). The second most frequently chosen answer to question 2 was “all day.” Previous research has shown the public tends to mistakenly conflate high PoPs with either high-intensity and/or high-amount and/or long-duration precipitation events (Zabini et al. 2015; Joslyn et al. 2009). This can sometimes be much different from what the meteorologist intends to convey. Indeed, Sealls’s (2015) study of broadcast meteorologists and their audiences revealed differences in how both groups assigned PoPs to various forecast scenarios. Because Sunday’s forecast shows a 90% PoP, some may have interpreted this as meaning it was going to rain all day, which may or may not have been the intent of the broadcast meteorologist.
Furthermore, the single weather icon shown on each day of the EFG is often misinterpreted (Zabini et al. 2015). When surveyed on the interpretation of weather forecast icons like the ones used in the EFG, Zabini et al. (2015) and Joslyn et al. (2009) find that the public’s interpretation of the forecast icon was almost always different from what the forecaster intended. For example, a broadcast meteorologist may select an icon with sunshine and a small thunderstorm to indicate the chance of isolated afternoon summer storms, but this may be interpreted by the public as an assurance of precipitation. Another possible explanation of people believing it was going to rain all day Sunday (question 2) was because of the heavy rain icon shown. This type of icon may represent to the public a guarantee of rain and if this conflicts with what the meteorologist had in mind when choosing that icon, the forecast could be falsely perceived as inaccurate (Joslyn et al. 2009), which over time may negatively impact the relationship of trust between the broadcast meteorologist and the audience (Sherman-Morris 2005; Joslyn et al. 2009; Joslyn and LeClerc 2012; Ripberger et al. 2015). Participants were able to identify the PoP included in Sunday’s forecast (94% accuracy), as well as the low temperature forecast for Sunday morning (82% accuracy). Because these two elements are displayed prominently in the EFG in Fig. 2, the level of accuracy of the responses is not surprising.
The open-ended responses to questions about the EFG in Fig. 2 produced a variety of answers and opinions. Specifically, a question prompting participants to comment about the EFG offered perhaps the greatest insight into what people like or dislike about the EFG and how the EFG could possibly be modified to better match the demands of public audiences (Table 3). Comments were summarized into categorical themes. Some of the themes contained multiple comments that could be easily grouped together, while other themes consisted of one common phrase. Of the 157 participants who responded with comments, a total of 61 (39%) made critical comments about the deficiencies of the EFG. The most frequent comments from the “critical” category (category 1) were a desire for more information, more specific information, and the timing, intensity, and amount of precipitation to be included. The lack of subdaily timing of precipitation was the primary problem for four other participants. The “positive or favorable opinion” category (N = 27, 17%) was characterized by general likes with less feedback than the “critical” category provided. The “simple and easy to understand” category had 11 responses. Likewise, the “status quo” category (N = 19, 12%) consisted of participants who simply answered the EFG was “ok” or “fine,” which did not provide much feedback. A total of 10 participants commented “none” or “nothing,” while the remaining 29 provided responses that did not answer the question or were incoherent expressions. Ten of these 29 responses were likes or dislikes about the weather on the EFG, but nothing specific about the graphic itself. If the “positive,” “simple,” and “status quo” categories are combined, that sum of 57 is nearly the same size as the “critical” category. The categories of these responses come from a sample that is more educated than the general population.
It is unclear how the categorical responses would change with a more representative sample, but it is believed that there would be slightly fewer “critical” responses than the 39% found here. Thus, it can be hypothesized that roughly one-third of a broadcast meteorologist’s audience critically watches and questions the details of an EFG. This number is important to consider because these viewers are more likely to remember forecast errors and raise questions.
Other responses not related to precipitation amounts, timing, or intensity helped explain another aspect of this graphic that may create confusion: forecast high and low temperature placement. The position of the low temperatures on the EFG in Fig. 2 is staggered, meaning it is located between each day’s forecast. The staggered nature of forecast highs and lows, a common EFG design scheme used by many television stations, may be the reason for the drop in number of correct responses about Sunday’s forecast low temperature. Approximately 8% of people chose “26,” which was located between Sunday and Monday, indicating this was the forecast low for Monday morning. One respondent commented that “it was always difficult to determine which [low temperature] numbers go with a certain day.” Another said, “It was not easy to understand the numbers, because they are not labeled in any way.” Although participants had the least trouble identifying the forecast temperatures, the feedback received suggests there may be a more efficient way of showcasing that data. Other comments regarding the EFG’s lack of distinction between morning and afternoon weather, and the general lack of details highlight existing flaws in the design of this graphic, which are the subject of research in progress.
A total of 113 broadcast meteorologists from 35 states and the District of Columbia participated in the broadcaster survey (Fig. 5a). The state with the highest number of respondents was Texas, with 12 broadcasters. A majority of broadcasters who took this survey were between the ages of 25 and 34 (47%) or 35 and 44 (24%) (Fig. 5b). Respondents were distributed relatively evenly across the on-air shifts they say they typically work (Fig. 5c): 35% work weekday evenings, 26% weekday mornings and/or midday, 28% weekend evenings, and 11% weekend mornings.
What forecast variables and graphics are most important for broadcasters?
Our results identify a disagreement among broadcast meteorologists about the effectiveness of the PoP. A total of 47% of broadcasters surveyed here did not believe the PoP is an effective way of communicating precipitation forecasts, while 44% say PoP is effective (Fig. 5d). In some cases, PoPs are only shown on the EFG and not on other graphics within the same broadcast, which may compound public confusion.
Just over half (53%) of the broadcasters in our sample name the EFG as generally being the most important graphic shown to their audiences. One respondent points out that EFGs typically contain sponsorships, which underscores this graphic’s importance in a weathercast. Broadcasters were asked to qualify the importance of the EFG. There is a common thread among the responses that this is “the number one reason viewers watch” because “they want to know what’s coming.” One broadcaster called the EFG a “one-stop shop for forecast temperatures, sky conditions, and PoPs.” Other selected comments regarding the importance of the EFG reveal important insight into how broadcasters view the role of this graphic (Table 4). These beliefs seem to be in direct conflict with the findings of this paper. Because the public was unable to answer questions about precipitation timing and amounts/intensity (the variables they prioritize) after looking at an EFG, it is difficult to believe in the efficacy of this graphic as a means to plan ahead. Additionally, the lack of information present on the EFG may inhibit people from knowing how to dress for the next day, especially if atypical weather is expected, such as when the warmest part of the day does not occur in the afternoon. It would be wrong not to acknowledge the commendable effort broadcasters put into their forecasts prior to showing the EFG and the specific forecast variables that may be important in a given weather situation. The problem is that the high level of detail that may be presented earlier in the weather broadcast is lost in translation when the EFG is created. If it is true that this is the most important graphic when viewers are paying the most attention, there must be an effort to improve its efficacy.
EFG length and timing.
Factors not related to the design of the EFG may also contribute to the confusion created by this graphic. The length of time the EFG is shown during a weather broadcast and the order in which it is shown relative to other graphics are both important aspects that have not previously been studied. While the EFG is occasionally shown at times during the local newscast outside of the weather segment, it is normally reserved for a small window during the roughly 3-min weathercast. The analysis of the broadcasts watched for this study show that the main weather segment usually happens between 10 and 20 min after the start of the newscast, and after at least one commercial break. A total of 98% of those surveyed (N = 113) say they show the EFG at the end of their main weather segment, despite 53% saying it is the most important graphic. It is a common practice by television stations to show the EFG at the end of the weather broadcast in order to keep viewers watching the broadcast longer, which can lead to an increase in advertising revenue if the EFG or other graphics are sponsored. In most cases, however, this is a decision made outside the control of the broadcast meteorologist.
A total of 60 newscasts from 20 stations in the study area of Fig. 1 were watched. An equal number of newscasts were sampled from weekday morning, midday, and evening time periods. Figures 6a and 6b show a comparison of the amount of time the EFG is shown versus the total time of the main weather segment. Our results reveal another dimension to the EFG problem: the public may not have enough time to digest any of the information that is present. The EFG is shown for the shortest amount of time during weekday morning newscasts (µ = 12.96 s). The average time the EFG was shown during weekday noon or midday newscasts was 13.51 s. Broadcasters working the weekday evening shift showed the EFG for an average of 20.26 s. The minimum time the EFG was shown was 8.6 s (weekday morning) and the longest time the graphic was shown was 50 s on a weekday evening newscast. The results of a KW test show a statistically significant difference across the three groups: χ2(2) = 11.724, p < 0.01. It is important to remember that in the online survey, people were given unlimited time to view the EFG while answering the questions, and the public was still unable to identify important information. Even less information might be retained in 12–20 s on television. A goal of broadcasters should be to increase the amount of time the EFG is shown, and also to keep the length of time shown consistent across all newscasts. While a person watching a morning weather broadcast might not prioritize the same information as someone watching in the evening, morning viewers should not have less time to digest the information shown on the EFG, especially if there is impactful weather in the long-range forecast. Based on our findings, there is room to increase the amount of time the EFG is shown in all time slots we analyzed. Figure 6b shows the amount of time broadcasters say they typically have for their main weather segments.
Almost half say their weathercasts are typically three minutes long. Additionally, broadcasters were asked to estimate the length of time they show the EFG; the mean answer was 23.5 s. Because this exceeds the mean display times observed across the 60 newscasts, broadcasters may be overestimating the time they show the EFG, and possibly overestimating the graphic's efficacy.
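For readers unfamiliar with the KW test used above, the calculation can be sketched as follows. This is a minimal illustration implemented from the standard tie-corrected formula; the three samples of per-newscast display times are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of a Kruskal-Wallis (KW) test, implemented from the
# textbook formula. The samples below are HYPOTHETICAL display times (s);
# the actual data from the 60 newscasts are not reproduced here.

def kruskal_wallis_h(*groups):
    """Tie-corrected Kruskal-Wallis H statistic for k independent samples."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)

    # Assign average ranks to tied values and track tie-group sizes.
    rank_of = {}
    tie_sizes = []
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        tie_sizes.append(j - i)
        i = j

    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    h = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

    # Correct for ties: divide by 1 - sum(t^3 - t) / (N^3 - N).
    correction = 1.0 - sum(t**3 - t for t in tie_sizes) / (n**3 - n)
    return h / correction if correction > 0 else h

# Hypothetical per-newscast EFG display times (s) by daypart.
morning = [10, 11, 12, 13, 14]
midday  = [12, 13, 14, 15, 16]
evening = [19, 20, 21, 22, 23]

h = kruskal_wallis_h(morning, midday, evening)
# Compare H against the chi-squared critical value with k - 1 = 2 df
# (5.991 at alpha = 0.05); here H is about 10.7, so the dayparts differ.
print(round(h, 3))
```

With clearly separated samples like these, H exceeds the 5.991 critical value, mirroring the significant daypart difference reported in the text.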
Our results suggest that the weather graphic named as most important by broadcast meteorologists is easily misunderstood and misinterpreted by the public. Additionally, forecast variables the public ranks as important are generally absent from this graphic. While previous studies have focused on improving tornado watches (Mason and Senkbeil 2015), advancing comprehension of radar color schemes (Bryant et al. 2014), and more effective hurricane track and surge forecasts (Sherman-Morris et al. 2015; Radford et al. 2013), this study is among the first peer-reviewed research to assess the public's perception and comprehension of a graphic produced by local television stations. Our results show the public's inability to answer important forecast questions, such as how much rain is expected to fall and when. Free-response comments from some participants suggest other aspects of the EFG are confusing, such as the placement of the high and low temperatures and the lack of any labels or distinctions between a.m. and p.m. weather.
The frequent use of a PoP on EFGs is arguably the primary reason the graphic is confusing. Previous studies (Zabini et al. 2015; Joslyn et al. 2009) concluded that the public tends to conflate high PoPs with long duration and/or high intensity and/or high amounts of precipitation, and this remained evident in our findings. Our findings about how the public prioritizes information about precipitation forecast variables are in accordance with Demuth et al. (2011), and can be used to improve the graphical communication of precipitation forecasts by including the information people want. Other factors that can make weather icons a source of confusion are similar to what makes the PoP incomprehensible: What does the icon represent? Is it the most inclement weather expected? Is it an average? Is it the weather expected during the daylight hours? Can a single icon represent 24 h of weather conditions?
An analysis of 60 newscasts revealed the short length of time the public has to digest any information from the EFG, regardless of the time of day they consume weather information from television. Feedback from the survey of broadcasters indicates that viewers expect to see the EFG during the main weather broadcast. Although the EFG is sometimes shown near the end of the newscast, this is usually dependent on the time remaining at the end of the newscast, which can be highly inconsistent. It should not be assumed that viewers remain watching beyond the main weathercast for the possibility of seeing the EFG again. And even if the EFG is shown at the end of the newscast, it is typically for less time than during the main weather broadcast, which was analyzed in this research. On rare occasions, if a broadcast meteorologist is asked to fill time at the end of the newscast, the EFG might be shown for 30 s or more. In these instances, the meteorologist is typically not on camera, and this may introduce other limitations to effective communication (Drost et al. 2015) of important or complex weather information.
Our results pinpoint a disconnect between broadcast meteorologists and their audience. The responses from the broadcasters confirm the importance of the EFG, and they also suggest that it is an effective graphic to communicate routine and hazardous weather in a simple way; our results suggest otherwise.
It is not the intent of the authors to set the policies and strategic goals of the private weather industry or the government meteorological enterprise. Rather, our purpose is to expose inefficiencies in television weather graphics, because this is an area where published research is lacking. Any potential changes to forecast products will require considerable discussion and careful consideration by the meteorological community, and additional research, before any changes are formally implemented. A copy of the surveys administered in this research is available in the online supplement.
Limitations and future research
The authors do not intend to argue that the EFG in its current form should be able to stand alone, with no context, and be expected to be perfectly understood by the public. Indeed, much of the forecast information the public would need to answer the questions asked in our study is typically detailed in other parts of the weather broadcast. There is insufficient peer-reviewed research, however, to quantify how much or how little of a weather broadcast someone typically watches, so it cannot be assumed that information is being received. Phan et al. (2018) found that among weather app users, video forecasts were ranked as generally not important, a finding that might suggest people are not willing to watch a full weather forecast and may only be interested in certain graphics. This question was asked during a recent conference presentation by the lead author to an audience of more than 100 broadcast meteorologists. Not a single broadcaster was confident that their audience typically watched their entire weather segments on television, supporting the premise that important weather graphics should be able to stand alone, regardless of how or where they are presented. Combined with the variety of EFG products being consumed over different modalities, in addition to television, this underscores the need for more academic research regarding the effectiveness of individual weather graphics, such as the EFG.
While the results of this study demonstrate the varying levels of confusion the EFG may cause, many questions remain. With regard to the design of the EFG itself, a separate manuscript will test the efficacy of modifications to the EFG, with the goal of maintaining favorability while improving comprehension. The graphics to be tested incorporate the feedback from the public and broadcasters gathered in this study to propose possible solutions to the issues with the EFG, evaluated in a third survey of the general public (N = 885). The new EFG designs to be tested in future research will not include a PoP, and that research will test whether those graphics are preferred and better understood than the current format containing the PoP.
The authors would like to thank the three anonymous reviewers for their constructive comments and suggestions regarding the content and organization of this manuscript. Additionally, the authors thank the University of Alabama Graduate School and Department of Geography for providing funding to administer the online surveys used in this research.