  • Allen, G. L., Miller Cowan C. R., and Power H., 2006: Acquiring information from simple weather maps: Influences of domain-specific knowledge and general visual-spatial abilities. Learn. Individ. Differ., 16, 337–349, doi:10.1016/j.lindif.2007.01.003.

  • AMS, 1996: Hurricane path now a computer graphic. Bull. Amer. Meteor. Soc., 77, 2348–2349.

  • Baker, E. J., Broad K., Czajkowski J., Meyer R., and Orlov B., 2012: Risk perceptions and preparedness among mid-Atlantic coastal residents in advance of Hurricane Sandy, preliminary report. The Wharton School, University of Pennsylvania, Working Paper 2012-18, 42 pp. [Available online at http://opim.wharton.upenn.edu/risk/library/WP2012-18_EJB-etal_RiskPerceptions-Sandy.pdf.]

  • Bertin, J., 1983: Semiology of Graphics. University of Wisconsin Press, 415 pp.

  • Blake, E. S., Kimberlain T. B., Berg R. J., Cangialosi J. P., and Beven J. L. II, 2013: Tropical Cyclone Report: Hurricane Sandy (AL182012), 22–29 October 2012. National Hurricane Center Rep., 157 pp. [Available online at www.nhc.noaa.gov/data/tcr/AL182012_Sandy.pdf.]

  • Borland, D., and Taylor R. M. II, 2007: Rainbow color map (still) considered harmful. IEEE Comput. Graphics Appl., 27 (2), 14–17, doi:10.1109/MCG.2007.323435.

  • Breslow, L. A., Ratwani R. M., and Trafton J. G., 2009: Cognitive models of the influence of color scale on data visualization tasks. Hum. Factors, 51, 321–338, doi:10.1177/0018720809338286.

  • Brewer, C. A., 1994: Color use guidelines for mapping and visualization. Visualization in Modern Cartography, A. M. MacEachren and D. R. Fraser Taylor, Eds., Elsevier, 123–147.

  • Broad, K., Leiserowitz A., Weinkle J., and Steketee M., 2007: Misinterpretations of the “cone of uncertainty” in Florida during the 2004 hurricane season. Bull. Amer. Meteor. Soc., 88, 651–667, doi:10.1175/BAMS-88-5-651.

  • Bryant, B., Holiner M., Kroot R., Sherman-Morris K., Smylie W. B., Stryjewski L., Thomas M., and Williams C. I., 2014: Usage of color scales on radar maps. J. Oper. Meteor., 2, 169–179, doi:10.15191/nwajom.2014.0214.

  • Canham, M., and Hegarty M., 2010: Effects of knowledge and display design on comprehension of complex graphics. Learn. Instr., 20, 155–166, doi:10.1016/j.learninstruc.2009.02.014.

  • Coltekin, A., Heil B., Garlandini S., and Fabrikant S. I., 2009: Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis. Cartogr. Geogr. Inf. Sci., 36, 5–17, doi:10.1559/152304009787340197.

  • Cox, J., House D., and Lindell M., 2013: Visualizing uncertainty in predicted hurricane tracks. Int. J. Uncertainty Quantif., 3, 143–156, doi:10.1615/Int.J.UncertaintyQuantification.2012003966.

  • Demuth, J. L., Morss R. E., Morrow B. H., and Lazo J. K., 2012: Creation and communication of hurricane risk information. Bull. Amer. Meteor. Soc., 93, 1133–1145, doi:10.1175/BAMS-D-11-00150.1.

  • Deubel, H., and Schneider W. X., 1996: Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Res., 36, 1827–1837, doi:10.1016/0042-6989(95)00294-4.

  • Doore, G. S., and Coauthors, 1993: Guidelines for using color to depict meteorological information. Bull. Amer. Meteor. Soc., 74, 1709–1713, doi:10.1175/1520-0477(1993)074<1709:GFUCTD>2.0.CO;2.

  • Fabrikant, S., Hespanha S., and Hegarty M., 2010: Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Ann. Assoc. Amer. Geogr., 100, 13–29, doi:10.1080/00045600903362378.

  • Garlandini, S., and Fabrikant S. I., 2009: Evaluating the effectiveness and efficiency of visual variables for geographic information visualization. Spatial Information Theory, K. S. Hornsby et al., Eds., Springer-Verlag, 195–211, doi:10.1007/978-3-642-03832-7_12.

  • Griffith, L. J., and Leonard S. D., 1997: Association of colors with warning signal words. Int. J. Ind. Ergon., 20, 317–325.

  • Healey, C. G., and Enns J. T., 2012: Attention and visual memory in visualization and computer graphics. IEEE Trans. Visualization Comput. Graphics, 18, 1170–1188, doi:10.1109/TVCG.2011.127.

  • Hegarty, M., Canham M. S., and Fabrikant S. I., 2010: Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task. J. Exp. Psychol.: Learn. Mem. Cognit., 36, 37–53, doi:10.1037/a0017683.

  • Heitgerd, J. L., and Coauthors, 2008: Community health status indicators: Adding a geospatial component. Prev. Chronic Dis.: Public Health Res. Pract. Policy, 5, 1–5. [Available online at www.ncbi.nlm.nih.gov/pmc/articles/PMC2483562/pdf/PCD53A96.pdf.]

  • Hoffman, J. E., and Subramaniam B., 1995: The role of visual attention in saccadic eye movements. Percept. Psychophys., 57, 787–795, doi:10.3758/BF03206794.

  • Hoffman, R. R., Detweiler M., Conway J. A., and Lipton K., 1993: Some considerations in using color in meteorological displays. Wea. Forecasting, 8, 505–517, doi:10.1175/1520-0434(1993)008<0505:SCIUCI>2.0.CO;2.

  • Kelleher, C., and Wagener T., 2011: Ten guidelines for effective data visualization in scientific publications. Environ. Modell. Software, 26, 822–827, doi:10.1016/j.envsoft.2010.12.006.

  • Light, A., and Bartlein P. J., 2004: The end of the rainbow? Color schemes for improved data graphics. Eos, Trans. Amer. Geophys. Union, 85, 385–391, doi:10.1029/2004EO400002.

  • Lipkus, I. M., 2007: Numeric, verbal, and visual formats of conveying health risks: Suggested best practices and future recommendations. Med. Decis. Making, 27, 696–713, doi:10.1177/0272989X07307271.

  • Mayhorn, C. B., Wogalter M. S., and Shaver E. F., 2004: What does code red mean? Ergon. Des., 12, 12–14, doi:10.1177/106480460401200404.

  • Mersey, J. E., 1990: Colour and Thematic Map Design: The Role of Colour Scheme and Map Complexity in Choropleth Map Communication. University of Toronto Press, 157 pp.

  • Meyer, R., Broad K., Orlove B., and Petrovic N., 2013: Dynamic simulation as an approach to understanding hurricane risk response: Insights from the Stormview Lab. Risk Anal., 33, 1532–1552, doi:10.1111/j.1539-6924.2012.01935.x.

  • Monmonier, M., 1991: How to Lie with Maps. University of Chicago Press, 207 pp.

  • Morrow, B. H., Lazo J. K., Rhome J., and Feyen J., 2015: Improving storm surge risk communication: Stakeholder perspectives. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-13-00197.1, in press.

  • O’Hare, D., and Stenhouse N., 2009: Under the weather: An evaluation of different modes of presenting meteorological information for pilots. Appl. Ergon., 40, 688–693, doi:10.1016/j.apergo.2008.06.007.

  • Phipps, M., and Rowe S., 2010: Seeing satellite data. Public Understanding Sci., 19, 311–321, doi:10.1177/0963662508098684.

  • Radford, L., Senkbeil J. C., and Rockman M., 2013: Suggestions for alternative tropical cyclone warning graphics in the USA. Disaster Prev. Manage., 22, 192–209, doi:10.1108/DPM-06-2012-0064.

  • Rayner, K., 2009: Eye movements and attention in reading, scene perception, and visual search. Quart. J. Exp. Psychol., 62, 1457–1506, doi:10.1080/17470210902816461.

  • Severtson, D. J., and Vatovec C., 2012: The theory-based influence of map features on risk beliefs: Self-reports of what is seen and understood for maps depicting an environmental health hazard. J. Health Commun., 17, 836–856, doi:10.1080/10810730.2011.650933.

  • Severtson, D. J., and Myers J. D., 2013: The influence of uncertain map features on risk beliefs and perceived ambiguity for maps of modeled cancer risk from air pollution. Risk Anal., 33, 818–837, doi:10.1111/j.1539-6924.2012.01893.x.

  • Sherman-Morris, K., 2005: Enhancing threat: Using cartographic principles to explain differences in hurricane threat perception. Fla. Geogr., 36, 61–83. [Available online at http://journals.fcla.edu/flgeog/article/view/76887/75296.]

  • Silva, S., Madeira J., and Santos B. S., 2007: There is more to color scales than meets the eye: A review on the use of color in visualization. 11th Int. Conf. Information Visualization (IV’07), Zurich, Switzerland, IEEE, 943–950. [Available online at http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4272091&tag=1.]

  • Silva, S., Santos B. S., and Madeira J., 2011: Using color in visualization: A survey. Comput. Graphics, 35, 320–333, doi:10.1016/j.cag.2010.11.015.

  • Slovic, P., Peters E., Finucane M. L., and MacGregor D. G., 2005: Affect, risk, and decision making. Health Psychol., 24, S35–S40, doi:10.1037/0278-6133.24.4.S35.

  • Weinstein, N. D., and Sandman P. M., 1993: Some criteria for evaluating risk messages. Risk Anal., 13, 103–114, doi:10.1111/j.1539-6924.1993.tb00733.x.

Measuring the Effectiveness of the Graphical Communication of Hurricane Storm Surge Threat

  • 1 Department of Geosciences, Mississippi State University, Mississippi State, Mississippi
  • | 2 Department of Psychology, Mississippi State University, Mississippi State, Mississippi
  • | 3 Department of Psychology, California State University, San Marcos, California

Abstract

Color is an important variable in the graphical communication of weather information. The effect of different colors on understanding and perception is not always considered prior to releasing an image to the public. This study tests the influence of color as well as legend values on the effectiveness of communicating storm surge potential. In this study, 40 individuals participated in an eye-tracking experiment in which they responded to eight questions about five different storm scenarios. Color was varied among three palettes (shades of blue, green to red, and yellow to purple), and legends were varied to display categorical values in feet (<3, 3–6, etc.) or text descriptions (low, medium, etc.). Questions measured accuracy, perceived risk, and perceived helpfulness. Overall, accuracy was high and few statistically significant differences were observed across color/legend combinations. Evidence did suggest that the blue values condition may have been the most difficult to interpret. Statistical support for this claim includes longer response times and a greater number of eye fixations on the legend. The feet values condition also led to a greater number of eye fixations on the legend and letter markers than the category text condition. The green–red condition was the strong preference among all groups as the color condition that best informs the public about storm surge risk. This color palette led to slightly higher levels of accuracy and perceived helpfulness, but the differences were not significant.

Corresponding author address: Kathleen Sherman-Morris, P.O. Box 5448, Department of Geosciences, Mississippi State University, Mississippi State, MS 39762. E-mail: kms5@geosci.msstate.edu

1. Introduction

The loss of life caused by storm surge from Hurricane Sandy in 2012 (Blake et al. 2013), combined with reports noting that residents underestimated their level of risk from storm surge (Baker et al. 2012), helps demonstrate a need to focus on the communication of storm surge potential. When computerized hurricane path/forecast track graphics were first released in 1996, the National Hurricane Center’s goal was to help provide “a ready, unambiguous description of what’s going on,” according to then-director Bob Burpee (AMS 1996, p. 2348, italics added). Unfortunately, there have been cases since then that suggest the graphics may have been misunderstood (e.g., Broad et al. 2007). Even if not misunderstood, the same information about risk can be interpreted differently depending on how it is presented. In a study in which participants viewed hurricane satellite images along with hurricane forecast information, the image type affected perceived threat among participants who received a stronger hurricane scenario: a color-enhanced infrared satellite image led to higher perceived threat than a grayscale visible satellite image (Sherman-Morris 2005). Broad et al. (2007, p. 653) suggest that forecasters must “more systematically” identify the needs of their users to design better products. The duty also resides with the academic community, where the graphical communication of hurricane forecast information has been the subject of relatively few papers compared to the substantial literature on hurricane risk perception and behavior. In fact, the graphical communication of weather information in general has not been sufficiently studied, despite calls for more attention to the comprehension of graphical weather information as early as 1993 (Hoffman et al. 1993). More recently, O’Hare and Stenhouse (2009, p. 690) remarked that “the effectiveness of graphical displays in helping understand weather forecast information remains almost entirely unstudied.”

There has been a recent increase in research on the graphical communication of hurricane forecast information. Building on the suggestions made by Broad et al. (2007), Meyer et al. (2013) investigated the influence of the center track line on level of concern and preparedness. Their results indicated that, regardless of the respondent’s location in the cone, the track line increased concern but had no influence on preparedness. Other researchers have recently examined alternatives to the cone of uncertainty (e.g., Cox et al. 2013; Radford et al. 2013), while Demuth et al. (2012) examined some of the challenges faced by different members of the hurricane warning system. While most of the recent research has focused on the hurricane track forecast, Morrow et al. (2015) asked emergency managers, meteorologists, and members of the public to evaluate several experimental storm surge images on ease of understanding and usefulness. As in several of the hurricane track studies, these authors do not discuss whether any image led to differences in actual understanding among the participants.

This paper attempts to fill a gap in the literature by examining how hurricane storm surge potential is communicated graphically through varying the color scale used. Color is one of the primary visual variables identified by Bertin (1983). Color helps to emphasize display items because it is considered preattentive, meaning that the information it carries is extracted by the eye rapidly and intuitively (Healey and Enns 2012). Color is also one of the most misused graphical variables. Monmonier (1991, p. 147) called color a “cartographic quagmire.” Others have been critical of the rainbow color palette commonly used to depict weather information (Borland and Taylor 2007; Silva et al. 2011; Phipps and Rowe 2010). Recent research in meteorology suggests that the rainbow scale may not be as effective as other color scales in communicating weather information (Bryant et al. 2014). The goal in conducting the present study was to test the effectiveness of three different color palettes in communicating storm surge potential. Doing so provided an opportunity to examine whether maps designed according to cartographic best practices would produce superior results in terms of accuracy and efficiency, two common usability metrics (Coltekin et al. 2009).

2. Background

There are a number of factors one should consider when designing graphical information. One of the most frequent recommendations regarding color in visualization pertains to the type of data being represented by the map or image. When displaying variation in a qualitative variable (e.g., types of land cover), multiple hues (colors) with no obvious perceptual order should be used. When the image shows variation in a quantitative variable (e.g., amount of precipitation), a sequential scale where a single hue’s luminance or brightness is altered is often most appropriate (Kelleher and Wagener 2011; Borland and Taylor 2007; Light and Bartlein 2004; Silva et al. 2007, 2011; Severtson and Vatovec 2012; Brewer 1994). In the use of luminance, there is a strong convention that “light is less—dark is more” (Garlandini and Fabrikant 2009, p. 195; Mersey 1990). Therefore, quantitative variables are often most appropriately displayed as a single hue with light shades indicating low values and dark shades indicating high values.
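
To make this distinction concrete, the short sketch below is a minimal illustration assuming Python with matplotlib installed; the hex colors and variable names are illustrative choices by the editor, not palettes taken from the study. It builds a single-hue sequential ramp suited to a quantitative variable such as surge depth and an unordered qualitative palette suited to categories such as land cover.

import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap, ListedColormap

# Sequential: one hue (blue) with luminance varying so that "light is less, dark is more."
surge_sequential = LinearSegmentedColormap.from_list(
    "surge_blues", ["#deebf7", "#9ecae1", "#4292c6", "#08306b"], N=4)

# Qualitative: distinct hues with no implied order, suited to unordered classes.
landcover_qualitative = ListedColormap(["#1b9e77", "#d95f02", "#7570b3", "#e7298a"])

fig, axes = plt.subplots(2, 1, figsize=(6, 2))
for ax, cmap, title in zip(axes,
                           [surge_sequential, landcover_qualitative],
                           ["sequential (quantitative data)", "qualitative (categorical data)"]):
    ax.imshow([[0, 1, 2, 3]], cmap=cmap, aspect="auto")  # draw the four color steps
    ax.set_title(title, fontsize=9)
    ax.set_axis_off()
plt.tight_layout()
plt.show()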

The relationship among colors in a graphic is also important. Care should also be taken to ensure that the perceived distance between colors is equivalent to the numerical distance they portray (Silva et al. 2007, 2011). A greater difference may be inferred among different colors than among different shades of a similar color. Problems may also exist with the use of red and violet, which are common in weather graphics. These two colors do not inherently communicate differences in magnitude (Light and Bartlein 2004) and have no automatic order to those unfamiliar with the electromagnetic spectrum, so whether one indicates a high or low value is not intuitive (Silva et al. 2007, 2011). When values are not intuitive, individuals must rely on a legend (Breslow et al. 2009). This is not as true when different shades of one hue are used. For this reason, brightness scales have led to improved performance in comparison tasks, while multicolored scales performed best in identification tasks (Breslow et al. 2009). Other issues that may interfere with ideal map comprehension include a lower visual acuity for blue (Doore et al. 1993; Hoffman et al. 1993) as well as higher brightness and more perceptual salience for yellow (Light and Bartlein 2004; Silva et al. 2007, 2011). Individuals can also distinguish a greater number of different hues than different shades of a single hue (Mersey 1990).
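
One practical way to check the equal-perceptual-distance guideline is to measure the separation of adjacent legend colors in a perceptual color space. The sketch below assumes NumPy and scikit-image are available, and the four-step blue ramp is hypothetical rather than one of the palettes tested here; it computes CIEDE2000 distances between successive colors, which should be roughly equal if equal data steps are to appear as equal color steps.

import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

# Hypothetical four-step single-hue ramp, as sRGB values in [0, 1].
ramp = np.array([[0.87, 0.92, 0.97],
                 [0.62, 0.79, 0.88],
                 [0.26, 0.57, 0.78],
                 [0.03, 0.19, 0.42]])

lab = rgb2lab(ramp.reshape(1, -1, 3)).reshape(-1, 3)   # convert to CIELAB
steps = deltaE_ciede2000(lab[:-1], lab[1:])            # distance between neighboring colors
print("perceptual step sizes:", np.round(steps, 1))    # ideally roughly equal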

When designing a map, the author should also consider strong cultural conventions that exist with respect to color. In a study matching signal words with colors, red was associated with the word danger by over three-quarters of study participants (Griffith and Leonard 1997). In the same study, yellow was the only other color as strongly associated with a signal word—caution. Other colors such as orange, blue, or purple may not have strong risk connotations. For example, in an experiment ranking the colors in the Department of Homeland Security advisory system, 57.9% of participants made at least one error. The most frequent error made was in comparatively ranking blue/green and yellow/orange (Mayhorn et al. 2004). Purple also does not appear to be linked with risk beliefs in the literature. Severtson and Vatovec (2012, p. 17) reported that no participant examining a hazard map thought purple should be used to convey risk because (as one participant noted) “on purple it’s not so clear, it doesn’t have the bad connotation.”

Red, yellow, and green are also commonly associated with “stoplight” meanings of stop, caution, and go (Severtson and Vatovec 2012). For example, 98% of participants in a study by Griffith and Leonard (1997) associated the color red with the word stop when the word was presented without context. Other colors also have common conventions, such as blue with water or cold and red with heat (Monmonier 1991; Hoffman et al. 1993). The American Meteorological Society’s Interactive Information Processing Systems Subcommittee for Color Guidelines reviewed colors associated with common weather variables (Doore et al. 1993). Included among the subcommittee’s recommendations was red for depicting hazardous weather phenomena such as hurricanes, thunderstorms, severe weather, and tornadoes. Yellow, orange, red/orange, and red were recommended for a progression of high winds from small craft advisories to hurricane warnings. The recommendations very closely matched common western cultural associations of red with danger and yellow with caution.

One’s ability to obtain the correct information from a map also varies by individual factors. For example, making accurate inferences from a map is highly dependent upon a viewer’s knowledge about the map’s subject (Allen et al. 2006; Canham and Hegarty 2010; Hegarty et al. 2010). A person visually examines a display differently depending on his or her background knowledge or the purpose of the exercise. Experts utilize a more “top-down” approach to map reading, in which attention is directed toward items that have more meaning relevant to the map activity (Severtson and Vatovec 2012). Perceived personal relevance also influences attention (Severtson and Vatovec 2012). For that reason, a person viewing a hazard map would be drawn to his or her own location. When participants have less background knowledge to understand what is important on the map, or when personal relevance is lower, their attention is more driven by visually salient features of the map, such as color or position (Fabrikant et al. 2010). This type of lower-level visual processing in which preattentive features influence where the eye fixates is called “bottom-up processing.” Hegarty et al. (2010) suggested that top-down knowledge guides eye fixations, but that salient features guide attention. Because individuals use both top-down and bottom-up processes, the most task-relevant features should be designed in a way that they are also the most visually salient (Fabrikant et al. 2010).

3. Methods

a. Storm surge potential map design—Independent variables

Combining the factors discussed above, one may start to piece together the components of an effective hazard potential graphic with respect to color. The colors used should match existing cultural associations. Single hues should be used to depict quantitative variables, and multiple hues should be used for qualitative variables. In this study, one color palette could not be designed to satisfy all recommended conditions to effectively communicate storm surge potential. A single-hued blue palette was chosen because of the association of blue with water. As it was noted earlier, blue does not provide the greatest sharpness of vision. Whether this would be a factor was not known. A second color palette already used in other National Oceanic and Atmospheric Administration (NOAA) applications that included yellow, orange, red, and purple (in that order) was also selected. Because of the known issues with purple, a third palette that ranged from green to red was created. This palette did not follow the guideline to use a single hue, but it did most closely capture the conventional progression from green to yellow to red to indicate increasing danger. Because of the design strategies used, the authors expected that the yellow to purple palette would lead to the worst measures of effectiveness on the map-reading questions. Because of the trade-offs involved, the authors did not anticipate whether the blue or the green to red palette would be most effective.

Two different legend types were also used: one that displayed storm surge potential in feet and one that represented the storm surge potential in descriptive text. The legends were tested as provided and as such were not influenced by past research. Evidence from the literature does provide some guidelines on how individuals interpret risk from numeric/nonnumeric data. For example, legend categories that are imprecise (e.g., less vs more) can increase perceived uncertainty because they are relative and not associated with a base value (Severtson and Myers 2013). Conversely, numbers can lead to a more accurate risk perception as well as communicate scientific credibility (Lipkus 2007). Numeric information is not as effective at tapping “gut-level reactions and intuitions” (Lipkus 2007, p. 699), which are an important component of risk perception (Slovic et al. 2005). In addition, similar to differences in expert versus nonexpert visual processing described above, Lipkus (2007) also suggests that individuals with lower ability to work with numbers may make better decisions when the presentation format makes the information easier to process. To the extent that storm surge potential in feet is more difficult to process by nonexperts, we believed this could lead to lower accuracy levels.

b. Research design and procedure

The experimental design was within subject, with five different storm surge scenarios. Participants answered questions about storm surge potential while viewing the images in three color presentation conditions (blue, green–red, or yellow–purple) and two legend types (storm surge values in numerical feet or text warning categories). The blue color condition was presented only with the legend type of values in feet. It did not make sense at the time to design a single-hued map with a more qualitative textual scale. The limitations of this decision are discussed later in the paper. The resulting five combinations for color/legend condition are presented in Fig. 1. Each storm surge potential image depicted a different area of coastline with no overlap in geography to prevent effects of increasing familiarity by participants. Each participant saw all five storm surge scenario locations, one in each color/legend condition. A Latin square (Williams design) was used to pair storms and conditions across the five presentations. This design ensures that every storm surge scenario preceded and followed every other storm surge scenario and every color/legend condition preceded and followed every other color/legend condition at least once across subjects. The use of five different storm surge scenarios with five color/legend conditions allowed the order to be rotated so that every possible order of color/legend conditions was tested without repeating storm surge scenario locations. Doing so minimized any possible ordering effects. However, it was not possible to fully balance legend combinations across the presentation order for each participant expertise subgroup.
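
A Williams-style rotation of this kind can be generated programmatically. The sketch below is a generic Python construction, not the assignment procedure actually used in the study, and the condition labels are shorthand for the five conditions in Fig. 1. For an odd number of conditions such as five, the mirror-image square is appended as well, which is what full first-order carryover balance requires.

def williams_sequences(n):
    """Standard Williams construction for counterbalancing n conditions.

    Returns n sequences for even n; for odd n the mirrored square is
    appended as well, giving 2n sequences.
    """
    # First row interleaves low and high condition indices: 0, 1, n-1, 2, n-2, ...
    first = [0]
    lo, hi = 1, n - 1
    while len(first) < n:
        first.append(lo)
        lo += 1
        if len(first) < n:
            first.append(hi)
            hi -= 1
    # Remaining rows shift every entry by one place (mod n).
    rows = [[(c + r) % n for c in first] for r in range(n)]
    if n % 2 == 1:
        rows += [row[::-1] for row in rows]
    return rows


conditions = ["blue feet", "YP feet", "YP text", "GR feet", "GR text"]
for i, row in enumerate(williams_sequences(len(conditions)), start=1):
    print(f"sequence {i:2d}:", " -> ".join(conditions[c] for c in row))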

Fig. 1. Maps of the five storm surge scenario locations and five color/legend combinations tested: (a) blue feet values; (b) yellow–purple feet values; (c) yellow–purple category text; (d) green–red feet values; and (e) green–red category text. (f) An example of how the maps were presented to the participants, here showing the map associated with question 2 (see Table 1).

Images were displayed on a 19-in. CRT monitor. For each storm surge potential image, a question/statement was displayed at the bottom of the viewing screen that queried specific information about the map. Questions were designed to test several of the characteristics of effective risk communication as proposed by Weinstein and Sandman (1993) including comprehension, uniformity, and helpfulness. Eight questions (see Table 1 for the list of questions) were presented for each storm surge potential scenario, for a total of 40 questions. For questions referencing a particular location, the location was identified on the map using a letter (A or B) for the specific location and a black arrow to make locating the area easier. Prior to the presentation of the questions for a new location, a map depicting only the coastline of the area was displayed to familiarize the participant with the geographic area for which the storm surge potential would be queried.

Table 1. Questions displayed with each of five storm surge potential maps and the effectiveness variable associated with them.

Following the main sequence of five storm surge potential maps with accompanying questions, participants were presented with two graphs, separately depicting the three color conditions and the two legend conditions, and were asked to choose which they felt did the best job of providing storm surge prediction information. The location of these images on the screen (e.g., right or left, top or bottom row) was also rotated to prevent any influence from the image’s placement.

c. Dependent variables

1) Question accuracy and response time

Speed of response and accuracy are common measures of a display’s effectiveness, but they do not always correspond directly. For instance, Coltekin et al. (2009) found that maps with lower response time were not always more accurate, and one map with a higher level of accuracy had longer response times. Bearing these caveats in mind, question accuracy generally indicates a participant’s ability to answer the question correctly in the different color/legend conditions, and response time provides a global measure of the processing effort required for a particular question or judgment about that image. As the following analyses will show, response times for some questions in the different color/legend combinations were elevated.

2) Eye-tracking measures

Fixation patterns (what areas on an image are looked at, for how long, and in what order) are an indication of what is attended in an image. Because the locations fixated have been actively attended, eye movement information can be informative about where attention is directed during the processing of an image [e.g., Deubel and Schneider 1996; Hoffman and Subramaniam 1995; see Rayner (2009) for a review of eye movement research]. The measure of interest for this study is the proportion of fixations to particular regions of interest (ROIs) in the images during a viewing trial. Proportions were used in this case to accommodate the highly variable response times found in the study.

d. Participants

Forty participants (21 male and 19 female) were recruited through flyers posted in several university buildings and local businesses; the flyers provided a brief explanation of the research and the need for participants. Recruitment goals based on hurricane knowledge expertise were to have 10 meteorology experts (junior/senior-level students or faculty from the meteorology program), 10 nonmeteorology students, and 20 nonstudent members of the public. The final groups recruited were nine meteorology experts, 11 nonmeteorology students, and 20 members of the local community. All participants were screened for normal color vision. Participants ranged in age from 19 to 55 yr, and they received a $20 gift card for their participation in the study (approximately 1 h).

e. Data analysis

All data analyses were performed with SPSS software. All analyses were completed using repeated measures analyses of variance (ANOVAs), with the within-subjects factor of color/legend condition (blue feet values, green–red feet values, green–red category text, yellow–purple feet values, and yellow–purple category text) and the between-subjects factors of participant experience (meteorological faculty/student experts, undergraduate students, and community members) and storm surge scenario location order list. Participants were distinguished by level of experience because, even though task-irrelevant information on a map can be distracting regardless of a viewer’s subject knowledge (Canham and Hegarty 2010), the influence of map design on a viewer’s interpretation ability is mediated by domain knowledge and understanding of the subject matter (Hegarty et al. 2010). For all analyses reported here a significance level of 0.05 was used, with the exception of the discussion of marginal effects. Effect size measures are reported as partial eta squared. Any violations of sphericity (the equality of the variances of the differences between all pairs of repeated-measures conditions) were corrected using Greenhouse–Geisser corrections.
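
For reference, partial eta squared expresses each effect as the proportion of its combined effect-plus-error variability, ηp2 = SS(effect)/[SS(effect) + SS(error)], and the Greenhouse–Geisser procedure multiplies the numerator and denominator degrees of freedom of the F test by the estimated sphericity correction ε (for a one-way repeated measures factor with k levels and n participants, the adjusted degrees of freedom are ε(k − 1) and ε(k − 1)(n − 1)). These are standard textbook expressions, included here only to clarify how the statistics reported below should be read.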

4. Results and discussion

a. Question accuracy and response time

Participant accuracy averages were collapsed across questions 1–4 (see Table 2) because these questions had correct answers, whereas the other questions asked for a judgment. Overall, accuracy was fairly high for all groups (greater than 84% for all conditions). There was no overall effect of color/legend condition on accuracy: F(4, 68) = 1.60, p = 0.18, and ηp2 (effect size) = 0.086. There was also no statistical difference between participant experience groups (expertise) [F(2, 17) = 2.145, p = 0.15, and ηp2 = 0.202], and no interaction between color/legend condition and expertise [F(8, 68) = 0.99, p = 0.45, and ηp2 = 0.104]. Table 3 shows accuracy results by question number across color/legend conditions.

Table 2. Mean accuracy for expertise group by color/legend condition.

Table 3. Number of participants (out of 40) answering correctly by color/legend condition; questions 5 and 6 indicate the number who would “take precautions” if their property was located where any storm surge at all was predicted.

Questions 1–6 each required the participant to find or address a different element of the image, so these response times were analyzed separately and are presented in Fig. 2. Unless indicated, participant expertise level did not affect response times.

Fig. 2. Mean response times for the five conditions for questions 1–6. Error bars are the standard error of the mean.

For question 1, participants viewed the storm surge potential graphic without a legend and were asked to determine the color that indicated the highest storm surge. For this question, there was a significant difference in the response time: F(4, 68) = 5.94, p < 0.001, and ηp2 = 0.259, with the blue color condition producing significantly longer response times than the green–red conditions (p values < 0.01 after Bonferroni correction), but not the yellow–purple conditions (p values = 0.13 and 0.18 after Bonferroni correction). The remaining conditions were all statistically equivalent. The unexpectedly longer response time for the blue condition in question 1 may have been because of a flawed question design. In the blue condition, the answer choices differed only along a single dimension of color, so participants had to search the map to see which shade matched each choice; in the other two conditions, the answer choices corresponded to four categorically different colors, so no such search was needed. The accuracy for the blue color condition was among the highest (36 out of 40 answered correctly). Only the green–red category text condition was higher, with 38 out of 40 correct responses, while yellow–purple category text was lowest (32 out of 40 correct).
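
Here and in the pairwise comparisons that follow, “after Bonferroni correction” refers to the standard adjustment in which each pairwise p value is multiplied by the number of comparisons made (equivalently, each comparison is tested at α divided by that number), which keeps the familywise error rate at or below the nominal 0.05 level.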

In question 2, participants compared the severity of the storm surge potential at two different locations to determine which was more severe. Unlike in question 1, there were no statistical differences in the response times based on color/legend condition: F(4, 68) = 1.35, p = 0.26, and ηp2 = 0.073. Interestingly, although there was no overall statistical difference for this question, the blue feet values condition prompted the quickest response. Unlike the previous question, however, fewer participants answered the question correctly when viewing the blue condition. The green–red category text and feet values conditions yielded the highest accuracy percentages (37 and 40 out of 40 correct). Question 2 was the only question in which participants were required to compare two locations. Because of this, we also examined whether there might have been an effect of distance between the colors associated with the two locations (i.e., colors one category apart vs two categories apart on the legend). It appeared that this did not influence the response time.

Question 3 asked participants to judge and report the level of storm surge at a single location based on the map’s legend. Similar to question 1, a reliable difference was demonstrated in response times [F(4, 68) = 6.11, p < 0.001, and ηp2 = 0.264], with responses to the blue feet values condition taking reliably longer than the green–red and yellow–purple category text conditions (p values < 0.01 after Bonferroni correction) and marginally longer than the green–red and yellow–purple feet values conditions (p values = 0.08 after Bonferroni correction). There was no significant difference between the green–red and yellow–purple conditions.

For question 4, participants were required to determine if a property at a specified location would experience storm surge flooding (yes or no). As shown in Fig. 2, there was no significant difference in response times based on color/legend condition: F(4, 68) = 0.134, p = 0.97, and ηp2 = 0.008. Across conditions, the average number of participants answering question 4 correctly was 31 out of 40, with green–red feet values leading to the highest number (34), yellow–purple feet values leading to the lowest number (28), and blue feet values falling between the two.

With respect to questions 5 and 6, participants were asked to make a judgment about whether they personally would take precautions to prevent damage to a single-level house or ground floor apartment at a specific location on the image. Although the two questions were identical except that each referred to a different location on the same map, the results were different. In question 5, there was no significant difference based on the color/legend condition: F(4, 68) = 0.151, p = 0.96, and ηp2 = 0.009. However, in question 6, there was not only a significant difference in the response times [F(4, 68) = 6.20, p < 0.001, and ηp2 = 0.267], with the blue feet values condition having longer response times than the other conditions (all p values < 0.015 after Bonferroni correction), but there was also an interaction between the color/legend condition and participant meteorological experience [F(8, 68) = 5.38, p < 0.001, and ηp2 = 0.388]. This interaction (see Fig. 3) resulted from the fact that the expert group (meteorological faculty/students) spent on average 5 s more than the community members and 6 s more than the undergraduate students examining the blue feet values image. In the remaining conditions, the expert group’s response times were similar to those of the other groups. Because the questions were identical except for location, there is no readily available explanation for the differences in pattern on questions 5 and 6.

Fig. 3. Mean response times for the five conditions for question 6 by meteorological experience. Error bars are the standard error of the mean.

A separate analysis was performed across questions 2–6 combined in order to evaluate whether any differences existed among the green–red and yellow–purple color/legend conditions as well as among the three color palettes in the feet values legend condition. Question 1 was not included in this analysis because it had no legend. When comparing response time among the green–red, yellow–purple, and blue feet values conditions, the green–red feet values condition led to numerically faster times than the blue feet values condition by 830 ms and the yellow–purple feet values condition by 470 ms. However, these differences were not significant: F(2, 34) = 0.288, p = 0.70, and ηp2 = 0.017. Next, response times associated with the blue feet values condition were omitted so that the green–red and yellow–purple conditions could be compared across both legend types. In this test, responses for the green–red conditions were also faster (by approximately 200 ms) than for the yellow–purple conditions. However, once again, they were not statistically different: F(1, 17) = 0.232, p = 0.64, and ηp2 = 0.013. For the legend type, maps with category text legends had response times approximately 500 ms faster than maps with feet values legends, but this trend was only marginally significant: F(1, 17) = 3.10, p = 0.096, and ηp2 = 0.154.

Overall, most questions in the blue feet values condition took longer to answer than in other conditions, suggesting that it was the most difficult display condition to interpret. The single exception was when a direct comparison of storm surge potential was made between two locations of the same map. This trend may suggest an advantage to using one color dimension when comparing two locations. While the general public may seldom have the need to make this kind of comparison, past research in risk communication suggests that individuals acquire more meaningful information about risk when they are able to judge their own risk both relative to others and relative to the range of potential risk (Severtson and Myers 2013). All of the remaining conditions produced similar response times, though with a numerical trend that a legend giving category text designations like extreme, high, medium, and low was easier to interpret and apply than a legend giving storm surge values in feet. If the goal were to speed map interpretation and assessment of threat based on storm surge predictions, these findings indicate that maps using different color category distinctions (as opposed to a single color dimension) and a legend with qualitative storm surge potential designations (rather than surge values in feet) appear to be the better choices. In a real-life situation, speed would most likely not be a primary goal. However, response time is also a proxy for processing effort, which is a more important consideration.

Question 7 was asked as an overall measure of risk perception and also to determine if any of the color/legend conditions led to different levels of perceived risk than one might expect based on the severity of the storm surge potential depicted. The classification of the images according to how “bad” they are perceived is subjective; however, one of the authors sought the opinions of four meteorology faculty to help determine if the storm surge scenarios could be placed in a single order based on the intensity of storm surge. There was complete agreement among the meteorologists in the ranking of the images according to their storm surge potential. Referring to the images shown in Fig. 1, the storm surge scenarios were ranked by the meteorologists in the following order from most to least bad: Figs. 1c, 1e, 1b, 1a, and 1d (hereinafter scenarios C, E, B, A and D respectively). The average rating by the study participants of how bad the storm surge potential was agreed with the meteorologists on the strongest and weakest scenarios, but differed in the midranked scenarios. The order of the average perceived severity ratings was not consistent across color/legend conditions. Only the green–red feet values condition and the yellow–purple category text condition matched the meteorologists’ judgment on which scenarios were the strongest and the weakest. Only three out of five color/legend conditions produced an average score that ranked scenario C the highest, and three out of five color/legend conditions produced an average score that ranked scenario D the lowest. To the extent that the meteorologists’ opinions of the storm surge scenarios represent actual differences in risk level, the five color/legend conditions produced ratings that were not in complete agreement with the level of risk presented.

Finally, question 8 measured the perceived helpfulness among the color/legend conditions. There was very little variation among the five color/legend conditions in perceived helpfulness. For most storm surge location–color/legend combinations, there was a greater range in scores within each color/legend condition among the five storm surge scenarios than within each storm surge scenario among the five color/legend conditions. Images for the scenarios depicting a more severe storm surge potential tended to produce higher helpfulness scores than less severe scenarios.

b. Eye-tracking results

Response time information gives a global measure of the processing time required for each question. However, eye-tracking data allow for analysis of the specific processing that occurs while one is trying to determine the answer to a question. People normally make approximately three separate fixations per second of viewing. By analyzing these fixations, we can examine what elements of the map were critical in answering the relevant questions. For the following analyses, the proportions of fixations on a specific area (the map, the legend, or the letter marker on the map) were calculated for each question. The ROI was defined as the smallest rectangle that contained the area plus an additional 10 pixels in each direction. In other words, the ROI was slightly larger than the actual region. Any fixation landing in the ROI was counted as in the region. For the letter region, the ROI rectangle also included the arrow that pointed to the letter. For ROIs that were embedded in a larger ROI (e.g., the legend is presented within the boundaries of the map region), we counted the fixation as being on the embedded ROI. As previously described, there was variability in response times to questions between the conditions, so this proportion measure was created to allow for comparison. To compare fixation patterns across the different color/legend conditions, the proportions of fixations to the ROIs were analyzed separately for questions 2–6 (the questions that had all three ROIs represented). The question number was entered as an additional within-subjects variable in the analysis. Because of the expectation of violations of sphericity, Greenhouse–Geisser corrections were applied to all analyses performed for the eye-tracking data. However, uncorrected degrees of freedom are listed for clarity.
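
The fixation-proportion measure lends itself to a simple computation. The sketch below is illustrative Python with hypothetical pixel coordinates and field names, not the analysis software used in the study: it pads each ROI rectangle by 10 pixels, credits a fixation to the most embedded ROI it falls in (so a fixation on the legend is not also counted toward the map area), and returns the proportion of a trial's fixations landing in each region.

from dataclasses import dataclass

@dataclass
class ROI:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float
    pad: float = 10.0  # extra pixels added on every side of the rectangle

    def contains(self, x, y):
        return (self.x0 - self.pad <= x <= self.x1 + self.pad and
                self.y0 - self.pad <= y <= self.y1 + self.pad)


def fixation_proportions(fixations, rois):
    """fixations: list of (x, y) screen coordinates for one trial.
    rois: list ordered most-embedded first (legend, letter marker, then map),
    so the first ROI that contains a fixation is the one credited."""
    counts = {roi.name: 0 for roi in rois}
    for x, y in fixations:
        for roi in rois:
            if roi.contains(x, y):
                counts[roi.name] += 1
                break
    total = max(len(fixations), 1)
    return {name: count / total for name, count in counts.items()}


# Hypothetical ROI rectangles and fixations for one storm surge map trial.
rois = [ROI("legend", 620, 420, 780, 560),
        ROI("letter_marker", 300, 250, 330, 290),
        ROI("map_area", 0, 0, 800, 600)]
print(fixation_proportions([(305, 260), (640, 500), (120, 90)], rois))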

Figure 4 displays the plotted locations of all fixations made by five members of the community group while attempting to answer question 4 using scenario B in the blue feet values condition. The number of participant responses reflected in the plot is constrained by the number of participants from that expertise group who saw that storm surge location/condition combination. The green dots represent the locations of fixations.

Fig. 4. Fixations on the map used in scenario B (Fig. 1b) in the blue feet values condition, for question 4, based on five participants in the community group.

In Fig. 5, the proportion of fixations that were located in the various ROIs are shown. The ROIs are the legend and the letter marker, as were defined previously, and the map area, defined as any location on the map excluding the legend or letter marker regions. The critical questions for this analysis are whether the color/legend condition affected proportion of fixations to the respective ROIs and if there were any interactions between color/legend condition, question, or expertise level.

Fig. 5. Proportion of fixations to the (a) legend, (b) letter markers, and (c) map area by question and color/legend condition. Error bars are the standard error of the mean.

1) Fixations in the legend ROI

For the legend ROI, fixations were significantly affected by color/legend condition [F(4, 72) = 4.47, p = 0.012, and ηp2 = 0.199] and question [F(4, 72) = 76.94, p < 0.001, and ηp2 = 0.810], but there was no effect or interaction with expertise level. The condition with the greatest proportion of fixations to the legend (Fig. 5a) was the blue feet values condition (0.120), followed by the yellow–purple feet values condition (0.109), the green–red feet values condition (0.098), and the yellow–purple and green–red category text conditions (both 0.095). Although there was an overall effect of condition on fixation proportions to the legend, when a Bonferroni correction was applied, no condition was statistically different from the other conditions.

2) Fixations in the letter markers ROI

Questions 2–6 all had letter markers placed on the map with an arrow in order to evaluate the participant’s understanding of the storm surge information. As with the legend ROI, there was an effect of question [F(4, 72) = 37.28, p < 0.001, and ηp2 = 0.674] on the proportion of fixations to the letter markers (including the arrow; Fig. 5b), and there was again a significant effect of the color/legend condition [F(4, 72) = 4.81, p = 0.004, and ηp2 = 0.211]. Similar to the legend analysis, the blue feet values condition (0.100) had the greatest proportion of fixations to the letter markers ROI, followed by the green–red feet values (0.088), yellow–purple feet values (0.084), green–red category text (0.084), and yellow–purple category text (0.075) conditions. As with the legend, although there was an overall effect of color/legend condition on fixation proportions to the letter marker, when a Bonferroni correction was applied, no condition was statistically different from the other conditions. Additionally, there was a significant interaction between color/legend condition and question number [F(16, 288) = 2.04, p = 0.043, and ηp2 = 0.102] and an interaction between color/legend condition and expertise [F(8, 72) = 2.24, p = 0.05, and ηp2 = 0.199].

The interaction between question and color/legend condition appears to result from the fact that for questions 2–4 (but not questions 5 and 6), letter markers in the blue feet values condition received a greater proportion of fixations compared to the other conditions. Because questions 2–4 queried specific forecast information from the map, this information may have required more processing to ascertain on a one-dimensional color scheme, whereas questions 5 and 6 asked participants to make a judgment based on their level of perceived risk, for which the one-dimensional scheme was not as detrimental.

Community members had fewer fixations to the letter markers ROI, especially in the blue feet values condition, compared to both the experts and the undergraduate students. This may explain the interaction between expertise and color/legend condition. Overall, experts (0.106) had a greater proportion of fixations to the letter markers than the undergraduate students (0.08) and the community members (0.075): F(2, 18) = 4.32, p = 0.029, and ηp2 = 0.324. The difference between experts and community members was marginally significant after Bonferroni correction (p = 0.065), but the other groups were not statistically different from one another.

3) Fixations in the map area ROI

The final comparison was the proportion of fixations to the map area, again defined as map areas excluding the legend and the letter marker ROIs (Fig. 5c). These fixations may indicate processing for obtaining general familiarization with the storm surge potential pattern, placing the question in the context of the whole map, or scanning the map while searching for the letter marker, but are not required to answer the question. Although there was an overall effect of question [F(4, 72) = 50.20, p < 0.001, and ηp2 = 0.736], neither color/legend condition [F(4, 72) = 1.38, p = 0.251, and ηp2 = 0.071] nor expertise had an effect or interaction. Thus, the map area fixations were equally distributed across conditions.

c. Surge map preferences

Finally, participants were shown one image with the three color conditions and a separate image displaying the two legend conditions. They were asked the question, “Which of these maps do you think does the best job of informing the public about their storm surge risk?” The green–red color condition was the participants’ overwhelming choice of the three color conditions. Slightly more participants chose the image presented with the feet values legend condition over the category text legend (Table 4). The fact that two-thirds of the participants chose the green–red color condition provides additional evidence that individuals perceive this color palette to be easier to understand or an appropriate choice to display storm surge potential. Although there was little difference in participants’ perception of feet value and category text legends overall, over three-quarters of the experts preferred the feet values, while other participant groups were more evenly split.

Table 4. Number of participants indicating a preference for each color condition (39 participants), and number of participants indicating a preference for each legend condition (40 participants).

5. Conclusions

In this study, we examined the influence of three color palettes and two legend types on accuracy and response time when participants responded to questions or statements about a storm surge potential map. We supplemented this analysis with eye-tracking measures to examine the areas of the map with the greatest proportion of fixations. Overall, color/legend condition appeared to have little significant influence on accuracy or response time. However, the significant differences found were accompanied by a persistent trend throughout the tests that the blue feet values condition may have been the most difficult to interpret. Evidence to support this includes longer response times and a greater number of fixations on the legend.

Regardless of expertise or the type of image shown, participants generally did well answering the questions based on the storm surge potential maps. The lowest weighted accuracy was over 84%, although some individual questions were answered correctly at a lower rate. In four of the six questions for which accuracy was measured, one of the yellow–purple color/legend conditions produced the lowest raw mean accuracy and one of the green–red color/legend conditions produced the highest, but the differences in accuracy across the color/legend conditions were not significant. Participants also strongly preferred the green–red color condition as the one that best informs the public about risk from storm surge.

The fixation proportion analysis indicated that fixations on the legend and the letter markers were significantly affected by color/legend condition. The proportion of fixations in these two regions of interest was greatest for the feet values legend, which could indicate that participants had a more difficult time interpreting the feet values information. Although the feet values condition may have been more difficult to process, it was not accompanied by lower accuracy among the nonexpert groups, as prior research might have suggested. The consistently equal or higher accuracy of the experts in the feet values condition, however, makes this a possible area for future study. Another result that could motivate future work was the trend toward lower perceived helpfulness for images depicting a lower severity of storm surge potential.

Given the lack of significance in many of the results, other considerations may take precedence in determining the implications of this research for the creation of storm surge potential maps for the public. It is estimated that approximately 8% of the U.S. population is color blind (Doore et al. 1993; Heitgerd et al. 2008). The green–red color condition would be the worst choice for someone with red–green color blindness because the two ends of the color palette would be indistinguishable. Despite the green–red palette’s strong cultural associations with risk, color blindness is the primary concern with its use.
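One quick way to screen palette endpoints for this problem is to compare their relative luminance, since a viewer with red–green color vision deficiency must rely largely on lightness (and blue–yellow) differences. The sketch below implements the standard WCAG relative luminance and contrast ratio formulas; the hex values are hypothetical stand-ins for the palette endpoints, not the colors used in the study.

```python
# Hedged accessibility check: palette endpoints with similar relative luminance
# are likely to be hard to distinguish for viewers with red-green color
# vision deficiency. Hex values below are hypothetical examples.
def relative_luminance(hex_color: str) -> float:
    """WCAG relative luminance of an sRGB color given as '#RRGGBB'."""
    def channel(c8: int) -> float:
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(c1: str, c2: str) -> float:
    """WCAG contrast ratio (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(c1), relative_luminance(c2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Hypothetical endpoints of a green-red and a yellow-purple scheme
print(contrast_ratio("#2ca02c", "#d62728"))  # green vs red: low ratio, risky for CVD
print(contrast_ratio("#ffffb2", "#54278f"))  # pale yellow vs dark purple: much higher ratio
```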

Finally, the known limitations of this study should be acknowledged. As mentioned earlier, the blue condition was not tested with a category text legend, which limited the study’s ability to determine whether the blue condition’s results were a function of the color itself or of the feet values legend, which also tended to produce poorer results. In addition, participants initially answered questions that tested their hurricane knowledge as well as questions about their familiarity with the locations on the maps, their use of weather information, their education level, and their past hurricane behavior. Because of a recording error, these data could not be linked to the eye-tracking responses; the hurricane knowledge data might otherwise have allowed better differentiation among the expertise groups. Given the lack of significance among the color/legend conditions, however, this most likely would not have altered the results of the color/legend analysis. Also, some characteristics were allowed to vary across the hurricane scenarios, including map scale and storm surge severity. While severity was addressed through the risk perception questions, scale was not controlled; images were chosen that presented the range of storm surge potential at a scale the authors believed would allow differentiation among categories.

Acknowledgment

This project was funded by NOAA through the Northern Gulf Institute. The authors also thank Ethan Gibney for creating the storm surge images and the two anonymous reviewers for their comments.

REFERENCES

  • Allen, G. L., Miller Cowan C. R., and Power H., 2006: Acquiring information from simple weather maps: Influences of domain-specific knowledge and general visual-spatial abilities. Learn. Individ. Differ., 16, 337–349, doi:10.1016/j.lindif.2007.01.003.

  • AMS, 1996: Hurricane path now a computer graphic. Bull. Amer. Meteor. Soc., 77, 2348–2349.

  • Baker, E. J., Broad K., Czajkowski J., Meyer R., and Orlov B., 2012: Risk perceptions and preparedness among mid-Atlantic coastal residents in advance of Hurricane Sandy, preliminary report. Wharton University of Pennsylvania Working Paper 2012-18, 42 pp. [Available online at http://opim.wharton.upenn.edu/risk/library/WP2012-18_EJB-etal_RiskPerceptions-Sandy.pdf.]

  • Bertin, J., 1983: Semiology of Graphics. University of Wisconsin Press, 415 pp.

  • Blake, E. S., Kimberlain T. B., Berg R. J., Cangialosi J. P., and Beven J. L. II, 2013: Tropical cyclone report Hurricane Sandy (AL182012) 22–29 October 2012. National Hurricane Center Rep., 157 pp. [Available online at www.nhc.noaa.gov/data/tcr/AL182012_Sandy.pdf.]

  • Borland, D., and Taylor R. M. II, 2007: Rainbow color map (still) considered harmful. IEEE Comput. Graphics Appl., 27 (2), 14–17, doi:10.1109/MCG.2007.323435.

  • Breslow, L. A., Ratwani R. M., and Trafton J. G., 2009: Cognitive models of the influence of color scale on data visualization tasks. Hum. Factors, 51, 321–338, doi:10.1177/0018720809338286.

  • Brewer, C. A., 1994: Color use guidelines for mapping and visualization. Visualization in Modern Cartography, A. M. MacEachren and D. R. Fraser Taylor, Eds., Elsevier, 123–147.

  • Broad, K., Leiserowitz A., Weinkle J., and Steketee M., 2007: Misinterpretations of the “cone of uncertainty” in Florida during the 2004 hurricane season. Bull. Amer. Meteor. Soc., 88, 651–667, doi:10.1175/BAMS-88-5-651.

  • Bryant, B., Holiner M., Kroot R., Sherman-Morris K., Smylie W. B., Stryjewski L., Thomas M., and Williams C. I., 2014: Usage of color scales on radar maps. J. Oper. Meteor., 2, 169–179, doi:10.15191/nwajom.2014.0214.

  • Canham, M., and Hegarty M., 2010: Effects of knowledge and display design on comprehension of complex graphics. Learn. Instr., 20, 155–166, doi:10.1016/j.learninstruc.2009.02.014.

  • Coltekin, A., Heil B., Garlandini S., and Fabrikant S. I., 2009: Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis. Cartogr. Geogr. Inf. Sci., 36, 5–17, doi:10.1559/152304009787340197.

  • Cox, J., House D., and Lindell M., 2013: Visualizing uncertainty in predicted hurricane tracks. Int. J. Uncertainty Quantif., 3, 143–156, doi:10.1615/Int.J.UncertaintyQuantification.2012003966.

  • Demuth, J. L., Morss R. B., Morrow B. H., and Lazo J. K., 2012: Creation and communication of hurricane risk information. Bull. Amer. Meteor. Soc., 93, 1133–1145, doi:10.1175/BAMS-D-11-00150.1.

  • Deubel, H., and Schneider W. X., 1996: Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Res., 36, 1827–1837, doi:10.1016/0042-6989(95)00294-4.

  • Doore, G. S., and Coauthors, 1993: Guidelines for using color to depict meteorological information. Bull. Amer. Meteor. Soc., 74, 1709–1713, doi:10.1175/1520-0477(1993)074<1709:GFUCTD>2.0.CO;2.

  • Fabrikant, S., Hespanha S., and Hegarty M., 2010: Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Ann. Assoc. Amer. Geogr., 100, 13–29, doi:10.1080/00045600903362378.

  • Garlandini, S., and Fabrikant S. I., 2009: Evaluating the effectiveness and efficiency of visual variables for geographic information visualization. Spatial Information Theory, K. S. Hornsby et al., Eds., Springer-Verlag, 195–211, doi:10.1007/978-3-642-03832-7_12.

  • Griffith, L. J., and Leonard S. D., 1997: Association of colors with warning signal words. Int. J. Ind. Ergon., 20, 317–325.

  • Healey, C. G., and Enns J. T., 2012: Attention and visual memory in visualization and computer graphics. IEEE Trans. Visualization Comput. Graphics, 18, 1170–1188, doi:10.1109/TVCG.2011.127.

  • Hegarty, M., Canham M. S., and Fabrikant S. I., 2010: Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task. J. Exp. Psychol.: Learn. Mem. Cognit., 36, 37–53, doi:10.1037/a0017683.

  • Heitgerd, J. L., and Coauthors, 2008: Community health status indicators: Adding a geospatial component. Prev. Chronic Dis.: Public Health Res. Pract. Policy, 5, 1–5. [Available online at www.ncbi.nlm.nih.gov/pmc/articles/PMC2483562/pdf/PCD53A96.pdf.]

  • Hoffman, J. E., and Subramaniam B., 1995: The role of visual attention in saccadic eye movements. Percept. Psychophys., 57, 787–795, doi:10.3758/BF03206794.

  • Hoffman, R. R., Detweiler M., Conway J. A., and Lipton K., 1993: Some considerations in using color in meteorological displays. Wea. Forecasting, 8, 505–517, doi:10.1175/1520-0434(1993)008<0505:SCIUCI>2.0.CO;2.

  • Kelleher, C., and Wagener T., 2011: Ten guidelines for effective data visualization in scientific publications. Environ. Modell. Software, 26, 822–827, doi:10.1016/j.envsoft.2010.12.006.

  • Light, A., and Bartlein P. J., 2004: The end of the rainbow? Color schemes for improved data graphics. Eos, Trans. Amer. Geophys. Union, 85, 385–391, doi:10.1029/2004EO400002.

  • Lipkus, I. M., 2007: Numeric, verbal, and visual formats of conveying health risks: Suggested best practices and future recommendations. Med. Decis. Making, 27, 696–713, doi:10.1177/0272989X07307271.

  • Mayhorn, C. B., Wogalter M. S., and Shaver E. F., 2004: What does code red mean? Ergon. Des., 12, 12–14, doi:10.1177/106480460401200404.

  • Mersey, J. E., 1990: Colour and Thematic Map Design: The Role of Colour Scheme and Map Complexity in Choropleth Map Communication. University of Toronto Press, 157 pp.

  • Meyer, R., Broad K., Orlove B., and Petrovic N., 2013: Dynamic simulation as an approach to understanding hurricane risk response: Insights from the Stormview Lab. Risk Anal., 33, 1532–1552, doi:10.1111/j.1539-6924.2012.01935.x.

  • Monmonier, M., 1991: How to Lie with Maps. University of Chicago Press, 207 pp.

  • Morrow, B. H., Lazo J. K., Rhome J., and Feyen J., 2015: Improving storm surge risk communication: Stakeholder perspectives. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-13-00197.1, in press.

  • O’Hare, D., and Stenhouse N., 2009: Under the weather: An evaluation of different modes of presenting meteorological information for pilots. Appl. Ergon., 40, 688–693, doi:10.1016/j.apergo.2008.06.007.

  • Phipps, M., and Rowe S., 2010: Seeing satellite data. Public Understanding Sci., 19, 311–321, doi:10.1177/0963662508098684.

  • Radford, L., Senkbeil J. C., and Rockman M., 2013: Suggestions for alternative tropical cyclone warning graphics in the USA. Disaster Prev. Manage., 22, 192–209, doi:10.1108/DPM-06-2012-0064.

  • Rayner, K., 2009: Eye movements and attention in reading, scene perception, and visual search. Quart. J. Exp. Psychol., 62, 1457–1506, doi:10.1080/17470210902816461.

  • Severtson, D. J., and Vatovec C., 2012: The theory-based influence of map features on risk beliefs: Self-reports of what is seen and understood for maps depicting an environmental health hazard. J. Health Commun., 17, 836–856, doi:10.1080/10810730.2011.650933.

  • Severtson, D. J., and Myers J. D., 2013: The influence of uncertain map features on risk beliefs and perceived ambiguity for maps of modeled cancer risk from air pollution. Risk Anal., 33, 818–837, doi:10.1111/j.1539-6924.2012.01893.x.

  • Sherman-Morris, K., 2005: Enhancing threat: Using cartographic principles to explain differences in hurricane threat perception. Fla. Geogr., 36, 61–83. [Available online at http://journals.fcla.edu/flgeog/article/view/76887/75296.]

  • Silva, S., Madeira J., and Santos B. S., 2007: There is more to color scales than meets the eye: A review on the use of color in visualization. 11th Int. Conf. Information Visualization (IV’07), Zurich, Switzerland, IEEE, 943–950. [Available online at http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4272091&tag=1.]

  • Silva, S., Santos B. S., and Madeira J., 2011: Using color in visualization: A survey. Comput. Graphics, 35, 320–333, doi:10.1016/j.cag.2010.11.015.

  • Slovic, P., Peters E., Finucane M. L., and MacGregor D. G., 2005: Affect, risk, and decision making. Health Psychol., 24, S35–S40, doi:10.1037/0278-6133.24.4.S35.

  • Weinstein, N. D., and Sandman P. M., 1993: Some criteria for evaluating risk messages. Risk Anal., 13, 103–114, doi:10.1111/j.1539-6924.1993.tb00733.x.