An increase in the severity of extreme weather is arguably one of the most important consequences of climate change with immediate and potentially devastating impacts. Recent events, like Hurricane Harvey, stimulated public discourse surrounding the role of climate change in amplifying, or otherwise modifying, the patterns of such events. Within the scientific community, recent years have witnessed considerable progress on “climate attribution”—the use of statistical techniques to assess the probability that climate change is influencing the character of some extreme weather events. Using a novel application of signal detection theory, this article assesses when, and to what extent, laypeople attribute changes in hurricanes to climate change and whether and how certain characteristics predict this decision. The results show that people attribute hurricanes to climate change based on their preexisting climate beliefs and numeracy. Respondents who were more dubious about the existence of climate change (and more numerate) required a greater degree of evidence (i.e., a more extreme world) before they were willing to suggest that an unusual hurricane season might be influenced by climate change. However, those who had doubts were still willing to make these attributions when hurricane behavior became sufficiently extreme. In general, members of the public who hold different prior views about climate change are not in complete disagreement about the evidence they perceive, which leaves open the possibility for future work to explore ways to bring such judgments back into alignment.
The extent to which climate change affects any individual weather event involves a variety of natural and anthropogenic factors (e.g., the state of large-scale circulation, aerosol effects, the level of anthropogenic climate change). By definition, extreme events are rare, which means that at any specific location there are typically only a few examples of past events. Despite this, several methods now exist for statistically attributing events to the effects of the changing climate. Some approaches use historical comparisons of long-term averages and model simulations of climate and weather with and without climate change (National Academies of Sciences, Engineering, and Medicine 2016). A study done in the aftermath of Hurricane Harvey suggests that its rainfall may have been increased by 40% (Risser and Wehner 2017). Another study synthesized a suite of climate models, finding that, as a result of climate change, Harvey-like rainfall could change from a 1-in-2000-yr event to a 1-in-100-yr event (Emanuel 2017). These and other studies argue that, because the temperature of the ocean surface waters that drive hurricanes is rising, hurricanes can be expected to intensify in the future (Gutmann et al. 2018; Pachauri et al. 2014).
Previous psychological research finds that laypeople, or nonspecialists, tend to use their experiences with daily weather and certain extreme weather events as evidence of climate change (Broomell 2020; Broomell et al. 2017; Taylor et al. 2014). Individuals holding different beliefs may interpret the same extreme weather quite differently, owing to diverse knowledge and experience (Kahan et al. 2012; Weber and Stern 2011). This process of judgment can create problems in the interpretation and application of climate attribution, since various locations experience different hazards, and laypeople may hold different views about the extent to which the same event provides evidence for the existence of climate change (Howe and Leiserowitz 2013; Goebbert et al. 2012). Asserting that an unusually hot or cold day, or an individual hurricane, is clear evidence of climate change is generally not defensible (Broomell 2020). Such attribution of specific events can create challenges for individual and collective action and for support of climate mitigation. For example, CNN quotes Donald Trump as remarking in 2015, “Wow, 25° below zero, record cold and snow spell. Global warming anyone?” (https://www.cnn.com/2017/08/08/politics/trump-global-warming/index.html).
Since personal experience motivates one’s concern for short- and long-term hazards (e.g., a hurricane and climate change, respectively) and willingness to act to lessen adverse effects (Broomell et al. 2015), such experience can motivate some people to support mitigative action more than others (Rosentrater et al. 2013). Thus, it is important to evaluate what drives different individuals to interpret the same extreme weather event differently. Several factors may influence these interpretations, including differences in perceptual ability (e.g., strong climate change beliefs may lead people to draw more inferences between global warming and extreme weather) and differences in how frequent extreme weather, such as hurricanes, must become before it is considered evidence of climate change.
While prior work has focused on decision-makers’ views on climate attribution (Parker et al. 2017; Sippel et al. 2015), lay perceptions of this issue have yet to be adequately explored. Here, we use signal detection theory (SDT) to explore whether and how laypeople attribute hurricanes to climate change and the circumstances under which they make this connection. SDT allows us to quantify the extent to which people identify changes in the occurrence of hurricanes as evidence of global warming and how those judgments are influenced by their beliefs. Specifically, we ask the following questions:
How do individuals’ beliefs about climate change affect their interpretation of hurricane frequencies?
What drives personal perceptual abilities (sensitivity) and decision thresholds in hurricane judgments?
Signal detection theory
The theory of signal detection (Swets 1961) describes two influences on the judgments elicited in our experimental setup: 1) sensitivity and 2) decision threshold. These are two important, theoretically independent dimensions of judgment (Swets 1961).
Sensitivity is the distance along the perceptual continuum between the centers of the distributions for a signal in the presence of global warming and a similar distribution without global warming (Broomell et al. 2017). This distance reflects the objective difference between signal and noise and an individual’s ability to differentiate between the two. Estimates of sensitivity can serve as a measure of one’s perceptual ability and determine whether participants who do not believe in anthropogenic climate change differentiate these events from past events or identify them as similar.
The second dimension is one’s decision threshold, or the criterion for providing a given response. Also known as response bias, the decision threshold reflects the relative frequency of false alarms (saying the signal is present when there is only noise) and hits (saying the signal is present when a true signal is present).
Sensitivity and decision threshold have specific meanings within our experimental setup. Sensitivity should be interpreted as the extent to which people can accurately encode information from their personal experience with hurricanes to distinguish between the two states of the world (i.e., Earth with a warming signal and Earth without). Decision threshold should be interpreted as how severe (or frequent) an event must be before it is deemed evidence of global warming.
In a study using Amazon Mechanical Turk (MTurk), we adopt a design similar to prior work by Broomell et al. (2017), who applied SDT to analyze judgments of daily temperatures as evidence of global warming. How our results differ and compare to the findings of Broomell et al. (2017) is discussed in our study conclusions.
Participants were recruited from a national convenience sample (N = 250) via MTurk. Respondents’ ages ranged from 20 to 79 years (mean = 35.4; median = 32), younger on average than the U.S. population (mean = 37). In this sample, 100% had finished high school (U.S. = 88%), 57% had completed college (U.S.: 59% with some college, 33% with a bachelor’s degree), and 7% had completed graduate training (U.S. = 12%). Of the total sample, 52% identified as liberal (U.S. = 47%) and 38% of respondents were female (U.S. = 51%). The sample thus skews younger, more liberal, and more male than the U.S. population (Table 1).
All participants completed two tasks: 1) extreme event attribution, in which they classified 21 extreme events from a list containing both real (based on the IPCC AR5 report; Pachauri et al. 2014) and bogus examples of events attributable to anthropogenic climate change (Table 2); and 2) perception of hurricanes, in which they classified 45 hurricane frequencies projected over the next decade. We estimated each participant’s sensitivity and decision threshold for the hurricane perception task using SDT.
To evaluate how people’s beliefs may influence when and how they attribute changes in hurricanes to climate change, we compared performance across two groups. Participants in each group classified the same stimuli but with different framings, depending on random assignment. In the control group, participants were instructed to classify hurricane projections over the next decade as either normal (noise) or abnormal (signal), without making any mention of global warming. In the climate frame, participants were instructed to classify hurricane projections over the next decade as either not evidence of global warming (noise) or evidence of global warming (signal). For example, imagine that a category-5 hurricane made landfall in the United States. The task is to decide how likely this observation is given that global warming is occurring (vs not occurring) and using this judgment to attribute the hurricane’s occurrence to global warming or not. We predicted an interaction where people’s climate beliefs would more strongly influence their interpretations of hurricanes as evidence of global warming (climate group) compared to their interpretations of hurricanes as abnormal (control group).
Potential participants read a short description of the task posted to MTurk. If interested, they were routed to an online survey (implemented through Qualtrics, LLC). Upon completion of the task, participants received unique confirmation codes to redeem $2.50 as compensation for their time. The task took less than 20 min to complete. To be eligible to participate in the study, participants had to be 18 or older, be fluent in English, use a desktop computer, reside in the United States, and have at least 95% approval rate and at least 500 previously approved tasks. We included four attention checks with obvious answers to assess whether participants were paying attention for the task duration.
As noted, participants first completed a short extreme event attribution task to measure their perceptions of extreme events that could be attributable to climate change. Table 2 displays a list of actual and bogus events attributable to climate change. Participants were shown these lists (in random order) and were asked to categorize each into one of two bins labeled, “Could be evidence of man-made climate change” and “Could NOT be evidence of man-made climate change.”
After this sorting task, participants were randomly assigned to either the control or climate condition for classifying hurricane frequencies with applicable background and instructions. Participants then proceeded to classify 45 hurricane observations in the form of hypothetical news headlines projecting the potential number of hurricane landfalls and their intensity over the next decade, such as this example: “Roughly [frequency] Category [intensity] Hurricanes could Strike the U.S. over the Next Decade, Study says.”
The hurricane stimuli were generated for the United States from decadal means and standard deviations of hurricane landfall data from the NOAA Hurricane Research Division (NOAA 2017) from 1851 to 2017. Since the hypothetical news headlines asked participants to judge observations “over the next decade,” evaluating historical hurricane incidence by decade was appropriate. The mean number of landfalls, their highest intensity, and standard deviations were computed for each decade in the United States over the 166-yr period of record (POR). While not critical for our study, we note that this POR should not be interpreted as a homogeneous record: hurricane observation methods have changed over this period, leading to potential differences in frequency, intensity, or landfall records. An alternative formulation would be to extract stimuli from the portion of the POR when satellite observations were in operation (i.e., the 1970s to the present day). Summary statistics for each POR follow.
Based on the 166-yr POR, the mean number of hurricane landfalls in the United States (per decade) was 7 category-1 hurricanes [standard deviation (SD) = 2], 5 category-2 hurricanes (SD = 2), 4 category-3 hurricanes (SD = 2), 1 category-4 hurricane (SD = 1), and 0 category-5 hurricanes (SD = 0). Based on a POR from 1971 to 2017, the mean number of hurricane landfalls in the United States (per decade) was 6 category-1 hurricanes (SD = 2), 3 category-2 hurricanes (SD = 2), 3 category-3 hurricanes (SD = 2), 1 category-4 hurricane (SD = 1), and 0 category-5 hurricanes (SD = 0). The calculations and results are based on the full, 166-yr POR.
Given the greater incidence of category-1–3 hurricanes, frequencies ranged from 0 to 10 to adequately capture the mean number of hurricanes per decade, as well as frequencies above and below the mean. Category-4 and category-5 hurricanes included a range from 0 to 5 due to the historically lower incidence of strong hurricanes in the database in both the full, 166-yr POR and the POR from 1971 to 2017. The order of presentation of category-1–5 hurricanes, and the presentation of frequencies, were randomized. For each trial, participants were asked to rate their confidence in their decision (i.e., low, medium, or high).
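For illustration, the stimulus set described above can be reconstructed as follows. This is a sketch using our own variable names; the decadal means are the rounded values reported above for the 166-yr POR, and the headline template is the one quoted earlier.

```python
# Reconstruct the 45 hypothetical news-headline stimuli (illustrative sketch;
# variable names are ours, not from the study's materials).
DECADAL_MEAN_LANDFALLS = {1: 7, 2: 5, 3: 4, 4: 1, 5: 0}  # 166-yr POR, per decade

stimuli = []
for category in range(1, 6):
    # Categories 1-3 sweep frequencies 0-10; categories 4-5 sweep 0-5
    max_freq = 10 if category <= 3 else 5
    for frequency in range(max_freq + 1):
        stimuli.append(
            f"Roughly {frequency} Category {category} Hurricanes could "
            f"Strike the U.S. over the Next Decade, Study says"
        )

# 3 * 11 + 2 * 6 = 45 trials, matching the length of the classification task
```

Note that the two frequency ranges jointly yield exactly the 45 trials each participant classified.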
We analyzed the dependent variables of sensitivity and decision threshold in hurricane perception as a function of 1) experimental condition (control or climate group) and location (noncoastal or coastal resident), 2) demographic variables, 3) four scales measuring beliefs and experience, 4) extreme event preparation measures (whether people have planned shelter and emergency supplies), 5) numeracy, and 6) one knowledge question on whether people know a hurricane watch is less severe than a hurricane warning.
The four scales measuring beliefs and experience include weather salience, climate change beliefs, objective hurricane experience, and a subjective impacts score (see the appendix tables for lists of all scale questions). The Weather Salience Questionnaire (short form) (Stewart et al. 2012; Stewart 2009) was composed of seven questions measuring the importance of weather with a Cronbach’s α reliability score of 0.66 (appendix Table A1). The Climate Change Beliefs Questionnaire was composed of six questions measuring beliefs in anthropogenic climate change with a Cronbach’s α reliability score of 0.81 (Table A2). The nine questions meant to gauge participants’ hurricane experience had a Cronbach’s α reliability score of 0.85 (Table A3). The impacts measure was a single, subjective item based on participants’ self-reported impacts.
We also had three additional measures that were not continuous measurement scales but binary response variables. These measured whether each participant had taken extreme event preparation measures (i.e., having planned shelter and emergency supplies) and whether or not participants understood that hurricane warnings are more severe than hurricane watches (Table A4); we also calculated numeracy based on the proportion of questions answered correctly in a five-item questionnaire (short form) (Weller et al. 2013; Table A5).
d. Estimating SDT parameters
We estimated two SDT parameters in this experiment: sensitivity and decision bias measures for people’s hurricane interpretations. SDT estimates are calculated based on hit rates and false alarm rates. The hit rate is the proportion of trials where participants responded “signal” when the trial was a true signal. The false alarm rate is the proportion of trials where participants responded “signal” when the trial was noise (i.e., not a true signal). The hit and false alarm rates were calculated on the basis of grading criteria for each experimental framing group, described next.
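As a minimal sketch of these two rates (with made-up response and grading vectors; the study’s actual data are not reproduced here):

```python
import numpy as np

# Hypothetical vectors: 1 = "signal", 0 = "noise"
responses = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])  # a participant's answers
truth     = np.array([1, 1, 1, 0, 0, 0, 1, 1, 0, 0])  # grading-criteria labels

hit_rate = responses[truth == 1].mean()          # "signal" said on true-signal trials
false_alarm_rate = responses[truth == 0].mean()  # "signal" said on noise trials
```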
1) Defining abnormal hurricane frequencies
Two calculation methods were explored for defining abnormality: 1) one based on standard deviations and 2) one using a Poisson arrival process. The standard deviation approach yields a broader interval than the Poisson approach; we judged it conservative but appropriate for defining abnormal hurricane frequencies. As a result, for the control condition, we classified landfall frequencies that were 1 (or more) standard deviations above or below the decadal mean as abnormal. All other trials were classified as normal.
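The standard-deviation grading rule can be sketched as follows (a Poisson variant is shown for comparison; the exact Poisson interval construction is not specified in the text, so the central interval below is our assumption):

```python
from scipy.stats import poisson

def abnormal_sd(freq, mean, sd):
    """Control-condition grading: abnormal if the frequency lies 1 or more
    standard deviations above or below the decadal mean."""
    return abs(freq - mean) >= sd

# Example: category-1 hurricanes, decadal mean = 7 landfalls, SD = 2
labels = {f: abnormal_sd(f, 7, 2) for f in range(11)}  # True = abnormal

# A Poisson alternative: flag frequencies outside a central interval of a
# Poisson arrival process with the same mean (interval choice is illustrative)
lo, hi = poisson.interval(0.68, 7)
poisson_labels = {f: (f < lo or f > hi) for f in range(11)}
```

Under the standard-deviation rule, category-1 frequencies of 6–8 landfalls per decade would be graded normal and all others abnormal.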
2) Defining evidence of global warming
For the climate condition, we estimated the SDT parameters based on the results from a related study of 28 face-to-face interviews (Dryden Steratore 2019), where all participants said that climate change caused hurricanes to be more frequent and more intense.1 We developed our grading criteria based on these mental models, which were not meant to represent the scientific community’s understanding of how climate change may influence hurricanes. Rather, we assume that our sample is responding as if they have adopted the same mental model as participants in the Dryden Steratore (2019) study. As a result, we classified frequencies that were higher than the decadal mean number of landfalls as representing evidence of global warming, and anything at or below the mean were not identified as evidence of global warming.
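Under this grading rule, a trial counts as a warming signal only when its frequency exceeds the decadal mean. A sketch (variable names are our own):

```python
# Mean U.S. hurricane landfalls per decade, 166-yr POR (rounded, per category)
DECADAL_MEAN_LANDFALLS = {1: 7, 2: 5, 3: 4, 4: 1, 5: 0}

def is_warming_signal(category, freq):
    """Climate-condition grading: frequencies above the decadal mean count as
    evidence of global warming; at or below the mean, as not-evidence."""
    return freq > DECADAL_MEAN_LANDFALLS[category]
```

Note that under this rule even a single category-5 landfall is graded as a signal, since the decadal mean for category 5 is zero.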
We estimated the sensitivity and decision threshold parameters by creating a receiver operating characteristic (ROC) curve for each participant. ROC curves were calculated using the hit rate (HR, or accurate signal detection based on our grading criteria), false alarm rate (FAR, or inaccurate signal detection), and self-reported confidence. The sensitivity parameter was estimated by the area under the ROC curve (AUC), while the decision threshold was estimated as the criterion C = −0.5[Φ−1(HR) + Φ−1(FAR)], where Φ−1 is the inverse of the standard Gaussian cumulative distribution function (Gescheider 2013); under this convention, positive values of C correspond to strict thresholds. Sensitivity and decision thresholds are theoretically independent dimensions of judgment, and their correlation was virtually zero in the hurricane perception task (R2 < 0.01, p = 0.97).
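A sketch of both estimates follows. Function names are ours; the sign convention makes positive values of the criterion strict (fewer “signal” responses), matching the threshold range described in the results, and the AUC uses the standard rank-based equivalence to the area under a rating-based ROC curve.

```python
import numpy as np
from scipy.stats import norm

def criterion(hit_rate, false_alarm_rate):
    """Decision threshold C = -0.5 * (ppf(HR) + ppf(FAR)): positive values
    indicate a strict criterion, negative values a lax one."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

def auc_from_ratings(signal_conf, noise_conf):
    """Probability that a randomly chosen signal trial received a higher
    confidence-graded "signal" rating than a randomly chosen noise trial
    (ties count half); equals the area under the rating-based ROC curve."""
    s = np.asarray(signal_conf)[:, None]
    n = np.asarray(noise_conf)[None, :]
    return (s > n).mean() + 0.5 * (s == n).mean()
```

For example, HR = 0.9 with FAR = 0.1 gives C = 0, a neutral threshold, while HR = 0.5 with FAR = 0.02 gives a positive (strict) C.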
a. Attribution of extreme events (sorting task)
Table 2 shows responses to the extreme event attribution task, displayed as the proportion of participants indicating each event as potential “evidence” of climate change. These proportions show that participants classified several events incorrectly (i.e., their perceptions of which events were attributable to climate change did not reflect those listed in the IPCC AR5 report). The fraction of trials classified as “Could be evidence of man-made climate change” for heavy snow (0.56), tornadoes (0.53), landslides (0.55), and avalanches (0.45) were not significantly different than random chance (i.e., 0.50; one-sample t tests, p > 0.05 for all). Participants also identified some bogus examples as attributable to climate change (e.g., “holes in the ozone layer”: 0.81). Table 2 also shows that participants perceived hurricanes as potential evidence of man-made climate change (0.64), following only heat waves and wildfires (0.75), droughts (0.71), and floods (0.68), of the events on the IPCC list.
b. Attribution of hurricane frequency (signal detection task)
1) Descriptive statistics
Figure 1 presents responses to the hurricane signal detection task, shown as the proportion of participants who described each hurricane frequency as abnormal (control group) or as evidence of global warming (climate group), separately for each hurricane category. Responses for the control group tended to be bimodal (i.e., peaks at low and high hurricane frequencies), whereas responses for the climate group tended to increase linearly with increasing frequency. Overall, 51% of control participants judged the presented hurricane frequencies as abnormal, and 49% of climate participants judged them as evidence of global warming.
The sensitivity measure ranges from 0 (perfect reverse discrimination between signal and noise) to 1 (perfect discrimination between signal and noise), with a score of 0.5 representing chance. Sensitivity estimates obtained from respondents ranged from 0.31 (minimum) to 0.97 (maximum). Decision threshold estimates ranged from −2.88 (lax thresholds resulting in more abnormal/evidence responses) to 2.88 (strict thresholds resulting in more normal/not-evidence responses). The average sensitivity for the control condition [mean (M) = 0.62; standard deviation (SD) = 0.12] was lower than that for the climate condition (M = 0.73; SD = 0.16). Participants in both conditions performed significantly better than chance in discriminating the hurricane frequencies with respect to our grading criteria (one-sample t tests, p < 0.0001). The average decision threshold for the control condition (M = 0.23; SD = 0.61) was less strict than that for the climate condition (M = 0.78; SD = 1.41). This means that individuals in the control group were more willing to call the hurricane frequencies abnormal than individuals in the climate condition were to call them evidence of global warming. This result parallels the findings of Broomell et al. (2017) for perceptions of temperatures.
2) Inferential statistics
Table 3 shows results of regression analyses predicting the SDT parameters. We regressed threshold and sensitivity onto six sets of predictors: 1) experimental condition (control or climate group) and location (noncoastal or coastal resident), 2) demographic variables, 3) four scales measuring beliefs and experience, 4) extreme event preparation measures, 5) numeracy, and 6) one knowledge question on whether people know a hurricane watch is less severe than a hurricane warning.
We have two types of predictor variables. The first type is dummy codes that take a value of 0 or 1 and indicate the presence of a predictor (see Table 3 caption for specifics). The second type is continuous measures that we standardized by subtracting their mean and dividing by the standard deviation. The regression model includes an interaction term between experimental group and the individual covariates to allow for different slopes in each of the respective experimental conditions. Significant interactions represent differential effects of individual predictors on perceiving abnormality in the control group versus perceiving evidence of global warming in the climate group.
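A minimal sketch of this setup with synthetic data (variable names and effect sizes are illustrative, not the study’s):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250
group = rng.integers(0, 2, n).astype(float)   # dummy: 0 = control, 1 = climate
beliefs = rng.normal(2.0, 0.5, n)             # continuous covariate (synthetic)
y = (0.6 + 0.05 * group + 0.02 * beliefs + 0.08 * group * beliefs
     + rng.normal(0.0, 0.1, n))               # synthetic outcome (e.g., sensitivity)

# Standardize the continuous predictor: subtract the mean, divide by the SD
z = (beliefs - beliefs.mean()) / beliefs.std(ddof=1)

# Design matrix: intercept, group dummy, standardized covariate, interaction;
# the interaction column allows the covariate's slope to differ by condition
X = np.column_stack([np.ones(n), group, z, group * z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_effect = coef[3]   # difference in slope between the two conditions
```

A significant estimate for the interaction column corresponds to the differential effects described above.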
Table 3 displays the coefficients estimated for both sensitivity and decision threshold. The predictors accounted for more variance in sensitivity (R2 = 0.46) than in decision threshold (R2 = 0.31). Two interaction effects were significant. The bottom of the table shows a significant interaction between experimental group and climate change beliefs in predicting sensitivity, as well as an interaction between experimental group and numeracy. Both variables also interacted significantly with experimental group in predicting decision threshold.
Simple interaction plots (Figs. 2, 3 ) display fitted means from the regression model using values for the predictor variables that are one standard deviation above and below the mean, while holding all other variables constant at their center value (see the online supplemental material for more detail on calculations).
Figure 2 shows the simple interaction effect between experimental group and climate change beliefs for predicting decision threshold (top) and predicting sensitivity (bottom). For both dependent variables, there were no differences between individuals reporting higher and lower climate change beliefs in the control condition, but there were significant differences in the climate condition. For decision threshold (Fig. 2, top), individuals reporting lower climate change beliefs had a less strict threshold for responding “abnormal” compared to responding “evidence of global warming,” while individuals with high climate change beliefs did not significantly differ across conditions. For sensitivity (Fig. 2, bottom), individuals reporting higher climate change beliefs had higher sensitivity in the climate group compared to the control group, while individuals with low climate change beliefs did not significantly differ across conditions. These interactions suggest that 1) climate change beliefs are associated with decision thresholds and may contribute to bias; and 2) higher climate change beliefs are associated with higher sensitivity in differentiating hurricanes, according to our grading criteria, compared with lower climate change beliefs.
Figure 3 shows the simple interaction effect between experimental group and numeracy for predicting decision threshold (top) and for predicting sensitivity (bottom). For both dependent variables, there were no differences between individuals with higher and lower numeracy in the control condition, but there were significant differences in the climate condition. For decision threshold (Fig. 3, top), individuals with higher numeracy had a less strict threshold for responding “abnormal” compared to responding “evidence of global warming,” while individuals with lower numeracy did not significantly differ across conditions. For sensitivity (Fig. 3, bottom), both higher- and lower-numeracy individuals had higher sensitivity in the climate group relative to the control group, but individuals with higher numeracy showed a significantly steeper increase. These interactions suggest that 1) numeracy is associated with decision thresholds and may contribute to bias; and 2) higher numeracy is associated with higher sensitivity in differentiating hurricanes, according to our grading criteria, compared with lower numeracy.
This study reports the first psychophysical investigation of climate attribution for hurricanes, a topic of growing interest and attention in public and scientific discourse. We aimed to better understand whether, and to what extent, peoples’ perceptions are influenced by demographic factors and beliefs. The goal was to assess when and how people view hurricanes as abnormal (control group) or attributable to climate change (climate group).
Our signal detection analysis found that, independent of their prior beliefs about whether the climate was changing and whether those changes were affecting the frequency or intensity of hurricanes, all respondents across experimental groups performed significantly better than chance in identifying trials in accordance with our grading criteria. Overall, our grading criteria appear to reflect the heuristics that participants are applying to this task. When specifically asked to identify hurricane events that might be indicative of some influence from climate change, respondents’ average decision threshold was close to neutral, but leaned to the more conservative side in making such an attribution, a reasonable position given that not all unusual events may be influenced by climate change. However, there were several important interaction effects that add more nuance to these results. In terms of sensitivity, individuals who believed in climate change more (and were more numerate) adhered even more to our grading criteria. Respondents who were more dubious about the existence of climate change (and more numerate) required a greater degree of evidence (i.e., a more extreme world) before they were willing to suggest that an unusual hurricane season might be influenced by climate change.
These results provide a replication and a test of generalization, demonstrating that the results of Broomell et al. (2017) are not unique to temperatures but extend to other observations linked to climate attribution. However, there are some notable differences between our results and the Broomell et al. (2017) study. In particular, our results demonstrate a psychological feature of hurricanes that differs from temperatures. For example, Broomell et al. (2017) found that decision thresholds were highly predictable from beliefs (R2 = 0.62); however, sensitivity for judging temperatures was less predictable (R2 = 0.30) and not significantly related to beliefs. In our study, we find that sensitivity for judging hurricanes is more predictable (R2 = 0.46) compared to decision threshold (R2 = 0.31). We also find that beliefs significantly predict sensitivity for judging hurricanes, perhaps because “believers” generally pay more attention to extreme events. This difference may be more pronounced for hurricanes because typical hurricane frequencies are (likely) less familiar than the typical temperatures where a person lives, which they experience daily. This study therefore expands the focus of Broomell et al. (2017) from perceptions of cues in one’s immediate environment to a cue that is witnessed by the broader population (i.e., a hurricane event).
Many possible pathways exist for beliefs to bias judgments about evidence of global warming: 1) biases could have been induced by decision threshold, sensitivity, neither, or both; 2) climate believers could have shifted their decision thresholds and sensitivities rather than nonbelievers; 3) there could have been bias for both believers and nonbelievers in opposite directions; and 4) participants could have had very little sensitivity or very extreme decision thresholds, suggesting that our proposed heuristics do not match the public’s true judgment strategy. Instead, we found that participants are fairly good at detecting unusual hurricane seasons according to our criteria, and they apply a relatively neutral decision threshold when attributing such events to an influence from climate change. While those who have doubts about climate change apply a higher threshold to making an attribution, they are still willing to do so once hurricane behavior becomes sufficiently extreme. In general, members of the public who hold different prior views are not in complete disagreement about the evidence they perceive, which leaves open the possibility for future work to explore ways of bringing such judgments into alignment.
5. Policy implications and future study
As the science of climate attribution progresses, the public is joining the conversation with their own sets of beliefs and heuristics. The relative infancy of attribution science warrants an understanding of how it may be interpreted and used by different members of the public. A necessary first step is to establish baselines of when, and to what extent, people cite extreme events as evidence of climate change. Future work could employ signal detection methods similar to those presented here for different stimuli (i.e., different types of extreme weather events) for comparison. These approaches could also be expanded to other groups of interest (e.g., decision makers), and gaps could be identified between their sensitivities and decision thresholds and those of diverse publics. While our current study focused on a few beliefs, future work should explore people’s emotional states and cultural values that drive variability in their perceptions and responses to climate risks.
Understanding what “signals” people to attribute events to climate change may aid in the development of more effective risk communications, as well as improve our understanding of how perceptions of climate change influence how people classify extreme events and translate to support for climate policy. Better identifying the perceptual biases that cause systematic and predictable attribution of extreme weather to climate change should enable researchers to explore alternative decision-making for protective actions, as well as other educational strategies and interventions. It is also important to continue exploring ways to communicate the probabilistic nature of attribution science that do not bog laypeople down in details but provide enough information to narrow the gap between the science and its application.
This work has been supported by the Center for Climate and Energy Decision Making (CEDM) through a cooperative agreement between the National Science Foundation and Carnegie Mellon University (SES-0949710) and by academic funds from Carnegie Mellon University.
Data availability statement. Data can be obtained by contacting the corresponding author.
Individual Difference Measures
The empirical evidence to date for the Atlantic Ocean is that hurricanes are becoming more intense. There is no evidence that they are becoming significantly more frequent as of yet.