1. Introduction
Climate hazard risk assessments have been a required component of the Federal Emergency Management Agency (FEMA) Multihazard Mitigation Plan (HMP) since the Disaster Mitigation Act was passed in 2000 (Public Law 106-390). State, tribal, territorial, and local jurisdictions are required to have a FEMA-approved plan to receive certain types of nonemergency disaster assistance, including funding for mitigation projects (FEMA 2019). Hazard mitigation is recognized as a wise investment when considering longer-term planning time scales (Porter et al. 2017). A first step toward mitigating the impact of climate hazards on those longer time scales is to understand a jurisdiction's climate risk profile. Yet many HMPs contain information that is inadequate or not locally relevant.
In addition to the FEMA HMP, climate data and information are increasingly being included in other types of plans, including comprehensive plans, land use plans, sustainability plans, and economic development plans (Schwab 2010; Lempert et al. 2018). The challenges associated with climate change are contributing to that integration into additional plans [National Research Council (NRC) 2010]. The existence of some plans also makes jurisdictions eligible for other types of funding in addition to FEMA's. The American Planning Association (APA), the premier professional organization for planners across the United States, has taken a more active role in recent years in facilitating the incorporation of climate and natural hazards information into a variety of plans. The APA operates the Hazards Planning program, which contains hazards-related resources that are accessible to its more than 40 000 members across the country (APA 2020). APA also operates a hazard mitigation and disaster recovery division, one of 21 divisions within the organization. Altogether these factors mean that decision-makers, such as emergency managers and planning professionals who have little to no formal training in atmospheric science, are increasingly using climate data and information within their professions. Broadly speaking, these decision-makers can be referred to as sophisticated users, a categorization also used by Novak et al. (2008), Morss et al. (2010), and Wall et al. (2017a).
The number and availability of online climate data and information tools have greatly increased over the last decade (Lourenço et al. 2016; Overpeck et al. 2011), which increases accessibility to sophisticated users. However, those users often have difficulty identifying what climate data and information are suitable for their needs (Briley et al. 2015). Many credible climate data and information tools exist, but tools that are of poor quality also exist (Hewitson et al. 2017). Further, even with an understanding of which organizations and tools to trust, sophisticated users do not always know where to access the tools, how to use them, or how to interpret information appropriately. Guidance on the appropriate interpretation and use of climate data and information can be provided in different formats such as within a single web tool or through face-to-face interactions with an expert. Some jurisdictions can afford to or need to hire a consultant to address a problem. However, for small- to medium-sized jurisdictions, hiring a consultant is not always feasible.
During 2017–18, the Southern Climate Impacts Planning Program (SCIPP) developed the Simple Planning Tool (SPT) for Oklahoma and Arkansas climate hazards, a climate decision support tool (DST) with a version for each state, in response to needs expressed at a workshop with planners and emergency managers (herein referred to as the “decision-makers”). SCIPP is a climate boundary organization, an organization whose goal is to support the production of actionable knowledge for various decision contexts (Goodrich et al. 2020). An evaluation was conducted to understand the SPT's utility, or usefulness, to decision-making. Before explaining how and why the SPT was developed, along with a description of its features, a literature review of climate DST evaluation is presented.
2. Literature review
Climate DST evaluation is a relatively new field. VanderMolen et al. (2019) state that climate tool evaluation is underexplored or underreported but that administering evaluations and improving tools based on the results increases the likelihood that such tools are useful to their intended users. Furthermore, Guido et al. (2013) note that few studies have formally evaluated how climate-related information contributes to informing decisions. One general type of evaluation is called summative, which VanderMolen et al. (2019) describe as an assessment of the impact of an intervention on the target group or of what a project achieved. Summative evaluation is often associated with objective, quantitative data collection methods and is outcome focused rather than process focused. In the context of climate DSTs, three evaluation subtypes are relevant: evaluating suitability, usability, and/or utility. Suitability is “the quality of being right or appropriate for a particular person, purpose, or situation” (Lexico 2020). Therefore, a suitable DST is one that is appropriate for a decision-maker's purpose or situation. Usability is whether a tool enables a specified user to achieve their goals with effectiveness, efficiency, and satisfaction (International Standards Organization 2018). Utility is a measure of the usefulness of a tool. It includes characteristics such as salience and relevance to the context in which the tool is used, along with its impact on decision-making or how information obtained from it is used (McNie 2007).
Evaluations of the suitability of climate DSTs are lacking, but recent studies such as Oakley and Daudert (2016) and VanderMolen et al. (2019) have evaluated usability. Furthermore, Maudlin et al. (2020) evaluated climate DST usability differences between males and females, and Argyle et al. (2017) evaluated the usability of a weather forecasting DST. Other examples of climate tool evaluations include the following. Hartmann et al. (2002) provide an early example of climate tool evaluation from a user standpoint that included perspectives from resource managers in the southwestern United States. The study focused on NOAA Climate Prediction Center seasonal temperature and precipitation outlooks and considered how different presentation formats affected users' ease, accuracy, and reliability of interpretation of the products. The evaluation methodology was not described in detail, however. Hawkins et al.'s (2017) evaluation is another example, although it focused on how each Weather Forecast Office used the National Weather Service heat products rather than on how end users interpreted or used them. Furthermore, Guido et al. (2013) evaluated their monthly Drought Tracker climate summary product. The evaluation was not categorized as a particular subtype, but utility was mentioned as one of its components. Guido et al.'s (2013) evaluation assessed whether respondents perceived the product to have improved their understanding of drought and climate, if or how they perceived the product to have helped inform their decisions, and what respondents liked about the product. Guido et al. (2013) noted, however, that their results revealed more about the process of information transfer than about the outcomes (e.g., decisions made or influenced) spurred by decision-makers using the product.
Evaluating the utility of a climate DST is uncommon in the literature, and this study helps fill that void. McNie (2007) notes that useful information is perceived as salient, credible, and legitimate, criteria also discussed in Cash and Clark (2001) and Cash et al. (2003). According to VanderMolen et al. (2019), salient information is relevant to the user's context. Information is deemed credible when it is perceived to be accurate, valid, and of high quality, and information is legitimate when it is transparent and lacks bias. Perceived reliability and trust, meaning the information is dependable, are also utility characteristics (Feldman and Ingram 2009).
Utility can also be evaluated by assessing how information is used in decision-making, a form of impact. In fact, McNie (2007) states that understanding whether information is actually used to improve decision-making is the “ultimate metric” for what constitutes useful information (p. 20). Oh (1996) identified three primary information use categories: 1) conceptual, 2) justification, and 3) instrumental. Wall et al. (2017b) expanded on those three categories and developed a list of 10 proposed impact indicators that can be used to evaluate coproduced climate science, which are also relevant to evaluating the utility of coproduced DSTs [see Table 1 in Wall et al. (2017b) for details]. Shafer (2008) notes that there is a range in how information is used by decision-makers and that some uses are more difficult for scientists to achieve. Shafer's (2008) assessment is consistent with the coproduction impact indicators presented in Wall et al. (2017b). For example, enlightenment, in which a decision-maker perceives themselves to be better informed about an issue, requires less time and is easier to accomplish than information being explicitly used in agency planning, resource allocation, or a policy decision.
The utility characteristics that are measured in this study include saliency, credibility, trustworthiness, and reasons for and impact of information use. Several practical measures of utility are also assessed. Before the method for evaluating the SPT’s utility is described, however, an explanation of why and how the SPT was developed, along with its features, is provided.
3. The SPT
a. Development
The SPT is a climate DST that was inspired by and codeveloped with emergency managers and planners to help meet their climate risk assessment needs. Emergency managers and planners who work with or in small- and medium-sized communities are the primary audience for the tool, but other decision-makers and those who serve large jurisdictions are also welcome to use it. The SPT was developed through a series of four in-person workshops held in Oklahoma and Arkansas in 2017 and 2018; 93 decision-makers attended the workshops in total, and some attended both workshops that took place in their state. Most of the attendees represented city government, but a few tribal nation, state government, and private sector representatives also participated.
The SPT development process aimed to follow the NRC (2009) six principles of effective decision support, which provide guidance for developing climate DSTs that are useful to decision-makers. Table 1 indicates how the principles were followed during the SPT development process. It is important to note, though, that developing the SPT was not the purpose of the initial workshop. Rather, the workshop's original purpose was to help facilitate relationships between planners and emergency managers. The two professions can mutually benefit from working together on hazard and climate planning because of their complementary strengths and weaknesses (Schwab 2010), but they often do not interact because of city and county departmental silos.
Table 1. Principles of effective decision support defined by the National Research Council (NRC 2009) and accompanying description of how the principles were applied during the Simple Planning Tool (SPT) development process.
The idea to develop the SPT came from workshop participants, which then motivated SCIPP to produce the tool guided by their feedback. The SPT's development, therefore, was a collaborative process aimed at translating climate information for decision-makers and was initially driven by the target population: planners and emergency managers. This two-way communication process was in contrast to the traditional loading-dock, or linear, model, in which the flow of scientific knowledge to society is strictly one way. The one-way model has proven ineffective at making climate information useful and usable (Lemos and Morehouse 2005; Dilling and Lemos 2011; Cash et al. 2006; Kirchhoff et al. 2013; Meadow et al. 2015). The workshops also helped SCIPP understand the contexts in which the decision-makers worked, an important step toward making climate tools and information useful according to Feldman and Ingram (2009) and Parris et al. (2016).
Informal usability testing was conducted using a draft version of the Oklahoma SPT at Oklahoma Workshop 2 (November 2017), and version 1.0 was released in April 2018. The informal usability testing asked each of the decision-makers to review 3 of the 13 hazard sections and, for each of their sections, rate the following statements on a 5-point Likert scale ranging from strongly disagree to strongly agree: the document is organized in a useful way; the content of the document is easy to understand; individual tools are easy to access; individual tools are easy to use; and data provided by individual tools are relevant to my needs. For each of the five statements, participants could also provide specific feedback on how each individual tool or hazard section could be improved. A more thorough usability evaluation was desirable but was not feasible given time and resource constraints.
Some adjustments were made to the layout and content of the hazard sections following the usability testing, including specifying the output of each individual tool within its accompanying instructions, allowing each tool's URL to span the page rather than forcing it into a narrow column, adding earthquakes as a hazard section, alphabetizing the hazard sections, and adding hazard definitions and descriptions as an appendix. Suggestions to host the tool in an interactive online format and provide local hazards impact data were received but were set aside for the time being because of resource constraints.
Engagement with the Arkansas decision-makers followed a similar format but was offset from Oklahoma and benefitted from the knowledge gained and lessons learned at the Oklahoma workshops. Arkansas Workshop 1 (initial engagement and discussion) took place in September 2017 and Workshop 2 (informal usability testing) in September 2018. Version 1.0 of the Simple Planning Tool for Arkansas Climate Hazards was released in November 2018. Versions 1.5 for both states were released in January 2019 following the addition of a few tools that were identified during the development of the Arkansas version 1.0 or released in the subsequent months. Now that the SPT development process has been explained, it is important to understand the features of the tool.
b. Description
The SPT is a document that compiles relatively easy-to-use online interactive climate tools, maps, and graphs that can assist planners and emergency managers who need to assess historical and future climate hazards for their jurisdiction(s). (Versions for Arkansas and Oklahoma are available on the SCIPP website; http://www.southernclimate.org/pages/data-tools.) The SPT is alphabetically organized into 10 climate hazard sections and 3 nonclimate hazard sections. Figure 1 shows an example hazard section page. Each hazard section features a data limitations summary along with names of and links to historical data tools, accompanying instructions on how to use each tool, their associated outputs, and a sample image. Each section also includes a future trend summary that provides a concise statement of how the hazard has changed and/or is projected to change with climate change, if known.
Fig. 1. Example tornado hazard section page from the Simple Planning Tool for Arkansas Climate Hazards, version 1.5. Both versions of the SPT are available online (http://www.southernclimate.org/pages/data-tools).
Individual tool instructions and the future trend summaries are tailored to each state, which helps address a common reason decision-makers do not use scientific information: lack of salience (Cash et al. 2003; Guido et al. 2013). The tailoring also reduces the amount of staff time needed to use the SPT, another common barrier (Finucane et al. 2013). The SPT contains four appendixes that include additional information and links to large reports and programs for decision-makers who are interested: Hazard Definitions and Descriptions, Historical FEMA/Presidential Disaster Declarations, Climate Change Resources, and Incentive and Action Programs for Risk Reduction.
The historical data tools that are included in the SPT were selected because of their accessibility (e.g., simplicity and ease of use; Dilling and Lemos 2011), appropriateness of spatial and temporal scales (information is provided at scales relevant to the decision-maker's context; Lemos et al. 2012), and decision-maker recommendations that were obtained through interactions at the workshops. All of the tools linked within the SPT are available on the Internet for free from government and academic organizations. A few tools are more complex than others and require some time to learn, but that caveat is noted in the accompanying instructions. A more complex tool was included when it was the best available tool for a particular hazard and/or geographic location.
The future trend summary text was included to meet another need that was expressed at the workshops. That is, the decision-makers said they need to be able to reference or use climate projection information for their jobs but 1) do not have time to distill lengthy reports themselves and 2) do not usually need detailed climate projections based on multiple carbon emissions scenarios. Those preferences are consistent with findings from other studies (e.g., Guido et al. 2013; Page and Dilling 2019); however, it is possible that the decision-makers will be interested in more comprehensive climate projections in the future.
4. Method
This study evaluated the SPT after it was produced, so a summative evaluation was appropriate. An evaluation of the tool's utility, including its impacts on decision-making, was the specific focus. To that end, a survey was developed and approved by the University of Oklahoma Institutional Review Board (IRB 10579). The survey began with introductory material required by the IRB, followed by definitions of terms. Definitions were included to reduce the possibility that participants would misinterpret terms, a practice demonstrated in Finucane et al. (2013). Terms that were defined included weather, climate, hazard, climate change, climate adaptation, hazard mitigation, and jurisdiction.
Participants were asked to answer 7–25 questions, the number depending on the answers selected and their level of awareness of the SPT (e.g., those who had not previously heard of the SPT jumped to demographic questions). There was a mix of multiple-choice, select-all-that-apply, and open-ended questions. Answer choices were randomized when possible to reduce response bias. The survey questions assessed the SPT's saliency, credibility, trustworthiness, and reasons for and impact of information use on decision-making, along with frequency of use. Several information use (i.e., impact) indicators from Oh (1996) and Wall et al. (2017b) that represent characteristics of utility were adapted and incorporated into survey response options. When applicable, the indicators relevant to the literature are noted in italics within the manuscript text and tables but were not labeled in the survey itself. Additional response options were included to reflect the specific context of the study. A few practical questions were also included, such as the type of plan(s) the SPT informed. For that question, participants were provided a list of 16 plan types that were preidentified by a small decision-maker cohort, as well as “other (please specify).” Basic demographic information was also collected. The survey was administered online using the Qualtrics survey platform in spring 2019.
a. Procedures
A purposive (nonrandom) sampling strategy was used to collect data from the target audience. An invitation to complete the survey was directly sent to the 93 emergency managers and planners who attended the 2017 and 2018 workshops. The invitation was also sent through the Oklahoma (n = 290) and Arkansas (n = 138) state APA chapter email list services (listservs) and the Oklahoma's First-Response Information Resource System using Telecommunications (OK-First) program listserv (n = 753).1 Therefore, up to 1274 people received the survey invitation, but some people likely received it through more than one channel. Follow-up invitations were sent approximately 4 weeks after the initial invitation, but most of the responses were received within 1 week of the initial request. In total, 110 surveys were returned, but 6 were removed from the dataset because the participant either only selected their state or did not answer a single question. Therefore, 104 people participated in the survey, and the minimum response rate was 8.2%.
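The minimum response rate follows from treating every invitation as if it reached a unique individual, even though some recipients likely appeared on more than one list:

\[ \text{minimum response rate} = \frac{104}{93 + 290 + 138 + 753} = \frac{104}{1274} \approx 8.2\%. \]

Because duplicate recipients shrink the true denominator, the actual response rate can only be higher than this floor.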
There are several important factors to consider when interpreting the response rate and the survey results that follow. First, the OK-First listserv includes emergency managers, fire officials, law enforcement officers, and state troopers, many of whom focus only on immediate severe weather preparedness and response and are not necessarily the target audience for the SPT. Second, it was known that many of the state APA chapter listserv members have jobs for which the SPT was not relevant. For example, of the 1770 members in the aforementioned APA hazard mitigation and disaster recovery division, only 12 are from Oklahoma and 3 are from Arkansas (M. Kraus, APA, 2019, personal communication). The three lists were used, however, in the event that the SPT had been shared beyond the workshop participants. Therefore, although the response rate appears low, it is likely indicative of the number of individual emergency managers and planners who are actively or semiactively incorporating climate hazards into their planning initiatives across the two states.
Given the sample size and the likely limited number of hazard planners within the study domain, the results presented below are suggestive rather than definitive. However, as Etz and Arroyo (2015) and Hopkin et al. (2015) note, statistical robustness challenges are typical of studies that target a specific user group or involve small samples. The study is also not generalizable beyond the domain studied, but it provides an evaluation of the utility and impacts of a climate DST for a specific group of decision-makers, a type of study that is lacking within the literature.
b. Analysis
Descriptive statistics were computed for each question. Mean utility scores were computed for the two questions that assessed 12 predefined SPT uses and impacts. Mean utility was also calculated for broad statements about the SPT. Thematic analysis of open-ended questions, which Braun and Clarke (2006) note is a way to evaluate qualitative data and reveal patterns and consistencies, was limited because of the small number of SPT users. Comparison between participant groups, including between Arkansas and Oklahoma participants and between emergency managers and planners, was also limited by the low statistical power. Responses are presented as numbers of participants rather than percentages for clarity.
5. Results
a. Participant demographic characteristics
The sample included 88 participants from Oklahoma and 16 from Arkansas, but some did not answer all the questions. Table 2 shows the participants’ demographics in detail. Almost half of the participants were emergency managers (n = 46) followed by planners (n = 28) and fire officials (n = 16). Most commonly, participants worked at the city/community geographic scale (n = 65) and/or county scale (n = 33). Some participants worked at more than one scale (e.g., consultants).
Table 2. Survey participant demographics. The number of responses is reported for the total sample, those who were aware of the SPT at the time of the survey, and those who had used the SPT at the time of the survey. For some questions, participants were able to select more than one category, so the numbers add up to greater than the total. In addition, some participants did not answer all of the questions.
About one-third of participants each worked in jurisdictions of 10 000–49 999 people (n = 34) or 1000–9999 people (n = 30), but the sample included representatives from all jurisdiction sizes. Most participants worked for a government organization (n = 82), while a few worked for the other organizational types listed. Overall, the sample was experienced: one-third each had worked in their field for 11–20 years and for 21+ years.
Forty participants were aware of the SPT at the time of the survey and 64 were unaware of it. Of the 40 who were aware of the tool, about half had used it (n = 19), including 15 from Oklahoma and 4 from Arkansas. Those who were aware of the tool but had not used it (“SPT-aware participants” herein) were asked to select one or more of six listed reasons why they had not. Of the 21 SPT-aware participants, a majority (n = 12) said they had not had time to look at the tool, and 9 said they had not needed to use it (e.g., have not had to update a plan recently or give a presentation). Only one participant did not understand how to use it, and one said the information is irrelevant to job duties (lack of saliency). No participant selected “The online tools are too complicated to use” or “I don't trust the information in it” (lack of trust).
b. Frequency of use
SPT users (n = 19) were asked whether they only used the tool when updating a plan. About one-third (n = 7) said that was true. SPT users were also asked how often they use the tool, either to reference specific information in it or to access one or more of the individual tools. The responses varied among “once per day” (n = 2), “once per week” (n = 1), “once per month” (n = 5), “once every 6 months” (n = 4), and “once per year” (n = 7). The variation is likely due to some users only needing the SPT during plan updates and/or to varying job responsibilities. The question could be revised in a future survey iteration to gather more detailed use frequency information. For instance, the participants who said they used the SPT once per year could have interpreted that as a period of time during a year (e.g., during a plan update), one day per year, or a few minutes per year.
c. Indicators of utility
To probe its utility further, participants were also asked about their reasons for using the SPT, a form of the tool's impact on decision-making. Six possible use reasons were listed and are shown in Table 3. Most commonly, the SPT assisted users with becoming better informed about an issue (enlightenment; n = 13) or gathering information for a plan (used in agency planning; n = 12). Participants who selected “to justify a decision that was already made” (justification; n = 3) and/or “inform a new decision” (instrumental; n = 8) were asked to describe the decisions that the SPT informed. Four participants responded, citing the creation of a more applicable HMP for their county, checking tornado frequencies for their area, developing a drought prevention project, and a “much needed database” to assist with hazard or climate research planning and projects.
Table 3. Results for the question, “What have you used the SPT for or plan to use it for? Select all that apply.” The response to “other” was “To educate jurisdictions about this resource to use for their mitigation plans/planning purposes.” Types of information use that were adapted from Oh (1996) and Wall et al. (2017b) are noted in italics.
Participants who selected “to gather information for a plan” as a reason for using the SPT (n = 12) were shown a list of 16 plan types and asked to select the plan(s) that the SPT informed. The most commonly informed plan was the FEMA HMP (n = 9). The comprehensive plan (n = 4) and emergency operations plan (n = 4) tied for second most common. However, the SPT also informed at least 11 other types of plans: economic development plan (n = 3), emergency action plan (n = 3), emergency response plan (n = 3), master drainage plan (n = 3), storm water plan (n = 3), corridor plan (n = 2), comprehensive water plan (quality or quantity) (n = 1), climate adaptation plan (n = 1), evacuation plan (n = 1), land use plan (n = 1), and transportation plan (n = 1). One participant commented that they “expect [the SPT] to be useful for creating other plans in the future.” In total, the SPT had informed at least 39 individual plans at the time of data collection, less than one year after the release of version 1.0 of the Oklahoma SPT.
Another way to evaluate utility is to assess the tool's impact on a decision-maker's work, so the 19 SPT users were asked what impact, if any, the SPT had had on theirs. Table 4 shows that the most common impact out of the six response options was “it helped me gather more comprehensive hazard data than before” (problem understanding; n = 11). Approximately one-third said it saved them time (motivational; n = 7) and helped them communicate more effectively to an elected official (personal or political; n = 6). One-quarter of users said it helped them communicate more effectively to a nonelected colleague or stakeholder (personal or political; n = 5). The impacts selected by the fewest participants depend on timelines beyond the control of the individual user and are more difficult to achieve.
Table 4. Results for the question, “What impact has the SPT had on your work, if any? Select all that apply.” Impact categories adapted from Wall et al. (2017b) are noted in italics.
To provide a single quantitative utility metric that encompasses the qualitative statements in the two aforementioned use and impact questions, a utility “score” was calculated by summing the number of indicators each user selected. The maximum possible predefined score was 12, but a participant could score as high as 14 by providing additional uses or impacts. The utility scores ranged from 1 to 11 across all SPT users, and the mean was 4.7 (SD = 3.4). Oklahoma users (n = 15) scored higher (M = 5.1, SD = 3.6) than the four Arkansas users (M = 3.0, SD = 1.8), and the planners scored higher (M = 7.0, SD = 4.1) than the emergency managers (M = 5.5, SD = 3.6). Oklahoma participants may have found the SPT to be more useful than the Arkansas participants because the idea for the SPT originated with the Oklahoma workshop cohort and/or because the Oklahoma version was released before that of Arkansas. The sample was too small to test for statistical significance, however.
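As a concrete illustration, the short sketch below computes group means and standard deviations from per-user indicator counts in the way just described. The records, field names, and values are hypothetical stand-ins rather than the survey data.

from statistics import mean, stdev

# Minimal sketch of the utility "score" computation: each score is the
# count of use/impact indicators a user selected across the two "select
# all that apply" questions (12 predefined options plus up to 2 write-in
# "other" entries, for a maximum of 14). Records here are hypothetical.
users = [
    {"state": "OK", "role": "planner", "score": 11},
    {"state": "OK", "role": "emergency manager", "score": 5},
    {"state": "AR", "role": "planner", "score": 3},
    {"state": "AR", "role": "emergency manager", "score": 1},
]

def summarize(records):
    """Return (mean, sample standard deviation) of utility scores."""
    scores = [r["score"] for r in records]
    sd = stdev(scores) if len(scores) > 1 else 0.0
    return mean(scores), sd

m, sd = summarize(users)
print(f"All users: M = {m:.1f}, SD = {sd:.1f}")
for state in ("OK", "AR"):
    m, sd = summarize([u for u in users if u["state"] == state])
    print(f"{state} users: M = {m:.1f}, SD = {sd:.1f}")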
The seven participants who said time was saved by using the SPT compared to previous methods were asked how many total hours were saved in the past 12 months or less. Five of the participants' responses ranged from 1 to 12 h, with one participant reporting 320 h and another reporting “countless hours.” The large range is noteworthy and could indicate that use increases considerably when a plan is being created or updated or for “superusers” such as consultants. The participant who reported saving 320 h was a county emergency manager and planner who worked in a jurisdiction of 100 000–499 999 people. Based on that participant's responses to other survey questions, it appears that they put together an HMP at some point after the SPT became available. The participant said that without the SPT, “I'm not sure I would even know where to begin,” and that it “really helped me expedite the weather-related research for my hazard mitigation plan.” The participant who reported that the SPT saved them “countless” hours was a planner who worked for a for-profit organization and served jurisdictions of all sizes. They also indicated that the SPT had influenced 12 types of plans.
Another form of information use is sharing information with others. Over half of SPT users shared the tool with a colleague (n = 11), and seven of those users shared it with 1–5 people. In total, SPT users shared the tool with an estimated 76–108 other individuals, a range derived from the multiple-choice categories listed in the survey.
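The 76–108 range follows from interval arithmetic on the binned answer categories: summing the lower bounds of the categories users selected gives the minimum, and summing the upper bounds gives the maximum. The sketch below illustrates the procedure; the bin boundaries and per-bin counts are hypothetical, since only the 1–5 category is reported above.

# Sketch of deriving a total-sharing range from binned multiple-choice
# answers. The bins and counts are hypothetical stand-ins; the survey's
# actual categories produced the 76-108 range reported above.
BINS = {"1-5": (1, 5), "6-10": (6, 10), "11-20": (11, 20)}
selections = {"1-5": 7, "6-10": 2, "11-20": 2}  # users per bin (hypothetical)

low = sum(BINS[b][0] * n for b, n in selections.items())
high = sum(BINS[b][1] * n for b, n in selections.items())
print(f"Estimated total individuals reached: {low}-{high}")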
SPT users were also asked to rate their agreement with several broad statements about the tool, three of which reflected utility characteristics, on a scale that ranged from strongly disagree (1) to strongly agree (5). “Not applicable” was also an option but was not selected by any participant for any of the statements. The four statements and accompanying mean agreement ratings were “using the SPT helped me include higher quality hazard information in my plan(s)” (credibility; M = 4.4), “because of my use of the SPT I am more confident that the hazard information in my plan(s) can be trusted” (trustworthiness; M = 4.3), “the SPT helped me gather hazard information that is more relevant to my jurisdiction(s)” (saliency; M = 4.2), and “overall, I feel more confident about using climate information in my job because of what I've learned from using the SPT” (M = 4.3). The ratings depict high utility, although, as previously stated, they are based on a relatively small number of users.
To inform future iterations of the tool, SPT users were asked to rate the usefulness of each of the major sections of the SPT on a 5-point scale from not at all useful (1) to very useful (5). The sections were rated as follows: future trends summaries (M = 4.4); historical data tools (M = 4.3); appendix B, Historical FEMA/Presidential Disaster Declarations (M = 4.2); appendix D, Incentive and Action Programs for Risk Reduction (M = 4.2); appendix A, Hazard Definitions and Descriptions (M = 4.1); appendix C, Climate Change Resources (M = 4.0); and data limitations summaries (M = 3.9). None of the SPT users selected “have not used” for any of the sections. When given the opportunity to state the kind of hazard or climate data they have trouble finding, only one participant provided a usable response (ice storms), which is understandable given that ice storm data are very limited.
One participant offered a detailed open-ended comment about how the SPT fits into existing information channels:

[The SPT is] another source of existing data sources. So we usually go to the source rather than look to SCIPP. Example [council of governments], local planners, hazard mitigation [subject matter experts], emergency managers or FEMA . . . [State Emergency Management Department], as do many other states, puts out a standard HMP template, so if the SCIPP data or links could be in those documents to enable quicker access that could help otherwise we are going to go right to our typical sources.
The detailed response provides insight into the importance of the primary channels and sources from which hazard planners seek information. Developing the SPT is one step of several toward connecting climate data and information to the existing organizations and templates that decision-makers are already using to accomplish their planning goals.
Another participant's comment illustrated continuing exploration of the tool:

This is a relatively new tool, we are still exploring and finding all of the uses the SPT can provide. We are using the SPT more in our planning processes and within our development services department for current/past/future weather impacts and for future community expansions. Great tool.
6. Discussion
This study not only evaluated the utility of a climate DST and produced practical insights, but it also adds to the methodological rigor of the currently limited body of literature on the topic. The evaluation indicators that were incorporated into survey questions appeared to function as planned, and the open-ended questions provided an important outlet to capture nuances that could not be gained with predefined answer choices. However, there is room for additional evaluation indicators to be developed to reflect the complex decision contexts in which a climate DST such as the SPT is used.
In practical terms, the SPT was found to be useful in an array of planning contexts given the professional diversity of tool users. Survey participants also rated the utility indicator statements and SPT components highly. Over half of the SPT users had shared the tool with a colleague or colleagues, demonstrating a high level of utility (Knott and Wildavsky 1980). The lack of feedback about how the SPT could be improved could indicate that participants could not think of improvements or had not used the tool enough to identify any. It could also indicate that the SPT was sufficient for their needs, but that conclusion cannot be stated with certainty based on the available data.
“To become better informed about an issue” was the most common reason for using the tool, which was not surprising given that it is the least difficult of the six presented use reasons to achieve. It can also be accomplished by a sole user; in other words, it does not depend on another decision-maker's (elected or nonelected official's) needs, interests, or schedule. This result is consistent with Shafer (2008), who notes that information reception is an easier form of utilization for a scientist to achieve than others. The data also showed that some decision-makers are “superusers” who use the SPT more often and more thoroughly than others, whereas some only used it during a plan update. Therefore, it appears that, to some degree, the SPT's utility and impact on decision-making are tied to a decision-maker's specific employment responsibilities. Altogether, these findings indicate that evaluating the utility of a climate DST is complex. Interview or ethnographic methods would help further identify the nuanced uses of the SPT, including a richer understanding of the frequency with which it is used and why. Furthermore, the impacts of scientific information on decision-making can take a while to emerge (Blackstock et al. 2007), so conducting another survey in a few years could capture additional uses and impacts, perhaps from a larger sample of users.
A further point has implications for climate boundary organizations. The SPT user serving a large population provided evidence for the importance of climate data and information being communicated through existing trusted channels and procedures. Climate boundary organizations and service providers often serve decision-makers across multiple sectors, each of which likely has its own authoritative sources and information channels. Therefore, a DST user base may increase if climate boundary organizations and service providers identify and utilize those trusted channels.2 In the context of the SPT, SCIPP continues to work with a subset of emergency managers and planners not only to identify the most effective information channels for sharing climate data and information but also to mainstream climate-informed planning in the south-central United States.
Although about half of the participants who were aware of the SPT had not used it, their most common reason for not using it—time—is a known decision-maker barrier to using scientific information (Finucane et al. 2013; Riley et al. 2013). Some other known barriers such as accessibility and appropriateness of scales were addressed through the SPT development process.
There are several other reasons that may explain the small SPT user base, though: 1) there are relatively few hazard planners across the two states, as explained in the method section, which might also reflect the fact that climate-informed planning is not widely institutionalized within the geographic domain studied; 2) plan update cycles run 5–20 years; 3) some decision-makers do not have the scientific expertise to utilize complex climate information (Briley et al. 2015); and 4) the nuanced nature of climate change impacts in the geographic domain studied reduces the perceived urgency to address climate change in planning.
To expand on the fourth point, extreme heat and drought periods in the 1930s and 1950s, an anomalous lack of warming in the southeastern United States,3 historically highly variable precipitation, and the prevalence of multiple climate hazards (large hail, tornadoes, ice storms, etc.) mean that local climate change impacts may not be obvious to decision-makers who have not dug into the data and/or are unaware of national and global climate trends. Therefore, some decision-makers within the study domain might be less motivated to use a tool such as the SPT compared with decision-makers in other parts of the United States where climate change impacts are much more obvious.
Although the SPT does not focus solely on climate change, the nuanced impacts of climate change in Oklahoma and Arkansas should be considered a factor in the limited uptake of climate-related DSTs. To increase the SPT's user base, SCIPP should continue its efforts to promote the tool through trusted sectoral channels (e.g., leaders of professional associations, state agencies) and include regular reminders about it in outreach materials so that it is at the forefront of decision-makers' minds during plan updates. That strategy should be used for other climate DSTs as well.
7. Conclusions
This study evaluated the utility and impact of a climate DST on a group of decision-makers in Oklahoma and Arkansas, primarily emergency managers and planners. By testing and expanding upon utility metrics that were recently proposed in the literature and evaluating a climate DST itself, this study adds to the limited body of literature that is available on the topic. This study takes a step toward quantifying the utility of a climate DST, but there remains room for additional evaluation indicators to be developed to properly reflect the complexity and nuances of the environments in which climate DSTs are used.
The results showed that the SPT user group was relatively small at the time of data collection, which was less than 1 year after the tool became available. However, the data show the SPT to have high utility for the individuals who had used it, and the users represented a range of jurisdictional sizes, geographical scales, and years of experience. In other words, the SPT was found to be salient to an array of decision contexts. The utility of the SPT was also practically demonstrated by the variety of plans it informed: 14 types and at least 39 individual plans at the time of data collection. The SPT was designed in a way that reduces the need for frequent updates, but SCIPP plans to update it if exceptional new tools become available or significant climate science advancements are made. Additional improvements and geographical expansion are possible subject to the availability of funds.
Acknowledgments
This work was supported by the National Oceanic and Atmospheric Administration Climate Program Office Grant NA18OAR4310337. The author thanks Paula Dennison and Rob Hill for their initial suggestion for the Simple Planning Tool, Leah Shore for her time collaborating with the author to help to develop the tool, along with Danielle Barker, Franklin Barnes, Danielle Semsrott, and the dozens of other emergency managers and city and regional planners who provided feedback and ideas during the development of the SPT. The author also thanks the anonymous reviewers for their constructive feedback that helped to improve the paper.
Data availability statement
Deidentified data that support the findings of this study are available from the corresponding author upon reasonable request. Those data are not publicly available because of institutional review board regulations.
REFERENCES
APA, 2020: Hazards planning. American Planning Association, accessed 19 June 2020, https://www.planning.org/nationalcenters/hazards/.
Argyle, E. M., J. J. Gourley, Z. L. Flamig, T. Hansen, and K. Manross, 2017: Toward a user-centered design of a weather forecasting decision-support tool. Bull. Amer. Meteor. Soc., 98, 373–382, https://doi.org/10.1175/BAMS-D-16-0031.1.
Blackstock, K. L., G. J. Kelly, and B. L. Horsey, 2007: Developing and applying a framework to evaluate participatory research for sustainability. Ecol. Econ., 60, 726–742, https://doi.org/10.1016/j.ecolecon.2006.05.014.
Braun, V., and V. Clarke, 2006: Using thematic analysis in psychology. Qual. Res. Psychol., 3, 77–101, https://doi.org/10.1191/1478088706qp063oa.
Briley, L., D. Brown, and S. E. Kalafatis, 2015: Overcoming barriers during the co-production of climate information for decision-making. Climate Risk Manage., 9, 41–49, https://doi.org/10.1016/j.crm.2015.04.004.
Cash, D. W., and W. C. Clark, 2001: From science to policy: Assessing the assessment process. Harvard University John F. Kennedy School of Government RWP01-045, 20 pp., https://doi.org/10.2139/ssrn.295570.
Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, J. Jager, and R. B. Mitchell, 2003: Knowledge systems for sustainable development. Proc. Natl. Acad. Sci. USA, 100, 8086–8091, https://doi.org/10.1073/pnas.1231332100.
Cash, D. W., J. C. Borck, and A. G. Patt, 2006: Countering the loading-dock approach to linking science and decision making: Comparative analysis of the El Niño/Southern Oscillation (ENSO) forecasting systems. Sci. Technol. Hum. Values, 31, 465–494, https://doi.org/10.1177/0162243906287547.
Dilling, L., and M. C. Lemos, 2011: Creating usable science: Opportunities and constraints for climate knowledge use and their implications for science policy. Global Environ. Change, 21, 680–689, https://doi.org/10.1016/j.gloenvcha.2010.11.006.
Ellenburg, W. L., R. T. McNider, J. F. Cruise, and J. R. Christy, 2016: Towards an understanding of the twentieth-century cooling trend in the southeastern United States: Biogeophysical impacts of land-use change. Earth Interact., 20, https://doi.org/10.1175/EI-D-15-0038.1.
Etz, K. E., and J. A. Arroyo, 2015: Small sample research: Considerations beyond statistical power. Prev. Sci., 16, 1033–1036, https://doi.org/10.1007/s11121-015-0585-4.
Feldman, D. L., and H. M. Ingram, 2009: Making science useful to decision makers: Climate forecasts, water management, and knowledge networks. Wea. Climate Soc., 1, 9–21, https://doi.org/10.1175/2009WCAS1007.1.
FEMA, 2019: Hazard mitigation plan requirement. Accessed 12 December 2019, https://www.fema.gov/hazard-mitigation-plan-requirement.
Finucane, M. L., R. Miller, L. K. Corlew, V. W. Keener, M. Burkett, and Z. Grecni, 2013: Understanding the climate-sensitive decisions and information needs of freshwater resource managers in Hawaii. Wea. Climate Soc., 5, 293–308, https://doi.org/10.1175/WCAS-D-12-00039.1.
Goodrich, K. A., K. D. Sjostrom, C. Vaughan, L. Nichols, A. Bednarek, and M. Carmen Lemos, 2020: Who are boundary spanners and how can we support them in making knowledge more actionable in sustainability fields? Curr. Opin. Environ. Sustainability, 42, 45–51, https://doi.org/10.1016/j.cosust.2020.01.001.
Guido, Z., D. Hill, M. Crimmins, and D. Ferguson, 2013: Informing decisions with a climate synthesis product: Implications for regional climate services. Wea. Climate Soc., 5, 83–92, https://doi.org/10.1175/WCAS-D-12-00012.1.
Hartmann, H. C., T. C. Pagano, S. Sorooshian, and R. Bales, 2002: Confidence builders: Evaluating seasonal climate forecasts from user perspectives. Bull. Amer. Meteor. Soc., 83, 683–698, https://doi.org/10.1175/1520-0477(2002)083<0683:CBESCF>2.3.CO;2.
Hawkins, M. D., V. Brown, and J. Ferrell, 2017: Assessment of NOAA National Weather Service methods to warn for extreme heat events. Wea. Climate Soc., 9, 5–13, https://doi.org/10.1175/WCAS-D-15-0037.1.
Hewitson, B., K. Waagsaether, J. Wohland, K. Kloppers, and T. Kara, 2017: Climate information websites: An evolving landscape. Wiley Interdiscip. Rev.: Climate Change, 8, e470, https://doi.org/10.1002/wcc.470.
Hocker, J. E., A. D. Melvin, K. A. Kloesel, C. A. Fiebrich, R. W. Hill, R. D. Smith, and S. F. Piltz, 2018: The evolution and impact of a meteorological outreach program for public safety officials: An update on the Oklahoma Mesonet’s OK-First Program. Bull. Amer. Meteor. Soc., 99, 2009–2024, https://doi.org/10.1175/BAMS-D-17-0100.1.
Hopkin, C. R., R. H. Hoyle, and N. C. Gottfredson, 2015: Maximizing the yield of small samples in prevention research: A review of general strategies and best practices. Prev. Sci., 16, 950–955, https://doi.org/10.1007/s11121-014-0542-7.
International Standards Organization, 2018: Usability: Definitions and concepts. Part 11, Ergonomics of Human-System Interaction, ISO 9241, https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en.
Kirchhoff, C. J., M. C. Lemos, and S. Dessai, 2013: Actionable knowledge for environmental decision making: Broadening the usability of climate science. Annu. Rev. Environ. Resour., 38, 393–414, https://doi.org/10.1146/annurev-environ-022112-112828.
Knott, J., and A. Wildavsky, 1980: If dissemination is the solution, what is the problem? Knowl. Creat. Diffus. Util., 1, 421–442.
Lemos, M. C., and B. J. Morehouse, 2005: The co-production of science and policy in integrated climate assessments. Global Environ. Change, 15, 57–68, https://doi.org/10.1016/j.gloenvcha.2004.09.004.
Lemos, M. C., C. J. Kirchhoff, and V. Ramprasad, 2012: Narrowing the climate information usability gap. Nat. Climate Change, 2, 789–794, https://doi.org/10.1038/nclimate1614.
Lempert, R., J. Arnold, R. Pulwarty, K. Gordon, K. Greig, C. Hawkins Hoffman, D. Sands, and C. Werrell, 2018: Reducing risks through adaptation actions. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, D. R. Reidmiller et al., Eds., Vol. II, U.S. Global Change Research Program, 1309–1345, https://doi.org/10.7930/NCA4.2018.CH28.
Lexico, 2020: Definition of suitability in English. Accessed 20 May 2020, https://www.lexico.com/en/definition/suitability.
Lourenço, T. C., R. Swart, H. Goosen, and R. Street, 2016: The rise of demand-driven climate services. Nat. Climate Change, 6, 13–14, https://doi.org/10.1038/nclimate2836.
Maudlin, L. C., K. S. McNeal, H. Dinon-Aldridge, C. Davis, R. Boyles, and R. M. Atkins, 2020: Website usability differences between males and females: An eye-tracking evaluation of a climate decision support system. Wea. Climate Soc., 12, 183–192, https://doi.org/10.1175/WCAS-D-18-0127.1.
McNie, E. C., 2007: Reconciling the supply of scientific information with user demands: An analysis of the problem and review of the literature. Environ. Sci. Policy, 10, 17–38, https://doi.org/10.1016/j.envsci.2006.10.004.
Meadow, A. M., D. B. Ferguson, Z. Guido, A. Horangic, G. Owen, and T. Wall, 2015: Moving toward the deliberate coproduction of climate science knowledge. Wea. Climate Soc., 7, 179–191, https://doi.org/10.1175/WCAS-D-14-00050.1.
Morss, R. E., J. K. Lazo, and J. L. Demuth, 2010: Examining the use of weather forecasts in decision scenarios: Results from a US survey with implications for uncertainty communication. Meteor. Appl., 17, 149–162, https://doi.org/10.1002/met.196.
Novak, D. R., D. R. Bright, and M. J. Brennan, 2008: Operational forecaster uncertainty needs and future roles. Wea. Forecasting, 23, 1069–1084, https://doi.org/10.1175/2008WAF2222142.1.
NRC, 2009: Informing Decisions in a Changing Climate. The National Academies Press, 200 pp., https://doi.org/10.17226/12626.
NRC, 2010: Informing an Effective Response to Climate Change. The National Academies Press, 348 pp.
Oakley, N. S., and B. Daudert, 2016: Establishing best practices to improve usefulness and usability of web interfaces providing atmospheric data. Bull. Amer. Meteor. Soc., 97, 263–274, https://doi.org/10.1175/BAMS-D-14-00121.1.
Oh, C. H., 1996: Linking Social Science Information to Policy-Making. Emerald Group Publishing, 201 pp.
Overpeck, J. T., G. A. Meehl, S. Bony, and D. R. Easterling, 2011: Climate data challenges in the 21st century. Science, 331, 700–702, https://doi.org/10.1126/science.1197869.
Page, R., and L. Dilling, 2019: The critical role of communities of practice and peer learning in scaling hydroclimate information adoption. Wea. Climate Soc., 11, 851–862, https://doi.org/10.1175/WCAS-D-18-0130.1.
Parris, A. S., G. M. Garfin, K. Dow, R. Meyer, and S. L. Close, Eds., 2016: Climate in Context: Science and Society Partnering for Adaptation. John Wiley and Sons, 312 pp.
Porter, K., C. Scawthorn, N. Dash, and J. Santos, 2017: Natural hazard mitigation saves: 2017 interim report. National Institute of Building Sciences, Multihazard Mitigation Council, 16 pp., https://www.fema.gov/media-library-data/1516812817859-9f866330bd6a1a93f54cdc61088f310a/MS2_2017InterimReport.pdf.
Riley, R., R. Edwards, L. Carter, M. Shafer, and M. Boone, 2013: South central U.S. hazard and climate change planning assessment. Southern Climate Impacts Planning Program, 55 pp., http://www.southernclimate.org/publications/Hazard_Planning_Assessment.pdf.
Schwab, J. C., 2010: Hazard mitigation: Integrating best practices into planning. American Planning Association Planning Advisory Service Rep. 560, 146 pp., https://www.planning.org/publications/report/9026884/.
Shafer, M. A., 2008: Climate literacy and a national climate service. Phys. Geogr., 29, 561–574, https://doi.org/10.2747/0272-3646.29.6.561.
VanderMolen, K., T. U. Wall, and B. Daudert, 2019: A call for the evaluation of web-based climate data and analysis tools. Bull. Amer. Meteor. Soc., 100, 257–268, https://doi.org/10.1175/BAMS-D-18-0006.1.
Wall, T. U., T. J. Brown, and N. J. Nauslar, 2017a: Spot weather forecasts: Improving utilization, communication, and perceptions of accuracy in sophisticated user groups. Wea. Climate Soc., 9, 215–226, https://doi.org/10.1175/WCAS-D-15-0055.1.
Wall, T. U., A. M. Meadow, and A. Horangic, 2017b: Developing evaluation indicators to improve the process of coproducing usable climate science. Wea. Climate Soc., 9, 95–107, https://doi.org/10.1175/WCAS-D-16-0008.1.
1 OK-First is a weather and radar data training program that serves public safety officials across Oklahoma (Hocker et al. 2018).
2 A DST being promoted through a trusted information channel is not the only factor in whether it is used. Other factors include, e.g., relevance to decision contexts, decision-maker regulatory requirements, interest from organizational leaders in incorporating climate information, and time.
3 Ellenburg et al. (2016) recently found this to be due to reforestation practices across the region.