Scholarship on climate information use has focused substantially on engagement with practitioners as a means of enhancing knowledge use. In principle, working with practitioners to incorporate their knowledge and priorities into the research process should improve information uptake by enhancing accessibility and improving users’ perceptions of how well information meets their decision needs, including knowledge credibility, understandability, and fit. Such interactive approaches, however, can entail high costs for participants, especially in terms of financial, human, and time resources. Given the likely need to scale up engagement as demand for climate information increases, it is important to examine whether and to what extent personal interaction is always a necessary condition for increasing information use. In this article, we report the results of two experimental studies using students as subjects to assess how three types of interaction (in-person meeting, live webinar, and self-guided instruction) affect different aspects of climate information usability. Our findings show that while in-person interaction is effective in enhancing understanding of climate knowledge, it may not always be necessary, depending on the kinds of information involved and outcomes desired.
Current and future impacts of climate change underscore the need for climate information to support societal responses (Moss et al. 2013). Meeting this societal need for information is nontrivial as traditional ways to produce and communicate science often fail to yield usable knowledge to meet users’ needs (Kirchhoff et al. 2013). Engagement with practitioners in the process of creating climate information is believed to accelerate the production of usable knowledge. While there have been growing calls for interaction with stakeholders to support climate adaptation (NRC 2010; Williams et al. 2015), there has been relatively little empirical evidence of its impact on actual knowledge use [but see Ford et al. (2013) and Fujitani et al. (2017)]. Given the growing costs and popularity of engagement and interaction among environmental scientists and funding organizations, especially in communicating climate knowledge, there is a critical need to better understand the role of engagement and interaction in increasing knowledge use. On the one hand, we need to design better ways to evaluate and assess the impact of all forms of engagement in increasing knowledge use and supporting societal and ecological well-being (Klenk et al. 2015; Lemos et al. 2014; Meadow et al. 2015; Wall et al. 2017). On the other hand, we need to make better use of the science of understanding knowledge use to inform the practice and design of engagement processes (Lemos et al. 2018). In this study, we use randomized-controlled experiments to better understand how interaction between scientists and potential users shapes drivers of knowledge use, such as understanding, credibility, and perceptions of fit (Briley et al. 2015; Cash et al. 2003; Parris et al. 2016).
While there is growing evidence that engagement enhances usability—that is, the likelihood that knowledge will be used—recent scholarship has increasingly called attention to the amount of resources necessary to sustain face-to-face science–practice interactions (Kettle and Trainor 2015; Lemos et al. 2014). These costs include financial and logistical resources for getting scientists and users together, the time spent by producers and users in repeated interaction, and less tangible costs such as the long-term commitment required to build trust and legitimacy, which are often mentioned as significant constraints to usability (Pidgeon and Fischhoff 2011). On the one hand, concerns about resource demands for engagement have centered on the resources required of producers. These include the institutional and organizational constraints scientists face in engaging with users (Briley et al. 2015; Lemos and Morehouse 2005), the relatively low number of scientists willing to engage, and a perceived mismatch between the growing need for engagement and willingness to do so (McNie 2007). On the other hand, there is concern about resource demands placed on potential users, such as the burden concentrated on a relatively small number of decision-makers involved in climate-related decisions at the local level, leading to “stakeholder fatigue,” and the personal risks users may incur when their place of employment discourages engagement (Lemos et al. 2018). Moreover, potential users are increasingly reluctant to interact with climate information producers due to the high costs involved in traveling and lost work days (e.g., Kettle and Trainor 2015). Finally, financial and human resources to organize such interactions are often not available. Understanding these costs and how to offset them is important for both maximizing existing resources and scaling up engagement processes across new sectors and communities.
One way to reduce the costs of engagement, particularly the cost and time associated with traveling and hosting in-person meetings, is to explore different ways of communicating and interacting with potential users. With the steady advance of technology, there are now many options to enable effective remote interaction, perhaps making it a viable alternative to in-person interactions. While the effectiveness of remote interaction for building trust, for sustaining effective communication, and for knowledge exchange has been explored in business and other contexts (Alsharo et al. 2017; Henttonen and Blomqvist 2005; Jarvenpaa and Leidner 1999), relatively little work has been done within the context of climate change research and application [but see Kettle and Trainor (2015)]. As such, we know very little about the effectiveness of remote interaction or its viability as an alternative to face-to-face interaction in supporting engagement in this context (Lach and Rayner 2017). This is especially the case with oft-cited factors that influence the usability of climate information: understanding, credibility, and fit (Lemos et al. 2012).
In this article, we report the results of two experimental studies, using University of Michigan students as subjects, to assess how three types of interaction (in-person meeting, live webinar, and self-guided instruction) affect different aspects of climate information usability and uptake. To our knowledge, this is the first effort using an experimental design to explore how different types of interaction—which is at the heart of engagement—influence climate knowledge uptake. Our findings show that while in-person interaction is sometimes effective at enhancing understanding of climate knowledge, it may not always be necessary, depending on the kinds of information involved and outcomes expected.
In choosing to carry out the experiment with students, we are aware of the potential limitations of our findings when compared with using actual decision-makers as subjects. Our reasons to carry out the experiments with students were twofold. First, there was feasibility: the logistics of carrying out randomized field experiments with samples large enough to allow statistical analyses were daunting without a compelling proof of concept that our ideas were viable. Second, while working with actual practitioners would have been ideal, previous research has shown that the benefits of using students, in terms of cost and recruitment efficiency, may outweigh the costs to external validity, as student and nonstudent responses are often largely equivalent (Anderson and Edwards 2015).
In the next sections, we first describe the literature on knowledge use that grounds our experiment and second, the two studies that informed our findings. Subsequently, we describe each experiment in detail, including methods, analyses, and findings.
2. Literature review
a. Information use and usability
Questions about the use of information attract broad interest from scholars, policy-makers, practitioners, and funders alike. As an area of social inquiry, these questions motivate research to better understand the conditions by which scientific information and other forms of knowledge get used by people and organizations in the course of decision-making (Gitomer and Crouse 2019). While pioneering work on this topic occurred in the late 1970s and early 1980s (e.g., Caplan 1979; Rich 1981; Weiss 1979), more recent scholarship is emerging in the context of different social problem domains such as education (Tseng 2012), health (Holmes et al. 2012), climate change (Kirchhoff et al. 2013), and sustainable development (Clark et al. 2016).
Across these arenas, the meaning of “use” and what drives the use of information admit a range of definitions and explanations. First, use may refer to direct inputs to decision-making and implementation to support problem-solving. Second, use may refer to shaping how issues or agendas are framed, or for general enlightenment and rationalizing of preconceived actions, decisions, or value judgments for political or tactical ends (Weiss 1979). Some explanations for why information is used (or not used) examine the quality or form of the information itself or the social or organizational context in which it is used (Landry et al. 2003). A recurring and dominant explanation examined across time and contexts focuses on the disconnect—institutional, cultural, even linguistic—between where information is produced and where it is used (Caplan 1979). This disconnect, in turn, hinders access to potentially useful information or leads to the production of information that is not relevant or does not fit decision contexts.
One line of study for understanding how to increase information use in decision-making examines the role of interaction between researchers and practitioners. For example, early research by David Cash and colleagues (Cash et al. 2003) found that environmental assessments would be more likely to be perceived by practitioners as credible, relevant, and legitimate if their production entailed some form of interaction between the providers and users of the assessments. Lemos and Morehouse (2005) argued that iteration between researchers and users was a necessary condition for the coproduction of usable knowledge. Subsequent work further suggests that particular kinds of information, like seasonal climate forecasts (Dilling and Lemos 2011) and downscaled climate projections (Vogel et al. 2016), could be rendered more usable for decision-making when produced through producer and user interactions, especially when addressing the complexities and uncertainties embedded in data-intensive climate information (Briley et al. 2015; Kirchhoff 2013; Kirchhoff et al. 2015b; McNie 2013).
b. Types of engagement and interaction
Much of the research on how interaction enhances climate information use centers on in-person engagements between producers and users, leading many to argue that sustained, in-person interactions increase usability. This is not surprising given that research on scientist–practitioner interaction tends to emphasize the importance of relationship building and trust (Brugger and Crimmins 2015; Dilling and Lemos 2011; Jones et al. 2016; Moss 2016). In particular, personal interaction that builds trust and understanding in the context of coproduction also increases users’ willingness to share that information and learning within their organizations and networks (Kirchhoff et al. 2015a). While scientist–practitioner interaction critically improves usability, doing it “right” is resource intensive, requiring not only financial and logistical resources but also time and long-term commitment from both producers and users to sustain collaboration over time (Pidgeon and Fischhoff 2011).
Mitigating this resource intensiveness and advancing our ability to meet expected demand for climate information requires exploring how different forms of interaction affect information use. First, by better understanding what specific characteristics of in-person interaction enhance different dimensions of usability, we may be able to reduce the costs of interaction by leveraging the capacity for engagement through webinars and other virtual technologies. Second, we may also be able to better evaluate other forms of knowledge sharing such as web-based decision-support tools, which have great potential to scale up use. For example, the proliferation of online decision support tools for climate decision-making (see, e.g., NOAA’s resilience tool kit—https://toolkit.climate.gov/) suggests that careful evaluation of the usability of remote interaction with climate information is overdue.
With the steady advance of technology, there are many more options that potentially enable effective remote interaction. Research in business and related fields has explored different forms of remote interaction and their role in building trust, sustaining effective communication, and exchanging knowledge among virtual teams (Alsharo et al. 2017; Bhappu et al. 2001; Henttonen and Blomqvist 2005; Jarvenpaa and Leidner 1999). The evidence from these studies is mixed. For example, Bhappu et al. (2001) found computer-mediated communication helped virtual team members with diverse backgrounds acquire and integrate different knowledges more effectively. Alsharo et al. (2017) found that sharing knowledge among virtual teams helped to build trust and collaboration (although they did not find a corresponding significant increase in team effectiveness). In contrast, Cramton and Orvis (2003) found that social (e.g., information about an individual’s networks, motives, and goals) and contextual (e.g., information about norms, rules, expectations) information are particularly difficult to share in virtual environments, potentially leading to misunderstanding and a breakdown of trust. Also, Riopelle et al. (2003) found that remote technologies must be carefully matched to the task and context. For complex tasks with complex contexts, face-to-face communication may be the best solution to facilitate understanding and task completion (Riopelle et al. 2003). While we know a great deal about remote interaction in business and related contexts, we know very little about the effectiveness of remote interaction or its viability as an alternative to face-to-face interaction in supporting climate information use.
In the area of distance learning, evaluations of in-person versus distance or remote learning have been carried out for many years. Early research on online learning signaled the possibility that few differences, and perhaps even benefits, may arise from pursuing Internet-based learning (e.g., Bernard et al. 2004). Two meta-analyses of such studies suggest that online learners perform better than students in traditional learning environments (Means et al. 2013; Means et al. 2009). It is unclear, however, whether the results can be attributed to the mode of delivery per se, as the instructional methods used in online courses and face-to-face classrooms often differ. Furthermore, some research has found that online learning only has significant advantages when it also includes an element of face-to-face interaction (i.e., “blended” delivery mode). In the context of training, such as for one-time skill development or continuing education, additional studies have found opportunities for similar or even enhanced performance by learners, such as in the context of library instruction or health training (Hemmati et al. 2013; Silk et al. 2015). In the public health arena, online training has become increasingly popular such that studies may now be fully focused on the efficacy of online efforts (Colleran et al. 2012; Webb et al. 2017), which hold the promise of expanded and accelerated health worker training in underserved or under-resourced areas (Rowe et al. 2005).
3. Study experiments: Description and methods
Our studies investigate the influence of three different forms of interaction and their influence on climate information use for decision-making: in-person meeting, live webinar, and self-guided web-based instruction. For ease of conducting the studies, our focus is on one-time interactions, such as might be used to introduce practitioners to new climate tools or to share new research findings that may impact practitioners’ work. We assume in-person meeting to be more resource intensive (e.g., logistically, and in terms of human and financial resources) than live webinar. Following the same logic, we assume a live webinar to be more resource intensive than self-guided instruction.
We compare these different forms of interaction through two randomized experiments. In both studies, experienced climate information brokers (scientists who have worked with potential users to help them learn about and potentially use scientific information) interacted with participants in semicontrolled environments for the in-person meeting and live webinar. For purposes of the experiment, we refer to the climate information broker as the “instructor.” The first study (2015) was designed as a “proof of concept” seeking to explore the assumption that “closer” interaction would lead to better understanding and intention to use climate information in a decision context. Study 2, carried out in 2016, sought to further explore and validate the results of study 1 while also examining whether the type of interaction affects decision-making. All study protocols were approved by the Institutional Review Board at the University of Michigan.
In both studies, we examine the effects of interaction on three dimensions of usability: understanding, credibility, and fit. Given prior scholarship, we expected in-person interactions would yield greater levels of understanding, credibility, and perceived fit relative to other forms of scientist-user interaction. We additionally measure uptake of information. In study 1 this takes the form of intentions to use the presented climate information while in study 2, we ask participants to draw upon information provided to make a decision within a hypothetical scenario and then reflect on which types of information informed their decision making. Specifically, we expected the in-person group to be more accepting of uncertain projections from climate models and thus more likely to report using that information.
a. Study 1
In our first study, we tested whether the form of interaction affects understanding of and intention to use information provided in a climate adaptation planning tool.
1) Participants and procedure
To approximate potential users’ expertise in the context of climate-related decision-making, we recruited graduate students (N = 46) at the University of Michigan with either environmental/natural resources or urban planning backgrounds. Students were offered a $35 Amazon gift card in exchange for their participation.
Students interested in participating provided their availability during two 4-hour blocks in May 2015. Those who signed up for a given time block were then randomly assigned to one of three tutorials: in-person meeting, live webinar, or self-guided instruction (i.e., written instructions and recorded videos). This stratified randomization process helped ensure that students with similar characteristics (e.g., motivated students who signed up for the first time slot) would be distributed across the three treatments. The final sample sizes per condition were 11 students in the in-person meeting, 16 in the live webinar, and 19 in the self-directed group.
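The stratified randomization described above can be sketched as follows. This is an illustrative sketch only: the block labels, participant IDs, and round-robin allocation are assumptions for demonstration (a round-robin deal yields near-equal groups, whereas the study's reported group sizes were uneven).

```python
import random

# Illustrative sketch of stratified (block) randomization: participants
# who signed up for the same time block are shuffled and dealt
# round-robin into the three treatments, so each block contributes
# roughly equally to every condition. Labels and IDs are invented.
TREATMENTS = ["in_person", "live_webinar", "self_guided"]

def assign_within_blocks(blocks, seed=0):
    """blocks: dict mapping time-block label -> list of participant IDs."""
    rng = random.Random(seed)
    assignment = {t: [] for t in TREATMENTS}
    for participants in blocks.values():
        shuffled = list(participants)
        rng.shuffle(shuffled)  # random order within the block
        for i, pid in enumerate(shuffled):
            assignment[TREATMENTS[i % len(TREATMENTS)]].append(pid)
    return assignment

# Two hypothetical sign-up blocks totaling N = 46, as in study 1
blocks = {"block_1": [f"s{i}" for i in range(24)],
          "block_2": [f"s{i}" for i in range(24, 46)]}
groups = assign_within_blocks(blocks)
```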
Students were told that the purpose of the study was to evaluate the Cities Impact and Adaptation Tool (CIAT; http://graham-maps.miserver.it.umich.edu/ciat/home.xhtml), an online resource aimed at helping city planners to plan and implement adaptive responses to climate change. All students were asked to complete a tutorial about the tool, which, depending on their assigned treatment, occurred through an in-person meeting, a live webinar, or self-guided instruction on the CIAT website. In all conditions, students were shown how to look at both historic climate data and modeled projections to ascertain whether and how temperatures and precipitation levels within a region might change. At the end of the presentation, students had the opportunity to ask questions of the presenter. Students in the in-person condition tended to ask more questions than those in the webinar condition. Following the tutorial, participants completed a survey about their understanding and perceptions of the data presented.
To test objective understanding of CIAT data, students completed a short quiz with 19 possible correct answers. All other measures on the survey were assessed through five- or seven-point scaled questions (see Table 1). Students separately rated the understandability and credibility of both the observed historical data in the tool as well as the projected climate model data presented. We also measured understanding of the tool itself by asking students to rate their difficulty in learning the tool and whether they wanted additional guidance for using it. To assess fit—that is, the appropriateness of the information for city decision-makers—we asked participants to rate the perceived riskiness of making decisions based on the tool. Finally, as a measure of uptake, we asked respondents about their intentions to use or recommend the tool in the future. Where appropriate, we used principal component analysis with oblimin rotation to reduce the number of items into a smaller set of reliable scales.
Because of the small sample sizes and nonnormal distribution of the data, we initially used Kruskal–Wallis H tests with Dunn’s test for multiple comparisons to identify differences between treatments. These analyses revealed that the in-person and live webinar treatments did not differ significantly in any respect (all p values > 0.3), including in participants’ evaluations of the scientist presenter (referred to as the instructor in the study 1 survey; see Table 1) (U = 73.5, p = 0.481) and the perceived level of interaction during the training (U = 73.5, p = 0.481) (which were only measured for the in-person and webinar groups). The relationships between each of these two treatments and the self-guided treatment also followed similar trends, with the exception of the results for uptake intentions. We therefore combined the in-person and live webinar treatments in subsequent analyses to enhance statistical power. Combining these treatments also resulted in observations that more closely approximated a normal distribution, thereby allowing us to use independent t tests to compare scientist-led and self-directed groups.
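This analysis path can be illustrated with simulated data. The scores below are invented for demonstration (only the group sizes match study 1), and a Mann-Whitney U test stands in for the Dunn-style pairwise follow-up, so the resulting statistics are not the reported results.

```python
import numpy as np
from scipy import stats

# Simulated quiz scores (invented; only group sizes match study 1),
# illustrating the analysis path: an omnibus Kruskal-Wallis H test,
# a pairwise check of the two scientist-led arms, then pooling them
# for an independent t test against the self-guided group.
rng = np.random.default_rng(1)
in_person = rng.normal(17.1, 1.5, 11)
webinar = rng.normal(17.0, 1.5, 16)
self_guided = rng.normal(15.8, 1.4, 19)

# Omnibus test across all three treatments
h, p_kw = stats.kruskal(in_person, webinar, self_guided)

# If the two scientist-led arms do not differ, combine them to gain power
u, p_pair = stats.mannwhitneyu(in_person, webinar)
scientist_led = np.concatenate([in_person, webinar])
t, p_t = stats.ttest_ind(scientist_led, self_guided)
```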
As shown in Fig. 1, no differences were found between scientist-led (in-person + live webinar) and self-directed trainings in terms of the understandability and credibility of the climate information presented or in the perceived riskiness of using climate models to inform decision-making (i.e., fit). We did observe, however, modest differences in objective knowledge, with the self-guided group performing slightly worse (M = 15.79 correct responses, SE = 0.31) on the quiz than those trained by a scientist [M = 17.07, SE = 0.32, t(44) = 2.75, p = 0.009, d = 0.84]. Self-guided participants had greater difficulty learning the tool (M = 2.74, SE = 0.22) and reported wanting more guidance (M = 5.11, SE = 0.23) on how to use it than those trained by a scientist [difficulty: M = 2.12, SE = 0.19, t(44) = 2.10, p = 0.042, d = 0.65; guidance: M = 4.07, SE = 0.27, t(44) = 2.72, p = 0.009, d = 0.84] (Fig. 1). In terms of uptake intentions, preliminary analyses suggested that the in-person group had higher intentions than the self-guided group (padj = 0.047, r = 0.44), but the effect disappeared when the in-person and webinar treatments were combined (Fig. 1).
b. Study 2
Study 2 tested whether the form of interaction influences climate information uptake in the context of a risky decision. Unlike study 1, where students learned about a climate tool for which they had no immediate use, study 2 asked participants to play the role of a water utility manager tasked with making a long-term investment decision to deal with harmful algal blooms (HABs). To inform their decision making, we presented information about the potential impacts of climate change on future occurrences of HABs, again manipulating whether this information was delivered through an in-person meeting, live webinar, or self-guided instruction (via a prerecorded webinar).
1) Participants
Participants (N = 156) were undergraduate and graduate students at the University of Michigan with backgrounds in natural resource management, urban planning, and business. Students were offered a $30 Amazon gift card to complete a short reading assignment, attend a presentation, and respond to two short questionnaires. Participants included in the dataset completed all parts of the study. The final sample sizes per condition were 55 students in the in-person group meeting, 50 in the live webinar, and 51 in the self-directed group.
2) Procedure and materials
To participate in the study, students first completed an online form that included questions about their program of study and year in school. They then signed up for one of nine time slots offered over a three-day period in September 2016. We randomly assigned students to each treatment through a two-stage process. In the first stage, we randomized time slots such that four slots were assigned to the in-person treatment, four to the live webinar treatment, and one to the self-guided recorded webinar treatment. Within each of the in-person and webinar time slots, we then stratified participants according to their major and tenure (year in program). From these stratified groups, we randomly selected a set number of students to participate in the self-directed treatment (which was done online during the students’ own time). This process ensured that students with similar experience and backgrounds were evenly distributed across the three treatments.
Upon signing up to participate, students were directed to an online pretest survey. The survey included a scenario (held constant across all three treatments) in which we asked students to assume the role of a drinking water utility manager for a city on Lake Erie experiencing harmful algal blooms (HABs; see the online supplemental material). The utility manager (i.e., the experiment participant) had five investment options for protecting the city from future HABs. Larger investments would provide greater protection from HABs but would divert funds from other important city programs. Participants had to weigh the risk of future HABs against the risk of wasting city funds, bearing in mind that the occurrence of future HABs was uncertain and dependent on factors such as climate change and regional agricultural practices. After reading the scenario, participants completed the pretest survey by selecting their investment decision.
Students participated in the experimental portion of the study (in-person group seminar, live webinar, or self-guided recorded webinar) four to eight days later. In each condition, an environmental scientist well-versed in topics related to climate information and harmful algal blooms (held constant across all treatments) presented information on how climate change could influence the occurrence and severity of HABs in the future. During the presentation, the scientist explained that changing temperature and precipitation levels may influence future HABs. Of these two factors, the connection between precipitation and HABs was described as being less certain. The presenter further explained that decision-makers have three types of climate data (for either temperature or precipitation) that might be used to make predictions about future HABs: projections from historical data, current observations, and projections from climate models.
To ensure a minimum level of interaction between the scientist and participants, we used two confederate students to ask the same predetermined questions in each of the conditions (including the recorded webinar in the self-directed condition). Students in the in-person meeting and live webinar could ask additional questions. More student-generated questions were observed in the in-person meeting than in the live webinar. Immediately following the presentation, students in the in-person meeting completed the posttest survey in an adjacent computer laboratory while participants in the live webinar were emailed a link to the posttest survey. Participants in the self-guided condition were instructed via e-mail to visit a website where they could watch a recorded webinar before completing the posttest survey.
The investment options presented to students on both the pretest and posttest are provided in the supplemental material. The choices were scaled such that each successive option required a greater upfront investment of money. Students were told that spending more money upfront would reduce the cost of future HAB events but doing so came with the risk of wasting city funds. If the number of future HABs was low, the money—which could have gone to other important city programs—would be wasted. If students underinvested and the number of future HABs was high, the city would have to borrow funds from other programs.
The posttest also included items to assess the overall quality of the tutorial and perceived usability of the information presented (Table 2). Similar to study 1, students rated the level of interaction with the scientist presenter (“instructor” in the study 2 survey; Table 2), the quality of the presentation, and how credible and engaging they found the presenter to be. Additional measures assessed the overall usability of the information presented, using separate items for fit, credibility, and understanding.
We also asked students about the fit and credibility of the different types of climate data presented. We defined fit in terms of how relevant, useful, and informative students found the climate data presented for their decision on how to handle HABs. An initial question asked students to rate how much each type of climate data (current and historical observations, projections from historical data, and climate model predictions), in general, influenced their decision making. Students then rated the perceived fit and credibility of the different types of temperature and precipitation data presented (i.e., projections from historic temperature data, projections from historic precipitation data, current observations of temperature, current observations of precipitation, projections from climate model temperature data, and projections from climate model precipitation data). To examine differences on these measures between experimental conditions, we ran a series of mixed factorial ANOVAs, treating data type (current observations, historical projections, and climate model projections) as the within-subjects factor and experimental treatment as the between-subjects factor.
As shown in Fig. 2, the experimental manipulation demonstrated that participants perceived three different levels of interaction with the instructor [Welch’s F(2, 98.15) = 49.68, p < 0.001, est. ω2 = 0.38], but otherwise found the quality of the presentation and the credibility of the instructor to be equivalent. Perceptions of how engaging the presenter was also varied across treatments [F(2, 153) = 6.90, p = 0.001, ω2 = 0.07] with in-person participants rating the presenter as more engaging than participants in either the live webinar (p = 0.001, d = 0.38) or prerecorded webinar (p = 0.028, d = 0.50). Despite differences in perceived level of interaction, the treatments did not lead participants to perceive differences in terms of the overall fit, understandability, or credibility of the information presented (Fig. 3).
Next, we examined whether perceptions of different types of climate data varied by treatment. No main effects were found for experimental condition on any of the outcome variables, and, with one exception, no interactions were found between data type and treatment condition (see Figs. 4 and 5). The results, overall, suggest that perceptions of different data sources did not differ across treatments. The exception was the perceived fit of climate precipitation data, for which we observed a significant interaction between data type and experimental condition [F(4, 306) = 3.56, p = 0.008]. As shown in Fig. 5b, participants in the self-directed group perceived the fit of the information as lower and thus, according to the literature reviewed above, might be less likely to use projected precipitation data from climate models than participants in either the in-person group or the live webinar.
Finally, to assess whether treatment condition influenced participants’ investment decisions, we calculated change scores from pretest to posttest. Because most students did not change their investment plan, the change scores were not normally distributed, so we used a Kruskal–Wallis H test to examine differences between treatments. No significant differences were found [H(2) = 1.91, p = 0.384].
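The change-score comparison can be sketched in a few lines. The data below are synthetic placeholders (most scores are zero, mimicking participants who kept their plan), and the group size is illustrative; only the test procedure, `scipy.stats.kruskal`, reflects the analysis described.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def change_scores(n):
    """Synthetic posttest-minus-pretest investment changes:
    most participants keep their plan (score 0); a few shift."""
    changed = rng.random(n) < 0.3
    return np.where(changed, rng.normal(0.0, 10.0, n), 0.0)

# one vector of change scores per treatment group (sizes are illustrative)
in_person, live_webinar, self_guided = (change_scores(52) for _ in range(3))

# Kruskal-Wallis H test: rank based, so it does not assume normality
# and tolerates the pile-up of zeros (ties are corrected internally)
H, p = stats.kruskal(in_person, live_webinar, self_guided)
```

The rank-based test is the standard choice here precisely because the zero-inflated change scores violate the normality assumption of a one-way ANOVA.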
Based on our two studies, we find limited support for the hypothesis that in-person interactions will yield a greater level of understanding and use of information relative to other forms of scientist–user interaction. While study 1 suggests there may be marginal benefits to disseminating climate information through forms of interaction where practitioners have direct contact with knowledge producers, we found no differences in perception of overall fit, understandability, or credibility of the information between treatment groups in study 2.
Yet a few observations deserve attention. In study 1, participants who had a scientist guide them through the CIAT tool found it easier to understand and demonstrated greater understanding of the information presented. However, it does not appear to matter whether that guidance is delivered in person or through a live webinar. While study 2 indicates that both webinar and self-guided instruction may be reasonable alternatives to in-person interaction for enhancing usability (fit, understanding, and credibility of information), we found one exception: the perceived fit of climate model precipitation data. Participants in the self-directed group reported lower perceived fit of these data than participants in either the in-person group or the live webinar. This suggests that for more uncertain climate change projections, such as precipitation, more interaction is better.
Based on these results, we argue that to improve and potentially scale up climate information uptake, climate scientists and information brokers should weigh the transaction costs associated with in-person interaction against its expected gains. Intensive efforts to interact with practitioners may be best reserved for complex information and contexts in which there is no substitute for in-person interaction. For example, situations in which information is complex or highly uncertain, such as climate precipitation projections, may require in-person interaction. Similarly, in contexts where local politics or distrust of science may inhibit action, close and meaningful interaction to build legitimacy and trust may be desirable. In contrast, where credibility may not be an issue (e.g., when information is delivered by a well-respected university-based scientist with knowledge brokering expertise), remote means of interaction could present tangible advantages in terms of lower human and time costs without forgoing the opportunity for trust building.
Several methodological limitations point to avenues for future research. First, as mentioned before, our studies were conducted with relatively small samples of students rather than practitioners in the field. While we attempted to recruit students who might reasonably use climate information in their future careers, and our experimental design sought to instigate realistic stakes in a decision-making process, students may not have been as personally invested in the quality of the tool presented in study 1 or in the tradeoffs associated with the harmful algal bloom scenario described in study 2. Second, our studies speak only to the effects of one-time interactions between knowledge producers and users. Despite these limitations, our findings are consistent with those of scholars finding that virtual interaction can achieve certain goals as effectively as face-to-face instruction (Alsharo et al. 2017; Bhappu et al. 2001; Means et al. 2013). Additional field research is needed to determine how and when the results might generalize to different real-world contexts, including how power dynamics and governance contexts among and between different groups of practitioners would influence the role of in-person versus virtual interaction with scientists.
Through two randomized experiments, we examined whether different forms of interaction influence knowledge users’ understanding of climate information as well as their perceptions of credibility and fit in utilizing climate tools to support decision-making. The results of studies 1 and 2 together show that in the context of one-off efforts to enhance climate information usability, increased interaction between knowledge producers and users may offer few advantages over less resource-intensive approaches. In both studies, the live webinar and in-person meetings led to similar outcomes, and with rare exception, offered little advantage over groups of participants who viewed the same materials on their own.
Our study is one of the first attempts to investigate the effects of science–practice interaction on different drivers of climate science usability through a randomized experiment. We believe this experimental approach has the potential to significantly increase understanding of how different forms of remote communication can be used to augment in-person engagement efforts. Rather than challenge the compelling evidence that person-to-person interaction fosters usability, our results suggest that there may be alternative avenues to enhance usability and to aid interaction that complement (rather than replace) well-established best practices documented in the climate science literature. While there are many other aspects of the role of engagement in increasing the usability of scientific knowledge that need to be explored, our findings suggest that climate scientists, information brokers, and practitioners should consider that more face-to-face interaction may not always be better. Given limited resources and the urgency of climate change, strategic investment of time and effort is essential.
The research for this article was supported by National Science Foundation (NSF) Grant 1039043 and National Oceanic and Atmospheric Administration (NOAA) Grant NA15OAR4310148. We wish to thank Avik Basu for his advice on study 1, and Dan Brown and Ashley Grace for the presentations of the CIAT tool in study 1. Author contributions: M.C.L. conceptualized the study. K.S.W. designed, executed, and analyzed data from studies 1 and 2. K.S.W., J.C.A., L.V.R., M.K., and C.K. designed and executed study 2. All authors contributed to the manuscript. Data: Survey data for studies 1 and 2 can be found at https://osf.io/eb8ck/.
Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/WCAS-D-18-0075.s1.