1. Introduction
As the impacts of human-influenced climate change are increasingly recognized in the United States and around the world, the need for climate science and information that can be readily used in decision-making contexts for climate change adaptation and mitigation has grown rapidly (Melillo et al. 2014). As many researchers have acknowledged, however, simply producing more information does little to solve the problem (Clark et al. 2016). Information that will inform decision-making must apply directly to the problem at hand, be at spatial and temporal scales that match the problem, and be scientifically sound (Lemos et al. 2012; McNie et al. 2007). To address this need, some researchers have increasingly focused on approaches that involve the end users of research in a collaborative or “coproduced” research process.
Previous research has shown that taking a collaborative approach to knowledge development is more likely to result in science that is used by decision-makers (Jasanoff 2004; Jasanoff and Wynne 1998; Lemos and Morehouse 2005; van Kerkhoff and Lebel 2015) than science produced using the “loading dock” model of delivery, in which engagement with users is one-way: from researcher to user (Carbone and Dow 2005; Cash et al. 2006; Jasanoff and Wynne 1998; Lemos et al. 2012). Social science research on science production has indicated that collaboratively produced science tends to be more easily accepted and applied by decision-makers because they better understand the process by which it was developed and feel a greater sense of knowledge ownership (Jasanoff and Wynne 1998), and because the information is more likely to fit their needs (Lemos and Morehouse 2005; Lemos et al. 2012). This more collaborative approach to knowledge development has been termed coproduction of knowledge (Jasanoff and Wynne 1998), stakeholder-driven science, user-driven science (Dilling and Lemos 2011; McNie 2007), actionable science (ACCCNRS 2015), knowledge exchange (Cvitanovic et al. 2015), and transdisciplinary research (Jahn et al. 2012). While acknowledging these varied terms, we most often use coproduction of knowledge in this paper to refer to highly collaborative, user-driven research approaches.
Evaluating these types of programs and projects requires innovative approaches, as more traditional metrics of research success are often insufficient to assess the processes and outcomes of coproduced climate science, which differ from the more output-focused metrics of traditional academic research (Bell et al. 2011; Evely et al. 2010; Fazey et al. 2014; Ferguson et al. 2016; Moser 2009; National Research Council 2005). Standard tools for evaluating scientific research are often inadequate to capture decision and policy impacts; they largely rely on the scientific impacts of the research (Bell et al. 2011), which address scientific credibility (Cash et al. 2003) but fail to address its saliency to decision-makers or the legitimacy of the process of developing the knowledge (i.e., the extent to which stakeholders were involved in knowledge development; Cash et al. 2003; Evely et al. 2010; Fazey et al. 2014). New evaluative frameworks can help to identify, for example, which research approaches best support genuine collaboration between scientists and stakeholders, when a project has been successful in producing a collaborative product, and to what extent programs are successful in supporting such efforts.
While significant research has identified key principles that support this kind of collaborative effort (Lemos and Morehouse 2005; McNie 2013; Reed et al. 2014), those studying the field of coproduction continue to struggle to provide empirical evidence supporting the principles (Hegger and Dieperink 2014), to offer greater detail about how to apply them (Reed et al. 2014; van Kerkhoff and Lebel 2015), to evaluate the processes and outcomes of collaborative research (Bellamy et al. 2001; Fazey et al. 2014; Meadow et al. 2015), and to move beyond a set of best practices to effectively measure these key principles and their importance in the coproduction process.
In this paper we present our work on developing and testing an evaluative framework for coproduced climate science. In this research, we identified the key principles in coproducing knowledge from the existing literature, examined how usable climate research is currently evaluated, and interviewed experienced climate science integrators to gain insight from their direct experiences coproducing such knowledge. We synthesized information from these sources to develop an evaluative framework that consists of 45 indicators grouped into context; process; and output, outcome, and impact indicators. We also present lessons about the process of collaboratively producing climate knowledge based on findings from our evaluative framework. We then reflect upon lessons learned about the process of evaluating the coproduction of knowledge.
2. Literature review
In this section, we discuss three related areas of literature: coproduction of knowledge, information use in decision-making, and evaluating coproduced climate research. This body of peer-reviewed knowledge has focused on the benefits and challenges of coproduction approaches, as well as on identifying future steps and unanswered questions, including a greater awareness of the role of researchers in informing adaptation processes (Lacey et al. 2015) and the challenges of doing this type of research within an academic context (Brugger et al. 2015). Within the context of developing evaluation frameworks, understanding how information is used in an organization for decision-making (or barriers to its use) is relevant to interpreting and measuring the impacts and outcomes of information use (Choo et al. 2008; Rich and Oh 2000; Taylor 1991). Evaluation research focused on understanding the value of coproduced climate research contributes to developing best practices for coproduced climate research; increasing capacity to conduct coproduced climate research; and providing insights into when coproduced strategies or approaches are a good fit with the project, stakeholders, and researchers involved.
a. Coproduction of knowledge: Process and principles
The process of coproducing science knowledge holds challenges and benefits for both researchers and decision-makers. Decision-makers often must grapple with new scientific fields in which they have little training as well as with the inherent uncertainty of science knowledge, while simultaneously trying to protect and conserve the natural resources and human communities to which they have responsibilities (Brugger et al. 2015). As Lacey et al. (2015) and Ford et al. (2016) note, researchers also bear responsibility for understanding the implications of research focused on adaptation, that is, what the direct effects of adapting (or not) to climate change will be for the communities in question. For the purposes of this review, we define coproduction of knowledge as the process of collaboration between researchers and decision-makers to develop new or refined climate science with the intention of making that science usable by decision-makers (Meadow et al. 2015).
Early work on collaborations between scientists and decision-makers identified strategies that are linked to more successful outcomes (i.e., increased use of science in policy or decision-making). Lemos and Morehouse (2005) outlined the following list of activities within the research process in which stakeholders should, ideally, participate in order to improve the usability of climate science: defining the problem, formulating the question, selecting methods, conducting research, analyzing findings, developing knowledge, testing and evaluating results, and disseminating findings. More recent consideration has specifically identified strategies for coproduction approaches. Hegger and Dieperink (2014) and Hegger et al. (2012) propose a set of seven “success conditions” for coproduction of knowledge, including who is included in the process, whether they achieve a shared understanding of problems and goals, how project responsibilities are shared, and whether specific resources such as boundary objects and certain competencies are present. Van Kerkhoff and Lebel (2015), Wyborn (2015), and Schuttenberg and Guth (2015) all discuss the importance of coproductive capacities in setting the stage for coproduction of knowledge to take place. These capacities are as follows: material (resources available), cognitive (process of generating knowledge), social (capacity to produce effective and equitable governance), and normative (the underlying values inspiring actors to work toward a common goal). These are each mediated through the existing socioecological system in which the process takes place (Schuttenberg and Guth 2015). While all the capacities contribute to the level of influence of coproduced knowledge (Schuttenberg and Guth 2015), the capacities differ in various contexts, and therefore, different interventions to promote coproduction of knowledge are likely to be necessary in different contexts (van Kerkhoff and Lebel 2015).
Other analyses of scientist–stakeholder collaboration have focused on the role of communication and relationships in development of credible, salient, and legitimate information (Buizer et al. 2016; Jacobs et al. 2005; Lemos and Morehouse 2005; Wyborn 2015). Research also has highlighted certain elements in the relationship between climate science producers and users that seem to have particularly strong influences on ultimate use of information: two-way communication, building trust, being accountable for the findings, and the importance of building long-term relationships in order to be successful (Brugger et al. 2015; Kirchhoff et al. 2013). These long-term relationships also may contribute to the development of information-sharing networks that encourage the development of both weak and strong ties that influence how research is promulgated and its impacts amplified (Granovetter 1983). Ferguson et al. (2014) developed a set of guiding heuristics that emphasize the role of relationships and open communication to improve the process and outcomes of collaborative science research, including the following: 1) the importance of setting mutually agreed upon ground rules, 2) the responsibility of the researcher to learn about institutional governance and norms, and 3) the importance of demonstrating mutual respect throughout the collaboration.
Like Ferguson et al. (2014), Reed et al. (2014) synthesized literature and data from a series of interviews with researchers and stakeholders involved in knowledge exchange research for environmental management and proposed the following five principles for knowledge exchange: 1) design knowledge exchange into the project, 2) represent the diversity of stakeholders and systematically identify all stakeholders, 3) engage through two-way dialogue and long-term relationships, 4) generate impact by delivering tangible outputs, and 5) reflect upon and sustain connections with stakeholders. The questions and guidance provided by these authors are comprehensive but do not directly address the need to measure responses—such as how much participants’ perceptions changed, the specifics of communication, or the intensity or length of relationships—in order to understand how a particular variable affects the ultimate use or nonuse of information in decision support.
b. Information use in decision-making
Beyond coproduction of knowledge as a concept, other scientists have been exploring the ways in which information is or is not used in organizational decision-making. Their research can inform the ways in which we frame the outcomes and impacts of coproduction processes by helping us understand how and under what conditions information is adopted by organizations. Patton (1978, 1982), Mark et al. (2006), and Alkin et al. (2006) have considered how to make the information generated by program evaluations more useful to program decision-makers. There are clear analogies between the struggles evaluators face and those faced by climate scientists hoping to develop actionable science. For example, Patton (1982) noted that “evaluators found that methodological rigor did not guarantee that findings would be used,” an experience similar to that of many researchers we interviewed for this project (see also Brugger et al. 2015).
Taylor (1991) identified eight different types of information use that provide a spectrum of ways to think about how information can inform decision-making, ranging from an organization or individual perceiving itself to be better informed about an issue (enlightenment) to a tangible application of information to solve a problem or learn a new skill (instrumental). Oh (1996) further refined these information use types into three categories (with more detailed subcategories): 1) conceptual information use, where an organization or individual perceives itself to be better informed about an issue or has changed its opinion about the issue; 2) justification, where information is used to justify a predetermined decision; and 3) instrumental, where information is directly used to inform a new decision. Choo (2006) presents three different conceptual models for how organizations use information, each driven by the reason they were seeking information: sense-making in response to a change in their environment, knowledge creating to develop new capabilities or innovations, and decision-making to select alternatives and take a goal-directed action.
c. Evaluating coproduced climate research
The challenge for those undertaking coproduction processes at either project or program levels is to link the principles and frameworks to “tangible (measurable) project goals or outcomes” (van Kerkhoff and Lebel 2015) and to understand how our capacities to coproduce knowledge contribute to its impacts on resource management and governance (Hegger and Dieperink 2014). New studies are employing empirical assessment of collaborative science research to propose ways to understand the processes involved and evaluate outcomes (Bell et al. 2011; Fazey et al. 2014; Ford et al. 2013; Hegger and Dieperink 2015; Walter et al. 2007). Even the idea of assessing the impact of research on decision-making is new within many academic disciplines, where reward structures rely primarily on the number of peer-reviewed publications (Bell et al. 2011; Roux et al. 2010).
Writing from the perspective of a resource manager, Jacobs (2002) proposed “measures of success” for collaborations between scientists and decision-makers, such as answering the following questions: Did participants modify behavior in response to information? Did participants initiate subsequent contacts? Did the stakeholders claim or accept partial ownership of final products? Was the process representative of all interests? Were the outcomes implementable in a reasonable time frame?
Bell et al. (2011) reviewed projects designed to produce environmental science results for policy and found a diversity of evaluative approaches, as well as some common challenges, including the following: attributing management outcomes to any particular piece of information, timing the evaluation appropriately to observe any impacts, determining the reliability of the information, and coping with the resource-intensive nature of impact evaluation. Fazey et al. (2014) reviewed 135 studies of knowledge-exchange evaluations from a variety of fields to develop a set of principles for evaluating this type of work. The resulting principles encourage researchers to 1) build evaluation into the knowledge-exchange project, 2) be explicit about why a knowledge-exchange approach is necessary to yield the desired outcomes, and 3) evaluate diverse outcomes (not just the expected ones).
Focusing on the process of engagement between scientists and stakeholders, Walter et al. (2007) constructed an explanatory model to evaluate a transdisciplinary project. Through statistical analysis, they found that the outcomes of network building, distribution of knowledge, and transformation of knowledge were significantly correlated to the predictor variable “involvement” as measured by the number of engagement activities that took place during the project. Beierle (2002) examined 239 public processes focused on environmental management decisions. He categorized the participatory processes into four groups: public meetings or hearings, advisory committees not using consensus, advisory committees using consensus, and negotiations and mediations. He used the following four evaluative questions as criteria to determine the extent to which public participation led to higher-quality decisions: Are decisions more cost effective than the likely alternatives? Do decisions increase joint gains? Do participants contribute innovative ideas, useful analysis, or new information? Do participants have access to scientific information? He found that more intensive participatory processes tended to produce higher-quality decisions.
Blackstock et al. (2007) developed an evaluative framework for participatory research in sustainability science. Their framework examined the role of process (champion or leader, communication, conflict resolution, influence on the process, and representation), context (political, social, cultural, historical, and environmental), and outcomes (accountability, capacity building, emergent knowledge, recognized impacts, social learning, and transparency). A key finding from their test of the model was that impacts often take a long time to emerge, and simply evaluating at the end of a project is insufficient. Armitage et al. (2011) identified the following five dimensions of coproduction of knowledge within marine mammal comanagement frameworks in the Arctic, with empirical examples of each: knowledge gathering, knowledge sharing, knowledge integration, knowledge interpretation, and knowledge application. They note that each of these dimensions contains complex processes within it. At a program level, McNie (2013) proposed that evaluations consider whether end users’ understanding of climate science has improved, whether policies and decisions can be linked to the collaborative knowledge production effort, changes in resource allocation, and the number and breadth of stakeholder networks created by the project.
3. Methods
In this section, we describe our methods for developing an evaluative framework for the coproduction of usable climate science. Through a process of program theory-driven evaluation (Donaldson and Lipsey 2006), we synthesized the following to create the framework: 1) literature on the theory and practice of coproduction of knowledge, 2) the metrics currently used to evaluate usable science in several federal agencies and nongovernmental organizations, and 3) insights from the lived experiences of those engaged in this work. We combined insights from these sources to create a set of indicators of successful coproduction of knowledge, then used two case studies to both test the indicator framework and glean lessons about the practice of coproducing climate science.
a. Literature search and review
We focused our literature search (see literature review) on research concentrating on evaluation or assessment of collaborative research, coproduction of knowledge, or societal impacts of science. Using a process analogous to snowball sampling (Given 2008), we used the search tool Web of Knowledge to identify journal articles and books cited by or within several key works in the field (e.g., Lemos and Morehouse 2005; Dilling and Lemos 2011; Bellamy et al. 2001; Reed 2008; Fazey et al. 2014; Walter et al. 2007; Cvitanovic et al. 2015; Feldman and Ingram 2009; McNie 2007), which helped us trace the similarities and differences in proposed metrics and indicators as ideas evolved through the literature. We also used keyword searches on several terms (i.e., evaluation, assessing science, participatory methods, coproduction, collaborative research, usability of science, observation theory, program theory, and utilization theory). In addition, we examined existing performance metrics for programs and organizations that conduct collaborative, decision-focused research. These sources included federal programs such as the National Research Council’s (2007) evaluation of the U.S. Climate Change Science Program; the U.S. Department of the Interior (DOI) and U.S. Geological Survey (USGS) strategic plans and budget justifications (U.S. Geological Survey 2014; U.S. Department of the Interior 2014); the annual reporting tool developed by the NOAA Regional Integrated Sciences and Assessments (RISA) program; recommendations developed by the Advisory Committee on Climate Change and Natural Resource Science (ACCCNRS 2015) for evaluating the DOI Climate Science Centers (CSCs); an evaluation of stakeholder involvement in the U.S. National Climate Assessment (Moser 2005); evaluations of other programs focused on coproduction of climate science, including Jorgensen et al. (2014) and Ferguson et al. (2016); and performance metrics used by nongovernmental organizations, such as the Bill and Melinda Gates Foundation (2016) and the International Development Research Centre (IDRC; Earl et al. 2001), that specifically consider the process of collaboration within their evaluations.
b. Interviews with climate researchers, program managers, and climate program leaders
Through 19 in-depth interviews, we drew on the experiences of climate science integrators, program managers whose programs fund stakeholder-engaged climate research, and leadership within two federal programs focused on production of decision-relevant climate research (NOAA’s RISA program and the DOI CSCs). Because this work focused on research being conducted within the DOI CSCs, we included leadership within this organization to understand how they conceptualized successful projects and what they considered to be effective steps toward success. We also included leadership within the RISA program because of its long history of experimentation with collaborations between climate scientists and decision-makers (Ferguson et al. 2016; Pulwarty et al. 2009). We interviewed a convenience sample of other experienced climate science integrators (Brugger et al. 2015). We acknowledge this is a limited sample, so we used the interview data only to triangulate data from the literature and performance metrics. The interviews were semistructured and typically lasted approximately 60 min. The focus of the questions for the researchers was how they learned to conduct “engaged research,” the incentives and challenges involved in this kind of research, how they self-assessed and monitored their own successes and failures, and their recommendations for indicators of success and evaluative metrics for this kind of work. Our interviews with program managers and leaders focused more specifically on their recommendations for indicators and metrics and how they might use such metrics in their programs. The interviews were recorded, then transcribed and coded in Dedoose, an online qualitative coding application.
c. Coding and indicator development
We coded all the indicators or metrics from the three sources (literature, existing performance metrics, and those recommended by climate science integrators and program leaders) using the following five categories common to evaluation frameworks (see, e.g., Earl et al. 2001; W. K. Kellogg Foundation 2004): context (including inputs to the project and external factors that influence the project), process, outputs, outcomes, and impacts. We then compared the suggested indicators (from different sources) to identify common themes across sources and any gaps, such as whether indicators suggested by experienced “integrators” have been identified in the literature or put into practice in existing performance metrics. We recoded all the compiled metrics by specific themes within each category and then summarized each theme into one coherent “indicator” statement.
Context factors relate to the preexisting conditions that may influence researchers’ and stakeholders’ ability to engage in the coproduction of science and ultimately use the information. We organized these context factors into input and external indicators. Input indicators assess capacity, including the skill set of the research team, team composition, resource allocation (both time and material resources), and stakeholder involvement. External indicators are those conditions that can affect the outcome of a project but are outside of either the research team’s or stakeholder’s control. These include factors such as employee turnover, scientific uncertainty, or a catalyzing event.
Process indicators are actions and activities such as inclusion of stakeholders in the proposal writing process, collaborative development of research questions and research design, and ongoing communication between researchers and stakeholders throughout the lifespan of the project.
We divided project results into three categories (i.e., outputs, outcomes, and impacts) to capture the nature of information use as a spectrum of activities, not a fixed end point (Taylor 1991; Oh 1996). We defined output indicators as tangible outputs from research, such as workshop reports or peer-reviewed publications. Outcome indicators are less tangible and more conceptual results. These include the perception that project goals have been achieved and end users’ perception of the credibility, saliency, and legitimacy of the final outputs and process. Impact indicators generally represent instrumental uses of science information, such as directly informing management decisions, policy actions, or adaptation decisions. The resulting indicators are listed in Table 1.
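For readers who want to operationalize the coding step, the following minimal sketch (in Python, which the study itself does not use) shows one way the indicator framework could be represented so that coded interview excerpts can be tallied against indicators. The indicator codes and short descriptions are paraphrased from how they are referenced in section 4; the data structure, function, and excerpt identifiers are illustrative, not the authors’ actual coding instrument.

```python
from collections import Counter

# Illustrative subset of the 45 indicators (codes as referenced in section 4).
# Category prefixes: I = input, E = external, P = process,
# OP = output, OC = outcome, IM = impact.
INDICATORS = {
    "I.1":  ("context/input", "research team expertise matches project objectives"),
    "I.5":  ("context/input", "stakeholder motivations for seeking information"),
    "I.11": ("context/input", "preexisting researcher-stakeholder relationships"),
    "E.1":  ("context/external", "employee turnover at stakeholder organizations"),
    "P.1":  ("process", "stakeholders engaged in research question and design development"),
    "P.2":  ("process", "ongoing, proactive communication with stakeholders"),
    "OP.1": ("output", "peer-reviewed publications"),
    "OC.4": ("outcome", "perceived usefulness of project results to stakeholders"),
    "IM.7": ("impact", "results used to support future funding requests or decisions"),
}

def tally_codes(coded_excerpts):
    """Count how often each indicator code was applied to interview excerpts."""
    counts = Counter(code for _, code in coded_excerpts if code in INDICATORS)
    return {code: (INDICATORS[code][0], n) for code, n in counts.items()}

# Hypothetical coded excerpts: (excerpt identifier, indicator code).
example = [("cs1-int3-q4", "P.2"), ("cs1-int1-q2", "P.2"), ("cs1-int3-q7", "OC.4")]
print(tally_codes(example))  # {'P.2': ('process', 2), 'OC.4': ('outcome', 1)}
```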
Table 1. Proposed indicators for evaluating coproduced climate science.
d. Case studies
With the support of the DOI’s Southwest Climate Science Center (SW CSC) and its affiliated researchers, we tested our indicator framework in two case studies funded by the SW CSC. Our methods for analyzing the case studies were similar to those of Meagher et al. (2008), who conducted a retrospective analysis of the impacts of social science research on policy and practice. After developing the evaluative framework (described above), we collected data using multiple methods, including semistructured interviews and document analysis (project proposals, interim and final reports, and project outputs), and we experimented with observational data collection by developing several tools to gather data, such as detailed record sheets to count and categorize interactions at project-related meetings. We conducted 13 interviews with the principal investigator and coinvestigators in each project as well as key representatives of the stakeholder agencies involved in each project (as identified in the project proposals and by the research team). Interviews were recorded, transcribed, and coded using our indicators. We attended and observed four project-related meetings to gain more perspective on the relationships and collaborative partnerships developing between researchers and decision-makers. Although we took ethnographic field notes at each meeting and piloted several tools to assess equitable participation in the meetings, data from observations are not included in these assessments because the piloted tools were not consistent throughout data collection. We are continuing to refine our observation processes to ensure the validity and reliability of the methods.
In the following section, we report on our experience applying the evaluative framework to the case studies as well as lessons learned about both the practice of coproducing knowledge and the practice of evaluating the coproduction of knowledge. Indicators specifically referenced are in parentheses and refer to Table 1.
4. Results
a. Case study 1
Case study 1 involved academic researchers from several institutions working with a tribal community. The project objectives focused on understanding how the community might be affected by climate change, particularly impacts on its water resources, and on developing a climate change adaptation plan and adaptive strategies. We started evaluating case study 1 during the final half of the project, meaning that some of the evaluation was retrospective, while other elements were concurrent with project activities. Over the course of 18 months, we conducted interviews with four researchers and three stakeholders involved with the project. These were recorded, transcribed, and coded using our indicators. We observed an in-person meeting between the researchers and key stakeholders and a community-wide final project meeting (see discussion in section 5 about not including observation data at this time). In addition, we reviewed the original proposal, final reports, and other outputs from the project.
1) Case study 1: Context
Using our evaluation indicators, we identified the project inputs and external factors that influenced the project. Based on the project proposal, we mapped the project objectives against the research team’s expertise, and in interviews we specifically asked about the expertise on the project and how the research team members interacted with each other (I.3). Overall, the team expertise mapped well to the project objectives (i.e., hydrologists and water quality experts; I.1). The team included researchers with expertise in social science and collaborative research methods (I.9) as well as hydrologists and other physical scientists in the relevant fields. We also attempted to assess how researcher time was allocated to this project based on salaries included in the proposal (I.2). This is an imperfect metric because researchers may have dedicated additional unpaid time to the project, but we applied it as well as possible because of the importance of allocating adequate time to collaborative research (National Research Council 2007; Greenwood and Levin 2007).
We also were interested in tracking relationships, both those that existed previously and those formed or strengthened during the project (I.11). In this case study, two researchers had worked with the stakeholders for three years previously, and this project developed from that initial work. During the course of the project, employee turnover at the stakeholder agency led to the loss of those connections (E.1), but there were indications that the foundational relationships helped the new stakeholder representatives engage with the project and gain a sense of trust in the team. The stakeholder representatives supported the project by providing in-kind technical support, serving as consultants in local knowledge, and hosting meetings (I.4).
Decision-makers’ motivations for seeking new information often influence later use of that information (Oh 1996). We found a range of motivations among these decision-makers from seeking general knowledge to having specific questions about climate impacts. One stakeholder representative expressed a general interest in learning about climate change and the process of adaptation planning, while another had more specific questions related to a traditional food source that has cultural significance for the community (I.5).
2) Case study 1: Process
We identified project activities that involved the researchers communicating and collaborating with the stakeholders (e.g., workshops, trainings, meetings, phone calls, and conference presentations). In interviews, the stakeholders noted that the researchers had provided information proactively and often (P.2). When asked to rate their desired level of involvement against their perceived actual involvement, however, most wanted a greater level of involvement than what they felt they actually had (P.4). They cited lack of time or other resources, personnel turnover, and a perception that they were not invited to participate in the research process as barriers to greater collaboration. Both stakeholders and researchers commented on the limited time available for in-person meetings and the lack of resources available to fund travel to the stakeholder community (P.5).
3) Case study 1: Outputs, outcomes, and impacts
This project produced a number of peer-reviewed articles (OP.1) and other materials (OP.6). However, the stakeholders reported that the adaptation recommendations produced by the research team were too general to be immediately useful for management action (OC.4). They did report that the recommendations would be useful in spurring additional community discussion and supporting future funding requests (IM.7, OC.4), which could lead to future management decisions. This possible delay in applying the research results is consistent with Oh’s (1996) observation that decision-makers often move from intake of new knowledge to its ultimate application only after a period of time in which they become more familiar and comfortable with the new information.
b. Case study 2
Case study 2 was a project led by USGS researchers and academic scientists from several institutions along the U.S. West Coast who were focused on understanding climate change effects on shore-based ecosystems. Although the research team was working at several sites, we concentrated on one site, largely because of resource constraints (see section 5 for additional information on site selection). We conducted semistructured interviews with three researchers and four representatives from the management agencies involved in the project. These were recorded, transcribed, and coded using the indicators. We attended and observed one stakeholder workshop held by the research team (see discussion in section 5 about not including observation data at this time). While many of the findings were similar to case study 1, there also were several new findings of interest that helped us refine our indicators and evaluation process.
1) Case study 2: Context
One strong indicator in this project was the existing relationship between several of the researchers and representatives from stakeholder agencies (I.11). The researchers knew a majority of the research site contacts through previous work, and a lead researcher had particular familiarity with the agency that had jurisdiction over many of the sites, lending her both credibility and a greater level of trust; stakeholders felt “[she] knows our business and she understands [what we do] . . . that’s huge.” In addition, several of the researchers, although working for different agencies, were located in the same building, allowing for greater collaboration between team members. Several researchers and stakeholders cited this as a key factor in the success of the project.
As in case study 1, stakeholders in this project varied in their desire for specific information versus more general information (I.5). Stakeholders who were located at the specific study site were seeking information relevant to their management of the area, while stakeholders from the broader region—who have responsibility for management at a regional scale—were interested in more general knowledge to use in regional-level planning efforts.
2) Case study 2: Process
Despite different reasons for seeking new information, the various stakeholders involved in this project expressed a desire for involvement and, in some cases, increased communication between the study site managers and the researchers (P.2). In particular, site managers expressed a desire for more upfront engagement in the project (P.4). Because of the design of the project, this site was included after the scientific research questions and research design had been established (P.1). Local managers expressed concern that site-specific limitations would impact data accuracy (OC.2). This reinforced the importance, in designing research intended to be used by decision-makers, of ensuring that intended end users are engaged in development of the research questions and design (P.1).
3) Case study 2: Outputs, outcomes, and impacts
Like case study 1, case study 2 produced a number of peer-reviewed articles (OP.1) and technical reports (OP.2). We found in case study 2 that the project also had outcomes beyond those outlined in the original research proposal, such as contributing to development of what appears to be a nascent “knowledge-to-action network” with resource managers in the region. For example, project researchers we interviewed indicated that they received requests from resource managers at other sites asking to be included in the project. This suggests that the project is reaching beyond the original individual researcher networks and that end users are disseminating information and outcomes within their own networks (an indicator of perceived credibility—OC.2).
One of the indicators we were not able to calculate and compare to case study 1 was the level of funding used specifically for stakeholder engagement (I.8). In this project, much of the travel expense was related to data acquisition, and participants reported that these trips also contributed to relationship building. However, a lack of clearly defined categories in the project budget (i.e., nothing specifically tagged as “engagement” or “collaboration”) limited our ability to calculate how much of the researchers’ time was allocated to engagement activities. While we feel the indicator is relevant, we need to identify how to alter our data collection approach to better capture this information in the future.
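As an illustration of what an altered data collection approach could enable, the following small sketch (hypothetical line items and amounts, not the case study budget) shows how indicator I.8 might be computed if each budget line carried an explicit engagement tag from the outset.

```python
# Minimal sketch for indicator I.8: share of project resources devoted to
# stakeholder engagement, assuming hypothetical budget lines tagged up front.
budget_lines = [
    {"item": "PI salary (2 months)",               "amount": 20000, "engagement": False},
    {"item": "Travel: site data collection",        "amount": 6000,  "engagement": False},
    {"item": "Travel: stakeholder workshop",        "amount": 3000,  "engagement": True},
    {"item": "Workshop facilitation and materials", "amount": 1500,  "engagement": True},
]

total = sum(line["amount"] for line in budget_lines)
engaged = sum(line["amount"] for line in budget_lines if line["engagement"])
print(f"Engagement share of budget: {engaged / total:.1%}")  # Engagement share of budget: 14.8%
```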
5. Discussion
a. Lessons about evaluating coproduction from employing the objectives and indicators framework
Through this study, we learned several important lessons about evaluating collaboratively produced climate science. Although early integration was not feasible here because of timing differences between this evaluation and the case study projects, we were reminded of the importance of integrating evaluation into the main project as early as possible. One impact of not engaging with the study participants sooner was that we failed to gain the trust of some resource managers, who declined to participate in this evaluation effort, contributing to our inability to include a second site in case study 2. Gaining trust is a key tenet of all social science research (Somekh and Lewin 2004), and we regret that our late introduction into one project meant we were unable to do so. We must note, however, the overwhelmingly positive reception we received from other study participants. They welcomed our questions and were pleased to discuss their experiences and perceptions of the process of producing actionable science because they saw it as an effort to strengthen coproduced climate science research in the future.
As Fazey et al. (2014) note, we needed to broaden our evaluation of outcomes, particularly in terms of looking for unexpected outcomes. For example, most current evaluations of project impacts focus on stakeholders, but as several climate science integrator interviewees noted, we also need to examine how this process affects researchers. These interviewees were more likely to note the importance of tracking the impacts of participating in a coproduction process on their own future scientific practice, a finding similar to that of Hegger and Dieperink (2015). One interviewee explained that working with a decision-maker, who may not be familiar with the science, “forces you to think out loud. There are a lot of unstated assumptions even in good research and the coproduction process makes you say things aloud.”
Another unexpected outcome was the nascent development of networks through connections made by case study 2. Development of such networks has the potential to change the model of how stakeholder-driven research is conducted. Instead of a traditional loading dock model, in which research is disseminated largely through peer-reviewed journals, the research is made more credible and salient by peer-to-peer recommendation within and across agency- and sector-based information-sharing networks. While this might seem difficult to evaluate, our experience in case study 2 suggests that it is achievable, depending on the duration of the project and the timing of the evaluation process.
Additionally, there are indications that the categorizations of stakeholder or decision-maker are too coarse to effectively encompass the differences in stakeholder capabilities and the roles they can assume in a coproduction process. Through interviews and observation, we noted that stakeholders and decision-makers bring a range of knowledge, interests, and capabilities to a collaborative process that influence how, when, and what kinds of information they ultimately use. In case study 2, we noted significant differences in motivations for participating (i.e., what the stakeholder expected to get from the process) between the regional-scale managers and site managers, even when they represented the same agency. Stakeholders also vary in terms of their technical background and capacity. Their abilities and interest in contributing to various research tasks (such as problem definition, research design, analysis, and dissemination) vary depending on their existing capacity and that of their organization. Finally, stakeholders vary in terms of their roles in the decision-making agency or community; for example, whether the individual acts as a node in a social network or as a knowledge broker in a community of practice will influence the extent to which information is shared across a wider network of people. Understanding the role of stakeholders in coproduction processes and assessing the outcomes and outputs of coproduced research will require indicators that capture the complexity inherent in stakeholders and information end users and the interplay between user types and collaborative processes.
b. Findings concerning coproduction of knowledge
A clear finding from our two cases was that stakeholders became frustrated with the research process and outcomes/impacts when they were not included in the development of research questions and research design. While this is not a novel finding (Lemos and Morehouse 2005), both case studies pointed to factors that contributed to this frustration. In case study 1, the current stakeholder representatives were brought into the project later in the process because of staff turnover, so they were not involved in the original conceptualization of the project. Although supportive of the project, even at the end they felt unsure of the original intent or what they should have expected in terms of results. This situation points to the importance of the stakeholder agency making a firm commitment to sustained and regular participation in coproduction processes. In the second case study, the site managers were not involved in initial project development: largely because of the constraints of the funding mechanism, their site was added to a preexisting project in which the research design had already been set, although the research team attempted to integrate site-specific questions when possible. In this case, the site managers perceived that the research design did not accommodate site-specific constraints. The fine balance between collecting comparable data from multiple sites and providing specific, usable information about any one individual site was not fully achieved in this particular case.
6. Next steps
To further explore the implications of heterogeneity among stakeholders, even those within the same agency, future research efforts could focus more attention on how and under what conditions information is used within organizations. This exploration of organizations’ information use environments (Choo 2006) will help identify whether agency practices help or hinder the adoption of new climate information, with or without a “successful” coproduction process.
Additionally, future research efforts should continue to test and refine the use of observational data by experimenting with tools and methods in this evaluation process. An extension of this research could also explore the role that researcher attitudes toward collaborative research approaches play in whether a coproduction process is successful and whether it results in instrumental information use within the agencies of interest.
7. Conclusions
We began this research by identifying the key principles in coproducing knowledge from the existing literature: building ongoing relationships between scientists and stakeholders, ensuring two-way communication between groups, and maintaining a focus on production of usable science. We examined how usable climate research is currently evaluated by federal agencies. Through interviews with experienced climate science integrators, we explored which activities, actions, and conditions they believe most influence the process and outcomes of knowledge coproduction. We combined information from all three sources to develop an evaluative framework that consists of 45 indicators grouped into context; process; and output, outcome, and impact indicators. We tested the indicators using two case studies, which allowed us to identify several lessons about evaluating coproduction from employing the objectives and indicators framework (including the value of beginning evaluation early in the project, evaluating from the perspective of the researcher as well as the stakeholder, accounting for the impacts of external factors on projects, and identifying conceptual uses of information and measures for them) and about coproducing climate science knowledge (a more nuanced understanding of stakeholder roles and the importance of involving stakeholders early in the research design). Heeding the call for more empirical research in this field (Bellamy et al. 2001; Cvitanovic et al. 2015; Fazey et al. 2014), we plan to continue to test and refine the indicators and to develop metrics through additional case studies representing a diversity of resource management sectors and types of research teams. The end goal is an evaluative framework relevant to a diversity of climate science programs, projects, and researchers.
Acknowledgments
The Department of the Interior Southwest Climate Science Center Award G13AC00326 supported this work. The National Oceanic and Atmospheric Administration’s Climate Program Office through Grant NA11OAR4310150 along with the California Nevada Applications Program at the Desert Research Institute also supported the project. The authors wish to thank the editors and three anonymous reviewers for their insightful and constructive feedback.
REFERENCES
ACCCNRS, 2015: Report to the Secretary of the Interior, 86 pp. [Available online at https://nccwsc.usgs.gov/sites/default/files/files/ACCCNRS_Report_2015.pdf.]
Alkin, M. C., Christie C. A. , and Rose M. , 2006: Communicating evaluation. The SAGE Handbook of Evaluation, I. Shaw, J. Greene, and M. Mark, Eds., SAGE Publications, 384–403.
Armitage, D., Berkes F. , Dale A. , Kocho-Schellenberg E. , and Patton E. , 2011: Co-management and the co-production of knowledge: Learning to adapt in Canada’s Arctic. Global Environ. Change, 21, 995–1004, doi:10.1016/j.gloenvcha.2011.04.006.
Beierle, T. C., 2002: The quality of stakeholder-based decisions. Risk Anal., 22, 739–749, doi:10.1111/0272-4332.00065.
Bell, S., Shaw B. , and Boaz A. , 2011: Real-world approaches to assessing the impact of environmental research on policy. Res. Eval., 20, 227–237, doi:10.3152/095820211X13118583635792.
Bellamy, J. A., Walker D. H. , McDonald G. T. , and Syme G. J. , 2001: A systems approach to the evaluation of natural resource management initiatives. J. Environ. Manage., 63, 407–423, doi:10.1006/jema.2001.0493.
Bill and Melinda Gates Foundation, 2016: How we work: Evaluation Policy. Accessed 2 November 2016. [Available online at http://www.gatesfoundation.org/How-We-Work/General-Information/Evaluation-Policy.]
Blackstock, K. L., Kelly G. J. , and Horsey B. L. , 2007: Developing and applying a framework to evaluate participatory research for sustainability. Ecol. Econ., 60, 726–742, doi:10.1016/j.ecolecon.2006.05.014.
Brugger, J., Meadow A. , and Horangic A. , 2015: Lessons from first-generation climate science integrators. Bull. Amer. Meteor. Soc., 97, 355–365, doi:10.1175/BAMS-D-14-00289.1.
Buizer, J., Jacobs K. , and Cash D. , 2016: Making short-term climate forecasts useful: Linking science and action. Proc. Natl. Acad. Sci. USA, 113, 4597–4602, doi:10.1073/pnas.0900518107.
Carbone, G. J., and Dow K. , 2005: Water resource management and drought forecasts in South Carolina. J. Amer. Water Resour. Assoc., 41, 145–155, doi:10.1111/j.1752-1688.2005.tb03724.x.
Cash, D. W., Clark W. C. , Alcock F. , Dickson N. M. , Eckley N. , Guston D. H. , Jäger J. , and Mitchell R. B. , 2003: Knowledge systems for sustainable development. Proc. Natl. Acad. Sci. USA, 100, 8086–8091, doi:10.1073/pnas.1231332100.
Cash, D. W., Borck J. C. , and Patt A. G. , 2006: Countering the loading-dock approach to linking science and decision making: Comparative analysis of El Niño/Southern Oscillation (ENSO) forecasting systems. Sci. Technol. Human Values, 31, 465–494, doi:10.1177/0162243906287547.
Choo, C. W., 2006: The Knowing Organization. Oxford University Press, 354 pp.
Choo, C. W., Bergeron P. , Detlor B. , and Heaton L. , 2008: Information culture and information use: An exploratory study of three organizations. J. Amer. Soc. Inf. Sci. Technol., 59, 792–804, doi:10.1002/asi.20797.
Clark, W. C., van Kerkhoff L. , Lebel L. , and Gallopin G. C. , 2016: Crafting usable knowledge for sustainable development. Proc. Natl. Acad. Sci. USA, 113, 4570–4578, doi:10.1073/pnas.1601266113.
Cvitanovic, C., Hobday A. J. , van Kerkhoff L. , Wilson S. K. , Dobbs K. , and Marshall N. A. , 2015: Improving knowledge exchange among scientists and decision-makers to facilitate the adaptive governance of marine resources: A review of knowledge and research needs. Ocean Coastal Manage., 112, 25–35, doi:10.1016/j.ocecoaman.2015.05.002.
Dilling, L., and Lemos M. C. , 2011: Creating usable science: Opportunities and constraints for climate knowledge use and their implications for science policy. Global Environ. Change, 21, 680–689, doi:10.1016/j.gloenvcha.2010.11.006.
Donaldson, S. I., and Lipsey M. W. , 2006: Roles for theory in contemporary evaluation practice: developing practical knowledge. The SAGE Handbook of Evaluation, I. Shaw, J. Greene, and M. Mark, Eds., SAGE Publications Ltd., 56–75.
Earl, S., Carden F. , and Smutylo T. , 2001: Outcome Mapping: Building Learning and Reflection into Development. International Development Research Centre, 139 pp.
Evely, A. C., Fazey I. , Lambin X. , Lambert E. , Allen S. , and Pinard M. , 2010: Defining and evaluating the impact of cross-disciplinary conservation research. Environ. Conserv., 37, 442–450, doi:10.1017/S0376892910000792.
Fazey, I., and Coauthors, 2014: Evaluating knowledge exchange in interdisciplinary and multi-stakeholder research. Global Environ. Change, 25, 204–220, doi:10.1016/j.gloenvcha.2013.12.012.
Feldman, D. L., and Ingram H. , 2009: Making science useful to decision makers: Climate forecasts, water management, and knowledge networks. Wea. Climate Soc., 1, 9–21, doi:10.1175/2009WCAS1007.1.
Ferguson, D. B., Rice J. , and Woodhouse C. A. , 2014: Linking environmental research and practice: Lessons from the integration of climate science and water management in the western United States. CLIMAS Tech. Rep., 18 pp. [Available online at http://www.climas.arizona.edu/sites/default/files/pdflink-res-prac-2014-final.pdf.]
Ferguson, D. B., Finucane M. L. , Keener V. W. , and Owen G. , 2016: Evaluation to advance science policy: Lessons from Pacific RISA and CLIMAS. Climate in Context: Science and Society Partnering for Adaptation, A. S. Parris et al., Eds., Wiley, 215–234, doi:10.1002/9781118474785.ch10.
Ford, J. D., Knight M. , and Pearce T. , 2013: Assessing the ‘usability’ of climate change research for decision-making: A case study of the Canadian International Polar Year. Global Environ. Change, 23, 1317–1326, doi:10.1016/j.gloenvcha.2013.06.001.
Ford, J. D., and Coauthors, 2016: Community-based adaptation research in the Canadian Arctic. Wiley Interdiscip. Rev.: Climate Change, 7, 175–191, doi:10.1002/wcc.376.
Given, L. M., Ed., 2008: The SAGE Encyclopedia of Qualitative Research Methods. SAGE Publications, 1072 pp.
Granovetter, M., 1983: The strength of weak ties: A network theory revisited. Sociol. Theory, 1, 201–233, doi:10.2307/202051.
Greenwood, D., and Levin M. , 2007: Power and social reform. Introduction to Action Research: Social Research for Social Change, 2nd ed., Sage Publications, 151–167.
Hegger, D., and Dieperink C. , 2014: Toward successful joint knowledge production for climate change adaptation: Lessons from six regional projects in the Netherlands. Ecol. Soc., 19, 34, doi:10.5751/ES-06453-190234.
Hegger, D., and Dieperink C. , 2015: Joint knowledge production for climate change adaptation: What is in it for science? Ecol. Soc., 20, 1, doi:10.5751/ES-07929-200401.
Hegger, D., Lamers M. , Van Zeijl-Rozema A. , and Dieperink C. , 2012: Conceptualising joint knowledge production in regional climate change adaptation projects: Success conditions and levers for action. Environ. Sci. Policy, 18, 52–65, doi:10.1016/j.envsci.2012.01.002.
Jacobs, K., 2002: Connecting science, policy, and decision-making: A handbook for researchers and science agencies. NOAA Rep., 25 pp. [Available online at http://www.climas.arizona.edu/sites/default/files/pdfjacobs-2002.pdf.]
Jacobs, K., Garfin G. , and Lenart M. , 2005: More than just talk: Connecting science and decisionmaking. Environ. Sci. Policy Sustainable Dev., 47, 6–21, doi:10.3200/ENVT.47.9.6-21.
Jahn, T., Bergmann M. , and Keil F. , 2012: Transdisciplinarity: Between mainstreaming and marginalization. Ecol. Econ., 79, 1–10, doi:10.1016/j.ecolecon.2012.04.017.
Jasanoff, S., Ed., 2004: States of Knowledge: The Co-production of Science and the Social Order. Routledge, 332 pp.
Jasanoff, S., and Wynne B. , 1998: Science and decisionmaking. Human Choice and Climate Change, S. Rayner and E. L. Malone, Eds., Battelle Press, 87 pp.
Jorgensen, B., Merton E. , Smith L. , and Wallis P. , 2014: Facilitating the use of research in policy development and implementation. VCCCAR Final Rep., 68 pp. [Available online at http://www.vcccar.org.au/sites/default/files/publications/Facilitating%20the%20use%20of%20research%20in%20policy%20development%20and%20implementation.pdf.]
Kirchhoff, C. J., Carmen Lemos M. , and Dessai S. , 2013: Actionable knowledge for environmental decision making: Broadening the usability of climate science. Annu. Rev. Environ. Resour., 38, 393–414, doi:10.1146/annurev-environ-022112-112828.
Lacey, J., Howden S. M. , Cvitanovic C. , and Dowd A.-M. , 2015: Informed adaptation: Ethical considerations for adaptation researchers and decision-makers. Global Environ. Change, 32, 200–210, doi:10.1016/j.gloenvcha.2015.03.011.
Lemos, M. C., and Morehouse B. J. , 2005: The co-production of science and policy in integrated climate assessments. Global Environ. Change, 15, 57–68, doi:10.1016/j.gloenvcha.2004.09.004.
Lemos, M. C., Kirchhoff C. J. , and Ramprasad V. , 2012: Narrowing the climate information usability gap. Nat. Climate Change, 2, 789–793, doi:10.1038/nclimate1614.
Mark, M. M., Greene J. C. , and Shaw I. F. , 2006: The evaluation of policies, programs, and practices. The SAGE Handbook of Evaluation, I. F. Shaw, J. C. Greene, and M. M. Mark, Eds., SAGE Publications, 1–30.
McNie, E. C., 2007: Reconciling the supply of scientific information with user demands: An analysis of the problem and review of the literature. Environ. Sci. Policy, 10, 17–38, doi:10.1016/j.envsci.2006.10.004.
McNie, E. C., 2013: Delivering climate services: Organizational strategies and approaches for producing useful climate-science information. Wea. Climate Soc., 5, 14–26, doi:10.1175/WCAS-D-11-00034.1.
McNie, E. C., Pielke R. A. Jr., and Sarewitz D. , Eds., 2007: 2005 SPARC Reconciling Supply and Demand Workshop: Climate science policy lessons from the RISAs. Workshop Rep., 110 pp. [Available online at http://sciencepolicy.colorado.edu/research_areas/sparc/research/projects/risa/risa_workshop_report.pdf.]
Meadow, A. M., Ferguson D. B. , Guido Z. , Horangic A. , Owen G. , and Wall T. , 2015: Moving toward the deliberate coproduction of climate science knowledge. Wea. Climate Soc., 7, 179–191, doi:10.1175/WCAS-D-14-00050.1.
Meagher, L., Lyall C. , and Nutley S. , 2008: Flows of knowledge, expertise and influence: A method for assessing policy and practice impacts from social science research. Res. Eval., 17, 163–173, doi:10.3152/095820208X331720.
Melillo, J. M., Richmond T. C. , and Yohe G. W. , 2014: Climate change impacts in the United States: The Third National Climate Assessment. U.S. Global Change Research Program Rep., 841 pp., doi:10.7930/J0Z31WJ2.
Moser, S., 2005: Stakeholder involvement in the first U.S. national assessment of the potential consequences of climate variability and change: An evaluation, finally. NRC Rep. 83 pp. [Available online at http://www.susannemoser.com/documents/Moser_Draft_2-6-05.pdf.]
Moser, S., 2009: Making a difference on the ground: The challenge of demonstrating the effectiveness of decision support. Climatic Change, 95, 11–21, doi:10.1007/s10584-008-9539-1.
National Research Council, 2005: Thinking Strategically: The Appropriate Use of Metrics for the Climate Change Science Program. National Academies Press, 162 pp., doi:10.17226/11292.
National Research Council, 2007: Evaluating Progress of the U.S. Climate Change Science Program: Methods and Preliminary Results. National Academies Press, 178 pp.
Oh, C. H., 1996: Linking Social Science Information to Policy-Making. Emerald Group Publishing, 201 pp.
Patton, M. Q., 1978: Utilization-Focused Evaluation. Sage Publications, 688 pp.
Patton, M. Q., 1982: Practical Evaluation. Sage Publications, 319 pp.
Pulwarty, R. S., Simpson C. , and Nierenberg C. R. , 2009: The Regional Integrated Sciences and Assessments (RISA) Program: Crafting effective assessments for the long haul. Integrated Regional Assessment of Global Climate Change, C. G. Knight and J. Jager, Eds., Cambridge University Press, 367–393.
Reed, M. S., 2008: Stakeholder participation for environmental management: A literature review. Biol. Conserv., 141, 2417–2431, doi:10.1016/j.biocon.2008.07.014.
Reed, M. S., Stringer L. C. , Fazey I. , Evely A. C. , and Kruijsen J. H. J. , 2014: Five principles for the practice of knowledge exchange in environmental management. J. Environ. Manage., 146, 337–345, doi:10.1016/j.jenvman.2014.07.021.
Rich, R. F., and Oh C. H. , 2000: Rationality and use of information in policy decisions: A search for alternatives. Sci. Commun., 22, 173–211, doi:10.1177/1075547000022002004.
Roux, D. J., Stirzaker R. J. , Breen C. M. , Lefroy E. C. , and Cresswell H. P. , 2010: Framework for participative reflection on the accomplishment of transdisciplinary research programs. Environ. Sci. Policy, 13, 733–741, doi:10.1016/j.envsci.2010.08.002.
Schuttenberg, H. Z., and Guth H. K. , 2015: Seeking our shared wisdom: A framework for understanding knowledge coproduction and coproductive capacities. Ecol. Soc., 20, 15, doi:10.5751/ES-07038-200115.
Somekh, B., and Lewin C. , 2004: Research Methods in the Social Sciences. Sage Publications, 376 pp.
Taylor, R. S., 1991: Information use environments. Progress in Communication Science, B. Dervin and M. J. Voigt, Eds., Ablex Publishing Corporation, 217–254.
U.S. Department of the Interior, 2014: Strategic plan for fiscal years 2014–2018. Tech. Doc., 56 pp. [Available online at https://www.doi.gov/sites/doi.gov/files/migrated/pmb/ppp/upload/DOI-Strategic-Plan-for-FY-2014-2018-POSTED-ON-WEBSITE-4.pdf.]
U.S. Geological Survey, 2014: Budget justifications and performance information: Fiscal year 2014. Tech Doc., 378 pp. [Available online at https://www2.usgs.gov/budget/2014/greenbook/2014_greenbook.pdf.]
van Kerkhoff, L. E., and Lebel L. , 2015: Coproductive capacities: Rethinking science-governance relations in a diverse world. Ecol. Soc., 20, 14, doi:10.5751/ES-07188-200114.
Walter, A. I., Helgenberger S. , Wiek A. , and Scholz R. W. , 2007: Measuring societal effects of transdisciplinary research projects: Design and application of an evaluation method. Eval. Program Plann., 30, 325–338, doi:10.1016/j.evalprogplan.2007.08.002.
W. K. Kellogg Foundation, 2004: Logic model development guide. Tech. Doc., 72 pp. [Available online at http://www.smartgivers.org/uploads/logicmodelguidepdf.pdf.]
Wyborn, C. A., 2015: Connecting knowledge with action through coproductive capacities: Adaptive governance and connectivity conservation. Ecol. Soc., 20, 11, doi:10.5751/ES-06510-200111.