Increasingly there are calls for climate services to be “co-produced” with users, taking into account not only the basic information needs of users but also their value systems and decision contexts. What does this mean in practice? One way that user values can be incorporated into climate services is in the management of inductive risk. This involves understanding which errors in climate service products would have particularly negative consequences from the users’ perspective (e.g., underestimating rather than overestimating the change in an impact variable) and then prioritizing the avoidance of those errors. This essay shows how inductive risk could be managed in climate services in ways that serve user values and argues that there are both ethical and practical reasons in favor of doing so.
Climate services should consider not just what users want to know, but also which errors users particularly want to avoid.
Climate services aim to provide “scientifically-based information and products that enhance users’ knowledge and understanding about the impacts of climate on their decisions and actions” (AMS 2015). Increasingly, there are calls for collaborative approaches to the delivery of climate services, including approaches in which products are “co-produced” by providers and users jointly (e.g., Brooks 2013; Hewitt et al. 2017). Such collaborative approaches can facilitate the development of more usable services and products in various ways; perhaps most fundamentally, they can help providers to better understand the information needs, values, and decision contexts of users, so that services can be better tailored to fit them (Lemos et al. 2012; McNie 2013; Vaughan and Dessai 2014; Hewitt et al. 2017). That user value systems and decision frameworks should be central to climate services delivery is a foundational assumption of recent work on the ethics of climate services (Adams et al. 2015).
It is far from obvious, however, what it means in practice to co-produce climate services, and to be responsive to user value systems and decision contexts when doing so. What concrete steps can providers take? Here we identify one significant way that providers, in collaborative consultation with users, can take account of user values in the delivery of climate services. We draw on a currently prominent view in philosophy of science—the inductive risk view—which argues that values sometimes can appropriately influence methodological choices in scientific investigations (Douglas 2000, 2009; Steel 2016). We show how the inductive risk view can be applied in climate services in ways that serve user values, and we argue that there are both ethical and practical reasons in favor of doing so. Most importantly, users will be less likely to unknowingly make decisions that fail to protect against the negative impacts of climate change—water shortages, crop losses, flooded homes—that they most want to avoid.
THE INDUCTIVE RISK VIEW.
Roughly speaking, values are what a person or group attaches positive significance to. When considering how values can and should influence science, philosophers often distinguish two types of value. Cognitive values (also called epistemic values)—such as internal consistency, predictive accuracy, and explanatory power—are considered a common and accepted part of scientific practice, for example, when selecting models or evaluating theories (Reiss and Sprenger 2017). More controversial is the influence of contextual values (also called nonepistemic values), encompassing social, political, ethical, cultural, and personal values, such as human well-being, individual freedom, social justice, or economic gain. The primary worry is that contextual value influence will bias scientific research in favor of desired conclusions, regardless of their truth or falsity (e.g., Biddle 2007).
Philosophers nevertheless are in broad agreement that contextual values can appropriately influence science in some ways, such as when deciding which questions to investigate or which applications of research to pursue (Reiss and Sprenger 2017). In addition, a number of philosophers argue that it is permissible, or even required, for scientists to employ contextual values when managing the risk of error in results, also known as “inductive risk.” Such errors can take different forms: scientists might accept a false hypothesis or reject a true one, or they might underestimate a quantity or overestimate it. According to the inductive risk view, when facing a methodological choice where it is unclear which option will give the most accurate results, scientists should consider how each option would affect the risk of different types of error and how bad the consequences of those errors would be; if some options carry a lower risk of errors that would have particularly bad consequences, this counts in their favor (Douglas 2000, 2009). The inductive risk view also requires that, when such value-influenced methodological choices are made, this is clearly communicated along with the results (Douglas 2009, chapter 8; Elliott and Resnik 2014).
For an illustration, consider a study in which scientists will monitor concentrations of toxic air pollutants from vehicle exhaust in an outdoor shopping district; their findings will help the local government decide when and how to reduce traffic in the area to better protect human health. The scientists must choose whether to install a type-A or type-B monitoring instrument. Neither is thought to be clearly more accurate or reliable than the other, but there is some evidence that at high concentration levels type-A instruments can occasionally give readings that are significantly low, while type-B instruments can occasionally give readings that are significantly high. According to the inductive risk view, scientists should consider the likely consequences of these potential errors: underestimating high concentrations can be expected to result in insufficient traffic reduction and adverse human health effects, while overestimating concentrations can be expected to result in unnecessary traffic reduction and economic losses (see Fig. 1). If there is agreement that human health is the value to be prioritized, then underestimating concentrations would be particularly undesirable; the type-B instrument is preferable.
Proponents contend that the inductive risk view reflects a basic responsibility that all moral agents, including scientists, have: to try to avoid mistakes or errors that can be expected to have particularly bad consequences (Douglas 2009, chapter 4).1 One might object to the inductive risk view, however, on the grounds that there are even better ways for scientists to proceed. Rather than appealing to contextual values to resolve an uncertain methodological choice, scientists could attempt to determine the implications of that uncertainty for the question under investigation, hedging their conclusions accordingly (Betz 2013). In the pollution study, for example, scientists might install both instruments, taking the range of their readings to provide an estimate of uncertainty about actual concentrations.
This alternative approach is not always feasible, however. It may be too expensive to install multiple instruments. In a modeling study, comprehensively exploring how uncertainty about modeling assumptions translates into uncertainty about results may require more time and computational power than is available. In such situations, data or modeling results can be accompanied by an estimate of uncertainty based (either in part or entirely) on expert judgment, but arriving at such an estimate also can involve uncertain methodological choices, for example, about how much weight to place on different considerations that inform the expert judgment. Moreover, sometimes decision-makers request that results be provided in a particular format. For example, they might request a limited set of scenarios. Or they might ask that scientists classify outcomes into a limited set of categories, such as those whose chance of occurrence is at least 1% and those whose chance is smaller. In these situations too, uncertain methodological choices—such as whether to place a particular outcome in one category or another when the evidence is ambiguous—may be unavoidable (Steele 2012). It is when uncertain methodological choices are unavoidable that the inductive risk view is most applicable.
MANAGING INDUCTIVE RISK IN TAILORED CLIMATE SERVICES.
Among philosophers, a still-debated issue is whose values ought to exert an influence when managing inductive risk (Schroeder 2017). In tailored climate services, where providers are enlisted to assist particular users, the default answer seems relatively clear: insofar as a service to users is being provided, the values of users ought to be employed.2 That is, typically, the climate service provider should prioritize the avoidance of errors whose expected consequences would be particularly bad from the users’ perspective. To appreciate what those errors are, a provider will need to consult with users to understand how they hope to use climate service products in their decision-making and which outcomes they seek to promote or avoid via those decisions.
Ideally, this consultation should occur at the outset, since opportunities to manage inductive risk in ways that serve user values can arise at any point in the provision of climate services. They arise whenever a methodological choice must be made, there are no clear scientific grounds for choosing among available methodological options, and some options can be foreseen to carry a smaller risk of an error that users are particularly concerned to avoid. Below we identify several points at which it is plausible that such opportunities will arise in practice, and for each we provide an example illustrating how a provider could manage inductive risk in a way that serves user values. Figure 2 shows in general terms the key steps in managing inductive risk in accord with user values.
Selecting climate information sources.
Climate information sources, including observational datasets and model projections, are the foundation on which climate services are built. Yet often there is significant uncertainty about which information sources are most accurate, especially in the case of model projections (Collins 2017). If a provider needs to select from among several projections (or to choose to average them) when developing climate service products, and if some options carry a smaller risk than others that products will err in ways that users are particularly concerned to avoid, then there is an opportunity to manage inductive risk in a way that serves user values.
For example, suppose that a water manager requests three scenarios for future changes in water levels of a large lake, which supplies a local population with water; one scenario is sought for each of three representative concentration pathways (RCPs). The manager does not want to spend money unnecessarily on new infrastructure—for example, an expensive new intake pipe (Freeman 2016)—but her primary concern is to ensure that local populations have adequate water supply in future. Suppose the provider’s analysis will begin from available projections of rainfall from a set of regional climate models, since rainfall is the ultimate source of water for the lake. If it is unclear which of the models will give the most accurate rainfall projections, then the provider might select those projections that show the most frequent and severe droughts under each RCP—rather than, say, the mean or median of the model projections. In this way, the provider reduces the risk that the scenarios will significantly overestimate water levels in the lake under the different RCPs, an error that could have particularly bad consequences from the manager’s point of view.3 Such a methodological choice should be made in consultation with the water manager and should be clearly communicated along with the results.4
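To make this kind of choice concrete, here is a minimal sketch in Python; the model names and rainfall values are entirely hypothetical and stand in for regional model output:

```python
# Hypothetical sketch: for each RCP, select the regional model whose rainfall
# projection is driest, rather than the ensemble mean or median, to reduce the
# risk of overestimating future water levels. Names and values are illustrative.
rcp_projections = {
    "RCP4.5": {"model_A": 820.0, "model_B": 760.0, "model_C": 905.0},  # mm/yr
    "RCP8.5": {"model_A": 700.0, "model_B": 655.0, "model_C": 810.0},
}

def driest_projection(projections):
    """Return (model, value) for the lowest projected rainfall."""
    model = min(projections, key=projections.get)
    return model, projections[model]

for rcp, proj in rcp_projections.items():
    model, rainfall = driest_projection(proj)
    print(f"{rcp}: use {model} ({rainfall} mm/yr)")
```

Choosing the driest member in this way is one defensible option among several; the point is that the selection rule itself encodes the user's preference for avoiding overestimates of water supply.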
Building or selecting impact models.
Opportunities to manage inductive risk also can arise when building or selecting impact models. Should this model or that model be employed? What numerical value should be assigned to this parameter in the model? In some cases, there is substantial uncertainty about which option will give the most accurate results. Yet, it may be clear that some options carry a lower risk of errors that users are particularly concerned to avoid.
Continuing with the water management example above, suppose that the climate service provider next employs hydrological models that link rainfall and other factors to flow in the rivers and streams that feed the lake and, in turn, to key features of the lake, such as water level. There might be significant uncertainty about the numerical values that should be assigned to some parameters in these hydrological models, including parameters that account for land surface types (and thus affect runoff and streamflow) in the catchment; this is because future changes in population and development in the region, which will determine how much of the land surface remains farm field or becomes paved, are themselves significantly uncertain (see also Beven 2012, p. 306). Suppose that lower numerical values of the land surface parameters result in lower streamflow and lower lake levels in the models. Then the provider might select numerical values near the lower end of the range that is currently considered plausible, rather than a value in the middle of the range, in order to reduce the risk that the scenarios produced will overestimate the water supply that would be available under a given RCP—the error that the manager is particularly concerned to avoid. Once again, such choices should be made in consultation with the user and should be clearly communicated along with results.
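The parameter choice can be sketched with a toy calculation; the linear runoff relation and the plausible range for the land-surface coefficient below are hypothetical, not a real hydrological model:

```python
# Toy illustration: choosing a land-surface (runoff) parameter at the low end
# of its plausible range yields lower estimated streamflow, reducing the risk
# of overestimating lake inflow. The range and relation are hypothetical.
def annual_runoff(rainfall_mm, runoff_coeff):
    """Toy relation: runoff is a fixed fraction of annual rainfall."""
    return rainfall_mm * runoff_coeff

plausible_coeff = (0.25, 0.45)  # assumed plausible range for the catchment
rainfall = 800.0                # mm/yr, illustrative

low_end = annual_runoff(rainfall, plausible_coeff[0])        # risk-averse choice
midpoint = annual_runoff(rainfall, sum(plausible_coeff) / 2) # "central" choice

print(f"low-end estimate: {low_end} mm, midpoint estimate: {midpoint} mm")
```

A real hydrological model would involve many interacting parameters, but the logic is the same: where the science does not constrain the value, the choice within the plausible range can reflect which error the user most wants to avoid.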
Analyzing data and modeling results.
Choices made when analyzing data and modeling results also can affect the balance of inductive risk. Classic examples come from statistical testing. When selecting the significance level at which a null hypothesis will be rejected, a less stringent level increases the chance that the investigator will reject the null hypothesis when it is true (a type I error), while a more stringent level increases the chance that the investigator will fail to reject the null hypothesis when it is false (a type II error). While a 2σ (or 0.05) significance level is a common default choice, a more or less stringent level might be chosen in light of how bad the consequences of erring one way rather than the other are expected to be, given the decisions that will be made in light of the conclusions reached (see also Anderegg et al. 2014; Lloyd and Oreskes 2018).
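The trade-off can be made explicit with a short standard-library sketch for a one-sided z test; the effect size (the shift of the test statistic under the alternative) is an illustrative assumption:

```python
# Sketch of the type I / type II trade-off for a one-sided z test, using only
# the standard library. The shift under the alternative is hypothetical.
from statistics import NormalDist

std_normal = NormalDist()

def error_rates(alpha, shift):
    """Type I rate is alpha by construction; type II rate is the chance that
    the statistic (shifted by `shift` under the alternative) stays below the
    critical value, so the false null is not rejected."""
    z_crit = std_normal.inv_cdf(1 - alpha)
    type2 = std_normal.cdf(z_crit - shift)
    return alpha, type2

for alpha in (0.10, 0.05, 0.01):
    t1, t2 = error_rates(alpha, shift=2.0)
    print(f"alpha={alpha:.2f}: type I rate={t1:.2f}, type II rate={t2:.2f}")
```

Tightening alpha from 0.10 to 0.01 here roughly triples the type II rate, which is exactly the kind of consequence the inductive risk view says should be weighed against the consequences of type I errors.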
Opportunities to manage inductive risk can arise elsewhere in analysis too. Suppose a provider seeks to combine information from an ensemble of climate or impact models. Given limited opportunities for testing the models, there might be only weak scientific grounds for thinking that two of the models—models X and Y—will deliver more accurate projections than the others for a particular variable (Collins 2017; Lorenz et al. 2018). Here, weighting the projections equally in the analysis could be scientifically defensible, as could assigning somewhat higher weight to models X and Y. However, if models X and Y tend to project larger changes in the variable of interest (e.g., regional temperatures), and if the user is particularly concerned to avoid underestimating those changes, then the provider might opt for unequal weighting. Again, such choices should be made in consultation with the user and should be clearly communicated along with results.
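A minimal sketch of the weighting choice, with hypothetical projected changes and an illustrative 2x upweighting of models X and Y:

```python
# Equal vs. unequal weighting of an ensemble of projected regional changes.
# Values (degC) and the upweighting factor are illustrative assumptions.
projections = {"X": 2.8, "Y": 2.6, "Z": 1.9, "W": 2.0}

def weighted_mean(values, weights):
    total_w = sum(weights[m] for m in values)
    return sum(values[m] * weights[m] for m in values) / total_w

equal = weighted_mean(projections, {m: 1.0 for m in projections})
# Upweight models X and Y, which project larger changes:
unequal = weighted_mean(projections, {"X": 2.0, "Y": 2.0, "Z": 1.0, "W": 1.0})

print(f"equal weights: {equal:.2f} degC; upweighted X/Y: {unequal:.2f} degC")
```

Both weightings are scientifically defensible in the scenario described; the unequal one simply shifts the estimate toward larger changes, reducing the risk of the underestimate the user most wants to avoid.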
Uncertain methodological choices can arise even when estimating uncertainties. Is a particular observation or modeling result plausible, and thus to be included in a set from which an uncertainty estimate will be inferred? Which available method for inferring the uncertainty estimate, making which statistical assumptions, will give the most accurate result? In some cases, current scientific understanding fails to give a decisive answer, but it is clear that some ways of proceeding will produce a broader estimated uncertainty range than others (e.g., counting borderline observations or modeling results as “plausible” rather than not). Here too, if choices must be made, they could be made in light of the inductive risk preferences of the user or client: if it would be particularly bad for the user’s purposes for uncertainty to be underestimated, then the provider might select those methodological options that will deliver a broader uncertainty estimate. Once again consultation with users should inform such choices, and the choices should be clearly reported along with results.
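The effect of counting borderline results as "plausible" can be shown in a few lines; the ensemble values below are illustrative:

```python
# Sketch: admitting borderline ensemble members as "plausible" broadens the
# estimated uncertainty range. Projected changes (degC) are hypothetical.
core_members = [1.8, 2.1, 2.4, 2.6]  # clearly plausible projected changes
borderline_members = [1.2, 3.1]      # plausibility is debatable

def uncertainty_range(members):
    return min(members), max(members)

narrow = uncertainty_range(core_members)
broad = uncertainty_range(core_members + borderline_members)

print(f"excluding borderline cases: {narrow}; including them: {broad}")
```

If underestimating uncertainty would be particularly costly for the user, the inclusive (broader) range is the inductive-risk-preferred choice, and the inclusion rule should be reported with the result.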
Uncertainty estimates can err not just by being too broad or too narrow, but in their precision as well (Parker and Risbey 2015). Many available methods for estimating uncertainty assign precise probabilities to future changes in climate (or climate impacts), even though current knowledge is insufficient to say exactly how likely it is that a change of a particular magnitude would occur under an RCP or emission scenario. A more accurate representation of uncertainty would take a coarser form, such as the probability intervals used by the Intergovernmental Panel on Climate Change (Mastrandrea et al. 2010). Whether it is worth investing extra time and resources to produce such a representation, however, can depend on the decision frameworks of users. If users will attempt to choose an “optimal” course of action if offered precise probabilities, then there might be a good chance that overprecision will lead to worse decisions (from the perspective of the user’s goals). But if users will seek a course of action that is robust under a range of possible futures and that includes an extra margin of safety, then some overprecision might not make a difference to their decision. Both the user’s values and the user’s decision framework could be relevant to the methodological choice.
INDUCTIVE RISK AND ON-DEMAND CLIMATE SERVICES.
It is less clear how to apply the inductive risk view when it comes to “on-demand” climate services, such as web portals. Unlike when services are tailored to a specific user, on-demand services often have a broad range of potential users; learning the inductive risk preferences of all potential users through consultation is often infeasible. There can also be inductive risk trade-offs: reducing the risk of errors that are of primary concern for some anticipated users might increase the risk of errors that are of primary concern for others. What can be done?
Here we offer just a few observations. First, to help users manage inductive risks for themselves, information about product quality, including clear warnings about product limitations and uncertainties, is crucial (Swart et al. 2017; Hewitson et al. 2017). Second, allowing for user-customization at the point of service can enable users to make choices that accord with their inductive risk preferences. For example, the Useful to Usable project’s web-based Corn Growing Degree Day tool (Angel et al. 2017) allows users to select among freeze thresholds and a number of other parameters; a user for whom significant crop loss would be worse than a slightly suboptimal yield might choose a freeze threshold at the high end of the range of temperatures at which it is plausible that significant crop loss could occur, thus reducing the chance that her estimated safe spring planting date is too early. Third, where user customizability is not feasible, ethical considerations might warrant prioritizing the inductive risk preferences of some anticipated users and stakeholders over others, especially those who otherwise have limited access to climate information and who stand to suffer especially severe harms due to climate change. Finally, insofar as particular values do shape choices in product development, which choices they influence and how should be clearly communicated. Users might then assess how those values align with their own and factor this in when interpreting results (Adams et al. 2015).
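How a user-selected freeze threshold shifts an estimated safe planting date can be sketched as follows; the temperature series, thresholds, and date rule are purely illustrative and are not the actual logic of the U2U tool:

```python
# Hypothetical sketch: a higher freeze threshold pushes the estimated safe
# planting date later, reducing the chance the date is too early. The data
# and the simple "last freeze" rule are illustrative only.
daily_min_temp_f = {  # day-of-year -> minimum temperature (degF)
    100: 30.0, 105: 34.0, 110: 31.0, 115: 36.0, 120: 33.0, 125: 40.0,
}

def safe_planting_day(min_temps, freeze_threshold_f):
    """First day after the last day on which the minimum temperature falls
    below the chosen freeze threshold."""
    freeze_days = [d for d, t in min_temps.items() if t < freeze_threshold_f]
    return max(freeze_days) + 1 if freeze_days else min(min_temps)

print(safe_planting_day(daily_min_temp_f, 32.0))  # lower threshold -> earlier date
print(safe_planting_day(daily_min_temp_f, 36.0))  # higher threshold -> later date
```

The user's choice of threshold here plays the same role as a provider's methodological choice in a tailored service: it trades the risk of one error (planting too early and losing the crop) against another (planting later than necessary and accepting a slightly suboptimal yield).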
BENEFITS AND RISKS OF ADOPTING THE INDUCTIVE RISK VIEW.
We have shown how the inductive risk view could be applied in the context of climate services in a way that serves user values. But is it a good idea?
Both ethical and practical considerations speak in its favor. It accords with what Adams et al. (2015) identify as a basic starting point for ethical practice in climate services: making user value systems and decision frameworks central to climate services delivery. More generally, the inductive risk view is argued by some philosophers to reflect a basic moral responsibility (see section 2). From a practical perspective, the benefits could be significant. Adopting the inductive risk view could make it less likely that users will experience very undesirable outcomes—inadequate water supply, significant crop loss, and other serious harms—when they could have been avoided; this is because steps will be taken to reduce the risk that climate service products err in ways that lead users to unknowingly choose courses of action that fail to protect against those outcomes.
On the other hand, adopting the inductive risk view may itself carry some risks. Though the view permits only a very limited role for values—they in effect serve as a tie-breaker when choosing among scientifically reasonable methodological options—one might worry about a slippery slope: once the door is opened to value influence, will not scientifically unreasonable methodological choices soon be made in order to advance particular values, such as political power or financial gain? This is a possibility. But an important mechanism for discouraging this is built into the inductive risk view: results should be accompanied by a clear statement of which uncertain methodological choices were resolved by appeal to values and how (Douglas 2009). This transparency is essential not only to open such choices to scrutiny—methodological choices that are unreasonable from the perspective of current scientific understanding can be called out—but also to help ensure that products reflecting a particular set of inductive risk preferences are not misinterpreted and misused.
Even if the slippery slope is avoided, there could be risks to the credibility (or perceived legitimacy) of climate services (Elliott et al. 2017). Perhaps climate services will be unfairly branded as “unscientific” if user values are openly allowed even the limited influence that the inductive risk view permits. This misperception could be reinforced when it inevitably happens that two products, apparently intended to estimate the same variable for two different users, are found to differ significantly in light of methodological choices tailored to the users’ differing values. In reality, if the inductive risk view has been adhered to, then the different estimates will both be within the bounds of current uncertainty; they will reflect preferences for avoiding different errors (e.g., overestimating versus underestimating) when assigning a numerical value to a variable, in a situation where existing scientific understanding is insufficient to tightly constrain such assignments. To maintain credibility, however, serious communication efforts may be needed to explain how such divergent estimates could reasonably arise, that they are indicative of significant underlying uncertainty, and so on.
We have identified one significant way that climate service providers can be responsive to user values in practice: by managing inductive risk in ways that accord with user values. Doing so involves appreciating which errors in climate service products could have particularly bad consequences from the user’s perspective, and then prioritizing the avoidance of those errors when confronting uncertain methodological choices. Managing inductive risk in this way, in collaborative consultation with users, can also be considered one aspect of co-production of climate services. To this end, providers should seek to understand the inductive risk preferences of users from the outset and to consult with users about how to proceed when relevant methodological choices arise.
We have suggested that there are ethical and practical reasons in favor of adopting the inductive risk view in climate services. Most importantly, managing inductive risk in ways that serve user values can reduce the chance that users will unknowingly make decisions that fail to protect against the climate-related harms that they most want to avoid. Nevertheless, our aim here is not to offer the last word on the matter but rather to open a conversation about the relevance of the inductive risk view for climate services and, more generally, to prompt further discussion of how user values can and should be incorporated into climate services in practice.
Thanks to audiences at Cambridge University, Durham University and PSA 2018, and to several anonymous BAMS reviewers, for helpful comments and suggestions. This research was supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement 667526 K4U). The content reflects only the authors’ views, and the ERC is not responsible for any use that may be made of the information it contains.
The inductive risk view calls for scientists to take due care to avoid errors when it is reasonably foreseeable that those errors will have particularly bad consequences. They need not always succeed in avoiding error; the requirement is just for due diligence.
Exceptions could include cases where user values permit inflicting significant and unjust harms on other stakeholders. We bracket such complexities here, as our primary aim is to show how user values can be taken into account in the management of inductive risk, when it is appropriate to do so. An important next step in developing an ethical framework for climate services is to clarify how providers should balance responsibilities to users/clients with broader societal and ethical responsibilities, especially when they conflict.
We assume that the user interprets each scenario as “one reasonable estimate” of what would happen under an RCP. Such an estimate errs if it differs significantly from what in fact would happen under that RCP. Whether such an error has bad consequences, however, depends on how the scenario is used in decision-making, for example, whether it is treated as just one possibility or as an accurate estimate. This points to the importance of communicating with users about how to interpret products and about their uncertainties (Adams et al. 2015; Otto et al. 2016). A broader literature on risk perception and risk communication is also relevant when considering how to avoid misinterpretation and misuse of products (see, e.g., Pidgeon and Fischhoff 2011).
Consultation is important since, even if an option aligns with the user’s inductive risk preferences, she might have overriding reasons for preferring a different option, for example, to facilitate comparability with other information resources or because of legal requirements.