Ouranos is a nonprofit consortium launched in 2002 with the mandate to provide climate services to its governmental, academic, and private partners. These services have focused on the impacts of climate change in the province of Québec, the identification of vulnerabilities and opportunities associated with the future climate, and the assessment of adaptation options. This paper discusses the experience and insights acquired at Ouranos over the last 10 years in building climate scenarios in support of these impact and adaptation studies. Most of this work is aimed at making climate science intelligible and useful for end users, and the paper describes approaches to developing climate scenarios that are tailored to the needs and level of climate expertise of different user categories. The experience has shown that a group of professionals dedicated to scenario construction and user support is a key element in the delivery of effective climate services.
The first 10 years of the Ouranos Consortium are reviewed with a focus on the creation and delivery of climate scenarios for a wide range of users.
In October 2011, the first International Conference on Climate Services (Vaughan 2011) was held in New York City “to initiate a dialogue between experienced climate information providers and those who currently use or wish to use such information.” This kind of dialogue is a core component of the Global Framework for Climate Services initiative, launched through the World Meteorological Organization (WMO) in 2009 by 150 countries committed to providing timely climate services to their populations. Climate services aim to provide actionable science—that is, data, analysis, and forecasts that are sufficiently predictive, accepted, and understandable to support decision making (Kerr 2011), or as Bruce Hewitson, University of Cape Town climatologist, puts it: “science that scientists are willing to bet their own money on.”
According to WMO (2011), climate services “include the provision of data, data summaries and statistical analyses and predictions as well as tailored information products, scientific studies and expert advice delivered with ongoing support and user engagement.” In the United States this role is played by regional climate centers created in the 1990s under the purview of the National Oceanic and Atmospheric Administration (NOAA; www.noaa.gov/) to provide, in tandem with Regional Integrated Sciences and Assessments (Pulwarty et al. 2009), climate services to the public, industries, and governments (DeGaetano et al. 2010). In 2002, the government of the province of Québec in Canada partnered with universities and Hydro-Québec to launch the Ouranos consortium with the objective of providing climate information and expertise in support of adaptation to climate change.
Ouranos differs from most other climate service centers by merging operational climate modeling expertise, impacts and adaptation expertise, and climate analysis services under one roof. Ten years into this experiment in compulsory multidisciplinary scientific cohabitation, we felt that it was time to reflect on the path taken during these years and take a critical look at some of our successes and failures. This paper's focus is on the creation and evolution of a dedicated scenario group—a team specifically created to handle user requests for general climate information and climate change scenarios. The experience of Ouranos with and without a scenario group suggests that dedicated scenario professionals are a key ingredient in delivering effective climate services and maintaining long-term healthy relationships between users and climate scientists.
THE CREATION AND ORGANIZATION OF OURANOS.
The devastating 1996 Saguenay flood and the 1998 ice storm over Québec, Ontario, and New England caused billions in damage and left public security officials wondering what would come next. These events, along with concerns about the exceptionally low levels of hydroelectric reservoirs due to a series of dry years, raised the profile of meteorological threats. This new national security concern called for an institution that could deliver adequate information. At the same time, support for a team of young research scientists working on the Canadian Regional Climate Model (CRCM; see Caya and Laprise 1999) at Université du Québec à Montréal was threatened by changes in funding rules. In a serendipitous unfolding of events, these scientists were recruited to form the core of what would, in 2002, officially become the climate simulation group of the Ouranos Consortium.
Launched in Montréal with an annual budget of Can$4.6 million, Ouranos's first mandate was to provide Canada with a regional climate projections program. The funds were deployed to create the operational environment required for the task—namely, staff, hardware, software, and digital storage—as well as to promote research efforts, both internal and university based. Over the first few years, the operational version of the Canadian Regional Climate Model was developed and a climate projections database was created. The next step was to provide data and expertise regarding climate change to support impact and adaptation studies carried out by its founding members. These members (Québec government departments, Environment Canada, Hydro-Québec, and four universities) contributed employees to Ouranos in a win–win arrangement that increased the manpower of Ouranos while at the same time facilitating the transfer of knowledge between organizations.
The most striking projected change for Québec's climate is probably the increase in winter precipitation. The entire province should see more rain and snowfall over winter, with the largest increases (15%–30%) over northern Québec (Ouranos 2010). Summer precipitation is also projected to increase in the north, but no significant changes are expected in the south. The pattern is similar for temperatures, with the strongest warming (4°–7°C) in winter over northern regions. Although higher temperatures drive evapotranspiration rates upward, the net effect of these changes is an increase in annual runoff over central and northern Québec. For example, annual runoff is projected to increase 10%–14% over the La Grande complex east of James Bay, where about half the province's hydroelectric generation capacity is installed. Snow accumulation is expected to increase in the north but decrease in the south (Fig. SB1) because of the interplay between increasing precipitation and warmer winters (i.e., the solid–liquid fraction of precipitation and the duration of the snow accumulation period).
Following the Intergovernmental Panel on Climate Change (IPCC) working groups' structure, Ouranos is divided into two entities: “Climate Science” and “Vulnerability, Impacts, and Adaptation” (Fig. 1). The Vulnerability, Impacts, and Adaptation (VIA) group relies on a highly multidisciplinary staff, as well as experts from its network of member organizations, to coordinate a set of research and development (R&D) programs that span a variety of themes (see www.ouranos.ca/): agriculture, forests, water resources, health, biodiversity, energy, infrastructure, and tourism, including crosscutting issues across these themes. VIA staff members act as network hubs and liaison officers in addition to participating in and/or ensuring proper oversight of the projects developed within these programs. Their work involves identifying priority research needs by working with potential stakeholders through a variety of mechanisms (program committees, workshops with stakeholders, etc.). These stakeholders are mainly within government and academia but also include some industry sectors. The projects that they develop typically assess climate-change-related vulnerabilities and opportunities and aim to identify and assess adaptation options. More generally, each project strives to bring together scientists, experts, and end users in a bid to narrow the interaction gap between science and decision making. The ideas underlying this knowledge transfer and examples of how this is carried out in practice are laid out in Vescovi et al. (2009) and Bourque et al. (2009).
Climate Science is divided into two groups: “Simulations and Analyses” and “Scenarios and Services.” Simulations and Analyses is responsible for producing and analyzing regional climate projections. This paper discusses the experience of the Scenarios and Services group, whose purpose is to serve the climate needs of VIA projects. These needs include the acquisition of global and regional climate model outputs and observations, data processing to produce climate scenarios based on known methodologies or novel approaches, and transferring the information to end users. The modus operandi for scenario building goes as follows:

1) Based on the project's topic, designate a Scenario staffer who will support the VIA project.

2) Meet with users to understand what the project is about: its objectives, the time horizon of interest, the kind of climate information needed, and the resources available to incorporate this climate information into the project. This is usually an iterative process, as new users often have unrealistic expectations regarding the level of detail that climate science can provide. Requests for detailed future land wind patterns, fog conditions, or hail events cannot be met accurately with the current generation of models. A substantial part of our work involves discussions with users to identify a middle ground between information needs and climate science capabilities.

3) Based on the project's needs and resources, provide general climate expertise or custom climate scenarios using methods of varying complexity tailored to each problem. The size of the ensemble, whether simulations are regional or global, the downscaling method, the type of statistical analysis, and the observations against which simulations are compared are all parameters that vary from project to project.

4) Support users in incorporating these scenarios into their own research or analysis, taking care to properly account for leading sources of uncertainty. Uncertainty analyses are embedded in every project and monopolize a substantial fraction of the collective brainpower. They systematically include intermodel comparisons and, when required, assessments of natural variability, allowing users to evaluate the consensus and significance of projected changes.

5) Report on the work done and the methods used for internal peer review.
The scenario group was initially composed of a geographer and a mathematician but grew to include scientists with backgrounds in atmospheric sciences, hydrology, physics, biology, and geomatics. Each VIA project is coordinated and supervised by a VIA professional and supported by the Scenario specialist whose training and interests best match the project requirements. Regular meetings, cafeteria discussions, and impromptu chats in corridors keep information flowing freely across the VIA, Scenario, and Simulation groups.
CLIMATE SCENARIOS FOR VIA PROJECTS.
The 2001 IPCC report includes a chapter on “climate scenario development” (Mearns et al. 2001), in which a climate scenario is defined as a “plausible representation of future climate that has been constructed for explicit use in investigating the potential impacts of anthropogenic climate change.” The IPCC chapter discusses the most common questions that arise when constructing climate scenarios to investigate climate change impacts and has been the starting point for our work at Ouranos.
The first climate scenario produced by Ouranos dates back to 2004 in response to a request from Hydro-Québec concerning the electricity demand for residential heating and cooling (see sidebar on “Climate projections for electric demand”). Looking back, the methodology seems rather naive—for example, assuming that the climate had been stationary until 2000 and relying on a small ensemble of simulations from only one global climate model. The study was updated in 2007 with new simulations from phase 3 of the Coupled Model Intercomparison Project (CMIP3) and considerably more experience in the development of climate scenarios.
Hydro-Québec counts 4 million residential customers, 76% of whom heat their homes using electricity as the main source of energy (Publications Éconergie 2007). Winter temperatures along the St. Lawrence River valley average around –10°C and peak demand occurs in January when temperatures can plummet to –30°C, posing a considerable challenge because 95% of the electricity is generated by hydraulic turbines fed by dams largely filled during spring melt. In other words, demand occurs before water becomes available; electricity provision thus requires large reservoirs and tight water management.
Hydro-Québec's forecast system for electricity demand initially relied on climate normals computed over the previous 30 years. However, this method did not account for the warming climate and resulted in overestimated winter demand and excess storage in upstream reservoirs. The inclusion of a warming trend in decision making could allow for better load forecasts, more efficient water resource management, and greater potential to take advantage of export markets.
In 2004, the utility tasked Ouranos to provide an estimate of monthly decadal temperature trends using the latest information from climate models. Using Hadley Centre Coupled Model, version 3 (HadCM3) simulations, we estimated a trend of +0.5°C decade−1 for January—a figure Hydro-Québec used to modify its operational rules. Such a change to the utility's procedures required the approval of the Régie de l'Énergie—an institution regulating the electricity market. The position of Ouranos as an independent nonprofit organization with strong academic credentials considerably strengthened Hydro-Québec's request to update its forecasting rules.
In 2007, Hydro-Québec asked Ouranos to update the warming scenarios used to forecast electricity demand over the coming decades. This time, a larger ensemble was available as the first CMIP3 simulations came online. The analysis was updated again in 2011 with a more complete set of simulations—137 in total—providing temperature change trends more robust to multidecadal natural variations and a better representation of model error. The revised scenarios suggest new intraannual warming patterns and, more importantly, even stronger winter warming. Figure SB2 shows the monthly decadal warming trends according to these three successive reports.
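The kind of calculation behind these figures can be illustrated with a short sketch: fit a least-squares trend to a monthly-mean temperature series and express the slope per decade. The code below is a minimal, hypothetical example (the function name and synthetic series are ours); the actual analyses averaged such trends across large multimodel ensembles.

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares slope of a temperature series, in degrees C per decade."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 10.0

# Synthetic January-mean series warming at 0.05 C per year
years = np.arange(1971, 2051)
temps = -14.0 + 0.05 * (years - years[0])
trend = decadal_trend(years, temps)  # a purely linear series recovers 0.5 C per decade
```

In practice, the per-model trends would be averaged, and their spread retained as a first indication of model uncertainty.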
The year 2006 saw a flurry of demands for scenarios on a variety of subjects: heat waves, pollens, drought, and forest fires, as well as energy efficiency norms. Before 2007, studies used from three to seven global climate models (GCMs) from CMIP2. The selection of models was initially based on objective criteria outlined by Parry (2002), as well as requirements for spatial resolutions better than 4° and multilevel land surface schemes. Some of these criteria were eventually relaxed to include more models, reflecting an evolution of our grasp of model uncertainty. We now tend to drive impact models with as many climate models as possible to provide a more robust multimodel mean (Gleckler et al. 2008) and ensemble spread, the latter being an evaluation of model uncertainty. Almost all studies from 2006 and 2007 used the delta method to drive impact models.
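The delta method mentioned above can be stated compactly: perturb an observed series by the simulated mean change, additively for temperature and, by a common convention, multiplicatively for precipitation. A minimal sketch, with illustrative names of our own choosing:

```python
import numpy as np

def apply_delta(obs, ctrl_mean, fut_mean, multiplicative=False):
    """Perturb an observed series by the simulated mean climate change.

    Additive deltas suit temperature; multiplicative ratios are a common
    convention for precipitation, which cannot go negative."""
    if multiplicative:
        return obs * (fut_mean / ctrl_mean)
    return obs + (fut_mean - ctrl_mean)

tas_obs = np.array([-12.0, -8.0, -15.0])  # observed January temperatures (C)
# Simulated January mean: -10.0 C in the control run, -7.5 C in the future run
tas_future = apply_delta(tas_obs, ctrl_mean=-10.0, fut_mean=-7.5)  # uniform +2.5 C shift
```

The appeal of the method is that the perturbed series keeps the observed variability and station statistics; its limitation, discussed later, is that a single mean delta shifts small and large events by the same amount.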
The first end-user scenarios based on regional simulations from the CRCM appeared in 2006 (Plummer et al. 2006), mixed with scenarios from established GCMs. One year earlier, CRCM had switched from version 3 to version 4, replacing the old bucket-type land surface scheme with the more realistic Canadian Land Surface Scheme (Verseghy 2012). This evolution considerably improved evaporation and precipitation fields and increased our confidence in CRCM outputs.
Evolution of methodologies.
The year 2007 saw the first applications of the CMIP3 generation of models. With considerably more sophisticated models and a larger ensemble to choose from, the level of confidence in climatic projections was raised, although broad conclusions did not change significantly. At the same time, simulations from the CRCM started to become a standard component of climate scenarios. Typically, a few simulations from the CRCM would accompany an ensemble of GCM simulations, providing some insights on the advantages (e.g., higher spatial detail, more realistic precipitation) and disadvantages of regional ensembles (e.g., additional sources of uncertainty, numerically more intensive, smaller subset of driving GCMs). With more simulations in our archives with each passing year and collaborations with other regional modeling centers, some projects were able to rely entirely on regional simulation ensembles. Larger ensembles allowed analysis of the sensitivity of results to model choice over selected domains such as North America. This would have been impossible to achieve without collaborations such as the North American Regional Climate Change Assessment Program (NARCCAP; see Mearns et al. 2012)—a coordinated modeling effort pooling simulations from six regional modeling centers to assess uncertainty in regional projections and support impact studies.
However, because of their smaller size and diversity, the range of responses from regional model ensembles still remains smaller than that of full CMIP GCM ensembles, implying a tradeoff between increased spatial resolution and the range of climate futures explored. Also, better resolution is no guarantee of added value, as the finescale variability of the climate change signal is often small compared to its large-scale component, already captured by GCMs (di Luca et al. 2013). Moreover, regional models react to their GCM pilot in nonintuitive ways, further complicating the interpretation of results (Mearns et al. 2012). Another tradeoff was that Ouranos decided early on to focus its efforts on the greenhouse gas (GHG) scenario A2 in order to keep the number of simulations in check. Because the sensitivity of climate change to GHG scenarios is small before 2050, the time horizon that interests most of our users, this choice was justified. For the second half of the twenty-first century, however, using only the A2 scenario results in an underestimation of uncertainty (Hawkins and Sutton 2009).
Selecting an ensemble of simulations.
The availability of CMIP3 and in-house CRCM runs meant that more simulations were accessible, but it raised a number of questions and discussions regarding the choice of models and the contribution of natural climate variability to uncertainty (Mote et al. 2011). Another key issue was that the number of simulations that we wished to include in the construction of scenarios became, in fact, limited by users' capabilities. Not all users are able to run, or interested in running, experiments with hundreds of simulations. With finite resources, users need to strike a balance between the time devoted to climate impacts and other factors likely to influence adaptation choices. For example, population growth and fluctuations in commodity prices may have larger impacts and introduce considerably more uncertainty into forecasts than climate change.
A common approach to reduce the number of model simulations is the selection of scenarios at the low and high end of expected changes to key climatic variables. Such a strategy is appropriate for sensitivity experiments, where we are interested in knowing the range of conditions likely to be encountered. This approach, however, dismisses information about the likelihood of those scenarios—that is, what the majority of models project for the future. An approach we now use regularly is the selection of simulations by cluster analysis (N. Casajus et al. 2012, unpublished manuscript). The idea is first to define a list of climatic indices relevant to the problem at hand. For example, if one is interested in forestry, such indices could be growing season length, moisture stress, and the magnitude of extreme cold events. For each simulation, those indices are computed over the control and future periods to obtain delta factors (i.e., the mean climate change). The deltas are then standardized, and a clustering algorithm is applied to identify simulations that are close together in the multidimensional space formed by the deltas. The number of clusters is up to the user, but it can also be objectively selected to maximize the coverage of model uncertainty while minimizing the number of clusters. This clustering approach to model selection was first introduced at Ouranos through a collaborative project to assess the impacts of climate change on Québec's biodiversity (see sidebar on “Changing climatic niches”). Results show that the number of models can be reduced without significant change to the shape of the climate change deltas' distribution.
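The selection procedure just described can be sketched with a plain k-means on the standardized deltas. The code below is an illustrative reconstruction, not the exact algorithm used in the project: the function name, the farthest-point initialization, and the choice to return the member nearest each centroid are our own assumptions.

```python
import numpy as np

def select_by_clustering(deltas, k, n_iter=20):
    """Pick one representative simulation per cluster of similar
    climate-change deltas (rows: simulations; columns: climate indices)."""
    z = (deltas - deltas.mean(axis=0)) / deltas.std(axis=0)  # standardize each index
    # Farthest-point initialization spreads the starting centroids apart
    centroids = [z[0]]
    for _ in range(k - 1):
        d = np.min([((z - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(z[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(n_iter):  # plain k-means updates
        labels = np.argmin(((z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([z[labels == j].mean(axis=0) for j in range(k)])
    # Representative = the ensemble member nearest each final centroid
    return sorted({int(np.argmin(((z - c) ** 2).sum(-1))) for c in centroids})
```

Running the representatives, rather than the full ensemble, through an impact model preserves most of the spread of the deltas at a fraction of the cost.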
As climatic conditions change, tree, plant, and animal species either adapt, migrate, or go extinct. The most common response to both past and current warming is to migrate either toward the poles or to higher and colder elevations (Berteaux et al. 2010). For individual species, biologists often define climatic niches—areas where the climatological conditions will allow a species to persist. Inevitably, the match between these mapped niches and actual species distributions is not perfect, and results are limited by the fact that niche models are correlative: they assume a state of quasi equilibrium between species presence and current climate and ignore important processes such as dispersal and competition (Berteaux et al. 2010). Despite these limitations, niche model projections (such as those shown in Fig. SB3) have been employed to provide useful information on the magnitude of potential future species shifts (Thuiller et al. 2011; Lawler et al. 2009; Parker-Allie et al. 2009).
Ouranos participated in a project led by scientists at the Université du Québec à Rimouski (UQÀR), McGill University, and Université de Montréal, in partnership with different ministries and conservation authorities, to estimate the future climatic niches of about 1,000 species of trees, plants, amphibians, reptiles, birds, and mammals (see http://cc-bio.uqar.ca). The climatic niche model was driven using a set of climate model scenarios to assess the effect of modeling uncertainty on the species distribution. In a fine example of successful collaboration between disciplines, project biologists provided expertise on cluster analysis—a method biologists use to group species with similar characteristics. This method was applied to climate scenarios to cluster those that share similar climate change patterns and reduce the number of simulations to run through the climatic niche model, solving an important bottleneck.
The Oriole case presented in Fig. SB3 is emblematic of the northern biodiversity paradox, which suggests that the northward shift in species distribution will lead to a net increase in biodiversity in northern ecosystems. The consequences of the arrival of such “exotic” species in the local ecosystems are still not clear, but results from the project are being used to inform conservation strategies aimed at preserving local biodiversity by helping to identify individual species, as well as geographical regions, that are most likely to be impacted by climate change. In addition, results provide valuable information about climate refuges, which is important for the management and future planning of protected area networks and migration corridors.
Postprocessing of climate model outputs.
Once a set of simulations is selected, the next step is to process the climate model outputs to either analyze the results directly or use them to drive an impact model. Experience shows that feeding climate model output directly into an impact model sometimes leads to spurious results due to model biases (Wood et al. 2004). For example, a model for forest fires generated no fires at all over the historical period because of a cold and humid bias in the climate simulation. The need for bias correction was not always recognized, and a few years ago it was the subject of heated debates at Ouranos. We argued whether differences between observations and model outputs were biases or a result of natural long-term oscillations of the climate system. We also worried about the loss of internal coherence between variables after bias correction. As more simulations became available, it became clear that these biases were robust features and would not go away with longer time series. The loss of coherence between variables was eventually seen as a lesser evil than feeding biases to impact models (e.g., see Muerth et al. 2013). Nevertheless, this potential loss of physical coherence remains a source of discontent, and efforts are underway to evaluate how serious it is and what can be done to avoid it.
Generally speaking, methods that connect large-scale climate model outputs with smaller, local scales are referred to as downscaling methods (Maraun et al. 2010). The methods that we typically use fall into the category of empirical downscaling, or model output statistics. They can be divided into two classes: bias methods and delta change methods. Bias methods remove the differences between observations and the control simulation from the future simulated time series, while delta methods add the change between control and future simulations to the observed time series. A few years ago, the bias or delta values were averaged at the monthly or seasonal scale, then applied to time series. Averaging values, however, masks changes that affect small and large events differently. For example, an increase in precipitation might be caused by more days with drizzle or by more intense rainfall events. More drizzle days are an issue for farmers, while intense rainfall is more of a concern for city drainage engineers. One way to address such differences is to define deltas, or biases, that are a function of the rank of the value being affected. Called quantile mapping methods, these algorithms apply a correction factor that varies according to the rank of the value to be corrected, meaning that the 10th percentile precipitation may have a delta quite different from that of the 90th percentile precipitation. Although these methods are quite simple, they perform on par with more elaborate ones (Themeßl et al. 2010) and are now used routinely at Ouranos.
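A bare-bones version of this rank-dependent idea, in the delta change flavor, might look like the sketch below (assuming `numpy`; the function name and quantile convention are illustrative). Each observed value is shifted by the simulated change at its own empirical quantile instead of by a single mean delta.

```python
import numpy as np

def quantile_delta(obs, ctrl, fut):
    """Shift each observed value by the simulated change (future minus
    control) evaluated at that value's empirical quantile."""
    n = len(obs)
    q = (np.argsort(np.argsort(obs)) + 0.5) / n          # quantile of each obs value
    delta = np.quantile(fut, q) - np.quantile(ctrl, q)   # rank-dependent change
    return obs + delta
```

With a uniform simulated shift this reduces to the classic delta method; when only the wet extreme of the simulated distribution changes, only the largest observed values are perturbed.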
Climate projections uncertainty.
One of the more challenging issues in climate scenarios is to properly account for uncertainties inherent to climate projections and carry them over to impact models. Typically, this uncertainty is characterized by the dispersion within an ensemble of scenarios. Depending on the time horizon of interest and the type of questions that are being asked, ensemble members are chosen to explore what we expect to be the leading sources of uncertainty, whether it is the model choice, the emission scenario, natural variability, or postprocessing methods. While this is a standard procedure, it remains a rather limited and unimaginative way to describe uncertainties. Indeed, this dispersion cannot reflect processes absent from climate models, nor can it be interpreted as a probabilistic sample from a real population of future climates, in part because of model interdependence. Attempts have been made by Murphy et al. (2009) to work around these caveats and build usable probabilistic projections, but only at formidable computational costs and only for a few variables. Until further progress is made, dispersion within ensembles of a priori equiprobable simulations is interpreted as a rough proxy for the real uncertainty underlying climate projections.
This uncertainty can be communicated to users in multiple ways. In the simplest cases, interquartile ranges of climate change deltas are used. More frequently, though, impact modelers process a subset of climate models and emission scenarios to assess for themselves how climate uncertainty translates into impact uncertainty. This exercise allows a comparison against leading sources of nonclimatic uncertainty and helps impact modelers gauge how sensitive their results are to climate-related hypotheses. The extra work and complexity imposed by uncertainty assessments are not welcomed by all users, some of whom would prefer to work with a single number. In those cases, we take the time to explain where this uncertainty stems from: gaps in our understanding and capability to model processes, but also unpredictable human choices and the irreducible chaotic nature of the climate system. As words go, “irreducible” is pretty effective in saying “better get used to it.”
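In the simplest cases mentioned above, the uncertainty report reduces to order statistics over the ensemble of deltas. A minimal sketch (function name and output format are ours):

```python
import numpy as np

def ensemble_summary(deltas):
    """Multimodel median and interquartile range of an ensemble of a priori
    equiprobable climate-change deltas, used as a crude uncertainty proxy."""
    q25, q50, q75 = np.percentile(deltas, [25, 50, 75])
    return {"median": float(q50), "iqr": (float(q25), float(q75))}

# E.g., winter temperature deltas (C) from five ensemble members
summary = ensemble_summary(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))
```

Reporting the median with the interquartile range, rather than a single number, keeps the ensemble spread visible without overstating it as a formal probability.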
A side issue in this quest to capture modeling uncertainty is ensuring continuity between generations of models. For example, CMIP5 simulations have replaced the Special Report on Emissions Scenarios (SRES) storylines (A1B, A2, and B1) that users are now familiar with by representative concentration pathways (RCPs). Since no direct correspondence exists between SRES scenarios and RCPs, CMIP3 and CMIP5 ensembles cannot be mixed without answering compatibility questions first. Users will also inquire whether there are significant advantages to upgrading to the newer suite of simulations; that is, have projections significantly changed? Work by Markovic et al. (2013) indicates that users who have invested heavily in CMIP3 scenarios may not find the slight improvements from CMIP5 worth the transition effort.
Although model selection, postprocessing methods, and uncertainty assessments are central to the construction of climate scenarios, the 2007 IPCC Fourth Assessment Report offered little technical detail on these subjects. Special reports addressing some of them have been published since (Knutti et al. 2010), but they do not have the profile or breadth of an IPCC chapter. There is a real need to strengthen the science of scenario construction and interpretation and, more generally, to help climate service organizations learn from each other.
USER INTERACTION AND ACCOUNTABILITY.
According to Cash and Clark (2001), three qualities are key to the effectiveness of scientific assessments: salience, the perceived relevance of the information to stakeholders; credibility, the perceived authoritativeness or believability of its technical dimensions; and legitimacy, the perceived fairness of the assessment process. These components are not independent and sometimes reinforce or compete with each other. Nevertheless, all three are essential to the translation of climate information into real-world action (Meinke et al. 2006). Our experience shows that close and sustained user interaction is crucial to achieve not only salience, but also credibility and legitimacy.
Scientific assessments are credible when they are understandable, and climate science proves to be challenging in this regard since it is far from intuitive. Indeed, many scientists work with models where the input data largely determine the model outcome. In climate models, imposed changes in GHG forcing are small compared to diurnal and annual solar radiation variations and do not directly influence individual weather events. The occurrence of a single cold or warm year, a hurricane, or a drought follows from the dynamics of the climate system and cannot be predicted far in the future. On the other hand, given changes in GHG concentrations, climate models have some skill predicting bulk properties of the system, such as 30-yr temperature trends or changes in average precipitation over large areas. This combination of predictability at multidecadal time scales and stochasticity at annual time scales seems to be a typical cause of confusion. Without sustained user interaction, such misinterpretations may undermine the credibility of climate information.
A tool that we use to improve transparency and legitimacy was inspired by Kloprogge et al. (2007). It consists of a document, handed out to users at the end of each project, that synthesizes the methodological choices made in the course of scenario construction, such as the number of simulations, the number of GHG scenarios, and the spatial resolution. These figures are placed within a range of possible values so users unfamiliar with climate science can get a feeling for where their scenarios stand in the grand scheme of things. For example, if the scenario includes five regional simulations with three different models, then the document lets users know that such an ensemble would be considered small compared to typical GCM ensembles but medium compared to typical regional climate model (RCM) ensembles. This comparison thus conveys the idea that, by using a regional ensemble, the user has sacrificed model uncertainty coverage for increased spatial resolution. By outlining these tradeoffs in a formal document given to end users, we hope to reduce the risks of misunderstandings and dispel unrealistic expectations.
With increasing experience, resources, and funding at our disposal, the number of projects served by the Scenario group has grown steadily, from 2 in 2004 to 20 in 2011. Table 1 gives a sense of the wide variety of disciplines involved in some of these projects, along with the methods used and the main deliverables. Projects are mostly geared toward planning or research purposes, but some of them have already reached day-to-day operational status. The projects have also tended to become more complex over time in response to users' increasing knowledge, the growing number of model simulations, and more refined analysis and scenario construction methods. That being said, the methods used to create scenarios still remain relatively simple, with most scenarios relying on simple delta factors or quantile-corrected time series. More elaborate approaches are developed as needed to handle nonstandard scenarios such as precipitation or temperature extremes, whose skewed distributions are badly handled by correction factors (Casati et al. 2013).
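The two standard techniques mentioned above can be sketched in a few lines. The following is a minimal illustration, not Ouranos's operational code: the function names and the choice of an empirical, interpolation-based quantile correction are our own assumptions for the example.

```python
import numpy as np

def delta_scenario(obs, ref_sim, fut_sim, multiplicative=False):
    """Delta-change scenario: shift (additive, typical for temperature)
    or scale (multiplicative, typical for precipitation) the observed
    series by the model's mean future-minus-reference signal."""
    if multiplicative:
        return obs * (fut_sim.mean() / ref_sim.mean())
    return obs + (fut_sim.mean() - ref_sim.mean())

def quantile_corrected(obs, ref_sim, fut_sim, n_quantiles=100):
    """Empirical quantile correction: estimate the observation-minus-
    simulation bias at each quantile of the reference period, then apply
    that correction to the future series at the matching quantile level."""
    q = np.linspace(0.01, 0.99, n_quantiles)
    obs_q = np.quantile(obs, q)
    sim_q = np.quantile(ref_sim, q)  # must be monotonic for interp
    # Interpolate the quantile-wise bias at each future value.
    return fut_sim + np.interp(fut_sim, sim_q, obs_q - sim_q)
```

A delta scenario preserves the observed day-to-day variability exactly, while a quantile correction also reshapes the distribution, which is why skewed variables such as extreme precipitation need more careful treatment.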
For nontechnical users, spatial analogs have been found to be an effective tool to communicate climate change impacts. Spatial analogs are nearby regions or cities where the current climate is similar to the future climate of the region of interest (Grenier et al. 2013). Studying how these neighboring regions handle climate issues is a shortcut to identify adaptation solutions, as well as an effective communication tool. It is important to keep in mind that the uncertainty introduced by different approaches to create scenarios remains small compared with other sources of uncertainty such as the choice of climate model—a strong incentive to keep our analytical methods simple and straightforward, and avoid methodological excesses.
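The analog search itself amounts to a nearest-neighbour problem: pick the candidate location whose current climate is closest to the projected future climate of the study region. The sketch below uses a plain Euclidean distance over two made-up annual indicators; the place names, numbers, and distance metric are all illustrative (Grenier et al. 2013 develop the dissimilarity measures more rigorously, and in practice the indicators would be standardized before comparison).

```python
import math

def closest_analog(future_climate, candidates):
    """Return the candidate whose current climate minimizes the
    Euclidean distance to the projected future climate."""
    def dist(clim):
        return math.sqrt(sum((clim[k] - future_climate[k]) ** 2
                             for k in future_climate))
    return min(candidates, key=lambda name: dist(candidates[name]))

# Hypothetical indicators: mean temperature (degC), precipitation (mm/day).
region_2050 = {"tas": 9.5, "pr": 2.9}
candidates = {
    "CityA": {"tas": 6.2, "pr": 2.8},
    "CityB": {"tas": 9.3, "pr": 3.0},
    "CityC": {"tas": 12.1, "pr": 2.4},
}
```

With these numbers, `closest_analog(region_2050, candidates)` selects `"CityB"`, whose present climate best matches the projected future one.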
One limitation of CMIP-based scenarios is that they do not feature dramatic events such as the sudden release of methane from permafrost or a shutdown of the thermohaline circulation. While these events certainly capture the imagination, their likelihood appears too low at the moment to justify their inclusion in VIA studies (Lenton et al. 2008). On the other hand, many decision makers are understandably curious or concerned about these low-probability high-impact events or geoengineering initiatives and we are considering how to position these extraordinary scenarios within the CMIP model ensemble.
At the beginning of Ouranos, there was no Scenario group—modelers in the Simulation group would run the regional model, then scientists in an Analysis group would postprocess simulations to produce custom scenarios for VIA projects. The Analysis group was initially created from participating academics interested in statistical analysis of occurrences and recurrences of weather events. Their work was supposed to bridge the gap between climate modelers and VIA specialists, but this initial setup turned out to be unsatisfactory for a number of reasons.
While the collaboration between VIA and Climate Science might have looked good on paper, teams ended up working in parallel with few fruitful interactions. Differences in language and scientific backgrounds between the climate modelers, academics, and VIA professionals limited exchanges and communications. VIA specialists were left to themselves to try to put into practice climate simulations that modelers thought too immature to be used in real-life applications. These VIA studies were carried out nonetheless and then criticized by the modelers—a recurrent pattern that widened the gap between the two groups. Bound by the requirements of academic careers, members of the Analysis group concentrated more on methodological improvements than direct end-user support—a pattern described by Averyt (2010). The situation changed with the creation of a Scenario group dedicated to serving VIA projects. Acting as an intermediary between modelers and users, the Scenario staff focused on meeting climate information demands stemming from VIA projects.
The initially small Scenario group grew to become a pivotal component of Ouranos. It has widened the user base by creating products adapted to individual needs and constraints and thus closer to decision-making requirements. The proximity to users also has the advantage of helping us identify the needs for climate information and anticipate requests; but this is something that we only realized after a few missteps.
Indeed, one mistake made early on was not properly guiding users to clearly identify their climate information needs. For example, in a joint project with civil engineers, we provided a set of climate change deltas for an extensive set of climatic variables. Because these variables were previously identified by the engineers working on the project, we assumed they would know how to incorporate the results in their decision-making framework. This assumption turned out to be flawed and the climate scenarios were barely considered. Our mistake was to miss the fact that most engineers rarely compute the climate variables used in their day-to-day work from raw climate values, relying instead on official values produced in specific formats by governmental organizations. For climate scenarios to be incorporated in practice, they must be ready to use, precisely tailored to be drop-in replacements for the usual data entering the decision-making process. This additional level of user customization is now a central pillar of the development of climate scenarios at Ouranos.
Another early oversight was to fail to anticipate user requests. Some users needed climate products that required methodologies we had no prior experience with. The "learn as we go" approach might be appropriate for academic work but is riskier in projects supporting decision making. While after-the-fact checking confirmed our methodological choices, these experiences stress the importance of giving staff time to experiment with new methods to widen the experience pool and to test the robustness of the prospective methodologies. To this end, Ouranos employees may spend up to 20% of their time on research and exploration of new ideas. In practice, Ouranos being a service-oriented organization, end-user requests and projects must take precedence over exploration. Still, some of these projects evolve into scientific publications to which Ouranos employees are encouraged to contribute. A balance between exploration and services is also maintained by hiring personnel with diverse interests and training.
Finally, a challenge that still confronts us is to avoid passing on to users our methodological burden. Scenario construction requires a number of hypotheses, and the effect of each one on the final results could in principle be evaluated. In other words, we can provide users different scenarios built with different methods and let them check whether or not it makes a difference to their analysis. However, this approach imposes additional and unexpected work on end users, who are often unable to assess the possible benefits. The challenge is to understand the end-user problem well enough to transfer only the methodological choices that have the potential to significantly impact results. Doing so requires a multidisciplinary team with enough experience in a wide range of disciplines to accurately gauge the relevance of each one of those methodological choices.
In 2001, the IPCC chapter on scenario development could be found in the first working group report The Scientific Basis. In 2007, this topic was addressed in Working Group II "Impacts, Adaptation and Vulnerability," with more emphasis on stakeholder interactions and less on how scenarios are actually generated (Carter et al. 2007). Of course, scenario construction belongs to neither one nor the other, but to both subjects—the essential gateway from climate science to VIA. Getting this step right is critical to ensure that climate information is tailored to user needs and that it is used adequately and constructively.
After 10 years working at bringing climate science to end users, we now sense that climate scenarios and climate services are reaching a new level of maturity. Countries and nongovernmental organizations are now institutionalizing climate services (WMO 2009), intent on reaping the benefits of climate-smart development. Our experience suggests that offering climate services is not as simple as putting climate scientists and impact modelers in the same building. Scenario specialists play a key role in bridging disciplines and accurately translating up-to-date scientific information into custom climate products. At Ouranos, we feel privileged to be in such a strategic position at such an interesting time.
The material and ideas presented in this paper represent the product of work of numerous past and present members of the Ouranos staff. The authors would like to recognize Georges-Étienne Desrochers and Line Bourdages for their contribution to the Scenario group. Special thanks also go to René Roy, Alain Bourque, Ramón de Elía, Anne Frigon, Caroline Larrivée, and Daniel Caya, as well as all other interviewees for their insights and contribution to this paper.
This work was supported by the Fonds de Recherche en Sciences du Climat d'Ouranos (FRSCO) program.
1Hydro-Québec is Québec's publicly owned electricity utility.
2Industrial members may sponsor specific projects targeting their needs besides their annual contribution to the base funding of the consortium.
3The delta method consists of adding (or multiplying) a value to a series of observations to simulate the effect of climate change at local scales. These values are computed from climate model outputs using long time averages to filter out high-frequency natural climate variability.
4RCMs are usually developed with a specific GCM pilot, and configuring the model for a new pilot requires considerable time and effort.
5The bias, or delta, is usually a ratio for precipitation and an offset for temperatures.
6One question arising from postprocessing methods using quantile mapping is whether rank-based corrections to time series could affect the ordering of the deltas used to select simulations. If it did, it would make more sense to apply postprocessing methods before simulations are selected.
7Although the idea of giving more weights to models that perform better is attractive, it is a challenge to put in practice and the procedure runs a considerable risk of degrading results instead of improving them (Weigel et al. 2010).