The climate science research community produces a wealth of complex scientific outputs (e.g., climate models, databases, projections, peer-reviewed articles, and reports) regarding future climate conditions (Overpeck et al. 2011). Both the abundance and complexity of the available scientific information create several challenges for potential users, including scientists as well as professional practitioners such as climate adaptation planners, resource managers, and policy makers. The “practitioner’s dilemma” describes the overwhelming state practitioners face when choosing, assessing, and using climate information and data (Barsugli et al. 2013). The dilemma stems from the time needed to assess the fit of, and digest, the large amount of information available (de Elía 2014; Cash et al. 2003; Lemos et al. 2012). Moreover, a substantial portion of this information is not readily available, which can disadvantage those without an insider connection to data and resources (Cvitanovic et al. 2015). Finally, the information that is available is often not in usable formats (Lemos and Rood 2010; Peters et al. 2018).
The foundation for information about the future of Earth’s climate comes from climate models. Dozens of models and model experiments exist (see Eyring et al. 2016), so model selection, or at minimum understanding which models were used in an existing climate product, is a primary, nontrivial task required for informed use. The selection process requires expert knowledge of the pros and cons of different models, and practitioner needs must be considered so that the climate information fits specific problems. In the end, the climate expert will always need to convey caveats and disclaimers to the practitioner regarding the limitations and weaknesses of the models.
Global climate models provide numerical information, such as air temperature, precipitation, and other environmental conditions, based on computational simulations of physical, chemical, and biological laws that describe the climate system. Because the information is numerical, we find that practitioners often expect it to be quantitatively accurate, in much the same way as a deterministic weather forecast. However, the uncertainties are large, and as information is tailored to specific times and locations the uncertainty increases (Hawkins and Sutton 2009). We prefer to frame these simulations as numerical guidance, which is appropriately used with other sources of information to provide plausible and usable information about the future climate. The use of the term “guidance” is consistent with the role that models play in the development of forecasts in numerical weather prediction.1 Other sources of information include observations, theory, and a portfolio of models and methods tailored to manage uncertainties.
Our consumer-report-style guidance aims to better fit simulations designed for scientific investigation to the problems of practitioners. We use the term “consumer” to describe the practitioner who needs climate information; climate models and resources derived from them are “products.” We note that previous efforts have considered the design of audience-focused products for quick, effective communication of uncertainty (e.g., Kloprogge et al. 2007). Other examples include “nutrition labels” (see Bates 2011) developed for NASA data (P. Fox 2018, personal communication) and designed for climate simulations (J. Barsugli 2018, personal communication).
As foundational information, global climate models sit at the start of a supply chain for tailored products, especially downscaled climate models. The same type of consumer-report-style evaluation can be used for these products. Here, we focus on global climate models because they are at the root of the initial question posed by many practitioners, and because their uncertainty description is more straightforward than that of tailored products. Further, it is reasonable for the practitioner to filter their choice of a tailored product based on the global climate models deemed most suitable in the consumer report. Hence, our approach is posed as a starting point for investigating the merit of this type of reporting as modeling and practitioners’ needs continually evolve. As they do, consumer-report-style guidance is also likely to incorporate new and emergent tools and approaches [see, e.g., Jagannathan et al.’s (2020) research on improving user-defined metrics for modeling].
Bridging and brokering climate knowledge for consumers
For the past few decades, knowledge brokers and boundary organizations have emerged to bridge the gap between producers and consumers of climate information (Lemos et al. 2014; Briley et al. 2015). Embedded in both the scientific and practitioner communities, brokers and boundary organizations synthesize knowledge to provide tools and guidance for specific audiences based on their needs. The Great Lakes Integrated Sciences and Assessments (GLISA) is a boundary organization in the climate science information arena and a U.S. National Oceanic and Atmospheric Administration (NOAA) Regional Integrated Sciences and Assessments (RISA) project. GLISA serves the Great Lakes region, coproducing, bridging, and brokering climate information for regional practitioners. Though most practitioners ultimately settle on expert guidance, an important subset desires model output. As such, GLISA is often asked to provide guidance on which models are “best” or most suitable. Our response depends on the questions the consumer is asking, so that model choices and data analysis are defensible by the practitioner for their application.
Increasingly, practitioners express their desire for guidance about, and curation of, the ever-growing number of information options and decision-support tools (Moss et al. 2019). To offer guidance and facilitate knowledge transfer, we propose an approach utilizing a climate model consumer report format, developed in partnership with GLISA’s Practitioner Working Group, that aims to build users’ capacity to become better consumers through guided model selection and to explain the rationale behind the data and knowledge selected. The reports are designed to be novel communication guides that help users select the climate models or projections that fit their needs. In the public sphere, consumer reports are typically viewed as reliable and straightforward documents that enable customers to make informed decisions when purchasing a given product or service (e.g., Faber et al. 2009). Yet, in the world of climate information, model outputs are often neither appropriate nor straightforward for users. Hence, we conceptualize our reports as a heuristic to help communicate complexity rather than strictly as a guide. We share with traditional consumer reports the goal of building capacity toward better-informed decisions, while acknowledging that both the trade-offs and stakes of those decisions are significantly more complex. Below we outline the process we used to develop consumer reports for climate models and projections and share lessons learned. In doing so, we aim to inform the work of other applied climate groups and knowledge brokers trying to better communicate model and projection information to consumers.
Describing existing model and projection information
Information from climate models typically comes in two forms: raw data from model simulations and synthesis materials, such as maps, tables, graphs, and reports. To consume raw climate model data appropriately, users must acquire, evaluate, analyze, and then synthesize the data, which often requires a high level of technical expertise that can be beyond many users’ capacity or resources. Though data acquisition is increasingly facilitated by open-access archives, users must still know what to do with the data and have the skills to do it (e.g., Hines et al. 1987).
To improve the relevance and usability of model data, organizations such as the IPCC and the U.S. National Climate Assessment have created synthesis reports designed to communicate climate model information to nonscientific audiences. However, information about model quality is often disconnected from the main content. For example, NOAA’s State Climate Summaries (Kunkel et al. 2017) provide state-level climate projections for temperature and precipitation, but there is no discussion of whether the underlying models and data processing techniques are suitable for individual states. The supplementary materials of the report indicate that the data are based on those used in the Third National Climate Assessment, but the discussion of the suitability of the models is limited (Abramowitz et al. 2019). The generalized nature of these reports typically does not address how well the underlying models simulate important regional and local climate processes. In many cases, current practitioners are early adopters of the simulation data, and their feedback has yet to be incorporated into model development and the tailoring of simulation data.
Engaging practitioners
As a climate information broker, GLISA is well positioned to design and test a consumer report approach to communicating climate model and projection information. We engaged our Practitioner Working Group (PWG) to advise on and coproduce prototype climate model consumer reports. The PWG includes climate information consumers from a wide range of backgrounds and expertise, including agricultural outreach, Tribal representatives, natural resource managers, university partners, and scientists. Our approach to engaging the PWG included one-on-one conversations and a short survey to determine the types of models and information each participant uses, whether they make choices about the models used in their work, and the types of metrics and other information they would like to know about climate models and projections. Then, after GLISA drafted the initial consumer-report products, multiple conference meetings were held where participants could provide feedback and input for future iterations of the products.
Reviewing existing consumer reports
To identify and replicate best practices in consumer reporting, we reviewed over 25 published consumer reports for a wide range of products and recorded commonly found presentation techniques. Most consumer reports use charts and tables to organize complex information so readers can quickly compare products based on a given set of criteria. Another common practice is ranking products (for either a particular criterion or overall performance) with traffic-light color schemes, with green as the highest ranking and red as the lowest. We also identified narrative styles of consumer reports, like buyer’s guides, that help consumers know what to look for in a product.
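The traffic-light ranking described above amounts to a simple mapping from a performance assessment to a color. The sketch below illustrates the idea; the 0-1 scores, thresholds, and product names are invented for illustration and do not come from any of the reviewed reports.

```python
# Illustrative sketch of the traffic-light ranking convention found
# in many consumer reports: green for the highest ranking, red for
# the lowest. Scores and thresholds here are hypothetical.

def traffic_light(score, thresholds=(0.8, 0.5)):
    """Map a 0-1 performance score to a traffic-light color."""
    high, low = thresholds
    if score >= high:
        return "green"   # highest ranking
    if score >= low:
        return "yellow"  # intermediate ranking
    return "red"         # lowest ranking

# Rank hypothetical products from best to worst and color-code each.
products = {"Product A": 0.92, "Product B": 0.61, "Product C": 0.34}
for name, score in sorted(products.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {traffic_light(score)}")
```

In a real report the underlying scores would come from expert evaluation against stated criteria rather than a single numeric scale.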
Engaging model developers
GLISA’s work with climate data and model information is supported by close relationships with model developers, who help us better understand important regional climate dynamics and how those processes are represented in climate models. Through group and face-to-face conversations, we collected information about climate models, such as details of model construction and underlying assumptions, that is not always available elsewhere. These relationships have been essential to helping us understand model strengths and weaknesses and communicate that information to practitioners, which in turn builds a sense of trust and credibility around the model data we use.
A suite of climate model consumer reports
Through the practitioner engagement process, we identified two different ways climate models are primarily used and developed multiple types of climate model consumer reports. Some consumers are interested in models or synthesis reports from only the highest-quality models, based on their own individual set of criteria. Others are more interested in choosing climate information based on the range of future climates that different models project—sometimes regardless of the attributes of the underlying model(s). As knowledge brokers, we steer consumers toward the more suitable or defensible choices.
Our role as a boundary organization includes both interpreting climate science knowledge and representing the integrity of science-based knowledge through expert opinion. However, there are times when the user’s requirements supersede our expert opinion, and it is important to capture our concerns in the uncertainty description. This creates the need for one set of reports that evaluate the suitability and quality of climate models and another set of reports that describe the content of models’ projections. Additionally, each practitioner brings varying levels of technical expertise and knowledge to the data and information selection process, so we created reports with multiple levels of technical information to serve novice to expert consumers.
To help practitioners decide which models to use in their work, we created multiple reports based on the buyer’s-guide format. It is important to note that our examples focus on model selection for a specific geography, but any of these products could easily be tailored to guide consumers on matters of importance to a particular sector, or to both a geography and a sector. The first product is a model checklist, which identifies the criteria that models should meet to provide credible and salient information for our region (Fig. 1 provides an example of the condensed checklist—details pertaining to each criterion are also available in our buyer’s-guide format, but not shown here).
To ensure relevance and usability, knowledge brokers should tailor the model checklist to their geographic region and problem of interest. For instance, the criteria in Fig. 1 are based on GLISA model research and input from trusted regional modelers (e.g., Briley et al. 2017; Notaro et al. 2015). In the Great Lakes region, the most detailed and sophisticated models include an interactive lake model that simulates important lake–atmosphere processes. If regional practitioners select models based on the model checklist, then they are likely to include the most comprehensive projections in their work. This choice might be more credible; however, this does not always equate with more robust simulation quality. The uncertainty range of the climate models is large, and measures of statistical quality do not always relate to the representation of local processes required for credibility. In this case our reports help to frame the uncertainty description.
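The checklist logic can be thought of as screening a pool of models against region-specific criteria. The sketch below is a minimal illustration of that idea; the model names and criterion flags are hypothetical, whereas a real checklist would be populated from expert evaluation (e.g., whether a model includes an interactive lake model, as in the Great Lakes case).

```python
# Hypothetical sketch of a model checklist: retain only the models
# that satisfy every region-specific criterion. Model names and
# criterion values are invented for illustration.

models = [
    {"name": "Model A", "interactive_lake_model": True,  "resolves_region": True},
    {"name": "Model B", "interactive_lake_model": False, "resolves_region": True},
    {"name": "Model C", "interactive_lake_model": True,  "resolves_region": False},
]

# Criteria a model must meet to provide credible regional information
checklist = ["interactive_lake_model", "resolves_region"]

# A model passes only if it meets every criterion on the checklist
suitable = [m["name"] for m in models if all(m[c] for c in checklist)]
print(suitable)
```

The same structure extends naturally to sector-specific criteria: the checklist is just a different set of keys chosen with the target audience.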
The second product is a cross-model comparison. In our example we compare and contrast over 30 models using two key criteria that are intuitively essential for credible representation of our geographic region—the presence and dynamics of simulated large lakes (Fig. 2). Typically, a cross-model comparison is a synthesis of a large research effort, so information about where consumers can learn more should be included on the product to increase its transparency. The red and green icons in the cross-model comparison indicate whether each model simulates all five Great Lakes. The text above each group of models evaluates whether the models simulate lakes as dynamic and interactive bodies of water (either as lakes or oceans) or instead omit lakes.
We are cautious in using color coding for subjective model evaluations, because consumers’ individual value judgements may be biased. For example, some users may assess a model by how well it captures regional climate dynamics, whereas others may be studying specific processes, like lake effects or land–atmosphere feedbacks, and require models that perform especially well in those areas. We suggest using color coding only when the evaluation criteria are objective and clearly stated, and the audience is well known and shares similar values.
Together the model checklist and the cross-model comparison provide expert guidance from climate scientists about what elements models should include and the evaluation of models across important criteria. Our cross-model comparison is being used by academics and practitioners to inform model selection. For example, one researcher is comparing snow projections from models that treat the Great Lakes differently to attempt to understand past research results and potentially better explain model behavior. Another practitioner is using our cross-model comparison to communicate to their stakeholders how the models represent the lakes and what this may mean for the quality of the projections.
The Climate Projection Visualization Table displays future projection values across models (Fig. 3). This table compares multiple projections to highlight similarities and differences between models, characterize the amount of uncertainty within the set of models, and offer quantified projection information. The utility of a Climate Projection Visualization Table like ours is twofold. First, it enables the climate information broker to more easily synthesize multivariate projection information from multiple models and frame each model as a unique scenario. Second, the broker can use all or part of the table to communicate to consumers a specific set of scenarios, or models, with easy-to-interpret visuals. To create the Climate Projection Visualization Table, the knowledge broker must preselect models that are suitable for a particular audience of users and calculate the projected amount of climate change for a specific geography over a given future time period. Applications include describing the range of variability among projections or informing the selection of specific projections for work such as scenario planning. The projections in Fig. 3 are from a six-model ensemble that was developed specifically for applications in the Great Lakes region.
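The core calculation behind such a table—projected change per model for a chosen geography and period, plus the across-model range—can be sketched as follows. All temperature values and model names here are invented for illustration; in practice the period means would be computed from each model's regionally averaged output.

```python
# Hypothetical sketch of the projection-table calculation: for each
# model, the projected change is the future-period mean minus the
# baseline-period mean for the region of interest; the across-model
# spread characterizes uncertainty within the set. Values invented.

baseline = {"Model A": 8.1, "Model B": 7.6, "Model C": 8.4}   # e.g., 1981-2010 mean (deg C)
future   = {"Model A": 10.3, "Model B": 9.1, "Model C": 11.2}  # e.g., 2041-2070 mean (deg C)

# Projected change per model, rounded for presentation in a table
changes = {m: round(future[m] - baseline[m], 1) for m in baseline}

# Across-model range: a simple characterization of ensemble spread
low, high = min(changes.values()), max(changes.values())
print(changes)
print(f"Across-model warming range: +{low} to +{high} deg C")
```

Treating each model's change as a distinct scenario, rather than averaging them away, preserves the spread that scenario-planning applications need.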
A major challenge in developing consumer reports for climate models is tailoring information to the geographic locations and scales that matter to users. Practitioners typically require climate information at the regional scale or smaller, so global evaluation and analysis of climate models in a consumer report may not prove relevant to consumers. The examples thus far have included regional evaluation and analysis. Ultimately, there is no one-size-fits-all approach to developing climate model consumer reports. Rather, each report should emphasize matters of importance to its primary audience—whether a particular geographic region (e.g., the Great Lakes), or a specific sector (e.g., water resources), or both. Developing climate model consumer reports in partnership with practitioners through an iterative feedback process helps ensure that the reports provide the right information, in the right format.
Conclusions
Information about climate models and their future projections is complex and rarely presented in a way that enables consumers to make well-informed choices about which products to use (i.e., climate models, projections, or reports based on either of these). Unless consumers are relying on information received from a trusted broker, and sometimes even despite how trustworthy a source is, they face knowledge gaps concerning the type, quality, applicability, and limitations of information products. In almost every other market, ranging from automobiles to healthcare, consumer reports have been used as a mechanism to simplify, summarize, and guide product selection. In the field of climate adaptation, we believe that climate model consumer reports could be used to make climate information more relevant to and usable by practitioners. Our approach has fostered productive interaction between practitioners and climate-knowledge brokers, and we pose it as a starting point to be evolved as experience and research increase the usability of climate information.
Acknowledgments
This work was supported by NOAA Award NA15OAR4310148. GLISA would also like to thank our Scientific Advisory Committee and Practitioner Working Group for their contributions to shaping each climate model consumer report to meet the needs of practitioners. These individuals are either practitioners using climate information in their own work or they serve practitioner audiences in a variety of sectors. A list of members is available online at http://glisa.umich.edu/projects/great-lakes-ensemble. We thank four reviewers for their comments, which improved the manuscript.
References
Abramowitz, G., and Coauthors, 2019: Model dependence in multi-model climate ensembles: Weighting, sub-selection and out-of-sample testing. Earth Syst. Dyn., 10, 91–105, https://doi.org/10.5194/esd-10-91-2019.
Barsugli, J. J., and Coauthors, 2013: The practitioner’s dilemma: How to assess the credibility of downscaled climate projections. Eos, Trans. Amer. Geophys. Union, 94, 424–425, https://doi.org/10.1002/2013EO460005.
Bates, J. J., 2011: Assessing climate data record transparency and maturity. WCRP Observations and Assimilation Panel, 14 pp., www.wcrp-climate.org/documents/WOAP_frascati/bates.pdf.
Briley, L. J., D. Brown, and S. E. Kalafatis, 2015: Overcoming barriers during the co-production of climate information for decision-making. Climate Risk Manage., 9, 41–49, https://doi.org/10.1016/j.crm.2015.04.004.
Briley, L. J., W. S. Ashley, R. B. Rood, and A. Krmenec, 2017: The role of meteorological processes in the description of uncertainty for climate change decision-making. Theor. Appl. Climatol., 127, 643–654, https://doi.org/10.1007/s00704-015-1652-2.
Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, J. Jäger, and R. B. Mitchell, 2003: Knowledge systems for sustainable development. Proc. Natl. Acad. Sci. USA, 100, 8086–8091, https://doi.org/10.1073/pnas.1231332100.
Cvitanovic, C., A. J. Hobday, L. van Kerkhoff, S. K. Wilson, K. Dobbs, and N. A. Marshall, 2015: Improving knowledge exchange among scientists and decision-makers to facilitate the adaptive governance of marine resources: A review of knowledge and research needs. Ocean Coastal Manage., 112, 25–35, https://doi.org/10.1016/j.ocecoaman.2015.05.002.
de Elía, R., 2014: Specificities of climate modeling research and the challenges in communicating to users. Bull. Amer. Meteor. Soc., 95, 1003–1010, https://doi.org/10.1175/BAMS-D-13-00004.1.
Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016.
Faber, M., M. Bosch, H. Wollersheim, S. Leatherman, and R. Grol, 2009: Public reporting in health care: How do consumers use quality-of-care information? Med. Care, 47, 1–8, https://doi.org/10.1097/MLR.0b013e3181808bb5.
Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 1095–1108, https://doi.org/10.1175/2009BAMS2607.1.
Hines, H., R. Hungerford, and A. N. Tomera, 1987: Analysis and synthesis of research on responsible environmental behavior: A meta-analysis. J. Environ. Educ., 18, 1–8, https://doi.org/10.1080/00958964.1987.9943482.
Jagannathan, K., A. D. Jones, and I. Ray, 2020: The making of a metric: Coproducing decision-relevant climate science for water management. Bull. Amer. Meteor. Soc., https://doi.org/10.1175/BAMS-D-19-0296.1, in press.
Kloprogge, P., J. P. Van der Sluijs, and J. A. Wardekker, 2007: Uncertainty communication: Issues and good practice, version 2.0. Copernicus Institute Rep., 64 pp., www.nusap.net/downloads/reports/uncertainty_communication.pdf.
Kunkel, K., R. Frankson, J. Runkle, S. Champion, L. Stevens, D. Easterling, and B. Stewart, Eds., 2017: State climate summaries for the United States. North Carolina Institute for Climate Studies, https://statesummaries.ncics.org/.
Lemos, M. C., and R. B. Rood, 2010: Climate projections and their impact on policy and practice. Wiley Interdiscip. Rev.: Climate Change, 1, 670–682, https://doi.org/10.1002/WCC.71.
Lemos, M. C., C. J. Kirchhoff, and V. Ramprasad, 2012: Narrowing the climate information usability gap. Nat. Climate Change, 2, 789–794, https://doi.org/10.1038/nclimate1614.
Lemos, M. C., C. J. Kirchhoff, S. E. Kalafatis, D. Scavia, and R. B. Rood, 2014: Moving climate information off the shelf: Boundary chains and the role of RISAs as adaptive organizations. Wea. Climate Soc., 6, 273–285, https://doi.org/10.1175/WCAS-D-13-00044.1.
Moss, R. H., and Coauthors, 2019: Evaluating knowledge to support climate action: A framework for sustained assessment. Report of an independent advisory committee on applied climate assessment. Wea. Climate Soc., 11, 465–487, https://doi.org/10.1175/WCAS-D-18-0134.1.
Notaro, M., V. Bennington, and S. Vavrus, 2015: Dynamically downscaled projections of lake-effect snow in the Great Lakes basin. J. Climate, 28, 1661–1684, https://doi.org/10.1175/JCLI-D-14-00467.1.
Overpeck, J. T., G. A. Meehl, S. Bony, and D. R. Easterling, 2011: Climate data challenges in the 21st century. Science, 331, 700–702, https://doi.org/10.1126/science.1197869.
Peters, C. B., M. W. Schwartz, and M. N. Lubell, 2018: Identifying climate risk perceptions, information needs, and barriers to information exchange among public land managers. Sci. Total Environ., 616–617, 245–254, https://doi.org/10.1016/j.scitotenv.2017.11.015.