1. Introduction
Climate assessments evaluate current scientific understanding across multiple disciplines and are designed to inform decision-making at local, national, and international levels. To contextualize assessment conclusions and identify limitations and opportunities for advancement, climate assessments often identify research gaps. There are many ways of knowing about climate change and many ways to frame gaps in that knowledge (Hulme 2018). Broadly, a research gap can be defined as a topic or area where missing or inadequate information limits a conclusion (Robinson et al. 2013). Research gaps can inform research outlooks and the initial scoping of climate assessments (Roesch-McNally et al. 2020). Through collaboration with practitioners and community scientists, the assessment community has been able to communicate uncertainty and identify gaps to inform climate risk management and adaptation decisions (Moss et al. 2019; Sutton 2019; Weaver et al. 2017). Further, research gaps can provide insight into how well the scientific community uses diverse knowledge, including physical and social sciences and place-based knowledge, such as indigenous knowledge (García-del-Amo et al. 2020). Adding research gaps into a central repository and categorizing the content increases the accessibility of the information. Functionality that allows researchers to search, filter, and group research gaps can inform prioritization of future research areas in the academic, private, and public sectors. Examining research gaps can also set a foundation for future assessment activities by explicitly identifying what we do and do not know through scientific consensus building.
The U.S. Global Change Research Program (USGCRP) has a mandate to assist the nation and the world to understand, assess, predict, and respond to global change (Global Change Research Act 1990). USGCRP coordinates and integrates federal expertise from 14 agencies across a range of subject matter and data products including health, adaptation, coasts, carbon cycle, water, social sciences, international issues, observations, modeling, and indicators.
Over three decades, USGCRP has developed an assessment process to evaluate scientific knowledge for decision-makers across U.S. regions and sectors (Buizer et al. 2016) and provide transparent, detailed metadata for assessment findings in the Global Change Information System (Elamparuthy and Sherman 2020). Public engagement is critical for ensuring that USGCRP assessments represent the priorities and needs of decision-makers across the country (Roesch-McNally et al. 2020). The assessment process includes public engagement workshops and comment periods to strengthen the science, gather important feedback, and ensure clear communication of report findings (Avery et al. 2018; Weaver et al. 2017). Further, to provide context for the use of assessment findings in decisions, USGCRP assessments include calibrated language determined by author-team consensus to describe the confidence in, and likelihood of, report findings. This terminology is rooted in the approach taken by the Intergovernmental Panel on Climate Change (IPCC) assessment reports to communicate risk, describe the quantity and quality of the evidence base, and discuss uncertainty. Specifically, confidence terms convey “a qualitative measure of the validity of a finding,” while likelihood terms provide “a quantified measure of confidence in a finding expressed probabilistically” based on a number of factors related to the evidence base (Arias et al. 2021).
To date, research gaps have not been consistently described across USGCRP assessments. Critically, only some USGCRP assessments included chapter-specific sections that elaborated on the underlying evidence base and key uncertainties. Depending on the assessment, these sections were referred to as traceable accounts, supporting evidence, or research needs. In practice, assessment authors have included phrasing such as “research gaps,” “research needs,” or “knowledge gaps.” However, in most instances, authors described research gaps without the label itself but by describing missing or limited information such as sparse observations, model structure and processes, choices for societal development, and human–climate system feedbacks.
To make research gaps more accessible, we present a methodology for evaluating and defining the scope of research gaps in assessment reports. We address two aims: 1) identify and categorize research gaps in a consistent way for a searchable database and 2) demonstrate the use of the research-gap database to support future research planning and assessment. Due to the qualitative nature of research-gap statements, the methodology presented here focuses on the use of confidence terminology when searching assessment text with calibrated language. The discussion acknowledges a broader conversation surrounding definitions of uncertainty from the literature and practice and how that shapes the definition and perception of research gaps for science planning. Recommendations are offered for refinements to quantifying and tracking research gaps in U.S. climate assessments.
2. Materials and methods
The systematic approach used to analyze research gaps from USGCRP climate assessments is outlined in two main aims with a total of four tasks (Fig. 1). The first aim was to identify and categorize research gaps from six USGCRP assessments in a transparent and traceable way to build a searchable database. The second aim was to demonstrate the use of the research-gap database to inform future research planning and assessment activities by the broader science community. These goals were designed to build on one another and be iterated upon as the USGCRP assessment process continues to evolve.
Iterative pathway for identifying, categorizing, evaluating, and analyzing research gaps to inform science planning and assessment activities. The pathway contains two aims: to build and to use a research-gap database. The four iterative tasks are to 1) identify research gaps, 2) organize a database, 3) demonstrate use, and 4) build on insight from database searches.
Citation: Weather, Climate, and Society 15, 3; 10.1175/WCAS-D-22-0041.1
a. Analyzing USGCRP climate assessments (2014–18)
The scope of this analysis was constrained to an “assessment of assessments” for USGCRP climate reports released between 2014 and 2018. The reports included were the Third National Climate Assessment (NCA3; Melillo et al. 2014) and the Fourth National Climate Assessment (NCA4; USGCRP 2018a), as well as four special reports: Climate Change, Global Food Security, and the U.S. Food System (FS; Brown et al. 2015), Impacts of Climate Change on Human Health in the United States (CHA; Crimmins et al. 2016), Climate Science Special Report: Fourth National Climate Assessment (CSSR; USGCRP 2017), and the Second State of the Carbon Cycle Report (SOCCR2; USGCRP 2018b). All reports varied in structure and length. Critically, only some included chapter-specific sections that elaborated on the underlying evidence base and key uncertainties. Depending on the assessment, these sections were referred to as traceable accounts, supporting evidence, or research needs.
b. Identifying and categorizing research gaps
Entries to the research-gap database take the form of short phrases or detailed statements and can describe research needs, information limitations, and uncertainty in a variety of ways. The methodology to identify research gaps presented here was performed through expert analysis and interpretation with the intent of making all decisions explicit and reducing the loss of context for research-gap information. At the time of this analysis, there was no single definition of “research gaps” used in USGCRP assessment products. Further, use of confidence terminology was not standardized across the six reports. The following set of criteria was selected to identify research-gap statements in the six reports analyzed:
- The terms “research gaps,” “research needs,” or “knowledge gaps.”
- Descriptions of missing or limited information.
- Characterization of uncertainty with calibrated language (i.e., low confidence).
Text searches were performed separately for each assessment using the criteria listed above. For reports with chapter-specific sections on research gaps (i.e., traceable accounts, supporting evidence, research needs), those sections were read in full and text search was applied to the whole document. For assessments that did not have analogous chapter-specific sections available for a structured search, the full document was read and a text search was applied throughout.
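The three search criteria above can be sketched as a simple text scan. The patterns below are an illustrative approximation of the manual search, not the authors' actual tooling; in practice, every match still required expert reading in context before entry into the database.

```python
import re

# Illustrative patterns for the three criteria (hypothetical, not the authors' tooling).
GAP_LABELS = re.compile(r"\b(research gaps?|research needs?|knowledge gaps?)\b", re.I)
CALIBRATED = re.compile(r"\b(very low|low|medium|high|very high) confidence\b", re.I)
LIMITED_INFO = re.compile(r"\b(missing|limited|sparse|lack of|insufficient)\b", re.I)

def flag_candidate_sentences(text):
    """Return (criterion, sentence) pairs for sentences matching any of the
    three search criteria, for subsequent expert review."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if GAP_LABELS.search(sentence):
            hits.append(("explicit label", sentence))
        elif CALIBRATED.search(sentence):
            hits.append(("calibrated language", sentence))
        elif LIMITED_INFO.search(sentence):
            hits.append(("limited information", sentence))
    return hits
```

Sentences flagged this way correspond to the candidates that, in the actual workflow, were read in full and copied into the spreadsheet along with supporting sentences.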
After a research-gap-related phrase was identified through the search criteria, the full statement and any supporting sentences were copied into a spreadsheet and grouped by report to form a preliminary database. Each entry recorded the statement’s original location within the report (chapter, key message, traceable accounts, etc.). Statements were also categorized by topics and themes to aid searchability and interpretation of the database (Fig. 2). Topic areas, such as climate models, data, and temperature, were not determined ahead of extraction; they were identified iteratively, drawing directly from the text of the research-gap statements as entries were added to the database. This produced a categorization scheme comprehensive of the topics contained in the extracted statements. Initially, over 400 topics were added to the database; after review for duplicate wording and synonyms, this number was reduced to 310 (see supplemental information). A separate topic list was generated for each of the six reports, though some topics appeared across all lists (see section 3b). In many cases, a single research-gap statement generated multiple topics and remained associated with each one; in this way, the categorization was cumulative. Topics were then grouped into overarching themes, and these groupings were recorded in a key list (Table S1 in the online supplemental material). Figure 2 demonstrates how entries were tagged in the database by category.
(a) Flowchart illustrating the steps for sorting entries into topics and grouping topics into overarching themes and (b) an example database entry from NCA4 and the resulting categorization. Database categorization was designed based on the content of each entry. Individual entries were sorted into topics with associated themes. Entries often included more than one topic and were categorized under multiple themes (see Table S1 for a full list of topics and themes).
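The extraction and cumulative tagging steps can be illustrated with a minimal sketch. The record fields and candidate topics below are hypothetical stand-ins for the actual spreadsheet columns and 310-topic list.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the spreadsheet columns described above;
# field names are illustrative, not the authors' actual schema.
@dataclass
class GapEntry:
    report: str      # e.g., "NCA4"
    location: str    # chapter, key message, traceable account, etc.
    statement: str   # research-gap text plus supporting sentences
    topics: list = field(default_factory=list)

def tag_entry(entry, candidate_topics):
    """Attach every candidate topic whose keyword appears in the statement.
    A single statement can generate multiple topics (cumulative tagging)."""
    text = entry.statement.lower()
    for topic in candidate_topics:
        if topic.lower() in text and topic not in entry.topics:
            entry.topics.append(topic)
    return entry
```

A single entry tagged with several topics remains associated with each of them, matching the cumulative categorization described above.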
To provide an online platform for the database, an Excel macro was written to explicitly list themes associated with the topic of each database entry instead of relying on the key list alone (see supplemental information). This collated spreadsheet was uploaded to the browser-based Airtable platform for an enhanced user interface and extended filtering by report, key message, text, topic, and theme (see supplemental information).
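The collation the macro performs, storing themes explicitly on each entry rather than resolving them through the key list at query time, might look like the following sketch; the key-list contents shown are an illustrative subset.

```python
# Toy subset of the topic-to-theme key list; the real key list is in Table S1.
KEY_LIST = {
    "climate models": "Modeling",
    "downscaling/scale": "Modeling",
    "vulnerability and social determinants of health": "Social Sciences",
}

def collate_themes(entries):
    """Expand each entry's topics into an explicit, de-duplicated theme list,
    analogous to the Excel macro's output column."""
    for entry in entries:
        entry["themes"] = sorted({KEY_LIST[t] for t in entry["topics"] if t in KEY_LIST})
    return entries
```

Storing themes on each row is what enables the flat spreadsheet to support theme-level filtering once uploaded to a platform such as Airtable.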
3. Results
a. Topics and themes across six USGCRP assessments
The methodology presented above resulted in a research-gap database containing 1158 entries from the analysis of six USGCRP assessment products. Iterative categorization of the research-gap statements produced 310 topic areas organized into 22 overarching themes (see Table S1 for the full key list). Ten themes contain over 80% (247) of the total topic areas identified (Table 1). Several of these top-10 themes relate to social or health sciences or research processes, reflecting an expanded scope of assessment beyond the physical sciences (Table 1). These results provide insight into where USGCRP can effectively coordinate research or strategic thinking with federal agencies. For example, if water-related research is prioritized, there is the opportunity for more water-related research gaps to be identified. Filling one research gap may lead to new insights, as well as new gaps.
Top 10 database themes (order determined by number of topics). See Table S1 for a full list of themes and topics.
Of the hundreds of topics identified, only 18 overlapped across all six assessments (Table 2). These recurring topics included physical science variables (e.g., temperature, ice, precipitation), modeling (e.g., climate models, time scale, downscaling/scale, magnitude of impact/change, and impact/integrated assessment models), and research techniques (e.g., methodology, data, measurement, monitoring/observations, and detection/attribution). Two recurring topics were specific to human–climate interactions and human-driven climate change: land-use–land-cover change and greenhouse gas emissions/sources. The list also included impact-focused topics that relate to social variables, multiple stressors and complex systems, and extreme events. There are several considerations to keep in mind when interpreting what is, and is not, on the list of recurring topics: the different scope and structure of each assessment, different author teams and timelines, available literature base, and national-level focus. Another factor that may have constrained this list is the nonlinear evolution of research gaps. However, it is also possible that within these topics, the individual gap statements have become more specific over time as research progress has been made. For example, a research gap identified by an early assessment could have broadly stated “extreme event attribution,” whereas a gap in a subsequent assessment could have more narrowly defined “projection and attribution of tornadoes.” For this database methodology, these example gaps would be counted as individual entries. However, other interpretations could regard the second, more-specific gap as an iteration of “extreme event attribution.”
Topics found across USGCRP assessments.
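Recovering a recurring-topic list of this kind amounts to intersecting the six per-report topic sets. The sketch below uses toy topic lists rather than the actual 310-topic database.

```python
# Toy per-report topic sets (illustrative only); the real lists are in the
# supplemental spreadsheet.
topic_lists = {
    "NCA3":   {"temperature", "data", "ice"},
    "NCA4":   {"temperature", "data", "vulnerability"},
    "FS":     {"temperature", "data"},
    "CHA":    {"temperature", "data"},
    "CSSR":   {"temperature", "data", "downscaling/scale"},
    "SOCCR2": {"temperature", "data"},
}

# Topics appearing in every report's list (the analog of Table 2's 18 topics).
recurring = set.intersection(*topic_lists.values())
```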
b. Database investigation of research gaps related to vulnerability
This section describes a use case (Fig. 1; task 3) for the research-gap database that filters entries based on the theme of social sciences and the topic of vulnerability. The knowledge generated by addressing these particular gaps could inform efforts to manage climate-related risks affecting frontline communities. Though a majority of the research-gap database content relates to the natural sciences, social sciences is the fourth-largest thematic area (24% of the database) when accounting for topics and entries (31 topics, 281 entries; Table 1, Fig. 3a). Adding the topic “vulnerability and social determinants of health” as an additional filter resulted in 49 tagged entries (4% of the database; Fig. 3a), 35 of which originated from NCA3 and NCA4 (Fig. 3b).
Filtering the research-gaps database for entries related to vulnerability. (a) Starting on the left of the flow diagram, the total number of database entries per USGCRP assessment (see section 2a for acronyms). Results are narrowed by selecting themes related to social sciences and the topic related to vulnerability. (b) The number of entries from each assessment after using categories for social sciences and vulnerability to narrow the search.
In reading the full text of these filtered database entries, 16 research gaps related to vulnerability from NCA3 included the effectiveness of adaptation (e.g., interventions, scalability), human behavior and response (e.g., land use, adaptation funding, vulnerability reduction, risk-based decision tools), societal development (e.g., demographics, economics, policies), documentation of vulnerability (e.g., studies, inequities, projections), combination of stressors, and climate impacts (e.g., extremes, high-temperature scenarios, abrupt changes). The 19 vulnerability gaps in NCA4 contain some similarities to the NCA3 topics, including the scope and scale of adaptation actions that will impact future vulnerability; nonclimate factors that make it challenging to attribute injuries, illnesses, and death to climate; and lack of local vulnerability and resilience analyses for U.S. cities. However, NCA4 also included greater specificity for gaps related to the vulnerability of specific systems (e.g., energy, ecosystems, food) as well as specific communities (e.g., urban, rural, tribal). Several NCA4 research gaps point to impacts on urban quality of life, including tree-growing conditions, and the exacerbation of urban vulnerabilities depending on the frequency and intensity of extreme-weather events. Another NCA4 research gap specifies a range of factors related to vulnerability and exposure in rural areas, including death from cardiovascular diseases, population size and age, population migration, diet choices, health indicators, and temperature sensitivity.
Four NCA4 research gaps point to a shifting policy landscape where vulnerabilities may be exacerbated or alleviated in connection with the adjudication of federal reserved water rights and nonphysiological aspects of Indigenous health.
NCA4 research gaps related to vulnerability echo many of the research gaps from the special reports that preceded it. Applying the filters to CHA resulted in 11 research gaps that focused on community-level vulnerability, including population migration and demographics, effects of social/behavior characteristics on health impacts after extreme events, severity of risks to mental health and well-being of Indigenous populations, social determinants of health, public health interventions, adaptation options, limited availability or geographic scope of health data, impacts of rising atmospheric carbon dioxide levels on human and livestock nutrition, and modeling potential vulnerability in transportation infrastructure.
The two FS vulnerability gaps focus on industry data related to changing adaptive capacity and consumption, as well as potential vulnerabilities in transportation infrastructure. The one SOCCR2 vulnerability gap specifies the degree of societal vulnerability to climate change, systemic implications of action and policy in different locations, and the capacity and willingness to take action from institutions and individuals. A researcher, practitioner, or grant-funding agency conducting a similar search could use this understanding of vulnerability research gaps to help prioritize research strategy and funding.
Another way to home in on content of interest is to combine topics in a database search. For example, each database entry that falls under the vulnerability topic is also associated with myriad other topics. To illustrate this feature of the database, three separate searches were performed for topics related to vulnerability, vulnerability and resilience, and vulnerability and decision-making (Fig. 4). Five of the six assessments provided entries related to vulnerability; however, NCA3 and NCA4 provided the most entries across all three searches (Fig. 4). Comparing the initial search with the combination of vulnerability and resilience shows a narrowing of results from the five assessments (excluding CSSR, which had no initial results). The final search for vulnerability and decision-making results in the smallest number of entries, which are pulled from NCA3, NCA4, and SOCCR2, removing the entries from FS and CHA (Fig. 4).
Number of entries by assessment after three searches for different topic combinations: vulnerability, vulnerability and resilience, and vulnerability and decision-making.
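The combined-topic searches described above amount to an AND filter over each entry's topic tags, tallied by report. The sketch below uses toy entries rather than the actual database.

```python
from collections import Counter

def search(entries, required_topics):
    """Return entries tagged with every topic in required_topics (AND filter)."""
    required = set(required_topics)
    return [e for e in entries if required <= set(e["topics"])]

def hits_per_report(entries, required_topics):
    """Count filtered entries by report, as in Fig. 4."""
    return Counter(e["report"] for e in search(entries, required_topics))
```

Adding topics to `required_topics` can only narrow the result set, which matches the progressive narrowing seen across the three example searches.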
4. Discussion and conclusions
a. Benefits and challenges of a broad-net approach to categorization
This work represents the first systematic assessment of research gaps conducted for U.S. national climate assessments. The multiphase approach used expert analysis to define research gaps, locate them in assessment reports, extract them into a database, and categorize them by topics and themes. While this effort focused on national assessment products produced by the USGCRP, the approach could be applied to investigate and track research gaps in other subnational or international climate assessment reports. The process presented here can also contribute to scoping scientific projects, research coordination, and strategic planning across the climate science community.
This type of broad, in-depth assessment of research gaps has both benefits and limitations. Despite representing a broad swath of scientific understanding, U.S. climate assessments were historically developed by physical scientists, resulting in a bias against the social sciences. Recent USGCRP assessments, such as NCA4 and the Fifth National Climate Assessment (NCA5; currently under development), have enhanced efforts to incorporate social science research related to health, economics, and environmental justice. As assessments continue to expand into these areas, there could be an increase in database entries from diverse and interdisciplinary fields. One benefit of this approach is the comprehensive and consensus-driven process behind USGCRP climate assessments. In this analysis, research gaps were extracted from six USGCRP assessments written by teams of subject-matter experts who incorporated information sources spanning the human–climate interface (e.g., drivers, impacts, feedbacks, responses). Furthermore, USGCRP assessments adhere to a high level of scientific integrity, where content is refined through federal agency feedback, public comment periods, and independent peer review. Each assessment is produced under a timeline and with page constraints and thus cannot accommodate all available research. However, each of these documents incorporates a vast amount of information in one place through a transparent process, which makes them authoritative places to look for research gaps.
A second benefit of the approach presented in this manuscript is the expert-driven data-collection process. In this study, scientists with in-depth knowledge of USGCRP, its assessment processes, and broad knowledge of climate research facilitated a data collection, analysis, and tool-development process. While big-data approaches to mining trends in climate change literature are more agile and perhaps less labor intensive, the approach presented here is rigorous, comprehensive, and informed by context. Further, the database provides specific information about the location of the underlying text for each research gap so users can easily reference full statements and contextualize any research gap or its categorization into specific topic areas. Other researchers have used machine learning techniques to mine information about climate science literature more generally and in specific contexts (Berrang-Ford et al. 2021; Callaghan et al. 2020). This expert analysis and manual approach is an alternative method, and one that could potentially provide expertly identified constraints for machine learning algorithms to better contextualize results of big-data approaches (see section 4c).
A third benefit of the presented methodology is that it establishes an iterative process that includes review and refinement of how research gaps are defined and characterized in assessments, as well as categorized within the database. Assigning topics and themes by inspecting research gaps from individual reports provided substantial granularity for insight into the assessment process and findings. In doing so, the research-gap database extends the reach of assessment activities and can provide input to projects, proposals, and strategic planning within the climate science enterprise.
Challenges within this analysis approach include inconsistencies in the terminology used across the six USGCRP assessments (see section 2). Calibrated language is applied variably, and colloquial terms are used to capture uncertainty (similar to findings in Crimmins 2020). Standardizing uncertainty terminology could improve how scientific assessments describe research gaps and provide a foundation to better track research gaps over time. There has been some discourse on how uncertainty could be characterized to serve a variety of audiences and broaden the accessibility of scientific assessments (Corner et al. 2018; Shepherd et al. 2018). However, much of the scientific literature remains tethered to discipline-specific terminology. For example, climate science broadly defines uncertainty as a lack of knowledge (epistemic) or inherent variability or randomness (ontic; Walker et al. 2003). In climate-modeling communities, uncertainty has been associated with model structure, initial conditions, parameterization, and scenarios (Lehner et al. 2020; Hawkins and Sutton 2009). Literature on future trends and policy presents uncertainty as a spectrum between deterministic knowledge and ignorance, which includes probabilistic knowledge, scenarios, and “known unknowns” (van Dorsser et al. 2020; Walker et al. 2010). The label of “deep uncertainty” denotes plausible and possible futures that are characterized by high levels of uncertainty and a low amount of insight on future conditions (van Dorsser et al. 2018; Walker et al. 2013). Uncertainties in nature and human activities, as well as their interactions, can also be identified for the Earth system (e.g., human reflexive uncertainty; Patt and Dessai 2005). Given the conceptual diversity around uncertainty across disciplines, standardizing both uncertainty language and research-gap terminology in scientific assessment is a challenge. However, given the importance of understanding the state of knowledge for researchers and policy makers, further thought on these issues is critical (Mehta et al. 2019; Stults and Larsen 2018).
This methodology is also limited by the inherent subjectivity of the approach. This analysis used multiple steps to define and identify research gaps. Ultimately, a single researcher followed those steps to read through each assessment and extract research gaps for the database. It is possible that another researcher could reach slightly different results using the same methodology, especially since the reports were not standardized in defining, labeling, and framing research gaps.
b. Recommendations for improved identification and tracking of research gaps
Based on the lessons learned from creating the USGCRP research-gap database, we propose the following recommendations for future assessment developers, both within USGCRP and beyond:
- Assessment leadership and guidance to authors should more clearly define what constitutes a research gap to improve the ability to assess research gaps in a more systematic and rigorous manner across future assessments.
- Assessment authors should be trained in the use of uncertainty and research-gap definitions. Training should help authors avoid using imprecise terminology when describing uncertainty or characterizing the strength of evidence or a research gap.
- Each assessment chapter team should include a science communications expert as an author to help provide consistency and best practices for uncertainty and risk terminology across the report.
- Assessment leadership could formally designate intended audiences and style guidance for separate report sections so that uncertainty language better aligns with specific audiences. For example, guidance could include the following:
  - Overview: designed for a policy-making audience; use predefined calibrated language.
  - Key messages: designed for the general public and media; use visual representations of uncertainty where applicable.
  - Chapter text: designed for business owners, practitioners, and educators; write out descriptions of sources of uncertainty.
  - Traceable accounts: designed for a scientific research audience; use calibrated language and technical descriptions of uncertainty.
c. Recommendations for informing future assessment activities
Moving forward, if resources and a consistent process (e.g., storage, web interface, staff time, evaluation procedure) are put in place, research gaps could be assessed whenever a new product is produced. A postassessment evaluation could extend to better catalog author decision-making on the selection and descriptions of research gaps. Additionally, there is a potential for machine learning and other computational techniques to facilitate characterization and analysis of this extensive textual data (Benites-Lazaro et al. 2018; Callaghan et al. 2020; Hassani et al. 2019). Methods such as topic modeling and text network analysis paired with insights from discourse analysis could offer lower-effort and more-diverse ways of understanding the rich body of qualitative data in assessments. Computational approaches may present more research opportunities, but they also present trade-offs in terms of the interpretability, contextualization, and qualitative evaluation of the use of calibrated language. In the case of supervised machine learning, algorithms are shaped by expert design choices for preprocessing data that affect training data, domain knowledge, and data labeling (Zhou et al. 2017). One promising approach for retention of context in an automated classification scheme would be the use of a climate-relevant ontology (Sleeman et al. 2018), such as the NASA Global Change Master Directory keywords (NASA 2021) or the expert-directed categorization presented in the final key list of this analysis (Table S1). Furthermore, informed evaluation, or review by experts, is also a practice used in text analysis through preprocessing (cleaning up) text and testing topical models for fidelity and context to the text input (Callaghan et al. 2020).
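The idea of expert-directed constraints for automated classification can be sketched as a keyword ontology seeded from the manual key list. The themes and keywords below are an illustrative subset, and a production approach would likely pair such a seed with topic modeling and expert review of the output.

```python
# Toy expert-seeded ontology: themes mapped to indicative keywords. A real
# system might instead draw on Table S1 or the NASA Global Change Master
# Directory keywords.
ONTOLOGY = {
    "Modeling": {"model", "downscaling", "parameterization"},
    "Social Sciences": {"vulnerability", "demographics", "behavior"},
}

def classify(statement):
    """Assign themes whose keywords appear in the statement; a transparent,
    expert-constrained baseline against which learned classifiers can be checked."""
    words = set(statement.lower().replace(",", " ").split())
    return sorted(theme for theme, keys in ONTOLOGY.items() if words & keys)
```

Because every assignment traces back to an expert-chosen keyword, this kind of baseline preserves the interpretability and context that purely data-driven approaches can lose.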
We propose the following recommendations for future analysis of assessment research gaps:
- Establish intervals for regular analysis of research gaps in the assessment process for more trackable information over time. A dedicated section on research gaps in each assessment chapter would support this analysis.
- Offer the existing database, background explanation, and search examples on a public web interface. Building and maintaining a user interface and establishing quality-control procedures would require a sustained effort and infrastructure.
- Contextualize machine learning approaches to climate assessment with trends and transparency in research priorities, diversity of scientific knowledge systems, scientific literature landscape, assessment structure, and author expert judgment.
- After an assessment is complete, consider convening an author workshop or circulating a survey to define research gaps with greater detail and explore topics omitted from the assessment due to low confidence or insufficient evidence. Also consider stakeholder engagement activities that would help identify research gaps most relevant to stakeholder needs.
Assessment reports, stakeholder engagement, research coordination, and process evaluation are integral components of climate assessment processes. While these elements provide a clear pathway for the identification, examination, and analysis of research-gap information, until now there had not been a comprehensive or systematic assessment of research gaps across USGCRP national climate assessments. Here we established a methodology for identifying, categorizing, and analyzing research gaps to extend the utility of USGCRP assessment products and improve information access. Future work in this area will focus on improving the accessibility and usability of this complex dataset for federal agencies and other stakeholder communities.
Acknowledgments.
This material was developed with federal support through the U.S. Global Change Research Program under National Aeronautics and Space Administration Contract Award 80HQTR21D0004. Thank you to our colleagues who gave feedback on the project, including Allison Crimmins, Dr. Michael Kuperberg, Alexa Jay, Ciara Lemery, and Dr. Elizabeth “Betsy” Weatherhead. Samantha Basile, Allyza Lustig, Bradley Akamine, and Christopher W. Avery are ICF employees contracted to the National Coordination Office of USGCRP. Ashley Bieniek-Tobasco was a contracted ICF employee working for USGCRP when she generated and categorized the research-gap database. All authors are working on the Fifth National Climate Assessment in various capacities, and several authors have worked on previous USGCRP assessments.
Data availability statement.
All research-gap text was taken directly from the USGCRP climate assessments, which are available at globalchange.gov or through the following links: Third National Climate Assessment, https://nca2014.globalchange.gov; Climate Change, Global Food Security, and the U.S. Food System, https://www.usda.gov/oce/energy-and-environment/food-security; Impacts of Climate Change on Human Health in the United States, https://health2016.globalchange.gov; Climate Science Special Report, https://science2017.globalchange.gov; Second State of the Carbon Cycle Report, https://carbon2018.globalchange.gov; Fourth National Climate Assessment, https://nca2018.globalchange.gov. A spreadsheet version of the research-gap database and key list is provided as supplemental material. An online version of the research-gap database is formatted using the Airtable platform and available at https://www.globalchange.gov/nca-research-gaps.
REFERENCES
Arias, P. A., and Coauthors, 2021: Technical summary. Climate Change 2021: The Physical Science Basis, V. Masson-Delmotte et al., Eds., Cambridge University Press, 33–144, https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_TS.pdf.
Avery, C. W., D. R. Reidmiller, T. S. Carter, K. L. M. Lewis, and K. Reeves, 2018: Report development process. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, D. R. Reidmiller et al., Eds., Vol. II, U.S. Global Change Research Program, 1387–1409, https://doi.org/10.7930/NCA4.2018.AP1.
Benites-Lazaro, L. L., L. Giatti, and A. Giarolla, 2018: Topic modeling method for analyzing social actor discourses on climate change, energy and food security. Energy Res. Soc. Sci., 45, 318–330, https://doi.org/10.1016/j.erss.2018.07.031.
Berrang-Ford, L., A. J. Sietsma, M. Callaghan, J. C. Minx, P. F. Scheelbeek, N. R. Haddaway, A. Haines, and A. D. Dangour, 2021: Systematic mapping of global research on climate and health: A machine learning review. Lancet Planet. Health, 5, E514–E525, https://doi.org/10.1016/S2542-5196(21)00179-0.
Brown, M. E., and Coauthors, 2015: Climate change, global food security, and the U.S. food system. U.S. Global Change Research Program Rep., 157 pp., https://doi.org/10.7930/J0862DC7.
Buizer, J. L., and Coauthors, 2016: Building a sustained climate assessment process. The US National Climate Assessment, K. Jacobs et al., Eds., Springer Climate, Springer, 23–37, https://doi.org/10.1007/978-3-319-41802-5_3.
Callaghan, M. W., J. C. Minx, and P. M. Forster, 2020: A topography of climate change research. Nat. Climate Change, 10, 118–123, https://doi.org/10.1038/s41558-019-0684-5.
Corner, A., C. Shaw, and J. Clarke, 2018: Principles for effective communication and public engagement on climate change: A handbook for IPCC authors. IPCC Climate Outreach Rep., 28 pp., https://www.ipcc.ch/site/assets/uploads/2017/08/Climate-Outreach-IPCC-communications-handbook.pdf.
Crimmins, A., 2020: Improving the use of calibrated language in U.S. climate assessments. Earth’s Future, 8, e2020EF001817, https://doi.org/10.1029/2020EF001817.
Crimmins, A., and Coauthors, 2016: The impacts of climate change on human health in the United States: A scientific assessment. U.S. Global Change Research Program Rep., 332 pp., https://doi.org/10.7930/J0R49NQX.
Elamparuthy, A., and R. Sherman, 2020: Climate data you can trust. Eos, 101, https://doi.org/10.1029/2020EO141194.
García-del-Amo, D., P. G. Mortyn, and V. Reyes-García, 2020: Including indigenous and local knowledge in climate research: An assessment of the opinion of Spanish climate change researchers. Climatic Change, 160, 67–88, https://doi.org/10.1007/s10584-019-02628-x.
Global Change Research Act, 1990: Global Change Research Act of 1990. Pub. L. No. 101-606, 104 Stat. 3096, https://www.congress.gov/bill/101st-congress/senate-bill/169/text.
Hassani, H., X. Huang, and E. Silva, 2019: Big data and climate change. Big Data Cognit. Comput., 3, 12, https://doi.org/10.3390/bdcc3010012.
Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 1095–1108, https://doi.org/10.1175/2009BAMS2607.1.
Hulme, M., 2018: “Gaps” in climate change knowledge: Do they exist? Can they be filled? Environ. Humanit., 10, 330–337, https://doi.org/10.1215/22011919-4385599.
Lehner, F., C. Deser, N. Maher, J. Marotzke, E. M. Fischer, L. Brunner, R. Knutti, and E. Hawkins, 2020: Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. Earth Syst. Dyn., 11, 491–508, https://doi.org/10.5194/esd-11-491-2020.
Mehta, L., H. N. Adam, and S. Srivastava, 2019: Unpacking uncertainty and climate change from ‘above’ and ‘below.’ Reg. Environ. Change, 19, 1529–1532, https://doi.org/10.1007/s10113-019-01539-y.
Melillo, J. M., T. C. Richmond, and G. W. Yohe, 2014: Climate change impacts in the United States: The Third National Climate Assessment. U.S. Global Change Research Program Rep., 841 pp., https://doi.org/10.7930/J0Z31WJ2.
Moss, R. H., and Coauthors, 2019: Evaluating knowledge to support climate action: A framework for sustained assessment. Report of an independent advisory committee on applied climate assessment. Wea. Climate Soc., 11, 465–487, https://doi.org/10.1175/WCAS-D-18-0134.1.
NASA, 2021: Global Change Master Directory (GCMD) Keywords. EarthData, https://earthdata.nasa.gov/earth-observation-data/find-data/idn/gcmd-keywords.
Patt, A., and S. Dessai, 2005: Communicating uncertainty: Lessons learned and suggestions for climate change assessment. C. R. Geosci., 337, 425–441, https://doi.org/10.1016/j.crte.2004.10.004.
Robinson, K. A., and Coauthors, 2013: Framework for determining research gaps during systematic review: Evaluation. U.S. Dept. of Health and Human Services Rep., 66 pp., https://www.ncbi.nlm.nih.gov/books/NBK126702.
Roesch-McNally, G., and Coauthors, 2020: Beyond climate impacts: Knowledge gaps and process-based reflection on preparing a regional chapter for the Fourth National Climate Assessment. Wea. Climate Soc., 12, 337–350, https://doi.org/10.1175/WCAS-D-19-0060.1.
Shepherd, T. G., and Coauthors, 2018: Storylines: An alternative approach to representing uncertainty in physical aspects of climate change. Climatic Change, 151, 555–571, https://doi.org/10.1007/s10584-018-2317-9.
Sleeman, J., T. Finin, and M. Halem, 2018: Ontology-grounded topic modeling for climate science research. Emerging Topics in Semantic Technologies: ISWC 2018 Satellite Events, E. Demidova et al., Eds., IOS Press, 191–202.
Stults, M., and L. Larsen, 2018: Tackling uncertainty in US local climate adaptation planning. J. Plann. Educ. Res., 40, 416–431, https://doi.org/10.1177/0739456X18769134.
Sutton, R. T., 2019: Climate science needs to take risk assessment much more seriously. Bull. Amer. Meteor. Soc., 100, 1637–1642, https://doi.org/10.1175/BAMS-D-18-0280.1.
USGCRP, 2017: Climate Science Special Report: Fourth National Climate Assessment. Vol. I, D. J. Wuebbles et al., Eds., U.S. Global Change Research Program Rep., 470 pp., https://doi.org/10.7930/J0J964J6.
USGCRP, 2018a: Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment. Vol. II, D. R. Reidmiller et al., Eds., U.S. Global Change Research Program Rep., 1515 pp., https://doi.org/10.7930/NCA4.2018.
USGCRP, 2018b: Second State of the Carbon Cycle Report (SOCCR2): A Sustained Assessment Report. N. Cavallaro et al., Eds., U.S. Global Change Research Program Rep., 878 pp., https://doi.org/10.7930/Soccr2.2018.
van Dorsser, C., W. E. Walker, P. Taneja, and V. A. W. J. Marchau, 2018: Improving the link between the futures field and policymaking. Futures, 104, 75–84, https://doi.org/10.1016/j.futures.2018.05.004.
van Dorsser, C., P. Taneja, W. Walker, and V. Marchau, 2020: An integrated framework for anticipating the future and dealing with uncertainty in policymaking. Futures, 124, 102594, https://doi.org/10.1016/j.futures.2020.102594.
Walker, W. E., P. Harremoës, J. Rotmans, J. P. van der Sluijs, M. B. A. van Asselt, P. Janssen, and M. P. K. von Krauss, 2003: Defining uncertainty: A conceptual basis for uncertainty management in model-based decision support. Integr. Assess., 4, 5–17, https://doi.org/10.1076/iaij.4.1.5.16466.
Walker, W. E., V. A. W. J. Marchau, and D. Swanson, 2010: Addressing deep uncertainty using adaptive policies: Introduction to section 2. Technol. Forecasting Soc. Change, 77, 917–923, https://doi.org/10.1016/j.techfore.2010.04.004.
Walker, W. E., R. J. Lempert, and J. H. Kwakkel, 2013: Deep uncertainty. Encyclopedia of Operations Research and Management Science, S. I. Gass and M. C. Fu, Eds., Springer, 395–402, https://doi.org/10.1007/978-1-4419-1153-7_1140.
Weaver, C. P., R. H. Moss, K. L. Ebi, P. H. Gleick, P. C. Stern, C. Tebaldi, R. S. Wilson, and J. L. Arvai, 2017: Reframing climate change assessments around risk: Recommendations for the US National Climate Assessment. Environ. Res. Lett., 12, 080201, https://doi.org/10.1088/1748-9326/aa7494.
Zhou, L., S. Pan, J. Wang, and A. V. Vasilakos, 2017: Machine learning on big data: Opportunities and challenges. Neurocomputing, 237, 350–361, https://doi.org/10.1016/j.neucom.2017.01.026.