
Uncertainty Quantification for Climate Observations

  • 1 Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA's National Climatic Data Center, Asheville, North Carolina
  • | 2 Program in Spatial Statistics and Environmental Statistics, The Ohio State University, Columbus, Ohio
  • | 3 Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, North Carolina

*CURRENT AFFILIATION: Department of Statistics, North Carolina State University, Raleigh, North Carolina

CORRESPONDING AUTHOR: Jessica L. Matthews, NOAA's National Climatic Data Center, 151 Patton Avenue, Asheville, NC 28801, E-mail: jessica.matthews@noaa.gov


EXECUTIVE SUMMARY.

Observations are key to uncertainty quantification (UQ) in climate research because they form the very basis for any evidence of climate change and provide a corroborating source of information about the way in which physical processes are modeled and understood. However, observations themselves possess uncertainties originating from many sources including measurement error and errors imposed by the algorithms generating derived products (see Fig. 1). Over time, global observing systems have undergone transformations on pace with technological advances. These changes require adequate quantification of resultant imposed biases to determine the impact on long-term trends. The uncertainties in climate observations pose a set of methodological and practical challenges for both the analysis of long-term trends and the comparison between data and model simulations.

Fig. 1.

Uncertainty in climate data records. Illustrated as a “fountain”: at each level of climate data manipulation, additional uncertainties are introduced. At the first level, raw data are affected by measurement errors. During the quality control procedures of the next level, uncertainty may be introduced when correcting the geolocation of measurements and identifying inconsistent measurements. The homogenization methods used to remove systematic bias and data artifacts at the next level may introduce additional uncertainty. Finally, both the choice of dataset to use as input and the forward model algorithms applied to generate derived products may introduce another layer of uncertainty. The typical climate data user gathers records from the “pool” at the bottom of this fountain, thereby inheriting the total sum of uncertainties from all the preceding levels.

Citation: Bulletin of the American Meteorological Society 94, 3; 10.1175/BAMS-D-12-00042.1
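As a minimal numerical illustration of this layered accumulation (the uncertainty magnitudes below are entirely hypothetical), independent 1-sigma contributions from each level of the fountain combine in quadrature for the end user:

```python
import math

# Hypothetical, independent 1-sigma uncertainty contributions
# introduced at each level of the "fountain" in Fig. 1.
components = {
    "raw_measurement": 0.30,
    "quality_control": 0.10,
    "homogenization": 0.15,
    "derived_product": 0.25,
}

# Assuming independence, a user drawing from the "pool" inherits
# all contributions combined in quadrature.
total = math.sqrt(sum(s ** 2 for s in components.values()))
print(f"total 1-sigma uncertainty: {total:.3f}")
```

When the levels are correlated, covariance terms must be added inside the square root, so independence is a simplifying assumption here.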

WORKSHOP ON UNCERTAINTY QUANTIFICATION FOR CLIMATE OBSERVATIONS

What: Approximately 60 statisticians, mathematicians, and climate scientists from academia and governmental institutions met to discuss the issues surrounding uncertainty quantification in the context of climate observations.

When: 17–19 January 2012

Where: Asheville, North Carolina

In January 2012, a workshop was held to discuss the issues surrounding uncertainty quantification in the context of climate observations.1 Workshop events included 14 invited speakers over five oral sessions, multiple panel discussions, and a poster session addressing the following themes: remote sensing issues, spatial scaling, Bayesian techniques for coupling data to models, data fusion and assimilation, and climate measurement networks. The detailed conference schedule, along with links to the presentations, may be found at www.samsi.info/uq-observations.

This workshop was an opportunity to engage with and understand the different concerns and perspectives of the largely academic mathematical and statistical communities and of climate data product scientists and providers. Major outcomes of the workshop include a recognized mutual interest in collaboration and the identification of possible steps toward the shared goal of robustly characterizing uncertainty in climate observations. We look forward to the possibility of evolving this workshop into an annual event to facilitate continued cooperation and communication within the scientific community.

ORIGINS OF UNCERTAINTY IN CLIMATE OBSERVATIONS

  • Instrument measurement error

  • Temporal and spatial sampling error

  • Digitization error

  • Errors using forward models with derived datasets

    • In mathematical formulations as compared to reality

    • In numerical methods used to solve the forward model

    • In parametric fields

    • In boundary and initial conditions

TOPICS OF DISCUSSION.

Workshop presentations outlined specific successes and challenges encountered when applying UQ to climate data. These served as a springboard for lively discussion on a variety of points, briefly described below.

Raw measurement archival and UQ characterization.

All climate data products are based on original raw measurements but often involve subtle processing and transformations of the raw observations. Sometimes it is the direct observations, with quality checking and homogenization, that constitute the data product, although at other times the raw measurements are used to derive the product through a forward model. Questions were raised regarding the availability of original records and accurate quantification of the associated measurement errors. It was acknowledged that U.S. government agencies are successfully archiving and stewarding most remotely sensed raw measurements in addition to the derived products. However, the records of ground-based measurements still face issues with digitization, archival, and retention. A significant and sustained effort is required to save these data, which constitute the long-term (greater than 50 years) record. Figure 2 illustrates the idealization of climate data record stewardship.

Fig. 2.

Climate observation stewardship. This conceptual framework idealizes the stewardship of climate observations from the original raw measurements, through homogenization, and derivation of products. Along the way, all processing should be documented and archived in a way that is easily accessible to the user community.

Citation: Bulletin of the American Meteorological Society 94, 3; 10.1175/BAMS-D-12-00042.1

The characterization of UQ in remotely sensed data products can be done in two main ways: 1) propagating uncertainties from instrument measurement errors through the retrievals to the final product, or 2) comparing to in situ measurements. More often, scientists choose to validate remotely sensed data products against in situ measurements rather than take on the more complex alternative of propagating uncertainties. Although there are some notable successes in characterizing UQ for the raw measurements, there remain large geographic regions, vertical domains, and measurement types for which associated validation data are not available.
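The first route can be sketched with a toy Monte Carlo propagation. The forward-model retrieval, measurement value, and error magnitude below are all hypothetical stand-ins, not any particular instrument's characteristics:

```python
import numpy as np

rng = np.random.default_rng(42)

def retrieval(radiance):
    """Hypothetical forward-model retrieval mapping a raw radiance
    measurement to a derived geophysical quantity."""
    return 0.05 * radiance ** 1.2 + 2.0

measured = 850.0   # raw measurement (e.g., a radiance count)
sigma = 5.0        # assumed 1-sigma Gaussian instrument error

# Propagate the measurement error through the retrieval by Monte
# Carlo sampling rather than by analytic linearization.
samples = retrieval(rng.normal(measured, sigma, size=100_000))

print(f"derived product: {samples.mean():.2f} "
      f"+/- {samples.std(ddof=1):.2f} (1-sigma)")
```

The same sampling approach extends directly to retrievals with many correlated inputs, where analytic linearization becomes cumbersome.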

Quantification of the measurement errors traceable to absolute International System of Units (SI) standards in the ground-based measurement networks would be prohibitively expensive across the network as a whole. However, there is a growing realization of the potential value of reference networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the U.S. Climate Reference Network (USCRN), which are traceable in this manner and can form the backbone of a global system of systems approach to observations as envisioned by the Global Earth Observation System of Systems (GEOSS) and World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS).

Homogenization techniques.

The workshop focused on current procedures for removing outlier and poor-quality measurements from the data record. Climate scientists sometimes find it necessary to identify data points that suffer from measurement inconsistencies or underrepresentativeness and then omit them from further analysis; in doing so, artifacts and systematic bias are removed from the data record. Given the skepticism with which so-called data cleansing procedures are received by both scientific and lay audiences, it was recommended that every such procedure be formulated as a fully described mathematical algorithm to promote reliability and transparency. Even after homogenization techniques are applied, improved exploration of structural uncertainty and a consistent suite of benchmarking exercises to understand more fully the strengths and limitations of each method will enable the data record to be better characterized. All participants agreed that care must be taken with any modification to, or subsetting of, the complete dataset, as it changes the distribution of measurements, which are themselves only estimates of what is considered “truth.”
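As one illustration of casting a screening step as a fully described, reversible algorithm, the sketch below flags (rather than deletes) measurements by a median absolute deviation rule; the threshold and data are hypothetical:

```python
import numpy as np

def flag_outliers(series, k=3.0):
    """Flag measurements farther than k robust standard deviations
    (estimated via the median absolute deviation) from the series
    median. Returns a boolean mask; nothing is deleted, so the
    screening step is documented and reversible."""
    series = np.asarray(series, dtype=float)
    med = np.median(series)
    mad = np.median(np.abs(series - med))
    robust_sd = 1.4826 * mad  # scales MAD to sigma for Gaussian data
    return np.abs(series - med) > k * robust_sd

temps = [14.1, 14.3, 13.9, 14.2, 29.8, 14.0, 14.4]  # one spurious reading
mask = flag_outliers(temps)
print(mask)  # only the 29.8 reading is flagged
```

Publishing the rule, its threshold, and the resulting mask alongside the data lets users audit, and if desired reverse, the screening decision.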

Challenges faced when applying UQ methods to climate observations.

Although this workshop was intended to further collaboration between the statistical and mathematical communities and climate scientists to apply UQ methods to climate observations, it became clear that each side has different challenges. One issue of concern for statisticians is the time investment required to educate themselves about a particular application, but it was agreed that this is where the collaboration with climate scientists could be instrumental.

Climate scientists and data providers have concerns related to choosing an appropriate statistical methodology. Many UQ techniques appear equally good, making it difficult to choose among them. Moreover, the scope of a particular method, where it is valid, and where it should not be used are often unclear. It also remains unclear to the community at large how to correctly intercompare uncertainty estimates derived via different techniques.

The remote sensing data product community has begun to address UQ issues by including data quality flags along with data. Although this is an excellent first step in qualitatively assessing the uncertainty associated with observations, a more quantitative representation is desirable. One suggestion for a quantitative UQ characterization was the inclusion of Monte Carlo ensemble predictions for data products, rather than only reporting means as is the current practice. From the data provider perspective, there are storage concerns for archiving a suite of ensemble predictions. Further, there is concern about the ability of the general user to understand how to use an ensemble product, given the diverse range of background experience for end users. A middle ground would be supplying standard errors or quantile values to indicate the range of uncertainty, which may be a more feasible option to address the interests of all parties.
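The middle-ground option can be sketched as follows, with a hypothetical Monte Carlo ensemble standing in for a real product; the values and ensemble size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 500-member Monte Carlo ensemble for one grid cell of a
# derived data product (e.g., a surface temperature in kelvin).
ensemble = rng.normal(loc=288.5, scale=0.8, size=500)

# Rather than archive every member, a provider could supply the mean
# plus a compact uncertainty summary: the ensemble spread and a few
# quantiles bounding the plausible range.
summary = {
    "mean": float(ensemble.mean()),
    "spread": float(ensemble.std(ddof=1)),   # 1-sigma ensemble spread
    "q05": float(np.quantile(ensemble, 0.05)),
    "q50": float(np.quantile(ensemble, 0.50)),
    "q95": float(np.quantile(ensemble, 0.95)),
}
for name, value in summary.items():
    print(f"{name}: {value:.3f}")
```

A few stored quantiles per grid cell cost far less than hundreds of ensemble members while still conveying the range of uncertainty to nonexpert users.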

All participants agreed that there is too often a lack of documentation in both products and papers describing the uncertainty calculation methods. Many techniques rely on varying assumptions, which may not be clearly communicated to, or understood by, data product users. The end users of climate data, which can include policymakers, comprise a wide range of backgrounds. This diversity of backgrounds needs to be considered when choosing uncertainty products to associate with climate data. The uncertainty estimates need to be robust, completely explained, and intellectually accessible to all users. It was agreed that the more approximate quality statements intended for nonexpert users should be traceable back to more detailed quality measures.

Reproducibility of applied UQ methods.

Transparency of applied UQ methods is a major concern. Achieving transparency requires auditability, reproducibility, and replication of results; that is, every UQ method applied should be accompanied by an explanation of what was done, how, and why.

One obvious way to facilitate auditability and reproducibility is to provide the code that operates on the raw measurement inputs to produce data products. Even within this relatively small group of participants there was a wide range of perspectives on code provision. Some are willing to provide code, although there is doubt whether code alone will explain methodologies and whether external users can simply copy code and reproduce results. Although code provision can directly explain what analysis was done and how, it may be silent on why the particular methodology and implementation were selected. Some agencies have extremely strict requirements for code release, insisting on near-commercial-grade quality, which most researchers do not have the time or resources to develop; under these conditions, code release is not feasible. Other statisticians feel it is better to provide inputs and outputs to interested parties: supplying non-peer-reviewed code can be a dangerous practice, while leaving users to develop the code and reproduce results independently encourages thought and engagement. In such replication of results, real scientific value is realized through the analysis of structural uncertainty.

Another effort that also achieves transparency is developing detailed technical documentation that outlines all assumptions, error handling, and procedures used to create the associated data product. This documentation is a living entity, modified as reprocessing methods are developed while retaining the historical records for version control.

Collaboration possibilities.

The intent of this workshop was to facilitate collaboration between the statistical and climate science communities. As such, ways of improving and strengthening the relationship between them were discussed. Ideally, this should be a synergistic relationship: not only can statisticians significantly assist climate scientists with data analysis, enabling new interpretations, but careful modeling and analysis of important climate datasets can also suggest new statistical methods.

Several ideas for collaboration avenues were raised:

  • The need for statistical input during the development of experimental schemes.

  • Creation of more flexible data products, designed with input from both climatologists and statisticians, to allow for easy data manipulation for users.

  • A few mathematicians were interested in providing sensitivity analysis capabilities for measurement devices as well as forward models to connect what is being measured and what is being derived.

  • Scaling issues, both spatial and temporal, were a recurrent theme where the advice of statisticians would be greatly valued by climate scientists. A specific example of this is the collaboration to devise a statistical model for surface temperatures calibrated from ground station data.

  • There was common interest in working toward a communitywide definition for terminology describing trends and truth. Additionally, a joint effort is warranted to communicate clearly with the public what is meant by “uncertainty quantification methods,” and how uncertainty calculations are themselves uncertain.

KEY OUTCOMES

  • The relationship between the mathematical, statistical, climate science, and data provider communities is ideally synergistic, and there is a mutual interest in joining together in collaboration.

  • Raw measurements of remotely sensed data appear to be well archived and stewarded by U.S. government agencies. Ground-based measurement records still face significant challenges with digitization, archival, and retention.

  • Homogenization techniques require reliability and transparency.

  • Choosing an appropriate statistical methodology and intercomparing the options is a challenge for climate scientists and data providers.

  • Complete transparency requires auditability, reproducibility, and replication, which cannot all be accomplished merely via code provision.

SPONSORING ORGANIZATIONS

The CICS-NC (www.cicsnc.org) is formed through a consortium of academic, nonprofit, and community organizations with leadership from North Carolina State University on behalf of the University of North Carolina system. Its scientific vision centers on observation, including the development of new ways to use existing observations, the invention of new methods of observation, and the creation and application of ways to synthesize observations from many sources into a complete and coherent depiction of the full Earth system.

The mission of SAMSI (www.samsi.info) is to forge a synthesis of the statistical sciences and the applied mathematical sciences with disciplinary science to confront the very hardest and most important data- and model-driven scientific challenges. SAMSI is a formulator and stimulator of research. It conducts annual research programs that target areas most in need of attention and most amenable to high-impact progress. The 2011–12 program was on uncertainty quantification, with subprograms focusing on methodology, climate modeling, engineering and renewable energy, and geosciences.

SSES at OSU (www.stat.osu.edu/~sses) is involved in teaching, research, and science collaboration. Regarding research, SSES has emphasized the development of statistical methodology and computational aspects of spatial and spatiotemporal statistics applied to areas of “big science,” such as remote sensing of Earth on a global scale, regional climate modeling in space and time, and Bayesian statistical exposure modeling from sources to biomarkers.

1The National Oceanic and Atmospheric Administration (NOAA)/National Climatic Data Center (NCDC) hosted this event. This workshop was cosponsored by Cooperative Institute for Climate and Satellites at North Carolina (CICS-NC) and by the Statistical and Applied Mathematical Sciences Institute (SAMSI). It was organized in cooperation with the Program in Spatial Statistics and Environmental Statistics (SSES) at The Ohio State University (OSU).
