Evolving Obs4MIPs to Support Phase 6 of the Coupled Model Intercomparison Project (CMIP6)

Robert Ferraro, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California
Duane E. Waliser, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California
Peter Gleckler, Lawrence Livermore National Laboratory, Livermore, California
Karl E. Taylor, Lawrence Livermore National Laboratory, Livermore, California
Veronika Eyring, Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany

CORRESPONDING AUTHOR: Robert Ferraro, Jet Propulsion Laboratory, MS 301-330, 4800 Oak Grove Dr., Pasadena, CA 91109-8099, E-mail: robert.d.ferraro@jpl.nasa.gov

Over the past four years, an initiative known as Observations for Model Intercomparison Projects (Obs4MIPs) has successfully completed its pilot phase by adopting a set of technical protocols (dataset format, metadata standards, and documentation requirements) for dataset contributions, producing datasets that conform to these standards, and archiving them for distribution on the Earth System Grid Federation (ESGF) alongside model output from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) (Teixeira et al. 2014). This pilot phase of Obs4MIPs, initiated by the National Aeronautics and Space Administration’s (NASA) Jet Propulsion Laboratory (JPL) and the Department of Energy’s (DOE) Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory, supported CMIP5 (Taylor et al. 2012) and provided a path to improve coordination between observational communities and major climate model intercomparison projects such as CMIP. Obs4MIPs is now being embraced by the international community, with the World Climate Research Programme (WCRP) Data Advisory Council (WDAC) empaneling a task team to provide guidance and governance for Obs4MIPs at an international level, in conjunction with the existing NASA Science Working Group, which is more tightly focused on NASA satellite data products. Following the example of the first DOE–NASA Obs4MIPs meeting (Gleckler et al. 2011), and with an initial design of CMIP6 now published (Meehl et al. 2014), a meeting of over 50 experts in climate modeling and satellite data from the United States, Europe, Japan, and Australia convened at NASA headquarters in Washington, D.C., to plan the evolution of Obs4MIPs and its connection to the CMIP6 experiments.
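
As a rough illustration of what the Obs4MIPs technical protocol implies in practice, the sketch below writes a minimal NetCDF file whose variable name, units, and global attributes follow CF conventions and a CMIP-like naming scheme. The function name, attribute values, and grid are illustrative assumptions for this note, not the official Obs4MIPs specification.

    # Illustrative sketch only: a minimal CF-style NetCDF file whose variable
    # name ("pr") and metadata mimic a CMIP-aligned Obs4MIPs contribution.
    # Attribute values are placeholders, not the official protocol.
    import numpy as np
    from netCDF4 import Dataset

    def write_example_obs_file(path="pr_example_obs.nc"):
        ds = Dataset(path, "w")
        ds.createDimension("time", None)
        ds.createDimension("lat", 180)
        ds.createDimension("lon", 360)

        lat = ds.createVariable("lat", "f4", ("lat",))
        lon = ds.createVariable("lon", "f4", ("lon",))
        time = ds.createVariable("time", "f8", ("time",))
        pr = ds.createVariable("pr", "f4", ("time", "lat", "lon"))  # CMIP name for precipitation flux

        lat[:] = np.linspace(-89.5, 89.5, 180)
        lon[:] = np.linspace(0.5, 359.5, 360)
        lat.units = "degrees_north"
        lon.units = "degrees_east"
        time.units = "days since 2000-01-01"
        pr.units = "kg m-2 s-1"
        pr.standard_name = "precipitation_flux"

        # Global attributes of the kind the protocol calls for (placeholder values).
        ds.Conventions = "CF-1.x"
        ds.source = "example satellite retrieval"
        ds.institution = "example data provider"

        time[0] = 0.0
        pr[0, :, :] = 0.0  # placeholder data field
        ds.close()

    write_example_obs_file()

The point of such conformance is that observational files can sit on the ESGF next to CMIP model output and be read with the same analysis tools.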

OBS4MIPS–CMIP6 PLANNING MEETING

What: Experts in satellite data products and global climate modeling met to begin planning the evolution of the Observations for Model Intercomparison Projects (Obs4MIPs) in support of CMIP6.

When: 29 April–1 May 2014

Where: Washington, D.C.

To date, the Obs4MIPs collection has grown to over 50 contributed datasets that align with CMIP5 model output, including datasets corresponding to the International Satellite Cloud Climatology Project (ISCCP) and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations/Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (CALIPSO/PARASOL) inline simulators (Bodas-Salcedo et al. 2011) that were originally assembled as part of the Cloud Feedback Model Intercomparison Project (CFMIP). The collection now includes contributions from the European Space Agency (ESA), and a diverse community of observational experts has expressed interest in contributing data to Obs4MIPs. This broad interest has challenged some of the initial thinking behind the current criteria for inclusion and generated much discussion at the meeting. These criteria include the notion of identifying one “best” dataset for each variable, the degree of exact matchup with CMIP5 output variables, the sampling mismatch between observations and model-averaged output, and the exclusion of model-based datasets (i.e., reanalyses).

The objectives for the meeting were as follows:

  1. Review aspects of model evaluation in CMIP3/CMIP5 that utilized satellite observations and reanalyses for diagnosis and assessment.

  2. Assess the utility of the current Obs4MIPs holdings, including formatting, documentation, temporal and spatial resolution, and ESGF delivery, in the context of CMIP model evaluation.

  3. Identify currently underutilized and potentially valuable satellite observations and reanalysis for climate model evaluation and process understanding.

  4. Examine the mismatch between CMIP model output and satellite-based products, and recommend changes and additions to output and datasets to achieve more effective alignment.

  5. Provide recommendations for new observation datasets that target critical voids in model evaluation capabilities, including important phenomena, subgrid-scale features, and holistic Earth system considerations extending to composition, carbon cycle, hydrology, etc.

  6. Discuss the utility and expansion of satellite simulators for CMIP6 model evaluation, striving to identify key areas where such developments could yield high-impact advancements in model evaluation and improvement.

The meeting was organized around key topics driving current Earth system global model development and analysis: atmospheric composition and radiation, atmospheric physics, terrestrial water and energy exchange, land cover/land use, carbon cycle, and oceanography and cryosphere.

Each session began with short survey talks from a modeling perspective and an observational data perspective to promote conversation between modelers and data providers. The intent was to inform community counterparts of the observational needs from the modeling perspective and of the datasets potentially available from the provider perspective. Substantial time was reserved for open discussion. The organizers acknowledged that the agenda was driven by their perception of the highest priorities for Earth system global model evaluation in the context of CMIP, and that many other important topics had to be excluded in the interest of time. The highlights of these discussions were captured by rapporteurs and reported on the last day of the meeting. Several consensus recommendations applied to all of the topic areas:

  • Expand the inventory of included datasets. Many potential additions were suggested during the meeting, without an attempt to prioritize them.

  • Include higher-frequency datasets and higher-frequency model output. These are considered important for process-oriented evaluation, but the potential volume of data could tax the resources of modeling groups. To reduce the burden, it was suggested that high-frequency model output be limited to an observationally rich “golden period,” but further discussion is required to define it (a rough data-volume estimate is sketched after this list).

  • Reliable and defensible error characterization/estimation of observations is a high priority, and Obs4MIPs should press harder for the inclusion of these estimates as part of each dataset.

  • Include datasets in support of offline simulators. The CFMIP Observation Simulator Package (COSP) simulators (Bodas-Salcedo et al. 2011) will likely continue to be included in model runs for CMIP6, and the set of relevant comparison datasets in Obs4MIPs should be expanded. However, adding new simulators requires time and resources and thus is unlikely to happen before the CMIP6 simulations are started. If simulators exist that can be run offline on model output, then consideration should be given to recommending the appropriate model output and providing the appropriate datasets for comparison.

  • Reanalysis serves many useful purposes, and for some variables it is the best observationally based reference for climate models. However, inclusion of reanalysis fields in Obs4MIPs should be considered with caution, and the degree to which the reanalysis models themselves might distort the observed fields should be taken into account.

  • Collocated observations, including sparser in situ datasets, are particularly valuable for diagnosing certain processes and their inclusion in Obs4MIPs should therefore be encouraged.

  • Precise definitions of data products (what is actually being reported), including biases, and precise definitions of the model output variables are required. In some cases, it is not clear how closely the observations correspond to the model output, even though they have the same names and units. In this respect, the technical note requirement established in phase 1 of Obs4MIPs was regarded as very useful, since it provides a description of the data field, its origin, validation and uncertainty estimates, considerations for use in model evaluation, and an instrument overview.
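
To make the potential data burden of higher-frequency output concrete, the short calculation below estimates the uncompressed size of a single 3-hourly, two-dimensional field on a 1° × 1° grid over a 10-year golden period. The grid, frequency, and period are illustrative assumptions for this sketch, not values agreed on at the meeting.

    # Back-of-envelope estimate of high-frequency output volume (illustrative
    # assumptions: 1 deg x 1 deg grid, 3-hourly sampling, 10-year period).
    nlat, nlon = 180, 360          # 1 deg x 1 deg global grid
    bytes_per_value = 4            # single-precision float
    samples_per_day = 8            # 3-hourly output
    days = 365 * 10                # 10-year "golden period"

    total_bytes = nlat * nlon * bytes_per_value * samples_per_day * days
    print(f"{total_bytes / 1e9:.1f} GB per 2D field per model")  # roughly 7.6 GB

Multiplied across many variables, vertical levels, ensemble members, and models, even this modest per-field figure illustrates why participants favored restricting high-frequency output to a limited period.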

In addition, several recommendations were supported by a subset of the participants but did not rise to the level of consensus:

  • Relax the requirement that variables included in Obs4MIPs correspond to a model output variable in the CMIP protocol. How far this requirement should be relaxed remains an open question, without general consensus.

  • Require averaging kernels for retrieval-based observations. The experts in attendance asserted that applying the kernels can be done offline from the model runs, and that the overhead is low compared to the benefit of a consistent matchup between the model variable representation and the observational datasets (a sketch of an offline application is given after this list). It appears to be most important for atmospheric chemistry and trace gas comparisons.

  • Include more process-level datasets to support diagnostics and tools for model development, in addition to model evaluation. This was a significant point of discussion and was considered by many to be beyond the scope of Obs4MIPs.

  • Sparse in situ datasets: where to start, and how far to go? Inclusion of in situ data was generally deemed to be positive, but there are technical issues regarding formats and conventions, and the current CMIP output is gridded on much coarser scales than the observations, which raises the question of what comparison actually makes sense. For now, in situ data collocated with high-resolution satellite observations seem to make the most sense.

  • Include more satellite simulators in the CMIP experiments. The modeling community may be reluctant to add code (and execution overhead) to experiments that already consume considerable resources. Encouragement is needed from specific communities to produce stable, supported software with favorable licensing terms, and in each case a clear benefit to evaluation or diagnosis must be demonstrated.
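
As a sketch of the offline averaging kernel application mentioned above, the function below applies the standard retrieval smoothing relation, x_smoothed = x_a + A (x_model - x_a), to a model profile already interpolated to the retrieval levels. The array names and the made-up numbers in the usage example are illustrative assumptions; actual products define their own kernels, prior profiles, and vertical coordinates.

    import numpy as np

    def apply_averaging_kernel(x_model, x_prior, kernel):
        """Smooth a model profile with a retrieval averaging kernel.

        x_model : model profile interpolated to the retrieval levels
        x_prior : the retrieval's a priori profile on the same levels
        kernel  : averaging kernel matrix A (levels x levels)

        Returns x_prior + A @ (x_model - x_prior), i.e., what the instrument
        would report if the model state were the true atmosphere.
        """
        return x_prior + kernel @ (x_model - x_prior)

    # Minimal usage example with made-up numbers (3 retrieval levels).
    x_model = np.array([50.0, 40.0, 30.0])     # e.g., trace gas mixing ratio, ppb
    x_prior = np.array([45.0, 45.0, 45.0])
    kernel = np.array([[0.6, 0.2, 0.0],
                       [0.2, 0.5, 0.2],
                       [0.0, 0.2, 0.4]])
    print(apply_averaging_kernel(x_model, x_prior, kernel))

Because the calculation needs only archived model output plus the kernel and prior supplied with the observational product, it can be run well after the simulations are complete, which is why it adds little overhead to the modeling groups themselves.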

A complete meeting report summarizing the details of the presentations and ensuing discussions, as captured by the rapporteurs and extracted from the presentation materials, is available on the Obs4MIPs Project website (www.earthsystemcog.org/projects/obs4mips/), which also hosts all project information and datasets. The WDAC task team will make use of these meeting discussions as it paves the way for the next steps in Obs4MIPs.

ACKNOWLEDGMENTS

This meeting would not have occurred without the assistance and support of Tsengdar Lee at NASA and Renu Joseph at DOE. Thanks are also due to Michel Rixen at WCRP for providing additional meeting support. Ferraro’s and Waliser’s contributions to this activity were performed on behalf of the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Work by Gleckler and Taylor was performed on behalf of Lawrence Livermore National Laboratory as a contribution to the U.S. Department of Energy Office of Science, Climate and Environmental Sciences Division, Regional and Global Climate Modeling Program, under Contract DE-AC52-07NA27344. Eyring’s work was supported by the DLR Earth System Model Validation (ESMVal) project.

REFERENCES

Bodas-Salcedo, A., and Coauthors, 2011: COSP: Satellite simulation software for model assessment. Bull. Amer. Meteor. Soc., 92, 1023–1043, doi:10.1175/2011BAMS2856.1.

Gleckler, P., R. Ferraro, and D. Waliser, 2011: Improving use of satellite data in evaluating climate models. Eos, Trans. Amer. Geophys. Union, 92, 172, doi:10.1029/2011EO200005.

Meehl, G. A., R. Moss, K. E. Taylor, V. Eyring, R. J. Stouffer, S. Bony, and B. Stevens, 2014: Climate model intercomparisons: Preparing for the next phase. Eos, Trans. Amer. Geophys. Union, 95, 77, doi:10.1002/2014EO090001.

Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, doi:10.1175/BAMS-D-11-00094.1.

Teixeira, J., D. Waliser, R. Ferraro, P. Gleckler, T. Lee, and G. Potter, 2014: Satellite observations for CMIP5: The genesis of Obs4MIPs. Bull. Amer. Meteor. Soc., 95, 1329–1334, doi:10.1175/BAMS-D-12-00204.1.