Search Results

You are looking at 1–6 of 6 items for:

  • Author or Editor: Peter Gleckler
  • Bulletin of the American Meteorological Society
Florian Rauser, Peter Gleckler, and Jochem Marotzke

Abstract

We discuss the current practice in the climate sciences of routinely creating climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument for rethinking this process. First, the differences in key climate quantities between ensemble generations corresponding to different CMIP phases are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data, to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. By emphasizing easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.

Full access
Joao Teixeira, Duane Waliser, Robert Ferraro, Peter Gleckler, Tsengdar Lee, and Gerald Potter

The objective of the Observations for Model Intercomparison Projects (Obs4MIPs) is to provide the climate science community with observational data that are analogous (in terms of variables, temporal and spatial frequency, and time periods) to output from phase 5 of the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP5) climate model simulations. The essential aspect of the Obs4MIPs methodology is that it strictly follows the CMIP5 protocol document when selecting the observational datasets. Obs4MIPs also provides documentation that describes aspects of the observational data (e.g., data origin, instrument overview, uncertainty estimates) that are of particular relevance to scientists involved in climate model evaluation and analysis. In this paper, we focus on the activities related to the initial set of satellite observations, which are being carried out in close coordination with CMIP5 and which directly engage NASA's observational (e.g., mission and instrument) science teams. With Obs4MIPs launched using these datasets, we also briefly discuss a broader effort to engage other agencies and experts who maintain datasets, including reanalyses, that can be directly used to evaluate climate models. Different strategies for using satellite observations to evaluate climate models are also briefly summarized.
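
Because Obs4MIPs files follow the same output conventions as CMIP5, the same analysis code can read both the observations and the model output. The sketch below illustrates that workflow using xarray; the file names are hypothetical, the variable "pr" follows CMIP naming for precipitation, and the unweighted comparison is for brevity only, not a recommended evaluation procedure.

```python
# Illustrative sketch only: Obs4MIPs data follow CMIP5 conventions, so identical
# code reads observations and model output. File names below are hypothetical.
import xarray as xr

obs = xr.open_dataset("pr_obs4mips_example.nc")["pr"]      # observational dataset
mod = xr.open_dataset("pr_cmip5_model_example.nc")["pr"]   # CMIP5 model output

# Time-mean fields; interpolate the model onto the observational grid to compare
obs_clim = obs.mean("time")
mod_clim = mod.mean("time").interp_like(obs_clim)

bias = mod_clim - obs_clim            # model-minus-observations difference
print(float(bias.mean()))             # unweighted mean bias, shown for illustration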

Full access
Angeline G. Pendergrass, Peter J. Gleckler, L. Ruby Leung, and Christian Jakob
Free access
Robert Ferraro, Duane E. Waliser, Peter Gleckler, Karl E. Taylor, and Veronika Eyring
Full access
Yann Y. Planton, Eric Guilyardi, Andrew T. Wittenberg, Jiwoo Lee, Peter J. Gleckler, Tobias Bayr, Shayne McGregor, Michael J. McPhaden, Scott Power, Romain Roehrig, Jérôme Vialard, and Aurore Voldoire

Abstract

El Niño–Southern Oscillation (ENSO) is the dominant mode of interannual climate variability on the planet, with far-reaching global impacts. It is therefore key to evaluate ENSO simulations in state-of-the-art numerical models used to study past, present, and future climate. Recently, the Pacific Region Panel of the International Climate and Ocean: Variability, Predictability and Change (CLIVAR) Project, as a part of the World Climate Research Programme (WCRP), led a community-wide effort to evaluate the simulation of ENSO variability, teleconnections, and processes in climate models. The new CLIVAR 2020 ENSO metrics package enables model diagnosis, comparison, and evaluation to 1) highlight aspects that need improvement; 2) monitor progress across model generations; 3) help in selecting models that are well suited for particular analyses; 4) reveal links between various model biases, illuminating the impacts of those biases on ENSO and its sensitivity to climate change; and 5) advance ENSO literacy. By interfacing with existing model evaluation tools, the ENSO metrics package enables rapid analysis of multipetabyte databases of simulations, such as those generated by the Coupled Model Intercomparison Project phases 5 (CMIP5) and 6 (CMIP6). The CMIP6 models are found to significantly outperform those from CMIP5 for 8 out of 24 ENSO-relevant metrics, with most CMIP6 models showing improved tropical Pacific seasonality and ENSO teleconnections. Only one ENSO metric is significantly degraded in CMIP6, namely, the coupling between the ocean surface and subsurface temperature anomalies, while the majority of metrics remain unchanged.
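
The metrics package itself is described in the paper and its references; purely as an illustration of the kind of scalar diagnostic involved, the sketch below computes one widely used ENSO statistic, the standard deviation of Niño-3.4 SST anomalies, with xarray. The file name, variable name ("tos"), and coordinate conventions are assumptions; this is not the CLIVAR 2020 package's API.

```python
# Minimal sketch of a single ENSO amplitude statistic (std. dev. of Nino-3.4 SST
# anomalies). Not the CLIVAR 2020 ENSO metrics package; names are assumptions.
import xarray as xr

def nino34_amplitude(path, var="tos"):
    sst = xr.open_dataset(path)[var]
    # Nino-3.4 box: 5S-5N, 170W-120W (assumes "lat"/"lon" coords, 0-360 longitudes)
    box = sst.sel(lat=slice(-5, 5), lon=slice(190, 240))
    nino34 = box.mean(dim=("lat", "lon"))             # unweighted area mean, for brevity
    clim = nino34.groupby("time.month").mean("time")  # mean seasonal cycle
    anom = nino34.groupby("time.month") - clim        # monthly anomalies
    return float(anom.std("time"))                    # interannual amplitude (degC)

# Example with a hypothetical file:
# print(nino34_amplitude("tos_some_model_historical.nc"))
```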

Full access
W. Lawrence Gates, James S. Boyle, Curt Covey, Clyde G. Dease, Charles M. Doutriaux, Robert S. Drach, Michael Fiorino, Peter J. Gleckler, Justin J. Hnilo, Susan M. Marlais, Thomas J. Phillips, Gerald L. Potter, Benjamin D. Santer, Kenneth R. Sperber, Karl E. Taylor, and Dean N. Williams

The Atmospheric Model Intercomparison Project (AMIP), initiated in 1989 under the auspices of the World Climate Research Programme, undertook the systematic validation, diagnosis, and intercomparison of the performance of atmospheric general circulation models. For this purpose all models were required to simulate the evolution of the climate during the decade 1979–88, subject to the observed monthly average sea surface temperature and sea ice and a common prescribed atmospheric CO2 concentration and solar constant. By 1995, 31 modeling groups, representing virtually the entire international atmospheric modeling community, had contributed the required standard output of the monthly means of selected statistics. These data have been analyzed by the participating modeling groups, by the Program for Climate Model Diagnosis and Intercomparison, and by the more than two dozen AMIP diagnostic subprojects that have been established to examine specific aspects of the models' performance. Here the analysis and validation of the AMIP results as a whole are summarized in order to document the overall performance of atmospheric general circulation–climate models as of the early 1990s. The infrastructure and plans for continuation of the AMIP project are also described.

Although there are apparent model outliers in each simulated variable examined, validation of the AMIP models' ensemble mean shows that the average large-scale seasonal distributions of pressure, temperature, and circulation are reasonably close to what are believed to be the best observational estimates available. The large-scale structure of the ensemble mean precipitation and ocean surface heat flux also resembles the observed estimates but shows particularly large intermodel differences in low latitudes. The total cloudiness, on the other hand, is rather poorly simulated, especially in the Southern Hemisphere. The models' simulation of the seasonal cycle (as represented by the amplitude and phase of the first annual harmonic of sea level pressure) closely resembles the observed variation in almost all regions. The ensemble's simulation of the interannual variability of sea level pressure in the tropical Pacific is reasonably close to that observed (except for its underestimate of the amplitude of major El Niños), while the interannual variability is less well simulated in midlatitudes. When analyzed in terms of the variability of the evolution of their combined space–time patterns in comparison to observations, the AMIP models are seen to exhibit a wide range of accuracy, with no single model performing best in all respects considered.

Analysis of the subset of the original AMIP models for which revised versions have subsequently been used to revisit the experiment shows a substantial reduction of the models' systematic errors in simulating cloudiness but only a slight reduction of the mean seasonal errors of most other variables. In order to understand better the nature of these errors and to accelerate the rate of model improvement, an expanded and continuing project (AMIP II) is being undertaken in which analysis and intercomparison will address a wider range of variables and processes, using an improved diagnostic and experimental infrastructure.
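
The seasonal-cycle diagnostic mentioned above, the amplitude and phase of the first annual harmonic of sea level pressure, can be estimated from a 12-month climatology by projecting onto annual sine and cosine functions. The sketch below is a minimal illustration of that calculation, not the AMIP diagnostic software itself.

```python
# Minimal sketch: amplitude and phase of the first annual harmonic of a
# 12-month climatology (e.g., sea level pressure at one grid point).
# Shown only to illustrate the quantity; not the AMIP diagnostic code.
import numpy as np

def first_annual_harmonic(monthly_clim):
    x = np.asarray(monthly_clim, dtype=float)   # 12 monthly means
    t = (np.arange(12) + 0.5) / 12.0            # month centers, fraction of year
    a1 = 2.0 / 12.0 * np.sum(x * np.cos(2 * np.pi * t))
    b1 = 2.0 / 12.0 * np.sum(x * np.sin(2 * np.pi * t))
    amplitude = np.hypot(a1, b1)                # amplitude of the annual harmonic
    phase = np.arctan2(b1, a1)                  # radians; months to maximum ~ phase/(2*pi)*12
    return amplitude, phase
```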

Full access