Search Results

You are viewing 1–3 of 3 items for Author or Editor: J. J. Hnilo
Masao Kanamitsu, Wesley Ebisuzaki, Jack Woollen, Shi-Keng Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter

The NCEP–DOE Atmospheric Model Intercomparison Project (AMIP-II) reanalysis is a follow-on to the “50-year” (1948–present) NCEP–NCAR Reanalysis Project. The NCEP–DOE AMIP-II reanalysis covers the “20-year” satellite period (1979 to the present) and uses an updated forecast model, an updated data assimilation system, improved diagnostic outputs, and fixes for known processing problems of the NCEP–NCAR reanalysis. Only minor differences are found in the primary analysis variables, such as free-atmospheric geopotential height and winds in the Northern Hemisphere extratropics, while significant improvements over the NCEP–NCAR reanalysis are made in land surface parameters and land–ocean fluxes. This analysis can be used as a supplement to the NCEP–NCAR reanalysis, especially where the original analysis has problems. The differences between the two analyses also provide a measure of the uncertainty in current analyses.

Thomas J. Phillips, Gerald L. Potter, David L. Williamson, Richard T. Cederwall, James S. Boyle, Michael Fiorino, Justin J. Hnilo, Jerry G. Olson, Shaocheng Xie, and J. John Yio

To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of 6-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights into how these schemes might be improved, and modified parameterizations can then be tested in the same framework.
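At its simplest, the forecast-evaluation step described above (scoring short-range GCM forecasts against verifying analyses) reduces to an error metric per forecast lead time. A minimal sketch follows; the function name and the array layout are illustrative assumptions, not code from CAPT:

```python
import numpy as np

def forecast_rmse(forecasts, analyses):
    """Root-mean-square error of forecast fields against verifying analyses.

    forecasts, analyses : arrays of shape (n_leads, n_points), one row per
    forecast lead time (e.g. the 6-hourly verification times in the text).
    Returns one RMSE value per lead time.
    """
    f = np.asarray(forecasts, dtype=float)
    a = np.asarray(analyses, dtype=float)
    # Average the squared error over grid points, separately for each lead
    return np.sqrt(np.mean((f - a) ** 2, axis=-1))
```

Plotting such RMSE curves against lead time is one common way to see how quickly a parameterization-related error grows from a realistic initial state.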

To further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.

W. Lawrence Gates, James S. Boyle, Curt Covey, Clyde G. Dease, Charles M. Doutriaux, Robert S. Drach, Michael Fiorino, Peter J. Gleckler, Justin J. Hnilo, Susan M. Marlais, Thomas J. Phillips, Gerald L. Potter, Benjamin D. Santer, Kenneth R. Sperber, Karl E. Taylor, and Dean N. Williams

The Atmospheric Model Intercomparison Project (AMIP), initiated in 1989 under the auspices of the World Climate Research Programme, undertook the systematic validation, diagnosis, and intercomparison of the performance of atmospheric general circulation models. For this purpose all models were required to simulate the evolution of the climate during the decade 1979–88, subject to the observed monthly average sea surface temperature and sea ice and a common prescribed atmospheric CO2 concentration and solar constant. By 1995, 31 modeling groups, representing virtually the entire international atmospheric modeling community, had contributed the required standard output of the monthly means of selected statistics. These data have been analyzed by the participating modeling groups, by the Program for Climate Model Diagnosis and Intercomparison, and by the more than two dozen AMIP diagnostic subprojects that have been established to examine specific aspects of the models' performance. Here the analysis and validation of the AMIP results as a whole are summarized in order to document the overall performance of atmospheric general circulation–climate models as of the early 1990s. The infrastructure and plans for continuation of the AMIP project are also reported on.

Although there are apparent model outliers in each simulated variable examined, validation of the AMIP models' ensemble mean shows that the average large-scale seasonal distributions of pressure, temperature, and circulation are reasonably close to what are believed to be the best observational estimates available. The large-scale structure of the ensemble mean precipitation and ocean surface heat flux also resembles the observed estimates but shows particularly large intermodel differences in low latitudes. The total cloudiness, on the other hand, is rather poorly simulated, especially in the Southern Hemisphere. The models' simulation of the seasonal cycle (as represented by the amplitude and phase of the first annual harmonic of sea level pressure) closely resembles the observed variation in almost all regions. The ensemble's simulation of the interannual variability of sea level pressure in the tropical Pacific is reasonably close to that observed (except for its underestimate of the amplitude of major El Niños), while the interannual variability is less well simulated in midlatitudes. When analyzed in terms of the variability of the evolution of their combined space–time patterns in comparison to observations, the AMIP models are seen to exhibit a wide range of accuracy, with no single model performing best in all respects considered.
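The seasonal-cycle diagnostic mentioned above, the amplitude and phase of the first annual harmonic, can be extracted from a 12-month climatology as the discrete Fourier coefficient at one cycle per year. A minimal sketch, assuming a 12-value monthly-mean input (the function name and conventions are illustrative, not from the paper):

```python
import numpy as np

def first_annual_harmonic(monthly_means):
    """Amplitude and phase of the first annual harmonic of a
    12-value monthly climatology (e.g. sea level pressure at one point).

    Returns (amplitude, phase), with phase expressed as the 0-based
    month index at which the harmonic peaks.
    """
    x = np.asarray(monthly_means, dtype=float)
    n = 12
    t = np.arange(n)
    # Complex Fourier coefficient at one cycle per year
    c1 = np.sum(x * np.exp(-2j * np.pi * t / n)) / n
    amplitude = 2.0 * np.abs(c1)          # peak-to-mean amplitude A
    phase = (-np.angle(c1) * n / (2.0 * np.pi)) % n  # month of maximum
    return amplitude, phase
```

For a pure cosine climatology A·cos(2π(t − t₀)/12) this recovers A and t₀ exactly; comparing these two numbers between a model and observations is the basis of the seasonal-cycle validation described in the abstract.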

Analysis of the subset of the original AMIP models for which revised versions have subsequently been used to revisit the experiment shows a substantial reduction of the models' systematic errors in simulating cloudiness, but only a slight reduction of the mean seasonal errors of most other variables. In order to better understand the nature of these errors and to accelerate the rate of model improvement, an expanded and continuing project (AMIP II) is being undertaken in which analysis and intercomparison will address a wider range of variables and processes, using an improved diagnostic and experimental infrastructure.
