Search Results
1–10 of 34 items for Author or Editor: Philip J. Rasch
Abstract
A new discretization of the two-dimensional transport equation is introduced. The scheme is two-time-level, shape preserving, and solves the transport equation in flux form, using an upwind-biased stencil of points. To ameliorate the very restrictive constraint on the length of the time step that the Courant-Friedrichs-Lewy condition imposes near the pole on a regular (equiangular) grid, the scheme is generalized to work on a reduced grid, which allows a much longer time step. The method is applied to the test of advection of a coherent structure by solid-body rotation on the sphere over the poles. The scheme is shown to be as accurate as current semi-Lagrangian algorithms and is inherently conservative. Tests that use operator splitting in its simplest form (where the 2D transport operator is approximated by a sequence of 1D operators for a nondivergent flow field) reveal large errors compared to the proposed unsplit scheme and suggest that the divergence compensation term ought to be included in split formulations in this computational geometry.
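To make the flux-form, shape-preserving idea concrete, here is a minimal one-dimensional sketch (first-order upwind in flux form on a periodic grid). All names are illustrative; the paper's scheme is two-dimensional, upwind-biased, higher order, and formulated on a reduced grid near the poles.

```python
import numpy as np

def upwind_flux_form_step(q, u, dx, dt):
    """One conservative flux-form step of 1D upwind advection on a
    periodic grid. Fluxes are defined at cell edges, so sum(q)*dx is
    conserved by construction, and first-order upwind creates no new
    extrema (shape preserving) for Courant numbers <= 1."""
    qm = np.roll(q, 1)                        # q[i-1]
    flux = np.where(u >= 0.0, u * qm, u * q)  # F[i]: flux through left edge of cell i
    # Conservative update: dq/dt = -(F[i+1] - F[i]) / dx
    return q - dt / dx * (np.roll(flux, -1) - flux)

# Example: transport of a square pulse at Courant number 0.5.
n = 100
q = np.zeros(n); q[40:60] = 1.0
u = np.full(n, 1.0)
for _ in range(50):
    q = upwind_flux_form_step(q, u, dx=1.0, dt=0.5)
```

The reduced-grid generalization keeps this flux form but coarsens the longitudinal spacing near the poles, which is what relaxes the CFL time step limit.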
Abstract
An interpretation of the iterative schemes of nonlinear normal mode initialization (Machenhauer, Kitade, and Tribbia) is introduced in which the schemes are regarded as sequential applications of filters. The response functions of these filters provide a means of evaluating the convergence of the iterative methods. The filters annihilate the component of the signal corresponding to the theoretical frequency determined from the normal mode analysis. The actual frequencies are, of course, determined both by the linear terms in the differential equations (which are accounted for by the normal mode analysis) and by the nonlinear terms, which shift the actual frequencies. The interpretation is extremely simple, but has not appeared previously in the literature. It is primarily qualitative. It explains, in a qualitative sense, the convergence problems encountered when attempting to initialize modes with small equivalent depths, or models that include diabatic physical processes. It complements the analyses of Phillips and Errico. The analysis of Tribbia's higher-order initialization suggests that higher-order steps can be more sensitive to convergence problems than the earlier methods.
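The filter picture can be made concrete for the Machenhauer scheme with a standard one-mode argument (notation assumed here, not taken from the paper):

```latex
% One normal mode with eigenfrequency \omega and nonlinear forcing N:
%   da/dt + i\omega a = N(a);  Machenhauer sets da/dt = 0 and iterates
\[
  a_{k+1} \;=\; \frac{N(a_k)}{i\omega} .
\]
% If a component of the solution actually oscillates at frequency \nu
% (shifted from \omega by the nonlinear terms), then N = i(\omega - \nu)\,a,
% and each pass of the "filter" multiplies that component by
\[
  R(\nu) \;=\; 1 - \frac{\nu}{\omega},
  \qquad \text{damped} \iff \left| 1 - \frac{\nu}{\omega} \right| < 1
  \iff 0 < \nu < 2\omega .
\]
% Forcing with time scales short compared to 1/\omega (\nu \gg \omega),
% as for small equivalent depths or diabatic physics, pushes |R| above 1
% and the iteration diverges.
```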
Abstract
The analysis of Part I suggested that the temporal characteristics of the nonlinear terms in the equations of motion could introduce convergence problems in currently used schemes for normal mode initialization (NMI). In Part II we 1) introduce a new scheme that is more robust, 2) use a large complex model to verify the existence of problem characteristics in some of the nonlinear parameterizations, and 3) make intercomparisons between new and old schemes.
We find that the time scales of some parameterizations used in models of the atmosphere are associated directly with the length of the time step. Some of these parameterizations are used routinely in almost all large numerical models; others provide insight into problems with similar parameterizations. This sensitivity of time scale to time step is due partly to the formulation of the parameterizations, and partly to their highly nonlinear nature and the inconsistency between spectral and grid resolutions in a Galerkin spectral transform model. For the model used here, moist and dry convective adjustment and large-scale condensation are primarily responsible for the short-time-scale forcing. This short-time-scale forcing is the primary reason for the failure of current NMI schemes.
The new scheme adjusts to the impact of the forcing on the mode, and converges in situations where the others diverge (during diabatic initializations and initializations of normal modes with small equivalent depths). The Hadley circulation, eliminated in adiabatic initializations, can now be retained. The specification of the moisture field is important in retaining this circulation. The diabatic initialization shows a small improvement over the adiabatic initialization when compared with the balance condition defined by the model itself.
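As a toy illustration of the kind of adjustment involved (a hypothetical sketch, not the specific scheme introduced in this paper), an under-relaxed Machenhauer-type update widens the band of forcing frequencies for which the iteration converges:

```python
import numpy as np

def relaxed_nmi_step(a, N, omega, gamma=0.5):
    """One under-relaxed Machenhauer-type update for a single mode.
    gamma = 1 recovers the classical scheme; gamma < 1 trades speed of
    convergence for a wider band of convergent forcing frequencies."""
    balanced = N(a) / (1j * omega)       # classical Machenhauer target
    return a + gamma * (balanced - a)    # relax toward it

# Toy mode whose nonlinear term oscillates faster than the mode itself
# (nu/omega = 2.5): the classical iteration diverges, the relaxed converges.
omega, nu = 1.0, 2.5
N = lambda a: 1j * (omega - nu) * a + 0.1   # forcing component at frequency nu
for gamma in (1.0, 0.5):
    a = 1.0 + 0.0j
    for _ in range(30):
        a = relaxed_nmi_step(a, N, omega, gamma)
    print(gamma, abs(a))   # gamma=1.0 blows up; gamma=0.5 settles down
```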
Abstract
A procedure for adjusting the temperature and humidity analyses used as initial conditions for numerical weather prediction models, so that diagnosed distributions of cumulus convection exist during the initial stages of the forecast, is applied in a global atmospheric model. This cumulus initialization procedure is designed to ameliorate the problem of numerical weather prediction spinup, the departure of diabatic forcing from diagnosis and observation that is characteristic of the early portions of numerical weather prediction integrations. Formally, cumulus initialization consists of minimizing the adjustments to the original analyses of temperature and humidity, subject to nonlinear constraints imposed by the cumulus parameterization in the numerical weather prediction model, when the cumulus heating is taken as known from diagnostic methods as an initial condition. Experiments with a global model exhibiting severe spinup when initialized with an analysis subject only to diabatic normal-mode initialization show that cumulus initialization can recover initial horizontal and vertical distributions of latent heat release (produced synthetically by spinning up the same global model through an independent integration using an earlier analysis to provide initial conditions) quite successfully. This recovery depends on the simultaneous initialization of the divergence, temperature, and humidity fields. The magnitudes of the adjustments in the temperature and humidity fields produced by cumulus initialization are smaller than the changes in those fields produced by the global model itself as it is integrated forward through spinup from an analysis without cumulus initialization. The cumulus initialization procedure can be modified to allow for uncertainties in the diagnosis of initial heating rates. Despite the successful initial recovery of cumulus heating, further adjustment occurs as the global model is integrated forward from a cumulus-initialized analysis; this adjustment is characterized by an overshoot in the intensity of both latent heat release and divergence. A severe imbalance between globally averaged precipitation and evaporation that occurred without cumulus initialization is considerably ameliorated in integrations with cumulus initialization.
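Schematically, the constrained minimization can be sketched as below, with the hard constraint replaced by a quadratic penalty. The function heating_from, the target Q_target, and the penalty weight are illustrative placeholders, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

def cumulus_initialize(T0, q0, heating_from, Q_target, weight=1e3):
    """Find the smallest adjustments (dT, dq) to the analyzed temperature
    and humidity such that the cumulus parameterization, supplied by the
    caller as heating_from(T, q), reproduces the diagnosed heating
    Q_target.  The constraint is imposed here as a quadratic penalty."""
    n = T0.size
    def cost(x):
        dT, dq = x[:n], x[n:]
        mismatch = heating_from(T0 + dT, q0 + dq) - Q_target
        return np.sum(dT**2) + np.sum(dq**2) + weight * np.sum(mismatch**2)
    x = minimize(cost, np.zeros(2 * n), method="BFGS").x
    return x[:n], x[n:]

# Toy single-level example with a hypothetical linearized heating scheme.
heating = lambda T, q: 0.5 * q - 0.1 * T
dT, dq = cumulus_initialize(np.array([300.0]), np.array([10.0]),
                            heating, Q_target=np.array([-24.0]))
```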
Abstract
The more attractive one-dimensional, shape-preserving interpolation schemes, as determined from a companion study, are applied to two-dimensional semi-Lagrangian advection in plane and spherical geometry. The Hermite cubic and a rational cubic are considered for the interpolation form. Both require estimates of derivatives at data points. A cubic derivative form and the derivative estimates of Hyman and Akima are considered. The derivative estimates are also modified to ensure that the interpolant is monotonic; the modification depends on the interpolation form.
Three methods are used to apply the interpolators to two-dimensional semi-Lagrangian advection. The first consists of fractional time steps or time splitting. The method has noticeable displacement errors and larger diffusion than the other methods. The second consists of two-dimensional interpolants with formal definitions of a two-dimensional monotonic surface and application of a two-dimensional monotonicity constraint. This approach is examined for the Hermite cubic interpolant with cubic derivative estimates and produces very good results. The additional complications expected in extending it to three dimensions, and the lack of corresponding two-dimensional forms for the rational cubic, led to the consideration of the third approach: a tensor-product form of monotonic one-dimensional interpolants. Although a description of the properties of the implied interpolating surface is difficult to obtain, the results show this to be a viable approach. Of the schemes considered, the Hermite cubic coupled with the Akima derivative estimate, modified to satisfy a C0 monotonicity condition, produces the best solution to our test cases. The C1 monotonic forms of the Hermite cubic have serious differential phase errors that distort the test patterns. The C1 forms of the rational cubic do not show this distortion and produce virtually the same solutions as the corresponding C0 forms. The second-best scheme (or best, if C1 continuity is desired) is the rational cubic with Hyman derivative approximations modified to satisfy a C1 monotonicity condition.
The two-dimensional interpolants are easily applied to spherical geometry using the natural polar boundary conditions. No problems are evident in advecting test shapes over the poles. A procedure is also introduced to calculate the departure point in spherical geometry. The scheme uses local geodesic coordinate systems based on each arrival point. It is shown to be comparable in accuracy to the one proposed by Ritchie, which uses a Cartesian system in place of the local geodesic system.
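For readers unfamiliar with monotonic Hermite interpolation, the following is a minimal sketch of the general technique (a Fritsch-Carlson-style C0 derivative limiter; the paper's exact Akima and Hyman variants differ in detail):

```python
import numpy as np

def monotone_hermite(xk, yk, x):
    """Hermite cubic interpolation with derivative estimates limited so
    the interpolant preserves the monotonicity of the data."""
    d = np.gradient(yk, xk)              # centered derivative estimates
    s = np.diff(yk) / np.diff(xk)        # secant slopes on each interval
    for i in range(len(xk)):
        sl = s[i - 1] if i > 0 else s[0]        # slope to the left
        sr = s[i] if i < len(s) else s[-1]      # slope to the right
        if sl * sr <= 0.0:
            d[i] = 0.0                   # local extremum: flatten
        else:
            # Cap at 3x the adjacent secant slopes (sufficient condition).
            d[i] = np.sign(sl) * min(abs(d[i]), 3 * abs(sl), 3 * abs(sr))
    # Evaluate the cubic Hermite basis on each query point.
    j = np.clip(np.searchsorted(xk, x) - 1, 0, len(xk) - 2)
    h = xk[j + 1] - xk[j]
    t = (x - xk[j]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2; h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t);       h11 = t ** 2 * (t - 1)
    return h00 * yk[j] + h * h10 * d[j] + h01 * yk[j + 1] + h * h11 * d[j + 1]
```

In a semi-Lagrangian step, such a routine is evaluated at the departure points; the tensor-product approach applies the one-dimensional interpolant direction by direction.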
Abstract
The purpose of this paper is twofold. First, a formalism is presented that extends the conceptual framework identified by Ritchie as the “semi-Lagrangian method without interpolation.” While his term refers to a particular class of semi-Lagrangian approximations, the idea is actually much more general. The formalism may be used to convert any advection algorithm into the semi-Lagrangian format, and it makes most algorithms suitable for the integration of flows characterized by large Courant numbers. The formalism is presented in an arbitrary curvilinear system of coordinates. Second, exploiting the generality of the theoretical considerations, the formalism is implemented in solving a practical problem of scalar advection in spherical geometry. Rather than elaborating on Ritchie's semi-Lagrangian techniques employing centered-in-time differencing, the focus is on the alternative of forward-in-time, dissipative finite-difference schemes. This class of schemes offers attractive computational properties in terms of the solutions' accuracy and preservation of sign or monotonicity.
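The core trick can be sketched in one dimension: split the displacement into an integer number of grid cells, which is an exact shift requiring no interpolation, plus a residual Courant number below one that any forward-in-time scheme can handle. A minimal sketch, assuming uniform velocity (donor-cell used here for the residual; the paper's interest is in more accurate dissipative schemes):

```python
import numpy as np

def sl_without_interpolation_step(q, courant):
    """One step of 1D advection at an arbitrary Courant number on a
    periodic grid: exact integer shift + donor-cell residual update."""
    n_shift = int(np.floor(courant))     # integer part of the displacement
    resid = courant - n_shift            # residual Courant number in [0, 1)
    q = np.roll(q, n_shift)              # exact, dissipation-free shift
    return q - resid * (q - np.roll(q, 1))   # forward-in-time donor cell

# Stable at Courant number 7.3, where the underlying Eulerian scheme
# alone would be violently unstable.
q = np.zeros(200); q[20:40] = 1.0
for _ in range(25):
    q = sl_without_interpolation_step(q, courant=7.3)
```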
Abstract
A parameterization for specifying subgrid-scale cloud distributions in atmospheric models is developed. The fractional area of a grid-scale column in which clouds from two levels overlap (i.e., the cloud overlap probability) is described in terms of the correlation between horizontal cloudiness functions in the two levels. Cloud distributions that are useful for radiative transfer and cloud microphysical calculations are then determined from cloud fraction at individual model levels and a decorrelation depth. All pair-wise overlap probabilities among cloudy levels are obtained from the cloudiness correlations. However, those probabilities can overconstrain the determination of the cloud distribution. It is found that cloud fraction in each level along with the overlap probabilities among nearest neighbor cloudy levels is sufficient to specify the full cloud distribution.
The parameterization has both practical and interpretative advantages over existing parameterizations. The parameterized cloud fields are consistent with physically meaningful distributions at arbitrary vertical resolution. In particular, bulk properties of the distribution, such as total cloud fraction and radiative fluxes calculated from it, approach asymptotic values as the vertical resolution increases. Those values are nearly obtained once the cloud distribution is resolved; that is, if the thickness of cloudy levels is less than one half of the decorrelation depth. Furthermore, the decorrelation depth can, in principle, be specified as a function of space and time, which allows one to construct a wide range of cloud distributions from any given vertical profile of cloud fraction.
The parameterization is combined with radiative transfer calculations to examine the sensitivity of radiative fluxes to changes of the decorrelation depth. Calculations using idealized cloud distributions display strong sensitivities (∼50 W m⁻²) to changes of decorrelation depth. Those sensitivities arise primarily from the sensitivity of total cloud fraction to that parameter. Radiative fluxes calculated from a version of the National Center for Atmospheric Research Community Climate Model (CCM) show only a small sensitivity. The reason for this small sensitivity is traced to the propensity of the CCM to produce overcast conditions within individual model levels. Thus, in order for the parameterization to be fully useful, it is necessary that the other cloud parameterizations in the atmospheric model attain a threshold of realism.
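A minimal sketch of a decorrelation-depth overlap calculation in the spirit described above (the exponential blending between maximum and random overlap, and the nearest-neighbor sweep, are common formulations rather than code from the paper):

```python
import numpy as np

def pair_overlap(ci, cj, dz, L):
    """Overlap probability (fraction of the column where BOTH levels are
    cloudy), blended between maximum overlap (min) and random overlap
    (product) by a correlation decaying over the decorrelation depth L."""
    r = np.exp(-dz / L)                  # inter-level cloudiness correlation
    return r * min(ci, cj) + (1.0 - r) * ci * cj

def total_cloud_fraction(cf, z, L):
    """Approximate total cloud cover from level cloud fractions cf at
    heights z, combining nearest neighbors only (per the abstract,
    nearest-neighbor overlaps suffice to specify the distribution)."""
    total = cf[0]
    for k in range(1, len(cf)):
        both = pair_overlap(cf[k - 1], cf[k], abs(z[k] - z[k - 1]), L)
        # New cover added by level k = its fraction minus the part
        # already covered by level k-1.
        total = min(1.0, total + cf[k] - both)
    return total

cf = np.array([0.3, 0.4, 0.2]); z = np.array([2000.0, 3000.0, 4000.0])
print(total_cloud_fraction(cf, z, L=2000.0))  # between the max- and random-overlap limits
```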
Abstract
Two widely used approaches for parameterizing tracer transport based on convective mass fluxes are the plume ensemble formulation (PEF) and the bulk formulation (BF). Here the behavior of these two is contrasted for the specific case in which the BF airmass fluxes are derived as a direct simplification of an explicit PEF. Relative to the PEF, the BF has a greater rate of entrainment of midtropospheric air into the parcels that reach the highest altitudes, and thus is expected to compute less efficient transport of surface-layer tracers to the upper troposphere. In this study, this difference is quantified using a new algorithm for computing mass-conserving, monotonic tracer transport for both the BF and PEF, along with a technique for decomposing a bulk mass flux profile into a set of consistent, discrete plumes for use in the PEF. Runs with a 3D global chemistry transport model (MATCH) show that the BF is likely to be an adequate approximation for most tracers with lifetimes of a week or longer. However, for short-lived tracers (lifetimes of a couple of days or less) the BF results in significantly less efficient transport to the upper troposphere than the PEF, with differences exceeding 30% on a monthly zonal-mean basis. Implications of these results for tropospheric chemistry are discussed.
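The building block shared by both formulations is the steady entraining-plume tracer budget; a minimal sketch (an illustrative discretization, not MATCH's actual transport algorithm):

```python
import numpy as np

def updraft_tracer_profile(c_env, M, E, D):
    """In-updraft tracer mixing ratio from the steady plume budget
    d(M c_u)/dz = E c_env - D c_u, discretized upward from cloud base:
        M[k] c_u[k] = M[k-1] c_u[k-1] + E[k] c_env[k] - D[k] c_u[k],
    with updraft mass flux M and per-layer entrainment E / detrainment D
    satisfying M[k] = M[k-1] + E[k] - D[k].  The BF uses one such profile;
    the PEF uses one per plume, each with its own entrainment rate."""
    nz = len(c_env)
    c_u = np.empty(nz)
    c_u[0] = c_env[0]                    # parcels start with sub-cloud air
    for k in range(1, nz):
        den = M[k] + D[k]
        c_u[k] = ((M[k - 1] * c_u[k - 1] + E[k] * c_env[k]) / den
                  if den > 0 else c_env[k])
    return c_u
```

The PEF/BF contrast falls out of this budget: a single bulk plume keeps entraining midtropospheric air all the way up, whereas an ensemble includes nearly undilute members that deliver surface-layer tracer to the upper troposphere more efficiently.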
Abstract
Transport of momentum by convection is an important process affecting the global circulation. Owing to the lack of global observations, quantifying the impact of this process on tropospheric climate is difficult. Here two convective momentum transport parameterizations, those of Schneider and Lindzen and of Gregory et al., are implemented in the Community Atmosphere Model, version 3 (CAM3), and their effect on global climate is examined in detail. An analysis of the tropospheric zonal momentum budget reveals that convective momentum transport affects tropospheric climate mainly through changes to the Coriolis torque. These changes result in an improved representation of the Hadley circulation: in December–February, the upward branch of the circulation is weakened in the Northern Hemisphere and strengthened in the Southern Hemisphere, and the lower northerly branch is weakened. In June–August, similar improvements are noted. The inclusion of convective momentum transport in CAM3 reduces many of the model's biases in the representation of surface winds, as well as in the representation of tropical convection. In the annual mean, the tropical easterly bias, the subtropical westerly bias, and the bias in the 60°S jet are improved. Representation of convection is improved along the equatorial belt, with decreased precipitation in the Indian Ocean and increased precipitation in the western Pacific. The improvements in the representation of tropospheric climate are greater with the Schneider and Lindzen parameterization.
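The budget term being analyzed can be sketched in the flux-divergence form common to mass-flux CMT schemes (schematic only, without the pressure-gradient term that distinguishes the Gregory et al. formulation; not CAM3 source code):

```python
import numpy as np

def cmt_tendency(u_env, u_up, M, rho, z):
    """Apparent zonal-momentum tendency from convective transport,
    du/dt = -(1/rho) d[M (u_up - u_env)]/dz, where M is the updraft
    mass flux and u_up the in-updraft zonal wind."""
    flux = M * (u_up - u_env)            # convective momentum flux
    return -np.gradient(flux, z) / rho   # vertical flux divergence
```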
Abstract
This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite-volume (FV) dynamical core of the Community Atmosphere Model (CAM). Results from a 20-yr climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ − P) coordinate. The same physical parameterization package is employed in all three dynamical cores.
The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere leading to much improved atmospheric temperatures. The authors show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea level pressure field in the Northern Hemisphere.
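For reference, the two coordinate families being compared can be written compactly (standard definitions, not quoted from the paper):

```latex
% Hybrid sigma-pressure coordinate (the Eulerian and FV control runs):
\[
  p(\eta) \;=\; A(\eta)\,p_0 \;+\; B(\eta)\,p_s ,
\]
% terrain following (B \to 1) near the surface, pure pressure (B \to 0)
% aloft.  A hybrid-isentropic coordinate instead relaxes toward potential
% temperature \theta above the boundary layer, schematically
%   \eta \sim \sigma   near the surface,
%   \eta \sim \theta   in the free atmosphere,
% so that coordinate surfaces are nearly material under adiabatic flow,
% the property behind the improved heat transport noted above.
```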