1. Introduction
The development of models for numerical simulation of the atmosphere and oceans was one of the great scientific triumphs of the twentieth century. The models have added enormously to our understanding of the diverse and complex processes at work in the Earth system, and to our ability to produce realistic simulations of both near-future weather and the longer-term future of the climate system. Understanding and simulation are the two broad goals of model development.
Today’s global atmospheric models are commonly coupled with ocean models, sea ice models, and land surface models that include representations of terrestrial vegetation and the carbon cycle. Because of the diversity of processes represented, it is becoming more common to refer to these large coupled models as “Earth system models (ESMs),” especially when the carbon cycle is included. In ESMs, the atmosphere, ocean, sea ice, and land surface models are included as submodels, which can be viewed as components of the larger coupled model. Some ESMs also include components representing atmospheric and marine chemistry, terrestrial ice sheets, ocean biology, and biogeochemistry, but we will not discuss those topics in this chapter. The atmosphere and ocean submodels of ESMs are often referred to as general circulation models (GCMs).
Each component of an ESM includes exchanges of mass, momentum, and energy with one or more of the other components. The atmosphere model is the only component of an ESM that carries out exchanges with all of the other components.
The air, water, and ice are in constant motion. In the atmospheric component of an ESM, the adiabatic terms of the equation of motion, the thermodynamic equation, and the continuity equations for dry air, moisture, and chemical species are solved on a three-dimensional grid using what is called a “dynamical core.” The horizontal and vertical grid spacings determine the spatial “resolution” of the model. This chapter includes an overview of the evolution of dynamical cores for global models of the atmosphere and ocean.
Atmospheric models also include parametric representations, called “parameterizations,” that are designed to incorporate the transports by radiation, precipitation, and the unresolved or “subgrid scale” motions of the air, as well as the phase changes of water, averaged up to the grid scale. This chapter includes a selective overview of the evolution of the parameterizations used in global atmospheric models. All of the parameterized processes are formulated in terms of the fields that are resolved by the model’s dynamical core. A fundamental issue in parameterization development is that the atmosphere and ocean contain eddies on all scales. Early studies aimed to choose the grid spacing so that it coincided with meteorologically inactive scales (e.g., Fiedler and Panofsky 1970), but it soon became clear that there is no such “spectral gap”; eddies exist on all scales (e.g., Nastrom et al. 1984), although of course some are more consequential than others.
The dynamical cores of ocean models are designed to cope with the complex geometry of the ocean basins. Numerical modeling of the ocean began somewhat later than numerical modeling of the atmosphere, but has today reached a comparable level of intellectual maturity. This chapter discusses the history of the hydrostatic primitive equation ocean models used as components of ESMs. Ocean models include parameterizations of the fluxes associated with unresolved motions of the water. We focus on dynamical and numerical aspects, and do not discuss regional and coastal ocean applications, biogeochemistry, or process modeling. Further discussion of ocean physics and dynamics is given in the chapter by Carl Wunsch and Raffaele Ferrari in this volume (Wunsch and Ferrari 2019).
Even sea ice and terrestrial ice sheet models can be said to have dynamical cores, in the sense that they include dynamical equations that govern the motions of the ice. Prior to 1950, there were no publications about sea ice models in English—possibly none at all—and few scientists had ever seen sea ice. Nevertheless, long before weather and climate models simulated the mass or momentum balance of sea ice, scientists recognized the importance for the climate system of the high albedo of sea ice. Early climate modelers used energy balance models with an ice–albedo feedback parameterized by raising the surface albedo when the surface temperature dropped below a critical value (Budyko 1969; Sellers 1969). When subjected to climate forcing, such as a reduction in the solar radiation, the energy balance models respond with cooling that is strongest in the high latitudes—a phenomenon now widely known as polar amplification.
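The ice–albedo feedback in those energy balance models is simple enough to sketch in a few lines of code. The toy model below is a generic Budyko–Sellers-style zonal-mean energy balance model with a temperature-dependent albedo switch; the coefficients and the linear transport term are rough, illustrative values rather than those of Budyko (1969) or Sellers (1969). Lowering S0 in this sketch produces cooling that is strongest at high latitudes, the polar amplification noted above.

```python
import numpy as np

# Toy zonal-mean energy balance model with an ice-albedo switch (illustrative
# coefficients only, not those of any published model).
nlat = 90
x = np.sin(np.deg2rad(np.linspace(-89.0, 89.0, nlat)))             # sine of latitude
S0 = 1361.0                                                         # solar constant (W m-2)
insolation = (S0 / 4.0) * (1.0 - 0.48 * 0.5 * (3.0 * x**2 - 1.0))   # annual-mean shape
A, B = 203.3, 2.09        # outgoing longwave radiation = A + B*T (T in deg C)
D = 0.55                  # Budyko-style relaxation toward the global-mean temperature
alpha_warm, alpha_ice, T_crit = 0.30, 0.62, -10.0

T = np.full(nlat, 15.0)   # initial temperature (deg C)
for _ in range(5000):     # pseudo-time stepping toward equilibrium
    albedo = np.where(T < T_crit, alpha_ice, alpha_warm)            # ice-albedo feedback
    net = insolation * (1.0 - albedo) - (A + B * T) - D * (T - T.mean())
    T += 0.02 * net       # the 0.02 factor stands in for an inverse heat capacity

print(f"equator: {T[nlat // 2]:.1f} C   pole: {T[-1]:.1f} C")
```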
Land surface models have no dynamical cores; in that sense, they are “all parameterization.” We humans live on the land surface, so it is hardly surprising that our science has spent a lot of effort to understand and predict conditions there. From the point of view of the atmosphere, the land is merely a lower boundary condition, but it is also where we grow most of our food, build our cities, and live our lives. Quantitative modeling of land surface processes goes back well over 100 years, primarily with applications to agriculture and water resources. The land surface is an important mediator in the flows of energy, water, carbon, and momentum. The albedo of the land surface is highly variable. Ordinarily, most net radiative energy absorbed by the surface is transferred to the atmosphere as turbulent fluxes of sensible and latent heat, with only a small residual driving changes of heat storage in the soil. These turbulent energy fluxes are important drivers of atmospheric energetics and circulation. Water from precipitation can infiltrate the surface or run off, and infiltrated water is stored and can be released later as vapor. The land surface is a strong sink of atmospheric momentum, and the friction arising from the land surface is felt throughout the atmospheric boundary layer and sometimes far beyond. The topography of the land surface has an enormous impact on the circulation of the atmosphere. Critically, much of the land surface is alive. It is inhabited by vegetation and by microbes in the soil, whose biological processes mediate the partition between turbulent fluxes of sensible and latent heat and regulate the ability of the atmosphere to extract water from beneath the ground. Vegetation is an important determinant of the surface albedo and surface friction. The responses of plants and soil microbes to changes in atmospheric conditions can dramatically affect the surface fluxes.
The purpose of this chapter is to give an account of the century or so of development work that led to today’s ESMs, starting from the early years of the twentieth century. Model development involves scientific analysis of how nature works, so that the model can work in the same way as far as possible. Some engineering is also involved, especially to achieve optimal performance on the available computers.
In writing this chapter, we have assumed that the readers have some familiarity with numerical modeling of the Earth system, but of course we have tried to avoid unnecessary technical details. Our chapter contains no equations. Applications of the models are briefly mentioned, but not emphasized. The story of the development of ESMs is huge and complicated, and our version of it is unavoidably incomplete. Space limitations make it impossible for us to mention all of the important contributions. We acknowledge our debt to earlier accounts, including those of Smagorinsky (1983), Wiin-Nielsen (1991), Arakawa (2000), Lynch (2006), Washington (2007), Edwards (2010, 2011), Weart (2010), Donner et al. (2011), Harper (2012), Bauer et al. (2015), and Fleming (2016).
The chapter is organized chronologically, to the extent possible. The next section deals with the gestational period from about 1900 to 1950. Then, starting with the 1950s, the sections are organized by decade, but with some exceptions to maintain narrative continuity. We tell the story of each decade using several subsections, some of which are focused on particular ESM components. We have attempted to interweave our accounts of the developments of numerical methods, radiative transfer, turbulence and cloud parameterizations, ocean and sea ice modeling, and land surface modeling, because of course that is the way it really happened. The all-important and rapidly evolving “supercomputers” needed to run the models are also mentioned in several places.
Our chapter inevitably infringes on the subject of numerical weather prediction, which is a major focus of a separate chapter in this volume by Stanley Benjamin and colleagues (Benjamin et al. 2019).
2. Before the beginning
a. Early work on dynamical cores
Concepts fundamental to Earth system modeling were developed in the early years of the twentieth century. Three visionary scientists played particularly central roles (Fig. 12-1). The great American meteorologist Cleveland Abbe recognized that meteorology is essentially the application of hydrodynamics and thermodynamics to the atmosphere (Abbe 1901), and he identified the system of mathematical equations that govern the evolution of the atmosphere (Willis and Hooke 2006). The Norwegian scientist Vilhelm Bjerknes undertook a more explicit analysis of the weather prediction problem from a scientific perspective (Bjerknes 1904). His stated goal was to make meteorology an exact science, a true physics of the atmosphere. He argued that it should be possible to predict changes in the weather by solving systems of partial differential equations, which is exactly what we do today.
Fig. 12-1. (left) Cleveland Abbe (1838–1916). (middle) Vilhelm Bjerknes (1862–1951). (right) Lewis Fry Richardson (1881–1953).
The English Quaker mathematician, Lewis Fry Richardson, went further. He wanted a worked example for his book “Weather Prediction by Numerical Process” (Richardson 1922). Partly to create such an example, he attempted what is now called numerical weather prediction (NWP): a direct (but approximate) solution of the equations of motion. The result was his famous “failed” numerical forecast (actually a hindcast) for 20 May 1910. He carried out the calculations by hand, in the intervals between driving for the Friends Ambulance Unit during the war in France (Ashford 1985; Lynch 2006). Although his results were not realistic, his achievement was heroic, and his book was remarkably prescient. His overall approach bears a striking resemblance to that used in modern weather and climate models, and he appreciated many of the issues that still preoccupy modelers today. In particular, he understood that the large-scale dynamics of the atmosphere would be resolved, while other processes, such as radiation, boundary layer turbulence, and cloud processes, would have to be parameterized. He used what we now call the quasi-static approximation. To obtain approximate solutions of the differential equations of the model, he proposed a method based on finite differences, a technique that he had devised and previously applied to stresses in a masonry dam (Richardson 1911). He discretized his domain on a longitude–latitude grid or “lattice” that covered part of western Europe, with five layers to represent the atmosphere’s vertical structure. He understood that a staggered arrangement of variables on the grid could improve the accuracy of finite differences, and he used what we would now recognize as a pair of C grids (Lynch 2006; Arakawa and Lamb 1977). He also foresaw that his proposed grid would present difficulties in the polar regions. His model included equations for predicting soil moisture based on empirical work by hydrologists in the mid-nineteenth century. He knew that the weather is influenced by terrestrial vegetation, which had already been appreciated by von Humboldt et al. (1859), and he understood the role of plant physiology in regulating the extraction of water from the soil (transpiration). Finally, he provided suggestions for initializing and integrating his model.
Richardson’s hindcast gave a totally unrealistic rate of change of the surface pressure: 145 hPa over a 6-h period. The full story of Richardson’s work, the reason for his disappointing numerical results, and a complete reconstruction of the forecast are described by Lynch (2006). When, in a later retrospective recreation, Richardson’s initial data were dynamically balanced, the initial tendency of surface pressure was reduced to a reasonable value of less than 1 hPa over 6 h, confirming that his unrealistic prediction was due to the dynamical imbalance of the initial data that he used.
Richardson’s forecasting scheme was quite impractical in the precomputer era, but he was undaunted, speculating that “some day in the dim future it will be possible to advance the computations faster than the weather advances.” In fact, developments on several fronts were necessary before NWP could be put into practice. A more complete understanding of atmospheric dynamics allowed the development of simplified but sufficiently general systems of equations. Advances in what used to be called “physical meteorology” pointed the way to useful statistical representations of the effects of unresolved physical processes on the resolved scales. Regular observations of the free atmosphere provided the initial conditions needed for numerical weather prediction, and accurate and stable discretization schemes were developed. Finally, increasingly powerful digital computers provided a practical means of carrying out the prodigious calculations needed to forecast changes in the weather.
At the time of the First World War, computational weather forecasting was impractical for at least four reasons. First, the observations of the three-dimensional structure of the atmosphere were available only on a very occasional basis, with inadequate coverage, and never in real time. The registering balloons had to be recovered and the recordings analyzed to obtain the data, a process that took days or even weeks. Second, the numerical algorithms for solving the atmospheric equations were subject to instabilities that were not understood. Because of this, the numerical solution might bear little or no resemblance to the solution of the continuous equations. Third, the nearly balanced (e.g., nearly geostrophic) nature of atmospheric flow was not yet understood, and the imbalances arising from observational and analysis errors confounded Richardson’s forecast. Fourth, the massive volume of computation required to advance the numerical solution could not be carried out, even by a huge team of human computers. In reality, Richardson’s estimate that 64 000 human computers would be needed to do the calculations for a useful forecast in real time was a gross underestimate. It has been reckoned that closer to one million people would have been required for the task (Lynch 1993). It seems fair to say that what Richardson devised was a “method without a means.”
In the ensuing decades, a variety of key developments prepared the way for progress. Theoretical developments provided crucial understanding of atmospheric dynamics, in particular the approximate balance of the large-scale atmospheric state and the means of eliminating spurious high-frequency gravity waves. This led to the quasigeostrophic equations, which filter gravity waves and describe the large-scale motions of the atmosphere away from the equator. Advances in numerical analysis led to the design of stable algorithms that faithfully replicated the true solution provided that certain restrictions on the size of the time step were respected. Timely observations of the three-dimensional atmosphere became available following the invention of the radiosonde in 1927. This provided real-time measurements of pressure, temperature, humidity, and winds through a vertical column of the atmosphere. Finally, the development of digital computers provided a way of attacking the enormous computational task of weather forecasting.
The Electronic Numerical Integrator and Computer (ENIAC), an electronic computer commissioned by the U.S. Army for calculating the paths of projectiles, was completed in 1945. It was the first programmable electronic digital computer ever built. The gigantic machine used 18 000 thermionic tubes, filled a large room, and consumed 140 kW of power (Fig. 12-2). Both input and output were by means of punched cards. McCartney (1999) provides an absorbing account of the origins, design, development, and legacy of ENIAC.
Fig. 12-2. The Electronic Numerical Integrator and Computer (ENIAC). [Courtesy of International Business Machines Corporation, ©1946 International Business Machines Corporation.]
In the late 1940s, the mathematician John von Neumann recognized that weather forecasting, a problem of both great economic and military importance, and strong intrinsic scientific interest, is an ideal application for a digital computer. He established a Meteorology Project at the Institute for Advanced Study in Princeton, and recruited meteorologist Jule Charney to lead it. The project created a model in which the atmosphere was treated as a single layer, represented by conditions at the 500-hPa level. ENIAC was used to time step the barotropic vorticity equation, which expresses the conservation of absolute vorticity following the flow, and filters out gravity wave solutions. Centered-in-space finite differences were used to evaluate the vorticity transport, and leapfrog time differencing was used. A Poisson equation was solved to obtain the geopotential height from the predicted vorticity. Fortunately, Charney and his colleagues were aware of the work of Courant et al. (1928, 1967), which showed that in order for their explicit time-stepping method to be stable, the size of the time step cannot exceed the grid size divided by the signal speed, a constraint that we now call the Courant–Friedrichs–Lewy (CFL) criterion. With the barotropic vorticity equation, the relevant signal speed is the wind speed; in a system that permits gravity waves, the signal speed would be the much faster speed of wave propagation, and as a result the time step would have to be much smaller to satisfy the CFL criterion for stability. The initial data for the forecasts were prepared manually from standard operational 500-hPa analysis charts produced by the U.S. Weather Bureau. The heights were held constant on the outer boundaries of the domain, throughout each 24-h integration.
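The consequences of the CFL criterion are easy to illustrate with back-of-the-envelope arithmetic. The numbers below are round, illustrative values (not the actual ENIAC configuration): with gravity waves filtered out, the time step is limited by the wind speed, and a step several times longer than the gravity-wave limit becomes possible.

```python
# Illustrative CFL arithmetic: the maximum stable time step for explicit
# advection, with and without fast external gravity waves. Values are round
# numbers chosen for illustration, not the ENIAC grid or winds.
dx = 500e3            # grid spacing (m)
u_max = 60.0          # strong upper-tropospheric wind (m/s)
c_gw = 300.0          # external gravity-wave speed, roughly sqrt(g*H) for H ~ 9 km (m/s)

dt_filtered = dx / u_max    # limit for a filtered (vorticity equation) model
dt_primitive = dx / c_gw    # limit if gravity waves are retained

print(f"filtered model:     dt < {dt_filtered / 3600:.1f} h")
print(f"with gravity waves: dt < {dt_primitive / 60:.0f} min")
```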
The resulting numerical predictions, carried out on ENIAC, were truly groundbreaking. Four 24-h forecasts were performed, and the results clearly showed that the large-scale features of the midtropospheric flow could be predicted numerically with a reasonable resemblance to reality. The forecasts were described in a pioneering paper by Jule Charney, Ragnar Fjørtoft, and John von Neumann (Charney et al. 1950). The success of the ENIAC forecasts had an electrifying effect on the meteorological community, worldwide. Several baroclinic (multilevel) models were developed in the following years. All of them were based on the filtered or quasigeostrophic system of equations. Later, models using the more accurate primitive equations were introduced. Charney had anticipated this as a necessary step, and indeed André Robert later identified it as a key development in numerical weather prediction (see Lin et al. 1997).
Charney et al. (1950) noted that the computation time for a 24-h forecast was about 24 h. In other words, the team could just keep pace with the weather, provided that the ENIAC did not fail. The computation time included offline operations, such as the reading, punching, and interfiling of punch cards. Lynch and Lynch (2008) recreated the ENIAC integrations using a programmable cell phone, which they called the Portable Hand-Operated Numerical Integrator and Computer (PHONIAC). In this recreation, PHONIAC executed the main loop of the 24-h forecast in less than one second.
b. Early work on radiative transfer
Thanks to astronomers, methods that can be used for calculating radiative heating rates and fluxes in Earth’s atmosphere have been available since the first half of the twentieth century. Astronomers developed the two-stream methods used to compute the fluxes of radiation (Schuster 1905; Eddington 1916). The idea of collecting together parts of the spectrum with similar amounts of absorption, which forms the basis of the k-distribution technique now used in ESMs, was originally proposed by Ambartsumian (1936). The theory describing the scattering of light by spherical particles such as cloud drops is usually attributed to Mie (1908).
Radiative transfer is fundamentally important for ESMs because radiation is (almost) the only mechanism by which Earth can exchange energy with the rest of the universe, and because motions of the atmosphere are fundamentally driven by spatial gradients in the electromagnetic radiation emitted by Earth, its atmosphere, and the sun. The same gradients also play a key role in determining the thermal structure of the atmosphere. The deep convective clouds of the tropics arise from a rough balance between destabilization by radiative cooling and the response of deep convection, for example, while the planetary-scale Hadley circulation is driven by the gradient in absorbed sunlight between the equator and higher latitudes. Models of atmospheric motion therefore need to represent the flow of radiation through the atmosphere, especially the radiative flux divergences within the atmosphere that give rise to heating and cooling, and the fluxes of radiation that are absorbed (and emitted) by the surface. Models that are aimed at understanding climate (as opposed to weather) must accurately compute the net energy input at the top of the atmosphere.
The practical calculations needed to advance an atmospheric model are daunting, even today. The underlying reason is that the solution to the radiative transfer equation is nonlinear in the parameters used in the equation (optical depth τ, single-scattering albedo ω0, and the scattering phase function). Because these parameters vary strongly with wavelength and with position in the atmosphere, accurate fluxes and heating rates require many quasi-monochromatic calculations, repeated at every grid column and time step.
The history of radiative transfer parameterizations for ESMs is about maximizing the utility of available computational power by focusing our scientific thinking on specific, motivating problems. One theme that emerges is that computational challenges have, over the last century, sparked useful insights and novel methods. A second is that the collective efforts to understand parameterization errors by comparison to reference line-by-line models have been instrumental in identifying the sources and magnitudes of those errors and pointing to possible solutions.
c. Where things stood in 1950
As the 1940s came to an end, new data sources were being used to carry out pioneering observational studies of the global circulation of the atmosphere, notably by Victor Starr’s group at the Massachusetts Institute of Technology (MIT; Starr 1948; Starr and White 1951), by Erik Palmén and colleagues in Finland and at the University of Chicago (Palmén 1948; Palmén and Riehl 1957), and by Jacob Bjerknes (the son of Vilhelm Bjerknes, who was mentioned earlier) and Yale Mintz at the University of California, Los Angeles (UCLA; Mintz and Bjerknes 1951; Bjerknes 1955). These observations proved to be both a motivation for and a basis for evaluation of the global atmospheric models that were soon to follow.
3. The 1950s
The 1950s saw some major advances in our understanding of the global circulation. For example, Edward Lorenz (1955) of MIT published the first of his most influential papers, which defined and analyzed available potential energy, and provided important insights into the atmospheric energy cycle. At the University of Chicago, David Fultz carried out rotating annulus experiments that reproduced some of the observed characteristics of the global circulation of the atmosphere (Fultz et al. 1959). Both of these studies (and many others) influenced the development of atmospheric numerical models during the 1950s.
a. Progress with dynamical cores
The landmark NWP success of Charney et al. (1950) was soon emulated in several other places around the world (e.g., Persson 2005b). As the 1950s unfolded, operational numerical weather prediction began in Sweden (1954; Bolin 1955), the United States (1955), and Japan (1959; Lynch 2006; Persson 2005a,b), though none of those early models were global or even hemispheric.
During this period, experiments began with three-dimensional models that could supplant the barotropic vorticity equation. At first, these continued to use filtered systems of equations that have no gravity wave solutions, but more accurate systems were needed. Early baroclinic models were developed by Charney and Phillips (1953), and experimental forecasts with the primitive equations were carried out by Hinkelmann (1959). Later, Charney (1962) experimented with both the primitive and balance equations. The forecasts produced using three-dimensional filtered models were not much better than those produced using the barotropic vorticity equation, and this motivated more work on hydrostatic primitive equation models (e.g., Shuman and Hovermale 1968; Bushby and Timpson 1967). Because the primitive equations support rapidly propagating gravity waves, a shorter time step is needed to ensure computational stability. In compensation, primitive equation models do not need the expensive elliptic solvers of the quasigeostrophic and balanced models.
Early model builders had to make some very basic choices that are still under discussion today. An example is the choice of how the different variables in the model should be arranged in the vertical. Charney and Phillips (1953) offset the thermodynamic variable, potential temperature θ, relative to the horizontal wind components u and υ, because this arrangement is natural for representing hydrostatic and thermal wind balance. Lorenz (1960), on the other hand, placed θ at the same levels as u and υ (Fig. 12-3), because that arrangement is advantageous for conservation of total energy.
Fig. 12-3. Schematic showing the vertical placement of the horizontal velocity components u and υ and potential temperature θ on the Charney–Phillips and Lorenz grids.
Subsequent applications of the Charney–Phillips and Lorenz vertical grids with more complete equation sets showed that the Charney–Phillips grid better captures wave motions that depend on buoyancy (e.g., Thuburn and Woollings 2005, and references therein). They also showed that the Lorenz grid possesses a computational mode—a pattern of perturbations in the model variables that is invisible to the numerical method and consequently behaves unphysically, for example, by failing to propagate (Tokioka 1978; Arakawa and Moorthi 1988). However, a satisfactory scheme for achieving energy conservation with a Charney–Phillips grid has proved elusive. For many years, Lorenz’s choice was almost universally adopted, but the relative merits of the Charney–Phillips and Lorenz grids were revisited several decades later (Arakawa and Moorthi 1988). Some recently developed models use the Charney–Phillips grid (e.g., Girard et al. 2014; Wood et al. 2014), while others use the Lorenz grid (e.g., Untch and Hortal 2004; Satoh et al. 2008; Skamarock et al. 2012; Zängl et al. 2015). The debate continues.
The dynamical simulation of climate using numerical models can be said to have started in 1956, when Norman Phillips carried out the first extended-range simulation of the global circulation of the atmosphere (Phillips 1956; Lewis 1998). The model predicted the winds at two vertical levels with, naturally, the Charney–Phillips vertical grid, which means that there was only one prognostic temperature, in the middle troposphere. The model was quasigeostrophic, on a beta-plane channel, with just 16 × 17 grid columns. It was driven by a specified meridionally varying distribution of heating and cooling. Because the temperature was predicted at only one level, the static stability had to be prescribed; a smaller-than-observed value was used to mimic the effects of moist convection. Starting from a zonal flow with small random perturbations, a disturbance with a wavelength of 6000 km developed. It had the characteristic westward tilt with height of a developing baroclinic wave. Phillips examined the energy transformations associated with the developing wave, and found good qualitative agreement with observations of baroclinic systems in the atmosphere.
His simulation broke down after a few simulated weeks because of a previously unknown form of numerical instability (Phillips 1959). It was not the sort of instability that results from violation of the CFL criterion; instead, it turned out to be an inherently nonlinear instability in which the spatial scale of nonlinear terms is misrepresented (aliased) by the finite-resolution grid, leading to feedback and the growth of small-scale noise (Phillips 1959). This type of instability can occur, in principle, even in a time-continuous model. Arakawa (1966) reasoned that if the Jacobian term could be computed in such a way as to conserve either energy or enstrophy then there would be “no room for nonlinear computational instability.” Moreover, conservation of both energy and enstrophy would prevent an unrealistic downscale cascade of energy. This motivated Arakawa to develop his energy- and enstrophy-conserving finite-difference Jacobian. The value of numerical methods that conserve physically important quantities emerged as a major theme in later work (e.g., Thuburn 2008).
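The flavor of Arakawa’s construction can be seen in a short sketch. The function below implements the nine-point, energy- and enstrophy-conserving Jacobian of Arakawa (1966) as the average of three second-order forms; the doubly periodic boundaries and NumPy array layout are assumptions made here for brevity, not part of the original formulation.

```python
import numpy as np

def arakawa_jacobian(psi, zeta, d):
    """Nine-point Arakawa (1966) Jacobian J(psi, zeta) on a doubly periodic
    square grid with spacing d. Averaging the three forms below conserves
    energy and enstrophy discretely, suppressing nonlinear instability."""
    def s(a, di, dj):
        # shifted field a[i+di, j+dj] with periodic wraparound
        return np.roll(np.roll(a, -di, axis=0), -dj, axis=1)

    p, q = psi, zeta
    j1 = ((s(p, 1, 0) - s(p, -1, 0)) * (s(q, 0, 1) - s(q, 0, -1))
          - (s(p, 0, 1) - s(p, 0, -1)) * (s(q, 1, 0) - s(q, -1, 0)))
    j2 = (s(p, 1, 0) * (s(q, 1, 1) - s(q, 1, -1))
          - s(p, -1, 0) * (s(q, -1, 1) - s(q, -1, -1))
          - s(p, 0, 1) * (s(q, 1, 1) - s(q, -1, 1))
          + s(p, 0, -1) * (s(q, 1, -1) - s(q, -1, -1)))
    j3 = (s(q, 0, 1) * (s(p, 1, 1) - s(p, -1, 1))
          - s(q, 0, -1) * (s(p, 1, -1) - s(p, -1, -1))
          - s(q, 1, 0) * (s(p, 1, 1) - s(p, 1, -1))
          + s(q, -1, 0) * (s(p, -1, 1) - s(p, -1, -1)))
    return (j1 + j2 + j3) / (12.0 * d * d)
```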
Von Neumann was tremendously impressed by Phillips’s work. To explore its implications, he arranged a conference at Princeton University in October 1955 on “Application of Numerical Integration Techniques to the Problem of the General Circulation.” The workshop had a galvanizing effect on the meteorological community. Within 10 years, there were several major research groups modeling the global circulation of the atmosphere. The first sign of these impending developments was Smagorinsky’s two-level model, formulated using a zonal channel on the sphere (Smagorinsky 1958).
In a further important advance, Norman Phillips proposed the use of the terrain-following σ coordinate (Phillips 1957a), which greatly simplifies the lower boundary conditions of atmospheric models. Variations of the σ coordinate are still very widely used today. Phillips’s invention of the σ coordinate marks the beginning of a multidecadal search for the optimal vertical coordinate systems for use in both atmosphere and ocean models. We return to that story later in this chapter.
b. Early work on parameterizations of the boundary layer, the land surface, clouds, and cumulus convection
The exchanges of momentum, sensible heat, and moisture between the atmosphere and the lower boundary are fundamental to understanding the Earth system. In a key development of the 1950s, the Russian scientists Monin and Obukhov formulated a similarity theory for the “surface layer,” which is the lower portion of the atmospheric boundary layer (Monin and Obukhov 1954; Foken 2006). They showed how the surface fluxes of sensible heat and momentum are related to the near-surface profiles of temperature and wind. Later their ideas were extended to include the surface moisture flux over the oceans and other water surfaces. Two decades later, the similarity functions described by Monin and Obukhov were measured in famous field experiments carried out in Kansas (Businger et al. 1971; Haugen et al. 1971) and Minnesota (Izumi and Caughey 1976). Today, Monin–Obukhov similarity theory is used to determine the surface fluxes of sensible heat, momentum, and moisture in virtually all atmospheric models. Further discussion is given in the chapter in this volume by Margaret LeMone and colleagues (LeMone et al. 2019).
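As a concrete, deliberately oversimplified illustration of how a surface-layer scheme turns near-surface quantities into fluxes, the sketch below uses bulk transfer coefficients derived from the logarithmic wind profile in the neutral limit. Real Monin–Obukhov implementations multiply in empirical stability functions of z/L (for example, the Businger–Dyer forms measured in the Kansas experiment); the roughness lengths and reference values here are generic choices, not taken from any particular model.

```python
import numpy as np

KAPPA = 0.4            # von Karman constant
RHO, CP = 1.2, 1004.0  # air density (kg m-3) and specific heat (J kg-1 K-1)

def neutral_surface_fluxes(u_ref, theta_air, theta_sfc, z_ref=10.0, z0=0.1, z0h=0.01):
    """Bulk surface stress and sensible heat flux from the logarithmic law,
    assuming neutral stratification (no stability correction)."""
    cd = (KAPPA / np.log(z_ref / z0)) ** 2                       # drag coefficient
    ch = KAPPA**2 / (np.log(z_ref / z0) * np.log(z_ref / z0h))   # heat exchange coefficient
    stress = RHO * cd * u_ref**2                                 # momentum flux (N m-2)
    shf = RHO * CP * ch * u_ref * (theta_sfc - theta_air)        # sensible heat flux (W m-2)
    return stress, shf

print(neutral_surface_fluxes(u_ref=5.0, theta_air=293.0, theta_sfc=295.0))
```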
The 1950s produced major advances in understanding the atmospheric boundary layer and cumulus clouds. Joanne Starr Malkus Simpson and colleagues carried out pioneering observations of turbulence and cumulus convection over the tropical and subtropical oceans, and developed simple and insightful theories of cumulus updrafts and downdrafts (Bunker et al. 1949; Malkus 1952; Starr Malkus 1954, 1955; Simpson et al. 1965; Simpson and Wiggert 1969). Their ideas played crucial roles in the subsequent development of parameterizations of the boundary layer and cumulus convection. It is an interesting fact that the concept of cumulus entrainment, which plays an important role in those parameterizations, was first discussed by oceanographer Henry Stommel (1951).
Riehl and Malkus (1958) used the (relatively meager) observations of their time to analyze the flows of energy through what we now call the intertropical convergence zone (ITCZ). They drew the fundamentally important conclusion that thunderstorms strongly transport energy upward through the depth of the tropical troposphere, and that at some levels the upward energy flux is against the gradient. Their study motivated the representation of cumulus updrafts as penetrative “hot towers” that act like express elevators, carrying energy and other quantities upward through the troposphere in an hour or less. As we will see, these ideas were widely used in cumulus parameterizations during the 1960s and later.
Cloud microphysics deals with cloud and precipitation particles, including their formation and the processes governing their evolution such as condensation, evaporation, melting, and freezing. Since these processes act at the microscale (from less than a micron to centimeters), they cannot be resolved and must be parameterized in all weather and climate models, now and for the foreseeable future. The parameterizations must describe the net effects of interactions between subgrid-scale microphysical processes and the gridscale temperature, water vapor, and winds. The parameterization of microphysics plays an essential role in quantitative precipitation forecasting, coupling with the model dynamics through latent heating and the condensate weight, radiative transfer, and coupling with aerosols and chemistry. While the roots of cloud microphysics extend back several centuries, quantitative understanding was not established until fairly recently. Microphysics research accelerated rapidly around 1940, coinciding with growing military interest in cloud processes, the development of new observational techniques including radar, and a hope that it might be possible to modify precipitation production through artificial means (Pruppacher and Klett 1997). Cloud microphysics, and moist physics more generally, had a limited role in the early development of weather and climate models, because extreme simplicity was required. We will return to the subject of cloud physics later in this chapter. For a more thorough discussion of the history of cloud physics research, see the chapter in this volume by Sonia Kreidenweis and colleagues (Kreidenweis et al. 2019).
Modern land surface models also draw on important ideas from the 1950s. Soil temperature as a function of depth can be modeled as thermal diffusion of heat in the vertical, given estimates of heat capacity and thermal diffusivity (Lettau 1954). The vertical heat flux through the soil column is determined by the temperature difference between the air and the soil surface. Penman (1948) derived a simple parameterization for the rate of evaporation from a wet surface based on vapor pressure, wind speed, and net radiation. The chapter in this volume by Christa Peters-Lidard and colleagues summarizes 100 years of progress in hydrology, which is an important aspect of land surface modeling (Peters-Lidard et al. 2019).
c. Approaching 1960
As the 1950s drew to a close, the International Geophysical Year raised the profile of the Earth sciences (Sullivan 1961). Major technological innovations were also occurring. Digital computers were becoming more powerful, easier to program, and much more widely available. Beginning with Sputnik in 1957, artificial satellites were launched into orbit, soon to be followed by quantitative satellite-based observations of Earth. In the following decades, both of these new technologies had major impacts on the development and applications of ESMs.
4. Model development in the Age of Aquarius
The culturally, scientifically, and technologically tumultuous 1960s produced multiple landmark advances in the development of ESMs, including the creation of several now-legendary “ancestral” models, which were aimed mainly at climate simulation rather than weather prediction. In many cases, the earliest versions of the ancestral models were not truly global, and used simplified geography. They incorporated simple parameterizations of surface fluxes, radiation, cumulus convection, and stratiform or “large-scale” clouds, and they were coupled to very simple land surface models. With one important exception they used prescribed sea surface temperatures (SSTs), rather than coupling with an ocean model.
a. The GFDL model
Joseph Smagorinsky was the first director of the Geophysical Fluid Dynamics Laboratory (GFDL) of the National Oceanic and Atmospheric Administration. His vision was to recruit a team of scientists focused on the multidecadal task of using numerical models as an aid to understanding the global circulation of the atmosphere (Lewis 2008). GFDL’s atmosphere model was developed by Smagorinsky, Syukuro Manabe, and collaborators (Smagorinsky et al. 1965; Manabe and Smagorinsky 1967). Early versions covered only the Northern Hemisphere, with a stereographic map projection, and used idealized geography. The GFDL model used the σ coordinate of Phillips (1957a). Some versions used “reduced grids” with fewer grid points around latitude circles near the poles (Kurihara 1965). By 1965, the GFDL model had relatively high vertical resolution for the time, with nine layers.
During the 1960s, the GFDL modeling team achieved many important firsts, including a very influential parameterization for the horizontal diffusion of momentum (Smagorinsky 1963), the first radiation parameterization (Manabe and Möller 1961; Manabe and Strickler 1964), the first cumulus parameterization (Smagorinsky 1963; Smagorinsky et al. 1965; Manabe et al. 1965), and the first land surface model (Budyko and Zubenok 1961; Manabe 1969a). Figure 12-4 schematically summarizes the formulation of the early GFDL model (Manabe 1969b).
Fig. 12-4. A schematic of the early GFDL model [from Manabe (1969b)].
Moist convective adjustment was designed to remove convective instability by adjusting the lapse rate back to “moist neutral,” and limiting the relative humidity to 100% or less, while minimizing complexity. Moist convective adjustment couples neighboring layers of a model, pairwise. It does not try to represent the penetrative nature of deep convection, which had been emphasized by Riehl and Malkus (1958). Moist convective adjustment is still being used in some of today’s models. In the words of its developers: “because of convective instability, intense grid-scale convection develops exponentially in the area where the lapse rate is unstable. . . . Therefore, it is desirable to design a scheme of convection such that the grid-scale convection does not develop. . . . We used a very simple scheme of convective adjustment depending upon both relative humidity and the lapse rate and successfully avoided the abnormal growth of grid-scale convection.”
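To make the adjustment idea concrete, here is a minimal sketch of its dry analog: adjacent layer pairs are mixed, repeatedly, wherever the stratification is statically unstable. This is not the Manabe et al. (1965) moist scheme, which also adjusts humidity toward saturation and releases latent heat, and mixing the mass-weighted potential temperature is only an approximate stand-in for conserving enthalpy.

```python
import numpy as np

def dry_convective_adjustment(theta, dp, max_sweeps=50):
    """Pairwise mixing of adjacent layers wherever potential temperature
    decreases with height. theta: potential temperature (K), ordered
    bottom-to-top; dp: pressure thickness of each layer (proportional to mass)."""
    theta = theta.copy()
    for _ in range(max_sweeps):
        adjusted = False
        for k in range(len(theta) - 1):
            if theta[k] > theta[k + 1]:        # statically unstable pair
                mixed = (dp[k] * theta[k] + dp[k + 1] * theta[k + 1]) / (dp[k] + dp[k + 1])
                theta[k] = theta[k + 1] = mixed
                adjusted = True
        if not adjusted:                       # column is neutral or stable everywhere
            break
    return theta

print(dry_convective_adjustment(np.array([305.0, 300.0, 310.0]), np.array([100.0, 100.0, 100.0])))
```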
Early results from the GFDL atmosphere model were published by Smagorinsky (1963) and Manabe et al. (1965). The primary application of the GFDL model was climate simulation, but Miyakoda et al. (1969) also used versions of the model in experimental NWP.
GFDL scientists also developed a simple but explicit representation of the surface energy balance over land. The bulk aerodynamic formula of Penman (1948) had been extended by Monteith (1965), who combined the constraints of surface energy balance with the conservation of water through turbulent transport. The combined Penman–Monteith equation includes the effect of surface or stomatal resistance to evapotranspiration. Stomata are microscopic pores on the undersides of the leaves of plants through which water evaporates. The stomata provide the physiological mechanism for control of evapotranspiration. The Penman–Monteith equation has long been used by farmers and engineers to estimate evapotranspiration, but the estimation of stomatal conductance remains entirely empirical. Budyko and Zubenok (1961) suggested that the physiologically controlled ratio of actual evapotranspiration to potential evapotranspiration could be usefully approximated by a linear ramp between 0 and 1 as soil moisture varied from the wilting point to field capacity. The linear ramp was adopted by Manabe (1969a), who represented soil hydrology by analogy to a bucket of water. Rainfall is added to the soil bucket, which has an arbitrarily set capacity of 15 cm. Additional rainfall when the bucket is full leads to runoff. Evapotranspiration removes water from the bucket at a rate β times the potential evapotranspiration rate, where β is the ratio of the current contents of the bucket to its capacity. The linear ramp incorporated through β represents the well-known tendency of vegetation to take up less and less water as root-zone soil moisture is depleted.
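One time step of the bucket hydrology just described fits in a few lines. Only the 15-cm capacity and the linear-ramp β come from the text; the units and forcing amounts in the example call are arbitrary illustrations.

```python
def bucket_step(w, precip, pot_evap, capacity=0.15):
    """One step of a Manabe-style bucket. w: soil water (m); precip and
    pot_evap: precipitation and potential evapotranspiration over the step (m).
    Returns (new water, actual evapotranspiration, runoff)."""
    beta = w / capacity                    # evapotranspiration efficiency (0 to 1)
    evap = beta * pot_evap                 # actual ET = beta * potential ET
    w = w + precip - evap
    runoff = max(w - capacity, 0.0)        # overflow when the bucket is full
    w = min(max(w, 0.0), capacity)
    return w, evap, runoff

print(bucket_step(w=0.10, precip=0.08, pot_evap=0.004))
```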
The origins of numerical ocean circulation modeling can also be traced to GFDL. Smagorinsky recognized the importance of developing a World Ocean circulation model, and in 1960 he hired Kirk Bryan to lead GFDL’s ocean modeling project. It was a massive undertaking, believed by many to be a fool’s mission with extensive known and unknown scientific and engineering challenges. There were prominent naysayers in the community who felt that such efforts were ill-advised at best. Fortunately, Bryan was able to build on GFDL’s work on atmospheric numerical models. Mike Cox, an additional member of the ocean modeling team, contributed pioneering scientific programming skills that proved critical to the success of the project (Bryan 1991).
Bryan and Cox made assumptions to allow for efficient numerical integration using the computers available in the 1960s. One of these assumptions was that the upper boundary of the ocean is a rigid lid. Such a lid eliminates fast external gravity waves (effectively making their speed infinite), and converts a hyperbolic problem for surface gravity waves into an elliptic boundary value problem for the barotropic (depth integrated) streamfunction of the circulation. This innovation allowed for the use of relatively long time steps, thus enabling the century-long integrations needed for climate studies. An additional key element of the model was a momentum advection scheme based on the approach of Arakawa (1966) to remove nonlinear instabilities that had plagued models at that time (Bryan 1966). Bryan chose the Arakawa B grid (Arakawa and Lamb 1977) for staggering of tracer and velocity variables. This choice permitted a relatively accurate numerical calculation of geostrophically balanced motions at the coarse resolution allowed by the computers of the day. Bryan and Cox completed their prototype World Ocean model in the mid-1960s (Bryan and Cox 1967). Their pioneering work has now been followed by nearly 50 years of enhancements and refinements. The further evolution of the Bryan–Cox code is discussed in section 5e.
Bryan’s ocean model was soon coupled to GFDL’s global atmosphere model to create the world’s first global coupled atmosphere–ocean model (Manabe and Bryan 1969), although with idealized geography. The fundamental importance of ocean–atmosphere interactions for climate makes it reasonable to say that the model of Manabe and Bryan (1969) was the first true climate model—a major milestone.
The GFDL ocean model was coupled with a sea ice model (Manabe 1969b; Bryan 1969a), which treated the sea ice as a slab of uniform thickness in each grid cell, with all-or-nothing coverage. The temperature profile of the sea ice was assumed to be linear and the effects of salt trapped in the sea ice were neglected. Sea ice less than 3 m thick was advected at the speed of ocean currents averaged over the upper 100 m, while thicker ice was assumed to be locked in place, so it could not converge indefinitely and build to excess. This method for treating sea ice motion came to be known as “free-drift with stoppage.”
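The transport rule is simple enough to state in code. The sketch below applies only the thickness test described in the text; the 1-D arrays and the uniform current are illustrative assumptions.

```python
import numpy as np

def free_drift_with_stoppage(h_ice, u_ocean_upper, h_stop=3.0):
    """Ice velocity under 'free drift with stoppage': ice thinner than h_stop
    moves with the mean current of the upper 100 m of ocean; thicker ice is
    locked in place. h_ice: thickness (m); u_ocean_upper: current (m/s)."""
    return np.where(h_ice < h_stop, u_ocean_upper, 0.0)

print(free_drift_with_stoppage(np.array([0.5, 2.0, 4.0]), np.array([0.1, 0.1, 0.1])))
```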
The domain and geography of the GFDL model were gradually made more realistic. First results from a global version of the model, with realistic topography, were published by Holloway and Manabe (1971).
b. Leith’s model
Starting in 1960, the Livermore model (Leith 1965a,b, 1988; Michael 1996) was developed single-handedly by Cecil “Chuck” Leith of the Lawrence Radiation Laboratory. Leith’s model ran on the Livermore Automatic Research Calculator (LARC), which was one of the first computers to use transistors rather than vacuum tubes. At first the model represented only the Northern Hemisphere up to 60°N, but a later version was truly global. It used a spherical grid based on longitude and latitude, with a grid spacing of 5° in each direction, but with fewer grid points around latitude circles near the poles. It had five layers and used pressure as its vertical coordinate—the only numerical model of the atmosphere ever to do so, as far as we know. The surface pressure was predicted. The effects of mountains were not included. The model predicted water vapor, and included the warming effects of latent heat release, as well as precipitation, but had no parameterization of cumulus convection. It did have a parameterization of radiative transfer, including the diurnal cycle, but neglected the radiative effects of clouds. Leith’s dynamical core needed very strong damping to maintain numerical stability. His model had a relatively short lifetime, because his interests shifted toward two-dimensional turbulence. In 1968, he relocated to the National Center for Atmospheric Research (NCAR), which, as discussed below, had its own global modeling project. After moving to NCAR, Leith continued his studies of large-scale atmospheric turbulence, but he was only peripherally involved in the development of NCAR’s global atmospheric model.
c. The UCLA model
Beginning in 1961, the UCLA model (Arakawa et al. 1968; Langlois and Kwok 1969; Arakawa 1972) was developed by Akio Arakawa and collaborators, including Yale Mintz, at the University of California, Los Angeles. It was the only one of the four ancestral models to be developed at a university. A detailed first-person account of the project is given by Arakawa (2000). The early two-level version of the model, which was finished in 1963, did not predict water vapor, but it was global and had a realistic (but low-resolution) land–sea distribution and topography. Results from this version were published by Mintz (1968).
The UCLA model brought several important innovations. Its dynamical core used what are now called “finite volume” methods for both advection and the horizontal pressure-gradient force. It was designed with an emphasis on conservation of mass, energy (Arakawa 1966; Lilly 1997; Arakawa 1972) and other important quantities. These conservation properties were achieved through what are now called “mimetic” discretization methods (Hyman and Shashkov 1997). The model’s dynamical core was designed to optimally simulate the propagation of inertia-gravity waves, including the shortest waves that could be represented on the grid (Arakawa and Lamb 1977).
The cumulus parameterization of the early UCLA model made use of the entraining-plume ideas advocated by Stommel (1951), Riehl and Malkus (1958), and Simpson and Wiggert (1969). It allowed multiple “types” of cumulus clouds; the number of cloud types was determined by the number of layers used, which was three at the time. The UCLA model was the first to use the “mass flux” approach for parameterizing convection (Arakawa 1969), which has now been almost universally adopted. The closure used in the cumulus parameterization removed convective instability, but allowed a less-than-saturated relative humidity. The model’s radiation parameterization (Katayama 1967, 1972) included the diurnal cycle and the radiative effects of the predicted clouds.
d. The NCAR model
NCAR’s first global atmospheric model was developed by Akira Kasahara, Warren Washington, and David Williamson. The earliest version had two levels (Kasahara and Washington 1967; Washington and Kasahara 1970; Oliger et al. 1970). It had no orography, and water vapor was assumed to be at its saturation value throughout the atmosphere, so that latent heat was released wherever and whenever the air moved upward. The NCAR model was the first (and so far only) quasi-static global model to use constant-height surfaces as its vertical coordinate. Richardson’s equation was solved to determine the vertical velocity. Kasahara and Washington (1969), Kasahara and Washington (1971), and Washington and Williamson (1977) described a later six-level version of the model, which included the effects of mountains and predicted clouds. It was coupled to a simple land surface model. The radiation parameterization was developed by Sasamori (1968).
e. Additional advances during the 1960s
Radiation parameterizations for atmospheric models must account for heating and cooling by gases that vary in concentration within the atmosphere, notably water vapor and ozone. Early models focused on the impacts of individual gases (carbon dioxide, ozone, and especially water vapor) on radiation and heating rates within the atmosphere, exploiting the fact that each gas affects a different spectral region. Some approaches used gas amounts as a function of temperature to compute a broadband emissivity (Elsasser and Culbertson 1960) by fitting to observations and/or laboratory data. Emissivity could be used to compute heating rates from a spectrally integrated equation describing flux (e.g., Sasamori 1968). Others used band models (e.g., Curtis and Goody 1954) in which an assumed distribution of absorption line shapes, strengths, and relative positions determines the average transmission of a model layer as a function of absorber amount, temperature, and pressure within some finite spectral region. The absorption features of each gas were assumed to be spectrally independent so that the total transmission is the product of the transmission due to each gas. Total fluxes and heating rates can then be computed by adding up contributions from each spectral region. Longwave cooling calculations were typically expressed as matrix problems, following Curtis (1956) and Rodgers and Walshaw (1966). This approach describes the exchange of radiation between pairs of not necessarily contiguous layers, and between each layer and the upper and lower boundary (space and the surface, respectively). Such a calculation scales as the number of layers squared times the number of spectral intervals, but the relatively coarse vertical and spectral resolutions of the time made it practical.
A desire to simulate the global circulation of the atmosphere on a more or less homogeneous grid motivated some early interest in quasi-uniform spherical grids, including overset grids (Phillips 1957b), icosahedral grids (Williamson 1968; Sadourny et al. 1968), and cubed spheres (Sadourny 1972). The first results with these methods were not very encouraging, however, and with the emergence of the spectral transform method around that time (Eliassen et al. 1970; Orszag 1970), interest waned until the 1990s. Both quasi-uniform spherical grids and the spectral method are discussed further later in this chapter.
The U.S. National Meteorological Center developed the first operational NWP model to incorporate precipitation and the effects of latent heating (Shuman and Hovermale 1968). The model predicted the precipitable water, that is, the total amount of water vapor in each atmospheric column, and instantaneously converted vapor to surface precipitation when the precipitable water in a grid cell exceeded 0.8 times the value that would occur if the air was saturated at all levels. An ad hoc approach was used to distribute the corresponding latent heating vertically, and there was no explicit representation of microphysical processes. This parameterization was used operationally starting in March 1967. A similar approach was used in many operational weather forecast models over the next two to three decades.
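A sketch of that column check is below. The 0.8 threshold follows the text, but removing the excess vapor proportionally from each layer is our own illustrative choice; the operational scheme distributed the corresponding latent heating in an ad hoc way, as noted above.

```python
import numpy as np

G = 9.81  # gravity (m s-2)

def column_rainout(q, q_sat, dp, threshold=0.8):
    """q, q_sat: specific humidity and its saturation value per layer (kg/kg);
    dp: layer pressure thickness (Pa). Vapor in excess of threshold times the
    saturated column value is converted instantly to surface precipitation."""
    pw = np.sum(q * dp) / G                     # precipitable water (kg m-2)
    pw_sat = np.sum(q_sat * dp) / G             # fully saturated column value
    excess = max(pw - threshold * pw_sat, 0.0)  # amount to rain out
    if excess > 0.0:
        q = q * (1.0 - excess / pw)             # proportional removal (illustrative)
    return excess, q
```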
H. L. Kuo (1965) proposed the first cumulus parameterization to use an entraining plume to represent the cumulus updrafts (see also Kuo 1974). He assumed (incorrectly) that the environment of the cumulus clouds was warmed by outward diffusion of enthalpy from the updrafts rather than by convective fluxes. Kuo determined the intensity of convection based on the tendency of water vapor due to low-level convergence and surface evaporation. This moisture-convergence “closure” was widely used for many years (e.g., Anthes 1977; Tiedtke 1989), but later fell out of favor (Emanuel 1991; Arakawa 2004).
Cumulus convection was not the only important cloud type to receive close attention during the 1960s. Douglas Lilly published an elegant, insightful, and (ultimately) very influential analysis of marine subtropical stratocumulus clouds (Lilly 1968). He emphasized the importance of cloud-top processes, including radiative cooling, entrainment, and the evaporation of cloud water, for the evolution of stratiform cloud systems. Over a period of decades, Lilly’s 1968 paper has exerted a major influence on parameterizations of both clouds and boundary-layer turbulence.
During the 1960s and into the 1970s, global atmospheric models predicted water vapor distributions including the effects of precipitation and moist convection, but most specified a fixed distribution of clouds from observations for interaction with radiation (e.g., Manabe et al. 1965; Washington and Kasahara 1970). The UCLA model was an exception.
As Kessler later recalled: “I worked with a strong sense for interactions among processes as discussed here, and in expectation that their study would be facilitated by simple means to portray microphysical processes. The first process to be considered was conversion of cloud to precipitation. How to portray it? I did little more than observe in the literature and with my own eyes that thin water clouds seem to be persistent, and that rain falls from dense clouds.”
This behavior was captured by continuity equations for cloud water and rain mass that were developed and initially applied in a kinematic flow model (Kessler 1969). Conversion processes between cloud and rain were represented by “autoconversion” using a threshold cloud mass mixing ratio above which conversion occurred, and “accretion,” which represented the growth of existing raindrops by collection of cloud. Rain was allowed to evaporate and sediment and the precipitation rate was calculated explicitly from the predicted rain field. A diagram of the scheme is shown in Fig. 12-5a. It was a major advance, and it still provides a general framework for almost all bulk microphysics schemes used in weather and climate models up to the present day.
Fig. 12-5. (a) Diagram of the Kessler microphysics parameterization. (b) Diagram of a typical two-moment parameterization with multiple ice classes.
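The warm-rain conversion terms at the heart of the scheme can be sketched as follows, using commonly quoted illustrative constants; the full Kessler (1969) scheme also includes rain evaporation and sedimentation and couples to the resolved flow.

```python
def kessler_conversions(qc, qr, dt, k1=1.0e-3, qc0=5.0e-4, k2=2.2):
    """One step of cloud-to-rain conversion. qc, qr: cloud and rain mass mixing
    ratios (kg/kg); dt: time step (s). Constants are illustrative values."""
    auto = k1 * max(qc - qc0, 0.0)     # autoconversion above a cloud-water threshold
    accr = k2 * qc * qr**0.875         # accretion of cloud droplets by falling rain
    dq = min((auto + accr) * dt, qc)   # cannot convert more cloud water than exists
    return qc - dq, qr + dq

print(kessler_conversions(qc=1.2e-3, qr=1.0e-4, dt=60.0))
```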
From the late 1950s to the early 1980s, the first attempts were made to simulate turbulence and cumulus clouds using high-resolution numerical models with relatively small domains (Malkus and Witt 1959; Lilly 1962; Ogura 1962; Deardorff 1964, 1972b, 1974, 1980; Moeng 1984). Today we speak of large-eddy simulation (LES) models and cloud-resolving models. Such models are now extensively used for developing and testing global models, and for determining the numerical values of parameters used in subgrid-scale parameterizations for lower-resolution models.
Lorenz’s revolutionary paper on deterministic nonperiodic flow (Lorenz 1963) transformed our understanding of the limits of deterministic weather prediction, and eventually led to ensemble forecasting (Lewis 2005). Motivated by Lorenz’s discovery, Charney (1966) used early versions of the Livermore, UCLA, and GFDL models to investigate the sensitivity of the atmospheric circulation to small perturbations. This work by Charney and colleagues could perhaps be viewed as the first model intercomparison study.
f. Where things stood at the end of the 1960s
It is interesting to list some of the ways in which the global modeling arena of the 1960s differed from today’s. First, all of the global atmosphere and ocean models of the 1960s were developed in the United States, although Japanese immigrants to the United States (Akio Arakawa, Akira Kasahara, and Syukuro Manabe) played key roles in the development of three of the models. All of the lead developers were men. The motivations for developing the models were purely academic, in the sense that the primary focus was improved understanding, rather than immediate practical applications. The funding that supported the modeling work was modest by today’s standards. The modeling teams were small and informally organized, in contrast to today’s much larger and more bureaucratic enterprises. All of the models used “gridpoint” methods with spherical (longitude–latitude) coordinates, and all of them used the quasi-static primitive equations. The atmospheric models simulated only the troposphere, with the exception of an early experiment by Manabe and Hunt (1968). Although, as discussed above, the 1960s did see some early work on models of the ocean, sea ice, and the land surface, by far the largest effort was aimed at developing atmospheric models. Finally, and importantly, the model-users of the 1960s were mostly the same as the model-developers, whereas today users vastly outnumber developers.
5. The 1970s
a. More modeling groups
During the 1970s, more global modeling projects started up, in various places around the world, including at the Met Office in Bracknell (Gilchrist et al. 1973; Rowntree 1976; Corby et al. 1977; Rowntree and Walker 1978), and the Laboratoire de Météorologie Dynamique (LMD) in Paris (Laval and Sadourny 1979; Laval et al. 1981b,a; Sadourny 1984). In the United States, the National Aeronautics and Space Administration (NASA) was motivated to enter the global modeling arena by a desire to maximize the meteorological utility of satellite data. Data assimilation is a process that combines new observations with preexisting information (often in the form of previous short-term forecasts), to provide an optimal estimate of the state of the atmosphere. Weather forecasts use data assimilation to create the “initial conditions” used to start a forecast. In work carried out at NASA’s Goddard Institute for Space Studies (GISS), in New York City, Charney et al. (1969) pointed to data assimilation, and especially the assimilation of satellite data, as an important new application of numerical models. To enable NASA’s work on data assimilation, a version of the UCLA model was provided to GISS in the early 1970s (Somerville et al. 1974). Data assimilation is now key to operational NWP, and to the production of “reanalyses,” which are discussed later in this chapter. See the chapter in this volume by Stanley Benjamin and colleagues for a more complete discussion of data assimilation (Benjamin et al. 2019).
GISS was originally organized to study astronomical problems, in which radiative transfer is of course central. Radiative transfer studies at GISS were strongly influenced by methods that had been developed by the planetary atmospheres community, and these were adapted for use in global atmospheric models. For example, the adding method for computing the transport of radiation in scattering atmospheres is attributed by Lacis and Hansen (1974) to papers describing gamma-ray transfer, although the atmospheric formulation arose from a collaboration between James Hansen and Hendrik van de Hulst (A. Lacis 2017, personal communication). GISS was the first modeling center to use a k distribution to model the spectral variation in optical depth (Somerville et al. 1974; Hansen et al. 1983). In a k distribution, the spectral intervals within each band are reordered by absorption coefficient, so that the integral over wavelength becomes smooth and just a few quadrature points provide high accuracy.
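The reordering idea can be conveyed with a short sketch. The function below is an idealized rendering of a k distribution for a single band and a homogeneous path; real correlated-k schemes must also handle vertically varying pressure and temperature, which this sketch ignores.

```python
import numpy as np

def band_transmission_k_distribution(k_spectral, u, n_quad=8):
    """Band-mean transmission from a k distribution (idealized, homogeneous path).

    k_spectral : absorption coefficients sampled across the band (rapidly varying)
    u          : absorber amount along the path (units such that k*u is dimensionless)
    n_quad     : number of quadrature points in cumulative-probability (g) space
    """
    # Reorder the jagged spectrum into a smooth, monotonic function k(g).
    k_sorted = np.sort(k_spectral)
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    # A handful of quadrature points in g space replaces the integral over wavelength.
    g_quad = (np.arange(n_quad) + 0.5) / n_quad
    w_quad = np.full(n_quad, 1.0 / n_quad)
    k_quad = np.interp(g_quad, g, k_sorted)
    return np.sum(w_quad * np.exp(-k_quad * u))

# The brute-force reference is simply np.mean(np.exp(-k_spectral * u));
# the k-distribution result approaches it using only n_quad exponentials.
```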
Notably, the 1970s saw the beginning of operational global numerical weather prediction (Stackpole 1978; Woods 2006), and the founding of the European Centre for Medium-Range Weather Forecasts (ECMWF; Woods 2006), which quickly established itself as the most skillful of the operational centers.
b. Atmospheric dynamical cores
1) The spectral method becomes popular
During the 1970s and early 1980s, the global spectral6 method (Silberman 1954; Robert 1966; Baer 1972; Bourke 1974) became widely used in the dynamical cores of atmospheric models. In this approach, the horizontal distribution of model fields is represented by an expansion in spherical harmonics (Fig. 12-6). The spectral representation allows horizontal derivatives to be calculated very accurately and, with a triangular truncation of the expansion, gives homogeneous and isotropic resolution. Moreover, a spectral dynamical core that solves the barotropic vorticity equation conserves energy and enstrophy, as in the continuous system. The calculation of quadratic nonlinear terms directly from the spectral representation using interaction coefficients was prohibitively expensive, and for other types of nonlinearity even more so. This barrier to the use of the spectral method was removed with the introduction of the spectral transform method by Eliassen et al. (1970) and Orszag (1970). In the spectral transform method, the nonlinear advection terms, along with any terms based on physical parameterizations, are computed in grid space, and efficient transforms are used to go back and forth between grid space and the spectral representation (Jarraud and Simmons 1983).
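As a one-dimensional analog of the transform method, the sketch below computes an advection tendency for a periodic field using Fourier transforms in place of spherical harmonics: the derivative is computed exactly in spectral space, the quadratic product is formed in grid space, and the result is transformed back. The 1D setting and function names are ours, for illustration only.

```python
import numpy as np

def advection_tendency_transform_method(u_hat, L=2.0 * np.pi):
    """Spectral tendency of -u du/dx for periodic 1D flow via the transform method.

    u_hat : complex Fourier coefficients of u, as returned by np.fft.fft(u)
    L     : domain length
    """
    n = u_hat.size
    ik = 2.0j * np.pi * np.fft.fftfreq(n, d=L / n)  # i * wavenumber
    dudx_hat = ik * u_hat                            # derivative, exact in spectral space
    u = np.fft.ifft(u_hat)                           # transform to grid space
    dudx = np.fft.ifft(dudx_hat)
    tendency = -u * dudx                             # nonlinear product in grid space
    return np.fft.fft(tendency)                      # back to spectral space
```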
Some examples of spherical harmonics. Spherical harmonics are wave-like functions defined on the surface of a sphere. They are spherical analogs of the sines and cosines that provide a basis for Fourier series in one dimension.
A further important advantage of the spectral method is that it greatly facilitates the use of a semi-implicit time integration scheme. An implicit treatment of the terms responsible for fast gravity waves effectively enlarges the domain of dependence of the numerical solution, allowing the CFL criterion to be satisfied with larger time steps. The price to pay was the reappearance of an elliptic problem to be solved at each time step.7 Inspired by the work of Marchuk, semi-implicit schemes were proposed for both gridpoint models (Kwizak and Robert 1971; Robert et al. 1972) and spectral models (Robert 1969; Bourke 1974; Hoskins and Simmons 1975). In a spectral model the semi-implicit elliptic problem can be solved quickly and cheaply, which permits long time steps and enhances computational speed.
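The origin of the elliptic problem can be seen in the linearized shallow-water equations, written schematically below in our own notation for a leapfrog-based semi-implicit discretization (details differ among models).

```latex
% Gravity-wave terms averaged over time levels n-1 and n+1 (treated implicitly):
\begin{aligned}
\frac{\mathbf{v}^{\,n+1}-\mathbf{v}^{\,n-1}}{2\Delta t}
  &= -g\,\nabla\!\left(\tfrac{1}{2}\bigl(h^{\,n+1}+h^{\,n-1}\bigr)\right) + N_{\mathbf{v}}^{\,n},\\
\frac{h^{\,n+1}-h^{\,n-1}}{2\Delta t}
  &= -H\,\nabla\!\cdot\!\left(\tfrac{1}{2}\bigl(\mathbf{v}^{\,n+1}+\mathbf{v}^{\,n-1}\bigr)\right) + N_{h}^{\,n}.
\end{aligned}
% Eliminating v^{n+1} yields a Helmholtz (elliptic) problem for h^{n+1}:
\left(1 - gH\,\Delta t^{2}\,\nabla^{2}\right) h^{\,n+1} = \text{known quantities}.
% In a spectral model this is trivial to solve, because the Laplacian is diagonal
% in the spherical-harmonic basis: \nabla^{2} Y_{n}^{m} = -\,n(n+1)\,Y_{n}^{m}/a^{2}.
```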
As a result of these strengths of the spectral method, it was soon adopted by GFDL, NCAR, and ECMWF, and it dominated atmospheric modeling efforts around the world for the next two decades (see the review by Williamson 2007). It is still used today at several major modeling centers.
2) Improvements to gridpoint models
For the modeling centers that persevered with gridpoint methods, important progress was made along two lines. One was the understanding that, in order to adequately capture geostrophic balance, it is necessary to adequately simulate the adjustment toward balance that occurs through the radiation of gravity waves. Ideally, nonpropagating computational modes should be avoided and the entire wave spectrum should have group velocities of the correct sign. These properties depend crucially on the staggering of variables on the grid, and systematic study (Winninghoff 1968; Arakawa and Lamb 1977; Randall 1994) concluded that the best choice depends on the ratio of the grid spacing to the Rossby radius of deformation: the B grid performs better when the grid spacing is coarse compared with the Rossby radius, and the C grid when the grid spacing is fine.
Schematic showing the horizontal distribution of variables on the (left) B grid, (middle) C grid, and (right) Z grid. Here, u is the eastward velocity component, υ is the northward velocity component, p is the pressure, ζ is the vertical component of vorticity, and δ is the horizontal velocity divergence.
Another line of progress built upon Arakawa’s Jacobian work (Arakawa 1966) to develop schemes that conserve energy, enstrophy, or angular momentum for more complete equation sets. These developments involved both horizontal (Sadourny 1975; Burridge and Haseler 1977; Arakawa and Lamb 1981) and vertical (Arakawa and Lamb 1977; Simmons and Burridge 1981) discretizations.
The improved dynamical cores had to adapt to changing computer architectures. The most important architectural change during the 1970s was the introduction of “vector” computing, which became available to many scientists when a Cray-1 computer was delivered to NCAR in 1976. A vector computer can perform arithmetic on lists of numbers (called vectors) much faster than conventional machines.8 To take advantage of the increased speed of the vector hardware, the computer codes of the models had to be rewritten; this entailed a significant amount of programming work, but had many beneficial side effects in addition to the direct benefit of faster-running models.
c. Adding the stratosphere
During the 1970s, some global atmospheric models were extended upward to include the stratosphere. The earliest such model was described by Manabe and Hunt (1968). Later studies include those of Manabe and Mahlman (1976), Schlesinger (1976), and Schlesinger and Mintz (1979). With support from the Climate Impact Assessment Program (CIAP) of the U. S. Department of Transportation, some of the models were used to simulate the effects of supersonic airliners on stratospheric ozone (Johnston 1971; Grobecker et al. 1974; National Research Council Climatic Impact Committee 1975; Morrisette 1989). This was the first time that agency funding was made available specifically for the application of global atmospheric models to investigate anthropogenic effects on the climate system. It can perhaps be viewed as a loss of innocence.
The temperature structure of the stratosphere is dominated by radiative processes, so including this layer motivated developments in radiative transfer parameterizations. At GFDL, Stephen Fels had an insight that pointed the way to more accurate calculations without large increases in computational expense. As Green (1967) showed in a crisp two-page note, the radiative cooling calculation at any level in the atmosphere can be expressed as the sum of exchanges between the level in question and every other level, plus one more term representing the energy lost to the infinite heat sink of the rest of the universe—the cooling-to-space term. Temperatures throughout an atmospheric column, and hence the emitted longwave fluxes, vary by much less than the contrast between the atmosphere and outer space, so that when the cooling-to-space term is nonzero it is usually much larger than the exchange terms. Faced with the limited power of early computers, Fels and Schwarzkopf (1975) exploited this asymmetry in the simplified exchange approximation, the heart of which is a spectrally detailed, and hence more accurate, treatment of cooling to space, and a spectrally coarse treatment of regions in which intra-atmospheric exchanges dominate. The approach is one of the first in which a focus on a specific problem—for the simplified exchange approximation, the computation of heating rates within the atmosphere—allows for algorithms that save time by targeting that calculation. The parameterization was quickly adopted by the radiation community, and incorporated into GFDL’s SKYHI model in 1979.
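Schematically, and in our own notation (sign conventions and the surface exchange term are omitted), the decomposition can be written as follows, where B is the Planck emission at a level and 𝒯 is a transmission function:

```latex
% Cooling at level z = exchange with all other levels + cooling to space (schematic):
Q(z) \;\propto\;
\underbrace{\int \bigl[\,B(z') - B(z)\,\bigr]\,
   \frac{\partial^{2}\mathcal{T}(z,z')}{\partial z\,\partial z'}\,dz'}_{\text{exchange terms}}
\;+\;
\underbrace{B(z)\,\frac{\partial\mathcal{T}(z,\,\mathrm{top})}{\partial z}}_{\text{cooling to space}}
% Because B varies far less within the column than between the column and space,
% the second term usually dominates, and it is the one treated with full spectral
% detail in the simplified exchange approximation.
```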
d. Boundary layer and cloud parameterizations during the 1970s
1) Boundary layer parameterizations
The early ECMWF model used a surface-flux parameterization developed by Louis (1979). It was based on Monin–Obukhov similarity theory, but with some modifications to facilitate use in atmospheric models. The Louis parameterization is still very widely used today.
During the 1960s, a new approach to turbulence parameterization, called “higher-order closure,” emerged within the engineering community (Glushko 1966; Bradshaw et al. 1967; Beckwith and Bushnell 1968; Donaldson and Rosenbaum 1969). A few years later, an essay by Donaldson (1973) introduced higher-order closure to the atmospheric sciences. Soon thereafter, Mellor and Yamada (1974) proposed a detailed hierarchical approach for the application of higher-order closure to atmospheric modeling. Miyakoda and Sirutis (1977) were the first to test higher-order closure in a global atmospheric model. Higher-order closure has been of lasting and recently increasing importance for atmospheric modeling, so we devote some space to it here.
Higher-order closure uses the equations that govern selected “moments” of the subgrid-scale variables. The first moments are the gridcell-averaged values of the primary variables, which might include the liquid water potential temperature, the total water mixing ratio, and the three velocity components. The second moments are the variances and covariances of those variables, including the vertical turbulent fluxes of heat, moisture, and momentum; a second-order closure model predicts the second moments, and the third moments that appear in their equations must be parameterized.
Higher-order closure models need closures for four things:
the effects of higher moments that are not predicted, for example, as mentioned above, the third moments in a second-order closure model;
moments involving the pressure, which occur in the equations for the moments that involve velocity components;
dissipation terms, which are especially important in the equations governing variances; and
moments involving heating, precipitation, and other diabatic processes.
Ever since the 1970s, the literature on higher-order closure has been closely linked with the literature on cloud parameterizations, which were receiving greater attention in part because of more and better satellite observations of the atmosphere (e.g., Stowe et al. 1988; Schiffer and Rossow 1983). An important advance came when the equations of higher-order closure were applied to parameterize fractional cloudiness. Sommeria and Deardorff (1977) and Mellor (1977) independently proposed combining higher-order closure with assumed probability density functions. See also Manton and Cotton (1977) and Chen (1991). The idea is that within the small grid cells of an LES, the fluctuations of temperature and moisture can be described by an assumed joint probability density function of conserved variables, from which the cloud fraction and the average cloud water content of the grid cell can be diagnosed.
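A minimal sketch of such a diagnosis, assuming a single Gaussian distribution of the saturation deficit (the simplest member of the assumed-PDF family; the symbols and normalization below are ours):

```python
import math

def gaussian_pdf_cloud_diagnosis(s_mean, s_sigma):
    """Cloud fraction and mean condensate from an assumed Gaussian PDF.

    s_mean  : grid-mean saturation deficit s = q_t - q_s  (kg/kg)
    s_sigma : subgrid standard deviation of s             (kg/kg)
    Air with s > 0 is cloudy, so the cloud fraction is the Gaussian area above
    zero and the grid-mean condensate is the conditional average of s there.
    """
    if s_sigma <= 0.0:
        return (1.0, s_mean) if s_mean > 0.0 else (0.0, 0.0)
    q1 = s_mean / s_sigma                               # normalized saturation deficit
    cloud_fraction = 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))
    mean_condensate = (s_mean * cloud_fraction
                       + s_sigma * math.exp(-0.5 * q1 * q1) / math.sqrt(2.0 * math.pi))
    return cloud_fraction, mean_condensate
```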
2) Cumulus parameterizations
Cumulus parameterization underwent major theoretical advances during the 1970s, supported by new field observations. Arakawa and Schubert (1974, hereafter AS) proposed a very influential cumulus parameterization with several important new ideas. First, following Arakawa (1969), they allowed a spectrum of cumulus cloud sizes, distinguished by their fractional entrainment rates, and each with its own mass flux. Second, they determined the intensity of convective activity using the hypothesis of “quasiequilibrium,” which asserts that the cumulus clouds consume convective available potential energy (almost) as rapidly as it is generated by other processes. Third, they included a very simple but explicit representation of the interactions between the cumulus clouds and the subcloud boundary layer. Finally, AS allowed the cumulus updrafts to detrain liquid water and ice (and also water vapor) into the environment, thus providing a “hook” that can be used in a parameterization of convectively generated stratiform clouds. It is noteworthy that AS cited a total of nine papers that were authored or coauthored by Joanne Simpson.
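In schematic form (our notation), the quasi-equilibrium closure of AS balances the generation and consumption of the cloud work function A for each cloud type, labeled by its fractional entrainment rate λ:

```latex
% Quasi-equilibrium of the cloud work function (schematic, following AS):
\frac{dA(\lambda)}{dt}
  \;=\; F(\lambda)
  \;+\; \int_{0}^{\lambda_{\max}} K(\lambda,\lambda')\, m_{B}(\lambda')\, d\lambda'
  \;\approx\; 0,
% where F(\lambda) is the generation of A by the large-scale (nonconvective) processes,
% K(\lambda,\lambda') describes how clouds of type \lambda' stabilize the environment
% felt by clouds of type \lambda, and the cloud-base mass fluxes m_B(\lambda) >= 0
% are obtained by solving this constrained balance.
```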
Although AS appreciated that stratiform clouds often form in the outflows from cumulus clouds, modeling research at the time emphasized the role of convection, and tended to treat stratiform clouds as having significance only for their radiative effects. This paradigm was challenged by Houze (1977), who used an analysis of tropical field data to demonstrate that about 40% of the precipitation in a tropical convective system is stratiform in nature. Stratiform clouds received increased attention in subsequent model development efforts. Models also used the mass-flux approach to include convective momentum transport by both updrafts and downdrafts, with the simplifying assumption that horizontal momentum is conserved within updrafts and downdrafts except for the effects of entrainment.
AS also neglected the effects of convective downdrafts, which had been recognized in observational studies (Starr Malkus 1955). Johnson (1976) proposed a way to include downdrafts in a cumulus parameterization, and more such work followed (e.g., Emanuel 1981; Cheng and Arakawa 1997).
e. The GFDL-based family of ocean models
Here we depart from the decade-by-decade organization of this chapter to describe a “family tree” of ocean models that sprang from the Bryan (1969b) model, which was developed during the 1960s. The tree began to grow during the 1970s, and continues to put out new branches in the twenty-first century.
1) Descendants of the Bryan (1969b) ocean model
As mentioned in section 4a, GFDL developed the Bryan–Cox ocean model during the 1960s. The model underwent extensive further development during the 1970s, and beyond. Figure 12-8 shows a flow diagram illustrating the lineage of ocean circulation models originating from the Bryan–Cox code. In addition to details offered in the extended figure caption, we highlight certain elements of the developments in the main text. The Bryan–Cox code was enhanced by Albert J. Semtner, Jr., who joined the global modeling group at UCLA after completing his Ph.D. at Princeton (Semtner and Mintz 1977). Semtner’s version of the code incorporated arbitrary land–sea masking (allowing for more realistic domains) and upgrades to the computational efficiency on vector machines (Semtner 1974). Semtner’s enhancements were incorporated into the Cox (1984) code, thus initiating a practice of sharing algorithmic upgrades among a community of developers. The Killworth et al. (1991) algorithm to include a free-surface option was also incorporated into the code. The Bryan–Cox–Semtner code was used for the first simulations of the global ocean at ½° resolution (Semtner and Chervin 1992). These simulations ushered in the era of global ocean models that admit transient mesoscale eddy activity (see Hecht et al. 2008 for a more recent compendium).
Flow diagram showing relationships among numerical ocean codes originating from the methods of Bryan (1969b). The Bryan (1969b) algorithm was the basis for Cox’s code at GFDL and the starting point for extensions made by Semtner (1974) at UCLA. The Semtner (1974) branch on the left led to the Parallel Ocean Climate Model (POCM) used at NCAR and the Naval Postgraduate School (NPS). It also fostered the Parallel Ocean Program (POP) developed at the Los Alamos National Laboratory (LANL). The Cox (1984) code formed the basis of the Fine Resolution Antarctic Model (FRAM) developed in the United Kingdom by Peter Killworth and David Webb. The OCCAM project in the United Kingdom (Webb et al. 1998) was among the first global models with active mesoscale eddy variability (using resolutions as fine as 1/12°). On the right side of the diagram are Modular Ocean Model versions 1 and 2 (MOM1 and MOM2), representing the GFDL descendants of the Bryan–Cox code. On the far right are the NCAR efforts with NCOM (Gent et al. 1998, NCAR CSM Ocean Model) and CSM (Climate System Model). [Courtesy of Albert Semtner, Jr.]
The Bryan–Cox–Semtner code was also used in the Parallel Ocean Climate Model (POCM) developed at NCAR during the 1990s. POCM was one of the first ocean models to make efficient use of the massively parallel computer architectures that are now standard in the community.
In 1989, Mike Cox died at a relatively young age,9 at which point Ron Pacanowski, Keith Dixon, and Tony Rosati at GFDL took charge of the GFDL ocean model. Their efforts led to the first version of the Modular Ocean Model (MOM1), thus furthering the GFDL lineage that continues to this day with MOM6. MOM1 was the ocean component of many global climate models in the 1990s, such as the first climate model developed by the Met Office (Murphy 1995), and the second version of the Canadian Climate Center model (Flato et al. 2000). Climate models in Germany, Japan, and Australia also made use of MOM1.
2) Support for a community of numerical modelers
The Semtner (1974) code and technical report were made available to other ocean modelers, which led to a much wider use of the GFDL code, especially in the United States and the United Kingdom. This idea of sharing code was then formalized in 1984 when Mike Cox made the GFDL ocean code freely available to the public (Cox 1984). The code could be configured to suit the scientific interests of the investigators. This promoted its use as an experimental tool for scientific investigation. Use of the Bryan–Cox–Semtner code thus spread through the ocean and climate modeling community worldwide. These efforts at community development are widespread in today’s world of open-source code development, but they were unique in the late 1970s and early 1980s. In addition to the FORTRAN code, Cox provided an updated technical manual describing the mathematical equations and numerical methods that formed the basis for the code. The Semtner (1974) technical report and the Cox (1984) manual proved extremely valuable in communicating the scientific and engineering rationales for various features of the model. As a result, the code was readily understandable by a broad community of oceanographers and numerical algorithm specialists.
These pioneering efforts at building a community of informed users paved a path toward enhancing the scientific integrity, transparency, and reproducibility of ocean model codes and the simulations produced with them (a formidable task to this day!). It also fostered several allied efforts to use the Bryan–Cox–Semtner code for a suite of scientific applications, and to enhance the physical parameterizations, numerical methods, and computational efficiency of the models.
3) Ocean codes inspired by MOM
The Parallel Ocean Program (POP) is a direct descendant of the Bryan–Cox–Semtner code. It was developed in the early 1990s for the Connection Machine by Rick Smith, John Dukowicz, and Bob Malone at Los Alamos National Laboratory (LANL; Smith et al. 1992). An implicit-free-surface formulation and other numerical improvements were added by Dukowicz and Smith (1994). Later, the capability for general orthogonal coordinates for the horizontal mesh was implemented (Smith et al. 1995). See also Murray (1996) for efforts with the Bryan–Cox–Semtner code and Madec et al. (1997) for efforts with the Océan Parallélisé (OPA) model in France. In 2001, POP was adopted as the ocean component of the Community Climate System Model (CCSM) based at NCAR. Substantial efforts at both the LANL and NCAR have gone into adding various features to meet the needs of the CCSM (Smith et al. 2010; Danabasoglu et al. 2006, 2012). The POP code has been used as the ocean component of the CCSM, and versions 1 and 2 of the Community Earth System Model (Hurrell et al. 2013).
Upon the release of the Cox code in 1984, scientists around the world had access to the fruits of more than 20 years of focused efforts at GFDL. Nonetheless, as scientists are prone to do, many arrived at distinct ideas for how best to go about developing numerical models. One such effort is the ocean component of the Nucleus for European Modeling of the Ocean (NEMO). This code was developed from release 8.2 of the OPA model (Madec et al. 1997). The NEMO code has been used for a wide range of applications, both regional and global, as a forced ocean model and as a component of a climate model. In particular, it is used today in the global models of the Met Office, the European Centre for Medium-Range Weather Forecasts, and the French National Centre for Scientific Research.
The Max Planck Institute ocean model (MPIOM) is the ocean–sea ice component of the MPI Earth System Model. MPIOM is a primitive equation model (C grid, z coordinates, free surface) with the hydrostatic and Boussinesq assumptions. It includes a bottom boundary layer scheme for the flow across steep topography, and uses a curvilinear orthogonal grid, which allows for a variety of configurations. A description of MPIOM can be found in Marsland et al. (2003). A list of model development efforts that is current up to the year 2000 can be found in Griffies et al. (2000). Any list is incomplete, and we do not attempt an update here.
f. Sea ice advances during the 1970s and 1980s
The 1970s and 1980s were a golden age for the development of sea ice models, with major advances in the treatment of sea ice thermodynamics and the emergence of models that simulate sea ice dynamics, in which mechanical failure causes ridging and rafting among floes and also creates openings between floes known as leads. The regional jumble of sea ice caused by the interplay of deformation, growth, and melt results in a distribution of thicknesses that modelers wanted to simulate, in order to capture the highly nonlinear dependence of compressive strength and of growth and melt on ice thickness.
Observations and scientific understanding of sea ice had recently expanded as a result of the International Geophysical Year (IGY) in 1957–58. Norbert Untersteiner spent a year on the sea ice as chief scientist of an IGY field camp. In the decade that followed he published a series of papers that established the basic principles that govern a numerical model of sea ice thermodynamics. Together with graduate student Gary Maykut of the University of Washington, he assembled a sea ice model that treated the surface energy budget and sea ice growth and melt, including the distinctive effects of brine pockets within the ice (Maykut and Untersteiner 1971). The concentration of brine in the pockets varies with the heat stored in the sea ice. The temperature and brine concentration were simulated in 10-cm layers that absorbed sunlight and conducted heat between ocean and atmosphere. The physical interactions were so complex that their model was limited to just the vertical dimension, and because of its computational expense no climate or weather model adopted the Maykut and Untersteiner treatment of brine-pocket physics until the twenty-first century.
The RAND Corporation sought to create a simpler thermodynamic sea ice model to couple to early ocean models. RAND commissioned Bert Semtner to reduce the complexity of the Maykut and Untersteiner model. Semtner did so by developing a very simple one-layer model of sea ice alone and an innovative three-layer model (two layers of sea ice and one of snow) with a reservoir of interior solar heating to mimic the effect of brine pockets and shift the timing of the surface melt season in a semirealistic way (Semtner 1976). These two reduced-complexity models by Semtner were the basis for sea ice thermodynamics in global climate models for decades.
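The flavor of the reduced-complexity thermodynamics can be conveyed by a zero-heat-capacity slab, in which the ice grows or melts at its base according to the imbalance between conduction through the ice and the ocean heat flux. The sketch below is an illustration in that spirit, not a transcription of Semtner (1976); the parameter values are approximate.

```python
def slab_ice_basal_growth(h, T_surface, F_ocean,
                          k_ice=2.0,        # thermal conductivity of ice (W m-1 K-1), approx.
                          T_freeze=-1.8,    # freezing temperature of seawater (deg C)
                          rho_i_Lf=3.0e8):  # ice density x latent heat of fusion (J m-3), approx.
    """Basal growth rate (m/s) of a sea ice slab with no internal heat storage.

    A linear temperature profile is assumed between the surface (T_surface, deg C)
    and the base (T_freeze), so the conductive flux through ice of thickness h (m, > 0)
    is k_ice * (T_freeze - T_surface) / h.  The base grows when conduction exceeds
    the ocean heat flux F_ocean (W m-2) and melts otherwise.
    """
    F_conductive = k_ice * (T_freeze - T_surface) / h
    return (F_conductive - F_ocean) / rho_i_Lf   # > 0: growth, < 0: basal melt
```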
In the fall of 1976, sea ice scientist William Hibler became the 25th visitor to GFDL. He was impressed by the practical issues of sea ice modeling in a global climate model. He learned how ocean models were formulated from his host Kirk Bryan, who inspired him to simplify the sea ice model from the Arctic Ice Dynamics Joint Experiment (AIDJEX; Coon et al. 1974), which employed a constitutive law for plastic behavior to simulate the dependence of the stress tensor on the velocity field, allowing for material failure and deformation. Hibler (1979) greatly reduced the numerical complexity of the AIDJEX model by formulating a nonlinear viscous-plastic rheology for sea ice. He demonstrated the scheme in an 8-yr simulation of the Arctic basin. The AIDJEX model and Hibler’s viscous-plastic scheme remain the basis for the dynamics in most sea ice models used in climate models today, though many climate models, including GFDL’s, used highly idealized methods such as free drift with stoppage to model sea ice dynamics for several more decades, as efforts continued to further reduce the computational demands of the viscous-plastic dynamics.
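In standard notation, the viscous-plastic constitutive law relates the internal ice stress to the strain rates roughly as follows (shown schematically here; see Hibler 1979 for the definitions of the viscosities and the yield curve):

```latex
% Viscous--plastic constitutive law for sea ice (schematic):
\sigma_{ij} \;=\; 2\,\eta\,\dot{\epsilon}_{ij}
  \;+\; (\zeta - \eta)\,\dot{\epsilon}_{kk}\,\delta_{ij}
  \;-\; \frac{P}{2}\,\delta_{ij},
% where \dot{\epsilon}_{ij} is the strain-rate tensor, P is the ice strength
% (a function of ice thickness and concentration), and the bulk and shear
% viscosities \zeta and \eta are nonlinear functions of the strain rates chosen
% so that the stress state lies on an elliptical yield curve at typical
% deformation rates and reduces to slow viscous creep at very small ones.
```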
As Hibler and other sea ice modelers explored methods to simulate sea ice dynamics, the need for a subgrid-scale parameterization to simulate the distribution of sea ice thicknesses arose. An equation to describe the ice-thickness distribution was developed by Alan Thorndike with other colleagues at the University of Washington (Thorndike et al. 1975). Hibler soon implemented an ice-thickness distribution scheme in his Arctic basin model (Hibler 1980).
The advanced sea ice models developed during the 1980s were used only in experimental applications, occasionally coupled to an ocean model. They were not coupled to an atmosphere model for another decade. One reason is that climate modeling centers considered advanced sea ice models to be too computationally demanding. Another factor was that the focus of global climate modeling remained primarily on the tropics and northern midlatitudes until the twenty-first century.
g. Simulations of global warming
Manabe and Möller (1961) demonstrated that, in the global mean, radiative cooling of the troposphere is roughly balanced by convective heating (see also Manabe and Strickler 1964).10 The GFDL team performed pioneering one-dimensional simulations of “radiative-convective equilibrium” (RCE; Manabe and Strickler 1964; Manabe and Wetherald 1967), an idealization that continues to be useful today (e.g., Wing et al. 2018). To mention one very important example, the study of Manabe and Wetherald (1967) pointed to the importance of the water vapor feedback for climate change. As noted by Manabe and Strickler (1964) in a paper describing single-column modeling of RCE, “one of the major purposes of our study is the construction of a model of radiative transfer simple enough to be incorporated into a general circulation model of the atmosphere.”
As noted above, Manabe and Wetherald (1967) had already studied the effects of increasing atmospheric carbon dioxide concentrations on the “climate” of a one-dimensional RCE model. The first simulation of global warming with a true climate model was reported by Manabe and Wetherald (1975). Their model was idealized through the use of a limited computational domain, simplified topography, no energy transport by the oceans, no seasonal or diurnal cycles, and fixed cloudiness. It is remarkable that this first simulation with a simplified model, more than 40 years ago, predicted many changes that have now been observed in the real atmosphere, including a warming troposphere with greater warming near the pole, a cooling stratosphere, stronger precipitation, and increased atmospheric water vapor. The successful strategy of Manabe and Wetherald (1975), Manabe and Stouffer (1980), and Manabe and Wetherald (1980) was to explore the possibility of anthropogenic climate change using the relatively simple models available at the time, rather than waiting for the more complete models of the future.
6. The 1980s
a. Community modeling gets under way
In 1983, NCAR released the Community Climate Model (CCM) (Pitcher et al. 1983; Williamson 1983; Williamson et al. 1983; Kiehl et al. 1998). Initially, the CCM was essentially an atmosphere model coupled to a simple land surface model. It lacked a coupled ocean model, so calling it a “climate model” was a bit of an exaggeration. The CCM was widely used because it was freely available and fully documented.
During the 1980s, Washington and Meehl (1983, 1984, 1989) used versions of the CCM to perform increasingly detailed simulations of anthropogenic greenhouse warming. In the late 1990s, Washington et al. (2000) developed the Parallel Climate Model (PCM). The atmosphere component was the CCM3 at T42 resolution, and the ocean component was the POP model at about 0.5° resolution. The PCM was one of the first models designed to run very efficiently on the parallel computers that were emerging at that time. The PCM was subsequently used to run ensembles of twentieth-century simulations forced by individual climate forcings, such as greenhouse gases, aerosols, and solar variability, rather than their combined effects. The interesting results are presented in Meehl et al. (2003).
The CCM matured through a series of releases. During the 1990s, a sophisticated land model (Bonan 1998) was added (Kiehl et al. 1998). In 1996, the CCM was coupled to an ocean model and was able to run without flux adjustments through the introduction of the so-called Gent–McWilliams ocean mixing parameterization (Gent and McWilliams 1990). The entire coupled model was renamed CCSM in 2004, and then renamed again as the Community Earth System Model (CESM) in 2010; the Community Atmosphere Model (CAM) is the atmosphere component of the CESM.
b. Atmospheric dynamical cores in the 1980s
Although the semi-implicit method allowed the CFL criterion to be satisfied for gravity waves, and truncation errors were generally thought to be dominated by the space discretization, model time steps were still limited by the CFL criterion for explicit Eulerian advection. This motivated Robert (1981, 1982) to propose a semi-implicit semi-Lagrangian method of integrating the model equations. In the semi-Lagrangian method time derivatives are expressed as derivatives along fluid parcel trajectories. Fluid parcel trajectories arriving at model grid points are traced backward over one or two time steps, and the required fields are interpolated to the trajectory departure points. In this way the CFL criterion for advection is satisfied and significantly longer time steps are possible.
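A one-dimensional sketch of a single semi-Lagrangian step is given below, using a one-iteration backward trajectory and linear interpolation; operational schemes iterate the trajectory calculation and use higher-order (e.g., cubic) interpolation. The setting and names are ours, for illustration.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """Advect a periodic 1D field q through one semi-Lagrangian time step.

    q  : field values on a uniform periodic grid
    u  : wind at the grid points (m/s), held fixed over the step
    dt : time step (s); may exceed the Eulerian CFL limit for advection
    dx : grid spacing (m)
    Each arrival point traces a backward trajectory of length u*dt, and the new
    value is the (linearly) interpolated field at the departure point.
    """
    n = q.size
    x_arrival = np.arange(n) * dx
    x_departure = x_arrival - u * dt              # one-iteration backward trajectory
    s = np.mod(x_departure / dx, n)               # departure position in grid-index units
    i0 = np.floor(s).astype(int)
    w = s - i0                                    # linear interpolation weight
    i1 = (i0 + 1) % n
    return (1.0 - w) * q[i0] + w * q[i1]
```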
The first semi-implicit semi-Lagrangian schemes used three time levels, but more efficient two-time-level versions were soon formulated (Temperton and Staniforth 1987; McDonald and Bates 1987), and ways of handling the poles in spherical geometry were worked out (Ritchie 1991; Bates et al. 1990). Because of the resulting efficiency gains, the method was soon adopted at a number of operational centers (Ritchie 1991; Ritchie et al. 1995).
c. Radiative transfer work in the 1980s
in the eighties, the so-called OTGs, the other trace gases, became very popular and well-known, particularly following Ramanathan’s and Hansen’s papers (Ramanathan et al. 1983; Hansen et al. 1983). We [GFDL] felt that to create proper energy balance, especially when doing climate calculations, you needed to have methane, nitrous oxide, and chlorofluorocarbons (interview with V. Ramaswamy, 28 November 2017).
As Stephens (1984) notes, the treatment of interactions between clouds and radiation in global models in the mid-1980s was fairly rudimentary. Cloud properties were almost universally prescribed, perhaps as a function of relative humidity but frequently as a function of location and season, with limited spectral information. Methods for more sophisticated treatments were already in place, with insights from work on planetary atmospheres (Hansen and Travis 1974) informing complete parameterizations in both spectral regions (Stephens 1978; Roach and Slingo 1979; Slingo and Schrecker 1982). The delta-scaling method for treating the sharp forward peaks in the scattering phase function of clouds had been developed (Potter 1970; Joseph et al. 1976) and the variety of proposed two-stream methods had been unified by Meador and Weaver (1980) and Zdunkowski et al. (1980). These tools were in place as models began to make more detailed calculations of cloud properties (section 6d), although treatments of “cloud overlap,” that is, how radiation was partitioned between clear and cloudy skies in the vertical, remained simple.
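The delta scaling mentioned above removes the sharp forward peak by treating a fraction f of the scattered energy as unscattered; with the delta-Eddington choice f = g², the scaled optical properties are:

```latex
% Delta scaling of cloud optical properties (delta-Eddington: f = g^2):
\tau' = (1 - \omega f)\,\tau, \qquad
\omega' = \frac{(1 - f)\,\omega}{1 - \omega f}, \qquad
g' = \frac{g - f}{1 - f},
% where \tau, \omega, and g are the optical depth, single-scattering albedo, and
% asymmetry parameter.  Two-stream fluxes computed from the primed quantities are
% much more accurate for strongly forward-scattering cloud droplets.
```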
The move to better prediction of cloud properties was partly motivated by recognition of the role of cloud-radiation interactions in shaping the large-scale circulation including tropical convection. One example came from a small group working with the UCLA/Goddard Laboratory for Atmospheres (GLA) GCM, who demonstrated that predicting cloud properties and allowing those variable properties to influence the radiation field (Harshvardhan et al. 1987) had tremendous impacts on the global distribution of cloudiness and the resulting energy budget (Randall et al. 1989; Harshvardhan et al. 1989).
I got the feeling that at least part of the problem might be in the clear-sky longwave cooling of the descending branch of the Hadley circulation (Now it is easy to state it so simply, at the time it was not so clear-cut.) The temperature dependence of the longwave absorption is very different in my band models (Morcrette 1990, 1991) from that in the then-current ECMWF scheme (essentially from Geleyn and Hollingsworth 1979). It took me till April 1989 to convince people that some revised version of the Lille codes [from which Morcrette’s codes have been adapted] were better overall even if they were less computer-efficient. The main impact is a much more stable maintenance of the Hadley circulation, which previously tended to weaken with the length of the forecast, and an increased geographical contrast in cloud forcing (J.-J. Morcrette 2017, personal communication).
The many independent spectral calculations required for broadband calculations, and the usual desire to compute fluxes with and without clouds to help understand the role of clouds in Earth’s energy budget, make radiation a large computational burden. In most models, tendencies due to radiation are computed less frequently in time than other physical processes, but the need for computational efficiency has motivated other interesting compromises. Two approaches at NASA GISS do not seem to have been reported in the literature, although they have been used since the original models described by Hansen et al. (1983).
[when sampling the diurnal cycle regularly] you get beat frequencies in there—pressure waves building up and stuff like that. When you make that odd fraction it would eliminate some of that type of noise. The other thing we did to speed up the radiation was sampling—we did every other grid box. So with sampling every 2 and a half hours and every other gridbox I think radiation might’ve been taking maybe 25 percent of the computing time (interview with Andrew Lacis, 26 October 2017).
so if this cloud has a 50 percent chance of being there we’ll draw a random number, and if it’s bigger than a half we’ll call it clear and smaller than a half we’ll call it cloudy. That idea might’ve come from Larry Travis, at least according to Bill Rossow. The rationale for doing that came from Charney, who basically said that random errors in climate model don’t matter that much but systematic errors do (interview with Andrew Lacis, 26 October 2017).
The 1980s also saw the development of widely available reference models and data for making radiative transfer calculations. For problems in clear-sky radiative transfer, this included the publication of spectroscopic databases covering an increasingly broad set of gases [e.g., the high-resolution transmission molecular absorption database (HITRAN); Rothman et al. 1987] and the development of line-by-line radiative transfer models (e.g., LBLRTM; Clough et al. 1992) capable of computing optical depths given the state of the atmosphere. Calculations involving clouds were made more tractable by the widespread availability of codes for doing Mie calculations (Wiscombe 1980) for single-scattering properties and discrete-ordinates calculations (Stamnes et al. 1988) to obtain the angularly resolved radiation field.
These codes provided an opportunity to test parameterizations against reference results. The first such effort was the Intercomparison of Radiation Codes Used in Climate Models (ICRCCM; see Ellingson and Fouquart 1991; Ellingson et al. 1991; Fouquart et al. 1991). ICRCCM argued for the use of reference models, rather than direct observations, as the standard for radiation intercomparisons, given the difficulties in making simultaneous comparable measurements at the bottom and top of the atmosphere. The broad lesson from the intercomparison effort (Fig. 12-9) was that line-by-line models agreed to within a few percent (unsurprisingly, given that many use the same underlying spectroscopic information), but that parameterizations used in weather and climate models could be substantially in error, especially with respect to radiative forcing, that is, the sensitivity of changes in fluxes to changes in composition. The profiles used in ICRCCM were quite idealized, however, making it difficult to estimate the magnitude of errors in weather forecasting or climate projection applications, a problem that persists in more recent assessments (Oreopoulos et al. 2012; Pincus et al. 2015).
The treatments of radiation in global models have been compared to benchmark “line-by-line” calculations for many decades. (a) Results for longwave fluxes at the surface and the tropopause in a single idealized but quasi-realistic atmosphere. The line-by-line models, shown as plus signs, agree with each other quite well, partly because they share the same spectroscopic data; by comparison, both narrow- and wide-band models (“NBMs” and “WBMs,” respectively) show significantly more variation. (b) Published 15 years later, this comparison focuses on forcing (i.e., the change in flux caused by a change in composition), here the impacts of methane and nitrous oxide. Line-by-line calculations are indistinguishable from one another, while the GCMs show variation of 25% or more of the signal in the longwave while entirely ignoring the impact in the shortwave. (c) Appearing almost a decade later still, this comparison shows that errors in GCM parameterizations (circles) still swamp those from line-by-line models (squares) in calculations of the forcing by quadrupled carbon dioxide concentrations. Panel (a) is Fig. 15 of Ellingson et al. (1991); (b) is Fig. 6 of Collins et al. (2006b); (c) is redrawn from Fig. 3 and supplemental material in Pincus et al. (2015).
d. Boundary layer and cloud parameterizations during the 1980s
Deardorff (1972a) emphasized the importance and highly variable nature of the depth of the boundary layer, especially over land. He proposed a boundary layer parameterization in which the depth of the boundary layer is an explicit prognostic (i.e., time stepped) variable of the model. Deardorff’s idea was implemented in the UCLA model by Randall (1976) and Suarez et al. (1983). A parameterization of stratocumulus clouds was included following Lilly (1968), and the boundary layer parameterization was coupled with the cumulus parameterization of Arakawa and Schubert (1974) by allowing the cumulus clouds to remove mass from the boundary layer. The model’s vertical coordinate system was modified to make the boundary layer top an internal coordinate surface. With this approach, the model layers below the boundary layer top comprised the boundary layer, and the depth of the boundary layer could change in response to mass fluxes across the boundary layer top because of entrainment and cumulus convection. Randall et al. (1985) analyzed seasonal simulations with the model, and reported the results of some numerical experiments, including one in which the boundary layer depth was artificially held constant, and another in which the diurnal cycle of solar radiation was replaced by daily mean insolation. The results showed the importance of variations of the boundary layer depth for precipitation over land and for determining the amount of low-level clouds.
We mention four advances in cumulus parameterization during the 1980s. Emanuel developed a similarity theory of convective downdrafts (Emanuel 1981). Raymond and Blyth (1986) proposed that mixed parcels created through entrainment migrate to their levels of neutral buoyancy. This idea, called “buoyancy sorting,” has been very influential. The Betts–Miller parameterization (Betts 1986; Betts and Miller 1986), developed at ECMWF, used an adjustment to empirically determined soundings of both temperature and water vapor. Finally, the Tiedtke convection parameterization (Tiedtke 1989) was implemented in the ECMWF model. It used the moisture-convergence closure developed by Kuo (1965), but a later version used by the Max Planck Institute for Meteorology was modified to use a buoyancy closure (Nordeng 1994).
Cloud microphysics parameterizations began to appear in global atmospheric models during the 1980s. The earliest low-resolution mesoscale models developed in the 1970s used the gridscale saturation removal method for calculating surface precipitation, similar to what was done in early operational NWP models. However, cloud-scale models around this time quickly adopted a Kessler-like approach with separate equations for cloud and rain mass (e.g., Klemp and Wilhelmson 1978). By the early to mid-1980s many mesoscale models had also adopted this type of approach (e.g., Hsie et al. 1984). Around this time both mesoscale and cloud models also began incorporating ice microphysics; commonly used schemes included those from Lin et al. (1983) and Rutledge and Hobbs (1984). These schemes generally assumed two or three ice categories (cloud or small ice, snow, and graupel or hail) and included conversion processes between the categories analogous to the Kessler approach for liquid microphysics. Beginning in the late 1960s and 1970s detailed bin microphysical schemes that explicitly evolved the particle size or mass distributions by predicting the total water mass in size or mass bins were also developed (e.g., Bleck 1970; Berry and Reinhardt 1974), but computational cost precluded their wider use in models until the 1990s and 2000s.
As noted above, larger-scale NWP models at operational centers through the 1970s and 1980s continued to convert water vapor to surface precipitation when the water vapor mixing ratio exceeded some threshold value. Microphysical processes were not considered in this approach. By the late 1990s, the operational Eta Model at the U.S. National Centers for Environmental Prediction (NCEP) adopted a prognostic cloud scheme that explicitly included evolution equations for cloud condensate and a diagnostic treatment of precipitation from the predicted cloud fields (Zhao et al. 1997). Both the cloud fraction and predicted cloud water content were accounted for in the radiation parameterization. Some forecast models with prognostic cloud condensate included more detailed representations of subgrid-scale condensation for both stratiform and convective clouds (Sundqvist et al. 1989) as well as prognostic cloud fraction (Tiedtke 1993). These models typically partitioned condensate as liquid or ice according to temperature. These were important advances for operational forecast models, but the representation of microphysics was still highly simplified compared to contemporaneous finer mesh mesoscale and cloud models that employed several prognostic categories of cloud and precipitation water.
The representation of the hydrologic cycle in global climate models in the 1970s through the 1990s was generally at a similar level of complexity to that of operational NWP at the time, but with more sophisticated diagnostic parameterizations to represent the cloud fraction and optical properties. The diagnostic schemes of the 1980s were used to predict the occurrence and radiative properties of clouds based on the relative humidity, vertical velocity, temperature, and/or precipitation rate (e.g., Slingo 1985; Wetherald and Manabe 1988).
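A minimal sketch of a relative-humidity-based diagnostic of the kind used in these schemes is shown below. The quadratic ramp and the critical relative humidity of 0.8 are illustrative choices; real schemes vary the threshold with height and sometimes add dependences on vertical velocity or other fields.

```python
def diagnostic_cloud_fraction(rh, rh_crit=0.8):
    """Diagnose cloud fraction from grid-mean relative humidity (illustrative).

    rh, rh_crit : relative humidity and critical relative humidity as fractions (0-1)
    Returns 0 below the critical value and ramps quadratically to 1 at saturation.
    """
    if rh <= rh_crit:
        return 0.0
    return min(((rh - rh_crit) / (1.0 - rh_crit)) ** 2, 1.0)
```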
The 1980s brought an increased emphasis on the radiative effects of clouds, for which greatly improved observations were becoming available (Schiffer and Rossow 1983; Barkstrom 1984; Ramanathan et al. 1989). Cloud feedback became an important focus of climate change simulations with global models (Charney et al. 1979; Hansen et al. 1984; Wetherald and Manabe 1988). The model intercomparison organized by Cess et al. (1989) pointed to the importance of cloud feedbacks for climate change. It marked the beginning of increased communication and cooperation among the world’s modeling groups. It had an immediate influence on the formulations of some of the participating models. For example, comparison of results from different models led to the discovery of some coding errors!
Starting in the 1980s, cloud-parameterization testing became organized on an international scale, beginning with NASA’s First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE) Program during the 1980s (Cox et al. 1987), and continuing in the 1990s and beyond with DOE’s Atmospheric Radiation Measurement (ARM) Program (Stokes and Schwartz 1994; Turner and Ellingson 2017) and the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) activities (GEWEX Cloud System Science Team 1993; Randall et al. 2003). The radiation intercomparisons mentioned in section 6c are important ways of testing radiation parameterizations. One strategy for testing the parameterizations of a model as a coupled set is to drive both the parameterized column physics of a GCM and a high-resolution CRM (e.g., Krueger 1988; Khairoutdinov and Randall 2003) or LES model with “forcing” data based on field observations, and then compare the results of the two models with each other and with additional observations from the field (Randall et al. 1996b). The column physics driven in this way is called a “single-column model.” The high-resolution models are called “process models.”
e. Momentum transport by gravity waves
Eliassen (1960) analyzed the vertical fluxes of energy and momentum associated with internal gravity waves excited by the wind blowing over mountain ranges. The importance of such fluxes for the global circulation began to be appreciated about 20 years later (Lindzen 1981). Since the mid-1980s, there has been a lot of interest in the effects of gravity wave momentum fluxes on the global circulation of the atmosphere; because the waves act to decelerate the mean flow, these interactions are often called “gravity-wave drag” (Palmer et al. 1986; McFarlane 1987). At the beginning, most of the discussion was about gravity waves forced by flow over topography, but later studies recognized the importance of gravity waves forced by convective storms (e.g., Fovell et al. 1992; Richter et al. 2014).
Ocean models also parameterize momentum transport and mixing due to internal gravity waves, as discussed in section 8d(3).
f. Land surface modeling during the 1980s
During the 1980s methods were developed to relate evapotranspiration on the land surface to the actual physiology of plant stomates. In a key paper, Jarvis (1976) used laboratory measurements to derive empirical functions that related stomatal conductance to light, humidity, and temperature. Plants actively control the aperture of their stomates in response to these three environmental variables. Light triggers photosynthesis, during which stomates must open to let CO2 diffuse into their tissues. The rate of transpiration through open stomates depends on the humidity of the environmental air, so plants close their stomates in very dry air to prevent desiccation. Finally, stomates tend to close when conditions are either too hot or too cold.
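The Jarvis approach is multiplicative: a maximum conductance is scaled down by independent stress factors for light, humidity, and temperature. The sketch below illustrates that structure; the specific functional shapes and constants are our illustrative choices, not those of Jarvis (1976).

```python
def jarvis_stomatal_conductance(par, vpd, temp_c,
                                g_max=0.01,     # maximum conductance (m/s); illustrative
                                k_par=100.0,    # light half-saturation (W/m2); illustrative
                                k_vpd=0.35,     # closure rate with VPD (1/kPa); illustrative
                                t_opt=25.0, t_range=15.0):  # temperature response; illustrative
    """Jarvis-style stomatal conductance: g_max scaled by independent stress factors.

    par    : photosynthetically active radiation (W/m2)
    vpd    : vapor pressure deficit of the ambient air (kPa)
    temp_c : leaf temperature (deg C)
    Each factor lies between 0 and 1, so the conductance never exceeds g_max.
    """
    f_light = par / (par + k_par)                                  # stomata open in bright light
    f_vpd = max(1.0 - k_vpd * vpd, 0.0)                            # close in very dry air
    f_temp = max(1.0 - ((temp_c - t_opt) / t_range) ** 2, 0.0)     # close when too hot or cold
    return g_max * f_light * f_vpd * f_temp
```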
These ideas were combined with previous work in land surface modeling to create comprehensive schemes aimed at fulfilling Richardson’s vision of realistic land surface boundary conditions for atmospheric models. Examples of such models were the work of Deardorff (1978), the Biosphere–Atmosphere Transfer Scheme (BATS; Dickinson et al. 1986), the Simple Biosphere Model (SiB; Sellers et al. 1986), and the model of Noilhan and Planton (1989). These models were fully coupled to atmospheric global circulation models and provided interactive lower boundary conditions for the exchange of radiation, heat, water, and momentum. They included two-stream canopy radiative transfer for the calculation of leaf and soil temperatures and albedo. They prognosed soil moisture and temperature, and diagnosed the temperature of the vegetation, the turbulent fluxes of sensible and latent heat, and the ground heat flux. Surface parameters such as roughness, radiative properties, and soil hydraulic properties were prescribed as global maps derived from many disparate sources.
Turbulent fluxes in these models were represented using a network of nodes and resistors (Fig. 12-10) using an “electrical analogy” to Ohm’s law, which was first introduced by Richardson (1922). Temperature and water vapor are treated as potentials, the fluxes of sensible and latent heat among model components as resulting currents proportional to the difference in potentials, and the proportionality coefficients as variable resistors. Ohm’s law thus amounts to diffusion, with diffusivity at the molecular scale for transpiration through plant stomates and at turbulent scales elsewhere in the model. BATS (Dickinson et al. 1986) used a single plant canopy layer, whereas SiB (Sellers et al. 1986) introduced a subcanopy or “understory” of grass or shrubs beneath a taller tree canopy.
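In its simplest single-source form, the electrical analogy amounts to the pair of bulk formulas sketched below: sensible heat driven by a temperature difference across an aerodynamic resistance, and latent heat driven by a humidity difference across the stomatal (surface) and aerodynamic resistances in series. The symbols and the specific network are illustrative simplifications of the multi-node schemes in Fig. 12-10.

```python
def surface_fluxes_ohms_law(T_s, T_a, q_sat_s, q_a, r_a, r_s,
                            rho=1.2, cp=1004.0, Lv=2.5e6):
    """Sensible and latent heat fluxes (W/m2) from the Ohm's-law analogy.

    T_s, T_a : surface and lowest-atmospheric-layer temperatures (K)
    q_sat_s  : saturation specific humidity at the surface temperature (kg/kg)
    q_a      : specific humidity of the lowest atmospheric layer (kg/kg)
    r_a, r_s : aerodynamic and stomatal (surface) resistances (s/m)
    The temperature and humidity differences play the role of potentials, and the
    resistances limit the resulting "currents" of sensible and latent heat.
    """
    H = rho * cp * (T_s - T_a) / r_a               # sensible heat across r_a only
    LE = rho * Lv * (q_sat_s - q_a) / (r_a + r_s)  # latent heat across r_a and r_s in series
    return H, LE
```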
Equivalent resistor networks used to represent surface energy flux in land surface models of several levels of complexity, where T is temperature, e is vapor pressure, and g is conductance (transfer coefficient). Subscripts a, s, c, v, and g refer to lowest atmospheric layer, surface, canopy air space, vegetation, and ground surface respectively. Subscripts h and w refer to sensible and latent heat, respectively. [Redrawn from Bonan (2015); Ecological climatology: concepts and applications. © Gordan Bonan 2016. Reproduced with permission of The Licensor through PLSclear.]
Simulation experiments revealed important modes of interaction between the vegetated land surface and the atmosphere that can affect climate. Charney (1975) showed that land clearing and overgrazing could lead to drought through a feedback between surface albedo and enhanced atmospheric subsidence. Shukla and Mintz (1982) demonstrated that evapotranspiration from land exerted a profound influence on Earth’s hydrologic cycle and climate. Dickinson and Henderson-Sellers (1988) used a coupled land–atmosphere GCM to explore the consequences of tropical deforestation. They manipulated model surface parameters to simulate the conversion of the Amazon rain forest to grassland, which resulted in substantial surface warming due primarily to changes in albedo and roughness.
Land surface modelers recognized the very substantial mismatch in spatial scales between microscopic stomata, macroscopic plant canopies, and GCM grid cells that were hundreds of km across. Jarvis and McNaughton (1986) recognized that under low-wind conditions evapotranspiration from vegetation acted to humidify the air near the surface, reducing the vapor pressure gradient and thereby shifting the energy balance toward sensible heat. They introduced the idea of estimating surface energy fluxes at landscape scales as a continuum between physiological control by stomates and environmental control by radiation, atmospheric humidity, and wind speed. They introduced a coupling coefficient to represent this continuum, and showed that at larger scales transpiration is influenced less by stomata and more by radiation.
Besides the leaf-to-canopy conundrum, scaling local fluxes to GCM grid cells was also problematic because the coupling may include dynamical processes above heterogeneous surfaces that are distributed at subgrid scale. Such surfaces are very common, arising from juxtaposed farms and cities, forests and pastures, or even locations with and without antecedent rain. Anthes (1984) showed that mesoscale circulations induced by strong gradients in temperature and sensible heat fluxes above wet and dry patches in semiarid regions acted like inland sea breezes to enhance convection along the boundaries between patches. These circulations can interact with both the atmospheric and land states to create mesoscale energy and water fluxes that cannot be obtained by linear averaging of surface fluxes in isolation (Avissar and Pielke 1989; Pinty et al. 1989; Pielke et al. 1991). Using limited-area coupled models, Pielke et al. (1991) found that these mesoscale fluxes could be significant when the surface patches are large and the mean wind speed is low.
Another key development in the 1980s was global mapping of vegetation properties using satellite imagery. Chlorophyll and other pigments absorb strongly in visible wavelengths to drive photosynthesis, but they are highly reflective in the near-infrared part of the solar spectrum (Tucker 1979). A series of polar-orbiting weather satellites maintained by the National Oceanic and Atmospheric Administration to monitor cloud properties produced near-daily global coverage of a quantity called the normalized difference vegetation index (NDVI). The NDVI was shown to be highly correlated with plant growth and CO2 uptake (Tucker et al. 1986; Fung et al. 1987). Algorithms were developed to derive self-consistent vegetation parameters for land surface models from satellite imagery (Sellers 1985). These began to replace the ad hoc and often inconsistent sets of global parameter maps.
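The index itself is a simple normalized ratio of the near-infrared and red reflectances:

```latex
\mathrm{NDVI} \;=\; \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}
% Dense, healthy vegetation reflects strongly in the near infrared and absorbs in the
% red, so NDVI approaches 1; bare soil, water, and snow give values near or below zero.
```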
g. Reanalysis
Improved global observations of the atmosphere and Earth’s surface, especially from satellites, made global weather analyses a realistic possibility (Bengtsson et al. 1982). Global models and their associated data assimilation systems were essential for the production of these analyses. “Reanalysis” of historical data using the best available global model was advocated by Bengtsson et al. (1982), and soon became a reality (Kalnay et al. 1996; Uppala et al. 2005; Saha et al. 2010; Onogi et al. 2007; Schubert et al. 1993; Rienecker et al. 2011; Gibson et al. 1997; Dee et al. 2011). Reanalysis uses a fixed but up-to-date forecast model and data assimilation system to process historical observations over a long record. Fixing the systems avoids some of the temporal discontinuities that occur in a series of routine operational analyses, though not discontinuities caused by large changes in the available observations. Reanalyses have now become essential for atmospheric science research. The application of global ocean models to the reanalysis of ocean observations is at an earlier stage of development (Schiller et al. 2008; Balmaseda et al. 2013), but reanalyses of the coupled ocean–atmosphere system are beginning to appear (Laloyaux et al. 2018).
h. Global warming becomes a societal concern
During the 1960s, climate scientists were aware of the possibility of anthropogenic climate warming due to increasing greenhouse gas concentrations, but the issue had not yet reached the public consciousness. This changed during the 1980s, as observations showed continuing increases in CO2 concentrations, and the U.S. Congress and other governmental bodies began to take an interest (Shabecoff 1988; Weart 2008). More simulations of anthropogenic climate change were appearing in the literature (e.g., Hansen et al. 1981; Washington and Meehl 1989). Today, ESM development is increasingly driven by the global warming issue.
7. The 1990s
a. New models and new interactions among modeling groups
During the 1990s, important new models were created, and modeling groups began interacting in important new ways.
In 1990 the Met Office Hadley Centre was opened (Folland et al. 2004), creating a dedicated center for research on Earth’s climate (e.g., Senior and Mitchell 2000; Mitchell et al. 1995b). The Hadley Centre’s Unified Model (Cullen 1993; Cullen et al. 1997; Davies et al. 1998) is designed for use in both operational NWP and climate simulation. This has the advantage that operational NWP is an excellent way to test a climate model (e.g., Palmer et al. 2008; Senior et al. 2010).
A version of the ECMWF forecast model was modified to create ECHAM (Roeckner et al. 1989; Simmons et al. 1989; Stevens et al. 2013), a climate model in use at the Max Planck Institute for Meteorology in Hamburg, Germany. More recently, the institute has adopted a new global atmosphere model, the Icosahedral Nonhydrostatic GCM (ICON), which is based on a geodesic grid and was developed in partnership with the German Weather Service (Wan et al. 2013; Giorgetta et al. 2018). Here again we see a single model being used for both operational NWP and climate simulation.
As mentioned in section 6d, Cess et al. (1989) organized an intercomparison of results from many modeling groups. Additional intercomparisons proliferated during the 1990s. An important example is the Atmospheric Model Intercomparison Project (AMIP; Gates 1992). AMIP was presaged by the study of Lau (1985), who showed that the atmosphere responds strongly and predictably to prescribed observed interannual changes in sea surface temperatures. An AMIP simulation uses an atmospheric model (coupled to a land surface model) with prescribed observed sea surface temperatures for a sequence of real years. An AMIP simulation can be used to test the ability of a global atmospheric model to respond realistically to interannual variability of sea surface temperatures such as that associated with El Niño. The experimental design is similar to that developed by Lau (1985), but follows a formal protocol. AMIP simulations continue to be a valuable and widely used method to test global atmospheric models (e.g., Eyring et al. 2016). Intercomparisons have also been crucial for the work of the Intergovernmental Panel on Climate Change (IPCC), which issued its first assessment report in 1990 (IPCC 1990) and continues its work today (e.g., Stocker et al. 2013). The IPCC is a truly historic enterprise that is strongly reliant on results from ESMs. The Coupled Model Intercomparison Project (CMIP) has been particularly central to the work of the IPCC (Meehl et al. 2000; Covey et al. 2003; Eyring et al. 2016).
Operational seasonal prediction with coupled ocean–atmosphere models began during the 1990s, and has gradually been maturing since (Palmer et al. 2000; Kanamitsu et al. 2002; Woods 2006; Kirtman and Pirani 2009).
b. Atmospheric dynamical cores
During the 1990s spectral semi-implicit semi-Lagrangian models were well established. It was shown that the east–west density of grid points could be reduced near the poles (giving a reduced grid) with negligible loss in accuracy (Hortal and Simmons 1991; Courtier and Naughton 1994). Also, since the advective nonlinearity was now handled by the semi-Lagrangian advection, the extra grid resolution needed to avoid aliasing of quadratic nonlinear terms was no longer necessary, and a coarser “linear grid” could be used (Côté and Staniforth 1988; Williamson 1997). Both of these ideas led to further significant efficiency gains. An additional motivation for the adoption of semi-Lagrangian advection was that spectral advection of water vapor proved to be problematic (Williamson and Rasch 1994).
Around this time, interest was growing in global nonhydrostatic models. This stemmed partly from a desire for unified modeling systems that could operate either globally or at nonhydrostatic scales (Cullen et al. 1997) and partly from an ambition for global modeling that resolved nonhydrostatic scales, which growing computer power would soon permit (Qian et al. 1998; Yeh et al. 2002; Satoh et al. 2008; Matsuno 2016). The fully compressible equations support acoustic waves, and the CFL criterion for an explicit treatment of their vertical propagation would be very restrictive. Therefore, inspired by earlier work on small-scale nonhydrostatic models, one of four options was generally adopted. The first was a fully three-dimensional semi-implicit treatment of acoustic waves (Tapp and White 1976; Cullen et al. 1997; Qian et al. 1998; Yeh et al. 2002); a variety of solution methods for the resulting elliptic problem were tried. The second was a split-time-step method in which acoustic wave propagation was treated via shorter substeps. The third was a horizontally explicit but vertically implicit (HEVI) time-differencing scheme (Klemp and Wilhelmson 1978; Satoh et al. 2008; Weller et al. 2013), motivated by the fact that vertical grid spacing is typically much finer than horizontal grid spacing, so that it is the vertically propagating sound waves that place the most severe limit on a model's time step. The fourth was to use a sufficiently accurate set of equations that filters vertically propagating sound waves (Arakawa and Konor 2009), thus eliminating the problem before discretization begins.
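To illustrate why an implicit vertical treatment relaxes the acoustic time step limit, the following minimal sketch integrates linear acoustic waves in a single idealized column, with the vertical pressure-gradient and divergence terms taken implicitly (the "VI" part of an HEVI scheme). The grid sizes, sound speed, and time step are illustrative values chosen only to show that the step can greatly exceed the explicit limit Δz/c; the sketch is not drawn from any particular model.

```python
import numpy as np

# Idealized 1D acoustic column: dw/dt = -dp'/dz, dp'/dt = -c^2 dw/dz.
# Treating the vertical terms implicitly (backward Euler) and eliminating w
# leaves a tridiagonal Helmholtz problem for p', which is unconditionally
# stable, so dt is not limited by dz/c.  All values below are illustrative.
nz, dz, c, dt = 60, 200.0, 340.0, 20.0      # dt is ~34x the explicit limit dz/c
w = np.zeros(nz + 1)                        # vertical velocity at layer interfaces
p = np.zeros(nz)                            # pressure perturbation at layer centers
p[nz // 2] = 1.0                            # initial pressure pulse mid-column

lam = (c * dt / dz) ** 2
A = np.eye(nz) * (1.0 + 2.0 * lam)          # (I - c^2 dt^2 d2/dz2), with w = 0 at top and bottom
for k in range(nz - 1):
    A[k, k + 1] = A[k + 1, k] = -lam
A[0, 0] = A[-1, -1] = 1.0 + lam

for step in range(100):
    rhs = p - (c**2 * dt / dz) * (w[1:] - w[:-1])
    p = np.linalg.solve(A, rhs)             # implicit vertical solve
    w[1:-1] -= (dt / dz) * (p[1:] - p[:-1]) # update w from the new pressure

print("max |p'| after 100 long steps:", np.abs(p).max())  # bounded: no vertical CFL blowup
```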
Semi-Lagrangian schemes proved to be very effective for weather forecasting, but for longer climate simulations they were limited by their lack of conservation. This prompted the development of several conservative large-time-step advection schemes (e.g., Harris et al. 2011, and references therein), some of which were eventually incorporated into operational models (Lauritzen and Nair 2008; Wood et al. 2014).
c. The evolution of the vertical coordinates used in atmosphere and ocean models
The vertical coordinate is a fundamental algorithmic choice for atmosphere and ocean models. It determines how subgrid-scale parameterizations manifest and what parameterizations are appropriate. A comprehensive discussion of vertical coordinate systems for hydrostatic atmosphere models was published by Kasahara (1974). For ocean models, the vertical coordinate determines representations of the upper ocean and bottom boundary layers and the stratified ocean interior as well as their interactions with each other and with the solid Earth (Griffies et al. 2000). Vertical coordinates for numerical modeling of the atmosphere and ocean have been under study at least since the 1950s, and work continues today. We choose to summarize the topic in this section of our chapter because many important ideas emerged during the 1990s.
1) Quasi-Eulerian vertical coordinates
As mentioned earlier, both pressure and height were used as vertical coordinates in early global atmospheric models. These choices are both problematic (especially pressure), in part because the coordinate surfaces intersect the lower boundary. The terrain-following σ coordinate of Phillips (1957a) solves that problem by conforming to the lower boundary, but it leads to difficulty in the accurate computation of the horizontal pressure-gradient force above steep topography (Smagorinsky et al. 1967; Kurihara 1968; Sundqvist 1975). The problem arises because with the sigma coordinate the horizontal pressure-gradient force is the sum of two terms. Over steep topography, these two terms are individually large and of opposite sign, and the horizontal pressure-gradient force is the relatively small difference between them. Mesinger (1982) and Mesinger and Janjić (1985) proposed a modified σ coordinate system, which they called η. The η coordinate eliminates the problem with the horizontal pressure gradient force near steep terrain by introducing “step mountains” that come in discrete sizes, like off-the-rack clothing. The sizes of the mountains are chosen to match the specified thicknesses of the model layers. For about 25 years, the η coordinate was used in the operational Eta Model used for regional prediction by the U.S. National Centers for Environmental Prediction (Janjić 1994); this was mentioned in section 6d. Simmons and Burridge (1981) suggested a different way to address the problem with the horizontal pressure-gradient force near steep terrain, through the use of a hybrid vertical coordinate that behaves like σ near the lower boundary but like pressure aloft. Their hybrid approach has been very widely used.
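To make the two-term cancellation explicit: with σ = p/p_s and the ideal gas law, the horizontal pressure-gradient force per unit mass takes the standard form

\[
-\frac{1}{\rho}\nabla_z p \;=\; -\nabla_\sigma \Phi \;-\; R T\, \nabla \ln p_s ,
\]

and over steep terrain the two right-hand terms are individually large and nearly cancel. The Simmons and Burridge (1981) hybrid coordinate defines pressure on model levels as

\[
p(\eta) \;=\; A(\eta)\, p_0 \;+\; B(\eta)\, p_s ,
\]

with B → 1 and A → 0 at the surface (σ-like) and B → 0 aloft (pure pressure), so the troublesome splitting is confined to the lowest levels.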
For most applications of large-scale basin and global ocean circulation modeling, the vertical coordinate is based on geopotential (or depth). Similar but more flexible "quasi-Eulerian" approaches have been developed (Adcroft and Hallberg 2006). They can be used with ocean models that retain the traditional Bryan (1969b) algorithmic architecture, in which the vertical motion crossing coordinate surfaces (i.e., vertical velocity) is diagnosed through mass continuity in non-Boussinesq models or volume continuity in Boussinesq models. Of particular note for global climate efforts is the rescaled geopotential coordinate z* = H(z − η)/(H + η), where η is the free surface displacement and H the resting ocean depth; coordinate surfaces of z* follow the undulating sea surface while retaining the quasi-Eulerian algorithmic structure.
Another advance was made by Marshall et al. (2004), who made use of an isomorphism between pressure coordinate non-Boussinesq (compressible) fluids and geopotential coordinate Boussinesq (incompressible) fluids. This made it straightforward to incorporate compressible dynamics into formerly incompressible ocean models. Doing so allows ocean models to include a full representation of oceanographic processes impacting the model’s sea level, including the global thermosteric effects that are missing from Boussinesq models (Griffies and Greatbatch 2012).
The σ coordinate has also been adapted for use in ocean models, for example, by Lemarié et al. (2012). Their work, implemented in the Regional Ocean Modeling System (ROMS; Shchepetkin and McWilliams 2005), bridges the gap between regional and global modeling applications. The Russian Institute of Numerical Mathematics Ocean Model (INMOM; Volodin et al. 2010) is a global σ-coordinate model that serves as the ocean component of an ESM.
2) Quasi-Lagrangian methods
Quasi-Lagrangian methods have also been used in both atmosphere and ocean models. In these methods, the vertical "layers" of a model are bounded by surfaces that move with the fluid, as nearly as possible, so that little or no mass crosses layer edges. For atmospheric models, one approach is to use potential temperature θ as a vertical coordinate; with the θ coordinate, the "vertical velocity" vanishes in the absence of heating. For ocean models, the corresponding isopycnal approach is to use potential density as the vertical coordinate, so that the vertical velocity vanishes in the absence of diapycnal diffusion (Adcroft and Hallberg 2006). Isopycnal ocean models have the advantage of naturally including advection along potential density surfaces in the quasi-adiabatic ocean interior below the strongly mixed upper ocean. However, they are at a disadvantage at high latitudes, where their vertical resolution declines because of the very small density difference between the surface and deep ocean.
The utility of θ coordinates for observational analyses was appreciated very early, by Rossby (1937) and Starr (1945). The approach was developed further by Johnson (1989) and by Hoskins et al. (1985), who emphasized the dynamical importance of the isentropic potential vorticity. Early atmospheric models based on θ coordinates were developed by Eliassen and Raustein (1968) and Bleck (1973). More recently, the merits of models based on θ coordinates have been discussed by Hsu and Arakawa (1990), Johnson (1997), and Benjamin et al. (2004), among others.
An issue with the use of θ coordinates in models is that θ surfaces intersect Earth’s surface. This has motivated numerous proposals for hybrid σ–θ coordinates (e.g., Johnson and Uccellini 1983; Zhu et al. 1992; Bleck and Benjamin 1993; Zapotocny et al. 1994; Konor and Arakawa 1997; Benjamin et al. 2004; Bleck et al. 2010). An alternative is the arbitrary Lagrangian–Eulerian (ALE) approach (Bleck and Benjamin 1993; Bleck et al. 2010), which allows deviations from strict θ coordinates on the basis of a set of “rules.” For example, a rule might enforce a minimum pressure difference across a model layer (Toy and Randall 2009), or it might periodically “remap” the edges of quasi-Eulerian layers to prespecified target values of the σ coordinate (Lin 2004). The ALE method allows for a mapping to an arbitrary vertical surface, such as geopotential, σ, potential temperature (or potential density), or even coordinates with no explicit mathematical definition.
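The vertical remapping at the heart of ALE schemes can be illustrated with a minimal, first-order (piecewise constant) conservative remap between two sets of layer interfaces. Real models use higher-order reconstructions and their own interface definitions; the function below is only a schematic sketch with hypothetical names.

```python
import numpy as np

def remap_conservative(src_edges, src_vals, tgt_edges):
    """Remap layer-mean values from source layers to target layers.

    src_edges, tgt_edges : monotonic arrays of layer interface positions
                           spanning the same column.
    src_vals             : layer means on the source grid (len(src_edges) - 1).
    The overlap-weighted average conserves the column integral of the field.
    """
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for k in range(len(tgt_edges) - 1):
        lo, hi = tgt_edges[k], tgt_edges[k + 1]
        total = 0.0
        for j in range(len(src_edges) - 1):
            # thickness of the overlap between target layer k and source layer j
            overlap = max(0.0, min(hi, src_edges[j + 1]) - max(lo, src_edges[j]))
            total += overlap * src_vals[j]
        tgt_vals[k] = total / (hi - lo)
    return tgt_vals

# Example: remap three unevenly spaced layers onto four evenly spaced target layers.
theta_new = remap_conservative(np.array([0.0, 50.0, 150.0, 400.0]),
                               np.array([10.0, 12.0, 15.0]),
                               np.linspace(0.0, 400.0, 5))
```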
Isopycnal ocean models were pioneered by Rainer Bleck at the University of Miami with the Miami Isopycnic Coordinate Ocean Model (MICOM), a well-used community layered model (Bleck and Boudra 1986; Sun and Bleck 2001). A version of MICOM has been developed by Helge Drange and colleagues at the University of Bergen, and is the ocean component of the Norwegian Earth System Model (Bentsen et al. 2013). Dunne et al. (2012) used the General Ocean Layer Dynamics (GOLD) model, a layered isopycnal model developed at GFDL for climate applications.
The quasi-Lagrangian approach used in isopycnal models provides a useful starting point for efforts to implement the ALE methods (Donea et al. 2004) in ocean models (Bleck 2002). The ALE method provides a natural framework for wetting and drying, such as for studies of coastal inundation and moving ice shelf grounding lines (Goldberg et al. 2012). The Hybrid Coordinate Ocean Model (HYCOM) made use of ALE to blend an isopycnal coordinate in the deeper ocean with a depth coordinate in the strongly mixed upper ocean, and terrain-following coordinates along the shelves. HYCOM became the ocean component of the Goddard Institute for Space Studies climate model (Sun and Bleck 2006).
The ALE approach is now spreading throughout the ocean modeling community to codes such as MPAS-O (Ringler et al. 2013) and MOM6 (Adcroft and Hallberg 2006). Similar methods are also becoming more fully realized in the atmospheric modeling community (Bleck et al. 2015, 2010; Sun et al. 2018).
d. Radiative transfer modeling in the 1990s: Unification
The errors evident in many radiation codes used in weather and climate models in the early 1990s prompted the development of new codes with a close link to reference models. Mlawer et al. (2016) describes the development of one such code: Rapid Radiative Transfer Model (RRTM) (Mlawer et al. 1997). RRTM implements a correlated k distribution (Goody et al. 1989; Lacis and Oinas 1991; Fu and Liou 1992), an extension of the original k-distribution technique to vertically inhomogeneous atmospheres. The code was originally developed as an offline column model aimed at reproducing line-by-line calculations (themselves tightly constrained by a new wealth of observations, as Mlawer et al. (2016) describes), but soon included an offshoot with reduced spectral resolution (RRTMG) for use in atmospheric models.
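The essence of the k-distribution technique is to replace the rapidly varying absorption coefficient across a spectral band with its cumulative distribution, so the band-mean transmission along an absorber path u can be evaluated with a few quadrature points rather than thousands of spectral lines:

\[
\overline{T}(u) \;=\; \frac{1}{\Delta\nu}\int_{\Delta\nu} e^{-k_\nu u}\,d\nu
\;=\; \int_0^1 e^{-k(g)\,u}\,dg \;\approx\; \sum_{j} w_j\, e^{-k_j u}.
\]

The "correlated" assumption is that the mapping between wavenumber and the cumulative variable g remains approximately the same at every level, which is what permits the technique to be applied to vertically inhomogeneous atmospheres.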
Tony Slingo was the one who had the vision for it, and a couple of things that he wanted very strongly. One was, because it was to be used in climate, we really wanted to get the forcings right, and so having the ability to run at different spectral resolutions and have, as far as possible, traceability from precise comparisons with line-by-line, was seen as very important. Another thing was that we wanted the same cloud overlap assumptions in long-wave and short-wave so we’d be doing cloud radiative effect consistently between the two spectral regions (interview with John Edwards, 20 October 2017).
e. Boundary layer, cloud, and aerosol parameterizations during the 1990s
As discussed in section 5d(1), Sommeria and Deardorff (1977) used higher-order closure with an assumed bivariate Gaussian distribution for (roughly speaking) temperature and moisture to determine the fractional cloudiness and liquid water mixing ratio. This approach can be called assumed distributions with higher-order closure (ADHOC). The intended application of Sommeria and Deardorff (1977) was large-eddy simulation, with grid cells less than 100 m across. Much later, Lewellen and Yoh (1993) suggested using a pair of joint Gaussians instead of one. This approach is more appropriate for larger grid cells that contain many clouds. In such larger grid cells, one of the Gaussians can represent the cloudy part of the domain, while the other represents the clear spaces between the clouds.
Randall (1987), Randall et al. (1992), and Lappen and Randall (2001) added vertical velocity to the mix, so that vertical fluxes of temperature and moisture could be computed from the parameters of the resulting trivariate distribution. Following the mass flux approach, they used a pair of delta functions, one representing turbulent updrafts and the other representing downdrafts. Finally, Golaz et al. (2002) combined the approaches of Lappen and Randall (2001) and Lewellen and Yoh (1993), resulting in a pair of trivariate Gaussians. This method has been used by Bogenschutz and Krueger (2013), Bogenschutz et al. (2013), and Thayer-Calder et al. (2015). It has now been implemented in version 6 of the Community Atmosphere Model, with encouraging results (Bogenschutz et al. 2018).
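The simplest member of this family of schemes, the single Gaussian used by Sommeria and Deardorff (1977), can be sketched in a few lines: if the grid-cell distribution of the saturation excess s (total water minus its saturation value) is Gaussian, the cloud fraction is the probability that s exceeds zero, and the mean condensate is the mean of the positive part of s. The function below is a schematic illustration of that idea, not code from any particular model.

```python
import math

def gaussian_cloud_fraction(s_mean, s_std):
    """Diagnose cloud fraction and mean condensate from an assumed Gaussian.

    s_mean : grid-cell mean saturation excess (e.g., kg/kg)
    s_std  : assumed subgrid standard deviation of the saturation excess
    Returns (cloud fraction, mean condensate), i.e., P(s > 0) and E[max(s, 0)].
    """
    cf = 0.5 * (1.0 + math.erf(s_mean / (math.sqrt(2.0) * s_std)))
    ql = s_mean * cf + s_std * math.exp(-0.5 * (s_mean / s_std) ** 2) / math.sqrt(2.0 * math.pi)
    return cf, ql

# Example: a cell that is 0.1 g/kg short of saturation on average, with
# 0.5 g/kg of subgrid variability, is still partly cloudy.
print(gaussian_cloud_fraction(-0.1e-3, 0.5e-3))
```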
Increasingly detailed microphysics parameterizations have also been incorporated into global atmospheric models. Beginning in the early 1990s, climate models began to adopt prognostic equations for cloud water following the approach of Sundqvist et al. (1989), sometimes with separate equations for liquid and ice (e.g., Ose 1993; Lohmann and Roeckner 1996; Rotstayn et al. 2000). The fraction of cloud water present as liquid or ice is critical for cloud radiative properties. Most of these schemes diagnosed precipitation (e.g., Ghan and Easter 1992), while others adopted prognostic equations for both cloud and precipitation, similar to the Kessler-like parameterizations used in mesoscale models developed in the 1980s and 1990s (e.g., Fowler et al. 1996).
The value of predicting two characteristics or moments of the cloud and precipitation size distributions, namely the number and mass, has been recognized since at least the 1970s (Koenig and Murray 1976). Such “two moment” parameterizations allow independent evolution of bulk mass and mean size, which improves the physical realism for processes such as size sorting (the preferential fallout of larger and heavier particles). The prediction of cloud particle number by these schemes also allows explicit coupling with chemistry and aerosols through activation of cloud condensation and ice nuclei, allowing climate models to simulate aerosol indirect effects on clouds. Two-moment schemes were developed and applied in a few cloud-scale models in the 1980s (e.g., Ziegler 1985), but came into widespread use for cloud and mesoscale modeling in the mid-1990s through the 2000s (e.g., Schoenberg Ferrier 1994; Cohard and Pinty 2000; Seifert and Beheng 2001).
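The added degree of freedom can be seen from the moments themselves: with predicted mass mixing ratio q and number mixing ratio N, the mean particle mass and a mean volume diameter (for spherical liquid drops of density ρ_w) follow as

\[
\bar m = \frac{q}{N}, \qquad
D_{\mathrm{mv}} = \left( \frac{6\,q}{\pi\,\rho_w\,N} \right)^{1/3},
\]

so mass and mean size can evolve independently, which a one-moment scheme (with fixed N or a fixed size–mass relation) cannot represent.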
Starting in the 1990s, the development of aerosol representations for use in global climate models was motivated by a need to study the direct effects of aerosols on radiative forcing (e.g., Kiehl and Briegleb 1993; Taylor and Penner 1994; Mitchell et al. 1995a; Haywood et al. 1997). During this time, climate models also began to simulate the indirect effects of aerosols on radiation through their influence on clouds, by diagnostically relating droplet number to aerosol properties (e.g., Boucher and Lohmann 1995).
f. Land surface modeling during the 1990s
With the availability of fully coupled global land–atmosphere models and the widespread recognition of the problems of scale, a series of ambitious field experiments were undertaken to evaluate models by quantifying regional land–atmosphere interactions in nature. These included the First ISLSCP Field Experiment (FIFE) over the Kansas prairie (Hall and Sellers 1995), the Hydrologic Atmospheric Pilot Experiment (HAPEX) in the African Sahel (Prince et al. 1995), the Boreal Ecosystem–Atmosphere Study (BOREAS) in central Canada (Sellers et al. 1997), and the Large-Scale Biosphere–Atmosphere Experiment in Amazonia (LBA) in Brazil (Keller et al. 2004). Each of these experiments involved simultaneous measurements of both atmospheric and surface conditions at a range of spatial scales, from individual leaves and soil probes to regional footprints meant to represent entire GCM grid cells. The pioneering field experiments made extensive use of new satellite datasets and provided a huge resource, both for testing models derived from local relationships and, especially, for learning how scales of land–atmosphere interaction work in nature.
During the 1990s, many studies used coupled models to analyze land–atmosphere interactions in nature (e.g., Betts et al. 1996). In particular, comparisons of models and observations showed that soil moisture could act as a long-memory component in the climate system, amplifying or extending the duration of droughts and rainy periods (Oglesby and Erickson 1989; Lean and Rowntree 1993; Dirmeyer 1994; Milly and Dunne 1994; Brubaker and Entekhabi 1996; Diedhiou and Mahfouf 1996; Trenberth and Guillemot 1996; Eltahir 1998; Fennessy and Shukla 1999; Douville et al. 2001). Precipitation recycling of water through evapotranspiration was recognized as a major process at regional scales (Trenberth 1999). By this time, land–atmosphere coupling had also been adopted in numerical weather forecasting (Viterbo and Beljaars 1995). Beljaars et al. (1996) used coupled models to analyze the effect of soil moisture anomalies on the persistent atmospheric circulation patterns associated with major droughts and floods. They found that forecasts of the 1988 summer U.S. drought and the 1993 Mississippi River floods were dramatically improved when the coupled model was initialized with realistic soil moisture.
An innovative approach to the problem of subgrid-scale heterogeneity at the land surface was developed by Koster and Suarez (1992), in which many instances of the land parameterization are coupled to a single overlying atmosphere. The separate instances, or "tiles," have different properties, such as assemblages of vegetation or soils, or may represent separate hydrologic catchments within a larger GCM grid cell. Separate calculations of prognostic soil temperature and moisture are done for each tile, and the energy fluxes of each are then weighted by their subgrid-scale fractional areas before being passed to the atmospheric component. This approach has since been widely adopted to represent heterogeneity. Unlike the mesoscale flux experiments with limited-area models discussed above (e.g., Pielke et al. 1991), tiling is tractable in global models because the computational expense of multiple instances of the land model is modest.
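The aggregation step is simply an area-weighted average of the tile fluxes; a minimal sketch, with hypothetical names and illustrative numbers:

```python
def grid_cell_flux(tile_fractions, tile_fluxes):
    """Area-weighted aggregation of tile-level fluxes to the GCM grid cell.

    tile_fractions : fractional areas of the tiles (should sum to 1)
    tile_fluxes    : the corresponding surface fluxes (e.g., W m-2)
    Each tile keeps its own prognostic soil temperature and moisture; only
    this weighted flux is seen by the overlying atmospheric column.
    """
    return sum(f * flux for f, flux in zip(tile_fractions, tile_fluxes))

# Example: 60% forest, 30% cropland, 10% lake, with different sensible heat fluxes.
print(grid_cell_flux([0.6, 0.3, 0.1], [120.0, 80.0, 10.0]))
```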
Plant physiologists worked with climate modelers to improve the biological realism of parameterized stomatal resistance. Rather than relying on simple empirical functions relating stomatal aperture to radiation, humidity, and temperature (Jarvis 1976; Dickinson et al. 1986), a new generation of models coupled stomatal function with photosynthesis. The new approach recognized that stomatal conductance solves an optimization problem in which plants have evolved physiological mechanisms to maximize carbon gain while minimizing water loss. Sellers et al. (1992) introduced the calculation of photosynthetic carbon assimilation using enzyme kinetic relationships previously studied in the laboratory (Farquhar et al. 1980). High rates of photosynthesis require highly conductive (open) stomates, which also allow transpiration. This simultaneously depletes CO2 and enhances vapor pressure at the leaf surface, which feeds back on both photosynthesis and stomatal conductance (Ball 1988). An additional node, between the stomatal pores and the canopy air space, was added to the resistance network used in previous models. The laminar boundary layer at the leaf surface may be only a few millimeters thick, but it maintains higher vapor pressure in immediate contact with the stomatal pores and retards the upward flow of water vapor by turbulent exchange. Adding this extra resistance largely solved the coupling problem previously highlighted by Jarvis and McNaughton and allowed a greater degree of biophysical realism (Collatz et al. 1991). The models were iterated to solve simultaneously for the stomatal conductance and the rates of photosynthesis and transpiration.
Research continued into the critical problem of scaling physiological processes from stomates to grid cells. Sellers et al. (1992) showed that a simultaneous solution for canopy-scale transpiration could be obtained from leaf-level parameters by assuming that 1) the progressive downward attenuation of solar radiation through vegetation canopies follows an exponential decay with cumulative leaf area (Beer's law), and that 2) plants have evolved to redistribute scarce resources (primarily nitrogen) according to the time-mean vertical distribution of light. These two assumptions allowed leaf-level equations for stomatal conductance and the rates of photosynthesis and transpiration to be integrated vertically in closed form.
Leaf area index is the area of leaves in a canopy per unit area of ground, and Sellers et al. (1992) used the cumulative leaf area index above a point in the canopy as a vertical coordinate. Integrating assimilation (photosynthesis) rate from the top of the canopy to the ground, and assuming that photosynthesis decreases exponentially along with light, they obtained an equation for the fraction of photosynthetically active radiation (FPAR) absorbed by the canopy. Importantly, the FPAR is related to the remotely sensed NDVI, which was mentioned in section 6d. Retrievals of NDVI from space allow global estimates of canopy-scale stomatal conductance and the rates of photosynthesis and transpiration based on leaf-level physiology and the FPAR relationship. Coupling of photosynthesis and transpiration with canopy integration from remote sensing was used to construct a new coupled GCM (Sellers et al. 1996a; Randall et al. 1996a), and a complete suite of satellite-derived parameters for land surface modeling (Sellers et al. 1996b; Los et al. 2000). Within a few years, many groups around the world also developed global land surface models based on integrated photosynthesis and transpiration, which were coupled to GCMs (Friend et al. 2007; Foley et al. 1996; Bonan 1996, 1998; Cox et al. 1998).
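In simplified form (and glossing over the treatment of scattering and leaf angles in the actual model), these assumptions give Beer's-law attenuation of photosynthetically active radiation with cumulative leaf area index L, an absorbed fraction that depends only on the total LAI and a mean extinction coefficient, and a canopy assimilation that scales with the top-of-canopy leaf rate A₀:

\[
I(L) = I_0\, e^{-\bar{k} L}, \qquad
\mathrm{FPAR} \approx 1 - e^{-\bar{k}\,\mathrm{LAI}}, \qquad
A_c \approx A_0\, \frac{\mathrm{FPAR}}{\bar{k}} .
\]

This is why a satellite-derived FPAR (via the NDVI) can stand in for the vertical canopy integration.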
Although Sellers et al. (1992) intended the FPAR to represent the continuous attenuation of light in vegetation canopies, Bonan (1996) showed that this quantity can also be interpreted as the sunlit (as opposed to shaded) leaf area index. In this interpretation, only sunlit leaves are integrated in the scheme of Sellers et al. (1992), meaning that photosynthesis and transpiration are likely underestimated because of the presence of shaded leaves illuminated by diffuse radiation from the sky. Pury and Farquhar (1997) developed a simple scheme to separate plant canopies into sunlit and shaded fractions with different temperatures, stomatal conductance, and rates of photosynthesis and transpiration. Although the photosynthesis rate of shaded leaves is less than that of sunlit leaves, they use light more efficiently because diffuse light penetrates more deeply into dense canopies. This “two big leaf” approach has since been adopted by most land models (Dai et al. 2004).
g. Sea ice advances during the 1990s
Eventually sea ice modelers tried to simplify the methods used to predict the motion of the sea ice, in order to make them practical for climate models. First, Flato and Hibler (1992) simplified the viscous-plastic dynamics by treating sea ice as a cavitating fluid, which lacks shear strength. Several modeling centers implemented cavitating-fluid dynamics and so became the first to simulate sea ice with a constitutive law, but most of them abandoned the approach when better options became available. Next, Los Alamos National Laboratory (LANL) scientists Elizabeth Hunke and John Dukowicz developed a numerical approximation to the viscous-plastic dynamics that treats sea ice as an elastic-viscous-plastic material (Hunke and Dukowicz 1997), a method that asymptotes to the full viscous-plastic solution but is more efficient and highly parallelizable. In the same year, Zhang and Hibler (1997) made the viscous-plastic numerics more efficient and parallelizable. The latter two dynamics schemes made possible major improvements in simulating sea ice in climate models. The elastic-viscous-plastic approach is widely used among climate models today, in part because the code was made readily available for sharing, with high-quality documentation and regular updates maintained by Hunke et al. (2010) in a comprehensive model known as the Los Alamos sea ice model (CICE).
8. Into the twenty-first century
Our story now approaches the present day, which means that much of the work is still ongoing, and we lack a historical perspective. Selected current issues are highlighted, but we do not attempt a comprehensive overview.
a. Current issues in atmospheric dynamical cores
1) Horizontal grids in atmosphere and ocean models
Evolving computer architectures are now having a significant effect on preferred numerical methods. From the mid-1990s onward, the performance of computing machines has grown mainly through larger numbers of processors rather than faster processors. The communication of data between processors is relatively slow, and is becoming a significant bottleneck to computational performance, both for the spectral method, which requires global communication for the spectral transforms, and for gridpoint methods on the longitude–latitude grid, because of the clustering of grid points near the poles. At the same time, the trend toward massively parallel hardware has pushed the modeling world toward higher horizontal resolution. One consequence of these developments has been renewed interest in the use of quasi-uniform grids.
It is now conventional to distinguish between "structured" and "unstructured" horizontal grids. A structured horizontal grid covers the sphere with quadrilateral cells, so that each cell in the grid can be identified by a pair of indices, such as (i, j).
Unstructured grids are more flexible. They cover the sphere with simple shapes, such as triangles, squares or hexagons. In contrast to structured methods, the unstructured approach does not rely on a fixed number of gridcell neighbors, nor does it insist on local coordinate orthogonality. Although the spatial pattern of the cells may be very orderly, unstructured grids require a prestored list of the neighbors of each cell.
Since the 1960s and 1970s, the importance of numerical properties such as conservation, monotonicity, accurate wave dispersion and balance, and avoidance of computational modes has become much better appreciated, and a wide range of methods giving acceptable performance on unstructured quasi-uniform grids has been developed (e.g., Masuda and Ohnishi 1986; Heikes and Randall 1995). A related point is that the cost, in both time and energy, of moving data in and out of memory and between processors has greatly increased relative to the cost of computation. Consequently, computationally intensive methods such as high-order Galerkin methods are no longer seen as prohibitively expensive.
Starting with the German Weather Service's GME icosahedral grid model (Majewski et al. 2002), this second wave of development on unstructured quasi-uniform grids has led to a number of production or production-capable models for both NWP and climate prediction (e.g., McGregor and Dix 2008; Satoh et al. 2008; Putman and Lin 2007; Qaddouri and Lee 2011; Skamarock et al. 2012; Dennis et al. 2012; Zängl et al. 2015; Sun et al. 2018), and this trend seems likely to continue.
Today, most ocean models make use of structured horizontal grids based on either the Arakawa B grid or C grid (Arakawa and Lamb 1977; Griffies et al. 2000). With spherical coordinates, ocean models encounter a singularity at the North Pole, but not at the South Pole, which lies in the middle of the Antarctic continent. To remove the north polar singularity while retaining a structured, locally orthogonal grid, ocean modelers today use alternative coordinates. A common approach is the tripolar grid of Murray (1996) and Madec et al. (1997), whereby the North Pole singularity is split into two singularities safely "hidden" over land. An alternative is to displace the North Pole over land, as in the displaced-pole approach used in POP simulations (Smith et al. 1995). More specialized configurations have also been considered, such as that of Marsland et al. (2007), who used the MPI ocean model to study ice shelves in a global model.
Recent advances in the use of unstructured horizontal grids for ocean modeling have been based on both finite-volume (Ringler et al. 2013; Korn 2017) and finite-element (Danilov 2013) methods. The Model for Prediction Across Scales Ocean (MPAS-O) has been developed at LANL (Ringler et al. 2013), and uses the ALE vertical discretization described earlier, as well as an unstructured horizontal grid based on finite-volume methods. This model is targeted toward global ocean circulation applications as well as coupled climate modeling. The Finite Element Sea ice/Ocean Model (FESOM) was developed at the Alfred Wegener Institute, with particular applications focusing on high-latitude ocean domains and global ocean climate simulations (Danilov 2013). The greater flexibility of unstructured grids makes it possible to more faithfully represent the complex horizontal geometry of the World Ocean. It also offers an elegant means to nest fine-resolution subdomains within a coarser global grid. The drawback is that unstructured approaches can be more computationally expensive than structured approaches.
2) Couplers
As suggested in the discussion above, the atmosphere, ocean, and land surface components of an ESM are often implemented on grids that have different shapes and different resolutions. For this reason, ESMs include “couplers” (e.g., Craig et al. 2012) that are designed to allow the components to exchange information via interpolation (particularly from coarser to finer grids) or averaging (from finer to coarser). These exchanges are formulated so as to respect important physical principles such as conservation of mass and energy. It is possible that future very high-resolution models will not need couplers.
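A minimal sketch of the kind of first-order conservative regridding a coupler performs, assuming the cell-intersection areas between the two grids have already been computed (the hard part in practice); the names and setup are hypothetical, not taken from any particular coupler:

```python
import numpy as np

def conservative_remap(overlap_areas, src_field):
    """First-order conservative regridding between two component grids.

    overlap_areas : (n_target, n_source) matrix; entry [i, j] is the area of
                    intersection between target cell i and source cell j.
    src_field     : cell-mean values on the source grid.
    The area-weighted average preserves the global integral of the field,
    which keeps mass and energy exchanges consistent between components.
    """
    return (overlap_areas @ src_field) / overlap_areas.sum(axis=1)
```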
b. Current issues in radiative transfer parameterization
With the widespread availability of accurate parameterizations of clear-sky radiative transfer, attention turned to clouds, and particularly problems introduced by subgrid-scale variability. A range of observations (Cahalan et al. 1994; Pincus et al. 1999; Rossow et al. 2002) had shown that clouds are substantially inhomogeneous on the 10–100-km scales of the day’s global models, and Cahalan et al. (1994) had used simple calculations to argue that ignoring this variability inevitably biased reflectivity calculations, especially in the shortwave. Awareness that similar biases were likely influencing calculations of precipitation (Pincus and Klein 2000; Rotstayn 2000) motivated the development of cloud schemes that explicitly predicted internal variability (e.g., Tompkins 2002; Golaz et al. 2002). Further discussion is given in section 8c.
Various solutions to the problem have been proposed. Barker (1996) and Oreopoulos and Barker (1999) developed a closed-form solution to the two-stream equations integrated over a specific distribution of optical depth. Cairns et al. (2000) and Petty (2002) proposed rescaling the optical properties of the clouds based on a measure of their variability, an idea related to methods for treating radiative transfer in random media. A more flexible solution was to do independent calculations over optimally chosen elements of the cloud optical depth distribution (Collins 2001; Neu et al. 2007; Shonk and Hogan 2008).
An intercomparison effort was key to identifying the intertwined roles of cloud overlap and internal variability in causing errors in cloudy-sky radiation calculations. ICRCCM-III (Barker et al. 2003) reported domain-averaged fluxes from a range of three- and one-dimensional radiative models applied to a high-resolution description of clouds obtained from finescale models.
The intercomparison highlighted the weakness of the analytic treatments of cloud overlap that had been used since the 1960s, which introduce errors on par with those caused by neglecting variability. The paper describes errors as arising “mostly because of inappropriate cloud overlap assumptions, incorrect application of overlap assumptions, neglect of horizontal variability of cloud, and inappropriate assumptions about horizontal variability.”
ICRCCM-III highlighted the need for flexibility in computing radiative transfer in cloudy skies: accuracy required that calculations be able to adapt to a wide range of overlap specifications as well as complicated descriptions of internal variability. Any practical new method had to meet these accuracy requirements without substantially increasing computational cost, which was already high enough that models typically computed radiation less frequently in time, and possibly at lower spatial resolution, than other physical processes.
The Monte Carlo Independent Column Approximation (McICA; see Pincus et al. 2003) uses a different, randomly generated discrete sample from the distribution of all possible cloud states for each spectral quadrature point, essentially replacing a two-dimensional integral over wavelength and cloud state with a Monte Carlo estimate. The fluxes computed with McICA are unbiased but, if the states used in each calculation (location, time, etc.) are chosen independently, the error introduced is also random. Extensive experience (e.g., Barker et al. 2008) demonstrated that this random noise does not degrade the simulation, and the technique has been widely adopted.
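Schematically, McICA pairs each spectral quadrature point with a single randomly chosen cloud subcolumn instead of looping over all subcolumns. The sketch below assumes a hypothetical monochromatic solver flux_for_subcolumn and a precomputed set of subcolumns generated from the cloud scheme's overlap and variability assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcica_flux(gpoint_weights, subcolumns, flux_for_subcolumn):
    """Monte Carlo Independent Column Approximation, in outline.

    gpoint_weights     : quadrature weights for the spectral (g-point) integral
    subcolumns         : candidate cloud-state subcolumns for this grid cell
    flux_for_subcolumn : hypothetical solver returning the flux for one g-point
                         applied to one subcolumn
    """
    total = 0.0
    for g, weight in enumerate(gpoint_weights):
        sub = subcolumns[rng.integers(len(subcolumns))]   # one random cloud state per g-point
        total += weight * flux_for_subcolumn(g, sub)      # unbiased, but noisy, estimate
    return total
```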
c. New approaches to representing cloud processes in global atmospheric models
1) Global cloud-resolving models
The continuing increase in computer power (Habata et al. 2003) has made possible the development of global atmospheric models with grid spacings of just a few kilometers, so that they can crudely but explicitly simulate individual large clouds (Tomita and Satoh 2004; Tomita et al. 2005; Satoh et al. 2008, 2014; Putman and Suarez 2011). These "global cloud-resolving models" are still too expensive for simulations of century-scale climate change, but they can currently be used for simulations of about one year. Global cloud-resolving models are qualitatively different from lower-resolution models because they do not need parameterizations of deep convection, but they still need parameterizations of radiation, microphysics, and turbulence, including small clouds.
2) Superparameterization
In 1999, NCAR scientists Wojciech Grabowski and Piotr Smolarkiewicz created a multiscale GCM in which the physical processes associated with clouds were represented by implementing a simple cloud-resolving model (CRM) within each grid column of a low-resolution global model (Grabowski and Smolarkiewicz 1999; Grabowski 2001, 2004). With the embedded CRM, parameterizations of radiation, cloud microphysics, and turbulence (including small clouds) are still needed, but larger clouds and some mesoscale processes are explicitly (though crudely) simulated. The GCM simulates the large-scale weather, while the CRMs simulate the small-scale convective response, which is fed back to the GCM. Grabowski and Smolarkiewicz found that their model produced interesting simulations of organized tropical convection, including systems that resembled the Madden–Julian oscillation (MJO; Madden and Julian 1971, 1972).
Inspired by the results of Grabowski and Smolarkiewicz, Khairoutdinov and Randall (2001) created a multiscale version of the Community Atmosphere Model (CAM; Collins et al. 2006a). They replaced most of the parameterizations used by CAM with a simplified version of Khairoutdinov's CRM (Khairoutdinov and Randall 2003). Parameterizations of radiation, microphysics, and turbulence are included in the CRM. One copy of the CRM runs in each grid column of the CAM. The CRM is two-dimensional (one horizontal dimension, plus the vertical), and uses periodic lateral boundary conditions. In the study of Khairoutdinov and Randall (2001), the CRM had a horizontal domain 64 grid columns wide, with a horizontal grid spacing of 4 km. Because the CRM is two-dimensional, it cannot produce realistic vertical fluxes of horizontal momentum. For this reason, the momentum feedback to the GCM was not included.
Khairoutdinov and Randall dubbed the embedded CRM a "superparameterization." The combination of a GCM with a superparameterization is now called a Multiscale Modeling Framework (MMF), and the MMF based on the CAM is now called the SP-CAM. Several additional MMFs have since been created, each based on a different GCM. In a major step, Stan et al. (2010) coupled the SP-CAM to a low-resolution version of POP. As reported by Stan et al. (2010), the coupled model gives a more realistic simulation of the atmospheric circulation than the stand-alone SP-CAM, "right out of the box" and without any tuning, a somewhat surprising result in view of the earlier experiences of others (e.g., Sausen et al. 1988). Superparameterized GCMs are much less computationally expensive than global cloud-resolving models.
SP-CAM and the other MMFs have produced interesting simulations of the MJO, the diurnal cycle of precipitation, the Asian and African monsoons, and other phenomena, including anthropogenic climate change. Further discussion is given by Randall et al. (2016).
Superparameterization has also been tested in an ocean model (Campin et al. 2011) to simulate the small but important regions where deep convection occurs. Though promising, the superparameterization technique has up to now been used less for the ocean than for the atmosphere.
3) Cloud microphysics and aerosols
With increased computing power and the related trend toward finer model resolution, more detailed representations of microphysics, including two-moment schemes, have recently been adopted for operational numerical weather prediction. Examples include the two-moment Milbrandt–Yau scheme in the High Resolution Deterministic Prediction System in Canada (Milbrandt et al. 2016), and the aerosol-aware Thompson scheme (Thompson and Eidhammer 2014) in the U.S. Rapid Refresh (RAP) and High Resolution Rapid Refresh (HRRR) models. A diagram of a typical two-moment, multi-ice-class scheme is shown in Fig. 12-5b. The recent development and operational use of high-resolution, convection-permitting kilometer-scale forecast models has in particular motivated the use of more sophisticated microphysics schemes, since convective and cloud scale motions are more directly coupled to the microphysics. Over the last decade, schemes have also been developed that have moved away from the traditional paradigm of using fixed categories representing different types of ice (e.g., Hashino and Tripoli 2007; Harrington et al. 2013; Morrison and Milbrandt 2015). These schemes evolve ice properties smoothly by predicting characteristics such as particle aspect ratio and density, and avoid some of the difficulties that arise with fixed ice categories. Although bin schemes are still too computationally expensive for operational modeling, they have been used for process studies of topics such as cloud-aerosol interactions (e.g., Feingold et al. 1996; Fridlind et al. 2004; Khain et al. 2005) and microphysical–dynamical interactions (e.g., Stevens et al. 1996; Ackerman et al. 2004). They have also been used to develop and test bulk-microphysics schemes for weather and climate models (e.g., Khairoutdinov and Kogan 2000).
Two-moment schemes have also been developed for climate models, motivated more by the need for a physically based treatment of clouds and radiation through prediction of cloud droplet number and its coupling to aerosols (e.g., Ghan et al. 1997; Lohmann et al. 1999; Ghan et al. 2001; Ming et al. 2007; Lohmann et al. 2007), which have important radiative as well as microphysical effects (Boucher et al. 2013). The models use parameterizations of cloud condensation nuclei activation, coupled with prognostic multispecies aerosol chemistry and transport schemes (e.g., Stier et al. 2005; Seland et al. 2008; Liu et al. 2012). Over the last decade, some climate models have also incorporated the effects on clouds of ice-nucleating aerosols (e.g., Lohmann and Hoose 2009; DeMott et al. 2010), including for mixed-phase clouds (Hoose et al. 2010; Gettelman and Morrison 2015). Current state-of-the-art ice nucleation parameterizations (e.g., Hoose et al. 2010) can directly incorporate laboratory and field measurements of ice-nucleating particles, but there are still large uncertainties in ice nucleation properties.
With higher resolution and increased computational resources, the microphysics schemes used in climate models now incorporate many features previously used in mesoscale models, including prognostic two-moment precipitation (Posselt and Lohmann 2008; Gettelman and Morrison 2015). A critical issue when incorporating such schemes in larger-scale models is that the cloud-scale and mesoscale motions driving the microphysics are not resolved, and thus the microphysics must be coupled with “macrophysics” parameterizations of the driving dynamic and thermodynamic processes. A related issue is that subgrid-scale variability of cloud quantities, typically neglected in small-scale models, is critical in larger-scale models because microphysical process rates often depend nonlinearly on predicted cloud quantities (e.g., Pincus and Klein 2000; Larson et al. 2001; Rotstayn 2000). This has been dealt with in global climate models by ad hoc tuning of process rates (e.g., Golaz et al. 2011), or by integrating them over an assumed subgrid-scale distribution of cloud water amount in each grid cell (Morrison and Gettelman 2008).
An important advance over the past decade has been the development of Lagrangian particle-based microphysics schemes in which the multitude of cloud and precipitation hydrometeors are represented by a collection of “super-particles” that evolve as they are transported by the modeled flow (e.g., Shima et al. 2009; Andrejczuk et al. 2010; Unterstrasser and Sölch 2010). Unlike bin (and bulk) schemes that employ continuous-medium, Eulerian microphysical variables, Lagrangian-based schemes do not suffer from numerical diffusion errors.
Up to now, most of the work on microphysics parameterization has been focused on stratiform clouds. The treatment of microphysics in convection parameterizations has generally remained very simple and crude, even though cumulus clouds generate a large fraction of Earth's precipitation, and detrainment from cumulus updrafts produces many radiatively important stratiform clouds. All of these important effects of cumulus clouds are influenced by microphysical processes at work within the cumuli. Kerry Emanuel (1991) forcefully argued that more realistic microphysics is needed in cumulus parameterizations. There has been some recent progress in this area (Song and Zhang 2011; Elsaesser et al. 2017; Zhao et al. 2016). In addition, attempts have been made to unify cloud microphysics across cloud schemes, so that climate models treat all clouds with the same microphysics (Bogenschutz et al. 2013). Overall, the ongoing convergence of models spanning scales from weather to climate requires detailed yet efficient cloud microphysics schemes linked consistently to the parameterized turbulence, convection, and radiation.
d. Current issues in ocean modeling
1) Ocean model intercomparisons
The fidelity of climate models depends on the integrity of the physical parameterizations in the ocean component. Early coupled ocean–atmosphere models drifted away from an Earth-like climate because of inaccuracies in the representation and parameterization of physical processes in the models. An important milestone was reached by Boville and Gent (1998) and Gordon et al. (2000), whose use of improved ocean physical parameterizations enabled much more stable simulations without "flux adjustments." Gent (2013) offers an overview of climate modeling and the role of the ocean and ocean physical parameterizations. Such results underscore the need to study the ocean and sea ice components of climate models separately from the fully coupled AOGCMs. For that purpose, the community developed the Ocean Model Intercomparison Project (OMIP), which started in the late 1990s. It took nearly 20 years to develop a suitable protocol and to improve model integrity sufficiently to support the OMIP exercise (Griffies et al. 2016). Such comparison projects have been the foundation for ongoing model improvements throughout much of the history of ocean modeling, and will remain so into the future.
2) Mesoscale eddies and parameterizations of isopycnal diffusion
Realizing the importance of mesoscale eddies for ocean dynamics and the transport of heat, carbon, and other tracers, oceanographers became rather critical of numerical simulations that had no representation of these eddies. As with synoptic eddies in the atmosphere, ocean mesoscale eddies have scales largely determined by the first baroclinic Rossby radius due to their connection to baroclinic instability. Ocean mesoscale eddies range in size from 100 km in the tropics to less than 10 km near the poles and on continental shelves. One response to this situation was to focus on quasigeostrophic models, whose simpler dynamical equations and lack of thermodynamics allowed for the explicit representation of transient eddy features (Holland 1978). Another response was to tackle the problem of eddy parameterization while continuing to improve primitive equation models. Although much progress has been made since the 1970s, the eddy parameterization problem remains at the forefront of ocean theory and modeling to this day.
A conceptual framework for how mesoscale eddies act on the large-scale tracer field arose during the 1970s to 1990s. It emerged largely from field measurements of transient radioactive ocean tracers, as well as from atmospheric insights into transport by synoptic eddies. The two key pieces of the framework are eddy-induced diffusion along neutral surfaces and eddy-induced stirring of density in a manner that reduces available potential energy. See the book by Griffies (2004) for a pedagogical treatment.
Neutral diffusion (more commonly known as isopycnal diffusion) was proposed by Solomon (1971) and Redi (1982). The neutral diffusion operator respected growing observational evidence (Veronis 1975) that tracers are stirred by mesoscale eddies along neutral directions rather than along constant geopotential surfaces (see McDougall 1987; McDougall et al. 2014, for discussion of neutral directions). Cox (1987) offered a numerical implementation of isopycnal diffusion, and Griffies et al. (1998) updated the Cox scheme to remove some pernicious numerical instabilities.
The eddy-induced tracer transport was proposed by Gent and McWilliams (1990). Consistent with the energetic impact of mesoscale eddies, the eddy-induced velocity is designed to extract available potential energy from the large-scale density field (Gent et al. 1995; Griffies 1998). Greatbatch and Lamb (1990) offered the complementary insight that, for geostrophic flows, this transport is equivalent to a vertical form drag on momentum, realized through a vertical viscosity.
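In the small-slope forms most commonly implemented, the two pieces can be written compactly. The Gent–McWilliams eddy-induced velocity is derived from a streamfunction proportional to the isopycnal slope S, and the Redi operator diffuses tracers along neutral directions:

\[
\mathbf{S} = -\frac{\nabla_h \rho}{\partial \rho/\partial z}, \qquad
\mathbf{u}^{*} = -\frac{\partial}{\partial z}\!\left(\kappa\,\mathbf{S}\right), \qquad
w^{*} = \nabla_h \!\cdot\! \left(\kappa\,\mathbf{S}\right),
\]

\[
\mathbf{K}_{\mathrm{Redi}} = \kappa_I
\begin{pmatrix}
1 & 0 & S_x \\
0 & 1 & S_y \\
S_x & S_y & |\mathbf{S}|^{2}
\end{pmatrix},
\]

where κ is the eddy-induced (thickness) diffusivity and κ_I the isopycnal tracer diffusivity; the two coefficients need not be equal.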
As shown by Danabasoglu et al. (1994), the combined use of neutral diffusion and eddy-induced stirring remedied a huge suite of model biases that had plagued the ocean GCMs of the time. Evolved versions of these two parameterizations are still in use today in all ocean GCMs that do not explicitly resolve transient mesoscale eddy features. Neutral diffusion is simpler to implement in isopycnal models than the rotated neutral diffusion of geopotential models, but precludes the representation of water mass transformation by thermobaricity.
Models that use quasi-Lagrangian methods [see section 7c(2)] to ensure that model coordinate surfaces are isopycnal surfaces can directly represent neutral diffusion without the need for special numerical methods.
3) Diapycnal mixing within the ocean interior and boundary layers
Much of the ocean interior is a quasi-ideal fluid in that there is very little irreversible mixing between isopycnals. In contrast, mixing is vigorous in the mixed layer of the upper ocean, as well as in "benthic" boundary layers next to the ocean bottom. Mixing between interior ocean isopycnals (i.e., diapycnal mixing) affects stratification, ventilation, and the time scales for dynamical processes such as waves. Hence, this mixing has a very large impact on ocean circulation. The sensitivity of ocean circulation models to the levels of diapycnal mixing was emphasized by the water mass study of Bryan and Lewis (1979). They prescribed an enhanced diffusivity at depth to account for increased mixing in deep ocean regions of low stratification. This Bryan–Lewis diffusivity profile became the norm for ocean circulation models for the next 20 years, because it greatly improved the realism of the simulations, particularly in regions where deep flows are prominent, such as the Southern Ocean. The sensitivity of ocean circulation to diapycnal mixing has also been emphasized by the work of Walter Munk (1966) and Frank Bryan (1987).11
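A profile of the Bryan–Lewis type is often written as an arctangent transition from a small upper-ocean diffusivity to a larger abyssal value; the sketch below uses that functional form with purely illustrative parameter values, not the published ones.

```python
import numpy as np

def bryan_lewis_diffusivity(depth, kappa_upper=0.3e-4, kappa_deep=1.3e-4,
                            z_transition=2500.0, inv_width=4.5e-3):
    """Depth-dependent diapycnal diffusivity (m^2 s-1), Bryan-Lewis style.

    depth        : depth below the surface in meters (positive downward)
    kappa_upper  : asymptotic diffusivity in the upper ocean
    kappa_deep   : asymptotic diffusivity in the abyss
    The arctangent gives a smooth transition centered at z_transition.
    All parameter values here are illustrative.
    """
    return (0.5 * (kappa_upper + kappa_deep)
            + (kappa_deep - kappa_upper) / np.pi
            * np.arctan(inv_width * (depth - z_transition)))

# Example: evaluate the profile on a simple vertical grid.
kappa = bryan_lewis_diffusivity(np.linspace(0.0, 5000.0, 51))
```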
These modeling studies pointed to the need for additional field measurements and process modeling to enable an understanding of the fundamental nature of interior ocean mixing. This work was recently reviewed by MacKinnon et al. (2013), who brought together ideas about interior mixing and summarized its connection to breaking internal gravity waves. Further, this study offers an example of how large-scale modeling, process modeling, theory, and observations can be synergistically combined to yield a deeper understanding of how the ocean works.
The ocean is strongly forced at its surface through air–sea and ice–sea interactions, and at the bottom through interactions with the solid Earth. This forcing drives intense three-dimensional turbulent mixing with order unity vertical-to-horizontal aspect ratios (i.e., nonhydrostatic dynamics). It therefore must be parameterized in hydrostatic ocean models.
In ocean circulation models of the 1980s, the surface boundary layer was “parameterized” by using a top layer of order 50 m thick. However, as modelers refined their vertical grid spacing, the need for more physically based schemes became apparent. In response to this need, Large et al. (1994) provided a review of the extant methods (e.g., bulk boundary layers and second-order turbulence closures). They proposed a new approach based on ideas that had been developed for atmospheric boundary layer parameterizations (Troen and Mahrt 1986; Holtslag et al. 1990; Holtslag and Boville 1993). Their K-profile parameterization (KPP) has been incorporated into many ocean climate models. Alternative methods based on energetic approaches have also provided a framework for boundary layer closure (e.g., Gaspar et al. 1990), particularly in models of the NEMO community. Such energetic approaches have also traditionally been used in isopycnal models (Hallberg 2003).
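To give a feel for the K-profile idea, the following is a minimal sketch assuming a prescribed boundary layer depth, a constant turbulent velocity scale, and the simple cubic shape function $\sigma(1-\sigma)^2$; the names and values are illustrative only, and the full scheme of Large et al. (1994) adds stability-dependent velocity scales, interior matching conditions, and a nonlocal transport term.

```python
import numpy as np

def kpp_diffusivity(depth, h_bl, w_s, kappa_background=1.0e-5):
    """Schematic boundary layer diffusivity (m^2/s): K = h_bl * w_s * G(sigma),
    with sigma = depth / h_bl and G(sigma) = sigma * (1 - sigma)**2 inside the
    boundary layer, blending to a constant background value below it."""
    sigma = np.asarray(depth, dtype=float) / h_bl
    shape = np.where(sigma < 1.0, sigma * (1.0 - sigma) ** 2, 0.0)
    return np.maximum(h_bl * w_s * shape, kappa_background)

# Example: a 50-m-deep boundary layer with a 0.01 m/s turbulent velocity scale.
depths = np.linspace(0.0, 100.0, 11)
K = kpp_diffusivity(depths, h_bl=50.0, w_s=0.01)
```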
Much of the deep water around Antarctica originates from overflows off the continental shelves. Similar processes occur in the Denmark Strait and Faroe Bank Channel regions of the North Atlantic. Faithfully incorporating such processes in ocean climate models requires a combination of suitable model frameworks (e.g., vertical coordinates) and parameterizations. The traditional geopotential vertical coordinate is ill suited to representing these processes because of high levels of spurious mixing, whereas isopycnal and terrain-following models are far better suited (Legg et al. 2006). Legg et al. (2009) summarized the results from a climate process team of global circulation modelers, theorists, process physicists, and observationalists who focused on this overflow problem and offered recommendations for improving the climate-scale models.
e. Current issues in sea ice modeling
With satisfactory methods to solve sea ice dynamics, attention turned to improving the thermodynamics by implementing an ice-thickness distribution and brine-pocket physics, first in the University of Victoria Climate Model (Bitz and Lipscomb 1999; Bitz et al. 2001), soon after in version 2 of the Community Climate System Model (CCSM2; Bitz et al. 2005; Holland et al. 2006) and in version 2 of the GFDL Climate Model (CM2; Winton 2000; Gnanadesikan et al. 2006), and now in the majority of models. Detailed melt pond parameterizations and radiative transfer that includes scattering (important for sea ice because of brine inclusions and air bubbles) are in many models now too (e.g., Briegleb and Light 2007; Flocco and Feltham 2007; Holland et al. 2012). A desire to simulate brine pocket dynamics more faithfully, with prognostic salinity and sea ice biogeochemistry, has motivated practical schemes to better approximate mushy-layer physics (e.g., Vancoppenolle et al. 2009; Turner et al. 2013).
Figure 12-11 is a schematic illustration of the grid cell of a state-of-the-art sea ice model with a thickness distribution. Each thickness category has a unique snow depth, melt pond depth and coverage, heat fluxes at the top and bottom, and vertical profile of temperature and salinity. A fraction of the grid cell may be open water. Models that do not parameterize the thickness distribution in effect have just one thickness category.
Schematic of sea ice state variables and surface fluxes that are predicted within a grid cell of an Earth system model. It is common for the sea ice thickness distribution to be resolved in five discrete categories, each with a unique thickness range, and another category for open water. [Redrawn from Notz and Bitz (2017); © 2003, 2010, 2017 by John Wiley & Sons, Ltd.]
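The per-category state just described maps naturally onto a simple data structure. The sketch below is purely illustrative: the field names are our own and do not correspond to the variables of any particular sea ice model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IceCategory:
    """State carried for one thickness category within a grid cell."""
    area_fraction: float       # fraction of the grid cell covered by this category
    ice_thickness: float       # mean ice thickness of the category (m)
    snow_depth: float          # snow depth on top of the ice (m)
    pond_fraction: float       # melt pond coverage within the category
    pond_depth: float          # melt pond depth (m)
    temperature: List[float]   # vertical temperature profile in the ice (deg C)
    salinity: List[float]      # vertical salinity profile in the ice (g/kg)

@dataclass
class SeaIceCell:
    """A grid cell carrying a discretized ice-thickness distribution."""
    categories: List[IceCategory] = field(default_factory=list)

    @property
    def open_water_fraction(self) -> float:
        # Whatever area is not covered by any ice category is open water.
        return 1.0 - sum(c.area_fraction for c in self.categories)
```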
In weather forecast systems, the initial sea ice concentration is usually specified based on passive microwave satellite retrievals and held fixed throughout the forecast (e.g., Grumbine 2013). Other aspects of sea ice thermodynamics are usually rudimentary. Until recently, sea ice was specified in medium-range and seasonal forecast models as well. However, with version 2 of the National Centers for Environmental Prediction Climate Forecast System (CFSv2; Saha et al. 2010), the GFDL sea ice component of CM2 was implemented. In 2017, ECMWF transitioned their subseasonal operational forecast model from fixed to active sea ice by adopting version 2 of the Louvain-la-Neuve Sea Ice Model (LIM; Fichefet and Morales-Maqueda 1997; F. Vitart 2017, personal communication; see https://www.ECMWF.int/en/research/modelling-and-prediction/marine).
f. Current issues in land surface modeling
Turbulent fluxes of latent and sensible heat and momentum can be estimated from high-frequency measurements of wind speed and scalars through a technique called eddy covariance, pioneered in the 1950s (Swinbank 1951). These fluxes are examples of the second moments considered in boundary layer parameterizations based on higher-order closure, as discussed in section 5d(1). The development of relatively inexpensive sonic anemometers and fast-response sensors led to rapid expansion in the use of eddy covariance in the 1990s. The application of the technique to measure the carbon balance of ecosystems has led to the creation of a worldwide network of many hundreds of semipermanent eddy covariance towers that monitor turbulent fluxes over land surfaces (Baldocchi et al. 2001; Baldocchi 2003). The availability of hourly estimates of turbulent fluxes of heat, moisture, and CO2 over all kinds of surfaces in all kinds of weather has been incredibly valuable for the development and maturation of land surface modeling (Friend et al. 2007; Stöckli et al. 2008b).
The use of satellite imagery to prescribe vegetation and soil parameters made land models more realistic in the 1990s, but also limited their usefulness for prediction, since satellite imagery will never be available for the future. Two major developments in the 2000s intended to address this were prognostic phenology and dynamic global vegetation models (DGVMs). Rather than using satellite vegetation data as model input, land models seek to predict both seasonal and longer-term changes in vegetation properties. Observations of vegetation properties from satellites and other sources are then used to evaluate model output.
“Phenology” refers to the seasonal growth and shedding of leaves in response to changing environmental conditions. Models with prognostic phenology “grow their own leaves.” These models are based on empirical relationships between the timing of leaf activity and day length, temperature, and moisture (White et al. 1997; Lawrence and Slingo 2004; Arora and Boer 2005; Gienapp et al. 2005; Jolly et al. 2005; Stöckli et al. 2008a; Dickinson et al. 2008). The availability of global satellite coverage and hundreds of hourly flux tower records greatly accelerated the development of skillful parameterizations of phenology (Zhang et al. 2003; Reed et al. 1994; Gibelin et al. 2006; Bradley et al. 2007; Kathuroju et al. 2007; Stöckli et al. 2011).
Dynamic global vegetation models seek to predict not just the seasonal greening and browning of the land surface, but changes in the long-term distribution of vegetation in response to climate. These models are important for century- and longer-scale climate simulation, allowing for feedback between the physical climate and the geographic patterns of surface properties (Cramer et al. 2001; Sitch et al. 2008). These models incorporate well-established physical and biological algorithms of earlier land surface parameterizations, but add algorithms for plant establishment, mortality, and competition for light and water (Cox 2001; Bonan et al. 2003; Woodward and Lomas 2004; Gerten et al. 2004; Krinner et al. 2005; Sitch et al. 2005; Lucht et al. 2006). Introduction of a new feedback process such as vegetation–climate interaction can result in large perturbations, which may not be realistic. An early DGVM result was catastrophic dieback of the Amazon rain forest in one such coupled model (Cox et al. 2000, 2004), which released large amounts of CO2 to the atmosphere and accelerated global warming. This result may reflect excessive drought stress in the hydrologic model component (Baker et al. 2008; Harper et al. 2014) rather than a realistic assessment of carbon cycle instability (Cox et al. 2013).
Having linked land–atmosphere exchanges of energy and water with photosynthesis in the 1990s and then incorporated prognostic phenology and dynamic vegetation in the 2000s, the coupled models were then used to analyze sources and sinks of atmospheric CO2. It has long been known that terrestrial ecosystems currently sequester about half of global fossil fuel emissions because of an excess of photosynthesis over decomposition (Tans et al. 1990; Le Quéré et al. 2009). Increased atmospheric CO2 directly induces enhanced rates of photosynthesis (Norby et al. 2005; Luo et al. 2006), but nutrient limitation typically restricts growth (Oren et al. 2001; LeBauer and Treseder 2008; Thornton et al. 2007, 2009). Land carbon sinks also result from changes in land management and the age structure of forests (Shevliakova et al. 2009; Pan et al. 2011). Each of these carbon sink processes is likely to change in the future: some (e.g., CO2 fertilization) are likely to get stronger while others (e.g., regrowing forests) are likely to get weaker.
A systematic intercomparison of coupled carbon–climate models was undertaken by Friedlingstein et al. (2006). They ran 11 Earth system models from 1850 to 2100, prescribing fossil fuel emissions, allowing ocean and land sinks to interact, and predicting both CO2 and climate change. Their results showed a striking divergence in CO2 and climate in the twenty-first century. Most simulations developed stronger and stronger land carbon sinks, driven primarily by CO2 fertilization, but the effect was highly variable across participating models. Several simulations, however, showed sharply decreased land carbon uptake or even the release of hundreds of gigatons of land carbon to the atmosphere as CO2, as death and decomposition overtook photosynthesis. This highly uncertain carbon–climate feedback (Dufresne et al. 2002; Friedlingstein et al. 2003) was shown to produce uncertainty of over 250 ppm in simulated CO2 at the end of the runs (Friedlingstein et al. 2006), given identical fossil fuel emissions, with a resulting spread of about 1.5 K of global warming. A quantitative analysis of Earth system climate feedbacks showed that the carbon–climate feedback is among the most uncertain, rivaling uncertainty in cloud feedbacks (Gregory et al. 2009).
9. The future
a. Increasing resolution
GCMs have always used the fastest computers available. Up to about the year 2000, the “clock speeds” of computer processors steadily increased. A faster clock means that a given arithmetic operation (e.g., addition or multiplication) can be performed in a shorter time. Faster clocks thus allowed longer simulations with the same model, or simulations of a given length with more “expensive” models, that is, models that use higher spatial resolution or more computationally demanding physical parameterizations.
The increase in clock speeds came to an end in large part because faster clocks demand increasing amounts of expensive electrical power; the cost has simply become unsupportable. Since about 2000, the supercomputers used to run ESMs have increased in performance largely through the use of increasing numbers of processors running in parallel. The most straightforward way for a modeling center to use more processors is to increase the horizontal resolution of the model. Unfortunately, however, having 4 times as many processors does not enable simulations of a given length with 4 times as many grid points, because the time step of the model will have to decrease at higher resolution. There are various practical difficulties of this type.
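A rough accounting illustrates the point. For an explicit dynamical core, the CFL condition ties the time step to the grid spacing, so for a fixed domain and a fixed simulation length

$$
\text{cost} \;\propto\; \frac{1}{\Delta x^{2}} \times \frac{1}{\Delta t} \;\propto\; \frac{1}{\Delta x^{3}},
$$

meaning that halving the horizontal grid spacing multiplies the work by roughly a factor of 8, not 4, even before any refinement of the vertical grid or of the parameterizations is considered.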
Increasing resolution brings a different issue. As a model’s grid cells become smaller, the character of the unresolved physical processes changes. For example, with grid cells 100 km across, an atmospheric model needs a parameterization that represents the gridcell-averaged heating and drying (and other processes) associated with deep cumulus convection, including vertical transports by strong but unresolved convective updrafts and downdrafts. This is because multiple deep cumulus clouds can fit in a grid cell 100 km wide. In stark contrast, a grid cell 1 km across can actually fit inside a deep cumulus cloud; a model with a horizontal grid spacing of 1 km can explicitly (if crudely) simulate the deep convective clouds, so that parameterizing them is inappropriate (and unnecessary), although of course parameterizations of microphysical processes, radiation, and small-scale turbulence are still needed.
There is an intermediate range of grid spacings, on the order of 10 km, that is too coarse to allow explicit simulation of deep cumulus clouds, but too fine to permit an accurate statistical representation of such clouds. This troublesome range of grid spacings, used today in some global models, is often called the “gray zone.” An analogous gray zone can be defined with respect to the turbulent eddies of the boundary layer, which are an order of magnitude smaller than deep cumulus clouds. In fact, because the atmosphere and ocean contain eddies on all scales larger than a millimeter or so, a gray zone can be defined for any practical choice of horizontal resolution.
The gray zone for deep cumulus convection is thought to be particularly important, however. Many of today’s models have grid spacings that are in or approaching the gray zone for deep convection. Ongoing research aims to create resolution-independent parameterizations that can work for a wide range of horizontal grid spacings, including those that fall within the gray zone (e.g., Arakawa and Wu 2013). This will allow a single code, based on a single set of equations, to be used with a wide range of horizontal grid spacings—a very practical and convenient modeling system.
b. The future of atmospheric dynamical cores
The ongoing increase in horizontal resolution, mentioned in the preceding section, has motivated the development of “nonhydrostatic” dynamical cores for global models, which do not use the quasi-static approximation. Some current research is aimed at evaluating the relative merits of using the “fully compressible” system of equations, which allows vertically propagating sound waves, versus alternative systems that filter such waves (e.g., Arakawa and Konor 2009).
Because its cost grows faster than the number of degrees of freedom, and because of issues such as “spectral ringing” in the presence of sharp gradients, the imminent demise of the spectral method has been predicted for several decades! The communication burden of the spectral transforms on massively parallel machines may be the final nail in the coffin.
Semi-Lagrangian advection schemes are complex both algorithmically and in terms of their communication patterns. At the same time, their advantage in being able to take large time steps is less important on quasi-uniform grids. We may see a move away from semi-Lagrangian schemes in the future.
Finally, semi-implicit integration schemes require the solution of global elliptic problems, which are perceived to be difficult to solve efficiently on massively parallel machines. Consequently, new nonhydrostatic model developments aimed at massively parallel machines have tended to adopt time-splitting or vertically implicit integration schemes (Satoh et al. 2008; Skamarock et al. 2012; Zängl et al. 2015), though some attempts have been made to demonstrate the feasibility and competitiveness of parallel elliptic solvers (Heikes et al. 2013; Sandbach et al. 2015).
There is now a vast number and variety of numerical methods for atmospheric modeling under consideration by the research community. A range of quasi-uniform grids is being explored, the most popular being cubed spheres, triangular and hexagonal icosahedra, and the overset yin–yang grid (Fig. 12-12). Spatial discretizations include finite-difference methods, finite-volume methods, and a variety of finite-element methods, which are analogous to spectral methods but use local (rather than global) basis functions. These are coupled with a range of explicit, implicit, subcycling, and Riemann-solver-based time integration schemes.
Three examples of quasi-uniform spherical grids. (left) Cubed sphere; (middle) hexagonal–icosahedral grid; (right) yin–yang grid. In practice the resolutions used would be much finer than shown here.
Current work is exploring some approaches that have the potential for a major impact on the field, if they can be made to work well enough. Grids with geographically variable (but temporally fixed) resolution are being tested (e.g., Rauscher and Ringler 2014; Zarzycki and Jablonowski 2015). An idea that has great potential to improve the computational efficiency of weather and climate simulations is to use a grid that dynamically adapts to the solution, placing the highest resolution where it is most needed. Alternative approaches include moving the grid while retaining the grid topology, or inserting and removing grid points where needed. Experiments with both approaches appeared in the 1990s (e.g., Dietachmayer and Droegemeier 1992; Skamarock and Klemp 1993). Adaptive vertical grids are also being investigated (Marchand and Ackerman 2011; Yamaguchi et al. 2017). However, there are significant challenges both in defining suitable criteria for where to refine the grid and in maintaining conservation and balance and avoiding noise as the grid adapts. The only operational adaptive grid forecast model to date appears to be that of Bacon et al. (2000). Some of the challenges mentioned above are being addressed (e.g., St-Cyr et al. 2008; Dubos and Kevlahan 2013; Bauer et al. 2014; see also the 28 November 2009 issue of Philos. Trans. Roy. Soc.). Also, with the evolution of supercomputers toward ever greater numbers of processors, a significant challenge is to devise algorithms with sufficient parallelism to take advantage of them. This has led to some exploration of parallel-in-time algorithms (e.g., Haut and Wingate 2014).
The future evolution of computer architecture (which itself is uncertain) is likely to continue to influence the development of numerical methods. Efforts are currently under way to test the feasibility of running global atmospheric models on machines that include graphics processing units (GPUs) to achieve greater speed (e.g., Leutwyler et al. 2016; Abdi et al. 2017).
c. The future of radiation parameterizations
Radiation is unique as a parameterization problem for atmospheric modeling because fundamental understanding of the problem is so complete. For this reason, the parameterization of radiative processes focuses on how to use incomplete information from a model to compute fluxes of sufficient accuracy with acceptable computational cost. Future research will likely focus on strategies for mitigating computational cost, increasing accuracy, and accounting for the horizontal transport of radiation.
As described in section 6c, the high computational cost of spectrally integrated calculations means that radiative fluxes are typically computed more sparsely in time than any other subgrid-scale diabatic process, potentially degrading simulations by blurring the coupling between fast-changing clouds and radiative fluxes. One promising approach is to devote specific computational resources to computing radiative fluxes (e.g., Balaji et al. 2016), allowing more frequent radiation computations and speeding time to solution at the cost of using more resources overall. Because radiation calculations integrate over a spectral dimension, the problem is well suited to exploit heterogeneous computing environments. Highly parallel processors such as GPUs in particular offer tantalizing hints of very high efficiency (e.g., Price et al. 2013; Clement et al. 2018).
New frontiers for accuracy include better coupling of radiation among the atmospheric, oceanic, cryospheric, and terrestrial components of Earth system models, and steps to relax the strong one-dimensional plane-parallel assumption. In all ESMs of which we are aware, radiative fluxes are computed independently in each domain, that is, in the atmosphere, ocean, and sea ice, using independent models that are nonetheless based on the same underlying equations (e.g., Yuan et al. 2017). Results from each domain serve as boundary conditions for the other domains; the cost of coupling components often requires that spectral resolution be degraded, at a potential cost in accuracy. A more natural and potentially more accurate approach would be to compute radiative fluxes in the atmosphere and ocean simultaneously (e.g., Lee and Liou 2007); extending this approach to sea ice, whose albedo can vary dramatically, might improve prediction in the Arctic. Problems arising in the computation of radiation in heterogeneous vegetation canopies (e.g., Yuan et al. 2017) have much in common with similar efforts in clouds, suggesting that progress might come from the two communities working more closely together (Hogan et al. 2018).
Despite the manifest three-dimensionality of the atmosphere, essentially all parameterizations of radiative transfer used in global models adopt plane-parallel geometry and make use of the assumption that all radiation travels straight up and down. Emerging new techniques (Schäfer et al. 2016; Hogan et al. 2016) relax the one-dimensional assumption, accounting parametrically for effects such as the casting of cloud shadows, the illumination of cloud sides, and the increased cooling from cloud edges (Hogan et al. 2016) within each column. These effects are small but systematic: finite clouds uniformly increase surface and top-of-atmosphere fluxes relative to their plane-parallel counterparts, while impacts of solar illumination vary with solar zenith angle, and hence latitudinally and seasonally. As parametric treatments are evaluated more rigorously, efforts to include these effects in coarse-resolution models may become more common.
d. The future of cloud and microphysics parameterizations
Parameterizing microphysics remains highly challenging because of the complexity of the underlying physics and a lack of fundamental knowledge of these processes, especially for ice microphysics. This is a critical challenge for weather and climate modeling because simulations are often quite sensitive to microphysical parameter settings, and the increasing complexity of schemes has not changed this picture. Overall, continued advancement of parameterizations will require greater knowledge of the underlying physical processes in order to reduce parameter uncertainty, including from laboratory studies, cloud observations, and detailed process modeling. Representing subgrid-scale cloud processes consistently across all model scales continues to be another major challenge despite increasing model resolution. Efforts have been made to develop subgrid representations of clouds and dynamics that consistently drive cloud microphysics across a range of scales and cloud types (e.g., Thayer-Calder et al. 2015). These “unified” cloud parameterization efforts will likely be an important part of weather and climate model development in the coming years.
New approaches to superparameterization are also under development. For example, Parishani et al. (2017) report encouraging results with an “ultra-parameterization” in which the horizontal grid spacing of the embedded cloud-resolving models is reduced to 250 m, and the vertical resolution is also increased, so that the eddies associated with shallow clouds can be explicitly simulated. Jung and Arakawa (2014) have developed a “quasi-three-dimensional” (Q3D) superparameterization, in which the CRMs take the form of narrow channels that form closed loops on the global model’s grid, for example, around meridians or latitude circles. The channels cross but do not intersect; they communicate only through the host GCM. With the Q3D approach, it is possible to include realistic topography (Jung 2016), including orographically enhanced precipitation, as well as vertical momentum transport by both convection and gravity waves, as explicitly simulated on the CRM grids.
Meanwhile, efforts are under way to use machine learning to create accurate and computationally efficient parameterizations (Chevallier et al. 1998; Brenowitz and Bretherton 2018; Gentine et al. 2018; Schneider et al. 2017). It seems likely that this approach can lead to improved simulations with tolerable computational cost, at least for the current climate. Can it also be used to simulate different climate states? Can it be used to learn more about the actual physical mechanisms through which the cloud systems interact with larger-scale motions? Work is needed to address these questions.
e. The future of ocean models
Since the 1970s, much of the focus of global ocean circulation modeling has been on understanding, representing, and parameterizing the impacts of mesoscale eddies. This focus remains a large part of today’s efforts. For example, prototype centennial-scale climate simulations have been run with a vigorous eddy field. In particular, Griffies et al. (2015) emphasize the role of mesoscale eddies in the vertical transport of heat in the ocean, which directly impacts the rate of transient climate change. Small et al. (2014) emphasize the role of small-scale ocean features in forcing the atmospheric circulation through the surface fluxes. However, new avenues of research are focused on the submesoscales, which are intermediate between the balanced motions at the mesoscale and unbalanced motions at the gravity wave scale (Fox-Kemper et al. 2008; Thomas et al. 2008; McWilliams 2016). Submesoscale processes impact the vertical transfer of properties in the upper ocean, and mediate the downscale cascade of energy and tracer variance to small scales. In parallel, modelers are increasingly pushing the frontiers of coastal and shelf processes within global climate models by grid refinement or nesting approaches. It is here that impacts from the changing climate will have their largest footprint on civilization, because of changes in ecosystems and sea level.
We expect that numerical models of the ocean will continue to improve through advances in numerical methods and physical parameterizations, including many of the approaches outlined here (e.g., ALE for the vertical and unstructured meshes for the horizontal). Improvements to observational datasets will also be necessary to evaluate the simulations. The history of ocean modeling has not been linear, with examples of advances in one subfield spawning new understanding and development in unexpected areas. Nonetheless, ongoing advances in ocean models and modeling practices, along with new theoretical insights, will ensure that numerical models remain a fundamental component of oceanography and climate science into the future.
f. The future of sea ice models
The next developments for sea ice are likely to be more realistic sea ice dynamics that replicate the effects of anisotropy on lead formation (e.g., Sulsky et al. 2007; Tsamados et al. 2013) and joint floe size and thickness distribution models (Horvat and Tziperman 2015; Roach et al. 2018)—the latter permitting better representation of the region near the sea ice edge, where ocean surface waves interact with floes and floe size influences the ice–albedo feedback.
g. The future of land surface models
As more processes are added to Earth system models, there is more room for unexpected interactions. Just as the coupling of ocean and atmosphere GCMs produced nonphysical climate drift that required flux corrections (e.g., Cubasch et al. 1992), fully coupled land–atmosphere models produced highly uncertain carbon–climate feedback (Friedlingstein et al. 2006). In response to the large spread in Earth system model outcomes, the land modeling community has embarked on a series of systematic model intercomparisons, evaluations, and benchmarking exercises using a wide range of global datasets (Luo et al. 2012; Huntzinger et al. 2012).
Land–atmosphere coupling in the CMIP5 ensemble of Earth system models produced an even wider spread of outcomes than had been documented a decade earlier (Arora et al. 2013), as more model complexity was added. An important approach to improving predictability of land–atmosphere climate futures is the application of emergent constraints on carbon–climate feedbacks (Wenzel et al. 2014). Among a subset of CMIP5 models forced with identical emissions and allowed to predict the behavior of land and ocean sinks and atmospheric CO2, the spread in CO2 concentration in 2100 was almost 350 ppm (Hoffman et al. 2014). Uncertain carbon–climate feedback resulted in a spread in radiative forcing of more than 2 W m−2, comparable to emission scenario uncertainty. Hoffman et al. (2014) compared the models’ predicted CO2 concentrations in 2010 to observations and found that their biases in the present day were good linear predictors of the spread in 2100. Using integral constraints on anthropogenic carbon inventories in the ocean and atmosphere, they adjusted the carbon sinks to match. This reduced the spread of CO2 in 2100 by a factor of 7 relative to the control (CMIP) simulations, showing the potential for leveraging emergent constraints to address the carbon–climate feedback problem.
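The emergent-constraint calculation itself is simple; the sketch below is a minimal illustration in which a projected quantity is regressed on a present-day observable across the model ensemble and the fit is evaluated at the observed value. The function and variable names are our own, and the full analysis of Hoffman et al. (2014) also propagates observational and regression uncertainties, which are omitted here.

```python
import numpy as np

def emergent_constraint_estimate(x_models, y_models, x_obs):
    """Fit y = a*x + b across the model ensemble, then evaluate at x_obs.

    x_models: present-day observable simulated by each model (e.g., CO2 in 2010)
    y_models: projected quantity from each model (e.g., CO2 in 2100)
    x_obs:    observed present-day value used to constrain the projection
    """
    slope, intercept = np.polyfit(np.asarray(x_models, dtype=float),
                                  np.asarray(y_models, dtype=float), 1)
    return slope * x_obs + intercept
```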
The International Land Model Benchmarking Project (ILAMB; Hoffman et al. 2016) provides a comprehensive suite of observational datasets from flux towers, field experiments, satellite imagery, and atmospheric sampling in a transparent framework for model evaluation and intercomparison. Dozens of land modeling groups from around the world have participated in the development of the benchmarks, and in model intercomparison and evaluation studies. As of late 2017, model evaluation and improvement are among the highest priorities for predictive modeling of land–atmosphere futures in the Earth system (Huntzinger et al. 2017).
10. The road goes ever on
Developments in atmospheric dynamics and physics, instrumentation and observing practice, and digital computing have made the utopian visions of Abbe, Bjerknes, and Richardson an everyday reality. Global numerical weather prediction models are now at the center of operational forecasting and enable us to predict the weather for several days in advance with a high degree of confidence. Progress has been rapid; the useful range of deterministic prediction has been increasing by about one day per decade (Bauer et al. 2015). In addition, Earth system models are now being used to simulate future climate changes that will have enormous societal consequences. Using Earth system models, we are gaining great insight into the factors causing changes in our climate, and the likely timing and severity of those changes.
As a result of these spectacular achievements, meteorology and oceanography are now firmly established as quantitative sciences, and their value and validity are demonstrated daily by the acid test of any science: its ability to predict the results of measurements. The advances in Earth system modeling over the past century have been truly revolutionary. The development of comprehensive Earth system models is a major and insufficiently appreciated scientific achievement of the twentieth century. Today’s most advanced models simulate not only the physics of the atmosphere, oceans and land surface, but also a wide range of chemical and biological processes and the associated couplings and feedbacks. The conceptual breadth of the models has rapidly increased over the last few decades, and is now rather breathtaking.
It is also essential, however, to maintain a focus on the conceptual depth of the models. The ever-expanding range of parameterized processes must be tied back to fundamental physics, as securely as possible. Although it is exciting and important to add new processes to a model, it is at least equally important (and, for some of us, equally exciting) to strengthen the conceptual foundations of a model’s “legacy” components, including such things as parameterizations of clouds and turbulence, and the numerical methods used to solve the equations that govern fluid motion, over a wide range of scales, on a great big rotating sphere.
A comprehensive ESM can simulate many of the emergent phenomena that we see in nature, but the output of such a simulation is just a pile of numbers; it is not an explanation of the natural world. To claim that we understand the results of a highly detailed and successful simulation, and by extension that we understand the real world, we must work to create much simpler models that can semiquantitatively reproduce the key results of the comprehensive models. Meeting this inspiring challenge is the highest goal of our science.
Acknowledgments
We gratefully acknowledge valuable input provided by our friend Albert J. Semtner, Jr., who passed away in December 2018. Bert was a pioneer of ocean and sea ice modeling.
Paul Edwards of Stanford University and Wayne Schubert of Colorado State University helped us to locate some of the early references. Additional input came from Richard Somerville of the Scripps Institution for Oceanography, and Milton Halem of the University of Maryland.
We are grateful to the three reviewers for very helpful comments on the manuscript. Bjorn Stevens, in particular, liberally annotated the 186-page-long first version of the manuscript.
David Randall acknowledges support from NSF Grant AGS-1538532. Cecilia M. Bitz is grateful for support from NSF PLR-1643431. Stephen Griffies thanks his longstanding support from GFDL for vigorous ocean and climate modeling activities. The National Center for Atmospheric Research is a major facility sponsored by the National Science Foundation under Cooperative Agreement 1852977.
REFERENCES
Abbe, C., 1901: The physical basis of long-range weather forecasts. Mon. Wea. Rev., 29, 551–561, https://doi.org/10.1175/1520-0493(1901)29[551c:TPBOLW]2.0.CO;2.
Abdi, D. S., L. C. Wilcox, T. C. Warburton, and F. X. Giraldo, 2017: A GPU-accelerated continuous and discontinuous Galerkin non-hydrostatic atmospheric model. Int. J. High Perform. Comput. Appl., 33, 81–109, https://doi.org/10.1177/1094342017694427.
Ackerman, A. S., M. P. Kirkpatrick, D. E. Stevens, and O. B. Toon, 2004: The impact of humidity above stratiform clouds on indirect aerosol climate forcing. Nature, 432, 1014–1017, https://doi.org/10.1038/nature03174.
Adcroft, A., and J.-M. Campin, 2004: Rescaled height coordinates for accurate representation of free-surface flows in ocean circulation models. Ocean Modell., 7, 269–284, https://doi.org/10.1016/j.ocemod.2003.09.003.
Adcroft, A., and R. Hallberg, 2006: On methods for solving the oceanic equations of motion in generalized vertical coordinates. Ocean Modell., 11, 224–233, https://doi.org/10.1016/j.ocemod.2004.12.007.
Ambartsumian, V., 1936: The effect of absorption lines on the radiative equilibrium of the outer layers of stars. Publ. Astron. Obs. Univ. Leningrad, 6, 7–18.
André, J., G. De Moor, P. Lacarrere, and R. Du Vachat, 1976: Turbulence approximation for inhomogeneous flows: Part II. The numerical simulation of a penetrative convection experiment. J. Atmos. Sci., 33, 482–491, https://doi.org/10.1175/1520-0469(1976)033<0482:TAFIFP>2.0.CO;2.
Andrejczuk, M., W. Grabowski, J. Reisner, and A. Gadian, 2010: Cloud–aerosol interactions for boundary layer stratocumulus in the Lagrangian cloud model. J. Geophys. Res., 115, D22214, https://doi.org/10.1029/2010JD014248.
Anthes, R. A., 1977: A cumulus parameterization scheme utilizing a one-dimensional cloud model. Mon. Wea. Rev., 105, 270–286, https://doi.org/10.1175/1520-0493(1977)105<0270:ACPSUA>2.0.CO;2.
Anthes, R. A., 1984: Enhancement of convective precipitation by mesoscale variations in vegetative covering in semiarid regions. J. Climate Appl. Meteor., 23, 541–554, https://doi.org/10.1175/1520-0450(1984)023<0541:EOCPBM>2.0.CO;2.
Arakawa, A., 1966: Computational design for long-term numerical integration of the equations of fluid motion: Two-dimensional incompressible flow. Part I. J. Comput. Phys., 1, 119–143, https://doi.org/10.1016/0021-9991(66)90015-5.
Arakawa, A., 1969: Parameterization of cumulus convection. Proc. WMO/IUGG Symp. on Numerical Weather Prediction, Tokyo, Japan, Japan Meteorological Agency, Vol. IV, 1–6.
Arakawa, A., 1972: Design of the UCLA general circulation model. Vol. 7, Department of Meteorology, University of California, Los Angeles, 116 pp.
Arakawa, A., 2000: A personal perspective on the early years of general circulation modeling. General Circulation Model Development, D. A. Randall, Ed., Academic Press, 1–65.
Arakawa, A., 2004: The cumulus parameterization problem: Past, present, and future. J. Climate, 17, 2493–2525, https://doi.org/10.1175/1520-0442(2004)017<2493:RATCPP>2.0.CO;2.
Arakawa, A., and W. H. Schubert, 1974: Interaction of a cumulus cloud ensemble with the large-scale environment, Part I. J. Atmos. Sci., 31, 674–701, https://doi.org/10.1175/1520-0469(1974)031<0674:IOACCE>2.0.CO;2.
Arakawa, A., and V. R. Lamb, 1977: Computational design of the basic dynamical processes of the UCLA general circulation model. General Circulation Models of the Atmosphere, J. Chang, Ed., Methods in Computational Physics, Vol. 17, Academic Press, 173–265.
Arakawa, A., and V. R. Lamb, 1981: A potential enstrophy and energy conserving scheme for the shallow water equations. Mon. Wea. Rev., 109, 18–36, https://doi.org/10.1175/1520-0493(1981)109<0018:APEAEC>2.0.CO;2.
Arakawa, A., and S. Moorthi, 1988: Baroclinic instability in vertically discrete systems. J. Atmos. Sci., 45, 1688–1708, https://doi.org/10.1175/1520-0469(1988)045<1688:BIIVDS>2.0.CO;2.
Arakawa, A., and C. S. Konor, 2009: Unification of the anelastic and quasi-hydrostatic systems of equations. Mon. Wea. Rev., 137, 710–726, https://doi.org/10.1175/2008MWR2520.1.
Arakawa, A., and C.-M. Wu, 2013: A unified representation of deep moist convection in numerical modeling of the atmosphere. Part I. J. Atmos. Sci., 70, 1977–1992, https://doi.org/10.1175/JAS-D-12-0330.1.
Arakawa, A., Y. Mintz, and A. Katayama, 1968: Numerical Simulation of the General Circulation of the Atmosphere. Department of Meteorology, University of California, 40 pp.
Arora, V. K., and G. J. Boer, 2005: A parameterization of leaf phenology for the terrestrial ecosystem component of climate models. Global Change Biol., 11, 39–59, https://doi.org/10.1111/j.1365-2486.2004.00890.x.
Arora, V. K., and Coauthors, 2013: Carbon–concentration and carbon–climate feedbacks in CMIP5 earth system models. J. Climate, 26, 5289–5314, https://doi.org/10.1175/JCLI-D-12-00494.1.
Ashford, O. M., 1985: Prophet–or Professor?: The Life and Work of Lewis Fry Richardson. Taylor & Francis, 320 pp.
Avissar, R., and R. A. Pielke, 1989: A parameterization of heterogeneous land surfaces for atmospheric numerical models and its impact on regional meteorology. Mon. Wea. Rev., 117, 2113–2136, https://doi.org/10.1175/1520-0493(1989)117<2113:APOHLS>2.0.CO;2.
Bacon, D. P., and Coauthors, 2000: A dynamically adapting weather and dispersion model: The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA). Mon. Wea. Rev., 128, 2044–2076, https://doi.org/10.1175/1520-0493(2000)128<2044:ADAWAD>2.0.CO;2.
Baer, F., 1972: An alternate scale representation of atmospheric energy spectra. J. Atmos. Sci., 29, 649–664, https://doi.org/10.1175/1520-0469(1972)029<0649:AASROA>2.0.CO;2.
Baker, I., L. Prihodko, A. Denning, M. Goulden, S. Miller, and H. Da Rocha, 2008: Seasonal drought stress in the Amazon: Reconciling models and observations. J. Geophys. Res., 113, G00B01, https://doi.org/10.1029/2007JG000644.
Balaji, V., R. Benson, B. Wyman, and I. Held, 2016: Coarse-grained component concurrency in earth system modeling: Parallelizing atmospheric radiative transfer in the GFDL AM3 model using the flexible modeling system coupling framework. Geosci. Model Dev., 9, 3605, https://doi.org/10.5194/gmd-9-3605-2016.
Baldocchi, D. D., 2003: Assessing the eddy covariance technique for evaluating carbon dioxide exchange rates of ecosystems: Past, present and future. Global Change Biol., 9, 479–492, https://doi.org/10.1046/j.1365-2486.2003.00629.x.
Baldocchi, D. D., and Coauthors, 2001: FLUXNET: A new tool to study the temporal and spatial variability of ecosystem-scale carbon dioxide, water vapor, and energy flux densities. Bull. Amer. Meteor. Soc., 82, 2415–2434, https://doi.org/10.1175/1520-0477(2001)082<2415:FANTTS>2.3.CO;2.
Ball, J. T., 1988: An analysis of stomatal conductance. Ph.D. thesis, Stanford University, 89 pp.
Balmaseda, M. A., K. Mogensen, and A. T. Weaver, 2013: Evaluation of the ECMWF Ocean Reanalysis System ORAS4. Quart. J. Roy. Meteor. Soc., 139, 1132–1161, https://doi.org/10.1002/qj.2063.
Barker, H. W., 1996: A parameterization for computing grid-averaged solar fluxes for inhomogeneous marine boundary layer clouds. 1. Methodology and homogeneous biases. J. Atmos. Sci., 53, 2289–2303, https://doi.org/10.1175/1520-0469(1996)053<2289:APFCGA>2.0.CO;2.
Barker, H. W., and Coauthors, 2003: Assessing 1D atmospheric solar radiative transfer models: Interpretation and handling of unresolved clouds. J. Climate, 16, 2676–2699, https://doi.org/10.1175/1520-0442(2003)016<2676:ADASRT>2.0.CO;2.
Barker, H. W., J. N. S. Cole, J.-J. Morcrette, R. Pincus, P. Raeisaenen, K. von Salzen, and P. A. Vaillancourt, 2008: The Monte Carlo Independent Column Approximation: An assessment using several global atmospheric models. Quart. J. Roy. Meteor. Soc., 134, 1463–1478, https://doi.org/10.1002/qj.303.
Barkstrom, B. R., 1984: The Earth Radiation Budget Experiment (ERBE). Bull. Amer. Meteor. Soc., 65, 1170–1185, https://doi.org/10.1175/1520-0477(1984)065<1170:TERBE>2.0.CO;2.
Bates, J., F. Semazzi, R. Higgins, and S. R. Barros, 1990: Integration of the shallow water equations on the sphere using a vector semi-Lagrangian scheme with a multigrid solver. Mon. Wea. Rev., 118, 1615–1627, https://doi.org/10.1175/1520-0493(1990)118<1615:IOTSWE>2.0.CO;2.
Bauer, W., M. Baumann, L. Scheck, A. Gassmann, V. Heuveline, and S. C. Jones, 2014: Simulation of tropical-cyclone-like vortices in shallow-water icon-hex using goal-oriented r-adaptivity. Theor. Comput. Fluid Dyn., 28, 107–128, https://doi.org/10.1007/s00162-013-0303-4.
Bauer, P., A. Thorpe, and G. Brunet, 2015: The quiet revolution of numerical weather prediction. Nature, 525, 47–55, https://doi.org/10.1038/nature14956.
Beckwith, I. E., and D. M. Bushnell, 1968: Detailed description and results of a method for computing mean and fluctuating quantities in turbulent boundary layers. NASA Tech. Note NASA TN D-4815, 119 pp., https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19680026599.pdf.
Beljaars, A. C., P. Viterbo, M. J. Miller, and A. K. Betts, 1996: The anomalous rainfall over the United States during July 1993: Sensitivity to land surface parameterization and soil moisture anomalies. Mon. Wea. Rev., 124, 362–383, https://doi.org/10.1175/1520-0493(1996)124<0362:TAROTU>2.0.CO;2.
Bengtsson, L., M. Kanamitsu, P. Kållberg, and S. Uppala, 1982: FGGE research activities at ECMWF. Bull. Amer. Meteor. Soc., 63, 277–303, https://doi.org/10.1175/1520-0477-63.3.277.
Benjamin, S. G., G. A. Grell, J. M. Brown, T. G. Smirnova, and R. Bleck, 2004: Mesoscale weather prediction with the RUC hybrid isentropic–terrain-following coordinate model. Mon. Wea. Rev., 132, 473–494, https://doi.org/10.1175/1520-0493(2004)132<0473:MWPWTR>2.0.CO;2.
Benjamin, S. G., J. M. Brown, G. Brunet, P. Lynch, K. Saito, and T. W. Schlatter, 2019: 100 years of progress in forecasting and NWP applications. A Century of Progress in Atmospheric and Related Sciences: Celebrating the American Meteorological Society Centennial, Meteor. Monogr., No. 59, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-18-0020.1.
Bentsen, M., and Coauthors, 2013: The Norwegian Earth System Model, NorESM1-M—Part 1: Description and basic evaluation of the physical climate. Geosci. Model Dev., 6, 687–720, https://doi.org/10.5194/gmd-6-687-2013.
Berry, E. X., and R. L. Reinhardt, 1974: An analysis of cloud drop growth by collection: Part I. Double distributions. J. Atmos. Sci., 31, 1814–1824, https://doi.org/10.1175/1520-0469(1974)031<1814:AAOCDG>2.0.CO;2.
Betts, A. K., 1986: A new convective adjustment scheme. Part I: Observational and theoretical basis. Quart. J. Roy. Meteor. Soc., 112, 677–691, https://doi.org/10.1002/qj.49711247307.
Betts, A. K., and M. Miller, 1986: A new convective adjustment scheme. Part II: Single column tests using GATE wave, BOMEX, ATEX and arctic air-mass data sets. Quart. J. Roy. Meteor. Soc., 112, 693–709, https://doi.org/10.1002/qj.49711247308.
Betts, A. K., J. H. Ball, A. Beljaars, M. J. Miller, and P. A. Viterbo, 1996: The land surface-atmosphere interaction: A review based on observational and global modeling perspectives. J. Geophys. Res., 101, 7209–7225, https://doi.org/10.1029/95JD02135.
Bitz, C. M., and W. H. Lipscomb, 1999: An energy-conserving thermodynamic model of sea ice. J. Geophys. Res., 104, 15 669–15 677, https://doi.org/10.1029/1999JC900100.
Bitz, C. M., M. M. Holland, A. J. Weaver, and M. Eby, 2001: Simulating the ice-thickness distribution in a coupled climate model. J. Geophys. Res., 106, 2441–2464, https://doi.org/10.1029/1999JC000113.
Bitz, C. M., M. M. Holland, E. C. Hunke, and R. E. Moritz, 2005: Maintenance of the sea-ice edge. J. Climate, 18, 2903–2921, https://doi.org/10.1175/JCLI3428.1.
Bjerknes, J., 1955: Investigations of the General Circulation of the Atmosphere. Department of Meteorology, University of California, Los Angeles, 350 pp.
Bjerknes, V., 1904: Das Problem der Wettervorhersage, betrachtet vom Standpunkte der Mechanik und der Physik. Meteor. Z., 21, 1–7.
Bleck, R., 1970: A fast, approximative method for integrating the stochastic coalescence equation. J. Geophys. Res., 75, 5165–5171, https://doi.org/10.1029/JC075i027p05165.
Bleck, R., 1973: Numerical forecasting experiments based on the conservation of potential vorticity on isentropic surfaces. J. Appl. Meteor., 12, 737–752, https://doi.org/10.1175/1520-0450(1973)012<0737:NFEBOT>2.0.CO;2.
Bleck, R., 2002: An oceanic general circulation model framed in hybrid isopycnic–cartesian coordinates. Ocean Modell., 4, 55–88, https://doi.org/10.1016/S1463-5003(01)00012-9.
Bleck, R., and S. G. Benjamin, 1993: Regional weather prediction with a model combining terrain-following and isentropic coordinates. Part I: Model description. Mon. Wea. Rev., 121, 1770–1785, https://doi.org/10.1175/1520-0493(1993)121<1770:RWPWAM>2.0.CO;2.
Bleck, R., and D. Boudra, 1986: Wind-driven spin-up in eddy-resolving ocean models formulated in isopycnic and isobaric coordinates. J. Geophys. Res., 91, 7611–7621, https://doi.org/10.1029/JC091iC06p07611.
Bleck, R., S. Benjamin, J. Lee, and A. E. MacDonald, 2010: On the use of an adaptive, hybrid-isentropic vertical coordinate in global atmospheric modeling. Mon. Wea. Rev., 138, 2188–2210, https://doi.org/10.1175/2009MWR3103.1.
Bleck, R., and Coauthors, 2015: A vertically flow-following icosahedral grid model for medium-range and seasonal prediction. Part I: Model description. Mon. Wea. Rev., 143, 2386–2403, https://doi.org/10.1175/MWR-D-14-00300.1.
Bogenschutz, P. A., and S. K. Krueger, 2013: A simplified PDF parameterization of subgrid-scale clouds and turbulence for cloud-resolving models. J. Adv. Model. Earth Syst., 5, 195–211, https://doi.org/10.1002/jame.20018.
Bogenschutz, P. A., A. Gettelman, H. Morrison, V. E. Larson, C. Craig, and D. P. Schanen, 2013: Higher-order turbulence closure and its impact on climate simulations in the community atmosphere model. J. Climate, 26, 9655–9676, https://doi.org/10.1175/JCLI-D-13-00075.1.
Bogenschutz, P. A., A. Gettelman, C. Hannay, V. E. Larson, R. B. Neale, C. Craig, and C.-C. Chen, 2018: The path to CAM6: Coupled simulations with CAM5.4 and CAM5.5. Geosci. Model Dev., 11, 235, https://doi.org/10.5194/gmd-11-235-2018.
Bolin, B., 1955: Numerical forecasting with the barotropic model 1. Tellus, 7, 27–49, https://doi.org/10.3402/tellusa.v7i1.8770.
Bonan, G. B., 1996: A land surface model (LSM version 1.0) for ecological, hydrological, and atmospheric studies: Technical description and user’s guide. NCAR Tech. Note NCAR/TN-417+STR, 155 pp., https://doi.org/10.5065/D6DF6P5X.
Bonan, G. B., 1998: The land surface climatology of the NCAR land surface model coupled to the NCAR Community Climate Model. J. Climate, 11, 1307–1326, https://doi.org/10.1175/1520-0442(1998)011<1307:TLSCOT>2.0.CO;2.
Bonan, G. B., 2015: Ecological Climatology: Concepts and Applications. Cambridge University Press, 563 pp.
Bonan, G. B., S. Levis, S. Sitch, M. Vertenstein, and K. W. Oleson, 2003: A dynamic global vegetation model for use with climate models: Concepts and description of simulated vegetation dynamics. Global Change Biol., 9, 1543–1566, https://doi.org/10.1046/j.1365-2486.2003.00681.x.
Boucher, O., and U. Lohmann, 1995: The sulfate-CCN-cloud albedo effect. Tellus, 47B, 281–300, https://doi.org/10.3402/tellusb.v47i3.16048.
Boucher, O., and Coauthors, 2013: Clouds and aerosols. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 571–657.
Bourke, W., 1974: A multi-level spectral model. I. Formulation and hemispheric integrations. Mon. Wea. Rev., 102, 687–701, https://doi.org/10.1175/1520-0493(1974)102<0687:AMLSMI>2.0.CO;2.
Boville, B. A., and P. R. Gent, 1998: The NCAR Climate System Model, version one. J. Climate, 11, 1115–1130, https://doi.org/10.1175/1520-0442(1998)011<1115:TNCSMV>2.0.CO;2.
Bradley, B. A., R. W. Jacob, J. F. Hermance, and J. F. Mustard, 2007: A curve fitting procedure to derive inter-annual phenologies from time series of noisy satellite NDVI data. Remote Sens. Environ., 106, 137–145, https://doi.org/10.1016/j.rse.2006.08.002.
Bradshaw, P., D. Ferriss, and N. Atwell, 1967: Calculation of boundary-layer development using the turbulent energy equation. J. Fluid Mech., 28, 593–616, https://doi.org/10.1017/S0022112067002319.
Brenowitz, N. D., and C. S. Bretherton, 2018: Prognostic validation of a neural network unified physics parameterization. Geophys. Res. Lett., 45, 6289–6298, https://doi.org/10.1029/2018GL078510.
Briegleb, B. P., and B. Light, 2007: A Delta-Eddington multiple scattering parameterization for solar radiation in the sea ice component of the Community Climate System Model. NCAR Tech. Note 472+STR, 100 pp.
Brubaker, K. L., and D. Entekhabi, 1996: Analysis of feedback mechanisms in land-atmosphere interaction. Water Resour. Res., 32, 1343–1357, https://doi.org/10.1029/96WR00005.
Bryan, F., 1987: Parameter sensitivity of primitive equation ocean general circulation models. J. Phys. Oceanogr., 17, 970–985, https://doi.org/10.1175/1520-0485(1987)017<0970:PSOPEO>2.0.CO;2.
Bryan, K., 1966: A scheme for numerical integration of the equations of motion on an irregular grid free of nonlinear instability. Mon. Wea. Rev., 94, 39–40, https://doi.org/10.1175/1520-0493(1966)094<0039:ASFNIO>2.3.CO;2.
Bryan, K., 1969a: Climate and the ocean circulation III. The Ocean Model. Mon. Wea. Rev., 97, 806–827, https://doi.org/10.1175/1520-0493(1969)097<0806:CATOC>2.3.CO;2.
Bryan, K., 1969b: A numerical method for the study of the circulation of the world ocean. J. Comput. Phys., 4, 347–376, https://doi.org/10.1016/0021-9991(69)90004-7.
Bryan, K., 1991: Michael Cox (1941–1989): His pioneering contributions to ocean circulation modeling. J. Phys. Oceanogr., 21, 1259–1270, https://doi.org/10.1175/1520-0485(1991)021<1259:MCHPCT>2.0.CO;2.
Bryan, K., and M. D. Cox, 1967: A numerical investigation of the oceanic general circulation. Tellus, 19, 54–80, https://doi.org/10.3402/tellusa.v19i1.9761.
Bryan, K., and L. Lewis, 1979: A water mass model of the world ocean. J. Geophys. Res., 84, 2503–2517, https://doi.org/10.1029/JC084iC05p02503.
Budyko, M. I., 1969: The effect of solar radiation variations on the climate of the earth. Tellus, 21, 611–619, https://doi.org/10.3402/tellusa.v21i5.10109.
Budyko, M. I., and L. Zubenok, 1961: The determination of evaporation from the land surface. Izv. Akad. Nauk SSSR Ser. Geogr., 6, 3–17.
Bunker, A. F., B. Haurwitz, J. S. Malkus, and H. M. Stommel, 1949: Vertical Distribution of Temperature and Humidity over the Caribbean Sea. Vol. 11, Papers in Physical Oceanography and Meteorology, Massachusetts Institute of Technology and Woods Hole Oceanographic Institution, 85 pp.
Burridge, D. M., and J. Haseler, 1977: A model for medium-range weather forecasts: Adiabatic formulation. ECMWF Tech. Rep. 4, 46 pp.
Bushby, F., and M. S. Timpson, 1967: A 10-level atmospheric model and frontal rain. Quart. J. Roy. Meteor. Soc., 93, 1–17, https://doi.org/10.1002/qj.49709339502.
Businger, J. A., J. C. Wyngaard, Y. Izumi, and E. F. Bradley, 1971: Flux-profile relationships in the atmospheric surface layer. J. Atmos. Sci., 28, 181–189, https://doi.org/10.1175/1520-0469(1971)028<0181:FPRITA>2.0.CO;2.
Cahalan, R. F., W. Ridgway, W. J. Wiscombe, T. L. Bell, and J. B. Snider, 1994: The albedo of fractal stratocumulus clouds. J. Atmos. Sci., 51, 2434–2455, https://doi.org/10.1175/1520-0469(1994)051<2434:TAOFSC>2.0.CO;2.
Cairns, B., A. A. Lacis, and B. E. Carlson, 2000: Absorption within inhomogeneous clouds and its parameterization in general circulation models. J. Atmos. Sci., 57, 700–714, https://doi.org/10.1175/1520-0469(2000)057<0700:AWICAI>2.0.CO;2.
Callendar, G. S., 1938: The artificial production of carbon dioxide and its influence on temperature. Quart. J. Roy. Meteor. Soc., 64, 223–240, https://doi.org/10.1002/qj.49706427503.
Campin, J.-M., C. Hill, H. Jones, and J. Marshall, 2011: Super-parameterization in ocean modeling: Application to deep convection. Ocean Modell., 36, 90–101, https://doi.org/10.1016/j.ocemod.2010.10.003.
Cess, R. D., and Coauthors, 1989: Interpretation of cloud-climate feedback as produced by 14 atmospheric general circulation models. Science, 245, 513–516, https://doi.org/10.1126/science.245.4917.513.
Charney, J. G., 1962: Integration of the primitive and balance equations. Proc. Int. Symp. on Numerical Weather Prediction in Tokyo. Tokyo, Japan, Meteorological Society of Japan, 131–152.
Charney, J. G., 1966: The feasibility of a global observation and analysis experiment. Bull. Amer. Meteor. Soc., 47, 200–230, https://doi.org/10.1175/1520-0477-47.3.200.
Charney, J. G., 1975: Dynamics of deserts and drought in the Sahel. Quart. J. Roy. Meteor. Soc., 101, 193–202, https://doi.org/10.1002/qj.49710142802.
Charney, J. G., and N. A. Phillips, 1953: Numerical integration of the quasi-geostrophic equations for barotropic and simple baroclinic flow. J. Meteor., 10, 71–99, https://doi.org/10.1175/1520-0469(1953)010<0071:NIOTQG>2.0.CO;2.
Charney, J. G., R. Fjörtoft, and J. V. Neumann, 1950: Numerical integration of the barotropic vorticity equation. Tellus, 2, 237–254, https://doi.org/10.3402/tellusa.v2i4.8607.
Charney, J. G., M. Halem, and R. Jastrow, 1969: Use of incomplete historical data to infer the present state of the atmosphere. J. Atmos. Sci., 26, 1160–1163, https://doi.org/10.1175/1520-0469(1969)026<1160:UOIHDT>2.0.CO;2.
Charney, J. G., and Coauthors, 1979: Carbon dioxide and climate: A scientific assessment. National Academies Press, 34 pp., https://doi.org/10.17226/12181.
Chen, J.-M., 1991: Turbulence-scale condensation parameterization. J. Atmos. Sci., 48, 1510–1512, https://doi.org/10.1175/1520-0469(1991)048<1510:TSCP>2.0.CO;2.
Cheng, M.-D., and A. Arakawa, 1997: Inclusion of rainwater budget and convective downdrafts in the Arakawa–Schubert cumulus parameterization. J. Atmos. Sci., 54, 1359–1378, https://doi.org/10.1175/1520-0469(1997)054<1359:IORBAC>2.0.CO;2.
Chevallier, F., F. Chéruy, N. Scott, and A. Chédin, 1998: A neural network approach for a fast and accurate computation of a longwave radiative budget. J. Appl. Meteor., 37, 1385–1397, https://doi.org/10.1175/1520-0450(1998)037<1385:ANNAFA>2.0.CO;2.
Clement, V., S. Ferrachat, O. Fuhrer, X. Lapillonne, C. E. Osuna, R. Pincus, J. Rood, and W. Sawyer, 2018: The CLAW DSL: Abstractions for performance portable weather and climate models. Proc. Platform for Advanced Scientific Computing Conf., New York, NY, ACM, 2:1–2:10, https://doi.org/10.1145/3218176.3218226.
Clough, S. A., M. J. Iacono, and J.-L. Moncet, 1992: Line-by-line calculations of atmospheric fluxes and cooling rates: Application to water vapor. J. Geophys. Res., 97, 15 761–15 785, https://doi.org/10.1029/92JD01419.
Cohard, J.-M., and J.-P. Pinty, 2000: A comprehensive two-moment warm microphysical bulk scheme. I: Description and tests. Quart. J. Roy. Meteor. Soc., 126, 1815–1842, https://doi.org/10.1256/smsqj.56613.
Collatz, G. J., J. T. Ball, C. Grivet, and J. A. Berry, 1991: Physiological and environmental regulation of stomatal conductance, photosynthesis and transpiration: a model that includes a laminar boundary layer. Agric. For. Meteor., 54, 107–136, https://doi.org/10.1016/0168-1923(91)90002-8.
Collins, W. D., 2001: Parameterization of generalized cloud overlap for radiative calculations in general circulation models. J. Atmos. Sci., 58, 3224–3242, https://doi.org/10.1175/1520-0469(2001)058<3224:POGCOF>2.0.CO;2.
Collins, W. D., and Coauthors, 2006a: The Community Climate System Model Version 3 (CCSM3). J. Climate, 19, 2122–2143, https://doi.org/10.1175/JCLI3761.1.
Collins, W. D., and Coauthors, 2006b: Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). J. Geophys. Res., 111, D14317, https://doi.org/10.1029/2005JD006713.
Coon, M. D., G. A. Maykut, R. S. Pritchard, D. A. Rothrock, and A. S. Thorndike, 1974: Modeling the pack ice as an elastic-plastic material. AIDJEX Bull., 24, 1–105.
Corby, G., A. Gilchrist, and P. Rowntree, 1977: United Kingdom Meteorological Office five-level general circulation model. Methods in Computational Physics: Advances in Research and Applications, Vol. 17, Elsevier, 67–110.
Côté, J., and A. Staniforth, 1988: A two-time-level semi-Lagrangian semi-implicit scheme for spectral models. Mon. Wea. Rev., 116, 2003–2012, https://doi.org/10.1175/1520-0493(1988)116<2003:ATTLSL>2.0.CO;2.
Courant, R., K. Friedrichs, and H. Lewy, 1928: Über die partiellen Differenzengleichungen der mathematischen Physik. Math. Ann., 100, 32–74, https://doi.org/10.1007/BF01448839.
Courant, R., K. Friedrichs, and H. Lewy, 1967: On the partial difference equations of mathematical physics. IBM J. Res. Dev., 11, 215–234, https://doi.org/10.1147/rd.112.0215.
Courtier, P., and M. Naughton, 1994: A pole problem in the reduced Gaussian grid. Quart. J. Roy. Meteor. Soc., 120, 1389–1407, https://doi.org/10.1002/qj.49712051913.
Covey, C., K. M. AchutaRao, U. Cubasch, P. Jones, S. J. Lambert, M. E. Mann, T. J. Phillips, and K. E. Taylor, 2003: An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 37, 103–133, https://doi.org/10.1016/S0921-8181(02)00193-5.
Cox, M. D., 1984: A primitive equation, 3-dimensional model of the ocean. GFDL Ocean Group Tech. Rep. 1, GFDL, Princeton University, 163 pp.
Cox, M. D., 1987: Isopycnal diffusion in a z-coordinate ocean model. Ocean Modell., 74, 1–5.
Cox, P. M., 2001: Description of the TRIFFID dynamic global vegetation model. Hadley Centre Tech. Note 24, 16 pp.
Cox, P. M., C. Huntingford, and R. J. Harding, 1998: A canopy conductance and photosynthesis model for use in a GCM land surface scheme. J. Hydrol., 212, 79–94, https://doi.org/10.1016/S0022-1694(98)00203-0.
Cox, P. M., R. A. Betts, C. D. Jones, S. A. Spall, and I. J. Totterdell, 2000: Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature, 408, 184, https://doi.org/10.1038/35041539.
Cox, P. M., R. Betts, M. Collins, P. P. Harris, C. Huntingford, and C. Jones, 2004: Amazonian forest dieback under climate-carbon cycle projections for the 21st century. Theor. Appl. Climatol., 78, 137–156, https://doi.org/10.1007/s00704-004-0049-4.
Cox, P. M., D. Pearson, B. B. Booth, P. Friedlingstein, C. Huntingford, C. D. Jones, and C. M. Luke, 2013: Sensitivity of tropical carbon to climate change constrained by carbon dioxide variability. Nature, 494, 341, https://doi.org/10.1038/nature11882.
Cox, S. K., D. S. McDougal, D. A. Randall, and R. A. Schiffer, 1987: FIRE—The first ISCCP regional experiment. Bull. Amer. Meteor. Soc., 68, 114–118, https://doi.org/10.1175/1520-0477(1987)068<0114:FFIRE>2.0.CO;2.
Craig, A. P., M. Vertenstein, and R. Jacob, 2012: A new flexible coupler for earth system modeling developed for CCSM4 and CESM1. Int. J. High Perform. Comput. Appl., 26, 31–42, https://doi.org/10.1177/1094342011428141.
Cramer, W., and Coauthors, 2001: Global response of terrestrial ecosystem structure and function to CO2 and climate change: Results from six dynamic global vegetation models. Global Change Biol., 7, 357–373, https://doi.org/10.1046/j.1365-2486.2001.00383.x.
Cubasch, U., K. Hasselmann, H. Höck, E. Maier-Reimer, U. Mikolajewicz, B. D. Santer, and R. Sausen, 1992: Time-dependent greenhouse warming computations with a coupled ocean–atmosphere model. Climate Dyn., 8, 55–69, https://doi.org/10.1007/BF00209163.
Cullen, M., 1993: The unified forecast/climate model. Meteor. Mag., 122, 81–94.
Cullen, M., T. Davies, M. Mawson, J. James, S. Coulter, and A. Malcolm, 1997: An overview of numerical methods for the next generation U.K. NWP and climate model. Atmos.–Ocean, 35 (Suppl. 1), 425–444, https://doi.org/10.1080/07055900.1997.9687359.
Curtis, A. R., 1956: The computation of radiative heating rates in the atmosphere. Proc. Roy. Soc. London, 236A, 156–159.
Curtis, A. R., and R. M. Goody, 1954: Spectral line shape and its effect on atmospheric transmissions. Quart. J. Roy. Meteor. Soc., 80, 58–67, https://doi.org/10.1002/qj.49708034307.
Dai, Y., R. E. Dickinson, and Y.-P. Wang, 2004: A two-big-leaf model for canopy temperature, photosynthesis, and stomatal conductance. J. Climate, 17, 2281–2299, https://doi.org/10.1175/1520-0442(2004)017<2281:ATMFCT>2.0.CO;2.
Danabasoglu, G., J. C. McWilliams, and P. R. Gent, 1994: The role of mesoscale tracer transports in the global ocean circulation. Science, 264, 1123–1126, https://doi.org/10.1126/science.264.5162.1123.
Danabasoglu, G., W. G. Large, J. J. Tribbia, P. R. Gent, B. P. Briegleb, and J. C. McWilliams, 2006: Diurnal coupling in the tropical oceans of CCSM3. J. Climate, 19, 2347–2365, https://doi.org/10.1175/JCLI3739.1.
Danabasoglu, G., S. C. Bates, B. P. Briegleb, S. R. Jayne, M. Jochum, W. G. Large, S. Peacock, and S. G. Yeager, 2012: The CCSM4 ocean component. J. Climate, 25, 1361–1389, https://doi.org/10.1175/JCLI-D-11-00091.1.
Danilov, S., 2013: Ocean modeling on unstructured meshes. Ocean Modell., 69, 195–210, https://doi.org/10.1016/j.ocemod.2013.05.005.
Davies, T., M. Cullen, M. Mawson, and A. Malcolm, 1998: A new dynamical formulation for the UK Meteorological Office Unified Model. Proc. Seminar on Recent Developments in Numerical Methods for Atmospheric Modelling, Shinfield Park, Reading, ECMWF, 7–11.
Deardorff, J. W., 1964: A numerical study of two-dimensional parallel-plate convection. J. Atmos. Sci., 21, 419–438, https://doi.org/10.1175/1520-0469(1964)021<0419:ANSOTP>2.0.CO;2.
Deardorff, J. W., 1972a: Parameterization of the planetary boundary layer for use in general circulation models. Mon. Wea. Rev., 100, 93–106, https://doi.org/10.1175/1520-0493(1972)100<0093:POTPBL>2.3.CO;2.
Deardorff, J. W., 1972b: Numerical investigation of neutral and unstable planetary boundary layers. J. Atmos. Sci., 29, 91–115, https://doi.org/10.1175/1520-0469(1972)029<0091:NIONAU>2.0.CO;2.
Deardorff, J. W., 1974: Three-dimensional numerical study of the height and mean structure of a heated planetary boundary layer. Bound.-Layer Meteor., 7, 81–106, https://doi.org/10.1007/BF00224974.
Deardorff, J. W., 1978: Efficient prediction of ground surface temperature and moisture, with inclusion of a layer of vegetation. J. Geophys. Res., 83, 1889–1903, https://doi.org/10.1029/JC083iC04p01889.
Deardorff, J. W., 1980: Stratocumulus-capped mixed layers derived from a three-dimensional model. Bound.-Layer Meteor., 18, 495–527, https://doi.org/10.1007/BF00119502.
Dee, D. P., and Coauthors, 2011: The ERA-Interim Reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828.
DeMott, P. J., and Coauthors, 2010: Predicting global atmospheric ice nuclei distributions and their impacts on climate. Proc. Natl. Acad. Sci., 107, 11 217–11 222, https://doi.org/10.1073/pnas.0910818107.
Dennis, J. M., and Coauthors, 2012: CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model. Int. J. High Perform. Comput. Appl., 26, 74–89, https://doi.org/10.1177/1094342011428142.
Dickinson, R. E., and A. Henderson-Sellers, 1988: Modelling tropical deforestation: A study of GCM land-surface parametrizations. Quart. J. Roy. Meteor. Soc., 114, 439–462, https://doi.org/10.1002/qj.49711448009.
Dickinson, R. E., A. Henderson-Sellers, P. Kennedy, and M. Wilson, 1986: Biosphere–Atmosphere Transfer Scheme (BATS) for the Community Climate Model. NCAR Tech. Note NCAR/TN-275+STR, 72 pp., https://doi.org/10.5065/D6668B58.
Dickinson, R. E., Y. Tian, Q. Liu, and L. Zhou, 2008: Dynamics of leaf area for climate and weather models. J. Geophys. Res., 113, D16115, https://doi.org/10.1029/2007JD008934.
Diedhiou, A., and J.-F. Mahfouf, 1996: Comparative influence of land and sea surfaces on the Sahelian drought: A numerical study. Ann. Geophys., 14, 115–130, https://doi.org/10.1007/s00585-996-0115-6.
Dietachmayer, G. S., and K. K. Droegemeier, 1992: Application of continuous dynamic grid adaption techniques to meteorological modeling. Part I: Basic formulation and accuracy. Mon. Wea. Rev., 120, 1675–1706, https://doi.org/10.1175/1520-0493(1992)120<1675:AOCDGA>2.0.CO;2.
Dirmeyer, P. A., 1994: Vegetation stress as a feedback mechanism in midlatitude drought. J. Climate, 7, 1463–1483, https://doi.org/10.1175/1520-0442(1994)007<1463:VSAAFM>2.0.CO;2.
Donaldson, C. P., 1973: Construction of a dynamic model of the production of atmospheric turbulence and the dispersal of atmospheric pollutants. Workshop on Micrometeorology, Boston, MA, Amer. Meteor. Soc., 313–392.
Donaldson, C. P., and H. Rosenbaum, 1969: Calculation of turbulent shear flows through closure of the Reynolds equations by invariant modeling. NASA Spec. Publ., 216, 231.
Donea, J., A. Huerta, J.-P. Ponthot, and A. Rodriguez-Ferran, 2004: Arbitrary Lagrangian–Eulerian methods. Fundamentals, Vol. 1, Encyclopedia of Computational Mechanics, E. Stein, R. Borst, and T. J. Hughes, Eds., Wiley & Sons, https://doi.org/10.1002/0470091355.ecm009.
Donner, L., W. Schubert, and R. Somerville, 2011: The Development of Atmospheric General Circulation Models: Complexity, Synthesis and Computation. Cambridge University Press, 272 pp.
Douville, H., F. Chauvin, and H. Broqua, 2001: Influence of soil moisture on the Asian and African monsoons. Part I: Mean monsoon and daily precipitation. J. Climate, 14, 2381–2403, https://doi.org/10.1175/1520-0442(2001)014<2381:IOSMOT>2.0.CO;2.
Dubos, T., and N. Kevlahan, 2013: A conservative adaptive wavelet method for the shallow-water equations on staggered grids. Quart. J. Roy. Meteor. Soc., 139, 1997–2020, https://doi.org/10.1002/qj.2097.
Dufresne, J.-L., L. Fairhead, H. Le Treut, M. Berthelot, L. Bopp, P. Ciais, P. Friedlingstein, and P. Monfray, 2002: On the magnitude of positive feedback between future climate change and the carbon cycle. Geophys. Res. Lett., 29, 1405, https://doi.org/10.1029/2001GL013777.
Dukowicz, J. K., and R. D. Smith, 1994: Implicit free-surface method for the Bryan–Cox–Semtner ocean model. J. Geophys. Res., 99, 7991–8014, https://doi.org/10.1029/93JC03455.
Dunne, J. P., and Coauthors, 2012: GFDL’s ESM2 global coupled climate–carbon Earth system models. Part I: Physical formulation and baseline simulation characteristics. J. Climate, 25, 6646–6665, https://doi.org/10.1175/JCLI-D-11-00560.1.
Eddington, A. S., 1916: On the radiative equilibrium of the stars. Mon. Not. Roy. Astron. Soc., 77, 16–35, https://doi.org/10.1093/mnras/77.1.16.
Edwards, J. M., and A. Slingo, 1996: Studies with a flexible new radiation code. I: Choosing a configuration for a large-scale model. Quart. J. Roy. Meteor. Soc., 122, 689–719, https://doi.org/10.1002/qj.49712253107.
Edwards, P. N., 2010: A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. MIT Press, 528 pp.
Edwards, P. N., 2011: History of climate modeling. Wiley Interdiscip. Rev.: Climate Change, 2, 128–139, https://doi.org/10.1002/wcc.95.
Eliassen, A., 1960: On the transfer of energy in stationary mountain waves. Geophys. Publ., 22, 1–23.
Eliassen, A., and E. Raustein, 1968: A numerical integration experiment with a model atmosphere based on isentropic surfaces. Meteor. Ann., 5, 45–63.
Eliassen, E., B. Machenhauer, and E. Rasmussen, 1970: On a numerical method for integration of the hydrodynamical equations with a spectral representation of the horizontal fields. Institute for Theoretical Meteorology, University of Copenhagen, 74 pp.
Ellingson, R. G., and Y. Fouquart, 1991: The intercomparison of radiation codes in climate models: An overview. J. Geophys. Res., 96, 8925–8927, https://doi.org/10.1029/90JD01618.
Ellingson, R. G., J. Ellis, and S. Fels, 1991: The intercomparison of radiation codes used in climate models: Long wave results. J. Geophys. Res., 96, 8929–8953, https://doi.org/10.1029/90JD01450.
Elsaesser, G. S., A. D. Del Genio, J. H. Jiang, and M. van Lier-Walqui, 2017: An improved convective ice parameterization for the NASA GISS global climate model and impacts on cloud ice simulation. J. Climate, 30, 317–336, https://doi.org/10.1175/JCLI-D-16-0346.1.
Elsasser, W. M., and M. F. Culbertson, 1960: Atmospheric Radiation Tables. Meteor. Monogr., No. 23, Amer. Meteor. Soc., 43 pp.
Eltahir, E. A., 1998: A soil moisture–rainfall feedback mechanism: 1. Theory and observations. Water Resour. Res., 34, 765–776, https://doi.org/10.1029/97WR03499.
Emanuel, K. A., 1981: A similarity theory for unsaturated downdrafts within clouds. J. Atmos. Sci., 38, 1541–1557, https://doi.org/10.1175/1520-0469(1981)038<1541:ASTFUD>2.0.CO;2.
Emanuel, K. A., 1991: A scheme for representing cumulus convection in large-scale models. J. Atmos. Sci., 48, 2313–2329, https://doi.org/10.1175/1520-0469(1991)048<2313:ASFRCC>2.0.CO;2.
Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016.
Farquhar, G. D., S. von Caemmerer, and J. A. Berry, 1980: A biochemical model of photosynthetic CO2 assimilation in leaves of C3 species. Planta, 149, 78–90, https://doi.org/10.1007/BF00386231.
Feingold, G., S. M. Kreidenweis, B. Stevens, and W. Cotton, 1996: Numerical simulations of stratocumulus processing of cloud condensation nuclei through collision–coalescence. J. Geophys. Res., 101, 21 391–21 402, https://doi.org/10.1029/96JD01552.
Fels, S. B., and M. D. Schwarzkopf, 1975: The Simplified Exchange Approximation: A new method for radiative transfer calculations. J. Atmos. Sci., 32, 1475–1488, https://doi.org/10.1175/1520-0469(1975)032<1475:TSEAAN>2.0.CO;2.
Fennessy, M. J., and J. Shukla, 1999: Impact of initial soil wetness on seasonal atmospheric prediction. J. Climate, 12, 3167–3180, https://doi.org/10.1175/1520-0442(1999)012<3167:IOISWO>2.0.CO;2.
Fichefet, T., and M. A. Morales-Maqueda, 1997: Sensitivity of a global sea ice model to the treatment of ice thermodynamics and dynamics. J. Geophys. Res., 102, 12 609–12 646, https://doi.org/10.1029/97JC00480.
Fiedler, F., and H. A. Panofsky, 1970: Atmospheric scales and spectral gaps. Bull. Amer. Meteor. Soc., 51, 1114–1120, https://doi.org/10.1175/1520-0477(1970)051<1114:ASASG>2.0.CO;2.
Flato, G. M., and W. D. Hibler III, 1992: Modeling pack ice as a cavitating fluid. J. Phys. Oceanogr., 22, 626–651, https://doi.org/10.1175/1520-0485(1992)022<0626:MPIAAC>2.0.CO;2.
Flato, G. M., G. Boer, W. Lee, N. McFarlane, D. Ramsden, M. Reader, and A. Weaver, 2000: The Canadian Centre for Climate Modelling and Analysis global coupled model and its climate. Climate Dyn., 16, 451–467, https://doi.org/10.1007/s003820050339.
Fleming, J. R., 2016: Inventing Atmospheric Science: Bjerknes, Rossby, Wexler, and the Foundations of Modern Meteorology. MIT Press, 306 pp.
Flocco, D., and D. L. Feltham, 2007: A continuum model of melt pond evolution on Arctic sea ice. J. Geophys. Res., 112, C08016, https://doi.org/10.1029/2006JC003836.
Foken, T., 2006: 50 years of the Monin–Obukhov similarity theory. Bound.-Layer Meteor., 119, 431–447, https://doi.org/10.1007/s10546-006-9048-6.
Foley, J. A., I. C. Prentice, N. Ramankutty, S. Levis, D. Pollard, S. Sitch, and A. Haxeltine, 1996: An integrated biosphere model of land surface processes, terrestrial carbon balance, and vegetation dynamics. Global Biogeochem. Cycles, 10, 603–628, https://doi.org/10.1029/96GB02692.
Folland, C. K., D. J. Griggs, and J. T. Houghton, 2004: History of the Hadley Centre for Climate Prediction and Research. Weather, 59, 317–323, https://doi.org/10.1256/wea.121.04.
Fouquart, Y., B. Bonnel, and V. Ramaswamy, 1991: Intercomparing shortwave radiation codes for climate studies. J. Geophys. Res., 96, 8955–8968, https://doi.org/10.1029/90JD00290.
Fovell, R., D. Durran, and J. Holton, 1992: Numerical simulations of convectively generated stratospheric gravity waves. J. Atmos. Sci., 49, 1427–1442, https://doi.org/10.1175/1520-0469(1992)049<1427:NSOCGS>2.0.CO;2.
Fowler, L. D., D. A. Randall, and S. A. Rutledge, 1996: Liquid and ice cloud microphysics in the CSU general circulation model. Part 1: Model description and simulated microphysical processes. J. Climate, 9, 489–529, https://doi.org/10.1175/1520-0442(1996)009<0489:LAICMI>2.0.CO;2.
Fox-Kemper, B., R. Ferrari, and R. Hallberg, 2008: Parameterization of mixed layer eddies. Part I: Theory and diagnosis. J. Phys. Oceanogr., 38, 1145–1165, https://doi.org/10.1175/2007JPO3792.1.
Fridlind, A. M., and Coauthors, 2004: Evidence for the predominance of mid-tropospheric aerosols as subtropical anvil cloud nuclei. Science, 304, 718–722, https://doi.org/10.1126/science.1094947.
Friedlingstein, P., J.-L. Dufresne, P. Cox, and P. Rayner, 2003: How positive is the feedback between climate change and the carbon cycle? Tellus, 55B, 692–700, https://doi.org/10.1034/j.1600-0889.2003.01461.x.
Friedlingstein, P., and Coauthors, 2006: Climate–carbon cycle feedback analysis: results from the C4MIP model intercomparison. J. Climate, 19, 3337–3353, https://doi.org/10.1175/JCLI3800.1.
Friend, A. D., and Coauthors, 2007: FLUXNET and modelling the global carbon cycle. Global Change Biol., 13, 610–633, https://doi.org/10.1111/j.1365-2486.2006.01223.x.
Fu, Q., and K. N. Liou, 1992: On the correlated k-distribution method for radiative transfer in nonhomogeneous atmospheres. J. Atmos. Sci., 49, 2139–2156, https://doi.org/10.1175/1520-0469(1992)049<2139:OTCDMF>2.0.CO;2.
Fultz, D., R. R. Long, G. V. Owens, W. Bohan, R. Kaylor, and J. Weil, 1959: Studies of thermal convection in a rotating cylinder with some implications for large-scale atmospheric motions. Meteor. Monogr., No. 21, Amer. Meteor. Soc., 104 pp.
Fung, I., C. Tucker, and K. Prentice, 1987: Application of Advanced Very High Resolution Radiometer vegetation index to study atmosphere-biosphere exchange of CO2. J. Geophys. Res., 92, 2999–3015, https://doi.org/10.1029/JD092iD03p02999.
Gaspar, P., Y. Grégoris, and J.-M. Lefevre, 1990: A simple eddy kinetic energy model for simulations of the oceanic vertical mixing: Tests at station Papa and long-term upper ocean study site. J. Geophys. Res., 95, 16 179–16 193, https://doi.org/10.1029/JC095iC09p16179.
Gates, W. L., 1992: AMIP: The Atmospheric Model Intercomparison Project. Bull. Amer. Meteor. Soc., 73, 1962–1970, https://doi.org/10.1175/1520-0477(1992)073<1962:ATAMIP>2.0.CO;2.
Geleyn, J. F., and A. Hollingsworth, 1979: An economical analytical method for the computation of the interaction between scattering and line absorption of radiation. Beitr. Phys. Atmos., 52, 1–16.
Gent, P. R., 2013: Coupled models and climate projections. International Geophysics, Vol. 103, Elsevier, 609–623, https://doi.org/10.1016/B978-0-12-391851-2.00023-4.
Gent, P. R., and J. C. McWilliams, 1990: Isopycnal mixing in ocean circulation models. J. Phys. Oceanogr., 20, 150–155, https://doi.org/10.1175/1520-0485(1990)020<0150:IMIOCM>2.0.CO;2.
Gent, P. R., J. Willebrand, T. J. McDougall, and J. C. McWilliams, 1995: Parameterizing eddy-induced tracer transports in ocean circulation models. J. Phys. Oceanogr., 25, 463–474, https://doi.org/10.1175/1520-0485(1995)025<0463:PEITTI>2.0.CO;2.
Gent, P. R., F. O. Bryan, G. Danabasoglu, S. C. Doney, W. R. Holland, W. G. Large, and J. C. McWilliams, 1998: The NCAR Climate System Model global ocean component. J. Climate, 11, 1287–1306, https://doi.org/10.1175/1520-0442(1998)011<1287:TNCSMG>2.0.CO;2.
Gentine, P., M. Pritchard, S. Rasp, G. Reinaudi, and G. Yacalis, 2018: Could machine learning break the convection parameterization deadlock? Geophys. Res. Lett., 45, 5742–5751, https://doi.org/10.1029/2018GL078202.
Gerten, D., S. Schaphoff, U. Haberlandt, W. Lucht, and S. Sitch, 2004: Terrestrial vegetation and water balance—Hydrological evaluation of a dynamic global vegetation model. J. Hydrol., 286, 249–270, https://doi.org/10.1016/j.jhydrol.2003.09.029.
Gettelman, A., and H. Morrison, 2015: Advanced two-moment bulk microphysics for global models. Part I: Off-line tests and comparison with other schemes. J. Climate, 28, 1268–1287, https://doi.org/10.1175/JCLI-D-14-00102.1.
GEWEX Cloud System Science Team, 1993: The GEWEX Cloud System Study (GCSS). Bull. Amer. Meteor. Soc., 74, 387–400, https://doi.org/10.1175/1520-0477(1993)074<0387:TGCSS>2.0.CO;2.
Ghan, S. J., and R. C. Easter, 1992: Computationally efficient approximations to stratiform cloud microphysics parameterization. Mon. Wea. Rev., 120, 1572–1582, https://doi.org/10.1175/1520-0493(1992)120<1572:CEATSC>2.0.CO;2.
Ghan, S. J., L. R. Leung, R. C. Easter, and H. Abdul-Razzak, 1997: Prediction of cloud droplet number in a general circulation model. J. Geophys. Res., 102, 21 777–21 794, https://doi.org/10.1029/97JD01810.
Ghan, S. J., and Coauthors, 2001: A physically based estimate of radiative forcing by anthropogenic sulfate aerosol. J. Geophys. Res., 106, 5279–5293, https://doi.org/10.1029/2000JD900503.
Gibelin, A.-L., J.-C. Calvet, J.-L. Roujean, L. Jarlan, and S. O. Los, 2006: Ability of the land surface model ISBA-A-gs to simulate leaf area index at the global scale: Comparison with satellites products. J. Geophys. Res., 111, D18102, https://doi.org/10.1029/2005JD006691.
Gibson, J., P. Kallberg, S. Uppala, A. Hernandez, A. Nomura, and E. Serrano, 1997: ERA description. ECMWF Re-Analysis Project Rep. Series 1, 72 pp.
Gienapp, P., L. Hemerik, and M. E. Visser, 2005: A new statistical tool to predict phenology under climate change scenarios. Global Change Biol., 11, 600–606, https://doi.org/10.1111/j.1365-2486.2005.00925.x.
Gilchrist, A., G. A. Corby, and R. L. Newson, 1973: A numerical experiment using a general circulation model of the atmosphere. Quart. J. Roy. Meteor. Soc., 99, 2–34, https://doi.org/10.1002/qj.49709941903.
Giorgetta, M. A., and Coauthors, 2018: ICON-A, the atmosphere component of the ICON earth system model. Part I: Model description. J. Adv. Model. Earth Syst., 10, 1613–1637, https://doi.org/10.1029/2017MS001242.
Girard, C., and Coauthors, 2014: Staggered vertical discretization of the Canadian Environmental Multiscale (GEM) model using a coordinate of the log-hydrostatic-pressure type. Mon. Wea. Rev., 142, 1183–1196, https://doi.org/10.1175/MWR-D-13-00255.1.
Glushko, G., 1966: Turbulent boundary layer on a flat plate in an incompressible fluid. NASA Tech. Doc. 19660015623, 24 pp.
Gnanadesikan, A., and Coauthors, 2006: GFDL’S CM2 global coupled climate models. Part II: The baseline ocean simulation. J. Climate, 19, 675–697, https://doi.org/10.1175/JCLI3630.1.
Golaz, J.-C., V. E. Larson, and W. R. Cotton, 2002: A PDF-based model for boundary layer clouds. Part I: Method and model description. J. Atmos. Sci., 59, 3540–3551, https://doi.org/10.1175/1520-0469(2002)059<3540:APBMFB>2.0.CO;2.
Golaz, J.-C., M. Salzmann, L. J. Donner, L. W. Horowitz, Y. Ming, and M. Zhao, 2011: Sensitivity of the aerosol indirect effect to subgrid variability in the cloud parameterization of the GFDL atmosphere general circulation model AM3. J. Climate, 24, 3145–3160, https://doi.org/10.1175/2010JCLI3945.1.
Goldberg, D., C. Little, O. Sergienko, A. Gnanadesikan, R. Hallberg, and M. Oppenheimer, 2012: Investigation of land ice-ocean interaction with a fully coupled ice-ocean model: 2. Sensitivity to external forcings. J. Geophys. Res., 117, F02038, https://doi.org/10.1029/2011JF002247.
Goody, R., R. West, L. Chen, and D. Crisp, 1989: The correlated-k method for radiation calculations in nonhomogeneous atmospheres. J. Quant. Spectrosc. Radiat. Transfer, 42, 539–550, https://doi.org/10.1016/0022-4073(89)90044-7.
Gordon, C., C. Cooper, C. A. Senior, H. Banks, J. M. Gregory, T. C. Johns, J. F. Mitchell, and R. A. Wood, 2000: The simulation of SST, sea ice extents and ocean heat transports in a version of the Hadley Centre coupled model without flux adjustments. Climate Dyn., 16, 147–168, https://doi.org/10.1007/s003820050010.
Grabowski, W. W., 2001: Coupling cloud processes with the large-scale dynamics using the Cloud-Resolving Convection Parameterization (CRCP). J. Atmos. Sci., 58, 978–997, https://doi.org/10.1175/1520-0469(2001)058<0978:CCPWTL>2.0.CO;2.
Grabowski, W. W., 2004: An improved framework for superparameterization. J. Atmos. Sci., 61, 1940–1952, https://doi.org/10.1175/1520-0469(2004)061<1940:AIFFS>2.0.CO;2.
Grabowski, W. W., and P. K. Smolarkiewicz, 1999: CRCP: A cloud resolving convection parameterization for modeling the tropical convecting atmosphere. Physica D, 133, 171–178, https://doi.org/10.1016/S0167-2789(99)00104-9.
Greatbatch, R. J., and K. G. Lamb, 1990: On parameterizing vertical mixing of momentum in non-eddy resolving ocean models. J. Phys. Oceanogr., 20, 1634–1637, https://doi.org/10.1175/1520-0485(1990)020<1634:OPVMOM>2.0.CO;2.
Green, J. S. A., 1967: Division of radiative streams into internal transfer and cooling to space. Quart. J. Roy. Meteor. Soc., 93, 371–372, https://doi.org/10.1002/qj.49709339710.
Gregory, J. M., C. Jones, P. Cadule, and P. Friedlingstein, 2009: Quantifying carbon cycle feedbacks. J. Climate, 22, 5232–5250, https://doi.org/10.1175/2009JCLI2949.1.
Griffies, S. M., 1998: The Gent–McWilliams skew flux. J. Phys. Oceanogr., 28, 831–841, https://doi.org/10.1175/1520-0485(1998)028<0831:TGMSF>2.0.CO;2.
Griffies, S. M., 2004: Fundamentals of Ocean Climate Models. Princeton University Press, 528 pp.
Griffies, S. M., and R. J. Greatbatch, 2012: Physical processes that impact the evolution of global mean sea level in ocean climate models. Ocean Modell., 51, 37–72, https://doi.org/10.1016/j.ocemod.2012.04.003.
Griffies, S. M., A. Gnanadesikan, R. C. Pacanowski, V. D. Larichev, J. K. Dukowicz, and R. D. Smith, 1998: Isoneutral diffusion in a z-coordinate ocean model. J. Phys. Oceanogr., 28, 805–830, https://doi.org/10.1175/1520-0485(1998)028<0805:IDIAZC>2.0.CO;2.
Griffies, S. M., and Coauthors, 2000: Developments in ocean climate modelling. Ocean Modell., 2 (3–4), 123–192, https://doi.org/10.1016/S1463-5003(00)00014-7.
Griffies, S. M., and Coauthors, 2005: Formulation of an ocean model for global climate simulations. Ocean Sci., 1, 45–79, https://doi.org/10.5194/os-1-45-2005.
Griffies, S. M., and Coauthors, 2015: Impacts on ocean heat from transient mesoscale eddies in a hierarchy of climate models. J. Climate, 28, 952–977, https://doi.org/10.1175/JCLI-D-14-00353.1.
Griffies, S. M., and Coauthors, 2016: OMIP contribution to CMIP6: Experimental and diagnostic protocol for the physical component of the Ocean Model Intercomparison Project. Geosci. Model Dev., 9, 3231, https://doi.org/10.5194/gmd-9-3231-2016.
Grobecker, A., S. C. Coroniti, and R. Cannon Jr., 1974: Report of findings: The effects of stratospheric pollution by aircraft. Department of Transportation Systems Development and Technology Tech. Rep., 836 pp.
Grumbine, R. W., 2013: Keeping ice’s simplicity—A modeling start. NOAA/NWS/NCEP Tech. Rep., MMAB Contribution 314, 30 pp., https://polar.ncep.noaa.gov/mmab/papers/tn314/MMAB_314.pdf.
Habata, S., M. Yokokawa, and S. Kitawaki, 2003: The Earth Simulator system. NEC Res. Dev., 44, 21–26.
Hall, F., and P. Sellers, 1995: First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) in 1995. J. Geophys. Res., 100, 25 383–25 395, https://doi.org/10.1029/95JD03300.
Hallberg, R., 2003: The ability of large-scale ocean models to accept parameterizations of boundary mixing, and a description of a refined bulk mixed-layer model. Proc. ‘Aha Huliko‘a Hawaiian Winter Workshop, Honolulu, HI, University of Hawai‘i at Mānoa, 187–203.
Hansen, J. E., and L. D. Travis, 1974: Light scattering in planetary atmospheres. Space Sci. Rev., 16, 527–610, https://doi.org/10.1007/BF00168069.
Hansen, J. E., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957–966, https://doi.org/10.1126/science.213.4511.957.
Hansen, J., G. Russell, D. Rind, P. Stone, A. Lacis, S. Lebedeff, R. Ruedy, and L. Travis, 1983: Efficient three-dimensional global models for climate studies: Models I and II. Mon. Wea. Rev., 111, 609–662, https://doi.org/10.1175/1520-0493(1983)111<0609:ETDGMF>2.0.CO;2.
Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner, 1984: Climate sensitivity: Analysis of feedback mechanisms. Climate Processes and Climate Sensitivity, Geophys. Monogr., Vol. 29, Amer. Geophys. Union, 130–163.
Harper, A., I. T. Baker, A. S. Denning, D. A. Randall, D. Dazlich, and M. Branson, 2014: Impact of evapotranspiration on dry season climate in the Amazon forest. J. Climate, 27, 574–591, https://doi.org/10.1175/JCLI-D-13-00074.1.
Harper, K. C., 2012: Weather by the Numbers: The Genesis of Modern Meteorology. MIT Press, 320 pp.
Harrington, J. Y., K. Sulia, and H. Morrison, 2013: A method for adaptive habit prediction in bulk microphysical models. Part I: Theoretical development. J. Atmos. Sci., 70, 349–364, https://doi.org/10.1175/JAS-D-12-040.1.
Harris, L. M., P. H. Lauritzen, and R. Mittal, 2011: A flux-form version of the Conservative Semi-Lagrangian Multi-Tracer Transport Scheme (CSLAM) on the cubed sphere grid. J. Comput. Phys., 230, 1215–1237, https://doi.org/10.1016/j.jcp.2010.11.001.
Harshvardhan, R. Davies, D. A. Randall, and T. G. Corsetti, 1987: A fast radiation parameterization for atmospheric circulation models. J. Geophys. Res., 92, 1009–1016, https://doi.org/10.1029/JD092iD01p01009.
Harshvardhan, R. Davies, D. A. Randall, T. G. Corsetti, and D. A. Dazlich, 1989: Earth radiation budget and cloudiness simulations with a general circulation model. J. Atmos. Sci., 46, 1922–1942, https://doi.org/10.1175/1520-0469(1989)046<1922:ERBACS>2.0.CO;2.
Hashino, T., and G. Tripoli, 2007: The Spectral Ice Habit Prediction System (SHIPS). Part I: Model description and simulation of the vapor deposition process. J. Atmos. Sci., 64, 2210–2237, https://doi.org/10.1175/JAS3963.1.
Haugen, D., J. Kaimal, and E. Bradley, 1971: An experimental study of Reynolds stress and heat flux in the atmospheric surface layer. Quart. J. Roy. Meteor. Soc., 97, 168–180, https://doi.org/10.1002/qj.49709741204.
Haut, T., and B. Wingate, 2014: An asymptotic parallel-in-time method for highly oscillatory pdes. SIAM J. Sci. Comput., 36, A693–A713, https://doi.org/10.1137/130914577.
Haywood, J., D. Roberts, A. Slingo, J. Edwards, and K. Shine, 1997: General circulation model calculations of the direct radiative forcing by anthropogenic sulfate and fossil-fuel soot aerosol. J. Climate, 10, 1562–1577, https://doi.org/10.1175/1520-0442(1997)010<1562:GCMCOT>2.0.CO;2.
Hecht, M. W., M. R. Petersen, B. A. Wingate, E. Hunke, and M. Maltrud, 2008: Lateral mixing in the eddying regime and a new broad-ranging formulation. Ocean Modeling in an Eddying Regime, Geophys. Monogr., Vol. 177, Amer. Geophys. Union, 339–352.
Heikes, R., and D. A. Randall, 1995: Numerical integration of the shallow-water equations on a twisted icosahedral grid. Part I: Basic design and results of tests. Mon. Wea. Rev., 123, 1862–1880, https://doi.org/10.1175/1520-0493(1995)123<1862:NIOTSW>2.0.CO;2.
Heikes, R., D. A. Randall, and C. S. Konor, 2013: Optimized icosahedral grids: Performance of finite-difference operators and multigrid solver. Mon. Wea. Rev., 141, 4450–4469, https://doi.org/10.1175/MWR-D-12-00236.1.
Hibler, W. D., 1979: A dynamic thermodynamic sea ice model. J. Phys. Oceanogr., 9, 815–846, https://doi.org/10.1175/1520-0485(1979)009<0815:ADTSIM>2.0.CO;2.
Hibler, W. D., 1980: Modeling a variable thickness sea ice cover. Mon. Wea. Rev., 108, 1943–1973, https://doi.org/10.1175/1520-0493(1980)108<1943:MAVTSI>2.0.CO;2.
Hinkelmann, K., 1951: Der Mechanismus des meteorologischen Lärmes. Tellus, 3, 285–296, https://doi.org/10.3402/tellusa.v3i4.8655.
Hoffman, F. M., and Coauthors, 2014: Causes and implications of persistent atmospheric carbon dioxide biases in Earth system models. J. Geophys. Res. Biogeosci., 119, 141–162, https://doi.org/10.1002/2013JG002381.
Hoffman, F. M., and Coauthors, 2016: 2016 International Land Model Benchmarking (ILAMB) workshop report. U.S. DOE Office of Science Tech. Rep. DOE/SC-0186, 172 pp., https://doi.org/10.2172/1330803.
Hogan, R. J., S. A. Schäfer, C. Klinger, J. C. Chiu, and B. Mayer, 2016: Representing 3-D cloud radiation effects in two-stream schemes: 2. Matrix formulation and broadband evaluation. J. Geophys. Res. Atmos., 121, 8583–8599, https://doi.org/10.1002/2016JD024875.
Hogan, R. J., T. Quaife, and R. Braghiere, 2018: Fast matrix treatment of 3-D radiative transfer in vegetation canopies: SPARTACUS-vegetation 1.1. Geosci. Model Dev., 11, 339, https://doi.org/10.5194/gmd-11-339-2018.
Holland, M. M., C. M. Bitz, E. C. Hunke, W. H. Lipscomb, and J. L. Schramm, 2006: Influence of the sea ice thickness distribution on polar climate in CCSM3. J. Climate, 19, 2398–2414, https://doi.org/10.1175/JCLI3751.1.
Holland, M. M., D. A. Bailey, B. P. Briegleb, B. Light, and E. Hunke, 2012: Improved sea ice shortwave radiation physics in CCSM4: The impact of melt ponds and aerosols on Arctic sea ice. J. Climate, 25, 1413–1430, https://doi.org/10.1175/JCLI-D-11-00078.1.
Holland, W. R., 1978: The role of mesoscale eddies in the general circulation of the ocean—Numerical experiments using a wind-driven quasi-geostrophic model. J. Phys. Oceanogr., 8, 363–392, https://doi.org/10.1175/1520-0485(1978)008<0363:TROMEI>2.0.CO;2.
Holloway, J. L., Jr., and S. Manabe, 1971: Simulation of climate by a global general circulation model: I. Hydrologic cycle and heat balance. Mon. Wea. Rev., 99, 335–370, https://doi.org/10.1175/1520-0493(1971)099<0335:SOCBAG>2.3.CO;2.
Holtslag, A., and B. Boville, 1993: Local versus nonlocal boundary-layer diffusion in a global climate model. J. Climate, 6, 1825–1842, https://doi.org/10.1175/1520-0442(1993)006<1825:LVNBLD>2.0.CO;2.
Holtslag, A., E. De Bruijn, and H. Pan, 1990: A high resolution air mass transformation model for short-range weather forecasting. Mon. Wea. Rev., 118, 1561–1575, https://doi.org/10.1175/1520-0493(1990)118<1561:AHRAMT>2.0.CO;2.
Hoose, C., J. E. Kristjánsson, J.-P. Chen, and A. Hazra, 2010: A classical-theory-based parameterization of heterogeneous ice nucleation by mineral dust, soot, and biological particles in a global climate model. J. Atmos. Sci., 67, 2483–2503, https://doi.org/10.1175/2010JAS3425.1.
Hortal, M., and A. Simmons, 1991: Use of reduced Gaussian grids in spectral models. Mon. Wea. Rev., 119, 1057–1074, https://doi.org/10.1175/1520-0493(1991)119<1057:UORGGI>2.0.CO;2.
Horvat, C., and E. Tziperman, 2015: A prognostic model of the sea-ice floe size and thickness distribution. Cryosphere, 9, 2119–2134, https://doi.org/10.5194/tc-9-2119-2015.
Hoskins, B. J., and A. J. Simmons, 1975: A multi-layer spectral model and the semi-implicit method. Quart. J. Roy. Meteor. Soc., 101, 637–655, https://doi.org/10.1002/qj.49710142918.
Hoskins, B. J., M. E. McIntyre, and A. W. Robertson, 1985: On the use and significance of isentropic potential vorticity maps. Quart. J. Roy. Meteor. Soc., 111, 877–946, https://doi.org/10.1002/qj.49711147002.
Houze, R. A., Jr., 1977: Structure and dynamics of a tropical squall-line system. Mon. Wea. Rev., 105, 1540–1567, https://doi.org/10.1175/1520-0493(1977)105<1540:SADOAT>2.0.CO;2.
Hsie, E.-Y., R. A. Anthes, and D. Keyser, 1984: Numerical simulation of frontogenesis in a moist atmosphere. J. Atmos. Sci., 41, 2581–2594, https://doi.org/10.1175/1520-0469(1984)041<2581:NSOFIA>2.0.CO;2.
Hsu, Y.-J. G., and A. Arakawa, 1990: Numerical modeling of the atmosphere with an isentropic vertical coordinate. Mon. Wea. Rev., 118, 1933–1959, https://doi.org/10.1175/1520-0493(1990)118<1933:NMOTAW>2.0.CO;2.
Hunke, E. C., and J. K. Dukowicz, 1997: An elastic-viscous-plastic model for sea ice dynamics. J. Phys. Oceanogr., 27, 1849–1867, https://doi.org/10.1175/1520-0485(1997)027<1849:AEVPMF>2.0.CO;2.
Hunke, E. C., W. H. Lipscomb, A. K. Turner, N. Jeffery, and S. Elliott, 2010: CICE: The Los Alamos Sea Ice Model Documentation and Software User’s Manual, version 4.1. LA-CC-06-012, 116 pp.
Huntzinger, D. N., and Coauthors, 2012: North American Carbon Program (NACP) regional interim synthesis: Terrestrial biospheric model intercomparison. Ecol. Modell., 232, 144–157, https://doi.org/10.1016/j.ecolmodel.2012.02.004.
Huntzinger, D. N., and Coauthors, 2017: Uncertainty in the response of terrestrial carbon sink to environmental drivers undermines carbon-climate feedback predictions. Sci. Rep., 7, 4765, https://doi.org/10.1038/s41598-017-03818-2.
Hurrell, J. W., and Coauthors, 2013: The Community Earth System Model: A framework for collaborative research. Bull. Amer. Meteor. Soc., 94, 1339–1360, https://doi.org/10.1175/BAMS-D-12-00121.1.
Hyman, J. M., and M. Shashkov, 1997: Natural discretizations for the divergence, gradient, and curl on logically rectangular grids. Comput. Math. Appl., 33, 81–104, https://doi.org/10.1016/S0898-1221(97)00009-6.
IPCC, 1990: Climate Change: The IPCC Scientific Assessment. Cambridge University Press, 365 pp.
Izumi, Y., and J. S. Caughey, 1976: Minnesota 1973 Atmospheric Boundary Layer Experiment data report. Air Force Cambridge Research Labs Hanscom AFB Tech. Rep. AFCRL-TR-76-0038, 29 pp.
Janjić, Z. I., 1994: The step-mountain eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, https://doi.org/10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.
Jarraud, M., and A. J. Simmons, 1983: The spectral technique. Seminar on Numerical Methods for Weather Prediction, Reading, United Kingdom, ECMWF, 15–19, https://www.ecmwf.int/sites/default/files/elibrary/1983/10253-spectral-technique.pdf.
Jarvis, P., 1976: The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field. Philos. Trans. Roy. Soc. London, 273B, 593–610, https://doi.org/10.1098/rstb.1976.0035.
Jarvis, P., and K. McNaughton, 1986: Stomatal control of transpiration: Scaling up from leaf to region. Advances in Ecological Research, Vol. 15, A. MacFadyen and E. D. Ford, Eds., Elsevier, 1–49, https://doi.org/10.1016/S0065-2504(08)60119-1.
Johnson, R. H., 1976: The role of convective-scale precipitation downdrafts in cumulus and synoptic-scale interactions. J. Atmos. Sci., 33, 1890–1910, https://doi.org/10.1175/1520-0469(1976)033<1890:TROCSP>2.0.CO;2.
Johnson, D. R., 1989: The forcing and maintenance of global monsoonal circulations: An isentropic analysis. Advances in Geophysics, Vol. 31, Elsevier, 43–316.
Johnson, D. R., 1997: “General coldness of climate models” and the second law: Implications for modeling the earth system. J. Climate, 10, 2826–2846, https://doi.org/10.1175/1520-0442(1997)010<2826:GCOCMA>2.0.CO;2.
Johnson, D. R., and L. W. Uccellini, 1983: A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model. Mon. Wea. Rev., 111, 870–886, https://doi.org/10.1175/1520-0493(1983)111<0870:ACOMFC>2.0.CO;2.
Johnston, H., 1971: Reduction of stratospheric ozone by nitrogen oxide catalysts from supersonic transport exhaust. Science, 173, 517–522, https://doi.org/10.1126/science.173.3996.517.
Jolly, W. M., R. Nemani, and S. W. Running, 2005: A generalized, bioclimatic index to predict foliar phenology in response to climate. Global Change Biol., 11, 619–632, https://doi.org/10.1111/j.1365-2486.2005.00930.x.
Joseph, J. H., W. J. Wiscombe, and J. A. Weinman, 1976: The delta-Eddington approximation for radiative flux transfer. J. Atmos. Sci., 33, 2452–2459, https://doi.org/10.1175/1520-0469(1976)033<2452:TDEAFR>2.0.CO;2.
Jung, J.-H., 2016: Simulation of orographic effects with a quasi-3-D multiscale modeling framework: Basic algorithm and preliminary results. J. Adv. Model. Earth Syst., 8, 1657–1673, https://doi.org/10.1002/2016MS000783.
Jung, J.-H., and A. Arakawa, 2014: Modeling the moist-convective atmosphere with a Quasi-3-D Multiscale Modeling Framework (Q3D MMF). J. Adv. Model. Earth Syst., 6, 185–205, https://doi.org/10.1002/2013MS000295.
Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteor. Soc., 77, 437–471, https://doi.org/10.1175/1520-0477(1996)077<0437:TNYRP>2.0.CO;2.
Kanamitsu, M., and Coauthors, 2002: NCEP dynamical seasonal forecast system 2000. Bull. Amer. Meteor. Soc., 83, 1019–1037, https://doi.org/10.1175/1520-0477(2002)083<1019:NDSFS>2.3.CO;2.
Kasahara, A., 1974: Various vertical coordinate systems used for numerical weather prediction. Mon. Wea. Rev., 102, 509–522, https://doi.org/10.1175/1520-0493(1974)102<0509:VVCSUF>2.0.CO;2.
Kasahara, A., and W. M. Washington, 1967: NCAR global general circulation model of the atmosphere. Mon. Wea. Rev., 95, 389–402, https://doi.org/10.1175/1520-0493(1967)095<0389:NGGCMO>2.3.CO;2.
Kasahara, A., and W. M. Washington, 1969: Thermal and dynamical effects of orography on the general circulation of the atmosphere. Proc. WMO/IUGG Symp. on Numerical Weather Prediction, Tokyo, Japan, WMO, 47–56.
Kasahara, A., and W. M. Washington, 1971: General circulation experiments with a six-layer NCAR model, including orography, cloudiness and surface temperature calculations. J. Atmos. Sci., 28, 657–701, https://doi.org/10.1175/1520-0469(1971)028<0657:GCEWAS>2.0.CO;2.
Katayama, A., 1967: On the radiation budget of the troposphere over the Northern Hemisphere (II). J. Meteor. Soc. Japan Ser. II, 45, 1–25.
Katayama, A., 1972: A simplified scheme for computing radiation transfer in the troposphere. Numerical Simulation of Weather and Climate Tech. Rep. 6, Department of Meteorology, University of California, Los Angeles, 77 pp.
Kathuroju, N., M. A. White, J. Symanzik, M. D. Schwartz, J. A. Powell, and R. R. Nemani, 2007: On the use of the Advanced Very High Resolution Radiometer for development of prognostic land surface phenology models. Ecol. Modell., 201, 144–156.
Keller, M., M. A. Silva-Dias, D. C. Nepstad, and M. O. Silva-Andreae, 2004: The Large-Scale Biosphere-Atmosphere Experiment in Amazonia: Analyzing regional land use change effects. Ecosystems and Land Use Change, Geophys. Monogr., Vol. 153, Amer. Geophys. Union, 321–334, https://doi.org/10.1029/153GM24.
Kessler, E., 1969: On the Distribution and Continuity of Water Substance in Atmospheric Circulations. Meteor. Monogr., No. 32, Amer. Meteor. Soc., 84 pp.
Kessler, E., 1995: On the continuity and distribution of water substance in atmospheric circulations. Atmos. Res., 38 (1–4), 109–145, https://doi.org/10.1016/0169-8095(94)00090-Z.
Khain, A., D. Rosenfeld, and A. Pokrovsky, 2005: Aerosol impact on the dynamics and microphysics of deep convective clouds. Quart. J. Roy. Meteor. Soc., 131, 2639–2663, https://doi.org/10.1256/qj.04.62.
Khairoutdinov, M. F., and Y. Kogan, 2000: A new cloud physics parameterization in a large-eddy simulation model of marine stratocumulus. Mon. Wea. Rev., 128, 229–243, https://doi.org/10.1175/1520-0493(2000)128<0229:ANCPPI>2.0.CO;2.
Khairoutdinov, M. F., and D. A. Randall, 2001: A cloud resolving model as a cloud parameterization in the NCAR Community Climate System Model: Preliminary results. Geophys. Res. Lett., 28, 3617–3620, https://doi.org/10.1029/2001GL013552.
Khairoutdinov, M. F., and D. A. Randall, 2003: Cloud resolving modeling of the ARM summer 1997 IOP: Model formulation, results, uncertainties, and sensitivities. J. Atmos. Sci., 60, 607–625, https://doi.org/10.1175/1520-0469(2003)060<0607:CRMOTA>2.0.CO;2.
Kiehl, J., and B. Briegleb, 1993: The relative roles of sulfate aerosols and greenhouse gases in climate forcing. Science, 260, 311–314, https://doi.org/10.1126/science.260.5106.311.
Kiehl, J., J. Hack, G. Bonan, B. Boville, D. Williamson, and P. Rasch, 1998: The National Center for Atmospheric Research Community Climate Model: CCM3. J. Climate, 11, 1131–1149, https://doi.org/10.1175/1520-0442(1998)011<1131:TNCFAR>2.0.CO;2.
Killworth, P. D., D. J. Webb, D. Stainforth, and S. M. Paterson, 1991: The development of a free-surface Bryan–Cox–Semtner ocean model. J. Phys. Oceanogr., 21, 1333–1348, https://doi.org/10.1175/1520-0485(1991)021<1333:TDOAFS>2.0.CO;2.
Kirtman, B., and A. Pirani, 2009: The state of the art of seasonal prediction: Outcomes and recommendations from the first World Climate Research Program workshop on seasonal prediction. Bull. Amer. Meteor. Soc., 90, 455–458, https://doi.org/10.1175/2008BAMS2707.1.
Klemp, J. B., and R. B. Wilhelmson, 1978: The simulation of three-dimensional convective storm dynamics. J. Atmos. Sci., 35, 1070–1096, https://doi.org/10.1175/1520-0469(1978)035<1070:TSOTDC>2.0.CO;2.
Koenig, L. R., and F. W. Murray, 1976: Ice-bearing cumulus cloud evolution: Numerical simulation and general comparison against observations. J. Appl. Meteor., 15, 747–762, https://doi.org/10.1175/1520-0450(1976)015<0747:IBCCEN>2.0.CO;2.
Konor, C. S., and A. Arakawa, 1997: Design of an atmospheric model based on a generalized vertical coordinate. Mon. Wea. Rev., 125, 1649–1673, https://doi.org/10.1175/1520-0493(1997)125<1649:DOAAMB>2.0.CO;2.
Korn, P., 2017: Formulation of an unstructured grid model for global ocean dynamics. J. Comput. Phys., 339, 525–552, https://doi.org/10.1016/j.jcp.2017.03.009.
Koster, R. D., and M. J. Suarez, 1992: Modeling the land surface boundary in climate models as a composite of independent vegetation stands. J. Geophys. Res., 97, 2697–2715, https://doi.org/10.1029/91JD01696.
Kreidenweis, S., M. Petters, and U. Lohmann, 2019: 100 years of progress in cloud physics, aerosols, and aerosol chemistry. A Century of Progress in Atmospheric and Related Sciences: Celebrating the American Meteorological Society Centennial, Meteor. Monogr., No. 59, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-18-0024.1, in press.
Krinner, G., and Coauthors, 2005: A dynamic global vegetation model for studies of the coupled atmosphere-biosphere system. Global Biogeochem. Cycles, 19, https://doi.org/10.1029/2003GB002199.
Krueger, S. K., 1988: Numerical simulation of tropical cumulus clouds and their interaction with the subcloud layer. J. Atmos. Sci., 45, 2221–2250, https://doi.org/10.1175/1520-0469(1988)045<2221:NSOTCC>2.0.CO;2.
Kuo, H.-L., 1965: On formation and intensification of tropical cyclones through latent heat release by cumulus convection. J. Atmos. Sci., 22, 40–63, https://doi.org/10.1175/1520-0469(1965)022<0040:OFAIOT>2.0.CO;2.
Kuo, H.-L., 1974: Further studies of the parameterization of the influence of cumulus convection on large-scale flow. J. Atmos. Sci., 31, 1232–1240, https://doi.org/10.1175/1520-0469(1974)031<1232:FSOTPO>2.0.CO;2.
Kurihara, Y., 1965: Numerical integration of the primitive equations on a spherical grid. Mon. Wea. Rev., 93, 399–415, https://doi.org/10.1175/1520-0493(1965)093<0399:NIOTPE>2.3.CO;2.
Kurihara, Y., 1968: Note on finite difference expressions for the hydrostatic relation and pressure gradient force. Mon. Wea. Rev., 96, 654–656, https://doi.org/10.1175/1520-0493(1968)096<0654:NOFDEF>2.0.CO;2.
Kwizak, M., and A. J. Robert, 1971: A semi-implicit scheme for grid point atmospheric models of the primitive equations. Mon. Wea. Rev., 99, 32–36, https://doi.org/10.1175/1520-0493(1971)099<0032:ASSFGP>2.3.CO;2.
Lacis, A. A., and J. Hansen, 1974: A parameterization for the absorption of solar radiation in the earth’s atmosphere. J. Atmos. Sci., 31, 118–133, https://doi.org/10.1175/1520-0469(1974)031<0118:APFTAO>2.0.CO;2.
Lacis, A. A., and V. Oinas, 1991: A description of the correlated k-distribution method for modeling nongray gaseous absorption, thermal emission, and multiple scattering in vertically inhomogeneous atmospheres. J. Geophys. Res., 96, 9027–9063, https://doi.org/10.1029/90JD01945.
Laloyaux, P., and Coauthors, 2018: CERA-20C: A coupled reanalysis of the twentieth century. J. Adv. Model. Earth Syst., 10, 1172–1195, https://doi.org/10.1029/2018MS001273.
Langlois, W., and H. Kwok, 1969: Description of the Mintz-Arakawa numerical general circulation model. Department of Meteorology, University of California, Los Angeles, Tech. Rep., 95 pp.
Lappen, C.-L., and D. A. Randall, 2001: Toward a unified parameterization of the boundary layer and moist convection. Part I: A new type of mass-flux model. J. Atmos. Sci., 58, 2021–2036, https://doi.org/10.1175/1520-0469(2001)058<2021:TAUPOT>2.0.CO;2.
Large, W. G., J. C. McWilliams, and S. C. Doney, 1994: Oceanic vertical mixing: A review and a model with a nonlocal boundary layer parameterization. Rev. Geophys., 32, 363–403, https://doi.org/10.1029/94RG01872.
Larson, V. E., R. Wood, P. R. Field, J.-C. Golaz, T. H. Vonder Haar, and W. R. Cotton, 2001: Systematic biases in the microphysics and thermodynamics of numerical models that ignore subgrid-scale variability. J. Atmos. Sci., 58, 1117–1128, https://doi.org/10.1175/1520-0469(2001)058<1117:SBITMA>2.0.CO;2.
Lau, N.-C., 1985: Modeling the seasonal dependence of the atmospheric response to observed El Niños in 1962–76. Mon. Wea. Rev., 113, 1970–1996, https://doi.org/10.1175/1520-0493(1985)113<1970:MTSDOT>2.0.CO;2.
Lauritzen, P. H., and R. D. Nair, 2008: Monotone and conservative cascade remapping between spherical grids (CaRS): Regular latitude–longitude and cubed-sphere grids. Mon. Wea. Rev., 136, 1416–1432, https://doi.org/10.1175/2007MWR2181.1.
Laval, K., and R. Sadourny, 1979: Expériences de simulation de la circulation générale de l’atmosphère avec le modèle du LMD. Evolution of Planetary Atmospheres and Climatology of the Earth, Centre National d’Études Spatiales, Département des Affaires Universitaires, 493–506.
Laval, K., R. Sadourny, and Y. Serafini, 1981a: Land surface processes in a simplified general circulation model. Geophys. Astrophys. Fluid Dyn., 17, 129–150, https://doi.org/10.1080/03091928108243677.
Laval, K., H. Le Treut, and R. Sadourny, 1981b: Effect of cumulus parameterization on the dynamics of a general circulation model. Geophys. Astrophys. Fluid Dyn., 17, 113–127, https://doi.org/10.1080/03091928108243676.
Lawrence, D. M., and J. M. Slingo, 2004: An annual cycle of vegetation in a GCM. Part I: Implementation and impact on evaporation. Climate Dyn., 22, 87–105, https://doi.org/10.1007/s00382-003-0366-9.
Lean, J., and P. Rowntree, 1993: A GCM simulation of the impact of Amazonian deforestation on climate using an improved canopy representation. Quart. J. Roy. Meteor. Soc., 119, 509–530, https://doi.org/10.1002/qj.49711951109.
LeBauer, D. S., and K. K. Treseder, 2008: Nitrogen limitation of net primary productivity in terrestrial ecosystems is globally distributed. Ecology, 89, 371–379, https://doi.org/10.1890/06-2057.1.
Lee, W.-L., and K. Liou, 2007: A coupled atmosphere–ocean radiative transfer system using the analytic four-stream approximation. J. Atmos. Sci., 64, 3681–3694, https://doi.org/10.1175/JAS4004.1.
Legg, S., R. W. Hallberg, and J. B. Girton, 2006: Comparison of entrainment in overflows simulated by z-coordinate, isopycnal and non-hydrostatic models. Ocean Modell., 11, 69–97, https://doi.org/10.1016/j.ocemod.2004.11.006.
Legg, S., and Coauthors, 2009: Improving oceanic overflow representation in climate models: The gravity current entrainment climate process team. Bull. Amer. Meteor. Soc., 90, 657–670, https://doi.org/10.1175/2008BAMS2667.1.
Leith, C., 1965a: Numerical simulation of the earth’s atmosphere. Methods in Computational Physics, B. Alder, S. Fernbach, and M. Rotenberg, Eds., Vol. 4, Academic Press, 1–28.
Leith, C. E., 1965b: Convection in a six-level model atmosphere. Proc. Int. Symp. on Dynamics of Large-Scale Atmospheric Processes, Moscow, Russia, International Association of Meteorology and Atmospheric Physics, 134–138.
Leith, C., 1988: The computational physics of the global atmosphere. Energy in Physics, War and Peace, Springer, 161–173.
Lemarié, F., J. Kurian, A. F. Shchepetkin, M. J. Molemaker, F. Colas, and J. C. McWilliams, 2012: Are there inescapable issues prohibiting the use of terrain-following coordinates in climate models? Ocean Modell., 42, 57–79, https://doi.org/10.1016/j.ocemod.2011.11.007.
LeMone, M., and Coauthors, 2019: 100 years of progress in boundary layer meteorology. A Century of Progress in Atmospheric and Related Sciences: Celebrating the American Meteorological Society Centennial, Meteor. Monogr., No. 59, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-18-0013.1.
Le Quéré, C., and Coauthors, 2009: Trends in the sources and sinks of carbon dioxide. Nat. Geosci., 2, 831–836, https://doi.org/10.1038/ngeo689.
Lettau, H., 1954: Improved models of thermal diffusion in the soil. Eos, Trans. Amer. Geophys. Union, 35, 121–132, https://doi.org/10.1029/TR035i001p00121.
Leutwyler, D., O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär, 2016: Towards European-scale convection-resolving climate simulations with GPUs: A study with COSMO 4.19. Geosci. Model Dev., 9, 3393–3412, https://doi.org/10.5194/gmd-9-3393-2016.
Lewellen, W., and S. Yoh, 1993: Binormal model of ensemble partial cloudiness. J. Atmos. Sci., 50, 1228–1237, https://doi.org/10.1175/1520-0469(1993)050<1228:BMOEPC>2.0.CO;2.
Lewis, J. M., 1998: Clarifying the dynamics of the general circulation: Phillips’s 1956 experiment. Bull. Amer. Meteor. Soc., 79, 39–60, https://doi.org/10.1175/1520-0477(1998)079<0039:CTDOTG>2.0.CO;2.
Lewis, J. M., 2005: Roots of ensemble forecasting. Mon. Wea. Rev., 133, 1865–1885, https://doi.org/10.1175/MWR2949.1.
Lewis, J. M., 2008: Smagorinsky’s GFDL: Building the team. Bull. Amer. Meteor. Soc., 89, 1339–1353, https://doi.org/10.1175/2008BAMS2599.1.
Lilly, D. K., 1962: On the numerical simulation of buoyant convection. Tellus, 14, 148–172, https://doi.org/10.3402/tellusa.v14i2.9537.
Lilly, D. K., 1968: Models of cloud-topped mixed layers under a strong inversion. Quart. J. Roy. Meteor. Soc., 94, 292–309, https://doi.org/10.1002/qj.49709440106.
Lilly, D. K., 1997: Introduction to “computational design for long-term numerical integration of the equations of fluid motion: Two-dimensional incompressible flow. Part I.” J. Comput. Phys., 135, 101–102, https://doi.org/10.1006/jcph.1997.5722.
Lin, C., R. Laprise, and H. Ritchie, Eds., 1997: Numerical Methods in Atmosphere and Ocean Modelling: The Andre Robert Memorial Volume. Canadian Meteorological and Oceanographic Society, 581 pp.
Lin, S.-J., 2004: A “vertically Lagrangian” finite-volume dynamical core for global models. Mon. Wea. Rev., 132, 2293–2307, https://doi.org/10.1175/1520-0493(2004)132<2293:AVLFDC>2.0.CO;2.
Lin, Y.-L., R. D. Farley, and H. D. Orville, 1983: Bulk parameterization of the snow field in a cloud model. J. Climate Appl. Meteor., 22, 1065–1092, https://doi.org/10.1175/1520-0450(1983)022<1065:BPOTSF>2.0.CO;2.
Lindzen, R. S., 1981: Turbulence and stress owing to gravity wave and tidal breakdown. J. Geophys. Res., 86, 9707–9714, https://doi.org/10.1029/JC086iC10p09707.
Liu, X., and Coauthors, 2012: Toward a minimal representation of aerosols in climate models: Description and evaluation in the Community Atmosphere Model CAM5. Geosci. Model Dev., 5, 709–739, https://doi.org/10.5194/gmd-5-709-2012.
Lohmann, U., and E. Roeckner, 1996: Design and performance of a new cloud microphysics scheme developed for the ECHAM general circulation model. Climate Dyn., 12, 557–572, https://doi.org/10.1007/BF00207939.
Lohmann, U., and C. Hoose, 2009: Sensitivity studies of different aerosol indirect effects in mixed-phase clouds. Atmos. Chem. Phys., 9, 8917–8934, https://doi.org/10.5194/acp-9-8917-2009.
Lohmann, U., J. Feichter, C. C. Chuang, and J. E. Penner, 1999: Prediction of the number of cloud droplets in the ECHAM GCM. J. Geophys. Res., 104, 9169–9198, https://doi.org/10.1029/1999JD900046.
Lohmann, U., P. Stier, C. Hoose, S. Ferrachat, S. Kloster, E. Roeckner, and J. Zhang, 2007: Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-ham. Atmos. Chem. Phys., 7, 3425–3446, https://doi.org/10.5194/acp-7-3425-2007.
Lorenz, E. N., 1955: Available potential energy and the maintenance of the general circulation. Tellus, 7, 157–167, https://doi.org/10.3402/tellusa.v7i2.8796.
Lorenz, E. N., 1960: Energy and numerical weather prediction. Tellus, 12, 364–373, https://doi.org/10.3402/tellusa.v12i4.9420.
Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141, https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
Los, S., and Coauthors, 2000: A global 9-yr biophysical land surface dataset from NOAA AVHRR data. J. Hydrometeor., 1, 183–199, https://doi.org/10.1175/1525-7541(2000)001<0183:AGYBLS>2.0.CO;2.
Louis, J.-F., 1979: A parametric model of vertical eddy fluxes in the atmosphere. Bound.-Layer Meteor., 17, 187–202, https://doi.org/10.1007/BF00117978.
Lucht, W., S. Schaphoff, T. Erbrecht, U. Heyder, and W. Cramer, 2006: Terrestrial vegetation redistribution and carbon balance under climate change. Carbon Balance Manag., 1 (1), 6, https://doi.org/10.1186/1750-0680-1-6.
Luo, W.-G., S. Wang, J. Huang, L. Yan, and J. Huang, 2006: Influence of plant photosynthesis and transpiration character on nitrogen removal effect in wetland. Zhongguo Huanjing Kexue, 26 (1), 30–33.
Luo, Y., and Coauthors, 2012: A framework for benchmarking land models. Biogeosciences, 9, 3857–3874, https://doi.org/10.5194/bg-9-3857-2012.
Lynch, P., 1993: Richardson’s forecast factory: The $64,000 question. Meteor. Mag., 122, 69–70.
Lynch, P., 2006: The Emergence of Numerical Weather Prediction: Richardson’s Dream. Cambridge University Press, 290 pp.
Lynch, P., and O. Lynch, 2008: Forecasts by PHONIAC. Weather, 63, 324–326, https://doi.org/10.1002/wea.241.
MacKinnon, J., L. St Laurent, and A. C. N. Garabato, 2013: Diapycnal mixing processes in the ocean interior. Ocean Circulation and Climate: A 21st Century Perspective, G. Siedler et al., Eds., International Geophysics Series, Vol. 103, Elsevier, 159–183, https://doi.org/10.1016/B978-0-12-391851-2.00007-6.
Madden, R. A., and P. R. Julian, 1971: Detection of a 40–50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci., 28, 702–708, https://doi.org/10.1175/1520-0469(1971)028<0702:DOADOI>2.0.CO;2.
Madden, R. A., and P. R. Julian, 1972: Description of global-scale circulation cells in the tropics with a 40–50 day period. J. Atmos. Sci., 29, 1109–1123, https://doi.org/10.1175/1520-0469(1972)029<1109:DOGSCC>2.0.CO;2.
Madec, G., P. Delecluse, M. Imbard, and C. Levy, 1997: OPA 8 ocean general circulation model reference manual. LODYC/IPSL, 91 pp.
Majewski, D., and Coauthors, 2002: The operational global icosahedral–hexagonal gridpoint model GME: Description and high-resolution tests. Mon. Wea. Rev., 130, 319–338, https://doi.org/10.1175/1520-0493(2002)130<0319:TOGIHG>2.0.CO;2.
Malkus, J. S., 1952: Recent advances in the study of convective clouds and their interaction with the environment. Tellus, 4 (2), 71–87, https://doi.org/10.3402/tellusa.v4i2.8680.
Malkus, J. S., and G. Witt, 1959: The evolution of a convective element: A numerical calculation. The Atmosphere and the Sea in Motion: Scientific Contributions to the Rossby Memorial Volume, B. Bolin, Ed., Rockefeller Institute, 425–439.
Manabe, S., 1969a: Climate and the ocean circulation: I. The atmospheric circulation and the hydrology of the earth’s surface. Mon. Wea. Rev., 97, 739–774, https://doi.org/10.1175/1520-0493(1969)097<0739:CATOC>2.3.CO;2.
Manabe, S., 1969b: Climate and the ocean circulation: II. The atmospheric circulation and the effect of heat transport by ocean currents. Mon. Wea. Rev., 97, 775–805, https://doi.org/10.1175/1520-0493(1969)097<0775:CATOC>2.3.CO;2.
Manabe, S., and F. Möller, 1961: On the radiative equilibrium and heat balance of the atmosphere. Mon. Wea. Rev., 89, 503–532, https://doi.org/10.1175/1520-0493(1961)089<0503:OTREAH>2.0.CO;2.
Manabe, S., and R. F. Strickler, 1964: Thermal equilibrium of the atmosphere with a convective adjustment. J. Atmos. Sci., 21, 361–385, https://doi.org/10.1175/1520-0469(1964)021<0361:TEOTAW>2.0.CO;2.
Manabe, S., and J. Smagorinsky, 1967: Simulated climatology of a general circulation model with a hydrologic cycle: II. Analysis of the tropical atmosphere. Mon. Wea. Rev., 95, 155–169, https://doi.org/10.1175/1520-0493(1967)095<0155:SCOAGC>2.3.CO;2.
Manabe, S., and R. T. Wetherald, 1967: Thermal equilibrium of the atmosphere with a given distribution of relative humidity. J. Atmos. Sci., 24, 241–259, https://doi.org/10.1175/1520-0469(1967)024<0241:TEOTAW>2.0.CO;2.
Manabe, S., and B. G. Hunt, 1968: Experiments with a stratospheric general circulation model. Mon. Wea. Rev., 96, https://doi.org/10.1175/1520-0493(1968)096<0477:EWASGC>2.0.CO;2.
Manabe, S., and K. Bryan, 1969: Climate calculations with a combined ocean–atmosphere model. J. Atmos. Sci., 26, 786–789, https://doi.org/10.1175/1520-0469(1969)026<0786:CCWACO>2.0.CO;2.
Manabe, S., and R. Wetherald, 1975: The effects of doubling the CO2 concentration on the climate of a general circulation model. J. Atmos. Sci., 32, 3–15, https://doi.org/10.1175/1520-0469(1975)032<0003:TEODTC>2.0.CO;2.
Manabe, S., and J. Mahlman, 1976: Simulation of seasonal and interhemispheric variations in the stratospheric circulation. J. Atmos. Sci., 33, 2185–2217, https://doi.org/10.1175/1520-0469(1976)033<2185:SOSAIV>2.0.CO;2.
Manabe, S., and R. J. Stouffer, 1980: Sensitivity of a global climate model to an increase of CO2 concentration in the atmosphere. J. Geophys. Res., 85, 5529–5554, https://doi.org/10.1029/JC085iC10p05529.
Manabe, S., and R. Wetherald, 1980: On the distribution of climate change resulting from an increase of CO2 content in the atmosphere. J. Atmos. Sci., 37, 99–118, https://doi.org/10.1175/1520-0469(1980)037<0099:OTDOCC>2.0.CO;2.
Manabe, S., J. Smagorinsky, and R. F. Strickler, 1965: Simulated climatology of a general circulation model with a hydrologic cycle. Mon. Wea. Rev., 93, 769–798, https://doi.org/10.1175/1520-0493(1965)093<0769:SCOAGC>2.3.CO;2.
Manton, M. J., and W. R. Cotton, 1977: Formulation of approximate equations for modeling moist deep convection on the mesoscale. Colorado State University Atmospheric Science Paper 266, 73 pp.
Marchand, R., and T. Ackerman, 2011: A cloud-resolving model with an adaptive vertical grid for boundary layer clouds. J. Atmos. Sci., 68, 1058–1074, https://doi.org/10.1175/2010JAS3638.1.
Marshall, J., A. Adcroft, J.-M. Campin, C. Hill, and A. White, 2004: Atmosphere–ocean modeling exploiting fluid isomorphisms. Mon. Wea. Rev., 132, 2882–2894, https://doi.org/10.1175/MWR2835.1.
Marsland, S. J., H. Haak, J. H. Jungclaus, M. Latif, and F. Röske, 2003: The Max-Planck-Institute global ocean/sea ice model with orthogonal curvilinear coordinates. Ocean Modell., 5, 91–127, https://doi.org/10.1016/S1463-5003(02)00015-X.
Marsland, S., J. Church, N. Bindoff, and G. Williams, 2007: Antarctic coastal polynya response to climate change. J. Geophys. Res., 112, C07009, https://doi.org/10.1029/2005JC003291.
Masuda, Y., and H. Ohnishi, 1986: An integration scheme of the primitive equation model with an icosahedral-hexagonal grid system and its application to the shallow water equations. J. Meteor. Soc. Japan Ser. II, 64, 317–326.
Matsuno, T., 2016: Prologue: Tropical meteorology 1960–2010—personal recollections. Multiscale Convection-Coupled Systems in the Tropics: A Tribute to Dr. Michio Yanai, Meteor. Monogr., No. 56, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-15-0012.1.
Maykut, G. A., and N. Untersteiner, 1971: Some results from a time-dependent thermodynamic model of sea ice. J. Geophys. Res., 76, 1550–1575, https://doi.org/10.1029/JC076i006p01550.
McCartney, S., 1999: ENIAC: The Triumphs and Tragedies of the World’s First Computer. Walker & Company, 262 pp.
McDonald, A., and J. Bates, 1987: Improving the estimate of the departure point position in a two-time level semi-Lagrangian and semi-implicit scheme. Mon. Wea. Rev., 115, 737–739, https://doi.org/10.1175/1520-0493(1987)115<0737:ITEOTD>2.0.CO;2.
McDougall, T. J., 1987: Neutral surfaces. J. Phys. Oceanogr., 17, 1950–1964, https://doi.org/10.1175/1520-0485(1987)017<1950:NS>2.0.CO;2.
McDougall, T. J., S. Groeskamp, and S. M. Griffies, 2014: On geometrical aspects of interior ocean mixing. J. Phys. Oceanogr., 44, 2164–2175, https://doi.org/10.1175/JPO-D-13-0270.1.
McFarlane, N., 1987: The effect of orographically excited gravity wave drag on the general circulation of the lower stratosphere and troposphere. J. Atmos. Sci., 44, 1775–1800, https://doi.org/10.1175/1520-0469(1987)044<1775:TEOOEG>2.0.CO;2.
McGregor, J. L., and M. R. Dix, 2008: An updated description of the conformal-cubic atmospheric model. High Resolution Numerical Modelling of the Atmosphere and Ocean, K. Hamilton and W. Ohfuchi, Eds., Springer, 51–75.
McWilliams, J. C., 2016: Submesoscale currents in the ocean. Proc. Roy. Soc., 472A, 20160117, https://doi.org/10.1098/rspa.2016.0117.
Meador, W. E., and W. R. Weaver, 1980: Two-stream approximations to radiative transfer in planetary atmospheres: A unified description of existing methods and a new improvement. J. Atmos. Sci., 37, 630–643, https://doi.org/10.1175/1520-0469(1980)037<0630:TSATRT>2.0.CO;2.
Meehl, G. A., G. J. Boer, C. Covey, M. Latif, and R. J. Stouffer, 2000: The Coupled Model Intercomparison Project (CMIP). Bull. Amer. Meteor. Soc., 81, 313–318, https://doi.org/10.1175/1520-0477(2000)081<0313:TCMIPC>2.3.CO;2.
Meehl, G. A., W. M. Washington, T. Wigley, J. M. Arblaster, and A. Dai, 2003: Solar and greenhouse gas forcing and climate response in the twentieth century. J. Climate, 16, 426–444, https://doi.org/10.1175/1520-0442(2003)016<0426:SAGGFA>2.0.CO;2.
Mellor, G. L., 1977: The Gaussian cloud model relations. J. Atmos. Sci., 34, 356–358, https://doi.org/10.1175/1520-0469(1977)034<0356:TGCMR>2.0.CO;2.
Mellor, G. L., and T. Yamada, 1974: A hierarchy of turbulence closure models for planetary boundary layers. J. Atmos. Sci., 31, 1791–1806, https://doi.org/10.1175/1520-0469(1974)031<1791:AHOTCM>2.0.CO;2.
Mesinger, F., 1982: On the convergence and error problems of the calculation of the pressure gradient force in sigma coordinate models. Geophys. Astrophys. Fluid Dyn., 19, 105–117, https://doi.org/10.1080/03091928208208949.
Mesinger, F., and Z. I. Janjić, 1985: Problems and numerical methods of the incorporation of mountains in atmospheric models. Large-Scale Computations in Fluid Mechanics, Part 2, B. E. Engquist, S. Osher, and R. C. J. Somerville, Eds., Lectures in Applied Mathematics, Vol. 22, American Mathematical Society, 81–120.
Michael, G. A., 1996: An interview with Chuck Leith. http://www.computer-history.info/Page1.dir/pages/Leith.html.
Mie, G., 1908: Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen (Contributions to the optics of turbid media, particularly of colloidal metal solutions). Ann. Phys., 330, 377–445, https://doi.org/10.1002/andp.19083300302.
Milbrandt, J. A., S. Bélair, M. Faucher, M. Vallée, M. L. Carrera, and A. Glazer, 2016: The Pan-Canadian high resolution (2.5 km) deterministic prediction system. Wea. Forecasting, 31, 1791–1816, https://doi.org/10.1175/WAF-D-16-0035.1.
Milly, P., and K. Dunne, 1994: Sensitivity of the global water cycle to the water-holding capacity of land. J. Climate, 7, 506–526, https://doi.org/10.1175/1520-0442(1994)007<0506:SOTGWC>2.0.CO;2.
Ming, Y., V. Ramaswamy, L. J. Donner, V. T. Phillips, S. A. Klein, P. A. Ginoux, and L. W. Horowitz, 2007: Modeling the interactions between aerosols and liquid water clouds with a self-consistent cloud scheme in a general circulation model. J. Atmos. Sci., 64, 1189–1209, https://doi.org/10.1175/JAS3874.1.
Mintz, Y., 1968: Very long-term global integration of the primitive equations of atmospheric motion: An experiment in climate simulation. Causes of Climatic Change, Springer, 20–36.
Mintz, Y., and J. Bjerknes, 1951: Progress Report: Investigation of the general circulation of the atmosphere. University of California, Los Angeles.
Mitchell, J. F. B., R. Davis, W. A. Ingram, and C. Senior, 1995a: On surface temperature, greenhouse gases, and aerosols: Models and observations. J. Climate, 8, 2364–2386, https://doi.org/10.1175/1520-0442(1995)008<2364:OSTGGA>2.0.CO;2.
Mitchell, J. F. B., T. Johns, J. M. Gregory, and S. Tett, 1995b: Climate response to increasing levels of greenhouse gases and sulphate aerosols. Nature, 376, 501–504, https://doi.org/10.1038/376501a0.
Miyakoda, K., and J. Sirutis, 1977: Comparative integrations of global models with various parameterized processes of subgrid-scale vertical transports—Description of the parameterizations (atmospheric circulation). Beitr. Phys. Atmos., 50, 445–487.
Miyakoda, K., J. Smagorinsky, R. F. Strickler, and G. Hembree, 1969: Experimental extended predictions with a nine-level hemispheric model. Mon. Wea. Rev., 97, 1–76, https://doi.org/10.1175/1520-0493(1969)097<0001:EEPWAN>2.3.CO;2.
Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, https://doi.org/10.1029/97JD00237.
Mlawer, E. J., M. J. Iacono, R. Pincus, H. W. Barker, L. Oreopoulos, and D. L. Mitchell, 2016: Contributions of the ARM Program to radiative transfer modeling for climate and weather applications. The Atmospheric Radiation Measurement (ARM) Program: The First 20 Years, Meteor. Monogr., No. 57, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-15-0041.1.
Moeng, C.-H., 1984: A large-eddy-simulation model for the study of planetary boundary-layer turbulence. J. Atmos. Sci., 41, 2052–2062, https://doi.org/10.1175/1520-0469(1984)041<2052:ALESMF>2.0.CO;2.
Monin, A. S., and A. M. Obukhov, 1954: Osnovnye zakonomernosti turbulentnogo peremesivanija v prizemnom sloe atmosfery (Basic laws of turbulent mixing in the surface layer of the atmosphere). Tr. Geofiz. Inst., Akad. Nauk SSSR, 24, 163–187.
Monteith, J., 1965: Evaporation and the environment. The State and Movement of Water in Living Organisms, Proceedings of the XIX Symposium of the Society for Experimental Biology, Cambridge University Press, 205–234.
Morcrette, J.-J., 1990: Impact of changes to the radiation transfer parameterizations plus cloud optical properties in the ECMWF model. Mon. Wea. Rev., 118, 847–873, https://doi.org/10.1175/1520-0493(1990)118<0847:IOCTTR>2.0.CO;2.
Morcrette, J.-J., 1991: Radiation and cloud radiative properties in the European Centre for Medium Range Weather Forecasts forecasting system. J. Geophys. Res., 96, 9121–9132, https://doi.org/10.1029/89JD01597.
Morrisette, P. M., 1989: The evolution of policy responses to stratospheric ozone depletion. Nat. Resour. J., 29, 793–820.
Morrison, H., and A. Gettelman, 2008: A new two-moment bulk stratiform cloud microphysics scheme in the Community Atmosphere Model, version 3 (CAM3). Part I: Description and numerical tests. J. Climate, 21, 3642–3659, https://doi.org/10.1175/2008JCLI2105.1.
Morrison, H., and J. A. Milbrandt, 2015: Parameterization of cloud microphysics based on the prediction of bulk ice particle properties. Part I: Scheme description and idealized tests. J. Atmos. Sci., 72, 287–311, https://doi.org/10.1175/JAS-D-14-0065.1.
Munk, W. H., 1966: Abyssal recipes. Deep Sea Res., 13, 707–730, https://doi.org/10.1016/0011-7471(66)90602-4.
Murphy, J., 1995: Transient response of the Hadley Centre coupled ocean–atmosphere model to increasing carbon dioxide. Part 1: Control climate and flux adjustment. J. Climate, 8, 36–56, https://doi.org/10.1175/1520-0442(1995)008<0036:TROTHC>2.0.CO;2.
Murray, R. J., 1996: Explicit generation of orthogonal grids for ocean models. J. Comput. Phys., 126, 251–273, https://doi.org/10.1006/jcph.1996.0136.
Nastrom, G., K. Gage, and W. Jasperson, 1984: Kinetic energy spectrum of large- and mesoscale atmospheric processes. Nature, 310, 36–38, https://doi.org/10.1038/310036a0.
National Research Council Climatic Impact Committee, 1975: Environmental Impact of Stratospheric Flight: Biological and Climatic Effects of Aircraft Emissions in the Stratosphere. National Academies Press, 352 pp.
Neu, J. L., M. J. Prather, and J. E. Penner, 2007: Global atmospheric chemistry: Integrating over fractional cloud cover. J. Geophys. Res., 112, D11306, https://doi.org/10.1029/2006JD008007.
Noilhan, J., and S. Planton, 1989: A simple parameterization of land surface processes for meteorological models. Mon. Wea. Rev., 117, 536–549, https://doi.org/10.1175/1520-0493(1989)117<0536:ASPOLS>2.0.CO;2.
Norby, R. J., and Coauthors, 2005: Forest response to elevated CO2 is conserved across a broad range of productivity. Proc. Natl. Acad. Sci. USA, 102, 18 052–18 056, https://doi.org/10.1073/pnas.0509478102.
Nordeng, T. E., 1994: Extended versions of the convective parametrization scheme at ECMWF and their impact on the mean and transient activity of the model in the tropics. ECMWF Research Department Tech. Memo. 206, 41 pp.
Notz, D., and C. M. Bitz, 2017: Sea ice in Earth system models. Sea Ice, 3rd ed., D. N. Thomas, Ed., Wiley-Blackwell, 304–325.
Oglesby, R. J., and D. J. Erickson III, 1989: Soil moisture and the persistence of North American drought. J. Climate, 2, 1362–1380, https://doi.org/10.1175/1520-0442(1989)002<1362:SMATPO>2.0.CO;2.
Ogura, Y., 1962: Convection of isolated masses of a buoyant fluid: A numerical calculation. J. Atmos. Sci., 19, 492–502, https://doi.org/10.1175/1520-0469(1962)019<0492:COIMOA>2.0.CO;2.
Oliger, J. E., R. E. Wellck, A. Kasahara, and W. M. Washington, 1970: Description of NCAR global circulation model. NCAR Technical Note NCAR/TN-56+STR, 94 pp., https://doi.org/10.5065/D6XG9P35.
Onogi, K., and Coauthors, 2007: The JRA-25 Reanalysis. J. Meteor. Soc. Japan. Ser. II, 85, 369–432, https://doi.org/10.2151/jmsj.85.369.
Oren, R., and Coauthors, 2001: Soil fertility limits carbon sequestration by forest ecosystems in a CO2-enriched atmosphere. Nature, 411, 469–472, https://doi.org/10.1038/35078064.
Oreopoulos, L., and H. W. Barker, 1999: Accounting for subgrid-scale cloud variability in a multi-layer 1D solar radiative transfer algorithm. Quart. J. Roy. Meteor. Soc., 125A, 301–330, https://doi.org/10.1002/qj.49712555316.
Oreopoulos, L., and Coauthors, 2012: The continual intercomparison of radiation codes: Results from phase I. J. Geophys. Res., 117, D06118, https://doi.org/10.1029/2011JD016821.
Orszag, S. A., 1970: Transform method for the calculation of vector-coupled sums: Application to the spectral form of the vorticity equation. J. Atmos. Sci., 27, 890–895, https://doi.org/10.1175/1520-0469(1970)027<0890:TMFTCO>2.0.CO;2.
Ose, T., 1993: An examination of the effects of explicit cloud water in the UCLA GCM. J. Meteor. Soc. Japan. Ser. II, 71, 93–109.
Palmén, E., 1948: On the distribution of temperature and wind in the upper westerlies. J. Meteor., 5, 20–27, https://doi.org/10.1175/1520-0469(1948)005<0020:OTDOTA>2.0.CO;2.
Palmén, E., and H. Riehl, 1957: Budget of angular momentum and energy in tropical cyclones. J. Meteor., 14, 150–159, https://doi.org/10.1175/1520-0469(1957)014<0150:BOAMAE>2.0.CO;2.
Palmer, T., G. Shutts, and R. Swinbank, 1986: Alleviation of a systematic westerly bias in general circulation and numerical weather prediction models through an orographic gravity wave drag parametrization. Quart. J. Roy. Meteor. Soc., 112, 1001–1039, https://doi.org/10.1002/qj.49711247406.
Palmer, T., Č. Branković, and D. Richardson, 2000: A probability and decision-model analysis of PROVOST seasonal multi-model ensemble integrations. Quart. J. Roy. Meteor. Soc., 126, 2013–2033, https://doi.org/10.1256/smsqj.56702.
Palmer, T., F. Doblas-Reyes, A. Weisheimer, and M. Rodwell, 2008: Toward seamless prediction: Calibration of climate change projections using seasonal forecasts. Bull. Amer. Meteor. Soc., 89, 459–470, https://doi.org/10.1175/BAMS-89-4-459.
Pan, Y., and Coauthors, 2011: A large and persistent carbon sink in the world’s forests. Science, 333, 988–993, https://doi.org/10.1126/science.1201609.
Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov, 2017: Toward low cloud-permitting cloud superparameterization with explicit boundary layer turbulence. J. Adv. Model. Earth Syst., 9, 1542–1571, https://doi.org/10.1002/2017MS000968.
Penman, H. L., 1948: Natural evaporation from open water, bare soil and grass. Proc. Roy. Soc. London, 193A, 120–145.
Persson, A., 2005a: Early operational numerical weather prediction outside the USA: An historical introduction. Part I: Internationalism and engineering NWP in Sweden, 1952–69. Meteor. Appl., 12, 135–159, https://doi.org/10.1017/S1350482705001593.
Persson, A., 2005b: Early operational numerical weather prediction outside the USA: An historical introduction: Part II: Twenty countries around the world. Meteor. Appl., 12, 269–289, https://doi.org/10.1017/S1350482705001751.
Peters-Lidard, C. D., F. Hossain, L. R. Leung, N. McDowell, M. Rodell, F. J. Tapiador, F. J. Turk, and A. Wood, 2019: 100 years of progress in hydrology. A Century of Progress in Atmospheric and Related Sciences: Celebrating the American Meteorological Society Centennial, Meteor. Monogr., No. 59, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-18-0019.1.
Petty, G. W., 2002: Area-average solar radiative transfer in three-dimensionally inhomogeneous clouds: The Independently Scattering Cloudlet model. J. Atmos. Sci., 59, 2910–2929, https://doi.org/10.1175/1520-0469(2002)059<2910:AASRTI>2.0.CO;2.
Phillips, N. A., 1956: The general circulation of the atmosphere: A numerical experiment. Quart. J. Roy. Meteor. Soc., 82, 123–164, https://doi.org/10.1002/qj.49708235202.
Phillips, N. A., 1957a: A coordinate system having some special advantages for numerical forecasting. J. Meteor., 14, 184–185, https://doi.org/10.1175/1520-0469(1957)014<0184:ACSHSS>2.0.CO;2.
Phillips, N. A., 1957b: A map projection system suitable for large-scale numerical weather prediction. J. Meteor. Soc. Japan. Ser. II, 35, 262–267, https://doi.org/10.2151/jmsj1923.35A.0_262.
Phillips, N. A., 1959: An example of non-linear computational instability. The Atmosphere and the Sea in Motion: Scientific Contributions to the Rossby Memorial Volume, B. Bolin, Ed., Rockefeller Institute, 501–504.
Pielke, R. A., G. Dalu, J. Snook, T. Lee, and T. Kittel, 1991: Nonlinear influence of mesoscale land use on weather and climate. J. Climate, 4, 1053–1069, https://doi.org/10.1175/1520-0442(1991)004<1053:NIOMLU>2.0.CO;2.
Pincus, R., and S. A. Klein, 2000: Unresolved spatial variability and microphysical process rates in large-scale models. J. Geophys. Res., 105, 27 059–27 065, https://doi.org/10.1029/2000JD900504.
Pincus, R., S. A. McFarlane, and S. A. Klein, 1999: Albedo bias and the horizontal variability of clouds in subtropical marine boundary layers: Observations from ships and satellites. J. Geophys. Res., 104, 6183–6191, https://doi.org/10.1029/1998JD200125.
Pincus, R., H. W. Barker, and J.-J. Morcrette, 2003: A fast, flexible, approximate technique for computing radiative transfer in inhomogeneous cloud fields. J. Geophys. Res., 108, 4376, https://doi.org/10.1029/2002JD003322.
Pincus, R., and Coauthors, 2015: Radiative flux and forcing parameterization error in aerosol-free clear skies. Geophys. Res. Lett., 42, 5485–5492, https://doi.org/10.1002/2015GL064291.
Pinty, J.-P., P. Mascart, E. Richard, and R. Rosset, 1989: An investigation of mesoscale flows induced by vegetation inhomogeneities using an evapotranspiration model calibrated against HAPEX-MOBILHY data. J. Appl. Meteor., 28, 976–992, https://doi.org/10.1175/1520-0450(1989)028<0976:AIOMFI>2.0.CO;2.
Pitcher, E. J., R. C. Malone, V. Ramanathan, M. L. Blackmon, K. Puri, and W. Bourke, 1983: January and July simulations with a spectral general circulation model. J. Atmos. Sci., 40, 580–604, https://doi.org/10.1175/1520-0469(1983)040<0580:JAJSWA>2.0.CO;2.
Posselt, R., and U. Lohmann, 2008: Introduction of prognostic rain in ECHAM5: Design and single column model simulations. Atmos. Chem. Phys., 8, 2949–2963, https://doi.org/10.5194/acp-8-2949-2008.
Potter, J. F., 1970: The delta function approximation in radiative transfer theory. J. Atmos. Sci., 27, 943–949, https://doi.org/10.1175/1520-0469(1970)027<0943:TDFAIR>2.0.CO;2.
Price, E., J. Mielikainen, B. Huang, H. A. Huang, and T. Lee, 2013: GPU acceleration experience with RRTMG long wave radiation model. Proc. SPIE, 8895, 88950H, https://doi.org/10.1117/12.2031450.
Prince, S. D., and Coauthors, 1995: Geographical, biological and remote sensing aspects of the Hydrologic Atmospheric Pilot Experiment in the Sahel (HAPEX-Sahel). Remote Sens. Environ., 51, 215–234, https://doi.org/10.1016/0034-4257(94)00076-Y.
Pruppacher, H., and J. Klett, 1997: Microphysics of Clouds and Precipitation. Atmospheric and Oceanographic Sciences Library, Vol. 18. Kluwer Academic, 954 pp.
de Pury, D. G. G., and G. D. Farquhar, 1997: Simple scaling of photosynthesis from leaves to canopies without the errors of big-leaf models. Plant Cell Environ., 20, 537–557, https://doi.org/10.1111/j.1365-3040.1997.00094.x.
Putman, W. M., and S.-J. Lin, 2007: Finite-volume transport on various cubed-sphere grids. J. Comput. Phys., 227, 55–78, https://doi.org/10.1016/j.jcp.2007.07.022.
Putman, W. M., and M. Suarez, 2011: Cloud-system resolving simulations with the NASA Goddard Earth Observing System global atmospheric model (GEOS-5). Geophys. Res. Lett., 38, L16809, https://doi.org/10.1029/2011GL048438.
Qaddouri, A., and V. Lee, 2011: The Canadian Global Environmental Multiscale model on the Yin-Yang grid system. Quart. J. Roy. Meteor. Soc., 137, 1913–1926, https://doi.org/10.1002/qj.873.
Qian, J.-H., F. H. Semazzi, and J. S. Scroggs, 1998: A global nonhydrostatic semi-Lagrangian atmospheric model with orography. Mon. Wea. Rev., 126, 747–771, https://doi.org/10.1175/1520-0493(1998)126<0747:AGNSLA>2.0.CO;2.
Ramanathan, V., E. J. Pitcher, R. C. Malone, and M. L. Blackmon, 1983: The response of a spectral general circulation model to refinements in radiative processes. J. Atmos. Sci., 40, 605–630, https://doi.org/10.1175/1520-0469(1983)040<0605:TROASG>2.0.CO;2.
Ramanathan, V., R. Cess, E. Harrison, P. Minnis, B. Barkstrom, E. Ahmad, and D. Hartmann, 1989: Cloud-radiative forcing and climate: Results from the Earth Radiation Budget Experiment. Science, 243, 57–63, https://doi.org/10.1126/science.243.4887.57.
Randall, D. A., 1976: The interaction of the planetary boundary layer with large-scale circulations. Ph.D. thesis, University of California, Los Angeles, 214 pp.
Randall, D. A., 1987: Turbulent fluxes of liquid water and buoyancy in partly cloudy layers. J. Atmos. Sci., 44, 850–858, https://doi.org/10.1175/1520-0469(1987)044<0850:TFOLWA>2.0.CO;2.
Randall, D. A., 1994: Geostrophic adjustment and the finite-difference shallow-water equations. Mon. Wea. Rev., 122, 1371–1377, https://doi.org/10.1175/1520-0493(1994)122<1371:GAATFD>2.0.CO;2.
Randall, D. A., J. A. Abeles, and T. G. Corsetti, 1985: Seasonal simulations of the planetary boundary layer and boundary-layer stratocumulus clouds with a general circulation model. J. Atmos. Sci., 42, 641–676, https://doi.org/10.1175/1520-0469(1985)042<0641:SSOTPB>2.0.CO;2.
Randall, D. A., Harshvardhan, D. A. Dazlich, and T. G. Corsetti, 1989: Interactions among radiation, convection, and large-scale dynamics in a general circulation model. J. Atmos. Sci., 46, 1943–1970, https://doi.org/10.1175/1520-0469(1989)046<1943:IARCAL>2.0.CO;2.
Randall, D. A., Q. Shao, and C.-H. Moeng, 1992: A second-order bulk boundary-layer model. J. Atmos. Sci., 49, 1903–1923, https://doi.org/10.1175/1520-0469(1992)049<1903:ASOBBL>2.0.CO;2.
Randall, D. A., and Coauthors, 1996a: A revised land surface parameterization (SiB2) for GCMs. Part III: The greening of the Colorado State University general circulation model. J. Climate, 9, 738–763, https://doi.org/10.1175/1520-0442(1996)009<0738:ARLSPF>2.0.CO;2.
Randall, D. A., K.-M. Xu, R. J. Somerville, and S. Iacobellis, 1996b: Single-column models and cloud ensemble models as links between observations and climate models. J. Climate, 9, 1683–1697, https://doi.org/10.1175/1520-0442(1996)009<1683:SCMACE>2.0.CO;2.
Randall, D. A., and Coauthors, 2003: Confronting models with data: The GEWEX Cloud Systems Study. Bull. Amer. Meteor. Soc., 84, 455–469, https://doi.org/10.1175/BAMS-84-4-455.
Randall, D. A., C. DeMott, C. Stan, M. Khairoutdinov, J. Benedict, R. McCrary, K. Thayer-Calder, and M. Branson, 2016: Simulations of the tropical general circulation with a multiscale global model. Multiscale Convection-Coupled Systems in the Tropics: A Tribute to Dr. Michio Yanai, Meteor. Monogr., No. 56, Amer. Meteor. Soc., https://doi.org/10.1175/AMSMONOGRAPHS-D-15-0016.1.
Rauscher, S. A., and T. D. Ringler, 2014: Impact of variable-resolution meshes on midlatitude baroclinic eddies using CAM-MPAS-A. Mon. Wea. Rev., 142, 4256–4268, https://doi.org/10.1175/MWR-D-13-00366.1.
Raymond, D. J., and A. M. Blyth, 1986: A stochastic mixing model for nonprecipitating cumulus clouds. J. Atmos. Sci., 43, 2708–2718, https://doi.org/10.1175/1520-0469(1986)043<2708:ASMMFN>2.0.CO;2.
Redi, M. H., 1982: Oceanic isopycnal mixing by coordinate rotation. J. Phys. Oceanogr., 12, 1154–1158, https://doi.org/10.1175/1520-0485(1982)012<1154:OIMBCR>2.0.CO;2.
Reed, B. C., J. F. Brown, D. VanderZee, T. R. Loveland, J. W. Merchant, and D. O. Ohlen, 1994: Measuring phenological variability from satellite imagery. J. Veg. Sci., 5, 703–714, https://doi.org/10.2307/3235884.
Richardson, L. F., 1911: IX. The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam. Philos. Trans. Roy. Soc. London, 210A, 307–357, https://doi.org/10.1098/rsta.1911.0009.
Richardson, L. F., 1922: Weather Prediction by Numerical Process. Cambridge University Press, 250 pp.
Richter, J. H., A. Solomon, and J. T. Bacmeister, 2014: Effects of vertical resolution and nonorographic gravity wave drag on the simulated climate in the Community Atmosphere Model, version 5. J. Adv. Model. Earth Syst., 6, 357–383, https://doi.org/10.1002/2013MS000303.
Riehl, H., and J. Malkus, 1958: On the heat balance of the equatorial trough zone. Geophysica, 6, 503–538.
Rienecker, M. M., and Coauthors, 2011: MERRA: NASA’s Modern-Era Retrospective Analysis for Research and Applications. J. Climate, 24, 3624–3648, https://doi.org/10.1175/JCLI-D-11-00015.1.
Ringler, T., M. Petersen, R. L. Higdon, D. Jacobsen, P. W. Jones, and M. Maltrud, 2013: A multi-resolution approach to global ocean modeling. Ocean Modell., 69, 211–232, https://doi.org/10.1016/j.ocemod.2013.04.010.
Ritchie, H., 1991: Application of the semi-Lagrangian method to a multilevel spectral primitive-equations model. Quart. J. Roy. Meteor. Soc., 117, 91–106, https://doi.org/10.1002/qj.49711749705.
Ritchie, H., C. Temperton, A. Simmons, M. Hortal, T. Davies, D. Dent, and M. Hamrud, 1995: Implementation of the semi-Lagrangian method in a high-resolution version of the ECMWF forecast model. Mon. Wea. Rev., 123, 489–514, https://doi.org/10.1175/1520-0493(1995)123<0489:IOTSLM>2.0.CO;2.
Roach, L., C. Horvat, S. Dean, and C. M. Bitz, 2018: An emergent sea ice floe size distribution in a global coupled ocean-sea ice model. J. Geophys. Res. Oceans, 123, 4322–4337, https://doi.org/10.1029/2017JC013692.
Roach, W. T., and A. Slingo, 1979: A high resolution infrared radiative transfer scheme to study the interaction of radiation with cloud. Quart. J. Roy. Meteor. Soc., 105, 603–614, https://doi.org/10.1002/qj.49710544508.
Robert, A. J., 1966: The integration of a low order spectral form of the primitive meteorological equations. J. Meteor. Soc. Japan, 44, 237–245, https://doi.org/10.2151/jmsj1965.44.5_237.
Robert, A. J., 1969: The integration of a spectral model of the atmosphere by the implicit method. Proc. WMO/IUGG Symp. on NWP, Tokyo, Japan, Meteorological Society of Japan, VII.19–VII.24.
Robert, A. J., 1981: A stable numerical integration scheme for the primitive meteorological equations. Atmos.–Ocean, 19, 35–46, https://doi.org/10.1080/07055900.1981.9649098.
Robert, A. J., 1982: A semi-Lagrangian and semi-implicit numerical integration scheme for the primitive meteorological equations. J. Meteor. Soc. Japan. Ser. II, 60, 319–325.
Robert, A. J., J. Henderson, and C. Turnbull, 1972: An implicit time integration scheme for baroclinic models of the atmosphere. Mon. Wea. Rev., 100, 329–335, https://doi.org/10.1175/1520-0493(1972)100<0329:AITISF>2.3.CO;2.
Rodgers, C. D., and C. D. Walshaw, 1966: The computation of infra-red cooling rate in planetary atmospheres. Quart. J. Roy. Meteor. Soc., 92, 67–92, https://doi.org/10.1002/qj.49709239107.
Roeckner, E., L. Dümenil, E. Kirk, F. Lunkeit, M. Ponater, B. Rockel, R. Sausen, and U. Schlese, 1989: The Hamburg version of the ECMWF model (ECHAM). Research Activities in Atmospheric and Oceanic Modelling, CAS/JSC Working Group on Numerical Experimentation Rep. 13, 7.1.
Rossby, C.-G., 1937: Isentropic analysis. Bull. Amer. Meteor. Soc., 18, 201–209, https://doi.org/10.1175/1520-0477-18.6-7.201.
Rossow, W. B., C. Delo, and B. Cairns, 2002: Implications of the observed mesoscale variations of clouds for the earth’s radiation budget. J. Climate, 15, 557–585, https://doi.org/10.1175/1520-0442(2002)015<0557:IOTOMV>2.0.CO;2.
Rothman, L. S., and Coauthors, 1987: The HITRAN database: 1986 edition. Appl. Opt., 26, 4058–4097, https://doi.org/10.1364/AO.26.004058.
Rotstayn, L. D., 2000: On the “tuning” of autoconversion parameterizations in climate models. J. Geophys. Res., 105, 15 495–15 507, https://doi.org/10.1029/2000JD900129.
Rotstayn, L. D., B. F. Ryan, and J. J. Katzfey, 2000: A scheme for calculation of the liquid fraction in mixed-phase stratiform clouds in large-scale models. Mon. Wea. Rev., 128, 1070–1088, https://doi.org/10.1175/1520-0493(2000)128<1070:ASFCOT>2.0.CO;2.
Rowntree, P., 1976: Response of the atmosphere to a tropical Atlantic Ocean temperature anomaly. Quart. J. Roy. Meteor. Soc., 102, 607–625, https://doi.org/10.1002/qj.49710243308.
Rowntree, P., and J. Walker, 1978: The effects of doubling the CO2 concentration on radiative-convective equilibrium. Carbon Dioxide, Climate and Society, Elsevier, 181–191.
Rutledge, S. A., and P. V. Hobbs, 1984: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. XII: A diagnostic modeling study of precipitation development in narrow cold-frontal rainbands. J. Atmos. Sci., 41, 2949–2972, https://doi.org/10.1175/1520-0469(1984)041<2949:TMAMSA>2.0.CO;2.
Sadourny, R., 1972: Conservative finite-difference approximations of the primitive equations on quasi-uniform spherical grids. Mon. Wea. Rev., 100, 136–144, https://doi.org/10.1175/1520-0493(1972)100<0136:CFAOTP>2.3.CO;2.
Sadourny, R., 1975: The dynamics of finite-difference models of the shallow-water equations. J. Atmos. Sci., 32, 680–689, https://doi.or