What: Over 30 registered participants from 10 countries met to review our current understanding of and future challenges in physics–dynamics coupling (PDC).
When: 2–4 December 2014
Where: Ensenada, Mexico
Currently, atmospheric general circulation models (GCMs) discretize separately the processes described by the adiabatic equations driving the evolution of the grid-scale variables (the “dynamics”) and the bulk or statistical parameterizations of the mean effects of the subgrid and/or diabatic processes (the “physics”). These two components then need to be coupled to each other. This coupling of physics parameterizations to the resolved fluid dynamics (the dynamical core) is an important aspect of geophysical models. However, model development is often strictly segregated into either physics or dynamics, and the coupling is often guided by technical convenience rather than analysis (Williamson 2007). Hence, this area has many more unanswered questions than in-depth understanding. Furthermore, recent developments in the design of dynamical cores (significant increases in resolution, the move to nonhydrostatic equation sets, variable resolution and adaptive meshes, etc.), extended process physics (prognostic microphysics, 3D turbulence, nonvertical radiation, etc.), and predicted future changes in the computational infrastructure (e.g., exascale with its stronger need for task parallelism, data locality, and asynchronous time stepping) add even more complexity and new questions. To address these issues, the first Physics–Dynamics Coupling (PDC14) workshop (http://pdc.cicese.mx) was held in December 2014. The highlights and motivation of PDC14, which took place in Ensenada, Baja California, México, are summarized below.
Since the pioneering work of Lander and Hoskins (1997), probably one of the first papers to explicitly address issues in the coupling between physical parameterizations and dynamics, a considerable number of papers on this subject have appeared. Examples are Caya et al. (1998), Williamson (2002), Dubal et al. (2004, 2006), and the references therein. Some of these address the physics–dynamics coupling (PDC) issues in idealized frameworks with simplified equation sets; others assess the characteristics of the coupling in complex GCMs. Regardless of the individual approach, they all shed light on the emerging challenge: how to unify the dynamics and physics developments, each highly interdependent and, in its own right, highly complex, into a single Earth System Model.
Beljaars et al. (2004) illustrate the importance of the subject. When adjusting the physics–dynamics coupling in relation to the time-stepping scheme in the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF), they show clear improvements in the root-mean-square (RMS) errors of the 10-m wind speeds. Since then, more and more scientists in the field have begun to question the rather technically convenient union of physics and dynamics, the lack of understanding, and, in some cases, the lack of compatibility and consistency, as detailed below.
However, what becomes apparent when reviewing the literature is that even if physics–dynamics coupling is generally recognized by modelers as important, it is not in itself a “core” research subject. Probably because of the interdisciplinary nature of the problem, researchers typically specialize either in the numerics of the resolved fluid dynamics or in the representation of physical subgrid processes, and address the coupling problem only “incidentally.” The literature does not yet provide any consensus on how to approach the problem. One method is to simplify the problem drastically into tractable algebraic terms for analysis, at the risk of losing generality and the causal link between the theoretical analysis and the behavior of the full model (Dubal et al. 2005, 2006). The other approach is to keep the full complexity of the GCM (Williamson 2002) with variations in the huge parameter space that these models span; this can make it extremely difficult to isolate causes and effects because of the nonlinear nature of the physics–dynamics coupling. Both methods, while offering some guidance and reassurance, often fail to provide clear and practical conclusions for the improvement of the models and paths for further investigation. This workshop built on this 20-yr history and aimed at gathering a community of modelers to discuss how to tackle the problem of physics–dynamics coupling, and how to promote it as a research subject, in the context of recent and future GCM developments.
There are two paradigms for how to couple the various physics components to each other and to the dynamical core: parallel and sequential (Williamson 2002; Dubal et al. 2004). In the parallel paradigm, also called process splitting, each physics scheme, as well as the dynamical core, is stepped forward in time independently of the others, starting from the previous model state. All processes record the time tendencies of the prognostic variables, which are then used to update the model state at the end of a time step. This is a simple approach, amenable to task parallelism on parallel computing architectures, but one that requires a suitably short time step so that the interactions between the different processes over a time step can be safely ignored.
In the sequential paradigm, also called time splitting, the physical processes are first ordered in some way and then, starting with the model state at the previous time step, the first scheme updates that model state that is then fed into the next scheme and so on until the final scheme completes the time step. This approach has less restriction on the time step but is dependent (sometimes critically so) on the order chosen and is not amenable to task parallelism.
There is also a third, hybrid approach, applied, for example, in the Met Office’s Unified Model, which uses the parallel approach for some physical parameterization schemes (those whose time scale is considered slow compared with the model time step) and the sequential approach for the remaining physics processes whose time scale is comparable with or shorter than the time step. This hybrid approach intends to take advantage of the strengths of both coupling strategies but is more complex with respect to the GCM software infrastructure.
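The difference between the two basic paradigms can be sketched in a few lines of code. The sketch below is purely illustrative: the function names are invented for this example, the prognostic “state” is reduced to a single number, and each “process” is an arbitrary function returning a time tendency. Real GCMs operate on full three-dimensional state vectors with far more elaborate updates.

```python
def step_process_split(state, processes, dt):
    """Parallel (process) splitting: every process computes its tendency
    from the same input state; the tendencies are summed and applied
    once at the end of the time step."""
    total_tendency = sum(p(state) for p in processes)
    return state + dt * total_tendency

def step_time_split(state, processes, dt):
    """Sequential (time) splitting: each process in turn updates the
    state, so the result depends on the ordering of `processes`."""
    for p in processes:
        state = state + dt * p(state)
    return state
```

With nonlinear processes the two results differ at second order in the time step, and the time-split answer additionally depends on the chosen ordering, which is the essence of the trade-offs discussed above.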
The particular coupling choice is influenced by many aspects. For example, some numerical weather prediction (NWP) models, such as the ECMWF model IFS (Temperton et al. 2001) or the most recent U.K. Met Office model (Wood et al. 2014), combine a semi-Lagrangian advection scheme with a semi-implicit temporal discretization. This combination allows for longer time steps and favors the sequential or hybrid coupling of physics and dynamics; within the physical parameterization suite it also makes the solution of the equations more implicit, which is desirable from a numerical stability perspective. Other semi-implicit spectral transform dynamical cores, such as the Eulerian and semi-Lagrangian dynamical cores available in the Community Atmosphere Model (CAM) at the National Center for Atmospheric Research (NCAR), follow a hybrid approach (Neale et al. 2010). These two dynamical cores employ a sequentially split physical parameterization suite and then couple the physics package to the dynamical core in a process-split way. The latter choice is motivated by the fact that process splitting in spectral models saves computational resources, since fewer spectral transformations need to be computed. Dynamical cores with explicit time-stepping schemes and short time steps might also favor the parallel (process) physics–dynamics coupling strategy; indeed, parallel coupling is common in research cloud-resolving models (CRMs) and large-eddy simulation (LES) models with short time steps. However, most GCMs with explicit time-stepping techniques now separate the dynamics time step from the physics time step and, for example, subcycle the dynamical core several times before the physical parameterization suite is called. Since the resulting physics time steps are relatively long, sequential splitting is the dominant coupling strategy for most operational GCMs.
This, of course, raises other questions such as the time frequency of the physics–dynamics coupling (Reed et al. 2012; Williamson 2013) and how this choice is impacted by hardwired time-scale coefficients in some physical parameterization schemes. In addition, research has been active to assess whether the physics forcings should be computed on computational grids that are different from the dynamical core grid (Williamson 1999; Molod 2009) and whether the physics–dynamics coupling should employ explicit, implicit, or split-implicit time-stepping methods (Staniforth et al. 2002a,b). It is clear from these discussions that the optimal coupling strategy is not an obvious choice.
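The subcycling arrangement mentioned above, in which the dynamical core takes several short steps before the physics is applied once over the longer physics time step, can be sketched as follows. This is a toy scalar example with invented names, not the interface of any particular model.

```python
def coupled_step(state, dynamics, physics, dt_phys, n_subcycles):
    """Sequentially split step with a subcycled dynamical core: the
    dynamics advances n_subcycles short steps of length
    dt_phys / n_subcycles, then the physics tendency is applied once
    over the full physics time step dt_phys."""
    dt_dyn = dt_phys / n_subcycles
    for _ in range(n_subcycles):
        state = state + dt_dyn * dynamics(state)
    return state + dt_phys * physics(state)
```

The ratio of the physics to the dynamics time step, and hence the coupling frequency, is exactly the kind of tunable choice examined by Reed et al. (2012) and Williamson (2013).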
One of the recognized challenges, so far hindering advancement of the understanding in this field, is the lack of comprehensive test strategies, that is, a suite of tests ranging from fundamental analysis all the way through to full model tests that are sufficiently sensitive to the physics–dynamics coupling strategy. As briefly mentioned before, two extremes have been explored in the past. First, mathematical analysis with simplified equations, as illustrated in a series of publications by Dubal et al. (2004, 2005, 2006), has shown that there is a clear theoretical benefit in certain coupling strategies. Second, Beljaars et al. (2004) and Wan et al. (2013) have shown the usefulness of complex GCM tests. However, in general there is a lack of understanding of whether, and if so how, the theoretical results relate to the full models. For this reason several groups are currently working on “bridging this gap” by developing test cases that have nearly the complexity of full model runs but are sufficiently transparent and portable to aid experimentation and model comparison. A particular focus is on the inclusion of simplified moist processes because they reveal a fundamental coupling process between the dynamical core and the physics. This has been explored, for example, by Frierson et al. (2006) and in the tropical cyclone test case of Reed and Jablonowski (2012). The latter was used as the basis for a model intercomparison during the Dynamical Core Model Intercomparison Project in 2012 (DCMIP-2012; www.earthsystemcog.org/projects/dcmip-2012).
Although tests exist, so far they have not been used to demonstrate the benefits and/or detriments of different coupling strategies. Wan et al. (2015) designed a numerical convergence test to identify major sources of time-stepping error in a fully fledged GCM. The top-down diagnostic approach offers a promising alternative for the evaluation of coupling techniques.
PHYSICS–DYNAMICS COMPATIBILITY AND CONSISTENCY.
If individual components of the system (each physical and dynamical process) continue to be improved without consideration given to their coupling, then the expected improvement is unlikely to be realized in the full GCM, since the existing coupling error can increase in relative importance. This issue can be especially challenging for community models that employ a “plug and play” philosophy, in which schemes developed in one modeling system are also used in another, quite different one. Even if each subsystem employs a self-consistent set of approximations, inconsistent approximations across model components may break the basic conservation laws in the discretized equations. Indeed, the workshop highlighted that the consistency between the parameterizations can in fact be as important as the consistency between the physics and the dynamics.
A trivial, but real, example is the choice made for each of the physical constants, such as gravity, the gas constant, and so on. The code of one physics parameterization might make one choice, while another scheme might make another. A classic example concerns the choice of how to represent water phase changes in the model. Energy conservation in a multiphase system relies on the dependence of the latent heats on temperature and on the multiphase formulation of the specific heat capacities and the moist gas “constant.” The specific heat capacities and the multiphase gas constant are important variables of both a moist dynamical core and moist physics parameterizations. So although each component might, in isolation, be energetically consistent, the coupling of the two subcomponents may need a sophisticated interface to ensure energy conservation.
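The latent heat example can be made concrete with a small numerical sketch. The constants below are standard textbook approximations and the scenario is invented for illustration; the point is only that two components which disagree on the temperature dependence of the latent heat release different amounts of energy for the same condensed mass.

```python
# Standard reference values (approximate); the two "components" are hypothetical.
L0  = 2.501e6   # latent heat of vaporization at T0 [J kg^-1]
CPV = 1870.0    # specific heat of water vapor at constant pressure [J kg^-1 K^-1]
CL  = 4186.0    # specific heat of liquid water [J kg^-1 K^-1]
T0  = 273.15    # reference temperature [K]

def latent_heat_constant(T):
    """Component A treats the latent heat as a constant."""
    return L0

def latent_heat_kirchhoff(T):
    """Component B lets the latent heat vary with temperature
    (Kirchhoff's relation for constant specific heats)."""
    return L0 + (CPV - CL) * (T - T0)

# Condensing the same vapor mass in the two components releases different
# amounts of energy: a spurious source/sink appears in the coupled budget.
T  = 300.0     # air temperature [K]
dq = 1.0e-3    # condensed water mass fraction [kg kg^-1]
mismatch = (latent_heat_constant(T) - latent_heat_kirchhoff(T)) * dq
```

At 300 K the mismatch amounts to roughly 60 J per kilogram of air for each gram of condensate, a small but systematic leak in the coupled energy budget.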
Although a more disciplined approach to model design and the imposition of good coding standards can address this issue, it is harder to ensure that the same fundamental and conceptual, sometimes hidden or forgotten, approximations are made across all model components. For example, as the resolution of atmospheric models increases, the hydrostatic assumption is removed in the dynamical cores. This raises the question whether a new nonhydrostatic dynamical core, with typically height-based vertical coordinates, can still be coupled to an existing physics package that was initially developed for a hydrostatic GCM with pressure-based vertical coordinates. Most often, the physics schemes for hydrostatic models assume that the physical processes do not change the pressure at a model level until a pressure adjustment is computed at the very end of the physics time step. This ensures the proper pressure adjustments if water vapor phase changes are present, while conserving the dry air mass within the vertical column. The validity of such built-in characteristics within the physics package needs to be reassessed when nonhydrostatic dynamical cores are introduced.
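A schematic of the end-of-step adjustment described above might look as follows. This is only a sketch under strong simplifying assumptions (a single layer, pressure thickness proportional to total mass, specific humidity q as the only water species); the function name and interface are invented, not taken from any particular physics package.

```python
def adjust_pressure_thickness(dp_old, q_old, q_new):
    """End-of-physics pressure adjustment (schematic): phase changes have
    altered the specific humidity from q_old to q_new, so the layer
    pressure thickness dp is rescaled such that the dry-air mass in the
    layer, proportional to dp * (1 - q), is unchanged."""
    dp_dry = dp_old * (1.0 - q_old)  # dry-air part, to be conserved
    return dp_dry / (1.0 - q_new)
```

For example, if condensation and fallout remove half the vapor in a layer (q going from 0.010 to 0.005), the layer thins slightly while its dry mass stays fixed. In a hydrostatic model this mass change feeds back on the pressure profile only at the end of the physics step, which is exactly the built-in assumption that a nonhydrostatic, height-coordinate core may violate.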
Achieving consistency between dynamical cores and physical parameterizations is hard enough. The coupling of atmosphere-only models with other very complex systems such as ocean, ice, land surface, or chemistry models further increases the challenge and the scope for inconsistent approximations. In such cases, the consistency with respect to the complexity of the components in a large Earth System Model should also be addressed.
During the workshop several additional current and future challenges were identified:
With the continuing increase in resolution, new challenges arise when models are run at a resolution where the scale of the parameterized process becomes similar to the cell size. The physical process is then partially resolved and can no longer be treated statistically. This is true, for example, for deep convection schemes at grid spacings between 1 and 10 km. This resolution range is denoted as the “gray zone.” But even with 25-km grid spacings, some conventional mass flux–based deep convection schemes start to lose their credibility owing to their typical built-in assumptions. For example, they assume that the fractional area of the convective updrafts is small (on the order of 10% or smaller) in comparison to the grid size and that the updrafts are balanced by downdrafts within the same grid box. In addition, their “quasi-equilibrium” principle assumes that convection is almost in equilibrium with the large-scale, nonconvective processes. These assumptions become invalid with shrinking grid spacings. This leads to the need for “seamless” and “scale-aware” parameterizations, particularly when variable resolution meshes are used. Additionally, the convergence of the parameterizations needs to be continuously monitored as the resolution increases. Research is now under way in these areas (e.g., Arakawa and Wu 2013; Gustafson et al. 2013; Grell and Freitas 2014; Xiao et al. 2015).
The current development of supercomputing architecture drives the need for local (ideally pointwise), independent, asynchronous computations. This is in direct conflict with scientific improvements that would benefit from three-dimensional physics as well as a closer coupling with the dynamics.
The numerical schemes within the dynamical cores and physical parameterizations lead to different representations of the prognostic variables in the two model components. For example, the use of a high-order finite-element approach in a dynamical core does not match the more typical low-order finite-difference or finite-volume numerical schemes in the physics. Questions arise as to whether, and how much, these numerical discrepancies matter.
Addressing the physics–dynamics coupling issue requires closer collaborations between the dynamical core and physical parameterization communities.
The workshop was very well received by the participants. The consensus was that before PDC14 no forum existed that addressed the challenges of physics–dynamics coupling. The participants expressed a strong wish to develop PDC into a biennial workshop series. A joint journal article is currently in preparation, summarizing the state of the art and current challenges.
WATCH THIS SPACE.
It has been confirmed that the successor workshop PDC16 will take place at the Pacific Northwest National Laboratory on 20–22 September 2016. Please do not hesitate to contact the authors for additional information and updates.
The authors thank Hui Wan for verifying the contents of this meeting summary. This conference was hosted by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Ensenada, México. We gratefully acknowledge the support from Consejo Nacional de Ciencia y Tecnología (CONACYT) and the Department of Physical Oceanography at CICESE. We would especially like to thank Luis Zavala Sansón and Silvio Guido Lorenzo Marinone Moschetto for their support of the workshop and Thamar Cordova and Guadalupe Pacheco Cabrera for their administrative support. Christiane Jablonowski was supported by the U.S. Department of Energy, Office of Science Awards DE-SC0003990 and DE-SC0006684.