The purpose of the workshop series “Concepts for Convective Parameterizations in Large-Scale Models” has been to bring together small groups of European scientists to discuss the fundamental theoretical issues of convection parameterization. The workshop series has been funded by European Cooperation in the Field of Scientific and Technical Research (COST) Action ES0905. The sixth workshop in the series discussed the issues of generalization, consistency, and unification of parameterizations in atmospheric modeling, with a focus on convection.
The workshop is perhaps best summarized through the round table discussion held on the last day, which opened by asking what the starting point for such parameterization development should be. Key presentations from the workshop are highlighted below in the context of this discussion.
Subgrid-scale atmospheric processes are ultimately turbulent, given the extremely high Reynolds numbers of the associated flows. Thus, at least in this ultimate sense, turbulence theories are the most robust starting point for parameterization development: a rising convective plume is associated with filamentation of the air at the plume edge and subsequent dispersion by turbulent motions. As a result, as it rises, the plume air is gradually replaced by external air and the plume gradually loses its compactness, eventually breaking up into fragments of clouds.
What: Forty-one scientists from 16 European countries and Israel met to discuss how to construct parameterizations correctly.
When: 19–21 March 2013
Where: Palma, Spain
A semi-analytical method based on the Langevin equation can be used to study particle advection by turbulent flows (Vlad and Spineanu 2004, and references therein). A series of studies of two-dimensional turbulence reveals the trapping of fluid particles inside eddies, which leads to nonlinear transport with far-from-Gaussian statistics. Stochastic quasi-coherent structures are generated as a result. An extension of this method to three-dimensional convective cloud turbulence also finds trapping: turbulent diffusion is highly inhomogeneous, carried mainly by the untrapped fluid particles, whereas diffusion of the trapped particles is very small. The entrainment process is stochastic: it appears not only at the top and on the boundaries of the cloud, but also inside the cloud.
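The essence of a Langevin description of particle advection can be sketched in a few lines: each fluid particle carries a velocity that relaxes stochastically (here, an Ornstein–Uhlenbeck process integrated by the Euler–Maruyama scheme). The one-dimensional setting and the parameter values (`tau`, `sigma`) below are purely illustrative assumptions, not values from Vlad and Spineanu (2004):

```python
import math
import random

def simulate_langevin(n_steps=1000, dt=0.01, tau=1.0, sigma=1.0, seed=0):
    """Euler-Maruyama integration of a Langevin model for one fluid
    particle: the velocity u relaxes toward zero on time scale tau
    while being kicked by Gaussian noise of amplitude sigma.
    All parameter values are illustrative, not fitted to cloud data."""
    rng = random.Random(seed)
    x, u = 0.0, 0.0
    positions = []
    for _ in range(n_steps):
        # Ornstein-Uhlenbeck velocity update: linear drift + random forcing
        u += (-u / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # advect the particle with its current velocity
        x += u * dt
        positions.append(x)
    return positions

# A single realization; an ensemble of such trajectories yields the
# dispersion statistics (Gaussian or otherwise) discussed in the text.
traj = simulate_langevin()
```

Trapping effects of the kind described above would appear in such a model as strongly position- or regime-dependent `tau` and `sigma`, which is what makes the resulting diffusion inhomogeneous and non-Gaussian.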
Trapping, by limiting dilution to only part of the buoyant parcel, helps to maintain undiluted kernels of various sizes, stochastically distributed inside the cloud. This line of research may lead to the development of a parameterization based on turbulence physics.
Laboratory experiments are an equally important but long-neglected tradition for addressing these turbulence questions. The entraining-plume hypothesis, subsequently adopted by Arakawa and Schubert's (1974) mass-flux parameterization, was originally proposed on the basis of a water-tank experiment by Morton et al. (1956). During the discussion, a participant showed an impressive experiment on thermal plume evolution that simply used a humidifier as a plume source. Thanks to contemporary laser technology, extensive measurements of the velocity field are possible at much higher resolution than conventional large-eddy simulations (LESs) can achieve. If such direct measurements are to be relevant for parameterization verification, tests of the entrainment/detrainment hypothesis should instead be based on these high-resolution laboratory experiments.
A view opposing the ultimate turbulence perspective is to rely instead on the phenomenological observational information available at synoptic scales. This is a classical approach, originally established by Yanai et al. (1973). From this point of view, the ultimate goal of parameterizations is to predict correctly the apparent source terms, which can be diagnosed with a conventional sounding network. Clever exploitation of such a network can even provide some subgrid-scale information, such as the mass-flux profiles that must be specified in a mass-flux-based parameterization. The vast information potentially contained in these sounding-network datasets should not be forgotten.
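For orientation, the apparent heat source $Q_1$ and apparent moisture sink $Q_2$ of Yanai et al. (1973) are diagnosed as residuals of the large-scale budgets; their standard forms are

```latex
Q_1 \equiv \frac{\partial \bar{s}}{\partial t}
     + \bar{\mathbf{v}} \cdot \nabla \bar{s}
     + \bar{\omega} \frac{\partial \bar{s}}{\partial p}
   = Q_R + L(c - e) - \frac{\partial \overline{\omega' s'}}{\partial p},
\qquad
Q_2 \equiv -L \left( \frac{\partial \bar{q}}{\partial t}
     + \bar{\mathbf{v}} \cdot \nabla \bar{q}
     + \bar{\omega} \frac{\partial \bar{q}}{\partial p} \right)
   = L(c - e) + L \frac{\partial \overline{\omega' q'}}{\partial p},
```

where $s = c_p T + gz$ is the dry static energy, $q$ the specific humidity, $Q_R$ the radiative heating rate, $c$ and $e$ the condensation and evaporation rates, and primes denote subgrid-scale deviations. The left-hand sides are computable from sounding-network data; a parameterization must supply the right-hand sides.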
The basic working principle of this framework is quasi-equilibrium. In the course of the presentations, the need to go beyond this traditional assumption was much emphasized. However, the precipitation time series generated by an operational convection parameterization under quasi-equilibrium is often highly noisy, suggesting that the system does not evolve on the slow time scale that the hypothesis implies. This more basic issue should be addressed before moving to more sophisticated approaches.
An intermediate view between these two perspectives is to exploit information from finescale numerical modeling by cloud-resolving models (CRMs) and LESs, without getting into full turbulence details. This is a direction strongly promoted under the leadership of the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS; presently Global Atmospheric System Studies) over recent decades. The value of the vast information on subgrid-scale processes provided by such modeling is hardly disputed. For example, extensive diagnoses of entrainment and detrainment rates are available from these modeling results.
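Entrainment and detrainment rates diagnosed from CRM and LES output enter the standard steady-state plume mass and property budgets, which may be written schematically as

```latex
\frac{\partial M}{\partial z} = E - D = (\epsilon - \delta)\, M,
\qquad
\frac{\partial (M \varphi_c)}{\partial z}
  = E \bar{\varphi} - D \varphi_c + \rho\, S_{\varphi},
```

where $M$ is the convective mass flux, $E$ and $D$ are the entrainment and detrainment rates ($\epsilon$ and $\delta$ their fractional counterparts per unit height), $\varphi_c$ is an in-plume variable with environmental value $\bar{\varphi}$, and $S_{\varphi}$ is a source term (e.g., condensation heating).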
However, these models are far from perfect. For example, as one presenter pointed out, reproducing observed tendencies in deep convective momentum transport with CRMs is not easy. Moreover, such information may not be as directly useful as it seems at first glance. Parameterizations cannot be considered simple approximations of CRMs and LESs: the associated assumptions are so drastic that information from CRMs and LESs may not be directly relevant for verifying a parameterization formulation. Stated more emphatically, a parameterization is more like a sketch or schematic diagram of the reality represented by CRMs and LESs. In general, where curve fitting is used to construct a parameterization, it should be accompanied by an analysis of causal mechanisms.
Another perspective for bridging the gap between the turbulence and phenomenological views is to argue that, for various reasons, not all the details of subgrid-scale turbulent processes may be relevant for constructing a parameterization for large-scale flow simulations. This perspective is analogous to the concept of a slow manifold, constructed by filtering out fast-time-scale processes such as gravity waves, as originally formulated for an idealized dry large-scale atmospheric circulation. Although this perception is appealing, it is unlikely that such an analogy with a slow manifold can be established for atmospheric subgrid-scale processes: the fact that many of these processes are associated with coherent structures and spatiotemporal organization suggests that they should not be naively linked with the notion of a slow manifold.
Nevertheless, the analogy with a slow manifold is, at a very conceptual level, a potentially helpful perspective: intuitively, not all the physics matters for developing parameterizations. We may equally ask: how many turbulence features must be reproduced correctly in weather forecast models? For example, from the point of view of turbulence studies, reproduction of an inertial-subrange spectrum would be of critical importance. However, it does not automatically follow that its reproduction is also critically important for, say, a successful seasonal forecast.
MORAL AND WISDOM.
As a whole, the workshop identified multiple pathways for constructing parameterizations in a general, consistent, and unified manner. Once a basic strategy is defined, proceeding from there is more a matter of morality: working carefully and diligently (i.e., without cheating).
The moist thermodynamic description of the atmosphere is a good example that helps make this point. The extension of dry thermodynamics to its moist counterpart is conceptually straightforward. However, the actual procedure tends to be rather involved, and for the sake of simplicity, various approximations are introduced in many of the derivations found in the literature. A good lesson here is that a much simpler expression for the moist-adiabatic lapse rate is obtained when the whole derivation is performed without any approximations (Geleyn and Marquet 2012).
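Geleyn and Marquet's exact expression is beyond the scope of this summary, but for orientation, the commonly quoted approximate form of the moist-adiabatic lapse rate, derived with the usual simplifications, reads

```latex
\Gamma_m
  = g \, \frac{1 + \dfrac{L_v q_s}{R_d T}}
              {c_{pd} + \dfrac{L_v^2 q_s \varepsilon}{R_d T^2}},
```

where $L_v$ is the latent heat of vaporization, $q_s$ the saturation mixing ratio, $R_d$ the dry-air gas constant, $\varepsilon = R_d/R_v \approx 0.622$, and $c_{pd}$ the dry-air specific heat at constant pressure. The point of the exact derivation is precisely that it avoids the approximations hidden inside this familiar formula.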
An important piece of general wisdom is never to go backward. It is often tempting simply to add an extra term to represent something missing from the original formulation. For example, downdrafts have been added to mass-flux convection parameterizations in a manner that leaves their implementation within the overall formulation somewhat ad hoc. Recall that the basic idea of the mass-flux formulation rests on a dichotomy between updrafts and the environment; consistently adding a new component on top of them requires more careful consideration.
Efforts to recover such internal consistency in operational contexts deserve strong emphasis. Particularly important are achieving “seamlessness” from one version of a model to another (e.g., from a climate to a forecast version) and achieving “traceability” of physical effects identified in more explicit LES and CRM studies into a parameterization.
Unfortunately, pursuing generality, consistency, and unification in parameterizations is not simply a moral matter but also an ontological task: it is difficult for us to see the problem as a whole. Our situation may be compared to the famous Indian allegory of the blind scholars touching parts of an elephant in order to conceive the whole picture of the animal. Each scholar perceives only one part (the trunk, a leg, an ear, etc.), and they dispute vigorously with each other over the true nature of the elephant. By this analogy, scientists contest priorities in parameterization development because each of us sees only a part of the whole problem.
A way to avoid such myopic tendencies in our parameterization research is to make a parameterization formulation simple and compact so that we can see the formulation as a whole more easily. For example, the workshop featured a few presentations on the introduction of stochasticity into parameterizations. However, looking at the issue from a wider perspective of parameterization strategy, this idea is a mixed blessing. It is hard to beat the intuitive appeal of introducing stochasticity for representing subgrid-scale variabilities and for enhancing ensemble forecast spread. However, stochasticity adds extra complexity on top of the entrainment/detrainment and closure problems that we have to deal with operationally.
Another example, equally emphasized during the workshop, is the coupling of convection with boundary layer processes. Triggering convection by boundary layer processes such as cold pools is, again, an attractive possibility to pursue in parameterizations, and the fundamental importance of such investigations is hardly debatable. However, operational implementations of these processes tend to make convection parameterizations less reliable because of the complexity of the boundary layer processes involved.
Establishing generality, consistency, and unification of physical parameterizations in operational forecast models is becoming increasingly urgent with the accelerating increase in model resolution. A solid commitment from the operational research centers is definitely required, but it is not enough. The pathway is not unique, nor is the choice of pathway obvious, for the ontological reasons above. The problem must be seen as a whole before the right choice can be made. The facets of the problem will be put together into a single whole only through true interdisciplinarity: that is the theme of the next workshop in the series.