## 1. Introduction

In current meteorological numerical models, the resolutions are not always sufficient to explicitly represent various physical processes. Hence, parameterizations are needed. An explicit approach can be taken only when the model resolution is sufficiently high. For example, in global climate modeling, a parameterization is employed for the convective precipitation processes, whereas explicit cloud physics are introduced in cloud-resolving model (CRM) simulations. However, a difficult decision has to be faced between a parameterization and explicit physics when the resolution of the model is intermediate. This has been a long-standing issue in mesoscale convective simulations, in which the convective precipitation processes are only partially resolved (cf. Molinari and Dudek 1992). It is also becoming an increasingly acute issue in global modeling, as the models begin to resolve the mesoscale explicitly (cf. Moncrieff and Klinker 1997).

However, such a choice need not be dichotomous; an intermediate approach is possible. The traditional dichotomous approach has been based on a premise of scale separation (cf. Arakawa and Schubert 1974). Here, we emphasize that the transition of the model physics from a fully implicit state that requires a parameterization to an explicit state is rather gradual (cf. Yano et al. 2000). Starting from this perspective, a simple approach is proposed in the present paper, aiming to alleviate the degrading effects of lower resolutions without calling for parameterizations.

The principal idea of the proposed approach is to recover the full variability of the system by multiplying a simulated value by a “renormalization” factor, so that the lack of model variability due to a low resolution can be compensated. The idea is akin to the renormalization approach in statistical physics (cf. Kadanoff 2000), in which the renormalization group theory more formally justifies this approach. As a result, an accurate estimate of the macroscopic state is obtained by renormalizing the degraded microscopic descriptions. A similar approach is expected to be applicable, to some extent, in many meteorological systems, because a wide range of studies (cf. Schertzer and Lovejoy 1991) suggests that the systems often satisfy a self-similarity, or more generally, a scaling law, with the latter characterized by a power-law spectrum. Under these conditions, the system loses variability at a constant rate as the data resolution degrades. Hence, a “renormalization” by a constant factor can recover this loss of variability, although a rigorous application of the renormalization group may not be attainable. For this reason, we refer to our approach by the same name, but with quotation marks.
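This reasoning can be made concrete with a minimal numerical sketch (assuming nothing beyond NumPy; the function name is illustrative, not from the original study). For a power-law spectrum *E*(*k*) ∝ *k*^{−β}, the fraction of total variance retained under a truncation at wavenumber *k*_{c} depends only on the slope β and on *k*_{c}, not on the particular realization, which is why a single multiplicative factor can compensate the loss:

```python
import numpy as np

def retained_variance_fraction(beta, k_max, k_c):
    """Fraction of total variance kept when a power-law spectrum
    E(k) ~ k**(-beta) is truncated at wavenumber k_c."""
    k = np.arange(1, k_max + 1, dtype=float)
    spectrum = k ** (-beta)
    return spectrum[: int(k_c)].sum() / spectrum.sum()

# With a Kolmogorov-like slope beta = 5/3, truncating at k_c = 16 out of
# k_max = 256 still retains the bulk of the variance, because a power-law
# spectrum is dominated by the large scales.
frac = retained_variance_fraction(5.0 / 3.0, 256, 16)
alpha = 1.0 / frac  # the "renormalization" factor for the variance
```

Because `alpha` depends only on the slope and the truncation level, it could in principle be tabulated once per regime; this is the sense in which a constant factor can recover the lost variability.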

The present approach can also be considered a natural extension of the concept of statistical equilibrium (cf. Emanuel et al. 1994) to the situations where clear scale separation no longer exists. Even under these situations, the system may still remain under a statistical equilibrium that provides a universal energy distribution in the wavenumber space. Such an argument naturally leads to the “renormalization” approach.

Here, we only assume self-similarity of the system in its most general sense, including forms realized in wavelet space (cf. Figs. 7 and 8 of Yano et al. 2001b), that is, not necessarily corresponding to a fractality in real space. A more practical requirement for the vertical fluxes, as considered in the present paper, is that the eddies have more or less the same vertical structure regardless of the horizontal scales. Our analysis suggests that this tendency is satisfied more often than not in convective systems, especially when the system is well organized. Indeed, the present approach may be considered statistical rather than physical. Nevertheless, the present result implies that it may be possible to parameterize deep convective systems without looking into the details of convective dynamics, which are traditionally characterized by updrafts, downdrafts, entrainments, detrainments, etc.

The renormalization approach has already been employed in the atmospheric sciences for radiative transfer (Hansen 1971; Cairns et al. 2000) and boundary layer turbulence (Cheng et al. 2002) problems. Its less rigorous application, as in the present case, has also been proposed for the subplume-scale representation in the mass flux formulation of the boundary layer parameterization (Petersen et al. 1999). A possibility of applying this idea to deep convective systems has already been suggested by Yano et al. (1996).

For a demonstration of this “renormalization” idea, we take the vertical fluxes of heat, moisture, and momentum as examples of the subgrid-scale processes. The thermodynamic vertical fluxes, especially, play a crucial role in the dichotomy between a parameterization and explicit cloud physics. Typically, with decreasing resolutions, the vertical transport of both heat and moisture by resolved vertical motions becomes less efficient. With this lack of vertical transport, both an explicit activation of conditional instability and the cloud physics descriptions become increasingly difficult. This forces us to turn on a parameterization. Here, instead, we propose to recover the full vertical fluxes from a degraded description by applying “renormalizations,” which also enables us to maintain explicit cloud physics.

We test the proposed idea by using CRM simulations for both tropical and midlatitude situations, as outlined in the next section. Here, in the analyses, the domain-averaged quantities in CRM are equated with the grid-box averages in large-scale models. The analysis is performed in wavelet space for our convenience, as detailed in section 3. This also allows us to apply a similar “renormalization” approach on the wavelet-compressed dataset in section 4. The paper is concluded in section 5.

## 2. CRM simulations

A nonhydrostatic anelastic mesoscale model (Meso-NH) jointly developed by the Laboratoire d'Aérologie and Centre National de Recherches Météorologiques–Groupe d'Etude de l'Atmosphère Météorologique (CNRM–GAME; cf. Lafore et al. 1998) was used for the present study (model details are available online at http://www.aero.obsmip.fr/mesonh/). The model includes a 1.5-order turbulence scheme, a radiation scheme interacting with the clouds, and a prognostic bulk description of cloud physics.

The simulations were performed over a doubly periodic domain of 512 km × 512 km with a horizontal resolution of 2 km. A 47-level stretched grid is used in the vertical direction, with a resolution of 70 m at the surface, stretching to 700 m toward the top of the model domain at 25 km. The sponge layer is placed above 20 km, and only the lowest 18 km are considered in the following analyses. The model is forced by observed domain-mean large-scale winds, temperature, and moisture, following the standard procedure for mesoscale CRM experiments (cf. Grabowski et al. 1996).

Two experiments are performed. The main case used for the present analysis is a 7-day experiment initiated at 1200 UTC 10 December 1992 during the Tropical Ocean Global Atmosphere Coupled Ocean–Atmosphere Response Experiment (TOGA COARE), corresponding to a deep convection period but with a relatively weak easterly wind shear (cf. Fig. 1 of Guichard et al. 2000). The model runs on an *f* plane assuming the latitude 2.0°S. The sounding data used for forcing are given at 6-h intervals. We have also performed the analysis for another case that corresponds to an Atmospheric Radiation Measurement Program (ARM; Xie et al. 2002) period over the central United States with a constant Coriolis factor for 36.6°N. The experiment was performed for 36 h and was initiated at 1200 UTC 29 June 1997, corresponding to a squall line period accompanied by a strong westerly wind shear. Both experiments cover organized precipitating convective events, with the precipitation peaks at *t* = 30 h and 15 h, respectively, after the initiation of the integrations (cf. Fig. 1 of Chaboureau and Bechtold 2002).

## 3. “Renormalization” under the wavenumber truncation

### a. Wavelet expansion

In order to test the “renormalization” idea conveniently both under degrading resolutions (this section) and under data compression (section 4), we use the orthogonal discrete wavelet proposed by Meyer (1992; see also, e.g., Mallat 1998, section 7.2.2). This wavelet was used in our previous studies (Yano et al. 2001a,b), to which we refer for details. We emphasize that, as far as the analysis of this section is concerned, the same idea could equally be tested by a filtering in Fourier space, or even by a simple space-averaging approach.

For degrading the resolution of data, the wavelet can be used in a completely analogous manner to the Fourier expansion, because it equivalently forms a complete orthogonal set. A major difference, however, concerns the degree of discretization of the scales, which are given by a measure of wavenumber as a power of two: *k* = 2^{i}, with *i* = 0, … , log_{2}*N* − 1, and *N* the total number of grid points in a horizontal direction. We designate the localizations indexed by *j* = 1, … , *k* for a given wavenumber *k*, introduce the single index *l* = *k* + *j*, and represent a wavelet by *ψ*_{l}(*x*). Examples of Meyer wavelets are shown in Fig. 4 of Yano et al. (2001b). Hence, we expand variables [*φ*(*x,* *y,* *z*), e.g.] in three-dimensional CRM simulations as

*φ*(*x,* *y,* *z*) = Σ_{lx} Σ_{ly} *φ̂*_{lx,ly}(*z*)*ψ*_{lx}(*x*)*ψ*_{ly}(*y*),  (1)

where *φ̂*_{lx,ly}(*z*) is the expansion coefficient; the subscripts *x* and *y,* here and in the following, designate the two horizontal directions; and *N*_{x} = *N*_{y} = 256 in the present study.
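The power-of-two scale structure and the *l* = *k* + *j* bookkeeping can be illustrated with a short sketch. The snippet below is a hypothetical illustration using the Haar wavelet in place of the Meyer wavelet (for brevity; the indexing is the same), computing an orthonormal discrete wavelet transform and exposing the coefficients under the single index *l*:

```python
import numpy as np

def haar_dwt(f):
    """Orthonormal Haar transform of f (length N, a power of two).
    Coefficients are keyed by the single index l = k + j, with
    wavenumber k = 2**i and position j = 1, ..., k, mirroring the
    indexing used for the Meyer wavelet in the text."""
    s = np.asarray(f, dtype=float)
    coeffs = {}
    i = int(np.log2(len(s))) - 1   # finest scale index: log2(N) - 1
    while len(s) > 1:
        pairs = s.reshape(-1, 2)
        details = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        s = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        k = 2 ** i
        for j, d in enumerate(details, start=1):
            coeffs[k + j] = d
        i -= 1
    return s[0], coeffs  # mean coefficient and wavelet coefficients

f = np.arange(8.0)
mean_coeff, coeffs = haar_dwt(f)
```

Because the transform is orthonormal, the sum of squared coefficients (including the mean coefficient) equals the sum of squares of the data, which is the Parseval identity underlying the flux expressions of the next subsection.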

### b. Vertical flux

The vertical flux of a variable *φ* is given by

⟨*w*′*φ*′⟩(*z*) = Σ_{lx} Σ_{ly} *ŵ*′_{lx,ly}(*z*)*φ̂*′_{lx,ly}(*z*),  (2)

where the primes designate the deviations from the domain mean, so that *φ̂*′_{0,0}(*z*) ≡ 0. A degraded description is obtained by truncating this expansion at a critical wavenumber *k*_{c} ≡ 2^{ic}:

⟨*w*′*φ*′⟩^{kc}(*z*) = Σ_{ix=0}^{ic} Σ_{iy=0}^{ic} Σ_{jx=1}^{kx} Σ_{jy=1}^{ky} *ŵ*′_{lx,ly}(*z*)*φ̂*′_{lx,ly}(*z*),  (3)

where *k*_{x} = 2^{ix} and *k*_{y} = 2^{iy}. The lower the truncation wavenumber *k*_{c} (so the index *i*_{c}), the heavier the degradation of the resolution, with the original total flux (2) recovered with *i*_{c} = 7. Note that the corresponding expression under the Fourier expansion is obtained by replacing the first two sums with that over the wavenumbers, and neglecting the remaining sums over *j*_{x} and *j*_{y}.
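A one-dimensional sketch of the truncated flux (3) can be written as follows (again a hypothetical illustration with the Haar wavelet standing in for the Meyer wavelet): the covariance ⟨*w*′*φ*′⟩ is a sum of products of wavelet coefficients, and the truncation simply limits which scale indices are included.

```python
import numpy as np

def haar_by_scale(f):
    """Orthonormal Haar coefficients of f grouped by scale index i
    (i = 0 is the coarsest; the mean coefficient is returned apart)."""
    s = np.asarray(f, dtype=float)
    by_scale = {}
    i = int(np.log2(len(s))) - 1
    while len(s) > 1:
        pairs = s.reshape(-1, 2)
        by_scale[i] = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        s = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        i -= 1
    return s[0], by_scale

def truncated_flux(w, phi, i_c):
    """1D analog of the truncated flux (3): keep scale indices i <= i_c.
    Dropping the mean coefficients plays the role of the primes."""
    _, w_hat = haar_by_scale(w)
    _, p_hat = haar_by_scale(phi)
    n = len(w)
    return sum((w_hat[i] * p_hat[i]).sum() for i in w_hat if i <= i_c) / n

rng = np.random.default_rng(0)
w, phi = rng.standard_normal(64), rng.standard_normal(64)
exact = np.mean((w - w.mean()) * (phi - phi.mean()))
full = truncated_flux(w, phi, i_c=5)  # i_c = log2(64) - 1: no truncation
```

With all scales retained, the wavelet sum reproduces the exact covariance, by orthonormality; lowering `i_c` degrades the estimate in the same way as the truncation in Eq. (3).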

### c. “Renormalization” principle

Figure 1 shows the vertical fluxes at the precipitation peak (*t* = 30 h) with various wavenumber truncations [Eq. (3)]. Although the amplitude of the flux decreases with decreasing truncation wavenumbers *k*_{c} (i.e., decreasing horizontal resolutions), the profile of the flux tends to remain similar. Hence, the original total flux ⟨*w*′*φ*′⟩ may be estimated from the truncated flux ⟨*w*′*φ*′⟩^{kc} by

⟨*w*′*φ*′⟩(*z*) ≃ *α*(*k*_{c})⟨*w*′*φ*′⟩^{kc}(*z*),  (4)

where the “renormalization” factor *α*(*k*_{c}) is defined as a function of the truncation wavenumber *k*_{c}. It is estimated by the generalized least squares fitting

*α*(*k*_{c}) = ∫_{0}^{H} *ρ*^{n+1}⟨*w*′*φ*′⟩(⟨*w*′*φ*′⟩^{kc})^{n} *dz* / ∫_{0}^{H} *ρ*^{n+1}(⟨*w*′*φ*′⟩^{kc})^{n+1} *dz*,  (5)

with *H* the top of the analysis domain (*H* = 18 km) and *ρ* the reference density profile, so that the sign of the total flux, sgn(⟨*w*′*φ*′⟩), is retained. We tested the exponents *n* = −1, 0, 1, 2, and the case with *n* = 0 was found to provide the best result. Note that the standard least squares fitting gives *n* = 1, and the above generalized formula is obtained by multiplying the integrands by a weight (*ρ*⟨*w*′*φ*′⟩^{kc})^{n−1}.

The “renormalized” vertical fluxes based on the formulas (4) and (5) are shown in Fig. 2 for *i*_{c} = 1–6, with *k*_{c} = 2^{ic}; the original total flux (*i*_{c} = 7) is also shown by the solid curve. The “renormalization” of the vertical flux is satisfactory, considering the simplicity of the approach. Nevertheless, this “renormalizability” is largely lost when all the convective-scale structures are smoothed out by taking a sufficiently low critical wavenumber, as shown for the case with *k*_{c} = 2, because the mesoscale tends to present different vertical structures.
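The fitting formula (5) transcribes directly into code. The sketch below uses idealized stand-in profiles (the density and flux shapes are illustrative, not from the simulations); when the full and truncated profiles are exactly proportional, every exponent *n* returns the same factor, and the choice of *n* matters only when their shapes differ:

```python
import numpy as np

def renorm_factor(flux_full, flux_trunc, rho, dz, n=0):
    """Generalized least squares estimate of alpha, cf. Eq. (5):
    n = 1 is ordinary density-weighted least squares; n = 0 reduces
    to a ratio of mass-weighted vertical integrals."""
    weight = rho ** (n + 1) * flux_trunc ** n
    return np.sum(weight * flux_full * dz) / np.sum(weight * flux_trunc * dz)

z = np.linspace(0.0, 18.0e3, 47)           # analysis depth H = 18 km
dz = z[1] - z[0]
rho = 1.2 * np.exp(-z / 8.0e3)             # idealized reference density
flux_trunc = np.sin(np.pi * z / 18.0e3)    # idealized truncated profile
flux_full = 2.5 * flux_trunc               # same shape, larger amplitude
```

For these proportional profiles, `renorm_factor` returns 2.5 for any of the tested exponents (the case *n* = −1 is excluded here because the stand-in truncated flux vanishes at the boundaries).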

### d. Time-dependence analyses

In order to see the stability of the method, the time dependence of the “renormalization” factor *α*(*k*_{c}) is plotted for the TOGA COARE case in Fig. 3. Also, the normalized errors, defined as the rms errors relative to the standard deviation of the vertical variability of the flux, are plotted as functions of time in Fig. 4. The results for the ARM case are similar, and are not shown here.
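This error measure, as we read its definition, can be sketched as follows (the profiles are illustrative stand-ins): an error of order unity means the estimate does no better than the profile's own vertical variability.

```python
import numpy as np

def normalized_error(estimate, truth):
    """RMS error of an estimated flux profile, normalized by the standard
    deviation of the true profile's vertical variability."""
    return np.sqrt(np.mean((estimate - truth) ** 2)) / np.std(truth)

z = np.linspace(0.0, 18.0, 47)            # height, km
truth = np.sin(np.pi * z / 18.0)          # idealized full flux profile
truncated = 0.4 * truth                   # right shape, reduced amplitude
renormalized = 2.5 * truncated            # after applying alpha = 2.5
```

When the truncated profile has the right shape, “renormalization” removes essentially all of the normalized error, which is the favorable situation documented in Figs. 3 and 4.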

Overall, the “renormalization” factors (Fig. 3) tend to be constant with time, except for the initial period (first day) of the integration, and some weak modulations associated with the development and decay of the convective systems. The same remarks also follow for the errors (Fig. 4), but some spiky behavior is also noted for the highest truncation (*k*_{c} = 1). Except for these exceptional events, the relative errors of these “renormalized” fluxes are always less than the order of unity for all the truncation levels considered. This is encouraging considering the inherent difficulties for parameterizing the subgrid-scale fluxes.

In order to better understand this general behavior, time series of vertical profiles are shown as time-vertical cross sections for the heat and zonal-wind fluxes in Figs. 5 and 6, respectively, for various truncation levels (but without “renormalization”). The vertical profiles of the vertical fluxes evolve rather coherently with time over all the spatial scales, which explains the relative stability of the “renormalization” factors. However, the development of the mesoscale structure (*k*_{c} ≤ 4; e.g., at *t* = 18–24 h, as seen in Figs. 5d and 6d) tends to lag behind that of the convective scales, and its decay tends to be faster: during the first convective event, for example, the convective-scale component continues beyond *t* = 40 h, after the mesoscale component has largely disintegrated. This leads to a modulation of the “renormalization” factors, and some spiky increases of the errors.

The momentum transport is more sensitive to the change of these dynamical regimes under the competition between the meso- and the convective-scale components, as comparison of Figs. 5 and 6 indicates. This also leads to wider fluctuations of “renormalization” factors and normalized errors as seen in Figs. 3 and 4.

The above analysis indicates a potential usefulness of this approach. However, the “renormalization” factors clearly differ for the two simulated cases (not shown), indicating the necessity for defining them depending on the large-scale environment and the flux types of concern.

## 4. “Renormalization” of the wavelet-compressed data

An advantage of the wavelet expansion is that it often allows a high compression of the original data (cf. Mallat 1998, chapter 9). This principle works well for convective systems, because the wavelet can effectively represent spatially isolated coherent structures (e.g., convective towers, stratiform clouds) by employing a function set with varying localizations in both spread (scale) and position. The idea of wavelet data compression is analogous to wavenumber-dependent filtering, but the magnitudes of the expansion coefficients are used as the criterion instead. All the expansion coefficients smaller in magnitude than a threshold are discarded from the expansion (1). This leads to a compression of data, because the data size can be reduced without noticeably losing the variability in the original data. Nevertheless, in the high-compression limit, some model variability is inevitably lost. Hence, the “renormalization” approach can again be used in order to compensate for this effect.

Specifically, an expansion coefficient *φ̂*_{lx,ly}(*z*) is retained only when it satisfies

|*φ̂*′_{lx,ly}(*z*)| > *μ*⟨*φ̂*′^{2}(*z*)⟩^{1/2},

where ⟨*φ̂*′^{2}(*z*)⟩^{1/2} is the height-dependent standard deviation of the coefficients *φ̂*_{lx,ly} (taken over *l*_{x} and *l*_{y} in the wavelet space at each vertical level) with the prime indicating the deviation from the domain mean, and *μ* is the threshold factor. If we set *μ* = 1, 2, 5, 10, then these generally correspond to data compression on the order of 10%, a few percent, 1%, and 0.1%, respectively.
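The thresholding criterion can be sketched as follows, with a field of Gaussian coefficients as a stand-in for one level's wavelet coefficients (real convective fields are far more intermittent than Gaussian noise, which is why the compressions quoted above retain more variability than this worst case suggests):

```python
import numpy as np

def compress(coeffs, mu):
    """Keep only coefficients whose deviation from the mean exceeds mu
    standard deviations in magnitude. Returns the compressed deviations,
    the fraction of coefficients kept, and the fraction of variance
    retained, whose inverse plays the role of alpha(mu)."""
    dev = coeffs - coeffs.mean()
    mask = np.abs(dev) > mu * dev.std()
    kept = np.where(mask, dev, 0.0)
    return kept, mask.mean(), (kept ** 2).sum() / (dev ** 2).sum()

rng = np.random.default_rng(1)
coeffs = rng.standard_normal((256, 256))   # stand-in coefficient field
kept, frac_kept, var_kept = compress(coeffs, 2.0)
alpha = 1.0 / var_kept
```

For Gaussian coefficients, *μ* = 2 keeps about 5% of the coefficients but roughly a quarter of the variance, since the few large coefficients carry a disproportionate share; spatially localized convective structures concentrate the variance far more strongly, consistent with the retention rates quoted in the text.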

The time dependence of the “renormalization” factor *α*(*μ*) and the normalized errors is shown for the TOGA COARE case in Figs. 7 and 8, respectively. The “renormalization” factor *α*(*μ*) under wavelet compression is more stable with time than the case with wavenumber truncation. It also appears to be more universal, independent of the flux variables, and only depending on the compression rate *μ.* The “renormalization” factor, which can also be considered a measure of the degree of total variability retained after compression, further shows an efficient compressibility of data by wavelets: the order of 90% of variability is retained by the 10%-level compression (*μ* = 1), and still 10% even by 1%-level compression (*μ* = 5). The normalized errors associated with “renormalization” are also more stable than the wavenumber-truncation case, and overall, remain less than the order of unity for the whole period. The especially spiky increase of errors found in the wavenumber-truncation case is less prominent in this case.

## 5. Summary and conclusions

The “renormalization” approach for a representation of subgrid-scale (unresolved scale) processes has been proposed. The main idea is to recover the total variability of the system (e.g., vertical heat flux) under a limited horizontal resolution by multiplying the simulated value by a constant factor (i.e., the “renormalization” factor). Focus has been placed on the vertical fluxes, because their accurate estimates are also likely to justify the use of the explicit cloud microphysics, even under crude horizontal resolutions.^{1} Diagnostic tests for deep convective periods are performed by considering the CRM domain as corresponding to a large-scale grid box. Our diagnostic tests have shown that “renormalization” works rather well, although the “renormalization” factor exhibits a non-negligible time dependence.

An immediate application of the present approach, which also provides its next key test, is as a technique for better evaluating the widely recognized grid-size sensitivities of convective simulations over meso- to synoptic-scale domains, both with and without cumulus parameterizations (e.g., Liu et al. 2001a,b). Such fully prognostic experiments will be, however, far more involved than diagnostic testing, with the neighboring grid columns fully interacting with each other. For example, mesoscale convective simulations often exhibit higher grid-scale vertical velocities with degrading horizontal resolutions, in such a manner that a correct level of vertical fluxes is still maintained (R. Laprise, 2000, personal communication). Hence, a direct application of a result from diagnostic tests could lead to an excessive estimate of the vertical fluxes, rather than taming the grid-scale extreme vertical velocities. This indicates that a lower “renormalization” factor is safer to use in these realistic applications.

This “renormalization” approach should recover the original fine-resolution results under lower resolutions, if a loss of total variability is the main factor that controls the grid-size sensitivities. A necessary condition for such applications is that all the major features of the system are at least marginally resolved. In this case, the method works as a simple remedy for compensating degrading resolutions, although a good estimation of the “renormalization” factor may remain a difficult technical task. On the other hand, this approach fails when a feature that is rather negligible in the full-resolution run contributes critically to the subsequent evolution of the whole system, for example, by triggering a new convective event. The extent of such criticality of the system ultimately limits the applicability of this “renormalization” approach.

A simpler, yet more effective “renormalization” of the system is found to be possible under the wavelet compression of data. In this case, the “renormalization” factor is more stable with time, and also tends to be invariant with change of the variables. This further suggests a feasibility of a highly truncated CRM modeling in the wavelet space, to a much higher degree than a simple compression approach allows, with the help of the “renormalization” approach. We are currently further pursuing this possibility.

## Acknowledgments

Discussions with Françoise Guichard and Jean-Philippe Lafore are acknowledged.

## REFERENCES

Arakawa, A., and W. H. Schubert, 1974: Interaction of a cumulus cloud ensemble with the large-scale environment. Part I. *J. Atmos. Sci.*, **31**, 674–701.

Cairns, B., A. A. Lacis, and B. E. Carlson, 2000: Absorption within inhomogeneous clouds and its parameterization in general circulation models. *J. Atmos. Sci.*, **57**, 700–714.

Chaboureau, J-P., and P. Bechtold, 2002: A simple cloud parameterization derived from cloud resolving model data: Diagnostic and prognostic applications. *J. Atmos. Sci.*, **59**, 2362–2372.

Cheng, Y., V. M. Canuto, and A. M. Howard, 2002: An improved model for the turbulent PBL. *J. Atmos. Sci.*, **59**, 1550–1565.

Emanuel, K. A., J. D. Neelin, and C. S. Bretherton, 1994: On large-scale circulations in convecting atmospheres. *Quart. J. Roy. Meteor. Soc.*, **120**, 1111–1143.

Grabowski, W. W., X. Wu, and M. W. Moncrieff, 1996: Cloud-resolving modeling of tropical cloud systems during Phase III of GATE. Part I: Two-dimensional experiments. *J. Atmos. Sci.*, **53**, 3684–3709.

Guichard, F., J-L. Redelsperger, and J-P. Lafore, 2000: Cloud-resolving simulation of convective activity during TOGA–COARE: Sensitivity to external sources of uncertainties. *Quart. J. Roy. Meteor. Soc.*, **126**, 3067–3095.

Hansen, J. E., 1971: Multiple scattering of polarized light in planetary atmospheres. Part II: Sunlight reflected by terrestrial water clouds. *J. Atmos. Sci.*, **28**, 1400–1426.

Kadanoff, L. P., 2000: *Statistical Physics: Statics, Dynamics and Renormalization.* World Scientific, 500 pp.

Lafore, J. P., and Coauthors, 1998: The Meso-NH atmosphere simulation system. Part I: Adiabatic formulation and control simulations. *Ann. Geophys.*, **16**, 90–109.

Liu, C., M. W. Moncrieff, and W. W. Grabowski, 2001a: Explicit and parameterized realizations of convective cloud systems in TOGA COARE. *Mon. Wea. Rev.*, **129**, 1689–1703.

Liu, C., M. W. Moncrieff, and W. W. Grabowski, 2001b: Hierarchical modelling of tropical convective systems using explicit and parametrized approaches. *Quart. J. Roy. Meteor. Soc.*, **127**, 493–515.

Mallat, S., 1998: *A Wavelet Tour of Signal Processing.* 2d ed. Academic Press, 637 pp.

Meyer, Y., 1992: *Wavelets and Operators.* Cambridge University Press, 223 pp.

Molinari, J., and M. Dudek, 1992: Parameterization of convective precipitation in mesoscale numerical models: A critical review. *Mon. Wea. Rev.*, **120**, 326–344.

Moncrieff, M. W., and E. Klinker, 1997: Mesoscale cloud systems in the tropical western Pacific as a process in general circulation models: A TOGA COARE case-study. *Quart. J. Roy. Meteor. Soc.*, **123**, 805–827.

Petersen, A. C., C. Beets, H. van Dop, P. G. Duynkerke, and A. P. Siebesma, 1999: Mass-flux characteristics of reactive scalars in the convective boundary layer. *J. Atmos. Sci.*, **56**, 37–56.

Schertzer, D., and S. Lovejoy, Eds., 1991: *Non-Linear Variability in Geophysics.* Kluwer Academic Press, 318 pp.

Xie, S., and Coauthors, 2002: Intercomparison and evaluation of cumulus parametrizations under summertime midlatitude continental conditions. *Quart. J. Roy. Meteor. Soc.*, **128**, 1095–1136.

Yano, J. I., J. C. McWilliams, and M. W. Moncrieff, 1996: Fractality in idealized simulations of large-scale tropical cloud systems. *Mon. Wea. Rev.*, **124**, 838–848.

Yano, J. I., W. W. Grabowski, G. L. Roff, and B. E. Mapes, 2000: Asymptotic approaches to convective quasi-equilibrium. *Quart. J. Roy. Meteor. Soc.*, **126**, 1861–1887.

Yano, J. I., M. W. Moncrieff, and X. Wu, 2001a: Wavelet analysis of simulated tropical convective cloud systems. Part II: Decomposition of convective and mesoscales. *J. Atmos. Sci.*, **58**, 868–876.

Yano, J. I., M. W. Moncrieff, X. Wu, and M. Yamada, 2001b: Wavelet analysis of simulated tropical convective cloud systems. Part I: Basic analysis. *J. Atmos. Sci.*, **58**, 850–867.

^{1} Potentially, the cloud microphysics can also be further “renormalized,” but we expect that this procedure would be less crucial than that for the vertical fluxes.