## 1. Introduction

An integrated atmospheric environmental forecasting and simulation system, described herein, has been, and continues to be, developed by the Meteorological Research Branch (MRB) in partnership with the Canadian Meteorological Centre (CMC) of Environment Canada. The bilingual acronym GEM has been adopted for the model around which this system is constructed. Thus in English the model is designated as “the Global Environmental Multiscale model,” whereas in French it is referred to as “le modèle Global Environnemental Multi-échelle.”

There are three important motivations for modeling the atmosphere. These are to forecast the weather, address climate issues such as global change, and address air quality issues such as smog, ozone depletion, and acid rain. Modeled atmospheric phenomena cover a very broad range of time and space scales, varying temporally from the subsecond scales of some chemical reactions to the centuries or millennia of climate simulation, and spatially from the fractions of a meter of chemical reaction and molecular diffusion, right through to the global scale of tens of thousands of kilometers. Numerical models of the atmosphere must be run with time and space scales that are commensurate with those associated with the phenomena of interest, and this imposes serious practical constraints and compromises on their formulation.

Emphasis is placed in this two-part paper on the concepts underlying the long-term developmental strategy, and on mesoscale results obtained using the described GEM model at its present state of development.

The goals of Part I are as follows:

- Motivate and outline the staged and ongoing development of a comprehensive and fully integrated global atmospheric environmental forecasting and simulation system.
- Discuss various design considerations.
- Summarize the current status of development.

Part II (Côté et al. 1998) is dedicated to presenting mostly mesoscale results for the GEM model, in particular those that led to its operational implementation for regional forecasting on 24 February 1997 at the CMC.

## 2. Rationale for developing a universal modeling system

### a. Operational weather forecasting considerations

Two operational data assimilation and weather forecasting cycles—one global and one regional—have been running daily at the CMC for a number of years. The global cycle, based on a spectral model (Ritchie and Beaudoin 1994), addresses medium-range forecasting needs and global data assimilation. The regional cycle provides the more detailed short-range (to 2 days) forecasts over North America and some of its adjacent waters, and was based on the Regional Finite Element (RFE) model (Mailhot et al. 1997) until its recent replacement by the GEM model.

Even though the two-cycle strategy is a costly choice, it is considered the only acceptable alternative for fulfilling the operational needs for *both* medium-range (and therefore necessarily global) forecasting and high-resolution regional forecasting. If the two cycles are centered around two distinct models, the strategy then requires the maintenance, improvement, and optimization of two sets of libraries and procedures. This is very labor intensive (Courtier et al. 1991; Cullen 1993; Côté et al. 1993), principally for three reasons. First, numerical weather prediction models and data assimilation systems need significant recoding in order to reap the benefits afforded by the new highly parallel high-performance computer platforms (Barros et al. 1995; Dickinson et al. 1995; Drake et al. 1995; Estrade and Birman 1995; Hack et al. 1995; Hammond et al. 1995; Henderson et al. 1995; Isaksen and Barros 1995; Michalakes et al. 1995; von Laszewski et al. 1995; Wolters et al. 1995). Second, improving the accuracy of the initial state of the atmosphere, that is, of the analysis, requires a significant investment in the research and development of new data assimilation methods (e.g., Daley 1991). In this regard, the development of the tangent linear model of a forecast model and its adjoint, often needed for four-dimensional data assimilation [e.g., Courtier et al. (1991), (1994)], is time consuming. Third, improving the predictive capability of a weather forecast model requires that a significant effort be devoted to the improvement of model parameterizations. These considerations motivate the consolidation of both the global and regional assimilation and forecasting systems within a single flexible modeling framework.

To build a model whose intrinsic versatility would allow it to serve as the hub of both forecast cycles, and therefore create a unified environment, was thus the main incentive for the development of the GEM model. Using similar reasoning, both Météo-France (Courtier et al. 1991) and the U.K. Meteorological Office (Cullen 1993) have developed unification strategies, which are, however, different. At Météo-France, the IFS/ARPEGE/ALADIN (Integrated Forecast System/Action de Recherche Petite Échelle Grande Échelle/Aire Limitée Adaptation Dynamique Développement International) forecast system has been developed in collaboration with the ECMWF (European Centre for Medium-Range Weather Forecasts). It is based on the use of a global variable-resolution spectral model as proposed in Courtier and Geleyn (1988). For medium-range applications the system is run at uniform resolution by ECMWF, whereas it is run by Météo-France with variable resolution concentrated over France for regional forecasts to 3 days (Yessad and Bénard 1996). Because of some unanticipated intrinsic limitations on resolution (Caian and Geleyn 1997) due to the use of the Schmidt (1977) coordinate transformation to achieve variable resolution with a spectral model, a limited-area nonhydrostatic version (ALADIN) has been developed (Bubnova et al. 1995) for higher and more-focused resolution applications using code shared with that of the global version.

The UKMO strategy is similar in spirit but different in detail. It uses the UKMO’s global uniform-resolution finite-difference unified forecast/climate model (Cullen 1993; Cullen et al. 1997) for medium-range forecasting and climate simulation, and a limited-area code-shared version for mesoscale forecasting. In the context of medium-range forecasting and climate simulation, Cullen (1993) remarked that “Maintenance of two separate systems is no longer practicable or justified.” In the context of medium-range and mesoscale forecasting he further noted that the unified strategy “avoids the need for two separate teams of scientists and software systems, and allows the techniques used in large-scale modelling to be rigorously tested at the higher resolution used in the mesoscale model against the detailed observations available over the United Kingdom.” It also ensures a certain consistency between the driven and driving models regarding the numerical methods and parameterizations used, inasmuch as there is much commonality for the latter between the two configurations.

In our view, most of the considerations that led both Météo-France and the UKMO to adopt unified modeling strategies also apply in the Canadian context. Our own unification strategy is based on the global variable-resolution strategy outlined in the shallow-water proof-of-concept work of Côté et al. (1993). As mentioned therein, the adopted strategy was influenced by the Courtier and Geleyn (1988) work but is, as argued both in Côté et al. (1993) and below, more flexible and of broader application.

### b. Air quality modeling considerations

A multiscale atmospheric model is also beneficial for the study of a wide range of air quality issues. A consistent treatment of atmospheric physical processes, advection, and chemical conversions is now recognized as essential to appropriately model the earth’s atmospheric chemistry (Dastoor and Pudykiewicz 1996; Rood 1996). The coupling of the GEM model with a comprehensive treatment of chemistry should provide a framework for the study of atmospheric chemistry, both tropospheric and stratospheric, from the global scale down to the meso-*γ* scale. The integrated system also potentially permits better operational forecasts due to an improved radiation budget based on a reliable ozone distribution.

### c. Research community modeling considerations

On the one hand, the Canadian mesoscale community needs a nonhydrostatic mesoscale research model for the development and validation of physical parameterizations, such as surface- and boundary-layer phenomena, moist convection, and gravity wave drag, as well as for nowcasting research. On the other hand, research in large-scale dynamical meteorology greatly benefits from access to a global baroclinic model, and it is often convenient, for example, for examining quasi-horizontal dynamics in the stratosphere, to also have access to a shallow-water one. These needs are presently fulfilled in Canada by the following Environment Canada models: the limited-area MC2 (Mesoscale Compressible Community) model (Benoit et al. 1997); the global spectral SEF model (Ritchie and Beaudoin 1994); and the global climate model (GCM) (McFarlane et al. 1992). A further need is for a model to attack problems that are inherently multiscale, for example, precipitating convective cloud systems (Moncrieff et al. 1997).

The GEM model’s multiscale potential, combined with switch-controlled options to use either the fully compressible Euler equations, the hydrostatic primitive equations, or the shallow-water equations (the latter two exist, and work is under way on the first), could provide a unified environment that would address these needs with a single community model. This would result in a further resource rationalization. It would also: facilitate technology transfer in both directions between universities and Environment Canada (e.g., the transfer, adaptation, and further validation of advanced parameterization schemes from the university milieu to an operational one); enable researchers to easily test new ideas in a variety of contexts (e.g., land surface schemes for climate and synoptic- and mesoscale weather applications) without having to familiarize themselves with another modeling system and without having to interface their code with it; facilitate multidisciplinary research (e.g., on stratospheric ozone depletion); and facilitate research that requires the linkage of operational databases with models (e.g., the provision of initial and boundary conditions, verification analyses, or climatologies), or that requires data assimilation capability (e.g., assimilation of chemical-species data).

To fully achieve our goals for a highly flexible community modeling system, a code-shared limited-area option, similar in concept to the UKMO unification strategy, should ultimately be developed. This would allow high-resolution “process study” type integrations to be performed with simple boundary conditions, thereby facilitating the development and calibration of parameterization schemes. It would also permit a comprehensive evaluation of the relative merits of the driven limited-area nesting approach against the global variable resolution one under well-controlled conditions using identical spatial and temporal discretization methods, physical parameterizations, horizontal and vertical resolution over the area of interest, initial conditions, and lower boundary conditions.

### d. Climate modeling considerations

As time progresses, the distinction between weather-forecast and climate models becomes ever more blurred, due to the increased sophistication of parameterizations in weather forecast models and the increased resolution of climate models. Evidence for this is provided by the diverse mix of weather forecasting and climate models used by participants making 10-yr simulations in the Atmospheric Model Intercomparison Project (Gates 1992, 1995). It is now possible to reconfigure a global atmospheric circulation model for either weather forecasting or climate applications simply by fixing the resolution and appropriately choosing among available parameterizations. This is exemplified by the use of the UKMO Unified Model for both operational weather forecasting and climate simulation (Murphy 1995; Murphy and Mitchell 1995), and also by the use of the ARPEGE model at Météo-France (Yessad and Bénard 1996; Déqué and Piedelievre 1995).

As indicated in Cullen (1993), the advantages of doing so within a single universal model framework are threefold. First, it is more economical to maintain and further develop a single model than two or more. Second, any improvements (e.g., better parameterizations, numerical algorithms, diagnostics, optimized code for existing and new computer platforms, etc.) developed for weather forecasting or for climate simulation are immediately available for other applications without having to spend scarce, and possibly unavailable, resources on code transplantation, adaptation, and validation. Third, improved validation of parameterizations is possible because they can be easily tested in both weather forecasting and climate mode. Validation in climate mode reduces the likelihood of a climate drift when a parameterization is used in a data assimilation cycle, whereas running in weather forecast mode provides a better validation of the physical fidelity of certain parameterizations because of a more direct comparison against observations with much less spatiotemporal averaging of the verification data.

In their recent work at the Hadley Centre with a nested regional-climate model, Jones et al. (1995) and Jones et al. (1997) found inconsistencies between the climates of the driven (regional) and driving (global) models, and argued that a priority for future work should be to refine the experimental methodology. They briefly described three possible avenues: (i) retain one-way nesting but reduce the domain size to better constrain the large-scale circulation and reduce the inconsistencies between the global and regional climates; (ii) introduce two-way nesting between the global and regional climate models; and (iii) use a global spectral model with variable resolution as in the Déqué and Piedelievre (1995) study. Compared with the nested approach, they stated that this latter option has the clear advantage of circumventing consistency problems, but the clear disadvantage that resolution at the antipodes of the enhanced-resolution region is degraded beyond that of a present-day global climate model. This is due to the aforementioned limitation by the Schmidt (1977) transformation on how resolution varies in a variable-resolution global spectral model, and it could potentially be addressed by the more flexible variable-resolution strategy adopted herein. It both permits a concentration of the resolution over an area of interest, and allows one to constrain the resolution at the resolution antipodes to be no worse than that of a present-day global climate model. This does not guarantee that this approach will indeed be viable for regional climate modeling (e.g., perhaps the effect of large-scale teleconnections is such that any local mesh refinement becomes questionable), but the Fox-Rabinovitz et al. (1997) comparison of uniform- and variable-resolution integrations for the Held and Suarez (1994) problem suggests that the approach should be further explored.

The proposed global modeling framework could, with an appropriate choice of parameterizations, be used for both global climate modeling and, provided the Fox-Rabinovitz et al. (1997) methodology withstands further scrutiny and validation, also for regional climate modeling.

### e. In summary

The foregoing motivates the development of a new highly flexible modeling system capable of meeting Canada’s weather forecasting needs, both operational and research, for the coming years. Such a modeling system also has the potential to meet those of air quality and climate modeling. A discussion follows of some of the more important design considerations for the GEM model at the heart of this modeling system.

## 3. Variable horizontal resolution

### a. Rationale for global variable resolution

The compromise of regional modeling is to favor high local resolution at the expense of a reduced period of validity. To protect the integrity of a forecast over an area of interest throughout the forecast period requires a computational domain much larger than the area of interest, since poorly resolved features at and near the computational boundaries propagate inward and contaminate the forecast, resulting in an ever-diminishing subdomain over which the forecast is accurate. Even under the most favorable circumstances, errors will be advected inward by the local wind. As a minimum, the uniform-resolution part of any computational domain therefore has to be sufficiently large that the entire embedded area of interest will not be contaminated at any time during the integration period by any error advected inward from its boundary. Also, any resolution degradation or any application of numerical artifices, such as blending or enhanced diffusion, should only be applied outside this pristine uniform-resolution part of the computational domain. Generally, the longer the integration time and the larger the area of interest, the larger must be the pristine uniform-resolution computational domain within which the area of interest is embedded. However, for some situations, for example, small-scale surface forcing, the forced component of the flow may be more important than the transient one. A realistic and more detailed response, for example, low-level channeling by valleys of an incoming uniform flow, may then be obtained via a local enhancement of resolution, without the need for correspondingly high resolution far upstream.

Strategies for regional modeling fall into two broad classes: interactive and noninteractive (e.g., Anthes 1983; Arakawa 1984; Staniforth 1997). For the *noninteractive* approach (the most popular one) a coarse-resolution forecast is used to specify time-dependent lateral boundary conditions for a model integrated over a limited area. The principal difficulty with this approach is the proper specification of lateral boundary conditions for open domains, and is related to fundamental problems of mathematical well-posedness (Oliger and Sundström 1978). It is theoretically possible to obtain a well-posed problem with a pointwise specification of boundary conditions for the *continuously* defined Euler equations, but *not* for the hydrostatic primitive equations. However, most, if not all, limited-area models *overspecify* these conditions, regardless of which particular equation set (hydrostatic primitive, nonhydrostatic Euler, or anelastic) is employed, thereby leading to an ill-posed *discrete* problem. In the absence of any control mechanism this usually leads to noise at the smallest-resolvable scale, which often appears near outflow boundaries and then spuriously propagates upstream (e.g., Miyakoda and Rosati 1977; Arakawa 1984; Robert and Yakimiw 1986; Vichnevetsky 1986), and can even lead to computational instability (e.g., Baumhefner and Perkey 1982). To compound the problem, spatial computational solutions can under certain circumstances be spuriously forced by lateral boundary conditions (Mesinger 1973).

A common and essential ingredient of limited-area strategies is the introduction of an adjustment region immediately adjacent to the lateral boundaries, where one or both of the techniques of blending and diffusion, either explicit or implicit, are applied. Blending, that is, a weighted average of the fine-resolution driven forecast with the coarse-resolution driving one within a boundary region, can destabilize the dynamic equilibrium of an incoming flow (e.g., Staniforth 1997). Adding viscosity (diffusion) to the equations in a boundary region increases their order from first to second, but does not in general render an overspecified problem well posed (Oliger and Sundström 1978). While it can control noise problems, it does so at the expense of further smoothing the already coarsely resolved incoming flow and further reducing its accuracy. It can also give rise to spurious and detrimental viscous boundary layers that may spuriously propagate and interact with the flow elsewhere (Oliger and Sundström 1978).

A practical acid test that any successful limited-area model should meet (Yakimiw and Robert 1990) is that *the solution obtained over a limited area should well match that of an equivalent-resolution model integrated over a much-larger domain.* To rigorously validate a model by conducting such acid tests in a controlled manner is not trivial, principally because of the difficulty and cost of setting up and running the control experiment. This doubtless explains why very few reports of such tests may be found in the literature for the validation of regional models using either of the two strategies. It is surprisingly difficult to meet this acid test even under very simple conditions. Motivated by the failure of a baroclinic limited-area model undergoing validation tests to satisfy this test, Robert and Yakimiw (1986) evaluated, in the context of the linearized 1D shallow-water equations, the nesting strategies of Williamson and Browning (1974), Perkey and Kreitzberg (1976), and Davies (1976), variants of which are still used in almost all of today’s limited-area models. They found that none of these methods works acceptably well for this simple problem. They did, however, note that these methods would work well in this context if the initial fields were first “flattened” close to the boundaries in a certain way. The results of their follow-up work for the nonlinear 2D shallow-water equations (Yakimiw and Robert 1990) suggest that their approach represents an improvement over previous ones. However, it remains to demonstrate its advantage for baroclinic models, where boundary conditions are significantly more difficult to specify and apply than for shallow-water models.

There have been a number of other reports over the years, both theoretical and practical, concerning lateral boundary condition–related difficulties with the noninteractive approach. These include Sundström and Elvius (1979), Davies (1983), Errico and Baumhefner (1987), Gustafsson (1990), Imbard et al. (1987), Vukicevic and Errico (1990), Errico et al. (1993), Alpert et al. (1996), and Paegle et al. (1997). Arakawa’s (1984) summary of the situation at the time of writing was: “Unfortunately, there is no standard method that has been demonstrated satisfactory for a broad spectrum of atmospheric events.” Although this evaluation was quite some time ago, it is nevertheless corroborated by the recent studies of Errico et al. (1993), Alpert et al. (1996), and Paegle et al. (1997). These studies all indicate that lateral boundary condition error can, depending upon the meteorological situation, importantly contribute to the total error. In particular, Alpert et al. (1996) believe that boundary factors play a crucial role in mesoscale modeling and that their importance has been underestimated by the mesoscale community. We concur. In our opinion, the problems of specifying and applying lateral boundary conditions for limited-area models have not as yet been entirely resolved, and more must be done to validate the methodologies employed by today’s mesoscale models, both interactive and noninteractive. A tangent linear model of a forecast model provides a powerful evaluation tool in this regard (Errico et al. 1993).

Our own preference for strategy is the *interactive* approach (e.g., Harrison and Elsberry 1972; Phillips and Shukla 1973; Kurihara and Tuleya 1974; Courtier and Geleyn 1988) where the resolution is varied in some manner away from the fine resolution of an area of interest to the coarser resolution of a surrounding outer region. Arakawa (1984) claims that this approach is conceptually superior. It has the desirable features that the flows inside and outside the area of interest mutually interact in a single dynamic system, and that it addresses the well-posedness issue of limited-area models. This is accomplished at the cost of integrating over a larger domain, usually taken to be either quasi-hemispheric or global. The cost effectiveness of this approach is discussed further below.

Imposing a wall boundary condition in the equatorial region, as in, for example, the RFE model (Mailhot et al. 1997) and the NCEP Nested Grid Model (DiMego et al. 1992), has the virtue of yielding a mathematically well posed problem. Note that to be consistent, the initial conditions in the vicinity of the boundary must be adjusted to satisfy the wall condition of a contained flow. Provided this is done in such a way as not to destroy the dynamic equilibrium of the flow over the area of interest because of a spurious induced large-scale imbalance, the adjusted tropical flow has insufficient time during the forecast period to contaminate the midlatitude forecast. Note, however, that strong Hadley circulations are important for sustaining subtropical jet streams, which can influence midlatitude flow. This both makes it more difficult to satisfactorily adjust the initial conditions to respect an equatorial wall, and restricts the validity of this approach as a function of both latitude and length of forecast period: the equatorial wall strategy is consequently more viable for Canada than for the United States. Taking the outer-region domain to be global rather than quasi-hemispheric, as advocated here, is less restrictive and facilitates obtaining an initial atmospheric state that is in good large-scale dynamic balance and less sensitive to the geographical location of the region of interest.

A smooth degradation of resolution in the outer domain avoids the deleterious impact on the accuracy of the solution of an abrupt change in resolution (Arakawa 1984; Zhang et al. 1986; Vichnevetsky and Turner 1991; Fox-Rabinovitz et al. 1997). The results of Gravel and Staniforth (1992) indicate that even if the resolution can be varied abruptly without introducing significant noise, the accuracy of the solution can nevertheless be severely and unacceptably degraded.

### b. Different approaches to global variable resolution

Variable resolution over the globe can be achieved in a number of different ways. In Courtier and Geleyn (1988) and Hardiker (1997) a continuous coordinate mapping due to Schmidt (1977) is employed such that the efficiency of the spectral method is hardly affected, and this approach is used in Météo-France’s operational ARPEGE model (Courtier et al. 1991). However the nature of the conformal coordinate transformation limits the focusing of the resolution to, roughly speaking, meso-*β*-scale applications (Caian and Geleyn 1997). A different continuous coordinate mapping, which is weakly varying and orthogonal but nonconformal, is used in Sharma et al. (1987) in the context of a finite-difference discretization. In Paegle (1989), Paegle et al. (1997), and Côté et al. (1993), variable resolution is achieved using finite elements. For the Paegle (1989) and Paegle et al. (1997) formulation the resolution is varied and focused in the north–south coordinate direction only.

The Côté et al. (1993) formulation adopted here is more general, and it permits resolution to be simultaneously varied in both coordinate directions in a flexible manner. It is an adaptation of the variable-resolution approach of the RFE model to spherical geometry using a regular (but variable resolution) arbitrarily rotated latitude–longitude mesh. The regular latitude–longitude mesh facilitates computational efficiency since its regularity is reflected in the resulting matrix structures, and certain properties that only hold for regular meshes can then be exploited [see Staniforth (1987) for discussion]. Rotating the mesh permits resolution to be focused over any area on the earth, and addresses the aforementioned limitation of the RFE model, namely, that it was designed for midlatitude applications and not for high-resolution tropical ones.

Sufficiently far removed from the uniform-resolution subdomain, the variable-resolution forecast will of course be of inferior quality to that of a uniform-resolution medium-range one, and this is only to be expected. It is simply the result of trading enhanced accuracy over a region of interest against reduced accuracy (or no forecast at all) elsewhere, and it is the essence of all regional forecasting strategies.

Variable resolution gives rise to a local degradation in accuracy and this results in local flow distortions. This distortion is, however, quite small in regions where the resolution is not significantly degraded. Kalnay de Rivas (1972) and Fox-Rabinovitz et al. (1997) have examined the truncation errors associated with the approximation of derivatives on a variable-resolution mesh. They show that approximating derivatives on nonuniform but smoothly varying meshes such as those described herein can be considered to be equivalent to approximating them with uniform resolution in a transformed coordinate system. Smoothly varying the mesh has two advantages. First it keeps the local truncation errors reasonably small, although they are admittedly larger in the region of degraded resolution, as one would expect. Second, compared to using meshes with an abrupt variation in resolution, spurious dispersion is greatly reduced as discussed in Fox-Rabinovitz et al. (1997) and in considerably more detail in Vichnevetsky (1986). Smoothly varying the resolution greatly reduces the possibility that the group velocity spuriously changes sign, which is the mathematical criterion that determines whether wave reflection occurs or not. This makes it much less likely that gravitational disturbances propagating outward from the uniform-resolution part of the domain will be spuriously reflected back from the internal boundary where resolution changes. Instead they continue outward as they should, albeit with some flow distortion as they leave the uniform-resolution subdomain. This behavior is analogous to an optics problem (Davies 1983), where light is smoothly refracted in a continuously varying medium, but partially or totally reflected at the interface of two uniformly defined but different media.
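The smooth-versus-abrupt truncation-error argument can be made concrete with a short numerical check (an illustrative sketch, not taken from the GEM code): on a nonuniform stencil, the centered difference carries a leading error term proportional to the *difference* of adjacent mesh lengths, so a 10% stretching incurs roughly a tenth of the error of an abrupt doubling at the same point.

```python
import math

def centered_diff(f, x0, h_left, h_right):
    """Centered difference on a nonuniform stencil, using the two
    neighbouring points x0 - h_left and x0 + h_right."""
    return (f(x0 + h_right) - f(x0 - h_left)) / (h_left + h_right)

x0 = 1.0
h = 0.05
exact = math.cos(x0)  # d/dx sin(x) = cos(x)

# Smoothly varying mesh: neighbouring mesh length is 10% larger.
err_smooth = abs(centered_diff(math.sin, x0, h, 1.1 * h) - exact)
# Abrupt variation: neighbouring mesh length doubles.
err_abrupt = abs(centered_diff(math.sin, x0, h, 2.0 * h) - exact)

print(err_smooth, err_abrupt)  # the abrupt jump is roughly 10x worse
```

The leading error term of this stencil is (h_right − h_left) f″(x0)/2, which vanishes on a uniform mesh and stays small when the mesh lengths vary by only 10% between neighbors.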

### c. Different mesh configurations for different applications

For global-scale problems, such as medium-range, monthly, and seasonal weather forecasting, the long-range transport of pollutants, and climate scenarios, a uniform latitude–longitude mesh configuration is appropriate. Resolution can in principle be degraded somewhat in the Southern Hemisphere for some Northern Hemisphere applications, and vice versa. Fox-Rabinovitz et al. (1997) report success with this strategy for the Held and Suarez (1994) dynamic-core problem.

For smaller-scale problems, several variable-resolution horizontal mesh configurations are displayed in Figs. 1–3, illustrating the flexibility of the approach. The first of these, shown from two different viewing perspectives in Figs. 1a,b, is suitable for forecasting at the synoptic scale for periods of up to 48 h. Over North America the resolution is 0.33°, and the resolution smoothly degrades away from the uniform-resolution subdomain with each successive mesh length being approximately 10% larger than that of its preceding neighbor. This mesh, covering approximately the same area as the last operational configuration of the quasi-hemispheric RFE model, has, perhaps surprisingly, 5% fewer degrees of freedom: this is because it is more efficient to work directly on the globe rather than on a distorted projection of it. Operationally, the GEM model uses the grid configuration of Fig. 1 to provide 48-h forecasts. The uniformly spaced (in latitude and longitude) mesh points in the high-resolution window are also almost uniformly spaced on the sphere, with mesh lengths that vary between a maximum of approximately 37 km (for mesh lengths oriented west to east) and a minimum of 29 km (attained on the Atlantic and Pacific boundaries). This may be compared to the high-resolution window of the above-mentioned RFE mesh, where the corresponding mesh length spacing over the sphere varies from approximately 38 km (near the North Pole) to 25 km (in the Caribbean). Although this is not an essential element of the strategy, it was found convenient to orient the GEM model’s mesh as in Fig. 1. This is because it gives somewhat enhanced resolution at inflow on the western geographical boundary of the high-resolution window, and because it also gives a vector length of 255, which is well suited to the architecture of Environment Canada’s NEC SX-4 computer.

The second grid configuration (Fig. 2) is suitable for meso-*β*-scale problems, such as forecasting the weather in more detail for periods up to 12 h. The resolution is now focused over a much smaller (10° × 10° ∼ 1100 km × 1100 km) uniform-resolution subdomain, which is centered over British Columbia for this example. The resolution (0.033° ∼ 3.6 km) of this subdomain is 10 times finer than that of Fig. 1. The third grid configuration (Fig. 3) is suitable for meso-*γ*-scale problems, such as simulating an urban smog episode for a few hours, and the mesh is now centered over Montreal Island. For this highly focused mesh, the resolution (0.0033° ∼ 360 m) is further increased by a factor of 10 and focused over a 50-times-smaller (1.36° × 1.36° ∼ 150 km × 150 km) subdomain. For both the meso-*β* and meso-*γ* meshes (Figs. 2 and 3), the resolution again varies smoothly away from the uniform-resolution subdomain, with the same approximately 10% successive increase in mesh length as the synoptic-scale configuration (Fig. 1).

### d. Mesh properties

All three of the mesh configurations shown have the remarkable property that at least 50% of the total mesh points (i.e., approximately 70% in each of the two coordinate directions) are over the uniform high-resolution area of interest. This implies that the overhead of using a variable-resolution global model for very small scale applications over small areas is affordable, without necessarily being optimal.

The adopted variable-mesh strategy has the property that it becomes increasingly cost effective as a function of increasing resolution when resolution is focused over a given area. To see this, consider the cost of increasing the resolution over the uniform-resolution subdomain of the operational mesh (Fig. 1) while holding its size fixed, and while also smoothly degrading the resolution away from this subdomain in exactly the same manner: that is, each successive mesh length is always approximately 10% larger than its predecessor when moving outward in any of the four compass directions. The percentage of points over the uniform-resolution North American subdomain then increases exponentially (see Fig. 4, obtained using the analysis given in the appendix) as a function of increasing resolution over the subdomain. When computer power becomes available at the CMC to allow a 10-km-resolution regional configuration, the percentage of mesh points over the subdomain of interest will go from today’s 57% (approximately 76% in each coordinate direction) to 78% (approximately 88% in each of the two directions). Thus, this global variable-resolution strategy, which is already viable for regional forecasting for Canada at today’s resolution, will become ever more so in the future.
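The scaling behind Fig. 4 is easy to mimic in one dimension: with a fixed 10% stretching, the number of mesh lengths needed to span the variable-resolution flanks grows only logarithmically as the core mesh is refined, while the core point count grows linearly. A minimal sketch follows (the 120° core and flank extents are illustrative assumptions, not the operational geometry):

```python
def stretched_side_points(dx_core_deg, side_extent_deg, ratio=1.1):
    """Number of mesh lengths needed to cover one flank of the domain when
    each successive mesh length is `ratio` times its inward neighbor."""
    n, covered, dx = 0, 0.0, dx_core_deg
    while covered < side_extent_deg:
        dx *= ratio          # 10% stretching per step by default
        covered += dx
        n += 1
    return n

def core_fraction(dx_core_deg, core_extent_deg, side_extent_deg, ratio=1.1):
    """Fraction of points (in one coordinate direction) lying in the core."""
    n_core = round(core_extent_deg / dx_core_deg)
    n_side = stretched_side_points(dx_core_deg, side_extent_deg, ratio)
    return n_core / (n_core + 2 * n_side)

# refine the core while holding its extent and the stretching law fixed
frac_033 = core_fraction(0.33, 120.0, 120.0)
frac_010 = core_fraction(0.10, 120.0, 120.0)
```

In this one-dimensional illustration the core fraction rises from roughly 83% at 0.33° to roughly 92% at 0.1°, echoing the trend reported for the operational mesh.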

### e. Discussion

As stated above, global variable resolution has the desirable features that the flows inside and outside the area of interest mutually interact in a single dynamic system, and that it addresses the well-posedness issue of limited-area models. This is, however, accomplished at the cost of integrating over a larger domain and it is natural to ask the questions, how does the cost of running a variable-resolution global model compare to that of running a limited-area model, and what does one get for the expended computational effort? These are very difficult questions to answer in a general manner since the answers depend on many factors such as the size and geographical location of the area of interest, the forecast period, the resolution, the prevailing meteorological conditions, the difference in resolution between driving and driven models, the accuracy of a driving model’s forecast, and the particular limited-area or variable-resolution techniques employed. Nevertheless, an attempt will be made to theoretically outline some of the issues.

Assume that it is desired to obtain a forecast over a specified area of interest, such as the United States or Canada, for a certain period of time, say 48 h, at a certain resolution. To protect the integrity of the forecast at this resolution at the end of the time period, it is necessary as a minimum to integrate a model over a larger region at the uniform high resolution of the now-embedded area of interest. No numerical “fixes” are allowed within this larger area since the resulting errors would have the time to contaminate the area of interest within the forecast period and compromise its integrity. As discussed in Staniforth (1997), the size of this uniform-resolution integration domain depends very much upon the period of integration, the size of the area of interest, and the meteorological conditions, but it is typically many times larger than the area of interest. This is particularly so for wintertime situations with strong jets, and much less so for summertime ones with weak synoptic circulations but strong subsynoptic ones. Most forecast centers use a fixed integration area for all seasons, and it can be inferred from this that they intend that the chosen integration domain be able to properly handle worst-case scenarios.

Thus far in the argument, no distinction has been made between the global variable-resolution and limited-area strategies, and all other things being equal, the computational cost of the two strategies is identical. Where things differ is how, and at what cost, one embeds this “protective” area within an even-larger integration domain. In a perfect world there would be no need to make this further extension of integration domain for the driven limited-area strategy, and it would be a clear winner. In reality, and as discussed above, it is necessary to enclose the minimum-sized “protective area” by a computational buffer region of sufficient size to adequately adjust the limited-area solution to that of the driving model. The size of this boundary region very much depends upon the accuracy of the information provided by the driving model within this region and upon the numerical adjustment strategy used. If the information provided is accurate and reasonably consistent with that of the driven model, and the numerical adjustment strategy well respects this [see, however, Staniforth (1997) for a blending counterexample], then this helps to keep this region, and the overhead, small: otherwise it may need to be much larger. An example of how things can go wrong is if the driving model provides a very poor forecast, for example, it spuriously breaks down a blocking high or misrepresents the precursor for an explosive development, due to a deficiency in the driving model or in the older analysis used as its initial conditions. The limited-area model then produces a forecast that respects these (by hypothesis) significantly erroneous boundary conditions, and it consequently has a serious systematic error at the scale of the domain size (Vukicevic and Errico 1990).

For the variable-resolution strategy, additional mesh points must be added to provide global coverage, subject to the constraint that the resolution not vary too rapidly. This overhead varies widely according to application: there are, however, some mitigating factors. First and foremost is that the global variable-resolution approach is mathematically well posed and therefore more likely in our view to robustly give good results over a broader spectrum of situations than the limited-area one. Second, the variable-resolution strategy is not susceptible to the forecast degradation mentioned above for driven limited-area models due to deficiencies in their driving model and/or its use of an older analysis. Third, there is no established consensus on the size of a limited-area model’s computational buffer region (which greatly influences the comparative cost of the two strategies) and we believe that its width is often underestimated, possibly seriously so. Fourth, we believe that a 10% local variation in mesh length is a conservative estimate which could conceivably be significantly increased when flow gradients are small around the uniform-resolution subdomain (e.g., for an idealized mesoscale simulation embedded within a quiescent environment), with a consequent reduction in the overhead. The diffusion coefficient could also be increased outside of the uniform-resolution subdomain to locally control any problems due to a more-rapid local variation of resolution.

## 4. Other numerical modeling design considerations

### a. Nonhydrostatic considerations

The hydrostatic assumption, namely, the neglect of vertical acceleration in the vertical momentum equation, is an excellent approximation that is well respected in the atmosphere down to scales of 10 km or so. However at these scales the dynamical effects excluded by the hydrostatic assumption, for example, internal wave breaking and overturning, start to become nonnegligible. To date, computer limitations have been such that almost all operational weather forecast (and climate simulation) models have been run with horizontal mesh lengths coarse enough to confidently employ the hydrostatic primitive equations. Looking to the future, however, this will change (Daley 1988). In particular, if the new model is to be applied with meso-*γ*-scale mesh configurations similar to that of Fig. 3, then it should use the fully compressible Euler equations instead of the so-called hydrostatic primitive ones. This motivates the use of the “hydrostatic pressure” vertical coordinate proposed in Laprise (1992). This coordinate system permits a switch-controlled choice between the hydrostatic primitive equations (for large- and synoptic-scale applications) and the nonhydrostatic Euler equations (for smaller-scale applications), thus obtaining the best of both worlds. The computational and memory overhead associated with the latter option can then be avoided for applications where the hydrostatic approximation is valid. A terrain-following normalized pressure (Phillips 1957; Kasahara 1974) version is possible (Bubnova et al. 1995), allowing an easy incorporation of the lower boundary, and a relaxation toward the horizontal upward from the earth’s surface. For atmospheric applications, there is virtually no scale restriction on using hydrostatic pressure as the vertical coordinate since it only requires density to be positive, and integrations are presented in Bubnova et al. (1995) with horizontal resolution as high as 80 m.
Note, however, that terrain-following transformations ultimately break down in the presence of cliffs due to a breakdown of differentiability.

Here *π* is the “hydrostatic pressure” of Laprise (1992); that is, it satisfies ∂*π*/∂*z* = −*ρg*. It has the advantage that it permits a straightforward adaptation of an existing library of physical parameterizations developed over the last decade or so. Although the GEM model has been formulated in terms of this terrain-following “hydrostatic pressure” coordinate, at the present state of development the code only exists for the hydrostatic primitive equations. Work on the nonhydrostatic version is, however, under way.

### b. Efficient time integration schemes

Numerical efficiency is very important when modeling the atmosphere. Vertically propagating acoustic waves and horizontally propagating external gravity waves propagate many times faster than the local wind speed, by a factor of 3 or more depending upon application. The time step of explicit Eulerian integration schemes is restricted by the speed of the fastest-propagating modes, which means that the time step is usually constrained by modes that carry little energy. The restrictions are particularly severe for global finite-difference or finite-element models due to the convergence of the meridians at the poles. This motivates the use of an implicit (or semi-implicit) time treatment of the terms that govern the propagation of acoustic and gravitational oscillations in order to greatly retard their propagation and permit a much larger time step. It results in the need to solve a 3D elliptic boundary value (EBV) problem. For a time-implicit scheme to be computationally advantageous, it must be possible to integrate with a sufficiently large time step to offset the overhead of solving the EBV problem. This is usually the case even for nonhydrostatic flows as discussed in Skamarock et al. (1997).
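The benefit of the (semi-)implicit treatment can be seen on the standard single-oscillation model equation d*F*/d*t* = *iωF* (a textbook analysis, not taken from this paper):

```latex
% Trapezoidal (semi-implicit) step applied to dF/dt = i\omega F:
\frac{F^{n+1}-F^{n}}{\Delta t} = i\omega\,\frac{F^{n+1}+F^{n}}{2}
\quad\Longrightarrow\quad
F^{n+1} = A\,F^{n},\qquad
A = \frac{1 + i\omega\Delta t/2}{1 - i\omega\Delta t/2}.
```

Here |*A*| = 1 for every Δ*t*, so the scheme is unconditionally stable, while the numerical phase change per step, 2 arctan(*ω*Δ*t*/2), saturates at *π* rather than growing like *ω*Δ*t*: the fastest acoustic and gravity modes are slowed rather than amplified, and the time step can be chosen for the accuracy of the energetically dominant slow modes.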

A further advantage of a time-implicit treatment of acoustic and gravitational oscillations (Staniforth 1997) is that for large time steps it dramatically retards the inward propagation of any error from the boundary region of a limited-area model, or from the outer region of a variable-resolution model. To protect the integrity of the forecast over an area of interest against contamination by any significant such source of error, a proportionately larger and possibly prohibitively costly computational domain would thus be needed by a model that employs an explicit time treatment of the acoustic and gravity terms. Such a problem may occur when the solution at the lateral boundary of a driven limited-area model becomes spuriously discontinuous due to overspecification of the boundary conditions (Oliger and Sundström 1978), or when the boundary values provided by the driving model happen to be inaccurate and in significant disagreement with the internally generated flow, thereby projecting nonnegligible energy onto small-scale rapidly propagating modes.

For an Eulerian treatment of advection, the use of an implicit or semi-implicit time scheme then constrains the local Courant number (*U*Δ*t*/Δ*x*) to be somewhat less than unity. For a latitude–longitude representation of the sphere this is particularly restrictive in polar regions due to the convergence of the meridians, but elsewhere the time truncation error is still several factors smaller than the spatial truncation error. This motivates the use of a semi-Lagrangian treatment of advection [Robert (1981), (1982); see Staniforth and Côté (1991) for a review], which is stable for Courant numbers much greater than unity and permits the time step to be chosen on the basis of accuracy rather than stability. The governing equations are thus approximated along a parcel trajectory that arrives at a mesh point at the new time step. The evaluation of substantive derivatives then reduces to taking a time difference along a trajectory. Upstream values are computed by interpolation (usually cubic) of values at mesh points surrounding the departure point (which is generally not a mesh point). Although semi-Lagrangian advection is not as local as Eulerian advection, it is nevertheless still local and therefore viable for models with global domains on massively parallel architectures as demonstrated in, for example, Barros et al. (1996).
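The scheme just described can be made concrete with a one-dimensional, constant-wind sketch (an illustration only, not the GEM discretization): departure points are found upstream, upstream values are obtained by cubic Lagrange interpolation of the four surrounding grid values, and the integration remains stable at a Courant number of 2.5.

```python
import numpy as np

def semi_lagrangian_step(f, u, dt, dx):
    """One constant-wind semi-Lagrangian step on a periodic 1D grid,
    using cubic Lagrange interpolation at the departure points."""
    n = f.size
    x = np.arange(n) * dx
    xd = (x - u * dt) / dx            # departure points, in grid units
    j = np.floor(xd).astype(int)      # gridpoint just left of departure
    a = xd - j                        # fractional position in [0, 1)
    # cubic Lagrange weights for points j-1, j, j+1, j+2
    wm1 = -a * (a - 1) * (a - 2) / 6
    w0 = (a + 1) * (a - 1) * (a - 2) / 2
    w1 = -(a + 1) * a * (a - 2) / 2
    w2 = (a + 1) * a * (a - 1) / 6
    return (wm1 * f[(j - 1) % n] + w0 * f[j % n]
            + w1 * f[(j + 1) % n] + w2 * f[(j + 2) % n])

# advect a Gaussian once around a periodic domain at Courant number 2.5,
# i.e., with a time step well beyond the Eulerian stability limit
n, L, u = 100, 1.0, 1.0
dx = L / n
dt = 2.5 * dx / u
f0 = np.exp(-((np.arange(n) * dx - 0.5) / 0.1) ** 2)
f = f0.copy()
for _ in range(int(round(L / (u * dt)))):
    f = semi_lagrangian_step(f, u, dt, dx)
```

After one full circuit of the periodic domain the Gaussian returns essentially unchanged, despite every step exceeding the Eulerian Courant limit by a factor of 2.5.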

Semi-implicit semi-Lagrangian methods were originally developed for hydrostatic primitive equation models (Robert et al. 1985) and are finding increasing favor for both weather forecast models (e.g., Chen and Bates 1996a; Gustafsson and McDonald 1996; Moorthi 1997; Purser and Leslie 1994; Ritchie and Beaudoin 1994; Ritchie et al. 1995; Tanguay et al. 1989) and climate models (e.g., Chen and Bates 1996b; Walsh and McGregor 1995; Williamson and Olson 1994; Williamson et al. 1998; Williamson and Olson 1998). They have also been extended to the fully compressible Euler equations (e.g., Tanguay et al. 1990; Bubnova et al. 1995; Semazzi et al. 1995; Cullen et al. 1997).

The choice of a semi-Lagrangian algorithm is also, in our opinion, a good one for air quality studies. Chemical reactions are highly dependent on the concentration *gradients* of the various species (Edouard et al. 1996); because of its small dispersion error, semi-Lagrangian advection well maintains sharp gradients, leading to very accurate chemical transport models (CTMs).

Recently Bartello and Thomas (1996) have claimed that semi-Lagrangian schemes are not cost effective for flows with shallow spectra (estimated by them to occur in the atmosphere for scales below 200–300 km), and particularly for flows with significant mesoscale topographic forcing (Rivest et al. 1994; Pinty et al. 1995; Héreil and Laprise 1996). The only practical evidence given to support their claim is an example of unforced uniform advection in 1D, which is not at all representative of the cited topographically forced/high-deformation 3D mesoscale flows with shallow spectra, and is therefore, in our opinion, unconvincing.

The crux of their theoretical argument is that because there is no time step advantage for semi-Lagrangian advection for such flows, it is more cost effective to use a lower-order Eulerian advection scheme rather than a higher-order, more accurate but more expensive, semi-Lagrangian one. Semi-Lagrangian advection for Courant numbers less than unity is, however, equivalent to upwind-biased Eulerian advection (Dietachmayer 1990; Bates 1991; Staniforth and Côté 1991; Leslie and Dietachmayer 1997). Indeed, Leslie and Dietachmayer (1997) argue that what counts is not whether the advection is semi-Lagrangian or Eulerian, but whether the scheme is of high order, and whether this order is even or odd. They also substantiate their argument with numerical results for the warm-bubble problem. Thus the issue raised by Bartello and Thomas (1996) is not really one of semi-Lagrangian versus Eulerian advection, but rather reduces to the age-old question (e.g., Kreiss and Oliger 1973), is it more cost effective to use a lower-order scheme at higher resolution or a higher-order one at lower resolution?
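The cited equivalence is simple to verify numerically in the lowest-order case: on a periodic grid with constant positive wind and Courant number below one, a semi-Lagrangian step with linear interpolation reproduces the first-order upwind Eulerian step (a sketch under exactly those assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(50)             # arbitrary periodic field
c = 0.4                        # Courant number u*dt/dx, assumed < 1, u > 0

# first-order upwind Eulerian step
upwind = f - c * (f - np.roll(f, 1))

# semi-Lagrangian step: the departure point lies a fraction c into the
# upstream cell, so linear interpolation weights the two bracketing points
semi_lagrangian = (1 - c) * f + c * np.roll(f, 1)
```

The two updates agree to within rounding, illustrating that at low Courant number the distinction is one of interpolation order rather than of advection philosophy.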

When examining this question it should be remembered that higher resolution has other implications than just the cost effectiveness of approximating advection, a point that is mentioned but downplayed in Bartello and Thomas (1996). In particular, higher resolution for the same accuracy implies an order-of-magnitude or more increase in memory demands; an order-of-magnitude or more points at which physical parameterizations presumably need to be computed and (for semi-implicit models) EBV problems solved; and the need for more time steps due to the use of the recommended three-time-level (required for stability reasons) *centered* second-order scheme, rather than a two-time-level third-order (in space) *upwind-biased* one that permits a larger time step for the same temporal truncation error. This latter consideration was overlooked by them. In our view the onus is still upon the proponents of the hypothesis that semi-Lagrangian methods are not cost effective at the mesoscale to demonstrate that this is indeed so by performing comparative integrations under realistic conditions.

Although we remain unconvinced by both their arguments and conclusions, it is nevertheless natural to ask the question, if it really does turn out that semi-Lagrangian techniques are not viable in the future for whatever reason, how would this result affect the strategy proposed herein? Our answer is that in this case an Eulerian option could be introduced into the advection code (which is highly localized) to obtain an Eulerian version of the model [similar to the global variable-resolution Eulerian formulation of Fox-Rabinovitz et al. (1997)], and a code-shared limited-area Eulerian version. This would also require changing the time scheme for stability reasons from a two-time-level-based one to a three-time-level one.

### c. Monotonicity and conservation

The interpolation procedure of the semi-Lagrangian algorithm can give rise to a local violation of monotonicity due to the Gibbs phenomenon. This can have particularly deleterious effects for physical quantities used by either the subgrid-scale parameterization, or the CTM. To alleviate this potential problem, the monotonic scheme of Bermejo and Staniforth (1992) is adopted. In this scheme, a high-order estimate (using cubic interpolation) is blended with a low-order one (using linear interpolation, which guarantees monotonicity). Consistent with approximation theory, the blending is done in such a way that the high-order solution is favored whenever smoothness warrants it (i.e., most of the time), otherwise the low-order solution is more strongly weighted.
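A much-simplified illustration of the idea is a hard clip of the high-order value to the bounds of the linear-interpolation stencil (the operational scheme of Bermejo and Staniforth 1992 instead blends the two estimates, but the clipped variant conveys the mechanism):

```python
import numpy as np

def quasi_monotone(high_order, f_left, f_right):
    """Limit a high-order (e.g., cubic) interpolated value to the range of
    the two gridpoint values bracketing the departure point; linear
    interpolation cannot leave that range, so neither can the limited value."""
    lo = np.minimum(f_left, f_right)
    hi = np.maximum(f_left, f_right)
    return np.clip(high_order, lo, hi)

overshoot = quasi_monotone(1.07, 0.0, 1.0)   # Gibbs overshoot is removed
smooth = quasi_monotone(0.50, 0.0, 1.0)      # smooth data is left untouched
```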

Conservation is considered by many to be a potential problem for semi-Lagrangian schemes. Although conservation is generally excellent for short-term high-resolution integrations, it may be inadequate for longer (climate scale) simulations, or for particularly sensitive air quality studies. Priestley (1993) proposed a conserving variant of the monotonic Bermejo and Staniforth algorithm such that conservation is additionally enforced as a constraint via a minimization procedure, with local adjustments being made where the interpolation procedure is most susceptible to introduced errors. This algorithm has recently been introduced into the GEM model as an option for the interpolation of the humidity and cloud liquid water, as well as passive tracers, but is not as yet used operationally.
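Priestley’s constrained minimization is beyond a few lines, but the bookkeeping it enforces can be suggested by a crude global fixer (an illustrative stand-in, not the operational algorithm): measure the mass change introduced by interpolation and redistribute it over the field.

```python
import numpy as np

def restore_mass(f_interp, mass_target, weights=None):
    """Crude conservation fix-up (NOT Priestley's scheme): spread the
    interpolation-induced mass error over the field, optionally weighted
    toward points where the interpolation is judged least trustworthy."""
    if weights is None:
        weights = np.ones_like(f_interp)
    deficit = mass_target - f_interp.sum()
    return f_interp + deficit * weights / weights.sum()

field = np.full(10, 0.97)           # interpolated field whose mass drifted
fixed = restore_mass(field, 10.0)   # restore the pre-advection total of 10.0
```

Priestley’s algorithm differs in making the adjustments locally, where the interpolation is most susceptible to error, subject to the global constraint.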

## 5. Data assimilation

Assimilation of both in situ and remotely sensed data [see Daley (1991) for a review] provides both the initial conditions required for numerical weather prediction and a means of monitoring climate change. It can also provide initial conditions for chemical species, such as ozone. The development of improved assimilation methods is an area of intense international research at the present time. These include 3D and 4D variational assimilation and extended Kalman filtering. Since numerical models are usually used as constraints for data assimilation, present and anticipated developments in data assimilation must be taken into account when designing a new model. A comprehensive discussion of data assimilation issues is beyond the scope of this paper, but a number of issues relevant to the development of data assimilation for the GEM model are touched upon herein.

Considerations for assimilating data at large and synoptic scales using the GEM model with uniform resolution are very similar to those for global uniform-resolution gridpoint models, such as the UKMO Unified Model (Cullen 1993) and the model of Moorthi et al. (1995). Additional problems are, however, generally encountered when assimilating data at higher resolution. Some of these problems are related to fundamental scale and data issues, whereas others are peculiar to the strategy (interactive or noninteractive) adopted for regional modeling.

A brief overview of mesoscale data assimilation issues is given in Daley (1991), and these include the following:

- a much flatter kinetic energy spectrum for mesoscales than for synoptic and large scales (a *k*^{−5/3} power law versus a *k*^{−3} one, where *k* is horizontal wavenumber), worsening aliasing problems;
- a penury of data at sufficient density and of sufficient quality;
- a breakdown at the mesoscale of some of the scale and dynamic-balance assumptions that are implicitly assumed to hold when assimilating data at the large and synoptic scales; and
- the intermittency of some mesoscale phenomena.

The advection of poorly resolved information from outside a pristine uniform-resolution subdomain increases the forecast error of the model, be it interactive or noninteractive, used as a constraint in mesoscale data assimilation. For a limited-area model this is due to the lower space and time resolution of the driving model that furnishes the boundary conditions, and to the damping effect of the surrounding computational buffer region as information propagates through it. For the GEM model it is due to the lower resolution of the variable-resolution part of its domain.

For a limited-area data assimilation system, the open lateral boundary conditions complicate matters [e.g., Gustafsson (1990); Daley (1991)]. They generally overconstrain the analyses near the boundaries, which can lead to errors at the scale of the limited area, and they also make it more difficult to obtain realistically balanced analyses. Gustafsson (1990) has shown that forecast accuracy can be significantly degraded, even over the center of the forecast area, when the lateral boundary conditions used in the assimilation cycle are insufficiently accurate. This can occur due to inaccuracies in the forecasts of the driving model initiated from analyses valid 6 or 12 h prior to the current analysis time. It is possible in principle to allow recently observed data (e.g., at the current analysis time), if available in sufficient quantity and of sufficient accuracy, to correct a trial field in the vicinity of the lateral boundaries at the current analysis time. This strategy can, however, only be expected to be partially successful, since it leads to further discrepancies between the internally determined flow over the limited area and the cross-boundary flow as incorrectly specified (by hypothesis) by the driving model. As pointed out by one of the reviewers, it has been found in practice at the UKMO that the mesoscale model has to be run after the larger-scale driving model, so that up-to-date data is used in the provision of the synoptic driving fields. The effect of this is exactly the same as using the single-model strategy proposed herein.

For a global data assimilation system that uses a variable-resolution model, such as the GEM one, the variable resolution can potentially cause two problems. First, when run in a self-contained manner independent of a global uniform-resolution system, analysis climate drift is a possibility. Provided the large-scale driving data assimilation system does not have a climate drift, and provided the limited area is not too large, a limited-area one is unlikely to have a serious drift since it is strongly constrained by the lateral boundary conditions. Second, even if there is no climate drift, accuracy may be degraded over the high-resolution area due to the advection of poorly represented information from low-resolution parts of the domain. Whether such problems occur depends on the extent to which resolution is poor outside the high-resolution subdomain, and on whether the data density and quality is sufficient to give accurate analyses for those parts of the domain that influence the accuracy of the ensuing forecast over the area of interest.

The above considerations motivate the development of a regional data assimilation spinup cycle (e.g., DiMego et al. 1992; Chouinard et al. 1994; Rogers et al. 1996), using optimal interpolation (OI) data assimilation techniques (Daley 1991) or 3D or 4D variational ones (e.g., Parrish and Derber 1992; Zupanski 1993; Courtier et al. 1994; Gauthier et al. 1996). Spinup data assimilation systems provide well-balanced analyses and reduce the time it takes from forecast initiation to achieve realistic precipitation rates. A typical period for a spinup cycle is 12 h. This is a good compromise between being long enough to address the precipitation spinup problem, but short enough that poorly resolved information does not propagate too far inward and thereby deteriorate the analysis and subsequent forecast. Note that high resolution over data-sparse regions such as the Pacific is not guaranteed to lead to better analyses, since data sparsity is probably the factor that most limits the accuracy of analyses there. This means that the resolution over the eastern Pacific of the operational regionally configured GEM model is probably quite sufficient for the purposes of a 12-h spinup data assimilation cycle.

A possible advantage of a variable-resolution assimilation system compared to a limited-area one is that recently observed data is more likely to improve the quality of the analysis in the vicinity of the internal boundary between the uniform-resolution subdomain and the variable-resolution part of the mesh. The data is assimilated naturally in this region without doing anything special. The analysis can then be expected to be in realistic balance, and the ensuing forecast should also remain in good balance throughout the forecast period since there cannot be any inconsistency between it and some independently, and possibly wrongly, determined boundary conditions provided by a driving model. Also, recent data *outside* a uniform-resolution subdomain can improve the analysis over the uniform-resolution subdomain, since the length scale of horizontal correlation functions can be sufficiently large that a piece of data *outside* the uniform-resolution subdomain can influence the analysis *within* it. The current analyses of most, if not all, limited-area assimilation systems are unaffected by such data because it is not used.

The GEM model and a 3D variational (3DVAR) data assimilation system (Gauthier et al. 1996), driven by the CMC’s spectral model, were developed concurrently. This affected both the development of a GEM-driven data assimilation system and the staged operational implementation of the GEM model at the CMC. The RFE model had a 12-h OI-based spinup cycle (Chouinard et al. 1994), and it would have been natural to have developed a similar cycle for the GEM model for operational regional forecasting as we had originally intended. This was, however, judged to be a wasted effort, since the system would soon need to be replaced by a 3DVAR one. It was therefore decided to try to implement the GEM model for regional forecasting without a spinup cycle in the belief that the GEM model’s performance would justify this, while simultaneously preparing a 3DVAR-based spinup system that would become viable after the SEF-driven 3DVAR system was validated. This validation culminated in the operational replacement of the SEF-driven OI system (Mitchell et al. 1996) by the SEF-driven 3DVAR one on 18 June 1997.

Shortly thereafter, a GEM-driven 3DVAR regional spinup and forecast cycle was run and evaluated twice daily by the CMC over a two-month period; this led to its operational implementation on 24 September 1997. The spinup cycle uses the incremental approach (Courtier et al. 1994), whereby innovations are computed in observation space with respect to the background state at the model’s full resolution, whereas global analysis increments are calculated at lower resolution. This will be described in detail by its developers in a future publication. Tests conducted with 33 cases drawn from the four seasons, plus the results from the two-month preimplementation period, show that both the analyses and the forecasts improve on average when using this spinup cycle, and precipitation spinup time is considerably reduced. Priorities for further development of this system include producing the analysis increments directly on model surfaces, improving the specification of the background error statistics, and making better use of both conventional data (e.g., significant-level radiosonde data and surface observations) and remotely sensed data (e.g., assimilation of satellite radiances).

Some of the more promising 4D data assimilation techniques require the tangent linear model (TLM), and its adjoint, of the underlying atmospheric forecast model to be developed. Some preliminary work on the semi-Lagrangian and iterative-process aspects of this was performed in the MRB in anticipation of this need (Polavarapu et al. 1995; Polavarapu and Tanguay 1998; Tanguay et al. 1997). The TLM and its adjoint for the adiabatic hydrostatic primitive equation version of the GEM model has been developed (Polavarapu and Tanguay 1997, personal communication). It is currently being used for sensitivity studies, which will be reported on elsewhere, and the MRB plans to build on both this work and the existing 3DVAR system to eventually develop a 4DVAR system for the GEM model.

## 6. Model formulation

### a. Governing equations

In the governing equations, **V**^{H} is the horizontal velocity, *ϕ* ≡ *gz* is the geopotential, *ρ* is density, *T*_{υ} is virtual temperature, *κ* = *R*_{d}/*c*_{pd}, *R*_{d} is the gas constant for dry air, *c*_{pd} is the specific heat of dry air at constant pressure, *q*_{υ} is the specific humidity of water vapor, *f* is the Coriolis parameter, **k** is a unit vector in the vertical, *g* is the vertical acceleration due to gravity, and **F**^{H}, *F*^{Tυ}, and *F*^{qυ} are the parameterized forcings of momentum, virtual temperature, and specific humidity, respectively.

### b. Boundary conditions

The model atmosphere is bounded below by the earth’s surface and above by a constant-pressure surface, *p* = *p*_{T}. Thus the vertical motion vanishes at both boundaries.

### c. Transport equations

The transport of atmospheric tracers is governed by equations of the form

$$\frac{d\Psi_i}{dt} \equiv \left(\frac{\partial}{\partial t} + \mathbf{V}_3 \cdot \boldsymbol{\nabla}_3\right)\Psi_i = F^{\Psi_i},$$

where Ψ_{i} is the *i*th atmospheric tracer, *F*^{Ψi} represents its parameterized sources and sinks, and **∇**_{3} and **V**_{3} are, respectively, the three-dimensional gradient operator and velocity vector.

### d. Temporal discretization

Equations (6.1)–(6.5) are first integrated in the absence of forcing, and the parameterized forcing terms appearing on the right-hand sides of (6.1)–(6.4) are then computed and added using the usual fractional-step time method (Yanenko 1971).
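The fractional-step update can be sketched as follows; this is a minimal illustration with hypothetical function names (`dynamics_step`, `physics_tendency`), not the model’s code:

```python
def fractional_step(state, dt, dynamics_step, physics_tendency):
    """One fractional-step update: integrate without forcing, then compute
    the parameterized forcing from the provisional state and add its
    contribution. Both function arguments are illustrative stand-ins."""
    provisional = dynamics_step(state, dt)          # unforced (adiabatic) step
    return provisional + dt * physics_tendency(provisional)
```

For instance, with a damping dynamics step and a constant forcing tendency, one update is `fractional_step(1.0, 0.1, lambda s, dt: s*(1.0 - dt), lambda s: 1.0)`.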

Dropping the forcing terms, each of the prognostic equations may be written in the form

$$\frac{dF}{dt} = G, \tag{6.11}$$

where *F* represents one of the prognostic quantities [**V**^{H}, ln|∂*p*/∂*η*|, ln(*T*_{υ}/*T**) − *κ* ln(*p*/*p**)], and *G* represents the remaining terms, some of which are nonlinear. Such an equation is approximated by time differences and weighted averages along a trajectory determined by an approximate solution to

$$\frac{d\mathbf{x}_3}{dt} = \mathbf{V}_3(\mathbf{x}_3, t), \tag{6.12}$$

where **x**_{3} and **V**_{3} are the three-dimensional position and velocity vectors, respectively. Thus

$$\frac{F^{n} - F^{n-1}}{\Delta t} = \left(\frac{1+\varepsilon}{2}\right)G^{n} + \left(\frac{1-\varepsilon}{2}\right)G^{n-1}, \tag{6.13}$$

where *ψ*^{n} = *ψ*(**x**_{3}, *t*), *ψ*^{n−1} = *ψ*[**x**_{3}(*t* − Δ*t*), *t* − Δ*t*], *ψ* = (*F, G*), and *t* = *n*Δ*t.*

Note that this scheme is decentered along the trajectory as in Rivest et al. (1994), to avoid the spurious resonant response arising from a centered approximation in the presence of orography. The off-centering parameter *ε* is currently set to 0.1 for the operational regional configuration. Cubic interpolation is used everywhere for upstream evaluations [cf. (6.13)] except for the trajectory computations [cf. (6.12)], where linear interpolation is used with no visible degradation in the results.
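The semi-Lagrangian step just described, upstream interpolation along an approximate trajectory combined with a decentered weighted average of the remaining terms, can be sketched in one dimension. This is a minimal illustration (constant advecting wind, linear damping as the *G* term, and linear upstream interpolation for brevity, whereas the model uses cubic interpolation); all names are illustrative:

```python
import numpy as np

def sl_step(F, c, k, dx, dt, eps=0.1):
    """One decentered semi-Lagrangian step for dF/dt = G with G = -k*F and a
    constant advecting wind c, on a periodic 1D grid of spacing dx."""
    n = len(F)
    x_dep = np.arange(n) - c*dt/dx                 # departure points (grid-index units)
    j = np.floor(x_dep).astype(int)
    a = x_dep - j                                  # fractional offset in [0, 1)
    F_dep = (1 - a)*F[j % n] + a*F[(j + 1) % n]    # F^{n-1} at the departure point
    # Decentered average along the trajectory, off-centering parameter eps:
    # (F^n - F_dep)/dt = (1+eps)/2 * (-k*F^n) + (1-eps)/2 * (-k*F_dep),
    # solved implicitly for F^n.
    return F_dep*(1 - dt*k*(1 - eps)/2) / (1 + dt*k*(1 + eps)/2)
```

When *c*Δ*t*/Δ*x* is an integer the interpolation is exact, and the step reduces to a grid shift combined with the implicit decentered damping.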

Regrouping the terms of (6.13) so that everything evaluated at the new time appears on the left-hand side leads to a coupled set of nonlinear equations, (6.14), for the variables at time *t,* the efficient solution of which is discussed below. An implicit time treatment, such as that adopted here, of the nonlinear terms has the useful property of being inherently computationally more stable than an explicit one [see, e.g., the Gravel et al. (1993) analysis].

### e. Spatial discretization

A variable-resolution cell-integrated finite-element discretization on an Arakawa C grid is used in the horizontal, with a placement of variables as shown schematically in Fig. 5. For uniform resolution it reduces to the usual staggered finite-difference formulation in spherical geometry (e.g., Bates et al. 1993; Cullen et al. 1997). This particular discretization was chosen for two reasons. First, it is more suitable for tomorrow’s massively parallel architectures than the implicit spatial discretization of the shallow-water prototype described in Côté et al. (1993), since it gives rise to more local computations; the exception is the appearance of an elliptic boundary-value problem, whose solution is discussed below. Second, the Arakawa C placement of variables is considered to be the best one when the mesh length is less than the Rossby radius of deformation (Arakawa and Lamb 1977; Cullen et al. 1997), making it suitable for mesoscale applications.

The right-hand sides *R*_{U}[**x**^{U}_{3}(*t*)] and *R*_{V}[**x**^{V}_{3}(*t*)] of the horizontal momentum equations are first interpolated to the scalar grid using 1D cubic Lagrange interpolation, where **x**^{U}_{3}(*t*) and **x**^{V}_{3}(*t*), respectively, are the arrival points of the *U* and *V* grids. This is simply a local four-point-weighted averaging along each of the two coordinate directions, and the result is denoted by *R*^{s}_{U} and *R*^{s}_{V}, where the superscript *s* refers to the scalar grid. Each of *R*^{s}_{U}[**x**^{s}_{3}(*t*)] and *R*^{s}_{V}[**x**^{s}_{3}(*t*)] is then interpolated to the upwind position **x**^{s}_{3}(*t* − Δ*t*), where **x**^{s}_{3}(*t*) is an arrival point of the scalar grid. Next, metric correction terms *δ*^{s}_{U} and *δ*^{s}_{V}, which account for the turning of the coordinate basis vectors along the trajectory (cf. Côté 1988), are added to the upstream values *R*^{s}_{U}[**x**^{s}_{3}(*t* − Δ*t*)] and *R*^{s}_{V}[**x**^{s}_{3}(*t* − Δ*t*)]. The corrected fields are finally interpolated back to the *U* and *V* grids, respectively, again using 1D cubic Lagrange interpolation, and are denoted by *R*^{U}_{U} and *R*^{V}_{V}; these provide the required upstream values *R*_{U} and *R*_{V}. Thus

$$R_U = R_U^U[\mathbf{x}_3^U(t - \Delta t)], \qquad R_V = R_V^V[\mathbf{x}_3^V(t - \Delta t)],$$

where **x**^{U}_{3}(*t* − Δ*t*) and **x**^{V}_{3}(*t* − Δ*t*) denote the upstream points associated with the arrival points **x**^{U}_{3}(*t*) and **x**^{V}_{3}(*t*), respectively.
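The 1D cubic Lagrange interpolation used for these grid-to-grid transfers and upstream evaluations is a four-point weighted average along each coordinate direction; a minimal sketch (illustrative names, periodic indexing for simplicity):

```python
import numpy as np

def cubic_lagrange_weights(a):
    """Weights of 1D cubic Lagrange interpolation at fractional position
    a in [0, 1), for the four stencil points at -1, 0, 1, 2 grid lengths."""
    return np.array([-a*(a - 1)*(a - 2)/6,
                     (a + 1)*(a - 1)*(a - 2)/2,
                     -(a + 1)*a*(a - 2)/2,
                     (a + 1)*a*(a - 1)/6])

def interp_periodic(f, x):
    """Interpolate periodic gridded values f at fractional index x."""
    n = len(f)
    j = int(np.floor(x))                    # left neighbour of the target
    w = cubic_lagrange_weights(x - j)
    return sum(w[m]*f[(j + m - 1) % n] for m in range(4))
```

The weights sum to one, and the interpolation is exact for cubic polynomials (e.g., it recovers *x*³ at *x* = 3.5 from integer samples).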

The vertical discretization is modeled after that of Tanguay et al. (1989).

### f. Solving the coupled nonlinear set of discretized equations

After spatial discretization the coupled set of nonlinear equations still has the form of (6.14). Terms on the right-hand side, which involve upstream interpolation, are evaluated once and for all. The coupled set is rewritten as a linear one (where the coefficients depend on the basic state) plus a perturbation that is placed on the right-hand side and which is relatively cheap to evaluate. The nonlinear set is then solved iteratively using the linear terms as a kernel, and the nonlinear terms on the right-hand side are reevaluated at each iteration using the most recent values. Two iterations have been found sufficient for practical convergence.
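The iteration just described, a fixed linear kernel with the nonlinear perturbation reevaluated on the right-hand side, can be sketched as follows. This is a generic fixed-point illustration, with a dense linear solve standing in for the model’s fast elliptic solver; names are illustrative:

```python
import numpy as np

def solve_split(A, N, b, x0, iters=2):
    """Solve A x + N(x) = b iteratively: the nonlinear perturbation N is
    moved to the right-hand side and reevaluated with the most recent
    iterate, while the fixed linear kernel A is inverted at each pass
    (two iterations, as in the text)."""
    x = x0
    for _ in range(iters):
        x = np.linalg.solve(A, b - N(x))   # linear kernel solve
    return x
```

Provided the nonlinear part is a small perturbation of the linear kernel, a couple of iterations suffice for practical convergence.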

The linear set can be algebraically reduced to the solution of a 3D EBV problem. At the present stage of model development, the solution procedure is as follows:

- A vertical transform is applied to both sides of the problem in the standard way using the eigenmodes of the discrete vertical structure equation, to obtain a set of decoupled 2D Helmholtz problems (one per equivalent depth).
- For variable-resolution configurations on a rotated mesh, the implicit treatment of the Coriolis terms leads to nonseparable Helmholtz problems. However, since these terms are relatively small, the generalized conjugate gradient method of Concus et al. (1976) converges very quickly (typically two iterations are sufficient) when the preconditioner uses a simple minimax approximation of the Coriolis terms, leading to a semidirect solver. A slow Fourier transform, in the form of a full matrix multiplication, is used; this leads to a set of tridiagonal problems in the north–south direction that are solved via an LU decomposition, followed by an inverse Fourier transform to obtain the solution of the Helmholtz problems.
- For uniform-resolution configurations, efficiency is improved by using fast Fourier transforms to decouple the Helmholtz problems, and on unrotated meshes the problem further simplifies since the Coriolis terms separate naturally.
- The solution of the 3D EBV problem is then obtained by an inverse vertical transform of the solutions of the 2D Helmholtz problems.
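The reduction of the 3D EBV problem to decoupled 2D Helmholtz problems via a vertical transform can be illustrated in a simplified setting: one periodic horizontal dimension, a hypothetical vertical-coupling matrix `B` (assumed to have positive eigenvalues), and spectral derivatives standing in for the model’s horizontal discretization:

```python
import numpy as np

def solve_3d_ebv(B, f, n):
    """Solve (d^2/dx^2 - B) u = f on a periodic x-grid of n points, where B
    couples the vertical levels and f has shape (nlev, n). The vertical
    transform (eigenmodes of B) decouples the levels into one Helmholtz
    problem per eigenvalue ('equivalent depth')."""
    lam, E = np.linalg.eig(B)               # vertical structure eigenmodes
    fh = np.linalg.solve(E, f)              # forward vertical transform
    k = 2*np.pi*np.fft.fftfreq(n)           # wavenumbers (unit grid spacing)
    uh = np.empty_like(fh, dtype=complex)
    for m in range(len(lam)):               # one decoupled Helmholtz problem per mode
        uh[m] = np.fft.ifft(np.fft.fft(fh[m]) / (-k**2 - lam[m]))
    return (E @ uh).real                    # inverse vertical transform
```

Here the Fourier transform plays the role of the horizontal solver for each Helmholtz problem; in the model it is replaced by the fast or slow transforms and tridiagonal solves described above.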

The above algorithm vectorizes and parallelizes very well on shared-memory multiprocessor supercomputers such as Environment Canada’s NEC SX-4. Also, since the data structures and computations are very similar to those associated with the Fourier and Legendre transforms of spectral models, and efficient parallelization of such models on such architectures has already been demonstrated (Barros et al. 1996), it should parallelize well on distributed-memory massively parallel architectures. Nevertheless, algorithmic efficiency can and should be improved upon in the future. The costs of the vertical and slow Fourier transforms scale as the squares of the numbers of levels and longitudes, respectively, rather than linearly, and they become ever more costly as resolution is increased. A number of alternative methods can be considered, such as multigrid and domain decomposition with a convergence accelerator. Further discussion of the efficient solution of the EBV problems that arise from semi-implicit discretizations may be found in Skamarock et al. (1997) and, with particular emphasis on parallel algorithms, in Smith et al. (1996). In the introduction of the latter reference, the important point is made that domain decomposition methods lead to algorithms with superior convergence properties and nearly *O*(1) work per degree of freedom, which is almost optimal.

### g. Physical parameterization

To close the problem, the source, sink, and redistribution terms on the right-hand sides of (6.1)–(6.4) must be specified or parameterized. These forcings are associated with both unrepresented and subgrid-scale phenomena. The GEM model has therefore been interfaced with the unified RPN (Recherche en Prévision Numérique) physics package, using the RPN standardized interface. This interface greatly facilitated matters since it allowed the immediate use of a tested set of parameterizations without any retuning. A recent description of the current operational parameterizations contained within the package may be found in Mailhot et al. (1997). Parameterizations are available for the following physical phenomena:

- turbulent fluxes of momentum, heat, and moisture over land, water, and ice, based on prognostic turbulent kinetic energy;
- surface-layer effects;
- gravity wave drag;
- prognostic clouds;
- solar and infrared radiation with or without cloud interaction;
- deep and shallow convection;
- condensation; and
- precipitation including evaporative effects.

The choice of parameterization depends upon the space and time scales of the application and the resolution of the forecast or simulation (e.g., Bougeault 1997). For example, a detailed and expensive radiation calculation is an essential ingredient for climate-scale simulations, but much less so for short- and medium-range weather forecasting. Also the appropriate treatment of convection is very different (e.g., Weisman et al. 1997) between a large-scale application where it is parameterized, and a meso-*γ*-scale application where it may be parameterized very differently or even represented explicitly. It is also useful to experiment with different parameterizations of a given process. For example, three different land surface parameterization schemes are available at RPN: the operational scheme, the CLASS scheme (Verseghy 1991, 1993) of Environment Canada’s Climate Branch, and the ISBA scheme (Bélair et al. 1998) of Météo-France.

In the context of a variable-resolution model, there can be a considerable disparity between the resolution of the uniform-resolution subdomain and that geographically far away from it. A frequently asked question is then: what does one do in this context regarding parameterization? In our view parameterizations should be chosen to be appropriate to the resolution of the high- and uniform-resolution subdomain. Provided that they do not behave pathologically at low resolution (in which case they could be simply switched off there, making the model somewhat more efficient), then there is simply insufficient time during the period of integration for their errors to reach the area of interest (i.e., a subarea of the area of uniform resolution) and contaminate it. Parameterizations should be formulated, to the extent possible, such that a measure of the model’s local resolution is a parameter used to determine local aggregations and local threshold values of trigger functions, thereby extending their validity as a function of resolution. This would reduce the need for retuning parameters each time the resolution is changed within the validity regime associated with the underlying parameterization hypotheses.

### h. Digital filter diabatic finalization

Digital filtering is proving to be a good method for the diabatic initialization of weather forecast models (Lynch and Huang 1994). For the model described here, the digital filtering diabatic finalization technique described in Fillion et al. (1995) is used to filter out high-frequency oscillations having periods smaller than 6 h. A 6-h forward integration of the complete model, including all the diabatic forcing terms, generates a time series of model states. These are filtered as this 6-h integration progresses in time leading to a time-filtered state that is valid 3 h into the integration. The model is then integrated forward in time in the usual way by restarting the integration at 3 h using this time-filtered state.
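The finalization step can be sketched as follows. This is a simplified non-recursive low-pass filter (a Lanczos-windowed ideal response), purely illustrative of the idea rather than the operational technique of Fillion et al. (1995); the function name and filter choice are assumptions:

```python
import numpy as np

def filtered_state(states, dt, cutoff_hours=6.0):
    """Low-pass filter an odd-length time series of model states (one per
    time step of dt seconds), returning a filtered state valid at the
    midpoint of the series."""
    n = len(states)                          # assumed odd; midpoint at (n-1)/2
    half = (n - 1)//2
    wc = 2*np.pi*dt/(cutoff_hours*3600.0)    # cutoff frequency times dt
    k = np.arange(-half, half + 1)
    h = (wc/np.pi)*np.sinc(wc*k/np.pi)       # ideal low-pass response
    h *= np.sinc(k/(half + 1.0))             # Lanczos window
    h /= h.sum()                             # preserve the mean state
    return np.tensordot(h, np.asarray(states), axes=1)
```

Filtering the states of a 6-h integration with, say, a 10-min time step (37 states) yields a filtered state valid 3 h into the integration, from which the forecast is restarted.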

## 7. Summary

To meet the needs of operational weather forecasting and research, as well as those of air quality and climate modeling, a unified strategy is proposed that is based on the use of a global variable-resolution model, run with different configurations. Broadly speaking, these are

- a uniform-resolution global-scale configuration, for large-scale problems such as medium- and long-range weather forecasting, climate change modeling, and the long-range transport of pollutants;
- a variable-resolution synoptic-scale configuration for regional-scale problems such as more detailed forecasts to 2 days, and regional air quality and climate modeling; and
- variable-resolution meso-*β* and meso-*γ* configurations for yet-more-detailed forecasts and simulations at correspondingly shorter time periods.

This approach offers economies in both operational and research environments, since there is only one model to maintain, develop, and optimize, instead of the usual two or more. It also provides a viable and conceptually simple solution to the nesting problem for regional forecasting: the planetary-scale waves are adequately resolved around a high-resolution subdomain (which resolves the smaller scale disturbances), there are no artificial lateral boundaries, and there is no abrupt change of resolution across an internal boundary since the resolution varies smoothly away from the area of interest.

Ingredients of this strategy include

- an implicit time treatment of the nonadvective terms responsible for the fastest-moving acoustic and gravitational oscillations;
- a semi-Lagrangian treatment of advection to overcome the stability limitations encountered in Eulerian schemes due to the convergence of the meridians and to strong jet streams;
- a cell-integrated finite-element spatial discretization to provide a robust way of achieving variable resolution;
- an arbitrary rotated latitude–longitude mesh to focus resolution on any part of the globe;
- an embedded advection–diffusion module to transport a family of chemical species for air quality and environmental-emergency-response applications;
- a 3D variational data assimilation system, to be described in detail elsewhere; and
- a tangent linear model and its adjoint to facilitate the development of future 4D data assimilation systems.

Results, with a particular emphasis for variable-resolution mesoscale configurations, are given in Part II.

The authors gratefully acknowledge the continuous support by the managers—Hubert Allard, Michel Béland, Peter Chen, Pierre Dubreuil, Louis Lefaivre, Réal Sarrazin, Angèle Simard, and David Steenbergen—of both the Meteorological Research Branch of the Climate and Atmospheric Research Directorate and the Canadian Meteorological Centre, during the development phase of the project described herein.

Many of the authors’ colleagues made very valuable technical contributions to the development of the GEM model. Particular thanks are due to James Caveen, Yves Chartier, Gabriel Lemay, Judy St. James, Joseph-Pierre Toviessi, and Michel Valin.

Thanks are also due to Pierre Gauthier, Stéphane Laroche, Josée Morneau, Saroja Polavarapu, Judy St. James, and Monique Tanguay for their work, briefly summarized here and to be described in detail by them elsewhere, on the complementary data assimilation aspects of the project.

Finally, the authors wish to thank the anonymous reviewers for their comments, which led to substantial improvements in the presentation of the material.

## REFERENCES

Alpert, P., S. O. Krichak, T. N. Krishnamurti, U. Stein, and M. Tsidulko, 1996: The relative roles of lateral boundaries, initial conditions, and topography in mesoscale simulations of lee cyclogenesis. *J. Appl. Meteor.,* **35,** 1091–1099.

Anthes, R. A., 1983: Regional models of the atmosphere in middle latitudes. *Mon. Wea. Rev.,* **111,** 1306–1335.

Arakawa, A., 1984: Boundary conditions in limited-area models. *Proceedings of the Workshop on Limited-Area Numerical Weather Prediction Models for Computers of Limited Power,* Short- and Medium-Range Weather Prediction Research Publication Series No. 13 (WMO/TD No. 19), World Meteorological Organization, Geneva, Switzerland, 403–436.

——, and V. R. Lamb, 1977: Computational design of the basic dynamical processes of the UCLA general circulation model. *Methods in Comput. Phys.,* Vol. 17, Academic Press, 174–265.

Barros, S. R. M., D. Dent, L. Isaksen, and G. Robinson, 1995: The IFS model overview and parallel strategies. *Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,* World Scientific, 303–318.

——, ——, ——, and ——, 1996: A parallel spectral model. Research activities in atmospheric and oceanic modelling. CAS/JSC WGNE Rep. No. 23 (WMO/TD-No. 734), World Meteorological Organization, I.2–I.4.

Bartello, P., and S. J. Thomas, 1996: The cost-effectiveness of semi-Lagrangian advection. *Mon. Wea. Rev.,* **124,** 2883–2897.

Bates, J. R., 1991: Comments on “Noninterpolating semi-Lagrangian advection schemes with minimized dissipation and dispersion errors.” *Mon. Wea. Rev.,* **119,** 230.

——, S. Moorthi, and R. W. Higgins, 1993: A global multilevel atmospheric model using a vector semi-Lagrangian finite-difference scheme. Part I: Adiabatic formulation. *Mon. Wea. Rev.,* **121,** 244–263.

Baumhefner, D. P., and D. J. Perkey, 1982: Evaluation of lateral boundary errors in a limited-domain model. *Tellus,* **34,** 409–428.

Bélair, S., P. Lacarrère, J. Noilhan, V. Masson, and J. Stein, 1998: High-resolution simulation of surface and turbulent fluxes during HAPEX-MOBILHY. *Mon. Wea. Rev.,* in press.

Benoit, R., M. Desgagné, P. Pellerin, S. Pellerin, Y. Chartier, and S. Desjardins, 1997: The Canadian MC2: A semi-Lagrangian, semi-implicit wideband atmospheric model suited for finescale process studies and simulation. *Mon. Wea. Rev.,* **125,** 2382–2415.

Bermejo, R., and A. Staniforth, 1992: The conversion of semi-Lagrangian advection schemes to quasi-monotone schemes. *Mon. Wea. Rev.,* **120,** 2622–2632.

Bougeault, P., 1997: Physical parameterization for limited area models: Some current problems and issues. *Meteor. Atmos. Phys.,* **63,** 71–88.

Bubnova, R., G. Hello, P. Bénard, and J.-F. Geleyn, 1995: Integration of the fully elastic equations cast in hydrostatic pressure terrain-following coordinate in the framework of the ARPEGE/Aladin NWP system. *Mon. Wea. Rev.,* **123,** 515–535.

Caian, M., and J.-F. Geleyn, 1997: Some limits to the variable mesh solution and comparison with the nested LAM one. *Quart. J. Roy. Meteor. Soc.,* **123,** 743–766.

Chen, M., and J. R. Bates, 1996a: Forecast experiments with a global finite-difference semi-Lagrangian model. *Mon. Wea. Rev.,* **124,** 1992–2007.

——, and ——, 1996b: A comparison of climate simulations from a semi-Lagrangian and an Eulerian GCM. *J. Climate,* **9,** 1126–1149.

Chouinard, C., J. Mailhot, H. L. Mitchell, A. Staniforth, and R. Hogue, 1994: The Canadian regional data assimilation system: Operational and research applications. *Mon. Wea. Rev.,* **122,** 1306–1325.

Concus, P., G. H. Golub, and D. P. O’Leary, 1976: A generalized conjugate gradient method for the numerical solution of partial differential equations. *Sparse Matrix Computations,* R. Bunch and D. J. Rose, Eds., Academic Press, 309–322.

Côté, J., 1988: A Lagrange multiplier approach for the metric terms of semi-Lagrangian models on the sphere. *Quart. J. Roy. Meteor. Soc.,* **114,** 1347–1352.

——, M. Roch, A. Staniforth, and L. Fillion, 1993: A variable-resolution semi-Lagrangian finite-element global model of the shallow-water equations. *Mon. Wea. Rev.,* **121,** 231–243.

——, J.-G. Desmarais, S. Gravel, A. Méthot, A. Patoine, M. Roch, and A. Staniforth, 1998: The operational CMC–MRB Global Environmental Multiscale (GEM) model. Part II: Results. *Mon. Wea. Rev.,* **126,** 1397–1418.

Courtier, P., and J.-F. Geleyn, 1988: A global numerical weather prediction model with variable resolution: Application to the shallow-water equations. *Quart. J. Roy. Meteor. Soc.,* **114,** 1321–1346.

——, C. Freydiet, J.-F. Geleyn, F. Rabier, and M. Rochas, 1991: The Arpege project at Météo France. *Proc. Numerical Methods in Atmospheric Models,* European Centre for Medium-Range Weather Forecasts, 193–231.

——, J.-N. Thépaut, and A. Hollingsworth, 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. *Quart. J. Roy. Meteor. Soc.,* **120,** 1367–1388.

Cullen, M. J. P., 1993: The unified forecast/climate model. *Meteor. Mag.,* **122,** 81–94.

——, T. Davies, M. H. Mawson, J. A. James, and S. C. Coulter, 1997: An overview of numerical methods for the next generation UK NWP and climate model. *The André J. Robert Memorial Volume,* Canadian Meteorological and Oceanographic Society, 425–444.

Daley, R., 1991: *Atmospheric Data Analysis.* Cambridge Atmospheric and Space Science Series, Vol. 2, Cambridge University Press, 457 pp.

Daley, R. W., 1988: The normal modes of the spherical non-hydrostatic equations with applications to the filtering of acoustic modes. *Tellus,* **40A,** 96–106.

Dastoor, A. P., and J. Pudykiewicz, 1996: A numerical global meteorological sulfur transport model and its application to arctic air pollution. *Atmos. Environ.,* **30,** 1501–1522.

Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. *Quart. J. Roy. Meteor. Soc.,* **102,** 405–418.

——, 1983: Limitations of some common lateral boundary schemes used in regional NWP models. *Mon. Wea. Rev.,* **111,** 1002–1012.

Déqué, M., and J. P. Piedelievre, 1995: High resolution climate simulation over Europe. *Climate Dyn.,* **11,** 321–339.

Dickinson, A., P. Burton, J. Parker, and R. Baxter, 1995: Implementation and initial results from a parallel version of the Meteorological Office atmosphere prediction model. *Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,* World Scientific, 177–194.

Dietachmayer, G. S., 1990: Comments on “Noninterpolating semi-Lagrangian advection schemes with minimized dissipation and dispersion errors.” *Mon. Wea. Rev.,* **118,** 2252–2253.

DiMego, G. J., K. E. Mitchell, R. A. Petersen, J. E. Hoke, J. P. Gerrity, J. C. Tuccilo, R. L. Wobus, and H. H. Juang, 1992: Changes to NMC’s Regional Analysis and Forecast System. *Wea. Forecasting,* **7,** 185–198.

Drake, J. B., I. T. Foster, J. G. Michalakes, B. Toonen, and P. H. Worley, 1995: Design and performance of a scalable parallel community climate model. *Parallel Comput.,* **21,** 1571–1592.

Edouard, S., B. Legras, F. Lefèvre, and R. Eymard, 1996: The effect of mixing on ozone depletion in the Arctic. *Nature,* **384,** 444–447.

Errico, R., and D. Baumhefner, 1987: Predictability experiments using a high-resolution limited-area model. *Mon. Wea. Rev.,* **115,** 488–504.

——, T. Vukicevic, and K. Raeder, 1993: Comparison of initial and lateral boundary condition sensitivity for a limited-area model. *Tellus,* **45A,** 539–557.

Estrade, J. F., and D. Birman, 1995: Adapting parallel IFS/ARPEGE to METEO-FRANCE implementation. *Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,* World Scientific, 206–222.

Fillion, L., H. L. Mitchell, H. Ritchie, and A. Staniforth, 1995: The impact of a digital filter finalization technique in a global data assimilation system. *Tellus,* **47A,** 304–323.

Fox-Rabinovitz, M., G. Stenchikov, M. Suarez, and L. Takacs, 1997: A finite-difference GCM dynamical core with a variable resolution stretched grid. *Mon. Wea. Rev.,* **125,** 2943–2968.

Gates, W. L., 1992: AMIP: The Atmospheric Model Intercomparison Project. *Bull. Amer. Meteor. Soc.,* **73,** 1962–1970.

——, Ed., 1995: *The Proceedings of the First International AMIP Scientific Conference.* WCRP-92, WMO/TD-No. 732, World Climate Research Programme, World Meteorological Organization, 532 pp.

Gauthier, P., L. Fillion, P. Koclas, and C. Charette, 1996: Implementation of a 3D variational analysis at the Canadian Meteorological Centre. Preprints, *11th Conf. on Numerical Weather Prediction,* Norfolk, Virginia, Amer. Meteor. Soc., 232–234.

Gravel, S., and A. Staniforth, 1992: Variable resolution and robustness. *Mon. Wea. Rev.,* **120,** 2633–2640.

——, ——, and J. Côté, 1993: A stability analysis of a family of baroclinic semi-Lagrangian forecast models. *Mon. Wea. Rev.,* **121,** 815–826.

Gustafsson, N., 1990: Sensitivity of limited area model data assimilation to lateral boundary condition fields. *Tellus,* **42A,** 109–115.

——, and A. McDonald, 1996: A comparison of the HIRLAM gridpoint and spectral semi-Lagrangian models. *Mon. Wea. Rev.,* **124,** 2008–2022.

Hack, J. J., J. M. Rosinski, D. L. Williamson, B. A. Boville, and J. E. Truesdale, 1995: Computational design of the NCAR community climate model. *Parallel Comput.,* **21,** 1545–1569.

Hammond, S. W., R. D. Loft, J. M. Dennis, and R. K. Sato, 1995: Implementation and performance issues of a massively parallel atmospheric model. *Parallel Comput.,* **21,** 1593–1610.

Hardiker, V., 1997: A global numerical weather prediction model with variable resolution. *Mon. Wea. Rev.,* **125,** 59–73.

Harrison, E. J., and R. L. Elsberry, 1972: A method of incorporating nested finite grids in the solution of systems of geophysical equations. *J. Atmos. Sci.,* **29,** 1235–1245.

Held, I. M., and M. J. Suarez, 1994: A proposal for the intercomparison of the dry dynamical cores of atmospheric general circulation models. *Bull. Amer. Meteor. Soc.,* **75,** 1825–1830.

Henderson, T., C. Baillie, S. Benjamin, M. Govett, L. Hart, A. Marroquin, B. Rodriguez, T. Black, R. Bleck, G. Carr, and J. Middlecoff, 1995: Progress toward demonstrating operational capability of massively parallel processors at the Forecast Systems Laboratory.

Héreil, P., and R. Laprise, 1996: Sensitivity of internal gravity waves to the time step of a semi-implicit semi-Lagrangian nonhydrostatic model. *Mon. Wea. Rev.,* **124,** 972–999.

Imbard, M., A. Craplet, P. Degardin, Y. Durand, A. Joly, N. Marie, and J.-F. Geleyn, 1987: Fine-mesh limited area forecasting with the French operational Peridot system. Proc., *The Nature and Prediction of Extra-tropical Weather Systems,* Vol. II, European Centre for Medium-Range Weather Forecasts, Shinfield Park, Reading, United Kingdom, 231–269.

Isaksen, L., and S. R. M. Barros, 1995: IFS 4d-variational analysis overview and parallel strategy. *Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,* World Scientific, 337–351.

Jones, R. G., J. M. Murphy, and M. Noguer, 1995: Simulation of climate change over Europe using a nested regional-climate model. I: Assessment of control climate, including sensitivity to location of lateral boundaries. *Quart. J. Roy. Meteor. Soc.,* **121,** 1413–1449.

——, ——, ——, and A. B. Keen, 1997: Simulation of climate change over Europe using a nested regional-climate model. II: Comparison of driving and regional model responses to a doubling of carbon dioxide. *Quart. J. Roy. Meteor. Soc.,* **123,** 265–292.

Kalnay de Rivas, E., 1972: On the use of nonuniform grids in finite-difference equations. *J. Comput. Phys.,* **10,** 202–210.

Kasahara, A., 1974: Various vertical coordinate systems used for numerical weather prediction. *Mon. Wea. Rev.,* **102,** 509–522.

Kreiss, H., and J. Oliger, 1973: *Methods for the Approximate Solution of Time Dependent Problems.* GARP Publ. Series, No. 10, World Meteorological Organization, 107 pp. [Available from World Meteorological Organization, Case Postale 2300, CH-1211 Geneva 2, Switzerland.]

Kurihara, Y., and R. E. Tuleya, 1974: Structure of a tropical cyclone developed in a three-dimensional numerical simulation model. *J. Atmos. Sci.,* **31,** 893–919.

Laprise, R., 1992: The Euler equations of motion with hydrostatic pressure as independent variable. *Mon. Wea. Rev.,* **120,** 197–207.

Leslie, L. M., and G. S. Dietachmayer, 1997: Comparing schemes for integrating the Euler equations. *Mon. Wea. Rev.,* **125,** 1687–1691.

Lynch, P., and X.-Y. Huang, 1994: Diabatic initialization using recursive filters. *Tellus,* **46A,** 583–597.

Mailhot, J., R. Sarrazin, B. Bilodeau, N. Brunet, and G. Pellerin, 1997: Development of the 35-km version of the operational regional forecast system. *Atmos.–Ocean,* **35,** 1–28.

McFarlane, N. A., G. J. Boer, J.-P. Blanchet, and M. Lazare, 1992: The Canadian Climate Centre second-generation general circulation model and its equilibrium climate. *J. Climate,* **5,** 1013–1044.

Mesinger, F., 1973: A method for construction of second-order accuracy difference schemes permitting no false two-grid-interval wave in the height field. *Tellus,* **25,** 444–458.

Michalakes, J., T. Canfield, R. Nanjundiah, S. Hammond, and G. Grell, 1995: Parallel implementation, validation, and performance of MM5.

Mitchell, H. L., C. Chouinard, C. Charette, R. Hogue, and S. J. Lambert, 1996: Impact of a revised analysis algorithm on an operational data assimilation system. *Mon. Wea. Rev.,* **124,** 1243–1255.

Miyakoda, K., and A. Rosati, 1977: One-way nested grid models: The interface conditions and the numerical accuracy. *Mon. Wea. Rev.,* **105,** 1092–1107.

Moncrieff, M. W., S. K. Krueger, D. Gregory, J.-L. Redelsperger, and W.-K. Tao, 1997: GEWEX Cloud System Study (GCSS) Working Group 4: Precipitating convective cloud systems. *Bull. Amer. Meteor. Soc.,* **78,** 831–845.

Moorthi, S., 1997: NWP experiments with a gridpoint semi-Lagrangian semi-implicit global model at NCEP. *Mon. Wea. Rev.,* **125,** 74–98.

——, R. W. Higgins, and J. R. Bates, 1995: A global multilevel atmospheric model using a vector semi-Lagrangian finite-difference scheme. Part II: Version with physics. *Mon. Wea. Rev.,* **123,** 1523–1541.

Murphy, J. M., 1995: Transient response of the Hadley Centre coupled ocean–atmosphere model to increasing carbon dioxide. Part I: Control climate and flux adjustment. *J. Climate,* **8,** 36–56.

——, and J. F. B. Mitchell, 1995: Transient response of the Hadley Centre coupled ocean–atmosphere model to increasing carbon dioxide. Part II: Spatial and temporal structure of response. *J. Climate,* **8,** 57–80.

Oliger, J., and A. Sundström, 1978: Theoretical and practical aspects of some initial boundary value problems in fluid dynamics. *S.I.A.M. J. Appl. Math.,* **35,** 419–446.

Paegle, J., 1989: A variable resolution global model based upon Fourier and finite element representation. *Mon. Wea. Rev.,* **117,** 583–606.

——, Q. Yang, and M. Wang, 1997: Predictability in limited area and global models. *Meteor. Atmos. Phys.,* **63,** 53–69.

Parrish, D. F., and J. C. Derber, 1992: The National Meteorological Center’s spectral statistical-interpolation analysis system. *Mon. Wea. Rev.,* **120,** 1747–1763.

Perkey, D. J., and C. Kreitzberg, 1976: A time-dependent lateral boundary scheme for limited-area primitive equation models. *Mon. Wea. Rev.,* **104,** 744–755.

Phillips, N. A., 1957: A coordinate system having some special advantages for numerical forecasting. *J. Meteor.,* **14,** 184–185.

——, and J. Shukla, 1973: On the strategy of combining coarse and fine grid meshes in numerical weather prediction. *J. Appl. Meteor.,* **12,** 763–770.

Pinty, J.-P., R. Benoit, E. Richard, and R. Laprise, 1995: Simple tests of a semi-implicit semi-Lagrangian model on 2D mountain wave problems. *Mon. Wea. Rev.,* **123,** 3042–3058.

Polavarapu, S., and M. Tanguay, 1998: Linearising iterative processes for four-dimensional data assimilation. *Quart. J. Roy. Meteor. Soc.,* in press.

——, ——, R. Ménard, and A. Staniforth, 1995: The tangent linear model for semi-Lagrangian schemes—Linearizing the process of interpolation. *Tellus,* **47A,** 74–95.

Priestley, A., 1993: A quasi-conservative version of the semi-Lagrangian advection scheme. *Mon. Wea. Rev.,* **121,** 621–629.

Purser, R. J., and L. M. Leslie, 1994: An efficient semi-Lagrangian scheme using third-order semi-implicit time integration and forward trajectories. *Mon. Wea. Rev.,* **122,** 745–756.

Ritchie, H., and C. Beaudoin, 1994: Approximations and sensitivity experiments with a baroclinic semi-Lagrangian spectral model. *Mon. Wea. Rev.,* **122,** 2391–2399.

——, C. Temperton, A. Simmons, M. Hortal, T. Davies, D. Dent, and M. Hamrud, 1995: Implementation of the semi-Lagrangian method in a high-resolution version of the ECMWF forecast model. *Mon. Wea. Rev.,* **123,** 489–514.

Rivest, C., A. Staniforth, and A. Robert, 1994: Spurious resonant response of semi-Lagrangian discretizations to orographic forcing: Diagnosis and solution. *Mon. Wea. Rev.,* **122,** 366–376.

Robert, A., 1981: A stable numerical integration scheme for the primitive meteorological equations. *Atmos.–Ocean,* **19,** 35–46.

——, 1982: A semi-Lagrangian and semi-implicit numerical integration scheme for the primitive meteorological equations. *J. Meteor. Soc. Japan,* **60,** 319–325.

——, and E. Yakimiw, 1986: Identification and elimination of an inflow boundary computational solution in limited area model integrations. *Atmos.–Ocean,* **24,** 369–385.

——, T. L. Yee, and H. Ritchie, 1985: A semi-Lagrangian and semi-implicit numerical integration scheme for multilevel atmospheric models. *Mon. Wea. Rev.,* **113,** 388–394.

Rogers, E., T. L. Black, D. G. Deaven, G. J. DiMego, Q. Zhao, M. Baldwin, N. W. Junker, and Y. Lin, 1996: Changes to the operational “Early” Eta analysis/forecast system at the National Centers for Environmental Prediction. *Wea. Forecasting,* **11,** 391–413.

Rood, R. B., 1996: Three dimensional transport models. *Global Tracer Transport Models,* WMO/TD-No. 770, World Meteorological Organization, 152–156.

Schmidt, F., 1977: Variable fine mesh in the spectral global models. *Beitr. Phys. Atmos.,* **50,** 211–217.

Semazzi, F. H. M., J.-H. Qian, and J. S. Scroggs, 1995: A global nonhydrostatic semi-Lagrangian atmospheric model. *Mon. Wea. Rev.,* **123,** 2534–2550.

Sharma, O. P., H. Upadhyaya, Th. Braine-Bonnaire, and R. Sadourny, 1987: Experiments on regional forecasting using a stretched coordinate general circulation model.

*Short- and Medium- Range Numerical Weather Prediction, Proc. WMO/IUGG NWP Symposium,*Tokyo, Japan, Met. Soc. Japan, 263–271.Skamarock, W. C., P. K. Smolarkiewicz, and J. B. Klemp, 1997: Preconditioned conjugate-residual solvers for Helmholtz equations in nonhydrostatic models.

*Mon. Wea. Rev.,***125,**587–599.Smith, B., P. Bjorstad, and W. Gropp, 1996.

*Domain Decomposition:Parallel Multilevel Methods for Elliptic Partial Differential Equations.*Cambridge University Press, 224 pp.Staniforth, A., 1987: Review—Formulating efficient finite-element codes for flows in regular domains.

*Int. J. Numer. Methods Fluids,***7,**1–16.——, 1997: Regional modeling: A theoretical discussion.

*Meteor. Atmos. Phys.,***63,**15–29.——, and J. Côté, 1991: Semi-Lagrangian integration schemes for atmospheric models—A review.

*Mon. Wea. Rev.,***119,**2206–2223.Sundström, A., and T. Elvius, 1979: Computational problems related to limited-area modeling.

*Numerical Methods Used in Atmospheric Models,*Vol. II, GARP Series No. 17, World Meteorological Organization, 379–416. [Available from World Meteorological Organization, Case Postale 2300, CH-1211 Geneva 2, Switzerland.].Tanguay, M., A. Simard, and A. Staniforth, 1989: A three-dimensional semi-Lagrangian scheme for the Canadian regional finite-element forecast model.

*Mon. Wea. Rev.,***117,**1861–1871.——, A. Robert, and R. Laprise, 1990: A semi-implicit semi-Lagrangian fully compressible regional forecast model.

*Mon. Wea. Rev.,***118,**1970–1980.——, S. Polavarapu, and P. Gauthier, 1997: Temporal accumulation of first-order linearization error for semi-Lagrangian passive advection.

*Mon. Wea. Rev.,***125,**1296–1311.Verseghy, D., 1991: CLASS—A Canadian land surface scheme for GCMs. I: Soil model.

*Int. J. Climatol.,***11,**111–113.——, 1993: CLASS—A Canadian land surface scheme for GCMs. II: Vegetation model and coupled runs.

*Int. J. Climatol.,***13,**343–370.Vichnevetsky, R., 1986: Invariance theorems concerning reflection at numerical boundaries.

*J. Comput. Phys.,***63,**268–282.——, 1987: Wave propagation and reflection in irregular grids for hyperbolic equations.

*Appl. Numer. Math.,***2,**133–166.——, and L. H. Turner, 1991: Spurious scattering from discontinuously stretching grids in computational fluid dynamics.

*J. Appl. Math.,***8,**315–328.von Laszewski, G., M. Seablom, M. Makivic, P. Lyster, and S. Ranka, 1995: Design issues for the parallelization of an optimal interpolation algorithm

*Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,*World Scientific, 290–302.Vukicevic, T., and R. Errico, 1990: The influence of artificial and physical factors upon predictability estimates using a complex limited-area model.

*Mon. Wea. Rev.,***118,**1460–1482.Walsh, K., and J. L. McGregor, 1995: January and July climate simulations over the Australian region using a limited area model.

*J. Climate,***8,**2387–2403.Weisman, M. L., W. C. Skamarock, and J. B. Klemp, 1997: The resolution dependence of explicitly modeled convective systems.

*Mon. Wea. Rev.,***125,**527–548.Williamson, D. L., and G. L. Browning, 1974: Formulation of the lateral boundary conditions for the NCAR limited area model.

*J. Appl. Meteor.,***13,**8–16.——, and J. G. Olson, 1994: Climate simulations with a semi-Lagrangian version of the NCAR CCM2.

*Mon. Wea. Rev.,***122,**1594–1610.——, and ——, 1998: A comparison of semi-Lagrangian and Eulerian polar climate simulations.

*Mon. Wea. Rev.,***126,**991–1000.——, ——, and B. A. Boville, 1998. A comparison of semi-Lagrangian and Eulerian tropical climate simulations.

*Mon. Wea. Rev.,***126,**1001–1012.Wolters, L., R. van Engelen, G. Cats, N. Gustafsson, and T. Wilhelmsson, 1995: A data parallel HIRLAM forecast model.

*Parallel Supercomputing in Atmospheric Science: Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology,*World Scientific, 49–62.Yakimiw, E., and A. Robert, 1990: Validation experiments for a nested grid-point regional forecast model.

*Atmos.–Ocean,*466–472.Yanenko, N. N., 1971.

*The Method of Fractional Steps.*Springer, 160 pp.Yessad, K., and P. Bénard, 1996: Introduction of a local mapping factor in the spectral part of the Météo France global variable mesh numerical forecast model.

*Quart. J. Roy. Meteor. Soc.,***122,**1701–1719.Zhang, D.-L., H.-R. Chang, N. L. Seaman, T. T. Warner, and J. M. Fritsch, 1986: A two-way interactive nesting procedure with variable terrain resolution.

*Mon. Wea. Rev.,***114,**1330–1339.Zupanski, M., 1993: Regional four-dimensional variational data assimilation in a quasi-operational forecasting environment.

*Mon. Wea. Rev.,***121,**2396–2408.

# APPENDIX

## Asymptotic Properties of a Family of Variable-Resolution Meshes

Consider the one-dimensional mesh of $(N^{\mathrm{un}} + N^{\mathrm{var}} + 1)$ points shown in Fig. A1, for a domain of length $(L^{\mathrm{un}} + L^{\mathrm{var}})$ having a uniform-resolution subdomain of length $L^{\mathrm{un}}$, where $N^{\mathrm{var}}$ is restricted to be even. There are thus $N^{\mathrm{un}}$ intervals in the uniform-resolution subdomain, and $N^{\mathrm{var}}/2$ intervals in each of the two variable-resolution subdomains. Also, let the resolution in the variable-resolution subdomains vary such that, moving away from the uniform-resolution subdomain, each successive mesh length is a fixed ratio $r$ larger than that of its preceding neighbor, with the first such mesh length being set to $rh$, where $h$ is the constant mesh length of the uniform-resolution subdomain. Thus, considering the rightmost variable-resolution part of the mesh in Fig. A1, the $j$th mesh length outward from the uniform-resolution subdomain is $r^{j}h$, $j = 1, \ldots, N^{\mathrm{var}}/2$, and summing these mesh lengths gives

$$\frac{L^{\mathrm{var}}}{2} = h\sum_{j=1}^{N^{\mathrm{var}}/2} r^{j} = \frac{hr\left(r^{N^{\mathrm{var}}/2} - 1\right)}{r - 1}. \tag{A.4}$$

Solving (A.4) then yields

$$N^{\mathrm{var}} = \frac{2}{\ln r}\,\ln\!\left[1 + \frac{(r - 1)\,L^{\mathrm{var}}}{2rh}\right].$$

For $r > 1$ and $N^{\mathrm{var}}$ large, (A.4) asymptotically reduces to

$$\frac{L^{\mathrm{var}}}{2} \approx \frac{h\,r^{N^{\mathrm{var}}/2 + 1}}{r - 1},$$

and so asymptotically

$$N^{\mathrm{var}}(h) \approx \frac{2}{\ln r}\,\ln\!\left[\frac{(r - 1)\,L^{\mathrm{var}}}{2rh}\right]. \tag{A.8}$$

Consider now the question: how many additional points are asymptotically required in the variable-resolution parts of the mesh if the resolution is doubled in the fixed-size uniform-resolution subdomain, while still maintaining a fixed ratio $r$ between successive mesh lengths in the two variable-resolution subdomains? The number of mesh points $N^{\mathrm{var}}(h/2)$ in the variable-resolution parts of the mesh after such a doubling of resolution in the uniform-resolution subdomain asymptotically goes from the number $N^{\mathrm{var}}(h)$ given in (A.8) to

$$N^{\mathrm{var}}(h/2) \approx N^{\mathrm{var}}(h) + \frac{2\ln 2}{\ln r}.$$

Thus, asymptotically, for each successive doubling of resolution in the uniform resolution subdomain only an additional (ln2/ln*r*) points need to be added in each of the two variable resolution subdomains. For *r* ≈ 1.1, this means that only an extra seven points are asymptotically required in each variable-resolution subdomain when doubling the number in the uniform-resolution subdomain.
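As a quick numerical check on this asymptotic result, one can invert the geometric-sum relation (A.4) for the number of stretched intervals and compare the counts before and after halving $h$. The sketch below is illustrative only; the function name and parameter values are our own choices, not the paper's.

```python
import math

def n_var_half(L_var_half, h, r):
    """Number of intervals in ONE variable-resolution subdomain:
    exact inversion of L_var/2 = h*r*(r**n - 1)/(r - 1), cf. (A.4)."""
    return math.log(1.0 + L_var_half * (r - 1.0) / (r * h)) / math.log(r)

r = 1.1               # mesh-stretching ratio (assumed value)
L_var_half = 1.0e4    # length of one variable-resolution subdomain (arbitrary units)
h = 1.0               # uniform mesh length

n1 = n_var_half(L_var_half, h, r)         # intervals at resolution h
n2 = n_var_half(L_var_half, h / 2.0, r)   # intervals after doubling the resolution
print(n2 - n1)   # close to ln 2 / ln 1.1, i.e. about 7.27 extra points
```

Halving $h$ roughly doubles the argument of the logarithm, which is exactly why the increment settles at $\ln 2/\ln r$ per subdomain regardless of $L^{\mathrm{var}}$.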

Applying the above analysis in both directions of a global variable-resolution latitude–longitude mesh, where subscripts $\lambda$ and $\theta$ denote quantities in the $\lambda$ and $\theta$ directions, respectively, then leads to the following expression for the ratio of the number of points of the variable-resolution mesh to that of a uniform-resolution global mesh of mesh length $h$:

$$\frac{\left(N_{\lambda}^{\mathrm{un}} + N_{\lambda}^{\mathrm{var}}\right)\left(N_{\theta}^{\mathrm{un}} + N_{\theta}^{\mathrm{var}}\right)}{\left(2\pi/h\right)\left(\pi/h\right)},$$

where $N_{\lambda}^{\mathrm{var}}$ and $N_{\theta}^{\mathrm{var}}$ are given by (A.8) with $L_{\lambda}^{\mathrm{var}} \equiv 2\pi - L_{\lambda}^{\mathrm{un}}$ and $L_{\theta}^{\mathrm{var}} \equiv \pi - L_{\theta}^{\mathrm{un}}$, respectively.
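To give a feel for the economy of such a mesh, the sketch below counts points directly: a variable-resolution global grid with a uniform window is compared against a uniform global grid of the same central resolution. All numerical values here (0.25° central resolution, $r = 1.1$, a 60° × 60° uniform window) are illustrative assumptions of ours, and the count uses the inversion of (A.4) rather than any equation from the paper.

```python
import math

def n_points_variable(L_un, L_var, h, r):
    """Points along one coordinate of the variable mesh (Fig. A1):
    N_un uniform intervals plus 2*(N_var/2) stretched ones, plus 1."""
    n_un = round(L_un / h)
    n_half = math.log(1.0 + 0.5 * L_var * (r - 1.0) / (r * h)) / math.log(r)
    return n_un + 2.0 * n_half + 1.0

h = math.radians(0.25)       # central mesh length (0.25 degrees, in radians)
r = 1.1                      # stretching ratio
L_un = math.radians(60.0)    # uniform window, assumed equal in both directions

pts_lambda = n_points_variable(L_un, 2.0 * math.pi - L_un, h, r)  # longitude
pts_theta = n_points_variable(L_un, math.pi - L_un, h, r)         # latitude

uniform_global = (2.0 * math.pi / h) * (math.pi / h)  # uniform global grid
ratio = (pts_lambda * pts_theta) / uniform_global
print(ratio)   # a small fraction: the variable mesh needs far fewer points
```

With these assumed numbers the variable-resolution mesh carries roughly a tenth of the points of the uniform global mesh, illustrating why the stretched-mesh strategy is attractive for focusing resolution over a region of interest.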