## 1. Introduction

Using variable resolution is an efficient way to study multiscale phenomena while reducing the conflict between numerical accuracy and computational cost, which makes the technique very attractive for geophysical fluid dynamics simulations. Given limited computer resources, one can achieve high-quality numerical results by either keeping high grid resolution over the regions of interest or refining the grid dynamically when smaller-scale structures develop during the simulation (Côté 1997). Several methods can be adopted to implement simulations with variable resolution, such as the nested grid (Pielke et al. 1992), the stretched grid (Yessad and Bénard 1996), and the adaptive mesh refinement (AMR) grid. Among these techniques, AMR is usually considered the most intelligent one, since prior information on the flow structure is not required. Since the first effort of Skamarock (1989), several AMR models have been developed for geophysical fluid dynamics simulations; some of them are found in Skamarock and Klemp (1993), Blayo and Debreu (1999), Bacon et al. (2000), Hubbard and Nikiforakis (2003), Behrens (1998), Jablonowski et al. (2006), and St-Cyr et al. (2008), among others. More comprehensive reviews of the applications of adaptive meshes in geophysical fluid dynamics simulations can be found in Behrens (2006) and Nikiforakis (2009).

Two kinds of algorithms have so far gained popularity for implementing AMR on structured grids. The first was proposed by Berger and Oliger (1984), and the other was introduced by De Zeeuw (1993); the two algorithms organize the computational elements of different grid spacings in different ways. Comparisons between the two methods can be found in the existing literature (e.g., van der Holst and Keppens 2007; Jablonowski et al. 2006), but it is still hard to tell which algorithm is superior to the other in all aspects of computational accuracy, efficiency, and flexibility. Considering its simple data structure and flexible refinement ratio, we adopt the Berger–Oliger AMR algorithm in the present study.

Since the 1980s, the Berger–Oliger AMR algorithm has been used in many applications of CFD. Some representative adaptive codes using this algorithm are CLAWPACK (available online at http://www.amath.washington.edu/~claw/), CHOMBO (available online at http://seesar.lbl.gov/ANAG/chombo/), DAGH (available online at http://www.cs.utexas.edu/users/dagh/), and SAMRAI (available online at http://www.llnl.gov/CASC/SAMRAI/). This method was first introduced to atmospheric simulations in a regional model in Skamarock (1989).

Because the AMR algorithm was originally developed on Cartesian grids, the key issue when implementing it in global geophysical fluid dynamics models is how to establish the global adaptive mesh in spherical geometry. Some special techniques have been devised to apply the AMR technique on the spherical latitude–longitude grid to build a global model (e.g., the work reported in Hubbard and Nikiforakis 2003). Recently, researchers have paid increasing attention to advanced global grids with intrinsically quasi-uniform grid spacing, such as the icosahedral grid (Williamson 1968; Sadourny et al. 1968), the cubed-sphere grid (Sadourny 1972), and the Yin–Yang overset grid (Kageyama and Sato 2004). The cubed-sphere grid projects the globe onto the six patches of the inscribed cube, each carrying an identically structured curvilinear coordinate system; it is therefore particularly attractive when one tries to implement an AMR technique derived for the Cartesian grid. Models using local high-order schemes [e.g., the spectral element (SE) model (Thomas and Loft 2002), the discontinuous Galerkin (DG) model (Nair et al. 2005), and the multimoment model (Chen and Xiao 2008)] can accurately and efficiently exchange information over the patch boundaries of the cubed sphere, and the numerical errors due to the broken coordinates along the boundaries can be more easily controlled with a local reconstruction. An adaptive spectral element model has been developed on the cubed-sphere grid in St-Cyr et al. (2008).

A global shallow-water model has recently been developed on the cubed-sphere grid using the multimoment finite volume (MM-FVM) scheme (Chen and Xiao 2008). With the multimoment concept, a high-order spatial reconstruction can be built over a compact stencil; as mentioned before, this feature is preferred on the cubed-sphere grid. Based on this work, a practical adaptive global model is reported in this paper by combining the cubed-sphere grid, the multimoment discretization, and the Berger–Oliger AMR algorithm. It should be noted that the compactness of the reconstruction stencil afforded by the multimoments is essential for efficient and accurate coarse–fine interpolations across the AMR grids.

The rest of this paper is organized as follows. The multimoment finite volume scheme is described in section 2; the reconstruction is built so as to endow the solutions with desirable numerical properties, such as high-order accuracy (up to fourth order), monotonicity, and positivity preservation, by simply modifying a slope parameter in the multimoment interpolation. Section 3 focuses on how to implement the Berger–Oliger AMR algorithm in the multimoment context: the solution procedure on the Berger–Oliger AMR grid is briefly reviewed, and the coarse–fine interpolation based on multimoments is described in detail. Representative numerical tests are carried out to validate the proposed model in section 4. The paper ends with conclusions in section 5.

## 2. Multimoment finite volume scheme

Different from conventional numerical schemes, the multimoment method adopts two or more kinds of moments to construct high-order numerical schemes (Xiao 2004; Xiao et al. 2006). As shown in the existing studies of Ii and Xiao (2007), Akoh et al. (2008), Chen and Xiao (2008), Li et al. (2008), Ii and Xiao (2009, 2010), and Akoh et al. (2010), the multimoment schemes show competitive performance with respect to computational accuracy, efficiency, robustness, and flexibility. In particular, the increased degrees of freedom (DOFs) introduced by the multimoment concept make it possible to use a compact stencil for high-order spatial reconstruction compared with single-moment methods (e.g., the conventional finite difference and finite volume methods). This feature is very useful for high-order reconstructions on unstructured grids or under circumstances where only a limited stencil is available. As we will show later, the multimoment discretization is also of great benefit to the implementation of the AMR algorithm.

In the present formulation, two kinds of moments—the point value (PV) moment and the volume-integrated average (VIA) moment—are adopted to construct the numerical model. As shown in Fig. 1, for a one-dimensional (1D) control volume *C _{i}* defined by [*x*_{i−(1/2)}, *x*_{i+(1/2)}], two moments are defined for the field variable *q*(*x*, *t*):

- the PV moment ^{P}q_{i±(1/2)} = *q*(*x*_{i±(1/2)}, *t*), the point values at the two cell boundaries; and
- the VIA moment ^{V}q_{i} = (1/Δ*x _{i}*) ∫ *q*(*x*, *t*) d*x*, the average over the cell,

where Δ*x _{i}* = *x*_{i+(1/2)} − *x*_{i−(1/2)}.
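To make the two kinds of moments concrete, the sketch below (our own illustration, not code from the paper; the function names are hypothetical) evaluates the PV moments at the cell boundaries and the VIA moments by exact integration of a sample field on a 1D grid.

```python
import numpy as np

def pv_and_via_moments(q, q_antideriv, x_edges):
    """Compute the two kinds of moments used by the multimoment scheme.

    PV moments: point values of q at the cell boundaries x_{i+-1/2}.
    VIA moments: cell averages, evaluated here exactly from an
    antiderivative of q so the example stays quadrature-free.
    """
    pv = q(x_edges)                           # one PV per cell boundary
    dx = np.diff(x_edges)                     # grid spacing of each cell
    via = np.diff(q_antideriv(x_edges)) / dx  # (1/dx) * integral of q
    return pv, via

# Example field q(x) = x**2 on a uniform grid over [0, 1]
x_edges = np.linspace(0.0, 1.0, 5)
pv, via = pv_and_via_moments(lambda x: x**2, lambda x: x**3 / 3.0, x_edges)
```

Note that the two moment sets are redundant-free: the PVs live on the cell boundaries, while each VIA belongs to exactly one cell.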

### a. Spatial reconstruction based on multimoments

Using the two boundary PVs and the VIA of cell *C _{i}*, one can determine a quadratic interpolant as in the CIP-CSL2 scheme (Yabe et al. 2001), which reproduces the PV moments at the two cell boundaries and conserves the VIA over the cell. For clarity, a uniform grid spacing Δ*x* is assumed hereafter; the extension to a nonuniform grid is straightforward.

In this paper, we use the term "MM-FVM_3 scheme" to denote the numerical scheme using the quadratic spatial reconstruction (3). The name indicates that the scheme is of third-order accuracy.
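The single-cell quadratic reconstruction can be illustrated as follows; this is a sketch of the CIP-CSL2 constraints in a local cell coordinate (our own derivation of the coefficients, not the paper's Eq. (3) verbatim): the polynomial matches the two boundary PVs and conserves the VIA.

```python
def csl2_reconstruction(pv_l, pv_r, via, dx):
    """Quadratic CIP-CSL2-type reconstruction on a single cell.

    Returns coefficients (a, b, c) of Q(x) = a + b*x + c*x**2 in the
    local coordinate x in [0, dx], constrained so that
      Q(0) = pv_l,  Q(dx) = pv_r,  (1/dx) * integral_0^dx Q dx = via.
    """
    a = pv_l
    b = (6.0 * via - 4.0 * pv_l - 2.0 * pv_r) / dx
    c = 3.0 * (pv_l + pv_r - 2.0 * via) / dx**2
    return a, b, c

# Verify the three constraints for arbitrary moment data
pv_l, pv_r, via, dx = 1.0, 2.5, 1.4, 0.1
a, b, c = csl2_reconstruction(pv_l, pv_r, via, dx)
q_right = a + b * dx + c * dx**2                 # should equal pv_r
cell_avg = a + b * dx / 2.0 + c * dx**2 / 3.0    # should equal via
```

Only the data of a single cell enters the reconstruction, which is the compactness the text emphasizes.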

By adding one more constraint on the cell-center slope *d _{i}* = ∂*Q _{i}*(*x*)/∂*x*|_{x=x_i}, a cubic polynomial is obtained, as in the CIP-CSL3 reconstruction. The slope *d _{i}* is not an independent moment; it is calculated from the known PVs and VIAs, which are updated at every step as the computational variables. Moreover, this slope parameter is controllable, which makes the numerical scheme based on the CIP-CSL3 reconstruction very flexible. In Xiao and Yabe (2001), several kinds of reconstruction profiles were introduced to give the resultant schemes desired properties, such as monotonicity or high-order (fourth order) accuracy. For the monotone scheme, the limited slope is computed in (8) from the three candidate slopes *d _{L}*, *d _{R}*, and *d _{C}*, where ^{P}q_{i} is an auxiliary PV moment at the cell center, computed from the PV and VIA moments through the three-point Simpson's quadrature rule of fourth-order accuracy. In (8), *θ* is a constant; we usually use *θ* = 2 following the MC limiter (LeVeque 2002), and *δ* is a small positive threshold.
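The minmod-style selection behind an MC-type limiter with *θ* = 2 can be sketched as below. This is a generic illustration: the paper's (8) builds its candidate slopes from the boundary PVs and the auxiliary center PV, which we abstract here as three given slopes.

```python
def mc_limited_slope(d_left, d_center, d_right, theta=2.0):
    """Minmod-style MC limiter applied to three candidate slopes.

    Returns the smallest-magnitude candidate when all three agree in
    sign, and zero otherwise (flattening the profile near extrema,
    which enforces monotonicity).
    """
    candidates = (theta * d_left, d_center, theta * d_right)
    if all(v > 0.0 for v in candidates):
        return min(candidates)
    if all(v < 0.0 for v in candidates):
        return max(candidates)
    return 0.0
```

In smooth monotone regions the centered slope survives untouched, preserving accuracy; near a jump the one-sided candidates clip it.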

For the sake of brevity, we hereafter denote the numerical scheme using the CIP-CSL3 reconstruction with the slope parameter *d ^{M}* computed by (8) as the MM-FVM_M scheme, that with *d*^{4+} by (11) or *d*^{4−} by (12) as the MM-FVM_4 scheme, and that with *d ^{P}* by (13) as the MM-FVM_P scheme.

For practical use, an effective mechanism should be designed to combine the different reconstruction profiles so that the multimoment model can automatically switch among the schemes with or without the limiting projection. For this purpose, an indicator of smoothness can be devised and used to choose the proper scheme; for example, the TVB limiter (Shu 1989) was adopted in Ii and Xiao (2007) to keep high resolution at smooth peaks and eliminate spurious oscillations around discontinuities. Further improvement could be achieved with more sophisticated oscillation-suppressing techniques, such as the weighted essentially nonoscillatory (WENO) concept (Jiang and Shu 1996).

### b. Updating the moments

We consider the hyperbolic conservation laws (a system of *M* equations) written in flux form, where **q** is the vector of unknowns and **f** is the flux vector. For the hyperbolic system, the Jacobian matrix is defined as 𝗔 = ∂**f**/∂**q**, with the real eigenvalues *λ _{m}* (*m* = 1 to *M*).

First, the spatial differentials are discretized using the multimoment spatial reconstruction, which reduces the conservation laws to a system of semidiscrete ordinary differential equations (ODEs). To obtain high-order accuracy in time, we use the Runge–Kutta scheme for the time integration.

#### 1) Spatial discretization

The spatial discretization is described as follows. Provided that the PV [^{P}**q**_{i−(1/2)}] and VIA (^{V}**q**_{i}) moments are known at *t* = *t ^{n}*, the spatial reconstruction *Q _{i}^{n}*(*x*) can be obtained by using (6) with the slope parameter calculated through the formulations (8), (11), (12), or (13). The evolution of the PV and VIA moments is predicted separately by the corresponding equations discretized with different numerical formulations. The PV moment is updated from the differential form of the governing equations; at the cell boundary *x* = *x*_{i+(1/2)}, the upwinding direction is determined by the eigenvalues of the Jacobian matrix 𝗔 = ∂**f**/∂**q** computed from the PVs defined at the cell boundary. The VIA moment is updated in flux form, so its conservation is exactly guaranteed; the numerical flux at the cell boundary is evaluated from the boundary PV moments.

#### 2) Time integration

Letting *ϕ* denote any moment, with *ϕ ^{n}* known at *t* = *t ^{n}*, the time integration is computed by the Runge–Kutta scheme.
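A Runge–Kutta update of the kind used for the semidiscrete ODEs can be sketched as follows. We assume the widely used three-stage strong-stability-preserving (SSP) form, since the Runge–Kutta coefficients are not spelled out in this excerpt; the convergence check at the end is our own illustration.

```python
import math

def ssp_rk3_step(L, phi, dt):
    """One step of the three-stage SSP Runge-Kutta (third order) for
    d(phi)/dt = L(phi); phi may be any moment (PV or VIA)."""
    phi1 = phi + dt * L(phi)
    phi2 = 0.75 * phi + 0.25 * (phi1 + dt * L(phi1))
    return phi / 3.0 + (2.0 / 3.0) * (phi2 + dt * L(phi2))

# Convergence check on d(phi)/dt = -phi, phi(0) = 1, exact phi(1) = e^-1
def integrate(n_steps):
    phi, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        phi = ssp_rk3_step(lambda p: -p, phi, dt)
    return phi

err_coarse = abs(integrate(20) - math.exp(-1.0))
err_fine = abs(integrate(40) - math.exp(-1.0))
```

Halving the step size reduces the error by roughly a factor of 8, confirming third-order convergence.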

### c. Extension to multidimensions and cubed-sphere grid

It is straightforward to extend the multimoment scheme described above to the multidimensional case on structured grids. The moment configuration for a two-dimensional rectangular control volume is illustrated in Fig. 4: eight PV moments and one VIA moment are defined in each grid element. The PV moments are updated by solving the general Riemann problems conducted along the grid lines; the one-dimensional operations discussed above are applied in the *x* and *y* directions, respectively. The VIA moment is updated by a flux-form formulation and is thus exactly conserved; the numerical fluxes across the edges of the control volume are obtained from the PV moments through numerical integrations along the edges to assure high-order accuracy. To extend the multimoment scheme to general curvilinear coordinates, we apply it to the governing equations recast in the curvilinear coordinates. The details of extending the multimoment schemes to the cubed-sphere grid can be found in Chen and Xiao (2008), where the metric terms are analytically computed from the gnomonic projection. The accuracy and other desirable features of the one-dimensional CIP-CSL3 reconstruction carry over fully to the multidimensional computations on structured grids.
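The equiangular gnomonic mapping underlying those analytic metric terms can be sketched for a single patch; the patch orientation chosen here (the patch facing the +*x* axis) and the function name are assumptions for illustration only.

```python
import math

def equiangular_to_sphere(xi, eta):
    """Map equiangular gnomonic coordinates (xi, eta) in [-pi/4, pi/4]
    on one cube patch (here the patch facing +x) to a unit-sphere point.

    The cube point (1, tan(xi), tan(eta)) is projected centrally onto
    the sphere, which is what makes the metric terms analytic.
    """
    X, Y = math.tan(xi), math.tan(eta)
    r = math.sqrt(1.0 + X * X + Y * Y)   # distance of cube point from origin
    return (1.0 / r, X / r, Y / r)

# The patch center maps to (1, 0, 0); a patch corner has xi = eta = pi/4
px, py, pz = equiangular_to_sphere(0.0, 0.0)
cx, cy, cz = equiangular_to_sphere(math.pi / 4, math.pi / 4)
```

Because the angular increments are uniform in (ξ, η), the resulting spherical cells are quasi-uniform in size, as the text notes for the cubed sphere.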

## 3. Implementation of Berger–Oliger AMR algorithm

The Berger–Oliger AMR algorithm was originally proposed in Berger and Oliger (1984) and Berger and Colella (1989). In this algorithm, the computational domain is covered by a number of blocks; the blocks are grouped into several levels according to their spatial resolutions, and blocks of fine resolution always overlay blocks of the coarser levels. Each block has its own memory space for storing the physical variables and other computational parameters. In this section, we first briefly review the Berger–Oliger AMR algorithm. Then the coarse–fine interpolation procedure based on multimoments, which is substantially different from the existing methods, is described in detail. Our implementation of the Berger–Oliger AMR scheme follows the framework of CLAWPACK; modifications are made to accommodate the multimoment discretization and the Runge–Kutta time integration scheme, and additional efforts are made to implement the data transfer computations on the cubed-sphere grid with adaptive meshes.

### a. Solution procedure on AMR grid

#### 1) Updating procedure on AMR grid

Provided that the adaptive mesh has been set up at time *t ^{n}* = *n*Δ*t*_{0}, where Δ*t*_{0} is the time interval of the base level (level 0), the flow solution is advanced to the next time step as follows. Each block of the adaptive mesh has its own storage space, and it can be updated using the same numerical scheme as on a uniform structured mesh once the required values on the ghost cells for the spatial discretization have been provided through coarse–fine interpolations. Because the Berger–Oliger AMR algorithm is also "time adaptive," the different nesting levels are advanced according to a special recursive procedure (see Fig. 3 of Berger and Oliger 1984). The same work flow is adopted in the present adaptive model.
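The time-adaptive recursion can be sketched in a few lines; this toy version only counts the steps each level takes, with the actual solver update and the synchronization step elided as comments.

```python
def advance(level, dt, max_level, refine_ratio, counter):
    """Berger-Oliger recursive advancement: one step of `level`, then
    `refine_ratio` sub-steps of the next finer level, recursively.
    `counter[level]` records how many steps each level has taken."""
    counter[level] += 1           # solver update of this level goes here
    if level < max_level:
        for _ in range(refine_ratio):
            advance(level + 1, dt / refine_ratio, max_level,
                    refine_ratio, counter)
        # after sub-cycling: synchronization (solution replacement and
        # flux correction) would be applied here in the full algorithm

counter = [0, 0, 0]
advance(0, 1.0, max_level=2, refine_ratio=2, counter=counter)
```

With a refinement ratio of 2, one base-level step drives two steps on level 1 and four on level 2, so each level runs at its own stable time step.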

Different from the schemes on a uniform grid, synchronization of the solutions over different levels is carried out at the time step of the coarser mesh to assure numerical accuracy and conservation. Because the fine blocks overlay the blocks of the next coarser level, the solution in the overlapping area is calculated with different grid resolutions; to keep the accuracy of the numerical scheme, the solution on the coarse block obtained with the large grid spacing is replaced by the result calculated with the fine grid resolution.

The interface between two nesting levels lev* _{k}* and lev_{k}_{+1} is considered as an example, and a refinement ratio of 2 is chosen for the sake of simplicity. On the coarse level lev* _{k}*, the numerical flux across the coarse–fine interface is replaced by the accumulation of the fluxes computed on the fine level lev_{k}_{+1} (i.e., the fine fluxes summed over the fine faces and substeps) during Δ*t ^{k}*, the time integration increment of level lev* _{k}*; here *h ^{k}* is the grid spacing and |Δ*V _{k}*| = (*h ^{k}*)^{2} is the area of the computational element of level lev* _{k}*. This correction keeps the scheme conservative across the coarse–fine interface. The corrections in the *y* direction are carried out in a similar manner.
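A schematic of this flux replacement is given below; it assumes the fluxes are already edge-integrated, and all names are illustrative rather than the paper's.

```python
def reflux_correction(coarse_flux, fine_fluxes, dt_coarse, dt_fine, area):
    """Conservation (reflux) correction at a coarse-fine interface.

    The coarse cell was updated with `coarse_flux` over dt_coarse; the
    fine side accumulated `fine_fluxes` (one entry per fine face and
    per fine sub-step), each acting over dt_fine. The returned
    increment replaces the coarse-flux contribution by the accumulated
    fine flux, normalized by the coarse cell area.
    """
    fine_total = sum(f * dt_fine for f in fine_fluxes)
    return (fine_total - coarse_flux * dt_coarse) / area

# refinement ratio 2: two fine faces x two fine sub-steps = 4 entries
delta = reflux_correction(coarse_flux=1.0,
                          fine_fluxes=[0.6, 0.4, 0.7, 0.5],
                          dt_coarse=0.2, dt_fine=0.1, area=4.0)
```

Adding `delta` to the coarse cell makes the total mass crossing the interface identical on both sides, which is what guarantees global conservation.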

#### 2) Setup and dynamic adjustment of the AMR grid

Before specifying the initial conditions for a simulation, the initial adaptive mesh must be generated. First, the base block (level 0) with the prescribed resolution is constructed to cover the whole computational domain. Then, the other levels are generated one by one with gradually refined resolutions until the targeted finest nesting level is reached. Given the blocks of level lev* _{k}*, we create the blocks of the next finer level lev_{k}_{+1} in two steps: 1) flagging the computational cells that need to be refined and 2) separating the flagged cells into a number of distinct clusters and generating the new blocks.

The refinement criterion is another key element of adaptive schemes. Generally, two kinds of refinement criteria are used in existing models: criteria based on error estimation, such as the Richardson extrapolation in Berger and Oliger (1984), and criteria based on representative physical variables; for example, the gradient of a physical field is usually used to identify discontinuities or large jumps where larger numerical errors might be expected. The latter is based on the structure of the flow field and is thus easy to implement. Though more general refinement criteria are worth further investigation (e.g., those described in Behrens 2006, chapter 2), the simple strategy of detecting large gradients in a physical field is very effective and more popular in geophysical fluid simulations; we adopt it in the present study.
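A minimal gradient-based flagging routine might look like the following; this is our own sketch of the strategy (central differences, magnitude threshold), not the paper's exact criterion.

```python
import numpy as np

def flag_cells(h, dx, dy, delta):
    """Flag cells whose height-gradient magnitude exceeds threshold delta.

    Central differences in the interior; the returned boolean array has
    the same shape as h, with the one-cell rim left unflagged for brevity.
    """
    flags = np.zeros_like(h, dtype=bool)
    gx = (h[2:, 1:-1] - h[:-2, 1:-1]) / (2.0 * dx)
    gy = (h[1:-1, 2:] - h[1:-1, :-2]) / (2.0 * dy)
    flags[1:-1, 1:-1] = np.hypot(gx, gy) > delta
    return flags

# A step in the middle of the domain is flagged; the flat parts are not
h = np.zeros((8, 8))
h[4:, :] = 1.0
flags = flag_cells(h, dx=1.0, dy=1.0, delta=0.25)
```

Only the two rows of cells straddling the jump are flagged, so the refined blocks stay tightly wrapped around the discontinuity.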

How to cluster the elements to be refined into blocks is also important and worthy of attention. The simplest method to generate the blocks is bisection. A more sophisticated method based on pattern recognition was proposed in Berger and Rigoutsos (1991) and is used in the present model.
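The signature idea of Berger and Rigoutsos (1991) can be conveyed by a much-simplified sketch that only splits at empty rows or columns of the signature; the full algorithm also cuts at sign changes of the signature's second difference and tunes the box efficiency more carefully, so treat this as a caricature.

```python
import numpy as np

def cluster_boxes(flags, min_efficiency=0.7):
    """Simplified Berger-Rigoutsos-style clustering sketch.

    Shrinks the bounding box of the flagged cells; if the flagged
    fraction ("efficiency") is too low, splits at a zero of the
    row/column signature and recurses. Returns (i0, i1, j0, j1)
    boxes with inclusive bounds.
    """
    idx = np.argwhere(flags)
    if idx.size == 0:
        return []
    i0, j0 = idx.min(axis=0)
    i1, j1 = idx.max(axis=0)
    box = flags[i0:i1 + 1, j0:j1 + 1]
    if box.mean() >= min_efficiency:
        return [(i0, i1, j0, j1)]
    for axis in (0, 1):
        sig = box.sum(axis=1 - axis)      # Berger-Rigoutsos signature
        zeros = np.where(sig == 0)[0]
        if zeros.size:                    # split at an empty row/column
            cut = zeros[0]
            lo, hi = np.zeros_like(flags), np.zeros_like(flags)
            if axis == 0:
                lo[i0:i0 + cut, j0:j1 + 1] = box[:cut]
                hi[i0 + cut:i1 + 1, j0:j1 + 1] = box[cut:]
            else:
                lo[i0:i1 + 1, j0:j0 + cut] = box[:, :cut]
                hi[i0:i1 + 1, j0 + cut:j1 + 1] = box[:, cut:]
            return (cluster_boxes(lo, min_efficiency)
                    + cluster_boxes(hi, min_efficiency))
    return [(i0, i1, j0, j1)]

# Two separated blobs end up in two tight boxes
flags = np.zeros((10, 10), dtype=bool)
flags[1:3, 1:3] = True
flags[6:9, 6:9] = True
boxes = cluster_boxes(flags)
```

Splitting at empty signature entries is exactly what separates disjoint flagged regions into distinct blocks before new fine blocks are allocated.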

During the computation, the adaptive mesh is automatically adjusted to follow the evolution of the flow field; the AMR grid is completely or partially regenerated every time step or every few time steps. Although the adjustment of the AMR grid starts from the fine level, in an order reverse to the initial setup of the AMR grid (Berger and Oliger 1984), most operations used in the initial step can be applied with modest modification. Once the new AMR grid has been constructed, the field variables on the refined elements are interpolated from the coarse level.

### b. Coarse–fine interpolation based on multimoments

As mentioned above, coarse–fine interpolation is required in an AMR model to find the values for the ghost cells as well as the newly generated cells after grid adjustment. The multimoment method is algorithmically different from other numerical schemes and more convenient for the data exchange among the grids of different levels. We describe the coarse–fine interpolations on AMR grid based on multimoments in this subsection.

The multimoment formulation makes use of locally defined degrees of freedom and thus allows one to build high-order reconstruction based on compact stencil; for example, only a single cell is required in the CIP-CSL2 reconstruction (3). As a consequence, the interpolation for the data transfer between coarse and fine grids can be easily conducted. In fact, the interpolation procedure never involves more than two levels of the nesting grids.

Figure 6 illustrates how to evaluate the ghost cells for a block of level lev_{k}_{+1}. Here, the refinement ratio between the two neighboring levels lev_{k}_{+1} and lev* _{k}* is set as *r* = 2; without loss of generality, the interpolation procedure described below applies to any integer refinement ratio. In Fig. 6, a coarse cell of level lev* _{k}* is adjacent to the fine level lev_{k}_{+1}; its right boundary edge is part of the boundary of the lev_{k}_{+1} block, along which the ghost cells of level lev_{k}_{+1} are needed. Given the PVs (including the auxiliary PV at the cell center) on the coarse grid, denoted by hollow circles in Fig. 6, we interpolate the ghost PVs for the finer grid at the points denoted by the solid markers. The markers are classified into three types according to their locations as follows.

Type one: The ghost points denoted by solid circles coincide with the points where the PV moments of the coarse block are defined. The corresponding values are directly copied from the known PVs.

Type two: The ghost points denoted by solid squares are located on the grid lines of coarse block, and the PVs are determined by using the one-dimensional interpolation along the line elements.

Type three: The ghost points denoted by solid triangles are internal points of the coarse element. Generally, a two-dimensional reconstruction polynomial is needed; however, a more efficient interpolation can be obtained by applying the one-dimensional interpolation to the type-two ghost PVs. This is equivalent to the so-called cascade interpolation, which sweeps the 1D scheme alternately in different directions.
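The cascade idea can be sketched with two one-dimensional quadratic sweeps; the nine-point cell layout and function names below are our own illustration, and the sweep reproduces any biquadratic field exactly.

```python
def quad_interp(f0, f1, f2, s):
    """1D quadratic (Lagrange) interpolation through values at s = 0, 1/2, 1."""
    return (f0 * 2.0 * (s - 0.5) * (s - 1.0)
            - f1 * 4.0 * s * (s - 1.0)
            + f2 * 2.0 * s * (s - 0.5))

def cascade_interp(pv, sx, sy):
    """Cascade (dimension-by-dimension) interpolation on one coarse cell.

    pv[j][i] holds the nine PVs of the cell at the corners, edge
    midpoints, and center (i, j indexing x, y in {0, 1/2, 1}). Two 1D
    sweeps replace a genuinely two-dimensional reconstruction.
    """
    rows = [quad_interp(pv[j][0], pv[j][1], pv[j][2], sx) for j in range(3)]
    return quad_interp(rows[0], rows[1], rows[2], sy)

# Exact for a biquadratic field, e.g. f(x, y) = (1 + x)**2 * (2 - y)
f = lambda x, y: (1.0 + x) ** 2 * (2.0 - y)
nodes = [0.0, 0.5, 1.0]
pv = [[f(x, y) for x in nodes] for y in nodes]
value = cascade_interp(pv, 0.25, 0.75)
```

The x-sweep produces three intermediate values on the y-lines (the type-two points), and the y-sweep then fills the interior (type-three) point.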

The coarse–fine interpolation used to determine the flow variables of newly generated fine elements is shown in Fig. 7, where a coarse control volume is subdivided into four fine elements.

The values of PV moments of the fine grid elements can be determined by using the interpolation procedure as mentioned above. However, particular attention must be paid when distributing the VIA of the coarse grid to the four fine elements (denoted by solid diamonds in Fig. 7).

Given the PVs and the VIA of the coarse element, the line-integrated averages over its boundary edges (^{L}*q*_{l}, ^{L}*q*_{r}, ^{L}*q*_{b}, and ^{L}*q*_{t} for the left, right, bottom, and top edges, respectively) are computed by using the three-point Simpson's rule based on the known PV moments. Relations of the same form connect the moments of the subdivided element: ^{V}*q*_{l} and ^{V}*q*_{r} are the volume-integrated averages over the left and right halves of the control volume, ^{L}*q*_{tl} and ^{L}*q*_{tr} are the line-integrated averages over the left and right halves of the top boundary edge, and ^{L}*q*_{bl} and ^{L}*q*_{br} are the corresponding quantities of the bottom boundary edge. Determined in this way, the VIAs of the four fine elements sum exactly to the VIA of the coarse element, so the distribution is conservative.
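One way to carry out such a conservative distribution, shown here in 1D for a refinement ratio of 2 (the 2D case applies the same idea in a tensor-product fashion), is to integrate the single-cell quadratic reconstruction over each half; this is our own sketch, not the paper's exact formulas.

```python
def split_via(pv_l, pv_r, via):
    """Distribute the VIA of a coarse cell to its two children (1D,
    refinement ratio 2) by integrating the quadratic reconstruction
    built from (pv_l, pv_r, via) over each half of the cell.

    Works in the local coordinate s in [0, 1], so the grid spacing
    drops out of the formulas.
    """
    a = pv_l
    b = 6.0 * via - 4.0 * pv_l - 2.0 * pv_r
    c = 3.0 * (pv_l + pv_r - 2.0 * via)

    def avg(s0, s1):  # average of a + b*s + c*s**2 over [s0, s1]
        prim = lambda s: a * s + b * s**2 / 2.0 + c * s**3 / 3.0
        return (prim(s1) - prim(s0)) / (s1 - s0)

    return avg(0.0, 0.5), avg(0.5, 1.0)

via_l, via_r = split_via(pv_l=1.0, pv_r=3.0, via=1.5)
```

The mean of the two child averages equals the parent VIA by construction, which is the conservation property the text insists on.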

### c. Extension to the cubed-sphere grid

On each patch of the cubed sphere, the AMR blocks can be constructed using the algorithm described above directly. The global AMR model, however, requires some extra efforts along the patch boundaries. The basic numerical procedure of the data transfer across the patch boundaries can be found in our previous report (Chen and Xiao 2008). We summarize the major points that need special attention when dealing with the data communications across the patch boundaries in the AMR context as follows.

(i) When implementing the Berger–Oliger AMR algorithm on the elements along a patch boundary, it is necessary to know the grid structure on the adjacent patch. Since a local coordinate system is used separately on each patch, the connections between two neighboring patches concerning the projection and coordinate orientation must be established for the data transfer between the grid hierarchies.

(ii) For the blocks located along the patch boundaries, ghost cells for spatial discretization will be generated on the neighboring patch. Because of the broken coordinates along the patch boundary, the locations of these ghost cells need to be calculated from the geometrical relations of the projection, and the values of the physical variables need to be interpolated using the data of the corresponding cells of the neighboring patch.

(iii) Along the patch boundaries, two or three PV moments may be defined at the same location but stored and updated separately on different patches. To achieve global conservation, a correction step (Chen and Xiao 2008) is required to ensure that these PV moments are single valued. On the AMR grid, an averaging operation is performed when the patch boundary is shared by two blocks of the same nesting level; when the boundary is shared by two blocks of different nesting levels, the PVs of the coarse block are replaced by those of the fine block.

(iv) On the global AMR grid, the aforementioned conservation correction along the interface between coarse and fine nesting levels may occur between two blocks on different patches. Special attention is required to take into account the difference in orientation between the local coordinates on the two neighboring patches when computing the fluxes for conservation correction.

## 4. Numerical tests

Numerical tests are carried out in this section to verify the proposed adaptive model. To examine the effect of using AMR technique in improving numerical accuracy and in saving computational cost, we have conducted numerical experiments with different levels of resolution with both uniformly refined and adaptively refined grids. The normalized *l*_{1}, *l*_{2}, and *l*_{∞} errors, which are defined following Williamson et al. (1992), and the corresponding CPU time on different grids are examined and compared.
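The normalized error norms can be computed as in the following sketch, using discrete area-weighted integrals in the manner of Williamson et al. (1992); the function and variable names are our own.

```python
import numpy as np

def normalized_errors(q, q_ref, weights):
    """Normalized l1, l2, l_inf errors following Williamson et al. (1992).

    `weights` are the cell areas (or quadrature weights) used for the
    discrete global integrals; errors are normalized by the same
    integrals of the reference field.
    """
    diff = np.abs(q - q_ref)
    l1 = np.sum(weights * diff) / np.sum(weights * np.abs(q_ref))
    l2 = np.sqrt(np.sum(weights * diff**2)) / np.sqrt(np.sum(weights * q_ref**2))
    linf = diff.max() / np.abs(q_ref).max()
    return l1, l2, linf

# A uniform 1% offset from the reference gives 1% in all three norms
q_ref = np.array([1.0, 2.0, 4.0, 3.0])
w = np.array([0.2, 0.3, 0.3, 0.2])
l1, l2, linf = normalized_errors(1.01 * q_ref, q_ref, w)
```

On the cubed sphere the weights would be the spherical cell areas from the metric terms, so coarse and fine cells contribute in proportion to the area they cover.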

### a. Advection tests

A computational cell of level lev* _{k}* will be refined if the magnitude of the gradient of the transported field exceeds a prescribed threshold, where *h*(*x*, *y*) is the transported height field and *δ* is the threshold prescribed in advance. We use the same grid spacing in the *x* and *y* directions.

#### 1) Solid rotation of square wave on two-dimensional plane

The advection velocity of the solid rotation is specified as *u* = 2*y* and *υ* = −2*x*. The discontinuous jumps of the square wave are locally refined by the AMR model.

We ran the test on uniform and AMR grids of different spatial resolutions for one revolution (*t* = *π*) using the MM-FVM_M scheme with RK-3 temporal marching. The numerical errors and CPU times on the different grids are given in Table 1. All numerical tests in this paper are computed on an Intel Xeon E5520 CPU (single process). In Table 1, the elapsed CPU time is given for the case on the coarsest uniform grid, whereas the CPU times for all other cases are normalized by that on the coarsest uniform grid. The grid configuration is given as *n* × *m* × *r*, which means that the base grid has *n* × *n* computational elements, the maximal nesting level is *m*, and the refinement ratio between two adjacent levels is *r*. The results are grouped according to the finest resolution. Because of the discontinuous distribution of the height field, the normalized *l*_{1} and *l*_{2} errors decrease as the grid is refined, whereas the *l*_{∞} error does not change remarkably. It is observed that the normalized errors of each group on uniform and AMR grids are quite close, and the AMR model significantly reduces the CPU time compared with the uniformly refined computations. These results reveal the effectiveness of the present AMR model in saving computational cost.

Figure 8 displays the contour plots of the height fields at *t* = *π*/5, 3*π*/5, and *π* computed on two AMR grids. The jumps of the square pulse are well resolved by the AMR model, and the solution is significantly improved when finer mesh adaptation is applied. One-dimensional plots of the cross-sectional (*y* = 0) profiles of the height fields computed on different grids are given in Fig. 9. The discontinuities are reproduced with better resolution on the finer nesting levels, and the spurious oscillations are effectively removed by the monotone limiter.

#### 2) Cosine bell advection on cubed sphere

The initial height field is a cosine bell, where *r* is the great circle distance between the point (*λ*, *θ*) and the initial center (*λ*_{0}, *θ*_{0}) = [(3*π*/2), 0]. The other constants are specified as *h*_{0} = 1000 m and *r*_{0} = *R*/3, where *R* is the radius of the earth.

The advecting wind is the solid-body rotation with maximum speed *u*_{0} = 2*πR*/(12 days), and the parameter *α* represents the angle between the rotation axis and the polar axis of the earth.

The MM-FVM_P scheme is used in the simulations with the RK-4 temporal integration scheme. We ran the numerical experiments on seven different grids of 16 × 1 × 1, 32 × 1 × 1, 64 × 1 × 1, 16 × 2 × 2, 16 × 3 × 2, 16 × 2 × 4, and 32 × 2 × 2. The grid-adaptation notation *n* × *m* × *r* has the same meaning as explained before; when applied to the cubed-sphere grid, an identical base grid of *n* × *n* is set up on each patch. On the cubed-sphere grid generated by the equiangular projection, the finest resolution is measured by the minimum increment of the central angle.

We present two advection tests with flow directions of *α* = *π*/2 and *α* = *π*/4. The normalized errors and CPU times are given in Table 2 for *α* = *π*/2 and Table 3 for *α* = *π*/4. The normalized errors of the tests on the AMR grids are similar to those on the uniformly refined grids of the same finest resolution, and a significant reduction in CPU time is found in the AMR computations. The test with flow in the direction of *α* = *π*/4 is of particular importance for the AMR computation on the cubed-sphere grid because the cosine bell passes four vertices and two complete patch boundaries, where the more complicated numerical procedures are involved. The contour plots of the numerical results for *α* = *π*/4 on the two grids of 16 × 2 × 2 and 16 × 3 × 2 are given in Figs. 10 and 11 at days 3, 7.5, and 12 (one complete revolution). The present AMR model works well even along the patch boundaries and at the vertices of the cube. The normalized *l*_{2} errors on the three refining grids are shown in Fig. 12. No obvious increase of the *l*_{2} error is found when the cosine bell moves across or along the patch boundaries; the multimoment model effectively controls the numerical errors introduced by the broken coordinates along the patch boundaries.

#### 3) Deformational flow on cubed sphere

The initial field is specified in a rotated coordinate system (*λ*′, *θ*′) with its origin at (*λ _{c}*, *θ _{c}*) = (−*π*/4, −3*π*/10). The parameters *γ* = 1.5 and *δ* = 0.01 are specified to generate a nonsmooth initial profile.

### b. Shallow-water tests

For the shallow-water tests, the refinement threshold *δ* = 2 × 10^{−5} is prescribed.

The shallow-water equations are solved in the local curvilinear coordinates of the cubed sphere, where *ξ* and *η* are the local coordinates on each patch and *u* and *υ* are the contravariant velocity components; the details of these quantities are described in Chen and Xiao (2008). In the shallow-water tests, the fourth-order MM-FVM_4 scheme with RK-4 is adopted for all cases.

#### 1) Steady-state geostrophic flow

The initial condition of the steady-state geostrophic flow is specified with *gh*_{0} = 2.94 × 10^{4} m^{2} s^{−2} and *u*_{0} = 2*πR*/(12 days), where *α* is the angle between the rotation axis and the polar axis of the earth.

We computed the case of *α* = *π*/4, which is the most challenging one for models on the cubed-sphere grid. One uniform 36 × 1 × 1 grid and two static three-level 36 × 3 × 2 grids are adopted. The static multilevel grids are constructed by refining the computational cells satisfying |*λ* − *λ _{c}*| < *π*/8 and |*θ* − *θ _{c}*| < *π*/12, where (*λ*, *θ*) is the location of the cell center and (*λ _{c}*, *θ _{c}*) is specified as [*π*, (*π*/4)] for configuration 1 and [(3*π*/4), (*π*/6)] for configuration 2 (see St-Cyr et al. 2008).

The proposed model was integrated on these three grids to day 14 without dynamic adjustment of the mesh. The time history of the normalized *l*_{2} error is given in Fig. 16. Although introducing high-resolution blocks can effectively reduce the numerical errors in the areas they cover, the interpolation procedure for the data transfer across the coarse–fine interfaces might introduce some extra numerical errors. Similar to the formulations examined in St-Cyr et al. (2008), the multimoment FVM model also produces extra numerical errors when fine blocks are added to the uniform grid; however, using the same test configurations, the multimoment model performs better than the FVM model reported in St-Cyr et al. (2008). The normalized *l*_{2} errors increase by 35.23% for configuration 1 and 5.37% for configuration 2 in the present tests. It should also be noted that the numerical error depends on the location of the refinement blocks. Different from the FVM result in St-Cyr et al. (2008), laying the fine blocks in regions of strong gradients effectively reduces the extra errors in the present model. The numerical results on the different grids are given in Fig. 17, where the thick solid lines are the edges between different refinement levels.

#### 2) Rossby–Haurwitz wave

The initial condition of the Rossby–Haurwitz wave is specified following Williamson et al. (1992); the parameters *ω*, *K*, and *r* are specified as *ω* = *K* = 7.848 × 10^{−6} s^{−1} and *r* = 4, and the quantities *A*, *B*, and *C* in the height field are functions of the latitude.

The numerical model is integrated to day 15 on the uniform 36 × 1 × 1 grid and a static two-level 36 × 2 × 2 grid. The refinement criterion is specified by the initial meridional velocity as *u _{θ}* < 60 m s^{−1}. The normalized *l*_{2} errors are calculated at days 5, 10, and 15 against the T511 spectral transform reference solution, and the time history of the normalized *l*_{2} errors on the different grids is shown in Fig. 18. It is observed that the multimoment model produces very similar results in both runs. The numerical results together with the reference solution at day 10 are displayed in Fig. 19; as in the previous test, the thick solid lines indicate the borders between different levels. This test again shows the advantage of the multimoment scheme over traditional FVM models in treating the coarse–fine interfaces.

#### 3) Zonal flow over an isolated mountain

The initial flow is zonal, with *h*_{0} = 5960 m and *u*_{0} = 20 m s^{−1}. A bottom mountain is centered at (*λ _{c}*, *θ _{c}*) = [(3*π*/2), (*π*/6)], and the height of the mountain is analytically given as *h _{s}* = *h*_{s0}(1 − *r*/*r*_{0}), where *h*_{s0} = 2000 m, *r*_{0} = *π*/9, and *r* = min[*r*_{0}, the distance from (*λ*, *θ*) to the mountain center].

Four grids, including three uniform ones with resolutions of 18 × 1 × 1, 36 × 1 × 1, and 144 × 1 × 1 and one four-level adaptive one, 18 × 4 × 2, are chosen for this test. Besides the dynamic refinement criterion based on vorticity, any computational element where *h*_{s} > 0 is also refined. The normalized *l*_{2} errors compared with the T426 spectral transform solution are given in Fig. 20. The numerical results at days 5, 10, and 15 on the AMR grid are given in Fig. 21. The CPU times required by the different grids are summarized in Table 5. The normalized *l*_{2} error on the AMR grid is very similar to that on the finest uniform 144 × 1 × 1 grid. The multimoment adaptive model performs quite similarly to the SE model and better than the FVM model cited in St-Cyr et al. (2008). A large saving in computational cost is achieved: the adaptive model consumes only about 16% of the CPU time required by the uniformly refined grid to obtain a similar result.
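The combined criterion described above, dynamic vorticity-based refinement plus static refinement over the mountain, can be sketched as a cell-flagging predicate; the threshold value and array names here are hypothetical, chosen only to illustrate the flagging step that precedes regridding:

```python
import numpy as np

def flag_cells(vorticity, h_s, vort_threshold=1e-5):
    """Mark cells for refinement.

    A cell is flagged when the magnitude of its relative vorticity exceeds
    the dynamic threshold, or when it is covered by topography (h_s > 0).
    Returns a boolean array the same shape as the inputs.
    """
    return (np.abs(vorticity) > vort_threshold) | (h_s > 0.0)
```

In a Berger–Oliger framework, flagged cells would then be clustered into rectangular patches (e.g., by the Berger–Rigoutsos algorithm) before the fine blocks are created.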

#### 4) Barotropic instability test

This test problem is particularly suitable for examining the AMR model because the large jumps in the height and velocity fields, which are the major source of numerical errors and easy to identify dynamically, are confined to a zonal belt. We carried out this test on different uniform and AMR grids as in the advection tests.

The jet parameters are *u*_{max} = 80 m s^{−1}, *θ*_{0} = (*π*/7), *θ*_{1} = (*π*/2) − *θ*_{0}, and *e*_{n} = exp[−4/(*θ*_{1} − *θ*_{0})^{2}]. The basic balanced height field can be obtained by integrating the balance relation of Galewsky et al. (2004), where *h*_{0} is determined by prescribing the mean height to be 10 000 m. An initial perturbation of the height field is added to the balanced flow to initiate the instability, with *ĥ* = 120 m, *α* = ⅓, *β* = ⅕, and *θ*_{2} = *π*/4.
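The zonal jet and the height perturbation of Galewsky et al. (2004) can be sketched with the parameters quoted above; the balanced height field itself follows from numerically integrating the balance relation and is omitted here. Symbol names mirror the text, with *β* taken as the printed value ⅕:

```python
import numpy as np

U_MAX = 80.0                          # m/s
THETA0 = np.pi / 7
THETA1 = np.pi / 2 - THETA0
EN = np.exp(-4.0 / (THETA1 - THETA0)**2)   # normalization so max(u) = U_MAX
H_HAT, ALPHA, BETA, THETA2 = 120.0, 1.0 / 3.0, 0.2, np.pi / 4

def jet_u(theta):
    """Zonal jet profile: compactly supported on (theta0, theta1)."""
    if THETA0 < theta < THETA1:
        return U_MAX / EN * np.exp(1.0 / ((theta - THETA0) * (theta - THETA1)))
    return 0.0

def height_perturbation(lam, theta):
    """Localized Gaussian-type bump added to the balanced height field."""
    return (H_HAT * np.cos(theta)
            * np.exp(-(lam / ALPHA)**2)
            * np.exp(-((THETA2 - theta) / BETA)**2))
```

The factor *e*_{n} is chosen so that the jet attains exactly *u*_{max} at the midpoint of the belt, and the profile vanishes with all derivatives at *θ*_{0} and *θ*_{1}.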

The normalized *l*_{2} errors on different grids are given in Fig. 22; they are computed by comparing the numerical results with a reference solution obtained from a high-resolution run of the present model on the 256 × 1 × 1 grid. It is observed that locally refining the grid resolution by AMR effectively maintains the numerical accuracy: the errors from the AMR computations are comparable to those using grids uniformly refined over the whole computational domain. Cases 2 and 4 form one group with the same finest grid resolution (64 × 64 on each patch) over the regions where the instability develops, whereas cases 3, 5, 6, and 7 form another group with a finest grid of 128 × 128. Cases 1, 2, and 3 use uniformly refined grids over the whole globe. The AMR solution of case 4 has almost identical errors to that with a uniformly refined grid. In a similar manner, the AMR computations of cases 5, 6, and 7 satisfactorily retrieve the solution on the globally refined grid of case 3.
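The normalized *l*_{2} error used throughout these comparisons follows the standard definition of Williamson et al. (1992): the global area-weighted rms of the difference from the reference field, normalized by the rms of the reference field. A minimal sketch, with `weights` standing for the spherical cell areas (any proportional measure works, since the normalization cancels a common factor):

```python
import numpy as np

def normalized_l2(h, h_ref, weights):
    """Normalized l2 error of field h against reference h_ref."""
    num = np.sqrt(np.sum(weights * (h - h_ref)**2))
    den = np.sqrt(np.sum(weights * h_ref**2))
    return num / den
```

On an AMR grid the sums would run over the composite mesh, with each cell weighted by its own area so that fine and coarse cells contribute consistently.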

The numerical results on four different grids, 32 × 1 × 1 (case 1), 32 × 2 × 2 (case 4), 32 × 3 × 2 (case 5), and 128 × 1 × 1 (case 3), are illustrated by the contour plots in Fig. 23 for the height field at day 6. For clarity, we plot the boundaries between different nesting levels in these figures. The numerical result on the coarse grid without AMR (panel 1 in Fig. 23) is dominated by errors mainly generated from the patch boundaries. We observe that, as the grid resolution is increased by adaptively using finer nesting grids over the areas where the barotropic instability develops, the numerical results converge to the reference solution [see Fig. 4 of Galewsky et al. (2004)]. Consistent with the overall error evaluation shown in Fig. 22, the result on the 32 × 3 × 2 grid (case 5) is very close to the uniform global refinement with the 128 × 1 × 1 grid (case 3).

For a more detailed comparison of computational cost, we show the CPU times consumed on the different grids in Table 6, where the grid configurations of all cases correspond to those given in Fig. 22. It is observed that the AMR computations in cases 5 and 7 take less than 30% of the CPU time of the globally refined computation. The adaptive model, which maintains accuracy comparable to the uniform-grid computation, thus has a substantial advantage in computational efficiency.

## 5. Conclusions

In this study, we extend the multimoment global shallow-water model proposed in Chen and Xiao (2008) to an AMR framework, a capability that should be an important function of a practical atmospheric or oceanic model. The Berger–Oliger AMR algorithm is extended to the spherical geometry of the cubed-sphere grid. The flexibility and locality of the multimoment reconstruction not only simplify the AMR implementation but also provide the numerical model with desirable properties, such as high-order accuracy and monotonicity enforcement.

Locally intensified flow structures, such as tropical low pressure systems and midlatitude cyclones in the atmosphere, are commonly observed in geophysical fluid motion. As seen in the benchmark tests, the AMR technique is able to significantly reduce the computational cost in simulations of atmospheric and oceanic dynamics when the local grid adaptation is properly implemented. We expect the proposed formulation to serve as a basis for practical models that solve geophysical flows with multiple grid resolutions for optimized usage of computer resources.

## Acknowledgments

This work is supported by the National Natural Science Foundation of China and the Chinese Academy of Sciences under Projects 10852001, 10902116, 40805045, and KJCX2-YW-L04. We thank the anonymous reviewers for their constructive suggestions.

## REFERENCES

Akoh, R., S. Ii, and F. Xiao, 2008: A CIP/multi-moment finite volume method for shallow water equations with source terms. *Int. J. Numer. Methods Fluids*, **56**, 2245–2270.

Akoh, R., S. Ii, and F. Xiao, 2010: A multi-moment finite volume formulation for shallow water equations on unstructured mesh. *J. Comput. Phys.*, **229**, 4567–4590.

Bacon, D. P., and Coauthors, 2000: A dynamically adapting weather and dispersion model: The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA). *Mon. Wea. Rev.*, **128**, 2044–2076.

Behrens, J., 1998: Atmospheric and ocean modeling with an adaptive finite element solver for the shallow-water equations. *Appl. Numer. Math.*, **26**, 217–226.

Behrens, J., 2006: *Adaptive Atmospheric Modeling: Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications*. Springer, 207 pp.

Berger, M. J., and J. Oliger, 1984: Adaptive mesh refinement for hyperbolic partial differential equations. *J. Comput. Phys.*, **53**, 484–512.

Berger, M. J., and P. Colella, 1989: Local adaptive mesh refinement for shock hydrodynamics. *J. Comput. Phys.*, **82**, 64–84.

Berger, M. J., and I. Rigoutsos, 1991: An algorithm for point clustering and grid generation. *IEEE Trans. Syst. Man Cybern.*, **5**, 1278–1286.

Berger, M. J., and R. Leveque, 1998: Adaptive mesh refinement using wave-propagation algorithms for hyperbolic systems. *SIAM J. Numer. Anal.*, **35**, 2298–2316.

Blayo, E., and L. Debreu, 1999: Adaptive mesh refinement for finite-difference ocean models: First experiments. *J. Phys. Oceanogr.*, **29**, 1239–1250.

Chen, C. G., and F. Xiao, 2008: Shallow water model on cubed sphere by multi-moment finite volume method. *J. Comput. Phys.*, **227**, 5019–5044.

Côté, J., 1997: Variable resolution techniques for weather prediction. *Meteor. Atmos. Phys.*, **63**, 31–38.

De Zeeuw, D. L., 1993: A quadtree-based adaptively-refined Cartesian-grid algorithm for solution of the Euler equations. Ph.D. thesis, University of Michigan, 133 pp.

Galewsky, J., R. K. Scott, and L. M. Polvani, 2004: An initial-value problem for testing numerical models of the global shallow-water equations. *Tellus*, **56A**, 429–440.

Hubbard, M. E., and N. Nikiforakis, 2003: A three-dimensional, adaptive, Godunov-type model for global atmospheric flows. *Mon. Wea. Rev.*, **131**, 1848–1864.

Ii, S., and F. Xiao, 2007: CIP/multi-moment finite volume method for Euler equations: A semi-Lagrangian characteristic formulation. *J. Comput. Phys.*, **222**, 849–871.

Ii, S., and F. Xiao, 2009: High order multi-moment constrained finite volume method. Part I: Basic formulation. *J. Comput. Phys.*, **228**, 3669–3707.

Ii, S., and F. Xiao, 2010: A global shallow water model using high order multi-moment constrained finite volume method and icosahedral grid. *J. Comput. Phys.*, **229**, 1774–1796.

Jablonowski, C., M. Herzog, J. E. Penner, R. C. Oehmke, Q. F. Stout, B. van Leer, and K. G. Powell, 2006: Block-structured adaptive grids on the sphere: Advection experiments. *Mon. Wea. Rev.*, **134**, 3691–3713.

Jiang, G. S., and C. W. Shu, 1996: The efficient implementation of weighted ENO schemes. *J. Comput. Phys.*, **126**, 202–228.

Kageyama, A., and T. Sato, 2004: The "Yin-Yang grid": An overset grid in spherical geometry. *Geochem. Geophys. Geosyst.*, **5**, Q09005, doi:10.1029/2004GC000734.

Leveque, R. J., 2002: *Finite Volume Methods for Hyperbolic Problems*. Cambridge University Press, 558 pp.

Li, X., D. Chen, X. Peng, K. Takahashi, and F. Xiao, 2008: A multimoment finite-volume shallow-water model on the yin–yang overset spherical grid. *Mon. Wea. Rev.*, **136**, 3066–3086.

Nair, R. D., J. Côté, and A. Staniforth, 1999: Cascade interpolation for semi-Lagrangian advection over the sphere. *Quart. J. Roy. Meteor. Soc.*, **125**, 1445–1468.

Nair, R. D., S. J. Thomas, and R. D. Loft, 2005: A discontinuous Galerkin global shallow water model. *Mon. Wea. Rev.*, **133**, 876–887.

Nikiforakis, N., 2009: Mesh generation and mesh adaptation for large-scale earth-system modelling. *Philos. Trans. Roy. Soc. London*, **367A**, 4473–4481.

Pielke, R. A., and Coauthors, 1992: A comprehensive meteorological modeling system—RAMS. *Meteor. Atmos. Phys.*, **49**, 69–91.

Sadourny, R., 1972: Conservative finite-difference approximations of the primitive equations on quasi-uniform spherical grids. *Mon. Wea. Rev.*, **100**, 136–144.

Sadourny, R., A. Arakawa, and Y. Mintz, 1968: Integration of the nondivergent barotropic vorticity equation with an icosahedral-hexagonal grid for the sphere. *Mon. Wea. Rev.*, **96**, 351–356.

Shu, C.-W., 1988: Total-variation-diminishing time discretization. *SIAM J. Sci. Stat. Comput.*, **9**, 1073–1084.

Shu, C.-W., 1989: TVB uniformly high-order schemes for conservation laws. *Math. Comput.*, **49**, 105.

Shu, C.-W., and S. Osher, 1988: Efficient implementation of essentially non-oscillatory shock-capturing schemes. *J. Comput. Phys.*, **77**, 439–471.

Skamarock, W. C., 1989: Adaptive grid refinement for numerical weather prediction. *J. Comput. Phys.*, **80**, 27–60.

Skamarock, W. C., and J. B. Klemp, 1993: Adaptive grid refinements for two-dimensional and three-dimensional nonhydrostatic atmospheric flow. *Mon. Wea. Rev.*, **121**, 788–804.

St-Cyr, A., C. Jablonowski, J. M. Dennis, H. M. Tufo, and S. J. Thomas, 2008: A comparison of two shallow-water models with nonconforming adaptive grids. *Mon. Wea. Rev.*, **136**, 1898–1922.

Thomas, S. J., and R. D. Loft, 2002: Semi-implicit spectral element atmospheric model. *J. Sci. Comput.*, **17** (1–4), 339–350.

Toro, E. F., and V. A. Titarev, 2005: ADER: Arbitrary high order Godunov approach. *J. Comput. Phys.*, **202**, 196–215.

van der Holst, B., and R. Keppens, 2007: Hybrid block-AMR in Cartesian and curvilinear coordinates: MHD applications. *J. Comput. Phys.*, **226**, 925–946.

Williamson, D. L., 1968: Integration of the barotropic vorticity equation on a spherical geodesic grid. *Tellus*, **20**, 642–653.

Williamson, D. L., J. B. Drake, J. J. Hack, R. Jakob, and P. N. Swarztrauber, 1992: A standard test set for numerical approximations to the shallow water equations in spherical geometry. *J. Comput. Phys.*, **102**, 211–224.

Xiao, F., 2004: Unified formulation for compressible and incompressible flows by using multi-integrated moments I: One-dimensional inviscid compressible flow. *J. Comput. Phys.*, **195**, 629–654.

Xiao, F., and T. Yabe, 2001: Completely conservative and oscillationless semi-Lagrangian schemes for advection transportation. *J. Comput. Phys.*, **170**, 498–522.

Xiao, F., R. Akoh, and S. Ii, 2006: Unified formulation for compressible and incompressible flows by using multi-integrated moments II: Multi-dimensional version for compressible and incompressible flows. *J. Comput. Phys.*, **213**, 31–56, doi:10.1016/j.jcp.2005.08.002.

Yabe, T., R. Tanaka, T. Nakamura, and F. Xiao, 2001: Exactly conservative semi-Lagrangian scheme (CIP-CSL) in one dimension. *Mon. Wea. Rev.*, **129**, 332–344.

Yessad, K., and P. Bénard, 1996: Introduction of a local mapping factor in the spectral part of the Météo-France global variable mesh numerical forecast model. *Quart. J. Roy. Meteor. Soc.*, **122**, 1701–1719.

The CIP-CSL3 reconstruction profile.

Citation: Monthly Weather Review 139, 2; 10.1175/2010MWR3365.1

The GRP at the interface between two control volumes.

Moment configuration for 2D case.

Conservation correction along coarse–fine boundary.

Coarse–fine interpolations based on multimoments for evaluating the ghost cells.

Coarse–fine interpolations based on multimoments for evaluating the flow variables in refined elements.

Contour plots of numerical results of square wave solid rotation on AMR grids (left) 40 × 2 × 2 and (right) 40 × 3 × 2. Shown are height fields at (top) *t* = (⅕)*π*, (middle) (⅗)*π*, and (bottom) *π*.

The 1D plots (along *y* = 0) of height fields of square wave solid-rotation test on different grids.

Contour plots of numerical results of cosine bell advection (*α* = *π*/4) on 16 × 2 × 2 grid. Shown are fields at days (top) 3, (middle) 7.5, and (bottom) 12.

As in Fig. 10, but for the test on 16 × 3 × 2 grid.

Time history of normalized *l*_{2} errors of cosine bell advection test on different grids.

Contour plots of numerical results of nonsmooth deformational flow on 16 × 2 × 2 grid. Shown are height fields at *t* = (top) 0.625, (middle) 1.5625, and (bottom) 2.5.

As in Fig. 13, but for the test on 16 × 3 × 2 grid.

The 1D plots (along equator) of height fields of nonsmooth deformational flow test on different grids.

Time history of normalized *l*_{2} errors of steady-state geostrophic flow test on different grids.

Contour plots of numerical results of steady-state geostrophic flow test on different grids. Shown are (top) height field at day 14 on 36 × 1 × 1 uniform grid and 36 × 3 × 2 static three-level grids constructed with configurations (middle) 1 and (bottom) 2.

Time history of normalized *l*_{2} errors of Rossby–Haurwitz wave test on different grids.

Contour plots of numerical results of Rossby–Haurwitz wave test on different grids. Shown are height field at day 10 on (top) 36 × 1 × 1 uniform grid, (middle) 36 × 2 × 2 static two-level grid, and (bottom) T511 spectral transform reference solution.

Time history of normalized *l*_{2} errors of zonal flow over an isolated mountain test on different grids.

Contour plots of numerical results of zonal flow over an isolated mountain test on 18 × 4 × 2 AMR grid. Shown are height field at days (top) 5, (middle) 10, and (bottom) 15.

Time history of normalized *l*_{2} errors of the barotropic instability test on different grids.

Contour plots of numerical results of the barotropic instability test on different grids. Shown are the relative vorticity at day 6. The grids used in the computations are (top)–(bottom) 32 × 1 × 1, 32 × 2 × 2, 32 × 3 × 2, and 128 × 1 × 1.

Errors and CPU times of solid rotation of the square wave on the 2D plane. The model runs on different uniform or AMR grids, and the results are grouped according to the finest resolution.

Errors and CPU times of the cosine bell advection on the cubed sphere in the direction of *α* = *π*/2. The model runs on different uniform or AMR grids, and the results are grouped according to the finest resolution.

Errors and CPU times of the nonsmooth deformational flow on the cubed sphere. The model runs on different uniform or AMR grids, and the results are grouped according to the finest resolution.

CPU times of the zonal flow over an isolated mountain test on different grids. CPU time is normalized by that of the coarse grid (18 × 1 × 1).

CPU times of the barotropic instability test on different AMR grids. CPU time is normalized by that of the coarse grid (32 × 1 × 1).