Computational models based on discrete dynamical equations are a successful way of approaching the problem of predicting or forecasting the future evolution of dynamical systems. For linear and mildly nonlinear models, the solutions of the numerical algorithms on which they are based converge to the analytic solutions of the underlying differential equations for small time steps and grid sizes. In this paper, the authors investigate the time step sensitivity of three nonlinear atmospheric models of different levels of complexity: the Lorenz equations, a quasigeostrophic (QG) model, and a global weather prediction system (NOGAPS). It is illustrated here how, for chaotic systems, numerical convergence cannot be guaranteed forever. The time of decoupling of solutions for different time steps follows a logarithmic rule (as a function of time step) that is similar for the three models. In regimes that are not fully chaotic, the Lorenz equations are used to illustrate how different time steps may lead to different model climates and even different regimes. A simple model of truncation error growth in chaotic systems is proposed. This model decomposes the error into its stable and unstable components and reproduces well the short- and medium-term behavior of the QG model truncation error growth, with an initial period of slow growth (a plateau) before the exponential growth phase. Experiments with NOGAPS suggest that truncation error can be a substantial component of total forecast error of the model. Ensemble simulations with NOGAPS show that using different time steps may be a simple and natural way of introducing an important component of model error in ensemble design.
Today, computational models of dynamical systems are quite common in fields as diverse as atmospheric and oceanic sciences, physics, biology, and engineering (e.g., Potter 1973; Oran and Boris 1987; Murray 1989; Gershenfeld 1999). Weather and climate prediction models, which are based on relatively well-known nonlinear partial differential equations, are probably the most remarkable and well-known examples of operational forecasting systems (e.g., Richardson 1922; Thompson 1961; Monin 1972; Marchuk 1974; Haltiner and Williams 1980; Daley 1991; Kalnay 2003). The Lorenz equations (Lorenz 1963) have been often used as a paradigm for the extreme sensitivity of some models to the initial conditions, which is a major source of uncertainty in Numerical Weather Prediction (NWP; e.g., Thompson 1957; Lorenz 1963; Leith 1974; Palmer 1995; Smith 2003).
It is also well known that chaotic systems such as the Lorenz equations are sensitive to model error. In section 2 of this paper, the Lorenz equations are used to explore this fact by testing the sensitivity of the model to time steps considerably smaller than the one used originally by Lorenz (1963). This time step sensitivity is not referred to explicitly in many of the recent general references on chaos and the Lorenz equations (e.g., Turcotte 1992; Strogatz 1994) or in recent studies of model error that make use of the Lorenz equations (e.g., Palmer 1993, 1995; Molteni 1994; Chu 1999).
The time step sensitivity of the Lorenz equations leads to some interesting consequences in terms of numerical convergence. In this paper it is illustrated how, for fully chaotic systems, numerical convergence cannot be guaranteed forever and that for regimes that are not fully chaotic, different time steps may lead to different model climates and even different regimes of the solution.
The time step sensitivity of a quasigeostrophic (QG) potential vorticity model (Marshall and Molteni 1993) is analyzed in order to study in more detail a model with dynamics that is closer to the real atmosphere, but still relatively simple. The usefulness of this model for atmospheric predictability and ensemble prediction research has been well documented in previous studies (e.g., Molteni and Palmer 1993; Mureau et al. 1993; Buizza and Palmer 1995; Reynolds and Palmer 1998). The QG model is well suited for this type of study, since it is complex enough to capture baroclinic synoptic-scale processes that are fundamental in forecast error growth, as well as being simple enough to allow for multiple extended integrations and complete linear stability analyses (e.g., Vannitsem and Nicolis 1997; Reynolds and Errico 1999).
The QG model study (section 3) shows results regarding the time step sensitivity that are qualitatively similar to the results obtained with the Lorenz equations. We analyze in some detail the behavior of the error due to the different time steps and we compare it with errors due to the initial conditions. The time truncation error growth exhibits an interesting behavior, and we develop a simple model for the evolution of this error growth.
Because of inevitable uncertainties in the initial conditions and imperfections in the models, operational weather forecasts should also be viewed in a probabilistic sense, which means that forecasts should provide probabilities of the occurrence of specific events, as well as estimates of forecast skill (e.g., Leith 1974). A practical and computationally feasible approach to this problem is to perform an ensemble of forecasts from equally plausible initial states. Ensembles of forecasts created by perturbing the initial state have been produced operationally at centers such as the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP; e.g., Toth and Kalnay 1993; Buizza and Palmer 1998) since the early 1990s.
However, because forecast errors are caused both by errors in the initial conditions and by errors in the model formulation, ensemble techniques should include both initial condition and model error. A significant problem with many current ensemble schemes is that they are underdispersive; that is, the ensemble spread is often too small for the ensemble to include the verification state. This has been attributed in large part to the omission or inadequate treatment of model error in ensemble design. Recently, more attention has been turned to the inclusion of model error in ensemble design through, for example, the introduction of stochastic components in the physical parameterizations (e.g., Buizza et al. 1999). More often than not, however, studies have associated model error with parameterization error, while the nonlinear effects in the predictability of a system due to discretization or truncation error have been largely neglected.
In section 4 of this paper we test the sensitivity of the Navy Operational Global Atmospheric Prediction System (NOGAPS) to different time steps. We compare the sensitivity of NOGAPS with the QG model and discuss the differences in behavior. We also show that the use of different time steps is a simple and potentially relevant way of introducing model uncertainty in ensembles.
In summary, we use this hierarchy of three models of different levels of complexity to answer the following three questions:
What are the consequences in terms of numerical convergence of time step sensitivity in highly nonlinear models such as the Lorenz equations?
What are the differences between error growth due to truncation error (time step in this particular case) and initial condition error growth in simple but meteorologically relevant atmospheric models such as the QG model?
What is the impact of the truncation error in terms of the predictability of an operational system such as NOGAPS?
2. Lorenz equations and numerical convergence
In this section the issue of time step sensitivity of chaotic models is introduced by analyzing the Lorenz equations (Lorenz 1963). Note that Lorenz (1989) tested the sensitivity of simplified versions of the Lorenz equations to time steps larger than the one used originally in Lorenz (1963). For certain (large) values of the time step, Lorenz finds what he refers to as computational chaos. This issue is, in our opinion, associated with the concept of nonlinear numerical stability. In situations where the numerical stability parameter is somewhere between a fully stable and a fully unstable regime, the solutions may exhibit a chaotic behavior, and different dynamical regimes can be obtained for different time steps. Although fascinating in itself and actually quite relevant for weather and climate prediction, this issue should not be confused with our analysis. In the present study, we are analyzing the sensitivity to time steps that are several orders of magnitude smaller than the original time step used by Lorenz (1963).
The Lorenz equations,

dX/dt = σ(Y − X),
dY/dt = rX − Y − XZ,
dZ/dt = XY − bZ
(with r = 28, σ = 10, b = 8/3), are integrated with a second-order numerical scheme, as used by Lorenz (1963). To test the sensitivity to the time step, the equations are integrated with three different time steps: Δt = 0.01 [used in Lorenz (1963)], Δt = 0.001, and Δt = 0.0001 nondimensional Lorenz time units (LTU). In Fig. 1 the time evolution of X during the first 15 LTU for the three different time steps is shown. All solutions are apparently quite close to each other for some initial time, and at about 5 LTU the solution with time step 0.01 LTU starts to decouple from the other two solutions. At about 10 LTU the same happens with the solution obtained with time step 0.001 LTU. Note that the initial state is situated on the attractor of a simulation performed with a much smaller time step of 10⁻⁷ LTU. The Y and Z evolutions show a similar behavior, and the same general sensitivity to the time step is present in simulations that start from other initial states.
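The kind of experiment described above can be sketched in a few lines (a minimal sketch, not the original code: the second-order scheme is implemented here as a Heun predictor–corrector step, and the initial state X = Y = Z = 5 is an arbitrary choice rather than a state taken from the attractor):

```python
import numpy as np

def lorenz_rhs(state, r=28.0, sigma=10.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def integrate(state0, dt, n_steps):
    """Integrate with a second-order (Heun predictor-corrector) scheme."""
    state = np.array(state0, dtype=float)
    out = [state.copy()]
    for _ in range(n_steps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + dt * k1)      # Euler predictor
        state = state + 0.5 * dt * (k1 + k2)  # trapezoidal corrector
        out.append(state.copy())
    return np.array(out)

# Same initial state, two time steps differing by a factor of 10.
x0 = (5.0, 5.0, 5.0)
coarse = integrate(x0, dt=0.01, n_steps=1500)    # 15 LTU
fine = integrate(x0, dt=0.001, n_steps=15000)    # 15 LTU
# Compare X at shared times (every 10th fine step matches a coarse step).
err = np.abs(coarse[:, 0] - fine[::10, 0])
```

The pointwise difference between the two X trajectories stays small at first and grows to order one later in the integration, mirroring the decoupling behavior seen in Fig. 1.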
These results suggest that there is no apparent convergence of the numerical algorithm used to solve the Lorenz equations. Convergence can be defined in the following sense. Let dΦ/dt = F(Φ) be a system of ordinary differential equations (ODEs). Let Φ_t^a represent the analytic solution of such a system given an initial state Φ_0^a, and let Φ_t^{n,Δt} represent a solution obtained from a numerical solver given an initial state Φ_0^n and using a time step Δt. The numerical algorithm converges to the analytic solution if lim_{Δt→0} Φ_t^{n,Δt} = Φ_t^a.
For a solver of order N, standard convergence theory gives a bound of the form

||Φ_t^{n,Δt} − Φ_t^a|| ≤ e^{Dt}(||E₀|| + DΔt^N),     (2)

where N is the order of accuracy of the solver, D is a constant depending only on F, and ||E₀|| = ||Φ_0^n − Φ_0^a|| is the error in the initial state. This implies that over finite forecasting windows T < ∞, the forecast is contained within an exponential error cone, and that as Δt → 0 and ||E₀|| → 0 the numerical solution converges to the analytic solution. Note, however, that (2) does not imply uniform convergence on the infinite interval [0, ∞), nor does it imply asymptotic convergence as T → ∞. For chaotic systems, or systems that display sensitivity to initial conditions, convergence is not uniform; that is, as T increases the values of ||E₀|| and Δt must be made exponentially smaller to attain a given accuracy. Sometimes (2) is too conservative; for example, for stable periodic motions numerical solutions can converge uniformly on [0, ∞), or have less than exponential error growth.
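For contrast, the convergent behavior that (2) guarantees on a finite interval is easy to demonstrate on a stable, non-chaotic ODE (a hypothetical example, not from the paper; the same second-order Heun scheme is applied to dx/dt = −x):

```python
import numpy as np

def heun(f, x0, dt, t_end):
    """Second-order Heun (predictor-corrector) scheme for dx/dt = f(x)."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        k1 = f(x)
        k2 = f(x + dt * k1)       # Euler predictor evaluated at the endpoint
        x = x + 0.5 * dt * (k1 + k2)
    return x

# Stable (non-chaotic) linear ODE dx/dt = -x, exact solution exp(-t).
f = lambda x: -x
exact = np.exp(-1.0)
errors = [abs(heun(f, 1.0, dt, 1.0) - exact) for dt in (0.1, 0.05, 0.025)]
# Observed order of accuracy between successive time steps:
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

Halving the time step reduces the final error by roughly a factor of four, i.e., the observed order is close to N = 2, exactly the finite-interval convergence described by (2).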
It might be thought that what is usually referred to as Anosov–Bowen shadowing implies a stronger result than (2), but this is not the case. Anosov–Bowen shadowing (Anosov 1967; Bowen 1975) implies that for suitably well-behaved systems and prescribed ɛ > 0, there is a sufficiently small Δt such that for any Φ_0^a there exists Φ_0^n with ||Φ_t^{n,Δt} − Φ_t^a|| < ɛ for all t ∈ [0, T]. That is, there is a numerical solution that remains ɛ close to the analytic solution. This result is of little use for weather prediction purposes, since NWP models are almost certainly not suitably well behaved, and Δt would have to be prohibitively small.
Previous studies have addressed issues associated with the accuracy of numerical realizations of chaotic models (Hammel et al. 1987; Grebogi et al. 1990; Dawson et al. 1994; Sauer et al. 1997). However, these studies deal almost exclusively with the issue of numerical (round-off) error in discrete maps, and no in-depth connection is made to issues directly associated with the truncation error in discretized ODEs or partial differential equations (PDEs). The most relevant issue in these papers is the possibility of finding a true trajectory with slightly different initial conditions that may shadow the numerical trajectory for a long time. Even for the simple cases when this is proven to be possible, such a result (although theoretically important) is not of much use for the weather prediction problem.
If the analytic solution is not known, convergence can be analyzed by computing the difference between numerical solutions obtained with a sequence of different time steps, on the one hand, and a numerical solution obtained with the smallest time step possible (considered to be the closest to the analytic solution), on the other.
To better understand the general behavior of the error, Fig. 2 shows the time evolution of the logarithm (base 10) of the state vector L2 norm error between the results with time steps 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵, and 10⁻⁶ LTU and the results obtained with time step 10⁻⁷ LTU (considered the closest to the analytic solution). It shows, in general, a regime dominated by an average exponential growth until saturation is achieved. In the exponential growth regime, the error (δ) behaves on average as δ ∝ e^{λt}, where regressions give values of λ that are consistent with the exponential growth of initial perturbations in the Lorenz system (e.g., Strogatz 1994). This shows similarities between the growth of errors created by (i) different initial conditions and (ii) different time steps. In fact, plots (not shown) of error growth caused by a certain time step Δt and by an initial condition error of the order of Δt² (note that the scheme is second-order accurate) illustrate well the general similarities between the two types of error. However, later in this paper (when analyzing the QG model), it will be shown that there are some fundamental differences between the two types of error growth.
Note that the results with Δt = 10⁻⁶ LTU as the reference value (instead of 10⁻⁷ LTU) are basically the same, except of course for Δt = 10⁻⁶ LTU, which has zero error for this case. This confirms that the fundamental characteristics of these results are not sensitive to the reference value that is used to compute the error.
Also noticeable is an apparent regularity in the times at which each of the curves saturates. To study this in more detail, a critical time of decoupling (Tc) is defined as the first point in time after which the state vector L2 norm error, corresponding to each time step, exceeds a certain threshold: five nondimensional units for this case. It should be mentioned that the overall results are not particularly sensitive to this threshold value. Decoupling times were calculated for 10 simulations (all starting from randomly selected initial states at the attractor of the smallest time step experiment) as a function of the logarithm (base 10) of the time step. Linear regressions for each of the ten experiments lead to regression coefficients that vary between −5.1 and −5.8, and the regression coefficient for the mean values is −5.5. These results suggest that the critical time of decoupling follows approximately Tc(Δt) ≈ Tc(1) − 5.5 log₁₀(Δt).
This relation establishes a clear limit for the predictability of the model with a particular time step Δt. It can also be used to predict when the time of decoupling will occur for a certain time step, without having to perform that particular simulation.
The logarithmic nature of the previous equation can be explained in a simple manner by assuming the typical exponential growth of perturbations in chaotic systems. From the exponential growth relation δ(t) ≈ δ₀e^{λt}, an equation for a critical time Tc, after which two trajectories initially separated by δ₀ will decouple [at a threshold of δ(Tc) = 1], can be obtained, assuming λ ≈ 0.9 (e.g., Strogatz 1994): Tc ≈ −2.6 log₁₀(δ₀). This equation is similar to the one mentioned above, except that it involves a perturbation to the initial state instead of the time step and that the proportionality coefficient is about half of the previous one. However, assuming that the perturbation is of the order of the truncation error and that the numerical scheme is second-order accurate leads to δ₀ ∝ Δt² and consequently to a relation close to the previous logarithmic equation. This type of argument allows for a generalization: Tc ≈ −2.6 log₁₀(Δt^N), where N is the order of the numerical scheme.
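The arithmetic behind this scaling argument can be checked directly (a sketch; λ = 0.9 and the δ(Tc) = 1 threshold are the values used above, and δ₀ = Δt² stands in for the second-order truncation error):

```python
import math

lam = 0.9  # approximate exponential growth rate for the Lorenz system

def t_critical(delta0, threshold=1.0):
    """Decoupling time implied by delta(t) = delta0 * exp(lam * t)."""
    return math.log(threshold / delta0) / lam

# For a second-order scheme the initial separation scales as dt**2, so
# every tenfold reduction of the time step delays decoupling by the
# same fixed increment of 2 * ln(10) / lam.
steps = [1e-2, 1e-3, 1e-4]
tc = [t_critical(dt ** 2) for dt in steps]
increments = [tc[i + 1] - tc[i] for i in range(len(tc) - 1)]
```

Each tenfold reduction of the time step delays decoupling by 2 ln(10)/λ ≈ 5.1 LTU, close to the empirical regression slope of about 5.5 found above.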
To explore the parameter space to understand the consequences of the time step sensitivity for regimes that are not fully chaotic, the sensitivity to different values of r is analyzed. The initial state for the following experiments is: X = 5, Y = 5, and Z = 5. The value r = 28 leads to the typical chaotic regime of the Lorenz equations. For r = 17, however, the solution is quite regular and the time evolution of the L2 norm error for different time steps indicates that in spite of oscillations the error is smaller for smaller time steps (not shown) as expected for linear and mildly nonlinear models.
Figure 3a shows the evolution of X for r = 19 for three different time steps (10⁻², 10⁻³, and 10⁻⁴ LTU). In this regime the solutions exhibit what is often referred to as transient chaotic behavior (Strogatz 1994), but after some time all solutions converge to a stable fixed point. Depending on the time step used to integrate the equations, the values for the fixed points can be different, which means that the climate of the model is sensitive to the time step. In this particular case, the solution obtained with 0.01 LTU converges to a positive fixed point while the other two solutions converge to a negative value.
To conclude the analysis of the sensitivity to parameter r, Fig. 3b shows the time evolution (with r = 21.3) of X for three different time steps. For time steps 0.01 LTU and 0.0001 LTU the solution ceases to have a chaotic behavior and starts converging to a stable fixed point. However, for 0.001 LTU the solution stays chaotic, which shows that different time steps may not only lead to uncertainty in the predictions after some time, but may also lead to fundamentally different regimes of the solution. These results suggest that the time step may have an important impact on the statistics of climate models, since something relatively similar may happen in more complex and realistic models of the climate system for time steps and parameter values that are currently considered reasonable.
Some more results regarding the time step sensitivity of the Lorenz equations are discussed in Teixeira (2002), where in particular the sensitivity to a higher-order numerical scheme was tested. Results obtained with a fourth-order Runge–Kutta scheme confirm the general findings made with the second-order scheme, with the main difference being the fact that the decoupling of solutions is separated in time by larger intervals with the higher-order scheme, following what is to be expected given the accuracy of each of the schemes and the generalization of the logarithmic relation discussed above.
It must be stressed once again that the results presented in this section are not in contradiction with the formal definition and theorem of convergence discussed at the beginning of the section. In particular, for that to be the case for the results regarding the sensitivity to parameter r, it would have to be shown that the type of numerical ambivalence illustrated in Fig. 3 persists for however small the time step may be.
3. Quasigeostrophic model and truncation error growth
In this section, results are presented based on experiments conducted with the QG potential vorticity (PV) model described in Marshall and Molteni (1993). The resolution used is T21 spectral truncation and three levels in the vertical (corresponding to 800, 500, and 200 hPa). The model forcing is composed of specified source terms of PV that are spatially varying but temporally constant and correspond to Northern Hemisphere winter climatology. The model has three types of dissipative forcing that are discussed in detail in Molteni (1994).
The QG model uses typically a 40-min time step. To obtain the initial conditions for these experiments, the model is first integrated for 100 days using this time step. The last day of the integration is used as the starting point for multiple 100-day integrations with time steps varying from 1.25 to 180 min. The integration with the highest temporal resolution (1.25 min) is taken as the control integration.
The error (difference) between the control integration and the integrations with coarser time steps is shown as a function of forecast time for the first 50 days in Fig. 4. For the QG model the error is defined as the length of the vector difference between the control-state vector and the state vector from the longer time step integration. The state vector, in this case, is the nondimensional streamfunction, although results using a kinetic energy metric are similar. The integration with the coarsest temporal resolution (180 min) diverges significantly from the control integration after 10 days and this difference reaches saturation at around 20 days. For the integration with the 2.5-min time step, divergence from the control occurs much later, after approximately 40 days, reaching saturation at around 50 days. Apparent in Fig. 4 is the regular interval between the decoupling times for the different time steps. In fact, the evolution of the decoupling time, defined as the time when the error first exceeds 2 × 10⁻³ (note that the error saturates at approximately 5 × 10⁻²), exhibits a logarithmic relationship with the time step, in qualitative agreement with the results obtained for the Lorenz equations.
To analyze in detail the evolution of the truncation error, the logarithm (base 10) of the error for the different time steps is shown in Fig. 5. Also shown are two examples of errors caused by initial conditions, to allow for a comparison between truncation error and initial-condition error, and a line showing growth proportional to time t. In general, the initial-condition error grows exponentially as expected. The truncation error, however, shows a fundamentally different behavior: an initially rapid growth during the first 2 days is followed by a period (between about days 2 and 10) of relatively slow growth. Only after that, does the error grow exponentially until saturation is attained. Note that there is significant uniformity in the growth rates for the different perturbations (i.e., all the curves have a similar shape).
Although not clear from Fig. 5, close inspection of the initial-condition perturbation curves indicates that the growth rate during the first 12 h (about 0.24 day⁻¹) is approximately half of that during the quasi-constant exponential growth phase later in the integration. This initial slow growth is consistent with previous studies that identified an early transient phase of growth when studying initial-condition perturbations (Trevisan and Legnani 1995; Vannitsem and Nicolis 1997).
To understand the differences between initial error growth and truncation error growth, it should be taken into account that, as each of the integrations is computed with a different time step, they correspond to slightly different model climatologies or attractors. The initially rapid growth phase is consistent with the idea that, at first, each model integration moves rapidly toward its own attractor, away from the attractor of the control.
One of the interesting results, and a connection to recent work, is that the truncation error of the QG model (as shown in Fig. 5), and of the simple error growth model described below, exhibits initial growth that is proportional to time t (mean square error proportional to t²). This is in general agreement with recent results from Vannitsem and Toth (2002) and Nicolis (2003, 2004).
To better understand the truncation error growth behavior, in particular its initial stages, a simple model of the error evolution is developed. There has been a considerable amount of research on the development of simple models of error growth. Leith (1978) considers the error variance, or mean square error, and proposes a simple evolution equation that combines a term representing initial-condition error growth with a linear term for model error. Leith (1978) obtains analytical solutions that he analyzes by comparing against results from weather prediction models. To understand the general behavior of the error from the ECMWF model, Lorenz (1982) proposes a simple model of the root-mean-square (RMS) error, which includes an initial-condition (exponential growth) term and a quadratic nonlinear saturation term. Dalcher and Kalnay (1987), Reynolds et al. (1994), and Simmons and Hollingsworth (2002) show that combinations of these two original equations can be successful in replicating the error behavior of complex weather prediction systems. Vannitsem and Toth (2002) and Nicolis (2003, 2004) utilize general approaches to develop equations for the evolution of model error and apply these ideas to several simple models. In particular, the short-time mean-square error behavior is analyzed in detail and shown to be quadratic in time.
A simple model of truncation error growth can be constructed by considering the relative error growth rates in the stable and unstable directions. Here, the essential idea behind the generalized Hartman–Grobman theorem (Pugh 1969; Shub 1987) is utilized, which implies that there is a nonlinear change of coordinates so that the dynamics of the nonlinear system is equivalent to that of a linear system with the same positive and negative eigenvalues. Hence, it can be assumed after linearization that the error growth obeys
ds/dt = −σs + au + αɛ_Δt,
du/dt = bs + λu + βɛ_Δt,     (3)

where s represents the error in the direction of the stable modes, u the error in the direction of the unstable modes, σ is the rate of decay of the stable modes, λ is the sensitivity to the initial conditions, a and b are coupling constants, ɛ_Δt is the typical one-step integration (truncation) error given the time step Δt, and (α, β) are the relative projections of the one-step integration error onto the stable and unstable directions. Note that this model is not intended to represent the long-time behavior of the error growth and as such does not include the effects of nonlinear saturation.
If the coupling between the stable and unstable modes is relatively weak, which means |σλ| > |ab|, we can solve analytically an approximate version of these equations that neglects the coupling terms and obtain

s(t) = (αɛ_Δt/σ)(1 − e^{−σt}),
u(t) = (βɛ_Δt/λ)(e^{λt} − 1).     (4)
Figure 6 shows the error growth for the stable and unstable modes and the total error (s² + u²)^{1/2} based on Eq. (4) with the following values: σ = 0.01, λ = 0.5, α = 0.5, β = 0.01, and ɛ_Δt = 10⁻¹²Δt⁴ (with Δt in minutes and t in days), for time steps of 10, 20, and 40 min. The constants were chosen in such a way as to reproduce in a qualitative manner the results shown in Fig. 5. A procedure to obtain values for the coefficients based on the short- and long-time asymptotic solutions for the error is discussed when analyzing Fig. 7. Note that if the projection of the time step error onto the stable direction were small, the stable error would not dominate at short time scales; it is thus important that α > β to obtain the plateau in the growth rate between the rapid initial error growth, as the solution moves onto the attractor, and the stage at which the unstable error growth becomes dominant.
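The decoupled solutions can be evaluated directly (a sketch using the coefficient values quoted above; the closed forms below are the zero-initial-error solutions of the uncoupled linear equations, consistent with the short- and long-time asymptotics discussed in the text):

```python
import numpy as np

# Coefficient values quoted in the text for Fig. 6.
sigma, lam, alpha, beta = 0.01, 0.5, 0.5, 0.01

def eps(dt_minutes):
    """One-step truncation error for a fourth-order scheme."""
    return 1e-12 * dt_minutes ** 4

def stable(t, dt):
    """Stable-mode error: solution of ds/dt = -sigma*s + alpha*eps, s(0) = 0."""
    return (alpha * eps(dt) / sigma) * (1.0 - np.exp(-sigma * t))

def unstable(t, dt):
    """Unstable-mode error: solution of du/dt = lam*u + beta*eps, u(0) = 0."""
    return (beta * eps(dt) / lam) * (np.exp(lam * t) - 1.0)

t = np.linspace(0.0, 30.0, 301)  # forecast time in days
total = np.sqrt(stable(t, 10) ** 2 + unstable(t, 10) ** 2)
# Early on the slowly varying stable part dominates (the plateau);
# later the exponentially growing unstable part takes over.
```

With these values the crossover from stable-dominated (plateau) to unstable-dominated growth occurs at roughly day 10, for any of the three time steps.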
Figure 6 illustrates a few relevant properties of this model of error growth. The total error is dominated during the first few days by the rapid movement of the trajectory toward its own attractor (away from the control attractor). This is represented by the stable mode evolving toward its asymptotic value. After about 10 days the unstable mode becomes the dominant component leading to a steady exponential growth of the error. The perturbations eventually become saturated (not included in Fig. 6).
For large values of σt, the error growth model shows that the asymptotic value for the stable error becomes, as expected, larger with larger time steps: s(t) ≈ αɛ_Δt/σ. This is also suggested by Fig. 5, and the different plateau values, for the different time steps, can actually be used to confirm that the time integration scheme of the QG model is of fourth order, a fact that is used to estimate the linear one-step truncation error in the simple model used to construct Fig. 6.
The fact that the duration of the plateau period in total error is independent of the time step is another interesting feature that the simple model is able to reproduce. In fact, assuming that the plateau period is over when the stable and the unstable errors become equal, it is straightforward to see from Eq. (4) that such a value is independent of the time step.
The value of λ = 0.5 used to obtain the results in Fig. 6 corresponds approximately to the sustained growth rates in the QG model. The daily growth rates for the QG model results, shown in Fig. 5, vary between 0.4 and 0.6 after the initial 10-day period and before nonlinear saturation is reached (the average growth rate between days 12 and 20 for the lower six curves is 0.493 day⁻¹). This growth rate is larger than that of the leading Lyapunov vector or optimal Lyapunov vector (0.23 and 0.3 day⁻¹, respectively) calculated for this model by Vannitsem and Nicolis (1997), but these discrepancies may be due to differences in the model dissipation values. It is, however, considerably smaller than the growth rate of 0.83 day⁻¹ corresponding to the leading Lyapunov vector for a truncated, dry version of the medium-range forecast (MRF) model, found by Szunyogh et al. (1997).
It is straightforward to see that for short time periods, both the stable and unstable modes behave in a linear manner with respect to time: s(t) ≈ αɛ_Δt t and u(t) ≈ βɛ_Δt t. Figure 7 shows the evolution of the stable, unstable, and total errors produced by Eq. (4), with the same values as for Fig. 6 and corresponding to a time step of 10 min, together with the asymptotic solutions. These results are used to analyze the accuracy of the simple asymptotic solutions for short and long times, and determine the periods for which they are valid. It can be seen that, as expected, the linear behavior is a good approximation for the stable error evolution even at 20 days, but it is an accurate approximation of the unstable error only for the first forecast day. Another relevant aspect is that, at 20 days, the stable error is still far from its long-time asymptotic constant value. As mentioned above, the coefficients used to construct Fig. 6 can be approximately obtained by analyzing some of these asymptotic solutions and comparing them with the numerical results of the original QG model. In practice, four conditions are needed to determine the four coefficients (λ, α, β, and σ) since ɛ_Δt depends on the numerical integration scheme of the original QG model: λ can be estimated from the exponentially growing part of the solution; α can be estimated from the linear short-time solution of the stable error; σ can be obtained, as a function of α, from the long-time stable error solution; and β can be obtained from the equation u(t) = s(t), which leads (assuming a linear approximation for the stable error) to β = f(α, λ, τ), where τ is the approximate time when the plateau period is over. Note that β cannot be obtained from the short-time linear solution of the unstable error, since the decomposition in stable and unstable modes does not exist in the original QG model and in the early stages it is assumed that the error is dominated by the stable component.
To analyze the sensitivity of the solutions of system (3) to the different coefficients, three additional simulations were performed with different combinations of constants. Table 1 shows the values of each of the constants for the four experiments A to D. Figure 8 shows the evolution of the total error for each of the four simulations. Simulation A corresponds to the parameters defined for the results shown in Fig. 6. For simulation B, coefficients α and β are interchanged, with β = 0.5 and α = 0.01, while λ and σ remain the same. The consequence is that in simulation B, the error growth is dominated from the start by the unstable error. In simulation C, coefficients λ and σ are interchanged, with σ = 0.5 and λ = 0.01, while α and β are the same as for the original Fig. 6. In this case the total error growth is slower because of a lower value of λ and a higher value of σ, that leads to a much lower long-time asymptotic value of the stable error s(t) ≈ αɛΔt/σ and eventually (after about 70 days) to the unstable error becoming larger than the stable error. In simulation D, the four coefficients are interchanged leading to σ = 0.5, λ = 0.01, α = 0.01, and β = 0.5. For this case the total error is dominated by the unstable component, which, because of the larger value of β and smaller value of λ (as compared to the control experiment), possesses a short-time linear behavior for at least the first 20 days.
The full system of Eqs. (3) is also solved numerically in order to analyze in more detail the approximation of neglecting the coupling terms between the stable and unstable modes. Figure 9 shows the evolution of the total error for five different simulations, the default a = b = 0, and four simulations with b = −a where a is one of the following functions of λ and σ: a = λσ/100, a = λσ/10, a = −λσ/100, and a = −λσ/10. It is clear that, for the two smaller values of a and b, the solutions are almost indistinguishable from the default decoupled solution. For the two largest values of a and b, the solutions start to look different after about 10 days. When a > 0 and b < 0 the plateau period is longer than in the control experiment, since the positive a and negative b lead to a longer period when the stable error is larger than the unstable component. The opposite happens with a < 0 and b > 0, with the unstable component overtaking the stable one earlier than in the default experiment. Note that for values of a and b close to λ and σ solutions may behave in a pathological manner, with the stable error, for example (with a < 0), reaching zero. However, it should also be kept in mind that the simple model should have eigenvalues of approximately −σ and λ, with eigenvectors approximately aligned with the s and u axes, respectively. Otherwise the interpretation of the variables as representing stable and unstable directions would not be valid. To ensure that this is the case, a and b cannot be too large. This analysis shows that neglecting the coupling terms is a reasonable approximation, at least for values of a and b that are about 100 times smaller than λ and σ.
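The coupled system can be integrated with a simple forward Euler step. The sketch below assumes the coupled form ds/dt = αɛΔt − σs + au and du/dt = βɛΔt + λu + bs, with the same illustrative coefficients as above (experiment A, ɛΔt = 1), comparing the decoupled control against the a = λσ/10, b = −a case:

```python
# Forward-Euler integration of the coupled simple model (Eq. 3):
#   ds/dt = alpha*eps - sigma*s + a*u
#   du/dt = beta*eps  + lam*u   + b*s
# Coefficients are illustrative (experiment A, inferred from the text); eps = 1.
alpha, beta, lam, sigma, eps = 0.5, 0.01, 0.5, 0.01, 1.0

def integrate(a, b, t_end=20.0, dt=1e-3):
    """Integrate the coupled (s, u) system from zero error to t_end (days)."""
    s = u = 0.0
    for _ in range(int(round(t_end / dt))):
        ds = alpha * eps - sigma * s + a * u
        du = beta * eps + lam * u + b * s
        s += dt * ds
        u += dt * du
    return s, u

s0, u0 = integrate(0.0, 0.0)                              # decoupled control
s1, u1 = integrate(lam * sigma / 10, -lam * sigma / 10)   # a > 0, b < 0
```

Consistent with the behavior described above, a positive a feeds the growing unstable error back into the stable component (s1 > s0), while a negative b damps the unstable growth (u1 < u0), lengthening the plateau period.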
Note that one could consider time-varying coefficients, but the simple error growth model, as it stands, explains the short- and medium-term evolution of the truncation error growth in the QG model in a satisfactory manner.
The simple model presented in this paper shares features with some of those referred to above. Solutions similar to the one for the unstable error have been obtained before (e.g., Leith 1974), and when analyzing the error for a system with periodic behavior, Nicolis (2003) obtains an equation for the error that exhibits exponential decay (as in our stable error). There are, however, new aspects when compared to previous studies: (i) the time step truncation error growth is considered explicitly, (ii) the error is decomposed onto stable and unstable components, and (iii) the simple model (considering both components) is applied in order to represent the truncation error growth of a relatively complex system such as the QG model.
In previous studies, when the issue of model truncation is approached, it is usually in the context of the errors associated with the parameterization of unresolved subgrid processes. This is not the case in the present paper, where the truncation error is considered explicitly. A relevant aspect of the error growth model suggested here is that it shows that, at short time scales, the stable direction is more important, and the unstable direction does not dominate until longer time scales.
4. NOGAPS and ensemble design
In this section the impact of using different time steps in the context of the Navy Operational Global Atmospheric Prediction System (NOGAPS) is analyzed. NOGAPS (Hogan and Rosmond 1991) is a global model that is spectral in the horizontal and uses an energy-conserving finite-difference (hybrid sigma coordinate) discretization in the vertical. The model uses vorticity and divergence, virtual potential temperature, specific humidity, and terrain pressure as the dynamic variables, with a semi-implicit treatment of gravity wave propagation. The physical parameterizations include boundary layer turbulence (Louis et al. 1982), moist convection (Emanuel and Zivkovic-Rothman 1999), convective and stratiform clouds (Teixeira and Hogan 2002), and solar and longwave radiation (Harshvardhan et al. 1987). NOGAPS is the global NWP model of the U.S. Navy, and drives several applications such as the Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS™; Hodur 1997; Hodur and Doyle 1998) and the Navy aerosol prediction model.
The results shown here for the time step sensitivity of the deterministic NOGAPS model correspond to a spatial resolution of T79L30. As with the simpler models, experiments are conducted by integrating NOGAPS using different time steps, starting from the same initial state. A control 10-day integration is performed using a small time step of 60 s. A series of integrations are then performed using larger time steps of 120, 240, 480, and 960 s (a typical operational time step for this resolution would be about 900 s). Figure 10 is analogous to Fig. 5 but based on NOGAPS 10-day forecasts starting from 28 January 2004 at 0000 UTC. Figure 10 shows the logarithm (base 10) of the L2 norm error in dry total energy, integrated over the globe and from the surface to 150 hPa.
As with the QG model, there is an initial rapid perturbation growth within the first day, followed by a period of slower growth. This initial rapid growth may be a manifestation of the model integrations with different time steps moving toward their own climatologies, or attractors, as discussed in section 3. However, the overall behavior of the error is quite different from the QG model. The difference between the errors due to the different time steps is smaller in NOGAPS. Note, for example, that in Fig. 5, close to the start of the simulation, the error difference in the QG model between the version with 180 min and the version with 20 min (9 times smaller) is about four orders of magnitude. In contrast, in NOGAPS the difference between the error with 960 s and the error with 120 s (8 times smaller) is only about one order of magnitude. The immediate implication of this result is that, while the time integration scheme in the QG model is of fourth order, the time integration in NOGAPS appears to be of order one.
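The order of accuracy implied by these error ratios follows from the scaling of truncation error with the time step: for a scheme of order p the error behaves as Δt^p, so p ≈ log(error ratio)/log(time-step ratio). A quick check of the numbers quoted above:

```python
import math

# For a scheme of order p, truncation error scales as dt**p, so the ratio
# of errors for two time steps gives p = log10(e1/e2) / log10(dt1/dt2).
def estimated_order(error_ratio_log10, dt_ratio):
    return error_ratio_log10 / math.log10(dt_ratio)

# QG model: ~4 orders of magnitude between the 180-min and 20-min runs (ratio 9)
p_qg = estimated_order(4.0, 9.0)
# NOGAPS: ~1 order of magnitude between the 960-s and 120-s runs (ratio 8)
p_nogaps = estimated_order(1.0, 8.0)
```

This gives p ≈ 4.2 for the QG model, consistent with a fourth-order scheme, and p ≈ 1.1 for NOGAPS, consistent with first-order overall accuracy.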
Another important difference is that all errors plotted in Fig. 10 are starting to feel the effects of nonlinear saturation after just a few days of simulation, which is in contrast to the behavior of the errors in the QG model. Note that the simple model of error growth discussed in the previous section does not include any term to represent nonlinear saturation and as such cannot be used to try to reproduce and better understand the behavior of the NOGAPS truncation error growth.
As a reference it should be mentioned that, for example, the difference between the 960-s integration and the 60-s integration ranges between 40% and 60% of the total forecast error (forecast minus analysis) for the NOGAPS operational configuration (T239L30) for the same period. This fact helps to reiterate the importance of model error in the predictability of NWP models, supporting some recent studies that have raised this issue (e.g., Orrell et al. 2001), although in different contexts.
A similar qualitative logarithmic relationship between the decoupling time and the time step of the integration exists in both NOGAPS and the QG model. These are also qualitatively similar to the relationship obtained for the Lorenz equations, which was explained based on the typical exponential growth of errors in chaotic systems.
The differences between the behavior of the QG model and NOGAPS can be, at least, partly explained by the fact that NOGAPS is a much more complex model that includes a full set of physical parameterizations to represent subgrid-scale physical processes such as radiation, turbulence, clouds, and moist convection. These parameterizations can introduce nonlinearities and have several specific numerical problems that are not present in a simpler model such as the QG model.
The accuracy of the time integration discussed above is a good example of such issues. Although the dynamics of NOGAPS is integrated with a second-order scheme, the fact that the parameterizations are integrated with a first-order scheme means that the effective accuracy of the model is of order one, as shown in Fig. 10. A problematic issue is that some of the parameterizations may not be numerically convergent in terms of vertical resolution. Another important problem is nonlinear numerical stability, which can constrain (for numerical reasons) the values of some of the parameterization terms. In particular, for parameter values (e.g., eddy-diffusivity coefficient, mass-flux coefficient) and time steps typically used in operational NWP centers, spurious numerical oscillations may occur in some variables. In the present simulations, NOGAPS was integrated with time steps that are of the order of, or smaller than, typical operational time steps for this particular resolution, in order to try to avoid or minimize nonlinear numerical stability problems. Discussions of some of these numerical issues related to parameterizations (stability in particular) can be found in Girard and Delage (1990), Beljaars (1991), or Teixeira (1999). Physical parameterizations lead to numerical intricacies that are often difficult to isolate, making the interpretation of NOGAPS results much more complicated than with the QG model.
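This degradation of overall accuracy by a first-order physics update can be demonstrated on a toy problem. The sketch below is not NOGAPS's actual scheme: it uses a hypothetical model problem dy/dt = −y + cos(t), with the "dynamics" (−y) stepped by a second-order Heun scheme and the "physics" forcing (cos t) added by first-order forward Euler splitting:

```python
import math

# Toy illustration of how a first-order "physics" update degrades a
# second-order "dynamics" integration to first order overall.
# Hypothetical model problem: dy/dt = -y + cos(t), y(0) = 0, with
# exact solution y(t) = (cos t + sin t - exp(-t)) / 2.
def exact(t):
    return 0.5 * (math.cos(t) + math.sin(t) - math.exp(-t))

def integrate(dt, t_end=2.0):
    y, t = 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        # "Dynamics": one Heun (second-order) step for dy/dt = -y
        k1 = -y
        k2 = -(y + dt * k1)
        y += 0.5 * dt * (k1 + k2)
        # "Physics": forward-Euler (first-order) forcing update
        y += dt * math.cos(t)
        t += dt
    return y

e1 = abs(integrate(0.01) - exact(2.0))
e2 = abs(integrate(0.005) - exact(2.0))
# Halving dt roughly halves the error (e1/e2 ~ 2): first-order overall,
# even though the dynamics alone would converge at second order.
```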
It is also worth mentioning that while spatial resolution in NWP models has been increasing during the last few years (decreasing the spatial truncation error), temporal resolution has not. Actually, because of improvements in numerical methods (e.g., semi-Lagrangian advection), time steps have even increased in some models, leading to an increase in the time truncation error. This quest for stable numerical methods with large time steps, because of efficiency concerns, may well have its price in terms of the truncation error, both in the expected traditional linear sense and in the more nonlinear sense that is explored in this paper.
In what follows, the NOGAPS ensemble system is used to investigate the impact of utilizing different time steps in the ensemble design. In this version of the NOGAPS ensemble system, the initial-condition perturbations are introduced based on randomly sampling the analysis error variance estimate produced by the Naval Research Laboratory Atmospheric Variational Data Assimilation System (NAVDAS; Daley and Barker 2001).
In terms of ensemble design, the results discussed above suggest that ensemble spread may be enhanced by using different temporal resolutions for different ensemble members. Preliminary experiments seem to indicate that this is indeed the case. One example is shown in Fig. 11, where the ensemble spread (RMS 500-hPa height) for the entire globe, the Northern Hemisphere (NH) extratropics, and the Southern Hemisphere (SH) extratropics, based on a 10-day T79L30 (900-s time step) 20-member ensemble, is given by the solid lines. The dashed lines indicate the ensemble spread when ten of the members are integrated with the time step reduced by half (from 900 to 450 s). Changing the time step of 50% of the ensemble members results in increased spread, particularly in the NH extratropics.
To perform a more detailed examination, one month (April 2005) of daily T119L30 32-member ensemble forecasts with half of the members with a time step of 300 s and the other half with 600 s is analyzed. The results show that changing the time step of half of the ensemble members results in increased spread and a decreased number of outliers (the control ensemble is composed of forecasts using only 300-s time steps). Figure 12 illustrates the increase in spread for the 12-h accumulated precipitation for the NH extratropics (30°–90°N; hereafter NHX) and the Tropics (20°S–20°N). In Fig. 13 the number of outliers for 12-h accumulated precipitation for the control 32-member ensemble and the ensemble with the different time steps (normalized by the expected number of outliers for a flat distribution) is shown for NHX and the Tropics. The verification is based on the precipitation from the first 12 h of forecast. It is clear that the number of outliers is significantly reduced both for the Tropics and the extratropics when different time steps are used in the ensemble formulation. For global 500-hPa height (not shown), the number of outliers is reduced by 8% and 14%, on average, for forecast times of 5 and 10 days, respectively.
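The outlier statistic used here can be sketched as follows: a verification is an outlier if it falls outside the ensemble envelope, and for a perfectly reliable (flat rank histogram) N-member ensemble the expected outlier fraction is 2/(N + 1). The illustration below uses synthetic random data, not NOGAPS output:

```python
import numpy as np

rng = np.random.default_rng(0)

def outlier_fraction(ensemble, verification):
    """Fraction of cases where the verifying value lies below the lowest
    or above the highest ensemble member (ensemble: cases x members)."""
    lo = ensemble.min(axis=1)
    hi = ensemble.max(axis=1)
    return np.mean((verification < lo) | (verification > hi))

n_cases, n_members = 20000, 32
# If verification and members are drawn from the same distribution (a
# statistically consistent ensemble), the rank histogram is flat and the
# expected outlier fraction is 2/(N + 1).
ens = rng.standard_normal((n_cases, n_members))
verif = rng.standard_normal(n_cases)

frac = outlier_fraction(ens, verif)
expected = 2.0 / (n_members + 1)   # ~0.061 for N = 32
normalized = frac / expected       # ~1 for a consistent ensemble
```

An under-dispersive ensemble gives a normalized value above 1, so the reduction in (normalized) outliers reported in Fig. 13 indicates that the mixed-time-step ensemble is statistically more consistent with the verification.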
These preliminary experiments suggest that varying temporal resolution may be a natural and simple way of introducing more variability due to model uncertainty into ensemble formulation.
This paper explores the time step sensitivity of nonlinear atmospheric models and illustrates how solutions with small but different time steps will decouple from each other after a certain finite amount of simulation time. The decoupling time depends on the time step in a logarithmic way, which means that the decoupling time can be predicted. This logarithmic relation is present in simulations using three models of different levels of complexity: the Lorenz (1963) equations, a QG model and an operational global weather prediction model. The logarithmic nature of the decoupling time also implies that, however small the time step may be, there is always a finite point in time after which the numerical solution diverges. This means that, for chaotic systems, uniform numerical convergence can only be guaranteed for a finite amount of time.
In the Lorenz equations, for regimes that are not fully chaotic, the sensitivity to the time step is more complex and different time steps can lead to different model climates or even different regimes. These results suggest that the statistics of climate models may be affected by the time step.
Using a QG model it is shown that truncation errors caused by different time steps evolve in a fundamentally different manner when compared to initial condition errors. Initial condition errors grow exponentially as expected while truncation errors show an initially rapid growth followed by a plateau (slow error growth) and then, only after that, exponential error growth until saturation is achieved.
A simple analytic model of truncation error growth is proposed that considers the relative error growth rates in the stable and unstable directions. This simple model suggests that time step truncation error in nonlinear models is a combination of two errors: (i) a stable error with an initially rapid growth that asymptotes to a saturation value, which is a function of the time step, caused by the fact that each model simulation moves rapidly toward its own attractor, away from the attractor of the control; and (ii) an unstable error with a mean exponential growth similar to initial condition error.
The Navy Operational Global Atmospheric Prediction System (NOGAPS) is used in order to study the time step sensitivity of a fully developed operational weather prediction model. Our analysis shows that the overall behavior of the truncation error is different from the QG model, with an initially rapid truncation error growth during the first day or so followed by a period of slower growth toward saturation. The results show that although the dynamics of NOGAPS is second-order accurate in time, the numerical solvers of the parameterizations lead to a first-order overall accuracy in the time integration.
NOGAPS ensemble experiments are performed using two different time steps. The results show an increase in the ensemble spread of 500-hPa geopotential height and precipitation, and a consistent reduction in the number of outliers. This suggests that different time steps may well be a natural and simple way of introducing model error in weather prediction ensemble design.
In the weather and climate prediction community, when thinking in terms of model predictability, there is a tendency to associate model error with the physical parameterizations. In this paper, it is shown that time truncation error in nonlinear models behaves in a more complex way than in linear or mildly nonlinear models and that it can be a substantial part of the total forecast error. The fact that it is relatively simple to test the sensitivity of a model to the time step allowed us to study the implications of time step sensitivity in terms of numerical convergence and error growth in some depth. The simple analytic model proposed in this paper illustrates how the evolution of truncation error in nonlinear models can be understood as a combination of the typical linear truncation error and of the initial condition error associated with the error committed in the first time step integration (proportional to some power of the time step).
A relevant question is how much of this simple study of time step truncation error could help in understanding the behavior of more complex forms of model error associated with the parameterizations in weather and climate prediction models, and its interplay with initial condition error.
We thank F. Giraldo, T. Rosmond, B. Stevens, R. Perdigão, and two anonymous reviewers for their challenging and useful comments on versions of this manuscript. The support of the sponsor, the Office of Naval Research (ONR), through Program Elements 0602435N and 0601153N is gratefully acknowledged. The DoD High Performance Computing Program at NAVO MSRC provided part of the computing resources. KJ acknowledges the support of ONRIFO with a NICOP Award N000140510668. JT acknowledges the support of a NOAA cooperative agreement for THORPEX.
Corresponding author address: João Teixeira, NATO Undersea Research Centre, Viale San Bartolomeo 400, 19138 La Spezia, Italy. Email: firstname.lastname@example.org
To be mathematically precise, Φtn,Δt is defined as a continuous curve in t given by some suitable interpolation between its numerical realizations.
The nondimensional streamfunction is normalized by the square of the radius of the earth (6.371 × 106 m) times twice the rotation rate of the earth (7.292 × 10−5 s−1).
An outlier is a verifying value that is below the lowest ensemble value or above the highest ensemble value.