## 1. Introduction

It is widely acknowledged that there are two main reasons why predictions in meteorology are limited in time: (i) the amplification in the course of the evolution of small uncertainties in the initial conditions used in a prediction scheme, usually referred to as *initial errors*, and (ii) the presence of *model errors*, reflecting the fact that a model is only an approximate representation of nature. While the first kind of error is indicative of the property of sensitivity to the initial conditions, suggesting that atmospheric dynamics shares in this respect some key properties of deterministic chaos, the second one is an indicator of the property of sensitivity to the parameters and more generally of the structural stability of the set of evolution laws governing the system at hand.

There exists an extensive literature on both initial and model errors, much of it devoted to numerical experiments on large-scale numerical forecasting models (Dalcher and Kalnay 1987; Tribbia and Baumhefner 1988, 2004; Reynolds et al. 1994; Schubert and Schang 1996; Krishnamurti et al. 2004; Ivanov and Chu 2007). Although in practice the two sources of errors coexist and their respective effects on the results cannot be clearly identified, in most of the qualitative analyses reported they are treated separately (Lorenz 1996; Nicolis 1992, 2003, 2004). The objective of the present work is to address some generic features of the dynamics of prediction errors under the combined effect of initial and model errors and its connections with intrinsic properties. Furthermore, the crossovers between the two kinds of errors and between the initial and intermediate time regimes are considered.

Let **x** = (*x*₁, ···, *x_n*) be the variables participating in the dynamics as captured by a certain model, and let *μ* be the parameter (or a combination of parameters) of interest. The evolution laws have the general form

d**x**/d*t* = **f**(**x**, *μ*),   (1)

where **f** = (*f*₁, ···, *f_n*) are typically nonlinear functions of **x** and depend also on *μ*. We suppose that the phenomenon of interest as it occurs in nature is described by an amended form of Eq. (1), where *μ* takes a certain (generally unknown) value *μ_N* and/or some extra terms associated with physical processes not properly accounted for by the model are incorporated,

d**x**_N/d*t* = **f**(**x**_N, *μ_N*) + *η* **g**(**x**_N),   (2)

limiting ourselves for the sake of the present study to the case that the model (**x**) and "nature" (**x**_N) variables span the same phase space. The case in which the model variables span a subspace of the full phase space and only model errors are considered has recently been analyzed by one of the present authors (Nicolis 2004).

In what follows model error is embodied in the parameter mismatch *μ* = *μ_N* + *δμ*. We assume that *η* is of the same order of magnitude as *δμ*, a fact that we express by *η* = *γδμ*, where *γ* is a factor of the order of 1, and introduce the error

**u** = **x** − **x**_N.   (3)

We place ourselves under the conditions of a nearly perfect model or perhaps, more appropriately, of a weakly imperfect model (|*δμ*/*μ_N*| ≪ 1) and small initial errors. We also assume structural stability of the underlying evolution laws. This entails, in particular, that the system is not crossing criticalities of any sort in the range of variations of the parameters caused by the error *δμ*. As a corollary, the attractors of the model and reference system are close in phase space. As it will turn out, adopting this setting will allow us to conduct a systematic study and identify some features of error dynamics independent of the particular model considered, which could thus be qualified in this sense as "generic." In many operationally oriented numerical experiments on present-day realistic weather prediction models, these conditions (as well as some other more technically oriented ones enunciated in the sequel) may not be satisfied. Even so, having an idealized limiting case like the one considered here as a reference is helpful in the sense that possible deviations from the generic behaviors predicted by our analysis can be placed in the proper perspective and attributed to such factors as large initial errors or inadequacies in the parameterizations of some of the physical processes present.

Expanding Eq. (1) in **u** and *δμ*, subtracting (2) from the result, and keeping the first nontrivial terms, one obtains an equation of evolution for the error:

d**u**/d*t* = 𝗝_N(*t*)**u** + *δμ* **ϕ**_N(*t*),   (4a)

where

𝗝 = ∂**f**/∂**x**,  **ϕ** = ∂**f**/∂*μ* − *γ***g**   (4b)

are the Jacobian matrix of **f** and the model error source term, respectively, and the subscript *N* implies evaluation of the corresponding quantities at **x** = **x**_N, *μ* = *μ_N*. In most situations of interest **x**_N is expected to have a quite intricate, chaotic-like evolution in time. Equation (4a) generalizes the parameterization proposed by Dalcher and Kalnay (1987) when both initial and model errors are present by establishing the link with the underlying evolution laws.

In section 2, a systematic expansion of the solutions of Eq. (4a) in the short to intermediate time regime is carried out. Some general, model-independent features are brought out, such as the role of the mechanisms of error transfer between a particular initial direction to components along other directions, the existence of an extremum in the error evolution, and the relative importance of the two sources of error in the global evolution. In sections 3 and 4 the results are applied to bistable systems and systems evolving around the saddle point and compared to the exact expressions available for such systems. The case of chaotic dynamics is considered in section 5 using Lorenz’s thermal convection model as an example (Lorenz 1963). The main conclusions are summarized in section 6.

## 2. Short to intermediate time expansion

The formal solution of Eq. (4a) reads

**u**(*t*) = 𝗠(*t*, 0)**u**(0) + *δμ* ∫₀^{*t*} d*s* 𝗠(*t*, *s*)**ϕ**_N(*s*),   (5)

where the fundamental (resolvent) matrix 𝗠(*t*, *t*₀) associated with the Jacobian 𝗝 satisfies the relation d𝗠(*t*, *t*₀)/d*t* = 𝗝_N(*t*)𝗠(*t*, *t*₀), 𝗠(*t*₀, *t*₀) = 𝗜. In what follows we shall extract the behavior of the error vector **u** and its norm *N*(**u**) in the regime of short to intermediate times. We start by expanding **u** around *t* = 0, keeping terms up to *O*(*t*³):

**u**(*t*) = **u**(0) + (d**u**/d*t*)₀ *t* + ½(d²**u**/d*t*²)₀ *t*² + ⅙(d³**u**/d*t*³)₀ *t*³.   (6)

Utilizing Eq. (4a), we straightforwardly obtain (see appendix A for details)

*u_i*(*t*) = *u_i*(0) + *A_i t* + *B_i t*² + *C_i t*³,   (7)

where the coefficients *A_i*, *B_i*, *C_i* are given by (with the understanding that all quantities involved are to be evaluated at *t* = 0 on the reference attractor)

*A_i* = Σ*_j J_ij u_j*(0) + *δμ ϕ_i*,
*B_i* = ½ Σ*_j* [(d𝗝/d*t*) + 𝗝²]*_ij u_j*(0) + ½ *δμ*[d*ϕ_i*/d*t* + (𝗝**ϕ**)*_i*],
*C_i* = ⅙ Σ*_j* [d²𝗝/d*t*² + 2(d𝗝/d*t*)𝗝 + 𝗝(d𝗝/d*t*) + 𝗝³]*_ij u_j*(0) + ⅙ *δμ*[d²*ϕ_i*/d*t*² + 2((d𝗝/d*t*)**ϕ**)*_i* + (𝗝²**ϕ**)*_i* + (𝗝 d**ϕ**/d*t*)*_i*].   (8)

Our next step is to evaluate the error norm *N*(**u**) to the same order. Choosing the Euclidean norm, we write the quadratic error *N*(**u**) = |**u**²(*t*)| as

|**u**²(*t*)| = Σ*_i* [*u_i*(0) + *A_i t* + *B_i t*² + *C_i t*³]²

and keep all terms up to *t*³. This yields

|**u**²(*t*)| = Σ*_i u_i*²(0) + 2*t* Σ*_i u_i*(0)*A_i* + *t*² Σ*_i* [*A_i*² + 2*u_i*(0)*B_i*] + 2*t*³ Σ*_i* [*A_i B_i* + *u_i*(0)*C_i*].   (9)

The following comments are warranted, on inspecting Eqs. (7)–(9).

- At the level of *u_i*(*t*), in addition to contributions due to the evolution of the initial error *u_j* by the Jacobian matrix 𝗝 and to the model error per se, there are also terms arising from their combined effect. These terms show up as products of elements of the Jacobian matrix or derivatives thereof and of components of the model error source term **ϕ** and its derivatives (all derivatives being evaluated at *t* = 0 on the reference attractor).
- At the level of |**u**²(*t*)|, in addition to the aforementioned contributions there are “direct” coupling terms as well, in which the initial error components *u_i* themselves multiply contributions containing the model error source term **ϕ**.
- According to Eqs. (7) and (8) there is a cascade mechanism by which an initial error acting solely along a particular component is transferred in the course of time in phase space to eventually affect components along other (initially error-free) directions as well.
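The cascade mechanism of the last comment can be made concrete with a minimal numerical sketch. For constant 𝗝 and **ϕ** the coefficients of Eq. (8) collapse to powers of 𝗝, and the truncated expansion can be checked against a direct integration of Eq. (4a). The 2 × 2 Jacobian, source term, and error sizes below are illustrative assumptions, not taken from any specific model:

```python
# Short-time expansion of du/dt = J u + dmu * phi (Eq. 4a) for a constant
# 2x2 Jacobian, illustrating how an error placed initially along component 1
# leaks into component 2 through the off-diagonal Jacobian elements.
# J, phi and the error amplitudes are illustrative choices.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def u_expansion(J, phi, u0, dmu, t):
    """u(t) up to O(t^3): u0 + A t + B t^2 + C t^3 (constant J and phi)."""
    J2 = matmul(J, J)
    J3 = matmul(J, J2)
    A = [matvec(J, u0)[i] + dmu * phi[i] for i in range(2)]
    B = [0.5 * (matvec(J2, u0)[i] + dmu * matvec(J, phi)[i]) for i in range(2)]
    C = [(matvec(J3, u0)[i] + dmu * matvec(J2, phi)[i]) / 6.0 for i in range(2)]
    return [u0[i] + A[i] * t + B[i] * t**2 + C[i] * t**3 for i in range(2)]

def u_numeric(J, phi, u0, dmu, t, steps=20000):
    """Direct Euler integration of du/dt = J u + dmu * phi."""
    u, dt = list(u0), t / steps
    for _ in range(steps):
        du = matvec(J, u)
        u = [u[i] + dt * (du[i] + dmu * phi[i]) for i in range(2)]
    return u

J = [[-1.0, 0.5], [0.8, -2.0]]   # dissipative: trace J < 0
phi = [1.0, 0.0]
u0 = [1e-2, 0.0]                 # initial error only along component 1
dmu = 1e-3

u_series = u_expansion(J, phi, u0, dmu, 0.1)
u_exact = u_numeric(J, phi, u0, dmu, 0.1)
# component 2, initially error-free, acquires an error of comparable order
# through the off-diagonal element J[1][0]: the cascade mechanism
```

For a time-dependent Jacobian the same comparison applies, with the full coefficients of Eq. (8) in place of the powers of 𝗝.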

Because local quantities are subjected to large fluctuations, to proceed further we place ourselves in the perspective of a *statistical ensemble of forecasts* and perform an average of the square error (9) to get information independent of the initial condition chosen. This averaging involves two kinds of processes: a first, over the reference (“nature’s”) attractor, whose structure enters in Eq. (5) through the state dependence of 𝗝 and **ϕ**; and a second one, over the possible orientations and magnitudes of the initial error vector **u**(0) per se. In doing this we assume initially unbiased and uncorrelated errors (〈*u_i*〉 = 0, 〈*u_i u_j*〉 = 〈*u_i*²〉*δ_ij*), keeping otherwise the full form of the associated probability distribution general. The usefulness of this latter type of averaging is to provide hints on the overall predictive skill of a forecasting model. A nice illustration is provided by Lorenz’s pioneering study (Lorenz 1982; see also Simmons and Hollingsworth 2002) that led to the estimate of the ∼2-day predictability horizon of the weather forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) in the early 1980s.

Performing the two averagings on Eq. (9), one obtains

〈**u**²(*t*)〉 = Σ*_i* 〈*u_i*²〉 + 2*t* Σ*_i* 〈*J_ii*〉〈*u_i*²〉 + *t*² [Σ*_ij* 〈*J_ij*²〉〈*u_j*²〉 + Σ*_i* 〈(𝗝²)*_ii*〉〈*u_i*²〉 + *δμ*² 〈Σ*_i ϕ_i*²〉] + *t*³ {Σ*_ij* 〈*J_ij*(𝗝²)*_ij*〉〈*u_j*²〉 + ⅓ Σ*_i* 〈[2(d𝗝/d*t*)𝗝 + 𝗝(d𝗝/d*t*) + 𝗝³]*_ii*〉〈*u_i*²〉 + *δμ*² 〈**ϕ** · 𝗝**ϕ**〉}.   (10)

We recover, in the *t*² part, the short time behavior of model error found in previous work (Nicolis 2003, 2004). We also see that in the process of averaging all direct coupling terms between initial and model errors have cancelled. There subsists, however, a single contribution (last term in the *t*³ part) where the error source term is evolved by the Jacobian matrix. Notice that the Jacobian matrix enters in Eq. (10) both through its diagonal elements and its nondiagonal ones. As a rule these elements are not related to each other because the matrix does not need to be symmetric.

So far our formulation accounts for an arbitrary distribution of the magnitudes of the individual initial error components. It is now instructive to consider the limit where these errors are distributed isotropically in phase space, a property translated by 〈*u_i*²〉 = *ϵ*² independently of *i*. In this limit, the only surviving term in Eq. (10) still involving the time derivative of a phase space function will vanish. Furthermore, the coefficient of the *t* part—which constitutes the dominant contribution for short times—displays the average of the sum of the diagonal elements of 𝗝, which is known to be equal to the rate of change of phase space volumes as the dynamics is proceeding. In dissipative systems—a class which encompasses most of the systems encountered in meteorology-related problems—phase space volumes contract on average, entailing that the sum in question is negative. This leads us to the very general conclusion that the mean quadratic error is bound to decrease for short times. The magnitude of Σ*_i* 〈*J_ii*〉 will determine the extent of this decreasing stage and, at the same time, the range of validity of the *t* expansion. In particular, in near-conservative systems where the sum is close to zero the expansion is expected to provide an adequate description for an appreciable period of time.

If the system’s dynamics is unstable, the abovementioned decreasing trend will eventually be reversed because the unstable modes will gradually take over, even in the absence of model error. There is thus bound to be, in such systems, a minimum of the mean square error as a function of time attained at some value *t** for which the time derivative of the right-hand side of Eq. (10) vanishes. As seen in sections 3–5, the *t* expansion of Eq. (10) provides in many cases reasonable estimates of this time, which can be further improved by alternative (more global) approximation schemes like the Padé approximants, as discussed further below.
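The competition between the dissipative short-time decrease and the instability-driven rebound can be visualized with a minimal Monte Carlo experiment. The two-variable system below (one unstable and one stable direction, constant Jacobian and source), the parameter values, and the ensemble size are illustrative assumptions, not taken from the text:

```python
# Monte Carlo sketch of the minimum of the mean square error when stable
# and unstable motions coexist.  The diagonal 2x2 Jacobian, the error
# source and all parameter values are illustrative assumptions.
import math, random

random.seed(1)
mu, lam = 0.5, 2.0        # unstable (+mu) and stable (-lam) exponents, lam > mu
dmu, eps = 0.05, 0.01     # parameter error and initial error amplitude

def sq_error(t, u1, u2):
    # exact solution of du1/dt = mu*u1 + dmu, du2/dt = -lam*u2
    e1 = u1 * math.exp(mu * t) + (dmu / mu) * (math.exp(mu * t) - 1.0)
    e2 = u2 * math.exp(-lam * t)
    return e1 * e1 + e2 * e2

# isotropic, unbiased initial errors with variance eps^2 per component
ens = [(random.uniform(-1.0, 1.0) * eps * 3 ** 0.5,
        random.uniform(-1.0, 1.0) * eps * 3 ** 0.5) for _ in range(4000)]

ts = [0.01 * k for k in range(201)]
errs = [sum(sq_error(t, u1, u2) for u1, u2 in ens) / len(ens) for t in ts]
t_star = ts[errs.index(min(errs))]
# the ensemble mean first decreases (trace J = mu - lam < 0), passes
# through a minimum at t_star > 0, and then grows along the unstable mode
```

Using a common ensemble for all times keeps the sampled curve smooth, so the location of the minimum is not drowned by Monte Carlo noise.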

Another interesting type of estimate afforded by our formulation is that of the relative importance of the contributions of initial (ϕ-independent parts) and model (ϕ-dependent parts) errors. Clearly, because the contributions of model errors start as terms of *O*(*t*²) there is bound to be a time interval (which in certain cases may be quite short; see examples in the following sections) during which initial errors dominate model ones. As the latter are gradually building up, the question arises as to (i) their effect on the minimum attained at *t** and (ii) the existence of a crossover time *t_c* at which the contributions of initial and model errors match each other. As seen in the following sections, this crossover time turns out to be larger than the time *t** when the mean square error attains a minimum, and beyond *t_c* model errors dominate. Notice that the matching of the two contributions at *t_c* does *not* imply that the mean quadratic error vanishes: its two constituent parts (initial and model errors) reach equal magnitudes but do not cancel each other (they would need for this to be of opposite signs). Strict cancellation, if any, can only occur at the level of Eq. (7) for the error itself prior to averaging, for certain particular initial conditions. We stress again that all estimates above can be carried out systematically and in a quantitative manner in the limit where both initial and parameter errors are small. Beyond this limit they become system dependent and need to be considered on a case-by-case basis.

The general formulation and in particular the *t* expansion outlined above carry through in essentially the same form for the class of norms generalizing the Euclidean one by the presence of a “metric” *g_ij*, *N*(**u**) = Σ*_ij g_ij u_i u_j*. The question of occurrence of minimum and crossover times is subtler because the Jacobian matrix elements *J_ij* are now weighted by the *g_ij* terms. As an example, the *t* term in Eq. (10) is replaced by 2*t* Σ*_i* 〈*J_ii*〉*g_ii*〈*u_i*²〉 or, in the limit where initial errors are isotropically distributed, by 2*ϵ*²*t* Σ*_i* 〈*J_ii*〉*g_ii*. In a sense, because of the subsistence of the weighting factors *g_ii* multiplying 〈*J_ii*〉, the error dynamics in the isotropic case under such a norm is mapped into the error dynamics in the anisotropic case under a Euclidean norm. Although no general statement can be made, one might expect (cf. also the comment in the last paragraph of section 4 below) that some of the results on minimum and crossover times will subsist as long as the *g_ii* along the stable directions retain a sufficiently significant value. Finally, when norms that are not quadratic in **u** are adopted (e.g., the magnitude |**u**| of the error vector) the terms linear in **u** do not cancel in Eq. (10) and the model error grows in a subquadratic fashion. Extracting generic features then becomes more laborious, owing to the nonanalytic dependencies introduced by the absolute value function.

We stress that the assumptions of unbiased and uncorrelated errors used to derive Eq. (10), as well as the one on isotropically distributed errors used in much of the discussion following this equation, apply only for the initial errors. As the system evolves, errors will not only grow but will become, as a rule, strongly correlated by the dynamics. They will also develop in an anisotropic way and, for sufficiently long times, they will tend to be oriented along the leading Lyapunov vectors. In numerical weather prediction models used for operational purposes, correlated and anisotropic “initial” errors show up through the use of short forecasts in the process of data assimilation. As long as these errors remain small, they can be accounted for by the averaged versions of Eqs. (B1), (B3), and (B4), where no assumptions of randomness and isotropy such as were used in deriving Eq. (10) are made. As a counterpart, no conclusions of a generality comparable to that of our earlier ones can now be drawn because one needs to specify the kinds of correlations and anisotropies that may be present. This can only be done on a case-by-case basis.

We close this section by noting that the above results extend naturally to the mean quadratic error of an individual component *α* (*α* = 1, …, *n*):

〈*u_α*²(*t*)〉 = 〈*u_α*²〉 + 2*t* 〈*J_αα*〉〈*u_α*²〉 + *t*² [Σ*_β* 〈*J_αβ*²〉〈*u_β*²〉 + 〈(𝗝²)*_αα*〉〈*u_α*²〉 + *δμ*² 〈*ϕ_α*²〉] + *O*(*t*³).   (11)

This relation can be applied to evaluate the effect on a particular component *α* of initial errors acting selectively along a particular phase space direction *β* or a combination of such directions. If the latter happens to be associated with a positive Lyapunov exponent, the growth of error will occur from the very start of the evolution.

## 3. Bistable systems

As a first illustration, consider a one-variable model giving rise to bistability,

d*x*/d*t* = *μx* − *x*³.   (12)

This equation admits the fixed point *x*₀ = 0, which is stable if *μ* < 0, and, if *μ* > 0, two additional ones given by *x*± = ±*μ*^{1/2}, which are stable in this range of *μ* values, bifurcating from *x*₀ = 0 when *μ* crosses the value *μ* = 0. The reference (“nature”) system is taken to obey the same law with *μ* = *μ_N*:

d*x_N*/d*t* = *μ_N x_N* − *x_N*³.   (13)

Setting *x* = *x_N* + *u* and *μ* = *μ_N* + *δμ*, the linearized error Eq. (4a) becomes

d*u*/d*t* = (*μ_N* − 3*x_N*²)*u* + *x_N δμ*.   (14)

The solution of this equation subject to an initial error *u*(0) = *u* is (we choose *μ_N* > 0, *x_N* = *μ_N*^{1/2})

*u*(*t*) = *u* e^{−2*μ_N t*} + (*δμ*/2*μ_N*^{1/2})(1 − e^{−2*μ_N t*}).

Notice that in the absence of model error the initial error decreases here monotonously with time, owing to the choice of one of the stable fixed points as a reference attractor. Squaring this expression and averaging over all values of *u* sampled from a uniform distribution with 〈*u*〉 = 0, 〈*u*²〉 = *ϵ*², one obtains the mean quadratic error

〈*u*²(*t*)〉 = *ϵ*² e^{−4*μ_N t*} + (*δμ*²/4*μ_N*)(1 − e^{−2*μ_N t*})².   (15)

Setting the time derivative of this expression equal to zero yields the time *t** at which a minimum is attained:

*t** = (1/2*μ_N*) ln(1 + 4*μ_N ϵ*²/*δμ*²).   (16)

We notice that *t** is advanced as the model error source term increases and postponed as the initial error source term increases. In fact, the existence of a minimum at a finite time *t* = *t** is here due entirely to the presence of model error because *t** → ∞ as *δμ* → 0 for *ϵ* fixed. This is a peculiarity of the class of models described by (12) and (13), where there is no coexistence of stable and unstable motions.

It is instructive to compare these results with the predictions of the *t* expansion. Setting 𝗝 = −2*μ_N* and ϕ = *μ_N*^{1/2} in Eq. (10), one gets

〈*u*²(*t*)〉 = *ϵ*²[1 − 4*μ_N t* + 8*μ_N*²*t*² − (32/3)*μ_N*³*t*³] + *δμ*²(*μ_N t*² − 2*μ_N*²*t*³).   (17)

This is just the expansion of Eq. (15) to the order *t*³, thereby illustrating the consistency of the general formulation of section 2. The time *t** to attain the minimum now satisfies a quadratic equation in *t* whose (unique) positive solution reduces to (16) in the limit *μ_N* → 0 (i.e., close to the bifurcation point). This provides an illustration of the discussion following Eq. (10) because in this limit the Jacobian tends to zero and the system becomes weakly dissipative.

Valuable insight on the domain of validity of these results can be gained from the exact solution of Eq. (12). For *μ* > 0 this solution reads

*x*(*t*) = *x*(0) e^{*μt*} [1 + *x*²(0)(e^{2*μt*} − 1)/*μ*]^{−1/2}.   (18)

Evaluating this solution for a reference system *x_N* where *μ* = *μ_N*, *x*(0) = *x_N*(0), and a model system *x* where *μ* = *μ_N* + *δμ*, *x*(0) = *x_N*(0) + *u*, one has then access to the full nonlinear evolution of the instantaneous quadratic error *u*²(*t*). Figure 1 depicts the time evolution of this quantity as computed from the exact expression with *x_N*(0) = *μ_N*^{1/2} (full line), the solution of the linearized equation for the error [Eq. (15); dashed line] and the *t* expansion [Eq. (17), dotted line] for a small value of *μ_N*. As can be seen the agreement is quite satisfactory, as expected from the comments made in connection with Eq. (10) and Eqs. (16) and (17). The situation is quite different for *μ_N* values far away from the bifurcation point (Fig. 2). The *t* expansion (dotted line) remains here close to the exact solution (full line) for a short period of time but subsequently exhibits a qualitatively different behavior, missing the minimum versus time altogether. The expansion can be improved significantly (Fig. 2, dashed–dotted line) by performing a partial resummation using a Padé approximant (Baker and Graves-Morris 1996) of the form

〈*u*²(*t*)〉 ≈ *ϵ*² (1 + *p*₁*t* + *p*₂*t*²)/(1 + *q*₁*t*),   (19)

in which the three coefficients *p*₁, *p*₂, and *q*₁ are determined by requiring that its *t* expansion coincides with Eq. (17) to the order *t*³. This approximation improves the *t*³ expansion and prolongs its range of validity while ensuring at the same time its positivity.
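The [2/1] Padé construction amounts to matching the cubic series term by term. The sketch below carries this out for the coefficients of the bistable-model expansion (with 𝗝 = −2*μ_N*, ϕ = *μ_N*^{1/2}; parameter values illustrative) and shows that the resummed form stays positive and retains a finite minimum where the raw cubic eventually turns negative:

```python
# [2/1] Pade resummation of a cubic series c0 + c1 t + c2 t^2 + c3 t^3:
# P(t) = (p0 + p1 t + p2 t^2) / (1 + q1 t), same expansion to O(t^3).
# The c_i below follow from the bistable-model t expansion with
# J = -2*mu and phi = mu**0.5; parameter values are illustrative.
mu, eps2, dmu2 = 1.0, 1e-4, 1e-6     # mu_N, eps^2, dmu^2

c0 = eps2
c1 = -4.0 * mu * eps2
c2 = 8.0 * mu**2 * eps2 + dmu2 * mu
c3 = -(32.0 / 3.0) * mu**3 * eps2 - 2.0 * mu**2 * dmu2

# matching (p0 + p1 t + p2 t^2) with (c0 + c1 t + c2 t^2 + c3 t^3)(1 + q1 t)
q1 = -c3 / c2
p0, p1, p2 = c0, c1 + q1 * c0, c2 + q1 * c1

def cubic(t):
    return c0 + c1 * t + c2 * t**2 + c3 * t**3

def pade(t):
    return (p0 + p1 * t + p2 * t**2) / (1.0 + q1 * t)

ts = [0.01 * k for k in range(1, 1001)]      # 0 < t <= 10
pade_positive = all(pade(t) > 0.0 for t in ts)
t_min_pade = min(ts, key=pade)
# the raw cubic diverges to -infinity at large t, while the Pade form
# remains positive and exhibits a finite minimum
```

The positivity follows here from the negative discriminant of the numerator and the positive *q*₁, in line with the remark above.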

We turn finally to the crossover time *t_c* at which the initial error and model error contributions [the *ϵ*² and *δμ*² terms in Eq. (15)] reach the same value. Equating the two parts of Eq. (15), one obtains 2*μ_N t_c* = ln(1 + 2*μ_N*^{1/2}*ϵ*/|*δμ*|). In Fig. 3 the *ϵ*² and *δμ*² parts of Eq. (15) and of its *t* expansion [Eq. (17)] are plotted against time (full and dashed lines, respectively). In both cases a crossover is found at a *t_c* larger than the time *t** for the total mean error 〈*u*²(*t*)〉 to attain the minimum as computed for the same parameter values (cf. Fig. 1). Notice that the crossover is meaningful only when the asymptotic value of the model error contribution, *δμ*²/(4*μ_N*), is sufficiently large compared to the initial error.

_{N}The simplicity of the model studied in this section allows one to identify further the nature of the balance realized at the crossover time *t**t* = *t**δμ* and initial errors *u* have opposite signs. As a by-product, for such realizations the quadratic error |*u*(*t*)|^{2} would possess a minimum at *u* = 0. In other words, the minimum of total error and matching of its two components are linked for this class of realizations. This is not so any longer for realizations in which *δμ* and *u* have the same sign and, as a corollary, for the mean quadratic error itself.

## 4. Error dynamics around a saddle point

As a second example we consider the error dynamics around a saddle point, for which the linearized error equations (4a) take the form

d*u*₁/d*t* = *μ_N u*₁ + *x_N δμ*,  d*u*₂/d*t* = −*λ_N u*₂,

where *x_N* is the coordinate of the saddle point along the *x* axis, *μ_N* (*μ_N* > 0) plays the role of both the control parameter subjected to uncertainty and of the positive Lyapunov exponent, and −*λ_N* (*λ_N* > 0) is the negative Lyapunov exponent, supposed not to be subjected to uncertainty. To satisfy the dissipativity condition, we require *λ_N* > *μ_N*.

_{N}*u*

_{1}〉 = 〈

*u*

_{2}〉 = 0,〈

*u*

_{1}

^{2}〉 = 〈

*u*

_{2}

^{2}〉 =

*ϵ*

^{2}), one obtainsand the corresponding

*t*expansion up to

*O*(

*t*

^{3}), the analog of Eq. (10) with

*J*

_{11}=

*μ*,

_{N}*J*

_{22}= −

*λ*,

_{N}*J*

_{12}=

*J*

_{21}= 0 and ϕ

_{1}=

*x*, ϕ

_{N}_{2}= 0:By setting the time derivative of 〈

*u*

^{2}(

*t*)〉 to zero one can evaluate from Eqs. (24) the time

*t** for a minimum to occur. Owing to the presence of the contributions due to −

*λ*, a minimum is bound to exist even in the absence of model error. As pointed out earlier, this is a general feature of systems in which stable and unstable motions coexist (in this respect, the example of section 3 is an exception). But because model error gives a positive contribution to the time derivative it tends to advance the value of

_{N}*t** in such a way that the contribution containing exp(−2

*λ**) can still cancel those containing the positive exponentials. This confirms further the trend found in the previous sections. Figures 4a and b depict the time dependence of 〈

*t*_{N}*u*

^{2}(

*t*)〉 according to the full expression [Eq. (24a); full line] and its

*t*expansion [Eq. (24b); dotted line] for two different values of

*μ*, keeping the other parameters fixed. They confirm the second trend identified in the previous sections, namely that as

_{N}*λ*tends to

_{N}*μ*the agreement between the full and the approximate form of Eqs. (24) is improved.

_{N}*t*

*x*

_{N}^{2}

*δμ*

^{2}/

*μ*

_{N}^{2}in Eq. (24a) and expanding the square term, one sees that at the crossover time

*t*

*t*

*t**.

The above conclusions subsist in the case of anisotropically distributed initial errors [〈*u*_{1}^{2}〉 ≠ 〈*u*_{2}^{2}〉 in Eq. (23)] provided that the magnitude of the error along the stable direction does not fall below some critical value, which depends on the ratio of the positive to the negative Lyapunov exponent.
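The advance of the minimum time with growing model error can be verified directly on Eq. (24a); the sketch below uses the mean square error consistent with the Jacobian and source term given above (*J*₁₁ = *μ_N*, *J*₂₂ = −*λ_N*, ϕ₁ = *x_N*), with illustrative parameter values:

```python
# Sketch verifying, for the saddle-point model of section 4, that a
# minimum of the mean square error exists even without model error and
# that model error advances it.  Parameter values are illustrative.
import math

muN, lamN, xN, eps = 0.5, 2.0, 1.0, 1e-2   # lamN > muN (dissipative)

def msq(t, dmu):
    # mean square error consistent with Eq. (24a)
    return (eps**2 * (math.exp(2.0 * muN * t) + math.exp(-2.0 * lamN * t))
            + (xN * dmu / muN)**2 * (math.exp(muN * t) - 1.0)**2)

def t_min(dmu, tmax=3.0, n=3000):
    ts = [tmax * k / n for k in range(1, n + 1)]
    return min(ts, key=lambda t: msq(t, dmu))

t0 = t_min(0.0)      # minimum exists even in the absence of model error
t1 = t_min(1e-3)
t2 = t_min(1e-2)
# increasing the model error advances the minimum: t2 < t1 <= t0; without
# model error the minimum sits at ln(lamN/muN) / (2 (muN + lamN))
```

The *δμ* = 0 location can be obtained in closed form by balancing the two exponentials, which gives a convenient check of the grid search.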

## 5. Low-order systems with chaotic dynamics

As a prototype of chaotic dynamics we consider the celebrated three-variable model derived by Lorenz (1963) in the context of thermal convection,

d*x*/d*t* = *σ*(*y* − *x*),
d*y*/d*t* = *rx* − *y* − *xz*,   (25)
d*z*/d*t* = *xy* − *bz*,

where *x* measures the rate of convective (vertical) turnover, *y* the horizontal temperature variation, and *z* the vertical temperature variation. Parameters *σ* and *b* account, respectively, for the intrinsic properties of the material and for the geometry of the convective pattern. In what follows we focus on the role of parameter *r*, the (reduced) Rayleigh number, which provides a measure of the strength of the thermal constraint to which the system is subjected and is the main component responsible for the thermal convection instability occurring in the system. The reference (“nature”) system obeys Eqs. (25) with *r* = *r_N*, the model being run with *r* = *r_N* + *δr*:

d*x_N*/d*t* = *σ*(*y_N* − *x_N*),  d*y_N*/d*t* = *r_N x_N* − *y_N* − *x_N z_N*,  d*z_N*/d*t* = *x_N y_N* − *b z_N*.   (26)

Model error and Jacobian matrix—the vector **ϕ** *δμ* and the matrix 𝗝 in Eqs. (10) and (11)—reduce then to

**ϕ** *δμ* = (0, *x̄*, 0) *δr*,  𝗝 = ( −*σ*  *σ*  0 ; *r* − *z̄*  −1  −*x̄* ; *ȳ*  *x̄*  −*b* ),   (27)

where the bars indicate evaluation on the reference attractor. The latter is chosen to correspond to the typical values *r* = 28, *σ* = 10, and *b* = 8/3. Notice that Σ*_i* 〈*J_ii*〉 = −(*σ* + *b* + 1) takes here a constant (state-independent) strongly negative value, reflecting the highly dissipative character of the ongoing dynamics.

Figure 6 summarizes results pertaining to the position of the minimum in the time evolution of the mean square error. The full lines are obtained by direct solution of the full Eqs. (26); the dashed ones stand for the analytic results provided by the Padé approximant, Eq. (19), corresponding to Eq. (10). The initial error is 〈*u*^{2}〉^{1/2} = 10^{−3}. As can be seen, the position of the minimum in the absence of model error is displaced to the left as the model error is increased, confirming further the trend found in the preceding sections. Furthermore, the numerical and analytic results are practically indistinguishable well beyond the minimum for model errors considerably exceeding the initial ones.
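The qualitative behavior just described can be reproduced with a minimal twin experiment on the Lorenz equations: a “nature” run with *r* = 28 against a “model” run with *r* = 28 + *δr*, started from randomly perturbed initial states. The ensemble size, time step, *δr*, and error amplitude below are illustrative choices, not those used for the figures:

```python
# Twin experiment on the Lorenz (1963) model: "nature" run with r = 28
# versus "model" run with r = 28 + dr, started from randomly perturbed
# initial states.  Ensemble size, dt, dr and the initial error amplitude
# are illustrative assumptions.
import random

random.seed(7)
SIGMA, B, R = 10.0, 8.0 / 3.0, 28.0

def rhs(s, r):
    x, y, z = s
    return (SIGMA * (y - x), r * x - y - x * z, x * y - B * z)

def rk4(s, r, dt):
    k1 = rhs(s, r)
    k2 = rhs(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)), r)
    k3 = rhs(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)), r)
    k4 = rhs(tuple(s[i] + dt * k3[i] for i in range(3)), r)
    return tuple(s[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

dt, nsteps, eps, dr, members = 0.01, 400, 1e-3, 5e-3, 80
mse = [0.0] * (nsteps + 1)
for m in range(members):
    ref = (1.0 + m, 1.0, 20.0)
    for _ in range(500):                  # spinup onto the attractor
        ref = rk4(ref, R, dt)
    mod = tuple(ref[i] + random.gauss(0.0, eps) for i in range(3))
    for k in range(nsteps + 1):
        mse[k] += sum((mod[i] - ref[i]) ** 2 for i in range(3))
        ref, mod = rk4(ref, R, dt), rk4(mod, R + dr, dt)
mse = [v / members for v in mse]
t_min_idx = mse.index(min(mse))
# the strongly negative trace of J, -(sigma + b + 1), forces an initial
# decrease of the mean square error; the instability then takes over
```

The dip followed by growth mirrors the minimum discussed above; repeating the experiment with larger *δr* moves the minimum to earlier times.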

The relative importance of initial and model errors in the course of time is illustrated in Fig. 7. In both cases a crossover is predicted, located (as in sections 3 and 4) after the time at which the total error attains its minimum. For large model errors these times are very small; see also Fig. 6 with *δr* = 5 × 10^{−3}. The situation is different for model errors comparable to initial ones, although in this case the initial error evolution becomes increasingly unsatisfactory when limited to *O*(*t*^{3}) terms.

As a further indicator of the relative roles of initial and model errors, we depict in Fig. 8 the structure and transient evolution of the error probability distributions in the absence (*δr* = 0; full lines) and presence (*δr* = 5 × 10^{−3}; dashed lines) of model error. In both cases the initial errors of the *x*, *y*, and *z* variables are sampled from a uniform distribution of zero mean and variance equal to 3.3 × 10^{−4}. For *δr* = 0 the bulk of probability density remains confined in a fairly narrow interval of values. At the same time, a certain asymmetry is manifested, reflected by the tendency to develop a (rather modest) tail in the direction of large error values. The situation changes considerably under the combined action of initial and model errors: the distribution is now much broader and displays, transiently, a bimodal structure at times considerably longer than the minimum or the crossover times (Figs. 6 and 7). We are probably dealing here with an intermediate to long time type of effect, reflecting the increasing delocalization of the system in phase space induced by the presence of model error.

Up to now, random (unbiased) initial condition errors were used in the numerical experiments. However, there is evidence that systematic initial condition errors are present in the analyses used for operational forecasts. These systematic errors can arise either from observational biases coming, for instance, from a progressive degradation of the quality of a measurement device (see e.g., Kalnay 2003) or from the data assimilation procedure, which uses an imperfect model displaying some systematic drifts. The presence of such biases has been amply demonstrated during the reanalysis experiments performed at the National Centers for Environmental Prediction (NCEP; Kistler et al. 2001) or at ECMWF (Simmons et al. 2004), through a detailed comparison with observed data.

A natural question to be raised concerns the impact of these systematic errors in operational forecasting on the predictability of the system at hand under the simultaneous presence of model errors. This point is briefly addressed here by introducing a systematic initial error for one of the variables of the Lorenz model, namely the variable *z*. The amplitude of this systematic error has been taken equal to the standard deviation of the random part of error added to each model variable at the initial time.

Figure 9a depicts the time of the minimum for the experiments, with and without systematic errors, as a function of the amplitude of the model error perturbation *δr*. An interesting feature is that the time of the minimum for positive values of *δr* is now shifted toward larger values. In addition, the minimum (normalized by the initial value of the error) is deepening as compared with the case in which systematic errors are absent (Fig. 9b). Notice that when the amplitude of the systematic error is increased further, the deepening of the minimum and its shift are also increased.

These trends can be interpreted by means of the short time expansion of appendix B for biased initial errors, taking 〈**u**〉 = (0, 0, *s*) (*s* > 0), with **ϕ** as in Eq. (27). Under these conditions the initial model error coupling is absent at the level of the first-order term of the short time expansion [Eq. (B1)] but gives a contribution at the level of the second-order term [third to last term in Eq. (B2)]. Using the explicit form of the Jacobian, one sees that this term yields a negative contribution under the conditions of Fig. 9a, equal to (2*J_yz* + *J_zy*)ϕ*_y u_z δr* = −*x̄*²*sδr*. Discarding at this stage the *O*(*t*³) term, this will thus tend to increase the value of the time of minimum [Eq. (28)], in agreement with Fig. 9a. Substituting into the expression of the error, one sees likewise that the value of the error at its minimum tends to decrease, in agreement with Fig. 9b.

In summary, a rich variety of behaviors can be found in the dynamics of the error in the Lorenz system for biased initial errors. In particular, a deepening of the error minimum and a shift of this minimum toward larger times is obtained for certain combinations of model and systematic errors. These features could have considerable operational implications because a model subjected to certain types of model errors could display different predictability properties depending on the presence, or not, of systematic errors in the initial conditions.

## 6. Conclusions

In this work some generic properties of the transient evolution of prediction errors under the combined effect of initial condition and of model errors have been derived, in the limit of small initial and parameter errors. The regime considered was in the short to intermediate time frame, as reflected by carrying out a power series expansion of the error [Eq. (7)] and its norm [Eq. (9)] limited to the *O*(*t*^{3}) terms. In its most general form, this expansion accounts for arbitrary types of initial and model errors beyond the usually considered case of unbiased (random) uncorrelated ones and brings out clearly the mechanisms by which an initial error acting along a particular phase space direction ends up contaminating, in the course of time, phase space directions that were initially error free. Under the additional assumption of uncorrelated and unbiased initial errors, a simplified expression was derived [Eq. (10)], which allowed us to identify conditions for the existence of a time at which mean quadratic errors attain a minimum, a crossover time at which the effects of initial conditions and of model errors match each other, or, possibly, the occurrence of inflexion points. In each case, the role of the intrinsic dynamics and in particular its dissipative character and the interplay between stability and instability has been brought out.

These general properties have been tested and illustrated on a number of generic low-order models of atmospheric dynamics. In all cases considered the crossover time was shown to exceed the time of the minimum. Some quantitative relations were obtained showing how the time of minimum is shifted as the magnitude of the model error is increased. The case of biased (systematic) initial errors was also considered in a model giving rise to deterministic chaos and was shown to be responsible for some qualitatively new properties. This case, as well as the case of correlated and anisotropic initial errors, definitely deserves a more comprehensive study in the future. In this respect, an interesting problem is to evaluate the impact on the crossover time of error sources of the data assimilation process, known to introduce preferential directions to the initial errors in phase space.

Finally, it would be interesting to extend the work reported here to account for multivariate systems (and in particular for spatially extended ones), as well as for cases in which the model and the reference variables do not span the same phase space. The role of stochastic perturbations and, in particular, the possibility that they may control to some extent the growth dynamics is also worth considering.

This work was supported in part by the Science Policy Office of the Belgian Federal Government under Contract MO/34/017. R. P. wishes to thank C. Pires for his encouragements and acknowledges the support of the Portuguese Foundation for Science and Technology.

## REFERENCES

Baker, G. A., Jr., and P. Graves-Morris, 1996: *Padé Approximants*. Cambridge University Press, 746 pp.

Charney, J. G., and J. G. DeVore, 1979: Multiple flow equilibria in the atmosphere and blocking. *J. Atmos. Sci.*, **36**, 1205–1216.

Dalcher, A., and E. Kalnay, 1987: Error growth and predictability in operational ECMWF forecasts. *Tellus*, **39A**, 474–491.

Ivanov, L. M., and P. C. Chu, 2007: On stochastic stability of regional ocean models in wind forcing. *Nonlinear Processes Geophys.*, **14**, 655–670.

Kalnay, E., 2003: *Atmospheric Modeling, Data Assimilation, and Predictability*. Cambridge University Press, 364 pp.

Kistler, R., and Coauthors, 2001: The NCEP–NCAR 50-Year Reanalysis: Monthly means CD-ROM and documentation. *Bull. Amer. Meteor. Soc.*, **82**, 247–267.

Krishnamurti, T., J. Sanjay, A. Mitra, and T. Vijaya Kumar, 2004: Determination of forecast errors arising from different components of model physics and dynamics. *Mon. Wea. Rev.*, **132**, 2570–2594.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. *J. Atmos. Sci.*, **20**, 130–141.

Lorenz, E. N., 1982: Atmospheric predictability experiments with a large numerical model. *Tellus*, **34**, 505–513.

Lorenz, E. N., 1996: Predictability: A problem partly solved. *Proc. Seminar on Predictability*, Reading, Berkshire, United Kingdom, ECMWF, 1–18.

Nicolis, C., 1992: Probabilistic aspects of error growth in atmospheric dynamics. *Quart. J. Roy. Meteor. Soc.*, **118**, 553–568.

Nicolis, C., 2003: Dynamics of model error: Some generic features. *J. Atmos. Sci.*, **60**, 2208–2218.

Nicolis, C., 2004: Dynamics of model error: The role of unresolved scales revisited. *J. Atmos. Sci.*, **61**, 1740–1753.

Nicolis, C., and G. Nicolis, 1995: Chaos in dissipative systems: Understanding atmospheric physics. *Adv. Chem. Phys.*, **91**, 511–570.

Nicolis, G., 1995: *Introduction to Nonlinear Science*. Cambridge University Press, 254 pp.

Reynolds, C. A., P. J. Webster, and E. Kalnay, 1994: Random error growth in NMC's global forecasts. *Mon. Wea. Rev.*, **122**, 1281–1305.

Schubert, S., and Y. Schang, 1996: An objective method for inferring sources of model error. *Mon. Wea. Rev.*, **124**, 325–340.

Simmons, A. J., and A. Hollingsworth, 2002: Some aspects of the improvement in skill of numerical weather prediction. *Quart. J. Roy. Meteor. Soc.*, **128**, 647–677.

Simmons, A. J., and Coauthors, 2004: Comparison of trends and variability in CRU, ERA-40, and NCEP/NCAR analyses of monthly-mean surface air temperature. ERA-40 Project Report Series 18, 38 pp.

Tribbia, J. J., and D. P. Baumhefner, 1988: The reliability of improvements in deterministic short-range forecasts in the presence of initial state and modeling deficiencies. *Mon. Wea. Rev.*, **116**, 2276–2288.

Tribbia, J. J., and D. P. Baumhefner, 2004: Scale interactions and atmospheric predictability: An updated perspective. *Mon. Wea. Rev.*, **132**, 703–713.

# APPENDIX A

## Derivation of Eqs. (7) and (8)

To obtain an explicit form starting from Eq. (6), one needs to evaluate the first three time derivatives of **u** at *t* = 0, when the system is on its reference attractor. The first derivative is available from Eq. (4a). Projecting along phase space direction *i*, one obtains immediately the expansion coefficient *A _{i}* in the form given by Eq. (8a).

The second derivative is obtained by differentiating Eq. (4a) once with respect to time and substituting *d***u**/*dt* from Eq. (4a). Projecting this relation along phase space direction *i* yields the expansion coefficient *B _{i}* in the form given by Eq. (8b).

The third derivative is obtained by differentiating Eq. (4a) twice with respect to time, again substituting *d***u**/*dt* from Eq. (4a) and grouping similar terms together. Projecting this relation along phase space direction *i*, keeping in mind that the dots imply summation over intermediate indices, yields the expansion coefficient *C _{i}* in the form given by Eq. (8c).
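The differentiate-and-substitute procedure of this appendix can be mimicked symbolically for a hypothetical scalar model; the cubic form of *f* and all symbols below are illustrative assumptions (the paper's Eqs. (4)–(8) are multivariate):

```python
import sympy as sp

t, mu, dmu, u0 = sp.symbols('t mu delta_mu u0')
x, u = sp.Function('x'), sp.Function('u')

# hypothetical scalar model dx/dt = f(x, mu); cubic form assumed
f = lambda X, M: M * X - X**3

ref_rhs = f(x(t), mu)                              # reference evolution law
err_rhs = f(x(t) + u(t), mu + dmu) - f(x(t), mu)   # error evolution law

# O(t) coefficient: du/dt evaluated at t = 0, where u(0) = u0
A = err_rhs.subs(u(t), u0)

# O(t^2) coefficient: differentiate the error law and substitute du/dt
# and dx/dt back from the evolution laws, as done in this appendix
B = sp.diff(err_rhs, t).subs({sp.Derivative(u(t), t): err_rhs,
                              sp.Derivative(x(t), t): ref_rhs})
B = sp.expand(B.subs(u(t), u0))

print(A)
print(B)
```

Even at *O*(*t*) the coefficient contains a *δμ* term alongside the *u*₀ term, showing how the model error enters the growth from the very start.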

# APPENDIX B

## Derivation of Eq. (10)

We first derive explicit expressions for the coefficients of the *t*, *t*^{2}, and *t*^{3} terms in Eq. (9).

### t terms

### t^{2} terms

### t^{3} terms

We first evaluate the *A _{i}B _{i}* part of the coefficient using Eqs. (8a) and (8b). We carry out explicitly the multiplication of expression (8a) by (8b), grouping terms together as above. We also take special care to combine, whenever possible, terms involving time derivatives so as to obtain expressions in which the time derivative acts on the full phase space function acting on the initial errors; this yields expression (B3). Turning next to the *C _{i}u _{i}* part of the coefficient of the *t*^{3} term in Eq. (9), we obtain expression (B4) by utilizing Eq. (8c) and grouping terms in the same way as above. Summing (B3) and (B4) divided by 3 yields the explicit expression of the coefficient of the *t*^{3} term in Eq. (9).
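The bookkeeping behind this grouping is consistent with squaring the Taylor expansion of the error termwise. In the reconstruction below the symbols *A _{i}*, *B _{i}*, *C _{i}* follow Eq. (8), while the factorial prefactors are inferred from the grouping just described:

```latex
u_i(t) = u_i(0) + A_i t + \tfrac{1}{2} B_i t^2 + \tfrac{1}{6} C_i t^3 + O(t^4),
\qquad
\sum_i u_i^2(t) = \sum_i u_i^2(0) + 2t \sum_i u_i(0) A_i
  + t^2 \sum_i \bigl[ A_i^2 + u_i(0) B_i \bigr]
  + t^3 \sum_i \bigl[ A_i B_i + \tfrac{1}{3} u_i(0) C_i \bigr] + O(t^4),
```

so that the coefficient of *t*^{3} is indeed the sum of the *A _{i}B _{i}* part and of the *C _{i}u _{i}* part divided by 3.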

For a bounded function *g* (the kind of function one deals with in physical systems), *g*(*T*) − *g*(0) is finite, and hence the term on the right-hand side of the last equality of Eq. (B5) tends to zero in the long-time limit.

Consider next the average of the remaining terms over the distribution of initial errors. A further drastic simplification occurs in the case of unbiased errors, 〈*u _{j}*〉 = 0, because all terms in *δμ* surviving the first averaging then give a vanishing contribution in (B1)–(B4): the last term in (B1), the third term in (B2), the third term in (B3), and the second and third terms in (B4). Assuming further that the initial errors are uncorrelated, 〈*u _{i}u _{j}*〉 = 〈*u _{i}*^{2}〉*δ*_{ij}^{Kr}, transforms the fourth term in (B3) into the total derivative of *J*_{iℓ}^{2}, which gives a vanishing contribution through the phase space averaging. Keeping track of all these steps, one arrives finally at Eq. (10).
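The unbiased, uncorrelated assumption can be illustrated with a quick Monte Carlo check; the dimension, sample size, and per-direction variances below are arbitrary choices:

```python
import numpy as np

# Check that unbiased, uncorrelated initial errors satisfy
# <u_i u_j> = <u_i^2> * delta_ij up to sampling noise.
rng = np.random.default_rng(42)
n, samples = 3, 200_000
sigma = np.array([0.1, 0.2, 0.3])        # per-direction error std (assumed)
u = sigma * rng.standard_normal((samples, n))

moments = u.T @ u / samples              # sample estimate of <u_i u_j>
print(np.round(moments, 4))              # approximately diag(sigma**2)
```

The off-diagonal entries are *O*(σᵢσⱼ/√N), vanishing in the large-sample limit, which is exactly the property exploited in the averaging above.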