Accounting for Model Error in Variational Data Assimilation: A Deterministic Formulation

Alberto Carrassi, Institut Royal Météorologique de Belgique, Brussels, Belgium

Stéphane Vannitsem, Institut Royal Météorologique de Belgique, Brussels, Belgium

Abstract

In data assimilation, observations are combined with the dynamics to obtain an estimate of the actual state of a natural system. The knowledge of the dynamics, in the form of a model, is unavoidably incomplete, and model error affects the prediction accuracy together with the error in the initial condition. The variational assimilation theory provides a framework to deal with model error along with the uncertainties coming from other sources entering the state estimation. Nevertheless, even if the problem is formulated as Gaussian, accounting for model error requires the estimation of its covariances and correlations, which are difficult to estimate in practice, in particular because of the large system dimension and the scarcity of observations. Model error has therefore been either neglected or assumed to be an uncorrelated noise. In the present work, an approach to account for a deterministic model error in variational assimilation is presented. Equations for its correlations are first derived, along with an approximation suitable for practical applications. Based on these considerations, a new four-dimensional variational data assimilation (4DVar) weak-constraint algorithm is formulated and tested in the context of a linear unstable system and of the three-component Lorenz model, which has chaotic dynamics. The results demonstrate that this approach is superior in skill to both the strong-constraint and a weak-constraint variational assimilation that employs the uncorrelated-noise model error assumption.

Corresponding author address: Alberto Carrassi, Institut Royal Meteorologique de Belgique, Av. Circulaire 3, 1180, Brussels, Belgium. Email: a.carrassi@oma.be

This article is included in the Intercomparisons of 4D-Variational Assimilation and the Ensemble Kalman Filter special collection.


1. Introduction

Most operational weather prediction centers worldwide adopt a variational data assimilation algorithm (Sasaki 1970; Le Dimet and Talagrand 1986; Rabier et al. 2000). The state estimation in the variational assimilation is formulated as an optimal control problem, and aims at determining the trajectory that best fits the observations and accounts for the dynamical constraints given by the law supposed to govern the flow. The accuracy of the variational solution is naturally connected to our knowledge of the error associated with the information sources. Based on the Gaussian hypothesis, such knowledge is expressed via error covariances and correlations. However, while an accurate estimate of the observation error covariance is usually at hand, more difficulties arise for the background and model error covariances.

In the last decades, extensive research has been devoted to improving the estimation of the background error covariance, particularly in the context of sequential Kalman filter (KF)-type algorithms (Kalman 1960; Ghil et al. 1981). Dee (1995) has pointed out the difficulties of specifying the model error statistics because of the large size of a typical geophysical problem and the consequent enormous information requirement involved. He proposed a scheme for the online estimation of error covariances, also suitable for the estimation of model error, especially when the latter is expressed through a reduced number of relevant degrees of freedom.

In the context of ensemble-based schemes [see, e.g., Evensen (1994) for the ensemble Kalman filter (EnKF)] a lot of efforts have been devoted to the representation of model error through an optimal ensemble design. Among these studies, Hamill and Whitaker (2005) have investigated the ability of two methods, covariance inflation and additive random error, to parameterize error due to unresolved scales. Meng and Zhang (2007) have analyzed the performance of the EnKF in the context of a mesoscale and regional-scale model affected by significant model error due to physical parameterizations, while in Fujita et al. (2007) the EnKF was used to assimilate surface observations with the ensemble designed also to represent errors in the model physics. Houtekamer et al. (2009) have examined several approaches to account for model error in an operational EnKF used in a numerical weather prediction (NWP) context. They found that, from the approaches they considered, a combination of isotropic model-error perturbations and the use of different model versions for different ensemble members gave the best performance. A similar analysis was done by Li et al. (2009) in the context of the local ensemble transform Kalman filter (Hunt et al. 2007). They investigated several methods to account for model error, including model bias and system noise, and concluded that the best performances are obtained when these two approaches are combined.

In variational data assimilation, model error has often been ignored, under the implicit assumption that it has only a minor influence compared with errors in the initial condition and in the observations. More recently, the refinement and expansion of the observational network have shifted the balance of the problem, suggesting an urgent need for a deeper understanding of the model error dynamics and its treatment in data assimilation. Different solutions have been proposed in recent years to estimate and account for model error in variational assimilation (Derber 1989; Zupanski 1997; Vidard et al. 2004; Trémolet 2006). These studies have shown that treating the model error as part of the estimation problem leads to significant improvements in the accuracy of the state estimate. However, these studies have used crude estimations of the model error covariances (Trémolet 2007) and/or simple dynamical laws for the model error, such as a first-order Markov process (Zupanski 1997). Because of the constraints given by the size of the problem, model error has usually been assumed uncorrelated in time (see, e.g., Trémolet 2006). In contrast to the case of ensemble-based schemes, where model error covariances are estimated using the ensemble, in variational data assimilation these estimates have to be built on some statistical or dynamical assumptions.

In the last few years, the dynamics of model error has attracted a lot of interest (e.g., Reynolds et al. 1994; Vannitsem and Toth 2002). In particular, a series of works have studied the behavior of deterministic model errors and identified some universal dynamical features (Nicolis 2003, 2004; Nicolis et al. 2009). They were at the origin of a deterministic formulation of the model error term in the extended Kalman filter (EKF; Carrassi et al. 2008). The present investigation takes advantage of the same theoretical framework on the deterministic dynamics of the model error to formulate a new approach for variational assimilation. Specifically, evolution equations for the model error covariances and correlations are derived along with suitable, application-oriented, approximations. These deterministic laws are then incorporated in the formulation of the variational problem.

Here we focus on model error due to incorrect parameterizations, but the approach can also be used in the case of error coming from processes that are not accounted for by the model but are parameterized in terms of the resolved scales (Nicolis 2004). The proposed algorithm is analyzed, in comparison with traditional approaches, in the context of two systems of increasing dynamical complexity, beginning with a one-dimensional linear system and then with the three-component Lorenz model (Lorenz 1963), a nonlinear dynamical system exhibiting a chaotic behavior.

The paper is organized as follows. In section 2 the variational assimilation problem is described, while the deterministic formulation for model error dynamics is presented in section 3. Results for both systems are given in section 4 and the final conclusions are drawn in section 5.

2. Formulation of the problem

Let us write the equations describing the model of a system as
$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, \boldsymbol{\lambda}) \qquad (1)$$
where the I-dimensional state vector x(t) describes a set of relevant physical variables of the system under consideration, and the Ip-dimensional vector λ describes a set of parameters that can be related, for instance, to parameterized physical mechanisms. Alternatively, the solution of the model in (1) can be expressed as x(t) = ℳ(x0, λ), with x0 = x(t0) as the initial condition. We assume this model is used to describe the dynamics of a real system whose corresponding equations can be written as
$$\frac{d\mathbf{v}}{dt} = \mathbf{f}^{tr}(\mathbf{v}, \boldsymbol{\lambda}^{tr}) \qquad (2)$$
where v(t) is the unknown Itr-dimensional truth state and λtr is an Iptr-dimensional vector of unknown parameters. Alternatively, the evolution of v(t) can be written as v(t) = ℳtr(v0, λtr).
We suppose that M measurements are collected at the discrete times (t1, t2, … , tM) within the reference time interval T. The observations, yo, are related to the true state through the observation operator ℋ, and are affected by an error, ϵk, which is assumed here to be a white noise:
$$\mathbf{y}^o_k = \mathcal{H}[\mathbf{v}(t_k)] + \boldsymbol{\epsilon}_k, \qquad k = 1, \ldots, M \qquad (3)$$
An a priori estimate, xb, of the model initial condition is supposed to be available. This is usually referred to as the background state, and
$$\mathbf{x}^b = \mathbf{x}_0 + \boldsymbol{\epsilon}^b \qquad (4)$$
where ϵb represents the background error.
We search for the trajectory that, on the basis of the background field and according to some specified criteria, best fits the observations over the reference period T. However, besides the observations and the background, the model dynamics itself represents a source of additional information that can be exploited in the state estimate. Nevertheless, since the model is not perfect, it is usually assumed that an additive model error, ϵm(t), affects the model prediction in the following form:
$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, \boldsymbol{\lambda}) + \boldsymbol{\epsilon}^m(t) \qquad (5)$$
Assuming that all errors are Gaussian, these information sources can be combined using a least squares approach. Assuming furthermore that these errors do not correlate with each other, the quadratic penalty functional, combining all the information, takes the following form (Jazwinski 1970, his section 5.3):
$$J[\mathbf{x}(t)] = \frac{1}{2}\int_{t_0}^{T}\!\!\int_{t_0}^{T} [\boldsymbol{\epsilon}^m(t')]^{\mathrm{T}}\,\mathbf{P}^{-1}(t',t'')\,\boldsymbol{\epsilon}^m(t'')\,dt'\,dt'' + \frac{1}{2}\sum_{k=1}^{M} [\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}(t_k))]^{\mathrm{T}}\,\mathbf{R}_k^{-1}\,[\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}(t_k))] + \frac{1}{2}[\mathbf{x}^b - \mathbf{x}_0]^{\mathrm{T}}\,\mathbf{B}^{-1}\,[\mathbf{x}^b - \mathbf{x}_0] \qquad (6)$$
The weighting matrices 𝗣(t′, t″), 𝗥k, and 𝗕 have to be regarded as a measure of our confidence in the model, in the observations, and in the background field, respectively. In this Gaussian formulation these weights can be chosen to reflect the relevant moments of the corresponding Gaussian error distributions.

The best fit is defined as the solution, x̂(t), minimizing the cost function J over the interval T. It is known that, under the aforementioned hypothesis of Gaussian errors, x̂(t) corresponds to the maximum likelihood solution and J can be used to define a multivariate distribution of x(t) (Jazwinski 1970, his section 5.3). Note that in order to minimize J all errors have to be explicitly written as a function of the trajectory x(t).

The variational problem defined by (6) is usually referred to as weak constraint given that the model dynamics is affected by errors (Sasaki 1970). An important particular case is the strong-constraint variational assimilation in which the model is assumed to be perfect, that is ϵm = 0 (Lewis and Derber 1985; Le Dimet and Talagrand 1986). In this case the model-error-related term disappears and the cost function reads as
$$J[\mathbf{x}(t)] = \frac{1}{2}\sum_{k=1}^{M} [\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}(t_k))]^{\mathrm{T}}\,\mathbf{R}_k^{-1}\,[\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}(t_k))] + \frac{1}{2}[\mathbf{x}^b - \mathbf{x}_0]^{\mathrm{T}}\,\mathbf{B}^{-1}\,[\mathbf{x}^b - \mathbf{x}_0] \qquad (7)$$

The calculus of variations can be used to find the extremum of (6) [or (7)] and leads to the corresponding Euler–Lagrange equations (Le Dimet and Talagrand 1986; Bennett 1992, his sections 5.3–5.4). In the strong-constraint case, the requirement that the solution has to follow the dynamics exactly is satisfied by appending to (7) the model equations as a constraint by using a proper Lagrange multiplier field. However, the size and complexity of the typical numerical weather prediction problem is such that the Euler–Lagrange equations cannot be practically solved unless drastic approximations are introduced. When the dynamics is linear and the amount of observations is not very large, the Euler–Lagrange equations can be efficiently solved with the method of representers (Bennett 1992, his sections 5.3–5.4). An extension of this approach to nonlinear dynamics has been proposed in Uboldi and Kamachi (2000). Nevertheless, the representers method is far from being applicable for realistic high-dimensional problems such as NWP, and an attractive alternative is represented by the descent methods, which make use of the gradient vector of the cost function in an iterative minimization procedure (Talagrand and Courtier 1987). This latter approach is used in most of the operational NWP centers that employ variational assimilation. Note finally that the Euler–Lagrange equations can also be used as a tool to obtain the cost function gradient. Details on the representers and the descent techniques are provided in section 4 in relation to the applications described therein.

In the discrete case, when the model dynamics is discretized over N time steps of size Δt > 0, the weak-constraint cost function becomes
$$J(\mathbf{x}_0,\ldots,\mathbf{x}_N) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} (\boldsymbol{\epsilon}^m_i)^{\mathrm{T}}\,(\mathbf{P}_{i,j})^{-1}\,\boldsymbol{\epsilon}^m_j + \frac{1}{2}\sum_{k=1}^{M} [\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}_k)]^{\mathrm{T}}\,\mathbf{R}_k^{-1}\,[\mathbf{y}^o_k - \mathcal{H}(\mathbf{x}_k)] + \frac{1}{2}[\mathbf{x}^b - \mathbf{x}_0]^{\mathrm{T}}\,\mathbf{B}^{-1}\,[\mathbf{x}^b - \mathbf{x}_0] \qquad (8)$$
The best-fit trajectory is now defined over N time steps in the interval T, and 𝗣i,j represents the model error covariance matrix between times ti and tj.
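The structure of the discrete cost function in (8) can be sketched in code. The following Python fragment is a minimal illustration, not the paper's implementation: it assumes an identity observation operator, a candidate trajectory stored as an array, and a hypothetical helper `model_step` for the one-step forecast; the block structure of the space-time model error matrix 𝗣 is kept explicit.

```python
import numpy as np

def weak_constraint_cost(xs, model_step, y_obs, R, xb, B, P_blocks):
    """Discrete weak-constraint cost, Eq. (8), with an identity observation operator.

    xs         : (N+1, I) candidate trajectory x_0 ... x_N (the control variable)
    model_step : one-step model forecast, x_i -> x_{i+1} (hypothetical helper)
    y_obs      : {time index k: observed state vector}
    R, B       : observation and background error covariance matrices
    xb         : background estimate of x_0
    P_blocks   : (N, N, I, I) array of model error correlation blocks P_{i,j}
    """
    N, I = xs.shape[0] - 1, xs.shape[1]
    # Model error at each step: mismatch between the trajectory and the forecast.
    eps = np.array([xs[i] - model_step(xs[i - 1]) for i in range(1, N + 1)])
    # Assemble the blocks P_{i,j} into one (N*I, N*I) space-time matrix so that
    # temporally correlated model error is penalized (first term of Eq. 8).
    P_full = P_blocks.transpose(0, 2, 1, 3).reshape(N * I, N * I)
    e = eps.reshape(N * I)
    Jm = 0.5 * e @ np.linalg.solve(P_full, e)
    Ri = np.linalg.inv(R)
    Jo = 0.5 * sum((y - xs[k]) @ Ri @ (y - xs[k]) for k, y in y_obs.items())
    db = xb - xs[0]
    Jb = 0.5 * db @ np.linalg.inv(B) @ db
    return Jm + Jo + Jb
```

If the model error is assumed uncorrelated in time, 𝗣i,j = δij𝗤 and the assembled matrix becomes block diagonal, so the double sum collapses to a single sum over the steps.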

Note that in the cost functions in (6) and (8), the model error is allowed to be correlated in time, and leads to the double integral and summation, respectively. If it is assumed to be a random uncorrelated noise, only covariances have to be taken into account and the double integral in the first rhs of (6) [the double summation in the first rhs of (8)] reduces to a single integral (to a single summation).

The search for the best-fit trajectory by minimizing the associated cost function requires the specification of the weighting matrices. The estimation of the matrices 𝗣(t′, t″) is particularly difficult in realistic NWP applications because of the large size of the typical models currently in use. Therefore, it turns out to be crucial to define approaches for modeling the matrices 𝗣(t′, t″) in order to reduce the number of parameters needed for their estimation.

3. Deterministic model error dynamics

The derivation of the statistical weights, 𝗣(t′, t″), is based on the formalism for model error dynamics introduced in Nicolis (2003) and is an extension of a previous work in which a deterministic model error treatment was incorporated into the extended Kalman filter (Carrassi et al. 2008).

Let us assume that the unknown true dynamics we intend to estimate can be conveniently expressed through a set of differential equations equal to (1) except in their parameters, so that
$$\frac{d\mathbf{v}}{dt} = \mathbf{f}(\mathbf{v}, \boldsymbol{\lambda}^{tr}) \qquad (9)$$
where v(t) is now I-dimensional and λtr is an Ip-dimensional vector. A prediction of the evolution of (9), perceived through the model in (1), will be affected by errors in both the initial condition and the model parameters.

The assumption that the model error comes only from the misspecification of the parameters does not account for other potential model errors, present in a realistic application, such as those due to unresolved scales and/or to unrepresented physical processes. However, the approach described in the sequel can be straightforwardly extended to the case of omission errors expressible in terms of the resolved scales (Nicolis 2004), since they can be brought back to the simpler case of parametric errors (e.g., Nicolis et al. 2009). These types of errors are typical in NWP systems.

An approximate evolution law for the state estimation error is obtained by taking the difference between (9) and (1). As usual, for “small” error the linearized dynamics provides a reliable approximation of the actual evolution. The linearization is made along a model trajectory, the solution of (1), by expanding to the first order in δx = v(t) − x(t) and δλ = λtrλ, and reads
$$\frac{d(\delta\mathbf{x})}{dt} = \left.\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right|_{\mathbf{x}(t)}\delta\mathbf{x} + \left.\frac{\partial\mathbf{f}}{\partial\boldsymbol{\lambda}}\right|_{\boldsymbol{\lambda}}\delta\boldsymbol{\lambda} \qquad (10)$$
The first partial derivative on the rhs of (10) is the Jacobian of the model dynamics evaluated along its trajectory. The second term, which corresponds to the model error, will be hereafter denoted as δμ = (∂f/∂λ)|λδλ.
The solution of (10), with initial condition δx(t0) = δx0, reads
$$\delta\mathbf{x}(t) = \mathbf{M}_{t,t_0}\,\delta\mathbf{x}_0 + \delta\mathbf{x}^m(t) \qquad (11)$$
and
$$\delta\mathbf{x}^m(t) = \int_{t_0}^{t} \mathbf{M}_{t,\tau}\,\delta\boldsymbol{\mu}(\tau)\,d\tau \qquad (12)$$
with 𝗠t,t0 being the fundamental matrix (the propagator) relative to the linearized dynamics along the trajectory between t0 and t. Equation (11) states that, in the linear approximation, the error in the state estimate is given by the sum of two terms: one relative to the evolution of initial condition error and another one, δxm, relative to the model error.

We now make the conjecture that, as long as the errors in the initial condition and in the model parameters are small, (12) can be used to estimate the model error ϵm(t) entering the weak-constraint cost functions, and consequently the corresponding correlation matrices 𝗣(t′, t″). In this case, the model error dependence on the model state implies the dependence of model error correlation on the correlation time scale of the model variables themselves.

By taking the expectation of the product of (12) by itself, over an ensemble of realizations around a specific trajectory, we obtain an equation for the model error correlation matrix:
$$\mathbf{P}(t', t'') = \langle \delta\mathbf{x}^m(t')\,[\delta\mathbf{x}^m(t'')]^{\mathrm{T}}\rangle = \int_{t_0}^{t'}\!d\tau\int_{t_0}^{t''}\!d\tau'\;\mathbf{M}_{t',\tau}\,\langle \delta\boldsymbol{\mu}(\tau)\,[\delta\boldsymbol{\mu}(\tau')]^{\mathrm{T}}\rangle\,\mathbf{M}^{\mathrm{T}}_{t'',\tau'} \qquad (13)$$
The integral in (13) gives the model error correlation between times t′ and t″. In this form, (13) is of little practical use for any realistic nonlinear systems. A suitable expression can be obtained by considering its short-time approximation through a Taylor expansion around (t′, t″) = (t0, t0).
It can be shown (see appendix A) that the first nontrivial order is quadratic and reads
$$\mathbf{P}(t', t'') \approx (t' - t_0)(t'' - t_0)\,\langle \delta\boldsymbol{\mu}_0\,\delta\boldsymbol{\mu}_0^{\mathrm{T}}\rangle \qquad (14)$$
Equation (14) states that the model error correlation between two arbitrary times, t′ and t″, within the short-time regime, is equal to the model error covariance at the origin, 〈δμ0 δμ0T〉, multiplied by the product of the two time intervals. Naturally the accuracy of this approximation is connected on the one hand to the length of the reference time period and on the other to the accuracy of the knowledge about the error in the parameters needed to estimate 〈δμ0 δμ0T〉. Nicolis (2003) has shown that the evolution of the quadratic error is bound to be universally quadratic in the short time for deterministic systems and that the range of validity of this evolution is related to the inverse of the largest Lyapunov exponent of the underlying dynamics.
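The quadratic short-time law can be checked numerically on a toy example. The sketch below, with all numerical values our own illustrative assumptions, compares a Monte Carlo estimate of the model error variance 〈[δx^m(t)]²〉 for a scalar growth model with a misspecified rate against the quadratic approximation in (14).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar model dx/dt = lam * x with a misspecified growth rate;
# all numerical values here are assumptions made for the demonstration.
lam, x0 = 0.05, 2.0
sigma_dlam = 0.01                              # std of the parametric error
dlam = rng.normal(0.0, sigma_dlam, 50_000)     # ensemble of delta-lambda

def model_error(t):
    """delta x^m(t): 'true' solution (rate lam + dlam) minus model solution."""
    return x0 * (np.exp((lam + dlam) * t) - np.exp(lam * t))

# Model error covariance at the initial time, <dmu_0 dmu_0>, with dmu_0 = x0 * dlam.
dmu0_var = x0 ** 2 * sigma_dlam ** 2

for t in (0.5, 10.0):
    mc = np.mean(model_error(t) ** 2)      # Monte Carlo estimate of P(t, t)
    short_time = dmu0_var * t ** 2         # quadratic short-time law, Eq. (14)
    print(f"t = {t}: exact/approx ratio = {mc / short_time:.3f}")
```

For this linear example the ratio grows like e^{2λt} to leading order, so the approximation is tight while λt ≪ 1 and degrades afterwards, consistent with the role of the leading Lyapunov exponent noted above.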

4. Variational assimilation using short-time approximation for the model error

In this section we propose to use the short-time law in (14) as an estimate of the model error correlations in the variational assimilation. Besides being a short-time approximation, this evolution equation is based on the hypothesis of linear error dynamics. To highlight the advantages and drawbacks of its application, we explicitly compare a weak-constraint variational assimilation employing this short-time approximation with other formulations.

The analysis is carried out in the context of two systems of increasing complexity. We first deal with a very simple example of scalar dynamics that can be integrated analytically. The variational problem is solved with the technique of representers. The simplicity of the dynamics allows us to solve (13) explicitly and use it to estimate the model error correlations. This "full weak constraint" formulation of the four-dimensional variational data assimilation (4DVar) is evaluated and compared with the one employing the short-time approximation in (14). In addition, a comparison is made with the widely used strong-constraint 4DVar, in which the model is considered perfect.

In the last part of the section we extend the analysis to an idealized nonlinear chaotic system. In this case the minimization is performed using an iterative descent method, which makes use of the cost function gradient. In this nonlinear context the short-time-approximated weak-constraint 4DVar is compared to the strong-constraint 4DVar and to a weak-constraint 4DVar in which model error is treated as a random uncorrelated noise, as is often assumed in realistic applications.

a. Linear system: Solution with the representers

Let us consider the simple scalar dynamics:
$$\frac{dx}{dt} = \lambda^{tr} x \qquad (15)$$
with λtr > 0, as our reference.
Suppose that M noisy observations of the state variable are available at the discrete times tk ∈ [0, T], 1 ≤ k ≤ M:
$$y^o_k = x(t_k) + \epsilon^o_k$$
with ϵko being an additive random noise with variance σo2(tk) = σo2, 1 ≤ k ≤ M, and that a background estimate, xb, of the initial condition, x0, is at our disposal:
$$x^b = x_0 + \epsilon^b$$
with ϵb being the background error with variance σb2. We assume the model is given by
$$\frac{dx}{dt} = \lambda x$$
We seek a solution minimizing simultaneously the error associated with all these information sources. The quadratic cost function can be written in this case as
$$J[x(t)] = \frac{1}{2}\int_{0}^{T}\!\!\int_{0}^{T} \delta x^m(t')\,p^{-2}(t',t'')\,\delta x^m(t'')\,dt'\,dt'' + \frac{1}{2\sigma_o^2}\sum_{k=1}^{M}[y^o_k - x(t_k)]^2 + \frac{(x^b - x_0)^2}{2\sigma_b^2} \qquad (16)$$
The control variable here is the entire trajectory within the assimilation interval T. In (16) we have used the fact that the model error bias, δx^m(t), is given by x(t) − x0e^{λt}, assuming the model and the control trajectory, x(t), start from the same initial condition x0. Note that x0 is itself part of the estimation problem through the background term in the cost function.
The final minimizing solution of (16) is found using the technique of representers (the details of the derivation are given in appendix B) and reads
$$\hat{x}(t) = x^f(t) + \sum_{k=1}^{M}\beta_k\,r_k(t) \qquad (17)$$
where the M functions, rk(t), are the representers while the coefficients, βk, are given by
$$\boldsymbol{\beta} = (\mathbf{S} + \sigma_o^2\,\mathbf{I})^{-1}\mathbf{d} \qquad (18)$$
where d is the innovation vector, d = (y1o − x1f, … , yMo − xMf)T, 𝗦 is the M × M matrix (𝗦)i,j = ri(tj), and 𝗜 is the M × M identity matrix. The coefficients are then inserted into (17) to obtain the final solution (see appendix B).
In the derivation of the general solution in (17) [with the coefficients (18)], we have not specified the model error correlations p2(t′, t″); the particular choice adopted characterizes the formulations we aim to compare. Our first choice consists in evaluating the model error correlations through (13). By inserting δμ = (∂f/∂λ)δλ, with f(x) = λx, and the fundamental matrix, 𝗠t,t0 = e^{λ(t−t0)}, associated with the dynamics (15), we get
$$p^2(t',t'') = \langle x_0^2\,\delta\lambda^2\rangle\,t'\,t''\,e^{\lambda(t'+t'')} \qquad (19)$$
where the expectation, 〈·〉, is an average over a sample of initial conditions and parametric errors. Expression (19) can now be inserted into (B8) and (B9) to obtain the M representer functions in this case:
$$r_k(t) = e^{\lambda(t+t_k)}\left[\sigma_b^2 + \langle x_0^2\,\delta\lambda^2\rangle\,t\,t_k\right] \qquad (20)$$
The representers (20) are then inserted into (18) to obtain the coefficients for the solution, x(t), which is finally obtained through (17). This solution is hereafter referred to as the full weak constraint.
The same derivation is now repeated with the model error weights given by the short-time approximation in (14). By substituting δμ = (∂f/∂λ)δλ into (14), we obtain
$$p^2(t',t'') \approx \langle x_0^2\,\delta\lambda^2\rangle\,t'\,t'' \qquad (21)$$
Once (21) is inserted into (B8) and (B9) the representer solutions become
$$r_k(t) = \sigma_b^2\,e^{\lambda(t+t_k)} + \langle x_0^2\,\delta\lambda^2\rangle\,t\,t_k \qquad (22)$$
The representer functions are then introduced into (18) and (17) to obtain the solution, x(t), during the reference period T. The solution based on (22) is hereafter referred to as the short-time weak constraint.
To complete our comparative analysis the strong-constraint solution is derived. In this case, we seek a solution that follows the dynamics exactly while being a best fit to the data and the background field. We may proceed along the same lines as above; this would imply writing down a cost function without the term of the model discrepancy while the dynamics is appended as a constraint through a proper Lagrange multiplier field. Alternatively, by invoking the continuity of the solution (20), or (22), with respect to the model error weights, the strong-constraint solution is obtained in the limit δλ → 0 and reads
$$r_k(t) = \sigma_b^2\,e^{\lambda(t+t_k)} \qquad (23)$$
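For this linear-Gaussian scalar problem the three variants can be sketched in a few lines of Python. The fragment below is our own illustration, built on the standard identification of the representer rk(t) with the prior covariance between x(t) and x(tk); the function names and parameter values are assumptions, and `me_amp` plays the role of the model error amplitude 〈x0²δλ²〉.

```python
import numpy as np

def representer_analysis(t_grid, t_obs, y_obs, xb, lam, sig_b2, sig_o2, me_amp, kernel):
    """Scalar representer solution following Eqs. (17)-(18).

    kernel selects the model error correlation p2(t', t''):
      'full'   : me_amp * t' * t'' * exp(lam * (t' + t''))
      'short'  : me_amp * t' * t''
      'strong' : 0  (perfect model)
    me_amp plays the role of <x0^2 dlam^2>; all names here are illustrative.
    """
    def p2(tp, tpp):
        if kernel == 'full':
            return me_amp * tp * tpp * np.exp(lam * (tp + tpp))
        if kernel == 'short':
            return me_amp * tp * tpp
        return 0.0

    def r(t, tk):
        # Representer r_k(t) taken as the prior covariance between x(t) and x(t_k):
        # propagated background variance plus the model error correlation.
        return sig_b2 * np.exp(lam * (t + tk)) + p2(t, tk)

    t_obs = np.asarray(t_obs, float)
    xf = xb * np.exp(lam * t_grid)             # first-guess trajectory
    d = y_obs - xb * np.exp(lam * t_obs)       # innovation vector
    S = np.array([[r(ti, tj) for tj in t_obs] for ti in t_obs])
    beta = np.linalg.solve(S + sig_o2 * np.eye(len(t_obs)), d)   # Eq. (18)
    Rmat = np.array([[r(t, tk) for tk in t_obs] for t in t_grid])
    return xf + Rmat @ beta                    # Eq. (17)
```

With `kernel='strong'` and accurate observations, the analysis recovers the true initial condition even from a biased background, which is the behavior exploited in the experiments below.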

The three solutions based respectively on (20), (22), and (23) are compared in a set of experiments. Simulated noisy observations sampled from a Gaussian distribution around a solution of (15) are distributed every 5 time units over an assimilation interval T = 50 time units. Different regimes of motion are considered by varying the true parameter λtr.

The results displayed in the sequel are averages over 10³ realizations of the initial condition and of the parametric model error, around x0 = 2 and λtr, respectively. The initial conditions are sampled from a Gaussian distribution with standard deviation σb = 1, while the model parameter, λ, is sampled from a Gaussian distribution with standard deviation |Δλ| = |λtr − λ|; the observation error standard deviation is σo = 0.5.

Figure 1 shows the mean quadratic estimation error, as a function of time, during the assimilation period T. The different panels refer to experiments with different values of the true parameter, 0.01 ≤ λtr ≤ 0.03, while the parametric error relative to the true value is set to Δλ/λtr = 50%. The three lines refer to the full weak-constraint (dashed line), the short-time-approximated weak-constraint (continuous line), and the strong-constraint (dotted line) solutions, respectively. The bottom-right panel summarizes the results and shows the mean error, averaged also in time, as a function of λtr for the weak-constraint solutions only.

As expected, the full weak-constraint solution performs systematically better than any other approach. The variational solution employing the short-time approximation for the model error clearly outperforms the strong-constraint one. The last plot displays the increase of the total error of this solution as a function of λtr. To understand this dependence, one must recall that the duration of the short-time regime in a chaotic system is bounded by the inverse of the largest Lyapunov exponent (Nicolis 2003). For the scalar unstable case considered here, this role is played by the parameter λtr. The increase of the total error of the short-time-approximated weak constraint as a function of λtr reflects the progressive decrease of the accuracy of the short-time approximation for this fixed data assimilation interval, T.

The accuracy of the short-time-approximated weak constraint in relation to the level of instability of the dynamics is further summarized in Fig. 2, where the difference between the mean quadratic error of this solution and the full weak-constraint one is plotted as a function of the nondimensional parameter λtrT, with 10 ≤ T ≤ 60 and 0.0100 ≤ λtr ≤ 0.0275. In all the experiments Δλ/λtr = 50%. Remarkably, all curves are superimposed, a clear indication that the accuracy of the analysis depends essentially on the product of the instability of the system and the length of the data assimilation interval. This feature is of course strongly related to the fact that the discrepancy of the short-time approximation in (14) is larger for large λtr and long times.

We now turn to the analysis of the effect of the initial condition error on the weak-constraint solutions (20) and (22) in Fig. 3. We focus here on a setting with only one perfect observation in the middle of the assimilation period, at time t = 25. The panels refer to experiments with different parametric model error and show the mean quadratic error, averaged over a sample of 10³ initial condition errors and over the assimilation period T, as a function of the standard deviation of the initial condition error, σb. In all the experiments, λtr is fixed to 0.0225. For the smallest parametric model errors (top panels, Δλ/λtr ≤ 15%), the estimation error of both solutions monotonically increases with the initial condition error, until a common plateau is reached. Note that the full weak constraint, with a perfect initial condition (σb = 0), is able to keep the average error close to zero. This is a remarkable performance considering that only one observation is available within the assimilation period. The figure further indicates that the difference between the full and the approximated solutions decreases monotonically as the initial condition error increases, and it is reduced to almost zero for sufficiently large errors. This clearly reflects the relative impact of initial condition and model errors on the quality of the assimilation. When the initial condition error is significantly larger than the model error, the accuracy of the state estimate is not improved by employing the more costly (and accurate) full weak-constraint algorithm. Note also that the error plateau is reached for larger values of σb as Δλ/λtr increases.

Finally, in Fig. 4, the possibility of improving the quality of the solutions by inflating the model error covariance matrix is investigated. The aim is to understand whether the model error is under- or overestimated in the weak-constraint approaches and whether an improvement can be obtained by inflating or reducing the model error correlation term. The original network of 10 noisy observations, every 5 time units, is now used with σo = 0.5, σb = 1, and λtr = 0.01. The amplitude of the model error term, 〈x0²δλ²〉, is now multiplied by a scalar factor 0.1 ≤ α ≤ 20 and then used in both the weak-constraint assimilations. The panels show the mean quadratic error as a function of α, for different parametric errors Δλ/λtr.

As a first remark, we observe that by increasing the model parametric error, the analysis error of all the solutions increases accordingly. In the smallest parametric error case, Δλ/λtr = 10%, for α close to 1, the solutions are very similar to each other. The full and short-time-approximated weak constraints (dashed and continuous lines, respectively) only slightly improve over the strong constraint. By increasing α both solutions degrade rapidly, at a rate that is faster for the full weak-constraint solution, indicating its high sensitivity to the model error correlation amplitude. Note that the growth of the error for large α is found for all the parametric errors considered, and that in the cases of small Δλ/λtr the weak-constraint solutions with large α perform even worse than the strong-constraint one. This means that when the model error is not dominant, assuming the model is perfect is better than incorrectly estimating the corresponding correlations.

It is interesting to remark on the existence of a minimum in the error curves, which deepens as the parametric error increases. This minimum is systematically located at α = 1 for the full weak-constraint case, indicating that the estimate of the model error correlation based on (13) is adequate. On the other hand, for the short-time-approximated case the minimum is shifted to 3 < α < 4. This suggests that, as expected, the estimate of the actual model error correlation based on (14) is an underestimation, and that a better performance can indeed be obtained by inflating it. Note furthermore that the level of the minima of the short-time-approximated weak constraint is very close to that of the full weak constraint. This is a very encouraging result from the perspective of a realistic application, where (13) cannot be solved.

b. Nonlinear system: Solution with descent method

Let us now turn to a more realistic dynamics. We adopt here the widely used Lorenz three-variable convective system (Lorenz 1963), as a prototype of nonlinear chaotic dynamics:
\[ \frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z \qquad (24) \]
With the canonical choice for its parameters, σ = 10, ρ = 28, and β = 8/3, the system behaves chaotically with a leading Lyapunov exponent equal to 0.90 per nondimensional time unit. We perform a set of observing system simulation experiments in which a solution of (24), with the canonical parameters, provides the reference dynamics. The estimation is based on observations of the entire system state (i.e., the observation operator is the 3 × 3 identity matrix), distributed within a given assimilation interval and affected by an uncorrelated Gaussian error with covariance 𝗥. The model dynamics is given by the Lorenz system (24) with a modified set of parameters. The numerical integrations are carried out with a second-order Runge–Kutta scheme with a time step of 0.01 nondimensional time units.
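For concreteness, the twin-experiment setup described above can be sketched as follows. This is a minimal illustration: the function names and the 15% perturbation of ρ are ours, and we assume the midpoint variant of the second-order Runge–Kutta scheme.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk2_step(state, dt, **params):
    """One second-order (midpoint) Runge-Kutta step."""
    k1 = lorenz63(state, **params)
    return state + dt * lorenz63(state + 0.5 * dt * k1, **params)

def integrate(state0, n_steps, dt=0.01, **params):
    """Integrate the system for n_steps steps of length dt."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = state0
    for i in range(n_steps):
        traj[i + 1] = rk2_step(traj[i], dt, **params)
    return traj

# truth with canonical parameters; model with an illustrative 15% error on rho
x0 = np.array([1.0, 1.0, 1.0])
truth = integrate(x0, 800)
model = integrate(x0, 800, rho=28.0 * 1.15)
```

Started from the same initial state, the two trajectories separate quickly, which is the model error signal the assimilation has to contend with.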
The variational cost function can be written, according to (8), as
\[ J(\mathbf{x}_0,\ldots,\mathbf{x}_N) = \frac{1}{2}(\mathbf{x}_0-\mathbf{x}^b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}^b) + \frac{1}{2}\sum_{i=0}^{N}[\mathbf{y}_i^o-\mathcal{H}(\mathbf{x}_i)]^{\mathrm{T}}\mathbf{R}^{-1}[\mathbf{y}_i^o-\mathcal{H}(\mathbf{x}_i)] + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}[\mathbf{x}_i-\mathcal{M}(\mathbf{x}_{i-1})]^{\mathrm{T}}(\mathbf{P}^{-1})_{i,j}[\mathbf{x}_j-\mathcal{M}(\mathbf{x}_{j-1})] \qquad (25) \]
where, as in section 2, ℳ stands for the nonlinear model propagator, and 𝗕 and 𝗣 stand for the background error covariance matrix and the model error covariance and correlation matrices, respectively. We have assumed the assimilation interval T to be discretized into N time steps of equal length Δt.
The control variable for the minimization is the series of model states xi at each time step in the interval T. The minimizing solution is obtained using an iterative descent method that makes use of the gradient of the cost function with respect to xi, 0 ≤ i ≤ N. The latter reads
\[ \nabla_{\mathbf{x}_i}J = -\mathbf{H}^{\mathrm{T}}\mathbf{R}^{-1}[\mathbf{y}_i^o-\mathcal{H}(\mathbf{x}_i)] + \Big[\sum_{j=1}^{N}(\mathbf{P}^{-1})_{i,j}\,\boldsymbol{\eta}_j\Big] - \mathbf{M}_i^{\mathrm{T}}\Big[\sum_{j=1}^{N}(\mathbf{P}^{-1})_{i+1,j}\,\boldsymbol{\eta}_j\Big], \qquad \boldsymbol{\eta}_j=\mathbf{x}_j-\mathcal{M}(\mathbf{x}_{j-1}) \qquad (26) \]
for 1 ≤ i ≤ N − 1; at i = 0 the first bracketed sum is replaced by the background term 𝗕⁻¹(x0 − xb), and at i = N the last, adjoint term is absent.
The matrices 𝗛 and 𝗠 represent the linearized observation operator and model propagator, respectively, while 𝗠T is the adjoint operator. The gradient expression (26) is derived assuming that observations are available at each time step ti, 0 ≤ i ≤ N. In the usual case of sparse observations, the term proportional to the innovation disappears from the gradient with respect to the state vector at times when observations are not present. Note furthermore that assuming the model error is correlated in time leads to the summation in square brackets in (26), which accounts for the full contribution over the entire assimilation interval. The situation is drastically different if the model error is treated as an uncorrelated noise. In that case the cost function takes into account the model error covariances only, the model error term reduces to a single summation over the time steps weighted by the inverse of the model error covariances, and the summation over all time steps disappears from the corresponding gradient (see, e.g., Trémolet 2006).
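The structural difference between the time-correlated and uncorrelated treatments of the model error penalty can be illustrated with a toy computation. The temporal correlation matrix c and the spatial covariance C below are illustrative placeholders of our own, not the matrices used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 4                                   # state dimension, time steps

# model-error increments eta_i = x_i - M(x_{i-1}); random placeholders here
eta = rng.normal(size=(N, n))

# a correlated model-error matrix: P_{i,j} = c_{|i-j|} * C, with a toy
# temporal correlation c and spatial covariance C (both illustrative)
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                   # symmetric positive definite
c = 0.6 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
P = np.kron(c, C)                             # (N*n) x (N*n) block matrix

# time-correlated penalty: double sum over (i, j), i.e., the full block inverse
J_corr = 0.5 * eta.ravel() @ np.linalg.inv(P) @ eta.ravel()

# uncorrelated-noise assumption: only the diagonal blocks P_{i,i} survive,
# and the penalty collapses to a single sum over the time steps
J_diag = sum(0.5 * eta[i] @ np.linalg.solve(c[i, i] * C, eta[i])
             for i in range(N))
```

When the temporal correlation is the identity, the block inverse is block diagonal and the two penalties coincide; with off-diagonal temporal correlations they differ, which is precisely the contribution the uncorrelated-noise formulation discards.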
The cost function in (25) and its gradient in (26) define the discrete weak-constraint variational problem. We study here its performance when the model error correlations 𝗣i,j are estimated using the short-time approximation (14). In this discrete case it reads
\[ \mathbf{P}_{i,j} \approx (i\,\Delta t)(j\,\Delta t)\,\langle \delta\boldsymbol{\mu}_0\,\delta\boldsymbol{\mu}_0^{\mathrm{T}}\rangle \]
The invariant term 〈δμ0δμ0T〉, here a 3 × 3 symmetric matrix, is assumed to be known a priori and is estimated by accumulating statistics on the model attractor while randomly perturbing each of the three parameters σ, ρ, and β about their canonical values with a standard deviation |Δλ|. We compare the short-time weak constraint with the strong-constraint variational assimilation. In the latter case, the model error term disappears from the cost function (25) and the gradient is computed with respect to the initial condition only (Talagrand and Courtier 1987).
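A Monte Carlo estimate of the invariant term and of the resulting short-time correlations can be sketched as follows. The relative 15% parameter spread and the crude attractor sampling are our illustrative assumptions, not the paper's settings:

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    """Tendency of the Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def model_error_stats(attractor_states, dlam_rel=0.15, n_samples=2000, seed=0):
    """Monte Carlo estimate of the invariant term <dmu0 dmu0^T>: the tendency
    mismatch between perturbed-parameter model and truth, averaged over the
    attractor and over random parameter perturbations (illustrative setup)."""
    rng = np.random.default_rng(seed)
    true_p = np.array([10.0, 28.0, 8.0 / 3.0])
    C = np.zeros((3, 3))
    for _ in range(n_samples):
        x = attractor_states[rng.integers(len(attractor_states))]
        pert_p = true_p + rng.normal(0.0, dlam_rel * true_p)
        dmu = lorenz_rhs(x, *pert_p) - lorenz_rhs(x, *true_p)
        C += np.outer(dmu, dmu)
    return C / n_samples

def short_time_P(C, i, j, dt=0.01):
    """Discrete short-time approximation P_{i,j} = (i dt)(j dt) <dmu0 dmu0^T>."""
    return (i * dt) * (j * dt) * C

# crude attractor sample: integrate the true system and discard a spin-up
p = (10.0, 28.0, 8.0 / 3.0)
dt, x, states = 0.01, np.array([1.0, 1.0, 1.0]), []
for k in range(5000):
    k1 = lorenz_rhs(x, *p)
    x = x + dt * lorenz_rhs(x + 0.5 * dt * k1, *p)  # midpoint RK2 step
    if k > 1000:
        states.append(x.copy())

C = model_error_stats(np.array(states))
P_2_3 = short_time_P(C, 2, 3)  # correlation between time steps i=2 and j=3
```

Once C is computed offline, every block of the correlation matrix follows by a simple scaling, which is what makes the short-time approximation cheap.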

Observing system simulation experiments are conducted with assimilation intervals of 2, 4, 8, or 16 time steps and with observations every 2, 4, 8, or 16 time steps. The simulated measurements of the three components of the model state are mutually uncorrelated and are affected by a Gaussian noise whose standard deviation is set to 5% of the system's natural variability. The results are averaged over a sample of 50 initial conditions and parametric model errors sampled from a Gaussian distribution with standard deviation |Δλ|. Each simulation lasts 240 time steps, which corresponds to 15 assimilation cycles for the longest interval, T = 16 time steps, and up to 120 cycles for T = 2 time steps. The background error covariance matrix is set to be diagonal, with entries equal to the initial condition error variance, 0.01% of the system climate variance. A more refined choice for 𝗕 would certainly have a positive impact on the algorithms, but this aspect is not our central concern here, and the choice adopted is not expected to influence the relative performance of the assimilation schemes.

Figure 5 shows the mean quadratic estimation error as a function of the observation frequency (i.e., the number of time steps between observations, Δtobs) and for the different assimilation intervals: 2 (crosses), 4 (squares), 8 (triangles), and 16 (circles) time steps. The left panels refer to solutions of the strong-constraint variational assimilation, while the approximated weak-constraint solutions are shown in the right panels. The parametric errors are indicated in the text boxes, and all errors are normalized with respect to the system's natural variability. As in the linear example of the previous section, when the parametric error is small (now Δλ/λtr = 5%) the strong-constraint performance is not significantly improved by using the short-time weak constraint. Conversely, as larger parametric errors are considered, the improvement over the strong-constraint approach becomes substantial. In addition, by reducing the observation frequency for fixed T, the strong-constraint solution deteriorates at a slower rate than the weak-constraint one.

Note that the accuracy of the weak-constraint algorithm should be affected by the limited validity of the short-time regime on which the estimate of the model error correlations is based. Following Nicolis (2003), we estimate the duration of the short-time regime in this system to be approximately 0.07 nondimensional time units (the inverse of the Lyapunov exponent of the true dynamics that is largest in absolute value, −14.57). The rapid growth of the error for large assimilation intervals in Fig. 5 reflects this fact.

In Fig. 6 we show the time series of the three model variables for the true solution (continuous line), the strong-constraint solution (dotted line), and the approximated weak constraint (dashed line); the observations are displayed with cross marks. The solutions refer to a period of 1200 time steps; the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%. The tracking of the unknown true evolution provided by the short-time weak-constraint assimilation is very efficient, unlike that of the strong constraint. This is particularly evident at the peaks and troughs of the signal and at the changes of the system regime, the latter being characterized by a change in the sign of the x field when the trajectory switches between the wings of the attractor. By reducing the frequency of the observations to Δtobs = 8 time steps, a decrease of the overall quality of the short-time weak-constraint assimilation is observed (Fig. 7). Larger deviations from the truth are only observed in correspondence with the changes of regime.

The effect of a further degradation of the observational network is investigated in Fig. 8, where the observations are reduced to a single component of the system's state. From left to right, the panels refer to experiments with measurements of x, y, and z, respectively, while from top to bottom the panels show the time series of each component. As in Fig. 6, the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%, while the experiments last 600 time steps. In comparison with the results in Fig. 6, we observe a general degradation of the algorithm's skill in tracking the true dynamics. By time step 300, the scheme commits small errors in the estimate of both the phase and the amplitude of the true signal, but it still systematically outperforms the strong-constraint solution.

The robustness of the proposed approach is finally compared with the uncorrelated noise treatment of the model error. This assumption has often been made in previous applications; it is particularly attractive because it significantly reduces the computational cost of the minimization procedure. Although more refined choices have recently been described in the literature (Trémolet 2007), model error covariances have often been set proportional to the background error covariance (e.g., Zupanski 1997; Vidard et al. 2004). We make the same choice here and compute the model error covariances as 𝗣 = α𝗕. Figure 9 shows the mean quadratic error as a function of the tuning parameter α. The panels refer to different parametric model errors. The results are averaged over the same sample of 50 initial conditions and parametric model errors, with Δtobs = 2 and T = 0.08. The errors of the short-time weak-constraint (dashed line) and strong-constraint (dotted line) 4DVar are also displayed for reference. The solid line with open squares refers to an experiment in which the model error is treated as an uncorrelated noise but the spatial covariances at observing times are estimated using the short-time approximation; the aim is to evaluate the relative impact of neglecting the time correlation and of using an incorrect spatial covariance.

The uncorrelated noise formulation (solid line without marks) never reaches the accuracy of the proposed short-time weak constraint. Note, furthermore, that for the smallest parametric error considered and for small α it performs even worse than the strong-constraint 4DVar, in which the model is assumed to be perfect. By further increasing α beyond α = 103 (not shown), the error reaches a plateau whose value is controlled by the observation error level. When the spatial covariance is estimated as in the short-time weak constraint, the performance is generally improved. Note, however, that for the smallest parametric error considered and large α, the estimate 𝗣 = α𝗕 gives better skill; in all other cases, the improvement at the best-possible α is only minor. This suggests that the degradation of the uncorrelated noise formulation relative to the short-time weak constraint is mainly a consequence of neglecting the time correlation, and only to a small extent of using an incorrect spatial covariance.

5. Conclusions

Recently a deterministic formulation of the model error dynamics has been introduced (Nicolis 2003, 2004; Nicolis et al. 2009). A number of distinctive features have been identified, such as the existence of a universal short-time quadratic evolution law for the mean-square model error evolution (Nicolis 2003). This approach has been exploited here in the context of variational assimilation as a natural extension of the analysis performed for the EKF in Carrassi et al. (2008). A short-time approximation for the model error correlations has been derived, and a short-time-approximated weak-constraint 4DVar has been formulated. The performance of this algorithm has been analyzed in the context of two different dynamical systems.

First, a linear unstable one-dimensional dynamics has been considered. The performance of the short-time weak-constraint 4DVar has been compared to a weak-constraint formulation based on the analytical equation for the model error correlations, and to the classic strong-constraint 4DVar. A dramatic increase of the quality of the analysis was obtained with the short-time weak constraint as compared with the strong constraint. The difference with the weak-constraint 4DVar employing the full model error correlation equations increases with the level of instability and similar performances are attained for relatively stable configurations of the dynamics. The system instability and the length of the assimilation period are the main factors controlling the accuracy of the short-time weak constraint. The maximum length of the assimilation period over which the short-time weak-constraint 4DVar gives accurate skill is inversely proportional to the level of instability of the dynamics. The amplitude of the initial condition error also modulates the accuracy of the short-time weak constraint with respect to its full formulation. The inflation of the model error correlations helps to compensate for the error underestimation affecting the short-time approximation and improves its performance up to a level of accuracy equivalent to the full weak constraint.

The analysis has then been extended to a nonlinear chaotic dynamical system. Analogously, the short-time weak-constraint 4DVar improves substantially over the strong-constraint one. Furthermore, it is able to closely track the unknown true dynamics, which, for the dynamical system considered, is characterized by changes in the regimes that are not captured by the strong-constraint solution. In this nonlinear context we performed additional experiments with a weak-constraint 4DVar employing the uncorrelated noise assumption for the model error and, as often done in practice, the model error covariance was assumed to be proportional to the background-error covariance matrix. The analysis reveals that the 4DVar employing the uncorrelated noise assumption never attains the level of accuracy of the proposed short-time weak-constraint case.

The results obtained in the present idealized contexts suggest that there is potential in applying this deterministic formulation to more realistic models and/or observational setups. The size of a typical geophysical system of practical relevance is the main obstacle to the application of the present approach. Nevertheless, the increase of computational power on the one hand, and the development of advanced techniques for the optimal choice of the control space representation (Bocquet 2009) on the other, can make its application possible in realistic contexts. Follow-up studies are needed to reveal more specific aspects of the practical implementation that could not be highlighted in the experimental setting used here; these will be addressed in future works.

Acknowledgments

We thank C. Nicolis and F. Uboldi for their careful reading of the manuscript and the three anonymous reviewers and the editor, Herschel Mitchell, for their insightful suggestions and comments. This work is supported by the Belgian Federal Science Policy Program under Contract MO/34/017.

REFERENCES

  • Bennett, A. F., 1992: Inverse Methods in Physical Oceanography. Cambridge University Press, 346 pp.
  • Bocquet, M., 2009: Towards optimal choices of control space representation for geophysical data assimilation. Mon. Wea. Rev., 137, 2331–2348.
  • Carrassi, A., S. Vannitsem, and C. Nicolis, 2008: Model error and sequential data assimilation: A deterministic formulation. Quart. J. Roy. Meteor. Soc., 134, 1297–1313.
  • Dee, D. P., 1995: Online estimation of error covariance parameters for atmospheric data assimilation. Mon. Wea. Rev., 123, 1128–1145.
  • Derber, J., 1989: A variational continuous assimilation technique. Mon. Wea. Rev., 117, 2437–2446.
  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasigeostrophic model using Monte-Carlo methods to forecast error statistics. J. Geophys. Res., 99 (C5), 10143–10162.
  • Fujita, T., D. J. Stensrud, and D. C. Dowell, 2007: Surface data assimilation using an ensemble Kalman filter approach with initial condition and model physics uncertainties. Mon. Wea. Rev., 135, 1846–1868.
  • Ghil, M., S. E. Cohn, J. Tavantzis, K. Bube, and E. Isaacson, 1981: Application of estimation theory to numerical weather prediction. Dynamic Meteorology: Data Assimilation Methods, L. Bengtsson, M. Ghil, and E. Källén, Eds., Springer, 139–224.
  • Hamill, T. M., and J. S. Whitaker, 2005: Accounting for the error due to unresolved scales in ensemble data assimilation: A comparison of different approaches. Mon. Wea. Rev., 133, 3132–3147.
  • Houtekamer, P. L., H. L. Mitchell, and X. Deng, 2009: Model error representation in an operational ensemble Kalman filter. Mon. Wea. Rev., 137, 2126–2143.
  • Hunt, B. R., E. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112–126.
  • Jazwinski, A. H., 1970: Stochastic Processes and Filtering Theory. Academic Press, 376 pp.
  • Kalman, R., 1960: A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng., 82, 35–45.
  • Le Dimet, F. X., and O. Talagrand, 1986: Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus, 38A, 97–110.
  • Lewis, J. M., and J. C. Derber, 1985: The use of adjoint equations to solve a variational adjustment problem with advective constraint. Tellus, 37A, 309–322.
  • Li, H., E. Kalnay, T. Miyoshi, and C. M. Danforth, 2009: Accounting for model errors in ensemble data assimilation. Mon. Wea. Rev., 137, 3407–3419.
  • Lorenz, E. N., 1963: Deterministic non-periodic flows. J. Atmos. Sci., 20, 130–141.
  • Meng, Z., and F. Zhang, 2007: Test of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part II: Imperfect model experiments. Mon. Wea. Rev., 135, 1403–1423.
  • Nicolis, C., 2003: Dynamics of model error: Some generic features. J. Atmos. Sci., 60, 2208–2218.
  • Nicolis, C., 2004: Dynamics of model error: The role of unresolved scales revisited. J. Atmos. Sci., 61, 1740–1753.
  • Nicolis, C., R. Perdigao, and S. Vannitsem, 2009: Dynamics of prediction errors under the combined effect of initial condition and model errors. J. Atmos. Sci., 66, 766–778.
  • Rabier, F., H. Järvinen, E. Klinker, J-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. Quart. J. Roy. Meteor. Soc., 126, 1143–1170.
  • Reynolds, C., P. J. Webster, and E. Kalnay, 1994: Random error growth in NMC's global forecasts. Mon. Wea. Rev., 122, 1281–1305.
  • Sasaki, Y., 1970: Some basic formalism in numerical variational analysis. Mon. Wea. Rev., 98, 875–883.
  • Talagrand, O., and P. Courtier, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Quart. J. Roy. Meteor. Soc., 113, 1311–1328.
  • Trémolet, Y., 2006: Accounting for an imperfect model in 4D-Var. Quart. J. Roy. Meteor. Soc., 132, 2483–2504.
  • Trémolet, Y., 2007: Model error estimation in 4D-Var. Quart. J. Roy. Meteor. Soc., 133, 1267–1280.
  • Uboldi, F., and M. Kamachi, 2000: Time-space weak-constraint data assimilation for nonlinear models. Tellus, 52A, 412–421.
  • Vannitsem, S., and Z. Toth, 2002: Short-term dynamics of model errors. J. Atmos. Sci., 59, 2594–2604.
  • Vidard, P. A., A. Piacentini, and F-X. Le Dimet, 2004: Variational data analysis with control of the forecast bias. Tellus, 56, 177–188.
  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev., 125, 2274–2292.

APPENDIX A

Short-Time Evolution of the Model Error Covariance

Equation (13) gives the model error correlation matrix as a function of the two variables t′ and t″. We proceed with a Taylor expansion around (t′, t″) = (t0, t0):
\[ \mathbf{P}(t',t'') = \mathbf{P}\big|_{0} + (t'-t_0)\,\partial_{t'}\mathbf{P}\big|_{0} + (t''-t_0)\,\partial_{t''}\mathbf{P}\big|_{0} + \tfrac{1}{2}(t'-t_0)^2\,\partial^2_{t'}\mathbf{P}\big|_{0} + (t'-t_0)(t''-t_0)\,\partial^2_{t't''}\mathbf{P}\big|_{0} + \tfrac{1}{2}(t''-t_0)^2\,\partial^2_{t''}\mathbf{P}\big|_{0} + \cdots \qquad (\mathrm{A1}) \]
where the subscript 0 denotes evaluation at (t′, t″) = (t0, t0).
The first time derivatives of 𝗣(t′, t″) read
\[ \partial_{t'}\mathbf{P}(t',t'') = \mathbf{J}(t')\,\mathbf{P}(t',t'') + \langle \delta\boldsymbol{\mu}(t')\,\delta\mathbf{x}^{\mathrm{T}}(t'')\rangle \qquad (\mathrm{A2}) \]
and
\[ \partial_{t''}\mathbf{P}(t',t'') = \mathbf{P}(t',t'')\,\mathbf{J}^{\mathrm{T}}(t'') + \langle \delta\mathbf{x}(t')\,\delta\boldsymbol{\mu}^{\mathrm{T}}(t'')\rangle \qquad (\mathrm{A3}) \]
where we have made use of the following identity:
\[ \frac{d}{dt}\,\delta\mathbf{x}(t) = \mathbf{J}(t)\,\delta\mathbf{x}(t) + \delta\boldsymbol{\mu}(t) \qquad (\mathrm{A4}) \]
with 𝗝 representing the Jacobian matrix associated with (1).
For the second time derivative we obtain
\[ \partial^2_{t'}\mathbf{P}(t',t'') = \dot{\mathbf{J}}(t')\,\mathbf{P}(t',t'') + \mathbf{J}(t')\,\partial_{t'}\mathbf{P}(t',t'') + \Big\langle \frac{d\,\delta\boldsymbol{\mu}(t')}{dt'}\,\delta\mathbf{x}^{\mathrm{T}}(t'')\Big\rangle \qquad (\mathrm{A5}) \]
\[ \partial^2_{t't''}\mathbf{P}(t',t'') = \mathbf{J}(t')\,\mathbf{P}(t',t'')\,\mathbf{J}^{\mathrm{T}}(t'') + \mathbf{J}(t')\langle \delta\mathbf{x}(t')\,\delta\boldsymbol{\mu}^{\mathrm{T}}(t'')\rangle + \langle \delta\boldsymbol{\mu}(t')\,\delta\mathbf{x}^{\mathrm{T}}(t'')\rangle\,\mathbf{J}^{\mathrm{T}}(t'') + \langle \delta\boldsymbol{\mu}(t')\,\delta\boldsymbol{\mu}^{\mathrm{T}}(t'')\rangle \qquad (\mathrm{A6}) \]
and
\[ \partial^2_{t''}\mathbf{P}(t',t'') = \mathbf{P}(t',t'')\,\dot{\mathbf{J}}^{\mathrm{T}}(t'') + \partial_{t''}\mathbf{P}(t',t'')\,\mathbf{J}^{\mathrm{T}}(t'') + \Big\langle \delta\mathbf{x}(t')\,\frac{d\,\delta\boldsymbol{\mu}^{\mathrm{T}}(t'')}{dt''}\Big\rangle \qquad (\mathrm{A7}) \]
After evaluating the time derivatives at (t′, t″) = (t0, t0), we see that the first nontrivial term is the quadratic one, so that the second-order Taylor expansion of (13) reads
\[ \mathbf{P}(t',t'') \approx (t'-t_0)(t''-t_0)\,\langle \delta\boldsymbol{\mu}_0\,\delta\boldsymbol{\mu}_0^{\mathrm{T}}\rangle \qquad (\mathrm{A8}) \]
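The quadratic short-time law expressed by this expansion can be checked numerically. The sketch below, with an illustrative 5% error on ρ only, verifies that the mean-square model error Q(t), averaged over attractor states, scales as t² at short times, so that doubling t multiplies Q by roughly 4:

```python
import numpy as np

def rhs(s, sigma, rho, beta):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk2(s, dt, p):
    """Midpoint second-order Runge-Kutta step."""
    k1 = rhs(s, *p)
    return s + dt * rhs(s + 0.5 * dt * k1, *p)

p_true = (10.0, 28.0, 8.0 / 3.0)
p_mod = (10.0, 28.0 * 1.05, 8.0 / 3.0)   # illustrative 5% error on rho
dt, nt = 0.002, 25                       # t <= 0.05, inside the short-time regime

s = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                    # spin-up onto the attractor
    s = rk2(s, 0.01, p_true)

n_ic, Q = 200, np.zeros(nt + 1)
for _ in range(n_ic):                    # average over attractor states
    xt = xm = s
    for n in range(nt):
        xt, xm = rk2(xt, dt, p_true), rk2(xm, dt, p_mod)
        Q[n + 1] += np.sum((xm - xt) ** 2)
    for _ in range(25):                  # move on to a decorrelated state
        s = rk2(s, 0.01, p_true)
Q /= n_ic

# quadratic law Q(t) ~ t^2 <|dmu0|^2>: doubling t should multiply Q by ~4
ratio = Q[10] / Q[5]
```

The ratio drifts away from 4 as t approaches the end of the short-time regime, which mirrors the loss of accuracy of the approximation for long assimilation intervals noted in section 4.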

APPENDIX B

Scalar System: Solution with Representers

Here we derive the solutions (17) and (18) using the technique of representers. We begin by evaluating the first variation of J(x):
\[ \delta J = \frac{1}{\sigma_b^2}\,[x(t_0)-x^b]\,\delta x(t_0) - \frac{1}{\sigma_o^2}\sum_{k=1}^{M}[y_k^o-x(t_k)]\,\delta x(t_k) + \int_{t_0}^{T}\!\!\int_{t_0}^{T}[\dot{x}(t')-\lambda x(t')]\,P^{-1}(t',t)\,[\delta\dot{x}(t)-\lambda\,\delta x(t)]\,dt'\,dt \qquad (\mathrm{B1}) \]
For convenience we introduce the variable
\[ q(t) = \int_{t_0}^{T} P^{-1}(t,t')\,[\dot{x}(t')-\lambda x(t')]\,dt' \qquad (\mathrm{B2}) \]
Using the condition for a local extremum of J(x), ∇xJ(x) = 0, and (B2), after some reordering we obtain the Euler–Lagrange equations:
\[ \dot{x}(t) = \lambda\,x(t) + \int_{t_0}^{T} P(t,t')\,q(t')\,dt' \qquad (\mathrm{B3}) \]
subject to x(t0) = xb + σb2q(t0), for the variable x(t), and
\[ \dot{q}(t) + \lambda\,q(t) = -\frac{1}{\sigma_o^2}\sum_{k=1}^{M}[y_k^o-x(t_k)]\,\delta(t-t_k) \qquad (\mathrm{B4}) \]
where q(T) = 0, for the variable q(t) (usually referred to as the adjoint field), while δ(ttk) is the Dirac delta function. In the derivation of (B3) and (B4) we have used the inverse of the model error correlations defined through
\[ \int_{t_0}^{T} P(t,s)\,P^{-1}(s,t')\,ds = \delta(t-t') \qquad (\mathrm{B5}) \]
The solution of (B3) represents the best fit to the observations and to the dynamical model according to the penalty function (16). However, (B3) is coupled to (B4) through the model error term, which appears as a forcing; conversely, the observation term, which depends on x(t), acts as a forcing in (B4).

As mentioned in section 2, the method of representers can be used to decouple and solve the Euler–Lagrange equations in the linear case (Bennett 1992, his section 5.3). The method is used here to find the minimizing solution of (16) and this application is briefly detailed in the following, but we refer to Bennett (1992, his section 5.3) for an exhaustive description of the approach in a general case.

A solution is sought in the following form:
\[ x(t) = x^f(t) + \sum_{k=1}^{M}\beta_k\,r_k(t) \qquad (\mathrm{B6}) \]
for the forward field x(t), and
\[ q(t) = \sum_{k=1}^{M}\beta_k\,a_k(t) \qquad (\mathrm{B7}) \]
for the adjoint field q(t). The 2M functions rk(t) and ak(t) are the representers and their adjoints, respectively, while the βk are M coefficients to be determined.
By inserting the definitions (B6) and (B7) into (B3) we obtain the following:
\[ \dot{r}_k(t) = \lambda\,r_k(t) + \int_{t_0}^{T} P(t,t')\,a_k(t')\,dt' \qquad (\mathrm{B8}) \]
which is subject to rk(t0) = σb2ak(t0), 1 ≤ k ≤ M. The coupling with x(t) in (B4) is removed by imposing that the adjoint representers satisfy
\[ \dot{a}_k(t) + \lambda\,a_k(t) = -\delta(t-t_k) \qquad (\mathrm{B9}) \]
which is subject to ak(T) = 0, 1 ≤ k ≤ M. In practice, the contribution of each measurement has been converted into a single impulse.
Equation (B9) can now be inserted into (B8) to obtain the M representer functions rk(t). As can be shown by substituting the expressions for rk(t) and ak(t) back into the Euler–Lagrange equations (B3) and (B4), the vector of coefficients has to be chosen so that (Bennett 1992, his section 5.3)
\[ (\mathbf{S} + \sigma_o^2\,\mathbf{I})\,\boldsymbol{\beta} = \mathbf{d} \qquad (\mathrm{B10}) \]
where d is the innovation vector, d = (y1o − xf(t1), … , yMo − xf(tM)), 𝗦 is the M × M matrix with entries (𝗦)i,j = ri(tj), and 𝗜 is the M × M identity matrix. The coefficients are then inserted in (B6) to obtain the final solution.
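Under simple assumptions (a scalar model ẋ = λx; the short-time correlation P(t, t′) = q t t′ with q = 〈δμ0²〉; illustrative values of λ, q, the observation times, and the observed values; and, for simplicity, a perfectly known initial condition, i.e., rk(t0) = 0), the representer construction can be sketched numerically:

```python
import numpy as np

# assumed setup: scalar model dx/dt = lam*x, short-time model error
# correlation P(t, s) = q*t*s, M observations with error variance so2;
# all numerical values are illustrative, not the paper's settings
lam, T, nt = 0.02, 50.0, 500
t = np.linspace(0.0, T, nt + 1)
dt = t[1] - t[0]
q, so2, xb = 1e-4, 0.25, 1.0

obs_t = np.array([10.0, 25.0, 40.0])      # observation times (hypothetical)
obs_idx = np.searchsorted(t, obs_t)
y = np.array([1.3, 1.9, 2.6])             # synthetic observed values
M = len(obs_t)

xf = xb * np.exp(lam * t)                 # background (free) trajectory
P = q * np.outer(t, t)                    # short-time correlation P(t, s)

r = np.zeros((M, nt + 1))
for k in range(M):
    # adjoint representer: a_k(s) = exp(lam*(t_k - s)) for s <= t_k, else 0
    a = np.where(t <= obs_t[k], np.exp(lam * (obs_t[k] - t)), 0.0)
    F = P @ a * dt                        # forcing: integral of P(t, s) a_k(s)
    for n in range(nt):                   # forward Euler for dr/dt = lam*r + F
        r[k, n + 1] = r[k, n] + dt * (lam * r[k, n] + F[n])

S = r[:, obs_idx]                         # (S)_{k,j} = r_k(t_j)
d = y - xf[obs_idx]                       # innovation vector
beta = np.linalg.solve(S + so2 * np.eye(M), d)
xhat = xf + beta @ r                      # analyzed trajectory
```

Only an M × M linear system has to be solved, regardless of the time discretization: this is the computational appeal of the representer decomposition.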

Fig. 1.

Mean quadratic estimation error as a function of time for variational assimilation with the system in (15). Experiments with (top left to bottom left) λtr = 0.01–0.03 in increments of 0.0025. (bottom right) The mean quadratic error, for the weak-constraint solutions only, averaged also over the assimilation interval T as a function of λtr. Strong-constraint solution (dotted line), full weak-constraint solution (dashed line), and short-time approximated weak-constraint solution (solid line).

Citation: Monthly Weather Review 138, 9; 10.1175/2010MWR3192.1

Fig. 2.

Difference between the mean quadratic errors of the short-time-approximated and the full weak-constraint solutions, for the system in (15), as a function of the assimilation period T and of λtr, for 10 ≤ T ≤ 60 and λtr = 0.0100–0.0275. In all the experiments, Δλ/λtr = 50%.


Fig. 3.

Mean quadratic estimation error, averaged over the entire assimilation period, as a function of the standard deviation of the initial condition error σb, for the system in (15). Experiments with (top left to bottom right) Δλ/λtr = 5%–30% in increments of 5%. Full weak-constraint solution (dashed line), short-time-approximated weak-constraint solution (solid line), and their error difference (dotted line).


Fig. 4.

As in Fig. 3, but as a function of the tuning parameter α multiplying the model error covariance matrix, for the system in (15). Strong-constraint solution (dotted line), full weak-constraint solution (dashed line), and short-time-approximated weak-constraint solution (solid line).


Fig. 5.

Mean quadratic estimation error as a function of the observation frequency Δtobs for variational assimilation in the Lorenz system in (24). Experiments with (top to bottom) Δλ/λtr = 5%–20% in increments of 5% for different assimilation intervals: T = 2 (crosses), T = 4 (squares), T = 8 (triangles), and T = 16 (circles) time steps. (left) Strong-constraint solution and (right) short-time-approximated weak-constraint solution.


Fig. 6.

Time series of the model state components for the system in (24): (top) x, (middle) y, and (bottom) z. The solutions refer to a period of 1200 time steps, the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%. True evolution (solid lines), observations (crosses), strong-constraint solution (dotted lines), and short-time-approximated weak-constraint solution (dashed lines).


Fig. 7.

As in Fig. 6, but T = 8 time steps and Δtobs = 8 time steps.


Fig. 8.

Time series of the model state components, for the system in (24): (top) x, (middle) y, and (bottom) z. The solutions refer to a period of 600 time steps, the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%. True evolution (solid lines), observations (crosses), strong-constraint solution (dotted lines), and short-time-approximated weak-constraint solution (dashed lines). Observations at (left) x, (middle) y, and (right) z.


Fig. 9.

Mean quadratic estimation error as a function of the tuning parameter α multiplying the model error covariance in the weak-constraint 4DVar with the uncorrelated noise assumption (see text for details). The dynamics is given by the system in (24). Experiments with (top left to bottom right) Δλ/λtr = 5%–20% in increments of 5%. Short-time-approximated weak-constraint 4DVar (dashed–dotted line), strong-constraint 4DVar (dotted line), uncorrelated noise weak-constraint 4DVar (continuous line), and uncorrelated noise weak-constraint 4DVar with spatial covariance as in the short-time-approximated weak constraint (continuous line with open squares).


Save
  • Bennett, A. F. , 1992: Inverse Methods in Physical Oceanography. Cambridge University Press, 346 pp.

  • Bocquet, M. , 2009: Towards optimal choices of control space representation for geophysical data assimilation. Mon. Wea. Rev., 137 , 23312348.

    • Search Google Scholar
    • Export Citation
  • Carrassi, A. , S. Vannitsem , and C. Nicolis , 2008: Model error and sequential data assimilation: A deterministic formulation. Quart. J. Roy. Meteor. Soc., 134 , 12971313.

    • Search Google Scholar
    • Export Citation
  • Dee, D. P. , 1995: Online estimation of error covariance parameters for atmospheric data assimilation. Mon. Wea. Rev., 123 , 11281145.

    • Search Google Scholar
    • Export Citation
  • Derber, J. , 1989: A variational continuous assimilation technique. Mon. Wea. Rev., 117 , 24372446.

  • Evensen, G. , 1994: Sequential data assimilation with a nonlinear quasigeostrophic model using Monte-Carlo methods to forecast error statistics. J. Geophys. Res., 99 , (C5). 1014310162.

    • Search Google Scholar
    • Export Citation
  • Fujita, T. , D. J. Stensrud , and D. C. Dowell , 2007: Surface data assimilation using an ensemble Kalman filter approach with initial condition and model physics uncertainties. Mon. Wea. Rev., 135 , 18461868.

    • Search Google Scholar
    • Export Citation
  • Ghil, M. , S. E. Cohn , J. Tavantzis , K. Bube , and E. Isaacson , 1981: Application of estimation theory to numerical weather prediction. Dynamic Meteorology: Data Assimilation Methods, L. Bengtsson, M. Ghil, and E. Källén, Eds., Springer, 139–224.

    • Search Google Scholar
    • Export Citation
  • Hamill, T. M. , and J. S. Whitaker , 2005: Accounting for the error due to unresolved scales in ensemble data assimilation: A comparison of different approaches. Mon. Wea. Rev., 133 , 31323147.

    • Search Google Scholar
    • Export Citation
  • Houtekamer, P. L. , H. L. Mitchell , and X. Deng , 2009: Model error representation in an operational ensemble Kalman filter. Mon. Wea. Rev., 137 , 21262143.

    • Search Google Scholar
    • Export Citation
  • Hunt, B. R. , E. Kostelich , and I. Szunyogh , 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230 , 112126.

    • Search Google Scholar
    • Export Citation
  • Jazwinski, A. H. , 1970: Stochastic Processes and Filtering Theory. Academic Press, 376 pp.

  • Kalman, R. , 1960: A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng., 82 , 3545.

  • Le Dimet, F. X. , and O. Talagrand , 1986: Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus, 38A , 97110.

    • Search Google Scholar
    • Export Citation
  • Lewis, J. M. , and J. C. Derber , 1985: The use of adjoint equations to solve a variational adjustment problem with advective constraint. Tellus, 37A , 309322.

  • Li, H., E. Kalnay, T. Miyoshi, and C. M. Danforth, 2009: Accounting for model errors in ensemble data assimilation. Mon. Wea. Rev., 137, 3407–3419.

  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.

  • Meng, Z., and F. Zhang, 2007: Test of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part II: Imperfect model experiments. Mon. Wea. Rev., 135, 1403–1423.

  • Nicolis, C., 2003: Dynamics of model error: Some generic features. J. Atmos. Sci., 60, 2208–2218.

  • Nicolis, C., 2004: Dynamics of model error: The role of unresolved scales revisited. J. Atmos. Sci., 61, 1740–1753.

  • Nicolis, C., R. Perdigao, and S. Vannitsem, 2009: Dynamics of prediction errors under the combined effect of initial condition and model errors. J. Atmos. Sci., 66, 766–778.

  • Rabier, F., H. Järvinen, E. Klinker, J-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. Quart. J. Roy. Meteor. Soc., 126, 1143–1170.

  • Reynolds, C., P. J. Webster, and E. Kalnay, 1994: Random error growth in NMC’s global forecasts. Mon. Wea. Rev., 122, 1281–1305.

  • Sasaki, Y., 1970: Some basic formalism in numerical variational analysis. Mon. Wea. Rev., 98, 875–883.

  • Talagrand, O., and P. Courtier, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Quart. J. Roy. Meteor. Soc., 113, 1311–1328.

  • Trémolet, Y., 2006: Accounting for an imperfect model in 4D-Var. Quart. J. Roy. Meteor. Soc., 132, 2483–2504.

  • Trémolet, Y., 2007: Model error estimation in 4D-Var. Quart. J. Roy. Meteor. Soc., 133, 1267–1280.

  • Uboldi, F., and M. Kamachi, 2000: Time-space weak-constraint data assimilation for nonlinear models. Tellus, 52A, 412–421.

  • Vannitsem, S., and Z. Toth, 2002: Short-term dynamics of model errors. J. Atmos. Sci., 59, 2594–2604.

  • Vidard, P. A., A. Piacentini, and F-X. Le Dimet, 2004: Variational data analysis with control of the forecast bias. Tellus, 56A, 177–188.

  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev., 125, 2274–2292.

  • Fig. 1.

    Mean quadratic estimation error as a function of time for variational assimilation with the system in (15). Experiments with (top left to bottom left) λtr = 0.01–0.03 in increments of 0.0025. (bottom right) The mean quadratic error, for the weak-constraint solutions only, averaged also over the assimilation interval T as a function of λtr. Strong-constraint solution (dotted line), full weak-constraint solution (dashed line), and short-time approximated weak-constraint solution (solid line).

  • Fig. 2.

    Difference between the mean quadratic error of the short-time-approximated and the full weak-constraint solution, for the system in (15), as a function of tr, for the assimilation period 10 ≤ T ≤ 60 and for values of λtr = 0.0100–0.0275. In all the experiments, Δλ/λtr = 50%.

  • Fig. 3.

    Mean quadratic estimation error, averaged over the entire assimilation period, as a function of the standard deviation of the initial condition error σb, for the system in (15). Experiments with (top left to bottom right) Δλ/λtr = 5%–30% in increments of 5%. Full weak-constraint solution (dashed line), short-time-approximated weak-constraint solution (solid line), and their error difference (dotted line).

  • Fig. 4.

    As in Fig. 3, but as a function of the tuning parameter α multiplying the model error covariance matrix, for the system in (15). Strong-constraint solution (dotted line), full weak-constraint solution (dashed line), and short-time-approximated weak-constraint solution (solid line).

  • Fig. 5.

    Mean quadratic estimation error as a function of the observation frequency Δtobs for variational assimilation in the Lorenz system in (24). Experiments with (top to bottom) Δλ/λtr = 5%–20% in increments of 5% for different assimilation intervals: T = 2 (crosses), T = 4 (squares), T = 8 (triangles), and T = 16 (circles) time steps. (left) Strong-constraint solution and (right) short-time-approximated weak-constraint solution.

  • Fig. 6.

    Time series of the model state components for the system in (24): (top) x, (middle) y, and (bottom) z. The solutions refer to a period of 1200 time steps, the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%. True evolution (solid lines), observations (crosses), strong-constraint solution (dotted lines), and short-time-approximated weak-constraint solution (dashed lines).

  • Fig. 7.

    As in Fig. 6, but T = 8 time steps and Δtobs = 8 time steps.

  • Fig. 8.

    Time series of the model state components, for the system in (24): (top) x, (middle) y, and (bottom) z. The solutions refer to a period of 600 time steps, the assimilation interval is T = 8 time steps, Δtobs = 4 time steps, and Δλ/λtr = 15%. True evolution (solid lines), observations (crosses), strong-constraint solution (dotted lines), and short-time-approximated weak-constraint solution (dashed lines). Observations at (left) x, (middle) y, and (right) z.

  • Fig. 9.

    Mean quadratic estimation error as a function of the tuning parameter α multiplying the model error covariance in the weak-constraint 4DVar with the uncorrelated noise assumption (see text for details). The dynamics is given by the system in (24). Experiments with (top left to bottom right) Δλ/λtr = 5%–20% in increments of 5%. Short-time-approximated weak-constraint 4DVar (dashed–dotted line), strong-constraint 4DVar (dotted line), uncorrelated noise weak-constraint 4DVar (continuous line), and uncorrelated noise weak-constraint 4DVar with spatial covariance as in the short-time-approximated weak-constraint (continuous line with open squares).
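The captions above repeatedly compare solutions through the mean quadratic estimation error for the Lorenz system in (24) with a mistuned parameter (Δλ/λtr). As a minimal sketch of how such an error measure arises, the following Python snippet integrates the classic Lorenz (1963) equations twice, once as the "truth" and once with the Rayleigh parameter perturbed by 15%, and averages the squared distance between the two trajectories over time. All numerical choices here (time step, initial state, treating the Rayleigh parameter r as the mistuned λ) are illustrative assumptions, not the paper's exact experimental setup, and no assimilation is performed.

```python
import numpy as np

def lorenz63(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # Classic Lorenz (1963) equations; r stands in for the mistuned
    # parameter lambda of the captions (an illustrative assumption).
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4_step(f, state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def mean_quadratic_error(truth, estimate):
    # Time average of the squared Euclidean distance between states.
    return np.mean(np.sum((truth - estimate) ** 2, axis=1))

dt, n_steps = 0.01, 1200
r_tr = 28.0  # "true" parameter value (lambda_tr analogue)
truth = np.empty((n_steps, 3))
model = np.empty((n_steps, 3))
truth[0] = model[0] = np.array([1.0, 1.0, 1.0])
for k in range(1, n_steps):
    truth[k] = rk4_step(lambda s: lorenz63(s, r=r_tr), truth[k - 1], dt)
    # Imperfect model: Delta lambda / lambda_tr = 15%.
    model[k] = rk4_step(lambda s: lorenz63(s, r=1.15 * r_tr), model[k - 1], dt)

err = mean_quadratic_error(truth, model)
```

Because the dynamics is chaotic, the two trajectories diverge and the mean quadratic error is strictly positive; an assimilation scheme that accounts for the model error should reduce this measure, which is what the figures quantify.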
