## Abstract

Temporally distributed deterministic and stochastic excitation of the tangent linear forecast system governing forecast error growth and the tangent linear observer system governing assimilation error growth is examined. The method used is to determine the optimal set of distributed deterministic and stochastic forcings of the forecast and observer systems over a chosen time interval. Distributed forcing of an unstable system addresses the effect of model error on forecast error in the presumably unstable forecast error system. Distributed forcing of a stable system addresses the effect on the assimilation of model error in the presumably stable data assimilation system viewed as a stable observer. In this study, model error refers both to extrinsic physical error forcing, such as that which arises from unresolved cumulus activity, and to intrinsic error sources arising from imperfections in the numerical model and in the physical parameterizations.

## 1. Introduction

A comprehensive understanding of the dynamics of linear systems is fundamental to physical science and, while linear systems arise in a wide variety of contexts, in this work we have in mind the particular examples of the dynamics of perturbations to the tangent linear forecast and data assimilation error systems. Among important problems in dynamical system theory is understanding the response of a linear system to forcing distributed in time. In the context of linear forecast error systems, temporally distributed forcing is used to account for the effect of model error on forecast predictability; while in the case of a data assimilation system it addresses the effect of model error on state identification for model initialization.

Because the forecast system is nonnormal as well as time dependent, its study requires the methods of generalized stability theory (GST; Farrell and Ioannou 1996a,b). These methods have heretofore been applied inter alia to the problem of cyclogenesis (Farrell 1982, 1989), the problem of transition to turbulence of boundary layer and laboratory channel flows (Farrell 1988a; Butler and Farrell 1992; Reddy and Henningson 1993), the maintenance of the turbulent state in laboratory shear flows (Farrell and Ioannou 1993b; Bamieh and Dahleh 2001; Jovanović and Bamieh 2001), the control of boundary layer and channel shear flow turbulence (Bewley and Liu 1998; Farrell and Ioannou 1998; Kim 2003; Högberg et al. 2003a,b), and in numerical weather prediction to determine the impact of uncertainties in the initial state on the forecast (Farrell 1988b; Lacarra and Talagrand 1988; Farrell 1990; Molteni and Palmer 1993; Buizza and Palmer 1995). However, as the initialization of forecast models is improved with the advent of new data sources and the introduction of variational assimilation methods, the medium range forecast will become increasingly affected by uncertainties resulting from incomplete physical parameterizations and numerical approximations in the forecast model, and by the necessarily misrepresented subgrid-scale chaotic processes, such as cumulus convection, that act as temporally stochastic distributed forcing of the forecast error system. These influences, referred to collectively as model error, conventionally appear as an external forcing in the forecast error system. Improving the understanding of model error, and specifically identifying the forcings that lead to the greatest forecast errors, is centrally important in predictability studies. In analogy with the optimal perturbations that lead to the greatest forecast error in the case of initial condition error, these continuous error sources will be called optimal distributed forcings.

Two distinct dynamical representations for model error will be considered. The first is deterministic distributed forcing and the second is stochastic distributed forcing. The underlying theory for the deterministic case is based on analysis of the dynamical system as a mapping from the space of input forcings to the space of output states (Dullerud and Paganini 2000). This theory enabled Glover (1984) to obtain the optimal truncation of a dynamical system by using a singular value decomposition of the map between past forcings and future responses of the system (cf. Zhou and Doyle 1998). The singular values of this map, called Hankel singular values, have recently been used to truncate the linearized fluid equations and construct a reduced-order Kalman filter for use in meteorological assimilation (Farrell and Ioannou 2001a,b).

The link between distributed forcing and the growth of forecast errors is direct, but of equal interest is the role of distributed forcing in limiting the accuracy with which a system state can be identified. In order to address this problem we consider the data assimilation system viewed as a Luenberger observer (Luenberger 1971). Such an observer is necessarily asymptotically stable in contrast to the forward forecast problem, which is generally asymptotically unstable.

Recently D’Andrea and Vautard (2000) obtained approximate optimal temporally distributed deterministic forcings of the forecast error system (which they refer to as forcing singular vectors) and Barkmeijer et al. (2003) obtained the optimal temporally distributed deterministic forcing of the forecast error system over fixed spatial structures. In this paper, we extend the study of distributed forcings of the forecast error system by finding efficient methods for obtaining the exact solution of the optimal temporally distributed deterministic and stochastic forcing of the forecast and assimilation systems.

First we introduce the norms used to measure the forcing and the error state, and review the method used to obtain the optimal deterministic forcing that produces the greatest error state norm at a given time. The optimal problem for stochastic forcing and for impulsive forcing is then solved. Finally, examples are shown for optimal deterministic, stochastic, and impulsive forcing in forecast systems and observer systems.

## 2. Optimal deterministic forcing of linear systems

The problem of obtaining the forcing of a time-varying system over the time interval [0, *T*] that leads to the greatest state norm at time *T* has been solved in the context of control theory (see, e.g., Dullerud and Paganini 2000). Because of the novelty and power of the methods used to solve this problem, and their relation to methods used for obtaining optimal truncation of meteorological systems (cf. Farrell and Ioannou 2001a), it is useful to review this solution.

We seek the deterministic forcing **f**(*t*) of unit norm on *t* ∈ [0, *T*] producing the greatest state norm at time *T*; that is, it maximizes the square norm of the state, ||**ψ**(*T*)||^{2}, assuming the state is initially zero, **ψ**(0) = 0, and that **ψ** obeys the linear equation:

$$\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}(t)\,\boldsymbol{\psi} + \mathbf{f}(t). \tag{1}$$

The forcing **f**(*t*) is taken to be in ℒ_{2}([0, *T*]), that is, a square integrable vector function on the interval [0, *T*] with an inner product:

$$(\mathbf{f}, \mathbf{g})_{\mathcal{L}_2} = \int_0^T \mathbf{f}^{H}(t)\,\mathbf{g}(t)\,dt. \tag{2}$$

The square norm in this space is

$$\|\mathbf{f}\|^{2}_{\mathcal{L}_2} = \int_0^T \mathbf{f}^{H}(t)\,\mathbf{f}(t)\,dt. \tag{3}$$

The state **ψ** is in *C*^{N}, that is, an *N*-dimensional complex vector. We choose the Euclidean inner product for the state vectors:

$$(\boldsymbol{\psi}_1, \boldsymbol{\psi}_2) = \boldsymbol{\psi}_1^{H}\,\boldsymbol{\psi}_2, \tag{4}$$

where H denotes the hermitian transpose. The square of the norm in this space is

$$\|\boldsymbol{\psi}\|^{2} = \boldsymbol{\psi}^{H}\boldsymbol{\psi}. \tag{5}$$

We have employed the subscript ℒ_{2} to denote the inner product of square integrable functions in order to distinguish it from the inner product of vectors.

With these norms we seek the forcing, **f**, maximizing the forcing normalized final state at time *T*:

$$\frac{\|\boldsymbol{\psi}(T)\|^{2}}{\|\mathbf{f}\|^{2}_{\mathcal{L}_2}}. \tag{6}$$

The state **ψ** at time *T* is obtained for any forcing, **f**, by the linear map *ϕ*:

$$\boldsymbol{\psi}(T) = \phi\,\mathbf{f} \equiv \int_0^T \boldsymbol{\Phi}(T, t)\,\mathbf{f}(t)\,dt, \tag{7}$$

where **Φ**(*T*, *t*) is the propagator taking the state at time *t* to the state at time *T*.

In order to obtain the optimal distributed forcing, some auxiliary results are needed. First the adjoint of the mapping *ϕ* is required. The adjoint of *ϕ* is denoted *ϕ** and it maps a state *ψ*_{1}(*T*) to a forcing **g**(*t*) defined over the interval [0, *T*], with the property that

$$(\phi^{*}\boldsymbol{\psi}_1, \mathbf{f})_{\mathcal{L}_2} = \boldsymbol{\psi}_1^{H}(T)\,\phi\,\mathbf{f}. \tag{8}$$

This adjoint map *ϕ** is (cf. Farrell and Ioannou 2001a):

$$\mathbf{g}(t) = \phi^{*}\boldsymbol{\psi}_1 = \boldsymbol{\Phi}^{H}(T, t)\,\boldsymbol{\psi}_1(T). \tag{9}$$

Composition of the mapping *ϕ* and its adjoint *ϕ** maps system states to system states. This is a rank *N* mapping that can be seen using (7) and (9) to have matrix representation:

$$\mathsf{C} = \phi\,\phi^{*} = \int_0^T \boldsymbol{\Phi}(T, t)\,\boldsymbol{\Phi}^{H}(T, t)\,dt, \tag{10}$$

which is also the finite-time state covariance matrix under the assumption of temporally white noise forcing with unit covariance 𝗜. The covariance matrix 𝗖 in (10) is hermitian, and is assumed to be full rank. The covariance can also be obtained at time *T* by integrating from *t* = 0 to *t* = *T* the Lyapunov equation:

$$\frac{d\mathsf{C}}{dt} = \mathsf{A}(t)\,\mathsf{C} + \mathsf{C}\,\mathsf{A}^{H}(t) + \mathsf{I}, \tag{11}$$

with the initial condition 𝗖(0) = 0 (cf. Farrell and Ioannou 1996a).
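As a concrete numerical illustration of this construction, 𝗖(*T*) can be computed by integrating the Lyapunov equation from 𝗖(0) = 0. The following Python sketch (the function name, the plain Euler step, and the scalar test case are our own choices, and an autonomous 𝗔 is assumed) checks the result against the closed form *C*(*T*) = (*e*^{2aT} − 1)/(2*a*) for a scalar system:

```python
import numpy as np

# A minimal sketch (our own notation): obtain C(T) by Euler integration of
# the Lyapunov equation dC/dt = A C + C A^H + I from C(0) = 0, assuming an
# autonomous A; any standard ODE integrator would serve equally well.
def finite_time_covariance(A, T, dt=1e-4):
    n = A.shape[0]
    C = np.zeros((n, n), dtype=complex)
    I = np.eye(n)
    for _ in range(int(round(T / dt))):
        C = C + dt * (A @ C + C @ A.conj().T + I)
    return C

# Scalar check: for A = [[a]], C(T) = (e^{2aT} - 1) / (2a)
a, T = -1.0, 5.0
C = finite_time_covariance(np.array([[a]]), T)
exact = (np.exp(2 * a * T) - 1.0) / (2 * a)
```

The scalar case is used only because its covariance is known in closed form; the same routine applies unchanged to a matrix 𝗔.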

Now we define the projection operator:

$$P = \phi^{*}\,\mathsf{C}^{-1}\phi. \tag{12}$$

Operator *P* maps forcings to forcings. This linear map is an orthogonal projector on the space of functions because it is hermitian and satisfies the property:

$$P^{2} = P. \tag{13}$$

Indeed,

$$P^{2} = \phi^{*}\mathsf{C}^{-1}\phi\,\phi^{*}\mathsf{C}^{-1}\phi = \phi^{*}\mathsf{C}^{-1}\mathsf{C}\,\mathsf{C}^{-1}\phi = \phi^{*}\mathsf{C}^{-1}\phi = P. \tag{14}$$

Also note the property:

$$\phi\,P = \phi. \tag{15}$$

Indeed,

$$\phi\,P = \phi\,\phi^{*}\mathsf{C}^{-1}\phi = \mathsf{C}\,\mathsf{C}^{-1}\phi = \phi. \tag{16}$$

Consider a forcing **f**, its projection *P***f**, and its orthogonal complement (*I* − *P*)**f**. Then (15) establishes that the forcing (*I* − *P*)**f** makes no contribution to the state **ψ**(*T*).

Assume that at time *T*, the system is in state **ψ**, and consider all the forcings **f** that could have produced this state, that is, all **f** with the property *ϕ***f** = **ψ**. That there exist forcings for which any chosen state **ψ** can be reached at time *T* requires that the system be controllable, but this is ensured by the full rank of 𝗖, which in the control literature is referred to as the controllability Grammian (Brockett 1970).

All the forcings, **f**, that produce state **ψ** have the same projection *P***f**. To show this, note that by assumption:

$$\phi\,\mathbf{f} = \boldsymbol{\psi}, \tag{17}$$

and by (15) the projected forcing *P***f** also produces the state **ψ**, so

$$\phi\,P\,\mathbf{f} = \boldsymbol{\psi}. \tag{18}$$

Multiplying (18) by *ϕ**𝗖^{−1} gives

$$\phi^{*}\mathsf{C}^{-1}\phi\,P\,\mathbf{f} = \phi^{*}\mathsf{C}^{-1}\boldsymbol{\psi}. \tag{19}$$

But the left side of (19) is *P*^{2}**f** = *P***f**. Hence all **f** that produce state **ψ** have the same projection:

$$P\,\mathbf{f} = \phi^{*}\mathsf{C}^{-1}\boldsymbol{\psi}. \tag{20}$$

It is now easy to see that the forcing *P***f** is the minimal forcing producing the state **ψ**. To show this, we decompose the forcing into its orthogonal components:

$$\mathbf{f} = P\,\mathbf{f} + (I - P)\,\mathbf{f}. \tag{21}$$

Because

$$(P\,\mathbf{f},\ (I - P)\,\mathbf{f})_{\mathcal{L}_2} = 0, \tag{22}$$

we have

$$\|\mathbf{f}\|^{2}_{\mathcal{L}_2} = \|P\,\mathbf{f}\|^{2}_{\mathcal{L}_2} + \|(I - P)\,\mathbf{f}\|^{2}_{\mathcal{L}_2}, \tag{23}$$

and it follows that ||*P***f**||^{2}_{ℒ2} ≤ ||**f**||^{2}_{ℒ2},

which proves that the forcing

$$P\,\mathbf{f} = \phi^{*}\mathsf{C}^{-1}\boldsymbol{\psi}(T) \tag{24}$$

is the minimal forcing to produce state **ψ**. It is easy to verify that this forcing produces **ψ**:

$$\phi\,\phi^{*}\mathsf{C}^{-1}\boldsymbol{\psi}(T) = \mathsf{C}\,\mathsf{C}^{-1}\boldsymbol{\psi}(T) = \boldsymbol{\psi}(T). \tag{25}$$

Using (24) the maximization (6) over functions, **f**, can be transformed to a maximization over states, **ψ**(*T*):

$$\max_{\mathbf{f}} \frac{\|\boldsymbol{\psi}(T)\|^{2}}{\|\mathbf{f}\|^{2}_{\mathcal{L}_2}} = \max_{\boldsymbol{\psi}(T)} \frac{\boldsymbol{\psi}^{H}(T)\,\boldsymbol{\psi}(T)}{\boldsymbol{\psi}^{H}(T)\,\mathsf{C}^{-1}(T)\,\boldsymbol{\psi}(T)}. \tag{26}$$

This equation shows that the optimal over forcings (lhs) is equal to an optimal over states (rhs). Note that the form of the second optimization is reminiscent of the covariance or Mahalanobis metric often used in predictability analysis (Palmer et al. 1998) suggesting the interpretation of optimals weighted by the Mahalanobis metric as structures that are most easily forced.

The quotient

$$\frac{\boldsymbol{\psi}^{H}(T)\,\boldsymbol{\psi}(T)}{\boldsymbol{\psi}^{H}(T)\,\mathsf{C}^{-1}(T)\,\boldsymbol{\psi}(T)} \tag{27}$$

is maximized for unit forcing by the state:

$$\boldsymbol{\psi}_{\mathrm{opt}}(T) = \sqrt{\lambda_1}\,\upsilon_1, \tag{28}$$

where *λ*_{1} is the maximum singular value of 𝗖 and *υ*_{1} is the corresponding singular vector of 𝗖 [*υ*_{1} is conventionally called the top empirical orthogonal function (EOF) of 𝗖]. The optimal unit forcing is obtained from (9) and (25):

$$\mathbf{f}_{\mathrm{opt}}(t) = \phi^{*}\mathsf{C}^{-1}\sqrt{\lambda_1}\,\upsilon_1 = \sqrt{\lambda_1}\,\boldsymbol{\Phi}^{H}(T, t)\,\mathsf{C}^{-1}\upsilon_1, \tag{29}$$

and the optimal value of the quotient (27) is clearly *λ*_{1}. It is also clear from (29) that the optimal forcing structure at the optimizing time, **f**_{opt}(*T*), is identical in structure to the optimal state *υ*_{1}. If the system is asymptotically stable, *λ*_{1} remains finite as *T* → ∞, and if the system is autonomous as well as stable, *λ*_{1} asymptotes to a constant; while if the system is asymptotically unstable, *λ*_{1} grows exponentially as *T* → ∞. In the limit of small optimizing time 𝗖 = *O*(*T*), and therefore (27) approaches zero linearly in time, a result that can be traced to the quadratic weighting of the forcing by ||·||^{2}_{ℒ2}, which has the effect of penalizing concentration of forcing in time.
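The optimal distributed forcing and *λ*_{1} can also be approximated directly by discretizing the map *ϕ* in time and computing a singular value decomposition, as in the following Python sketch (the discretization, names, and scalar test case are our own choices; the scalar system of section 5a is used only as a check):

```python
import numpy as np
from scipy.linalg import expm, svd

# Discretize the forcing-to-state map: psi(T) = sum_k Phi(T, t_k) f(t_k) dt.
# Scaling each column block by sqrt(dt) makes the Euclidean norm of the
# stacked forcing samples equal their L2([0, T]) norm, so the top right
# singular vector of M is the optimal distributed forcing and the top
# singular value squared is lambda_1, the top eigenvalue of C = M M^H.
def optimal_distributed_forcing(A, T, nt=2000):
    n = A.shape[0]
    dt = T / nt
    t = (np.arange(nt) + 0.5) * dt               # midpoint quadrature times
    M = np.hstack([expm(A * (T - tk)) * np.sqrt(dt) for tk in t])
    U, s, Vh = svd(M, full_matrices=False)
    lam1 = s[0] ** 2                              # optimal square state norm
    f_opt = Vh[0].reshape(nt, n) / np.sqrt(dt)    # forcing samples f(t_k)
    return lam1, f_opt

# Scalar check: for d psi/dt = a psi + f, lambda_1 = (e^{2aT} - 1) / (2a)
a, T = -1.0, 5.0
lam1, f_opt = optimal_distributed_forcing(np.array([[a]]), T)
exact = (np.exp(2 * a * T) - 1.0) / (2 * a)
```

For this stable case the recovered forcing is concentrated near the optimizing time, consistent with the behavior described in section 5a.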

If *υ*_{1} and *υ*_{2} are two distinct singular vectors of 𝗖, the corresponding forcings **f**_{1,2}(*t*) = *ϕ**𝗖^{−1}*υ*_{1,2} are orthogonal because

$$(\mathbf{f}_1, \mathbf{f}_2)_{\mathcal{L}_2} = \upsilon_1^{H}\mathsf{C}^{-1}\phi\,\phi^{*}\mathsf{C}^{-1}\upsilon_2 = \upsilon_1^{H}\mathsf{C}^{-1}\upsilon_2 = \frac{\upsilon_1^{H}\upsilon_2}{\lambda_2} = 0. \tag{30}$$

The optimal forcing and the associated state of the system can be obtained simultaneously by integrating the following coupled forward and adjoint systems backward over the finite interval from time *t* = *T* to the initial time *t* = 0:

$$\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}(t)\,\boldsymbol{\psi} + \mathbf{f}, \qquad \frac{d\mathbf{f}}{dt} = -\mathsf{A}^{H}(t)\,\mathbf{f},$$

with **ψ**(*T*) = √*λ*_{1} *υ*_{1} and **f**(*T*) = *υ*_{1}/√*λ*_{1}. The initial state **ψ**_{opt}(0) = 0 is recovered as a consistency check.

Consider forcings, **f**, with ||**f**||^{2}_{ℒ2} = 1. The states these produce will lie on an ellipsoid with axes of length √*λ*_{i} in the directions of the orthonormal *υ*_{i}, where *υ*_{i} are the singular vectors of 𝗖 with corresponding singular values *λ*_{i}. The corresponding forcings

$$\mathbf{f}_i(t) = \sqrt{\lambda_i}\,\boldsymbol{\Phi}^{H}(T, t)\,\mathsf{C}^{-1}\upsilon_i,$$

which produce the extremal states **ψ**_{i}(*T*) = √*λ*_{i} *υ*_{i}, are an orthonormal finite dimensional set of functions that span the space of minimal forcings. These forcings induce a characterization of **ψ**(*T*) and its excitation by distributed forcing analogous to the characterization induced by the evolved optimals and the optimal initial conditions in the initial condition excitation problem. For this reason we extend the characterization of optimality to all the forcings that render the Rayleigh quotient (6) extremal. A distinction between optimal distributed forcings and optimal initial conditions is that the set of optimal distributed forcings, **f**_{i}, is the minimal spanning set of forcings that produce nonzero states at *t* = *T*, but the optimal forcings, **f**_{i}, do not span the whole space ℒ_{2}([0, *T*]). However, the optimal forcings do split the whole of ℒ_{2}([0, *T*]) into two orthogonal components: the range of *P*, *P*ℒ_{2}([0, *T*]), comprising linear combinations of the orthogonal set of **f**_{i}, which span the forcings that contribute to the state at *T*; and the orthogonal space of functions in the kernel of *P*, (*I* − *P*)ℒ_{2}([0, *T*]), which do not contribute to the state at *T*.

An arbitrary forcing **f** may be projected on its minimal component *P***f** with the property that only this component of the forcing produces a nonzero state at the optimizing time *T*.

Specifically, an arbitrary state **ψ**(*T*) can be expanded in the basis of the orthonormal eigenvectors, *υ*_{i}, of the finite-time covariance matrix 𝗖(*T*):

$$\boldsymbol{\psi}(T) = \sum_{i=1}^{N} c_i\,\upsilon_i, \tag{33}$$

where

$$c_i = \upsilon_i^{H}\,\boldsymbol{\psi}(T). \tag{34}$$

Then the minimal forcing,

$$P\,\mathbf{f} = \phi^{*}\mathsf{C}^{-1}\boldsymbol{\psi}(T) = \sum_{i=1}^{N} \frac{c_i}{\lambda_i}\,\boldsymbol{\Phi}^{H}(T, t)\,\upsilon_i, \tag{35}$$

is the only relevant forcing for producing this state. Further, because the *λ*_{i} generally fall rapidly with *i*, most states are difficult to reach; for instance, by (35), states **ψ**(*T*) that project on *υ*_{i} with small values of *λ*_{i} require large-amplitude forcing. It follows that calculation of the first few eigenvectors, *υ*_{i}, of the finite-time covariance 𝗖(*T*) and the optimal forcings, **f**_{i}, that produce them suffices to characterize the potential influence of model error on the forecast and assimilation systems. In this way the decomposition of forcings into contributing and noncontributing spaces can be made independent of the state dimension *N*, when *N* is large, by taking the spanning set for the contributing space to be restricted to the forcings with substantial *λ*_{i}.
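As a numerical check of this characterization, the following Python sketch (with an illustrative 2 × 2 nonnormal matrix and a target state of our own choosing) verifies that the minimal forcing *ϕ**𝗖^{−1}**ψ**(*T*) reaches the target state exactly and has square amplitude **ψ**^{H}𝗖^{−1}**ψ**:

```python
import numpy as np
from scipy.linalg import expm

# Minimal-forcing check: f_min(t) = Phi(T,t)^H C^{-1} psi reaches the target
# exactly, with L2 square amplitude psi^H C^{-1} psi, so states projecting on
# EOFs with small eigenvalues are expensive to reach. The 2x2 matrix and
# target below are illustrative choices of our own, not from the text.
A = np.array([[-1.0, 0.0], [5.0, -2.0]])
T, nt = 4.0, 4000
dt = T / nt
t = (np.arange(nt) + 0.5) * dt
Phis = [expm(A * (T - tk)) for tk in t]
C = sum(P @ P.T for P in Phis) * dt                 # finite-time covariance (real A)
psi = np.array([1.0, 1.0])                          # target state at time T
w = np.linalg.solve(C, psi)                         # C^{-1} psi
f = np.array([P.T @ w for P in Phis])               # minimal forcing samples f(t_k)
psi_T = sum(P @ fk for P, fk in zip(Phis, f)) * dt  # state actually reached at T
amp2 = np.sum(f ** 2) * dt                          # L2 square amplitude of f
```

Within the discrete quadrature the identities hold exactly, since the same sum defines both 𝗖 and the reached state.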

## 3. Optimal constant spatial structure for stochastic forcing of a linear system

In the previous section we obtained the optimal deterministic forcing for producing a specified state and the maximum state norm at a given time. However, situations arise in which the forcing is not known precisely but can be characterized statistically. A common characteristic of statistically specified, physically derived forcing is that the forcing is stochastic with a short correlation time when compared to the characteristic time scales of the linear operator. Given such a stochastic time dependence we seek the constant spatial forcing structure producing the greatest expected state square norm at a chosen time. This situation is formulated using the following stochastic equation:

$$\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}\,\boldsymbol{\psi} + \mathbf{f}\,\eta(t), \tag{36}$$

where **f** is a column vector specifying the spatial structure and *η*(*t*) a temporally white noise scalar function of unit variance satisfying 〈*η*(*t*_{1})*η*(*t*_{2})〉 = *δ*(*t*_{2} − *t*_{1}).

The optimal spatial structure, **f**_{opt}, of the stochastic forcing is to be found under the assumption that its temporal behavior is a white noise process, *η*(*t*). This contrasts with the deterministic optimal forcing problem studied above, in which the optimal response was found over both the spatial and temporal structure of the forcing.

The problem of obtaining the fixed spatial structure of temporally white forcing that excites the greatest expected square state norm in the asymptotic limit *t* → ∞ has been addressed previously (Farrell and Ioannou 1993a,b, 1996a; Kleeman and Moore 1997). This structure is the stochastic optimal. The problem of obtaining the analogous finite-time stochastic optimal under both certain and uncertain dynamics was studied by Farrell and Ioannou (2002a,b). We will now briefly review how the stochastic optimal is determined.

The ensemble expected square norm of the state at time *T* is

$$\langle\,\|\boldsymbol{\psi}(T)\|^{2}\,\rangle = \mathbf{f}^{H}\,\mathsf{B}(T)\,\mathbf{f}, \tag{37}$$

where the matrix 𝗕(*T*) is

$$\mathsf{B}(T) = \int_0^T \boldsymbol{\Phi}^{H}(T, t)\,\boldsymbol{\Phi}(T, t)\,dt. \tag{38}$$

The matrix 𝗕(*T*) can be obtained by integrating from *t* = 0 to *t* = *T* the equation,

$$\frac{d\mathsf{B}}{dt} = \mathsf{A}^{H}\,\mathsf{B} + \mathsf{B}\,\mathsf{A} + \mathsf{I}, \tag{39}$$

with the initial condition 𝗕(0) = 0.

It is immediate from (38) that the singular vectors of the hermitian matrix 𝗕 are the finite-time stochastic optimals. The finite-time stochastic optimal associated with the largest singular value of 𝗕 is the unit norm spatial forcing structure that, when excited white in time over the interval [0, *T*], produces the greatest expected square norm of the state at time *T*. This unit norm spatial structure results in unit variance forcing in (36). This largest expected square state response, when normalized by the forcing covariance, is

$$\max_{\mathbf{f}} \frac{\langle\,\|\boldsymbol{\psi}(T)\|^{2}\,\rangle}{\mathbf{f}^{H}\mathbf{f}} = \mu_1, \tag{41}$$

where *μ*_{1} is the largest singular value of 𝗕(*T*) and **f**_{opt} the corresponding singular vector. The forcing normalizations in (41) are not the same as in the deterministic case. Deterministic forcings are normalized by the forcing’s integrated square amplitude as in (3). Such a normalization is not appropriate for white noise forcing because white noise realizations are unbounded in square amplitude. The normalization in (41) assures a unit variance forcing in (36), which is appropriate for white noise forcing.
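The finite-time stochastic optimal can be computed along these lines, as in the following Python sketch (the Euler integration, the assumption of an autonomous operator, and the illustrative nonnormal 2 × 2 matrix are our own choices, not from the text):

```python
import numpy as np

# Finite-time stochastic optimal sketch: Euler-integrate the (autonomous-A)
# equation dB/dt = A^H B + B A + I from B(0) = 0; the top eigenvector of the
# hermitian matrix B(T) is the stochastic optimal and its eigenvalue is the
# optimal expected square state norm for unit-variance white forcing.
def stochastic_optimal(A, T, dt=1e-4):
    n = A.shape[0]
    B = np.zeros((n, n), dtype=complex)
    I = np.eye(n)
    for _ in range(int(round(T / dt))):
        B = B + dt * (A.conj().T @ B + B @ A + I)
    mu, V = np.linalg.eigh(B)
    return mu[-1], V[:, -1]

# Illustrative strongly nonnormal stable matrix (our own choice)
A = np.array([[-1.0, 0.0], [50.0, -2.0]])
mu1, f_opt = stochastic_optimal(A, T=5.0)
```

The expected response greatly exceeds that of a normal system with the same eigenvalues, reflecting the nonnormal amplification discussed in section 5b.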

## 4. Optimal impulsive forcing on an interval

Consider an impulsive forcing applied at a point in time, *t*_{0}:

$$\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}(t)\,\boldsymbol{\psi} + \mathbf{f}\,\delta(t - t_0), \tag{42}$$

where **f** is the spatial structure of the forcing. It is assumed, as previously, that the state is initially zero, **ψ**(0) = 0, and that the forcing is applied at a point in the interval so that 0 ≤ *t*_{0} ≤ *T*, where *T* is the final time.

We seek the time *t*_{0} and the forcing spatial structure **f** of unit norm, corresponding to a unit impulse, resulting in the greatest state norm at *T*; that is, we seek to maximize the quotient:

$$R_p = \frac{\|\boldsymbol{\psi}(T)\|^{2}}{\mathbf{f}^{H}\mathbf{f}}. \tag{43}$$

The state at time *T* is

$$\boldsymbol{\psi}(T) = \boldsymbol{\Phi}(T, t_0)\,\mathbf{f}. \tag{44}$$

Consequently, we seek the spatial structure that maximizes

$$R_p = \frac{\mathbf{f}^{H}\,\boldsymbol{\Phi}^{H}(T, t_0)\,\boldsymbol{\Phi}(T, t_0)\,\mathbf{f}}{\mathbf{f}^{H}\mathbf{f}}. \tag{45}$$

The maximum value of *R*_{p} is the square of the largest singular value, *σ*^{2}_{1}, of the propagator **Φ**(*T*, *t*_{0}), which is also the square of the norm of **Φ**(*T*, *t*_{0}), denoted ||**Φ**(*T*, *t*_{0})||^{2}_{2}. The forcing applied at *t*_{0} producing the greatest state norm at *T* is the top singular vector of the propagator **Φ**(*T*, *t*_{0}), which is the deterministic optimal initial condition corresponding to optimizing time *T* − *t*_{0}. Therefore, the optimal point forcing has the structure of the global optimal initial condition over *t* ∈ [0, *T*] and is applied at the global optimal time in the past. If we denote by *t*_{go} the time *t* for which ||**Φ**(*T*, *t*)||_{2} is maximum, that is, the time at which the global optimal growth is achieved, then the forcing is applied at *t*_{0} = *t*_{go}. It is clear that the optimal point forcing is identical to the optimal initial condition at *t*_{0}.
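This construction reduces to a scan over application times of the top singular value of the propagator, as in the following Python sketch (the grid scan, names, and illustrative stable nonnormal matrix are our own choices, not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Impulsive-forcing sketch: for f delta(t - t0), psi(T) = Phi(T, t0) f, so
# the best structure at each t0 is the top singular vector of the propagator
# and the best application time maximizes its top singular value. The 2x2
# nonnormal matrix below is an illustrative assumption of ours.
def optimal_impulse(A, T, nt=201):
    smax, t0_best = 0.0, 0.0
    for t0 in np.linspace(0.0, T, nt):
        s = np.linalg.svd(expm(A * (T - t0)), compute_uv=False)[0]
        if s > smax:
            smax, t0_best = s, t0
    return smax, t0_best

A = np.array([[-1.0, 0.0], [50.0, -2.0]])
smax, t0 = optimal_impulse(A, T=5.0)
```

For this stable nonnormal case the optimal application time precedes *T* by the global optimal growth interval, consistent with the discussion above.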

Given an arbitrary final state **ψ**(*T*), the optimal impulsive forcing to produce **ψ**(*T*) is obtained by finding

$$\min_{t \in [0, T]} \|\boldsymbol{\Phi}(t, T)\,\boldsymbol{\psi}(T)\|, \tag{46}$$

in which **Φ**(*t*, *T*) = **Φ**^{−1}(*T*, *t*), and taking *t*_{0} as the time of this minimum and **f** = **Φ**(*t*_{0}, *T*)**ψ**(*T*)*δ*(*t* − *t*_{0}) as the impulsive forcing that optimally excites state **ψ**(*T*).

A related problem is finding the time-varying forcing structure of minimal constant amplitude over an interval that produces a given state. The solution is to force the normalized function obtained from the backward propagator:

$$\mathbf{f}(t) \propto \frac{\boldsymbol{\Phi}(t, T)\,\boldsymbol{\psi}(T)}{\|\boldsymbol{\Phi}(t, T)\,\boldsymbol{\psi}(T)\|}. \tag{47}$$

## 5. Examples

### a. A scalar system example

Consider the one-dimensional and necessarily normal system:

$$\frac{d\psi}{dt} = \alpha\,\psi + f(t). \tag{48}$$

From (28) and (29) and the fact that a one-dimensional system is self-adjoint, it follows that the deterministic unit norm forcing maximizing the state at optimizing time *T* is

$$f_{\mathrm{opt}}(t) = \frac{e^{\alpha(T - t)}}{\sqrt{\lambda}}, \qquad \lambda = \frac{e^{2\alpha T} - 1}{2\alpha}, \tag{49}$$

and the corresponding maximum state at optimizing time *T* is

$$\psi_{\mathrm{opt}}(T) = \sqrt{\lambda}. \tag{50}$$

The optimal forcing for this example is shown in Fig. 1a for a representative stable case (*α* = −1) and in Fig. 1b for a similar unstable case (*α* = 1), both for *T* = 5. Note that the forcing in the stable case is concentrated near the optimizing time at the end of the interval, because forcing at earlier times would excite the decaying state and, as a result, be suboptimal. In the unstable case the forcing is concentrated at the start of the interval to take advantage of the growth. This is a general property of optimal forcings. The square norm of the state at the optimizing time *T* as a function of *T* is shown for the stable case in Fig. 1c (case labeled deterministic). For small *T* the square norm of the state is |*ψ*_{opt}(*T*)|^{2} ≈ *T*, but for large *T* it asymptotes to 1/(2|*α*|). The optimal forcing at the optimizing time *T* has norm *f*_{opt}(*T*) = 1/√*λ*, while the optimal state norm is *ψ*_{opt} = √*λ*, with an optimal finite-time variance:

$$|\psi_{\mathrm{opt}}(T)|^{2} = \lambda = \frac{e^{2\alpha T} - 1}{2\alpha}. \tag{51}$$

As in this example, it is generally the case that in stable normal systems the optimal forcing extends over a time interval of the order of the decay time of the system, while in nonnormal systems the optimal forcing assumes the temporal extent of the system's global optimal, as will be shown. The expected square norm of the state at *T* for unit covariance stochastic forcing is easily seen to be

$$\langle\,|\psi(T)|^{2}\,\rangle = \int_0^T e^{2\alpha(T - t)}\,dt = \lambda, \tag{52}$$

which is identical to the optimal deterministic state square norm. This is shown in Fig. 1 for *α* = −1. Also shown in Fig. 1 is the norm of the state for optimal impulsive forcing, which is identical to the optimal obtained from any unit initial condition on the interval. Because in the case of a stable normal system the amplitude of the states decays monotonically after they are excited, the optimal point for impulsive forcing is at the optimizing time *T* itself, and consequently the optimal response is equal to 1 for all choices of *T*.

Assume that in this simple scalar system we are given a forcing *f*(*t*). To find the part of *f*(*t*) that influences the state at *T*, we need only calculate the projection of *f*(*t*) on the space of optimal forcings, which according to (12) is

$$Pf(t) = \frac{e^{\alpha(T - t)}}{\lambda} \int_0^T e^{\alpha(T - s)}\,f(s)\,ds. \tag{53}$$

The part of *f*(*t*) that does not influence the final state is *f* − *Pf*. For example, if the forcing were constant, *f*(*t*) = 1, only the projection,

$$Pf(t) = \frac{e^{\alpha(T - t)}}{\lambda}\,\frac{e^{\alpha T} - 1}{\alpha}, \tag{54}$$

produces a contribution to the state at *T*, while *f* − *Pf* produces no contribution. This decomposition of *f*(*t*) = 1 into contributing and noncontributing components is shown in Fig. 1.
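This decomposition can be verified numerically; in the following Python sketch (the discretization choices are our own) the projected forcing *Pf* reproduces the final state of the constant forcing *f*(*t*) = 1, while *f* − *Pf* contributes nothing:

```python
import numpy as np

# Numerical check of the scalar projection for constant forcing f(t) = 1:
# P f and f produce the same state at T, f - Pf produces none, and Pf is
# the smaller forcing. Discretization (midpoint rule) is our own choice.
a, T, nt = -1.0, 5.0, 20000
dt = T / nt
t = (np.arange(nt) + 0.5) * dt
lam = (np.exp(2 * a * T) - 1) / (2 * a)            # scalar covariance C(T)

def final_state(f):                                # psi(T) = int e^{a(T-s)} f(s) ds
    return np.sum(np.exp(a * (T - t)) * f) * dt

f = np.ones(nt)                                    # constant forcing f(t) = 1
Pf = np.exp(a * (T - t)) * final_state(f) / lam    # projected forcing P f
```

The square amplitude of *Pf* is smaller than that of *f*, illustrating that *Pf* is the minimal forcing producing the same final state.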

### b. The Reynolds system example

Consider the two-dimensional highly nonnormal Reynolds system that models streak formation in boundary layers:

$$\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}\,\boldsymbol{\psi} + \mathbf{f}, \tag{55}$$

with stable matrix 𝗔:

The square norm of the state at time *T* produced by optimal deterministic forcing over the interval *t* ∈ [0, *T*] is shown as a function of *T* in Fig. 2. The response is much higher than that found for a normal system with the same eigenvalues because of the nonnormality of the system [e.g., (56)]. The variance equation for the above system is

$$\frac{d(\boldsymbol{\psi}^{H}\boldsymbol{\psi})}{dt} = \boldsymbol{\psi}^{H}(\mathsf{A}^{H} + \mathsf{A})\,\boldsymbol{\psi} + (\mathbf{f}^{H}\boldsymbol{\psi} + \boldsymbol{\psi}^{H}\mathbf{f}). \tag{57}$$

The first term, **ψ**^{H}(𝗔^{H} + 𝗔)**ψ**, represents the variance tendency from the background mediated by the nonnormality of 𝗔, while **f**^{H}**ψ** + **ψ**^{H}**f** is the variance tendency directly accumulated from the forcing. These two terms are plotted in Fig. 3c as a function of time for the forcing that leads to maximum state norm at time *T* = 5 in system (55). Note that the variance obtained directly from forcing is far less than the variance obtained from nonnormal processes.

Because this is a two-dimensional example, the state at *t = T* produced by the optimal stochastic and deterministic forcing are identical.^{1} The optimal norm of the state produced by impulsive forcing and, for comparison, the optimal initial value growth ||e^{𝗔T}||^{2} are shown as a function of *T* in Fig. 2. Note that the norm of the state produced by optimal impulsive forcing follows the optimal growth curve until *T* reaches the global optimal time *t*_{go} (which occurs for this case at *t*_{go} = 0.67), then for *T* > *t*_{go} it maintains this constant value. The optimal impulsive forcing is applied at time *t* = *T* − *t*_{go} and has the structure of the global optimal initial condition.

## 6. Modeling data assimilation as a stable observer system

Consider assimilating data taken from the exact solution of a nonlinear system, **x**_{t}. The forecast error **e**_{f} = **x**_{f} − **x**_{t} obeys the following equation:

$$\frac{d\mathbf{e}_f}{dt} = \mathsf{A}\,\mathbf{e}_f + \mathbf{f}(t), \tag{58}$$

in which model error is represented by **f**(*t*) and 𝗔 is the tangent linear system about **x**_{t}. Forecast error typically grows exponentially when averaged over a sufficiently long time due to the Lyapunov instability that inherently results from the time dependence of 𝗔 (Arnold 1998; Farrell and Ioannou 1999). However, a Lyapunov stable system related to (58) can be constructed to model the process of state estimation for forecast initialization, a process referred to as data assimilation. This can be done because a data assimilation system is equivalent to a system in which the matrix 𝗔 in (58) is modified by the introduction of observations so that the solution of the modified system, **x**_{a}, converges to the system state, **x**_{t}, in a statistical sense (Luenberger 1971). The effect of model and observational error on data assimilation can be analyzed quite generally using this representation of a data assimilation system as a Luenberger observer.

Assume that observations, **y**_{ob}, of the state are taken at various locations and introduced into the model. Let the observations be related to the true state of the atmosphere, **x**_{t}, as

$$\mathbf{y}_{ob} = \mathsf{H}\,\mathbf{x}_t + \mathsf{R}^{1/2}\,\mathbf{w}, \tag{59}$$

where 𝗛 is an observation matrix, 𝗥 is the observational error covariance, and **w** is a vector of white noise processes. Given an observational platform, **y**_{ob} is available while the truth **x**_{t} is not available.

Assimilate these observations to obtain an analysis, **x**_{a}, with analysis error **e**_{a} = **x**_{a} − **x**_{t} satisfying the Luenberger observer system:

$$\frac{d\mathbf{e}_a}{dt} = (\mathsf{A} - \mathsf{K}\mathsf{H})\,\mathbf{e}_a + \mathbf{f}(t) - \mathsf{K}\,\mathsf{R}^{1/2}\,\mathbf{w}. \tag{60}$$

The above equation defines the assimilation system model. Ideally, the gain, 𝗞, is chosen to minimize the analysis error variance, trace(〈**e**_{a}**e**_{a}^{H}〉).

Unlike the forecast error system, a Luenberger observer system is asymptotically stable. Any gain, 𝗞, that stabilizes 𝗔 − 𝗞𝗛 results in an observer with bounded error, this error being forced by a combination of model error **f**(*t*) and observational error 𝗥 [cf. (60)]. It is immediately evident that in the absence of model and observation error the analysis will converge to the truth. Good gains, 𝗞, do not just stabilize the operator but simultaneously reduce the nonnormality of the tangent linear operator so that the minimum of trace(〈**e**_{a}**e**_{a}^{H}〉) is maintained against the combination of observational and model error. The optimal gain under the assumption of Gaussian error statistics is the Kalman gain (Kalman 1964; Zhou and Doyle 1998). A suboptimal observer, such as might be associated with an operational forecast system, can be modeled as a stabilizing 𝗞 not equal to the Kalman gain. Just as analysis of the tangent linear forecast system reveals how initialization error and distributed forcing contribute to forecast error, so does analysis of the observer system reveal how model and observation error contribute to analysis error.
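A minimal observer sketch in Python may make this concrete (the 2 × 2 matrices and hand-picked gain are illustrative assumptions of ours, and 𝗞 here is merely a stabilizing gain, not the Kalman gain):

```python
import numpy as np

# Minimal Luenberger-observer sketch with illustrative matrices: output
# feedback K H turns an unstable A into a stable observer operator A - K H,
# so the analysis error in the absence of forcing decays.
A = np.array([[0.5, 1.0], [0.0, -1.0]])   # unstable: eigenvalue +0.5
H = np.array([[1.0, 0.0]])                # observe the first component only
K = np.array([[2.0], [0.0]])              # a stabilizing (not optimal) gain
Aobs = A - K @ H                          # observer error dynamics matrix
```

The eigenvalues of 𝗔 − 𝗞𝗛 all have negative real part even though 𝗔 itself is unstable, which is the defining property of the observer.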

We are now equipped to study the effect of distributed error on data assimilation and will begin by seeking the deterministic and stochastic forcings leading to the greatest deviations of the estimate **x**_{a} from the true state **x**_{t}, as measured by the norm of **e**_{a}.

## 7. The Eady assimilation and forecast system model example

Consider as an example error system the Eady model (Eady 1949) with a zonal wavenumber chosen so that the model is unstable. Introduce continuous observation of the error streamfunction at the channel center and construct a stable Kalman observer under the assumption of zero model and unit observation error. Under these assumptions, a single continuous observation at the channel center suffices to make this optimal observer asymptotically stable.

The nondimensional linearized equation for streamfunction perturbations **ψ** in the Eady model is

$$\left(\frac{\partial}{\partial t} + ikU\right) D^{2}\psi = -r\,D^{2}\psi, \tag{61}$$

in which the perturbation is assumed to be of the form **ψ**(*z*, *t*)*e*^{ikx}, where *k* is the zonal wavenumber. The perturbation potential vorticity is *D*^{2}**ψ**, with *D*^{2} ≡ ∂^{2}/∂*z*^{2} − *k*^{2}, and the perturbation potential vorticity damping rate is *r*. This model is Boussinesq with constant stratification on an *f* plane and has periodic boundary conditions in the zonal, *x*, direction and a solid lid at height *z* = *H*. The zonal flow is *U*(*z*) = *z*. Horizontal scales are nondimensionalized by *L* = 1000 km; vertical scales by *H* = *fL*/*N* = 10 km; velocity by *U*_{0} = 30 m s^{−1}; and time by *T* = *L*/*U*_{0}, so that a time unit is approximately 9.25 h. Midlatitude values are chosen for the Brunt–Väisälä frequency, *N* = 10^{−2} s^{−1}, and the Coriolis parameter, *f* = 10^{−4} s^{−1}. The coefficient of Rayleigh damping has value *r* = 0.15, corresponding to a decay rate of 1/(2.5 day).

Conservation of potential temperature at the ground, *z* = 0, and at the tropopause, *z* = 1, provides the following boundary conditions:

$$\frac{\partial}{\partial t}\frac{\partial \psi}{\partial z} = ik\,\psi \quad \text{at} \quad z = 0, \tag{62}$$

$$\left(\frac{\partial}{\partial t} + ik\right)\frac{\partial \psi}{\partial z} = ik\,\psi \quad \text{at} \quad z = 1. \tag{63}$$

The model is discretized using *N* = 21 points in the vertical and expressed in the matrix form:

$$\frac{d\boldsymbol{\phi}}{dt} = \mathsf{A}\,\boldsymbol{\phi} + \mathbf{f}. \tag{64}$$

In (64) generalized velocity coordinates **ϕ** = 𝗠^{1/2}**ψ** were used so that the Euclidean square norm of the system state corresponds to the error energy. The error energy is **ϕ**^{H}**ϕ** = **ψ**^{H}𝗠**ψ**, where, for a vertical discretization increment *δz*, the metric 𝗠 is the corresponding discretization of the energy inner product.

The assimilation system in generalized velocity variables is governed by the operator 𝗔_{a} = 𝗔 − 𝗞𝗛, where 𝗞 is the Kalman gain for observation at the channel center, given by 𝗞 = 𝗫𝗛^{H}𝗥^{−1}, in which 𝗫 satisfies the continuous-time algebraic Riccati equation under the perfect model assumption (cf. Zhou and Doyle 1998):

$$\mathsf{A}\,\mathsf{X} + \mathsf{X}\,\mathsf{A}^{H} - \mathsf{X}\,\mathsf{H}^{H}\mathsf{R}^{-1}\mathsf{H}\,\mathsf{X} = 0, \tag{65}$$

and 𝗛 = 𝗣𝗠^{−1/2} is the observation matrix, in which, for our example, 𝗣 is a row vector with one nonzero element corresponding to the grid point at the channel center, and 𝗥 is the observation error covariance matrix (a scalar in our example). In contrast, the unstable forecast system is governed by the operator 𝗔. We seek, in both the stable assimilation system and the unstable forecast system, the optimal deterministic and stochastic forcing leading to the greatest error at time *T*.
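The gain construction can be sketched in Python using SciPy's `solve_continuous_are` (the 2 × 2 operator, 𝗤, and 𝗥 below are illustrative assumptions of ours, not the discretized Eady operator; a model-error covariance 𝗤 = 𝗜 is assumed here for a well-posed illustration, whereas the text's perfect-model case takes 𝗤 = 0; the transposed arguments invoke the standard control-filter duality):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Kalman-gain sketch via the filter algebraic Riccati equation
#   A X + X A^H - X H^H R^{-1} H X + Q = 0,   K = X H^H R^{-1}
# for an illustrative 2x2 system (not the discretized Eady operator).
A = np.array([[0.1, 1.0], [0.0, -0.5]])   # weakly unstable dynamics (assumed)
H = np.array([[1.0, 0.0]])                # observe the first state only
Q = np.eye(2)                             # model-error covariance (assumed)
R = np.array([[1.0]])                     # observation-error covariance
X = solve_continuous_are(A.T, H.T, Q, R)  # duality: pass transposes for the filter ARE
K = X @ H.T @ np.linalg.inv(R)            # Kalman gain
Aobs = A - K @ H                          # stabilized observer operator A_a
```

With (𝗔, 𝗛) observable and 𝗤 full rank, the resulting observer operator 𝗔 − 𝗞𝗛 is guaranteed stable even though 𝗔 is unstable.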

### a. The observer-stabilized Eady model

Let 𝗔_{a} be the matrix of the observer-stabilized Eady model for zonal wavenumber *k* = 1.5. While the unobserved Eady model at this wavenumber is unstable with dimensional growth rate 1/(2.5 day), the observer-stabilized model has a minimum dimensional decay rate of 1/(2.6 day). The square norm of the state at time *T* produced by optimal distributed forcing over the interval *t* ∈ [0, *T*] is shown as a function of *T* in Fig. 4. The square norm of the state at *T* produced by each of the 21 orthonormal optimal forcings is shown in Fig. 5. Two EOFs and their associated forcing structures account for most of the perturbation dynamics, suggesting the generally valid conclusion that a few forcings account for most of the error while the majority of the forcings make relatively little contribution.

The norm of the distributed forcing that maximizes the state norm at *T* = 40 is shown as a function of time in Fig. 6a, along with the optimal growth ||*e*^{𝗔a(T−t)}||. The norm of the forcing follows the optimal growth curve and is maximized at *T* − *t*_{go}, where *t*_{go} is the time at which the maximum of the norm of the propagator *e*^{𝗔at} occurs (in this case the maximum occurs at *t*_{go} = 7). The corresponding growth of the error state energy, the energy tendency produced by nonnormal processes, and the energy tendency produced by forcing are shown in Fig. 6b.
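The optimal growth curve ||*e*^{𝗔at}|| of a stable nonnormal operator rises above one to a maximum at an interior time *t*_{go} before decaying, which is why the optimal forcing peaks at *T* − *t*_{go}. A sketch with a hypothetical 2 × 2 stable nonnormal operator (not the Eady observer):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable nonnormal operator standing in for A_a
A = np.array([[-0.1, 5.0],
              [ 0.0, -0.2]])
ts = np.linspace(0.0, 40.0, 401)

# 2-norm of the propagator e^{A t} as a function of t
norms = [np.linalg.norm(expm(A * t), 2) for t in ts]

t_go = ts[int(np.argmax(norms))]
# Transient growth: the norm exceeds 1, peaks at the interior time t_go,
# then decays because every eigenvalue of A is damped.
print(t_go, max(norms))
```

For this choice of `A` the peak occurs near *t* ≈ 7 with more than an order of magnitude of transient energy growth, qualitatively mirroring Fig. 6a.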

The evolution in structure of the first deterministic optimal is shown in the left panels of Fig. 7. The corresponding evolution in structure of the state is shown in the right panels. At optimizing time both the state and the optimal forcing assume the structure of the first eigenvector of the finite-time covariance matrix at time *T*, 𝗖(*T*).

The error norm produced by optimal stochastic forcing and the response to optimal impulsive forcing are shown as functions of *T* in Fig. 4. The norm of the error due to impulsive forcing follows the deterministic initial-state optimal growth curve, reaching the global optimal norm at time *t*_{go} = 7, after which it assumes the constant value of the global optimal growth. For optimizing times *T* > *t*_{go} the optimal impulsive forcing occurs at time *t* = *T* − *t*_{go} and has the structure of the global optimal initial condition. This structure and that of the stochastic optimal are shown in Fig. 8. If the spatial structure of the white noise forcing is taken to be that of the stochastic optimal shown in Fig. 8a, then the resulting state covariance at optimizing time is primarily composed of the first three eigenvectors of 𝗖(*T*), shown in Fig. 9.
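The stochastic optimal matrix 𝗕(*T*) = ∫^{T}_{0}**Φ**^{H}(*T*, *t*)**Φ**(*T*, *t*)*dt* can be accumulated by the same quadrature as 𝗖(*T*). Although 𝗕 and 𝗖 generally differ for nonnormal systems, they always share the same trace, which is a useful numerical check. A minimal sketch with an illustrative stable operator:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.1, 4.0],
              [ 0.0, -0.3]])   # illustrative stable nonnormal operator
T, nt = 30.0, 300
dt = T / nt

C = np.zeros_like(A)   # finite-time covariance    C(T) = int Phi Phi^H dt
B = np.zeros_like(A)   # stochastic optimal matrix B(T) = int Phi^H Phi dt
for i in range(nt):
    P = expm(A * (T - (i + 0.5) * dt))   # Phi(T, t), midpoint rule
    C += P @ P.T * dt
    B += P.T @ P * dt

# trace C(T) = trace B(T) = int ||Phi||_F^2 dt holds term by term
print(np.trace(C), np.trace(B))

# The first eigenvector of B(T) is the finite-time stochastic optimal
w, V = np.linalg.eigh(B)
stoch_opt = V[:, -1]
```

Forcing white in time with the spatial structure `stoch_opt` maximizes the expected error variance at *T*, while the response itself is organized by the leading eigenvectors of 𝗖(*T*).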

### b. The unstable Eady forecast model

The error in the unstable Eady model at *k* = 1.5 grows exponentially. The error norms for optimal distributed, stochastic, and impulsive forcing are shown in Fig. 10. The optimal stochastic forcing and the optimal deterministic forcing produce almost identical error at time *T* because the system is unstable and excitation of a single structure, the adjoint of the mode, is responsible for nearly all the error at time *T*. The time development of the norm of the optimal deterministic forcing for optimizing time *T* = 20 is shown in Fig. 11. The forcing is concentrated at the beginning of the interval in order to take advantage of the instability. The structure of the forcing and of the error at various times is shown in Fig. 12. At the optimizing time the structure both of the error and of the forcing is that of the fastest growing mode; at earlier times the forcing has the structure of the adjoint of this mode. The stochastic optimal and the optimal impulsive forcing (which consists of forcing at the initial time of, and with the structure of, the optimal initial condition that maximizes perturbation growth at *T* = 20), although formally different, have in this strongly unstable example both assumed the structure of the adjoint of the most unstable mode of the model, shown in the second panel from the top on the left-hand side of Fig. 12.
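The collapse of the stochastic optimal onto the adjoint of the most unstable mode can be checked numerically. The sketch below uses a hypothetical 2 × 2 unstable operator rather than the Eady matrix; the adjoint mode is the eigenvector of 𝗔^{H} belonging to the most unstable eigenvalue.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.3, 2.0],
              [0.0, 0.1]])    # hypothetical unstable operator (growth rate 0.3)
T, nt = 20.0, 400
dt = T / nt

# Stochastic optimal matrix B(T) = int Phi^H(T,t) Phi(T,t) dt
B = np.zeros_like(A)
for i in range(nt):
    P = expm(A * (T - (i + 0.5) * dt))
    B += P.T @ P * dt

# Adjoint of the most unstable mode: eigenvector of A^T for the largest eigenvalue
w, V = np.linalg.eig(A.T)
adj = np.real(V[:, np.argmax(w.real)])
adj /= np.linalg.norm(adj)

_, Vb = np.linalg.eigh(B)
stoch_opt = Vb[:, -1]          # leading eigenvector of B(T)
print(abs(adj @ stoch_opt))    # near 1: stochastic optimal aligns with adjoint mode
```

For long optimizing times the exponential factor of the most unstable mode dominates the integrand, so the alignment approaches unity, consistent with the strongly unstable example in the text.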

## 8. Conclusions

In this work we have studied the response of a linear dynamical system to distributed deterministic and stochastic forcing. Specific atmospheric applications of this study are to the presumed unstable forward forecast error system and the necessarily stable data assimilation system that are each continually perturbed by disturbances that degrade the forecast and assimilation, respectively. These disturbances may be physical in origin and may also arise from numerical model deficiencies. The disturbances may be deterministic or stochastic. We analyzed the effect of temporally distributed forcing on stable and unstable error systems by obtaining a spanning set of optimal distributed forcings over a chosen interval, making use of the adjoint system propagator applied to the finite-time covariance matrix eigenvectors. In the stable case modeling a data assimilation system, the optimal forcing is concentrated near the end of the interval of the optimization, while in the unstable forecast error system the optimal forcing is concentrated near the beginning of the interval. The optimals exploit both accumulation of forcing variance and transfer of variance from the mean in their growth process, although transfer from the mean was dominant in the examples. The structure of the response to the optimal distributed forcing is the first eigenvector of the finite-time covariance 𝗖(*T*) = ∫^{T}_{0}**Φ**(*T*, *t*)**Φ**^{H}(*T*, *t*)*dt*. In the case of stochastic forcing, the structure of the forcing producing the greatest response is the finite-time stochastic optimal, which is the first eigenvector of the stochastic optimal matrix 𝗕(*T*) = ∫^{T}_{0}**Φ**^{H}(*T*, *t*)**Φ**(*T*, *t*)*dt*, and this forcing generally tends to have the structure of the optimal initial condition.
Finally, in the case of point impulsive forcing, the greatest response is obtained by a forcing concentrated at the global optimizing time in the past and with the structure of the global optimal initial condition.

Most forcings make little or no contribution to forecast and assimilation error, and a small subspace of forcings produce most of the error that is realized primarily in a small subspace of error states. These forcings are the optimal forcings that project on the dominant eigenvectors of the finite-time covariance matrix.

Forecast error may arise primarily from error in the initial analysis, or it may be forced by error sources distributed over the forecast interval. Initial error influence on forecast is characterized by the set of optimal perturbations, while error distributed in time is characterized by optimal forcing functions. It remains to determine the relative importance of these error sources in practical forecast situations.

Consider macroscopic cyclone formation as distinct from perturbation error growth. The cyclone may arise from forcing distributed over a given interval or from an initial disturbance at the beginning of the interval. Improved understanding of the origin of cyclones depends, in part, on a better characterization of the relative role of initial conditions and distributed forcings in the cyclogenesis process.

## Acknowledgments

This work was supported by NSF Grant ATM-0123389 and by ONR Grant N00014-99-1-0018.


## Footnotes

*Corresponding author address:* Dr. Brian F. Farrell, Harvard University, Pierce Hall 107V, Oxford St. Mail Area H0162, Cambridge, MA 02138. Email: farrell@deas.harvard.edu

^{1} The reason is that in two-dimensional systems the eigenvalues of the covariance matrix 𝗖 and the stochastic optimal matrix 𝗕 are equal because the 𝗕 and 𝗖 matrices possess two invariants: they have the same trace and also satisfy the relation ℜ[trace(𝗔𝗖)] = ℜ[trace(𝗔𝗕)], which can be easily verified from their respective Lyapunov equations (ℜ denotes the real part). The eigenvalues of 𝗖 and 𝗕 are, in general, different for nonnormal systems with dimension greater than two.
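The two invariants in this footnote are easy to verify numerically for a two-dimensional example. The sketch below solves the asymptotic Lyapunov equations 𝗔𝗖 + 𝗖𝗔^{H} = −𝗜 and 𝗔^{H}𝗕 + 𝗕𝗔 = −𝗜 for an illustrative stable nonnormal matrix (`solve_continuous_lyapunov(a, q)` solves a x + x a^{H} = q).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 3.0],
              [ 0.0, -2.0]])   # illustrative 2x2 stable nonnormal operator

# Lyapunov equations:  A C + C A^H = -I   and   A^H B + B A = -I
C = solve_continuous_lyapunov(A, -np.eye(2))
B = solve_continuous_lyapunov(A.T, -np.eye(2))

print(np.trace(C), np.trace(B))          # invariant 1: equal traces
print(np.trace(A @ C), np.trace(A @ B))  # invariant 2: trace(AC) = trace(AB)
print(np.sort(np.linalg.eigvalsh(C)))
print(np.sort(np.linalg.eigvalsh(B)))    # identical spectra in two dimensions
```

Repeating the computation for a 3 × 3 nonnormal matrix breaks the spectral equality while preserving the two trace invariants, as the footnote states.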