## 1. Introduction

The ensemble Kalman filter (EnKF), introduced by Evensen (1994), is a Monte Carlo approximation to the traditional Kalman filter (KF; Kalman and Bucy 1961; Gelb et al. 1974). The EnKF uses an ensemble of forecasts to estimate background-error covariances. By providing flow- and location-dependent estimates of background error, the EnKF can more optimally adjust the background to newly available observations. In doing so, it may be able to produce analyses and forecasts that are much more accurate than current operational data assimilation schemes, which assume that the background error is known a priori and does not vary in time. The EnKF automatically provides a random sample of the estimated analysis-error distribution, which can then be used as initial conditions for an ensemble prediction system. The EnKF and four-dimensional variational data assimilation (4DVAR) are presently the two leading candidates to replace the current generation of three-dimensional variational (3DVAR) schemes [see Hamill et al. (2001) and references therein for a discussion of the merits of each approach].

Houtekamer and Mitchell (1998) recognized that in order for the EnKF to maintain sufficient spread in the ensemble and prevent filter divergence, the observations should be treated as random variables. They introduced the concept of using perturbed sets of observations to update each ensemble member. The perturbed observations consisted of the actual, or “control” observations plus random noise, with the noise randomly sampled from the observational-error distribution used in the data assimilation. Different ensemble members were updated using different sets of perturbed observations. Burgers et al. (1998) provided a theoretical justification for perturbing the observations in the EnKF, showing that if the observations are not treated as random variables, analysis error covariances are systematically underestimated, typically leading to filter divergence. Interestingly, in early implementations of the EnKF, perturbed observations (e.g., Evensen 1994; Evensen and van Leeuwen 1996) were not used, yet problems with filter divergence were not found. Most likely this was because for these oceanographic applications, observation errors were typically much smaller than either background errors or the system noise. The relative influence of noise associated with observational errors is much greater in atmospheric applications.

In this paper we discuss two important consequences of sampling error in ensemble data assimilation schemes based on the Kalman filter. These are associated with 1) the nonlinear dependence of the analysis-error covariance estimate on the background-error covariance estimate, and 2) the noise added to the observations. The former is a property of all Kalman filter-based schemes, while the latter is specific to algorithms, like the EnKF, that require perturbed observations. The consequences of the nonlinear dependence of the analysis-error covariance estimate on the background-error covariance estimate are similar to the “inbreeding” effect noted by Houtekamer and Mitchell (1998) and analyzed by van Leeuwen (1999) and Houtekamer and Mitchell (1999). The consequences of perturbing observations have not been thoroughly explored in previous critiques of the EnKF (e.g., Anderson 2001; van Leeuwen 1999).

Ensemble filtering approaches that do not require perturbed observations have been designed (e.g., Lermusiaux and Robinson 1999; Anderson 2001; Bishop et al. 2001). Here, we propose another ensemble filtering algorithm that does not require the observations to be perturbed, is conceptually simple, and, if observations are processed one at a time, is just as fast as the EnKF. We will demonstrate that this new algorithm, which we shall call the ensemble square root filter (EnSRF), is more accurate than the EnKF for a given ensemble size.

The remainder of the paper is organized as follows: section 2 provides background on the formulation of the EnKF, as well as a simple example of how nonlinearity in the covariance update and noise from perturbed observations can cause problems in ensemble data assimilation. Section 3 presents the EnSRF algorithm, and sections 4 and 5 discuss comparative tests of the EnKF and EnSRF in a low-order model and a simplified general circulation model (GCM), respectively.

## 2. Background

### a. Formulation of the ensemble Kalman filter

Let **x**^{b} be an *m*-dimensional background model forecast; let **y**^{o} be a *p*-dimensional set of observations; let 𝗵 be the operator that converts the model state to the observation space (here, assumed linear, though it need not be); let 𝗽^{b} be the *m* × *m*-dimensional background-error covariance matrix; and let 𝗿 be the *p* × *p*-dimensional observation-error covariance matrix. The minimum error-variance estimate of the analyzed state **x**^{a} is then given by the traditional Kalman filter update equation (Lorenc 1986),

**x**^{a} = **x**^{b} + 𝗸(**y**^{o} − 𝗵**x**^{b}),  (1)

where

𝗸 = 𝗽^{b}𝗵^{T}(𝗵𝗽^{b}𝗵^{T} + 𝗿)^{−1}  (2)

is the Kalman gain. The error covariance of the analyzed state 𝗽^{a} is reduced by the introduction of observations by an amount given by

𝗽^{a} = (𝗶 − 𝗸𝗵)𝗽^{b}.  (3)

Here, 𝗽^{a} = 〈(**x**^{t} − **x**^{a})(**x**^{t} − **x**^{a})^{T}〉, where **x**^{t} is the true state and 〈 · 〉 denotes the expected value. To derive Eq. (3), the definition of **x**^{a} is substituted from (1), and observation and background errors are assumed to be uncorrelated.
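For concreteness, the update equations (1)–(3) can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical dimensions and our own variable names, not the implementation used in the experiments below:

```python
import numpy as np

def kalman_update(xb, Pb, yo, H, R):
    """Traditional Kalman filter update, Eqs. (1)-(3).

    xb : (m,) background state;  Pb : (m, m) background-error covariance
    yo : (p,) observations;      H  : (p, m) linear forward operator
    R  : (p, p) observation-error covariance
    """
    # Eq. (2): Kalman gain K = Pb H^T (H Pb H^T + R)^{-1}
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
    # Eq. (1): analyzed state
    xa = xb + K @ (yo - H @ xb)
    # Eq. (3): analysis-error covariance
    Pa = (np.eye(len(xb)) - K @ H) @ Pb
    return xa, Pa

# Scalar sanity check: Pb = R = 1 gives K = 0.5, so xa = 0.5 and Pa = 0.5
xa, Pa = kalman_update(np.zeros(1), np.eye(1), np.ones(1), np.eye(1), np.eye(1))
```

The scalar check mirrors the simple example used in section 2b, where 𝗽^{b} = 𝗿 = 1 implies an exact analysis-error variance of 0.5.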

In the EnKF, 𝗽^{b} is approximated using the sample covariance from an ensemble of model forecasts. Hereafter, the symbol 𝗽 is used to denote the sample covariance from an ensemble, and 𝗸 is understood to be computed using sample covariances. Expressing the variables as an ensemble mean (denoted by an overbar) and a deviation from the mean (denoted by a prime), the update equations for the EnKF may be written as

**x̄**^{a} = **x̄**^{b} + 𝗸(**ȳ**^{o} − 𝗵**x̄**^{b}),  (4)

**x**′^{a} = **x**′^{b} + 𝗸(**y**′^{o} − 𝗵**x**′^{b}),  (5)

where

𝗽^{b} = (*n* − 1)^{−1} Σ^{*n*}_{*i*=1} **x**′^{b}_{*i*}**x**′^{bT}_{*i*},

*n* is the ensemble size, 𝗸 is the traditional Kalman gain given by Eq. (2), and **y**′^{o} are randomly drawn from the probability distribution of observation errors (Burgers et al. 1998). Note that wherever an overbar is used in the context of a covariance estimate, a factor of *n* − 1 instead of *n* is implied in the denominator, so that the estimate is unbiased. In the EnKF framework, there is no need to compute and store the full matrix 𝗽^{b}. Instead, 𝗽^{b}𝗵^{T} and 𝗵𝗽^{b}𝗵^{T} are estimated directly using the ensemble (Evensen 1994; Houtekamer and Mitchell 1998).
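A sketch of the EnKF update (4)–(5), estimating 𝗽^{b}𝗵^{T} and 𝗵𝗽^{b}𝗵^{T} directly from the ensemble without ever forming the full 𝗽^{b}. The function name, dimensions, and the choice of a random-number generator are our own illustration:

```python
import numpy as np

def enkf_update(Xb, yo, H, R, rng):
    """EnKF update with perturbed observations, Eqs. (4)-(5).

    Xb : (n, m) ensemble of background states
    yo : (p,) control observations; H : (p, m); R : (p, p)
    """
    n = Xb.shape[0]
    xmean = Xb.mean(axis=0)
    Xprime = Xb - xmean                 # deviations from the ensemble mean
    # Sample estimates of Pb H^T and H Pb H^T (full Pb is never stored)
    HX = Xprime @ H.T                   # (n, p) ensemble in observation space
    PbHT = Xprime.T @ HX / (n - 1)      # (m, p)
    HPbHT = HX.T @ HX / (n - 1)         # (p, p)
    K = PbHT @ np.linalg.inv(HPbHT + R)
    # Each member is updated with its own set of perturbed observations,
    # drawn from the observation-error distribution
    Yo = yo + rng.multivariate_normal(np.zeros(len(yo)), R, size=n)
    return Xb + (Yo - Xb @ H.T) @ K.T

rng = np.random.default_rng(0)
Xa = enkf_update(rng.standard_normal((20, 3)), np.zeros(2),
                 np.eye(2, 3), np.eye(2), rng)
```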

If each ensemble member is updated with the same observations (**y**′^{o} = **0**) using the same gain [𝗸 = 𝗽^{b}𝗵^{T}(𝗵𝗽^{b}𝗵^{T} + 𝗿)^{−1}], the analyzed ensemble covariance is

𝗽^{a} = (𝗶 − 𝗸𝗵)𝗽^{b}(𝗶 − 𝗸𝗵)^{T}.  (6)

The absence of the term 𝗸𝗿𝗸^{T} [which, for the optimal gain, restores Eq. (3)] causes 𝗽^{a} to be systematically underestimated. If random noise is added to the observations so that **y**′^{o} ≠ **0**, the analyzed ensemble variance is

𝗽^{a} = (𝗶 − 𝗸𝗵)𝗽^{b}(𝗶 − 𝗸𝗵)^{T} + 𝗸𝗿_{e}𝗸^{T} + (𝗶 − 𝗸𝗵)𝗰_{e}𝗸^{T} + 𝗸𝗰_{e}^{T}(𝗶 − 𝗸𝗵)^{T},  (7)

where 𝗿_{e} is the sample covariance of the observation perturbations **y**′^{o} and 𝗰_{e} is the sample covariance between **x**′^{b} and **y**′^{o}. If the observation noise is defined such that 〈**y**′^{o}**y**′^{oT}〉 = 𝗿, the expected value of 𝗽^{a} is equal to the traditional Kalman filter result [Eq. (3)], since the expected value of the background-observation-error covariance 〈**x**′^{b}**y**′^{oT}〉 is zero.

### b. The impact of sampling error on analysis-error covariance estimates

For a finite-sized ensemble, there is sampling error in the estimation of background-error covariances. In other words, 𝗽_{∞} ≠ 𝗽_{n}, where 𝗽_{n} is the error covariance obtained from an *n*-member ensemble and 𝗽_{∞} is the “exact” error covariance, defined to be that which would be obtained from an infinite ensemble. When observations are perturbed in the EnKF, there is also sampling error in the estimation of the observation-error covariances. We now explore how these can result in a misestimation of the analysis-error covariance in a simple experiment with a small ensemble.

To wit, consider an ensemble data assimilation system updating a one-dimensional state vector **x**^{b} with a new observation **y**^{o}. One million replications of a simple experiment are performed, in which we are interested solely in the accuracy of the analysis-error covariance. Here, **x**^{b} and **y**^{o} are assumed to represent the same quantity, so 𝗵 = 1. In each replication of the experiment, a five-member ensemble of **x**^{b} is created, with each ensemble member sampled randomly from a *N*(0, 1) distribution. The distribution of background-error variance estimates computed from these ensembles forms a chi-square distribution with four degrees of freedom (denoted by the solid curve in Fig. 1a). Since these estimates are unbiased, the mean of this distribution is equal to the true value (𝗽^{b} = 1). Two types of data assimilation methods are then tested. The first is the EnKF, in which each member is updated with an associated perturbed observation. Assuming 𝗿 = 1, the perturbed observations are randomly sampled from a *N*(0, 1) distribution and then adjusted so each set of perturbed observations has zero mean and unit variance. The second is a hypothetical ensemble filter that does not require perturbed observations (EnKF-noPO). For the EnKF-noPO, once the background-error covariance is estimated from the randomly sampled ensemble, we simply assume that the analysis-error covariance can be predicted by Eq. (3).

Let us first illustrate the consequences of sampling error associated with the nonlinear dependence of the analysis-error covariance on the background-error covariance. After the first observation, Eq. (3) simplifies to 𝗽^{a} = (1 − 𝗸)𝗽^{b} = (1 − *γ*𝗽^{b})𝗽^{b}, where *γ* = 1/(𝗽^{b} + 𝗿). Because of sampling error, 𝗽^{b} is a random variable, so 𝗽^{a} is a random variable as well, even in the absence of perturbed observations. The distribution of 𝗽^{a} after the assimilation of a single observation in the EnKF-noPO is shown by the dashed curve in Fig. 1a. As expected, the effect of the observation is to reduce the ensemble variance, as evidenced by the higher probabilities at lower values of the error variance for 𝗽^{a} (dashed curve), as compared to 𝗽^{b} (solid curve). For this simple scalar example, Eq. (3) is akin to a power transformation applied to the sampling distribution for 𝗽^{b} (e.g., Wilks 1995, his section 3.4). This particular transformation reduces 𝗽^{a} associated with large 𝗽^{b} much more than it reduces it for small 𝗽^{b}. In this case, the expected value for 𝗽^{b} is 1.0, and the expected value of 𝗽^{a} is 0.5. However, the mean of the 𝗽^{a} distribution is ∼0.44, indicating the distribution of 𝗽^{a} is biased even though the distribution of 𝗽^{b} was not. This effect is related to the so-called inbreeding problem noted by Houtekamer and Mitchell (1998) and analyzed by van Leeuwen (1999).
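The replication experiment above is easy to reproduce. The sketch below follows the stated setup (five members, 𝗽^{b} = 𝗿 = 1, exact 𝗽^{a} = 0.5) but uses fewer replications than the million used in the paper; variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
nrep, nens = 200_000, 5            # replications, ensemble size
# Background ensembles: nens draws from N(0, 1); exact Pb = R = 1 implies
# K = 0.5 and an exact analysis-error variance Pa = (1 - K)Pb = 0.5.
xb = rng.standard_normal((nrep, nens))
xb -= xb.mean(axis=1, keepdims=True)
pb = (xb ** 2).sum(axis=1) / (nens - 1)   # sample background variance
K = pb / (pb + 1.0)                       # gain from sample Pb, R = 1

# EnKF-noPO: analysis-error variance predicted directly by Eq. (3)
pa_nopo = (1.0 - K) * pb

# EnKF: perturbed observations, adjusted to zero mean and unit variance
yo = rng.standard_normal((nrep, nens))
yo -= yo.mean(axis=1, keepdims=True)
yo /= yo.std(axis=1, ddof=1, keepdims=True)
xa = xb + K[:, None] * (yo - xb)          # scalar form of Eq. (5)
pa_enkf = (xa ** 2).sum(axis=1) / (nens - 1)
```

Both distributions of 𝗽^{a} have the same low-biased mean (near 0.44 rather than 0.5), while the perturbed observations make the EnKF estimate noisier, so its mean absolute error is larger than that of the EnKF-noPO.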

Since this effect is purely a result of sampling error in the estimation of 𝗽^{b} and not perturbing the observations, all ensemble data assimilation schemes based on the Kalman filter should suffer from its consequences, not just the EnKF. For a finite ensemble, ensemble-mean errors will, on average, be larger than the “exact” value of 𝗽^{a}. Since the bias associated with the nonlinear dependency of 𝗸 on 𝗽^{b} results in a systematic underestimation of exact 𝗽^{a}, the ensemble analysis-error covariance will systematically underestimate the actual ensemble mean analysis error. In ensemble-based data assimilation systems, if the analysis-error covariances systematically underestimate ensemble mean analysis error, observations will be underweighted, and filter divergence may occur. This is why methods for increasing ensemble variance, such as the covariance inflation technique described by Anderson and Anderson (1999) and the “double EnKF” proposed by Houtekamer and Mitchell (1998), are necessary.

Problems caused by the nonlinearity in the covariance update (3) should lessen for larger ensembles. In that case, the sampling distribution for 𝗽^{b} becomes narrower and more symmetric about its mean value, and the power transformation associated with the nonlinearity in the Kalman gain results in a 𝗽^{a} distribution, which is itself more symmetric and less biased.

For the EnKF, 𝗽^{a} is given by Eq. (7), which in this experiment simplifies to

𝗽^{a} = (1 − 𝗸)^{2}𝗽^{b} + 𝗸^{2}𝗿_{e} + 2𝗸(1 − 𝗸)𝗰_{e} = *ζ*_{1} + *ζ*_{2} + *ζ*_{3},  (8)

where 𝗿_{e} is the sample variance of the observation noise and 𝗰_{e} is the sample covariance between the background and observation perturbations.

The distribution of this quantity is shown by the dotted curve in Fig. 1a. Note that the EnKF 𝗽^{a} distribution has the same mean (∼0.44) as the EnKF-noPO distribution, since it suffers from the same problem associated with the nonlinearity in the covariance update. However, the distribution of 𝗽^{a} for the EnKF is broader than that for the EnKF-noPO, indicating that the sampling error is larger and the analysis-error covariance estimate is less accurate, on average. Indeed, the mean absolute error of the estimated 𝗽^{a} is ∼0.24 for the EnKF, as compared to ∼0.14 for the EnKF-noPO. Figure 2 shows the mean absolute error of the estimated 𝗽^{a} for the EnKF and EnKF-noPO as a function of ensemble size. The horizontal line on this figure highlights the fact that a 13-member EnKF ensemble is necessary to obtain an analysis-error covariance estimate that is as accurate as that obtained from a 5-member EnKF-noPO ensemble.

From Fig. 1a, it is also clear that very small values of 𝗽^{a} are significantly more probable after an application of the EnKF with perturbed observations than after an application of a hypothetical EnKF-noPO. As a consequence, there is a higher probability that the estimated 𝗽^{a} will be less than the exact value when observations are perturbed (62% vs 59% for the EnKF-noPO). To understand this, let us consider the contribution of each term in Eq. (8). Since these terms all contain **x**′^{b}, they are not independent and one cannot infer the probability distribution of 𝗽^{a} [*P*(𝗽^{a})] from the unconditional distributions of *ζ*_{1}, *ζ*_{2}, and *ζ*_{3}. Fig. 1b shows the dependence of *P*(*ζ*_{3}) on *ζ*_{1} + *ζ*_{2}. When *ζ*_{1} + *ζ*_{2} is large, so is the variance of *ζ*_{3}. The solid curve in Fig. 1c shows *P*(*ζ*_{1} + *ζ*_{2}) after the assimilation of the perturbed observations. Overplotted on this are curves of *P*(*ζ*_{3}) for two different values of *ζ*_{1} + *ζ*_{2}. *P*(*ζ*_{3}) is broader when *ζ*_{1} + *ζ*_{2} is larger, and the probability of adding noise with a certain variance is proportional to the probability of a given *ζ*_{1} + *ζ*_{2}.

Consider the effect of the noise term *ζ*_{3} on the probability of 𝗽^{a} having the value indicated at the point labeled “a.” The noise term *ζ*_{3} in 𝗽^{a} when *ζ*_{1} + *ζ*_{2} has the value indicated at point “b” spreads out probability, increasing *P*(𝗽^{a}) at a. However, when *ζ*_{1} + *ζ*_{2} has the value given at a, the noise term does not increase *P*(𝗽^{a}) at b in equal measure. Hence, the addition of noise from term *ζ*_{3} can change the shape of the distribution of 𝗽^{a}, further skewing the distribution of 𝗽^{a} if the distribution of *ζ*_{1} + *ζ*_{2} is already skewed.

Houtekamer and Mitchell (1998) introduced the concept of a double EnKF, in which two ensemble data assimilation cycles are run in parallel with the background-error covariances from one ensemble being used to update the other. The double EnKF was designed to combat the biases introduced by the nonlinear dependence of the analysis-error covariance on the background-error covariance. We have tested the double EnKF in this simple scalar model and have found that, as predicted by van Leeuwen (1999), it actually reverses the sign of this bias, resulting in an ensemble whose mean analysis-error variance is an overestimate of the “exact” value (which would be obtained with an infinite ensemble) but is a very accurate estimate of the actual ensemble-mean analysis error. However, the effects of the extra sampling error associated with perturbed observations are not mitigated by the double EnKF, and the mean absolute error in the estimation of 𝗽^{a} for the double EnKF with 2*n* members is very similar to the single EnKF with *n* members (Fig. 2).

In summary, this simple example shows how perturbed observations can increase the sampling error of an ensemble data assimilation system. Pham (2001) introduced a method for perturbing observations that guarantees that the background-observation-error covariance terms in Eq. (7) are zero. Under these circumstances, the deleterious effects of the perturbed observations just discussed should vanish. However, the computational cost of constraining the perturbed observations in this way would be significant in a model with many degrees of freedom. Therefore, an ensemble data assimilation system that does not require perturbed observations to maintain sufficient ensemble variance is desirable, and should be both more accurate and less susceptible to filter divergence than the traditional EnKF with unconstrained perturbed observations.

## 3. An ensemble square root filter

In the EnSRF, the ensemble mean is updated using Eqs. (1) and (2), while the deviations from the mean are updated with a modified gain 𝗸̃,

**x**′^{a} = **x**′^{b} − 𝗸̃𝗵**x**′^{b} = (𝗶 − 𝗸̃𝗵)**x**′^{b}.

We seek a definition for 𝗸̃ such that the sample covariance of the updated deviations satisfies the traditional Kalman filter result for 𝗽^{a} [given by Eq. (3)]. This yields an equation for 𝗸̃,

(𝗶 − 𝗸̃𝗵)𝗽^{b}(𝗶 − 𝗸̃𝗵)^{T} = (𝗶 − 𝗸𝗵)𝗽^{b}.  (9)

One solution to Eq. (9) is

𝗸̃ = 𝗽^{b}𝗵^{T}[((𝗵𝗽^{b}𝗵^{T} + 𝗿)^{1/2})^{T}]^{−1}[(𝗵𝗽^{b}𝗵^{T} + 𝗿)^{1/2} + 𝗿^{1/2}]^{−1}.  (10)

Since this involves computing square roots of *p* × *p* error-covariance matrices (where *p* is the total number of observations), this is essentially a Monte Carlo implementation of a square root filter (Maybeck 1979). For this reason, we call this algorithm the ensemble square root filter, or EnSRF. The matrix square roots in Eq. (10) are not unique; they can be computed in different ways, such as Cholesky factorization or singular value decomposition. Square root filters were first developed for use in small computers on board aircraft, where word-length restrictions required improved numerical precision and stability. These requirements outweighed the computational overhead required to compute the square root matrices, which can be significant.

When observations are processed one at a time, 𝗵𝗽^{b}𝗵^{T} and 𝗿 reduce to scalars, and Eq. (9) may be written

(𝗶 − 𝗸̃𝗵)𝗽^{b}(𝗶 − 𝗸̃𝗵)^{T} = (𝗶 − 𝗸𝗵)𝗽^{b}, with 𝗵𝗽^{b}𝗵^{T} and 𝗿 scalars.  (11)

If 𝗸̃ = *α*𝗸, where *α* is a constant, then 𝗸𝗸^{T} may be factored out of the above equation, resulting in a scalar quadratic for *α*,

*α*^{2}(𝗵𝗽^{b}𝗵^{T}) − 2*α*(𝗵𝗽^{b}𝗵^{T} + 𝗿) + (𝗵𝗽^{b}𝗵^{T} + 𝗿) = 0.  (12)

Since we want the deviations from the ensemble mean to be reduced in magnitude while maintaining the same sign, we choose the solution to Eq. (12) that is between 0 and 1. This solution is

*α* = [1 + √(𝗿/(𝗵𝗽^{b}𝗵^{T} + 𝗿))]^{−1}.  (13)

This was first derived by J. Potter in 1964 (Maybeck 1979). Here, 𝗵𝗽^{b}𝗵^{T} and 𝗿 are scalars representing the background and observational-error variance at the observation location. Using 𝗸̃ = *α*𝗸 to update the deviations from the ensemble mean is equivalent to using Eq. (5) with 𝗸 replaced by *α*𝗸 and **y**′^{o} = **0**. If 𝗿 is diagonal, observations may be processed one at a time, and the EnSRF requires no more computation than the traditional EnKF with perturbed observations. In fact, as noted by Maybeck (1979, p. 375), the most efficient way of updating the deviations from the mean in the square root filter is to process the measurements one at a time using the Potter algorithm described above, due to the extra overhead of computing matrix square roots when observations are processed in batches. If observation errors are correlated (𝗿 is not diagonal), the variables may be transformed into the space in which 𝗿 is diagonal, or observations with correlated errors may be processed in batches using Eq. (10).
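The serial update for a single observation can be sketched as follows. We assume, for illustration, that 𝗵 simply picks out the observed state element `obs_idx`; the function name and dimensions are ours:

```python
import numpy as np

def ensrf_update_single(Xb, yo, obs_idx, r):
    """Serial EnSRF update for one observation of state element obs_idx.

    Xb: (n, m) ensemble. The mean is updated with the Kalman gain K;
    the deviations are updated with the reduced gain alpha*K, Eq. (13).
    """
    n = Xb.shape[0]
    xmean = Xb.mean(axis=0)
    Xp = Xb - xmean                        # deviations from the mean
    hx = Xp[:, obs_idx]                    # ensemble in observation space
    hpbht = hx @ hx / (n - 1)              # scalar H Pb H^T
    pbht = Xp.T @ hx / (n - 1)             # (m,) vector Pb H^T
    K = pbht / (hpbht + r)                 # Eq. (2) for a single observation
    alpha = 1.0 / (1.0 + np.sqrt(r / (hpbht + r)))   # Eq. (13)
    xmean = xmean + K * (yo - xmean[obs_idx])        # mean update, Eq. (1)
    Xp = Xp - alpha * np.outer(hx, K)                # deviation update
    return xmean + Xp

# Sanity check: the analyzed variance of the observed element should match
# the Kalman filter result (1 - K) H Pb H^T from Eq. (3)
rng = np.random.default_rng(2)
Xb = rng.standard_normal((50, 4))
Xp0 = Xb - Xb.mean(axis=0)
s = Xp0[:, 1] @ Xp0[:, 1] / 49.0          # sample H Pb H^T
Xa = ensrf_update_single(Xb, yo=0.3, obs_idx=1, r=1.0)
Xap = Xa - Xa.mean(axis=0)
va = Xap[:, 1] @ Xap[:, 1] / 49.0         # analyzed variance at obs location
```

The check exploits the identity (1 − *α*𝗸𝗵)^{2} = 1 − 𝗸𝗵 in the scalar case, which holds exactly for the *α* of Eq. (13).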

The Kalman filter equations [Eqs. (1), (2), and (3)] are derived assuming that the forward operator 𝗵, which interpolates the first guess to the observation locations, is linear. In an operational setting, 𝗵 may be a nonlinear operator, such as a radiative transfer model that converts temperature, humidity, and ozone concentrations into radiances. Houtekamer and Mitchell (2001) noted that, since in the EnKF 𝗵 is applied to the background fields individually (instead of to the covariance matrix 𝗽^{b} directly), it is possible to use a nonlinear 𝗵. However, using a nonlinear 𝗵 (as opposed to a linearized 𝗵) in the EnKF may not necessarily improve the analysis, since it violates the assumptions used to derive the analysis equations. If, as suggested by Houtekamer and Mitchell (2001), 𝗵 is applied to the background fields individually, before computation of 𝗽^{b}𝗵^{T} and 𝗵𝗽^{b}𝗵^{T}, then the derivation of Eq. (13) applies even when 𝗵 is nonlinear. However, all of the results presented in this paper are for linear 𝗵; we have not tested the behavior of the EnSRF for nonlinear 𝗵.

The ensemble adjustment filter (EnAF) of Anderson (2001), the error-subspace statistical estimation method (ESSE) of Lermusiaux and Robinson (1999), and the ensemble transform Kalman filter (ETKF) of Bishop et al. (2001) all use the traditional Kalman filter update equation [Eq. (1)] to update the ensemble mean, and constrain the updated ensemble covariance to satisfy Eq. (3). Therefore, they are all functionally equivalent, differing only in algorithmic details. In the EnAF, a linear operator is applied to the background deviations so that the updated ensemble satisfies Eq. (3). In ESSE, the update is computed in the eigenspace of a truncated decomposition of 𝗽^{b}; a random sample of the updated eigenspectrum is then used as initial conditions for the next ensemble forecast. In the ETKF, the matrix square roots are computed in the subspace spanned by the ensemble. Here we have shown that all of these methods have a common ancestor, the square root filter developed in the early 1960s for aeronautical guidance systems with low precision. By applying the algorithm of Potter (1964), we have presented a computationally simple way of constraining the updated ensemble covariance that is no more expensive than the traditional EnKF when observations are processed serially, one at a time. Sequential processing of observations also makes it much easier to implement covariance localization, which improves the analysis while preventing filter divergence in small ensembles (Houtekamer and Mitchell 2001; Hamill et al. 2001).

Burgers et al. (1998) showed that the covariance estimates produced by EnKF with perturbed observations asymptote to the traditional extended Kalman filter result as the number of ensemble members tends to infinity. For ensemble data assimilation schemes based on the square root filter without perturbed observations, the extended Kalman filter result is obtained when the number of ensemble members equals the number of state variables (C. Bishop 2001, personal communication). This implies that in such cases the ensemble and the true state cannot be considered to be drawn from the same probability distribution. However, in most real-world applications this is not an issue since the number of ensemble members is likely to be far less than the number of state variables.

## 4. Results with the 40-variable Lorenz model

The simple example presented in section 2b demonstrated that perturbing the observations in the EnKF can have a potentially serious impact on filter performance. In this section we attempt to quantify this impact by comparing results with the EnKF and EnSRF in the 40-variable dynamical system of Lorenz and Emanuel (1998), and then in an idealized general circulation model with roughly 2000 state variables in section 5.

The governing equations of the Lorenz and Emanuel (1998) model are

d*X*_{i}/d*t* = (*X*_{i+1} − *X*_{i−2})*X*_{i−1} − *X*_{i} + *F*,

for *i* = 1, … , *m* with cyclic boundary conditions. Here we use *m* = 40, *F* = 8, and a fourth-order Runge–Kutta time integration scheme with a time step of 0.05 nondimensional units. For this parameter setting, the leading Lyapunov exponent implies an error-doubling time of about 8 time steps, and the fractal dimension of the attractor is about 27 (Lorenz and Emanuel 1998). For these assimilation experiments, each state variable is observed directly, and observations have uncorrelated errors with unit variance. A 10-member ensemble is used, and observations are assimilated every time step for 50 000 time steps (after a spinup period of 1000 time steps). Observations are processed serially (one after another) for the EnSRF, and simultaneously (by inverting a 40 × 40 matrix) for the EnKF.
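The model tendencies and the RK4 integration described above can be sketched as follows (standard Lorenz–Emanuel setup; the spinup from a perturbed rest state is our own illustration):

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """dX_i/dt = (X_{i+1} - X_{i-2}) X_{i-1} - X_i + F, cyclic in i."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta step of length dt."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Spin up a 40-variable "truth" run from a slightly perturbed rest state
x = np.full(40, 8.0)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x)
```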

The rms error of the ensemble mean, *E*_{1}, is defined as

*E*_{1} = {(1/*m*) Σ^{*m*}_{*i*=1} [(1/*n*) Σ^{*n*}_{*j*=1} *X*^{j}_{i} − *X*^{true}_{i}]^{2}}^{1/2},

where *n* = 10 is the number of ensemble members, *m* = 40 is the number of state variables, *X*^{j}_{i} is the *j*th ensemble member for the *i*th variable, and *X*^{true}_{i} is the “true” state from which the observations were sampled. Here, *E*_{2} is the average rms error of each ensemble member,

*E*_{2} = (1/*n*) Σ^{*n*}_{*j*=1} {(1/*m*) Σ^{*m*}_{*i*=1} (*X*^{j}_{i} − *X*^{true}_{i})^{2}}^{1/2}.

The rms ratio *R* = *E*_{1}/*E*_{2} is a measure of how similar the truth is to a randomly selected member of the ensemble (Anderson 2001). If the truth is statistically indistinguishable from any ensemble member, then the expected value of *R* is [(*n* + 1)/2*n*]^{1/2}, which is 0.74 for *n* = 10. When *R* < 0.74 (*R* > 0.74) the ensemble variance overestimates (underestimates) the error of the ensemble mean (*E*_{1}).
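The two error measures and the rms ratio are direct transcriptions of the definitions above. The sanity check draws truth and members from the same distribution, in which case the mean ratio should be near [(n + 1)/2n]^{1/2} ≈ 0.74 for n = 10:

```python
import numpy as np

def rms_stats(X, x_true):
    """X: (n, m) ensemble; x_true: (m,) verifying state.

    Returns (E1, E2, R): rms error of the ensemble mean, average rms
    error of the members, and the rms ratio R = E1/E2."""
    e1 = np.sqrt(np.mean((X.mean(axis=0) - x_true) ** 2))
    e2 = np.mean(np.sqrt(np.mean((X - x_true) ** 2, axis=1)))
    return e1, e2, e1 / e2

# Truth statistically indistinguishable from the members: R should
# average to roughly sqrt(11/20) = 0.74 for a 10-member ensemble
rng = np.random.default_rng(3)
vals = [rms_stats(rng.standard_normal((10, 40)), rng.standard_normal(40))[2]
        for _ in range(2000)]
```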

As discussed in section 2b, sampling error can cause filter divergence in any ensemble data assimilation system, so some extra processing of the ensemble covariances is almost always necessary. The two techniques used here are distance-dependent covariance filtering (Houtekamer and Mitchell 2001; Hamill et al. 2001) and covariance inflation (Anderson and Anderson 1999).

Covariance filtering counters the tendency for ensemble variance to be excessively reduced by spurious long-range correlations between analysis and observation points by applying a filter that forces the ensemble covariances to go to zero at some distance *L* from the observation being assimilated. In the EnSRF some care must be taken when applying covariance localization, since the derivation of Eq. (13) assumes that there has been no filtering of the background-error covariances used to compute the Kalman gain. In fact, if covariance localization is applied in the computation of 𝗸 and Eq. (13) is used to update the ensemble deviations, the analysis-error covariances will only satisfy Eq. (3) if the covariance localization function is a Heaviside step function, that is, the background-error covariances are set to zero beyond a certain distance *L* from the observation and are unmodified otherwise. However, we have found that using smoother covariance localization functions, such as the function given by Eq. 4.10 in Gaspari and Cohn (1999), while not strictly constraining the ensemble to satisfy Eq. (3), actually produces a more accurate analysis than the Heaviside function. Hence, all of the results presented here use the Gaspari–Cohn function for covariance localization.
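The compactly supported correlation function of Gaspari and Cohn (1999, their Eq. 4.10) can be written as the fifth-order piecewise rational function below (a sketch with our variable names; `z` is separation distance and `c` the half-width, so correlations vanish beyond 2*c*):

```python
import numpy as np

def gaspari_cohn(z, c):
    """Gaspari-Cohn (1999) Eq. 4.10 localization function:
    equals 1 at z = 0 and is exactly 0 for z >= 2c."""
    r = np.abs(np.asarray(z, dtype=float)) / c
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    out[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                  - (5.0 / 3.0) * ri**2 + 1.0)
    ro = r[outer]
    out[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                  + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0
                  - (2.0 / 3.0) / ro)
    return out

# Correlation weights at 0, c, 2c, and 3c for a 1000-km half-width
vals = gaspari_cohn(np.array([0.0, 1000.0, 2000.0, 3000.0]), 1000.0)
```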

The nonlinear dependence of the gain matrix on the background-error covariance will result in a biased estimate of analysis error, and the analysis system will weight the first-guess forecasts too heavily on average. Underestimation of the background-error covariances has a more severe impact on analysis error than overestimation (Daley 1991, section 4.9). To compensate for this, covariance inflation simply inflates the deviations from the ensemble-mean first-guess by a small constant factor *r* for each member of the ensemble, before the computation of the background-error covariances and before any observations are assimilated. The double EnKF introduced by Houtekamer and Mitchell (1998) combats this problem by using parallel ensemble data assimilation cycles in which the background-error covariances estimated by one ensemble are used to calculate the gain for the other. In the appendix, we develop a double EnSRF and present results as a function of covariance filter length scale for the 40-variable model. The results are qualitatively similar to those shown here for the “single” filters with covariance inflation. However, as noted in the appendix, the double EnSRF requires significantly more computation than the single EnSRF, since the simplifications to Eq. (9) obtained for serial processing of observations are no longer applicable.
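Covariance inflation itself is a one-line operation on the background deviations, applied before the covariances are computed (a sketch; the inflation factor *r* is slightly greater than 1):

```python
import numpy as np

def inflate(Xb, r=1.03):
    """Multiply deviations from the ensemble mean by r (> 1), leaving
    the ensemble mean unchanged; background-error variances grow by r**2."""
    xmean = Xb.mean(axis=0)
    return xmean + r * (Xb - xmean)

rng = np.random.default_rng(4)
Xb = rng.standard_normal((10, 5))
Xi = inflate(Xb, 1.03)
```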

Figure 3 shows *E*_{1} averaged over 50 000 assimilation cycles for the EnKF and EnSRF, as a function of the covariance inflation factor and the length scale of the covariance localization filter. The shaded areas on these plots indicate regions in parameter space where the filter has diverged, that is, has drifted into a regime where it effectively ignores observations. For both the EnKF and EnSRF, filter divergence occurs for a given covariance filter length scale *L* when *r* is less than a critical value. However, the EnSRF appears to be less susceptible to filter divergence, since the critical value of the covariance inflation factor is always smaller for a given *L.* Overall, for almost any parameter value, the error in the EnSRF is less than that in the EnKF. The minimum error in the EnSRF is 0.2, which occurs at *L* = 24 for *r* = 1.03. For the EnKF, the minimum error is 0.26, which occurs at *L* = 15 for *r* = 1.07. These results are consistent with those presented for the simple scalar example in section 2b. The observational noise terms in Eq. (7) increase the sampling error in the estimation of the error covariances in the EnKF, so that a larger value of the covariance inflation factor *r* is needed to combat filter divergence. The reduced sampling error in the EnSRF allows the algorithm to extract more useful covariance information from the ensemble for a given ensemble size. This is consistent with the fact that larger values of *L* benefit the EnSRF, but are detrimental to the EnKF.

The results of section 2b suggest that, because of reduced sampling error, the EnSRF should produce a more accurate estimate of analysis-error variance than the EnKF for a given ensemble size. Therefore, if both the EnKF and the EnSRF start with the same 𝗽^{b}, the EnSRF should produce a more accurate 𝗽^{a} when the ensemble is updated with observations. After propagating the resulting ensembles forward to the next analysis time with a model, the EnSRF ensemble should therefore yield a more accurate estimate of 𝗽^{b}, thus producing an ensemble mean with lower error after the next set of observations is assimilated, even though the expected analysis-error variance is the same as obtained with the EnKF. The ratio of ensemble-mean error to analysis-error variance should therefore be larger in the EnKF than the EnSRF. Figure 4 shows the rms ratio for the 40-variable model experiments. For a 10-member ensemble, the expected value for an ensemble that faithfully represents the true underlying probability distribution is 0.74. If the rms ratio is smaller (larger) than this value, the ensemble overestimates (underestimates) the uncertainty in the analysis. For nearly all parameter settings the EnSRF has a lower rms ratio than the EnKF, indicating that, as expected, the variance in the EnKF ensemble is indeed smaller relative to the ensemble-mean analysis error.

## 5. Results with an idealized two-level GCM

The results shown thus far have been for very simple models, where the number of ensemble members, the number of observations, and the number of model variables are all of the same order. In this section we will present results with an idealized primitive equation GCM with roughly 2000 degrees of freedom, in order to demonstrate that these results also apply to more realistic higher-dimensional systems.

The model is virtually identical to the two-level (250 and 750 hPa) model described by Lee and Held (1993), run at T31 horizontal resolution with hemispheric symmetry imposed. The prognostic variables of the model are baroclinic and barotropic vorticity, baroclinic divergence and barotropic potential temperature, yielding 2047 state variables in all. Barotropic divergence is assumed to be zero, and the baroclinic potential temperature is set to a constant value of 10 K. The lower-level winds are subjected to a mechanical damping with a timescale of 4 days, and the baroclinic potential temperature is relaxed back to a radiative equilibrium state with a pole-to-equator temperature difference of 80 K with a timescale of 20 days. The functional form of the radiative equilibrium profile is as given by Eq. (3) in Lee and Held (1993). A quad-harmonic (∇^{8}) diffusion is applied to all prognostic variables, and the coefficient is chosen so that the smallest resolvable scale is damped with an *e*-folding timescale of 3 h. An explicit fourth-order Runge–Kutta time integration scheme is used, with 14 model time steps per day. Based on the leading Lyapunov exponent, the error-doubling time of this model is about 2.4 days.

Starting with a random perturbation superimposed upon a resting state, the model is integrated for 400 days. The first 200 days are discarded. Observations are generated by sampling the model output at selected locations every 12 h, with random noise added to simulate observation error. There are 46 observation locations, defined by a geodesic grid such that points are roughly equally spaced on the surface of the hemisphere at an interval of approximately 2300 km (Fig. 5). At each observation location, there are observations of winds at 250 and 750 hPa and temperature at 500 hPa. The error standard deviations for wind and temperature are set to 1 m s^{−1} and 1 K, respectively. Here, 20-member ensembles are used, and the initial ensemble consists of randomly selected model states from the reference integration. Assimilation experiments are run for 180 days, and statistics are computed over the last 120 days. Experiments were conducted using the EnSRF and the EnKF with perturbed observations for a range of covariance filter length scales and covariance inflation factors. Observations were processed serially in both the EnSRF and the EnKF. EnKF experiments with simultaneous observation processing for selected parameter settings produced nearly identical results, so serial processing was used because it runs faster and requires less memory.

Figures 6 and 7 summarize the results for 500-hPa potential temperature rms error and rms ratio as a function of covariance filter length scale and covariance inflation factor. The gray-shaded areas indicate filter divergence. The results are qualitatively similar to those obtained with the 40-variable Lorenz model. The minimum rms error for the EnSRF is 0.096 K, which occurs for a covariance filter length scale of 9500 km and a covariance inflation factor of 1.05. For the EnKF, the minimum error is 0.125 K, which occurs at *L* = 7500 km and *r* = 1.10. The rms ratios are generally larger for the EnKF, indicating that its ensemble variance is smaller, relative to its ensemble mean error, than that of the EnSRF. The EnSRF is also less susceptible to filter divergence than the EnKF, in agreement with the 40-variable Lorenz model results.

## 6. Conclusions

If the same gain and the same observations are used to update each ensemble member in an ensemble data assimilation system, analysis-error covariances will be systematically underestimated, and filter divergence may occur. To overcome this, an ensemble of perturbed observations can be assimilated, whose statistics reflect the known observation errors. In the limit of infinite ensemble size, in a system where all sources of error (both observation and model) are correctly sampled, this approach yields the correct analysis-error covariances. However, when ensemble sizes are finite, the noise added to the observations produces spurious observation-background error covariances associated with sampling error in the estimation of the observation-error covariances. Using a very simple example, we demonstrate how sampling error in both the background and observational error covariances can affect the accuracy of analysis-error covariance estimates in ensemble data assimilation systems. Because of the nonlinear dependence of the analysis-error covariance on the background-error covariance, sampling error in the estimation of background-error covariances will cause analysis-error covariances to be underestimated on average. Thus, techniques to boost ensemble variance are almost always necessary in ensemble data assimilation to prevent filter divergence. The addition of noise to the observations in an ensemble data assimilation system has two primary effects: 1) reducing the accuracy of the analysis-error covariance estimate by increasing the sampling error, and 2) increasing the probability that the analysis-error covariance will be underestimated by the ensemble. The latter is a consequence of the fact that the perturbed observations act as a source of multiplicative noise in the analysis system, since the amplitude of the noise in the analysis-error covariances is proportional to the background-error estimate. 
Underestimation of error covariances is undesirable and has a larger impact on analysis error than a commensurate overestimation. Given these effects, we expect ensemble data assimilation methods that use perturbed observations to have higher errors, for a given ensemble size, than comparable systems without perturbed observations. Designing an ensemble data assimilation system that does not require perturbed observations is therefore a desirable goal.

Various techniques for ensemble data assimilation that do not require perturbing the observations have recently been proposed. We have designed and tested one such algorithm, called the ensemble square root filter, or EnSRF. The algorithm avoids the systematic underestimation of the posterior covariance that led to the use of perturbed observations in the EnKF by using different Kalman gains to update the ensemble mean and the deviations from the ensemble mean. The traditional Kalman gain is used to update the ensemble mean, and a “reduced” Kalman gain is used to update deviations from the ensemble mean. The reduced Kalman gain is simply the traditional Kalman gain times a factor between 0 and 1 that depends only on the observation- and background-error variances at the observation location. This scalar factor is determined by the requirement that the analysis-error covariance match what would be predicted by the traditional Kalman filter covariance update equation [Eq. (3)], given the current background-error covariance estimate. The EnSRF algorithm proposed here processes the observations serially, one at a time. This implementation is attractive because, in addition to being algorithmically simple, it avoids the need to compute matrix square roots; it thus requires no more computation than the EnKF with perturbed observations and serial observation processing.
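The serial update described above can be sketched as follows, assuming for simplicity an observation operator that selects a single state element (function and variable names are ours, not the paper's):

```python
import numpy as np

def ensrf_serial(ens, obs, obs_idx, obs_var):
    """Serial ensemble square root filter sketch.
    ens: (n_members, n_state) background ensemble.
    obs[k] observes state element obs_idx[k] with error variance obs_var[k]."""
    ens = ens.copy()
    for y, j, r in zip(obs, obs_idx, obs_var):
        mean = ens.mean(axis=0)
        dev = ens - mean                            # deviations from the mean
        hx_dev = dev[:, j]                          # observation-space deviations
        phb = dev.T @ hx_dev / (len(ens) - 1)       # P^b H^T (a vector)
        hpbh = hx_dev @ hx_dev / (len(ens) - 1)     # H P^b H^T (a scalar)
        K = phb / (hpbh + r)                        # traditional Kalman gain
        alpha = 1.0 / (1.0 + np.sqrt(r / (hpbh + r)))
        mean = mean + K * (y - mean[j])             # mean updated with K
        dev = dev - np.outer(hx_dev, alpha * K)     # deviations updated with alpha*K
        ens = mean + dev
    return ens
```

For a single observation, the analysis-ensemble variance at the observed element reduces to *r*·*P*ᵇ/(*P*ᵇ + *r*), exactly what the Kalman filter covariance update predicts from the sampled background variance.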

The benefits of ensemble data assimilation without perturbed observations are demonstrated by comparing the EnKF and the EnSRF in a hierarchy of models, ranging from a simple scalar model to an idealized primitive equation GCM with *O*(10^{3}) state variables. In all cases, the EnSRF produces an analysis ensemble whose mean error is lower than that of the EnKF for the same ensemble size.

The EnSRF as formulated here requires observations to be processed one at a time, which may pose quite a challenge in an operational setting where observations can number in the millions. It will be important to develop algorithms that allow observations at locations where background errors are uncorrelated to be processed in parallel. The treatment of model error, which we have not considered in this study, will likely be a crucial element of any future operational system. These will continue to be active areas of research as ensemble data assimilation methods are implemented in more complex and realistic systems.

Fruitful discussions with Jeffrey Anderson, Thomas Bengtsson, Gilbert Compo, Doug Nychka, Chris Snyder, and Michael Tippett are gratefully acknowledged, as are the detailed and thoughtful reviews by Craig Bishop and Peter Houtekamer.

## REFERENCES

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. *Mon. Wea. Rev.*, **129**, 2884–2903.

Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. *Mon. Wea. Rev.*, **127**, 2741–2758.

Andrews, A., 1968: A square root formulation of the Kalman covariance equations. *AIAA J.*, **6**, 1165–1168.

Bishop, C. H., B. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. *Mon. Wea. Rev.*, **129**, 420–436.

Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. *Mon. Wea. Rev.*, **126**, 1719–1724.

Daley, R., 1991: *Atmospheric Data Assimilation*. Cambridge University Press, 457 pp.

Evensen, G., 1994: Sequential data assimilation with a nonlinear quasigeostrophic model using Monte Carlo methods to forecast error statistics. *J. Geophys. Res.*, **99** (C5), 10143–10162.

Evensen, G., and P. J. van Leeuwen, 1996: Assimilation of Geosat altimeter data for the Agulhas Current using the ensemble Kalman filter with a quasigeostrophic model. *Mon. Wea. Rev.*, **124**, 85–96.

Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. *Quart. J. Roy. Meteor. Soc.*, **125**, 723–757.

Gelb, A., J. F. Kasper, R. A. Nash, C. F. Price, and A. A. Sutherland, 1974: *Applied Optimal Estimation*. The M.I.T. Press, 374 pp.

Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. *Mon. Wea. Rev.*, **129**, 2776–2790.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. *Mon. Wea. Rev.*, **126**, 796–811.

Houtekamer, P. L., and H. L. Mitchell, 1999: Reply. *Mon. Wea. Rev.*, **127**, 1378–1379.

Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. *Mon. Wea. Rev.*, **129**, 123–137.

Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential, and variational. *J. Meteor. Soc. Japan*, **75**, 181–189.

Kalman, R., and R. Bucy, 1961: New results in linear prediction and filtering theory. *Trans. ASME J. Basic Eng.*, **83D**, 95–108.

Lee, S., and I. M. Held, 1993: Baroclinic wave packets in models and observations. *J. Atmos. Sci.*, **50**, 1413–1428.

Lermusiaux, P. F. J., and A. R. Robinson, 1999: Data assimilation via error subspace statistical estimation. Part I: Theory and schemes. *Mon. Wea. Rev.*, **127**, 1385–1407.

Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. *Quart. J. Roy. Meteor. Soc.*, **112**, 1177–1194.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. *J. Atmos. Sci.*, **55**, 399–414.

Maybeck, P. S., 1979: Square root filtering. *Stochastic Models, Estimation, and Control*, Vol. 1, Academic Press, 411 pp.

Murphy, J. M., 1988: The impact of ensemble forecasts on predictability. *Quart. J. Roy. Meteor. Soc.*, **114**, 89–125.

Pham, D. T., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. *Mon. Wea. Rev.*, **129**, 1194–1207.

van Leeuwen, P. J., 1999: Comment on “Data assimilation using an ensemble Kalman filter technique.” *Mon. Wea. Rev.*, **127**, 1374–1377.

Wilks, D. S., 1995: *Statistical Methods in the Atmospheric Sciences*. Academic Press, 467 pp.

# APPENDIX

## The Double EnSRF

As discussed in section 2, in any ensemble data assimilation system the nonlinear dependence of the gain matrix on the background-error covariance will result in a biased estimate of analysis-error covariances, and the analysis system will weight the first-guess forecasts too heavily on average. Houtekamer and Mitchell (1998) referred to this problem as the “inbreeding effect.” Left unchecked, this problem may eventually lead to a condition called “filter divergence,” in which the ensemble spread decreases dramatically while the ensemble mean error increases and the analysis drifts further and further from the true state. To prevent this, covariance inflation (Anderson and Anderson 1999) simply inflates the deviations from the ensemble mean first guess by a small constant factor for each member of the ensemble, before the computation of the background-error covariances and before any observations are assimilated. The “forgetting factor” used by Pham (2001) plays essentially the same role. Another way to deal with this problem, the so-called double EnKF, was introduced by Houtekamer and Mitchell (1998). The double EnKF uses parallel ensemble data assimilation cycles in which the background-error covariances estimated by one ensemble are used to calculate the Kalman gain for the other. Van Leeuwen (1999) presents an analysis of the double EnKF that shows how it compensates for the biases associated with nonlinearity in the Kalman gain. Here we present a comparison between a double EnSRF and the double EnKF to complement the results for the single filters with covariance inflation presented in section 4.
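Covariance inflation as described above amounts to a one-line rescaling of the deviations; a sketch with illustrative names:

```python
import numpy as np

def inflate(ens, r=1.05):
    """Inflate deviations from the ensemble mean by a factor r slightly
    greater than 1, leaving the ensemble mean unchanged. Applied to the
    first-guess ensemble before covariances are computed."""
    mean = ens.mean(axis=0)
    return mean + r * (ens - mean)
```

Because only the deviations are rescaled, the sample mean is preserved and every sample covariance is multiplied by exactly *r*².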

Using the notation of section 2, with an added subscript *i* (*i* = 1, 2) to denote which ensemble a quantity was calculated from, the counterpart of Eq. (9) for the double EnSRF is

𝘅′^{a}_{i} = (𝗜 − 𝗞̃_{j}𝗛)𝘅′^{b}_{i},  *j* ≠ *i*,

where the gain for each ensemble is computed from the background-error covariance of the other. If observations are processed one at a time, 𝗛𝗣^{b}_{i}𝗛^{T} and 𝗿 reduce to scalars. However, substituting 𝗞̃_{i} = *α*𝗸_{i} does not result in a scalar quadratic equation for *α*, since 𝗸_{i} is computed from the other ensemble's background-error covariance; solving for 𝗞̃_{i} involves the computation of matrix square roots. Since this negates the advantage of serial processing, we elect to process all the observations at once. Defining 𝗔_{i} = 𝗜 − 𝗞̃_{i}𝗛, the requirement that the updated deviations satisfy the traditional Kalman filter covariance update, 𝗔_{i}𝗣^{b}_{i}𝗔^{T}_{i} = 𝗣^{a}_{i} = (𝗜 − 𝗞_{j}𝗛)𝗣^{b}_{i}, may be solved for 𝗔_{i} using Cholesky square roots of 𝗣^{a}_{i} and 𝗣^{b}_{i}.

Figure A1a shows the ensemble mean rms error for the double EnSRF and the double EnKF, as a function of covariance filter length scale *L* (the covariance inflation factor is not needed in the double filter) for 20 total ensemble members (10 in each ensemble). The experimental setup is as described in section 4, except that two parallel data assimilation cycles are run. For all but the shortest *L,* the double EnSRF is more accurate than the double EnKF. The minimum error for the double EnSRF (EnKF) is 0.2 (0.25) for *L* = 20 (17). This is quantitatively similar to the results obtained for the single filters with covariance inflation in section 4, suggesting that the conclusions and interpretations presented there are not sensitive to the method used to counter the tendency for filter divergence associated with nonlinearity in the Kalman gain [i.e., the inbreeding effect noted by Houtekamer and Mitchell (1998)]. Figure A1b shows the ensemble mean rms error for the double filters as a function of ensemble size. Values are plotted for “optimal” covariance filter length scales, that is, those values of *L* at which the error is a minimum. Clearly, the double EnSRF is more accurate for a given ensemble size than the double EnKF. The horizontal line in that figure illustrates that a double EnSRF with a total of 14 members (7 in each ensemble) is as accurate as the double EnKF with 38 total members (19 in each ensemble). However, as formulated here, in addition to the extra computational overhead associated with running two 10-member ensembles instead of one, the double EnSRF requires the computation of two *m* × *m* Cholesky lower-triangular matrix square roots, as well as the inverse of a lower-triangular matrix, where *m* is the dimension of the model state vector. This makes the double EnSRF much more computationally expensive than either the double EnKF or the single EnSRF.