## 1. Introduction

Ensemble square root Kalman filters are an efficient deterministic variant of the original ensemble Kalman filter (EnKF; Evensen 1994; Burgers et al. 1998). Common members of this class of filters are the ensemble transform Kalman filter (ETKF; Bishop et al. 2001), the ensemble adjustment Kalman filter (EAKF; Anderson 2001, 2003), and the ensemble square root Kalman filter with serial processing of observations (EnSRF; Whitaker and Hamill 2002). Recently, the singular “evolutive” interpolated Kalman (SEIK) filter (Pham et al. 1998a; Pham 2001) and the newly developed error-subspace transform Kalman filter (ESTKF; Nerger et al. 2012b) have also been classified as ensemble square root filters (Nerger et al. 2012b). All ensemble square root Kalman filters express the analysis equation of the Kalman filter in a square root form combined with an explicit transformation of the state ensemble (see Tippett et al. 2003). Most filter methods are formulated to assimilate all observations synchronously. However, the EAKF and the EnSRF are typically formulated to assimilate single observations serially, which increases the efficiency of these filters. Further, the two algorithms are identical in the case of serial observation processing. For example, the DART assimilation system (Anderson et al. 2009) provides an EAKF with serial observation processing.

Localization of covariance matrices in ensemble-based Kalman filters is required for data assimilation into large-scale models, because the typical ensemble size is limited to the order of 10–100 states, which is much smaller than the degrees of freedom of the models. By damping long-distance covariances, localization stabilizes the analysis update of the filter and increases the rank of the forecast covariance matrix as well as the local number of degrees of freedom for the analysis. The localization is applied either to the forecast covariance matrix, here denoted covariance localization (CL; Houtekamer and Mitchell 1998, 2001), or to the observation error covariance matrix (Hunt et al. 2007), here denoted observation localization (OL). The relation between the two localization methods has been the focus of several recent studies (Sakov et al. 2010; Greybush et al. 2011; Janjić et al. 2011). Further, Nerger et al. (2012a) proposed a method, denoted regulated localization, to make the localizing effects of OL and CL comparable. OL is typically applied in algorithms that do not explicitly compute the forecast error covariance matrix, such as the local ensemble transform Kalman filter (LETKF; Hunt et al. 2007), the SEIK filter, and the ESTKF. In contrast, the EAKF and the EnSRF compute elements of the forecast covariance matrix and apply CL. While the filters that apply OL assimilate all available observations at once, the EAKF and EnSRF methods that use CL perform a serial assimilation of single observations.
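The Schur-product construction of CL can be illustrated with a short script. The following NumPy sketch builds a compactly supported correlation matrix from the fifth-order function of Gaspari and Cohn (1999) on a periodic one-dimensional domain and tapers an ensemble covariance matrix with it; the grid size and support radius mirror the Lorenz-96 setup used later, but the variable names are illustrative:

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn (1999) fifth-order piecewise-rational correlation
    function with compact support; zero beyond a distance of 2*c."""
    r = np.abs(dist) / c
    w = np.zeros_like(r, dtype=float)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    w[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
             - 5.0 / 3.0 * r[m1]**2 + 1.0)
    w[m2] = (r[m2]**5 / 12.0 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
             + 5.0 / 3.0 * r[m2]**2 - 5.0 * r[m2] + 4.0
             - 2.0 / (3.0 * r[m2]))
    return w

# Covariance localization: Schur (elementwise) product of the ensemble
# covariance with the compactly supported correlation matrix.
n = 40                                      # state dimension
dists = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
dists = np.minimum(dists, n - dists)        # periodic domain
C = gaspari_cohn(dists, 10.0)               # support radius of 20 grid points
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 10))            # 10 ensemble perturbations
P = X @ X.T / 9.0                           # rank-deficient sample covariance
P_loc = C * P                               # localized forecast covariance
```

The taper leaves the diagonal untouched (weight 1 at zero distance) and sets all covariances beyond the support radius exactly to zero.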

This study examines the interaction between CL and serial processing of observations in detail and demonstrates that it can destabilize the analysis update. It is known in the community (e.g., C. Snyder 2014, personal communication) that serial processing of observations can lead to a situation in which the actual analysis result depends on the order in which the observations are assimilated. This dependence is caused by the fact that the update equation for the state error covariance matrix is not fulfilled when localization is applied. This was already noted by Whitaker and Hamill (2002), but no published study has yet examined the effect of the inconsistent update of the state error covariance matrix. Whitaker et al. (2008) used the observation ordering to develop a variant of the EnSRF in which the observations are assimilated in order of decreasing impact on the assimilation. The stated motivation for this scheme was that it allows for an adaptive observation-thinning algorithm that omits observations that insignificantly reduce the estimated state error variance. Whitaker et al. (2008) also compared the assimilation performance of the EnSRF with the LETKF when applied with a global atmospheric model and found only small differences. Similarly, Holland and Wang (2013) compared the LETKF with the EnSRF without particular observation ordering for the assimilation with a simplified atmospheric model. They found only small differences in the state estimates, with slightly smaller errors in the LETKF estimates.

While the previous studies found small differences between the estimates of the LETKF and EnSRF, it is unclear which conditions influence these differences and whether there are conditions under which larger differences can occur. To some extent, the differences in the state estimates are a result of different localization strengths in the OL and CL schemes for the same localization function (see Miyoshi and Yamane 2007). Here, this difference is reduced by using the regulated localization function of Nerger et al. (2012a) for OL. The instability that can result from the interaction of localization and serial observation processing is demonstrated and examined in numerical experiments with the small Lorenz-96 model (Lorenz 1996; Lorenz and Emanuel 1998). To compare the different effects of serial and synchronous assimilation of the observations, the two widely used filter methods EnSRF and LETKF are applied. For a direct examination of the influence of serial observation processing, a formulation of the EnSRF that assimilates all observations at once is also applied. While this formulation is too costly for large-scale systems, it can be used with the small Lorenz-96 model.

The study is organized as follows: the EnSRF and the LETKF are reviewed together with their localizations in section 2. This section also discusses the reasons for the inconsistent update of the covariance matrix. The configuration of the twin experiments with the Lorenz-96 model is described in section 3. The filter instability is demonstrated in time-mean results in section 4. The interaction of the localization and serial observation processing is further examined in section 5, while section 6 examines the effect of the order in which the observations are assimilated. In section 7, the relevance of the findings for real atmospheric and oceanographic applications is discussed. Finally, conclusions are drawn in section 8.

## 2. Filter algorithms

This section reviews the EnSRF with CL (Whitaker and Hamill 2002) as a typical method using serial observation processing and the LETKF with OL (Hunt et al. 2007) as a method using synchronous assimilation.

Both filters work with an ensemble of *m* vectors $\mathbf{x}^{(j)}$, $j = 1, \ldots, m$, of model states of dimension *n*. The ensemble mean $\overline{\mathbf{x}}$ serves as the state estimate; the superscript "f" denotes the forecast and "a" denotes the analysis. The uncertainty of the state estimate is described by the ensemble covariance matrix:

$$\mathbf{P} = \frac{1}{m-1}\,\mathbf{X}'\left(\mathbf{X}'\right)^{\mathrm{T}},$$

where the columns of $\mathbf{X}'$ are the deviations of the ensemble members from the ensemble mean. The time index "*k*" is omitted as in the analysis step of the filters all quantities refer to the same time.

### a. The EnSRF

Whitaker and Hamill (2002) proposed an ensemble square root Kalman filter with serial processing of observations (EnSRF). In this filter, the state estimate and the ensemble perturbations are updated iteratively in a loop over all individual observations. This method is motivated by the fact that for a single observation the formulation of Potter (see Maybeck 1979, section 7.3) can be applied to update the state error covariance matrix. This formulation is particularly efficient because the matrix inversion required in the case of multiple observations reduces to the inversion of a scalar.

Superscripts (*i*) indicate quantities at the *i*th iteration of the loop over single observations. Likewise, the subscript *i* denotes the index of the scalar observation assimilated at the *i*th iteration. The state estimate is updated according to

$$\mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} + \mathbf{K}_i\left(y_i - \mathbf{H}_i\mathbf{x}^{(i)}\right),$$

where $y_i$ is the *i*th element of the observation vector of size *p*, $\mathbf{H}_i$ is the corresponding row of the observation operator, and $\mathbf{K}_i$ is the Kalman gain vector of length *n*. For a single observation with error variance $R_i$, the gain reduces to

$$\mathbf{K}_i = \frac{\mathbf{P}^{(i)}\mathbf{H}_i^{\mathrm{T}}}{\mathbf{H}_i\mathbf{P}^{(i)}\mathbf{H}_i^{\mathrm{T}} + R_i}.$$

The matrix of ensemble perturbations is updated according to

$$\mathbf{X}'^{(i+1)} = \mathbf{X}'^{(i)} - \alpha_i\,\mathbf{K}_i\mathbf{H}_i\mathbf{X}'^{(i)}, \qquad \alpha_i = \left[1 + \sqrt{\frac{R_i}{\mathbf{H}_i\mathbf{P}^{(i)}\mathbf{H}_i^{\mathrm{T}} + R_i}}\right]^{-1}.$$

For covariance localization, the covariance matrix $\mathbf{P}^{(i)}$ in the gain is replaced by its elementwise (Schur) product with a correlation matrix of compact support. A variant that assimilates all observations at once, denoted EnSRF bulk below, is obtained by evaluating the corresponding matrix equations for the full observation vector; it is used for comparison in the experiments. In the experiments performed below, the localization matrix is constructed with the fifth-order polynomial correlation function of Gaspari and Cohn (1999, hereafter GC99).

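The serial loop of the EnSRF with CL can be summarized in a few lines of code. The following Python sketch assumes that each observation directly measures one grid point, so that each $\mathbf{H}_i$ selects a single state element; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def ensrf_serial(x, Xp, obs, obs_idx, r_var, C):
    """Serial EnSRF analysis (after Whitaker and Hamill 2002) with
    covariance localization for directly observed grid points.
    x: state estimate (n,); Xp: ensemble perturbations (n, m);
    obs: scalar observations; obs_idx: observed grid points;
    r_var: observation error variance; C: localization matrix (n, n)."""
    m = Xp.shape[1]
    for y, g in zip(obs, obs_idx):
        # localized covariances between the state and the observed point
        phT = C[:, g] * (Xp @ Xp[g, :]) / (m - 1)
        hphT = phT[g]                        # variance at the observed point
        K = phT / (hphT + r_var)             # gain for a single observation
        x = x + K * (y - x[g])               # update of the state estimate
        # Potter-style square root update of the perturbations
        alpha = 1.0 / (1.0 + np.sqrt(r_var / (hphT + r_var)))
        Xp = Xp - alpha * np.outer(K, Xp[g, :])
    return x, Xp

# tiny illustration: one observation of grid point 3 on an 8-variable state
rng = np.random.default_rng(1)
n, m = 8, 5
Xp = rng.standard_normal((n, m))
Xp -= Xp.mean(axis=1, keepdims=True)
xa, Xpa = ensrf_serial(np.zeros(n), Xp, [1.0], [3], 0.5, np.ones((n, n)))
```

Each iteration pulls the state toward the current observation and shrinks the perturbations at and around the observed point; the next iteration then works with the already-updated ensemble.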
### b. The LETKF

The LETKF was introduced by Hunt et al. (2007) as a localized variant of the ETKF (Bishop et al. 2001). The LETKF applies a localized analysis with OL. Here, the LETKF is reviewed following Nerger et al. (2012a), which provides a particularly efficient formulation of the algorithm.

In the ETKF, the ensemble transformation is computed in the ensemble space by a transform matrix of size *m* by *m*. Covariance inflation is applied through a forgetting factor *ρ* with $0 < \rho \le 1$. The inverse of the transform matrix in the ensemble space of dimension *m* is given by

$$\mathbf{A}^{-1} = \rho\,(m-1)\,\mathbf{I} + \left(\mathbf{H}\mathbf{X}'^{f}\right)^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{H}\mathbf{X}'^{f}.$$

For the localization, the analysis is computed independently for each local domain using only the observations within a prescribed influence radius, whose inverse error covariances are weighted elementwise with a localization function (the OL). Using the subscript *σ* to denote the local analysis domain and the subscript *δ* to denote the domain of the corresponding observations of nonzero weight, the LETKF can be written as

$$\overline{\mathbf{x}}^{a}_{\sigma} = \overline{\mathbf{x}}^{f}_{\sigma} + \mathbf{X}'^{f}_{\sigma}\,\mathbf{A}_{\delta}\left(\mathbf{H}_{\delta}\mathbf{X}'^{f}\right)^{\mathrm{T}}\hat{\mathbf{R}}^{-1}_{\delta}\left(\mathbf{y}_{\delta} - \mathbf{H}_{\delta}\overline{\mathbf{x}}^{f}\right),$$

$$\mathbf{X}'^{a}_{\sigma} = \sqrt{m-1}\;\mathbf{X}'^{f}_{\sigma}\,\mathbf{A}^{1/2}_{\delta},$$

where $\hat{\mathbf{R}}^{-1}_{\delta}$ denotes the localization-weighted inverse observation error covariance matrix and $\mathbf{A}^{1/2}_{\delta}$ is the symmetric square root of $\mathbf{A}_{\delta}$.

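A single local analysis of this type can be sketched in a few lines (Python/NumPy). The sketch follows the standard ETKF transform with the symmetric square root; the scalar observation error variance and all names are illustrative assumptions:

```python
import numpy as np

def letkf_analysis(x, Xp, y, HXp, Hx, r_var, w_loc, rho=1.0):
    """ETKF analysis for one local domain with observation localization:
    the inverse observation error variances are tapered by the weights
    w_loc; rho is the forgetting factor for covariance inflation."""
    m = Xp.shape[1]
    Rinv = np.diag(w_loc / r_var)            # localized R^-1
    S = HXp                                  # observed ensemble perturbations
    Ainv = rho * (m - 1) * np.eye(m) + S.T @ Rinv @ S
    lam, Q = np.linalg.eigh(Ainv)            # symmetric eigendecomposition
    A = Q @ np.diag(1.0 / lam) @ Q.T
    Asqrt = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.T   # symmetric sqrt of A
    wbar = A @ (S.T @ Rinv @ (y - Hx))       # weights for the mean update
    W = np.sqrt(m - 1) * Asqrt               # ensemble transform matrix
    return x + Xp @ wbar, Xp @ W

# tiny local domain: 6 state variables, 4 members, 3 observed points
rng = np.random.default_rng(2)
n, m, p = 6, 4, 3
Xp = rng.standard_normal((n, m))
Xp -= Xp.mean(axis=1, keepdims=True)
H = np.zeros((p, n))
H[0, 1] = H[1, 3] = H[2, 5] = 1.0
x = np.zeros(n)
y = np.array([0.5, -0.2, 0.1])
xa, Xpa = letkf_analysis(x, Xp, y, H @ Xp, H @ x, 0.5, np.ones(p))
```

Because the transform acts in the ensemble space, the zero mean of the perturbations is preserved and the whole observation vector of the local domain is assimilated in one step.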
### c. Inconsistency of the covariance update with localization

Whitaker and Hamill (2002) noted that the update equation for the state covariance matrix in the EnSRF, Eq. (6), is not consistent if localization with a smooth correlation function is used. Whitaker and Hamill (2002) reported that their study used the GC99 function despite the possible violation of Eq. (6), because it resulted in estimates with lower estimation errors compared to the case when a Heaviside step function was used, which would avoid the inconsistency.

Overall, the inconsistency of the covariance matrix update occurs in all filter algorithms that are based on the simplified single-sided update in Eq. (23). The difference between synchronous observation assimilation (as in the LETKF) and serial observation processing (as in the EnSRF) is, however, that the former method computes only a single update of the matrix of ensemble perturbations during an analysis step, while the latter repeats the update once for each observation. For the serial assimilation of the *i*th observation, the covariance update follows from Eq. (6) as

$$\mathbf{P}^{(i+1)} = \left(\mathbf{I} - \tilde{\mathbf{K}}_i\mathbf{H}_i\right)\mathbf{P}^{(i)},$$

where $\tilde{\mathbf{K}}_i$ is the gain computed from the localized covariances. Because $\tilde{\mathbf{K}}_i$ is not the optimal gain for the unlocalized ensemble covariance matrix $\mathbf{P}^{(i)}$, each of these updates is inconsistent, and the deviations can accumulate over the loop over the observations.

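The consequence of the inconsistent update is easy to demonstrate numerically: without localization, the serial update is exactly equivalent to assimilating all observations at once, so the analysis mean does not depend on the observation order; with a smooth localization function, the order matters. A small self-contained Python sketch (the Gaussian taper and all names are illustrative, not the GC99 setup of the experiments):

```python
import numpy as np

def serial_update(x, Xp, obs_list, C, r_var):
    """Serial EnSRF mean update with covariance localization matrix C."""
    m = Xp.shape[1]
    for y, g in obs_list:
        phT = C[:, g] * (Xp @ Xp[g, :]) / (m - 1)
        K = phT / (phT[g] + r_var)
        x = x + K * (y - x[g])
        alpha = 1.0 / (1.0 + np.sqrt(r_var / (phT[g] + r_var)))
        Xp = Xp - alpha * np.outer(K, Xp[g, :])
    return x

rng = np.random.default_rng(3)
n, m = 10, 5
Xp = rng.standard_normal((n, m))
Xp -= Xp.mean(axis=1, keepdims=True)
obs = [(1.0, 2), (-1.0, 5)]                  # two scalar observations
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C_smooth = np.exp(-(d / 3.0) ** 2)           # smooth taper (illustrative)
xa_fwd = serial_update(np.zeros(n), Xp, obs, C_smooth, 0.2)
xa_rev = serial_update(np.zeros(n), Xp, obs[::-1], C_smooth, 0.2)
# without localization, serial processing is equivalent to a batch update
x1 = serial_update(np.zeros(n), Xp, obs, np.ones((n, n)), 0.2)
x2 = serial_update(np.zeros(n), Xp, obs[::-1], np.ones((n, n)), 0.2)
```

Comparing `x1` with `x2` and `xa_fwd` with `xa_rev` makes the order dependence introduced by the localization directly visible.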
## 3. Configuration of numerical experiments

To assess the assimilation performances of the EnSRF and LETKF, identical twin experiments are conducted using the Lorenz-96 model (Lorenz 1996; Lorenz and Emanuel 1998). This nonlinear model has been used in several studies to examine the behavior of different ensemble-based Kalman filters (e.g., Anderson 2001; Whitaker and Hamill 2002; Ott et al. 2004; Lawson and Hansen 2004; Sakov and Oke 2008; Janjić et al. 2011). The same configuration as in Nerger et al. (2012a) is used, so that the results can be directly compared with theirs.

A trajectory representing the “truth” is computed over 60 000 time steps from an initial state of constant value 8.0 to which a small perturbation is added to initiate the chaotic dynamics of the model.

Second-order exact sampling from the true trajectory (Pham 2001) is used to generate the initial ensemble. To assess the assimilation performance over a long assimilation experiment, the assimilation is performed at each time step over 50 000 time steps with an ensemble of 10 states. For the observations, an offset of 1000 time steps of the true trajectory is used to avoid the spinup phase of the model. The localization is applied with a fixed support radius. All experiments are repeated 10 times with varying random numbers for the generation of the initial ensemble. The assimilation performance is assessed by the root-mean-square (RMS) error of each experiment averaged over each set of 10 experiments. The random numbers used to perturb the observations are not varied; varying them would have an effect similar to varying the initial ensemble.
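For reference, the Lorenz-96 model used in these experiments can be integrated with a few lines of code. The sketch below uses fourth-order Runge-Kutta time stepping with the conventional forcing F = 8; the time step of 0.05 and the small initial perturbation follow the common choice of Lorenz and Emanuel (1998) and are assumptions, not taken from the text:

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Tendency of the Lorenz-96 model on a periodic domain:
    dx_j/dt = (x_{j+1} - x_{j-2}) * x_{j-1} - x_j + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def lorenz96_step(x, dt=0.05, forcing=8.0):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# "truth" run: constant state of 8.0 with a small perturbation on one
# variable to leave the (unstable) constant fixed point
x = np.full(40, 8.0)
x[19] += 0.008
for _ in range(1000):      # spinup onto the attractor
    x = lorenz96_step(x)
```

The constant state of 8.0 is an exact fixed point of the model, which is why a perturbation is needed to start the chaotic dynamics.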

## 4. Mean assimilation performance

The effect of the serial observation processing can be demonstrated in a full-length experiment with the Lorenz-96 model. Figure 1 shows the averaged RMS errors for a range of forgetting factors and support radii of the localization function and three different observation errors. The filters diverge when the time-mean RMS error is larger than the observation error. If at least one of the 10 repetitions of an experiment diverges, the rectangle for this parameter pair is left white. The overall shape of the RMS error distribution, namely, a minimum error region that is surrounded by larger errors, shows that the parameter ranges chosen for the experiments cover the optimal parameter values.

The first two rows in Fig. 1 show the average RMS errors for the serial EnSRF and LETKF, respectively. As discussed by Nerger et al. (2012a), the regulated localization as used here in the LETKF should make the filter results with OL very similar to those with CL. However, there are significant differences, which are most pronounced for the smallest observation error of

The EnSRF-bulk update scheme discussed in section 2a avoids the serial observation processing, but applies CL. Hence, comparing the serial EnSRF with EnSRF bulk allows us to directly see the influence of serial observation processing. The averaged RMS errors for EnSRF bulk are shown in the third row of Fig. 1. In the stable assimilation regime (e.g., for

## 5. Stability of the EnSRF analysis with localization

To examine the reasons for the differences in the RMS errors obtained with the EnSRF, EnSRF bulk, and LETKF, the first analysis step of the experiments discussed above is examined in more detail. While the first analysis step is obviously not necessarily representative of the whole assimilation experiment, it nonetheless allows us to study the different behaviors of the filters. At the first analysis step, the experiments start with a “climatological” state estimate with an RMS error of about 3.5. The initial ensemble estimate of the error is slightly lower, at about 2.5. The error of the analysis state after the first analysis step depends on the observation error. It is larger than the asymptotic error level, which is reached only after several forecast-analysis cycles. The advantage of examining the first analysis step is that it shows the instability in a very clear way. Further, the results are practically uninfluenced by the model nonlinearity, as only a single time step was computed.

The parameters considered in this section are a forgetting factor of 0.95 and a support radius of 20 grid points. For these parameters, all three filter formulations converge and the averaged RMS errors discussed in section 4 are close to their minimum.

The EnSRF is configured to assimilate each observation in a loop starting from the observation at the grid point with index 1 and then proceeding with increasing index. Thus, when the state of size 40 is fully observed, the state estimate and the ensemble are modified 40 times in each analysis step. The panels in Fig. 2 show the true and estimated RMS errors of the state for the sequence of assimilating 1–40 observations. To be able to directly examine one assimilation series, only one ensemble realization is shown here. The exact shape of the curves shown in Fig. 2 is specific to the set of random numbers used to generate the ensemble and those used to generate the observations. However, using other random numbers does not change the overall conclusions. Figure 2 also shows the RMS errors from the analogous experiments with the LETKF and the EnSRF bulk, in which all observations are assimilated at once. To be able to study the dependence of the RMS error on the number of observations, 40 experiments are performed for each filter and each observation error, in which between 1 and 40 observations are assimilated. In contrast to the EnSRF, these intermediate results would not be realized within a single experiment assimilating 40 observations.

For

For smaller observation errors, the interim increase of the true errors for the EnSRF and EnSRF bulk is larger. When 3, 27, or 28 observations are assimilated for

For

The effect that leads to the large increase of the RMS error for the EnSRF and EnSRF bulk is further demonstrated in Fig. 3. Here, the state estimates for the EnSRF, EnSRF bulk, and LETKF are shown when different numbers of observations are assimilated in the case of

The differences between the serial EnSRF and the EnSRF bulk are caused solely by the serial observation processing. From Fig. 2 it is visible that the difference between the two filters accumulates with a growing number of assimilated observations. The repeated inconsistent covariance updates of the serial EnSRF do not always result in larger errors of the state estimate. For example, if only observations in the first half of the model domain are assimilated, the serial EnSRF shows smaller errors than the EnSRF bulk. However, for more than 30 observations, the RMS errors from the EnSRF bulk are smaller than those from the serial EnSRF in all experiments. The estimated RMS errors are almost identical for the EnSRF and EnSRF bulk. However, the serial observation processing of the EnSRF results in covariance matrices that are distinct from those obtained with the EnSRF bulk, as is also demonstrated in the appendix. The different variance and covariance estimates are tapered by the localization matrix and result in state updates that differ between the two filters. The differences are most pronounced for the smallest observation error of

The differences between the EnSRF bulk and the LETKF are mainly caused by the different localization schemes. While for a single observation, the regulated OL used for the LETKF results in a localization effect that is identical to the CL in the EnSRF and EnSRF bulk, this is no longer the case if multiple observations are assimilated at the same time [see Nerger et al. (2012a) for a detailed discussion of the regulated OL]. However, the regulated OL results in much better state estimates in particular if the observations are incomplete as is visible from Figs. 2 and 3. For the Kalman gain, the regulated OL results in a different localization function that improves the state estimates without reducing the support radius of the localization. For the EnSRF, one would need to strongly reduce the localization support radius for CL (e.g., to eight grid points for

As pointed out in section 2c, the inconsistent update of the state error covariance matrix should not only appear in the EnSRF, but also in other filters that process observations serially. The LETKF can easily be modified to perform a loop of analysis steps with single observations. For consistency, the forgetting factor has to be removed from Eq. (19). Instead, the ensemble perturbations are inflated once before the analysis step by the square root of the inverse forgetting factor, as is done in the EnSRF. The bottom panel in Fig. 2 also shows the RMS error for the LETKF with serial observation processing. Similar to the EnSRF, the RMS error shows a peak for 3 observations and the instability around 25 observations. The true RMS errors are lower than for the EnSRF and the estimated RMS errors are slightly larger. This shows that the influence of the localization on the update of the covariance matrix in the serial variant of the LETKF is not identical to that in the EnSRF. However, the general instability of the analysis also occurs for the LETKF when it is applied with serial observation processing.

Note that the change of the EnSRF behavior demonstrated here for different observation errors is not an effect of model nonlinearity. Only a single model time step has been computed before the first analysis time, which does not have much influence on the ensemble distribution. Actually, the behavior shown in Figs. 2 and 3 would look very similar if the analysis were performed at the initial time without any time stepping. Thus, one could perform this experiment even without the Lorenz-96 model: one only needs a covariance matrix, an initial state estimate, and a set of observations together with their error estimates. By sampling the covariance matrix and state estimate with a small ensemble of 10 members one could compute the analysis step. The larger differences in the state update for decreasing observation errors are due to the fact that the effect of the inconsistently updated covariances grows with the influence of the observations on the state estimate. However, the effect of the differences can sometimes average out, as is visible from the nearly identical RMS errors for

## 6. Influence of the observation order

The analysis result in case of serial observation processing depends on the order in which the observations are assimilated. Hence, one might wonder whether one can improve the analysis results obtained with the serial EnSRF by changing the order in which the observations are assimilated. Accordingly, the influence of the order is examined here for the application with the Lorenz-96 model. Only the case

The bottom panel in Fig. 2 shows that the true RMS error was largest when observations at the grid points 25–30 were assimilated. This is far from grid point 1, where the assimilation series started. Thus, a first test is whether one can stabilize the analysis by using a more uniform sorting of the observations. To this end, the observation order is revised so that the gridpoint indices of the assimilated observations are chosen as 1, 21, 11, 31, 6, 26, 16, 36, and so on, so that the remaining gaps are filled in an approximately uniform way. The top panel in Fig. 4 shows the RMS error over the number of assimilated observations for this observation order. For comparison, the LETKF also assimilated the same observations. With the revised observation order, the large peak in the RMS error of the EnSRF at around 25 assimilated observations (Fig. 2, bottom panel) has actually disappeared. In this respect, the reordering of the observations is successful. However, up to about 20 assimilated observations, the RMS errors are now very close to the error without assimilation. Also, there are smaller peaks where the true RMS error exceeds the error without assimilation, reaching values up to about 4.5. Further, the final RMS error after assimilating all 40 observations in the revised order is 0.91 and, hence, almost identical to the error of 0.94 without reordering. Figure 4 also shows that for the LETKF more than 20 observations need to be assimilated to significantly reduce the RMS error. However, the RMS error remains smaller than that of the EnSRF and reaches a value of 0.13 when 40 observations are assimilated.

The top panel in Fig. 5 shows the mean RMS error for the full experiment in which the EnSRF with the reordered observations is applied over 50 000 analysis steps. Compared to the case

An alternative to the series of global state updates in the EnSRF was introduced by Whitaker et al. (2008). This variant of the EnSRF, denoted below as the L-EnSRF, performs individual local analysis updates for each grid point, with the observations ordered by their influence on the state at that grid point. For this method, one computes for each grid point the reduction of the analysis variance induced by each single observation. The observations are then assimilated individually at each grid point in decreasing order of this variance reduction.

The bottom panel in Fig. 4 shows the RMS error for the L-EnSRF as a function of the number of assimilated observations. The RMS error remains close to the RMS error without assimilation, or even above it, until about 29 observations are assimilated. Thus, the individual sorting of the observations in the L-EnSRF also avoids the instability peak around grid points 25–30 in the original EnSRF without reordering. For more than 29 observations, the RMS error decreases strongly. The final error for 40 assimilated observations is reduced to 0.51. Hence, it is significantly smaller than the error of the EnSRF with the original order, but larger than that of the LETKF. The reduction of the RMS error is also visible in the full experiment over 50 000 analysis steps as is shown in the bottom panel in Fig. 5. The minimum mean RMS error is reduced from 0.0193 to 0.0190. This change is small, but statistically significant. Further, the filter is stabilized and the parameter region in which the assimilation converges is increased. However, the RMS errors obtained with the L-EnSRF are still larger than those of the LETKF. In addition, the region of filter convergence is larger for the LETKF than for the L-EnSRF.
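The ordering criterion of the L-EnSRF can be expressed compactly: for a target grid point j, a single observation of grid point g with error variance R reduces the analysis variance at j by approximately (C_jg P_jg)^2 / (P_gg + R) when the gain is localized. A hypothetical helper that sorts observations accordingly (Python; the function name and toy covariance are illustrative):

```python
import numpy as np

def order_by_impact(j, P, C, obs_idx, r_var):
    """Return the observation indices sorted by decreasing reduction of
    the analysis variance at grid point j, following the ordering idea
    of Whitaker et al. (2008).  P: ensemble covariance; C: localization."""
    reduction = [(C[j, g] * P[j, g]) ** 2 / (P[g, g] + r_var)
                 for g in obs_idx]
    return [g for _, g in sorted(zip(reduction, obs_idx), reverse=True)]

# toy covariance: grid point 0 is strongly correlated with point 1
P = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
order = order_by_impact(0, P, np.ones((3, 3)), [0, 1, 2], 0.5)
```

Observations with tapered or weak covariances to the target point fall to the end of the list, so they can also be omitted for adaptive observation thinning.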

## 7. Practical relevance of the EnSRF instability

The numerical experiments conducted in the sections above clearly show the effect of the instability in the EnSRF analysis. However, these experiments are highly idealized. In particular, the Lorenz-96 model simulates only a single model field. Further, the dynamics of the model are homogeneous and, hence, the distribution of the errors in the state estimate and the ensemble perturbations is also rather uniform. In addition, the full model state was observed. The observation errors were varied by one order of magnitude in the experiments, which allowed us to vary the strength of the assimilation impact. The largest influence of the serial observation processing in the case of the EnSRF and of the regulated OL in the case of the LETKF occurred for the smallest observation error, which was only 4% of the error of the initial state estimate.

For real-world cases (e.g., Whitaker et al. 2008; Sakov et al. 2012; Losa et al. 2014), the mean RMS error estimated by the ensemble is typically of the same order as the observation errors. In this respect, these applications should operate in the regime of the largest observation errors used in the idealized experiments. In this case, no significant differences between the LETKF and EnSRF are to be expected. However, in realistic cases the estimated errors will show spatial variations and larger error estimates can occur locally. For example, if eddies appear in a high-resolution ocean model, the ensemble spread could become large due to varying locations of the eddies or when only some ensemble members simulate the eddies while others miss them. In the atmosphere, a similar situation might appear with convective-scale models, when some ensemble members simulate convection while others do not. When accurate observations are assimilated in this situation, the effect of serial observation processing might deteriorate the assimilation performance. However, in this case the spatial extent of the region with large state error estimates and small observation errors will also influence the effect of the serial observation processing. It is unclear which spatial extent is necessary to make the effect visible.

The experiments with the Lorenz-96 model showed only a negligible effect of reordering or randomizing the observation sequence, unless the observations are explicitly sorted by decreasing influence and local analyses are performed. However, in atmospheric data assimilation the locations of the observations can also vary nearly randomly between successive analysis times. This kind of randomization might also influence the effect of the serial observation processing.

## 8. Conclusions

This study examined the influence of localization in ensemble-based Kalman filter formulations that perform the assimilation of an observation vector as a series over single observations. Filter algorithms of this type are the ensemble adjustment Kalman filter (EAKF) and the ensemble square root filter (EnSRF).

In the analysis step, most ensemble Kalman filters update the state error covariance matrix, which is represented by the ensemble of model states, using the nonsymmetric update equation of the Kalman filter. This equation is cheaper to evaluate than the more general symmetric update equation, but it is only valid when the Kalman gain is computed with the same forecast state error covariance matrix as is used in the update equation. Using a localized covariance matrix in the gain while using the nonlocalized matrix in the update equation results in an inconsistent analysis state error covariance matrix. To some extent this inconsistency is inherent in all ensemble-based Kalman filters, because they approximate the state error covariance matrix by the low-rank ensemble covariance matrix but increase the rank for the analysis step by applying localization. Filter algorithms that assimilate a whole observation vector simultaneously update the covariance matrix only once during an analysis step. In contrast, in filters with serial observation processing, the size of the observation vector determines how often the covariance matrix is updated.
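The inconsistency described above can be verified directly: the nonsymmetric update (I - KH)P equals the symmetric (Joseph form) update (I - KH)P(I - KH)^T + KRK^T only when the gain K is computed from the same matrix P that appears in the update. A short Python sketch with an illustrative smooth taper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
Xe = rng.standard_normal((n, 12))
P = Xe @ Xe.T / 11.0                   # a full-rank sample covariance
H = np.zeros((1, n))
H[0, 2] = 1.0                          # one direct observation of variable 2
R = np.array([[0.5]])

def both_updates(P_gain):
    """Nonsymmetric and Joseph-form analysis covariances for a gain
    computed from P_gain (possibly a localized version of P)."""
    K = P_gain @ H.T @ np.linalg.inv(H @ P_gain @ H.T + R)
    ImKH = np.eye(n) - K @ H
    return ImKH @ P, ImKH @ P @ ImKH.T + K @ R @ K.T

Pa_ns, Pa_joseph = both_updates(P)     # gain from P itself: forms agree
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C = np.exp(-(d / 2.0) ** 2)            # smooth taper (illustrative)
Pa_ns_loc, Pa_joseph_loc = both_updates(C * P)  # localized gain: forms differ
```

The Joseph form remains a valid (and symmetric) covariance update for any gain, whereas the cheaper nonsymmetric form silently becomes inconsistent once the gain is localized.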

The assimilation performance of the EnSRF was compared with that of the local ensemble transform Kalman filter (LETKF) with regulated observation localization using twin experiments with the Lorenz-96 model. When the observation errors were of a similar magnitude as the initial errors of the state estimate, both filter methods showed a similar behavior. When the observation errors were decreased, the EnSRF showed a stronger tendency to diverge and larger minimum RMS errors than the LETKF and a variant of the EnSRF that assimilates all observations at once.

Changing the observation order resulted in an improvement of the assimilation performance of the EnSRF. For this, each single grid point needed to be updated with an individual order of the observations. As proposed by Whitaker et al. (2008), ordering the observations by decreasing influence on the reduction of the estimated variance resulted in the best assimilation performance. However, in the twin experiments the EnSRF with localized update and individually ordered observations still exhibited larger minimum errors and a stronger tendency to diverge than the LETKF.

The idealized experiments used the Lorenz-96 model. However, the repeated inconsistent update of the covariance matrix and, hence, of the ensemble states is a general property of serial observation processing. Thus, the instability of the analysis with serial observation processing should also occur with other models. However, for practical applications the deterioration of the filter performance of the EnSRF will often not be relevant. Overall, the experiments indicate that the inconsistent ensemble update only deteriorates the filter performance of the EnSRF in cases when the observations have a strong influence (i.e., when the observation error is small compared to the estimated error of the state). In most real-world applications, the observation and state errors have a similar magnitude and the serial observation processing should be stable. This finding is consistent with the fact that the EnSRF and EAKF algorithms have been successfully applied to a wide range of data assimilation problems. However, one should be careful that the observation errors do not become significantly smaller than the estimated state errors and, hence, induce a strong assimilation influence.

The LETKF performed better than the EnSRF, with smaller estimation errors and better stability. This difference was caused by the different localization schemes and the application of regulated observation localization in the LETKF. However, it is obvious that the LETKF, like all other ensemble Kalman filters, performs an inconsistent update of the state error covariance matrix when it is applied with localization. Thus, while the localization methods are empirical schemes that have been demonstrated to improve the state estimates and the stability of ensemble Kalman filters, their influence on the error estimates is still unclear. For example, Janjić et al. (2011) examined a localization variant of the SEIK filter in which the covariance matrix is updated using a Heaviside step function, while the smooth weighting function is used only for the update of the state estimate. While the update of the ensemble perturbations is also not fully consistent in this formulation, it exhibited very good assimilation performance with the Lorenz-96 model. Further research into localization is required to ensure consistent corrections of both the state estimate and the ensemble perturbations in the analysis steps of ensemble-based Kalman filters.

## Acknowledgments

The author would like to thank the two anonymous reviewers as well as Dr. J. Whitaker and the editor, Dr. A. Aksoy, for their helpful comments.

## APPENDIX

### 2D Example of the Serial Observation Assimilation

The correct state estimate is

## REFERENCES

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. *Mon. Wea. Rev.*, **129**, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.

Anderson, J. L., 2003: A local least squares framework for ensemble filtering. *Mon. Wea. Rev.*, **131**, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.

Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano, 2009: The Data Assimilation Research Testbed: A community facility. *Bull. Amer. Meteor. Soc.*, **90**, 1283–1296, doi:10.1175/2009BAMS2618.1.

Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. *Mon. Wea. Rev.*, **129**, 420–436, doi:10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: On the analysis scheme in the ensemble Kalman filter. *Mon. Wea. Rev.*, **126**, 1719–1724, doi:10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.

Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. *J. Geophys. Res.*, **99**, 10 143–10 162, doi:10.1029/94JC00572.

Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. *Quart. J. Roy. Meteor. Soc.*, **125**, 723–757, doi:10.1002/qj.49712555417.

Greybush, S. J., E. Kalnay, T. Miyoshi, K. Ide, and B. R. Hunt, 2011: Balance and ensemble Kalman filter localization techniques. *Mon. Wea. Rev.*, **139**, 511–522, doi:10.1175/2010MWR3328.1.

Holland, B., and X. Wang, 2013: Effects of sequential or simultaneous assimilation of observations and localization methods on the performance of the ensemble Kalman filter. *Quart. J. Roy. Meteor. Soc.*, **139**, 758–770, doi:10.1002/qj.2006.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. *Mon. Wea. Rev.*, **126**, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.

Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. *Mon. Wea. Rev.*, **129**, 123–137, doi:10.1175/1520-0493(2001)129<0123:ASEKFF>2.0.CO;2.

Hunt, B. R., E. J. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. *Physica D*, **230**, 112–126, doi:10.1016/j.physd.2006.11.008.

Janjić, T., L. Nerger, A. Albertella, J. Schröter, and S. Skachko, 2011: On domain localization in ensemble based Kalman filter algorithms.

,*Mon. Wea. Rev.***139**, 2046–2060, doi:10.1175/2011MWR3552.1.Lawson, W. G., and J. A. Hansen, 2004: Implications of stochastic and deterministic filters as ensemble-based data assimilation methods in varying regimes of error growth.

,*Mon. Wea. Rev.***132**, 1966–1981, doi:10.1175/1520-0493(2004)132<1966:IOSADF>2.0.CO;2.Lorenz, E. N., 1996: Predictability—A problem partly solved.

*Proc. Seminar on Predictability,*Reading, United Kingdom, ECMWF, 1–18.Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model.

,*J. Atmos. Sci.***55**, 399–414, doi:10.1175/1520-0469(1998)055<0399:OSFSWO>2.0.CO;2.Losa, S. N., S. Danilov, J. Schröter, T. Janjić, L. Nerger, and F. Janssen, 2014: Assimilating NOAA SST data into BSH operational circulation model for the North and Baltic Seas: Part 2. Sensitivity of the forecast’s skill to the prior model error statistics.

,*J. Mar. Syst.***129**, 259–270, doi:10.1016/j.jmarsys.2013.06.011.Maybeck, P. S., 1979:

*Stochastic Models, Estimation, and Control*. Vol. 1. Academic Press, 444 pp.Miyoshi, T., and S. Yamane, 2007: Local ensemble transform Kalman filter with an AGCM at a T159/L48 resolution.

,*Mon. Wea. Rev.***135**, 3841–3861, doi:10.1175/2007MWR1873.1.Nerger, L., and W. Hiller, 2013: Software for ensemble-based data assimilation systems —Implementation strategies and scalability.

,*Comput. Geosci.***55**, 110–118, doi:10.1016/j.cageo.2012.03.026.Nerger, L., W. Hiller, and J. Schröter, 2005: PDAF—The Parallel Data Assimilation Framework: Experiences with Kalman filtering.

*Use of High Performance Computing in Meteorology—Proceedings of the 11th ECMWF Workshop,*W. Zwieflhofer and G. Mozdzynski, Eds., World Scientific, 63–83.Nerger, L., T. Janjić, J. Schröter, and W. Hiller, 2012a: A regulated localization scheme for ensemble-based Kalman filters.

,*Quart. J. Roy. Meteor. Soc.***138**, 802–812, doi:10.1002/qj.945.Nerger, L., T. Janjić, J. Schröter, and W. Hiller, 2012b: A unification of ensemble square root Kalman filters.

,*Mon. Wea. Rev.***140**, 2335–2345, doi:10.1175/MWR-D-11-00102.1.Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data asimilation.

,*Tellus***56A**, 415–428, doi:10.1111/j.1600-0870.2004.00076.x.Pham, D. T., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems.

,*Mon. Wea. Rev.***129**, 1194–1207, doi:10.1175/1520-0493(2001)129<1194:SMFSDA>2.0.CO;2.Pham, D. T., J. Verron, and L. Gourdeau, 1998a: Singular evolutive Kalman filters for data assimilation in oceanography.

*C. R. Acad. Sci., Ser. II,***326**(4), 255–260.Pham, D. T., J. Verron, and M. C. Roubaud, 1998b: A singular evolutive extended Kalman filter for data assimilation in oceanography.

,*J. Mar. Syst.***16**, 323–340, doi:10.1016/S0924-7963(97)00109-7.Sakov, P., and P. R. Oke, 2008: Implications of the form of the ensemble transformation in the ensemble square root filters.

,*Mon. Wea. Rev.***136**, 1042–1053, doi:10.1175/2007MWR2021.1.Sakov, P., G. Evensen, and L. Bertino, 2010: Asynchronous data assimilation with the EnKF.

,*Tellus***62A**, 24–29, doi:10.1111/j.1600-0870.2009.00417.x.Sakov, P., F. Counillon, L. Bertino, K. A. Lisaeter, P. R. Oke, and A. Korablev, 2012: TOPAZ4: An ocean-sea ice data assimilation system for the North Atlantic and Arctic.

,*Ocean Sci.***8**, 633–656, doi:10.5194/os-8-633-2012.Tippett, M. K., J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker, 2003: Ensemble square root filters.

,*Mon. Wea. Rev.***131**, 1485–1490, doi:10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2.Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations.

,*Mon. Wea. Rev.***130**, 1913–1927, doi:10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP global forecast system.

,*Mon. Wea. Rev.***136**, 463–482, doi:10.1175/2007MWR2018.1.