## 1. Introduction

Ensemble methods have been used in a variety of geophysical estimation problems, including atmospheric applications, oceanography, and land surface systems. Recently, there has been growing interest in particle filtering methods in particular, since they are better able to capture the nonlinearity inherent in many geophysical systems [e.g., the merging particle filter of Nakano et al. (2007), the equivalent-weights filter of Ades and van Leeuwen (2013), and the implicit particle filter (Morzfeld et al. 2012)]. At the same time, particle filters tend to suffer from the “curse of dimensionality,” in which the required ensemble size grows very rapidly as the dimension increases. Thus, it would be useful to know a priori whether a particle filter is feasible to implement in a given system.

The curse of dimensionality is a well-known problem in density estimation, as Monte Carlo estimation of high-dimensional probability densities demands notoriously large sample sizes (Silverman 1986). In a series of related papers, Bengtsson et al. (2008), Bickel et al. (2008), and Snyder et al. (2008) show that the curse of dimensionality is also manifest in the simplest particle filter. They demonstrate that the required ensemble size scales exponentially with a statistic that is related, in part, to the system dimension and that may be regarded as an effective dimension. More general particle filters employ sequential importance sampling and allow a choice of proposal distribution from which particles are drawn. Snyder et al. (2015) [see also Snyder (2012)] showed that the exponential increase of the ensemble size with effective dimension also holds for particle filters using the optimal proposal distribution (Doucet et al. 2001), which we will introduce in more detail in section 5.

We will consider particle filters based on both of the proposals above. In the case examined by Bengtsson et al. (2008), Bickel et al. (2008), and Snyder et al. (2008), the proposal is the transition distribution for the system dynamics, where new particles are generated by evolving particles from the previous time under the system dynamics. It yields the bootstrap filter of Gordon et al. (1993) and was termed the “standard” proposal by Snyder (2012) and Snyder et al. (2015). The optimal proposal is of interest both because of its relation to the implicit and equivalent-weights particle filters and because it minimizes the degeneracy of weights, as shown in Snyder et al. (2015), and thereby provides a bound on the performance of other particle filters that use sequential importance sampling.

Our ultimate goal is to be able to determine whether a particle filter would be feasible to implement, given that we have statistics from, say, a working ensemble Kalman filter (EnKF). For the standard proposal, this is straightforward: the forecast step of ensemble forecasts provides a draw from the proposal, and we simply need to compute weights based on the observation likelihood for each member. However, it is harder to use an existing ensemble to assess the feasibility of the particle filter based on the optimal proposal, since it is nontrivial to develop an algorithm to sample from this proposal (cf. Morzfeld et al. 2012). An alternative is to utilize the results of Bengtsson et al. (2008), Bickel et al. (2008), and Snyder et al. (2008, 2015), which relate the behavior of the weights in the linear, Gaussian case to eigenvalues of certain covariance matrices that may be estimated from an existing ensemble. We aim to evaluate the use of these results in a more general nonlinear, non-Gaussian setting, by using ensembles from a working sequential EnKF to calculate the relevant statistics (without implementing the particle filter directly). As a specific test case, we employ the Lorenz-96 system and demonstrate the nonlinearity present in this example. Note that sampling error also presents an issue in applying these results; we investigate these effects, and possible methods for overcoming them, in this paper as well.

In addition, it is nontrivial to implement the truly optimal particle filter in nonlinear settings when the observations are not available every model time step. We investigate several approximations to the implementation of the “optimal” particle filter and utilize these approximations in sections 5 and 6. However, we emphasize that these approximations are no longer guaranteed to be optimal.

We note here that Chorin and Morzfeld (2013) have investigated a different, but related, effective dimension of a Gaussian data assimilation problem. In particular, they define a “feasibility criterion” to be the Frobenius norm of the steady-state posterior covariance matrix (which can be calculated exactly in the linear, Gaussian regime). While both studies explore potential limitations of particle filtering in high-dimensional systems, their criterion is based on bounding the total posterior error variance as a function of an effective dimension, whereas the studies of Snyder et al. (2008) and Snyder et al. (2015) quantify the relation between degeneracy of the particle-filter weights and an effective dimension.

The remainder of this paper is organized as follows. In section 2, we review the ensemble Kalman filter and the particle filter and their respective implementations. Section 3 reviews the previous results of Snyder et al. (2008) regarding the limits of particle filters in high-dimensional linear systems. Section 4 verifies the applicability of the results for linear, Gaussian systems and the standard proposal to nonlinear systems by testing the results on the Lorenz-96 system; this is specifically useful for understanding the similar extension needed for the optimal proposal. In section 5, we consider approximations of the optimal proposal in a nonlinear system and discuss some of the difficulties that arise, in particular regarding additive model noise in nonlinear systems. Section 6 includes comparisons of performance of the standard proposal and approximation to the optimal proposal in a nonlinear system. Finally, section 7 summarizes the results and draws conclusions.

## 2. Review of ensemble methods and previous results

Ensemble data assimilation methods approximate probability distributions using an ensemble of states, either weighted or unweighted. Two common ensemble methods are the EnKF and the particle filter. Generally, the traditional EnKF algorithm uses unweighted ensemble members, which are themselves updated when an observation becomes available. On the other hand, the particle filter uses a weighted ensemble. When an observation is available, the particles are drawn from a proposal distribution and then reweighted according to the observation.
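As a concrete sketch of the reweighting step just described (illustrative code, not taken from the paper), one standard-proposal weight update with Gaussian observation errors can be written as follows; `H` and `R` denote the usual linear observation operator and observation-error covariance:

```python
import numpy as np

def update_weights(weights, particles, y, H, R):
    """One standard-proposal (bootstrap) weight update: after the particles
    have been propagated by the model, reweight each one by the Gaussian
    observation likelihood p(y | x) and renormalize."""
    innov = y - particles @ H.T                       # (Ne, ny) innovations
    Rinv = np.linalg.inv(R)
    loglik = -0.5 * np.einsum('ij,jk,ik->i', innov, Rinv, innov)
    logw = np.log(weights) + loglik
    logw -= logw.max()                                # avoid underflow in exp
    w = np.exp(logw)
    return w / w.sum()

# Tiny illustration: 3 particles, scalar observation y = x + noise
parts = np.array([[0.0], [1.0], [2.0]])
w = update_weights(np.full(3, 1 / 3), parts, np.array([1.0]),
                   np.eye(1), 0.25 * np.eye(1))
```

Working in log space before exponentiating, as above, is the standard way to avoid numerical underflow when the likelihoods are very small.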

In this section, we will first describe the setup and some notation, and then briefly review the standard and optimal proposal implementations of the particle filter as well as the ensemble Kalman filter.

### a. Setup and notation

*k* indexes time and

*l* below, while observations are available every

*k*. Let the time between observations be denoted by

*k* is suppressed for notational convenience, are i.i.d. random variables that represent the system noise and have a distribution to be specified later. Further define

*m* is nonlinear, since the noise is convolved into the state at the intermediate integration times

*l*. Next assume that we have linear, noisy observations of the state given by

### b. Particle filter

### c. Ensemble Kalman filter

*f* stands for “forecast,” and

*a* will represent “analysis.” If an observation is also available at time

The EnKF is a linear method, and thus will be suboptimal for problems that are significantly non-Gaussian even if
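A minimal sketch of the stochastic (perturbed observation) EnKF analysis step of Burgers et al. (1998) is given below; it is illustrative only, and omits the localization and inflation used in the experiments later in the paper:

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Perturbed-observation EnKF analysis step (Burgers et al. 1998).
    Xf: (Ne, nx) forecast ensemble; y: (ny,) observation;
    H: (ny, nx) observation operator; R: (ny, ny) obs-error covariance."""
    Ne = Xf.shape[0]
    A = Xf - Xf.mean(axis=0)                          # forecast anomalies
    Pf = A.T @ A / (Ne - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    yp = y + rng.multivariate_normal(np.zeros(len(y)), R, size=Ne)
    return Xf + (yp - Xf @ H.T) @ K.T                 # analysis ensemble

# Illustration: 200 members, 2 variables, only the first observed
rng = np.random.default_rng(1)
Xf = rng.normal(0.0, 1.0, size=(200, 2))
H = np.array([[1.0, 0.0]])
Xa = enkf_analysis(Xf, np.array([2.0]), H, 0.5 * np.eye(1), rng)
```

In the illustration, the analysis ensemble mean of the observed variable is pulled toward the observation and the analysis spread is reduced, as expected from the Kalman gain.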

### d. Review of previous asymptotic results

Snyder et al. (2008) prove, in certain regimes, an exponential relationship between the variance of the observation log-likelihood and the inverse of the maximum weight. In the linear Gaussian case, this variance can be calculated as a sum of eigenvalues of an explicit function of covariances. First, we give some definitions.

Snyder et al. (2008) apply these asymptotic results to the particle filter with the standard proposal, where

In a system where each degree of freedom is independent and independently observed, these expressions simplify and show that
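A small Monte Carlo illustration of this degeneracy can be built for a toy Gaussian system with independent, independently observed degrees of freedom and unit noise (a generic illustration of the scaling, not the paper's experiment): the variance of the observation log-likelihood grows with dimension, and the maximum normalized weight approaches 1.

```python
import numpy as np

def degeneracy_stats(nx, Ne, rng):
    """For a standard-proposal draw in an nx-dimensional Gaussian system
    with unit prior and observation noise, return the sample variance of
    the observation log-likelihood (tau^2) and the maximum normalized
    weight over the ensemble."""
    x = rng.normal(size=(Ne, nx))              # prior draw (standard proposal)
    y = rng.normal(size=nx)                    # one observation of every variable
    loglik = -0.5 * ((y - x) ** 2).sum(axis=1)
    tau2 = loglik.var()
    w = np.exp(loglik - loglik.max())
    return tau2, (w / w.sum()).max()

rng = np.random.default_rng(2)
t_small, w_small = degeneracy_stats(2, 1000, rng)    # low dimension
t_large, w_large = degeneracy_stats(200, 1000, rng)  # high dimension
```

With 1000 particles, the 2-variable case keeps the weights spread across many particles, while in the 200-variable case a single particle carries nearly all of the weight.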

## 3. Model and experimental setup

We solve a discrete-time, stochastic version of this equation, cast in the form (1). Fixing an integration time step

We will need an example ensemble data assimilation (DA) scheme to calculate the statistics necessary to test the asymptotic theory. Since our goal is to demonstrate that these statistics may be used in practice to determine the applicability of the particle filter, we will use the EnKF, a common method for high-dimensional problems, to calculate the statistics. While the EnKF is suboptimal in the nonlinear case, we wish to show that the method is reasonably effective across a wide range of parameters for this system. The spread–skill relation (Table 1) indicates that this is true. For this experiment, we fix

Forecast error and variance of the working EnKF, with varying values of observation noise.

In section 6, we will run a sequential particle filter for many observations to compare the overall performance of different proposal algorithms. In this case, we will need to resample in order to prevent weight collapse: here, we test two different resampling thresholds. The first is the resampling threshold defined in Kong et al. (1994), in which the filter is set to resample when the effective sample size
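The effective sample size of Kong et al. (1994) is simply one over the sum of squared normalized weights; a resampling rule built on it can be sketched as follows (the 50% fraction here is a common default, not necessarily the threshold used in the experiments):

```python
import numpy as np

def effective_sample_size(w):
    """Effective sample size of Kong et al. (1994): Ne_eff = 1 / sum(w_i^2)
    for normalized weights w; equals Ne for uniform weights and approaches
    1 as the weights collapse onto a single particle."""
    w = np.asarray(w, dtype=float)
    return 1.0 / np.sum(w ** 2)

def needs_resampling(w, frac=0.5):
    """Trigger resampling when Ne_eff drops below frac * Ne."""
    return effective_sample_size(w) < frac * len(w)

uniform = np.full(4, 0.25)
collapsed = np.array([0.97, 0.01, 0.01, 0.01])
```

For the uniform weights above the effective sample size is exactly 4, so no resampling is needed; for the collapsed weights it is close to 1, which triggers resampling.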

## 4. Extension to nonlinear case: Standard proposal

Our goal is to show how to use an existing DA ensemble to determine whether it would be feasible to use a particle filter for a given nonlinear system, and if so, how many particles would be necessary to avoid filter collapse. In the case of the standard proposal, it is straightforward to directly calculate the weights without implementing the particle filter and quantify the statistics of the maximum weight directly. Alternatively, if we know

We consider the Lorenz (1996) equations with

Measure of nonlinearity of the Lorenz-96 system, for varying lengths of time.

The results in Table 2 show that the measure of nonlinearity is very close to 0 for a single integration step, but quickly increases for longer time windows. This implies that the system is well approximated by a linear model after just one integration step, but the nonlinearity increases as the length of integration increases. We have, therefore, chosen observation frequencies for the following experiments that guarantee nonlinear behavior of the model between observations in order to test the theory. Additionally, note that although we are operating in a regime where the system is fully observed, based on theory, we expect the same results to hold in the more realistic situation of inhomogeneous spatial observation coverage. In particular, fewer observations will lead to more strongly non-Gaussian probability distributions; however, we are testing the effects of non-Gaussian distributions by ensuring the time between observations is long enough to display nonlinear behavior.
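For reference, the deterministic part of the Lorenz (1996) dynamics and a standard fourth-order Runge-Kutta step can be sketched as below. The forcing F = 8 and step dt = 0.05 are conventional choices for chaotic behavior; the paper's stochastic formulation and exact parameters are not reproduced here.

```python
import numpy as np

def lorenz96_tendency(x, F=8.0):
    """Lorenz (1996) tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    with cyclic indexing, implemented via np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta step of the deterministic dynamics."""
    k1 = lorenz96_tendency(x)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2)
    k4 = lorenz96_tendency(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Perturb the unstable fixed point x_i = F and integrate for 5 time units
x = 8.0 * np.ones(40)
x[19] += 0.01
for _ in range(100):
    x = rk4_step(x)
```

Starting from the fixed point x_i = F, the small perturbation grows rapidly and the trajectory moves onto the chaotic attractor, which is why longer integration windows between observations produce the increasingly nonlinear behavior reported in Table 2.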

Next, to test the asymptotic theory, we vary the observation error variance from 10^{−5} to 0.05. Note that varying the observation error leads to different values of *τ*, and thus different data points. Small observation error variances will result in ensembles that are close to collapse. Intuitively, this can be understood by thinking about a one-dimensional case: if the variance of the observation likelihood is very small, then the support of the probability distribution is very narrow, and all particles except the one closest to the observation will have very small weight.

Figure 1 shows the results of the asymptotic theory of collapse, where each data point is averaged over the last 2990 steps of the filter and the error bars represent 95% confidence intervals. Note that the observation error is increasing as we move in the positive *x*-axis direction. Results using the full covariance to calculate the eigenvalues are given in blue. They agree well with the theory in the regime near the origin, where the theory is formally valid, but deviations from the theory increase as

There are several additional reasons for deviation from the theory within the asymptotic regime as well. In particular, the theory relies on the assumption that

To test whether the non-Gaussian nature of

Skewness of the ensembles of

To additionally give a visual approximation of the distribution, we repeat the experiment with a large ensemble (*N*_{e} = 10 000) and plot the histograms of

In practice, there are difficulties using (17) to estimate

Thus, we also tested this theory using a computationally feasible approximation for the eigenvalues of ^{1} Nevertheless, using the approximation of
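The diagonal approximation can be illustrated generically with synthetic data: replacing the sample covariance by its diagonal makes the "eigenvalues" simply the ensemble variances, which preserves the eigenvalue sum (the trace) while flattening the spectrum and reducing the sum of squared eigenvalues. The sketch below is illustrative and does not use the paper's covariances.

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated synthetic ensemble: Ne members of a 50-variable state
Ne, nx = 400, 50
L = rng.normal(size=(nx, nx)) / np.sqrt(nx)
X = rng.normal(size=(Ne, nx)) @ L.T

C = np.cov(X, rowvar=False)
eig_full = np.linalg.eigvalsh(C)   # O(nx^3); requires the full matrix
eig_diag = np.diag(C)              # diagonal approximation: just the variances
```

Both spectra have the same sum (the trace of C), but the full spectrum is steeper, so any statistic weighting large eigenvalues more heavily (such as a sum of squared eigenvalues) is underestimated by the diagonal approximation.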

## 5. Optimal proposal

Next, we follow the approach of the previous section, but apply the asymptotic theory to our approximation of the optimal proposal. Specifically, we wish to use an existing ensemble to evaluate the feasibility of a particle filter using the optimal proposal. As in the case of the standard proposal above, the evaluation will be limited by the fact that it applies results from linear, Gaussian systems in a nonlinear, non-Gaussian setting, and by sampling errors in estimating the necessary covariance matrices from a finite ensemble. We will check these limitations with numerical simulations using the Lorenz (1996) system.

There are two additional issues that must be addressed when evaluating the feasibility of the optimal proposal. First, the assumption that

### a. Model noise in nonlinear systems

The matrix

It is not immediately obvious whether one of these definitions is to be preferred. They will agree in the limit of linear dynamics and may differ as nonlinearity increases. We have, therefore, explored the behavior of both approaches in the case with

The two definitions do, however, differ substantially in their computational demands, as the computation of the

### b. Numerical results

Snyder et al. (2015) have rigorously shown that the asymptotics developed in Bengtsson et al. (2008) and Snyder et al. (2008) also hold for the optimal proposal. Here, we numerically show how these results extend to the nonlinear system of Lorenz (1996), with our approximation of the optimal proposal. As in the experiment with the standard proposal, we run the EnKF with a localization radius of 5 and a covariance inflation of 1.05 on the Lorenz equations with 100 variables for 3000 sequential observations. We vary the error variance from 10^{−3} to 1. In this experiment, we use the approximation of the optimal proposal to calculate *τ* and the exact weights. The size of the ensemble used to calculate

The results are given in Fig. 3. Clearly, the data points do not agree with the theory as well as in the experiment with the standard proposal. This is likely due to the parameter choices in this experiment. When using this approximation of the optimal proposal, we empirically found that we needed to increase the time between observations in order to satisfy the assumption that the filter is close to collapse [i.e., that

Skewness of the ensembles of

As in the previous section, we also perform this experiment with the larger ensemble size of *N*_{e} = 10 000 and plot the histograms. The results for *N*_{e} = 10 000 are comparable to the time averages for

Thus, we would expect worse agreement with the asymptotics in these experiments. When the diagonal approximation is used to estimate *τ*, the data points for which this approximation was used (in red) are much closer to the theoretical line. That is, the underestimation of *τ* by the diagonal approximation compensates for the overestimation of *τ* by the theory due to the steep spectrum. But, as in the case of the standard proposal, these approximations are more accurate in the asymptotic regime (close to the origin), while the data deviate from the theory away from this regime.

## 6. Performance of standard and optimal proposals

Recently, there has been a focus in the particle filtering community on the optimal proposal as an improvement over the standard proposal (Doucet et al. 2000; Arulampalam et al. 2002; Bocquet et al. 2010; Snyder 2012; Snyder et al. 2015). Intuitively, sampling from a distribution conditioned on the new observations should perform better than a distribution conditioned on the previous observations. The form of the weight update should also provide intuition behind the performance gain: the standard proposal weight update involves the distribution of the observations conditioned on the state at the current time

In a review of non-Gaussian data assimilation methods, Bocquet et al. (2010) performed a simple comparison between the standard and optimal proposal implementation of the particle filter and found that the optimal proposal results in lower mean squared errors for smaller ensemble sizes, and has comparable performance to the standard proposal for large ensemble sizes. Here, we perform experiments that not only compare the mean squared errors of these methods, but we also consider the frequency at which resampling occurs as well as the maximum weight of each method after a single step.
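For a linear-Gaussian model x_k = M x_{k−1} + η with η ~ N(0, Q) and y = H x_k + ε with ε ~ N(0, R), the optimal proposal and its weights have a well-known closed form (Doucet et al. 2001): each particle is drawn from the Gaussian p(x_k | x_{k−1}, y), and its weight is proportional to p(y | x_{k−1}). The sketch below illustrates this linear-Gaussian case only, not the nonlinear approximation used in the paper's experiments.

```python
import numpy as np

def optimal_proposal_step(X, y, M, Q, H, R, rng):
    """One optimal-proposal step for the linear-Gaussian model
    x_k = M x_{k-1} + eta, eta ~ N(0, Q);  y = H x_k + eps, eps ~ N(0, R).
    Draws each particle from p(x_k | x_{k-1}, y) and assigns log-weights
    proportional to log p(y | x_{k-1})."""
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    P = np.linalg.inv(Qi + H.T @ Ri @ H)      # proposal covariance
    Si = np.linalg.inv(H @ Q @ H.T + R)       # inverse innovation covariance
    Xn, logw = [], []
    for x in X:
        m = M @ x                             # deterministic forecast
        mean = P @ (Qi @ m + H.T @ Ri @ y)    # proposal mean, shifted toward y
        Xn.append(rng.multivariate_normal(mean, P))
        d = y - H @ m
        logw.append(-0.5 * d @ Si @ d)        # log p(y | x_{k-1}), up to a constant
    logw = np.array(logw)
    w = np.exp(logw - logw.max())
    return np.array(Xn), w / w.sum()

# Illustration: two particles, identity dynamics and observations
rng = np.random.default_rng(4)
I2 = np.eye(2)
X0 = np.array([[0.0, 0.0], [3.0, 3.0]])
Xn, w = optimal_proposal_step(X0, np.zeros(2), I2, I2, I2, I2, rng)
```

In the illustration, the particle whose forecast lies closer to the observation receives nearly all of the weight, and both particles are drawn from distributions already conditioned on the new observation.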

To test the usefulness of the optimal proposal, experiments were run with the Lorenz (1996) system with 5, 10, and 20 variables, with full observations once per time step for 300 time steps, using both the standard proposal and our approximate implementation of the optimal proposal as described in section 5. The observation error variance, system noise variance, and initial ensemble variance are fixed at

Figure 5 shows the root mean squared error of the posterior mean as a function of ensemble size. For the system with the smallest state dimension (

A further difference is that the filter using the standard proposal resamples much more often than that of our approximation to the optimal proposal, with both resampling thresholds (see Table 5). This may help explain why our approximation of the optimal proposal has better error values: since resampling introduces additional sampling noise into the algorithm, resampling less frequently should be preferable to resampling often.

Number of times each method was resampled in a window of 300 assimilation steps, for varying ensemble sizes and state dimensions. (top) Resample when effective sample size

Note that under the effective sampling size threshold, the number of times the filter resampled increases with ensemble size for a fixed state dimension. This may be due to the dependence of the threshold on the ensemble size, leading to increased resampling frequency with

In addition to having smaller errors over time, the optimal proposal is less likely to experience collapse than the standard proposal. A hint to this behavior is given by the lower number of necessary resampling steps for our approximation to the optimal proposal than the standard proposal; however, this effect can be studied directly by comparing the maximum weight after one step for each proposal. Results are shown in Fig. 6. All parameters are fixed at the same value for each proposal, except the state dimension, which varies as shown. The ensemble size is fixed at

When

Comparison of the performance of standard and optimal proposals, varying the size of model system noise.

As Table 6 shows, the difference in errors between the standard and approximation of the optimal proposal increases as the system noise increases. For the smallest system noise, the ratio of the error from the approximation to the optimal proposal to the standard error is very close to 1, but for larger noise, our approximation of the optimal proposal yields a significant decrease in error over the standard proposal. Since the optimal proposal (or an approximation thereof) requires more computational effort than the standard proposal, if the problem of interest has very small system noise, then the standard proposal should be used. Lin et al. (2013) present the optimal proposal particle filter as one method in a class of “look ahead” algorithms, and investigate other such algorithms in the context of computational expense for various types of problems.

## 7. Discussion and conclusions

In this work, we attempted to answer the question of whether one could predict the collapse of the optimal particle filter without building a scheme to sample from the optimal proposal. We have shown that this is possible in the Lorenz (1996) system, using results from Snyder et al. (2008) and their extension to the optimal proposal in Snyder et al. (2015). The results of the former demonstrate how to use eigenvalues of matrices from a linear, Gaussian system to calculate the effective dimension

In extending the asymptotic results to nonlinear systems and the optimal proposal, another important issue is to estimate an “effective” system-noise covariance corresponding to the additive Gaussian noise at observation times assumed in Snyder et al. (2008, 2015). We have discussed several different approximations of this covariance, and shown that the asymptotic results are also valid with our implementation of the optimal proposal when these approximate system-noise covariances are used. We emphasize that this implementation is an approximation to the truly “optimal” proposal, and thus the theory guaranteeing optimality of this proposal algorithm no longer holds in this setting.

Additionally, the eigenvalue decompositions necessary to estimate the effective dimension will be costly for large systems (and large ensembles). Thus, in practice, we will need to find computationally feasible approximations. In this work, we have chosen to approximate the matrices as diagonal to simplify these eigenvalue calculations. This approximation appears to be effective in the idealized system considered here, though it also tends to overestimate the degree of collapse. The margin of this overestimation decreases as the system gets closer to collapse.

Finally, motivated by the results of Snyder (2012), which demonstrate the benefits of the optimal proposal implementation over the standard proposal in a simple example, we investigated the performance gain of an approximate implementation of the optimal proposal over the standard proposal in a nonlinear system. We have shown that the approximate implementation of the optimal proposal not only collapses less frequently than the standard proposal in the same regimes, but also converges faster as the number of particles increases. Thus, for systems in which the particle filter may work, the optimal proposal can achieve a given level of performance with fewer particles than the standard proposal. The optimal proposal, however, is not trivial to implement, and its benefits disappear in the limit of small system noise.

There are several remaining challenges regarding particle filters in nonlinear systems. Experiments still need to be done to determine how the filters behave when applied sequentially; the experiments in sections 4 and 5 of this paper study the degree of collapse after one assimilation step. However, this does not preclude the possibility of the particle filter collapsing after two or more steps. Second, while we have presented a general methodology, we have only tested it on one system; further testing in a wider variety of systems would be of interest. In addition, it could be useful to know whether the numerical results in this paper have an analytical analog, as in the linear Gaussian case. Finally, further work should be done to investigate the optimal proposal, particularly in regards to approximations of the model noise covariance.

## Acknowledgments

Slivinski was supported by the NSF through Grants DMS-0907904 and DMS-1148284, by ONR through DOD (MURI) Grant N000141110087, and by NCAR’s Advanced Study Program during a collaborative visit to NCAR.

## APPENDIX

### Sampling from the Optimal Proposal

## REFERENCES

Ades, M., and P. van Leeuwen, 2013: An exploration of the equivalent weights particle filter. *Quart. J. Roy. Meteor. Soc.*, **139**, 820–840, doi:10.1002/qj.1995.

Anderson, J., and S. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. *Mon. Wea. Rev.*, **127**, 2741–2758, doi:10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.

Arulampalam, M., S. Maskell, N. Gordon, and T. Clapp, 2002: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. *IEEE Trans. Signal Process.*, **50**, 174–188, doi:10.1109/78.978374.

Bengtsson, T., P. Bickel, and B. Li, 2008: Curse of dimensionality revisited: Collapse of the particle filter in very large scale systems. *Probability and Statistics: Essays in Honor of David A. Freedman*, D. Nolan and T. Speed, Eds., Institute of Mathematical Statistics, 316–334.

Bickel, P., B. Li, and T. Bengtsson, 2008: Sharp failure rates for the bootstrap particle filter in high dimensions. *Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh*, B. Clarke and S. Ghosal, Eds., Institute of Mathematical Statistics, 318–329.

Bocquet, M., C. A. Pires, and L. Wu, 2010: Beyond Gaussian statistical modeling in geophysical data assimilation. *Mon. Wea. Rev.*, **138**, 2997–3023, doi:10.1175/2010MWR3164.1.

Burgers, G., P. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. *Mon. Wea. Rev.*, **126**, 1719–1724, doi:10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.

Chopin, N., 2004: Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference. *Ann. Stat.*, **32**, 2385–2411, doi:10.1214/009053604000000698.

Chorin, A. J., and M. Morzfeld, 2013: Conditions for successful data assimilation. *J. Geophys. Res. Atmos.*, **118**, 11 522–11 533, doi:10.1002/2013JD019838.

Del Moral, P., and L. M. Murray, 2015: Sequential Monte Carlo with highly informative observations. ArXiv, accessed 29 January 2016. [Available online at http://arxiv.org/abs/1405.4081.]

Doucet, A., and A. Johansen, 2011: A tutorial on particle filtering and smoothing: Fifteen years later. Dept. of Statistics, Oxford University, 39 pp. [Available online at http://www.stats.ox.ac.uk/~doucet/doucet_johansen_tutorialPF2011.pdf.]

Doucet, A., S. Godsill, and C. Andrieu, 2000: On sequential Monte Carlo sampling methods for Bayesian filtering. *Stat. Comput.*, **10**, 197–208, doi:10.1023/A:1008935410038.

Doucet, A., N. de Freitas, and N. Gordon, 2001: An introduction to sequential Monte Carlo methods. *Sequential Monte Carlo Methods in Practice*, A. Doucet, N. de Freitas, and N. Gordon, Eds., Springer-Verlag, 2–14.

Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. *J. Geophys. Res.*, **99**, 10 143–10 162, doi:10.1029/94JC00572.

Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. *Ocean Dyn.*, **53**, 343–367, doi:10.1007/s10236-003-0036-9.

Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. *Quart. J. Roy. Meteor. Soc.*, **125**, 723–757, doi:10.1002/qj.49712555417.

Gordon, N., D. Salmond, and A. Smith, 1993: Novel approach to nonlinear/non-Gaussian Bayesian state estimation. *IEE Proc., F, Radar Signal Process.*, **140**, 107–113, doi:10.1049/ip-f-2.1993.0015.

Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. *Mon. Wea. Rev.*, **129**, 2776–2790, doi:10.1175/1520-0493(2001)129<2776:DDFOBE>2.0.CO;2.

Hastings, W. K., 1970: Monte Carlo sampling methods using Markov chains and their applications. *Biometrika*, **57**, 97–109, doi:10.1093/biomet/57.1.97.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. *Mon. Wea. Rev.*, **126**, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.

Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. *Mon. Wea. Rev.*, **129**, 123–137, doi:10.1175/1520-0493(2001)129<0123:ASEKFF>2.0.CO;2.

Kong, A., J. Liu, and W. Wong, 1994: Sequential imputations and Bayesian missing data problems. *J. Amer. Stat. Assoc.*, **89**, 278–288, doi:10.1080/01621459.1994.10476469.

Lin, M., R. Chen, and J. Liu, 2013: Lookahead strategies for sequential Monte Carlo. *Stat. Sci.*, **28**, 69–94, doi:10.1214/12-STS401.

Lorenz, E. N., 1996: Predictability: A problem partly solved. *Proc. Seminar on Predictability*, Vol. 1, Reading, Berkshire, United Kingdom, ECMWF, 1–18.

Morzfeld, M., X. Tu, E. Atkins, and A. J. Chorin, 2012: A random map implementation of implicit filters. *J. Comput. Phys.*, **231**, 2049–2066, doi:10.1016/j.jcp.2011.11.022.

Nakano, S., G. Ueno, and T. Higuchi, 2007: Merging particle filter for sequential data assimilation. *Nonlinear Processes Geophys.*, **14**, 395–408, doi:10.5194/npg-14-395-2007.

Robert, C. P., and G. Casella, 2004: *Monte Carlo Statistical Methods.* Springer-Verlag, 645 pp.

Silverman, B., 1986: *Density Estimation for Statistics and Data Analysis.* Chapman and Hall, 175 pp.

Snyder, C., 2012: Particle filters, the “optimal” proposal and high-dimensional systems. *ECMWF Seminar on Data Assimilation for Atmosphere and Ocean*, Shinfield, Reading, United Kingdom, ECMWF, 161–170.

Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. *Mon. Wea. Rev.*, **136**, 4629–4640, doi:10.1175/2008MWR2529.1.

Snyder, C., T. Bengtsson, and M. Morzfeld, 2015: Performance bounds for particle filters using the optimal proposal. *Mon. Wea. Rev.*, **143**, 4750–4761, doi:10.1175/MWR-D-15-0144.1.

van Leeuwen, P., 2009: Particle filtering in geophysical systems. *Mon. Wea. Rev.*, **137**, 4089–4114, doi:10.1175/2009MWR2835.1.

^{1} Since