Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

Bo Liu, Boujemaa Ait-El-Fquih, and Ibrahim Hoteit
King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Abstract

The Bayesian filtering problem for data assimilation is considered following the kernel-based ensemble Gaussian mixture filtering (EnGMF) approach introduced by Anderson and Anderson. In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution is analyzed. Then the focus is on two aspects: (i) the efficient implementation of EnGMF with (relatively) small ensembles, where a new deterministic resampling strategy is proposed preserving the first two moments of the posterior GM to limit the sampling error; and (ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.

Corresponding author address: Ibrahim Hoteit, King Abdullah University of Science and Technology, Division of Computer, Electrical and Mathematical Sciences and Engineering, 23955-6900 Thuwal, Saudi Arabia. E-mail: ibrahim.hoteit@kaust.edu.sa


1. Introduction

Nonlinear Bayesian filtering is widely used in many fields to estimate the state of a nonlinear dynamical system given a set of noisy observations and knowledge of the system dynamics (Gustafsson et al. 2002). Let $\mathbf{x}_k$ and $\mathbf{y}_k$ denote the state variable at time instant $t_k$ and its corresponding observation, respectively. The Bayesian filtering problem is formulated as determining the estimate of the state given all available observations up to the estimation time. This is given by the mathematical expectation of the posterior probability density function (pdf) of the state as

$$\hat{\mathbf{x}}_k = \int \mathbf{x}_k \, p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k}\right) d\mathbf{x}_k, \tag{1}$$

where $\mathbf{y}_{0:k}$ is the shorthand of the observation set $\{\mathbf{y}_0, \mathbf{y}_1, \ldots, \mathbf{y}_k\}$ and $p(\mathbf{x}_k \mid \mathbf{y}_{0:k})$ denotes the posterior pdf of $\mathbf{x}_k$. For such a problem, one can apply Bayes's rule to recursively update the posterior pdf with incoming observations:

$$p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k}\right) = \frac{p\left(\mathbf{y}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k-1}\right)}{p\left(\mathbf{y}_k \mid \mathbf{y}_{0:k-1}\right)}, \tag{2}$$

where $p(\mathbf{y}_k \mid \mathbf{x}_k)$ is the likelihood function. The prior density $p(\mathbf{x}_k \mid \mathbf{y}_{0:k-1})$ is computed from the posterior density at the previous assimilation cycle as

$$p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k-1}\right) = \int p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) p\left(\mathbf{x}_{k-1} \mid \mathbf{y}_{0:k-1}\right) d\mathbf{x}_{k-1}, \tag{3}$$

and the normalization constant is defined as

$$p\left(\mathbf{y}_k \mid \mathbf{y}_{0:k-1}\right) = \int p\left(\mathbf{y}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k-1}\right) d\mathbf{x}_k. \tag{4}$$
In practice, analytical computation of the above integrals is usually not possible, especially when the dynamical model and/or the observation model are nonlinear and/or non-Gaussian. This gives rise to a number of approximate solutions (e.g., Evensen 1994; Houtekamer and Mitchell 1998; Lermusiaux 1999; Pham 2001; Anderson 2001; Hoteit et al. 2002; Tippett et al. 2003; van Leeuwen 2009; Snyder et al. 2008; Luo et al. 2012).

The particle filter (PF) approximates the posterior distribution of the state by a mixture of Dirac distributions defined at a set of sampled points in the state space called particles (Gordon et al. 1993; Liu 2008; van Leeuwen 2009). The evolution of the state distribution is then carried by propagating all particles forward in time with the dynamical model. When an observation is available, the relative likelihoods of these forecast particles are calculated to update their weights. The posterior estimate (1) is then approximated by the weighted average of the particles. Particles in the PF are not updated by the observations so that (theoretically) dynamical balances are not affected by the analysis (van Leeuwen 2009). The drawback is that the particles are not pulled toward the observations; only their relative weights are updated. After a few analysis steps, the algorithm assigns most of the weight to very few particles such that the weights of the remaining particles become negligible (Gustafsson et al. 2002). The statistical information carried by the particles becomes less meaningful, and may result in particle degeneracy and filter divergence (Snyder et al. 2008). The most standard way of preventing degeneracy is to abandon particles with very low weights and to resample multiple copies of the particles with large weights. However, since only a portion of the particles is duplicated after this procedure, the diversity of the particles gradually degrades (van Leeuwen 2009).

The particle degeneracy problem is more severe in large-dimensional systems, where the number of required particles increases exponentially with the dimension, a difficulty known as the “curse of dimensionality” (Bengtsson et al. 2008). An alternative approach to overcome this problem is the ensemble Kalman filter (EnKF) (Evensen 2003), which represents the Gaussian state distribution of the Kalman filter (KF) by an ensemble of particles with uniform weights. The evolution of the ensemble in time is then performed as in the PF, by integrating the particles (or ensemble members) forward with the dynamical model. Once an observation is available, a Kalman update is applied to each member, and the average of the updated ensemble is taken as the posterior estimate of the state (Evensen 2003). To avoid perturbing the observations, deterministic variants of the EnKF have also been proposed in which the ensemble mean and covariance are updated exactly as in the KF (Tippett et al. 2003). The EnKFs were shown to perform reasonably well and to be remarkably robust even when implemented with small ensembles, especially when equipped with localization and inflation techniques (Anderson and Anderson 1999; Hamill and Snyder 2000). In contrast with the PF, the Kalman update of the particles mitigates the risk of ensemble degeneracy by pulling the particles toward the observations (Kivman 2003; Hoteit et al. 2008).

The EnKF framework is based on Gaussian prior and posterior pdfs. This limits the filter’s ability to represent non-Gaussian distributions, a capability that is one of the fundamental advantages of non-Gaussian filtering schemes (Anderson and Anderson 1999). To overcome this limitation, a number of methods have been proposed to extend the EnKF to the more general framework of Gaussian mixture (GM) models. As demonstrated by Alspach and Sorenson (1972), when the likelihood is Gaussian and the observation operator is linear, a GM prior leads to a GM posterior for which the parameters of the Gaussian components are updated as in the KF and their weights are updated as in the PF. Two families of GM-based algorithms can be distinguished by how the forecast GM approximation is constructed:

  • Clustering-based GM algorithms—In this class of algorithms, clustering techniques are used to build a GM representation of the forecast pdf from a forecast ensemble. When a new observation is available, the weights are first updated according to the PF. The particles are then resampled before applying the Kalman update to the ensemble members as in the EnKF. These algorithms use distinct clustering techniques and/or resampling strategies in the update step (e.g., Bengtsson et al. 2003; Smith 2007; Sondergaard and Lermusiaux 2013). The reader may refer to Frei and Künsch (2013b) for a thorough discussion about this class of methods.

  • Kernel-based GM algorithms—The GM representation of the forecast pdf is constructed from the forecast ensemble via a Gaussian-type kernel function according to the standard density estimation approach (Silverman 1986). The weights in this GM are uniform, the centers are defined by the forecast particles, and the kernel function bandwidth matrix is designed as the scaled sample covariance of the forecast particles. In the update step, the weights and the first two moments of the GM components are updated as in the PF and KF, respectively. The kernel-type GM approach was first used for data assimilation by Anderson and Anderson (1999), who performed a comparison with a single Gaussian representation. A number of different schemes designed for large-dimensional applications then followed (e.g., Bengtsson et al. 2003; Hoteit et al. 2008, 2012; Stordal et al. 2011). These are discussed in more detail in section 2d.

Here, we follow the kernel-based approach, in an effort to contribute to the field on multiple levels. After formulating the ensemble-based GM filter (EnGMF) by Anderson and Anderson (1999) for any observational operator, we present a mathematical analysis of the effect of the bandwidth parameter of the kernel function on the second moment of the posterior pdf. Next, we focus on the efficient implementation of the EnGMF for data assimilation with small ensembles where we propose, among other solutions, to replace the stochastic resampling with a new deterministic resampling scheme while preserving the first two moments of the analysis pdf to limit sampling error (i.e., the mismatch between the posterior GM distribution and the statistical characteristics of the associated samples). We also focus on theoretical and numerical analyses of the influence of the bandwidth parameter on the contributions of the KF and PF update steps and on the variance of the posterior weights. Numerical experiments are conducted with the Lorenz-96 model to study the behavior of the EnGMF with deterministic resampling, and to evaluate its performance against EnGMF with stochastic resampling, the (stochastic) EnKF, and the (deterministic) ensemble transform Kalman filter (ETKF).

2. The ensemble Gaussian mixture filter with stochastic resampling (EnGMF_SR)

Consider a discrete-time state-space model in which the state of the system $\mathbf{x}_k$ at time $t_k$ is integrated with a forward model $\mathcal{M}_k$ and is observed through a linear observation operator $\mathbf{H}_k$:

$$\mathbf{x}_k = \mathcal{M}_k\left(\mathbf{x}_{k-1}\right) + \boldsymbol{\eta}_k, \tag{5}$$

$$\mathbf{y}_k = \mathbf{H}_k \mathbf{x}_k + \boldsymbol{\epsilon}_k. \tag{6}$$

The model process noise $\boldsymbol{\eta}_k$ and the observation process noise $\boldsymbol{\epsilon}_k$ are assumed to be mutually independent, independent in time, and independent of the initial state $\mathbf{x}_0$. We also assume that, for each k, $\boldsymbol{\eta}_k$ and $\boldsymbol{\epsilon}_k$ are Gaussian with zero means and covariances $\mathbf{Q}_k$ and $\mathbf{R}_k$, respectively.

Instead of approximating the posterior pdf by a mixture of Dirac delta functions as in the PF, we use a GM to form a continuous representation from the forecast ensemble. The advantage of using a GM to approximate the prior pdf before applying the Bayesian update is twofold: 1) it introduces a Kalman update to each particle, as in the EnKF, which should help to mitigate particle degeneracy by pulling them toward the observations (Kivman 2003; Hoteit et al. 2008); and 2) it introduces a weight to each particle, as in the PF, complementing the EnKF update step to make it consistent with the non-Gaussian Bayesian update. Resampling the particles from the posterior GM would then allow the posterior distribution to be integrated forward in time following an ensemble-based Monte Carlo approach. Since the new particles are drawn from a GM instead of a mixture of Dirac delta functions, the diversity of the particles is better preserved (Pham 2001).

This idea was first introduced by Anderson and Anderson (1999), who assumed the observational operator was an identity matrix. Similarly to van Leeuwen (2009), here we consider the case of any observational operator $\mathbf{H}_k$. Suppose that at the kth assimilation cycle, an ensemble of N analysis samples $\{\mathbf{x}_{k-1}^{a,i}\}_{i=1}^{N}$ representing the analysis pdf

$$p\left(\mathbf{x}_{k-1} \mid \mathbf{y}_{0:k-1}\right) \tag{7}$$

is available. This ensemble is propagated forward in time with the model in (5) as in the PF:

$$\mathbf{x}_k^{f,i} = \mathcal{M}_k\left(\mathbf{x}_{k-1}^{a,i}\right) + \boldsymbol{\eta}_k^i, \qquad i = 1, \ldots, N, \tag{8}$$

where $\boldsymbol{\eta}_k^i$ is a noise generated from the Gaussian pdf $\mathcal{N}(\mathbf{0}, \mathbf{Q}_k)$. Then, based on the kernel density estimation technique, a mixture of Gaussian kernel functions is used to construct a GM approximation of the forecast distribution from the forecast ensemble as

$$p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k-1}\right) \approx \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\left(\mathbf{x}_k;\, \mathbf{x}_k^{f,i},\, \mathbf{B}_k\right), \tag{9}$$

where $\mathbf{B}_k$ is the bandwidth matrix, which must be symmetric and semipositive definite. The centers $\mathbf{x}_k^{f,i}$ of the Gaussian components in the mixture are also referred to as particles.

The bandwidth matrix represents how the probability mass is continuously distributed around the sample points and their neighborhoods. Roughly speaking, a smaller $\mathbf{B}_k$ refers to the case where more probability is assigned to the points where the samples are located and less to their neighborhoods. The approximation in (9) degenerates to a mixture of Dirac delta functions (as in the PF) if one assigns the zero matrix to $\mathbf{B}_k$, which amounts to a stiff trust in the samples. Thus, we design the bandwidth matrix to be proportional to the sample covariance of the forecast ensemble as (Silverman 1986)

$$\mathbf{B}_k = \beta\, \mathbf{P}_k^f, \tag{10}$$

where β is a nonnegative scalar parameter and the sample covariance is defined as

$$\mathbf{P}_k^f = \frac{1}{N-1} \sum_{i=1}^{N} \left(\mathbf{x}_k^{f,i} - \bar{\mathbf{x}}_k^f\right)\left(\mathbf{x}_k^{f,i} - \bar{\mathbf{x}}_k^f\right)^{\mathrm{T}}, \tag{11}$$

with $\bar{\mathbf{x}}_k^f = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_k^{f,i}$ the average of the forecast ensemble. Applying Bayes's rule, the posterior pdf of $\mathbf{x}_k$ can be computed as follows (Sorenson and Alspach 1970):
$$p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k}\right) = \frac{1}{C_k}\, p\left(\mathbf{y}_k \mid \mathbf{x}_k\right) \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\left(\mathbf{x}_k;\, \mathbf{x}_k^{f,i},\, \mathbf{B}_k\right), \tag{12}$$

where $C_k$ is the normalization constant. Recalling Theorem 3.1 in Anderson and Moore (1979, 214–215), (12) reduces to a GM when the observational operator $\mathbf{H}_k$ is linear:

$$p\left(\mathbf{x}_k \mid \mathbf{y}_{0:k}\right) = \sum_{i=1}^{N} w_k^i\, \mathcal{N}\left(\mathbf{x}_k;\, \mathbf{x}_k^{a,i},\, \mathbf{B}_k^a\right), \tag{13}$$

where

$$\mathbf{x}_k^{a,i} = \mathbf{x}_k^{f,i} + \mathbf{K}_k\left(\mathbf{y}_k - \mathbf{H}_k \mathbf{x}_k^{f,i}\right), \tag{14}$$

$$\mathbf{K}_k = \mathbf{B}_k \mathbf{H}_k^{\mathrm{T}}\left(\mathbf{H}_k \mathbf{B}_k \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k\right)^{-1}, \tag{15}$$

$$\mathbf{B}_k^a = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{B}_k, \tag{16}$$

and the weights are updated via

$$w_k^i = \frac{\mathcal{N}\left(\mathbf{y}_k;\, \mathbf{H}_k \mathbf{x}_k^{f,i},\, \mathbf{H}_k \mathbf{B}_k \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k\right)}{\sum_{j=1}^{N} \mathcal{N}\left(\mathbf{y}_k;\, \mathbf{H}_k \mathbf{x}_k^{f,j},\, \mathbf{H}_k \mathbf{B}_k \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k\right)}. \tag{17}$$
Equations similar to (14)–(17) can be obtained in the more general case of a nonlinear observation operator $\mathcal{H}_k$, either by replacing $\mathcal{H}_k$ with its gradient evaluated at the centers of the forecast GM in the analysis step (Sorenson and Alspach 1970), or by considering an augmented state vector that concatenates $\mathbf{x}_k$ and $\mathcal{H}_k(\mathbf{x}_k)$ to formulate a new state-space system with a new linear observation operator. Another common strategy would be to use the observation matrix-free method (Mandel 2006), which replaces the action of $\mathbf{H}_k$ on the ensemble with the ensemble perturbation matrix of the simulated observations, as commonly implemented in EnKF methods (Evensen 2003).
The posterior estimate of the state is taken as the weighted average of the updated particles:

$$\hat{\mathbf{x}}_k^a = \sum_{i=1}^{N} w_k^i\, \mathbf{x}_k^{a,i}. \tag{18}$$

After updating the posterior pdf, a resampling step is needed before the next forecast cycle to generate an ensemble from the analysis pdf in (13); that is,

$$\mathbf{x}_k^{a,i,\mathrm{new}} \sim \sum_{j=1}^{N} w_k^j\, \mathcal{N}\left(\cdot\,;\, \mathbf{x}_k^{a,j},\, \mathbf{B}_k^a\right), \qquad i = 1, \ldots, N. \tag{19}$$

To this end, one can apply the classical stochastic resampling step as suggested by Anderson and Anderson (1999), which requires two steps: 1) draw N random indices $j_1, \ldots, j_N$ from $\{1, \ldots, N\}$ with probabilities $\{w_k^i\}_{i=1}^{N}$, and 2) sample the new particles from the Gaussian distributions $\mathcal{N}(\mathbf{x}_k^{a,j_i}, \mathbf{B}_k^a)$. Hereafter, we will refer to this algorithm as the EnGMF with stochastic resampling (EnGMF_SR), and we summarize it below in the following three steps:
  • Forecast step—Starting from a set of particles $\{\mathbf{x}_{k-1}^{a,i}\}_{i=1}^{N}$ with uniform weights, the nonlinear model is integrated forward to compute the forecast particles as $\mathbf{x}_k^{f,i} = \mathcal{M}_k(\mathbf{x}_{k-1}^{a,i}) + \boldsymbol{\eta}_k^i$ with $\boldsymbol{\eta}_k^i \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_k)$. The average $\bar{\mathbf{x}}_k^f$ of the forecast particles is taken as the forecast estimate, and the sample covariance $\mathbf{P}_k^f$ is given by (11). The prior GM is then constructed as in (9) with $\mathbf{B}_k = \beta\, \mathbf{P}_k^f$.

  • Analysis step—After a new observation $\mathbf{y}_k$ becomes available, the analysis pdf at time $t_k$ is approximated by the GM in (13), where the centers $\mathbf{x}_k^{a,i}$ and the posterior bandwidth matrix $\mathbf{B}_k^a$ are updated with a KF step as in (14)–(16). The weights are computed as in (17). The weighted sum (18) of the centers of the posterior GM is taken as the analysis estimate.

  • Stochastic resampling step—Before proceeding with a new forecast cycle, a set of new particles with uniform weights is stochastically resampled from the posterior GM as described above.
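To make the above sequence of operations concrete, the following is a minimal NumPy sketch of one EnGMF_SR analysis-plus-resampling cycle for a linear observation operator, following (10)–(19); the variable names and the small regularization jitter are ours, and the forecast (propagation) step is omitted.

```python
import numpy as np

def engmf_sr_analysis(Xf, y, H, R, beta, rng):
    """One EnGMF_SR analysis cycle: KF update of the particles (14)-(16),
    PF update of the weights (17), and stochastic resampling (19).
    Xf: (n, N) forecast ensemble; y: (p,) observation; H: (p, n); R: (p, p)."""
    n, N = Xf.shape
    Pf = np.cov(Xf)                          # forecast sample covariance, Eq. (11)
    B = beta * Pf                            # kernel bandwidth matrix, Eq. (10)
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # Kalman gain, Eq. (15)

    innov = y[:, None] - H @ Xf              # (p, N) innovations
    Xa = Xf + K @ innov                      # KF-like particle update, Eq. (14)
    Ba = (np.eye(n) - K @ H) @ B             # posterior bandwidth, Eq. (16)
    Ba = 0.5 * (Ba + Ba.T)                   # symmetrize against round-off

    # PF-like weight update, Eq. (17); log-weights avoid numerical underflow
    d = np.einsum('pi,pq,qi->i', innov, np.linalg.inv(S), innov)
    w = np.exp(-0.5 * (d - d.min()))
    w /= w.sum()

    # Stochastic resampling from the posterior GM, Eq. (19)
    idx = rng.choice(N, size=N, p=w)                  # pick mixture components
    L = np.linalg.cholesky(Ba + 1e-10 * np.eye(n))    # jitter for stability
    Xnew = Xa[:, idx] + L @ rng.standard_normal((n, N))
    return Xnew, Xa @ w                      # new ensemble, analysis estimate (18)
```

In practice, the localized sample covariance discussed in section 4a would replace Pf before forming the bandwidth matrix.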

3. Analysis of the filtering variance with respect to the bandwidth parameter

The EnGMF scheme involves two update steps in the analysis cycle. The first is the KF-type particle update in (14), which linearly moves the forecast samples from areas of lower probability toward areas of higher probability, as indicated by the observations, and reduces the sample dispersion. The second is the PF-type weight update in (17), which modifies the relative weights of the GM components according to their likelihood with respect to the observations. The parameter β in (10) plays a key role in balancing the contributions of the KF and PF update steps. Below we analyze the behavior of the EnGMF in terms of the second moment of the analysis pdf with respect to β.

Given the forecast GM in (9), the second moment of this mixture model is given by

$$\mathbf{P}_k^{\mathrm{GM}} = (1 + \beta)\, \mathbf{P}_k^f, \tag{20}$$

which is larger than the sample covariance of the forecast particles by the factor $(1 + \beta)$. The filter gain is computed as a function of β based on (15); that is,

$$\mathbf{K}_k = \beta\, \mathbf{P}_k^f \mathbf{H}_k^{\mathrm{T}}\left(\beta\, \mathbf{H}_k \mathbf{P}_k^f \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k\right)^{-1}. \tag{21}$$
The second moment of the posterior GM in (13) can be decomposed as

$$\mathbf{P}_k^a = \sum_{i=1}^{N} w_k^i \left(\mathbf{x}_k^{a,i} - \hat{\mathbf{x}}_k^a\right)\left(\mathbf{x}_k^{a,i} - \hat{\mathbf{x}}_k^a\right)^{\mathrm{T}} + \mathbf{B}_k^a \tag{22}$$

$$= \tilde{\mathbf{P}}_k^a + \left(\hat{\mathbf{P}}_k^a - \tilde{\mathbf{P}}_k^a\right) + \mathbf{B}_k^a \tag{23}$$

$$= \tilde{\mathbf{P}}_k^a + \boldsymbol{\Delta}_k + \mathbf{B}_k^a, \tag{24}$$

where $\tilde{\mathbf{P}}_k^a$ denotes the sample covariance of the updated particles $\{\mathbf{x}_k^{a,i}\}_{i=1}^{N}$ and

$$\boldsymbol{\Delta}_k = \hat{\mathbf{P}}_k^a - \tilde{\mathbf{P}}_k^a, \tag{25}$$

with $\hat{\mathbf{x}}_k^a$ given by (18), $\bar{\mathbf{x}}_k^a = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_k^{a,i}$, and $\hat{\mathbf{P}}_k^a = \sum_{i=1}^{N} w_k^i (\mathbf{x}_k^{a,i} - \hat{\mathbf{x}}_k^a)(\mathbf{x}_k^{a,i} - \hat{\mathbf{x}}_k^a)^{\mathrm{T}}$. The matrix $\hat{\mathbf{P}}_k^a$ denotes the weighted sample covariance of the particles relative to the weights $\{w_k^i\}$. Note that $\boldsymbol{\Delta}_k$ is the discrepancy between two nonnegative definite covariance matrices and is thus not guaranteed to be nonnegative definite.

The posterior covariance is thus decomposed into a sum of three terms. The first term, $\tilde{\mathbf{P}}_k^a$, represents the dispersion of the particles after the KF update and reflects the strength of this update. More specifically, increasing the value of β results in a larger gain $\mathbf{K}_k$, which pulls the particles closer to the observations and produces a smaller $\tilde{\mathbf{P}}_k^a$. The second term, $\boldsymbol{\Delta}_k$, can be viewed as representing the influence of the weight update in (17) on the second moment of the posterior distribution. Since the variance of the weights decreases with β (see section 5d), a larger β makes $\boldsymbol{\Delta}_k$ closer to zero; if the analysis weights are uniform, then $\boldsymbol{\Delta}_k = \mathbf{0}$. The last term, $\mathbf{B}_k^a$, is directly controlled by the bandwidth parameter β.

The above discussion suggests that increasing the value of β increases the impact of the KF update, which decreases the posterior particle dispersion $\tilde{\mathbf{P}}_k^a$, broadens $\mathbf{B}_k^a$, and pushes $\boldsymbol{\Delta}_k$ closer to zero. In contrast, decreasing the value of β increases the impact of the PF update, which results in a larger $\tilde{\mathbf{P}}_k^a$, tightens $\mathbf{B}_k^a$, and broadens $\boldsymbol{\Delta}_k$. If $\beta = 0$, the Kalman update vanishes and the proposed filter degenerates to a standard PF. If $\beta = 1$, the filter gain degenerates to the EnKF gain; with uniform weights, the EnGMF algorithm then reduces to that of the (stochastic) EnKF, in the sense that it updates each particle with the observation but without perturbing it. In this case, the parameter β in the EnGMF plays the role of the inflation factor in the EnKF.

The EnKF has been found to perform satisfactorily even when implemented with small ensembles, while the PF requires very large ensembles (Nakano et al. 2007). Thus, when the ensemble size is small, one could steer the EnGMF toward EnKF-like behavior by assigning a larger value to the parameter β to inflate the weight of the Kalman update. On the other hand, when the ensemble size is large enough, one may use a small β to increase the contribution of the PF update.

Tuning β can also be interpreted from another angle. In particle filtering, the number of required particles grows exponentially with the state dimension (Snyder et al. 2008). This means that for a large-dimensional system and a fixed number of samples, one should use a large value of β to better cover the support of the sampled distribution. Similar remarks have been made in the literature; for example, Silverman (1986) and Scott (1992) suggested that the optimal bandwidth parameter should decay with the sample size at a rate that depends on the state-space dimension. Furthermore, when the dynamical model is imperfect, which usually drives the forecast particles away from the true forecast distribution, a larger β helps assign more weight to the observations to avoid filter divergence. In the numerical experiments presented in section 5, the bandwidth parameter β is tuned by trial and error.
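For reference, Silverman's (1986) rule of thumb for Gaussian kernels is one such choice; written for the covariance-scaling parameter β used here (i.e., the square of the usual bandwidth factor), it reads

$$\beta = \left(\frac{4}{n_x + 2}\right)^{2/(n_x + 4)} N^{-2/(n_x + 4)},$$

where $n_x$ is the state dimension. The parameter decays with the ensemble size N, and the decay rate flattens as $n_x$ grows, consistent with the use of larger bandwidths in higher dimensions.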

4. Practical implementation with small ensembles

In realistic atmospheric and oceanic data assimilation applications, it is only feasible to implement the filter with small ensembles. This limitation leads to some undesirable effects, such as (i) rank deficiency and underestimation of the sample error variances of the system’s state and overestimation of the corresponding cross covariances (Whitaker and Hirst 2002; Hamill and Whitaker 2011), (ii) large estimation variance due to the wide variation of the weights (Robert and Casella 2004), and (iii) large resampling variance (Liu 2008). In this section, we discuss and propose some techniques to enhance the robustness of the EnGMF when implemented with small ensembles.

a. Dealing with the undersampling of the forecast mixture covariance

Covariance localization has been applied to tackle the problems of rank deficiency and spuriously large cross correlations between distant state variables in the ensemble’s covariance matrix (Hamill and Whitaker 2011). The covariance localization technique multiplies sample covariances element by element with a distance-dependent correlation function that has local support to force the ensemble covariances to zero beyond some distance (Hamill et al. 2001). Application of this technique in the EnGMF is straightforward: the localized sample covariance is used directly in the computation of both the Kalman gain in (15) and the weights in (17).
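For reference, the compactly supported fifth-order function of Gaspari and Cohn (1999), used later in section 5, can be sketched as follows; taking the Schur (element-wise) product of the resulting taper matrix with the sample covariance yields the localized covariance. The helper names are ours.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Fifth-order piecewise-rational function of Gaspari and Cohn (1999):
    correlation 1 at distance 0, compactly supported on [0, 2c]."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r)
    near = r <= 1.0
    taper[near] = ((((-0.25 * r[near] + 0.5) * r[near] + 0.625) * r[near]
                    - 5.0 / 3.0) * r[near] ** 2 + 1.0)
    far = (r > 1.0) & (r <= 2.0)
    taper[far] = (((((r[far] / 12.0 - 0.5) * r[far] + 0.625) * r[far]
                    + 5.0 / 3.0) * r[far] - 5.0) * r[far]
                  + 4.0 - 2.0 / (3.0 * r[far]))
    return taper

# Localized covariance: Schur product of the taper with the sample covariance,
# where dists[i, j] is the (cyclic) distance between state variables i and j.
# Pf_loc = gaspari_cohn(dists, c) * Pf
```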

It is also straightforward to apply covariance inflation in the EnGMF to mitigate the underestimation of the sample error variances due to the use of small ensembles and the often neglected model error. The simplest approach is to choose a larger value of the bandwidth parameter β. In many situations, appropriate covariance inflation has been shown not only to improve the performance of the filter (e.g., Anderson and Anderson 1999; Anderson 2007; Lermusiaux 2007), but also to enhance its robustness (Luo and Hoteit 2011; Altaf et al. 2013).

Since an ensemble is only an approximation of the posterior pdf, it generally spans only a subspace of the true covariance. The use of small ensembles therefore means that a significant part of the state space is not represented by the ensemble, producing unrealistic confidence in the filter forecast (Song et al. 2010). The hybrid EnKF-OI/3DVAR method (Hamill and Snyder 2000) is another approach that one could apply to enhance the sample forecast covariance of the mixture. In this method, the prior covariance is estimated as a linear combination of a flow-dependent EnKF-based sample covariance and a stationary background covariance, as a way to compensate for the complement of the ensemble’s subspace. This technique has been successfully applied in several studies (see, e.g., Lorenc 2003; Fertig et al. 2007; Song et al. 2010), and was also shown to enhance the EnKF’s performance and robustness. In this work, we estimate the stationary background covariance $\mathbf{B}$ from a long model run, as in Song et al. (2010), and then take the convex combination of $\mathbf{P}_k^f$ and $\mathbf{B}$ as an alternative design of the bandwidth matrix:

$$\mathbf{B}_k = \beta\left[(1 - \alpha)\, \mathbf{P}_k^f + \alpha\, \mathbf{B}\right], \tag{26}$$

where $\alpha \in [0, 1]$ is the covariance weighting parameter assigned to the stationary part. Generally, one may consider adjusting the weight assigned to the stationary background error covariance according to the size of the ensemble. In the simulation results presented below, the covariance weighting parameter α is tuned by trial and error, but one can also consider adapting its value as in Gharamti et al. (2014).
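A sketch of this hybrid construction, under the convention adopted in (26) that α weights the stationary part (function and variable names are ours):

```python
import numpy as np

def hybrid_bandwidth(Xf, B_static, alpha, beta):
    """Hybrid kernel bandwidth, Eq. (26): convex combination of the
    flow-dependent forecast sample covariance and a stationary background
    covariance B_static (e.g., estimated from a long model run)."""
    Pf = np.cov(Xf)  # flow-dependent sample covariance, Eq. (11)
    return beta * ((1.0 - alpha) * Pf + alpha * B_static)
```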

b. Dealing with the weight impoverishment

As with any kernel-based filter, the EnGMF suffers from weight collapse, especially when the state dimension is large (Bengtsson et al. 2008; Hoteit et al. 2008). Generally, if the weights vary widely, assigning large weights to only very few particles may degrade the filter’s behavior between the assimilation cycles, ultimately resulting in filter divergence (Robert and Casella 2004). To mitigate the weight collapse, Frei and Künsch (2013b) considered two approaches to reduce the variance of the analysis weights. In one approach, the Gaussian distribution of the observation noise is replaced by a heavy-tailed distribution to enhance the robustness of the weight update to outliers, following the idea of defensive mixture sampling (Hesterberg 1991). The second approach modifies the analysis weights by solving a convex optimization problem designed to retain a certain amount of diversity in the ensemble without deviating too much from the average. Stordal et al. (2011) directly and efficiently reduced the weight diversity by a factor γ, with $0 < \gamma \le 1$ a tuning factor, by nudging the original analysis weights toward the uniform distribution; that is,

$$\tilde{w}_k^i = \gamma\, w_k^i + (1 - \gamma)\, \frac{1}{N}. \tag{27}$$

Although this may introduce some bias in the filter, because it moves the estimate away from the original analysis estimate, it may still capture some of the non-Gaussian features that are left out after the KF update (Stordal et al. 2011). In our numerical experiments presented below, we apply the weight nudging in (27) to reduce the variance of the analysis weights.
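The nudging in (27) is a one-line operation; a small sketch (names ours):

```python
import numpy as np

def nudge_weights(w, gamma):
    """Nudge the analysis weights toward uniform, Eq. (27):
    gamma = 1 keeps the original weights, gamma = 0 gives uniform weights."""
    return gamma * w + (1.0 - gamma) / w.size

# Example: nudging shrinks the spread of the weights by the factor gamma.
w = np.array([0.7, 0.2, 0.1])
print(nudge_weights(w, 0.5))  # [0.51666667 0.26666667 0.21666667]
```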

c. A deterministic resampling procedure

The classical resampling scheme in (19) draws N random realizations from the analysis pdf. Although the empirical distribution of these realizations is an unbiased estimator of the analysis distribution (van der Vaart and Wellner 1996), the sampling error increases when the size of the ensemble decreases. Here, we focus on small ensembles and present a deterministic resampling step that generates a new ensemble directly from the analysis ensemble $\{\mathbf{x}_k^{a,i}\}_{i=1}^{N}$, perfectly matching the first moment of the analysis pdf in (13) and approximately matching the second moment. This idea is similar to that in Smith (2007), Luo et al. (2010), and Hoteit et al. (2012), who randomly sampled the particles while preserving the first two moments of the analysis pdf. First, we rewrite the sample covariance of the analysis ensemble as

$$\tilde{\mathbf{P}}_k^a = \mathbf{A}_k\, \mathbf{P}_k^f\, \mathbf{A}_k^{\mathrm{T}}, \tag{28}$$

where

$$\mathbf{A}_k = \mathbf{I} - \mathbf{K}_k \mathbf{H}_k. \tag{29}$$

The covariance of the analysis pdf in (13), which is equivalent to (24), is then

$$\mathbf{P}_k^a = \hat{\mathbf{P}}_k^a + \mathbf{A}_k\, \mathbf{B}_k\, \mathbf{A}_k^{\mathrm{T}} + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^{\mathrm{T}} = \hat{\mathbf{P}}_k^a + \beta\, \tilde{\mathbf{P}}_k^a + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^{\mathrm{T}}. \tag{30}$$

Comparing the first two moments of the distribution of the analysis ensemble and the analysis pdf (13), one has

$$\bar{\mathbf{x}}_k^a = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_k^{a,i} \neq \hat{\mathbf{x}}_k^a \quad \text{and} \quad \tilde{\mathbf{P}}_k^a \neq \mathbf{P}_k^a.$$
We, therefore, generate a new ensemble $\{\check{\mathbf{x}}_k^{a,i}\}_{i=1}^{N}$ by directly relocating the analysis ensemble in two steps (a code sketch is given at the end of this subsection):
  • Shift the analysis ensemble members as
    $$\mathbf{x}_k^{a,i} \leftarrow \mathbf{x}_k^{a,i} - \bar{\mathbf{x}}_k^a + \hat{\mathbf{x}}_k^a \tag{31}$$
    to match the mean of the analysis pdf in (13); that is, the mean of the shifted ensemble is $\hat{\mathbf{x}}_k^a$.
  • Inflate the deviations of the particles away from their mean by $\sqrt{1 + \beta}$; that is,
    $$\check{\mathbf{x}}_k^{a,i} = \hat{\mathbf{x}}_k^a + \sqrt{1 + \beta}\left(\mathbf{x}_k^{a,i} - \hat{\mathbf{x}}_k^a\right), \tag{32}$$
    to approximately match the covariance of the analysis pdf in (13).
After resampling, the first two moments of the new ensemble are

$$\frac{1}{N} \sum_{i=1}^{N} \check{\mathbf{x}}_k^{a,i} = \hat{\mathbf{x}}_k^a \quad \text{and} \quad \check{\mathbf{P}}_k^a = (1 + \beta)\, \tilde{\mathbf{P}}_k^a.$$

The discrepancy between the sample covariance of the new ensemble and the covariance of the analysis pdf in (13) can be evaluated in any given matrix norm as

$$\varepsilon_k = \left\| \mathbf{P}_k^a - (1 + \beta)\, \tilde{\mathbf{P}}_k^a \right\| = \left\| \boldsymbol{\Delta}_k + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^{\mathrm{T}} \right\|. \tag{33}$$

Based on our analysis in section 5d, the variance of the analysis weights, and as such $\boldsymbol{\Delta}_k$ in (25), tends toward zero with increasing ensemble size N. Similarly, the bandwidth parameter β should be chosen small for large ensembles (Scott 1992), and since the gain $\mathbf{K}_k$ in (21) vanishes with β, the term $\mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^{\mathrm{T}}$ then also tends toward zero. Consistency of the resampling with respect to the first two moments is therefore guaranteed. Note that perturbing the observation with noise generated from $\mathcal{N}(\mathbf{0}, \mathbf{R}_k)$ would allow removing the term $\mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^{\mathrm{T}}$ from both (28) and (30), thereby improving the matching of the analysis covariance. This may, however, introduce noise into the system, which could degrade the filter’s performance (Nerger et al. 2005; Altaf et al. 2014).
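A sketch of the two relocation steps in (31) and (32) (function and variable names are ours):

```python
import numpy as np

def deterministic_resample(Xa, w, beta):
    """EnGMF_DR resampling: shift the analysis ensemble to the weighted
    posterior mean, Eq. (31), then inflate the deviations by sqrt(1 + beta)
    to approximately match the posterior covariance, Eq. (32).
    Xa: (n, N) analysis ensemble; w: (N,) analysis weights."""
    xhat = Xa @ w                      # weighted analysis mean, Eq. (18)
    xbar = Xa.mean(axis=1)             # unweighted ensemble mean
    Xs = Xa + (xhat - xbar)[:, None]   # shift step, Eq. (31)
    dev = Xs - xhat[:, None]           # deviations from the matched mean
    return xhat[:, None] + np.sqrt(1.0 + beta) * dev  # inflation step, Eq. (32)
```

Unlike the stochastic scheme, no random numbers are drawn, so the resampling itself introduces no additional sampling noise.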

The EnGMF with the proposed deterministic resampling, which we refer to hereafter as EnGMF_DR, has the same forecast and analysis steps as the EnGMF_SR, but resamples the analysis ensemble deterministically using (31) and (32). Covariance localization, weight nudging, and hybrid formulation can of course be incorporated into both EnGMF_SR and EnGMF_DR, as we will discuss further in the numerical experiments presented in section 5.

d. Other variants of Gaussian mixture filtering

Different forms of Gaussian mixture filtering proposed for data assimilation share similarities with the proposed EnGMF_DR. Bengtsson et al. (2003) adopted the GM representation and replaced the resampling step with a hybrid approach combining the EnKF and the PF to mitigate weight collapse; the covariances of the (prior) forecast pdf are computed based on only the members closest to the centers. Kim et al. (2003) introduced a parametric GM filter in which the forecast particles are updated based on an approximate implementation of Bayes’s rule involving the conditional pdfs, instead of the classical Kalman update step. The GM filter in Hoteit et al. (2008) directly propagates the GM analysis pdf during the forecast step, so that, unlike in the EnGMF, the posterior needs to be resampled only when the variance of the weights, as measured by their entropy, becomes too large. To avoid linearizing the dynamical model at the center of each component of the GM and to enable implementation with high-dimensional systems, Hoteit et al. (2008) assumed a uniform low-rank covariance for all GM components. Hoteit et al. (2012) later investigated a GMF in which each component of the forecast/analysis GM is represented by a different ensemble, as a way to use a different covariance for each Gaussian component; because this filter amounts to a set of weighted EnKFs running in parallel, it is computationally demanding. Stordal et al. (2011) followed Hoteit et al. (2008) in their formulation of the GMF and suggested further decreasing the variance of the posterior weights by nudging them toward a uniform distribution; an adaptive nudging scheme has since been proposed to tune the strength of the nudging by minimizing the sum of the variance and the bias of the weights. In Sondergaard and Lermusiaux (2013), the GM is constructed from the forecast particles in a reduced state space using an expectation-maximization (EM) algorithm; this allows varying the number of Gaussian components in the GM, which is determined based on the Bayesian information criterion (BIC). Following a similar EM-BIC approach, Tagade et al. (2014) further proposed sampling directly from the posterior GM, while only matching its first moment by applying a modified Kalman update to the forecast particles. Frei and Künsch (2013a) resorted to the so-called progressive correction idea of Musso et al. (2001) to propose a GM filter that allows a smooth transition, tuned by a transition parameter, between the PF and the EnKF; the resulting Kalman gain involves the same formula as in the EnGMF, with the transition parameter playing the role of the bandwidth parameter.

5. Numerical experiments

The strongly nonlinear Lorenz-96 (L96) model (Lorenz and Emanuel 1998) is used to evaluate the behavior of the EnGMF_DR and to assess its performance with respect to the EnKF (with stochastic perturbations), the EnGMF_SR, and a deterministic EnKF, the ETKF (Hunt et al. 2007). L96 simulates the time evolution of an atmospheric quantity and is described by the following set of differential equations:

$$\frac{dx_j}{dt} = \left(x_{j+1} - x_{j-2}\right) x_{j-1} - x_j + F, \qquad j = 1, \ldots, n, \tag{34}$$

where $x_j$ denotes the jth element of the state $\mathbf{x}$ at time t. Boundary conditions are cyclic; that is, $x_{-1} = x_{n-1}$, $x_0 = x_n$, and $x_{n+1} = x_1$. The system dimension is $n = 40$. The constant F represents the external forcing, with the model exhibiting chaotic behavior for $F = 8$, which we use here (Karimi and Paul 2010). In our simulations, we use the fourth-order Runge–Kutta method to numerically integrate the system states with step size $\Delta t = 0.05$, equivalent to 6 hours in real time. The trajectory of a reference run is taken as the true trajectory to evaluate the filters’ performances. The state variables are observed every day (i.e., every fourth model step), with observations extracted from the reference run and perturbed with normally distributed noise of zero mean and unit variance. The observation error covariance $\mathbf{R}_k$ is set to the identity matrix. Three observational scenarios are considered: full density (i.e., all model state variables are observed), half density (i.e., every second variable is observed), and quarter density (i.e., every fourth variable is observed).
We numerically integrate the model for 10 000 time steps. All elements of the initial state of the reference run are set to F, except for the 20th element, which is slightly perturbed. To allow for a transition period, the model trajectory of the first 5000 time steps is discarded. The states of the last 5000 time steps are taken as the reference states, which we later attempt to estimate by assimilating the observations into the model. The mean of the initial ensemble is set as the mean of the integration over the first 5000 time steps. The initial ensemble is generated by adding normally distributed noise with zero mean and identity covariance to this mean, and the filters are all run for 5000 time steps. The estimates of the first 620 time steps (155 days in real time) are considered as a spinup period, and the estimates of the last 4380 time steps (3 years in real time) are used for evaluation. The root-mean-square error (RMSE) between the reference states and the filter estimates, averaged over all variables and over the assimilation period,

$$\mathrm{RMSE} = \frac{1}{K} \sum_{k=1}^{K} \sqrt{\frac{1}{n} \sum_{j=1}^{n} \left(x_{j,k}^{\mathrm{ref}} - \hat{x}_{j,k}\right)^2}, \tag{35}$$

with K the number of time steps in the evaluation period, is used to evaluate the performances of the filters. All experiments are repeated 40 times, each time with a randomly generated initial ensemble and randomly generated observation noise. The average of the RMSEs over these runs is taken as the final result to reduce statistical fluctuations. No model error was included in any of the experiments.
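A minimal sketch of the L96 model in (34) and the fourth-order Runge–Kutta step used to integrate it (names ours):

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz-96 right-hand side, Eq. (34); cyclic boundary conditions are
    handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta step; dt = 0.05 corresponds to 6 h."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```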

The fifth-order correlation function of Gaspari and Cohn (1999) is used to localize the prior sample covariances for all filters. We also use the inflation technique in the EnKF and the ETKF, as described in Anderson and Anderson (1999), which plays a role similar to that of the bandwidth parameter β in the EnGMF, as discussed in section 3.

a. Effect of the bandwidth parameter β

The first series of experiments was designed to study the sensitivity of the proposed filter to the bandwidth parameter β under different scenarios. The bandwidth matrix was set as in (10). The EnGMF_DR and the EnGMF_SR are implemented using different values of β, ensemble sizes, observational densities, and localization length scales. No stationary covariance was added to the sample covariance in this set of experiments. The weight nudging technique is used with a small fixed value of γ. While this may not be the “optimal” value, multiple runs with other values of γ (not shown here) suggest that this choice does not change the overall conclusions or the comparison with the other schemes. A small value of γ was also found to be one of the most appropriate choices by Stordal et al. (2011). The EnKF with perturbed observations and the ETKF are implemented under the same conditions.

Figure 1 plots contour maps of the RMSE between the reference states and the estimates of all filters implemented with 10 ensemble members. The RMSE is plotted as a function of the localization length scale and the value of β, or of the inflation factor for the EnKF and ETKF, for the three observational scenarios: “quarter,” “half,” and “full,” respectively. Overall, the tested filters benefit from covariance localization, and none (except the EnGMF_SR) requires strong localization when a dense observational network is available. Similar to the role of the inflation factor in the EnKF and the ETKF, a larger value of the bandwidth parameter β enhances the performance of the EnGMF_DR and helps alleviate the forecast covariance sampling error when the localization length scale is not small enough. In comparison with the EnKF, the ETKF, and the EnGMF_SR, the EnGMF_DR achieves the lowest RMSEs under the different simulation conditions. The EnGMF_DR is further shown to generally require less localization and to be less sensitive to the choice of the tuning parameter β. The deterministic resampling scheme enhances the behavior of the EnGMF, while the EnGMF_SR performs most poorly and requires the strongest localization. The (deterministic) ETKF also behaves significantly better than the (stochastic) EnKF, further emphasizing the relevance of deterministic resampling in both Gaussian- and GM-based schemes.

Fig. 1. RMSE averaged over the simulation time and all variables as a function of β (or the inflation factor for the EnKF and ETKF) and the localization length scale. The EnKF, the ETKF, the EnGMF_DR, and the EnGMF_SR with the kernel bandwidth in (10) are implemented with 10 ensemble members, and observations from three network densities are assimilated: (left) quarter, (middle) half, and (right) full observational densities at every fourth model time step (or 1 day in real time). Weight nudging is applied. The location of the minimum RMSE in each panel is marked with a “+” and the corresponding value.

We also implemented the filters with 20 ensemble members under the same experimental setup as described above, and the results are plotted in Fig. 2. First, one can clearly see that increasing the ensemble size significantly improves the behavior of the EnGMF_SR. Although the RMSEs of the EnGMF_DR, the ETKF, and the EnKF become closer than those obtained with 10 members, the performance of the EnGMF_DR remains more robust to the choice of the localization and inflation/bandwidth parameters, particularly in the scenario with a dense observational network.

Fig. 2. As in Fig. 1, but with 20 ensemble members.

Hereafter, we analyze the results of the EnGMF_DR and study its behavior with respect to the bandwidth parameter β and the ensemble size N. For this purpose, we implemented the EnGMF_DR with different values of β and N at the same localization length scale. The corresponding results are shown in Fig. 3, which plots the RMSE between the reference states and the filter estimates as a function of N and β for the three different observational scenarios. The top three panels plot the RMSEs of the simulations with small ensembles, which correspond to scenarios with large sampling errors; localization is applied in these three experiments with a length scale of 20. The bottom three panels show the RMSEs resulting from the runs with large ensembles, which correspond to scenarios with small sampling errors; localization is not applied in these runs. The weight nudging technique was applied in all runs. These results suggest the following about the EnGMF_DR. First, as expected, the accuracy of the estimates strongly depends on the ensemble size and the density of the observations: the larger the ensemble and the more variables that are observed, the more accurate the filter estimates. Second, for any observational network, the best value of β (the one achieving the minimum RMSE in the simulations) is large when the ensemble size is small, and vice versa: a large bandwidth improves the filter’s robustness to sampling errors. Third, for a given ensemble size, a larger β achieves better estimation accuracy with denser observational networks. This could be explained by the fact that highly nonlinear systems lead to more non-Gaussian pdfs, which limits the efficiency of the KF update. With more observations, the posterior pdf is expected to become “more” Gaussian, since the likelihood (observations conditional on the states) is Gaussian, which should improve the efficiency of the KF update.

Fig. 3. RMSE averaged over time and all variables as a function of the ensemble size and the bandwidth parameter β. The EnGMF_DR with the kernel bandwidth in (10) is implemented with different ensemble sizes, and observations from three network densities are assimilated: (left) quarter, (middle) half, and (right) full observational densities at every fourth model time step (or 1 day in real time).

b. EnGMF_DR with a hybrid covariance

In this experiment, the hybrid covariance technique was applied to estimate the bandwidth matrix of the mixture as in (26). Figure 4 plots the contour maps of the RMSEs of the different simulation scenarios. We apply weight nudging and covariance localization with a length scale of 20. The top three panels show the contour maps of the RMSEs resulting from simulations with different ensemble sizes under the three observational scenarios. To focus on the effect of the covariance weighting parameter α, the bandwidth parameter β is held fixed in these panels. Results from this experiment indicate that one may use a smaller α with more observed variables. This means that the stationary covariance would contribute more to the hybrid covariance when the observational density is low. This is consistent with the fact that static covariances help fill the gap with climatological information when the dynamical system is less constrained by data.

Fig. 4. The EnGMF_DR RMSE averaged over time and all variables, using the hybrid kernel bandwidth in (26), with observations from three network densities assimilated: (left) quarter, (middle) half, and (right) full observational densities. (top) RMSE as a function of the ensemble size and the covariance weighting parameter α at fixed β. (bottom) RMSE as a function of the ensemble size and the bandwidth parameter β at fixed α.

The bottom three panels in Fig. 4 use a fixed α, and the RMSE is plotted as a function of β and the ensemble size. Comparing these three panels with the top three panels in Fig. 3 shows that nudging the ensemble covariance toward a stationary background covariance yields a significant improvement when fewer observations are assimilated, especially when the ensemble size is small. However, in agreement with the results of the top panels in Fig. 4, the contribution of the stationary covariance appears too large in the full observational scenario, where the performance becomes even worse than without the stationary background covariance. This phenomenon may be explained by the fact that with more observations, the assimilation problem is not only more linear, but the dimensionality of the observation space is also increased, which reduces the efficiency of the PF update (van Leeuwen 2009). Proper tuning of the value of α is therefore important to benefit from the hybrid technique.

c. Contributions of the Kalman update and the weight update

This experiment was conducted to analyze the contributions of the Kalman update in (14) and the weight update in (17) to the EnGMF analysis. The kernel function bandwidth matrix is defined as in (10), without adding a stationary error covariance (i.e., $\alpha = 0$). Localization is implemented with a length scale of 20, and observations of all model variables are assimilated in all runs. To evaluate the impact of the Kalman and weight updates at every analysis cycle, we consider the sample average of the updated particles without weights as the posterior estimate of the state produced by the Kalman update alone. Let its corresponding RMSE averaged over all analysis times be denoted by $\mathrm{RMSE}^{\mathrm{KF}}$. The contributions of the Kalman and weight updates are then evaluated by the following two ratios:

$$r_{\mathrm{KF}} = \frac{\mathrm{RMSE}^{f} - \mathrm{RMSE}^{\mathrm{KF}}}{\mathrm{RMSE}^{f}}, \tag{36}$$

$$r_{\mathrm{W}} = \frac{\mathrm{RMSE}^{\mathrm{KF}} - \mathrm{RMSE}^{a}}{\mathrm{RMSE}^{\mathrm{KF}}}, \tag{37}$$

where $\mathrm{RMSE}^{f}$ and $\mathrm{RMSE}^{a}$ denote the forecast and analysis RMSEs of the EnGMF averaged over all analysis times, respectively. Here $r_{\mathrm{KF}}$ measures the decrease in the forecast RMSE after the Kalman update, and $r_{\mathrm{W}}$ measures the decrease in $\mathrm{RMSE}^{\mathrm{KF}}$ after the weight update. Note that when $r_{\mathrm{W}} < 0$ (i.e., $\mathrm{RMSE}^{a} > \mathrm{RMSE}^{\mathrm{KF}}$), the weight update is not beneficial in terms of RMSE. We analyze $r_{\mathrm{KF}}$ and $r_{\mathrm{W}}$ as functions of the ensemble size N and the kernel bandwidth parameter β.
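A small helper computing these two diagnostics from the time-averaged RMSEs (names ours):

```python
def update_contributions(rmse_f, rmse_kf, rmse_a):
    """Relative RMSE reductions: the Kalman-update contribution, Eq. (36),
    and the weight-update contribution, Eq. (37). A negative second value
    means the weight update degraded the estimate."""
    return (rmse_f - rmse_kf) / rmse_f, (rmse_kf - rmse_a) / rmse_kf
```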
Figure 5 plots $r_{\mathrm{KF}}$ and $r_{\mathrm{W}}$ for different ensemble sizes without weight nudging (i.e., $\gamma = 1$). The results in this figure suggest the following. First, the contribution of the Kalman update increases with β before it saturates at large values of β, where the Kalman gain in (21) becomes essentially independent of β. Second, the contribution of the weight update decreases as β increases, such that beyond a certain value, the weight update may even degrade the estimation accuracy (i.e., $r_{\mathrm{W}} < 0$). To analyze this phenomenon, we rewrite the weights update in (17) as follows:

$$w_k^i \propto \exp\left(-\frac{1}{2}\, d_k^i\right), \tag{38}$$

where $d_k^i = (\mathbf{y}_k - \mathbf{H}_k \mathbf{x}_k^{f,i})^{\mathrm{T}}\, \boldsymbol{\Sigma}_k^{-1}\, (\mathbf{y}_k - \mathbf{H}_k \mathbf{x}_k^{f,i})$ and $\boldsymbol{\Sigma}_k = \beta\, \mathbf{S}_k + \mathbf{R}_k$, with $\mathbf{S}_k = \mathbf{H}_k \mathbf{P}_k^f \mathbf{H}_k^{\mathrm{T}}$. Given a set of forecast particles and the observation $\mathbf{y}_k$, the weights only depend on $\boldsymbol{\Sigma}_k^{-1}$, which can be written as

$$\boldsymbol{\Sigma}_k^{-1} = \frac{1}{\beta}\left(\mathbf{S}_k + \frac{1}{\beta}\, \mathbf{R}_k\right)^{-1}. \tag{39}$$

One can see from (39) that although $\boldsymbol{\Sigma}_k^{-1}$ decreases as β increases, which reduces the overall influence of the weight update, the contribution of $\mathbf{R}_k$ in $\boldsymbol{\Sigma}_k$ becomes smaller; in other words, the contribution of $\mathbf{S}_k$ in $\boldsymbol{\Sigma}_k$ increases as β increases. In particular, when β is very large, $\boldsymbol{\Sigma}_k^{-1}$ can be approximated as

$$\boldsymbol{\Sigma}_k^{-1} \approx \frac{1}{\beta}\, \mathbf{S}_k^{-1}. \tag{40}$$

This implies that the weights mainly depend on the ensemble-based covariance $\mathbf{S}_k$ and less on $\mathbf{R}_k$, and suggests that the weights may not be accurately estimated when the filter is implemented with a large β and a small ensemble.
Fig. 5. The ratios $r_{\mathrm{KF}}$ and $r_{\mathrm{W}}$ as functions of the bandwidth parameter β and the ensemble size. (a) $r_{\mathrm{KF}}$ with full observational density and three ensemble sizes (N = 30, 50, and 100). (b) $r_{\mathrm{W}}$ with full observational density and the same ensemble sizes. No weight nudging was used.

Figure 6 plots $r_{\mathrm{KF}}$ and $r_{\mathrm{W}}$ when the filter was applied with weight nudging. Compared with the results in Fig. 5, one can see that the weight nudging slightly decreases the contribution of the KF update when insufficiently small values of β are used, and increases the contribution of the weight update. Moreover, for any ensemble size, the weight nudging always helps to extend the range of β over which $r_{\mathrm{W}}$ remains positive, demonstrating the efficiency of this scheme, as previously suggested by Stordal et al. (2011). We further tested different values of γ (not shown here). The conclusions from these runs were consistent with those reported by Stordal et al. (2011), who argued that, on the one hand, the values of β should increase with increasing strength of the nudging and, on the other hand, the minimum RMSE generally decreases with strong weight nudging and small bandwidth parameter values. Finally, the weight nudging also seems to have an indirect impact on the contribution of the KF update: it enhances the contribution of the weight update, further reducing the forecast error ($\mathrm{RMSE}^{f}$) and subsequently the need for the KF update.

Fig. 6. The ratios $r_{\mathrm{KF}}$ and $r_{\mathrm{W}}$ as functions of the bandwidth parameter β and the ensemble size. (a) $r_{\mathrm{KF}}$ with full observational density and three ensemble sizes (N = 10, 50, and 100). (b) $r_{\mathrm{W}}$ with full observational density and the same ensemble sizes. Weight nudging was used.

d. Influence of β on the variance of the weights

We finally examine the variance of the weights averaged over all analysis steps as a function of the ensemble size N and the kernel bandwidth parameter β for the three different observational scenarios. No weight nudging was used. The averaged variances of the weights are plotted in Fig. 7. The top panel shows the averaged variance for the full observation scenario and three ensemble sizes, N = 30, 50, and 100. The bottom panel shows the averaged variance with 50 ensemble members for all three observation scenarios (quarter, half, and full). Note that the filter diverges when β is larger than 0.7 in the quarter observation scenario. First, one notes that the averaged variance of the weights is more sensitive to the ensemble size N and the kernel bandwidth parameter β than to the observation density; the pattern is the same for all observational scenarios. Moreover, for any kernel bandwidth parameter β, the averaged variance of the weights decreases as the ensemble size increases. Likewise, for any particular ensemble size, the averaged variance of the weights decreases as β increases.

Fig. 7. The averaged variance of the weights as a function of the bandwidth parameter β and the ensemble size N. (a) Full observational density and three ensemble sizes (N = 30, 50, and 100). (b) 50 ensemble members and three observational densities (quarter, half, and full). No weight nudging was used.

To analyze these results, consider a set of forecast particles $\{\mathbf{x}_k^{f,i}\}_{i=1}^{N}$ at the kth forecast cycle and the predicted observations $\mathbf{y}_k^{f,i} = \mathbf{H}_k \mathbf{x}_k^{f,i}$. The squared Mahalanobis distance between the predicted observation $\mathbf{y}_k^{f,i}$ and the real observation $\mathbf{y}_k$, scaled by $\boldsymbol{\Sigma}_k$, is

$$d_k^i = \left(\mathbf{y}_k - \mathbf{y}_k^{f,i}\right)^{\mathrm{T}} \boldsymbol{\Sigma}_k^{-1} \left(\mathbf{y}_k - \mathbf{y}_k^{f,i}\right). \tag{41}$$

Let $d_k^{\max}$ and $d_k^{\min}$ denote the maximum and minimum squared Mahalanobis distances from the observation $\mathbf{y}_k$ over the predicted observations. Then the maximum discrepancy between any two $d_k^i$ is

$$\Delta d_k = d_k^{\max} - d_k^{\min}. \tag{42}$$

The pdfs of the observation given the forecast particles, that is, the weights before normalization, are given as

$$\tilde{w}_k^i = \frac{\exp\left(-d_k^i / 2\right)}{(2\pi)^{n_y/2}\, \left|\boldsymbol{\Sigma}_k\right|^{1/2}}, \tag{43}$$

which monotonically decrease with $d_k^i$. These take their values in the interval $[\tilde{w}_k^{\min}, \tilde{w}_k^{\max}]$, where

$$\tilde{w}_k^{\min} = \frac{\exp\left(-d_k^{\max} / 2\right)}{(2\pi)^{n_y/2}\, \left|\boldsymbol{\Sigma}_k\right|^{1/2}} \quad \text{and} \quad \tilde{w}_k^{\max} = \frac{\exp\left(-d_k^{\min} / 2\right)}{(2\pi)^{n_y/2}\, \left|\boldsymbol{\Sigma}_k\right|^{1/2}}. \tag{44}$$

Since the ratio $\tilde{w}_k^{\max} / \tilde{w}_k^{\min} = \exp(\Delta d_k / 2)$ bounds the spread of the normalized weights, an upper bound on the variance of the weights can be defined as

$$\mathrm{var}\left(w_k\right) \le \frac{1}{4 N^2}\left[\exp\left(\Delta d_k / 2\right) - \exp\left(-\Delta d_k / 2\right)\right]^2. \tag{45}$$

The upper bound of the variance of the weights in (45) monotonically increases with $\Delta d_k$. Given a set of forecast particles, increasing β always results in a smaller $\Delta d_k$, which means a smaller upper bound. When β is large, $\Delta d_k$ behaves as $1/\beta$, so that the impact of β on the upper bound in (45) becomes more pronounced. As for the impact of the ensemble size on the averaged variance of the weights, one can see from (45) that a larger ensemble size yields a smaller upper bound for the variance, as can be expected.

6. Summary and discussion

This work presented a nonlinear discrete-time Bayesian filtering scheme designed for data assimilation with small ensembles. We first formulated the kernel-based ensemble Gaussian mixture filter (EnGMF), originally introduced by Anderson and Anderson (1999), for the case of any observation operator. This filter combines two types of representations of the state distribution. In the forecast step, a Dirac delta mixture representation is adopted to efficiently propagate the state analysis density with the model following an ensemble-based Monte Carlo approach. In the analysis step, a Gaussian mixture (GM) representation of the forecast probability density is constructed based on the kernel density estimator. Each forecast ensemble member is regarded as the center of a Gaussian component, and the kernel function bandwidth matrix is designed to be proportional to the sample covariance of the forecast ensemble. Once an observation is available, the posterior distribution of the state is computed based on two update steps: a Kalman update of each particle and a weight update. To improve the robustness of the filter with (relatively) small ensembles, we resorted to inflation and localization techniques to mitigate the rank deficiencies of the mixture covariance, nudged the analysis weights to reduce their variance, and proposed an efficient deterministic resampling scheme that generates, directly from the analysis particles, a new ensemble preserving the first two moments of the analysis distribution.

A tuning bandwidth parameter β is designed to adjust the contributions of the Kalman update and the weight update in the analysis step. A comprehensive mathematical analysis of the effect of the bandwidth parameter β on the filter’s second moment was presented. Our analysis reveals that the value of β affects the covariance of the posterior distribution through three aspects: the contribution of the Kalman update, the contribution of the weight update, and the posterior bandwidth matrix. Numerical experiments were conducted with the Lorenz-96 nonlinear model to evaluate the performance of the EnGMF with deterministic resampling (EnGMF_DR) and to compare its results against those of the EnGMF with stochastic resampling (EnGMF_SR), the (stochastic) ensemble Kalman filter (EnKF), and the ensemble transform Kalman filter (ETKF). The simulation results suggest that the EnGMF_DR achieves the lowest RMSEs under the various studied scenarios, namely, different observational densities, ensemble sizes, and tuning factors (bandwidth/inflation parameters). It is further shown to generally require less localization and to be less sensitive to the choice of the bandwidth parameter. The deterministic resampling scheme significantly contributed to the superior performance of the EnGMF. Nudging the ensemble covariance toward a stationary background covariance, following a hybrid EnKF formulation, may also enhance the performance of the EnGMF, especially in the low-observational-density scenario.

In the case of small ensembles, the EnGMF_DR generally benefits from a larger bandwidth parameter, which increases the contribution of the Kalman update despite reducing the contribution of the weight update. We further found that the weights cannot be well estimated, and may even contribute negatively to the filter’s update, when the ensemble is small and the bandwidth parameter is not small enough. This suggests that the Kalman update is more robust to the ensemble size than the weight update, calling for a careful tuning of the bandwidth parameter. Weight nudging reduces the variance of the weights and enhances the EnGMF performance; assimilating more observations, in contrast, does not seem to contribute to decreasing the variance of the weights.

The proposed filtering framework can be directly implemented in any system already equipped with an EnKF. Its algorithm is very similar to the standard EnKF algorithm, with an additional weight update step and a resampling step, and does not require perturbing the observations. It is therefore potentially applicable to large-scale systems. Future work will consider realistic oceanic and atmospheric applications and will also investigate other resampling schemes that may preserve more statistical features beyond the first two moments of the posterior GM pdf, for instance, by locally matching the covariances of the Gaussian components.

Acknowledgments

The research reported in this publication was supported by the King Abdullah University of Science and Technology (KAUST).

REFERENCES

  • Alspach, D., and H. Sorenson, 1972: Nonlinear Bayesian estimation using Gaussian sum approximations. IEEE Trans. Automat. Control, 17, 439–448, doi:10.1109/TAC.1972.1100034.

  • Altaf, M. U., T. Butler, X. Luo, C. Dawson, T. Mayo, and I. Hoteit, 2013: Improving short-range ensemble Kalman storm surge forecasting using robust adaptive inflation. Mon. Wea. Rev., 141, 2705–2720, doi:10.1175/MWR-D-12-00310.1.

  • Altaf, M. U., T. Butler, T. Mayo, X. Luo, C. Dawson, A. W. Heemink, and I. Hoteit, 2014: A comparison of ensemble Kalman filters for storm surge assimilation. Mon. Wea. Rev., 142, 2899–2914, doi:10.1175/MWR-D-13-00266.1.

  • Anderson, B. D. O., and J. B. Moore, 1979: Optimal Filtering. Prentice-Hall, 357 pp.

  • Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.

  • Anderson, J. L., 2007: An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus, 59A, 210–224, doi:10.1111/j.1600-0870.2006.00216.x.

  • Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev., 127, 2741–2758, doi:10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.

  • Bengtsson, T., C. Snyder, and D. Nychka, 2003: Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res., 108, 8775, doi:10.1029/2002JD002900.

  • Bengtsson, T., P. Bickel, and B. Li, 2008: Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. Inst. Math. Stat. Collect., 2, 316–334, doi:10.1214/193940307000000518.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.

  • Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367, doi:10.1007/s10236-003-0036-9.

  • Fertig, E., J. Harlim, and B. Hunt, 2007: A comparative study of 4D-VAR and a 4D ensemble Kalman filter: Perfect model simulations with Lorenz-96. Tellus, 59A, 96–100, doi:10.1111/j.1600-0870.2006.00205.x.

  • Frei, M., and H. R. Künsch, 2013a: Bridging the ensemble Kalman and particle filters. Biometrika, 100, 781–800, doi:10.1093/biomet/ast020.

  • Frei, M., and H. R. Künsch, 2013b: Mixture ensemble Kalman filters. Comput. Stat. Data Anal., 58, 127–138, doi:10.1016/j.csda.2011.04.013.

  • Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, doi:10.1002/qj.49712555417.

  • Gharamti, M., J. Valstar, and I. Hoteit, 2014: An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models. Adv. Water Resour., 71, 1–15, doi:10.1016/j.advwatres.2014.05.001.

  • Gordon, N., D. Salmond, and A. F. M. Smith, 1993: Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process., 140, 107–113, doi:10.1049/ip-f-2.1993.0015.

  • Gustafsson, F., F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, 2002: Particle filters for positioning, navigation, and tracking. IEEE Trans. Signal Process., 50, 425–437, doi:10.1109/78.978396.

  • Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919, doi:10.1175/1520-0493(2000)128<2905:AHEKFV>2.0.CO;2.

  • Hamill, T. M., and J. S. Whitaker, 2011: What constrains spread growth in forecasts initialized from ensemble Kalman filters? Mon. Wea. Rev., 139, 117–131, doi:10.1175/2010MWR3246.1.

  • Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129, 2776–2790, doi:10.1175/1520-0493(2001)129<2776:DDFOBE>2.0.CO;2.

  • Hesterberg, T., 1991: Weighted average importance sampling and defensive mixture distributions. Tech. Rep. 148, Stanford University, 16 pp.

  • Hoteit, I., D.-T. Pham, and J. Blum, 2002: A simplified reduced order Kalman filtering and application to altimetric data assimilation in Tropical Pacific. J. Mar. Syst., 36, 101–127, doi:10.1016/S0924-7963(02)00129-X.

  • Hoteit, I., D.-T. Pham, G. Triantafyllou, and G. Korres, 2008: A new approximate solution of the optimal nonlinear filter for data assimilation in meteorology and oceanography. Mon. Wea. Rev., 136, 317–334, doi:10.1175/2007MWR1927.1.

  • Hoteit, I., X. Luo, and D.-T. Pham, 2012: Particle Kalman filtering: A nonlinear Bayesian framework for ensemble Kalman filters. Mon. Wea. Rev., 140, 528–542, doi:10.1175/2011MWR3640.1.

  • Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.

  • Hunt, B. R., E. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112–126, doi:10.1016/j.physd.2006.11.008.

  • Karimi, A., and M. R. Paul, 2010: Extensive chaos in the Lorenz-96 model. Chaos, 20, 043105, doi:10.1063/1.3496397.

  • Kim, S., G. L. Eyink, J. M. Restrepo, F. J. Alexander, and G. Johnson, 2003: Ensemble filtering for nonlinear dynamics. Mon. Wea. Rev., 131, 2586–2594, doi:10.1175/1520-0493(2003)131<2586:EFFND>2.0.CO;2.

  • Kivman, G. A., 2003: Sequential parameter estimation for stochastic systems. Nonlinear Processes Geophys., 10, 253–259, doi:10.5194/npg-10-253-2003.

  • Lermusiaux, P. F. J., 1999: Estimation and study of mesoscale variability in the Strait of Sicily. Dyn. Atmos. Oceans, 29, 255–303, doi:10.1016/S0377-0265(99)00008-1.

  • Lermusiaux, P. F. J., 2007: Adaptive modeling, adaptive data assimilation and adaptive sampling. Physica D, 230, 172–196, doi:10.1016/j.physd.2007.02.014.

  • Liu, J. S., 2008: Monte Carlo Strategies in Scientific Computing. Springer, 346 pp.

  • Lorenc, A. C., 2003: The potential of the ensemble Kalman filter for NWP—A comparison with 4D-Var. Quart. J. Roy. Meteor. Soc., 129, 3183–3203, doi:10.1256/qj.02.132.

  • Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414, doi:10.1175/1520-0469(1998)055<0399:OSFSWO>2.0.CO;2.

  • Luo, X., and I. Hoteit, 2011: Robust ensemble filtering and its relation to covariance inflation in the ensemble Kalman filter. Mon. Wea. Rev., 139, 3938–3953, doi:10.1175/MWR-D-10-05068.1.

  • Luo, X., I. M. Moroz, and I. Hoteit, 2010: Scaled unscented transform Gaussian sum filter: Theory and application. Physica D, 239, 684–701, doi:10.1016/j.physd.2010.01.022.

  • Luo, X., I. Hoteit, and I. Moroz, 2012: On a nonlinear Kalman filter with simplified divided difference approximation. Physica D, 241, 671–680, doi:10.1016/j.physd.2011.12.003.

  • Mandel, J., 2006: Efficient implementation of the ensemble Kalman filter. CCM Rep. 231, University of Colorado at Denver and Health Sciences Center, 7 pp.

  • Musso, C., N. Oujdane, and F. Le Gland, 2001: Improving regularised particle filters. Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas, and N. Gordon, Eds., Springer, 247–271.

  • Nakano, S., G. Ueno, and T. Higuchi, 2007: Merging particle filter for sequential data assimilation. Nonlinear Processes Geophys., 14, 395–408, doi:10.5194/npg-14-395-2007.

  • Nerger, L., W. Hiller, and J. Schröter, 2005: A comparison of error subspace Kalman filters. Tellus, 57A, 715–735, doi:10.1111/j.1600-0870.2005.00141.x.

  • Pham, D. T., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev., 129, 1194–1207, doi:10.1175/1520-0493(2001)129<1194:SMFSDA>2.0.CO;2.

  • Robert, C. P., and G. Casella, 2004: Monte Carlo Statistical Methods. Springer-Verlag, 645 pp.

  • Scott, D. W., 1992: Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley-Interscience, 317 pp.

  • Silverman, B. W., 1986: Density Estimation for Statistics and Data Analysis. Chapman and Hall, 175 pp.

  • Smith, K. W., 2007: Cluster ensemble Kalman filter. Tellus, 59A, 749–757, doi:10.1111/j.1600-0870.2007.00246.x.

  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640, doi:10.1175/2008MWR2529.1.

  • Sondergaard, T., and P. F. J. Lermusiaux, 2013: Data assimilation with Gaussian mixture models using the dynamically orthogonal field equations. Part I: Theory and scheme. Mon. Wea. Rev., 141, 1737–1760, doi:10.1175/MWR-D-11-00295.1.

  • Song, H., I. Hoteit, B. D. Cornuelle, and A. C. Subramanian, 2010: An adaptive approach to mitigate background covariance limitations in the ensemble Kalman filter. Mon. Wea. Rev., 138, 2825–2845, doi:10.1175/2010MWR2871.1.

  • Sorenson, H., and D. Alspach, 1970: Gaussian sum approximations for nonlinear filtering. IEEE Symp. on Adaptive Processes (Ninth) Decision and Control, Vol. 9, Austin, TX, IEEE, 193.

  • Stordal, A. S., H. A. Karlsen, G. Nævdal, H. J. Skaug, and B. Vallès, 2011: Bridging the ensemble Kalman filter and particle filters: The adaptive Gaussian mixture filter. Comput. Geosci., 15, 293–305, doi:10.1007/s10596-010-9207-1.

  • Tagade, P., H. Seybold, and S. Ravela, 2014: Mixture ensembles for data assimilation in dynamic data-driven environmental systems. Proc. Comput. Sci., 29, 1266–1276, doi:10.1016/j.procs.2014.05.114.

  • Tippett, M., J. Anderson, C. Bishop, T. Hamill, and J. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485–1490, doi:10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2.

  • van der Vaart, A. W., and J. A. Wellner, 1996: Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, 508 pp.

  • van Leeuwen, P. J., 2009: Particle filtering in geophysical systems. Mon. Wea. Rev., 137, 4089–4114, doi:10.1175/2009MWR2835.1.

  • Whitaker, S., and D. Hirst, 2002: Correlational analysis of challenging behaviours. Br. J. Learn. Disabil., 30, 28–31, doi:10.1046/j.1468-3156.2002.00081.x.
  • Fig. 1.

    RMSE averaged over the simulation time and all variables as a function of β (or the inflation factor for the EnKF) and the localization length scale. The EnKF, the ETKF, the EnGMF_DR, and the EnGMF_SR with the kernel bandwidth [see (10)] are implemented with 10 ensemble members, and observations from three network densities are assimilated: (left) quarter, (middle) half, and (right) full observational densities at every fourth model time step (or 1 day in real time). Weight nudging is applied. The location of the minimum RMSE in each panel is marked with a "+" together with the corresponding value.

  • Fig. 2.

    As in Fig. 1, but with 20 ensemble members.

  • Fig. 3.

    RMSE averaged over time and all variables as a function of ensemble size and the filter parameter. The EnGMF_DR with the kernel bandwidth [see (10)] is implemented with different ensemble sizes, and observations from three network densities are assimilated: (left) quarter, (middle) half, and (right) full observational densities at every fourth model time step (or 1 day in real time).

  • Fig. 4.

    (top) The EnGMF_DR RMSE averaged over time and all variables as a function of ensemble size and the covariance weighting parameter α. The EnGMF_DR, equipped with the hybrid kernel bandwidth in (26), is implemented with different ensemble sizes, and observations from three network densities are assimilated: (left) quarter, (middle) half, and (right) full observational densities. (bottom) As in the top row, but as a function of ensemble size and the bandwidth parameter β.

  • Fig. 5.

    The contributions of the Kalman update and of the weight update as functions of the bandwidth parameter β and the ensemble size. (a) The Kalman update contribution with full observational density and three ensemble sizes (N = 30, 50, and 100). (b) The weight update contribution with full observational density and three ensemble sizes (N = 30, 50, and 100). No weight nudging was used.

  • Fig. 6.

    As in Fig. 5, but with three ensemble sizes (N = 10, 50, and 100) and with weight nudging applied.

  • Fig. 7.

    The averaged variance of the weights as a function of the bandwidth parameter β and the ensemble size N. (a) With full observational density and three ensemble sizes (N = 30, 50, and 100). (b) With 50 ensemble members and three observational densities (quarter, half, and full). No weight nudging was used.
