An Efficient State–Parameter Filtering Scheme Combining Ensemble Kalman and Particle Filters

Boujemaa Ait-El-Fquih, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

and

Ibrahim Hoteit, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Abstract

This work addresses the state–parameter filtering problem for dynamical systems with relatively large-dimensional state and low-dimensional parameters’ vector. A Bayesian filtering algorithm combining the strengths of the particle filter (PF) and the ensemble Kalman filter (EnKF) is proposed. At each assimilation cycle of the proposed EnKF–PF, the PF is first used to sample the parameters’ ensemble followed by the EnKF to compute the state ensemble conditional on the resulting parameters’ ensemble. The proposed scheme is expected to be more efficient than the traditional state augmentation techniques, which suffer from the curse of dimensionality and inconsistency that is particularly pronounced when the state is a strongly nonlinear function of the parameters. In the new scheme, the EnKF and PF interact via their ensembles’ members, in contrast with the recently introduced two-stage EnKF–PF (TS–EnKF–PF), which exchanges point estimates between EnKF and PF while requiring almost double the computational load. Numerical experiments are conducted with the Lorenz-96 model to assess the behavior of the proposed filter and to evaluate its performances against the joint PF, joint EnKF, and TS–EnKF–PF. Numerical results suggest that the EnKF–PF performs best in all tested scenarios. It was further found to be more robust, successfully estimating both state and parameters in different sensitivity experiments.

Current affiliation: Division of Physical Science and Engineering, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Boujemaa Ait-El-Fquih, boujemaa.aitelfquih@kaust.edu.sa


1. Introduction

Data assimilation combines data and dynamical models to provide the best possible estimate of the state of the underlying system. This approach is often implemented sequentially, inferring the state given the available observations and the knowledge of the system dynamics. This is the so-called Bayesian filtering problem. In a Bayesian framework, the posterior probability density function (pdf) of the state, also called the analysis pdf, is needed to compute any type of state estimate (and higher moments), as for instance the posterior mean (PM), which minimizes the mean-squared error (MSE) (van Trees 1968). In a filtering algorithm, the analysis pdf is computed recursively following two steps (Künsch 2001): a forecast (or propagation) step that integrates the previous analysis pdf with the dynamical model to obtain the forecast pdf, and an analysis (or update) step that updates the forecast pdf with the incoming observation. One can also reverse the order of the propagation and update steps without “violating” the Bayesian formulation of the filtering problem, which results in another generic algorithm involving the one- (or two-) step-ahead smoothing pdf of the state in addition to the forecast and analysis pdfs (Desbouvries and Ait-El-Fquih 2008; Desbouvries et al. 2011; Ait-El-Fquih et al. 2016).

In practice, however, the optimal solution of the generic filtering algorithm, in the sense of MSE minimization, is generally not available except for linear-Gaussian systems for which this algorithm reduces to the famous Kalman filter (KF) (Kalman 1960; Jazwinski 1970; Anderson and Moore 1979). A number of suboptimal numerical methods have therefore been proposed, as for instance, the well-known sequential Monte Carlo [or particle filter (PF)] method (Doucet et al. 2001a). The PF algorithm recursively computes an approximation of the analysis distribution as a random sample of state vectors, called particles. The forecast step propagates the particles forward with the dynamical model to approximate the forecast distribution. These forecast particles are then weighted in the update step based on their relative likelihoods (Gordon et al. 1993).

The PF suffers from the weights’ degeneracy problem, in which all particles’ weights, except very few, become almost zero after a few assimilation cycles, which severely limits the filter’s performance (Liu and Chen 1998; Doucet et al. 2001a; Snyder et al. 2008; van Leeuwen 2009). This phenomenon is in part due to the fact that in the update step, the PF only uses the incoming observations to update the weights and not the states (Hoteit et al. 2008; van Leeuwen 2009; Hoteit et al. 2012). One could drastically increase the number of particles to avoid this phenomenon, but this would certainly lead to a prohibitive computational cost. The most standard solution to mitigate this phenomenon is to resample the particles by duplicating those with large weights and abandoning those with low weights (Rubin 1988; Gordon et al. 1993; Liu and Chen 1998; Doucet et al. 2001a). Although PFs were shown to perform well in a number of low-dimensional systems (Kivman 2003; Subramanian et al. 2012), they nevertheless remain inefficient in large-dimensional systems due to the exorbitant number of particles needed to efficiently sample the state space (curse of dimensionality); in certain situations, the number of particles needed scales exponentially with the system dimension (Crisan and Doucet 2002; Snyder et al. 2008).
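As an illustration of the resampling remedy described above, the following minimal Python sketch (illustrative only; the function names are ours, not from the paper) computes the effective sample size, a standard diagnostic of weight degeneracy, and performs a multinomial resampling that duplicates high-weight particles and drops low-weight ones:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_m^2); values near 1 signal weight degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def multinomial_resample(particles, weights, rng):
    """Duplicate high-weight particles and drop low-weight ones.

    particles: (M, d) array of particles; weights: (M,) normalized weights.
    Returns an equally weighted ensemble of the same size M.
    """
    M = particles.shape[0]
    idx = rng.choice(M, size=M, p=weights)  # sample indices with replacement
    return particles[idx]
```

In practice the resampling is often triggered only when N_eff drops below a threshold (e.g., M/2), since resampling itself adds Monte Carlo noise.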

Despite the promising PF-type schemes that have recently been proposed to cope with the curse of dimensionality (see e.g., Spiller et al. 2008; Husz et al. 2011; Morzfeld et al. 2012; Ades and van Leeuwen 2013; Djuric and Bugallo 2013; Ait-El-Fquih and Hoteit 2015; Septier and Peters 2015; Ait-El-Fquih and Hoteit 2016), the so-called ensemble KF (EnKF) (Evensen 1994, 2006; Hoteit et al. 2015) is still most commonly used for data assimilation into large-scale geophysical problems. The EnKF shares the same forecast step with the PF, but assumes the joint state–observation forecast pdf to be Gaussian at the update step. Indeed, it updates the particles (often called “ensemble members”) following a KF-like correction based on stochastically perturbed observations. The KF-like corrections keep the particles close to the observations, which helps with mitigating the risk of degeneracy (Kivman 2003; Hoteit et al. 2008). This leads to remarkable performances even when the filter is implemented with small ensembles, especially when equipped with localization and inflation techniques (Anderson and Anderson 1999; Hamill and Snyder 2000).
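The perturbed-observations EnKF analysis described above can be sketched as follows, assuming for simplicity a linear observation operator H; this is a generic textbook-style implementation, not the specific code used in the paper:

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic (perturbed-observations) EnKF analysis step.

    Xf: (nx, M) forecast ensemble; y: (ny,) observation;
    H: (ny, nx) linear observation operator; R: (ny, ny) obs-error covariance.
    Returns the (nx, M) analysis ensemble.
    """
    nx, M = Xf.shape
    Xm = Xf.mean(axis=1, keepdims=True)
    A = Xf - Xm                                      # forecast perturbations
    Pf = A @ A.T / (M - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    eps = rng.multivariate_normal(np.zeros(len(y)), R, size=M).T
    Y = y[:, None] + eps                             # perturbed observations
    return Xf + K @ (Y - H @ Xf)                     # KF-like member-wise correction
```

The KF-like correction pulls each member toward the observation, which is precisely what mitigates the degeneracy risk discussed above.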

With more studies demonstrating the efficiency of the EnKF in various applications, other EnKF-like algorithms were introduced. These include derivation of deterministic EnKF variants, which were shown to be more robust with limited-size ensembles (e.g., Pham 2001; Bishop et al. 2001; Hoteit et al. 2002; Tippett et al. 2003; Hunt et al. 2007; Hoteit et al. 2015); use of the less restrictive Gaussian mixture assumption (e.g., Hoteit et al. 2008; Stordal et al. 2011; Hoteit et al. 2012; Frei and Künsch 2013; Liu et al. 2016); development of ensemble Kalman smoothing algorithms (e.g., Evensen and van Leeuwen 2000; Dunne et al. 2007); and extending the EnKF to the framework of state–parameter filtering problems (e.g., Moradkhani et al. 2005; Annan et al. 2005; Chen and Zhang 2006; Aksoy et al. 2006; Bellsky et al. 2014; Rasmussen et al. 2015; Gharamti et al. 2014, 2015; Ait-El-Fquih et al. 2016).

The state–parameter filtering problem consists of filtering systems where the state model is not perfectly known, and depends on a set of unknown (static) parameters. In such a problem, the estimation of the state requires that of the parameters. The EnKF-like methods that have been introduced to address this problem follow either the so-called joint or dual approach. The joint approach consists in gathering the state and parameters in the same vector and applying the standard EnKF to the resulting augmented system to simultaneously estimate the state and the parameters (Chen and Zhang 2006; Gharamti et al. 2015). In the dual approach, the state and the parameters are treated separately, starting with an update of the parameters followed by an update of the state (Moradkhani et al. 2005). The separation of the update steps was shown to provide more consistent estimates of the parameters (see e.g., Wen and Chen 2007). Despite its heuristic framework (Hendricks Franssen and Kinzelbach 2008), the dual EnKF was found to be more efficient than the joint EnKF at the cost of increased computational burden (Gharamti et al. 2015). Recently, Ait-El-Fquih et al. (2016) followed the concept of one-step-ahead smoothing in a fully Bayesian filtering framework to derive a new dual-like EnKF that performs better than the standard dual EnKF, while requiring almost no increase in the computational load.

While EnKFs are commonly used in large-scale (state and parameters) applications, we focus here on the situation in which the number of parameters to be estimated is small, so a PF does not require a prohibitive number of particles to sample the parameters space (provided that the number of observations is not too large).1 We thus introduce a new filtering scheme that combines the PF, to sample the parameters’ ensemble, with an EnKF, to sample the state ensemble conditionally on the parameters’ ensemble. The use of a PF to estimate the parameters is intended to deal with the non-Gaussian character of the parameters’ posterior pdf (Kivman 2003). The benefit of the PF is expected to be more pronounced when the state is a strongly nonlinear function of the parameters, a case in which the joint EnKF was shown to provide spurious cross covariances between the state and the parameters, leading to poor performances (Carrassi and Vannitsem 2011; Santitissadeekorn and Jones 2015).

Among the proposed works that combine the EnKF and PF (see e.g., Hoteit et al. 2008, 2012; Frei and Künsch 2012; Slivinski et al. 2015; Liu et al. 2016; Zhang et al. 2017), only that of Santitissadeekorn and Jones (2015) relates to our study, as it deals with dynamical systems in which the state model is large and depends on few unknown parameters. In that work, the authors introduced a two-stage EnKF–PF scheme that exchanges estimates (ensemble means) of parameters and state between the PF and the EnKF components. In contrast, our scheme exchanges the ensembles, representing the approximate posterior distributions, instead of only their means, which is expected to lead to better performances while requiring only half the computational load. The remainder of this paper is organized as follows. Section 2 states the problem and reviews the joint PF and EnKF. Section 3 derives the new scheme combining EnKF with PF and discusses the main differences with the algorithm of Santitissadeekorn and Jones (2015). Section 4 presents results of various numerical experiments with the Lorenz-96 model, and section 5 concludes the work and discusses future directions.

2. Problem statement

Consider a discrete-time state–parameter dynamical system:
xn+1 = fn(xn, θ) + un,
yn = hn(xn) + vn,   (1)

in which xn, θ, and yn denote the system state, the parameters’ vector, and the observation at time tn, of dimensions nx, nθ, and ny, respectively. The function fn is the dynamical operator integrating the state from time tn to tn+1, and hn is the observational operator at time tn. The noise processes, u = {un}n≥0 and v = {vn}n≥0, are assumed to be independent, jointly independent, and independent of x0 and θ. Throughout the paper, y0:n = {y0, y1, …, yn}, and p(xn) and p(xn | y0:l) stand for the prior pdf of xn and the posterior pdf of xn given y0:l, respectively. All other pdfs used are defined in a similar way.

The state–parameter filtering problem aims at estimating, at each time tn, the state xn and the parameters vector θ from the history of the observations, y0:n. For a random variable u and a realization v of another random variable, let E(u) and E(u | v) denote the expected values of u with respect to (w.r.t.) p(u) and p(u | v), respectively. A standard solution of this problem is the PM:
x̂n = E(xn | y0:n),   (2)
θ̂n = E(θ | y0:n),   (3)
which minimizes the MSE. The evaluation of (2) and (3) requires the knowledge of the so-called analysis pdfs, p(xn | y0:n) and p(θ | y0:n). One classical way to compute these pdfs starts by computing the analysis pdf p(zn | y0:n) of the augmented state zn = [xn^T, θn^T]^T, where θn = θn−1 = θ, followed by a marginalization. By virtue of the factorization,

p(z0:n, y0:n) = p(z0) p(y0 | z0) ∏k=1:n p(zk | zk−1) p(yk | zk),   (4)

which, in turn, follows from the hidden Markov chain character of the pairwise process, (z, y) = {zn, yn}n≥0, it is possible to recursively compute p(zn | y0:n) [see e.g., Ait-El-Fquih and Desbouvries (2006, 2011) and references therein]. Two steps are performed to compute p(zn | y0:n) from p(zn−1 | y0:n−1).
  1. Forecast step: The joint transition pdf, p(zn | zn−1) = p(xn | xn−1, θn) p(θn | θn−1), is used to compute the joint forecast pdf, p(zn | y0:n−1), following a marginalization formula [Chapman–Kolmogorov equation (Jazwinski 1970)]:
    p(zn | y0:n−1) = ∫ p(zn | zn−1) p(zn−1 | y0:n−1) dzn−1.   (5)
  2. Analysis step: The likelihood, p(yn | zn) = p(yn | xn), is combined with the joint forecast pdf, following the Bayes’s rule, to obtain the following:
    p(zn | y0:n) = p(yn | zn) p(zn | y0:n−1) / p(yn | y0:n−1)   (6)
    = p(yn | zn) p(zn | y0:n−1) / ∫ p(yn | zn) p(zn | y0:n−1) dzn.   (7)
Before we focus on the practical derivation of this generic algorithm, it is worth mentioning that the PF is inevitably vulnerable to the “sample attrition” issue when dealing with (fixed) parameters. This may result in a loss of diversity of the particles, eventually leaving only identical particles (see e.g., West and Liu 2001; Frei and Künsch 2012; Santitissadeekorn and Jones 2015). One popular way to cope with this is to impose some artificial dynamics on the parameters:
θn = gn−1(θn−1) + wn−1,   (8)

where w = {wn}n≥0 is a zero-mean Gaussian process, assumed to be independent, and independent of z0, u, and v. The reader may consult West and Liu (2001) and Frei and Künsch (2012) and references therein for a comprehensive literature about the choice of the operator gn−1(⋅) and the covariance, Σn−1, of the noise wn−1. We also assume z0, un, and vn to be Gaussian with un ~ N(0, Qn) and vn ~ N(0, Rn). The transition pdfs, p(xn | xn−1, θn) and p(θn | θn−1), and the likelihood, p(yn | xn), are therefore Gaussian with

p(xn | xn−1, θn) = Nxn[fn−1(xn−1, θn), Qn−1],   (9)
p(θn | θn−1) = Nθn[gn−1(θn−1), Σn−1],   (10)
p(yn | xn) = Nyn[hn(xn), Rn],   (11)

where Nx(m, P) denotes a Gaussian pdf of argument x and parameters (m, P), that is, mean m and covariance P.

Joint state–parameters filtering with PF and EnKF

The main idea is to transform the state–parameter system (x, θ, y) into a classical state–space system with an augmented state z = (x, θ), and then to apply the standard PF (Gordon et al. 1993) and EnKF (Evensen 1994) to the augmented system. The resulting joint PF and joint EnKF are Monte Carlo implementations of the generic algorithms (5),(7) and (5),(6), respectively. They share the same forecast step [that follows from the Markov property in (5)] and differ in their analysis steps [that follow from the Bayes’s equations in (7) and (6), respectively].

Assume that one has an independently and identically distributed (i.i.d.) (analysis) random ensemble of members (or particles),

{zn−1^{a,(m)}}m=1:M,

from p(zn−1 | y0:n−1), and wants to sample an i.i.d. (forecast) random ensemble, {zn^{f,(m)}}m=1:M, from p(zn | y0:n−1). This can be achieved by applying the hierarchical sampling property 1 (see appendix A) on (5):

θn^{f,(m)} = gn−1[θn−1^{a,(m)}] + wn−1^{(m)},   (12)
xn^{f,(m)} = fn−1[xn−1^{a,(m)}, θn^{f,(m)}] + un−1^{(m)},   (13)

with wn−1^{(m)} ~ N(0, Σn−1) and un−1^{(m)} ~ N(0, Qn−1). In other words, the forecast members are obtained by integrating the previous analysis members forward with the state–parameter model.

The analysis step of the joint PF involves a weighting and then a resampling of the forecast particles. The weight, ωn^{(m)}, of each augmented particle, zn^{f,(m)}, is computed by applying Rubin’s sampling importance resampling (SIR) mechanism (recalled in property 2 in appendix A) on (7):

ωn^{(m)} = p[yn | xn^{f,(m)}] / Σℓ=1:M p[yn | xn^{f,(ℓ)}].   (14)

The analysis estimates of state and parameters are computed from the weighted ensemble {zn^{f,(m)}, ωn^{(m)}}m=1:M, which approximates the analysis distributions in (2) and (3). The ensemble {zn^{f,(m)}, ωn^{(m)}}m=1:M is then sampled with replacement to obtain the (empirical) posterior ensemble {zn^{a,(m)}}m=1:M before we proceed to the next assimilation cycle.

The joint PF was shown to provide satisfactory performances in many applications involving low-dimensional systems, but remains impractical in large dimensions. Indeed, when an insufficient number of particles are used, regions of high probabilities are unlikely to be sampled in the forecast step. The analysis step cannot remedy this because the analysis particles are obtained from a (discrete) sampling of the forecast ensemble.

The joint EnKF avoids this issue thanks to the (continuous) sampling character of its analysis step. The incoming observations are indeed used to update the forecast particles (or ensemble members) using a KF-like analysis step, as in (15), which results in analysis members that are not only different from the forecast members, but also more likely to belong to regions of high probabilities. Hereafter, for any ensemble {u^{(m)}}m=1:M, let û and Pu denote its empirical mean and covariance, respectively, and Pu,v the cross covariance between {u^{(m)}}m=1:M and {v^{(m)}}m=1:M. Assuming p(zn, yn | y0:n−1) is Gaussian, the analysis members are computed by applying the conditional sampling property 3 (appendix A) to (6):

zn^{a,(m)} = zn^{f,(m)} + Pzf,yf (Pyf)^{-1} [yn − yn^{f,(m)}],   (15)

where yn^{f,(m)} is an observation forecast member that is computed by sampling from the Gaussian N[hn(xn^{f,(m)}), Rn]. The KF-like update of the parameters is based on the sampled cross covariance, Pθf,yf. The latter may, however, not be well estimated when the parameters and state variables are strongly nonlinearly related, which may severely limit the performance of the joint EnKF (Kivman 2003). In the subsequent section, we introduce an alternative scheme that uses the PF to sample the parameters and the EnKF to sample the state.

3. The EnKF–PF scheme

a. The generic algorithm

The forecast step of the proposed scheme is identical to that of the joint PF and the joint EnKF. In the analysis step, instead of using a truly augmented (i.e., joint) approach to compute p(zn | y0:n) from p(zn | y0:n−1), we adopt a conditional strategy that involves an analysis step that separately updates p(θn | y0:n) and p(xn | θn, y0:n), from which one then obtains the following:
p(zn | y0:n) = p(xn | θn, y0:n) p(θn | y0:n).   (16)

The idea of using the conditional (or marginalization) strategy in (16) has already been used in systems with particular structures, including those for which a part of the augmented state (here xn) involves linear-Gaussian models (Doucet et al. 2000, 2001b; Schön et al. 2005; Guimaraes et al. 2010) or models with finite (discrete) state spaces (Ghahramani and Jordan 1997; Doucet et al. 2000). In these cases, p(xn | θn, y0:n) is analytically tractable either using the KF or the hidden Markov model (HMM) algorithm (depending on whether the state is continuous or discrete). This enables one to marginalize out xn from the joint posterior distribution and only focus on estimating, using the PF, p(θn | y0:n), which belongs to a space of reduced dimension. More related to our work, this conditional technique, also known as Rao–Blackwellisation, has recently been adopted by Santitissadeekorn and Jones (2015) in the context of state–parameter filtering, yet with a nonlinear (conditional) state model, in order to apply the EnKF to estimate the state. This allows the authors to derive a two-stage EnKF–PF scheme that exchanges estimates (ensemble means) of parameters and state between the PF and the EnKF components. Here, we propose an alternative scheme that exchanges the ensembles instead of only their means. A thorough discussion of the differences between these two approaches is given in section 3c below.

1) Parameters’ analysis step

Since the parameters will be updated based on the PF, one chooses the form in (7) for the generic analysis step:
p(θn | y0:n) = p(yn | θn, y0:n−1) p(θn | y0:n−1) / ∫ p(yn | θn, y0:n−1) p(θn | y0:n−1) dθn.   (17)
The conditional likelihood, p(yn | θn, y0:n−1), is not known. As will be made clearer in the next section, this pdf needs to be assumed Gaussian in the (EnKF) state update step, as it is used to sample the observations forecast ensemble. While we exploit this Gaussian assumption here, we still need to compute the first two moments of this likelihood. To that end, we propose to approximate these moments based on a linear minimum MSE (LMMSE) optimization criteria2 (Anderson and Moore 1979; Kailath et al. 2000). Given y0:n−1, one readily obtains a LMMSE-based estimation of the first two moments of p(yn | θn, y0:n−1) as follows:
E(yn | θn, y0:n−1) ≈ ŷn^f + Pyf,θf (Pθf)^{-1} (θn − θ̂n^f),   (18)
cov(yn | θn, y0:n−1) ≈ Pyf − Pyf,θf (Pθf)^{-1} Pθf,yf.   (19)
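The LMMSE construction underlying (18)–(19) can be illustrated with a short Python sketch that estimates the conditional mean and covariance of one variable given another from joint ensemble samples; the function name and interface are ours, for illustration only:

```python
import numpy as np

def lmmse_conditional_moments(U, V, v):
    """LMMSE approximation of E(u | v) and cov(u | v) from joint samples.

    U: (M, du) samples of u; V: (M, dv) samples of v; v: (dv,) conditioning value.
    Returns (mean, cov) with mean = û + C_uv C_vv^{-1} (v - v̂)
    and cov = C_uu - C_uv C_vv^{-1} C_vu, as in (18)-(19).
    """
    um, vm = U.mean(axis=0), V.mean(axis=0)
    Au, Av = U - um, V - vm
    M = U.shape[0]
    Cuu = Au.T @ Au / (M - 1)
    Cuv = Au.T @ Av / (M - 1)
    Cvv = Av.T @ Av / (M - 1)
    G = Cuv @ np.linalg.inv(Cvv)   # LMMSE "gain"
    mean = um + G @ (v - vm)
    cov = Cuu - G @ Cuv.T
    return mean, cov
```

For jointly Gaussian (u, v) these are exactly the conditional moments; otherwise they are the best linear approximation, which is the sense in which (18)–(19) are used here.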

2) (Conditional) state analysis step

Similarly to (6), one has the following:
p(xn | θn, y0:n) = p(yn | xn) p(xn | θn, y0:n−1) / p(yn | θn, y0:n−1).   (20)
The use of the EnKF to “sample” (20) requires p(yn, xn | θn, y0:n−1) to be Gaussian. On the other hand, the EnKF-like sampling of (20) relies on sampling the marginals, p(yn | θn, y0:n−1) and p(xn | θn, y0:n−1), and thus on a beforehand computation of their first two moments. An LMMSE estimate of the moments of p(yn | θn, y0:n−1) has already been considered in (18)–(19). Those of p(xn | θn, y0:n−1) are similarly approximated as follows:
E(xn | θn, y0:n−1) ≈ x̂n^f + Pxf,θf (Pθf)^{-1} (θn − θ̂n^f),   (21)
cov(xn | θn, y0:n−1) ≈ Pxf − Pxf,θf (Pθf)^{-1} Pθf,xf.   (22)
Note that in the linear observational operator case [i.e., hn(xn) = Hn xn in the system (1)], only p(xn | θn, y0:n−1) needs to be assumed Gaussian since in such a case the pdf p(yn | θn, y0:n−1), which is given by

p(yn | θn, y0:n−1) = ∫ p(yn | xn) p(xn | θn, y0:n−1) dxn,   (23)

is also Gaussian with parameters exactly equal to the LMMSE estimates in (18) and (19), which, in turn, are equal to Hn E(xn | θn, y0:n−1) and Hn cov(xn | θn, y0:n−1) Hn^T + Rn, respectively.

b. Practical implementation

We resort to classical sampling properties, recalled in the appendix, to derive a Monte Carlo implementation of the generic algorithm in (5), (17), and (20). As stated previously, the forecast step of the proposed scheme is the same as those of the PF and EnKF, given by (12)–(13). In the analysis step and using (16), each joint analysis member zn^{a,(m)} ~ p(zn | y0:n) is computed following a conditioning strategy that requires first computing θn^{a,(m)} ~ p(θn | y0:n) using the PF update and then xn^{a,(m)} ~ p[xn | θn^{a,(m)}, y0:n] using an EnKF update. The proposed EnKF–PF algorithm is summarized below (the proof is given in appendix B).

  • Joint forecast of state and parameters:

    1. The joint forecast ensemble, {zn^{f,(m)}}m=1:M, is first computed using (12)–(13). This step is initialized at time t0 by sampling from a given joint prior, p(x0, θ0).

    2. The observation forecast ensemble is obtained as yn^{f,(m)} = hn[xn^{f,(m)}] + vn^{(m)}, with vn^{(m)} ~ N(0, Rn).

    3. Compute the empirical means, x̂n^f, θ̂n^f, and ŷn^f, and the forecast perturbation matrices, from which the (cross) covariances, Pxf, Pθf, Pyf, Pxf,θf, Pxf,yf, and Pyf,θf, and “the gains,” Kx,θ = Pxf,θf (Pθf)^{-1} and Ky,θ = Pyf,θf (Pθf)^{-1}, are then evaluated.3

  • Parameters’ analysis (PF-like update):

    1. The normalized weights, {ωn^{(m)}}m=1:M, are computed based on the assumed Gaussian likelihood p[yn | θn^{f,(m)}, y0:n−1] = N[ŷn(θn^{f,(m)}), Σn], with LMMSE-based moments
      ŷn[θn^{f,(m)}] = ŷn^f + Ky,θ [θn^{f,(m)} − θ̂n^f],   (24)
      Σn = Pyf − Ky,θ Pθf,yf.   (25)
      An approximation of the PM analysis estimate in (3) and the associated error covariance are then given by the weighted ensemble mean, Σm ωn^{(m)} θn^{f,(m)}, and the corresponding weighted sample covariance, respectively.
    2. The weighted ensemble, {θn^{f,(m)}, ωn^{(m)}}m=1:M, is then sampled with replacement to obtain an analysis ensemble with equal weights, {θn^{a,(m)}}m=1:M.

  • State analysis (EnKF-like update):

    1. The means (for all m) of the conditional forecast pdf, p[xn | θn^{a,(m)}, y0:n−1], are estimated as
      x̂n[θn^{a,(m)}] = x̂n^f + Kx,θ [θn^{a,(m)} − θ̂n^f].   (26)
    2. A singular value decomposition (SVD) of the M × M symmetric matrix arising from the ensemble representation of the conditional covariance in (22) is performed, leading to M × M orthogonal Un and diagonal Σn matrices.

    3. Samples are then generated from p[xn | θn^{a,(m)}, y0:n−1] as
      x̃n^{f,(m)} = x̂n[θn^{a,(m)}] + Sxf Un (Σn)^{1/2} rn^{(m)},   (27)
      where Sxf is the matrix of forecast state perturbations, (Σn)^{1/2} denotes the square root of the diagonal matrix Σn, and rn^{(m)} ~ N(0, IM).
    4. These are then updated based on the observation, yn, following a KF-like correction, leading to the state analysis ensemble of interest, {xn^{a,(m)}}m=1:M, as
      ỹn^{(m)} = hn[x̃n^{f,(m)}] + ṽn^{(m)},   (28)
      xn^{a,(m)} = x̃n^{f,(m)} + Px̃f,ỹf (Pỹf)^{-1} [yn − ỹn^{(m)}],   (29)
      and whose mean is taken as an approximation of the PM estimate in (2).

A schematic illustration of these steps is given in Fig. 1. Once the forecast members xn^{f,(m)} and θn^{f,(m)} are generated, they are updated, successively, based on the observation, yn. The update of the parameters is performed following a PF-like mechanism that consists of a succession of a weighting step, evaluating a (normalized) weight for each member θn^{f,(m)} based on the density, p[yn | θn^{f,(m)}, y0:n−1], which is assumed to be Gaussian with moments computed based on the LMMSE criteria, and a resampling step leading to the (equally weighted) analysis members of interest, θn^{a,(m)}. The update of the state is then performed by first computing the moments of the assumed Gaussian density, p[xn | θn^{a,(m)}, y0:n−1], based on the LMMSE criteria, then sampling this density before applying an EnKF-like update to the resulting samples to obtain the state analysis members of interest, xn^{a,(m)}. Here, the joint pdf p(xn, yn | θn, y0:n−1) needs to be Gaussian, not only its marginals as above. Finally, one notes that in the particular case of a linear observation operator, only p(xn | θn, y0:n−1) needs to be Gaussian [as this implies that both p(yn | θn, y0:n−1) and p(xn, yn | θn, y0:n−1) are Gaussian].

Fig. 1. A schematic illustration of the steps of computing a joint analysis ensemble {zn^{a,(m)}}m=1:M from the previous one, {zn−1^{a,(m)}}m=1:M, using the EnKF–PF.

Citation: Monthly Weather Review 146, 3; 10.1175/MWR-D-16-0485.1
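To make the succession of steps concrete, the following simplified Python sketch performs one EnKF–PF analysis for a linear observation operator. It is a loose illustration of the algorithm only: the SVD-based conditional sampling of (26)–(27) is omitted, and a plain EnKF correction is applied to the shared state forecast ensemble after the PF stage; all names are ours:

```python
import numpy as np

def gauss_logpdf(y, m, S):
    """Log-density of N(m, S) evaluated at y."""
    d = y - m
    L = np.linalg.cholesky(S)
    z = np.linalg.solve(L, d)
    return -0.5 * (z @ z + len(y) * np.log(2 * np.pi)) - np.log(np.diag(L)).sum()

def enkf_pf_analysis(Xf, Tf, y, H, R, rng):
    """One (simplified) EnKF-PF analysis step, linear observation operator.

    Xf: (nx, M) state forecast ensemble; Tf: (nt, M) parameter forecast ensemble.
    PF stage: weight/resample parameter members with a Gaussian likelihood
    p(y | theta) built from LMMSE conditional moments; EnKF stage: KF-like
    update of the state members.
    """
    nx, M = Xf.shape
    Yf = H @ Xf + rng.multivariate_normal(np.zeros(H.shape[0]), R, size=M).T
    xm, tm, ym = Xf.mean(1), Tf.mean(1), Yf.mean(1)
    Ax, At, Ay = Xf - xm[:, None], Tf - tm[:, None], Yf - ym[:, None]
    Ctt = At @ At.T / (M - 1)
    Cyt = Ay @ At.T / (M - 1)
    Cyy = Ay @ Ay.T / (M - 1)
    Gy = Cyt @ np.linalg.inv(Ctt)                  # LMMSE gain for y given theta
    Sy = Cyy - Gy @ Cyt.T + 1e-8 * np.eye(len(y))  # conditional covariance (regularized)
    # --- PF stage: weights from p(y | theta^(m)), then resampling ---
    logw = np.array([gauss_logpdf(y, ym + Gy @ (Tf[:, m] - tm), Sy)
                     for m in range(M)])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    Ta = Tf[:, rng.choice(M, size=M, p=w)]
    # --- EnKF stage: KF-like update of the state members ---
    Cxy = Ax @ Ay.T / (M - 1)
    K = Cxy @ np.linalg.inv(Cyy)
    Xa = Xf + K @ (y[:, None] - Yf)
    return Xa, Ta
```

Note how the two stages share a single state forecast ensemble, which is the source of the computational saving over the TS–EnKF–PF discussed in section 3c.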

c. Discussion

The proposed algorithm combines the PF, to sample θn^{a,(m)} ~ p(θn | y0:n), with the EnKF, to sample xn^{a,(m)} ~ p[xn | θn^{a,(m)}, y0:n]. The quantities xn^{a,(m)} are taken as samples of the state analysis pdf, p(xn | y0:n). The algorithm has been derived following a Bayesian formulation, while assuming p(xn, yn | θn, y0:n−1) to be Gaussian with moments computed based on the LMMSE criteria. As is well known, the Gaussian assumption of the joint state–observation forecast pdf is a standard assumption in the context of ensemble Kalman filtering. We further exploited this Gaussianity, along with the LMMSE criteria, in the PF stage to efficiently compute the likelihood, p(yn | θn, y0:n−1), which, in turn, is used to evaluate the particles’ weights. Now, assuming4 p(xn, yn | θn, y0:n−1) is Gaussian implies that its marginals p(xn | θn, y0:n−1) and p(yn | θn, y0:n−1) are also Gaussian, but not the forecast–analysis pdfs of interest, p(xn | y0:n−1), p(xn | y0:n), p(θn | y0:n−1), or p(θn | y0:n), in contrast with the joint EnKF, in which all these pdfs need to be Gaussian as the derivation of this filter relies on the Gaussian assumption over p(xn, θn, yn | y0:n−1).

Although the proposed EnKF–PF involves the cross covariances between the state (observations) and parameters’ forecast pdfs as in (21)–(22) [(18)–(19), respectively], these are indeed not used in practice, as in the joint EnKF, to generate the analysis members of interest, xn^{a,(m)} [θn^{a,(m)}, respectively]. Instead, they are only used to compute the means of p[xn | θn^{a,(m)}, y0:n−1] in the EnKF stage as in (26) [the weights in the PF stage as in (24), respectively], which are then used to generate these members. The resulting members are therefore not restricted to approximate Gaussian distributions as in the joint EnKF. While the θn^{a,(m)} are computed by the Gaussian-assumption-free sampling of the PF, the non-Gaussian character of the xn^{a,(m)} stems from the fact that these members are computed based on an EnKF update but conditionally on θn^{a,(m)} [this can be seen from Eq. (29), which involves θn^{a,(m)} through (26) and (27)–(28)]. On the other hand, the proposed EnKF–PF requires as many model integrations as the joint EnKF (as they share the same forecast step). The analysis steps of these algorithms involve operations with linear complexity in nx. Thus, although the analysis step of the proposed scheme involves a few more operations than that of the joint EnKF, the computational complexities of these schemes should be approximately of the same order in relatively large-dimensional state applications.

In terms of algorithmic comparison with the recently introduced two-stage EnKF–PF (TS–EnKF–PF) by Santitissadeekorn and Jones (2015), we note the following aspects:

  • As in the proposed scheme, the TS–EnKF–PF inherently relies on the Gaussian assumption over p(xn, yn | θn, y0:n−1). However, instead of using the LMMSE criteria to compute the first two moments, the authors approximate p(xn | θn, y0:n−1) with the forecast pdf p(xn | y0:n−1), whose moments can be estimated from the ensemble members. Such an assumption further enables one to compute the mean of the pdf p(yn | θn, y0:n−1), which serves to evaluate the particles’ weights in the PF stage, and the observation forecast members in the EnKF stage [see Eqs. (9) and (11) in Santitissadeekorn and Jones (2015)]. As for the covariance of p(yn | θn, y0:n−1), it is set to be equal to the covariance of p(yn | xn), Rn. However, based on (23), this covariance is actually equal to cov[hn(xn) | θn, y0:n−1] + Rn, which therefore suggests that cov[hn(xn) | θn, y0:n−1] is neglected in Santitissadeekorn and Jones (2015). This may strongly underestimate the true covariance when cov[hn(xn) | θn, y0:n−1] is much larger than Rn. Such a problem could be mitigated in the proposed approach, in which the particles’ weights are computed based on p[yn | θn^{f,(m)}, y0:n−1] = N[ŷn(θn^{f,(m)}), Σn], where ŷn[⋅] and Σn are given in (24) and (25), respectively. Indeed, Σn is approximated as the sum of Rn and the LMMSE-based approximation of cov[hn(xn) | θn, y0:n−1], which is not assumed to be zero as in the TS–EnKF–PF, but evaluated from the ensembles.

  • In the EnKF–PF, the information between the PF and EnKF is exchanged through the ensembles’ members (see e.g., Fig. 1). In the TS–EnKF–PF, the exchange is only carried through the ensembles’ means, which should result in loss of performance, especially when multimodal posterior distributions are involved. As discussed in Santitissadeekorn and Jones (2015), “feeding” the EnKF with the parameters’ mean only may divert the background ensemble of the EnKF from high-probability regions, and likewise when the state mean “feeds” the PF.

  • Because of the independent two-stage formulation of the TS–EnKF–PF, the dynamical model needs to be integrated in both the PF and EnKF stages, each of them involving a different state forecast ensemble. In the proposed scheme, the model is run only in the (shared) forecast stage, to obtain a state forecast ensemble that is used in both the PF and EnKF update stages. The proposed scheme is therefore roughly half as computationally demanding as the TS–EnKF–PF when the same ensemble size is used in both the PF and the EnKF.

4. Numerical experiments

Numerical experiments are performed with the strongly nonlinear Lorenz-96 (L96) (Lorenz and Emanuel 1998) to assess the behavior of the proposed EnKF–PF and to evaluate its performance against the joint PF, the joint EnKF, and the TS–EnKF–PF. The L96 model, which describes the time evolution of an atmospheric quantity, solves the following set of differential equations:
$$\frac{dx(j,t)}{dt} = \left[x(j+1,t) - x(j-2,t)\right] x(j-1,t) - x(j,t) + F(j), \tag{30}$$
where x(j, t), j = 1, 2, …, nx, with nx = 40 unless otherwise stated, denotes the jth element of the state at time t. Boundary conditions are cyclic [i.e., x(−1, t) = x(nx − 1, t), x(0, t) = x(nx, t), and x(1, t) = x(nx + 1, t)]. The constant F(j), which represents the external forcing, is commonly set to 8 for all j; this value is known to produce chaotic behavior of the model (Karimi and Paul 2010). Here, we follow Santitissadeekorn and Jones (2015) and assume it unknown and a function of two parameters, θ = [θ(1), θ(2)]T, through the following equation:
e31
We consider for now a perfect model scenario [i.e., the noise un in the model in (1) vanishes], so the uncertainty in θ is the only source of model error. The imperfect model scenario will be considered later.

We use the standard fourth-order Runge–Kutta method to numerically integrate the model in (30) with a time step size δm = 0.05, equivalent to 6 h in real time. The reference (true) state trajectory is built based on the reference parameters θ*(1) = 2 and θ*(2) = 40. The initial (reference) state is set to F, based on the reference parameters. The model is then integrated for 36 000 time steps (corresponding to 24.6575 yr in real time). The first 30 000 time steps of the resulting trajectory are discarded as a spinup period, and the remaining 6000 time steps are taken as the reference states. The observations are obtained by perturbing the corresponding states with standard Gaussian noise (i.e., with zero mean and unit variance).
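The truth-generation setup described above can be sketched as follows. This is a minimal illustration with our own function names, assuming a fixed scalar forcing F = 8 rather than the parameterized form in (31):

```python
import numpy as np

def l96_rhs(x, F):
    """Right-hand side of the Lorenz-96 model (30), with cyclic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt=0.05):
    """One fourth-order Runge-Kutta step of size dt (0.05 ~ 6 h)."""
    k1 = l96_rhs(x, F)
    k2 = l96_rhs(x + 0.5 * dt * k1, F)
    k3 = l96_rhs(x + 0.5 * dt * k2, F)
    k4 = l96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spinup discarded, remainder kept as the reference trajectory,
# observations = truth + standard Gaussian noise.
nx, F = 40, 8.0
rng = np.random.default_rng(0)
x = F * np.ones(nx)
x[0] += 0.01                      # small perturbation to trigger chaos
for _ in range(30_000):           # spinup period
    x = rk4_step(x, F)
truth = np.empty((6_000, nx))
for n in range(6_000):
    x = rk4_step(x, F)
    truth[n] = x
obs = truth + rng.standard_normal(truth.shape)
```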

We follow Santitissadeekorn and Jones (2015) and choose, for all filters involving the PF (i.e., all except the joint EnKF), the “parameters’ kernel smoothing” model [see (8)] of West and Liu (2001), in which gn−1(θn−1) = αθn−1 + (1 − α) times the particles’ mean, and the kernel covariance is (1 − α²) times the particles’ empirical covariance, with α = 0.9. This model has been shown to be more efficient than the random walk model (Gordon et al. 1993), for which gn−1(θn−1) = θn−1 with a fixed covariance. This was explained by the ability of the kernel smoothing model to pull the new set of particles toward the mean, which eliminates the “overdispersion” issue, the main limitation of the random walk model.
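The kernel smoothing step can be sketched as follows (an illustrative implementation under the stated α = 0.9; the helper name is ours). Note that the shrinkage toward the mean combined with the (1 − α²) noise preserves the ensemble mean and covariance:

```python
import numpy as np

def kernel_smooth_propagate(theta, alpha=0.9, rng=None):
    """West-Liu kernel-smoothing step: shrink each particle toward the
    ensemble mean, then add Gaussian noise with covariance
    (1 - alpha**2) * Sigma, where Sigma is the particles' empirical
    covariance. theta has shape (M, n_theta)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = theta.mean(axis=0)
    cov = np.cov(theta, rowvar=False)
    shrunk = alpha * theta + (1.0 - alpha) * mean
    noise = rng.multivariate_normal(np.zeros(theta.shape[1]),
                                    (1.0 - alpha**2) * cov,
                                    size=theta.shape[0])
    return shrunk + noise
```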

In all filters, the initial forecast ensembles of the parameters θ(1) and θ(2) are independently generated from the (prior) Gaussian distributions N(4, √2) and N(60, 3), respectively (Santitissadeekorn and Jones 2015). The initial forecast state ensemble is generated from a Gaussian distribution centered around the mean of the reference states with an identity covariance, following Hamill and Whitaker (2011) and Song et al. (2010).

In all experiments, the joint PF and the PF components of the EnKF–PF and TS–EnKF–PF use the residual resampling strategy (Liu and Chen 1998; van Leeuwen 2009), which is well known to improve upon the simple probabilistic one used in the bootstrap PF of Gordon et al. (1993). On the other hand, the EnKF was implemented with the covariance inflation and localization techniques to enhance the filter’s robustness and improve its performance with small ensembles. Covariance inflation mitigates the underestimation of the sample error variances that results from the use of a small ensemble, among other neglected uncertainties and filtering approximations (Anderson and Anderson 1999; Hoteit et al. 2002). Covariance localization tackles the rank deficiency and spuriously large cross correlations between distant state variables in the ensemble covariance matrix (Hamill and Whitaker 2011). Here, localization is applied using the fifth-order correlation function given in (4.10) in Gaspari and Cohn (1999). In all experiments, we follow Santitissadeekorn and Jones (2015) and use a localization length scale of 2 and an inflation factor of 1.2, for which the joint EnKF provided the best performances.
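Residual resampling, used by all PF components here, can be sketched as follows (an illustrative implementation; the function name is ours):

```python
import numpy as np

def residual_resample(weights, rng=None):
    """Residual resampling (Liu and Chen 1998): deterministically copy
    each particle floor(M * w) times, then multinomially draw the
    remaining slots from the residual weights. Returns resampled indices."""
    rng = np.random.default_rng() if rng is None else rng
    M = len(weights)
    counts = np.floor(M * weights).astype(int)   # deterministic part
    residual = M * weights - counts
    n_rest = M - counts.sum()
    if n_rest > 0:
        residual = residual / residual.sum()
        extra = rng.choice(M, size=n_rest, p=residual)
        counts += np.bincount(extra, minlength=M)
    return np.repeat(np.arange(M), counts)
```

Compared with purely multinomial resampling, the deterministic copies reduce the Monte Carlo variance introduced by the resampling step.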

In the first set of experiments, we compare the filters’ behavior with M = 100 ensemble members in a setting where the data are assimilated every four model time steps, which is equivalent to one day in real time (i.e., the observations time step is δo = 4δm). In all the experiments, the results are averaged over 30 independent simulations, each with randomly generated initial ensembles and observation noise. In terms of computational cost, the EnKF and PF components of the TS–EnKF–PF use the same ensemble size, which makes the latter more computationally demanding than the joint EnKF and the proposed EnKF–PF.

Figure 2 plots the time evolution of the analysis estimates of the marginal parameters and the associated 95% confidence intervals (bounded by ±1.96 × standard deviation). As expected, the joint PF seems to suffer from the high dimensionality of the system. The joint EnKF outperforms the joint PF, suggesting the relevance of using a KF-like update on the forecast members. Its performance nevertheless saturates after about 100 assimilation cycles, and the contribution of the observations becomes less and less significant. The associated standard deviations are very small (close to zero), meaning a very narrow confidence interval whose bounds coincide with the estimates and are far from containing the true values of the parameters. The TS–EnKF–PF estimates both parameters very well compared to the joint schemes [even though θ(2) requires about 500 assimilation cycles to be well estimated]. Furthermore, the true values of θ(2) and θ(1) fall within the TS–EnKF–PF confidence intervals, in contrast with the joint PF and the joint EnKF. The proposed EnKF–PF provides the most accurate estimates (in terms of reaching the true values) for both parameters, with confidence intervals that include the true values of these parameters.

Fig. 2.

Time evolution of the parameters’ analysis estimates (dashed blue) and associated 95% confidence intervals (green) using the (top to bottom) joint PF, joint EnKF, TS–EnKF–PF, and EnKF–PF with 100 members for (left) θ (1) and (right) θ (2). The true values of the parameters are indicated in red. In the last three filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. Observations were assimilated every four model time steps (1 day in real time).

Citation: Monthly Weather Review 146, 3; 10.1175/MWR-D-16-0485.1

Figure 3 plots the bias of the analysis estimates of the first four components of the state and of the parameters (i.e., the error between the reference states–parameters and the average of their estimates over 30 different sets of observations). Overall, the proposed EnKF–PF and, to a lesser extent, the TS–EnKF–PF exhibit the lowest biases, followed by the joint EnKF, which, in turn, clearly outperforms the joint PF.

Fig. 3.

Time evolution of the bias (dashed blue) of the first four components of the state and the parameters, computed as the error between the true values of these variables and the average (over 30 independent repetitions) of their analysis estimates obtained using (a) EnKF–PF, (b) TS–EnKF–PF, (c) joint EnKF, and (d) joint PF with 100 members. In the first three filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. All observations were assimilated at every four model time steps (1 day in real time).


In the next set of experiments, the filters’ performances are evaluated based on the root MSE (RMSE) misfits between the reference augmented states z(j, n) and the filter (analysis) estimates ẑ(j, n), averaged over all variables and over the last Nn = 100 assimilation cycles:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n_z N_n} \sum_{n=N-N_n+1}^{N} \sum_{j=1}^{n_z} \left[z(j,n) - \hat{z}(j,n)\right]^2}. \tag{32}$$
The RMSEs of the parameters’ vector and the actual state are evaluated similarly. The relative estimation error is also considered for the (scalar) marginal parameters. As above, the presented RMSEs and relative errors are averaged over 30 independent repetitions.
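The time- and variable-averaged RMSE of (32) amounts to a few lines (an illustrative helper; the name is ours):

```python
import numpy as np

def time_avg_rmse(z_true, z_hat, last=100):
    """RMSE as in (32): squared misfits averaged over all variables and
    over the last `last` assimilation cycles, then square-rooted.
    Both arrays have shape (n_cycles, n_z)."""
    err = z_true[-last:] - z_hat[-last:]
    return np.sqrt(np.mean(err ** 2))
```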

a. Sensitivity to the ensemble size

We now study the sensitivity of the four filtering algorithms to the ensemble size, assimilating again the data every δo = 4δm. Figure 4 plots the RMSE (the relative error, respectively), as a function of the ensemble size, of the analysis estimates of the full joint state–parameters’ vector zn = (xn, θ), the state xn, and the parameters’ vector θ = [θ(1), θ(2)] [the marginal parameters θ(1) and θ(2), respectively]. As can be seen, the “blended” schemes that combine the PF with the EnKF lead to roughly comparable performances, and both clearly outperform the joint EnKF, especially for moderate ensemble sizes (i.e., M < 200). Indeed, the joint EnKF requires more than 150 members to reach the accuracy the blended schemes achieve with only 50 members. On the other hand, although the joint EnKF with M ≥ 200 members performs slightly better than the blended schemes in estimating the marginal parameter θ(1), it nevertheless provides similar performances in estimating the joint parameters’ vector θ. Recalling
$$p[\theta \,|\, y_{0:n}] = \underbrace{p[\theta(2) \,|\, \theta(1), y_{0:n}]}_{\text{term 1}}\;\underbrace{p[\theta(1) \,|\, y_{0:n}]}_{\text{term 2}},$$
this can be explained by the fact that, compared to the joint EnKF, the EnKF–PF and TS–EnKF–PF generate more representative analysis members of θ(2) given those of θ(1) {i.e., they more efficiently sample p[θ(2) | θ(1)a,(m), y0:n] in term 1}, which compensates for the effect of (slightly) less accurately estimating θ(1), which, in turn, may originate from less efficiently sampling p[θ(1) | y0:n] (term 2). Furthermore, the linear relationship between θ(1) and yn [which follows from (30)–(31) and the linear observation model] may explain the reasonable estimation of θ(1) by the joint EnKF.
Fig. 4.

Time- and variable-averaged RMSE of the analysis estimates of state–parameters’ (a) vector, (b) state, and (c) parameters’ vector, and (d),(e) time-averaged relative error of the analysis estimates of marginal parameters, provided by joint EnKF (blue), TS–EnKF–PF (green), and EnKF–PF (red) as a function of ensemble size. In all filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. All observations were assimilated at every four model time steps (1 day in real time).


b. Sensitivity to the frequency of observations

We now fix the ensemble size to M = 100 and test the filters’ performances for different lengths of the assimilation window, fo = δo/δm (i.e., the number of model time steps between assimilated observations). Figure 5 displays the RMSE (the relative error, respectively), as a function of fo, of the analysis estimates of the state–parameters’ vector zn, the state xn, and the parameters’ vector θ [the marginal parameters θ(1) and θ(2), respectively]. Again, overall, the TS–EnKF–PF and the EnKF–PF suggest better performances than the joint EnKF, in particular for estimating the full state–parameters’ vector. These two filters lead to comparable results when the assimilation period of the data does not exceed 2 days. However, when the data are assimilated less frequently, the performance of the TS–EnKF–PF degrades increasingly with fo, while the proposed EnKF–PF proves more robust to changes in fo. One can also see from Figs. 5b and 5c that the joint EnKF performs almost as well as the blended schemes in estimating the state, but it is clearly less accurate when it comes to the parameters’ vector. The fact that the joint EnKF suggests RMSEs closer to those of the hybrid schemes for the state than for the parameters’ vector is also consistent with the results of Figs. 4 and 6, which outline the impact of the ensemble size and the observation noise, respectively. Furthermore, while the joint EnKF performs poorly with θ, it still estimates θ(1) as well as the hybrid algorithms. Based on
$$p[\theta \,|\, y_{0:n}] = \underbrace{p[\theta(1) \,|\, \theta(2), y_{0:n}]}_{\text{term 1}}\;\underbrace{p[\theta(2) \,|\, y_{0:n}]}_{\text{term 2}},$$
the relatively poor performance of the joint EnKF in estimating θ is due not only to the filter’s inability to accurately estimate the cross covariance of the posterior marginal pdfs (which is involved in term 1), but also to its inability to efficiently sample the posterior p[θ(2) | y0:n] (term 2). The strongly nonlinear relationships between θ(1) and θ(2), and between θ(2) and yn [see (30)–(31)], may explain the poor estimation of the cross covariance in term 1 and the inefficient sampling in term 2, respectively.
Fig. 5.

As in Fig. 4, but as a function of the temporal assimilation period. In all filters, ensemble size, localization length scale, and inflation factor were set to 100, 2, and 1.2, respectively.


Fig. 6.

As in Fig. 4, but as a function of observation error variance. In all filters, ensemble size, localization length scale, and inflation factor were set to 100, 2, and 1.2, respectively.


c. Sensitivity to the observation noise

The filters are now tested in more challenging scenarios by studying their sensitivity to larger observation noise (i.e., to different variances σ2 ≥ 1). We fix the ensemble size to M = 100 and the assimilation window to fo = 4. Figure 6 displays the RMSE (the relative error, respectively), as a function of σ2, of the analysis estimates of the joint state–parameters’ vector zn, the state xn, and the parameters’ vector θ [the marginal parameters θ(1) and θ(2), respectively]. The joint EnKF, the EnKF–PF, and, to a lesser extent, the TS–EnKF–PF are not very sensitive to the increase of the noise in the data (especially for σ2 ≥ 2). Furthermore, for all tested values of σ2, the proposed EnKF–PF exhibits the best behavior in estimating the joint state–parameters’ vector and its marginal (state and parameters) components. For instance, for σ2 = 5, the noisiest scenario, the EnKF–PF is more accurate (in terms of the RMSE of zn) than the TS–EnKF–PF with σ2 = 2 and the joint EnKF with σ2 = 1. On the other hand, while the TS–EnKF–PF and the joint EnKF estimate the state almost as accurately as the proposed scheme, the former clearly outperforms the latter for both the joint parameters’ vector and its marginal θ(2). As for θ(1), however, the TS–EnKF–PF performs worse than the joint EnKF, which, in turn, compares well with the EnKF–PF. This does not prevent the TS–EnKF–PF from more accurately computing the analysis members of θ(2) given those of θ(1), providing joint estimates of these parameters (i.e., of θ) that are more accurate than those of the joint EnKF.

d. Case of stochastic state model and nonlinear observation operator

We conduct here the same experiment as above in a scenario where the L96 model is stochastic given the parameters. We therefore include, as in the system in (1), an additive Gaussian noise with zero mean and a variance of 0.1 (see, e.g., Stordal et al. 2011; Shen and Tang 2015). The RMSE and relative errors of the analysis estimates of the state and parameters are displayed in Fig. 7. The errors are slightly larger than those in Fig. 6, which were obtained in the perfect model case, while suggesting a roughly similar overall behavior. More specifically, the proposed filter exhibits the best behavior, especially in strongly noisy scenarios (σ2 ≥ 2), followed by the TS–EnKF–PF for the state–parameters’ vector z, the parameters’ vector θ, and the marginal parameter θ(2), and by the joint EnKF for the state x and the marginal parameter θ(1).

Fig. 7.

As in Fig. 6, but in the case of an imperfect state model with an error variance of 0.1.


We further consider a more challenging scenario that includes, as above, noise in the L96 model, together with a strongly nonlinear observation operator hn(xn) = 5 tanh(xn) (Shen and Tang 2015). The averaged estimation errors are plotted in Fig. 8. As expected, these errors are generally larger than those of Fig. 7. Furthermore, in contrast with the linear observation operator case, the joint EnKF outperforms the TS–EnKF–PF in all cases, even for z, θ, and θ(2). The behavior of the TS–EnKF–PF is probably due to the strong nonlinearity of the observation operator, which may lead to more pronounced multimodal distributions. In this case, passing only the mean of the parameters’ posterior distribution to the EnKF may cause filter divergence, as the background ensemble of the EnKF may drift away from high-probability regions; the same holds when only the mean of the state posterior distribution is passed to the PF (Santitissadeekorn and Jones 2015, p. 2031). The performance of the TS–EnKF–PF improves for a mildly nonlinear observation operator, as can be seen in Fig. 9, corresponding to hn(xn) = 5 tanh(xn/10) (Shen and Tang 2015).

Fig. 8.

As in Fig. 7, but in the case of a strongly nonlinear observation operator.


Fig. 9.

As in Fig. 7, but in the case of a weakly nonlinear observation operator.


5. Conclusions

We proposed a Bayesian filtering algorithm for large-dimensional state–parameter dynamical systems that depend on a few unknown parameters. The new scheme combines a PF, which samples the parameters’ ensembles, with an EnKF, which computes the state ensembles conditional on the parameters’ ensembles. For (relatively) small ensembles, the proposed EnKF–PF is expected to outperform the joint PF, as it uses the EnKF for the large-dimensional component (the state) to avoid the curse of dimensionality, and the joint EnKF, as it uses the PF for the low-dimensional component (the parameters) to avoid the inconsistency issues related to the limitations of the Gaussian framework. In contrast with the recently introduced two-stage EnKF–PF (TS–EnKF–PF) scheme, in which the PF and EnKF exchange information through their ensembles’ means, the EnKF–PF exchanges the full EnKF and PF ensembles.

We tested the proposed EnKF–PF with the nonlinear Lorenz-96 model in which the dissipation factor is parameterized by two parameters, θ = [θ(1), θ(2)]. We conducted various sensitivity experiments to evaluate the behavior and robustness of the proposed scheme and to compare its performance against those of the joint PF, joint EnKF, and TS–EnKF–PF. In the linear observation operator scenario, the results suggest that the EnKF–PF always outperforms the other filters, being more robust and successfully estimating both state and parameters under different scenarios. While the joint PF leads to the worst performances for state estimation, the joint EnKF and the TS–EnKF–PF often exhibited performances similar to that of the proposed EnKF–PF. For parameter estimation, however, the proposed scheme always provided the best results, exhibiting robust behavior under challenging scenarios where data are very noisy or assimilated less frequently. Regarding the TS–EnKF–PF and the joint EnKF, while the former provides less reliable estimates of the marginal parameter θ(1) when data are not frequently assimilated, it nevertheless provides better estimates of the joint parameters’ vector θ. In the case of a strongly nonlinear observation operator, however, the joint EnKF becomes more accurate than the TS–EnKF–PF.

In the proposed EnKF–PF, the number of particles/members must be the same in the PF and the EnKF, as the two filters communicate via their full ensembles, and not only through the means, as in the TS–EnKF–PF, or through the means and cross covariances, as in the divided EnKF scheme of Luo and Hoteit (2014). The EnKF–PF could, however, be extended to derive a new two-stage scheme in which the EnKF and PF exchange information via their means, or any other statistics of the ensembles. In this case, one may consider using different ensemble sizes in the EnKF and PF, as in the TS–EnKF–PF and the divided EnKF. It could also be generalized to the case of Gaussian-mixture EnKFs, instead of the EnKF, and to the unsupervised case in which one or more of the system’s hyperparameters are unknown, such as the observation noise covariance. Such extensions will be considered in future studies.

Acknowledgments

Research reported in this publication was supported by King Abdullah University of Science and Technology (KAUST).

APPENDIX A

Some Useful Random Sampling Results

a. Property 1 [Hierarchical sampling (Robert 2007)]

Assuming that one can sample from p(x1) and p(x2 | x1), a sample x2* from p(x2) can be obtained by first drawing x1* from p(x1) and then drawing x2* from p(x2 | x1*).

b. Property 2 [Rubin’s SIR (Rubin 1988)]

Assuming that one has sampled a set {x(s)} of independent and identically distributed (i.i.d.) realizations from a prior p(x), one can asymptotically draw an i.i.d. set {x*(s)} from the posterior p(x | y) in two steps:

  1. Weighting. A normalized weight ω(s) ∝ p(y | x(s)) is first assigned to each sample x(s).

  2. Resampling. Each sample x*(s) is then drawn from the probability mass function Σs ω(s) δ[x − x(s)], where δ(⋅) is the Kronecker delta.
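The two steps of property 2 amount to a short weighting-and-resampling routine (an illustrative sketch of Rubin’s SIR; the helper name is ours):

```python
import numpy as np

def sir(samples, loglik, rng=None):
    """Rubin's SIR: weight prior samples by the likelihood, then resample
    with replacement from the weighted empirical distribution."""
    rng = np.random.default_rng() if rng is None else rng
    logw = loglik(samples)
    w = np.exp(logw - logw.max())   # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(len(samples), size=len(samples), p=w)
    return samples[idx]
```

For instance, with a standard Gaussian prior and a Gaussian likelihood centered at y = 2 with unit variance, the resampled set approximates the posterior N(1, 0.5).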

c. Property 3 [Conditional sampling (Hoffman and Ribak 1991)]

Consider a Gaussian pdf p(x, y), with Pxy and Py denoting the cross covariance of x and y and the covariance of y, respectively. Then a sample x* from p(x | y) can be obtained as x* = x̃ + Pxy Py−1 (y − ỹ), where (x̃, ỹ) ~ p(x, y).
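Property 3 can be sketched as follows (an illustrative implementation; the function and argument names are ours):

```python
import numpy as np

def conditional_sample(y, m_x, m_y, P_x, P_xy, P_y, rng=None):
    """Hoffman-Ribak conditional sampling: draw (xt, yt) from the joint
    Gaussian p(x, y), then correct xt with the gain P_xy P_y^{-1}
    applied to (y - yt); the result is a sample of p(x | y)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = np.concatenate([m_x, m_y])
    cov = np.block([[P_x, P_xy], [P_xy.T, P_y]])
    draw = rng.multivariate_normal(mean, cov)
    xt, yt = draw[:len(m_x)], draw[len(m_x):]
    return xt + P_xy @ np.linalg.solve(P_y, y - yt)
```

In the scalar case with P_x = 2, P_xy = P_y = 1, and zero means, the samples are distributed as the exact conditional N(y, 1).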

APPENDIX B

The EnKF–PF Update Steps

a. PF-like update of the parameters

Starting from a forecast ensemble of the parameters, an analysis ensemble is computed following a PF-like strategy based on property 2 and (17). The normalized weight of each forecast member is first computed from its likelihood p[yn | θn, y0:n−1]. The forecast ensemble is then resampled according to the resulting weights. As shown in section 3a(1), p[yn | θn, y0:n−1] is approximated by a Gaussian density with the mean and covariance given in (18)–(19). Approximating the expectations and (cross) covariances in (18)–(19) using the forecast ensembles of the state and parameters, one eventually obtains (24) and (25) as approximations of this mean and covariance, respectively.

b. EnKF-like update of the state

The state analysis ensemble is computed using an EnKF-like update based on (20) and property 3, by assuming that p(xn, yn | θn, y0:n−1) is Gaussian. Indeed, for each m = 1, …, M, given a state member drawn from p[xn | θn, y0:n−1] and an observation member drawn from p[yn | θn, y0:n−1], one obtains the analysis member through an update based on yn, following a KF-like correction with a gain Kn:
$$x_n^{a,(m)} = \tilde{x}_n^{f,(m)} + \mathbf{K}_n \left[ y_n - \tilde{y}_n^{f,(m)} \right]. \tag{B1}$$
An ensemble estimation of the (cross) covariances in the expression of the gain Kn can be obtained from the LMMSE-based estimation of the covariance of p[xn, yn | θn, y0:n−1], namely, ensemble approximations of cov[xn, yn | θn, y0:n−1] and cov[yn | θn, y0:n−1]. These are then inserted in (B1) to obtain the analysis update in (29).
Finally, one needs to compute the state members in (B1) [from which the expression in (28) of the observation members is readily obtained by applying property 1 to (23)]. To that end, we propose to sample them from the (assumed) Gaussian pdf p[xn | θn, y0:n−1], whose mean [in (21)] is approximated as in (26) using the forecast ensembles of the state and parameters, and whose covariance [in (22)] is approximated from the same ensembles. This leads to
$$\tilde{x}_n^{f,(m)} = \hat{E}\left[x_n \,|\, \theta_n^{a,(m)}, y_{0:n-1}\right] + \tilde{\mathbf{P}}_n^{1/2}\, \zeta_n^{(m)}, \tag{B2}$$
where ζn(m) ~ N(0, I) and P̃n1/2 is a square root of the approximated covariance. To compute this square root without forming the full state-space covariance, one uses the singular value decomposition (SVD) to factorize the M × M symmetric matrix built from the ensemble anomalies as Un Σn UnT, where Un is orthogonal and Σn is diagonal. The square root is then computed from Un and Σn1/2, and is finally used in (B2) to obtain (27).
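Drawing a perturbation with the ensemble covariance can be sketched in ensemble space as follows. This is an equivalent shortcut rather than the exact SVD route of the appendix (the function name is ours): the scaled anomaly matrix is itself a rectangular square root of the sample covariance, so projecting an M-dimensional standard Gaussian draw through it yields a perturbation with the desired covariance:

```python
import numpy as np

def sample_from_ensemble_cov(ens, rng=None):
    """Draw one perturbation from N(0, P), where P is the sample covariance
    of the ensemble `ens` (M x n_x).  A / sqrt(M - 1), with A the anomaly
    matrix, satisfies (A / sqrt(M-1))^T (A / sqrt(M-1)) = P, so
    A^T xi / sqrt(M - 1) with xi ~ N(0, I_M) has covariance P, without
    ever forming an n_x x n_x matrix."""
    rng = np.random.default_rng() if rng is None else rng
    M = ens.shape[0]
    A = ens - ens.mean(axis=0)          # anomalies, M x n_x
    xi = rng.standard_normal(M)
    return (xi @ A) / np.sqrt(M - 1)
```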

REFERENCES

  • Ades, M., and P. van Leeuwen, 2013: An exploration of the equivalent weights particle filter. Quart. J. Roy. Meteor. Soc., 139, 820–840, https://doi.org/10.1002/qj.1995.

  • Ait-El-Fquih, B., and F. Desbouvries, 2006: Kalman filtering in triplet Markov chains. IEEE Trans. Signal Process., 54, 2957–2963, https://doi.org/10.1109/TSP.2006.877651.

  • Ait-El-Fquih, B., and F. Desbouvries, 2011: Fixed-interval Kalman smoothing algorithms in singular state-space systems. J. Signal Process. Syst. Signal Image Video Technol., 65, 469–478, https://doi.org/10.1007/s11265-010-0532-3.

  • Ait-El-Fquih, B., and I. Hoteit, 2015: An efficient multiple particle filter based on the variational Bayesian approach. Proc. IEEE Int. Symp. on Signal Processing and Information Technology (ISSPIT), Abu Dhabi, United Arab Emirates, IEEE, https://doi.org/10.1109/ISSPIT.2015.7394338.

  • Ait-El-Fquih, B., and I. Hoteit, 2016: A variational Bayesian multiple particle filtering scheme for large-dimensional systems. IEEE Trans. Signal Process., 64, 5409–5422, https://doi.org/10.1109/TSP.2016.2580524.

  • Ait-El-Fquih, B., M. Gharamti, and I. Hoteit, 2016: A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology. Hydrol. Earth Syst. Sci., 20, 3289–3307, https://doi.org/10.5194/hess-20-3289-2016.

  • Aksoy, A., F. Zhang, and J. Nielsen-Gammon, 2006: Ensemble-based simultaneous state and parameter estimation with MM5. Geophys. Res. Lett., 33, L12801, https://doi.org/10.1029/2006GL026186.

  • Anderson, B. D. O., and J. B. Moore, 1979: Optimal Filtering. Prentice Hall, 368 pp.

  • Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev., 127, 2741–2758, https://doi.org/10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.

  • Annan, J. D., D. J. Lunt, J. C. Hargreaves, and P. J. Valdes, 2005: Parameter estimation in an atmospheric GCM using the ensemble Kalman filter. Nonlinear Processes Geophys., 12, 363–371, https://doi.org/10.5194/npg-12-363-2005.

  • Bellsky, T., J. Berwald, and L. Mitchell, 2014: Nonglobal parameter estimation using local ensemble Kalman filtering. Mon. Wea. Rev., 142, 2150–2164, https://doi.org/10.1175/MWR-D-13-00200.1.

  • Bengtsson, T., P. Bickel, and B. Li, 2008: Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. Probability and Statistics: Essays in Honor of David A. Freedman, D. Nolan and T. Speed, Eds., Vol. 2, Institute of Mathematical Statistics, 316–334, https://doi.org/10.1214/193940307000000518.

  • Bishop, C., B. Etherton, and S. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436, https://doi.org/10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

  • Carrassi, A., and S. Vannitsem, 2011: Parameter estimation using a particle method: Inferring mixing coefficients from sea level observations. Quart. J. Roy. Meteor. Soc., 137, 435–451, https://doi.org/10.1002/qj.762.

  • Chen, Y., and D. Zhang, 2006: Data assimilation for transient flow in geologic formations via ensemble Kalman filter. Adv. Water Resour., 29, 1107–1122, https://doi.org/10.1016/j.advwatres.2005.09.007.

  • Crisan, D., and A. Doucet, 2002: A survey of convergence results on particle filtering methods for practitioners. IEEE Trans. Signal Process., 50, 736–746, https://doi.org/10.1109/78.984773.

  • Desbouvries, F., and B. Ait-El-Fquih, 2008: Direct, prediction-based and smoothing-based particle filter algorithms. Proc. Fourth World Conf. of the IASC, Yokohama, Japan, IASC, 384–393.

  • Desbouvries, F., Y. Petetin, and B. Ait-El-Fquih, 2011: Direct, prediction- and smoothing-based Kalman and particle filter algorithms. Signal Process., 91, 2064–2077, https://doi.org/10.1016/j.sigpro.2011.03.013.

  • Djuric, P., and M. Bugallo, 2013: Particle filtering for high-dimensional systems. Proc. IEEE Fifth Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), St. Martin, France, IEEE, https://doi.org/10.1109/CAMSAP.2013.6714080.

  • Doucet, A., N. de Freitas, K. P. Murphy, and S. J. Russell, 2000: Rao-Blackwellised particle filtering for dynamic Bayesian networks. Proc. 16th Conf. on Uncertainty in Artificial Intelligence, Stanford, CA, AUAI, 176–183.

  • Doucet, A., N. de Freitas, and N. Gordon, Eds., 2001a: Sequential Monte Carlo Methods in Practice. Springer-Verlag, 582 pp., https://doi.org/10.1007/978-1-4757-3437-9.

  • Doucet, A., N. J. Gordon, and V. Krishnamurthy, 2001b: Particle filters for state estimation of jump Markov linear systems. IEEE Trans. Signal Process., 49, 613–624, https://doi.org/10.1109/78.905890.

  • Dunne, S., D. Entekhabi, and E. Njoku, 2007: Impact of multiresolution active and passive microwave measurements on soil moisture estimation using the ensemble Kalman smoother. IEEE Trans. Geosci. Remote Sens., 45, 1016–1028, https://doi.org/10.1109/TGRS.2006.890561.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, https://doi.org/10.1029/94JC00572.

  • Evensen, G., 2006: Data Assimilation: The Ensemble Kalman Filter. Springer, 280 pp.

  • Evensen, G., and P. van Leeuwen, 2000: An ensemble Kalman smoother for nonlinear dynamics. Mon. Wea. Rev., 128, 1852–1867, https://doi.org/10.1175/1520-0493(2000)128<1852:AEKSFN>2.0.CO;2.

  • Frei, M., and H. Künsch, 2012: Sequential state and observation noise covariance estimation using combined ensemble Kalman and particle filters. Mon. Wea. Rev., 140, 1476–1495, https://doi.org/10.1175/MWR-D-10-05088.1.

  • Frei, M., and H. R. Künsch, 2013: Mixture ensemble Kalman filters. Comput. Stat. Data Anal., 58, 127–138, https://doi.org/10.1016/j.csda.2011.04.013.

  • Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, https://doi.org/10.1002/qj.49712555417.

  • Ghahramani, Z., and M. Jordan, 1997: Factorial hidden Markov models. Mach. Learn., 29, 245–273, https://doi.org/10.1023/A:1007425814087.

  • Gharamti, M. E., J. Valstar, and I. Hoteit, 2014: An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models. Adv. Water Resour., 71, 1–15, https://doi.org/10.1016/j.advwatres.2014.05.001.

  • Gharamti, M. E., B. Ait-El-Fquih, and I. Hoteit, 2015: An iterative ensemble Kalman filter with one-step-ahead smoothing for state-parameters estimation of contaminant transport models. J. Hydrol., 527, 442–457, https://doi.org/10.1016/j.jhydrol.2015.05.004.

  • Gordon, N. J., D. J. Salmond, and A. F. M. Smith, 1993: Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F, 140, 107–113.

  • Guimaraes, A., B. Ait-El-Fquih, and F. Desbouvries, 2010: A fixed-lag particle smoother for blind SISO equalization of time-varying channels. IEEE Trans. Wireless Commun., 9, 512–516, https://doi.org/10.1109/TWC.2010.02.081694.

  • Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919, https://doi.org/10.1175/1520-0493(2000)128<2905:AHEKFV>2.0.CO;2.

  • Hamill, T. M., and J. S. Whitaker, 2011: What constrains spread growth in forecasts initialized from ensemble Kalman filters? Mon. Wea. Rev., 139, 117–131, https://doi.org/10.1175/2010MWR3246.1.

  • Hendricks Franssen, H., and W. Kinzelbach, 2008: Real-time groundwater flow modeling with the ensemble Kalman filter: Joint estimation of states and parameters and the filter inbreeding problem. Water Resour. Res., 44, W09408, https://doi.org/10.1029/2007WR006505.

  • Hoffman, Y., and E. Ribak, 1991: Constrained realizations of Gaussian fields—A simple algorithm. Astrophys. J., 380, L5–L8, https://doi.org/10.1086/186160.

  • Hoteit, I., D. Pham, and J. Blum, 2002: A simplified reduced order Kalman filtering and application to altimetric data assimilation in Tropical Pacific. J. Mar. Syst., 36, 101–127, https://doi.org/10.1016/S0924-7963(02)00129-X.

  • Hoteit, I., D. Pham, G. Triantafyllou, and G. Korres, 2008: A new approximate solution of the optimal nonlinear filter for data assimilation in meteorology and oceanography. Mon. Wea. Rev., 136, 317–334, https://doi.org/10.1175/2007MWR1927.1.

  • Hoteit, I., X. Luo, and D. Pham, 2012: Particle Kalman filtering: A nonlinear Bayesian framework for ensemble Kalman filters. Mon. Wea. Rev., 140, 528–542, https://doi.org/10.1175/2011MWR3640.1.

  • Hoteit, I., D.-T. Pham, M. Gharamti, and X. Luo, 2015: Mitigating observation perturbation sampling errors in the stochastic EnKF. Mon. Wea. Rev., 143, 2918–2936, https://doi.org/10.1175/MWR-D-14-00088.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hunt, B., E. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112126, https://doi.org/10.1016/j.physd.2006.11.008.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Husz, Z. L., A. M. Wallace, and P. R. Green, 2011: Tracking with a hierarchical partitioned particle filter and movement modelling. IEEE Trans. Syst. Man Cybern., 41B, 15711584, https://doi.org/10.1109/TSMCB.2011.2157680.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Jazwinski, A. H., 1970: Stochastic Processes and Filtering Theory. Vol. 64, Mathematics in Science and Engineering, Academic Press, 400 pp.

  • Kailath, T., A. H. Sayed, and B. Hassibi, 2000: Linear Estimation. Pearson, 854 pp.

  • Kalman, R. E., 1960: A new approach to linear filtering and prediction problems. Trans. ASME–J. Basic Eng., 82D, 35–45.

  • Karimi, A., and M. R. Paul, 2010: Extensive chaos in the Lorenz-96 model. Chaos, 20, 043105, https://doi.org/10.1063/1.3496397.

  • Kivman, G. A., 2003: Sequential parameter estimation for stochastic systems. Nonlinear Processes Geophys., 10, 253–259, https://doi.org/10.5194/npg-10-253-2003.

  • Künsch, H., 2001: State space and hidden Markov models. Complex Stochastic Systems, O. E. Barndorff-Nielsen, D. R. Cox, and C. Klüppelberg, Eds., CRC Press, 109–173.

  • Liu, B., B. Ait-El-Fquih, and I. Hoteit, 2016: Efficient kernel-based ensemble Gaussian mixture filtering. Mon. Wea. Rev., 144, 781–800, https://doi.org/10.1175/MWR-D-14-00292.1.

  • Liu, J. S., and R. Chen, 1998: Sequential Monte Carlo methods for dynamic systems. J. Amer. Stat. Assoc., 93, 1032–1044, https://doi.org/10.1080/01621459.1998.10473765.

  • Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414, https://doi.org/10.1175/1520-0469(1998)055<0399:OSFSWO>2.0.CO;2.

  • Luo, X., and I. Hoteit, 2014: Ensemble Kalman filtering with a divided state-space strategy for coupled data assimilation problems. Mon. Wea. Rev., 142, 4542–4558, https://doi.org/10.1175/MWR-D-13-00402.1.

  • Moradkhani, H., S. Sorooshian, H. V. Gupta, and P. R. Houser, 2005: Dual state–parameter estimation of hydrological models using ensemble Kalman filter. Adv. Water Resour., 28, 135–147, https://doi.org/10.1016/j.advwatres.2004.09.002.

  • Morzfeld, M., X. Tu, E. Atkins, and A. Chorin, 2012: A random map implementation of implicit filters. J. Comput. Phys., 231, 2049–2066, https://doi.org/10.1016/j.jcp.2011.11.022.

  • Pham, D., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev., 129, 1194–1207, https://doi.org/10.1175/1520-0493(2001)129<1194:SMFSDA>2.0.CO;2.

  • Rasmussen, J., H. Madsen, K. H. Jensen, and J. C. Refsgaard, 2015: Data assimilation in integrated hydrological modeling using ensemble Kalman filtering: Evaluating the effect of ensemble size and localization on filter performance. Hydrol. Earth Syst. Sci., 19, 2999–3013, https://doi.org/10.5194/hess-19-2999-2015.

  • Robert, C., 2007: The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer Science & Business Media, 606 pp., https://doi.org/10.1007/0-387-71599-1.

  • Rubin, D., 1988: Using the SIR algorithm to simulate posterior distributions. Bayesian Statistics, J. Bernardo et al., Eds., Vol. 3, Oxford University Press, 395–402.

  • Santitissadeekorn, N., and C. Jones, 2015: Two-stage filtering for joint state-parameter estimation. Mon. Wea. Rev., 143, 2028–2041, https://doi.org/10.1175/MWR-D-14-00176.1.

  • Schön, T., F. Gustafsson, and P.-J. Nordlund, 2005: Marginalized particle filters for mixed linear/nonlinear state-space models. IEEE Trans. Signal Process., 53, 2279–2289, https://doi.org/10.1109/TSP.2005.849151.

  • Septier, F., and G. W. Peters, 2015: An overview of recent advances in Monte-Carlo methods for Bayesian filtering in high-dimensional spaces. Theoretical Aspects of Spatial-Temporal Modeling, G. W. Peters and T. Matsui, Eds., Springer, 31–61, https://doi.org/10.1007/978-4-431-55336-6_2.

  • Shen, Z., and Y. Tang, 2015: A modified ensemble Kalman particle filter for non-Gaussian systems with nonlinear measurement functions. J. Adv. Model. Earth Syst., 7, 50–66, https://doi.org/10.1002/2014MS000373.

  • Slivinski, L., E. Spiller, A. Apte, and B. Sandstede, 2015: A hybrid particle-ensemble Kalman filter for Lagrangian data assimilation. Mon. Wea. Rev., 143, 195–211, https://doi.org/10.1175/MWR-D-14-00051.1.

  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640, https://doi.org/10.1175/2008MWR2529.1.

  • Song, H., I. Hoteit, B. Cornuelle, and A. Subramanian, 2010: An adaptive approach to mitigate background covariance limitations in the ensemble Kalman filter. Mon. Wea. Rev., 138, 2825–2845, https://doi.org/10.1175/2010MWR2871.1.

  • Spiller, E. T., A. Budhiraja, K. Ide, and C. K. R. T. Jones, 2008: Modified particle filter methods for assimilating Lagrangian data into a point-vortex model. Physica D, 237, 1498–1506, https://doi.org/10.1016/j.physd.2008.03.023.

  • Stordal, A. S., H. A. Karlsen, G. Naevdal, H. J. Skaug, and B. Valles, 2011: Bridging the ensemble Kalman filter and particle filters: The adaptive Gaussian mixture filter. Comput. Geosci., 15, 293–305, https://doi.org/10.1007/s10596-010-9207-1.

  • Subramanian, A., I. Hoteit, B. Cornuelle, and H. Song, 2012: Linear vs. nonlinear filtering with scale selective corrections for balanced dynamics in a simple atmospheric model. J. Atmos. Sci., 69, 3405–3419, https://doi.org/10.1175/JAS-D-11-0332.1.

  • Tippett, M., J. Anderson, C. Bishop, T. Hamill, and J. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485–1490, https://doi.org/10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2.

  • van Leeuwen, P. J., 2009: Particle filtering in geophysical systems. Mon. Wea. Rev., 137, 4089–4114, https://doi.org/10.1175/2009MWR2835.1.

  • van Trees, H., 1968: Detection, Estimation, and Modulation Theory: Part I. John Wiley and Sons, 716 pp.

  • Wen, X. H., and W. H. Chen, 2007: Real-time reservoir updating using ensemble Kalman filter: The confirming approach. Soc. Petrol. Eng., 11, 431–442.

  • West, M., and J. Liu, 2001: Combined parameter and state estimation in simulation-based filtering. Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas, and N. Gordon, Eds., Springer, 197–223.

  • Zhang, H., H.-J. Hendricks Franssen, X. Han, J. Vrugt, and H. Vereecken, 2017: State and parameter estimation of two land surface models using the ensemble Kalman filter and particle filter. Hydrol. Earth Syst. Sci., 21, 4927–4958, https://doi.org/10.5194/hess-21-4927-2017.

1

For a narrow likelihood (i.e., one that peaks in a very small region of the observation space), the curse of dimensionality may also occur even if the parameter space is small, unless the number of sampled particles is very large. As discussed in Bengtsson et al. (2008) (see also Snyder et al. 2008; van Leeuwen 2009), such a scenario arises especially when the data involve a large number of conditionally independent Gaussian observations.
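This weight collapse is easy to reproduce with a small Monte Carlo sketch (the standard Gaussian prior, unit observation noise, particle count, and observation dimensions below are illustrative choices, not taken from the paper): for a fixed number of particles, the largest normalized particle weight rapidly approaches 1 as the number of conditionally independent Gaussian observations grows.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100                                  # number of particles (illustrative)

def max_weight(n_obs):
    """Largest normalized particle weight after assimilating
    n_obs conditionally independent Gaussian observations."""
    particles = rng.standard_normal((M, n_obs))   # prior draws, N(0, I)
    y = rng.standard_normal(n_obs)                # observed data
    # log-likelihood of each particle (unit observation noise)
    loglik = -0.5 * ((y - particles) ** 2).sum(axis=1)
    w = np.exp(loglik - loglik.max())             # numerically stable weights
    w /= w.sum()
    return w.max()

def avg_max_weight(n_obs, trials=20):
    """Average the maximum weight over independent trials."""
    return float(np.mean([max_weight(n_obs) for _ in range(trials)]))

# weight concentration worsens sharply with the observation dimension
print(avg_max_weight(1), avg_max_weight(200))
```

With a single observation the weight is spread over many particles, whereas with a few hundred observations a single particle typically carries almost all of the weight, which is precisely the degeneracy discussed above.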

2

For two random variables u and v, the LMMSE estimator of u given (a realization of) v and the associated error covariance are given by E[u] + Cov(u, v)Cov(v)⁻¹(v − E[v]) and Cov(u) − Cov(u, v)Cov(v)⁻¹Cov(v, u), respectively; where Cov(u), Cov(v), and Cov(u, v) denote the covariances of u and v, and the cross covariance between u and v, respectively.
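As an illustration, the following sketch (the dimensions, sample size, and random linear map are arbitrary choices, not from the paper) builds a jointly distributed pair (u, v), forms the LMMSE estimate from sample moments, and checks that the empirical covariance of the estimation error matches Cov(u) − Cov(u, v)Cov(v)⁻¹Cov(v, u):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000

# jointly distributed (u, v): u in R^2, v in R^3, via a random linear map
A = rng.standard_normal((5, 5))
z = rng.standard_normal((N, 5)) @ A.T
u, v = z[:, :2], z[:, 2:]

# sample moments
mu_u, mu_v = u.mean(axis=0), v.mean(axis=0)
C = np.cov(z.T)                                  # 5x5 joint covariance
Cuu, Cuv, Cvv = C[:2, :2], C[:2, 2:], C[2:, 2:]

# LMMSE estimate of u given v, and its error covariance (the two formulas above)
gain = Cuv @ np.linalg.inv(Cvv)
u_hat = mu_u + (v - mu_v) @ gain.T
P = Cuu - gain @ Cuv.T

# the empirical covariance of the estimation error reproduces P
err = u - u_hat
print(np.abs(np.cov(err.T) - P).max())
```

Because the same sample moments are used both to build the estimator and to evaluate its error, the match is exact up to floating-point roundoff; with moments estimated from an independent sample it would hold only up to Monte Carlo error.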

3

Note that the nθ × nθ matrix whose inverse is needed in the expression of “the gains” is positive definite, and thus invertible, since the number of particles, M, is assumed to be larger than the number of parameters, nθ.
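The invertibility argument rests on a standard rank property of empirical covariances, sketched below on Gaussian draws (the dimensions are illustrative, and the gain expressions from the text are omitted): the sample covariance of M particles in an nθ-dimensional parameter space is full rank, hence invertible, only when M exceeds nθ.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_cov_rank(M, n_theta):
    """Rank of the empirical covariance of M particles in R^{n_theta}."""
    theta = rng.standard_normal((M, n_theta))     # M parameter particles
    return np.linalg.matrix_rank(np.cov(theta.T))

# M > n_theta: full rank (invertible); M <= n_theta: rank at most M - 1
print(sample_cov_rank(100, 4), sample_cov_rank(3, 4))
```

With only M particles the empirical covariance is built from M − 1 independent deviations from the ensemble mean, so its rank can never exceed M − 1, regardless of the parameter dimension.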

4

It should be stressed that (21)–(22) [respectively, (18)–(19)], which approximate the moments of the assumed Gaussian pdf p(xn | θn, y0:n−1) [respectively, p(yn | θn, y0:n−1)] under the LMMSE optimization criterion, have the same form as the true moments when p(xn, θn | y0:n−1) [respectively, p(yn, θn | y0:n−1)] is Gaussian. In our study, however, these joint pdfs are not required to be Gaussian: the proposed algorithm uses only their first two moments, which are empirically estimated from the ensemble members and remain valid even when the pdfs are not Gaussian.

5

More precisely, Santitissadeekorn and Jones (2015) used an inflation factor α = 1, suggesting that the joint EnKF results are not too sensitive to α when ≤ 4. Our tests reveal that the joint EnKF provides the best estimates with α = 1.2, especially when implemented with relatively small ensembles.

6

Other experiments, conducted using alternative initial priors of the parameters whose supports include the true values θ*(1) and θ*(2), suggested an improvement of the joint PF in tracking these true values, although it remained less accurate than the three other filters.

7

As is commonly done, we tolerate here some flexibility in the notion of “accuracy” and limit it to closeness to the reference state/parameters rather than to the true PM estimate, which is not accessible. We further provide the (empirical) confidence intervals (or uncertainties) only as a way to assess whether the range of values that is likely to include the estimates also includes the reference state/parameters.

  • Ades, M., and P. van Leeuwen, 2013: An exploration of the equivalent weights particle filter. Quart. J. Roy. Meteor. Soc., 139, 820–840, https://doi.org/10.1002/qj.1995.

  • Ait-El-Fquih, B., and F. Desbouvries, 2006: Kalman filtering in triplet Markov chains. IEEE Trans. Signal Process., 54, 2957–2963, https://doi.org/10.1109/TSP.2006.877651.

  • Ait-El-Fquih, B., and F. Desbouvries, 2011: Fixed-interval Kalman smoothing algorithms in singular state-space systems. J. Signal Process. Syst. Signal Image Video Technol., 65, 469–478, https://doi.org/10.1007/s11265-010-0532-3.

  • Ait-El-Fquih, B., and I. Hoteit, 2015: An efficient multiple particle filter based on the variational Bayesian approach. Proc. IEEE Int. Symp. on Signal Processing and Information Technology (ISSPIT), Abu Dhabi, United Arab Emirates, IEEE, https://doi.org/10.1109/ISSPIT.2015.7394338.

  • Ait-El-Fquih, B., and I. Hoteit, 2016: A variational Bayesian multiple particle filtering scheme for large-dimensional systems. IEEE Trans. Signal Process., 64, 5409–5422, https://doi.org/10.1109/TSP.2016.2580524.

  • Ait-El-Fquih, B., M. Gharamti, and I. Hoteit, 2016: A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology. Hydrol. Earth Syst. Sci., 20, 3289–3307, https://doi.org/10.5194/hess-20-3289-2016.

  • Aksoy, A., F. Zhang, and J. Nielsen-Gammon, 2006: Ensemble-based simultaneous state and parameter estimation with MM5. Geophys. Res. Lett., 33, L12801, https://doi.org/10.1029/2006GL026186.

  • Anderson, B. D. O., and J. B. Moore, 1979: Optimal Filtering. Prentice Hall, 368 pp.

  • Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev., 127, 2741–2758, https://doi.org/10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.

  • Annan, J. D., D. J. Lunt, J. C. Hargreaves, and P. J. Valdes, 2005: Parameter estimation in an atmospheric GCM using the ensemble Kalman filter. Nonlinear Processes Geophys., 12, 363–371, https://doi.org/10.5194/npg-12-363-2005.

  • Bellsky, T., J. Berwald, and L. Mitchell, 2014: Nonglobal parameter estimation using local ensemble Kalman filtering. Mon. Wea. Rev., 142, 2150–2164, https://doi.org/10.1175/MWR-D-13-00200.1.

  • Bengtsson, T., P. Bickel, and B. Li, 2008: Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. Probability and Statistics: Essays in Honor of David A. Freedman, D. Nolan and T. Speed, Eds., Vol. 2, Institute of Mathematical Statistics, 316–334, https://doi.org/10.1214/193940307000000518.

  • Bishop, C., B. Etherton, and S. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436, https://doi.org/10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

  • Carrassi, A., and S. Vannitsem, 2011: Parameter estimation using a particle method: Inferring mixing coefficients from sea-level observations. Quart. J. Roy. Meteor. Soc., 137, 435–451, https://doi.org/10.1002/qj.762.

  • Chen, Y., and D. Zhang, 2006: Data assimilation for transient flow in geologic formations via ensemble Kalman filter. Adv. Water Resour., 29, 1107–1122, https://doi.org/10.1016/j.advwatres.2005.09.007.

  • Crisan, D., and A. Doucet, 2002: A survey on convergence results on particle filtering methods for practitioners. IEEE Trans. Signal Process., 50, 736–746, https://doi.org/10.1109/78.984773.

  • Desbouvries, F., and B. Ait-El-Fquih, 2008: Direct, prediction-based and smoothing-based particle filter algorithms. Proc. Fourth World Conf. of the IASC, Yokohama, Japan, IASC, 384–393.

  • Desbouvries, F., Y. Petetin, and B. Ait-El-Fquih, 2011: Direct, prediction- and smoothing-based Kalman and particle filter algorithms. Signal Process., 91, 2064–2077, https://doi.org/10.1016/j.sigpro.2011.03.013.

  • Djuric, P., and M. Bugallo, 2013: Particle filtering for high-dimensional systems. Proc. IEEE Fifth Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), St. Martin, France, IEEE, https://doi.org/10.1109/CAMSAP.2013.6714080.

  • Doucet, A., N. de Freitas, K. P. Murphy, and S. J. Russell, 2000: Rao-Blackwellised particle filtering for dynamic Bayesian networks. Proc. 16th World Conf. on Uncertainty in Artificial Intelligence, Stanford, CA, AUAI, 176–183.

  • Doucet, A., N. de Freitas, and N. Gordon, Eds., 2001a: Sequential Monte Carlo Methods in Practice. Springer-Verlag, 582 pp., https://doi.org/10.1007/978-1-4757-3437-9.

  • Doucet, A., N. J. Gordon, and V. Krishnamurthy, 2001b: Particle filters for state estimation of jump Markov linear systems. IEEE Trans. Signal Process., 49, 613–624, https://doi.org/10.1109/78.905890.

  • Dunne, S., D. Entekhabi, and E. Njoku, 2007: Impact of multiresolution active and passive microwave measurements on soil moisture estimation using the ensemble Kalman smoother. IEEE Trans. Geosci. Remote Sens., 45, 1016–1028, https://doi.org/10.1109/TGRS.2006.890561.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, https://doi.org/10.1029/94JC00572.

  • Evensen, G., 2006: Data Assimilation: The Ensemble Kalman Filter. Springer, 280 pp.

  • Evensen, G., and P. van Leeuwen, 2000: An ensemble Kalman smoother for nonlinear dynamics. Mon. Wea. Rev., 128, 1852–1867, https://doi.org/10.1175/1520-0493(2000)128<1852:AEKSFN>2.0.CO;2.

  • Fig. 1.

A schematic illustration of the steps used to compute a joint analysis ensemble from the previous one using the EnKF–PF.
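The paper describes one EnKF–PF assimilation cycle as a PF step that samples the parameters' ensemble, followed by an EnKF step that updates the state ensemble conditional on the resulting parameters. The sketch below is a minimal toy illustration of that two-step structure, not the authors' implementation: the function `enkf_pf_cycle`, the linear observation operator `H`, multinomial resampling, and the perturbed-observations EnKF variant are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_pf_cycle(X, Theta, y, H, R, forecast):
    """One illustrative EnKF-PF cycle (sketch only, not the paper's algorithm).

    X        : (n, N) state analysis ensemble from the previous cycle
    Theta    : (p, N) parameter ensemble
    y        : (m,)   observation vector
    H        : (m, n) linear observation operator (assumed here)
    R        : (m, m) observation error covariance
    forecast : model propagator, x_k = forecast(x_{k-1}, theta)
    """
    n, N = X.shape
    # Forecast each state member with its own parameter member.
    Xf = np.column_stack([forecast(X[:, i], Theta[:, i]) for i in range(N)])

    # --- PF step on the parameters: weight members by the observation
    #     likelihood, then resample (multinomial resampling assumed). ---
    innov = y[:, None] - H @ Xf                        # (m, N) innovations
    Rinv = np.linalg.inv(R)
    logw = -0.5 * np.einsum('mi,mn,ni->i', innov, Rinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    Theta_a, Xf = Theta[:, idx], Xf[:, idx]

    # --- EnKF step on the state, conditional on the resampled parameters
    #     (stochastic, perturbed-observations variant assumed). ---
    Xm = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = Xm @ Xm.T / (N - 1)                           # sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    Xa = Xf + K @ (Yp - H @ Xf)
    return Xa, Theta_a
```

In this sketch the two filters interact through the ensemble members themselves: the EnKF state update is carried out on the members selected by the PF resampling, rather than exchanging point estimates.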

  • Fig. 2.

Time evolution of the parameters' analysis estimates (dashed blue) and the associated 95% confidence intervals (green) using the (top to bottom) joint PF, joint EnKF, TS–EnKF–PF, and EnKF–PF with 100 members for (left) θ(1) and (right) θ(2). The true parameter values are indicated in red. In the last three filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. All observations were assimilated every four model time steps (1 day in real time).

  • Fig. 3.

Time evolution of the bias (dashed blue) of the first four components of the state and of the parameters, computed as the error between the true values of these variables and the average (over 30 independent repetitions) of their analysis estimates obtained using the (a) EnKF–PF, (b) TS–EnKF–PF, (c) joint EnKF, and (d) joint PF with 100 members. In the first three filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. All observations were assimilated every four model time steps (1 day in real time).
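The bias defined in the Fig. 3 caption (error between the truth and the analysis estimates averaged over independent repetitions) amounts to a simple mean over the repetition axis; the helper below is an illustrative sketch of that computation (the function name `analysis_bias` and the array layout are assumptions, not from the paper).

```python
import numpy as np

def analysis_bias(truth, analyses):
    """Bias of the analysis estimates, as defined in the Fig. 3 caption.

    truth    : (T, d) true values of the tracked variables over T cycles
    analyses : (R, T, d) analysis estimates from R independent repetitions
    Returns the (T, d) bias time series: truth minus the repetition mean.
    """
    return truth - analyses.mean(axis=0)
```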

  • Fig. 4.

Time- and variable-averaged RMSE of the analysis estimates of the (a) joint state–parameter vector, (b) state, and (c) parameters' vector, and (d),(e) time-averaged relative error of the analysis estimates of the marginal parameters, provided by the joint EnKF (blue), TS–EnKF–PF (green), and EnKF–PF (red) as a function of the ensemble size. In all filters, the localization length scale and inflation factor were set to 2 and 1.2, respectively. All observations were assimilated every four model time steps (1 day in real time).
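A "time- and variable-averaged RMSE" as used in Figs. 4–9 can be read as the RMSE over the variables at each assimilation cycle, averaged over cycles; the sketch below shows one plausible such computation (the function name `averaged_rmse` and the averaging order are assumptions, since the caption does not fix them).

```python
import numpy as np

def averaged_rmse(truth, analysis):
    """Time- and variable-averaged RMSE (illustrative reading of Fig. 4).

    truth, analysis : (T, d) arrays of true values and analysis estimates
    over T cycles and d variables. Computes the RMSE over the d variables
    at each cycle, then averages over the T cycles.
    """
    per_cycle = np.sqrt(((analysis - truth) ** 2).mean(axis=1))
    return per_cycle.mean()
```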

  • Fig. 5.

    As in Fig. 4, but as a function of the temporal assimilation period. In all filters, ensemble size, localization length scale, and inflation factor were set to 100, 2, and 1.2, respectively.

  • Fig. 6.

    As in Fig. 4, but as a function of observation error variance. In all filters, ensemble size, localization length scale, and inflation factor were set to 100, 2, and 1.2, respectively.

  • Fig. 7.

    As in Fig. 6, but in the case of an imperfect state model with an error variance of 0.1.

  • Fig. 8.

    As in Fig. 7, but in the case of a strongly nonlinear observation operator.

  • Fig. 9.

    As in Fig. 7, but in the case of a weakly nonlinear observation operator.
