• Aksoy, A., F. Zhang, and J. W. Nielsen-Gammon, 2006: Ensemble-based simultaneous state and parameter estimation with MM5. Geophys. Res. Lett., 33, L12801, doi:10.1029/2006GL026186.

  • Annan, J. D., D. J. Lunt, J. C. Hargreaves, and P. J. Valdes, 2005: Parameter estimation in an atmospheric GCM using the ensemble Kalman filter. Nonlinear Processes Geophys., 12, 363–371, doi:10.5194/npg-12-363-2005.

  • Arulampalam, M. S., S. Maskell, N. Gordon, and T. Clapp, 2002: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process., 50, 174–188, doi:10.1109/78.978374.

  • Baek, S.-J., B. R. Hunt, E. Kalnay, E. Ott, and I. Szunyogh, 2006: Local ensemble Kalman filtering in the presence of model bias. Tellus, 58A, 293–306, doi:10.1111/j.1600-0870.2006.00178.x.

  • Bellsky, T., J. Berwald, and L. Mitchell, 2014: Nonglobal parameter estimation using local ensemble Kalman filtering. Mon. Wea. Rev., 142, 2150–2164, doi:10.1175/MWR-D-13-00200.1.

  • Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724, doi:10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.

  • Carrassi, A., and S. Vannitsem, 2011: State and parameter estimation with the extended Kalman filter: An alternative formulation of the model error dynamics. Quart. J. Roy. Meteor. Soc., 137, 435–451, doi:10.1002/qj.762.

  • Casella, G., and C. Robert, 1996: Rao-Blackwellisation of sampling schemes. Biometrika, 83, 81–94, doi:10.1093/biomet/83.1.81.

  • DelSole, T., and X. Yang, 2010: State and parameter estimation in stochastic dynamical models. Physica D, 239, 1781–1788, doi:10.1016/j.physd.2010.06.001.

  • Doucet, A., N. de Freitas, K. Murphy, and S. J. Russell, 2000a: Rao-Blackwellised particle filtering for dynamic Bayesian networks. Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, 176–183.

  • Doucet, A., S. Godsill, and C. Andrieu, 2000b: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput., 10, 197–208, doi:10.1023/A:1008935410038.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.

  • Friedland, B., 1969: Treatment of bias in recursive filtering. IEEE Trans. Autom. Control, 14, 359–367, doi:10.1109/TAC.1969.1099223.

  • Gillijns, S., and B. De Moor, 2007: Model error estimation in ensemble data assimilation. Nonlinear Processes Geophys., 14, 59–71.

  • Gordon, N. J., D. J. Salmond, and A. F. M. Smith, 1993: Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process., 140, 107–113.

  • Houtekamer, P., and H. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.

  • Kang, J.-S., E. Kalnay, J. Liu, I. Fung, T. Miyoshi, and K. Ide, 2011: "Variable localization" in an ensemble Kalman filter: Application to the carbon cycle data assimilation. J. Geophys. Res., 116, D09110, doi:10.1029/2010JD014673.

  • Koyama, H., and M. Watanabe, 2010: Reducing forecast errors due to model imperfections using ensemble Kalman filtering. Mon. Wea. Rev., 138, 3316–3332, doi:10.1175/2010MWR3067.1.

  • Liu, J. S., and R. Chen, 1998: Sequential Monte Carlo methods for dynamic systems. J. Amer. Stat. Assoc., 93, 1032–1044.

  • Lorenz, E., 2006: Predictability: A problem partly solved. Predictability of Weather and Climate, T. Palmer and R. Hagedorn, Eds., Cambridge University Press, 40–58.

  • Moradkhani, H., S. Sorooshian, H. Gupta, and P. Houser, 2005: Dual state-parameter estimation of hydrological models using ensemble Kalman filter. Adv. Water Resour., 28, 135–147, doi:10.1016/j.advwatres.2004.09.002.

  • Nakano, S., G. Ueno, and T. Higuchi, 2007: Merging particle filter for sequential data assimilation. Nonlinear Processes Geophys., 14, 395–408, doi:10.5194/npg-14-395-2007.

  • Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415–428, doi:10.1111/j.1600-0870.2004.00076.x.

  • Schön, T., F. Gustafsson, and P. Nordlund, 2005: Marginalized particle filters for mixed linear/nonlinear state-space models. IEEE Trans. Signal Process., 53, 2279–2289, doi:10.1109/TSP.2005.849151.

  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640, doi:10.1175/2008MWR2529.1.

  • Stroud, J., and T. Bengtsson, 2007: Sequential state and variance estimation within the ensemble Kalman filter. Mon. Wea. Rev., 135, 3194–3208, doi:10.1175/MWR3460.1.

  • Vossepoel, F. C., and P. J. van Leeuwen, 2007: Parameter estimation using a particle method: Inferring mixing coefficients from sea level observations. Mon. Wea. Rev., 135, 1006–1020, doi:10.1175/MWR3328.1.

  • West, M., and J. Liu, 2001: Combined parameter and state estimation in simulation-based filtering. Sequential Monte Carlo Methods in Practice, A. Doucet et al., Eds., Springer, 197–223.

  • Wikle, C. K., L. M. Berliner, and N. Cressie, 1998: Hierarchical Bayesian space-time models. Environ. Ecol. Stat., 5, 117–154, doi:10.1023/A:1009662704779.

  • Wilks, D., 2005: Effects of stochastic parametrizations in the Lorenz '96 system. Quart. J. Roy. Meteor. Soc., 131, 389–407, doi:10.1256/qj.04.03.

  • Yang, X., and T. DelSole, 2009: Using the ensemble Kalman filter to estimate multiplicative model parameters. Tellus, 61A, 601–609, doi:10.1111/j.1600-0870.2009.00407.x.


Two-Stage Filtering for Joint State-Parameter Estimation

  • 1 Department of Mathematics, University of North Carolina, Chapel Hill, Chapel Hill, North Carolina

Abstract

This paper presents an approach for the simultaneous estimation of the state and unknown parameters in a sequential data assimilation framework. The state augmentation technique, in which the state vector is augmented by the model parameters, has been investigated in many previous studies and some success with this technique has been reported in the case where model parameters are additive. However, many geophysical or climate models contain nonadditive parameters such as those arising from physical parameterization of subgrid-scale processes, in which case the state augmentation technique may become ineffective. This is due to the fact that the inference of parameters from partially observed states based on the cross covariance between states and parameters is inadequate if states and parameters are not linearly correlated. In this paper, the authors propose a two-stage filtering technique that runs particle filtering (PF) to estimate parameters while updating the state estimate using an ensemble Kalman filter (EnKF). These two “subfilters” interact recursively based on the point estimates computed at each stage. The applicability of the proposed method is demonstrated using the Lorenz-96 system, where the forcing is parameterized and the amplitude and phase of the forcing are to be estimated jointly with the state. The proposed method is shown to be capable of estimating these model parameters with a high accuracy as well as reducing uncertainty while the state augmentation technique fails.

Corresponding author address: Naratip Santitissadeekorn, Department of Mathematics, University of North Carolina, Chapel Hill, Phillips Hall, CB 3250, Chapel Hill, NC 27599-3250. E-mail: naratips@email.unc.edu


1. Introduction

Data assimilation (DA) requires a mathematical model that accurately simulates the actual dynamical processes. In many instances, the model contains uncertain parameters that may appear as additive or multiplicative parameters or as so-called closure parameters arising from the parameterization of unresolved subgrid-scale processes. We will refer to parameters related to the stochastic part of the model, such as the variance of a Wiener process, as "stochastic parameters." Using incorrect parameter values in the DA scheme may lead to large errors in the state estimate and inconsistency between the forecast and reality. A key strategy for increasing the effectiveness of numerical prediction of climate, weather, or other geophysical processes is the development of a DA method for simultaneously estimating the model parameters and the state variables, both of which are incompletely known. The problem of joint state-parameter estimation has been investigated in many previous studies. To deal with the uncertainty of model parameters in the context of DA, a commonly used DA approach such as the ensemble Kalman filter (EnKF; Evensen 1994; Houtekamer and Mitchell 1998) or the local ensemble transform Kalman filter (LETKF) in Ott et al. (2004) has been adapted by augmenting the state vector with the uncertain parameters, hence the name "augmented method" (Aksoy et al. 2006; Annan et al. 2005; Baek et al. 2006; Bellsky et al. 2014; Carrassi and Vannitsem 2011; Gillijns and De Moor 2007; Koyama and Watanabe 2010). The standard Kalman update equations are then applied to estimate the combined state-parameter vector. If the dimension of the set of model parameters is comparable to that of the state vector, the augmented state vector becomes significantly larger than that of the original problem, which increases the computational load and introduces inaccuracies in computing covariance matrices.
One approach to avoiding this difficulty is the interacting Kalman filter, whereby two Kalman filters are designed to estimate the states and parameters separately and the two filters interact (Friedland 1969; Koyama and Watanabe 2010; Moradkhani et al. 2005). A more recent approach also successfully deals with local variability of parameters by using the augmented LETKF (Baek et al. 2006; Bellsky et al. 2014; Kang et al. 2011); it computes the Kalman update equations for subdivided local regions in parallel with a dimensionally reduced state vector. Apart from this issue of extra computational load in estimating parameters, filter divergence is another important issue for joint state-parameter estimation problems. In all Kalman-type methods, inference about the model parameters relies substantially on the (flow dependent) cross covariance between the state variables and the model parameters (Carrassi and Vannitsem 2011), which can be approximated from the ensemble forecast in ensemble-based methods. Therefore, in situations where the nonlinearity induced by augmenting parameters as artificial states is large, the augmented method may fail. Note that this highly nonlinear feedback of the parameters to the dynamical states tends to be more prominent for multiplicative parameters than for additive ones, as investigated in Yang and DelSole (2009), where a temporally smoothed parameter update equation had to be used instead of the persistence model to avoid model blowup, as similarly done in Koyama and Watanabe (2010). This issue may, however, depend on the model as well as on the prior mean and covariance, since many applications of the augmented method have also been demonstrated for multiplicative parameters (Aksoy et al. 2006; Annan et al. 2005). Such nonlinearity will also be higher when the measurement frequency is low.
In the case of stochastic parameters, the augmented techniques are usually problematic, as demonstrated in DelSole and Yang (2010). Fully nonlinear approaches such as a particle method (Vossepoel and Van Leeuwen 2007) or a hierarchical Bayesian model approach (Wikle et al. 1998) have been studied for estimating parameters in some geophysical models, but their utility requires a large ensemble size, hence a high computational cost, and a systematic way to incorporate a spatial correlation structure if local variability of those parameters is of concern.

In this paper, we focus on the case in which the dimension of the state vector is large while that of the nonadditive model parameters is comparatively small. We then apply the EnKF to estimate the state and separately estimate the model parameters with the particle filter, letting the two subfilters interact recursively; hence the name "two-stage" filtering. The results will be compared with those of the augmented EnKF, which is reviewed in appendix A, to confirm its ineffectiveness in such a situation and the important gains made by using the two-stage filter. The standard particle filtering (PF) technique is summarized in appendix B. In section 2, the two-stage filtering method is explained and related to a simplification of the well-known Rao-Blackwellized particle filter (RBPF; Casella and Robert 1996; Doucet et al. 2000b). In section 3, we test the proposed method using the Lorenz-96 model (Lorenz 2006) and assume the "perfect model" scenario, where the only source of model error is the uncertain parameters. In section 4, we demonstrate the accuracy of the method in estimating the stochastic parameters of an autoregressive AR(1) process. Section 5 addresses the "imperfect model" case using the fast-slow Lorenz-96 model as a proof of concept, in which closure parameters arising from a parameterization of the fast-scale process are estimated.

2. Two-stage filtering

Let x_k be the m-dimensional model state vector at time t_k and θ be the q-dimensional vector specifying the model parameters, whose true values are constant but unknown. Let f(·, θ) be a map that propagates the state at time t_{k−1} to time t_k. We consider the combined vector w_k = (x_k, θ_k) as the new state vector that is updated according to the dynamical system

x_k = f(x_{k−1}, θ_{k−1}) + η_k,  θ_k = θ_{k−1},  (1)

where η_k is an uncorrelated, mutually independent, white Gaussian noise sequence with zero mean and covariance matrix Q. Let y_k be the r-dimensional observation vector, which is related to the model state by the following equation:

y_k = H x_k + ε_k,  (2)

where ε_k is assumed to be zero-mean Gaussian white noise with covariance matrix R. The observation operator H is assumed to be linear only to simplify notation; our discussion below remains valid without this assumption. In most situations, the model parameters are not observed and the observation operator for the augmented system has the following form:

H_w = [H 0].  (3)

In the augmented EnKF (see appendix A), the parameter vector θ is updated by a linear regression according to the discrepancy between the observations and the model forecasts and the cross covariance between the state and parameter vectors. Therefore, the state-parameter cross covariance must be carefully modeled to ensure the accuracy of the parameter and state estimates. In the EnKF, this is tantamount to setting the initial ensemble with the "correct" statistics. However, this is difficult to guarantee in general. In addition, there are cases where the state and parameters are not linearly correlated; for example, if the parameter is the variance of the Wiener process added to a linear equation, then the innovation contains no information about this parameter, as explained in DelSole and Yang (2010).
In the Bayesian filtering framework, we aim to recursively evaluate the filtering distribution p(w_k | y_{1:k}), where y_{1:k} denotes {y_1, …, y_k}. The PF (Doucet et al. 2000b; Gordon et al. 1993) provides an approximate solution to this problem without assumptions of linearity or Gaussian uncertainty, and it is not limited to estimating only the first two moments as in Kalman-type methods (see appendix B). However, the PF is computationally intractable and impractical for high-dimensional problems (Snyder et al. 2008). Therefore, many modified PFs have been developed to reduce the overall computational load in comparison to the standard PF. The two-stage filtering proposed in this paper is motivated by an approach used in the RBPF (Casella and Robert 1996; Doucet et al. 2000a,b), which runs a PF on part of the state while updating the corresponding particles for the other part of the state using a conditional Kalman filter (KF). Supposing that the model state evolves in a linear-Gaussian fashion, we may consider the following factorization for the joint state-parameter estimation:

p(x_k, θ_k | y_{1:k}) = p(x_k | θ_k, y_{1:k}) p(θ_k | y_{1:k}).  (4)

Although p(x_k | θ_k, y_{1:k}) is assumed to be Gaussian for a given set of parameters, p(θ_k | y_{1:k}) is generally non-Gaussian. Running the standard PF for the combined state w can be computationally expensive if the dimension of w is large, and it does not efficiently exploit the linear structure of the model state. The key idea of the RBPF is that a PF method should be used on the parameter vector θ, which is assumed to have a small dimension in this paper, while a KF method should be applied to the state vector x. To this end, the RBPF method approximates p(θ_k | y_{1:k}) by weighted particles and we rewrite Eq. (4) as

p(x_k, θ_k | y_{1:k}) ≈ Σ_{i=1}^{N} ω_k^(i) p(x_k | θ_k^(i), y_{1:k}) δ(θ_k − θ_k^(i)),  (5)

where ω_k^(i) denotes the particle weight. Observe that N KFs must be used to evaluate p(x_k | θ_k^(i), y_{1:k}) in the above equation for each i. In general, θ_k^(i) can be sampled from any appropriate proposal density. For simplicity, we will sample from the transition density p(θ_k | θ_{k−1}^(i)), in which case we can use the standard Bayesian analysis (Arulampalam et al. 2002) to show that the particle weights can be recursively updated by

ω_k^(i) ∝ ω_{k−1}^(i) p(y_k | θ_k^(i), y_{1:k−1}).  (6)

Note that we do not naturally have a dynamical rule for the parameter, so we have to artificially design p(θ_k | θ_{k−1}). Some choices of the parameter dynamics will be discussed later. The above predictive density of observations conditioned on the parameter serves as a likelihood function and it can be evaluated by

p(y_k | θ_k^(i), y_{1:k−1}) = N(y_k; H x̄_k^b(θ_k^(i)), H P_k^b(θ_k^(i)) H^T + R),  (7)

where the background mean x̄_k^b and the background covariance P_k^b are computed from the particles using the fixed parameter value θ_k^(i), and the remaining quantities are defined in appendix A. It is clear that the computational cost per particle is generally more expensive than applying the standard PF to the combined state w; in particular, we have to run a KF N times at each assimilation cycle. However, the RBPF can still be expected to improve the efficiency over the standard PF since fewer particles are required to achieve a given convergence (Doucet et al. 2000a,b; Schön et al. 2005). Also, the RBPF approaches have been reported to significantly reduce the variance of the particle weights in comparison to the standard PF (Doucet et al. 2000a). The two-stage filtering in this paper adopts the state partitioning approach from the RBPF but uses the EnKF instead of the KF to compute p(x_k | θ_k^(i), y_{1:k}) in Eq. (5). Also, it reduces the overall computational load by simplifying the standard RBPF method as described below.
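For concreteness, the Gaussian predictive likelihood of Eq. (7) can be evaluated directly from a background ensemble generated with a fixed parameter value. Below is a minimal Python/NumPy sketch; the function name and the m × Ne array layout are our own illustrative choices, not from the paper:

```python
import numpy as np

def rbpf_likelihood(y, Xb, H, R):
    """Gaussian predictive likelihood p(y_k | theta_i, y_1:k-1) of Eq. (7),
    computed from the background ensemble Xb (shape m x Ne) that was
    propagated with the fixed parameter value theta_i."""
    xb = Xb.mean(axis=1)
    A = Xb - xb[:, None]
    Pb = A @ A.T / (Xb.shape[1] - 1)      # ensemble background covariance
    S = H @ Pb @ H.T + R                  # innovation covariance
    d = y - H @ xb                        # innovation
    _, logdet = np.linalg.slogdet(2.0 * np.pi * S)
    return float(np.exp(-0.5 * (logdet + d @ np.linalg.solve(S, d))))
```

Each parameter particle would call this once per cycle with its own ensemble, and the returned value multiplies the particle's weight as in Eq. (6).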
Like the RBPF, our two-stage filtering uses a PF for estimating the parameter vector θ and an EnKF for the model state vector x. However, we replace p(x_k | θ_k, y_{1:k}) in Eq. (5) by

p(x_k | θ_k, y_{1:k}) ≈ p(x_k | θ̂_k, y_{1:k}),  (8)

where θ̂_k is a point estimate based on p(θ_k | y_{1:k}) (i.e., the parameter is fixed and treated as an observation). We also make a simplification:

p(y_k | θ_k^(i), y_{1:k−1}) ≈ N(y_k; H f(x̂_{k−1}, θ_k^(i)), R),  (9)

where x̂_{k−1} is the state estimate from the EnKF in the (k − 1)th step. This means that in computing Eq. (9), we assume the state to be "known" without uncertainty, in contrast to Eq. (7), which has to be computed based on the analysis covariance. The above simplifications together result in a sequential "interaction" of one PF and one single EnKF through the point parameter estimate from the PF and the mean state estimate from the EnKF; hence, two-stage filtering. Of course, there will be a loss in performance with this simplification. If p(θ_k | y_{1:k}) is multimodal, passing only the mean of this distribution to a single EnKF may result in filter divergence, since the background ensemble in the EnKF may diverge from a high-probability region, and likewise for passing only the mean of the state to the PF step. In addition, neglecting the predictive covariance in the weight update may lead to underestimating the true covariance if this covariance is expected to be much larger than the observational uncertainty. Therefore, we restrict our numerical experiments in the subsequent sections to cases where the flow maps do not produce a multimodal forecast distribution.

The algorithm for the two-stage filtering can now be summarized below.

  • Initialization
    • Sample initial particles for the parameter, θ_0^(i), i = 1, …, N_p.
    • Choose an initial distribution for the state, say p(x_0).
    • Sample initial state ensemble members x_0^(j), j = 1, …, N_e. For every assimilation cycle k, we perform the following:
  • PF stage
    • Suppose that we have the point estimate x̂_{k−1} for the state variable from the previous step.
    • Artificially "move" the parameter particles according to some artificial (stochastic/deterministic) dynamics, say θ_k^(i) ~ p(θ_k | θ_{k−1}^(i)), and update the predicted observation:
      ŷ_k^(i) = H f(x̂_{k−1}, θ_k^(i)).  (10)
    • Compute the unnormalized weights in Eq. (6) by using the approximation in Eq. (9).
    • Normalize the weights to obtain the weighted particles {θ_k^(i), ω_k^(i)}.
    • If necessary, resample particles based on an inspection of the effective sample size N_eff = 1/Σ_i (ω_k^(i))². A small value of N_eff indicates a large variance of the particle weights and, hence, severe weight degeneracy.
    • Compute a point estimate θ̂_k from the weighted particles (e.g., the weighted ensemble mean).
  • EnKF stage
    • Update the background ensemble to obtain the predicted observation based on the parameter estimate θ̂_k from the above PF stage:
      x_k^{b,(j)} = f(x_{k−1}^{a,(j)}, θ̂_k),  ŷ_k^(j) = H x_k^{b,(j)}.  (11)
    • Use the EnKF to obtain the analysis ensemble and the analysis distribution N(x̂_k^a, P_k^a), where x̂_k^a and P_k^a are the mean and covariance of the analysis ensemble, respectively.
    • Set x̂_k = x̂_k^a.
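The interaction of the two stages can be illustrated end to end on a toy problem. The sketch below (Python/NumPy) applies the algorithm to a scalar AR(1) model with one unknown multiplicative parameter rather than the paper's Lorenz-96 experiments; the noise variances, particle counts, resampling threshold, and Liu–West shrinkage value are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem: scalar AR(1) state x_k = theta*x_{k-1} + eta, theta unknown,
# observed directly with noise. All settings here are illustrative.
theta_true, q, r = 0.8, 0.5, 0.2      # true parameter, process/obs variances
T, Np, Ne = 500, 200, 20              # cycles, PF particles, EnKF members

# synthetic truth and observations
x, ys = 0.0, []
for _ in range(T):
    x = theta_true * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

theta = rng.uniform(0.0, 1.0, Np)     # PF stage: parameter particles
w = np.full(Np, 1.0 / Np)             # particle weights
ens = rng.normal(0.0, 1.0, Ne)        # EnKF stage: state ensemble
x_hat = ens.mean()

for y in ys:
    # PF stage: weight particles by p(y | theta, x_hat), cf. Eqs. (6), (9);
    # q + r is the predictive variance given the "known" previous state
    w = w * np.exp(-0.5 * (y - theta * x_hat) ** 2 / (q + r))
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * Np:          # low effective sample size
        idx = rng.choice(Np, Np, p=w)            # simple multinomial resample
        theta, w = theta[idx], np.full(Np, 1.0 / Np)
        a = 0.99                                 # Liu-West shrinkage jitter
        theta = (a * theta + (1 - a) * theta.mean()
                 + rng.normal(0.0, np.sqrt((1 - a**2) * theta.var() + 1e-12), Np))
    theta_hat = np.sum(w * theta)                # point estimate handed to EnKF

    # EnKF stage: propagate ensemble with theta_hat, then a stochastic
    # (perturbed-observation) scalar Kalman update
    ens = theta_hat * ens + rng.normal(0.0, np.sqrt(q), Ne)
    k = ens.var(ddof=1) / (ens.var(ddof=1) + r)
    ens = ens + k * (y + rng.normal(0.0, np.sqrt(r), Ne) - ens)
    x_hat = ens.mean()                           # point estimate handed to PF
```

After the assimilation cycles, theta_hat should concentrate near theta_true, illustrating how the point estimates shuttle information between the two subfilters.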
Note that we implement the residual resampling method of Liu and Chen (1998), which yields an improvement over the simple multinomial resampling used in the original bootstrap filter of Gordon et al. (1993). Although we will not test the behavior of different resampling techniques in this work, we believe that the choice may not be crucial since the particles will be jittered. The resampling step is typically performed when the effective sample size N_eff falls below a threshold value, and we follow this practice here.
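A minimal implementation of the effective-sample-size diagnostic and residual resampling might look as follows (Python/NumPy; the function names are ours):

```python
import numpy as np

def effective_sample_size(w):
    """N_eff = 1 / sum(w_i^2); equals N for uniform weights."""
    return 1.0 / np.sum(w ** 2)

def residual_resample(w, rng):
    """Residual resampling: copy particle i floor(N*w_i) times
    deterministically, then fill the remaining slots by multinomial
    sampling from the residual weights. Returns particle indices."""
    N = len(w)
    counts = np.floor(N * w).astype(int)
    n_rest = N - counts.sum()
    if n_rest > 0:
        residual = N * w - counts
        residual /= residual.sum()
        counts += np.bincount(rng.choice(N, size=n_rest, p=residual),
                              minlength=N)
    return np.repeat(np.arange(N), counts)
```

The deterministic copies remove most of the Monte Carlo variability of plain multinomial resampling, which is the improvement referred to above.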

The choice of the artificial dynamics p(θ_k | θ_{k−1}) also plays a pivotal role in the success of this method. Since the true parameters are assumed to be constant in time, the so-called persistence model has been commonly used in several studies; it is given by

θ_k^(i) = θ_{k−1}^(i).  (12)

The persistence model is, however, vulnerable to the "sample attrition" issue: after some assimilation steps, only a few particles with significant weights remain for the next assimilation steps, so the diversity of particles is lost. In addition, if the initial parameters are misspecified to begin with, we can only resample from the same set of uninformative particles at every assimilation cycle in the PF stage. This will eventually make the model state ensemble drift too far away from the observations. To deal with these issues, small random disturbances may be added to the sample in the resampling step. One natural way is to replace the persistence model by a random walk model, as originally suggested in Gordon et al. (1993):

θ_k^(i) = θ_{k−1}^(i) + ζ_k,  ζ_k ~ N(0, Σ),  (13)

for some given covariance matrix Σ. This model generates a new set of parameter particles at every assimilation cycle. In Gordon et al. (1993), Σ is a diagonal covariance matrix with a (tunable) standard deviation σ, which is inversely proportional to the dth root of the sample size and proportional to the length of the ensemble support. Too large a σ would diffuse the distribution, while too small a σ would not be able to practically address the sample attrition issue. In fact, regardless of the value of σ, the independent random movement of parameter particles will always result in overdispersion, in the sense that the Monte Carlo variance of the random walk model will always be larger than the ensemble variance before adding the random disturbance. Therefore, the posterior distribution will eventually become far too diffuse due to the buildup of the overdispersed covariance. This issue has long been recognized, and a solution was proposed by West and Liu (2001). In their work, a new artificial model for the parameters is given by

θ_k^(i) = α θ_{k−1}^(i) + (1 − α) θ̄_{k−1} + ζ_k,  ζ_k ~ N(0, (1 − α²) V_{k−1}),  (14)

where θ̄_{k−1} is the mean of the parameter ensemble and V_{k−1} is the sample covariance of the parameter vector. Clearly, this model is designed to "shrink" the new set of particles toward the mean to a degree determined by α; therefore, the overdispersion of parameter particles is suppressed. In the framework of the "smoothing kernel" or "kernel dressing," the optimal value of α can be calculated at each assimilation cycle for a given "target" variance, which is typically the variance of the parameter ensemble before applying any artificial dynamics, but this is usually inconvenient in practice and a heuristic choice of α may be used instead [see West and Liu (2001) for more details]. Alternatively, one may consider another recent technique called the "merging particle filter" (Nakano et al. 2007), whereby the particles are resampled M times, instead of once, and the new particles are averaged over these multiple sets of resampled particles to regain particle diversity. The weighted average is designed to preserve the Monte Carlo mean and variance of the particles before the resampling process.
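The three choices of artificial parameter dynamics can be sketched as follows (Python/NumPy, particles stored row-wise; for the Liu–West model we use the common diagonal simplification in which the jitter variance is (1 − α²) times the current sample variance, so the ensemble variance is preserved in expectation):

```python
import numpy as np

def persistence(theta):
    """Eq. (12): particles are carried forward unchanged."""
    return theta.copy()

def random_walk(theta, sigma, rng):
    """Eq. (13): independent jitter; restores diversity but the ensemble
    variance grows by sigma**2 every cycle (overdispersion)."""
    return theta + rng.normal(0.0, sigma, size=theta.shape)

def liu_west(theta, alpha, rng):
    """Eq. (14): shrink particles toward the ensemble mean, then add noise
    with variance (1 - alpha**2) * V so the total variance is preserved."""
    mean = theta.mean(axis=0)
    var = theta.var(axis=0)
    jitter = rng.normal(0.0, np.sqrt((1.0 - alpha ** 2) * var),
                        size=theta.shape)
    return alpha * theta + (1.0 - alpha) * mean + jitter
```

Comparing the ensemble variance before and after each move makes the overdispersion of the random walk, and its suppression by the shrinkage, easy to verify numerically.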

As for the EnKF step, we implement the "stochastic" technique, in which observations are perturbed additively by random realizations from the measurement-noise probability distribution so that the empirical ensemble covariance is consistent with the original Kalman update formulation (Burgers et al. 1998). As is common in nonlinear data assimilation, covariance inflation may be applied by multiplying the ensemble anomalies by a (scalar) inflation factor. In addition, localization of the observations in the EnKF is implemented through the covariance localization method using Gaspari and Cohn's fifth-order correlation function, whose shape is determined by the localization length scale.
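A sketch of this analysis step, with multiplicative inflation and Gaspari–Cohn covariance localization, is given below (Python/NumPy; the explicit-covariance formulation shown is one standard way of writing the stochastic EnKF, not necessarily the paper's exact implementation, and the demo values are arbitrary):

```python
import numpy as np

def gaspari_cohn(z):
    """Gaspari-Cohn fifth-order compactly supported correlation function;
    z is distance divided by the localization length scale (support 2)."""
    z = np.abs(z)
    f = np.zeros_like(z, dtype=float)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    z1, z2 = z[m1], z[m2]
    f[m1] = -0.25*z1**5 + 0.5*z1**4 + 0.625*z1**3 - (5/3)*z1**2 + 1.0
    f[m2] = (z2**5)/12 - 0.5*z2**4 + 0.625*z2**3 + (5/3)*z2**2 \
            - 5.0*z2 + 4.0 - 2.0/(3.0*z2)
    return f

def enkf_analysis(Xb, y, H, R, infl=1.0, loc=None, rng=None):
    """Stochastic (perturbed-observation) EnKF update (Burgers et al. 1998).
    Xb: (m, Ne) background ensemble; loc: optional (m, m) localization
    matrix applied to the background covariance by a Schur product."""
    rng = np.random.default_rng() if rng is None else rng
    Ne = Xb.shape[1]
    xb = Xb.mean(axis=1, keepdims=True)
    A = infl * (Xb - xb)                            # inflated anomalies
    Pb = A @ A.T / (Ne - 1)
    if loc is not None:
        Pb = loc * Pb                               # covariance localization
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)  # Kalman gain
    Yo = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    Xf = xb + A
    return Xf + K @ (Yo - H @ Xf)

# demo: four state variables, the first two observed
rng = np.random.default_rng(0)
Xb = 2.0 + rng.normal(0.0, 1.0, (4, 200))
H, R = np.eye(2, 4), 0.1 * np.eye(2)
dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
Xa = enkf_analysis(Xb, np.zeros(2), H, R, infl=1.02,
                   loc=gaspari_cohn(dist / 2.0), rng=rng)
```

The explicit m × m covariance is fine for small demos; large systems would instead localize observation by observation, but the Schur-product idea is the same.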

3. Case study 1: Lorenz-96 with parameterized forcing

In this numerical experiment, we assume that the dynamics of the "true" state are governed by the Lorenz-96 model (Lorenz 2006):

dx_j/dt = (x_{j+1} − x_{j−2}) x_{j−1} − x_j + F_j,  (15)

where j = 1, …, 40 with cyclic indices and F is the forcing function. We assume that F is parameterized by
e16
where the parameter vector θ is unknown and has to be estimated. The perfect-model case is assumed in this experiment; hence, the forecast model also uses Eqs. (15) and (16). Therefore, the uncertainty in the unknown θ is the only source of model error. Note that this setup allows us to assess our estimation skill by comparing the parameter estimates with the true parameter values. In the presence of other sources of model error, however, it would be better to emphasize parameter estimates that make the model output fit the observations as well as possible, rather than the error in the parameter estimates.
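For reference, Eq. (15) can be integrated with the fourth-order Runge–Kutta scheme as follows (Python/NumPy; the constant forcing F = 8, step size, and spinup length here are illustrative stand-ins for the experiment's settings):

```python
import numpy as np

def lorenz96_rhs(x, F):
    """Right-hand side of Eq. (15) with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# spinup from a small perturbation of the steady state x_j = F
x = 8.0 * np.ones(40)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x, F=8.0, dt=0.05)
```

With 40 variables and a forcing of this size the trajectory is chaotic, which is what makes the joint state-parameter estimation nontrivial.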

The model in Eq. (15) is numerically solved by the fourth-order Runge–Kutta method. We initialize the model state ensemble by a spinup run of 30 000 time steps and use the simulation from the next 6000 time steps in the experiment. A single member from the ensemble is then used as the "truth," and the observations are constructed by adding Gaussian noise to the odd-indexed state variables; hence, there are 20 observations. The parameter particles, however, have no "climatological" information, so we initialize each parameter from a normal density. These prior densities are not correctly informative, since the true parameters have low probability under them. The shrinkage parameter α in the Liu–West model in Eq. (14) is held fixed throughout.

We first run the augmented EnKF, with the assimilation interval fixed, for various values of the inflation factor and localization length scale in order to choose reasonable values for these EnKF parameters. Here we apply the covariance filtering only to the state covariance [see Eq. (A1)], since θ is global. The absolute errors, averaged over 1200 assimilation cycles and 10 independent experimental runs, are plotted in Fig. 1. Based on the errors in the state estimates, the “optimal” length scale for covariance filtering should be less than about 4, while the inflation factor has no significant effect on the error. Repeating the analysis for other settings gives similar results (not shown). Therefore, we set the localization length scale to 2 for all subsequent experiments in this section and keep the inflation factor fixed. For the two-stage filtering in this section, the number of particles in the PF stage is always set to 0.6N, where N is the combined number of particles in the PF and EnKF stages.

Fig. 1.

Errors in the state and parameter estimates by the augmented EnKF for various values of the inflation factor and localization length scale.

Citation: Monthly Weather Review 143, 6; 10.1175/MWR-D-14-00176.1

Figure 2 shows the errors in the state and parameter estimates for various N and assimilation intervals. The error is averaged over the final 100 assimilation cycles and 10 experimental runs; the averaged root-mean-square error (RMSE) is plotted for the state estimates and the relative error for the parameter estimates. The assimilation interval sets the degree of nonlinearity in this experiment: a short interval leads to an approximately linear flow map, and longer intervals significantly elevate the nonlinearity of the flow map, as previously demonstrated in Stroud and Bengtsson (2007). As seen in Fig. 2, the errors in the state estimates increase for longer assimilation windows, and the augmented method performs slightly better for the state estimates. We must keep in mind, however, that for the same total number of particles, the two-stage method has fewer particles with which to update the state. Note that while the errors in the observed states remain well below the noise standard deviation (which is unity), those of the unobserved states approach or exceed it, and for the longest assimilation interval all filters fail to provide a reliable inference for the unobserved states. This is attributable to the transition from a nearly linear regime to a strongly nonlinear flow map, which undermines the validity of the linear inference in the Kalman update equations. As for the parameter estimates, the augmented method is highly accurate for the shortest assimilation interval, but the errors grow as the interval increases. In contrast, the errors in the parameter estimates for the two-stage filtering remain low for all N and do not show the same trend (i.e., they do not noticeably increase with the assimilation interval). A possible explanation is that a large spread in the ensemble forecast of the augmented method makes the analysis increment for the parameter update smaller, as is clear from Eq. (A5). The two-stage method is immune to this because the state and parameters are updated separately.

Fig. 2.

Comparison of the error in the (un)observed state and parameter estimates for the two-stage filtering with the Liu–West model (blue) and the augmented EnKF (black) for various settings, based on 10 different experimental runs.


The role of the Liu–West model in the two-stage filtering is to mitigate the rapid collapse of parameter particles, which is particularly essential for small sample sizes, as can be seen from the comparison between the two-stage filtering with the Liu–West model and with the persistence model in Fig. 3. For a small sample size, the initial parameter particles may be absent from the neighborhood of the true parameters; if the persistence model is used, significant weight can then concentrate in the neighborhoods of incorrect parameters when the particles collapse, which is visible as an increase in the parameter-estimate errors for the persistence model as N becomes smaller. Some type of “jittering” is therefore needed to mitigate this issue, although its advantage becomes less evident for large sample sizes.
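A minimal sketch of the Liu–West kernel-shrinkage jitter follows (the shrinkage parameter a and the variance factor 1 − a² follow West and Liu (2001); the exact form of Eq. (14) may differ):

```python
import numpy as np

def liu_west_jitter(theta, a=0.99, rng=None):
    """Liu-West kernel shrinkage: shrink particles toward the ensemble mean and
    add Gaussian jitter so the first two moments of the sample are preserved.
    theta: (n_particles, n_params) parameter particles; a: shrinkage in (0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = theta.mean(axis=0)
    var = theta.var(axis=0)
    h2 = 1.0 - a**2                        # jitter variance factor
    shrunk = a*theta + (1.0 - a)*mean      # pull each particle toward the mean
    return shrunk + rng.normal(0.0, np.sqrt(h2*var), size=theta.shape)
```

The shrinkage reduces the sample variance by a², and the added noise restores exactly the missing 1 − a², so the jittered cloud keeps the original mean and variance while avoiding the weight degeneracy of the persistence model.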

Fig. 3.

Comparison of the error in the (un)observed state and parameter estimates for the two-stage filtering with the Liu–West model (blue) and with the persistence model (red) for various settings, based on 10 different experimental runs.


4. Case study 2: AR(1)

In contrast to the previous experiment, which estimated deterministic parameters, this experiment investigates the ability of the augmented and two-stage filtering methods to estimate a stochastic parameter. To this end, we use the standard autoregressive [AR(1)] stochastic model given by

x_{k+1} = \phi\, x_k + w_k, \qquad w_k \sim \mathcal{N}(0, \sigma), \qquad (17)

where ϕ and σ are the damping coefficient and the variance of the process, respectively. It has been shown by DelSole and Yang (2010) that the parameter σ cannot be updated correctly by the augmented method, owing to the lack of dependence between the mean forecast and an additive stochastic parameter in a linear model. In the following experiment, we generate the truth from a single realization of the system in Eq. (17), and the observations are generated from the truth by adding independent Gaussian noise. The initial priors for ϕ and σ are sampled from uniform distributions. The results for 10 experiments with different realizations of the observation noise are shown in Fig. 4. The Liu–West model is used to artificially move the particles around. Both methods clearly give reasonably accurate estimates for ϕ, whereas accurate estimates for the stochastic parameter σ are obtained only by the two-stage filtering. Figure 5 compares the absolute error, averaged over 10 experimental runs, for various numbers of particles in the PF stage and assimilation intervals; the number of particles in the EnKF stage is fixed at 20. Clearly, a large number of particles is required for a long assimilation interval, and the most efficient number of particles for this experiment lies at the corner of the L-shaped trend in the error. However, an error of about 10% can already be achieved with a smaller number of particles.
Fig. 4.

The convergence of the point estimates (mean) of the autoregressive parameters ϕ and σ. The dashed lines show the true parameters. The number of particles in the EnKF stage is 20.


Fig. 5.

Relative error for parameters ϕ and σ.


5. Case study 3: Fast–slow Lorenz-96 system

We test our method in the case where the model includes a so-called “closure parameter,” as may arise from the parameterization of unresolved physical processes. We consider the following fast–slow variant of the Lorenz-96 model, in which the slow variables x_i are forced by the fast variables y_{j,i}:

\frac{dx_i}{dt} = (x_{i+1} - x_{i-2})\,x_{i-1} - x_i + F - \frac{hc}{b}\sum_{j} y_{j,i}, \qquad \frac{dy_{j,i}}{dt} = c\,b\,y_{j+1,i}\,(y_{j-1,i} - y_{j+2,i}) - c\,y_{j,i} + \frac{hc}{b}\,x_i, \qquad (18)

where both the slow index i and the fast index j are cyclic. We fix the forcing F, the coupling strength h, the time-scale separation c, and the magnitude of the fast component b. It will be convenient to denote the net fast-scale forcing on the ith slow variable by U_i = -(hc/b)\sum_j y_{j,i}. We use the fourth-order Runge–Kutta method to integrate this fast–slow system and generate the truth time series for x_i and y_{j,i}; the partial observations are then constructed by adding Gaussian noise to the true slow states. In particular, we generate an ensemble of random samples within a small volume and run a long spinup, which results in a sample cloud that should cover the attractor of the system. We then choose the ensemble mean as the initial truth and integrate it further to generate the truth time series.
In the following experiments, we assume that only the physics of the slow variables is known, so we use a forecast model for the slow variables that accounts for the effect of the (unresolved) fast-scale variables only through a stochastic parameterization in terms of the resolved variables. In particular, the forecast model is given by

\frac{dx_i}{dt} = (x_{i+1} - x_{i-2})\,x_{i-1} - x_i + F + b_0 + b_1 x_i + e_i(t), \qquad (19)

where the first-order polynomial b_0 + b_1 x_i represents an approximation of the unresolved forcing and e_i(t) is the stochastic forcing that represents the uncertainty left by the deterministic parameterization. Following the study of parameterization in the Lorenz-96 system by Wilks (2005), the deviation from the fitted line is modeled as an independent AR(1) process for each slow variable x_i:

e_i(t_{k+1}) = \phi\, e_i(t_k) + \sigma\sqrt{1-\phi^2}\; z_k, \qquad (20)
where z_k \sim \mathcal{N}(0,1). Thus, our goal is to estimate the deterministic parameters b_0 and b_1 and the stochastic parameters ϕ and σ. We plot the true forcing as a function of the true state variables; the result is shown as the scatterplot in Fig. 6. Clearly, from the perspective of knowing only the slow process, there is uncertainty in the fast-scale forcing for a given slow state, and this uncertainty is larger for larger values of the state. Nevertheless, the trend of the data cloud in Fig. 6 looks reasonably linear, and the coefficients b_0 and b_1 of the fitted line are obtained by least squares. Based on these fitted deterministic parameters, we can compute the residual

r_i(t_k) = U_i(t_k) - b_0 - b_1 x_i(t_k), \qquad (21)

where U_i is the true fast-scale forcing. The parameter ϕ can then be estimated from the lag-1 autocorrelation coefficient of the time series of r_i for each i, which yields almost the same result for all i, as shown in Fig. 6. We then use the estimate of ϕ for each i to fit the variance; the results are also shown in Fig. 6. However, we should not assert that these fitted parameters are the true parameters, since all four parameters in Eqs. (19) and (20) should be approximated simultaneously to account for the nonlinear feedback of the stochastic parameters, which can change the correlation between the state and the deterministic parameters. It should also be noted that the model for e_i is imperfect in that its variance is independent of the state, contradicting the behavior noted above, and that the temporal autocorrelation of the model decreases exponentially with the time lag, whereas the actual autocorrelation behaves differently, as shown in Fig. 6.
Fig. 6.

(left) A scatterplot of the true forcing against the true slow state, along with the straight line obtained by least squares fitting. (middle) The estimates of the parameters ϕ and σ by a brute-force approach. (right) Comparison between the actual temporal autocorrelation of the true forcing and that implied by Eq. (20) for different values of ϕ.


We run 10 different experiments starting from different initial ensembles drawn independently from the same prior distribution. The four parameters are initially drawn from prescribed prior distributions, and the initial state ensemble is generated from a long spinup of the fast–slow system in Eq. (18). As the forecast model, we numerically solve Eq. (19) with a fixed assimilation interval. For the two-stage filtering, we use 400 particles in the PF stage and 50 ensemble members in the EnKF stage; hence, we use 450 ensemble members in the augmented EnKF for a fair comparison.

In Fig. 7, the evolution of the mean estimates over 1600 assimilation cycles shows the difference between the online parameter estimates and those obtained from the offline fitting with the truth. The marginal posterior distributions of the parameters at the final assimilation cycle are also shown in Fig. 7 for the two-stage filtering. The distributions for the deterministic coefficients contain the offline-fitted values within their supports. Nevertheless, we do not expect the offline-fitted values to be the true parameters, as already explained. Similarly, the means of the posterior marginal distributions for ϕ and the variance parameter differ from the offline-fitted values shown in Fig. 6. As for the augmented EnKF, the results exhibit divergence, in which most ensemble members drift away from the observations and the filter eventually “blows up,” so they are not plotted here.

Fig. 7.

(top) The evolution of the mean estimate for each parameter. (bottom) The posterior distributions of the parameters after 1500 assimilation cycles from 10 independent experiments. The vertical lines show the offline-fitted values.


As demonstrated in Fig. 7, the spread across experimental runs of the estimates for ϕ and σ is larger than that of the deterministic coefficients. We investigate this issue further by running the forecast without assimilating data, as well as by running the EnKF with the deterministic parameters held fixed. Thus, we fix the deterministic coefficients at the averages of the final estimates over the 10 experiments and vary only the stochastic parameters. For the forecast run, 200 initial samples are drawn from a normal distribution whose mean and covariance are the ensemble mean and covariance of the initial cloud obtained from the spinup run. These initial samples are then propagated forward in time for a given set of model parameters, and the sample mean is used as the forecast estimate, on which the error is based. The RMSE of the forecast run without assimilating data, averaged over a fixed period, is shown in Fig. 8. It can be seen that there exists an optimal band of the variance parameter for each ϕ, and the offline-fitted values of ϕ and σ fall into this optimal band (see again Fig. 6). For the state-only data assimilation using the EnKF, the plot of RMSEs averaged over 1600 assimilation cycles is shown in Fig. 9; it suggests that the RMSE does not depend much on the parameter ϕ as long as the variance is sufficiently small. This result is similar to that of the forecast run without assimilation, except that the optimal band disappears and smaller values of the variance give lower RMSE in the EnKF setting. It should also be observed that in both settings, as ϕ approaches unity, the optimal range of the variance becomes larger. This is consistent with the effect of the second term in Eq. (20), which implies that as ϕ → 1, the variance plays little role and the next value of the stochastic forcing becomes less uncertain given its current value. Furthermore, these RMSE results are consistent with the large variation observed in the parameter estimates for ϕ.
The parameter values obtained from the joint state-parameter estimation also agree with these results (i.e., they lie in the region of small RMSEs). We believe that the discrepancy between the parameters that produce the best forecast and those that produce the best state estimation is related to the ensemble spread: the spread should not be too large, or else the prior produced by the model run will not be useful. This is why the error tends to be smaller in the region of small variance.

Fig. 8.

Average RMSE for the forecast run without assimilating observations. The deterministic parameters are held fixed at the values given in the text.


Fig. 9.

RMSE for the state estimation by the EnKF. The deterministic parameters are held fixed at the values given in the text.


6. Summary and discussion

This paper proposed a two-stage filtering method for joint state-parameter estimation based on a combination of the PF and EnKF methods. Specifically, the PF is used to estimate the uncertain parameter vector, treating the state vector as a known quantity. The parameter estimate is then updated based on the posterior parameter distribution approximated by the PF and used in the subsequent EnKF stage to update the state vector. Three numerical experiments are used to evaluate the ability of the two-stage filtering for joint state-parameter estimation in comparison with the augmented EnKF method. The first experiment uses the Lorenz-96 system and assumes that the forecast model and the parameterization are perfect and the parameter is constant. Partial observations (only half of the state variables are observed) and misspecified initial parameter distributions are used to test the ability of the proposed method to calibrate the incorrect parameters toward the actual parameters. Our numerical results show that the two-stage filtering method yields more accurate parameter estimates than the augmented EnKF. The results also show the ineffectiveness of the persistence model for artificially evolving the parameters; in particular, the use of the Liu–West model shows a substantial improvement in the stability of the filter. In the second experiment, we focus on stochastic parameters and show that the augmented method fails to provide a reliable estimate of the variance parameter in the AR(1) process, while the two-stage filtering performs well.

The third experiment uses the fast–slow Lorenz-96 system as the true model, whereas the forecast model assumes the perfect physical law of the slow variables only and uses a first-order polynomial to parameterize the unresolved fast-scale variables. We assume the two coefficients of the polynomial to be constant but add a stochastic term to the forcing. This stochastic term is assumed to be a realization of an AR(1) process, which is determined by the autocorrelation parameter and the variance of the process. The constant coefficients can be properly estimated only by the two-stage filtering. As for the AR(1) parameters, they converge within an individual run but do not converge to the same values across independent experimental runs. The validity of the resulting parameter values is tested in terms of the forecast error as well as the RMSE obtained from running the EnKF for state estimation only. Further justification of the accuracy of these parameter estimates would require an in-depth investigation of the range of optimal parameter values for this parameterization scheme.

The applications of the two-stage filtering in this paper are restricted to cases where the dimension of the parameter space is relatively small and the parameters are spatially constant. When the parameter vector is spatially varying and high dimensional, a localization technique may be needed to reduce the dimension of the original problem by analyzing local regions of smaller dimension. With this in mind, a localization scheme for the PF would have to be developed and incorporated into the two-stage filtering. In another situation, where the flow map of the state vector produces a multimodal forecast distribution, the EnKF would undoubtedly be ineffective, and we may have to replace the EnKF stage of the two-stage filtering by, for example, a PF. In addition, our method has not been tested for time-dependent parameters or parameter switching. We believe this is a challenging problem for which a jittering technique such as the Liu–West model may not be reliable. The two-stage filtering may still have an advantage over the augmented method in this case, but more appropriate proposal densities for sampling the parameters may be required to allow parameter switching. These situations are beyond the scope of the present paper, and future work addressing them will provide a better tool for joint state-parameter estimation in the two-stage filtering framework.

Acknowledgments

This work was supported by the Office of Naval Research Grant N00014-11-1-0087 and NSF Grant DMS-0940363.

APPENDIX A

Augmented EnKF for Joint State-Parameter Estimation

Based on the models in Eqs. (1) and (2), the EnKF method uses the spread of an ensemble of size n to approximate the background error covariance matrix, and the Kalman update equations are applied to approximate the analysis ensemble mean and the analysis error covariance matrix. For the augmented state-parameter system w, the background error covariance has the following substructure:

\mathbf{P} = \begin{bmatrix} \mathbf{P}_{xx} & \mathbf{P}_{x\theta} \\ \mathbf{P}_{\theta x} & \mathbf{P}_{\theta\theta} \end{bmatrix}, \qquad \mathbf{P}_{\theta x} = \mathbf{P}_{x\theta}^{\mathrm T}, \qquad (A1)
where P_xx is the background error covariance computed from the forecast ensemble of x, P_xθ is the cross covariance between the model state x and the parameter θ, and P_θθ is the background error covariance computed from the forecast ensemble of θ. The inference about the unobserved parameter and its uncertainty in the joint state-parameter estimation relies crucially on the cross covariance matrix P_xθ, which “linearly regresses” the increment of the observed states onto the increment of the unobserved parameters. This can be seen easily from the standard Kalman equation for the analysis ensemble:

\mathbf{w}^a = \mathbf{w}^f + \mathbf{K}\,(\mathbf{y}^o - \mathbf{H}\mathbf{w}^f), \qquad (A2)

where w^f is the forecast augmented state, H = [H_x, 0] since only the state is observed, and the Kalman gain matrix is given by

\mathbf{K} = \mathbf{P}\mathbf{H}^{\mathrm T}\,(\mathbf{H}\mathbf{P}\mathbf{H}^{\mathrm T} + \mathbf{R})^{-1}. \qquad (A3)
Substituting Eq. (A1) into Eq. (A3), we can rewrite Eq. (A2) as

\mathbf{x}^a = \mathbf{x}^f + \mathbf{K}_x\,(\mathbf{y}^o - \mathbf{H}_x\mathbf{x}^f), \qquad \boldsymbol{\theta}^a = \boldsymbol{\theta}^f + \mathbf{K}_\theta\,(\mathbf{y}^o - \mathbf{H}_x\mathbf{x}^f), \qquad (A4)

where the gain matrices for the model state and the parameter are given by

\mathbf{K}_x = \mathbf{P}_{xx}\mathbf{H}_x^{\mathrm T}\,(\mathbf{H}_x\mathbf{P}_{xx}\mathbf{H}_x^{\mathrm T} + \mathbf{R})^{-1}, \qquad \mathbf{K}_\theta = \mathbf{P}_{\theta x}\mathbf{H}_x^{\mathrm T}\,(\mathbf{H}_x\mathbf{P}_{xx}\mathbf{H}_x^{\mathrm T} + \mathbf{R})^{-1}. \qquad (A5)
It is now clear that the gain matrix K_θ for the analysis parameter ensemble depends on P_θx. Notice that while the covariance matrix P_θθ has no effect on either gain matrix, K_x or K_θ, the covariance matrix P_xx affects both. The equation for K_θ in Eq. (A5) also shows that the larger the uncertainty in the forecast model state (i.e., the larger P_xx), the smaller the parameter increment for a given P_θx. Therefore, in a chaotic system, which typically produces a large ensemble spread, the update of the model parameters can be expected to be slow. This may lead to filter divergence if the parameters are initially misspecified (e.g., if the actual parameters lie in the tail of the initial parameter distribution), since the parameters may “stick” to their (incorrect) initial values for so long that the filter repeatedly runs forecasts with incorrect parameters, causing most ensemble members to drift rapidly away from the observations. In some cases, some ensemble members may become dynamically unstable and the ensemble forecast unbounded, in which case the filter “blows up.”
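A schematic implementation of the augmented update makes the role of the cross covariance concrete (a sketch with our own variable names; the observation operator acts on the state only):

```python
import numpy as np

def augmented_enkf_update(X, Theta, y, H, R, rng=None):
    """One perturbed-observation update of the augmented ensemble w = (x, theta).
    X: (n_state, n_ens); Theta: (n_param, n_ens). The parameter increment is
    driven entirely by the cross covariance block, as in Eqs. (A4)-(A5)."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    W = np.vstack([X, Theta])                  # augmented ensemble
    A = W - W.mean(axis=1, keepdims=True)
    P = A @ A.T / (n - 1)                      # full augmented covariance, Eq. (A1)
    nx = X.shape[0]
    Ha = np.hstack([H, np.zeros((H.shape[0], Theta.shape[0]))])  # observe x only
    K = P @ Ha.T @ np.linalg.inv(Ha @ P @ Ha.T + R)              # Eq. (A3)
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    Wa = W + K @ (Yp - Ha @ W)                 # Eq. (A2)
    return Wa[:nx], Wa[nx:]
```

When the sample cross covariance between state and parameter is near zero, the bottom block of K vanishes and the parameters receive essentially no increment, which is the mechanism behind the slow parameter update discussed above.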

APPENDIX B

Particle Filtering

Particle filtering (Doucet et al. 2000b; Gordon et al. 1993) is a sequential Monte Carlo (SMC) approach that approximates the desired posterior distribution by an empirical distribution:

p(\mathbf{x}_{0:k}\,|\,\mathbf{y}_{1:k}) \approx \frac{1}{N}\sum_{i=1}^{N} \delta(\mathbf{x}_{0:k} - \mathbf{x}_{0:k}^{(i)}), \qquad (B1)

where x_{0:k} is the state trajectory and x_{0:k}^{(i)} are i.i.d. samples from the posterior. In general, it is difficult to draw random samples from the posterior directly, so we must draw samples from a proposal distribution (or importance function) q(x_{0:k} | y_{1:k}). The empirical distribution in Eq. (B1) can then be rewritten as the weighted empirical distribution

p(\mathbf{x}_{0:k}\,|\,\mathbf{y}_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)}\,\delta(\mathbf{x}_{0:k} - \mathbf{x}_{0:k}^{(i)}), \qquad (B2)

where x_{0:k}^{(i)} ~ q and the weights are given by

w_k^{(i)} \propto \frac{p(\mathbf{x}_{0:k}^{(i)}\,|\,\mathbf{y}_{1:k})}{q(\mathbf{x}_{0:k}^{(i)}\,|\,\mathbf{y}_{1:k})}. \qquad (B3)

The weights are trivially normalized to sum to one. Based on standard Bayesian analysis, the weights can be evaluated recursively according to the weight update relation

w_k^{(i)} \propto w_{k-1}^{(i)}\,\frac{p(\mathbf{y}_k\,|\,\mathbf{x}_k^{(i)})\,p(\mathbf{x}_k^{(i)}\,|\,\mathbf{x}_{k-1}^{(i)})}{q(\mathbf{x}_k^{(i)}\,|\,\mathbf{x}_{k-1}^{(i)},\mathbf{y}_k)}. \qquad (B4)

The most common and convenient choice of the proposal distribution is the transition density, q(x_k | x_{k-1}^{(i)}, y_k) = p(x_k | x_{k-1}^{(i)}), so that the weight update takes the simple form

w_k^{(i)} \propto w_{k-1}^{(i)}\,p(\mathbf{y}_k\,|\,\mathbf{x}_k^{(i)}). \qquad (B5)

In the limit N → ∞, convergence of the filter to the true posterior density can be established [see Doucet et al. (2000b)]. In practice, the method suffers from the curse of dimensionality, and its successful application is limited to problems of small or moderate dimension. Note that in the context of joint state-parameter estimation, the total dimension of the combined state w is typically large while the dimension of the parameter space is small. In such a situation, a direct application of the PF to w can be computationally intractable.
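A minimal bootstrap PF step, with the proposal taken as the transition density so the weight update reduces to the likelihood, can be sketched as follows (the ESS-based resampling threshold is a common practical choice, not something specified here):

```python
import numpy as np

def bootstrap_pf_step(particles, weights, y, transition, likelihood, rng=None,
                      resample_threshold=0.5):
    """One step of the bootstrap PF: propagate with the transition density,
    reweight by the observation likelihood (Eq. (B5)), and resample when the
    effective sample size drops below a threshold fraction of the ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    particles = transition(particles, rng)            # sample from p(x_k | x_{k-1})
    weights = weights * likelihood(y, particles)      # w_k ∝ w_{k-1} p(y_k | x_k)
    weights = weights / weights.sum()
    ess = 1.0 / np.sum(weights**2)                    # effective sample size
    if ess < resample_threshold * n:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0/n)
    return particles, weights
```

With many near-zero weights, the ESS collapses toward one, which is the weight degeneracy that makes the plain PF impractical in high dimensions, as noted above.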

REFERENCES

  • Aksoy, A., F. Zhang, and J. W. Nielsen-Gammon, 2006: Ensemble-based simultaneous state and parameter estimation with MM5. Geophys. Res. Lett., 33, L12801, doi:10.1029/2006GL026186.
  • Annan, J. D., D. J. Lunt, J. C. Hargreaves, and P. J. Valdes, 2005: Parameter estimation in an atmospheric GCM using the ensemble Kalman filter. Nonlinear Processes Geophys., 12, 363–371, doi:10.5194/npg-12-363-2005.
  • Arulampalam, M. S., S. Maskell, N. Gordon, and T. Clapp, 2002: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process., 50, 174–188, doi:10.1109/78.978374.
  • Baek, S.-J., B. R. Hunt, E. Kalnay, E. Ott, and I. Szunyogh, 2006: Local ensemble Kalman filtering in the presence of model bias. Tellus, 58A, 293–306, doi:10.1111/j.1600-0870.2006.00178.x.
  • Bellsky, T., J. Berwald, and L. Mitchell, 2014: Nonglobal parameter estimation using local ensemble Kalman filtering. Mon. Wea. Rev., 142, 2150–2164, doi:10.1175/MWR-D-13-00200.1.
  • Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724, doi:10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.
  • Carrassi, A., and S. Vannitsem, 2011: Parameter estimation using a particle method: Inferring mixing coefficients from sea-level observations. Quart. J. Roy. Meteor. Soc., 137, 435–451, doi:10.1002/qj.762.
  • Casella, G., and C. Robert, 1996: Rao-Blackwellization of sampling schemes. Biometrika, 83, 81–94, doi:10.1093/biomet/83.1.81.
  • DelSole, T., and X. Yang, 2010: State and parameter estimation in stochastic dynamical models. Physica D, 239, 1781–1788, doi:10.1016/j.physd.2010.06.001.
  • Doucet, A., N. de Freitas, K. Murphy, and S. J. Russell, 2000a: Rao-Blackwellised particle filtering for dynamic Bayesian networks. Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, 176–183.
  • Doucet, A., S. Godsill, and C. Andrieu, 2000b: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput., 10, 197–208, doi:10.1023/A:1008935410038.
  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.
  • Friedland, B., 1969: Treatment of bias in recursive filtering. IEEE Trans. Autom. Control, 14, 359–367, doi:10.1109/TAC.1969.1099223.
  • Gillijns, S., and B. De Moor, 2007: Model error estimation in ensemble data assimilation. Nonlinear Processes Geophys., 14, 59–71.
  • Gordon, N. J., D. J. Salmond, and A. F. M. Smith, 1993: Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process., 140, 107–113.
  • Houtekamer, P., and H. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.
  • Kang, J. S., E. Kalnay, J. Liu, I. Fung, T. Miyoshi, and K. Ide, 2011: “Variable localization” in an ensemble Kalman filter: Application to the carbon cycle data assimilation. J. Geophys. Res., 116, D09110, doi:10.1029/2010JD014673.
  • Koyama, H., and M. Watanabe, 2010: Reducing forecast errors due to model imperfections using ensemble Kalman filtering. Mon. Wea. Rev., 138, 3316–3332, doi:10.1175/2010MWR3067.1.
  • Liu, J. S., and R. Chen, 1998: Sequential Monte Carlo methods for dynamic systems. J. Amer. Stat. Assoc., 93, 1032–1044.
  • Lorenz, E., 2006: Predictability: A problem partly solved. Predictability of Weather and Climate, T. Palmer and R. Hagedorn, Eds., Cambridge University Press, 40–58.
  • Moradkhani, H., S. Sorooshian, H. Gupta, and P. Houser, 2005: Dual state-parameter estimation of hydrological models using ensemble Kalman filter. Adv. Water Resour., 28, 135–147, doi:10.1016/j.advwatres.2004.09.002.
  • Nakano, S., G. Ueno, and T. Higuchi, 2007: Merging particle filter for sequential data assimilation. Nonlinear Processes Geophys., 14, 395–408, doi:10.5194/npg-14-395-2007.
  • Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415–428, doi:10.1111/j.1600-0870.2004.00076.x.
  • Schön, T., F. Gustafsson, and P. Nordlund, 2005: Marginalized particle filters for mixed linear/nonlinear state-space models. IEEE Trans. Signal Process., 53, 2279–2289, doi:10.1109/TSP.2005.849151.
  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640, doi:10.1175/2008MWR2529.1.
  • Stroud, J., and T. Bengtsson, 2007: Sequential state and variance estimation within the ensemble Kalman filter. Mon. Wea. Rev., 135, 3194–3208, doi:10.1175/MWR3460.1.
  • Vossepoel, F. C., and P. J. van Leeuwen, 2007: Parameter estimation using a particle method: Inferring mixing coefficients from sea level observations. Mon. Wea. Rev., 135, 1006–1020, doi:10.1175/MWR3328.1.
  • West, M., and J. Liu, 2001: Combined parameter and state estimation in simulation-based filtering. Sequential Monte Carlo Methods in Practice, A. Doucet et al., Eds., Springer, 197–223.
  • Wikle, C. K., L. M. Berliner, and N. Cressie, 1998: Hierarchical Bayesian space-time models. Environ. Ecol. Stat., 5, 117–154, doi:10.1023/A:1009662704779.
  • Wilks, D., 2005: Effects of stochastic parametrizations in the Lorenz ’96 system. Quart. J. Roy. Meteor. Soc., 131, 389–407, doi:10.1256/qj.04.03.
  • Yang, X., and T. DelSole, 2009: Using the ensemble Kalman filter to estimate multiplicative model parameters. Tellus, 61A, 601–609, doi:10.1111/j.1600-0870.2009.00407.x.