References

  • Anderson, J., 1996: A method for producing and evaluating probabilistic forecasts from ensemble model integrations. J. Climate, 9, 1518–1530.
  • Apte, A., M. Hairer, A. Stuart, and J. Voss, 2007: Sampling the posterior: An approach to non-Gaussian data assimilation. Physica D, 230, 50–64.
  • Apte, A., C. Jones, A. Stuart, and J. Voss, 2008a: Data assimilation: Mathematical and statistical perspectives. Int. J. Numer. Methods Fluids, 56, 1033–1046.
  • Apte, A., C. Jones, and A. Stuart, 2008b: A Bayesian approach to Lagrangian data assimilation. Tellus, 60, 336–347.
  • Arulampalam, M., S. Maskell, N. Gordon, and T. Clapp, 2002: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process., 50, 174–188.
  • Auvinen, H., J. Bardsley, H. Haario, and T. Kauranne, 2009: Large-scale Kalman filtering using the limited memory BFGS method. Electron. Trans. Numer. Anal., 35, 217–233.
  • Bain, A., and D. Crişan, 2008: Fundamentals of Stochastic Filtering. Springer-Verlag, 390 pp.
  • Banks, H., 1992: Computational issues in parameter estimation and feedback control problems for partial differential equation systems. Physica D, 60, 226–238.
  • Banks, H., and K. Kunisch, 1989: Estimation Techniques for Distributed Parameter Systems. Birkhäuser, 315 pp.
  • Bengtsson, T., C. Snyder, and D. Nychka, 2003: Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res., 108, 8775, doi:10.1029/2002JD002900.
  • Bennett, A., 2002: Inverse Modeling of the Ocean and Atmosphere. Cambridge University Press, 234 pp.
  • Brett, C., A. Lam, K. Law, D. McCormick, M. Scott, and A. Stuart, 2012: Accuracy and stability of filters for dissipative PDEs. Physica D, in press.
  • Bröcker, J., 2010: On variational data assimilation in continuous time. Quart. J. Roy. Meteor. Soc., 136, 1906–1919.
  • Brooks, S., and A. Gelman, 1998: General methods for monitoring convergence of iterative simulations. J. Comput. Graph. Stat., 7, 434–455.
  • Bryson, A., and M. Frazier, 1963: Smoothing for linear and nonlinear dynamic systems. U.S. Air Force Tech. Rep. AFB-TDR-63-119, Wright-Patterson Air Force Base, OH, Aeronautical Systems Division, 353–364.
  • Carrassi, A., M. Ghil, A. Trevisan, and F. Uboldi, 2008: Data assimilation as a nonlinear dynamical systems problem: Stability and convergence of the prediction-assimilation system. Chaos, 18, 023112, doi:10.1063/1.2909862.
  • Chorin, A., and P. Krause, 2004: Dimensional reduction for a Bayesian filter. Proc. Natl. Acad. Sci. USA, 101, 15 013–15 017.
  • Chorin, A., M. Morzfeld, and X. Tu, 2010: Implicit particle filters for data assimilation. Commun. Appl. Math. Comput. Sci., 5, 221–240.
  • Cotter, S., M. Dashti, J. Robinson, and A. Stuart, 2009: Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Probl., 25, 115008, doi:10.1088/0266-5611/25/11/115008.
  • Cotter, S., M. Dashti, and A. Stuart, 2011: Variational data assimilation using targetted random walks. Int. J. Numer. Methods Fluids, 68, 403–421.
  • Courtier, P., and O. Talagrand, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. II: Numerical results. Quart. J. Roy. Meteor. Soc., 113, 1329–1347.
  • Cox, H., 1964: On the estimation of state variables and parameters for noisy dynamic systems. IEEE Trans. Autom. Control, 9, 5–12.
  • Cox, S., and P. Matthews, 2002: Exponential time differencing for stiff systems. J. Comput. Phys., 176, 430–455.
  • Doucet, A., N. De Freitas, and N. Gordon, 2001: Sequential Monte Carlo Methods in Practice. Springer-Verlag, 581 pp.
  • Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367.
  • Evensen, G., 2009: Data Assimilation: The Ensemble Kalman Filter. Springer-Verlag, 307 pp.
  • Evensen, G., and Coauthors, 1994: Assimilation of Geosat altimeter data for the Agulhas Current using the ensemble Kalman filter with a quasigeostrophic model. Mon. Wea. Rev., 124, 85–96.
  • Fisher, M., M. Leutbecher, and G. Kelly, 2005: On the equivalence between Kalman smoothing and weak-constraint four-dimensional variational data assimilation. Quart. J. Roy. Meteor. Soc., 131, 3235–3246.
  • Hamill, T., C. Snyder, and R. Morss, 2000: A comparison of probabilistic forecasts from bred, singular-vector, and perturbed observation ensembles. Mon. Wea. Rev., 128, 1835–1851.
  • Harlim, J., and A. Majda, 2008: Filtering nonlinear dynamical systems with linear stochastic models. Nonlinearity, 21, 1281, doi:10.1088/0951-7715/21/6/008.
  • Harvey, A., 1991: Forecasting, Structural Time Series Models, and the Kalman Filter. Cambridge University Press, 554 pp.
  • Hesthaven, J., S. Gottlieb, and D. Gottlieb, 2007: Spectral Methods for Time-Dependent Problems. Cambridge University Press, 273 pp.
  • Hinze, M., R. Pinnau, M. Ulbrich, and S. Ulbrich, 2009: Optimization with PDE Constraints. Springer, 270 pp.
  • Jazwinski, A., 1970: Stochastic Processes and Filtering Theory. Academic Press, 376 pp.
  • Kaipio, J., and E. Somersalo, 2005: Statistical and Computational Inverse Problems. Springer, 339 pp.
  • Kalman, R., 1960: A new approach to linear filtering and prediction problems. J. Basic Eng., 82, 35–45.
  • Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 341 pp.
  • Kelley, C., 2003: Solving Nonlinear Equations with Newton's Method. Vol. 1, Fundamentals of Algorithms, Society for Industrial and Applied Mathematics, 104 pp.
  • Lawless, A., N. Nichols, and S. Ballard, 2003: A comparison of two methods for developing the linearization of a shallow-water model. Quart. J. Roy. Meteor. Soc., 129, 1237–1254.
  • Lei, J., P. Bickel, and C. Snyder, 2010: Comparison of ensemble Kalman filters under non-Gaussianity. Mon. Wea. Rev., 138, 1293–1306.
  • Leutbecher, M., 2003: Adaptive observations, the Hessian metric and singular vectors. Proc. ECMWF Seminar on Recent Developments in Data Assimilation for Atmosphere and Ocean, Reading, United Kingdom, ECMWF, 8–12.
  • Liu, N., and D. S. Oliver, 2003: Evaluation of Monte Carlo methods for assessing uncertainty. SPE J., 8, 188–195.
  • Lorenc, A., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 112, 1177–1194.
  • Lorenz, E., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.
  • Lorenz, E., 1996: Predictability: A problem partly solved. Proc. Seminar on Predictability, Reading, United Kingdom, ECMWF, 1–18.
  • Majda, A., J. Harlim, and B. Gershgorin, 2010: Mathematical strategies for filtering turbulent dynamical systems. Dyn. Syst., 27, 441–486.
  • Meng, Z., and F. Zhang, 2008: Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part IV: Comparison with 3DVAR in a month-long experiment. Mon. Wea. Rev., 136, 3671–3682.
  • Miller, R., M. Ghil, and F. Gauthiez, 1994: Advanced data assimilation in strongly nonlinear dynamical systems. J. Atmos. Sci., 51, 1037–1056.
  • Nocedal, J., and S. Wright, 1999: Numerical Optimization. Springer-Verlag, 636 pp.
  • Palmer, T., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. J. Atmos. Sci., 55, 633–653.
  • Quinn, J., and H. Abarbanel, 2010: State and parameter estimation using Monte Carlo evaluation of path integrals. Quart. J. Roy. Meteor. Soc., 136, 1855–1867.
  • Saad, Y., 1996: Iterative Methods for Sparse Linear Systems. 1st ed. PWS Publishing, 447 pp.
  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640.
  • Stuart, A., 2010: Inverse problems: A Bayesian perspective. Acta Numer., 19, 451–559.
  • Talagrand, O., and P. Courtier, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Quart. J. Roy. Meteor. Soc., 113, 1311–1328.
  • Tarantola, A., 2005: Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, 342 pp.
  • Temam, R., 2001: Navier–Stokes Equations: Theory and Numerical Analysis. American Mathematical Society, 408 pp.
  • Tippett, M., J. Anderson, C. Bishop, T. Hamill, and J. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485–1490.
  • Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297–3319.
  • Trefethen, L., and D. Bau, 1997: Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 361 pp.
  • van Leeuwen, P., 2009: Particle filtering in geophysical systems. Mon. Wea. Rev., 137, 4089–4114.
  • van Leeuwen, P., 2010: Nonlinear data assimilation in geosciences: An extremely efficient particle filter. Quart. J. Roy. Meteor. Soc., 136, 1991–1999.
  • Vogel, C., 2002: Computational Methods for Inverse Problems. Society for Industrial and Applied Mathematics, 183 pp.
  • Vogel, C., and J. Wade, 1995: Analysis of costate discretizations in parameter estimation for linear evolution equations. SIAM J. Control Optim., 33, 227–254.
  • Zhang, M., and F. Zhang, 2012: E4DVAR: Coupling an ensemble Kalman filter with four-dimensional variational data assimilation in a limited-area weather prediction model. Mon. Wea. Rev., 140, 587–600.
  • Zhang, M., F. Zhang, X. Huang, and X. Zhang, 2010: Intercomparison of an ensemble Kalman filter with three- and four-dimensional variational data assimilation methods in a limited-area model over the month of June 2003. Mon. Wea. Rev., 139, 566–572.
  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev., 125, 2274–2292.
Fig. 1. Low Reynolds number, stationary solution regime (ν = 0.1). (left) The vorticity w(0) of the smoothing distribution at t = 0 and (right) its Fourier coefficients for T = 10h = 2, for (top) the MCMC sample mean and (bottom) the truth. The MAP estimator is not distinguishable from the mean by eye and so is not displayed. The prior mean is taken as a draw from the prior and hence is not as smooth as the initial condition. It is the influence of the prior that makes the MAP estimator and mean rough, although structurally the same as the truth (the solution operator is smoothing, so these fluctuations are immediately smoothed out; see Fig. 2).

Fig. 2. Low Reynolds number, stationary solution regime (ν = 0.1). (left) The vorticity w(T) of the filtering distribution at t = T and (right) its Fourier coefficients for T = 10h = 2. Only the MCMC sample mean is shown, since the solutions have been smoothed out and the differences among the MAP, mean, and truth are imperceptible.

Fig. 3. The MCMC histogram for (left) t = 0 and (right) t = T = 10h = 2 together with the Gaussian approximation obtained from 4DVAR for low Reynolds number, stationary state regime (ν = 0.1).

Fig. 4. (left) Average squared velocity spectrum on the attractor for ν = 0.01. (right) Difference between quantity a and quantity b, where a is the difference of the truth u(t) with a solution uτ(t) initially perturbed in the direction of the dominant local Lyapunov vectors υτ on a time interval of length τ with τ = 0.02, 0.2, and 0.5 [thus uτ(0) = u(t) + ɛυτ], and b is the evolution of that perturbation under the linearized model Uτ(t) = DΨ(u(0); t)ɛυτ. The magnitude of perturbation ɛ is determined by the projection of the initial posterior covariance in the direction υτ. The difference plotted thus indicates differences between linear and nonlinear evolution with the direction of the initial perturbations chosen to maximize growth and with size of the initial perturbations commensurate with the prevalent uncertainty. The relative error |[uτ(τ) − u(τ)] − Uτ(τ)|/|Uτ(τ)| (in l2) is 0.01, 0.15, and 0.42, respectively, for the three chosen values of increasing τ.

Fig. 5. The MCMC mean as in Fig. 1 for high Reynolds number, strongly chaotic solution regime for ν = 0.01, T = 10h = 0.2: (top) t = 0 and (bottom) t = T.

Fig. 6. As in Fig. 3, but for strongly chaotic regime, ν = 0.01, T = 0.2, and h = 0.02. (top) Mode u1,1 and (bottom) mode u5,5.

Fig. 7. Example of an unstable trajectory for 3DVAR with ν = 0.01, h = 0.2. (top left) The norm-squared error between the estimated mean and the truth u(tn) in comparison to the preferred upper bound [i.e., the total observation error tr(Γ), (21)] and the lower bound (20). The other three plots show the estimator m(t) together with the truth u(t) and the observations yn for (top right) Im(u0,1) and (bottom) (left) Re(u1,2) and (right) Re(u7,7).

Fig. 8. Example of a variance-inflated stabilized trajectory for 3DVAR with the same external parameters as in Fig. 7. Panels are as in Fig. 7.

Fig. 9. Example of an unstable trajectory for LRExKF with ν = 0.01, h = 0.5. Panels are as in Fig. 7.

Fig. 10. Example of a variance-inflated stabilized trajectory (updated with model B from section 2 on the complement of the low-rank approximation) for LRExKF with the same external parameters as in Fig. 9. Panels are as in Fig. 9.

Fig. 11. The (left) posterior and (right) prior of the covariance from converged innovation statistics from the cycled 3DVAR algorithm in comparison to the converged covariance from the FDF algorithm and the posterior distribution.


Evaluating Data Assimilation Algorithms

K. J. H. Law, Warwick Mathematics Institute, University of Warwick, Coventry, United Kingdom
A. M. Stuart, Warwick Mathematics Institute, University of Warwick, Coventry, United Kingdom

Abstract

Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given all the observations on a time window of interest, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms.

A key aspect of geophysical data assimilation is the high dimensionality and limited predictability of the computational model. This paper examines the two-dimensional Navier–Stokes equations in a periodic geometry, which has these features and yet is tractable for explicit and accurate computation of the posterior distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that are evaluated, as quantified by the relative error in reproducing moments of the posterior, are four-dimensional variational data assimilation (4DVAR) and a variety of sequential filtering approximations based on three-dimensional variational data assimilation (3DVAR) and on extended and ensemble Kalman filters.

The primary conclusions are that, under the assumption of a well-defined posterior probability distribution, (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution, (ii) they do not perform as well in reproducing the covariance, and (iii) the error is compounded by the need to modify the covariance, in order to induce stability. Thus, filters can be a useful tool in predicting mean behavior but should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms when assumptions underlying them are not valid and will not change if the model complexity is increased.

Corresponding author address: Kody J. H. Law, Warwick Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom. E-mail: k.j.h.law@warwick.ac.uk


1. Introduction

The positive impact of data assimilation schemes on numerical weather prediction (NWP) is unquestionable. Improvements in forecast skill over decades reflect not only the increased resolution of the computational model but also the increasing volumes of data available, as well as the increasing sophistication of algorithms to incorporate this data. However, because of the huge scale of the computational model, many of the algorithms used for data assimilation employ approximations, based on both physical insight and computational expediency, whose effect can be hard to evaluate. The aim of this paper is to describe a method of evaluating some important aspects of data assimilation algorithms by comparing them with a gold standard: the Bayesian posterior probability distribution on the system state given observations. In so doing we will demonstrate that carefully chosen filters can perform well in predicting mean behavior, but that they typically perform poorly when predicting uncertainty, such as covariance information.

In typical operational conditions the observed data, model initial conditions, and model equations are all subject to uncertainty. Thus we take the perspective that the gold standard, which we wish to reproduce as accurately as possible, is the (Bayesian) posterior probability distribution of the system state (possibly including parameters) given the observations. For practical weather forecasting scenarios this is not computable. The two primary competing methodologies for data assimilation that are computable, and hence are implemented in practice, are filters (Kalnay 2003) and variational methods (Bennett 2002). We will compare the (accurately computed, extremely expensive) Bayesian posterior distribution with the output of the (approximate, relatively cheap) filters and variational methods used in practice. Our underlying dynamical model is the 2D Navier–Stokes equations in a periodic setting. This provides a high dimensional dynamical system, which exhibits a range of complex behaviors yet is sufficiently small that the Bayesian posterior may be accurately computed by state-of-the-art statistical sampling in an offline setting.

The idea behind filtering is to update the posterior distribution of the system state sequentially at each observation time. This may be performed exactly for linear systems subject to Gaussian noise, and is then known as the Kalman filter (Kalman 1960; Harvey 1991). For nonlinear or non-Gaussian scenarios the particle filter (Doucet et al. 2001) may be used and provably approximates the desired probability distribution as the number of particles is increased (Bain and Crişan 2008). However, in practice this method performs poorly in high dimensional systems (Snyder et al. 2008) and, while there is considerable research activity aimed at overcoming this degeneration (van Leeuwen 2010; Chorin et al. 2010; Bengtsson et al. 2003), it cannot currently be viewed as a practical tool within the context of geophysical data assimilation. To circumvent problems associated with the representation of high dimensional probability distributions some form of Gaussian approximation is typically used to create practical filters. The oldest and simplest such option is to use a nonlinear generalization of the mean update in the Kalman filter, employing a constant prior covariance operator, obtained offline through knowledge coming from the underlying model and past observations (Lorenc 1986); this methodology is sometimes referred to as three-dimensional variational data assimilation (3DVAR). More sophisticated approximate Gaussian filters arise either from linearizing the dynamical model, yielding the extended Kalman filter (ExKF; Jazwinski 1970), or from utilizing ensemble statistics, leading to the ensemble Kalman filter (EnKF; Evensen et al. 1994; Evensen 2003). Information about the underlying local (in time) Lyapunov vectors, or bred vectors [see Kalnay (2003) for discussion] can be used to guide further approximations that are made when implementing these methods in high dimensions. 
We will also be interested in the use of Fourier diagonal filters (FDFs), introduced in Harlim and Majda (2008) and Majda et al. (2010), which approximate the dynamical model by a statistically equivalent linear dynamical system in a manner that enables the covariance operator to be mapped forward in closed form; in steady state the version we employ here reduces to a particular choice of 3DVAR, based on climatological statistics. An overview of particle filtering for geophysical systems may be found in van Leeuwen (2009) and a quick introduction to sequential filtering may be found in Arulampalam et al. (2002).
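The perturbed-observation (stochastic) EnKF analysis step described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the implementation evaluated in this paper; the toy dimensions, the linear observation operator H, and the observation-error covariance R are illustrative assumptions:

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    ensemble : (N, d) array of N forecast members
    y        : (m,) observation vector
    H        : (m, d) linear observation operator
    R        : (m, m) observation-error covariance
    """
    N, d = ensemble.shape
    m = len(y)
    # Ensemble mean and anomalies define the sample forecast covariance.
    xbar = ensemble.mean(axis=0)
    A = ensemble - xbar                       # (N, d) anomalies
    C = A.T @ A / (N - 1)                     # sample covariance (d, d)
    # Kalman gain built from the ensemble statistics.
    S = H @ C @ H.T + R
    K = C @ H.T @ np.linalg.solve(S, np.eye(m))
    # Each member assimilates an independently perturbed observation.
    perturbed = y + rng.multivariate_normal(np.zeros(m), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 4))                # toy 4-dimensional state
H = np.eye(2, 4)                              # observe first two components
R = 0.1 * np.eye(2)
analysis = enkf_analysis(ens, np.array([1.0, -1.0]), H, R, rng)
```

The analysis ensemble mean is pulled toward the observations in the observed directions, while the spread supplies the (sample) covariance that practical filters must approximate.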

Whereas filtering updates the system state sequentially each time when a new observation becomes available, variational methods attempt to incorporate data distributed over an entire time interval. This may be viewed as an optimization problem in which the objective function is to choose the initial state, and possibly forcing to the physical model, in order to best match the data over the specified time window. As such it may be viewed as a PDE-constrained optimization problem (Hinze et al. 2009), and more generally as a particular class of regularized inverse problem (Vogel 2002; Tarantola 2005; Banks and Kunisch 1989). This approach is referred to as four-dimensional variational data assimilation (4DVAR) in the geophysical literature when the optimization is performed over just the initial state of the system (Talagrand and Courtier 1987; Courtier and Talagrand 1987) and as weak constraint 4DVAR when optimization is also over forcing to the system (Zupanski 1997).
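The strong-constraint 4DVAR optimization just described can be illustrated on a toy problem. In the sketch below the linear map M, the observation operator H, and the background and observation precisions are all hypothetical stand-ins for the model and data of the text; the objective is the standard background-plus-misfit cost, minimized over the initial state only:

```python
import numpy as np
from scipy.optimize import minimize

# Toy strong-constraint 4DVAR: linear model u_{k+1} = M u_k,
# optimization over the initial state u0 alone (illustrative setup).
M = np.array([[0.9, 0.2], [-0.2, 0.9]])      # toy dynamics
H = np.array([[1.0, 0.0]])                    # observe first component
B_inv = np.eye(2)                             # background precision
R_inv = np.array([[10.0]])                    # observation precision
u_true = np.array([1.0, 0.5])
u_b = np.zeros(2)                             # background (prior mean)

# Synthetic observations y_k = H u_k + noise at k = 1..K.
rng = np.random.default_rng(1)
K_steps = 8
traj = [u_true]
for _ in range(K_steps):
    traj.append(M @ traj[-1])
ys = [H @ u + rng.normal(scale=0.1, size=1) for u in traj[1:]]

def cost(u0):
    # Background (regularization) term plus data misfit over the window.
    J = 0.5 * (u0 - u_b) @ B_inv @ (u0 - u_b)
    u = u0
    for y in ys:
        u = M @ u
        r = y - H @ u
        J += 0.5 * r @ R_inv @ r
    return J

u_map = minimize(cost, u_b).x                 # the 4DVAR estimate
```

Even though only the first component is observed, the rotation in M makes the full state recoverable over the window, so the estimate improves on the background.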

From a Bayesian perspective, the solution to an inverse problem is statistical rather than deterministic and is hence significantly more challenging: regularization is imposed through viewing the unknown as a random variable, and the aim is to find the posterior probability distribution on the state of the system on a given time window, given the observations on that time window. With the current and growing capacity of computers it is becoming relevant and tractable to begin to explore such approaches to inverse problems in differential equations (Kaipio and Somersalo 2005), even though it is currently not feasible to do so for NWP. There has, however, been some limited study of the Bayesian approach to inverse problems in fluid mechanics using path integral formulations in continuous time as introduced in Apte et al. (2007); see Apte et al. (2008a,b), Quinn and Abarbanel (2010), and Cotter et al. (2011) for further developments. We will build on the algorithmic experience contained in these papers here. For a recent overview of Bayesian methodology for inverse problems in differential equations, see Stuart (2010), and for the Bayesian formulation of a variety of inverse problems arising in fluid mechanics, see Cotter et al. (2009). The key “take home” message of this body of work on Bayesian inverse problems is that it is often possible to compute the posterior distribution of state given noisy data with high degree of accuracy, albeit at great expense: the methodology could not be used online as a practical algorithm, but it provides us with a gold standard against which we can evaluate online approximate methods used in practice.
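Samplers of the kind used for the gold standard here, such as the targetted random walks of Cotter et al. (2011), are built on proposals that remain well defined in high (in principle, infinite) dimension. A minimal finite-dimensional sketch of one such method, the preconditioned Crank–Nicolson (pCN) random walk, is given below; the standard Gaussian prior and the scalar toy likelihood are illustrative assumptions:

```python
import numpy as np

def pcn_mcmc(phi, dim, beta=0.2, n_steps=20000, rng=None):
    """pCN Metropolis sampler for a posterior with N(0, I) prior and
    negative log-likelihood phi (in function space the identity is
    replaced by the prior covariance operator)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.normal(size=dim)
    samples = []
    for _ in range(n_steps):
        # The pCN proposal is prior-reversible, so the acceptance
        # ratio involves only the likelihood, not the prior.
        v = np.sqrt(1 - beta**2) * u + beta * rng.normal(size=dim)
        if np.log(rng.uniform()) < phi(u) - phi(v):
            u = v
        samples.append(u.copy())
    return np.array(samples)

# Toy conjugate example: observe u directly with unit-variance noise, y = 2.
phi = lambda u: 0.5 * np.sum((u - 2.0) ** 2)
chain = pcn_mcmc(phi, dim=1)
# For this example the posterior is exactly N(1, 1/2), so the chain
# mean should settle near 1.
```

The expense is evident: tens of thousands of likelihood (i.e., forward model) evaluations are needed even in one dimension, which is why such sampling is an offline gold standard rather than an online algorithm.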

There are several useful connections to make among the Bayesian posterior distribution, filtering methods, and variational methods, all of which serve to highlight the fact that they are all attempting to represent related quantities. The first observation is that, in the linear Gaussian setting, if backward filtering is implemented on a given time window (this is known as smoothing) after forward filtering, then the resulting mean is equivalent to 4DVAR (Fisher et al. 2005). The second observation is that the Bayesian posterior distribution at the end of the time window, which is a non-Gaussian version of the Kalman smoothing distribution just described, is equal to the exact filtering distribution at that time, provided the filter is initialized with the same distribution as that chosen at the start of the time window for the Bayesian posterior model (Stuart 2010). The third observation is that the 4DVAR variational method corresponds to maximizing the Bayesian posterior distribution and is known in this context as a maximum a posteriori estimator (Cox 1964; Kaipio and Somersalo 2005). More generally, connections between filtering and smoothing have been understood for some time (Bryson and Frazier 1963).
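The third observation can be stated in symbols. For a Gaussian prior and Gaussian observational noise (generic notation, with m_b the background mean, B its covariance, Ψ_{t_k} the solution operator to time t_k, and R the observation-error covariance), Bayes' rule gives

```latex
\mathbb{P}(u_0 \mid y) \;\propto\; \exp\bigl(-J(u_0)\bigr),
\qquad
J(u_0) \;=\; \tfrac12\bigl\|u_0 - m_b\bigr\|^2_{B^{-1}}
\;+\; \tfrac12\sum_{k=1}^{K}\bigl\|y_k - H\,\Psi_{t_k}(u_0)\bigr\|^2_{R^{-1}},
```

so that minimizing the 4DVAR cost J is exactly maximizing the posterior density: the 4DVAR solution is the MAP estimator.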

For the filtering and variational algorithms implemented in practice, these connections may be lost or weakened because of the approximations made to create tractable algorithms. Hence we attempt to evaluate these algorithms by their ability to reproduce moments of the Bayesian posterior distribution since this provides an unequivocal notion of a perfect solution, given a complete model description, including sources of error; we hence refer to it as the gold standard. We emphasize that we do not claim to present optimal implementations of any method except the gold standard Markov chain Monte Carlo (MCMC) sampling. Nonetheless, the phenomena we observe and the conclusions we arrive at will not change qualitatively if the algorithms are optimized. They reflect inherent properties of the approximations used to create online algorithms useable in practical online scenarios.

The ability of filters to track the signal in chaotic systems has been the object of study in the data assimilation communities for some time, and we point to the paper of Miller et al. (1994) as an early example of this work, confined to low dimensional systems, and to the more recent work of Carrassi et al. (2008) for a study of both low and high dimensional problems and for further discussion of the relevant literature. As mentioned above, we develop our evaluation in the context of the 2D Navier–Stokes equations in a periodic box. We work in parameter regimes in which at most O(10³) Fourier modes are active. This model has several attractive features. For instance, it has a unique global attractor with a tunable parameter, the viscosity (or, equivalently, the Reynolds number), which tunes between a one-dimensional stable fixed point and a very high dimensional, strongly chaotic attractor (Temam 2001). As the dimension of the attractor increases, many scales are present, as one would expect in a model of the atmosphere. By working with dimensions of size O(10³) we have a model of significantly higher dimension than the typical toy models that one encounters in the literature (Lorenz 1996, 1963). Therefore, while the 2D Navier–Stokes equations do not model atmospheric dynamics, we expect the model to exhibit predictability issues similar to those arising in atmospheric models, and this fact, together with their high dimensionality, makes them a useful model with which to study aspects of atmospheric data assimilation. However, we recognize the need for follow-up studies that investigate similar issues for models, such as the Lorenz-96 model or quasigeostrophic models, which can mimic or model the baroclinic instabilities that drive so much of atmospheric dynamics.
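As a concrete example of the class of toy models just mentioned (Lorenz 1996), the sketch below integrates the Lorenz-96 system with a classical Runge–Kutta scheme and shows the sensitivity to initial conditions that underlies limited predictability; the forcing, dimension, and step size are the conventional illustrative choices, not parameters from this paper:

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    """Lorenz-96 right-hand side: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    # Classical fourth-order Runge-Kutta step.
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Two nearby initial conditions diverge, illustrating the limited
# predictability that makes data assimilation necessary.
x = 8.0 * np.ones(40); x[0] += 0.01
y = x.copy(); y[1] += 1e-8
for _ in range(1000):                         # integrate to t = 10
    x = rk4_step(x, 0.01)
    y = rk4_step(y, 0.01)
separation = np.linalg.norm(x - y)
```

An initial perturbation of size 1e-8 grows by many orders of magnitude over the window, which is the behavior any filter for such a system must contend with.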

The primary conclusions of our study are as follows: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, these filters typically perform poorly when attempting to reproduce information about covariance, as the assumptions underlying them may not be valid; and (iii) this poor performance is compounded by the need to modify the filters, and their covariance in particular, in order to induce filter stability and avoid divergence. Thus, while filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model is more complex (e.g., because of a smaller viscosity in our model). We reiterate that these conclusions rest on our assumption of well-defined prior and observational-error distributions, and hence of a well-defined Bayesian posterior distribution. Because of the computational cost of computing the latter, we look only at a single initial interval of observations; under our assumption, however, the accuracy over this first interval limits the accuracy on all subsequent intervals, which will not improve upon it. Under the reasonable assumption that the process has a finite correlation time, the initial prior will eventually be forgotten; in the present context, this effect could be explored by choosing different priors, obtained by approximating the asymptotic distribution with some filtering algorithm and/or climatological statistics, and testing the robustness of the conclusions, and indeed of the filtering distribution itself, to changes in the prior. The question of sensitivity of the results to the choice of prior is not addressed here. We also restrict our attention to the perfect model scenario.

Many comparisons of various versions of these methods have been carried out recently. For example, Meng and Zhang (2008) and Zhang et al. (2010) compare the EnKF forecast with 3DVAR and 4DVAR (without updated covariance) in the Weather Research and Forecasting model (WRF). In their real-data experiments, they conclude that 4DVAR performs better with respect to the root-mean-square error (RMSE) at shorter lead times, while the EnKF forecast performs better for longer lead times. This result is consistent with ours, although it could be explained by an improved approximation of the posterior distribution at each update time. Our results indicate that 4DVAR could perform better here, as long as the approximate filtering distribution of 4DVAR with the propagated Hessian is used. Of course this is too expensive in practice, and often a constant covariance is used; this will limit performance in reproducing the statistical variation of the posterior filtering distribution used as the prior in the next cycle. This issue is addressed partially in Meng and Zhang (2008) and Zhang and Zhang (2012), where EnKF is coupled to 4DVAR: the covariance comes from the former while the mean is updated by the latter, and the resulting algorithm outperforms either of the individual ones in the RMSE sense. Two fundamental classes of EnKFs were compared theoretically in the large ensemble limit in Lei et al. (2010), and it was found that the stochastic version (the one we employ here), in which observations are perturbed, is more robust to perturbations in the forecast distribution than the deterministic one. Another interesting comparison was undertaken in Hamill et al. (2000), in which several ensemble filters alternative to EnKF in operational use are compared with respect to RMSE as well as other diagnostics such as rank histograms (Anderson 1996).
We note that over long times the RMSE values for the algorithms we consider are in the same vicinity as the errors between the estimators and the truth that we present at the single filtering time.

The rest of the paper is organized as follows. First, we introduce the model and inverse problem in section 2; then we describe the various methods used to (approximately) compute posterior smoothing and filtering distributions in section 3. We then describe the results of the numerical simulations in two sections: the first (section 4) explores the accuracy of the filters by comparison with the posterior distribution and the truth; the second (section 5) explains the manifestation of instability in the filters, describes how they are stabilized, and studies the implications for accuracy. We provide a summary and conclusions in section 6. In the appendix we describe some details of the numerical methods.

2. Statement of the model

In this section we describe the dynamical model, and the filtering and smoothing problems that arise from assimilating data into that model. The discussion is framed prior to discretization. Details relating to numerical implementation may be found in the appendix.

a. Dynamical model: Navier–Stokes equation

The dynamical model we will consider is the two-dimensional incompressible Navier–Stokes equation in a periodic box with side of length 2. By projecting into the space of divergence-free velocity fields, this may be written as a dynamical equation for the divergence-free velocity field u with the form
e1
Here A (known as the Stokes operator) models the dissipation and acts as a (negative) Laplacian on divergence-free fields, F(u) is the nonlinearity arising from the convective time derivative, and f is the body force, all projected into the space of divergence-free functions; ν is the viscosity parameter. We also work with spatially mean-zero velocity fields since, in periodic geometries, the mean evolves independently of the other Fourier modes. See Temam (2001) for details concerning the formulation of incompressible fluid mechanics in this notation. We let H denote the space of square-integrable, periodic, and mean-zero divergence-free functions on the box. To ensure that our results are self-contained apart from the particular choice of model considered, we define the map Ψ(·; t): H → H so that the solution of (1) satisfies
e2

Equation (1) has a global attractor and the viscosity parameter ν tunes between regimes in which the attractor is a single stationary point, through periodic, quasi-periodic, chaotic, and strongly chaotic (the last two being difficult to distinguish between). These regimes are characterized by an increasing number of positive Lyapunov exponents, and hence increasing dimension of the unstable manifold. In turn, this results in a system that becomes progressively less predictable. This tunability through all predictability regimes, coupled to the possibility of high dimensional effective dynamics that can arise for certain parameter regimes of the PDE, makes this a useful model with which to examine some of the issues inherent in atmospheric data assimilation.
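Although the numerical details are deferred to the appendix, the forward model can be sketched compactly. The following is a minimal pseudospectral time step for the vorticity form of (1), written for a 2π-periodic box with integer wavenumbers (the box of side 2 used in this paper only rescales the wavenumbers); the function name and the semi-implicit discretization are illustrative choices of ours, not the scheme of the appendix.

```python
import numpy as np

def navier_stokes_step(w_hat, nu, dt, N, f_hat=None):
    """One semi-implicit pseudospectral step of 2D Navier-Stokes in
    vorticity form on a 2*pi-periodic box. w_hat: (N, N) Fourier
    coefficients of the vorticity; nu: viscosity. Illustrative sketch."""
    k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2_inv = np.where(k2 > 0, 1.0 / np.maximum(k2, 1), 0.0)
    # Recover velocity from the streamfunction: -Lap(psi) = w.
    psi_hat = w_hat * k2_inv
    u = np.fft.ifft2(1j * ky * psi_hat).real    # u =  d(psi)/dy
    v = np.fft.ifft2(-1j * kx * psi_hat).real   # v = -d(psi)/dx
    wx = np.fft.ifft2(1j * kx * w_hat).real
    wy = np.fft.ifft2(1j * ky * w_hat).real
    nl_hat = np.fft.fft2(u * wx + v * wy)       # nonlinear advection term
    if f_hat is None:
        f_hat = np.zeros_like(w_hat)
    # Implicit treatment of the stiff dissipation, explicit nonlinearity.
    return (w_hat - dt * nl_hat + dt * f_hat) / (1.0 + dt * nu * k2)
```

With f = 0 the energy decays, and increasing ν moves the dynamics toward the stationary regime described below.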

b. Inverse problem

The basic inverse problem that underlies data assimilation is to estimate the state of the system, given the model dynamics for the state, together with noisy observations of the state. In our setting, since the model dynamics are deterministic, this amounts to estimating the initial condition from noisy observations at later times. This is an ill-posed problem that we regularize by adopting a Bayesian approach, imposing a prior Gaussian random field assumption on the initial condition. It will be useful to define ‖·‖B = ‖B−1/2·‖ for any covariance operator B; we use this notation throughout the paper, in particular in the observation space with B = Γ and in the initial condition space with B = C0.

Our prior regularization on the initial state is to assume
e3
The prior mean m0 is our best guess of the initial state before data are acquired (background mean), and the prior covariance C0 (background covariance) regularizes this by allowing variability of specified magnitude at different length scales. The prior covariance C0: H → H is self-adjoint and positive, and is assumed to have summable eigenvalues, a condition that is necessary and sufficient for draws from this prior to be square integrable.
Now we describe the noisy observations. We observe only the velocity field, and not the pressure. Let Γ be a self-adjoint positive operator on the observation space and let
e4
denote noisy observations of the state at time tk = kh, which, for simplicity of exposition only, we have assumed to be equally spaced. We assume independence of the observational noise: yk|uk is independent of yj|uj for all j ≠ k, and the observational noise is assumed independent of the initial condition u0.
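The observation model (4) is straightforward to simulate. The sketch below generates equally spaced noisy observations under the simplifying assumption Γ = γ²I; `forward_map` is a hypothetical stand-in for the solution operator Ψ(·; t).

```python
import numpy as np

def generate_observations(u0, forward_map, h, K, gamma, rng):
    """Noisy observations y_k = u(t_k) + eta_k, eta_k ~ N(0, gamma^2 I),
    at equally spaced times t_k = k*h, as in (4). Sketch only:
    `forward_map(u, t)` stands in for the solution operator Psi(u; t),
    and a scalar-variance Gamma = gamma^2 I is assumed for simplicity."""
    ys = []
    u = u0
    for _ in range(K):
        u = forward_map(u, h)                        # advance one window
        ys.append(u + gamma * rng.standard_normal(u.shape))
    return ys
```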

For simplicity, and following convention in the field, we will not distinguish notationally between a random variable and its realization, except in the case of the truth, which it will be important to distinguish in subsequent sections, in which it is simulated and known. The inverse problem consists of estimating the posterior probability distribution of u(t), given noisy observations {yj}, with j ≤ J. This is referred to as

  • Smoothing when t < tj;

  • Filtering when t = tj;

  • Predicting when t > tj.

Under the assumption that the dynamical model is deterministic, the smoothing distribution at time t = 0 can be mapped forward in time to give the exact filtering distribution, which in turn can be mapped forward in time to give the exact predicting distribution (and likewise the filtering distribution mapped backward, if the forward map admits an inverse, yields the smoothing distribution). If the forward map were linear, for instance in the case of the Stokes equation [F(u) = 0], then the posterior distribution would be Gaussian as well, and could be given in closed form via its mean and covariance. In the nonlinear case, however, the posterior cannot be summarized through a finite set of quantities such as mean and covariance and, in theory, requires infinitely many samples to represent. In the language of the previous section, as the dimension of the attractor increases with Reynolds number, the nonlinearity begins to dominate the equation, the dynamics become less predictable, and the inverse problem becomes more difficult. In particular, Gaussian approximations can become increasingly misleading. We will see that sufficient nonlinearity for these misleading effects can arise in more than one way: via the dynamical model or via the observational frequency.

1) Smoothing

We start by describing the Bayesian posterior distribution, and link this to variational methods. Let uk = u(kh), Ψ(u) = Ψ(u; h), and Ψk(·) = Ψ(·; kh). Furthermore, define the conditional measures for j1, j2 ≤ J:
eq1
(For notational convenience we do not distinguish between a probability distribution and its density, using μ and ℙ interchangeably for both). The posterior distributions are completely characterized by the dynamical model in (2) and by the random inputs given in (4) and (3).
We focus on the posterior distribution μ0|J since this probability distribution, once known, determines μj|J for all J ≥ j ≥ 0 simply by using (2) to map the probability distribution at time t = 0 into that arising at any later time t > 0. Bayes’ rule gives a characterization of μ0|J via the ratio of its density with respect to that of the prior:1
eq2
so that
eq3
where
eq4
The constant of proportionality is independent of u and irrelevant for the algorithms that we use below to probe the probability distribution μ0|J. Note that here, and in what follows, u denotes the random variable u0.
Using the fact that the prior μ0 is Gaussian it follows that the maximum a posteriori estimator of μ0|J is the minimizer of the functional:
e5
We let uMAP = argmin I(u); that is, uMAP is the value of u at which I(u) achieves its minimum. This so-called MAP estimator is, of course, simply the solution of the 4DVAR strong constraint variational method. The mathematical formulation of various inverse problems for the Navier–Stokes equations, justifying the formal manipulations in this subsection, may be found in Cotter et al. (2009).
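For finite-dimensional truncations, the minimization of (5) can be sketched as follows. A generic BFGS routine is used here purely for illustration; as noted in section 3, our actual computations use Newton's method with a homotopy-based starting point.

```python
import numpy as np
from scipy.optimize import minimize

def four_dvar(m0, C0, ys, Gamma, psi_steps, h):
    """Strong-constraint 4DVAR sketch: minimize the functional I(u) of
    (5) over the initial state. `psi_steps(u, k)` is a hypothetical
    stand-in for Psi(u; k*h); finite dimension is assumed."""
    C0_inv = np.linalg.inv(C0)
    G_inv = np.linalg.inv(Gamma)

    def I(u):
        val = 0.5 * (u - m0) @ C0_inv @ (u - m0)     # background term
        for k, y in enumerate(ys, start=1):
            d = y - psi_steps(u, k)                  # innovation at t_k
            val += 0.5 * d @ G_inv @ d
        return val

    return minimize(I, m0, method="BFGS").x          # MAP estimator
```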

2) Filtering

The posterior filtering distribution at time j given all observations up to time j can also be given in closed form by an application of Bayes’ rule. The prior is taken as the predicting distribution:
e6
The δ function appears because the dynamical model is deterministic. As we did for smoothing, we can apply Bayes’ rule to obtain the ratio of the density of μj|j with respect to μj|j−1:
e7
where
e8

Together (6) and (7) provide an iteration that, at the final observation time, yields the measure μJ|J. As mentioned in the introduction, this distribution can be obtained by evolving the posterior smoothing distribution μ0|J forward in time under the dynamics given by (2).

3. Overview of methods

In this section, we provide details of the various computational methods we use to obtain information about the probability distribution on the state of the system, given observations, in both the smoothing and filtering contexts. To approximate the gold standard, the Bayesian posterior distribution, we use state-of-the-art Markov chain Monte Carlo sampling for the inverse problem to obtain a large number of samples from the posterior distribution that are sufficient to represent its mode and the posterior spread around it. We also describe optimization techniques to compute the MAP estimator of the posterior density, namely 4DVAR. Both the Bayesian posterior sampling and 4DVAR are based on obtaining information from the smoothing distribution from section 2b(1). Then we describe a variety of filters, all building on the description of sequential filtering distributions introduced in section 2b(2), using Gaussian approximations of one form or another. These filters are 3DVAR, the Fourier diagonal filter, the extended Kalman filter, and the ensemble Kalman filter. We will refer to these filtering algorithms collectively as approximate Gaussian filters to highlight the fact that they are all derived by imposing a Gaussian approximation in the prediction step.

a. Markov chain Monte Carlo sampling of the posterior

We work in the setting of the Metropolis-Hastings variant of MCMC methods, employing recently developed methods that scale well with respect to system dimension; see Cotter et al. (2011) for further details and references. The resulting random walk method that we use to sample from μ0|J is given as follows:2

  • Draw u(0) ~ N(m0, C0) and set n = 1.

  • Define m* = m0 + (1 − β2)1/2[u(n−1) − m0].

  • Draw u* ~ N(m*, β2C0).

  • Let α(n−1) = min{1, exp[Φ(u(n−1)) − Φ(u*)]} and set
    eq5
  • nn + 1 and repeat.

After a burn-in period of M steps, the remaining draws {u(n)}n>M are regarded as samples from μ0|J. This sample is then pushed forward to yield a sample of time-dependent solutions, {u(n)(t)}, where u(n)(t) = Ψ[u(n); t], or, in particular in what follows, a sample of the filtering distribution {ΨJ[u(n)]}.
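A finite-dimensional sketch of the above random walk follows. The proposal mean m* is taken as the prior-centered pCN choice m0 + (1 − β²)^{1/2}(u^{(n−1)} − m0), which is the choice consistent with the stated acceptance probability; the function name and interface are ours.

```python
import numpy as np

def pcn_mcmc(Phi, m0, C0_sqrt, beta, n_steps, rng):
    """Preconditioned Crank-Nicolson random walk targeting
    mu(du) propto exp(-Phi(u)) N(m0, C0)(du), following the listed
    steps. C0_sqrt is any square root of C0. Sketch only."""
    d = m0.shape[0]
    u = m0 + C0_sqrt @ rng.standard_normal(d)          # u^(0) ~ N(m0, C0)
    samples = []
    for _ in range(n_steps):
        m_star = m0 + np.sqrt(1 - beta**2) * (u - m0)  # proposal mean
        u_star = m_star + beta * (C0_sqrt @ rng.standard_normal(d))
        alpha = min(1.0, np.exp(Phi(u) - Phi(u_star))) # accept probability
        if rng.uniform() < alpha:
            u = u_star
        samples.append(u)
    return np.array(samples)
```

With Φ ≡ 0 every proposal is accepted and the chain preserves the prior N(m0, C0), a useful correctness check.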

b. Variational methods: 4DVAR

As described in section 2, the minimizer of I defined in (5) defines the 4DVAR approximation, the basic variational method. A variety of optimization routines can be used to solve this problem. We have found Newton’s method to be effective, with a starting point computed by homotopy continuation from an easily computable problem.

We now outline how the 4DVAR solution may be used to generate an approximation to the distribution of interest. The 4DVAR solution (MAP estimator) coincides with the mean for unimodal symmetric distributions. If the variance under μ0|J is small then it is natural to seek a Gaussian approximation. This has the form N(uMAP, C), with uMAP the MAP estimator, where
eq6
Here D2 denotes the second derivative operator. This Gaussian on the initial condition u0 can be mapped forward under the dynamics, using linearization for the covariance, since it is assumed small, to obtain N(mJ, CJ), where mJ = ΨJ(uMAP) and
eq7
Here D denotes the derivative operator and the asterisk (*) the adjoint.

c. Approximate Gaussian filters

Recall the key update formulas (6) and (7). Note that the integrals are over the function space H, a fact that points to the extreme computational complexity of characterizing probability distributions for problems arising from PDEs or their high dimensional approximation. We will describe various approximations, all Gaussian in nature, which make the update formulas tractable. We describe some generalities relating to this issue before describing various method dependent specifics in the following subsections.

If Ψ is nonlinear then the fact that μj−1|j−1 is Gaussian does not imply that μj|j−1 is Gaussian; this follows from (6). Thus prediction cannot be performed simply by mapping mean and covariance. However, the update (7) has the property that if μj|j−1 is Gaussian then so is μj|j. If we assume that μj|j−1 = N(m̂j, Ĉj), then (7) shows that μj|j is Gaussian, N(mj, Cj), where mj is the MAP estimator given by
e9
[so that mj minimizes Ij(u)] and
eq8
Note that, using (8), we see that Ij is a quadratic form whose minimizer is given in closed form as the solution of a linear equation with the form
e10
where
e11
If the output of the prediction step given by (6) is approximated by a Gaussian, then this provides the basis for a sequential Gaussian approximation method. To be precise, if we have that
eq9
and we have formulas, based on an approximation of (6), that enable us to compute the map
e12
then together (10)–(12) provide an iteration for Gaussian approximations of the filtering distribution μj|j of the form
eq10
In the next few subsections we explain a variety of such approximations, and the resulting filters.
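For finite-dimensional truncations, the analysis step (10)-(11) has a familiar closed form, recorded here as a sketch in Kalman-gain form so that, as in our implementation, no inverse covariances are formed explicitly. The observation operator H is the identity for the velocity observations considered in this paper; the code is illustrative.

```python
import numpy as np

def kalman_update(m_pred, C_pred, y, H, Gamma):
    """Analysis step: closed-form minimizer of the quadratic I_j in (9),
    written with the Kalman gain to avoid inverse covariances. Sketch."""
    S = H @ C_pred @ H.T + Gamma               # innovation covariance
    K = C_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    m = m_pred + K @ (y - H @ m_pred)          # posterior mean, cf. (10)
    C = (np.eye(len(m_pred)) - K @ H) @ C_pred # posterior covariance
    return m, C
```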

1) Constant Gaussian filter (3DVAR)

The constant Gaussian filter, referred to as 3DVAR, consists of making the choices m̂j+1 = Ψ(mj) and Ĉj+1 = Ĉ, a fixed covariance, in (12). It is natural, theoretically, to choose Ĉ = C0, the prior covariance on the initial condition. However, as we will see, other issues may intervene and suggest, or necessitate, other choices.

2) Fourier diagonal filter

A first step beyond 3DVAR, which employs constant covariances when updating to incorporate new data, is to use some approximate dynamics in order to make the update (12). In Harlim and Majda (2008) and Majda et al. (2010) it is demonstrated that, in regimes exhibiting chaotic dynamics, linear stochastic models can be quite effective for this purpose: this is the idea of the FDF. In this subsection we describe how this idea may be used in both the steady and turbulent regimes of the Navier–Stokes system under consideration. For our purposes, and as observed in Harlim and Majda (2008), this approach provides a rational way of deriving the covariances in 3DVAR, based on climatological statistics.

The basic idea is, for the purposes of filtering, to replace the nonlinear map uj+1 = Ψ(uj) by the linear (stochastic when ≠ 0) map
e13
Here it is assumed that L is diagonal in the Fourier basis with spectrum in (0, 1); that Σ has summable eigenvalues and is diagonal in the Fourier basis; and that ξj is a random noise drawn from N(0, I). More sophisticated linear stochastic models could (and should) be used, but we employ this simplest of models to convey our ideas.
If L = exp(−Mh) and Σ = [I − exp(−2Mh)]Ξ, then (13) corresponds to the discrete time h solution of the Ornstein–Uhlenbeck (OU) process
eq11
where dW is the infinitesimal Brownian motion increment with identity covariance. The stationary solution is N(0, Ξ) and, letting Mk,k = αk, the correlation time for mode k can be computed as 1/αk. We employ two models of the form (13) in this paper, labeled A and B and detailed below. Before turning to them, we describe how this linear model is incorporated into the filter.
In the case of linear dynamics such as these, the map (12) is given in closed form:
eq12
This can be improved, however, in the spirit of 3DVAR, by updating only the covariance in this way and mapping the mean under the nonlinear map, to obtain the following instance of (12):
eq13
We implement the method in this form. We note that, because ‖L‖ < 1, the covariance Ĉj converges to a limit that can be computed explicitly and, asymptotically, the algorithm behaves like 3DVAR with a systematic choice of covariance. We now turn to the choices of L and Σ.
  • Model A is used in the stationary regime. It is found by setting L = exp(−νAh) and taking Σ = εI, where ε = 10−12. Although this does not correspond to an accurate linearization of the model in low wavenumbers, it is reasonable for high wavenumbers.

  • Model B is used in the strongly chaotic regime, and is based on the original idea in Harlim and Majda (2008) and Majda et al. (2010). The two quantities Ξk,k and αk are matched to the statistics of the dynamical model, as follows. Let u(t) denote the solution of the Navier–Stokes equation (1) which, abusing notation, we assume to be represented in the Fourier domain, with entries uk(t). Then Ξ is given by the formulas

eq14
In practice these integrals are approximated by finite discrete sums. Furthermore, we set the off-diagonal entries of Ξ to zero to obtain a diagonal model. Then αk is computed using the formulas
eq15
Again, finite discrete sums are used to approximate the integrals.
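Since the matching formulas above are approximated by discrete sums in practice, one plausible discrete analog for a single mode is sketched below: the stationary variance Ξk,k is taken as the empirical variance of the time series and αk is recovered from the lag-1 autocorrelation. This illustrates the idea; it is not the exact set of formulas used in the paper.

```python
import numpy as np

def fit_ou_params(series, dt):
    """Match an OU model to a mean-zero, stationary scalar mode time
    series, in the spirit of model B. Sketch: uses the lag-1
    autocorrelation as a discrete analog of the matching formulas."""
    xi = np.mean(series**2)                        # stationary variance
    rho1 = np.mean(series[1:] * series[:-1]) / xi  # lag-1 autocorrelation
    alpha = -np.log(max(rho1, 1e-12)) / dt         # 1/alpha = corr. time
    return alpha, xi

def ou_filter_matrices(alpha, xi, h):
    """L and Sigma in (13) for one Fourier mode: L = exp(-alpha*h),
    Sigma = [1 - exp(-2*alpha*h)] * Xi."""
    L = np.exp(-alpha * h)
    return L, (1.0 - L**2) * xi
```

Exact OU trajectories generated with `ou_filter_matrices` recover the input (α, Ξ) to within sampling error, which provides a simple consistency check.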

3) Low rank extended Kalman filter (LRExKF)

The idea of the extended Kalman filter is to assume that the desired distributions are approximately Gaussian with small covariance. Then linearization may be used to show that a natural approximation of (12) is the map3
e14
Updating the covariance this way requires one forward tangent linear solve and one adjoint solve for each dimension of the system, and is therefore prohibitively expensive for high dimensional problems. To overcome this we use a low rank approximation to the covariance update.
We write this explicitly as follows. Compute the m dominant eigenpairs of Ĉj as defined in (14); these satisfy
eq19
Define the rank-m matrix VΛV* and note that this captures the essence of the covariance implied by the extended Kalman filter in the directions of the m dominant eigenpairs. When the eigenvalues are well separated, as they are here, a small number of eigenvalues captures the majority of the action and this is very efficient. We then implement the filter
e15
where ε = 10−12 as above. The perturbation term εI prevents degeneracy.
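A dense sketch of the low-rank construction is given below. In practice the m dominant eigenpairs would be obtained matrix-free (e.g., by Lanczos iteration built on tangent-linear and adjoint solves) rather than by the full eigendecomposition used here only for illustration.

```python
import numpy as np

def low_rank_covariance(C, m, eps=1e-12):
    """Rank-m approximation V Lam V* + eps*I of a covariance matrix,
    keeping the m dominant eigenpairs as in the LRExKF. Sketch: a dense
    eigendecomposition stands in for a matrix-free Lanczos computation."""
    lam, V = np.linalg.eigh(C)                  # ascending eigenvalues
    lam, V = lam[::-1][:m], V[:, ::-1][:, :m]   # keep m dominant pairs
    return V @ np.diag(lam) @ V.T + eps * np.eye(C.shape[0])
```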

The notion of keeping track of the unstable directions of the dynamical model is not new, although our particular implementation differs in some details. For discussions and examples of this idea see Toth and Kalnay (1997), Palmer et al. (1998), Kalnay (2003), Leutbecher (2003), Auvinen et al. (2009), and Hamill et al. (2000).

4) Ensemble Kalman filter

The ensemble Kalman filter, introduced in Evensen (1994) and overviewed in Evensen (2003, 2009), is slightly outside the framework of the previous three filters, and there are many versions [see Lei et al. (2010) for a comparison between two major categories]. This is because the basic object that is updated is an ensemble of particles, not a mean and covariance. The ensemble is used to compute an empirical mean and covariance. We describe how the basic building blocks of the approximate Gaussian filters, namely (10), (11), and (12), are modified to use ensemble statistics.

We start with (12). Assuming one has an ensemble {uj(n)}n=1,…,N, (12) is replaced by the approximations
eq20
eq21
and
e16
Equation (10) is approximated via an ensemble of equations found by replacing the predicted mean by the corresponding forecast ensemble member and replacing yj by independent draws from N(yj, Γ). This leads to updates of the ensemble members whose sample mean yields mj. In the limit of infinitely many particles, the sample covariance yields Cj. In the comparisons we consider the covariance to be the analytical one, Cj, as in (11), rather than the ensemble sample covariance, which yields the one used implicitly in the next update (12). The discrepancy between these can be large for small samples, and in different situations it may have either a positive or negative effect on the filter divergence discussed in section 5. Solutions of the ensemble of equations of form (10) are implemented in the standard Kalman filter fashion; this does not involve computing the inverse covariances that appear in (11). There are many variants on the EnKF and we have simply chosen one representative version. See, for example, Tippett et al. (2003) and Evensen (2009).
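The perturbed-observation analysis step can be sketched as follows for a finite-dimensional state. The structure (sample mean and covariance from the forecast ensemble, an independently perturbed observation for each member) follows the description above, while the specific linear-algebra arrangement and names are ours.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, Gamma, rng):
    """Perturbed-observation EnKF analysis step (the stochastic variant
    used in this paper). `ensemble` is (N, d): N forecast members.
    Sketch only; no localization or inflation is applied."""
    N, d = ensemble.shape
    m = ensemble.mean(axis=0)
    A = ensemble - m                               # anomalies
    C = A.T @ A / (N - 1)                          # sample covariance
    S = H @ C @ H.T + Gamma                        # innovation covariance
    K = C @ H.T @ np.linalg.inv(S)                 # Kalman gain
    # Each member sees an independently perturbed observation ~ N(y, Gamma).
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T
```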

4. Filter accuracy

In this section we describe various aspects of the accuracy of both variational methods (4DVAR) and approximate Gaussian filters, evaluating them with respect to their effectiveness in reproducing the following two quantities: (i) the posterior distribution on state given observations and (ii) the truth u that gives rise to the observations. The first of these is found by means of accurate MCMC simulations and is then characterized by three quantities: its mean, variance, and MAP estimator. It is our contention that, where quantification of uncertainty is important, the comparison of algorithms by their ability to predict (i) is central; however, many algorithms are benchmarked in the literature by their ability to predict the truth [(ii)], and so we also include this information. A comparison of the algorithms with the observational data is also included as a useful check on the performance of the algorithms. Note that studying the error in (i) requires comparison of probability distributions; we do this primarily through comparison of mean and covariance information. In all our simulations the posterior distribution and the distributions implied by the variational and filtering algorithms are approximately Gaussian; for this reason studying the mean and covariance is sufficient. We note that we have not included model error in our study: uncertainty in the dynamical model comes only through the initial condition, and thus attempting to match the “truth” is not unnatural in our setting. Matching the posterior distribution is, however, arguably more natural and is a concept that generalizes in a straightforward fashion to the inclusion of model error. In this section all methods are presented in their “raw” form, unmodified and not optimized. Modifications that are often used in practice are discussed in the next section.

a. Nature of approximations

In this preliminary discussion we make three observations that help to guide and understand subsequent numerical experiments. For the purposes of this discussion we assume that the MCMC method, our gold standard, provides exact samples from the desired posterior distribution. There are then two key approximations underlying the methods that we benchmark against MCMC in this section. The first is the Gaussian approximation, which is made in 3DVAR–FDF, 4DVAR (when propagating from t = 0 to t = T), LRExKF, and EnKF; the second, additional approximation is sampling, which is made only in EnKF. The 3DVAR and FDF methods make a universal, steady approximation to the covariance, while 4DVAR, LRExKF, and EnKF all propagate the approximate covariance using the dynamical model. Our first observation is thus that we expect 3DVAR and FDF to underperform the other methods with regard to covariance information. The second observation arises from the following: the predicting (and hence smoothing and filtering) distribution will remain close to Gaussian as long as there is a balance between the dynamics remaining close to linear and the covariance being small enough (i.e., there is an appropriate level of either of these factors that can counteract any instance of the other one). In this case the evolution of the distribution is well approximated to leading order by the nonautonomous linear system update of ExKF, and similarly for the 4DVAR update from t = 0 to t = T. Our second observation is hence that the bias in the Gaussian approximation will become significant if the dynamics are sufficiently nonlinear or if the covariance becomes large enough. These two factors, which destroy the Gaussian approximation, become more pronounced as the Reynolds number increases, leading to more, and larger, growing (local) Lyapunov exponents; as the time interval between observations increases, allowing further growth; or, for 4DVAR, as the total time interval grows.
The third and final observation concerns EnKF methods. In addition to making the Gaussian approximation, these rely on sampling to capture the resulting Gaussian. Hence the error in the EnKF methods will become significant if the number of samples is too small, even when the Gaussian approximation is valid. Furthermore, since the number of samples required tends to grow both with dimension and with the inverse of the size of the quantity being measured, we expect that EnKF will encounter difficulties in this high dimensional system that will be exacerbated when the covariance is small.

We will show in the following that in the stationary case, and for high-frequency observations in the strongly chaotic case, the ExKF does perform well because of an appropriate balance of the level of nonlinearity of the dynamics on the scale of the time between observations and the magnitude of the covariance. Nonetheless, a reasonably sized ensemble in the EnKF is not sufficiently large for the error from that algorithm to be dominated by the ExKF error, and it is instead determined by the error in the sample statistics with which EnKF approximates the mean and covariance; this latter effect was demonstrated on a simpler model problem in Apte et al. (2008a). When the observations are sufficiently sparse in time in the strongly chaotic case, the Gaussian approximation is no longer valid and even the ExKF fails to recover accurate mean and covariance.
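The sampling effect just described is easy to quantify in isolation: the relative error of the sample covariance of N Gaussian draws decays like O(1/√N) and grows with dimension, as the sketch below illustrates under the simplifying assumption of an identity covariance.

```python
import numpy as np

def sample_cov_error(d, N, rng):
    """Relative Frobenius error of the sample covariance of N draws from
    N(0, I_d). Illustrates the Monte Carlo error that dominates EnKF
    accuracy once the Gaussian approximation itself is good. Sketch."""
    X = rng.standard_normal((N, d))
    C_hat = X.T @ X / (N - 1)                 # unbiased sample covariance
    return np.linalg.norm(C_hat - np.eye(d)) / np.linalg.norm(np.eye(d))
```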

b. Illustration via two regimes

This section is divided into two subsections, each devoted to a dynamical regime: stationary and strongly chaotic. The true initial condition u in the case of strongly chaotic dynamics is taken as an arbitrary point on the attractor obtained by simulating an arbitrary initial condition until statistical equilibrium. The initial condition for the case of stationary dynamics is taken as a draw from the Gaussian prior evolved a short time forward in the model, since the statistical equilibrium is the trivial one. Note that in the stationary dynamical regime the equation is dominated by the linear term and hence this regime acts as a benchmark for the approximate Kalman filters, since they are exact in the linear case. Each of these sections in turn explores the particular characteristics of the filter accuracy inherent to that regime as a function of time between observations, h. The final time T will mostly be fixed, so that decreasing h will increase the density of observations of the system on a fixed time domain; however, on several occasions we study the effect of fixing h and changing the final time T. Studies of the effect on the posterior distribution of increasing the number of observations are undertaken for some simple inverse problems in fluid mechanics in Cotter et al. (2011) and are not undertaken here.

We now explain the basic format of the tables that follow and indicate the major features of the filters that they exhibit. The first eight rows each correspond to a method of assimilation, while the final two rows correspond to the truth, at the start and end of the time window studied, for completeness. Labels for these rows are given in the far left column. The posterior distribution (MCMC) and MAP estimator (4DVAR) are each obtained via the smoothing distribution, and hence comparison is made at the initial time, t = 0, and at the final time, t = T, by mapping forward. For all other methods, the comparison is only with the filtering distribution at the final time, t = T. The remaining columns each indicate the relative error of the given filter with a particular diagnostic quantity of interest. The second, fourth, fifth, and sixth columns show e = ‖M(t) − m(t)‖/‖M(t)‖, where, for a given t (either 0 or T), m(t) is the time t mean of the filtering (or smoothing) distribution obtained from each of the various methods along the rows and M(t) is, respectively, the mean of the posterior distribution found by MCMC, E[u(t)]; the truth, u(t); the observation y(t); or the MAP estimator (4DVAR). The norm used is the L2{[−1, 1) × [−1, 1)} norm. The third column shows
eq22
where var indicates the variance, u is sampled from the posterior distribution (via MCMC), and U is the Gaussian approximate state obtained from the various methods. The subscripts in the titles in the top row indicate which relative error is given in that column.

The following universal observations can be made independent of model parametric regime:

  • The numerical results support the three observations made in the previous subsection.

  • Most algorithms do a reasonably good job of reproducing the mean of the posterior distribution.

  • The LRExKF and 4DVAR both do a reasonably good job of reproducing the variance of the posterior distribution if the Reynolds number is sufficiently small and/or the observation frequency high; otherwise there are circumstances in which the approximations underlying the ad hoc filters are not justified and they then fail to reproduce covariance information with any accuracy.

  • All other algorithms perform poorly when reproducing the variance of the posterior distribution.

  • All estimators of the mean are uniformly closer to the truth than the observations for all h.

  • In almost all cases, the estimators of the mean are closer to the mean of the posterior distribution than to the truth.

  • The error of the estimators of the mean with respect to the truth tends to increase with increasing h.

  • The error of the mean with respect to the truth decreases for increasing number of observations.

  • LRExKF usually has the smallest error with respect to the posterior mean and sometimes accurately recovers the variance.

  • The error in the variance is sometimes overestimated and sometimes underestimated, and usually this is wavenumber dependent in the sense that the variance of certain modes is overestimated and the variance of others is underestimated. This will be discussed further in the next section.

  • The posterior smoothing distribution becomes noticeably non-Gaussian although still unimodal, while the filtering distribution remains very close to Gaussian.

c. Stationary regime

In the stationary regime, ν = 0.1, the basic time step used is dt = 0.05, the smallest h considered is h = 0.2, and we fix T = 2 as the filtering time at which to compare the approximate filters with the moments of the posterior distribution via samples from MCMC, the MAP estimator from 4DVAR, the truth, and the observations. Figure 1 shows the vorticity, w (left), and Fourier coefficients, |uk| (right), of the smoothing distribution at t = 0 in the case when h = 0.2. The top panels are the mean of the posterior distribution found with MCMC, 𝔼(u), and the bottom panels are the truth, u(0). The MAP estimator [the minimizer of I(u)] is not shown because it is not discernible from the mean in this case. Notice that the mean and MAP estimator on the initial condition resemble the large-scale structure of the truth but are rougher. This roughness is caused by the presence of the prior mean m0, drawn according to the distribution N(u(0), C0). The solution operator Ψ immediately removes this roughness as it damps high wavenumbers; this effect can be seen in the images of the smoothing distribution mapped forward to time t = T (i.e., the filtering distribution) in Fig. 2 (here only the mean is shown, as neither the truth nor the MAP estimator is distinguishable from it). This is apparent in the data in the tables discussed below, in which the distances between the truth, the posterior distribution, and the MAP estimator are all mutually much smaller at the final time than at the initial time; this contraction of the errors in time is caused by the underlying dynamics, which involve exponential attraction to a unique stationary state. This is further exhibited in Fig. 3, which shows the histogram of the smoothing distribution for the real part of a sample mode, u1,1, at the initial time (left) and final time (right).

Fig. 1.

Low Reynolds number, stationary solution regime (ν = 0.1). (left) The vorticity w(0) of the smoothing distribution at t = 0 and (right) its Fourier coefficients for T = 10h = 2, for (top) the MCMC sample mean and (bottom) the truth. The MAP estimator is not distinguishable from the mean by eye and so is not displayed. The prior mean is taken as a draw from the prior and hence is not as smooth as the initial condition. It is the influence of the prior that makes the MAP estimator and mean rough, although structurally the same as the truth (the solution operator is smoothing, so these fluctuations are immediately smoothed out; see Fig. 2).

Citation: Monthly Weather Review 140, 11; 10.1175/MWR-D-11-00257.1

Fig. 2.

Low Reynolds number, stationary solution regime (ν = 0.1). (left) The vorticity w(T) of the filtering distribution at t = T and (right) its Fourier coefficients for T = 10h = 2. Only the MCMC sample mean is shown, since the solutions have been smoothed out and the differences among the MAP, mean, and truth are imperceptible.


Fig. 3.

The MCMC histogram for (left) t = 0 and (right) t = T = 10h = 2 together with the Gaussian approximation obtained from 4DVAR for low Reynolds number, stationary state regime (ν = 0.1).


Table 1 presents data for increasing h = 0.2, 1, 2, with T = 2 fixed. Notable trends, in addition to those mentioned at the start of this section, are as follows: (i) the 4DVAR smoothing distribution has much smaller error with respect to the mean at t = T than at t = 0, with the former increasing and the latter decreasing for increasing h; (ii) the errors of 4DVAR with respect to the mean and the variance at t = 0 and t = T are close to or below the threshold of accuracy of MCMC; and (iii) the errors of both the mean and the variance of 3DVAR tend to decrease with increasing h.

Table 1.

Stationary state regime, ν = 0.1, T = 2, with (top) h = 0.2, (middle) h = 1, and (bottom) h = 2. The first column defines the method corresponding to the given row. The second, fourth, fifth, and sixth columns show the norm difference, e = ‖Mm‖/‖M‖, where m is the mean obtained from the method for a given row and M is, respectively, the mean of the posterior distribution (MCMC), the truth, the observation, and the MAP estimator. The third column is the norm difference, e = ‖var[u] − var[U]‖/‖var[u]‖ where var indicates the variance, u is sampled from the posterior (via MCMC), and U is the approximate state obtained from the various methods.


d. Strongly chaotic regime

In the strongly chaotic regime, ν = 0.01, the basic time step used is dt = 0.005, the smallest h considered is h = 0.02, and we fix T = 0.2 or T = 1 as the filtering time at which to make comparisons of the approximate filters. In this regime, the dynamics are significantly more nonlinear and less predictable, with a high-dimensional attractor spanning many scales. Indeed, the average squared velocity spectrum decays approximately like a power of |k| for |k| < kf, with kf being the magnitude of the forcing wavenumber, and much more rapidly for |k| > kf. See the left panel of Fig. 4 for the average spectrum of the solution on the attractor and Fig. 5 for an example snapshot of the solution on the attractor. The flow is not in any of the classical regimes of cascades, but there is an upscale transfer of energy because of the forcing at an intermediate scale. The viscosity is not negligible even at the largest scales, thereby allowing statistical equilibrium; this may be thought of as being generated by the empirical measure on the global attractor, whose existence is assured for all ν > 0. We confirmed this with simulations to times of order O(10³ν⁻¹), giving O(10⁷) samples with which to compute the converged correlation statistics used in FDF.

Fig. 4.

(left) Average squared velocity spectrum on the attractor for ν = 0.01. (right) Difference between quantity a and quantity b, where a is the difference of the truth u(t) with a solution uτ(t) initially perturbed in the direction of the dominant local Lyapunov vectors υτ on a time interval of length τ with τ = 0.02, 0.2, and 0.5 [thus uτ(0) = u(t) + ɛυτ], and b is the evolution of that perturbation under the linearized model Uτ(t) = DΨ(u(0); t)ɛυτ. The magnitude of perturbation ɛ is determined by the projection of the initial posterior covariance in the direction υτ. The difference plotted thus indicates differences between linear and nonlinear evolution with the direction of the initial perturbations chosen to maximize growth and with size of the initial perturbations commensurate with the prevalent uncertainty. The relative error |[uτ(τ) − u(τ)] − Uτ(τ)|/|Uτ(τ)| (in l2) is 0.01, 0.15, and 0.42, respectively, for the three chosen values of increasing τ.


Fig. 5.

The MCMC mean as in Fig. 1 for high Reynolds number, strongly chaotic solution regime for ν = 0.01, T = 10h = 0.2: (top) t = 0 and (bottom) t = T.


Small perturbations in the directions of maximal growth of the dynamics grow substantially over the longer inter-observation times we consider, while over the shorter times the dynamics remain well approximated by the linearization. See the right panel of Fig. 4 for an example of the local maximal growth of perturbations. Figure 5 shows the initial and final time profiles of the mean as in Figs. 1 and 2. Now that the solutions themselves are rougher, it is not possible to notice the influence of the prior mean at t = 0, and the profiles of the truth and MAP estimator are indistinguishable from the mean throughout the interval of time. The situation in this regime is significantly different from the situation close to a stationary solution, primarily because the dimension of the attractor is very large and the dynamics on it are very unpredictable. Notice in Fig. 6 (top) that the uncertainty in u1,1 now barely decreases as we pass from initial time t = 0 to final time t = T. Indeed, for moderately high modes the uncertainty increases [see Fig. 6 (bottom) for the distribution of u5,5].

Fig. 6.

As in Fig. 3, but for strongly chaotic regime, ν = 0.01, T = 0.2, and h = 0.02. (top) Mode u1,1 and (bottom) mode u5,5.


Table 2 presents data for increasing h = 0.02, 0.1, 0.2, with T = 0.2 fixed. Table 3 shows data for increasing h = 0.2, 0.5, with T = 1 fixed. Notable trends, in addition to those mentioned at the start of the section, are as follows: (i) When computable, the variance of the 4DVAR smoothing distribution has smaller error at t = 0 than at t = T. (ii) The 4DVAR smoothing distribution error with respect to the variance cannot be computed accurately for T = 1 because of the error that accumulates over long times when the adjoint of the forward operator is approximated by the discretization of the analytical adjoint. (iii) The error of 4DVAR with respect to the mean at t = 0 for h ≤ 0.1 is below the threshold of accuracy of MCMC. (iv) The error in the variance for the FDF algorithm is very large because the variance used by FDF is an order of magnitude larger than Γ. (v) The FDF algorithm is consistent in recovering the mean for increasing h, while the other algorithms deteriorate. (vi) The error of FDF with respect to the variance decreases with increasing h. (vii) For h = 0.5 and T = 1 the FDF performs best; these desirable properties of the FDF variant on 3DVAR are associated with stability and will be discussed in the next section. (viii) For increasing h, the error in the mean of LRExKF first increases at h = 0.1 and T = 0.2 and becomes close to the error in the variance, which can be explained by the bias induced by neglecting the next order of the expansion of the dynamics. Finally, (ix) the error in LRExKF is substantial when T = 1, and it fails significantly when h = 0.5; this is consistent with the time scale on which nonlinear effects become prominent (see Fig. 4), beyond which the linear approximation would not be expected to be valid. The error in the mean is larger, again as expected from the Itô correction term.

Table 2.

As in Table 1, but for the strongly chaotic regime with ν = 0.01, T = 0.2, and h = (top) 0.02, (middle) 0.1, and (bottom) 0.2.

Table 3.

As in Table 2, but T = 1, and h = (top) 0.2 and (bottom) 0.5. The variance is omitted from the 4DVAR solutions here because we are unable to attain a solution with zero derivative. Note that we have taken the approach of differentiating and then discretizing. Over longer time intervals such as this, the error between the discretization of the analytical derivative and the derivative of the finite-dimensional discretized forward map therefore accumulates, and the derivative of the objective function is no longer well defined because of this error. Nonetheless, we confirm that we do obtain the MAP estimator, because the MCMC run does not yield any point of higher probability.


5. Filter stability

Many of the accuracy results for the filters described in the previous section are degraded if, as is common practice in applied scenarios, modifications are made to ensure that the algorithms remain stable over longer time intervals; that is, if some form of variance inflation is performed to keep the algorithm close to the true signal, or to prevent it from suffering filter divergence (see Jazwinski 1970; Fisher et al. 2005; Evensen 2009, and references therein). In this section we describe some of the mathematics that underlies stabilization, describe numerical results illustrating it, and investigate its effect on filter accuracy. The basic conclusion of this section is that stabilization via variance inflation enables algorithms to be run for longer time windows before diverging, but may cause poorer accuracy in both the mean (before divergence) and the variance predictions. Again, we make no claims of optimal implementation of these filters, but rather aim to describe the mechanism of stabilization and the common effect, in general, as measured by ability to reproduce the gold standard posterior distribution.

We define stability in this context to mean that the distance between the truth and the estimated mean remains small. As we will demonstrate, whether or not this distance remains small depends on whether the observations made are sufficient to control any instabilities inherent in the model dynamics. To understand this issue it is instructive to consider the 3DVAR, FDF, and LRExKF filters, all of which use a prediction step [(12)] that updates the mean using the nonlinear solution operator Ψ. When combined with the data incorporation step [(10)], we get an update equation of the form
m_{j+1} = (I − K_{j+1})Ψ(m_j) + K_{j+1}y_{j+1}, (17)
where K_{j+1} is the Kalman gain matrix. If we assume that the data are derived from a true signal satisfying u_{j+1} = Ψ(u_j) and that
y_{j+1} = u_{j+1} + η_{j+1},
where the η_j denote the observation errors, then we see that (17) has the form
m_{j+1} = (I − K_{j+1})Ψ(m_j) + K_{j+1}(u_{j+1} + η_{j+1}). (18)
If the observational noise is assumed to be consistent with the model used for the assimilation, then η_j ~ N(0, Γ) are independent and identically distributed (i.i.d.) random variables, and we note that (18) is an inhomogeneous Markov chain.
Note that
u_{j+1} = (I − K_{j+1})Ψ(u_j) + K_{j+1}u_{j+1}, (19)
so that by defining the error δ_j = m_j − u_j, linearizing Ψ about the truth, and subtracting (19) from (18), we obtain the equation
δ_{j+1} = (I − K_{j+1})D_jδ_j + K_{j+1}η_{j+1},
where D_j = DΨ(u_j). The stability of the filter will be governed by families of products of the form
∏_{j=J+1}^{J+n}(I − K_j)D_{j−1}.
We observe that I − K_j will act to induce stability, as it has norm less than one in appropriate spaces; D_j, however, will induce some instability whenever the dynamics themselves contain unstable growing modes. The balance between these effects—stabilization through observation and instability in the dynamics—determines whether the overall algorithm is stable.
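This competition between I − K_j and D_j can be illustrated with a scalar caricature: a hypothetical constant linearization multiplier d and fixed gain K (not any of the paper's filters), for which the error recursion is stable exactly when |(1 − K)d| < 1:

```python
import numpy as np

def error_recursion(d, K, gamma, n=500, seed=0):
    """Iterate the scalar error recursion delta_{j+1} = (1-K) d delta_j + K eta_{j+1},
    a toy stand-in for the products (I - K_j) D_{j-1} discussed in the text.
    Here d is the model linearization, K the gain, gamma the obs-noise std."""
    rng = np.random.default_rng(seed)
    delta = 1.0
    for _ in range(n):
        delta = (1 - K) * d * delta + K * gamma * rng.standard_normal()
    return delta

# Unstable dynamics d = 2: a large enough gain stabilizes the error,
# a smaller gain does not, since |(1-K)d| crosses 1.
stable = error_recursion(d=2.0, K=0.6, gamma=0.1)    # |(1-K)d| = 0.8
unstable = error_recursion(d=2.0, K=0.4, gamma=0.1)  # |(1-K)d| = 1.2
```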

The operator K_j weights the relative importance of the model and the observations, according to covariance information. Therefore, this weighting must effectively stabilize the growing directions in the dynamics. Note that increasing C_j (variance inflation) has the effect of moving K_j toward the identity, so the discussion above elucidates the mathematical mechanism by which variance inflation controls the instability. In particular, when the assimilation is proceeding in a stable fashion, the modes in which growing directions have support typically overestimate the variance, inducing this stability. In unstable cases, there are at least some times when some modes in which growing directions have support underestimate the variance, leading to instability of the filter. It is always the case that the onset of instability occurs when the distance from the estimated mean to the truth persistently exceeds the estimated standard deviation. In Brett et al. (2012) we provide the mathematical details and rigorous proofs that underpin the preceding discussion.
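The effect of inflation on the gain can be seen directly from the standard update K = C(C + Γ)⁻¹ for direct observations (H = I); a sketch with a hypothetical diagonal forecast covariance:

```python
import numpy as np

def kalman_gain(C, Gamma):
    """Kalman gain K = C (C + Gamma)^{-1} for direct observations (H = I)."""
    return C @ np.linalg.inv(C + Gamma)

# Observational covariance as in the paper (gamma = 0.04); the forecast
# covariance C here is purely illustrative.
Gamma = 0.04**2 * np.eye(2)
C = np.diag([1e-3, 1e-5])

# Inflating C pushes the diagonal of K toward 1, i.e., K toward the identity,
# so the update weights the observations more heavily.
K_plain = kalman_gain(C, Gamma)
K_inflated = kalman_gain(100.0 * C, Gamma)
```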

In the following, two observations concerning the size of the error are particularly instructive. First, using the distribution assumed on the η_j, the following lower bound on the error is immediate:
𝔼‖m_{j+1} − u_{j+1}‖² ≥ 𝔼‖K_{j+1}η_{j+1}‖² = tr(K_{j+1}ΓK_{j+1}*). (20)
This implies that the average scale of the error of the filter, with respect to the truth, is set by the scale of the observation error, and it shows that the choice of the covariance updates, and hence the Kalman gain K_j, will affect the exact size of the average error on this scale. The second observation follows from considering the trivial "filter" obtained by setting K_j ≡ I, which corresponds to simply setting m_{j+1} = y_{j+1}, so that all weight is placed on the observations. In this case the average error is equal to
𝔼‖m_{j+1} − u_{j+1}‖² = 𝔼‖η_{j+1}‖² = tr(Γ). (21)
As we would hope that incorporation of the model itself improves errors, we view (21) as providing an upper bound on any reasonable filter and we will consider the filter “unstable” if the squared error from the truth exceeds tr(Γ). Thus we use (21) and (20) as guiding upper and lower bounds when studying the errors in the filter means in what follows.
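A sketch of how the bound (21) could be used as a stability diagnostic for any filter run; the squared-error series below is synthetic:

```python
import numpy as np

def classify_run(sq_errors, Gamma):
    """Compare the time-averaged squared error ||m_j - u_j||^2 against tr(Gamma):
    a run whose average exceeds tr(Gamma) is 'unstable' in the sense of (21)."""
    return "stable" if np.mean(sq_errors) <= np.trace(Gamma) else "unstable"

# White observational noise with gamma = 0.04 on 32 observed components.
Gamma = 0.04**2 * np.eye(32)

# Synthetic squared-error time series for two hypothetical runs.
good_run = np.full(100, 0.3 * np.trace(Gamma))
bad_run = np.full(100, 4.0 * np.trace(Gamma))
```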

In cases where our basic algorithm is unstable in the sense just defined, we will also implement a stabilized algorithm by adopting the commonly used practice of variance inflation. The discussion above demonstrates how this acts to induce stability by causing the K_j to move closer to the identity. For 3DVAR this is achieved by taking the original covariance C0 and inflating it; in all the numerical computations presented in this paper that concern the stabilized version of 3DVAR we take the inflation parameter ε = 0.01. The FDF(b) algorithm remains stable since it already has an inflated variance via the model error term. For LRExKF we achieve variance inflation by replacing the perturbation term of (15) with the covariance arising from FDF(b). Finally, we discuss stabilization of the EnKF. This is achieved by taking the original Ĉ_j's given by (16) and redefining them via the transformations Ĉ_j → Ĉ_j + εI and Ĉ_j → (1 + ε_i)Ĉ_j + ε_pC0, with ε = 10⁻⁴, ε_i = 0.1, and ε_p = 0.01. The parameter ε prevents initial divergence, ε_i maintains stability with direct incremental inflation, and ε_p provides rank correction. This is only one option out of a wide array of possible heuristically derived transformations. For example, rank correction is often performed by some form of localization that preserves trace and eliminates long-range correlations, while our rank correction preserves long-range correlations and provides trace inflation. The point here is that our transformation captures the essential mechanism of stabilization by inflation, which, again, is our objective.
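A schematic sketch of EnKF-style inflation of the kind described above; the composition order and the additive εI form are assumptions here, not a statement of the paper's exact implementation:

```python
import numpy as np

def stabilize_covariance(C_hat, C0, eps=1e-4, eps_i=0.1, eps_p=0.01):
    """Schematic covariance inflation: additive inflation eps*I (guards
    against initial divergence), incremental inflation (1 + eps_i), and a
    rank correction eps_p*C0 toward the prior covariance. The exact
    composition used in the paper may differ."""
    n = C_hat.shape[0]
    C = C_hat + eps * np.eye(n)
    return (1.0 + eps_i) * C + eps_p * C0

# A rank-deficient ensemble covariance becomes full rank after correction,
# with its trace inflated.
C_hat = np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])   # rank-1 ensemble covariance
C0 = np.eye(3)                                       # prior covariance (illustrative)
C = stabilize_covariance(C_hat, C0)
```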

We denote the stabilized versions of 3DVAR, LRExKF, and EnKF by [3DVAR], [LRExKF], and [EnKF]. Because FDF itself always remains stable we do not show results for a stabilized version of this algorithm. Note that we use ensembles in EnKF of equal size to the number of approximate eigenvectors in LRExKF, in order to ensure comparable work. This is always 100, except for large h, when some of the largest 100 eigenvalues are too close to 0 to maintain accuracy, and so fewer eigenvectors are used in LRExKF in these cases. Also, note again that we are looking for general features across methods and are not aiming to optimize the inflation procedure for any particular method.

Examples of an unstable instance of 3DVAR and the corresponding stabilized filter, [3DVAR], are depicted in Figs. 7 and 8, respectively, with ν = 0.01 and h = 0.2. In this regime the dynamics are strongly chaotic. The first point to note is that both simulations give rise to an error that exceeds the lower bound (20), but only the unstable algorithm exceeds the desired upper bound (21); note also that the stabilized algorithm output is plotted over a longer time interval than the original algorithm. A second noteworthy point relates to the power of using the dynamical model: this is manifest in the bottom right panels of each figure, in which the trajectory of a high-wavenumber mode, close to the forcing frequency, is shown. In the stabilized case the assimilation performs remarkably well for this mode relative to the observations, owing to the high weight on the dynamics and the stability of the dynamical model at that wavenumber. Examples of an unstable instance of LRExKF and the corresponding stabilized filter, [LRExKF], are depicted in Figs. 9 and 10, respectively, with ν = 0.01 and h = 0.5. The behavior illustrated is very similar to that exhibited for 3DVAR and [3DVAR].

Fig. 7.

Example of an unstable trajectory for 3DVAR with ν = 0.01, h = 0.2. (top left) The norm-squared error between the estimated mean and the truth u(tn) in comparison to the preferred upper bound [i.e., the total observation error tr(Γ); (21)] and the lower bound (20). The other three panels show the estimator m(t) together with the truth u(t) and the observations yn for (top right) Im(u0,1), (bottom left) Re(u1,2), and (bottom right) Re(u7,7).


Fig. 8.

Example of a variance-inflated stabilized trajectory for [3DVAR] with the same external parameters as in Fig. 7. Panels are as in Fig. 7.


Fig. 9.

Example of an unstable trajectory for LRExKF with ν = 0.01, h = 0.5. Panels are as in Fig. 7.


Fig. 10.

Example of a variance-inflated stabilized trajectory (updated with model B from section 2 on the complement of the low-rank approximation) for [LRExKF] with the same external parameters as in Fig. 9. Panels are as in Fig. 9.


In the following tables we make a comparison between the original form of the filters and their stabilized forms, using the gold standard Bayesian posterior distribution as the desired outcome. Table 4 shows data for h = 0.02 and 0.2 with T = 0.2 fixed. Tables 5 and 6 show data for h = 0.2 and 0.5, respectively, with T = 1 fixed. We focus our discussion on the approximation of the mean. It is noteworthy that, on the shorter time horizon T = 0.2, the stabilized algorithms are less accurate with respect to the mean than their original counterparts, for both values of observation time h; this reflects a lack of accuracy caused by inflating the variance. As would be expected, however, this behavior is reversed on longer time intervals, as is shown when T = 1.0, reflecting enhanced stability caused by inflating the variance. Table 5 shows the case T = 1.0 with h = 0.2, and the stabilized version of 3DVAR outperforms the original version, although the stabilized versions of EnKF and LRExKF are not as accurate as the original version. In Table 6, with h = 0.5 and T = 1.0, the stabilized versions improve upon the original algorithms in all three cases. Furthermore, in Table 6, we also display the FDF showing that, without any stabilization, this outperforms the other three filters and their stabilized counterparts.

Table 4.

The data of unstable algorithms from Table 2 (ν = 0.01, T = 0.2) are reproduced above [with h = (top) 0.02 and (bottom) 0.2], along with the respective stabilized versions in brackets. Here the stabilized versions usually perform worse. Note that over longer time scales, the unstabilized version will diverge from the truth, while the stabilized one remains close.

Table 5.

As in Table 4, but T = 5h = 1 and h = 0.2. The [3DVAR] performs better with respect to the mean.

Table 6.

As in Table 5, but h = 0.5. All stabilized algorithms now perform better with respect to the mean. [LRExKF] above uses 50 eigenvectors in the low rank representation, and performs worse for larger numbers, indicating that the improvement is largely due to the FDF component. The stable FDF data are included here as well, since FDF is now competitive as the optimal algorithm in terms of mean estimator. This is expected to persist for larger time windows and lower-frequency observations, since the LRExKF is outside of the regime of validity, as shown in Fig. 4.


6. Conclusions

Incorporating noisy data into uncertain computational models presents a major challenge in many areas of the physical sciences, and in atmospheric modeling and NWP in particular. Data assimilation algorithms in NWP have had measurable positive impact on forecast skill. Nonetheless, assessing the ability of these algorithms to forecast uncertainty is more subtle. It is important to do so, however, especially as prediction is pushed to the limits of its validity in terms of time horizons considered, or physical processes modeled. In this paper we have proposed an approach to the evaluation of the ability of data assimilation algorithms to predict uncertainty. The cornerstone of our approach is to adopt a fully non-Gaussian Bayesian perspective in which the probability distribution of the system state over a time horizon, given data over that time horizon, plays a pivotal role: we contend that algorithms should be evaluated by their ability to reproduce this probability distribution, or important aspects of it, accurately.

To make this perspective useful it is necessary to find a model problem that admits complex behavior reminiscent of atmospheric dynamics, while being sufficiently small to allow computation of the Bayesian posterior distribution, so that data assimilation algorithms can be compared against it. Although MCMC sampling of the posterior can, in principle, recover any distribution, it becomes prohibitively expensive for multimodal distributions, depending on the energy barriers between modes. However, for unimodal problems, state-of-the-art sampling techniques allow fully resolved MCMC computations to be undertaken. We have found that the 2D Navier–Stokes equations provide a model for which the posterior distribution may be accurately sampled using MCMC, in regimes where the dynamics are stationary and where they are strongly chaotic. We have confined our attention to strong-constraint models and have implemented a range of variational and filtering methods, evaluating them by their ability to reproduce the Bayesian posterior distribution. The setup is such that the Bayesian posterior is unimodal and approximately Gaussian. Thus the evaluation is undertaken by comparing the mean and covariance structure of the data assimilation algorithms against the actual Bayesian posterior mean and covariance. Similar studies were undertaken in the context of a subsurface geophysical inverse problem in Liu and Oliver (2003), although the conclusions were less definitive. It would be interesting to revisit such subsurface geophysical inverse problems using the state-of-the-art MCMC techniques adopted here, in order to compute the posterior distribution. Moreover, it would be interesting to conduct a study, similar to that undertaken here, for models of atmospheric dynamics such as Lorenz-96 or for quasigeostrophic models, which admit baroclinic instabilities.

These studies, under the assumption of a well-defined posterior probability distribution, lead to four conclusions: (i) Most filtering and variational algorithms do a reasonably good job of reproducing the mean. (ii) For most of the filtering and variational algorithms studied and implemented here, there are circumstances in which the approximations underlying the ad hoc filters are not justified, and they then fail to reproduce covariance information with any accuracy. (iii) Most filtering algorithms exhibit instability on longer time intervals, causing them to lose accuracy even in mean prediction. (iv) Filter stabilization, via variance inflation of one sort or another, ameliorates this instability and can improve long-term accuracy of the filters in predicting the mean, but can reduce the accuracy on short time intervals and of course makes it impossible to predict the covariance. In summary, most data assimilation algorithms used in practice should be viewed with caution when used to make claims concerning uncertainty although, when properly tuned, they will frequently track the signal mean accurately for fairly long time intervals. These conclusions are intrinsic to the algorithms and result from the nature of the approximations made in order to create tractable online algorithms; the basic conclusions are not expected to change by use of different dynamical models or by modifying the parameters of those algorithms.

Finally, we note that we have not addressed in this paper the important but complicated issue of how to choose the prior distribution on the initial condition. We finish with some remarks concerning this. The "accuracy of the spread" of the prior is often monitored in practice with a rank histogram (Anderson 1996). This can be computed, even in the absence of an ensemble, for any method in the class of those discussed here, by partitioning the real line into bins according to the assumed Gaussian prior density. It is important to note that uniform component-wise rank histograms in each direction guarantee that there are no directions in which the variance is consistently underestimated, and this should therefore be sufficient for stability. It is also necessary for the accurate approximation of the Bayesian posterior distribution, but by no means sufficient (Hamill et al. 2000). Indeed, one can iteratively compute a constant prior with the cycled 3DVAR algorithm (Hamill et al. 2000) such that the estimator from the algorithm will have statistics consistent with the constant prior used in the algorithm. The estimator produced by this algorithm is guaranteed by construction to yield uniform rank histograms of the type described above, and yet the actual prior, coming from the posterior at the previous time, is not constant, so this cannot be a good approximation of the actual prior. See Fig. 11 for an image of the posterior and prior variance consistent with the statistics of the estimator over 100 iterations of 3DVAR with ν = 0.01 and h = 0.5 at time T = 1, as compared with the true posterior and converged FDF variance. Notice that FDF overestimates in the high-variance directions and underestimates in the low-variance directions (which correspond in our case to the unstable and stable directions, respectively).
The RMSE of 3DVAR with the constant converged FDF variance is smaller than with the constant variance obtained from converged innovation statistics, and yet the former will clearly yield component-wise rank histograms that appear always to underestimate the "spread" in the low-variance, stable directions and to overestimate it in the high-variance, unstable directions. It is also noteworthy that the FDF variance accurately recovers the decay of the posterior variance but is about an order of magnitude larger. Further investigation of how to initialize statistical forecasting algorithms clearly remains a subject presenting many conceptual and practical challenges.
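The component-wise rank histogram described above can be computed without an ensemble by partitioning the real line into equally probable bins of the assumed Gaussian; a sketch (the bin count is arbitrary):

```python
import numpy as np
from math import erf, sqrt

def gaussian_rank(u, m, sigma, nbins=10):
    """Bin the truth u into one of nbins equally probable bins of the
    assumed Gaussian N(m, sigma^2), using the CDF to partition the line."""
    p = 0.5 * (1.0 + erf((u - m) / (sigma * sqrt(2.0))))
    return min(int(p * nbins), nbins - 1)

# For a well-calibrated Gaussian, the ranks of the truth over many
# assimilation cycles are roughly uniform across bins.
rng = np.random.default_rng(1)
truths = rng.normal(0.0, 1.0, size=10000)
hist = np.bincount([gaussian_rank(u, 0.0, 1.0) for u in truths], minlength=10)
```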

Fig. 11.

The (left) posterior and (right) prior of the covariance from converged innovation statistics from the cycled 3DVAR algorithm in comparison to the converged covariance from the FDF algorithm and the posterior distribution.


Acknowledgments

Both authors are grateful to the referees for numerous suggestions that have improved the presentation of this material. In particular, we thank Chris Snyder. KJHL is grateful to the EPSRC for funding. AMS is grateful to EPSRC, ERC, and ONR for funding.

Appendix

Some Numerical Details

Here we provide some details of the numerical algorithms underlying the computations that we present in the main body of the paper. First, we will describe the numerical methods used for the dynamical model. Second, we study the adjoint solver. Third, we discuss various issues related to the resulting optimization problems and large linear systems encountered. Finally, we discuss the MCMC method used to compute the gold standard posterior probability distribution.

In the dynamical and observational models the forcing in (1) is taken to be f = ∇⊥ψ, where ψ = cos(k · x) and ∇⊥ = J∇, with J being the canonical skew-symmetric matrix, and k = (1, 1) for the stationary (ν = 0.1) regime, while k = (5, 5) for the strongly chaotic regime in order to allow an upscale cascade of energy. Furthermore, we set the observational noise to white noise, Γ = γ²I, where γ = 0.04 is chosen as 10% of the maximum standard deviation of the strongly chaotic dynamics, and we choose an initial smoothness prior C0 = A⁻², where A is the Stokes operator. We notice that only the observations on the unstable manifold of the underlying solution map need to be assimilated. A similar observation was made in Chorin and Krause (2004) in the context of particle filters. Our choice of prior and observational covariance reflects this in the sense that the ratio of the prior to the observational covariance is larger for smaller wavenumbers (and greater than 1, in particular), in which the unstable manifold has support, while this ratio tends to 0 as |k| → ∞. The initial mean, or background state, is chosen as m0 ~ N(u, C0), where u is the true initial condition. In the case of strongly chaotic dynamics the true initial condition is taken as an arbitrary point on the attractor, obtained by simulating an arbitrary initial condition until statistical equilibrium. In the case of stationary dynamics it is taken as a draw from the Gaussian prior, since the statistical equilibrium is the trivial one.
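A draw from the smoothness prior C0 = A⁻² can be sketched in Fourier space: with Stokes eigenvalues proportional to |k|², the coefficient standard deviations decay like |k|⁻². All constants below are schematic, not the paper's:

```python
import numpy as np

def sample_smoothness_prior(kmax=32, seed=0):
    """Draw Fourier coefficients of a mean-zero Gaussian with covariance
    C0 = A^{-2}: Stokes eigenvalues ~ |k|^2, so coefficient variances decay
    like |k|^{-4}, i.e., standard deviations like |k|^{-2} (schematic)."""
    rng = np.random.default_rng(seed)
    k1, k2 = np.meshgrid(np.arange(-kmax, kmax + 1.0),
                         np.arange(-kmax, kmax + 1.0))
    ksq = k1**2 + k2**2
    ksq[ksq == 0] = np.inf          # omit the constant (k = 0) mode
    std = ksq**-1.0                 # sqrt(|k|^{-4}) = |k|^{-2}
    return std * (rng.standard_normal(std.shape)
                  + 1j * rng.standard_normal(std.shape))

coeffs = sample_smoothness_prior()
```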

Our numerical method for the dynamical model is based on a Galerkin approximation of the velocity field in a divergence-free Fourier basis. We use a modification of a fourth-order Runge–Kutta method, ETD4RK (Cox and Matthews 2002), in which the heat semigroup is used together with Duhamel’s principle to solve exactly for the diffusion term. A spectral Galerkin method (Hesthaven et al. 2007) is used in which the convolutions arising from products in the nonlinear term are computed via FFTs. We use a double-sized domain in each dimension, buffered with zeros, resulting in 64² gridpoint FFTs, and only half the modes are retained when transforming back into spectral space in order to prevent aliasing, which is avoided as long as fewer than ⅔ of the modes are retained. Data assimilation in practice always contends with poor spatial resolution, particularly in the case of the atmosphere, in which there are many billions of degrees of freedom. For us the important resolution consideration is that the unstable modes, which usually have long spatial scales and support in low wavenumbers, are resolved. Therefore, our objective here is not to obtain high spatial resolution but rather high temporal resolution, in the sense of reproducibility: we would like the divergence of two nearby trajectories to be dictated by instability in the dynamical model rather than by the numerical time-stepping scheme.
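The zero-padded evaluation of the nonlinear term can be sketched in one dimension as follows. This is an illustration of the standard padded-FFT dealiasing technique, not the paper's 2D implementation: N retained modes are embedded in a 2N array, the product is formed on the double-sized grid, and only the original N modes are kept, so the quadratic aliasing errors land in the discarded half.

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """Pointwise product of two fields given by N spectral coefficients,
    computed on a zero-padded 2N grid (1D sketch of the 2D procedure).
    Coefficients follow the numpy.fft layout and the convention
    u_j = ifft(u_hat)_j."""
    N = u_hat.size
    M = 2 * N
    pu = np.zeros(M, dtype=complex)
    pv = np.zeros(M, dtype=complex)
    # embed the N retained modes (nonnegative then negative wavenumbers)
    pu[:N // 2] = u_hat[:N // 2]; pu[-(N // 2):] = u_hat[-(N // 2):]
    pv[:N // 2] = v_hat[:N // 2]; pv[-(N // 2):] = v_hat[-(N // 2):]
    # physical-space product on the double-sized grid
    w_hat = np.fft.fft(np.fft.ifft(pu) * np.fft.ifft(pv))
    # retain only the original N modes; M/N compensates the grid-size change
    return np.concatenate([w_hat[:N // 2], w_hat[-(N // 2):]]) * (M / N)
```

For inputs whose product is band-limited within the retained modes, this agrees with the exact product computed on the coarse grid, while for broader-band inputs the aliased contributions are removed rather than folded back.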

It is also important that we have accurate adjoint solvers, and this is strongly linked to the accuracy of the forward solver. The same time-stepper is used to solve the adjoint equation, with twice the time step of the forward solve, since the forward solution is required at half steps in order to implement this method for the nonautonomous adjoint solve. Many issues can arise in the implementation of adjoint or costate methods (Banks 1992; Vogel and Wade 1995), and the practitioner should be aware of these. The easiest way to ensure convergence is to test that the tangent linearized map is indeed the linearization of the solution map, and then to confirm that the adjoint is indeed the adjoint of that map, to a suitable threshold. We have taken the approach of “optimize then discretize” here, so our adjoint model is a discretization of the analytical adjoint rather than the exact adjoint of the discrete forward map. The resulting discrepancy becomes apparent in the accuracy of the linearization over longer time intervals, for which we are no longer able to compute accurate gradients and Hessians.
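The two consistency checks just described can be sketched generically as follows, for a toy nonlinear map F standing in for the solution operator (not the Navier–Stokes solver of the paper): a Taylor test verifying that the linearization remainder decays at second order, and an adjoint test verifying the inner-product identity to a suitable threshold.

```python
import numpy as np

# Toy nonlinear map F, its tangent linear map J(u)v, and claimed adjoint Jt(u)w.
def F(u):
    return np.array([u[0] ** 2, u[0] * u[1]])

def J(u, v):
    return np.array([2 * u[0] * v[0], u[1] * v[0] + u[0] * v[1]])

def Jt(u, w):
    return np.array([2 * u[0] * w[0] + u[1] * w[1], u[0] * w[1]])

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 2))

# 1) Taylor test: ||F(u + eps*v) - F(u) - eps*J(u)v|| should be O(eps^2),
#    so doubling eps should multiply the remainder by about 4.
eps = 1e-4
r1 = np.linalg.norm(F(u + eps * v) - F(u) - eps * J(u, v))
r2 = np.linalg.norm(F(u + 2 * eps * v) - F(u) - 2 * eps * J(u, v))
assert r2 / r1 > 3.5  # ratio near 4 confirms a second-order remainder

# 2) Adjoint test: <J(u)v, w> = <v, Jt(u)w> to a suitable threshold.
assert abs(np.dot(J(u, v), w) - np.dot(v, Jt(u, w))) < 1e-12
```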

Regarding linear algebra and optimization issues we make the following observations. A Krylov method (GMRES) is used for the linear solves in the Newton method for 4DVAR, and the Arnoldi method is used for the low-rank covariance approximations in LRExKF and for the filtering-time T covariance approximation in 4DVAR. The low-rank approximation in LRExKF always captures more than 99% of the full-rank covariance, as measured in the Frobenius (matrix l2) norm. The initial Hessian in 4DVAR, as well as those occurring within Newton’s method, is computed by finite differences. Using a gradient flow (preconditioned steepest descent) computation, we obtain an approximate minimizer close to the actual minimizer, and then a preconditioned Newton–Krylov nonlinear fixed-point solver is used (NSOLI; Kelley 2003). This approach is akin to the Levenberg–Marquardt algorithm. See Trefethen and Bau (1997) and Saad (1996) for overviews of the linear algebra and Nocedal and Wright (1999) for an overview of optimization. Strong-constraint 4DVAR can be computationally challenging and, although we do not do so here, it would be interesting to study weak-constraint 4DVAR from a related perspective; see Bröcker (2010) for a discussion of weak-constraint 4DVAR in continuous time. It is useful to employ benchmarks in order to confirm that gradients are being computed properly when implementing optimizers (see, e.g., Lawless et al. 2003).
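The Frobenius-norm criterion for the low-rank covariance approximation can be sketched as follows, using a synthetic covariance with a rapidly decaying spectrum in place of the filter covariance; a dense eigendecomposition stands in for the matrix-free Arnoldi iteration used in practice.

```python
import numpy as np

# Synthetic SPD covariance with fast spectral decay.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
evals = 2.0 ** -np.arange(50)
C = (Q * evals) @ Q.T

def frobenius_capture(C, r):
    """Fraction of ||C||_F captured by the best rank-r approximation."""
    w, V = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1][:r]          # r largest eigenvalues
    Cr = (V[:, idx] * w[idx]) @ V[:, idx].T
    return np.linalg.norm(Cr) / np.linalg.norm(C)  # Frobenius by default

print(frobenius_capture(C, 4))  # > 0.99 for this spectrum
```

With geometrically decaying eigenvalues a very small rank already exceeds the 99% threshold, which is the regime in which the low-rank filter is a faithful surrogate for its full-rank counterpart.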

Finally, we comment on the MCMC computations, which, of all the algorithms implemented here, incur the highest computational cost. This, of course, is because MCMC fully resolves the posterior distribution of interest, whereas the other algorithms use crude approximations, the consequences of which we study by comparison with accurate MCMC results. Each time step requires four function evaluations, and each function evaluation requires eight FFTs, so each time step costs 32 FFTs. We fix the lengths of paths at 40 time steps for most of the computations, so a single evaluation of the dynamical model costs on the order of 1000 FFTs. If a 64² FFT takes 1 ms, this amounts to roughly 1 s per sample; clearly this is a hurdle, as it would take on the order of 10 days to obtain on the order of millions of samples in series. We overcome this by using the MAP estimator (the 4DVAR solution) as the initial condition in order to accelerate burn-in, and then running independent batches of 10⁴ samples in parallel with independent seeds in the random number generator. We also minimize computational effort within the method by employing the technique of early rejection introduced by Haario (H. Haario 2010, personal communication): rejection can often be detected before the forward computation required to evaluate Φ reaches the end of the assimilation time window, at which point the computation can be stopped, yielding computational savings.
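The early-rejection idea can be sketched for a random-walk Metropolis step as follows. The quadratic misfit increments here are toy stand-ins for the contributions to Φ accumulated along the assimilation window; the key point is that Φ grows monotonically as the forward solve proceeds, so the accept/reject decision can sometimes be made before the window is traversed.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal(40)  # stand-in observations over the window

def phi_increments(u):
    """Nonnegative misfit contributions accumulated along the window."""
    return (u - data) ** 2 / 2

def metropolis_step(u_curr, phi_curr):
    u_prop = u_curr + 0.1 * rng.standard_normal()
    # Accept iff log(xi) < phi_curr - phi_prop with xi ~ U(0, 1); hence we
    # can reject as soon as the running phi_prop exceeds phi_curr - log(xi).
    threshold = phi_curr - np.log(rng.uniform())
    phi_prop = 0.0
    for inc in phi_increments(u_prop):
        phi_prop += inc
        if phi_prop > threshold:  # early rejection: abandon the forward solve
            return u_curr, phi_curr
    return u_prop, phi_prop

u, phi = 0.0, float(np.sum(phi_increments(0.0)))
for _ in range(100):
    u, phi = metropolis_step(u, phi)
```

The savings come entirely from truncated forward solves for rejected proposals; accepted proposals still pay the full cost, so the acceptance probability is unchanged.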

It is important to recognize that we cannot rely too heavily on results of MCMC with relative norms smaller than 10⁻³ for the mean or 10⁻² for the variance, because we are bound to O(N^{-1/2}) convergence and it is already prohibitively expensive to obtain several million samples; more than 10⁷ is not tractable. Convergence is measured by a version of the potential scale reduction factor (Brooks and Gelman 1998), e^v_{1:8} = ‖var[u1(t)] − var[u8(t)]‖/‖var[u1(t)]‖, where u1 corresponds to sample statistics over one chain and u8 corresponds to sample statistics over eight chains. We find e^v_{1:8} = O(10⁻²) for N = 3.2 × 10⁵ samples in each chain. If we define the analogous quantity for the mean, e^m_{1:8} = ‖E[u1(t)] − E[u8(t)]‖/‖E[u1(t)]‖, then we have e^m_{1:8} = O(10⁻³).
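The between-chain diagnostics used above can be sketched as follows, with i.i.d. Gaussian draws standing in for MCMC output; for a target with nonzero mean, both relative discrepancies shrink like O(N^{-1/2}) in the number of samples N per chain.

```python
import numpy as np

rng = np.random.default_rng(3)
# Eight independent "chains" of i.i.d. draws stand in for MCMC output;
# the target here has mean 1 and unit variance.
chains = 1.0 + rng.standard_normal((8, 100_000))

# Relative discrepancy of one chain's sample variance (and mean) against
# the pooled statistics over all eight chains.
e_v = abs(np.var(chains[0]) - np.var(chains)) / abs(np.var(chains[0]))
e_m = abs(np.mean(chains[0]) - np.mean(chains)) / abs(np.mean(chains[0]))
print(e_v, e_m)  # both small; limited by Monte Carlo error, not by bias
```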

REFERENCES

  • Anderson, J., 1996: A method for producing and evaluating probabilistic forecasts from ensemble model integrations. J. Climate, 9, 1518–1530.

  • Apte, A., M. Hairer, A. Stuart, and J. Voss, 2007: Sampling the posterior: An approach to non-Gaussian data assimilation. Physica D, 230, 50–64.

  • Apte, A., C. Jones, A. Stuart, and J. Voss, 2008a: Data assimilation: Mathematical and statistical perspectives. Int. J. Numer. Methods Fluids, 56, 1033–1046.

  • Apte, A., C. Jones, and A. Stuart, 2008b: A Bayesian approach to Lagrangian data assimilation. Tellus, 60, 336–347.

  • Arulampalam, M., S. Maskell, N. Gordon, and T. Clapp, 2002: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process., 50, 174–188.

  • Auvinen, H., J. Bardsley, H. Haario, and T. Kauranne, 2009: Large-scale Kalman filtering using the limited memory BFGS method. Electron. Trans. Numer. Anal., 35, 217–233.

  • Bain, A., and D. Crişan, 2008: Fundamentals of Stochastic Filtering. Springer Verlag, 390 pp.

  • Banks, H., 1992: Computational issues in parameter estimation and feedback control problems for partial differential equation systems. Physica D, 60, 226–238.

  • Banks, H., and K. Kunisch, 1989: Estimation Techniques for Distributed Parameter Systems. Birkhauser, 315 pp.

  • Bengtsson, T., C. Snyder, and D. Nychka, 2003: Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res., 108, 8775, doi:10.1029/2002JD002900.

  • Bennett, A., 2002: Inverse Modeling of the Ocean and Atmosphere. Cambridge University Press, 234 pp.

  • Brett, C., A. Lam, K. Law, D. McCormick, M. Scott, and A. Stuart, 2012: Accuracy and stability of filters for dissipative PDEs. Physica D, in press.

  • Bröcker, J., 2010: On variational data assimilation in continuous time. Quart. J. Roy. Meteor. Soc., 136, 1906–1919.

  • Brooks, S., and A. Gelman, 1998: General methods for monitoring convergence of iterative simulations. J. Comput. Graph. Stat., 7, 434–455.

  • Bryson, A., and M. Frazier, 1963: Smoothing for linear and nonlinear dynamic systems. U.S. Air Force Tech. Rep. AFB-TDR-63-119, Wright-Patterson Air Force Base, OH, Aeronautical Systems Division, 353–364.

  • Carrassi, A., M. Ghil, A. Trevisan, and F. Uboldi, 2008: Data assimilation as a nonlinear dynamical systems problem: Stability and convergence of the prediction-assimilation system. Chaos, 18, 023112, doi:10.1063/1.2909862.

  • Chorin, A., and P. Krause, 2004: Dimensional reduction for a Bayesian filter. Proc. Natl. Acad. Sci. USA, 101, 15 013–15 017.

  • Chorin, A., M. Morzfeld, and X. Tu, 2010: Implicit particle filters for data assimilation. Commun. Appl. Math. Comput. Sci., 5, 221–240.

  • Cotter, S., M. Dashti, J. Robinson, and A. Stuart, 2009: Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Probl., 25, 115008, doi:10.1088/0266-5611/25/11/115008.

  • Cotter, S., M. Dashti, and A. Stuart, 2011: Variational data assimilation using targetted random walks. Int. J. Numer. Methods Fluids, 68, 403–421.

  • Courtier, P., and O. Talagrand, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. II: Numerical results. Quart. J. Roy. Meteor. Soc., 113, 1329–1347.

  • Cox, H., 1964: On the estimation of state variables and parameters for noisy dynamic systems. IEEE Trans. Autom. Control, 9, 5–12.

  • Cox, S., and P. Matthews, 2002: Exponential time differencing for stiff systems. J. Comput. Phys., 176, 430–455.

  • Doucet, A., N. De Freitas, and N. Gordon, 2001: Sequential Monte Carlo Methods in Practice. Springer Verlag, 581 pp.

  • Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367.

  • Evensen, G., 2009: Data Assimilation: The Ensemble Kalman Filter. Springer Verlag, 307 pp.

  • Evensen, G., and Coauthors, 1994: Assimilation of Geosat altimeter data for the Agulhas Current using the ensemble Kalman filter with a quasigeostrophic model. Mon. Wea. Rev., 124, 85–96.

  • Fisher, M., M. Leutbecher, and G. Kelly, 2005: On the equivalence between Kalman smoothing and weak-constraint four-dimensional variational data assimilation. Quart. J. Roy. Meteor. Soc., 131, 3235–3246.

  • Hamill, T., C. Snyder, and R. Morss, 2000: A comparison of probabilistic forecasts from bred, singular-vector, and perturbed observation ensembles. Mon. Wea. Rev., 128, 1835–1851.

  • Harlim, J., and A. Majda, 2008: Filtering nonlinear dynamical systems with linear stochastic models. Nonlinearity, 21, 1281, doi:10.1088/0951-7715/21/6/008.

  • Harvey, A., 1991: Forecasting, Structural Time Series Models, and the Kalman Filter. Cambridge University Press, 554 pp.

  • Hesthaven, J., S. Gottlieb, and D. Gottlieb, 2007: Spectral Methods for Time-Dependent Problems. Cambridge University Press, 273 pp.

  • Hinze, M., R. Pinnau, M. Ulbrich, and S. Ulbrich, 2009: Optimization with PDE Constraints. Springer, 270 pp.

  • Jazwinski, A., 1970: Stochastic Processes and Filtering Theory. Academic Press, 376 pp.

  • Kaipio, J., and E. Somersalo, 2005: Statistical and Computational Inverse Problems. Springer, 339 pp.

  • Kalman, R., 1960: A new approach to linear filtering and prediction problems. J. Basic Eng., 82, 35–45.

  • Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 341 pp.

  • Kelley, C., 2003: Solving Nonlinear Equations with Newton’s Method. Vol. 1, Fundamentals of Algorithms, Society for Industrial Mathematics, 104 pp.

  • Lawless, A., N. Nichols, and S. Ballard, 2003: A comparison of two methods for developing the linearization of a shallow-water model. Quart. J. Roy. Meteor. Soc., 129, 1237–1254.

  • Lei, J., P. Bickel, and C. Snyder, 2010: Comparison of ensemble Kalman filters under non-Gaussianity. Mon. Wea. Rev., 138, 1293–1306.

  • Leutbecher, M., 2003: Adaptive observations, the Hessian metric and singular vectors. Proc. ECMWF Seminar on Recent Developments in Data Assimilation for Atmosphere and Ocean, Reading, United Kingdom, ECMWF, 8–12.

  • Liu, N., and D. S. Oliver, 2003: Evaluation of Monte Carlo methods for assessing uncertainty. SPE J., 8, 188–195.

  • Lorenc, A., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 112, 1177–1194.

  • Lorenz, E., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141.

  • Lorenz, E., 1996: Predictability: A problem partly solved. Proc. Seminar on Predictability, Reading, United Kingdom, ECMWF, 1–18.

  • Majda, A., J. Harlim, and B. Gershgorin, 2010: Mathematical strategies for filtering turbulent dynamical systems. Dyn. Syst., 27, 441–486.

  • Meng, Z., and F. Zhang, 2008: Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part IV: Comparison with 3DVAR in a month-long experiment. Mon. Wea. Rev., 136, 3671–3682.

  • Miller, R., M. Ghil, and F. Gauthiez, 1994: Advanced data assimilation in strongly nonlinear dynamical systems. J. Atmos. Sci., 51, 1037–1056.

  • Nocedal, J., and S. Wright, 1999: Numerical Optimization. Springer Verlag, 636 pp.

  • Palmer, T., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. J. Atmos. Sci., 55, 633–653.

  • Quinn, J., and H. Abarbanel, 2010: State and parameter estimation using Monte Carlo evaluation of path integrals. Quart. J. Roy. Meteor. Soc., 136, 1855–1867.

  • Saad, Y., 1996: Iterative Methods for Sparse Linear Systems. 1st ed. PWS Publishing, 447 pp.

  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640.

  • Stuart, A., 2010: Inverse problems: A Bayesian perspective. Acta Numer., 19, 451–559.

  • Talagrand, O., and P. Courtier, 1987: Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Quart. J. Roy. Meteor. Soc., 113, 1311–1328.

  • Tarantola, A., 2005: Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial Mathematics, 342 pp.

  • Temam, R., 2001: Navier–Stokes Equations: Theory and Numerical Analysis. American Mathematical Society, 408 pp.

  • Tippett, M., J. Anderson, C. Bishop, T. Hamill, and J. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485–1490.

  • Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297–3319.

  • Trefethen, L., and D. Bau, 1997: Numerical Linear Algebra. Society for Industrial Mathematics, 361 pp.

  • van Leeuwen, P., 2009: Particle filtering in geophysical systems. Mon. Wea. Rev., 137, 4089–4114.

  • van Leeuwen, P., 2010: Nonlinear data assimilation in geosciences: An extremely efficient particle filter. Quart. J. Roy. Meteor. Soc., 136, 1991–1999.

  • Vogel, C., 2002: Computational Methods for Inverse Problems. Society for Industrial Mathematics, 183 pp.

  • Vogel, C., and J. Wade, 1995: Analysis of costate discretizations in parameter estimation for linear evolution equations. SIAM J. Control Optim., 33, 227–254.

  • Zhang, M., and F. Zhang, 2012: E4DVAR: Coupling an ensemble Kalman filter with four-dimensional variational data assimilation in a limited-area weather prediction model. Mon. Wea. Rev., 140, 587–600.

  • Zhang, M., F. Zhang, X. Huang, and X. Zhang, 2010: Intercomparison of an ensemble Kalman filter with three- and four-dimensional variational data assimilation methods in a limited-area model over the month of June 2003. Mon. Wea. Rev., 139, 566–572.

  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev., 125, 2274–2292.
1. Note that our observations include data at time t = 0. Because the prior is Gaussian and the observational noise is Gaussian we could alternatively redefine the prior to incorporate this data point, which can be done in closed form; the observations would then start at time t = h.

2. Here “w.p.” denotes “with probability.”

3. As an aside, we note that a more sophisticated version, which we have not yet seen in the literature, would include the higher-order drift term involving the Hessian. Although this would add significant expense, there could be scenarios in which it would be worthwhile to attempt.

4. Here E denotes expectation with respect to the random variables ηj.
