Abstract

Data assimilation combines forecasts from a numerical model with observations. Most of the current data assimilation algorithms consider the model and observation error terms as additive Gaussian noise, specified by their covariance matrices Q and R, respectively. These error covariances, and specifically their respective amplitudes, determine the weights given to the background (i.e., the model forecasts) and to the observations in the solution of data assimilation algorithms (i.e., the analysis). Consequently, the Q and R matrices significantly impact the accuracy of the analysis. This review aims to present and to discuss, within a unified framework, different methods to jointly estimate the Q and R matrices using ensemble-based data assimilation techniques. Most of the methods developed to date use the innovations, defined as differences between the observations and the projection of the forecasts onto the observation space. These methods are based on two main statistical criteria: 1) the method of moments, in which the theoretical and empirical moments of the innovations are assumed to be equal, and 2) methods that maximize the likelihood of the observations, which are contained in the innovations. The reviewed methods assume that innovations are Gaussian random variables, although extension to other distributions is possible for likelihood-based methods. The methods also show some differences in terms of levels of complexity and applicability to high-dimensional systems. The conclusion of the review discusses the key challenges to further develop estimation methods for Q and R. These challenges include accounting for time-varying error covariances, dealing with limited observational coverage, estimating additional deterministic error terms, and handling correlated noise.

1. Introduction

In meteorology and other environmental sciences, an important challenge is to estimate the state of the system as accurately as possible. In meteorology, this state includes pressure, humidity, temperature, and wind at different locations and elevations in the atmosphere. Data assimilation (DA) refers to mathematical methods that use both model predictions (also called background information) and partial observations to retrieve the current state vector with its associated error. An accurate estimate of the current state is crucial to get good forecasts, and it is particularly so whenever the system dynamics is chaotic, as is the case for the atmosphere.

The performance of a DA system in estimating the state depends on the accuracy of the model predictions, the observations, and their associated error terms. A simple, popular, and mathematically justifiable way of modeling these errors is to assume them to be independent and unbiased Gaussian white noise, with covariance matrices Q for the model and R for the observations. Given the aforementioned importance of Q and R in estimating the analysis state and error, a number of studies dealing with this problem have arisen in recent decades. This review presents and summarizes the different techniques used to simultaneously estimate the Q and R covariances. Before discussing the methods to achieve this goal, the mathematical formulation of DA is briefly introduced.

a. Problem statement

Hereinafter, the unified DA notation proposed in Ide et al. (1997) is used. Data assimilation algorithms are used to estimate the state of a system, x, conditionally on observations, y. A classic strategy is to use sequential and ensemble DA frameworks, as illustrated in Fig. 1, and to combine two sources of information: model forecasts (in green) and observations (in blue). The ensemble framework uses different realizations, also called members, to track the state of the system at each assimilation time step.

Fig. 1.

Sketch of sequential and ensemble DA algorithms in the observation space (i.e., in the space of the observations y), where the observation operator H is omitted for simplicity. The ellipses represent the forecast Pf and analysis Pa error covariances, whereas the model Q and observation R error covariances are the unknown entries of the state-space model in Eqs. (1) and (2). The forecast error covariance matrix is written Pf and is the sum of Pm, the spread of the forecast state xf, and the model error Q. This scheme is a modified version of Fig. 1 from Carrassi et al. (2018).


The forecasts of the state are based on the usually incomplete and approximate knowledge of the system dynamics. The evolution of the state from time k − 1 to k is given by the model equation

 
x(k) = Mk[x(k − 1)] + η(k),
(1)

where the model error η reflects the fact that the dynamic model operator Mk is not perfectly known. Model error is usually assumed to follow a Gaussian distribution with zero mean (i.e., the model is unbiased) and covariance Q. The dynamic model operator Mk in Eq. (1) also has an explicit dependence on k, because it may depend on time-dependent external forcing terms. At time k, the forecast state is characterized by the mean of the forecast states, xf, and its uncertainty matrix, namely Pf, which is also called the background error covariance matrix and is often denoted B in DA.

The forecast covariance Pf is determined by two processes. The first is the uncertainty propagated from k − 1 to k by the model Mk (the green shade within the dashed ellipse in Fig. 1, denoted Pm). The second is the model error covariance Q, accounted for by the noise term at time k in Eq. (1). Given that model error is largely unknown and originates from various and diverse sources, the matrix Q is also poorly known. Model error sources encompass the deficiencies of the model M in representing the underlying physics, including deficiencies in the numerical schemes, the cumulative effects of errors in the parameters, and the lack of knowledge of the unresolved scales. Its estimation is a challenge in general, but it is particularly so in geosciences because we usually have far fewer observations than those needed to estimate the entries of Q (Daley 1992; Dee 1995). The sum of the two covariances Pm and Q gives the forecast covariance matrix, Pf (full green ellipse in Fig. 1). In the illustration given here, a large contribution of the forecast covariance Pf is due to Q. This situation reflects what is common in ensemble DA, where Pm can be too small as a consequence of the ensemble undersampling of the initial condition error (i.e., the covariance estimated at the previous analysis). In that case, inflating Q could partially compensate for the misspecification of Pm.

DA uses a second source of information, the observations y, which are assumed to be linked to the true state x through the time-dependent operator Hk. This step in DA algorithms is formalized by the observation equation

 
y(k) = Hk[x(k)] + ϵ(k),
(2)

where the observation error ϵ describes the discrepancy between what is observed and the truth. In practice, it is important to remove as much of the large-scale bias in the observations as possible before DA. It is then common to state that the remaining error ϵ follows a Gaussian and unbiased distribution with a covariance R (the blue ellipse in Fig. 1). This covariance takes into account errors in the observation operator H, the instrumental noise, and the representation error associated with the observations, which typically measure a higher-resolution state than the model represents. Operationally, a correct estimation of R that takes into account all of these effects is often challenging (Janjić et al. 2018).

DA algorithms combine forecasts with observations, based on the model and observation equations given in Eqs. (1) and (2), respectively. The corresponding system of equations is a nonlinear state-space model. As illustrated in Fig. 1, this Gaussian DA process produces a posterior Gaussian distribution with mean xa and covariance Pa (red ellipse). The system given in Eqs. (1) and (2) is representative of a broad range of DA problems, as described in seminal papers such as Ghil and Malanotte-Rizzoli (1991), and is still relevant today, as shown by Houtekamer and Zhang (2016) and Carrassi et al. (2018). The assumptions made in Eqs. (1) and (2) about model and observation errors (additive, Gaussian, unbiased, and mutually independent) are strong, yet convenient from the mathematical and computational points of view. Nevertheless, these assumptions are not always realistic in real DA problems. For instance, in operational applications, systematic biases in the model and in the observations are recurring problems. Such biases significantly affect the DA estimates, and a specific treatment is required; see Dee (2005) for more details.

From Eqs. (1) and (2), noting that M, H, and y are given, the only parameters that influence the estimation of x are the covariance matrices Q and R. These covariances play an important role in DA algorithms. Their importance was pointed out early on in Hollingsworth and Lönnberg (1986), in section 4.1 of Ghil and Malanotte-Rizzoli (1991), and in section 4.9 of Daley (1991). The results of DA algorithms highly depend on the two error covariance matrices Q and R, which have to be specified by the users. But these covariances are not easy to tune. Indeed, their impact is hard to grasp in real DA problems with high dimensionality and nonlinear dynamics. We thus illustrate the problem with a simple example first.

b. Illustrative example

In either variational or ensemble-based DA methods, the quality of the reconstructed state (or hidden) vector x largely depends on the relative amplitudes between the assumed observation and model errors (Desroziers and Ivanov 2001). In Kalman filter–based methods, the signal-to-noise ratio ||Pf||/||R||, where Pf depends on Q, impacts the Kalman gain, which gives the relative weights of the observations against the model forecasts. Here, the || || operator represents a matrix norm. For instance, Berry and Sauer (2013) used the Frobenius norm to study the effect of this ratio in the reconstruction of the state in toy models.

The importance of Q, R, and ||Pf||/||R|| is illustrated with the aid of a toy example, using a scalar state x and simple linear dynamics. This simplified setup avoids several issues typical of realistic DA applications: the large dimension of the state, the strong nonlinearities and the chaotic behavior. In this example, the dynamic model in Eq. (1) is a first-order autoregressive model, denoted by AR(1) and defined by

 
x(k) = 0.95x(k − 1) + η(k),
(3)

with η ~ N(0, Qt), where the superscript t means “true” and Qt = 1. Furthermore, observations y of the state are contaminated with an independent additive zero-mean and unit-variance Gaussian noise, such that Rt = 1 in Eq. (2) with H(x) = x. The goal is to reconstruct x from the noisy observations y at each time step. The AR(1) dynamic model defined by Eq. (3) has an autoregressive coefficient close to one, representing a process that evolves slowly over time, and a stochastic noise term η with variance Qt. Although the knowledge of these two sources of noise is crucial for the estimation problem, identifying them is not an easy task. Given that the dynamic model is linear and the error terms are additive and Gaussian in this simple example, the Kalman smoother provides the optimal estimate of the state (see section 2 for more details). To evaluate the effect of badly specified Q and R errors on the reconstructed state with the Kalman smoother, different experiments were conducted with values of {0.1, 1, 10} for the ratio Q/R (in this toy example, we use Q/R instead of ||Pf||/||R|| for simplicity).
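This toy experiment can be reproduced in a few lines of code. The sketch below (assuming Python with numpy; the scalar Kalman filter and RTS smoother are written out directly) simulates Eq. (3), then runs the smoother with several assumed values of Q and R. The exact numbers depend on the simulated realization, so they will only approximately match those reported for Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the AR(1) truth of Eq. (3) and noisy observations, with Qt = Rt = 1
K = 500
x_true = np.zeros(K)
for k in range(1, K):
    x_true[k] = 0.95 * x_true[k - 1] + rng.normal()
y = x_true + rng.normal(size=K)

def kalman_smoother(y, M=0.95, Q=1.0, R=1.0):
    """Scalar Kalman filter followed by an RTS (backward) smoothing pass."""
    n = len(y)
    xf, Pf = np.zeros(n), np.zeros(n)
    xa, Pa = np.zeros(n), np.zeros(n)
    x_prev, P_prev = 0.0, 1.0
    for k in range(n):
        xf[k] = M * x_prev                      # forecast
        Pf[k] = M * P_prev * M + Q
        g = Pf[k] / (Pf[k] + R)                 # Kalman gain (H = 1)
        xa[k] = xf[k] + g * (y[k] - xf[k])      # analysis
        Pa[k] = (1.0 - g) * Pf[k]
        x_prev, P_prev = xa[k], Pa[k]
    xs, Ps = xa.copy(), Pa.copy()
    for k in range(n - 2, -1, -1):              # reanalysis (smoothing)
        Ks = Pa[k] * M / Pf[k + 1]
        xs[k] = xa[k] + Ks * (xs[k + 1] - xf[k + 1])
        Ps[k] = Pa[k] + Ks * (Ps[k + 1] - Pf[k + 1]) * Ks
    return xs, Ps

for Q, R in [(1, 1), (0.1, 1), (1, 0.1), (10, 1), (0.1, 0.1), (10, 10)]:
    xs, Ps = kalman_smoother(y, Q=Q, R=R)
    rmse = np.sqrt(np.mean((xs - x_true) ** 2))
    cover = np.mean(np.abs(x_true - xs) <= 1.96 * np.sqrt(Ps))
    print(f"Q={Q:>4}, R={R:>4}: RMSE={rmse:.2f}, coverage={cover:.0%}")
```

Runs with equal Q/R ratios give nearly identical mean reconstructions, while the coverage of the confidence intervals depends on the absolute error levels, reproducing the behavior discussed next.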

Figure 2 shows, as a function of time, the true state (red line) and the smoothing Gaussian distributions represented by the 95% confidence intervals (gray shaded) and their means (black lines). We also report the root-mean-square error (RMSE) of the reconstruction and the so-called coverage probability, or percentage of x that falls in the 95% confidence intervals (defined as the mean ± 1.96 times the standard deviation in the Gaussian case). In this synthetic experiment, the best RMSE and coverage probability, obtained by applying the Kalman smoother with the true Qt = Rt = 1, are 0.71 and 95%, respectively. Using a small model error variance Q = 0.1Qt in Fig. 2a, the filter gives a large weight to the forecasts given by the quasi-persistent autoregressive dynamic model. On the other hand, with a small observation error variance R = 0.1Rt in Fig. 2b, excessive weight is given to the observations and the reconstructed state is close to the noisy measurements. These results show the negative impact of independently badly scaled Q and R error variances. In the case of an overestimated model error variance, as in Fig. 2c, the mean reconstructed state vector and thus its RMSE are identical to those in Fig. 2b. In the same way, an overestimated observation error variance, as in Fig. 2d, gives a mean reconstruction similar to that in Fig. 2a. These last two results are due to the fact that in both cases the ratios Q/R are equal to 10 and 0.1, respectively. Now, we consider in Figs. 2e and 2f the case where the Q/R ratio is equal to 1, but, respectively, with a simultaneous underestimation and overestimation of the model and observation errors. In both cases, the mean reconstructed state is equal to that obtained with the true error variances (i.e., RMSE = 0.71). The main difference is the gray confidence interval, which is supposed to contain 95% of the true trajectory: the spread is clearly underestimated in Fig. 2e and overestimated in Fig. 2f, with coverage probabilities of 36% and 100%, respectively.

Fig. 2.

Example of a univariate AR(1) process generated using Eq. (3) with Qt = 1 (red line), noisy observations as in Eq. (2) with Rt = 1 (black dots), and reconstructions with a Kalman smoother (black lines and gray 95% confidence interval) with different values of Q and R, from 0.1 to 10. The optimal values of RMSE and coverage probability are 0.71 and 95%, respectively.


We used a simple synthetic example, but for large-dimensional and highly nonlinear dynamics, such an underestimation or overestimation of uncertainty may have a strong effect and may cause filters to collapse. The main issue in ensemble-based DA is an underdispersive spread, as in Fig. 2e. In that case, the initial condition spread is too narrow, and model forecasts (starting from these conditions) would be similar and potentially out of the range of the observations. In the case of an overdispersive spread, as in Fig. 2f, the risk is that only a small portion of the model forecasts would be accurate enough to produce useful information on the true state of the system. This illustrative example shows how important the joint tuning of model and observation errors is in DA. Since the 1990s, a substantial number of studies have dealt with this topic.

c. Seminal work in the data assimilation community

In a seminal paper, Dee (1995) proposed an estimation method for parametric versions of Q and R matrices. The method, based on maximizing the likelihood of the observations, yields an estimator that is a function of the innovation, defined by y − H(xf). Maximization is performed at each assimilation step, with the current innovation computed from the available observations. This technique was later extended to estimate the mean of the innovation, which depends on the biases in the forecast and in the observations (Dee and da Silva 1999). The method was then applied to realistic cases in Dee et al. (1999), making the maximization of innovation likelihood a promising technique for the estimation of errors in operational forecasts.

Following a distinct path, Desroziers and Ivanov (2001) proposed using the observation-minus-analysis diagnostic. It is defined by y − H(xa), with xa being the analysis (i.e., the output of DA algorithms). The authors proposed an iterative optimization technique to estimate a scaling factor for the background B = Pf and observation R matrices. The procedure was shown to converge to a proper fixed point. As in Dee’s work, the fixed-point method presented in Desroziers and Ivanov (2001) is applied at each assimilation step, with the available observations at the current step.

Later, Chapnik et al. (2004) showed that the maximization of the innovation likelihood proposed by Dee (1995) makes the observation-minus-analysis diagnostic of Desroziers and Ivanov (2001) optimal. Moreover, the techniques of Dee (1995) and Desroziers and Ivanov (2001) have been further connected to the generalized cross-validation method previously developed by statisticians (Wahba and Wendelberger 1980).

These initial studies clearly nurtured the discussion of the estimation of observation R, model Q, or background B = Pf error covariance matrices in the modern DA literature. For demonstration purposes, the algorithms proposed in Dee (1995) and Desroziers and Ivanov (2001) were tested on realistic DA problems, using a shallow-water model on a plane with a simplified Kalman filter, and using the French ARPEGE three-dimensional variational framework, respectively. In both cases, although good performance was obtained with a small number of iterations, the proposed algorithms showed some limits, in particular regarding the simultaneous estimation of the two sources of errors: observation and model (or background). In this context, Todling (2015) pointed out that using only the current innovation is not enough to distinguish the impact of Q and R, which still makes their simultaneous estimation challenging. Given that our primary focus here is to review methods for the joint estimation of Q and R, the works of Dee (1995) and Desroziers and Ivanov (2001) are not further detailed hereinafter. After these two seminal studies, various alternatives were proposed. They are based on the use of several types of innovations and are discussed in this review.

d. Methods presented in this review

The main topic of this review is the “joint estimation of Q and R.” Thus, only methods based on this specific goal are presented in detail. A history of what have been, in our opinion, the most relevant contributions and key milestones for Q and R covariance estimation in DA is sketched in Fig. 3. The highlighted papers are discussed in this review, with a summary of the different methods given in Table 1. We distinguish four methods, which can be classified into two categories: moment-based methods and likelihood-based methods. Both categories make use of the innovations. The main concepts of the techniques are briefly introduced below.

Fig. 3.

Timeline of the main methods used in geophysical data assimilation for the joint estimation of Q and R over the last 15 years. Dee (1995) and Desroziers and Ivanov (2001) are not represented here but are certainly the seminal works of this research field in data assimilation.


Table 1.

Comparison of several methods to estimate the error covariances Q and R in data assimilation.


On the one hand, moment-based methods assume equality between theoretical and empirical statistical moments. A first approach is to study different types of innovations in the observation space (i.e., working in the space of the observations instead of the space of the state). This approach was initiated in DA by Rutherford (1972) and Hollingsworth and Lönnberg (1986). A second approach extracts information from the correlation between lag innovations, namely innovations between consecutive times. On the other hand, likelihood-based methods aim to maximize likelihood functions with statistical algorithms. One option is to use a Bayesian framework, assuming prior distributions for the parameters of the Q and R covariance matrices. Another option is to use the iterative expectation–maximization algorithm to maximize a likelihood function.

The four methods listed in Fig. 3 will be examined in this paper. Before doing that, it is worth mentioning existing review works that have attempted to summarize the methods in the DA context and beyond.

e. Other review papers

Other review papers on parameter estimation (including Q and R matrices) in state-space models have appeared in the statistical and signal processing communities. The first one (Mehra 1972) introduces moment- and likelihood-based methods in the linear and Gaussian case [i.e., when η and ϵ are Gaussian and M is a linear operator in Eqs. (1) and (2)]. Many extensions to nonlinear state-space models have been proposed since the seminal work of Mehra, and these studies are summarized in the recent review by Duník et al. (2017), with a focus on moment-based methods and the extended Kalman filter (Jazwinski 1970). The book chapter by Buehner (2010) presents another review of moment-based methods, with a focus on the modeling and estimation of spatial covariance structures Q and R in DA with the ensemble Kalman filter algorithm (Evensen 2009).

In the statistical community, the recent development of powerful simulation techniques, known as sequential Monte Carlo algorithms or particle filters, has led to an extensive literature on statistical inference in nonlinear state-space models relying on likelihood-based approaches. A recent and detailed presentation of this literature can be found in Kantas et al. (2015). However, these methods typically require a large number of particles, which makes them impractical for geophysical DA applications.

The review presented here focuses on methods proposed in DA, especially the moment- and likelihood-based techniques that are suitable for geophysical systems (i.e., with high dimensionality and strong nonlinearities).

f. Structure of this review

The paper is organized as follows. Section 2 briefly presents the filtering and smoothing DA algorithms used in this work. The main families of methods used in the literature to jointly estimate error covariance matrices Q and R are then described. First, moment-based methods are introduced in section 3. Then, we describe in section 4 the likelihood-based methods. We also mention other alternatives in section 5, along with methods used in the past but not exactly matching the scope of this review, and diagnostic tools to check the accuracy of Q and R. In section 6, we provide a summary and discussion on what we consider to be the forthcoming challenges in this area.

2. Filtering and smoothing algorithms

This review paper focuses on the estimation of Q and R in the context of ensemble-based DA methods. For the overall discussion of the methods and to set the notation, a short description of the ensemble version of the Kalman recursions is presented in this section: the ensemble Kalman filter (EnKF) and ensemble Kalman smoother (EnKS).

The EnKF and EnKS estimate various state vectors xf(k), xa(k), xs(k) and covariance matrices Pf(k), Pa(k), Ps(k), at each time step 1 ≤ kK, where K represents the total number of assimilation steps. Kalman-based algorithms assume a Gaussian prior distribution

 
p[x(k) | y(1:k − 1)] ~ N[xf(k), Pf(k)].

Then, filtering and smoothing estimates correspond to the Gaussian posterior distributions

 
p[x(k) | y(1:k)] ~ N[xa(k), Pa(k)] and
 
p[x(k) | y(1:K)] ~ N[xs(k), Ps(k)]

of the state conditionally on past/present observations and on past/present/future observations, respectively.

The basic idea of the EnKF and EnKS is to use an ensemble x_1, …, x_Ne of size Ne to track Gaussian distributions over time, with the empirical mean vector x̄ = (1/Ne) Σ_{i=1}^{Ne} x_i and the empirical error covariance matrix [1/(Ne − 1)] Σ_{i=1}^{Ne} (x_i − x̄)(x_i − x̄)^T.
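These empirical statistics can be computed directly from an ensemble stored column-wise. A minimal sketch, assuming Python with numpy (the helper name ensemble_stats is illustrative):

```python
import numpy as np

def ensemble_stats(E):
    """Empirical mean and covariance of an ensemble stored column-wise.
    E has shape (n, Ne): Ne members of an n-dimensional state."""
    Ne = E.shape[1]
    x_bar = E.mean(axis=1, keepdims=True)      # (1/Ne) * sum over members
    A = E - x_bar                              # perturbation (centered) matrix
    P = A @ A.T / (Ne - 1)                     # unbiased empirical covariance
    return x_bar.ravel(), P

rng = np.random.default_rng(42)
E = rng.normal(size=(3, 50))                   # 50 members, 3 state variables
x_bar, P = ensemble_stats(E)
```

The 1/(Ne − 1) normalization matches the unbiased sample covariance used above.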

The EnKF/EnKS equations are divided into three main steps, ∀i = 1, …, Ne and ∀k = 1, …, K—the forecast step (forward in time):

 
x_i^f(k) = Mk[x_i^a(k − 1)] + η_i(k);
(4a)

the analysis step (forward in time):

 
d_i(k) = y(k) − Hk[x_i^f(k)] + ϵ_i(k),
(4b)
 
K^f(k) = Pf(k) Hk^T [Hk Pf(k) Hk^T + R(k)]^−1, and
(4c)
 
x_i^a(k) = x_i^f(k) + K^f(k) d_i(k);
(4d)

and the reanalysis step (backward in time):

 
K^s(k) = Pa(k) Mk^T [Pf(k + 1)]^−1 and
(4e)
 
x_i^s(k) = x_i^a(k) + K^s(k)[x_i^s(k + 1) − x_i^f(k + 1)],
(4f)

with K^f(k) and K^s(k) being the filter and smoother Kalman gains, respectively. Here, Pf(k) and Hk Pf(k) Hk^T denote the empirical covariance matrices of x_i^f(k) and Hk[x_i^f(k)], respectively. Then, Pf(k) Hk^T and Pa(k) Mk^T denote the empirical cross-covariance matrices between x_i^f(k) and Hk[x_i^f(k)] and between x_i^a(k) and Mk[x_i^a(k)], respectively. These quantities are estimated using the Ne ensemble members.
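As an illustration, the analysis step of Eqs. (4b)–(4d) can be sketched in its stochastic (perturbed observations) form, assuming Python with numpy and a linear observation operator H; this is a simplified sketch, not an operational implementation:

```python
import numpy as np

def enkf_analysis(Ef, y, H, R, rng):
    """Stochastic EnKF analysis step, Eqs. (4b)-(4d), with a linear H.

    Ef : (n, Ne) forecast ensemble
    y  : (p,)    observation vector
    H  : (p, n)  linear observation operator
    R  : (p, p)  observation error covariance
    """
    Ne = Ef.shape[1]
    A = Ef - Ef.mean(axis=1, keepdims=True)          # forecast perturbations
    Pf = A @ A.T / (Ne - 1)                          # empirical forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain, Eq. (4c)
    eps = rng.multivariate_normal(np.zeros(len(y)), R, size=Ne).T
    D = y[:, None] - H @ Ef + eps                    # perturbed innovations, Eq. (4b)
    return Ef + K @ D                                # analysis ensemble, Eq. (4d)

# Toy usage: a 2D state observed directly, with the forecast ensemble around zero
rng = np.random.default_rng(0)
Ef = rng.normal(size=(2, 100))
y = np.array([1.0, 2.0])
Ea = enkf_analysis(Ef, y, np.eye(2), 0.1 * np.eye(2), rng)
```

The analysis mean is pulled from the forecast mean toward the observations, with weights set by the empirical Pf and the prescribed R.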

In some of the methods presented in this review, the ensembles are also used to approximate the operators Mk and Hk by linear operators Mk and Hk such that

 
Mk = Ek^{M(a)} (E_{k−1}^{a})^† and
(5a)
 
Hk = Ek^{H(f)} (Ek^{f})^†,
(5b)

with the dagger (†) indicating the pseudoinverse and Ek^{M(a)}, E_{k−1}^{a}, Ek^{H(f)}, and Ek^{f} being the matrices containing along their columns the ensemble perturbation vectors (the centered ensemble vectors) of Mk[x_i^a(k − 1)], x_i^a(k − 1), Hk[x_i^f(k)], and x_i^f(k), respectively.
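This linearization amounts to a least-squares fit between output and input perturbation matrices. A sketch assuming numpy (linearize_from_ensemble is a hypothetical helper name); when the underlying operator is truly linear and the ensemble spans the state space, the estimate is exact:

```python
import numpy as np

def linearize_from_ensemble(E_out, E_in):
    """Estimate a linear operator mapping input to output perturbations,
    as in Eq. (5): L = A_out @ pinv(A_in), the dagger being the pseudoinverse."""
    A_out = E_out - E_out.mean(axis=1, keepdims=True)
    A_in = E_in - E_in.mean(axis=1, keepdims=True)
    return A_out @ np.linalg.pinv(A_in)

# Sanity check: with a truly linear operator the estimate recovers it
rng = np.random.default_rng(3)
H_true = rng.normal(size=(2, 3))
E_in = rng.normal(size=(3, 10))          # Ne = 10 members, n = 3
E_out = H_true @ E_in                    # apply the operator memberwise
H_est = linearize_from_ensemble(E_out, E_in)
```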

In Eq. (4b), the innovation is denoted as d and is tracked by the ensemble d_1(k), …, d_Ne(k). The innovation is the key ingredient of the methods presented in sections 3 and 4.

3. Moment-based methods

To constrain the model and observational errors in DA systems, initial efforts were focused on the statistics of relevant variables that could contain information on covariances. The innovation, given in Eq. (4b), corresponds to the difference between the observations and the forecast in the observation space. This variable implicitly takes into account the Q and R covariances. Unfortunately, as explained in Blanchet et al. (1997), when using only current observations, their individual contributions cannot be easily disentangled. Thus, techniques using only the classic innovation y(k) − Hk[xf(k)] are not discussed further in this review.

Two main approaches have been proposed in the literature to address this issue. They are based on the idea of producing multiple equations involving Q and R. The first approach uses different types of innovation statistics (i.e., not only the classic one). The second approach is based on lag innovations, or differences between consecutive innovations. From a statistical point of view, both refer to the “method of moments,” where we construct a system of equations that links various moments of the innovations with the parameters and then replace the theoretical moments by the empirical ones in these equations.

a. Innovation statistics in the observation space

This first approach, based on the Desroziers diagnostic (Desroziers et al. 2005), is historical and now popular in the DA community. It does not exactly fit the topic of this review paper (i.e., estimating the model error Q), since it is based on the inflation of the background covariance matrix Pf. However, this forecast error covariance is defined by Pf(k) = Mk Pa(k − 1) Mk^T + Q in the Kalman filter, considering a linear model operator Mk. Thus, even if DA systems do not use an explicit model error perturbation controlled by Q, the inflation of the background covariance matrix Pf has similar effects, compensating for the lack of an explicit model uncertainty.

Desroziers et al. (2005) proposed examining various innovation statistics between observations, forecasts, and analysis, all of them defined in the observation space: namely, dof(k) = y(k) − Hk[xf(k)], as in Eq. (4b), and doa(k) = y(k) − Hk[xa(k)]. In theory, in the linear and Gaussian case, for unbiased forecasts and observations, and when Pf(k) and R(k) are correctly specified, the Desroziers innovation statistics should verify the equalities

 
E[dof(k) dof(k)^T] = Hk Pf(k) Hk^T + R(k) and
(6a)
 
E[doa(k) dof(k)^T] = R(k),
(6b)

with E being the expectation operator. Equation (6a) is obtained by using Eq. (4b) to expand the product:

 
dof(k) dof(k)^T = y(k) y(k)^T − y(k) xf(k)^T Hk^T − Hk xf(k) y(k)^T + Hk xf(k) xf(k)^T Hk^T
(7)

and then applying the expectation operator and using the definitions of Pf and R. The observation-minus-forecast statistic in Eq. (6a) is not, by itself, useful to constrain the model error Q. Indeed, dof does not depend explicitly on Q but rather on the forecast error covariance matrix Pf. Thus, the combination of Eqs. (6a) and (6b) can be used as a diagnosis of the forecast and observation error covariances in the system. A mismatch between the Desroziers statistics and the actual covariances, namely the left- and right-hand-side terms in Eqs. (6a) and (6b), indicates inappropriately estimated covariances Pf(k) and R(k).
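The equalities in Eqs. (6a) and (6b) can be checked numerically with synthetic errors drawn from known covariances. A sketch assuming numpy, a linear H, an optimal Kalman gain, and the truth taken as zero (so forecasts and observations reduce to their error terms):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2
Pf = np.array([[1.0, 0.3],
               [0.3, 0.5]])                       # forecast error covariance
R = 0.5 * np.eye(p)                               # observation error covariance
H = np.eye(p)                                     # direct observation
Kg = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # optimal Kalman gain

# Draw many independent forecast/observation errors (truth at zero)
N = 200_000
ef = rng.multivariate_normal(np.zeros(p), Pf, size=N).T
eo = rng.multivariate_normal(np.zeros(p), R, size=N).T
Dof = eo - H @ ef                                 # observation minus forecast
Doa = (np.eye(p) - H @ Kg) @ Dof                  # observation minus analysis

print(Dof @ Dof.T / N)                            # should approach H Pf H^T + R, Eq. (6a)
print(Doa @ Doa.T.T / N if False else Doa @ Dof.T / N)  # should approach R, Eq. (6b)
```

With a suboptimal gain (i.e., misspecified Pf or R inside Kg), the second empirical matrix departs from R, which is precisely the mismatch the diagnostic detects.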

The forecast covariance Pf is sometimes badly estimated in ensemble-based assimilation systems. The limitations may be attributed to a number of causes. The limited number of ensemble members produces an over- or, most of the time, underestimation of the forecast variance. Another limitation is the inaccuracy of the methods used to sample initial condition or model error. The underestimation of the forecast covariance produces a negative feedback: the estimated analysis covariance Pa is thus underestimated, which in turn produces a further underestimation of the forecast covariance in the next cycle. This feedback process leads to filter divergence, as was pointed out by Pham et al. (1998), Anderson and Anderson (1999), and Anderson (2007). To avoid this filter divergence, inflating the forecast covariance Pf has been proposed. This covariance inflation accounts for both sampling errors and the lack of representation of model errors, such as a too-small amplitude for Q or a bias omitted in η and ϵ in Eqs. (1) and (2). In this context, the diagnostics given by the Desroziers innovation statistics have been proposed as a tool to constrain the required covariance inflation in the system.

We distinguish three inflation methods: multiplicative, additive, and relaxation-to-prior. In the multiplicative case, the forecast error covariance matrix Pf is usually multiplied by a scalar coefficient greater than 1 (Anderson and Anderson 1999). Using innovation statistics in the observation space, adaptive procedures to estimate this coefficient have been proposed by Wang and Bishop (2003), Anderson (2007, 2009) conditionally on the spatial location, Li et al. (2009), Miyoshi (2011), Bocquet (2011), Bocquet and Sakov (2012), Miyoshi et al. (2013), Bocquet et al. (2015), El Gharamti (2018), and Raanes et al. (2019). To prevent excessive inflation or deflation, some authors have proposed assuming a prior distribution for the multiplicative inflation factor: Gaussian in Anderson (2009), inverse gamma in El Gharamti (2018), or inverse chi square in Raanes et al. (2019).

In practice, multiplicative inflation tends to inflate excessively in data-sparse regions and too little in densely observed regions. As a result, the ensemble spread mirrors and exaggerates the data density (i.e., too much spread in sparsely observed regions, and vice versa). Additive inflation solves this problem but requires many samples of additive noise; these drawbacks and benefits are discussed in Miyoshi et al. (2010). In the additive inflation case, the diagonal terms of the forecast and analysis empirical covariance matrices are increased (Mitchell and Houtekamer 2000; Corazza et al. 2003; Whitaker et al. 2008; Houtekamer et al. 2009). This regularization also avoids numerical problems in the inversion of the covariance matrices.

The last alternative is the relaxation-to-prior method. In application, this technique is more efficient than both additive and multiplicative inflations because it maintains a reasonable spread structure. The idea is to relax the reduction of the spread at the analysis step. We distinguish the method proposed in Zhang et al. (2004), where the forecast and analysis ensemble perturbations are blended, from the one given in Whitaker and Hamill (2012), which multiplies the analysis ensemble perturbations without blending. The latter is thus a multiplicative inflation, but applied after the analysis rather than after the forecast. Ying and Zhang (2015) and Kotsuki et al. (2017b) proposed methods to adaptively estimate the relaxation parameters using innovation statistics. They conclude that adaptive procedures for relaxation-to-prior methods are robust to sudden changes in the observing networks and observation error settings.

Closely connected to multiplicative inflation estimation is statistical modeling of the error variance terms proposed by Bishop and Satterfield (2013) and Bishop et al. (2013). From numerical evidence based on the 10-dimensional Lorenz-96 model, the authors assume an inverse-gamma prior distribution for these variances. This distribution allows for an analytic Bayesian update of the variances using the innovations. Building on Bocquet (2011), Bocquet et al. (2015), and Ménétrier and Auligné (2015), this technique was extended in Satterfield et al. (2018) to adaptively tune a mixing ratio between the true and sample variances.

Adaptive covariance inflations are estimation methods directly attached to a traditional filtering method (such as the EnKF used here), with almost negligible overhead computational cost. In practice, the use of this technique does not necessarily imply an additive error term η in Eq. (1). Thus, it is not a direct estimation of Q but rather an inflation applied to Pf in order to compensate for model uncertainties and sampling errors in the EnKFs, as explained in Raanes et al. (2019, their section 4 and appendix C). Several DA systems work with an inflation method and use it for its simplicity, low cost, and efficiency. As an example of inflation techniques, the most straightforward inflation estimation is a multiplicative factor λ of the incorrectly scaled P˜f(k) so that the corrected forecast covariance is given by Pf(k)=λ(k)P˜f(k). The estimate of the inflation factor is given by taking the trace of Eq. (6a):

 
$$\tilde{\lambda}(k)=\frac{d_{of}(k)^{T}\,d_{of}(k)-\mathrm{Tr}[R(k)]}{\mathrm{Tr}[H_{k}\tilde{P}^{f}(k)H_{k}^{T}]}.\tag{8}$$

The estimated inflation parameter λ˜ computed at each time k can be noisy. The use of temporal smoothing of the form λ(k + 1) = ρλ˜(k) + (1 − ρ)λ(k) is crucial in operational procedures. Alternatively, Miyoshi (2011) proposed calculating the estimated variance of λ(k), denoted as σλ²(k), using the central limit theorem. Then, λ(k + 1) is updated using the previous estimate λ(k) and the Gaussian distribution with mean λ˜(k) and variance σλ²(k). From the Desroziers diagnostics, at each time step k and when sufficient observations are available, an estimate of R(k) is possible using Eq. (6b). For instance, Li et al. (2009) proposed estimating each component of a diagonal and time-averaged R matrix. However, diagonal terms cannot take into account spatially correlated error terms, and constant values for observation errors are not realistic. Miyoshi et al. (2013) therefore proposed additionally estimating the off-diagonal components of the time-dependent matrix R(k). The Miyoshi et al. (2013) implementation is summarized in the appendix as algorithm 1.
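
As an illustration, the inflation estimate of Eq. (8) and its temporal smoothing can be sketched in a few lines. This is a toy numerical check, not an operational implementation; the matrices and values are hypothetical:

```python
import numpy as np

def estimate_inflation(d_of, R, HPfHt):
    # Eq. (8): lambda_tilde = (d_of^T d_of - Tr[R]) / Tr[H Pf_tilde H^T]
    return (d_of @ d_of - np.trace(R)) / np.trace(HPfHt)

def smooth_inflation(lam, lam_tilde, rho):
    # temporal smoothing: lambda(k+1) = rho*lambda_tilde(k) + (1-rho)*lambda(k)
    return rho * lam_tilde + (1.0 - rho) * lam

# synthetic check: innovations drawn with covariance lam_true * H Pf H^T + R,
# so the averaged estimate should recover lam_true
rng = np.random.default_rng(42)
m, lam_true = 10, 2.0
R, HPfHt = 0.5 * np.eye(m), np.eye(m)
d = rng.multivariate_normal(np.zeros(m), lam_true * HPfHt + R, size=5000)
lam_hats = np.array([estimate_inflation(di, R, HPfHt) for di in d])
lam = 1.0
for lh in lam_hats:
    lam = smooth_inflation(lam, lh, rho=0.01)
print(round(float(lam_hats.mean()), 2))  # averaged estimate, near lam_true = 2.0
```

As discussed above, the per-cycle estimates are noisy, and the smoothed sequence is what an operational procedure would actually use.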

The Desroziers diagnostic method has been applied widely to estimate the real observation error covariance matrix R in numerical weather prediction (NWP), where the observations come from various sources. In the case of satellite radiances, Bormann et al. (2010) applied three methods, including the Desroziers diagnostic and the method detailed in Hollingsworth and Lönnberg (1986), to estimate a constant diagonal term of R using the innovation dof and its correlations in space, assuming that horizontal correlations in dof samples are purely due to Pf. Weston et al. (2014) and Campbell et al. (2017) then included the interchannel observation error correlations of satellite radiances in DA and obtained improved results when compared with the case using a diagonal R. For spatial error correlations in R, Kotsuki et al. (2017a) estimated the horizontal observation error correlations of satellite-derived precipitation data. Including horizontal observation error correlations in DA for densely observed data from satellites and radars is more challenging than including interchannel error correlations, because the number of horizontally error-correlated observations is much larger; some recent studies have been tackling this issue (e.g., Guillet et al. 2019).

To conclude, the Desroziers diagnostic is a consistency check and makes it possible to detect whether the error covariances Pf and R are incorrect. The conditions under which this method yields accurate or inaccurate estimates, as well as its convergence properties, have been studied in depth by Waller et al. (2016) and Ménard (2016). The Desroziers diagnostic is also useful to estimate off-diagonal terms of R, for instance to take into account spatial error correlations. However, the covariance localization used in the ensemble Kalman filter might induce erroneous estimates of spatial correlations (Waller et al. 2017).

b. Lag innovation between consecutive times

Another way to estimate error covariances is to use multiple equations involving Q and R, exploiting cross correlations between lag innovations. More precisely, it involves the current innovation d(k) = dof(k) defined in Eq. (4b) and past innovations d(k − 1), …, d(k − l). Lag innovations were introduced by Mehra (1970) to recover Q and R simultaneously for Gaussian, linear and stationary dynamical systems. In such a case, {d(k)}k≥1 is completely characterized by the lagged covariance matrices Cl = Cov[d(k), d(k − l)], which are independent of k. In other words, the information encoded in {d(k)}k≥1 is completely equivalent to the information provided by {Cl}l≥0. Moreover, for linear systems in a steady state, analytic relations exist between Q, R and E[d(k)d(k − l)T]. However, these linear relations can be dependent and redundant for different lags l. Therefore, as stated in Mehra (1970), only a limited number of Q components can be recovered.

Bélanger (1974) extended these results to time-varying linear stochastic processes, taking d(k)d(k − l)T as “observations” of Q and R and using a secondary Kalman filter to update them iteratively. On the one hand, considering the time-varying case may increase the number of components of Q that can be estimated. On the other hand, as pointed out in Bélanger (1974), this method is no longer analytically exact if Q and R are updated adaptively at each time step. One numerical difficulty of Bélanger’s method is that it needs to invert a matrix of size m² × m², where m refers to the dimension of the observation vector. This difficulty was largely overcome by Dee et al. (1985), in which the cost of the matrix inversion is reduced to O(m³) operations by taking advantage of the fact that the large matrix arises from a tensor product.

More recent work has focused on high-dimensional and nonlinear systems using the extended or ensemble Kalman filters. Berry and Sauer (2013) proposed a fast and adaptive algorithm inspired by the use of lag innovations proposed by Mehra. Harlim et al. (2014) applied the original Bélanger algorithm empirically to a nonlinear system with sparse observations. Zhen and Harlim (2015) proposed a modified version of Bélanger’s method, by removing the secondary filter and alternatively solving Q and R in a least squares sense based on the averaged linear relation over a long term.

Here, we briefly describe the algorithm of Berry and Sauer (2013), considering the lag-zero and lag-one innovations. The following equations are satisfied in the linear and Gaussian case, for unbiased forecast and observation when Pf(k) and R(k) are correctly specified:

 
$$E[d(k)\,d(k)^{T}]=H_{k}P^{f}(k)H_{k}^{T}+R(k)=\Sigma(k)\quad\text{and}\tag{9a}$$

$$E[d(k)\,d(k-1)^{T}]=H_{k}M_{k}P^{f}(k-1)H_{k-1}^{T}-H_{k}M_{k}K^{f}(k-1)\,\Sigma(k-1).\tag{9b}$$

Equation (9a) is equivalent to Eq. (6a). Moreover, Eq. (9b) results from the fact that, developing the expression of d(k) using consecutively Eqs. (2), (1), (4a), and (4d), the innovation can be written as

 
$$\begin{aligned}d(k)&=y(k)-H_{k}x^{f}(k)\\&=H_{k}[x(k)-x^{f}(k)]+\epsilon(k)\\&=H_{k}[M_{k}x(k-1)-x^{f}(k)+\eta(k)]+\epsilon(k)\\&=H_{k}\{M_{k}[x(k-1)-x^{a}(k-1)]+\eta(k)\}+\epsilon(k)\\&=H_{k}M_{k}[x(k-1)-x^{f}(k-1)-K^{f}(k-1)\,d(k-1)]+H_{k}\eta(k)+\epsilon(k).\end{aligned}\tag{10}$$

Hence, the innovation product d(k)d(k − 1)T between two consecutive times is given by

 
$$H_{k}M_{k}[x(k-1)-x^{f}(k-1)]\,d(k-1)^{T}-H_{k}M_{k}[K^{f}(k-1)\,d(k-1)]\,d(k-1)^{T}+H_{k}\eta(k)\,d(k-1)^{T}+\epsilon(k)\,d(k-1)^{T}\tag{11}$$

and, assuming that the model noise η and the observation noise ϵ are white and mutually uncorrelated, E[η(k)d(k − 1)T] = 0 and E[ϵ(k)d(k − 1)T] = 0. Taking the expectation of this innovation product then yields Eq. (9b).

The algorithm in Berry and Sauer (2013) is summarized in the  appendix as algorithm 2. It is based on an adaptive estimation of Q(k) and R(k), which satisfies the following relations in the linear and Gaussian case:

 
$$\tilde{P}(k)=(H_{k}M_{k})^{-1}\,d(k)\,d(k-1)^{T}\,H_{k-1}^{-T}+K^{f}(k-1)\,d(k-1)\,d(k-1)^{T}\,H_{k-1}^{-T},\tag{12a}$$

$$\tilde{Q}(k)=\tilde{P}(k)-M_{k-1}P^{a}(k-2)M_{k-1}^{T},\quad\text{and}\tag{12b}$$

$$\tilde{R}(k)=d(k)\,d(k)^{T}-H_{k}P^{f}(k)H_{k}^{T}.\tag{12c}$$

In operational applications, when the number of observations is not equal to the number of components in state x, H is not a square matrix and Eq. (12a) is ill-defined. To avoid the inversion of H, Berry and Sauer (2013) proposed considering parametric models for Q and then solving a linear system associated with Eqs. (12a) and (12b). It is written as a least squares problem such that

 
$$\tilde{Q}(k)=\underset{Q}{\operatorname{arg\,min}}\left\|d(k)\,d(k-1)^{T}+H_{k}M_{k}K^{f}(k-1)\,d(k-1)\,d(k-1)^{T}-H_{k}M_{k}M_{k-1}P^{a}(k-2)M_{k-1}^{T}H_{k-1}^{T}-H_{k}M_{k}\,Q\,H_{k-1}^{T}\right\|.\tag{13}$$

In this adaptive procedure, joint estimations of Q˜(k) and R˜(k) can abruptly vary over time. Thus, the temporal smoothing of the covariances being estimated becomes crucial. As suggested by Berry and Sauer (2013), such temporal smoothing between current and past estimates is a reasonable choice:

 
$$Q(k+1)=\rho\tilde{Q}(k)+(1-\rho)Q(k)\quad\text{and}\tag{14a}$$

$$R(k+1)=\rho\tilde{R}(k)+(1-\rho)R(k),\tag{14b}$$

with Q(1) and R(1) being the initial conditions and ρ being the smoothing parameter. When ρ is large (close to 1), more weight is given to the current estimates Q˜ and R˜, whereas a small ρ (close to 0) gives smoother Q and R sequences. The value of ρ is arbitrary and may depend on the system and how it is observed. For instance, in the case where the number of observations equals the size of the system, Berry and Sauer (2013) use ρ = 5 × 10⁻⁵ in order to estimate the full matrix Q for the Lorenz-96 model.
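
A minimal numerical check of Eqs. (12a)–(12c) can be written for a scalar system x(k) = a x(k−1) + η(k), y(k) = x(k) + ϵ(k), so that H = 1 and M = a. In this sketch (illustrative values, not the authors' implementation), the Kalman filter is run with the true Q and R, and the estimators are batch averaged rather than adaptively smoothed, to check that their expectations recover Q and R:

```python
import numpy as np

rng = np.random.default_rng(0)
a, Q_true, R_true, N = 0.9, 0.1, 0.5, 50_000

# simulate the linear Gaussian system x(k) = a x(k-1) + eta, y(k) = x(k) + eps
x = np.zeros(N); y = np.zeros(N)
for k in range(1, N):
    x[k] = a * x[k-1] + rng.normal(0, np.sqrt(Q_true))
    y[k] = x[k] + rng.normal(0, np.sqrt(R_true))

# scalar Kalman filter, storing innovations d, gains Kg and covariances
d = np.zeros(N); Kg = np.zeros(N); Pf = np.zeros(N); Pa = np.zeros(N)
xa, pa = 0.0, 1.0
for k in range(1, N):
    Pf[k] = a * a * pa + Q_true
    d[k] = y[k] - a * xa
    Kg[k] = Pf[k] / (Pf[k] + R_true)
    xa = a * xa + Kg[k] * d[k]
    pa = (1 - Kg[k]) * Pf[k]
    Pa[k] = pa

# Eqs. (12a)-(12c), batch averaged over time (burn-in discarded)
ks = np.arange(10, N)
P_tilde = d[ks] * d[ks-1] / a + Kg[ks-1] * d[ks-1] ** 2   # Eq. (12a), H = 1
Q_hat = np.mean(P_tilde - a * a * Pa[ks-2])                # Eq. (12b)
R_hat = np.mean(d[ks] ** 2 - Pf[ks])                       # Eq. (12c)
print(Q_hat, R_hat)  # near Q_true = 0.1 and R_true = 0.5
```

In the adaptive version, these instantaneous estimates would instead be fed into the temporal smoothing of Eqs. (14a) and (14b).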

The algorithm in Berry and Sauer (2013) only considers lag-zero and lag-one innovations. Zhen and Harlim (2015) and Harlim (2018) showed that incorporating more lags makes it possible to deal with cases in which some components of Q are not identifiable with the method of Berry and Sauer (2013). For instance, let us consider a two-dimensional system with any stationary operator M and H = [1, 0], meaning that only the first component of the system is observed. This is a linear, Gaussian, stationary system, and Mehra’s theory implies that two parameters of Q are identifiable. However, using only lag-one innovations as in Berry and Sauer (2013), Eq. (13) becomes a scalar equation and only one parameter of Q can be determined. The idea of considering more lag innovations to estimate more components of Q was tested in Zhen and Harlim (2015), and numerical results show that considering more than one lag can improve the estimates of Q and R. For instance, Zhen and Harlim (2015) focused on the Lorenz-96 model in a configuration where Q is stationary, the traces of Q and R are equal, and observations are taken at twenty fixed, equally spaced grid points every five integration time steps. In that case, the best RMSE of the estimates of Q and R is achieved when four time lags are considered, but with more lags the performance degrades.

To summarize, methods based on lag innovation between consecutive times have been studied for a long time in the signal processing community. The original methods (Mehra 1970; Bélanger 1974) were analytically established for linear systems with Gaussian noises. Inspired by these foundational ideas, empirical methods have been established for nonlinear systems in DA (Berry and Sauer 2013; Harlim et al. 2014; Zhen and Harlim 2015). Although these methods have not been tested in any operational experiment, the idea of using lagged innovations seems to have significant potential.

4. Likelihood-based methods

This section focuses on methods based on the likelihood of the observations, given a set of statistical parameters. The conceptual idea behind what we refer to as likelihood-based methods is to determine the optimal statistical parameters (i.e., Q and R) that maximize the likelihood function for a given set of observations that may be distributed over time. In this way, the aim is to derive estimation methods that use the observations to find the most suitable, or most likely parameters.

Early studies in Dee (1995), Blanchet et al. (1997), Mitchell and Houtekamer (2000) and Liang et al. (2012) proposed finding the optimal Q and R that maximize the current innovation likelihood at time k. Unfortunately, if only the current observations are used, the joint estimation of Q and R is not well constrained (Todling 2015). To tackle this issue, several solutions have been recently proposed where the likelihood function considers observations distributed in time over several assimilation cycles.

The likelihood-based methods are broadly divided into two categories. The first uses a Bayesian framework: it assumes a priori knowledge about the parameters and estimates the posterior distribution of Q and R jointly with the state of the system, or alternatively estimates them in a two-stage process.2 The second is based on the frequentist viewpoint and attempts a point estimate of the parameters by maximizing a total likelihood function.

a. Bayesian inference

In the Bayesian framework, the elements of the covariance matrices Q and R are assumed to have a priori distributions that are controlled by hyperparameters. In practice, it is difficult to specify prior distributions for each element of Q and R, especially for large DA systems. Instead, parametric forms are used for the matrices, typically describing their shape and noise level. We denote the corresponding parameters as θ.

The inference in the Bayesian framework aims to determine the posterior density p[θ|y(1:k)]. Two techniques have appeared, the first based on a state augmentation and the second based on a rigorous Bayesian update of the posterior distribution.

1) State augmentation

In the Bayesian framework, θ is a random variable such that the state is augmented with these parameters by defining z(k) = [x(k), θ]. To define an augmented state-space model, one has to define an evolution equation for the parameters. This leads to a new state-space model of the form of Eqs. (1) and (2) with x replaced by z. Therefore, the state and the parameters are estimated jointly using the DA algorithms.

State augmentation was first proposed in Schmidt (1966) and is known as the Schmidt–Kalman filter. This technique was mainly used to estimate both the state of the system and additional parameters, including bias, forcing terms and physical parameters. These kinds of parameters are strongly related to the state of the system (Ruiz et al. 2013a). Therefore, they are identifiable and suitable for an augmented state approach. However, Stroud and Bengtsson (2007) and later DelSole and Yang (2010) formally demonstrated that augmentation methods fail for variance parameters like Q and R. The explanation is that in the EnKF, the empirical forecast covariance Pf is computed using all the ensemble members, each one with a different realization of the random variable θ. Thus, Pf and consequently the Kalman gain Kf, are mixing the effects of Q and R parameters contained in θ. Therefore, after applying Eq. (4d), the update of z corresponding to the θ parameters is the same for all of the parameters. To capture the impact of a single variance parameter on the prediction covariance and circumvent the limitation of the state augmentation, Scheffler et al. (2019) proposed to use an ensemble of states integrated with the same variance parameter. The choice of an ensemble of states for each variance parameter leads to two nested ensemble Kalman filters. The technique performs successfully under different model error covariance structures but has an important computational cost.

Another critical aspect of state augmentation is that one needs to define an evolution model for the augmented state z(k) = [x(k), θ(k)]. If persistence is assumed in the parameters such that they are constant in time, this leads to filter degeneracy, since the estimated variance of the error in θ is bound to decrease in time. To prevent or at least mitigate this issue, it was suggested to use an independent inflation factor on the parameters (Ruiz et al. 2013b) or to impose artificial stochastic dynamics for θ, typically a random walk or AR(1) model, as introduced in Eq. (3) and proposed in Liu and West (2001). The tuning of the parameters introduced in these artificial dynamics may be difficult, and this introduces bias into the procedure, which is hard to quantify.
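
To fix ideas, the following toy sketch (a hypothetical setup, with illustrative values) augments a scalar state with an unknown forcing parameter b, which is identifiable in the sense discussed above, and gives it artificial random-walk dynamics so that its ensemble spread does not collapse:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b_true, Q, R, N, Ne = 0.5, 2.0, 0.1, 0.1, 500, 100

# simulate truth and observations of x(k) = a x(k-1) + b + eta
x_t = 0.0; y = np.zeros(N)
for k in range(N):
    x_t = a * x_t + b_true + rng.normal(0, np.sqrt(Q))
    y[k] = x_t + rng.normal(0, np.sqrt(R))

# augmented ensemble, rows z_i = [x_i, b_i]
Z = np.column_stack([rng.normal(0, 1, Ne), rng.normal(0, 2, Ne)])
for k in range(N):
    # forecast: model for x, artificial random walk for the parameter b
    Z[:, 0] = a * Z[:, 0] + Z[:, 1] + rng.normal(0, np.sqrt(Q), Ne)
    Z[:, 1] += rng.normal(0, 0.02, Ne)
    # stochastic EnKF analysis with H = [1, 0] and perturbed observations
    C = np.cov(Z.T)                        # 2 x 2 ensemble covariance
    K = C[:, 0] / (C[0, 0] + R)            # gain for the scalar observation
    innov = y[k] + rng.normal(0, np.sqrt(R), Ne) - Z[:, 0]
    Z += np.outer(innov, K)
print(float(Z[:, 1].mean()))  # ensemble-mean parameter, near b_true = 2.0
```

Replacing b by a variance parameter such as an element of Q or R would fail here for the reasons given above, since the ensemble covariance mixes the effects of all the parameter values carried by the members.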

2) Bayesian update of the posterior distribution

Instead of inferring the joint posterior density through a state augmentation strategy, the inference of the state x(k) and of the parameters θ can be divided into a two-step procedure using the following formula:

 
$$p[x(k),\theta\,|\,y(1{:}k)]=p[x(k)\,|\,y(1{:}k),\theta]\;p[\theta\,|\,y(1{:}k)],\tag{15}$$

which is a direct consequence of the conditional density definition. In Eq. (15), p[x(k)|y(1: k), θ] represents the posterior distribution of the state, given the observations and the parameter θ. It can be computed using a filtering DA algorithm. The second term on the right-hand side of Eq. (15) corresponds to the posterior distribution of the parameters, given the observations up to time k. The latter can be updated sequentially using the following Bayesian hierarchy:

 
$$p[\theta\,|\,y(1{:}k)]\propto p[y(k)\,|\,y(1{:}k-1),\theta]\;p[\theta\,|\,y(1{:}k-1)],\tag{16}$$

where p[y(k)|y(1:k − 1), θ] is the likelihood of the innovations.

Different approximations have been used for p[θ|y(1:k)] in Eq. (16); these include parametric models based on Gaussian (Stroud et al. 2018), inverse-gamma (Stroud and Bengtsson 2007) or Wishart distributions (Ueno and Nakamura 2016), particle-based approximations (Frei and Künsch 2012; Stroud et al. 2018) and grid-based approximation (Stroud et al. 2018).

The methods proposed in the literature also differ by the approximation used for the likelihood of the innovations. We emphasize that p[y(k)|y(1:k − 1), θ] needs to be evaluated for different values of θ at each time step and that, in principle, this requires applying the filter from the initial time for each single value of θ, which is computationally impossible for applications in high dimensions. To reduce the computational cost, it is generally assumed that xf and Pf are independent of θ, and only observations y(k − l:k − 1) in a small time window before the current observation are used when computing the likelihood of the innovations [see Ueno and Nakamura (2016) and Stroud et al. (2018) for a more detailed discussion]. A summary of the Bayesian method from Stroud et al. (2018) is given in the appendix as algorithm 3. It was implemented within the EnKF framework and is one of the most recent studies based on the Bayesian approach.
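
The sequential update of Eq. (16) can be sketched with a grid-based approximation in a scalar setting. This is an illustration inspired by, but much simpler than, the schemes cited above: Q is assumed known, θ parameterizes R, and one Kalman filter is run per grid value:

```python
import numpy as np

rng = np.random.default_rng(2)
a, Q, R_true, N = 0.9, 0.1, 0.5, 1000

# simulate a scalar AR(1) state observed directly
x = 0.0; y = np.zeros(N)
for k in range(N):
    x = a * x + rng.normal(0, np.sqrt(Q))
    y[k] = x + rng.normal(0, np.sqrt(R_true))

grid = np.array([0.1, 0.3, 0.5, 0.7, 0.9])    # candidate values of R (theta)
logw = np.zeros(grid.size)                    # log of a flat prior p[theta]
xa = np.zeros(grid.size); Pa = np.ones(grid.size)
for k in range(N):
    Pf = a * a * Pa + Q                       # one filter per grid value
    d = y[k] - a * xa                         # innovations
    S = Pf + grid                             # innovation variances
    logw += -0.5 * (np.log(2 * np.pi * S) + d * d / S)   # update of Eq. (16)
    K = Pf / S
    xa = a * xa + K * d
    Pa = (1 - K) * Pf
w = np.exp(logw - logw.max()); w /= w.sum()
print(float(np.sum(w * grid)))  # posterior mean of theta, near R_true = 0.5
```

The posterior weights concentrate on the grid value whose filter best explains the innovation sequence, which is the mechanism shared by the grid-based and particle-based approximations mentioned above.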

Applications of the Bayesian method in the DA context are now discussed. It has mainly been used to estimate shape and noise parameters of Q and R error covariance matrices. For instance, Purser and Parrish (2003) and Solonen et al. (2014) estimated statistical parameters controlling the magnitude of the variance and the spatial dependencies in the model error Q, assuming that R is known. There are also applications aimed at estimating parameters governing the shape of the observation error covariance matrix R only: Frei and Künsch (2012) and Stroud et al. (2018) in the Lorenz-96 system, Winiarek et al. (2012, 2014) for the inversion of the source term of airborne radionuclides using a regional atmospheric model, and Ueno and Nakamura (2016) using a shallow-water model to assimilate satellite altimetry.

As pointed out in Stroud and Bengtsson (2007), Bayesian update algorithms work best when the number of unknown parameters in θ is small. This limitation may explain why the joint estimation of parameters controlling both model and observation error covariances is not systematically addressed. For instance, Stroud and Bengtsson (2007) used the EnKF with the Lorenz-96 model for the estimation of a common multiplicative scalar parameter for predefined matrices Q and R. Alternatively, Stroud et al. (2018) tested the Bayesian method on different spatiotemporal systems to estimate the signal-to-noise ratio between Q and R. Nevertheless, based on the experiments about the importance of the signal-to-noise ratio ||Pf||/||R|| presented in Fig. 2, we know that this estimation of the ratio is not optimal.

Widely used in the statistical community, the Bayesian framework is useful for incorporating physical knowledge about the error covariance matrices and for constraining their estimation process. In the DA literature, authors have used a priori distributions for the shape and noise parameters of Q and R, but rarely for both. Operationally, only a limited number of parameters can be estimated. To address this issue, Stroud and Bengtsson (2007) suggested combining Bayesian algorithms with other techniques.

b. Maximization of the total likelihood

The innovation likelihood at time k, p[y(k)|y(1:k − 1), θ] in Eq. (16), can be maximized to find the optimal θ (i.e., Q and R matrices or parameterizations of them). In practice, when this maximization is done at each time step, two issues arise. First, the innovation covariance matrix Σ(k)=HkPf(k)HkT+R(k) combines the information about R and Q, the latter being contained in Pf. When using only time k, it is difficult to disentangle the model and observation error covariances; in application, the aforementioned studies only estimated one of them. Second, the number of observations at each time step is in general limited and, as pointed out by Dee (1995), available observations should exceed “the number of tunable parameters by two or three orders of magnitude.” To overcome these limitations, a reasonable alternative is to use a batch of observations within a time window and to assume θ to be constant in time. The resulting total likelihood expressed sequentially through conditioning is given by

 
$$p[y(1{:}K)\,|\,\theta]=\prod_{k=1}^{K}p[y(k)\,|\,y(1{:}k-1),\theta].\tag{17}$$

Because it accumulates innovation likelihoods over a long period of time, from k = 1 to k = K, Eq. (17) provides more observational information to estimate Q and R. The maximization of this total likelihood has been applied to the estimation of deterministic and stochastic parameters (related to Q) using a direct sequential optimization procedure (DelSole and Yang 2010). Ueno et al. (2010) used a grid-based procedure to estimate noise levels and spatial correlation lengths of Q and a noise level for R. This grid-based method uses predefined sets of covariance parameters and evaluates the different combinations to find the one that maximizes the likelihood criterion. Brankart et al. (2010) also proposed a method using the same criterion but adding, at the initial time, information on scale and correlation length parameters of Q and R. This information is only given once and is progressively forgotten over time, using a decreasing exponential factor. The marginalization of the hidden state in Eq. (17) considers all the previous observations, and it requires the use of a filter. The maximization of the total likelihood p[y(1:K)|θ] to estimate the model error covariance Q was conducted in Pulido et al. (2018), using a gradient-based optimization technique and the EnKF.
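
As a toy illustration of the criterion in Eq. (17), the total log-likelihood of a scalar AR(1) model can be accumulated by a Kalman filter and maximized by brute force over a small hypothetical grid of (Q, R) pairs:

```python
import numpy as np

rng = np.random.default_rng(3)
a, Q_true, R_true, N = 0.9, 0.1, 0.5, 5000

# simulate a scalar AR(1) state observed directly
x = 0.0; y = np.zeros(N)
for k in range(N):
    x = a * x + rng.normal(0, np.sqrt(Q_true))
    y[k] = x + rng.normal(0, np.sqrt(R_true))

def total_loglik(Q, R, y, a):
    """log p[y(1:K) | theta] accumulated from innovation likelihoods, Eq. (17)."""
    xa, Pa, ll = 0.0, 1.0, 0.0
    for obs in y:
        Pf = a * a * Pa + Q
        S = Pf + R                  # innovation variance
        d = obs - a * xa            # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + d * d / S)
        K = Pf / S
        xa = a * xa + K * d
        Pa = (1 - K) * Pf
    return ll

candidates = [(Q, R) for Q in (0.05, 0.1, 0.2) for R in (0.25, 0.5, 1.0)]
best = max(candidates, key=lambda qr: total_loglik(qr[0], qr[1], y, a))
print(best)  # the true pair (0.1, 0.5) attains the largest total likelihood
```

In realistic settings the grid search is replaced by the gradient-based or grid procedures cited above, but the criterion being maximized is the same.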

The likelihood function given in Eq. (17) only depends on the observations y. This likelihood can be written in a different way, taking into account both the observations and the hidden state x. Indeed, the marginalization of the hidden state to obtain the total likelihood can be produced using the whole trajectory of the state from k = 0 to the last time step K all at once. It is given by

 
$$p[y(1{:}K)\,|\,\theta]=\int p[x(0{:}K),y(1{:}K)\,|\,\theta]\;dx(0{:}K).\tag{18}$$

The direct maximization of the total likelihood as a function of the statistical parameters θ is not possible, since neither the total likelihood nor its gradient with respect to the parameters can be evaluated directly (Pulido et al. 2018). Shumway and Stoffer (1982) proposed an iterative procedure based on the expectation–maximization algorithm (hereinafter denoted as EM), introduced by Dempster et al. (1977). They applied it to estimate the parameters of a linear Gaussian state-space model, with linear dynamics, a linear observation operator, and Gaussian errors.

Each iteration of the EM algorithm consists of two steps. In the expectation step (E-step), the posterior density p[x(0:K)|y(1:K), θ(n)] is determined conditioned on the batch of observations y(1:K) and given the parameters θ(n) = [Q(n), R(n)] from the previous iteration or initial guess. This is obtained through the application of a smoother like the EnKS. Then, the M-step relies on the maximization of an intermediate function, depending on the posterior density obtained in the E-step. The intermediate function is defined by the conditional expectation:

 
$$E\left(\log\{p[x(0{:}K),y(1{:}K)\,|\,\theta]\}\,\middle|\,y(1{:}K),\theta^{(n)}\right).\tag{19}$$

If, as in Eqs. (1) and (2), the observational and model errors are assumed to be additive, unbiased, and Gaussian, the expression for the logarithm of the joint density in Eq. (19) is given by

 
$$-\frac{1}{2}\sum_{k=1}^{K}\left\{\left\|x(k)-M[x(k-1)]\right\|_{Q}^{2}+\log|Q|+\left\|y(k)-H[x(k)]\right\|_{R}^{2}+\log|R|\right\}+c,\tag{20}$$

where $\|v\|_{A}^{2}=v^{T}A^{-1}v$ and c is a constant independent of Q and R. In this case, an analytic expression for the optimal error covariances at each iteration of the EM algorithm can be obtained. The estimators of the parameters that maximize Eq. (19) using Eq. (20) are

 
$$Q^{(n+1)}=\frac{1}{K}\sum_{k=1}^{K}E\left(\{x(k)-M[x(k-1)]\}\{x(k)-M[x(k-1)]\}^{T}\,\middle|\,y(1{:}K),\theta^{(n)}\right)\quad\text{and}\tag{21a}$$

$$R^{(n+1)}=\frac{1}{K}\sum_{k=1}^{K}E\left(\{y(k)-H[x(k)]\}\{y(k)-H[x(k)]\}^{T}\,\middle|\,y(1{:}K),\theta^{(n)}\right).\tag{21b}$$

The application of the EM algorithm for the estimation of Q and R is straightforward. Starting from Q(1) and R(1), an ensemble Kalman smoother is applied with this first guess and the batch of observations y(1:K) to obtain the posterior density p[x(0:K)|y(1:K), θ(1)]. Then Eqs. (21a) and (21b) are used to obtain the updates Q(2) and R(2). Next, a new application of the smoother is conducted using the parameters Q(2) and R(2) and the observations y(1:K); the resulting states are used in Eqs. (21a) and (21b) to estimate Q(3) and R(3), and so on. As a diagnostic of convergence or as a stopping criterion, the product of innovation likelihood functions given in Eq. (17) is evaluated using a filter. The EM algorithm guarantees that the total likelihood increases at each iteration and that the sequence θ(n) converges to a local maximum (Wu 1983). A summary of the EM method (using the EnKF and EnKS) from Dreano et al. (2017) is given in the appendix as algorithm 4.
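
For a scalar linear Gaussian model with known dynamics coefficient a, the EM iterations of Eqs. (21a) and (21b) reduce to a few lines. This sketch uses a Kalman filter and RTS smoother in the E-step, standing in for the EnKF/EnKS of algorithm 4; the values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
a, Q_true, R_true, N = 0.9, 0.1, 0.5, 2000

# simulate x(k) = a x(k-1) + eta, y(k) = x(k) + eps
x = 0.0; y = np.zeros(N + 1)
for k in range(1, N + 1):
    x = a * x + rng.normal(0, np.sqrt(Q_true))
    y[k] = x + rng.normal(0, np.sqrt(R_true))

Q, R = 1.0, 1.0                            # deliberately wrong first guess
for _ in range(100):
    # E-step, forward pass: Kalman filter
    xf = np.zeros(N + 1); Pf = np.zeros(N + 1)
    xa = np.zeros(N + 1); Pa = np.zeros(N + 1); Pa[0] = 1.0
    for k in range(1, N + 1):
        xf[k] = a * xa[k-1]; Pf[k] = a * a * Pa[k-1] + Q
        K = Pf[k] / (Pf[k] + R)
        xa[k] = xf[k] + K * (y[k] - xf[k]); Pa[k] = (1 - K) * Pf[k]
    # E-step, backward pass: RTS smoother with gain G
    xs = xa.copy(); Ps = Pa.copy(); G = np.zeros(N + 1)
    for k in range(N - 1, -1, -1):
        G[k] = Pa[k] * a / Pf[k+1]
        xs[k] = xa[k] + G[k] * (xs[k+1] - xf[k+1])
        Ps[k] = Pa[k] + G[k] ** 2 * (Ps[k+1] - Pf[k+1])
    # M-step: Eqs. (21a) and (21b) from the smoothed second moments
    Exx = Ps + xs ** 2                              # E[x(k)^2 | y(1:K)]
    Exx1 = G[:-1] * Ps[1:] + xs[1:] * xs[:-1]       # E[x(k)x(k-1) | y(1:K)]
    Q = np.mean(Exx[1:] - 2 * a * Exx1 + a * a * Exx[:-1])   # Eq. (21a)
    R = np.mean((y[1:] - xs[1:]) ** 2 + Ps[1:])              # Eq. (21b)
print(Q, R)  # near Q_true = 0.1 and R_true = 0.5
```

Each pass through the loop is one EM iteration: the smoother is rerun with the updated (Q, R), and the total likelihood is guaranteed not to decrease.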

EM is a well-known algorithm in the statistical community. The procedure is parameter-free and robust, owing to the large number of observations used to approximate the likelihood when a long batch period is used (Shumway and Stoffer 1982). Although the use of the EM algorithm is still limited in DA, it is becoming more and more popular. Some studies have implemented the EM algorithm to estimate only the observation error matrix R. For instance, Ueno and Nakamura (2014) used the model proposed in Zebiak and Cane (1987) and satellite altimetry observations, whereas Liu et al. (2017) used an air quality model for accidental pollutant source retrieval. But the estimation of the observation error covariance alone is limited, and other studies have jointly estimated the model error Q and the observation error R, for instance Tandeo et al. (2015) for an orographic subgrid-scale nonlinear observation operator. Then, Dreano et al. (2017) and Pulido et al. (2018) used the EM procedure to jointly estimate the Q and R matrices of the Lorenz-63 system and stochastic parameters of the Lorenz-96 system, respectively. Recently, Yang and Mémin (2019) extended the EM procedure to the estimation of physical parameters in a one-dimensional shallow-water model, more specifically for the identification of stochastic subgrid terms. Last, an online adaptation of the EM algorithm for the estimation of Q and R at each time step, after the filtering procedure, has been proposed in Cocucci et al. (2020); in this adaptive case, the likelihood is averaged locally over time [see Cappé (2011) for more details].

To our knowledge, EM has not been tested yet on operational systems with large observation and state space. In that case, the use of parametric forms for the matrices Q and R is essential to reduce the number of statistical parameters θ to estimate. For instance, Dreano et al. (2017) and Liu et al. (2017) showed that in the particular cases where covariances are diagonal or of the form αA, with A being a positive definite matrix, expressions in Eqs. (21a) and (21b) are simplified, and a suboptimal θ in the space of the parametric covariance form can be obtained.

5. Other methods

In this section, we describe other methods that have been used to estimate Q and R, and that cannot be included in the categories presented in the previous sections. In particular, we report here about methods that are applied either a posteriori, after DA cycles, or without applying any DA algorithms.

a. Analysis (or reanalysis) increment approach

This first method is based on previous DA outputs. The key idea here is to use the analysis (or reanalysis) increments to provide a realistic sample of model errors from which statistical moments, such as the covariance matrix Q, can be empirically estimated. This assumes that the sequence of reanalysis xs (or analysis xa) is the best available representation of the true process x. In that case, the following approximation in Eq. (1) is made:

 
$$\eta(k)=M[x(k-1)]-x(k)\approx M[x^{s}(k-1)]-x^{s}(k).\tag{22}$$

In this approximation, it is implicitly assumed that the estimated state equals the truth, so that the error in the initial condition at time k − 1 is neglected. A similar approximation of the true process by xa or xs in Eq. (2) can be used to estimate the observation error covariance matrix R.
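
A quick numerical check of Eq. (22) can be written for a toy linear model, with the true trajectory standing in for a perfect reanalysis (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 3, 20_000
M = 0.9 * np.eye(n)                        # linear model operator
Q_true = np.diag([0.1, 0.2, 0.3])

# the true trajectory plays the role of a perfect (re)analysis xs
noise = rng.multivariate_normal(np.zeros(n), Q_true, N)
xs = np.zeros((N, n))
for k in range(1, N):
    xs[k] = M @ xs[k-1] + noise[k]

eta = xs[:-1] @ M.T - xs[1:]               # samples of M[xs(k-1)] - xs(k), Eq. (22)
Q_hat = np.cov(eta.T)                      # empirical estimate of Q
print(np.round(np.diag(Q_hat), 2))  # near the true diagonal [0.1, 0.2, 0.3]
```

With an imperfect reanalysis, the residual analysis error would inflate these samples, which is the main caveat of the increment-based approach.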

Operationally, the analysis (or reanalysis) increment method is applied after a DA filter (or smoother) to estimate the Q matrix. This method was originally introduced by Leith (1978), and later used to account for model error in the context of ensemble Kalman filters, using analysis and reanalysis increments by Mitchell and Carrassi (2015), and in the context of weak-constraint variational assimilation by Bowler (2017). Along this line, Rodwell and Palmer (2007) also proposed evaluating the average of instantaneous analysis increments to represent the systematic forecast tendencies of a model.

b. Covariance matching

The covariance matching method was introduced by Fu et al. (1993). It consists of matching sample covariance matrices to their theoretical expectations. It is thus a method of moments, similar to that of Desroziers et al. (2005), except that covariance matching is performed on a set of historical observations and numerical simulations (denoted xsim), without applying any DA algorithm. It was extended by Menemenlis and Chechelnitsky (2000) to time-lagged innovations, as first considered by Bélanger (1974).

In the case of a constant and linear observation operator H, the basic idea in Fu et al. (1993) is to assume the following system:

 
xsim(k) = x(k) + ηsim(k),
(23a)
 
ηsim(k) = Aηsim(k−1) + η(k), and
(23b)
 
Hxsim(k) − y(k) = Hηsim(k) + ϵ(k),
(23c)

with A a transition matrix close to the identity matrix, which assumes slow variations of the numerical simulation errors ηsim. In Eqs. (23b) and (23c), the definitions of the η and ϵ errors remain the same as in the general Eqs. (1) and (2).

Assuming that Q and R are constant over time and that ϵ is uncorrelated with x and with ηsim, Eqs. (23c) and (23a) yield the following estimates of R and Psim (the latter being the error covariance of the numerical simulations):

 
R̂ = (1/2){E[(y − Hxsim)(y − Hxsim)ᵀ] − E[(Hxsim)(Hxsim)ᵀ] + E(yyᵀ)}, and
(24a)
 
HP̂simHᵀ = (1/2){E[(y − Hxsim)(y − Hxsim)ᵀ] + E[(Hxsim)(Hxsim)ᵀ] − E(yyᵀ)},
(24b)

where E is the expectation operator over time. Then, an estimate of Q is obtained using Eqs. (23b) and (24b) and assuming that Psim has a unique time-invariant limit.
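The moment-matching relations in Eqs. (24a) and (24b) can be sketched on synthetic data as follows. The scalar observation operator, the variance values, and the function name `covariance_matching` are illustrative assumptions for this sketch, not the implementation of Fu et al. (1993).

```python
import numpy as np

def covariance_matching(y, Hx_sim):
    """Moment-matching estimates of R and H Psim H^T, Eqs. (24a)-(24b).

    y, Hx_sim : (K, p) arrays of observations and simulated observations;
    expectations over time are replaced by sample second moments.
    """
    d = y - Hx_sim
    C_dd = d.T @ d / len(d)            # E[(y - Hx_sim)(y - Hx_sim)^T]
    C_ss = Hx_sim.T @ Hx_sim / len(d)  # E[(Hx_sim)(Hx_sim)^T]
    C_yy = y.T @ y / len(d)            # E[y y^T]
    R_hat = 0.5 * (C_dd - C_ss + C_yy)
    HPH_hat = 0.5 * (C_dd + C_ss - C_yy)
    return R_hat, HPH_hat

# Synthetic check: x ~ N(0, 4), simulation error ~ N(0, 1), obs error ~ N(0, 0.09)
rng = np.random.default_rng(1)
K, p = 20000, 1
x = rng.normal(0.0, 2.0, (K, p))
Hx_sim = x + rng.normal(0.0, 1.0, (K, p))  # simulations carry Psim = 1
y = x + rng.normal(0.0, 0.3, (K, p))       # observations carry R = 0.09
R_hat, HPH_hat = covariance_matching(y, Hx_sim)
```

With these assumed variances, R_hat should approach 0.09 and HPH_hat should approach 1, up to sampling noise; the cancellation of the (unknown) signal covariance between the three moments is exactly what makes the method work without running any DA algorithm.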

c. Forecast sensitivity

In operational meteorology, it is critical to learn the sensitivity of the forecast accuracy to the various parameters of a DA system, in particular the error statistics of both the model and the observations. This is why a significant portion of the literature considers the problem of tuning R and Q through the lens of the sensitivity of the forecast to these parameters. The computation of those sensitivities can be seen as a first-order correction or diagnostic for such an estimation. The forecast sensitivities are computed either using the adjoint model (Daescu and Todling 2010; Daescu and Langland 2013) in the context of variational methods, or a forecast ensemble (Hotta et al. 2017) in the context of the EnKF.

The basic idea is to compute, at each assimilation cycle, the difference between forecast and analysis, denoted dfa(k) = xf(k) − xa(k). A measure of forecast accuracy is then given by dfa(k)ᵀSdfa(k), with S a diagonal scaling matrix that normalizes the components of dfa. The Q and R estimates are the matrices that minimize this measure. The adjoint or the ensemble is thus used to compute the partial derivatives of this measure with respect to Q and R.
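
A minimal sketch of the scaled quadratic measure dfa(k)ᵀSdfa(k) follows; the state components, their units, and the scaling values are hypothetical choices for illustration only.

```python
import numpy as np

def forecast_accuracy_measure(x_f, x_a, scale):
    """Scaled squared forecast-minus-analysis difference d_fa^T S d_fa.

    scale holds the diagonal of S; a hypothetical choice is the inverse
    of climatological variances, so each component contributes comparably.
    """
    d_fa = x_f - x_a
    return float(d_fa @ np.diag(scale) @ d_fa)

# Components with very different magnitudes contribute comparably once scaled
x_f = np.array([101325.0, 15.2])   # e.g., pressure (Pa) and temperature (C)
x_a = np.array([101300.0, 15.0])
measure = forecast_accuracy_measure(x_f, x_a, scale=np.array([1e-4, 1.0]))
```

Without the scaling, the pressure difference (25 Pa) would dominate the temperature difference (0.2 C) by orders of magnitude; the diagonal S restores a meaningful balance before derivatives with respect to Q and R are taken.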

6. Conclusions and perspectives

As often considered in data assimilation, this review paper also deals with model and observation errors that are assumed additive and Gaussian, with covariance matrices Q and R. The model error corresponds to the dynamic model's deficiencies in representing the underlying physics, whereas the observation error corresponds to instrumental noise and representativity error. Model and observation errors are assumed to be mutually uncorrelated and white in time. The model and the observations are also assumed to be unbiased, a strong assumption for real data assimilation applications.

The discussion started with an illustration of the individual and joint impacts of improperly calibrated covariances, using a linear toy model. The experiments clearly showed that, to achieve reasonable filter accuracy (i.e., in terms of root-mean-squared error), it is crucial to carefully specify both Q and R. The effect of a misspecification of Q and R on the coverage probability was also highlighted. This coverage probability is related to the estimated covariance of the reconstructed state, and thus to uncertainty quantification in data assimilation. After this one-dimensional illustration, the core of the paper gave an overview of various methods to jointly estimate the Q and R error covariance matrices; they are summarized and compared below.

a. Comparison of existing methods for estimating Q and R

We mainly focused in this review on four methods for the joint estimation of the error covariances Q and R. The methods are summarized in Table 1. They correspond to classic estimation methods, based on statistical moments or likelihoods. The main difference between the four methods comes from the innovations taken into account: the total innovation, as in the EM algorithm proposed by Shumway and Stoffer (1982); lag innovations, following the idea given in Mehra (1970); or different types of innovations in the observation space, as in Desroziers et al. (2005). Additionally, to constrain the estimation, hierarchical Bayesian approaches use prior distributions for the shape parameters of Q and R.

Most of the methods estimate the model error covariance Q. The exception is the one using the Desroziers diagnostic, dealing with different types of innovations in the observation space, which instead estimates an inflation factor for Pf. Moreover, the methods are mainly defined online, meaning that they aim to estimate Q and R adaptively, together with the current state of the system. Consequently, these methods require additional tunable parameters to smooth the estimated covariances over time. However, most of the methods presented in this review also have an offline variant. In that case, a batch of observations is used to estimate Q and R. In some methods, such as the EM algorithm, the parameters are determined iteratively. These offline approaches avoid the use of additional smoothing parameters.

Throughout this review paper, as usually stated in DA, it is assumed that the model error η and observation error ϵ, defined in Eqs. (1) and (2), are Gaussian. Consequently, the distribution of the innovations is also Gaussian. The four presented methods use this property to build adequate estimates of Q and R. But if η and ϵ are non-Gaussian, the Desroziers diagnostic and lag-innovation methods are no longer suitable. However, the EM procedures and Bayesian methods remain relevant, although they must be used with an appropriate filter (e.g., particle filters) rather than Kalman-based algorithms (i.e., those assuming a Gaussian distribution of the state). Recently, the treatment of non-Gaussian error distributions in DA has been explored by Katzfuss et al. (2020), using hierarchical state-space models. This Bayesian framework makes it possible to handle unknown variables that cannot easily be included in the state vector (e.g., parameters of Q and R) and to model non-Gaussian observations.

The four methods have been applied at different levels of complexity. For instance, Bayesian inference methods (due to their algorithmic complexity) and the EM algorithm (due to its computational cost) have so far only been applied to small dynamic models. However, the online version of the EM algorithm is less computationally demanding and opens new perspectives for applications to larger models. On the other hand, methods using innovation statistics in the observation space have already been applied to NWP models.

The four methods summarized in Table 1 show differences in maturity in terms of applications and methodological aspects. This review also shows that there are still remaining challenges and possible improvements for the four methods.

b. Remaining challenges for each method

The first challenge concerns improving how adaptive techniques handle the additional parameters that control the variations of the Q and R estimates over time. Instead of using fixed values for these parameters, for instance a fixed ρ in the lag-innovation methods or σλ2 in the inflation methods, we suggest using time-dependent adaptations. This adaptive solution could avoid instability problems close to the solution. Another option could be to adapt these procedures by working with stable parameter values (small ρ, low σλ2) and iterating the procedures on a batch of observations, as in the EM algorithm. This offline variant was suggested and tested in Desroziers et al. (2005) with encouraging results. To the best of our knowledge, it has not yet been tested with lag-innovation methods.

The second challenge concerns considering time-varying error covariance matrices. The adaptive procedures, based on online estimations with temporal smoothing of Q and R, are supposed to capture slowly evolving covariances. On the contrary, offline methods like the EM algorithm are working on a batch of observations, assuming that Q and R are constant over the batch period. Online solutions for the EM algorithm, with the likelihood averaged locally over time (Cocucci et al. 2020), could also capture slow evolution of the covariances. Another simple solution could be to work on small sets of observations, named as minibatches, and to apply the EM algorithm in each set using the previous estimates as an initial guess. These intermediate schemes are of common use in machine learning.

A third challenge has to do with the assumption, used by all of the methods described herein, that observation and model errors are mutually independent. Nevertheless, as pointed out in Berry and Sauer (2018), observation and model errors are often correlated in real data assimilation problems (e.g., for satellite retrieval of Earth observations that uses model outputs in the inversion process). Methods based on Bayesian inference can, in principle, exploit existing model-to-observation correlations by using a joint prior distribution (i.e., not two individual ones). Explicitly accounting for this correlation can then constrain the optimization procedure. This is not possible in the other approaches described in this review, at least not in their standard formulations, and the presence of model-observation correlation can deteriorate their accuracy.

A fourth challenge is common to all the methods presented in this review. Iterative versions of the presented algorithms need initial values or distributions for R and Q (or B = Pf in the case of Desroziers). However, as mentioned in Waller et al. (2016) for the Desroziers diagnostics, there is no guarantee that the algorithms will converge to the optimal solution. Indeed, in such an optimization problem, there are possibly several local and nonoptimal solutions. Suboptimal specifications of R, Q, or B in the initial DA cycle will affect the final estimation results. There are several solutions to avoid this convergence problem: initialize the covariance matrices using physical expertise, execute the iterative algorithms several times with different initial covariance matrices, or use stochastic perturbations in the optimization algorithms to avoid being trapped in local solutions. These aspects of convergence and sensitivity to initial conditions have so far been poorly addressed. It is therefore necessary to check which methods are robust operationally.

The last remaining challenge concerns the estimation of other statistical parameters of the state-space model given in Eqs. (1) and (2) and associated filters. Indeed, the initial conditions x(0) and P(0) are crucial for certain satellite retrieval problems and have to be estimated. This is the case, for instance, when the time sequence of observations is short (i.e., shorter than the spinup time of the filter with an uninformative prior) or when filtering and smoothing are repeated over various iterations, as in the EM algorithm. Estimation methods should also consider the estimation of systematic or time-varying biases, i.e., the deterministic parts of η and ϵ. This was initially proposed by Dee and da Silva (1999) and tested in Dee et al. (1999) in the case of maximizing the innovation likelihood, in Dee (2005) in a state augmentation formulation, and was adapted to a Bayesian update formulation in Liu et al. (2017) and in Berry and Harlim (2017). Recently, the joint estimation of bias and covariance error terms, for the treatment of brightness temperatures from the European geostationary satellite, has been successfully applied in Merchant et al. (2020).

c. Perspectives for geophysical DA

Beyond the aforementioned potential improvements in the existing techniques, specific research directions need to be taken by the data assimilation community. The main one concerns the realization of a comprehensive numerical evaluation of the different methods for the estimation of Q and R, built on an agreed experimental framework and a consensus model. Such an effort would help to evaluate (i) the pros and cons of the different methods (including their capability to deal with high dimensionality, localization in ensemble methods, and their practical feasibility), (ii) their effects on different error statistics (RMSE, coverage probabilities, and other diagnostics), (iii) the potential combination of the various methods (especially those considering constant or adaptive covariances), and (iv) the capability to take into account other sources of error (due for instance to improper parameterizations, multiplicative errors, or forcing terms).

The use of a realistic DA problem, with a high-dimensional state space and limited, heterogeneous observational coverage, should be addressed in the future. In that realistic case, the observational information per degree of freedom will be significantly lower, and the estimates of Q and R will deteriorate. Parametric versions of these error covariance matrices will therefore be necessary. Among the parameters, some will control the variances and will differ depending on the variable. Other parameters will control the spatial correlation lengths, which could be isotropic or anisotropic, depending on the region of interest and the considered variable. Cross correlations between variables will also have to be considered. Consequently, Q and R will be block matrices with as few parameters as possible.

A further challenge for future work is the evaluation of the feasibility of estimating nonadditive, non-Gaussian, and time-correlated noises under the current estimation frameworks. In this way, the need for observational constraints for the stochastic perturbation methods in the NWP community could be considered within the estimation framework discussed in this review.

Acknowledgments

This work has been carried out as part of the Copernicus Marine Environment Monitoring Service (CMEMS) 3DA project. CMEMS is implemented by Mercator Ocean in the framework of a delegation agreement with the European Union. This work was also partially supported by FOCUS Establishing Supercomputing Center of Excellence. CEREA is a member of Institut Pierre Simon Laplace (IPSL). Author Carrassi has been funded by the project REDDA (250711) of the Norwegian Research Council. Carrassi was also supported by the Natural Environment Research Council (Agreement PR140015 between NERC and the National Centre for Earth Observation). We thank Paul Platzer, a second-year Ph.D. student, who helped to popularize the summary and the introduction, and John C. Wells, Gilles-Olivier Guégan, and Aimée Johansen for their English grammar corrections. We also thank the five anonymous reviewers for their precious comments and ideas to improve this review paper. We are immensely grateful to Prof. David M. Schultz, Chief Editor of Monthly Weather Review, for his detailed advice and careful reading of the paper.

APPENDIX

Four Main Algorithms to Jointly Estimate Q and R in Data Assimilation

Algorithm 1 is an adaptive algorithm for the EnKF as implemented by Miyoshi et al. (2013). The steps of the algorithm are the following:

- initialize inflation factor [for instance, λ(1) = 1];

for k in 1:K do

 for i in 1:Ne do

  - compute forecast xif(k) using Eq. (4a);

  - compute innovation di(k) using Eq. (4b);

end

 - compute empirical covariance P˜f(k) of the xif(k);

 - compute Kf(k) using Eq. (4c) where Pf(k)HkT and HkP˜f(k)HkT are inflated by λ(k);

 for i in 1:Ne do

  - compute analysis xia(k) using Eq. (4d);

end

 - compute mean innovations dof(k) and doa(k) with diof(k) = y(k) − Hk[xif(k)] and dioa(k) = y(k) − Hk[xia(k)];

 - update R(k) from Eq. (6b) using the cross-covariance between diof(k) and dioa(k);

 - estimate λ˜(k) using Eq. (8) where HkP˜f(k)HkT is inflated by λ(k);

 - update λ(k + 1) using temporal smoother;

end
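
The temporal smoothing step in the last line of Algorithm 1 is left unspecified above. One common choice, shown here as an assumption of this sketch rather than the exact smoother of Miyoshi et al. (2013), is exponential smoothing of the instantaneous estimate λ̃(k):

```python
def smooth_update(prev, estimate, rho=0.05):
    """One possible temporal smoother for the inflation factor:
    exponential smoothing of the noisy per-cycle estimate lambda_tilde(k).
    The weight rho is a hypothetical tuning choice, not a prescribed value.
    """
    return (1.0 - rho) * prev + rho * estimate

# Three noisy per-cycle estimates nudge lambda slowly upward from 1.0
lam = 1.0
for lam_tilde in [1.4, 1.3, 1.5]:
    lam = smooth_update(lam, lam_tilde)
```

A small rho damps the cycle-to-cycle noise in λ̃(k) at the cost of a slower response; this is exactly the kind of additional tunable parameter discussed in the conclusions.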

Algorithm 2 is an adaptive algorithm for the EnKF by Berry and Sauer (2013). The steps of the algorithm are the following:

- initialize Q(1) and R(1);

for k in 1:K do

 for i in 1:Ne do

  - compute forecast xif(k) using Eq. (4a);

  - compute innovation di(k) using Eq. (4b);

end

 - compute Kf(k) using Eq. (4c);

 for i in 1:Ne do

  - compute analysis xia(k) using Eq. (4d);

end

 - apply Eq. (12a) to get P˜(k) using linearizations of Mk and Hk given in Eqs. (5a) and (5b);

 - estimate Q˜(k) using Eq. (12b);

 - estimate R˜(k) using Eq. (12c);

 - update Q(k + 1) and R(k + 1) using temporal smoothers;

end

Algorithm 3 is an adaptive algorithm for the EnKF from Stroud et al. (2018). The steps of the algorithm are the following:

- define a priori distributions for θ (shape parameters of Q and R);

for k in 1:K do

 for i in 1:Ne do

  - draw samples θi(k) from p[θ|y(1:k − 1)];

  - compute forecast xif(k) using Eq. (4a) with θi(k);

  - compute innovation di(k) using Eq. (4b) with θi(k);

end

- compute Kf(k) using Eq. (4c);

 for i in 1:Ne do

  - compute analysis xia(k) using Eq. (4d);

end

 - approximate the Gaussian likelihood of the innovations p[y(k)|y(1:k − 1), θ(k)] using the empirical mean d̄(k) = (1/Ne) ∑_{i=1}^{Ne} di(k) and the empirical covariance Σ(k) = [1/(Ne − 1)] ∑_{i=1}^{Ne} [di(k) − d̄(k)][di(k) − d̄(k)]ᵀ, with di(k) = y(k) − Hk[xif(k)];

 - update p[θ|y(1:k)] using Eq. (16);

end

Algorithm 4 is an EM algorithm for the EnKF/EnKS from Dreano et al. (2017). The steps of the algorithm are the following:

- define θ (shape parameters of Q and R);

- set p[y(1:K)|θ(0)] = −∞ (so that the first iteration always runs);

- initialize n = 1, θ(1) and ϵ (stop condition);

while p[y(1:K)|θ(n)] − p[y(1:K)|θ(n−1)] > ϵ do

 for k in 1:K do

  for i in 1:Ne do

   - compute forecast xif(k) using Eq. (4a);

   - compute innovation di(k) using Eq. (4b);

  end

  - compute Kf(k) using Eq. (4c);

  for i in 1:Ne do

   - compute analysis xia(k) using Eq. (4d);

  end

end

 for k in K:1 do

  - compute Ks(k) using Eq. (4e);

  for i in 1:Ne do

   - compute reanalysis xis(k) using Eq. (4f);

  end

end

- increment nn + 1;

- estimate Q(n) using Eq. (21a);

- estimate R(n) using Eq. (21b);

end
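
The outer loop of Algorithm 4 can be sketched generically as follows. The `e_step`, `m_step`, and `loglik` callables are placeholders standing in for the EnKS pass and the updates of Eqs. (21a) and (21b), and the one-dimensional mock check at the end is purely illustrative.

```python
def run_em(e_step, m_step, loglik, theta, eps=1e-6, max_iter=100):
    """Generic outer loop of an EM estimation: iterate the smoother (E step)
    and the covariance update (M step) until the likelihood gain drops below eps.
    """
    prev = float("-inf")            # ensures the first iteration always runs
    for _ in range(max_iter):
        stats = e_step(theta)       # filtering + smoothing with current theta
        theta = m_step(stats)       # re-estimate Q and R from smoothed moments
        cur = loglik(theta)
        if cur - prev <= eps:       # monotone EM: stop when the gain is tiny
            break
        prev = cur
    return theta

# Mock 1D check: likelihood -(theta - 2)^2, damped updates toward its maximum
theta_hat = run_em(e_step=lambda t: t,
                   m_step=lambda t: t + 0.5 * (2.0 - t),
                   loglik=lambda t: -(t - 2.0) ** 2,
                   theta=0.0)
```

The stopping rule mirrors the while condition in Algorithm 4: since EM increases the likelihood monotonically, the loop halts once successive iterates no longer improve it by more than ϵ.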

REFERENCES

Anderson
,
J. L.
,
2007
:
An adaptive covariance inflation error correction algorithm for ensemble filters
.
Tellus
,
59A
,
210
224
, https://doi.org/10.1111/j.1600-0870.2006.00216.x.
Anderson
,
J. L.
,
2009
:
Spatially and temporally varying adaptive covariance inflation for ensemble filters
.
Tellus
,
61A
,
72
83
, https://doi.org/10.1111/j.1600-0870.2008.00361.x.
Anderson
,
J. L.
, and
S. L.
Anderson
,
1999
:
A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts
.
Mon. Wea. Rev.
,
127
,
2741
2758
, https://doi.org/10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.
Bélanger
,
P. R.
,
1974
:
Estimation of noise covariance matrices for a linear time-varying stochastic process
.
Automatica
,
10
,
267
275
, https://doi.org/10.1016/0005-1098(74)90037-5.
Berry
,
T.
, and
T.
Sauer
,
2013
:
Adaptive ensemble Kalman filtering of non-linear systems
.
Tellus
,
65A
,
20331
, https://doi.org/10.3402/tellusa.v65i0.20331.
Berry
,
T.
, and
J.
Harlim
,
2017
:
Correcting biased observation model error in data assimilation
.
Mon. Wea. Rev.
,
145
,
2833
2853
, https://doi.org/10.1175/MWR-D-16-0428.1.
Berry
,
T.
, and
T.
Sauer
,
2018
:
Correlation between system and observation errors in data assimilation
.
Mon. Wea. Rev.
,
146
,
2913
2931
, https://doi.org/10.1175/MWR-D-17-0331.1.
Bishop
,
C. H.
, and
E. A.
Satterfield
,
2013
:
Hidden error variance theory. Part I: Exposition and analytic model
.
Mon. Wea. Rev.
,
141
,
1454
1468
, https://doi.org/10.1175/MWR-D-12-00118.1.
Bishop
,
C. H.
,
E. A.
Satterfield
, and
K. T.
Shanley
,
2013
:
Hidden error variance theory. Part II: An instrument that reveals hidden error variance distributions from ensemble forecasts and observations
.
Mon. Wea. Rev.
,
141
,
1469
1483
, https://doi.org/10.1175/MWR-D-12-00119.1.
Blanchet
,
I.
,
C.
Frankignoul
, and
M. A.
Cane
,
1997
:
A comparison of adaptive Kalman filters for a tropical Pacific Ocean model
.
Mon. Wea. Rev.
,
125
,
40
58
, https://doi.org/10.1175/1520-0493(1997)125<0040:ACOAKF>2.0.CO;2.
Bocquet
,
M.
,
2011
:
Ensemble Kalman filtering without the intrinsic need for inflation
.
Nonlinear Processes Geophys.
,
18
,
735
750
, https://doi.org/10.5194/npg-18-735-2011.
Bocquet
,
M.
, and
P.
Sakov
,
2012
:
Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems
.
Nonlinear Processes Geophys.
,
19
,
383
399
, https://doi.org/10.5194/npg-19-383-2012.
Bocquet
,
M.
,
P. N.
Raanes
, and
A.
Hannart
,
2015
:
Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation
.
Nonlinear Processes Geophys.
,
22
,
645
662
, https://doi.org/10.5194/npg-22-645-2015.
Bormann
,
N.
,
A.
Collard
, and
P.
Bauer
,
2010
:
Estimates of spatial and interchannel observation-error characteristics for current sounder radiances for numerical weather prediction. II: Application to AIRS and IASI data
.
Quart. J. Roy. Meteor. Soc.
,
136
,
1051
1063
, https://doi.org/10.1002/qj.615.
Bowler
,
N. E.
,
2017
:
On the diagnosis of model error statistics using weak-constraint data assimilation
.
Quart. J. Roy. Meteor. Soc.
,
143
,
1916
1928
, https://doi.org/10.1002/qj.3051.
Brankart
,
J.-M.
,
E.
Cosme
,
C.-E.
Testut
,
P.
Brasseur
, and
J.
Verron
,
2010
:
Efficient adaptive error parameterizations for square root or ensemble Kalman filters: Application to the control of ocean mesoscale signals
.
Mon. Wea. Rev.
,
138
,
932
950
, https://doi.org/10.1175/2009MWR3085.1.
Buehner
,
M.
,
2010
:
Error statistics in data assimilation: Estimation and modelling. Data Assimilation: Making Sense of Observations, W. Lahoz, B. Khattatov, and R. Menard, Eds., Springer, 93–112
.
Campbell
,
W. F.
,
E. A.
Satterfield
,
B.
Ruston
, and
N. L.
Baker
,
2017
:
Accounting for correlated observation error in a dual-formulation 4D variational data assimilation system
.
Mon. Wea. Rev.
,
145
,
1019
1032
, https://doi.org/10.1175/MWR-D-16-0240.1.
Cappé
,
O.
,
2011
:
Online expectation-maximisation. Mixtures: Estimation and Applications, K. L. Mengersen, C. Robert, and M. Titterington, Eds.,Wiley Series in Probability and Statistics, John Wiley & Sons, 1–53
.
Carrassi
,
A.
,
M.
Bocquet
,
L.
Bertino
, and
G.
Evensen
,
2018
:
Data assimilation in the geosciences: An overview on methods, issues and perspectives
.
Wiley Interdiscip. Rev.: Climate Change
,
9
,
e535
, https://doi.org/10.1002/wcc.535.
Chapnik
,
B.
,
G.
Desroziers
,
F.
Rabier
, and
O.
Talagrand
,
2004
:
Properties and first application of an error-statistics tuning method in variational assimilation
.
Quart. J. Roy. Meteor. Soc.
,
130
,
2253
2275
, https://doi.org/10.1256/qj.03.26.
Cocucci
,
T. J.
,
M.
Pulido
,
M.
Lucini
, and
P.
Tandeo
,
2020
:
Model error covariance estimation in particle and ensemble Kalman filters using an online expectation-maximization algorithm. arXiv:2003.02109
, https://arxiv.org/pdf/2003.02109.pdf.
Corazza
,
M.
,
E.
Kalnay
,
D. J.
Patil
,
R.
Morss
,
M.
Cai
,
I.
Szunyogh
,
B. R.
Hunt
, and
J. A.
Yorke
,
2003
:
Use of the breeding technique to estimate the structure of the analysis “errors of the day.”
Nonlinear Processes Geophys.
,
10
,
233
243
, https://doi.org/10.5194/npg-10-233-2003.
Daescu
,
D. N.
, and
R.
Todling
,
2010
:
Adjoint sensitivity of the model forecast to data assimilation system error covariance parameters
.
Quart. J. Roy. Meteor. Soc.
,
136
,
2000
2012
, https://doi.org/10.1002/qj.693.
Daescu
,
D. N.
, and
R. H.
Langland
,
2013
:
Error covariance sensitivity and impact estimation with adjoint 4D-Var: Theoretical aspects and first applications to NAVDAS-AR
.
Quart. J. Roy. Meteor. Soc.
,
139
,
226
241
, https://doi.org/10.1002/qj.1943.
Daley
,
R.
,
1991
:
Atmospheric Data Analysis
.
Cambridge University Press
,
457
pp.
Daley
,
R.
,
1992
:
Estimating model-error covariances for application to atmospheric data assimilation
.
Mon. Wea. Rev.
,
120
,
1735
1746
, https://doi.org/10.1175/1520-0493(1992)120<1735:EMECFA>2.0.CO;2.
Dee
,
D. P.
,
1995
:
On-line estimation of error covariance parameters for atmospheric data assimilation
.
Mon. Wea. Rev.
,
123
,
1128
1145
, https://doi.org/10.1175/1520-0493(1995)123<1128:OLEOEC>2.0.CO;2.
Dee
,
D. P.
,
2005
:
Bias and data assimilation
.
Quart. J. Roy. Meteor. Soc.
,
131
,
3323
3343
, https://doi.org/10.1256/qj.05.137.
Dee
,
D. P.
, and
A. M.
da Silva
,
1999
:
Maximum-likelihood estimation of forecast and observation error covariance parameters. Part I: Methodology
.
Mon. Wea. Rev.
,
127
,
1822
1834
, https://doi.org/10.1175/1520-0493(1999)127<1822:MLEOFA>2.0.CO;2.
Dee
,
D. P.
,
S. E.
Cohn
,
A.
Dalcher
, and
M.
Ghil
,
1985
:
An efficient algorithm for estimating noise covariances in distributed systems
.
IEEE Trans. Autom. Control
,
30
,
1057
1065
, https://doi.org/10.1109/TAC.1985.1103837.
Dee
,
D. P.
,
G.
Gaspari
,
C.
Redder
,
L.
Rukhovets
, and
A. M.
da Silva
,
1999
:
Maximum-likelihood estimation of forecast and observation error covariance parameters. Part II: Applications
.
Mon. Wea. Rev.
,
127
,
1835
1849
, https://doi.org/10.1175/1520-0493(1999)127<1835:MLEOFA>2.0.CO;2.
DelSole
,
T.
, and
X.
Yang
,
2010
:
State and parameter estimation in stochastic dynamical models
.
Physica D
,
239
,
1781
1788
, https://doi.org/10.1016/j.physd.2010.06.001.
Dempster
,
A. P.
,
N. M.
Laird
, and
D. B.
Rubin
,
1977
:
Maximum likelihood from incomplete data via the EM algorithm
.
J. Roy. Stat. Soc.
,
39B
,
1
22
, https://doi.org/10.1111/j.2517-6161.1977.tb01600.x.
Desroziers
,
G.
, and
S.
Ivanov
,
2001
:
Diagnosis and adaptive tuning of observation-error parameters in a variational assimilation
.
Quart. J. Roy. Meteor. Soc.
,
127
,
1433
1452
, https://doi.org/10.1002/qj.49712757417.
Desroziers
,
G.
,
L.
Berre
,
B.
Chapnik
, and
P.
Poli
,
2005
:
Diagnosis of observation, background and analysis-error statistics in observation space
.
Quart. J. Roy. Meteor. Soc.
,
131
,
3385
3396
, https://doi.org/10.1256/qj.05.108.
Dreano
,
D.
,
P.
Tandeo
,
M.
Pulido
,
T.
Chonavel
,
B. A.
It-El-Fquih
, and
I.
Hoteit
,
2017
:
Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm
.
Quart. J. Roy. Meteor. Soc.
,
143
,
1877
1885
, https://doi.org/10.1002/qj.3048.
Duník
,
J.
,
O.
Straka
,
O.
Kost
, and
J.
Havlík
,
2017
:
Noise covariance matrices in state-space models: A survey and comparison of estimation methods-Part I
.
Int. J. Adapt. Control Signal Process.
,
31
,
1505
1543
, https://doi.org/10.1002/acs.2783.
El Gharamti
,
M.
,
2018
:
Enhanced adaptive inflation algorithm for ensemble filters
.
Mon. Wea. Rev.
,
146
,
623
640
, https://doi.org/10.1175/MWR-D-17-0187.1.
Evensen
,
G.
,
2009
:
Data Assimilation: The Ensemble Kalman Filter
.
Springer Science and Business Media
,
332
pp.
Frei
,
M.
, and
H. R.
Künsch
,
2012
:
Sequential state and observation noise covariance estimation using combined ensemble Kalman and particle filters
.
Mon. Wea. Rev.
,
140
,
1476
1495
, https://doi.org/10.1175/MWR-D-10-05088.1.
Fu
,
L.-L.
,
I.
Fukumori
, and
R. N.
Miller
,
1993
:
Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. Part II: A linear, wind-driven model
.
J. Phys. Oceanogr.
,
23
,
2162
2181
, https://doi.org/10.1175/1520-0485(1993)023<2162:FDMTTG>2.0.CO;2.
Ghil
,
M.
, and
P.
Malanotte-Rizzoli
,
1991
:
Data assimilation in meteorology and oceanography. Advances in Geophysics, Vol. 33, Academic Press, 141–266
, https://doi.org/10.1016/S0065-2687(08)60442-2.
Guillet
,
O.
,
A. T.
Weaver
,
X.
Vasseur
,
Y.
Michel
,
S.
Gratton
, and
S.
Gürol
,
2019
:
Modelling spatially correlated observation errors in variational data assimilation using a diffusion operator on an unstructured mesh
.
Quart. J. Roy. Meteor. Soc.
,
145
,
1947
1967
, https://doi.org/10.1002/qj.3537.
Harlim
,
J.
,
2018
:
Ensemble Kalman filters. Data-Driven Computational Methods: Parameter and Operator Estimations, J. Harlim, Ed., Cambridge University Press, 31–59
.
Harlim
,
J.
,
A.
Mahdi
, and
A. J.
Majda
,
2014
:
An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models
.
J. Comput. Phys.
,
257
,
782
812
, https://doi.org/10.1016/j.jcp.2013.10.025.
Hollingsworth
,
A.
, and
P.
Lönnberg
,
1986
:
The statistical structure of short-range forecast errors as determined from radiosonde data. Part I: The wind field
.
Tellus
,
38A
,
111
136
, https://doi.org/10.3402/tellusa.v38i2.11707.
Hotta
,
D.
,
E.
Kalnay
,
Y.
Ota
, and
T.
Miyoshi
,
2017
:
EFSR: Ensemble forecast sensitivity to observation error covariance
.
Mon. Wea. Rev.
,
145
,
5015
5031
, https://doi.org/10.1175/MWR-D-17-0122.1.
Houtekamer
,
P. L.
, and
F.
Zhang
,
2016
:
Review of the ensemble Kalman filter for atmospheric data assimilation
.
Mon. Wea. Rev.
,
144
,
4489
4532
, https://doi.org/10.1175/MWR-D-15-0440.1.
Houtekamer
,
P. L.
,
H. L.
Mitchell
, and
X.
Deng
,
2009
:
Model error representation in an operational ensemble Kalman filter
.
Mon. Wea. Rev.
,
137
,
2126
2143
, https://doi.org/10.1175/2008MWR2737.1.
Ide
,
K.
,
P.
Courtier
,
M.
Ghil
, and
A. C.
Lorenc
,
1997
:
Unified notation for data assimilation: Operational, sequential and variational
.
J. Meteor. Soc. Japan
,
75
,
181
189
, https://doi.org/10.2151/jmsj1965.75.1B_181.
Janjić
,
T.
, and et al
,
2018
:
On the representation error in data assimilation
.
Quart. J. Roy. Meteor. Soc.
,
144
,
1257
1278
, https://doi.org/10.1002/qj.3130.
Jazwinski
,
A. H.
,
1970
:
Stochastic Processes and Filtering Theory
.
Academic Press
,
376
pp.
Kantas
,
N.
,
A.
Doucet
,
S. S.
Singh
,
J.
Maciejowski
, and
N.
Chopin
,
2015
:
On particle methods for parameter estimation in state-space models
.
Stat. Sci.
,
30
,
328
351
, https://doi.org/10.1214/14-STS511.
Katzfuss
,
M.
,
J. R.
Stroud
, and
C. K.
Wikle
,
2020
:
Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models
.
J. Amer. Stat. Assoc.
,
115
,
866
885
, https://doi.org/10.1080/01621459.2019.1592753.
Kotsuki
,
S.
,
T.
Miyoshi
,
K.
Terasaki
,
G.-Y.
Lien
, and
E.
Kalnay
,
2017a
:
Assimilating the global satellite mapping of precipitation data with the Nonhydrostatic Icosahedral Atmospheric Model (NICAM)
.
J. Geophys. Res. Atmos.
,
122
,
631
650
, https://doi.org/10.1002/2016jd025355.
Kotsuki
,
S.
,
Y.
Ota
, and
T.
Miyoshi
,
2017b
:
Adaptive covariance relaxation methods for ensemble data assimilation: Experiments in the real atmosphere
.
Quart. J. Roy. Meteor. Soc.
,
143
,
2001
2015
, https://doi.org/10.1002/qj.3060.
Leith
,
C. E.
,
1978
:
Objective methods for weather prediction
.
Annu. Rev. Fluid Mech.
,
10
,
107
128
, https://doi.org/10.1146/annurev.fl.10.010178.000543.
Li
,
H.
,
E.
Kalnay
, and
T.
Miyoshi
,
2009
:
Simultaneous estimation of covariance inflation and observation errors within an ensemble Kalman filter
.
Quart. J. Roy. Meteor. Soc.
,
135
,
523
533
, https://doi.org/10.1002/qj.371.
Liang
,
X.
,
X.
Zheng
,
S.
Zhang
,
G.
Wu
,
Y.
Dai
, and
Y.
Li
,
2012
:
Maximum likelihood estimation of inflation factors on error covariance matrices for ensemble Kalman filter assimilation
.
Quart. J. Roy. Meteor. Soc.
,
138
,
263
273
, https://doi.org/10.1002/qj.912.
Liu, J., and M. West, 2001: Combined parameter and state estimation in simulation-based filtering. Sequential Monte Carlo Methods in Practice, A. Doucet, N. Freitas, and N. Gordon, Eds., Springer, 197–223.
Liu, Y., J.-M. Haussaire, M. Bocquet, Y. Roustan, O. Saunier, and A. Mathieu, 2017: Uncertainty quantification of pollutant source retrieval: Comparison of Bayesian methods with application to the Chernobyl and Fukushima Daiichi accidental releases of radionuclides. Quart. J. Roy. Meteor. Soc., 143, 2886–2901, https://doi.org/10.1002/qj.3138.
Mehra, R. K., 1970: On the identification of variances and adaptive Kalman filtering. IEEE Trans. Autom. Control, 15, 175–184, https://doi.org/10.1109/TAC.1970.1099422.
Mehra, R. K., 1972: Approaches to adaptive filtering. IEEE Trans. Autom. Control, 17, 693–698, https://doi.org/10.1109/TAC.1972.1100100.
Ménard, R., 2016: Error covariance estimation methods based on analysis residuals: Theoretical foundation and convergence properties derived from simplified observation networks. Quart. J. Roy. Meteor. Soc., 142, 257–273, https://doi.org/10.1002/qj.2650.
Menemenlis, D., and M. Chechelnitsky, 2000: Error estimates for an ocean general circulation model from altimeter and acoustic tomography data. Mon. Wea. Rev., 128, 763–778, https://doi.org/10.1175/1520-0493(2000)128<0763:EEFAOG>2.0.CO;2.
Ménétrier, B., and T. Auligné, 2015: Optimized localization and hybridization to filter ensemble-based covariances. Mon. Wea. Rev., 143, 3931–3947, https://doi.org/10.1175/MWR-D-15-0057.1.
Merchant, C. J., S. Saux-Picart, and J. Waller, 2020: Bias correction and covariance parameters for optimal estimation by exploiting matched in-situ references. Remote Sens. Environ., 237, 111590, https://doi.org/10.1016/j.rse.2019.111590.
Mitchell, H. L., and P. L. Houtekamer, 2000: An adaptive ensemble Kalman filter. Mon. Wea. Rev., 128, 416–433, https://doi.org/10.1175/1520-0493(2000)128<0416:AAEKF>2.0.CO;2.
Mitchell, L., and A. Carrassi, 2015: Accounting for model error due to unresolved scales within ensemble Kalman filtering. Quart. J. Roy. Meteor. Soc., 141, 1417–1428, https://doi.org/10.1002/qj.2451.
Miyoshi, T., 2011: The Gaussian approach to adaptive covariance inflation and its implementation with the local ensemble transform Kalman filter. Mon. Wea. Rev., 139, 1519–1535, https://doi.org/10.1175/2010MWR3570.1.
Miyoshi, T., Y. Sato, and T. Kadowaki, 2010: Ensemble Kalman filter and 4D-Var intercomparison with the Japanese operational global analysis and prediction system. Mon. Wea. Rev., 138, 2846–2866, https://doi.org/10.1175/2010MWR3209.1.
Miyoshi, T., E. Kalnay, and H. Li, 2013: Estimating and including observation-error correlations in data assimilation. Inverse Probl. Sci. Eng., 21, 387–398, https://doi.org/10.1080/17415977.2012.712527.
Pham, D. T., J. Verron, and M. C. Roubaud, 1998: A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Mar. Syst., 16, 323–340, https://doi.org/10.1016/S0924-7963(97)00109-7.
Pulido, M., P. Tandeo, M. Bocquet, A. Carrassi, and M. Lucini, 2018: Stochastic parameterization identification using ensemble Kalman filtering combined with maximum likelihood methods. Tellus, 70A, 1–17, https://doi.org/10.1080/16000870.2018.1442099.
Purser, R. J., and D. F. Parrish, 2003: A Bayesian technique for estimating continuously varying statistical parameters of a variational assimilation. Meteor. Atmos. Phys., 82, 209–226, https://doi.org/10.1007/s00703-001-0583-x.
Raanes, P. N., M. Bocquet, and A. Carrassi, 2019: Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures. Quart. J. Roy. Meteor. Soc., 145, 53–75, https://doi.org/10.1002/qj.3386.
Rodwell, M. J., and T. N. Palmer, 2007: Using numerical weather prediction to assess climate models. Quart. J. Roy. Meteor. Soc., 133, 129–146, https://doi.org/10.1002/qj.23.
Ruiz, J. J., M. Pulido, and T. Miyoshi, 2013a: Estimating model parameters with ensemble-based data assimilation: A review. J. Meteor. Soc. Japan, 91, 79–99, https://doi.org/10.2151/jmsj.2013-201.
Ruiz, J. J., M. Pulido, and T. Miyoshi, 2013b: Estimating model parameters with ensemble-based data assimilation: Parameter covariance treatment. J. Meteor. Soc. Japan, 91, 453–469, https://doi.org/10.2151/jmsj.2013-403.
Rutherford, I. D., 1972: Data assimilation by statistical interpolation of forecast error fields. J. Atmos. Sci., 29, 809–815, https://doi.org/10.1175/1520-0469(1972)029<0809:DABSIO>2.0.CO;2.
Satterfield, E. A., D. Hodyss, D. D. Kuhl, and C. H. Bishop, 2018: Observation-informed generalized hybrid error covariance models. Mon. Wea. Rev., 146, 3605–3622, https://doi.org/10.1175/MWR-D-18-0016.1.
Scheffler, G., J. Ruiz, and M. Pulido, 2019: Inference of stochastic parametrizations for model error treatment using nested ensemble Kalman filters. Quart. J. Roy. Meteor. Soc., 145, 2028–2045, https://doi.org/10.1002/qj.3542.
Schmidt, S. F., 1966: Applications of state space methods to navigation problems. Adv. Control Syst., 3, 293–340, https://doi.org/10.1016/B978-1-4831-6716-9.50011-4.
Shumway, R. H., and D. S. Stoffer, 1982: An approach to time series smoothing and forecasting using the EM algorithm. J. Time Ser. Anal., 3, 253–264, https://doi.org/10.1111/j.1467-9892.1982.tb00349.x.
Solonen, A., J. Hakkarainen, A. Ilin, M. Abbas, and A. Bibov, 2014: Estimating model error covariance matrix parameters in extended Kalman filtering. Nonlinear Processes Geophys., 21, 919–927, https://doi.org/10.5194/npg-21-919-2014.
Stroud, J. R., and T. Bengtsson, 2007: Sequential state and variance estimation within the ensemble Kalman filter. Mon. Wea. Rev., 135, 3194–3208, https://doi.org/10.1175/MWR3460.1.
Stroud, J. R., M. Katzfuss, and C. K. Wikle, 2018: A Bayesian adaptive ensemble Kalman filter for sequential state and parameter estimation. Mon. Wea. Rev., 146, 373–386, https://doi.org/10.1175/MWR-D-16-0427.1.
Tandeo, P., M. Pulido, and F. Lott, 2015: Offline parameter estimation using EnKF and maximum likelihood error covariance estimates: Application to a subgrid-scale orography parametrization. Quart. J. Roy. Meteor. Soc., 141, 383–395, https://doi.org/10.1002/qj.2357.
Todling, R., 2015: A lag-1 smoother approach to system-error estimation: Sequential method. Quart. J. Roy. Meteor. Soc., 141, 1502–1513, https://doi.org/10.1002/qj.2460.
Ueno, G., and N. Nakamura, 2014: Iterative algorithm for maximum-likelihood estimation of the observation-error covariance matrix for ensemble-based filters. Quart. J. Roy. Meteor. Soc., 140, 295–315, https://doi.org/10.1002/qj.2134.
Ueno, G., and N. Nakamura, 2016: Bayesian estimation of the observation-error covariance matrix in ensemble-based filters. Quart. J. Roy. Meteor. Soc., 142, 2055–2080, https://doi.org/10.1002/qj.2803.
Ueno, G., T. Higuchi, T. Kagimoto, and N. Hirose, 2010: Maximum likelihood estimation of error covariances in ensemble-based filters and its application to a coupled atmosphere-ocean model. Quart. J. Roy. Meteor. Soc., 136, 1316–1343, https://doi.org/10.1002/qj.654.
Wahba, G., and J. Wendelberger, 1980: Some new mathematical methods for variational objective analysis using splines and cross validation. Mon. Wea. Rev., 108, 1122–1143, https://doi.org/10.1175/1520-0493(1980)108<1122:SNMMFV>2.0.CO;2.
Waller, J. A., S. L. Dance, and N. K. Nichols, 2016: Theoretical insight into diagnosing observation error correlations using observation-minus-background and observation-minus-analysis statistics. Quart. J. Roy. Meteor. Soc., 142, 418–431, https://doi.org/10.1002/qj.2661.
Waller, J. A., S. L. Dance, and N. K. Nichols, 2017: On diagnosing observation-error statistics with local ensemble data assimilation. Quart. J. Roy. Meteor. Soc., 143, 2677–2686, https://doi.org/10.1002/qj.3117.
Wang, X., and C. H. Bishop, 2003: A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. J. Atmos. Sci., 60, 1140–1158, https://doi.org/10.1175/1520-0469(2003)060<1140:ACOBAE>2.0.CO;2.
Weston, P. P., W. Bell, and J. R. Eyre, 2014: Accounting for correlated error in the assimilation of high-resolution sounder data. Quart. J. Roy. Meteor. Soc., 140, 2420–2429, https://doi.org/10.1002/qj.2306.
Whitaker, J. S., and T. M. Hamill, 2012: Evaluating methods to account for system errors in ensemble data assimilation. Mon. Wea. Rev., 140, 3078–3089, https://doi.org/10.1175/MWR-D-11-00276.1.
Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP global forecast system. Mon. Wea. Rev., 136, 463–482, https://doi.org/10.1175/2007MWR2018.1.
Winiarek, V., M. Bocquet, O. Saunier, and A. Mathieu, 2012: Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant. J. Geophys. Res., 117, D05122, https://doi.org/10.1029/2011JD016932.
Winiarek, V., M. Bocquet, N. Duhanyan, Y. Roustan, O. Saunier, and A. Mathieu, 2014: Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations. Atmos. Environ., 82, 268–279, https://doi.org/10.1016/j.atmosenv.2013.10.017.
Wu, C. F. J., 1983: On the convergence properties of the EM algorithm. Ann. Stat., 11, 95–103, https://doi.org/10.1214/aos/1176346060.
Yang, Y., and E. Mémin, 2019: Estimation of physical parameters under location uncertainty using an ensemble-expectation-maximization algorithm. Quart. J. Roy. Meteor. Soc., 145, 418–433, https://doi.org/10.1002/qj.3438.
Ying, Y., and F. Zhang, 2015: An adaptive covariance relaxation method for ensemble data assimilation. Quart. J. Roy. Meteor. Soc., 141, 2898–2906, https://doi.org/10.1002/qj.2576.
Zebiak, S. E., and M. A. Cane, 1987: A model El Niño–Southern Oscillation. Mon. Wea. Rev., 115, 2262–2278, https://doi.org/10.1175/1520-0493(1987)115<2262:AMENO>2.0.CO;2.
Zhang, F., C. Snyder, and J. Sun, 2004: Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter. Mon. Wea. Rev., 132, 1238–1253, https://doi.org/10.1175/1520-0493(2004)132<1238:IOIEAO>2.0.CO;2.
Zhen, Y., and J. Harlim, 2015: Adaptive error covariances estimation methods for ensemble Kalman filters. J. Comput. Phys., 294, 619–638, https://doi.org/10.1016/j.jcp.2015.03.061.
Footnotes

1. Other notations are also used in practice.

2. Some of the methods presented in section 3 also use the Bayesian philosophy; for instance, they assume an a priori distribution for the multiplicative inflation parameter λ (Anderson 2009; El Gharamti 2018).