Abstract

Two ensemble formulations of the four-dimensional variational (4DVar) data assimilation technique are examined for a low-dimensional dynamical system. The first method, denoted E4DVar, uses tangent linear and adjoint model operators to minimize a cost function in the same manner as the traditional 4DVar data assimilation system. The second method, denoted 4DEnVar, uses an ensemble of nonlinear model trajectories to replace the function of linearized models in 4DVar, thus improving the parallelization of the data assimilation. Background errors for each algorithm are represented using a hybrid error covariance, which includes climatological errors as well as ensemble-estimated errors from an ensemble Kalman filter (EnKF). Numerical experiments performed over a range of scenarios suggest that both methods provide similar analysis accuracy for dense observation networks, and in perfect model experiments with large ensembles. Nevertheless, E4DVar has clear benefits over 4DEnVar when substantial covariance localization is required to treat sampling error. The greatest advantage of the tangent-linear approach is that it implicitly propagates a localized, full-rank ensemble covariance in time, thus avoiding the need to localize a time-dependent ensemble covariance. The tangent linear and adjoint model operators also provide a means of evolving flow-dependent information from the climate-based error component, which is found to be beneficial for treating model error. Challenges that need to be overcome before adopting a pure ensemble framework are illustrated through experiments estimating time covariances with four-dimensional ensembles and comparing results with those estimated with a tangent linear model.

1. Introduction

To simulate real weather events with an acceptable degree of accuracy, numerical weather prediction models rely on carefully specified initial conditions that consider all past and present information leading up to the initialization time. Data assimilation provides a means of initializing models with the most probable state, given prior statistics about the model and imperfect observations. For high-dimensional systems, like atmospheric models, these methods typically assume Gaussian errors for the observations and prior probability density, and solve a linear system of equations for the posterior mean and covariance. Data assimilation methods can take the form of filters, which consider information up to the time of the analysis, or smoothers, which consider both past and future information relative to the analysis time. The most commonly used filters in atmospheric science are the ensemble Kalman filter (EnKF; Evensen 1994; Houtekamer and Mitchell 1998) and 3D variational data assimilation methods (3DVar; Le Dimet and Talagrand 1986), and the most commonly used smoother is the 4D variational method (Thépaut and Courtier 1991). Ensemble-based approaches, like the EnKF, provide a sample representation of the posterior error covariance after each data assimilation cycle, which can be used for generating probabilistic forecasts and prior error statistics for future data assimilation cycles. On the other hand, variational methods provide an efficient means of assimilating large numbers of observations; the solution, however, is purely deterministic in that posterior covariances are not estimated. The framework for variational systems already exists for most models used operationally and for research; examples include thoroughly tested data processing and quality control procedures, efficient preconditioning methods, and physical balance constraints for the solution.

Another benefit of variational data assimilation is the straightforward implementation of a hybrid prior error covariance. Hybrid data assimilation methods combine flow-dependent, ensemble covariance with the stationary, climate-based covariances that are typically used in 3DVar and 4DVar. Hamill and Snyder (2000) found that mixing ensemble and static covariance produces better results than using the two components separately during data assimilation: the ensemble provides nonstationary error statistics, while the static covariance removes some of the filter sensitivity to ensemble size. Ensemble filters that use hybrid covariance are also less sensitive to model error, which introduces additional bias in the ensemble estimate (Etherton and Bishop 2004). Using a low-dimensional model, Bishop and Satterfield (2013) provide a mathematical justification for hybrid data assimilation by showing that a linear combination of ensemble and static covariance provides the optimal prior error covariance, given a distribution of ensemble variances and the likelihood of variances given the true conditional error variance.

Among the various types of hybrid data assimilation methods, recent studies find the best performance, in terms of analysis and forecast error reduction, from systems that incorporate ensemble information in variational smoothers (Tian et al. 2008, 2009; Zhang et al. 2009; Buehner et al. 2010b; Bishop and Hodyss 2011; Zhang and Zhang 2012; Zhang et al. 2013; Clayton et al. 2013; Kuhl et al. 2013; Poterjoy and Zhang 2014; Fairbairn et al. 2014; Wang and Lei 2014). One strategy is to introduce ensemble perturbations into pre-existing 4DVar systems, which will be denoted ensemble-4DVar (E4DVar) in this manuscript. A second strategy is to use a 4D ensemble to replace the tangent linear and adjoint model operators in 4DVar (Liu et al. 2008), which will be denoted 4D-ensemble-Var (4DEnVar). 4DEnVar requires the additional cost of running an ensemble forecast through the time window from which observations are collected, but avoids the coding and maintenance of a tangent linear model. More importantly, 4DEnVar does not require running the tangent linear model and its adjoint during each iteration of the cost function minimization, thus increasing the parallel capability of this algorithm over E4DVar. In this study, both E4DVar and 4DEnVar are implemented using the approach introduced in Zhang et al. (2009): from an ensemble forecast, the analysis state is found by minimizing the hybrid variational cost function, while an EnKF updates the ensemble perturbations around the hybrid solution.

Buehner et al. (2010a,b) provided the first comparison of E4DVar and 4DEnVar with other commonly used data assimilation methods for operational global models. Using a low-dimensional model, Fairbairn et al. (2014) performed a more systematic comparison of these methods, and investigated their sensitivity to observation density, ensemble size, model errors, and other system parameters. Both Buehner et al. and Fairbairn et al. applied a one-way coupling strategy in which ensemble perturbations are introduced into the variational system from a cycling EnKF, but the resulting variational solution is not fed back into the EnKF component. This approach facilitates a comparison of the two data assimilation methods using identical ensemble covariance, but does not allow the variational solution to affect the ensemble forecasts between cycles. In both of these studies, the authors applied either an ensemble-estimated covariance or a static covariance (no hybrid covariance) during data assimilation, and found E4DVar to provide more accurate results than 4DEnVar. Similar to Fairbairn et al. (2014), the current study applies a low-dimensional model to examine several aspects of E4DVar and 4DEnVar data assimilation systems. This study differs from previous studies that directly compare E4DVar and 4DEnVar in that it focuses primarily on hybrid implementations of the two systems, and uses a two-way coupling between EnKF and variational components, thus allowing E4DVar and 4DEnVar to evolve different ensemble forecast error covariance over time.

Hybrid data assimilation systems have known benefits for atmospheric models, but contain a number of user-specified parameters that must be tuned for optimal performance. These parameters include ensemble-specific values, such as ensemble size, covariance localization radius of influence, and covariance inflation coefficients, which are required for standard implementations of ensemble filters and smoothers. They also require the specification of a weighting coefficient for determining the amount of ensemble and stationary covariance used for the prior error statistics. To investigate the sensitivity of E4DVar and 4DEnVar to these tuning parameters, the two methods are compared over a range of configurations using the dynamical system introduced in Lorenz (1996, hereafter L96). The L96 model mimics the behavior of an atmospheric quantity over a latitudinal circle, but has a state dimension several orders of magnitude smaller than a typical atmospheric model, which allows for large numbers of assimilation experiments to be performed. Three main goals of these experiments are as follows: 1) investigate the role of the stationary covariance in hybrid data assimilation methods in the presence of preexisting ensemble covariance tuning parameters, 2) compare E4DVar with 4DEnVar systematically over a variety of data assimilation scenarios, and 3) examine the challenges that need to be overcome before replacing linearized models in 4DVar with a 4D ensemble.

The organization of this manuscript is as follows. We introduce the two data assimilation methods and forecast model in section 2. In section 3 we describe the implicit role of 4D covariance in variational smoothers, and show that differences in E4DVar and 4DEnVar estimations of time covariances determine their relative performance during data assimilation. Section 4 describes results from cycling data assimilation experiments, and section 5 provides a summary and conclusions.

2. Hybrid four-dimensional data assimilation

In this section, we present the mathematical framework for hybrid E4DVar and 4DEnVar, along with the dynamical system adopted for applying the two methods. We use lowercase boldface roman font to indicate vectors, uppercase boldface arial font to indicate matrices, and italic font to indicate scalars and nonlinear operators.

a. Hybrid E4DVar

The 4DVar method seeks a solution that minimizes the misfit of a control variable to the background state x_b at the beginning of the assimilation window and observations y_t at times t = 0, 1, …, T. For the incremental form of 4DVar, the minimization is carried out with respect to increments δx_0 from x_b (Courtier et al. 1994). The function to minimize can be expressed as the sum of background (J_b) and observation (J_o) cost functions:

J(\delta\mathbf{x}_0) = \frac{1}{2}\,\delta\mathbf{x}_0^{\mathrm{T}}\mathbf{B}^{-1}\delta\mathbf{x}_0 + \frac{1}{2}\sum_{t=0}^{T}(\mathbf{H}_t\mathbf{M}_t\,\delta\mathbf{x}_0 - \mathbf{d}_t)^{\mathrm{T}}\mathbf{R}_t^{-1}(\mathbf{H}_t\mathbf{M}_t\,\delta\mathbf{x}_0 - \mathbf{d}_t), \qquad (1)

where B is the background error covariance, R_t is the observation error covariance at time t, M_t is the tangent linear model operator from time 0 to t, and H_t is the tangent linear observation operator. The vector d_t contains the innovations at each time along a model trajectory from x_b and is given by

\mathbf{d}_t = \mathbf{y}_t - H_t[M_t(\mathbf{x}_b)], \qquad (2)

where M_t and H_t denote the (possibly nonlinear) forecast model and observation operators, respectively.

The 4DVar cost function can be preconditioned for faster minimization by letting δx_0 = B^{1/2} v, where B^{1/2} is a square root of the background error covariance matrix: B = B^{1/2}(B^{1/2})^T (Lorenc et al. 2000). The cost function is then minimized with respect to the new control variable v:

J(\mathbf{v}) = \frac{1}{2}\,\mathbf{v}^{\mathrm{T}}\mathbf{v} + \frac{1}{2}\sum_{t=0}^{T}(\mathbf{H}_t\mathbf{M}_t\mathbf{B}^{1/2}\mathbf{v} - \mathbf{d}_t)^{\mathrm{T}}\mathbf{R}_t^{-1}(\mathbf{H}_t\mathbf{M}_t\mathbf{B}^{1/2}\mathbf{v} - \mathbf{d}_t). \qquad (3)

By taking the gradient of (3) with respect to v and setting ∇J(v) = 0, the set of solutions that minimize the cost function must satisfy the system of linear equations:

\left[\mathbf{I} + \sum_{t=0}^{T}(\mathbf{B}^{1/2})^{\mathrm{T}}\mathbf{M}_t^{\mathrm{T}}\mathbf{H}_t^{\mathrm{T}}\mathbf{R}_t^{-1}\mathbf{H}_t\mathbf{M}_t\mathbf{B}^{1/2}\right]\mathbf{v} = \sum_{t=0}^{T}(\mathbf{B}^{1/2})^{\mathrm{T}}\mathbf{M}_t^{\mathrm{T}}\mathbf{H}_t^{\mathrm{T}}\mathbf{R}_t^{-1}\mathbf{d}_t. \qquad (4)

In (4), M_t^T and H_t^T are the adjoint versions of the linearized model and observation operators, respectively, and the matrix on the left side of (4) that multiplies v is the Hessian of J. The control variable transformation forces the eigenvalues of the Hessian matrix to be greater than or equal to 1, owing to the fact that the covariance of v has an inverse equal to the identity matrix. The resulting Hessian has a smaller condition number, which allows numerical solvers to find a solution to (4) in fewer iterations than would be required otherwise (Lorenc 1997). In this study, we apply the conjugate gradient method to solve (4) for v, and then multiply the solution by B^{1/2} to get the analysis increment.
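The preconditioned solve described above can be sketched with a toy linear system. The example below is purely illustrative: it builds small random stand-ins for B, M_t, and H_t with a single observation time (not the operators used in this study), forms the Hessian and right-hand side of (4), and solves for v with a hand-written conjugate gradient (NumPy assumed).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 4          # toy state and observation dimensions

# Illustrative operators: SPD background covariance with a valid square
# root, linear model M, linear observation operator H, unit obs errors.
A_ = rng.standard_normal((n, n))
B = A_ @ A_.T + n * np.eye(n)
B_sqrt = np.linalg.cholesky(B)
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # tangent linear model
H = rng.standard_normal((p, n))                     # tangent linear obs operator
R_inv = np.eye(p)                                   # inverse obs-error covariance
d = rng.standard_normal(p)                          # innovation vector

G = H @ M @ B_sqrt                  # composite operator acting on v
hess = np.eye(n) + G.T @ R_inv @ G  # Hessian of the preconditioned cost
rhs = G.T @ R_inv @ d               # right-hand side of the linear system

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain conjugate gradient for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - A @ x
    p_dir = r.copy()
    for _ in range(max_iter):
        alpha = (r @ r) / (p_dir @ A @ p_dir)
        x += alpha * p_dir
        r_new = r - alpha * A @ p_dir
        if np.linalg.norm(r_new) < tol:
            break
        p_dir = r_new + (r_new @ r_new) / (r @ r) * p_dir
        r = r_new
    return x

v = conjugate_gradient(hess, rhs)
dx0 = B_sqrt @ v                    # analysis increment in state space
```

Because the Hessian is the identity plus a positive semidefinite term, its eigenvalues are bounded below by 1, which is what keeps the conjugate gradient iteration well conditioned.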

Ensemble information is introduced into the 4DVar cost function by separating increments from the background state into two components:

\delta\mathbf{x}_0 = \sqrt{\beta}\,\delta\mathbf{x}_0^{s} + \sqrt{1-\beta}\,\delta\mathbf{x}_0^{e}, \qquad (5)

where δx_0^s is the increment resulting from the static background covariance and δx_0^e is the increment resulting from the ensemble-estimated covariance. The coefficient β can range from 0 to 1 and controls the impact of the two increments on the final analysis. This form of hybrid data assimilation is equivalent to replacing the background error covariance in (1) with β B_s + (1 − β) P_e, where P_e is the ensemble-estimated error covariance and B_s is a static covariance that does not change with time (Wang et al. 2007).

Using a substitution similar to the one described above, let δx_0^s = B_s^{1/2} v_s and δx_0^e = (ρ ∘ P_e)^{1/2} v_e, where ∘ indicates elementwise multiplication and ρ is the correlation matrix used for localizing the ensemble covariance. As described in Buehner (2005), a square root of the localized ensemble covariance is given by (ρ ∘ P_e)^{1/2} = [Z_1, Z_2, …, Z_N], where Z_n = diag(x'_n) ρ^{1/2}/(N − 1)^{1/2}; Wang et al. (2007) show that this method is mathematically equivalent to the approach described in Lorenc (2003). Here, diag(x'_n) is a matrix with the nth ensemble perturbation vector in the diagonal elements, and 0 in the nondiagonal elements. Equation (5) can then be written as

\delta\mathbf{x}_0 = \left[\sqrt{\beta}\,\mathbf{B}_s^{1/2},\ \sqrt{1-\beta}\,\mathbf{Z}_1,\ \ldots,\ \sqrt{1-\beta}\,\mathbf{Z}_N\right]\left[\mathbf{v}_s^{\mathrm{T}},\ \mathbf{v}_1^{\mathrm{T}},\ \ldots,\ \mathbf{v}_N^{\mathrm{T}}\right]^{\mathrm{T}} = \mathbf{G}\mathbf{w}, \qquad (6)

where the hybrid increment is expressed as a product of the concatenated square root matrices G and control variables w. From (3), the E4DVar cost function is written as

J(\mathbf{w}) = \frac{1}{2}\,\mathbf{w}^{\mathrm{T}}\mathbf{w} + \frac{1}{2}\sum_{t=0}^{T}(\mathbf{H}_t\mathbf{M}_t\mathbf{G}\mathbf{w} - \mathbf{d}_t)^{\mathrm{T}}\mathbf{R}_t^{-1}(\mathbf{H}_t\mathbf{M}_t\mathbf{G}\mathbf{w} - \mathbf{d}_t). \qquad (7)

Likewise, the control variables that minimize the cost function are found by substituting G for B^{1/2} and w for v in (4) and solving for w.

b. Hybrid 4DEnVar

In 4DEnVar, the role of the tangent linear and adjoint model in (4) is replaced by a four-dimensional ensemble (Liu et al. 2008). A hybrid 4DEnVar system can be formulated using the substitution described above in (6). Recall that E4DVar uses linear model and observation operators to approximate the transformation of δx_0 into observation space at time t:

\mathbf{H}_t\mathbf{M}_t\mathbf{G}\mathbf{w} = \sqrt{\beta}\,\mathbf{H}_t\mathbf{M}_t\mathbf{B}_s^{1/2}\mathbf{v}_s + \sqrt{1-\beta}\sum_{n=1}^{N}\mathbf{H}_t\mathbf{M}_t\mathbf{Z}_n\mathbf{v}_n. \qquad (8)

Using an ensemble forecast stored at each observation time in the assimilation window, the ensemble term on the right-hand side of (8) can be written as

\sum_{n=1}^{N}\mathbf{H}_t\mathbf{M}_t\mathbf{Z}_n\mathbf{v}_n \approx \sum_{n=1}^{N}\mathbf{H}_t\mathbf{Z}_n^{t}\mathbf{v}_n, \qquad (9)

where Z_n^t = diag(x'_{n,t}) ρ^{1/2}/(N − 1)^{1/2} for n = 1, …, N. Here, x'_{n,t} approximates the evolution of the nth ensemble perturbation from time 0 to t using the full nonlinear model. 4DEnVar requires a localization of the ensemble covariance at each observation time in the window, as opposed to E4DVar, which propagates a localized covariance forward implicitly using the tangent linear and adjoint models. The localization of time-dependent covariance introduces additional complexity in the 4DEnVar algorithm. To perform the localization, we adopt the strategy used in most previous studies, which is to use the same correlation matrix ρ at each time (Liu et al. 2009; Buehner et al. 2010b; Liu and Xiao 2013; Fairbairn et al. 2014).

To remove the tangent linear model operator from the climatological term in (8), M_t is assumed to be the identity matrix so that H_t M_t B_s^{1/2} v_s ≈ H_t B_s^{1/2} v_s; the same assumption is made in "first guess at the appropriate time" data assimilation, or 3DVar-FGAT (Fisher and Andersson 2001). Equation (8) can then be approximated by

\mathbf{H}_t\mathbf{M}_t\mathbf{G}\mathbf{w} \approx \mathbf{H}_t\mathbf{G}_t\mathbf{w}, \qquad (10)

where G_t = [√β B_s^{1/2}, √(1 − β) Z_1^t, …, √(1 − β) Z_N^t]. The 4DEnVar cost function is formed by substituting G_t for M_t B^{1/2} and w for v in (3):

J(\mathbf{w}) = \frac{1}{2}\,\mathbf{w}^{\mathrm{T}}\mathbf{w} + \frac{1}{2}\sum_{t=0}^{T}(\mathbf{H}_t\mathbf{G}_t\mathbf{w} - \mathbf{d}_t)^{\mathrm{T}}\mathbf{R}_t^{-1}(\mathbf{H}_t\mathbf{G}_t\mathbf{w} - \mathbf{d}_t). \qquad (11)

A similar substitution is made in (4) to solve for the w that minimizes (11). The analysis increments at each observation time in the assimilation window are then given by δx_t = G_t w. For most of this study, the increments and background state at the beginning of the window are added and propagated to the middle of the window to estimate the analysis. In section 4f, we provide a comparison of data assimilation experiments using the increments and background at the middle of the window to justify our choice for defining the analysis.

c. Coupled EnKF–variational data assimilation

The hybrid data assimilation methods discussed above use ensemble forecasts to provide flow-dependent statistics for the background state. In this study, we estimate the posterior mean using the E4DVar or 4DEnVar analysis, and adjust the ensemble perturbations at the middle of the assimilation window using an EnKF (Zhang et al. 2009). In doing so, the variational component assimilates observations at their correct times from the beginning to the end of the window, while the EnKF uses ensemble information at the middle of the window and assumes all observations are valid at the same time. The hybrid analysis at the beginning of the window is propagated to the middle of the window with the nonlinear model, and added to the posterior perturbations to initialize the ensemble forecast for the next data assimilation cycle.

d. Treatment of sampling errors

The use of a small ensemble size relative to the state dimension, along with model error, nonlinearity, and other systematic errors in the data assimilation algorithm, can introduce bias in the estimation of the posterior mean and covariance. To treat noise in the ensemble error approximation, the prior ensemble covariance is localized using a Gaussian correlation function, with a tunable length scale parameter, or radius of influence (ROI). The localization function is applied using the same ROI in the EnKF and variational steps of the two-way coupled data assimilation systems examined in this study. The posterior covariance is also inflated after each cycle using a covariance relaxation method (Zhang et al. 2004), which replaces the posterior ensemble perturbations with a linear combination of posterior and prior perturbations:

\mathbf{x}'_a \leftarrow (1-\alpha)\,\mathbf{x}'_a + \alpha\,\mathbf{x}'_b. \qquad (12)

The α in (12) is called the “relaxation coefficient” and ranges from 0 to 1, where 0 implies no inflation.
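A minimal sketch of the relaxation step in (12), assuming NumPy and illustrative ensemble dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 10, 20                                 # state dimension, ensemble size
alpha = 0.5                                   # relaxation coefficient

x_prior = rng.standard_normal((n, N))         # prior ensemble
x_post = 0.4 * rng.standard_normal((n, N))    # posterior ensemble (reduced spread)

def relax_to_prior(prior, post, alpha):
    """Blend posterior and prior perturbations (Zhang et al. 2004)."""
    prior_pert = prior - prior.mean(axis=1, keepdims=True)
    post_mean = post.mean(axis=1, keepdims=True)
    post_pert = post - post_mean
    # Perturbations are relaxed toward the prior; the mean is unchanged.
    return post_mean + (1 - alpha) * post_pert + alpha * prior_pert

inflated = relax_to_prior(x_prior, x_post, alpha)
```

Setting α = 0 returns the posterior perturbations unchanged, consistent with "no inflation," while α = 1 restores the prior spread entirely.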

e. Forecast model

We use the L96 model to examine several aspects of the two data assimilation systems and explore the sensitivity of each method to tunable parameters. The model consists of K variables X_j for j = 1, …, K, which are equally spaced on a periodic domain. These variables are governed by the following set of equations:

\frac{dX_j}{dt} = (X_{j+1} - X_{j-2})X_{j-1} - X_j + F, \qquad j = 1, \ldots, K, \qquad (13)

where the indices are cyclic so that X_{j+K} = X_j. The equations are integrated forward numerically using the fourth-order Runge–Kutta method with a time step of 0.005 time units (36 min). This dynamical system contains a quadratic advection-like term, a linear dissipation term, and a constant forcing term. It exhibits chaotic behavior for sufficiently large K and F, and produces a solution that has an error doubling time comparable to synoptic-scale features in atmospheric models for F = 8 (L96). For this study, we fix K at 40 and use F = 8 for most experiments, but allow F to vary when investigating the sensitivity of E4DVar and 4DEnVar to model error and error growth rate. To distinguish between the F used by the true dynamical system and the F used by the model in imperfect model experiments, we use F^t and F^m to denote the true and model values of F, respectively.
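The L96 equations and the RK4 integration described above translate directly into a short script; a minimal sketch (NumPy assumed; the initial perturbation and spinup length are illustrative choices, not values from the experiments):

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz (1996) tendencies: quadratic advection, linear dissipation, forcing."""
    # np.roll handles the cyclic indexing: X_{j+1}, X_{j-2}, X_{j-1}.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.005, F=8.0):
    """One fourth-order Runge-Kutta step (0.005 time units, i.e., 36 min)."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin up onto the attractor from a slightly perturbed rest state.
x = 8.0 * np.ones(40)
x[19] += 0.01                  # small perturbation to trigger chaos
for _ in range(2000):
    x = rk4_step(x)
```

After the spinup, x wanders chaotically over the model attractor and can serve as the "truth" trajectory in identical-twin experiments.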

3. Four-dimensional covariance

The analysis increments that minimize the E4DVar and 4DEnVar cost functions depend implicitly on spatiotemporal covariances modeled in the assimilation window. For the case where a single observation at time τ is assimilated, Thépaut et al. (1996) show that the analysis increment at t_0 that minimizes the 4DVar cost function is given by

\delta\mathbf{x}_0 = \frac{\mathbf{B}_{0,\tau}\mathbf{H}^{\mathrm{T}}}{\sigma_b^2 + \sigma_o^2}\,d, \qquad (14)

where σ_b² is the background error covariance at τ projected into observation space, σ_o² is the observation error variance, and d is the innovation. The matrix B_{0,τ} in (14) contains covariances between state variables at the beginning of the assimilation window and at τ, and is given by B_{0,τ} = B M_τ^T. E4DVar and 4DEnVar both rely implicitly on B_{0,τ} and σ_b² to calculate δx_0. Because the spread of information from observations at future times to state variables at the beginning of the assimilation window is controlled by B_{0,τ}, we will discuss the means by which this covariance matrix is estimated by E4DVar and 4DEnVar.
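The single-observation increment can be checked against the standard Kalman gain formula in a toy setting. The sketch below (NumPy assumed; all operators are illustrative stand-ins, not quantities from the experiments) forms the time covariance B M_τ^T and the projected background variance, then computes the increment at the start of the window.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# Illustrative background covariance B and tangent linear model M_tau.
A_ = rng.standard_normal((n, n))
B = A_ @ A_.T + n * np.eye(n)
M_tau = np.eye(n) + 0.1 * rng.standard_normal((n, n))

# A single observation of state variable 3 at time tau.
H = np.zeros((1, n)); H[0, 3] = 1.0
r = 1.0                                        # observation error variance
d = 0.5                                        # innovation

B_0tau = B @ M_tau.T                           # time covariance B M_tau^T
# Background error variance at tau projected into observation space.
sigma_b2 = (H @ M_tau @ B @ M_tau.T @ H.T).item()
# Increment at t0: spread the innovation back through the time covariance.
dx0 = (B_0tau @ H.T).ravel() * d / (sigma_b2 + r)
```

The increment is simply a scaled column of B M_τ^T: the time covariance controls how the observed misfit at τ projects onto the state at the start of the window.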

The E4DVar calculation of B_{0,τ} follows the same form as 4DVar, but with B replaced with a linear combination of the static and localized ensemble covariance:

\mathbf{B}_{0,\tau} = \left[\beta\,\mathbf{B}_s + (1-\beta)\,\boldsymbol{\rho}\circ\mathbf{P}_e\right]\mathbf{M}_\tau^{\mathrm{T}}. \qquad (15)

The hybrid B_{0,τ} comes from a weighted sum of static and ensemble covariances that are propagated in time by the tangent linear model.

In 4DEnVar, the approximation of B_{0,τ} also contains a contribution from the static and ensemble covariance, except time covariance is not evolved from t_0, and the ensemble component comes from a 4D ensemble forecast from t_0 to τ:

\mathbf{B}_{0,\tau} \approx \beta\,\mathbf{B}_s + (1-\beta)\,\boldsymbol{\rho}\circ\left[\frac{1}{N-1}\sum_{n=1}^{N}\mathbf{x}'_{n,0}\,(\mathbf{x}'_{n,\tau})^{\mathrm{T}}\right]. \qquad (16)

4DEnVar has the advantage of using the nonlinear model to estimate the ensemble component of B_{0,τ}, which is beneficial for cases when the tangent linear approximation is invalid, or when parts of the forecast model cannot be linearized. The latter case occurs for complex physical parameterization schemes used in atmospheric models (e.g., microphysics and cumulus schemes); therefore, the tangent linear model is often derived from a simplified version of the full nonlinear model (e.g., Sun and Crook 1997; Honda et al. 2005; Huang et al. 2009).

In addition to not evolving time covariance from t_0, another disadvantage of 4DEnVar is that localization must be performed on a time covariance matrix. Comparing (15) and (16), E4DVar avoids this problem by localizing P_e at t_0, then applying the tangent linear model to the full-rank localized ensemble covariance. Recall from section 2b that the formulation of 4DEnVar tested in this study does not consider time when performing the localization. We examine the impact of the different localizations applied in E4DVar and 4DEnVar by considering the case where the weight of static covariance is set to 0. To perform these experiments, 10 ensemble members are drawn from a climatological error distribution, centered on a 100-day forecast from arbitrary initial conditions, with a climatological covariance (Fig. 1a) estimated during one of the cycling data assimilation experiments described in section 4. We apply the tangent linear and ensemble methods to estimate B_{0,τ} with and without covariance localization, then calculate the mean squared error (MSE) of B_{0,τ} for τ = 0–48 h. The localization applied for these cases takes the form of a Gaussian function with an ROI of 3; this value was chosen based on results from cycling experiments presented in section 4. The true value of B_{0,τ} (Fig. 1b) at each time is estimated from a very large ensemble forecast from the nonlinear model, which produces results that are nearly indistinguishable from cases that use the tangent linear model to evolve time covariances directly from the climatological covariance (not shown). MSEs are averaged over 1000 trials and plotted in Fig. 2 with error bars that represent the 95% confidence interval for the calculations; owing to the large number of trials, the confidence interval at most times is narrower than the plotted line width.

Fig. 1.

(a) The climatological covariance, and (b) the true time covariance B_{0,τ} estimated using a large ensemble initialized from the climatological error distribution.


Fig. 2.

Mean squared error of covariance estimated from the E4DVar method (black) and 4DEnVar method (red) without localization (dashed lines) and with localization (solid lines).


When no localization is applied (dashed lines in Fig. 2), E4DVar and 4DEnVar provide similar errors for B_{0,τ}. For this case, the E4DVar covariance simplifies to

\mathbf{B}_{0,\tau} = \frac{1}{N-1}\sum_{n=1}^{N}\mathbf{x}'_{n,0}\,(\mathbf{M}_\tau\mathbf{x}'_{n,0})^{\mathrm{T}}, \qquad (17)

which is identical to the low-rank 4DEnVar approximation, but with ensemble perturbations evolved in time using the tangent linear model. Equation (17) is no longer valid when localization is applied, in which case E4DVar uses the tangent linear model to evolve a full-rank covariance in time. The 4DEnVar estimate, however, remains low rank until localization is applied to the final solution. Applying the localization at time τ, rather than at the beginning of the assimilation window, results in significantly larger errors in the 4DEnVar representation of B_{0,τ} (solid lines in Fig. 2).
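The reason the ordering matters can be seen with a toy linear model: localization and model propagation do not commute. In the sketch below (NumPy assumed; a cyclic shift stands in for the forecast model, and all sizes are illustrative), localizing the initial covariance and then propagating it, as E4DVar effectively does, differs from propagating raw perturbations and then localizing the resulting time covariance, as 4DEnVar does.

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 20, 5

# Toy linear "model": advects every state variable 4 grid points downstream.
M = np.roll(np.eye(n), 4, axis=0)

# Gaussian localization on a periodic domain.
idx = np.arange(n)
dist = np.minimum(np.abs(np.subtract.outer(idx, idx)),
                  n - np.abs(np.subtract.outer(idx, idx)))
rho = np.exp(-0.5 * (dist / 3.0) ** 2)

# Small-ensemble covariance at the start of the window.
X = rng.standard_normal((n, N))
X -= X.mean(axis=1, keepdims=True)
P0 = X @ X.T / (N - 1)

# E4DVar-like: localize first, then propagate the full-rank result.
cov_e4dvar = (rho * P0) @ M.T
# 4DEnVar-like: propagate raw perturbations, then localize the time covariance.
cov_4denvar = rho * (P0 @ M.T)

# The two orderings generally disagree.
assert not np.allclose(cov_e4dvar, cov_4denvar)
```

In the second ordering the localization function stays centered on zero separation while the true signal has moved downstream, which is exactly the mismatch visible in the 4DEnVar curves of Fig. 3.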

The effects of the different localization strategies are illustrated in Fig. 3 by applying both methods to estimate covariances between all state variables at 0 h and variable 20 at τ = 0, 6, 12, and 24 h; these covariances come directly from the 20th column of B_{0,τ}. In the left column of Fig. 3, the true covariance at each time (black line) is plotted with values estimated from a 10-member ensemble (blue line). In the right column, the localized covariances are estimated by applying the tangent linear model to evolve time covariance from t_0 (red line, indicating the E4DVar covariance) and by localizing the 10-member ensemble directly (dashed blue line, indicating the 4DEnVar covariance).

Fig. 3.

Covariance between all 40 state variables at 0 h and variable 20 at (a),(b) 0 h; (c),(d) 6 h; (e),(f) 12 h; and (g),(h) 24 h. Values are estimated from the true covariance (black lines in first and second columns), a 10-member ensemble covariance (blue lines in first column), E4DVar localized covariance (red lines in second column), and 4DEnVar localized covariance (dashed blue lines in second column).


For τ close to 0 h, both methods provide an accurate representation of the true covariance near variable 20. As τ increases, the E4DVar covariance maintains the correct structure of the true signal, which propagates downstream and changes shape. The 4DEnVar covariance, however, becomes contaminated by spurious correlations, which propagate into the region where the true signal was originally represented well by the low-rank approximation. To match the true covariance by τ = 24 h, the ensemble covariance in Fig. 3g requires the following: 1) the removal of distant spurious correlations, 2) a localization function that has a peak near the downstream covariance maximum, and 3) a change in structure of the signal. The first and second requirements may be obtainable through adaptive localization methods that have been applied for similar applications (e.g., Bishop and Hodyss 2011), but the third requirement is unattainable by localization alone.

Though not shown, the localization strategy used by E4DVar to estimate single columns of B_{0,τ} (red lines in Fig. 3) can be implemented in an ensemble framework by localizing the ensemble perturbations around the variable of interest at 0 h before running the ensemble forecast to τ h. For example, each x'_n can be replaced by ρ_20 ∘ x'_n, where ρ_20 is the 20th column of ρ. The ensemble mean is then added to the localized perturbations, which are propagated forward with the nonlinear model to provide a sample representation of column 20 of B_{0,τ}. This approach avoids the use of a tangent linear model, but requires running the nonlinear model N times for each state variable.

The discussion in this section focuses mostly on the approximation of B_{0,τ} in the two data assimilation algorithms; however, the same positive and negative qualities of the two methods apply equally to the estimation of σ_b². Discrepancies between the 4D covariance estimation in E4DVar and 4DEnVar will be used to interpret the relative performance of these two methods in the cycling data assimilation experiments described in section 4.

4. Cycling data assimilation experiments

In this section, we present results from experiments that examine the sensitivity of E4DVar and 4DEnVar to a variety of modeling and observing system configurations, and tuning parameters that control the background error statistics. Each experiment uses observations generated from a "truth" simulation with added noise drawn from a standard normal distribution; observations are collected every κ variables (grid points) and every Δt units of time. Unless otherwise noted in the manuscript, κ = 2 and Δt = 1.2 h, which means half the model variables are observed every 1.2 h with an error variance of 1. Observations are assimilated over a 2600-day period, and root-mean-square errors (RMSEs) from the last 2500 days are used to quantify the accuracy of the posterior mean analyses. The first 100 days of data assimilation act as a spinup period to allow the ensemble covariance to evolve flow-dependent information for the given setup of the model and observation network.

We perform separate sets of data assimilation experiments to examine the sensitivity of E4DVar and 4DEnVar to ensemble size, window length, observation network, model forcing, and model error; the system parameters for each of these experiments are listed in Table 1. Results from each 2500-day assimilation period are summarized by averaging the analysis RMSEs over all state variables and data assimilation cycles. Given the number of samples used to approximate these values, the largest margin of error for the RMSE calculations is found to be less than 0.01 with 95% confidence. For simplicity, RMSEs in each experiment are rounded to the nearest 0.001 units when the margin of error is smaller than 0.001, and 0.01 units when the margin of error is greater than 0.001.

Table 1.

Configuration of cycling data assimilation experiments.


The hybrid forms of E4DVar and 4DEnVar require two prior covariance matrices: the ensemble covariance P_e and the static covariance B_s. As described in section 2c, P_e is estimated from an ensemble forecast run from the previous data assimilation cycle, and B_s comes from time-averaged forecast errors. To estimate B_s, we first run the data assimilation systems with β = 0 for a given model setup, assimilation window length T, and observation network, and use differences between the forecast and true system state to estimate covariances for the model state variables. These covariances are then averaged to form a covariance matrix that contains the same diagonal and off-diagonal elements for each variable; an example is provided in Fig. 1a.
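The averaging step that turns sampled forecast-error covariances into a spatially homogeneous static covariance can be sketched as follows (NumPy assumed; the synthetic errors here are stand-ins for actual forecast-minus-truth differences collected over many cycles). The result is a circulant matrix in which every variable sees the same diagonal and off-diagonal elements.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_samples = 40, 5000

# Stand-in forecast errors with deliberately inhomogeneous variances.
errors = rng.standard_normal((n_samples, n)) @ np.diag(np.linspace(0.5, 1.5, n))

cov = np.cov(errors, rowvar=False)

# Average the covariance around the periodic domain so that every
# variable sees the same structure at each cyclic separation k.
avg = np.zeros_like(cov)
for k in range(n):
    # Mean covariance at separation k, averaged over all reference points.
    c_k = np.mean([cov[i, (i + k) % n] for i in range(n)])
    for i in range(n):
        avg[i, (i + k) % n] = c_k
```

Each row of the averaged matrix is a cyclic shift of the first row, and the averaging over all reference points guarantees the result stays symmetric.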

a. Ensemble size

For large modeling applications, computing resources often limit the number of ensemble members used to estimate P_e. This sampling deficiency leads to an underestimation of the posterior error variance during every cycle unless covariance localization and inflation are used. In this section, we apply E4DVar and 4DEnVar with N = 5, 10, 20, and 40 to examine the sensitivity of each method to sampling errors caused by the finite ensemble size. These experiments use no static covariance (i.e., β = 0), a fixed window length, and a range of ROI and α values to tune the imperfect error statistics from the ensembles.

Figure 4 shows mean RMSEs calculated for each ensemble size and configuration of ROI and α near the optimal values, which are indicated by the black box in each subplot. As expected, both methods become more stable and require less localization (larger ROI) and less inflation (smaller α) as N increases. The optimal ROI and α for a given N are comparable for E4DVar and 4DEnVar, and are typically found near values that lead to filter divergence; we define filter divergence objectively by flagging configurations that produced 100-cycle-average RMSEs larger than 4 with NA for "not available" in the figures. In general, the configurations that modify the covariance the least while maintaining a stable solution produce the best results. E4DVar consistently outperforms 4DEnVar for cases with large sampling error (i.e., N = 5, 10, and 20), owing to differences in localization. This result is verified by increasing N to 40 and turning the covariance localization off (Figs. 4g,h). As suggested by Fig. 2, the two methods produce comparable RMSEs when no modifications are made to the prior covariance; Fairbairn et al. (2014) found the same result using a slightly more complex version of the L96 model.

Fig. 4.

Mean analysis RMSEs estimated for a range of α and ROI, and N = 5, 10, 20, and 40; the range of α and ROI changes for each row. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.


A hybrid covariance is used to compensate for sampling errors in cases that use N_e = 5, 10, and 20 (Figs. 4a–f). These experiments use a fixed ROI, taken from the optimal values found in Fig. 4, and assimilate the observations using a range of α and β values (Fig. 5). For N_e = 5 and 10, E4DVar and 4DEnVar benefit from a configuration that uses a mix of about 10% static and 90% ensemble covariance. The application of a hybrid covariance also reduces the need for large variance inflation, as seen by the decrease in optimal α between Figs. 4a–d and 5a–d. The positive impact of the hybrid covariance is less significant for N_e = 20, where the optimal β drops below 0.05 for both data assimilation systems. The optimal α between Figs. 4e,f and 5e,f remains fixed at 0.3, suggesting that covariance relaxation is more effective than the hybrid covariance at treating sampling errors when N_e increases.
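A minimal sketch of the hybrid weighting described above, assuming the common convex-combination form with ensemble weight 1 - β and static weight β (the paper's exact formulation is defined in its earlier sections):

```python
import numpy as np

def hybrid_covariance(X_pert, B_clim, beta):
    """Hybrid background error covariance: a convex combination of the
    sample covariance from ensemble perturbations X_pert (n x Ne) and a
    static climatological covariance B_clim, with static weight beta."""
    ne = X_pert.shape[1]
    P_ens = X_pert @ X_pert.T / (ne - 1)
    return (1.0 - beta) * P_ens + beta * B_clim
```

With beta near 0.1, this reproduces the roughly 10% static / 90% ensemble mix found to be optimal for the smallest ensembles.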

Fig. 5.

Mean analysis RMSEs estimated for a range of α and β, with ROI fixed at the optimal values found in Fig. 4 and N_e = 5, 10, and 20; the range of β changes between the second and third rows. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

The relationship between β and ROI is examined by holding α constant at the optimal values found in Fig. 5. Mean RMSEs from this experiment (Fig. 6) show that the optimal ROI remains close to the values found when β = 0 in Fig. 4. The performance of both methods, however, depends less on the prescribed ROI when a hybrid covariance is used, which is beneficial for cases in which the optimal ROI is unknown or changes significantly with observation type, location, and flow regime. While the results presented in Figs. 5 and 6 show E4DVar producing the lowest RMSEs over the range of parameters, it is notable that E4DVar and 4DEnVar respond equally to the use of a hybrid covariance to treat sampling errors.

Fig. 6.

Mean analysis RMSEs estimated for a range of β and ROI, and N_e = 5, 10, and 20; the range of β and ROI changes between each row. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

b. Assimilation window length

This section examines the sensitivity of E4DVar and 4DEnVar to assimilation window length. The same observations are assimilated in all experiments, except that T values of 0, 6, and 24 h are used. Smaller values of T require more frequent data assimilation cycles, with T = 0 h resulting in an ensemble filter step each time observations are available (every 1.2 h). Because the experiments with longer window lengths contain fewer assimilation cycles over the test period, RMSEs are averaged from analysis data every 24 h to match the longest T used in the experiments.

Figure 7 shows mean RMSEs from experiments that use β = 0, N_e = 10, and a range of ROI and α values near the optimal pair of ensemble covariance parameters. The methods are equivalent when run in filtering mode (T = 0), and both exhibit a decrease in RMSEs as T is increased to 24 h. Smoothers use increasingly more data to approximate the analysis state as the window length is extended, which leads to a smaller sensitivity to random errors in the ensemble and observation network, and an overall reduction in analysis RMSEs. For the same reason, the longer window length also decreases the amount of relaxation needed to prevent filter divergence (Fig. 7). Though not shown, the performance of both methods continues to improve with T until reaching a point where nonlinearity in the model dynamics introduces errors in the solution. Figure 7 also verifies that differences in covariance localization cause E4DVar to outperform 4DEnVar as window length increases; the inability of 4DEnVar (with localization) to produce analyses as accurate as E4DVar for large T is consistent with the larger errors in the ensemble-estimated 4D covariance at long lead times in Fig. 2.

Fig. 7.

Using an ensemble size of 10, mean analysis RMSEs are estimated for a range of α and ROI, and T = 0, 6, and 24 h; the range of α values changes between each row. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

We introduce a hybrid covariance in the cost function to examine how T affects the optimal weight of the static covariance for N_e = 10. Figure 8 shows that the optimal β decreases with T in a manner similar to α (Fig. 7). This result provides additional evidence that both 4D data assimilation methods are less sensitive to sampling noise as the assimilation window length increases. Furthermore, the inability of 4DEnVar to propagate the static covariance in the assimilation window does not appear to decrease the benefit of using static covariance to treat sampling errors induced by the small ensemble size; this conclusion follows from the near-equal decrease in RMSEs when the hybrid covariance is applied in E4DVar and 4DEnVar. Because of the compact shape of the climatological covariance in this dynamical system (cf. Figs. 1 and 3), the static covariance serves mostly to reduce the weight of spurious sample correlations that remain after localization.

Fig. 8.

Mean analysis RMSEs estimated for a range of α and β, and T = 0, 6, and 24 h; the range of β values changes between each row. Ensemble size and ROI are fixed at 10 and the optimal value found in Fig. 7, respectively. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

c. Observation network

We examine the sensitivity of the data assimilation methods to spatial and temporal observation density by changing the observation spacing κ and the observation interval. In doing so, we adopt the model and data assimilation setup used to produce Figs. 4e and 4f, fixing the window length and covariance parameters at the values used there; the ensemble size (N_e = 20) is larger than in the previous experiments, which reduces the effects of random sampling errors on the data assimilation. We first decrease κ from 2 to 1 (Figs. 9a,b), which doubles the spatial density of the observations. The additional observations reduce the differences in analysis accuracy between E4DVar and 4DEnVar, and allow both methods to remain stable over a larger range of ensemble parameters (cf. Figs. 4c,d). Because the system has full observability in this experiment, the ensemble members remain close to the true solution between data assimilation cycles, which reduces the sensitivity of analysis RMSEs to covariance localization and inflation.
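The observation-network parameters can be illustrated with a small helper. The helper name and the convention that observations begin one interval into the window are assumptions made for illustration:

```python
import numpy as np

def observation_network(n, kappa, window_h, dt_obs_h):
    """Return the indices of observed state variables (every kappa-th of n
    variables) and the observation times (hours) within one assimilation
    window of length window_h, spaced every dt_obs_h hours."""
    obs_idx = np.arange(0, n, kappa)
    obs_t = np.arange(dt_obs_h, window_h + 1e-9, dt_obs_h)
    return obs_idx, obs_t
```

For the 40-variable L96 ring, kappa = 1 observes every variable (full observability), kappa = 3 observes every third variable, and raising dt_obs_h from 1.2 to 3 h thins the network in time.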

Fig. 9.

Using an ensemble size of 20, mean analysis RMSEs are estimated for a range of α and ROI for experiments assimilating observations (a),(b) of every variable (κ = 1); (c),(d) of every third variable (κ = 3); and (e),(f) of every third variable every 3 h; the range of α and ROI values changes between each row. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

We then make the observation network sparse by increasing κ to 3 (Figs. 9c,d). With κ fixed at 3, we also decrease the temporal frequency of observations by increasing the observation interval from 1.2 to 3 h (Figs. 9e,f). These experiments show clear benefits of E4DVar over 4DEnVar in terms of stability and accuracy when fewer observations are used. For sparse observation networks, the number of possible model solutions that can fit the observations in the assimilation window increases. Modeling an accurate background error covariance in the assimilation window then becomes more challenging, owing to the larger variance required to match the increase in uncertainty. As a result, the solution depends more on the choice of ROI and α for estimating the background error covariance given limited information.

We perform a second set of experiments to examine the impact of using a hybrid covariance to improve the background error estimation for sparse observation networks. Figure 10 shows the sensitivity of E4DVar and 4DEnVar to the parameters α and β when data are assimilated from the three different observation networks using a fixed ROI of 7. For the densely spaced observations (Figs. 10a,b), the optimal β is found near 0.02, which amounts to only a small decrease in E4DVar and 4DEnVar analysis RMSEs compared to the case in which no static covariance is used. The optimal weight of the static covariance, however, increases when the observation network is made sparser (Figs. 10c–f), which is indicative of a less accurate ensemble covariance when fewer observations are available to constrain the solution. While E4DVar continues to produce smaller analysis RMSEs than 4DEnVar, we find that the two methods benefit equally from the use of a hybrid covariance in these experiments.

Fig. 10.

Using an ensemble size of 20, mean analysis RMSEs are estimated for a range of α and β for experiments assimilating observations (a),(b) of every variable (κ = 1); (c),(d) of every third variable (κ = 3); and (e),(f) of every third variable every 3 h; the range of β and α values changes between the first and second rows. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

d. Model forcing

In section 3, 4D covariances modeled using ensemble and tangent-linear approaches show that the accuracy of the estimated covariance decreases faster with time in 4DEnVar than in E4DVar (Fig. 2). One problem with the ensemble representation of the 4D covariance is that spurious correlations, which are typically removed at the beginning of the time window in E4DVar, propagate forward and degrade the estimation of the true covariance signal (Fig. 3). The model forcing term F controls the rate at which small initial-condition errors grow between the beginning of the window and the time of the observations. For example, the error doubling time of the model forecast decreases from 2.1 to 1.5 days when F is increased from 8 to 10 (Lorenz and Emanuel 1998). Likewise, the model forcing term is expected to influence the period over which time covariances can be estimated effectively by the two data assimilation methods.
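The dynamics referenced here follow the Lorenz (1996) model. A standard sketch with fourth-order Runge-Kutta time stepping (the integration scheme and step size are assumptions, not stated in this excerpt):

```python
import numpy as np

def l96_tendency(x, F):
    """Lorenz (1996) model: dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F,
    with cyclic indexing on the ring of state variables."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt=0.05):
    """One fourth-order Runge-Kutta step; dt = 0.05 model time units
    corresponds to 6 h in the Lorenz and Emanuel (1998) scaling."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

Raising F from 8 toward 11 strengthens the nonlinear advection relative to damping, which shortens the error doubling time as described above.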

Using a perfect model, we perform data assimilation experiments with F set to 9, 10, and 11 to test E4DVar and 4DEnVar on dynamical systems that are intrinsically less predictable. These experiments are similar to those presented in Figs. 7e and 7f in that we use β = 0, N_e = 10, and T = 24 h, which means the performance of each method depends greatly on how the 4D covariance is estimated over a long window using statistics from a small ensemble. Results from these experiments (Fig. 11) show that increasing F leads to larger analysis RMSEs in both data assimilation systems. The faster error growth also causes both methods to become increasingly less stable for large α; significant posterior variance inflation is no longer needed to increase the prior variance for future data assimilation cycles. As hypothesized, the 4DEnVar analyses suffer a larger drop in accuracy, owing to faster error growth in the assimilation window.

Fig. 11.

Using an ensemble size of 10 and T = 24 h, mean analysis RMSEs are estimated for a range of α and ROI, and F = 9, 10, and 11. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

e. Model error

In this section, we test E4DVar and 4DEnVar with an imperfect forecast model by keeping the true forcing term fixed at F = 8 and reducing the forcing term in the model used during the prediction and assimilation steps to F = 7. For these experiments, we use an ensemble size of N_e = 20 and a window length of T = 24 h, so applying an incorrect forecast model over a long assimilation window causes the dominant system errors. In section 4b, the same configuration is applied with a perfect model to produce Figs. 7e and 7f. Figures 12a and 12b show results from experiments that apply E4DVar and 4DEnVar with β = 0 to find the optimal ROI and α for tuning the ensemble covariance without a hybrid covariance. Both methods produce the lowest analysis RMSEs when a relatively small ROI (around 3–4) is used with an α of 0.6. Model error reduces the accuracy of 4D covariances approximated in the assimilation window, and covariance localization acts as the only mechanism for improving their estimation. The issues discussed in section 3 regarding time localization in 4DEnVar become more severe, owing to the use of a shorter ROI to manage sampling errors related to the imperfect model. As a result, E4DVar provides more accurate 4D covariances and lower analysis RMSEs than 4DEnVar when the system is subjected to model error. Fairbairn et al. (2014) reached a similar conclusion from their experiments comparing E4DVar and 4DEnVar with an imperfect model.

Fig. 12.

(a),(b) Mean analysis RMSEs are estimated for a range of α and ROI for N_e = 20, T = 24 h, and an incorrect model forcing of F = 7. In (c)–(f), ROI is fixed at 3, and RMSEs are estimated for ranges of α and β. The hybrid covariance is introduced in the standard cost function form in (c) and (d), and through hybrid ensemble perturbations in (e) and (f). Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

When a hybrid covariance is introduced to treat model error during data assimilation, E4DVar and 4DEnVar respond very differently than in experiments where sampling errors related to ensemble size are dominant (see section 4a). Increasing β from 0.0 to 0.1 in the E4DVar experiment leads to a 30% reduction in analysis RMSEs (Fig. 12c) and removes most of the sensitivity to α. The hybrid covariance in E4DVar serves a similar purpose as additive inflation for EnKFs; the static component introduces orthogonal directions for errors to grow in the assimilation window that would otherwise be missing from the ensemble component. The 4DEnVar analyses, however, show a much smaller decrease in RMSE when the static covariance is introduced (Fig. 12d). Similar to the nonhybrid cases, the hybrid implementation of 4DEnVar requires a large α for optimal performance, which suggests that the data assimilation benefits less from the static covariance when it is fixed in time. Because we generate the climatological covariance using exact forecast errors estimated from previous nonhybrid data assimilation experiments, both hybrid methods benefit from the inclusion of model error in the climatological covariance estimate. Our approach for generating a climatological covariance differs from atmospheric science applications, where it is generated without any prior knowledge of the true state. As a result, we expect our data assimilation systems to yield more accurate analyses than would be found for a system with an unknown true state.

One means of propagating the static covariance through the assimilation window in 4DEnVar is to replace the ensemble forecast perturbations at $t_0$ with hybrid perturbations:

 
$$\mathbf{x}^{h}_{i} = \sqrt{1-\beta}\,\mathbf{x}^{e}_{i} + \sqrt{\beta}\,\mathbf{x}^{c}_{i},$$

where each $\mathbf{x}^{c}_{i}$ is drawn randomly from the climatological error distribution $N(\mathbf{0},\mathbf{B})$. This approach introduces additional noise in the covariance estimate, but the static contribution is no longer fixed in time. Figures 12e and 12f show results from E4DVar and 4DEnVar experiments in which the climatological covariance is introduced through hybrid perturbations, rather than using the full-rank matrix $\mathbf{B}$ in the cost function. 4DEnVar clearly benefits from the hybrid perturbation method, as seen by the decrease in analysis RMSEs and the smaller reliance on α. Nevertheless, E4DVar performs worse with the hybrid perturbations, owing to the fact that $\mathbf{B}$ is replaced by a sample estimate. This example demonstrates the advantage of using linearized models to evolve $\mathbf{B}$ through the assimilation window in E4DVar, and suggests that 4DEnVar may benefit more from other means of treating model error, such as additive inflation or stochastic physics in the ensemble.
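The hybrid perturbation construction above can be sketched as follows, assuming the square-root weighting shown in the equation and a supplied matrix square root of B; centering the random draws is an illustrative choice so that the blended columns remain valid perturbations:

```python
import numpy as np

def hybrid_perturbations(X_ens, B_sqrt, beta, rng):
    """Blend ensemble forecast perturbations X_ens (n x Ne) with random
    draws from the climatological distribution N(0, B), weighted so the
    implied covariance is roughly (1 - beta) P_ens + beta B."""
    n, ne = X_ens.shape
    X_clim = B_sqrt @ rng.standard_normal((n, ne))   # samples from N(0, B)
    X_clim = X_clim - X_clim.mean(axis=1, keepdims=True)
    return np.sqrt(1.0 - beta) * X_ens + np.sqrt(beta) * X_clim
```

Because the blended columns are carried through the nonlinear ensemble forecast, the static contribution evolves in time, at the cost of extra sampling noise when Ne is small.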

f. Choice of analysis

For the data assimilation experiments described up to this point in the manuscript, the analysis is formed using the nonlinear model to propagate the analysis increment plus background field at $t_0$ forward to the middle of the window at $t = T/2$:

 
$$\mathbf{x}^{a}_{T/2} = \mathcal{M}_{t_0 \rightarrow T/2}\bigl(\mathbf{x}^{b}_{t_0} + \delta\mathbf{x}_{t_0}\bigr). \quad (19)$$

E4DVar and 4DEnVar apply different strategies for propagating increments forward in time when minimizing the cost function. In E4DVar, increments at $t_0$ in the assimilation window are evolved in time using the tangent linear model $\mathbf{M}$, so the analysis at the middle of the window is given by

 
$$\mathbf{x}^{a}_{T/2} = \mathbf{x}^{b}_{T/2} + \mathbf{M}_{t_0 \rightarrow T/2}\,\delta\mathbf{x}_{t_0}. \quad (20)$$

The difference between (19) and (20) reflects the linear approximation made in the incremental 4DVar algorithm used in this study. In 4DEnVar, analysis increments at each time in the window come from multiplying the control variables $\mathbf{w}$ by a matrix of evolved ensemble perturbations. Therefore, the 4DEnVar analysis at the middle of the window is given by

 
$$\mathbf{x}^{a}_{T/2} = \mathbf{x}^{b}_{T/2} + \mathbf{X}^{e}_{T/2}\,\mathbf{w}. \quad (21)$$

Liu et al. (2009) formulate their 4DEnVar cost function so that the control variables are valid at the middle of the assimilation window. This approach is identical to minimizing (11), then applying (21) to form the analysis.

To examine the sensitivity of E4DVar and 4DEnVar to the choice of analysis calculation, we revisit the experiments presented in Figs. 8e and 8f, which test the performance of both methods over a range of α and β values and a 24-h assimilation window. We apply E4DVar and 4DEnVar using the analyses defined by (20) and (21), respectively, and compare the results with experiments using (19). Mean analysis errors are plotted in Fig. 13 using "$t_0$ increments" to denote experiments that propagate the analysis from the beginning of the window using the nonlinear model, and "$T/2$ increments" to denote experiments that add increments from the middle of the window to the background at this time. Figures 13a and 13c show a minor advantage of using the nonlinear model to propagate the analysis forward from $t_0$ in the E4DVar case. The validity of the tangent linear approximation in this model leads to analysis differences that are statistically indistinguishable for many of the configurations tested. On the other hand, we find the analysis accuracy in the 4DEnVar case to be degraded when increments are taken from the middle of the window and added to the background at this time (Figs. 13b,d). For 4DEnVar, the advantage of using (19) to define the analysis instead of (21) is realized by substituting $\mathbf{X}^{e}_{t_0}\mathbf{w}$ for $\delta\mathbf{x}_{t_0}$ in (19), and applying a first-order Taylor series approximation for the nonlinear model:

 
$$\mathbf{x}^{a}_{T/2} = \mathcal{M}_{t_0 \rightarrow T/2}\bigl(\mathbf{x}^{b}_{t_0} + \mathbf{X}^{e}_{t_0}\mathbf{w}\bigr) \quad (22)$$
 
$$\approx \mathbf{x}^{b}_{T/2} + \mathbf{M}_{t_0 \rightarrow T/2}\,\mathbf{X}^{e}_{t_0}\mathbf{w}. \quad (23)$$

The term $\mathbf{M}_{t_0 \rightarrow T/2}\,\mathbf{X}^{e}_{t_0}$ in (23) is similar to the $\mathbf{X}^{e}_{T/2}$ used in (21), except the square root of the background error covariance is integrated from $t_0$ to $T/2$ using the tangent linear model instead of an ensemble forecast from the nonlinear model. Evidence provided in section 3 suggests that the tangent-linear-evolved perturbations provide a better approximation of the background errors than the nonlinearly evolved ensemble perturbations, owing to differences in covariance localization and the inability of 4DEnVar to propagate the static covariance forward in time. Because the method that uses the nonlinear model to propagate an analysis from the beginning to the middle of the window implicitly models the evolution of perturbations more accurately in time, it produces better analyses than the alternative approach.
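The distinction between the (19)- and (21)-style analyses can be demonstrated with a toy nonlinear model standing in for the window integration; the logistic dynamics, the ensemble construction, and all parameter values here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def model(x, steps=20, dt=0.05):
    """Toy nonlinear model standing in for M_{t0 -> T/2}
    (logistic growth integrated with Euler steps; purely illustrative)."""
    for _ in range(steps):
        x = x + dt * x * (1.0 - x)
    return x

rng = np.random.default_rng(1)
n, ne = 3, 8
x_b0 = np.array([0.2, 0.25, 0.3])                 # background at t0
X0 = 0.05 * rng.standard_normal((n, ne))
X0 = X0 - X0.mean(axis=1, keepdims=True)          # ensemble perturbations at t0
w = 0.1 * rng.standard_normal(ne)                 # control variables
dx0 = X0 @ w                                      # analysis increment at t0

# Analog of (19): nonlinear propagation of background plus increment
ana_19 = model(x_b0 + dx0)

# Analog of (21): evolve members nonlinearly, apply evolved perturbations
members = np.stack([model(x_b0 + X0[:, i]) for i in range(ne)], axis=1)
X_mid = members - members.mean(axis=1, keepdims=True)
ana_21 = model(x_b0) + X_mid @ w
```

For small increments the two analyses nearly coincide, as the Taylor expansion in (22) and (23) predicts; differences appear as the perturbations grow large enough for nonlinear terms to matter.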

Fig. 13.

Mean analysis RMSEs estimated for a range of α and β, and T = 24 h, using (a),(b) analysis increments propagated forward with the nonlinear model from t = 0 h, and (c),(d) analysis increments estimated at t = 12 h and added to the background at this time. Ensemble size and ROI are fixed at 10 and the optimal value found in Fig. 7, respectively. Red shading indicates higher RMSEs, NA indicates that filter divergence occurs during the experiment, and the smallest errors are indicated by the black box.

5. Conclusions

Using a low-dimensional dynamical system, this study investigates the behavior of two hybrid 4D data assimilation systems currently being tested for operational use and for research applications in atmospheric science. One method, called E4DVar, is an extension of the traditional 4DVar method, and uses tangent linear and adjoint models to implicitly evolve a hybrid prior error covariance in the assimilation window. A second method, called 4DEnVar, uses a four-dimensional ensemble in place of the linearized models in 4DVar. Because of the absence of tangent linear and adjoint models in 4DEnVar, the climatological component of the background error covariance matrix is fixed in time; therefore, the 4DEnVar method in this study operates as a hybrid between 4DEnVar and 3DVar-FGAT.

A fundamental difference between E4DVar and 4DEnVar is how temporal covariances are estimated in the assimilation window. 4DEnVar uses an ensemble forecast from the full nonlinear model to estimate the time evolution of prior error covariance. This approach is beneficial for situations where the tangent linear approximation is invalid, as is the case for physical parameterization schemes used frequently in atmospheric models. It also provides a means of performing 4D data assimilation in a manner that is highly parallel, unlike E4DVar, which requires an integration of the tangent linear and adjoint models during each iteration of the cost function minimization. Nevertheless, the ensemble representation of time covariances in 4DEnVar has disadvantages when a hybrid covariance is applied, or when localization is needed to treat sampling errors.

The L96 model is used to examine the performance of E4DVar and 4DEnVar under a number of model, data assimilation, and observing system configurations. Experiments performed using ensemble covariance during data assimilation yield results that agree with the findings of Buehner et al. (2010b) and Fairbairn et al. (2014) in that E4DVar consistently produces the lowest analysis errors. The two methods provide comparable analysis accuracy when the forecast model is perfect, and when dense observation networks and large ensembles are available; however, E4DVar performs significantly better for sparse observations, and in cases where small ensemble sizes and model error decrease the optimal localization length scale. E4DVar also produces significantly lower analysis errors than 4DEnVar when either the assimilation window length or the model forcing term is increased; in both cases, the pure ensemble representation of covariances in the assimilation window is found to be less accurate than applying linearized models to evolve a localized covariance matrix in time.

The E4DVar method has additional advantages when a hybrid prior covariance is used in the cost function for imperfect model experiments. The ability of E4DVar to evolve time-dependent covariance from the climatological component in the assimilation window has the same effect as applying additive inflation in EnKFs, except without the extra sampling errors. We find that 4DEnVar benefits more from using hybrid perturbations—comprising samples from the ensemble forecast and climatological error distribution—rather than using the full climatological covariance matrix in the cost function. Hybrid perturbations allow the ensemble to evolve errors from the climatological covariance in time, but introduce additional sampling noise. The inability of 4DEnVar to evolve the full static covariance forward in time, however, only appears to be an issue when the forecast model is imperfect. In perfect model experiments, E4DVar and 4DEnVar benefit equally from using static covariance in the cost function to improve the prior error covariance estimated from small ensembles, or in cases where observations are sparse.

One drawback of E4DVar not addressed in this study is the necessary use of a simplified forecast model when coding the tangent linear and adjoint operators for weather applications. Data assimilation experiments performed at national forecast centers, such as Environment Canada (Buehner et al. 2010b) and the Met Office (Lorenc et al. 2014), suggest that the positive qualities of E4DVar outweigh the negative effects of simplified linear operators in global models. For high-resolution limited-area models, Gustafsson and Bojarova (2014) provide evidence that suggests the opposite to be true. The relative performance of E4DVar and 4DEnVar for convective-scale applications, where errors in physical parameterization schemes are nontrivial, remains an open question. Despite the potential errors from simplified linear models in E4DVar, this method has advantages over 4DEnVar when short localization length scales are required, when forecast errors grow quickly in time, and when a hybrid covariance is necessary to compensate for model errors. Furthermore, alternative approaches to the hybrid methods tested in this study (e.g., replacing 3DVar-FGAT in 4DEnVar with 4DVar or E4DVar) may prove more effective for weather applications. These strategies, however, would combine the most computationally expensive components of each system and would still require the coding of tangent linear and adjoint models.

Acknowledgments

This work was supported by the Office of Naval Research Grant N000140910526, the National Science Foundation Grant AGS-1305798, and NASA Grant NNX12AJ79G. The authors thank Steven Greybush from The Pennsylvania State University for many helpful discussions during the course of this work, and three anonymous reviewers for their insightful comments.

REFERENCES

Bishop, C. H., and D. Hodyss, 2011: Adaptive ensemble covariance localization in ensemble 4D-VAR state estimation. Mon. Wea. Rev., 139, 1241–1255.

Bishop, C. H., and E. A. Satterfield, 2013: Hidden error variance theory. Part I: Exposition and analytic model. Mon. Wea. Rev., 141, 1454–1468.

Buehner, M., 2005: Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting. Quart. J. Roy. Meteor. Soc., 131, 1013–1043.

Buehner, M., P. L. Houtekamer, C. Charette, H. Mitchell, and B. He, 2010a: Intercomparison of variational data assimilation and the ensemble Kalman filter for global deterministic NWP. Part I: Description and single-observation experiments. Mon. Wea. Rev., 138, 1550–1566.

Buehner, M., P. L. Houtekamer, C. Charette, H. Mitchell, and B. He, 2010b: Intercomparison of variational data assimilation and the ensemble Kalman filter for global deterministic NWP. Part II: One-month experiments with real observations. Mon. Wea. Rev., 138, 1567–1586.

Clayton, A. M., A. C. Lorenc, and D. M. Barker, 2013: Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Quart. J. Roy. Meteor. Soc., 139, 1445–1461.

Courtier, P., J.-N. Thepáut, and A. Hollingsworth, 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. Quart. J. Roy. Meteor. Soc., 120, 1367–1387.

Etherton, B. J., and C. H. Bishop, 2004: Resilience of hybrid ensemble/3DVAR analysis schemes to model error and ensemble covariance error. Mon. Wea. Rev., 132, 1065–1080.

Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162.

Fairbairn, D., S. R. Pring, A. C. Lorenc, and I. Roulstone, 2014: A comparison of 4DVar with ensemble data assimilation methods. Quart. J. Roy. Meteor. Soc., 140, 281–294.

Fisher, M., and E. Andersson, 2001: Developments in 4D-Var and Kalman filtering. ECMWF Tech. Memo. 347, ECMWF, 36 pp.

Gustafsson, N., and J. Bojarova, 2014: Four-dimensional ensemble variational (4D-En-Var) data assimilation for the HIgh Resolution Limited Area Model (HIRLAM). Nonlinear Processes Geophys., 21, 745–762.

Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter-3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.

Honda, Y., M. Nishijima, K. Koizumi, Y. Ohta, K. Tamiya, T. Kawabata, and T. Tsuyuki, 2005: A pre-operational variational data assimilation system for a non-hydrostatic model at the Japan Meteorological Agency: Formulation and preliminary results. Quart. J. Roy. Meteor. Soc., 131, 3465–3475.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811.

Huang, X., and Coauthors, 2009: Four-dimensional variational data assimilation for WRF: Formulation and preliminary results. Mon. Wea. Rev., 137, 299–314.

Kuhl, D. D., T. E. Rosmond, C. H. Bishop, J. McLay, and N. L. Baker, 2013: Comparison of hybrid ensemble/4DVar and 4DVar within the NAVDAS-AR data assimilation framework. Mon. Wea. Rev., 141, 2740–2758.

Le Dimet, F.-X., and O. Talagrand, 1986: Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus, 38A, 97–110.

Liu, C., and Q. Xiao, 2013: An ensemble-based four-dimensional variational data assimilation scheme. Part III: Antarctic applications with Advanced Research WRF using real data. Mon. Wea. Rev., 141, 2721–2739.

Liu, C., Q. Xiao, and B. Wang, 2008: An ensemble-based four-dimensional variational data assimilation scheme. Part I: Technical formulation and preliminary test. Mon. Wea. Rev., 136, 3363–3373.

Liu, C., Q. Xiao, and B. Wang, 2009: An ensemble-based four-dimensional variational data assimilation scheme. Part II: Observing System Simulation Experiments with Advanced Research WRF (ARW). Mon. Wea. Rev., 137, 1687–1704.

Lorenc, A. C., 1997: Development of an operational variational assimilation scheme. J. Meteor. Soc. Japan, 75, 339–346.

Lorenc, A. C., 2003: The potential of the ensemble Kalman filter for NWP: A comparison with 4D-Var. Quart. J. Roy. Meteor. Soc., 129, 3183–3203.

Lorenc, A. C., and Coauthors, 2000: The Met Office global three-dimensional variational data assimilation scheme. Quart. J. Roy. Meteor. Soc., 126, 2991–3012.

Lorenc, A. C., N. Bowler, A. Clayton, and S. Pring, 2014: Development of the Met Office's 4DEnVar system. Proc. Sixth EnKF Workshop, Buffalo, NY, Met Office. [Available online at hfip.psu.edu/fuz4/EnKF2014/EnKF-Day1/Lorenc_4DEnVar.pptx.]

Lorenz, E. N., 1996: Predictability: A problem partly solved. Proc. Seminar on Predictability, Vol. 1, Reading, United Kingdom, ECMWF, 1–18.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414.

Poterjoy, J., and F. Zhang, 2014: Intercomparison and coupling of ensemble and four-dimensional variational data assimilation methods for the analysis and forecasting of Hurricane Karl (2010)
.
Mon. Wea. Rev.
, 142, 3347–3364, doi:.
Sun
,
J.
, and
N. A.
Crook
,
1997
:
Dynamical and microphysical retrieval from Doppler radar observations using a cloud model and its adjoint. Part I: Model development and simulated data experiments
.
J. Atmos. Sci.
,
54
,
1642
1661
, doi:.
Thepáut
,
J.-N.
, and
P.
Courtier
,
1991
:
Four-dimensional variational data assimilation using the adjoint of a multilevel primitive-equation model
.
Quart. J. Roy. Meteor. Soc.
,
117
,
1225
1254
, doi:.
Thepáut
,
J.-N.
,
P.
Courtier
,
G.
Belaud
, and
G.
Lamaître
,
1996
:
Dynamical structure functions in a four-dimensional variational assimilation: A case study
.
Quart. J. Roy. Meteor. Soc.
,
122
,
535
561
, doi:.
Tian
,
X.
,
Z.
Xie
, and
A.
Dai
,
2008
: An ensemble-based explicit four-dimensional variational assimilation method. J. Geophys. Res.,113, D21124, doi:.
Tian
,
X.
,
Z.
Xie
,
A.
Dai
,
C.
Shi
,
B.
Jia
,
F.
Chen
, and
K.
Yang
,
2009
: A dual-pass variational data assimilation framework for estimating soil moisture profiles from AMSR-E microwave brightness temperature. J. Geophys. Res.,114, D16102, doi:.
Wang
,
X.
, and
T.
Lei
,
2014
:
GSI-based four-dimensional ensemble-variational (4DEnsVar) data assimilation: Formulation and single-resolution experiments with real data for NCEP Global Forecast System
.
Mon. Wea. Rev.
,
142
,
3303
3325
, doi:.
Wang
,
X.
,
C.
Snyder
, and
T. M.
Hamill
,
2007
:
On the theoretical equivalence of differently proposed ensemble-3DVar hybrid analysis schemes
.
Mon. Wea. Rev.
,
135
,
222
227
, doi:.
Zhang
,
F.
,
C.
Snyder
, and
J.
Sun
,
2004
:
Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter
.
Mon. Wea. Rev.
,
132
,
1238
1253
, doi:.
Zhang
,
F.
,
M.
Zhang
, and
J. A.
Hansen
,
2009
:
Coupling ensemble Kalman filter with four-dimensional variational data assimilation
.
Adv. Atmos. Sci.
,
26
,
1
8
, doi:.
Zhang
,
F.
,
M.
Zhang
, and
J.
Poterjoy
,
2013
:
E3DVar: Coupling an ensemble Kalman filter with three-dimensional variational data assimilation in a limited-area weather prediction model and comparison to E4DVar
.
Mon. Wea. Rev.
,
141
,
900
917
, doi:.
Zhang
,
M.
, and
F.
Zhang
,
2012
:
E4DVar: Coupling an ensemble Kalman filter with four-dimensional data assimilation in a limited-area weather prediction model
.
Mon. Wea. Rev.
,
140
,
587
600
, doi:.

Footnotes

1

The standard 4DVar uses B = B_c, where B_c is estimated from climatological background errors.

2

Another option is to apply the full nonlinear observation operator to the forecast members and mean before calculating , as is done in Liu et al. (2008).

3

As described in L96, the variables are scaled so that the time units in (13) are approximately equivalent to 5 days.