1. Introduction
For the past two decades or so, improving the performance of ensemble Kalman filters (EnKFs) has been the focus of many research studies across various Earth system applications, including numerical weather prediction (NWP) and oceanography (Carrassi et al. 2018, and references therein). The EnKF (Evensen 2003) approximates the background error covariance matrix with the sample covariance of an ensemble of model states. For unbiased forecasts and in the limit of a large ensemble, this usually produces a robust error covariance estimate that accurately represents the uncertainty in the background ensemble mean (Furrer and Bengtsson 2007; Sacher and Bartello 2008). Owing to the large dimension of Earth system models, however, using a large ensemble is not always possible. Moreover, even the most sophisticated atmospheric and oceanic solvers are known to contain model errors. As a result, the background error covariance estimate may have large inaccuracies that degrade the quality of the Kalman update.
Methods to ameliorate the imperfections in the background covariance include inflation (e.g., Pham et al. 1998; Mitchell and Houtekamer 2000; Anderson and Anderson 1999; Whitaker and Hamill 2012; El Gharamti 2018), localization (e.g., Houtekamer and Mitchell 1998; Bishop and Hodyss 2009a,b; Anderson 2012; Lei et al. 2016) and the use of multiphysics ensembles (e.g., Skamarock et al. 2008; Meng and Zhang 2008; Berner et al. 2011). Inflation increases the spread of the ensemble around the ensemble mean without changing the rank of the covariance. The rank may change if the inflation is designed to vary in space (El Gharamti et al. 2019). Localization, on the other hand, tackles sampling error by reducing spurious correlations in the error covariance usually resulting in a full-rank matrix. The multiphysics, often referred to as multischeme, approach addresses model error and limited ensemble spread by using ensemble members with different model configurations.
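To make these two covariance treatments concrete, the short Python sketch below applies multiplicative inflation and a Gaspari–Cohn (1999) compactly supported taper to an ensemble sample covariance. It is an illustrative sketch only, not the implementation used in this study; the half-width c, the one-dimensional grid, and the function names are assumptions made for the example.

import numpy as np

def sample_covariance(ens):
    # ens: (Ne, Nx) array of ensemble members; returns the (Nx, Nx) sample covariance
    return np.cov(ens, rowvar=False)

def inflate(ens, lam):
    # Multiplicative inflation: scale the perturbations about the mean by sqrt(lam),
    # which multiplies the sample covariance by lam without changing its rank.
    mean = ens.mean(axis=0)
    return mean + np.sqrt(lam) * (ens - mean)

def gaspari_cohn(dist, c):
    # Fifth-order piecewise rational function of Gaspari and Cohn (1999);
    # c is the half-width, and correlations are damped to zero beyond 2c.
    z = np.abs(dist) / c
    rho = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z <= 2.0)
    rho[m1] = (-0.25*z[m1]**5 + 0.5*z[m1]**4 + 0.625*z[m1]**3
               - (5.0/3.0)*z[m1]**2 + 1.0)
    rho[m2] = ((1.0/12.0)*z[m2]**5 - 0.5*z[m2]**4 + 0.625*z[m2]**3
               + (5.0/3.0)*z[m2]**2 - 5.0*z[m2] + 4.0 - (2.0/3.0)/z[m2])
    return rho

def localize(P, grid, c):
    # Schur (element-wise) product of the sample covariance with a compactly
    # supported correlation matrix, which removes distant spurious correlations.
    d = np.abs(grid[:, None] - grid[None, :])
    return P * gaspari_cohn(d, c)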
Because the ensemble size is limited, certain parts of the state’s distribution may be hidden from, or unknown to, the sample covariance matrix. Hamill and Snyder (2000) argued that blending some static (climatological) information into the ensemble covariance might present the ensemble with new correction directions. This can be achieved through a linear combination of the flow-dependent ensemble covariance and a stationary background covariance of the kind typically used in 3DVAR and optimal interpolation (OI) systems. The work of Bishop and Satterfield (2013) and Bishop et al. (2013) confirmed this and proposed an analytical justification for such linear hybridization under certain assumptions. In practice, many studies have used this hybrid covariance approach (e.g., Etherton and Bishop 2004; Buehner 2005; Wang et al. 2008; Kuhl et al. 2013; Clayton et al. 2013; Penny et al. 2015; Bowler et al. 2017) and demonstrated its effectiveness in enhancing the ensemble filtering procedure.
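A common way to write this linear hybridization, consistent with the limiting cases discussed in section 2 and assuming the two weights are constrained to sum to one, is

\[ \mathbf{P}_h^b = \alpha\,\mathbf{P}_e^b + (1-\alpha)\,\mathbf{B}, \qquad 0 \le \alpha \le 1, \]

where \(\mathbf{P}_e^b\) is the flow-dependent ensemble sample covariance, \(\mathbf{B}\) is the static background covariance, and \(\alpha\) is the weighting factor that must be tuned or estimated.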
Linearly combining the background ensemble and static covariance matrices requires tuning a weighting factor. Gharamti et al. (2014) proposed optimizing this factor by maximizing (or minimizing) the information gain between the forecast and the analysis statistics, and tested the approach for state and parameter estimation in a subsurface contaminant transport problem. Ménétrier and Auligné (2015) used a variational procedure to find the optimal localization and hybridization parameters simultaneously. Their technique tackles sampling error by minimizing the quadratic error between the localized-hybrid covariance and an asymptotic covariance matrix (obtained with a very large ensemble); they validated the method using a 48 h WRF ensemble forecast system over North America. Satterfield et al. (2018) estimated the weighting factor by first finding the distribution of the true error variance given the ensemble variance and then computing its mean. Using high-order polynomial regression, the authors could approximately estimate the weighting between the static and the ensemble covariance.
Instead of weighting the ensemble and static background covariance matrices with a deterministic factor, this study investigates the behavior of the hybrid system when the weighting factor is treated as a random variable. A Bayesian algorithm is proposed in which the probability density function (pdf) of the weighting factor is updated, at each data assimilation (DA) cycle, using the available data. The derivation is presented for two scenarios: a spatially constant and a spatially varying weighting field. The proposed scheme has a number of attractive features: 1) it is sequential in nature, such that the posterior pdf of the weight at the current time becomes the prior at the next DA cycle; 2) it allows for adaptive selection of the covariance weighting without manual tuning; 3) it can mitigate not only sampling errors but also model errors; and 4) it is easy to implement.
The rest of the paper is organized as follows. Section 2 introduces a Bayesian approach to estimate the hybrid weighting factor. The performance of the proposed scheme is compared to that of the EnKF using twin experiments in section 3, where sensitivity to ensemble size, model error, and the observation network is tested and analyzed. A summary of the findings and further discussion are given in section 4.
2. Adaptive hybrid ensemble–variational scheme
Setting α = 1 in Eq. (3) results in an EnKF with a purely flow-dependent ensemble-based covariance. For α = 0, the system morphs into an ensemble optimal interpolation (EnOI) scheme. Some studies impose two different weighting factors on the ensemble and the static covariances such that the weights do not sum to 1. This is usually attributed to under- or overdispersion in the spread of certain variables in the ensemble.
a. Bayesian approach to choose α
Initially, the prior pdf for α needs to be specified. A simple choice could be a Gaussian; however, since α only varies between 0 and 1, a beta distribution may be a more suitable choice. At each assimilation step, the algorithm proceeds by taking the product of the prior pdf and the likelihood as shown in Eq. (4). After finding the posterior pdf for α, its mode is computed and used in Eq. (3) in the next forecast cycle. The prior for the subsequent cycle is then set equal to this posterior.
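Because the posterior in Eq. (4) is generally not available in closed form (as noted in the illustration below, it is neither exactly beta nor Gaussian), the update can be carried out numerically on a grid of α values. The following Python sketch illustrates the idea, assuming a beta prior and a Gaussian innovation likelihood whose variance is the hybridized background variance plus the observation error variance; this particular likelihood form (standing in for Eq. (7)) and the parameter values are assumptions made for illustration only.

import numpy as np
from scipy.stats import beta, norm

# Grid over the admissible range of alpha
a = np.linspace(1e-4, 1.0 - 1e-4, 2001)

# Prior pdf for alpha: a beta density confined to [0, 1] (hypothetical shape parameters)
prior = beta.pdf(a, 2.0, 2.0)

# Single-variable background statistics (hypothetical values)
sig2_e, sig2_s, sig2_o = 1.0, 2.0, 0.5   # ensemble, static, and observation error variances
d = 1.5                                  # innovation (observation minus prior mean)

# Assumed Gaussian likelihood of the innovation given alpha:
# d ~ N(0, alpha*sig2_e + (1 - alpha)*sig2_s + sig2_o)
tot_var = a*sig2_e + (1.0 - a)*sig2_s + sig2_o
like = norm.pdf(d, loc=0.0, scale=np.sqrt(tot_var))

# Bayes rule on the grid: multiply and renormalize; the mode feeds the next cycle
post = prior * like
post /= np.trapz(post, a)
alpha_mode = a[np.argmax(post)]
print(f"posterior mode of alpha: {alpha_mode:.3f}")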
Illustration
We would like to test the Bayesian scheme outlined above in a simple 1D example. We consider a single variable ensemble with variance
The prior pdfs and the likelihood are plotted in Fig. 1. For visualization purposes, the likelihood (red curve) has been scaled so that it can be viewed on the same axis as the other pdfs. Multiplying the priors by the likelihood produces the two blue posterior pdfs. Analytically, the posteriors are not exactly beta and Gaussian like their prior counterparts, although visually they are similar. Before the update, equal weight is placed on the ensemble and static background variances. After the update, the modes of the Gaussian-like and beta-like posteriors are α = 0.66 and α = 0.76, respectively. Because of the relatively large innovation (prior mean minus observation), the update places more weight on the larger variance (here, the ensemble), thus increasing the background spread. Roughly speaking, this increases the “apparent” consistency between the prior and the observation. As can be seen, the variances of the prior pdfs shrank by almost 42% after the update.

Fig. 1. A single variable example illustrating the Bayesian update for α. Two choices for the marginal prior distribution of α are considered: beta (thick black line) and Gaussian (thin black line). The likelihood function estimated from Eq. (7) is represented by the dot–dashed red line. The thick and thin blue lines are the posterior distributions for the beta and Gaussian priors, respectively. “M” in the legend denotes the mode of the pdf, while “V” is the variance.
To study the impact of the innovation on the proposed Bayesian update scheme, we consider three cases: small bias (d = 0.01), moderate bias (d = 1), and large bias (d = 3). We assess the behavior for a wide range of values and different combinations of the ensemble and static background variances.

Fig. 2. Bayesian update for α in the single-variable example for different bias scenarios: (left) small bias, (center) moderate bias, and (right) large bias. The results are shown for a range of values for the ensemble and static background variances.
b. Spatially varying α
Is it appropriate to use a single value of α for the entire state domain? For most models, the answer is generally no, because both model and sampling errors may not be homogeneous in space. For instance, ocean models have long exhibited strong warm SST biases in the Southern Ocean and cold biases in the North Pacific and Atlantic Oceans. Another reason is that we often work with complex observation networks in which some regions are densely observed and others only sparsely observed. This generally implies that the ensemble spread in different regions can be vastly different. As an example, consider in situ radiosonde observations: the densely observed troposphere over Europe and North America might have smaller ensemble variance than less constrained areas in the Southern Hemisphere. It is therefore nearly impossible to address different issues in different parts of the domain if α is spatially homogeneous. The problem may be alleviated if each state variable is assigned a different weighting factor.
1) The algorithm
2) Implementation and cost
The update procedure follows the two-step DA approach (Anderson 2003), commonly used in the Data Assimilation Research Testbed (DART; Anderson et al. 2009). The update of the weighting factors happens before updating the state, as follows (a schematic code rendering of the analysis step is given after the listing):
- Forecast step:
- Starting from the most recent analysis state, the ensemble members are propagated using the model until the next observation time.
- Analysis step: loop over the observations, o = 1 → No:
2.1) Update α: loop over each state variable, j = 1 → Nx, and compute the posterior of α using (A2) and (14).
2.2) Compute the observation increments: evaluate the forward operators on the background ensemble and the static states; compute the hybridized observation-space variance; and, using the observation value and its variance, compute the increments δy.
2.3) Update x: loop over each state variable, j = 1 → Nx; compute the hybridized state–observation covariance and add the observation increments to the state.
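The listing below is a schematic Python rendering of the analysis step above; it is not the DART implementation. The helper routines forward_operator and alpha_posterior_mode, the perturbed-observation form of the increments (a DART-based implementation would use a deterministic adjustment instead), and the array layouts are assumptions made for the sketch.

import numpy as np

def analysis_step(ens, static, alpha, obs_vals, obs_err_var, obs_ind,
                  forward_operator, alpha_posterior_mode, rng=np.random.default_rng()):
    # ens    : (Ne, Nx) forecast ensemble        static: (Ns, Nx) climatological states
    # alpha  : (Nx,) spatially varying weights   obs_* : values / error variances / indices
    Ne, Nx = ens.shape
    for y, r, k in zip(obs_vals, obs_err_var, obs_ind):
        # 2.1) Update alpha for every state variable given this observation
        for j in range(Nx):
            alpha[j] = alpha_posterior_mode(alpha[j], ens[:, j], static[:, j], y, r)

        # 2.2) Observation increments from the hybridized observation-space variance
        hy_e = forward_operator(ens, k)        # ensemble members in observation space, (Ne,)
        hy_s = forward_operator(static, k)     # static states in observation space, (Ns,)
        var_h = alpha[k]*np.var(hy_e, ddof=1) + (1.0 - alpha[k])*np.var(hy_s, ddof=1)
        gain = var_h / (var_h + r)
        y_pert = y + rng.normal(0.0, np.sqrt(r), size=Ne)   # perturbed observations
        dy = gain * (y_pert - hy_e)                          # increments, (Ne,)

        # 2.3) Regress the increments onto each state variable with the
        #      hybridized state-observation covariance
        for j in range(Nx):
            cov_e = np.cov(ens[:, j], hy_e, ddof=1)[0, 1]
            cov_s = np.cov(static[:, j], hy_s, ddof=1)[0, 1]
            cov_h = alpha[j]*cov_e + (1.0 - alpha[j])*cov_s
            ens[:, j] += (cov_h / var_h) * dy
    return ens, alpha

In this sketch each observation is assumed to measure a single state variable indexed by obs_ind, so forward_operator may simply return the corresponding column of its input.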
In terms of computational cost, updating the weighting factors in step 2.1 is relatively cheap compared to the other operations in steps 2.2 and 2.3. To find the posterior distribution of α, one first needs to solve the cubic formula in (A2) and then evaluate (14). Such a computation is dominated by evaluating 2Nx cubic roots and Nx natural logarithms. For large ensemble sizes (i.e., Ne > 20), the computational cost added by step 2.1 was found to be smaller than 10% of the total computation time. This was demonstrated in a range of toy models such as Lorenz 63 and Lorenz 96, and there is no reason to believe that this overhead would be considerably different in large systems. In addition to step 2.1, computing the forward operators on the background climatological states in step 2.2 every assimilation cycle can be quite costly. This issue diminishes when the observation network does not change substantially between assimilation cycles: if the network is assumed fixed in time, these forward operators need only be computed once and reused.
Apart from complexity, the proposed hybrid technique requires storing the library of static background (climatological) states, which can place a considerable demand on memory for large models.
3. Observing system simulation experiments
The initial ensemble members are randomly drawn from a normal distribution centered at ξ0 with a unit variance. The initial ensemble mean ξ0 is obtained by integrating the model for 5 years in real-world time starting from the last forecast of the truth run. Climatology states xs,i are sampled from a very long model run every 5000 time steps for a total of Ns = 1000. The choice of 1000 static background states was made through offline single-cycle DA experiments (not shown) in which a wide range of values for α were tested. Ns could be different for other models depending on the details of the dynamics. Figure 3 shows the static background covariance matrix and an initial ensemble covariance constructed using 10 members. As can be seen, the small-ensemble estimate is considerably noisier, with spurious correlations between distant variables that are absent from the static background covariance.
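For reference, a minimal Python sketch of the Lorenz 96 model (Lorenz 1996) and of the climatology sampling described above is given here. The fourth-order Runge–Kutta integrator, the time step of 0.05, and the state size of Nx = 40 (consistent with the 20 odd-numbered observations used below) are assumptions; the paper's exact spin-up details may differ.

import numpy as np

def l96_tendency(x, forcing):
    # Lorenz 96: dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing):
    # Classical fourth-order Runge-Kutta step
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5*dt*k1, forcing)
    k3 = l96_tendency(x + 0.5*dt*k2, forcing)
    k4 = l96_tendency(x + dt*k3, forcing)
    return x + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

Nx, forcing, dt = 40, 8.0, 0.05          # hypothetical configuration values
rng = np.random.default_rng(0)

# Build the static (climatological) library from a long free run, keeping every
# 5000th step until Ns = 1000 states are collected (slow; shorten the stride to test).
x = forcing + 0.01*rng.standard_normal(Nx)
Ns, stride = 1000, 5000
static_states = np.empty((Ns, Nx))
for i in range(Ns):
    for _ in range(stride):
        x = rk4_step(x, dt, forcing)
    static_states[i] = x

# Initial 10-member ensemble: random draws with unit variance about a spun-up mean
Ne = 10
ens0 = x + rng.standard_normal((Ne, Nx))       # x stands in for the initial mean
P_e = np.cov(ens0, rowvar=False)               # 10-member sample covariance (cf. Fig. 3)
B = np.cov(static_states, rowvar=False)        # static background covariance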

Fig. 3. (top) Initial ensemble covariance constructed using 10 members and (bottom) the static background covariance matrix.
Each data assimilation run below was repeated 20 times, with randomly drawn initial ensembles and observation errors in each trial. The presented metrics, e.g., RMSE, are averaged over all 20 runs.
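For reference, the averaging of the verification metrics can be sketched as follows; the array layout (assimilation cycles by state variables, saved for each of the 20 trials) and the per-cycle spatial RMSE convention are assumptions.

import numpy as np

def trial_metrics(prior_mean, prior_spread, truth):
    # prior_mean, prior_spread, truth: (Nt, Nx) arrays for one trial
    rmse_t = np.sqrt(np.mean((prior_mean - truth)**2, axis=1))  # spatial RMSE per cycle
    rmse = rmse_t.mean()                                        # then averaged in time
    aes = prior_spread.mean()                                   # average ensemble spread
    return rmse, aes

def average_over_trials(trials):
    # trials: list of (prior_mean, prior_spread, truth) tuples, one per repetition
    rmses, ratios = [], []
    for pm, sp, tr in trials:
        rmse, aes = trial_metrics(pm, sp, tr)
        rmses.append(rmse)
        ratios.append(aes / rmse)   # spread-to-RMSE consistency ratio (~1 is desirable)
    return float(np.mean(rmses)), float(np.mean(ratios))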
a. Ensemble size
In the first set of experiments, we assume perfect model conditions to assess the performance of the proposed adaptive hybrid scheme for different ensemble sizes. The forecast model and the climatology both use a forcing of 8 like the truth run. The observation network consists of the odd numbered variables for a total of 20 observations. We compare five different filtering schemes: EnKF with no enhancements (i.e., no inflation or localization), EnOI, EnKF-OI with a fixed weighting factor α = 0.5 (nonadaptive), EnKF-OI-c, and EnKF-OI-v. For the adaptive schemes, the initial distribution of the weighting factor is assumed normal with mean
Figure 4 plots the prior and posterior root-mean-square errors (RMSE) for ensemble sizes ranging from 3 to 200. The RMSE has been averaged both in space and time. For very small ensemble sizes (Ne < 20), the EnKF failed to produce useful forecasts and suffered from ensemble divergence. This is not surprising; it confirms that implementing the EnKF with a limited ensemble size and no other statistical enhancements is unreliable and does not yield meaningful results. For Ne ≤ 45, the EnOI outperformed the EnKF, producing better quality forecasts and analyses. Note that the EnOI is independent of the ensemble size because it uses static background perturbations. The EnKF starts becoming competitive with the other filters when the ensemble size is at least 80. Among all tested DA schemes, the EnKF-OI-v is the most accurate for all ensemble sizes. The EnKF-OI-c matches the performance of the EnKF-OI-v for Ne < 50, but for larger ensemble sizes EnKF-OI-v is the superior scheme. The hybrid EnKF-OI with a fixed weighting factor performs better than the EnKF for all tested ensemble sizes, and better than the EnOI only for Ne > 20.

Fig. 4. Prior (solid) and posterior (dashed) spatially and temporally averaged RMSE resulting from EnKF, EnOI, EnKF-OI (α fixed to 0.5), EnKF-OI-c, and EnKF-OI-v for different ensemble sizes. For clarity, the y axis is shown in log scale.
Figure 4 indicates that for very small ensemble sizes, the proposed EnKF-OI-v is able to match the accuracy of the EnOI. As the ensemble size increases, the performance of EnKF-OI-v improves until it matches and slightly outperforms that of the EnKF for large ensemble sizes. This illustrates the robust functionality of the proposed scheme in the presence of sampling errors only. As can be seen in Fig. 5, the EnKF-OI-v decreases the weight on the ensemble covariance to 0.1 for Ne ≤ 5. By doing so, the filter can retrieve perturbations with larger magnitude from the static background covariance in order to combat severe sampling errors in the ensemble. As the ensemble size increases, sampling errors diminish and α is shown to converge to 1. Since it is spatially varying, EnKF-OI-v is more responsive to changes in the statistics of the ensemble as compared to EnKF-OI-c. For example, with an ensemble size of 200 one would expect the ensemble to be the main source for the background uncertainties and that is indeed the case for EnKF-OI-v (α averaged in time and space is 0.96). EnKF-OI-c, on the other hand, still partially relies on the static background covariance matrix assigning it a ~20% weight.

Fig. 5. Weighting factor obtained using both adaptive hybrid schemes as function of the ensemble size. Each asterisk denotes an average over all assimilation cycles. For EnKF-OI-c, averaging is performed over time. The result of the EnKF-OI-v involves averaging both in space and time.
The ratio between the average ensemble spread and the RMSE is found to be close to unity for the EnKF-OI-v. Additional discussion of this ratio, under imperfect model conditions, is presented in the next section.
b. Model error
To simulate model errors, we vary the forcing term in the forecast model. Eleven DA experiments are performed, using 20 ensemble members, in which F is set to 3, 4, …, 13. In each experiment,
1) Effects of inflation
Average prior RMSE results for nine different inflation values are shown in Fig. 6. Both the EnKF and EnKF-OI-v diverge when using λ = 2 for all tested forcing values. When model biases are large and the model is very chaotic (F = 12 and 13), the EnKF becomes numerically unstable and inflation does not seem to help. In contrast, EnKF-OI-v does not blow up in these biased conditions. Both schemes behave equally well for small forcing values, e.g., F = 3, 4. As F increases, the EnKF-OI-v estimates are significantly more accurate than those of the EnKF. The smallest RMSE is obtained using EnKF-OI-v for F = 8 (no model errors) and λ = 1.08. We note that the hybrid scheme with no inflation (i.e., λ = 1) outperforms the EnKF with any inflation for all tested bias scenarios. Although not shown here, we reached a similar assessment for the posterior RMSE estimates.

Fig. 6. Average prior RMSE resulting from (top) the EnKF and (middle) the EnKF-OI-v for different forcing and inflation configurations. Shaded regions in gray indicate filter divergence. (bottom) The temporally and spatially averaged weighting factor from EnKF-OI-v. The ensemble size is set to 20.
To understand the interaction between the adaptive hybrid algorithm and inflation, we plot the resulting spatially and temporally averaged weighting factor in the bottom panel of Fig. 6. It is evident that the change in α is proportional to that of λ: as inflation increases, biases and sampling errors in the prior ensemble are increasingly compensated, and the adaptive algorithm responds by placing more weight on the flow-dependent ensemble covariance.
2) Effects of localization
Instead of inflating the background variance, we now limit the size of the update for each state variable through localization. Ten different cutoff radii ranging from 0.1 to 100 are tested. Average prior RMSE resulting from the EnKF and the EnKF-OI-v are shown in Fig. 7. Given the sampling errors and strong model biases, both the EnKF and EnKF-OI-v perform best when the localization radius is relatively small. Similar to the previous set of experiments, the performance of the EnKF degrades strongly for large values of F. The EnKF-OI-v systematically outperforms the EnKF for all tested localization and forcing cases. Moreover, with no localization (a cutoff of 100) and in the presence of model errors, the EnKF-OI-v can match the forecast accuracy of a localized EnKF with a 0.2 cutoff.

Fig. 7. Average prior RMSE resulting from (left) the EnKF and (right) the EnKF-OI-v for different forcing and localization configurations (Ne = 20). Shaded regions in gray indicate filter divergence.
The impact of localizing the flow-dependent background ensemble covariance on the update of the weighting factor is described in Fig. 8. Two opposing patterns are visible (delineated by the dashed–dotted line). Under rapidly changing and chaotic conditions (i.e., F ≥ 8), α tends to increase as the impact of localization increases. Given the strong error growth in the forecast model, the spread of the ensemble stays large enough throughout the experiment; consequently, the adaptive algorithm pushes α closer to 1, retaining the localized flow-dependent perturbations. On the contrary, when the conditions are more stable (F < 8), α shrinks as the algorithm attempts to increase the variability of the underdispersed ensemble using the static background perturbations. To confirm this analysis, the left panel of Fig. 8 shows the ratio of the average ensemble spread (AES) obtained using the EnKF to that of the EnKF-OI-v. The AES of the EnKF for F < 8 is very small compared to that of the EnKF-OI-v; in the case of F = 3, the plotted ratio is ~0.1, indicating that the prior spread of the EnKF is only 10% of the spread of the EnKF-OI-v. The variance of both schemes is approximately the same for F ≥ 8, with an AES ratio roughly equal to 1.

Fig. 8. (left) Ratio of the average ensemble spread resulting from the EnKF to that of EnKF-OI-v. (right) Spatially and temporally averaged weighting factor from the EnKF-OI-v runs. Results are shown for different forcing and localization configurations. The ensemble size is set to 20. Shaded regions in gray indicate filter divergence (i.e., hybrid filter never diverged).
c. Observation network
The spatially adaptive hybrid scheme is further tested using four sparse observation networks. Observed locations in these networks are as follows: Data void I (DV-I), the first 20 variables; DV-II, the first and last 5 variables; DV-III, 10 variables in the center of the domain; and DV-IV, only 5 center variables. These experiments assume a perfect forecast model and use an ensemble size of 20. The localization cutoff radius is 0.1 and inflation is turned off.
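For concreteness, the four networks can be encoded as index sets over the Nx = 40 L96 variables; the 0-based indexing and the exact centering of DV-III and DV-IV are implementation assumptions.

import numpy as np

Nx = 40                                           # L96 state size
dv1 = np.arange(0, 20)                            # DV-I  : the first 20 variables
dv2 = np.r_[np.arange(0, 5), np.arange(35, 40)]   # DV-II : the first and last 5 variables
dv3 = np.arange(15, 25)                           # DV-III: 10 variables at the domain center
dv4 = np.arange(17, 22)                           # DV-IV : 5 variables at the domain center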
The time-averaged spatial distribution of the weighting factor resulting from each experiment is plotted in Fig. 9. Because the ensemble covariance is heavily localized, the weighting factor values are close to 1; similar behavior was observed and discussed for Fig. 8. Spatially, observed locations are assigned smaller weight than nonobserved ones because they are constantly updated and thus have smaller ensemble spread. Partially relying on the static background covariance at these locations helps compensate for the reduced ensemble spread.

Fig. 9. Average weighting factor in space obtained using the proposed EnKF-OI-v for Ne = 20. Different curves represent four different observation networks.

Fig. 10. Weighting factor in space obtained using the proposed EnKF-OI-v for the first 800 DA cycles. The ensemble size is set to 20.
Combined with model errors (F = 10), DV-I is used in a final assessment to compare the performance of all filtering schemes. The best performance (obtained after tuning inflation and localization) of the EnKF, EnKF-OI (α = 1/2), EnKF-OI-c, and EnKF-OI-v is compared for ensemble sizes ranging from 15 to 200. As shown in Fig. 11, the proposed EnKF-OI-c and EnKF-OI-v outperform the EnKF for all tested ensemble sizes, especially small ones. In fact, the adaptive hybrid schemes using 20 members produce more accurate prior estimates than a well-tuned 120-member EnKF. Unlike the perfect-model case (Fig. 4), in which EnKF-OI-v adapted to α ≈ 1 for large ensemble sizes, here α only reached 0.7 for Ne = 200. This indicates that relying only on ensemble perturbations in the case of sparse observations and in the presence of model errors may yield unsatisfactory state estimates. The nonadaptive hybrid scheme performs well for small ensemble sizes; however, it is unable to match the accuracy of the EnKF for large ensemble sizes. The improvements of the EnKF-OI-v over the EnKF-OI-c are minimal (~3%), which may be attributed to the nature of the spatial correlations in the L96 model, which are not as significant as those found in a large GCM.

Fig. 11. Spatially and temporally averaged prior RMSE obtained for different ensemble sizes using EnKF, EnKF-OI (α = 1/2), EnKF-OI-c, and EnKF-OI-v. The schemes are tuned for best performance using inflation and localization. Model errors are simulated using F = 10 in the forecast model and the observation network is set to data void I (DV-I).
4. Summary and discussion
A new spatially and temporally varying adaptive hybrid ensemble and variational filtering algorithm is introduced. The individual weights assigned to the ensemble and the static background covariance matrices are assumed to be random variables. A Bayesian approach is used to estimate these weights conditioned on available data. The resulting data assimilation scheme can be decomposed into: 1) a filter to estimate the weighting factors α and 2) another filter to estimate the state x. Using the Lorenz 96 nonlinear model, the proposed scheme was tested against the ensemble Kalman filter in various settings.
Under perfect modeling conditions, EnKF-OI-v (the spatially varying form of the scheme) was found to be more accurate than the EnKF for a wide range of ensemble sizes. For very small ensemble sizes, both EnKF-OI-v and EnKF-OI-c (the spatially constant variant) are less prone to sampling errors as they morph into an EnOI system that fully utilizes the static background perturbations. For large ensemble sizes, the adaptive scheme makes the hybrid filter behave just like an EnKF with purely flow-dependent prior statistics. With model imperfections, the EnKF-OI-v was found to be distinctly more stable than the EnKF, producing quality forecasts even in highly biased conditions. The adaptive scheme appropriately selected the weighting coefficients given the amount of inflation and localization imposed on the ensemble; for instance, when there was enough variability in the system, highly localized ensembles were assigned larger weight. Spatially, EnKF-OI-v was able to detect densely observed regions and counteract the ensemble spread reduction there by increasing the weight on the static background covariance.
One of the drawbacks of the proposed adaptive algorithm is its computational cost. As discussed in section 2, a large inventory of climatological model states needs to be stored on the machine if the static background covariance is to be represented by an ensemble of climatological perturbations rather than an explicit matrix.
This work proposed the idea of using a different weighting coefficient for every grid cell. One of the advantages of such an approach is that it removes some of the sensitivity of ensemble filters to choices of localization (Fig. 7). The hybrid covariances, apart from being full rank, are generally less noisy and have more clearly identifiable spatial structures. This study, however, did not provide a complete assessment of the properties of the resulting hybrid covariances when a spatially constant or a spatially varying weighting field is used; this is an interesting line of research that could be explored going forward. Furthermore, non-Gaussian distributions for the weighting factors should be contemplated. Allowing the variance of α to change in time is also something this study did not focus on and would be worth investigating. Testing the performance of the EnKF-OI-v in large Earth system models using real data will be considered in future research, including the assimilation of spatially heterogeneous datasets and nonlocal observations.
The proposed spatially varying adaptive hybrid scheme requires the observations to be processed serially. If the observations are to be assimilated all at once, as in many NWP systems, one may be restricted to using the spatially constant adaptive form. In the current L96 framework, the difference between the performance of EnKF-OI-v and EnKF-OI-c was minimal; however, this might not be the case in other systems where spatial correlations have different patterns and significance. In general, estimating even a spatially constant weighting factor is far superior to using a time-invariant one, making this work quite relevant to NWP systems. In fact, one possible application could be state-of-the-art hybrid ensemble–variational systems such as hybrid 4DEnVar (Lorenc et al. 2015). Without using an adjoint or a tangent linear model, having the right weighting between the ensemble and the static background covariance can be crucial.
The author thanks three anonymous reviewers for their useful comments and suggestions. The author would also like to thank Jeff Anderson for intriguing discussions. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.
Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
APPENDIX
Posterior Estimate of the Weighting Factor
REFERENCES
Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, https://doi.org/10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.
Anderson, J. L., 2007: An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus, 59A, 210–224, https://doi.org/10.1111/j.1600-0870.2006.00216.x.
Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61A, 72–83, https://doi.org/10.1111/j.1600-0870.2008.00361.x.
Anderson, J. L., 2012: Localization and sampling error correction in ensemble Kalman filter data assimilation. Mon. Wea. Rev., 140, 2359–2371, https://doi.org/10.1175/MWR-D-11-00013.1.
Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev., 127, 2741–2758, https://doi.org/10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.
Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Avellano, 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, https://doi.org/10.1175/2009BAMS2618.1.
Berner, J., S.-Y. Ha, J. Hacker, A. Fournier, and C. Snyder, 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 1972–1995, https://doi.org/10.1175/2010MWR3595.1.
Bishop, C. H., and D. Hodyss, 2009a: Ensemble covariances adaptively localized with ECO-RAP. Part I: Tests on simple error models. Tellus, 61A, 84–96, https://doi.org/10.1111/j.1600-0870.2008.00371.x.
Bishop, C. H., and D. Hodyss, 2009b: Ensemble covariances adaptively localized with ECO-RAP. Part II: A strategy for the atmosphere. Tellus, 61A, 97–111, https://doi.org/10.1111/j.1600-0870.2008.00372.x.
Bishop, C. H., and E. A. Satterfield, 2013: Hidden error variance theory. Part I: Exposition and analytic model. Mon. Wea. Rev., 141, 1454–1468, https://doi.org/10.1175/MWR-D-12-00118.1.
Bishop, C. H., E. A. Satterfield, and K. T. Shanley, 2013: Hidden error variance theory. Part II: An instrument that reveals hidden error variance distributions from ensemble forecasts and observations. Mon. Wea. Rev., 141, 1469–1483, https://doi.org/10.1175/MWR-D-12-00119.1.
Bowler, N., and Coauthors, 2017: The effect of improved ensemble covariances on hybrid variational data assimilation. Quart. J. Roy. Meteor. Soc., 143, 785–797, https://doi.org/10.1002/qj.2964.
Buehner, M., 2005: Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting. Quart. J. Roy. Meteor. Soc., 131, 1013–1043, https://doi.org/10.1256/qj.04.15.
Carrassi, A., M. Bocquet, L. Bertino, and G. Evensen, 2018: Data assimilation in the geosciences: An overview of methods, issues, and perspectives. Wiley Interdiscip. Rev.: Climate Change, 9, e535, https://doi.org/10.1002/wcc.535.
Clayton, A. M., A. C. Lorenc, and D. M. Barker, 2013: Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Quart. J. Roy. Meteor. Soc., 139, 1445–1461, https://doi.org/10.1002/qj.2054.
Descombes, G., T. Auligné, F. Vandenberghe, D. Barker, and J. Barre, 2015: Generalized background error covariance matrix model (gen_be v2.0). Geosci. Model Dev., 8, 669–696, https://doi.org/10.5194/gmd-8-669-2015.
Desroziers, G., L. Berre, B. Chapnik, and P. Poli, 2005: Diagnosis of observation, background and analysis-error statistics in observation space. Quart. J. Roy. Meteor. Soc., 131, 3385–3396, https://doi.org/10.1256/qj.05.108.
El Gharamti, M., 2018: Enhanced adaptive inflation algorithm for ensemble filters. Mon. Wea. Rev., 146, 623–640, https://doi.org/10.1175/MWR-D-17-0187.1.
El Gharamti, M., K. Raeder, J. Anderson, and X. Wang, 2019: Comparing adaptive prior and posterior inflation for ensemble filters using an atmospheric general circulation model. Mon. Wea. Rev., 147, 2535–2553, https://doi.org/10.1175/MWR-D-18-0389.1.
Etherton, B. J., and C. H. Bishop, 2004: Resilience of hybrid ensemble/3DVAR analysis schemes to model error and ensemble covariance error. Mon. Wea. Rev., 132, 1065–1080, https://doi.org/10.1175/1520-0493(2004)132<1065:ROHDAS>2.0.CO;2.
Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367, https://doi.org/10.1007/s10236-003-0036-9.
Furrer, R., and T. Bengtsson, 2007: Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants. J. Multivar. Anal., 98, 227–255, https://doi.org/10.1016/j.jmva.2006.08.003.
Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, https://doi.org/10.1002/qj.49712555417.
Gharamti, M. E., J. Valstar, and I. Hoteit, 2014: An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models. Adv. Water Resour., 71, 1–15, https://doi.org/10.1016/j.advwatres.2014.05.001.
Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919, https://doi.org/10.1175/1520-0493(2000)128<2905:AHEKFV>2.0.CO;2.
Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811, https://doi.org/10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.
Kuhl, D. D., T. E. Rosmond, C. H. Bishop, J. McLay, and N. L. Baker, 2013: Comparison of hybrid ensemble/4DVar and 4DVar within the NAVDAS-AR data assimilation framework. Mon. Wea. Rev., 141, 2740–2758, https://doi.org/10.1175/MWR-D-12-00182.1.
Lei, L., J. L. Anderson, and J. S. Whitaker, 2016: Localizing the impact of satellite radiance observations using a global group ensemble filter. J. Adv. Model. Earth Syst., 8, 719–734, https://doi.org/10.1002/2016MS000627.
Lorenc, A. C., N. E. Bowler, A. M. Clayton, S. R. Pring, and D. Fairbairn, 2015: Comparison of hybrid-4DEnVar and hybrid-4DVar data assimilation methods for global NWP. Mon. Wea. Rev., 143, 212–229, https://doi.org/10.1175/MWR-D-14-00195.1.
Lorenz, E. N., 1996: Predictability: A problem partly solved. Proc. Seminar on Predictability, Vol. 1, Reading, United Kingdom, ECMWF, 1–18, https://www.ecmwf.int/node/10829.
Ménétrier, B., and T. Auligné, 2015: Optimized localization and hybridization to filter ensemble-based covariances. Mon. Wea. Rev., 143, 3931–3947, https://doi.org/10.1175/MWR-D-15-0057.1.
Meng, Z., and F. Zhang, 2008: Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part III: Comparison with 3DVAR in a real-data case study. Mon. Wea. Rev., 136, 522–540, https://doi.org/10.1175/MWR3352.1.
Mitchell, H. L., and P. Houtekamer, 2000: An adaptive ensemble Kalman filter. Mon. Wea. Rev., 128, 416–433, https://doi.org/10.1175/1520-0493(2000)128<416:AAEKF>2.0.CO;2.
Parrish, D. F., and J. C. Derber, 1992: The National Meteorological Center’s spectral statistical-interpolation analysis system. Mon. Wea. Rev., 120, 1747–1763, https://doi.org/10.1175/1520-0493(1992)120<1747:TNMCSS>2.0.CO;2.
Penny, S. G., D. W. Behringer, J. A. Carton, and E. Kalnay, 2015: A hybrid global ocean data assimilation system at NCEP. Mon. Wea. Rev., 143, 4660–4677, https://doi.org/10.1175/MWR-D-14-00376.1.
Pham, D. T., J. Verron, and M. C. Roubaud, 1998: A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Mar. Syst., 16, 323–340, https://doi.org/10.1016/S0924-7963(97)00109-7.
Sacher, W., and P. Bartello, 2008: Sampling errors in ensemble Kalman filtering. Part I: Theory. Mon. Wea. Rev., 136, 3035–3049, https://doi.org/10.1175/2007MWR2323.1.
Satterfield, E. A., D. Hodyss, D. D. Kuhl, and C. H. Bishop, 2018: Observation-informed generalized hybrid error covariance models. Mon. Wea. Rev., 146, 3605–3622, https://doi.org/10.1175/MWR-D-18-0016.1.
Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., https://doi.org/10.5065/D68S4MVH.
Wang, X., D. M. Barker, C. Snyder, and T. M. Hamill, 2008: A hybrid ETKF–3DVAR data assimilation scheme for the WRF Model. Part II: Real observation experiments. Mon. Wea. Rev., 136, 5132–5147, https://doi.org/10.1175/2008MWR2445.1.
Whitaker, J. S., and T. M. Hamill, 2012: Evaluating methods to account for system errors in ensemble data assimilation. Mon. Wea. Rev., 140, 3078–3089, https://doi.org/10.1175/MWR-D-11-00276.1.
Wu, W.-S., R. J. Purser, and D. F. Parrish, 2002: Three-dimensional variational analysis with spatially inhomogeneous covariances. Mon. Wea. Rev., 130, 2905–2916, https://doi.org/10.1175/1520-0493(2002)130<2905:TDVAWS>2.0.CO;2.