## 1. Introduction

The ensemble Kalman filter (EnKF), an approximation to the Kalman filter (Kalman and Bucy 1961), estimates the background-error covariance from an ensemble of short-term model forecasts. The use of EnKF data assimilation systems to initialize ensemble weather predictions is growing (e.g., Houtekamer et al. 2005, 2009; Meng and Zhang 2007; Whitaker et al. 2008; Buehner et al. 2010; Hamill et al. 2011), because of the simplicity of the algorithm and its ability to provide *flow-dependent* estimates of background and analysis error. In order for the EnKF to perform optimally, the background (prior) ensemble should sample all sources of error in the forecast environment, including sampling error due to limitations in ensemble size and errors in the model itself. Inevitably, some sources of error will be undersampled, so that a cycled EnKF carries a suboptimal estimate of the background-error covariance with systematically underestimated variances. Such an EnKF may not give enough weight to observations, which in a chaotic system will cause the subsequent ensemble forecasts to drift farther from the truth. At the next assimilation time, the ensemble-estimated covariance will be even more deficient, causing the update to give even less weight to observations. This problem can progressively worsen, potentially resulting in a condition called “filter divergence,” in which the ensemble variance becomes vanishingly small and observation information is completely ignored. Because of this, all EnKF systems used in weather prediction employ methods to account for unrepresented or underestimated error sources in the prior ensemble. 
These include multiplicative inflation (Anderson and Anderson 1999), which inflates either the prior or posterior ensemble by artificially increasing the amplitude of deviations from the ensemble mean, and additive inflation, which involves adding perturbations with zero mean drawn from a specified distribution to each ensemble member (Mitchell and Houtekamer 2000). Covariance localization (Hamill et al. 2001) is typically used to ameliorate the effects of sampling error by tapering the covariances to zero with distance from the observation location. Whitaker et al. (2008) compared simple uniform multiplicative inflation with additive inflation in a simple model, and found that additive inflation performed better, since the simple uniform multiplicative inflation generated too much spread in regions less constrained by observations. Meng and Zhang (2007) found that using different physical parameterization schemes within the forecast ensemble can significantly improve EnKF performance. Houtekamer et al. (2009) compared additive inflation with various methods for treating model error within the forecast model itself, such as multimodel ensembles, stochastic backscatter (Shutts 2005; Berner et al. 2009), and stochastically perturbed physics tendencies (Buizza and Palmer 1999). They found that additive inflation, sampling from a simple isotropic covariance model, had the largest positive impact. However, Hamill and Whitaker (2011) found that parameterizing unrepresented error sources with additive inflation will decrease the flow dependence of background-error covariance estimates and reduce the growth rate of ensemble perturbations, with potentially negative consequences on analysis quality.
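For concreteness, constant multiplicative inflation amounts to rescaling each member's deviation from the ensemble mean by a fixed factor. The NumPy sketch below is purely illustrative (the function name and the convention of members along the first axis are ours, not from any operational system):

```python
import numpy as np

def multiplicative_inflation(ens, factor):
    """Inflate deviations from the ensemble mean by a constant factor.

    ens    : array of shape (n_members, n_state), prior or posterior ensemble
    factor : multiplicative inflation factor (> 1 increases spread)
    """
    mean = ens.mean(axis=0)
    # The mean is unchanged; the ensemble variance is multiplied by factor**2.
    return mean + factor * (ens - mean)
```

Applied with `factor > 1`, the ensemble mean is preserved exactly while every ensemble variance is multiplied by `factor**2`.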

In this study, we reexamine the use of inflation (additive and multiplicative) as methods for accounting for under-represented sources of background error in ensemble data assimilation. The goal is to elucidate the strengths and weaknesses of each method in isolation, and justify the use of both simultaneously. Experiments are conducted using an idealized two-level primitive equation model on a sphere, including model error. We hypothesize that multiplicative inflation algorithms should inflate more where observations are dense to account for the fact that sampling errors (and other sources of under-represented observation-network-dependent data assimilation errors) are likely to be a larger fraction of the total background error in those regions. To this end, we propose a very simple algorithm that inflates the posterior ensemble proportional to the amount that ensemble variance is reduced by the assimilation of observations, and we compare this new algorithm to existing ones. We also hypothesize that additive inflation will outperform multiplicative inflation alone when unrepresented model errors dominate unrepresented observation-network-dependent system errors (which in this simplified environment consist solely of sampling error due to limitations in ensemble size). The opposite should be true when sampling error dominates model error. When neither model error nor sampling error dominates, a combination of multiplicative inflation and additive inflation should perform better than either alone. To put it simply, we aim to demonstrate that when using inflation, observation-network-dependent assimilation errors are best handled by multiplicative schemes, while model errors (which do not depend on the observing network) are best treated by additive schemes. The following section describes the algorithms used and experiments performed, while the results and conclusions are summarized in the final section.

## 2. Idealized experiments

### a. Forecast model

The forecast model used in these experiments is virtually identical to the two-level primitive equation spectral model of Lee and Held (1993). This model was also used in the data assimilation experiments of Whitaker and Hamill (2002) and Hamill and Whitaker (2011). Here, unless otherwise noted, data assimilation experiments are run with a spectral resolution of T31 (triangular truncation at total wavenumber 31), with the two levels set to 250 and 750 hPa. Observations are sampled from a nature run using the same model, but at T42 resolution. The prognostic variables of the forecast model are baroclinic and barotropic vorticity, baroclinic divergence, and barotropic potential temperature. Barotropic divergence is identically zero, and baroclinic potential temperature (static stability) is kept constant at 10 K. Lower-level winds are mechanically damped with an *e*-folding time scale of 4 days, and barotropic potential temperature is relaxed back to a radiative equilibrium state with a pole-to-equator temperature difference of 80 K with a time scale of 20 days. The radiative equilibrium profile of Lee and Held [1993, their Eq. (3)] is used. A ∇^{8} hyperdiffusion is applied to all prognostic variables, damping the smallest resolvable scale with an *e*-folding time scale of 3 h (6 h for the nature run). Time integration is performed with a fourth-order Runge–Kutta scheme with 18 time steps day^{−1} at T31 resolution, and 30 day^{−1} at T42 resolution. The error doubling time of the T31 model is approximately 2.4 days. The climate of the model (computed as a zonal and time mean over 1000 days of integration) is shown in Fig. 1 for the T31 forecast model and the T42 nature run. The time-mean systematic error of the T31 model is quite small outside the tropics and polar regions.

### b. Data assimilation methodology

The serial ensemble square root filter algorithm of Whitaker and Hamill (2002) is used in conjunction with a 20-member ensemble, unless otherwise noted. Details are provided in Hamill and Whitaker (2011). Covariance localization (Hamill et al. 2001) is used to ameliorate the effects of sampling error, using the compact Gaussian-like polynomial function of Gaspari and Cohn (1999). Unless otherwise noted, the covariance localization was set so that increments taper to zero 3500 km away from observation locations. This is close to the optimal value for all of the experiments with 20-member ensembles. Observations of geopotential height at 250 and 750 hPa are assimilated at Northern Hemisphere radiosonde locations (Fig. 2) every 12 h with an observation error standard deviation of 10 m. The observing network is made hemispherically symmetric by reflecting the Northern Hemisphere radiosonde locations into the Southern Hemisphere, resulting in a network with 1022 observing locations.
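The Gaspari and Cohn (1999) taper is a compactly supported, fifth-order piecewise rational function that falls from unity at the observation location to zero at a specified cutoff distance (3500 km in these experiments). A possible NumPy transcription (our own; only the function, not any operational code) is:

```python
import numpy as np

def gaspari_cohn(dist, cutoff):
    """Gaspari-Cohn (1999) compactly supported correlation function.

    dist   : array of distances from the observation location
    cutoff : distance at which the function reaches zero (e.g., 3500 km)
    """
    c = cutoff / 2.0                       # half-width of the taper
    r = np.abs(np.asarray(dist, dtype=float)) / c
    f = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    # Inner branch: -r^5/4 + r^4/2 + 5r^3/8 - 5r^2/3 + 1
    f[inner] = ((((-0.25 * r[inner] + 0.5) * r[inner] + 0.625) * r[inner]
                 - 5.0 / 3.0) * r[inner] ** 2 + 1.0)
    # Outer branch: r^5/12 - r^4/2 + 5r^3/8 + 5r^2/3 - 5r + 4 - (2/3)/r
    f[outer] = (((((r[outer] / 12.0 - 0.5) * r[outer] + 0.625) * r[outer]
                  + 5.0 / 3.0) * r[outer] - 5.0) * r[outer]
                + 4.0 - (2.0 / 3.0) / r[outer])
    return f
```

The increment for an observation is then Schur-multiplied by `gaspari_cohn(dist, 3500.0)` evaluated at each grid point's distance from the observation.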

### c. Comparison of multiplicative inflation methods

Sacher and Bartello (2008) showed that sampling error in the estimate of the Kalman gain should be proportional to the amplitude of the Kalman gain itself, so that more inflation is needed when observations are making large corrections to the background. Therefore, it seems desirable to have a multiplicative inflation scheme that inflates the ensemble variance more in regions where observations have a larger impact.

Zhang et al. (2004) proposed relaxing the posterior (analysis) ensemble perturbations back to the prior (background) perturbations via

**x**′^{a}_{i} ← (1 − *α*)**x**′^{a}_{i} + *α* **x**′^{b}_{i},   (1)

where **x**′^{b}_{i} is the background (prior) deviation from the ensemble mean for the *i*th ensemble member, and **x**′^{a}_{i} is the analysis (posterior) deviation from the ensemble mean for the *i*th ensemble member. We refer to this method as relaxation-to-prior perturbations (RTPP). Unlike constant covariance inflation, this technique has the desired property of increasing the posterior ensemble variance in proportion to the amount that the assimilation of observations has reduced the prior variance. In the limit that the tunable parameter *α* approaches 1.0, the posterior ensemble is completely replaced by the prior ensemble. For values of *α* between 0 and 1, part of the posterior ensemble is replaced by the prior ensemble. This approach amounts to a combination of multiplicative inflation (in which the inflation factor is less than 1) and additive inflation where the perturbations are taken from the prior ensemble.

Here we propose a variant of this approach, which we call relaxation-to-prior spread (RTPS). Instead of relaxing the ensemble *perturbations* back to their prior values at each grid point as in RTPP, we relax the ensemble *standard deviation* back to the prior via

*σ*^{a} ← (1 − *α*)*σ*^{a} + *α* *σ*^{b},   (2)

where *σ*^{b} = [Σ_{i}(**x**′^{b}_{i})²/(*n* − 1)]^{1/2} is the prior ensemble standard deviation at a grid point, *σ*^{a} is the corresponding posterior standard deviation, and *n* is the ensemble size. This formula can be rewritten as a multiplicative inflation of the posterior perturbations,

**x**′^{a}_{i} ← **x**′^{a}_{i}[1 + *α*(*σ*^{b} − *σ*^{a})/*σ*^{a}],   (3)

so that, for a given *α*, the multiplicative inflation in RTPS is proportional to the amount the ensemble spread is reduced by the assimilation of observations, normalized by the posterior ensemble spread. We have chosen to represent the inflation parameter in the RTPP and RTPS schemes with the same symbol *α*, since in both cases it represents a relaxation to the prior (the standard deviation in the case of RTPS and the ensemble perturbations in the case of RTPP). However, because RTPS inflation is purely multiplicative and RTPP inflation is partly multiplicative and partly additive, the sensitivity of the data assimilation to both the absolute value of *α* and perturbations to that value may be different.
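Both relaxation schemes are simple to apply once prior and posterior deviations from the ensemble mean are available. The NumPy sketch below (our own illustrative code, with members along the first axis) implements the RTPP and RTPS updates described above:

```python
import numpy as np

def rtpp(xa_pert, xb_pert, alpha):
    """Relaxation to prior perturbations: blend each member's posterior
    perturbation with its prior perturbation (Zhang et al. 2004)."""
    return (1.0 - alpha) * xa_pert + alpha * xb_pert

def rtps(xa_pert, xb_pert, alpha):
    """Relaxation to prior spread: multiplicative inflation proportional
    to the spread reduction, normalized by the posterior spread."""
    n = xa_pert.shape[0]
    sig_b = np.sqrt((xb_pert ** 2).sum(axis=0) / (n - 1))  # prior spread
    sig_a = np.sqrt((xa_pert ** 2).sum(axis=0) / (n - 1))  # posterior spread
    return xa_pert * (1.0 + alpha * (sig_b - sig_a) / sig_a)
```

After `rtps`, the posterior spread at each grid point equals the relaxed value (1 − α)σᵃ + ασᵇ, while each member's perturbation keeps its posterior structure; `rtpp` instead blends the perturbation fields themselves.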

Anderson (2009) proposed a Bayesian algorithm for estimating a spatially and temporally varying field of covariance inflation as part of the state update. When run as part of an EnKF assimilation system using a global general circulation model with all “conventional” (i.e., nonsatellite radiance) observations, the Bayesian algorithm produces a spatial field of inflation that looks very similar to that implied by RTPS inflation [Eq. (3)], with large values of inflation in regions of dense and/or accurate observations, like North America and Europe (Fig. 13 in Anderson et al. 2009).

The role of covariance localization is to ameliorate the effects of sampling error, yet we have hypothesized that spatially varying covariance inflation is also necessary to deal with the observation-network-dependent effects of sampling error. Why do we need both? Covariance localization, as it is typically formulated, tapers increments with distance from the observation location, allowing the full increment to be applied at the observation location and no increment past a specified cutoff distance from the observation. Allowing the full increment to be applied at the observation location (i.e., having the localization function peak at unity) implicitly assumes that sampling errors in the estimated background-error covariance between model and observation priors collocated in space are zero. In the simple case where the observation operator is the identity matrix (model state variables are observed), this implies that covariance localization only deals with sampling errors in the estimation of background-error covariances, not variances. Covariance inflation is therefore needed to account for sampling error in the estimation of background-error variances, which will invariably be underestimated by small ensembles when the data assimilation system is cycled. RTPP inflation, in fact, can be shown to be equivalent to applying a covariance localization function that peaks at a value of 1 − *α* at the observation location when updating ensemble perturbations (see the appendix for details).

Figure 3 shows global-mean ensemble-mean background error, spread, and inflation statistics collected over 2000 assimilation times for the three experiments after a spinup period of 50 days. The RTPS inflation method produces slightly more accurate analyses and short-term forecasts than either RTPP or constant covariance inflation. RTPP outperforms constant covariance inflation but produces very large errors when the inflation parameter exceeds the optimal value. For reference, we also show in Fig. 3 the ensemble-mean error and spread for an experiment using the adaptive inflation algorithm of Anderson (2009) (the horizontal cyan curves). The adaptive inflation algorithm requires very little tuning (there is some sensitivity to the value of the prior inflation variance chosen), and produces analyses of similar quality to the best-tuned RTPS results. Maps of time-mean ensemble-mean background error, spread, and inflation factor for RTPP and RTPS are shown in Fig. 4. The parameter settings were chosen to maximize the consistency between globally averaged error and spread, which approximately corresponds to the point at which the dashed and solid lines cross in Fig. 3. The time-mean inflation factor was estimated by computing the ratio of the time-mean spread after and before the application of RTPP and RTPS inflation [Eqs. (1) and (2)]. The pattern of ensemble-mean error is similar in both experiments, with large error maxima at the downstream end of the observational data voids. The largest relative difference in error between RTPP and RTPS is in the tropics. The pattern of ensemble spread in the RTPP experiment is more zonally homogeneous than in the RTPS experiment, and does not match the pattern of ensemble-mean error as closely. We hypothesize that this is due to the fact that the ensemble perturbations in the RTPP ensemble are controlled more by the growing instabilities of the dynamical system, and less by inhomogeneities in the observing network. 
The reason for this is described in the following paragraphs. The effective inflation is nearly twice as large in the RTPP experiment over the data-rich regions in midlatitudes. This is because the background ensemble spread is larger in the RTPP experiment in those regions, so that the reduction in ensemble spread by the analysis (which is roughly proportional to the ratio of background-error variance to observation-error variance) is also larger. Because the inflation algorithms relax back to the amplitude of the prior ensemble standard deviation (for RTPS) or ensemble perturbations (for RTPP), the amplitude of the inflation will be approximately proportional to the background ensemble spread where there are observations.

RTPP inflation has at least one desirable property; it produces ensemble perturbations that grow faster than the other inflation methods. This is illustrated in Fig. 5, which shows the ratio of background spread to analysis spread for the experiments depicted in Fig. 3. Near the minimum in ensemble-mean error, the RTPP ensemble spread grows by about 19% over the 12-h assimilation interval, compared to 7.6% for RTPS inflation and 6.5% for constant covariance inflation. The reason for this can be understood by noting that RTPP inflation involves adding scaled prior perturbations to the posterior ensemble. When the inflation parameter *α* is 1, the posterior ensemble is completely replaced by the prior ensemble. In that case, the structure and amplitude of the ensemble perturbations is not modified during the assimilation and the perturbations are simply recentered around the updated ensemble mean. The assimilation cycle then becomes similar to the process used to compute the leading Lyapunov vector (Legras and Vautard 1995), which reflects the dominant instabilities of a dynamical system. This also explains why the performance of RTPP inflation degrades rapidly when the inflation parameter is increased above the optimal value; the ensemble perturbations become increasingly collinear as they collapse to the leading Lyapunov vector, reducing the effective number of degrees of freedom spanned by the ensemble. However, Fig. 5 shows that the spread growth does not increase monotonically for RTPP inflation as the inflation parameter is increased. This is because the amplitude of the ensemble perturbations becomes large enough that nonlinear effects begin to cause saturation.

To further explore the impact of the multiplicative inflation method on the growth properties of the analysis ensemble, we have calculated the analysis-error covariance singular vector (AECSV) spectrum following the methodology of Hamill et al. (2003). The AECSVs are the structures that explain the greatest forecast variance and whose initial size is consistent with the flow-dependent analysis-error covariance statistics of the data assimilation system. Figure 6 confirms that the RTPP ensemble AECSV spectrum is steeper, with more of the variance concentrated in fewer, faster-growing modes (as indicated by the dashed lines on the figure). The AECSVs for the RTPS ensemble grow more slowly, and about half of them are decaying modes (as indicated by the red solid line dropping below the horizontal black line on Fig. 6). This results in less spread growth over the assimilation interval, but an ensemble that can effectively span a larger portion of the space of possible analysis errors. The fact that the RTPP ensemble is dominated more by growing instabilities of the dynamical system is consistent with the pattern of ensemble spread shown in Fig. 4. The pattern of time-mean RTPP ensemble spread is more zonally symmetric than RTPS spread, and is more reflective of the energy source for baroclinic growth (the midlatitude jet and associated baroclinic zone) and less reflective of the inhomogeneities of the observing network. RTPP spread is also much smaller than RTPS spread in the tropics, where the energy source for dynamical growth is much weaker.

Although we have not explored the impact that RTPP and RTPS inflation have on the degree of balance between the temperature and wind fields in the analysis ensemble, we do expect that the analyses produced using RTPP inflation will be more balanced. This is because RTPS inflation is a spatially varying multiplicative inflation, which inevitably will alter the balance between the temperature and wind fields that is present in the analysis ensemble. In contrast, RTPP inflation merely rescales the analysis perturbations by a spatially uniform value 1 − *α*, and adds background perturbations scaled by *α*. Neither of these operations should affect the existing covariance between temperature and wind in the ensemble. Therefore, if preserving the balances present in the ensemble is of primary concern, RTPP inflation may be preferred over RTPS inflation, even though RTPS inflation produces smaller analysis errors.

### d. Combined additive and multiplicative inflation

In Hamill and Whitaker (2005), it was found that additive inflation performed better than constant covariance inflation in an idealized two-layer primitive equation model, including truncation model error. Similarly, Whitaker et al. (2008) found that additive inflation outperformed constant covariance inflation and RTPP inflation in a full global numerical weather prediction system. Given that RTPS inflation generally performs better than RTPP and constant covariance inflation, how does it perform compared to additive inflation? Here we use random samples from a climatological distribution of actual 12-h forecast model error for our additive inflation. The distribution is computed using the same method as Hamill and Whitaker (2005), that is, by truncating the T42 nature run to T31, running 12-h forecasts at T31, and computing the difference between these forecasts and the corresponding T31 truncated nature run fields. The only source of error in these forecasts is due to the lower resolution of the forecast model. At each analysis time, 20 samples are chosen randomly from this distribution, the mean is removed, and the resulting fields are scaled and added to each ensemble member. Figure 7 shows the ensemble background error for experiments using a combination of this additive inflation and RTPS multiplicative inflation. The additive inflation parameter is simply the scaling factor applied to the randomly chosen truncation model error fields. The values of ensemble-mean error when the additive inflation parameter is zero are identical to those shown in Fig. 3 (the solid red line). From this plot, it is easy to see that additive inflation without multiplicative inflation produces lower errors than multiplicative inflation alone, in agreement with the results of Hamill and Whitaker (2005) and Whitaker et al. (2008). However, a combination of additive and multiplicative inflation produces lower errors than either method used alone. 
The minimum error (8.6 m s^{−1}) occurs with a multiplicative inflation parameter of 0.5 and an additive inflation parameter of 1.4. Conditioning the additive perturbations to the dynamics by adding them to the previous ensemble-mean analysis (instead of the current analysis) and evolving them forward in time by one assimilation interval (as suggested by Hamill and Whitaker 2011) reduces the minimum error slightly, by approximately 2%–3% (not shown). Using random samples of 12-h differences drawn from a T31 model run works nearly as well as using actual truncation model error fields for the additive inflation, yielding a minimum error of 8.8 m s^{−1} when the additive inflation parameter is 0.24 and the multiplicative inflation parameter is 0.5 (Fig. 8).
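The sampling procedure described above is straightforward to sketch. In the illustrative NumPy code below, `error_bank` is a hypothetical array holding the precomputed bank of 12-h truncation-error fields; the names are ours, not from any operational system:

```python
import numpy as np

def draw_additive_perturbations(error_bank, n_members, scale, rng):
    """Draw n_members random samples from a bank of model-error fields,
    remove their mean across the draw, and apply the additive inflation
    parameter `scale`; the result is added to each ensemble member."""
    idx = rng.choice(error_bank.shape[0], size=n_members, replace=False)
    pert = error_bank[idx]
    pert = pert - pert.mean(axis=0)   # enforce zero mean across members
    return scale * pert
```

A typical call for these experiments would be `ens += draw_additive_perturbations(error_bank, 20, 1.4, rng)`; because the drawn perturbations have zero mean, the ensemble mean is left unchanged.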

The fact that a combination of additive and multiplicative inflation works better than either does alone suggests that they are simulating different unrepresented background-error sources. RTPS multiplicative inflation is by design dependent on the observation network, while the additive inflation we have used is independent of the assimilation system. Therefore, we hypothesize that RTPS multiplicative inflation is useful in capturing the especially deleterious effects of sampling errors in regions where observations are dense, while additive inflation is useful in capturing sources of background error that are assimilation-system independent, such as errors in the forecast model.

To test this idea we ran two experiments: one in which the model error was eliminated by using the T42 model in the assimilation and another in which the sampling error was reduced by increasing the ensemble size from 20 to 200. In the former experiment, we expect that the impact of additive inflation would be reduced relative to multiplicative inflation, since the only source of unrepresented error (sampling error) comes from the data assimilation system itself. In the latter experiment, sampling error is greatly reduced, so that the dominant unrepresented source of error should be model error and the impact of multiplicative inflation should be reduced relative to additive inflation. These expectations are confirmed in Figs. 9 and 10. Figure 9 shows that in the absence of model error, multiplicative inflation alone outperforms any combination of multiplicative and additive inflation. Figure 10 shows that when model error is the dominant source of unrepresented background errors, additive inflation alone outperforms any combination of multiplicative and additive inflation.

### e. Replacing additive inflation with stochastic backscatter

The additive inflation algorithm used here is somewhat ad hoc, and it would be preferable to incorporate a physically based parameterization of model error directly into the forecast model. Such a parameterization would account for the presence of model error directly in the background ensemble forecast. The only source of error in our two-level model experiments is associated with model truncation. More specifically, model error in our experiments is a result of the effects of unresolved and unrealistically damped scales on the resolved scales through an inverse energy cascade. This is exactly the sort of model error that stochastic kinetic energy backscatter (SKEB) schemes (Shutts 2005; Berner et al. 2009) were designed to represent. The algorithm described by Berner et al. (2009) involves generating a random streamfunction pattern from an AR-1 process with a specified time scale and covariance structure. These random patterns are then modulated by the model’s kinetic energy dissipation rate (resulting from the ∇^{8} hyperdiffusion). The resulting tendencies are added as a forcing term in the vorticity equation. Figure 11 shows the total kinetic energy spectra for the T42 model, the T31 model without SKEB, and the T31 model with SKEB. The kinetic energy in the T31 model without SKEB is deficient relative to the T42 model at all scales, but especially so near the truncation wavenumber where the hyperdiffusion is active. Adding SKEB to the T31 model brings the energy up much closer to the level of the T42 model. The random streamfunction pattern used to generate the SKEB forcing was assumed to be spatially white in the streamfunction norm, with a decay time scale of 6 h. The amplitude of the random streamfunction pattern was set to 15, a value chosen to give the best fit to the T42 model kinetic energy spectrum shown in Fig. 11.
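The temporal part of such a scheme is a first-order autoregressive process. A minimal sketch of one AR-1 update for the random pattern is shown below (illustrative only; the spatial covariance structure and the modulation by the local dissipation rate are omitted, and the names are ours):

```python
import numpy as np

def ar1_pattern_step(psi, tau, dt, stdev, rng):
    """Advance a random pattern one time step with an AR-1 process:

        psi(t+dt) = phi * psi(t) + sqrt(1 - phi**2) * stdev * noise,

    where phi = exp(-dt/tau) gives a decorrelation (e-folding) time tau.
    The sqrt(1 - phi**2) factor keeps the stationary variance at stdev**2."""
    phi = np.exp(-dt / tau)
    return phi * psi + np.sqrt(1.0 - phi ** 2) * stdev * rng.normal(size=psi.shape)
```

With `tau = 6.0` (hours, matching the 6-h decay time scale used here) and `dt` equal to the model time step, iterating this update yields a pattern whose variance is stationary while its temporal correlation decays with the prescribed *e*-folding time.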

Figure 12 shows the results for a set of assimilation experiments using a combination of SKEB to represent model error, and multiplicative inflation to represent other sources of unrepresented background errors (in this case, primarily sampling errors). Not surprisingly, a combination of SKEB and multiplicative inflation turns out to be better than either alone. However, comparing Fig. 12 to Fig. 8, SKEB does not seem to perform significantly better than simple, ad hoc additive inflation. Also, in contrast to additive inflation, SKEB alone does not perform better than multiplicative inflation alone. Of course, there are several tunable parameters in the SKEB scheme (including the total variance injected, the time scale of the random streamfunction pattern, and the covariance structure of the random streamfunction pattern) and it is likely that better results could be obtained by more carefully tuning these parameters. However, our results do suggest that it is surprisingly hard to beat a combination of simple additive and multiplicative inflation as a parameterization of unrepresented sources of error in an ensemble data assimilation system. We do not mean to suggest that ad hoc inflation is inherently better, merely that it provides a useful baseline for measuring progress in the development of more physically based methods, such as SKEB.

## 3. Conclusions

In order for an EnKF to perform optimally, the background (prior) ensemble should sample all sources of error in the forecast environment, including those associated with the data assimilation itself (such as sampling error due to finite ensemble size, misspecification of observation errors, and errors in forward operators) as well as errors in the forecast model itself. We have proposed a new multiplicative inflation algorithm to deal with the effects of these unrepresented sources of error that is simple to implement in complicated models. Using idealized experiments with a two-level spherical primitive equation model, where the only source of model error is associated with model truncation, and the only source of data assimilation error is associated with finite ensemble size, we show that this new inflation scheme performs as well or better than other commonly used schemes. It has the desirable property of inflating more strongly where the assimilation of observations has a larger effect on the ensemble variance. It is in these regions where sampling error is expected to be a larger fraction of the total background error.

Combining this new multiplicative inflation algorithm with additive inflation, it is found that a combination of the two performs better than either alone, even when the additive perturbations are drawn from an ad hoc distribution that does not directly use knowledge of the known properties of the model error in this simplified environment. This leads us to hypothesize that multiplicative inflation is best suited to account for unrepresented observation-network-dependent assimilation errors (in this case sampling error), while model errors (which do not depend on the observing network) are best treated by additive inflation, or stochastically within the forecast model itself. Since the additive inflation algorithm is somewhat ad hoc, it is expected that a more physically based parameterization of model error, such as stochastic kinetic energy backscatter, will perform better. Tests replacing additive inflation with SKEB in the data assimilation show that it is surprisingly hard to improve upon additive inflation. This suggests that a combination of simple ad hoc additive inflation with the new multiplicative inflation algorithm proposed here can provide a rigorous baseline for testing new, more sophisticated representations of unrepresented sources of error in ensemble data assimilation systems.

More generally, these results suggest that it is desirable to treat different sources of unrepresented background error in ensemble data assimilation systems separately, using as much a priori knowledge regarding the characteristics of these errors as possible. In the case of inflation, using the fact that we expect part of the missing error to be observation-network dependent and part of it to be independent of the observing network leads us to an improved scheme that has both additive and multiplicative aspects. Applying this philosophy to the model error, we might expect that errors associated with convection, boundary layer physics, and unresolved dynamics might best be treated separately, as long as we have a priori knowledge about the characteristics of these separate sources of error to guide us. Similarly, for unrepresented sources of error associated with the data assimilation system itself, such as misspecification of observation errors and errors in forward operators, there may be methods that work better than the RTPS multiplicative inflation used here. More research is certainly needed to understand what the most important unrepresented sources of error are in operational ensemble data assimilation systems and how to characterize those errors individually.

## APPENDIX

### Similarities between Covariance Localization and RTPP Inflation

In the ensemble square root filter (EnSRF; Whitaker and Hamill 2002), the ensemble mean and the perturbations (deviations of each member from the ensemble mean) are updated separately:

$$\overline{\mathbf{x}}^{a} = \overline{\mathbf{x}}^{b} + \mathbf{K}\left(\mathbf{y} - \mathbf{H}\overline{\mathbf{x}}^{b}\right), \tag{A1}$$

$$\mathbf{x}_{i}^{\prime a} = \mathbf{x}_{i}^{\prime b} - \tilde{\mathbf{K}}\mathbf{H}\mathbf{x}_{i}^{\prime b}, \tag{A2}$$

where $\mathbf{K} = \mathbf{P}^{b}\mathbf{H}^{\mathsf{T}}(\mathbf{H}\mathbf{P}^{b}\mathbf{H}^{\mathsf{T}} + \mathbf{R})^{-1}$ is the Kalman gain, $\tilde{\mathbf{K}}$ is the reduced gain used in the perturbation update, and the background-error covariance is estimated from the ensemble as

$$\mathbf{P}^{b} = \frac{1}{n-1}\sum_{i=1}^{n}\mathbf{x}_{i}^{\prime b}\left(\mathbf{x}_{i}^{\prime b}\right)^{\mathsf{T}}, \tag{A3}$$

where *n* is the ensemble size, and *n* − 1 is used instead of *n* in the denominator, so that the estimate is unbiased. If **R** is diagonal, observations may be assimilated serially, one at a time, so that the analysis after assimilation of the *N*th observation becomes the background estimate for assimilating the (*N* + 1)th observation (Gelb et al. 1974). With this simplification, **HP**ᵇ**H**ᵀ and **R** reduce to scalars, and covariance localization may be applied in the perturbation update through an elementwise product with the gain:

$$\mathbf{x}_{i}^{\prime a} = \mathbf{x}_{i}^{\prime b} - \left(\boldsymbol{\Gamma} \circ \tilde{\mathbf{K}}\right)\mathbf{H}\mathbf{x}_{i}^{\prime b}, \tag{A4}$$

where **Γ** is a vector containing the localization function on the model grid for an individual observation. Here **Γ** is unity at the observation location, and zero beyond a specified distance from the observation. In the special case where only a single observation is assimilated, RTPP inflation is equivalent to multiplying the second term on the right-hand side of Eq. (A4) by 1 − *α*, so that when *α* = 1, the analysis perturbation is identical to the background perturbation, and when *α* = 0, no inflation is applied. If the factor 1 − *α* is incorporated into the localization function **Γ**, it becomes apparent that RTPP inflation is equivalent to applying covariance localization with a localization function that peaks at 1 − *α* at the observation location in the ensemble-perturbation update (but not in the ensemble-mean update). When more than one observation is assimilated, applying RTPP inflation after the analysis step is not formally identical to localizing with a function that peaks at 1 − *α* during the serial assimilation update. However, numerical experimentation shows that the results shown in Fig. 3 are essentially unchanged if RTPP is applied in this way. We note that RTPS inflation is not equivalent to a modified localization in the perturbation update.
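The single-observation equivalence can be checked numerically. The sketch below is illustrative only: the dimensions, the reduced-gain factor for a scalar observation, and the simple triangular taper (standing in for a Gaspari–Cohn function) are our own choices, not values from the experiments described above. It applies RTPP after a localized EnSRF perturbation update and compares the result to the same update with the localization scaled by 1 − *α*:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nens = 40, 20          # state size and ensemble size (illustrative)
alpha, R = 0.6, 0.5        # RTPP coefficient and observation-error variance

# Synthetic prior ensemble perturbations (deviations from the ensemble mean).
Xb = rng.standard_normal((nx, nens))
Xb -= Xb.mean(axis=1, keepdims=True)

# Single observation of grid point 10; triangular taper as a stand-in for
# a Gaspari-Cohn localization function (unity at the observation, zero far away).
iobs = 10
dist = np.abs(np.arange(nx) - iobs)
gamma = np.clip(1.0 - dist / 15.0, 0.0, 1.0)

hxb = Xb[iobs]                                  # H applied to each member
hpbht = hxb @ hxb / (nens - 1)                  # ensemble estimate of H Pb H^T
k = (Xb @ hxb) / (nens - 1) / (hpbht + R)       # Kalman gain (vector)
beta = 1.0 / (1.0 + np.sqrt(R / (hpbht + R)))   # EnSRF reduced-gain factor

# (a) Localized EnSRF perturbation update, then RTPP relaxation.
Xa = Xb - np.outer(gamma * beta * k, hxb)
Xa_rtpp = alpha * Xb + (1.0 - alpha) * Xa

# (b) Same update with the localization scaled by (1 - alpha), no RTPP.
Xa_loc = Xb - np.outer((1.0 - alpha) * gamma * beta * k, hxb)

print(np.allclose(Xa_rtpp, Xa_loc))   # True: identical for one observation
```

For a single observation the two expressions are algebraically identical, since relaxing the perturbations toward the background simply rescales the update term; with many observations assimilated serially the two differ, as noted above.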

## REFERENCES

Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. *Tellus*, **61A**, 72–83.

Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. *Mon. Wea. Rev.*, **127**, 2741–2758.

Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Avellano, 2009: The data assimilation research testbed: A community facility. *Bull. Amer. Meteor. Soc.*, **90**, 1283–1296.

Berner, J., G. J. Shutts, M. Leutbecher, and T. N. Palmer, 2009: A spectral stochastic kinetic energy backscatter scheme and its impact on flow-dependent predictability in the ECMWF ensemble prediction system. *J. Atmos. Sci.*, **66**, 603–626.

Buehner, M., P. L. Houtekamer, C. Charette, H. L. Mitchell, and B. He, 2010: Intercomparison of variational data assimilation and the ensemble Kalman filter for global deterministic NWP. Part I: Description and single-observation experiments. *Mon. Wea. Rev.*, **138**, 1902–1921.

Buizza, R., and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. *Quart. J. Roy. Meteor. Soc.*, **125**, 2887–2908.

Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. *Quart. J. Roy. Meteor. Soc.*, **125**, 723–757.

Gelb, A., J. F. Kasper, R. A. Nash, C. F. Price, and A. A. Sutherland, 1974: *Applied Optimal Estimation.* MIT Press, 374 pp.

Hamill, T. M., and J. S. Whitaker, 2005: Accounting for the error due to unresolved scales in ensemble data assimilation: A comparison of different approaches. *Mon. Wea. Rev.*, **133**, 3132–3147.

Hamill, T. M., and J. S. Whitaker, 2011: What constrains spread growth in forecasts initialized from ensemble Kalman filters? *Mon. Wea. Rev.*, **139**, 117–131.

Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. *Mon. Wea. Rev.*, **129**, 2776–2790.

Hamill, T. M., C. Snyder, and J. S. Whitaker, 2003: Ensemble forecasts and the properties of flow-dependent analysis-error covariance singular vectors. *Mon. Wea. Rev.*, **131**, 1741–1758.

Hamill, T. M., J. S. Whitaker, M. Fiorino, and S. G. Benjamin, 2011: Global ensemble predictions of 2009's tropical cyclones initialized with an ensemble Kalman filter. *Mon. Wea. Rev.*, **139**, 668–688.

Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. *Mon. Wea. Rev.*, **133**, 604–620.

Houtekamer, P. L., H. L. Mitchell, and X. Deng, 2009: Model error representation in an operational ensemble Kalman filter. *Mon. Wea. Rev.*, **137**, 2126–2143.

Kalman, R., and R. Bucy, 1961: New results in linear prediction and filtering theory. *Trans. ASME, J. Basic Eng.*, **83D**, 95–108.

Lee, S., and I. M. Held, 1993: Baroclinic wave packets in models and observations. *J. Atmos. Sci.*, **50**, 1413–1428.

Legras, B., and R. Vautard, 1995: A guide to Lyapunov vectors. *Proc. ECMWF Seminar on Predictability*, Vol. 1, Reading, United Kingdom, ECMWF, 143–156.

Meng, Z., and F. Zhang, 2007: Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part II: Imperfect model experiments. *Mon. Wea. Rev.*, **135**, 1403–1423.

Mitchell, H. L., and P. L. Houtekamer, 2000: An adaptive ensemble Kalman filter. *Mon. Wea. Rev.*, **128**, 416–433.

Sacher, W., and P. Bartello, 2008: Sampling errors in ensemble Kalman filtering. Part I: Theory. *Mon. Wea. Rev.*, **136**, 3035–3049.

Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. *Quart. J. Roy. Meteor. Soc.*, **131**, 3079–3102.

Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. *Mon. Wea. Rev.*, **130**, 1913–1924.

Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP global forecast system. *Mon. Wea. Rev.*, **136**, 463–482.

Zhang, F., C. Snyder, and J. Sun, 2004: Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter. *Mon. Wea. Rev.*, **132**, 1238–1253.