
Preventing Catastrophic Filter Divergence Using Adaptive Additive Inflation for Baroclinic Turbulence

Yoonsang Lee, Andrew J. Majda, and Di Qi

Center of Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, New York, New York

Abstract

Ensemble-based filtering or data assimilation methods have proved to be indispensable tools in atmosphere and ocean science as they allow computationally cheap, low-dimensional ensemble state approximation for extremely high-dimensional turbulent dynamical systems. For sparse, accurate, and infrequent observations, which are typical in data assimilation of geophysical systems, ensemble filtering methods can suffer from catastrophic filter divergence, which frequently drives the filter predictions to machine infinity. A two-layer quasigeostrophic equation, which is a classical idealized model for geophysical turbulence, is used to demonstrate catastrophic filter divergence. The mathematical theory of adaptive covariance inflation by Tong et al. and covariance localization are investigated to stabilize the ensemble methods and prevent catastrophic filter divergence. Two forecast models—a coarse-grained ocean code, which ignores the small-scale parameterization, and stochastic superparameterization (SP), which is a seamless multiscale method developed for large-scale models without scale gap between the resolved and unresolved scales—are applied to generate large-scale forecasts at a coarse spatial resolution compared to the full resolution. The methods are tested in various oceanic dynamical regimes with jets and vortices, and catastrophic filter divergence is documented for the standard filter without inflation. Using the two forecast models, various kinds of covariance inflation with or without localization are compared. It is shown that proper adaptive additive inflation can effectively stabilize the ensemble methods without catastrophic filter divergence in all regimes. Furthermore, stochastic SP achieves accurate filtering skill with localization while the ocean code performs poorly even with localization.

Denotes content that is immediately available upon publication as open access.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author e-mail: Yoonsang Lee, ylee@cims.nyu.edu


1. Introduction

Ensemble-based filters or data assimilation methods, including the ensemble Kalman filter (EnKF; Evensen 2003) and ensemble square root filters such as the ensemble transform Kalman filter (ETKF; Bishop et al. 2001) and the ensemble adjustment Kalman filter (EAKF; Anderson 2001), provide accurate statistical estimation of a geophysical system by combining a forecast model and observations. These methods quantify the uncertainty of the system using an ensemble that samples its state. Geophysical systems are complex and high dimensional and thus require enormous computational costs for long time integration, so the ensemble-based methods are indispensable tools for data assimilation as they allow computationally cheap, low-dimensional state approximation. Because of their simplicity and efficiency, ensemble-based filters are widely applied in various fields of geophysical science such as numerical weather prediction (Kalnay 2003).

Despite their success in geophysical applications, ensemble-based filters suffer from the small ensemble sizes forced by high dimensionality and expensive computational costs [frequently referred to as the "curse of dimensionality" (Snyder et al. 2008) or the "curse of small ensemble size" (Majda and Harlim 2012)], which can lead to filter divergence. Sampling errors due to insufficient ensemble size, together with errors from imperfect models, often lead to underestimation of the forecast uncertainty, so the filter trusts the forecast with larger confidence than the information given by observations warrants. The resulting inaccurate uncertainty quantification causes the filter to lose track of the true signal and its performance degrades, which is called filter divergence (Majda and Harlim 2012). Insufficient ensemble size can also lead to spurious overestimation of cross correlations between otherwise uncorrelated variables (Hamill et al. 2001; Whitaker et al. 2009; Sakov and Oke 2008), which likewise affects filter performance. Covariance inflation, which inflates the prior covariance and pulls the filter back toward observations, is one method to remedy filter divergence (Anderson 2001). For the overestimated cross correlations between uncorrelated variables, localization, which multiplies the covariances between prior state variables and observation variables by a correlation function with local support, is a powerful correction (Houtekamer and Mitchell 2001).

Catastrophic filter divergence (Harlim and Majda 2010; Gottwald and Majda 2013) is another important issue hindering the application of ensemble-based methods to high-dimensional systems, especially with sparse and infrequent observations and small observation errors. Catastrophic filter divergence drives the filter predictions to machine infinity even though the underlying system remains in a bounded set. In data assimilation of geophysical systems in the ocean, for example, observations are often sparse and infrequent. Observations of ocean dynamics such as sea surface temperature have become accurate thanks to platforms such as tropical moored buoys, ocean reference stations, and surface drifting buoys, but they are still too sparse to adequately sample the vast surface and interior of the ocean.

It is shown rigorously in Kelly et al. (2015) that catastrophic filter divergence is not caused by numerical instability; instead, the analysis step of the filter itself generates catastrophic filter divergence. Although standard covariance inflation and localization stabilize filters and improve accuracy, they cannot avoid catastrophic filter divergence. In Harlim and Majda (2010), it is demonstrated that ensemble-based methods with constant covariance inflation still suffer from catastrophic filter divergence. In this study we also find that covariance localization decreases the occurrence of catastrophic filter divergence but does not prevent it.

To avoid catastrophic filter divergence, a judicious model error using linear stochastic models was studied in Harlim and Majda (2010) with skillful results in some parameter regimes. Recently, a simple remedy for catastrophic filter divergence that does not rely on linear stochastic models has been proposed through rigorous mathematical arguments and tested for the Lorenz-96 model in Tong et al. (2016). The approach in Tong et al. (2016) adaptively inflates the covariance with minimal additional cost according to the distribution of the ensemble. The strength of inflation is determined by two statistics of the ensemble: 1) the ensemble innovation, which measures how far predicted observations are from actual observations; and 2) the cross covariance between observed and unobserved variables [see (12) and (13) in section 3, respectively]. If these two statistics indicate that the filter is malfunctioning, inflation is triggered and becomes larger the further the filter strays into malfunction. Note that there are other adaptive covariance inflation techniques. Anderson (2007) proposed a method that applies a Bayesian update to the inflation parameters. Li et al. (2009) apply a Kalman filter update to the inflation parameters based on Gaussian assumptions for the innovation statistics. Instead of using a Bayesian update for the inflation parameters, Luo and Hoteit (2013) use the innovation statistics directly to assess the filter performance and trigger the inflation [see Luo and Hoteit (2014) for nonlinear observations]. The method in Tong et al. (2016) differs from these methods in that it uses the cross covariance between observed and unobserved variables in addition to the innovation statistics, both of which are derived from a rigorous mathematical theory for stabilizing filters. We will show below that the cross covariance is essential, besides the innovation, in preventing catastrophic filter divergence in baroclinic turbulence (see Fig. 7).

In this study we demonstrate catastrophic filter divergence of ensemble-based filters in the two-layer quasigeostrophic equations, which are classical idealized models for geophysical turbulence (Salmon 1998). The adaptive inflation method is then applied to this two-layer system to avoid catastrophic filter divergence. Both a coarse-grained ocean code, which ignores the subgrid-scale parameterization, and stochastic superparameterization (Majda and Grooms 2014; Grooms et al. 2015b), which is a seamless multiscale method developed for large-scale models without scale gap between the resolved and unresolved scales, are applied to generate forecasts at a coarse spatial resolution for each layer compared to the full resolution used to generate the true signal. We test ensemble methods for various dynamical regimes in the ocean corresponding to idealized low-, mid-, and high-latitude cases and document that catastrophic filter divergence occurs for ensemble-based methods even with localization unless adaptive inflation is applied. Ensemble filtering for the two-layer quasigeostrophic equations using these forecast models—the ocean code and stochastic superparameterization—has already been studied in Grooms et al. (2015a) to investigate the effect of constant inflation on accounting for model errors without catastrophic filter divergence. In this study we test a very sparse observation network that observes the upper-layer streamfunction at only a small number of points, with a small observation error variance corresponding to 1% of the total variance of the streamfunction, to represent the typical realistic scenario with sparse high-quality data, which leads to catastrophic filter divergence.

Using both the ocean code and stochastic superparameterization, various kinds of covariance inflation with or without localization are compared. We verify that proper adaptive covariance inflation can effectively stabilize the ensemble-based filters uniformly without catastrophic filter divergence in all test regimes. Furthermore, stochastic superparameterization achieves accurate filtering skill with localization while the ocean code performs poorly even with localization.

The structure of this paper is as follows. In section 2 we briefly review an ensemble method, the EAKF (Anderson 2001) with covariance inflation and localization. The adaptive inflation method to prevent catastrophic filter divergence is described in section 3 including how to choose parameters of the adaptive method. In section 4 the two-layer quasigeostrophic equation with baroclinic instability is described and two coarse-grained forecast models—the ocean code and stochastic superparameterization—are explained. Numerical experiments with various inflation strategies with or without localization are reported in section 5 along with stabilized and improved filtering results using the adaptive inflation method. In section 6 we conclude this paper with a discussion.

2. Ensemble filtering

In this section we briefly describe the EAKF (Anderson 2001), which in our experience is a more stable and accurate scheme than other popular ensemble-based methods (Majda and Harlim 2012). We assume that the true signal is generated by a nonlinear mapping ,
e1
where is a state vector at the nth observation time. We consider a linear observation of by an observation operator , where is a full rank matrix [see Tong et al. (2016) for the case when the observation space is larger than the model space]:
e2
where is a mean zero Gaussian noise with variance σ, independent at different times and spatial grid points. Thus, we have a natural decomposition of into observed and unobserved variables and , where and , respectively. Although it is not necessary, for an easy exposition of the adaptive inflation algorithm we assume that , where is the identity matrix of size . Note that for general linear observation operators there is a one-to-one correspondence, through a linear transformation, between and . Thus, the following results under the assumption of can be easily extended to the pair and instead of the pair and .
Like other ensemble-based filters, the EAKF, which is one of the ensemble square root filters (Tippett et al. 2003), uses ensemble members to represent the statistical properties of the state but uses only the first- and second-order moments (i.e., mean and covariance) to update each ensemble member. First, the EAKF generates prior predictions by solving a forecast model for each ensemble member:
e3
where is an approximate forecast model to the true dynamics . From the forecast ensemble , the prior mean and covariance are given by
e4
and
e5
respectively. With these prior mean and covariance, the standard Kalman formula using observation gives the following posterior mean and covariance:
e6
and
e7
respectively. For an ensemble perturbation matrix whose kth column is given by the ensemble perturbation , EAKF finds an adjustment matrix so that the adjusted ensemble satisfies the posterior covariance (7):
e8
Once the adjustment matrix is calculated, the posterior ensemble is obtained by adding the adjusted perturbation to the posterior mean. That is , where is the kth column of the ensemble perturbation matrix .
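For concreteness, the following minimal NumPy sketch carries out the mean and covariance part of the analysis step in (4)-(7). It is our own illustration, not the authors' code: the function and array names are placeholders, and the ensemble adjustment in (8) that distinguishes the EAKF from other square root filters is deliberately omitted.

    import numpy as np

    def kalman_analysis(ens, y, H, R):
        """Prior statistics (4)-(5) and Kalman update (6)-(7) from a forecast ensemble.

        ens : (d, K) forecast ensemble, one column per member
        y   : (q,) observation vector
        H   : (q, d) linear observation operator
        R   : (q, q) observation error covariance
        The EAKF would additionally adjust the ensemble perturbations so that
        they match the posterior covariance, cf. (8); that step is omitted here.
        """
        d, K = ens.shape
        xb = ens.mean(axis=1)                        # prior mean, cf. (4)
        Xp = ens - xb[:, None]                       # ensemble perturbations
        Pb = Xp @ Xp.T / (K - 1)                     # prior covariance, cf. (5)

        S = H @ Pb @ H.T + R                         # innovation covariance
        K_gain = Pb @ H.T @ np.linalg.inv(S)         # Kalman gain
        xa = xb + K_gain @ (y - H @ xb)              # posterior mean, cf. (6)
        Pa = Pb - K_gain @ H @ Pb                    # posterior covariance, cf. (7)
        return xa, Pa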
Covariance inflation overcomes some of the problems caused by sampling errors due to an insufficient ensemble size or an imperfect model and requires only a minimal additional cost over the original EAKF. Covariance inflation introduces more uncertainty into the prior covariance so that the filter puts more weight on the information given by observations. That is, for a constant , which determines the strength of inflation, covariance inflation inflates the prior covariance:
e9
for multiplicative inflation or
e10
for additive inflation. The ensemble is then modified to satisfy the inflated prior covariance, by spreading the ensemble for multiplicative inflation and by adding additional noise for additive inflation. Although covariance inflation improves filter skill in many applications, it is reported that constant inflation does not prevent catastrophic filter divergence with sparse and accurate observation networks (Harlim and Majda 2010).
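For reference, the two inflation forms in (9) and (10) are commonly written as below; the exact scaling convention (e.g., whether the multiplicative factor is written as 1 + λ) may differ from the authors' notation.

    \widehat{\mathbf{C}}_{\text{mult}} = (1+\lambda)\,\mathbf{C},
    \qquad
    \widehat{\mathbf{C}}_{\text{add}} = \mathbf{C} + \lambda\,\mathbf{I},
    \qquad \lambda > 0,

where C is the prior (forecast) covariance and I is the identity on the state space.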

3. Adaptive additive inflation

A simple remedy in Tong et al. (2016) to stabilize ensemble-based filters by preventing catastrophic filter divergence is to adaptively trigger the inflation and change its strength . Although the adaptive inflation method of Tong et al. (2016) works both for the multiplicative inflation (9) and the additive inflation (10), we focus on the simpler additive inflation in this study. The inflation strength in (10) is determined by the following two statistics of the ensemble:
e11
where is a tunable positive constant, is a measure related to the innovation process in a standard Kalman filter:
e12
is the Euclidean norm (i.e., the largest singular value) of the cross covariance between the observed and unobserved variables:
e13
and and are fixed positive thresholds to decide whether the filter is performing well or not. The first statistical information measures the accuracy of the prediction, that is, how far the predicted observations are from actual observations. The second statistical information is an important factor because large cross covariance can magnify a small error in the observed component and impose it on the unobserved variables. Hence, the adaptive inflation can be regarded as a control of these two statistics to prevent catastrophic filter divergence. Note that these two factors are in fact derived from a rigorous mathematical argument for nonlinear stability of finite ensemble filters that can be found in Tong et al. (2016).
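A schematic sketch of the triggering logic is given below; it is our own illustration under the assumption that the observation operator simply selects grid points, and the precise functional form of the inflation strength in (11)-(13) of Tong et al. (2016) differs in detail.

    import numpy as np

    def adaptive_inflation_strength(ens, y, H, lam0, M1, M2):
        """Schematic trigger for adaptive additive inflation.

        Computes (i) an innovation-type statistic (how far the predicted
        observations are from the actual observations) and (ii) the largest
        singular value of the cross covariance between observed and
        unobserved variables, then switches inflation on when either
        statistic exceeds its threshold.  Assumes H is a selection matrix
        (H @ H.T is the identity); lam0, M1, and M2 are tunable constants.
        """
        d, K = ens.shape
        xb = ens.mean(axis=1)
        Xp = (ens - xb[:, None]) / np.sqrt(K - 1)

        theta = np.linalg.norm(y - H @ xb)              # innovation statistic, cf. (12)
        obs_pert = H @ Xp                               # perturbations of observed variables
        unobs_pert = Xp - H.T @ obs_pert                # perturbations of unobserved variables
        cross_cov = obs_pert @ unobs_pert.T             # cross covariance, cf. (13)
        xi = np.linalg.norm(cross_cov, 2)               # its largest singular value

        lam = 0.0
        if theta > M1 or xi > M2:                       # filter judged to be malfunctioning
            lam = lam0 * (max(theta - M1, 0.0) + max(xi - M2, 0.0))
        return lam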
In contrast to conventional covariance inflation, which modifies the prior ensemble to satisfy the inflated covariance, our approach does not modify the prior ensemble: additive inflation can make the rank of the posterior covariance equal to d, whereas the rank of an ensemble-based covariance cannot exceed the ensemble size K. Thus, in adaptive additive inflation, we use the inflated prior covariance (10) to calculate the posterior mean while the posterior covariance is left unchanged. That is, instead of (6), the posterior mean is defined as
e14
using
e15
where the posterior covariance is the same as (7), that is, no inflated prior covariance.
The two thresholds and of (11) are important factors as they differentiate poor forecasts from properly working forecasts. Using an elementary benchmark of accuracy that should be surpassed by filters, we use the following aggressive thresholding (Tong et al. 2016). The thresholds are given by
e16
where is the Euclidean norm of and
e17
where the benchmark for accuracy is the mean-square error of an estimator using an invariant probability measure of the model:
e18
As the invariant measure of the model is not available, aggressive thresholding uses a Gaussian approximation to the invariant measure using climatological properties, mean and covariance. Then the conditional distribution given observation is a Gaussian measure and can be computed exactly, which gives the following formula:
e19

As is generally not available or difficult to calculate in real applications, a simple approach to estimate without using is suggested in Tong et al. (2016). The approach uses a trivial estimator but requires an estimation of some parameters related to the forecast dynamics. To exclude the error in estimating the parameters from the forecast dynamics, we use the reference simulations to calculate in our experiments.

4. Model equations and forecast models

In atmosphere and ocean science, quasigeostrophic equations are widely used as classical idealized models of geophysical turbulence (Salmon 1998). In this study we use a two-layer quasigeostrophic equation as the model equation to observe catastrophic filter divergence in high-dimensional data assimilation and test the adaptive additive inflation to prevent catastrophic filter divergence. The system is maintained by baroclinic instability imposed by vertical shear flows and shows interesting features in geophysical turbulence such as inverse cascade of energy and zonal jets. After describing the model equation in section 4a, two coarse-grained forecast models—an ocean code that ignores the subgrid scales and another forecast method with stochastic parameterization of the subgrid scales—are explained in section 4b.

a. Two-layer quasigeostrophic equations

Our model equation to generate high-dimensional geophysical turbulence is the following two-layer quasigeostrophic equation in a doubly periodic domain used in Grooms and Majda (2014), Majda and Grooms (2014), Grooms et al. (2015a), and Lee et al. (2017) to generate baroclinic turbulence:
eq1
e20
eq2
eq3
Here is the potential vorticity in the upper () and lower () layers, is the deformation wavenumber, r is a linear Ekman drag coefficient at the bottom layer of the flows, and is a nondimensional constant resulting from the variation of the vertical projection of Coriolis frequency with latitude and the velocity field for the streamfunction . To stabilize the equation by absorbing the downscale cascade of enstrophy at the smallest scales while leaving other scales nearly inviscid for interesting dynamics at large scales, we use a hyperdissipation with a hyperviscosity ν, which is commonly used in turbulence simulations. To maintain nontrivial dynamics of (20) by baroclinic instability, a large-scale zonal vertical shear is applied with equal and opposite unit velocities that are related to the terms in (20).
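For readers who want the system written out, a commonly used equal-layer form of (20) with imposed zonal shear velocities ±U, β effect, bottom drag r, and hyperviscosity ν is sketched below. This is a standard textbook form, and the signs, the factor on the deformation term, and the order of the hyperviscous operator should be checked against the original sources (Grooms and Majda 2014; Majda and Grooms 2014) rather than taken as the authors' exact equation.

    \partial_t q_1 + J(\psi_1, q_1) + U\,\partial_x q_1
        + (\beta + k_d^2 U)\,\partial_x \psi_1 = -\nu \nabla^8 q_1,
    \qquad
    \partial_t q_2 + J(\psi_2, q_2) - U\,\partial_x q_2
        + (\beta - k_d^2 U)\,\partial_x \psi_2 = -r \nabla^2 \psi_2 - \nu \nabla^8 q_2,
    \qquad
    q_1 = \nabla^2 \psi_1 + \tfrac{1}{2} k_d^2 (\psi_2 - \psi_1),
    \qquad
    q_2 = \nabla^2 \psi_2 + \tfrac{1}{2} k_d^2 (\psi_1 - \psi_2),

where J denotes the horizontal Jacobian, q_j and ψ_j are the layer potential vorticities and streamfunctions, and k_d is the deformation wavenumber.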

Following the experiments in Grooms et al. (2015a) and Lee et al. (2017), we test three different regimes corresponding to low-, mid-, and high-latitude ocean models by changing the β-plane effect and the bottom drag r (see Table 1 for the parameter values of the three test regimes). While the deformation wavenumber is fixed at 25, we use a fine resolution of grid points for each layer to generate the true signals in our data assimilation experiments. The hyperviscosity ν is set to 1.28 × 10−15, and we use a pseudospectral spatial discretization, while the time integration uses a fourth-order semi-implicit Runge–Kutta method that incorporates exponential integration of the stiff linear dissipation term. The time step is fixed at 2 × 10−5 for all test regimes.
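The exponential treatment of the stiff dissipation term can be illustrated with a short integrating-factor step in Fourier space. This is only a first-order sketch of the idea (the paper uses a fourth-order semi-implicit Runge–Kutta variant), and the function name, the wavenumber array, and the choice of an eighth-order operator are illustrative assumptions.

    import numpy as np

    def integrating_factor_step(q_hat, dt, nu, ksq, nonlinear_tendency):
        """One first-order integrating-factor (exponential) step in spectral space.

        The stiff linear dissipation, illustrated here as an eighth-order
        hyperviscosity -nu * ksq**4 * q_hat, is integrated exactly through the
        exponential factor, while the nonlinear tendency is treated explicitly.
        """
        E = np.exp(-nu * ksq**4 * dt)      # exact decay from the linear term over one step
        N = nonlinear_tendency(q_hat)      # advection and forcing, evaluated explicitly
        return E * (q_hat + dt * N)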

Table 1. Parameters of (20) for the three test regimes. The other parameters are fixed across all regimes.

In the high-latitude case (or the f-plane case), the quasigeostrophic equation is dominated by spatially homogeneous and isotropic flows (see Fig. 1 for snapshots of the upper- and lower-layer streamfunction). In the mid- and low-latitude cases, which have the β-plane effect, the flows organize into inhomogeneous and anisotropic structure such as zonal jets.

Fig. 1. Snapshots of streamfunctions: (top) upper layer and (bottom) lower layer; (left to right) low-, mid-, and high-latitude cases.

b. Forecast models with and without stochastic parameterization

As a forecast model in data assimilation of the true signal given by (20), we consider two forecast models on low-resolution grid points: 1) an ocean code that uses only a coarse grid without parameterizing the small scales and 2) stochastic superparameterization that parameterizes the effect of the small scales by modeling the small scales as randomly oriented plane waves (Majda and Grooms 2014; Grooms et al. 2015b). Note that these two forecast models are imperfect models as they approximate the true signal on a low-resolution grid. Thus, in data assimilation using ensemble-based methods, there is an error from the imperfect model in addition to the sampling error due to a small ensemble size.

The first forecast model, which we call ocean code, solves the following approximation to (20), which replaces the hyperdissipation by a biharmonic dissipation of relative vorticity :
e21
This replacement is to mimic the biharmonic dissipation commonly used in eddy-permitting ocean models (Griffies and Hallberg 2000). By analogy with ocean models and some atmospheric models, the ocean code also uses the second-order energy- and enstrophy-conserving Arakawa finite differencing (Arakawa 1966) for the nonlinear advection terms . For time integration, we use a second-order Runge–Kutta integration with the same exponential integrator for the linear stiff term and a time step fixed at 5 × 10−4.

We consider another forecast model called stochastic superparameterization, which uses randomly oriented plane waves for the parameterization of the subgrid scales. The subgrid scales are generally not zero and influence the evolution of the resolved scales. Especially in quasigeostrophic turbulence, which includes regimes with a net transfer of kinetic energy from small to large scales (Charney 1971), it is important to accurately model the effects of the underresolved eddies to obtain accurate properties of the system such as the energy spectrum. Stochastic superparameterization was developed as a multiscale model for turbulence without a scale gap between the resolved and unresolved scales (Grooms and Majda 2014; Majda and Grooms 2014). Among the various versions of stochastic superparameterization, we use the most recent version, developed in Grooms et al. (2015b), which handles arbitrary boundary conditions using finite-difference numerics for the large scales.

The stochastic superparameterization forecast model solves (21) using the same second-order finite differencing for the nonlinear term but with additional terms , obtained from the stochastic subgrid-scale parameterization:
e22
The parameterization terms , are computed by modeling the subgrid scales as randomly oriented plane waves. Under this model of the subgrid scales, stochastic superparameterization replaces the nonlinear terms of the subgrid-scale equation with additional damping and white noise forcing, which yields a quasilinear equation conditioned on the resolved-scale variables. We also use the method in Grooms et al. (2015b) to impose temporal correlations in the parameterization by using a Wiener process model for the orientation of the plane waves. Because the subgrid scales are solved in formally infinite domains, this approach has no scale gap between the resolved and subgrid scales. Also, the stochastic modeling of the subgrid scales generates the subgrid-scale instability that is missing in deterministic parameterizations of the subgrid scales. See the appendix for more details on the subgrid-scale parameterization terms. Note that we use the same time integration as in the ocean code; thus, the difference between the ocean code and stochastic superparameterization comes only from the parameterization terms .
Figure 2 shows the time-averaged kinetic energy (KE) spectra:
eq4
by the direct numerical reference (black) along with stochastic superparameterization (blue) and the ocean code (red), with the biharmonic viscosity obtained by tuning to match the energy spectra. Although the ocean code has much weaker dissipation than stochastic superparameterization, it has smaller energies, while stochastic superparameterization captures the correct large-scale kinetic energy spectra; the low energy of the ocean code cannot be improved further by tuning the biharmonic viscosity coefficient. This result implies that the ocean code could suffer filter divergence because the small energy of its resolved scales leads to an inappropriate representation of the forecast uncertainty. On the other hand, it is known that stochastic parameterization can act to reduce model error (Shutts 2005; Frenkel et al. 2012), and it increases the ensemble spread, which yields an effect similar to covariance inflation. In the next section, we will see that stochastic superparameterization requires smaller covariance inflation than the ocean code, as the ocean code has large model errors that cannot be remedied by covariance inflation.
Fig. 2. Time-averaged total kinetic energy (KE) spectra from the direct numerical reference (black), stochastic superparameterization (blue), and the ocean code (red).

5. Catastrophic filter divergence and numerical experiments

In this section we demonstrate catastrophic filter divergence for all three test regimes and for both forecast models—the ocean code and stochastic superparameterization—with sparse, high-quality observations that are infrequent in time. Catastrophic filter divergence is effectively prevented using the adaptive additive inflation for both forecast methods. Stochastic superparameterization achieves accurate filtering skill with localization, while the ocean code fails to achieve accurate skill even with localization.

a. Filtering setup

For the EAKF, we use the sequential observation update of Anderson (2001), which avoids explicit computation of the SVD in (8) by processing observations individually. The true signal is given by a fine-resolution solution of (20) using grid points for each layer and a fourth-order semi-implicit Runge–Kutta integration with a time step of 2 × 10−5. The two forecast models use the same coarse resolution for each layer and a second-order semi-implicit Runge–Kutta with a time step of 5 × 10−4.

We observe only the upper-layer streamfunction, analogous to observations of sea surface height, on a sparse uniform grid, while the streamfunction in the lower layer is completely unobserved. Observation error variances correspond to 1% of the streamfunction variance for each test regime. Following the idea of Keating et al. (2012), the eddy turnover time (where Z is the time-averaged total enstrophy) is comparable to 0.006 for all test regimes, and we use infrequent observations with an observation interval of 0.008. Note that with the forecast-model time step of 5 × 10−4, this observation interval requires 16 time integrations for each forecast step. The ensemble size of 17 is much smaller than the dimension of the forecast model, as is typical in real data assimilation.

The initial ensemble members are generated from the true signal by adding random noise with zero mean and a variance corresponding to 30% of the total variance. For each filtering test, we run 1000 assimilation cycles and take the last 600 cycles to measure filter performance using the time-averaged RMS error (RMSE):
e23
and pattern correlation (PC):
e24
respectively, where the pattern correlation is defined through an inner product of the two fields.
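In code, the two skill measures of (23) and (24) amount to the following short sketch (the field names are placeholders); whether the spatial mean is removed before computing the correlation should follow the authors' definition.

    import numpy as np

    def rmse(estimate, truth):
        """Root-mean-square error of an estimated field against the truth, cf. (23)."""
        return np.sqrt(np.mean((estimate - truth) ** 2))

    def pattern_correlation(estimate, truth):
        """Normalized inner product of the mean-removed fields, cf. (24)."""
        a = (estimate - estimate.mean()).ravel()
        b = (truth - truth.mean()).ravel()
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))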
For covariance inflation, we test several combinations of inflation methods—for the inflation strength in (10), no inflation (noI) , constant inflation (CI) for a constant , adaptive inflation (AI) by (11), and constant + adaptive inflation (CAI):
e25
To obtain and , we gradually increase them from 0 until they yield the smallest occurrence percentage of catastrophic filter divergence (see Table 2 for the tuned values used in this study). The thresholds for adaptive inflation are given by the aggressive thresholding (16) and (17), where the benchmark for accuracy is given by 10, 166, and 155 for the low-, mid-, and high-latitude cases, respectively, from the reference simulations. Along with these inflation methods, we also use covariance localization with the compactly supported fifth-order piecewise rational function from Gaspari and Cohn (1999). The localization radius (where the influence of an observation vanishes) is set to eight grid points. The distance between two adjacent observation points is 12 grid points, and thus the square region centered at each observation point receives only a marginal update from the other observation points.
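The Gaspari and Cohn (1999) fifth-order piecewise rational function used for localization can be coded as below. In this sketch c is the half-width, so the weight reaches zero at a distance of 2c; how that maps onto the localization radius of eight grid points quoted above depends on the convention, so the value of c is an assumption.

    import numpy as np

    def gaspari_cohn(dist, c):
        """Compactly supported fifth-order piecewise rational function of
        Gaspari and Cohn (1999); returns localization weights that vanish
        for dist >= 2*c."""
        r = np.atleast_1d(np.abs(dist) / float(c))
        w = np.zeros_like(r)

        inner = r <= 1.0
        outer = (r > 1.0) & (r < 2.0)

        ri = r[inner]
        w[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                    - (5.0 / 3.0) * ri**2 + 1.0)

        ro = r[outer]
        w[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                    + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0 - (2.0 / 3.0) / ro)
        return w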
Table 2. Constant and adaptive inflation parameters for each test regime.

b. Filter experiments—Catastrophic filter divergence and stabilization

If no inflation is applied, the EAKF suffers catastrophic filter divergence for both forecast models. Figure 3 shows a sequence of snapshots of the low-latitude case upper-layer streamfunction by the ocean code without inflation and localization (observation points are marked with black circles). At the 570th cycle, the filter still captures the meridional structure of the low-latitude case, but as the cycles continue, an instability develops at unobserved grid points, which eventually diverges to machine infinity after the 600th cycle. The first row of Fig. 4 shows time series of the RMS errors by the forecast methods when they suffer from catastrophic filter divergence. The RMS errors increase gradually but eventually diverge to machine infinity. The two forecast models run slightly longer with localization, but localization fails to prevent catastrophic filter divergence. The second row of Fig. 4 shows time series of RMS errors with the constant + adaptive inflation; the cycles at which adaptive inflation is triggered are marked with dots. In the ocean code case with no localization, the adaptive inflation is triggered during the initial period and then stops, although the filter continues to degrade. Inflation is triggered again when the filter fails to capture the true signal. The ocean code with localization triggers the adaptive inflation most of the time and obtains a stable result but also fails to achieve accurate filtering skill. In the stochastic superparameterization case with the adaptive inflation and localization, adaptive inflation is triggered only 99 times out of 1000 cycles; most of the triggering occurs at the beginning, and it occurs only infrequently later as the filter is performing well.

Fig. 3. Low-latitude case: snapshots of posterior upper-layer streamfunctions by the ocean code at the 570th, 580th, 590th, and 600th cycles. Observation points are marked with circles. Catastrophic filter divergence is invoked after the 600th cycle.

Fig. 4. Low-latitude case: time series of upper-layer RMS error. The cycles at which adaptive inflation is triggered are marked with filled circles. The standard deviation of the streamfunction is shown by the dashed line.

The occurrence percentage of catastrophic filter divergence out of 100 different runs is shown in Table 3. With no localization and no inflation, the filter suffers from catastrophic filter divergence in more than 75% of the runs for both the ocean code and stochastic superparameterization. Constant inflation stabilizes the filter slightly, but it does not prevent catastrophic filter divergence completely. CI with no localization has a higher percentage of divergence than the no-inflation case for the stochastic superparameterization forecast model. Through the stochastic parameterization of subgrid scales, stochastic superparameterization has more variability than the ocean code, and thus additional constant inflation is not necessary.

Table 3. Occurrence percentage of catastrophic filter divergence out of 100 different runs, with and without localization: no inflation (noI), constant (CI), adaptive (AI), and constant + adaptive (CAI) inflation methods.

AI, with or without localization, significantly decreases the number of occurrences of catastrophic filter divergence, but the ocean code fails to prevent it entirely. With CAI, all methods are stable even without localization. Note that for stochastic superparameterization, both AI and CAI work well in preventing catastrophic filter divergence, while the ocean code fails to prevent the divergence in the AI case. As discussed before, stochastic superparameterization has enough ensemble spread through the stochastic parameterization of the subgrid scales, and thus, when adaptive inflation is already applied, constant inflation plays only a marginal role in improving filter skill.

For the filters stabilized with CAI, we compare the filter performance using the time-averaged posterior RMS errors and pattern correlations (the performance difference between AI and CAI is marginal when there is no catastrophic filter divergence). In the low-latitude case (Table 4), both the ocean code and the superparameterization methods fail to achieve accurate filtering skill without localization: the RMS errors are larger than the standard deviation of the streamfunction, and neither forecast method captures the correlation with the true signal. When localization is combined with the adaptive inflation, it helps to increase the filtering skill of both methods. The superparameterization results improve significantly; the RMS error is smaller than 50% of the standard deviation of the streamfunction and the pattern correlation is larger than 90% for both layers. Although the lower-layer streamfunction is completely unobserved, the adaptive filter achieves accurate filtering skill there. The ocean code result is improved by localization, but it still suffers from standard filter divergence with RMS errors larger than the standard deviation of the streamfunction.

Table 4. Low-latitude case: streamfunction estimation for both layers. Posterior RMS errors and pattern correlations (in parentheses).

In the midlatitude case, the superparameterization still has meaningful filtering skill and is superior to the ocean code, although the performance is slightly degraded compared to the low-latitude case because the midlatitude regime is more turbulent. The RMS error of superparameterization with the adaptive inflation and localization is about 30% smaller than the standard deviation, and the pattern correlations are larger than 75% (see Table 5 for the midlatitude RMS errors and pattern correlations). On the other hand, the ocean code does not show any significant skill even with the adaptive inflation and localization. In the midlatitude case, the ocean code using adaptive inflation displays comparable results with and without localization, and both fail to achieve meaningful filtering results. For the superparameterization, on the other hand, significant improvement in filter skill is achieved using localization (see the second row of Fig. 5 for the time series of RMS errors with the adaptive inflation). As the RMS errors fluctuate more than in the low-latitude case, the adaptive inflation is triggered most of the time for all combinations of inflation and localization.

Table 5. Midlatitude case: streamfunction estimation for both layers. Posterior RMS errors and pattern correlations (in parentheses).
Fig. 5. Midlatitude case: time series of upper-layer RMS error. The cycles at which adaptive inflation is triggered are marked with filled circles. The standard deviation of the streamfunction is shown by the dashed line.

The last test regime, the high-latitude case (Fig. 6), is the most difficult test case as it is strongly turbulent and dominated by homogeneous and isotropic vortical flows with no spatial structure. In this regime, stochastic superparameterization with CAI and localization still achieves a smaller RMS error and a larger pattern correlation than the ocean code. The observed upper-layer RMS error is 10% smaller than the standard deviation, while the unobserved lower-layer RMS error is only 5% smaller than the standard deviation (Table 6).

Fig. 6. High-latitude case: time series of upper-layer RMS error. The cycles at which adaptive inflation is triggered are marked with filled circles. The standard deviation of the streamfunction is shown by the dashed line.

Table 6. High-latitude case: streamfunction estimation for both layers. Posterior RMS errors and pattern correlations (in parentheses).

For the stochastic superparameterization case with localization and CAI, the time series of the two statistics are shown in Fig. 7 (the dashed line is the corresponding threshold value that triggers the inflation). As the midlatitude result lies between the low- and high-latitude cases, we consider only the low- and high-latitude cases here. In the high-latitude case, the innovation statistic mostly triggers the inflation, while the cross-covariance statistic does so only intermittently. Thus, we can expect comparable results from other adaptive inflation techniques, such as Luo and Hoteit (2013), that use only the innovation statistics. However, in the low-latitude case, the cross covariance dominates the triggering of the inflation while the innovation plays a marginal role, which implies that it is necessary to use the cross covariance to stabilize the filter. Note that the ocean code shows similar results.

Fig. 7. Time series of the two statistics that trigger the adaptive inflation for the low- and high-latitude cases using stochastic superparameterization and localization. The dashed line is the corresponding threshold value.

6. Conclusions

Ensemble-based filtering methods are indispensable tools in atmosphere and ocean science as they provide computationally cheap, low-dimensional ensemble state estimation for extremely high-dimensional turbulent systems. But these methods can suffer from catastrophic filter divergence, which drives the filter predictions to machine infinity, especially when the observations are sparse, accurate, and infrequent, even though the underlying true signal remains bounded. Using an idealized model for the geophysical turbulence of the ocean, the two-layer quasigeostrophic equation with baroclinic instability, and a sparse observation network, which is typical in real applications, we demonstrated catastrophic filter divergence of the ensemble adjustment Kalman filter, which is one of the most stable and accurate ensemble methods.

Constant covariance inflation and localization, which are widely used methods to account for the sampling errors due to insufficient ensemble size and the model errors from imperfect forecast models, stabilize the filter but fail to prevent catastrophic filter divergence. Increasing the number of observations or the ensemble size can help to prevent catastrophic filter divergence, but this approach is practically prohibitive and sometimes impossible, as it requires an enormous amount of financial and computational resources to cover the vast surface of the ocean. Instead we followed the adaptive inflation approach of Tong et al. (2016) to prevent catastrophic filter divergence. The adaptive approach requires a minimal additional computational cost compared to the standard ensemble-based methods and uses only two low-order statistics of the ensemble—the ensemble innovation and the cross covariance between observed and unobserved variables.

We tested the adaptive inflation using two forecast models: the ocean code, which has no parameterization of the subgrid scales, and stochastic superparameterization, which parameterizes the subgrid scales by modeling them as randomly oriented plane waves. Although both forecast models are stabilized with the adaptive inflation, stochastic superparameterization displays filtering skill superior to the ocean code. When the ensemble method is combined with localization and adaptive inflation, stochastic superparameterization achieves RMS errors smaller than the climatological error, while the ocean code still suffers from standard filter divergence with RMS errors comparable to the climatological error.

As we have shown in this study, covariance inflation is an important and useful technique for improving filtering skill in ensemble-based methods. There are other classes of adaptive inflation techniques, such as Anderson (2007) and Ying and Zhang (2015). Although the adaptive inflation in Tong et al. (2016) is based on rigorous mathematical arguments, it would be interesting to test other methods for avoiding catastrophic filter divergence, such as the blended filter (Majda et al. 2014; Qi and Majda 2015), which combines a particle filter in a low-dimensional subspace with an efficient Kalman filter in the orthogonal complement. As investigated in Harlim and Majda (2010) through a linear stochastic model for the forecast, a judicious model error could be an alternative way to prevent catastrophic filter divergence.

Acknowledgments

The research of A. J. Majda is partially supported by Office of Naval Research Grant ONR MURI N00014-12-1-0912 and DARPA 25-74200-F4414. Y. Lee is supported as a postdoctoral fellow by these grants. D. Qi is supported as a graduate research assistant by the ONR grant. We thank the three anonymous reviewers for their comments, which significantly improved the manuscript.

APPENDIX

Stochastic Superparameterization

For completeness of this paper, we describe the details of the stochastic subgrid parameterization terms of (22). The subgrid parameterization terms account for the small-scale contributions to the large-scale equation and in stochastic superparameterization (Majda and Grooms 2014; Grooms and Majda 2014; Grooms et al. 2015b) the terms have the following form:
ea1
where A is a tunable parameter that determines the strength of the small-scale contributions, is a spatial smoothing operator, and and are functions of the large-scale variables that are calculated by solving linear stochastic models of the small-scale equations coupled to the large-scale variables. The linear stochastic models are obtained by replacing the nonlinear terms of the small-scale equations with additional damping and white noise forcing, and they are solved with the large-scale variables frozen [see Majda and Grooms (2014), Grooms and Majda (2014), and Grooms et al. (2015b) for more details].
The angle θ represents the orientation of the small-scale plane waves and is modeled by a Wiener process that is independent at different coarse-grid points,
dθ = σ dW,    (A2)
to impose temporal correlation on the subgrid-scale parameterization. The parameter σ determines the decorrelation time of the small scales and thus can be estimated from the small-scale decorrelation time without tuning (in our experiments the corresponding values are 250, 167.5, and 67.5 for the high-, mid-, and low-latitude cases, respectively). The other parameter, A, is hand-tuned by matching the kinetic energy of stochastic superparameterization, which yields 6750, 1700, and 350 for the high-, mid-, and low-latitude cases, respectively. When the large-scale equation is solved by a low-order method, such as the finite-difference method used in this paper, the subgrid-scale parameterization accumulates strongly at the coarse grid scale. The smoothing operator, which uses a three-gridpoint average in each direction, is necessary to damp the subgrid parameterization spectrum at the coarse-grid Nyquist wavenumber. Note that when the large-scale equation is solved by spectral methods, the smoothing operator is not necessary (see Grooms and Majda 2014).
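The two concrete ingredients above, the Brownian evolution of the orientation angle θ at each coarse-grid point and the three-gridpoint smoothing in each direction, could be coded as in the following sketch (Python/NumPy). The time step `dt`, the grid shape, and the periodic boundary conditions are assumptions made for illustration, and the orientation update uses the dθ = σ dW form written in (A2).

```python
import numpy as np

def advance_orientation(theta, sigma, dt, rng=None):
    """One step of the Wiener-process model d(theta) = sigma dW applied
    independently at every coarse-grid point (theta is an ny x nx array)."""
    rng = np.random.default_rng() if rng is None else rng
    return theta + sigma * np.sqrt(dt) * rng.standard_normal(theta.shape)

def smooth_three_point(f):
    """Three-gridpoint average in each direction (periodic boundaries assumed),
    used to damp the subgrid forcing near the coarse-grid Nyquist wavenumber."""
    fx = (np.roll(f, 1, axis=1) + f + np.roll(f, -1, axis=1)) / 3.0
    return (np.roll(fx, 1, axis=0) + fx + np.roll(fx, -1, axis=0)) / 3.0
```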

REFERENCES

  • Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.
  • Anderson, J. L., 2007: An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus, 59A, 210–224, doi:10.1111/j.1600-0870.2006.00216.x.
  • Arakawa, A., 1966: Computational design for long-term numerical integration of the equations of fluid motion: Two-dimensional incompressible flow. Part I. J. Comput. Phys., 1, 119–143, doi:10.1016/0021-9991(66)90015-5.
  • Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436, doi:10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.
  • Charney, J. G., 1971: Geostrophic turbulence. J. Atmos. Sci., 28, 1087–1095, doi:10.1175/1520-0469(1971)028<1087:GT>2.0.CO;2.
  • Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367, doi:10.1007/s10236-003-0036-9.
  • Frenkel, Y., A. J. Majda, and B. Khouider, 2012: Using the stochastic multicloud model to improve convective parameterization: A paradigm example. J. Atmos. Sci., 69, 1080–1105, doi:10.1175/JAS-D-11-0148.1.
  • Gaspari, G., and S. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, doi:10.1002/qj.49712555417.
  • Gottwald, G. A., and A. J. Majda, 2013: A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks. Nonlinear Processes Geophys., 20, 705–712, doi:10.5194/npg-20-705-2013.
  • Griffies, S. M., and R. W. Hallberg, 2000: Biharmonic friction with a Smagorinsky-like viscosity for use in large-scale eddy-permitting ocean models. Mon. Wea. Rev., 128, 2935–2946, doi:10.1175/1520-0493(2000)128<2935:BFWASL>2.0.CO;2.
  • Grooms, I., and A. J. Majda, 2014: Stochastic superparameterization in quasigeostrophic turbulence. J. Comput. Phys., 271, 78–98, doi:10.1016/j.jcp.2013.09.020.
  • Grooms, I., Y. Lee, and A. J. Majda, 2015a: Ensemble filtering and low-resolution model error: Covariance inflation, stochastic parameterization, and model numerics. Mon. Wea. Rev., 143, 3912–3924, doi:10.1175/MWR-D-15-0032.1.
  • Grooms, I., Y. Lee, and A. J. Majda, 2015b: Numerical schemes for stochastic backscatter in the inverse cascade of quasigeostrophic turbulence. Multiscale Model. Simul., 13, 1001–1021, doi:10.1137/140990048.
  • Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129, 2776–2790, doi:10.1175/1520-0493(2001)129<2776:DDFOBE>2.0.CO;2.
  • Harlim, J., and A. J. Majda, 2010: Catastrophic filter divergence in filtering nonlinear dissipative systems. Commun. Math. Sci., 8, 27–43, doi:10.4310/CMS.2010.v8.n1.a3.
  • Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129, 123–137, doi:10.1175/1520-0493(2001)129<0123:ASEKFF>2.0.CO;2.
  • Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 368 pp.
  • Keating, S., A. J. Majda, and K. Smith, 2012: New methods for estimating ocean eddy heat transport using satellite altimetry. Mon. Wea. Rev., 140, 1703–1722, doi:10.1175/MWR-D-11-00145.1.
  • Kelly, D., A. Majda, and X. T. Tong, 2015: Concrete ensemble Kalman filters with rigorous catastrophic filter divergence. Proc. Natl. Acad. Sci. USA, 112, 10 589–10 594, doi:10.1073/pnas.1511063112.
  • Lee, Y., A. J. Majda, and D. Qi, 2017: Stochastic superparameterization and multiscale filtering of turbulent tracers. Multiscale Model. Simul., 15, 215–234, doi:10.1137/16M1080239.
  • Li, H., E. Kalnay, and T. Miyoshi, 2009: Simultaneous estimation of covariance inflation and observation errors within an ensemble Kalman filter. Quart. J. Roy. Meteor. Soc., 135, 523–533, doi:10.1002/qj.371.
  • Luo, X., and I. Hoteit, 2013: Covariance inflation in the ensemble Kalman filter: A residual nudging perspective and some implications. Mon. Wea. Rev., 141, 3360–3368, doi:10.1175/MWR-D-13-00067.1.
  • Luo, X., and I. Hoteit, 2014: Ensemble Kalman filtering with residual nudging: An extension to state estimation problems with nonlinear observation operators. Mon. Wea. Rev., 142, 3696–3712, doi:10.1175/MWR-D-13-00328.1.
  • Majda, A. J., and J. Harlim, 2012: Filtering Complex Turbulent Systems. Cambridge University Press, 368 pp.
  • Majda, A. J., and I. Grooms, 2014: New perspectives on superparameterization for geophysical turbulence. J. Comput. Phys., 271, 60–77, doi:10.1016/j.jcp.2013.09.014.
  • Majda, A. J., D. Qi, and T. P. Sapsis, 2014: Blended particle filters for large-dimensional chaotic dynamical systems. Proc. Natl. Acad. Sci. USA, 111, 7511–7516, doi:10.1073/pnas.1405675111.
  • Qi, D., and A. J. Majda, 2015: Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems. Physica D, 298–299, 21–41, doi:10.1016/j.physd.2015.02.002.
  • Sakov, P., and P. Oke, 2008: A deterministic formulation of the ensemble Kalman filter: An alternative to ensemble square root filters. Tellus, 60A, 361–371, doi:10.1111/j.1600-0870.2007.00299.x.
  • Salmon, R., 1998: Lectures on Geophysical Fluid Dynamics. Oxford University Press, 400 pp.
  • Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. Quart. J. Roy. Meteor. Soc., 131, 3079–3102, doi:10.1256/qj.04.106.
  • Snyder, C., T. Bengtsson, P. Bickel, and J. Anderson, 2008: Obstacles to high-dimensional particle filtering. Mon. Wea. Rev., 136, 4629–4640, doi:10.1175/2008MWR2529.1.
  • Tippett, M. K., J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485–1490, doi:10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2.
  • Tong, X. T., A. J. Majda, and D. Kelly, 2016: Nonlinear stability of the ensemble Kalman filter with adaptive covariance inflation. Commun. Math. Sci., 14, 1283–1313, doi:10.4310/CMS.2016.v14.n5.a5.
  • Whitaker, J. S., G. P. Compo, and J. N. Thepaut, 2009: A comparison of variational and ensemble-based data assimilation systems for reanalysis of sparse observations. Mon. Wea. Rev., 137, 1991–1999, doi:10.1175/2008MWR2781.1.
  • Ying, Y., and F. Zhang, 2015: An adaptive covariance relaxation method for ensemble data assimilation. Quart. J. Roy. Meteor. Soc., 141, 2898–2906, doi:10.1002/qj.2576.