Search Results

You are looking at 1–3 of 3 items for:

  • Author or Editor: Yicun Zhen
  • All content
Yicun Zhen and Fuqing Zhang

Abstract

This study proposes a variational approach to adaptively determine the optimum radius of influence for ensemble covariance localization when uncorrelated observations are assimilated sequentially. Covariance localization is commonly used by various ensemble Kalman filters to limit the impact of covariance sampling errors when the ensemble size is small relative to the dimension of the state. The approach is based on the premise of finding the optimum localization radius that minimizes the distance between the Kalman update computed with the localized sampling covariance and the update computed with the true covariance, when a sequential ensemble square root Kalman filter is used. The authors first examine the effectiveness of the proposed method for cases in which the true covariance is known or can be approximated from a sufficiently large ensemble. Not surprisingly, it is found that the shorter the correlation distance of the true covariance, or the smaller the ensemble, the smaller the localization radius that is needed. The authors further generalize the method to the more usual scenario in which the true covariance is unknown but can be represented or estimated probabilistically from the ensemble sampling covariance. The mathematical formula for this probabilistic and adaptive approach, using the Jeffreys prior, is derived. Promising results and limitations of this new method are discussed through experiments with the Lorenz-96 system.
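The selection criterion can be made concrete with a small toy computation. The sketch below is not the paper's variational or probabilistic formula: as a simplified stand-in it minimizes the Frobenius distance between the localized sample covariance and a reference covariance that plays the role of the truth, using a Gaspari–Cohn taper on a periodic grid. The function names (gaspari_cohn, best_radius) and all parameter values are illustrative assumptions.

```python
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari-Cohn 5th-order taper; correlations vanish beyond 2c."""
    r = np.abs(d) / c
    out = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    out[m1] = -0.25*r1**5 + 0.5*r1**4 + 0.625*r1**3 - 5/3*r1**2 + 1.0
    out[m2] = (r2**5/12 - 0.5*r2**4 + 0.625*r2**3 + 5/3*r2**2
               - 5.0*r2 + 4.0 - 2.0/(3.0*r2))
    return out

def best_radius(P_ref, P_samp, dist, radii):
    """Radius whose localized sample covariance is closest to P_ref
    in Frobenius norm (a simplified stand-in for the paper's criterion)."""
    errs = [np.linalg.norm(gaspari_cohn(dist, c) * P_samp - P_ref)
            for c in radii]
    return radii[int(np.argmin(errs))]

# Toy setup on a ring of n grid points (Lorenz-96-like geometry).
rng = np.random.default_rng(0)
n, n_ens = 40, 20
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
dist = np.minimum(d, n - d)                      # periodic distance
P_true = np.exp(-dist / 3.0)                     # stand-in "true" covariance
ens = rng.multivariate_normal(np.zeros(n), P_true, size=n_ens)
P_samp = np.cov(ens, rowvar=False)               # noisy sample covariance
print(best_radius(P_true, P_samp, dist, radii=np.arange(1, 15)))
```

Shrinking the ensemble size in this toy tends to shrink the selected radius, in line with the behavior summarized in the abstract.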

Full access
Yicun Zhen, Pierre Tandeo, Stéphanie Leroux, Sammy Metref, Thierry Penduff, and Julien Le Sommer

Abstract

Because of the irregular sampling pattern of raw altimeter data, many oceanographic applications rely on information from sea surface height (SSH) products gridded on regular grids, where gaps have been filled by interpolation. Today, operational SSH products are created using the simple but robust optimal interpolation (OI) method. When well tuned, OI is computationally cheap and provides accurate results at low resolution; however, it is not well suited to producing high-resolution, high-frequency SSH maps. To improve the interpolation of SSH satellite observations, a data-driven approach (i.e., one that constructs a dynamical forecast model from the data) was recently proposed: analog data assimilation (AnDA). AnDA adaptively chooses analog situations from a catalog of SSH scenes, originating from numerical simulations or a large database of observations, which allows the temporal propagation of physical features at different scales as each observation is assimilated. In this article, we review the AnDA and OI algorithms and compare their skill in numerical experiments. The experiments are observing system simulation experiments (OSSEs) on the Lorenz-63 system and on an SSH reconstruction problem in the Gulf of Mexico. The results show that AnDA, without any tuning, produces reconstructions comparable to those of OI with tuned parameters; moreover, AnDA manages to reconstruct signals at higher frequencies than OI. Finally, an important additional feature of any interpolation method is the ability to assess the quality of its own reconstruction. This study shows that the standard deviation estimated by AnDA is flow dependent and hence more informative about reconstruction quality than the one estimated by OI.
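For reference, the OI baseline discussed above can be sketched in a few lines for a one-dimensional, stationary Gaussian covariance. This is a generic OI (kriging) illustration, not the implementation behind any operational SSH product; the function name optimal_interpolation and the parameters L and sigma2 are assumptions.

```python
import numpy as np

def optimal_interpolation(obs_pos, obs_val, grid_pos, L=1.0, sigma2=0.01):
    """Minimal 1D optimal interpolation with a stationary Gaussian
    covariance. L is the correlation length; sigma2 the observation-error
    variance. Returns the analysis and its (diagonal) error variance."""
    def cov(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)
    C_oo = cov(obs_pos, obs_pos) + sigma2 * np.eye(len(obs_pos))
    C_go = cov(grid_pos, obs_pos)
    analysis = C_go @ np.linalg.solve(C_oo, obs_val)
    # Posterior variance at grid points: diag(C_gg) - diag(C_go C_oo^-1 C_og)
    var = 1.0 - np.sum(C_go * np.linalg.solve(C_oo, C_go.T).T, axis=1)
    return analysis, var

# Usage: reconstruct a smooth signal from scattered noisy samples.
rng = np.random.default_rng(1)
x_obs = rng.uniform(0, 10, size=15)
y_obs = np.sin(x_obs) + 0.1 * rng.standard_normal(15)
x_grid = np.linspace(0, 10, 200)
y_hat, y_var = optimal_interpolation(x_obs, y_obs, x_grid)
```

Note that the error variance returned here depends only on the observation geometry and the prescribed covariance, not on the state itself; this is the non-flow-dependent uncertainty estimate that the abstract contrasts with AnDA's flow-dependent spread.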

Restricted access
Pierre Tandeo, Pierre Ailliot, Marc Bocquet, Alberto Carrassi, Takemasa Miyoshi, Manuel Pulido, and Yicun Zhen

Abstract

Data assimilation combines forecasts from a numerical model with observations. Most current data assimilation algorithms treat the model and observation error terms as additive Gaussian noise, specified by their covariance matrices Q and R, respectively. These error covariances, and specifically their relative amplitudes, determine the weights given to the background (i.e., the model forecasts) and to the observations in the solution of data assimilation algorithms (i.e., the analysis). Consequently, the Q and R matrices significantly affect the accuracy of the analysis. This review aims to present and discuss, within a unified framework, different methods for jointly estimating the Q and R matrices using ensemble-based data assimilation techniques. Most of the methods developed to date use the innovations, defined as the differences between the observations and the projection of the forecasts onto the observation space. These methods rest on two main statistical criteria: 1) the method of moments, in which the theoretical and empirical moments of the innovations are set equal, and 2) methods based on the likelihood of the observations, which is carried by the innovations. The reviewed methods assume that the innovations are Gaussian random variables, although extensions to other distributions are possible for the likelihood-based methods. The methods also differ in their levels of complexity and in their applicability to high-dimensional systems. The review concludes by discussing the key challenges in further developing estimation methods for Q and R, including accounting for time-varying error covariances, handling limited observational coverage, estimating additional deterministic error terms, and allowing for correlated noise.
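A concrete instance of the method-of-moments family described above is the residual diagnostic of Desroziers and colleagues, which reads R and HBH^T off cross-statistics of innovations and analysis residuals. The sketch below assumes a consistent linear-Gaussian analysis and is only one building block, not a full joint Q and R estimation scheme; the function and variable names are illustrative.

```python
import numpy as np

def residual_moment_estimates(d_ob, d_oa):
    """Moment-based estimates from assimilation residuals.
    d_ob: observation-minus-background innovations, shape (T, p).
    d_oa: observation-minus-analysis residuals,     shape (T, p).
    Under a consistent linear-Gaussian analysis,
      E[d_ob d_ob^T] = H B H^T + R   and   E[d_oa d_ob^T] = R,
    so sample averages of these outer products estimate R and H B H^T."""
    T = d_ob.shape[0]
    R_hat = d_oa.T @ d_ob / T
    R_hat = 0.5 * (R_hat + R_hat.T)          # symmetrize the estimate
    HBHt_hat = d_ob.T @ d_ob / T - R_hat
    return R_hat, HBHt_hat
```

Estimating Q itself further requires relating the background covariance B to Q through the model dynamics, which is one point where the reviewed methods differ.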

Restricted access