Search Results

You are looking at 1–8 of 8 items for Author or Editor: Emmanuel Cosme
Christophe Genthon, Gerhard Krinner, and Emmanuel Cosme

Abstract

Because many of the synoptic cyclones south of the 60°S parallel originate from 60°S and lower latitudes, nudging an atmospheric general circulation model (AGCM) with meteorological analyses at the periphery of the Antarctic region may be expected to exert a strong control on the atmospheric circulation inside the region. Here, the ECMWF reanalyses are used to nudge the atmospheric circulation of the Laboratoire de Météorologie Dynamique Zoom (LMDZ) stretched-grid AGCM in a 15-yr simulation spanning the 1979–93 period. The horizontal resolution (grid spacing) in the model reaches ∼100 km south of 60°S. Nudging is exerted along the 60°S parallel, and this is called lateral nudging for the Antarctic region. Nudging is also performed farther north, near 50° and 40°S, but this is not essential for the results discussed here. Surface pressure and winds in the atmospheric column are nudged without relaxation to maximize control by the meteorological analyses, at the expense of some “noise” confined to the latitudes where nudging is exerted. The performance of lateral nudging is evaluated with respect to station observations, the free (unnudged) model, the ECMWF reanalyses, and in limited instances with respect to nudging the surface pressure only. It is shown that the free model has limited but persistent surface pressure and geopotential defects in the Antarctic region, which are efficiently corrected by lateral nudging. Also, the laterally nudged simulations confirm, and to some extent correct, a geopotential deficiency of the ECMWF reanalyses over the East Antarctic continent previously identified by others. The monthly mean variability of surface climate at several stations along a coast-to-pole transect is analyzed. A significant fraction of the observed variability of surface pressure and temperature is reproduced. The fraction is often less than in the reanalyses. However, the differences are not large considering that the nudged model is forced at distances of hundreds to thousands of kilometers, whereas the reanalyses are forced at much shorter distances, in principle right at each station site by the station data themselves. The variability of surface wind is significantly less well reproduced than that of pressure and temperature in both the nudged model and the reanalyses. Carefully adjusted polar physics in the LMDZ model seems to compensate for a distant observational constraint in the cases where the nudged model results appear similar or even superior to the reanalyses. Lateral nudging is less computationally intensive than global nudging, and it induces realistic variability and chronology while allowing full expression of the model physics in the region of interest. Laterally nudging an AGCM with meteorological analyses can offer complementary value over the analyses themselves, not only by producing additional atmospheric information not available from the analyses, but also by correcting possible regional defects in the analyses.
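
As a concrete sketch of the nudging idea (Newtonian relaxation of model fields toward analysis fields inside prescribed latitude bands), here is a minimal Python illustration. The field layout, band definition, and coefficient `g` are illustrative assumptions, not the LMDZ implementation; the paper's nudging "without relaxation" corresponds to the limit where the analysis simply replaces the model field in the bands (g * dt = 1 below):

```python
import numpy as np

def nudge_toward_analysis(state, analysis, lats, bands, g, dt):
    """Relax a gridded model field toward the corresponding analysis field
    inside prescribed latitude bands (Newtonian relaxation):
        dX/dt += -g * (X - X_analysis)   within the bands only.
    state, analysis : arrays of shape (nlat, nlon), e.g. surface pressure
    lats            : grid latitudes in degrees, shape (nlat,)
    bands           : list of (lat_min, lat_max) nudging bands, e.g. near 60S
    g, dt           : nudging coefficient (1/s) and time step (s);
                      g * dt = 1 mimics nudging "without relaxation",
                      i.e. the analysis replaces the model field."""
    mask = np.zeros(lats.shape, dtype=bool)
    for lo, hi in bands:
        mask |= (lats >= lo) & (lats <= hi)
    nudged = state.copy()
    nudged[mask, :] -= g * dt * (state[mask, :] - analysis[mask, :])
    return nudged
```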

Full access
Emmanuel Cosme, Jacques Verron, Pierre Brasseur, Jacques Blum, and Didier Auroux

Abstract

Smoothers are increasingly used in geophysics. Several linear Gaussian algorithms exist, and the general picture may appear somewhat confusing. This paper attempts to stand back a little in order to clarify this picture by providing a concise overview of what the different smoothers really solve, and how. The authors begin addressing this issue from a Bayesian viewpoint. The filtering problem consists of finding the probability of a system state at a given time, conditioned on some past and present observations (if the present observations are not included, it is a forecast problem). This formulation is unique: any different formulation is a smoothing problem. The two main formulations of smoothing are tackled here: the joint estimation problem (fixed lag or fixed interval), where the probability of a series of system states conditioned on observations is to be found, and the marginal estimation problem, which deals with the probability of only one system state, conditioned on past, present, and future observations. The various strategies to solve these problems in the Bayesian framework are introduced, along with the linear Gaussian, Kalman filter-based algorithms derived from them. Their ensemble formulations are also presented. This results in a classification and a possible comparison of the most common smoothers used in geophysics. It should provide a good basis to help readers find the most appropriate algorithm for their own smoothing problem.
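
To make the filtering/smoothing distinction concrete, here is a minimal linear Gaussian sketch: a forward Kalman filter (conditioning on past and present observations) followed by a backward Rauch-Tung-Striebel pass, which yields the marginal fixed-interval smoother (conditioning on future observations as well). It is a textbook illustration of one of the families classified in the paper, not the authors' code:

```python
import numpy as np

def kalman_filter_then_rts(x0, P0, M, Q, H, R, obs):
    """Forward Kalman filter, then backward Rauch-Tung-Striebel pass.
    Model:        x_k = M x_{k-1} + w,  w ~ N(0, Q)
    Observations: y_k = H x_k + v,      v ~ N(0, R)
    Returns the smoothed means and covariances of p(x_k | y_1..y_T)."""
    n = len(x0)
    xf, Pf = [], []          # forecasts (prior to each observation)
    xa, Pa = [x0], [P0]      # analyses (filtering estimates)
    for y in obs:
        xfk = M @ xa[-1]
        Pfk = M @ Pa[-1] @ M.T + Q
        K = Pfk @ H.T @ np.linalg.inv(H @ Pfk @ H.T + R)
        xa.append(xfk + K @ (y - H @ xfk))        # condition on past + present
        Pa.append((np.eye(n) - K @ H) @ Pfk)
        xf.append(xfk)
        Pf.append(Pfk)
    xs, Ps = [xa[-1]], [Pa[-1]]                   # start backward pass at T
    for k in range(len(obs) - 1, -1, -1):
        G = Pa[k] @ M.T @ np.linalg.inv(Pf[k])    # smoother gain
        xs.insert(0, xa[k] + G @ (xs[0] - xf[k])) # condition on the future too
        Ps.insert(0, Pa[k] + G @ (Ps[0] - Pf[k]) @ G.T)
    return xs, Ps
```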

Full access
Monika Krysta, Eric Blayo, Emmanuel Cosme, and Jacques Verron

Abstract

In the standard four-dimensional variational data assimilation (4D-Var) algorithm, the background error covariance matrix remains static over time. It may therefore be unable to correctly take into account the information accumulated by a system into which data are gradually being assimilated.

A possible method for remedying this flaw is presented and tested in this paper: a hybrid variational-smoothing algorithm based on a reduced-rank incremental 4D-Var, consistently coupled to a singular evolutive extended Kalman (SEEK) smoother so that the background error covariance matrix evolves over time. In the analysis step, a low-dimensional error covariance matrix is updated to reflect the increased confidence in the state vector it describes once the observations have been introduced into the system. In the forecast step, the basis spanning the corresponding control subspace is propagated via the tangent linear model.

The hybrid method is implemented and tested in twin experiments employing a shallow-water model. The background error covariance matrix is initialized using an EOF decomposition of a sample of model states. Several numerical experiments, differing in the initialization of this matrix, are conducted; the quality of the analyses and the information content of the bases spanning the control subspaces are assessed. The feasibility of the method is illustrated. Since the improvement due to the hybrid method is not universal, the configurations that benefit from employing it instead of the standard 4D-Var are described, and possible reasons for this behavior are proposed.
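
A minimal sketch of the covariance machinery described above, assuming B = S S^T with a low-rank S: the analysis step updates S to reflect the information brought by the observations (the SEEK-type update), and the forecast step propagates the basis with the tangent linear model. Names and interfaces are illustrative assumptions:

```python
import numpy as np

def seek_covariance_cycle(S, H, R_inv, tlm):
    """One analysis + forecast cycle for a reduced-rank error covariance
    B = S S^T, with S of shape (n, r) and r << n.
    Analysis: B_a = S (I + (HS)^T R^-1 (HS))^-1 S^T, i.e. confidence in the
    state increases once the observations are introduced.
    Forecast: each basis vector (column of S) is propagated by the tangent
    linear model `tlm`, a function x -> M x."""
    r = S.shape[1]
    HS = H @ S                                  # basis seen by the observations
    A = np.eye(r) + HS.T @ R_inv @ HS           # small r x r update matrix
    w, V = np.linalg.eigh(A)                    # A is symmetric positive definite
    S_a = S @ (V / np.sqrt(w))                  # square root of S A^-1 S^T
    S_f = np.column_stack([tlm(S_a[:, j]) for j in range(r)])
    return S_f
```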

Full access
Jean-Michel Brankart, Emmanuel Cosme, Charles-Emmanuel Testut, Pierre Brasseur, and Jacques Verron

Abstract

In large-sized atmospheric or oceanic applications of square root or ensemble Kalman filters, it is often necessary to introduce the prior assumption that long-range correlations are negligible and force them to zero using a local parameterization, supplementing the ensemble or reduced-rank representation of the covariance. One classic algorithm to perform this operation consists of taking the Schur product of the ensemble covariance matrix with a local support correlation matrix. However, with this parameterization, the square root of the forecast error covariance matrix is no longer directly available, so that any observational update algorithm requiring this square root must include an additional step to compute local square roots from the Schur product. This computation generates an additional numerical cost or produces high-rank square roots, which may deprive the observational update of its original efficiency. In this paper, it is shown how efficient local square root parameterizations can be obtained, for use with a specific square root formulation (i.e., the eigenbasis algorithm) of the observational update. Comparisons with the classic algorithm are provided, mainly in terms of consistency, accuracy, and computational complexity. As an application, the resulting parameterization is used to estimate maps of dynamic topography characterizing a basin-scale ocean turbulent flow. Even with this moderate-sized system (a 2200-km-wide square basin with 100-km-wide mesoscale eddies), it is observed that more than 1000 ensemble members are necessary to faithfully represent the global correlation patterns, and that a local parameterization is needed to produce correct covariances with moderate-sized ensembles. Comparisons with the exact solution show that the use of local square roots improves the accuracy of the updated ensemble mean and the consistency of the updated ensemble variance. With the eigenbasis algorithm, optimal adaptive estimates of scaling factors for the forecast and observation error covariance matrices can also be obtained locally at negligible additional numerical cost. Finally, a comparison of the overall computational cost illustrates the decisive advantage that efficient local square root parameterizations may have in dealing simultaneously with a larger number of observations while avoiding data thinning as much as possible.
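
The classic localization this paper starts from can be sketched in a few lines: the ensemble covariance is multiplied elementwise (Schur product) by a compactly supported correlation matrix, which forces long-range correlations to zero but generally raises the rank, so no low-rank square root is directly available; that is exactly the difficulty the paper's local square root parameterizations address. The taper function below is an illustrative stand-in, not the Gaspari-Cohn polynomial commonly used:

```python
import numpy as np

def schur_localized_covariance(X, C):
    """Localized covariance P_loc = C o (X' X'^T / (m - 1)): the Schur
    (elementwise) product of the raw ensemble covariance, built from the m
    members stored as columns of X, with a compactly supported correlation
    matrix C. P_loc is generally full rank, so no (n, m) square root of it
    is directly available -- the difficulty addressed in the paper."""
    m = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    return C * (Xp @ Xp.T / (m - 1))

def compact_taper(dist, L):
    """Illustrative compact-support taper: 1 at zero distance, 0 beyond L
    (a stand-in for the Gaspari-Cohn function used in practice)."""
    r = np.clip(dist / L, 0.0, 1.0)
    return (1.0 - r) ** 2 * (1.0 + 2.0 * r)
```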

Full access
Jean-Michel Brankart, Emmanuel Cosme, Charles-Emmanuel Testut, Pierre Brasseur, and Jacques Verron

Abstract

In Kalman filter applications, an adaptive parameterization of the error statistics is often necessary to avoid filter divergence and to prevent error estimates from becoming grossly inconsistent with the real error. With the classic formulation of the Kalman filter observational update, optimal estimates of general adaptive parameters can only be obtained at a numerical cost that is several times larger than the cost of the state observational update. In this paper, it is shown that there exist a few important types of parameters for which optimal estimates can be computed at a negligible numerical cost, provided that the computation is performed using a transformed algorithm that works in the reduced control space defined by the square root or ensemble representation of the forecast error covariance matrix. The set of parameters that can be efficiently controlled includes scaling factors for the forecast error covariance matrix, scaling factors for the observation error covariance matrix, and even a scaling factor for the observation error correlation length scale.

As an application, the resulting adaptive filter is used to estimate the time evolution of ocean mesoscale signals using observations of the ocean dynamic topography. To check the behavior of the adaptive mechanism, this is done in the context of idealized experiments, in which model error and observation error statistics are known. This ideal framework is particularly appropriate to explore the ill-conditioned situations (inadequate prior assumptions or uncontrollability of the parameters) in which adaptivity can be misleading. Overall, the experiments show that, if used correctly, the efficient optimal adaptive algorithm proposed in this paper introduces useful supplementary degrees of freedom in the estimation problem, and that the direct control of these statistical parameters by the observations increases the robustness of the error estimates and thus the optimality of the resulting Kalman filter.
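
A simple way to picture the kind of parameter being controlled is a moment-based estimate of a scalar scaling factor for the forecast error covariance from the innovation statistics. This toy estimator only illustrates the idea; it is not the optimal transformed-space estimator of the paper:

```python
import numpy as np

def adaptive_forecast_scaling(d, HPfHT, R):
    """Moment-based estimate of a scalar scaling factor alpha for the
    forecast error covariance. Statistical consistency of the innovation
    d = y - H x_f requires E[d d^T] = alpha * H P_f H^T + R; matching the
    observed innovation magnitude to its expectation gives the estimate
    below (clipped to stay positive). A toy illustration only."""
    alpha = (d @ d - np.trace(R)) / np.trace(HPfHT)
    return max(alpha, 1e-3)
```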

Full access
Jean-Michel Brankart, Clément Ubelmann, Charles-Emmanuel Testut, Emmanuel Cosme, Pierre Brasseur, and Jacques Verron

Abstract

In the Kalman filter standard algorithm, the computational complexity of the observational update is proportional to the cube of the number y of observations (the leading behavior for large y). In realistic atmospheric or oceanic applications, involving an increasing quantity of available observations, this often leads to a prohibitive cost and to the necessity of simplifying the problem by aggregating or dropping observations. If the filter error covariance matrices are in square root form, as in square root or ensemble Kalman filters, the standard algorithm can be transformed to be linear in y, provided that the observation error covariance matrix is diagonal. This requirement is a significant drawback of the transformed algorithm and often leads to an assumption of uncorrelated observation errors for the sake of numerical efficiency. In this paper, it is shown that the linearity of the transformed algorithm in y can be preserved for other forms of the observation error covariance matrix. In particular, quite general correlation structures (with analytic asymptotic expressions) can be simulated simply by augmenting the observation vector with differences of the original observations, such as their discrete gradients. Errors in ocean altimetric observations are spatially correlated; orbit or atmospheric errors along the satellite track are typical examples. Adequately parameterizing these correlations can directly improve the quality of observational updates and the accuracy of the associated error estimates. In this paper, the example of the North Brazil Current circulation is used to demonstrate the importance of this effect, which is especially significant in that region of moderate ratio between signal amplitude and observation noise, and to show that the efficient parameterization proposed for the observation error correlations is appropriate for taking it into account. A physical justification is also given for adding explicit gradient observations. This parameterization is thus shown to be useful to ocean data assimilation systems based on square root or ensemble Kalman filters, whenever the number of observations becomes computationally penalizing and a sophisticated parameterization of the observation error correlations is required.
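
A minimal sketch of the augmentation idea, assuming observations ordered along a regular 1D track: the observation vector and operator are augmented with discrete gradients, and a diagonal error covariance prescribed on the augmented vector then simulates spatially correlated errors on the original observations. Names are illustrative, not the paper's code:

```python
import numpy as np

def augment_with_gradients(y, H):
    """Augment the observation vector y (length p, assumed ordered along
    the satellite track) and the observation operator H with discrete
    along-track gradients D y and D H. Prescribing a *diagonal* error
    covariance on the augmented vector (y, D y) then simulates spatially
    correlated errors on the original y, while keeping the observational
    update linear in the number of observations."""
    p = len(y)
    D = np.zeros((p - 1, p))                  # first-difference operator
    i = np.arange(p - 1)
    D[i, i], D[i, i + 1] = -1.0, 1.0
    return np.concatenate([y, D @ y]), np.vstack([H, D @ H])
```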

Full access
Giovanni Abdelnur Ruggiero, Emmanuel Cosme, Jean-Michel Brankart, Julien Le Sommer, and Clément Ubelmann

Abstract

Most data assimilation algorithms require the inverse of the covariance matrix of the observation errors. In practical applications, the cost of computing this inverse matrix with spatially correlated observation errors is prohibitive. Common practice is therefore to subsample or combine the observations so that the errors of the assimilated observations can be considered uncorrelated. As a consequence, a large fraction of the available observational information is not used in practical applications. In this study, a method is developed to account for the correlations of the errors that will be present in wide-swath sea surface height measurements such as those of the Surface Water and Ocean Topography (SWOT) mission. It consists of transforming the observation vector so that the covariance matrix of the transformed errors is diagonal, and hence trivially inverted, thus making it possible to genuinely take into account errors that are spatially correlated in physical space. Numerical experiments of ensemble Kalman filter analysis of SWOT-like observations are conducted with three different observation error covariance matrices. Results suggest that the proposed method provides an effective way to account for error correlations in the assimilation of the future SWOT data. The transformation of the observation vector proposed herein yields both a significant reduction of the root-mean-square errors and a good consistency between the filter analysis error statistics and the true error statistics.
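
The vector transformation can be illustrated with the standard whitening change of variables: if R = L L^T (Cholesky), then assimilating L^-1 y with the operator L^-1 H gives transformed errors with identity covariance, so a diagonal-R analysis applies. This is a generic sketch of the idea, not necessarily the exact transformation used for SWOT in the paper:

```python
import numpy as np

def whiten_observations(y, H, R):
    """Change of observation variables: with the Cholesky factorization
    R = L L^T, the transformed vector y' = L^-1 y has errors of identity
    (hence diagonal) covariance, since L^-1 R L^-T = I. The standard
    diagonal-R analysis can then be applied to (y', L^-1 H)."""
    L = np.linalg.cholesky(R)
    y_t = np.linalg.solve(L, y)
    H_t = np.linalg.solve(L, H)
    return y_t, H_t
```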

Full access
Florian Le Guillou, Sammy Metref, Emmanuel Cosme, Clément Ubelmann, Maxime Ballarotta, Julien Le Sommer, and Jacques Verron

Abstract

During the past 25 years, altimetric observations of the ocean surface from space have been mapped to provide two-dimensional sea surface height (SSH) fields that are crucial for scientific research and operational applications. The SSH fields can be reconstructed from conventional altimetric data using temporal and spatial interpolation. For instance, the standard Developing Use of Altimetry for Climate Studies (DUACS) products are created with an optimal interpolation method that is effective at low temporal and low spatial resolution. However, the upcoming next-generation SWOT mission will provide very high spatial resolution but low temporal resolution. The present paper makes the case that this temporal–spatial discrepancy induces the need for new advanced mapping techniques involving information on the ocean dynamics. An algorithm is introduced, dubbed the BFN-QG, that uses a simple data assimilation method, back-and-forth nudging (BFN), to interpolate altimetric data while respecting quasigeostrophic (QG) dynamics. The BFN-QG is tested in observing system simulation experiments and compared to the DUACS products. The experiments take as reference the high-resolution numerical model simulation NATL60, from which realistic data are produced: four conventional altimetric nadirs and SWOT data. In a combined nadirs-and-SWOT scenario, the BFN-QG substantially improves the mapping by reducing the root-mean-square errors and increasing the spectral effective resolution by 40 km. Also, the BFN-QG method can be adapted to combine large-scale corrections from nadir data and small-scale corrections from SWOT data so as to reduce the impact of correlated SWOT noise while still providing accurate SSH maps.
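
A schematic of the back-and-forth nudging loop, assuming for simplicity that observations are maps on the model grid (so innovations can be added directly to the state); `step_fwd` and `step_bwd` are placeholder integrators standing in for the quasigeostrophic model, not the authors' API:

```python
import numpy as np

def back_and_forth_nudging(x_init, obs, step_fwd, step_bwd, k_gain, n_iter=20):
    """Schematic back-and-forth nudging (BFN): integrate the dynamics
    forward while relaxing toward each observation in turn, then backward
    with the same relaxation term, and iterate until the trajectory
    stabilizes. `step_fwd` / `step_bwd` advance the state over one
    observation interval, forward and backward in time respectively."""
    x = np.asarray(x_init, dtype=float).copy()
    for _ in range(n_iter):
        # forward pass: integrate, relaxing toward the data
        for y in obs:
            x = step_fwd(x)
            x = x + k_gain * (y - x)
        # backward pass: integrate back in time, with the relaxation term
        # keeping the reversed dynamics close to the data
        for y in reversed(obs):
            x = x + k_gain * (y - x)
            x = step_bwd(x)
    return x  # estimate of the state at the initial time
```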

Open access