Search Results
You are looking at 1–8 of 8 items for:
- Author or Editor: Jean-Michel Brankart
- Article
- Refine by Access: All Content
Abstract
The Levantine Intermediate Water (LIW) is an important water mass for the overall hydrology of the Mediterranean Sea, and open questions remain about the possible long-term variability of its physical characteristics. This paper is dedicated to the analysis and interpretation of the LIW long-term variations over the last 50 years, based on both data analysis and model simulations. On the one hand, new gridded temperature and salinity data of interannual and decadal anomalies have been produced from existing historical datasets. On the other hand, a long-term primitive equation model simulation has been generated, to be compared with the observational reconstructions.
Results indicate that the major feature of both datasets (observations and model) is an intense cooling of the LIW (0.24°–0.28°C at 200-m depth) at the beginning of the 1980s (winters 1981 and 1983). Around the Aegean Sea and the Cretan Arc, the amplitude of the cooling is as large as 0.4°C.
The model simulations, forced by the Comprehensive Ocean–Atmosphere Data Set atmospheric fluxes, reproduce the cooling event quite faithfully. The possible processes at the origin of these interannual/decadal variations are discussed, and hypotheses are proposed and tested against observations. In particular, it is shown that, over the period of interest, the major part of the LIW interannual/decadal variability has been directly forced by anomalies in the surface heat budget of the Eastern Mediterranean.
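To make the anomaly reconstruction concrete, here is a minimal sketch of how interannual and decadal anomalies can be derived from a gridded historical record at a fixed depth. It is an illustration only, not the paper's actual analysis: the array shapes, the synthetic temperature field, and the 10-year running-mean definition of "decadal" are assumptions.

```python
# Illustrative sketch: interannual and decadal temperature anomalies at one
# depth level from a gridded yearly record (synthetic data, assumed shapes).
import numpy as np

rng = np.random.default_rng(0)
n_years, n_lat, n_lon = 50, 20, 40                 # hypothetical 50-yr record
temp = 15.0 + rng.normal(0.0, 0.3, (n_years, n_lat, n_lon))  # e.g. T at 200 m

climatology = temp.mean(axis=0)                    # long-term mean per point
interannual = temp - climatology                   # yearly anomalies

# "Decadal" anomalies taken here as a 10-yr running mean of yearly anomalies.
window = 10
kernel = np.ones(window) / window
decadal = np.apply_along_axis(
    lambda ts: np.convolve(ts, kernel, mode="valid"), 0, interannual)

print(interannual.shape, decadal.shape)            # (50, 20, 40) (41, 20, 40)
```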
Abstract
To study the Mediterranean general circulation, there is a constant need for reliable interpretations of the available hydrological observations. Optimal data analyses (optimal from the probabilistic point of view of objective analysis) are performed using an original finite-element technique to minimize the variational principle of the spline procedure. However, prior statistical knowledge of the problem is required to adapt the optimization criterion to the purpose of the study and to the particular features of the system. The main goal of this paper is to show how the cross-validation methodology can be used to derive statistical estimators of this information from the dataset alone. The authors also give theoretical and/or numerical evidence that modified estimators, based on generalized cross-validation or sampling algorithms, are valuable in the analysis optimization process. Finally, results obtained by applying these methods to a Mediterranean historical database, and their comparison with those provided by other techniques, show the usefulness and reliability of the method.
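As a concrete illustration of how generalized cross-validation (GCV) selects a smoothing parameter from the data alone, here is a minimal 1D sketch for a penalized (spline-like) least-squares analysis. It is a toy stand-in for the paper's finite-element spline technique; the test signal, penalty, and parameter grid are assumptions.

```python
# Minimal GCV sketch for choosing the smoothing parameter lambda of a
# penalized least-squares fit f_hat = (I + lam * D'D)^-1 y  (toy 1D problem).
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)   # noisy observations

D = np.diff(np.eye(n), n=2, axis=0)    # second-difference (curvature) penalty
P = D.T @ D

def gcv_score(lam):
    # GCV(lam) = n * ||(I - A)y||^2 / tr(I - A)^2, A = influence matrix.
    A = np.linalg.inv(np.eye(n) + lam * P)
    resid = y - A @ y
    return n * (resid @ resid) / (n - np.trace(A)) ** 2

lams = np.logspace(-6, 2, 50)
best = lams[np.argmin([gcv_score(l) for l in lams])]
print(f"GCV-selected smoothing parameter: {best:.3g}")
```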
Abstract
A cross-validation algorithm is developed to perform probabilistic observing system simulation experiments (OSSEs). Rather than a single "truth," a probability distribution of "true" states is considered: each member of an ensemble simulation is used in turn as the "truth" and to simulate synthetic observations that reflect the observing system to be evaluated. The other available members are used to produce an updated ensemble by assimilating these data, and a probabilistic evaluation of the observation impacts is obtained using a comprehensive set of verification skill scores. To showcase this new type of OSSE study at tractable numerical cost, a simple biogeochemical application under the Horizon 2020 AtlantOS project is presented for a single assimilation time step, investigating the value of adding biogeochemical (BGC)-Argo floats to the existing satellite ocean color observations. Further experiments extending over time are needed for a rigorous and effective evaluation of the BGC-Argo network design, but this preliminary work suggests that assimilating chlorophyll data from a BGC-Argo array of 1000 floats can provide additional error reduction at the surface, where the use of spatial ocean color data is limited (due to cloudy conditions), as well as at depths ranging from 50 to 150 m.
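The cross-validation OSSE loop can be sketched in a few lines. The following toy example rotates each ensemble member into the "truth" role, draws synthetic observations from it, updates the remaining members with a stochastic ensemble Kalman filter, and scores the impact; all sizes, the observation operator, and error levels are assumptions, and RMSE reduction stands in for the paper's fuller set of skill scores.

```python
# Schematic cross-validation OSSE: each member is alternatively the "truth".
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 40, 10, 20
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0   # observe every 4th point
r = 0.3                                               # obs error std

ens = np.sin(np.linspace(0, 2 * np.pi, n_state)) \
      + rng.normal(0.0, 1.0, (n_ens, n_state))        # toy prior ensemble

def enkf_update(members, yobs):
    # Stochastic EnKF analysis with perturbed observations.
    X = members - members.mean(axis=0)
    Pf = X.T @ X / (len(members) - 1)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + r**2 * np.eye(n_obs))
    ypert = yobs + rng.normal(0.0, r, (len(members), n_obs))
    return members + (ypert - members @ H.T) @ K.T

skill = []
for i in range(n_ens):
    truth = ens[i]                                    # member i plays "truth"
    others = np.delete(ens, i, axis=0)
    yobs = H @ truth + rng.normal(0.0, r, n_obs)      # synthetic observations
    analysis = enkf_update(others, yobs)
    prior_rmse = np.sqrt(((others.mean(0) - truth) ** 2).mean())
    post_rmse = np.sqrt(((analysis.mean(0) - truth) ** 2).mean())
    skill.append(1.0 - post_rmse / prior_rmse)

print(f"mean RMSE reduction across truths: {np.mean(skill):.1%}")
```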
Abstract
In Kalman filter applications, an adaptive parameterization of the error statistics is often necessary to avoid filter divergence and prevent error estimates from becoming grossly inconsistent with the real error. With the classic formulation of the Kalman filter observational update, optimal estimates of general adaptive parameters can only be obtained at a numerical cost that is several times larger than the cost of the state observational update. In this paper, it is shown that there exist a few types of important parameters for which optimal estimates can be computed at a negligible numerical cost, provided that the computation is performed using a transformed algorithm that works in the reduced control space defined by the square root or ensemble representation of the forecast error covariance matrix. The set of parameters that can be efficiently controlled includes scaling factors for the forecast error covariance matrix, scaling factors for the observation error covariance matrix, and even a scaling factor for the observation error correlation length scale.
As an application, the resulting adaptive filter is used to estimate the time evolution of ocean mesoscale signals using observations of the ocean dynamic topography. To check the behavior of the adaptive mechanism, this is done in the context of idealized experiments, in which model error and observation error statistics are known. This ideal framework is particularly appropriate to explore the ill-conditioned situations (inadequate prior assumptions or uncontrollability of the parameters) in which adaptivity can be misleading. Overall, the experiments show that, if used correctly, the efficient optimal adaptive algorithm proposed in this paper introduces useful supplementary degrees of freedom in the estimation problem, and that the direct control of these statistical parameters by the observations increases the robustness of the error estimates and thus the optimality of the resulting Kalman filter.
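To give a feel for what adaptively estimating a covariance scaling factor means, here is a minimal moment-based sketch: the factor alpha is chosen so that the observed innovation variance matches H(alpha Pf)H' + R. This is deliberately the simplest estimator, not the paper's transformed optimal algorithm in the reduced control space, and all sizes and error levels are assumptions.

```python
# Minimal adaptive-scaling sketch from innovation statistics (moment method).
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 30, 15, 50
H = np.eye(n_obs, n_state)                    # observe the first 15 variables
R = 0.2**2 * np.eye(n_obs)

truth = rng.normal(0.0, 1.0, n_state)
xf_err = rng.normal(0.0, 1.0, n_state)        # actual forecast error, std 1.0
ens = (truth + xf_err) + 0.5 * rng.normal(0.0, 1.0, (n_ens, n_state))

X = ens - ens.mean(axis=0)
Pf = X.T @ X / (n_ens - 1)                    # ensemble spread only ~0.5

y = H @ truth + rng.normal(0.0, 0.2, n_obs)
d = y - H @ ens.mean(axis=0)                  # innovation vector

# E[d'd] = tr(H alpha Pf H') + tr(R)  =>  solve for alpha (clipped at 0).
alpha = max((d @ d - np.trace(R)) / np.trace(H @ Pf @ H.T), 0.0)
print(f"estimated scaling factor: {alpha:.2f} "
      "(expected ~4 here; noisy for a single realization)")
```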
Abstract
In large-sized atmospheric or oceanic applications of square root or ensemble Kalman filters, it is often necessary to introduce the prior assumption that long-range correlations are negligible and force them to zero using a local parameterization, supplementing the ensemble or reduced-rank representation of the covariance. One classic algorithm to perform this operation consists of taking the Schur product of the ensemble covariance matrix with a local support correlation matrix. However, with this parameterization, the square root of the forecast error covariance matrix is no longer directly available, so that any observational update algorithm requiring this square root must include an additional step to compute local square roots from the Schur product. This computation generates an additional numerical cost or produces high-rank square roots, which may deprive the observational update of its original efficiency. In this paper, it is shown how efficient local square root parameterizations can be obtained for use with a specific square root formulation (i.e., the eigenbasis algorithm) of the observational update. Comparisons with the classic algorithm are provided, mainly in terms of consistency, accuracy, and computational complexity. As an application, the resulting parameterization is used to estimate maps of dynamic topography characterizing a basin-scale ocean turbulent flow. Even with this moderate-sized system (a 2200-km-wide square basin with 100-km-wide mesoscale eddies), it is observed that more than 1000 ensemble members are necessary to faithfully represent the global correlation patterns, and that a local parameterization is needed to produce correct covariances with moderate-sized ensembles. Comparisons with the exact solution show that the use of local square roots improves the accuracy of the updated ensemble mean and the consistency of the updated ensemble variance. With the eigenbasis algorithm, optimal adaptive estimates of scaling factors for the forecast and observation error covariance matrices can also be obtained locally at negligible additional numerical cost. Finally, a comparison of the overall computational cost illustrates the decisive advantage that efficient local square root parameterizations may have in dealing simultaneously with a larger number of observations while avoiding data thinning as much as possible.
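The classic Schur-product localization that this paper improves upon is easy to demonstrate. The sketch below multiplies a small-ensemble sample covariance elementwise by a compactly supported taper, suppressing spurious long-range correlations; the Gaussian-shaped taper with a hard cutoff is an assumption for brevity (the classic choice is a Gaspari-Cohn function), and this shows only the baseline parameterization, not the paper's local square root algorithm.

```python
# Schur-product (elementwise) covariance localization on a 1D toy grid.
import numpy as np

rng = np.random.default_rng(4)
n, n_ens, L = 100, 20, 10.0                   # grid size, members, loc. radius

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_true = np.exp(-(dist / 5.0) ** 2) + 1e-6 * np.eye(n)  # jitter for stability
ens = rng.multivariate_normal(np.zeros(n), C_true, size=n_ens)

X = ens - ens.mean(axis=0)
P_ens = X.T @ X / (n_ens - 1)                 # noisy sample covariance

taper = np.exp(-(dist / L) ** 2) * (dist < 2 * L)   # compact support beyond 2L
P_loc = P_ens * taper                         # Schur product

print("spurious |cov| at range 50:", np.abs(P_ens[0, 50]).round(3),
      "-> localized:", np.abs(P_loc[0, 50]).round(3))
```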
Abstract
In the Kalman filter standard algorithm, the computational complexity of the observational update is proportional to the cube of the number y of observations (leading behavior for large y). In realistic atmospheric or oceanic applications, involving an increasing quantity of available observations, this often leads to a prohibitive cost and to the necessity of simplifying the problem by aggregating or dropping observations. If the filter error covariance matrices are in square root form, as in square root or ensemble Kalman filters, the standard algorithm can be transformed to be linear in y, provided that the observation error covariance matrix is diagonal. This requirement is a significant drawback of the transformed algorithm and often leads to an assumption of uncorrelated observation errors for the sake of numerical efficiency. In this paper, it is shown that the linearity of the transformed algorithm in y can be preserved for other forms of the observation error covariance matrix. In particular, quite general correlation structures (with analytic asymptotic expressions) can be simulated simply by augmenting the observation vector with differences of the original observations, such as their discrete gradients. Errors in ocean altimetric observations are spatially correlated, such as orbit or atmospheric errors along the satellite track. Adequately parameterizing these correlations can directly improve the quality of observational updates and the accuracy of the associated error estimates. In this paper, the example of the North Brazil Current circulation is used to demonstrate the importance of this effect, which is especially significant in that region of moderate ratio between signal amplitude and observation noise, and to show that the efficient parameterization proposed for the observation error correlations is appropriate to take it into account. Adding explicit gradient observations is also given a physical justification. This parameterization is thus proved to be useful to ocean data assimilation systems based on square root or ensemble Kalman filters, as soon as the number of observations becomes computationally penalizing and a sophisticated parameterization of the observation error correlations is required.
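The key identity behind the gradient-augmentation idea can be checked numerically: assimilating the raw observations and their discrete gradients, each with independent errors, is equivalent to assuming a correlated error on the original vector with inverse covariance R_eff^{-1} = I/r0^2 + D'D/r1^2. The sketch below displays the implied along-track correlation; the error levels r0, r1 are assumptions.

```python
# Implied correlated observation error from augmenting obs with gradients.
import numpy as np

n = 50
D = np.diff(np.eye(n), axis=0)        # discrete gradient operator, (n-1, n)
r0, r1 = 1.0, 0.5                     # error std on raw obs and on gradients

Rinv_eff = np.eye(n) / r0**2 + D.T @ D / r1**2   # stays cheap to apply: both
R_eff = np.linalg.inv(Rinv_eff)                  # terms are (block) diagonal

# The implied observation error correlation decays smoothly along track.
mid = n // 2
corr = R_eff[mid] / np.sqrt(R_eff[mid, mid] * np.diag(R_eff))
print("correlation at lags 0,1,2,5:",
      np.round(corr[mid + np.array([0, 1, 2, 5])], 3))
```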
Abstract
Most data assimilation algorithms require the inverse of the covariance matrix of the observation errors. In practical applications, the cost of computing this inverse matrix with spatially correlated observation errors is prohibitive. Common practices are therefore to subsample or combine the observations so that the errors of the assimilated observations can be considered uncorrelated. As a consequence, a large fraction of the available observational information is not used in practical applications. In this study, a method is developed to account for the correlations of the errors that will be present in wide-swath sea surface height measurements, such as those of the Surface Water and Ocean Topography (SWOT) mission. It basically consists of a transformation of the observation vector so that the inverse of the corresponding covariance matrix can be replaced by a diagonal matrix, thus making it possible to genuinely take into account errors that are spatially correlated in physical space. Numerical experiments of ensemble Kalman filter analysis of SWOT-like observations are conducted with three different observation error covariance matrices. Results suggest that the proposed method provides an effective way to account for error correlations in the assimilation of the future SWOT data. The transformation of the observation vector proposed herein yields both a significant reduction of the root-mean-square errors and a good consistency between the filter analysis error statistics and the true error statistics.
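One standard way to realize such a transformation is error whitening with a Cholesky factor: if R = LL', then y' = L^{-1}y (with H' = L^{-1}H) has uncorrelated, unit-variance errors, so a diagonal-R assimilation code applies directly. The sketch below verifies this numerically; the exponential correlation model is an assumption for illustration, not the actual SWOT error budget, and the paper's specific transformation may differ.

```python
# Whitening correlated observation errors so that R becomes the identity.
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_samples = 30, 20000
dist = np.abs(np.subtract.outer(np.arange(n_obs), np.arange(n_obs)))
R = 0.1**2 * np.exp(-dist / 5.0)      # correlated errors along the swath

L = np.linalg.cholesky(R)             # R = L @ L.T
eps = L @ rng.normal(0.0, 1.0, (n_obs, n_samples))   # samples with cov R
eps_w = np.linalg.solve(L, eps)       # whitened: y' = L^-1 y

sample_cov = eps_w @ eps_w.T / n_samples
print("whitened error covariance ~ identity:",
      np.allclose(sample_cov, np.eye(n_obs), atol=0.1))
```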
Abstract
This study investigates the origin and features of interannual–decadal Atlantic meridional overturning circulation (AMOC) variability from several ocean simulations, including a large (50-member) ensemble of global, eddy-permitting (1/4°) ocean–sea ice hindcasts. After an initial stochastic perturbation, each member is driven by the same realistic atmospheric forcing over 1960–2015. The magnitude, spatiotemporal scales, and patterns of both the atmospherically forced and intrinsic–chaotic interannual AMOC variability are then characterized from the ensemble mean and ensemble spread, respectively. The analysis of the ensemble-mean variability shows that the AMOC fluctuations north of 40°N are largely driven by the atmospheric variability, which forces meridionally coherent fluctuations reaching decadal time scales. The amplitude of the intrinsic interannual AMOC variability never exceeds the atmospherically forced contribution in the Atlantic basin, but it reaches up to 100% of the latter around 35°S and 60% in the Northern Hemisphere midlatitudes. The intrinsic AMOC variability exhibits a large-scale meridional coherence, especially south of 25°N. An EOF analysis over the basin shows two large-scale leading modes that together explain 60% of the interannual intrinsic variability. The first mode is likely excited by intrinsic oceanic processes at the southern end of the basin and affects latitudes up to 40°N; the second mode is mostly restricted to, and excited within, the Northern Hemisphere midlatitudes. These features of the intrinsic, chaotic variability (intensity, patterns, and random phase) are barely sensitive to the atmospheric evolution, and they strongly resemble the "pure intrinsic" interannual AMOC variability that emerges in climatological simulations under repeated seasonal-cycle forcing. These results raise questions about the attribution of observed and simulated AMOC signals and about the possible impact of intrinsic signals on the atmosphere.
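The forced/intrinsic decomposition itself reduces to a simple ensemble statistic: the ensemble mean isolates the atmospherically forced signal, and the ensemble spread measures the intrinsic-chaotic part. Here is a minimal sketch on synthetic member-by-year AMOC series; the amplitudes, the sinusoidal "forcing," and the random-walk "chaos" are assumptions for illustration only.

```python
# Forced vs. intrinsic variability from an ensemble of hindcast time series.
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_years = 50, 56                        # e.g. 50 members over 1960-2015
t = np.arange(n_years)

forced_signal = 1.5 * np.sin(2 * np.pi * t / 12.0)        # shared forced part
intrinsic = np.cumsum(rng.normal(0.0, 0.3, (n_ens, n_years)), axis=1)
intrinsic -= intrinsic.mean(axis=0)            # member-specific chaotic part
amoc = 17.0 + forced_signal + intrinsic        # Sv, per member and year

forced_std = amoc.mean(axis=0).std()           # variability of ensemble mean
intrinsic_std = amoc.std(axis=0, ddof=1).mean()  # time-mean ensemble spread
print(f"forced std ~ {forced_std:.2f} Sv, intrinsic std ~ {intrinsic_std:.2f} Sv")
```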