Search Results
Showing 1 - 10 of 16 items for Author or Editor: Jacques Verron
Abstract
The feasibility of using a subgrid-scale eddy parameterization, based on statistical mechanics of potential vorticity, is investigated. A specific implementation is derived for the somewhat classic barotropic vorticity equation in the case of a fully eddy-active, wind-driven, midlatitude ocean on the β plane.
The subgrid-scale eddy fluxes are determined by a principle of maximum entropy production so that these fluxes always efficiently drive the system toward statistical equilibrium. In the absence of forcing and friction, the system then reaches this equilibrium, while conserving all the constants of motion of the inviscid barotropic equations. It is shown that this equilibrium is close to a Fofonoff flow, like that obtained with truncated spectral models, although the statistical approach is different.
The subgrid-scale model is then validated in a more realistic case, with wind forcing and friction. The results of this model at a coarse resolution are compared with reference simulations at a resolution four times higher. The mean flow is correctly recovered, as well as the variability properties, such as the kinetic energy fields and the eddy flux of potential vorticity. Although only the barotropic dynamics of a homogeneous wind-driven ocean flow has been considered at this stage, there is no formal obstacle for a generalization to multilayer baroclinic flows.
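To make the closure concrete, the following minimal NumPy sketch computes an energy-conserving eddy flux of potential vorticity in the spirit of the maximum entropy production principle described above: a downgradient PV flux corrected by a Lagrange multiplier so that it does no net work on the resolved energy. The function name, the discretization, and the prescribed diffusivity D are illustrative assumptions, not the paper's implementation, and only the energy constraint is enforced here, whereas the full closure conserves all the inviscid invariants.

```python
import numpy as np

def mep_like_pv_flux(q, psi, D, dx):
    """Sketch of an energy-conserving eddy PV flux,
    J = -D * (grad q - lam * grad psi),
    with lam chosen so that integral(J . grad psi) = 0, i.e. the
    subgrid flux mixes PV without changing the resolved energy.
    Axis 0 is y, axis 1 is x; D may be a scalar or a 2D field."""
    dqdy, dqdx = np.gradient(q, dx)
    dpdy, dpdx = np.gradient(psi, dx)
    # Lagrange multiplier enforcing energy conservation
    num = np.sum(D * (dqdx * dpdx + dqdy * dpdy))
    den = np.sum(D * (dpdx ** 2 + dpdy ** 2))
    lam = num / den
    Jx = -D * (dqdx - lam * dpdx)
    Jy = -D * (dqdy - lam * dpdy)
    return Jx, Jy
```

The coarse-grid PV tendency then receives -div(J), so the flux relaxes the resolved flow toward statistical equilibrium while respecting the energy constraint.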
Abstract
This article looks at the problem of optimizing spatiotemporal sampling of the ocean circulation using single- or twin-satellite missions. A review of the basic orbital constraints is first presented and this, together with some elementary sampling considerations, provides a solid foundation for choosing satellite orbital parameters. A modeling and assimilation approach enables even further progress to be made by simulating the dynamic features of the ocean fields that are to be measured; it also enables the process of integrating data into models to be simulated.
Several scenarios for two altimetric satellites flying simultaneously are evaluated with respect to their ability to monitor oceanic circulation as simulated with a numerical model. The twin-experiment approach is used: simulated data are assimilated into the numerical model, while a benchmark experiment provides the necessary dataset for validation and intercomparison. The model is quasigeostrophic and multilayered. The ocean model domain is at basin scale, centered on the midlatitudes. Model resolution (20 km) is fine enough to exhibit the intense mesoscale nonlinear variability typical of the midlatitudes. The assimilation technique used is sequential nudging of sea surface height applied to along-track data.
Dual scenarios are built consisting of all possible combinations of satellites having 3-, 10- (Topex-Poseidon), 17- (Geosat) and 30-day orbital repeat periods. In the specific context of our modeling and assimilation approach, improved scenarios with respect to Topex-Poseidon, and a fortiori Geosat, appear to be those that favor improving temporal rather than spatial resolution. This unexpected result would, for example, suggest that a Topex-Poseidon- or Geosat-type satellite is satisfactory with regard to the spatial sampling of oceanic mesoscales. But any further gain would be acquired mostly by increasing temporal sampling, for example, by flying another Topex-Poseidon- or Geosat-type satellite offset in time by a typical half-period. Investigations of ground-track inclination effects are also presented.
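For readers unfamiliar with nudging, the assimilation scheme referred to above can be illustrated in a few lines of NumPy: the model sea surface height is relaxed toward observed values wherever an along-track sample exists. All names and the uniform relaxation coefficient are hypothetical simplifications, not the configuration used in the paper.

```python
import numpy as np

def nudge_ssh(model_ssh, obs_ssh, obs_mask, k_relax, dt):
    """One sequential-nudging step toward along-track altimetric SSH.
    model_ssh : 2D model sea surface height field
    obs_ssh   : 2D field holding observed SSH at track points
    obs_mask  : boolean 2D array, True where a track sample exists
    k_relax   : relaxation (nudging) coefficient, in 1/s
    dt        : model time step, in s"""
    increment = np.where(obs_mask, k_relax * (obs_ssh - model_ssh), 0.0)
    return model_ssh + dt * increment
```

Between satellite passes the mask is empty and the model evolves freely, which is one way to see why temporal sampling matters so much in the scenarios compared above.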
Abstract
Smoothers are increasingly used in geophysics. Several linear Gaussian algorithms exist, and the general picture may appear somewhat confusing. This paper attempts to stand back a little in order to clarify this picture by providing a concise overview of what the different smoothers really solve, and how. The authors begin addressing this issue from a Bayesian viewpoint. The filtering problem consists of finding the probability of a system state at a given time, conditioned on some past and present observations (if the present observations are not included, it is a forecast problem). This formulation is unique: any different formulation is a smoothing problem. The two main formulations of smoothing are tackled here: the joint estimation problem (fixed lag or fixed interval), where the probability of a series of system states conditioned on observations is to be found, and the marginal estimation problem, which deals with the probability of only one system state, conditioned on past, present, and future observations. The various strategies to solve these problems in the Bayesian framework are introduced, along with the linear Gaussian, Kalman filter-based algorithms that derive from them. Their ensemble formulations are also presented. This results in a classification and a possible comparison of the most common smoothers used in geophysics. It should provide a good basis to help the reader find the most appropriate algorithm for his or her own smoothing problem.
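To make the distinction between filtering and marginal smoothing concrete, here is a compact NumPy sketch of a linear Gaussian Kalman filter followed by a backward Rauch-Tung-Striebel pass, a standard fixed-interval marginal smoother. It is a generic illustration of the algorithm class surveyed in the paper, not one of its specific schemes; all names and the constant model and observation operators M, H are assumptions.

```python
import numpy as np

def kalman_filter(x0, P0, M, Q, H, R, obs):
    """Forward pass: each analysis is the state conditioned on past and
    present observations only (the filtering problem).  Returns the
    forecasts and analyses needed by the backward smoothing pass."""
    xf, Pf, xa, Pa = [], [], [], []
    x, P = x0, P0
    for y in obs:
        x, P = M @ x, M @ P @ M.T + Q                 # forecast
        xf.append(x); Pf.append(P)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (y - H @ x)                       # analysis
        P = (np.eye(len(x)) - K @ H) @ P
        xa.append(x); Pa.append(P)
    return xf, Pf, xa, Pa

def rts_smoother(xf, Pf, xa, Pa, M):
    """Backward Rauch-Tung-Striebel pass: marginal fixed-interval
    smoothing, i.e. each state conditioned on past, present, and
    future observations."""
    xs, Ps = xa[-1], Pa[-1]
    out_x, out_P = [xs], [Ps]
    for k in range(len(xa) - 2, -1, -1):
        G = Pa[k] @ M.T @ np.linalg.inv(Pf[k + 1])    # smoother gain
        xs = xa[k] + G @ (xs - xf[k + 1])
        Ps = Pa[k] + G @ (Ps - Pf[k + 1]) @ G.T
        out_x.insert(0, xs); out_P.insert(0, Ps)
    return out_x, out_P
```

The forward pass conditions each state on past and present observations only; the backward pass folds in the future ones, which is exactly the marginal estimation problem described above.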
Abstract
In the standard four-dimensional variational data assimilation (4D-Var) algorithm the background error covariance matrix B remains static over the assimilation window, so it cannot account for flow-dependent changes in the error statistics.
A possible method for remedying this flaw is presented and tested in this paper. A hybrid variational-smoothing algorithm is based on a reduced-rank incremental 4D-Var. Its consistent coupling to a singular evolutive extended Kalman (SEEK) smoother ensures the evolution of the B matrix in time.
The hybrid method is implemented and tested in twin experiments employing a shallow-water model. The background error covariance matrix is initialized using an EOF decomposition of a sample of model states. The quality of the analyses and the information content in the bases spanning control subspaces are also assessed. Several numerical experiments are conducted that differ with regard to the initialization of the B matrix.
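The reduced-rank idea can be sketched as follows: with the increment constrained to the subspace spanned by an EOF-based square root S of B (so B = S Sᵀ), the background term of the incremental 4D-Var cost function collapses to a simple quadratic in the control vector w. The sketch below, with hypothetical names and a linear observation operator, is an assumption-laden illustration rather than the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def reduced_4dvar_increment(S, d_list, G_list, Rinv_list):
    """Reduced-rank incremental 4D-Var sketch.  The increment is sought
    as dx = S @ w, with S the (n x r) square root of B from an EOF
    basis (B = S S^T), so the background term is simply 0.5 * w^T w.
    d_list    : innovation vectors d_k at the observation times
    G_list    : linearized observation operators mapped to the reduced
                basis, G_k = H_k M_k S (each with r columns)
    Rinv_list : inverse observation error covariances R_k^{-1}"""
    def cost_and_grad(w):
        J, g = 0.5 * w @ w, w.copy()
        for d, G, Rinv in zip(d_list, G_list, Rinv_list):
            r = G @ w - d
            J += 0.5 * r @ Rinv @ r
            g += G.T @ (Rinv @ r)
        return J, g
    r_dim = G_list[0].shape[1]
    res = minimize(cost_and_grad, np.zeros(r_dim), jac=True,
                   method="L-BFGS-B")
    return S @ res.x  # analysis increment in the full state space
```

Coupling to a SEEK smoother then amounts to updating the basis S between windows, which is how the B matrix is made to evolve.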
Abstract
High-resolution ocean general circulation model experiments were carried out to investigate the effects of a midocean ridge on the eddy field and the mean circulation on the basin scale. A quasigeostrophic two-layer model was used. Long-term statistics were computed for a detailed comparison with the flat-bottom case. An eddy-driven anticyclonic gyre, locked over the topography, appears as a new feature of the deep circulation pattern. The eddy energy radiation in both layers is strongly constrained by the topography. Insofar as surface currents are concerned, the ridge acts, to a limited extent, as a new western boundary for the eastern basin.
Abstract
The Solomon Sea is a key region of the southwest Pacific Ocean, connecting the thermocline subtropics to the equator via western boundary currents (WBCs). Modifications to water masses are thought to occur in this region because of the significant mixing induced by internal tides, eddies, and the WBCs. Despite their potential influence on the equatorial Pacific thermocline temperature and salinity and their related impact on the low-frequency modulation of El Niño–Southern Oscillation, modifications to water masses in the Solomon Sea have never been analyzed to our knowledge. A high-resolution model incorporating a tidal mixing parameterization was implemented to depict and analyze water mass modifications and the Solomon Sea pathways to the equator in a Lagrangian quantitative framework. The main routes from the Solomon Sea to the equatorial Pacific occur through the Vitiaz and Solomon straits, in the thermocline and intermediate layers, and mainly originate from the Solomon Sea south inflow and from the Solomon Strait itself. Water mass modifications in the model are characterized by a reduction of the vertical temperature and salinity gradients over the water column: the high salinity of upper thermocline water [Subtropical Mode Water (STMW)] is eroded and exported toward surface and deeper layers, whereas a downward heat transfer occurs over the water column. Consequently, the thermocline water temperature is cooled by 0.15°–0.3°C from the Solomon Sea inflows to the equatorward outflows. This temperature modification could weaken the STMW anomalies advected by the subtropical cell and thereby diminish the potential influence of these anomalies on the tropical climate. The Solomon Sea water mass modifications can be partially explained (≈60%) by strong diapycnal mixing in the Solomon Sea. As for STMW, about a third of this mixing is due to tidal mixing.
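As a hint of what a quantitative Lagrangian framework involves, the sketch below advects particles through a gridded velocity field; in transport studies each particle also carries the transport of the stream tube it seeds, so that counting particles through a strait yields volume fluxes. Everything here (names, index-space coordinates, nearest-node sampling) is an illustrative assumption, not the tool used in the paper.

```python
import numpy as np

def advect_particles(x, y, u, v, dt, nsteps):
    """Minimal Lagrangian advection (midpoint rule) of particles at
    positions (x, y), given velocities u, v sampled on a grid.
    Positions are in index coordinates (grid spacing of 1)."""
    def sample(field, xp, yp):
        # nearest-node sampling keeps the sketch short
        i = np.clip(xp.round().astype(int), 0, field.shape[1] - 1)
        j = np.clip(yp.round().astype(int), 0, field.shape[0] - 1)
        return field[j, i]
    for _ in range(nsteps):
        xm = x + 0.5 * dt * sample(u, x, y)
        ym = y + 0.5 * dt * sample(v, x, y)
        x = x + dt * sample(u, xm, ym)
        y = y + dt * sample(v, xm, ym)
    return x, y
```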
Abstract
The effect of an isolated canyon interrupting a long continental shelf of constant cross section on the along-isobath, oscillatory motion of a homogeneous, incompressible fluid is considered by employing laboratory experiments (physical models) and a numerical model. The laboratory experiments are conducted in two separate cylindrical test cells of 13.0- and 1.8-m diameters, respectively. In both experiments the shelf topography is constructed around the periphery of the test cells, and the oscillatory motion is realized by modulating the rotation rate of the turntables. The numerical model employs a long shelf in a rectangular Cartesian geometry. It is found from the physical experiments that the oscillatory flow drives two characteristic flow patterns depending on the values of the temporal Rossby number, Ro_t, and the Rossby number, Ro. For sufficiently small Ro_t, and for the range of Ro investigated, cyclonic vortices are formed during the right to left portion of the oscillatory cycle, facing toward the deep water, on (i) the inside right and (ii) the outside left of the canyon; that is, the cyclone regime. For sufficiently large Ro_t and the range of Ro studied, no closed cyclonic eddy structures are formed, a flow type designated as cyclone free.
The asymmetric nature of the right to left and left to right phases of the oscillatory background flow leads to the generation of a mean flow along the canyon walls, which exits the canyon region on the right, facing toward the deep water, and then continues along the shelf break before decaying downstream. A parametric study of the physical and numerical model experiments is conducted by plotting the normalized maximum mean velocity observed one canyon width downstream of the canyon axis against the normalized excursion amplitude X. These data show good agreement between the physical experiments and the numerical model. For X ≥ 0.4, the normalized maximum mean velocity is independent of X and is roughly equal to 0.6; i.e., the maximum mean velocity is approximately equal to the mean forcing velocity over one half of the oscillatory cycle (these experiments are all of the cyclone flow type). For X ≤ 0.4, the normalized maximum mean velocity separates into (i) a lower branch for which the mean flow is relatively small and increases with X (cyclone-free flow type) and (ii) an upper branch for which the mean flow is relatively large and decreases with X (cyclone flow type).
The time-dependent nature of the large-scale eddy field for a numerical model run in the cyclone regime is shown to agree well qualitatively with physical experiments in the same regime. Time-mean velocity and streamfunction fields obtained from the numerical model are also shown to agree well with the laboratory experiments. Comparisons are also made between the present model findings and some oceanic observations and findings from other models.
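The regime classification above is organized by a few dimensionless numbers that can be computed directly. The definitions below, with the canyon width W as the length scale and the background-velocity amplitude U0, are assumptions for illustration; the paper may normalize differently.

```python
def canyon_numbers(U0, omega, f, W):
    """Dimensionless parameters of an oscillatory flow past a canyon,
    under assumed definitions: temporal Rossby number Ro_t = omega / f,
    Rossby number Ro = U0 / (f * W), excursion X = U0 / (omega * W).
    U0    : amplitude of the oscillatory background velocity (m/s)
    omega : forcing frequency (rad/s);  f : Coriolis parameter (1/s)
    W     : canyon width (m)"""
    return omega / f, U0 / (f * W), U0 / (omega * W)
```

Under these assumed definitions, X >= 0.4 would correspond to the plateau reported above, where the normalized maximum mean velocity is roughly 0.6.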
Abstract
Bulk formulations parameterizing turbulent air–sea fluxes remain among the main sources of error in present-day ocean models. The objective of this study is to investigate the possibility of estimating the turbulent bulk exchange coefficients using sequential data assimilation. It is expected that existing ocean assimilation systems can use this method to improve the air–sea fluxes and produce more realistic forecasts of the thermohaline characteristics of the mixed layer. The method involves augmenting the control vector of the assimilation scheme using the model parameters that are to be controlled. The focus of this research is on estimating two bulk coefficients that drive the sensible heat flux, the latent heat flux, and the evaporation flux of a global ocean model, by assimilating temperature and salinity profiles using horizontal and temporal samplings similar to those to be provided by the Argo float system. The results of twin experiments show that the method is able to correctly estimate the large-scale variations in the bulk parameters, leading to a significant improvement in the atmospheric forcing applied to the ocean model.
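The control-vector augmentation can be sketched in ensemble form: the bulk coefficients are appended to the state vector, and the ensemble cross covariances between parameters and observed temperature and salinity carry the observational information back onto the parameters. This is a generic stochastic-EnKF sketch of the idea, with hypothetical names; the sequential scheme actually used in the paper may differ in its details.

```python
import numpy as np

def augmented_enkf_analysis(ens_state, ens_params, obs, H, R, rng):
    """Stochastic EnKF analysis on an augmented vector z = [state; params].
    ens_state  : (n, m) ensemble of model states (m members)
    ens_params : (p, m) ensemble of bulk coefficients
    obs        : (k,) observed values;  H : (k, n) observation operator
    The parameter rows receive their update through the ensemble cross
    covariances with the observed variables."""
    n, m = ens_state.shape
    Z = np.vstack([ens_state, ens_params])
    Ha = np.hstack([H, np.zeros((H.shape[0], ens_params.shape[0]))])
    A = Z - Z.mean(axis=1, keepdims=True)
    HZ = Ha @ Z
    HA = HZ - HZ.mean(axis=1, keepdims=True)
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (m - 1) * R)
    Y = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, m).T
    Z = Z + K @ (Y - HZ)
    return Z[:n], Z[n:]
```

Persisting the analyzed parameters into the next forecast closes the estimation loop, so the bulk coefficients drift toward values consistent with the assimilated profiles.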
Abstract
The characteristics of the mesoscale turbulence simulated at a resolution of ⅓° by a sigma-coordinate model (SPEM) and a geopotential-coordinate model (OPA) of the South Atlantic differ significantly. These two types of models differ with respect to not only their numerical formulation, but also their topography (smoothed in SPEM, as in every sigma-coordinate application). In this paper, the authors examine how these topographic differences result in eddy flows that are different in the two models. When the topography of the Agulhas region is smoothed locally in OPA, as is done routinely in SPEM, the production mechanism of the Agulhas rings, their characteristics, and their subsequent drift in the subtropical gyre, are found to converge toward those in SPEM. Furthermore, the vertical distribution of eddy kinetic energy (EKE) everywhere in the basin interior becomes similar in SPEM and OPA and, according to some current meter data, becomes more realistic when mesoscale topographic roughness is removed from the OPA bathymetry (as in SPEM). As expected from previous process studies, this treatment also makes the sensitivity of the Agulhas rings to the Walvis Ridge become similar in SPEM and OPA. These findings demonstrate that many properties of the eddies produced by sigma- and geopotential-coordinate models are, to a significant extent, due to the use of different topographies, and are not intrinsic to the use of different vertical coordinates. Other dynamical differences, such as the separation of western boundary currents from the shelf or the interaction of the flow with the Zapiola Ridge, are attributed to intrinsic differences between both models. More generally, it is believed that, in the absence of a correct parameterization of current–topography interactions, a certain amount of topographic smoothing may have a beneficial impact on geopotential coordinate model solutions.
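For concreteness, the kind of local topographic smoothing discussed above is often implemented with a few passes of a 1-2-1 (Shapiro-type) filter; the sketch below, with periodic edges for brevity, is one common recipe rather than the exact procedure applied to the SPEM or OPA bathymetries.

```python
import numpy as np

def shapiro_smooth(h, passes=1):
    """1-2-1 (Shapiro-type) smoothing of a bathymetry field h, applied
    along both horizontal directions; periodic edges are used here only
    to keep the sketch short."""
    for _ in range(passes):
        h = 0.25 * (np.roll(h, 1, axis=0) + np.roll(h, -1, axis=0)) + 0.5 * h
        h = 0.25 * (np.roll(h, 1, axis=1) + np.roll(h, -1, axis=1)) + 0.5 * h
    return h
```

Each pass halves the amplitude of grid-scale roughness, which is exactly the mesoscale topographic detail whose removal is shown above to reconcile the two models.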
Abstract
In Kalman filter applications, an adaptive parameterization of the error statistics is often necessary to avoid filter divergence and to prevent error estimates from becoming grossly inconsistent with the real error. With the classic formulation of the Kalman filter observational update, optimal estimates of general adaptive parameters can only be obtained at a numerical cost that is several times larger than the cost of the state observational update. In this paper, it is shown that there exist a few types of important parameters for which optimal estimates can be computed at a negligible numerical cost, provided that the computation is performed using a transformed algorithm that works in the reduced control space defined by the square root or ensemble representation of the forecast error covariance matrix. The set of parameters that can be efficiently controlled includes scaling factors for the forecast error covariance matrix, scaling factors for the observation error covariance matrix, or even a scaling factor for the observation error correlation length scale.
As an application, the resulting adaptive filter is used to estimate the time evolution of ocean mesoscale signals using observations of the ocean dynamic topography. To check the behavior of the adaptive mechanism, this is done in the context of idealized experiments, in which model error and observation error statistics are known. This ideal framework is particularly appropriate to explore the ill-conditioned situations (inadequate prior assumptions or uncontrollability of the parameters) in which adaptivity can be misleading. Overall, the experiments show that, if used correctly, the efficient optimal adaptive algorithm proposed in this paper introduces useful supplementary degrees of freedom in the estimation problem, and that the direct control of these statistical parameters by the observations increases the robustness of the error estimates and thus the optimality of the resulting Kalman filter.
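The flavor of such adaptive estimates can be conveyed with a one-parameter example: if the forecast error covariance is represented by a square root S (so Pf = alpha * S Sᵀ), a moment-matching estimate of the scaling factor alpha follows from a single innovation vector. The function name and the single-innovation form are illustrative assumptions; this is a simplified sketch, not the optimal reduced-space estimator derived in the paper.

```python
import numpy as np

def covariance_scaling_factor(d, HS, R):
    """Moment-matching estimate of a scaling factor alpha for the
    forecast error covariance Pf = alpha * S S^T, from one innovation
    vector d = y - H xf.  With HS = H @ S (the square root mapped to
    observation space), E[d d^T] = alpha * HS HS^T + R; equating the
    traces of both sides gives the estimator below."""
    alpha = (d @ d - np.trace(R)) / np.sum(HS * HS)
    return max(alpha, 0.0)  # keep the estimated variance non-negative
```

In practice a single-innovation estimate is noisy, so such factors are typically accumulated or smoothed over successive updates before being applied.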