Search Results
You are looking at 1 - 5 of 5 items for
- Author or Editor: I. Michael Navon
Abstract
The adjoint of a finite-element shallow-water equations model was derived in order to calculate the gradient of a cost functional, with a view to carrying out variational data assimilation (VDA) experiments using optimal control of partial differential equations.
The finite-element model employs a triangular finite-element Galerkin scheme and serves as a prototype of 2D shallow-water equation models, with a view to tackling problems related to VDA with finite-element numerical weather prediction models. Deriving the adjoint of this finite-element model involves overcoming specific computational problems: obtaining the adjoint of the iterative procedures used to solve the systems of nonsymmetric linear equations arising from the finite-element discretization, and dealing with irregularly ordered discrete variables at each time step.
The correctness of the adjoint model was verified at the subroutine level, followed by a gradient check conducted once the full adjoint model was assembled. VDA experiments were performed using model-generated observations. In our experiments, assimilation was carried out assuming that observations consisting of a full model-state vector are available at every time step in the assimilation window. Successful retrieval was obtained using the initial conditions as control variables, by minimizing a cost function consisting of the weighted sum of differences between the model solution and the model-generated observations.
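The gradient check mentioned above can be illustrated with a minimal sketch: for a cost J and an adjoint-derived gradient, the ratio [J(x + a*h) − J(x)] / (a * hᵀ∇J(x)) should approach 1 as the perturbation size a shrinks, until round-off dominates. The toy quadratic cost below is a stand-in for the weighted misfit described in the abstract, not the paper's model.

```python
import numpy as np

def gradient_check(cost, grad, x0, rng=np.random.default_rng(0)):
    """Verify an adjoint-derived gradient: the ratio
    [J(x + a*h) - J(x)] / (a * h.T @ grad(x)) should tend to 1
    as the perturbation size a decreases (until round-off dominates)."""
    h = rng.standard_normal(x0.shape)
    h /= np.linalg.norm(h)                    # unit-norm perturbation direction
    J0, g0 = cost(x0), grad(x0)
    for a in 10.0 ** -np.arange(1, 9):
        ratio = (cost(x0 + a * h) - J0) / (a * (h @ g0))
        print(f"alpha = {a:.0e}   ratio = {ratio:.10f}")

# Toy quadratic standing in for J(x) = 0.5 (x - y)^T W (x - y)
y = np.array([1.0, 2.0, 3.0])
W = np.diag([1.0, 0.5, 2.0])
cost = lambda x: 0.5 * (x - y) @ W @ (x - y)
grad = lambda x: W @ (x - y)
gradient_check(cost, grad, np.zeros(3))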
An additional set of experiments was carried out to evaluate the impact of carrying out VDA with variable mesh resolution in the finite-element model over the entire assimilation period. Several conclusions are drawn concerning the efficiency of VDA with variable horizontal mesh resolution in the finite-element discretization and the transfer of information between coarse and fine meshes.
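The coarse-to-fine information transfer can be pictured with a generic sketch: piecewise-linear interpolation of a nodal field between two unstructured meshes, with a nearest-node fallback outside the source mesh's hull. This is an illustrative stand-in, not the paper's transfer operator.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def transfer_nodal_field(src_nodes, src_vals, dst_nodes):
    """Move a nodal field between two unstructured (e.g. triangular
    finite-element) meshes by piecewise-linear interpolation, falling
    back to nearest-node values outside the source mesh's convex hull."""
    lin = LinearNDInterpolator(src_nodes, src_vals)
    near = NearestNDInterpolator(src_nodes, src_vals)
    out = lin(dst_nodes)
    mask = np.isnan(out)                      # points outside the hull
    out[mask] = near(dst_nodes[mask])
    return out

# Example: coarse 5x5 node set -> fine 9x9 node set
xc, yc = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
coarse = np.column_stack([xc.ravel(), yc.ravel()])
vals = np.sin(np.pi * coarse[:, 0]) * np.cos(np.pi * coarse[:, 1])
xf, yf = np.meshgrid(np.linspace(0, 1, 9), np.linspace(0, 1, 9))
fine = np.column_stack([xf.ravel(), yf.ravel()])
fine_vals = transfer_nodal_field(coarse, vals, fine)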
Abstract
An analysis is provided to show that the method of Courtier et al. for estimating the Hessian preconditioning is not applicable to important categories of cases involving nonlinearity. An extension of the method to cases with stronger nonlinearity is proposed in the present paper, via an algorithm that reduces the errors in the Hessian estimation induced by the breakdown of the tangent linear approximation. The new preconditioning method was tested numerically in the framework of variational data assimilation experiments using both the National Aeronautics and Space Administration (NASA) semi-Lagrangian semi-implicit global shallow-water equations model and the adiabatic version of the NASA/Data Assimilation Office (DAO) Goddard Earth Observing System Version 1 (GEOS-1) general circulation model. The authors' results show that the new preconditioning method speeds up the convergence rate of the minimization when applied to variational data assimilation cases characterized by strong nonlinearity.
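The role of the tangent linear approximation can be seen in a minimal sketch of randomized Hessian estimation from gradient calls only. Here a Hessian-vector product is formed by finite-differencing the gradient, which is accurate only where the cost is locally near-quadratic; the diagonal is then averaged over p random probes (a Bekas-type estimator, not necessarily the exact scheme analyzed in the paper) and could serve as a diagonal preconditioner.

```python
import numpy as np

def hessian_diag_estimate(grad, x, p=20, eps=1e-6, rng=np.random.default_rng(0)):
    """Estimate diag(H) of a cost function using only gradient calls.
    Hv is approximated by (grad(x + eps*v) - grad(x)) / eps, valid only
    where the tangent linear approximation holds; p Rademacher probes
    v (entries +/-1) are averaged elementwise."""
    g0 = grad(x)
    d = np.zeros_like(x)
    for _ in range(p):
        v = rng.choice([-1.0, 1.0], size=x.shape)
        Hv = (grad(x + eps * v) - g0) / eps   # finite-difference Hessian-vector product
        d += v * Hv                           # elementwise v * (Hv)
    return d / p

# Toy check on J(x) = 0.5 x^T A x, whose Hessian is A
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
print(hessian_diag_estimate(grad, np.zeros(3)))   # approx [1, 10, 100]
```

The choice of p trades sampling noise in the estimate against the cost of extra gradient evaluations, which is the computational-cost question taken up next.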
Finally, the authors address issues related to the computational cost of the new algorithm, including the optimal determination of the number of random realizations p necessary for Hessian estimation methods. The authors tested a computationally efficient method that uses a coarser gridpoint model to estimate the Hessian for application to a fine-resolution mesh; the tests yielded encouraging results.
Abstract
The fixed-lag Kalman smoother (FLKS) has been proposed as a framework to construct data assimilation procedures capable of producing high-quality climate research datasets. FLKS-based systems, referred to as retrospective data assimilation systems, are an extension to three-dimensional filtering procedures with the added capability of incorporating observations not only in the past and present time of the estimate, but also at future times. A variety of simplifications are necessary to render retrospective assimilation procedures practical.
In this article, an FLKS-based retrospective data assimilation system implementation for the Goddard Earth Observing System Data Assimilation System is presented. The practicality of this implementation comes from the practicality of its underlying (filter) analysis system, that is, the physical-space statistical analysis system (PSAS). The behavior of two schemes is studied here. The first retrospective analysis (RA) scheme is designed simply to update the regular PSAS analyses with observations available at times ahead of the regular analysis times. Results are presented for when observations 6-h ahead of the analysis time are used to update the PSAS analyses and thereby to calculate the so-called lag-1 retrospective analyses. Consistency tests for this RA scheme show that the lag-1 retrospective analyses indeed have better 6-h predictive skill than the predictions from the regular analyses. This motivates the introduction of the second retrospective analysis scheme, which, at each analysis time, uses the 6-h retrospective analysis to create a new forecast to replace the forecast normally used in the PSAS analysis, and therefore allows the calculation of a revised (filter) PSAS analysis. This procedure is referred to as the retrospective-based iterated analysis (RIA) scheme. Results from the RIA scheme indicate its potential for improving the overall quality of the assimilation.
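The lag-1 idea can be summarized with a minimal linear-Gaussian sketch (PSAS itself performs the analysis in observation space, and the paper's implementation differs in many details). The same innovation at time k corrects both the time-k state, as in the ordinary filter, and, via a cross covariance, the state at time k-1.

```python
import numpy as np

def flks_lag1_step(x_a_prev, P_a_prev, M, Q, H, R, y):
    """One step of a lag-1 fixed-lag Kalman smoother for the linear model
    x_k = M x_{k-1} + w, w ~ N(0, Q), with obs y_k = H x_k + v, v ~ N(0, R).
    The innovation at time k updates the time-k state (filter analysis)
    and, through the cross covariance P_a_prev @ M.T, the time-(k-1)
    state (lag-1 retrospective analysis). Smoothed-covariance
    propagation is omitted for brevity."""
    x_f = M @ x_a_prev                        # forecast to time k
    P_f = M @ P_a_prev @ M.T + Q
    S = H @ P_f @ H.T + R                     # innovation covariance
    d = y - H @ x_f                           # innovation at time k
    K = P_f @ H.T @ np.linalg.inv(S)          # filter gain
    x_a = x_f + K @ d                         # filter analysis at time k
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f
    K_r = P_a_prev @ M.T @ H.T @ np.linalg.inv(S)   # retrospective gain
    x_retro = x_a_prev + K_r @ d              # lag-1 retrospective analysis
    return x_a, P_a, x_retro
```

In the spirit of the RIA scheme, x_retro would then seed a new forecast that replaces x_f in a revised filter analysis at time k.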
Abstract
This paper addresses the anomaly correlation of 500-hPa geopotential heights from a suite of global models and from a model-weighted ensemble mean called the superensemble. This procedure follows a number of current studies on weather and seasonal climate forecasting. This study uses a slightly different procedure from that of other current experimental forecasts for other variables: a superensemble for ∇² of the geopotential is constructed from the daily forecasts of the geopotential fields at the 500-hPa level, and the geopotential of the superensemble is recovered from the solution of the Poisson equation. This procedure appears to improve the skill for those scales where the variance of the geopotential is large, and contributes to a marked improvement in the skill of the anomaly correlation. Especially large improvements are noted over the Southern Hemisphere. Consistent day-6 forecast skill above 0.80 is achieved on a day-to-day basis. The superensemble skills are higher than those of the best model and of the ensemble mean. For days 1–6, the percent improvements in anomaly correlation of the superensemble over the best model are 0.3, 0.8, 2.25, 4.75, 8.6, and 14.6, respectively, for the Northern Hemisphere; the corresponding numbers for the Southern Hemisphere are 1.12, 1.66, 2.69, 4.48, 7.11, and 12.17. The major improvement in anomaly correlation skill is realized by the superensemble at days 5 and 6 of the forecasts. The collective regional strengths of the member models, which are reflected in the proposed superensemble, provide a consensus product that may be useful for future operational guidance.
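The combine-then-invert step can be sketched on a doubly periodic grid, where the Laplacian and its inverse are diagonal in Fourier space (a stand-in for the spherical-harmonic operators used on the globe). The member weights are assumed here to come from a prior regression against analyses over a training period; this is an illustration, not the paper's implementation.

```python
import numpy as np

def laplacian_fft(f, dx):
    """Spectral Laplacian on a doubly periodic grid."""
    ny, nx = f.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    return np.real(np.fft.ifft2(k2 * np.fft.fft2(f)))

def invert_laplacian_fft(g, dx):
    """Solve the Poisson equation lap(phi) = g (zero-mean solution)."""
    ny, nx = g.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0                            # avoid division by zero
    ghat = np.fft.fft2(g) / k2
    ghat[0, 0] = 0.0                          # fix the undetermined mean at 0
    return np.real(np.fft.ifft2(ghat))

def superensemble_height(members, weights, clim, dx):
    """Weight the member 500-hPa height forecasts through their Laplacian
    anomalies (weights assumed precomputed from a training period), then
    recover the height field by inverting the Poisson equation."""
    lap_anom = sum(w * laplacian_fft(m - clim, dx)
                   for w, m in zip(weights, members))
    return clim + invert_laplacian_fft(lap_anom, dx)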
Abstract
Four-dimensional variational data assimilation (VDA) experiments have been carried out using the adiabatic version of the NASA/Goddard Laboratory for Atmospheres semi-Lagrangian semi-implicit (SLSI) multilevel general circulation model. The limited-memory quasi-Newton minimization technique was used to find the minimum of the cost function. With model-generated observations, different first-guess initial conditions were used to carry out the experiments, including randomly perturbed initial conditions as well as different weight matrices in the cost function.
The results show that 4D VDA works well with various initial conditions as control variables. Scaling the gradient of the cost function proves to be an effective method of improving the convergence rate of the VDA minimization process.
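One simple form of such scaling is a diagonal change of variables that equalizes gradient magnitudes across control variables of different physical units, a sketch of which follows; the paper's exact scaling is not reproduced here, and the scipy call stands in for the limited-memory quasi-Newton minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def minimize_scaled(cost, grad, x0, s):
    """Limited-memory quasi-Newton minimization with a diagonal scaling s
    of the control variables (x = s * z). Equalizing gradient magnitudes
    across variables is one simple way to speed up L-BFGS convergence."""
    f = lambda z: cost(s * z)
    g = lambda z: s * grad(s * z)             # chain rule: dJ/dz = s * dJ/dx
    res = minimize(f, x0 / s, jac=g, method="L-BFGS-B")
    return s * res.x

# Toy ill-scaled quadratic: curvatures differ by four orders of magnitude
A = np.diag([1.0, 1e4])
cost = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_opt = minimize_scaled(cost, grad, np.array([1.0, 1.0]), s=np.array([1.0, 1e-2]))
print(x_opt)   # approx [0, 0]
```

With s chosen this way, the transformed Hessian diag(s) A diag(s) is perfectly conditioned, which is why the scaled minimization converges in far fewer iterations.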
The impacts of the length of the assimilation interval and of the time density of the observations on the convergence rate of the minimization have also been investigated. An improved assimilation was obtained when observations were available in selected segments of the assimilation window. Moreover, our 4D VDA experiments with the SLSI model confirm the results obtained by Navon et al. and Li et al. concerning the impact of the length of the assimilation window. The choice of an adequate time distribution of observations, along with an appropriate length of the assimilation interval, is an important issue that will be further investigated.