1. Introduction
Data assimilation can be described as the ensemble of techniques for retrieving geophysical fields from different sources such as observations, governing equations, statistics, etc.
Being heterogeneous in nature, quality, and density, these data sources have to be combined to retrieve the geophysical fields optimally (the meaning of “optimal” has to be precisely defined). Because of its inherent operational tasks, meteorology has played an important role in the development of data assimilation techniques. An ever-increasing amount of data and models is considered as an ensemble from which the optimal information should be extracted. Behind most of the classical methods used in meteorology, such as optimal interpolation, variational methods, and statistical estimation, there is a variational principle; that is, the retrieved fields are obtained through the minimization of some functional depending on the various sources of information. The retrieved fields are obtained through some optimality condition, which can be an Euler or Euler–Lagrange condition if the regularity conditions are satisfied. Since these are first-order conditions, they involve the first-order derivatives of the functional that is minimized; in this sense, data assimilation techniques are first-order methods. First-order conditions, however, are only necessary conditions for optimality, not sufficient ones. To obtain sufficient conditions we need to proceed one step further and introduce second-order information. By the same token, from the mathematical point of view, the sensitivity with respect to some parameter can be obtained through Gâteaux derivatives with respect to this parameter. Therefore, if we seek the sensitivity of fields that have already been defined through some first-order condition, we have to proceed one order of derivation further, and in this sense sensitivity studies also require second-order information.
The purpose of this review paper is to show how to obtain, and how to use in an efficient way, second-order information in data assimilation. In the first part we show how the second-order derivative can be computed, first in a very general framework and then for some illustrative examples. We then show how this second-order information is linked to the uniqueness of the solution of the data assimilation problem. This will be shown to be not only a mathematical consideration but also a practical issue, since additional information can be extracted by studying second-order quantities.
In a second part of the paper we will proceed to show how to derive sensitivity analyses from models and data. The analysis of the impact of uncertainties in the model and in the data provides essential links between purely deterministic methods (such as variational data assimilation) and stochastic methods (Kalman filter–type). We will then proceed to demonstrate how the link between these methods can be clearly understood through use of second-order information.
Researchers in other disciplines have carried out pioneering work using second-order information. Work in seismology using second-order information and applying it to obtain accurate Hessian/vector products for truncated-Newton minimization was carried out by Santosa and Symes (1988, 1989) and by Symes (1990, 1991, 1993). Reuther (1996) and Arian and Ta'asan (1999) illustrated the importance of second-order adjoint analysis for optimal control and shape optimization for inviscid aerodynamics. Hou and Sheen (1993) used second-order sensitivity analysis for heat conduction problems.
Second-order information was tackled in automatic differentiation (AD) by Abate et al. (1997), Giering and Kaminski (1998a,b), Gay (1996), Hovland (1995), Bischof (1995), Burger et al. (1992), Griewank and Corliss (1991), and Griewank (1993, 2000, 2001), to cite but a few. Several AD packages such as the tangent linear and adjoint model compiler (TAMC) of Giering and Kaminski (1998a) allow calculation of the Hessian of the cost functional.
Early work on second-order information in meteorology includes Thacker (1989), followed by the work of Wang et al. (1992, 1993) and Wang (1993). Wang et al. (1995, 1998) considered the use of second-order information for optimization purposes, namely to obtain truncated-Newton and adjoint Newton algorithms using exact Hessian/vector products. An application of these ideas was presented in Wang et al. (1997).
Kalnay et al. (2000) introduced an elegant and novel pseudo-inverse approach and showed its connection to the adjoint Newton algorithm of Wang et al. (1997) (see Kalnay et al. 2000; Pu and Kalnay 1999; Pu et al. 1997).
Ngodock (1996) used second-order information in conjunction with sensitivity analysis in the presence of observations and applied it to the ocean circulation. Le Dimet et al. (1997) presented the basic theory of second-order adjoint analysis related to sensitivity analysis. A condensed summary of the theory is presented in Le Dimet and Charpentier (1998).
The structure of the paper is as follows. Section 2 deals with the theory of the second-order adjoint method, both for time-independent and time-dependent models. The methodology is briefly illustrated using the shallow-water equations model. Section 3 deals with the connection between sensitivity analysis and second-order information. Section 4 briefly presents the Kalnay et al. (2000) quasi-inverse method and its connection with second-order information. Issues related to second-order Hessian information in optimization theory are addressed in section 5, where both unconstrained and constrained minimization are briefly discussed. The use of accurate Hessian/vector products to improve the performance of the truncated-Newton method is then presented, along with the adjoint truncated-Newton method. A method for approximating the Hessian of the cost function with respect to the control variables, proposed by Courtier et al. (1994), based on a rank-p approximation and bearing similarity to the approximation of the Hessian in quasi-Newton methods (see Davidon 1959, 1991), is presented in section 5d.
Section 6 is dedicated to methods for obtaining the second-order adjoint via AD technology, while issues of the computational complexity of AD for the second-order adjoint are presented in the appendix. The use of the Hessian of the cost functional to estimate error covariance matrices is briefly discussed in section 7. The use of Hessian singular vectors for the development of a simplified Kalman filter is briefly addressed in section 8.
Finally, as a numerical illustration, we present in section 9 the application of the second-order adjoint of a limited-area shallow-water equations model to obtain an accurate Hessian/vector product, compared with an approximate Hessian/vector product obtained by finite differences. Automatic differentiation is implemented using the adjoint model compiler TAMC. The Hessian/vector information is used in a truncated-Newton minimization of the cost functional with respect to the initial conditions, taken as the control variables, and its impact versus the Hessian/vector product obtained via finite differences is assessed. The numerical results obtained verify the theoretically derived computational cost of obtaining the second-order adjoint via automatic differentiation. The Arnoldi package (ARPACK) was then used in conjunction with the second-order adjoint to gain information about the spectrum of the Hessian of the cost function. The unified notation of Ide et al. (1997) for data assimilation will be used.
Summary and conclusions are finally presented in section 10.
2. Computing the second-order information
In this section we will deal with deterministic models while the case of stochastic modeling will be discussed later in this manuscript.
a. First-order necessary conditions
The gradient of J is obtained in the following way:



The gradient of J is obtained by exhibiting the linear dependence of Ĵ with respect to u. This is done by introducing the adjoint variable P (to be defined later according to convenience).
b. Second-order adjoint







The system (16)–(17) will be called the second-order adjoint. Therefore we can obtain the product of the Hessian by a vector u by (i) solving the system (16)–(17) and (ii) applying formula (18).
c. Remarks
The system (16)–(17), which has to be solved to obtain the Hessian/vector product, can be derived from the Gâteaux derivative (4), which is the same as (17). In the literature, the system (16)–(17) is often called the tangent linear model, a denomination that is rather inappropriate because it suggests a linearization, with its attendant notion of a range of validity, which is not relevant in the case of a derivative.
In the case of a finite-dimensional space of dimension N, the full Hessian can be computed after N such integrations, one for each vector e_i of the canonical basis, as sketched below.
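The following illustrative sketch (in Python) makes this remark concrete; here `hessvec` is a hypothetical stand-in for any routine returning the exact Hessian/vector product (e.g., via one integration of the second-order adjoint), and an explicit symmetric matrix is used only to exercise the function.

```python
import numpy as np

# Illustrative sketch only: 'hessvec' stands for any routine returning the exact
# Hessian/vector product G v (e.g., one integration of the second-order adjoint).
def full_hessian(hessvec, n):
    columns = []
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0                      # ith vector of the canonical basis
        columns.append(hessvec(e_i))      # ith column of the Hessian
    return np.column_stack(columns)

# Example with an explicit symmetric matrix A playing the role of the Hessian.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(full_hessian(lambda v: A @ v, 2))
```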
Equation (16) differs from the adjoint model by the forcing terms, which will depend on u and R.

However several integrations of the model and of its adjoint model will be necessary in this case to determine the range of validity of the finite-difference approximation (Wang 1995 and references therein).
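As an illustration of the preceding remarks, the sketch below (based on a toy cost function, not the assimilation problem) contrasts the finite-difference approximation of the Hessian/vector product, built from two gradient evaluations, with the exact product; the accuracy of the former depends on the choice of the step size h.

```python
import numpy as np

# Toy cost J(x) = 0.5 x.x + sum(sin x); its gradient and Hessian are known in closed form.
def grad(x):
    return x + np.cos(x)

def hessvec_fd(x, u, h=1.0e-6):
    # finite-difference approximation G(x) u ~ [grad(x + h u) - grad(x)] / h
    return (grad(x + h * u) - grad(x)) / h

def hessvec_exact(x, u):
    # exact product: the Hessian of the toy cost is I - diag(sin x)
    return u - np.sin(x) * u

x = np.random.default_rng(0).standard_normal(5)
u = np.ones(5)
print(np.max(np.abs(hessvec_fd(x, u) - hessvec_exact(x, u))))
```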
d. Time-dependent model





Because V is time dependent, its associated adjoint variable Q will also be time dependent. Let us remark that the gradient of J with respect to V will depend on time, which is not surprising since J also depends on time. From a computational point of view, the discretization of V will have to be carried out in such a way that the discretized variable remains in a space of “reasonable” dimension.


We then introduce Q and R, the second-order adjoint variables. They will be defined later for convenience of presentation.





We would like to point out that Eq. (44) follows directly from Eq. (43) by using Eq. (42). The product of the Hessian by a vector is obtained exactly by a direct integration of (40) and (42) followed by a backward integration in time of (39) and (41).





One can also obtain the product of the Hessian with a vector of the control space at the cost of a single integration of the second-order adjoint.
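The following one-dimensional sketch (a hypothetical scalar model, not the system (39)–(42)) illustrates this forward/backward structure: the model and its tangent linear model are integrated forward, and the first- and second-order adjoint variables are then integrated backward, yielding the exact Hessian/vector product of a cost function measuring the misfit to observations y_k.

```python
import numpy as np

# Cost: J(x0) = 0.5 * sum_k (x_k - y_k)^2 with x_{k+1} = m(x_k); all quantities scalar.
def m(x):            # hypothetical nonlinear model step
    return x + 0.1 * np.sin(x)

def dm(x):           # tangent linear coefficient m'(x)
    return 1.0 + 0.1 * np.cos(x)

def d2m(x):          # second derivative m''(x), needed by the second-order adjoint
    return -0.1 * np.sin(x)

def hessvec(x0, u, y):
    K = len(y) - 1
    # forward integration of the model and of the tangent linear model
    x, dx = [x0], [u]
    for k in range(K):
        x.append(m(x[k]))
        dx.append(dm(x[k]) * dx[k])
    # backward integration of the first-order adjoint p and second-order adjoint dp
    p, dp = x[K] - y[K], dx[K]
    for k in range(K - 1, -1, -1):
        dp = d2m(x[k]) * dx[k] * p + dm(x[k]) * dp + dx[k]   # uses p_{k+1}, dp_{k+1}
        p = dm(x[k]) * p + (x[k] - y[k])
    return dp        # = (d^2 J / dx0^2) u

y = np.linspace(0.5, 1.0, 6)   # synthetic observations
print(hessvec(0.3, 1.0, y))
```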
e. Example: The shallow-water equations
The shallow-water equations (SWEs) represent the flow of an incompressible fluid whose depth is small with respect to the horizontal dimension.

- We neglect the model error, which, following the previous notation, implies 𝗕 ≡ 0. We only control the initial conditions.
- We impose periodic boundary conditions.
- The observations are assumed continuous in both space and time, which is tantamount to assuming H ≡ 𝗜, where 𝗜 is the identity operator. Let U0 = (u0, υ0, ϕ0)T denote the initial condition; the cost function then assumes the form given below, where γ is a nonunit weighting term.



The calculation of second-order derivatives requires the storage of the model trajectory, the tangent linear model, and the adjoint model.
3. Sensitivity analysis and second-order information
a. General sensitivity analysis
In general a model has the following kinds of variables:
- (i) State variable: 𝗭, in a space 𝕃, which describes the physical properties of the medium (velocity, pressure, temperature, …); 𝗭 depends on time and space.
- (ii) Input variable: 𝗜, in a space 𝕃, which has to be provided to the model (e.g., initial or boundary conditions); most of the time these variables are not directly measured, but they can be estimated through data assimilation.
- (iii) Parameters: 𝗞 represents empirical parameters (e.g., diffusivity) that most models contain, which have to be tuned to adjust the model to the observations.
In many applications a sensitivity analysis is carried out; for instance, if we consider some scalar quantity linked to a solution of the model, what will be its variation under a perturbation of the inputs of the model?

There are two ways to estimate the sensitivity.
b. Sensitivity analysis via finite differences


c. Sensitivity via the adjoint


The gradient will be obtained by exhibiting the linear dependence of



It is worth noting that the sensitivity is obtained only after one run of the adjoint model and the result is exact. The cost to be paid is in software development since an adjoint model has to be developed.
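The contrast between the two approaches can be made concrete with the following sketch (a hypothetical model step and response function, not the paper's equations): the adjoint run delivers the full sensitivity vector in a single backward integration, whereas finite differences require one perturbed forward run per component of the input.

```python
import numpy as np

def step(x, dt=0.1):                 # hypothetical elementwise model step
    return x + dt * np.sin(x)

def forward(x0, K):
    traj = [x0]
    for _ in range(K):
        traj.append(step(traj[-1]))
    return traj

def response(xK):                    # scalar response acting on the final state
    return float(np.sum(xK))

def sensitivity_adjoint(x0, K, dt=0.1):
    traj = forward(x0, K)
    p = np.ones_like(x0)             # dG/dx_K for G = sum(x_K)
    for k in range(K - 1, -1, -1):   # one backward (adjoint) sweep
        # the tangent linear operator of this elementwise model is diagonal,
        # so its transpose multiplies componentwise
        p = (1.0 + dt * np.cos(traj[k])) * p
    return p

def sensitivity_fd(x0, K, h=1e-6):
    base = response(forward(x0, K)[-1])
    g = np.zeros_like(x0)
    for i in range(x0.size):         # one forward model run per control component
        xp = x0.copy()
        xp[i] += h
        g[i] = (response(forward(xp, K)[-1]) - base) / h
    return g

x0 = np.linspace(0.1, 1.0, 8)
print(np.max(np.abs(sensitivity_adjoint(x0, 10) - sensitivity_fd(x0, 10))))
```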
d. Sensitivity analysis and data assimilation

For sensitivity studies in the presence of observations, with a given response function, we have to consider the optimality system (OS) as a generalized model





Therefore the algorithm to get the sensitivity is as follows:
- (i) solve the optimality system to obtain X and P
- (ii) solve the coupled system (78) to obtain Q and R, and
- (iii) compute the sensitivity by (79).
The sensitivity in the presence of observations requires taking into account the second-order information. A very simple example given by Le Dimet et al. (1997) clearly shows the necessity of the introduction of this term.
4. Kalnay et al. (2000) quasi-inverse method and second-order information

One stops after a number of minimization iterations when ‖∇J‖ is small enough to satisfy a convergence criterion.
In order to determine the optimal value of the step size, the minimization algorithm, say quasi-Newton, requires additional computations of the gradient ∇Ji−1, so that the number of direct and adjoint integrations required by adjoint 4DVAR can be larger than the number of minimization iterations (see Kalnay et al. 2000).
The inverse 3DVAR approach of Kalnay seeks to obtain directly the “perfect solution,” that is, the special δx that makes ∇J = 0, provided δx is small.
As shown by Kalnay et al. (2000) this is equivalent to the adjoint Newton algorithm used by Wang et al. (1997) except that it does not require a line minimization.
Wang et al. (1998) proposed an adjoint Newton algorithm that also required the backward integration of the tangent linear model, and they suggested a reformulation of the adjoint Newton algorithm for the case in which the TLM is not invertible, although they did not explore this idea in depth. Physical processes are generally not parameterized in a reversible form in atmospheric models, a problem that can only be overcome to some extent by using simplified reversible physics. Truly dissipative processes in atmospheric models are also not reversible and as such constitute a problem for the inverse 3DVAR. To show the link of inverse 3DVAR to second-order information we follow Kalnay et al. (2000) and show that inverse 3DVAR is equivalent to using a perfect Newton iterative method to solve the minimization problem at a given time level.

Since cost functions used in 4DVAR are close to quadratic functions, one may view 3DVAR as a perfect preconditioner of a simplified 4DVAR problem.
In general, availability of second-order information allows powerful minimization algorithms to be used (Wang et al. 1995, 1997) even when the inverse 3DVAR is difficult to obtain as is the case with full physics models.
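The following small sketch (with an arbitrary symmetric positive definite matrix standing in for the Hessian of a simplified quadratic problem) illustrates the sense in which such an iteration is "perfect": for a quadratic cost a single exact Newton step makes the gradient vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
G = A @ A.T + 6.0 * np.eye(6)         # stand-in symmetric positive definite Hessian
b = rng.standard_normal(6)

def grad(x):                           # gradient of J(x) = 0.5 x^T G x - b^T x
    return G @ x - b

x = rng.standard_normal(6)             # arbitrary first guess
x_new = x - np.linalg.solve(G, grad(x))
print(np.linalg.norm(grad(x_new)))     # essentially zero after one Newton step
```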
5. Hessian information in optimization theory
Hessian information is crucial in many aspects of both constrained and unconstrained minimization. All minimization methods start by assuming a quadratic model in the vicinity of the minimum of a multivariate minimization problem.

a. Spectrum of the Hessian and rate of convergence of unconstrained minimization




If 𝗚(X*) is ill-conditioned, the error in X will vary with the direction of the perturbation p.
If p is a linear combination of eigenvectors of 𝗚(X*) corresponding to the largest eigenvalues, the size of ‖X − X*‖ will be relatively small, while if, on the other hand, p is a linear combination of eigenvectors of 𝗚(X*) corresponding to the smallest eigenvalues, the size of ‖X − X*‖ will be relatively large; that is, there will be slow convergence.
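A small numerical illustration of this statement follows (with an arbitrary ill-conditioned diagonal matrix standing in for 𝗚(X*)): for the same residual value of the quadratic cost, the error along the eigenvector of the smallest eigenvalue is two orders of magnitude larger than along that of the largest eigenvalue.

```python
import numpy as np

G = np.diag([1.0e4, 1.0])              # eigenvalues 1e4 and 1, condition number 1e4
eps = 1.0e-3                           # fixed residual value of J(x) - J(x*)

for lam, v in [(1.0e4, np.array([1.0, 0.0])), (1.0, np.array([0.0, 1.0]))]:
    # perturbation p along an eigenvector such that 0.5 p^T G p = eps
    p = np.sqrt(2.0 * eps / lam) * v
    print(lam, np.linalg.norm(p))      # small error for large lambda, large error for small lambda
```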
b. Role of the Hessian in constrained minimization
The Hessian information plays a very important role in constrained optimization as well. We shall deal here with optimality conditions, where again Taylor series approximations are used to analyze the behavior of the objective function.
We shall first consider optimality conditions for linear equality constraints.







c. Application of second-order-adjoint technique to obtain exact Hessian/vector product
We will exemplify this application by considering a truncated-Newton algorithm for large-scale minimization.
1) Description of Truncated-Newton methods





The task of choosing an adequate h is an arduous one (see Nash and Sofer 1996, chapter 11.4.1 and references therein). For in-depth descriptions of the truncated-Newton (also referred to as the Hessian-free) method, see Nash (1984a–d, 1985) and Nash and Sofer (1989a,b), as well as Schlick and Fogelson (1992a,b), and early work by Dembo et al. (1982) and Dembo and Steihaug (1983). A comparison of limited memory quasi-Newton (see Liu and Nocedal 1989) and truncated-Newton methods is provided by Nash and Nocedal (1991), while a comprehensive well-written survey of truncated-Newton methods is presented in Nash (2000). A comparison between limited memory quasi-Newton and truncated-Newton methods applied to a meteorological problem is described in depth by Zou et al. (1990, 1993).
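The sketch below (a toy convex cost, not an assimilation problem) illustrates the structure of a truncated-Newton iteration: the Newton system 𝗚(X)d = −∇J(X) is solved only approximately by a few conjugate-gradient steps, each of which requires one Hessian/vector product; in the applications discussed in this paper that product would be supplied by the second-order adjoint.

```python
import numpy as np

def grad(x):                           # gradient of J(x) = 0.5 x.x + 0.1 sum(x^4)
    return x + 0.4 * x**3

def hessvec(x, v):                     # exact Hessian/vector product of the toy cost
    return v + 1.2 * x**2 * v

def truncated_newton(x, outer=10, inner=5, tol=1e-8):
    for _ in range(outer):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # inner conjugate-gradient loop on G d = -g, truncated after 'inner' iterations
        d, r = np.zeros_like(x), -g.copy()
        p = r.copy()
        for _ in range(inner):
            Gp = hessvec(x, p)
            alpha = np.dot(r, r) / np.dot(p, Gp)
            d += alpha * p
            r_new = r - alpha * Gp
            if np.linalg.norm(r_new) < 1e-12:
                break
            p = r_new + (np.dot(r_new, r_new) / np.dot(r, r)) * p
            r = r_new
        x = x + d                      # unit step; a line search would normally be used
    return x

x = truncated_newton(np.full(8, 2.0))
print(np.linalg.norm(grad(x)))
```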
d. A method for estimating the Hessian matrix





6. Second-order adjoint via automatic differentiation
There is an increased interest in obtaining the second-order adjoint via automatic differentiation (AD).
The recent version of the Fortran AD package TAMC, designed by Giering and Kaminski (1998a), allows both the calculation of Hessian/vector products and the more computationally expensive derivation of the full Hessian with respect to the control variables. CPU times comparable to those required for hand coding were reported (Giering and Kaminski 1998b).
Hessian/vector products derived by AD are particularly important in minimization, where there is often interest not only in the first but also in the second derivatives of the cost functional, which convey crucial information.
Griewank (2000) provided a thorough estimate of the computational complexity of implementing second-order adjoints. He found that calculating a Hessian/vector product requires an effort corresponding to a run-time ratio of about a factor of 13. The ratio between the effort required to obtain a Hessian/vector product and that required to calculate the gradient of the cost was found to be only a factor of 2 to 3.



The derivation originates in the approach put forward by Griewank (2001) of Jacobian accumulation procedures using the implicit function theorem.
Griewank (2000) derives a class of derivative accumulation procedures as edge eliminations on the linearized computational Hessian graph.


One can show that first-order derivatives form the nonzero elements of matrix

7. Use of Hessian of cost functional to estimate error covariance matrices
A relationship exists between the inverse Hessian matrix and the analysis error covariance matrix of either 3DVAR or 4DVAR (see Thacker 1989; Thepaut and Courtier 1991; Rabier and Courtier 1992; Yang et al. 1996; Le Dimet et al. 1997).
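For a cost function with a linear observation operator this relationship can be verified directly, as in the following sketch (arbitrary small stand-in matrices for 𝗕, 𝗥, and 𝗛): the Hessian of J(x) = ½(x − xb)ᵀ𝗕⁻¹(x − xb) + ½(𝗛x − y)ᵀ𝗥⁻¹(𝗛x − y) is 𝗕⁻¹ + 𝗛ᵀ𝗥⁻¹𝗛, and its inverse coincides with the analysis error covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
B = np.diag(rng.uniform(0.5, 2.0, n))      # background error covariance (stand-in)
R = np.diag(rng.uniform(0.1, 0.5, m))      # observation error covariance (stand-in)
H = rng.standard_normal((m, n))            # linear observation operator (stand-in)

hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
analysis_covariance = np.linalg.inv(hessian)

# Check against the equivalent Kalman-gain expression A = (I - K H) B.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
print(np.allclose(analysis_covariance, (np.eye(n) - K @ H) @ B))
```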


8. Hessian singular vectors
Computing Hessian singular vectors (HSVs) uses the full Hessian of the cost function of variational data assimilation, which can be viewed as an approximation of the inverse of the analysis error covariance matrix; this Hessian is used at initial time to define a norm, while the total energy norm is still used at optimization time (see Barkmeijer et al. 1998, 1999). The HSVs are consistent with the 3DVAR estimates of the analysis error statistics, and they are also defined in the context of 4DVAR. In practice one never knows the full 3DVAR Hessian in its matrix form, so a generalized eigenvalue problem is solved instead, as will be described below.
The HSVs are also used in a method first proposed by Courtier (1993) and tested by Rabier et al. (1997) for the development of a simplified Kalman filter fully described by Fisher (1998) and compared with a low-resolution explicit extended Kalman filter by Ehrendorfer and Bouttier (1998).
Let 𝗠 be the propagator of the tangent linear model, and 𝗣 a projection operator setting a vector to zero outside a given domain.
Consider positive-definite and symmetric operators defining a norm at the initial and optimization times, respectively.

In HSVs the operator 𝗖 is equal to the Hessian of the 3D-/4DVAR cost function.
In this calculation, in which the inner product at the initial time is defined by the Hessian matrix of an analysis cost function, the vectors sk are partially evolved singular vectors, while the vectors zk are produced during the adjoint model integration.
Veerse (1999) proposes to take advantage of this form of the appropriate Hessian in order to obtain approximations of the inverse analysis error covariance matrix, using the limited memory inverse Broyden, Fletcher, Goldfarb, and Shanno (BFGS) minimization algorithm.

Many minimization methods, such as Nocedal's algorithm (Nocedal 1980), are implemented using an inverse Hessian/vector product that is built into the minimization code. These methods are useful when the second-order adjoint is not available because of either memory or CPU limitations.
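A sketch of this idea follows (illustrative only): the limited-memory BFGS approximation of the inverse Hessian is applied to a vector with the standard two-loop recursion, using only the pairs s_i = x_{i+1} − x_i and y_i = ∇J_{i+1} − ∇J_i collected during a minimization, so that an approximate inverse Hessian (and hence an approximate analysis error covariance) times a vector is available without ever forming a matrix.

```python
import numpy as np

def inverse_hessian_times(v, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse Hessian approximation to v."""
    q = v.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        a = np.dot(s, q) / np.dot(y, s)
        alphas.append(a)
        q -= a * y
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q                                          # initial scaling H0 = gamma I
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = np.dot(y, r) / np.dot(y, s)
        r += (a - b) * s
    return r                                               # approximately G^{-1} v

# Collect (s, y) pairs from a few gradient-descent steps on a quadratic toy problem.
A, b = np.diag([1.0, 2.0, 3.0, 4.0]), np.ones(4)
grad = lambda x: A @ x - b
x, g, s_list, y_list = np.zeros(4), -b, [], []
for _ in range(6):
    x_new = x - 0.2 * g
    g_new = grad(x_new)
    s_list.append(x_new - x)
    y_list.append(g_new - g)
    x, g = x_new, g_new

v = np.ones(4)
print(inverse_hessian_times(v, s_list, y_list))
print(np.linalg.solve(A, v))                               # exact G^{-1} v for comparison
```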
9. Numerical experiments: Application of AD Hessian/vector products to the truncated Newton algorithm
For the numerical experiments we consider the truncated-Newton algorithm to minimize the cost function (59) associated with the SWE model (56)–(58). The spatial domain considered is a 6000 km × 4400 km channel with a uniform 21 × 21 spatial grid, such that the dimension of the initial condition vector (u, υ, ϕ)T is 1083, and the Hessian of the cost function is a 1083 × 1083 matrix.
The initial conditions are those of Grammeltvedt (1969). As for the boundary conditions, on the southern and northern boundaries the normal velocity components are set to zero, while periodic boundary conditions are assumed in the west–east direction. Integration is performed with a time increment Δt = 600 s, and the length of the assimilation window is 10 h. Data assimilation is implemented in a twin-experiment framework, such that the value of the cost function at the minimum point must be zero. As the set of control parameters we consider the initial conditions, which are perturbed with random values drawn from a uniform distribution.
The second-order adjoint model was generated using TAMC (Giering and Kaminski 1998a). The correctness of the adjoint-generated routines was checked using the small-perturbations technique. Assuming that the cost function J(X) is evaluated by the subroutine model (J, X), computation of the Hessian/vector products 𝗚(X)u via automatic differentiation is performed in two steps. First the reverse (adjoint) mode is applied to generate the adjoint model. Next, the tangent (forward) mode is applied to the adjoint model to generate the second-order adjoint (SOA) model. The performance of the minimization process using the AD SOA is analyzed versus an approximate Hessian/vector product computation given by (116), with a hand-coded adjoint model implementation. The absolute and relative differences between the Hessian/vector products computed by the two methods at the first iteration (initial guess state) are shown in Fig. 1 for the first 100 components. The first-order finite-difference method (FD) provides on average an accuracy of two to three significant digits. The optimization process using FD stops after 28 iterations, when the line search fails to find an acceptable step size along the search direction, whereas for the SOA method a relative reduction in the cost function up to the machine precision is reached at iteration 29. The evolution of the normalized cost function and gradient norm are presented in Figs. 2 and 3, respectively.
The computational cost is of the same order of magnitude for both the finite-difference approach and the exact second-order adjoint approach. The second-order adjoint approach requires integrating the original nonlinear model and its TLM forward in time and integrating the first-order adjoint model and the second-order adjoint model backward in time once. The average ratio of the CPU time required to compute the gradient of the cost function to the CPU time used in evaluating the cost function was CPU(∇J)/CPU(J) ≈ 3.7. If we assume that the value of the gradient ∇J(X) in (116) is already available (previously computed in the minimization algorithm), only one additional gradient evaluation ∇J(X + hu) is needed in (116) to evaluate the Hessian/vector product using the FD method. In this case, we then have an average ratio to compute the Hessian/vector product of CPU(𝗚u)FD/CPU(J) ≈ 3.7. Using the SOA method to compute the exact Hessian/vector product, we obtained an average CPU(𝗚u)SOA/CPU(J) ≈ 9.4, in agreement with the estimate (A4) in the appendix. We notice that in addition to the Hessian/vector product the AD SOA implementation also provides the value of the gradient of the cost function. The average ratio CPU(𝗚u)SOA/CPU(∇J) ≈ 2.5 we obtained is also in agreement with the CPU estimate (A2) in the appendix.
a. Numerical calculation of Hessian eigenvalues

The computed Ritz values and the relative residuals are given in Table 1 for the Hessian evaluated at the initial guess point, and in Table 2 for the Hessian evaluated at the optimal point X*. For our test example the eigenvalues of the Hessian are positive, so the Hessian is positive definite and the existence of a minimum point is assured. The condition number of the Hessian is of order κ(𝗚) ∼ 10⁴, which explains the slow convergence of the minimization process.
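The following sketch shows how this kind of spectral information can be extracted from Hessian/vector products alone, using ARPACK through SciPy's eigsh interface; here an explicit stand-in matrix supplies the product, whereas in the experiments it is supplied by the second-order adjoint.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 200
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T          # SPD stand-in, condition number 1e4

# Only the matrix-vector product is exposed to ARPACK; the matrix itself is never needed.
hess_op = LinearOperator((n, n), matvec=lambda v: H @ v)

ritz_largest = eigsh(hess_op, k=5, which='LM', return_eigenvectors=False)
print(np.sort(ritz_largest)[::-1])                   # leading Ritz values (~1e4)
# The smallest Ritz values can be obtained similarly (e.g., with which='SA' or a
# shift-invert strategy), giving an estimate of the condition number of the Hessian.
```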
The use of Hessian eigenvalue information in the regularization of ill-posed problems was illustrated by Alekseev and Navon (2001, 2002). The application consisted of a wavelet regularization approach for an ill-posed adjoint parameter estimation problem, namely estimating inflow parameters from downflow data in an inverse convection case governed by the two-dimensional parabolized Navier–Stokes equations. The wavelet method provided a decomposition into two subspaces, identifying both a well-posed and an ill-posed subspace, the scale of which was determined by finding the minimal eigenvalues of the Hessian of a cost functional measuring the lack of fit between model predictions and observed parameters. The control space is transformed into a wavelet space. The Hessian of the cost was obtained either by a discrete differentiation of the gradients of the cost derived from the first-order adjoint or by using the full second-order adjoint. The minimum eigenvalues of the Hessian are obtained either by employing a shifted iteration method following Zou et al. (1992) or by using the Rayleigh quotient. The numerical results obtained illustrated the usefulness and applicability of this algorithm when the minimal Hessian eigenvalue is greater than or equal to the square of the data error dispersion, in which case the problem can be considered to be well posed (i.e., regularized). If the regularization fails, that is, if the minimal Hessian eigenvalue is less than the square of the data error dispersion of the problem, the next wavelet scale should be neglected and another iteration of the algorithm performed.
10. Summary and conclusions
The recent development of variational methods in operational meteorological centers (European Centre for Medium-Range Weather Forecasts, Météo-France) has demonstrated the strong potential of these methods.
Variational techniques require the development of powerful tools such as the adjoint model, which are useful for the adjustment of the inputs of the model (initial and/or boundary conditions). From the mathematical point of view the first-order adjoint will provide only necessary conditions for an optimal solution. The second-order analysis goes one step further and provides information that is essential for many applications:
- (i) Sensitivity analysis should be derived from a second-order analysis, that is, from the differentiation of the optimality system. This is made especially clear when sensitivity with respect to observations is required: in the analysis, observations appear only as a forcing term in the adjoint model; therefore, in order to estimate the impact of the observations, it is the optimality system that should be differentiated.
- (ii) Second-order information will improve the convergence of the optimization methods, which are the basic algorithmic components of variational analysis.
- (iii) The second-order system permits estimating the covariances of the fields. This information is essential for the estimation of the impact of errors on the prediction.
The numerical results obtained illustrate the ease with which present-day automatic differentiation packages allow one to obtain second-order adjoint models as well as Hessian/vector products. They also confirm numerically the CPU estimates for the computational complexity derived in the appendix (see also Griewank 2000).
Numerical calculation of the leading eigenvalues of the Hessian, along with its smallest eigenvalues, yields results similar to those obtained by Wang et al. (1998) and allows valuable insight into the Hessian spectrum, thus allowing us to deduce the important information related to the condition number of the Hessian and, hence, the expected rate of convergence of minimization algorithms.
With the advent of ever more powerful computers, the use of second-order information in data assimilation will be within realistic reach for 3D models and is expected to become more prevalent.
The purpose of this paper has been to demonstrate the importance of new developments in second-order analysis: many directions of research remain open in this domain.
The authors wish to thank two anonymous reviewers whose constructive comments led to a marked improvement in the presentation of the present paper.
The second author would like to acknowledge support from the NSF through Grant ATM-9731472, managed by Dr. Pamela Stephens, whom we would like to thank for her support.
IDOPT is a joint project of CNRS, INRIA, Université Joseph Fourier, and INPG.
REFERENCES
Abate, J., C. Bischof, A. Carle, and L. Roh, 1997: Algorithms and design for a second-order automatic differentiation module. Proc. Int. Symp. on Symbolic and Algebraic Computing (ISSAC) '97, Maui, HI, Association of Computing Machinery, 149–155.
Alekseev, K. A., and I. M. Navon, 2001: The analysis of an ill-posed problem using multiscale resolution and second order adjoint techniques. Comput. Methods Appl. Mech. Eng., 190, 1937–1953.
Alekseev, K. A., and I. M. Navon, 2002: On estimation of temperature uncertainty using the second order adjoint problem. Int. J. Comput. Fluid Dyn., in press.
Arian, E., and S. Ta'asan, 1999: Analysis of the Hessian for aerodynamic optimization: Inviscid flow. Comput. Fluids, 28, 853–877.
Averbukh, V. Z., S. Figueroa, and T. Schlick, 1994: Remark on Algorithm-566. ACM Trans. Math. Software, 20, 282–285.
Barkmeijer, J., M. van Gijzen, and F. Bouttier, 1998: Singular vectors and estimates of the analysis-error covariance metric. Quart. J. Roy. Meteor. Soc., 124A, 1695–1713.
Barkmeijer, J., R. Buizza, and T. N. Palmer, 1999: 3D-Var Hessian singular vectors and their potential use in the ECMWF Ensemble Prediction System. Quart. J. Roy. Meteor. Soc., 125B, 2333–2351.
Bischof, C. H., 1995: Automatic differentiation, tangent linear models, and (pseudo) adjoints. Proceedings of the Workshop on High-Performance Computing in the Geosciences, F.-X. Le Dimet, Ed., NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences, Vol. 462, Kluwer Academic, 59–80.
Burger, J., J. L. Brizaut, and M. Pogu, 1992: Comparison of two methods for the calculation of the gradient and of the Hessian of the cost functions associated with differential systems. Math. Comput. Simul., 34, 551–562.
Coleman, T. F., and J. J. More, 1984: Estimation of sparse Hessian matrices and graph-coloring problems. Math. Program., 28, 243–270.
Coleman, T. F., and J. Y. Cai, 1986: The cyclic coloring problem and estimation of sparse Hessian matrices. SIAM J. Algebra Discrete Math., 7, 221–235.
Coleman, T. F., and J. Y. Cai, 1985a: Fortran subroutines for estimating sparse Hessian matrices. ACM Trans. Math. Software, 11 (4), 378.
Coleman, T. F., B. S. Garbow, and J. J. More, 1985b: Software for estimating sparse Hessian matrices. ACM Trans. Math. Software, 11 (4), 363–377.
Courtier, P., 1993: Introduction to numerical weather prediction data assimilation methods. Proc. ECMWF Seminar on Developments in the Use of Satellite Data in Numerical Weather Prediction, Reading, United Kingdom, ECMWF, 189–209.
Courtier, P., J-N. Thepaut, and A. Hollingsworth, 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. Quart. J. Roy. Meteor. Soc., 120, 1367–1388.
Davidon, W. C., 1991: Variable metric method for minimization. SIAM J. Optim., 1, 1–17.
Davidson, E. R., 1975: The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real symmetric matrices. J. Comput. Phys., 17, 87–94.
Dembo, R. S., and T. Steihaug, 1983: Truncated-Newton algorithms for large-scale unconstrained optimization. Math. Program., 26, 190–212.
Dembo, R. S., S. C. Eisenstat, and T. Steihaug, 1982: Inexact Newton methods. SIAM J. Numer. Anal., 19, 400–408.
Dixon, L. C. W., 1991: Use of automatic differentiation for calculating Hessians and Newton steps. Automatic Differentiation of Algorithms: Theory, Implementation, and Application, A. Griewank and G. F. Corliss, Eds., SIAM, 115–125.
Ehrendorfer, M., and F. Bouttier, 1998: An explicit low-resolution extended Kalman filter: Implementation and preliminary experimentation. ECMWF Tech. Memo. 259, 27 pp.
Fisher, M., 1998: Development of a simplified Kalman filter. ECMWF Tech. Memo. 260, 16 pp.
Fisher, M., and P. Courtier, 1995: Estimating the covariance matrices of analysis and forecast errors in variational data assimilation. ECMWF Tech. Memo. 220, 28 pp.
Forsythe, G. E., and E. G. Strauss, 1955: On best conditioned matrices. Proc. Amer. Math. Soc., 6, 340–345.
Gauthier, P., 1992: Chaos and quadri-dimensional data assimilation: A study based on the Lorenz model. Tellus, 44A, 2–17.
Gay, D. M., 1996: More AD of nonlinear AMPL models: Computing Hessian information and exploiting partial separability. Computational Differentiation: Techniques, Applications, and Tools, M. Berz et al., Eds., Proceedings in Applied Mathematics, Vol. 89, SIAM, 173–184.
Giering, R., and T. Kaminski, 1998a: Recipes for adjoint code construction. ACM Trans. Math. Software, 24 (4), 437–474.
Giering, R., and T. Kaminski, 1998b: Using TAMC to generate efficient adjoint code: Comparison of automatically generated code for evaluation of first and second order derivatives to hand written code from the Minpack-2 collection. Automatic Differentiation for Adjoint Code Generation, C. Faure, Ed., INRIA Research Rep. 3555, 31–37.
Gilbert, J. C., 1992: Automatic differentiation and iterative processes. Optim. Methods Software, 1, 13–21.
Gill, P. E., and W. Murray, 1979: Newton-type methods for unconstrained and linearly constrained optimization. Math. Program., 28, 311–350.
Gill, P. E., W. Murray, and M. H. Wright, 1981: Practical Optimization. Academic Press, 401 pp.
Grammeltvedt, A., 1969: A survey of finite-difference schemes for the primitive equations for a barotropic fluid. Mon. Wea. Rev., 97, 387–404.
Griewank, A., 1993: Some bounds on the complexity of gradients, Jacobians, and Hessians. Complexity in Nonlinear Optimization, P. M. Pardalos, Ed., World Scientific, 128–161.
Griewank, A., 2000: Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Frontiers in Applied Mathematics, Vol. 19, SIAM, 369 pp.
Griewank, A., 2001: Complexity of gradients, Jacobians and Hessians. Encyclopaedia of Optimization, C. A. Floudas and P. M. Pardalos, Eds., Vol. 1, Kluwer Academic, 290–300.
Griewank, A., and G. F. Corliss, Eds., 1991: Automatic Differentiation of Algorithms: Theory, Implementation, and Application. SIAM, 353 pp.
Hou, G. J-W., and J. Sheen, 1993: Numerical methods for second order shape sensitivity analysis with application to heat conduction problems. Int. J. Numer. Methods Eng., 36, 417–435.
Hovland, P., 1995: Using ADIFOR 1.0 to Compute Hessians. Center for Research on Parallel Computation Tech. Rep. CRPC-TR95540-S, Rice University, Houston, TX, 12 pp.
Ide, K., P. Courtier, M. Ghil, and A. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential and variational. J. Meteor. Soc. Japan, 75B, 71–79.
Jackson, R. H. F., and G. P. McCormick, 1988: Second order sensitivity analysis in factorable programming: Theory and applications. Math. Program., 41, 1–28.
Kalnay, E., S. K. Park, Z-X. Pu, and J. Gao, 2000: Application of the quasi-inverse method to data assimilation. Mon. Wea. Rev., 128, 864–875.
Le Dimet, F. X., and I. Charpentier, 1998: Méthodes de second ordre en assimilation de données. Équations aux Dérivées Partielles et Applications (Articles Dédiés à Jacques-Louis Lions), Gauthier-Villars, 623–640.
Le Dimet, F. X., H. E. Ngodock, B. Luong, and J. Verron, 1997: Sensitivity analysis in variational data assimilation. J. Meteor. Soc. Japan, 75B, 245–255.
Lehoucq, R. B., D. C. Sorensen, and C. Yang, 1998: ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. Software, Environments, and Tools, Vol. 6, SIAM, 160 pp.
Liu, D. C., and J. Nocedal, 1989: On the limited memory BFGS method for large scale minimization. Math. Program., 45, 503–528.
Moré, J. J., B. S. Garbow, and K. E. Hillstrom, 1981: Testing unconstrained optimization software. ACM Trans. Math. Software, 7, 17–41.
Nash, S. G., 1984a: Newton-type minimization via the Lanczos method. SIAM J. Numer. Anal., 21, 770–788.
Nash, S. G., 1984b: Truncated-Newton methods for large-scale function minimization. Applications of Nonlinear Programming to Optimization and Control, H. E. Rauch, Ed., Pergamon Press, 91–100.
Nash, S. G., 1984c: User's guide for TN/TNBC: Fortran routines for nonlinear optimization. Mathematical Sciences Dept. Tech. Rep. 307, The Johns Hopkins University, 17 pp.
Nash, S. G., 1984d: Solving nonlinear programming problems using truncated-Newton techniques. Numerical Optimization, P. T. Boggs, R. H. Byrd, and R. B. Schnabel, Eds., SIAM, 119–136.
Nash, S. G., 1985: Preconditioning of truncated-Newton methods. SIAM J. Sci. Stat. Comput., 6, 599–616.
Nash, S. G., 2000: A survey of truncated-Newton methods. J. Comput. Appl. Math., 124, 45–59.
Nash, S. G., and A. Sofer, 1989a: Block truncated-Newton methods for parallel optimization. Math. Program., 45, 529–546.
Nash, S. G., and A. Sofer, 1989b: A parallel line search for Newton type methods in computer science and statistics. Proc. 21st Symp. on the Interface: Computing Science and Statistics, Orlando, FL, American Statistical Association, 134–137.
Nash, S. G., and J. Nocedal, 1991: A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization. SIAM J. Optim., 1, 358–372.
Nash, S. G., and A. Sofer, 1996: Linear and Nonlinear Programming. McGraw-Hill, 692 pp.
Ngodock, H. E., 1996: Data assimilation and sensitivity analysis. Ph.D. thesis, University Joseph Fourier, Grenoble, France, 213 pp.
Nocedal, J., 1980: Updating quasi-Newton matrices with limited storage. Math. Comput., 35, 773–782.
Nocedal, J., and S. J. Wright, 1999: Numerical Optimization. Springer Verlag Series in Operations Research, 656 pp.
O'Leary, D. P., 1983: A discrete Newton algorithm for minimizing a function of many variables. Math. Program., 23, 20–23.
Powell, M. J. D., and P. L. Toint, 1979: Estimation of sparse Hessian matrices. SIAM J. Numer. Anal., 16, 1060–1074.
Pu, Z. X., and E. Kalnay, 1999: Targeting observations with the quasi-inverse linear and adjoint NCEP global models: Performance during FASTEX. Quart. J. Roy. Meteor. Soc., 125, 3329–3337.
Pu, Z. X., and Coauthors, 1997: Sensitivity of forecast errors to initial conditions with a quasi-inverse linear method. Mon. Wea. Rev., 125, 2479–2503.
Rabier, F., and P. Courtier, 1992: Four-dimensional assimilation in the presence of baroclinic instability. Quart. J. Roy. Meteor. Soc., 118, 649–672.
Rabier, F., and Coauthors, 1997: Recent experimentation on 4D-Var and first results from a simplified Kalman filter. ECMWF Tech. Memo. 240, 42 pp.
Reuther, J. J., 1996: Aerodynamic shape optimization using control theory. Ph.D. dissertation, University of California, Davis, 226 pp.
Santosa, F., and W. W. Symes, 1988: Computation of the Hessian for least-squares solutions of inverse problems of reflection seismology. Inverse Problems, 4, 211–233.
Santosa, F., and W. W. Symes, 1989: An Analysis of Least Squares Velocity Inversion. Geophysical Monogr., Vol. 4, Society of Exploration Geophysicists, 168 pp.
Schlick, T., and A. Fogelson, 1992a: TNPACK—A truncated Newton minimization package for large-scale problems: I. Algorithm and usage. ACM Trans. Math. Software, 18, 46–70.
Schlick, T., and A. Fogelson, 1992b: TNPACK—A truncated Newton minimization package for large-scale problems: II. Implementation examples. ACM Trans. Math. Software, 18, 71–111.
Sleijpen, G. L. G., and H. A. van der Vorst, 1996: A Jacobi–Davidson iteration method for linear eigenvalue problems. SIAM J. Matrix Anal. Appl., 17, 401–425.
Symes, W. W., 1990: Velocity inversion: A case study in infinite-dimensional optimization. Math. Program., 48, 71–102.
Symes, W. W., 1991: A differential semblance algorithm for the inverse problem of reflection seismology. Comput. Math. Appl., 22 (4/5), 147–178.
Symes, W. W., 1993: A differential semblance algorithm for the inversion of multioffset seismic reflection data. J. Geophys. Res., 98 (B2), 2061–2073.
Thacker, W. C., 1989: The role of the Hessian matrix in fitting models to measurements. J. Geophys. Res., 94, 6177–6196.
Thepaut, J-N., and P. Moll, 1990: Variational inversion of simulated TOVS radiances using the adjoint technique. Quart. J. Roy. Meteor. Soc., 116, 1425–1448.
Thepaut, J-N., and P. Courtier, 1991: Four-dimensional variational assimilation using the adjoint of a multilevel primitive equation model. Quart. J. Roy. Meteor. Soc., 117, 1225–1254.
Veerse, F., 1999: Variable-storage quasi-Newton operators as inverse forecast/analysis error covariance matrices in data assimilation. INRIA Tech. Rep. 3685, 28 pp.
Wang, Z., 1993: Variational data assimilation with 2-D shallow water equations and 3-D FSU global spectral models. Tech. Rep. FSU-SCRI-93T-149, The Florida State University, Tallahassee, FL, 235 pp.
Wang, Z., I. M. Navon, F. X. Le Dimet, and X. Zou, 1992: The second order adjoint analysis: Theory and application. Meteor. Atmos. Phys., 50, 3–20.
Wang, Z., I. M. Navon, and X. Zou, 1993: The adjoint truncated Newton algorithm for large-scale unconstrained optimization. Tech. Rep. FSU-SCRI-92-170, The Florida State University, Tallahassee, FL, 44 pp.
Wang, Z., I. M. Navon, X. Zou, and F. X. Le Dimet, 1995: A truncated-Newton optimization algorithm in meteorology applications with analytic Hessian/vector products. Comput. Optim. Appl., 4, 241–262.
Wang, Z., K. K. Droegemeier, L. White, and I. M. Navon, 1997: Application of a new adjoint Newton algorithm to the 3D ARPS storm-scale model using simulated data. Mon. Wea. Rev., 125, 2460–2478.
Wang, Z., K. K. Droegemeier, and L. White, 1998: The adjoint Newton algorithm for large-scale unconstrained optimization in meteorology applications. Comput. Optim. Appl., 10, 283–320.
Yang, W., I. M. Navon, and P. Courtier, 1996: A new Hessian preconditioning method applied to variational data assimilation experiments using an adiabatic version of NASA/GEOS-1 GCM. Mon. Wea. Rev., 124, 1000–1017.
Zou, X., I. M. Navon, F. X. Le Dimet, A. Nouailler, and T. Schlick, 1990: A comparison of efficient large-scale minimization algorithms for optimal control applications in meteorology. Tech. Rep. FSU-SCRI-90-167, The Florida State University, Tallahassee, FL, 44 pp.
Zou, X., I. M. Navon, and F. X. Le Dimet, 1992: Incomplete observations and control of gravity waves in variational data assimilation. Tellus, 44A, 273–296.
Zou, X., I. M. Navon, M. Berger, P. K. H. Phua, T. Schlick, and F. X. Le Dimet, 1993: Numerical experience with limited-memory quasi-Newton methods and truncated Newton methods. SIAM J. Optim., 3, 582–608.
APPENDIX
Computational Complexity of AD Calculation of the Second-Order Adjoint

Here w is a vector of positive weights that depend on the computing system and represent the number of clock cycles needed for fetching and/or storing data items, multiplication, addition, and finally for taking into account nonlinear operations.

As mentioned by Nocedal and Wright (1999), automatic differentiation has increasingly been using more sophisticated techniques that allow, when used in reverse mode, the calculation of either full Hessians or Hessian/vector products. However, the automatic differentiation technique should not be regarded as a fail-safe product that substitutes for the user's judgment, and each derivative calculation obtained with AD should be carefully assessed.
Gay (1996) has shown how to use partial separability of the Hessian in AD while Powell and Toint (1979) and Coleman and More (1984), along with Coleman and Cai (1986), have shown how to estimate a sparse Hessian using either graph-coloring techniques or other highly effective schemes.
Software for the estimation of sparse Hessians is available in the work of Coleman et al. (1985a,b). See also the work of Dixon (1991) and the general presentation of Gilbert (1992).
Averbukh et al. (1994) supplemented the work of Moré et al. (1981), which provides function and gradient subroutines of 18 test functions for multivariate minimization. Their supplementary Hessian segments enable users to test optimization software that requires second derivative information.

Fig. 1. The absolute (dashed line) and relative (solid line) differences between the Hessian/vector product computed with the SOA method and with the finite-difference method at the first iteration (initial guess state). The first 100 components are considered.

Fig. 2. The evolution of the normalized cost function during the minimization using the SOA method (solid line) and the finite-difference method (dashed line) to compute the Hessian/vector product.

Fig. 3. The evolution of the normalized gradient norm during the minimization using the SOA method (solid line) and the finite-difference method (dashed line) to compute the Hessian/vector product.
Table 1. First five largest and smallest computed Ritz values of the Hessian matrix and the corresponding relative residuals. The Hessian is evaluated at the initial guess point.

Table 2. First five largest and smallest computed Ritz values of the Hessian matrix and the corresponding relative residuals. The Hessian is evaluated at the computed optimal point.
