1. Introduction
Spatial analysis of observations, also called gridding, is a common task in oceanography and meteorology, and a variety of methods and implementations exist and are widely used. Here, Nd data points with values di, i = 1, …, Nd, at locations (xi, yi) are generally distributed unevenly in space. Furthermore, the values di are affected by observational errors, including representativity errors. From this dataset an analysis on a regular grid is often desired. It was quickly recognized that it is natural to define the best analysis as the one with the lowest expected error. This definition has led to kriging and optimal interpolation (OI) methods (e.g., Gandin 1965; Delhomme 1978; Bretherton et al. 1976) and, in the context of forecast models, to the Kalman–Bucy filter and data assimilation with adjoint models (e.g., Lorenc 1986).
These methods assume that statistics on observational errors and the spatial covariance of the field to be analyzed are available to infer the “best” analysis field. As these methods aim at minimizing the analysis error, it is not a surprise that they also provide the theoretical a posteriori error field for the analysis. The practical implementation of these methods can lead to very different performance, in particular when the error fields must be calculated (e.g., Bouttier and Courtier 2002).


This formulation is discretized on a finite-element mesh covering the domain with triangles. Each triangle is in fact subdivided into three subtriangles, on each of which the solution is expanded as a cubic polynomial. This rich representation provides a sufficient degree of continuity so that the functional is well defined. The unknowns are then the coefficients of the polynomials or, in the finite-element vocabulary, the connectors. The functional is a quadratic function of these connectors, and the minimization leads to a linear system to be solved for them. In the present implementation, this solution is obtained by a direct skyline solver exploiting the banded structure of the matrix to be inverted. For larger problems the recent DIVA version also allows an iterative solution of this sparse linear system with preconditioning.




















In DIVA or 3DVAR, matrices
For the linear observation operators used here, DIVA, 3DVAR, and OI provide the same results (under the hypotheses mentioned above), but the computational aspects are quite different, in particular when it comes to the error calculations.
For 3DVAR implementations, the calculation of the a posteriori error covariance requires the computation of the inverse Hessian matrix, whereas the analysis itself only uses gradient calculations (e.g., Rabier and Courtier 1992). To some extent, the need to calculate the full Hessian matrix can be circumvented by the use of Lanczos vectors of the conjugate gradient approach (e.g., Moore et al. 2011, in the context of 4DVAR). However, the larger number of Lanczos vectors required to provide an accurate estimate of the Hessian matrix then defeats the purpose of the conjugate gradient approach, which is to use as few iterations as possible. More recently, with approaches specifying the background covariance matrices by an ensemble (e.g., Hamill and Snyder 2000), error calculations can use the equivalence with OI to exploit the reduced rank of the covariance matrix.
For OI, at each point where the analysis is needed, an additional analysis of the covariances between that point and the data locations is required for the a posteriori error calculation. This can lead to very high computational costs unless reduced-rank approaches are possible (e.g., Kaplan et al. 2000; Beckers et al. 2006) or localization is used (e.g., Reynolds and Smith 1994). In the latter case, the error field can be calculated at the same time as the local analysis at almost no additional cost. It also has the advantage of allowing a highly parallel approach.
For DIVA, several problems exist: 1) neither covariance functions nor background matrices are explicitly formulated, so that error calculations have only been made possible by exploiting the equivalence with OI and the discovery of a quick method to numerically calculate covariance functions on the fly (Troupin et al. 2012); 2) the computational burden is still high, as an analysis must be performed in each of the N points where the error is requested; and 3) localization could only be exploited at the inversion step of the finite-element formulation, by exploiting the banded structure of the matrix to calculate the value of a connector. This has not been implemented, as it would lead to suboptimal solutions and in any case would not allow the error calculation in parallel with the analysis (as in OI implementations), because the error field is not formulated in terms of connectors.
So, several methods face high computational costs to retrieve error fields. Because covariances are generally estimated from data (e.g., Emery and Thomson 2001) and are not perfectly specified, we expect that error fields derived from the theoretical models are not “true” error fields in any case. It can therefore be considered computational overkill to calculate errors with the full theoretical formulation at all locations, and some relaxation can be accepted.
The present paper presents two “error calculations” in section 2 that, to various degrees, mimic the “exact” error field at reduced cost. The methods are illustrated in section 3 with the 2D version of DIVA, and generalizations to the other cases mentioned in the introduction are discussed in section 4.
2. Approximations for error analysis at reduced costs
The direct formulations for error covariances are rarely applied because matrices are too large and/or covariance matrices are not explicitly formulated. Alternative ways to get information on the analysis error are desirable.




a. Clever poor man’s error
If we replace the covariances to be analyzed by a vector whose elements are all equal to a constant background variance, then we generally overestimate the error reduction, but the computational gain is huge, because the same analysis is valid for ALL of the N points at which we want to calculate the error. Instead of N backward substitutions or iterative solutions, we only need one additional analysis to add an “error field” to the analysis. This was already implemented (Troupin et al. 2010) and was called the poor man’s error. In reality we can do better, for a similar cost, by looking at the situation of an isolated data point and focusing on the error reduction (8).




We see that, when applying the idea of the poor man’s error (putting d = 1 into the analysis), (9) bears some resemblance to the actual nondimensional error reduction (10), but it overestimates the error reduction since it uses c instead of c².
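Since (8)–(10) are not reproduced here, the following display is a hedged sketch of that comparison for a single isolated data point, written in terms of a background variance σ², a signal-to-noise ratio λ, and the correlation function c(r/L); the prefactors follow the standard single-point OI result rather than being a verbatim copy of the paper's equations.
\[
\underbrace{\frac{\lambda}{1+\lambda}\,c\!\left(\frac{r}{L}\right)}_{\text{poor man's analysis of }d=1}
\;\;\text{vs.}\;\;
\underbrace{\frac{\lambda}{1+\lambda}\,c^{2}\!\left(\frac{r}{L}\right)}_{\text{exact error reduction}},
\qquad
\text{clever poor man's error: choose }L'\text{ such that } c\!\left(\frac{r}{L'}\right)\approx c^{2}\!\left(\frac{r}{L}\right).
\]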







Fig. 1. DIVA correlation function (the kernel of the DIVA functional) in an infinite domain as a function of r/L (thin line). The squared correlation function, which leads to the exact error reduction for one data point (thick line), shows how strongly the poor man’s error using the thin line overestimates the error reduction. Adapting the correlation length scale to c(r/L′) (dashed line) in the poor man’s error (the clever poor man’s error) shows how the exact squared correlation can be mimicked (compare the thick and dashed lines).
If we have several data points separated by distances much larger than the correlation length scale, the presence of the other data points does not influence the analysis or the error field around a given data point. Hence the poor man’s error calculation, with all data values replaced by one and the correlation length scale adapted, provides the error reduction term on the full grid with a single analysis.
For regions with higher data coverage, the method provides an overly optimistic view of the error, but it can easily be used to mask gridded regions far away from the data (see the errors on the mask on a 101 × 101 grid in Fig. 2).

Fig. 2. Test case with a single data point in the center of the domain. The error standard deviation is shown for the different methods. (top left) A section along y = 0. The title of each 2D plot identifies the method and includes two indicators of the quality of the error field: the first number is the relative error on the error field as a percentage, where the reference field is real covariance when the error is scaled by the local background variance; when boundary effects are taken into account, the reference solution is real covariance bnd. The second indicator gives the number of grid points where the mask derived from the error field differs from the exact one. White crosses indicate real data locations, and black dots indicate pseudodata locations.
The recipe for the method, which we call clever poor man’s error, is thus straightforward: adapt the correlation length scale and then apply the analysis tool to a data vector with unit values to retrieve the complete error reduction field in a single analysis step.
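As a concrete illustration, the adapted length scale can be found numerically by matching the kernel to its own square. The sketch below assumes the 2D infinite-domain DIVA kernel c(ρ) = ρK₁(ρ) (as documented in Troupin et al. 2012); it is an independent illustration, not part of the DIVA code itself, and any other correlation model can be substituted.

```python
# Hedged sketch: find the adapted length scale L' such that c(r/L')
# approximates c(r/L)**2, assuming the 2D infinite-domain kernel
# c(rho) = rho * K1(rho).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import kv


def c(rho):
    """Correlation kernel, normalized so that c(0) = 1."""
    rho = np.asarray(rho, dtype=float)
    return np.where(rho > 0.0, rho * kv(1, np.maximum(rho, 1e-12)), 1.0)


L = 1.0                              # original correlation length
r = np.linspace(1e-3, 4.0 * L, 400)  # radii over which the two curves are matched


def misfit(L_prime):
    # least-squares distance between c(r/L') and the squared kernel c(r/L)**2
    return np.sum((c(r / L_prime) - c(r / L) ** 2) ** 2)


result = minimize_scalar(misfit, bounds=(0.1 * L, L), method="bounded")
print(f"adapted length scale L' ~ {result.x:.3f} * L")
```

The same fit can be repeated for any other analysis tool whose single-point covariance function is known, either analytically or from an analysis of one data point.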
b. Almost exact error fields








For this use, and also because Aii is needed in cross-validation techniques (e.g., Wahba and Wendelberger 1980; Brankart and Brasseur 1996), the calculation of Aii (via an analysis of a data vector with zeros everywhere except at data point i) has been optimized for DIVA and is accessible at reasonable cost (Troupin et al. 2013, manuscript submitted to Geosci. Model Dev. Discuss.). This means we can calculate the error estimates at the data locations, which leaves only one problem: how do we calculate the error at other locations?
An easy way to achieve this is to add a pseudodata point with a huge virtual observational error at any location where the error has to be calculated. For DIVA, this high observational error translates into a very small data weight μi (1) that numerically does not cause any problem in the data analysis step. It is then easy to calculate the error at any location. However, this would still be costly if done everywhere, as Aii needs to be calculated at each pseudodata location without the benefit of outlier detection or cross validation (as we know that the data are not real). We should therefore limit the number of additional pseudodata points and still be able to calculate the error everywhere. In fact, we can consider this again as a gridding problem: knowing the error “exactly” at a series of points, what is the value of the error field at other locations? We can thus use the gridding tool itself, where the “observations” are the calculated errors and the “observational” error is zero, so that the signal-to-noise ratio is infinite (or simply very large in the numerical code). It remains to specify the correlation length scale for gridding the error field, but as shown in the analysis of the clever poor man’s error, a good choice is the adapted length scale L′ (13). Furthermore, it is easy to define the background field if we grid the error reduction: the “data” locations are the places where we know the error exactly; at other locations we have no data and the background error reduction is simply zero. Finally, because of the influence of data over a correlation length distance, it seems reasonable to add randomly α²D/L² pseudodata over the surface D, where α ~ 1 defines the precision with which we want the error field.
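To make the recipe explicit, here is a minimal sketch of the procedure. The helpers `error_reduction_at` and `grid_analysis` are hypothetical stand-ins for, respectively, the optimized Aii-based computation of the exact scaled error reduction at one location (with pseudodata entering at negligible weight) and an ordinary DIVA-style gridding call; they are not the actual DIVA interface.

```python
# Hedged sketch of the "almost exact" error recipe.
import numpy as np


def almost_exact_error(data_locs, domain_bbox, domain_area, L, L_prime,
                       error_reduction_at, grid_analysis, grid,
                       alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    (xmin, ymin), (xmax, ymax) = domain_bbox

    # 1. add roughly alpha**2 * D / L**2 random pseudodata over the domain,
    #    so that every patch of one correlation length contains a point
    n_pseudo = int(np.ceil(alpha**2 * domain_area / L**2))
    pseudo = np.column_stack((rng.uniform(xmin, xmax, n_pseudo),
                              rng.uniform(ymin, ymax, n_pseudo)))

    # 2. "exact" error reduction at the real data and pseudodata locations
    locations = np.vstack((np.asarray(data_locs), pseudo))
    reduction = np.array([error_reduction_at(p) for p in locations])

    # 3. grid these error-reduction "observations" with the adapted length
    #    scale L', a zero background, and a numerically infinite
    #    signal-to-noise ratio
    reduction_grid = grid_analysis(locations, reduction,
                                   length_scale=L_prime,
                                   signal_to_noise=1.0e6,
                                   targets=grid)

    # 4. scaled error variance: 1 - error reduction (multiply by the local
    #    background variance if a dimensional field is wanted)
    return 1.0 - reduction_grid
```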
For completeness, a discussion of the background covariance is needed. Up to now we have scaled the error reduction by σ², the overall background variance. However, with DIVA, the background covariance varies spatially and increases near boundaries because of the variational formulation (Troupin et al. 2012). So, the local background variance at location (x, y) has a value of
A final comment concerns the number of data points and the cost of calculating Aii for each of them: generally the number of data points is much lower than the number of grid points, so the computational burden of calculating these coefficients remains reasonable compared to that of a full error calculation. Should there be a very large number of observations, there is no problem in restricting the error calculation to a subset of the data points, since together with the pseudodata points a good coverage of the grid is easily achieved.
3. Test cases
To diagnose the quality of the error estimates, we will provide three indicators: a graphical representation and two numbers. The first metric is simply the relative error on the error field (the root-mean-square of the difference in error variances between the true error field and the approximate one, compared to the true error variance). The second checks how well the error field can be used to mask regions with insufficient data coverage. Typically, when the error variance of the analysis is larger than 50% of the background variance, the data did not provide a significant amount of information and the analysis can be masked. We can then compare the masks derived from the exact error and the approximate one and count how many grid points do not have the same mask.
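In code form, the two numerical indicators could be computed as follows; this is a minimal sketch assuming both error-variance fields are given on the same grid and scaled by the background variance, with the relative error normalized by the rms of the true error variance (one reasonable reading of the description above).

```python
# Minimal sketch of the two quality indicators.
import numpy as np


def error_field_indicators(err_var_exact, err_var_approx, mask_threshold=0.5):
    # 1. relative error on the error field: rms difference of the error
    #    variances, compared to the rms of the true error variance
    rms_diff = np.sqrt(np.mean((err_var_approx - err_var_exact) ** 2))
    rms_true = np.sqrt(np.mean(err_var_exact ** 2))
    relative_error_percent = 100.0 * rms_diff / rms_true

    # 2. mask mismatch: points masked (error variance above the threshold)
    #    by one field but not by the other
    mask_misses = int(np.sum((err_var_exact > mask_threshold)
                             != (err_var_approx > mask_threshold)))

    return relative_error_percent, mask_misses
```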
a. A single data point
This case simply serves to check that the analysis presented above is valid and to see how the different methods compare in the situation of a single data point in the center of the domain, with a unit signal-to-noise ratio and a unit background variance. In this case, the error variance at the origin is 0.5 and the standard deviation shown in Fig. 2 is 0.707. The gridded field has 101 × 101 grid points, against which the number of mask misses can be compared.
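For reference, this value is consistent with the standard single-point optimal interpolation result, with λ the signal-to-noise ratio and σ² = 1 the background variance:
\[
e^{2}(0) \;=\; \sigma^{2}\!\left(1-\frac{\lambda}{1+\lambda}\right) \;=\; \frac{\sigma^{2}}{1+\lambda}
\;=\; \frac{1}{1+1} \;=\; 0.5,
\qquad
e(0) \;=\; \sqrt{0.5} \;\approx\; 0.707 .
\]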
For all errors calculated without taking into account the boundary effects, visual inspection shows that the hybrid, the clever poor man’s error, and the almost exact error approach are indistinguishable from the exact solution. Only the poor man’s error is significantly different, as expected. Quantitatively, the relative errors on the error fields are less than one percent and no mask errors occur, except again for the poor man’s error. The hybrid error estimate is very close to the exact one using real covariances. The slight difference arises because the analytical covariance function (12) is that of an infinite domain, whereas the computational domain used here is finite. When boundary effects are taken into account, we observe the highest errors near the boundary [see Troupin et al. (2012) for details and explanations]. But again, the approximate fields are of excellent quality, though with a higher rms error because of the stronger spatial variability of the error field. To capture this variability better, we can increase the number of pseudodata by increasing α. Indeed, with a value of α = 3 (Fig. 3) the quality increases, whereas decreasing the value of α still provides acceptable results, and the error mask remains excellent in this case.

Fig. 3. Error fields for a single point in the center with (top) fine sampling α = 3 of pseudodata and (bottom) coarse sampling α = 0.3. White crosses indicate real data locations, and black dots pseudodata locations.
The computational time is not shown here, as a single data point is rarely encountered in practice; the CPU time of the present case is similar to that of section 3c (see Table 1).
Table 1. CPU time for the test case with 150 data points distributed randomly in part of the domain (schematic case) and for a realistic case in the Mediterranean Sea.

b. Aligned data points
A slightly more complicated situation is one where 10 points are aligned in y = 0 for x ≥ 0 as shown in Fig. 4.

Fig. 4. Error fields for 10 data points in y = 0, x ≥ 0. White crosses indicate real data locations, and black dots pseudodata locations.
The poor man’s error is now clearly too optimistic, even at the data locations, because the presence of the other data points makes it overestimate the error reduction at each data point. The clever poor man’s estimate clearly reduces the problem, but the hybrid and the almost exact error outperform it. We also see that the hybrid method degrades near data points close to the boundary, as is to be expected.
c. Points in part of the domain only
The same conclusions as in the previous case hold if we now place 150 data points in the top-right part of the domain (Fig. 5). The clever poor man’s error improves on the poor man’s error, but the hybrid and the almost exact error perform better, with the almost exact error version again the best approximate method. When boundary effects are included, capturing the error field near the boundary is more problematic, but the error field and mask are still of good quality.

Fig. 5. Error fields for 150 random points in one quadrant. White crosses indicate real data locations, and black dots pseudodata locations.
Up to now we have only compared the quality of the fields, but we can also compare the computational load. As seen in Table 1, the most expensive methods are those calculating the exact field (with scaled or unscaled background variances). The hybrid method consumes less time because it does not need the calculation of a covariance function by another DIVA calculation but can use an analytical function instead. However, the hybrid method is still one order of magnitude more expensive than the almost exact error version, yet the latter provides error estimates of similar or better quality. Finally, the poor man’s error calculations are clearly the fastest and therefore interesting for exploratory work.
d. Realistic test case
We finally test the methods with the same dataset as the one used in Troupin et al. (2012) so that we can use the same statistical parameters and do not need to recalibrate the analysis. We use salinity measurements in the Mediterranean Sea at a depth of 30 m in July, for the 1980–90 period and reconstruct the solution on a high-resolution output grid with 500 × 250 grid points.
The analysis itself (Fig. 6) shows well-known features such as the inflow of Atlantic waters at Gibraltar; the anticyclonic gyres in the Alboran Sea; the spreading of the Atlantic waters off the North African coast; the high salinities of the eastern Levantine Basin; a signature of Black Sea waters in the Aegean Sea; and the high salinity in the northern part of the western Mediterranean. The influence of the Po River in the northern Adriatic is also visible. The analysis itself is calculated within a few seconds, and we now focus on the computationally more expensive error fields.

Fig. 6. Analysis of salinity measurements in the Mediterranean Sea at a depth of 30 m in July for the 1980–90 period.
The error fields are scaled by the global background variance; white crosses indicate real data locations, and black dots indicate pseudodata locations. The real error field (top panel of Fig. 7) shows the effect of low data coverage in the southern parts and the lower errors near data locations. As before, the poor man’s error is quite optimistic and quantitatively not reliable. The mask derived from the poor man’s error, with only 43 incorrectly masked points, has some skill, but the clever poor man’s error provides more acceptable quantitative results and masks. In particular, the regions void of data in the southern part and around Sardinia are now captured. The hybrid method and the almost exact approach (Fig. 8) have similar metrics, but looking at the details, the “almost exact” error field clearly better resolves features such as the higher errors around Sardinia and in the eastern Tyrrhenian Sea. The error structure in the Alboran Sea is also better recovered, despite the very low number of pseudodata (black dots) used.

Fig. 7. Real error field, poor man’s error, and clever poor man’s error.

Fig. 8. Hybrid and almost exact approach.
For the error fields with boundary effects (Fig. 9), the high pseudodata coverage along the coast makes it possible to capture the variable background variance, but because of the fine mesh along the coast, probably too many pseudodata have been added there. This results in excellent metrics, with only four incorrectly masked points and only a 1% error on the error field. The relatively large number of pseudodata is then also reflected in the CPU time. Even with this coverage, however, the computational gain of a factor of 11 compared to the exact calculation is still significant. Comparing CPU times in this realistic case shows without doubt the usefulness of the new approaches (Table 1), which have been included in the DIVA tool (http://modb.oce.ulg.ac.be/mediawiki/index.php/DIVA). Indeed, climatology production generally requires gridding at several levels, months, or seasons for several parameters, so that computational efficiency matters even in the 2D case. When it comes to generalizations of our methods to 3DVAR or OI in several dimensions, the expected gain may be even more interesting, as we show now.

Fig. 9. Real error with boundary effects and almost exact approach.







Normally, the numerical grids have a grid spacing that is much smaller than the physical length scales, and the last term therefore favors very high efficiency. For a forecast model, the numerical grid is typically recommended to be about 8 times finer than the scales of interest. With only a few data points, we then reach gains of one to two orders of magnitude in 2D and almost three orders of magnitude in 3D. The gain decreases if the number of observations is high and allows for capturing the degrees of freedom of the system. If the number of observations is much larger than
4. Generalizations
We have presented our ideas in the framework of DIVA with a diagonal observational error covariance matrix and will now analyze how the methods can be applied in other frameworks.
One problem that can be encountered is therefore a nondiagonal observational error covariance matrix







The presence of a nondiagonal




Still other background covariance specifications rely on projections onto empirical orthogonal functions (EOFs). Such EOF decompositions are to some extent similar to spectral decompositions, but the basis functions are calculated from the data instead of being defined by analytical functions given a priori. The equivalent of a spectral density such as (22) is captured in the singular values of the singular value decomposition (SVD) leading to the EOFs. These coefficients, or singular values, can therefore be tampered with when a change in correlation length scale is to be obtained.
There are thus several possibilities to change the correlation function of the analysis tool so that it mimics its own square. In complicated implementations, the approach should, of course, be tested, and possibly calibrated, by looking at the covariance function generated by an analysis with a single data point and comparing it to the one obtained with the tampered version. For the tampered version, one should retrieve a correlation function that is close to the square of the original one.
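As a simple consistency check of what such tampering should achieve, consider a Gaussian correlation model (used here purely as an illustration, not as the DIVA kernel): squaring the correlation is then exactly equivalent to reducing the length scale by a factor of √2, since multiplying correlations in physical space corresponds to convolving their spectral densities.
\[
c(r) = e^{-r^{2}/L^{2}}
\quad\Longrightarrow\quad
c^{2}(r) = e^{-2r^{2}/L^{2}} = e^{-r^{2}/(L/\sqrt{2})^{2}},
\qquad\text{i.e., } L' = \frac{L}{\sqrt{2}} .
\]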
We see that there are many ways to adapt the length scale or correlations for the clever poor man’s error calculation and for the final gridding step of the almost exact error approach. Should this adaptation be difficult or inefficient, the almost exact error approach can still be applied by covering the domain with more pseudodata and performing the final gridding step with the original covariances or a simpler gridding tool. Indeed, the error is already calculated exactly at a fine resolution, so that any gridding method, even with a poorly specified correlation structure, should work fine when applied to these exact values of the error. This is, however, at the expense of more analyses to obtain the exact error at more locations.
To illustrate these ideas on an example, we can look at a typical 3DVAR approach used in operational mode, using the so-called National Meteorological Center (NMC) method (e.g., Parrish and Derber 1992; Fisher 2003), presented here, assuming we are still working with anomalies with respect to the background field.

























It is now clear that the adaptations to change the correlations are quite localized and therefore it should be possible to implement the poor man’s error and the almost exact error calculations in operational 3DVAR implementations. We can finally note that in the NMC version, the parameters involved in
5. Conclusions
The preparation of error fields is generally much more expensive than the preparation of an analysis. We proposed two new ideas that provide practical and economical ways to obtain such error fields. The first method only needs a second analysis with a modified correlation length scale and is particularly well suited for exploratory analysis or for masking gridded fields in regions insufficiently covered by data [such as is done in the web version (Barth et al. 2010) or within Ocean Data View (ODV; Schlitzer 2013)]. The second method, on the other hand, can be used in cases where sufficient confidence in the covariance matrices justifies the use of the full error calculation. In this case, the new method drastically reduces the computational burden without sacrificing the quality of the error field. It is particularly useful when employed in parallel with outlier detection methods and cross validation, as the same computations can be reused.
We illustrated the approach using the specific analysis tool DIVA, but also paved the way for generalizations to a variety of situations in which background covariances are formulated differently or the observational error covariance matrix is nondiagonal. The ideas presented here can therefore be implemented in various versions of analysis tools.
In particular, we detailed how both methods can be adapted to 3DVAR approaches used in operational systems. They could then provide an alternative to the Lanczos vector–based estimates of the Hessian matrix. The new approach is particularly interesting if the background covariance is factorized or very efficient preconditioning is applied, so that the several minimizations needed to obtain error estimates at selected locations can be tackled.
Concerning future work in the context of DIVA, in the present paper we limited ourselves to the implementation of the case of uncorrelated observational error—that is, a diagonal
DIVA has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under Grant Agreement 283607, SeaDataNet 2, and from project EMODNET (MARE/2008/03–Lot 3 Chemistry–SI2.531432) from the Directorate-General for Maritime Affairs and Fisheries. This research was also supported by the SANGOMA Project (European FP7-SPACE-2011 project, Grant 283580). The F.R.S.–FNRS is acknowledged for providing supercomputing access. This is a MARE publication.
REFERENCES
Barth, A., Alvera-Azcárate A., Troupin C., Ouberdous M., and Beckers J.-M., 2010: A web interface for gridding arbitrarily distributed in situ data based on Data-Interpolating Variational Analysis (DIVA). Adv. Geosci., 28, 29–37, doi:10.5194/adgeo-28-29-2010.
Barth, A., Beckers J.-M., Troupin C., Alvera-Azcárate A., and Vandenbulcke L., 2013: Divand-1.0: n-dimensional variational data analysis for ocean observations. Geosci. Model Dev. Discuss., 6, 4009–4051, doi:10.5194/gmdd-6-4009-2013.
Beckers, J.-M., Barth A., and Alvera-Azcárate A., 2006: DINEOF reconstruction of clouded images including error maps—Application to the sea-surface temperature around Corsican Island. Ocean Sci., 2, 183–199, doi:10.5194/os-2-183-2006.
Bekas, C., Kokiopoulou E., and Saad Y., 2007: An estimator for the diagonal of a matrix. Appl. Numer. Math., 57, 1214–1229.
Bouttier, F., and Courtier P., 2002: Data assimilation concepts and methods March 1999. Meteorological Training Course Lecture Series, ECMWF, 59 pp. [Available online at http://www.ecmwf.int/newsevents/training/lecture_notes/pdf_files/ASSIM/Ass_cons.pdf.]
Brankart, J.-M., and Brasseur P., 1996: Optimal analysis of in situ data in the western Mediterranean using statistics and cross-validation. J. Atmos. Oceanic Technol., 13, 477–491.
Brankart, J.-M., Ubelmann C., Testut C.-E., Cosme E., Brasseur P., and Verron J., 2009: Efficient parameterization of the observation error covariance matrix for square root or ensemble Kalman filters: Application to ocean altimetry. Mon. Wea. Rev., 137, 1908–1927.
Brasseur, P., 1994: Reconstruction de champs d’observations océanographiques par le modèle variationnel inverse: Méthodologie et applications. Ph.D. thesis, University of Liège, 262 pp.
Brasseur, P., Beckers J.-M., Brankart J.-M., and Schoenauen R., 1996: Seasonal temperature and salinity fields in the Mediterranean Sea: Climatological analyses of a historical data set. Deep-Sea Res. I, 43, 159–192, doi:10.1016/0967-0637(96)00012-X.
Bretherton, F. P., Davis R. E., and Fandry C., 1976: A technique for objective analysis and design of oceanographic instruments applied to MODE-73. Deep-Sea Res., 23, 559–582, doi:10.1016/0011-7471(76)90001-2.
Courtier, P., Thépaut J.-N., and Hollingsworth A., 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. Quart. J. Roy. Meteor. Soc., 120, 1367–1387, doi:10.1002/qj.49712051912.
Delhomme, J. P., 1978: Kriging in the hydrosciences. Adv. Water Resour., 1, 251–266, doi:10.1016/0309-1708(78)90039-8.
Emery, W. J., and Thomson R. E., 2001: Data Analysis Methods in Physical Oceanography. 2nd ed. Elsevier, 654 pp.
Fischer, C., Montmerle T., Berre L., Auger L., and Ştefănescu S. E., 2005: An overview of the variational assimilation in the ALADIN/France numerical weather-prediction system. Quart. J. Roy. Meteor. Soc., 131, 3477–3492, doi:10.1256/qj.05.115.
Fisher, M., 2003: Background error covariance modelling. Seminar on Recent Development in Data Assimilation for Atmosphere and Ocean, Reading, United Kingdom, ECMWF, 45–63. [Available online at ftp://beryl.cerfacs.fr/pub/globc/exchanges/daget/DOCS/sem2003_fisher.pdf.]
Gandin, L. S., 1965: Objective Analysis of Meteorological Fields. Israel Program for Scientific Translations, 242 pp.
Girard, D. A., 1998: Asymptotic comparison of (partial) cross-validation, GCV and randomized GCV in nonparametric regression. Ann. Stat., 26, 315–334.
Hamill, T. M., and Snyder C., 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.
Hayden, C. M., and Purser R. J., 1995: Recursive filter objective analysis of meteorological fields: Applications to NESDIS operational processing. J. Appl. Meteor., 34, 3–15.
Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 341 pp.
Kaplan, A., Kushnir Y., and Cane M. A., 2000: Reduced space optimal interpolation of historical marine sea level pressure: 1854–1992. J. Climate, 13, 2987–3002.
Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 112, 1177–1194, doi:10.1002/qj.49711247414.
McIntosh, P. C., 1990: Oceanographic data interpolation: Objective analysis and splines. J. Geophys. Res., 95 (C8), 13 529–13 541.
Moore, A. M., Arango H. G., Broquet G., Powell B. S., Weaver A. T., and Zavala-Garay J., 2011: The Regional Ocean Modeling System (ROMS) 4-dimensional variational data assimilation systems: Part I—System overview and formulation. Prog. Oceanogr., 91, 34–49, doi:10.1016/j.pocean.2011.05.004.
Parrish, D., and Derber J., 1992: The National Meteorological Center’s spectral statistical interpolation analysis system. Mon. Wea. Rev., 120, 1747–1763.
Rabier, F., and Courtier P., 1992: Four-dimensional assimilation in the presence of baroclinic instability. Quart. J. Roy. Meteor. Soc., 118, 649–672, doi:10.1002/qj.49711850604.
Reynolds, R. W., and Smith T. M., 1994: Improved global sea surface temperature analyses using optimum interpolation. J. Climate, 7, 929–948.
Rixen, M., Beckers J.-M., Brankart J.-M., and Brasseur P., 2000: A numerically efficient data analysis method with error map generation. Ocean Modell., 2, 45–60, doi:10.1016/S1463-5003(00)00009-3.
Schlitzer, R., cited 2013: Ocean Data View. [Available online at http://odv.awi.de.]
Seaman, R. S., and Hutchinson M., 1985: Comparative real data tests of some objective analysis methods by withholding observations. Aust. Meteor. Mag., 33, 37–46.
Tang, J. M., and Saad Y., 2012: A probing method for computing the diagonal of a matrix inverse. Numer. Linear Algebra Appl., 19, 485–501.
Troupin, C., Machín F., Ouberdous M., Sirjacobs D., Barth A., and Beckers J.-M., 2010: High-resolution climatology of the north-east Atlantic using Data-Interpolating Variational Analysis (DIVA). J. Geophys. Res., 115, C08005, doi:10.1029/2009JC005512.
Troupin, C., and Coauthors, 2012: Generation of analysis and consistent error fields using the Data Interpolating Variational Analysis (DIVA). Ocean Modell., 52–53, 90–101, doi:10.1016/j.ocemod.2012.05.002.
Wahba, G., and Wendelberger J., 1980: Some new mathematical methods for variational objective analysis using splines and cross validation. Mon. Wea. Rev., 108, 1122–1143.
Xiang, D., and Wahba G., 1996: A generalized approximate cross validation for smoothing splines with non-Gaussian data. Stat. Sin., 6, 675–692.