## 1. Introduction

Oceanographers have continually been troubled by the lack of synopticity in large-scale in situ measurements of the ocean. The Argo program (information available online at http://www.argo.ucsd.edu) attempts to resolve this with an array of drifting floats released into the earth's oceans. These provide profiles of pressure, temperature, and salinity, often down to 2000 m, at roughly 10-day intervals during the lifetime of each float. Without direct current measurements, however, the profiles alone cannot give the absolute geostrophic velocity field, although satellite positioning of the floats while they are at the surface gives an approximation to the flow at the drifting depth. The Bernoulli inverse method is here proposed as a solution to this problem, because it can be used with profile data to calculate sea surface height. Argo float data and the Bernoulli method together have the potential to produce repeated near-synoptic snapshots of the state of the ocean.

The Bernoulli method was first proposed by Killworth (1986). Later improvements were made by Cunningham (2000), whose scheme is adopted here. In summary, the method first identifies steady geostrophic streamlines connecting two profiles (numbered here 1 and 2), by finding pairs of points, one on each profile, with pressures *p*_{1} and *p*_{2}, that have the same value of the following two conserved properties: salinity *S* and a modified potential temperature Θ.

Along any such streamline the Bernoulli function *B*,^{1} defined as

*B* = *U* + *p*/*ρ* + *υ*^{2}/2 + *gz*,

is conserved, where *U* is the internal energy, *p* is pressure, *ρ* is density, *υ* is speed, *g* is the acceleration resulting from gravity, and *z* is the vertical height of the point. As pointed out by Cunningham (2000), the kinetic energy term is very small compared to the other terms and so it is omitted henceforth. The conservation equation (or crossing equation, because it represents an intersection point on an *S*–Θ diagram) for *B* above can be rewritten in the form

*F*(*p*_{1}, *T*_{1}, *S*_{1}) − *F*(*p*_{2}, *T*_{2}, *S*_{2}) = *g*(*η*_{1} − *η*_{2}) + *D*_{1} − *D*_{2},

where *F* is a known function of the input variables pressure *p*, in situ temperature *T*, and salinity *S*; *η* is the sea surface height; and *D*_{1} and *D*_{2} are the dynamic heights referenced to zero at the surface for the two profiles. This equation connects the unknowns *η*_{1} and *η*_{2}.
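The crossing-point search can be sketched as follows: each profile traces a curve in the *S*–Θ plane, and crossings are found where the two curves intersect. The linear interpolation between levels and the array layout are illustrative assumptions, not the exact scheme of Cunningham (2000).

```python
import numpy as np

def seg_intersect(a1, a2, b1, b2):
    """Return the parameters (t, u) at which segments a1->a2 and b1->b2
    intersect in the S-Theta plane, or None if they do not cross."""
    d1, d2 = a2 - a1, b2 - b1
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:          # parallel segments cannot cross
        return None
    r = b1 - a1
    t = (r[0] * d2[1] - r[1] * d2[0]) / denom
    u = (r[0] * d1[1] - r[1] * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return t, u
    return None

def crossing_points(p1, S1, th1, p2, S2, th2):
    """Find (pressure_1, pressure_2) pairs where two profiles share the
    same salinity and (modified) potential temperature."""
    crossings = []
    c1 = np.column_stack([S1, th1])
    c2 = np.column_stack([S2, th2])
    for i in range(len(p1) - 1):
        for j in range(len(p2) - 1):
            hit = seg_intersect(c1[i], c1[i + 1], c2[j], c2[j + 1])
            if hit is not None:
                t, u = hit
                crossings.append((p1[i] + t * (p1[i + 1] - p1[i]),
                                  p2[j] + u * (p2[j + 1] - p2[j])))
    return crossings
```

Each returned pair of pressures then yields one crossing equation linking the two profiles' surface heights.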

Examination of real observations typically produces many more of these crossing equations than the number of profiles (which corresponds to the number of *η*s to solve for). This overdetermined set of equations is, therefore, solved by singular value decomposition.
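The truncated-SVD solution can be sketched as below; the row structure of the design matrix (one ±*g* pair per crossing equation) and the cutoff value are illustrative assumptions about how the system is assembled.

```python
import numpy as np

def solve_heights(pairs, rhs, n_profiles, cutoff=1e-6):
    """Solve the overdetermined crossing equations g*(eta_i - eta_j) = rhs
    for the sea surface heights eta by truncated SVD."""
    g = 9.81
    A = np.zeros((len(pairs), n_profiles))
    for k, (i, j) in enumerate(pairs):
        A[k, i] = g
        A[k, j] = -g
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Discard near-zero singular values: differences determine eta only
    # up to a constant, and the SVD returns the minimum-norm solution.
    s_inv = np.where(s > cutoff * s[0], 1.0 / s, 0.0)
    eta = Vt.T @ (s_inv * (U.T @ np.asarray(rhs, float)))
    return eta
```

Because only height differences are constrained, the minimum-norm solution fixes the arbitrary constant by setting the mean of the *η*s to zero.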

Cunningham (2000) has validated the method using model data as input, comparing his results directly with the sea surface height derived by the model. He then qualitatively compared solutions for the North Atlantic calculated from CTD data collected on a month-long cruise to known features of the circulation. Here we seek to make a quantitative comparison of the sea surface height (or, equivalently, the surface current) to model and altimetric observations over time scales of the order of 10 days.

## 2. Implementation

The solution procedure has been coded in Python (information available online at http://www.python.org). This is a very high level open-source language, which allows for the fast development of code. It has good memory management and is portable between machines. It is also object oriented, so that the problem can easily be broken down into distinct steps, each of which can be tested independently.

Data for a supplied time range and a particular ocean basin are downloaded from the Argo data center. The data come in the network Common Data Form (netCDF) format, and are unpacked using the functions in Konrad Hinsen's ScientificPython module (information online at http://dirac.cnrs-orleans.fr/ScientificPython). Simple editing removes values failing the Argo quality control flags, profiles with bad values of latitude or longitude, and profiles with large pressure gaps (more than 200 m). The data are then split into separate profiles.
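The pressure-gap and position edits might look like the following sketch; the function names are hypothetical and the QC-flag handling is omitted for brevity.

```python
import numpy as np

def has_large_gap(pressures, max_gap=200.0):
    """Reject a profile whose consecutive pressure levels are more than
    max_gap (dbar, roughly metres) apart."""
    p = np.sort(np.asarray(pressures, float))
    return bool(np.any(np.diff(p) > max_gap))

def keep_profile(lat, lon, pressures):
    """Apply the simple position and pressure-gap edits described above."""
    if not (-90.0 <= lat <= 90.0) or not (-180.0 <= lon <= 360.0):
        return False                      # bad latitude or longitude
    return not has_large_gap(pressures)
```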

The set of profiles to be considered is then passed to a Bernoulli inverse solver. Thermodynamic properties are calculated using the algorithms of Feistel (2003). The solver is designed to store each solution as a separate object that can be reused by changing parameters at any point of the method, and then solving again from that point. For example, crossing point equations can be filtered in different ways, eliminating those that do not fit user-supplied criteria, but without having to calculate them all again.

The procedure runs once a week as a stand-alone system, creating inverses of the previous 10 days of float data and publishing the results on a Web page (online at http://www.noc.soton.ac.uk/JRD/PROC/people/sga/bernoulli).

Some of the results from the scheme are discussed in the next section.

## 3. Results

The mean monthly number of profiles that are used after the above filtering for 2003 is shown in Table 1.

Figure 1 shows the Bernoulli inverse of 10 days of float data collected up to 21 November 2003. The top panel shows the sea surface height, represented as a vertical line, pointing up the page for positive values and down the page for negative values, with the float position marked by a filled circle. The bottom panel shows the statistical error as evaluated by the singular value decomposition (note the factor-of-10 difference in scale). These errors lie largely in the 1–2-cm range.

A first step in validating these results could be to interpolate onto a regular grid. For sparse data, however, the interpolation problem is not trivial, and may obscure the results from the inverse. Instead, large-scale features of the calculated height fields will be examined, which can be compared directly with model and satellite data. Later, interpolated fields can be evaluated by testing to see whether such features are preserved in the output grid (section 5).

Examination of the calculated fields reveals a region of consistent sea surface slope in the northeastern Atlantic. Taking a rectangle of data from 10° to 70°N and from 40° to 10°W and plotting height against latitude gives Fig. 2. The solid line represents a linear least squares fit to the data between 30° and 50°N, with a slope of 0.021 m deg^{−1} and a correlation coefficient of 0.77. This is a tangible characteristic of the float inverses, which can be tested against other observations. Examination of the results from 10-day intervals (not shown) covering 2003 suggests that it does not vary over this year.
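The fit used here can be reproduced with a short routine; the helper name is hypothetical.

```python
import numpy as np

def slope_and_correlation(lat, eta):
    """Least squares fit of sea surface height against latitude,
    returning the slope (m per degree) and correlation coefficient."""
    lat = np.asarray(lat, float)
    eta = np.asarray(eta, float)
    slope, intercept = np.polyfit(lat, eta, 1)   # degree-1 polynomial fit
    r = np.corrcoef(lat, eta)[0, 1]
    return slope, r
```

Applied to the heights between 30° and 50°N, this returns the slope and correlation quoted above.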

## 4. Validation

### a. Model

We have used output from the Ocean Circulation and Climate Advanced Modelling Project (OCCAM) 1/4° primitive equation model, which includes a free surface code (see, e.g., Saunders et al. 1999). This model was forced with European Centre for Medium-Range Weather Forecasts (ECMWF) mean winds and was relaxed to climatological temperature and salinity at the surface. Consequently, the results do not correspond to any particular year. We require the mean total surface geostrophic current in the same latitude–longitude box as that used for the inverse data. The model sea surface height field is calculated as a streamfunction for the depth-independent part of the velocity field, so it is not straightforward to go directly from the mean gradient of height to a surface geostrophic current. Instead, the following procedure is adopted.

1. Use depth, temperature, and salinity profiles from the model to calculate the geostrophic velocity relative to zero at the surface.
2. Compare the profile calculated in step 1 with the model total velocity profile at the same longitude and latitude.
3. Find a deep layer where the two profiles lie parallel and take the difference between them; this is assumed to be the depth-independent part of the total geostrophic velocity.
4. Add the velocity found in step 3 to the profile calculated in step 1; the velocity at the surface is then the total geostrophic velocity that is required.
5. Average the west–east component of velocity in the latitude–longitude box under consideration.
6. Convert the velocity to an equivalent slope.
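Steps 3 and 4 of this procedure might be sketched as follows; the array layout, with index 0 at the surface and a boolean mask marking the deep layer, is an assumption for illustration.

```python
import numpy as np

def reference_to_deep_layer(v_rel, v_total, deep):
    """Estimate the depth-independent velocity as the mean difference
    between the total and relative profiles over a deep layer (boolean
    mask), then return the absolute surface velocity."""
    v_rel = np.asarray(v_rel, float)
    v_total = np.asarray(v_total, float)
    offset = np.mean(v_total[deep] - v_rel[deep])  # step 3
    return v_rel[0] + offset                       # step 4; index 0 = surface
```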

### b. Altimetry

Until recently, altimetric sea surface height data were mainly used in oceanography in the form of an anomaly or a difference from a mean or a previous field, because of the uncertainty in the geoid. Such a differencing operation removes any stationary signal, including the one to be tested here. With the advent of the Gravity Recovery and Climate Experiment (GRACE), launched by the National Aeronautics and Space Administration (NASA) in 2002, satellite observations of height can begin to be combined with a geoid estimate. An initial release of a 1° gridded product [GRACE Gravity Model (GGM01C)] by the University of Texas has been used (information available online at http://www.csr.utexas.edu/grace/gravity). TOPEX/Poseidon data have been extracted from the Radar Altimeter Database System (RADS) (Naeije et al. 2000). Figure 3 shows altimeter heights from the TOPEX/Poseidon cycle near 20 November 2003, averaged into 1° boxes with the geoid height subtracted, for the longitude range of 40°–10°W. Only TOPEX/Poseidon points within a radius of 0.25° from a GRACE grid point have been used. The solid line is a linear least squares fit to the data; only those grid points that have five or more TOPEX/Poseidon values have been included. An analysis of variance test for this model shows that the slope is significant.
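The box averaging with the 0.25° matching radius might be sketched as follows; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def box_average(lons, lats, heights, grid_lons, grid_lats,
                radius=0.25, min_n=5):
    """Average altimeter heights within `radius` degrees of each geoid
    grid point, keeping only points with at least min_n values."""
    out = {}
    for glon in grid_lons:
        for glat in grid_lats:
            near = np.hypot(lons - glon, lats - glat) <= radius
            if near.sum() >= min_n:
                out[(glon, glat)] = float(np.mean(heights[near]))
    return out
```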

### c. Comparison

Figure 4 shows slopes calculated by a linear least squares method from TOPEX/Poseidon data covering its lifetime up to the beginning of 2004 for the region from 30° to 50°N and from 40° to 10°W. Instead of plotting error bars at each point, an estimate of error is shown by an error envelope bracketing the data, calculated at each point by adding and subtracting twice the standard deviation of the slope from the fit and then smoothed for clarity. Although the record is noisy, there is little evidence of any trend with time. This justifies the use of model data that cannot be assigned any fixed time reference. Figure 5 shows a comparison of slopes for the three methods for the region of interest. Lines marked with circles correspond to the results from fits for Bernoulli inverses of 10 days of data at weekly intervals covering 2003; lines marked with squares correspond to 11 consecutive 30-day snapshots from OCCAM; and lines marked with triangles are the results from fitting to the altimetry data for every cycle in 2003. Note that the time base for the OCCAM data is assigned a nominal start time. Error envelopes at two standard deviations are also shown as dotted (OCCAM), dashed (Bernoulli), and solid (altimeter) lines. A consistent slope of 2 cm deg^{−1} is apparent. Converting this slope into a surface current gives a large-scale eastward velocity component of 2 cm s^{−1}.
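The conversion from slope to current follows the surface geostrophic balance, *u* = −(*g*/*f*) ∂*η*/∂*y*. A minimal numerical check at a representative latitude of 40°N (the latitude choice and the 111.2 km per degree of latitude conversion are illustrative assumptions):

```python
import numpy as np

# Geostrophic surface current from a meridional sea surface slope:
#   u = -(g / f) * d(eta)/dy
g = 9.81                                    # m s^-2
omega = 7.292e-5                            # Earth's rotation rate, s^-1
lat = 40.0                                  # representative latitude, deg
f = 2.0 * omega * np.sin(np.radians(lat))   # Coriolis parameter, s^-1
slope = -0.02 / 111.2e3                     # 2 cm per deg, decreasing northward
u = -(g / f) * slope                        # eastward velocity, m s^-1
```

With the height decreasing northward, this gives *u* of roughly +0.02 m s^{−1}, consistent with the 2 cm s^{−1} eastward flow quoted above.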

Pingree (1993) studies the movement of drogued buoys to the west and south of the United Kingdom. He suggests that typical surface currents in the region south of 51°N should be 2–3 cm s^{−1} to the southeast. Van Aken (2002) describes results that are derived from drifters in the Bay of Biscay over the period of 1995–99. He finds surface currents of 1.5–2.0 cm s^{−1} in the abyssal region of the bay (i.e., at the eastern boundary of the analysis presented here). In winter this flow is to the east, but is to the south in summer. He attributes this to a direct response to the seasonal wind forcing. Both of these observations are consistent with the results found here.

## 5. Interpolated heights

We require a method to interpolate from the inverse height fields on to a grid of regular points. Optimal interpolation is, therefore, used in the form described by Bretherton et al. (1976). This has the advantage over other methods that an estimate of error can be made.

The interpolation requires a correlation model that is a function of a nondimensional separation *d*, proportional to the distance between any pair of observations (see, e.g., Lorenc et al. 1991). The variable *d* is chosen here to be the distance between profile pairs divided by a scale parameter. The latter is estimated by finding the median distance between crossing-point pairs for a number of solutions from 2003. A value of 250 km is used.
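With these choices, the interpolation of Bretherton et al. (1976) reduces to a few lines. The Gaussian covariance form, the noise variance, and the unit background variance below are illustrative assumptions, not the exact model used.

```python
import numpy as np

def optimal_interp(obs_xy, obs, grid_xy, scale=250e3, noise=0.01):
    """Optimal interpolation after Bretherton et al. (1976), mapping
    observations at obs_xy onto grid_xy, with an analysis-error estimate."""
    obs_xy = np.asarray(obs_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    obs = np.asarray(obs, float)

    def cov(a, b):
        # Gaussian covariance of the nondimensional separation d
        d = np.hypot(a[:, None, 0] - b[None, :, 0],
                     a[:, None, 1] - b[None, :, 1]) / scale
        return np.exp(-0.5 * d * d)

    C = cov(obs_xy, obs_xy) + noise * np.eye(len(obs))  # obs-obs + noise
    c = cov(grid_xy, obs_xy)                            # grid-obs covariance
    w = np.linalg.solve(C, obs)                         # (C + E)^-1 y
    analysis = c @ w
    # analysis-error variance, assuming unit background variance
    err = 1.0 - np.einsum('ij,ij->i', c, np.linalg.solve(C, c.T).T)
    return analysis, err
```

The error estimate is the advantage cited above: it falls toward the noise floor at observation points and rises to the background variance far from any data.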

Because of the patchiness of the data, a simple analysis/correction scheme after Lorenc et al. (1991) has been used. The scheme follows a number of steps:

1. interpolate linearly from a background grid onto the positions of the new data,
2. optimally interpolate the difference between the new data and the interpolated field onto the same grid as the background, and
3. add the interpolated grid to the background grid.
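The cycle above can be sketched as follows; the nearest-neighbour background interpolation (linear in the scheme itself) and the `oi` callable interface are assumptions for illustration.

```python
import numpy as np

def interp_background(background, grid_xy, obs_xy):
    """Interpolate background values to observation positions
    (nearest grid point here, for brevity; linear in the paper)."""
    grid_xy = np.asarray(grid_xy, float)
    obs_xy = np.asarray(obs_xy, float)
    idx = np.argmin(np.hypot(grid_xy[:, None, 0] - obs_xy[None, :, 0],
                             grid_xy[:, None, 1] - obs_xy[None, :, 1]),
                    axis=0)
    return np.asarray(background, float)[idx]

def analysis_step(background, grid_xy, obs_xy, obs, oi):
    """One analysis/correction cycle after Lorenc et al. (1991):
    interpolate the background to the observations, optimally interpolate
    the residuals back onto the grid, and add the correction."""
    bg_at_obs = interp_background(background, grid_xy, obs_xy)  # step 1
    residuals = obs - bg_at_obs
    correction = oi(obs_xy, residuals, grid_xy)                 # step 2
    return background + correction                              # step 3
```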

There are a number of possible choices for the background grid. For example, each grid that is created could be used as the background grid for the next calculation. To reduce the propagation of error, the background field is calculated by taking all observations over a year (in this case 2003, because this is the only complete year of data that is available at the time of writing), and averaging them into grid boxes centered on each grid point. The background error variance is chosen as the median of the variances of each grid box.

Figure 6 demonstrates the product from this interpolation scheme. It shows sea surface height and error for the same time as Fig. 1, but now contoured from the gridded data. The Gulf Stream and North Atlantic current system are clearly revealed. The branching point between these two currents that is described by Clarke et al. (1980) is also evident. Calculating a north–south sea surface slope in the northeast Atlantic, as before, yields −0.018 m deg^{−1} with a correlation coefficient of 0.78, so that the gridding appears to be faithful to large-scale signals.

## 6. Conclusions

A preoperational scheme for producing sea surface height fields in the North Atlantic by Bernoulli inverse has been implemented using near-real-time observations from the Argo float program. Comparison of a large-scale feature of the product to both model and altimeter data shows good agreement.

## Acknowledgments

Stuart Cunningham provided much valuable advice. The OCCAM data were kindly provided by Andrew Coward and Beverly deCuevas at SOC. The copy of RADS at SOC is maintained by Helen Snaith. Thanks are due to Trevor McDougall for his valuable comments on an early draft of this manuscript. We are grateful to the reviewers of this paper for their helpful advice and comments.

## REFERENCES

Bretherton, F. P., R. E. Davis, and C. B. Fandry, 1976: A technique for objective analysis and design of oceanographic experiments applied to MODE-73. *Deep-Sea Res.*, **23**, 559–582.

Clarke, R. A., H. W. Hill, R. F. Reiniger, and B. A. Warren, 1980: Current system south and east of the Grand Banks of Newfoundland. *J. Phys. Oceanogr.*, **10**, 25–65.

Cunningham, S. A., 2000: Circulation and volume flux of the North Atlantic using synoptic hydrographic data in a Bernoulli inverse. *J. Mar. Res.*, **58**, 1–35.

Feistel, R., 2003: A new extended Gibbs thermodynamic potential of seawater. *Progress in Oceanography*, Vol. 58, Pergamon, 43–114.

Killworth, P. D., 1986: A Bernoulli inverse method for determining the ocean circulation. *J. Phys. Oceanogr.*, **16**, 2031–2051.

Lorenc, A. C., R. S. Bell, and B. MacPherson, 1991: The Meteorological Office analysis correction data assimilation scheme. *Quart. J. Roy. Meteor. Soc.*, **117**, 59–89.

McDougall, T. J., 1989: Streamfunctions for the lateral velocity vector in a compressible ocean. *J. Mar. Res.*, **47**, 267–284.

Naeije, M., E. Schrama, and R. Scharroo, 2000: The Radar Altimeter Database System project RADS. *Proc. IEEE 2000 Int. Geosciences and Remote Sensing Symp. (IGARSS 2000)*, Honolulu, HI, IEEE, 487–490.

Pingree, R. D., 1993: Flow of surface waters to the west of the British Isles and in the Bay of Biscay. *Deep-Sea Res.*, **40B**, 369–388.

Saunders, P. M., A. C. Coward, and B. A. de Cuevas, 1999: Circulation of the Pacific Ocean seen in a global ocean model (OCCAM). *J. Geophys. Res.*, **104** (C8), 18281–18299.

van Aken, H. M., 2002: Surface currents in the Bay of Biscay as observed with drifters between 1995 and 1999. *Deep-Sea Res.*, **49A**, 1071–1086.

The mean number of float profiles used in each month of 7-day inverses covering 2003.

^{1} The Bernoulli function is but one choice that can be made for a conserved quantity. McDougall (1989) discusses the choices and concludes that the Montgomery function is the most accurate, although finding the location of crossing points is more time consuming than with the Bernoulli function. Comparison of the solutions for different conserved quantities will be made in a later article.