## 1. Introduction

The method of successive corrections is an iterative procedure used to interpolate meteorological and oceanographic data onto uniform space–time grids (cf. chapter 3 of Daley 1991, and references therein). The successive corrections algorithm is based on a sequence of differencing and smoothing operations. At each iteration cycle, the current grid of estimates is interpolated to the data locations. The differences between these interpolants and the data are then smoothed back to the grid as weighted averages defined by a prespecified weighting function parameterized by a “radius of influence,” or *span.* The smoothed differences are added to the current estimates, and the next iteration is performed. The span of the weighting function may be fixed or varied from one iteration to the next. In its original form (Bergthorsson and Doos 1955; Cressman 1959), the method was initialized by a grid of initial or “background” values typically obtained from forecasts or climatology. Subsequent modifications replaced the background fields with smoothed values obtained from the data (e.g., Barnes 1964, 1967; Koch et al. 1983), or provided approximants to optimal estimates (Bratseth 1986).

While supplanted to a degree by other methods of statistical interpolation and assimilation schemes (e.g., 3DVAR and 4DVAR, as described by Rabier et al. 1998; Courtier et al. 1998; Andersson et al. 1998; Rabier et al. 2000; Mahfouf and Rabier 2000; Klinker et al. 2000), the method of successive corrections continues to be applied to various meteorological and oceanographic datasets (e.g., Levitus 1982; Vasquez et al. 1990; Perigaud 1990; Levitus and Boyer 1994; daSilva et al. 1994; Josey et al. 1998, 1999; Liu et al. 1998; Kent et al. 2000). Even if the use of successive corrections is declining, its widespread historical use makes a clear understanding of the properties of the method useful.

Successive corrections is linear in the data, and acts as a low-pass filter, or smoother, whose filter transfer functions have been investigated for certain cases. Barnes (1964) analyzed the spectral content of fixed-span successive corrections for the continuous case and concluded that as the number of iterations increased, shorter scales of variability in the data were included in the gridded estimates. In his terms, the gridded estimates “converged” toward the “true” field values. The number of iterations could be varied depending on the data distribution, measurement errors, and the scales of variability desired in the final grid. Thus, the degree of smoothing, or the effective filter cutoff, could be controlled through setting the span and number of iterations. Barnes (1967) [see Koch et al. (1983) for a more readily accessible review] realized that, by varying the span of the weighting function, the desired degree of filtering could be reached in fewer iterations. By selecting the decrease in span appropriately, the desired filter cutoff could be reached in only two iteration cycles at the expense of a somewhat less sharp filter (i.e., a slower rolloff of the filter transfer function at the cutoff wavelength). Achtemeier (1987) performed a similar analysis and concluded that superior filtering properties (more rapid rolloff) were obtained by fixed-span iterations. Daley (1991) calculated filter transfer functions for two-iteration, fixed-span successive corrections for the continuous case and for regular, discrete data distributions. Using the elegant formulation of Stephens and Stitt (1970), Stephens and Polan (1971) derived transfer functions for one- and two-iteration, fixed-span successive corrections estimates for randomly distributed data.

While use of the Barnes (1967) two-iteration implementation of the successive corrections method is widespread, and supported by his theoretical analyses, the application of variable-span successive corrections with more than two iterations is also common. For example, Levitus (1982) applied a four-iteration, variable-span implementation and provided a filter transfer function, but it is not clear how this was derived from the fixed-span formula of Barnes (1964) that is cited. Some applications of this technique are ad hoc: Vasquez et al. (1990) and Liu et al. (1998) apparently gave little consideration to the effect of what seem to be arbitrarily selected numbers of iterations and spans.

In this paper, a matrix formulation of the successive corrections algorithm is presented, from which the smoother weights (the a posteriori weights of Daley 1991) can be readily obtained for general implementations of the successive corrections algorithm, and for arbitrary distributions of data. Using the smoother weights so derived, filter transfer functions for a number of examples of successive corrections in one dimension are presented. The effects of varying the number of iterations and the spans are discussed, as is the influence of the estimation grid spacing. Comparison is made with the loess smoother, a noniterative smoothing algorithm.

As will be shown, fixed-span successive corrections is capable of generating smoother weights that lead to an effective low-pass filter with a rapid rolloff, in accordance with the earlier works of Barnes (1964, 1967) and Achtemeier (1987). The variable-span implementation is shown to depend almost entirely on the span used in the final iteration, obviating the need for the earlier iterations. As with any smoothing algorithm, the filtering properties of successive corrections depend heavily on the distribution of the data around any grid point. The properties of any specific implementation of successive corrections thus need to be evaluated on a case-by-case basis. For this reason no specific set of parameters is recommended.

## 2. Matrix formulation of successive corrections

Consider *N* observations *y*_{n} of some variable of interest at the locations (in one, two, or three dimensions) *x*_{n}, *n* = 1, … , *N.* From these data, we seek estimates *g*_{m} at *M* selected locations *η*_{m}, *m* = 1, … , *M.* Ordinarily, the *η*_{m} define a regular grid in one or more dimensions. The successive corrections method iterates to obtain a sequence of estimates *g*_{m}(*ν*) as

*g*_{m}(*ν*) = *g*_{m}(*ν* − 1) + ∑^{N}_{n=1} *s*_{mn}(*ν*)[*y*_{n} − *ŷ*_{n}(*ν* − 1)],

where *ν* is the iteration number, *s*_{mn}(*ν*) are specified weights (here, referred to as the *analytical* weights to distinguish them from the smoother weights calculated later), and the *ŷ*_{n}(*ν*) are interpolated (using a method assumed here to be linear) values of the variable at the locations *x*_{n} obtained from the *g*_{m}(*ν*):

*ŷ*_{n}(*ν*) = ∑^{M}_{m=1} *l*_{nm}*g*_{m}(*ν*).

Define the vectors **y** = [*y*_{n}], **g** = [*g*_{m}], and **ŷ** = [*ŷ*_{n}], the *M* × *N* smoother matrices 𝗦(*ν*) = [*s*_{mn}(*ν*)], and the *N* × *M* interpolation matrix 𝗟 = [*l*_{nm}], so that the iteration may be written compactly as

**g**(*ν*) = **g**(*ν* − 1) + 𝗦(*ν*)[**y** − 𝗟**g**(*ν* − 1)],  *ν* = 1, … , *K.*  (1)

Starting from the background field **g**(0), *K* iterations of (1) yield the final estimates **g**(*K*).

As equation (1) shows, before applying the successive corrections method, one must select the background field **g**(0), an estimation grid *η*_{m} and interpolation algorithm that are both embodied in 𝗟, the number of iterations, *K,* and the analytical weights for each datum and iteration, 𝗦(*ν*).
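The recursion in (1) translates directly into code. The following sketch (ours, not the authors' implementation) uses a normalized Gaussian weighting function for 𝗦(*ν*) and, for brevity, linear rather than cubic Lagrange interpolation for 𝗟; all function names are ours:

```python
import numpy as np

def gaussian_smoother_matrix(eta, x, sigma):
    """Row-normalized Gaussian analytical weights s_mn = s(r_mn)/s_m."""
    r = np.abs(eta[:, None] - x[None, :])         # M x N distances r_mn
    s = np.exp(-(r / sigma) ** 2)
    return s / s.sum(axis=1, keepdims=True)       # each row sums to one

def linear_interp_matrix(x, eta):
    """N x M matrix L: (L g)_n linearly interpolates grid values g to x_n.
    The grid eta is assumed sorted in ascending order."""
    L = np.zeros((x.size, eta.size))
    for n, xn in enumerate(x):
        j = np.clip(np.searchsorted(eta, xn) - 1, 0, eta.size - 2)
        w = (xn - eta[j]) / (eta[j + 1] - eta[j])
        L[n, j], L[n, j + 1] = 1.0 - w, w
    return L

def successive_corrections(y, x, eta, sigmas, g0=None):
    """g(nu) = g(nu-1) + S(nu) [y - L g(nu-1)], nu = 1, ..., K  [Eq. (1)].
    One entry of `sigmas` per iteration; a constant list gives fixed spans."""
    g = np.zeros(eta.size) if g0 is None else g0.astype(float).copy()
    L = linear_interp_matrix(x, eta)
    for sigma in sigmas:
        S = gaussian_smoother_matrix(eta, x, sigma)
        g = g + S @ (y - L @ g)
    return g
```

With `g0 = None` (i.e., **g**(0) = 0) and a single span, one call reduces to simple Gaussian-weighted smoothing, **g**(1) = 𝗦(1)**y**.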

The analytical weights are typically defined by normalizing a weighting function, *s*_{mn} = *s*(*r*_{mn})/*s*_{m}, where *r*_{mn} = ‖*η*_{m} − *x*_{n}‖ is the distance between the datum and the grid point, *s*(*r*) is a function that monotonically decreases away from *r* = 0, and *s*_{m} = ∑^{N}_{n=1} *s*(*r*_{mn}). Two common weighting functions are the Cressman function (Cressman 1959),

*s*(*r*) = (*R*² − *r*²)/(*R*² + *r*²) for *r* < *R*,  *s*(*r*) = 0 for *r* ≥ *R*,

where *R* is the radius of influence, and the Gaussian, *s*(*r*) = exp(−*r*²/*σ*²), where *σ* is the *e*-folding scale.

Even after the grid and the function defining the analytical weights are chosen, it is clear that there are many ways to implement the successive corrections method. The number of iterations to be used and the span of the function defining the matrices 𝗦 (e.g., the parameter *R* in the Cressman function, or the *e*-folding point of the Gaussian) both need to be specified. The span can either be fixed for all iterations, or it can be varied among the iterations.
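As a concrete sketch (ours, not from the paper) of the two weighting functions and the normalization above, with the Cressman radius of influence *R* and the Gaussian *e*-folding scale *σ* as the respective spans:

```python
import numpy as np

def cressman(r, R):
    """Cressman (1959) weighting function: (R^2 - r^2)/(R^2 + r^2)
    inside the radius of influence R, and zero outside."""
    r = np.asarray(r, dtype=float)
    w = (R ** 2 - r ** 2) / (R ** 2 + r ** 2)
    return np.where(r < R, w, 0.0)

def gaussian(r, sigma):
    """Gaussian weighting function with e-folding scale sigma."""
    return np.exp(-(np.asarray(r, dtype=float) / sigma) ** 2)

def analytical_weights(eta_m, x, weight_fn, span):
    """Normalized analytical weights s_mn = s(r_mn)/s_m for one grid point."""
    s = weight_fn(np.abs(x - eta_m), span)
    return s / s.sum()
```

The normalization guarantees that the weights for each grid point sum to one, so a single pass is a weighted average of the data.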

## 3. Examples of successive corrections

For all of the examples presented here, the background field is **g**(0) = 0, the interpolation step is accomplished using cubic Lagrange interpolation, and the function defining the analytical weights (with the exception of one case, cf. Fig. 2) is a Gaussian, *s*(*r*) = exp(−*r*²/*σ*²). With **g**(0) = 0, equation (1) may be written as **g**(*K*) = 𝗔**y**, and thus for each grid point, the procedure results in a linear estimate or smoother,

*g*_{m}(*K*) = ∑^{N}_{n=1} *a*_{mn}*y*_{n}.  (2)

In the figures that follow, the smoother weights *a*_{mn} for a specific grid point *η*_{m} are displayed. The filtering properties of the estimate are examined using the modulus of the equivalent transfer function for that grid point,

|*T*(*f*)| = |∑^{N}_{n=1} *a*_{mn} exp[−*i*2π*f*(*x*_{n} − *η*_{m})]|,

and the sharpness of the filter is summarized by the relative error

[∫^{∞}_{0} (|*T*(*f*)| − *I*_{fc}(*f*))² d*f*]/*f*_{c},  (3)

where *I*_{fc} is an ideal low-pass filter with cutoff frequency *f*_{c} (unity for *f* ≤ *f*_{c}, zero for *f* > *f*_{c}), evaluated at the *f*_{c} that minimizes the relative error. In the examples analyzed here, the integral in the numerator of (3) was evaluated numerically, and the minimization required to define *f*_{c} was performed using the golden section algorithm (Press et al. 1992).
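These diagnostics can be sketched as follows. The exact normalization of the paper's relative-error measure is not reproduced here; this sketch (ours) takes it to be the integrated squared deviation of |*T*(*f*)| from an ideal low-pass filter, divided by *f*_{c}:

```python
import numpy as np

def transfer_modulus(a, dx, f):
    """|T(f)| for smoother weights a_n at offsets dx_n = x_n - eta_m."""
    e = np.exp(-2j * np.pi * np.outer(f, dx))
    return np.abs(e @ a)

def relative_error(a, dx, fc, fmax=10.0, nf=2001):
    """Squared deviation of |T| from an ideal low-pass filter with cutoff
    fc, integrated (trapezoidal rule) and normalized by fc (our reading)."""
    f = np.linspace(0.0, fmax, nf)
    ideal = (f <= fc).astype(float)
    d = (transfer_modulus(a, dx, f) - ideal) ** 2
    return np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(f)) / fc

def cutoff_frequency(a, dx, lo=0.01, hi=5.0, tol=1e-4):
    """Golden-section search for the fc minimizing the relative error."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    while hi - lo > tol:
        if relative_error(a, dx, c) < relative_error(a, dx, d):
            hi, d = d, c
            c = hi - phi * (hi - lo)
        else:
            lo, c = c, d
            d = lo + phi * (hi - lo)
    return 0.5 * (lo + hi)
```

The golden-section search assumes the relative error is unimodal in *f*_{c}, which holds for the smooth, monotone transfer functions considered here.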

### a. Coincident data and grid locations

When the data locations and grid points are the same, the interpolation matrix 𝗟 is the identity. This ideal case has been studied by previous authors (e.g., Daley 1991) to provide a simplified analysis of the successive corrections method. It is reexamined here in order to isolate the effects of the choices of span and the number of iterations.

For the examples in this section, the data locations *x*_{n} and the estimation grid points *η*_{m} are coincident on a grid with spacing 0.05. The specific grid point is selected so that the edges of the dataset will not influence the smoother weights or transfer functions. The grid extends to 5*σ*_{m} on either side of the selected grid point, where *σ*_{m} is the largest value of the Gaussian *e*-folding scale *σ* applied in a given example.

#### 1) Fixed-span iterations

The first case considered is iteration with a fixed span, that is, 𝗦(*ν*) is constant with a Gaussian *e*-folding scale of *σ* = 1. The number of iterations, *K,* was varied from 1 through 20. The results are shown in Fig. 1 for selected values of *K.* It is apparent from Fig. 1b that the successive corrections method is a low-pass filter. For *K* = 1, the successive corrections method is equivalent to a simple, Gaussian-weighted smoother. As the number of iterations is increased, the analytical weights are modified by the algorithm and the filter passes successively higher frequencies and has a steeper rolloff (lower relative error), in agreement with the results of Barnes (1964, 1967) and Achtemeier (1987). The cutoff frequency increases by almost a factor of 2 from one iteration to four iterations. This implementation is convergent when using Gaussian analytical weights (e.g., Daley 1991); both the relative error and the cutoff frequency approach asymptotic values with increasing *K.*
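With 𝗟 = 𝗜 and **g**(0) = 0, the recursion (1) collapses to **g**(*K*) = [𝗜 − (𝗜 − 𝗦)^{K}]**y**, so the smoother matrix is 𝗔 = 𝗜 − (𝗜 − 𝗦)^{K}. A brief numerical check (ours, not the authors' code) that fixed-span iteration passes successively higher frequencies:

```python
import numpy as np

def gaussian_S(x, sigma):
    """Row-normalized Gaussian analytical weights on coincident data/grid."""
    r = np.abs(x[:, None] - x[None, :])
    s = np.exp(-(r / sigma) ** 2)
    return s / s.sum(axis=1, keepdims=True)

def smoother_matrix(S, K):
    """A = I - (I - S)^K: smoother weights after K fixed-span iterations."""
    I = np.eye(S.shape[0])
    return I - np.linalg.matrix_power(I - S, K)

x = np.arange(-5.0, 5.0 + 1e-9, 0.05)      # coincident data and grid
S = gaussian_S(x, sigma=1.0)
m = x.size // 2                            # central point, away from the edges
dx = x - x[m]

def T(a, f):
    """Equivalent transfer function modulus for smoother weights a."""
    return abs(np.sum(a * np.exp(-2j * np.pi * f * dx)))

# |T(0.2)| grows toward unity as K increases: more iterations pass
# successively higher frequencies.
Tf = {K: T(smoother_matrix(S, K)[m], 0.2) for K in (1, 2, 4)}
```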

It is noteworthy that the widely used Cressman weighting function results in a divergent iteration sequence for *K* > 4 (Fig. 2). This fundamental shortcoming of the Cressman function as a weighting function for the successive corrections method has previously been pointed out by Daley (1991).

#### 2) Variable-span, two iterations

This example is the case presented by Barnes (1967) and Koch et al. (1983). There are two iterations with different Gaussian analytical weights, characterized by the *e*-folding parameters *σ*_{1} for the first iteration and *σ*_{2} ≤ *σ*_{1} for the second. For this example, *σ*_{1} was varied from 1 to 6, while *σ*_{2} was held fixed at *σ*_{2} = 1. The results are shown in Fig. 3. The smoother weights and equivalent transfer functions are similar for all *σ*_{1} except when *σ*_{1} = 1, corresponding to the *K* = 2 case in the previous example. For *σ*_{1} > 2, the cutoff frequencies and relative errors are nearly insensitive to *σ*_{1}. Comparison with Fig. 1 shows that *σ*_{1} ≥ 2 results in smoother weights and transfer functions almost identical to those of the *K* = 1 example. Apparently, when *σ*_{2} < *σ*_{1}, the additional first iteration has only a slight effect.

This behavior can be understood by writing out (1) for *K* = 2 [recalling that 𝗟 = 𝗜 and **g**(0) = 0]:

**g**(2) = [𝗜 − 𝗦(2)]𝗦(1)**y** + 𝗦(2)**y**.

Since *σ*_{1} > *σ*_{2}, 𝗦(1) smooths the data more than 𝗦(2), so that the lower frequencies retained after low-pass filtering with 𝗦(1) will be attenuated by the high-pass filtering using [𝗜 − 𝗦(2)]. Depending on the smoothing provided by 𝗦(2) relative to 𝗦(1), this first term on the right-hand side may be negligible, so that **g**(2) ≈ 𝗦(2)**y**. For *σ*_{1} > 2 this is the case, while for *σ*_{1} = 2, the filtering characteristics of 𝗦(1) and 𝗦(2) are close enough so that the combined effect of the low-pass filter followed by the high-pass filter is small but noticeable (see the dashed line in Figs. 3a and 3b). This situation is similar to that described by Koch et al. (1983), where multiple iterations with a slowly decreasing span improve the filter response (see their Fig. A2).
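Writing (1) out for *K* = 2 with 𝗟 = 𝗜 and **g**(0) = 0 gives **g**(2) = [𝗜 − 𝗦(2)]𝗦(1)**y** + 𝗦(2)**y**; the sketch below (ours) verifies this identity numerically:

```python
import numpy as np

def gaussian_S(x, sigma):
    """Row-normalized Gaussian analytical weights on coincident data/grid."""
    r = np.abs(x[:, None] - x[None, :])
    s = np.exp(-(r / sigma) ** 2)
    return s / s.sum(axis=1, keepdims=True)

x = np.arange(-10.0, 10.0 + 1e-9, 0.05)
y = np.random.default_rng(0).standard_normal(x.size)

S1, S2 = gaussian_S(x, 4.0), gaussian_S(x, 1.0)   # sigma_1 > sigma_2
I = np.eye(x.size)

g1 = S1 @ y                      # first iteration of (1) with g(0) = 0
g2 = g1 + S2 @ (y - g1)          # second iteration
closed = (I - S2) @ (S1 @ y) + S2 @ y
# When sigma_1 >> sigma_2, the first term is doubly filtered (low pass,
# then high pass) and becomes negligible, leaving g(2) ~ S(2) y.
```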

#### 3) Variable-span, three iterations

For this example, *K* = 3, and the final iteration is fixed at *σ*_{3} = 1. The analytical weights for the first two iterations vary with *σ*_{1} and *σ*_{2} ranging from 1 through 6, with *σ*_{1} ≥ *σ*_{2}. Only the cutoff frequencies and relative errors are presented in Fig. 4. For fixed *σ*_{2}, the filtering characteristics are almost independent of *σ*_{1}; and for *σ*_{2} > 2, the filtering characteristics are only weakly dependent on *σ*_{2}. The lowest relative errors are found for the three-iteration, fixed-span case, *σ*_{1} = *σ*_{2} = *σ*_{3} = 1. The situation is similar to the two-iteration example considered in section 3a(2): with variable spans, the succession of low- and high-pass filtering operators in the second term of (1) essentially cancel one another, and the successive corrections revert to the equivalent of a single pass, with the final, smallest span for the analytical weights.

The relative errors shown in Figs. 1, 3, and 4 demonstrate that iteration with fixed spans results in successively sharper filter transfer functions, while the use of variable spans does little to enhance the filtering properties of the successive corrections method unless the spans decrease slowly with each iteration.

### b. The effects of grid spacing

An integral part of each successive corrections iteration is interpolation from the grid values of the previous step to the data locations. This interpolation is embodied in the matrix 𝗟 in equation (1). The exact form of 𝗟 will depend on both the interpolation algorithm used and the configuration of the grid points *η*_{m} relative to the data. Probably the most common interpolation algorithm used in successive corrections is polynomial or Lagrange interpolation (Press et al. 1992). As noted above, when the data and estimation grid locations coincide, 𝗟 = 𝗜 and the interpolation step has no effect. In realistic applications, where the data are irregularly spaced or otherwise do not coincide with the estimation grid points, this ideal situation will not exist and the nature of the interpolation must be considered.

Figure 5 shows interpolation weights (i.e., the *l*_{nm}) and filter transfer function moduli for the cubic Lagrange interpolation (used herein) applied with two different grid spacings, and to situations where the data locations are located either midway or one-quarter of the way between estimation grid points. For the grid spacing of 0.05, weights for both the midpoint and quarter-point are concentrated about those points, and the resulting transfer functions show that effectively none of the existing signal will be attenuated. For the coarse grid spacing of 1.0, on the other hand, the weights are spread out away from the interpolation points and the Lagrange interpolation results in some signal attenuation that differs between the midpoint (greater attenuation) and the quarter-point (less attenuation). Figure 5 demonstrates that cubic Lagrange interpolation based on a coarse grid is less able to approximate a highly variable function than cubic Lagrange interpolation based on a fine grid. Thus, for a coarse estimation grid, if the data locations do not coincide with the estimation grid points, the interpolation step will introduce some filtering that varies from one data location to another. Depending on the spectral content of the process being smoothed, this may in turn degrade the filtering properties of the successive corrections algorithm, since the frequency content of the interpolated time series will vary with location.
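The attenuation introduced by cubic Lagrange interpolation on a coarse grid can be illustrated with a small sketch (ours). A signal of frequency 0.3 is interpolated at a point midway between grid nodes using the four nearest nodes; the fine grid is nearly exact, while the coarse grid attenuates noticeably:

```python
import numpy as np

def lagrange4_weights(xq, xg):
    """Weights of cubic Lagrange interpolation through 4 nodes xg at xq."""
    w = np.ones(4)
    for i in range(4):
        for j in range(4):
            if i != j:
                w[i] *= (xq - xg[j]) / (xg[i] - xg[j])
    return w

def lagrange4_interp(delta, func, xq):
    """Cubic Lagrange interpolation of func at xq from a grid of spacing
    delta (grid aligned at 0), using the 4 nodes around xq."""
    j = int(np.floor(xq / delta))
    xg = delta * np.arange(j - 1, j + 3)
    return lagrange4_weights(xq, xg) @ func(xg)

f = lambda x: np.sin(2 * np.pi * 0.3 * x)   # test signal, frequency 0.3
xq = 2.5                                    # midway between coarse grid nodes
err_fine = abs(lagrange4_interp(0.05, f, xq) - f(xq))
err_coarse = abs(lagrange4_interp(1.0, f, xq) - f(xq))
```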

The effect of varying the estimation grid spacing with the uniform dataset of section 3a is examined here for one case each of fixed- and variable-span successive corrections. Figure 6a shows the smoother weights and transfer function moduli for Gaussian-weighted, fixed-span successive corrections estimates with *K* = 3, data with a uniform spacing of 0.05, and grid spacings Δ varying from 0.05 to 1.0. The span of the Gaussian function was set to *σ* = 0.4 so that the transfer function for the Δ = 0.05 case (i.e., when the estimation grid locations coincide with the data points) has unit cutoff frequency. Figure 6a shows that for Δ > 0.05, the fixed-span successive corrections method filters the data more strongly, retaining less high-frequency content. The transfer functions for the Δ = 0.75 and 1.0 cases deviate from the other cases in that they both markedly exceed unit value at lower frequencies. Variability at these frequencies is therefore amplified when the interpolation grid is coarse.

The variable-span case (Fig. 6b) is Gaussian-weighted with *K* = 3, *σ*_{3} = 0.274, *σ*_{2} = 2*σ*_{3}, and *σ*_{1} = 4*σ*_{3}, which results in a transfer function with unit cutoff frequency for Δ = 0.05. For the same range of Δ as shown in Fig. 6a, the filtering characteristics are much less sensitive to the change in grid spacing. This is another manifestation of the cancellation of the concatenated low- and high-pass filter operators discussed in section 3a(2).

## 4. Comparison with the loess smoother and further examples

It was shown in section 3 that the successive corrections algorithm acts as a low-pass filter. There are many other smoothing and interpolation algorithms that behave similarly. One of these methods, the quadratic loess smoother (Cleveland 1979, modified to use a Gaussian weighting function by Schlax et al. 2001) has been selected here for comparison with three cases of Gaussian-weighted successive corrections: one with *K* = 1, and the *K* = 3, fixed- and variable-span examples considered in section 3b. For the *K* = 1 case, *σ* is set to the same value as the final iteration for the *K* = 3 variable-span case. The estimation grid spacing has been set at Δ = 0.05 for the successive corrections estimates. The parameters for the four methods have been chosen to yield unit cutoff frequencies. The loess smoother requires only a single smoothing parameter, with weights obtained using the method of Schlax et al. (2001).
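For reference, a quadratic loess estimate is a weighted least-squares quadratic fit about each estimation point, evaluated at that point. The sketch below (ours, with a Gaussian weighting in the spirit of the Schlax et al. 2001 modification; it is not their code) exhibits loess's defining property of reproducing quadratic signals exactly:

```python
import numpy as np

def quadratic_loess(eta, x, y, span):
    """Quadratic loess: at each estimation point, fit a quadratic by
    weighted least squares (Gaussian weights in scaled distance) and
    evaluate the fit at that point."""
    out = np.empty(eta.size)
    for m, em in enumerate(eta):
        d = (x - em) / span                   # scaled distance to the data
        w = np.sqrt(np.exp(-d ** 2))          # sqrt of Gaussian weights
        X = np.column_stack([np.ones_like(d), d, d ** 2])
        beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        out[m] = beta[0]                      # fitted value at d = 0
    return out
```

Because the local model is a quadratic, any signal that is locally well approximated by a parabola passes through the smoother with little distortion, which is one reason loess makes a useful benchmark here.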

### a. Uniformly spaced data

Figure 7a shows the smoother weights and transfer functions for the four smoothers when both the data and the estimation grid have a uniform spacing of 0.05. The smoother weights for all but the *K* = 1 successive corrections are quite similar. The transfer functions all display the general low-pass form with relatively minor differences. The *K* = 3, fixed-span successive corrections transfer function has the lowest relative error (0.076) of the four cases, and thus best approximates the ideal low-pass filter, followed by loess with a slightly higher relative error of 0.083, the *K* = 3 variable-span (0.113) and the *K* = 1 (0.133) successive corrections smoothers. As in the previous examples, the additional iterations in the variable-span case provide little improvement in relative error over a single iteration. Moreover, the variable-span case has much larger relative error than fixed-span successive corrections.

Figure 7b shows how the smoothers respond to a sparser uniform data distribution with spacing 0.3 (and estimation grid spacing Δ = 0.05). The smoother weights and transfer functions for the four cases are again similar to each other and to a low-pass filter for frequencies less than about 1.75. The effect of the larger data spacing is apparent in both the smoother weights and the transfer functions. The smoother weights are asymmetric because the data are not symmetric about the selected estimation grid point. The peak in the transfer functions around a frequency of 3 is due to aliasing in the classical sense. Similar peaks will be found for all cases of uniformly spaced data; were the abscissa for the plot of the transfer function for the 0.05 data spacing in Fig. 7a extended to include frequencies up to the Nyquist frequency of 10, the first alias for that example would be observed. The data spacing directly affects the ability of *any* smoothing algorithm to filter the data. Signal energy at frequencies near 3 would be attenuated by the smoothers in Fig. 7a with the fine data spacing; the same signal energy would contaminate the estimates with the coarse dataset considered in Fig. 7b, thus degrading the performance of the low-pass filters.
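The alias peak can be located with a short check (ours). For uniformly spaced data with the estimation point on a datum, the transfer function is periodic with period 1/Δ_{data}, so a spacing of 0.3 places the first alias of the passband near a frequency of 1/0.3 ≈ 3.3:

```python
import numpy as np

dx_data = 0.3
x = dx_data * np.arange(-20, 21)        # uniform data; estimate at x = 0
a = np.exp(-(x / 0.4) ** 2)             # Gaussian smoother weights, sigma = 0.4
a /= a.sum()

def T(f):
    """Equivalent transfer function modulus at frequency f."""
    return abs(np.sum(a * np.exp(-2j * np.pi * f * x)))

f_alias = 1.0 / dx_data                 # ~3.33: |T| returns to its f = 0 value
```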

### b. Nonuniformly spaced data

All of the previous examples considered the smoother weights and transfer functions with uniformly spaced data and at an estimation grid point far removed from the edge of the dataset. In realistic data analyses, data are typically nonuniformly distributed in space and time, yielding both regions of high data density and regions with significant data gaps. The performance of interpolation algorithms can change drastically under such conditions (Stephens 1967; Jones 1972; Schlax and Chelton 1992). It is thus instructive to consider the behavior of these smoothers under conditions other than the ideal ones considered thus far.

#### 1) Randomly spaced data

Figures 8a and 8b show how the smoothers respond to data that are randomly spaced. For these examples, the estimation grid spacing is Δ = 0.05, and the data locations were distributed according to the uniform random distribution with average spacings of 0.05 (Fig. 8a) and 0.3 (Fig. 8b). Randomly spaced data reduce the ability of all of the smoothers to accurately low-pass filter the data. In both Figs. 8a and 8b, it is seen that the transfer functions have lower cutoff frequencies than their counterparts for the uniformly spaced cases in Figs. 7a and 7b. For both cases, more signal energy between frequencies of 1 and 3 will be incorporated into the smoothed fields. This “aliasing” resulting from nonuniformly spaced data has been examined by Jones (1972), Schlax and Chelton (1992), and Chelton and Schlax (1994) and can significantly degrade the performance of smoothing algorithms.

#### 2) Edges and gaps

An extreme case of nonuniformly spaced data results when the estimation grid point lies at the edge of the dataset. Figure 9 shows the smoother weights and transfer function moduli for the four smoothers in this setting when the data and estimation grid spacing are both 0.05. The performance of all algorithms is significantly degraded. The loess smoother changes most dramatically, amplifying much more of the high-frequency energy than estimates at locations away from the dataset edge, presumably because of the relatively large smoother weights associated with the data points nearest the edge (see the left panel of Fig. 9). The *K* = 3 successive corrections smoothers respond less to the change of estimation location, while the *K* = 1 case (i.e., simple Gaussian smoothing) is the least affected.

Figures 10 and 11 demonstrate how the smoothers behave at a data gap. Once again, where the data exist, they have a uniform data spacing of 0.05, while Δ = 0.05. In both figures the gap width varies from 0.2 through 1.0. Figure 10 shows the smoother weights and transfer functions for estimates located in the center of the gap, while in Fig. 11 the estimation points lie one quarter of the way across the gap.

For an estimate at the center of the gap (Fig. 10), increasing the gap width results in smoother weights that are dominated by the weights associated with the data very near the gap edges. Comparison of Fig. 10 with Fig. 7 shows that the near-edge data receive much more relative weight than the data near the center of the effective span of the smoothers for the case with uniform data. This is most pronounced for the loess smoother, which, for the span used in this example, is not capable of yielding an estimate for the widest gap since the half-span of the smoother is not wide enough to encompass any observations outside of the data gap (Figs. 10e and 11e). The effect on the transfer functions is to lower the cutoff frequency for the main lobe, and to increase the number and amplitudes of the sidelobes. Thus, estimates at the center of a data gap may be contaminated by high-frequency signal energy.
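The dominance of near-edge data can be quantified directly. The sketch below (ours) places a single-pass Gaussian-weighted estimate at the center of a gap of width 1.0 and confirms that the largest normalized weights fall on the data nearest the gap edges:

```python
import numpy as np

spacing, half_gap = 0.05, 0.5
side = np.arange(half_gap, 3.0 + 1e-9, spacing)
x = np.concatenate([-side[::-1], side])   # data exist only outside the gap
w = np.exp(-(x / 0.4) ** 2)               # Gaussian weights, sigma = 0.4,
w /= w.sum()                              # for an estimate at the gap center
edge = np.abs(np.abs(x) - half_gap) < 1e-9
edge_share = w[edge].sum()                # fraction of the total weight carried
                                          # by the two data nearest the gap
```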

It is noteworthy that both the fixed- and variable-span *K* = 3 estimates behave in a similar fashion when used to interpolate across data gaps: contrary to the conventional wisdom that often motivates the use of variable-span successive corrections, the first two iterations do not compensate for the presence of the data gap. Indeed, the utility of multiple iterations for the fixed-span case is questionable in this setting, since the *K* = 1 estimates have the smallest sidelobes in both Figs. 10 and 11, and are thus the least susceptible to aliasing of unresolved variability.

When the estimation point is closer to one of the gap edges (Fig. 11), the data points at the proximal edge are disproportionately weighted when the gap becomes large. The associated transfer functions do not have the distinct sidelobes present in Fig. 10. The difference in the weights and transfer functions between Figs. 10 and 11 implies that the estimates made at various locations across the gap will differ substantially in terms of their frequency content.

As an example of how the successive corrections method responds to a data gap, four implementations of the method were applied to a simulated dataset (Fig. 12). A 161-point realization of a first-order autoregressive process with parameter 0.7 and spacing 12.5 was generated. Utilizing a grid with the same spacing, successive corrections estimates were calculated with *K* = 1 and *σ* = 25; *K* = 3 with a fixed span and *σ* = 40; *K* = 5 with variable spans and *σ*_{i} = 500, 250, 125, 60, and 25 for *i* = 1, … , 5, respectively; and *K* = 3 with variable spans and *σ*_{1} = 125, *σ*_{2} = 60, and *σ*_{3} = 25. Figure 12a shows the estimates obtained when a gap width of 300 is formed in the dataset. Outside of the gap, the estimates from all four implementations of the successive corrections method are very similar. As previously noted, the two variable-span implementations yield results almost identical to those obtained from the single iteration with *σ* = 25. For the chosen span, the fixed-span estimates are nearly indistinguishable from the *K* = 1 and variable-span estimates. Within the data gap (the gray shaded area in Fig. 12a) the estimates follow the trend of the last few data near the gap edge. This behavior is consistent with the observations made above: the near-edge data are given a disproportionate weight (see Fig. 11). The sudden jump in the estimates at the center of the gap occurs when the span bridges the gap, the weights become symmetric at the gap center, and data on both edges of the gap are included in the estimate (see Fig. 10). The *K* = 1 estimates (dashed line) are the least variable across the gap.

The variability of the estimates in the gap is demonstrated by Fig. 12b, where the gap has been slightly narrowed to a width of 275, so that one additional datum on each side of the gap in Fig. 12a is included. This minor change in the dataset completely changes the estimates within the gap. Again, the high relative weight assigned to the near-edge data produces unstable estimates. The similarity of the two variable-span estimates is of particular interest. The inclusion of two extra iterations with relatively large spans (i.e., *σ*_{1} = 500 and *σ*_{2} = 250) does not stabilize the estimates, as is generally presumed in the use of successive corrections.

Because of the erroneous nature and extreme sensitivity of the estimates within the gap, the parameters used to generate the grid of estimates in Fig. 12 are clearly not appropriate for extrapolating across the given gap. Figure 13 shows a much smoother set of estimates obtained using the previous sets of spans increased by a factor of 5. The estimates so obtained are insensitive to the slight change in gap width, but clearly do not approximate the data outside of the gap as well as the estimates in Fig. 12. This demonstrates a fundamental trade-off that must be made when smoothing a dataset. High-resolution estimates (with a small amount of smoothing) can well approximate the data in densely sampled parts of the data record, but will be meaningless in data gaps. Low-resolution estimates (with a large amount of smoothing) more strongly filter the data and retain only relatively low-frequency features, but are less sensitive to data gaps. To retain uniform frequency content and accuracy in an interpolated time series, the choice of smoothing parameters is dictated by the widths of the data gaps. It is clear from Fig. 12 that this limitation is not mitigated by variable-span successive corrections, although this is usually the motivation for applying this method.
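The trade-off described above can be reproduced in a few lines. The sketch below (ours; single-pass Gaussian smoothing rather than the full multi-pass implementations of Fig. 12) builds an AR(1) realization with the paper's parameters, cuts a gap, and forms estimates with a small and a fivefold-increased span. Since each single-pass estimate is a convex combination of the data, every estimate is bounded by the data range, but the small-span estimates inside the gap are far more sensitive to the gap configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 161
x = 12.5 * np.arange(n)                   # sample spacing 12.5
y = np.empty(n)                           # AR(1) realization, parameter 0.7
y[0] = rng.standard_normal()
for i in range(1, n):
    y[i] = 0.7 * y[i - 1] + rng.standard_normal()

gap = (x > 900) & (x < 1200)              # data gap of width ~300
xd, yd = x[~gap], y[~gap]

def gauss_estimates(eta, xd, yd, sigma):
    """Single-pass (K = 1) Gaussian-weighted estimates: each one is a
    convex combination of the data, hence lies within the data range."""
    w = np.exp(-((eta[:, None] - xd[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ yd

g_small = gauss_estimates(x, xd, yd, 25.0)    # high resolution, unstable in gap
g_large = gauss_estimates(x, xd, yd, 125.0)   # 5x span, stable but smoother
```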

## 5. Discussion

The examples considered here demonstrate the complex dependence of the filtering characteristics of the successive corrections method on the choices of the various parameters. For a fixed span, both the cutoff frequency and the relative error vary considerably with the number of iterations as well as with the selected span. More iterations result in a sharper filter rolloff with a higher cutoff frequency. A poor choice for the smoothing weights (e.g., the Cressman weighting function) can result in a divergent sequence of estimates. The configuration of the estimation grid can also affect how fixed-span successive corrections filter the data.

An important conclusion from the results presented here is that a variable-span successive corrections case results in filtering that is essentially independent of the number of iterations, and almost wholly dependent on the span of the function generating the analytical weights for the final iteration. The inutility of multiple iterations for variable-span successive corrections suggests that this method is computationally inefficient and should be avoided. The advantage of the fixed-span method, namely that more iterations result in a sharper filter, appears to be offset in data gaps by the presence of large sidelobes in the transfer functions that are less extreme for the *K* = 1 case of simple Gaussian smoothing. While all of the successive corrections estimates examined here are less sensitive to dataset edges than the loess smoother, it is not clear that the additional computational burden imposed by multiple-iteration successive corrections is compensated for by superior smoothing in general.

The quality of gridded fields produced by successive corrections will depend upon the correct choice of the parameters for the algorithm and an understanding of what the dataset is capable of resolving. As the examples in section 4 show, the filtering properties of any smoother can depend as much on the data distribution as on the selection of the smoother parameters. The fact that a given smoother and dataset allow the selection of a desired set of smoothing parameters does not imply that the nominal filtering will be achieved. If the parameters are chosen so that the low-pass filtering is insufficient to accommodate the data distribution, the resulting grid of estimates will be contaminated with spurious short-scale signals (cf. Figs. 12 and 13; Greenslade et al. 1997; Chelton and Schlax 1994), regardless of how fine the estimation grid spacing is.

## Acknowledgments

The authors wish to thank Michael Freilich, Steve Esbensen, and Mark Askelson for thoughtful reviews and comments. The research presented in this paper was supported by Contract 1206715 from the Jet Propulsion Laboratory for funding of Jason Science Working Team activities and by NASA Grant NAS5-32965 for funding of Ocean Vector Winds Science Team activities.

## REFERENCES

Achtemeier, G. L., 1987: On the concept of varying influence radii for a successive corrections objective analysis. *Mon. Wea. Rev.*, **115**, 1760–1771.

Andersson, E., and Coauthors, 1998: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). III: Experimental results. *Quart. J. Roy. Meteor. Soc.*, **124**, 1831–1860.

Barnes, S., 1964: A technique for maximizing details in numerical weather map analysis. *J. Meteor.*, **3**, 395–409.

Barnes, S., 1967: Mesoscale objective analysis using weighted time-series observations. NOAA Tech. Memo. ERL NSSL-62, National Severe Storms Laboratory, Norman, OK, 60 pp. [NTIS COM-73-10781.]

Bergthorsson, P., and B. Doos, 1955: Numerical weather map analysis. *Tellus*, **7**, 329–340.

Bracewell, R. N., 1986: *The Fourier Transform and Its Applications*. McGraw-Hill, 474 pp.

Bratseth, A., 1986: Statistical interpolation by means of successive corrections. *Tellus*, **38A**, 439–447.

Chelton, D. B., and M. G. Schlax, 1994: The resolution capability of an irregularly sampled dataset: With application to Geosat altimeter data. *J. Atmos. Oceanic Technol.*, **11**, 534–550.

Cleveland, W. S., 1979: Robust locally weighted regression and smoothing scatterplots. *J. Amer. Stat. Assoc.*, **74**, 829–836.

Courtier, P., and Coauthors, 1998: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: Formulation. *Quart. J. Roy. Meteor. Soc.*, **124**, 1783–1807.

Cressman, G., 1959: An operational objective analysis system. *Mon. Wea. Rev.*, **87**, 367–374.

Daley, R., 1991: *Atmospheric Data Analysis*. Cambridge University Press, 457 pp.

daSilva, A. M., C. C. Young, and S. Levitus, 1994: *Atlas of Surface Marine Data*. Vol. 1, *Algorithms and Procedures*, NOAA, 83 pp.

Greenslade, D. J. M., D. B. Chelton, and M. G. Schlax, 1997: The midlatitude resolution capability of sea level fields constructed from single and multiple satellite altimeter datasets. *J. Atmos. Oceanic Technol.*, **14**, 849–870.

Jones, R. H., 1972: Aliasing with unequally spaced observations. *J. Appl. Meteor.*, **11**, 245–254.

Josey, S. A., E. C. Kent, and P. K. Taylor, 1998: The Southampton Oceanography Centre (SOC) ocean–atmosphere heat, momentum and freshwater flux atlas. Southampton Oceanography Centre Rep. 6, Southampton, United Kingdom, 55 pp.

Josey, S. A., E. C. Kent, and P. K. Taylor, 1999: New insights into the ocean heat budget closure problem from analysis of the SOC air–sea flux climatology. *J. Climate*, **12**, 2856–2880.

Kent, E. C., P. K. Taylor, and P. G. Challenor, 2000: The effect of successive correction on variability estimates for climatological datasets. *J. Climate*, **13**, 1845–1857.

Klinker, E., F. Rabier, G. Kelly, and J-F. Mahfouf, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. III: Experimental results and diagnostics with operational configuration. *Quart. J. Roy. Meteor. Soc.*, **126**, 1191–1215.

Koch, S., M. Desjardins, and P. Kocin, 1983: An interactive Barnes objective map analysis scheme for use with satellite and conventional data. *J. Climate Appl. Meteor.*, **22**, 1487–1503.

Levitus, S., 1982: *Climatological Atlas of the World Ocean*. NOAA Prof. Paper 13, 173 pp. and 17 microfiche.

Levitus, S., and T. P. Boyer, 1994: *Temperature*. Vol. 4, *World Ocean Atlas 1994*, NOAA Atlas NESDIS, 117 pp.

Liu, W. T., W. Tang, and P. S. Polito, 1998: NASA scatterometer provides global ocean–surface wind fields with more structures than numerical weather prediction. *Geophys. Res. Lett.*, **25**, 761–764.

Mahfouf, J-F., and F. Rabier, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. II: Experimental results with improved physics. *Quart. J. Roy. Meteor. Soc.*, **126**, 1171–1190.

Perigaud, C., 1990: Sea level oscillations observed with GEOSAT along two shear fronts of the Pacific North Equatorial Countercurrent. *J. Geophys. Res.*, **95** (C5), 7239–7248.

Priestley, M. B., 1981: *Spectral Analysis and Time Series*. Academic Press, 890 pp.

Press, W. H., S. A. Teukolsky, W. T. Vettering, and B. P. Flannery, 1992: *Numerical Recipes in FORTRAN*. Cambridge University Press, 963 pp.

Rabier, F., A. McNally, E. Andersson, P. Courtier, P. Unden, J. Eyre, A. Hollingsworth, and F. Bouttier, 1998: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). II: Structure functions. *Quart. J. Roy. Meteor. Soc.*, **124**, 1809–1829.

Rabier, F., H. Jarvinen, E. Klinker, J-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. *Quart. J. Roy. Meteor. Soc.*, **126**, 1143–1170.

Schlax, M. G., and D. B. Chelton, 1992: Frequency domain diagnostics for linear smoothers. *J. Amer. Stat. Assoc.*, **87**, 1070–1081.

Schlax, M. G., D. B. Chelton, and M. H. Freilich, 2001: Sampling errors in wind fields constructed from single and tandem scatterometer datasets. *J. Atmos. Oceanic Technol.*, **18**, 1014–1036.

Stephens, J., 1967: Filtering responses of selected distance-dependent weight functions. *Mon. Wea. Rev.*, **95**, 45–46.

Stephens, J., and J. Stitt, 1970: Optimum influence radii for interpolation with the method of successive corrections. *Mon. Wea. Rev.*, **98**, 680–687.

Stephens, J., and A. Polan, 1971: Spectral modification by objective analysis. *Mon. Wea. Rev.*, **99**, 374–378.

Vasquez, J., V. Zlotnicki, and L. Fu, 1990: Sea level variabilities in the Gulf Stream between Cape Hatteras and 50°W: A GEOSAT study. *J. Geophys. Res.*, **95**, 17957–17964.

Yang, C., and R. Shapiro, 1973: The effects of the observational system and the method of interpolation on the computation of spectra. *J. Atmos. Sci.*, **30**, 530–536.

Fig. 2. Same as Fig. 1 except that the Cressman function is used to define the analytical weights.

Citation: Monthly Weather Review 130, 2; 10.1175/1520-0493(2002)130<0372:FTFFTM>2.0.CO;2

Fig. 3. (a) Smoother weights and (b) transfer function moduli for Gaussian-weighted, variable-span, successive corrections smoothers with number of iterations *K* = 2 and final span *σ*_{2} = 1, for initial span *σ*_{1} = 1 (dotted line), 2 (dashed line), 3 (solid line), 4 (heavy dotted line), 5 (heavy dashed line), and 6 (heavy solid line). (c) Cutoff frequencies and (d) relative errors of the smoothers in (a) and (b), as functions of *σ*_{1}.

Fig. 4. (a) Cutoff frequencies and (b) relative errors for Gaussian-weighted, variable-span, successive corrections smoothers with number of iterations *K* = 3 and final span *σ*_{3} = 1, contoured as functions of *σ*_{1} and *σ*_{2}.

Fig. 5. (left) Smoother weights and (right) transfer function moduli for cubic Lagrange interpolation with grid spacing 0.05 (light lines) and 1.0 (heavy lines), to points located at the grid midpoints (solid lines) and one quarter of the way between the grid points (dashed lines). Note that the light solid and dashed lines nearly coincide.

Fig. 6. (left) Smoother weights and (right) transfer function moduli for three-iteration, Gaussian-weighted (a) fixed-span and (b) variable-span successive corrections smoothers for estimation grid point spacings of 0.05 (solid line), 0.3 (dashed line), 0.5 (dotted line), 0.75 (heavy dashed line), and 1.0 (heavy solid line). The spans were chosen so that the transfer functions would have unit cutoff frequency when the estimation grid locations corresponded to the data that were located at intervals of 0.05. The variable-span successive corrections smoother used spans *σ*_{1} = 4*σ*_{3}, *σ*_{2} = 2*σ*_{3}, and *σ*_{3} = 0.274.

Fig. 7. (left) Smoother weights and (right) transfer function moduli for the loess smoother (solid line); Gaussian-weighted, fixed-span successive corrections smoothers with *K* = 1 (dashed line) and *K* = 3 (dotted line); and a Gaussian-weighted, variable-span successive corrections smoother with *K* = 3 and spans *σ*_{1} = 4*σ*_{3} and *σ*_{2} = 2*σ*_{3} (heavy dotted line). The estimation grid spacing for the successive corrections smoothers was 0.05. (a) Uniform data with spacing 0.05, (b) uniform data with spacing 0.3.

Fig. 8. Same as Fig. 7 except using (a) random data with nominal spacing 0.05, and (b) random data with nominal spacing 0.3.

Fig. 9. Same as Fig. 7 except using uniform data with spacing 0.05 only for time < 0. Note the change in the ordinate scales from Figs. 7 and 8.

Fig. 10. Same as Fig. 7 except using uniform data with spacing 0.05 with various width data gaps and estimation points in the center of the gaps. Gap widths are (a) 0.2, (b) 0.4, (c) 0.6, (d) 0.8, and (e) 1.0. The loess estimate is not shown in (e).

Fig. 11. Same as Fig. 10 except with estimation points located one-quarter of the gap width from the left edge of the gap.

Fig. 12. Successive corrections estimates applied to a simulated dataset with a data gap. The data used in the estimates are marked as solid dots. The data removed to form the gap are marked as open circles. The shaded region shows the location of the data gap. The four estimates are: *K* = 1, *σ* = 25 (dashed line); *K* = 3 and a fixed span with *σ* = 40 (dotted line); *K* = 5, *σ*_{1} = 500, *σ*_{2} = 250, *σ*_{3} = 125, *σ*_{4} = 60, and *σ*_{5} = 25 (thin solid line); and *K* = 3 with *σ*_{1} = 125, *σ*_{2} = 60, and *σ*_{3} = 25 (heavy solid line). (a) Gap width 300, (b) gap width 275.

Fig. 13. Same as Fig. 12 except that the spans of the successive corrections estimators are all increased by a factor of 5.

^{1} Assuming that the spectral representation (e.g., Priestley 1981) holds, then *y*_{n} = ∫^{∞}_{−∞} exp(2*πifx*_{n})*F*(*f*) *df*, where *F*(*f*) is the Fourier transform associated with the stochastic process generating the *y*_{n}. Direct substitution into (2) leads to *g*_{m} = ∫^{∞}_{−∞} *P̂*^{*}_{m}(*f*)*F*(*f*) *df*, the form applied by Schlax and Chelton (1992). This form also follows from the power theorem (e.g., Bracewell 1986). The earlier works of Jones (1972) and Yang and Shapiro (1973) (that were drawn to our attention by M. Askelson) preferred to write the equivalent form *g*_{m} = ∫^{∞}_{−∞} *ϕ̂*^{*}_{m}(*f*)*F*(*f*) exp(2*πifη*_{m}) *df*, with *ϕ̂*_{m}(*f*) = *P̂*_{m}(*f*) exp(−2*πifη*_{m}), wherein the estimate *g*_{m} appears as the Fourier transform of the product of *F* and the transfer function *ϕ̂*_{m}. Writing *P̂*_{m}(*f*) = ‖*P̂*_{m}(*f*)‖ exp(2*πiχ*) leads to *ϕ̂*_{m}(*f*) = ‖*P̂*_{m}(*f*)‖ exp[2*πi*(*χ* − *fη*_{m})]. Since the description of the filtering properties of the successive corrections algorithm here depends only upon ‖*P̂*_{m}(*f*)‖, this difference in notation is not significant for present purposes. The bias calculations of Schlax and Chelton (1992) are consistent with their definition of *P̂*_{m}(*f*), and this formulation is in accord with their applications, in which it is assumed that only ‖*F*(*f*)‖ is known a priori.