Versatile Harmonic Tidal Analysis: Improvements and Applications

M. G. G. Foreman, Institute of Ocean Sciences, Fisheries and Oceans Canada, Sidney, British Columbia, Canada

J. Y. Cherniawsky, Institute of Ocean Sciences, Fisheries and Oceans Canada, Sidney, British Columbia, Canada

V. A. Ballantyne, Canadian Hydrographic Service, Fisheries and Oceans Canada, Sidney, British Columbia, Canada

Abstract

New computer software that permits more versatility in the harmonic analysis of tidal time series is described and tested. Specific improvements to traditional methods include the analysis of randomly sampled and/or multiyear data; more accurate nodal correction, inference, and astronomical argument adjustments through direct incorporation in the least squares matrix; multiconstituent inferences from a single reference constituent; correlation matrices and error estimates that facilitate decisions on the selection of constituents for the analysis; and a single program that analyzes one- or two-dimensional time series. This new methodology is evaluated through comparisons with results from old techniques and then applied to two problems that could not have been accurately solved with older software. They are (i) the analysis of ocean station temperature time series spanning 25 yr, and (ii) the analysis of satellite altimetry from a ground track whose proximity to land has led to significant data dropout. This new software is free as part of the Institute of Ocean Sciences (IOS) Tidal Package and can be downloaded, along with sample input data and an explanatory readme file.

Corresponding author address: M. G. G. Foreman, Institute of Ocean Sciences, Fisheries and Oceans Canada, P.O. Box 6000, Sidney, BC V8L 4B2, Canada. Email: mike.foreman@dfo-mpo.gc.ca

1. Introduction

There have been many advances in tidal analysis and prediction since the earliest documented predictions for the bore on the Chhien-Thang River in China and the flood tide at London Bridge in the eleventh and thirteenth centuries, respectively (Cartwright 1999). Parker (2007) has recently published a guide on the various considerations that are needed, and contemporary approaches that can be used, to carry out accurate tidal analyses and predictions. One of the most successful and widely used approaches has been, and continues to be, harmonic analysis, wherein the energy at specific tidal frequencies is determined by a mathematical fitting procedure, usually least squares. Though computer software that performs harmonic tidal analysis of one- and two-dimensional time series has been available for more than 40 yr (links to software packages are available online at http://www.pol.ac.uk/psmsl/training/analysis.html), many of these codes are restrictive in both the form of the input time series (e.g., regularly sampled, albeit with gaps) and the manner in which nodal correction, astronomical argument, and inference calculations are made (e.g., as adjustments to results from a least squares fit). In the early days of harmonic analysis, these restrictions arose from computer limitations that necessitated efficiency more than accuracy in the algorithms (Godin 1972; Foreman 1977, henceforth F77). However, present computer capacities mean that these restrictions need no longer apply.

In this study, we develop and test a more versatile harmonic analysis technique that can accept randomly sampled data and embed the nodal and astronomical argument corrections and multiple inference calculations into an overdetermined matrix that is solved using singular value decomposition (SVD) techniques (Golub and Van Loan 1983; Press et al. 1992). The input time series is also allowed to be one- or two-dimensional, thereby eliminating the need for separate programs to analyze tidal heights and currents. In the latter case, the final harmonics are also expressed in terms of current ellipse parameters. Though the use of randomly sampled data raises issues of constituent selection and independence, the SVD approach allows for the calculation of covariance matrices and correlation coefficients that permit an assessment of these dependencies (Cherniawsky et al. 2001). Thus, an iterative approach can be used to determine which constituents should be sought directly and which should be inferred. Embedding the inference calculations into the overdetermined matrix means that the inferred constituents will affect all other constituents included in the analysis, not only the reference constituent that is the basis of the inference. In addition, embedding the nodal corrections in the matrix not only has a similar effect for the satellite and major constituents, but it also removes the need for assumptions that underlie usual postfit corrections and that may restrict the length of the analysis period (F77).

The analysis is first tested by comparing its results against those arising from the F77 conventional approach for a pair of synthetic time series with and without background noise. A direct assessment of accuracy is possible, because the amplitudes and phases of the constituents are known. A second test is then performed with a 12-yr time series to demonstrate the effects of embedding the astronomical argument and nodal correction calculations directly into the least squares fit, rather than as postfit adjustments. To demonstrate the use of correlation coefficients in constituent selection, further tests are performed with a synthetic time series of randomly sampled data. Finally, two examples are given to illustrate the versatility of the new technique. The first is the analysis of temperature and salinity observations from conductivity–temperature–depth (CTD) stations that span 25 yr, and the second is the analysis of Ocean Topography Experiment (TOPEX)/Poseidon (T/P) satellite altimetry along a short track crossing the Strait of Georgia, whose proximity to land has led to significant data dropout.

2. Traditional and versatile harmonic analyses

Tidal potential theory (Doodson 1921) predicts the existence of hundreds of tidal frequencies, each of which can be expressed as a linear combination of the rates of change of mean lunar time and five astronomical variables that uniquely specify the position of the sun and moon. For each frequency, the six integer coefficients associated with this linear combination are referred to as its Doodson numbers. However, it is neither practical nor mathematically feasible to include all constituents in every analysis, because many frequencies are so close that a time series of several years' duration is required to separate some neighbors by one cycle, while others, according to potential theory, should have very small amplitudes. Godin (1972) resolved this dilemma by defining constituent "clusters" that have the same first three Doodson numbers. Each cluster was assigned the name of its major constituent (in terms of tidal potential amplitude), while the lesser constituents were termed "satellites." Harmonic tidal analysis then followed two steps: (i) all satellites were ignored and amplitudes and phases were determined for all major constituents that could be resolved given the time series length; (ii) a so-called nodal correction was performed to account for the presence of the satellites and, if necessary, an inference was carried out to correct for important missing major constituents. More details on both these steps will be given later.

The five astronomical variables referred to previously are associated with the 27.32 day, 365.24 day, 8.85 yr, 18.6 yr, and 21 000 yr cycles arising from variations in the mean longitude of the moon, the mean longitude of the sun, the longitude of the lunar perigee, the longitude of the moon's nodal progression (inclination of its orbit to the equator), and the longitude of the solar perigee, respectively. Though the term "nodal correction" was originally coined before the advent of modern computers to designate corrections for only the moon's nodal progression that were not incorporated into the astronomical argument calculation for the main constituent, the term "satellite modulation" is more appropriate now, because the correction has been extended to include the effects of variations in lunar and solar perigees. In mathematical terms, the harmonic analysis approach originally proposed by Godin (1972) and employed by F77 and Pawlowicz et al. (2002, henceforth PBL02) assumed that a one-dimensional time series with tidal and nontidal energies can be expressed as
h(t_j) = Z_0 + \sum_{k=1}^{n} f_k(t_0)\, A_k \cos[\omega_k (t_j - t_0) + V_k(t_0) + u_k(t_0) - g_k] + R(t_j),    (1)
where h(tj) is the measurement at time tj; Z0 is a constant background value; fk(t0) and uk(t0) are the nodal corrections to amplitude and phase, respectively, at some reference time t0 for major constituent k with frequency ωk; Ak and gk (k = 1, …, n) are the amplitude and phase lag of constituent k, respectively; Vk(t0) is the astronomical argument for constituent k at time t0; R(tj) is the nontidal residual; and n is the number of tidal constituents. A least squares approach is usually employed to solve for Z0, Ak, and gk; the observation times are often assumed to arise from regular sampling (e.g., hourly, though gaps are permitted); and the number n and specific constituents k selected for the analysis are usually determined in accordance with the time series length and the estimated background noise level.
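
For readers who prefer code to notation, the following is a minimal sketch of the traditional fit in Eq. (1), written in Python/NumPy purely for illustration (the IOS package itself is FORTRAN, and none of the names below belong to it). It assumes that the constituent frequencies, the reference-time nodal factors fk(t0) and uk(t0), and the astronomical arguments Vk(t0) have already been computed, and it solves for Z0 and the pairs Xk = Ak cos gk, Yk = Ak sin gk by ordinary least squares.

```python
import numpy as np

def classical_harmonic_fit(t, h, omega, f0, u0, V0):
    """Traditional least squares fit of Eq. (1).

    t     : observation times in hours relative to the reference time t0
    h     : observed heights
    omega : constituent frequencies (radians per hour)
    f0, u0, V0 : nodal factor, nodal phase, and astronomical argument (radians),
                 all evaluated once at t0 and held fixed, as in Godin (1972)/F77.
    """
    m, n = len(t), len(omega)
    A = np.ones((m, 2 * n + 1))                # first column multiplies Z0
    for k in range(n):
        arg = omega[k] * t + V0[k] + u0[k]
        A[:, 2 * k + 1] = f0[k] * np.cos(arg)  # multiplies X_k = A_k cos(g_k)
        A[:, 2 * k + 2] = f0[k] * np.sin(arg)  # multiplies Y_k = A_k sin(g_k)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    Z0, X, Y = coef[0], coef[1::2], coef[2::2]
    amp = np.hypot(X, Y)                           # A_k
    phase = np.degrees(np.arctan2(Y, X)) % 360.0   # g_k, in degrees
    return Z0, amp, phase
```
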
Deciding which major constituents should be included in the first stage of a harmonic analysis is not easy. Whereas the so-called Rayleigh criterion (Godin 1972) argues that a time series of length T is required to distinguish between constituents with a frequency separation of T−1, linear algebra suggests that m independent observations h(tj) should be sufficient to solve a matrix equation for m/2 amplitudes and phases, as described in Eq. (1). In actuality, the decision also needs to consider the nontidal signal [R(tj) in Eq. (1)] and the condition number (Ortega 1972) of the least squares matrix. Munk and Hasselmann (1964) refined the Rayleigh criterion by showing that “meaningful statements” can be made about the tidal energies associated with frequencies ω1 and ω2 provided
|\omega_1 - \omega_2|\, T > (\mathrm{signal}/\mathrm{noise})^{-1/2},    (2)
where signal/noise is the ratio of the tidal variance to the nontidal variance. Foreman and Henry (1989) extended the selection analysis further by using standard matrix theory. Assuming that 𝗔x = b and 𝗔x′ = b′ are the matrix equations associated with (1) when the background noise R is zero and nonzero, respectively; K(𝗔) is the condition number for 𝗔; and ‖ · ‖ is a measurement norm, matrix theory (Ortega 1972) states that
\frac{\| x' - x \|}{\| x \|} \le K(\mathbf{A})\, \frac{\| b' - b \|}{\| b \|}.    (3)
In the context of tidal analysis, this inequality has the following interpretation: the term ‖b′ − b‖/‖b‖ is the inverse of Munk and Hasselmann’s (1964) signal-to-noise ratio, while ‖x′ − x‖/‖x‖ is the fractional error in the fitted amplitudes and phases. The effect of including relatively close frequencies (in the Rayleigh criterion sense) in the harmonic analysis is to make the columns of 𝗔 more linearly dependent and increase K(𝗔). So, the combination of relatively close frequencies with substantial background noise relative to the signal should cause relatively large differences between the calculated set of parameters x′ and their true values x. However, if the frequencies are not close and/or the background noise is small, the fitted solution should be more accurate.
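
A small numerical experiment makes the bound in (3) concrete. The sketch below (illustrative Python; the K1 and P1 periods are standard published values, while the amplitudes and record length are invented) builds a two-constituent design matrix for a 30-day hourly record, which is too short to separate the pair by the Rayleigh criterion, perturbs the right-hand side with 25% uniform noise, and compares the resulting fractional parameter error with K(𝗔) times the fractional perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.arange(0.0, 30 * 24.0)                              # 30 days, hourly (h)
w_k1, w_p1 = 2 * np.pi / 23.93447, 2 * np.pi / 24.06589    # K1 and P1 (rad/h)
A = np.column_stack([np.cos(w_k1 * t), np.sin(w_k1 * t),
                     np.cos(w_p1 * t), np.sin(w_p1 * t)])

x_true = np.array([0.40, 0.10, 0.13, 0.03])    # hypothetical X, Y pairs (m)
b = A @ x_true
noise = rng.uniform(-0.5, 0.5, len(b))
noise *= 0.25 * np.std(b) / np.std(noise)      # 25% noise-to-signal (std dev)
b_noisy = b + noise

x_fit, *_ = np.linalg.lstsq(A, b_noisy, rcond=None)

K = np.linalg.cond(A)                          # ratio of extreme singular values
frac_err = np.linalg.norm(x_fit - x_true) / np.linalg.norm(x_true)
bound = K * np.linalg.norm(b_noisy - b) / np.linalg.norm(b)
print(f"K(A) = {K:.1f}: fractional error {frac_err:.3f} <= bound {bound:.3f}")
```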

Though Godin’s (1972) harmonic tidal analysis approach has been used successfully for many years, it does have limitations. The first and perhaps foremost among these is the underlying assumption of stationarity, wherein the tidal amplitude and phase for each constituent are assumed to remain constant over the period of the time series. In shallow water, this assumption is often invalid as a result of nonlinear interactions between the tide and storm surges or variable river discharge that change tidal amplitudes and phases during the periods when these phenomena occur. However, this is generally not a serious problem, as the effects are often small or of short duration. In cases where the effects are more substantial, such as for internal tidal currents that change with the stratification or seasonally varying ice cover that can modify both tidal elevation and current harmonics, wavelet analysis (Jay and Flinchem 1997, 1999) has been used successfully, though it does not actually produce improved amplitudes and phases.

As described earlier, the second limitation is that the time series may not be sufficiently long to adequately separate the energy from constituents that are close in frequency, or are aliased as a result of infrequent sampling. Though this can be overcome through the use of inference—wherein the amplitude and phase of lesser constituents are assumed to have specific relationships to a larger reference constituent—the choice of inference parameters and the manner in which the calculation is carried out can limit the accuracy. This will be discussed and illustrated later.

The third limitation arises from the implementation of the nodal corrections and, to a much lesser extent, the astronomical argument. The use of ωk(tj − t0) + Vk(t0) in Eq. (1) assumes that the astronomical argument for constituent k, Vk(tj), varies linearly about some reference time t0. Though this is generally a very good assumption, it does tend to break down as the time series extends over several years. On the other hand, assuming that fk and uk remain constant with their values for t0 is more questionable. For many constituents, not only do these nodal variations change significantly over the 18.6- and 8.85-yr cycles, but also the manner in which the corrections are implemented (Godin 1972; F77; PBL02) can cause the accuracy of analysis results to deteriorate as the time series extends beyond one year.

In an effort to remove some of these deficiencies and limitations, a new harmonic analysis program has been developed that starts by replacing Eq. (1) with
h(t_j) = Z_0 + a\, t_j + \sum_{k=1}^{n} f_k(t_j)\, A_k \cos[V_k(t_j) + u_k(t_j) - g_k] + R(t_j).    (4)
In this case, V, u, and f are evaluated at the precise times of each measurement, thus eliminating inaccuracies that arise from assuming a linear variation in the astronomical argument [i.e., Vk(tj) = ωk(tj − t0) + Vk(t0)] and temporally constant values for the nodal corrections. In addition, this program includes a linear trend (coefficient a), allows the measurement times tj to arise from arbitrary sampling, and permits multiconstituent inferences (i.e., more than one constituent can be inferred from a single reference constituent) that are computed directly within the least squares fit, rather than as a correction to postfit values. The nodal correction and astronomical argument parameters f, u, and V are also embedded in the least squares matrix. (Differences between the old and new approaches are illustrated schematically in Fig. 1.) This not only eliminates the need for postfit corrections, but it also removes the restriction that analysis periods should not be much longer than one year (PBL02). That said, time series longer than 18.6 yr can avoid nodal corrections completely and are better analyzed using techniques that include the satellite constituents directly (Foreman and Neufeld 1991).
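
A sketch of how the design matrix for Eq. (4) might be assembled is given below (illustrative Python, not the IOS package interface). The essential difference from the traditional fit is that the nodal factors and astronomical arguments are supplied at the exact, possibly irregular, observation times, and a linear-trend column is included.

```python
import numpy as np

def versatile_design_matrix(t, f, u, V):
    """Design matrix for Eq. (4).

    t       : arbitrary (possibly irregularly spaced) observation times
    f, u, V : arrays of shape (n_constituents, len(t)) holding the nodal factor,
              nodal phase correction, and astronomical argument (radians)
              evaluated at the exact time of every observation.
    Columns: [1, t, f_1 cos(V_1+u_1), f_1 sin(V_1+u_1), f_2 cos(...), ...].
    """
    cols = [np.ones_like(t), t]              # multiply Z0 and the trend a
    for fk, uk, Vk in zip(f, u, V):
        theta = Vk + uk
        cols.append(fk * np.cos(theta))      # multiplies X_k
        cols.append(fk * np.sin(theta))      # multiplies Y_k
    return np.column_stack(cols)
```
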
Setting
X_k = A_k \cos g_k, \qquad Y_k = A_k \sin g_k,    (5)
Eq. (4) can be re-expressed as the system of j = 1,m linear equations
h(t_j) = Z_0 + a\, t_j + \sum_{k=1}^{n} f_k(t_j) \{ X_k \cos[V_k(t_j) + u_k(t_j)] + Y_k \sin[V_k(t_j) + u_k(t_j)] \} + R(t_j).    (6)
If m > 2(n + 1), this system is overdetermined and is usually solved by minimizing
\sum_{j=1}^{m} R(t_j)^2 = \sum_{j=1}^{m} \left( h(t_j) - Z_0 - a\, t_j - \sum_{k=1}^{n} f_k(t_j) \{ X_k \cos[V_k(t_j) + u_k(t_j)] + Y_k \sin[V_k(t_j) + u_k(t_j)] \} \right)^2    (7)
with respect to the unknowns Z0, a, and Xk, Yk, for k = 1,n. Though there are a variety of techniques for performing this least squares fit (F77 used the Cholesky algorithm), the approach chosen here is the SVD algorithm (Golub and Van Loan 1983; Press et al. 1992) described in Cherniawsky et al. (2001). In addition to being accurate and efficient, SVD has other advantages that will be explained shortly.
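
A minimal SVD solver in the spirit of Press et al. (1992, section 15.4) is sketched below (illustrative Python; names and layout are assumptions, not the IOS package interface). Besides the coefficient vector, it returns the matrix condition number used in (3) and the unscaled covariance matrix that section 4 draws on for error and correlation estimates.

```python
import numpy as np

def svd_lsq(A, h, rcond=1e-10):
    """Solve the overdetermined system (6)-(7) by singular value decomposition."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    keep = w > rcond * w.max()              # guard against near-singular modes
    winv = np.where(keep, 1.0 / w, 0.0)
    coef = (Vt.T * winv) @ (U.T @ h)        # pseudo-inverse solution
    cond = w.max() / w[keep].min()          # matrix condition number
    cov_unscaled = (Vt.T * winv**2) @ Vt    # V diag(1/w^2) V^T
    return coef, cond, cov_unscaled
```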

Though it might be viewed as a disadvantage rather than an improvement, another change with this new program is that the constituents to be used in the analysis are not selected automatically. They must be specified by the user. This was deemed necessary, as the provision of arbitrary sampling generally means that variations of the Rayleigh criterion are no longer valid for determining constituent selection. F77 and PBL02, for example, employ a Rayleigh criterion decision tree (see Tables 1–4 in F77) that provides a hierarchy of constituent selection based on frequency separation and tidal potential amplitudes such that when a time series is not sufficiently long to separate two neighboring constituents, only the one with the larger expected amplitude is included directly in the analysis. On the other hand, the SVD approach produces a covariance matrix and correlation coefficients (see Cherniawsky et al. 2001) that allow a direct method for evaluating the (in)dependence of the chosen constituents. It is, therefore, relatively easy to perform a series of tests to determine the best choice of constituents. This issue will be discussed in more detail later.

The mathematics underlying multiple inferences is relatively straightforward. Assume that Ak0, gk0, fk0(tj), uk0(tj), and Vk0(tj) are the amplitude, phase lag, nodal amplitude correction, nodal phase correction, and astronomical argument, respectively, for the reference constituent at time tj, whereas Ai, gi, fi(tj), ui(tj), and Vi(tj) are the analogous values for the i = 1,Nk0 constituents to be inferred from that reference constituent. The nodal correction values and astronomical arguments can be calculated for each time tj, whereas the amplitudes and phases can be computed from the harmonic analysis. In particular, once the amplitude ratios ri = Ai/Ak0 and phase differences ϕi = gk0 − gi between the reference and inferred constituents are specified (usually from previous analyses at the same or nearby locations), the only remaining unknowns are Ak0 and gk0, which can be determined as follows. Setting
S(t_j) = f_{k_0}(t_j) \cos[V_{k_0}(t_j) + u_{k_0}(t_j)] + \sum_{i=1}^{N_{k_0}} r_i\, f_i(t_j) \cos[V_i(t_j) + u_i(t_j) + \phi_i], \qquad T(t_j) = f_{k_0}(t_j) \sin[V_{k_0}(t_j) + u_{k_0}(t_j)] + \sum_{i=1}^{N_{k_0}} r_i\, f_i(t_j) \sin[V_i(t_j) + u_i(t_j) + \phi_i],    (8)
the contribution from the reference constituent and those to be inferred at time tj,
f_{k_0}(t_j)\, A_{k_0} \cos[V_{k_0}(t_j) + u_{k_0}(t_j) - g_{k_0}] + \sum_{i=1}^{N_{k_0}} f_i(t_j)\, A_i \cos[V_i(t_j) + u_i(t_j) - g_i],    (9)
can be re-expressed [in analogy with (6)] as
S(t_j)\, X_{k_0} + T(t_j)\, Y_{k_0},    (10)
where the unknowns to be determined by solving an overdetermined system of linear equations are
X_{k_0} = A_{k_0} \cos g_{k_0}, \qquad Y_{k_0} = A_{k_0} \sin g_{k_0}.    (11)
The amplitude and phase of the reference constituent are then recovered as
A_{k_0} = (X_{k_0}^2 + Y_{k_0}^2)^{1/2}, \qquad g_{k_0} = \tan^{-1}(Y_{k_0} / X_{k_0}),    (12)
while those for the inferred constituents are computed using the prescribed amplitude ratios and phase differences. By simply replacing the coefficients of Xk and Yk in (6) with those for Xk0 and Yk0 in (10), the inferences can be included directly in the least squares calculation rather than as postfit corrections.

3. Testing the improvements

To assess accuracy, the new methodology was tested with several synthetic time series that were generated using specified constituent amplitudes and phases and random background noise levels. Constituent nodal corrections and astronomical arguments were computed and incorporated at each time level in these syntheses, consistent with the new analysis approach and what would arise with actual observations. The noise was created by scaling uniform random numbers that were generated in the range [−0.5, 0.5] with subroutine RAN1 (Press et al. 1992), so that the ratio of their standard deviation to the standard deviation of the tidal signal was at a prescribed level.
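
The noise generation step is simple enough to show directly. The sketch below reproduces it in illustrative Python, with NumPy's uniform generator standing in for RAN1; the seed and function name are arbitrary.

```python
import numpy as np

def add_uniform_noise(signal, noise_to_signal, seed=0):
    """Add uniform random noise in [-0.5, 0.5), rescaled so that its standard
    deviation is a prescribed fraction of the tidal signal's standard deviation."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-0.5, 0.5, size=len(signal))
    noise *= noise_to_signal * np.std(signal) / np.std(noise)
    return signal + noise

# e.g., a synthetic series with a 25% noise-to-signal ratio:
# noisy = add_uniform_noise(tide, 0.25)
```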

For the first test, hourly elevations for Tofino, British Columbia (Fig. 2), were generated between 1 January 2008 and 1 March 2008 using the same mean sea level and the same amplitudes and phases for the largest eight constituents (Q1, O1, P1, K1, N2, M2, S2, and K2) as are employed by the Canadian Hydrographic Service in their annual tide table predictions (available online at http://www-sci.pac.dfo-mpo.gc.ca/tides_e.htm). Two time series were computed as described above: one without any background noise and the other with a 25% background noise level. (Yearly analyses of Tofino observations typically have noise-to-signal ratios of about 17%.) As the period required to separate the two constituent pairs K1/P1 and S2/K2 is approximately six months, |ω2 − ω1|T ≈ 0.33, and the Munk and Hasselmann (1964) criterion suggests that “meaningful” results should be possible by analyzing both time series without any inference. The same 51 constituents that F77 selects with a Rayleigh criterion value of 0.33 were included in all analyses. Table 1 shows essentially no difference between the analysis results arising from F77 and the new methodology. This indicates that for relatively short periods such as two months, the accuracy improvements arising from embedding the nodal corrections into the least squares matrix and evaluating the astronomical argument exactly (as opposed to using a linear approximation) are not appreciable. With no background noise, both techniques recovered the K1, P1, S2, and K2 signals to within machine precision; however, with 25% noise, they both had approximately 20% errors in the P1 and K2 amplitudes. The matrix condition number for the new methodology, computed as the ratio of the largest to the smallest singular values, was 15.3. So, in this case, the right-hand side of (3) was a generous upper bound.

As a second test, we repeated the previous analysis for only the January portion of the synthesized record and now inferred P1 and K2 using their exact amplitude ratio and phase difference relationships relative to K1 and S2, respectively. Table 2 shows analysis results for the eight constituents included in the synthesis as well as those for NO1, J1, L2, and ETA2, lesser constituents whose frequencies are close to those inferred. Again, the overall accuracy with which both methods recovered the amplitudes and phases of the original eight constituents is generally quite good. For example, the P1 amplitudes are within 4% of their true values in the 25% noise case. However, a notable deficiency of the old inference method is apparent in the amplitudes of NO1, J1, L2, and ETA2 for the no-noise case. The fact that P1 and K2 energies are present in the time series but not accounted for in the first stage (least squares fit) of the F77 harmonic analysis causes leakage into neighboring constituents that is not corrected in the subsequent postfit inference calculation. In fact, this leakage is still evident in the 25% noise results, as the F77 amplitudes for these four lesser constituents are consistently larger than those for the new method (whose errors arise solely from the background noise). Granted, the new inference method would also display some leakage in the no-noise case if the inference parameters were not exact, but results from the 25% noise case suggest that it would be less than that for F77.

To demonstrate the advantage of embedding the nodal corrections and astronomical arguments in the least squares matrix, we analyzed in a third test two 12-yr time series for Tofino generated with the same eight major constituents and the same noise levels. Though assumptions in the nodal correction technique developed by Godin (1972) and employed in F77 become increasingly invalid as the time series length extends beyond 1 yr, a 12-yr analysis can be done with the old method if appropriate array sizes are increased. The amplitudes and phases arising from these analyses are given in Table 3. Though the phase errors for all eight constituents are seen to be reasonably close to zero for both methods and for both noise levels, the F77 amplitude errors for constituents O1, K1, M2, and K2 are approximately 8%, 5%, 2%, and 15%, respectively, whereas the analogous values for the new method are essentially zero, even in the 25% noise case. This improvement is solely a result of having more accurate nodal corrections in the new method. The reason the phases were relatively consistent between methods is that the analysis period was approximately centered over 2006, which corresponds to K1/O1 maximum amplitudes (and M2/N2 minimum amplitudes) in their 18.6-yr variations, thereby making u(t0) [as defined in Eq. (1)] a good approximation. However, the same centering means that the associated amplitude corrections f(t0) for the old method were too large for the diurnals and too small for M2 and N2.

The foregoing 12-yr analysis was also carried out with the Foreman and Neufeld (1991) long (18.6 yr and more) analysis software, which avoids nodal corrections completely by directly including both satellites and major constituents in the least squares fit. A standard run with the full complement of satellite and major constituents solves for 529 pairs of amplitudes and phases, and because many of these have nearest neighbors with a frequency separation of (18.6 yr)−1, the matrix condition number can be large when the analyzed time series length is much less than 18.6 yr, as is the case here. With essentially no background noise (it only arises from rounding the synthesized hourly values to three decimal places), there were errors of 4%, 5%, and 13% in the O1, K1, and M2 amplitudes, respectively (Table 3). Increasing the resolution to four decimal places scarcely changed the results, so the matrix condition number must be the term dictating accuracy in Eq. (3). Reducing the number of constituents/satellites in the analysis to only those included in the synthesis improved the accuracy slightly (presumably by reducing the matrix condition number), while increasing the time series length to 19 yr produced the same accuracy as that for the new method, even with the full complement of 529 constituents. These tests demonstrate that if there is no reason to question the satellite amplitude ratios and phase differences based on tidal potential theory and inherent in standard nodal corrections, it is more accurate to use the new analysis method rather than the Foreman and Neufeld (1991) method for time series shorter than 18.6 yr.

4. Constituent selection and error estimates

To illustrate how SVD-generated correlations and error estimates can be used to assist in constituent selection, we first return to our analysis of the January 2008 portion of the synthetic Tofino time series with a 25% noise-to-signal ratio. This analysis used the same 38 constituents that were chosen with a Rayleigh constant of 0.97 and the Rayleigh criterion decision tree described in F77. Constituents P1 and K2 were inferred from K1 and S2, respectively, using the exact amplitude ratios and phase differences used in the synthesis. The RMS deviation for the original time series was 0.838 m, while the RMS residual after the least squares fit was 0.200 m, a value consistent with our expected 25% noise level.

As described in Cherniawsky et al. (2001), the SVD least squares solution also produces estimates of the correlation between all possible Xk and Yk [Eq. (5)] constituent combinations and error estimates for these same variables. However, these error estimates assume a normal distribution (Press et al. 1992) for the R(tj) residuals of Eq. (6), and as pointed out by Munk et al. (1965), PBL02, and others, the residual spectrum is generally more red than white, with cusps around each tidal frequency. So, the assumption of a normal distribution is questionable. If the time series had a constant sampling interval, then it would be relatively easy to follow PBL02 and use fast Fourier transform (FFT) methods to estimate the background variance in a frequency band around each—or at least each major—constituent, and then apply a parametric bootstrap method to provide better uncertainty estimates. However, with the provision for irregular sampling within the analyzed time series, it is not obvious how a single approach can be used to compute background variance estimates around tidal frequencies. Standard FFT techniques can be used with regular sampling, but they cannot be used with irregular sampling. Perhaps a more general Fourier approach that employs least squares to find the energy at specific frequencies separated by regular intervals could be developed, but such an approach raises the issues of what the interval and frequencies should be. Alternatively, if a long regularly sampled time series is available at the same or a nearby location to the one being analyzed, then a de-tided spectrum and more accurate amplitude and phase estimates could be computed following a bootstrap approach like that described in PBL02. Although this is an intriguing problem, it is beyond the scope of the present study. So for now, we are left with the estimates presently provided by an admittedly incorrect Gaussian assumption for the residuals.
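
Under that Gaussian assumption, the error and correlation estimates follow from the unscaled covariance matrix returned by the SVD fit sketched in section 2. The illustrative Python below scales it by the residual variance, in the manner of Press et al. (1992); the function names are assumptions, not the IOS package interface.

```python
import numpy as np

def parameter_errors(A, h, coef, cov_unscaled):
    """Error estimates and correlation coefficients for the fitted parameters,
    assuming white Gaussian residuals (the questionable assumption noted above).
    cov_unscaled is V diag(1/w^2) V^T from the SVD solution."""
    resid = h - A @ coef
    dof = len(h) - len(coef)            # degrees of freedom
    sigma2 = resid @ resid / dof        # residual variance estimate
    cov = sigma2 * cov_unscaled         # parameter covariance matrix
    err = np.sqrt(np.diag(cov))         # std dev of each Z0, a, X_k, Y_k
    corr = cov / np.outer(err, err)     # correlation coefficients
    return err, corr
```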

For the January 2008 analysis of synthetic Tofino elevations, the largest correlation was 0.157, between the Yk component of ETA2 and the Xk component of S2; the ratio of the largest to the smallest singular value was 2.11; and the error estimates for all the Xk and Yk constituent parameters ranged between 0.006 and 0.011 m. Despite the incorrect Gaussian assumption underlying their calculation, the actual amplitude errors listed in the sixth column of Table 2 compare favorably to these Xk and Yk error estimates. Apart from Z0, the inferred constituents P1 and K2, and the remaining six constituents used in the synthesis (Q1, O1, K1, N2, M2, and S2), only MU2, MK3, SK3, S4, 2SM6, and M8 had amplitudes that were more than twice as large as their standard deviation estimates. Thus, if we could assume that the postfit residuals [R(tj) in Eq. (1)] were Gaussian, a Student’s t test would only find these constituents to be statistically different from zero at the 95% level. Repeating the analysis with only these “significant” constituents plus Z0, and inferring P1 and K2, reduced the RMS postfit residual slightly to 0.198 m. It also decreased the largest correlation (now between M2 and N2) to 0.105, reduced the range of the Xk and Yk error estimates to 0.008 from 0.011 m, and maintained essentially the same accuracy (amplitudes to within 1 mm and phases to within 0.1°) as that shown in the sixth column of Table 2, thereby suggesting that the additional 23 constituents did not really contribute to the initial analysis.

The previous test used hourly sampled data for which the Rayleigh (F77), Munk and Hasselmann (1964), or Foreman and Henry (1989) criteria could provide reasonable guidance on constituent selection. In this next series of tests, we analyze randomly sampled data, so that the first two criteria do not apply. The dataset is based on the same 2008 synthesized Tofino time series used previously, but in this case we randomly select which hourly samples are to be used in the analysis. In the first test, we used the Press et al. (1992) subroutine RAN1 to randomly choose 744 hourly “observations” (with 25% noise) during the 1 January–30 June period. (As an aside, these observations need not be ordered chronologically for the analysis.) Table 4 shows the amplitudes and phases obtained by analyzing with the same set of 51 constituents that would be selected by the F77 decision tree for a 6-month record. Not only are the results reasonably accurate, but the ratio of the maximum to the minimum singular value (the matrix condition number) was 2.78 and the maximum correlation coefficient was 0.137, between the lesser constituents OO1 and UPS1. Removing all those constituents with amplitudes less than twice the error estimates and rerunning the analysis reduced the analysis set to only 20 constituents. The matrix condition number was now 2.40 and the largest correlation coefficient was 0.115, between UPS1 and TAU1. As shown in Table 4, the amplitude accuracy for this new analysis is close to the previous one with 51 constituents and, with one exception (K1), the phase errors have decreased.

The next tests analyzed 488 randomly selected hourly values during the period of 1 January–1 March with the same constituent set that F77 would choose for that period, plus P1 and K2. The matrix condition number was now 4.94 and, as would be expected, the largest correlation coefficients ranged between 0.822 and 0.832 for P1/K1 and K2/S2. Amplitude and phase errors for the major constituents are shown in Table 4. Repeating the analysis with P1 and K2 inferred and eliminating all constituents with amplitudes less than twice their error estimates improved the accuracy (Table 4) of all constituents involved in the inference, with the exception of the K1 amplitudes. The matrix condition number dropped to 1.71, and the largest correlation coefficient was now 0.155, between Q1 and O1.

Though many more tests could be performed, the preceding few have demonstrated that by monitoring correlation coefficients, matrix condition numbers, and Student’s t test values (albeit based on the generally incorrect assumption that the residuals have a Gaussian distribution), an iterative procedure can be used to determine the best set of constituents to be included with this new harmonic analysis.
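
One way such an iterative procedure could be organized is sketched below (illustrative Python built from the earlier sketches; the helper names, the column layout [Z0, trend, X1, Y1, ...], and the amplitude-error proxy are assumptions, not the IOS package logic). Constituents whose amplitudes fall below twice their error estimates are dropped and the fit is repeated until the list stabilizes, while the condition number and correlations are monitored along the way.

```python
import numpy as np

def iterative_selection(h, constituents, build_matrix, max_iter=10):
    """Iteratively drop constituents whose amplitudes are less than twice their
    error estimates.  build_matrix(constituents) is assumed to return a design
    matrix with columns [Z0, trend, X_1, Y_1, X_2, Y_2, ...]; svd_lsq and
    parameter_errors are the routines sketched in sections 2 and 4."""
    for _ in range(max_iter):
        A = build_matrix(constituents)
        coef, cond, cov_unscaled = svd_lsq(A, h)
        err, corr = parameter_errors(A, h, coef, cov_unscaled)
        X, Y = coef[2::2], coef[3::2]                  # skip Z0 and the trend
        amp = np.hypot(X, Y)
        amp_err = np.maximum(err[2::2], err[3::2])     # crude proxy for amplitude error
        keep = amp > 2.0 * amp_err
        if keep.all():
            break
        constituents = [c for c, ok in zip(constituents, keep) if ok]
    return constituents, coef, cond, corr
```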

5. New applications

a. CTD analyses

The first example is the analysis of CTD observations along lines A (LA) and B (LB) off the southwest coast of Vancouver Island (Fig. 2). Eleven and 12 stations along these respective lines have generally been sampled two to three times a year since 1980, with observations taken at standard depths of 0, 5, and 10 m and every 10 m thereafter down to the bottom or 2400 m. Though the stratification and internal tide patterns do change seasonally, it is feasible to restrict each CTD time series to one season and analyze for tidal variations in salinity and temperature. Here we restrict the observations to June through September, inclusive. There are between 18 and 52 observations at each standard depth at each station, and the analyses solve directly for a linear trend and the constituents Z0, K1, and M2, while inferring Q1, O1, and P1 from K1, and N2, S2, and K2 from M2. Inference parameters were taken from a 1-yr analysis of hourly tide gauge observations at Port Renfrew (Fig. 2). The objectives of the analyses are to (i) compute the magnitude of the tidal variations in these observations; (ii) determine if the M2 variations show evidence of internal tides (or at least that portion of the internal tide that is phase locked with the barotropic tide); and (iii) determine if the observations show a linear trend.
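
As a concrete illustration of how such a sparse, multiconstituent-inference fit might be set up with the pieces sketched in section 2 (again, hypothetical Python; the dictionary keys, helper names, and layout are not the IOS package interface), the design matrix for one CTD depth series could be assembled as follows, with the Q1/O1/P1 and N2/S2/K2 groups folded into the K1 and M2 columns.

```python
import numpy as np

def ctd_design_matrix(t, f, u, V, ratios, phases):
    """Columns: Z0, linear trend, then the combined S/T columns of Eq. (10) for
    K1 (carrying inferred Q1, O1, P1) and M2 (carrying inferred N2, S2, K2).
    f, u, V are dictionaries of per-observation arrays keyed by constituent name;
    ratios and phases hold the amplitude ratios and phase differences (radians)
    from a reference-station analysis such as Port Renfrew.  inference_columns
    is the routine sketched in section 2."""
    diurnal = ("Q1", "O1", "P1")
    semidiurnal = ("N2", "S2", "K2")
    S1, T1 = inference_columns(f["K1"], u["K1"], V["K1"],
                               [f[c] for c in diurnal], [u[c] for c in diurnal],
                               [V[c] for c in diurnal], ratios["K1"], phases["K1"])
    S2_, T2_ = inference_columns(f["M2"], u["M2"], V["M2"],
                                 [f[c] for c in semidiurnal], [u[c] for c in semidiurnal],
                                 [V[c] for c in semidiurnal], ratios["M2"], phases["M2"])
    return np.column_stack([np.ones_like(t), t, S1, T1, S2_, T2_])
```
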

The linear trend in temperature and the M2 temperature phases along line B are shown in Figs. 3 and 4, respectively. Average summer seasonal currents crossing this line include a near-surface shelf break current (SBC) flowing to the southeast, a near-surface Vancouver Island coastal current (VICC) flowing to the northwest, and a California Undercurrent (CUC) flowing to the northwest along the continental slope and centered at about 200-m depth (Freeland et al. 1984; Foreman et al. 2000). A warming of up to 0.05°C yr−1 in the SBC, a cooling of up to 0.03°C yr−1 in the VICC, and no change in the CUC are evident in Fig. 3. However, the standard deviation estimates associated with these values generally range between 0.01° and 0.03°C yr−1, so these trends are not statistically significant. Correlation coefficients between a and the other constituent parameters are generally less than 0.3.

Though the M2 phase patterns seen in Fig. 4 are noisy, there is the suggestion of a vertical mode structure at the edge of the continental shelf where internal tides are known to be generated (Drakopoulos and Marsden 1993). However, the pattern might also be due to barotropic tide advection of a temperature field with vertical and lateral structure. As with the linear trend results, the relatively small number of points in the time series means that these particular results are not statistically significant. Nevertheless, it has been demonstrated that long-term CTD analyses are feasible with this new program, so that as the time series continue to lengthen, statistically significant results might be expected.

It should be mentioned that these randomly sampled CTD time series could have been analyzed with the software that is described in Foreman and Henry (1979, henceforth FH79), which employs the same astronomical, nodal, and inference correction approach as F77. However, for analyses of relatively few observations spanning 25 yr, only a few constituents can be resolved. The FH79 approach should be less accurate than the new approach, because it assumes constant nodal correction parameters over the duration of the analysis period and it does not allow multiple inferences. Though the true tidal variations in temperature are not known in this case, a simple analysis of the 5-m temperature data at the line B station nearest to the shore seems to confirm this supposition. Using FH79 to solve directly for constituents O1, K1, M2, and S2 and inferring Q1, P1, N2, and K2, respectively, from those four produced M2 and K1 amplitudes that were 20% and 17% larger, respectively, and phases that were 17° smaller and 51° larger, respectively, than those described above with the new method. The fact that the overall K1 differences (FH79 minus new method) are larger than those for M2, even though the M2 temperature amplitudes themselves are larger, is consistent with the larger errors that would be expected in the FH79 nodal corrections during that period.

b. Satellite altimeter analyses

The second example is the analysis of altimeter data from the TOPEX/Poseidon satellite. In this case, the time series are restricted to the period from 30 September 1992 to 8 August 2002 and are taken from collocated points along a small portion of track 90 that crosses the Strait of Georgia in a southeasterly direction (Fig. 5). These points are separated by approximately 5.7 km. The fact that the strait is only about 30 km wide means that there is frequent data loss as a result of signal contamination from nearby land, particularly at the northern and southern portions of the track. Out of a maximum sample size of 364, there are only 12 locations with 58 or more noncontaminated values, and the best site has only 276. Harmonic analyses were performed at each of these 12 locations using 17 constituents composed of Z0, Sa, Ssa, and the 7 largest diurnal and 7 largest semidiurnal constituents. No inference was performed initially, though it is well known (Parke et al. 1987; Ray 1998; Cherniawsky et al. 2001) that the 9.9156-day sampling interval for T/P can cause significant aliasing, even with records as long as 10 yr. Noteworthy constituent pairs that may remain difficult to separate with this record length are K1 and Ssa, and P1 and K2, and this was borne out in the correlation coefficients arising from the analyses. For the central location with the sample size of 276, the largest correlation coefficients were 0.270 and 0.226 between the Xk and Yk components of K2 and P1, respectively, while the next largest values were 0.193 and 0.146 for the analogous components of K1 and Ssa, respectively. At a site with a sample size of only 119 values, the K2/P1 values rose to 0.340 and 0.290. Figure 6 illustrates the relationship between the along-track P1 and K2 amplitudes, their correlation coefficients, and the analysis sample sizes. (The correlation coefficients at sites 25 and 36 are not shown, because these analyses had numerous constituent separation problems, thereby making all results questionable.) Here, P1 and K2 amplitudes at the nearby Point Atkinson tide gauge (Fig. 5) are also shown as a means of evaluating accuracy. Clearly, the smaller sample sizes lead to higher correlations and generally less accurate results. Repeating these analyses with K2 inferred from S2, and the inference relationships computed from the Point Atkinson harmonics, improved the P1 and K2 accuracy at the sites near the ends of the track. For example, the K2 amplitude at site 36 decreased from 133.6 to 68.3 cm, much closer to the 61.9-cm value at Point Atkinson.
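
The aliasing that drives these separation problems is easy to quantify. The short sketch below (illustrative Python; the constituent periods are standard published values) computes the alias period of each major constituent under the 9.9156-day T/P sampling, showing, for example, that K1 aliases to roughly 173 days, uncomfortably close to the 182.6-day Ssa period, and that the P1 and K2 alias periods differ by only about two days.

```python
Ts = 9.9156                               # T/P sampling interval (days)
periods_h = {"O1": 25.8193, "K1": 23.9345, "P1": 24.0659,
             "M2": 12.4206, "S2": 12.0000, "K2": 11.9672}

for name, period in periods_h.items():
    f = 24.0 / period                     # frequency in cycles per day
    f_alias = abs(f - round(f * Ts) / Ts) # aliased frequency under Ts sampling
    print(f"{name}: alias period ~ {1.0 / f_alias:6.1f} days")
```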

As with the CTD analysis example, this application could also have been solved with the FH79 software. However, for the same reasons explained earlier, we would expect poorer accuracy, because the nodal correction parameters would be held constant during the 10-yr period of the analysis.

6. Summary and conclusions

The previous presentation has described and applied new computer software that permits more versatility in the harmonic analysis of tidal time series. Specific improvements to traditional methods include the analysis of randomly sampled and/or multiyear data; more accurate nodal correction, inference, and astronomical argument adjustments through direct incorporation into the least squares matrix; and correlation matrices and error estimates that facilitate decisions on the selection of constituents for the analysis. One- and two-dimensional time series can be analyzed with the same code and in the case of the latter, the final harmonics are also expressed in terms of current ellipse parameters.

The accuracy of the new methodology was assessed through a series of test analyses using synthetic data with and without background noise. Where feasible, comparisons were also done with results from F77 and the Foreman and Neufeld (1991) long analysis codes. The use of correlation matrix output in constituent selection decisions was demonstrated with two further examples. Finally, the software was applied to two problems that could not have been solved with the older software. The first application was the analysis of 25 yr of CTD data along a transect off southwest Vancouver Island that suggested both long-term temperature trends in the shelf break and Vancouver Island coastal currents and vertical variations in M2 phase that might be attributable to internal tides. The second application was the analysis of T/P satellite altimetry data along a ground track in the Strait of Georgia whose proximity to land has led to significant data dropout. In this case, the relationship between the number of valid data points and constituent aliasing is clearly seen in the correlation matrices and thus can be used as a guide to determine which constituents should be included directly in the analysis and which should be inferred.

Though the set of astronomical constants and constituents used in this new software is the same as that employed in F77 and PBL02—namely, those derived from Cartwright and Tayler (1971) and Cartwright and Edden (1973)—they could easily be replaced with more recent versions such as those of Hartmann and Wenzel (1995; available online at http://bowie.gsfc.nasa.gov/hw95/), which also include the tide generating potential of the planets Venus, Jupiter, Mars, Mercury, and Saturn. Only changes to the astronomical input file and list of constituents to be included in the analysis would be required.

This new software (in FORTRAN) is freely available as part of the Institute of Ocean Sciences (IOS) Tidal Package and can be downloaded, along with sample input data and a short explanatory readme file, from the FTP site given at http://www-sci.pac.dfo-mpo.gc.ca/osap/projects/tidpack/tidpack_e.htm. All comments and problem reports are welcome.

Acknowledgments

We thank Trish Kimber, Wendy Callendar, and Ming Guo for their assistance with the figures; Jake Galbraith for processing the CTD data; Brian Beckley and Richard Ray for providing the T/P altimeter data; and two anonymous reviewers for their constructive comments on an earlier version of the manuscript.

REFERENCES

  • Cartwright, D. E., 1999: Tides: A Scientific History. Cambridge University Press, 292 pp.

  • Cartwright, D. E., and R. J. Tayler, 1971: New computations of the tide-generating potential. Geophys. J. Roy. Astron. Soc., 23, 45–74.

  • Cartwright, D. E., and C. A. Edden, 1973: Corrected tables of tidal harmonics. Geophys. J. Roy. Astron. Soc., 33, 253–264.

  • Cherniawsky, J. Y., M. G. G. Foreman, W. R. Crawford, and R. F. Henry, 2001: Ocean tides from TOPEX/Poseidon sea level data. J. Atmos. Oceanic Technol., 18, 649–664.

  • Doodson, A. T., 1921: The harmonic development of the tide generating potential. Proc. Roy. Soc. London, A100, 306–328.

  • Drakopoulos, P. G., and R. F. Marsden, 1993: The internal tide off the west coast of Vancouver Island. J. Phys. Oceanogr., 23, 758–775.

  • Foreman, M. G. G., 1977: Manual for tidal heights analysis and prediction. Pacific Marine Science Rep. 77-10, Institute of Ocean Sciences, Patricia Bay, 66 pp. [Available online at http://www.pac.dfo-mpo.gc.ca/SCI/osap/publ/online/heights.pdf.]

  • Foreman, M. G. G., and R. F. Henry, 1979: Tidal analysis based on high and low water observations. Pacific Marine Science Rep. 79-15, Institute of Ocean Sciences, Patricia Bay. [Available online at http://www.pac.dfo-mpo.gc.ca/SCI/osap/publ/online/high-low.pdf.]

  • Foreman, M. G. G., and R. F. Henry, 1989: The harmonic analysis of tidal model time series. Adv. Water Resour., 12, 109–120.

  • Foreman, M. G. G., and E. M. Neufeld, 1991: Harmonic tidal analyses of long time series. Int. Hydrogr. Rev., LXVIII, 85–108.

  • Foreman, M. G. G., R. E. Thomson, and C. L. Smith, 2000: Seasonal current simulations for the western continental margin of Vancouver Island. J. Geophys. Res., 105, 19665–19698.

  • Freeland, H. J., W. R. Crawford, and R. E. Thomson, 1984: Currents along the Pacific coast of Canada. Atmos.–Ocean, 22, 151–172.

  • Godin, G., 1972: The Analysis of Tides. University of Toronto Press, 264 pp.

  • Golub, G. H., and C. F. Van Loan, 1983: Matrix Computations. The Johns Hopkins University Press, 476 pp.

  • Hartmann, T., and H.-G. Wenzel, 1995: The HW95 tidal potential catalogue. Geophys. Res. Lett., 22, 3553–3556.

  • Jay, D. A., and E. P. Flinchem, 1997: Interaction of fluctuating river flow with a barotropic tide: A test of wavelet tidal analysis methods. J. Geophys. Res., 102, 5705–5720.

  • Jay, D. A., and E. P. Flinchem, 1999: A comparison of methods for analysis of tidal records with multi-scale non-tidal background energy. Cont. Shelf Res., 19, 1695–1732.

  • Munk, W., and K. Hasselmann, 1964: Super-resolution of tides. Studies on Oceanography—A Collection of Papers Dedicated to Koji Hidaka, K. Yoshida, Ed., University of Tokyo, 339–344.

  • Munk, W., B. Zetler, and G. W. Groves, 1965: Tidal cusps. Geophys. J. Int., 10, 211–219.

  • Ortega, J. M., 1972: Numerical Analysis: A Second Course. Academic Press, 201 pp.

  • Parke, M. E., R. H. Stewart, D. L. Farless, and D. E. Cartwright, 1987: On the choice of orbits for an altimetric satellite to study ocean circulation and tides. J. Geophys. Res., 92, 11693–11707.

  • Parker, B. B., 2007: Tidal analysis and prediction. NOAA Special Publication NOS CO-OPS 3, U.S. Department of Commerce, 378 pp. [Available online at http://tidesandcurrents.noaa.gov.]

  • Pawlowicz, R., B. Beardsley, and S. Lentz, 2002: Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE. Comput. Geosci., 28, 929–937.

  • Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992: Numerical Recipes in FORTRAN. 2nd ed. Cambridge University Press, 963 pp.

  • Ray, R. D., 1998: Spectral analysis of highly aliased sea-level signals. J. Geophys. Res., 103, 24991–25003.

Fig. 1. Schematic showing the old and new harmonic tidal analysis approaches.

Fig. 2. Standard CTD lines off southwest Vancouver Island and western Washington State. Tofino, the tide gauge whose harmonics were used for all the synthetic tests, and Port Renfrew, the reference station that was used to provide inference parameters for the CTD analysis, are also shown. LA and LB denote sampling lines A and B, respectively.

Fig. 3. The linear trend in temperature (°C yr−1) along line B and the locations of the VICC and SBC.

Fig. 4. The M2 temperature phase lags (°) along line B.

Fig. 5. T/P satellite ground track 90 and collocated stations in the Strait of Georgia. The location of the reference tide gauge at Point Atkinson is also shown.

Fig. 6. Selected harmonic analysis results for P1 and K2 at collocated sites along the T/P ground track in the Strait of Georgia.

Table 1. Constituents K1, P1, S2, and K2 true amplitudes and phase lags and their errors when analyzing the 1 Jan–1 Mar 2008 Tofino synthesized hourly time series with the old (F77) and new harmonic analysis programs. There was no inference, and the noise was uniform random, with the percentage referring to its std dev relative to the tidal signal std dev.

Table 2. True amplitudes and phase lags and their errors when analyzing the January 2008 Tofino synthesized hourly time series with the old (F77) and new harmonic analysis programs. Constituents P1 and K2 were inferred from K1 and S2, respectively, using exact amplitude ratios and phase differences. The noise was uniform random, with the percentage referring to its std dev relative to the tidal signal std dev.

Table 3. As in Table 2, but when analyzing the 1 Jan 2000–31 Dec 2011 Tofino synthesized hourly time series with the old (F77) and new harmonic analysis programs. The noise was uniform random, with the percentage referring to its std dev relative to the tidal signal std dev.

Table 4. As in Table 2, but when analyzing randomly selected values within the periods of 1 Jan–30 Jun 2008 (columns 3, 4, 8, 9) and 1 Jan–1 Mar 2008 (columns 5, 6, 10, 11).