Search Results
You are looking at 1 - 10 of 56 items for
- Author or Editor: Carl Wunsch
Abstract
The time- and space-scale descriptive power of two-dimensional Fourier analysis is exploited to reanalyze the behavior of midlatitude variability as seen in altimetric data. These data are used to construct a purely empirical and analytical frequency–zonal wavenumber spectrum of ocean variability for periods between about 20 days and 15 yr and on spatial scales of about 200–10 000 km. The spectrum is dominated by motions along a “nondispersive” line, which is a robust feature of the data but for whose prominence a complete theoretical explanation is not available. The estimated spectrum also contains significant energy at all frequencies and wavenumbers in this range, including eastward-propagating motions, which are likely some combination of nonlinear spectral cascades, wave propagation, and wind-forced motions. The spectrum can be used to calculate statistical expectations of spatial average sea level and transport variations. However, because the statistics of trend determination in quantities such as sea level and volume transports depend directly upon the spectral limit of the frequency approaching zero, the appropriate significance calculations remain beyond reach, because low-frequency variability is indistinguishable from trends already present in the data.
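To make the kind of calculation described concrete, the following is a minimal sketch (not the author's code) of estimating a frequency–zonal-wavenumber spectrum from a hypothetical gridded time–longitude array of sea level anomalies; the array name, grid spacings, and windowing choices are assumptions for illustration only.

```python
import numpy as np

def freq_wavenumber_spectrum(ssh, dt_days, dx_km):
    """Estimate a frequency-zonal-wavenumber power spectrum from a
    time-longitude array of sea level anomaly, ssh[nt, nx].
    Returns (frequency in cycles/day, wavenumber in cycles/km, PSD)."""
    nt, nx = ssh.shape
    # Remove the time mean at each longitude and apply a Hann taper in
    # both dimensions to reduce spectral leakage.
    anom = ssh - ssh.mean(axis=0, keepdims=True)
    taper = np.outer(np.hanning(nt), np.hanning(nx))
    # Two-dimensional FFT; shift so zero frequency/wavenumber is centered.
    F = np.fft.fftshift(np.fft.fft2(anom * taper))
    # Normalize to a power spectral density (per cycle/day per cycle/km).
    psd = np.abs(F)**2 * dt_days * dx_km / (taper**2).sum()
    freq = np.fft.fftshift(np.fft.fftfreq(nt, d=dt_days))   # cycles/day
    wavenum = np.fft.fftshift(np.fft.fftfreq(nx, d=dx_km))  # cycles/km
    return freq, wavenum, psd
```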
Abstract
To produce an interpretation of the surface kinetic energy as measured by altimeters, a survey is made of the vertical structure of kinetic energy profiles in a large number of globally distributed long current meter records. Although the data are geographically confined primarily to a latitude band in the North Pacific, to the North Atlantic, and to a few moorings in the South Atlantic, the results show, generally speaking, that most regions are dominated by the barotropic and first baroclinic modes. Because of the near-surface intensification of the baroclinic modes, altimeters primarily reflect the first baroclinic mode, and thus the motion of the main thermocline. There is good quantitative agreement, with a few exceptions, with estimates of the surface kinetic energy obtained from the TOPEX/POSEIDON altimeter and from vertical extrapolations to the surface of the mooring profiles. These results are consistent with previous suggestions that barotropic models have little skill in depicting variability as seen in the altimeter data. An EOF analysis is shown to produce fictitious mode coupling unless the dynamical modes have very different energy levels.
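For readers unfamiliar with the modal decomposition involved, here is a minimal, hypothetical sketch of projecting sparse current-meter velocities onto the barotropic and first baroclinic modes, using the flat-bottom, constant-stratification case in which the velocity modes are simply cosines; the depths, values, and function names are illustrative assumptions, not the mooring data of the paper.

```python
import numpy as np

def mode_amplitudes(z_obs, u_obs, depth, nmodes=2):
    """Least-squares fit of velocities u_obs at instrument depths z_obs
    (negative downward, z = 0 at the surface) to flat-bottom dynamical
    modes.  For constant stratification the velocity/pressure modes are
    F_n(z) = cos(n*pi*z/H): n = 0 barotropic, n = 1 first baroclinic, ...
    Returns the modal amplitudes."""
    G = np.column_stack([np.cos(n * np.pi * z_obs / depth)
                         for n in range(nmodes)])
    amps, *_ = np.linalg.lstsq(G, u_obs, rcond=None)
    return amps

# Hypothetical mooring: three instruments in 4000 m of water.
z = np.array([-500.0, -1500.0, -3500.0])
u = np.array([0.12, 0.05, 0.02])          # m/s
a = mode_amplitudes(z, u, depth=4000.0)
# Extrapolate to the surface: every cosine mode equals 1 at z = 0,
# so the surface velocity is just the sum of the modal amplitudes.
u_surface = a.sum()
```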
Abstract
A recent paper by Hu et al. (https://doi.org/10.1126/sciadv.aax7727) has raised the interesting question of whether the ocean circulation has been “speeding up” in the last decades. Their result contrasts with some estimates of the lack of major trends in oceanic surface gravity waves and wind stress. In general, both the increased energy and implied power inputs of the calculated circulation correspond to a small fraction of the very noisy background values. An example is the implied power increase of about 3 × 10^8 W, as compared to wind energy inputs of order 10^12 W. Here the problem is reexamined using a state estimate that has the virtue of being energy, mass, etc. conserving. Because it is an estimate over an entire recent 26-yr interval, it is less sensitive to the strong changes in observational data density and distribution, and it does not rely upon nonconservative “reanalyses.” The focus is on the energy lying in the surface layers of the ocean. A potential energy increase is found, but it is almost completely unavailable, arising from the increase in mean sea level. A weak increase in kinetic energy in the top layer (10 m) is confirmed, corresponding to an increase of order 1 cm s^−1 yr^−1 over 26 years. An estimate of kinetic energy in the full water column shows no monotonic trend, but the changes in the corresponding available potential energy are not calculated here.
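As a schematic of the kind of diagnostic described (a sketch only, not the actual state-estimate computation; the array names, shapes, and units are assumptions), an area-weighted top-layer kinetic energy time series and its least-squares trend could be formed as follows.

```python
import numpy as np

def ke_trend(u, v, area, years):
    """u, v: surface-layer velocities with shape (ntime, ny, nx) in m/s;
    area: grid-cell areas (ny, nx); years: decimal years (ntime,).
    Returns the area-mean kinetic energy per unit mass (m^2/s^2) and its
    linear trend per year from an ordinary least-squares fit."""
    ke = 0.5 * (u**2 + v**2)
    ke_mean = (ke * area).sum(axis=(1, 2)) / area.sum()
    trend, intercept = np.polyfit(years, ke_mean, 1)
    return ke_mean, trend
```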
Abstract
Determining flow fields and mixing rates from chemical tracer distributions is a challenging and important oceanographic problem. It is therefore particularly disturbing that Fiadeiro and Veronis, after attempting to “invert” a simple tracer distribution in a known advective-diffusive field, concluded that solutions obtained for underdetermined systems were “devoid of physical content”. The problem they formulated is reexamined here. The procedures used differ from theirs in making use of the full machinery of inverse methods; even in the grossly underdetermined case, it is possible to (i) obtain useful information about the underlying flow field, (ii) deduce the structure of the parts that are not determinable, (iii) find formal error bars arising both from data noise and from indeterminate components, (iv) make use of a priori statistical information and a posteriori tests, and, in general, (v) extract much useful information about the field.
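For readers unfamiliar with the machinery referred to, the following is a minimal sketch of a truncated-SVD solution of an underdetermined linear model E x + n = y, showing the resolved part of the solution, the null-space (indeterminate) structure, and formal error bars from the data noise. The matrix E, data y, noise level, and truncation rank k are placeholders, not the Fiadeiro and Veronis problem.

```python
import numpy as np

def tsvd_inverse(E, y, noise_var, k):
    """Truncated-SVD solution of the underdetermined system E x + n = y.
    Returns the minimum-norm particular solution, its formal standard
    errors due to data noise, the resolution matrix, and a basis for the
    null space (the part of x about which the data say nothing)."""
    U, s, Vt = np.linalg.svd(E, full_matrices=True)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    # Particular solution using the first k singular vectors.
    x_hat = Vk @ ((Uk.T @ y) / sk)
    # Formal error covariance of x_hat from data noise alone.
    P = noise_var * (Vk / sk**2) @ Vk.T
    std_err = np.sqrt(np.diag(P))
    # Resolution matrix: x_hat is R @ x_true plus noise terms.
    R = Vk @ Vk.T
    # Null-space vectors: components of x left undetermined by the data.
    null_basis = Vt[k:, :].T
    return x_hat, std_err, R, null_basis
```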
Abstract
An infinitely deep stratified ocean on an equatorial beta plane is forced with a periodic wind system. The resulting linearized motion is shown to take the form of a deep cellular flow structure in rough agreement with recent observations. Because of the infinite depth, the vertical structure depends only on the horizontal structure and frequency of the wind-forced layer. The motion is a mechanism for carrying momentum downward from the surface. A western boundary is easily accommodated.
Abstract
A model of the Atlantic has been formulated that combines ordinary quasi-geostrophic constraints (based upon the dynamic method and Ekman layer) with a great variety of additional information available about the time-average ocean circulation. The goal is to combine very diverse data types and beliefs and to be able to test for compatibility and incremental usefulness as a way around the paucity of conventional data, a lack of which otherwise greatly hinders determination of the circulation.
The approach is axiomatic. Such a model is based here upon the use of linear inequality constraints, which permit the combination of the dynamic method with “core layer”-like constraints, as well as observations of deep water velocities, overflow transports and the like. The model is then exploited to find absolute bounds (maxima and minima) upon the annual mean and seasonal meridional fluxes of heat and the maximum rate of tropical near-surface upwelling. Some latitudes of nearly vanishing mean meridional heat flux are just possible within the imposed constraints, but it appears impossible to reverse the sign of the heat flux at any latitude except in winter. The latitude of maximum possible annual-mean poleward heat flux is 40°N. Based upon a radiocarbon box model, the value of tropical upwelling is much less than published values. The model is very “slack”, i.e., most properties are locally determined rather than being forced by distant constraints.
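As a schematic of how such bounds follow from linear inequality constraints (a sketch only; the unknown vector, the constraint matrices, and the heat-flux functional c are placeholders, not the model of the paper), the maximum and minimum of a linear property can be posed as a pair of linear programs.

```python
import numpy as np
from scipy.optimize import linprog

def property_bounds(c, A_ub, b_ub, A_eq, b_eq, lb, ub):
    """Minimum and maximum of the linear functional c @ x (e.g., a
    meridional heat flux written in terms of the unknown reference
    velocities) subject to inequality constraints A_ub @ x <= b_ub,
    equality constraints A_eq @ x == b_eq, and bounds lb <= x <= ub."""
    bounds = list(zip(lb, ub))
    lo = linprog(c,  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    # Maximizing c @ x is the same as minimizing -c @ x.
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    return lo.fun, -hi.fun   # (minimum, maximum) of c @ x
```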
Abstract
The dominant contributor to the random error of an altimetric satellite system is the long wavelength uncertainty in the orbital radius. It is shown that calibration by a comparatively modest tide gauge system can drastically reduce the overall error in global estimates of large-scale oceanic variability. The procedure used is a form of optimal estimation. Absolute (time average) altimetric calibration is much more difficult because it requires absolute calibration of the tide gauge positions (in three dimensions) but the error reduction process would be the same.
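To make the idea concrete, here is a hypothetical sketch of the kind of estimation involved: the long-wavelength radial orbit error is modeled as a bias plus a once-per-revolution sinusoid and estimated by weighted least squares from altimeter-minus-tide-gauge height differences. The parameterization, noise model, and names are assumptions, not the scheme of the paper.

```python
import numpy as np

def estimate_orbit_error(t_gauge, d_gauge, gauge_var, omega):
    """Fit an orbit-error model a + b*cos(omega*t) + c*sin(omega*t)
    (omega roughly once per revolution) to altimeter-minus-tide-gauge
    differences d_gauge observed at times t_gauge, by weighted least
    squares with gauge noise variances gauge_var.  Returns the
    coefficients and their formal covariance."""
    A = np.column_stack([np.ones_like(t_gauge),
                         np.cos(omega * t_gauge),
                         np.sin(omega * t_gauge)])
    W = np.diag(1.0 / gauge_var)              # inverse noise covariance
    N = A.T @ W @ A                           # normal matrix
    coeffs = np.linalg.solve(N, A.T @ W @ d_gauge)
    coeff_cov = np.linalg.inv(N)              # formal parameter covariance
    return coeffs, coeff_cov
```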
Abstract
The irregular space-time sampling of any finite region by an orbiting satellite raises difficult questions as to which frequencies and wavenumbers can be determined and which will alias into others. Conventional sampling theorems must be extended to account for both irregular data distributions and observational noise; the sampling irregularity makes the system much more susceptible to noise than in regularly sampled cases. The problem is formulated here in terms of least squares and applied to spacecraft in 10-day and 17-day repeating orbits. The “diamond pattern” laid down spatially in such repeating orbits means that either repeat period adequately samples the spatial variables, but the slow overall temporal coverage in the 17-day pattern leads to much greater uncertainty than in the shorter repeat cycle. The result is not definitive, and it is not concluded that a 10-day orbit repeat is the most appropriate one. A major conclusion, however, is that different orbital choices have potentially quite different sampling characteristics, which need to be analyzed in terms of the spectral characteristics of the moving sea surface. Conclusions drawn from model assimilation studies need to be placed in the context of the reality of their spectral content vis-à-vis the ocean.
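To illustrate the least-squares formulation (a sketch under assumed sampling, not the orbital-geometry analysis of the paper), one can build a design matrix of sinusoids evaluated at the irregular space-time sample points and examine its conditioning: a large condition number signals frequency-wavenumber pairs that cannot be separated and hence alias into one another, and a fit that is very sensitive to observational noise.

```python
import numpy as np

def design_matrix(t, x, freqs, wavenums):
    """Columns are cos/sin of each candidate (frequency, wavenumber)
    pair evaluated at the irregular space-time sample points (t, x)."""
    cols = []
    for f in freqs:
        for k in wavenums:
            phase = 2 * np.pi * (f * t - k * x)
            cols.append(np.cos(phase))
            cols.append(np.sin(phase))
    return np.column_stack(cols)

def resolvability(t, x, freqs, wavenums):
    """Condition number of the least-squares problem: large values mean
    some frequency-wavenumber combinations are nearly indistinguishable
    given this sampling pattern."""
    A = design_matrix(t, x, freqs, wavenums)
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1], s
```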
This pedagogical note reminds the reader that the interpretation of climate records is dependent upon understanding the behavior of stochastic processes. In particular, before concluding that one is seeing evidence for trends, shifts in the mean, or changes in oscillation periods, one must rule out the purely random fluctuations expected from stationary time series. The example of the North Atlantic Oscillation (NAO) is mainly used here: the spectral density is nearly white (frequency power law ≈ s^−0.2) with slight broadband features near 8 and 2.5 yr. By generating synthetic but stationary time series, one can exhibit many of the features sometimes exciting attention as being of causal climate significance. Such a display does not disprove the hypothesis of climate change, but it provides a simple null hypothesis for what is seen. In addition, it is shown that the linear predictive skill for the NAO index must be very slight (less than 3% of the variance). A brief comparison with the Southern Oscillation shows a different spectral distribution, but again a simulation has long periods of apparent systematic sign and trends. Application of threshold-crossing statistics (Ricean) shows no contradiction to the assumption that the Darwin pressure record is statistically stationary.
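A minimal sketch of the simulation idea (the spectral slope is taken from the text; the record length, normalization, and everything else are illustrative assumptions): generate a stationary Gaussian series whose spectrum is approximately proportional to s^−0.2 by shaping white noise in the frequency domain, then fit trends to many realizations to see how large an apparently systematic trend pure chance can produce.

```python
import numpy as np

def synthetic_power_law_series(n, slope=-0.2, rng=None):
    """Stationary Gaussian series of length n whose power spectrum is
    approximately proportional to frequency**slope (nearly white for
    slope = -0.2), built by shaping white noise in the frequency domain."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n)
    W = np.fft.rfft(white)
    f = np.fft.rfftfreq(n, d=1.0)
    shape = np.ones_like(f)
    shape[1:] = f[1:] ** (slope / 2.0)   # amplitude is sqrt(power)
    series = np.fft.irfft(W * shape, n=n)
    return series / series.std()

# "Trends" in a purely stationary process: least-squares slope of each
# of 1000 synthetic realizations of a century of annual values.
rng = np.random.default_rng(0)
trends = [np.polyfit(np.arange(100),
                     synthetic_power_law_series(100, rng=rng), 1)[0]
          for _ in range(1000)]
```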