Some high-resolution CTD data collected in the spring of 2017 are analyzed using Thorpe sorting and scale analyses, including both the commonly used “Thorpe scale” method and a lesser-used method that is based on directly estimating the “available overturn potential energy” (AOPE): the difference between potential energies of the raw versus sorted density profiles in a mixing “turbulent patch.” The speed of the profiler varied, so the spatial (vertical) sampling is uneven. A method is developed and described to apply the Thorpe scaling and the AOPE approaches to such unevenly sampled data. The AOPE approach appears to be less sensitive to the (poorly constrained) estimate of the “background” buoyancy frequency N. Although these approaches are typically used to first estimate the dissipation rate εK of turbulent kinetic energy, the real goal is to estimate the diffusivity of density Kρ and hence the net alteration of the density profile by mixing. Two easily measured dimensionless parameters are presented as possible metrics of the “age” or “state” of the mixing patch, which might help to resolve the question of how the total turbulent energy and dissipation are apportioned between kinetic and potential components and hence how much of the measured AOPE ends up changing the background stratification. A speculative example as to how this might work is presented.
In the spring of 2017, an Office of Naval Research (ONR) departmental research initiative, directed at understanding the evolution of Langmuir circulation under rapidly changing conditions, funded a multi-investigator experiment [the Langmuir Cell Departmental Research Initiative (LCDRI)] that was carried out in the Southern California Bight. As part of this multiplatform multi-institutional effort, my group [the “Multiscale Ocean Dynamics” (MOD) group at Scripps Institution of Oceanography] deployed both a phased array Doppler sonar and a Wirewalker (WW) carrying a CTD (which measures conductivity, temperature, and depth = pressure). These were deployed and operated from the Research Platform (R/P) Floating Instrument Platform (FLIP), which was moored in about 1-km ocean depth, roughly midway along a ridge that runs between San Clemente and San Nicolas Islands, separating basins about 2 times as deep (see Fig. 1).
In this note I focus on just the data from the WW, leading to estimates of turbulent dissipation and mixing rates as well as the surface mixing layer depth and evolution. The WW provided a time series of vertical profiles of temperature, salinity, and density. It ran intermittently from 21 to 25 March (capturing one strong wind event); then, after we increased the bottom weight significantly (to 45 kg), it ran continuously from ~0800 UTC 25 March to ~2100 UTC (1400 local time) 7 April, when we ceased data collection and prepared to “FLIP down” and return to port. Here I examine data from the long continuous run (about 13.5 days), which spans two strong wind events and then a period of calm (see Fig. 2). Thorpe-sorting and scale analysis is employed to estimate dissipation and diffusion rates using both the traditional Thorpe-scaling method and the less widely used “available overturn potential energy” (AOPE) approach. New here (to my knowledge) are application of these Thorpe-sorting approaches to unevenly sampled data and direct comparisons of the Thorpe-scale versus AOPE approaches. It is also suggested that a robust estimate of the “equivalent linear background stratification” should preserve the potential energy of the density-sorted profile.
Two easily computed nondimensional quantities are identified that may help to determine the state of each turbulent patch as sampled: 1) the ratio of the slopes of least squares fit lines to the density profiles before and after sorting (the m ratio), and 2) the ratio of the Thorpe scale to the total turbulent patch size (the L ratio; both defined in more detail near the end of this paper). These are tightly correlated, indicating that they are likely the result of the same process. Unfortunately, no direct measurements of turbulent dissipation rates are available for this dataset, so independent validation of some of the results and suggestions will require future work. In fact, the relation between the turbulent kinetic energy dissipation rate εK and the mixing of density Kρ, related to a “mixing efficiency” Γ, is itself also likely a function of the “state of the patch,” calling into question the use of εK as a reliable measure of density mixing rates using a fixed value of Γ, especially where the turbulent patches vary from shear-driven to convective overturns (breaking internal waves) in nature.
2. Background: Thorpe sorting and scaling
The story begins with Dougherty’s (1961) and Ozmidov’s (1965) theoretical insight that there must be an “outer scale” at which stratification inhibits the vertical extent of an isotropic turbulent patch or layer. On dimensional grounds, this scales like

LO = (εK/N^3)^{1/2},    (1)
which is now known as the “Ozmidov length scale.” Here εK is the turbulent kinetic energy dissipation rate per kilogram of fluid (units are length^2 time^−3) and N is the “background” buoyancy frequency:

N^2 = −(g/ρ0) ∂ρ/∂z,    (2)
where g is gravity, ρ0 is a reference density, and ∂ρ/∂z is the “bulk stratification” (more on this later).
Then Thorpe’s (1977) insight was that one might estimate this scale from CTD profile data by seeing how far each sample has to be moved vertically to form a stable density profile (which would serve as a proxy for the “undisturbed” profile); this is now known as “Thorpe sorting”:

ρ(zn) → ρ̂(ẑm),  m = m(n),    (3)
where the right arrow indicates resorting by density [with a 1:1 mapping from n to m(n)], and the caret represents density-sorted values (this will be clarified later, accommodating also uneven sample spacing). For an upcast, as comes from a WW (details below), the sorted densities are then monotonically decreasing toward the surface: ρ̂(ẑm+1) ≤ ρ̂(ẑm).
Thorpe then defined what has come to be known as the “Thorpe scale,” the RMS displacement over each turbulent patch between the actual and sorted depths:

LT ≡ ⟨dn^2⟩^{1/2},  with dn = zn − ẑm(n),    (4)
where the angle brackets denote an average over the patch and Q is the set of samples in the patch. Thorpe hypothesized that this is proportional to LO; however, note that Thorpe also cautioned that this should apply to shear-driven turbulence and not necessarily to either convective or double-diffusive situations.
Next, Dillon (1982) made direct dissipation measurements in a sheared thermocline, where he supposed there was negligible double diffusion or convective overturning (e.g., breaking internal waves), and found that the data on average indicated LO ≈ 0.8LT (but note that the error bounds on this estimate are on the order of a factor of 2, even with averages over 10 or more profiles). Inverting Eq. (1) then yields the estimate

εK ≈ (0.8LT)^2 N^3 = 0.64 LT^2 N^3.    (5)
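To make the bookkeeping concrete, here is a minimal sketch (with invented profile values) of the Thorpe displacements, the Thorpe scale, and the Dillon-style dissipation estimate for an evenly sampled segment; the uneven-spacing generalization is developed in section 4.

```python
import numpy as np

def thorpe_scale(z, rho):
    """RMS Thorpe displacement over the window passed in (z positive upward).

    Densities are reordered so the heaviest sample sits deepest; d is how far
    each sample must move vertically to reach its sorted position.
    """
    order = np.argsort(-rho, kind="stable")   # densest first -> deepest
    d = z - z[order]                          # Thorpe displacements
    return np.sqrt(np.mean(d ** 2))

# Hypothetical patch: four samples of a stable profile flipped into an overturn
z = np.arange(0.0, 1.0, 0.1)                  # heights (m), even 0.1-m spacing
rho = 1025.0 - 0.01 * z                       # stable background (kg m^-3)
rho[3:7] = rho[3:7][::-1].copy()              # reverse a 4-point segment
L_T = thorpe_scale(z, rho)                    # ~0.14 m for this example
N = 5e-3                                      # assumed background N (s^-1)
eps_K = 0.64 * L_T ** 2 * N ** 3              # Dillon's L_O ~ 0.8 L_T
```

The averaging window here is whatever segment is passed in; in practice it would be restricted to a single detected patch.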
This approach and approximate equivalence have been applied to more situations than Thorpe originally envisioned. For more details and up-to-date history of the Thorpe-scaling approach, including some inherent biases, see Mater et al. (2015), Scotti (2015), and references therein. Extending especially some results from the latter study, there is some hope to eventually be able to apply this accessible analysis to include convectively driven cases as well as shear driven, and mixtures thereof, combined with either of the indicators of the “age” or “state” of each turbulent patch that are described near the end of this note. Double diffusion remains beyond this analysis and is henceforth disregarded (but noted).
The dissipation rate εK of TKE is then generally used to estimate the density diffusivity Kρ (units are length^2 time^−1), which is one of the real goals:

Kρ = Γ εK/N^2,    (6)

where

Γ ≡ εP/εK    (7)

is the mixing efficiency, with εP defined as the dissipation rate of the turbulent potential energy (essentially the destruction rate of turbulent density variance, with the same units as εK).
Dillon also briefly described an alternative approach, using a direct estimate of the AOPE by calculating the difference in the potential energy of the measured density profiles minus that of the sorted ones in each patch, but this approach has seldom been pursued. I pursue it more thoroughly below; but first the data are described.
3. Extracting density profiles from the Wirewalker data
The basic behavior of the WW profiler is described in Pinkel et al. (2011) and Smith et al. (2012). In brief, the profiler body hangs onto a wire with a one-way clamp, grabbing on when the wire goes down and letting go when it goes up. The wire is attached to a surface buoy, tethered loosely to FLIP, that moves with the waves, providing the energy to ratchet the profiler down against its buoyancy (set to be slightly positive). At the bottom, it hits a bumper that releases the clamp, so it drifts “smoothly” up the wire to a top-stop, which reengages the one-way clamp. Thus the profiler takes profiles continually at a speed depending on the waves and currents (the latter typically 0.25–0.5 m s^−1, which adds friction but also advects “self-induced turbulence” away). Typical profiling speeds were of order 0.3 m s^−1 in both directions. For the LCDRI, the bottom stop was at about 93-m depth (roughly as deep as FLIP), and the top stop was as close to the surface buoy as we dared put it, so the measurements got to within 10 cm of the surface on occasion. The WW carried an RBR, Ltd., RBRconcerto model CTD on an insulated wire so an inductive modem could (and did) send data to the laboratory on FLIP in real time. The instruments and modem on the WW ran from a battery pack, sized to last up to a month or so. Considering how close the measurements come to the surface, the offset between the pressure and T and C sensors was taken into account (about 9 cm), as was the measured atmospheric pressure (1 mbar ≈ −1 cm). The conductivity sensor is an open-cell ring, so no pumping is needed [stated accuracy of ~0.003 mS cm^−1 and resolution of 0.001; sample volume is ~(2 cm)^3]; the thermistor was the fast-response version (0.1-s time constant; accuracy of 0.002°C and resolution of 0.000 05°C). The system sampled constantly at 6 Hz for the whole 13.5-day period; thus the time constant for the thermistor corresponds to about 0.6 of a sample interval.
The thermistor was located toward the upper edge of the conductivity cell, offset horizontally by a few centimeters, so the effective sample volumes are roughly matched in depth.
One issue (always) is “salinity spiking” versus “T–S compensation” (or “spice”: sets of temperature T–salinity S pairs that have the same density, warmer and saltier versus colder and fresher). A simple quick fix for salinity spiking (which arises from any mismatch between the responses of the temperature T and conductivity C sensors) is to apply a running-median filter to the estimated salinity. The running-median filter is quite good at eliminating spikes and reducing noise while preserving “steplike” changes in (e.g.) density, a feature that is important in stratified turbulence. The first pass here employed 11-point medians (spanning roughly 0.55 m on average). To see whether spiciness is a problem, this was compared with using a constant salinity (effectively using T only), checking whether that reduces the apparent number of turbulent patches. Since the overall range of salinity is just 33.3–33.8 “practical salinity units” (psu) and is well behaved versus depth, salinity was not expected to matter much. However, this analysis indicates there were many very small T–S variations consistent with spice variability: the 11-point median filter turned up 74 888 turbulent patches, of which 26 235 were two-point “inversions”; the constant-S version turned up 107 302 patches, of which 38 306 were two-point inversions. Apparently the T–S compensation is more important than salinity spiking (at least with the median filtering) in terms of suppressing these (spurious?) small-scale patches. To determine an optimum-length median filter, trials were made with 3–14-point windows. The results are consistent using median filters from 8 to 14 points long. In the spirit of “less smoothing is better,” the following employs the eight-point median filter (on salinity only). For this filter length, most of the apparent patches are very small: one-third are the minimum two-point reversals, and more than one-half (36 763 of the 71 703 total) contain fewer than five samples.
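As a sketch of such a despiking step (the trailing-window edge handling here is an illustrative choice, not necessarily what was actually used in the processing):

```python
import numpy as np

def running_median(x, window=8):
    """Trailing running median: each output is the median of up to `window`
    samples ending at that index (the window shrinks near the start)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = np.median(x[max(0, i - window + 1):i + 1])
    return out

# A flat salinity trace (psu) with a single conductivity-mismatch "spike"
S = np.full(40, 33.5)
S[20] = 34.9
S_filtered = running_median(S, window=8)      # the spike is removed
```

A median, unlike a running mean, rejects the isolated spike entirely as long as it occupies a minority of the window.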
It seems reasonable to ignore these for now as probably being at least partially a product of the noise level of the sensors and anyhow not significant in terms of either turbulent dissipation or the mixing of density. For the comparisons carried out below, only turbulent patches with six or more samples are considered (roughly, patches larger than 30 cm).
Another issue is noise-induced “false overturns” (e.g., see Stansfield et al. 2001; Johnson and Garrett 2004; Gargett and Garner 2008). However, this is mainly an issue in deep weakly stratified regions, such as the Arctic (as considered in the latter two references). The conditions here more nearly resemble those described in the first of these references, where they found that “noise editing” only changed the result by less than 25%, which is considered “not significant” in estimates of turbulence. So no attempt is made here to edit out “noise patches,” other than by the median filtering and excluding patches with fewer than six samples as mentioned above.
The downward profiles are skipped because the wave-induced “ratchet” motions create significant accelerations (affecting especially the pressure estimates), and also because the sensors are in the profiler’s wake going down. To isolate the upcasts, the deepest and shallowest samples of each profiling cycle were first located. Then each upcast was taken from a bottom sample to the following top sample, skipping to the next bottom-to-top segment, and so on, for a total of 1435 upcasts from 93 m to the surface over 13.5 days. However, especially near the surface, the sampling can “back up” occasionally as a result of extra strong waves and currents; so to get monotonic upcasts, points that were not at least 1 mm above the previously retained sample were skipped, effectively constraining the sample interval to be at least 1 mm (z is positive upward, with depths below the surface all being negative). Such reversals also occur in CTD casts from a rolling ship (e.g., see Gargett and Garner 2008) and are dealt with similarly. Reversals on the upcasts occurred only when the horizontal currents were strong and only near the surface where the wave action is strongest; when they did occur, the currents likely ensured that the wake of the profiling body was advected to the side before the next depth level was sampled. “Depth” here is actually “−pressure” (1 dbar ≈ −1 m, depending on the density), which removes most surface wave effects and allows sampling to within 0.3 m of the moving surface (on average) even with 4-m significant wave heights. The resulting profiles have variable sample intervals in depth, as the speed of the profiling body through the water varies, sometimes considerably (see Fig. 3). Overall, it slows down above the pycnocline, where the profiler’s buoyancy contrast is smaller; thus, in addition to smaller-scale variations, the average sample interval decreased from about 0.07 m at depth to around 0.03 m near the surface.
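The reversal-skipping rule can be sketched as follows; the function name and the synthetic cast are invented for illustration, while the 1-mm constraint is from the text.

```python
import numpy as np

def monotonic_upcast(z):
    """Indices of samples forming a strictly rising cast.

    z is height in meters (negative below the surface), ordered in time from
    the bottom of the cast to the top. A sample is kept only if it lies at
    least 1 mm above the previously kept sample, which skips wave-induced
    "back-ups" and enforces a minimum 1-mm sample interval.
    """
    keep = [0]
    for i in range(1, len(z)):
        if z[i] > z[keep[-1]] + 0.001:
            keep.append(i)
    return np.asarray(keep)

# Synthetic upcast with a reversal (index 3) and a too-small step (index 6)
z = np.array([-5.0, -4.5, -4.0, -4.2, -3.9, -3.5, -3.5005, -3.0])
idx = monotonic_upcast(z)
```

The retained samples z[idx] then rise monotonically with spacing of at least 1 mm, as required for the sorting analysis.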
4. Application of Thorpe sorting to the Wirewalker data
The above Thorpe-scale definition in Eq. (4) is usually applied assuming a constant vertical sample interval (and hence constant sample layer thicknesses). To apply this to unevenly sampled WW data takes some care. The approach used here is to assume “persistence” to define “initial” layer thicknesses: the value measured at zn is assumed to persist to the next measurement at zn+1, so the sample layer has thickness Δn = zn+1 − zn. Then the layers are sorted by density and summed up from the bottom-most point, forming an appropriate set of density-sorted depths and layer thicknesses:

{Δn, ρn} → {Δ̂m, ρ̂m},  with ẑm = z1 + Σk<m Δ̂k,    (8)
where (again) the right arrow indicates resorting by density; the sorted thicknesses Δ̂m are summed up from the bottom of the cast, starting at ẑ1 = z1. To work properly for the calculation of AOPE (especially), the depths must be centered in each layer, rather than at the bottom edges. The simplest approach (taken here) is to initially define the layers as extending from each zn to zn+1 (as above), to apply Eq. (8), and then to redefine all of the zn and ẑm by adding one-half of the corresponding layer thicknesses to them, thus producing layers with centered depths. Note that the revised ẑm do not in general match any of the revised zn (including the first one), because layers of unequal thickness can be swapped. This maintains the same depth and vertical extent of each turbulent patch before versus after sorting (see Fig. 4 for an example). To conceptually restore the actual mean depth of the whole turbulent patch, one-half of the lowest raw (uncareted) layer thickness, Δ1/2, could be subtracted from all the careted and uncareted depths; but this does not affect either the Thorpe scale or the difference in potential energy between the profiles before and after sorting, which is all that enters into the estimates of dissipation and diffusion, so it is computationally extraneous.
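A minimal sketch of this layer bookkeeping (persistence thicknesses, density sorting, stacking from the bottom, and re-centering); the treatment of the final layer's thickness is an assumption of this sketch, and the sample values are invented:

```python
import numpy as np

def sort_layers(z, rho):
    """Thorpe-sort unevenly spaced samples treated as layers.

    Layer n spans z[n] to z[n+1] ("persistence"); the last sample reuses the
    previous thickness (an assumption of this sketch). Layers are reordered so
    the densest lies deepest, stacked upward from z[0], and both the raw and
    sorted depths are then centered within their layers.
    """
    dz = np.append(np.diff(z), np.diff(z)[-1])
    order = np.argsort(-rho, kind="stable")       # densest -> deepest
    dz_s, rho_s = dz[order], rho[order]
    z_s = z[0] + np.concatenate(([0.0], np.cumsum(dz_s[:-1])))
    return z + 0.5 * dz, dz, rho, z_s + 0.5 * dz_s, dz_s, rho_s

# Unevenly sampled patch containing an inversion (values invented)
z = np.array([0.0, 0.07, 0.20, 0.26, 0.40])
rho = np.array([1025.00, 1024.96, 1024.98, 1024.97, 1024.95])
zc, dz, rho_r, zc_s, dz_s, rho_s = sort_layers(z, rho)
```

Because the layers tile the same interval before and after sorting, the patch extent and (with centered depths) its thickness-weighted first moment are preserved.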
The change in potential energy before and after sorting is estimated as

AOPE = [g/(ρ0 LQ)] (ΣQ ρn zn Δn − ΣQ ρ̂m ẑm Δ̂m),    (9)
where g is gravity, LQ = ΣQ Δn is the total turbulent patch thickness, and ρ0 ≡ ⟨ρ⟩ is the thickness-weighted mean density in the patch:

⟨ρ⟩ ≡ (1/LQ) ΣQ ρn Δn.    (10)
This definition of the angle brackets also applies to the Thorpe-scale definition in Eq. (4), and the latter is calculated using the modified “center depths” for consistency.
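The potential-energy difference is then a short sum; this sketch normalizes by the thickness-weighted mean density and patch size as read from Eq. (9), and uses an invented two-layer inversion for which the centered depths are unchanged by sorting:

```python
import numpy as np

def aope(zc, dz, rho, zc_s, dz_s, rho_s, g=9.81):
    """Available overturn potential energy per unit mass (m^2 s^-2):
    raw-minus-sorted potential energy, normalized by the thickness-weighted
    mean density and the total patch thickness (a reading of Eq. (9))."""
    L_Q = dz.sum()
    rho0 = (rho * dz).sum() / L_Q            # thickness-weighted mean density
    pe_raw = (rho * zc * dz).sum()
    pe_sorted = (rho_s * zc_s * dz_s).sum()
    return g * (pe_raw - pe_sorted) / (rho0 * L_Q)

# Two equal-thickness layers with light water below heavy (an exact inversion);
# with equal thicknesses, the centered depths are unchanged by sorting.
zc = np.array([0.05, 0.15])                  # layer-center heights (m)
dz = np.array([0.10, 0.10])                  # layer thicknesses (m)
rho = np.array([1024.0, 1025.0])             # kg m^-3, unstable ordering
order = np.argsort(-rho)                     # densest goes deepest
E = aope(zc, dz, rho, zc, dz[order], rho[order])
```

For any genuine inversion the raw profile carries more potential energy than the sorted one, so the result is positive.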
A convenient feature of the “Thorpe displacements” is that they sum to zero over each turbulent patch:

⟨dn⟩ ≡ (1/LQ) ΣQ dn Δn = 0    (11)
[recall that m = m(n) such that ρ̂m(n) = ρn and Δ̂m(n) = Δn]. In other words, the average displacement of the sample layers, ⟨dn⟩, is zero, since the patch does not change depth because of resorting. This is useful for defining the limits of the patches: summing the displacements cumulatively over each upcast, the cumulative sums in nonturbulent regions are identically zero (within machine round-off), while within the turbulent patches they are not. Although there are cases in which one would be visually tempted to separate a patch into two or more, this criterion provides an objective measure (see Fig. 5).
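The zero-sum property gives a simple, objective patch detector. This sketch uses evenly spaced samples so that plain displacements can be summed; with uneven spacing the displacements would be thickness weighted as above.

```python
import numpy as np

def find_patches(z, rho, tol=1e-9):
    """Delimit turbulent patches as maximal runs where the cumulative sum of
    Thorpe displacements has not yet returned to zero."""
    order = np.argsort(-rho, kind="stable")       # densest deepest
    csum = np.cumsum(z - z[order])
    inside = np.abs(csum) > tol
    patches, start = [], None
    for i, flag in enumerate(inside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            patches.append((start, i + 1))        # closing sample included
            start = None
    if start is not None:                         # a patch reaching the top
        patches.append((start, len(z)))
    return patches

# Stable profile with one two-point inversion at samples 3-4 (values invented)
z = 0.1 * np.arange(10.0)
rho = 1025.0 - 0.01 * z
rho[3], rho[4] = rho[4], rho[3]
```

Here the cumulative sum departs from zero only across the inversion, isolating it as a single patch.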
To get from the AOPE (units are length^2 time^−2) to an energy dissipation rate requires a time scale τ. Following Scotti (2015), this is assumed to be inversely proportional to the background buoyancy frequency:

τ = α/N,    (12)
where α is a factor that might vary with the state of the patch. Indeed, Scotti (2015) did find that it varied in the simulations of Couette flow with Richardson numbers of 0.1 and smaller; but for values of 0.25 and above, typical of real ocean turbulent patches, it appears to remain roughly constant at about α ≈ 4 or so (discussed further below).
What is an appropriate value for N? For this, I appeal to the concept of an “equivalent linear stratification”: specifically, one for which Scotti’s (2015) Eq. (11) explicitly holds true, which is

AOPE = (1/2) NA^2 LT^2.    (13)
Here NA represents the equivalent linear stratification translated into an “equivalent buoyancy frequency,” and the subscript A denotes that it is derived from the directly estimated AOPE. In other words,

NA = (2 AOPE)^{1/2}/LT.    (14)
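Numerically this inversion is a one-liner; the round trip below simply builds an AOPE from an assumed N and LT and checks that Eq. (14) recovers it:

```python
import numpy as np

def n_equiv(aope, L_T):
    """Equivalent buoyancy frequency from AOPE = (1/2) N^2 LT^2."""
    return np.sqrt(2.0 * aope) / L_T

# Round trip: build an AOPE from an assumed N and LT, then invert Eq. (14)
N_true, L_T = 5e-3, 0.2                       # s^-1 and m; illustrative values
aope = 0.5 * N_true ** 2 * L_T ** 2
N_A = n_equiv(aope, L_T)                      # recovers N_true
```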
For quasi-steady shear-driven turbulence, for which the turbulent potential energy (TPE) is equal to the AOPE, the turbulent dissipation of TPE can then be estimated using Dillon’s estimate of LO/LT:

εP = Γ εK ≈ 0.64 Γ LT^2 NA^3.    (15)
These results are illustrated in Fig. 6 as color contours on the time–depth plane, as if this universally applied. Also shown is the “surface actively mixing depth,” determined as the bottom of the uppermost turbulent patch (black dots, as in Fig. 2). The estimated (patch mean) dissipation per kilogram (εK) is plotted as constant over each patch.
However, as shown by Scotti (2015), not all of the AOPE necessarily goes into TPE, which is the part that alters the background stratification. In fact, for his idealized transiently static density inversion simulations, only about 10% of the original AOPE ends up as an increase of the background potential energy (actually 11%–12%, depending on the aspect ratio of the inversion). The rest either radiates away as internal waves or is dissipated as part of εK. This suggests defining the fraction β of AOPE that goes into TPE:

TPE = β AOPE,    (16)
where β varies from 1 for steady shear-driven turbulence to order 0.1 for a transiently static inversion (or purely buoyancy-driven turbulence). Then the dissipation rate of TPE (εP) can be written as

εP = β AOPE/τ = β AOPE NA/α,    (17)

with the corresponding TKE dissipation rate

εK = εP/Γ = β AOPE NA/(αΓ).    (18)
For the quasi-steady shear-driven case, then, β/(2αΓ) ≈ 0.64; with β ≈ 1 and Γ ≈ 0.2, this yields an estimate for α of ~4. For his two transiently static inversion simulations, Scotti (2015) finds a value of α near 4 (or 5) as well, which is all we know about α: so we may as well assume it is ~4. For the latter two simulations, Scotti (2015) also finds values for the time integral of εK that imply a mixing efficiency Γ of about 3/4 for the case with an aspect ratio of 10, up to about 2 for the (isotropic) case with aspect ratio of 1. So, while we can probably assume α is ~4, it appears that both β and Γ are strong functions of the state or age of the patch, the former going roughly from 0.1 to 1.0 as the state goes from a “clean inversion” to shear driven, while the latter goes from O(1) down to 0.2. The variation in Γ, in particular, is worrisome for the common practice of basing estimates of Kρ from εK as in Eq. (6). In comparison, combining Eqs. (6) and (18),
Kρ = εP/NA^2 = [β/(2α)] NA LT^2 = (β/α) AOPE/NA = (β/α) LT (AOPE/2)^{1/2},    (19)

which no longer involves Γ [this was also pointed out by Scotti (2015)]. The middle two forms could use NLSF (defined below) instead of NA if desired; the first of these then avoids the need to evaluate AOPE, and the second avoids the use of LT. The last form on the right just evaluates NA from Eq. (14). If we can obtain a robust estimate of β and can assume α ≈ 4 (say), this approach could potentially provide some skill in estimating Kρ without the need to go through εK and Γ. Given the uncertainty in Γ, which varies from 0.2 for shear-driven turbulence to about 2 for the isotropic overturn simulated by Scotti (2015), estimating Kρ from εK might even be the less reliable approach, especially considering we have no way to determine the aspect ratio of the overturns. While the decrease in β implies overestimates of Kρ from “traditional” Thorpe scaling in convective overturns, the increase in Γ conversely implies underestimates from εK. For those of us (including myself!) who have struggled for years to obtain direct estimates of εK, this is somewhat daunting news.
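The algebraic equivalence of the Eq. (19) forms is easy to verify numerically; α = 4 and β = 1 below are the shear-driven defaults discussed in the text, and the input values are invented:

```python
import numpy as np

def k_rho(aope, L_T, alpha=4.0, beta=1.0):
    """Diffusivity via Eq. (19), bypassing eps_K and Gamma entirely."""
    N_A = np.sqrt(2.0 * aope) / L_T               # Eq. (14)
    k1 = beta / (2.0 * alpha) * N_A * L_T ** 2    # avoids AOPE
    k2 = beta / alpha * aope / N_A                # avoids L_T
    k3 = beta / alpha * L_T * np.sqrt(aope / 2.0) # avoids N_A
    assert np.allclose([k1, k2], k3)              # all three forms agree
    return k3

K = k_rho(aope=5e-7, L_T=0.2)                     # illustrative inputs
```

Which form is most convenient depends on which of AOPE, LT, and NA is best constrained for a given patch.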
5. Other estimates of N, and two measures of the state of the patch
The concept of an equivalent linear stratification was also used by Mater et al. (2015), but in a different way: they invert the Ellison length scale to estimate ∂ρ/∂z as ⟨ρ′^2⟩^{1/2}/LT, where ρ′ is the difference between the raw and sorted density at each depth. However, since the sorted ẑm here do not match the raw zn, estimates of ρ′ are not available. Another approach is to do least squares fits (LSF) of lines with the form

ρ̃(z) = ρ̂0 − m̂z    (20)
to the sorted density profiles [as Mater et al. (2015) did in their appendix B], where the tilde denotes the linear fit density and ρ̂0 and m̂ are constants determined by the fit to the sorted data (hence the carets). Note that, since density is decreasing with height, the slope ∂ρ̂/∂z is negative for a stable profile, which is why m̂ is defined here with a minus sign. By defining the perturbations ẑ′ ≡ ẑ − ⟨ẑ⟩ and ρ̂′ ≡ ρ̂ − ⟨ρ̂⟩ about the (thickness weighted) patch means, the LSF slope for a given turbulent patch is

m̂ = −⟨ρ̂′ẑ′⟩/⟨ẑ′^2⟩,    (21)
and the solution becomes

ρ̃(z) = ⟨ρ̂⟩ − m̂(z − ⟨ẑ⟩).    (22)
It so happens that this solution also has the property that the potential energy is the same as for the actual sorted profile:

⟨ρ̃ẑ⟩ = ⟨ρ̂ẑ⟩.    (23)
Given the definition of the angle brackets, the last form is the same as the last term in Eq. (9), but for the factor g/⟨ρ⟩. In any case, this slope (dimensions of ∂ρ/∂z) can be used to calculate another “equivalent linear” buoyancy frequency, NLSF (say). Given the connection to potential energy, it is interesting to compare this with NA (Fig. 7). Since both NA and NLSF nominally preserve the potential energy of the sorted profile, it is not surprising that they agree well with each other, even though they are computed in different ways (the first involving both AOPE and LT; the latter involving only the sorted profile): the mean value of NLSF is 2.4 × 10^−5 s^−1 (or 0.42%) larger than NA, and the standard deviation is 4.0 × 10^−4 s^−1, as compared with the mean value NA = 5.2 × 10^−3 s^−1. The scatter between them must indicate a noise level of some kind. It is worth remarking that Mater et al. (2015) found that the results using NLSF were very close to what they got using their Ellison-derived value. By implication, the results using NA should also be close. It is curious that the mean dissipation εK calculated with NLSF is about 6.8% larger than that using NA, which implies that the larger relative values of NLSF must correlate with larger Thorpe scales (the difference in εK is larger than N^3 alone would explain).
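The weighted fit and its energy-preserving property can be sketched as follows (the thickness weighting is this sketch's reading of the angle brackets; the profile values are invented):

```python
import numpy as np

def lsf_slope(zc, dz, rho):
    """Thickness-weighted least squares line rho_fit = rho0 - m*z.

    The weighted residuals are orthogonal to both a constant and z, so the
    fit preserves the thickness-weighted <rho*z> of the input profile.
    """
    w = dz / dz.sum()
    zbar, rbar = (w * zc).sum(), (w * rho).sum()
    m = -(w * (zc - zbar) * (rho - rbar)).sum() / (w * (zc - zbar) ** 2).sum()
    return rbar + m * zbar, m                 # (rho0, m)

# A stable, linear "sorted" profile on uneven layer centers (values invented)
zc = np.array([0.05, 0.20, 0.40, 0.45, 0.70])
dz = np.array([0.10, 0.15, 0.05, 0.20, 0.10])
rho = 1025.0 - 0.01 * zc
rho0, m = lsf_slope(zc, dz, rho)              # m should recover 0.01
```

For an exactly linear input the fit recovers the slope, and for any input the weighted ⟨ρ̃z⟩ of the fitted line matches that of the data.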
The same LSF procedure can be applied to the raw (uncareted) density profiles, and, as above, this also nominally preserves the same potential energy as the raw profile. For each turbulent patch, the resulting m is bounded by −m̂ ≤ m ≤ m̂: no reordering of the layers can produce a steeper slope, and the exactly inverted profile yields the negative bound. In fact, the degenerate case of a two-point inversion is always at that latter bound. For larger patches (say, six or more points thick), the values of the “m ratio” m/m̂ scatter broadly between +1 and −1 (see Fig. 8, top panel). Note also that there is not a strong difference in this scatter for the “active” versus “calm” times (it was stormy from 25 March to 1 April and calm after that). However, there is an interesting trend in the mean value of m/m̂ versus the number of samples (see Fig. 8, lower panel). The drop-off for fewer than six points provides another motivation to exclude those smaller patches.
Similarly, the ratio LT/LQ (the L ratio) has a maximum value that kinematically must correspond to an exact inversion. In the well-sampled limit, this maximum value can be found from the integral

LT^2 = (1/LQ) ∫_{−LQ/2}^{+LQ/2} (2z)^2 dz = LQ^2/3,    (24)
so max(LT/LQ) = 3^−1/2 (with finite sample sizes, the limit tends to be smaller). In contrast, as this ratio becomes small, the corresponding turbulence is in some sense “weak” relative to the scale of the patch, again indicating a progression from young to old, this time as the ratio decreases from 3^−1/2 to zero.
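The 3^−1/2 bound can be checked numerically by Thorpe-sorting an exactly inverted, densely sampled patch (the grid and densities are invented):

```python
import numpy as np

# An exactly inverted patch of size L_Q = 1, densely sampled
n = 10001
z = np.linspace(0.0, 1.0, n)
rho = 1024.0 + 0.01 * z                   # density increasing upward
order = np.argsort(-rho)                  # sorting exactly reverses the profile
d = z - z[order]                          # displacements: d(z) = 2z - 1
L_ratio = np.sqrt(np.mean(d ** 2)) / 1.0  # L_T / L_Q, approaching 3**-0.5
```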
Figure 9 shows a scatterplot of the m ratio (m/m̂) versus the L ratio LT/LQ. The correlation is strong for two seemingly unrelated measures. The red line denotes a quadratic fit:

m/m̂ = 1 − 6(LT/LQ)^2.    (25)
Why this relation would be quadratic is unclear, but the fit is hard to discard. And, given a quadratic relation along with Eq. (24), the factor 6 is not arbitrary but is demanded by the need to go from +1 to −1 over L-ratio values from 0 to 3−1/2. In fact, it was this close correlation of these two independent measures that led me to speculate that they both say something about the state or age of the individual patches.
Some measures similar to this L ratio have been used in the past. Stansfield et al. (2001) describe a PDF of the actual displacements in a patch scaled by LQ, mainly as a way to look at whether they “have included all stages in the average.” Johnson and Garrett (2004) examined the values of an “averaged L ratio” in comparison to values generated by random noise on a linear density profile, with somewhat inconclusive results. And Gargett and Garner (2008) defined a “ratio” of the upward displacements versus downward to identify false patches resulting from salinity spiking. However, no one (to my knowledge) has suggested the present usage.
Aside from the underresolved patches, an m ratio near −1 (and an L ratio near 3^−1/2) likely indicates a “young” patch, since an exact density inversion should be disrupted quickly by the developing turbulence. At this extreme, it matches our conceptual picture of a transiently static density inversion such as modeled by Scotti (2015). However, I would remark that, while it took some time to become turbulent in these idealized simulations, in the real ocean (with a background spectrum of internal waves and other motions) it would likely be turbulent from the start, as seen in observations of Kelvin–Helmholtz instabilities made by Seim and Gregg (1994). Conversely, values of the m ratio near +1 (and with very small L ratios) should indicate “old” patches, where the patch matches our conceptual picture of quasi-steady shear-driven turbulence (such as we might observe above the Equatorial Undercurrent, for example). So it seems plausible that the m ratio could be used to estimate each patch’s age or state.
Conceptually, the m ratio is a measure of the amount of potential energy available in the patch compared to the maximum possible, with +1 corresponding to a small fraction and −1 indicating the maximum. It therefore seems a likely candidate to indicate the relative importance of buoyancy forcing versus shear. Plausibly, it may provide a way to estimate an appropriate value of β: for example (say),

β ≈ 0.55 + 0.45(m/m̂),    (26)

which varies linearly from β ≈ 0.1 at m/m̂ = −1 (a clean inversion) to β ≈ 1 at m/m̂ = +1 (quasi-steady shear-driven turbulence).
6. Estimates of εP and Kρ: The traditional versus a speculative new approach
The time integral of εP is directly related to the net change in the “background potential energy,” so it makes sense to compare the traditional estimate of εP with the new version given in Eq. (17). The traditional version is just 0.2 times the εK shown in Fig. 6. The speculative new approach is to use Eq. (17) with α = 4 and β given by Eq. (26). While this is unlikely to indicate which is more accurate, it is of interest to see how much difference it might make (see Fig. 10). Indeed, turbulence being what it is, we expect both versions to fluctuate fairly randomly from one cast to the next in any case, no matter which is the more accurate. However, it is encouraging that the new approach appears to reduce some of the largest patches with the largest estimates, while many of the smaller-scale or large-scale but weaker estimates remain nearly the same; a pattern that roughly matches the trends in bias described by Mater et al. (2015), although this comparison is by no means quantitative. Also, the differences that “catch the eye” are for those patches touching the surface, where we should be most skeptical of this whole approach: not only can there be significant buoyancy forcing from surface cooling/heating, but there is definitely strong mechanical forcing from the Langmuir circulation generation mechanism. On the other hand, if this can be shown or made to work there, it should work practically anywhere (aside from double diffusion).
Since the diffusivity Kρ is a quantity of general interest, it is worthwhile to contrast the traditional estimate from Eq. (6) with Γ = 0.2 (i.e., multiplying the estimates of εK shown in Fig. 6 by 0.2/NA^2) to the speculative suggestion of using Eq. (19) with (again) α = 4 and β given by Eq. (26). As was seen for εP, the larger values and larger patches, especially near the surface, are somewhat reduced in the new estimate, while the weaker patches and deeper smaller patches are not so much (see Fig. 11).
Since directly resolving the dissipation or diffusion of density is so hard, it will be difficult to verify whether this new approach helps, aside from doing more DNS simulations. In addition, it is hard to imagine simulations that can truly reproduce a mixture of shear and overturns. Another possibility is to look at results from freely drifting WWs, where advection is less important and the net local change in potential energy before and after an event might be assessed. It would be especially useful if the WWs could simultaneously measure the current profiles too.
A final concern is resolving the partitioning between TKE and TPE and hence the variation of the mixing efficiency Γ. Can this be explicitly evaluated in further fully resolved overturning simulations, and can more “pseudocasts” be evaluated, as described in Scotti (2015)? This could potentially be helpful in understanding a much larger inventory of both historical and ongoing dissipation estimates using both the Thorpe-sorting and direct εK estimation approaches.
Working with irregularly spaced data is not hard, once the method has been worked out. In contrast, interpolating samples in a turbulent patch to a regular grid, especially on a finer scale, would introduce many fictitious samples closer to the mean density of the patch, since samples can oscillate between lightest and densest extremes (see Fig. 4). This would likely have a strong effect on the results. It is also computationally more efficient to skip the interpolation effort and use the actual number of points measured. (The MATLAB software code implementing this analysis, along with sample data, is available in the online supplemental materials).
The AOPE method using Thorpe sorting would appear to be more robust with respect to the poorly constrained estimates of the “background stratification” than the traditional Thorpe-scaling method, since the latter depends on N^3 while the former depends only linearly on the estimated N.
Both Thorpe-sorting methods lend themselves more directly to estimating εP (dissipation of turbulent potential energy) than εK (dissipation of turbulent kinetic energy), since the latter actually requires knowledge of the mixing efficiency Γ, which is poorly constrained. Also, εP is more directly related to actual changes in the background stratification and to the mixing of density (by diffusivity Kρ).
It is encouraging that two newly identified dimensionless parameters, m/m̂ (the m ratio) and LT/LQ (the L ratio), which are both plausibly related to the age of a turbulent mixing patch, may provide a route toward estimating the partitioning of the turbulent energy between potential and kinetic parts during the moment sampled by a particular cast. Thus there is hope that we might be able to improve historical estimates of mixing based on the Thorpe-sorting methods using these ratios.
Less encouraging is the observation (from Scotti 2015) that the value of the mixing efficiency, Γ ≡ εP/εK, is not well constrained, implying that using εK to infer εP or Kρ may turn out to be a bit misleading.
This work was supported by ONR Contract/Grant N00014-15-1-2580. I thank the MOD engineering group, and San Nguyen in particular (who accompanied me on FLIP and did most of the work), for getting the gear working and mounted on FLIP. I also thank the Air–Sea Interaction Laboratory for extensive help at sea and, in particular, Laurent Grare, who helped to deploy and rescue/redeploy the WW under stormy conditions. I am also grateful to those on board for many useful scientific discussions, and to Tom Golfinos and FLIP’s crew for making the expedition a total success. Last, I express my extreme gratitude to the three anonymous reviewers, who really helped to turn this work from a marginal comment into something that I hope is worth thinking about.
Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JTECH-D-18-0234.s1.