Abstract
Micropulse differential absorption lidars (MPD) for water vapor, temperature, and aerosol profiling have been developed, demonstrated, and are addressing the needs of the atmospheric science community for low-cost, ground-based, networkable instruments capable of long-term monitoring of the lower troposphere. The MPD instruments use a diode-laser-based (DLB) architecture that can easily be adapted for a wide range of applications. In this study, a DLB direct-detection Doppler lidar based on the current MPD architecture is modeled to better understand the efficacy of the instrument for vertical wind velocity measurements, with the long-term goal of incorporating these measurements into the current network of MPD instruments. The direct-detection Doppler lidar is based on a double-edge receiver that utilizes two Fabry–Pérot interferometers and a vertical velocity retrieval that requires the ancillary measurement of the backscatter ratio, which is the ratio of the total backscatter coefficient to the molecular backscatter coefficient. The modeling in this paper accounts for the major sources of error and indicates that the vertical velocity can be retrieved with an error of less than 0.56 m s⁻¹ below 4 km with a 150-m range resolution and an averaging time of 5 min.
Significance Statement
Monitoring the temperature, relative humidity, and winds in the lower atmosphere is important for improving weather forecasting, particularly for severe weather such as thunderstorms. Cost-effective micropulse differential absorption lidar (MPD) instrumentation for continuous temperature and humidity monitoring has been developed and demonstrated, and its effects on weather forecasting are currently being evaluated. The modeling study described in this paper examines the feasibility of using a similar cost-effective MPD instrument architecture for monitoring vertical wind velocity in the lower atmosphere. Modeling indicates that wind velocities can be measured with less than 0.56 m s⁻¹ accuracy and demonstrates the feasibility of adding vertical wind velocity measurements to the MPD instruments.
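As a rough illustration of the double-edge retrieval described in the abstract above, the Python sketch below converts the signals from the two Fabry–Pérot edge channels into a line-of-sight velocity. The wavelength and the linear response sensitivity are assumed values for illustration only, and the backscatter-ratio weighting that the actual retrieval requires to separate aerosol and molecular contributions is omitted.

```python
import numpy as np

WAVELENGTH = 770e-9  # m; an assumed operating wavelength, not taken from the paper

def doppler_velocity(n_edge1, n_edge2, sensitivity):
    """Estimate line-of-sight velocity from counts in two Fabry-Perot edge channels.

    The normalized channel difference is mapped to a Doppler shift through an
    assumed linear sensitivity (per Hz), then to velocity via v = lambda * dnu / 2.
    """
    response = (n_edge1 - n_edge2) / (n_edge1 + n_edge2)
    doppler_shift = response / sensitivity       # Hz
    return 0.5 * WAVELENGTH * doppler_shift      # m/s

# Example: photon counts in the two channels and a hypothetical sensitivity of 2e-9 per Hz
print(doppler_velocity(np.array([10500.0]), np.array([9500.0]), 2e-9))  # ~9.6 m/s
```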
Abstract
Observations made by weather radars play a central role in many aspects of meteorological research and forecasting. These applications commonly require that radar data be supplied on a Cartesian grid, necessitating a coordinate transformation and interpolation from the radar’s native spherical geometry using a process known as gridding. In this study, we introduce a variational gridding method and, through a series of theoretical and real data experiments, show that it outperforms existing methods in terms of data resolution, noise filtering, spatial continuity, and more. Known problems with existing gridding methods (Cressman weighted average and nearest neighbor/linear interpolation) are also underscored, suggesting the potential for substantial improvements in many applications involving gridded radar data, including operational forecasting, hydrological retrievals, and three-dimensional wind retrievals.
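For context on the conventional baselines the abstract mentions, the sketch below implements the Cressman weighted-average gridding scheme; it is not the paper's variational method, whose formulation is not given here, and the grid spacing, radius of influence, and synthetic observations are illustrative.

```python
import numpy as np

def cressman_grid(x_obs, y_obs, values, x_grid, y_grid, radius):
    """Cressman weighted-average gridding of scattered radar observations.

    Each Cartesian grid point is a weighted mean of observations within
    `radius`, with weights w = (R^2 - r^2) / (R^2 + r^2).
    """
    gridded = np.full((len(y_grid), len(x_grid)), np.nan)
    for j, yg in enumerate(y_grid):
        for i, xg in enumerate(x_grid):
            r2 = (x_obs - xg) ** 2 + (y_obs - yg) ** 2
            inside = r2 < radius ** 2
            if inside.any():
                w = (radius ** 2 - r2[inside]) / (radius ** 2 + r2[inside])
                gridded[j, i] = np.sum(w * values[inside]) / np.sum(w)
    return gridded

# Example: 200 scattered reflectivity samples onto a 1-km Cartesian grid
rng = np.random.default_rng(0)
x_o, y_o = rng.uniform(0, 50e3, 200), rng.uniform(0, 50e3, 200)
z_o = 30 + 5 * np.sin(x_o / 10e3)
grid = cressman_grid(x_o, y_o, z_o, np.arange(0, 50e3, 1e3), np.arange(0, 50e3, 1e3), 3e3)
```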
Abstract
The latest established generation of weather radars provides polarimetric measurements of a wide variety of meteorological and nonmeteorological targets. While the classification of different precipitation types based on polarimetric data has been studied extensively, nonmeteorological targets have garnered relatively less attention beyond efforts to detect them for removal from meteorological products. In this paper we present a supervised learning classification system developed at the Finnish Meteorological Institute (FMI) that uses Bayesian inference with empirical probability density distributions to assign individual range gate samples into 7 meteorological and 12 nonmeteorological classes, belonging to five top-level categories: hydrometeors, terrain, zoogenic, anthropogenic, and immaterial. We demonstrate how the accuracy of the class probability estimates provided by a basic naive Bayes classifier can be further improved by introducing synthetic channels created through limited neighborhood filtering, by properly managing partial moment nonresponse, and by considering spatial correlation of class membership of adjacent range gates. The choice of Bayesian classification provides well-substantiated quality estimates for all meteorological products, a feature that is increasingly requested by users of weather radar products. The availability of comprehensive, fine-grained classification of nonmeteorological targets also enables a large array of emerging applications utilizing nonprecipitation echo types and demonstrates the need to move from a single, universal quality metric of radar observations to one that depends on the application, the measured target type, and the specificity of the customers’ requirements.
Significance Statement
In addition to meteorological echoes, weather radars observe a wide variety of nonmeteorological phenomena, including birds, insects, and human-made objects such as ships and aircraft. Conventionally, these data have been rejected as undesirable disturbance, but lately their value for applications such as aeroecological monitoring of bird and insect migration has been recognized. The utilization of these data, however, has been hampered by the lack of a comprehensive classification of nonmeteorological echoes. In this paper we present a comprehensive, fine-grained, probabilistic classifier for all common types of nonmeteorological echoes, which enables the implementation of a wide range of novel weather radar applications.
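A minimal sketch of the basic naive Bayes stage described in the abstract, with class-conditional likelihoods taken from empirical histograms. The paper's refinements (synthetic neighborhood channels, handling of partial moment nonresponse, and spatial correlation of adjacent gates) are not reproduced here, and the array shapes and bin counts are assumptions.

```python
import numpy as np

def train_empirical_pdfs(features, labels, n_classes, bins=64):
    """Histogram-based class-conditional PDFs for each polarimetric variable.

    features: (n_samples, n_vars); labels: (n_samples,) integer class IDs.
    Returns per-class, per-variable normalized histograms, bin edges, and priors.
    """
    n_vars = features.shape[1]
    edges = [np.histogram_bin_edges(features[:, v], bins=bins) for v in range(n_vars)]
    pdfs = np.zeros((n_classes, n_vars, bins))
    for c in range(n_classes):
        for v in range(n_vars):
            hist, _ = np.histogram(features[labels == c, v], bins=edges[v], density=True)
            pdfs[c, v] = hist + 1e-12  # avoid zero likelihoods
    priors = np.bincount(labels, minlength=n_classes) / len(labels)
    return pdfs, edges, priors

def classify(sample, pdfs, edges, priors):
    """Naive Bayes posterior over classes for one range-gate sample."""
    log_post = np.log(priors).copy()
    for v, x in enumerate(sample):
        idx = np.clip(np.searchsorted(edges[v], x) - 1, 0, pdfs.shape[2] - 1)
        log_post += np.log(pdfs[:, v, idx])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()
```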
Abstract
Accurate vertical velocity retrieval from dual-Doppler analysis (DDA) is a long-standing problem of radar meteorology. Typical radar scanning strategies poorly observe the vertical component of motion, leading to large uncertainty in vertical velocity estimates. Using a vertical vorticity equation constraint in addition to a mass conservation constraint in DDA has shown promise in improving vertical velocity retrievals. However, observing system simulation experiments (OSSEs) suggest this technique requires rapid radar volume scans to realize the improvements due to the vorticity tendency term in the vertical vorticity constraint. Here, the vertical vorticity constraint DDA is tested with real, rapid-scan radar data to validate prior OSSE results. Generally, the vertical vorticity constraint DDA produced more accurate vertical velocities than DDAs that did not use the constraint. When the time between volume scans was greater than 30 s, the vertical velocity accuracy was significantly affected by the vorticity tendency estimation method. A technique that uses advection correction on provisional DDA wind fields to shorten the discretization interval for the vorticity tendency calculation improved the vertical velocity retrievals for longer times between volume scans. The skill of these DDAs was similar to those using a shorter time between volume scans. These improvements were due to increased accuracy of the vertical vorticity tendency obtained with the advection correction technique. The real radar data tests also revealed that the vertical vorticity constraint DDAs are more forgiving of radar data errors. These results suggest that vertical vorticity constraint DDA with rapid-scan radars should be prioritized for kinematic analyses.
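To make concrete why vertical velocity is poorly constrained without the additional vorticity term, the sketch below shows the standard mass-continuity step: horizontal divergence from the dual-Doppler horizontal winds is integrated upward to diagnose w, so divergence errors accumulate with height. The grid layout and base-state density are assumptions, and the vorticity-constraint cost-function term itself is not shown.

```python
import numpy as np

def w_from_continuity(u, v, rho, dx, dy, dz):
    """Diagnose w by upward integration of anelastic mass continuity.

    u, v are horizontal winds on a regular (z, y, x) grid, rho is the
    base-state density broadcast to the same shape, and w = 0 is assumed
    at the lowest level. Errors in the horizontal divergence accumulate
    upward, which is the weakness the vorticity constraint targets.
    """
    dudx = np.gradient(u, dx, axis=2)
    dvdy = np.gradient(v, dy, axis=1)
    horiz_mass_div = rho * (dudx + dvdy)
    rho_w = -np.cumsum(horiz_mass_div, axis=0) * dz   # crude upward integration
    return rho_w / rho

# Example: a toy 20 x 50 x 50 grid with 500-m spacing and constant density
u = np.random.default_rng(0).normal(0, 5, (20, 50, 50))
v = np.random.default_rng(1).normal(0, 5, (20, 50, 50))
w = w_from_continuity(u, v, np.ones_like(u), 500.0, 500.0, 500.0)
```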
Abstract
Direct measurement of the forces acting within the rough bed layer has been limited in previous studies, which relied on spatially averaged shear force estimates. A highly sensitive force transducer fitted with a target sphere was used to measure and record the instantaneous three-dimensional forces on sediment at incipient motion. In the current study, a laser Doppler anemometer, an ultrasonic displacement meter, and a force transducer, accompanied by video recordings, were used to experimentally investigate the incipient motion of sediment. The developed experimental setup has the potential to test and refine fundamental classical hypotheses regarding incipient sediment motion. Experiments conducted in a large recirculating flume verified that the force transducer detects instantaneous forces at incipient motion under various hydrodynamic conditions. Depth time series and instantaneous horizontal, vertical, and lateral forces are presented for dam-break and tidal breaking bores. Evidence suggests that the vertical uplift force plays an important role in destabilizing particles and initiating their motion. A sudden decrease in horizontal force was observed in the tidal breaking bore due to flow reversal, whereas a rapid rise was observed at the initial impact of the dam-break bore. Bore velocity appears to have a larger effect on the dam-break force than bore height. Furthermore, the lateral force has the least influence during the tidal breaking bore, while sediment particles are subjected to additional lateral force during the dam-break bore.
Abstract
High-frequency radars (HFR) remotely measure ocean surface currents based on the Doppler shift of electromagnetic waves backscattered by surface gravity waves whose wavelength is one-half of the electromagnetic wavelength, called Bragg waves. The phase velocity of these Bragg waves is affected by their interactions with the mean Eulerian currents and with all of the other waves present at the sea surface. Therefore, HFRs should measure a quantity related to the Stokes drift in addition to mean Eulerian currents. However, different expressions have been proposed for this quantity: the filtered surface Stokes drift, one-half of the surface Stokes drift, and the weighted depth-averaged Stokes drift. We evaluate these quantities using directional wave spectra measured by bottom-mounted acoustic wave and current (AWAC) profilers in the lower Saint Lawrence Estuary, Quebec, Canada, deployed in an area covered by four HFRs: two Wellen radars (WERA) and two coastal ocean dynamics applications radars (CODAR). Since HFRs measure the weighted depth-averaged Eulerian currents, we extrapolate the Eulerian currents measured by the AWACs to the sea surface assuming linear Ekman dynamics to perform the weighted depth averaging. During summer 2013, when winds are weak, correlations between the AWAC and HFR currents are stronger (0.93) than during winter 2016/17 (0.42–0.62), when winds are high. After adding the different wave-induced quantities to the Eulerian currents measured by the AWACs, however, correlations during winter 2016/17 increase significantly. Among the different expressions tested, the highest correlations (0.80–0.96) are obtained using one-half of the surface Stokes drift, suggesting that HFRs measure the latter in addition to mean Eulerian currents.
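As a worked illustration of the wave-induced quantity the abstract finds most consistent with the HFR measurements, the sketch below evaluates one-half of the surface Stokes drift from a one-dimensional frequency spectrum using the deep-water relation u_s(0) = (16π³/g) ∫ f³ E(f) df. A real evaluation would project the directional spectrum measured by the AWACs onto the radar look direction; the spectrum used here is a crude illustrative stand-in.

```python
import numpy as np

G = 9.81  # m s^-2

def half_surface_stokes_drift(freq, spectrum):
    """One-half of the deep-water surface Stokes drift from a 1-D frequency spectrum.

    u_s(0) = (16 pi^3 / g) * integral( f^3 E(f) df ), evaluated with a simple
    rectangle rule on a uniform frequency grid; directionality is ignored.
    """
    df = freq[1] - freq[0]
    us0 = (16.0 * np.pi ** 3 / G) * np.sum(freq ** 3 * spectrum) * df
    return 0.5 * us0

# Example with a rough Pierson-Moskowitz-like spectrum (illustrative only)
f = np.linspace(0.05, 0.5, 400)
E = 8.1e-3 * G ** 2 * (2 * np.pi) ** -4 * f ** -5 * np.exp(-0.74 * (0.1 / f) ** 4)
print(half_surface_stokes_drift(f, E))
```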
Abstract
Reconstructing tidal signals is indispensable for verifying altimetry products, forecasting water levels, and evaluating long-term trends. Uncertainties in the estimated tidal parameters must be carefully assessed to adequately select the relevant tidal constituents and evaluate the accuracy of the reconstructed water levels. Customary harmonic analysis uses ordinary least squares (OLS) regression for its simplicity. However, OLS may lead to incorrect estimates of the regression coefficient uncertainty because it neglects the residual autocorrelation. This study introduces two residual resampling schemes (moving-block and semiparametric bootstraps) for estimating the variability of tidal regression parameters and shows that they are powerful methods for assessing the effects of regression errors with nontrivial autocorrelation structures. A Monte Carlo experiment compares their performance to four analytical procedures selected from those provided by the RT_Tide, UTide, and NS_Tide packages and the robustfit.m MATLAB function. In the Monte Carlo experiment, an iteratively reweighted least squares (IRLS) regression is used to estimate the tidal parameters for hourly simulations of one-dimensional water levels. Generally, robustfit.m and the considered RT_Tide method overestimate the tidal amplitude variability, while the selected UTide and NS_Tide approaches underestimate it. After some substantial methodological corrections, the selected NS_Tide method shows adequate performance. As a result, estimating the regression variance–covariance with the considered RT_Tide, UTide, and NS_Tide methods may lead to the erroneous selection of constituents and underestimation of water level uncertainty, compromising the validity of their results in some applications.
Significance Statement
At many locations, the production of reliable water level predictions for marine navigation, emergency response, and adaptation to extreme weather relies on the precise modeling of tides. However, the complicated interaction between tides, weather, and other climatological processes may generate large uncertainties in tidal predictions. In this study, we investigate how different statistical methods may lead to different quantifications of tidal model uncertainty when using data with completely known properties (e.g., knowing the tidal signal, as well as the amount and structure of noise). The main finding is that the most commonly used statistical methods may incorrectly estimate the uncertainty in tidal parameters and predictions. This inconsistency is due to some specific simplifying assumptions underlying the analysis and may be reduced using statistical techniques based on data resampling.
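A minimal sketch of the two ingredients discussed above: an ordinary least squares harmonic fit (the paper itself uses IRLS) and a moving-block bootstrap of the residuals that propagates autocorrelated noise into the parameter uncertainties. The constituent frequencies, block length, and synthetic record are illustrative choices.

```python
import numpy as np

def harmonic_design(t_hours, freqs_cph):
    """Design matrix [1, cos, sin, ...] for the given constituent frequencies."""
    cols = [np.ones_like(t_hours)]
    for f in freqs_cph:
        cols += [np.cos(2 * np.pi * f * t_hours), np.sin(2 * np.pi * f * t_hours)]
    return np.column_stack(cols)

def moving_block_bootstrap(t, levels, freqs, block_len=48, n_boot=500, seed=0):
    """OLS tidal fit plus a moving-block bootstrap of the residuals.

    Residual blocks of length `block_len` hours are resampled with replacement,
    added back to the fitted signal, and the regression is repeated, preserving
    short-range residual autocorrelation that plain OLS standard errors ignore.
    """
    rng = np.random.default_rng(seed)
    X = harmonic_design(t, freqs)
    beta, *_ = np.linalg.lstsq(X, levels, rcond=None)
    fitted = X @ beta
    resid = levels - fitted
    n = len(levels)
    starts = np.arange(n - block_len + 1)
    betas = np.empty((n_boot, len(beta)))
    for b in range(n_boot):
        picks = rng.choice(starts, size=int(np.ceil(n / block_len)))
        boot_resid = np.concatenate([resid[s:s + block_len] for s in picks])[:n]
        betas[b], *_ = np.linalg.lstsq(X, fitted + boot_resid, rcond=None)
    return beta, betas.std(axis=0)   # point estimates and bootstrap standard errors

# Example: 60 days of hourly levels with M2- and S2-like lines plus noise
t = np.arange(24 * 60, dtype=float)
eta = 1.0 * np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)
eta += np.random.default_rng(1).normal(0, 0.1, t.size)
coef, se = moving_block_bootstrap(t, eta, [1 / 12.42, 1 / 12.0])
```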
Abstract
Horizontal velocity gradients of a flow field and the related kinematic properties (KPs) of divergence, vorticity, and strain rate can be estimated from dense drifter deployments; for example, the spatiotemporal average divergence (and other KPs) over a triangular area defined by three drifters and over a given time interval can be computed from the triangle's initial and final areas. Unfortunately, this computation can be subject to large errors, especially when the triangle shape is far from equilateral. Therefore, samples with small aspect ratios are generally discarded. Here we derive thresholds on two shape metrics that optimize the balance between retention of good and removal of bad divergence estimates. The primary tool is a high-resolution regional ocean model simulation, in which a baseline for the average divergence can be established so that actual errors are available. A value of 0.2 for the scaled aspect ratio Λ and a value of 0.86π for the largest interior angle θ are found to be equally effective thresholds, especially at scales of 5 km and below. While discarding samples with low Λ or high θ values necessarily biases the distribution of divergence estimates slightly toward positive values, this bias is small compared to (and in the opposite direction of) the Lagrangian sampling bias due to drifters preferentially sampling convergence regions. Errors due to position uncertainty are suppressed by the shape-based subsampling. The subsampling also improves the identification of areas of extreme divergence or convergence. An application to an observational dataset demonstrates that these model-derived thresholds can be effectively used on actual drifter data.
Significance Statement
Divergence in the ocean indicates how fast floating objects spread apart, while convergence (negative divergence) captures how fast they accumulate. Measuring divergence in the ocean, however, remains challenging. One method is to estimate divergence from the trajectories of drifting buoys. This study provides guidance on the circumstances under which these estimates should be discarded because they are too likely to have large errors. The criteria proposed here are less stringent than some of the ad hoc criteria previously used, allowing users to retain more of their estimates. We also consider how position uncertainty affects the reliability of the divergence estimates. An observational dataset collected in the Mediterranean is used to illustrate an application of these reliability criteria.
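A hedged sketch of the basic area-based estimator and the largest-interior-angle screening discussed above. The logarithmic area formula is one common form of the estimator and may differ from the paper's exact expression, and the scaled-aspect-ratio metric Λ is not reproduced because its definition is not given in the abstract.

```python
import numpy as np

def triangle_area(p):
    """Area of a triangle with vertices p of shape (3, 2), via the shoelace formula."""
    (x0, y0), (x1, y1), (x2, y2) = p
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def largest_interior_angle(p):
    """Largest interior angle (radians), from the law of cosines."""
    a = np.linalg.norm(p[1] - p[2])
    b = np.linalg.norm(p[0] - p[2])
    c = np.linalg.norm(p[0] - p[1])
    return max(np.arccos((b**2 + c**2 - a**2) / (2 * b * c)),
               np.arccos((a**2 + c**2 - b**2) / (2 * a * c)),
               np.arccos((a**2 + b**2 - c**2) / (2 * a * b)))

def triangle_divergence(p_start, p_end, dt, theta_max=0.86 * np.pi):
    """Average divergence over dt from the change in drifter-triangle area.

    p_start, p_end: (3, 2) drifter positions in meters at the start and end of
    the interval. Samples whose triangle is too degenerate (largest interior
    angle above theta_max) are rejected by returning NaN.
    """
    if max(largest_interior_angle(p_start), largest_interior_angle(p_end)) > theta_max:
        return np.nan
    return np.log(triangle_area(p_end) / triangle_area(p_start)) / dt

# Example: a triangle expanding uniformly by 1% over 30 minutes
p0 = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 900.0]])
print(triangle_divergence(p0, 1.01 * p0, 1800.0))  # ~1.1e-5 s^-1
```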
Abstract
The ocean mixed layer model (OMLM) is improved using large-eddy simulation (LES) and the inverse estimation method. A comparison of OMLM (Noh model) and LES results reveals that underestimation of the turbulent kinetic energy (TKE) flux in the OMLM causes a negative bias of the mixed layer depth (MLD) during convection, when the wind stress is weak or the latitude is high. It is further found that the entrainment layer thickness is underestimated. The effects of alternative parameterization approaches in the OMLM, such as nonlocal mixing, length scales, Prandtl number, and TKE flux, are examined with the aim of reducing the bias. Simultaneous optimizations of empirical constants in the various versions of the Noh model with different parameterization options are then carried out via an iterative Green's function approach with LES data as constraining data. An improved OMLM is obtained, which reflects various new features, including the enhanced TKE flux, and the new model is found to improve the performance in all cases, namely, wind-mixing, surface heating, and surface cooling cases. The effect of the OMLM grid resolution on the optimal empirical constants is also investigated.
Significance Statement
This work illustrates a novel approach to improving the parameterization of vertical mixing in the upper ocean, which plays an important role in climate and ocean models. The approach uses data from realistic turbulence simulations, called large-eddy simulations, as proxy observations of upper-ocean turbulence to analyze the parameterization, together with a statistical method, called inverse estimation, to obtain optimized values of the empirical constants used in the parameterization. The same approach can be applied to improve other turbulence parameterizations, and the new vertical mixing parameterization can be applied to improve climate and ocean models.
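A minimal, hedged sketch of one iteration of the Green's-function-style optimization described above: each empirical constant is perturbed in turn to build a sensitivity matrix, and a least-squares step maps the model-minus-LES misfit onto adjustments of the constants. The function names, stand-in model, and step size are illustrative; the actual procedure optimizes the Noh-model constants against LES mixed layer diagnostics.

```python
import numpy as np

def greens_function_update(run_model, constants, les_target, rel_step=0.1):
    """One iteration of a Green's-function-style parameter optimization.

    run_model(constants) -> 1-D array of model diagnostics (e.g., MLD vs time);
    les_target           -> the same diagnostics taken from LES.
    Each constant is perturbed to build a sensitivity (Jacobian) matrix, and a
    least-squares step maps the misfit back onto constant adjustments.
    """
    constants = np.asarray(constants, dtype=float)
    base = run_model(constants)
    misfit = les_target - base
    J = np.empty((base.size, constants.size))
    for i in range(constants.size):
        perturbed = constants.copy()
        dp = rel_step * constants[i]
        perturbed[i] += dp
        J[:, i] = (run_model(perturbed) - base) / dp
    delta, *_ = np.linalg.lstsq(J, misfit, rcond=None)
    return constants + delta

# Toy usage: a stand-in "model" whose output depends linearly on two constants
toy = lambda c: np.array([2.0 * c[0] + c[1], c[0] - 3.0 * c[1], c[1]])
print(greens_function_update(toy, [1.0, 1.0], toy([1.4, 0.8])))  # -> [1.4, 0.8]
```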
Abstract
The static and dynamic performance of the RBRargo³ is investigated using a combination of laboratory-based and in situ datasets from floats deployed as part of an Argo pilot program. Temperature and pressure measurements compare well to co-located reference data acquired from shipboard CTDs. The static accuracy of salinity measurements is significantly improved using 1) a time lag for temperature, 2) a quadratic pressure dependence, and 3) a unit-based calibration of each RBRargo³ over its full pressure range. Long-term deployments show no significant drift in the RBRargo³ accuracy. The dynamic response of the RBRargo³ demonstrates the presence of two different adjustment time scales: a long-term adjustment O(120) s, driven by the temperature difference between the interior of the conductivity cell and the water, and a short-term adjustment O(5–10) s, associated with the initial exchange of heat between the water and the inner ceramic. Corrections for these effects, including dependence on profiling speed, are developed.
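As a hedged illustration of the kind of dynamic correction described above, the sketch below inverts a single first-order exponential lag to recover the ambient temperature from a lagged cell-interior record. The published corrections combine two time scales (roughly 120 s and 5–10 s) and a profiling-speed dependence, which this single-time-scale example does not capture; tau and the sampling interval are placeholders.

```python
import numpy as np

def correct_thermal_lag(t_cell, dt, tau):
    """Invert a first-order lag dT_cell/dt = (T_ambient - T_cell) / tau.

    t_cell: temperature record inside the conductivity cell (deg C),
    dt: sampling interval (s), tau: assumed adjustment time scale (s).
    Returns the estimated ambient temperature T_cell + tau * dT_cell/dt.
    """
    return t_cell + tau * np.gradient(t_cell, dt)

# Example: a step in ambient temperature seen through a 120-s lag, sampled at 1 Hz
t = np.arange(0, 600.0)
ambient = np.where(t < 100, 10.0, 12.0)
cell = 10.0 + (ambient - 10.0) * (1 - np.exp(-np.clip(t - 100, 0, None) / 120.0))
recovered = correct_thermal_lag(cell, 1.0, 120.0)  # approaches the 12.0 step
```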